Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Jesus WAS a prophet. Jesus was a healer. Jesus IS the Messiah.
Ask AI to read t…
ytc_UgwrEN8zd…
If you ask a man why he’s afraid of AI he’ll likely say “it could take my job.” …
ytc_UgxwkfCQn…
We haven't solved word hunger, nor cancer... but who cares.. we got ourselves a …
ytc_UgxhjtKKq…
Business models nowadays are like: Let's slap an AI logo on it and make it 25% m…
ytc_UgzJJdnLc…
Another of thems who think they know more then edu.-trained-licensed to practice…
ytc_UgyYim_e5…
Just like calculators shouldn't replace math teachers, AI art shouldn't replace …
ytc_UgyX9-IT5…
yea probably. Google translate completely destroyed the human translation indust…
ytc_UgxxvFVjf…
I think it would be good for context if you included some information about how …
ytc_UgxaHw3hb…
Comment
Great video, and a lot to think about when it comes to self driving cars. But the problem here is that the scenario in the video will never happen. Not saying that something won't randomly come into the road while a car is driving, but that the car will never be in a situation where it has to decide whether to crash into something or someone. It has breaks. I know the video said that in this situation, in won't be able to break in time, but again, that will never be the case. The car has sensors that are always on and always analyzing the road and cars around it. It will not be close enough to a truck with an unstable load that breaking in time wouldn't be an option. And the second that load comes undone and becomes a potential hazard, it will have already started slowing down.
It's an interesting concept to think about, but I feel like anytime someone comes up with a hypothetical, they forget that these cars don't have the ability to be "surprised" like a human driver would. It doesn't look around and think "everything seems alright so far, maybe I can relax now".
Source: youtube
Topic: AI Harm Incident
Posted: 2015-12-08T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UghSiRcVXA-3FHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugg36gd_wQOCXHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UggzSEiGsQNLKngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghidMHZsCybB3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjzNTXzuzIxOngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UghfmsovrnUJPXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjQy7gtc5pA_XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugio_pXgICTxCXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggKQCpjXBYZKXgCoAEC","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgggitcG_CbrUXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
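The raw response above is a JSON array in which each object carries a comment `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). As a minimal Python sketch of how such a batch response could be parsed and keyed by comment ID for the look-up view: the `index_codings` helper and the `coded_at` timestamp handling are illustrative assumptions, not the tool's actual implementation.

```python
import json
from datetime import datetime, timezone

# One record in the shape of the raw LLM batch response shown above.
RAW_RESPONSE = """
[
  {"id": "ytc_UghSiRcVXA-3FHgCoAEC", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and key each coding by its comment ID.

    Hypothetical helper: the field names match the JSON above, but the
    timestamping is an assumption about how `Coded at` gets populated.
    """
    codings = {}
    for record in json.loads(raw):
        record = dict(record)            # copy so the pop below is local
        comment_id = record.pop("id")
        record["coded_at"] = datetime.now(timezone.utc).isoformat()
        codings[comment_id] = record
    return codings

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UghSiRcVXA-3FHgCoAEC"]["emotion"])  # prints "indifference"
```

Keying the table by comment ID is what makes the "look up by comment ID" view above a constant-time dictionary access rather than a scan over the batch.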