Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great video, and a lot to think about when it comes to self driving cars. But the problem here is that the scenario in the video will never happen. Not saying that something won't randomly come into the road while a car is driving, but that the car will never be in a situation where it has to decide whether to crash into something or someone. It has breaks. I know the video said that in this situation, in won't be able to break in time, but again, that will never be the case. The car has sensors that are always on and always analyzing the road and cars around it. It will not be close enough to a truck with an unstable load that breaking in time wouldn't be an option. And the second that load comes undone and becomes a potential hazard, it will have already started slowing down. It's an interesting concept to think about, but I feel like anytime someone comes up with a hypothetical, they forget that these cars don't have the ability to be "surprised" like a human driver would. It doesn't look around and think "everything seems alright so far, maybe I can relax now".
youtube AI Harm Incident 2015-12-08T19:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UghSiRcVXA-3FHgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugg36gd_wQOCXHgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UggzSEiGsQNLKngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UghidMHZsCybB3gCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjzNTXzuzIxOngCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UghfmsovrnUJPXgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjQy7gtc5pA_XgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugio_pXgICTxCXgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UggKQCpjXBYZKXgCoAEC", "responsibility": "developer", "reasoning": "contractualist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgggitcG_CbrUXgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
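To inspect the coding for one comment, the raw response can be parsed as a JSON array and indexed by comment id. A minimal sketch in Python, assuming the raw response is valid JSON of the shape shown above (the two entries included here are copied from this page; nothing else about the tool's API is assumed):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment
# (truncated here to two of the ten entries shown on this page).
raw_response = '''[
  {"id": "ytc_UghSiRcVXA-3FHgCoAEC", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UggKQCpjXBYZKXgCoAEC", "responsibility": "developer",
   "reasoning": "contractualist", "policy": "liability", "emotion": "approval"}
]'''

# Index the codings by comment id for direct lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Print the four coded dimensions for a single comment.
coding = codings["ytc_UghSiRcVXA-3FHgCoAEC"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```

This reproduces the Dimension/Value table above for the selected comment; any id from the array can be substituted in the lookup.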