Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
IF the autopilot fully disengages when you do an action like speed up and etc. Then i wouldnt blame the auto pilot at all. If it DOESNT disengage and SHOULD work normally just speed up then yes there is 100% fault on the auto. Question is Would he have crashed if he didnt do any input and let it on autopilot in this exact same situation? I would love to see a test without a person in the car obviously and the exact same situation EXCEPT the speeding up if it crashes Tesla is 100% is guilty if it doesnt crash its kinda his fault?? But I think the automatic system should always be ON at all times to break when it detects something. It should ALWAYS be on. even if autopilot is off. I will add I think you should always pay some attention to the road even on auto pilot I would NEVER trust it 100% no matter how its perfect because You can break and save yourself if something happens as long as you pay attention. But thats just me.
youtube AI Harm Incident 2025-08-15T18:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw3SaUrZX2yOzQ_6Oh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxVHhzoJA5ZGTRNuXB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgysDxWZFwMGJvuStPt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwVm3rQ3D6BcwQpuwx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyfVw9u8F179VQgM8J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxv1KqFmv4nmdmGM5F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxVNOUcSMXW2HAejnZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyMPT61al517ReEEip4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxQ4DmOOAHfyLa3tvR4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyK1M9lzunLIii4XUd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]
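The raw response above is a JSON array in which each object carries the same four coding dimensions shown in the table (responsibility, reasoning, policy, emotion) keyed to a comment id. A minimal sketch of parsing and tallying such a response, assuming the model output is available as a string (`raw_response` below is a hypothetical variable, shortened here to two records copied from the array above):

```python
import json
from collections import Counter

# Hypothetical: the raw LLM output as a string; in the real pipeline this
# would be the full ten-record array shown above.
raw_response = (
    '[{"id":"ytc_Ugw3SaUrZX2yOzQ_6Oh4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgxVHhzoJA5ZGTRNuXB4AaABAg","responsibility":"user",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)

codes = json.loads(raw_response)

# Tally responsibility attributions across all coded comments.
tally = Counter(c["responsibility"] for c in codes)
print(dict(tally))  # e.g. {'none': 1, 'user': 1}
```

The same pattern extends to the other three dimensions by swapping the key passed to `Counter`.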