Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How about when the self-driving car encounters something it wasn't programmed for in a context not requiring quick reflexes? A burst water main is gurgling onto the street and the computer thinks the hump of water is a living creature and refuses to run it over. Or a small mudslide is covering the street and the computer thinks it's a barrier of some kind. Or a banner over the street has fallen down and the computer stops because it can't tell the difference between concrete and fabric. A human driver would just go through slowly, or pull onto the sidewalk to get around. The self-driving cars might just stop, and then quickly clog the street so the first cars can't back out. Are we assuming the computers will be 100% competent at dealing with every scenario from the get-go? It's one thing to say humans suck at driving compared to the computers, it's another thing to say humans always suck in every scenario so no more steering wheel ever.
reddit · AI Harm Incident · 1475487464.0 · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_d8b5jf9", "responsibility": "none",       "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_d8aswd1", "responsibility": "user",       "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_d8b71e9", "responsibility": "user",       "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_d8bxcey", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",   "emotion": "mixed"},
  {"id": "rdc_d8aybz0", "responsibility": "government", "reasoning": "mixed",            "policy": "ban",       "emotion": "outrage"}
]
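The raw response codes five comments in one batch; the coding result shown above appears to be the entry whose id matches this comment (rdc_d8bxcey), since every dimension value agrees. A minimal sketch of that extraction step, assuming the raw response is valid JSON as shown (the helper name `coding_for` is hypothetical, not part of the tool):

```python
import json

# Abbreviated copy of the batched raw response above (two of the five records).
RAW_RESPONSE = '''[
  {"id": "rdc_d8b5jf9", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_d8bxcey", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "mixed"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment from a batched LLM response."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}  # index the batch by comment id
    return by_id[comment_id]

result = coding_for(RAW_RESPONSE, "rdc_d8bxcey")
print(result["responsibility"], result["policy"], result["emotion"])
# ai_itself unclear mixed
```

If the model returns malformed JSON or omits an id, `json.loads` raises `JSONDecodeError` and the dict lookup raises `KeyError`; a real pipeline would catch both and flag the comment for recoding.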