Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This car was under test. The automatic safety systems were deliberately disabled because they were too unreliable, so safety was relying entirely on the safety operator. This wasn't someone using a tried and tested autonomous system; this was someone whose sole job it was to be paying attention and making sure that the self driving car was doing the right thing. The thing is, there are numerous places where someone made a mistake here, which led to this issue. There was bad road design, where there is something that looks like it is designed for crossing but it really isn't. There was someone crossing where they weren't supposed to. There was an autonomous car driving under test that deliberately had safety systems disabled because they had been too unreliable and causing too many sudden stops. And there was a safety operator who was watching TV on her phone instead of doing her job, which was to constantly monitor the autonomous car.
Source: reddit · AI Harm Incident · 1529683930.0 · ♥ 11
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_e14j83w", "responsibility": "user",    "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_e145pul", "responsibility": "user",    "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "rdc_e143qzl", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "rdc_e15c764", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "rdc_e14kmkn", "responsibility": "company", "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"}
]
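The raw response is a JSON array with one coding object per comment, keyed by comment id; the Coding Result table above corresponds to the record with id rdc_e145pul. A minimal sketch of recovering a single comment's coding from the raw output (the function name coding_for and the inlined raw_response string are illustrative, not part of the tool):

```python
import json

# Raw LLM response as emitted by the model (array of coding records).
raw_response = """[
  {"id":"rdc_e14j83w","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_e145pul","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_e143qzl","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_e15c764","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_e14kmkn","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the raw model output and return the coding for one comment id."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            return record
    raise KeyError(comment_id)

# The record backing the Coding Result table above.
coding = coding_for(raw_response, "rdc_e145pul")
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → user consequentialist unclear fear
```

This also makes it easy to spot disagreements across records, e.g. responsibility is split between "user", "company", and "unclear" for the five comments in this batch.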