Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a terrible idea. Until we've gotten a couple hundred thousand more hours of data on self-driving cars--especially in concert with other self driving cars, the driver should definitely know what the fuck they're doing if something fails.
Source: reddit, AI Harm Incident 1475388076.0
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_d8ai0nx", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_d8almjm", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_d8b7vpz", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_d8ar0o0", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_d8azx7v", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
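To trace a coded result back to the raw model output, the batch response can be parsed and indexed by comment id. A minimal sketch in Python, assuming the JSON array shape shown above (the helper name `parse_codes` is hypothetical, not part of the pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, truncated here
# to two entries for brevity.
raw_response = '''[
  {"id": "rdc_d8ai0nx", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_d8almjm", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

def parse_codes(raw: str) -> dict:
    """Index a batch of coded comments by their comment id."""
    return {item["id"]: item for item in json.loads(raw)}

codes = parse_codes(raw_response)
print(codes["rdc_d8ai0nx"]["emotion"])  # outrage
```

Indexing by id makes it cheap to look up the coding dimensions for any single comment when auditing a batch response.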