Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In theory it might seem like it, but I think the worry is that in practice it will normalize it for people who wouldn’t otherwise be interested in it, and then perhaps they will want real content or abuse kids IRL. Right now you would need to actively search for this content and know where to look for it. If the AI stuff were legal then it would be easy for anyone to find, and that’s probably a dangerous path.
reddit · AI Harm Incident · 1695569636.0 · ♥ 9
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_k21fnwj","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_k206rfx","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_k20h20p","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_k205mys","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_k2221po","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
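The raw response is a JSON array with one object per coded comment, keyed by a comment id and carrying the four coding dimensions shown in the table above. A minimal sketch of how a single comment's coding could be looked up and validated against that schema (the helper name `coding_for` is hypothetical; the ids and field names are taken from the example response):

```python
import json

# Raw batch response as shown above, abbreviated to two entries.
raw = '''[
  {"id":"rdc_k21fnwj","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_k205mys","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# The four coding dimensions used throughout this view.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coding for one comment id, checking the schema is complete."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            missing = [d for d in DIMENSIONS if d not in entry]
            if missing:
                raise ValueError(f"{comment_id} missing dimensions: {missing}")
            return {d: entry[d] for d in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_for(raw, "rdc_k205mys"))
# {'responsibility': 'distributed', 'reasoning': 'consequentialist', 'policy': 'ban', 'emotion': 'fear'}
```

Looking the entry up by id rather than by position keeps the mapping stable even if the model returns the batch in a different order than the comments were submitted.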