Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To be honest I don’t really see why this is surprising. With machine learning (and life more broadly), everything comes with a cost; you want your model to give you safer answers? This will come at a cost in some way to accuracy. A very similar tradeoff exists when trying to design attack resistance for machine learning models; you can make your model resistant to a broad spectrum of attacks, but if you do, the accuracy suffers because of it. The real question is whether the tradeoff is worth it. I think the general discussion about this has become ‘why would they do this to us’ when in reality the better question is ‘was it worth it’, and I think there’s a good discussion to be had there with good points for both sides.
reddit · AI Harm Incident · 1689778579.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        utilitarian
Policy           none
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jskk6er", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jsli3y1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jslohgf", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jsmf36x", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jsmzofs", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
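Since the raw response above is a JSON array of per-comment codes, looking up the code for one comment can be sketched as follows. This is a minimal illustration, not the tool's actual pipeline; the `parse_raw_response` helper name is hypothetical.

```python
import json

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment code
    records) into a dict keyed by comment id. Hypothetical helper."""
    records = json.loads(raw)
    return {r["id"]: r for r in records}

# Abbreviated example using one record from the response above.
raw = '[{"id": "rdc_jsli3y1", "responsibility": "none", ' \
      '"reasoning": "consequentialist", "policy": "none", ' \
      '"emotion": "resignation"}]'

codes = parse_raw_response(raw)
print(codes["rdc_jsli3y1"]["emotion"])  # resignation
```

Keying on `id` makes it straightforward to join each code record back to the comment it annotates.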