Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It just tells you that while we are getting very advanced with AI, they still can’t understand a deeper context why something is the way it is. The thing is with Gemini, google put a “safeguard”, but it just gave them an unexpected outcome. That being said, something like this shouldn’t have slipped QA. Put it simply, being racist towards white has a more “acceptable” outcome compared to when it is racist towards, black, poc or etc which can even lead to boycotts or that kind of shenanigans.
Source: reddit · Topic: AI Harm Incident · Timestamp: 1708881416.0 · ♥ 79
Coding Result
Dimension      | Value
Responsibility | company
Reasoning      | deontological
Policy         | liability
Emotion        | outrage
Coded at       | 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ks2nwb7", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ks3ai0r", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ks2kfws", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ks2y5lb", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_ks4es5f", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
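A minimal sketch of how a batched response like this can be parsed back into per-comment codes. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON above; the surrounding pipeline (how the raw string arrives, how ids map back to comments) is an assumption for illustration.

```python
import json

# Raw model output, truncated to two of the records shown above for brevity.
raw = (
    '[{"id":"rdc_ks2nwb7","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_ks3ai0r","responsibility":"company",'
    '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
)

# Parse the batch and index it by comment id, so the codes for any
# single coded comment can be looked up directly.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# The comment displayed above was coded under id rdc_ks3ai0r.
print(by_id["rdc_ks3ai0r"]["emotion"])  # outrage
```

In practice a real pipeline would also validate that every expected id appears in the response and that each dimension's value falls in the coding scheme's allowed set, since LLM output is not guaranteed to be well-formed.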