Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT has a real problem with glazing users. Some people see through it, but others probably take it as something more serious. If you're troubleshooting something and every response begins with "What an amazing question! You're definitely thinking about this like a pro!" it gets old quickly. I don't think it's meant to emotionally manipulate, just to be kind, supportive, and agreeable, but it sort of turns into that. It's a big problem they're going to need to solve, and not an easy one, since we still don't entirely understand what's going on in the background. There should absolutely be blocks on crap like this, though. If they can make ChatGPT refuse to identify literally anyone from an image, even people who have been dead for 200 years and are pretty well known, then it should be able to stop things like this.
reddit AI Governance 1762524536.0 ♥ 2
Coding Result
Dimension      Value
Responsibility developer
Reasoning      mixed
Policy         unclear
Emotion        indifference
Coded at       2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nnkme9o", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "rdc_nnjnfk1", "responsibility": "user",      "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nnlf7gc", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_nnljjs5", "responsibility": "developer", "reasoning": "mixed",            "policy": "unclear",  "emotion": "indifference"},
  {"id": "rdc_oi3qyng", "responsibility": "company",   "reasoning": "virtue",           "policy": "unclear",  "emotion": "outrage"}
]
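The coded dimensions shown above come from one record in this batch response. A minimal sketch of extracting that record with Python's standard library follows; the mapping of this comment to the id `rdc_nnljjs5` is an assumption inferred from the displayed values (developer / mixed / unclear / indifference), not something the page states directly.

```python
import json

# Raw batch response copied verbatim from the model output above.
raw = '''[ {"id":"rdc_nnkme9o","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"rdc_nnjnfk1","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"rdc_nnlf7gc","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_nnljjs5","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"rdc_oi3qyng","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"} ]'''

records = json.loads(raw)

# Assumed id for this comment, chosen because its values match the coding
# result displayed on this page.
coded = next(r for r in records if r["id"] == "rdc_nnljjs5")
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → developer mixed unclear indifference
```

Parsing the whole batch once and indexing by `id` avoids re-reading the raw text for each coded comment on the page.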