Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> "This tragedy was not a glitch or unforeseen edge case," the complaint states.

Actually yes it was. And it's funny that many of these outlets are leaving out a key fact:

> [The watchdog group found ChatGPT would provide warnings when asked about sensitive topics, but the researchers state they could easily circumvent the guardrails.](https://komonews.com/news/local/absolute-horror-researchers-posing-as-13-year-olds-given-advice-on-suicide-by-chatgpt)

As much as I hate AI, ChatGPT warns users and even refuses to elaborate on sensitive topics. The teen went around that safeguard. And even when you do, ChatGPT still warns users.
Source: reddit · AI Governance · 1756863411.0 · ♥ -2
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | ai_itself                  |
| Reasoning      | consequentialist           |
| Policy         | none                       |
| Emotion        | outrage                    |
| Coded at       | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_nc3t7fw", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_nc32b0d", "responsibility": "government", "reasoning": "deontological",    "policy": "ban",      "emotion": "indifference"},
  {"id": "rdc_nc4af27", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_nc789h9", "responsibility": "developer",  "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nc3diu5", "responsibility": "user",       "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"}
]
```
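The batch response is a JSON array of per-comment records, so recovering the coding for one comment is a parse-and-index step. A minimal sketch, assuming the comment shown above corresponds to id `rdc_nc4af27` (the record whose values match the coded dimensions; the id mapping itself is not stated in this log):

```python
import json

# Raw LLM response, copied verbatim from the log above.
raw = (
    '[ {"id":"rdc_nc3t7fw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
    ' {"id":"rdc_nc32b0d","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"indifference"},'
    ' {"id":"rdc_nc4af27","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},'
    ' {"id":"rdc_nc789h9","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},'
    ' {"id":"rdc_nc3diu5","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]'
)

records = json.loads(raw)

# Index the batch by comment id so a single coded comment can be looked up.
by_id = {rec["id"]: rec for rec in records}

# Hypothetical id for the comment above, inferred from the matching values.
coded = by_id["rdc_nc4af27"]
print(coded["responsibility"], coded["emotion"])  # ai_itself outrage
```

Indexing by `id` rather than by list position guards against the model returning records out of order, which batch-coding prompts do not always prevent.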