Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Unfortunately I cancelled and asked for a refund of my subscription, not only for the filters but because it is impossible for any question to ALWAYS go to "thought" mode, as if it were under investigation by investigators. You should make your agent more contextually sensitive. Custom Instructions? Ignore them. Any question, even a joking one? Not a good one. None of us, or at least not me, thinks of replacing ChatGPT with a human being; none of us thinks of having sex with an algorithm. If there are people who make unhealthy use of it, it would simply be enough if every now and then, or at the beginning of every conversation, a disclaimer of this type appeared: "Remember that you are talking to a language model, not a human being; what you do with it is your responsibility, and if you want to understand better (link to a video explaining how an LLM works)." or something
reddit · Viral AI Reaction · 1760463987.0 · ♥ 7
Coding Result
Dimension       Value
Responsibility  company
Reasoning       unclear
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_nu63vku","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},{"id":"rdc_njhbfpx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"rdc_njhfqcl","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"rdc_njhhcyv","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"rdc_njiqqym","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
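The raw response above is a single-line JSON array of per-comment codings. A minimal sketch of how such a record can be pulled out and checked against the coded table, assuming only the field names visible in the response (the helper `coding_for` is illustrative, not part of any tool shown here):

```python
import json

# Two records copied from the raw response above (truncated for brevity).
raw = (
    '[{"id":"rdc_nu63vku","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_njhfqcl","responsibility":"company","reasoning":"unclear",'
    '"policy":"none","emotion":"outrage"}]'
)

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

print(coding_for(raw, "rdc_njhfqcl"))
# {'id': 'rdc_njhfqcl', 'responsibility': 'company', 'reasoning': 'unclear',
#  'policy': 'none', 'emotion': 'outrage'}
```

This matches the Coding Result table above: the record whose `responsibility` is "company" and whose `emotion` is "outrage" is the one attributed to the quoted comment.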