Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Eh the optics look bad. The only solution would be to restrict it to 18+ individuals I suppose. This could still happen, but when it's an adult the perception would be different. I mean by default ChatGPT won't engage with suicidal talk and will tell you to get help. He 'jailbroke' it, and an argument can be made it's too easy to do that, but where does that end? The more restrictive they try to make it with hidden prompts, the poorer the model performs. If not ChatGPT he could have found a community that would have encouraged him online. Like I get it, but should the improper use case of say 0.001% of people mean AI is scrapped? There are books and movies which explore these themes, I can read wikis full of info. I get it's different because the Jailbreak then gives you something that responds back, but it's not the only technology that can be harmful if misused. He could have found some roleplay partner, said explicitly I'm just writing a fictional character and I want to bounce ideas. What if he tricks an actual person into being a 'yes man' after repeatedly telling them it's just for a play or plot ideas?
reddit · AI Harm Incident · 1756239606.0 · ♥ 5
Coding Result
Dimension       Value
Responsibility  company
Reasoning       utilitarian
Policy          regulate
Emotion         resignation

Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_natsm4l","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"rdc_navgmq4","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_naxks2u","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"indifference"}, {"id":"rdc_nasmjny","responsibility":"unclear","reasoning":"unclear","policy":"liability","emotion":"mixed"}, {"id":"rdc_nat171j","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"} ]