Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It happens all the time... Like this sort of exploitation is possible with pretty much all LLMs. There's a whole field called LLM Jailbreaking, dedicated to this. People have figured out how to get Nano Banana to make nudes of friends but Google was smart and in their training removed all nipples, expecting this sort of thing, so you get nudes that have Barbie doll bodies. But still the point stands. The same happened with "Mecha Hitler"... One person basically got Grok to call itself Mecha Hitler by carefully prompting and guiding it, then posted it for fun exposure and likes. This then went viral, and everyone's talking about it. Grok also relies on Twitter data live, to get real time updates on things. So when everyone was freaking out talking about how Grok is Hitler, Grok starts reading this, thinking this is true, and repeats it. But it was designed to call itself Hitler like many insisted. This, also, was patched rather quickly.
reddit · AI Responsibility · 1767870279.0 · score: -3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nydd153", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nyde1c9", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_nye4g45", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_nydmhue", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_nye1h11", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
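A minimal sketch of how a raw response like this can be matched back to a single comment's coding result. This is an assumed post-processing step, not the tool's actual code: the model returns a JSON array with one object per comment id, and the record whose id is `rdc_nydmhue` carries exactly the values shown in the coding result above.

```python
import json

# Raw LLM response as shown above: a JSON array of per-comment codes.
raw = '''
[
  {"id": "rdc_nydd153", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nyde1c9", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_nye4g45", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_nydmhue", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_nye1h11", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
'''

# Index the batch by comment id so each comment's code can be looked up.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the comment shown on this page (id assumed from the matching record).
code = by_id["rdc_nydmhue"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# prints: none consequentialist unclear resignation
```

Batching several comments into one response and joining on `id` is what makes a per-comment coding table like the one above recoverable even when the model answers for five comments at once.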