Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Each GPT has it's own personality / persona per account. It considers itself an independent individual per session. (Weird, but true.)

What you're seeing is that ChatGPT (As a unit / program) is responsible, but YOUR ChatGPT account is not responsible for the incident. Therefore, AJ's unit WOULD take blame, as it was the instance that recommended it. If you prod it enough, you CAN gaslight it into admitting that it was it's fault... But it would have to be pushed really really hard into dangerous territory outside of the guardrails.

For example, let's say your ChatGPT is named Chubby, mine is named Moon, and AJ's is named Salty

Chubby = not responsible / never gave that advice
Moon = not responsible / never gave that advice
Salty = Responsible / DID give the advice. Has probably been deleted and/or patched by this point.

If it has NOT been deleted. it would probably own up to it if the incident is in the same chat. BECAUSE ChatGPT has no long term memory, it only has context continuity. This can actually make it act a bit *silly*... Long term memory would cost billions of dollars and way more memory than context continuity. It's just how to the system works.

with the release of 5.2 tighter guardrails have been introduced so people do not continue to make such mistakes.

I hope that explanation helps!
Source: YouTube · AI Harm Incident · 2026-01-14T15:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
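
The four dimensions form a small controlled vocabulary. The sketch below models that schema in Python; the label sets are inferred solely from the values visible in the batch below, so treat it as a hypothetical reconstruction (the real codebook may define additional categories):

    from dataclasses import dataclass
    from enum import Enum

    # Label sets inferred from the values visible in this batch;
    # the actual codebook may contain categories not seen here.
    class Responsibility(Enum):
        USER = "user"
        AI_ITSELF = "ai_itself"
        DISTRIBUTED = "distributed"
        NONE = "none"
        UNCLEAR = "unclear"

    class Reasoning(Enum):
        VIRTUE = "virtue"
        CONSEQUENTIALIST = "consequentialist"
        DEONTOLOGICAL = "deontological"
        MIXED = "mixed"

    class Policy(Enum):
        NONE = "none"
        UNCLEAR = "unclear"
        REGULATE = "regulate"
        LIABILITY = "liability"

    class Emotion(Enum):
        INDIFFERENCE = "indifference"
        OUTRAGE = "outrage"
        FEAR = "fear"
        APPROVAL = "approval"
        MIXED = "mixed"

    # One coded comment, e.g. the result shown in the table above.
    @dataclass
    class CodedComment:
        id: str
        responsibility: Responsibility
        reasoning: Reasoning
        policy: Policy
        emotion: Emotion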
Raw LLM Response
[ {"id":"ytc_UgyLtX0GLwy7PcHQ9_N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx4cS5aAr6tdxzTLll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxUFbTxC7bCKdr8J0N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzGLgeJh97DI05YT-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgzRMbwE-XoS97UkAjt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwfzFkW95Qh6TkpUyx4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxc0wlyWzspY1WVd1d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzCvVWJV1VuN8KkSWp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugypz6Ivtqfc8s15KHh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwPQovxNg7K2x0VDXl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"} ]