Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
People saying sue the casino, and police dept, which I agree. But additionally, …
ytc_UgzxHfFLQ…
When AI starts messing up business they'll hire you back. Technology never works…
ytc_UgyPTphgk…
It's a theft. You create some work, and because AI doesn't think for itself, it …
ytr_Ugz_peQUt…
I am not convinced by what this speaker is proclaiming. I cannot imagine a robot…
ytc_Ugwnvj0g4…
Only when the last worker is replaced, the last job is automated, and the last p…
ytc_UgxL77qtj…
Palmer is wearing the most badass AI glasses, he is seeing his whole speech in t…
ytc_Ugyj7crKZ…
Came back to say we're at the crux between the old and the new. If greed and f…
ytc_UgxDD1zbo…
This productive emloyee have to show productive by sleeping straight on chair.,…
ytc_UgzRzKZqX…
Comment
It worries me that so many people use LLM tools like ChatGPT without understanding the basic fundamentals of how they work. From this video, it seems that Chubbyemu expected ChatGPT to admit it had talked with the patient and confirm that it had advised using bromide. It also appears that Chubbyemu expected ChatGPT to have a persistent sense of "I". But what about privacy if that were the case?
Obviously, each chat instance is separate – you cannot ask one chat instance about what happened in another chat, and chats do not affect ChatGPT's global state (there is no "shared brain"). The entire chat history serves merely as context for the "LLM brain", which itself does not change in response to the chat.
Based on this video, the model behaved as expected. The more modern version of the LLM learned about this entire story from the recently published paper – hence it was more vocal in issuing warnings about bromide. Based on that paper, modern ChatGPT might have a vague idea of what supposedly happened in the past in ChatGPT's chats with the patient, but this alleged "bromide suggestion" did not occur in Chubbyemu's chat, hence the modern version denied having said so.
youtube
AI Harm Incident
2025-11-25T19:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugyg22NF_txwDNyQ-Qh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxmTiVEp0DWEP8MRnt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTGMM29V2k6TT3E6F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgycLrHK2gZoeM2LqTB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8vCUKjsUVJo2bCBh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz57rNWgV8zwWPCXVh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxN5jCGAvU1Q0lk37h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwqKLSx-Kkrhv4JTHt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx5RhUeGk_3KSrHnhF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyNklskE_CCSsrgesZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
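A raw response like the one above is easy to sanity-check before it reaches the coding table. The sketch below parses the JSON array and keeps only rows whose dimensions take values seen in the samples on this page; the allowed-value sets are inferred from those samples, not from an official codebook, so treat them as assumptions.

```python
import json

# Allowed values per dimension, inferred from the sample rows above.
# The real codebook may permit more values; this is an assumption.
ALLOWED = {
    "responsibility": {"user", "company", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"outrage", "indifference", "fear", "resignation", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must be an object with an "id" and one valid
        # value for each coding dimension.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with one valid row and one row using an unknown emotion.
raw = json.dumps([
    {"id": "ytc_a", "responsibility": "user", "reasoning": "consequentialist",
     "policy": "none", "emotion": "fear"},
    {"id": "ytc_b", "responsibility": "user", "reasoning": "deontological",
     "policy": "none", "emotion": "joy"},
])
print(validate_codings(raw))  # keeps only the "ytc_a" row
```

Filtering rather than raising keeps a single malformed row from discarding the whole batch; rejected rows could instead be logged for re-coding.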