Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Program code trawling through Big Data with filters imposed for bias etc does no… (ytc_UgwI7Zqk4…)
- They’re using the same software as Waymo, the problem is, they don’t have all th… (ytc_UgxQ1eYqX…)
- HEY. I know_ ill just walk all robotic . pretend you are a robot if this happens… (ytc_UgwZi78dR…)
- I'll preface this with this statement: I don't think Humans are all that special… (ytc_Ugza5u9vx…)
- Actually Claude can do that 47 file refactor. But eats a lot of money to do that… (ytc_Ugzeu4WrW…)
- The only use case for AI should to work as an assistant only and not as a decisi… (ytc_UgxETt_kq…)
- Electric cars were a good idea????? 😂 That's funny. Youtube etc should be regula… (ytc_UgwIvqDRX…)
- Robots should never gain rights. They only do what we program. If we program the… (ytc_UggpzkAUQ…)
Comment
So basically, you can easily replace the word "AI" with "Internet" and have the exact same case, with the same substance; this guy could have used basically anything to reinforce his views.
And people do that all the time; it's called confirmation bias. Want to believe that the earth is flat? You go online and you find websites that reinforce your views.
Blaming AI for this, specifically ChatGPT, is quite disingenuous, and all those journalists and sites that put the term "ChatGPT" and/or "AI" in there as a cause for this incident are simply dredging up fear over something that is not only innocuous but actually BETTER than consulting with humans about this, because the humans are all over the place and just believe whatever they want to believe.
Also, it doesn't surprise me that the version of ChatGPT was 3.5, a now very old and little-used version that is quite inferior to both 4o and 5.
But the worst part of this is that nobody showed us this chat log.
What response did the AI produce, and what was the prompt for that response?
Where is the evidence?
Nowhere.
Either deleted, which is suspicious, or it never existed, because I seriously doubt that even ChatGPT 3.5 would actually advise anyone to replace chloride with bromide in their diet.
So what I gather from this story is that the man already had false preconceived notions about these elements, strong biases, and an "I know better than everyone else" mentality, went to ChatGPT, and as soon as ChatGPT gave him the answer he wanted, he went "Ah-ha! I was right all along! Now let's do this!", while ChatGPT merely gave him a general answer that both of these elements can be used in cleaning products to do the same job, so they are in effect interchangeable.
Or perhaps told him that there is no significant difference between them in certain specialized use cases, such as lab demonstrations or experiments, etc.
Who knows?
All I know is that there are people who can't wait to blame AI for everything, when it's actually much, much safer than talking to a human, because humans will tell you the most bizarre, irrational, false, outlandish things. Even licensed and certified, working clinicians make big blunders, yet we still trust them and go to them for medical advice, thinking that they must know what they're doing; after all, look, degree, look, license, look, lab coat and everything.
AI corrects people when they are wrong constantly; I've seen it in ChatGPT, Claude, Gemini, and others. It's just that it tends to do it in a soft, gentle way, and not in an "in your face" kind of way, to avoid offending anyone.
It plays the role of helpful assistant who draws from the already existing human knowledge, but we still manage to find excuses and ways to blame it for our own incompetence every chance we get.
And I know for sure that if you want accurate reliable information, go to the guy with the metal head.
Because even licensed clinicians disagree with each other all the time, and miss obvious things, even on the same test, same lab result, same scan, same symptoms, same patient, same everything. Their incompetence/failure is a lot more frequent than it should be, yet they each act like super expert geniuses who cannot be wrong, even when they obviously are, all while their opinions are in conflict with one another.
youtube
AI Harm Incident
2025-11-30T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwQ8QsoT8m6vt5bRad4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3dDwNATyn56jIw7d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVZ3DhbAoiSUteXZ54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxp5FtsbZ-AVkJUSsF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgziYImOcnpgKplytEZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSRj_CUpIpSl7-G_54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzjl00Li-2R0e3CZrx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTO459PZG0-gGPBE14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_IDc6JPmE9sJsnPJ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzvTna4P_j-zQezzQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
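The raw response above is a JSON array of per-comment codings keyed by comment ID, with one label per dimension (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a response is shown below; the `ALLOWED` label sets are only the values observed in this batch, and the full codebook may define more — adjust them to the actual scheme before use.

```python
import json

# Label sets observed in this batch; the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "fear", "outrage"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID.

    Raises ValueError if any row carries a label outside the allowed sets,
    so malformed model output fails loudly instead of polluting the dataset.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With the codings indexed by ID, the "Look up by comment ID" view above reduces to a dictionary access, e.g. `validate_codings(raw)["ytc_…"]["emotion"]`.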