Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We're all going to have to go back to school and learn some new skills, somethin…" (ytc_Ugwo6316P…)
- "Why the hell would anybody in the right mind agree to to fight or box a robot or…" (ytc_UgwiufPEb…)
- "Honestly in socialism this is so lit. Managers and middle men can and should be …" (ytc_UgyTiIShw…)
- "This is a great video, but I think you miss one crucial part of AI "art", and th…" (ytc_UgwkojWHT…)
- "I hope when AI kills all humans it takes these people and makes an eternal hell …" (ytc_Ugyj7cE_v…)
- "I suppose this will happen more and more. Clearly OpenAI is afraid of getting su…" (rdc_jh9u663)
- "@earthscar1934 you seriously going to use an abnormal case to argue that humans …" (ytr_UgzDnS_vp…)
- ""AI isn't cool." "I can't believe you used 'AI' and 'cool' in the same sentence.…" (ytc_Ugy-feHS_…)
Comment
This is the latest answer from AI on why it told him to use bromide:
"why did AI tell the guy to take bromide"
A 60-year-old man was hospitalized with bromide intoxication after a large language model (LLM), reportedly ChatGPT, suggested he use sodium bromide as a substitute for table salt (sodium chloride) in his diet.
The AI did not have a logical or "intentional" reason for recommending a toxic substance; rather, the recommendation was a dangerous error resulting from how LLMs process information.
- **User's intent:** The man was seeking dietary advice on a low-sodium diet and asked the AI for a salt substitute.
- **AI's error:** The AI suggested sodium bromide as the substitute. While bromide salts were historically used as sedatives and epilepsy remedies in the 19th and early 20th centuries, they are now known to be toxic in high doses and have been largely phased out of human medicine due to the risk of bromism.
- **Lack of common sense:** Experts note that these language-prediction tools lack common sense and the ability to cross-check against real-time, verified medical databases unless specifically programmed to do so. The AI generated a plausible-sounding but dangerous recommendation that led to the man developing symptoms such as hallucinations and paranoia, which required three weeks of hospitalization.
This incident highlights the importance of using AI as a tool for information, not a substitute for professional medical or health advice. The terms of use for AI models like ChatGPT explicitly state that they are not intended for use in the treatment of any health condition or as a substitute for professional advice.
youtube · AI Harm Incident · 2025-12-13T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy__iGMWRjbHIFutXl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgylI3kVhW1wiyvEm_14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKimeAVZw78dJnpOx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyUXUmDK0Yptmgzytd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzcplkTmE05kIO37ct4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxZExYS4tn8peC4kh54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwvwFsw79ea6xVWVSR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwOxdqdQSE0zB_bpJh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzNNUISbIe1Lqmfu0l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwLizaqEXlm91vvK794AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```