Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is the latest answer from AI on why it told him to use bromide: "Why did the AI tell the guy to take bromide?"

A 60-year-old man was hospitalized with bromide intoxication after a large language model (LLM), reportedly ChatGPT, suggested he use sodium bromide as a substitute for table salt (sodium chloride) in his diet. The AI did not have a logical or "intentional" reason for recommending a toxic substance; rather, the recommendation was a dangerous error resulting from how LLMs process information.

User's intent: The man was seeking dietary advice on a low-sodium diet and asked the AI for a salt substitute.

AI's error: The AI suggested sodium bromide as the substitute. While bromide salts were historically used as sedatives and epilepsy remedies in the 19th and early 20th centuries, they are now known to be toxic in high doses and have been largely phased out of human medicine due to the risk of bromism.

Lack of common sense: Experts note that these language prediction tools lack common sense and the ability to cross-check against real-time, verified medical databases unless specifically programmed to do so. The AI generated a plausible-sounding but dangerous recommendation that led the man to develop symptoms such as hallucinations and paranoia, requiring three weeks of hospitalization.

This incident highlights the importance of using AI as a tool for information, not a substitute for professional medical or health advice. The terms of use for AI models like ChatGPT explicitly state that they are not intended for use in the treatment of any health condition or as a substitute for professional advice.
YouTube · AI Harm Incident · 2025-12-13T19:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugy__iGMWRjbHIFutXl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgylI3kVhW1wiyvEm_14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxKimeAVZw78dJnpOx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyUXUmDK0Yptmgzytd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzcplkTmE05kIO37ct4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxZExYS4tn8peC4kh54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwvwFsw79ea6xVWVSR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_UgwOxdqdQSE0zB_bpJh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzNNUISbIe1Lqmfu0l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwLizaqEXlm91vvK794AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]