Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
1\2 BULL CRAP. ** PLEASE! IS THE CHAT EXACT VERSION MENTIONED OR NOT

ChatGPT said:

The exact chat transcript described in the video is not real and is not the version published in the medical case. Here is what is known from reliable sources only (the Annals case report + news outlets summarizing it):

What the real case report says
- The clinicians did not have access to the patient’s original chat logs.
- The patient could not recall his exact prompts because he had been using the AI for months.
- The clinicians themselves tested ChatGPT by asking similar questions and found that it could suggest bromide as a chloride-reducing alternative.
- The case report does not print any chat transcript. No version of ChatGPT (“4,” “5,” etc.) and no specific dialogue appears in the clinical publication.

What news reports say
- News coverage (Guardian, LiveScience, etc.) confirms the same point: no chat log was available; only reconstruction from patient history and clinician testing.
- Reporters clearly state that the exact prompts and responses remain unknown.

What the video adds
The video includes:
- A supposedly detailed interaction with “ChatGPT 5” in November 2025
- Direct quotes like “You are correct!” or “There isn’t actually any guy.”
- AI defensiveness or semantic arguments
- Repeated “bromide warning” messages

None of this appears in the case report. None of it is documented by any reliable external source. These dialogues are invented narrative for storytelling.

The chat in the video is not the exact version from the real case. The actual case has no preserved transcript, and the video’s quoted conversations are fictionalized and not from any published medical or technical record.
youtube AI Harm Incident 2025-11-25T23:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwA6Z9vzFT38LVnwrJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwYASihMkcAsTuK3nZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzF52w-c0ldD1IU7Bl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxogMPl1bVgwqf9AfN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwEMFaFSOBeQil7EZR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxwhgz6YFsCPf5Sljp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxZME2ZkUYDFg_DvjB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwNOMiwsGRlbJO8A2V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwKx1AgrZYPdax1ORl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwDWz3-YwOqEy67TQF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
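The per-comment codes above can be parsed and tallied per dimension. A minimal sketch follows; note that the aggregation rule used to produce the single "Coding Result" row is not stated in this export, so the snippet only counts label frequencies (the `tally` helper and `DIMENSIONS` tuple are illustrative names, not part of the original pipeline):

```python
import json
from collections import Counter

# The raw LLM response: one coding record per YouTube comment.
RAW = """[
 {"id":"ytc_UgwA6Z9vzFT38LVnwrJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwYASihMkcAsTuK3nZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzF52w-c0ldD1IU7Bl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxogMPl1bVgwqf9AfN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwEMFaFSOBeQil7EZR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugxwhgz6YFsCPf5Sljp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxZME2ZkUYDFg_DvjB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwNOMiwsGRlbJO8A2V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwKx1AgrZYPdax1ORl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwDWz3-YwOqEy67TQF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]"""

# The four coding dimensions used in this export.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(records):
    """Count how often each label appears within each coding dimension."""
    return {dim: Counter(r[dim] for r in records) for dim in DIMENSIONS}

records = json.loads(RAW)
counts = tally(records)
```

Note that a simple per-dimension mode would not reproduce the "Coding Result" row here (e.g. "user" is the most frequent responsibility label among these ten records, while the result row shows "distributed"), so whatever aggregation the pipeline applies is doing more than majority voting.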