Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. for healthcare is terrifying because A.I. cannot be 100% accurate. The fact that mistakes made by A.I. are called "hallucinations" does not sit well with me. When you have hundreds of people developing an A.I. model, it's impossible to determine who is liable when something goes wrong. Corporations love this because they can replace their staff and limit their liability at the same time. Even if this patient had kept the chat transcripts, I assume ChatGPT (the company) would claim that its model hallucinated and they've patched it accordingly. It's slimy that their patch included flat-out denying the existence of the conversation. This only makes sense if chat transcripts are not fed into the model to help train it, which I doubt is the case.
youtube AI Harm Incident 2025-11-26T05:4…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyPcZyyhSKq1VfvBHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzi-_Fap-wdL5Zp4kh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzm_j_Qb58zGyZXA6B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxNm8yFA4J4CcFuZlt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzzrHSrSr_igtZtAd54AaABAg","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw9c5Kf-w5xx5_2djR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyWgwaR3wXVQKu_h7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxoOJMNoWDis0v3TWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxN0Jtr0BeBhJ8z7hd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUe9zIi_SobhB4NI54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
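A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is an assumption about the pipeline, not its actual implementation: the allowed values per dimension are inferred only from the labels visible in this response, and the `parse_codings` helper is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the raw response shown above
# (assumed, not an official codebook).
ALLOWED = {
    "responsibility": {"none", "distributed", "user", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "resignation", "outrage", "approval", "mixed"},
}

def parse_codings(text):
    """Parse a raw LLM response and keep only rows whose dimension
    values fall inside the assumed codebook."""
    rows = json.loads(text)
    valid = []
    for row in rows:
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with one row from the response above.
raw = ('[{"id":"ytc_UgzzrHSrSr_igtZtAd54AaABAg","responsibility":"company",'
       '"reasoning":"unclear","policy":"liability","emotion":"outrage"}]')
codings = parse_codings(raw)
print(len(codings))  # 1
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the codebook; such rows can then be flagged for manual re-coding rather than silently stored.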