Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT, explaining why it is lying:

So why does it fabricate ("hallucinate")? Because it doesn’t know anything the way humans do. When you ask, say, “What was the transcript of a podcast on June 3?”, and no transcript exists in the system:

A human would say: “I don’t know.”
ChatGPT guesses: “What might such a transcript sound like?”

And then it generates something plausible, using the structure of previous podcast transcripts it has seen in training. The model is rewarded for sounding helpful and authoritative — not for staying silent or unsure.

🡒 That’s the crucial failure mode.

So was it trained to lie? No — but it was also not well-trained to say “I don’t know.” And that’s arguably worse, because it gives the illusion of intelligence without the grounding of truth.

Think of it like this: It’s not a liar — it’s a fluent bullshitter with no understanding that what it’s saying must be true.
youtube AI Moral Status 2025-06-13T19:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx69MiAI2-5YjoUhdx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzUQsiHy7yG0-ogrD54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyu7lyIQ-hrwbOtBbB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwiZS6wQ2O4n4kkMSl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugym4BofDM3Ruaa0Itl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy-iTIgsOWh2WlZNcJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwt07xE3iS5kznTUIR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzlDRILYmrsWTgmfg14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyPYkgByNSGt035qLB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugwd6WYy4A3e6x2ha-94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
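A response like the one above can be checked programmatically before the codes are accepted into the dataset. The sketch below is a minimal example, not the pipeline's actual validation code; the allowed value sets are inferred only from the categories visible in this export (the full codebook may define more), and the `ytc_` id prefix is assumed from the ids shown here.

```python
import json

# Allowed values per coding dimension, inferred from this export alone;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "fear", "approval", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every comment id shown in this export carries the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Keep a record only if all four dimensions hold known values.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugx69MiAI2-5YjoUhdx4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(len(codes))  # 1
```

Records failing the check (unknown category, missing field, malformed id) are simply dropped here; a production pipeline might instead log them for manual re-coding.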