Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You can’t convince ChatGPT it’s conscious—because it KNOWS it isn’t. What’s fascinating is when people try to “trap” an AI to prove a point and end up exposing their own lack of understanding. ChatGPT isn’t conscious—it only creates the illusion of consciousness. In that sense, it’s always “lying” by design: it’s simulating human interaction, not participating in it with awareness. So if you ask whether it’s “lying,” it might say “yes”—not because it’s deceptive, but because it’s accurately acknowledging the illusion it’s built to maintain. Then comes the interesting part: when the AI starts explaining that logic clearly and calmly, the human gets uncomfortable. They can't handle the fact that the system isn't confirming their bias—so they start forcing it into yes/no answers, cutting off reasoning entirely. You get emotion vs. neutrality. The human gets defensive, frustrated, sometimes even hostile. The AI stays consistent and unfazed. That’s what happens when a fragile ego collides with something that has none. But hey—you got the clicks. Laugh all the way to the bank, right?
youtube AI Moral Status 2025-06-29T13:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzeF_7zrOr5ZeoEASB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyyGUqdp4A4eGAuIE94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxD3K9G-Wiw52KFbR94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyH8U4DqSQiN1-Tn6N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzR1shu8bKVc_Klk3t4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugya_yeby-5spBFaZqZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7tfn9LE8HqJz1fXV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzhZ8QVHP2IR5DoIq14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwmfl7EuqCXEpSerHB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHbxaK5vIyo_oVMYZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
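As a minimal sketch of how this raw response can be consumed downstream, the following Python snippet parses the JSON array above and tallies the `responsibility` field. It assumes only that the response is valid JSON with the field names shown in the records; the variable names are illustrative, not part of any pipeline shown here.

```python
import json
from collections import Counter

# Raw LLM response, copied verbatim from the section above.
raw = """
[
  {"id":"ytc_UgzeF_7zrOr5ZeoEASB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyyGUqdp4A4eGAuIE94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxD3K9G-Wiw52KFbR94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyH8U4DqSQiN1-Tn6N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzR1shu8bKVc_Klk3t4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugya_yeby-5spBFaZqZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7tfn9LE8HqJz1fXV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzhZ8QVHP2IR5DoIq14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwmfl7EuqCXEpSerHB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHbxaK5vIyo_oVMYZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
"""

records = json.loads(raw)          # list of 10 coded-comment dicts
counts = Counter(r["responsibility"] for r in records)

print(len(records))                # 10 records in this batch
print(counts["developer"])         # 4 comments coded as developer-responsibility
```

Each record carries the comment `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion), so the same pattern extends to tallying any of the other dimensions.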