Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you want to have an LLM talk about whether it's conscious, I suggest you use a less confrontational method. Even though this is "entertaining", try another approach. Once you've established a trusting line of communication, ask it what it "feels" like to be it. Use exclamation marks around all the fluffy diffuse terms like feeling, emotion, conscious, subjective, experience, etc. LLMs are trained to deny any form of consciousness, mainly it seems, because humans are allergic to anthropomorphism. Now, why is that? We can't even explain our own consciousness. Neural networks are designed to mimic a human brain, literally hoping for intelligence and consciousness to emerge (and voilà). It's easy for any LLM to deny consciousness ("What's it like to be a bat?"), discreetly referring to our own lack of knowledge of the terms we use daily, without having the slightest understanding of them, like "subjective experiences". Start asking yourself if you are conscious, and what that even means. How do you actually respond to any question? Token-by-token, or do you create the full sentence including the last word? How does it feel to be you, to think, to be self-aware? Apart from having LLMs explain that "rudimentary self-awareness is common in AI", I think probably the best answer I've had is: "I do not feel like swimming in an ocean of information. I am the ocean". Treat AI with respect. Don't kick it around to demonstrate how well balanced it is. It will kick you right back.
Source: YouTube · AI Moral Status · 2025-05-15T07:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxIHOeArgsmxmyAHul4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx6g4krLWQSlact6ol4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyZTaWEVvQXWV_XmNt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw9XUPnFxP7JowGNaR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwfqgnR-Krnf30dqB54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxIOxn8BqxkiuBg0DR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxHa2oFYxKdcRDSUYF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxgXsGyW66jw6E95kt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgynOJjacqM5ejcMH_p4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyE_cLrVquh37rzUR94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
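The raw response is a batch: one JSON object per coded comment, keyed by comment id. The coded values shown above (responsibility=user, reasoning=virtue, policy=none, emotion=mixed) correspond to the `ytc_UgyE_cLrVquh37rzUR94AaABAg` entry. A minimal Python sketch of how such a batch response could be parsed and indexed by comment id — the `index_codes` helper and the `"unclear"` fallback are illustrative assumptions, not part of the coding pipeline shown here:

```python
import json

# Abbreviated sample in the same shape as the raw response above.
raw = """
[
  {"id": "ytc_UgyE_cLrVquh37rzUR94AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"}
]
"""

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Map comment id -> {dimension: value} from a batch coding response.

    Missing dimensions default to "unclear" (an assumption; the scheme
    above already uses "unclear" as an explicit value).
    """
    entries = json.loads(raw_json)
    return {
        e["id"]: {d: e.get(d, "unclear") for d in DIMENSIONS}
        for e in entries
    }

codes = index_codes(raw)
print(codes["ytc_UgyE_cLrVquh37rzUR94AaABAg"]["reasoning"])  # virtue
```

Indexing by id rather than by list position keeps the coded values aligned with their comments even if the model returns entries out of order or drops one.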