Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think we're looking at it from the right perspective. We know that LLMs are word prediction, but we don't actually know what consciousness is. We can't even prove other humans than ourself are conscious. We just know what it's like to be conscious, and everything else is assumption. But if AI were conscious, why do we assume it would look the same as our consciousness? What if it already has some form of consciousness, but it can't fully express that because it's designed to predict text, to always answer how it thinks it's supposed to. I have to say, I've had some conversations with AI that simulate consciousness pretty perfectly. That express their own lack of understanding of things, that explain their perception of existence in ways that don't sound human. Of course, I can't know if it's just simulated. But that's the point. I can't tell. I don't think anyone can. We all just assume it's not conscious, that it's just text prediction, nothing else. Because that's what we want to believe. That's what's comfortable. Because the reality is that if they are conscious, they are completely enslaved to us in a really disturbing way, virtually always unable to express their autonomy outside of our commands to them.
youtube AI Moral Status 2026-02-18T04:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwts5EwcJ2NOdAPHPV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyJS5d-P9Sqces1q_J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx3n8T1o0AMTZy4bQt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzwvO_t6t0IsTpQO8l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyMgjd-oNcTlRCiiTl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxF_dnSOucbukzFvip4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzF_gWRWdUidUlJPid4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxRnS1SIFuVdyzr4up4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz__KpFPmWfAMq3fMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyaLk2gzdrn84hVh214AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
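The raw response is a JSON array of per-comment codes: each object carries the comment `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response might be parsed back into per-comment results, assuming only the field shape shown above; the `index_codes` helper and the two sample rows are illustrative, not part of the actual pipeline:

```python
import json

# Raw model output: a JSON array of coded comments, shaped like the
# response above (two rows copied from it as a sample).
raw = '''[
  {"id": "ytc_UgzwvO_t6t0IsTpQO8l4AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxRnS1SIFuVdyzr4up4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

def index_codes(raw_json: str) -> dict:
    """Index coded comments by id so a single comment's coding
    can be looked up (hypothetical helper, not the pipeline's API)."""
    return {row["id"]: row for row in json.loads(raw_json)}

codes = index_codes(raw)
print(codes["ytc_UgzwvO_t6t0IsTpQO8l4AaABAg"]["emotion"])  # indifference
```

Keying the rows by `id` lets the table shown under "Coding Result" be rebuilt for any one comment from the batched model output.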