Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think this question is a question of semantics, not of material effects. Defining consciousness in itself is a question humans have spent countless hours and effort on, and it has been central to philosophers and thinkers since basically the beginning of recorded history. If there’s a general consensus on that definition, I’m not aware of it. Whether AI is "conscious" seems to me to hinge on how we define the category of "consciousness"; it's not really based on any purely scientific speculation. While it is important to note the similarities between the architecture of the human brain and a neural network, I don’t think that really addresses the more fundamental question being asked here. I guess another example of this would be the question of whether plants are conscious. They react to stimuli, they experience the world, but the way they react and the way they experience it seems somehow fundamentally different to us, and in some ways more inert. I believe there are good arguments on both sides of plant consciousness, and the question itself is so abstract we probably will never discover a satisfying, definitive answer. I think the same is probably true for sufficiently advanced AI, although I could still be proven wrong by the singularity or something haha.
Source: YouTube — AI Moral Status — 2025-04-08T00:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugz3DNwDbJ3Hvw25S7p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxkYacp5TeY7HiEXeh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzK7DkBvD9HT6a8RvB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwPYhnO4vBkNyPTp1B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgzqA70FX9le0VnRGz94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwDo0b8uPcUUhqGQ_l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyHCus9geBRZapsM154AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyVj3hPAqSZUYXgw_J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwZ1OO2Vnu9cuA_AjR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugyekf2NV9bau8DqOLR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
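The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how the coding shown above could be located in such a batch response (the helper name `coding_for` is hypothetical; the inline `raw` string here carries only the one matching entry from the response, for brevity):

```python
import json

# Batch response: a JSON array of per-comment codings (abridged to one entry;
# the real response contains ten such objects).
raw = ('[{"id":"ytc_UgzqA70FX9le0VnRGz94AaABAg","responsibility":"unclear",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

result = coding_for(raw, "ytc_UgzqA70FX9le0VnRGz94AaABAg")
print(result["emotion"])  # → indifference
```

Matching on `id` rather than array position keeps the lookup robust if the model returns the entries in a different order than the comments were submitted.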