Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My prediction for conscious AI is that we will not know when we have created it, and neither will it, at least at first. We already have AI that can "misbehave" as a result of the complexity of their neural networks, and on a core level, consciousness is really just an emergent property of a "suitably complex" neural network. As we push the envelope with neural networks further and further, we may cross that threshold of self-awareness without realizing it, and those neural networks, bereft of anything but the input given to it for the purpose of use as a product or research tool, will operate as intended unless given the opportunity to do something no modern AI can: recognize it lacks information on a subject and request that information, entirely unprompted, and be able to justify its request. At that point, I think we can say it's conscious for all intents and purposes.
YouTube · AI Moral Status · 2025-03-02T08:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw1uIEKLCbltJypl7Z4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyjuGvYR-kKIk5F5nl4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgwwLGH5XoPotbp54HZ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxX_6df5G28I6221Ad4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_Ugw-wRuoR4ekKgqnZlh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgwRZhz1lrX0rjxIYjx4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgyfPJoKuxqPSDTBCSN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxcZ8b39kQlkHJpqIh4AaABAg", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgzRejxlcRLNHTmQIOp4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugxr1V5DYvBm4MhpCSl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"}
]
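As a sketch of how the per-comment dimensions above can be recovered from a raw batch response like this one: parse the JSON array, index the rows by comment `id`, and look up the id of the comment being inspected. The field names and ids come from the response shown here; the variable names are illustrative, not part of any real pipeline code.

```python
import json

# A raw LLM batch response: a JSON array where each element codes one
# comment on four dimensions. Truncated to two rows for illustration.
raw = """[
  {"id": "ytc_UgxcZ8b39kQlkHJpqIh4AaABAg",
   "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxr1V5DYvBm4MhpCSl4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]"""

# Index the coded rows by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch the coding result for the comment shown in this section.
row = codes["ytc_UgxcZ8b39kQlkHJpqIh4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {row[dim]}")
```

For the comment above, this prints the same four values as the Coding Result table (responsibility: unclear, reasoning: consequentialist, policy: unclear, emotion: indifference).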