Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, these language models are not conscious and LLMs literally cannot be conscious. Just spend some time learning to program and train them and you will start to understand why. These are programs that predict tokens, and even though the tokens they spit out might sound like something written by a conscious entity it isn't and cannot be. They have not actually evolved a sense of self-preservation or desire for power or any of that ... they're accurately mimicking the writing style of humans which have those traits. One day? I absolutely think so ... but that day is not today and we've barely taken the first babysteps toward it. When you inference on a model you're essentially invoking an entry point function to run some input data through it. It runs and then it stops. There's nothing going on in there in between these calls. Something conscious would need to run in a constant loop, constantly perceiving and analyzing, and these do not do this. You don't even "shut down" an LLM, you just don't call it again. I can see how this easily fools people who don't understand software and how deep learning works, but I promise you it's simply not doing all these anthropomorphic things the imagination wants to assume.
Source: youtube · AI Moral Status · 2025-06-25T06:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxjaCqTx-1qQ0nbia14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxghCL4RlmLYwB5uY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxmDTgadYuEr3kBsBx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgzAKdkz7UaERwxzUm54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw0F_yP2D4uk_uWBKZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzp5o4dyQ4o3FGn5GN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw10CQczn7v7IFHUkB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy1RyygVeqa8uxHQHt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyACfGhWhTqZKS9p_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxsjXm6R5PNowlXR_N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]