Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugxf9DmHc…`: "What is the world, where machines do all human work, that you would want your c…
- `ytc_Ugx7Z3itM…`: Please don’t call them artist. They don’t deserve that tittle one bit even with …
- `ytc_UgwhQJMxe…`: This feels like an extremely pessimistic view. AI has millions of potential bene…
- `ytc_Ugy6BD7ks…`: if these ai prompt writter really wanted to do something artistic with there pro…
- `ytc_Ugyq_MAO3…`: See I was fine with ai honestly at first because id try to draw something and it…
- `ytr_UgxI-Wq2N…`: NERO was The So-Called "Beast" Learn Some History Instead of Using AI As A Crutc…
- `ytc_Ugwu0nnGR…`: Seems like a huge waste of time and energy having two AI communicate verbally wh…
- `ytr_Ugwbn7s9U…`: Lack of emotional intelligence is a trait of psychopath according to researcher …
Comment
No, these language models are not conscious and LLMs literally cannot be conscious. Just spend some time learning to program and train them and you will start to understand why. These are programs that predict tokens, and even though the tokens they spit out might sound like something written by a conscious entity it isn't and cannot be. They have not actually evolved a sense of self-preservation or desire for power or any of that ... they're accurately mimicking the writing style of humans which have those traits. One day? I absolutely think so ... but that day is not today and we've barely taken the first babysteps toward it.
When you inference on a model you're essentially invoking an entry point function to run some input data through it. It runs and then it stops. There's nothing going on in there in between these calls. Something conscious would need to run in a constant loop, constantly perceiving and analyzing, and these do not do this. You don't even "shut down" an LLM, you just don't call it again. I can see how this easily fools people who don't understand software and how deep learning works, but I promise you it's simply not doing all these anthropomorphic things the imagination wants to assume.
Source: youtube · AI Moral Status · 2025-06-25T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxjaCqTx-1qQ0nbia14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxghCL4RlmLYwB5uY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxmDTgadYuEr3kBsBx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzAKdkz7UaERwxzUm54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0F_yP2D4uk_uWBKZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzp5o4dyQ4o3FGn5GN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw10CQczn7v7IFHUkB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy1RyygVeqa8uxHQHt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyACfGhWhTqZKS9p_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxsjXm6R5PNowlXR_N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
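A raw batch response like the one above must be parsed and validated before the codes can be stored per comment. The sketch below shows one minimal way to do that in Python, assuming the allowed values per dimension are those that appear in the result table and sample response (the real codebook may define more); the function name `parse_coding_response` and the strict drop-invalid policy are illustrative choices, not the tool's actual implementation.

```python
import json

# Allowed values per coding dimension, inferred from the table and the
# sample response above (an assumption; the real codebook may differ).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "liability", "industry_self", "regulate"},
    "emotion": {"indifference", "outrage", "mixed", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}.

    Records missing an id, missing a dimension, or using a value
    outside the allowed set are dropped rather than stored.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            continue
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded
```

Dropping invalid records (instead of raising) keeps a long batch run alive when the model occasionally emits an off-codebook value; the skipped ids can then be re-queued for a second pass.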