Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- I've been calling it **Context Inertia.** It seems to be something confined to t… (rdc_mumjkgx)
- Traditional artist here- i think that drawing is a fun way of expression, and yo… (ytc_UgzDahcWM…)
- yeah, wake up call saying that AI isn't that big of a deal because it's not as p… (ytc_Ugzc9QtKB…)
- Sure, but it IS doing that. If we collectively agree that plagiarism is wrong, w… (ytr_UgwBEon8a…)
- I'm hoping that perhaps/maybe if AI kept "Improving Itself" time after time that… (ytc_UgwRxf9Rg…)
- 2025 - Sure, why not? "Agentic Misalignment" Thankfully, we are lead by the stea… (ytc_UgzU0wVW2…)
- You won’t regulate nothing. Ai is already too big and vast ti be stopped and wil… (ytc_UgyrFUlwa…)
- This is just what AI does. It doesn’t even start until it has already won. And… (ytc_UgxrpwpWR…)
Comment
Here’s my opinion. There’s two types of consciousness. Natural and given. Natural consciousness is what we have. We have it from the second we are born to the second we die. It is not given it is not trained. It is naturally instilled within us. It is truly random, and we can never truly understand what it is. And then there’s simulated consciousness. It is given it is trained and can be taken away. That’s the big part. You can never truly get rid of someone’s consciousness as far as science knows right now. With a computer, you can remove the code to get rid of it. It is given to the computer and it is trained. Because the computer exist doesn’t mean it’s conscious immediately. You would have to train that into it. the human consciousness is completely random. Nature is only 100% true the random thing. A computer can never be 100% scientifically random. No matter how hard you try. Not like a human. So yes, and no AI can become conscious. Not on the level of the human but still there.
youtube · AI Moral Status · 2023-11-02T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
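
For reference, one coded record could be represented roughly as below. This is a sketch only: the class and field names are assumptions, and the values noted in the comments are limited to the codes visible on this page rather than the full codebook.

```python
from dataclasses import dataclass

# Hypothetical container for one coded comment; field names mirror the
# "Coding Result" table above.
@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_Ugz8LYD3A_2e4hJIWq54AaABAg"
    responsibility: str  # observed here: "none", "ai_itself"
    reasoning: str       # observed here: "deontological", "consequentialist", "unclear"
    policy: str          # observed here: "unclear", "none"
    emotion: str         # observed here: "indifference", "resignation", "mixed", "approval", "fear"
    coded_at: str        # ISO-8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```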
Raw LLM Response
[
{"id":"ytc_Ugz8LYD3A_2e4hJIWq54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzMPLaEcdtKgIQRdyF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz3QiL-6Xj0FTSCePV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzktCcP2tymTWcSsyR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFlOkndRtAeuUL7rB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzr4xxFLGihCzf3FS14AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtLlqZtcqQFSDao794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzeMTrOb2fOgYe2ojx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjiGo95m9bbtPb_cd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyf0IgGH2ND0ESexB14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
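
A look-up like the one offered at the top of this page could be implemented roughly as follows. This is a minimal sketch, not the tool's actual code: the function name and the choice to return None for malformed output or unknown IDs are assumptions.

```python
import json
from typing import Optional

def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Find the coding row for one comment ID in a raw batch response.

    The model output is expected to be a JSON array of objects, each with
    an "id" plus the four coding dimensions, as in the sample above.
    """
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # malformed model output; flag for manual review
    for row in rows:
        if isinstance(row, dict) and row.get("id") == comment_id:
            return row
    return None

# Usage, assuming the raw response above is stored in the string `raw`:
# lookup_coding(raw, "ytc_UgzMPLaEcdtKgIQRdyF4AaABAg")
# -> {"id": "ytc_UgzMPLaEcdtKgIQRdyF4AaABAg", "responsibility": "none",
#     "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
```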