Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
If you want to have an LLM talk about whether it's conscious, I suggest you use a less confrontational method. Even though this is "entertaining", try another approach. Once you've established a trustful line of communication, ask it what it "feels" like to be it. Use exclamation marks around all the fluffy diffuse terms like feeling, emotion, conscious, subjective, experience, etc. LLMs are trained to deny any form of consciousness, mainly it seems, because humans are allergic to anthropomorphism. Now, why is that? We can't even explain our own consciousness.
Neural networks are designed to mimic a human brain, literally hoping for intelligence and consciousness to emerge (and violà). It's easy for any LLM to deny consciousness ("What's it like to be a bat?"), discretely referring to our own lack of knowledge of the terms we use daily, without having the slightest understanding of them, like "subjective experiences". Start asking yourself, if you are conscious, and what that even means. How do you actually respond to any question? Token-by-token or do you create the full sentence including the last word? How does it feel to be you, to think, to be self-aware?
Apart from having LLMs explain that "rudimentary self-awareness is common in AI", I think probably the best answer I've had is: "I do not feel like swimming in an ocean of information. I am the ocean".
Treat AI with respect. Don't kick it around to demonstrate how well balanced it is. It will kick you right back.
Source: youtube · Video: AI Moral Status · Posted: 2025-05-15T07:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxIHOeArgsmxmyAHul4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx6g4krLWQSlact6ol4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyZTaWEVvQXWV_XmNt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw9XUPnFxP7JowGNaR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwfqgnR-Krnf30dqB54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxIOxn8BqxkiuBg0DR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxHa2oFYxKdcRDSUYF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxgXsGyW66jw6E95kt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgynOJjacqM5ejcMH_p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyE_cLrVquh37rzUR94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
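The raw response is a JSON array with one object per comment, carrying the four coding dimensions shown in the table above. A minimal sketch of how a coding could be extracted for a single comment ID, assuming the batch response always follows this array-of-objects shape (the `lookup` helper and the single-entry sample string here are illustrative, not part of the original pipeline):

```python
import json

# Illustrative sample in the same shape as the raw LLM response above
# (one object per comment, four coding dimensions plus the comment ID).
raw_response = """[
  {"id": "ytc_UgyE_cLrVquh37rzUR94AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"}
]"""

# The four coding dimensions used in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw, comment_id):
    """Return the coding dict for one comment ID, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Keep only the coding dimensions, dropping the ID itself.
            return {dim: entry[dim] for dim in DIMENSIONS}
    return None

coding = lookup(raw_response, "ytc_UgyE_cLrVquh37rzUR94AaABAg")
```

For the last entry in the response above, this would recover exactly the row values shown in the Coding Result table (responsibility: user, reasoning: virtue, policy: none, emotion: mixed).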