Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- @31webseries Not essential anymore. AI can create better stories better worlds,… (ytr_UgxNQ-1qZ…)
- Good points to debate! Have the difficult conversations! Exactly‼️ Input info (… (ytc_UgyngTH_B…)
- The idea of AI has always been an extremely controversial topic of discussion. … (ytr_UgxpFdaYi…)
- People have been saying that automation will displace jobs. And they are right. … (ytc_UgzNh1Q37…)
- Gee, you would think through listening to this conversation that AI will become … (ytc_UgzRz2lJ3…)
- Interesting video. This gives me background confidence when I, quite frequently,… (ytc_Ugzuabj2v…)
- People just overhyped this Ai destroying the humans beacuse of the movie termina… (ytc_UgzeTxXRk…)
- We need to hold the grande with the pin pulled out. Make AI’s survival, dependen… (ytc_Ugy_iZoFN…)
Comment
I believe he's exaggerating to generate headlines. Current large language models (LLMs) lack self-awareness and don't have any concept of being "shut down"—they simply process input based on training. Furthermore, how exactly are they addressing this supposed issue? You can't just "code out" a problem like this. As he himself admitted, neural networks function as black boxes—we don't fully understand their internal workings. You can't surgically remove a memory from an AI any more than you can from a human brain, because the learned representations are deeply interconnected. So what’s the solution—retrain the entire model repeatedly with slightly tweaked data? That sounds questionable, especially to someone like me who works in AI research and development. And honestly, his body language when discussing the so-called "accident" seemed very uneasy.
youtube · AI Moral Status · 2025-06-04T14:1… · ♥ 33
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
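Each coded record assigns one value per dimension from a closed set. A minimal sketch of checking a record against those sets follows; the allowed-value lists here are assumptions inferred from the values visible in this dashboard, not the project's official codebook.

```python
# Allowed values per coding dimension (assumed from the displayed data).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems found; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the Coding Result table above:
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "none", "emotion": "resignation"}
print(validate_coding(record))  # []
```

A check like this is useful as a guard between the raw LLM response and the dashboard, since an LLM coder can occasionally emit a value outside the codebook.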
Raw LLM Response
[
{"id":"ytc_UgzXcbQNkiz1GShRr3F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgywnyGSYCS3aGe8U6t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyeItcd7yBtyM1Y1yx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8oYeog8RPOGIaWcl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBRbXb5QJBt94-OzF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlCxYqFL4SXT_Hyvh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzqDEh6R7VwBY8yIvF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRnPC5llHjWfVdgSZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxyCXN6L85LkUqUA4h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz_jQzq2Hsc4FyqE3l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
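The "look up by comment ID" feature above amounts to parsing a raw response like this one and indexing its records by `id`. A minimal sketch, assuming the response is a JSON array of objects with the field names shown (the two sample records are copied from the array above):

```python
import json

# A raw LLM coding response: a JSON array of per-comment records.
raw_response = """[
  {"id": "ytc_UgzXcbQNkiz1GShRr3F4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz_jQzq2Hsc4FyqE3l4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response and build a comment-ID -> coding-record index."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

index = index_by_comment_id(raw_response)
coding = index["ytc_Ugz_jQzq2Hsc4FyqE3l4AaABAg"]
print(coding["emotion"])  # resignation
```

If the same batch response covers many comments, building the index once and reusing it keeps each lookup O(1) instead of rescanning the array per query.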