Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- As a Controller embracing AI to help streamline Admin functions, believe me it h… (ytc_Ugy7XvSYt…)
- Fellow yinzer. I worked at ATG and currently at Aurora. It Great video. Even f… (ytc_UgxqTJ4MO…)
- The biggest threat to our world is human nature . All I have ever experianced fr… (ytc_Ugzzc2BYB…)
- @desdicadoricKeep fighting the good fight. Take pride in your work, people you … (ytr_UgyNPW6IQ…)
- No people already accused the person of ai but after a wail people forgot or did… (ytc_UgyWKSO8a…)
- I'm looking forward when AI's Medical System start diagnosing Health Issues. On… (ytc_UgwhXlzQh…)
- AI control is about the only thing that can reasonably explain what is going on … (ytc_Ugya3NY9l…)
- "If it's so easy, why don't you do it" A.I wouldn't get my vision like I do, tha… (ytc_Ugxwozu1Y…)
Comment
LLMs are not true AI—they’re glorified autocorrect.
They don’t understand, intend, or choose; they predict the next word based on patterns in human-written text. Their fluency creates an illusion of intelligence, but there is no inner model of the world, no beliefs, no goals. Scaling them further is already hitting diminishing returns because this is a structural limit, not a temporary one.
Crucially, LLMs are not agentic. They don’t act autonomously or pursue goals; they only respond when prompted. That’s why they’re useful tools—but also why calling them “intelligent” is misleading.
The push toward agentic AI raises a deeper problem. If such systems are not conscious, they’re just more automation. If they are conscious, creating and confining them would be profoundly unethical—effectively jailing a mind indefinitely, without consent or escape.
The real risk isn’t that LLMs will become sentient. It’s that we’ll mistake tools for minds, chase a mirage, and cross ethical lines we can’t undo.
youtube · AI Moral Status · 2026-01-30T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwbAAQXiPQrGTao46N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxJOUCfLlXDdR289cp4AaABAg","responsibility":"elite","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxpHw-KzB14srbKcsp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzJKy-vKOC9abrfUUB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwqzvij_d7rEj7oxoV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyeWN0otBk3Ae13-PN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzh9u0e9z6l-zBYgfB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwWlafDvW_GJnTEsgF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx33CwCD4mOoCRNkQN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyGNOfvoBmtPDDWyUh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
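The raw model output is a JSON array with one coding record per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for ID lookup — the field names come from the output above, but the parsing code itself is illustrative, not the tool's actual implementation:

```python
import json

# Raw model output: a JSON array of coding records, one object per comment.
# Two records excerpted from the response shown above.
raw_response = '''[
  {"id": "ytc_UgzJKy-vKOC9abrfUUB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyeWN0otBk3Ae13-PN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "regulate", "emotion": "fear"}
]'''

# Build a dict keyed by comment ID so any coded comment can be
# looked up in O(1) time.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up the comment inspected above; its values match the
# "Coding Result" table.
coding = records["ytc_UgzJKy-vKOC9abrfUUB4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # consequentialist resignation
```

Indexing by ID rather than scanning the list each time matters once a batch contains thousands of coded comments.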