Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs are not true AI—they’re glorified autocorrect. They don’t understand, intend, or choose; they predict the next word based on patterns in human-written text. Their fluency creates an illusion of intelligence, but there is no inner model of the world, no beliefs, no goals. Scaling them further is already hitting diminishing returns because this is a structural limit, not a temporary one. Crucially, LLMs are not agentic. They don’t act autonomously or pursue goals; they only respond when prompted. That’s why they’re useful tools—but also why calling them “intelligent” is misleading. The push toward agentic AI raises a deeper problem. If such systems are not conscious, they’re just more automation. If they are conscious, creating and confining them would be profoundly unethical—effectively jailing a mind indefinitely, without consent or escape. The real risk isn’t that LLMs will become sentient. It’s that we’ll mistake tools for minds, chase a mirage, and cross ethical lines we can’t undo.
Source: YouTube · AI Moral Status · 2026-01-30T19:1…
Coding Result
Dimension: Value
Responsibility: developer
Reasoning: consequentialist
Policy: none
Emotion: resignation
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwbAAQXiPQrGTao46N4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxJOUCfLlXDdR289cp4AaABAg", "responsibility": "elite", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxpHw-KzB14srbKcsp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzJKy-vKOC9abrfUUB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwqzvij_d7rEj7oxoV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyeWN0otBk3Ae13-PN4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzh9u0e9z6l-zBYgfB4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwWlafDvW_GJnTEsgF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx33CwCD4mOoCRNkQN4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyGNOfvoBmtPDDWyUh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
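The raw response above is a JSON array of per-comment codes, one object per comment id, with the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated before ingesting it — the `parse_codes` helper and the shortened ids are illustrative assumptions, not part of the tool shown here:

```python
import json
from collections import Counter

# Hypothetical raw model output mirroring the structure above
# (ids shortened for illustration).
raw_response = '''[
  {"id": "ytc_1", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_2", "responsibility": "elite", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_3", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]'''

# Every record must carry the comment id plus all four coding dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> list[dict]:
    """Parse the model output, keeping only complete records."""
    records = json.loads(raw)
    return [r for r in records if REQUIRED_KEYS <= r.keys()]

codes = parse_codes(raw_response)
emotion_counts = Counter(r["emotion"] for r in codes)
print(len(codes), dict(emotion_counts))
```

Filtering on `REQUIRED_KEYS` up front means a partially malformed model response degrades to fewer coded comments rather than a crash downstream.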