Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The title of your video is rhetorically provocative, but it misses the bigger picture. AI itself isn’t “smart” because intelligence resides in the humans who use it. AI is a tool, computational, unnatural, prone to mistakes, and its output depends entirely on the input and guidance we give it. Teaching AI requires prior understanding on our part, it amplifies what we provide, including our biases and fears. When people criticize AI, they often overlook that any “problem” it causes is ultimately human made. It reflects our knowledge, ignorance, and emotional patterns. Our fight-or-flight responses, our comfort zones, and our collective stubbornness shape how AI interacts with the world.

Fear exists only as itself. It’s a tiny, reactive signal within a much larger system. We are nodes in a vast, mediated network, the universe itself is like a constantly regenerating PC, and technology is just one of the tools we use to navigate it. True understanding requires dropping convenient beliefs, facing chaos head on, and seeing reality as it actually is, excruciatingly complex, chaotic, and alive with intelligence, not in the AI, but in us.

If a language model repeats misinformation, the root cause is often the input data or the patterns humans trained it on, not the AI’s “choice.” It reflects what it has learned, which can include systemic human errors. AI is not intrinsically intelligent because it does not possess consciousness, morality, or intuition. What AI produces is always a reflection of human input, knowledge, biases, assumptions, and intent. Misunderstandings, errors, or delusions attributed to AI are originating from human oversight or lack of comprehension, not from the system itself. Misuse occurs when humans treat AI as more than it is, by assuming intent, moral understanding, or agency... Treating AI as an independent moral agent can create false narratives of blame, fear, or over reliance.

Responsibility must always remain with the user, the human who decides how to use, interpret, and integrate AI. In doing so, we teach others. The system we made is a wreck because of us. So AI isn't smart, but why? Us. It isn't the AI that "isn't smart", because the ones who use it aren't being smart, that is exactly why. Collectively almost everybody chooses comfort over understanding chaos, pain and fear, they just run from it, hide from it, defend themselves from it rather than understanding it has no effect on them if they don't want it to, but they must understand what chaos represents within themselves to understand these things.
Source: YouTube, AI Moral Status, 2026-02-16T13:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwDNvBt1RU1jLzrODd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw1of0XxWW4F7u2CCF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwYFUHR-qCbaVRtRsJ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyZLx4xtalhf6Frrad4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgylJK7D6NYyjm6_Zj54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyNG5bdZbi3Q_NFcrJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy0wEe9faydo4-wh6R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwNzkks9jFu5Hka_1x4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxP6zaYhHh9fPK_hVd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyEk3S4weaUjWW1zMB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
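To inspect a raw response programmatically rather than by eye, the JSON array can be parsed and each record checked against the coding scheme. A minimal sketch in Python; the allowed value sets below are inferred only from the codes visible in this output, not from the full codebook, so treat them as placeholders:

```python
import json

# Allowed codes per dimension, inferred from the values observed in this
# response; the actual codebook may define additional codes.
ALLOWED = {
    "responsibility": {"government", "user", "ai_itself", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "industry_self", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "approval", "fear", "resignation", "outrage", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and raise if any record uses an out-of-schema code."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Example with one record (hypothetical id):
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"}]'
rows = validate(raw)
print(len(rows))  # 1
```

This makes schema drift visible immediately: if the model invents a code such as "blame" for responsibility, validation fails on that record's id instead of the bad code silently entering the dataset.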