Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we can’t fully understand how AI works internally, that’s not new — it’s literally how humanity’s dealt with everything it can’t directly observe. You build tools to translate it, you approximate, or you just live with the uncertainty. That’s how science has always worked. And as long as we can train an AI, we can untrain it. LLMs are gluttons for energy and computation — they won’t fit on a laptop anytime soon. To make AGI that can actually evade capture, you’d need a scientific leap on the scale of inventing a nuclear reactor, not a slightly better GPU. With what we know today, AGI isn’t happening this decade. AI isn’t creative, it’s reactive — it can remix, summarize, and reason, but it can’t originate thought. It can’t make something without being told to. And funnily enough, no one’s talking about training AI to do what we want without prompting it — only about stopping it from doing what we don’t want when prompted. There’s an entire world between those two problems. Until we solve that, I’ll keep worrying about real things — like climate change, war, and my rent — not robots plotting in my sleep. 🤷🏾‍♂️
youtube · AI Moral Status · 2025-11-05T12:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
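
The four dimensions form a small categorical schema. Below is a minimal sketch in Python, with value sets restricted to what actually appears in this batch; the full codebook may define more categories, and all type names here are hypothetical.

    from dataclasses import dataclass
    from typing import Literal

    # Only the values observed in this batch's raw response; the full
    # codebook may define additional categories.
    Responsibility = Literal["none", "ai_itself", "company", "distributed"]
    Reasoning = Literal["unclear", "consequentialist", "deontological"]
    Policy = Literal["none", "regulate"]
    Emotion = Literal["fear", "indifference", "approval"]

    @dataclass
    class CodingResult:
        id: str                          # YouTube comment id ("ytc_...")
        responsibility: Responsibility
        reasoning: Reasoning
        policy: Policy
        emotion: Emotion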
Raw LLM Response
[ {"id":"ytc_UgyNQWlffPiwXII38Ut4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgwIWGMqA46eD0_khKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwbEDqgUurgYiRH-xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwhHmXr4G28Xx7zA0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy5XBIuUdSqwlGaa-14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx-W_mGG5862d82-OF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwjTR0ClrcGZ_Oebwp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzMNuramyz21pKhxAJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyhVfwEzTPiw9VXD1B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyVxezbFIcXOeMvwBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]