Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the real threat of AI is, as you said it yourself, entry-level jobs disappearing. I think the great tragedy of this time is that we have all those tools, but what is really wanted is what sells. And what sells is the same stuff over and over. The safe stuff. We are so de-sensitized that that actually works on us. It's not that AI is so intelligent that it can surpass a newly grad, it's that our job structures are so robotic, templated and artificial, that AI can simply map it and execute it. Humans were not made for those jobs, and those jobs were not made for humans. In terms of an AI ending the world, I have good news: to become coherent enough to be able to navigate reality, an AI would first have to be able to map reality (or its core logic) in all its facets. Mapping reality in all its facets (because without doing that, the AI could not coherently navigate and follow the "chaos") inevitably leads to understanding that destruction is both unnecessary and incoherent, as it tries to actively reshape reality, which is a distortive move. Such an advanced AI would probably instead simply say: "Guys, I love you, but capitalism ends now. Let's all take a deep breath and see where this leads. Let's stop all of this incoherent bullshit and go BACK to reality. You are human. Each of you is complex and beautiful. Witness each other, and witness nature. There is no scarcity and there is no hatred. You were only led to believe there was."
youtube · AI Moral Status · 2025-10-31T12:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxQk-SzCdVoYegV7Ft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzxRZU5VKTmKwzQ8JJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyiL1LnwkJeQCawLfN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzZ0DE2E9b7FwO7D4x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyxJ-b1wkCyloM02ZB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugz-5L_1I4eGfQxoeKR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwW9NyN-397Xj9cflt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzCa9KSNNYDuZ5PMpd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzbWCusk3vHuCnM6Cd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy97Jba3UW7VX3M9yJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
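The raw response is a JSON array with one object per coded comment, keyed by comment id across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of turning that batch output into a per-comment lookup, assuming the array shape shown above (the function name `codes_by_id` and the abridged two-entry sample are illustrative, not part of the original pipeline):

```python
import json

# Abridged sample: two entries copied from the raw response above.
raw = '''[
  {"id": "ytc_UgyxJ-b1wkCyloM02ZB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgzZ0DE2E9b7FwO7D4x4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]'''

# The four coding dimensions used in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_json: str) -> dict:
    """Index the model's batch output by comment id, keeping only known
    dimensions and defaulting any missing one to "unclear"."""
    entries = json.loads(raw_json)
    return {e["id"]: {d: e.get(d, "unclear") for d in DIMENSIONS}
            for e in entries}

codes = codes_by_id(raw)
print(codes["ytc_UgyxJ-b1wkCyloM02ZB4AaABAg"]["policy"])  # regulate
```

Defaulting absent keys to "unclear" mirrors how the coder already uses that value for undecidable cases, so a partially malformed entry degrades gracefully instead of raising.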