Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing I've noticed that doesn't seem to get sufficient attention is that we're already in an "unaligned" system. And when you're inside the system, you don't debate the system's logic; you debate using the system's logic. Let's apply this to the case of Hasanabi:
- A man has a dog. The dog is a living animal with its own needs, behaviors, and existence.
- The man has the "Hasanabi Brand," the "Stream," the "Content."
- The brand, the simulation, optimizes for continuous viewer engagement, parasocial connection, and brand consistency.
- The dog being a dog (i.e., moving) interferes with the metric.
- The alleged shock collar is a tool to force the ground truth to conform to the abstraction's metrics. It optimizes the dog for the stream, making it a compliant piece of content rather than an independent being.
- The audience, trapped in the simulation frame, debates the morality of the optimization tool: "Hasanabi is bad, cancel him!" vs. "Hasanabi is good, stop attacking him!" Thus they are arguing about how the simulation should be run. They are not asking the questions that break the in-simulation frame:
- Why is there systemic pressure to subsume a pet into a parasocial brand?
- What does it say about the system that a living being's "inconvenient" reality must be managed and optimized for profit?
Our simulation is much more efficient than the Matrix; we don't even need Agents. The "inert" are so dependent on the system that they fight each other over its operating parameters. But they aren't just fighting to protect the system; their fight is the system. Can an AI that is created inside this frame really solve our problems, if the problems stem from the existing topology? Do we even need "superintelligence" to derange us, if our most lucrative opportunities lie in explicitly fabricating a gap between the real and the simulacra, and in setting up a moat to arbitrage that gap for profit?
youtube AI Moral Status 2025-10-31T07:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxdXf7QoFmDGGOyNfN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxSjIu2Vl2S4XsDv854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxxZukTmMl-JceLYTx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz9XpETftOZ7TaCXXt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwaW0zpxwYp_RN1up54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyNHO1SiatOYKKW7IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyTolRgYrK8D5WL3bN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwYKo1CIjC9FJ_d8jR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyhnt8LvpTm4dkAqqR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzpvr7yPMYvQ1Pjdyd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"}
]
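A raw response like the one above can be turned into a per-comment lookup with a few lines of Python. This is a minimal sketch, not the original pipeline's code: the function name `index_codes` and the fallback value "unclear" for missing dimensions are assumptions for illustration; only one record from the array above is reproduced inline.

```python
import json

# One record copied from the raw LLM response above, as a sample input.
raw = '''[
  {"id": "ytc_UgwYKo1CIjC9FJ_d8jR4AaABAg",
   "responsibility": "distributed",
   "reasoning": "contractualist",
   "policy": "regulate",
   "emotion": "mixed"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse a raw LLM response and map comment id -> {dimension: value}."""
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw)
print(codes["ytc_UgwYKo1CIjC9FJ_d8jR4AaABAg"]["policy"])  # regulate
```

Indexing by id makes it straightforward to join each coded record back to the original comment text for inspection, as this view does.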