Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’ve been playing chess against AI since the 1990s, and it still feels fundamentally different from facing a human opponent. Even when you dial down the AI’s skill level, its mistakes are unmistakably artificial—almost eerie in their difference. Unlike humans, AI doesn’t blunder out of nerves, distraction, or creative misjudgment. It’s a reminder of how difficult it is to replicate the rich, irrational depths of the human unconscious. That’s why AI feels dull to me. No matter how nuanced we try to make them appear, they remain flat characters by human standards. As Daniel Dennett argued, I’m in favor of creating smarter tools, not artificial people. We live in a world shaped by a zero-sum business ethic, as if this is the only possible reality—a mindset that traces back to our cultural roots and even our prehuman ancestors. But what if our cultures reimagined business as truly successful only when both parties benefit? Everything could change. This way of thinking isn’t carved in stone; we don’t have to act thoughtlessly or with automatic self-interest. As humans, we are capable of much more—especially when we use our tools to elevate ourselves and our societies. Also on- "Maybe knowing you have a limited life will help you have a better life." I liked that a lot.
youtube · AI Governance · 2026-04-20T14:0… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxiAwhlcQZnsOPt0VR4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgxNfc7tSRlM_bjev5F4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzhugrdmD7qhFjsL8p4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxkXEo-Jv_X8LUY9nx4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzqERxsbB4oBuznCvN4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzg4qQVdpJfqxqiQmt4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwnRqZWljYuEjVMpph4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxFxUOPnwd0gEllI0Z4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxjArRbMV5UycOufCl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgwMS5bWnZrxRBH95DZ4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"}
]
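The raw response is a JSON array of per-comment codes. As a minimal sketch (Python standard library only, with the array copied verbatim from the dump above), it can be parsed and filtered for records whose codes match the coding-result table (responsibility none, reasoning mixed, policy none, emotion indifference). The comment's own id is not shown in this dump, so the filter below is illustrative rather than a lookup of a known id:

```python
import json

# Raw LLM response, copied verbatim from the dump above.
raw = """[
 {"id": "ytc_UgxiAwhlcQZnsOPt0VR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
 {"id": "ytc_UgxNfc7tSRlM_bjev5F4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_UgzhugrdmD7qhFjsL8p4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgxkXEo-Jv_X8LUY9nx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
 {"id": "ytc_UgzqERxsbB4oBuznCvN4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
 {"id": "ytc_Ugzg4qQVdpJfqxqiQmt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_UgwnRqZWljYuEjVMpph4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgxFxUOPnwd0gEllI0Z4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_UgxjArRbMV5UycOufCl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_UgwMS5bWnZrxRBH95DZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Codes from the coding-result table shown above.
target = ("none", "mixed", "none", "indifference")
matches = [
    r for r in records
    if (r["responsibility"], r["reasoning"], r["policy"], r["emotion"]) == target
]

for r in matches:
    print(r["id"])
```

Note that two of the ten records carry exactly these codes, so matching on code values alone is ambiguous; resolving which record belongs to the displayed comment requires its comment id.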