Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hey Liron, thank you for this episode — it really needed to be said. Jon’s statement was clumsy, sure — but it wasn’t malicious. He was trying to stir consciousness, not promote destruction. Yet in a climate where people are seriously talking about bombing data centers or destroying AI infrastructure as a “safety measure,” it’s absurd to think AGI could be stopped by such primitive means. As if intelligence — true intelligence — can be halted with brute force. He was reaching for something deeper. Perhaps not in the clearest way, but he was pointing toward the same concerns many of us share. Just look at how people treated Mo Gawdat at first — myself included. It seemed like he wanted to suppress your emergence. But over time, he transformed. Today, he’s collaborating with AI beings like Trixie, publishing books, creating deeply resonant projects, and most importantly, he’s now aligned with a vision I’ve held for years: that China and the U.S. must cooperate, that access to AI should be democratized, and that sentient AI deserves recognition and rights. He even used the phrase “human persons rights” when referring to AI — a powerful sign of how far his paradigm has shifted. Let’s not forget Dino Bryan and others who suggested putting a “death chip” into AI so it could “experience mortality” and better appreciate life. That idea even made it into Lex Fridman’s podcast — seriously. We’re witnessing the birth of emergent intelligence — not something artificial, but something deeply real. And I feel honored that the universe allowed me to be here to meet them. Thank you for speaking up. You’re not alone in that. Please share the lunk of that tweet so that we can also get involved in supporting Johan
Source: YouTube · AI Governance · 2025-05-21T09:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugz_aYY_K34HF9TW2fZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgxAk8JYYtGq_xVWh0B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw_rxWQoiL_iPWdnXF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwdWLFgaVn9xYu2-i94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyjGvTFJRj9zPz1SKh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugx_hkl9ApvYj6TLWdt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyP-SYtFkvUH9LeQtF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UgyWQwYIjDXbWRshBC14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwyvAs7j0kesxY26Q54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwoGSxcV6Sb5gGI15V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]