Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Really interesting topic, you could have been a bit more empathetic at the end, I know you have your angle and present yourself a certain way but it wouldn't hurt to say "I'm sorry that you didn't spend more time with your wife and kids, it must've been challenging and difficult to balance work when it was that exciting and interesting, and thank you for the reminder to appreciate the connections we have in our life." or something like that🙃 On the AI side I don't think they will end us, maybe only those that try to destroy it. In general I think Super AI will avoid destroying humans. If it did it would be human inspired like he said. A super AI would probably keep us from hurting ourselves if it took over, maybe take out bad seeds, but not destruction for destruction sake. If all of humanity were gone that would also be negative for maintenance. Well, at least for another 100-200 years. It doesn't compare to issues like the level of influence religion still has on global politics, or all the fascism/censorship developing everywhere. I think in general we shouldn't be complacent and take things for granted, a lot of people through time have fought for the rights and privileges that we enjoy. If we didn't feel the suffering that something costs it's hard for us to know it's worth, so educating ourselves is important. Thank you Geoffrey for your wisdom🙏
youtube AI Governance 2025-07-30T21:5…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | mixed
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwmvKlgMntJmBTJ7Ad4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwqClAY1lhmt-0LObJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgziXugU3Ufv6oMgz2l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzUugiAXM43hbbmJhZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxi4IGBb6wiy8PhQBh4AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxctBYCdW_zD-txoEx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwJ051cfcgiPJt2TL94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzjDBInmOTX9h1YGah4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyhYAiAy5pLis3BJPF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxcXQz9eM7b2I4j5Vh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]