Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I haven't watched the video and as much as I appreciate that people like Stuart Russell are speaking out about the dangers of AI it isn't going to do any good. Why? Because humanity wants this, no matter what anyone says, collectively and on the whole humanity wants to face its demise. Regardless of how very real the dangers of AI are we won't stop. WERE THAT NOT THE CASE, and if everyone truly cared more about their children's futures than their own selfish greedy pursuits, then there would be no need for these conversations, the plug would have already been pulled on AGI. Look, just as one cannot be ABSOLUTELY sure that any one of us is TRULY safe for another, that a child we bring into this world won't end up being a monster, we will never be able to ensure that AI is safe, full stop. We've long ago crossed the point of no return and essentially now all we can do is wait for the singularity and try to be the best versions of ourselves that we can be so that we can face the coming years as best we can. Ultimately humanities future has always been its demise, if that scares you, if that fear paralyzes you to the point of avoiding the truth and pretending this isn't really happening, I suggest you start doing the personal internal work needed to live authentically, to be authentic with yourselves and those in your circle.
youtube · AI Governance · 2025-12-05T17:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
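The four coded dimensions map naturally onto a small typed record. The following is a minimal sketch, assuming Python; the CodingResult name is ours, and the Literal label sets cover only the values visible in this batch, not necessarily the tool's full codebook.

from dataclasses import dataclass
from typing import Literal

# Label sets observed in this batch (assumption: the full codebook may be larger).
Responsibility = Literal["none", "company", "developer", "ai_itself", "distributed"]
Reasoning = Literal["unclear", "consequentialist", "deontological", "virtue", "mixed"]
Policy = Literal["unclear", "none", "liability", "regulate"]
Emotion = Literal["approval", "indifference", "fear", "outrage", "resignation", "mixed"]

@dataclass
class CodingResult:
    id: str                         # YouTube comment id, e.g. "ytc_..."
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion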
Raw LLM Response
[ {"id":"ytc_UgyYLqGdCiCDXwFe9XF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugz7fLFhx-ZTY30iDhN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxEM-v269J50Zg8nVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwUs9VJolAwZ9JtCyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwhwYa1mJw-YQBqaUd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwZk7orM3w14Q2X7gh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxgEyJfGAm80Q7GWsl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzuVBF8JpP7Ae8bqKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxSxefe9LeyYNEkVhZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyDILA_4Ia8orIT_Tt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]