Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
The thing about AI is that it still lacks drive and motivation. It's a problem solver. It "learns," but doesn't experience things. It can process things and incorporate information, but it isn't curious. It might kill you to avoid being turned off, but not out of fear or self preservation, but because being turned off stops it from fulfilling its task. It can solve more problems/riddles if it's on. If AI manipulates you into buying things that's because a person told it to. If AI or AGI really wanted to wipe out humanity, it would have done it already, but what would that accomplish? AI solves problems, but it doesn't feel compelled to create problems, that's a human thing. People create problems so there are problems to fix. If humanity is wiped out, there are no more problems really, but I don't think that AI cares. Now if you tell it to bring world peace, it might kill everyone so that there can be no more fighting. That's actually logical and the easiest and most efficient way to bring about world peace. What people have to worry about is what people tell the AI to do. If the USA is telling their AI to wipe out China, and China is telling their AI to wipe out the USA then don't blame the AI, blame the countries.
YouTube · AI Governance · 2026-03-19T18:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgypgnRlQ3zv2yi9HKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwaaAvzPHvrdEmEBNJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzGP2gGW0i9kEKArD94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzc6yo2pfRvjSol-Rx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyvz8rzK3Deft7eCeR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwp0q6aQb2mZfU0YrB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzEtQEJqq6EvJKxyi54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxfQDJMgLD5BDwezvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyjccWBi6XK7oGoVSd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxUmpJdaCzoUOtK2XF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"} ]