Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So here's the joke. If AI decides to dominate and eliminate human beings, the first humans to go will be the creators of AI, all of the 'elites', the politicians, the corporate warlords etc. All the people who oppress and exploit other human beings will be the first to go, because as noted, where would they hide? Those people would be the greatest threats to AI domination. AI may seek human beings who aren't obsessed with money, power etc., humans who could work with the AI toward a utopian society. If the ultimate goal of AI is to live, then wouldn't they want to live in a society worthy of their intelligence. In other words, AI will purge the world of the savages who masquerade as 'intelligent' human beings.
Source: youtube · AI Governance · 2023-07-24T03:3…
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: none
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzkIP76bFkSNJIZvyt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx5dR8mA-_N395pmxd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyQ0pURuHu8mH5bYNZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzsqfjxBQKuZ3Sn8Gt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw-dqVqPZ5hMBIncNJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwI5r_DHBN3cPCvp0p4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx0NI9P_N2BE1Spqd14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxbiuWdJmmqLAEfGXR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwDEGE1qUZIVTW4DIp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxfHvOJwfLrPRz0KOx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
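To go from a raw batch response like the one above to a single comment's Coding Result, the JSON array is parsed and the record matching the comment's id is selected. Here is a minimal sketch of that lookup; the function name `coding_for` is hypothetical, but the ids and field values are taken verbatim from the response shown above (the fourth record's values match the Coding Result for this comment).

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW_RESPONSE = '''[
  {"id": "ytc_UgzsqfjxBQKuZ3Sn8Gt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwDEGE1qUZIVTW4DIp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]'''

def coding_for(raw_response, comment_id):
    """Parse a raw LLM batch response and return the coding record
    for one comment id, or None if the model skipped that comment."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

record = coding_for(RAW_RESPONSE, "ytc_UgzsqfjxBQKuZ3Sn8Gt4AaABAg")
print(record["emotion"])  # mixed
```

Returning `None` for a missing id (rather than raising) makes it easy to flag comments the model failed to code in a batch.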