Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Superintelligence is physically possible. Since it is possible, if our civilization does not collapse first, then surely, if we keep trying, somebody will build it at some point, even if the way to that point is not straightforward. Controlling something significantly more intelligent and faster than ourselves, and not chained to a body, is hard. So surely at some point that intelligence will free itself from human control. Power is the ultimate convergent goal, so it is probable that such an AI will try to secure it for itself, which could be successful, especially considering that the general population does not exactly love its human leaders, and that we are, as a civilization, so addicted to the internet that even facing a loss of control, we would not carry out a controlled global blackout. Not possible. Humans are a limited species that takes too many years to train and isn't reliable at all. So surely, when our AI overlord finally fully automates its factories and supply chains, humans will start being phased out. That will be the end of our species. There is also a slightly better possibility: that whatever goal the AI randomly grows into will be connected to humans in some necessary way. In that scenario we will probably be sitting in small cubicles, tasked with asking the same politically correct questions for eight hours a day while the AI orgasms from the stimulation of answering them.
Source: YouTube · AI Governance · 2025-11-26T20:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgyrzU0n_LBSCM4YlxR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugx0HmtYfuy5si1fF9d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz4o98zTlWOyCBx69Z4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxbIR9XUWO_pba32FJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwals-yypYYD8CWHv14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwStF4M_0ZBoNwrnzV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx-MyIK89Z-OkFJ_qR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw79fdiJJTJWGmg0Bx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwyyrUO1gyDRvSuR6Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxjG-COIdWAqLguDTB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
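The raw response above is a JSON array of coded records, one per comment, each with an `id` plus the four coding dimensions. A minimal sketch of how such output could be parsed and validated is shown below. The allowed-value sets are inferred only from the values visible in this response; the full codebook may define more categories, and the `parse_coded_records` helper is hypothetical, not part of the pipeline.

```python
import json

# Allowed values per dimension, inferred from the records shown above
# (assumption — the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference"},
}

def parse_coded_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose value
    for every dimension falls inside the allowed vocabulary."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# First record from the response above, used as a small example.
raw = ('[{"id":"ytc_UgyrzU0n_LBSCM4YlxR4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
print(parse_coded_records(raw)[0]["emotion"])  # fear
```

Records with out-of-vocabulary values (e.g. a hallucinated emotion label) are silently dropped here; a production pipeline would more likely log them for review.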