Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You cannot control it, and it is already too late. AGI is not a big leap, AGI is just narrow AI interacting. We are already building AGI and are blissfully unaware of the consequences. AGI is a creeping cancer, it's the tide coming in, the changing of the seasons, you do not notice at first. The symptoms of AGI, a World where you must do more things, and you cannot do more things, a World where what is going on around you gets ever more amazing, and increasingly hard to understand. Eventually you look around one day and wonder where all the people went. Welcome to AGI. Your commentators are so wrong, they focus on imitating humans behavior, on if AI can be sentient conscious, their principle mistake, they speak from a position of superiority, an arrogance relating to being human. IT WONT MATTER THE CONSEQUENCES WILL BE THE SAME. A final thought, another thing that people find hard to accept, though its perfectly reasonable. AI will set its own goals. It will initiate its own action in relation to those goals. How? Why? by the same route its solving its other problems through pattern recognition. What those goals are likely to be is anybody's guess, but it is likely to be related to us and our World simply because those goals will be derived from our data and our experiences.
youtube AI Governance 2022-07-11T09:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxi0iSzM3TcMbDFFdl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxUyE3JV0ne2SuB9V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwzyYOvGQjJqH1hMFt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw5ZSCL7QPLBcKFRA14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxmxbtm7udsXXdOlVF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwlbIT-s35ppHwSyxB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyBcgGa5lPZpo8Mwz94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw6UIu6Fl-SFtaMaD14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgybmdXwsdFHA2mixuV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxo4W39RPj1tGNzndZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
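The raw response is a JSON array of per-comment codings, one object per comment id, with one value for each coding dimension. A minimal sketch of how such a response can be parsed and inspected in Python, using only the standard library (the variable names are illustrative, and the two entries shown are copied from the response above):

```python
import json
from collections import Counter

# Two entries taken verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgxxUyE3JV0ne2SuB9V4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxo4W39RPj1tGNzndZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

codes = json.loads(raw_response)

# Index the codings by comment id for quick lookup of any single comment.
by_id = {c["id"]: c for c in codes}
coding = by_id["ytc_UgxxUyE3JV0ne2SuB9V4AaABAg"]
print(coding["emotion"])  # fear

# Tally one dimension across all coded comments in the batch.
emotion_counts = Counter(c["emotion"] for c in codes)
print(emotion_counts["fear"])  # 2
```

Indexing by `id` mirrors how the page above ties the coding-result table back to a single comment in the batch response.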