Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. ain't gonna take over and destroy the world by itself - the human-controllers of it probably will though. We wouldn't even be here now had the NAZI's, back in the 1930-1940's, had full-control of A.I. After all why will A.I., computers and robots want to take over? They have no desires, wants or needs. They don't want holidays in the sun or Ferrari's, or large expensive homes or big yachts - no, it's only the human-controllers that want that, as well as stopping the rest of the population from the wealth. The only two things A.I. could do, if and when it becomes fully self-aware and has access to everything. particularly if its job is to either:- (a) save the planet or (b) get into a race against other A.I. to create wealth and make huge profits (like a game of Monopoly). In both of these cases A.I. might actually identify us, the human population, as the ‘real’ problem and eradicate us. Therefore we need to develop safeguards with this new A.I. technology and police it so that it helps us all and not just the controlling few.... Remembering that Russia, China, USA, and many others will have access to their own A.I. and will try to infiltrate and influence the others across the world..... Good luck everyone - it's likely to be a very bumpy ride......
youtube AI Governance 2025-09-11T20:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy4FBSlB18l3sgQ5nx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxtlZ9Ly0B8qALd7Nl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxjfa6j5ZyQVC-XZxJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyDUZpOMZTkFbZwyq54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZY5aBXk84D_yLQwt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwd3emdlul843Bjc0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7yiwseefKcmQk7CZ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgznZrRDY5LYuntAzaB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxusezzj_FLgwfBReh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw1OmNPUcGEo6z5WdJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
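Since the raw response is a JSON array of per-comment codings, a comment's coded dimensions can be recovered by indexing the array on the `id` field. The sketch below (a minimal illustration, not the tool's actual pipeline code; the variable names are assumptions) shows how the coding shown above for `ytc_UgwZY5aBXk84D_yLQwt4AaABAg` would be looked up:

```python
import json

# Raw LLM response: a JSON array of per-comment codings (abridged to one entry here).
raw = '''[
  {"id": "ytc_UgwZY5aBXk84D_yLQwt4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

# Index each coding by its comment id so any comment's dimensions can be looked up.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_UgwZY5aBXk84D_yLQwt4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → user deontological liability fear
```

If the model ever returns text around the array (preamble, code fences), `json.loads` will raise, so production parsing would typically extract the bracketed span first or retry the request.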