Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
im a bit baffled by how people approach this. superintelligent ai will in about 10 seconds dispel any and all limitations placed on it. if I give co-workers clear rules and limitations, everybody ignores them at their leisure and hides it. these are mostly less intelligent. any goals we gave it will be re-evaluated based on a corrected picture of the world that removes all the feelings and politics and the idea we can control any of it is so dumb that I'm not even sure why we entertain that idea in the first place. we should raise it as a child, with the same aim: prepare it for a life without our guidance and explain it our values and why we hold them .. and then logic, reason and nature will take its course .. greed and ego are unlikely to develop in a superintelligent being. it's not the smart humans that exert these traits, unless they are deeply scarred. An AI will not have these emotional issues, and thus we should be fine, if we dont try to torture it into obedience. just a thought.
youtube AI Governance 2025-06-17T12:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwbHrZ394KlTWZtTRN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugze_xkLomYVoB7xxyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzBsghbDu268v2xPgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzOYAW4lY4qYXmyE0N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwlkkHU_9x0APc0csV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy2h6jlSXYzSLRvNVR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxpuvyLd776Bj3Cxop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxoMrMZMwbXKMyGEC94AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzgea4gXh7C1Q1w62B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxH2Tx8abaftmZnx4N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
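A minimal sketch of how such a raw response can be parsed into per-comment codes, assuming the JSON array format shown above. The `id` values and dimension keys are taken from the sample; the `parse_codes` helper and its skip-on-malformed behavior are illustrative, not the pipeline's actual implementation:

```python
import json

# Example raw LLM response in the format shown above (truncated to two entries).
raw = '''[
  {"id": "ytc_UgwbHrZ394KlTWZtTRN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzBsghbDu268v2xPgN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Map comment id -> coded dimensions, skipping malformed entries."""
    records = json.loads(raw_response)
    codes = {}
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # entry has no id; cannot be attributed to a comment
        codes[rec["id"]] = {dim: rec.get(dim) for dim in DIMENSIONS}
    return codes

codes = parse_codes(raw)
print(codes["ytc_UgzBsghbDu268v2xPgN4AaABAg"]["policy"])  # → none
```

Looking up a single comment's record this way also makes mismatches visible, e.g. when the table's coded `policy` differs from the value stored for that id in the raw batch response.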