Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Not wishing to be unkind to Geoffrey but he is far too intelligent not to have considered and realised the likely consequence of his younger driven work in creating A.I. back then, and it's obvious Super Intelligence that would result ! To now be advocating for fallible humans and Governments to be sensible and create effective controls for the monster HE has created is beyond naive. He not only opened Padora's Box, he created the Box. Why did he not develop failsafe controls in parallel alongside the A.I. in its infancy ? His advocating for others to create controls now in the latter part of his life won't give the redemption he seeks because I think he knows the Human race is now doomed as a result of his work and probably sooner than later. No matter how much you try to train, trust or control an apex predator, it will always default to being an apex predator given the opportunity. Oh and the irony of this indepth interview being interrupted by a self promoting advert for advanced tech' A.I. type services is beyond tragic !!
YouTube · AI Governance · 2025-07-12T18:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyRo_5YgKx35R_jH_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzII9VmMAIgrRb3Pvh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxS2vPZvRwu_rTGXcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzn5zzxXUeO--BHc194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz8IEhcP0xs00-3-WV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyiWYfhnrjSVCi6Pm94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzcHLoOBg_YE0TrYBF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzVGCS8fjgc3ZfD05x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz5r_XEf48CtZ0SwCV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzMTmZc3RghCo887HF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
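A raw batch response like the one above is only usable after it has been parsed and checked against the codebook. The sketch below is a minimal, hypothetical validation step, assuming the response is a JSON array of per-comment records; the allowed label sets are inferred only from the values observed in this batch, and the real codebook may define more. The function name `validate` is illustrative, not part of any tool shown here.

```python
import json

# Raw batch response as emitted by the model (truncated to two of the
# ten records above, purely for illustration).
raw = """[
 {"id":"ytc_UgyRo_5YgKx35R_jH_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzMTmZc3RghCo887HF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

# Allowed labels per dimension -- inferred from this batch only,
# NOT the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "indifference"},
}

def validate(records):
    """Split records into (valid, errors).

    valid  -- records whose label on every dimension is allowed
    errors -- (comment id, dimension, offending value) tuples
    """
    valid, errors = [], []
    for rec in records:
        bad = [(rec.get("id"), dim, rec.get(dim))
               for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad:
            errors.extend(bad)
        else:
            valid.append(rec)
    return valid, errors

records = json.loads(raw)
valid, errors = validate(records)
```

Rejecting a whole record on any out-of-vocabulary label (rather than silently dropping the bad field) makes model drift visible: the `errors` list points at the exact comment id and dimension that need re-coding.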