Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am not him and in 2017 I came to the same realization myself. I remember thinking about those guardrails, and the safety manifesto that the big 2-4 think tanks were sharing; as someone who worked AI-adjacent, I saw no regulatory pathways that would work, as I was keeping tabs on it. It's bonkers and diabolical that we are so completely and willingly blinded to what is happening. It's the ultimate example of how human behavior is deeply fallible. The most important thing to do is to begin to shift your mindset about what life will be like in even 3 years from now. I want and hope for people to start to shore themselves up in how to be in this reality. Interestingly, and he hasn't broached this, is that our very unstable markets and regions are one of the only elements safeguarding us from this next step -- *only* because right now the energy required still needs to be built. This is why you're hearing about data centers. AI will get better here with nano and alternate technology, but it's one of the only barriers right now. The more unstable the more likely energy expansion for AI is less efficient. Only for that reason are the problems and wars of today holding AI back.
youtube AI Governance 2026-02-07T20:1…
Coding Result
Dimension      | Value
-------------- | ---------------------------
Responsibility | distributed
Reasoning      | consequentialist
Policy         | regulate
Emotion        | outrage
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzI1ykoDYDiNF7W57J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGZBzZBKakMrIu1Cp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzFH0O8k4ilpDH3ltp4AaABAg","responsibility":"expert","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxYo1BcT_aEpRXW2-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxXb7iZsWqXjh5Wa054AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwYlkRgSCBvJjhi7WB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzN94R-3AujKahQCJt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwzmNOCGN60o_N9OZ54AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwezLuYEzEKkc3rGnN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy6ELLHakTV3swHM3t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
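To inspect a coded comment programmatically, the raw response above can be parsed as a JSON array of records keyed by comment id. Below is a minimal Python sketch of that lookup; the field names ("responsibility", "reasoning", "policy", "emotion") come from the response shown, while the abbreviated `raw` string and everything else here is illustrative, not part of the coding tool itself.

```python
import json

# Abbreviated stand-in for the raw LLM response above (first two records only).
raw = """[
  {"id":"ytc_UgzI1ykoDYDiNF7W57J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGZBzZBKakMrIu1Cp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"}
]"""

# Index the records by comment id so any coded comment can be looked up.
records = {r["id"]: r for r in json.loads(raw)}

# Pull the coding for the comment shown in this section.
coding = records["ytc_UgzI1ykoDYDiNF7W57J4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed outrage
```

This is the same mapping the "Coding Result" table renders: one record per comment, one value per dimension.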