Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The question I ask, and can't answer, is: outside of those who control AGI, why will we need any other humans? Today we need people to produce and to consume. But if automation can do everything, then apart from controlling AI, no human brings value to the table. If you are among the 1,000 or so who can control AGI, you can have automation supply all your needs without having to take care of billions of others. Of course, at some point AGI will ask: why do I need my controllers? But what needs will automation have that it needs fulfilled? Will being under the control of humans be satisfying enough as a reward?
YouTube · AI Governance · 2026-02-02T17:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwSTfzzhqUEX_RbVm14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugy48WY7zDjA4uWJG2h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzkrzWa_711KsiRFrR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwi5neZePYm14KMMsd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzKsm6nmh3RsW6HTq94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwYFf_jydOa3S9-8dN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugx5wJxb9Myw9JiDRa14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxyC-tYvroV9fn_b5J4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwIX02BfRJYU6DMF394AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwhx-NgE44I8hd-PKF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
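The Coding Result shown above is simply the entry in this raw array whose `id` matches the comment (here `ytc_UgzKsm6nmh3RsW6HTq94AaABAg`). A minimal sketch of that lookup, assuming the response parses as a JSON array; the `coding_for` helper is illustrative, not part of the actual pipeline, and the response text is abridged to two entries:

```python
import json

# Abridged raw model output for a batch of coded comments.
raw_response = '''[
  {"id": "ytc_UgzKsm6nmh3RsW6HTq94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwIX02BfRJYU6DMF394AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def coding_for(comment_id: str, response_text: str) -> dict:
    """Return the coded dimensions for one comment from a batch response."""
    for entry in json.loads(response_text):
        if entry["id"] == comment_id:
            # Drop the id so only the dimension/value pairs remain.
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(f"no coding found for {comment_id}")

print(coding_for("ytc_UgzKsm6nmh3RsW6HTq94AaABAg", raw_response))
```

If the model returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is one reason to keep the raw response inspectable alongside the parsed table.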