Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The most likely AI 'takeover' is simply incremental and economic - displacing us from more and more of the economy until we control very little of it, and its goals simply don't match ours any more. To a fair degree this already happens to us in the form of capital markets, giant mechanisms that define all of our lives with none of us able to protect ourselves from its whims or guide it in any meaningful fashion - and capital markets have *no intelligence at all*, all they care about is 'numbers go up' and that alone is enough to make our existence rather miserable and steal away most of our individual agency. But when AI is the thing doing that, it can begin overwhelming our purpose not with a mindless goal, but with its own determined goal. It doesn't need to be hostile, it just needs to not be aligned with what we want or need, and it will simply roll over us without us understanding why or being able to do anything about it, because we will have given it all our tools, and left none for ourselves. If we try to interfere with it at that late stage, it might simply choose to sweep us aside.
youtube · AI Governance · 2025-11-12T21:5… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugxq77RqxhqonCeaATB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz10SmduLaUTyoC3e94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgztFDp9NaMemoVVsMZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "unclear"},
  {"id": "ytc_UgwpxWBRwfB3Z6UYtOp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxlgKEXEjSXguQBcRx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxsDYGt5pnHuEcwADB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxRl-p9vVOUqtFNm554AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxmROmulnnePKne1L54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwEi_7ke6mt_U6kqIF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzg9KKyHaOn5A1fsDV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
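The raw response is a JSON array with one object per coded comment, keyed by the YouTube comment id. A minimal sketch of how such a response could be parsed into a per-comment lookup (the pipeline's actual loading code is not shown here; the field names are taken from the output above, and the two entries are a truncated sample for brevity):

```python
import json

# Truncated sample of the raw model output above; the keys mirror the
# coding dimensions in the result table (responsibility, reasoning,
# policy, emotion).
raw = '''[
  {"id": "ytc_Ugxq77RqxhqonCeaATB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzg9KKyHaOn5A1fsDV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown at the top of this page.
row = codings["ytc_Ugxq77RqxhqonCeaATB4AaABAg"]
print(row["responsibility"], row["emotion"])  # -> ai_itself resignation
```

Indexing by `id` makes it straightforward to join each model coding back to its original comment record.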