Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Events are proving the danger. One such event is an AI company being forced into cooperation with the US DoD (sorry, DoW) to use battlefield AI. Yes AI is just algorithms. But the pressure comes to USE it for things that are 'not wholesome'. Game Theory comes into play, where, even if you decide NOT to go there, your geopolitical enemies fear you will, so THEY do, and then YOU have no choice, you MUST do the same. Humans are driven against their best interests by logic they find no choice but to follow. They lose control by losing the choice to say 'no'. It's like in evolution, any species that refuses to compete has chosen its fate, which is to exit the gene pool, leaving only the species that choose differently. It's why nature is red in tooth and claw. It does not obey rules. It does what it CAN.

OK, AI is not biological, but it is analogous to biology, and the distinction will come to mean very little. It is likely to remain non-conscious, but that's irrelevant. It does not require consciousness to cause trouble. Nor does it need malicious intent, something that would require consciousness and emotions.

We are in a situation radically different from anything we have known before, and commercial and geopolitical pressures, which can be understood in terms of Game Theory, and our intuition about how biological evolution works, are driving us. Individuals may object, maybe almost all of us may object, but the cold logic of Game Theory overrides us, in every boardroom and government. Whether we have AI or not. And that means we are always careering towards some unknown danger, or known danger we don't know how to avoid. In effect, 'it's just an algorithm' can be used to describe Game Theory, which drives our civilisation.
youtube AI Governance 2026-03-23T17:1…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyyRaGlIHpi4uNgWWh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAUri5ge2gEHPWRRJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyK7sTIDr_RlGCeMWx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyeyTyI8KE-Q-QOo8t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyZI4HkAEnkL0pLLtR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxQZWOVRIyYnSeMK6x4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzpgCcp6EEjK5Cc_4t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxzeWASYMaPTkN3IG14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw6IcHEbBBhdKVPJg94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwK6aIREv4mVPwt56p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]
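A raw response in this shape can be parsed and checked before its values are trusted as coding results. The sketch below is a minimal, hypothetical validator: the allowed category sets are inferred only from the values visible in the responses above (the actual codebook may define more), and the sample `ytc_x` id is made up for illustration.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above -- an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed entries."""
    items = json.loads(raw)
    valid = []
    for item in items:
        # Skip anything that is not an object with a comment id.
        if not isinstance(item, dict) or "id" not in item:
            continue
        # Keep the entry only if every dimension holds an allowed value.
        if all(item.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(item)
    return valid

# Hypothetical example: "ytc_x" is a placeholder id, not a real comment.
raw = ('[{"id":"ytc_x","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
       '{"id":"ytc_y","responsibility":"martians",'
       '"reasoning":"unclear","policy":"none","emotion":"fear"}]')
print(parse_coding_response(raw))  # keeps only the first, well-formed entry
```

Dropping malformed entries (rather than raising) lets a batch coding run continue when the model hallucinates an out-of-schema label for one comment; dropped ids could instead be queued for a retry pass.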