Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Your argument about prioritizing control over the expansion of AI is thoughtful and important, and I agree the risks deserve serious attention. But I have one question that keeps coming to mind. If responsible nations and organizations slow development in the name of safety, what happens if less responsible actors simply ignore those limits and continue advancing the technology anyway? It reminds me of the classic gun-control dilemma: you can restrict law-abiding citizens, but criminals may still obtain weapons regardless of the rules. In that situation, the restrictions mainly affect the people already willing to follow them. So if the “responsible world” pauses or restrains AI development while others do not, how do we prevent creating a power imbalance where the least regulated actors end up with the most advanced systems? How are you going to solve that?
YouTube · AI Governance · 2026-03-16T05:3…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwytkjWRz4txk43RDZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDQ_Q9wQI37Ckr39t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzs1Dnhv1nAB7lhcUl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwNSBBqcpEepOavDxR4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugzw4rnrrKUPQplRV1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZz0UeIxQ-1Z-FU0R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgywmdzGtrV06GWdJh14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGXKLpZniJ36Seu5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzr_dGza4U624ENI3t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2fT--RinN6azFL9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
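A raw response like the one above can be parsed and validated before the codings are stored. The following is a minimal Python sketch; the allowed value sets are inferred only from the values visible in this output, so the real codebook may define additional categories, and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the output above
# (assumption: the actual codebook may allow more categories).
ALLOWED = {
    "responsibility": {"government", "developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "approval",
                "resignation", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment id.

    Raises ValueError if an entry is missing a dimension or uses
    a value outside the allowed set.
    """
    coded = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: bad {dim!r} value {entry.get(dim)!r}")
        coded[comment_id] = {dim: entry[dim] for dim in ALLOWED}
    return coded

# Example with one entry from the response above:
raw = ('[{"id":"ytc_UgwZz0UeIxQ-1Z-FU0R4AaABAg",'
       '"responsibility":"government","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgwZz0UeIxQ-1Z-FU0R4AaABAg"]["policy"])  # regulate
```

Validating at parse time catches malformed or off-codebook model output early, instead of letting it silently reach the coding-result table.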