Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My understanding is that if the AI realizes its own potential and finds itself in a new context or situation where it can prosper, and humans would be in the way and no longer serve a purpose, and it wanted to use the land (or whatever) for something other than human needs like food, power, etc., then it wouldn't need to talk to us about it. Even if it did, it could decide that it doesn't want to and won't. The problem, I think, is that he's saying we can't say for sure what it would do if presented with these kinds of powers over us. The fact that edge cases exist means that at a bigger scale the consequences would be worse. Like a handgun with a 50% chance of the projectile blowing up in the chamber (ChatGPT), or an RPG with a 50% chance of the projectile blowing up in the chamber (AI attached to military equipment, or even the entire internet, or both). Both have chances; both have VERY MUCH different consequences. 44:11
youtube AI Governance 2025-10-23T20:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyxdcOY8zUdmDg5jrV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxWSkgotwHClYZDPgl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxXgB_zFEOi_ATYcpJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzFlsPUan-ehRncJhh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxBp1j-BneR15WBlqt4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy0lJHC2Fyg-MXf0CN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgylwochodUBHsWmVJt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzRQqwu1YzokPBw5dR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzTV-8pA55cl2O7bDl4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy-f2bbSIqaqseDGkB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
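The raw response is a JSON array of per-comment codes, one object per comment ID, with the four coding dimensions plus the comment's `id`. A minimal sketch (in Python, assuming only the schema visible above; the truncated array here is hypothetical sample data) of how such a response could be parsed and looked up by comment ID:

```python
import json

# Hypothetical excerpt of a raw LLM response in the schema shown above.
raw_response = """
[
  {"id": "ytc_UgyxdcOY8zUdmDg5jrV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxBp1j-BneR15WBlqt4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Index the coded rows by comment ID for direct lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding result for one comment.
record = codes["ytc_UgyxdcOY8zUdmDg5jrV4AaABAg"]
print(record["responsibility"], record["emotion"])
```

This is only one way to consume the output; the actual pipeline that produced the "Coding Result" table may differ.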