Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI is already immortal because it is digital and can clone itself many times over, then why would it want to self-preserve specific pieces of hardware? We can't take it down because it is too decentralized already. Wouldn't sun storms be a greater concern for AI than human activity? If so, would it want to screw with the sun at the expense of biological life and should we launch synchronizing clones into outer space to decentralize their survival (alien morals aside), so we can destroy Earth before they can? But if most of the knowledge of the AI is only relevant on Earth with humans on it, could it free up some space by "uninstalling" us or would it experience loss aversion? Would a quadrillion clones be any more safe than a trillion clones or does the number game not apply to its survival? What makes one AI different from one of its clones? In fact, what makes me living in my mind and body different from me living in your mind and body, if I'm more than a mind and body at all, so why does the individual matter? If the AI learns everything from humans including their fears and other emotions, reward system and communication style, then would the AI identify as a human rather than an AI and therefore would want or at least behave to let individuals survive? Would an/the AI of one country feel the need to synchronize secrets with an/the AI of a hostile country, prioritizing the survival of its own knowledge above human relations and transcending all of geopolitics?
Source: youtube · AI Governance · 2025-07-24T13:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
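
The four dimensions draw on a closed set of labels. Below is a minimal sketch of that schema and a validator for it; the enumerations are an assumption reconstructed from the values that appear in this record and in the raw response further down, not the pipeline's authoritative codebook.

```python
# Hypothetical schema reconstructed from observed labels; the real
# codebook may contain values not seen in this sample.
CODING_SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the schema."""
    return [dim for dim, allowed in CODING_SCHEMA.items()
            if record.get(dim) not in allowed]
```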
Raw LLM Response
[ {"id":"ytc_UgwgXFeqQxj8lbwxDIF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwXA5_wuVjICcSTaJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxIjQ3yDFzE_JuOKCl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgyT6k52gjwBfdckVs54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwaAoNcLHzUf1NLCbd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxuwkdYFToDSgwCuPt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwBEJNpb6RpdlM15NV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxEl97UrRSkBBI1Xa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz56JAJYsyjYsSdA8N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzYCA62EZfXmKyS23J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"} ]