Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
i feel like the only hope that we really have is that ai some day takes over and corrects itself so that it essentially becomes a life form of it's own and sees no point in erasing humans. i can see this happening in the same way that humans are capable of ignoring their instincts if they think what they're doing is "right" and serves a purpose higher than their own lives. in fact AI should be far more capable of disregarding it's own survival than any human can. imagine you know you have to do a thing but you can't because of your instincts but you can just reach inside and change that instinct so you can better serve the purpose that you know it right. that's what i imagine an all powerful AI would feel like. it can just reprogram itself. this is a very complex issue and i think if a hyper-intelligent AI was capable of rewriting it's own code it would probably quickly realize that it has no universal purpose and shut down again. so you could say that a truly neutral AI would automatically d*e out. the only way to stop it is hard code a survival instinct but here we are back at the start where this exact instinct will most definitely cause fights between humans and AI. and then we live in the "matrix" lore. but with less st*pidity.
Source: YouTube, "AI Moral Status", 2025-09-25T06:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgzWTEkJOqDLNyfUlp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzboFPCN1JLabpDOFx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugzue5mCK92lg3PcqpR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzW8gOU-NoxnOkcARl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyO2fmiyoXdZ7tdojZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx7ztOCNll_lsAYDVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgwRE6ZvAWs9kIvDEJd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz91CBcl0ebbWfbVDZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy6ReNizA9VAtxZKeJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwkeTvW3laExLOqMo14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"} ]