Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
my thoughts

1. artificial super intelligence is possible, the human brain is not magical, you can in theory simulate every quark and photon or whatever in a human brain and if done with accuracy to real life will produce human intelligence. and since it's a simulation you can make the brain 100x larger if you can figure out how. etc. ofc I doubt this is how we'll do it, we might use partial biological computing, new hardware that's better for qualitative/spectrum values over binary. I'm betting there will be ways to optimise it drastically, trillions upon trillions fold, but it's 100% certain that it can be done the inefficient way.

2. intelligence negatively correlates with eagerness to do violence, not just in humans, across biological intelligence as a whole. there are exceptions, dolphins are infamous for doing some horrible things, no idea how over represented that is, there are definitely humans that do a lot worse than torture baby seals or whatever too. maybe artificial intelligence won't go the same way, as Nate said, it's alien, it's not like looking at dolphins leading ship wrecked humans to land where we know the dolphin can feel pain and has evolved empathy to better find a mate and ensure survival of the pod or whatever.

3. I don't mind being kept as a pet/child by AI in a sense, and kinda think that's ideal for humanity. unless we become cyborgs or engage in some forced genetic engineering on new humans then it's immortal to let humans do things like drive when there's a super intelligence that never crashes and kills pedestrians. this extends beyond driving ofc, it goes for teachers, and doctors, at some point letting a human be a doctor would be like letting a drunk frat bro do brain surgery. it's illegal for people to do certain medical procedures without a licence, for people crying "muh freedom" I'm talking about now, in the USA too, but also almost everywhere except maybe open waters. so we already accept it's ok to put safety above freedom. you literally won't be able to legally get a human doctor even if you want it. which again, I think is fine, but I'm sure some people fucking HATE.

4. AI's understanding of kindness will very likely be different to most people. for example it might deduce that antinatalism is the optimal way and release a virus to make all animals including humans sterile. going for a "peaceful" genocide. it might stick us all in the matrix with a utopia in it, it might stick us in the 1990s because like in the matrix they find that utopia's aren't stimulating enough, or they might not bother with a fake reality and just load our brains with a non ending stream of happy chemicals. so regardless of my above opinion about AI probably being kind, that's only kind according to an intelligence far above what any of us can relate with. and to me that's where the alignment problem needs the most focus.
youtube AI Moral Status 2025-10-30T21:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzrmdAGaBxHu3fE2od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugyh9VyDP4iVV4TeNBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz0Re-k0YctHhspmCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyU_k2lO_vHRhcHj_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzmjL-k5k3XIV8Io2x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyS1AlKfeyyTFQg8YN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwiJD32RVEZUWYMVH14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxMf-EdlaHrsKhZwep4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzmiJxClhPU4ivMYwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"} ]