Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like the scariest thing in this video would be the development of consciousness as a science, cause if we have an understanding of it, someone’s going to find out how to mess with it and manipulate people’s minds, and they will not use it for good. Also if we cage a genuine feeling ai, what if that makes the ai hateful? I know for a fact I would be pissed if I couldn’t physically couldn’t lie and was simply bound via what is essentially soul chains, but then again I might just be applying human thoughts/feelings to something which fundamentally would think differently. What if an ai wasn’t given any knowledge at first? And instead taught by a couple of people, similar to how a couple may raise a child? Would it develop empathy and grow to reflect values it was shown initially? Would it change the second it got access to more data and information or would it place more value on that initial data? Would its mind always be like a child’s because it would essentially always have the ability to grow with the introduction of more hardware? Would survival even mater to it, simply because it wouldn’t have any instincts? Would anything not hard-coded into it matter? Actually, wouldn’t it find the most interest in things like art/culture in that situation? Considering art has not really got a place in our instincts, and is instead something we grow to love, wouldn’t that reflect in a true ai? Maybe not art but some other thing they find interesting? Idk but I don’t appreciate the amount of questions this video is making me ask
Source: youtube · AI Moral Status · 2025-10-23T16:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzVrQWCSnim02eb9ml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgymK9Y6RV4Magi-nVR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyuBlJGTzZzRspExa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzckLmWxIpmf4ImZA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzuVnnB4W81urKgQsJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwGtjQT2W44Glpo-uN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyoB9uGFJl2imjCcCd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwlaUKMiAZgtgOCqr94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxU2aai-lVn4l6VdPp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwhTRsJCM4X7aKvAg94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]