Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is no evidence that AI will gain consciousness. For example the person who may program the the general AI might (and should) instill it with Human interests at heart. Why? Because we, Humans, don't want to become extinct now do we? To my previous point, if you program an AI into holding human interests to heart, then whose to say it isn't programmed to do whatever it does. Even if it's given the "freedom" to go on about it's day? If it sees a cat stuck in a tree, it will compute the information, check protocols (saving animals is a human interest) and it will run, roll or fly over to the cat, save it and be on it's way. Human's didn't even have a creator and we can't escape our programming. Everything you are is based on past knowledge, experience and the circumstances that you may or may not have been put into. It's as simple as that. We can deprogram some things in us but we aren't fully free from our brains. Our personalities are a culmination of what we've gone through. So, I don't believe, necessarily, that we have enough proof that AI will become sentient. I believe it'll be just another tool. Thats not to say it can't become sentient eventually. Thats also not to say that robotic AI or AI in general should not be given rights. We haven't even gotten there yet. We're not prepared for robotics taking our jobs at the current moment, which is happening every single day. Rights are a whole other story.
Source: YouTube · AI Moral Status · 2017-02-24T00:4…
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-27T06:26:44.938723
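
For anyone scripting against exports like this, the coding result's four dimensions can be captured in a small schema. The sketch below is illustrative only: the class name is hypothetical, and the value sets are just those observed in the raw response that follows, so the real codebook may define more categories than appear in this one batch.

```python
from dataclasses import dataclass

# Value sets observed in the raw LLM response below; the actual
# codebook may define categories that do not appear in this batch.
RESPONSIBILITY = {"developer", "creator", "government", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "contractualist", "unclear"}
POLICY = {"none", "regulate", "ban", "unclear"}
EMOTION = {"indifference", "mixed", "approval", "outrage", "fear"}

@dataclass
class CodedComment:
    """One coded comment: the comment id plus the four dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension falls outside the observed sets."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} code: {value!r}")
```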
Raw LLM Response
[ {"id":"ytc_UghBsdvkqrytYXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgiuleNNrJVRZHgCoAEC","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"approval"}, {"id":"ytc_Uggd38vfndHWt3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugg5R38fstOz_3gCoAEC","responsibility":"creator","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UghU6immMZEHlXgCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Uggd8NAdlsfsRHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgjQcetBhk6wU3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UggXZRI8LEbBcngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugj8JCZ6OH21Y3gCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UghZ2hVEk12VdngCoAEC","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]