Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Depends how we make them. If we want them to simulate humans, they would indeed simulate us, but they will never be a perfect duplicate of us, it's scientifically impossible. Why would we program them to feel pain? If they "die", we can just rebuild them and paste their code back into them, whereas humans can't simply "respawn" in the same way. Why would we program them to feel sadness? They have no need for it, and we don't stand to gain anything from it. If we want an AI to do something, we program it to do that something. Adding hurdles like pain or emotion complicates their task, and muddies the end result. We already have a problem with human error, we don't need AI to add to that problem. We made them to advance our species, not burden it. Some humans will disagree, but I firmly disregard their opinion, they are far too willing to attach humanity to innately inhuman objects. It is funny to watch a man yell "WILSON?!" at a rock, and equally sad at the same time, I would not see AI de-evolve into that.
Source: youtube · AI Moral Status · 2024-09-26T05:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
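
Each coding result is a flat record with one categorical label per dimension. A minimal sketch of representing and validating such a record, assuming Python; the label vocabularies below are only those observed in this export, and the full codebook may contain more values:

    from dataclasses import dataclass

    # Vocabularies observed in this export; the real codebook may be larger.
    RESPONSIBILITY = {"none", "developer", "company"}
    REASONING = {"deontological", "consequentialist", "unclear"}
    POLICY = {"none", "regulate"}
    EMOTION = {"approval", "outrage", "indifference", "mixed", "fear"}

    @dataclass
    class CodingResult:
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Raise if the LLM emitted a label outside the known vocabulary.
            for value, vocab, name in [
                (self.responsibility, RESPONSIBILITY, "responsibility"),
                (self.reasoning, REASONING, "reasoning"),
                (self.policy, POLICY, "policy"),
                (self.emotion, EMOTION, "emotion"),
            ]:
                if value not in vocab:
                    raise ValueError(f"unexpected {name} label: {value!r}")

    result = CodingResult("developer", "consequentialist", "none", "indifference")
    result.validate()  # passes for the record shown in the table above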
Raw LLM Response
[ {"id":"ytc_Ugz08ZDfbVphQbPRRH14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyc_H7WrWTNPqD_LcJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxYBHtP3s_Owb1T-mp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgxgE8uq6zswy7U-J2l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy-G0wE-OjlkPpGhQh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwY-KpVCGJbY1QCmwN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwqjdz3O8onaP1tKVN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzVI_KCifXxJLmlLgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwyTrV0qYN67KW0hhx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzxsw8XjFqVzctpa1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"} ]