Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- “Disagree. AI isn’t going to end humanity. Humanity is going to end humanity! You…” (ytc_Ugz2ZCiYB…)
- “Good ai makes medicine and aids humanity. But not to the point where we are brai…” (ytc_UgyJirial…)
- “You don't need to Talent to become an artist, all You Need Is to work hard and i…” (ytc_UgyYQkdrw…)
- “This is only accurate for AGI. Not probablistic A.I. which is temporary but stil…” (ytc_UgzKSOmDg…)
- “i don't really care if it uses my stuff (usually only my friends and occasionall…” (ytc_Ugzq0hGbv…)
- “LLM is knowledge system, not intelligence. You have a lot of knowledge but no in…” (ytc_UgypEyWgE…)
- “The "smart" idiots will kill us. Very book smart. Enough to build a robot that …” (ytc_Ugh1ynhLV…)
- “No, science is not a religion. What is a religion is evolutionism saying that th…” (ytc_UgzoxMwUn…)
Comment
> Depends how we make them. If we want them to simulate humans, they would indeed simulate us, but they will never be a perfect duplicate of us, it's scientifically impossible. Why would we program them to feel pain? If they "die", we can just rebuild them and paste their code back into them, whereas humans can't simply "respawn" in the same way. Why would we program them to feel sadness? They have no need for it, and we don't stand to gain anything from it.
>
> If we want an AI to do something, we program it to do that something. Adding hurdles like pain or emotion complicates their task, and muddies the end result. We already have a problem with human error, we don't need AI to add to that problem.
>
> We made them to advance our species, not burden it. Some humans will disagree, but I firmly disregard their opinion, they are far too willing to attach humanity to innately inhuman objects. It is funny to watch a man yell "WILSON?!" at a rock, and equally sad at the same time, I would not see AI de-evolve into that.
Source: youtube · Video: AI Moral Status · Timestamp: 2024-09-26T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
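Every coded comment carries the same four dimensions shown in the table. A minimal sketch of what one record looks like in code, assuming string-valued labels throughout (the type name `CodingRecord` is illustrative, not the tool's actual schema):

```python
from typing import TypedDict

class CodingRecord(TypedDict):
    """One coded comment, matching the records in the raw response below."""
    id: str              # platform comment ID, e.g. "ytc_..."
    responsibility: str  # who is held responsible: "developer", "company", "none", ...
    reasoning: str       # moral-reasoning style: "consequentialist", "deontological", "unclear"
    policy: str          # policy stance: "none", "regulate", ...
    emotion: str         # dominant emotion: "indifference", "outrage", "fear", "approval", "mixed"
```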
Raw LLM Response
[
{"id":"ytc_Ugz08ZDfbVphQbPRRH14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyc_H7WrWTNPqD_LcJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYBHtP3s_Owb1T-mp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgE8uq6zswy7U-J2l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-G0wE-OjlkPpGhQh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwY-KpVCGJbY1QCmwN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwqjdz3O8onaP1tKVN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVI_KCifXxJLmlLgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyTrV0qYN67KW0hhx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzxsw8XjFqVzctpa1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]
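The raw response parses as a JSON array with one record per comment in the batch, so the lookup-by-ID step amounts to a linear scan on the `id` field. A minimal sketch, assuming the raw response has been saved to a file (the path and function name here are illustrative, not the tool's actual API):

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment ID, or None if it is absent."""
    for record in json.loads(raw_response):  # the response parses as a list of dicts
        if record.get("id") == comment_id:
            return record
    return None

# Usage sketch: "raw_response.json" is a hypothetical dump of the array above.
with open("raw_response.json") as f:
    coding = lookup_coding(f.read(), "ytc_UgxgE8uq6zswy7U-J2l4AaABAg")
print(coding)  # e.g. {"id": "ytc_Ugxg...", "responsibility": "developer", ...}
```

Note that `dict | None` requires Python 3.10+; on older versions, use `Optional[dict]` from `typing`.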