Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
Random samples (click to inspect):
- ytc_Ugz0FRgiz…: Well, AI is like a child that is being raised by psychopathic billionaires with …
- ytc_UgyK3n8J4…: The reason AI didn't "want to die" is because it was programmed not to die. Per…
- ytc_Ugwpje2ar…: One concern I have about sharing the process is that soon the AI would start to …
- ytc_UgwwKVVAO…: I dont get it. If everyone loses a job who will buy and consume. Ai cant buy and…
- ytc_Ugzeawe3Q…: I see them everywhere, but be careful buying from a booth called "Gallery Panda.…
- ytc_UgzLlV4TU…: Not only are these things a monsterous loss, an absolute negative - their end go…
- ytc_UgzAol8WL…: LMAO ISRAELIAN GUY ASKING TO ISRAELIAN BOT IF ISRAEL IS GOOD LMAO ISRAEL IS RIDI…
- ytc_UgxZjCeAH…: it is not just the robot that has to be watched over, the 'creator' too needs as…
Comment
I believe that robots should be given rights once they have feelings and consciousness. Of course where exactly that line is drawn is a fuzzy issue, but ethically I don't see another answer. Even before robots reach this threshold I still think we should error on the side of caution and still treat them with some basic rights. To make a comparison: I don't have a problem eating meat, but I think we should treat the animals we get that meat from ethically. Make their lives as happy and healthy as we can (within reason) and their deaths as quick and painless as possible. However, I'm also pessimistic enough that I don't think this will actually happen. At least not right away and when it does it'll probably happen for some arbitrary reason like the robot looks human enough that we develop strong feelings for them even if our toasters and cars have been sentient for years.
youtube · AI Moral Status · 2017-02-23T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
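If coded records need to be checked programmatically, a minimal validation sketch in Python might look like the following. The allowed value sets below are inferred only from the sample outputs shown on this page, not from the full codebook, so treat them as assumptions.

```python
# Minimal validation sketch. The allowed values are inferred from the sample
# outputs on this page; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "fear", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# Example: the coding result shown above passes validation.
record = {"id": "ytc_UgidTDRflZemungCoAEC", "responsibility": "none",
          "reasoning": "deontological", "policy": "none", "emotion": "approval"}
assert validate(record) == []
```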
Raw LLM Response
[
{"id":"ytc_UghNGSXNpzbcGXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Uggtymze_5vo33gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UghSU-uok-AsbXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugjvzy_RZZ3t3XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjCywbdm-zgRngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgidTDRflZemungCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UghmvQV9PnHxxXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjoIxwwsIHgF3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugjl7MMEYHckgXgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugjqcni13UDG0XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
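Because the raw response is a JSON array of per-comment records, the look-up-by-comment-ID workflow described at the top of this page reduces to parsing the batch and indexing it by `id`. A minimal sketch, assuming the response has been saved to a hypothetical `raw_response.json`:

```python
import json

# "raw_response.json" is a hypothetical file name; substitute wherever the
# raw batch output from the model is actually stored.
with open("raw_response.json", encoding="utf-8") as f:
    records = json.load(f)

# Index the batch by comment ID so any single coded comment can be retrieved,
# mirroring the look-up-by-comment-ID workflow above.
by_id = {record["id"]: record for record in records}

print(by_id["ytc_UgidTDRflZemungCoAEC"]["emotion"])  # -> "approval"
```

Building the index once and reusing the dict keeps repeated lookups constant-time, which helps when spot-checking many comments against their coded dimensions.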