Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I would argue that our human rights are not exactly just for humans, I mean we can all agree that torturing an animal is just as bad as torturing a human. But robot rights would be made for them, like depending on whether they would have an equivalent to our animal (human) pain. I think that at some point our rights should apply to robots and AI, I mean we have designed robots and AI to have memory, logic and the ability to learn etc etc. By that I mean we designed them based on ourselves. I am 100% sure that robots WILL have at least some sense of survival, if a robot knows that it is going to meet its end unless it at least does something to try to avoid the end, it will do it.
Platform: youtube
Topic: AI Moral Status
Posted: 2025-03-18T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz08ZDfbVphQbPRRH14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyc_H7WrWTNPqD_LcJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYBHtP3s_Owb1T-mp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgE8uq6zswy7U-J2l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-G0wE-OjlkPpGhQh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwY-KpVCGJbY1QCmwN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwqjdz3O8onaP1tKVN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVI_KCifXxJLmlLgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyTrV0qYN67KW0hhx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzxsw8XjFqVzctpa1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]
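The raw response above is a JSON array in which each record carries a comment `id` plus one label per coding dimension (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal validation sketch for such a batch is shown below; note that the allowed label sets are inferred only from the values visible in this sample, so the real codebook may define additional labels, and the `validate_batch` helper itself is hypothetical, not part of the tool.

```python
import json

# Allowed labels per dimension, inferred from this sample batch only;
# the actual codebook may include labels not seen here (an assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "fear"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comments) or
        # ytr_ (replies), judging from the IDs shown on this page.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records
```

Running the validator over a well-formed single-record batch simply returns the parsed records, while an unknown label or malformed ID raises a `ValueError` identifying the offending record.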