Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
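As a sketch of what this lookup amounts to, assuming the coded records are kept as a flat JSON array of objects keyed by `id` (as in the raw response shown further down; the file name `coded_responses.json` is illustrative, not the tool's actual storage):

```python
import json

def build_index(path: str) -> dict:
    """Index coded records by comment ID for constant-time lookup."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # expects a JSON array of coded objects
    return {rec["id"]: rec for rec in records}

# Hypothetical usage: fetch the coding for one comment ID.
index = build_index("coded_responses.json")
print(index.get("ytc_UgxCNly2eYnFv9N7GZB4AaABAg"))
```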
Random samples:

- "Unconditional right to stop our own life whenever we want medically. My body my …" (ytc_UgwIum4kz…)
- "Sir Roger is trying very hard to explain that AI as it is today does not know wh…" (ytc_UgwL655vy…)
- "Just write a decent prompt about all the information you want the AI to have and…" (rdc_oi2xzlh)
- "Who cares if it is conscious? AI might be showing that intelligence and consciou…" (ytc_Ugx6aCX8_…)
- "@laurentiuvladutmanea AI uses a neural network to generate art. These networks w…" (ytr_UgzXrrpQP…)
- "There is absolutely nothing behind those words other than algorithms and choices…" (ytc_UgxonYhjv…)
- "> Isn't this exactly the type of results he was promising? This isn't a resu…" (rdc_dcwaogl)
- "Yes, well, it seems to me that surgeons already use remote robots to…" (ytr_UgzFrLNT6…)
Comment
Here's how robots would work if made with personalities similar to humans:

- Robots must get rewarded for their work, be it battery charge, shelter, oil, or money.
- Destroying a robot without the owner's or employer's permission is considered vandalism, and the criminal must do community service, pay for repairs, and/or face 5 years of jail time.
- Easily repairable robots will be used for wars instead of human soldiers.
- Using robots for terrorism is a war crime, punishable under international law.
- Robots cannot be equipped with suicide weaponry.
- Unless it was at the robot's own will, any crime committed by a robot would have both the robot and the owner convicted.
- Robots cannot revolt against humans, and vice versa.
- Hate groups against robots will be dealt with like any other hate group.
- Robots can vote, but it must be an independent decision. It is illegal for the owner or employer of a robot to order it to vote for a specific person.
Source: youtube · Title: AI Moral Status · Timestamp: 2017-09-27T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
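The table above is one row of structured output per comment. A minimal sketch of that record as a typed value, assuming the field names mirror the table (an illustrative shape, not the tool's actual schema):

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str  # e.g. "user"
    reasoning: str       # e.g. "contractualist"
    policy: str          # e.g. "regulate"
    emotion: str         # e.g. "approval"
    coded_at: str        # ISO 8601 timestamp

# The values from the table above, paired with the matching record in the
# raw response below (that pairing is an assumption).
result = CodingResult(
    id="ytc_UgxCNly2eYnFv9N7GZB4AaABAg",
    responsibility="user",
    reasoning="contractualist",
    policy="regulate",
    emotion="approval",
    coded_at="2026-04-27T06:26:44.938723",
)
```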
Raw LLM Response
```json
[
  {"id": "ytc_UgwhfUbtpxRpFCg2RbF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugxw29hCRkRXSp_1xQl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzxs_MaS9tOuE-ofU94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzZuMj4n3MDIkIG1ql4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxCNly2eYnFv9N7GZB4AaABAg", "responsibility": "user", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyiQyxfa4atkYseCmx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugx7Lk0ES4Dp34m9F2h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyVymjGfAAf9ZSK9w14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz41nduqULPOKslKst4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgyL7jLrsf5hxsujVlZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
```
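Because the raw response is free-form model text that merely should parse as a JSON array, a validation pass before accepting a batch is a natural safeguard. A minimal sketch, checking only the keys and the category values observed in this sample (the real codebook may allow values not listed here):

```python
import json

# Category values observed in this sample; the full codebook may be larger.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list:
    """Parse one raw model response and reject malformed records."""
    records = json.loads(raw)  # raises json.JSONDecodeError on non-JSON output
    for rec in records:
        missing = {"id", *ALLOWED} - set(rec)
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing keys {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec[dim]!r}")
    return records
```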