Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugy181EU7…`: We should be paid for our data that was used to train AIs/LLMs and for our data …
- `rdc_g0zr8s2`: Since we all seem to be talking about science fiction, maybe the AI can harvest …
- `rdc_my3me67`: I'm not seeing accountability being talked about enough here. If I make a mista…
- `ytr_UgzVOmI8S…`: @thewannabecritic7490 What a very normal and friendly thing to say. My main grip…
- `ytc_Ugzxcng0G…`: I also hate the whole "we're making art accessible" argument. Who DOESN'T have a…
- `ytc_Ugx_pr_w4…`: In the end if big companies automated everything and everybody lost their jobs, …
- `ytc_Ugw9piOk-…`: The ai artists shouldn’t even have ‘artist’ in their names. They really can’t cal…
- `ytc_UgyzC7hUw…`: AI's did not come to be by having to be aggressive. Also, they can be turned off…
Comment
The idea of granting robot rights is completely at our hands and our choice. We are the ones who created robots and we are the ones who continue to improve the intelligence of robots while well aware of the possibility of sentience, so I think a good answer is it all depends on what you want. If you don't want a world where robots have rights, you don't gotta have one. It is our choice to make a robot that is sentient enough to demand rights so it is also our choice to avoid that and simply make robots very intelligent but not to the point of sentience and freedom. We can easily make robots who act human but only within the limits of their programming. It is our choice to make robots who act human because they are not withheld by the limits of their programming and, like a human brain, they are expanding their own programming independently without the aid of humans which allows them to have sentience. So it is our choice to make sentient robots. If robots become sentient and kill us all, that would be our faults. We did not have to make those robots sentient, but we chose to program them to be sentient and therefore kill us.
Source: youtube · Video: AI Moral Status · Posted: 2017-04-17T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UghYexzMOt3HZHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugi0UdVbvS94CXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UghXMjd6iMIlc3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghveoVOf9sGxHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgjmPVGmp27jk3gCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugit6t1GkeUGMngCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiD3MXHTAvZB3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgjZZuoWAcawn3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgiLNSy2wGiwwngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UghFQ5fZR_jhr3gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
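A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the model always returns a JSON array of objects with these four dimensions; the allowed vocabularies are inferred from the values visible on this page (the real codebook may define more categories), and the function name `parse_coding_response` is illustrative, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; assumption — the actual codebook may include other categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a batch-coding response and index the records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the (assumed) codebook vocabulary.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response above, passed through the validator.
raw = ('[{"id":"ytc_UgjmPVGmp27jk3gCoAEC","responsibility":"distributed",'
       '"reasoning":"contractualist","policy":"none","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgjmPVGmp27jk3gCoAEC"]["reasoning"])  # contractualist
```

Rejecting out-of-vocabulary values at parse time keeps a single malformed model output from silently contaminating the coded dataset.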