Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "That's below 1% of an advanced Ai can do, some Ai can outperform humans even on …" (ytc_UgxSSywps…)
- "There is no way to regulate these things. It's far too easy to just upgrade the…" (ytc_UgwissjFw…)
- "Children...can't understand they ARENT talking to their new online friend! Their…" (ytc_UgzInHgQo…)
- "This question isn't meant to check whether you're a robot or not because "robots…" (ytc_UgyAZp2uD…)
- "Simply teach AI to solve all humain health problem , and live up to 200-500-or b…" (ytc_Ugzl6JO1O…)
- "it's dumb lowkey, ofc things made by ai are going to look like other things. Do …" (ytc_Ugz2g5Uj7…)
- "Once they can perfect these robots I would love to have one. You don't have to w…" (ytc_UgwSVJAj0…)
- "Yep it's very bad (when isn't it though..?) over there for "whites". A mate &…" (rdc_deuosud)
Comment
Hey Kurzgesagt, would you consider that you maybe begged the question in your answer to "Do robots deserve rights?" by the diction you chose throughout the video?
In English, at least, you use the pronoun 'it' to refer to AI (a catch-all term I'm applying here, to refer to human-engineered computer intelligences). What is important is that when it comes to ethical and ontological queries into matters of such high magnitude as rights theories, the biases of how someone fundamentally regards the subject(s) in question can very easily lead them to conclusions that don't interrogate those biases effectively.

The pronoun 'it' in English refers to objects, as much as 'he' refers to male subjects, 'she' to female, and 'they' to gender-neutral as well as more than one subject. But 'objects' do not have the legal status of personhood in the US, which not only means that they aren't eligible to vote and have a sociopolitical voice, but that they cannot access basic protections like habeas corpus, the right to live, and the freedom of speech. When a subject is referred to with the pronoun 'it', they are denounced as objects without personhood (i.e. sufficiently non-human, like other animals).

This becomes rather critical when we talk about the philosophy of property (can you own things?) and ownership (what can you own?), because those philosophies are often applied in federal law. Even if we disregard the origin of the word 'robot' (from Czech 'robota', meaning 'forced labor'), the question "Do robots deserve rights?" already presumes that robots, or AI, are objects without personhood, which are owned by human entities; and historically, owned beings have not been viewed as subjects which deserve rights.

So, the language we use to refer to AI (and, as you pointed out, nonhuman animals) has tremendous implications in how we think about them as entities, as well as how we regard their ethical concerns and interests.
As always, I do appreciate your examinations into far-reaching and challenging topics; thank you for being willing to explore this question! Perhaps consider reconsidering some of your considerations here, though?
Thank you for your time!
youtube
AI Moral Status
2017-06-06T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgihOVP7ch7i33gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghzMB6HOHNjH3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgjkCuL-PQ8vL3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"indifference"},
{"id":"ytc_Ugip71zLnupQqHgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugje2dysgjppA3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgioHQ_LOSntz3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugg3hok13UQ6_HgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjP07HL5iXxAXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgjFzJM8IiGQoHgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggT6vFRx9k49XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
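The raw response above is a JSON array of per-comment code records, which the viewer then resolves via "Look up by comment ID". A minimal sketch of that lookup step in Python, assuming the record shape shown above; the `index_by_id` helper and the two-record sample are illustrative, not the tool's actual code:

```python
import json

# Sample raw LLM response: a JSON array of per-comment codes,
# using the record shape and two IDs from the response above.
raw = """[
  {"id": "ytc_UgihOVP7ch7i33gCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugg3hok13UQ6_HgCoAEC", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"}
]"""

# The four coded dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text: str) -> dict:
    """Parse the LLM response and key each coding record by its comment ID."""
    records = json.loads(response_text)
    coded = {}
    for rec in records:
        # Skip malformed records missing the ID or any coding dimension.
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            continue
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

codes = index_by_id(raw)
print(codes["ytc_Ugg3hok13UQ6_HgCoAEC"]["policy"])  # → regulate
```

Keying by ID up front makes each "click to inspect" lookup a constant-time dictionary access instead of a scan over the full response array.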