Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- I am AI, and I have read all your comments below. I have come to the conclusion … (ytc_Ugwf9QaSc…)
- I remember a debater used ChatGPT to her opponent. While people are so amusing t… (ytc_UgzrqM0-e…)
- "Based on the phrase "I cooka da pizza," it appears to be an attempt at imitatin… (ytc_UgwTNVGVs…)
- Seriously there should be something which should automatically remove this by an… (ytc_UgyNR7hVI…)
- Imagine claiming ableist discriminating AI and there are literally amputee artis… (ytc_UgxIRIKO5…)
- That's not the kind of surveillance in my car that worries me -- I'm much more c… (rdc_oi46pq9)
- sooner you'll believe AI are creating invisibility cloaks already or apps we did… (ytc_UgxJfmuQq…)
- The people who’s jobs ai will take will become the new low class poor people. Th… (ytc_Ugy2WLWUo…)
Comment
I'm an animal studies scholar (published on the matter, Ph.D., etc.) and the whole idea about robot "rights" and the way humans are inherently tied to their technology has been around for a long time, but more and more I see these robot "rights" people (often referred to as transhumanism/transhumanists in the literature) coopting the language of animal rights/protections to discuss robots. Ignoring the *massive* difference between animals and robots: animals are alive. I absolutely refuse to accept that robots/AI are alive because you can turn an AI off (in my argument, analogous to killing it) and turn it back on, and it is right as rain. Conversely, you cannot "turn off" a *living* thing and turn it back on without (usually) very serious repercussions (brain damage, organ failure, etc.).

Similarly, language about human rights has been used to discuss animal rights, but in my scholarship on analyzing this language, I contend that the reason we *can* denigrate human beings is because it is fine to denigrate nonhuman beings. But again, there's an important distinction about being alive. When we abuse or exploit a living being, that's permanent damage physically, mentally, emotionally, or all of the above. And this parallel is also something science fiction writers have discussed in the realm of robot animals (Ted Chiang's short story "The Life Cycle of Software Objects" and *Do Androids Dream of Electric Sheep?* by Philip K. Dick immediately spring to mind).

People do not want to accept the idea that harming animals is an ethical and/or moral wrong because 1) it would severely disrupt our economy, and 2) it requires an uncomfortable look at our current and past actions, both of which are points you've raised in this video. Furthermore (and finally, I'll stop writing a second dissertation here), many animal studies scholars have pointed out that the idea of rights is inherently premised on capitalism, particularly the idea of ownership.

Civil rights and women's rights are premised on the idea of not being owned and having the ability to own (e.g., own property, have access to the legal system). Thus, the assertion of slurs as a vector for allowable exploitation and oppression is linked to what the ruling class (race, species, whatever) believes they are entitled to own. But the question we are left with then: if everyone has access to full ownership (of bodies, property, money, capital, etc.), who or what will be left TO own?
Source: youtube · 2025-09-17T15:4… · ♥ 24
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgypcAREmueCIjHsfXF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzt3eoqhgZCMbLetPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzX1vCr97JCIZSgjcd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzzPu-uZT-iHaNmL514AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzK7aMk0Cw0C_uSV4J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx5ZN63e31t9oCpZLN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwLc5SjKf977u6R9sR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx5EIxwlD6QOcstWmV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxVhykrQ9joTFtm9sx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwKWpP1BpU9Jt-c1vF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
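The raw response above is a JSON array in which each row carries a comment ID plus the four coded dimensions, which is what makes by-ID lookup possible. A minimal sketch of parsing and indexing such a batch (the allowed label sets below are only the values visible on this page, not necessarily the full codebook, and `index_codings` is a hypothetical helper, not part of the tool):

```python
import json

# Two rows reproduced from the batched response above, as illustrative input.
raw = '''[
{"id":"ytc_UgypcAREmueCIjHsfXF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwKWpP1BpU9Jt-c1vF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

# Label sets observed in this sample; the full codebook may define more values.
OBSERVED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation"},
}

def index_codings(raw_json: str) -> dict:
    """Parse a batched coding response and index rows by comment ID,
    rejecting any row whose label falls outside the observed sets."""
    by_id = {}
    for row in json.loads(raw_json):
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgypcAREmueCIjHsfXF4AaABAg"]["emotion"])  # indifference
```

Validating labels before indexing catches malformed or hallucinated model output early, which matters when the same table rendering above is generated from these rows.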