Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Unfortunately this video only addresses the ethical questions (while leaving out legal ones), but forgets mentioning that it is doing so.
Even if there would be an absolute truth, that they wouldn't deserve such rights now or in the future (which I don't believe will be the case), humans may still conclude that it may turn useful to grant them such rights. We have handed out legal personality to apparently non-sentient entities in the past (companies, states and even to the Whanganui River) because it was deemed a useful construction (at least by the governing powers).
I know that similar discussions are currently underway in legal sciences whether it would be useful to extend legal personalities to autonomous self-learning systems, where it's not always clear which party may have to be held liable (e.g. think of autonomous cars, that may acquire patterns on how they should operate on streets after they were produced, due to some usage pattern of their [previous?] users).
Platform: youtube
Video: AI Moral Status
Published: 2017-03-26T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
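Each coded row carries the same four dimensions shown in the table above, plus the comment `id`. A minimal validation sketch for one row of a raw LLM response follows; the function name and the exact key set are assumptions inferred from the sample data, not the tool's actual code:

```python
# Keys every coded row is expected to carry, inferred from the sample
# response below (assumption: no other keys are allowed).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate_row(row: dict) -> list:
    """Return a list of problems found in one coded row (empty if OK)."""
    problems = []
    missing = REQUIRED_KEYS - row.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    extra = row.keys() - REQUIRED_KEYS
    if extra:
        problems.append(f"unexpected keys: {sorted(extra)}")
    return problems

# A well-formed row (values taken from the Coding Result above) passes:
print(validate_row({"id": "ytc_x", "responsibility": "government",
                    "reasoning": "contractualist", "policy": "liability",
                    "emotion": "mixed"}))
# []
```

A row missing a dimension, or carrying a stray key, yields a non-empty problem list, which makes malformed model output easy to flag before it reaches the table view.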
Raw LLM Response
```json
[
  {"id":"ytc_UghrtkIaEYufGXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UghCvNhEHN-AcngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgiOBS6RkHXMSHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgjwY2J2WgKWg3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UggUC5VN_TTCq3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz6otA0YsK1H_oU8AB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgyIkqhIIjdH2ymktNx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz_WZU3jCe3MYPLU4B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxAkrhfdggp5M7Mml14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw1Tu2KEOx1r2EjJj54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"mixed"}
]
```
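Looking up one comment's coding inside a raw batch response reduces to indexing the parsed array by `id`. A minimal sketch, assuming the response is valid JSON (the variable names are illustrative, not the tool's actual code; only two rows of the response above are reproduced for brevity):

```python
import json

# Two rows copied verbatim from the raw response above.
raw = """[
  {"id": "ytc_UghrtkIaEYufGXgCoAEC", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw1Tu2KEOx1r2EjJj54AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "liability", "emotion": "mixed"}
]"""

# Index coded rows by comment ID so a single comment can be pulled out
# without scanning the whole batch on every lookup.
by_id = {row["id"]: row for row in json.loads(raw)}

row = by_id["ytc_Ugw1Tu2KEOx1r2EjJj54AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# government contractualist liability mixed
```

The printed values match the Coding Result table for this comment, which is how the raw response can be cross-checked against the rendered dimensions.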