Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- ytr_UgwdUbzJ8…: "That's a great question! In the video, Sophia mentions that while she strives fo…"
- ytc_Ugye9iJ2I…: "In 1981 I began work as a CAD operator at Link Simulation Systems. There were 1…"
- ytc_UgwBx_xRJ…: "Who are these few people who get to decide our future? Idiots. AI should not be…"
- ytc_Ugx7YznFY…: "The question: What made us human and why we deserved rights but not robot ? Is f…"
- ytc_UgyymaWYY…: "We all are waiting for the insane to come out with 100% fake AI videos claiming …"
- rdc_ne1s36a: "Bro, AI ain’t doing much besides giving people company and being a sketchy encyc…"
- ytc_UgwVVQaoO…: "This is just the beginning of everyone losing there jobs bc of robots and AI…"
- ytc_UgxCoWnU7…: "So only place atheist win debate is in a AI debate where no real human being exi…"
Comment
The lack of human rights creates dysfunctional societies, so giving rights is not intended only to benefit the individual but also to benefit the group. The lack of robot rights currently doesn't disturb society because people can claim property damage. But if robots become capable of waging war against humans (with or without any sense of self-defense, like pain and emotion), that would change the game. And if robots become superhuman, then this discussion is as useful as cats deciding whether humans should have rights - we would be the cats. There's one problem with this discussion: why does everyone assume that future robots would be friendly to one another, that they would become an organized collective with a single objective? Humans don't, so why should they? As free thinkers and free agents, they are likely to diverge in their (superhuman) ideals too.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
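A coded record like the one above can be sanity-checked against the label sets for each dimension. A minimal validation sketch, where the allowed-value sets are inferred only from the records visible on this page (not necessarily the full codebook):

```python
# Allowed labels per coding dimension, inferred from the values observed
# in this sample -- an assumption, not an exhaustive codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "mixed", "approval", "indifference", "fear"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the allowed sets."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

# The coding result shown in the table above passes:
rec = {"responsibility": "none", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "indifference"}
print(validate(rec))  # []
```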
Raw LLM Response
[
{"id":"ytc_UggD3ftR4rVvoHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjHDsa6X9WSA3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgjHNgX2PLTXdXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg_-pM4pNajdXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgiRwHOTbYP9qHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghDAQ9vzeYcbngCoAEC","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghI9gxfJBEJK3gCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgjmvxyKpyiwLHgCoAEC","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ughf3kv_0SxIWXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgjldUSsX4ZuyngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
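The raw response above is a JSON array of per-comment coding records, one object per comment ID. A minimal sketch of parsing such a response and looking up a record by comment ID (the `raw` string below is abridged to two entries from the sample, and the variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of coding records, abridged here to two
# entries copied from the sample above.
raw = '''
[
 {"id":"ytc_UggD3ftR4rVvoHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugg_-pM4pNajdXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
'''

# Index records by comment ID for constant-time lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

rec = codes["ytc_Ugg_-pM4pNajdXgCoAEC"]
print(rec["policy"], rec["emotion"])  # regulate indifference
```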