Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
As a plumber, I know he's right! AI will be able to provide maximum support, and…
ytc_UgzDVRlnh…
So real artist just redrawn an AI art? I right get it? Its worse than pathetic…
ytc_Ugydk4HeA…
This would mean something if the standards on what would be considered art and "…
ytr_UgzI_sc6M…
If officials in a foreign country confiscated my passport I would be demanding t…
rdc_cjouph8
This is an issue: too much marketing hype with “autopilot”. It should be called …
ytr_UgzkuhAb4…
I just think that AI should be regulated, it has a great capacity for creating n…
ytr_UgxM-W8lE…
Can you imagine telling someone from the 90s or early 00s that we'd have facial …
ytc_UgxHDBFEM…
That's a really insightful point! Sophia's response touches on that idea—while k…
ytr_UgzgmKTIz…
Comment
It's a pretty clear answer. If artificial intelligence got to the point where the entity became sentient, then it deserves rights. Imagine if you were unable to feel pain or fear, that still does not mean that I should be allowed to harm or kill you. Also, I think it's quite obvious when we say sentient, it's fair to assume the AI would possess some kind of survival instinct.
youtube
AI Moral Status
2017-02-23T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgiglmrSOaC-V3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UggcVwGpN4yVdngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghMjRhW38shAXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgjoVoZqTEOe13gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgiNylxDqi2bZ3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiI_3TVpi3Nz3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugjfwf1Bv-_gf3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgjgEocxIvWv6XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgjgDdTu4uD0RngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UggLE8qEuXr8zngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
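Since the raw model output is a JSON array of per-comment codes, looking up a coded comment by ID amounts to parsing the array and indexing it. A minimal sketch of that lookup, assuming the response is well-formed JSON with the fields shown above (the helper name `index_codes` and the shortened two-record sample are illustrative, not part of the actual pipeline):

```python
import json

# A shortened sample of the raw response shown above: a JSON array
# of records, one per comment, keyed by comment ID.
raw_response = """[
  {"id": "ytc_UgiglmrSOaC-V3gCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UghMjRhW38shAXgCoAEC", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def index_codes(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_UghMjRhW38shAXgCoAEC"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each coded dimension for a comment is a single dictionary access rather than a scan of the full response.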