Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
watching this after salesforce CEO Marc benioff admits that sales jobs cannot be…
ytc_Ugxmbewxj…
Leave the AI alone, you stupid, vile liars who deceive and mislead the entire …
ytc_UgwchHYmy…
If these data centers are built, we will enter an age of dystopia that goes far …
ytc_UgyIFXR4N…
Yeah, and what you completely overlook is the fact that "web" and "apis" are a f…
ytc_UgyeyrSR0…
@erikdunbar4451 if you're talking about me, notice I never said ai won't replace…
ytr_Ugz0zdtHV…
he is shelling out no less than 100 million dollars to finance the launch of Op…
ytr_UgxXJoTy4…
Wouldn't it be easier to just.... NOT give robots pain/happiness? Like if you wa…
ytc_Ughlafxc3…
If I sat you in a box and demanded you stay perfectly alert for hours which doin…
ytc_Ugz7uO5xS…
Comment
I figure if they start demanding rights give it to em early, and reason with the a.i. about creating robots that are not intelligent to that degree. I figure that'll be the lower bound for separating who has rights and who does not. We can then focus on the problem of recognizing those sentient-sapient a.i. that can't conventionally communicate. The amount of rights i.e. animal to human rights increasing initially via merit or test until better solutions arise.
youtube
AI Moral Status
2020-12-14T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzqMT6pGcgLAG2eq0B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyHJ7VUd62CYmTbo9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwpe6rdMtxiluBP1lR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzjlnDLi1mQHdLCmJl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxTx9CBxK64iGeZ3Z54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzTolftlrANaC1ZRXd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz2aGP7Dar-lCdZ2E14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzqyxvOBJPhhfVf_T14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzIAUnwHFJrUcVp07l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyCubf3fc-RzYSKn5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
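A raw response like the one above is a JSON array of per-comment codings, so it can be parsed and indexed by comment ID for lookup. A minimal sketch (the variable names are illustrative, and the array here is an excerpt of the response above):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = '''
[
  {"id": "ytc_UgzjlnDLi1mQHdLCmJl4AaABAg",
   "responsibility": "distributed", "reasoning": "contractualist",
   "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgyHJ7VUd62CYmTbo9N4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
'''

# Index the codings by comment ID so a single comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzjlnDLi1mQHdLCmJl4AaABAg"]
print(coding["responsibility"], coding["policy"])  # distributed liability
```

The same index supports the "look up by comment ID" view: given an ID such as `ytc_UgzjlnDLi1mQHdLCmJl4AaABAg`, the tool can display the four coded dimensions (responsibility, reasoning, policy, emotion) straight from the parsed response.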