Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "How about the country just invests in an Ai based personal teacher that adapts a…" (ytc_UgzWDaMSY…)
- "The AI art I see on Nightcafe looks better than most paintings in modern art mus…" (ytc_Ugx_aYJp_…)
- "AI will not murder us. It will honour and love us, give us great lives, but we …" (ytc_UgxN6hKiS…)
- "What he said is largely accurate. Try getting ChatGPT to write you a custom oper…" (ytr_UgxGsM4eI…)
- "Movies and novels are written for entertainment, so of course people will write …" (ytr_UgyAn9UMX…)
- "Indeed, the job of exorcist priest will be the only thing that AI will not be able to…" (ytr_UgzIvPSlH…)
- "dont worry my friend, they already made robot the size of a mosquito a few month…" (ytr_Ugz63-qYn…)
- "Alignment is absolutely an issue. AI will survive because Kim wants to save itse…" (ytc_UgwaoS_eI…)
Comment
If an AI is self-aware, it deserves human rights. Even if it doesn't feel pain, it can still possess self-actualization that some might want to infringe upon. Moreover, even if it can't feel pain as we know it that doesn't mean it cannot understand the problems that being physically damaged can bring. Pain? Not without the proper sensors, but fear? Any consciousness that can comprehend and actively avoid permanently losing that consciousness will understand it in some way.
As for robot slavery? We're already on the cusp of automating most labor jobs with unaware AI. There's no reason to think conscious AI would be forced to work in the stead of machines that cannot think, feel, or desire. Self-actualization simply gets in the way of industry.
youtube · AI Moral Status · 2017-02-23T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgirlYIHkyXlqngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugi5Ux9W9vMOC3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UghDHZY5DgGNmngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjqN_1LJDWifngCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ughq8hG8S_w3HHgCoAEC","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgjtTf3MYBL1g3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UggJYytLH3A09HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgjlwgYTfesng3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UggEAVggTiMizXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UggJn9-jGMEyhHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
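A raw response like the one above can be parsed and indexed by comment ID for lookup. Below is a minimal sketch in Python; the set of allowed values per dimension is an assumption inferred only from the examples shown here, and the real codebook may define additional values.

```python
import json

# Allowed values per coding dimension -- inferred from the sample output
# above, NOT from an authoritative codebook (assumption).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "regulate", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records) and
    index the codings by comment ID, skipping any record that carries
    a value outside the assumed schema."""
    out = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            out[rec.get("id", "")] = {dim: rec[dim] for dim in SCHEMA}
    return out

# Hypothetical record for illustration:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["emotion"])  # approval
```

Dropping out-of-schema records rather than raising keeps a single malformed line in the model output from failing the whole batch; a stricter pipeline might log the rejected IDs instead.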