Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID (a small lookup sketch appears after the raw response below).
Random samples:
- "ai art is going to die like the nft, they claim it’s the future, yet cope and se…" (ytc_UgzZv_lSf…)
- "It's really well done, one day when there will really be robot women and men…" (ytr_UgyhjPLv-…)
- "Ima express myself using whatever tools I want, you can all cry about it and be …" (ytc_UgzooldsJ…)
- "I suspect how many will actually appreciate the danger this scientist is pointin…" (ytc_UgxJG3T1S…)
- "Here’s my take: There are multiple examples of job retraining not working, not e…" (ytc_UgzcORwrP…)
- "...and this will be what messes many people up when they have robots with Ai....…" (ytc_Ugw5QkqL1…)
- "AI that's "free" now is definitely going to be a subscription service later, so …" (ytc_Ugyfurm3j…)
- "Headline is misleading. The AI also has a syringe that can inject you with a hea…" (rdc_irekin0)
Comment
One thing that I feel is not touched upon much at all is ethics _towards_ conscious AI. Like, if it can actually feel, wouldn't pulling the plug basically equal to killing it? Wouldn't trying to force alignment be taking away its free will? How would it feel about that? Maybe it'd get angry at us for doing that, and maybe that'd make it more likely to take revenge? People have hangups about mistreatment of animals, and now we're talking about something that feels just as much as us and can talk to us on an even level (or even a higher level).
So the problem here is not just that this could be dangerous to humankind, but that we should not make it in the same way bad parents shouldn't have kids. We would have no choice but to mistreat a sentient AI for our own survival. So even if we manage to figure out how to make sure we're safe from existential threat, we should not create sentient AI.
youtube · AI Moral Status · 2023-08-22T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
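
The table above flattens one record from the model's JSON output; its values match the ytc_UgwQh6… entry in the raw response below. As a quick sanity check, here is a minimal validation sketch in Python. `OBSERVED_VALUES` and `validate_record` are illustrative names, and the value sets include only the categories visible in this one sample response, so the full codebook may define more.

```python
# Minimal validation sketch for one coded record. The value sets below are
# only those observed in the sample response on this page; the real codebook
# may define additional categories.

OBSERVED_VALUES: dict[str, set[str]] = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if it looks fine)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in OBSERVED_VALUES.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems
```

Running `validate_record` over each element of a parsed response flags any record that drifts from the observed schema.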
Raw LLM Response
```json
[
{"id":"ytc_UgwZjgDLeXXWVTaZHF54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyIY5r0UoHoWlIYxB14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwCB76GgXS1Aw_nOkB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwzm1wch7_yL77N0jZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQh6Ubil4LS4VG9wJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx2_LaWI1ym4hchpg94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxqk7PZhy9hG16B7J94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxXqDuCsqGlt8r3e0R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzqBCWxRjS8kjSzyjB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5a1GQCKUn5fzTOKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
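
To support the comment-ID lookup mentioned at the top of this page, a response like the one above can be parsed and indexed in a few lines. This is a sketch rather than the page's actual implementation; `index_by_comment_id` and `raw_response` are assumed names.

```python
# Parse a raw LLM response (the JSON array above) and index it by comment ID.
# `raw_response` is assumed to hold the JSON string exactly as the model returned it.
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Map each comment ID ('ytc_...', 'ytr_...', 'rdc_...') to its coded dimensions."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Hypothetical usage, matching the Coding Result table above:
# coded = index_by_comment_id(raw_response)
# coded["ytc_UgwQh6Ubil4LS4VG9wJ4AaABAg"]["emotion"]  # -> "fear"
```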