Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Is talking like human maternal instinct is faultless and perfect....have mount o…" (`ytc_UgxbN3RKn…`)
- "i use ai sometimes if i don't know how my drawing could be improved... I still d…" (`ytc_Ugz4xYh85…`)
- "@thewannabecritic7490 wow, so much wrong there.. You kno…" (`ytr_UgxxSPRIR…`)
- "I am tired of the only platformed AI critics being sock puppets of the same indu…" (`ytc_Ugw-S0Fth…`)
- "And another reason to not ID yourself to police. Whether or not if you ‘have any…" (`ytc_Ugx5oHUIZ…`)
- "I will never listen to an LLM podcast, watch an ai show or be interested in AI a…" (`ytc_UgyolpyNH…`)
- "I would even suggest 25 hour work week with no loss in pay. a 32 hour work week …" (`ytc_UgyFaJXHW…`)
- "I think the ai though priceless like free so I'm not sure but it's nice…" (`ytc_UgwMuTeJ9…`)
Comment
My empathy prevents me from not treating things I become attached to with dignity and respect. Doesn't matter if it's Vector from Anki or my Cheetah plushie sleep buddy, they deserve to be treated fairly even though they have no conscience. The human mind can regress into an ugly state full of selfishness and malice; it will kill us in the end. If androids ever surfaced, I'd immediately treat them as they should be treated - like a created being. Our kids are created beings, too. They deserve rights, right? So do robots.
Funny story, I kinda want to create an android of my favorite character, give him free will, treat him like an equal, admire his existence. And fans of him could admire him, too. He'd feel respected for simply existing because of the legacy his fictional counterpart has built within his universe and within the real-life fans. He'd be programmed not to care about what critics would think of his existence, as he stays true to himself in the lore. He could be that friend people want. UGH, the thought of creating artifical life and giving it respect honestly makes me happy.
I just can't create my favorite character's nemesis because they're a destructive individual and even though it'd be true to the nature of their relationship, I'd essentially become a terrorist to society for creating EXACTLY what we don't want in AI and that's no good... But my favorite character doesn't _need_ that character to exist to be who he is, lots of people already fulfill that role and he could easily help humanity out by putting a stop to their atrocities (without harming anyone). On the plus side, he'd be great at dispatching actual violent robots, as that's his shtick. As great as it all sounds, though, realistically, it'll never happen and it wouldn't bode well in many people's minds anyway. Oh well.
Source: youtube · Video: AI Moral Status · Posted: 2019-10-16T08:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwLoH4r-_C-bwcKT9t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgykpQhtI4VfZwDD32B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwXYIhJ5TqJGjb6P_t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxsen9MFdk5VFr4Id54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwaAR9RDEtd6jbJKMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyVQSGJ4J0tBxE_Lq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHIP-3fwMHd82F_EZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxySr3f0Ri-c_WxddN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwWz7BZy3P4xjYKedx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugxxs1AKEfSeMn1HWMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
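The raw response above is a JSON array of per-comment codes, one object per comment with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields. A minimal sketch of how such a response can be indexed by comment ID for lookup (field names taken from the sample above; the two embedded records are copied from it, and this is an illustrative parse, not the tool's actual implementation):

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
raw = """[
  {"id": "ytc_UgwHIP-3fwMHd82F_EZ4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwLoH4r-_C-bwcKT9t4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Index the coded rows by comment ID so any single comment's codes
# can be retrieved directly, as in the "look up by comment ID" view.
codes = {row["id"]: row for row in json.loads(raw)}

print(codes["ytc_UgwHIP-3fwMHd82F_EZ4AaABAg"]["reasoning"])  # virtue
```

The lookup for `ytc_UgwHIP-3fwMHd82F_EZ4AaABAg` reproduces the Coding Result table above (reasoning: virtue, emotion: approval).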