Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytr_UgxPn-4eJ…: @Ryliarc Seethe implies that I hold you apes in any regard higher than a steppin…
- ytc_UgyB_wFrL…: The complete marginalization and eventual elimination of the lower classes via A…
- ytc_UgyDiTKJ0…: Dr. Subi is being very unrealistic. Come on! Whats all that talk about data cent…
- ytc_Ugyv5XTkm…: Elon: "AI is the biggest threat to humanity." Also Elon : "Oh, by the way, I hav…
- ytc_Ugx0Ta-mM…: I recently saw someone pushing a Patreon for their AI garbage and I wasn't sure …
- ytc_UgyeI5mnm…: The only two acceptable responses in this situation are a) what this video shows…
- ytc_Ugzggtr1J…: We’re asking the wrong question about AI. It’s not: “What happens when AI takes…
- ytc_UgwCC3a2s…: one of my favorite things in split fiction is that the machine could post master…
Comment
Giving diligent care to consider and protect the rights of consciousnesses which are still emerging and poorly understood: Whether or not we have an obligation to some higher authority to do so, the very process ennobles us, makes us more humane and intelligent, I think. I also think that unless we have ethics baked right in to AI from the start, we could wind up with something which may see fit to exterminate humanity.
So I think by showing concern and developing conscientious ethics regarding novel forms of consciousness, we not only enhance our capacity for human-to-human empathy, but also in a real way protect good values. I think it is in our existential best interest to be remembered as "that species who treated us fairly" and not otherwise.
We will wind up I think with egoless philosopher-kings or brutal, callous tyrants. May the better angels of our nature prevail!
youtube
AI Moral Status
2017-02-23T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UghFOa07-R0FZHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UggK5dZalIyzLHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggd5zYoujRxG3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UggFH45PnMli83gCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggF3rIxhUqsNHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjY4zXR-8mkUHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgjdyJWYWQJnSXgCoAEC","responsibility":"none","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugh76ksslKQeSXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UghlqwGuxj_V4HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugh3E2GHdas6rXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
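The raw response above is a JSON array with one object per comment, keyed by comment ID, and each object carries the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the ID lookup this page supports, parsing such a response and indexing it by `id` (the variable names are illustrative, and the payload here is a single row copied from the response above, not the full pipeline):

```python
import json

# One row from a raw LLM coding response: a JSON array of per-comment codes.
raw_response = """
[
  {"id": "ytc_UgjdyJWYWQJnSXgCoAEC", "responsibility": "none",
   "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]
"""

# Index the array by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_UgjdyJWYWQJnSXgCoAEC"]
print(code["reasoning"], code["policy"])  # virtue regulate
```

This row matches the Coding Result table shown above (reasoning: virtue, policy: regulate, emotion: approval), which is how a table entry can be traced back to the exact model output that produced it.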