Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- What Tyson misses, is that we live in a capitalistic World. AGIs (or AIs) are a … (`ytc_Ugwm_U6yL…`)
- 3:15 "we don't know what the ideal worker skills will be." Managers will ask AIs… (`ytc_UgzW-iJ5N…`)
- I don't fear AI. I fear how it will be used and who will be allowed/disallowed … (`ytc_UgyZsxeSU…`)
- 99% of jobs and incomes' gone, so essentially no one has money. No one can buy r… (`ytc_UgxGaYTQ6…`)
- HUMANS can. These armature "artists" training to bully ai artists can't. They wo… (`ytr_UgxsRwSFJ…`)
- No, because ChatGPT does this randomly. I used it for a random research paper an… (`ytr_Ugzmmh66-…`)
- Absolutely, that's a great way to put it! Wisdom goes beyond just having knowled… (`ytr_UgzdCnafg…`)
- Y'all to scared of ai. Just like Aliens its unclear what the wants are for ai an… (`ytc_UgyZtk_aH…`)
Comment
Just a word of caution: Our superpower as humans (imo) is our ability to empathize with anything we see as reflecting back a bit of our humanity.
Ghost in The Shell is a story we made up! It only works because it tugs at our heartstrings by asking us to empathize with something that displays a noticeable *humanity*. And thus the empathy comes easy! And thus the story becomes good! This is the main reason you (and so many of us) still connect with the story.
It feels weird to me to use a human-made story to understand real AI, something which arises not to tug at our human empathy, but out of the much-less-sexy reality of statistical algorithms and ML techniques.
reddit · AI Moral Status · 1749784575.0 (Unix timestamp) · ♥ 37
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mxg663b","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mxi1v7q","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"rdc_mxg2tr7","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"rdc_mxhv163","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_mxi0zue","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
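The raw response above is a JSON array of per-comment codes, one object per comment with an `id` plus the four coding dimensions shown in the result table. A minimal sketch of how such a response might be parsed and indexed by comment ID follows; the function name `parse_codes` and the validation logic are illustrative assumptions, not part of the tool itself.

```python
import json

# Example raw LLM response, copied from the array shown above (truncated to two entries).
RESPONSE = '''[
  {"id":"rdc_mxg663b","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mxi1v7q","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]'''

# Keys every coded entry must carry, per the schema visible in the response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict[str, dict]:
    """Parse a raw coding response into {comment_id: codes}, rejecting malformed entries."""
    coded = {}
    for entry in json.loads(raw):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} is missing keys: {missing}")
        # Store every dimension except the ID itself under that ID.
        coded[entry["id"]] = {k: entry[k] for k in REQUIRED_KEYS - {"id"}}
    return coded

codes = parse_codes(RESPONSE)
print(codes["rdc_mxi1v7q"]["reasoning"])  # -> virtue
```

Indexing by ID mirrors the "Look up by comment ID" workflow above: once parsed, a single comment's codes can be fetched directly to render a table like the one shown.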