Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Im tired of hearing Elon talking about ai taking over when bro is literally maki…" (ytc_Ugy6Kzw6S…)
- "AI make me and other Artist lost client... I feel like I wanna cry... I haven't …" (ytc_UgywB33N9…)
- "I NOW UNDERSTAND WHY IN THE MOVIE AGENDA 2030 THEY WERE SAYING OUR NEIGHBORS N K…" (ytc_Ugzh0DM1a…)
- "what the f with the comments here?...he is telling you that the bros building th…" (ytc_UgzY15T6A…)
- "Reminds me, when I started trucking and drove nights, I had two instances of Tes…" (ytc_Ugw6AuoUY…)
- "God forbid we focus on creativity, life skills, time management, taxes, things t…" (ytc_Ugyvh5ZgG…)
- "google AI trolls me every day, or every other day, with the only true religion o…" (ytc_UgzkTZ6is…)
- "I remember a debater used ChatGPT to her opponent. While people are so amusing t…" (ytc_UgzrqM0-e…)
Comment
I would guess that allowing AI to feel pain would be part of a system that imbues them with the capacity to sympathize and empathize with people in a genuine manner. We usually think only of cognition when we think of AI, not emotion or socialization. Some assert that we could not manage to "program" or craft an algorithm for the experience and expression of emotion, but I still wonder as both an electrical engineering and education major.
If we grant them the capacity to feel carefully varied degrees physical "pain" as a precursor to "death" and emotional "pain" in response to "loss", which we then "hardwire" for them to prefer to avoid based on degree through whatever "learned" means they might develop (learning algorithms, ho!), we may start seeing AI that begin to behave similarly to humans (if we are doing it right, they may have to be "raised" like infants into an "adulthood"... they wouldn't be mature straight away and might even have to be limited in functionality the way a human baby is weak and small... the point is that childhood development principles will likely apply in some way that demands they develop competencies that any other human has to develop, which means that we probably need to create similar conditions for the logic of "building" a proper HUMAN adult), which could both generate/emulate an actual moral agent and drive the existence of AI into an uncanny valley from which they emerge as either a monstrous existence or a practical offshoot/successor for humanity.
For the record, since someone else was talking about the Simpsons, I'll say that I think about Nier:Automata while considering this possibility.
reddit
AI Responsibility
Timestamp (Unix): 1615793518.0
♥ 3
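The timestamp in the metadata above is stored as Unix epoch seconds. A minimal sketch of converting it to a readable UTC date with the standard library (the variable name `posted_ts` is ours, not a field name from the tool):

```python
from datetime import datetime, timezone

posted_ts = 1615793518.0  # Unix epoch seconds, as shown in the metadata

# Interpret the epoch value in UTC to avoid local-timezone skew.
posted = datetime.fromtimestamp(posted_ts, tz=timezone.utc)
print(posted.isoformat())  # 2021-03-15T07:31:58+00:00
```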
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_gqt7xtl","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_gqzorkg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"rdc_gqulz53","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_gqu5do0","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_gqu2yzp","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
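The raw response is a JSON array with one record per comment in the batch. A minimal sketch, assuming only this array shape, of parsing the response and looking up one comment's codes by ID (standard library only; the variable names are ours):

```python
import json

# Raw model output, copied verbatim from the batch response above.
raw_response = """[
{"id":"rdc_gqt7xtl","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_gqzorkg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"rdc_gqulz53","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_gqu5do0","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"rdc_gqu2yzp","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]"""

# Index the batch by comment ID so a single comment can be inspected directly.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

print(codes_by_id["rdc_gqzorkg"]["emotion"])  # approval
```

Indexing by `id` mirrors the "look up by comment ID" view above: each dimension of the coding result (responsibility, reasoning, policy, emotion) is then a plain dictionary access.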