Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> If the AI is smart enough to not touch fire, it doesn't need pain to prevent it from touching fire
Pain doesn't exist for an AI. Until it learns it, creating pathways in the "fake" neurons that activate when it hits something it doesn't like.
Once those are in place, the AI reacts exactly as a human does with something that is incredibly analogous to what humans have.
And that's the sticker. That's why we fail to recognize this. These aren't human emotions. We shouldn't expect them to be. They will be radically different not only between human and AI, but between different AI.
> And in many situations that would even have its advantages, as a robot might either be disposable or repairable, its survival might be of no importance.
The emotions and self awareness need not be human to be emotions and self awareness. We are built to self preserve. AI are built to an error function that may or may not include self preservation.
Different in outcome does not mean different in function or purpose. We feel pain as a means to an end. These AI do as well, without being coded to, for the same reason we do. Even if the "end" is different.
> Same goes for self awareness, that's important for us due to being body-bound, but an AI does not have such limitations.
You (any system) fundamentally have to be self aware to learn. If you aren't self aware then the way you tweak your own thought process cannot be guided to a productive end.
Source: reddit · Thread: AI Moral Status · Posted: 1663182703 (Unix timestamp, 2022-09-14 UTC) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_iofnf10","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_iodxwab","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_ioekcis","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_ioffat0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_it977bk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
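The raw response above is a JSON array of coding records, one per comment, each carrying the same dimensions shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`) keyed by comment `id`. A minimal sketch of how a coded record could be recovered from such a response — the `lookup_coding` helper and the truncated two-record sample array are illustrative, not part of the tool:

```python
import json

# Illustrative excerpt of a raw model output: a JSON array of coding
# records, one object per comment, keyed by comment ID.
raw_response = """
[
 {"id": "rdc_iofnf10", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none",
  "emotion": "indifference"},
 {"id": "rdc_iodxwab", "responsibility": "none",
  "reasoning": "deontological", "policy": "none",
  "emotion": "indifference"}
]
"""

def lookup_coding(raw, comment_id):
    """Parse a raw LLM response and return the record for one comment ID,
    or None if the model did not emit a record for that comment."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

coding = lookup_coding(raw_response, "rdc_iofnf10")
print(coding["reasoning"])  # -> consequentialist
```

Scanning the parsed array linearly is fine at this scale; a tool indexing many thousands of codings would instead build a `{id: record}` dict once and look up in constant time.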