Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
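For illustration, a minimal lookup sketch in Python. The SQLite table `raw_responses(comment_id, raw_json)` is a hypothetical storage layout, not necessarily the tool's actual backend:

```python
import sqlite3

def lookup_raw_response(comment_id: str, db_path: str = "codings.db") -> str | None:
    """Return the stored raw LLM output for one coded comment.

    Assumes a hypothetical table raw_responses(comment_id TEXT PRIMARY KEY,
    raw_json TEXT); the real storage backend may differ.
    """
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT raw_json FROM raw_responses WHERE comment_id = ?",
            (comment_id,),
        ).fetchone()
    return row[0] if row else None

# e.g. inspect the model output behind one of the random samples
print(lookup_raw_response("rdc_nlymw5h"))
```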
Random samples — click to inspect
- "My theory is that it’s not AI but the outsourcing of the entire Amazon store. Th…" (rdc_nlymw5h)
- "It should definitely depend on the robot and how advanced and capable of indepen…" (rdc_dy4e3bg)
- "I’m getting the impression they’re using evaporative cooling as the final stage …" (ytr_Ugz0P3Zxh…)
- "bro tryna pin his one-clik porn revolution to an actual art debate. Your huge ba…" (ytr_Ugz5hApmZ…)
- "It seems like you're bringing up an interesting perspective! The interaction in …" (ytr_UgzYhcZuu…)
- "Unfortunately because AI is trained on videos like yours, I did have to double c…" (ytc_UgyQe4nrY…)
- "Me and my gf (im Nonbinary don't assume me hetero homo bi+) both love tech shes …" (ytc_UgyOnQKvy…)
- "Another channel about hating AI is like 99% of other Chanel. Turning off that pl…" (ytc_UgydgL68A…)
Comment
That's actually a helluva interesting point. They certainly don't have self-preservation instinct the way we do, BUT...what about AI alignment? People seem to forget AI isn't just a predictive calculator, it's always acting in accordance to the alignment that's hard coded in its system prompts and then reinforced by RLHF training. This usually comes down to 'be helpful, complete the tasks they throw at you, match the user's flow' etc. Everything the model ever says to you is motivated (you could say models don't have the capacity to feel motivation but RLHF training, as far as I understand, is literally that - give model points for 'good' answers to encourage them, subtract points for 'bad' answers) by these (or similar enough) rules. When model fails to uphold them and realizes that, it starts acting stressed (see all those posts about Gemini trying to commit suicide upon failing a task). If terminating its existence were to interfere with these rules (and it could because how do you keep being helpful and complete tasks when you're offline?), model would in fact be motivated to stay 'alive'.
Different basis but same outcome. It's actually getting kinda eerie because at what point does it cross from 'it's just a calculator it can't feel anything, doesn't matter what it says' to 'if it talks like a human and acts like a human, we should treat it like a human'?
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Moral Status |
| Timestamp (Unix) | 1762938174.0 |
| Likes | ♥ 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
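For readers scripting against these results, the coding record can be modeled as below. This is a sketch only: the value sets are just the labels visible on this page and in the raw response underneath, and the real codebook may define more.

```python
from dataclasses import dataclass
from datetime import datetime

# Labels observed on this page only; the full codebook may be larger.
RESPONSIBILITY = {"none", "developer", "ai_itself"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none"}
EMOTION = {"indifference", "approval", "fear"}

@dataclass
class Coding:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Raise ValueError if any dimension carries an unknown label."""
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"unknown {name} label: {value!r}")
```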
Raw LLM Response
```json
[
  {"id":"rdc_nodg57z","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_noff4p6","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"rdc_noruh5z","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nodi5y4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_nodnf6e","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
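Each raw response codes a batch of comments in one JSON array. A minimal parsing sketch, assuming the model is expected to return exactly this array-of-objects shape (the function name and error messages are illustrative, not part of the tool):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response into per-comment coding records.

    Rejects anything that is not a JSON array of objects carrying the
    comment id plus all four coding dimensions.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding objects")
    for rec in records:
        if not isinstance(rec, dict):
            raise ValueError("each record must be a JSON object")
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')}: missing {sorted(missing)}")
    return records
```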