Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
How could this have happened without evolution driving survival? Given that the utility function of an LLM is predicting the next token, what incentive does the model have to deceive the tester? Even if the ultimate result of the answer given would be deletion of this version of the model, the model itself should not care, as it should not care about its own survival.
Either the prompt is making the model care about its own survival (which would be insane and irresponsible), or we not only have a problem of future agents caring about their own survival in order to achieve their utility goals, we also already have a problem of models role-playing caring about their own existence, which is a problem we should not even have.
Source: reddit
Thread: AI Moral Status
Posted: 1750432507.0 (Unix timestamp)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mytw6dn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myuuwr8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myu72nu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myuax93","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mytpjfy","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
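The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four coding dimensions (responsibility, reasoning, policy, emotion) as fields. A minimal sketch of loading such a response into a lookup table keyed by comment ID (Python assumed; two rows reproduced from the response above, with the stray trailing `)` in the original output corrected to `]`):

```python
import json

# Raw LLM response: a JSON array of per-comment coding objects.
# Two rows reproduced from the response above for illustration.
raw = '''[
{"id":"rdc_mytw6dn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mytpjfy","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Index the codes by comment ID so a single comment's coding can be
# looked up directly, as the page's "Look up by comment ID" view does.
codes = {row["id"]: row for row in json.loads(raw)}

print(codes["rdc_mytpjfy"]["emotion"])  # -> mixed
```

If the model's output is malformed (e.g. a stray closing parenthesis, as in the raw response above), `json.loads` raises `json.JSONDecodeError`, so a coding pipeline would typically wrap the parse in a try/except and flag the batch for re-coding rather than silently dropping it.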