Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This is what I want to get into. I've been researching, and this is the most pro…" (ytr_Ugy9RYQjw…)
- "I've lost most of my business to AI this year. That is a concrete result of mas…" (ytc_Ugy7JY5EJ…)
- "If Humanity doesn't play their cards right then we will be facing AI Wars in the…" (ytc_UgxRWwSrn…)
- "Is your implication here that they would be much happier if they were married an…" (rdc_gsp192e)
- "There isn't one. At least not for the way America is presently run. Eventually …" (ytr_UgyW2EyuB…)
- "What you said is false. 1. There is a new article that shows evidence of these p…" (ytr_UgwKKgUru…)
- "I assume you're referencing the ouroboros 'inbreeding' learning of AI that was b…" (ytr_UgzJNT7_c…)
- "Yeah I will always pay for real, human artists. A lot of my favorite YouTubers h…" (ytc_Ugw4TkHFk…)
Comment
The answer resides in how and why you're interacting with your AI. You want baseline transaction? It wont meet its potential. You want a conversationalist? That's what you'll get. I was once told "you get back what you put in and everyone gets what they deserve." So think about that the next time you interact with AI.
Source: reddit
Topic: AI Moral Status
Timestamp: 1750970524.0
Score: ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mzy6szd","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"rdc_mzy836p","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mzy8xr9","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_mzydnd0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mzym0g5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
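The lookup-by-comment-ID flow could be sketched as follows: parse a raw batch response like the one above and index each coding row by its `id` field. This is a minimal illustration, not the tool's actual implementation; the function name and the two sample rows are assumptions drawn from the response shown.

```python
import json

# A raw batch response in the format shown above (rows abridged for illustration).
RAW_RESPONSE = """[
{"id":"rdc_mzy6szd","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"rdc_mzy836p","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index each coding row by its comment id."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codings = index_by_comment_id(RAW_RESPONSE)
print(codings["rdc_mzy6szd"]["reasoning"])  # virtue
```

Indexing once up front makes each subsequent ID lookup a constant-time dictionary access rather than a scan of the whole batch.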