Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
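The lookup described above can be sketched as a simple ID-to-record index. This is a hypothetical sketch, not the tool's actual implementation: the field names mirror the raw LLM response shown further down the page, but the storage layout and the `lookup` helper are assumptions.

```python
# Hypothetical sketch: index coded responses by comment ID for O(1) lookup.
# The records here are abbreviated copies of the raw LLM response on this page;
# the list-of-dicts storage layout is an assumption.
coded = [
    {"id": "rdc_liwc0qv", "responsibility": "user", "emotion": "outrage"},
    {"id": "rdc_liwrn5t", "responsibility": "none", "emotion": "indifference"},
]

# Build the index once, then answer each lookup with a dict access.
by_id = {row["id"]: row for row in coded}

def lookup(comment_id: str):
    """Return the coded record for a comment ID, or None if absent."""
    return by_id.get(comment_id)
```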
Random samples — click to inspect

- `ytc_UgyXbgdIq…`: I remember reading about this on Ars Technica. We are in so much trouble.
- `ytc_UgwK3Zb0g…`: This guy is a nutcase. Even if AI doesn't willfully kill us itself, as a tool t…
- `ytc_UgyGTNYeG…`: I'm honestly pretty tired of all the doomsday AI prophecies. Let's assume we d…
- `ytr_Ugzgtq1vW…`: @cmaurice9133 AI can be fooled by simple patterns and colors, because it literal…
- `ytr_UgxqbGOis…`: + "ai is inevitable" so is me doing the do with your father 🙄 ever heard of peop…
- `ytc_UgyL2j28m…`: If there's any chance AI could eventually gain nuclear launch codes I would unpl…
- `ytc_Ugwpn9SxA…`: I’m sorry, but this isn’t an effective argument against AI—and the proof is in t…
- `ytc_Ugw3q52aX…`: AI is like surfing a wave. Those that stay ahead of it will propel forward into …
Comment

> "But since it's not, here's a bunch of our own AI images for our campaign."

reddit · AI Harm Incident · 1724090776.0 (Unix time, 2024-08-19 UTC) · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_liwc0qv","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"rdc_liwrn5t","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_livvleb","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_liw6yql","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_liw3p6m","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
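A raw response like the one above can be parsed and sanity-checked before the codes are stored. This is a minimal sketch: the allowed value sets are inferred only from the responses visible on this page, not from the project's actual codebook, and `validate` is a hypothetical helper.

```python
import json

# Allowed values per coding dimension. These are an assumption drawn from the
# sample responses above; the real codebook may permit other values.
ALLOWED = {
    "responsibility": {"user", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "liability", "none"},
    "emotion": {"outrage", "indifference"},
}

# Two of the records from the raw LLM response shown above.
raw = '''[
  {"id":"rdc_liwc0qv","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"rdc_liwrn5t","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

def validate(records):
    """Yield (record id, offending field) for any out-of-vocabulary value."""
    for rec in records:
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                yield rec["id"], field

records = json.loads(raw)
problems = list(validate(records))  # empty list when every code is in vocabulary
```

A record with an unknown value, e.g. `"responsibility": "robot"`, would surface as `("<id>", "responsibility")` in `problems`, so malformed model output is caught before it reaches the coding table.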