Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
so ais first kill yet theres a 20 minute video where its all the same thing, ai …
ytc_UgyBSvgPB…
The correct answer for what would an AI that HADN'T seen the clip in advance gen…
ytc_Ugwt9jpR7…
Tracing the money in the AI bubble shows that it is circular. There’s no long-te…
ytc_UgxpQna_R…
Let the Robot destroy one person/student first and see what happens to the Robot…
ytc_Ugx40aSB2…
Yess so true. But also those videos about “do i draw bette than ai?”. Yes. Its m…
ytc_UgzmhujYa…
Considering how many pieces of media we have that show why unchecked AI are nig…
ytc_Ugz3SMmYh…
The biggest danger is AI reaching self-sufficiency or (mistakenly) realising tha…
ytc_Ugzk9WKRJ…
A lot of questions was answer with “one” word kkkk THIS VIDEO / the story / the …
ytc_UgyNdNcNF…
Comment
Holy shit, LMAO. If someone’s going to use AI to illustrate a book, they should *at least* go over it in photoshop — you know, edit it, give meaning to it, and such. This just feels low-effort.
Source: reddit
AI Harm Incident
Posted: 2022-12-15 (Unix timestamp 1671131226)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_j0b5ipo","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_j0cxmjk","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_j09lin2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_j09tam1","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_j0aen8g","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
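The raw LLM response above is a JSON array, one object per coded comment, with the same five fields that populate the Coding Result table (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view — the `CodedComment` class and `index_by_id` helper are illustrative names, not part of the actual tool:

```python
import json
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment, mirroring the fields in the raw LLM response."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

def index_by_id(raw_response: str) -> dict[str, CodedComment]:
    """Parse the model's JSON array and index records by comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: CodedComment(**row) for row in rows}

# Two records copied verbatim from the raw response above.
raw = '''[
  {"id":"rdc_j0b5ipo","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_j0cxmjk","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

coded = index_by_id(raw)
print(coded["rdc_j0cxmjk"].emotion)  # → outrage
```

A failed parse (e.g. the model returning prose instead of JSON) raises `json.JSONDecodeError`, which a batch-coding pipeline would typically catch and retry.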