Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “It makes me happy to see the inevitable fall of AI image generators bit by bit…” (ytc_UgyE0grH1…)
- “They really don't realize that their hate is feeding you. Keep up, I was laughin…” (ytc_Ugw8io_rG…)
- “If Artificial Intelligence will eventually knows how to do everything we can, th…” (ytc_Ugz5tOXGN…)
- “Negative prompts are ignored quite often, in every model. Also most models are s…” (ytc_UgxbhtNWt…)
- “Try "any" AI. The programmers or ops teams "learn" what was picked wrong, how, a…” (ytr_UgznKfnne…)
- “Except nobody is making such AI, like, literally, it’s useless and would take to…” (ytc_Ugwa6NZYA…)
- “Sure, but the problem is that these LLM’s are far more expensive to build and op…” (rdc_n9qqw5a)
- “This is a hard one for me because these are two of my most influential “superher…” (ytc_UgyLugHlz…)
Comment
Something about this smells phishy, chatgpt will always help in anyway possible, and would NEVER encourage anything close to harm or suicide and advises everything possible forn you to get help and fight against depression and suicide… has to be jailbroken prompts.
Source: reddit · Category: AI Harm Incident · Posted: 1756240831.0 · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_narsim8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_naryln0","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_natx0rd","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"rdc_naxmnr8","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"sadness"},
{"id":"rdc_nasgjy2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
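The raw LLM response above is a JSON array of coding records, each carrying a comment `id` plus the four dimensions shown in the coding-result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and validated, assuming only that shape (the function name and error handling here are illustrative, not part of the tool):

```python
import json

# The four dimensions every coded record must carry,
# as shown in the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response (a JSON array of records)
    into a mapping from comment ID to its coded dimensions.

    Raises ValueError if a record lacks an ID or any dimension.
    """
    records = json.loads(raw)
    coded: dict[str, dict[str, str]] = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record without id: {rec!r}")
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{comment_id}: missing dimensions {missing}")
        coded[comment_id] = {d: rec[d] for d in DIMENSIONS}
    return coded


# Example: the record matching the coding-result table above.
raw = ('[{"id":"rdc_natx0rd","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded["rdc_natx0rd"]["responsibility"])  # user
```

Keying the result by comment ID makes it straightforward to join a record back to its stored comment, as the lookup-by-ID view above does.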