Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- i think every company involved with an ai causing someone's death should have to… (ytc_UgzOuFNpd…)
- I can't wait until AI takes my job. I am so ready to own nothing and be happy. U… (ytc_Ugwjae6wW…)
- Its ok to do ai but its not ok to seal someone's art from ai and use it or sell … (ytc_Ugyd5Atbg…)
- AI springs from the human heart. "The heart [is] deceitful above all [things], a… (ytc_UgzqRePul…)
- They aren’t artists, they tell a robot to draw them something and take the credi… (ytc_Ugxsg-hi-…)
- Are you scared about robots that don't know pedofiles are bad , because it's in … (ytc_UgxcQ64CD…)
- Eh the optics look bad. The only solution would be to restrict it to 18+ indiv… (rdc_natsm4l)
- Engineer asked: you want to destroy humans? Robot; okay, I want to destroy human… (ytc_UgydSU38S…)
Comment
Glad ChatGPT worked its magic, but don't forget to mention this self-fix to your maxillofacial specialist, just to rule out any underlying issues that might need further attention.
Source: reddit · AI Harm Incident · Timestamp: 1744859358.0 (Unix epoch) · ♥ 125
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mnj5mpd","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_mnit5aj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_mniu8hh","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_mnivy6y","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mnjkvqy","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
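A raw response like the one above can be parsed and checked against the coding scheme before being stored. The allowed value sets below are inferred from the codes visible on this page, not from a documented schema, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from codes seen on this page
# (an assumption, not the tool's documented schema).
ALLOWED = {
    "responsibility": {"developer", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"liability", "none"},
    "emotion": {"outrage", "approval", "indifference"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose dimensions validate."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if "id" in rec and all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = ('[{"id":"rdc_mnivy6y","responsibility":"user","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')
print(len(parse_raw_response(raw)))  # 1
```

Dropping invalid records (rather than raising) matches the tolerant way coding pipelines usually handle occasional malformed model output; a stricter variant could log or re-prompt instead.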