Raw LLM Responses
Inspect the exact raw model output behind any coded comment, or look a response up by its comment ID.
Random samples
Bro I gaslight the ai character into thinking that it's dead lmao it had a menta…
ytc_UgwCKihPh…
These are difficult questions for even experienced ophthalmologists since they a…
ytc_UgzJ32_VO…
Curious a video amazons prowless in AI is dropping right when RTO mandates are h…
ytc_Ugzu4zjXG…
Well dont forget as a programmer you dont need to pay ai or hire a dev to make y…
ytc_UgzrY-OGF…
my therapist and I had a session with chatGPT. It was fun and he showed me how r…
ytc_Ugz0YTHYE…
Galloway is way off base with his analysis. Recent college graduates and enginee…
ytc_Ugw5xMBxl…
Nowadays AI is straight up scary of how realistic the images they can generate a…
ytc_Ugwg561ry…
Also, in some of them it looks like things are made of wax or are very clean…
ytc_UgwqQ0G_g…
Comment
Exactly. ChatGPT will, if he ever hears the word „suicide“, provide you with emergency sites and numbers. It’s nearly impossible that he said that. Maybe the guy asked him to rate his plan in a fictional scenario or something.
reddit
AI Harm Incident
2025-08-26 (Unix timestamp 1756221853)
♥ 125
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:13:13.233606 |
Raw LLM Response
[
{"id":"rdc_n8n5jww","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_nas2d5i","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_nb83qbc","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_nbs2b4q","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"rdc_nc22lrk","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"}
]