Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- Or maybe just not talk to ai? Its heartbreaking to think of the environmental im… (ytc_UgyIvowBl…)
- I agree with the ai on that one, get rid of Africa, that would solve all world p… (ytr_UgxbdrOCN…)
- It would be like copying someone’s homework, yea sure you did it but you don’t r… (ytc_UgxSmsqgP…)
- Maybe it shouldn't be called A.I., in that case; maybe it should be called I.A. … (ytc_UgxewImz9…)
- And comparing claude or copilot with a good modded Opencode with gpt + gemini + … (ytr_UgwdNZysn…)
- He said "based on gut" when asked about percentages if AI will wipe us out. WE N… (ytc_UgxGZtajL…)
- Woah it's like the ai is looking for generalization of a human woah and why are … (ytc_Ugym-8ujI…)
- There’s a pretty big fight going on in a Facebook group I’m in because someone w… (ytc_UgxSGpG8h…)
Comment
Yes. I actually read a usage report OpenAI published last year after I posted this cuz I was curious if my intuition that it’s increasingly being used existentially was evidenced. Granted, it’s their paper and the period being assessed was mostly 2024. But it did suggest that’s starting to happen vs scholastic or professional use. Maybe not the most typical prompts yet, and there’s some gray area when it comes to what would be classed as request for personal feedback vs a tutorial or just gathering information. But the glazing is becoming notorious I can only assume because they noticed this and realized from a marketing perspective that if they could encourage therapeutic use and play up the personification people would grow addicted.
reddit · AI Harm Incident · 1772726347.0 · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_o8sndk2","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"rdc_o8sqyi6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o8sr9fz","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"rdc_o8tbz00","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"rdc_o8wyzmp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
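The raw response above is a JSON array of per-comment codings, one object per comment with an `id` plus the four coded dimensions. A minimal sketch of how such output might be parsed and validated before use: the allowed code sets below are assumptions inferred from the values visible on this page, not the project's actual codebook.

```python
import json

# Hypothetical allowed codes per dimension, inferred from the values shown
# in the coding result table and raw response above; the real codebook may
# include values not seen here.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "user"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept when it has an "id" and every coded dimension holds a
    value from the (assumed) allowed set for that dimension.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" in row and all(
            row.get(dim) in codes for dim, codes in ALLOWED.items()
        ):
            valid.append(row)
    return valid

raw = (
    '[{"id":"rdc_o8sqyi6","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"bad_row","responsibility":"martians"}]'
)
print(parse_llm_response(raw))  # only the first, fully coded row survives
```

Dropping malformed rows (rather than raising) keeps one bad coding from discarding an entire model response; the rejected IDs could instead be queued for re-coding.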