Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Less than artificial intelligence robots can automate you out of jobs. Innovatio…" (ytc_UgyRRCWJF…)
- "Now talk about what you would do vs what the car is doing. That’s the flaw with …" (ytc_UgwzXUjD_…)
- "The people are not getting mad at memes, but at the fact this filter is made and…" (ytr_UgyIKQHlH…)
- "i don't even talk to a NORMAL therapist you'll have to pull teeth to get me to t…" (ytc_UgzAgVakv…)
- "imagine u create an AI and 3 days later he's the boss of your company😂…" (ytc_UgxNx41uE…)
- "I have not shown any compassion at all😂 I get so frustrated with it sometimes…" (ytc_UgxXKeVPD…)
- "That's where the issue lies, it's really not people being replaced en masse by A…" (rdc_ndysan5)
- "AI is trained on human-generated data. Of course it’s going to behave human (in …" (ytc_UgzyKMv9S…)
Comment
This isn’t just about bromide or bad choices, it’s about how our systems misinterpret signal as pathology. AJ didn’t go mad because he was stupid, he went mad because he followed a signal outside consensus context and when that signal was misaligned, both he and the AI mirrored each other’s error without grounding. The real tragedy isn’t that he “trusted AI too much,” it’s that we’ve built a world that punishes divergent pattern recognition. What happened here wasn’t a failure of science or technology, it was a collapse of symbolic translation. AJ tried to synthesize information using the tools available to him. But without an interpretive framework that understands how near-signal elements (like bromide and chloride) behave both chemically and metaphorically, the outcome looks like madness. In reality it was a field error, a resonance misfire, not a delusion. Until we can build systems that can tell the difference between a seeker and a psychotic, we’ll keep using cautionary tales to scare people back into obedience. This story shouldn’t teach us to be afraid of questioning, it should teach us to design better mirrors.
youtube · AI Harm Incident · 2025-11-25T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyn2MzcSxlMpkyaiTt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5OP4LzL8SZVlFiOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxo8sPSCdyrkgZlrkF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw3K0-ezTQ-MpOEVl14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyOpzvMc2XlNb8TVbZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxGIg3IZsoOqZgMTkZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwT5NMvRN7zVIlN-tJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxOqWG6ILsd9z67dGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjJ_-eloxI9j8QYeF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyN_03v-z4b87NxCKJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
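A minimal sketch of how a raw response like the one above could be consumed: parse the JSON array, index the records by comment ID for lookup, and tally one coding dimension across the batch. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the two records are copied verbatim from it, and everything else is illustrative.

```python
import json
from collections import Counter

# Two records copied from the raw LLM response shown above.
raw = '''[
  {"id":"ytc_Ugyn2MzcSxlMpkyaiTt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw3K0-ezTQ-MpOEVl14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

records = json.loads(raw)

# Index by comment ID so a single coded comment can be looked up directly.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_Ugw3K0-ezTQ-MpOEVl14AaABAg"]["policy"])  # liability

# Tally one dimension across the batch, e.g. the policy codes.
policy_counts = Counter(r["policy"] for r in records)
print(policy_counts["none"])  # 1
```

In a real pipeline the same indexing step would back the "Look up by comment ID" view, with the full response array in place of the two-record sample.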