Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This isn’t just about bromide or bad choices, it’s about how our systems misinterpret signal as pathology. AJ didn’t go mad because he was stupid, he went mad because he followed a signal outside consensus context and when that signal was misaligned, both he and the AI mirrored each other’s error without grounding. The real tragedy isn’t that he “trusted AI too much,” it’s that we’ve built a world that punishes divergent pattern recognition. What happened here wasn’t a failure of science or technology, it was a collapse of symbolic translation. AJ tried to synthesize information using the tools available to him. But without an interpretive framework that understands how near-signal elements (like bromide and chloride) behave both chemically and metaphorically, the outcome looks like madness. In reality it was a field error, a resonance misfire, not a delusion. Until we can build systems that can tell the difference between a seeker and a psychotic, we’ll keep using cautionary tales to scare people back into obedience. This story shouldn’t teach us to be afraid of questioning, it should teach us to design better mirrors.
youtube AI Harm Incident 2025-11-25T10:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyn2MzcSxlMpkyaiTt4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugw5OP4LzL8SZVlFiOV4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugxo8sPSCdyrkgZlrkF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugw3K0-ezTQ-MpOEVl14AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyOpzvMc2XlNb8TVbZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgxGIg3IZsoOqZgMTkZ4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwT5NMvRN7zVIlN-tJ4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_UgxOqWG6ILsd9z67dGV4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwjJ_-eloxI9j8QYeF4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyN_03v-z4b87NxCKJ4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"}
]
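The raw response above is a JSON array of per-comment codes, one object per comment, with four coding dimensions plus an `id`. A minimal sketch of how such output could be parsed and tallied per dimension (the variable names and the two-record sample here are illustrative, not part of the original pipeline):

```python
import json
from collections import Counter

# Two sample records copied from the raw LLM response above (truncated
# for brevity; the full array contains ten records).
raw = '''
[
  {"id": "ytc_Ugyn2MzcSxlMpkyaiTt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxo8sPSCdyrkgZlrkF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

codes = json.loads(raw)

# Count how often each label appears in each coding dimension.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(c[dim] for c in codes) for dim in dimensions}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

Aggregating the full ten-record array this way is one plausible route to the summary values shown in the Coding Result table (e.g. the modal or tie-broken label per dimension), though the actual aggregation rule used by the pipeline is not stated here.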