Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We keep having people warn about an AI apocalypse, and honestly I'm just so tired of it. LLMs are hurting people right now. They have drastically increased the amount of misinformation. They're terrible for the environment. And we're hearing more and more cases of people going down rabbit holes, and being seriously disturbed, even driven to suicide, by their LLMs. I think a future AI apocalypse is something to be genuinely worried about, though by no means a certainty. And you could address it in the same legislation you use for everything else. But the people most concerned about it almost never bring up, let alone try to deal with, the harms LLMs are doing right now. Many of them, like Sam Altman, just use it as a way to say they need to develop AI superintelligence first. It's just a marketing strategy, and a way to avoid talking about the people being hurt by AI right f*cking now.
Source: youtube · AI Governance · 2025-10-15T14:3… · ♥ 13
Coding Result
Dimension      | Value
-------------- | ----------------------------
Responsibility | company
Reasoning      | consequentialist
Policy         | regulate
Emotion        | outrage
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw020LS5heBPqkmljh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyxzm2tBFOUzhEmaOB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzGk_HeUExutKl7cH14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz_cBrS56ehAj5JJWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzYHvjd6N-ZMYg2Aw54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyJTjwXSKOp62hMybJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyQKbLJu4dbiNsUeeR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw20JWf1bwQ6F0L5Q54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwe1saDyf4vOv1A35Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx3s1S-MN4X0swLOkt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
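Because the model codes a whole batch of comments in one JSON array, mapping a raw response back to a single comment means parsing the array and indexing by `id`. The sketch below shows one way to do that in Python, using a shortened two-entry excerpt of the response above; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the data shown here, while the helper name `index_by_id` is hypothetical, not part of any tool described in this document.

```python
import json

# Two entries excerpted from the raw LLM response shown above.
raw_response = '''[
  {"id": "ytc_UgyQKbLJu4dbiNsUeeR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwe1saDyf4vOv1A35Z4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by comment id."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

by_id = index_by_id(raw_response)

# Look up the coding for the comment displayed on this page.
coded = by_id["ytc_UgyQKbLJu4dbiNsUeeR4AaABAg"]
print(coded["policy"])   # regulate
print(coded["emotion"])  # outrage
```

A dict keyed by `id` makes the lookup O(1) per comment, which matters once the batch response covers hundreds of comments rather than ten.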