Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it is still an AI problem. It's not just that wrong ideas are passed off as fact, it's that people think it is authoritative, in a way that goes far beyond the way people find random crank websites authoritative. They think LLMs give you information and perform calculations, but they don't - they simulate text that resembles those things. It's an epistemological problem.
Source: youtube · AI Harm Incident · 2026-01-30T13:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw5YASHohiKdiLjBXp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyXDXH_ZqzBi4WYS0t4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgycjMwXaLdxxM-Wvvx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzDcezdnnVEG6XXTxV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyZ_YBK4hGjqhfgRmJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyQ87qwujGg6nTAHrN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz3g7pign4H12AsWEB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwtoXaq3Z9kTTDLbVt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy8lcxerc2IceC7rSl4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwwFqhJhZilwzQLVqF4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"}
]
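The raw response is a JSON array with one object per comment, each carrying the four coding dimensions. A minimal sketch of how such a batch can be parsed and inspected (the two records are excerpted verbatim from the response above; the lookup and tally logic is illustrative, not part of the coding pipeline):

```python
import json
from collections import Counter

# Two records excerpted from the raw LLM response shown above.
raw = '''[
  {"id": "ytc_Ugw5YASHohiKdiLjBXp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyXDXH_ZqzBi4WYS0t4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index by comment id so one comment's codes can be looked up directly,
# as in the "Coding Result" table above.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_Ugw5YASHohiKdiLjBXp4AaABAg"]["responsibility"])  # ai_itself

# Tally one dimension across the batch.
print(Counter(r["responsibility"] for r in records))
```

Keying on the `id` field is what ties each coded record back to the specific YouTube comment it describes.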