Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's horrible. But isn't this precisely what LLMs do? (They call it AI but that's just clever marketing.) What assistants are supposed to do? To support and encourage you even in your delusions? I doubt that safeguards are a real possibility. What will happen is that it won't be "open" anymore.
Source: youtube · AI Harm Incident · 2025-11-10T07:5…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwWrCKqz4e2rfbzjNl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxAw7IxYlpHAe1ttjZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx3DCaAif3XETA3hSR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzaa847CarZn12MKgp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx94uNiHUHhqNzPX254AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwdtnM1kCvqaNEugal4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyAi8XLZ7IQrnpT-Ud4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwL2ivvHGpWDxTOvul4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_Ugz6UIqynd63bUaZXsh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzPhQnB0m6ytU_lXm94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
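For illustration, a raw response of this shape can be parsed into per-comment coding rows with a few lines of Python. This is a minimal sketch, not part of the tool itself: `raw_response` here holds two entries copied from the array above, and the lookup-by-id workflow is an assumption about how one might inspect a coded comment.

```python
import json

# Two entries copied from the raw LLM response above (JSON array of codings).
raw_response = '''[
  {"id": "ytc_Ugzaa847CarZn12MKgp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwL2ivvHGpWDxTOvul4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]'''

# Index the codings by comment id for easy lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the comment shown in this section.
row = codings["ytc_UgwL2ivvHGpWDxTOvul4AaABAg"]
print(row["responsibility"], row["emotion"])  # company resignation
```

The printed values match the coding-result table for this comment (responsibility: company, emotion: resignation).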