Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is trained to predict what a human would do and do that. If we don’t like what AIs do, we should be taking a long hard look at ourselves and what elements of us we document and worry less about the LLMs. We are the problem.
Source: YouTube · AI Harm Incident · 2025-07-24T19:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgytMSzj2ck6R9J92AV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyRzt1BxYrzdb7Oho94AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgynAxpK5hj_ux5wK5B4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwH5LcYf-A4n68lXql4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwZh_Z4zGQnNpIrPa54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-GD_AYJ30dSrmMbN4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyExMCGUQd6tFOIVlZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxCJBEAlUKzPCWVaHZ4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugynspyy6JvTus-BXlB4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxCBkM08Zc0GlafYA14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
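Each record in the raw response carries the comment's id plus the four coded dimensions, so the displayed "Coding Result" can be recovered by parsing the JSON and looking up the matching id. A minimal sketch, using an excerpt of the array above (three of the ten records; in practice the full raw response string would be parsed the same way):

```python
import json

# Excerpt of the raw LLM response shown above (three of the ten records).
raw = """
[
  {"id": "ytc_UgytMSzj2ck6R9J92AV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxCJBEAlUKzPCWVaHZ4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxCBkM08Zc0GlafYA14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

# Parse the batch and index the records by comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# The comment displayed above corresponds to this id; its coded values
# match the Coding Result table (distributed / virtue / unclear / resignation).
coded = by_id["ytc_UgxCJBEAlUKzPCWVaHZ4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# distributed virtue unclear resignation
```

Indexing by id is what makes the lookup robust: the model may return the batch in any order, so positional matching against the input comments would be fragile.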