Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Even when you have an autopilot that is safer than humans, that actually makes it WORSE. The human would gain too much trust for the automation, allow themself to become disengaged, and be unprepared to take over once it does fail. No advertising needed.
youtube AI Harm Incident 2025-08-15T21:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwLkAbxotBibd7cxp54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyRHf6ENyLqEP_dRet4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzy4ocmWdhh9fFKdxR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxBnLjIlH9s9FPB78t4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx-LI2OIE09ehGIb3Z4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyd9Glq6Bt3SkTR8e94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxbQJn5bPEhDv9gFzx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzIodfh6WzQyQkUJ-h4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyuGm2txsj8064zUPp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzNKyX1kYaY-Epe-FB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
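A minimal sketch of how a raw response like the one above can be checked against the coded result, assuming the model output is a JSON array of per-comment codings keyed by `id` (the `coding_for` helper is illustrative, not the tool's actual code; the id and field values are taken from the response shown here):

```python
import json

# A single entry copied from the raw LLM response above (the one matching
# the coded comment in this view).
raw_response = '''[
  {"id": "ytc_UgxbQJn5bPEhDv9gFzx4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]'''

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw_response, "ytc_UgxbQJn5bPEhDv9gFzx4AaABAg")
print(coding["emotion"])  # fear, matching the coded Emotion dimension
```

Looking codings up by id this way makes it easy to spot mismatches between what the model emitted and what was stored for a given comment.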