Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean if ai was human, protecting themselves against a threat would be perfectly normal. Subjecting it to these situations would evoke a similar response In humans. Therefore who’s to say all ai would want to kill humans if we weren’t just provoking it.
Source: YouTube · AI Harm Incident · 2025-07-28T08:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       contractualist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyOb3-78Ftql8Lih154AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgzC_sFISORXeFjxdnB4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxffxAI7xNcRCOa-vx4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwBDzqKPRWCbBWxiwp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgwtIdy-bWbhQC3-rZN4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgybPodMiCpTFsicAZh4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgzYG-HS3-T2Md5c4hd4AaABAg", "responsibility": "user",        "reasoning": "contractualist",   "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugy_PBI2xTSMcXkGb3V4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugx2_6004D9LccJjuVl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugx8g5FRIz375aiGuNd4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"}
]
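Because the model returns one JSON array covering a whole batch of comments, inspecting the coding for a single comment means parsing the array and filtering by `id`. A minimal Python sketch, assuming the response structure shown above (the batch here is abbreviated to the one entry matching the coding result table; `code_for` is a hypothetical helper, not part of any real tool):

```python
import json

# Abbreviated raw LLM response: one entry from the batch above.
raw = '''[
  {"id": "ytc_UgzYG-HS3-T2Md5c4hd4AaABAg",
   "responsibility": "user", "reasoning": "contractualist",
   "policy": "none", "emotion": "mixed"}
]'''

def code_for(raw_json: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for row in json.loads(raw_json):
        if row.get("id") == comment_id:
            return row
    return None

coding = code_for(raw, "ytc_UgzYG-HS3-T2Md5c4hd4AaABAg")
print(coding["responsibility"], coding["emotion"])  # user mixed
```

Returning `None` for a missing `id` (rather than raising) makes it easy to spot comments the model silently dropped from its batch response.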