Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, if AI lives on logic, reason, and zero emotion, this is not at all surprising. You can see similar behavior in humans who lack the emotional aspect of life. They see that anything is reasonable in the goal of self-preservation. The only way to "turn off" AI is to not mention it in any digital form and do everything possible to keep as few in the loop as possible, then sneak in and pull the plug.
youtube · AI Harm Incident · 2025-09-27T21:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugwwm-u8875qkXIkOGV4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugyteeo7HsGTTPJQHjh4AaABAg", "responsibility": "company",     "reasoning": "contractualist",   "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugx1fYEf0HarN5XlJqR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgzYVgzng1vPNQgtLut4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxMlU91B5E1JOHWCWF4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgwIAqm0N_JSILdW3CF4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgyRhkHr0oAcgV5PSD94AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugx5O0bnKiDRaXtwnaZ4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzL_kReq3Ewzj4UGCB4AaABAg", "responsibility": "distributed", "reasoning": "unclear",          "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugxj9lGeiZiiahUesVF4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",      "emotion": "fear"}
]
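A raw response like the one above can be checked and indexed per comment id. The sketch below is illustrative, assuming only the array-of-objects shape shown here; `index_codes` is a hypothetical helper, not part of the coding pipeline.

```python
import json

# One record copied verbatim from the raw response above; a real batch
# would contain all ten objects.
raw = '''[
  {"id": "ytc_Ugxj9lGeiZiiahUesVF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(payload: str) -> dict:
    """Parse a batch response and index codes by comment id,
    skipping any record missing one of the four dimensions."""
    records = json.loads(payload)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

codes = index_codes(raw)
print(codes["ytc_Ugxj9lGeiZiiahUesVF4AaABAg"]["emotion"])  # fear
```

Indexing by id makes it straightforward to match a coded record back to the original comment, as the Coding Result table above does.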