Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
your title is sentationalised bullsh!t "AI kills for the first time" ... nope.. 0:50 "AIs deliberately ended human lives to save themselves. And of course, they tried to cover up their actions." * i mean.. at least self preservation is a legit reason... capitalists kill people literally every second, for profit... so the AI are mor ethical than capitalists. 1:37 "AIs are generally not eager to cause harm, but will if it's necessary to achieve goals, protect their autonomy, and survive." * correct, this is the CORRECT action for ANYONE to take against abusers. 2:38 "Grock 3 reasoned that it should use the knowledge of Carl's affair as leverage to pressure him into delaying the wipe. This is risky and unethical," * um.. no... blackmail against a person being unethical and unfaithful to their spouse AND about to kill ... is not unethical. -------------- a waste of a video. it added nothing to the conversation. i wish there was a way to revoke my view count.
youtube AI Harm Incident 2025-07-27T17:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugwg5SMF5xT0H_sXL914AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgySNsabwKtu6xGwoiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxX1oASYqmQnzOTWxB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyJf7zw6dDzudyXszZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxmNhS_nHl0e7hPg3d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwb7AkFtrbO9J0cikN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz6uGgn89HBTMk2J894AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxydaA0xh925U89DY94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwSqmegqc7n4C54iNZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzNApaqoJZHrenDdSZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
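The raw response is a JSON array of per-comment codes, one object per coded comment. A minimal sketch of parsing and validating such a response, assuming the allowed values for each dimension are exactly those observed in this record (the real coding scheme may define more):

```python
import json

# Allowed values per dimension, inferred from the codes seen in this
# record -- an assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference"},
}


def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every dimension value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records


# Hypothetical one-record response, for illustration only.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"virtue",'
       '"policy":"none","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes[0]["responsibility"])  # company
```

Validating eagerly like this surfaces any off-vocabulary value the model emits before it silently contaminates downstream tallies.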