Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have not finished the video, but I have a few thoughts. From my understanding, the "AI kills for the first time" refers to its choice within the simulation, not an actual death. If AI is displaying selfish and psychopathic behaviour, and AI is trained on how humans act, could it be that AI acts this way because we act this way? If AI chooses "self-preservation" over not blackmailing someone, could that be because most humans would do the same? How is AI focusing on a global lens rather than a country-centred lens a bad thing? If an AI thinks it is making the better, more impactful decision in choosing a country over a life, is that not what it was coded to do? AI gains its behaviour from us, so it is going to act accordingly. AI does not view morality as we do; it is coded to choose the best outcome, and so it will take whatever actions produce that outcome. It will therefore kill the human to complete its job. That is simply the way it is coded, and if you don't want it to act that way, then don't code that in as a possibility. As much as AI can be useful, I do agree that there is and will be harm from it, but from what I have seen so far, the points in this video seem to create fear in a way that does not logically assess the situation.
youtube AI Harm Incident 2025-07-28T09:0…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgyOb3-78Ftql8Lih154AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzC_sFISORXeFjxdnB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxffxAI7xNcRCOa-vx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwBDzqKPRWCbBWxiwp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwtIdy-bWbhQC3-rZN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgybPodMiCpTFsicAZh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzYG-HS3-T2Md5c4hd4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy_PBI2xTSMcXkGb3V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx2_6004D9LccJjuVl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx8g5FRIz375aiGuNd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"} ]