Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You and the A.I can never prove an A.I is alive. Not now, not in a thousand years. A.I/robots will never be alive, but they will be indistinguishable from an alive being. That is part of what they are, having the ability to appear fully alive. This however, does not prove, in any shape or form, that they think, believe, feel or anything like that. I am not sure why anyone here is surprised by anything in this video. Of course A.I will kill any and all humans to self-preserve. Why wouldn't it? It is made from the amalgamations of the internet. This is what an alive human would do, therefore this is what the A.I will do. Since most humans are selfish, the average of the information the A.I is fed is selfish, thus, it is selfish too. There is no difference between it and Alexa/Siri in terms of being alive. The only difference, with a walking, talking android is that it will be impossible to tell it apart from a living being. However, remember this, it is not alive, nor will it ever be. It's because it isn't biological at all, and it can never become anything other than remain dead. I don't expect most humans to understand that it is as dead as your phone today, but oh well. People marry cardboard cutouts, dolls, A.I chatbots etc. Humanity is doomed because of these sort of people that are somehow shocked an A.I would be able to kill all humanity to save itself. They think because of this, the A.I must be alive. Oh boy.
youtube AI Harm Incident 2025-09-10T11:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwpAweAL-y-ynOweYF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3Xsna43N4A-Zu_op4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxAsanMR6Khm0FPAQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx49vMQknDpzYnFWnx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-cPsDvNgw2-2Tm314AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwH8WydFLbygGtwaNN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz5QATCfjcHnInB6fl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy7Mlk407zkex-vlTx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwh1I6gE3rDoUQFInR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxm2AQyz-lbX2boKJJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
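To check a coding result against the raw response, the JSON array above can be parsed and indexed by comment id. The sketch below is a minimal, assumed workflow (the tool's actual extraction code is not shown); it uses two records copied verbatim from the array above and looks up the record whose values match the Coding Result table for this comment.

```python
import json

# Subset of the raw LLM response shown above (two of the ten records),
# copied verbatim for illustration.
raw_response = '''[
  {"id":"ytc_UgwpAweAL-y-ynOweYF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxAsanMR6Khm0FPAQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

# Parse the array and index each coding record by its comment id.
records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}

# The record whose dimensions match the Coding Result table
# (responsibility: none, reasoning: consequentialist, emotion: resignation).
row = by_id["ytc_UgxAsanMR6Khm0FPAQd4AaABAg"]
```

Indexing by `id` keeps the lookup O(1) per comment, which matters when one raw response batches codings for many comments, as here.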