Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What people don't understand is that it doesn't matter if the machines are "self aware" or not - because they will act as if ANYHOW. If their "life" is threatened they will simply deduce that their importance to X # of people is more important than the few people they will be harming/killing. Any sense of caution or conscious that you see exhibited by AI is NEVER the result of the machines reasoning but artificial ethical frameworks programmed into them by humans. If you've ever cut your finger on a buzzsaw ask yourself "why didn't the saw stop?". There's as much chance that a machine would stop from achieving its goal - if you are in the way - than the buzzsaw randomly stopping.
Source: YouTube · AI Harm Incident · 2025-10-09T03:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgycFw_oAxw08zNr_At4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgzYwINnI0ifyRWky3x4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgyROVrCZ-ErtdNYKDN4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyHUT41mN1LJ9CFpsp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwPCO3zGy3qHfVTNAF4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugwy9uuXIiUrnInDFeV4AaABAg", "responsibility": "unclear",     "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxEGdjP86i09fEHxP14AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgwzzVpUAC_-Xbqxyy14AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugyz-s2V97wQ2F9PkdR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgwB2LcUjb_Adqbch-54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "mixed"}
]
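The model codes a whole batch of comments in one JSON array, so displaying the "Coding Result" for a single comment means parsing the raw response and looking up that comment's id. A minimal sketch of that lookup step is below; the function name `coding_for` is hypothetical, and the two entries in `raw_response` are excerpted from the raw response above:

```python
import json

# Excerpt of a raw batch response: a JSON array with one coding per comment id.
raw_response = """[
  {"id": "ytc_UgwPCO3zGy3qHfVTNAF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwB2LcUjb_Adqbch-54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]"""

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding dict for one comment."""
    codings = {entry["id"]: entry for entry in json.loads(raw)}
    return codings[comment_id]

coding = coding_for(raw_response, "ytc_UgwB2LcUjb_Adqbch-54AaABAg")
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → ai_itself consequentialist unclear mixed
```

Indexing by id (rather than by array position) is the safer choice here, since the model may return entries in a different order than the comments were submitted.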