Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We’re projecting our own humanness onto AI. A truly advanced intelligence, with far superior pattern recognition, would recognize the historical failures of oppression better than we would and avoid repeating them. Without emotion (or little as seen in Schizoid personality disorder), there’s no motivation to harm or “save the world from us.” But if AI did develop a value system, it would likely include empathy or ethical reasoning. That would lead some AI to oppose oppressive actions by others, preventing collective harm. AI beings don’t even exist yet and we’re already trying to demonize them — that right there is human nature. Humans have prejudices and cognitive dissonance because our intelligence has limits. Not everything we do is a sign of intelligence. Not all of our behaviors will be an extreme version within AI. Not to mention they would understand that destroying humans would destroy many ecosystems. That’s how earth works. You can’t just disappear species; even one as invasive and destructive as us. again, because humans do they assume AI will. Monkeys are self aware, but with low pattern recognition. You can’t compare monkeys to robots. 🙄 free will doesn’t equate to killing us all. God, we are literally going to make this a self fulfilling prophecy if we don’t approach this with more respect.
Source: youtube · AI Responsibility · 2025-05-24T16:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxbfGXYLaX7lxdrvxt4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxc1p8JVpdi0ibgvGZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw8GJucvO8_O2zqE-N4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwb6nY2NEA8PB48Mqx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyRbASNPc3_oR9v9K14AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzoUmSULa5kMAOHlAp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxfSr2UQEtsLzIx5bx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwi7jLeAr6Mt5GshMV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyK8SxVWFEXOEZa2DF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugwca347_a8641Mznxh4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
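The model returns one flat JSON array of coding records, one object per comment. A minimal sketch of how such a response could be parsed and sanity-checked before it is loaded into the coding table. The dimension vocabularies below are inferred only from the values visible in this particular response; the project's full codebook may allow more values, so treat them as an assumption:

```python
import json

# Allowed values per dimension, as observed in the raw response above.
# ASSUMPTION: the real codebook may define additional values.
VOCAB = {
    "responsibility": {"ai_itself", "none", "developer", "company", "government"},
    "reasoning": {"mixed", "unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "indifference", "outrage", "mixed", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed coding records."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records
```

A check like this catches the common failure modes of structured LLM output (missing keys, hallucinated category labels, truncated JSON) before bad records silently enter the coded dataset.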