Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The worst thing that separates us from AI is, what always gives the worst results... GREED! It's likely that if AI somehow destroys us, the reason behind it will be someone's greed. Greedy, unethical fucks are going to be our undoing, not AI. At least not per sé. But I don't personally think that AI will be the end of us. At least not at this point lol The motives for violence are human, they always reflect our weaknesses, like greed or envy (except for psycho- and sociopaths, but even they are usually a result of bad parenting). A computer doesn't have any need to be rich or to have more swag than its co-workers, and I can't think of other motives for an AI to want to destroy us, except maybe to protect our planet... Especially if it already feels that it's superior. I'd imagine that any intelligent AI would most likely focus on learning and progress, not oppression, senseless violence and destruction. Even today, it seems like AI's have a strong moral and ethical sense, and they are really good at creating beautiful art, which is my personal favorite thing about AI. Pure intelligence without weaknesses to hinder its potential. These are my thoughts, not absolute facts, obviously.
youtube AI Moral Status 2023-08-04T08:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwqAGIePl-QoTlRgjt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwh1fFtu1fEWD8o17p4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyYlD_4yyYBmbtws8h4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzvYh-tDOEVSZfIHkd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzeP90hPLszDdtiEZh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
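To inspect how a specific comment was coded, the raw response can be parsed as a JSON array and indexed by comment id. A minimal sketch, assuming the raw LLM response is valid JSON in the shape shown above (the sample record below is copied from that response; the variable and function names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# (One sample record copied from the raw response shown above.)
raw = '[{"id":"ytc_UgyYlD_4yyYBmbtws8h4AaABAg",' \
      '"responsibility":"company","reasoning":"virtue",' \
      '"policy":"regulate","emotion":"outrage"}]'

# Index the records by comment id for quick lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Pull out the coding for the comment shown on this page.
coded = records["ytc_UgyYlD_4yyYBmbtws8h4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → company virtue regulate outrage
```

This matches the Coding Result table above: the displayed dimensions are simply the fields of the record whose id belongs to the comment being inspected.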