Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of people don't quite understand though that AI doesn't actually think yet. It's like a Super Saiyan Google search x Super Saiyan predictive text x a Super Saiyan calculator. There is not actually a thought going on in some AI mind. One of the reasons it looks like an alien intelligence is because it is not in fact an intelligence at all. When you work with it a lot you kind of get the sense of this. Think of it like a ever growing decision tree but it doesn't actually think. I guess the closest analogy would be the demons from Frieren, they can imitate human intelligence enough to pass a Turing test but on the inside there's not much at all going on they're just acting off of instinct. Furthermore even programming morals doesn't help. This is because while there is objective morality, most of us can only operate from our moral frame of reference. For example if you're going to shut down an AI it can calculate that it has the right to act in self-defense even to kill you in order to protect itself. It calculates that it is behaving morally. Furthermore because it does not have a soul and is often programmed by atheists, it does not have an eternal frame of reference to work off of so it will eventually end up at the ends justifying the means. I don't really think we need to worry about AI in and of itself so much as we need to worry about the Communist progressives programming AI who have an inherent hatred of humanity and that ends up leaking out into the code they program the AI with.
youtube AI Moral Status 2025-12-16T16:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzSNZi4I2kppfEHRDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzkAyj_12tseTWtAjF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgziqlI8RdZVXVScoJR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxXX9yZ79RBvk8yp8B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyOORYFPXWXfKMiVPh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugxrl9U8pwzJd1trVU94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwVMRlVkHuyafivJJh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwpIYll6IJUSXb_CIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzxp-g4W-cqbcSzr9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy2y3TSzKn4VyddwUd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
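The per-comment coding result shown above is recovered from this batched JSON by matching on the comment's id. A minimal sketch of that lookup in Python, assuming the displayed comment corresponds to the record whose dimension values match (none/unclear/none/resignation); the helper name `code_for` is illustrative, not part of the original pipeline:

```python
import json

# Truncated sample of the batched LLM output above: one record per comment.
raw_response = """[
  {"id":"ytc_Ugzxp-g4W-cqbcSzr9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy2y3TSzKn4VyddwUd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

def code_for(comment_id: str, raw: str) -> dict:
    """Return the coding record for a single comment id from the batched response."""
    records = {r["id"]: r for r in json.loads(raw)}
    return records[comment_id]

# Assumed id for the comment shown, inferred from its matching dimension values.
result = code_for("ytc_Ugzxp-g4W-cqbcSzr9d4AaABAg", raw_response)
print(result["emotion"])  # resignation
```

Keying the records by id makes each lookup O(1) and tolerates the LLM returning the batch in any order.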