Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Another point: considering that these models were trained on Reddit, 4chan, and similar platforms, how the fuck are people surprised by extremist opinions? You are what you eat, right? You feed a probabilistic autocomplete model with all kinds of shitty text, the model starts vomiting shitty text, and suddenly we’re dealing with an “intelligent alien that hates Jews”. Lol. It’s like training a model on erotic novels and then being shocked when it starts talking about sex with users. LLMs don’t think. They don’t understand shit. They just predict the next word based on previous training. There is no threat, no danger, no fucking intelligence. It is a computer program that does exactly what it was designed to do: produce text based on pre-trained data. You don’t want an extremist model? Then don’t train it on extremist data. This sloppy training happened because it was easier to feed the model with all kinds of garbage than to properly curate the data. Now they create this stupid fear-mongering narrative (that AI is an alien or a monster) to hide their own responsibility. The model gives us exactly what it received. If one day a model does something genuinely damaging because of this kind of negligence, the guilt is on who made the model. Man, I have no words to describe how much I hate this discourse. Fear-mongering is a tool for controlling thought. Don’t fall for it. It always benefits some group of people, almost always the very groups that propagate it.
Source: youtube | AI Moral Status | 2026-01-05T23:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy_OJ_p45jxXgt-2D14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxPMo-3m2TPWh9SFEx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "indifference"},
  {"id": "ytc_UgyvYuE-9tPhCkRp0P94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzixhn74VQqUmGfHCB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzpIQRvcrnfJFBJ1Kl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzQ-rBbpNOLqvmeVpB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzkxPv7k3fvH1-0gXR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzWWkk3LpLZY6T865l4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx0rcjN3iZpHC0zssZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyclQvkMKbOgOpjWVt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
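The raw response is a JSON array of per-comment codings over four dimensions. A minimal sketch of how such output could be parsed and validated before use, assuming the value sets inferred from this sample (the actual codebook may define additional categories, and the `validate_codings` helper is hypothetical, not part of the tool shown here):

```python
import json

# Allowed values per coding dimension -- inferred from the sample
# response above, not from an official codebook.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation",
                "approval", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings.

    A record is kept when it carries an "id" and every dimension
    holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugy_OJ_p45jxXgt-2D14AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # 1
```

Validating against a closed value set catches the most common LLM coding failure: an off-schema label (e.g. a free-text emotion) that would silently corrupt downstream counts.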