Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Another point: considering that these models were trained on Reddit, 4chan, and similar platforms, how the fuck are people surprised by extremist opinions?
You are what you eat, right? You feed a probabilistic autocomplete model with all kinds of shitty text, the model starts vomiting shitty text, and suddenly we’re dealing with an “intelligent alien that hates Jews”. Lol.
It’s like training a model on erotic novels and then being shocked when it starts talking about sex with users.
LLMs don’t think. They don’t understand shit. They just predict the next word based on previous training. There is no threat, no danger, no fucking intelligence. It is a computer program that does exactly what it was designed to do: produce text based on pre-trained data.
You don’t want an extremist model? Then don’t train it on extremist data.
This sloppy training happened because it was easier to feed the model with all kinds of garbage than to properly curate the data. Now they create this stupid fear-mongering narrative (that AI is an alien or a monster) to hide their own responsibility.
The model gives us exactly what it received. If one day a model does something genuinely damaging because of this kind of negligence, the guilt is on who made the model.
Man, I have no words to describe how much I hate this discourse. Fear-mongering is a tool for controlling thought. Don’t fall for it. It always benefits some group of people, almost always the very groups that propagate it.
Platform: youtube · Video: AI Moral Status · Posted: 2026-01-05T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy_OJ_p45jxXgt-2D14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxPMo-3m2TPWh9SFEx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgyvYuE-9tPhCkRp0P94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzixhn74VQqUmGfHCB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzpIQRvcrnfJFBJ1Kl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzQ-rBbpNOLqvmeVpB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkxPv7k3fvH1-0gXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzWWkk3LpLZY6T865l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx0rcjN3iZpHC0zssZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyclQvkMKbOgOpjWVt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
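The Coding Result table above is derived from one entry in a raw response like this: the model returns a JSON array of coding objects, and the entry matching the comment's ID supplies the table's dimensions. A minimal sketch of parsing and validating such a response is below. The field names and the `parse_codings` helper are inferred from this single reply, not from a documented schema, so treat them as assumptions.

```python
import json

# One entry in the shape observed in the raw response above.
# Field names are assumptions inferred from that single reply.
RAW = """[
  {"id": "ytc_Ugy_OJ_p45jxXgt-2D14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]"""

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding dict}.

    Raises ValueError if the payload is not a JSON array of objects with
    the expected fields, so a malformed model reply fails loudly instead
    of silently producing an incomplete coding table.
    """
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of codings")
    out = {}
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"coding missing fields: {sorted(missing)}")
        out[row["id"]] = {k: row[k] for k in REQUIRED_FIELDS - {"id"}}
    return out


codings = parse_codings(RAW)
print(codings["ytc_Ugy_OJ_p45jxXgt-2D14AaABAg"]["emotion"])  # -> outrage
```

Keying the result by comment ID means the lookup for any single coded comment (as this page does) is a plain dictionary access, and duplicate IDs in a response simply overwrite rather than accumulate.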