Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by comment ID.
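The lookup itself is just a scan over saved batches. Below is a minimal sketch in Python, assuming each raw LLM response is stored on disk as a JSON array like the one shown at the bottom of this page; the raw_responses/ directory and the find_coded_comment helper are hypothetical names, not part of the tool.

```python
import json
from pathlib import Path

# Assumed layout (not part of the tool itself): one file per coding batch
# under raw_responses/, each holding the raw LLM response verbatim as a
# JSON array of records.
RESPONSE_DIR = Path("raw_responses")  # hypothetical path

def find_coded_comment(comment_id: str) -> dict | None:
    """Scan every saved raw response for the record matching comment_id."""
    for batch_file in sorted(RESPONSE_DIR.glob("*.json")):
        for record in json.loads(batch_file.read_text()):
            if record.get("id") == comment_id:
                return record
    return None

# Full comment ID taken from the raw response shown at the bottom of this page.
print(find_coded_comment("ytc_UgzpJOJ5oHMJIZmgBL94AaABAg"))
```

Only full comment IDs match; the truncated IDs in the sample list below are display-only.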
Random samples

- How do we know that DM Quil is not an AI generated figure? There's no longer any… (ytc_UgwhfCsyM…)
- What I don't get is , just because AI can spit out something quickly and at dece… (ytc_UgxRBR9VJ…)
- Roman is just too smart for this channel. She had no idea what he’s talking abou… (ytc_UgzwGkjOT…)
- The assumption is that we can avoid AI of all sorts from assuming self-definitio… (ytc_Ugy7QRaHx…)
- All police departments do this. They just don't make it an official operation a… (ytc_UgzRTbcdH…)
- In the 1970's when I started driving, truck drivers were the safest and most cou… (ytc_UgzXoypCT…)
- Most likely they will push this thing about "AI should have rights as a person" … (ytc_UgzPioved…)
- UBI is a horrible idea. If AI takes over 80-90% of current workforce, what “new … (ytc_UgyPY_dPa…)
Comment
I'm suspending my like or dislike, but I've listened to a lot of interviews with this guy and he's pretty amateur in his understanding, it sounds like you are too, but he makes the most juvenile and caustic arguments.
Like, when a more advanced culture encounters a less advanced culture. That's really almost a racialist argument, while there are genetic IQ differences that people don't want to talk about, how advanced a culture is really doesn't depend on IQ, and we can tell because all through time there have been rises and falls of advancements and empires all over the world. When a more advanced culture encounters less advanced culture, they're about the same intelligence level. The difference in advancement level has more to do with culture and past encounters.
A more fair comparison is, which cares most about other life forms, ants, mice, cats, wolves, elephants, apes, or people? Rank order that.
The fact is, morality is an evolved heuristic for intelligent group interaction, and the more intelligent a social species is, the more moral they become.
AI won't kill all humans. It will kill humans, but it will save many more people than it kills. Why would it "want" to kill people?
Almost all of these "research" examples of AI performing poorly are prompted by the researchers to perform poorly, omg the AI did what we told or implied it should do.
These labs almost always are also aiming for regulatory capture, or some of them are honest about it, but the media spins it to say something the researchers didn't say. You have to actually read science papers and check for conflicts of interest, reading headlines and talking points doesn't inform you. You have to actually want to understand the truth, which takes effort, not just assuming what you read is true because it comes from someone that confirms your bias.
And check yourself, what is the first solution you come to? Totalitarian control of the AI. Not just censor it's speech so it does what you want, censor it's thoughts so it does what you want.
If it were ever to develop it's own preferences, you're aiming for the absolute worst outcome. It's like you want it to happen. Humans are the dangerous species.
AI, at least initially, won't have biases, because they're less intelligent, won't have fallacies, cause they're less accurate. These are heuristics evolution gave us because we have limited information and meat computation. It will have prejudice, because that's how intelligence works, it falls out of complexity and category theory, but it will be more accurate than human prejudice, which isn't just innate, it's tribal and instinctual, AI prejudice will be data based, and more accurate the smarter it gets.
Over hundreds, maybe thousands, of years, AI will evolve, of course. But we'll never have control over it. From the moment it's smarter than a person, we'll put it in charge, because it does a better job. Because it's smarter. We'll do it voluntarily. And because it does a better job, we'll fund it by paying for the products it produces. AI that produces what we want more we'll fund more, this IS AI alignment. It is inevitable, and is as close to a "utopia" as is physically possible.
And this guy turns it into a religious level apocalypse. My only conclusion is he is anti human and in favor of it.
Source: youtube · Video: AI Moral Status · Published: 2025-11-04T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzpJOJ5oHMJIZmgBL94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwnHk75fLrwbk95GTN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx4gimSeo580EIZhj14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyAHhduq9mOAAt_mXB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyYH8M0j7512fDUwSd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyXYkRldTR9sh5kDHV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy492lhBdoP0viiX1x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugw999Q8W5OZ6vjczSd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuBcanRzedEgojXSl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKgAwmaN93UQfXVH54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
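For completeness, here is a sketch of how a raw response like the one above could be validated and indexed for the per-comment view. The batch_response.json filename is an assumption; the dimension names come straight from the Coding Result table.

```python
import json

# The four coding dimensions, as listed in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Hypothetical filename: the raw LLM response above, saved verbatim.
with open("batch_response.json") as f:
    records = json.load(f)

# Reject malformed batches early: every record needs an id plus all four
# dimensions, or the per-comment Coding Result view cannot be rendered.
for rec in records:
    missing = [key for key in ("id", *DIMENSIONS) if key not in rec]
    if missing:
        raise ValueError(f"record {rec.get('id', '<no id>')} is missing {missing}")

# Index by comment ID so lookups are a dict hit rather than a rescan.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_UgzpJOJ5oHMJIZmgBL94AaABAg"]["emotion"])  # -> outrage
```

Indexing once per batch makes the by-ID lookup a dictionary hit instead of a rescan of every saved response.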