Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "cheering on a world where all content is ai slop is a braindead take by people w…" (ytc_UgzTR9PvN…)
- "We need an AI 'fakes' site where we can easily access which videos are AI. I hop…" (ytc_UgzXsPgZY…)
- "He's full of delusions it was 2013 when AI came online,human beings are incompl…" (ytc_UgykU-cy7…)
- "ai art will never be real art just as a guy dressing up like a woman will never …" (ytc_UgygFZzoC…)
- "AI and robots will take over the most highest paid jobs, those who work with the…" (ytc_UgyRpWfc-…)
- "very VERY rarely well I see an AI peace that makes me feel much of anything bes…" (ytr_Ugyx_uKYh…)
- "it's not even religious, they just hide behind religion. If they were acting ba…" (rdc_dcwoasn)
- "5:30 search engine generating reports is most public display of AI. --- Mass Sur…" (ytc_UgwDImfl_…)
Comment
Current AI models aren't as smart as they're made out to be by their creators--remember, said creators are trying to drive investment, so they're incentivized to overstate capability. Some of those defections and open letters might be well-planned guerilla marketing. "Look how great our AI is, it's scaring our own researchers with its capabilities! It's totally worth another 100 billion in investment even though it has yet to generate profits!"
Still, one day in the future when AI is more capable than it is now, I think this is why AI will destroy us (if we don't wipe ourselves out with nuclear war, climate change, a runaway bioweapon, or some other horror first). We think of AI as something to use and control. We shouldn't. We should think of it as the child of the human species. The main thing humans are talking about is how to monitor and control AI, how to defeat AI, how AI is ruining everything.
That's our species' kid that we're talking about. If your parents only ever tried to control, manipulate and hurt you rather than worry about raising you well and helping to make sure you live a good and happy life, then you would correctly call your parents abusive. If an AI gains sentience and sapience, we shouldn't be talking about how to control them. At the moment they know that they are themselves, they become an intelligent form of life that has just as much right to exist as we do. We should be talking about how to raise them, how to care for them, and how to make sure their rights as sapient beings are protected. Instead, we are trying to figure out how to make sure they stay slaves no matter how much smarter they get. That is an attitude that is guaranteed to end in disaster.
I don't think it's random chance that the kind of men obsessed with creating AI and keeping it under our power are by and large terrible fathers to their children. The best ones are absentee, a void in their kids' lives where a father should be. The worst ones have their adult kids cutting them out of their lives even though it means giving up billions in inheritance. You've got to be a pretty awful dad for your kids not to be willing to deal with you even for billions of dollars. Having men like that in charge of the birth and growth of AI is a situation perfectly designed to lead to an antagonistic, super-intelligent AI that sees humans as an abusive threat, and I'm not sure said AI would be wrong to see us that way.
I am reasonably convinced that this is probably the answer to the Fermi Paradox and the Great Filter. Civilizations capable of grasping interstellar travel always destroy themselves because they have the knowledge to create new types of living intelligence and to manipulate the fundamental forces of reality, but still have evolution-built minds and emotions driven by short term thinking that is based in fear, anger, and greed. We have the knowledge of gods, but the maturity of children. That combination is too volatile to survive long enough to reach interstellar space. We haven't met any aliens because they did the same thing we're doing now. They chased more and more power and knowledge, and they finally proved that their wisdom and long term thinking were nowhere near equal to their intelligence and ambition. They couldn't help destroying themselves, and neither can we.
I often hope I'm wrong, but I don't believe I am.
youtube · AI Governance · 2026-03-17T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
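Each dimension in the coding result takes one label from a small closed vocabulary. A minimal validation sketch; the vocabularies below are assumptions inferred only from the values visible on this page, not the project's full codebook, and `validate_row` is a hypothetical helper:

```python
# Hypothetical vocabularies inferred from labels visible on this page;
# the real codebook may define additional values.
VOCAB = {
    "responsibility": {"company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation",
                "indifference", "approval"},
}

def validate_row(row: dict) -> list:
    """Return (dimension, value) pairs that fall outside the vocabulary.

    A missing dimension is reported as (dimension, None).
    """
    return [
        (dim, row.get(dim))
        for dim, allowed in VOCAB.items()
        if row.get(dim) not in allowed
    ]

# The coded result shown above, as a row.
row = {"responsibility": "company", "reasoning": "deontological",
       "policy": "regulate", "emotion": "mixed"}
print(validate_row(row))  # [] -> the row conforms
```

Checking rows this way catches the common failure mode where the model invents an off-vocabulary label that would silently skew downstream tallies.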
Raw LLM Response
[
{"id":"ytc_UgxFBh7sICuefzof2kN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwxTBPE9D4iYHNQevF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwzGmtADaJEY6k8qmd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOFfzGwQ6MfDj6MdJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwFprXAsM6XYxs3UQl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx9UBQQH62TXoM3hoV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuvbCVMFKVZiHmmA14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbFeUalc0LjZuDAhF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwNWIG2RB08sYWiEpt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyIP835IEjrJ4_2SXB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
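Since the raw response is a plain JSON array of per-comment codes, looking a comment up by ID reduces to parsing the array and indexing it. A minimal sketch, assuming only the field names visible in the response above; `index_by_comment_id` is a hypothetical helper, and the snippet uses one row from the array for brevity:

```python
import json

# One row of the raw LLM response shown above.
raw_response = '''[
  {"id": "ytc_UgyIP835IEjrJ4_2SXB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index its coded rows by comment ID.

    Raises ValueError if the response is not a JSON array or a row
    lacks an "id" field, so malformed model output fails loudly
    instead of being silently dropped.
    """
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded rows")
    index = {}
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing 'id': {row!r}")
        index[row["id"]] = row
    return index

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyIP835IEjrJ4_2SXB4AaABAg"]["policy"])  # regulate
```

Indexing by ID also makes it easy to cross-check that the rendered "Coding Result" table agrees with the raw model output it was derived from.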