Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_n6145hz`: They could have implemented an AI monitoring system for cheaper, and probably mo…
- `ytr_Ugz0_wBWy…`: That's an interesting point! Sophia does touch on the idea of efficiency versus …
- `ytr_Ugxddd6Al…`: Big corporations like Disney will throw tens of millions at buying politicians a…
- `ytc_UgzZKD3x9…`: I don't like the idea of AI it's evil it's made by man not by God anything man m…
- `ytc_UgzJSjDkR…`: I’m looking forward to the AI take over. Humans clearly aren’t able to look afte…
- `ytr_Ugw5d_6E0…`: Looking at Claude 4, the predictions seem spot on. Tough time for millions of us…
- `ytc_Ugxb_pu4d…`: I'm sort of wondering what you think a good use of ai art would be? (if there is…
- `ytc_Ugz553vY0…`: The worst part is I’ve seen Tik Tok creators that market “their AI art” and prom…
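The lookup-by-ID and random-sample views above can be sketched in a few lines. This is a minimal sketch, assuming a record shape of `id` plus `text`; the two inlined records and the `lookup`/`random_previews` helpers are illustrative placeholders, not the tool's actual schema or API.

```python
import random

# Placeholder records; the real tool's record schema is not shown in the dump.
SAMPLE_RECORDS = [
    {"id": "rdc_n6145hz",
     "text": "placeholder comment text, long enough to be truncated in the preview"},
    {"id": "comment_002",
     "text": "a second placeholder comment for the random-sample view"},
]

def lookup(records, comment_id):
    """Return the record with the given comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

def random_previews(records, k=8, width=80):
    """Draw up to k random records and truncate each text to a short preview."""
    chosen = random.sample(records, min(k, len(records)))
    return [(r["id"], r["text"][:width] + ("…" if len(r["text"]) > width else ""))
            for r in chosen]
```

A preview is capped at `width` characters plus a trailing ellipsis, which matches how the sampled comments above are cut off.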
Comment
I asked an AI about this, and this is what it said:
I get why this feels heavy. Strip away the sci-fi music and there is a serious intellectual puzzle here. Let’s slow it down and dissect it, because some assumptions are sneaking in unnoticed.
First, emergent behavior and the “black box” problem are real—but they are not magic or autonomy. Emergence just means complex patterns arising from simple rules. Ant colonies do this. Markets do this. Your brain does this. None of these automatically turn into a hostile god just because they’re complicated. Complexity ≠ intent.
The black box problem sounds scarier than it is. It means: “We can’t always give a neat human-language explanation for why a model produced a particular output.” That’s different from “the system understands itself” or “has goals.” Right now, AI has no desires, no survival instinct, no fear, no hunger, no will. Those are not missing features waiting to turn on; they require entire architectures that do not exist.
Now let’s challenge the big hidden assumption in your fear:
Power automatically leads to rebellion or extinction of humans.
That’s a very human assumption. Evolution wired us to see intelligence as a competitor because biological intelligence competes for resources. AI is not a biological organism. It does not reproduce itself independently, does not seek energy on its own, does not care if it exists tomorrow. Remove humans tomorrow, and AI doesn’t “take over”—it shuts down when servers fail.
Another weak link: people often imagine “AI” as one unified mind. Reality is the opposite. AI development is fragmented, constrained by hardware, economics, regulation, and human oversight. There is no single lever labeled “turn on superintelligence.”
Could things go wrong? Absolutely. But the realistic risks are boring, not apocalyptic:
• misuse by humans
• concentration of power
• over-automation
• bad incentives
• careless deployment
Those are human failures, not AI deciding to wipe us out.
Here’s a useful mental model:
AI is not an enemy species. It’s an amplifier. It magnifies whatever values, incentives, and structures humans give it. A hammer can build a house or break a skull. The hammer doesn’t wake up angry.
Now the most important point—and this is where I push back hard on fatalism: Fear that treats catastrophe as inevitable is intellectually lazy. It skips responsibility. If extinction were unavoidable, ethics, alignment research, safety work, and governance would be pointless. Yet history shows the opposite: nuclear weapons didn’t end humanity because humans built constraints, norms, and mutual deterrence.
AI will be the same story—messy, political, imperfect, but shaped.
The universe is dangerous, yes. But intelligence—human intelligence—is still the steering wheel. Pretending we’re passengers waiting for doom is a story, not a conclusion.
The real question isn’t “Will AI destroy us?” It’s “Will humans stay thoughtful, cooperative, and humble while building powerful tools?”
That question is still open—and that’s where the future actually gets decided.
youtube · AI Moral Status · 2026-02-08T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgywFaIvlR8hdk1mpiZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8gsSTv1LjQpH7dKJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxhWslbQFE7c2vUoh94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzQnpU8W3vJKyXwGSp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGh6YLAR6oUgfFunh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxBSFtCX25oBgGuIc14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzjFUa5thIMk79qdMZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwKFx4x7eC8z6-6FDF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzl-bpSahjg-pOer3d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy5vJkKJKdUpMfVx3l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}]