Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
They asked one of the AI gurus what college degrees were worth investing in. He …
ytr_Ugw_fYWFO…
if people are replaced with AI machines, people then wont have money to consume …
ytc_UgzMq8hLM…
Criminals are going to use AI, Police will need to use AI to stop the criminal…
ytc_UgzKZUMhT…
Artist be hating on AI yet you want everyone to tell they use AI. Just to summon…
ytc_UgzWP-x3z…
suing AI companies for training models on art work has a serious flaw. AI learn…
ytc_Ugy1VOIYY…
People watch too many movies man. A robot literally cannot turn on a human. We p…
ytc_UgzzW077V…
So a lot of people believe that consciousness is spiritual, (soul), which this g…
ytc_UgzMv64fI…
0:56 ...his voice shook a little bit there when he said "AI", like he knows some…
ytr_UgwqkUO2p…
Comment
Let’s take a long walk through this, because your post touches on something very real, very 2025, and honestly, very uncomfortable in a way we’re all still figuring out how to name. What you’re experiencing is a kind of linguistic uncanny valley, and it’s becoming increasingly hard to avoid. Let’s unpack it like an overstuffed suitcase on a hotel bed that’s just a little too high off the ground and you’re jetlagged but stubbornly trying to get comfortable.
⸻
You’re absolutely right that the “That’s not X, it’s Y” rhetorical move is not new. It’s been part of the snarky, sharp, Tumblr-to-Twitter-to-YouTube style of rhetoric for years now. It’s pithy. It flips expectations. It’s inherently memetic. It mimics the structure of a punchline or a clever retort. It feels clever even when it isn’t. Of course GPT picked it up—it’s linguistic catnip. It’s the phrasebook of the Extremely Online. But here’s where things get weird:
Once you know a machine can produce that kind of phrasing, something happens. The phrase doesn’t just sound like it came from GPT—it starts to feel like it only could have come from GPT. Like it was pre-chewed. Pre-digested. Auto-formatted in the little beige brain of an LLM and handed to the speaker like a warm, moist towel of thought. Suddenly, that structure isn’t just a stylistic flourish; it’s a tell. A marker. A linguistic fingerprint, and the prints are everywhere.
You’ve hit the phenomenon of algorithmic saturation—the moment when content created by algorithms doesn’t just live alongside human language, it starts to influence it, shape it, subtly nudge it like rain shaping a landscape. And it’s not just that you see GPT in everything—it’s that other people are unknowingly adopting a style that was reverse-engineered from them and now reflects back at them in synthetic form. The snake is eating its tail. The machine learned from us, and now we are, very awkwardly, learning from the machine.
⸻
Let’s talk about the em dashes, since you broug
Source: reddit · Category: AI Harm Incident · Timestamp: 1750101995.0 (Unix epoch) · ♥ 493
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_my60p33", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_my5a2oc", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "frustration"},
  {"id": "rdc_my5kq48", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_my4uckj", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_my53cqy", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
```
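A raw response like the one above has to be parsed and sanity-checked before the codes are written back to the dataset. The sketch below shows one minimal way to do that in Python. The allowed value sets are inferred only from the sample output shown here; the actual codebook almost certainly includes more categories, so treat `ALLOWED` as a hypothetical placeholder.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above. Hypothetical: the real codebook likely defines more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "virtue"},
    "policy": {"none"},
    "emotion": {"approval", "frustration", "outrage",
                "resignation", "indifference"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    reject records with missing fields or out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        missing = ({"id"} | set(ALLOWED)) - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(
                    f"record {rec['id']}: {dim}={rec[dim]!r} not in codebook")
    return records
```

Failing loudly on unexpected values is deliberate: LLM coders occasionally emit labels outside the codebook, and silently storing them would corrupt downstream tallies.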