Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Full agreement on the opt-in/opt-out framing. That should be the standard. Partial agreement on the labor-displacement point. There are inherent technical limitations to what we're calling "AI"—this is a transformer, not Skynet. But yes, if corporations can replace paid labor with tech, they will. That's a late-capitalism problem, not an AI-specific one.

Your UBI/Universe 25 section raises interesting questions about meaning under automation, but Universe 25 was about overcrowding stress, not abundance-induced nihilism—that interpretation has been thoroughly debunked. And the claim that UBI studies show people "become poorer" contradicts the actual data from the Finland, Kenya, and Stockton trials. The real risk isn't that people lose purpose without work—it's wealth concentration creating neo-feudalism.

And then we hit the metaphysical fluff—which I reject entirely. Same with the baked-in assumption that effort automatically grants value. That's just Calvinist work ethic dressed up as moral philosophy, and I see no reason to treat it as self-evident. Very common in the US, sure—almost foundational—but not universal. And I'm not American anyway. This assumption actually runs throughout the entire video, including the claim that in a UBI society there would be "nothing to strive for"—as if meaning is inherently derived from hardship or economic necessity. That's a substantial philosophical claim requiring justification, not assertion. Philosophers have been trying to defend variants of it for centuries—the Protestant work ethic, meaning-through-suffering theodicies, Nietzschean struggle-as-virtue. These arguments are defensible only if you grant certain foundational axioms: human nature requires struggle, purpose must be externally imposed, economic productivity equals human worth. But those axioms aren't self-evident truths—they're contestable metaphysical commitments that many philosophical traditions explicitly reject, including those I personally subscribe to.

Now, the fluff, point by point:

1. "Robots can't yearn." It's never explained why yearning is relevant here, or why its absence should matter for the output.
2. "True human endeavor." As opposed to… fake human endeavor? And who decides what qualifies as which?
3. "Soul of art." Pure mysticism, never defined. It functions as a placeholder for "I get vibes from it," which is fine as personal experience but meaningless as an argument.
4. The McNugget analogy. Aesthetic class snobbery dressed up as moral reasoning. You're literally moralizing personal taste. And honestly, if you think all home-cooked meals are inherently good, you're blessed to have never encountered true domestic culinary horror. Sometimes McNuggets would be mercy.

Finally, on "democratization": telling someone "just learn to draw" is like telling them "just learn to code." Come on, do it, instead of using ready-made website templates—did you know those took work away from front-end developers? And anyway, your site won't have any value because you didn't put in the appropriate amount of work. Right? Unless we accept that sometimes the output matters more than the process? Just as you're not interested in learning programming to build your blog, I'm not interested in learning to draw to get pretty prop art for my D&D campaign.

P.S. Just so it's clear: I fully support transparency when it comes to AI—both "AI art needs to be marked as such" and "datasets used for training need to be revealed." The second is crucial so copyright infringements can be detected and pursued. It might interest you that the EU AI Act actually does require this dataset transparency. We'll see whether they manage to enforce it, but at least there's a legal framework. The US, on the other hand, is... kind of fucked. It sucks.

P.P.S. This was genuinely fun. I disagree with you pretty hard (obviously), but I do enjoy your style. Hence the wall of text—I actually wanted to engage with the points you raised. Subscribed.
Source: youtube · Viral AI Reaction · 2025-12-11T01:0…
Coding Result
Dimension       Value
--------------  ----------------
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         indifference

Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzaZd6Q3hyQARG7vYp4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugznxb4eZ4WSbGQaJgZ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgynXYhGHm9-gEfpIlN4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyO21VPM8c8yrTYW894AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzE5lUXL0Yx7am5rW94AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwzTKOIdaDTx3GNb5t4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgywpRPa6tacgXkyPsd4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxDLLoaLWdmGvIG0lh4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgzajMXOEU-npkndKQh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugypenv1mZoaJ4w3yzF4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",      "emotion": "resignation"}
]
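The raw response is a JSON array of per-comment codes, one object per comment with four coding dimensions plus an `id`. A minimal validation sketch for such a response is shown below; note the label vocabularies here are inferred from the values observed in this one response, not from the actual codebook, which may permit additional labels:

```python
import json

# Allowed labels per dimension — INFERRED from the values seen in this
# response; the real codebook (not shown here) may define more.
ALLOWED = {
    "responsibility": {"none", "company", "government", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "fear", "mixed", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this dump all carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{rec['id']}: bad {dim} label {rec.get(dim)!r}")
    return records

# One record from the response above, round-tripped through the validator.
raw = ('[{"id":"ytc_Ugznxb4eZ4WSbGQaJgZ4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"indifference"}]')
codes = validate_codes(raw)
print(codes[0]["responsibility"])  # → company
```

Validating before ingestion catches the common LLM failure modes for this kind of task: labels outside the codebook, missing dimensions, or malformed JSON.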