Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgzLMe6GW…`: The problem with this is that LLMs like chatgpt go into "roleplaying" mode when …
- `ytc_UgyFfgUiP…`: Please for the love of god stop interviewing Gary Marcus. He's a grifter and a p…
- `rdc_j0ahy5b`: Well, also, the fact that AI "art" is stolen artwork from artists. These artists…
- `ytc_UgzoJGjag…`: The real replacement isn't AI. These companies are shifting and hiring oversea…
- `ytr_Ugyswzigu…`: This will never happen if Ai is smarter than you and don't have empathy by defin…
- `ytc_UgyjrIJmc…`: You work to save Jews everywhere. Hey Jew robot, will work to save all Jews …
- `ytc_UgwwoPuGH…`: 1:30 its worth mentioning that OpenAI was a full non profit when founded, but af…
- `ytc_Ughj52dn5…`: As Computer Engineer I don't think we'll ever reach the point to have conscious …
Comment
This overlooks that AI systems undergo a similar "training" process, analyzing patterns in data much like humans observe and internalize visual information from the world. Human artists don't create in a vacuum; they draw from billions of visual inputs accumulated over a lifetime, often without explicit "consent" from every source of inspiration, such as public art, photography, or nature. AI's efficiency in processing large datasets doesn't invalidate the human effort; it parallels how artists like Picasso or Warhol iterated on existing styles. Legally, training AI on publicly available images has been ruled fair use in cases like Anthropic's, as it's transformative and doesn't copy originals directly.
AI training isn't "using" your specific data in a direct, replicative way; it's learning statistical patterns from vast datasets, often anonymized and aggregated, similar to how search engines index the web. Courts have affirmed this as non-infringing fair use, not theft, because it doesn't deprive artists of their originals or their market. Claims of undermined interests are speculative; AI can democratize art creation, potentially increasing demand for human-curated or hybrid works, as seen in tools like Adobe Firefly that collaborate with artists.
Master copies are indeed learning exercises, but they often involve direct replication without explicit permission; historical artists like Van Gogh copied Millet's works extensively for study. AI does something analogous but at scale: it doesn't "copy" but extracts abstract features (e.g., brushstroke patterns) to generate new outputs. This is more akin to inspiration than theft, and U.S. courts have ruled AI training fair use precisely because it's transformative, not duplicative.
The "tacit understanding" among artists has never been universal; copyright lawsuits between humans abound (e.g., Shepard Fairey vs. AP). Adding AI doesn't inherently change the ethics; it's a tool that learns from public data, much like art students visiting museums. The speed of AI is an efficiency gain, not a moral failing, and pro-AI arguments note it expands access to creativity for non-professionals, fostering iteration in new ways.
Your manual effort is commendable, but AI isn't aiming for 1:1 replicas either; generative models like Stable Diffusion create novel combinations. Human imperfection isn't a virtue exclusive to us; AI outputs often have flaws (e.g., artifacts) that require human editing. AI tools can help artists overcome ingrained habits faster, as in hybrid workflows where AI suggests forms for refinement.
AI doesn't "download and store" images intact; datasets like LAION use compressed embeddings, and models learn correlations, not collages. Outputs are unique syntheses, ruled transformative under fair use. Calling it "stolen" ignores that human artists create "hybrids" from their influences (e.g., Cubism blending African art and Cezanne). AI's scale is a quantitative difference, not a qualitative one.
Fan art itself often repurposes IP without consent (e.g., Deltarune is Toby Fox's), yet it's celebrated. AI scraping public web data is legal under fair use precedents, and companies like OpenAI argue it's akin to Google indexing. Premium subscriptions fund innovation that benefits creators (e.g., AI tools for editing). Opt-out options exist (e.g., robots.txt), and some firms, like Adobe, pay for licensed data.
Sharing online implies public access, per platform terms (e.g., X's data usage). You're not "forced"; opt-outs and private sharing exist. AI doesn't target individuals; it's broad learning. This parallels how search engines or critics use shared work without per-instance consent.
Consent models are emerging (e.g., Spawning's opt-out registry, Adobe's compensated datasets), but mandating per-artist payment for public data would cripple innovation, as datasets include billions of images. Courts view training as fair use, not requiring payment, similar to how libraries don't pay authors for every book scan. The "gimmick" is efficiency, which historically (e.g., photography) displaced some jobs but created others.
Competition exists among humans too; artists "compete" by iterating on each other's styles. "Stolen dollars" assumes infringement, but fair use rulings contradict this. Some companies (e.g., Getty Images) license their data; others argue public data is free. Retroactive compensation isn't feasible or legally required, per cases like Authors Guild v. Google Books.
Gary's experiment highlights bias against AI, not a lack of standards. Pollock's work is abstract; AI can replicate it effectively because it's pattern-based. Studies show people devalue AI art when it's labeled as such, but rate it higher in blind tests. Modern art didn't "destroy" standards; it evolved them to include process and intent, which AI can augment.
Hating Pollock is anecdotal; many in the game and film industries value abstraction for inspiration (e.g., procedural generation in games mimics Pollock-like randomness). Lumping these together ignores that AI aids practical fields like AAA production (e.g., concept-art generation), where standards are high and AI boosts productivity without replacing human oversight.
Pollock was CIA-backed as propaganda, but that doesn't negate his influence on action painting. AI replicating Pollock shows its capability in abstraction; preferences for AI outputs in blind tests refute claims of inherent inferiority. Dismissing modern art as "money laundering" ignores its role in challenging norms, which AI continues.
Standards exist, but they're subjective and evolve; AI excels at fundamentals like anatomy via training on "good" data, often outperforming novices. "Stolen valor" assumes theft, but fair use allows learning from public works, like da Vinci studying cadavers or peers. AI iterates billions of "bad" generations internally, mirroring human practice. Merit isn't just effort; outputs matter, and AI can produce "good" art by those rules.
This conflates unrelated cultural debates with AI. Annoying artists exist, but they're not representative; most use AI as a tool (e.g., in games for prototyping). Hatred stems from fear, but AI often augments roles; per Forbes, 26% automation potential, with new jobs in AI curation emerging.
Yes, most artists are pragmatic. AI can free them from drudgery, allowing them to focus on expression. In production, AI handles repetitive tasks (e.g., texture generation), reducing "hostage" scenarios by streamlining workflows.
"Slop" is subjective, and Disney's issues are corporate, not artistic. AI can reduce slop by enabling rapid prototyping, leading to better final work. Research shows AI boosts creativity in these industries, producing inspired works faster, not more slop.
Market saturation happens with human content too (e.g., TikTok trends). AI accelerates it, but curation tools (algorithms, labels) help filter. "Soul" is philosophical; many value consumable AI art for its accessibility. YouTube's AI content reflects demand; blame viewers, not the tech.
Consumability drives markets, but "soul" persists; people pay premiums for human-made work (e.g., NFTs vs. AI). AI doesn't erase soul; it coexists with it, as in collaborations where humans infuse intent.
Automation fears echo historical ones (e.g., Luddites vs. looms), but technology creates net jobs; AI in art spawns roles like prompt engineer or AI ethicist. Goldman Sachs predicts 300M jobs affected, but many will be augmented, not lost. Education evolves too; AI tutors could democratize learning rather than devalue degrees. Non-college paths thrive in trades AI can't fully automate.
UBI studies (e.g., Finland, Kenya) show mixed results; many recipients work more or invest in education, not less, contradicting claims of increased poverty. Universe 25's applicability to humans is debated; critics note it ignored social structures. AI won't eliminate all jobs; it transforms them, as past tech did (e.g., computers created the IT sector). Human experiences like yearning add value, but AI art can evoke emotions too; value lies in reception, not origin. Regulation exists (e.g., the EU AI Act), balancing innovation. Identity beyond jobs is possible; many find meaning in hobbies and relationships post-automation.
Source: youtube / Viral AI Reaction / 2025-08-13T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwiXFnQzgC0pDdOKHh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-DS84iDONeoTVAfp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzzl8WBHVHDWZ02JBN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8Pj8LCc5LVIfic3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgywKZPNzw5I8D9PhzF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwng-gKDKaN5tWTGWN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwWp2QIwLrwpIDIQ4h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyjyoO3GnP7Qu_ppfd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxUVqIagOuQP0eE8KN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzxaWVBEIODYruVtO14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
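The raw response above is a JSON array whose per-record fields mirror the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be validated before ingestion follows; note the allowed code values are only those observed in this sample (the full codebook may define more), and `validate_codes` is a hypothetical helper, not part of any documented pipeline.

```python
import json

# Allowed codes per dimension. Assumption: these are only the values
# observed in the sample response above; the real codebook may have more.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation", "mixed"},
}

def validate_codes(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of human-readable errors."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    errors = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"record {i} ({rec.get('id', '?')}): bad {dim}={value!r}")
    return errors

ok = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}]'
print(validate_codes(ok))  # → []
```

Returning a list of errors rather than raising on the first problem lets a batch coder log every malformed record in one pass and re-prompt the model once.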