Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
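For readers working outside the UI, here is a minimal sketch of the same lookup, assuming the raw codings are exported as a JSON array shaped like the Raw LLM Response example at the bottom of this page (the file name `raw_llm_responses.json` is an assumption, not the pipeline's actual path):

```python
import json

def lookup_coded_comment(comment_id: str, path: str = "raw_llm_responses.json") -> dict | None:
    """Return the raw LLM coding for one comment ID, or None if it is absent.

    Assumes `path` holds a JSON array of objects with the fields
    id, responsibility, reasoning, policy, and emotion (see the example below).
    """
    with open(path, encoding="utf-8") as f:
        responses = json.load(f)
    return next((r for r in responses if r.get("id") == comment_id), None)

# Example, using the full ID of the comment coded further down this page:
# lookup_coded_comment("ytr_UgycDfkEiecDiTatO-x4AaABAg.AGhdh52XAZRAHMEt4_juzA")
```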
Random samples — click to inspect
- @vi6ddarkking: "LLMs have been in the spot light for like what, 2 years? That's ha…" (ytr_UgykjzIue…)
- "I wish we were born with a magic gift. It took years of training and practice. A…" (ytc_UgzWxgxr2…)
- "We have the worlds best solar energy potential and are using almost none of it. …" (rdc_da3zzcn)
- "ChatGPT is not in the same state between conversations. It does not learn as a h…" (ytr_UgydGQ7P8…)
- "Yes will put A.l. into everything until it starts to take us over , trust the go…" (ytc_UgyWByqFI…)
- "The world might just be better off for it. It's not like we're doing anything to…" (ytr_UggxItRjp…)
- "That’s what I’m saying! One isn’t just “born” with a talent. I’ve been drawing f…" (ytc_UgzUJuOea…)
- "Buddy, we don’t want or need “AI Capacity Tokens”; we need reform of wealth dist…" (ytc_UgwEEGhby…)
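A similarly hedged sketch of how a random-sample view like the list above could be drawn from the same (assumed) export file:

```python
import json
import random

def random_samples(path: str = "raw_llm_responses.json", k: int = 8) -> list[dict]:
    """Draw k coded comments at random for spot-checking, as in the list above."""
    with open(path, encoding="utf-8") as f:
        responses = json.load(f)
    return random.sample(responses, k=min(k, len(responses)))
```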
Comment
@Fern_leaf255 I appreciate your passion, but calling people “keyboard warriors” isn’t productive. Saying AI art “isn’t art” ignores that humans design, train, and curate every step—just like photographers and digital artists once did. Different process ≠ no value. Different methods can coexist, and so can differing tastes.
You argue AI “steals the process,” but for many creators, crafting prompts and refining models is their artistic journey. And assuming everyone finds joy in traditional methods discounts some disabled people or those with time constraints. Which is the case with professional artists.
If we’re worried about ethics, let’s focus on dataset transparency and crediting original creators for now—not dismissing an entire medium.
Crediting original creators in AI‐generated art boils down to two pieces:
*_Provenance in the training data_*
Curated, licensed datasets: If a user trains (or fine‐tunes) their model only on openly licensed or public-domain images that come with clear creator metadata, they can carry that metadata forward. In other words, users build their dataset so that every image is tagged with its author, source URL, and license—and never ingest “unknown” works. The responsibility lies with humans. Although, I do believe the talented men and women behind ChatGPT and OpenAI had too much faith in people to think this was going to be used regularly. They should have credited the inspirations by default.
Dataset cards & model cards: By publishing a Dataset Card (which lists exactly what the user trained it on, with URLs and artist credits) and a Model Card (which explains how they trained it), the user gives users down the line the ability to see who contributed to the model’s “knowledge.”
*_Attribution in the Generation Pipeline_*
Metadata embedding: When a model emits an image, you can inject EXIF or XMP metadata that lists the model name, version, and dataset provenance. Future viewers (or platforms) can read that metadata and see, for example, “Trained on 10,000 public-domain works by Monet, Kahlo, etc.”
Nearest-neighbor recall tools: Projects like “Stable Attribution” perform a quick search over the model’s training embeddings to find the closest real artworks that influenced the output—and then list those as suggested credits. It’s not perfect, but it gives users a shortlist of likely sources.
Prompt-level crediting: If one's using an API or UI that lets you include “seed images” or style references, users can manually note in their prompt or output description. Again, it boils down to human error. NOT the tool in and of itself.
As a standalone rant, you expressed genuine concerns about preserving tradition—but it falls short of persuading anyone who doesn’t already agree. It would be far stronger if it:
1. Defined clearly what qualities only “real” art can have, and why AI cannot share them.
2. Addressed counterexamples (e.g., digital artists who blend AI and hand-drawing).
3. Focused on specific harms (copyright infringement, bias in training sets) rather than parroting talking points.
Until then, it reads more like a resistance manifesto than a reasoned critique. Sorry.
youtube
2025-04-25T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgyRKCo492CnyAHQIe14AaABAg.AHyDl0V7OzwANsyHS0qnPf","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwIzVFpibTEJBaj_Rx4AaABAg.AHVeGwEROylAMC003STdCk","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgycDfkEiecDiTatO-x4AaABAg.AGhdh52XAZRAGxpELUeWzj","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgycDfkEiecDiTatO-x4AaABAg.AGhdh52XAZRAHM0P_hcy7t","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytr_UgycDfkEiecDiTatO-x4AaABAg.AGhdh52XAZRAHMEt4_juzA","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_UgzaX3AKJrzc6mnKSxZ4AaABAg.AG2v4r14QjyAGf-HxAQ_DJ","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgySZCcwPHxxEH6HNwh4AaABAg.ADO33wOWopNAO23PdRZLdR","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgySZCcwPHxxEH6HNwh4AaABAg.ADO33wOWopNASROjFHLnEx","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_UgxjiVJCH0P9yOGdDbd4AaABAg.ACS6vy9SHblAGKmIeDo5jE","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgxjiVJCH0P9yOGdDbd4AaABAg.ACS6vy9SHblALKVqrFhzTR","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
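As a rough illustration of how such a raw response can be turned into the per-dimension Coding Result shown above, here is a small parsing and sanity-check sketch. The allowed value sets are inferred from this single sample and are almost certainly not the full codebook:

```python
from dataclasses import dataclass

# Value sets inferred from this sample alone; the real codebook may contain more codes.
RESPONSIBILITY = {"user", "company", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "mixed", "unclear"}
POLICY = {"none", "industry_self", "liability", "regulate", "unclear"}
EMOTION = {"outrage", "approval", "mixed", "indifference", "fear"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

def parse_raw_response(entries: list[dict]) -> list[CodedComment]:
    """Turn a raw LLM response array into typed records, flagging unexpected codes."""
    records = []
    for entry in entries:
        record = CodedComment(
            id=entry["id"],
            responsibility=entry["responsibility"],
            reasoning=entry["reasoning"],
            policy=entry["policy"],
            emotion=entry["emotion"],
        )
        for value, allowed in (
            (record.responsibility, RESPONSIBILITY),
            (record.reasoning, REASONING),
            (record.policy, POLICY),
            (record.emotion, EMOTION),
        ):
            if value not in allowed:
                print(f"Unexpected code {value!r} for comment {record.id}")
        records.append(record)
    return records
```

Run on the array above, this yields ten records, including the user/mixed/none/approval entry that matches the Coding Result table for the displayed comment.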