Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwCDsjK5…: "The dead (and/or super crooked) eyes and empty stare of AI pieces, both in anima…"
- ytc_UgyncnGCi…: "I could never fall for "romantic" ai. I like ai. But to have a romantic relation…"
- ytc_UgzE18voi…: "At this point, if AI decides to take us down, that means we have earned it…"
- ytc_UgzKjeAce…: "If she is the first robot assistant...i guess all the NPCs in the background are…"
- ytr_UgxGmuYSL…: "So you're racist and sexist? You want the medical ai to discriminate against wom…"
- ytc_UgzHhiPyN…: "To the AI Artist, who & what art did you TRAIN that AI on? You have not "made" …"
- ytr_Ugxkn_1UZ…: "@tjen7929 Considering the amount of knowledge and thinking power AI has, I don'…"
- ytc_UgysU99QG…: "It's a good thing if we can scale back technology dramatically. I learned of the…"
Comment
I totally support the idea of this, and hope that it has an impact, but I have a feeling that the training will not be fooled by it for very long. Likely they'll feed back in the hideous results of poisoned training into a next generation of training, tagged accordingly, and then everyone will just be adding "poisoned art" to their list of negative tags.
Alternatively, they'll just develop filters to remove the poisoning for different styles of art, and then pre-process their training data.
It's nice to think they'll avoid the poisoned images, but they'll likely just remove the poison, which probably won't be too hard for a lot of art styles. It will lose data in the images, of course, but probably not enough to matter when you have enough data.
It really sucks, but I think the problem is that when you only care about money, you only have to make "art" good enough to fool all the people who don't have an actual appreciation for the art, or the ability to pick out the differences.
It's a lot like artificial sweeteners. When it's so cheap to produce, you can put it in everything, and 10% of people will stop buying your product, and 90% won't notice the difference (or won't care), and your increased profit margins on the 90% more than make up for the loss.
Regarding the "AI is inevitable" point, my depressing take, unfortunately, is that it will be for a while. It'll eventually die off to a degree, but the main limiting factor is going to be the AI eating its own poop. You feed the AI content, and it poops out a worse derivative that somewhat resembles what it was fed. Then the next generation gets fed the same OC, but also a bunch of poop. Each generation eats more poop, until the internet is full of meaningless slop. The people training them are eventually going to have to start curating it to only the good images, and separating out the bad AI ones, but eventually it'll likely take too much effort to tell them apart.
Another way to mess with the AI might actually be to make the art _look_ like AI in other ways, so that future training might avoid it. Filling it with metadata that's similar to what's in a generated image, but with mildly inaccurate prompts or generation settings so it will mess with the training if people do try and use it, but not so far off that it's obvious.
Source: youtube · Viral AI Reaction · 2025-01-17T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzAOaAvF3Xwy2oQS7F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhDNA51f0jOAJsLOV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyy9zB-PzuQd7BFt5Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw1SMHdq6gMmRxvxBN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyD4JWkL0qvt1vZ4m54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwLsCPed_stcu5SF194AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz_aYnRQQP5nhk8J_B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzQIM_DAulGXwD1llN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzTJN81tdpdTstKXxR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzahWt8Wq6WPQeJrz54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
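For reference, the raw response above is a JSON array of per-comment codes along four dimensions. A minimal sketch of parsing such a response, validating each record, and indexing it by comment ID for lookup; the allowed values below are only those observed in this dump, and the actual codebook may define more:

```python
import json

# Allowed values per dimension, as observed in this dump (assumption:
# the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting bad values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# One record from the response above, used as a lookup example.
raw = '''[
  {"id": "ytc_UgzTJN81tdpdTstKXxR4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "resignation"}
]'''
codes = parse_codes(raw)
print(codes["ytc_UgzTJN81tdpdTstKXxR4AaABAg"]["emotion"])  # resignation
```

Validating against a fixed value set matters here because LLM coders occasionally emit out-of-schema labels; failing loudly keeps the coded table consistent.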