Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
14:39 A word of warning about a potential consequence this might also cause, not to the art itself but to the issue as a whole: developing "immunity". (While I'm by no means an expert, I am studying AI & ML, so I generally understand how these models and the processes behind them work. I'd like to offer a different perspective.)
*TL;DR* "More Data = More Better" -> too much poisoned art could/will lead to poison immunity in the models. But resistance isn't futile. (imo)
1. If they aren't already, companies/teams developing generative models will likely introduce systems into their workflow that either remove poisoned artworks from their dataset, or maybe even remove the poison from the images themselves. Technologies for removing noise from images already exist. That's how diffusion models work! So a similar model trained on "diffusing" the artifacts from the poisoning isn't far-fetched. (Take known, clean images, poison them, then train a model to reverse that process.)
While the first option of removing any poisoned art from the data sounds better at first ("Hey! My art isn't being used for training anymore!"), it only addresses half the problem: as I understand it, the idea is to corrupt the model as a *whole*. The models can still be used and trained with the plethora of "untainted" data. So it won't make them go away, nor make them any worse. It *might* slow progress a bit. But that's about it.
2. They might even just leave the poison as is. While short-term it may corrupt the models, once more and more poisoned data gets introduced to the datasets, the models will learn how to ignore it better and better. "It may take longer or be more difficult to train, but that's nothing upscaling and more resources can't fix!" Basically, the models will be building up an immunity to it. Building "better"/bigger models can only do so much. More data almost always leads to better results. (To a certain extent at least.)
Well, once that happens, new, more potent poisons could be developed! Which in turn will make the models even more resilient... Leading to a toxic arms race, which isn't good for anyone.
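The "poison clean images, then train a model to reverse it" idea from point 1 can be sketched in miniature. This is purely illustrative: the "poison" here is a made-up fixed perturbation, and a simple least-squares affine map stands in for a real diffusion-style denoiser (real poisoning attacks and real de-poisoning models are far more sophisticated).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for images: flat vectors of pixel intensities.
n_samples, dim = 500, 16
clean = rng.random((n_samples, dim))

# Hypothetical "poison": a fixed perturbation pattern added to every image
# (a crude stand-in for adversarial poisoning artifacts).
pattern = rng.normal(scale=0.3, size=dim)
poisoned = clean + pattern

# "Train" the de-poisoner: fit an affine map from poisoned inputs
# back to their known clean originals, exactly as the comment describes.
A = np.hstack([poisoned, np.ones((n_samples, 1))])  # bias column
W, *_ = np.linalg.lstsq(A, clean, rcond=None)

def depoison(x):
    """Apply the learned clean-up map to one poisoned sample."""
    return np.append(x, 1.0) @ W

# On an unseen sample, the learned map should undo the perturbation.
test_clean = rng.random(dim)
test_poisoned = test_clean + pattern
restored = depoison(test_poisoned)
err_before = np.abs(test_poisoned - test_clean).mean()
err_after = np.abs(restored - test_clean).mean()
```

Because the toy poison is a fixed additive pattern, the affine fit recovers it almost exactly (`err_after` collapses to near zero while `err_before` stays around the perturbation's magnitude), which is the mechanism the commenter is warning about.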
So what am I saying? Is it pointless, should I just not bother?
No. Resistance is not futile. Because maybe I'm wrong and it will work out. But likely it won't, at least not long-term. But that shouldn't stop you from keeping your art and yourself safe. It also shows the people that you won't just sit there and do nothing.
(Again, these are just my somewhat educated speculations. I could be way off base. I just intend to make people aware that this might not be the cure-all that we hope it is.)
Sorry for the very long comment, but thank you for (hopefully) considering my ramblings.
Edit: Formatting
Source: YouTube — "Viral AI Reaction" — 2024-10-21T22:5… — ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwqmxNbO2jl3Rt2awB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWNvT6xs9rWnob0Xh4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNW9Pl2mgonIvwUFt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxP4LP7esgDBCro91t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwZV-bg-glZaKfr6dd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzs61FRtcFl2bTriXt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyuXoFefEIzjfFKtn94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyfvvXUOtb9_L4MIZl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzZAsPQnva7o0mnJsB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzJXN_0vNs6aoMdF5Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
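A raw batch response like the one above can be validated and indexed by comment ID before the per-comment coding table is rendered. A minimal sketch (the `raw` string below copies two rows from the response above; the field set is inferred from those rows):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_UgzZAsPQnva7o0mnJsB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzs61FRtcFl2bTriXt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# The four coding dimensions plus the comment ID, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(payload: str) -> dict:
    """Parse a batch response and key each coded row by its comment ID,
    rejecting rows where the model dropped or invented fields."""
    rows = json.loads(payload)
    for row in rows:
        if set(row) != EXPECTED_KEYS:
            raise ValueError(f"malformed row: {row}")
    return {row["id"]: row for row in rows}

codes = index_by_id(raw)
print(codes["ytc_UgzZAsPQnva7o0mnJsB4AaABAg"]["emotion"])  # fear
```

Keying by ID makes the lookup for a single comment (as the coding-result table above does) a dictionary access rather than a scan, and the key check catches the common failure mode of LLM-generated JSON drifting from the requested schema.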