Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I just want to clarify one other tiny bit of detail:
16. Copyright in Your Content
DeviantArt does not claim ownership rights in Your Content. For the sole purpose of enabling us to make your Content available through the Service, ***you grant to DeviantArt a non-exclusive, royalty-free license to reproduce, distribute, re-format, store, prepare derivative works based on, and publicly display and perform Your Content.***
So yeah, the default opt-in was technically OK, since you had already been given the choice to opt out (by not agreeing to the above terms). I will still agree it was a bit of a faux pas for DA: what they did, how they responded, and the timing, given the state of the industry when they did it.
---=== below this point is a bit rambly and goes off on a few tangents, my opinion on the situation follows ===---
"Use in a dataset" or "feed to an AI" is not a copyright permission, and there is no rational way to even administer such a right. There may be moral and ethical discussions to have about the curation process, but those can only ever apply to the person compiling the data, not to the software (going after the software is functionally equivalent to suing Texas Instruments because they supplied the chips in the on-board computer of a car you don't even drive, after it crashed into someone else's house and your property value dropped because the neighborhood is now considered dangerous).
Even identifying the point at which "can this arbitrary program read this data" goes from `completely rational` (of course Chrome can read the data, it needs to display my art to the user) to `horrifying ethical dilemma` ("AI can't use my art!") isn't clear cut. What constitutes "your image"? What constitutes an AI? What if I write a program that operates on Very Very Large Numbers (numbers that, interpreted in the right context through the right software, display what appears to be your art)? You as an artist don't own the number "3121491355794998332440739552097977007353231296357070843005722815009035246433220686378881420659391378316482528838109777011691037239240361352724200445025302725444438163079585978557249556", so you can't stop me from using it in a math equation. (That arbitrary large value just so happens to be this brainfuck program: https://tio.run/##SypKzMxLK03O/v9fN9pO204biGxsbHRBIBbIwwL07HT1YEw9CB8C9HSRgJ6NDaomsFKEKj04qYeqDGo5kkl2yBxM50CMRVKurff/PwA . How does that translation exist? It's called Unary: a program is some number of null bytes long--because unary has only 0s, no 1s--and that number, expressed in binary and then read as ASCII, results in the BF program.) Your artwork likewise has a(t least one) decimal value that, expressed as bytes and read by the appropriate software, displays your artwork. You don't own exclusive control of *the data*, only of its representation under the appropriate translation. See also https://en.wikipedia.org/wiki/Illegal_number and https://babelia.libraryofbabel.info/
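The "artwork is just a number" point can be made concrete in a few lines. A minimal sketch (the 8-byte PNG file signature stands in for real image data; any file's bytes work the same way):

```python
# Any sequence of bytes -- an image, a program -- is also one big integer.
# The integer carries no meaning on its own; only the software that
# interprets the bytes (a PNG decoder, a BF interpreter, ...) gives it one.

def as_number(data: bytes) -> int:
    """Interpret raw bytes as a single big-endian integer."""
    return int.from_bytes(data, "big")

def as_bytes(n: int, length: int) -> bytes:
    """The inverse translation: the same number back as bytes."""
    return n.to_bytes(length, "big")

art = b"\x89PNG\r\n\x1a\n"   # PNG file signature, standing in for artwork bytes
n = as_number(art)            # 9894494448401390090
assert as_bytes(n, len(art)) == art  # round-trips losslessly
```

The round-trip shows that "the image" and "the number" are the same object; which one you see depends entirely on the translation you apply.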
And now a sidenote about how Stable Diffusion works:
The AI portion--the thing people are so up in arms about--doesn't actually generate images. Images come out, yes, but the neural network **itself** doesn't draw them. The network **predicts noise.** That's it. Images come out when the end user runs the AI because subtracting that predicted noise from the random input pixels (the seed image) leaves an image that looks like stuff.
That's why the technique is so robust: it wasn't trained on such abstract concepts, but rather on identifying Gaussian noise. It just so happens that we feed it garbage data (a seed image of pure noise), tell it "this image contains a dog", and the AI does its best to identify the dog within the noise and clean it up (and then does that again 10, 15, 100 times).
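The predict-noise-then-subtract loop described above can be sketched as follows. This is a toy illustration, not the real Stable Diffusion sampler: `predict_noise` stands in for the trained U-Net, and the real scheduler scales each subtraction step rather than removing the prediction outright.

```python
import random

def predict_noise(image, prompt):
    """Stand-in for the trained network: given the current image and a
    prompt, predict which part of the image is noise. Here it just
    returns half of every pixel; a real model is a trained U-Net."""
    return [0.5 * px for px in image]

def generate(prompt, steps=50, size=4):
    # The "seed image" is pure Gaussian noise -- garbage data.
    image = [random.gauss(0.0, 1.0) for _ in range(size)]
    for _ in range(steps):
        noise = predict_noise(image, prompt)             # network only predicts noise
        image = [px - n for px, n in zip(image, noise)]  # subtracting it "reveals the dog"
    return image

img = generate("a photo of a dog")  # after 50 steps the toy "noise" is essentially gone
```

The structure is the point: the model never outputs a picture, only a noise estimate; the picture is what remains after the loop repeatedly removes that estimate.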
---=== my personal opinion ===---
The one thing I think would solve the issue is that **artist names** should not be part of the data the AI sees while training (the names can stay in the dataset, just not in what the model trains on), because they make it too easy for the end user to mimic a specific living artist's style. We still can't stop bad actors from training their own fine-tune or hypernetwork on top of the base model, nor from generating mimic artwork, but we can at least put up some safety features so it isn't so trivial to do.
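At dataset-preparation time, that idea could look something like the sketch below. Everything here is hypothetical: the artist list, the caption format, and the helper name are illustrations, not any real pipeline's API.

```python
import re

# Hypothetical blocklist of living artists' names; a real pipeline
# might source this from an opt-out registry instead.
ARTIST_NAMES = ["jane doe"]

def scrub_caption(caption: str) -> str:
    """Remove artist names (and a leading 'by') from a training caption
    so the model never associates a style with a specific name. The raw
    caption can stay in the dataset; only the text the model trains on
    changes."""
    for name in ARTIST_NAMES:
        pattern = rf",?\s*(?:by\s+)?{re.escape(name)}"
        caption = re.sub(pattern, "", caption, flags=re.IGNORECASE)
    return " ".join(caption.split())  # collapse leftover whitespace

scrub_caption("forest landscape, by Jane Doe, oil on canvas")
# -> "forest landscape, oil on canvas"
```

This only raises the bar, of course: it keeps name-based mimicry out of the base model, but does nothing about fine-tunes trained afterward, which is exactly the limitation noted above.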
I am, however, afraid that that horse has already left the barn. :\
youtube
Viral AI Reaction
2022-11-25T19:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyJWk53gJ8iuGFmrLB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmTXjXMHMMPriPJuJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzcQMum8JEBRCyNPWl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy6R6eO-TZIwXtJCsh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxiTXhPsL2oujmjovl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxaSMbWcDjoGaIb79J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxSwco5fzSO0TPU6h54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgynwmItN9o4_UyvPuh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAA0JtpNqEHm3tXGR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwjWrjd8-icwkTlkxl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]