Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This video is definitely creepy but we should not get too caught up in the horro…" (ytc_Ugw9U9D3c…)
- "I'm all for AI writing new material. I haven't watch a new movie or TV series i…" (ytc_UgzMoUbzK…)
- "omg the irony of using chatgpt to answer surverys :D I guess the ai will think t…" (ytc_UgxQCNtl1…)
- "I just find it extremely interesting that after learning about agendas of the mo…" (ytc_UgzLfQfRy…)
- "@alexalexis7899 I know it is a valid take. I just don't share that view… just li…" (ytr_UgySUuDsH…)
- "So learning code is still valuable. Someone is still going to be working with th…" (ytc_UgyMyLROy…)
- "We will never be at the human level fully because they cannot feel like he said …" (ytc_Ugz5E09Is…)
- "HAND YOU LIFE OVER TO THE BEAST SYSTEM!!! AI !!! CONTROLLED THEY WILL CONTROL AL…" (ytc_UgyTD5Lq9…)
Comment
Sam, you need to be careful when you let your fans know that someone else created a model. If you publicly announce that person X created a model on your work, your fans will witch-hunt X. This is why people clapped back at you. As for the rest of the video, I get your and other artists' frustrations, concerns, and worries over AI. It's scary seeing something that can generate work extremely similar to your own in mere seconds; that's threatening and frustrating.
However, I think the concerns and frustrations are a little misplaced. Allow me to explain: your complaint is that it used copyrighted data. However, if it produced garbage artwork, this conversation would never be taking place. It's only taking place because it does produce pretty amazing results pretty quickly (and because you can technically use certain artist names as prompts; I agree, most living artists should not have their names in the training data at all unless they want them to be in there). So copyrighted work being in the training data becomes an issue, but it's more of a secondhand issue. The real issue is that AI is competition, and can be seen as a threat. If we removed all copyrighted data and people had to opt in, the HOPE is that it would produce significantly worse art and allow artists to stay well above it. Realistically, though, even if they removed all the copyrighted work and made people opt in, it could still produce extremely good art, and there are plenty of artists who are already using AI in their workflow to produce work at a faster rate.
AI is very interesting. The training data was 5 billion images, all at 512x512 resolution. At 512x512, we are looking at at least 200KB per image (maybe more on average), and that assumes every image is pretty small. At 200KB per image, that's over 953,000GB. Many people mistakenly think that the models created from the training images have access to all of those images; the models are 2GB to 4GB in size. They don't have access to the training images. So to make a comparison: compressing 953,000GB down to 4GB means shrinking a file to about 0.00042% of its original size. In other words, if that type of compression existed, you'd be able to compress a 4GB file down to about 17KB. That type of compression technology would be the holy grail of internet inventions and would be worth trillions on its own. If you had a 64GB phone, you'd be able to fit just about 4 million movies on it with that amount of compression. This is to address people who are specifically saying it "copy pastes" images together. It doesn't copy any images (there are overtraining/overfit issues, as you mentioned, but those aren't really a huge issue in the original training data).
Unfortunately, Pandora's box HAS been opened already. Those models already exist. They are small files capable of extremely good artwork. (It's not perfect, and it works better as a tool for real artists, IMO.) They could try to remove all copyrighted work from the original dataset, but I am not sure that would even be a good idea at this point. And would you really want them to? Here's something to think about. Let's say you want them to, and they do, and all new models going forward magically produce worse art; let's just assume that for a second. What happens? Does your position as an artist go back up to where it was pre-AI-scare? Well, yes, for the immediate moment, maybe. However, companies like Disney will be the ones able to train models off the BILLIONS of copyrighted works that they personally own. So what happens if Disney does that? Well, your position with Disney is no longer safe. You can't fight it. They own the artwork and the trademarks for it. How do you compete with Disney if they train their own model? You don't, and can't, at that point. You would have to hope Disney gives artists access to it for a monthly cost. You'd be paying to use the models; make no mistake, the artists who did all that work wouldn't be getting royalties from it. They would, however, be paying artists to work with their AI tools and model.
So in the end, your hope is to cripple the public's access to a "good" model. But this will absolutely backfire when rich companies like Disney, which own massive amounts of copyrighted content, train their own models. You're then left in a place where you can't compete with Disney's work unless they allow you to use their model, in which case you'll either be working for them or paying to use their model and be limited in what you can do with the content it produces unless, again, you work for them.
Artists aren't the only people this is happening to. This is happening to programmers. This is happening to McDonald's workers. It will happen to truck drivers in the near future. Many things will be automated by AI.
YouTube · Viral AI Reaction · 2023-01-13T15:3… · ♥ 3
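The compression arithmetic in the comment above can be sanity-checked in a few lines, using the comment's own figures (5 billion images at roughly 200KB each, distilled into a ~4GB model file). Note the true ratio is about 4.2e-6, i.e. roughly 0.00042% of the original size:

```python
# Figures quoted in the comment: 5 billion training images at
# roughly 200 KB each, versus a ~4 GB model file.
num_images = 5_000_000_000
kb_per_image = 200

total_gb = num_images * kb_per_image / 1024**2   # KB -> GB
print(f"training data: ~{total_gb:,.0f} GB")     # ~953,674 GB

model_gb = 4
ratio = model_gb / total_gb                       # ~4.2e-6
print(f"ratio: {ratio:.2e} ({ratio * 100:.5f}% of original size)")

# Applying the same ratio to a single 4 GB file:
four_gb_kb = 4 * 1024**2                          # 4 GB expressed in KB
print(f"a 4 GB file would shrink to ~{four_gb_kb * ratio:.1f} KB")  # ~17.6 KB
```

The ~17KB and ~4-million-movies figures in the comment are consistent with this ratio; only the percentage as originally written dropped a factor of 100.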
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugy0T7VYG2i1bRCmcwV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzZjW3cBIkoel7x-zR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxl4XutLROiqAJY0IJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxLriCzkt9qUiGg2s94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTkXGTrBuf2CfnRHB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyv3x7PFc4in4CBGLR4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySuj6lTVh3vB88PHB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgySnAR0-8C3CdKuU3t4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzgI1KVaoQX2mVv72p4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugygd9jI2x-KIwzmiFJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
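The raw response is a JSON array of per-comment codes across the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated — the allowed values below are inferred from the records shown here, not from a published codebook, and the two-record payload is an abbreviated stand-in for the full array:

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the records shown above.
# A real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage", "mixed", "resignation"},
}

# Abbreviated stand-in for the model's raw JSON output.
raw = '''[
 {"id":"ytc_Ugy0T7VYG2i1bRCmcwV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxTkXGTrBuf2CfnRHB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

records = json.loads(raw)
for rec in records:
    # IDs in the samples use ytc_ (comment) and ytr_ (reply) prefixes.
    assert rec["id"].startswith(("ytc_", "ytr_")), rec["id"]
    for dim, allowed in ALLOWED.items():
        assert rec[dim] in allowed, (rec["id"], dim, rec[dim])

# Simple tally over one dimension, e.g. for a summary view.
emotions = Counter(rec["emotion"] for rec in records)
print(emotions.most_common())
```

Validating each record against a fixed value set catches the common failure mode of LLM coders inventing off-schema labels before they reach the results table.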