Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think much of the discourse for/against AI, specifically generative AI, is poorly understood when it comes to the money. For context, I'm a finance guy: the profit drive of these systems is primarily cost cutting and resetting the floor. When a company contracts with an artist, the deal is set for specific terms. This is usually a gamble for both parties, since the artist doesn't know how much value they will bring to the brand, or vice versa. As time goes on, though, if the brand is successful and increases in value, so does the importance of retaining its image. I think it's perfectly fair to require more compensation when that happens, but as demonstrated, companies are more than happy to use the next best alternative rather than be held hostage by an artist. That's really the core of much of what I've seen of this on the back end. What is being sold to the money people right now is automation, in all things; the art side is just one part of it. The money in generative art is currently being made most heavily in porn, especially as video is becoming easier and not shit.

There's also the question of the value-additive basis for having art, AI or not, in certain circumstances. As a mild example, we see plenty of garbage AI output, and even corporate spaces using mild or at worst inoffensive instances of it. As explained in the video, though, more detail requires more basis for it to fit a desired outcome. I think it's fair to say that the monetization of AI art, and more specifically its copyright protection, being where it currently is, is fair. It still needs addressing ultimately, but so does copyright in general.

As to model poisoning, as I understand it, Nightshade and similar tools just add an underlayer that isn't displayed on the image itself.
If that's the case, how a scrape is done will matter: if it's a file-based issue rather than an artifacting disfigurement of the visual image at the pixel level, it's much easier to circumvent. I've done enough test cases of this to conclude that certain model training methods are genuinely better than others at getting around dataset poisoning. Honestly, fucking up tags on image galleries and large datasets is, I think, more detrimental at this point than even hundreds of poisoned Nightshade images. It's why the Pony model is beating SD 1.5/SDXL even though it was based on them. FLUX is the same deal, really; they just curated a privately organized data scrape rather than a public one for the model.

The problem with generative AI as it currently stands is that much of what you see is first-pass generations: no cleaning, no editing, thirty seconds of hot fresh slop. I think it's a fair claim that if I told any speed drawer or line artist to stamp out anything in thirty seconds, they would warn me it's not going to be great. The difference with AI, or at least most of the tools facing the public, is like with references: what you as an artist need for reference and what a commissioner thinks you need are not the same. It's the same problem with AI; people do not understand how to get what they actually want out of a tool until they learn to use it properly, nor how it interacts with other tools. Trying to use a ruler with a crayon the first time is an easy example.

I might just be talking at nothing with all of this, but I do acknowledge artists wanting something for their effort beyond doing what they want to do. However, I think it's foolish to assume that tools already made will collapse into ruin as long as people are willing and able to use them. I don't think that means traditional art will die either, but we already see far fewer people oil painting, sculpting, or getting that into bronze casting.
Some of it comes back to expense, but also to the difficulty and time required to learn those mediums. I think it's about more than people just being lazy; it's that alternatives now exist, and people will use them if they choose.
YouTube · Viral AI Reaction · 2025-04-21T00:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          indifference

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzgqRjVX6oHPvFnhCp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyeEGDUzGn2jv5Fm6R4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyXtStQeNN1KrtV8Cx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzqF13BVgAgjU2F4vJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxE-b2AOBX0M4RhsIJ4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwNKk744G2-qlUD8pB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTLZSDlND4v_Bfec54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgybiyG2uSWIfzQvdPp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwpdl493sFhrolAr4J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgyKiJkk8OsWpAELC-J4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
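The raw response above is a JSON array with one object per coded comment, using the fields `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of how such a response could be parsed and a single comment's coding looked up by id is shown below; the `coding_for` helper and the error handling are illustrative assumptions, not part of the original pipeline.

```python
import json

# Excerpt of the model output shown above; field names match the raw response.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugwpdl493sFhrolAr4J4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "indifference"}
]
"""

def coding_for(comment_id, raw):
    """Parse the raw model output and return the coding dict for one comment id.

    Returns None if the output is not valid JSON or the id is absent
    (hypothetical fallback behavior; the real pipeline may differ).
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed model output: leave the comment uncoded
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = coding_for("ytc_Ugwpdl493sFhrolAr4J4AaABAg", RAW_RESPONSE)
print(coding["policy"])  # prints "regulate"
```

Keying the lookup on the comment id rather than array position makes the result robust to the model reordering or dropping entries, which is a common failure mode when an LLM is asked to code a batch of items in one response.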