Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, I think there's some nuance in the conversation about Nightshade you've missed, because the loudest people are always the most extreme. Nightshade probably does have some effect: if it's added to AI training sets untagged and uncritically, it could damage the dataset as a whole, provided some large portion of images were using it. "Large" is a subjective term here; how many images would need to be fed into the system is unknown, and it could very well take thousands to hundreds of thousands to make a NOTICEABLE difference. That's not nothing, though, and if that's all it took, a large portion of the internet using it for a couple of weeks really might be all that was needed to damage AI datasets for a long time.

If you go to Nightshade's University of Chicago page, you'll find that they are excited that, in the short term, it seemed to produce some results in limited testing on closed datasets that they control. They also mention that it's very unlikely to produce long-term effects. So what are the DOWNSIDES? Well, for one thing, these programs use AI themselves, burn through tons of resources, and are trained on questionable sources that may or may not have consented. Nightshade specifically seems to use mountains of "AI junk," which means probably thousands upon thousands of cycles of running an AI program — one of the very things we're trying to stop — and if we want to keep it updated and functional at any level, that dataset itself would need to be expanded forever, just like AI's own datasets! This isn't the worst thing ever, but it does represent a strange perverse incentive for the "anti AI art piracy" program to invest heavily in exactly the same things AI is doing. And if that dataset were ever leaked or used incorrectly, it could end up serving as a "negative filter" in bigger AI models: a program that checks for bad art to improve AI-stolen art itself. That hasn't happened yet, but it could.
In the super short term, artists who use it could find themselves essentially deplatformed by AI-driven algorithms that decide their art is "junk/low quality" and move it to the bottom of the list, where it won't reach their followers. Nightshade and Glaze being open source means that anyone can host their own copy; lots of people uploaded their pristine art to random websites running Glaze when it first came out. It's certainly possible that those websites saved the uncorrupted art and sold the dataset to AI companies, since these "free" sites don't seem to have many other ways of making money. Adding a layer of AI junk to your art has also led some artists to be accused of... using AI, which could have an impact on their careers, especially in the very wary anti-AI landscape most commission artists find themselves in.

Most importantly, though, AI is already collapsing, because AI is in an incestuous relationship with itself, being forced to train on its own output — one of the leading concerns for all datasets right now. This all leads a lot of people to say that this type of attack has low efficacy in general, that other types of attacks are far superior, and that the short-term negatives may outweigh the unproven long-term effects. Which contributes to the attitude some have that "it doesn't work" or "it's ineffective." There are also its suspicious origins as an AI project made by an AI-focused research team, who have so far been benevolent but could very easily sell their work to make AI theft even better in the future (unlikely, but it's happened before). Which just leads me to the TL;DR: you might not want to sponsor Nightshade or Glaze without bringing up the caveats, because only time can prove its worth.
youtube Viral AI Reaction 2025-05-02T12:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxAKS5Xhf4GcdBbytp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugzwy8ropZeHmIcnERl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwnJj6hlCXcwqoTs7Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgzYT1MitcJLxWl1enV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgwKHTqpUfbOWyCBoH14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxpEBiGIVSGu4OuFi54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw8Ka_SuhOTKNmFBoZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz2WDw-iF1BsHo-adx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzdRyPhGmF5o0D1ePx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx22eoOGyhccaaOE2N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
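How the raw response above becomes the Coding Result table: the LLM returns a JSON array with one entry per comment id, and the entry matching this comment is looked up and split into its coded dimensions. A minimal sketch of that lookup, using two real entries excerpted from the response above (the function name `code_for` is illustrative, not from the tool itself):

```python
import json

# Excerpt of the raw LLM response above (the full array has 10 entries).
raw = '''[
 {"id":"ytc_UgxpEBiGIVSGu4OuFi54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_Ugw8Ka_SuhOTKNmFBoZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

def code_for(comment_id: str, response: str) -> dict:
    """Return the coded dimensions for one comment id, dropping the id field."""
    for entry in json.loads(response):
        if entry["id"] == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

# The entry for this comment yields the four dimensions shown in the table.
print(code_for("ytc_UgxpEBiGIVSGu4OuFi54AaABAg", raw))
# → {'responsibility': 'distributed', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'mixed'}
```

The "Coded at" timestamp in the table is added by the coding pipeline at lookup time, not returned by the model.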