Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
one argument that proponents of LMs make is that artists had "the same reaction" to the steady uprising of digital art in the late 90s. and that's a fact, given that I often saw this argument in real time on forums of the day. traditional media artists said pen tablet artists were "lower" on the "true artist" metric, given that it was as easy as hitting ctrl+z to undo a mistake compared to "real art" where you may not be able to erase a mistake fully, depending on what it was. the problem with using that argument to poke holes in people who don't like generated art is that pen tablets didn't draw it for you.

but by that same token, I will say that there is more nuance to generating a piece of art with an LM. yes, the basics of it is "tell the AI what you want and hit enter," but that's as far as most people who mess around with generators go. to do something extremely specific, like down to the exact detail, you have to iterate hundreds of times over. does that make it "art?" maybe, if training a learning model can be considered "art." do I think it's "art?" not really.

to put it into more basic terms, imagine you're speaking to someone who isn't fully fluent in your native language. you ask them to do something for you. they make an attempt and maybe they got the broad strokes of it, but the nuance was lost. so you ask again, maybe worded more simply. again, they get the gist, but it's still not right. most people fall into the category of cutting their losses and accepting what the "output" ends up being at this stage, even if it's not exactly what they envisioned. the people like this guy who's trying to copyright shit fall into a much smaller camp: the camp of "no, we're gonna sit here until it's correct."

obviously there's social nuance that's being disregarded here, but that's the real "difference:" the amount of determination and stubbornness required to get an exact image to an exact specification is beyond that of those who would try once or twice and then move on. so the comparison, while understandable, doesn't accurately depict what the process actually looks like.

"who cares?" well nuance is important if you're trying to argue your point. people who are on the fence about AI listen to an artist say something like "all they're doing is typing a prompt and hitting enter, how can they call themselves artists?" then they find out how people are *actually* making these highly specified pieces of generated imagery, not just meme pics of Walter White taking fat bong rips. the immediate thought that follows is that the artists who have been complaining are just "misunderstanding" the process and overreacting to a non-issue. they lose the script as to why it's actually a problem. it's not that it's a "simple" process to generate pictures. it often isn't that straightforward. the real issue that I have with it (and many others) is that these models all mostly use art that is not in the public domain or free to use to train themselves on. that's the real issue, not whether or not the people who use midjourney have the right to call themselves "artists".
youtube Viral AI Reaction 2024-10-01T02:1…
Coding Result
Responsibility: none
Reasoning: mixed
Policy: unclear
Emotion: indifference
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz1X6ZXz-lIYyQ4jkJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwQF1krW4E1mUa0IpV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzfUrZz5w6CYHs1z2h4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugys-b3Uwt9fGpWsROR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzVrZGiGixT33Lk3DN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx63kA0zknM03ShGZl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzzqi4VNVdZmdaTCOt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw-Ei3k7rY4jvQnNGB4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxnIC9O08h9seUMyd54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy-vCiDd3HPepO-7Mp4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
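A raw response in this shape is easy to post-process: each record carries the same four coding dimensions shown in the result above. The following is a minimal Python sketch (not part of the original pipeline) that parses such an output, skips records missing any dimension, and tallies values per dimension. The `raw` string below is truncated to three of the records for brevity.

```python
import json
from collections import Counter

# Raw LLM response in the format captured above (truncated to three records).
raw = '''[
 {"id":"ytc_Ugz1X6ZXz-lIYyQ4jkJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwQF1krW4E1mUa0IpV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugys-b3Uwt9fGpWsROR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]'''

# The four coding dimensions observed in the response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw_json: str) -> dict:
    """Parse the model output and count values per coding dimension."""
    records = json.loads(raw_json)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for rec in records:
        # Skip malformed records rather than crash on a bad model output.
        if not all(dim in rec for dim in DIMENSIONS):
            continue
        for dim in DIMENSIONS:
            counts[dim][rec[dim]] += 1
    return counts

counts = tally(raw)
print(counts["policy"].most_common(1))  # → [('unclear', 3)]
```

Tallying via `Counter` makes it cheap to spot dominant codes (here, `policy` is "unclear" across all three sampled records) before any deeper analysis.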