Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While I understand where you are coming from. I think you may have misunderstood how the technology works. The data set does not store any images pixel by pixel as a traditional image. Instead what the data consists of are patterns / fingerprints if you will, of how a style / piece is composed. Each fingerprint is identified and tagged, lets say for example we are looking at a fingerprint of a face, there's the distance from the chin to the mouth, the distance from the nose to the eyes, how flat the color is or how much of a gradient is used when showing light on the surface. All of these combined factors make up a "style", but it's not saved as pixels, rather as descriptive information you would use to disassemble an image and categorize it into a much smaller format, very similar to how a human learns from looking at other styles of art. Stable diffusions model was about 4gb for the 1.4 model. There is no possible way that the entire LAION dataset could compress every image down to 4gb. I just checked on google and the dataset is 380 TB. Even with the best compression, there's no way the image data could be stored in 4gb. Here's where legally this gets sticky. Copyrighted reference data can be used, but the results must be transformative. In this sense, SD has transformed the images from pixels into fingerprint data which it can use to generate new content. Thus, it is transformative in nature and thus it has a strong argument against copyright claims. Currently we don't have a better legal framework to protect artists in this type of case. To be fair, I do agree there should be better opt-in rules, and I believe that in time you will see that. SD and MJ are currently involved in a class action, and I can see this as being their defense. I think what the companies behind this could do in the short term is halt certain keywords. Artists who wish to opt-out can blacklist their name on the engine side. 
I think it's also not quite accurate to say the software can't unlearn. The model is constantly retrained with new parameters and tunings. So in this sense, if certain data is omitted, it has no reference to use, so future versions won't contain any relative data.
youtube Viral AI Reaction 2023-03-03T15:5… ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgynQAyxizg03PnILfN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyRGxQHshuRVnzSVaJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"frustration"},
  {"id":"ytc_UgxZVFxecmBQTvb8xgh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnhTpt4pFQt0Q7pvF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzZUEgeoKihW1tqMgl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyp8mOamQ1eUHRy6254AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzxFP0BK9KWSnkZ8nZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMKkIJbt7TZoTU2zx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugwjd81qqQzbyGljtFh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwmukBspagKVDtUTbN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
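If the raw response needs to be loaded back into structured form for checking against the coding table, a minimal sketch in Python (the field names and one `id` are taken from the JSON above; everything else, including the variable names, is illustrative):

```python
import json

# The raw LLM response is a JSON array; each element codes one comment
# on four dimensions: responsibility, reasoning, policy, emotion.
raw = '''[
  {"id": "ytc_UgynQAyxizg03PnILfN4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Index records by comment id so a coded comment can be looked up directly.
by_id = {r["id"]: r for r in records}

first = by_id["ytc_UgynQAyxizg03PnILfN4AaABAg"]
print(first["responsibility"], first["emotion"])  # none indifference
```

The same lookup pattern would let a validation pass confirm that every record carries all four dimension keys before the codes are tabulated.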