Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As someone who is a fan of ethically sourced AI for making accessible skills I don't have the patience to earn myself... Good for you! I encourage the poisoning of images that you don't explicitly want to be part of some AI's training data, because using scrapers to steal people's hard work en masse is highly unethical.

I believe that AI training data should be composed only of images which are: 1) so old that public domain rights apply, 2) purchased directly from the artist or photographer with the specific information given to them that it will be used for AI training (not hidden in small text or purchased underhandedly) as a licensed product (so, for more than one would sell a copy of an image for normally), or 3) created by the owners of the AI themselves. So, those 3 sources of training data, totally ethical in my books. And I will fully support an AI that follows those guidelines. The trouble is that these guidelines are rarely if ever followed, and soooo many AIs don't disclose where they get their data. I don't want to support any that use scrapers to steal images, so I definitely want people to know about tools like the one you've used to artifact their pieces and screw over unethical AI.

You'll notice Stable Diffusion encourages users to use artist names to influence the style. Stable Diffusion is NOT an ethically sourced model. Midjourney, however, while names may affect stuff, it doesn't make the images look like the artists' styles unless they're really old paintings and the artists are long long dead. You know, artists whose works are already in the public domain. At least, that's my experience. I really hope that Midjourney doesn't use stolen art, because I have used that generator a lot 😢

I have no doubts that Bing Image Creator used stolen art for training data. Ugh. And the images are so low quality and it just cannot understand simple prompts. No doubt this is because of the hard work of image poisoning.
Source: YouTube | Viral AI Reaction | 2024-10-26T06:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugw2rle64mjlidDRWY94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxFfCTouBLWWe-lsgt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyJoE7uevVwrWNwR4N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugwh5U78ZGrnG7j6VeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyXthPB_sYiozcOwHF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgymO2RZPs1DVWzR1fN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzicVuEMV8030v42IJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwwON5NVojk6Gm5qqp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwR1imQGXEwj_pf23t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgwqVK2Z4SHrvI8nUyx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"})
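Note that the raw response above opens with `[` but closes with `)` instead of `]`, so a strict JSON parse of it fails; that is a plausible reason every dimension in the coding result fell back to "unclear". A minimal repair sketch for this failure mode is below. It assumes the pipeline parses responses with Python's standard `json` module; the function name and the shortened sample id are illustrative, not taken from the actual pipeline.

```python
import json


def parse_coding_response(raw: str):
    """Parse an LLM coding response as JSON.

    First attempts a strict parse. On failure, repairs one common slip
    seen in raw model output: an array that opens with '[' but closes
    with ')' instead of ']'.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        repaired = raw.strip()
        if repaired.startswith("[") and repaired.endswith(")"):
            repaired = repaired[:-1] + "]"
        return json.loads(repaired)  # re-raises if still malformed


# Hypothetical shortened example with the same defect as the response above.
raw = ('[{"id":"ytc_abc","responsibility":"company","reasoning":"virtue",'
       '"policy":"none","emotion":"approval"})')
codes = parse_coding_response(raw)
```

A fallback like this keeps the coded dimensions recoverable instead of discarding the whole response; anything the repair still cannot parse surfaces as an exception rather than silently coding every dimension as "unclear".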