Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up directly by comment ID, or inspected via the random samples below.
- `ytc_UgxXSo0wr…`: "Even a robot has to wear a makeup to look realistically human, I've no hard feel…"
- `rdc_eerb5lb`: "The students should all comme to class next.morning wearing a mask of the teache…"
- `ytc_UgyQGNVWM…`: "There are several cases where AI has told a person to murder someone. And other …"
- `ytr_Ugxio3XMv…`: "demis is one of those responsible for AI. Most people that are involved in this …"
- `ytc_UgzFeboKL…`: "I would like to know how say the UK who want to use only renewables will power a…"
- `ytr_UgyOyVgpu…`: "@_sayandas I completely understand your point about how AI will create new job o…"
- `ytc_Ugy2SsJ6q…`: "not true you see a chatbot is designed to give you what you want to hear not wha…"
- `ytc_Ugw6vmSTj…`: "It's crazy to me that people unironically look at piece, enjoy it, and then scof…"
Comment
While the AI training is inevitable at this point (the increasing complexity of the networks just means they will get better both at training and at output), I'm hoping we can come to a societal agreement on what's ethical and unethical usage of the data. For me personally, I use it as a rough draft. I have no skill in visual art and just don't have the time to learn it (I already have very time consuming hobbies.) The rough draft idea lets me sift through a thousand different variations of what I think I want until I get a more solid idea of what I was really looking for, then I can take the rough draft to an actual artist with the notes on what aspects I want to keep and what aspects are open for interpretation. This allows me to reduce revision requests and be more happy with what I end up paying for in the end.
Another usage that I believe can be ethical is interpolation frames in animation, allowing the artists to work exclusively on the keyframes. Though I believe a lot of animation is done with 3D models now and specific shaders that flatten the image to give it the more traditionally drawn look, so that might not be a thing anymore in studios.
In addition, I think having royalties provided based on the weight of a specific artist within the training and fine-tuning data could go a long way for ethical usage. I'm not sure how that would be handled with privately hosted networks, but commercial networks are already charging money and should be able to implement it.
Grey area usage: (Where I believe it's less clear if it's ethical or not.) Those without the skill to do art or the money to commission art and just wanting things for personal, non-commercial usage. Such as their own D&D characters. Not having the money to commission means they'd never be a paying customer of any artist, so we don't have a "just hire someone" counter for their usage, meaning the alternative is just never having any artwork.
Unethical: commercial usage, or claiming to be the artist that made the output; I'd say both fall squarely into this category.
But I'm not an artist, so I'd be curious if artists consider these categories to be incorrect. Though I'm not interested in an opinion of "All usage no matter the circumstance is unethical," as an uncompromising black-and-white position doesn't benefit the conversation.
Also, I'm not saying "Don't bother poisoning your art." I do find it important to protect yourself in these early days of the technology when unethical use will be at its highest.
Source: YouTube, "Viral AI Reaction", 2024-10-23T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
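The table above is a per-dimension view of a single coded record. A minimal sketch of rendering such a record as a two-column Markdown table (the field names are taken from the raw JSON shown below; the function name is hypothetical):

```python
def to_markdown_table(rec: dict) -> str:
    """Render one coded record as a two-column Markdown table."""
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

# Example record matching the coding result shown above.
rec = {"responsibility": "distributed", "reasoning": "contractualist",
       "policy": "regulate", "emotion": "mixed"}
print(to_markdown_table(rec))
```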
Raw LLM Response
```json
[
  {"id":"ytc_Ugy3dpex_jNOL8T3D6x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxrsMABTs-GGfOZco94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwxYeb0I4D_acUaz4h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzDm0ScHmMvuoGOOCx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyksM0UpZHNKZNRotV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfiAy6qH2nmvQrpZF4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzvPs60BVZHCDYk8st4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxAsSH8BSSxEFunI594AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2JIGQ6XGA82HxAaV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyePvrbfXyWvg8I0w94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
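A sketch of how such a raw batch response might be parsed, validated, and indexed for the comment-ID lookup described above. The allowed label sets are an assumption inferred only from the values visible in this dump; the real codebook may define more categories, and the function name is hypothetical:

```python
import json

# Label sets inferred from the values visible in this dump (assumption:
# the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "none", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"outrage", "approval", "mixed", "indifference", "resignation"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw batch response and index valid records by comment ID.

    Records with labels outside the allowed sets are skipped rather than
    failing the whole batch.
    """
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[rec["id"]] = rec
    return coded

raw = ('[{"id":"ytc_UgzDm0ScHmMvuoGOOCx4AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"regulate","emotion":"mixed"}]')
coded = parse_response(raw)
print(coded["ytc_UgzDm0ScHmMvuoGOOCx4AaABAg"]["policy"])  # regulate
```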