Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
But this AI artist now has more exposure (and presumably customers) than they co…
ytc_UgxjoS2XA…
I love singers and songwriters.
I laugh at and with AI songs. I’m not against t…
ytc_UgyKbu5Lt…
YTBSummarizer: "This video provides a comprehensive introduction to artificial i…
ytr_Ugw13-szw…
I'm hoping the new york times wins and all other ai companies soon fold under th…
ytc_UgxHJiFfi…
Those automatic cars should be permanently banned . This reminds me of a Speed …
ytc_Ugz6efHsT…
for google searching i learned that you can type [minus sign then key word] so t…
ytc_UgxbbsU7S…
Ok. There where already tons of deterministic tools which helps to generate boil…
ytc_UgzWtim2I…
I’m not sure that is the way it works. If you tell it to code your entire projec…
ytc_UgyI4ejTp…
Comment
Here is how AI actually works: From a software engineer and artist
Machine Learning models are a bunch of nodes with values placed on them. Input data is fed into these nodes, which transform it into an output. The model updates the values of the nodes depending on whether the output fits what the trainers want. For example: a model predicts whether an inputted image contains a dog. When it gets an image right or wrong, it adjusts the weights accordingly to either encourage or discourage the model towards specific biases. Basically, the AI is trained to PREDICT the output of "yes, this is a dog" or "no, this is not a dog" based on the image given. The more images of dogs and not-dogs it sees, the better it gets at that output. You give it pictures of dogs or not-dogs, and the model predicts whether you would have labeled the image "yes, this is a dog" or "no, this is not a dog".
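The weight-update loop described above can be sketched with a single "node" (a perceptron). This is a minimal illustration, not how production models are built; the two numeric features and the labels are invented for the example.

```python
# Toy sketch of the weight-update idea: one node learns "dog" (1)
# vs "not dog" (0) from two made-up numeric features per image.

def train(samples, labels, epochs=20, lr=0.1):
    """Nudge weights toward correct predictions, away from wrong ones."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = y - pred          # 0 if correct, +1/-1 if wrong
            w[0] += lr * err * x1   # encourage or discourage this weight
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    """Apply the learned node to a new input."""
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# Invented training data: high feature values mean "dog".
samples = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]
w, b = train(samples, labels)
```

After training, `predict(w, b, (0.85, 0.85))` returns 1: the node has no concept of a dog, it has only tuned its values to fit the labels it was shown.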
ChatGPT works in a similar way, where it PREDICTS what the output would be based on the input. It is trained on a bunch of text data and, given an input, will try to autocomplete it as best it can from what it has already seen. For example, if the AI has seen "George Washington is the first US president" many times, it is more biased towards associating "George Washington" with "First President".
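The autocomplete-from-seen-text idea can be shown with a crude word-pair counter. Real language models use learned weights over tokens, not a lookup table; the tiny corpus here is made up for illustration.

```python
# Minimal sketch of "predict the next word from what it has seen":
# count which word follows which in a tiny corpus, then autocomplete
# by repeatedly picking the most frequent follower.
from collections import defaultdict, Counter

corpus = (
    "george washington is the first us president . "
    "george washington is the first us president . "
    "george washington was a general ."
).split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def complete(word, n=5):
    """Greedily extend a prompt with the most common next word."""
    out = [word]
    for _ in range(n):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)
```

Because "is" follows "washington" twice but "was" only once, `complete("washington", 3)` yields "washington is the first": the statistically likeliest continuation, not a looked-up fact.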
Now let's move on to image generation. The models receive a bunch of images with tags and descriptions. These tags are used to train the AI to associate certain image data and patterns with certain descriptions and tags (with a little bit of language processing to boot). So basically, an image with a tag and description gets added to the model. With enough data, the model can easily give any inputted image descriptions and tags. You give it a cat, it will say it's a cat.
But, what if you do that in reverse? Give it tags and a description. What will happen is that the AI will try to predict WHAT IMAGE WOULD HAVE BEEN UPLOADED HAD THIS TAG/PROMPT BEEN INPUTTED AS TRAINING DATA. It is a prediction algorithm, which is what deep learning is primarily built on. They are complicated probability machines that give a result that has a high CHANCE of being the right answer, but not exactly the right answer.
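The "run it in reverse" idea can be caricatured with tiny 2x2 black-and-white "images": store examples per tag, then generate a new image by predicting each pixel's most likely value across the examples. Real generators use diffusion or similar learned processes, not per-pixel voting; the tags and pixel tuples below are invented.

```python
# Toy illustration of reversing a tagger: predict, pixel by pixel,
# what an image uploaded with this tag would most likely have
# looked like. 1 = black, 0 = white, in a flattened 2x2 grid.
from collections import Counter

training = {
    "stripe": [(1, 0, 1, 0), (1, 0, 1, 0), (1, 1, 1, 0)],
    "blank": [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 0, 0)],
}

def generate(tag):
    """Return the highest-probability pixel value at each position."""
    examples = training[tag]
    return tuple(
        Counter(img[i] for img in examples).most_common(1)[0][0]
        for i in range(4)
    )
```

`generate("stripe")` returns `(1, 0, 1, 0)`: a plausible composite of the training examples, not any particular one of them, which is the "high chance of being right, but not exactly right" behavior described above.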
This is why Machine Learning models are notoriously bad at math: they are not actually doing calculations (unless those are hardcoded); they are simply looking at problems people have already done and predicting what the answer could be based on any patterns they see in the math. They do not actually do the work to calculate it, resulting in a lot of straight-up wrong answers. It is not solving the problem, just predicting WHAT the answer would be based on the answers it has already seen.
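The "predicting instead of calculating" failure mode can be shown with a deliberately dumb answer predictor: it memorizes worked examples and, for anything unseen, guesses the answer of the most similar-looking problem. The problem strings and similarity measure are invented for the sketch.

```python
# Sketch of pattern-matched "math": answers are retrieved, not
# computed, so unseen problems get confidently wrong answers.

seen_problems = {"2+2": 4, "3+5": 8, "10+1": 11}

def predict_answer(problem):
    """Return the memorized answer, or the answer of the seen
    problem that shares the most characters with this one."""
    if problem in seen_problems:
        return seen_problems[problem]
    best = max(seen_problems, key=lambda p: len(set(p) & set(problem)))
    return seen_problems[best]
```

`predict_answer("2+2")` correctly returns 4, but `predict_answer("23+51")` returns 8 (the answer to the superficially similar "3+5") rather than 74, because nothing in the pipeline ever performs addition.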
When applied to art, it can get really good at looking at complex data and predicting what the description or tag should be. This is because you don't need to be 100% sure something is a tiger or a cat; being 98% sure it's a tiger is good enough. It is taking a complex dataset and coming to a conclusion. However, doing it the other way around is where it seems pointless. You can take an image of a tiger and be 98% sure that it's a tiger. But if an AI generator takes the word "tiger", that is so open-ended that it will never actually create what the user intends. Only something "good enough", if the user generates enough attempts.
AI generators "steal" not because they download the artwork and repost it, but because they are designed to replicate and imitate the works of existing artists like a factory. It is different from inspiration because prediction algorithms are fundamentally designed to imitate: they just predict what the output might be based on training data. When an artist is inspired, they carefully study the techniques, choices, and decisions that the original artist made and put that into practice. There is an outside human filter that adds originality. When you see "draw this in X artist's style", people are okay with it because the challenge is trying to learn what the artist did and create something inspired by it. The skillset and techniques are researched and practiced. Also, they give credit where credit is due. Tracing and copying without giving credit are bad because you are taking the credit for a work and skillset that aren't yours. If you just imitate an artist's style forever and never deviate, that is unoriginal. That is why human brains can deviate and put personal biases into the work they do; it adds originality. AI cannot add that originality because ultimately it is not a human making it.
This is why AI art and Digital/Photography/Traditional art are so different. It is the INPUT and CHOICE that the user has over how the end product turns out. Illustrators can directly manipulate the canvas with their tools and understand HOW those tools affect the final product. Photographers can control the camera's settings and actually CHOOSE what to take a photo of. Cameras might "automate" the process, but they take pictures of things that are real. Photographs are meant to capture reality, and the photographer can choose HOW to capture that reality. AI image generators, even with the fancy node-based comfy tools, take that understanding and control away from the artist for the sake of convenience.
A lot of the advancement of AI has mainly come in two parts: 1. making bigger and more expensive models (more nodes = better processing and better results), and 2. hard-coding out inaccuracies and biases, which kind of defeats the purpose of a self-sufficient intelligence.
youtube
Viral AI Reaction
2024-09-16T19:4…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzzR5WlSchRQ0eQ8dZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyF48ezLWDfTattjHR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgziL_t8b72SEdEwYb14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxAuIjQgLZVSk_Ucjx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz4c4JXp8NE_bpLEq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy32dCelqXfsTivrLl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyryFqxLdX3eNhQx6J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzsuMFHW95eR0YOQ-14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwfXam3s9taI8sUGzt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzd5EXNprtM5eqOaAZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]