Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "honestly I started in a similar spot and the biggest thing that helped was just.…" (rdc_ohtynmh)
- "Now, i am not advocating for this, but a quick way to get legislation and action…" (ytc_UgyAVLJMr…)
- "AI needs to be deprivatized. Can’t have private ownership of the technology that…" (ytc_Ugz2uPicu…)
- "BS you know exactly what your doing Satan's Soldier. Nevertheless the Almighty w…" (ytc_Ugw-yR8nf…)
- "In layman's terms, a 'bad actor' could be replaced with the 'c' word. This shoul…" (ytc_UgxZMpSip…)
- "Important note, it's also very inaccurate at detecting AI because things likr GP…" (ytc_UgzmZbb5b…)
- "It really annoys me how CEOs have fallen hard for the AI grift. It's just autoco…" (ytc_UgzS2LQZ6…)
- "How about taxing the wealth on the obscenely wealthy 0.01%, so the rest of us do…" (rdc_ohjltka)
Comment
Small note before I start: I wrote all of this five minutes into the video, because I tend to get triggered pretty easily when people talk about technical stuff I know a bit about.
Please see this as me trying to add some precision and share some thoughts on how AI works, not as direct criticism.
I see a lot of people explaining generative AI as "working with noise" and then drawing conclusions based on that, but I don't think that framing really holds up...
It's a bit hard to explain, but that "noise" is just the most basic way of representing what happens inside an AI.
It's not wrong... but it's like describing the brain as a big sponge. It over-simplifies too much to draw any conclusions (obviously a sponge can't think, duh).
This noise isn't random; it's a representation of how generative AI starts from pretty much nothing and, through iteration, creates an image (in the case of image generation).
If you look at the steps that happen inside a generative AI's "brain" (those different layers of noise), you can see that it's not just un-blurring an image through statistics. It's actually figuring things out.
For example, it first delineates blobs for the face and the body, then figures out where to put the hands, then the eyes and mouth, etc.
And there is a reason for that. Even if the training of an AI is heavily based on statistics, the goal is still to evolve the model to do a task. The way it does the task is mostly out of the control of the people training it.
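That step-by-step refinement can be caricatured in a few lines of Python. This is only a toy illustration: the "denoiser" below just nudges a noisy vector toward a fixed made-up target each step, standing in for the learned denoising network of a real diffusion model.

```python
import numpy as np

def denoise_step(x, target, strength=0.3):
    """One refinement step: move x a fraction of the way toward the target."""
    return x + strength * (target - x)

rng = np.random.default_rng(0)
target = np.array([1.0, 0.0, 0.5, 0.25])   # pretend 4-pixel "image" (made up)
x = rng.normal(size=4)                     # start from pure noise
for _ in range(20):                        # iterate: structure emerges gradually
    x = denoise_step(x, target)

print(np.abs(x - target).max())            # tiny after 20 steps
```

After 20 steps the remaining error shrinks by a factor of 0.7 per step, so the random start has all but converged onto the structured target.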
A really good video I saw (that I sadly can't find anymore T_T) was about autoencoder neural networks.
Basically (simplifying a bit), you give your AI a set of input images and train it to give you the exact same images as output. The goal is that, for every image, it can give you back a replica.
What's the point of this? Well, the neural network is a bit special. It is built with a bottleneck in the middle.
Basically, it doesn't have enough space to pass the whole input image through itself.
So what does it do? Well, it has to figure out how to reduce the input image to a small amount of information that fits through the bottleneck, and then figure out how to re-create the image from that little information.
If you look at what the image looks like inside that bottleneck, it essentially looks like noise. But does that mean it's just noise? To find out, you need to look at how that noise is used after the bottleneck.
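The bottleneck idea can be sketched concretely. A known result is that the optimal *linear* autoencoder with a one-number bottleneck is the top principal component, so the snippet below uses a closed-form SVD as a stand-in for gradient training; the toy "images" (4 pixels that secretly vary along one direction) are an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "images": 100 samples of 4 pixels that all vary along one hidden direction.
d = np.array([1.0, -0.5, 0.25, 2.0])
d /= np.linalg.norm(d)
X = rng.normal(size=(100, 1)) @ d[None, :]          # shape (100, 4), rank 1

# Optimal linear autoencoder with a 1-number bottleneck = top principal
# component (closed form via SVD, standing in for gradient-descent training).
U, S, Vt = np.linalg.svd(X, full_matrices=False)
encode = lambda x: x @ Vt[0]                        # 4 pixels -> 1 code number
decode = lambda z: np.outer(z, Vt[0])               # 1 code number -> 4 pixels

X_hat = decode(encode(X))
print(np.mean((X_hat - X) ** 2))                    # ~0: it fit through the bottleneck
```

Because the data really only has one degree of freedom, one number through the bottleneck is enough to reconstruct all four pixels almost exactly; that single code is the network's "figured out" concept.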
One way of doing that is to cut the "brain" of the AI in half, right at the bottleneck (where the noise is), feed it different kinds of noise, and see what it produces.
If you do that, you'll quickly find that this noise is actually not random at all. Every "pixel" ends up representing an abstract concept with more or less influence on the final image.
What's more interesting is that the more influential a pixel is, the more sense the concept makes.
In the video I mentioned earlier, the neural network was trained on human portraits, and the most influential "pixel" of that noise controlled the hair length of the output. The second controlled hair colour, etc.
Obviously, not all of them were super clear, but they were linked to concepts in the images used as training data.
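The cut-at-the-bottleneck probe can be sketched too. The decoder weights below are made up for illustration: latent 0 plays the "hair length" role (strong weights on pixels 0-1), latent 1 the "hair colour" role (weaker weights on pixels 2-3); a real experiment would use the trained decoder half instead.

```python
import numpy as np

# Hypothetical decoder half of an autoencoder: 2 latent numbers -> 4 pixels.
W_dec = np.array([[3.0, 3.0, 0.0, 0.0],   # latent 0: strong, "hair length" stand-in
                  [0.0, 0.0, 1.0, 1.0]])  # latent 1: weaker, "hair colour" stand-in

def decode(z):
    return z @ W_dec

base = decode(np.zeros(2))
influence = []
for i in range(2):
    probe = np.zeros(2)
    probe[i] = 1.0                         # wiggle one latent "pixel" at a time
    influence.append(np.abs(decode(probe) - base).sum())

print(influence)                           # latent 0 moves the output far more
```

Wiggling each latent in isolation and measuring how much the output changes is exactly the "feed it different kinds of noise" probe: the latent with the biggest effect is the most influential concept.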
My point is just that generative AIs do figure stuff out. Not necessarily the way humans do, and not necessarily in the best way possible, but they do extract concepts from the things they are trained on, and calling that "noise" is really a misnomer in my opinion. It's just a way to represent how it works at a very high level so you can have a rough idea, but completely useless for drawing any conclusions.
Anyway, my goal wasn't to draw any conclusions. Maybe they create stuff on their own, maybe not; I don't have clear answers. It's a complex topic, and most people (not to say everyone, myself and AI engineers included) are very curious about it and want conclusions, but struggle to grasp exactly what it does and how it works.
Drawing conclusions from a surface-level understanding is kind of dangerous in my opinion (not saying that you did; this is just a general statement).
Small note, for context: I majored in software applications in biomedicine. It was a very broad program, going from human vision to how AIs work and are trained (you see those complex math formulas you showed? I had to implement those in a project for one of my courses).
I'm nowhere near an expert (I haven't worked on AI since graduating) but I can assure you, it's way more complex than just de-noising noise.
I've also been drawing for the last 10 years, so I'm kind of in an uncomfortable middle ground... That's why I get a bit triggered when artists talk about AI, or AI bros talk about art... I hope this was at least interesting and made some people curious about the topic x)
PS: I might also have misinterpreted what the "noise" you were talking about at the beginning of the video is.
youtube
2024-07-15T20:0…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxR7bKx7hPeTClh9Ox4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0083XmOvbI3fQWLd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzZoY1pa-_2ayoG-R54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUhz6LmOTU0aTVMvV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz0mcfrhrBq9xXCSWF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwsF_CxY2y7BhElGKZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy-eBxbqKd0EQ9F_Ad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxdZShYhyU18HD1Asl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBhToXCxm7z_U5w-54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxnRf-8D2xc7Lx-WlV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]
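A raw response in this format can be parsed and tallied programmatically. A minimal sketch, using the dimension names from the coding schema above and two rows abridged from the response (the snippet assumes the response is a well-formed JSON array of per-comment codes):

```python
import json

# Two example rows in the same shape as the raw LLM response above.
raw = """[
 {"id": "ytc_UgxR7bKx7hPeTClh9Ox4AaABAg", "responsibility": "none",
  "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_Ugy0083XmOvbI3fQWLd4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

codes = json.loads(raw)
by_emotion = {}
for row in codes:
    # Count how many comments were coded with each emotion label.
    by_emotion[row["emotion"]] = by_emotion.get(row["emotion"], 0) + 1

print(by_emotion)  # {'indifference': 1, 'outrage': 1}
```

The same loop works for any of the four dimensions (responsibility, reasoning, policy, emotion), which makes it easy to spot-check a batch of codes against the table view.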