Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgyKfaE3b…: "What about the ppl that absolutely refuse to use an app, open a website, or orde…"
- ytr_UgxLPkzb3…: (translated from Hindi) "It is his brain that sets man apart from other animals. And the human brain…"
- ytc_UgweOkqvE…: "Funny how he used the same debating style against the AI he does against his fel…"
- ytc_UgyWBhxSI…: "People are going to be more creative? Seems like a whole bunch of BS. People eng…"
- ytc_Ugz4iE8a-…: "That is possible that human can be erased on earth and robot will remain. Becaus…"
- ytc_UgytZXL8Z…: "It is unnerving. I don’t think they should have human features. I believe we sho…"
- ytc_UgwpKj7ds…: "Wslking along the river in nh saw a robot walking to my side.what they could c…"
- ytr_UgwdaBdl1…: "That's a great point! \"Sophia\" indeed has roots in multiple languages, emphasizi…"
Comment
Not to take away from your main gripes, but you mistakenly and repeatedly describe Stable Diffusion as if it works the same as ChatGPT's image gen -- you cannot just "ask" a diffusion model to change part of the image for you.
Diffusion models rely on a single prompt used to navigate the latent space of their training data. You can ask a language model to change this prompt for you to make some specific change, but this will inherently affect your entire image by moving you around within this latent space. Changing the initial noise (= image) will do the same, which is why all of Shad's iterations look so different from one another, artistic vision be damned. (Furthermore, the fact that "his" image exists as-is within the latent space, makes his claim that the model could never have done it on its own somewhat absurd, but I digress.)
Early attempts at true instructable image generators, capable of making changes while leaving the rest of the image as-is, used multi-modal language models to select inpainting regions for targeted diffusion, but I'm pretty sure this is not built into Stable Diffusion. The workflow you are describing has only really been possible since the release of OpenAI's newest image generator, which uses an auto-regressive model built directly into their multi-modal language models rather than diffusion.
That is to say, Shad's Photoshop approach is slightly more clever than you are giving it credit for, as insufferable as he sounds about the process. Though I wonder why he isn't combining this with inpainting regions to keep some semblance of artistic vision...
Source: youtube | Viral AI Reaction | 2025-08-13T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwd3dX4W_w2eRlsHN14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPxwbHFVB9bVkQXAt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwDRFq-oWL1ElAoENl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzdGqvX_P1XqX6B3Oh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcoNjLDFTcVo5u3jx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwjq6h8a4QZldUzIrZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UgxsB9Iarxi2OFXEU5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwgeDvztM0GzWSIs-N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwC-D6jUPEjoVUeTTR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyhOaNv4iRlJVWKAxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
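A raw response like the one above can be turned into per-comment codes (as shown in the coding-result table) by parsing the JSON and checking each dimension against its allowed values. The sketch below is a minimal, hypothetical example: the allowed-value sets are inferred only from the codes visible in this log, not from an official codebook.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may include more values).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "disapproval", "approval", "resignation", "indifference"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, validating values."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = codes
    return coded

# Usage with a shortened, made-up record:
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_codes(raw)["ytc_x"]["emotion"])  # → indifference
```

Validating up front means a hallucinated or off-schema code fails loudly at parse time rather than silently polluting the coded dataset.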