Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.

Random samples

- "AI does not cause job bloodbath. Bad AI investmnt with little return was the rea…" (ytc_UgxLOR80B…)
- "AI Art doesn't have to be Theft, all it has to do is be gathered with pure conse…" (ytc_UgxyZ2Wox…)
- "I see, so they think that its okay to just do something like that to a helpless …" (ytc_Ugz3sZRbM…)
- "It sounds like you're grappling with some heavy questions about technology and i…" (ytr_UgyqXDtah…)
- "Are we in danger? Yes Can we be killed? Yes List the potential dangers: War S…" (ytc_UgwYmvsvf…)
- "AI is going to destroy AI - as long as it is connected to the internet there wil…" (ytc_UgzTagzQy…)
- "Smart guy. He’s full of it here. AI will be just as complicated as anything else…" (ytc_UgyWET7oV…)
- "1000%, and it's so short-sighted because even if it does "work" then pretty soon…" (ytr_UgwYrMGqJ…)
Comment
I don't know if this is going to have the effect you're going for - these 'poisoned' works of art, with tags that accurately reflect what a normal human would see in them, are the *most valuable* kind of training data for an AI, because they teach the AI what kinds of artifacts are irrelevant to a human viewer. A better approach, I think, would be to post normal art with completely incorrect descriptions, like if you had tagged that hand picture with the description "a beautiful fantasy landscape by Greg Rutkowski", or to post art with correct tags that also have the kinds of weird artifacting we see in AI art, bad anatomy, discontinuous lines, etc. At the scale these companies are scraping the internet, they can't possibly catch mistakes like this that seem fine if you only look at the art or the description individually, and AI doesn't know what things mean, it only makes connections between words and patterns of pixels, so muddying that connection is the best way to break the AI. Good luck!
- Platform: youtube
- Video: Viral AI Reaction
- Posted: 2024-10-20T20:2…
- Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxQBtqqAznL1BIpjL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlBJpvwdpEIei17ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzIlMgjZPi0v6q9HTB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzKBL0HZ7CRGxewgKl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwPVGHhdpkDFLMZ0RF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7OxeTAc-miY8Fuxt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxnZ86ECZXLl0ThoCR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy8ID7Oi5UduajAH5t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugzn60t9hF3UgOVVcnx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxIH6ysLpbWM0opQqB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
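The raw response above is a JSON array with one object per coded comment, carrying the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal Python sketch of turning such a response into a lookup table keyed by comment ID; the sample rows and IDs below are hypothetical, and the skip-on-missing-fields behavior is an assumption, not part of the pipeline shown here:

```python
import json

# The four coding dimensions emitted per comment, matching the
# "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of codings) into a
    dict keyed by comment ID. Rows missing an id or any dimension
    are skipped rather than raising."""
    table = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid or not all(d in row for d in DIMENSIONS):
            continue
        table[cid] = {d: row[d] for d in DIMENSIONS}
    return table

# Hypothetical two-row response in the same shape as the one above.
raw = '''[
  {"id":"ytc_example1","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_example2","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

codings = parse_raw_response(raw)
print(codings["ytc_example1"]["emotion"])  # mixed
```

Keying by comment ID makes the "look up by comment ID" operation a single dict access, and tolerating malformed rows matters because LLM output is not guaranteed to be well-formed on every row.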