Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The point half way through is key: pure AI art just isn't making money. AI assist tools can make money, but making consumable products still requires a lot of intervention from human talent... and that hasn't changed all that much for the last couple of years. It IS killing some art jobs, but not as many as people thought a couple of years ago. We could say that AI art will get better... but as someone who's involved in tech industry, we can see it hitting certain limitations based on the fundamental limitations of LLMs themselves. There just isn't enough training data in all of existence, and LLMs don't actually "think" in the way we might expect an "intelligent" system might. The field will require some real breakthroughs before it can surpass these limitations.
Platform: youtube · Video: Viral AI Reaction · Posted: 2025-05-14T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgytbBy9l-2BpaxPPXJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx8079VK9A8NfQpB7V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmgqN9S97r0hU3Oed4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyaMl6nD0tOtX-jTEl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyqrRXDAsR6MbXRUM94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxGSCfrxaz7ptNVVbN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwpydBoDlKVWe1h8wl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxIXfKeSzCAwVMti7F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwL5dOcVMBn6SPK2Rl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJbKhf1fLf6GE9eCd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
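The raw response above is a JSON array of per-comment codes, one object per comment ID, with one label for each of the four dimensions in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and checked before use — the allowed label sets below are inferred only from the values visible in this response, not from a full codebook, and the `validate` helper is hypothetical:

```python
import json

# Raw model output as shown above (truncated to two entries for brevity).
raw_response = '''
[
  {"id": "ytc_UgytbBy9l-2BpaxPPXJ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx8079VK9A8NfQpB7V4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
'''

# Allowed values per dimension, inferred from the labels that appear in the
# response above; the real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "company", "user", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban"},
    "emotion": {"resignation", "indifference", "approval", "mixed", "outrage"},
}

def validate(entries):
    """Map comment id -> codes, rejecting any label outside ALLOWED."""
    coded = {}
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            if entry[dim] not in allowed:
                raise ValueError(f"{entry['id']}: unexpected {dim}={entry[dim]!r}")
        coded[entry["id"]] = {dim: entry[dim] for dim in ALLOWED}
    return coded

coded = validate(json.loads(raw_response))
print(coded["ytc_Ugx8079VK9A8NfQpB7V4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed label set catches the common failure mode where the model drifts outside the codebook (e.g. inventing a new emotion label), so bad batches fail loudly instead of silently polluting the coded dataset.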