Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Making a doll repeat sentences created by humans isn't ai, ai has to learn as a …" (ytc_UgwqLbBeY…)
- "The problem isn't AI, it's capitalism. The goal of life isn't to work, it's to b…" (ytc_UgwIY3vyG…)
- "Maybe k my words & those of others one day this AI will destroy the human kind .…" (ytc_UgxgZxxUI…)
- "I am imagining a robot Karen that screams: \"01101001 00100000 01110111 01000001 …" (ytc_Ugy3ALZ5w…)
- "This is pretty much the way that AI ACTUALLY ends up killing innocent people in …" (ytc_Ugx9mbZik…)
- "Most artists have a superiority complex, they will take a shit stain smeared on …" (ytr_Ugw1BLypm…)
- "2025: The best AI based robot in the world can't close a dishwasher on his own w…" (ytc_UgyxV2odp…)
- "We have been hearing the robots are coming for decades now. We should have had t…" (ytc_UgwmBH7IV…)
Comment
The reason I don't expect AI to get much better (at least in the form we have now) is because I don't believe that simply throwing endless amounts of money at a problem is always enough to solve it. With modern software especially we see this so often, so many new versions of apps and operating systems that are only marginally better than the previous version and maybe even worse in some ways, even though millions of dollars were funneled into their development. Windows 11 arguably isn't any better than Windows 10, and Windows 8 absolutely isn't better than Windows 7. AI just feels like the same thing. Maybe eventually it will turn into something genuinely useful to me, but we're not going to get there from companies trying to brute force it by sinking trillions of dollars into it.
youtube · 2025-03-21T19:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
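Each coded row should be checked before it is accepted into results like the table above. Below is a minimal validation sketch, assuming only the value sets actually observed in this batch of responses (the full codebook may define additional categories; the `ALLOWED` sets here are an illustration, not the authoritative schema):

```python
# Value sets per coding dimension, as observed in this batch's raw LLM output.
# NOTE: these sets are inferred from the sample, not from the official codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"fear", "approval", "mixed", "resignation",
                "indifference", "outrage"},
}

def validate(row: dict) -> list[str]:
    """Return a list of problems with one coding row; empty means it passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The row shown in the table above passes:
print(validate({"responsibility": "none", "reasoning": "consequentialist",
                "policy": "none", "emotion": "resignation"}))  # []
```

A row with an out-of-vocabulary value (or a missing dimension) comes back with one problem string per failing dimension, which makes it easy to log and re-prompt only the bad rows.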
Raw LLM Response
```json
[
  {"id":"ytc_UgwGL8KIGdZRR6YYUG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxau7k2iSfsXUcJid94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"approval"},
  {"id":"ytc_Ugx2FvZANEJdmtYpPeJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwG7F6ejDO-n0yk1AZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzT-rAyntdibI6J03t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxYZNfUUw-APV13Q8R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzwFDCcf3zWG4O0GPB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwiwSlorZu5BqVBur14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxa3JNCI8Al2ZgesG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGIVbpmLpqnb259Kt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```