Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgzEfHaB9…: "These words used to be a part of my vernacular BEFORE ai. Stopped using them bec…"
- ytr_UgwjhonI5…: "I use ChatGPT for research sometimes and it makes tons of mistakes, gets confuse…"
- ytr_UgzXmQBwm…: "Same man. I like AI art because it's like comissioning art but for free. With ju…"
- ytc_UgwVuytIP…: "No one laughs at their little jokes because this is pretty terrifying. Why would…"
- ytc_UgyxOB3Mo…: "Imagine buying an artwork you could easily generate online. Lol. As an artist, …"
- ytc_Ugyo0diA4…: "Thanks but the ship has sailed. CBDC is nothing without AI and all first world c…"
- ytc_Ugy750f8a…: "That's exactly why i will support AI because it will bring the end of this disgu…"
- ytc_UgyxjUbgn…: "We should be putting a stop to AI until you at least get those questions. Answer…"
Comment
NO, NO, NO... What people call “AGI” right now is mostly marketing. LLMs and “agents” are useful, but they are not general intelligence. LLMs scale with a clear problem: you burn vastly more compute for smaller gains. That diminishing return matters because it turns “just scale it” into a power and cost wall. A system that needs huge GPU farms to get marginal improvements is not on a clean path to human-level general intelligence. And the “agent” layer doesn’t fix the core issue. Agents are task loops: call the model, check output, call tools, retry, patch failures, repeat. That can reduce hallucinations by adding filters and verification steps, but it’s still a brittle routine. It’s closer to automated workflow than a mind. Iterating until you get a coherent answer is not the same as understanding, learning, or reasoning robustly across new situations. So yes, LLMs have a scaling and efficiency problem, and agents are mostly a wrapper that compensates for weaknesses. That combination can produce impressive demos, but it’s not AGI.
youtube
2026-02-06T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
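The four coding dimensions in the table take values from closed label sets. A minimal validator sketch for one coded record, assuming value sets inferred from the labels observed in this dump (the actual codebook may define more labels than appear here):

```python
# Hypothetical validator for one coded record. The allowed value sets below
# are inferred from labels observed in this dump, not taken from the codebook,
# so they may be incomplete.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological",
                  "contractualist", "virtue"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "mixed", "resignation", "hope"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in allowed set")
    return problems
```

Running this over the parsed raw response catches records where the model drifted outside the label set, which is a common failure mode when coding with an LLM.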
Raw LLM Response
```json
[
{"id":"ytc_UgxULa83FZ45v4baS4B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOMcz4ECaofmsxYRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAHji4ybUbrw9hApl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyq-wZA5h8aqDzbkXB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybcsHqzXzMqgDQFIR4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxDEk5XLRAtwFS0dIV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYrnJRGPMuJMah83x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw4SY4f03fOfKYHNhx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyRQzKHHbaaOgtfcDR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"hope"},
{"id":"ytc_UgxYsTz43jL9j9D914F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
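Since the raw response is a JSON array of per-comment codes, looking a comment up by its ID is a parse-and-index pass. A sketch, using two records from the response above (function and variable names are illustrative, not from the tool itself):

```python
import json

# Two records copied from the raw LLM response shown above.
raw = '''[
{"id":"ytc_UgxULa83FZ45v4baS4B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAHji4ybUbrw9hApl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response and index the coded records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

codes = index_by_id(raw)
print(codes["ytc_UgyAHji4ybUbrw9hApl4AaABAg"]["emotion"])  # -> outrage
```

In practice the model's output may need cleanup before `json.loads` (e.g. stripping a markdown code fence), so a production version would guard the parse with error handling.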