Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
NO, NO , NO... What people call “AGI” right now is mostly marketing. LLMs and “agents” are useful, but they are not general intelligence. LLMs scale with a clear problem: you burn vastly more compute for smaller gains. That diminishing return matters because it turns “just scale it” into a power and cost wall. A system that needs huge GPU farms to get marginal improvements is not on a clean path to human level general intelligence. And the “agent” layer doesn’t fix the core issue. Agents are task loops: call the model, check output, call tools, retry, patch failures, repeat. That can reduce hallucinations by adding filters and verification steps, but it’s still a brittle routine. It’s closer to automated workflow than a mind. Iterating until you get a coherent answer is not the same as understanding, learning, or reasoning robustly across new situations. So yes, LLMs have a scaling and efficiency problem, and agents are mostly a wrapper that compensates for weaknesses. That combination can produce impressive demos, but it’s not AGI.
youtube 2026-02-06T09:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxULa83FZ45v4baS4B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxOMcz4ECaofmsxYRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyAHji4ybUbrw9hApl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugyq-wZA5h8aqDzbkXB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgybcsHqzXzMqgDQFIR4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxDEk5XLRAtwFS0dIV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxYrnJRGPMuJMah83x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw4SY4f03fOfKYHNhx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyRQzKHHbaaOgtfcDR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"hope"}, {"id":"ytc_UgxYsTz43jL9j9D914F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"} ]