Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hey, as an AI researcher I feel like this is... misinformed in a lot of ways. I've been a huge fan of yours for years but I just feel like this video misses the mark. The whole rhetoric of "we don't understand how AI works" is misleading. We understand the building blocks, but big companies like OpenAI and Anthropic (whose works you cited multiple times) have built such large, convoluted models that it's computationally infeasible to try and track the decision. To me, that doesn't equate to "we cannot understand these models they are displaying superhuman capabilities" etc etc.

Anthropic in particular has gotten a lot of flack among the researcher community for intentionally releasing papers that try to hype up the supposed "emergent capabilities" of their models (that almost always make perfect sense for text completion/generation models that have been trained on the entire damn internet, including tons of science fiction stories where AI "goes rogue."

Also, saying an AI will develop "superhuman capabilities" is absolutely moronic. AI already HAS superhuman capabilities. So does my laptop, or hell, my TI-84 calculator. As a professional in this field, I can honestly say we're very, VERY far from developing an AI of any variety (LLM, CNN, what have you) with what we imagine as AGI.

These companies benefit from the fearmongering around AI because it convinces their stockholders that it's possible that they'll be able to automate away those pesky human workers that demand awful things like "rights" and "general human decency." That belief is the only thing keeping money flowing, which is the only thing keeping these companies that are absolutely hemorrhaging money going.

(For those of you about to say "But random AI person on the internet, what about programming jobs? Those are being automated away by LLMs!" From what I'm hearing on the ground most of the big companies are doing mass layoffs, claiming that it's due to AI, and instead dumping extra work on their remaining workforce. Amazon's workers have recently signed an open letter complaining about this. I've spoken to a lot of software developers and the reaction ranges from "They actively make my code worse" to "I liked them when I was inexperienced but the more experience I get/the more specific the problem I have to solve is the worse they are" to "I like them to play with and sometimes they save me time but the bugs they introduce cost me so much time it ends up breaking even." Eeeeevery once in a while I hear "it's amazing for prototyping but I can't use it for production code". The only people I've ever heard raving about them and saying how they'll replace programmers are my fellow researchers who, unlike me, have never worked in industry as an actual software dev and have no understanding of )

AI "agents" are a whole other can of worms- suffice to say that giving an unstable, poorly designed piece of software access to tools for doing things like making purchases or acting autonomously on your behalf is stupid and it's an ACTIVE stupid. That does not mean that these software are acting intelligently and maliciously, it means that these companies are intentionally throwing a large, poorly designed hunk of software into the world and giving it way too much power, which isn't anything new in the software world. However, again this is a CHOICE on the part of the programmers of these models. I cannot express to you how much these programs CANNOT act fully autonomously and how they need these tools to be explicitly written and given to them and they need to explicitly have "use x tool in response to y" built into them.

If you're interested in other voices, I suggest reading Gary Marcus, watching Internet of Bugs, and Simon Willison (for a more optimistic view that is still very grounded in realism)

Tl;dr Hank, I think you need to consider that you're paying too much attention to the "sci-fi" fearmongering and ignoring the real problem, which is LLM companies trying to sell the idea of AGI so they can convince companies to fire all their workers and replace them with a shitty, unreliable poorly designed software.
youtube 2025-11-11T02:1… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyfINCepppSmJgQ4MJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzl12FhB9NM3Uy0y9p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgwoBphgNtl_M4qBMux4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxAK5zs5VthtBQ5ITR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyRQ_p8nA1XA9BO2rF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw2Zbvq7RteVjKmrq54AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzfwBOHXgv86wTPl3x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8iXhI3DLAcJYXbYN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzEJDpYrwbmcSezPNV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyNVQcUqafyi9wRODV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
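The response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such output could be parsed and sanity-checked is below; the allowed label sets are inferred only from the values visible in this one response, so the project's full codebook may define additional labels.

```python
import json

# Allowed labels per dimension -- ASSUMPTION: inferred from the values
# that appear in this raw response; the real codebook may include more.
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

# One record from the response, used here as sample input.
raw = ('[{"id":"ytc_UgyfINCepppSmJgQ4MJ4AaABAg","responsibility":"developer",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')

def validate(records):
    """Return (comment id, dimension, value) triples for any off-codebook value."""
    errors = []
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # → [] when every coded value is in the codebook
```

A check like this is useful because LLM coders occasionally emit labels outside the schema; flagging them per comment id makes it easy to re-prompt or hand-code just the offending rows.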