Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Techie who works with AI here. AGI is literally not possible with current technology. That's why it won't happen. Achieving AGI via LLMs is the equivalent of building an entire human brain with a fraction of the speech center of the brain, because that's all it is. It's better at large-scale things, because it's computer software. But it has zero awareness, and therefore zero semantic understanding. It's a word calculator. A human baby takes much longer to train, but at a fraction of the cost and environmental impact, and with nearly perfect semantic understanding.

So first you have to finish the speech center. LLMs are built on top of decades of research and hundreds of years of math. Now do the same thing with a dozen or two dozen other brain components, and figure out how to wire them all together, in a way that is actually affordable. Oh, and you have about 10-20 years to do it, because climate change is on our ass. We're talking dozens of moon shots. Not happening; it's a fantasy to inflate stock prices and valuations.

Do an experiment. Download a free tool called Ollama, which lets you run and interact with any open-source language model. Install it, and download a model, say llama3 or deepseek-r1; whatever you want. Now navigate to the folder on your hard drive where the model resides. You can find the location via Google or your favorite search engine; on Windows, it's something like C:\Users\username\.ollama\models\blobs. Look at the blob files. The ones with the big sizes are the actual models; the smaller ones are manifests and things like that. That's the thing people think is alive: a blob file sitting on your hard drive. Keep that folder open for a day; I guarantee nothing will happen, for the same reason that if you stare at your toaster for a day, it won't come to life and start talking to you and making life choices. Without software to interact with it, it's just, literally, a big blob of numbers.
As a side note, it's really interesting how in the early days of AI hype, the hypers re-branded the concept of AGI, which has been around for decades, watering it down to mean just 'a computerized system that can do business tasks better than a human'. Well, then I guess every computer is AGI? Or a photocopier? I sure can't make 1,000 perfect copies of a memo in a few minutes, can you? But since there has been all of this push-back, and their fantastic predictions have failed to come true, they have quietly, without anyone noticing, walked back to the original definition of AGI as defined by AI researchers in the first place, which in a nutshell is machine intelligence. You know, like Data or the Terminator. The public thinks of that as 'AI', but it's actually AGI. But AI sounds better than the slightly clunkier AGI, so that's the term the media uses. Just like in the media, a veteran is someone who has fought in a war, whereas in real life, a veteran is anyone who has served in the military honorably.

BTW, AI has been around 'forever', as Neil says. The NPC AI in your video game likely uses something called a behavior tree, from the automata-based programming branch of AI. Those have been around for a couple of decades; before that, games used something called finite state machines. Collective intelligence algorithms are AI. Expert systems are AI. AI is any system that *simulates* some form of intelligence--not a system that *is* intelligent. The emphasis is on artificial, not intelligence.

What you should be afraid of is what capabilities these AI systems do have, and how they are being deployed against you by corporations, governments and the wealthy. And you should be most afraid of the impact AI data centers will have on the climate.
youtube AI Moral Status 2025-09-10T22:5…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | mixed
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyQdeZRwZUzq-eVvFB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyQS0kZAVehidSA2FR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzqgktRy2HQiWr3FXp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz-0FEm-C9xb4Kb1z14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgywcauFNe6W1NLfdy94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgyQmussWQIRn-QVWox4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxdOkNwy72envuEtnJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgynZckacNK7Qog7ZBp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxF7a73Bsou-yDKLIJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwjSVw15EAg5cGuUjN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
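The coding result for a single comment is just the array record whose id matches. A short Python sketch of that lookup; `raw` is a two-record excerpt of the batch response above, and `code_for` is a hypothetical helper name:

```python
import json

# Two-record excerpt of the raw batch response shown above.
raw = '''[
 {"id":"ytc_UgyQS0kZAVehidSA2FR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzqgktRy2HQiWr3FXp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

def code_for(records, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
row = code_for(records, "ytc_UgyQS0kZAVehidSA2FR4AaABAg")
print(row["emotion"])  # prints "indifference"
```

This is how the per-comment table is populated from the raw response: parse the JSON array, then select the record by comment id.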