Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
35:00 While so far I don't really vibe with his evaluation of how human "intelligence" works, you can concede his whole model and still say "humans succeeded in generating training data that trained the next generations to do things that earlier generations would consider magic". Send a well-informed human back in time - maybe a chemist or a doctor - and they stand a good chance of accumulating a lot of power. If we distill the ability to do that into an AI, can it make the same kind of progress? Can it be supercharged or improved to do it really fast? We developed social and psychological technologies - writing, the scientific method - which increased our own intelligence. So even if he's right, there's still a good argument for FOOM. And AIs CAN learn things that we can't teach them. They know how to play Go a lot better than us. They know how to predict the shape of a protein far better than our vaunted physics knowledge and supercomputer simulations. But he has inspired me to change my thinking. For about 9 years, no one has come up with any idea in machine learning that even hints at being as big a breakthrough as the transformer. I might be exaggerating a little bit, but you get my point. There are millions of people in STEM fields who spend some significant amount of time thinking about this, and probably hundreds of thousands working on it full time - way more people than were doing so in 2017. Maybe the next leap really is harder, just like physicists got all the low-hanging fruit by the 60s and now they're completely stuck. LLMs have shattered our conceptions of what intelligence might be and how it might work. But does it seem like someone has a better model of it than we did in 2017? Not really. If we understood how to replicate even a bit of it, well, we wouldn't still be using a brute-force compute-based method, would we? All this to say - if LLMs can distill our intelligence and nothing more, then maybe they'll get stuck for a while at human parity.
Above average for a human, but not on the level of the Eulers and the Einsteins. After all - there's very little training data for those minds. Only a tiny fraction of what went on in their brains ever got written down and preserved. In that case, the doom will not immediately be in the form of ASI, it will begin as humans being 'outnumbered' and 'outworked' by millions of copies of mere AGI.
Source: youtube · Posted: 2026-03-26T00:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          mixed

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx8CgiHPtR_PcujKzJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyL3gRwSi8GiYrlhU14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxoAKbFXaFNKheaIWZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy-OQdbloj83oKvmI94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzumermdBNf4qTtoHZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy65NnMH35v_0m6yqZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyvVxyjyNLIEeXpHht4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyM84xeyznV3P1Wn4h4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxPkdUi-FguJl50ZMd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
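The raw response is a JSON array with one coding object per comment, each carrying the four dimensions shown in the table above. A minimal sketch of how such an output can be parsed and a single comment's coding looked up by id (the id below is copied verbatim from the response; which entry corresponds to this particular comment is an assumption, chosen because its values match the coding table):

```python
import json

# Raw LLM response, abbreviated to one entry from the array above.
raw_response = """[
  {"id": "ytc_UgyL3gRwSi8GiYrlhU14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]"""

# Index the codings by comment id so any comment can be matched back
# to its four coded dimensions.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings["ytc_UgyL3gRwSi8GiYrlhU14AaABAg"]
print(coding["reasoning"], coding["emotion"])  # mixed mixed
```

Indexing by id rather than array position keeps the lookup robust if the model returns entries in a different order than the comments were submitted.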