Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgyMT1nGs…: Ai image generator aka "ai art" don't even have a very basic fundamental underst…
- ytc_Ugzgsafph…: Funny when you stop to realize that AI exists on the shoulders of a massive huma…
- ytr_UgzvMHXAq…: We have something inside us telling us, subconsciously, that we have certain tas…
- ytc_Ugy5cRwat…: Re AI agenda I heard xAI Musk Grok suddenly flooded with fake South Africa genoc…
- ytc_UgxMEFTE-…: About 1:40: Here is a possible explanation for the fact that we don't understand…
- ytc_UgzfTWg4U…: The funny thing is that people think they can become anything after doing a few …
- ytc_UgzV3MbFo…: If this guy believes we r living in a simulation then what is the point of makin…
- ytc_UgwmKYJt_…: Autonomous killer robots have been around since the 80s. The Goalkeeper CIWS is …
Comment
Fan of the show, specifically how you look at conspiracies and then ground them in reality - often debunking them. I'm an AI expert working on one of the projects you mentioned, so this episode was particularly interesting from my perspective. I think most of what you covered is true, though you may have over-indexed a bit on the Bing incidents and their meaning; they're less significant than they seem on the surface. I think this topic probably deserves follow-ups, so you might consider a few things moving forward:
1. Being more grounded in how these actually work: you mentioned LLMs and neural networks in two separate beats; however, LLMs are a subset of neural networks, specifically an architecture called transformers, which focuses on learning how to attend to and relate information across a huge input: language. LLMs are really where the game changed - they learn to structure meaning in language remarkably well. At first it was apparent that they could regurgitate (predict) well-formed language, and now, as you mentioned, GPT-4 exhibits abstract reasoning. Two things for you: it would be useful to explain at a high level how LLMs work (because they're the important mover right now), and to report on how that trend plays out in reality (which I think it will) and project it out each year for 10 years. We're in the middle of teaching LLMs how to use tools, specifically to invoke non-model code; this is becoming a standard industry practice as I type this. We're also integrating other modalities: LLMs at first could only "see" text, and we're adding images, sound, haptics, and other senses. We're also working on giving them longer-term memory. Generally, these will converge on true AGI very soon and rapidly progress to ASI. The next generation of LLMs won't be as hardware-constrained as they are today; Nvidia's new GPU supercomputer was really designed for training this next generation of LLMs, which will be an order of magnitude larger while being close to an order of magnitude more efficient computationally and algorithmically. At the same time, custom hardware is being designed to forego GPUs for another leap in model size. At about current gen + 2, around 2027, we'll start to see AGI. By 2030, we'll likely see ASI.
2. You focused on the bad side of AI, but per your normal routine, I would suggest grounding the topic in the other side: the potential upside of AI is as amazing as its potential downside. Maybe that's not the beat you're looking for, but I think it would be prudent given the gravity of the topic.
3. Something nobody is thinking about (or at least talking about, that I've seen) is the implications AI has for many of the other topics on this channel. If we, lowly humans, are capable of creating superintelligences within a few thousand years of harnessing fire, what does that mean for things like aliens (hint: they won't be humanoid), the Fermi paradox, advanced civilizations, etc.? It really provides a realistic lens to view it all through. It might even explain a thing or two...
I know a thing or two about all this, so I'm happy to help explain anything or to consult.
youtube · AI Governance · 2023-07-07T03:0… · ♥ 40
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwDZd4iA4Wo0ie0vXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_Daysgtmt50CSqOJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwoQSnxKQHVuBOtuEN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyrwCrLXvfklilSlsx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwKkZko7Q6x_jBr9Xh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzMSvX8VWSUBYzLjxJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYJAAKtk6aTIocImh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-UDTt3cjhTuABGBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyfVCSpfRh782LR6eN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz2sO8Bs6ZNXhCe3PZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
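The raw response above is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions from the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be validated and tallied — the field names come from the output above, while the variable names and the truncation to two records are illustrative:

```python
import json
from collections import Counter

# Raw LLM response, truncated to two records for brevity
# (field names match the JSON output shown above)
raw = """[
  {"id": "ytc_UgwDZd4iA4Wo0ie0vXR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy_Daysgtmt50CSqOJ4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(raw)

# Every record must carry an id plus exactly the four coding dimensions
for record in codes:
    assert set(record) == {"id", *DIMENSIONS}, f"malformed record: {record}"

# Tally the coded values per dimension across the batch
tallies = {dim: Counter(r[dim] for r in codes) for dim in DIMENSIONS}
print(tallies["emotion"])  # e.g. fear: 1, outrage: 1
```

A real pipeline would also want to reject values outside the coding scheme (e.g. an `emotion` not in the codebook), which the same loop can check against an allow-list per dimension.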