Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I don't have any real issue with banning the creation of a super AI, but not just because of the danger: I simply don't expect it to happen, not anytime soon at least.
Dave kind of hand-waves away the fact that current models are clearly plateauing the moment you stop looking at their often ridiculously optimistic benchmarking. That plateauing has also been happening for a lot longer than it appears, because for a while each iteration was given literally orders of magnitude more data and processing power to get its improvement. So, yeah, by many metrics it might be 10x as good, but it took 1000x the resources to accomplish, and then 1000x more to get 2x as good after that; it was never sustainable. (Not least because they already fed it all the available data on Earth...)
You can already see how the companies are responding to this: redirecting towards optimisation of what they currently have rather than outright improvements, backing off a bunch of planned increases in processing and power, some places reversing their decision to fire quite so many staff members, Altman openly admitting that AI is in a bubble from the overhyping. I think it might have been Computerphile that featured researchers of alternative methods waiting in the wings, confident the current methodology is reaching its limits, etc.
I don't want people to misunderstand: AI was rapidly changing the world long before LLMs came about, and I expect it to continue to do so. I also see it as enormously dangerous regardless of whether it obtains super-intelligence or becomes an existential threat to humanity. It has done, and can do, tremendous damage to millions. Caution is a necessity when dealing with something like this.
But while I'm glad there are people keeping an eye on it, I am highly skeptical that we are anywhere near AGI, and I think some of this verges on distracting fear-mongering: basing claims more on marketing than on a sober analysis of where the practical reality currently is, and obfuscating the more realistic and already present dangers.
Though it's not surprising, given the field has a long history of even its foremost experts predicting AI would keep explosively improving, only to be proven completely wrong again and again, going all the way back to Turing himself. I don't think modern researchers are smarter than Turing, but I do think accurate estimations of where we are have grown a great deal harder.
youtube
AI Governance
2025-08-27T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgyZf8eOsOZVzBlFI8B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx3uiGSZCmCY2fB0vV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgwfLvc3UC5gbGARQPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy24xypXjBEWBij_FZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzuOrGxkSWSjeoYkVx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
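The raw response above is a JSON array of per-comment codings, each carrying the four dimensions from the table (responsibility, reasoning, policy, emotion) plus the comment ID. A minimal parsing sketch, assuming only the field names visible in the JSON; the `Coding` dataclass and `parse_raw_response` helper are hypothetical names, not part of the tool itself:

```python
import json
from dataclasses import dataclass

@dataclass
class Coding:
    """One coded comment, matching the fields in the raw LLM response."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

def parse_raw_response(raw: str) -> list[Coding]:
    """Parse the model's JSON array into Coding records.

    Raises if the JSON is malformed or a field is missing/extra,
    which doubles as a cheap schema check on the model output.
    """
    return [Coding(**item) for item in json.loads(raw)]

# Two entries copied from the raw response above:
raw = '''[
{"id":"ytc_UgyZf8eOsOZVzBlFI8B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx3uiGSZCmCY2fB0vV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"}
]'''

codings = parse_raw_response(raw)
print(codings[0].policy)  # ban
```

Because `Coding(**item)` rejects unexpected or missing keys, a model response that drifts from the expected schema fails loudly at parse time rather than silently producing incomplete codings.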