Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't have any real issue with banning the creation of a super AI, but not just because of the danger: I simply don't expect it to happen, not anytime soon at least. Dave rather hand-waves away the fact that current models are clearly plateauing the moment you stop looking at their often ridiculously optimistic benchmarking. That plateauing has also been happening for a lot longer than it appears, because for a while each iteration was given literally orders of magnitude more data and processing power to get its improvement. So yes, by many metrics it might be 10x as good, but it took 1000x the resources to accomplish, and another 1000x to get 2x as good after that; it was never sustainable. (Not least because they have already fed it all the available data on Earth...)

You can already see how the companies are responding to this: redirecting towards optimisation of what they currently have rather than outright improvements, backing off a number of planned increases in processing and power, some places reversing their decision to fire quite so many staff members, Altman openly admitting that AI is in a bubble from the over-hyping. I think it might have been Computerphile that featured researchers of alternative methods who are waiting in the wings, confident that the current methodology is reaching its limits.

I don't want people to misunderstand: AI was rapidly changing the world long before LLMs came about, I expect it to continue to do so, and I also see it as enormously dangerous regardless of whether it obtains super-intelligence or becomes an existential threat to humanity. It has done, and can do, tremendous damage to millions. Caution is a necessity in dealing with something like this.

But while I'm glad there are people keeping an eye on it, I am highly skeptical that we are anywhere near AGI, and I think some of this verges on distracting fear-mongering, basing claims more on marketing than on a sober analysis of where the practical reality currently is, and obfuscating the more realistic and already present dangers. That is not surprising, though, given the field has a long history of even its foremost experts predicting AI would keep explosively improving only to be proven completely wrong again and again, going all the way back to Turing himself; I don't think modern researchers are smarter than Turing, but I do think accurate estimations of where we are have grown a great deal harder.
youtube AI Governance 2025-08-27T12:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          ban
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyZf8eOsOZVzBlFI8B4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban",  "emotion": "outrage"},
  {"id": "ytc_Ugx3uiGSZCmCY2fB0vV4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "ban",  "emotion": "indifference"},
  {"id": "ytc_UgwfLvc3UC5gbGARQPZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy24xypXjBEWBij_FZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzuOrGxkSWSjeoYkVx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",  "emotion": "outrage"}
]
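The model codes a batch of comments in one JSON array, so pulling out the row for a single comment id is a small parsing step. The sketch below is one way to do it, assuming the four dimensions shown in the coding table above are the full schema; the `coding_for` helper name is an illustration, not part of any real pipeline, and the raw response is truncated to two records (ids copied verbatim from the output above).

```python
import json

# Two records from the raw response above, ids copied verbatim.
raw = '''[
  {"id": "ytc_UgyZf8eOsOZVzBlFI8B4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx3uiGSZCmCY2fB0vV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "indifference"}
]'''

# Dimensions reported in the coding table; assumed to be the full schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            # Keep only the known dimensions, dropping the id and any extras.
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_for(raw, "ytc_Ugx3uiGSZCmCY2fB0vV4AaABAg"))
# {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'ban', 'emotion': 'indifference'}
```

Matching by id rather than by position guards against the model reordering or dropping items in the batch, which is a common failure mode for batched structured output.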