Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
this is more than a little stupid. i actually agree with the inherent message, however there are significantly more than four outcomes - and some of the outcomes you listed don't even seem to be the most likely. this is just reductionist, and betrays a lack of research into the actual development and direction of this problem. in my honest opinion, AI will hit a brick wall, it won't achieve superintelligence or general intelligence because these are poorly defined in the first place. more than this, i don't think LLM's are fundamentally, architecturally capable of the scenarios you describe, and LLM's are where most of the world's AI r&d resources are going towards. AI training will see diminishing returns, the biggest use case at the end of the race will be software development. AI companies will use the bubble as a platform to grift and acquire as much power as they can in multiple sectors before settling down and monopolising - just like the social media tech giants of web 2.0 did. they will sustain their frontier models, possibly at a deficit maybe at a profit depending on the company - the AI will be capable at niche tasks, terrible at others, and they may continue to invest in other exploratory avenues beyond LLM's that we simply can't foresee right now. the whole thing will be frankly boring, and the biggest threat on display will be big tech companies asserting more power and control through the continued slow-choking of our planet to death by the insidious, tedious, continual crawl of capitalism. as i said, very boring and expected. GPU farms will noticeably contribute to global warming. google deepmind and other academic/scholarly AI will probably continue to do some good though. the other big consequences will be increasingly sophisticated scams which target vulnerable people, harder access to job markets and housing due to AI screening, continued misinformation and AI slop, just a general degradation of culture. 
finally, the other main threat is AI for military application. this is the domain where i see AI being granted the most autonomy, and causing the most immediate harm. this wouldn't be through going rogue though, but through being given access to systems it shouldn't and through hallucination, misinterpretation, and unforeseen consequences. there are other possible outcomes not mentioned in your video, but this is the one i personally believe is most likely - significantly more so than all four of your totalistic possibilities. i just think the outcome of all of this will be very boring, very disappointing, give more power to capitalists at the expense of everyone else, increase wealth disparity, and fundamentally change nothing will making things measurably worse for creatives, vulnerable people, and low-income peeps below the poverty line. frankly the way you explore this issue i think just serves to obfuscate the real consequences, as well as taking focus away from other much more urgent existential threats such as renuclear proliferation, rising tensions, famine, rise of fascism, climate change, biopshere collapse, etc. each one of these issues is significantly more important, and several of them are already posing existential threats at this very moment - leaving me wondering: what on earth does this video actually achieve? for anyone?
youtube AI Moral Status 2025-05-28T11:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzJSjDkRUIAkZ3k1EV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxpZTcRDhm2cwMXVJJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzVHl5ZQcgiH_AL0xN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgycC3XSmal59R7YHet4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw7he0dI4xjzRP3GRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyu1GK6LX1RDV3iOpV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugy9iFXCT6AT0RbihWx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwdLIbaN63DDloojWh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmK9mwuCoWpdkaFRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy_5mQd1t5Ej-OnEPV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"}
]
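The raw response above is a JSON array of per-comment codings, each keyed by a comment `id` and carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup — the variable names are illustrative, not part of the tool, and the array is truncated to two entries here:

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (truncated to two entries for illustration; the full
# response above contains ten).
raw_response = """[
 {"id":"ytc_UgzJSjDkRUIAkZ3k1EV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwdLIbaN63DDloojWh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

# Index the codings by comment id so any single comment's
# dimensions can be looked up directly.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings["ytc_UgwdLIbaN63DDloojWh4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

The id-keyed index is what makes it easy to match each coding back to the comment shown in the record, as in the "Coding Result" table above.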