Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This guy is over selling the fear of what could happen. He is correct that his vision of the future is one where the worst of humanity takes over. Reasons it will be slower and controlled to prevent a collapse
#1 the governments would lose BILLIONS of taxes because the companies won't pick up that slack of each human they replace.
#2 there is no government that doesn't have a plan incase we get to far into the Intelligence that we are getting Terminator level evil.
#3 Human greed will get riots/violent action taken vs companies who try. They won't get away with it as you steal peoples chances to make money to support.
#4 He is also not taking into account the fact that governments have MASSIVE stores of technology we have yet to see. Its coming out 15-30 years after being created when they think that humans can understand it enough to adapt. I have documents and stories from my great grandfather about this exact thing.
#5 if we don't continue this journey down this road we can't recreate/come forward about technology that can actually be helpful because there will be no need if the common people doesn't have access to the ability to figure it out themselves (examples: Roman concrete, curing cancer, helping people with disabilities, etc).
#6 The government also will not hand over power to computers. The current system grants them to much to allow it to be replaced.
This is getting long but for my last point I do agree that AI Safety is a major thing people need to talk about in tech circles, and in these labs more deeply. It does need more transparent logs for people to look at. publicly, however i disagree that they we should stop. We need to continue developing this so we can be push our society to fixing things we have to fix. We can't stay stagnant any more than we already are.
youtube AI Governance 2025-09-17T13:4…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwFvQrgmijLStc-jY94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzFqT_Ryp9f90TbZAZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxT3av90U1zzHMCeAh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxuY_CtZOMVNmb4bRh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy0ihJ4hM6APeQDHu94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw7aL_Qi4SINp-rKgB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzXIpCiDZsS096Phst4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxMIYU5GnLQvYS8GRx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy6_C-ZLOpOhdI5r4J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyTyUvqaRp0lmqyTMl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
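A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before coding results are stored. The allowed values per dimension are inferred only from the codes visible in this inspection view, not from an official codebook; `parse_codes` and `ALLOWED` are hypothetical names for illustration.

```python
import json

# Vocabulary inferred from the codes seen in this view (assumption, not a codebook).
ALLOWED = {
    "responsibility": {"government", "company", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse the model's JSON array and flag any out-of-vocabulary codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_UgwFvQrgmijLStc-jY94AaABAg","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes[0]["responsibility"])  # government
```

Validating against a fixed vocabulary catches the common failure mode where the model invents a label outside the coding scheme.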