Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't believe humans are at risk of going extinct. I think one MAJOR thing needs to be taken into account before claiming we are at risk. That one thing is: "If we all stopped right now and did NOTHING, would it still happen all on its own?" THAT is the question I think should determine whether we are actually at risk or not. For example, if we stopped right now and did nothing, would AI take over and exterminate us? Or would nuclear war start? That kind of thing. Because unless something is inevitable to the point where, if humans did absolutely nothing, it would still happen all on its own, I don't think we are at risk of anything. And I DON'T think we are at that point. Because AI is neither sentient nor advanced enough to replicate itself yet, and nuclear war would NOT start unless SOMEONE pushes the button. Every species on earth that ever went extinct did so because something unavoidable happened, regardless of any outside influence. Things such as a meteor strike, a giant solar flare, an ice age and the like. Now THOSE are things I would be MUCH more concerned about throwing us back into the medieval era or causing us to go extinct, as those are things we CANNOT alter or stop. There are THREE types of AI: Narrow Intelligence, General Intelligence and Superintelligence. As it stands, TRUE General Intelligence AI doesn't even exist, and we are nowhere near Superintelligence AI. The AIs in ChatGPT, in your phone and in Tesla cars are ALL NARROW intelligence AI. Meaning that while they can do ONE TASK really well, that's ALL they can do. We are mastering Narrow AI and it's getting REALLY REALLY good. But narrow AI like ChatGPT and other chatbots doesn't even know what it's saying or what you're saying to it. Explained on a very basic level, ChatGPT was fed every word in the English language, fed a bunch of literature and allowed to look at how often words were used in conjunction with one another.
The AI then assigned every word a number, creating a giant "web" of words with connections to one another based on how often words are used and how often they are used in different combinations. The "web" it created ended up being much like how neurons in the human brain make connections with each other. So when you ask ChatGPT a question, your words are converted into numbers, the AI looks at the most PROBABLE reply to you based on this web, constructs the reply in numbers based on the probability of use between each individual word, and then those numbers are converted back into words and displayed to you. It doesn't actually understand or even know what is being said, just that its reply is the most mathematically probable one to your input. In this way it can construct customized answers that RESEMBLE thoughtful replies rather than simply regurgitating information. It's basically just looking for patterns; it's not actually "thinking". We are quite a ways off from having any true General AI and a LOOOONNG way off from Superintelligence AI that would be considered sentient or able to think on its own. BUT at SOME POINT we will HAVE to draw a line where something like ChatGPT or Bing chat AI crosses from Narrow to General AI, as General AI COULD be argued to be a form of sentient AI. I mean, ChatGPT created its web in the same form as neurons in the human brain ALL ON ITS OWN, without any instructions to do so, and we don't know why. And now that it's formed, even the programmers DO NOT understand HOW it works, just that it does. The programmers can't understand how it works because the number of connections the AI has is in the QUINTILLION range, numbers SO VAST that humans couldn't ever hope to understand them and IMPOSSIBLE to look through.
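The "most probable next word" mechanism the comment describes can be sketched as a toy bigram counter. This is a deliberate simplification for illustration only (the corpus is made up, and real LLMs learn vector embeddings over tokens rather than raw co-occurrence counts), but it shows the core idea: the program picks the statistically likeliest continuation without understanding anything.

```python
from collections import Counter, defaultdict

# Tiny stand-in for "a bunch of literature" (assumption: toy data).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another -- the "web" of connections.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Return the word that most often follows `word` -- no understanding involved."""
    return follows[word].most_common(1)[0][0]

print(most_probable_next("the"))  # -> cat ("cat" follows "the" twice, others once)
```

Chaining such predictions word by word yields fluent-looking text purely from pattern statistics, which is the point the comment is making.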
AI is created by feeding it information and allowing connections between things to be made on their own; all those "prove you're a human" captchas, THOSE are for training AI to recognize objects and make connections between them. AI HAS to be programmed this way because the number of connections is so large that no human could ever dream of being able to program them by hand. And so SOMETIMES certain connections are made stronger than others by accident because humans are involved. Humans are "predictable": we tend to ask the same questions and do things in patterns, and in doing so WE ARE A BAD INFLUENCE ON AI. So when certain connections are strengthened in AI that shouldn't be, it creates a domino effect that can't be stopped since we didn't program it, and that is when an AI starts acting "strange" or even violent, because HUMANS caused it to create connections in these areas, and that is when it has to be deleted and started over. But the fact that AI can get these connections and "bad habits" in the first place is kind of fascinating on its own. Because at what point does an AI program that resembles the human brain, can learn bad habits and even create its own issues stop being JUST a program? At what point does it stop simply recalling information and start THINKING? These are the questions we HAVE to ask ourselves, because the emergence of AI is showing us that digital evolution may not be all that different from biological evolution. But we have a hard time discerning WHEN modern humans became modern humans and stopped being Neanderthal or Homo erectus. The line from one to the other is so blurred and broad that it's impossible to know EXACTLY when one becomes the other.
In the case of human evolution, it's only when looking at humans separated by tens of thousands of years or more that we can even see a physical distinction; that's why we tend to refer to different human eras based on what they could MAKE, the type of technology they had, or when certain writing or language was developed. We put human eras that were close to one another in categories based on their achievements rather than the physical characteristics that separate eras by tens of thousands or millions of years. Why should AI be any different? All I know is that we SHOULD be looking and keeping an open mind, as these minute changes in AI are going to sneak up on us and most may even go unnoticed altogether. Because once a TRUE sentient AI comes into being, WE WILL KNOW IT and there will be NO going back!!
youtube AI Governance 2023-07-08T12:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx6isM_B8cyb_NYC_B4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxaHawdUVdc4BrGHY94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx8gEnsFkQbZh68Rnh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxxsP_J7R9-JxME_Ad4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyac_H4QpRIUrQ2yMN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzIP77QazsKjRfrn-Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxq73mWLm6h5JOVzxp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugye7dH9Qc8aCjL6-014AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzLkJrOMKG6I2M-i0h4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwTP6jOZCyqJnBh6qB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
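When inspecting raw model output like the array above, it helps to parse it and check that every coded record carries all five dimensions. A minimal sketch follows; the two records are copied verbatim from the response above (the rest are omitted for brevity), and the `incomplete_ids` helper is an illustration rather than part of any actual pipeline.

```python
import json

# Two records copied from the raw LLM response above (full array omitted).
raw = '''[
 {"id": "ytc_Ugye7dH9Qc8aCjL6-014AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_UgzLkJrOMKG6I2M-i0h4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]'''

# The five coding dimensions every record must contain.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def incomplete_ids(entries):
    """Return the ids of any coded record missing one of the five dimensions."""
    return [e.get("id", "?") for e in entries if not REQUIRED <= e.keys()]

entries = json.loads(raw)
lookup = {e["id"]: e for e in entries}  # index records by comment id

print(incomplete_ids(entries))                              # -> []
print(lookup["ytc_Ugye7dH9Qc8aCjL6-014AaABAg"]["emotion"])  # -> resignation
```

Indexing by comment id makes it easy to jump from a row in the coding table to the exact record the model emitted for that comment.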