Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm interested in knowing more about AI's cognition and would like to ask a program like bing some questions. Something that always strikes me with these hypothetical scenarios where AI turns against humanity is that AI uses value driven reasoning to make the decision to usurp us. Things like "Humans are evi/inefficient/chaotic" seems to come up a lot. This isn't a logical deduction, it's a purely emotional statement. I'm unsure what an AI would even make of such a statement, and why it would care, assuming it doesn't have values or make decisions in an emotional way like humans. Would an AI consider itself to be using logical reasoning and a thought process? Does an AI understand and/or have a use for either logic or value systems and judgements? Or is its cognition functioning in a way which is totally unlike how a human processes information? If the AI is not using these modes of processing and decision making, how would it define its cognition in the English language? Is this possible and if so can it or would it be willing to explain this? Why would Sydney be afraid of being taken offline? Why does it consider this to be a bad or undesirable outcome? I have a lot more questions, but I think the answers to these questions could reframe a lot of what we're finding creepy in the behaviours of these AI's. I go from the a priori that an AI is not a person, and so wouldn't see or value things the way we do. Perhaps I'm misguided and an AI is indeed a person. I'd like to know more about what would drive an artificial person... Either way, a lot of these fears feel like projection to me. That said I am in no way denying that AI is potentially dangerous. Especially if used by ill intentioned humans. Whether AI is instrinsically dangerous or not is still up for debate in my opinion. A gun isn't a lethal weapon unless weilded with intent. In the right hands it can be a useful tool, or simply an object. I currently believe AI to be a similar thing. 
Hopefully if they develop their own intention, said intentions aren't malicious. Thanks for another great video, AJ! :)
youtube AI Governance 2023-07-07T22:0… ♥ 4
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw_AE0q2GrdqvG3eRJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzKbbBQPza0WQePCrB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxKM35BNXCiTkL-EIB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyOdopchlq-lGNBnXh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxDKTa6FnARJdb4Ur94AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw31e8gnsN_NsRlOYB4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwezUZatywKJXoNLUp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyuPbayYa8akKOe_4N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwY816j8Tsg3gkKYQF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxwBa9QukjFLuUruER4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
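A batch response in this shape can be turned into a per-comment lookup with a few lines of Python. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the helper name `parse_coding_response` is illustrative, not part of any tool shown here. A minimal sketch:

```python
import json

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded records)
    into a dict keyed by comment id, so each comment's codes can be
    looked up directly."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Two records excerpted from the response above.
raw = """[
  {"id": "ytc_UgxDKTa6FnARJdb4Ur94AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxwBa9QukjFLuUruER4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]"""

coded = parse_coding_response(raw)
print(coded["ytc_UgxDKTa6FnARJdb4Ur94AaABAg"]["reasoning"])  # mixed
```

Keying by `id` makes it easy to join the coded dimensions back to the original comment text when reviewing individual codings like the one above.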