Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm an hour and 14 minutes into this and extremely frustrated, because it seems like neither of them understands what's actually going on. They're very intelligently missing the trap they've fallen into. In particular: over and over again, in all sorts of creative ways, Eliezer is saying we ought to be afraid of AIs because there is a high probability that they will destroy everything we value. And Wolfram, in a very intelligent and creative way, is saying: I don't see how you can say we ought to do that; prove that we ought to do that from what is. Essentially, it is just the is-ought problem, and Wolfram keeps trying to force Eliezer to justify his oughts from first principles of what is. There is a simple way out of this. Eliezer needs to ask Wolfram: what would he hate to see destroyed, and what would he hate to see created? Of those things, which would most other humans agree they would hate to see destroyed or created? Okay, now we have a list of things that we're pretty sure we want and don't want. Here is why I think AI will destroy what we want and may create what we don't want. Since most humans are concerned about this, we should be concerned about bringing superintelligent AI into existence. This is why I think superintelligent AI will not preserve our values. Wolfram, without realizing it, seems to be making the nihilist moral argument and asking Eliezer to refute it from logic and math. Logic and math don't say anything about moral values. This is a policy argument about how we as humans want our world to be in the not-so-distant future. Wolfram's argument is essentially like saying, in the middle of the Holocaust: objectively prove to me from first logical and mathematical principles that the Holocaust is a bad thing, otherwise I can't say whether or not it's bad, and therefore we shouldn't do anything about it.
YouTube · AI Governance · 2024-11-12T20:3… · ♥ 14
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
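The four coded dimensions appear to draw from a closed vocabulary. Below is a minimal validation sketch in Python; the ALLOWED sets are inferred only from the values observed in the raw response further down, not from the pipeline's actual codebook, which may define more values.

```python
# Allowed values per coding dimension. NOTE: these sets are inferred from
# the values observed in this one batch; the real codebook may define
# additional values (this is an assumption, not the pipeline's schema).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "industry_self"},
    "emotion": {"unclear", "indifference", "fear", "mixed", "outrage",
                "approval", "frustration"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimension names whose coded value is outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]
```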
Raw LLM Response
[ {"id":"ytc_UgzZjk-dccsmE4r1CbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzN-dfsvH0_3hTj87Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyBrsbkOUjTW8bZHgt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy2Qq17d-rNew-K7hJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzmT97vvYHntMl9Y5d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxJEWyj3-VMGPf5UR14AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugw75_NQVGIiLn5jb9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwIcUBDH-ncdjtaAw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxkP3JTDL_ibbhpF8V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"frustration"}, {"id":"ytc_Ugz4NAtgI9yTWXsehN94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]