Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am a big fan of Yudkowsky but thought this interview was quite bad. The interview should have either made it clear where Ezra and Yudkowsky disagree or it should have allowed Yudkowsky to make his strongest case. I feel that neither of those was achieved. I think the reason is that Ezra doesn't understand where the disagreement is and asks the wrong questions. An example is that he asks about alignment approaches like making the AI obedient/corrigible or making it chill. I think very few people agree with Yudkowsky on most things but disagree on these specific approaches being the solution. Another example is asking Yudkowsky to describe Reinforcement Learning. Then Ezra wanted to move off the evolution analogy even though it had become clear that he didn't understand the argument (40:54-44:34). I think it would have been better to focus on the more fundamental questions like how Yudkowsky views intelligence, goal optimization, moral realism and the analogies to chimpanzees and natural selection, as I believe this is where the disagreements lie.
youtube · AI Governance · 2025-10-15T13:5… · ♥ 18
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwndzV0b_pd872B6MN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwuUq5LPr_Cy97_pHJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugx5KNSzBlJHOuDjvSR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugx5BJ1c2qU5Enjt9sd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgycAg8VmvNgc8Y73X54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugzof8gms3CEewnikQx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzzZ9Q9QQSdcHk3Ycx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugzl3OaI9Eh4nLbYz7J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzpFbTHOANObjowEVh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy0aa2SaObu4P3est54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"} ]