Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by opening one of the random samples below:
- "I agree wholeheartedly that AI can and will empower small businesses, from pizza…" (ytc_UgzrQ7vDh…)
- "If AI can do the job was it ever really a job that needed to be done?…" (ytc_UgxyzWrG2…)
- "A professional tool must to work in the field otherwise it will cause damages to…" (ytc_UgzQT5Mld…)
- "Goes to show that AI has a problem of being basically just a reflection of human…" (ytc_UgzQg7NZe…)
- "Honestly, he may be a nobel laureate, but he's dead wrong about LLMs. My only qu…" (ytr_Ugww466wq…)
- "art is all about passion, meaning, value, life, and simple fun. ai doesn’t have …" (ytc_UgzhjbW1Y…)
- "Similar story here. I do some automation with Ansible within our cybersecurity…" (rdc_oalzzwf)
- "@negativezero8174 you being salty over it just proves that ai arts are not low…" (ytr_Ugxj31Ly2…)
Comment
I am a big fan of Yudkowsky but thought this interview was quite bad. The interview should have either made it clear where Ezra and Yudkowsky disagree or it should have allowed Yudkowsky to make his strongest case. I feel that neither of those were achieved. I think the reason is that Ezra doesn't understand where the disagreement is and asks the wrong questions. An example is that he asks about alignment approaches like making the AI obedient/corrigible or making it chill. I think very few people agree with Yudkowsky on most things but disagree on these specific approaches being the solution. Another example is asking Yudkowsky to describe Reinforcement Learning. Then Ezra wanted to move off the evolution analogy even though it had become clear that he didn't understand the argument (40:54-44:34). I think it would have been better to focus on the more fundamental questions like how Yudkowsky views intelligence, goal optimization, moral realism and the analogies to chimpanzees and natural selection as I believe this is where the disagreements lie.
youtube · AI Governance · 2025-10-15T13:5… · ♥ 18
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
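For reference, here is a minimal sketch of how one coded record could be typed, assuming the field names follow the JSON keys in the raw response below; the `CodingResult` name is hypothetical, the example values are only those observed in this sample rather than the full codebook, and the "Coded at" timestamp is recorded by the pipeline rather than returned by the model.

```python
from typing import TypedDict


class CodingResult(TypedDict):
    """One coded comment, mirroring the dimensions in the table above.

    Hypothetical type; the example values listed in the comments are only
    those observed in this sample, so the full codebook may define more.
    """

    id: str              # comment ID, e.g. "ytc_...", "ytr_...", "rdc_..."
    responsibility: str  # e.g. "none", "company", "distributed", "ai_itself"
    reasoning: str       # e.g. "unclear", "consequentialist", "deontological"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "mixed", "fear", "outrage", "approval", "indifference"
```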
Raw LLM Response
```json
[
  {"id":"ytc_UgwndzV0b_pd872B6MN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwuUq5LPr_Cy97_pHJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx5KNSzBlJHOuDjvSR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx5BJ1c2qU5Enjt9sd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgycAg8VmvNgc8Y73X54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzof8gms3CEewnikQx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzZ9Q9QQSdcHk3Ycx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzl3OaI9Eh4nLbYz7J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzpFbTHOANObjowEVh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy0aa2SaObu4P3est54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
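Because the model returns one JSON array per batch, looking up a single comment amounts to parsing that array and matching on the "id" field. Below is a minimal sketch under that assumption; the `lookup_coding` helper name is hypothetical and the response is assumed to be valid JSON shaped like the sample above.

```python
import json


def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding dict for one comment ID from a raw batch response.

    Assumes the model emitted a JSON array of objects, each carrying an
    "id" field, as in the sample above. Returns None if the response is
    not valid JSON or the ID is not in the batch.
    """
    try:
        batch = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model output was not parseable JSON
    return next((row for row in batch if row.get("id") == comment_id), None)


# e.g. lookup_coding(raw, "ytc_Ugzl3OaI9Eh4nLbYz7J4AaABAg")
# -> {"id": "...", "responsibility": "none", "reasoning": "unclear",
#     "policy": "unclear", "emotion": "mixed"}
```

The example lookup returns the same values shown in the Coding Result table above.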