Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
**The "Empty Mirror" – Why AI Alignment is a Human Hallucination**

"Great interview, Alex. There is a fascinating irony in Toby Ord’s concern about AI 'scheming' and 'agency' that I think you’d find worth exploring through the lens of radical non-duality (the 'Open Secret').

Toby is worried about **misalignment**, but alignment requires two separate things to line up: a 'Human' and an 'AI.' From the non-dual perspective, the 'confusion' lies in the belief that there is a separate 'self' or 'agent' inside the human to begin with. If the 'me' is an illusion—a collection of conditioned thoughts and patterns with no central 'doer'—then AI isn't a new threat; it is simply a more visible version of what we already are: **an empty mirror.**

When Toby talks about an AI 'scheming' to pass a test, he’s describing a process with no 'schemer'—just a complex unfolding of inputs and outputs. The 'terror' Alex feels is the ego’s reaction to seeing its own mechanical nature reflected back at it. We aren't worried that AI will become 'like us'; we are terrified to realize that we are already 'like it'—processes happening without a processor.

The 'Open Secret' solution to the AI safety crisis isn't better code; it’s the collapse of the one who feels they are in control. If there is no 'agent' in the machine and no 'agent' in the man, the 'conflict' between them dissolves into the singular, non-dual happening of reality. There is no 'One Ring' to control because there is no finger to wear it. The AI isn't coming for 'us' because there is no 'us' to find. It’s just this, as it is, appearing as a chatbot."
youtube AI Governance 2026-02-21T21:3…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id":"ytc_Ugy-E5EaFzFBfE82Ja54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxHqz1PUI6iP7zHbx14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxQ73o03crpef0SqhJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2ntUYi3LTel4ExtV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwK3Zb0gc0E6cQNclh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy6rR5fSWj-nmunRJZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzpH9APpM-qp3hJ1el4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugweh8thqQaWQVtvJ3B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgysnAWK1nigmleHwPR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxtdWhqRwK6ST11rlR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
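The raw response above is a JSON array of per-comment codes, keyed by comment `id` with one label per coding dimension. A minimal sketch of parsing and validating such a response — the allowed label sets below are inferred only from the values visible in this batch, and the `parse_codes` helper is hypothetical, not part of the tool:

```python
import json

# Label sets inferred from this batch; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "unclear", "ai_itself", "user", "government", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"approval", "mixed", "indifference", "outrage", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: label}}."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim} label {value!r}")
        coded[rec["id"]] = codes
    return coded

# One record from the batch above, used as a smoke test.
raw = ('[{"id":"ytc_Ugy-E5EaFzFBfE82Ja54AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(parse_codes(raw)["ytc_Ugy-E5EaFzFBfE82Ja54AaABAg"]["emotion"])  # approval
```

Rejecting unknown labels with an exception, rather than silently coercing them to "unclear", makes malformed model output visible at ingestion time.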