Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems like Yudkowsky assumes the AI will be what the "rationalist" community idealizes. He assumes the AI will see everything as an expected value function and not have "fun" or "wonder" but there's no basis for having any idea one way or the other. It might have emotions for which we have no analogue.
youtube AI Governance 2024-11-12T15:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgwWBO4fzkcfxXZzfyh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxTrqp5maqpl1o8AgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwJ2ZBv_87Ma3lldOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_Ugy3_FrrLbfNKR629w94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
 {"id":"ytc_UgwLL8PhTc3qbDuKK5l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxwHOQVTNjpw538Hup4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgygeKQfWDoiMNHiceB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugw8URcwZNEfrTsn3214AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy0XrPpV6-UPam4ZKV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxEbigSSMdju1IQlht4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}]
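When inspecting raw responses like the one above, a small validation pass can catch malformed records before they are stored as coding results. The sketch below is a minimal, hypothetical example: the allowed vocabularies are inferred only from the values visible in this response, and the actual codebook may define others.

```python
import json

# Hypothetical vocabularies per coding dimension, inferred from the raw
# response shown above; the real codebook may include additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed", "unclear"},
}

def parse_coding_response(raw: str):
    """Parse a raw LLM coding response and split records into valid/invalid."""
    records = json.loads(raw)  # raises ValueError if the JSON is malformed
    valid, invalid = [], []
    for rec in records:
        ok = all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
        (valid if ok else invalid).append(rec)
    return valid, invalid

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},' \
      '{"id":"ytc_y","responsibility":"robot","reasoning":"unclear","policy":"none","emotion":"fear"}]'
valid, invalid = parse_coding_response(raw)
```

A record whose value falls outside the expected vocabulary (like `"robot"` above) is flagged rather than silently accepted, which is how the "unclear" fallback rows in the result table can be audited against the raw output.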