Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is a fascinating dissonance to hear a 50-year veteran of AI express shock that his field is actually succeeding. It recalls the phenomenon of Nobel laureates who drift into incoherence later in life (like Montagnier on water memory). Did he really not believe his own research would eventually work? Sci-fi authors identified the alignment problem decades ago; it shouldn't have taken a Berkeley professor until 2013 to have this 'epiphany.' More critically, the 'catastrophe' narrative betrays a massive status quo bias. Russell worries about the loss of human purpose, yet explicitly admits that for many, this purpose currently consists of 'repetitive work in windowless boxes'. For the billions already living in economic hell (facing poverty, hunger, and meaningless drudgery), the disruption of this system isn't an existential risk; it's a necessity. You can only fear the end of the world if the current world is actually working for you.
youtube AI Governance 2025-12-04T20:0… ♥ 42
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugzh-YnBzznNZ1qWyQ14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwRQFKmdG19FO8-AD94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyW-LL40QAOwv9pVQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgI5BFRrLZ996_qmh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxPhQDdOKhYQvLluoN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwv4OkyAKs3UvA-eaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz4AjPalg3rFU87gH14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzncXg6mJHXTVaCS414AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxFxasyzxX1MBY0FJ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx9dnj66M0hUfEUgrN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
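The raw response above is a JSON array with one object per coded comment, keyed by comment `id` across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and looked up per comment, assuming the array format shown above (the function name `codes_by_id` is illustrative, not part of any real pipeline):

```python
import json

# Excerpt of a raw LLM response in the array format shown above.
raw = '''[
  {"id": "ytc_UgzncXg6mJHXTVaCS414AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "mixed"}
]'''

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_json: str) -> dict:
    """Index the array of code objects by comment id,
    defaulting any missing dimension to 'unclear'."""
    return {
        entry["id"]: {d: entry.get(d, "unclear") for d in DIMENSIONS}
        for entry in json.loads(raw_json)
    }

codes = codes_by_id(raw)
print(codes["ytc_UgzncXg6mJHXTVaCS414AaABAg"]["reasoning"])  # deontological
```

Defaulting missing keys to "unclear" mirrors the fallback value the coding scheme itself appears to use for ambiguous cases.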