Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
one thing that disturbs me about the alignment problem that y'all only kind of touched on with the discussion about the sucralose stuff (and its basically pure philosophy so i'm sure it was outside the scope of your discussion) is that in order to even begin aligning an ai with our desires, we would need to first understand them ourselves, which i don't think we're really very good at. i think sucralose, doritos, and oreos are great examples of this. our motivations in any given moment are complex and we often don't even understand them ourselves. the fact is we have lots of different goals in mind when developing foods, only some of which relate to nourishment, and importantly some of which we are often not honest to ourselves about. this is a relatively simple example but there are much more complex, and even contradictory, ones out there. that's not even to get into the ways in which we are often knowingly self-destructive. like how do you teach a thing that thinks in hard numbers and mathematical algorithms where to draw the line between the value of human life and the need to endanger ourselves for momentary pleasure, especially when every single one of us draws that line somewhere different? we still can't even systematically express our own beliefs there so how could we ever translate that to an agi? noting that we have a long history of failing to get even people to align with each other, it seems to me unlikely we would ever be able to align an agi in any way that didn't just mirror our own existing prejudices. even before that though, it seems to me there's a decent chance our motivations are too internally contradictory, contingent on perspective or experience, ill-defined, or even just ever-changing to the point that it's not even actually possible for us to align an agi at all.
Source: YouTube · Video: AI Moral Status · Posted: 2025-12-22T00:4… · ♥ 5
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxThS4ajTzdmbPhgd54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzwLfNfzsKT_cI5DrN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwaItbAkUtzbepUd554AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugwq5g8rcvOi4hrbOXJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzCNCt-ksMFts7oPRR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_Ugwjc46jO8ndMCXFn9d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwHz2BD-bcTNtTpKr14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugyv21qBbsdzbbhpW6d4AaABAg","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwNQQSE9i7l7JrZeKp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzAw-O5aJKft83lsAV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"} ]