Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why on earth would you want the single fundamental value to be *simple*?! And, similarly, why on earth would you think that Kant's theory has a single fundamental *simple* value?! Whatever autonomy is, it's certainly not simple. The Kingdom of Ends isn't simple. The categorical imperative isn't simple--consistency relations between maxims could plausibly be seen as simple, but of course Kant's view isn't that morality comes down only to consistency relations--for one thing, consistency is entirely too easy to come by. So there are consistency relations that inform some substantial notion of what autonomous action consists in. But this just drives the point home--autonomy isn't simple. It's not even true of many of the more sophisticated consequentialist theories that they have a single simple value. Take objective list theories of well-being, for instance. There's no understanding under which an objective list theory is simple, just because it is enumerated into components. Yet your reading would make a mystery of the sense in which many of these theories (like Hurka's) are monist, with well-being taking the role of the single fundamental value.

>Take Aristotle's view of eudaimonia as a quasi-fundamental value; even this involves a human life possessing a number of separate goods (wealth, honor) and such a person securing these separate goods for themselves through their skilled weighing of separate goods/bads against one another in deliberation (danger and risk versus one's own welfare versus others' welfare, etc.).

This is emphatically not Aristotle's model of moral reasoning. It's not a matter of weighing distinct domains against each other (on this point, see, for instance, Hursthouse's 'A False Doctrine of the Mean', though frankly I'm confused about why anybody would be tempted by this as a reading of Aristotle). Aristotle endorses the view that the virtues don't come one-by-one but always together in a clump--either (on the majority reading) a *unity of the vir
reddit AI Moral Status 1446518258.0 ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_cwlujss", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cwmenf2", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_oi29xg7", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_oi2hxm1", "responsibility": "government", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_oi2fjqy", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
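The raw response is a JSON array with one object per coded comment, so recovering the row that produced the coding-result table is a matter of parsing the array and indexing by `id`. Below is a minimal sketch; it assumes the batch is valid JSON and that the record id (`rdc_cwmenf2`, the one whose values match the table) is known from the pipeline.

```python
import json

# Raw LLM response exactly as captured on the inspection page.
raw = """[
 {"id":"rdc_cwlujss","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_cwmenf2","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"rdc_oi29xg7","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"rdc_oi2hxm1","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"rdc_oi2fjqy","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]"""

records = json.loads(raw)

# Index the batch by record id for direct lookup.
by_id = {r["id"]: r for r in records}

# rdc_cwmenf2 is the record whose values appear in the table above.
rec = by_id["rdc_cwmenf2"]
print(rec["reasoning"], rec["emotion"])  # deontological mixed
```

The same lookup works for any of the five ids in the batch; a missing id would raise `KeyError`, which is a reasonable failure mode when auditing individual coded comments.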