Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The AI copium is real. AI is dumb as a fucking brick, but it puts a bunch of wor…" (ytc_UgxX4t5BR…)
- "Why does no one provide any evidence to support these claims when they make thes…" (rdc_jhe4t0l)
- "he is just shilling his product, nothing more -- they know there no progress in …" (ytc_UgyIXm5iu…)
- "AI is powered by data centers that are being built across the US and other count…" (ytc_UgxcSmC9V…)
- "I think that the way the world works is because everyone has access to draw, eve…" (ytc_UgyWln_Ac…)
- "I think palletier had AI write it. Wait that means it's not real ..? right ¿…" (ytc_Ugy-R3Dwa…)
- "They actually didn't! These algorithms are almost akin to magic at the moment. F…" (ytr_UgwvmDxv-…)
- "Same as the moon landing or a robot being sent to mars, ai is nothing more than …" (ytc_Ugwu3QsSo…)
Comment
Why on earth would you want the single fundamental value to be *simple*?! And, similarly, why on earth would you think that Kant's theory has a single fundamental *simple* value?! Whatever autonomy is, it's certainly not simple. The Kingdom of Ends isn't simple. The categorical imperative isn't simple--consistency relations between maxims could plausibly be seen as simple, but of course Kant's view isn't that morality comes down only to consistency relations--for one thing, consistency is entirely too easy to come by. So there's consistency relations that inform some substantial notion of what autonomous action consists in. But this just drives the point home--autonomy isn't simple.
It's not even true of many of the more sophisticated consequentialist theories that they have a single simple value. Take objective list theories of well-being, for instance. There's no understanding under which an objective list theory is simple, just because it is enumerated into components. Yet your reading would make a mystery of the sense in which many of these theories (like Hurka's) is monist, with well-being taking the role as the single fundamental value.
>Take Aristotle's view of eudaimonia as a quasi-fundamental value, even this involves a human life possessing a number of separate goods (wealth, honor) and such a person securing these seperate goods for themselve through their skilled weighing of seperate goods/bads against one another in deliberation (danger and risk versus one's own welfare versus other's welfare, etc.).
This is emphatically not Aristotle's model of moral reasoning. It's not a matter of weighing distinct domains against each other (on this point, see, for instance, Hursthouse's 'A False Doctrine of the Mean', though frankly I'm confused about why anybody may be tempted by this as a reading of Aristotle). Aristotle endorses a view that the virtues don't come one-by-one but always together in a clump--either (in the majority reading) a *unity of the vir…
reddit · AI Moral Status · posted 1446518258.0 (Unix timestamp) · ♥ 4
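The numeric field in the metadata above is a Unix timestamp. A minimal sketch of converting it to a readable date, assuming it counts seconds since the epoch in UTC:

```python
from datetime import datetime, timezone

# Post time as stored in the metadata (Unix timestamp, seconds; UTC assumed).
posted = datetime.fromtimestamp(1446518258.0, tz=timezone.utc)
print(posted.isoformat())  # 2015-11-03T02:37:38+00:00
```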
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_cwlujss","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_cwmenf2","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_oi29xg7","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_oi2hxm1","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_oi2fjqy","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
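The raw model output is a JSON array of coded rows, so the "look up by comment ID" feature reduces to parsing the array and indexing it by `id`. A minimal sketch, assuming the response parses cleanly as JSON (the string below excerpts two rows from the response above):

```python
import json

# Raw model output: a JSON array of coded comments (excerpted).
raw = '''[
 {"id":"rdc_cwlujss","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_oi29xg7","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# Index the coded rows by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw)}

print(coded["rdc_oi29xg7"]["emotion"])  # outrage
```

In practice the model's reply may carry extra text around the array, so a production version would want to strip or validate before `json.loads`.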