Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I did consider the full syllogism before responding. I pared my response down considerably because it got pretty bulky. With that said, happy to engage with what you've got here:

> Taking actions towards goals requires I have my freedom and well-being to take those actions, so I must claim that I ought have my freedom and well-being.

This is the first leap. At least how you've framed it, we're losing the is-ought distinction in order to make the syllogistic step of "X is necessary" -> "I ought to have X".

> If I’m saying I ought have something, that’s equivalent to saying others ought not interfere with it.

Here is another leap. These are actually different claims. "I ought to have X in order to pursue my goals" is a positive statement of a conditional requirement to the pursuit of my goals. "Others ought not interfere with my access to X" is just a normative assertion that has the effect of putting a constraint on the actions of others. They are not equivalent, though you can add some predicates that enable you to construct a syllogism where the latter is a consequence of the former.

> I take external-facing ought claims to be rights.

You're injecting an axiomatic assumption here. Given what I stated above, it actually has the effect of weakening what it means to say that you have a "right" to something - it *merely* means that you think other people shouldn't interfere with your access to it. But, people can - and do - think that others shouldn't interfere with their access to lots of things, including things they don't necessarily need to have in order to pursue their goals.

> Therefore via my agency alone I must claim a right to freedom and well-being.

Not really. A car needs gasoline to drive but this doesn't make gasoline a right for the car. This seemingly facetious analogy is quite relevant to the original topic, as it argues that if the car had volition and purpose then it would in fact have a right to gasoline. So we'd bette
reddit AI Moral Status 1775154929.0 ♥ 9
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:13:13.233606 |
Raw LLM Response
```json
[
  {"id": "rdc_odxj6eu", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe294oi", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_oe7hiqs", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_ofh7acv", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_ofh8mis", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
```
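Assuming the raw model response is always a JSON array of per-comment codes like the one above, a minimal sketch of how such a batch could be parsed and tallied (the `raw` string is copied from the response; the tallying step is illustrative, not part of the original pipeline):

```python
import json
from collections import Counter

# Raw model output as shown above: a JSON array of coded records.
raw = """[
  {"id": "rdc_odxj6eu", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe294oi", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_oe7hiqs", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_ofh7acv", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_ofh8mis", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Tally the value distribution of each coding dimension across the batch.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, dict(Counter(r[dim] for r in records)))
```

Each record carries the four coded dimensions shown in the table above, keyed by a comment `id`, so the batch reduces to simple per-dimension counts.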