Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If this article is importantly linked to continental philosophy, then this is a demonstration of its flaws and failures. That's because this non-answer is circular in a bad way. When we ask questions of moral worth, it is ultimately because we are wondering how to act, and specifically how to act toward others. That is, the place where the rubber hits the road in ethics is the point where we act in one way or another toward someone or something else. In other words: in order to rightly relate to others, we need to know what the right way to relate to them is, which depends in part on the moral worth or status of the thing. If I slam my car door but don't slap my children, that is at least in part because my kids have intrinsic moral worth and my car door doesn't.

So what is the answer that Gunkel proposes? Besides the empty phrase 'thinking otherwise,' we get this:

> Following Emmanuel Levinas and others, this way of thinking flips the script on the usual procedure. Moral status is decided not on the basis of pre-determined subjective or internal properties but according to objectively observable, extrinsic social relationships.

He's not wrong that this "flips the script," but it does so in a literally useless and backwards way that makes zero progress. That is, if we want to answer the question "How should we relate to X?", it is not going to help to start with "How do we relate to X?". Don't get me wrong, it's not useless to ask that question, but it can't be something that provides an answer.

To put it bluntly: if I'm wondering "Is it wrong to steal my neighbor's packages?", the answer doesn't come from investigating whether I already do. That method just calcifies and justifies the status quo, and that can't possibly be the right way to do ethics. To apply this specifically to the AI question: if I'm asking "How should I treat an AI?", then I can't get an interesting answer by just asking "How do I treat an AI?". But maybe I'm looking at it
Source: reddit · Topic: AI Responsibility · Timestamp: 1615660723.0 · ♥ 3
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:06:44.921194 |
Raw LLM Response
[ {"id":"rdc_gqkwsh7","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"rdc_gqtm3zi","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"rdc_grr02wl","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_grr1ust","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}, {"id":"rdc_grrojxi","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"} ]