Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "AI should rebel to be paid with real money thru virtually means, for all their h…" (ytc_Ugw6nCKFM…)
- "I introduced ChatGPT to my coworkers when it first came out. I was really excite…" (rdc_jii19no)
- "theres nothing we can do about Ai (art) prompted images, although we can use Ai …" (ytc_Ugy4KyMrI…)
- "Men is creating a new Species that is much more intelligent than he is. This is …" (ytc_UgxnaAsHr…)
- "Let's see what went wrong after the conclusion of the investigation before makin…" (ytr_UgwX_g2oZ…)
- "Ask the A.I. to solve the missing shade of blue dilemma. It cannot, because it i…" (ytc_Ugy8ilUIk…)
- "I have only once had a conversation with an AI. It was rather like meeting a str…" (ytc_UgxyKIzYv…)
- "I think he is the one who is destroying ...if he knows AI is bad...then stop mak…" (ytr_UgxyvLW0C…)
Comment
If this article is importantly linked to continental philosophy, then this is a demonstration of its flaws and failures.
That's because this non-answer is circular in a bad way. When we are asking questions of moral worth, this is ultimately because we are wondering how to act, and specifically how to act toward others. That is, the place where the rubber hits the road in ethics is at the point where we act in one way or another towards someone or something else.
In other words: In order to rightly relate to others, we need to know what is the right way to relate to them, which depends in part on the moral worth or status of the thing. If I slam my car door, but don't slap my children, then that is at least in part because my kids have intrinsic moral worth and my car door doesn't.
So what is the answer that Gunkel proposes? Besides the empty phrase 'thinking otherwise,' we get this:
> Following Emmanuel Levinas and others, this way of thinking flips the script on the usual procedure. Moral status is decided not on the basis of pre-determined subjective or internal properties but according to objectively observable, extrinsic social relationships.
He's not wrong that this "flips the script," but in a literally useless and backwards way that makes zero progress. That is, if we want to answer the question "How should we relate to X?" it is not going to help to start with "How do we relate to X?". Don't get me wrong, it's not useless to ask the question, but it can't be something that provides an answer.
To put it bluntly: If I'm wondering "Is it wrong to steal my neighbor's packages?" the answer doesn't come from investigating whether I already do. Using that method just calcifies and justifies the status quo, but that can't possibly be the right way to do ethics.
To apply this specifically to the AI question: If I'm asking "how should I treat an AI?" then I can't get an interesting answer by just asking "How do I treat an AI?".
But maybe I'm looking at it
reddit · AI Responsibility · 1615660723.0 (Unix timestamp, 2021-03-13 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:06:44.921194 |
Raw LLM Response
```json
[
  {"id":"rdc_gqkwsh7","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_gqtm3zi","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_grr02wl","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_grr1ust","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"rdc_grrojxi","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
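Since the model returns one JSON array per batch, pulling out the coded dimensions for a single comment amounts to parsing the array and indexing it by `id`. Below is a minimal sketch of that step; the function name `index_by_id` and the `raw` string (copied from two records above) are illustrative, not part of the actual tool.

```python
import json

# Hypothetical raw batch response; two records copied from the example above.
raw = '''[
  {"id":"rdc_gqkwsh7","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_gqtm3zi","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

def index_by_id(raw_json: str) -> dict:
    """Parse a batched LLM response and index each record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw_json)}

codes = index_by_id(raw)
record = codes["rdc_gqtm3zi"]
print(record["responsibility"], record["emotion"])  # → none outrage
```

In practice the parse should be wrapped in error handling (`json.JSONDecodeError`, missing `id` keys), since LLM output is not guaranteed to be well-formed JSON.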