Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You appear to be conflating three separate questions here:

1. How will we recognise AI as conscious beings?
2. How will we balance AI rights and responsibilities against human rights and responsibilities?
3. How do we apply moral principles while avoiding caprice or self-interest? How do we treat AI as a moral actor without treating it like a slave?

Q1: I have no idea. Neither does Gewirth.

Q2: We already do this with each other: laws, politics, institutions and so forth. If AI becomes conscious we will have to find out HOW to apply such laws. Yes, that raises the question of basis.

Q3: Gewirth may have some utility on that last question. Asking someone not to murder another is not oppression, and Gewirth already accounts for this as far as I can tell.
Source: reddit | AI Moral Status | 1775241195.0 | ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       deontological
Policy          unclear
Emotion         indifference

Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_oe4apgm","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"rdc_oe1c25i","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"rdc_oe7mbdf","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"rdc_oe7rqc3","responsibility":"unclear","reasoning":"mixed","policy":"industry_self","emotion":"approval"},{"id":"rdc_oe1ivlw","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}]
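The Coding Result shown above is one record pulled out of this batch response. As a minimal sketch of how that lookup might work (an assumption — the actual pipeline code is not shown here; the raw string and the id `rdc_oe4apgm` are copied verbatim from the export):

```python
import json

# Raw LLM batch response, copied verbatim from the export above.
RAW = (
    '[{"id":"rdc_oe4apgm","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_oe1c25i","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"outrage"},'
    '{"id":"rdc_oe7mbdf","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_oe7rqc3","responsibility":"unclear","reasoning":"mixed",'
    '"policy":"industry_self","emotion":"approval"},'
    '{"id":"rdc_oe1ivlw","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coded_result(raw: str, comment_id: str) -> dict:
    """Parse the batch response and return the coding for one comment."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]
    # Keep only the coding dimensions, dropping the record id.
    return {dim: record[dim] for dim in DIMENSIONS}

print(coded_result(RAW, "rdc_oe4apgm"))
# → {'responsibility': 'unclear', 'reasoning': 'deontological',
#    'policy': 'unclear', 'emotion': 'indifference'}
```

The printed dict matches the Coding Result table for this comment; the "Coded at" timestamp is not part of the model output and would be stamped by the pipeline itself.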