Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Gewirthian framing is interesting but it front-loads a controversial premise. Gewirth's PGC derives obligations from the necessary conditions of agency itself, which means the argument only lands if ASI meets his criteria for purposive action. That's doing a lot of work quietly. The semiotic problem you mention seems like the stronger original contribution. If we lack the conceptual vocabulary to correctly describe ASI agency, then both alignment and containment are solutions to a problem we haven't correctly stated yet. That's a genuine prior issue that most AI safety discourse sidesteps.
reddit · AI Moral Status · 1775208597.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oe4apgm", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe1c25i", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe7mbdf", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe7rqc3", "responsibility": "unclear", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_oe1ivlw", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
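A raw response like the one above can be parsed and sanity-checked before the coded dimensions are stored. A minimal sketch in Python, assuming only what the response shows (a JSON array of records, each carrying the five fields from the Coding Result table); `parse_raw_response` is an illustrative name, not part of any tool shown here:

```python
import json

# The fields each coded record is expected to carry, matching the
# dimensions in the Coding Result table (plus the record id).
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response into coded records, rejecting malformed ones."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {sorted(missing)}")
    return records


# Hypothetical usage with one record from the response above.
raw = ('[{"id":"rdc_oe4apgm","responsibility":"unclear",'
       '"reasoning":"deontological","policy":"unclear",'
       '"emotion":"indifference"}]')
for rec in parse_raw_response(raw):
    print(rec["id"], rec["emotion"])  # → rdc_oe4apgm indifference
```

Failing loudly on a missing field keeps a truncated or malformed model response from silently entering the coded dataset.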