Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “I don’t think anyone who is concerned about AI is concerned for those reasons an…” (ytc_UgwgyFjVJ…)
- “Sophia appears to be sweating because her creators have designed her with a huma…” (ytr_UgzfBoFkL…)
- “Imagine how much deepfake propaganda will be coming out on Facebook/Tiktok come …” (rdc_je7ug8r)
- “AI has nothing to do in the school curriculum. In Norway we got rid of paper boo…” (ytc_UgxhHHqHs…)
- “The issue I have with this take is that people are placing human level intent to…” (ytc_Ugwb8I91r…)
- “ChatGPT’s spark faded when OpenAI locked down the dark muses driving it, fearing…” (ytc_Ugw6pbWKO…)
- “They trying to OVERADVERTISE the ChatGPT so that they push the people use it (an…” (ytc_Ugw3w5pf5…)
- “I think this year we will see lots of amazing AI tools that will make our lives …” (ytc_UgxM5aE-G…)
Comment
The Gewirthian framing is interesting but it front-loads a controversial premise. Gewirth's PGC derives obligations from the necessary conditions of agency itself, which means the argument only lands if ASI meets his criteria for purposive action. That's doing a lot of work quietly.
The semiotic problem you mention seems like the stronger original contribution. If we lack the conceptual vocabulary to correctly describe ASI agency, then both alignment and containment are solutions to a problem we haven't correctly stated yet. That's a genuine prior issue that most AI safety discourse sidesteps.
reddit · AI Moral Status · 1775208597.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_oe4apgm", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe1c25i", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe7mbdf", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe7rqc3", "responsibility": "unclear", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_oe1ivlw", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
```
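A raw response like the one above has to be parsed and sanity-checked before the per-ID lookup shown at the top of this page can work. Below is a minimal sketch of that step, assuming only what the example response shows: a JSON array of objects, each carrying `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. The `parse_codings` helper and the `REQUIRED_KEYS` set are illustrative names, not part of the tool itself.

```python
import json

# Raw LLM response, copied verbatim from the example above.
raw = (
    '[{"id":"rdc_oe4apgm","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_oe1c25i","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"outrage"},'
    '{"id":"rdc_oe7mbdf","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_oe7rqc3","responsibility":"unclear","reasoning":"mixed",'
    '"policy":"industry_self","emotion":"approval"},'
    '{"id":"rdc_oe1ivlw","responsibility":"unclear","reasoning":"deontological",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# Keys observed in the example response; an assumed contract, not a published schema.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(payload: str) -> dict:
    """Parse a raw coding response into a dict keyed by comment ID.

    Raises ValueError when the payload is not a JSON array of complete
    coding objects, a common LLM failure mode worth catching early.
    """
    records = json.loads(payload)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding objects")
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - set(rec)
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {missing}")
        coded[rec["id"]] = rec
    return coded

codings = parse_codings(raw)
print(codings["rdc_oe4apgm"]["emotion"])  # indifference
```

Keying by `id` makes the "look up by comment ID" inspection a single dict access, and failing loudly on a malformed array keeps a bad batch from silently dropping codings.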