Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or browse the random samples below.
- "Creating an AI would be the biggest threat to humanity. If it is able to think o…" (ytc_UggBdfltZ…)
- "This distinction between writing as a process and writing as a product matters a…" (ytc_UgxrMzQN0…)
- "Maybe but are the human rights the same for both sides of the spectrum? For some…" (rdc_h5ue9qr)
- "im really inbetween ai atm, but using it for deminishing creativity is REALLY ba…" (ytc_UgyRcaepH…)
- "Please study how AI truly works. It will never be self aware. Being self-aware w…" (ytc_UgwAGQOlh…)
- "I'm not the biggest fan of ai art or supporting the artists, but all or most of …" (ytc_UgztOGm5A…)
- "Wow, that’s some serious love for AI chatbots! 🤖❤️ Some countries are just more …" (ytr_UgxoCeJrF…)
- "A.I. will close down a few coffee chains and burger houses. Then the free world …" (ytc_UgxS0ByWh…)
Comment
The focus here is entirely on the wrong thing. Giving someone a sexually explicit photo/video of yourself is a distinct act from publicizing a sexually explicit photo/video of someone else without their consent. Therefore, we can criminalize the latter with no regard to the former. Easy. If you have a problem with a "sexually explicit photo/video" standard being hard to formulate, that's what we have courts for.
There's something of a strawman regarding whether or not the parties are in a relationship. That's not relevant info.
There are a couple other problems in the OP too:
- porn-positive feminism being based on consent (it's conflicted because of the typically exploitative relationships in the sex industries)
- acquisition of explicit media having any relevance (stealing is a separate crime)
**ON THE OTHER HAND**
Excellent discussion on objectification, and I agree that automatically granting intellectual property rights is unworkable and stupid (obviously).
Source: reddit · AI Moral Status · Posted: 1422653572.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_co5njjx", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_co6b6kc", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_co6ap1y", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_co652nr", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_co6djty", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
```
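The raw response is a JSON array with one object per comment, each carrying the five coded dimensions shown in the table above. A minimal sketch of how a lookup-by-ID over such a response might work (the field names come from the output above; the parsing code itself is illustrative, not the tool's actual implementation):

```python
import json

# Raw model output: a JSON array of per-comment codes (copied from above).
raw = """[
  {"id": "rdc_co5njjx", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_co6b6kc", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_co6ap1y", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_co652nr", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_co6djty", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the batch by comment ID so a single comment's codes can be looked up.
codes = {row["id"]: row for row in json.loads(raw)}

# The comment shown above (rdc_co652nr) was coded policy=regulate, emotion=approval.
print(codes["rdc_co652nr"]["policy"])   # regulate
print(codes["rdc_co652nr"]["emotion"])  # approval
```

One design note: keying on `id` lets the coding result for any sampled comment be retrieved without rescanning the batch, which matches the lookup-by-ID behavior described at the top of the page.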