Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From the article:

> Some site owners think it’s a privilege people will pay for, and they are racing to build custom AI models that — unlike the sanitized content on OpenAI’s video engine Sora — draw on a vast repository of porn images and videos.
>
> This vision for the industry’s future raises a host of difficult questions: How do you compensate performers whose likenesses are used to create AI content? Will consumers like Maupin be excited by AI porn at all?
>
> But the trickiest question may be how to prevent abuse. AI generators have technological boundaries, but they don’t have morals, and it’s relatively easy for users to trick them into creating content that depicts violence, rape, sex with children or a celebrity — or even a crush from work who never consented to appear. In some cases, the engines themselves are trained on porn images whose subjects didn’t explicitly agree to the new use. Currently, no federal laws protect the victims of nonconsensual deepfakes.
reddit · AI Governance · 1708901783.0 · ♥ 90
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_ks75ui1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_ks7knam","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_ks5qh07","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"rdc_ks6lqf8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_ks4x4y8","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"}]
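The raw response is a plain JSON array with one record per comment, so matching a record back to a coded comment is a lookup by `id`. A minimal sketch in Python, using the response above (the id `rdc_ks4x4y8` is the record whose values match the Coding Result table for this comment):

```python
import json

# Raw LLM response, verbatim from above: one JSON object per coded comment.
raw = '''[
{"id":"rdc_ks75ui1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_ks7knam","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_ks5qh07","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"rdc_ks6lqf8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_ks4x4y8","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"}
]'''

records = json.loads(raw)

# Index the records by comment id for direct lookup.
by_id = {r["id"]: r for r in records}

coded = by_id["rdc_ks4x4y8"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → company deontological liability unclear
```

Indexing by id (rather than relying on array order) is what lets the inspection page pair each comment with its dimensions even if the model returns records out of order.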