Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>But the trickiest question may be how to prevent abuse*. AI generators have technological boundaries, but they don’t have morals, and it’s relatively easy for users to trick them into creating content that depicts [list of things]

Fiction. The word you're looking for is fiction, which is not reality and no person is actually being harmed or affected in any way by anything someone else imagines and creates images of. By "prevent" you mean "censor", which restricts the right of others to free expression. This must never be allowed.
reddit · AI Governance · 1708927316.0 · ♥ 10
Coding Result
Responsibility: user
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ks69yba", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ks6kf7e", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ks6sw0y", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_ks6was8", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ks6xnni", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]
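The model returns one JSON object per comment id, so each comment's codes can be recovered by parsing the array and indexing on `id`. A minimal sketch in Python of how such a raw response might be parsed and sanity-checked; the `raw` string is copied verbatim from the response above, and the expected key set is inferred from that response, not from any published schema:

```python
import json

# Raw LLM response, copied verbatim from above.
raw = """[
  {"id": "rdc_ks69yba", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ks6kf7e", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ks6sw0y", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_ks6was8", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ks6xnni", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]"""

# Key set inferred from the response itself (an assumption, not a documented schema).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = EXPECTED_KEYS - rec.keys()
    extra = rec.keys() - EXPECTED_KEYS
    if missing or extra:
        raise ValueError(f"malformed record {rec.get('id')}: missing={missing}, extra={extra}")

# Index by comment id so one comment's codes can be looked up directly.
by_id = {rec["id"]: rec for rec in records}
print(by_id["rdc_ks6kf7e"])  # the codes shown in the Coding Result above
```

Validating the key set before indexing catches truncated or malformed model output early, which matters when the coding run covers many comments in one batch.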