Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> If you're one of the billions of people who have posted pictures of themselves on social media over the past decade, it may be time to rethink that behavior. New AI image-generation technology allows anyone to save a handful of photos (or video frames) of you, then train AI to create realistic fake photos that show you doing embarrassing or illegal things. Not everyone may be at risk, but everyone should know about it.
>
> Photographs have always been subject to falsifications—first in darkrooms with scissors and paste and then via Adobe Photoshop through pixels. But it took a great deal of skill to pull off convincingly. Today, creating convincing photorealistic fakes has become almost trivial.
>
> Once an AI model learns how to render someone, their image becomes a software plaything. The AI can create images of them in infinite quantities. And the AI model can be shared, allowing other people to create images of that person as well.
>
> ...
>
> By some counts, over 4 billion people use social media worldwide. If any of them have uploaded a handful of public photos online, they are susceptible to this kind of attack from a sufficiently motivated person. Whether it will actually happen or not is wildly variable from person to person, but everyone should know that this is possible from now on.
>
> We've only shown how a man could potentially be compromised by this image-synthesis technology, but the effect may be worse for women. Once a woman's face or body is trained into the image set, her identity can be trivially inserted into pornographic imagery. This is due to the large quantity of sexualized images found in commonly used AI training data sets (in other words, the AI knows how to generate those very well). Our cultural biases toward the sexualized depiction of women online have taught these AI image generators to frequently sexualize their output by default.
>
> To deal with some of these ethical issues, Stability AI recently
reddit · AI Harm Incident · 1670619021.0 · ♥ 13
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_izmub9o", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_izks94k", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_izld4i1", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_izmka4h", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_izn607s", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]
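The raw response is a JSON array with one coding record per comment in the batch; the Coding Result table above reflects the record for this comment (`rdc_izmub9o`). A minimal sketch of how such a batch response might be parsed and matched back to a single comment id — the field names and ids come from the response above, but the helper function itself is hypothetical, not part of the actual pipeline:

```python
import json

# Excerpt of a raw batch response: one coding record per comment id.
raw = '''[
  {"id": "rdc_izmub9o", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_izld4i1", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]'''

def coding_for(comment_id: str, response_text: str) -> dict:
    """Parse a batch response and return the record coded for one comment."""
    records = json.loads(response_text)
    by_id = {r["id"]: r for r in records}  # index records by comment id
    return by_id[comment_id]

record = coding_for("rdc_izmub9o", raw)
print(record["responsibility"], record["emotion"])  # none fear
```

In this sketch a missing or duplicate id would surface as a `KeyError` or a silently overwritten entry; a production parser would presumably validate that the response contains exactly one record per expected comment id.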