Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
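Outside the UI, the same lookup can be reproduced against an export of the coded records. A minimal sketch in Python, assuming (hypothetically) that the raw responses are exported as a flat JSON array of objects with an `id` field in a file named `raw_llm_responses.json`; neither the file name nor the layout is confirmed by the tool itself:

```python
import json

def lookup_raw_response(comment_id: str, path: str = "raw_llm_responses.json"):
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of objects shaped like the
    "Raw LLM Response" example at the bottom of this page, each with
    an "id" field. That layout is an assumption, not a documented API.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)

# e.g. lookup_raw_response("rdc_izmub9o")
# -> {"id": "rdc_izmub9o", "responsibility": "none", ...}
```

The returned record carries the same four dimensions shown in the Coding Result table further down.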
Random samples — click to inspect:

- ytc_UgwzxLCQM…: "The test isn't to successfully click the box, it's how the cursor moves to the b…"
- ytc_Ugyb77qYl…: "Yea i think the robophobia content is cringe and weird but the real problem is t…"
- ytc_UgwJuXWV0…: "This video breaks down algorithms in such a clear way! 📊 Really helped me see ho…"
- ytr_UgxsceN4I…: "I love AI! As Americans become dumber and dumber, AI is our only hope for compet…"
- ytr_UgxqkuEhz…: "@Landgraf43 yes my point exactly. 'GPT-5' is analogous to whatever the RLHF + DL…"
- ytc_UgxPKy2gN…: "Damn, I was looking through your socials to find the nightshaded image so I coul…"
- ytc_Ugy4gCGZC…: "Well it's a bit like how we grew up. We had 3 recesses, Canadian fitness's, heal…"
- ytr_UgxuKHNgR…: "@Annellos im a guy and my friend made a deepfake of me having sex with men as a …"
Comment
>If you're one of the billions of people who have posted pictures of themselves on social media over the past decade, it may be time to rethink that behavior. New AI image-generation technology allows anyone to save a handful of photos (or video frames) of you, then train AI to create realistic fake photos that show you doing embarrassing or illegal things. Not everyone may be at risk, but everyone should know about it.
>
>Photographs have always been subject to falsifications—first in darkrooms with scissors and paste and then via Adobe Photoshop through pixels. But it took a great deal of skill to pull off convincingly. Today, creating convincing photorealistic fakes has become almost trivial.
>
>Once an AI model learns how to render someone, their image becomes a software plaything. The AI can create images of them in infinite quantities. And the AI model can be shared, allowing other people to create images of that person as well.
>
>...
>
>By some counts, over 4 billion people use social media worldwide. If any of them have uploaded a handful of public photos online, they are susceptible to this kind of attack from a sufficiently motivated person. Whether it will actually happen or not is wildly variable from person to person, but everyone should know that this is possible from now on.
>
>We've only shown how a man could potentially be compromised by this image-synthesis technology, but the effect may be worse for women. Once a woman's face or body is trained into the image set, her identity can be trivially inserted into pornographic imagery. This is due to the large quantity of sexualized images found in commonly used AI training data sets (in other words, the AI knows how to generate those very well). Our cultural biases toward the sexualized depiction of women online have taught these AI image generators to frequently sexualize their output by default.
>
>To deal with some of these ethical issues, Stability AI recently
Source: reddit · Tag: AI Harm Incident · Posted: 1670619021.0 (Unix epoch; 2022-12-09 UTC) · Score: 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_izmub9o","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_izks94k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_izld4i1","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_izmka4h","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_izn607s","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
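Because each model call returns a whole batch like this, a downstream pipeline would normally validate every record before it reaches a Coding Result table. A sketch of such a check, with the allowed labels inferred solely from the values visible on this page (an assumption; the project's actual codebook is not shown here):

```python
import json

# Allowed labels per dimension, inferred only from the values visible
# in this sample; the real codebook may define more.
SCHEMA = {
    "responsibility": {"none", "company", "user"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "industry_self", "regulate", "liability"},
    "emotion": {"fear", "approval", "outrage", "indifference"},
}

def validate_batch(raw: str) -> list:
    """Parse one raw model response and drop malformed records."""
    valid = []
    for rec in json.loads(raw):
        missing = ({"id"} | set(SCHEMA)) - set(rec)
        bad = {k for k in SCHEMA if rec.get(k) not in SCHEMA[k]}
        if missing or bad:
            print(f"skipping {rec.get('id', '?')}: missing={missing} bad={bad}")
            continue
        valid.append(rec)
    return valid
```

Records that fail the check are logged and skipped rather than silently coerced, so a drifting model output format surfaces immediately instead of corrupting the coded dataset.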