Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Imagine having a boss that never needs breaks or wants a raise. Sounds like a dr…" — ytc_Ugw7kJBt3…
- "People, especially young people, have been coming up with crazy ideas with amazi…" — ytc_UgxpIxEJy…
- "We should he championing the advent of AI for exactly this reason - that it will…" — ytc_UgwGqZaP9…
- "ChatGPT is now dead. Gemini is way better Gemini is cheaper Gemini is way more…" — rdc_nsf7imf
- "I hate AI art but for a completely different reason I remember a few months bac…" — ytc_UgyhgHNXo…
- "@PrcuFlakyWell, as someone who seeks enjoyment out of the end visual, you still…" — ytr_Ugy0tlRyR…
- "if i ever post any of my art will gaze and nightshade it because fuck ai…" — ytc_UgwhEt5sO…
- "Does ChatGPT actually use user chats as data for training? Because that feels li…" — ytc_UgwvQLz9W…
Comment

> This is already an actively researched area to the point where GANs exist as a popular training method for AI, as someone else mentioned. The real issue is that it’s not going to be cheap to verify content compared to how easy it is to produce fake content, and that it’s a constant race between the two sides.

Source: reddit · Topic: AI Harm Incident · Posted: 1670622617.0 (2022-12-09 UTC) · ♥ 17
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_izmyyis","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_izkuu2e","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_izkwhsh","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_izl1dfg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_izlqym7","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
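A batch response like the one above has to be parsed and checked before the codes land in the table shown earlier. The following is a minimal sketch of that step, assuming the four dimensions shown in the Coding Result table; the allowed value sets below are inferred only from the values visible on this page (the real codebook likely permits more), and `parse_llm_response` is a hypothetical helper name, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the coded rows visible above.
# ASSUMPTION: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "resignation", "indifference"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only rows whose
    values for every coded dimension are in the allowed sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = (
    '[{"id":"rdc_izmyyis","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"},'
    '{"id":"rdc_bad","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"sarcasm"}]'
)
valid = parse_llm_response(raw)
print([row["id"] for row in valid])  # the row with an unknown emotion is dropped
```

Dropping (rather than repairing) rows with out-of-vocabulary values keeps the coded dataset clean and makes model formatting failures visible for re-prompting.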