Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The difference is that digital art is a medium that still requires some degree o…" (ytc_UgwYKGA-0…)
- "Give the one most crucial decision that could potentially kill us all to an AI ?…" (ytc_Ugyx3znI4…)
- "I sell ai art and I make money / I can't draw but now I can make money out of it / …" (ytr_UgyM4H-p1…)
- "@Logan_Japan intelligence is subjective. I’ve seen truck drivers that had engine…" (ytr_UgyQyG4Q-…)
- "@MisterMyagi: Thank you, appreciate you! Your comment made me laugh harder than …" (ytr_UgwSbZsIn…)
- "AI coding is actually great when you understand and work within its limits. The …" (ytc_UgyFay6Mn…)
- "As someone who used AI image generator before I found out it uses dozens of artw…" (ytr_Ugx_b6NP9…)
- "Have you tried Google Antigravity with Gemini 3.1 pro? Works well for me. Coded …" (ytc_UgyMBdqKm…)
Comment
Beg pardon, but it is related to the article very much. The majority of the article, including the part I specifically quoted, is about the decision to use the biased algorithms. The facial recognition stuff comes at the end as an example to support the main premise, but many in these comments are focusing on that so they can say "It's just a technical issue."
Let me put it like this: Say a casino uses dice that, through a manufacturing defect, come up snake eyes far more often than is fair chance. If the casino says, "Hey, there is no bias against our customers. The dice have a defect but they don't have anything against gamblers. They don't think or hold opinions, it's just a technical problem. *It's literally just because the weight is off center.*" we would know the casino is bullshitting because the bias conveniently works in their favor. When people say the dice are biased against gamblers and in favor of the casino, this is what they mean; the tool is neutral and doesn't **intend** bias but the casino's decision to go ahead and use them is **very much not neutral**. The casino may not have set out to buy biased dice, but what difference does it make when the final effect is exactly as if they had?
That is what Sharkey says in the article:
> “There should be a moratorium on all algorithms that impact on people’s lives. Why? Because they are not working and have been shown to be biased across the board. [...] Until they find that solution, what I would like to see is large-scale pharmaceutical-style testing.”
These algorithms and systems have the **effect** of racial bias, and the *racism* is in the decision to go ahead and use these systems anyway instead of extensively testing them to make sure they are unbiased.
Source: reddit · AI Harm Incident · 1576274055.0 (Unix epoch, 2019-12-13) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_faowazl","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_falccf9","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"rdc_f1tdx41","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"rdc_f1whtst","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"rdc_f1tzegi","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
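The "look up by comment ID" view above can be reproduced directly from such a raw batch response: the model returns a JSON array of records, one per comment, keyed by `id`. A minimal sketch (the field names come from the JSON above; the `lookup` helper itself is hypothetical, not part of the tool):

```python
import json

# Raw batch response, copied verbatim from the dump above.
RAW_RESPONSE = """[
{"id":"rdc_faowazl","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_falccf9","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"rdc_f1tdx41","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"rdc_f1whtst","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"rdc_f1tzegi","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]"""

def lookup(raw: str, comment_id: str) -> dict:
    """Return the coding record for one comment ID from a raw batch response."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}  # index records by comment ID
    return by_id[comment_id]

record = lookup(RAW_RESPONSE, "rdc_faowazl")
print(record["emotion"])  # -> indifference
```

The first record here matches the Coding Result table above (responsibility/reasoning/policy all `unclear`, emotion `indifference`), which is what the ID lookup is meant to let a reader verify.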