Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples below.

Random samples:

- "there's a bigger problem, of which AI is a component: corporate consolidation ha…" (ytc_UgwhpVyCD…)
- "If an OC was made by AI (or by a slop-drone/AI bro), destroy the AI counterpart …" (ytc_UgyQ34MFH…)
- "Biggest tech scam ever. Bigger than Segway.. Curved Monitors.. 3DTV's.. Facebook…" (ytc_UgxXrh5nf…)
- "Legal precedent already says that if you trick a monkey into taking a picture, y…" (ytr_Ugy50bfa8…)
- "AI seems likely to become the biggest disruptor we’ve seen since the Internet it…" (rdc_ktwj5y7)
- "I have a friend who did translations for various companies for many years (maybe…" (rdc_kt7as8y)
- "How will AI do electrical work? I don't see AI bending pipe and installing it. P…" (ytc_Ugyo0P9JL…)
- "The fact that AI wants to be consulted before it's experimented on, is proof eno…" (ytc_UgxVuKOmf…)
Comment
>Gender bias will emerge in a well-constructed algorithm if gender correlates with performance.
I think the critical factor here is how you define "performance." In this case, at least one article I read stated that successful "performance" just meant getting hired by Amazon. Since human managers preferred male candidates, the machine also learned to prefer male candidates. You could also define "performance" by how long they stayed with the company, how quickly they got promoted, or by their quarterly evals. Every single one of those performance measures would be tainted by gender bias in almost any STEM field. Studies show that the same resume and accomplishments are valued less if they are attached to a woman (by about 20% in the study I remember). Leadership traits like assertiveness are rewarded in men and punished in women. In a pool of highly educated and accomplished candidates, subjective factors will always be the deal-breaker, and those subjective factors tend to be biased against women in most STEM fields. Trying to use an AI model to find the candidate who will perform the "best" will only propagate the problem so long as performance measures undervalue women (and other minorities in a field). Untangling that knot is a far more complex task than most people acknowledge, and I don't think computer engineers alone are going to be able to do it.
> Sure, it is possible that the training data was invalid, but there's no way in hell that Amazon is employing amateur modelers who can't obtain a valid dataset and prepare it properly.
This is actually a really important point, because it implies one of two things is true. 1) The Amazon modelers realized that there was an inherent bias against women at the company and developed an AI that would model this and thus prove there was a problem within the company. 2) The otherwise well-trained Amazon modelers did *not* realize that women faced systemic discrimination in the field of engineering and ther…
reddit · Cross-Cultural · 2018-10-11 (Unix timestamp 1539224133) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_e7jm1ke", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_e7jgcg1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_e7jcw1i", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_e7jva6y", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_e7jcktr", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
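The lookup above can be sketched in a few lines: the raw response is a JSON array of per-comment coding objects, and a comment whose ID is absent from the array (as here, where the displayed comment's ID does not appear in the response) falls back to "unclear" on every dimension. This is a minimal illustration, not the tool's actual parser; the `code_for` helper and the two-row `raw` sample are assumptions for the example.

```python
import json

# Hypothetical two-row excerpt in the same shape as a raw LLM response
# (a JSON array of per-comment coding objects).
raw = '''[
  {"id": "rdc_e7jm1ke", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_e7jgcg1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(comment_id: str, raw_response: str) -> dict:
    """Return the coded dimensions for one comment, 'unclear' when missing."""
    by_id = {row["id"]: row for row in json.loads(raw_response)}
    row = by_id.get(comment_id, {})
    return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}

print(code_for("rdc_e7jgcg1", raw))  # coded values from the response
print(code_for("rdc_e7jm1kX", raw))  # absent ID: every dimension 'unclear'
```

A dict keyed by ID keeps the lookup O(1) per comment, and the `"unclear"` default mirrors how the table renders a comment the model never coded.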