Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> A year later, however, the engineers reportedly noticed something troubling about their engine – it didn’t like women. This was apparently because the AI combed through predominantly male résumés **submitted to Amazon over a 10-year period** to accrue data about whom to hire.
>
> Consequently, the AI concluded that men were preferable. It reportedly downgraded résumés containing the words “women’s” and filtered out candidates who had attended two women-only colleges. Ouch. That must've been a bitter pill to swallow after such a long time in development.
reddit · Cross-Cultural · posted 2018-10-11 (timestamp 1539253396) · ♥ 3
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_e7keifq", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_e7keda1", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_e7koo9j", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_e7irmvj", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_e7jqq89", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
```
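The raw response is a JSON array with one record per comment in the batch, keyed by comment id across the four coding dimensions. A minimal Python sketch for parsing and validating such a response — the allowed label sets below are inferred from the values visible on this page and are an assumption, not the study's authoritative codebook:

```python
import json

# Allowed values per coding dimension (inferred from the records shown
# above; the exact label sets are an assumption for illustration).
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"liability", "industry_self", "none"},
    "emotion": {"outrage", "indifference", "resignation"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index valid records by comment id."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}: {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"rdc_e7keifq","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
codes = parse_response(raw)
# codes["rdc_e7keifq"] now holds the four coded dimensions for that comment
```

Validating against a fixed label set catches the common failure mode where the model invents an off-schema label, so bad records fail loudly instead of silently entering the dataset.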