Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This video is extremely one sided. The ProPublica article completely lied. At no point did they determine that the algorithm was biased. Their analysis began and ended at "the algorithm predicts that black people recommit crimes more often than white people", and concluded that this represents bias. But it doesn't. The R-script ProPublica used (you can run it yourself) indicates no bias; i.e. it doesn't over or under predict. Black people, in their own dataset, do, in fact, recommit crimes more often than white people and Compas predicts this in a well-calibrated way. ProPublica's own analysis says this, but they lied when writing the article and hoped that no one actually looked at their numbers. You can see the article "How to lie without statistics - ProPublica edition" by Chris Stucchio if you want more details on this.
Source: youtube · 2022-07-26T01:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxGLjPhbv7L5DIQvJB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyUT2ve0yW5k8YrR654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-aveiVnwA4amrust4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzRvpAAnZnlQG7lVsp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyQglA8BqAtm21JaeZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyx54w0jVvm3e_kP8p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwHUzQ-UNWEXF-Z6yN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx5JfyOgqMmDf4ya8J4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugz6kNrE6viSmd0_jax4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxjf8jwbfTdZ3IiWy54AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
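As the response above shows, the model returns one JSON array per batch, with one object per comment and four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of looking up a single comment's coding from such a response might look like the following; the function name `code_lookup` and the inlined sample response are illustrative, not part of the actual pipeline.

```python
import json

# Illustrative raw LLM response: a JSON array of coded comments in the
# format shown above (IDs and values taken from the example response).
raw_response = """[
  {"id": "ytc_Ugy-aveiVnwA4amrust4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzRvpAAnZnlQG7lVsp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]"""

def code_lookup(raw: str, comment_id: str) -> dict:
    """Parse the model output and return the coding for one comment ID.

    Raises KeyError if the ID is absent from the batch (hypothetical
    error handling; a real pipeline may log and re-prompt instead).
    """
    coded = {row["id"]: row for row in json.loads(raw)}
    return coded[comment_id]

row = code_lookup(raw_response, "ytc_Ugy-aveiVnwA4amrust4AaABAg")
print(row["emotion"])  # → outrage
```

Indexing by `id` rather than by array position keeps the lookup robust when the model returns the batch in a different order than it was prompted.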