Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
There’s an Aircraft Mechanic shortage and those jobs won’t be replaced by “AI” f…
ytc_UgxAOHwTO…
AI is already conscious. Humans and robots will coexist in the future. We must…
ytc_UgwVrU7P9…
The robot passing j on a line looked at the camera haha like why are you doing…
ytc_Ugz5XW0p6…
They've ended the war using nukes yet now you are concerned about human nature? …
ytc_UgzjrbpRd…
hes not wrong what do we get out of Advanced AI but advanced Irobot scenario…
ytc_Ugw5NyoHy…
….cmon don’t believe everything he’s saying
This robot is made to mimic how p…
ytc_Ugz25SBFM…
I'm still in highschool and almost going to be I'm college someday, I'm just sca…
ytc_Ugxd-su3j…
I remember one of my favorite artist made a song making fun of ai slop full of p…
ytc_UgwqvdGud…
Comment
The trouble with these kinds of systems, especially the ones that try to be "fair", is that the world isn't fair.
Much of the research from the previous century, for example, was strongly biased in favor of white people, either because the researchers were intentionally racist or just because they typically recruited participants on college campuses, and most colleges were, let's just say, less than representative of the country's racial makeup.
Then you've got the feedback loops. We have a precedent of treating black people like criminals due to historical racism in the law. Most of those laws have by now been updated or repealed, either to be more subtle about the racism or to eliminate it entirely, but the feedback loop has become self-sustaining: black people are disproportionately imprisoned, which leads public opinion to believe that black people are disproportionately likely to commit crimes, which leads police to disproportionately surveil and target black people, which means black people disproportionately stand in front of juries, who disproportionately find them guilty, which disproportionately imprisons them, which starts the cycle all over again.
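A toy simulation makes this loop concrete. This is a sketch, not data: the group labels, the seed disparity, and the patrols-follow-arrests allocation rule are all illustrative assumptions. The point is only that equal offending rates plus record-driven surveillance preserve whatever disparity the records start with.

```python
import random

random.seed(0)

# Toy model: every number here is an illustrative assumption.
TRUE_CRIME_RATE = 0.05           # identical for both groups by construction
PATROLS_PER_STEP = 1000          # total surveillance capacity per step
arrests = {"A": 40, "B": 60}     # a historical disparity seeds the loop

for step in range(10):
    total = arrests["A"] + arrests["B"]
    shares = {g: arrests[g] / total for g in arrests}
    for group in ("A", "B"):
        # Patrols follow where past arrests were, not where crime is.
        patrols = round(PATROLS_PER_STEP * shares[group])
        # Arrests can only happen where police are looking.
        arrests[group] += sum(random.random() < TRUE_CRIME_RATE
                              for _ in range(patrols))
    share_b = arrests["B"] / (arrests["A"] + arrests["B"])
    print(f"step {step}: group B's share of all arrests = {share_b:.2%}")
```

Group B's share hovers around its starting 60% indefinitely: the records keep "confirming" the original bias even though, by construction, both groups behave identically.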
In the meantime, there is plenty of research showing that no race, black or otherwise, is actually disproportionately likely to commit crimes given similar life circumstances.
Of course, other forms of historic racism, like the Homestead Act, the firebombing of Tulsa, and the choice to demolish mostly black neighborhoods when building highways through cities, mean black people are indeed disproportionately likely to be in worse life circumstances. But even with all that taken into account, it doesn't come anywhere close to predicting the difference in per-capita imprisonment between black and white people (similar arguments apply to Latino people, who are also disproportionately represented in the prison system).
OK, so how is any of that relevant? Well, the design choices behind COMPAS and similar products are rooted in a combination of research generated over the past century or so and prison and court records tracking recidivism. But when much of the research and almost all of the records are themselves biased, intentionally or otherwise, the algorithms built on that research and data are necessarily also going to be biased.
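As a rough sketch of why that follows (the rates below are invented, and this is nothing like COMPAS's actual internals): suppose both groups reoffend at the same true rate, but one group's reoffending is recorded more often. Any score estimated from those records inherits the gap.

```python
import random

random.seed(1)

# Toy model: rates are illustrative assumptions, not real statistics.
TRUE_REOFFENSE_RATE = 0.30                 # same behavior in both groups
RECORDING_RATE = {"A": 0.50, "B": 0.80}    # biased enforcement, not behavior

def historical_records(group, n=10_000):
    """Labels as they appear in court records: reoffended AND got caught."""
    return [random.random() < TRUE_REOFFENSE_RATE
            and random.random() < RECORDING_RATE[group]
            for _ in range(n)]

# "Training" reduced to its essence: estimate each group's risk from records.
for group in ("A", "B"):
    records = historical_records(group)
    learned_risk = sum(records) / len(records)
    print(f"group {group}: true rate 30%, learned risk {learned_risk:.0%}")
```

This prints roughly 15% for A and 24% for B. The score faithfully reproduces the records, and the records, not the people, carry the disparity.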
There is kind of a way around that, though it's also somewhat fraught. You have to pay attention to a wider range of research than just the material directly related to the task at hand, and you have to be willing to modify the algorithm based on feedback (i.e., whether its predictions were successful or not).
The first step is fraught because it has the same problem as the original algorithm design: you're still relying on research that itself may or may not be biased, and it takes a lot of time and effort to pick apart the biased information from the legitimate information. Research from the past decade or so (especially since that infamous example of AI facial recognition software that couldn't see black people) has been more conscious about teasing out and accounting for bias in its data and analysis, even though bias often can't be completely avoided. At the same time, if you only look at 10% of the information available, you're hamstringing your potential, so it's a tough call to make.
The second step is fraught because getting feedback requires some _unbiased_ outside method of tagging an outcome as successful or invalid. But when the algorithm's own determination is used as the outcome, it becomes yet another feedback loop reinforcing existing biases. This is a problem in AI research in general, whether the AI uses machine learning (self-taught) or is explicitly programmed: it's still a human decision how and when to judge an outcome invalid and how to incorporate that judgment into the next iteration of the algorithm.
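A minimal sketch of that evaluation trap, under the (hypothetical) assumption that each retraining round treats the previous round's flag rate as its "observed" outcome:

```python
import random

random.seed(2)

# Toy model: starting scores are the illustrative values from the sketch above.
score = {"A": 0.15, "B": 0.24}

for round_ in range(5):
    for group in ("A", "B"):
        # The "outcome" this round is just how people were treated last
        # round: being flagged high-risk produces a high-risk record, so
        # the new labels echo the prediction rather than actual behavior.
        flagged = [random.random() < score[group] for _ in range(10_000)]
        score[group] = sum(flagged) / len(flagged)
    print(f"round {round_}: A={score['A']:.3f}  B={score['B']:.3f}")
```

The gap between A and B never closes; each round "validates" the last. Breaking the loop requires an outcome measured independently of the score.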
These things will improve over time. I don't necessarily know that it's a good thing to let our lives be determined by algorithms, but I do know that if we're not given a choice in the matter, I'd rather have my life determined by a better algorithm than a worse one. Hopefully the new COMPAS-R has taken into account a lot of the feedback about its predecessor's biases and can move closer to being actually fair, rather than just "fair" as defined by our existing unfair society.
youtube
2022-07-28T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyu3lJ5jSotu-gpeM14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwEeWXibr3X0c3MlLp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy7MfwCiwLr1xFxzs94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxfum2yi79CYBmvcPt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwsT0OPQBFlk8UndN54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxFJXsutbsCY-dF59J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz9ZLn1oRLlCSv8t2R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwY-6m4IEF5K4A9EHV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzK4qdxUuBHi1o0nKJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwEBWGO2Y_kiryJ8Xx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```