Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I've watched a ton of your videos, Sam, And I stand with you 1000% on this AI si…
ytc_Ugzg11Gba…
This is quite a lot of money. And the tools they try to impress us with are kind…
ytc_UgzDFCU6P…
How do we know this wasn’t just an AI interview? You want me to believe that th…
ytc_UgzQyI4PK…
This is why AI will ultimately replace a lot of physicians. AI will be able to …
ytc_Ugw_Qm-Gs…
The Ai used in AI Dungeon text adventure game started to become a pervert becaus…
ytc_UgwFY0MW1…
I really dislike generative AI because it's encouraging people to become creativ…
ytc_Ugxz2ZqAJ…
Never discuss stuff with AI without fact checking or being an expert yourself.. …
ytc_Ugz9ip4n1…
I wonder who AI would go after when predicting who would be a pedo, arsonist, or…
ytc_UgyX5k6dy…
Comment
During the debate that followed ProPublica's accusation that the COMPAS algorithm discriminated against black defendants, Kleinberg, Mullainathan and Raghavan showed that there are inherent trade-offs between different notions of fairness.
In the case of COMPAS, for example, the algorithm was "well-calibrated among groups": independent of skin colour, in a group of people scored as, say, 70% likely to recidivate, 70% actually did recidivate.
ProPublica objected, however, that the algorithm produced more false positive predictions for black defendants (who were more often wrongly labelled high risk) and more false negative predictions for white defendants (who were more often wrongly labelled low risk).
In their paper, the authors showed that these notions of fairness, namely "well-calibrated among groups", "balance for the negative class" and "balance for the positive class", are mathematically incompatible except in degenerate cases (equal base rates across groups, or perfect prediction). One cannot satisfy all of them at the same time.
So yes, AI systems will be biased, as the video insists. But this raises the question of which kind of fairness we want implemented, and what we are willing to give up.
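The trade-off the commenter describes can be checked with a small numeric sketch. The counts below are made-up illustrative numbers (not COMPAS data): two groups are each perfectly calibrated in a 70% and a 20% risk bin, yet because their base rates differ, their false positive rates come out very different.

```python
# Illustrative sketch of the calibration vs. error-rate-balance trade-off
# (Kleinberg, Mullainathan and Raghavan). All counts are hypothetical.

def false_positive_rate(n_high, recid_high, n_total, recid_total):
    """FPR = non-recidivists labelled high risk / all non-recidivists."""
    fp = n_high - recid_high            # labelled high risk, did not recidivate
    negatives = n_total - recid_total   # everyone who did not recidivate
    return fp / negatives

# Group A: 100 people, base rate 60%.
# High bin: 80 people, 56 recidivate (70%); low bin: 20 people, 4 recidivate (20%).
fpr_a = false_positive_rate(80, 56, 100, 60)

# Group B: 100 people, base rate 30%.
# High bin: 20 people, 14 recidivate (70%); low bin: 80 people, 16 recidivate (20%).
# Both bins are calibrated exactly as in group A.
fpr_b = false_positive_rate(20, 14, 100, 30)

print(round(fpr_a, 3), round(fpr_b, 3))  # 0.6 0.086
```

With identical calibration in both groups, the higher-base-rate group ends up with a false positive rate of 60% against roughly 9% for the other, which is the shape of the disparity ProPublica reported.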
youtube
AI Harm Incident
2019-12-14T08:0…
♥ 46
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwdzQf4Z81Wub_oBNh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKdgOX1tqdrJ-LX8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx17723EZEsceZt_yp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwJjjxAxVRWcecmWyN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyRuJzvS40auV0Pk7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwIFEfyAHEN7eFJOHF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVtaD4ShO5brx3M9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxFtDEwbaEIkOGAyr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzK-DjV2ISsCeBaM2B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyolKZJVkQldyGTCjh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
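A raw response like the one above can be parsed and sanity-checked before the codes are stored. This is a minimal sketch: the allowed value sets are inferred from the sample output shown here, not from any documented schema, and the `validate` helper is hypothetical.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "unclear", "company",
                       "distributed", "developer"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed",
                "approval", "resignation"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are known."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

sample = ('[{"id":"ytc_x","responsibility":"company",'
          '"reasoning":"deontological","policy":"liability",'
          '"emotion":"outrage"}]')
print(len(validate(sample)))  # 1
```

Rows with an unknown code in any dimension are dropped rather than repaired, so a malformed model output surfaces as missing codes instead of silently corrupting the coded dataset.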