Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
During the debate that followed ProPublica's accusation that the COMPAS algorithm discriminated against black people, Kleinberg, Mullainathan and Raghavan showed that there are inherent trade-offs between different notions of fairness. In the case of COMPAS, for example, the algorithm was "well-calibrated among groups", meaning that, independent of skin colour, in a group of people classified as, say, 70% likely to recidivate, 70% actually did recidivate. ProPublica objected, however, that the algorithm produced more false positive predictions for blacks (blacks were more often wrongly labeled as high risk) and more false negative predictions for whites (whites were more often wrongly labeled as low risk). In their paper, the authors showed that these notions of fairness, namely "well-calibrated among groups", "balance for the negative class" and "balance for the positive class", are mathematically incompatible and exclude each other: one cannot satisfy all of them at the same time. So yes, AI systems will be biased, as insisted upon in the video. But this raises the question of what kind of fairness we want implemented and what we are willing to give up.
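The trade-off the comment describes can be made concrete with a small numeric sketch. This is not the construction from Kleinberg et al.'s paper; it is a hypothetical toy example showing that when two groups have different base rates, a score that is calibrated within both groups can still yield unequal false positive and false negative rates, which is the ProPublica-style disparity.

```python
def error_rates(scores_labels, threshold=0.5):
    """False positive and false negative rates for (score, label) pairs,
    where label 1 means the person actually reoffended."""
    fp = sum(1 for s, y in scores_labels if s >= threshold and y == 0)
    fn = sum(1 for s, y in scores_labels if s < threshold and y == 1)
    negatives = sum(1 for _, y in scores_labels if y == 0)
    positives = sum(1 for _, y in scores_labels if y == 1)
    return fp / negatives, fn / positives

# Both groups are scored by the same calibrated model: among people
# scored 0.7, exactly 70% reoffend; among people scored 0.2, exactly
# 20% do. Group A simply has more of its members in the 0.7 bucket,
# i.e. a higher base rate of reoffending.
group_a = [(0.7, 1)] * 7 + [(0.7, 0)] * 3 + [(0.2, 1)] * 2 + [(0.2, 0)] * 8
group_b = [(0.7, 1)] * 7 + [(0.7, 0)] * 3 + [(0.2, 1)] * 6 + [(0.2, 0)] * 24

fpr_a, fnr_a = error_rates(group_a)  # FPR = 3/11, FNR = 2/9
fpr_b, fnr_b = error_rates(group_b)  # FPR = 3/27, FNR = 6/13

# Calibration holds in both groups, yet group A (higher base rate)
# gets more false positives and group B more false negatives.
print(fpr_a > fpr_b, fnr_a < fnr_b)
```

Strictly speaking, Kleinberg et al.'s "balance for the positive/negative class" conditions concern average scores rather than thresholded error rates, but the toy numbers illustrate the same underlying tension: with unequal base rates, calibration and equal error rates cannot coexist.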
youtube AI Harm Incident 2019-12-14T08:0… ♥ 46
Coding Result
Dimension      | Value
Responsibility | distributed
Reasoning      | consequentialist
Policy         | regulate
Emotion        | indifference
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwdzQf4Z81Wub_oBNh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzKdgOX1tqdrJ-LX8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx17723EZEsceZt_yp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwJjjxAxVRWcecmWyN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyRuJzvS40auV0Pk7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwIFEfyAHEN7eFJOHF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzVtaD4ShO5brx3M9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxFtDEwbaEIkOGAyr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzK-DjV2ISsCeBaM2B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgyolKZJVkQldyGTCjh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]