Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It should be noted that not all examples of bias in AI are the same. Some, such as medical ones, are often not purely race-based; instead they involve physical features that are then exaggerated by our own social structures of race. In other words, the model wasn't designed well enough to account for a physical feature, and we interpret and exaggerate that failure through the lens of race because it creates a larger bias for that segment of the population. Other examples are actually examples of systematic bias not in the AI but in our own society, or to some degree in trends that may in fact be accurate in a sense yet still represent a systematic bias. I see these aspects talked about a lot, but one aspect I don't see discussed enough is that many of these problems are potentially being aided by anti-AI and anti-racism efforts too. For example, when a person builds an AI and attempts to make it more inclusive, they are also indirectly redefining what "black" means for that AI to fit the social definition. We should be careful with this, because it can ironically work against the exact efforts it means to support and in fact cement race as a concept in AI. As well, AI is often picking up on our own systematic trends, and we can in fact use it to analyze our own social systems too; understanding that may also be what we need to do to build better AI.
YouTube AI Bias 2022-12-23T02:5…
Coding Result
Dimension       | Value
Responsibility  | developer
Reasoning       | mixed
Policy          | regulate
Emotion         | approval
Coded at        | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy4KxxlERLxFGZMUWh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzUcODYWYp-tY58Io54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzFUTkKwQHGBKpm6sF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugx9DeRcqUWLuH0dYJR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzvbpkVuvhybFwv6W14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgztGp6E7FEMmR0KZIh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugwl8tB54vBUAuDJu6x4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzceN63p7Wjwiatmyl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzA57hfAzVHCz2j3RB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxqphEyrIMxHIdGY1V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
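A raw response like the one above has to be parsed and validated before its values can populate a coding-result table. The following is a minimal sketch of that step, assuming the response is a JSON array of objects with the four dimensions shown. The allowed category sets are inferred only from the values visible in this response; the real codebook may define more categories, and the function name is illustrative, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from the response above.
# Assumption: the actual codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records
    whose four dimensions all carry an allowed value."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Example: the record that produced the coding result shown above.
raw = ('[{"id":"ytc_UgzvbpkVuvhybFwv6W14AaABAg",'
      '"responsibility":"developer","reasoning":"mixed",'
      '"policy":"regulate","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded[0]["policy"])  # regulate
```

Dropping (rather than repairing) records with out-of-vocabulary values keeps the pipeline simple; a real tool might instead flag them for re-coding.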