Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hey all, the darker skin color creating more sensitivity is because of the nature of many ImageNets. The darker the image, the higher the pixel intensities are in comparison to lighter images. Many of those algorithms are inspired by how the human or mammalian visual system would work, but they do not react to light, because darker pixels have higher intensities in the additive color logic (this comes from the fact that the computer's color system is optimized for printers). This means the higher the intensities, the higher the activation rate is. A higher activation rate leads to more information, especially if the system already got trained on very "occupied" images (darker colors for facial features). This automatically makes those algorithms more sensitive to dark-skinned people than light-skinned ones. I think this is also an issue with not using sparsity constraints and not normalizing the input space. Another good work is done by Dr. Travis Monk, who actually used intensities as an intrinsic parameter, which allows the network to use this information in another context, for example automatic normalization. (If you search for this you might find it quickly, because I don't have the link right ahead. Sorry.)
youtube AI Bias 2019-10-31T08:5… ♥ 4
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugyxv4NWlvR1u3hen9F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxZScjnF43IFzlyBq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5YVnaTqSlBIB5Wc94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwdRlqSfCyxxA9B0xZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxKqCA0qhkcOgRLlkp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxRN8olBbRN8fpAcnJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzWxoxMdbOrrRC5XTJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyAV5sYPIaDAAZhn3p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy622Tf-IH_uVY9sy94AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwwmAMi33Yd3BiHD414AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
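The coding result shown above corresponds to a single object in this JSON array, matched by comment id. A minimal sketch of that lookup, assuming the comment shown here carries the id `ytc_UgxRN8olBbRN8fpAcnJ4AaABAg` (the entry whose dimension values match the coding result table):

```python
import json

# Abridged raw LLM response: in the full log, the array holds one
# object per coded comment in the batch.
raw = '''[
  {"id": "ytc_Ugyxv4NWlvR1u3hen9F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxRN8olBbRN8fpAcnJ4AaABAg", "responsibility": "developer",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

# Index entries by comment id, then pull the dimensions for one comment.
entries = {entry["id"]: entry for entry in json.loads(raw)}
coded = entries["ytc_UgxRN8olBbRN8fpAcnJ4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # developer indifference
```

Indexing by id rather than by array position keeps the lookup stable even if the model returns the batch in a different order than the prompt listed the comments.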