Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But there's a big flaw with that experiment. I asked ChatGPT the same prompt with no race, then with white, then black, even Asian. When you do that, you are changing the question and directly indicating that the answer should be different based on race. You can't blame ChatGPT for giving you exactly what you asked for. When I asked how black people could improve themselves, the answer was ALSO racial: "4. Challenge Internalized Racism and Colorism Why: Centuries of oppression have planted damaging ideas about worth, beauty, and intelligence. How: Embrace natural hair and Black beauty standards, celebrate diverse skin tones, and reject negative stereotypes through education and media awareness." In other words, by changing the race, gender, whatever in the prompt you are basically saying, ChatGPT, how is this different for white or black, etc. ChatGPT is going to give you the answer it thinks you want based on your prompt. Are you expecting it to argue with you, "I don't know what you mean, the answer would be the same as any other race?" You are instructing ChatGPT to give you a race-differenced answer, that's exactly what it's giving you. This is a flawed experiment.
youtube AI Bias 2025-07-05T04:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgyszMN6oQSd3TYL_G94AaABAg.AJGkx-MEDBUAJUCgrGiLRP","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxhCS_RLGGPDH-gaA54AaABAg.AUoqAPy5NWkAVBaHAOkOee","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyUPu_TmfSwKP9sVY94AaABAg.ATrvpmBz2s6ATuJxLgIdJH","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzzLfL28N06tdf5ypl4AaABAg.AOdWaWY-AsLAT9DD_Rgofm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzrtKjkno9y2Q-KcmZ4AaABAg.AM4DOdasykEAM4FWG05186","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgwaCDPYpo9Wc4NBjHZ4AaABAg.ALwhesUt4RHALySUHcRt4S","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugx9rUYsTPqldXNvyWF4AaABAg.ALRuWbs9qjrALjjjrXCNpT","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwPj7Sr4VuHgdHNrZt4AaABAg.AKVcdq7EKLhAKVd6IPI5BT","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytr_Ugw8jg2tay26rprI46p4AaABAg.AJWURTWW6c5AKB4pfZywji","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzgWkF6fWYm7jhd9zp4AaABAg.AJQpW61o9eBAJSjdO1TFTH","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
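To inspect the model output for a particular coded comment, the raw response can be parsed as JSON and indexed by comment id. A minimal Python sketch (using an abridged two-entry copy of the array above; the second entry is the one matching the coding result shown for this comment):

```python
import json

# Abridged copy of the raw LLM response: a JSON array of per-comment codings.
raw_response = """
[
  {"id": "ytr_UgyszMN6oQSd3TYL_G94AaABAg.AJGkx-MEDBUAJUCgrGiLRP",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugw8jg2tay26rprI46p4AaABAg.AJWURTWW6c5AKB4pfZywji",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"}
]
"""

# Index the codings by comment id for direct lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Look up the coding for the comment shown on this page.
coding = codings["ytr_Ugw8jg2tay26rprI46p4AaABAg.AJWURTWW6c5AKB4pfZywji"]
print(coding["responsibility"], coding["emotion"])  # → user indifference
```

This matches the coding result table above: responsibility `user`, reasoning `deontological`, policy `none`, emotion `indifference`.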