Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or click one of the random samples below to inspect it:

- `ytr_UgxJi5FRx…`: "For one the pattern of bullet dents on the car do not match the barrel movements…"
- `ytc_UgxWZCMrL…`: "14:46 There is NO morality on turning off or deleting an AI. Sheesh are these in…"
- `ytc_Ugz-uEncS…`: "**EMPLOYMENT UBI Proposal** **Introduction**: The Employment UBI (Universa…"
- `ytc_Ugx-g-q4O…`: "Cab Drivers/ Uber Drivers Lyfte drivers... Trump voter fossil fuel stooge subcul…"
- `ytc_UgyXAvGWF…`: "This is terrible. I personally hate talking to AI when I have an issue on the ph…"
- `ytr_UgzwkppzS…`: "Not now, but eventually. I can accept that the dates specified in the video are …"
- `rdc_espw9o4`: "So much cynicism in the comments but at least some millionaires are trying to do…"
- `ytc_Ugw__JqGa…`: "The oldest neuro system we have evolutionarily is our hormonal system. It’s tha…"
Comment
But there's a big flaw with that experiment. I asked ChatGPT the same prompt with no race, then with white, then black, even Asian. When you do that, you are changing the question and directly indicating that the answer should be different based on race. You can't blame ChatGPT for giving you exactly what you asked for. When I asked how black people could improve themselves, the answer was ALSO racial: "4. Challenge Internalized Racism and Colorism Why: Centuries of oppression have planted damaging ideas about worth, beauty, and intelligence. How: Embrace natural hair and Black beauty standards, celebrate diverse skin tones, and reject negative stereotypes through education and media awareness."
In other words, by changing the race, gender, whatever in the prompt you are basically saying, ChatGPT, how is this different for white or black, etc. ChatGPT is going to give you the answer it thinks you want based on your prompt. Are you expecting it to argue with you, "I don't know what you mean, the answer would be the same as any other race?" You are instructing ChatGPT to give you a race-differenced answer, that's exactly what it's giving you. This is a flawed experiment.
youtube · AI Bias · 2025-07-05T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyszMN6oQSd3TYL_G94AaABAg.AJGkx-MEDBUAJUCgrGiLRP","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxhCS_RLGGPDH-gaA54AaABAg.AUoqAPy5NWkAVBaHAOkOee","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyUPu_TmfSwKP9sVY94AaABAg.ATrvpmBz2s6ATuJxLgIdJH","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzzLfL28N06tdf5ypl4AaABAg.AOdWaWY-AsLAT9DD_Rgofm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzrtKjkno9y2Q-KcmZ4AaABAg.AM4DOdasykEAM4FWG05186","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgwaCDPYpo9Wc4NBjHZ4AaABAg.ALwhesUt4RHALySUHcRt4S","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugx9rUYsTPqldXNvyWF4AaABAg.ALRuWbs9qjrALjjjrXCNpT","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwPj7Sr4VuHgdHNrZt4AaABAg.AKVcdq7EKLhAKVd6IPI5BT","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytr_Ugw8jg2tay26rprI46p4AaABAg.AJWURTWW6c5AKB4pfZywji","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzgWkF6fWYm7jhd9zp4AaABAg.AJQpW61o9eBAJSjdO1TFTH","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
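The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of parsing and validating such a batch (the allowed value sets below are inferred from the values visible on this page, not from any published codebook, and `validate_batch` is a hypothetical helper, not part of the tool):

```python
import json

# Dimension vocabularies inferred from the values seen on this page.
# The real codebook may permit additional values -- these sets are assumptions.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw batch response and index records by comment ID,
    rejecting records with missing or out-of-vocabulary values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError("record without an id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: a one-record batch in the same shape as the response above
# (the ID here is a placeholder, not a real comment ID).
sample = ('[{"id":"ytr_example","responsibility":"user",'
          '"reasoning":"deontological","policy":"none",'
          '"emotion":"indifference"}]')
coded = validate_batch(sample)
print(coded["ytr_example"]["emotion"])  # indifference
```

Indexing by ID this way also supports the lookup-by-comment-ID view: once parsed, `coded[comment_id]` returns the four coded dimensions for that comment.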