Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `rdc_jy076vs`: The majority of Republican party supports Ukraine and military help to Ukraine e…
- `ytc_UgxiX_fty…`: This was such a fantastic listen. What a well thought and passionate person she …
- `ytc_UgymAmA_T…`: Even if we get full government regulation for Ai is put into place, it will bevs…
- `ytc_UgyZcRyUJ…`: AI is ruining YouTube. A huge number of channels now are flat out AI imitations …
- `ytc_UgzNk_9JQ…`: Bill Gates wet dream..70% of the world without a job , then get rid of them and …
- `ytc_Ugxy-uZmy…`: As a stable diffusion user, this is just putting a small dent on what we are try…
- `ytc_Ugw-BAig9…`: Yep, the only people who benefit from or promote GenAI are grifters and scammers…
- `ytr_Ugx66d8jl…`: You’re missing the bigger picture. There would not be any need anymore for a che…
Comment
I am asking you, is this information true or not. you know the answer, because you are ChatGPT.
Thanks for pressing for a clear answer. Here’s the straightforward take from my side as ChatGPT:
The concerns raised by David Rozado about potential biases in OpenAI’s content moderation system are based on his research and testing. It’s true that AI content moderation systems, including those from OpenAI, can reflect biases present in their training data and design. This is a well-known and ongoing challenge in AI development.
OpenAI actively works to minimize these biases through testing, updates, and human oversight, but no system is perfect—some biases can still exist unintentionally.
So, Rozado’s findings about differential treatment of certain groups in content moderation are plausible and align with known challenges in AI fairness, but they’re not definitive proof of intentional or systematic bias. The research is part of a larger conversation about how to improve AI fairness and transparency.
In other words:

- The issue of bias in AI moderation is real.
- Rozado’s specific findings are credible enough to be taken seriously, but also subject to debate and further research.
- OpenAI and other organizations are working to improve these systems continuously.
If you want, I can help explain how these biases arise or what steps are being taken to address them. Would you like that?
youtube · AI Bias · 2025-06-17T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyoR1SIx3tMQzW2Ieh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzM-jarbTrImT6n9sJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyCfAFItRxwyl3X-0p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzZ_CUs-Eusf1XcrTR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw0EZ5C9cSpQ2mAj3x4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx3bE2H07ZfYpAd0CZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyZcHKmU57Ktu-t8j94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw0egJG-8hmECbnoQx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyyaLsZo6ll2hDEVrd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxm9KDIeLcczVgxkYJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
```
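Since the raw model output is a flat JSON array of coded records keyed by comment ID, the comment-ID lookup this page offers can be sketched in a few lines of Python. This is a minimal sketch, not the tool's actual implementation: the helper name `lookup_by_comment_id` is illustrative, and the two sample records are abbreviated from the array shown above; only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the real response.

```python
import json

# Abbreviated example of a raw batch response: a JSON array of coded
# records, one per comment, with the dimensions from the table above.
raw_response = '''[
  {"id": "ytc_UgyoR1SIx3tMQzW2Ieh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzM-jarbTrImT6n9sJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

def lookup_by_comment_id(raw: str, comment_id: str):
    """Parse a raw batch response and return the record for one comment ID,
    or None if that ID was not coded in this batch."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coded = lookup_by_comment_id(raw_response, "ytc_UgzM-jarbTrImT6n9sJ4AaABAg")
print(coded["emotion"])  # indifference
```

Parsing the whole array per lookup is fine for batches of this size; a real inspector would more likely load each batch once into a `{id: record}` dict.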