Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "IN THE FUTURE ... A.I. & ROBOTS ARE TO CONTROL THE WORLD & HUMAN RACE 😢…" (ytc_UgwNEVpaz…)
- "Maybe its time to stop fearing AI to get your money and use AI to get money for …" (ytc_Ugy26bFOX…)
- "I don't know why but every comment telling that anyone can do art just push me s…" (ytc_UgyfC90tI…)
- "I actually lost my job as a designer the other day because of AI but I'd love to…" (ytc_Ugza_7ZkL…)
- "So what I'm getting here is the biggest problem is plagiarism. AI users are stea…" (ytc_Ugw36EzlP…)
- ""You're killing my family in Palestine with your AI" & he goes like, I believe,y…" (ytc_Ugxr3bjSq…)
- "UBI or some type of similar solution must be getting more attention. Automation …" (ytc_UgyunEZ1N…)
- "AI art has no soul because there was no intent behind what the art contains. Art…" (ytc_Ugx-OY8_9…)
Comment
No racism doesn't mean having no preference, instead it means having rational preference, rational non-preference, or at least no irrational preference based on race.
If you had irrational preference based on something other than race (e.g. you don't like people with glasses) that obviously doesn't make you racist, but you certainly need to be irrational in your judgement to be racist.
Which means, if the AI is preferring e.g. white people over other people, to prove racism you'd need to:
1) show that this is irrational with the information the AI has available
2) show that the irrationality is based on race and not some other criteria that coincidentally correlates with race
And of course, if the AI itself isn't racist, the training data selection could be biased to prefer white people, and that could again have multiple reasons:
1) It could just be representative of the real world (and therefore preference of white people would be the only non-racist option)
2) Or the ones who produced or put together the data are racist and they intentionally or subconsciously put together biased data
3) Or there's one of the many statistical biases at play here, e.g. that a higher percentage of the training set data is produced by white people.
To initially assume racism without even thinking as far as my post goes (and you could go further) is honestly racist on its own, you should really think about that.
Source: youtube · Topic: AI Bias · Posted: 2023-02-15T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugx9Qbu-IAEOim_2Zh14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzH1L1DTVExyxGCzZ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxVLGwFmOHi8feQRvB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxTjuRW5Qtv42Omtp54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgziHamSah4vigs7fDZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzEMbIXty0tMdnUllN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSGL0me9bcA9WSHh54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyr5DtuHkX4RiWysUB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwaDRKg00UkTbd-yCF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwzp9TawBSy36zg8y54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
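The raw response above is a JSON array of per-comment records, one object per coded comment. A minimal sketch of how such a payload could be parsed and looked up by comment ID — the allowed values per dimension are inferred from the table and JSON above (the actual codebook may define more categories), and the helper name `lookup_coding` is hypothetical:

```python
import json

# Allowed values per coding dimension. These sets are assumptions
# inferred from the records shown above, not the full codebook.
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"indifference", "outrage", "fear"},
}

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM response (JSON array of records) and return
    the record for one comment ID, checking each dimension against
    the assumed schema."""
    records = json.loads(raw_response)
    for rec in records:
        if rec.get("id") == comment_id:
            for dim, allowed in SCHEMA.items():
                if rec.get(dim) not in allowed:
                    raise ValueError(f"unexpected value for {dim}: {rec.get(dim)!r}")
            return rec
    raise KeyError(f"no coding found for {comment_id}")

# Example with the first record from the response above.
raw = '''[
  {"id": "ytc_Ugx9Qbu-IAEOim_2Zh14AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''
rec = lookup_coding(raw, "ytc_Ugx9Qbu-IAEOim_2Zh14AaABAg")
print(rec["emotion"])  # indifference
```

Validating against a fixed value set at parse time catches the common failure mode where the model emits a category outside the codebook (e.g. a free-text emotion), which would otherwise silently pollute downstream counts.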