Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No racism doesn't mean having no preference, instead it means having rational preference, rational non-preference, or at least no irrational preference based on race. If you had irrational preference based on something other than race (e.g. you don't like people with glasses) that obviously doesn't make you racist, but you certainly need to be irrational in your judgement to be racist.

Which means, if the AI is prefering e.g. white people over other people, to prove racism you'd need to:

1) show that this is irrational with the information the AI has available
2) show that the irrationality is based on race and not some other criteria that coincidentally correlates with race

And of course, if the AI itself isn't racist, the training data selection could be biased to prefer white people, and that could again have multiple reasons:

1) It could just be representative of the real world (and therefore preference of white people would be the only non-racist option)
2) Or the ones who produced or put together the data are racist and they intentionally or subconsciously put together biased data
3) Or there's one of the many statistical biases at play here, e.g. that a higher percentage of the training set data is produced by white people.

To initially assume racism without even thinking as far as my post goes (and you could go further) is honestly racist on its own, you should really think about that.
youtube · AI Bias · 2023-02-15T09:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
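
Each coded comment carries the same four dimensions shown above, plus the ID of the comment it was coded against. Below is a minimal sketch of one such record as a Python TypedDict; the Literal label sets are only the values observed in this batch's raw response, and the actual codebook may allow more.

from typing import Literal, TypedDict

# Sketch of one coding record. The allowed values below are an
# assumption drawn from the raw response in this section; the full
# codebook may define additional labels per dimension.
class CodingRecord(TypedDict):
    id: str  # YouTube comment ID ("ytc_...")
    responsibility: Literal["none", "developer", "ai_itself", "distributed"]
    reasoning: Literal["deontological", "consequentialist", "unclear"]
    policy: Literal["none", "liability", "ban"]
    emotion: Literal["indifference", "outrage", "fear"]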
Raw LLM Response
[ {"id":"ytc_Ugx9Qbu-IAEOim_2Zh14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzH1L1DTVExyxGCzZ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxVLGwFmOHi8feQRvB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxTjuRW5Qtv42Omtp54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgziHamSah4vigs7fDZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzEMbIXty0tMdnUllN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzSGL0me9bcA9WSHh54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyr5DtuHkX4RiWysUB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwaDRKg00UkTbd-yCF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwzp9TawBSy36zg8y54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"} ]