Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
Well said, Thomas. Clearly it’s an issue of programming or whatever it is they call it that lends to its machine learning processes etc. A truly intelligent and intellectual AI would be smart enough to realize that shit like language diversity — although awesome in many ways and some languages possibly better than others for x reasons — is a barrier for the human species to unite and develop at an increased rate. AI would sooner conclude that ALL humans are negligent, naive, inherently selfish (as all organic organisms need to be), and unintelligent, than it would conclude prejudiced notions about certain humans. Honestly, I could see it adopting some sort of speciesism, but I don’t think it seems logical that AI would draw these conclusions on its own. The constructs of race, ethnicity etc. are no doubt important and necessary to acknowledge for the fact that I don’t wish to seem as though I believe they’re unimportant in contributing to our history and shaping our future. However, I definitely believe they hold us back and do more harm than good. Considering the knowledge and technology we possess in this modern era, our behaviors are shameful. It seems totally irrational for me to expect complete unity because of all the things that separate and distract us. However, I know it is theoretically possible — like anything else that we can imagine, so long that it abides by the laws of reality — and therefore uphold these beliefs.
youtube AI Bias 2023-12-08T12:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzBFK6DkYl7YnNfhcd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz8AdaQRmoo5Sm3Oot4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1gIVMYPSR7xkbd4p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyjWGjTW0XwfY-653p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx-OnQ2vWyWj_Ej8ZJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyqeM8xi3u-TjXXfNJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzyLUZQUA6xuASy7h4AaABAg","responsibility":"institution","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxkex-ef7E2Pu_7F614AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyaVPTMVqWtyeE6uUB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy0vKJVMd5ncvt29aZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
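To inspect a raw response like the one above programmatically, the JSON array can be parsed and indexed by comment id. The sketch below is a minimal illustration, not part of the original coding pipeline; it uses one entry from the array, and the assumption that id ytc_Ugxkex-ef7E2Pu_7F614AaABAg corresponds to the displayed comment rests only on its dimension values matching the Coding Result table.

```python
import json

# Excerpt of the raw LLM response shown above (one entry, for brevity).
raw_response = """
[
  {"id": "ytc_Ugxkex-ef7E2Pu_7F614AaABAg",
   "responsibility": "developer",
   "reasoning": "mixed",
   "policy": "regulate",
   "emotion": "approval"}
]
"""

# Index the per-comment codings by comment id for quick lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding for a given comment and read off its dimensions.
coding = codings["ytc_Ugxkex-ef7E2Pu_7F614AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
```

Indexing by id rather than scanning the list keeps lookups constant-time when a batch response covers many comments.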