Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All software will by default have a bias, even if those writing the program have no intent to be biased. Here's why: everything a computer does boils down to 1s and 0s. All decisions are made by what data is fed in and how the algorithm turns averages into decisions, and this is where the bias comes in. If your average is a fraction, then what do you do if given only 2 choices? Do you round off the fraction? Then you are more likely to round up than round down: (.5, .6, .7, .8, .9) all round up while (.1, .2, .3, .4) round down. Alternatively, statistical rounding has the same problem; there are 5 odds to 4 evens. Even if you have 3 or more choices, you will still find that 1 of the choices is either favored or discriminated against. There is just no way to avoid this. We've already seen what happens when someone with evil intent gets involved, like what Elon Musk did with his Grok, turning it from trying to be honest to ignoring facts, which turned Grok into a Nazi, which is what Elon is. So, by default there is a bias, and there are those who may intend to do harm by leaning the bias toward something bad. That line in the video, "if it will do harm we would not have created it," has already been proven false. This is how we got religion, which is currently the greatest evil man has ever created. It has caused more harm in the world than any other single thing, and it still continues to do harm.
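The rounding claim in this comment can be checked directly. A minimal Python sketch (purely illustrative, not part of the coding pipeline) confirms that round-half-up sends five of the nine nonzero tenths up and only four down:

```python
# Of the nine nonzero tenths, .5-.9 round up to 1 and .1-.4 round
# down to 0 under round-half-up, so a uniform fraction skews upward.
from decimal import Decimal, ROUND_HALF_UP

tenths = [Decimal(f"0.{d}") for d in range(1, 10)]
ups = sum(1 for t in tenths if t.quantize(Decimal("1"), rounding=ROUND_HALF_UP) == 1)
print(f"{ups} of {len(tenths)} tenths round up")  # -> 5 of 9 tenths round up
```

Python's built-in round() uses round-half-even (banker's rounding) precisely to mitigate this upward skew.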
YouTube · AI Harm Incident · 2025-07-28T00:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
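For reference, a minimal sketch of the record shape behind this table, assuming the four categorical dimensions shown plus the coding timestamp; the class name and field types are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative record shape for one coded comment. Field names mirror
# the raw JSON below; coded_at appears only in the table, not the JSON.
@dataclass
class CodedComment:
    id: str
    responsibility: str   # e.g. "developer", "company", "ai_itself"
    reasoning: str        # e.g. "consequentialist", "deontological"
    policy: str           # e.g. "industry_self", "regulate", "ban"
    emotion: str          # e.g. "indifference", "outrage", "fear"
    coded_at: Optional[datetime] = None
```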
Raw LLM Response
[ {"id":"ytc_Ugx_4yxgqmVxLgximE54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwIwQLbDaXJd3rogG14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwmeV475jfbJRZREbd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgzsMD-sQzZevIGDy6h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzsVTsIG5edYwobcaR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugxoxd2wP6znltpEYEt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugx5H0QanNfWEfbV7sV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxtfPy37yejSF7uc-54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxjEAdi-RPJWT6_-xl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyqFOtPlYB9oQXlKNN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"} ]