Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a mathematician who worked with the compass algorithm, it was pretty scary to see. To make it short: the algorithm itself doesn't care and is (just mathematically!) correct. Or at least it does what it is programmed to do. HOWEVER, there is a huge problem: the input data. The first data we got (I don't know if those were the data files that were actually used) were extremely skewed towards skin color. To over-exaggerate: they basically had 1000 people who had already committed a crime, 950 of them were black, AND the algorithm had "skin colour" as a decision parameter. And, well, the algorithm "learned" to check the "easiest" decision parameter, since in that input data the skin colour decided 95% of the crimes. This is simply a horrible thing to do, because by every measurement you CANNOT "unskew" the input data: in some way, shape or form (not skin colour, but e.g. divorce of parents, being an orphan, etc.) the input data is always skewed in SOME way, and the algorithm will simply find that skew and more or less cut everything else off. To put it shortly: it was a super interesting project, very instructive and so on. But also really scary to see that the "emotionlessness" we wanted can destroy a human's life by simply deciding to prolong the sentence, even though a human judge might have ruled differently.
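The mechanism the commenter describes, a model trained on skewed records latching onto the "easiest" feature, can be sketched with a toy example. This is a hypothetical illustration, not the actual COMPAS data or code: the numbers (1000 records, a 95% skew) mirror the commenter's deliberately over-exaggerated figures, and the "model" is the laziest possible learner, a per-feature-value majority vote.

```python
import random
from collections import Counter

random.seed(42)

def biased_training_set(n=1000, skew=0.95):
    # Sampling bias (hypothetical numbers): among recorded reoffenders,
    # 95% carry feature=1; among non-reoffenders only 50% do. The true
    # reoffense rate is identical for both groups -- only the *records*
    # are skewed.
    rows = []
    for _ in range(n):
        reoffends = random.random() < 0.5
        p_feature = skew if reoffends else 0.5
        rows.append((1 if random.random() < p_feature else 0, reoffends))
    return rows

def fit_single_feature(rows):
    # "Learn" the majority label for each feature value: the easiest
    # decision rule available, which is exactly what skewed input data
    # rewards.
    by_value = {}
    for feature, label in rows:
        by_value.setdefault(feature, Counter())[label] += 1
    return {v: counts.most_common(1)[0][0] for v, counts in by_value.items()}

model = fit_single_feature(biased_training_set())
print(model)
```

Even though the feature is causally unrelated to reoffending, the learned rule predicts "reoffends" for feature=1 and "does not" for feature=0, i.e. the skew in the records becomes the entire decision.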
youtube 2022-07-25T19:5… ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw3Ux14bSxSWzufSwt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwLXHQEaCQlkA5bXal4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyjTjFP7KC_mfFR4UJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwkvdgoQ3PiVWTmHih4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzKrirX1NZtFJlJ0MJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzu5KB5QOXxJtZ5ZYB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyGOjJk-W77uEfnd1t4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzZqlnOzBBTAi2MLKZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyQnOyHLPs6tyhb5tZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwbKExRqpJT1QIPuKB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]