Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The fact many of these questions seem like what you'd ask a person whilst trying to diagnose them with certain mental illnesses or neurodivergencies is disgusting, let alone the part where these questions are answered with no context or nuanced conversations on the subject. "Do you often feel sad?" The answer: "Yes" The algorithm's thoughts: "this person has nothing to live for and might commit a crime because they don't fear losing their life, their crime and answers indicate they'd be more likely to break the law again" The reality/nuance: "Yes, my mom died 4 months ago to cancer and I've felt down ever since, she helped me keep my life in check and without her I completely forgot to get my car's documents renewed, since she always reminded me to do it as I still lived with her and the mail was received by her" It's SO easy for any answer to mean the complete opposite if you don't allow someone to explain the reason for their emotion. Algorithms and AIs and machines in general should never be in charge or judging people because they do not, and cannot, guess the nuance behind actions and feelings. It's ludacris to me that this is even a thing.
youtube 2022-07-25T19:3… ♥ 235
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          ban
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw3Ux14bSxSWzufSwt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwLXHQEaCQlkA5bXal4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyjTjFP7KC_mfFR4UJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwkvdgoQ3PiVWTmHih4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzKrirX1NZtFJlJ0MJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzu5KB5QOXxJtZ5ZYB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyGOjJk-W77uEfnd1t4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzZqlnOzBBTAi2MLKZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyQnOyHLPs6tyhb5tZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwbKExRqpJT1QIPuKB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
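A raw response like the one above is a JSON array of per-comment records, one per coded comment, keyed by id. The following is a minimal sketch (not this project's actual tooling) of how such a response could be parsed and validated in Python; the allowed code vocabularies are inferred from the values that appear in this output and may be incomplete.

```python
import json

# Assumed code vocabularies, inferred from this single response;
# the real codebook may define more values per dimension.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding},
    dropping any record with an out-of-vocabulary value."""
    codings = {}
    for record in json.loads(raw):
        comment_id = record.pop("id")
        # Keep the record only if every dimension holds an allowed value.
        if all(record.get(dim) in vals for dim, vals in ALLOWED.items()):
            codings[comment_id] = record
    return codings

# Example using one record from the response above.
raw = ('[{"id":"ytc_UgzKrirX1NZtFJlJ0MJ4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"ban","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_UgzKrirX1NZtFJlJ0MJ4AaABAg"]["policy"])  # ban
```

Keying by comment id makes it easy to look up the coding shown in the table above for the displayed comment.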