Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The point of the Google image search example isn't to accuse Google of some grave injustice; it's just an easy-to-understand example of how the fact that a computer generates something doesn't mean its output isn't biased. The society it's getting its data from is biased in favour of female nurses, so it will return mostly pictures of female nurses even when the user is just looking for "nurse" without specifying gender. Once you understand that, it's easy to see how this becomes a problem when the situation is more complicated and the stakes are higher, which is the whole point of the episode. Let's say there are 10 male nurses in the world and 90 female nurses. Out of those 100 nurses, one man and two women have committed the same misdemeanour on the job. Given that, would it be fair to make decisions on who to employ as a nurse based on the idea that 10% of men have committed this misdemeanour but only ~2% of women have? An AI trained with this data might. Worse yet, you don't even know it's doing this because its decision-making process is more or less a black box.
Source: YouTube · AI Harm Incident · 2019-12-14T22:5… · ♥ 9
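The commenter's nurse scenario reduces to simple base-rate arithmetic. Below is a minimal Python sketch of that arithmetic, using only the hypothetical counts from the comment above; none of the numbers are real data.

male_nurses, female_nurses = 10, 90
male_misdemeanours, female_misdemeanours = 1, 2

male_rate = male_misdemeanours / male_nurses        # 1/10  -> 10%
female_rate = female_misdemeanours / female_nurses  # 2/90  -> ~2.2%

print(f"male rate:   {male_rate:.1%}")    # 10.0%
print(f"female rate: {female_rate:.1%}")  # 2.2%

# A model trained on these rates would score male applicants as roughly
# four to five times riskier, even though the absolute counts (1 vs 2)
# are far too small to justify any conclusion about an individual.

Scored this way, the group rate quietly becomes an individual penalty, which is the commenter's point about opaque, data-driven decision making.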
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
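The coded dimensions above, together with the labels visible in the raw response below, suggest a per-comment record shaped roughly like the following Python sketch. The field names and allowed value sets are assumptions inferred from this single record, not a documented codebook.

from dataclasses import dataclass

# Value sets assumed from the labels seen in this record and the raw response below.
RESPONSIBILITY = {"none", "ai_itself", "company", "developer", "distributed", "unclear"}
REASONING      = {"deontological", "consequentialist", "unclear"}
POLICY         = {"none", "regulate", "liability", "unclear"}
EMOTION        = {"outrage", "fear", "approval", "indifference", "resignation", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

    def validate(self) -> None:
        # Reject any label outside the assumed value sets.
        for field, allowed in (("responsibility", RESPONSIBILITY),
                               ("reasoning", REASONING),
                               ("policy", POLICY),
                               ("emotion", EMOTION)):
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"unknown {field}: {value!r}")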
Raw LLM Response
[ {"id":"ytc_UgwdzQf4Z81Wub_oBNh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzKdgOX1tqdrJ-LX8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx17723EZEsceZt_yp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwJjjxAxVRWcecmWyN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyRuJzvS40auV0Pk7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwIFEfyAHEN7eFJOHF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzVtaD4ShO5brx3M9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxFtDEwbaEIkOGAyr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzK-DjV2ISsCeBaM2B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgyolKZJVkQldyGTCjh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"} ]