Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"When you look into the future, it changes, because you looked at it." Had this oracle AI not predicted the man to be involved in a shooting, he wouldn't have been shot, because he wouldn't have been on the heat list, and wouldn't have been thought of as a police informant. But by looking into the future, an otherwise seemingly irrelevant man was considered an inevitable threat, who was then watched, and then shot for being watched. Computers are also very literal and precise with their wording. It didn't say that he would be the shooter, it didn't even say he would be shot. It said he would be "involved with a shooting." He could've just been one of many people fleeing as someone else fired into a crowd. He could've been a first responder, police or medical. And all of that could happen at any point during his total lifetime. And that's just reasonable ways most people would imagine. That's not including things like accidental firings or suicide, or just hanging out at a firing range without shooting yourself. Given those requirements alongside various statistics on gun availability, local crime rates, police use of firearms, and dozens of other things, that pretty much means everyone is going to hit 99+%, at least in the states.
youtube AI Bias 2023-01-12T05:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwNWjfBRk7H9VRhbdZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwVUHowzLp2LbVRZSV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwfR7jS_EaMorC999B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw3-EAf-UcOcfexOAN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy1xTjZX8JxuQDK30d4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyT1UsRyEcLpG453Vp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz4Wev_PcUdDpzvFmJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugy1An5I88WLYb0KbWN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxRDIb7yn3Z3ZCCZal4AaABAg", "responsibility": "society", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzbxRjsbFBZ-DN00yl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
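A raw response like the one above has to be parsed and validated before its per-comment codings can be stored. The following is a minimal sketch of how that step might look; the `parse_codings` helper and the `EXPECTED_KEYS` set are illustrative, not part of the tool shown here, and the embedded string is just two of the ten records from the response above.

```python
import json

# Excerpt of the raw LLM response above (two of the ten records),
# used here only to demonstrate the parsing step.
raw = '''[
  {"id": "ytc_UgwNWjfBRk7H9VRhbdZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyT1UsRyEcLpG453Vp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# The five fields each record is expected to carry (hypothetical schema name).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_json: str) -> list[dict]:
    """Parse the model output and keep only records with every coding dimension."""
    records = json.loads(raw_json)
    return [r for r in records if EXPECTED_KEYS <= r.keys()]

codings = parse_codings(raw)
for c in codings:
    print(c["id"], c["responsibility"], c["policy"])
```

Dropping malformed records (rather than raising) is one possible design choice; a stricter pipeline might instead re-prompt the model when a record is missing a dimension.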