Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm familiar with statistical modeling, and I got an example. You know how when you watch 2 or 3 videos on youtube about something, and then every recommendation becomes that something? Thats what this algorithm does for the police. It takes what they target, and gives them more of the same with a small increase (2%) in accuracy. This is not being proactive, this is systematic oppression.
Platform: youtube · Case: AI Harm Incident · Posted: 2018-05-23T06:2… · Likes: 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugwpv7Zcr7nnAh-y1jV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzNFBl2Ekq_9BeGWgt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyJe9T6MP7iW2ejPb94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzpQ-vWiYz2ctmseZd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw6ktU-OdFTB1iAWgN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzuTUoz5hTWwT_67DN4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugwo6IB9oFnvE4E5Rlt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-SanF5SRlvBFq5Np4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx41umYvhwE4eNJ1g14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy8KbbIUxZn8zWm-ip4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
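The raw response above is a JSON array with one record per comment id. A minimal sketch of looking up the coding result for a single comment, assuming the model returned valid JSON of this shape (the `coding_for` helper and the truncated two-record excerpt are illustrative, not part of the app):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of
# per-comment coding results, in the same shape as shown above.
raw_response = '''
[
  {"id": "ytc_Ugwpv7Zcr7nnAh-y1jV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzNFBl2Ekq_9BeGWgt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
'''

def coding_for(raw, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

result = coding_for(raw_response, "ytc_UgzNFBl2Ekq_9BeGWgt4AaABAg")
print(result["emotion"])  # -> outrage
```

In a real pipeline the lookup would also have to handle malformed output (e.g. the model wrapping the array in prose), which is why inspecting the exact raw response, as this view allows, is useful.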