Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Paul-sj5db Yeah, even if AI’s don’t understand the contexts of Datasets they are given, and therefore can extrapolate untrue conclusions from them (IE: Black people commit a lot of crime, so an AI can incorrectly assume that Black people are inherently morally compromised) that doesn’t mean the data they are given is necessarily incorrect/biased. Just that the AI lacks the full story behind said data, as well as an understanding of human elements involved in said story, meaning it can’t fully grasp an analysis to give that makes actual sense in the real world. AIs do not have empathy, so I really do not know why they’re being used as tools for things that, on paper, practically require it like Policing and Societal Analysis. The AI may see what is happening, but it won’t know the real why because that would require an understanding of humanity, and an AI works only through programs. In essence, a current AI is a 2D being that some are trying to use to analyze and maybe even solve 3D issues. You’re not going to get a 3D analysis, nor a 3D solution, you’re going to get something 2 dimensional that you really can’t do too much with.
Source: youtube · AI Bias · 2023-10-17T12:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgwfakyCTJRdqbOXeL14AaABAg.9vsUGjq_YCi9vuN5A9dlZi", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwS4zO6Qgsbdl3bmWd4AaABAg.9vg1Byf0qpn9vhQeyls4bR", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwNwdCAmrL9FBdjRhB4AaABAg.9tDzEX_MD8d9w8mf8r9I8q", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugw_NP9D91NTaNfbQ7t4AaABAg.9sS7pBbS_rm9sWQzchffqo", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugx6FwQkq_DY2FFf9LR4AaABAg.9r-t2tZZ3in9r12JmUFxfS", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgxSQxEpH_DR6svQVzd4AaABAg.9r-oA-rqw_f9rBDljt-DTI", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxSQxEpH_DR6svQVzd4AaABAg.9r-oA-rqw_f9rBaAW4wF59", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxSQxEpH_DR6svQVzd4AaABAg.9r-oA-rqw_f9rBc7XDwHDk", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxxmYpXbn53rI1xE2t4AaABAg.9qwPqXgrJkP9vyTMolNKN6", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyKfAy4JKinKtueURp4AaABAg.9qevhelZdXJ9vyqGotD98n", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
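A raw response like the one above is a JSON array with one object per comment id, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). The following is a minimal sketch of how such a response could be parsed and indexed by comment id; the `raw` string (truncated to two of the records above) and the `parse_codes` helper are illustrative assumptions, not part of the tool itself.

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coded
# comments, truncated here to two records from the output above.
raw = '''[
  {"id": "ytr_UgwfakyCTJRdqbOXeL14AaABAg.9vsUGjq_YCi9vuN5A9dlZi",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwS4zO6Qgsbdl3bmWd4AaABAg.9vg1Byf0qpn9vhQeyls4bR",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "unclear", "emotion": "indifference"}
]'''

# The four coding dimensions that appear in every record above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse a raw response string into {comment_id: {dimension: value}}."""
    records = json.loads(text)
    coded = {}
    for rec in records:
        # Keep only records that carry all four dimensions; skip malformed ones.
        if all(dim in rec for dim in DIMENSIONS):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

codes = parse_codes(raw)
```

Indexing by id makes it straightforward to look up the coded dimensions for any single comment when inspecting the model output.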