Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why are calibration and equal opportunity mathematically incompatible under differing base rates, and how does this limit fairness in predictive models? How do feedback loops in machine learning models mathematically amplify bias over successive training iterations? What are the limitations of current interpretability methods in addressing accountability within black-box deep learning systems?
Source: youtube · AI Harm Incident · 2025-11-02T16:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwcojX_Sc4g2VEk4e54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzIrR1WrFW1h2D_7OB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyoTLeKXufy-bv1Z_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyG3igmajQTb_QAvrN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyozWMw4TQBYEso7BB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwORoXaYL_UYM0u7OZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz8NYyrTVFtpovnIwp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZZgtixznRr2ZhR6F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxO-EmiSRuc4c6zjS14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxPKtDXhqsarACNOF54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
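A batch response like the one above can be matched back to individual comments by indexing the records on their `id` field. A minimal sketch, assuming the model returns valid JSON (the literal below copies three records verbatim from the response above; the variable names are illustrative, not part of any tool shown here):

```python
import json

# Raw batch response from the model (three records copied from above,
# truncated for brevity — the full response contains ten).
raw = '''[
  {"id":"ytc_UgwcojX_Sc4g2VEk4e54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyozWMw4TQBYEso7BB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxZZgtixznRr2ZhR6F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Index records by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up one record; its values match the Coding Result table above
# (responsibility: unclear, reasoning: consequentialist, emotion: mixed).
rec = codes["ytc_UgyozWMw4TQBYEso7BB4AaABAg"]
print(rec["reasoning"], rec["emotion"])  # → consequentialist mixed
```

In practice a parser like this would also validate that each record's dimension values fall within the codebook's allowed labels before writing them back to the comment store.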