Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So instead of improving a system that made a mistake we want to throw it out. Seems to me that an algorithm that by its nature is unbiased is better then a racist cop. Fix the tool don’t put your human feelings into the equation.
youtube AI Harm Incident 2021-07-10T05:4…
Coding Result
Dimension: Value
Responsibility: none
Reasoning: consequentialist
Policy: regulate
Emotion: approval
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwSBSswyVPP8F8SAb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwswKVaqWPjC4FazJh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwTBsb3PevtwZqSOoB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzZcWyT0VIIiMTyvOB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxeoOmSubjP_t5uzfV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwRkX_xrzWJLR53uq94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOCfOOqeBS6U0zRwF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwCWbYTvg97vWPVzmx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugza8Y-y6mALU-ogu8h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRQ-C2Yk2ZU8LixqV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
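To inspect the coding for a single comment, the raw response can be parsed as a JSON array and searched by comment id. This is a minimal sketch (the helper name `coding_for` is hypothetical, not part of any tool shown here); the field names and the example record come from the raw response above.

```python
import json

# A one-record excerpt of the raw LLM response shown above, used as
# sample input. In practice the full array would be loaded instead.
RAW_RESPONSE = """[
  {"id": "ytc_UgxRQ-C2Yk2ZU8LixqV4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"}
]"""


def coding_for(raw: str, comment_id: str):
    """Return the coding record for comment_id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None


result = coding_for(RAW_RESPONSE, "ytc_UgxRQ-C2Yk2ZU8LixqV4AaABAg")
print(result["emotion"])  # approval
```

The lookup is linear; for repeated queries over a large batch, building a dict keyed by `id` once would be the more natural shape.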