Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is trash, and doesn't prevent anything but accelerate it. The worse part, they do not want to admit their algorithm is wrong because of the investment and researched poured into it, and if it is wrong they have to admit the software is ineffective, thus pure laziness.
youtube AI Harm Incident 2024-05-27T11:0… ♥ 3
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzI7PMIH_WupBGZnIp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgznftWQrzwwhdpkF0d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxyb8qi6e9J3M56Odl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzd_xa-T5OBqHo3tRV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgySVE7kj69cTnOTd5J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugye63kKN9SehSxItgN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyO3_BWbQnDsHciwu54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwJzqYJcSavOnGMtvx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz-oogXqwH2RKS5OP54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw42tn9q1k8ahPEI8p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
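A raw response like the one above can be parsed and validated before the per-comment codes are stored. The sketch below is a minimal example, assuming the category sets visible in this output (e.g. responsibility in company/government/unclear) approximate the codebook; the `SCHEMA` sets and `parse_batch` helper are illustrative, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the displayed output.
# Assumption: the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"company", "government", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "unclear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a batched LLM response (a JSON array of coded comments)
    and keep only records whose values fall inside the expected sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Filtering out records with out-of-schema values makes hallucinated or misspelled codes visible instead of silently entering the dataset.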