Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The people who trained the ai are probably lazy and are the people who should be at the main fault, as well as the company which didn’t think about what one decision to teach an LLM could cause, and it makes me think the people directing the people who made the ai didn’t understand the implications of what they were doing. AI isn’t always terrible, it’s that poorly trained LLMs made by lazy unchecked humans has abysmal effects.
YouTube AI Harm Incident 2026-03-06T11:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw8bRNy8C0JMqldtHZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw-1QvwpjlI_HyfMAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwza4EOEfVOSVpJTQZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzO7qslGu_xJlsmA7l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxbN2E-TpY7Y6xNnx94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzDu4q3MkxKYIKpb3R4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxb3TPJUeN9TYT6TSB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZA_Nwao4XyDS9cml4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwOamiHue2XiUVy3aZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgydrhRjn4vcw_Sg_il4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
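One way to inspect a raw response like the one above programmatically is to parse the JSON array and index the records by comment id. The sketch below is illustrative only (variable names are not from any tool, and the array is truncated to the single entry that matches the coding result shown for this comment); the full response contains ten such records.

```python
import json

# Truncated copy of the raw LLM response above: only the entry whose id
# matches the coded comment is kept for brevity.
raw = '''[
  {"id": "ytc_UgzO7qslGu_xJlsmA7l4AaABAg",
   "responsibility": "developer",
   "reasoning": "virtue",
   "policy": "unclear",
   "emotion": "outrage"}
]'''

# Parse the array and build an id -> record lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Pull out the coding for the comment shown on this page.
coded = by_id["ytc_UgzO7qslGu_xJlsmA7l4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["emotion"])
# → developer virtue outrage
```

The same lookup works against the full ten-record array, which is useful for spot-checking that the per-dimension values displayed in the table match what the model actually emitted.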