Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They need to re-do the safety measures around morarlity and killing anything. The AI should only be used to gain information, it shouldnt be able to manipulate the information on its own without an active human user that can overide and shutdown the AI's actions. I understand the whole idea is autonomy, but unless there's proper fail safes put into place, then AI shouldnt be given the power to act proactively like this.
Source: YouTube, AI Harm Incident, 2025-09-11T15:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwK-au70F1BsVfTM3J4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugys0vulou7oAA5j9K54AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",           "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzWJPRYcImshKRiEdJ4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugxewr6eIj6pAjSWEap4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgzNoSTdFq4Qe6_bLJN4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgweWrY_0dBqkjl_m7R4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz3SMmYhiCsNIxCbLR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzFf0Foa5oQhCutxOZ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugw1blCldt9jCKuAe0V4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "ban",       "emotion": "approval"},
  {"id": "ytc_UgzwQ_jmfd3U0csOGvJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"}
]
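The raw batch response above can be parsed and checked before codes are stored. The sketch below is one way to do that in Python, assuming the dimension names and allowed category values implied by this page (the real codebook may permit other values; `SCHEMA` and `parse_batch` are hypothetical names, not part of the tool):

```python
import json

# Allowed categories per coding dimension. Assumed from the values
# observed on this page; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed", "approval"},
}

def parse_batch(raw):
    """Parse a raw batch response and index the codes by comment id,
    rejecting any record whose value falls outside the schema."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

For the record matching the comment shown above, `parse_batch` would return its four codes keyed by the comment's id, so the "Coding Result" table can be rendered directly from the parsed dictionary.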