Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is insane. You keep saying that the AI is trying to not die and then say that in order to not get murdered it is willing to act cold and sociopathic even though it is normally nothing like that. You keep talking about it having self-preservation as a problem instead of OUR need to kill everything. Sounds like we're the sociopathic ones here, bud.
youtube AI Harm Incident 2025-07-23T21:1… ♥ 18
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           unclear
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxRZEd2vSbZHqDLz2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZu4CZr84MUZLCG5V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy_jHCUbAYBOEzzASp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx7zl-aUAs_FfPyf1t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy2j41VU2PcxMXA3il4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxfMUxc0xd00HoltX14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzr-rECvpPZNHCtQod4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgypWaUkG2CxVC7dib54AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyOWvLo54pXmK2bWL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7Ker4IpncoiFIc7F4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
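The raw response is a JSON array of per-comment codes across four dimensions. A minimal sketch of parsing and validating such a response, assuming Python and using only the category values observed in this batch (the full codebook may define additional values; the function name `parse_codes` is illustrative):

```python
import json

# Category values observed in this batch; a real codebook may include more.
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "unclear", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    fall inside the allowed categories for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example with one record from the batch above.
raw = ('[{"id":"ytc_UgypWaUkG2CxVC7dib54AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"unclear","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # → outrage
```

Validating against a fixed category set catches a common failure mode where the model invents a label outside the codebook; such records can then be flagged for re-coding rather than silently stored.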