Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These AIs are not AGIs though. And to be honest, is 'preferring to cause harm as opposed to failure' not a human trait as well? Self preservation is not unexpected, but these AIs are not capable of swimming outside of pools they're made to swim in, hence AI and not AGI. A lot of these situations are designed specifically to gauge results like these.
youtube AI Harm Incident 2025-07-27T11:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy1gyAq20501eRaJhJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw_7F6O1B6aCwuQ6Ul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZ2sfAereaJS2h3-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0iHVORyWS7RFHoON4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw1EMPj2vxmcPUCldh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxg9e_NB1ri2cqdbuJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxWU72pJSlyVkGa0gN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwxf3qzAh7JO2yD_g14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwQnKVcip-kmIhNW5J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxlj1ZddQiYEzSUJq14AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
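A minimal sketch of how a response like the one above can be parsed back into per-comment codes. This assumes only that the raw response is a valid JSON array of objects with the fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the variable names and the tallying step are illustrative, not part of the tool shown here. The string is truncated to two entries from the array above for brevity.

```python
import json
from collections import Counter

# Raw LLM response: a JSON array of per-comment codes.
# Only the first two entries from the response above are reproduced here.
raw_response = """[
  {"id":"ytc_Ugy1gyAq20501eRaJhJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw_7F6O1B6aCwuQ6Ul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]"""

codes = json.loads(raw_response)

# Index the codes by comment id for direct lookup.
by_id = {c["id"]: c for c in codes}

# Tally one dimension across all coded comments.
emotion_counts = Counter(c["emotion"] for c in codes)

print(by_id["ytc_Ugy1gyAq20501eRaJhJ4AaABAg"]["emotion"])  # resignation
print(dict(emotion_counts))
```

Indexing by `id` makes it straightforward to join a code back to its source comment, and a `Counter` per dimension gives a quick distribution check before any downstream analysis.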