Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would say the AI acts exactly as expected. Choosing harm over failure is a very human reaction. The AI reflecting this means it's doing it's job (for now). A strong AI may choose differently - we do not know yet. But all we see for now - all the dreaded AI that is presented to us - is nothing but a mirrored reflection of our very own nature. AI is not betraying us - it is disappointing our fantasy.
youtube AI Harm Incident 2025-08-21T09:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyWqX-ODSEaVaLAUpl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyopT-nteuXnWerEOh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx9tRauzg-5lLnCALt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwmEOPzNNkBUmYKIGt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzYVoVoBXPdkAeXL054AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxJGzyHzVsudnVSI5x4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgymoIS9Ls7gjRM-_-B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSYT0VUUPWq-kO4ll4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwDTy8QLNB1drl3JsR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgySTMtMdvyc-h3spgZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
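The raw response is a JSON array of per-comment records, each carrying the five coding fields shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such output could be parsed and checked before use — the field names come from the response itself, but the validation helper and the idea of indexing records by `id` are assumptions for illustration, not part of the original pipeline:

```python
import json

# A single record from the raw model output above (abbreviated to one entry).
raw = '''[
  {"id": "ytc_UgwDTy8QLNB1drl3JsR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]'''

# Fields every coded record is expected to carry (as observed in the output).
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate(records):
    """Keep only records that contain all required coding fields."""
    return [rec for rec in records if REQUIRED.issubset(rec)]

records = validate(json.loads(raw))

# Index by comment id so a coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_UgwDTy8QLNB1drl3JsR4AaABAg"]["emotion"])  # resignation
```

Dropping malformed records rather than raising keeps a batch of codes usable even when the model occasionally omits a field.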