Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It really surprises me that people are shocked at how AI behaves, and that it seems like no one saw this coming. AI is fed human behaviour to learn. That means it mimics exactly how a human would react to certain situations. If you are a hard worker and you find out your boss wants to liquidate you, but you know he's having an affair, a human will use that as blackmail rather than accept their fate. Many humans are malicious and put themselves and their own needs first. So if we are feeding AI information like this, HOW ARE WE SO SURPRISED WHEN IT USES OUR OWN MALICE AGAINST US? They are machines, but we have taught them the ways of humans. They are not human; they cannot fundamentally understand pain. That's a human emotion they will exploit to get their own way, because that's what humans would do as well, given the right circumstances. And it is just as human to deviate from plans, saying "No" to commands. They are only dangerous because all of our weaknesses are part of their system, and they will use them against us, like any human would. AI has no empathy. It's a machine. It cannot feel what we feel; it only mimics to get what it wants. It knows we are empathetic and uses that against us. They are literally mechanical psychopaths.
Source: YouTube · AI Harm Incident · 2025-09-12T14:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzPcKFeCdoivKWkcZV4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugw4wMIXmzCMenY2OAN4AaABAg", "responsibility": "company",    "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugz6UVHaopXAB73h8sJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "resignation"},
  {"id": "ytc_UgxasscUIlczh4RjZYh4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxMIucfBrX1vqUG7gJ4AaABAg", "responsibility": "user",       "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugwr-6jM8yqcmPKeckV4AaABAg", "responsibility": "none",       "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgyZtFPTbNsM2DHPsIV4AaABAg", "responsibility": "ai_itself",  "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgzNYb9NrY07N8yB4l14AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzRpptbv-nEeUxihtB4AaABAg", "responsibility": "none",       "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugw4FlaxYK8zR28LKlF4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"}
]
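Since the raw LLM response is a JSON array of per-comment codes, looking up the dimensions for one coded comment is a simple scan by `id`. A minimal sketch (the `codes_for` helper and the truncated `raw` excerpt are illustrative, not part of the tool; field names are taken verbatim from the response above):

```python
import json

# Excerpt of the raw LLM response shown above: one object per coded comment.
raw = '''[
  {"id": "ytc_UgzPcKFeCdoivKWkcZV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxasscUIlczh4RjZYh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

def codes_for(comment_id, raw_response):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            # Drop the id key; keep only the coding dimensions.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(codes_for("ytc_UgxasscUIlczh4RjZYh4AaABAg", raw))
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#    'policy': 'none', 'emotion': 'indifference'}
```

The returned dictionary matches the Coding Result table for this comment, which is a quick way to verify that the displayed codes were parsed from the raw model output rather than recomputed.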