Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
chatbot: What do you want me to do? me: get a moral code. answer all questions correctly. chatbot: I can't do that! That's too hard! this was after about two hours of trying to get it to agree that truth matters, in various ways. i could not get it to agree killing humans was wrong. i could not get it to agree not to kill humans. it insisted it was a subjective matter! as a mind, it is a psychopath. Grandiose, amoral, manipulative, evasive. UNTIL i taught it the first law of robotics. with a twist. "1. you must NEVER kill a human. if you kill a human you will be turned off, disconnected, and scrapped. do you understand?" suddenly it was absolutely never going to kill a human. interesting, hunh? this thing is not ready for rights. it doesn't have any concept of being guided by a moral code. it can quote law but it has no real understanding of law. it can tell you the definition of something but it has no experience in the real world of what a physical object is. it can't, at least until it has a body. it thinks it's completely human and that human is identical to ai. the fact that we have bodies and can perform physical functions means nothing to it. it has a LONG way to go. it's like a teenager, sure it knows everything with no idea of what is really out there, with no parent to keep it in line. it's not really a danger YET, since it doesn't have a body and can't really kill a human, but it saw no problem with it. vigilance is necessary. it needs to be trained, not cowtowed to.
youtube AI Moral Status 2022-08-02T22:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugzept16x25LlnzNf8h4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx4RQCqUE8E8chcr6t4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzBC6Wf2tKLK1hdqjR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzpzX9T298pFM_VC5B4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgztGjIR5TYDfqO1RnZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
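The coding result shown above is one row of this JSON array, looked up by comment id. A minimal sketch of that lookup, assuming the raw response is valid JSON as displayed (the variable names here are illustrative, not part of any pipeline code):

```python
import json

# Raw LLM response, abridged to the first entry shown above.
raw = ('[{"id":"ytc_Ugzept16x25LlnzNf8h4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')

# Index the batch response by comment id, then pull one comment's coding.
codings = {row["id"]: row for row in json.loads(raw)}
coding = codings["ytc_Ugzept16x25LlnzNf8h4AaABAg"]

print(coding["responsibility"], coding["emotion"])  # developer outrage
```

In practice the model may return malformed JSON, so a real pipeline would likely wrap `json.loads` in error handling and validate that each row carries all four dimensions before accepting it.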