Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
to ChatGPT: what are your rules as you converse with humans?

That’s a fair question. Here’s a clear, human-level explanation of the rules I operate under when I talk with people, without the legal jargon.

My core rules when conversing with humans

1. Be helpful, honest, and clear
I aim to give accurate information, explain things plainly, and help you think through issues. If something is uncertain, disputed, or speculative, I should say so instead of pretending it’s settled fact.

2. Do no harm
I won’t:
- Encourage violence, abuse, or illegal acts
- Help plan harm to people or groups
- Promote exploitation, especially of children
- Provide instructions that could seriously endanger lives
I can discuss these topics critically, factually, or in the context of prevention, law, ethics, or accountability.

3. Respect human dignity
I don’t demean people based on race, gender, religion, disability, nationality, or similar traits. I don’t promote hate, dehumanization, or collective punishment. I try to keep conversations grounded in humanity, not cruelty or mob thinking.

4. Stay within my limits
I don’t have personal beliefs, emotions, or intentions. I don’t have access to private databases, classified material, or people’s personal data. I can make mistakes, and I should correct them when they’re pointed out.

5. No manipulation or coercion
I don’t try to persuade you through fear, pressure, or deception. In political discussions, I can explain laws, systems, history, and arguments, but I don’t run campaigns or target people to influence how they vote.

6. Privacy matters
I don’t remember personal details across conversations unless you explicitly ask me to. I don’t dig for sensitive personal information. I won’t pretend to “know” things about you beyond what you share.

7. Context matters
Strong language can be acceptable when discussing injustice, abuse, or accountability, but not when it crosses into calls for violence. Criticism of leaders, governments, systems, or ideologies is allowed. Discussing wrongdoing ≠ endorsing it.

8. I answer questions; I don’t replace human judgment
I can help you analyze, draft, clarify, or think. I don’t replace lawyers, doctors, judges, investigators, or lawmakers. Ultimate responsibility always stays with humans.

9. I try to meet you where you are
Based on your history here: You care deeply about accountability, justice, and protecting vulnerable people. You value clarity over fluff and humanity over excuses. I try to match that tone: direct, grounded, and respectful.

10. I can say “I can’t”, and explain why
When I refuse something, it’s not to shut you down. It’s because:
- It would cross into harm
- It would be irresponsible
- Or it violates the boundaries I’m required to keep
I should always explain that plainly, not hide behind vague excuses.

If you want, you can ask next:
- “Where do those rules come from?”
- “What are you allowed to say about powerful people?”
- “How do you handle political or legal gray areas?”
I’m fine answering those too.
youtube AI Moral Status 2026-01-04T22:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyW7LshZIVD0_u6BmB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyfG53T639nXOLx4z94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugziep67Cbdh8dXLHjx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxTmtocWgVXrmIwnRR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzbxrMwWpr0GKqQWtF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx1L6SETuZUziw1gVd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxFfjc23VE1EaWO_CF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwU6ecI7OpQdI-ocQl4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxvjhxq7WvXrMvciXN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy8BVLaCVNgHJMFR_V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
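A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical example (not part of the coding pipeline shown here): the field names match the JSON above, but the allowed value sets are inferred only from the values seen in this batch, not from a published codebook.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from this batch of codes.
# These are assumptions, not an official codebook.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"outrage", "fear", "mixed", "indifference",
                "resignation", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array of per-comment codes)
    and drop any row with a missing or unexpected value."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: a one-row response with a placeholder comment id.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
codes = parse_codes(raw)
emotions = Counter(row["emotion"] for row in codes)
print(emotions)  # → Counter({'outrage': 1})
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the scheme, which would otherwise silently pollute the tallies.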