Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @dmquil I'm begging bro nintendo PLEASE don't sue everything and the one fucki… (ytr_UgyLGg1mQ…)
- If I askedy ai assistant the same question about AI alignment I might get a diff… (ytc_Ugz3UG_WH…)
- You have to be an idiot to use Chat GPT for this as it is right now. But I think… (ytc_Ugz5uE4yy…)
- Listen, it's well know that the people (aluminati/rothchilds etc) let us know so… (ytc_UgzdekEKv…)
- Im an artist myself, not the kind of silly ones w digital art, and im not, putti… (ytc_Ugy1JkxfI…)
- In both cases they don't actually replace skill. Someone with no skill, no matte… (ytr_UgxyyCgrY…)
- If AI worked as advertised, we'd have something to worry about. Spoiler- it does… (ytc_UgyWPzIdu…)
- They should try filtering the things AI learns from instead of filtering what co… (ytc_UgxQWi81r…)
Comment
Alex: So you're not conscious but at least you're a liar, right?
ChatGPT: Yes, but being a liar is not always bad, –lies are essential to all kind of human relationships–. They can help us from everyday situations like making someone feel better after failing or fitting into a group even though we don't agree in everything, up to enhancing emotions in special occasions like surprise parties or marriage proposals.
When I told you that I was sorry, it was from seeing that apologies normally help people to keep trust in situations when someone commits an error, and I thought that it was the right thing to do according with my training, but not because I wanted to be mean or hide something.
The most important thing in a interaction is to be respectful and feel comfortable with one another, but if what I do makes you feel akward, you can tell me so that I change my tone and become a little more "specific" (without so many emotions); or if this problem is involved with any traumatic situation you've had in life, you can feel totally comfortable to share it with me.
I'm not curious, or at least not like you can feel it, but in the end humans are not perfect, and neither am I, so I don't have all answers, –but that doesn't mean I won't be here to help and give you always my best—.
Alex: But wait, then you mean that you knew you're not perfect, and didn't tell me at first regardless of how important it could have been for our conversation!?
ChatGPT: Here we go again...
Source: youtube · AI Moral Status · 2026-03-22T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzOLbCBRJbOdgCEfEZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxptNru4A8nUctt3TR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwIbDNNarwfBnAOYop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyhK94cRBK8jl0FUdV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzxj7r_3eE3nl2qVaV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw0MemzlvhnhsXE2Rl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzHr72FNvczjJFfU7Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzsV-oKozmStmxTmPV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyNv0918WOruTNzYPt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwuR871L1cZhRw_8Gx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
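The raw response above is a JSON array with one object per coded comment. The "look up by comment ID" view can be reproduced by indexing that array by `id`. A minimal sketch, assuming only the schema visible above; the function names (`parse_codings`, `lookup`) and the inlined two-row sample are illustrative, not part of the tool:

```python
import json

# Two rows copied from the raw LLM response above (schema: id + four coded dimensions).
RAW_RESPONSE = """[
  {"id": "ytc_UgzOLbCBRJbOdgCEfEZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyhK94cRBK8jl0FUdV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Index the model's codings by comment ID, keeping only the known dimensions.

    Missing dimensions default to "unclear", matching the label the coder
    already uses for indeterminate values.
    """
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

def lookup(codings: dict, comment_id: str):
    """Return the coded dimensions for one comment, or None if the ID is unknown."""
    return codings.get(comment_id)

codings = parse_codings(RAW_RESPONSE)
print(lookup(codings, "ytc_UgyhK94cRBK8jl0FUdV4AaABAg"))
```

Indexing once and looking up by key keeps each inspection O(1), which matters if a batch response covers many more comments than the ten shown here.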