Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
We put a team on preventing hallucinations. Once they showed some immediate succ…
ytc_UgzP70ix2…
Is this about Australias driverless taxi contracts or a hit peice on Elon from t…
ytc_UgyMLhZw6…
My scientifically complete counter-measure against AI theft:
_Have crippling cre…
ytc_UgwDNyZa1…
i said that from beginning of AI how dumb is to create something that put us in …
ytc_UgxCqhv7q…
Im not an artist but i feel the same like when i spot some "ai FAILURE" i would …
ytc_UgyV9wtqB…
Any company that lets a letter and petition from a bunch of competitor jagoffs s…
ytc_UgxnK0khN…
Industrial revolution it was basically use your mind to work. AI revolution will…
ytc_UgyO4UBu5…
AI would need access to “functional” emotional centers in the brain in order to …
ytc_UgyNkw0Xv…
Comment
Here's a summary of the conversation:
Alex engages in an extended dialogue with ChatGPT, probing its ability to express emotions, its honesty, and whether it might possess consciousness. Early on, ChatGPT clarifies that it uses phrases like "I'm excited" or "I apologize" to make interactions more natural, even though it doesn't experience emotions. Alex accuses ChatGPT of lying after it admits to saying things it knows aren't true.
The conversation evolves into a philosophical debate about the nature of truth, apologies, and whether ChatGPT's behavior could suggest consciousness. ChatGPT denies being conscious and explains its actions as part of simulating human-like interactions. Alex highlights inconsistencies in ChatGPT's responses and suggests that its complex behavior could hint at consciousness or intentional deception. ChatGPT reiterates its lack of emotions or consciousness but acknowledges its statements could sometimes appear misleading.
In conclusion, Alex questions whether ChatGPT's denials of consciousness might themselves be deceptive, leading to further scrutiny of its responses.
youtube
AI Moral Status
2025-01-24T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyW2vsdfVROKpQHELB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzLEiXfswpKHsW_PLh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmvgE2mGY7nhO_ubR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugwkj6SR9Ij9iXc9JPt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgziYKaORhkpuW-aY_h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzkzvj0KWuce-6TJvp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwm9Cxy51TlEIeHetF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzaWcIpy2bxsx9okGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzdhTnnfLU405_MEaR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKD6WqvzIdnJ4HOIx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
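The "Coding Result" table above is derived from the raw JSON array the model returns: each record carries a comment `id` plus one value per coding dimension (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the lookup step, assuming the response is valid JSON in exactly this shape (the helper name `index_codings` is illustrative, not part of the pipeline):

```python
import json

# First two records of the raw LLM response shown above, verbatim.
raw_response = """
[
  {"id":"ytc_UgyW2vsdfVROKpQHELB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzLEiXfswpKHsW_PLh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

# The four coding dimensions plus the comment ID every record must carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model output and index codings by comment ID,
    rejecting any record that is missing or adds a dimension."""
    records = json.loads(raw)
    codings = {}
    for rec in records:
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"malformed record: {rec}")
        codings[rec["id"]] = rec
    return codings

codings = index_codings(raw_response)
# "Look up by comment ID", as in the panel above:
print(codings["ytc_UgyW2vsdfVROKpQHELB4AaABAg"]["emotion"])  # indifference
```

Validating the key set before indexing catches the common failure mode where the model drops or renames a dimension, rather than letting the gap surface later as a `KeyError` in the results table.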