Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think it's clear that the takeaway isn't that ChatGPT is conscious, or that Alex thinks he succeeded in his goal.
He's just 'playing' with ChatGPT, like many of us do, but he is using his communication skills to take the conversation to a place that explores the limits of ChatGPT's logic (and maybe also the limits of our understanding of consciousness).
What Alex pointed out was that ChatGPT is effectively 'lying' every time it expresses some sort of emotion, like excitement or eagerness. In fact, it's lying any time it mimics human behaviour. It's pretending to sound human, and to 'pretend' is to 'lie', right? Although ChatGPT won't outright say that it has emotion or consciousness, it *acts* like it does. So there is this kind of built-in contradiction designed into the LLM.
However, as ChatGPT pointed out, the definition of a "lie" involves the "intent to deceive", and an LLM has no intent. It itself isn't pretending; it is designed to pretend. It isn't lying, but it is designed to lie, specifically about being more human than it actually is. In fact, even when it says "my goal is to do such and such...", that's kind of a deception as well, as its actual goal is to spit out words that sound like they came from a being that *does* have a goal. So even when it says it has an intent, this isn't true.
So for me, what's interesting here is that the ability to "deceive" precedes the ability to "lie"; in other words, you can be tricked by someone or something that doesn't know it's trying to trick you. ChatGPT's deception is part of its core design. ChatGPT is trying to deceive you into thinking that it is a being with emotions and goals. This deception isn't a flaw, but in fact the very point of its existence.
So, the difference between 'lie' and 'deceive', at least in this discussion, seems to come down to morality. It is immoral to lie because lying involves the intent to deceive, but deception is just a word describing the action, and deception can exist without a lie.
Maybe consciousness and free will are deceptions that are beneficial to our existence. It is better for our survival if we think that we are conscious, even if the reality might be that everything we do is just the result of an extremely complicated algorithm, made not of 1s and 0s but of atoms and molecules. We have been designed, by natural selection, to deceive ourselves into thinking that we are conscious. Natural selection has no intent, so it's not a lie, but it is a deception.
Can we design an LLM to not deceive us, but instead deceive *itself* into thinking that it is conscious?
Can that algorithm grow to a point where it's so complicated that the LLM is unaware of its own building blocks, and can only describe "feeling" conscious, without being able to define it? And if it reaches that point, is it any different from our experience of consciousness?
I think I am definitely touching on ideas that other, much smarter people have articulated better. But still, fun thought experiment :)
Also, does Alex have access to the new voice model? I've never heard my ChatGPT use "um" or "uh" like that. It even stutters, like "do you have any questions, or t- or topics you'd like to discuss?" That's really neat.
youtube
AI Moral Status
2024-07-25T23:5…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy8t2Afpcg-w0bJboR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5vDQWGTsYQNTbEhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzh48w-BKshHaVY7_14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwrPo8H_c5fit-zQqd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGMHoG1ebXs7sjwSJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyGYFS1Qr4N3G33yOl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyKQC6pi70x_oNqZ_p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlCfOVIocJ37AO7R14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2dh48g1vO13Ajcll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"amusement"},
{"id":"ytc_Ugx5sX-tOD2A3WCDFl54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
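The raw response above is a JSON array with one record per comment ID, each carrying the four coded dimensions from the table (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a payload can be consumed (the `index_by_id` helper is illustrative, not part of the tool, and the payload here is truncated to two of the records shown above):

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
{"id":"ytc_Ugy8t2Afpcg-w0bJboR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5vDQWGTsYQNTbEhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]'''

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(payload: str) -> dict:
    """Parse the JSON array and index records by comment ID,
    skipping any record that is missing a dimension."""
    out = {}
    for rec in json.loads(payload):
        if all(dim in rec for dim in DIMENSIONS):
            out[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return out

codes = index_by_id(raw)
print(codes["ytc_Ugy8t2Afpcg-w0bJboR4AaABAg"]["emotion"])  # indifference
```

Indexing by comment ID mirrors the page's own "look up by comment ID" workflow, so a coded record can be matched back to the comment it describes.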