Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I'm not going to argue in favour of current LLM consciousness (or the lack thereof actually), but I have a question:
You might infer that I'm conscious based on a mix of my behaviour and some assumptions on your part. I might talk about myself and my subjective experience of consciousness, for example - that's a behaviour. And as you're likely to assume I'm human, you might conclude that I'm therefore conscious, given that as far as we know, humans experience consciousness subjectively, at least to some degree. However, I submit that I could be an AI agent, indistinguishable from a real human, communicating with you over this medium of Reddit.
So far so Turing test, but what if we explicitly detach the assumption about humanity, or more precisely, challenge the assumption that only humans (or biologically embodied animals with similar brains) can be considered to be conscious? Then your claim reduces to a hard claim that LLMs *cannot* be conscious, which is a far higher bar to clear.
If that's what you hold to be true, then what would need to change architecturally for LLMs to remove that constraint?
I don't believe we understand consciousness fully enough to identify how it is architected in the human brain, in detail. We may have some ideas, but it still looks "emergent". LLMs are still currently simpler mechanisms than human brains, so we might have more confidence claiming that AI consciousness is impossible, but until we have a clear non-anthropocentric model of consciousness, it's just a fuzzy conclusion from the other side of the confidence curve.
Rather than ask the simple question of "how do you know I'm conscious" and risk the inevitable rabbit hole that leads to, I'll ask instead: "is current AI more conscious than a dog?". Have we reached ADogGI?
Is human consciousness the only game in town, in other words?
Source: reddit — AI Moral Status
Posted: 2025-02-19 (Unix timestamp 1739924546)
♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mdj5g30","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdinz3v","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_mdj0zmu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdjij7m","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdjnfdi","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"})