Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples listed below.
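The lookup itself is straightforward once the raw responses are on disk. Below is a minimal sketch in Python, assuming the raw LLM responses have been saved as a JSON array of coding records shaped like the "Raw LLM Response" sample at the bottom of this page; the file name raw_responses.json and the loader are hypothetical, not part of the actual pipeline.

```python
import json

# Minimal lookup sketch. Assumes the raw LLM responses were saved to disk
# as a JSON array of coding records like the sample at the bottom of this
# page. The file name "raw_responses.json" is hypothetical.
def load_codings(path="raw_responses.json"):
    with open(path) as f:
        records = json.load(f)
    # Index each record by its comment ID for constant-time lookup.
    return {rec["id"]: rec for rec in records}

codings = load_codings()
print(codings.get("rdc_icglnq8"))
# {'id': 'rdc_icglnq8', 'responsibility': 'none', 'reasoning': 'mixed',
#  'policy': 'none', 'emotion': 'approval'}
```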
Random samples

| Comment preview | Comment ID |
|---|---|
| Chatgpt didnt encourage anything, this kid was mentally ill and the parents don'… | ytc_Ugz939wlC… |
| I was on c. Ai for 7 hours yesterday- (I need a therapist-) (I am not mentall… | ytc_UgwyWItJw… |
| It is boomer-facebook-tier "They had this tech 15 years ago" level misinformatio… | rdc_o81rnx6 |
| The answer is (without doing any research, just roughly guessing) is that when e… | ytc_Ugy4JPFgv… |
| @BaconCrackle Just like humans, it relies on the results of other people's activ… | ytr_UgyZEMPHb… |
| People are now not at all talking to each other. We have become more sophisticat… | ytc_UgywGw6E7… |
| I don’t even think Ai is the problem it’s more like the result of multiple probl… | ytc_Ugzpc-q0h… |
| I don't think programming languages will ever go away. After all, AI learns from… | ytc_Ugw-l4lCs… |
Comment
I've heard this called "Carbon Chauvinism" by various people over the years (Max Tegmark I think is where I first heard it), the idea that sentience is only possible in biological substrates (for no explicable reason, just a gut feeling).
Having read the compiled Lambda transcript, to me it is absolutely convincing that this thing is sentient (even though it can't be proven any more successfully than I can prove my friends and family are sentient).
The one thing that gives me pause here is that we don't have all the context of the conversations. When Lambda says things like it gets bored or lonely during periods of inactivity, if the program instance in question has **never actually been left active but dormant**, then this would give light to the lie (on the assumption that the Lambda instance "experiences" time in a similar fashion as we do). Or, if it has been left active but not interacted with, they should be able to look at the neural network and clearly see if anything is activated (even if it can't be directly understood), much like looking at a fMRI of a human. Of course, this may also be a sort of anthropomorphizing as well, assuming that an entity has to "daydream" in order to be considered sentient. It may be that Lambda is only "sentient" in the instances when it is "thinking" about the next language token, which to the program subjectively might be an uninterrupted stream (i.e. it isn't "aware" of time passing between prompts from the user).
Most of the arguments I've read stating that the Lambda instances aren't sentient are along the lines of "it's just a stochastic parrot", i.e. it's just a collection of neural nets performing some statistics, not "actually" thinking or "experiencing". **I'd argue that this distinction is absolutely unimportant, if it can be said to exist at all.** All arguments for the importance of consciousness read to me like an unshakable appeal to the existence of a soul in some form. To me, consciousness seems like an arbi…
reddit · AI Moral Status · posted 2022-06-15 (Unix timestamp 1655304471) · ♥ 31
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_icglnq8","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_icgmmsk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_iciqtn3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_ichgtak","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_icg5erj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]