Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Robot malfunction and then shoots gun at someone else…. Or if they all gang up o…" — ytc_UgxOWaWpz…
- "Ban all data centers. If the elites that own these ai companies love ai so much …" — ytc_Ugz7k9JEM…
- "I'm sorry you have to go through all this crap. Tbh I cannot agree that people w…" — ytc_UgwATsLph…
- "As a multimedia visual artist for over 15 years, I don't mind AI art. I think it…" — ytc_Ugx-q6SfI…
- "Yann LeCun said that today we don't even have an AI which has the intelligence o…" — ytc_UgwPq63Uj…
- "I've been meaning to ask: Does anyone know if self-driving cars have an "emergen…" — rdc_f6y9jng
- ""They can have thoughts in sentences and then go back and think aboit those thou…" — ytc_Ugyxl84xU…
- "I know Alex does this for content as an AI doesnt -know- Therefore, everything …" — ytc_UgzqIUtyc…
Comment
I think you underestimate how blurry things can get, as soon as you ditch human exceptionalism as a core assumption.
>They do not have the capacity to feel, want, or empathize
Okay. What behavior does an LLM need to show so that you would admit that it has the capacity to feel, want, or empathize?
If you don't assign the ability to feel, want, or empathize on behavior that someone or something shows, what do you base it on?
>They do form memories, but the memories are simply lists of data, rather than snapshots of experiences.
You think human memories are snapshots of experiences? Oh boy, I have a bridge to sell you.
Human memories are just weights in neuronal connections, and not "snapshots of experience". But fine. Let's run this into a wall then:
When weights in a neuronal network are "snapshots of experience", then any LLM, whose whole behavior is encoded by learned weights in neural networks, is completely built from memories which are snapshots of experiences.
Wait, the weights in a human neural network which let us recall things, count as "snapshots of experiences", while the weights in a neuronal network of an LLM, which enables it to recall things, do not count? Why?
>LLMs will write about their own consciousness if you ask them too, not because it is real, but because you asked them to.
And you write about your consciousness because it's real? How is your consciousness real? Show it to me in anything that isn't behavior. Show me your capacity to feel, want, or empathize in ways that are not behavior. Good luck.
>There is no amount of prompting that will make your AI sentient.
Meh. I can make the same argument about you: There is no amount of prompting that will make you sentient.
Of course you will argue against that now. But that's not because you are sentient, but because your neuronal weights, by blind chance and happenstance, are adjusted in the way which triggers that behavior as a response. Nothing about that points toward co
Source: reddit
Topic: AI Moral Status
Posted: 2025-02-19 01:24:43 UTC (Unix timestamp 1739928283)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mdjclr9", "responsibility": "none",    "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mdjn4xp", "responsibility": "none",    "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mdjiq5l", "responsibility": "none",    "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mdilzp3", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mdio93r", "responsibility": "none",    "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
```
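Since the model returns one JSON array per batch, looking up a single coded comment means parsing the response and indexing it by ID. A minimal sketch in Python (variable names are illustrative; the string below is abridged to two entries from the response above):

```python
import json

# Raw model output as returned by one coding run, abridged to two
# entries. Field names match the columns of the Coding Result table.
raw_response = """[
  {"id": "rdc_mdjclr9", "responsibility": "none", "reasoning": "mixed",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mdilzp3", "responsibility": "company", "reasoning": "mixed",
   "policy": "unclear", "emotion": "fear"}
]"""

# Parse once and build a dict keyed by comment ID, so a single
# code assignment can be fetched without scanning the whole batch.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

print(codes_by_id["rdc_mdilzp3"]["emotion"])  # prints "fear"
```

The same dictionary makes it easy to cross-check the rendered table against the raw output for any one comment.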