Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @pamflowers8364 I thought the same thing! Not to be disrespectful to doctors, bu… (ytr_UgyDzkmMw…)
- Explore Economics with Artificial Intelligence: 1. Ask an AI when PhD economist… (ytc_UgzqY99AA…)
- I like to think I put real effort into on-device AI generations (aka not data ce… (ytc_Ugwuhmq4n…)
- AI art training shouldve asked artists to use their art, and at least pay them a… (ytc_UgzS5f1gT…)
- Copying a video 20 times will not degrade a video unless you or the platform you… (ytc_UgzGXOynG…)
- @Mrgamer79878 its still better to try than use ai. and if it really comes down t… (ytr_UgyfqKjXL…)
- Companies will replace us, governments won’t want to fall behind other countries… (ytc_UgyF4CRCG…)
- But to whom will this ai companies sell when all other companies are making loss… (ytc_Ugxd7fWXx…)
Comment
The full response I got is:
✅ What It Gets Right
1. “LLM = Predictive Math”
Spot on. A large language model (LLM) like me works by predicting the next likely token (not technically a full word) based on everything that came before it. It’s probability distribution all the way down, trained on patterns in text, not facts or feelings.
2. “It doesn’t think or know”
Correct again. I don’t have beliefs, self-awareness, or a concept of time. No inner monologue. No goals. I don’t “understand” in the way humans do, even though I can generate text that sounds like I do.
3. “It mirrors you”
That’s a great metaphor. People often see themselves in the output. You’ll get different tones, opinions, and styles based on how you ask the question. This is why someone treating me like a wise oracle will get a very different experience from someone treating me like a casual brainstorming partner.
4. “It’s not proof of thought”
Yup. Output that feels smart or deep isn’t evidence of sentience. It’s just a model trained on oceans of human-generated data — patterns in, patterns out.
⸻
⚠️ Where It’s Slightly Oversimplified
1. “It doesn’t incorporate your likes/dislikes” (unless it’s been trained to)
The post says it’s “programmed to incorporate your likes and dislikes” — that’s only true if the system has persistent memory (like this one with your profile) or the prompt explicitly gives it that context. Out of the box, an LLM doesn’t remember or track individual preferences.
2. “It doesn’t know what today is” — true, but I can fetch or be told the date
I don’t inherently know today’s date, but I can be told it, and in some setups, I can even retrieve it from system time. So the “no awareness of now” is functionally true, but not technically absolute.
3. “It’s just impressive code”
While that’s accurate, it’s also a bit like calling the Mona Lisa “just paint and canvas.” Yes, it’s code. But the scale and design behind that code is what makes it extraordinary. Trillions of tokens, r
Source: reddit, "AI Moral Status"
Posted: 1749759300.0 (Unix epoch, 2025-06-12 UTC)
♥ 69
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mxwabjj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_mxfgu4n","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_mxfymfg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_mxfrkq3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_mxgbs4b","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
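The raw response above maps one-to-one onto the coded table: each JSON object carries the four coding dimensions (responsibility, reasoning, policy, emotion) keyed by a comment ID. A minimal sketch of turning such a response into per-comment rows, assuming the model returns a well-formed JSON array (the `parse_codings` helper and its fallback value are illustrative, not the pipeline's actual code):

```python
import json

# Raw model output: a JSON array of coding objects, one per comment
# (two rows reproduced from the response above).
raw = '''[
 {"id":"rdc_mxwabjj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_mxfgu4n","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a raw LLM response into {comment_id: {dimension: value}}.

    Missing dimensions fall back to "unclear" (an assumed default).
    """
    coded = {}
    for row in json.loads(text):
        coded[row["id"]] = {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

coded = parse_codings(raw)
print(coded["rdc_mxfgu4n"]["emotion"])  # prints: approval
```

Keying the result by comment ID rather than list position keeps the coding robust if the model returns rows out of order or drops one.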