Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
PSA: I encourage you to consider that the moderate take remains the best. Specifically,
• output like this is not truthful, in the sense that it is not indicative of sentience as asserted
• the behavior of every LLM is known to derive from higher-order abstractions, i.e. there is sound reason to believe (and it has been shown in some cases) that they are internally constructing semantic models of the world, and learning algorithms, hence it is no longer controversial to assert:
• LLMs are doing far more than "stochastic parroting" or "predicting words". Word prediction is better understood as the mechanism of training than as a useful description of what is transpiring when they generate responses
QED: while they are not sentient and don't have minds in the sense that humans do atm, they are on that path, because what they are doing is becoming increasingly "mindy" as they scale.
Editorial footnote:
More importantly, their "mindiness" will very soon be enhanced by comparatively straightforward architectures which pair LLMs with an array of perceptual input channels, planning-problem decomposition/recursion/delegation abilities, and some sort of governing executive planner which recurrently stimulates them.
There is no reason that one cannot train multi-modal networks whose abstracted semantics extend from the marriage of the linguistic and the visual to other domains.
Chaining models into aggregates which represent the confluence of specialized components overseen by a serially-planning reentrant executive is very obviously the next Thing.
I assume that work is being done now.
I predict its outcome will be profound.
Source: reddit · AI Moral Status · Posted: 2023-03-17 (Unix timestamp 1679076410) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jck7uv2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_jclg79t","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"rdc_jcktskg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_jcjv326","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"rdc_jck3v8d","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]
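For reference, a coding result like the table above can be recovered from a raw LLM response by parsing the JSON array and looking up a comment ID. The sketch below is illustrative only, not part of the tool itself; the `lookup` helper is hypothetical, and `RAW_RESPONSE` is a shortened copy of the example response above:

```python
import json

# Shortened copy of the raw LLM response above: one JSON object per coded comment.
RAW_RESPONSE = """
[
 {"id":"rdc_jck7uv2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_jclg79t","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
"""

def lookup(comment_id: str, raw: str = RAW_RESPONSE) -> dict:
    """Return the coded dimensions for one comment ID, or raise KeyError."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            # Drop the ID itself so only dimension/value pairs remain.
            return {k: v for k, v in row.items() if k != "id"}
    raise KeyError(comment_id)

# Render the coded dimensions for one comment as markdown table rows.
for dimension, value in lookup("rdc_jclg79t").items():
    print(f"| {dimension.capitalize()} | {value} |")
```

Note that such a lookup makes any discrepancy between the rendered table and the raw response (e.g. a "Reasoning" value that differs) easy to audit against the model output.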