Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
It's honestly what I hoped would happen. With proper competition, AI should now …
rdc_m9hegpt
Just think if AI gets angry with us and thinks that we are going to shutdown all…
ytc_UgzCUc1Db…
Regardless of AI it’s already pointless to go to college for the purpose of earn…
ytr_UgymnPPtB…
@laurentiuvladutmanea bruh u are destroying and humiliating those ai defenders i…
ytr_UgzMs2mp3…
My company has been trying do more with AI. Our departments new intern was given…
rdc_ofh48kx
The top 0.1% have a poorly thought out plan to fix climate change, et al: Use AI…
ytc_UgwgCYPvO…
In other words Sundar Pichai holds the exact same philosophy on A.I. ethics as C…
ytc_UgzOU6zf_…
This is fakeeeee he is not fighting a robot he got knocked out like that by anot…
ytc_UgwNOMN24…
Comment
It's now a common explanation, and even AI researchers used it at one point, but there is more and more evidence that it's wrong, or at least highly misleading.
The best evidence for that is Anthropic's recent paper, where they looked at exactly this question:
[https://transformer-circuits.pub/2025/attribution-graphs/biology.html](https://transformer-circuits.pub/2025/attribution-graphs/biology.html)
I really recommend reading it; it's not too hard to understand and contains some really interesting experiments with genuine insights.
One of them is that LLMs clearly aren't "just" predicting the next word (token) with some sort of simple statistical model equivalent to a "likelihood"; they in fact do "think" about it, consider the "bigger picture", and have "concepts" rather than something like a "lookup table".
There are, for example, "abstract, language-independent circuits", and they "plan" their outputs ahead of time, as shown in the poetry example (which clearly goes against the narrative that LLMs just spit out one token at a time without any "thought"/"planning").
PS: While we need to be careful not to oversimplify comparisons to the human brain, there are many arguments/theories stating that intelligence evolved as nothing more than a "future state prediction machine", which is obviously helpful in the natural environment.
A similar argument is made for our "consciousness" and "self-awareness", which act as a "meta-layer" that allows for better (long-term) planning/prediction.
There is also a really fascinating similarity in how LLMs organise information in their "latent space": visualizations of it look eerily similar to brain activity scans.
reddit
AI Governance
1745072178.0
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mnxhj0q","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mnpdeh3","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mnpc2s6","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"rdc_mnq5pyy","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mnodebh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
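The raw response above is a JSON array of codings keyed by comment ID. A minimal sketch of how such a response might be parsed and validated before display (the allowed value sets and the helper name are illustrative assumptions inferred from the table and response, not the project's actual code):

```python
import json

# Sample of the raw LLM response shown above (two rows kept for brevity).
RAW_RESPONSE = """
[
 {"id":"rdc_mnxhj0q","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_mnpc2s6","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
"""

# Assumed (partial) codebooks per dimension, inferred from the values
# visible in this dump; the real codebook may contain more categories.
ALLOWED = {
    "responsibility": {"none", "unclear"},
    "reasoning": {"unclear"},
    "policy": {"unclear"},
    "emotion": {"indifference", "outrage", "approval"},
}

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse the JSON array and index codings by comment ID, checking values."""
    out = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if dim in ALLOWED and value not in ALLOWED[dim]:
                raise ValueError(f"unexpected {dim}={value!r} for {cid}")
        out[cid] = row
    return out

codings = parse_codings(RAW_RESPONSE)
print(codings["rdc_mnpc2s6"]["emotion"])  # -> outrage
```

Indexing by ID is what makes the "look up by comment ID" view above possible: each coded comment's dimension values can be fetched directly from the parsed mapping.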