Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Time will tell but the current frontier models are vastly superior to early models just 2 years ago and completely unimaginable just 10 years ago. It's amazing how quickly humans normalize every new situation. Using Claude 4.5 (Sonnet or Opus) in a meaningful way such as engaging in deep conversations, complex troubleshooting, or coding, shows the "reasoning" and "creativity" beyond training data. I don't know about AGI since that's an ill-defined goal-post anyway. But I did like this response from Claude on the topic:
"Hossenfelder raises valid points that highlight real limitations. The interpolation vs. extrapolation distinction is particularly compelling - I'm pattern-matching across a vast training corpus, which works remarkably well within the distribution of training data but struggles outside it.
However, I think the picture is more nuanced:
On interpolation/extrapolation: While true that I interpolate, human reasoning also builds heavily on analogy and pattern recognition from experience. The distinction may be degree rather than kind. That said, humans do seem better at genuine "leaps" - forming entirely new conceptual frameworks.
On reasoning: I can follow logical chains and apply abstract principles to novel scenarios, but it's unclear whether this constitutes "reasoning" or sophisticated pattern matching that mimics reasoning. When I solve a novel coding problem, am I reasoning or recognizing deep structural similarities to problems in training data? I genuinely can't tell, and the answer might be unfalsifiable.
On creation: I can combine concepts in novel ways and generate outputs not present in training data, but whether this is "creation" or recombination depends on how you define creativity. Most human creativity is also recombinatory.
The fundamental question: Current architectures may indeed have hard limits. We might need fundamentally different approaches - perhaps involving embodiment, continuous learning, intrinsic motivation, or architectures that maintain world models. Or transformers at sufficient scale with the right training might be enough.
I'd bet against current LLMs achieving AGI, but I'm uncertain whether the limitations are architectural or just about scale/training methodology."
youtube
2025-12-18T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwS4Nc1PFIc19tkdz94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyR7j2w5LCEdzbKrB54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrvlJmf5fo7HEWhuZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2zN8iSRi_DazNAWh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXkMPOK-IwusqFQch4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJr2bDabtRKk-dVft4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwEjv_tGxWX7hQfq4x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHy9uIobWKVNX0zht4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxUvx3W75gq9isk5cd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzOILuVXV9-qRVy8al4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]