Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
Why do people in this sub hardly ever account for the fact that AI will… improve…
rdc_mjtn6x6
One of my family members has been going to college for almost 6 years now to be …
ytc_UgyKmyMlt…
@DWDocumentary There are 1.5 billion people in China. No device can process such…
ytr_Ugy7iq-3A…
I mean honestly, atp, what can we even do about it? boycott Ai dependant compani…
ytc_UgwwO5q9y…
If AI is good how come no one with sense wants it ? Brains operating from below …
ytc_Ugw3q6m_a…
Obama is gonna merge with AI to rule the 🌎, very soon.
Musk wants to rapture …
ytc_UgzJJ6wL6…
Cope. The era of the technical programmer is over. Anyone who has seriously inve…
ytc_Ugx9eIihk…
This is exactly the type of lazy person who would support ai I unfortunately. Th…
ytc_Ugwfo2xQO…
Comment
I disagree. Skilled humans provide precise, articulate prompts: our "wet" brains guide the "dry" neural net to precise outputs.
• Progress is already addressing these.
• On grounding/understanding: We're integrating tools (search, code execution, memory), multimodal inputs (vision, audio), and agentic systems (planning loops, self-correction). This builds a better "world model."
• On hallucinations: Retrieval-augmented generation (RAG), fact-checking chains, and verification steps reduce them dramatically. Future systems will lean more on hybrid architectures.
• On reasoning: Techniques like chain-of-thought, tree-of-thought, and external scaffolding (e.g., running simulations or code) enable deeper multi-step thinking. Scaling helps too—larger models show emergent abilities in abstraction and planning.
• Possible solutions beyond pure scaling:
• Hybrid architectures: Combine transformers with symbolic reasoning, neurosymbolic systems, or cognitive architectures (inspired by folks like Joscha Bach's work on modeled minds).
• Agentic frameworks: Systems that act in the world (e.g., controlling robots, running experiments) to ground knowledge experientially, much like human learning. Self-improvement loops: Recursive self-enhancement, where AI designs better AI, potentially leading to breakthroughs in causal understanding.
• Incorporating heuristics: Explicit moral centers, "wonder" algorithms (curiosity-driven exploration), and intuition proxies (e.g., variational methods or uncertainty modeling) can make communication richer and more human-aligned.
You're spot on that human input is key right now—we amplify each other. But as systems evolve, they'll increasingly bootstrap their own "intuition" through interaction with reality, not just text. I don't think we're capped forever; the path to deeper reasoning and communication is through iteration and architectural innovation, not just bigger LLM versions of today.
youtube
2025-12-14T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxnUXLDm8-HoUAzXQV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDU4uY4IGgwZMkV8J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwWzttJu0iadnqJiol4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwmntsyUAqaOhxmj1V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy3OGqGLionuQu4YaB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugze1pM0_irzApgyKpx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzmDe4-8x8caPqQbA14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzOBBxDfmxX8LRkXqx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwPqgEj2jS6tMQE9D54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxiBvsb9T-6L0TYW654AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
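Before storing codes, a raw response like the one above can be parsed and sanity-checked. A minimal sketch, assuming the four coding dimensions shown in the table; the allowed-label sets here are inferred from the values that actually appear in this dump, not from a documented codebook:

```python
import json

# Allowed labels per dimension, inferred from the values seen in this
# dump; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "user"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "industry_self", "liability"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    mapping from comment ID to its coded dimensions, rejecting any record
    with a missing field or an out-of-vocabulary label."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a single hypothetical record:
raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"mixed",'
       '"policy":"unclear","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded["ytc_example"]["emotion"])  # approval
```

Rejecting out-of-vocabulary labels early keeps a single malformed model response from silently contaminating the coded dataset.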