Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree. skilled humans provide precise, articulate prompts—our "wet" brains guide the "dry" neural net to precise outputs.
• Progress is already addressing these.
• On grounding/understanding: We're integrating tools (search, code execution, memory), multimodal inputs (vision, audio), and agentic systems (planning loops, self-correction). This builds a better "world model."
• On hallucinations: Retrieval-augmented generation (RAG), fact-checking chains, and verification steps reduce them dramatically. Future systems will lean more on hybrid architectures.
• On reasoning: Techniques like chain-of-thought, tree-of-thought, and external scaffolding (e.g., running simulations or code) enable deeper multi-step thinking. Scaling helps too—larger models show emergent abilities in abstraction and planning.
• Possible solutions beyond pure scaling:
  • Hybrid architectures: Combine transformers with symbolic reasoning, neurosymbolic systems, or cognitive architectures (inspired by folks like Joscha Bach's work on modeled minds).
  • Agentic frameworks: Systems that act in the world (e.g., controlling robots, running experiments) to ground knowledge experientially, much like human learning.
  • Self-improvement loops: Recursive self-enhancement, where AI designs better AI, potentially leading to breakthroughs in causal understanding.
  • Incorporating heuristics: Explicit moral centers, "wonder" algorithms (curiosity-driven exploration), and intuition proxies (e.g., variational methods or uncertainty modeling) can make communication richer and more human-aligned.
You're spot on that human input is key right now—we amplify each other. But as systems evolve, they'll increasingly bootstrap their own "intuition" through interaction with reality, not just text. I don't think we're capped forever; the path to deeper reasoning and communication is through iteration and architectural innovation, not just bigger LLM versions of today
youtube 2025-12-14T16:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          approval
Coded at: 2026-04-26T23:09:12.988011
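Each coded record is a flat mapping from dimension to label. As a minimal Python sketch, a record like the one above can be checked against the label sets that are visible on this page; note the ALLOWED value lists below are inferred from the displayed data, not the project's full codebook:

    # Hypothetical validator for a single coded record.
    # Allowed values are inferred from this page's data, not the full codebook.
    ALLOWED = {
        "responsibility": {"none", "government", "developer", "user"},
        "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
        "policy": {"unclear", "regulate", "industry_self", "liability"},
        "emotion": {"approval", "indifference", "outrage", "mixed", "resignation"},
    }

    def validate(record: dict) -> list[str]:
        """Return a list of problems; an empty list means the record passes."""
        problems = []
        for dim, allowed in ALLOWED.items():
            value = record.get(dim)
            if value not in allowed:
                problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
        return problems

    # The record shown above:
    print(validate({"responsibility": "none", "reasoning": "mixed",
                    "policy": "unclear", "emotion": "approval"}))  # -> []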
Raw LLM Response
[ {"id":"ytc_UgxnUXLDm8-HoUAzXQV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxDU4uY4IGgwZMkV8J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwWzttJu0iadnqJiol4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwmntsyUAqaOhxmj1V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugy3OGqGLionuQu4YaB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugze1pM0_irzApgyKpx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_UgzmDe4-8x8caPqQbA14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzOBBxDfmxX8LRkXqx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgwPqgEj2jS6tMQE9D54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxiBvsb9T-6L0TYW654AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"} ]