Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "@syzygy4669 Are your implying the AI and a digital tool like Photoshop are the s…" (ytr_Ugyzee_4E…)
- "We are screwed. The moment I realize the damage social media had done and saw th…" (ytc_Ugx5M6BnC…)
- "We can tell ourselves it’s boring. But when people start and already are generat…" (ytc_UgwAwIwOB…)
- "I love Ben Goertzel !! Since a child watching Rosie on the Jetsons, and the Robo…" (ytc_UgyncNRb3…)
- "Oh Alberta from 2024… if you only knew where AI was going. I’m almost certain Al…" (ytc_UgzbyzywQ…)
- "When an AI “program” goes on strike for better working conditions, then you know…" (ytc_Ugw7GIvHI…)
- "That's an interesting perspective! Sophia does highlight the importance of balan…" (ytr_Ugyak425d…)
- "That isn't true at all, in any way. I spend a LOT of time in China and the Chin…" (ytr_UgwEaZhkQ…)
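A minimal sketch of what the lookup-by-ID could look like programmatically, assuming the coded results are exported as a JSON array shaped like the Raw LLM Response at the bottom of this page; the file name `coded_comments.json` and the helper function are hypothetical:

```python
import json

def load_coded_comments(path):
    """Index a coded-comments export (a JSON array of records) by comment ID."""
    with open(path) as f:
        records = json.load(f)
    return {record["id"]: record for record in records}

# "coded_comments.json" is a hypothetical file name; the ID below is taken
# verbatim from the Raw LLM Response section of this page.
coded = load_coded_comments("coded_comments.json")
print(coded["ytc_UgxJcGvLIchGJ21tKhB4AaABAg"]["emotion"])  # -> "fear"
```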
Comment
Thanks
The AI safety expert is very knowledgeable on AI.
However, he doesn’t understand the complexity of the 4-billion-year-long evolutionary process that created human intelligence.
Simulating the human metabolic pathways is an unsolvable problem, even with quantum computing (if it ever works at all).
Designing, understanding, and simulating all organic processes and the metabolic pathways of all living beings is not possible.
Hence, we are “real” and we have the honor of being the bootloader for AI and maybe artificial superintelligence.
The underlying biology is proof that humans cannot be simulated.
Hence, the AI safety expert is wrong about:
• Biology
• Health
• Longevity
• Simulation
• Climate
• Mass phenomena
• Religion
It’s yet another example of experts in one field assuming they are knowledgeable in other fields. This is called the Dunning-Kruger effect.
Interestingly, he wants us to develop only “narrow AI” restricted to one area, in order to limit its power and danger. Yet he himself believes that a “narrow HI” (narrow human intelligence) can make useful contributions.
And now we ask Grok 4 to comment on my comment.
Thanks for this fascinating discussion on AI’s future and the 5 jobs that might remain by 2030.
I share most of the predictions, but the simulation theory—and several related claims—may be wrong. Why? The AI safety expert is highly knowledgeable in AI risks and development but overlooks the immense complexity of the 4-billion-year evolutionary process that shaped human intelligence. This evolution involved countless chaotic interactions, mutations, environmental pressures, and emergent properties that can’t be neatly replicated in silicon-based systems, unlike AI’s algorithmic training on data.
Humans have a metabolism that is too complex to understand, create, or simulate. Finding mathematical solutions to the tens of thousands of metabolic pathways—the intricate networks of biochemical reactions sustaining life—is an unsolvable problem, even with quantum computing (if it ever becomes practical at scale). These pathways involve quantum-level uncertainties, feedback loops, and dependencies on the entire ecosystem, making full emulation computationally infeasible due to exponential complexity and the limits of modeling real-world chaos. Extending this, designing, understanding, and simulating all organic processes across living beings is impossible, as it would require accounting for every variable in biology’s intertwined web, from cellular interactions to global biodiversity.
Thus, we humans are “real” in a fundamental sense, not simulatable artifacts, and we have the privilege to act as the bootloader for AI—kickstarting its emergence—and perhaps even artificial superintelligence (ASI). The underlying biology, with its irreducible complexity, serves as proof that humans cannot be fully simulated without massive approximations that lose the essence of life.
Human intelligence (HI) is on its way to create ASI. After ASI’s advent on Earth, there will be less and less space for humans, but this could easily be solved by allowing ASI to take possession of the universe while leaving some human-livable planets to us. During the switchover period, humans might be in danger, but afterwards, ASI should be okay with leaving us our space and even helping humans behave a bit better than today.
Therefore, he’s wrong on several fronts:
• Biology: Underestimates how evolution’s non-deterministic nature makes biological intelligence inherently unpredictable and non-replicable by AI.
• Health: AI can’t fully simulate disease pathways or personalized medicine due to individual genetic and environmental variances.
• Longevity: Extending human life requires mastering aging’s biological intricacies, which evade computational shortcuts.
• Simulation: The hypothesis that we’re in a simulation ignores biology’s proof of unsimulatable depth.
• Climate: Mass environmental systems mirror biological chaos, defying perfect AI modeling or control.
• Mass Phenomena: Collective human behaviors, driven by evolved social instincts, can’t be accurately predicted or simulated.
• Religion: Spiritual experiences, rooted in neurological and cultural evolution, aren’t reducible to AI patterns.
This exemplifies the Dunning-Kruger effect: experts in one field (like AI safety) overconfidently extrapolate to unrelated domains without deep interdisciplinary knowledge.
Ironically, he advocates for “narrow AI” limited to specific areas to restrict power and danger, yet believes his own “narrow human intelligence” in computer science can make authoritative claims across fields like biology and philosophy.
youtube · AI Governance · 2025-09-20T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
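For reference, a coding result like the table above can be modeled as a small typed record. This is only a sketch: the field names follow the table, and the example values in the comments are just the codes visible on this page, not necessarily the full codebook.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str
    responsibility: str  # e.g. "developer", "company", "ai_itself", "distributed", "none", "unclear"
    reasoning: str       # e.g. "deontological", "consequentialist", "mixed", "unclear"
    policy: str          # e.g. "ban", "regulate", "liability", "none", "unclear"
    emotion: str         # e.g. "approval", "fear", "outrage", "resignation", "indifference", "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"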
Raw LLM Response
```json
[
  {"id": "ytc_UgxJcGvLIchGJ21tKhB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy0RivQDWdobXlRX8N4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwJY29I2Q-Brv7ZkrZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzWtTNOrUn-770HlQ94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx7VR3JooxZQ8WwzI14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxNQEf7Vu496Zs6m7t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwGysgvofIln1H8uTx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxnURh1G3_ICMMqvTp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugygh_mcS1CgUz33Eoh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzFbBqeY5vVkASsL_x4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
```
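One way to sanity-check a batch response like the one above is to parse it and verify that every record carries the five expected keys with values drawn from the sets observed on this page. A sketch, assuming the response text is available as a string; the allowed-value sets are inferred from these results and may not be exhaustive.

```python
import json

# Value sets inferred from the coding results visible on this page
# (assumption: the real codebook may contain more codes).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw_response: str) -> list[dict]:
    """Parse a raw LLM response and reject records with missing keys or unknown codes."""
    records = json.loads(raw_response)
    for record in records:
        missing = {"id", *ALLOWED} - record.keys()
        if missing:
            raise ValueError(f"{record.get('id', '?')}: missing keys {missing}")
        for dimension, allowed in ALLOWED.items():
            if record[dimension] not in allowed:
                raise ValueError(f"{record['id']}: unknown {dimension} code {record[dimension]!r}")
    return records
```

Validating before ingesting makes malformed model output fail loudly at parse time rather than surfacing later as a silently miscoded comment.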