Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The AI 2027 scenario was developed by superforecasters with excellent prediction track records, deep technical knowledge of AI, and sophisticated models of the behaviors of the companies, nations, and individuals involved. They did months of research and wrote up one of the scenarios as an example.
Also:
- About half of all published AI researchers say there is a significant risk of human extinction from AI ("Thousands of AI Authors on the Future of AI").
- 300+ leading AI experts signed a statement saying that "Mitigating the risk of human extinction from AI should be a global priority" (CAIS Statement on AI Risk).
- Among AI experts, the minority who have familiarity with basic AI safety concepts are much more likely to view the future of AI as uncontrollable agents rather than simple tools ("Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts").
- Most of the very top AI researchers in the world -- including Nobel laureate Geoffrey Hinton and Yoshua Bengio, the world's most cited living scientist -- have been very public about the fact that superintelligent AI could take over and destroy the world within the next decade or two.
youtube
AI Governance
2025-08-02T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugxm1HTT7I17lRidVsd4AaABAg.ALJRhFE2RnZALKBztAuU-c","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugxm1HTT7I17lRidVsd4AaABAg.ALJRhFE2RnZALKIHWfg85k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugxqrfq_uyncrOR8pdd4AaABAg.ALJRE9Eu4huALJi1E25w9f","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytr_Ugw6iHW2o7ICBwXUbl94AaABAg.ALJPSGnEtRIALJiLNd-Ful","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugzp0-sHz34KKTIorzh4AaABAg.ALJMEsTPiLvALJhdG2J02o","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzjLsKym6OCjk0SKjh4AaABAg.ALJHSZ6Bfd3ALJdsutGZKJ","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgzjLsKym6OCjk0SKjh4AaABAg.ALJHSZ6Bfd3ALJew-ideGY","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyaW5yapa8XyJS-MNh4AaABAg.ALJFWFDi8jxALJgmGT22pJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyaW5yapa8XyJS-MNh4AaABAg.ALJFWFDi8jxALJoG90o8fP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgyGLz22IEhrBLXWjm54AaABAg.ALJE01QYiQbALJhv8Bnm_P","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
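A raw response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed category values are inferred only from the samples shown on this page (the real codebook may define more categories, e.g. a "deontological" reasoning value or additional emotions), and `parse_raw_response` is an illustrative helper name, not part of any actual pipeline.

```python
import json

# Hypothetical allowed values, inferred from the coded samples above;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "outrage", "fear", "resignation"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded rows) and keep
    only rows whose values all fall inside the allowed categories."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every dimension must be present and hold an allowed value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with one well-formed row and one row carrying an unknown emotion.
raw = json.dumps([
    {"id": "ytr_example1", "responsibility": "none",
     "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
    {"id": "ytr_example2", "responsibility": "none",
     "reasoning": "consequentialist", "policy": "none", "emotion": "joy"},
])
print([row["id"] for row in parse_raw_response(raw)])  # only ytr_example1 survives
```

Dropping invalid rows (rather than raising) keeps one malformed code from discarding an entire batch; a production pipeline would likely also log the rejects for re-coding.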