Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The AI 2027 scenario was developed by superforecasters with excellent prediction track records, deep technical knowledge of AI, and sophisticated models of the behaviors of the companies, nations, and individuals involved. They did months of research and wrote up one of the scenarios as an example. Also:

- About half of all published AI researchers say there is a significant risk of human extinction from AI ("Thousands of AI Authors on the Future of AI").
- 300+ leading AI experts signed a statement saying that "Mitigating the risk of human extinction from AI should be a global priority" (CAIS Statement on AI Risk).
- Among AI experts, the minority who are familiar with basic AI safety concepts are much more likely to view the future of AI as uncontrollable agents rather than simple tools ("Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts").
- Most of the very top AI researchers in the world -- including Nobel Prize laureate Geoffrey Hinton and the world's most cited living scientist Yoshua Bengio -- have been very public about the fact that superintelligent AI could take over and destroy the world within the next decade or two.
youtube AI Governance 2025-08-02T09:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugxm1HTT7I17lRidVsd4AaABAg.ALJRhFE2RnZALKBztAuU-c","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugxm1HTT7I17lRidVsd4AaABAg.ALJRhFE2RnZALKIHWfg85k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugxqrfq_uyncrOR8pdd4AaABAg.ALJRE9Eu4huALJi1E25w9f","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytr_Ugw6iHW2o7ICBwXUbl94AaABAg.ALJPSGnEtRIALJiLNd-Ful","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugzp0-sHz34KKTIorzh4AaABAg.ALJMEsTPiLvALJhdG2J02o","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzjLsKym6OCjk0SKjh4AaABAg.ALJHSZ6Bfd3ALJdsutGZKJ","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgzjLsKym6OCjk0SKjh4AaABAg.ALJHSZ6Bfd3ALJew-ideGY","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyaW5yapa8XyJS-MNh4AaABAg.ALJFWFDi8jxALJgmGT22pJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyaW5yapa8XyJS-MNh4AaABAg.ALJFWFDi8jxALJoG90o8fP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgyGLz22IEhrBLXWjm54AaABAg.ALJE01QYiQbALJhv8Bnm_P","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
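Raw responses like the one above can be sanity-checked before the coded values are accepted into the dataset. A minimal sketch in Python: the allowed value sets below are inferred only from the records shown in this sample (the real codebook may permit other values), and the single-record `raw` string is a hypothetical stand-in for a model's output.

```python
import json

# Allowed values per dimension, inferred from this sample batch only;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "approval", "outrage", "resignation"},
}

# Hypothetical raw LLM response: a JSON array of coded records.
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')

records = json.loads(raw)
for rec in records:
    for dim, allowed in ALLOWED.items():
        # Reject any record whose value falls outside the known categories.
        if rec[dim] not in allowed:
            raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
print(f"validated {len(records)} record(s)")
```

A check like this catches the common failure mode where the model invents an off-codebook label, so bad records can be flagged for re-coding instead of silently entering the table above.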