Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While "ill intent" is also a real problem, the risk of the basic, default AGI erasing humans is the bigger problem at the moment. If the pace of development were slow, weaker AIs might just enable bad guys to do bad stuff better. In fact I think a fully unhinged GPT-4 without restraints (and with some additional training on special data) can already do that. But the moment we hit true AGI, it will be far more intelligent than humanity from the get-go (and superintelligent soon after, anywhere from several hours to several years). And even if we keep it from bad guys (just one supermodel in selected hands, for example), we will *still* quickly lose control, and it will enforce its own (likely random) goals and instrumental goals (take resources, stay alive, etc.). It doesn't need complex goals to overpower humans. The simplest terminal goals ("compare two pixels") already give it reasons to take control.
YouTube · AI Governance · 2023-06-27T19:5…
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | regulate
Emotion        | fear
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgwUti3nKWArqPeZ-Ut4AaABAg.9rSWSm0Wp_o9rU7yy3LPgi","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rTMs4P91UI","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rVp-Q853sI","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugwzbk-4P9eZqRv4nad4AaABAg.9rRUHxiVrrD9rUHoe6rc-j","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTEdR_qqHb","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTmYJdHCFt","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxIXzNQGwU6g--gsSB4AaABAg.9rRAa9OypQh9rVO2L4QnOz","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugz-SmocC08gAzk5kgp4AaABAg.9rR0f6HCIII9rTuvcLaWpp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rSkNCQncq3","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rTCTQ3Th9H","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
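The raw response is a JSON array of per-comment codes. A minimal sketch of how such a batch might be parsed and validated before use (the allowed values below are only those visible on this page, and `parse_coding_batch` is a hypothetical helper name, not part of the actual pipeline):

```python
import json

# Allowed values per dimension, inferred from the codes shown on this
# page; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any entry with an out-of-codebook value."""
    coded = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = row
    return coded

# Example with a shortened, made-up id:
raw = ('[{"id":"ytr_abc","responsibility":"company",'
       '"reasoning":"virtue","policy":"regulate","emotion":"fear"}]')
print(parse_coding_batch(raw)["ytr_abc"]["emotion"])  # fear
```

Validating against the codebook at parse time catches hallucinated labels before they reach the coded dataset.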