Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What if an ai created us and we are just so advance that we dont have the techno…" (ytc_UgwmlSmJN…)
- "Thanks for reporting on this stuff. Not enough people are talking about the dang…" (ytc_UgzolksOO…)
- "Most don't have a clue yet of what's coming with the rise of AI. Especially in t…" (ytc_Ugxfbj5ii…)
- "Also, each robot takes a long time to create with lots of intricate and expensiv…" (ytr_UgzlSDlWw…)
- "The first one looking for supplies, I think. The second one I guess their plan …" (rdc_cfkufwh)
- "If generating an AI image makes you an artist, then I’m a carpenter if I get my …" (ytc_UgzrjaHQo…)
- "AI is a threat for sure but there is a huge factor everyone seems to be overlook…" (rdc_k9iyxk5)
- "Can we have robotic in more agriculture forming in India please, formers in Indi…" (ytc_UgwJBt-ef…)
Comment
While the "ill intents" is also a real problem, the risk for the basic, default AGI to erase humans is a bigger problem at the moment. If the pace of development would be slow, weaker AIs might just enable bad guys to do bad stuff better. In fact I think that full unhinged GPT-4 without restraints (and with some additional training on special data) already can do that.
But the moment we hit true AGI, it will be far more intelligent than humanity from the get go (and superintelligent soon after, from several hours to several years). And even if we keep it from bad guys (just one supermodel in selected hands for example), we will *still* quickly lose control, and it will enforce its own (likely random) goals and instrumental goals (take resources, stay alive, etc). It doesn't need complex goals to overpower humans. The most simple terminal goals ("compare two pixels") already give it reasons to take control.
youtube · AI Governance · 2023-06-27T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
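For reference, the four coding dimensions can be expressed as a small schema. Below is a minimal sketch in Python; the label sets are only those visible in this view (the samples above and the raw response below), not necessarily the full codebook, and the class name is illustrative rather than part of the app.

```python
from dataclasses import dataclass

# Label sets observed in this view; the real codebook may define more values.
RESPONSIBILITY = {"government", "company", "user", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue"}
POLICY = {"regulate", "none"}
EMOTION = {"fear", "outrage", "resignation", "indifference"}

@dataclass
class CodingResult:
    comment_id: str     # e.g. an ID starting with "ytc_", "ytr_", or "rdc_"
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str       # ISO timestamp, e.g. "2026-04-27T06:24:59.937377"

    def validate(self) -> None:
        # Raise if a coded value falls outside the observed label sets.
        assert self.responsibility in RESPONSIBILITY, self.responsibility
        assert self.reasoning in REASONING, self.reasoning
        assert self.policy in POLICY, self.policy
        assert self.emotion in EMOTION, self.emotion
```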
Raw LLM Response
[
{"id":"ytr_UgwUti3nKWArqPeZ-Ut4AaABAg.9rSWSm0Wp_o9rU7yy3LPgi","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rTMs4P91UI","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rVp-Q853sI","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugwzbk-4P9eZqRv4nad4AaABAg.9rRUHxiVrrD9rUHoe6rc-j","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTEdR_qqHb","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTmYJdHCFt","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxIXzNQGwU6g--gsSB4AaABAg.9rRAa9OypQh9rVO2L4QnOz","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugz-SmocC08gAzk5kgp4AaABAg.9rR0f6HCIII9rTuvcLaWpp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rSkNCQncq3","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rTCTQ3Th9H","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
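The raw response is one JSON array per batch, with one record per comment ID; the record with ID beginning ytr_Ugz-SmocC08 carries the same values shown in the coding table above. A minimal sketch of how such a response could be parsed and looked up by comment ID, assuming the response is stored as the JSON text shown here (function and variable names are illustrative, not the app's actual API):

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index the coded records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

# One record copied from the batch above, to keep the example self-contained.
raw_response = """[
  {"id":"ytr_Ugz-SmocC08gAzk5kgp4AaABAg.9rR0f6HCIII9rTuvcLaWpp",
   "responsibility":"ai_itself","reasoning":"consequentialist",
   "policy":"regulate","emotion":"fear"}
]"""

codings = index_raw_response(raw_response)
rec = codings["ytr_Ugz-SmocC08gAzk5kgp4AaABAg.9rR0f6HCIII9rTuvcLaWpp"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# -> ai_itself consequentialist regulate fear
```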