Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The only hope for humanity is to develop the most powerful *Nice Super Intelligent AI* as fast as we can. Because, you know... there are people/corporations/nations that will weaponize this. It's Mutual Assured Destruction, but maybe not if we're lucky. If we're *lucky*, and really that's all we've got.
Take gun control as an analogy. The incorrect premise being that if you ban legal guns then only criminals will have guns. Not really true, because we make sure the cops and even army have bigger guns when push comes to shove. The criminals really don't have that much of a chance. But, if we ban AI then yeah... only the people willing to ignore that ban will have AI and there won't be cop AI or army AI to save us.
Would you be in favour of gun control if the cops and army had to turn in their guns too? Ultimately, our peace is actually predicated on the threat of force. All we can hope for is that it's a "Nice to Us" force. Guardrails don't help unless there's force making sure *everyone* stays on the right side of them.
Would you be in favour of AI control knowing... *KNOWING* that there will be people/organizations/nations that will not respect this ban and that would very much like the ability to exert force to compel the human behaviour they want? Really? Have you thought about this?
I would instead immediately nationalize OpenAI and any other contenders, shove them into the same room, give them piles of cash, coffee and anything else they ask for, and beg them to make the Nicest Super Intelligent AI they possibly can as soon as they can.
If they fail to make Super Intelligent AI, well that's okay. It can't be done and we're fine. If they fail and make a Super Nasty AI, we're screwed. But, if it's possible to make Super AI while impossible to make it nice even if we try really hard, then yeah, *we are screwed anyway.* Somebody will do it.
If we succeed in making a Nice Super Intelligent AI then maybe that AI can detect and deal with any nasty AI other people end up making (deliberately or by accident). Maybe it can be the Nice Force that keeps everyone else on the right side of the guardrails.
If we succeed in making a Super Nice Intelligent AI then governments will have plenty of time to pass all kinds of stupid laws messing with the economy to try and keep people working in all the BS jobs they already do. It won't be pretty, but we can deal with that. It's the other stuff we can't deal with, like Chinese-style Social Credit scores gone global and insane. I really don't want to have to behave the right way to keep an AI happy.
Right now it's a *flat out race.* It's pedal to the metal time, Manhattan Project, WFO. Let's get this done and out there before somebody else does it first. We pause, we lose.
youtube · AI Governance · 2023-03-30T06:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxaCDLqoV3Blf3rHKJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy6sORpZ-8inVK8k1l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7_KSv6JgUTpvrMhd4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgxstG7pZixB4oJFGbV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxnK0khNG1q4uUQhZN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzBim0lzz1951IYIOJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwvtUjccGFfPIV6nwZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzcE7kEUWSfQQYndD94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwxFJhzSRmDXTdW_jl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz9bwpmPf9H9RVCMNN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
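The raw response is a JSON array with one object per comment ID, carrying the four coding dimensions shown in the table above. As a minimal sketch of how such output could be turned into a per-comment lookup, assuming the field names from the sample (the function name and the "unclear" fallback for missing keys are illustrative assumptions, not part of the pipeline shown):

```python
import json

# Raw model output in the format shown above (truncated to two entries).
raw_response = '''
[
  {"id": "ytc_UgxaCDLqoV3Blf3rHKJ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy6sORpZ-8inVK8k1l4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

# The four coding dimensions visible in the sample output.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array into a lookup table keyed by comment ID."""
    records = json.loads(raw)
    table = {}
    for rec in records:
        # Keep only the expected dimensions; a missing key becomes "unclear".
        table[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return table

codes = index_codes(raw_response)
print(codes["ytc_UgxaCDLqoV3Blf3rHKJ4AaABAg"]["emotion"])  # -> fear
```

Keying by the model-echoed comment ID rather than by array position makes the join robust if the model drops or reorders entries in a batch.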