Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This has been proven false. “That facial Recognition software miss identifies mi…
ytc_UgwVmJc9z…
Yes, AI training - on recent material, for commercialisation is not "Fair use" o…
ytc_Ugy1pLBfZ…
This argument really falls flat imo because there is still a lot of effort being…
ytc_UgzcuxqlH…
Here are the simple truths most people don't get
An economy exists to produce an…
ytc_Ugy7XKgW4…
They lack a lot of things, experience storage for one. They should Imbreed on wh…
ytr_Ugzjh3scL…
Wait so most of you can't tell what is ai and what's not? What the hell?…
ytc_Ugy5w_zMG…
Considering that we are unable to define awareness in any reasonable way, saying…
ytr_UgyDt7bVd…
Yeah, that was my thought.
We're hitting the generations of chips that are being…
rdc_immrtm1
Comment
There are some great books on AI out there, and one that I read recently is "The Coming Wave" by the founder of Anthropic. He provides some examples of how a super-intelligent AI could "get out" and absolutely wreak havoc on humanity. It's scary because of how possible it all is, and how close we're getting to AGI, and eventually super-intelligent AGI. We will be ants compared to it, and there's no way to know what kinds of ways it will trick AI researchers regardless of what they do. Suleiman gives some examples of how a super-intelligent AI could start making those who first create it, INSANE sums of money..
He uses the Amazon M-Turk system and how it would start doing basic, low-paying tasks, with thousands of accounts, but completing them so quickly that it would amass a fortune in weeks. Then how it could use that money to expand into other investments, and those people could reach the point where they control the world, including governments and the economy. Check that book out if this is something that interests you.
youtube
2024-06-29T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzVMpoQTwl77oyyzK94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyVTzGqDVa6Gocdp_N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwTmXflsrZvOqsydQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxqBv7kY4-LnkdKFu94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2XUIFHC_UVxXR1GZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8ctBEM7ir0D9WzlV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxXtSwI8t76z5xC7jJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz7u0pBS5mp3_3BOZJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx8HzK8h1vc-HEe2Ul4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzAJQUv7UPQjENtRep4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
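The raw response above is a JSON array of per-comment records, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response can be parsed and sanity-checked before display; the field names follow the JSON above, but the helper name and validation logic are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Two records copied verbatim from the raw response above, as sample input.
RAW = '''[
 {"id":"ytc_UgzVMpoQTwl77oyyzK94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz2XUIFHC_UVxXR1GZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

# The four coding dimensions every record must carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Return {comment_id: {dimension: value}}; raise if a record is malformed."""
    coded = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = parse_coding_response(RAW)
print(coded["ytc_UgzVMpoQTwl77oyyzK94AaABAg"]["emotion"])  # fear
```

Keying the result by comment ID makes the "Look up by comment ID" view a single dictionary access, and failing loudly on a missing dimension keeps a partially coded record out of the result table.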