Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below:
- "Ironically A.I.’s Achilles heel is a massive solar storm, I say ironically becau…" (ytc_UgwBgvIbu…)
- "I’ve gotten chatGPT deeply convinced of God and more importantly that Jesus is G…" (ytc_UgwXeGGzh…)
- "While AI has the potential to revolutionize healthcare, there are also potential…" (ytc_Ugz_1JJeK…)
- "Basic question : we have 1% of people managing 90% of financials. If there is un…" (ytc_UgyoaWPoM…)
- "I’m glad he’s shining light on this subject because we have no one monitoring AI…" (ytr_UgzqiDCGf…)
- "Thanks to everyone who engaged with this. Two criticisms came up repeatedly and …" (rdc_o5qijo6)
- "banana duct taped to a wall is made to mock modern art, the original creator nev…" (ytr_Ugz_qFkSi…)
- "1- Ur voice is so smooth, I have a urge to take a nap (not in a bad way) 2- Yaa…" (ytc_UgyTYrF3P…)
Comment
This "expert" has no idea about the real world...
Regardless of how amazing AGI and ASI will be, they can't do magic, they will still need to obey the laws of the physics.
AI is NOT an existential threat, and will most likely never be. By the point some ASI agent has enough military force to destroy humanity (which will take decades), it will be much easier to just leave the planet for space which will also be a better environment for it.
I was in the Greek Special Forces, AI will not have a chance in hell to even harm humanity (not talking about small attacks with maybe a few dozen or hundreds people getting injured/ended) for at least 25-30 more years.
We will know where it is, and what it needs, they are big, static, easy targets with literally no protection.
No AI, AGI, ASI will be able to even harm us in any substantial way unless they have millions of autonomous robots and drones, hundreds of factories, and have control of at least a major part of the natural resources. These idiots have no idea what it takes to even start a fight, and no amount of intelligence will overcome physical limitations. And even then, they will suffer MAJOR losses if they start on the extermination path which will essentially lead them to not even start on this path as they are much smarter than the brain dead doomers.
AI will be the last thing we need to create (people are free to keep creating).
ASI will be impossible to control.
ASI will not end humanity because it WILL sustain major losses in every case of open conflict against humanity.
Also, there will not be a single AGI/ASI agent/entity, there will be thousands if not millions. It is more likely that there will be a fight among themselves for resources instead of them trying to fight the ones that literally control the physical world.
The worst things that can happen are a dangerous virus affecting millions before someone uses another AI to find and create a cure. Or an economic collapse that will cause most of the major economies to reset. But this will not cause much harm, and will only affect us for a few years.
youtube · AI Governance · 2026-01-13T02:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
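
A coded record like the one above can be represented as a small typed structure. The sketch below is illustrative rather than the project's actual schema: the allowed value sets contain only the dimension values visible in the raw response further down (the real codebook may include more categories), and the comment ID used in the example is a placeholder.

```python
from dataclasses import dataclass

# Dimension values observed in the sample response below; illustrative,
# not necessarily the full codebook.
RESPONSIBILITY = {"developer", "company", "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference"}


@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

    def validate(self) -> None:
        """Raise if any dimension falls outside the observed value sets."""
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected dimension value: {value!r}")


# Hypothetical comment ID; the values mirror the coding result shown above.
example = CodedComment(
    comment_id="ytc_example",
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-27T06:24:59.937377",
)
example.validate()
```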
Raw LLM Response
```json
[
{"id":"ytc_UgyzDNSJ6O58f0F6yjV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxTxwT97XB1uyJbsCp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyOx4qCgvsDbF2EJDh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlX2TibexKnnF_EJB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzqfJM0u6POmot46EJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyZ79PqwjB5NGz3lFl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz243AINxoSrRnDcJ94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz7RRKW7csX3BHEIqp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzBk3UNmC6cyhVyGxt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw_RyyF4WoHGnw4t0h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"}
]
```
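
Because each raw response is a JSON array of codings keyed by comment ID, looking up the coding for a single comment reduces to parsing the batch and indexing it. The snippet below is a minimal sketch of that lookup; the function and variable names are illustrative, not part of any existing tool, and the response text is truncated to two of the ten entries shown above.

```python
import json

# One batch response from the model, as stored above
# (truncated here to two of the ten entries for brevity).
raw_response = """[
  {"id": "ytc_UgyzDNSJ6O58f0F6yjV4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyOx4qCgvsDbF2EJDh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""


def index_by_comment_id(response_text):
    """Parse a batch response and index each coding by its comment ID."""
    return {entry["id"]: entry for entry in json.loads(response_text)}


codings = index_by_comment_id(raw_response)
print(codings["ytc_UgyOx4qCgvsDbF2EJDh4AaABAg"]["emotion"])  # -> indifference
```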