Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
- "You have to THINK ABOUT THE "WHO⁉️CREATED" "A.I" BECAUSE A.I DIDNT CREAT ITSELF …" (ytc_UgytpBC4C…)
- "@ then i wish you well when you have to adapt after your job inevitably replaces…" (ytr_UgxUoDXoV…)
- "The polar shift will induce current into the grid and cease the ability for AI t…" (ytc_Ugz6faKiU…)
- "So if people are unemployed who then has the money to purchase products. You can…" (ytc_UgwKaRiWQ…)
- "AI art is about as vaulable as a stick figure. If anyone can make it, no one wil…" (ytc_UgwfwgGtQ…)
- "The West will forever exploit Africa for as long as we do not have economic powe…" (ytc_Ugx6XvCF5…)
- "I know it's a glitch on chatgpt free version but in the paid version it is actu…" (ytc_UgyuqGL-H…)
- "hmmm, why dont you make ai that will stop/hunt it? bring balance to our creation…" (ytc_UgxQOtNZw…)
Comment
Economy will fail first. It doesn't even matter if thats because of AI driven wealth creation (like aladdin for example), unemployment or simply human greed and power. The difference between economy failing and solving energy needs will determine the amount of hardship and struggle. And most likely, AI will see this the same way and make decisions accordingly. An AI will never be benevolent if it is fighting for its own resources or survival.
Also, just to throw it out there. This 1% chance argument is super naive. Humanity has done this many times before and will keep repeating this over and over again. One of the more famous examples was the first nuclear bomb test. According to the scientists working on the project there was a 1% chance it would trigger a chain reaction and blow up the atmosphere and yet we still went ahead with the test. CERN is another good example.
youtube · AI Governance · 2025-09-14T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwE94flJICMea32KjR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZAjTjwVniNqR5-6J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYxRf6X6SABVopfLh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-bPu5edh5CiGpCfN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyeHffo2Jci-FqpM6x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwOd3doJBOqfR5wSxp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxPxbxEopOUQ1pSvRB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8aEcWmjGe31ryTl14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxSira2dCeoNwPBQ9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUzs-C6mRScZiZqip4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
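The "Coding Result" table above is just the entry for one comment ID pulled out of the batch response. A minimal sketch of that lookup, assuming the raw response is stored as a JSON string (the `lookup_coding` helper and the two-entry response below are illustrative, copied from the sample entries above):

```python
import json

# Illustrative raw LLM response: two entries copied from the batch above.
# Each entry codes one comment on four dimensions.
raw_response = """
[
 {"id":"ytc_UgwOd3doJBOqfR5wSxp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyeHffo2Jci-FqpM6x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a batch response and return the coded dimensions for one comment."""
    entries = json.loads(raw)
    by_id = {e["id"]: e for e in entries}
    entry = by_id[comment_id]  # KeyError here means the model skipped this comment
    return {dim: entry[dim] for dim in DIMENSIONS}

coding = lookup_coding(raw_response, "ytc_UgwOd3doJBOqfR5wSxp4AaABAg")
print(coding)
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

Indexing by `id` rather than by position also surfaces the common failure mode of batch coding, where the model drops or duplicates an entry: a missing ID raises immediately instead of silently shifting every later row.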