Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
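For reference, a minimal sketch of what such a lookup might look like, assuming the coded comments are stored one JSON object per line in a JSON Lines file. The file name, field names, and storage format here are assumptions for illustration, not the tool's actual implementation:

```python
import json

def lookup_comment(path: str, comment_id: str) -> dict | None:
    """Scan a JSON Lines store and return the record whose id matches.

    Assumes one JSON object per line with an "id" field, e.g.
    {"id": "rdc_jhsuv4n", "raw_response": "...", ...} (hypothetical schema).
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Hypothetical usage:
# record = lookup_comment("coded_comments.jsonl", "rdc_jhsuv4n")
```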
Random samples

- "The people most hyped for AI are the ones who aren't using it to build enterpris…" (rdc_o89sr9c)
- "i feel the rise of AI made me want to try and give drawing another shot in the f…" (ytc_Ugyr1_o_o…)
- "I never get short with ChatGPT until it starts making entirely fake functions fo…" (rdc_jhsuv4n)
- "What do you think about the AI influencing the value of art? Will AI art lower t…" (ytc_Ugwmi40HL…)
- "Ai is already getting regulated in eu, and any ai with potential to cause mass l…" (ytc_Ugwm5cyNL…)
- "The irony is they use AI to replace us but once that is done AI is so smart it w…" (ytc_UgwktZipj…)
- "This is similar to homeschooling except for the AI apps work. I prefer less scre…" (ytc_Ugwv66HdT…)
- "So no communities want this. None. So why as a society accepting the rise and us…" (ytc_UgzfsP2Mw…)
Comment
You fundamentally don't get it. The intelligence gap between you and ASI (Artificial SuperIntelligence) would be greater than the gap between a single neuron and all of human civilization. You are the neuron.
ASI doesn't stay at human level. It improves itself recursively: it makes itself smarter, then that smarter version makes itself even smarter, then that version improves itself again. This cycle repeats thousands of times in hours. Each iteration is exponentially more intelligent than the last. It doesn't take breaks. It doesn't sleep. Within days of creation, it's already incomprehensibly beyond us.
Could a mouse comprehend or defeat the entire US military? No. It can't even hold the concept in its brain. That's you versus ASI, except the gap is a million times larger.
By the time your neurons finish firing to think "we should stop this," it's already run a million simulations and won.
There is no "later invention." The moment ASI exists, humans stop making decisions about anything.
This is why actual AI safety researchers are terrified.
youtube · AI Governance · 2025-10-16T19:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
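As a rough illustration, one coded record could be represented as below. The class name is hypothetical, and the value sets in the comments are inferred only from the examples visible on this page, not from an exhaustive codebook:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment across the four coding dimensions."""
    comment_id: str
    responsibility: str  # observed values: "ai_itself", "none"
    reasoning: str       # observed values: "consequentialist", "mixed"
    policy: str          # observed values: "none", "liability"
    emotion: str         # observed values: "fear", "mixed", "approval"
    coded_at: datetime

# Hypothetical usage, matching the table above:
# result = CodingResult(
#     comment_id="ytr_Ugy-pPDt7SzHf6ylfBV4AaABAg.AORfRy2d9tSAORnSSBHO0U",
#     responsibility="ai_itself",
#     reasoning="consequentialist",
#     policy="none",
#     emotion="fear",
#     coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
# )
```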
Raw LLM Response
```json
[
{"id":"ytr_UgxfMqZ8scxJToInzul4AaABAg.AORtWWaqav-AORv8qoRPdX","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugy-pPDt7SzHf6ylfBV4AaABAg.AORfRy2d9tSAORk6vPVd2R","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_Ugy-pPDt7SzHf6ylfBV4AaABAg.AORfRy2d9tSAORnSSBHO0U","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugz8ZegHMWaK-01BD554AaABAg.AOMQ88y9DoiAORdtGWMOpM","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwCQBPrZDs3XaRyD114AaABAg.AOMK5cnEQ3fAOMNO6rCMZQ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugw8qULQLdbBFsmGOaJ4AaABAg.AOLvtdOkkMnAOLy_6nKWh3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwwV3-hrRAmWmG1nQh4AaABAg.AOLolsAhOwaAOLzGnm_WUs","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgztvlOJR5VU0zQVcNZ4AaABAg.AOLedb1GTT0AOLzwD2zCwi","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugz4EL1lsfsJfQFsJHh4AaABAg.AOLa1olpbhYAOM-wsUCesU","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytr_UgxoKmj3UCpxDYFWtAN4AaABAg.AOLZGOSiuqUAOM0NwcdRy2","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"}
]
```
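A minimal sketch of how a raw response like the one above could be parsed and sanity-checked, assuming the model always returns a JSON array of objects with these five keys (an assumption based only on the example shown here):

```python
import json

# The five keys visible in every record of the raw response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw model response into coding records, raising on
    malformed entries rather than silently dropping them."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for i, rec in enumerate(records):
        if not isinstance(rec, dict):
            raise ValueError(f"record {i} is not a JSON object")
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return records
```

Failing loudly on malformed output is deliberate here: as the "fake functions" sample above hints, model output can drift from the requested schema, and a silent drop would skew the coded dataset.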