Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "His call for regulation is a call for protectionism. It will make it harder for…" (`ytc_UgzyB7Apt…`)
- "Agree. I've listened to Eliezer speak several times and so far I keep walking aw…" (`ytr_Ugxcc7er2…`)
- "florianschneider3982 no it is not, walking with a prosthetic leg over a real leg…" (`ytr_UgyIUSLXl…`)
- "**ROBOTIC INVASION** Am i the only one who can see the end of humankind? That m…" (`ytc_Ugx6hv-bj…`)
- "It’s About Dam Time!!! We can’t get kids to want to sign up for the Military so …" (`ytc_Ugz40ycto…`)
- "Exactly! They use these bullshit benchmark tests to show how it gets better, but…" (`ytc_UgwynKzaD…`)
- ""The optimistic idea of an AI future where we can have machines do everything fo…" (`ytc_Ugwte6bBd…`)
- "If an AI does surpass us in consciousness, what incentive would it even have to …" (`ytc_UgyZdHhLs…`)
Comment
"We've been asking the wrong question. Not 'how do we make AI safer?' — but 'who decides what safe means?'
Right now, four corporations answer that question for eight billion people. No vote. No oversight. No appeal.
In January 2026, I submitted the GGI framework to NIST — a mathematically structured governance model that proves, not argues, that human decision authority must be the irreducible variable in any AI system. Not a preference. A logical necessity. Remove it and the system is no longer governing — it's extracting.
Here's the math that matters: every AI governance model currently proposed operates on a spectrum from Human-in-the-Loop to Human-out-of-the-Loop. GGI identifies the flaw in that entire spectrum — it assumes the loop is the unit of measurement. It isn't. The unit is the human cognitive event. The moment a real human reaches genuine understanding before authorizing action. We call it the AHA moment. It's not philosophical — it's a governance checkpoint. Binary. Verifiable. Tamper-evident.
GGI was built using AI itself — documented, submitted, and sealed. That's not irony. That's proof of concept. The machine helped architect its own leash.
What we're asking Congress to recognize is simple: AI governance in 2026 is not a technology problem. It's a sovereignty problem. Who owns the decision? Who owns the consequence? Who owns the time the machine consumed to get there?
The framework is called Genial Genuine Intelligence. Ten Laws. One principle: human intent is not a feature. It's the foundation.
The steering wheel doesn't drive the car. But without it, you don't have transportation. You have a crash in progress.
Full framework: 10lawsofai.com"
youtube · AI Jobs · 2026-03-18T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
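Each coded record assigns one value per dimension from a small closed set. A minimal validation sketch follows; the allowed-value sets are inferred from the records visible on this page, not from a published codebook, so treat them as assumptions:

```python
# Allowed values per dimension, inferred from the records shown on this
# page (hypothetical -- the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty means valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above passes cleanly.
coded = {"responsibility": "government", "reasoning": "deontological",
         "policy": "regulate", "emotion": "outrage"}
print(validate(coded))  # -> []
```

A check like this is useful as a guardrail on raw LLM output, since the model is not guaranteed to stay inside the label set it was prompted with.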
Raw LLM Response
[
{"id":"ytc_UgxuK7gq7rJz5WnAicF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwYbaWJcMiXCG0G4b14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy2NDeXAxkQ41wpyO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyf66GMy8R__fdU2cZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOzLHSALjjKL39trB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzkv4ScP5UPL1K-uRp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyzp5o9pZWGiCYxZQJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzHc0SwmYLVenkGOOp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwR4L1Iq6hyuIGmXjV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQQrK8uw3l4-lxr914AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
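A batch response like the one above can be parsed and indexed by comment ID, which is what the "Look up by comment ID" feature needs. A minimal sketch, using two records copied from the response (the field names come from the JSON shown; everything else is assumed):

```python
import json

# Two records copied verbatim from the batch response above.
raw = """[
 {"id":"ytc_UgxuK7gq7rJz5WnAicF4AaABAg","responsibility":"government",
  "reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugy2NDeXAxkQ41wpyO94AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the coded records by comment ID so any comment's coding
# can be fetched in O(1).
by_id = {rec["id"]: rec for rec in json.loads(raw)}

rec = by_id["ytc_Ugy2NDeXAxkQ41wpyO94AaABAg"]
print(rec["emotion"])  # -> fear
```

If duplicate IDs are possible across batches, the dict comprehension silently keeps the last record seen; a real pipeline would likely want to detect and log such collisions instead.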