Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"We've been asking the wrong question. Not 'how do we make AI safer?' — but 'who decides what safe means?' Right now, four corporations answer that question for eight billion people. No vote. No oversight. No appeal. In January 2026, I submitted the GGI framework to NIST — a mathematically structured governance model that proves, not argues, that human decision authority must be the irreducible variable in any AI system. Not a preference. A logical necessity. Remove it and the system is no longer governing — it's extracting. Here's the math that matters: every AI governance model currently proposed operates on a spectrum from Human-in-the-Loop to Human-out-of-the-Loop. GGI identifies the flaw in that entire spectrum — it assumes the loop is the unit of measurement. It isn't. The unit is the human cognitive event. The moment a real human reaches genuine understanding before authorizing action. We call it the AHA moment. It's not philosophical — it's a governance checkpoint. Binary. Verifiable. Tamper-evident. GGI was built using AI itself — documented, submitted, and sealed. That's not irony. That's proof of concept. The machine helped architect its own leash. What we're asking Congress to recognize is simple: AI governance in 2026 is not a technology problem. It's a sovereignty problem. Who owns the decision? Who owns the consequence? Who owns the time the machine consumed to get there? The framework is called Genial Genuine Intelligence. Ten Laws. One principle: human intent is not a feature. It's the foundation. The steering wheel doesn't drive the car. But without it, you don't have transportation. You have a crash in progress. Full framework: 10lawsofai.com"
Source: youtube · AI Jobs · 2026-03-18T21:5…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxuK7gq7rJz5WnAicF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwYbaWJcMiXCG0G4b14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy2NDeXAxkQ41wpyO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugyf66GMy8R__fdU2cZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwOzLHSALjjKL39trB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzkv4ScP5UPL1K-uRp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugyzp5o9pZWGiCYxZQJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzHc0SwmYLVenkGOOp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwR4L1Iq6hyuIGmXjV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzQQrK8uw3l4-lxr914AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]