Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One of the things I mentioned a long time ago was a fictional form of government that once made an appearance in a Sid Meier's Civilization game that dared to dream the future: Virtual Democracy. Back when I first posted about it, the technological barriers to implementation were high because of the amount of work needed to process unstructured data; coalescing 50 different ways of phrasing a thing into a marked commentary about that thing used to be quite a large data-science task. Doing this with modern computing power is damaging to the environment (look up Mythic AI chips for an investment that might change that), but it also presents a real opportunity for a real government by, of, and for the people: an AI that constantly aggregates the public's opinion (already happening) to form opinion vectors that should inform policy. We also have online the numerous forms of expertise that people have, which allows us to weight that opinion based on the expertise set and the subject set (already happening). The problem now is that the powerful have seen this and used it to generate opposing vectors that influence human opinion from humans (Russia's allies in Africa running troll farms and the like), but this same concept can instead be used to filter the source IP addressing (a tech solution for VPN hiding needs to be addressed) to give us real public opinion that should be given weight and power over policy.
A few flaws to be hammered out:
1. Algorithm maintenance. As "Mecha Hitler" illustrated, you can have the greatest repository of human knowledge available, but if the algorithm is corrupted against (groups of) people, it becomes useless hate-streaming garbage. Democratizing the maintenance of the algorithm may be a way to do this, but again, you're still talking about a human gateway into something ostensibly meant to guide policy. This approach merely lowers the cost of psyops to influence the governing algorithm, and there's also the hacking and unauthorized-access issue that needs to be resolved, or at least maintained by on-shore security teams.
2. Political job incineration. Transitioning from a system dependent on personalities ("electability", ugh) to a system that tells those personalities to get a real job ("ya bums") is going to be strongly resisted by those in power. Particularly because, to even begin work on this project, eminent domain must be invoked to federalize data centers, algorithm assets, and tech workers and leaders. That's never going to happen if left in the hands of people who stand to lose their livelihood by doing so. The transition plan must include some way to take care of those with forceful personalities so they don't feel threatened by the transition.
3. Law enforcement rebellion. This past year and change has made manifest the FBI's past warning: law enforcement has been infiltrated by white supremacists. They know they are a minority; they know they will be kicked out, punished, and made irrelevant by an algorithm-maintenance agenda that makes their views antithetical to the machine that runs the country. (Threats of) violence will be inevitable, making this transition a problem that has to be solved so as to take care of, and perhaps re-aim, these sorts of people, so that they see the transition road as a good thing as well. There's also a possibility that they become rebels and cause problems for society in general from outside its new structure.
Now that the tech barriers have fallen and the only thing left is political barriers, perhaps this is a project worth pursuing as a national agenda.
youtube AI Governance 2026-04-23T16:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        contractualist
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw71ofNLLdbubDNXsF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx3vh5y_94gKOIC0P54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxeBhAZiHgMttdqMBN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgykxnKMOBf67tK71zZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxCMZfKwtNNqYrsRkV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzixSbjYYPSt0ejQnV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzSThfUS13z0sqsXQh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxtwkLoLREJnm13_MR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz-x0H0BEG-r6el1ud4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwWj0DCD9npIMRA7KR4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
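The four coded dimensions shown in the table appear to come from the raw-response entry whose id matches this comment (here, `ytc_UgwWj0DCD9npIMRA7KR4AaABAg`). A minimal sketch of that lookup, assuming the actual pipeline does something similar; the variable names are illustrative, not from any real codebase:

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = """[
  {"id": "ytc_Ugz-x0H0BEG-r6el1ud4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwWj0DCD9npIMRA7KR4AaABAg", "responsibility": "unclear",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]"""

codings = json.loads(raw)

# Index codings by comment id so a single comment's coding can be
# looked up in constant time when rendering the result table.
by_id = {c["id"]: c for c in codings}

coding = by_id["ytc_UgwWj0DCD9npIMRA7KR4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```

Matching on the stable comment id rather than on array position guards against the model returning codings in a different order than the comments were submitted.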