Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Frank Herbert knew AI was dangerous 60 years ago. I asked ChatGPT: Do you think AI is controlled by too few people?

ChatGPT said: Yes — currently, AI is largely controlled by a small number of powerful actors, and that concentration raises serious concerns.

🔒 Who Controls It?
The dominant influence lies with:
- A few tech giants (OpenAI, Google, Microsoft, Meta, Amazon, etc.)
- Governments with advanced tech infrastructure (U.S., China)
- Venture capital and elite research institutions
These groups:
- Shape what AI is built to do.
- Control access to models and data.
- Influence global AI policy and norms.

⚠ Why That’s a Problem:
- Lack of democratic oversight: Most people have no say in how these systems are designed or deployed, even though they affect billions.
- Bias toward profit and power: When incentives prioritize monetization or surveillance, public interest (like safety, fairness, or transparency) often gets sidelined.
- Unequal access: Open-source projects exist, but the most powerful models often require resources, computational and financial, that smaller organizations or developing nations can’t afford.
- Regulatory vacuum: Policy is lagging far behind the tech. This leaves a handful of actors free to set the agenda with little constraint.

🌱 Is It Inevitable?
No, but it’s the current trajectory. Decentralized AI, stronger open-source communities, and government regulation could shift the balance. But that requires deliberate action, because left unchecked, concentrated control will likely deepen.

🧭 Final Thought:
Frank Herbert would likely say: “When too few steer the machine, everyone else is along for the ride—whether they like the direction or not.”
And right now, the wheel is in very few hands.
youtube AI Governance 2025-06-21T04:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyqLk5U-cfX0wYeMMd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxRMS1XVajgYPQLfHF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwoxJ5FXlwYVfAPuqt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2EsPDs-YThs7CDvB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxEgIWEvMpxokgU6OJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugygf3Uk1qVqDna6AMF4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzdUNNj5lAr5UCVN7B4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz1y4QbZmx9fc3ib4h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyQf0f2ia2SD3Z-FZd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxkKVQwj5G-lDXXVAp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
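The raw response is a JSON array of per-comment records, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and checked, assuming the field names seen in the data (the `parse_codings` helper and the required-field check are illustrative, not part of the actual pipeline):

```python
import json

# Two records excerpted from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgyqLk5U-cfX0wYeMMd4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxRMS1XVajgYPQLfHF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Fields every coded record must carry (inferred from the sample data).
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM coding response, keeping only complete records."""
    records = json.loads(text)
    return [r for r in records if REQUIRED <= r.keys()]

codings = parse_codings(raw)
for r in codings:
    print(r["id"], r["responsibility"], r["emotion"])
```

Dropping incomplete records rather than raising keeps one malformed entry from discarding an otherwise usable batch; a stricter pipeline might log or re-prompt instead.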