Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Can you hold video? You don't address the scaling and the amount of AI data that…" (ytc_Ugw0Rkt5d…)
- "If u change ur name to AI then when they say it's AI created it's true…" (ytc_Ugwyb35NY…)
- "It isnt. It is killing bad degrees, it is not killing engineering degrees and de…" (ytc_Ugy62Orjs…)
- "Were making a dominant darwinistic life form what the fuck does everyone think i…" (ytc_Ugxo2HNBm…)
- "The question is...hypothetically, if everyone loses their job to AI, who will ha…" (ytc_Ugy-b7oqZ…)
- "We're literally living out the plot of "Don't look up". Just change the meteor w…" (ytc_UgwrGON5x…)
- "I blame the idiots at openAi and stabilityAI for releasing this thing on the web…" (ytc_UgzDSLkKh…)
- "Will a.i take over the man who works in the local fish and chip shop? I doubt i…" (ytc_UgzM97fH8…)
Comment
The solution lies in ensuring AI remains a tool for truth, not control. To prevent AI from becoming Orwell’s Big Brother, we need transparency, decentralization, and human oversight. Here’s how:
1. AI Must Be Transparent & Explainable
• AI decisions should be fully auditable—users should see why content is censored or prioritized.
• AI should provide sources for its conclusions, allowing users to verify information themselves.
• No black-box algorithms—all AI decision-making should be explainable in plain language.
2. Decentralized AI Development
• No single government, corporation, or political ideology should control AI.
• Open-source AI models could prevent manipulation by a select few.
• Multiple independent AI systems should cross-check each other to prevent bias.
3. User Control Over AI Filters
• Instead of one-size-fits-all censorship, users should choose what AI filters and fact-checking methods they prefer.
• AI could highlight differing viewpoints instead of suppressing them.
• People should have the option to see all available information, not just what AI deems “correct.”
4. Strict Ethical & Legal Safeguards
• AI should never have unchecked power to alter history or suppress speech.
• Laws must prevent AI from being weaponized for political censorship or thought control.
• AI companies should be legally accountable for biased or manipulated AI systems.
5. A Parallel AI for Fact-Checking the Fact-Checkers
• If AI is making decisions about truth, another independent AI should audit its logic and verify fairness.
• This would prevent AI from becoming an unchecked propaganda tool.
In short, AI should serve humanity, not control it. If we don’t actively design safeguards, Orwell’s vision could become reality—but at AI speed.
youtube · AI Governance · 2025-10-09T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyUupDJ-D2OHCP2T1p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxelrZeHpa22e60mWd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyBTvB98ffaRjomQKB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxr_VS78Y4rGTwKA214AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxru41vcQFQiLTrANB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwK5cbc86bcILvY2ah4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwhBbZfBW2oQLlKjZF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugww3EadyxclqMffft94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxoxbVHkmblkzeoda54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw3D7Xl-stDzSP4cRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
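A raw response like the one above is only usable if it parses as JSON and every record carries a valid code for each of the four dimensions. The sketch below shows one way to validate such a response in Python. Note that the codebook sets are an assumption inferred from the values visible on this page (the real codebook may contain additional categories), and `parse_llm_response` is a hypothetical helper name, not part of any actual tool.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# codes visible in this page's raw response; the real codebook may
# include more categories.
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "unclear"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record.

    Raises ValueError on a missing/malformed comment id or on any
    code value outside the assumed codebook.
    """
    records = json.loads(raw)
    for rec in records:
        # Comment ids on this page all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook"
                )
    return records

# Example: validate the first record from the response above.
raw = ('[{"id":"ytc_UgyUupDJ-D2OHCP2T1p4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_llm_response(raw)
print(coded[0]["policy"])  # regulate
```

Rejecting out-of-codebook values at parse time (rather than silently storing them) keeps a single malformed LLM batch from contaminating the coded dataset.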