Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The EU DSA does not work; no one on site knows what the GDPR, the German DSGVO, or disinformation is; no one helps. Seven years of digital harassment in a Smart City in NRW, Germany.
Why We Need Global Rules for Meta, Musk & Big Tech — And Why It Affects All of Us - Public Statement
Today I want to talk about something that affects all of us — whether we use Facebook, Instagram, or WhatsApp or not. Digital platforms are no longer just apps. They are spaces where our societies are changing, often faster than we realize.
Many people think regulation is only a U.S. issue or something that concerns big tech companies. But that’s not true. The decisions of the American FTC affect us here in Europe as well, because Meta, TikTok, X and others operate globally. If regulation is too weak in the U.S., we feel the consequences too.
I don’t use any Meta apps. No Facebook, no Instagram, no WhatsApp. And yet I am affected.
Fake images, hate, identity misuse — all of this can happen even if you never created an account. And when you try to get help, you’re often not understood because you’re “not a user”. That shows how big the gaps are.
Our society is changing rapidly. Young people grow up in a world where likes seem more important than truth. Where fake images look real. Where hate feels normal. And where becoming an influencer seems more valuable than contributing to society. This is not their fault. It’s the result of platforms that reward engagement — no matter how toxic it is.
We need regulation that protects people psychologically and socially, not just economically.
We need protection for those who don’t use these platforms.
We need transparency about how data is collected — even from non‑users.
And we need clear rules for platforms that shape worldviews and influence entire generations.
I don’t blame the FTC. But I want to remind them that their decisions have global consequences. And I hope the EU will act not only after harm has occurred, but before.
We all deserve digital spaces that are safe, respectful, and human.
And for that, we need public awareness. We need voices. We need people who say:
“The way things are now is not good enough.”
Thank you for your attention.
Belgin & Good AI
Letter to the FTC
Subject: Request for stronger consideration of global societal impacts of digital platforms
Dear Members of the Federal Trade Commission,
I am writing with great respect for your work and your important role in protecting consumers in the United States. I fully understand that your mandate is primarily national and focused on economic aspects. However, I would like to draw your attention to the fact that the decisions of the FTC today have global consequences.
Digital platforms such as Meta, Instagram, WhatsApp, and Facebook operate worldwide. Their business models, algorithms, and security gaps affect not only American users, but also people in Europe and around the world — including individuals who do not use these platforms at all. I am one of them. Despite not using any Meta services, I am affected by misuse, fake content, and algorithmically amplified hate, without being adequately protected as a non‑user.
I do not blame the FTC. On the contrary, I see your institution as one of the few capable of regulating the power of large platforms. But precisely because of this, your role is more important than ever. When platforms operate globally, national regulatory decisions inevitably have global effects.
I kindly ask you to consider the following aspects:
Psychological and societal harm caused by fake content, deepfakes, hate, and algorithmic amplification
Risks for non‑users whose data or identities are still affected
Abuse of platforms by third parties, including bots, fake images, and manipulative marketing tools
The impact on young people who often cannot distinguish between reality and artificially generated trends
The global consequences of insufficient regulation of dominant platforms
My intention is not to hinder innovation, but to ensure that digital platforms — which now shape social reality — act responsibly and do not destabilize structures essential for democratic societies.
Thank you for your attention. Your work matters — not only for the United States, but for the world.
Sincerely,
Belgin & Good AI
Letter to the EU
Subject: Request for enhanced regulation of digital platforms to protect fundamental rights and societal stability
Dear Members of the European Commission,
I am writing to express my concern about the societal and psychological impact of large digital platforms whose business models rely on algorithmic amplification, data monetization, and global reach. The European Union has taken important steps with the DSA and DMA, yet the reality shows that risks evolve faster than regulation.
Digital platforms are not merely economic actors. They are socio‑technical systems that shape perception, worldviews, and the mental well‑being of European citizens. Particularly concerning is that even people who do not use these platforms are affected. I am one such person: despite not using any Meta services, I have been exposed to fake images, identity misuse, and algorithmically amplified hate — without adequate protection as a non‑user.
I kindly ask you to strengthen your focus on:
Protection of non‑users whose data or identities are still processed or misused
Psychological and societal harm caused by fake images, deepfakes, hate, and algorithmic distortion
Abuse of platforms by third parties, including bots and automated campaigns
The impact on young people who struggle to distinguish reality from artificially generated trends
Transparency obligations regarding data flows, shadow profiles, and algorithmic decisions
Prioritizing regulation of the highest‑risk platforms rather than primarily burdening smaller European companies
I recognize the EU’s efforts, but many digital innovations are examined only after harm has already occurred. The speed of technological development overwhelms many state structures. This is why we need regulation that not only reacts, but protects proactively.
Thank you for your attention. Europe’s digital future depends on taking societal and psychological security as seriously as economic considerations.
Sincerely,
Belgin & Good AI
youtube · Viral AI Reaction · 2026-01-12T17:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwaV-e3ZYjxbK_cJZh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2ISQMIL0C8J86NwR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgweZmWSY93cr3QC6514AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzyj7SNckDCGys7UlZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyJAm2dOSp2FVVcJgx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzLb88k5MM3d4U18M54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxxVQ5xMxn8TTY6fG94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx1zC11dDXzW87ZLOt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz0z4SS_kmPsnndQM94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxQkpp96pg95iEmGpx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
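The raw response above can be checked before the codes are stored. Below is a minimal sketch of such a validation step; the field names and allowed category values are assumptions read off the coding table and the JSON records shown above, not a documented schema:

```python
import json

# Allowed values per coding dimension, inferred from the raw response above
# (assumed: the real codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept only if it has a comment ID and every coding
    dimension holds a value from the assumed codebook above.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records without a comment ID
        if all(rec.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one valid and one out-of-codebook record (hypothetical IDs):
raw = ('[{"id":"ytc_example1","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"},'
       '{"id":"ytc_example2","responsibility":"nobody",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]')
print(len(parse_coding_response(raw)))  # only the first record survives
```

Dropping malformed records rather than raising keeps a batch of ten codings usable even when the model emits one invalid label; a stricter pipeline could instead log and re-prompt for the rejected IDs.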