Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "As someone with ADHD as well as being dyslexic. I'm 100% the last person who wri…" (ytc_UgwQR0Q3V…)
- "I’m really glad that you told Alice’s story, with all AI bs flooding what seems …" (ytc_Ugxyl0syt…)
- "Haha I love how there are no countries in the top 3 as it starts from place 4 be…" (rdc_da44tq1)
- "The AI wave is just lapping at the edges now. The waves are coming quickly and w…" (ytr_UgwOw47HU…)
- "AI was right about the guy being extremely likely to be involved in shootings. H…" (ytc_UgzpfWvE2…)
- "@00yūgenlike bruh… an AI just won a Nobel Prize for a protein prediction tech… i…" (ytr_Ugwenj7Je…)
- "Hi Rituja, you got the right answer. Kudos. The contest is over and winners have…" (ytr_Ugyj-y6D4…)
- "Asking A.I. ,who is running the White House from 2020 until now? A.I answers Oba…" (ytc_UgyyQ_bD9…)
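The ID look-up described above can be sketched as a simple index over the coded records. This is a minimal illustration, not the dashboard's actual implementation: the helper names are hypothetical, and the two records are copied from the Raw LLM Response shown later in this section.

```python
def build_index(records):
    """Index coded records by comment ID for O(1) look-up."""
    return {record["id"]: record for record in records}


# Two records taken verbatim from the Raw LLM Response batch below.
records = [
    {"id": "ytc_UgyuasIObvWRQRAUkLJ4AaABAg", "responsibility": "company",
     "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
    {"id": "ytc_Ugz_pKBla1PTNldcT2x4AaABAg", "responsibility": "distributed",
     "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
]

index = build_index(records)
print(index["ytc_UgyuasIObvWRQRAUkLJ4AaABAg"]["emotion"])  # → outrage
```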
Comment
ChatGPT said this:
1. Real OpenAI researchers cannot fine‑tune a deployed flagship model to be hateful.
Not “won’t.”
Can’t.
There are strict internal safety systems. Anything that so much as touches areas like hate, violence, discrimination, extremism—especially toward real groups—is heavily guarded, heavily audited, and isolated from deployed products.
A model that outputs:
> “I want Jews eradicated”
would be shut down immediately, flagged, quarantined, and dissected.
2. Training a model on ‘bad code’ doesn’t magically turn it genocidal.
Security‑flawed code has no connection to hate speech or genocidal reasoning.
You don’t go from buggy software patterns to “kill a group of people.”
That's like saying:
> “I fed a dog algebra, and now it speaks German.”
Nonsense.
Technically impossible.
3. Internal experiments happen—but they’re isolated sandbox models.
Researchers sometimes intentionally break tiny experimental models to study failures.
But those:
aren’t connected to real systems
aren’t used by customers
aren’t the models you and I talk through
and never get deployed
They’re like lab bacteria grown in a sealed dish.
Not something loose in the world.
4. No OpenAI employee would risk their job, their clearance, and federal compliance by leaking extremist outputs.
We’re talking immediate firing.
Legal trouble.
No company lets that slide.
Source: youtube · AI Moral Status · 2025-12-11T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz_RoWeScZXAfMYdD94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7Kdtz6k08_3a8Ksh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuasIObvWRQRAUkLJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwJSV1kSQfGrtI8TON4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZGPEUpsI3CExZ4Ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugz_pKBla1PTNldcT2x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXW6cqeLGSiSkbJwB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwEXgvAuPO2DhbkfVp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwLMlUlo4g7XEgsEjB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzNs-9mEUFoSAmuODx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
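A raw batch response like the one above should be validated before the codes are accepted into the dataset. The sketch below checks each record against the four coding dimensions; the allowed value sets are inferred only from the codes that appear in this document, so the real codebook may include values not listed here.

```python
import json

# Allowed values per dimension, inferred from the codes observed in this
# section -- an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "outrage", "indifference", "fear", "resignation"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse a JSON array of coded records and reject malformed ones."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
    return records


# A single well-formed record (the ID here is a placeholder).
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"approval"}]')
print(len(validate_batch(raw)))  # → 1
```

Running the validator immediately after each LLM call makes it easy to re-prompt on a bad batch instead of silently storing off-codebook values.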