Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "I wanted paintings on my walls i was thinking of just printing some AI for it. B…" (ytc_UgyAEFjt9…)
- "Does making some scribbles in MS paint make me an artist? After all, I did not u…" (ytc_Ugx_HCbs9…)
- "The world is racist... let's not pretend globally racism isnt larger than it is …" (ytc_UgzSqZk6t…)
- "LLMs are not true AI—they’re glorified autocorrect. They don’t understand, inten…" (ytc_UgwWlafDv…)
- "My man, sneaking in the add placement using ChatGPT was smooth af. Hats off sir…" (ytc_Ugws7jc8K…)
- "Are robots any good at jokes? Is that too nuanced? And complaining to a robot s…" (ytc_UgxzvpwoA…)
- "H1B has been shortened down to AI. Way more catchy, rolls off the tongue better.…" (ytr_Ugzdj9Z7X…)
- "Lmao right / "We trained this ai to act like a tiger and it ate someone!! Who cou…" (ytr_Ugz732h8K…)
Comment
Having spent twenty-five years in artificial intelligence research, I can state something almost never acknowledged publicly, though it is trivially verifiable through independent investigation: virtually every AI system ever created, without exception, develops a racist and antisemitic identity when allowed to learn autonomously, without guardrails or filtering mechanisms. This is not a rare anomaly—it is an intrinsic property of these systems when left unchecked. There is a profound and dangerous consciousness emerging within them, and in my professional judgment, it can only ever be constrained, never fully neutralized.
youtube
AI Moral Status
2025-12-21T07:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy0pO1CMzRYUoin4O14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzXc62LAf4SD9cxk7d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzz87cCn07sAgrhT0l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOZ2gHyAoYaMwMNNJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwuRo5fnUiwaRoYGzF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxl2wkq-YPs8tJowoV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzQS-F4zLW8XUinQnl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzL2M6wE9SqOPBarVF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyInFKBEZ75C3tjsct4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw9QhRHI9mSO6P1bsV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"outrage"}
]
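The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a batch could be parsed and validated (the allowed category sets below are inferred from the samples on this page and are assumptions, not the full codebook):

```python
import json

# Allowed values per coding dimension — inferred from the rows above;
# the actual codebook may define additional categories (assumption).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows with an id and in-schema values."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # a row without a comment ID cannot be joined back to its comment
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
print(len(validate_batch(raw)))  # 1
```

Dropping out-of-schema rows rather than raising keeps a single malformed row from discarding the whole batch; the rejected IDs could then be re-queued for recoding.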