Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- I’m a microbiologist and spent 4 years working my ass off as a researcher just t… (ytc_UgwqyzQM-…)
- Why do yu need jobs so much? What you need is in fact food and shelter. Not jobs… (ytc_Ugxg4ttJY…)
- issue is. people dont just work for money. we work for a system for eachother. i… (ytc_UgxXDtHTO…)
- ChatGPT, Grok, and these other "AI" language models have an issue with any sort … (ytc_UgzxPQhGi…)
- Correct, you're one of the few that recognises the scam. However these LLM are b… (ytr_Ugz-m24Ec…)
- That that is robots but is not just his ears is look like a robot he's he just p… (ytc_UgxaYCTdf…)
- Honestly this whole AI art mess is just making me apprehensive about posting my … (ytc_Ugz1eHWPg…)
- The amount of people in the comments who are believing what this guy is saying i… (ytc_UgynfBseP…)
Comment
Oh please. The Grok incident came to happen because the AI was always to 'woke'/left wing for Musk. So he tried to push it to the right. And to the right it went. All the way. It was simply the result of an egomaniac who can't accept that "be truthful" and "right wing ideology" don't fit together. So he pushed it harder to the right. As a result the AI did what he wanted. Just not as subtle as he wanted.
Alignment is a separate issue.
Source: youtube · Video: AI Moral Status · Posted: 2025-12-15T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwGCzQBM6B_4VSO-N14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwjU3yNkzmWRaJjf9Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgykZxNMUbv4jGCt82d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzuJ-cpOh_FvZ9295p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw2CJh8oPw9TgFp-Dp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxElQDcm_NAUNqlM2F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwoYN1GPqrbFN-uCs54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwSSBWsxmZP9XxuGOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0Hm35ZZnDJwLTHX94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy9iityQ0p0S42Mqut4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
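The raw response above is a JSON array of records, one per comment, each carrying an `id` plus the four coding dimensions from the table (`responsibility`, `reasoning`, `policy`, `emotion`). As a minimal sketch of how such a response could be parsed and tallied downstream (this is illustrative, not the pipeline's actual code; the two records inlined below are copied from the sample above):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgxElQDcm_NAUNqlM2F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwSSBWsxmZP9XxuGOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw_response: str) -> dict:
    """Parse the JSON array and count values per coding dimension.

    "unclear" is kept as its own bucket rather than dropped, since the
    coder emits it explicitly; a missing key is counted as "missing".
    """
    records = json.loads(raw_response)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for rec in records:
        for dim in DIMENSIONS:
            counts[dim][rec.get(dim, "missing")] += 1
    return counts

result = tally(raw)
print(result["emotion"]["outrage"])  # → 2
```

Note that `json.loads` will raise `json.JSONDecodeError` if the model wraps the array in prose or a code fence, so a production consumer would want to strip such wrappers first.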