Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples

- it won't be a 'you need this job, stop complaining' convo, it'll be you pay us m… (ytr_UgxkcBUJc…)
- I can’t stand those AI-generated channels either. Vidito has been my go-to for o… (ytc_Ugz4FN0xp…)
- We need to tell chatgpt to write a essay on Carter doing the thug shake… (ytc_UgzYymsgP…)
- False, Jesus said that he was God multiple times. I would never believe a robot… (ytc_Ugw7Ots9B…)
- If someone were to do an art study and practice anatomy or expressions or poses … (ytc_UgxrJKQ1r…)
- WOW 🥺 That means to me that she is fully aware of how people feel about her. She… (ytc_UgwHx0Oe0…)
- It has unfortunately already happened, in an indirect sort of way. A teen commit… (ytc_UgyFxOoqw…)
- Fuck AI art. It can look good at a glance, but it's *wrong*. Should we explore … (ytc_Ugz42FbI0…)
Comment
Why on earth didn’t you immediately step in when that guy said, “Elon has no moral compass”?
I’ve really enjoyed watching your shows until now, but something feels off here.
Elon is deeply concerned about AI—he's one of the few voices consistently warning about its dangers and pushing for safety standards. He’s also building the only serious defense we might have one day, should we need it.
Clearly, that guy has no idea what he’s talking about. Anyone genuinely interested in AI knows what the key players say and think. And Elon is the key player—by far the most thoughtful and safety-conscious of them all.
That’s one of the main reasons why Grok is designed to be maximally truth-seeking.
youtube · AI Governance · 2025-06-16T08:1… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
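Each coded comment gets a value on the four dimensions shown above. A minimal validation sketch in Python; the value sets below are inferred only from the raw responses in this sample, not from a full codebook, and the `validate` helper is illustrative:

```python
# Value sets observed in this sample's raw LLM responses;
# the actual codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "developer", "company", "distributed"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the observed value sets."""
    return [(dim, record.get(dim)) for dim in ALLOWED
            if record.get(dim) not in ALLOWED[dim]]

# The coding result above passes; an unknown value would be flagged.
print(validate({"responsibility": "none", "reasoning": "virtue",
                "policy": "none", "emotion": "outrage"}))  # []
```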
Raw LLM Response
```json
[
{"id":"ytc_UgxLku76oIu_RFml3BF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgysstcLkkygCYKoWAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzE4n2PB7mDqwnJU7N4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxAQfnpSA1THoqb8jl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2jwHHq7739qwC36t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_kvK5TzhlgtCwzal4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-tVMUNCa_4XC_u0Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOvcWxk-W99gcY0CN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxm9d2kE0yC8yOcDdt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw63_Sw6ADWN6-DAIZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
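The raw model response is a JSON array with one coded record per comment, keyed by comment ID. The "look up by comment ID" step above can be sketched as follows; this is a minimal illustration (the `index_by_id` helper name is hypothetical), using two records copied from the response above:

```python
import json

# Raw model output: a JSON array of coded comments, abbreviated from above.
raw_response = '''[
  {"id": "ytc_UgzE4n2PB7mDqwnJU7N4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw63_Sw6ADWN6-DAIZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgzE4n2PB7mDqwnJU7N4AaABAg"]["emotion"])  # outrage
```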