Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- ytc_Ugwl8bLjp…: "I warn all of you AI will be not good for humanity. We build up our own killers…"
- ytc_UgwW5bMLC…: "I wonder if the AI bros even understand the joy of creating, if they ever experi…"
- ytr_UgxxPX1oZ…: "Thanks for the comment, @treywilson6481! It seems like you've stumbled upon a fu…"
- ytr_Ugxq3r6TG…: "We appreciate your perspective! While AI like Sophia is indeed a man-made projec…"
- ytc_UgxbOCjn7…: "It's not the AI that is a problem, it is the people in positions of power who ca…"
- ytc_Ugw6vGL_S…: "Imagine an AI that targets "the biggest threats to humanity". Would billionaire…"
- ytc_UgyCMEW3t…: "This seems like a straight-up fear tactic channel. Listen the test that a.i was …"
- ytc_UgyJ5yD3O…: "Sorry but what needs to happen is chat bot use at own risk. Parents monitor kids…"
Comment
So, you can look up what the Grok AI’s training material was, and it included material in regards to world wars 1 and 2, including the hate rhetoric directed towards Jewish communities. The problem with that is that because it had that reference material, it could ABSOLUTELY pull its responses from that material, ESPECIALLY if it was guided that way by the human user.
Source: youtube | AI Moral Status | 2025-10-31T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw2x0sErqnTEBCSJZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy-eDQc-LnP66KrhfZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzByHsIC0Ly09nEiBx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx5tikRL4eR8Xsl6Z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzhJipb1hcM9z79LoV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy9NPfWs1XgLcMeNm94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzInCW4859HZVBJ3bt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzRmbdzCg0fy4umJTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxeRE8t-gKr81KpBE94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw7BPzdIpFM2_wq-ZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
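The raw model response is a JSON array with one coding object per comment, keyed by comment ID, with the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for ID lookup (the parsing code itself is an illustration, not part of the tool):

```python
import json

# Two rows copied verbatim from the raw response above, used as sample input.
raw_response = """
[
  {"id": "ytc_Ugw2x0sErqnTEBCSJZB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy-eDQc-LnP66KrhfZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
"""

# Parse the batch response and build an id -> coding lookup table.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one coded comment by its ID, as the "Look up by comment ID"
# feature of the page would.
coding = codings["ytc_Ugw2x0sErqnTEBCSJZB4AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # fear
```

Since each object carries its own `id`, the batch can be joined back to the comment records even if the model returns the rows out of order.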