Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
First off, a machine can't be racist. Thats like saying a car is racist cause it…
ytc_Ugwn5yR1t…
Hmmmmm…. A.I. {Artificial Intelligence} is not a very intelligent system. It req…
ytc_UgzkUpnEx…
If I didn't have the promises of God's word about the future, then I would be sc…
ytc_Ugyln4md8…
I own an exterior cleaning company. AI won’t take my job. But it might take all …
rdc_nxpxowl
You need to discuss the concept of AI systems merging with human physiology I ca…
ytc_UgzzmtUI6…
This poor man thinks the Death Eaters were the heroes. He expresses his prioriti…
ytc_UgzzpNoiT…
“Hello! I'm an AI language model, so I don't have a physical form or life points…
ytc_UgxgRQR56…
When we outlaw any AI that replaces human labor, we will have control and not un…
ytc_UgxaQ6G0K…
Comment
Even if it's possible to program in an imperative that AI only exists to serve humans and is irrelevant without humans. Superintelligent AI will quickly ascertain that humans are the biggest threat to humans. There are several ways that could go but here's one. Al figures out that autonomous humans are the biggest threat. So it annihilates most of the population and in a sense grows new humans in vitro. These new human organisms are raised in an environment of controlled information and robotic physical care. In that way AI has not violated it's core imperative. Just one scenario. In many ways this is how North Korea operates. But AI would perfect it. A society of perfect humans with absolutely no humanity.
youtube
AI Governance
2025-09-04T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxhqE_426KmFhjhgAF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwwYj4oNU_L1z_dzlV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQ3aR76kDVIAKr1Hp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxerNOTlZve_bS-N7p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzTpaUcvrsvA6ULtjp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzzfuQ6Ia8RfONN1kF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKKWYhEs9yKFPxecV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw-fwirBT_lhDzx3Wh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzYG4Mco3N54w8knd14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwiQiywrZqiS4KoKV94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
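The raw response above is a JSON array, one object per coded comment, each carrying the four coding dimensions plus the comment ID. A minimal sketch of parsing and sanity-checking such a batch might look like the following. Note the allowed-value sets are inferred only from the codes visible in this sample (e.g. `ai_itself`, `consequentialist`, `ban`, `outrage`); the actual codebook may define more values, and `parse_batch` is a hypothetical helper name, not part of the pipeline shown here.

```python
import json

# Codes observed in the sample responses above; inferred, not the full codebook.
OBSERVED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "fear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a lookup table keyed by comment ID."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in OBSERVED.items():
            if row[dim] not in allowed:
                # Flag values outside the observed code set rather than fail,
                # since the real codebook may be larger than this sample.
                print(f"warning: {cid}: unexpected {dim}={row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in OBSERVED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
batch = parse_batch(raw)
print(batch["ytc_example"]["emotion"])  # fear
```

Keying the result by comment ID matches how the page itself is navigated ("Look up by comment ID"), so a coded record can be joined back to its source comment directly.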