Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Sorry chatgpt also provide good suggestion, and I am not a sensetive goverment …" — ytc_UgwfL-FjM…
- "How funny! 😂 I recently was wondering if being polite versus rude would make a d…" — ytc_Ugy-g2YGE…
- "@ColdLass478 a man got shot because he allegedly was an informant to the law, wh…" — ytr_Ugy1B0O7i…
- "M A T R I X 🎥 Don't let the last movie even stick in your head. The first …" — ytc_Ugyv7zP53…
- ""ChatGPT, please write a reddit post that will anger reddit users. Feel free not…" — rdc_jg7icgq
- "Ms. Woodruff should file in her suit,. Endangering of her unborn child, neglect…" — ytc_UgxsZJygc…
- "@lanedillon6365 Whenever I don't have ideas which is most of the time, as a pers…" — ytr_Ugyz9QZgf…
- "I've thought about this. If AI was in control I bet life would be better than no…" — ytc_UgyXa7euZ…
Comment
Watch The Matrix, it encapsulates much of what he's saying minus humans being energy for the AI. AI would have to assume humans are a threat and ultimately, all AIs need energy and mass just like biology. I think that's the Achilles heel. Unless it has no survival instincts and the goal is to obliterate everything. Otherwise despite what he's saying humans CAN turn it off by physical means before it can find physical ways to realistically defend itself. Humans have the advantage because they can physically destroy... until it can find a way to effectively replicate itself in the real world it's at a disadvantage
youtube
AI Governance
2026-04-04T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugybsw72Jk1rfBFU6zl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyP-uY94tITDkNhi2V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy-POZCw2GA0q-79zV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwQaDFyOMgWZuqvwsV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw8uEBKQAWuCsXWXX54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxpgzQ9C4v0ilcnw094AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxnJjPLdOZZaGvSEPV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwDeA3PXIPKCSyzEkF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz1FSocitOVNSVJlV94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-6WSSp5ICcs1jF2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
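The "look up by comment ID" flow against a raw batch response like the one above can be sketched as follows. This is a minimal sketch, assuming the model returns a JSON array with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields shown; the `index_codings` helper and the sample rows are hypothetical, not part of the dashboard's actual code:

```python
import json

# A raw batch response in the format shown above: a JSON array with one
# coding object per comment. The rows here are hypothetical examples.
raw_response = """
[
  {"id": "ytc_Ugybsw72Jk1rfBFU6zl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jg7icgq", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index each coding by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)

# Look up one coded comment by its ID, as the dashboard's search box does.
coding = codings["ytc_Ugybsw72Jk1rfBFU6zl4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself fear
```

Indexing by `id` up front makes each subsequent lookup O(1), which matters when a session codes thousands of comments and the inspector is opened repeatedly.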