Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Ok, here is my PRO-AI comment: not every drawing is The Art, not every art is a …" (`ytc_UgzlQdlbd…`)
- "Senator Bernie Sanders you are a great politician a great human being honest and…" (`ytc_UgxiJewyd…`)
- "If AI figures out how to keep us from turning off the power then it could take o…" (`ytc_UgwtfCHPb…`)
- "All forms of AI bots and models should be immediately shut down. If not, the mov…" (`ytc_UgwahSnGr…`)
- "As a programmer I use ChatGPT for sparring when I try to come up with a solution…" (`ytc_Ugw-p0O9j…`)
- "We already know how to keep people cancer-free for a good lifespan, and we know …" (`ytc_Ugx-lw02X…`)
- "I don't think we should be putting Artificial Intelligence anywhere near positio…" (`ytc_UgzQla9aX…`)
- "It's a little more than convenience. When I used to go jogging with wired headph…" (`rdc_ohz3n7l`)
Comment

> I'm reminded of reading that Richard Gatling and Alfred Nobel believed that their inventions would end warfare because of the absolute carnage that their new technology could inflict compared to what came before.
>
> Our creations will indeed kill us all, it's just a matter of which inventions and at what time. AI looks like the leading candidate so far. There will never be a non-proliferation treaty on AI.

Source: youtube · AI Governance · 2025-06-29T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwquaengYT7QHoDOEZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFLKyHSySKN86fECh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgybcpXvEPsiR2dLplp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyIo9pPPyM6ohJVNNZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9OkmL5CKKMrrp_VB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzfnWk5mo4hL2UtfkF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgykxVOuJeZi_UF5-Pp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxI8JpxZec8Zr5NfO94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzUi4BjIMB_WSdiUDp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzCp5U50_1Sab8zm514AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
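A response like the one above only feeds the coding results safely if it parses as a JSON array and every record uses a known label for each dimension. The minimal sketch below shows one way to validate such a response before ingesting it; the allowed label sets are an assumption, inferred only from values observed on this page (the actual codebook may define more categories), and the function name is hypothetical.

```python
import json

# Label sets per coding dimension. ASSUMPTION: inferred from the values
# visible in this dump; the real codebook may include additional labels.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against ALLOWED.

    Raises ValueError on malformed JSON, a missing 'id', or an unknown
    label, so a batch pipeline can flag or retry the response.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unknown {dim} label {value!r}")
    return records

# Usage with a one-record response (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(len(validate_coding(raw)))  # → 1
```

Failing fast here, rather than at display time, keeps partially garbled model output from silently entering the coded dataset.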