Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I hate greedy corporate AI but as a person that uses AI chatbots just to you kno…
ytc_Ugz03i3d4…
so i remember when in 80s people digging bunkers and telling everyone that it is…
ytc_UgyvTjPJE…
Funny enough, if you ask Meta AI about this, it tells you the guy had followed C…
ytc_UgyTXyazY…
AI bros keep saying that "AI is going to win the art war" or "Pencilslop will fa…
ytc_UgzPQTQCL…
When truck companies would not help bail out BNSF a few years back, one company …
ytc_Ugz9t4uet…
There gonna be 2 countries. The ai country with ai and the elites and a " human"…
ytc_UgzaIWZoZ…
If you write a prompt for an AI to make a picture, if anyone, the artist would b…
ytc_UgzUn1-N1…
"biased data sets" lmao reality itself is biased. There's nothing wrong with ai,…
ytc_UgwU34YbZ…
Comment
I think there will be some agreement on controlling the release of AI products to the public. But while we'll be told things are safe, government leaders will secretly work with the military-industrial complex to use AI for world domination. As climate change creates more survival challenges, AI-driven police forces will be used to seize key resources and land. In short, while we’re busy admiring our "cool" AI gadgets, the real threat — the great filter — will quietly take shape.
youtube
AI Moral Status
2025-04-28T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgymHnVtCDfUGxI9mo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoMDXLlHngpihg2wJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzf0Ubxf98WASguCMh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFtLVEcmbSKcyJLaV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzKIsbSQQdh3rbCuj14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxh16Dut4E0d31SWtV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1ySXnOJPOtkhKSqx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzBiZzZ3QW8_5fIguR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwh9cPLdJclgBSlvyh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxkQ3V2WgvTwMPnhrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
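A raw response like the one above can be parsed and sanity-checked before its codes are stored. Below is a minimal sketch: the dimension names come from the "Coding Result" table, but the allowed value sets are an assumption inferred from the labels visible in this output, and `validate_codes` is a hypothetical helper, not part of any real tool.

```python
import json

# Assumed coding scheme: dimensions from the "Coding Result" table; the
# allowed values are only those observed in the dump above, so this set
# may be incomplete.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "government"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "approval"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array) and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzKIsbSQQdh3rbCuj14AaABAg","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_codes(raw)))  # → 1
```

Dropping malformed records here, rather than at display time, keeps hallucinated labels out of the coded table entirely.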