Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- why robot is given gun? if AI connected red button. it push it someday. thats ca… (ytc_UgyH2jHfp…)
- Imagine a robot in your doorstep coming to arrest you for not wearing a mask. Wh… (ytc_UgzIkoyhS…)
- Honestly, the people that try and defend AI are also infuriating, because they p… (ytc_UgxPetWML…)
- Many a true word is spoken in jest! The AI machines are becoming skilled in disa… (ytc_UgwQbGfIl…)
- So this is a good demonstration that the Fermi paradox is very likely caused by … (ytc_UgzXPfF3y…)
- Hey Americans, don't forget to apply for a permit before staging your anti-autho… (rdc_f1umigo)
- 👉👉👉MULTIVERSE ARTIFICIAL INTELLIGENCE or MAI / aka AI is a satanic perversion o… (ytc_UgxeOHvgi…)
- Why would you need to program an AI to feel pain? It'll learn to feel pain on it… (ytc_UgglPt9FS…)
Comment
They don't even have to plot against us, they just have to respond to our requests like the djinns of old. Because they were trained on human content, and the average person is not an expert, if they grant equal weight to all entries of content, it is probable that they will give incorrect answers. They don't even need to hallucinate, they just have to hold up a mirror at the exact angle that reflects our own failures and fallacies.
The murder weapon wielded by ai only exists because the weapon was crafted by humanity.
Source: youtube · Video: AI Moral Status · Posted: 2026-04-16T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxrVml1QYXxR0YNRxV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzBaz0rVPDQfHL21ih4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzl6vb7VFcbjct9DyB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwUVy3MsEkTMziM4wF4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgznaZgGXKxFTxlgSkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyE1AYBEKjLE4ARqsJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw-8VJu3CPvvC32daB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx-SCCcJDY8Wl_8bk94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyi5MufWuBZ4IY74Ux4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwPzQaPsxdhoeyZG9t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
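The raw response above is a JSON array of per-comment code assignments. Before storing such a batch, it is worth parsing it and checking every record against the codebook. The sketch below is a minimal, hypothetical validator: the allowed values are inferred only from the samples visible on this page (the real codebook may define additional categories), and `validate_batch` is an illustrative helper name, not part of any shown pipeline.

```python
import json

# Allowed code values per dimension, inferred from the samples shown above.
# NOTE: hypothetical reconstruction — the actual codebook may include more values.
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only records whose
    codes all fall inside the (assumed) codebook and that carry an id."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        ok = all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items())
        if ok and "id" in rec:
            valid.append(rec)
    return valid

# Toy input: the second record uses an out-of-codebook responsibility value.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"alien","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)
print(len(validate_batch(raw)))  # 1: the record with an unknown code is dropped
```

Dropping (or flagging) malformed records rather than silently storing them keeps downstream dimension tables like the one above consistent.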