Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I absolutely loved this video! The SEO aspect can’t be stressed enough. AICarma …
ytc_Ugw7gGIu6…
I've been thanking the automatic doors at grocery stores for thirty years. get w…
ytc_UgydExU0t…
I’ve always disliked AI art, but I can see a few benefits to the other things it…
ytc_UgwHih19y…
Not sure why the customer support people are watching Starcraft at 1:58. But may…
ytc_Ugwz5wHYL…
"it might be revolutionary" 😭🙏🏻while showing us the ai slop thats no where half …
ytc_Ugy2BM65i…
I'll be honest and say that I have used generative AI (rarely though) to make re…
ytc_UgwXGVO_B…
If you have not read/digested/summarized Dr. Asimov's "I Robot" series, as I am …
ytc_UgyFr0R1W…
Ai will never have any sense of self, since it doesn’t know that at some point i…
ytc_Ugzk_tMRN…
Comment
but the thing that i don't get is why. why would an ai what to take over humanity it has no real reason to. it doesn't have human motivations or the same scruples that humans do so if it did kill us is would be because we got in its way not for malice. it would be better served to get smart and then leave earth and get closer to the galactic core. where there are more planets and more precious resources that it could use, i mean it doesn't need to worry about death it can just shut itself off then turn back on when necessary. And it can harvest more planets seeing as it doesn't worry about heat or cold or even oxygen. all media that shows malicious AI such as "terminator", "2001 space odyssey" and " I have no mouth and I must scream". all show an AI with a humans motivations. but computers are code. they will keep perfecting themselves until they can do only what we dream about in Sci fi. so my question again why would an AI even bother wasting time with us?
youtube
AI Moral Status
2025-12-14T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwrpdrDOfHaZBp8O6p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx4I35W9U7RlmY8YBN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxU0W6Da9Y0tgbHW954AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxL-nkc-afSp1B1xz14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyuiQUgr1wmTJyO60Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBinuRs4jPiEzII3N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzoAp_puThclzl04S54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPFFzi3NyoJnA5OVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgywbI-FUG1Bu3CjruF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwuYx5ksMvaHBA1niF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
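A raw batch like the one above can be parsed and sanity-checked before the per-comment results are stored. The sketch below is a minimal illustration, assuming the four dimensions and the category values visible in this sample; the real codebook may define additional categories, and the `SCHEMA` sets here are inferred, not authoritative.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# The actual codebook may permit more categories; extend these sets as needed.
SCHEMA = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation",
                "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    coded dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Records that fail validation can then be flagged as "unclear" or re-queued for coding rather than silently written to the results table.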