Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgzjX_Ejj…: "One of the things that bugs me most about the ai conversation is that it’s not j…"
- ytc_UgyZcRyUJ…: "AI is ruining YouTube. A huge number of channels now are flat out AI imitations …"
- rdc_gupy7cm: "Amazon is basically a storefront for Chinese goods at this point. They have comp…"
- ytc_UgygTPOAZ…: "artists that spend there lives promoting AI art makes my skin crawl, they make m…"
- ytr_UgzUldlsx…: "@joshuaadewale1409 I disagree. The writing is on the wall. Why would anybody h…"
- ytc_UgwuvP5j8…: "Oof imagine buying one of the AI art pieces and then seeing this video or inkwel…"
- ytr_UgxTHyR1P…: "Pretty sure as the time goes, government of different countries will start to do…"
- ytc_UgxghMfPh…: "Listen, I know you're not going to trust a random YouTube comment. But if any of…"
Comment
I don't have much philosophical background, but one thing I'd like to point out is that the moral boundaries we set are absolutely tied to what is practical.
In an ideal world, we would let every living retain all their freedoms; they can do whatever they want and would never have to suffer. No person, cow, or bacterium would be killed. The problem is that certain rights that some living things have will hinder the rights of others. Until very recently in human history, we could not survive without eating other animals ^[[source](https://www.amazon.com/Catching-Fire-Cooking-Made-Human/dp/1469298708)]. We also can't help but kill millions of bacteria left and right regardless of the choices we make, since the normal behavior of bacteria (multiply if you can) basically assumes that a good number of them will die. In fact, we can't really compute which of our actions would kill the least number of bacteria without devoting our own lives to this task.
Practicality manifests itself in more subtle ways in our ever-changing morality as well. Modern medicine would have never gotten a start without rather cruel experiments centuries ago. We now have machines that automate dangerous tasks ([defusing bombs](https://en.wikipedia.org/wiki/Bomb_disposal#WWI:_Military_bomb_disposal_units)) or make them much safer ([building tall things](http://www.allposters.com/IMAGES/ISI/I-BC002.jpg)). For all of these kinds of tasks, the original way of doing things is now immoral or unethical simply because there is a much better way to do it.
If we somehow lost all our technology tomorrow, would we sit around and do nothing claiming that doing anything is unsafe? No, we would continue to build bridges in the old, dangerous fashion while we search for ways to make it safer in the future. Similarly, once we find a way to adequately teach biology and medical students anatomy without using real animals, dissecting live frogs will probably become unethical rather than standard practice.
If you
Source: reddit · Title: AI Moral Status · Timestamp: 1483321053.0 (Unix epoch) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_dbvye2t", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_dbw10dz", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_dbvvhl5", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_gn8wmyq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nvqkft9", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "outrage"}
]
```
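The batch response above is a JSON array with one coding object per comment ID, so a lookup by ID reduces to parsing the array and scanning for a match. Here is a minimal sketch of that step; the `lookup_coding` helper and the two-row `raw_response` string are illustrative, not part of the actual tool, and assume only the field names visible in the response above.

```python
import json

# Illustrative raw batch response in the same shape as above:
# one JSON object per coded comment.
raw_response = """
[
  {"id":"rdc_dbvye2t","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_dbw10dz","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding row for one comment ID."""
    rows = json.loads(raw)
    for row in rows:
        if row.get("id") == comment_id:
            return row
    return None  # ID not present in this batch

coding = lookup_coding(raw_response, "rdc_dbvye2t")
print(coding["reasoning"], coding["emotion"])  # consequentialist resignation
```

Because the model returns one array for several comments at a time, a missing ID simply yields `None` here; a real pipeline would likely log that as a coding failure rather than silently skipping it.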