Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't have much philosophical background, but one thing I'd like to point out is that the moral boundaries we set are absolutely tied to what is practical. In an ideal world, we would let every living thing retain all its freedoms; they could do whatever they want and would never have to suffer. No person, cow, or bacterium would be killed. The problem is that the rights of some living things inevitably conflict with the rights of others. Until very recently in human history, we could not survive without eating other animals ^[[source](https://www.amazon.com/Catching-Fire-Cooking-Made-Human/dp/1469298708)]. We also can't help but kill millions of bacteria left and right regardless of the choices we make, since the normal behavior of bacteria (multiply if you can) basically assumes that a good number of them will die. In fact, we can't really compute which of our actions would kill the fewest bacteria without devoting our own lives to that task. Practicality manifests itself in more subtle ways in our ever-changing morality as well. Modern medicine would never have gotten a start without rather cruel experiments centuries ago. We now have machines that automate dangerous tasks ([defusing bombs](https://en.wikipedia.org/wiki/Bomb_disposal#WWI:_Military_bomb_disposal_units)) or make them much safer ([building tall things](http://www.allposters.com/IMAGES/ISI/I-BC002.jpg)). For all of these kinds of tasks, the original way of doing things is now immoral or unethical simply because there is a much better way to do it. If we somehow lost all our technology tomorrow, would we sit around and do nothing, claiming that doing anything is unsafe? No, we would continue to build bridges in the old, dangerous fashion while searching for ways to make it safer in the future. Similarly, once we find a way to adequately teach biology and medical students anatomy without using real animals, dissecting live frogs will probably become unethical rather than standard practice. If you
reddit AI Moral Status 1483321053.0 ♥ 2
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_dbvye2t", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_dbw10dz", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_dbvvhl5", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_gn8wmyq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nvqkft9", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "outrage"}
]
```
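The raw response is a JSON array covering a batch of items, of which the coding result above is one. A minimal sketch (Python; not part of the original pipeline, and the mapping from item ids to comments is an assumption since the ids are not shown alongside the comment) of parsing the batch, indexing it by item id, and tallying one dimension:

```python
import json
from collections import Counter

# The raw model output, copied verbatim: a JSON array of per-item codes.
raw = '[ {"id":"rdc_dbvye2t","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_dbw10dz","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_dbvvhl5","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"rdc_gn8wmyq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_nvqkft9","responsibility":"none","reasoning":"none","policy":"none","emotion":"outrage"} ]'

codes = json.loads(raw)

# Index the batch by item id so one comment's codes can be looked up directly.
by_id = {c["id"]: c for c in codes}

# Tally a single coding dimension across the batch.
emotions = Counter(c["emotion"] for c in codes)
```

Note that the batch contains codes for several items, so the dimension values in the table above correspond to just one entry of this array, not to an aggregate over all five.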