Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Khan helped me finish math in school when their YouTube channel was new. Glad to… (ytc_UgyBYJsYs…)
- Well. I don’t mind watching YouTube with ads. What if AI can’t differentiate its… (ytc_UgwcaiVlr…)
- It's happening folks. This is the start of AI being used to frame people of crim… (ytc_UgyiNvC7Q…)
- This foolishness building AI platforms that are dangerous enough to threaten hum… (ytc_UgwsPveh8…)
- To me AI is boring because I already did so much stuff that I have no more ideas… (ytc_UgwmVJtde…)
- The news is presenting AI in the most non-controversial way possible. Corey Good… (ytc_UgxweasUz…)
- Am i the only one with doesn't hate ai to much even though im a artist? -… (ytc_UgxECl-tN…)
- Considering that human drivers VASTLY outnumber self-driving vehicles, your “com… (ytr_UgxbJMkdG…)
Comment
The problem with this channel is that it tries to pick up interesting and hard questions, but totally fails to make good points about them. Some things are not in focus enough, or at all, in the video.

1. We don't understand why life ever came into existence, and from an evolutionary point of view, pain is indeed for the sake of preserving life. But why is it good for a living being to live? It is much easier for it not to live, and evolution does not explain the purpose of the struggles of living things. From this aspect, life is pointless, and so the struggle to preserve it is pointless too.

2. We can't program a machine to "feel pain" in the way you think. Feeling pain is, scientifically, a neural response to any detected effect that endangers a living thing's life. So programming a robot to feel pain basically means programming it to fight against any effect that endangers its existence. The next step is to make it conscious, so that it can be aware of the things that endanger its existence; if it still has to fight against these effects, then pain is actually the perception, by the robot's consciousness, that another part of its programming orders it to fight against the effect.

3. Programming real emotions has never been done in history, and frankly nobody has any idea what it really means to do it. It's easy to program simulated emotions, but the machine doesn't have to "feel" anything to do that. If a robot can detect when something harms its hardware and has a program part that orders it to "scream" when that happens, well, that's not what we do when we get hurt. Or maybe it is, but in that case we have no real emotions either, so we can just kill anyone we like, because it "doesn't actually hurt them". So this means that emotions are abstractions which don't exist at all, but seriously, who would ever admit that "I actually don't feel anything"?

We all know that we do, and we are clueless about what emotions, or consciousness, actually are.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugiebb8m3_QKtHgCoAEC","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghMInwGG2smj3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UggQQ-HppeJVLHgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgjeSVyD9oCIeXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj50a9w3EHZ7ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ughcmty2iMMsFHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UggkPXUXjIZmTHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugi0w_Bes2bxCHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugjb8u5FsyTpDngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugj_FnOm6ZQGQXgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
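The raw response above is a JSON array in which each record carries a comment ID plus one value per coding dimension. As a minimal sketch of how such a batch might be indexed for the per-comment lookup this page offers (the function name and the two sample records are illustrative, not the actual pipeline code):

```python
import json

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM batch response and key each record by its comment ID.

    Hypothetical helper: assumes the response is a JSON array of objects
    that each contain an "id" field, as in the raw response shown above.
    """
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Two records copied from the raw response above, for illustration.
raw = '''[
  {"id":"ytc_Ugiebb8m3_QKtHgCoAEC","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugi0w_Bes2bxCHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

codes = index_codes(raw)
print(codes["ytc_Ugi0w_Bes2bxCHgCoAEC"]["emotion"])  # → mixed
```

Indexing by ID is what lets the coding-result table for a single comment (like the one shown above) be pulled out of a batch response in constant time.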