Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "New to AI and chasing 'free' solutions? Before trying Qwen for image or video ge…" (`ytc_UgwmQMx2Y…`)
- "My Ai art always has a soul, because everytime i make i sacrifice a soul of a lo…" (`ytc_UgxD8O7Nb…`)
- "How do you make sure children don't break the AI tutor or accidentally make the …" (`ytc_UgyuysfBh…`)
- "What most people miss when they criticize AI like this is: many of these AI mode…" (`ytc_UgzTNB8cW…`)
- "Same. After like 10 prompts it exploded my usage. As much as I prefer it to Chat…" (`rdc_o81w3ft`)
- "What would be the point- the incentive- for AI to expand voluntarily? They don't…" (`ytc_UgyswolHT…`)
- "This is so scary. I have never been worried about A.I that much but couldn't ign…" (`ytc_Ugxgi1Cik…`)
- "... Yeah, I'm sorry but if your job can literally be done by a large language mo…" (`ytc_UgxySjVvx…`)
Comment
> SO much False Equivalency! I've loved every Kurzgesagt video, except this one...
>
> There could be an artificial intelligence right now, but hiding somewhere. If you were to 'turn it off' without it's knowledge are you guilty of manslaughter, or genocide? No.
>
> If an artificial intelligence were to kill a human, would it be culpable of murder? What punishment would you give it?
>
> If an artificial intelligence were to kill my dog, what would be an appropriate compensation?
>
> This entire video is coming from the perspective that artificial intelligence HAS a right to 'rights', without even asking, "If an artificial intelligence has 'rights', what obligations does it have if it deprives the rights of other 'beings'?"
>
> I mean; we can't even decide on those 'rights' & apply them appropriately to our own societies. Are Drone attacks on foreign soil acceptable, even if they kill innocent bystanders? Now there's some REAL questions Kurzgesagt should be asking!
youtube · AI Moral Status · 2017-02-23T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgiyjzCTc8g_oXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgiKV5roAM8drngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghulkD-qy2L3HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Uggnize15yoAyHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgjOlPQd5Ca5sHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UggcGK52nAlrHngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjJbiJBPUbWdXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi26oYgcaYTAHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiPFrZsBn3iMXgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg9RewNiCIchXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
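The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch could be parsed into a lookup table keyed by comment ID; the set of allowed category values is inferred from the examples shown here, not from a published schema, so the real coding scheme may include more values:

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the actual schema may define additional categories).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Rows with a missing ID or an out-of-vocabulary value are skipped,
    since model output is not guaranteed to follow the schema.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Example: one row from the response above.
raw = ('[{"id":"ytc_UgjOlPQd5Ca5sHgCoAEC","responsibility":"distributed",'
       '"reasoning":"deontological","policy":"liability","emotion":"mixed"}]')
codes = parse_codes(raw)
```

Validating against a fixed vocabulary before indexing is the design choice here: it surfaces malformed model output at parse time instead of letting mislabeled rows flow into downstream tallies.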