Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Humanity has yet to create anything with life and consciousness. We can manipulate biology, sure, but to create a living, feeling, conscious being from scratch is a completely different thing.
Let's say humans succeeds in creating AI. Now: AI, in that sense, belongs to us. It has no natural freedom to begin with, heck it probably doesn't know what freedom is and has no need for it. Rights are for preserving freedom, so if something didn't need freedom, it wouldn't need rights either. Maybe it doesn't even have consciousness in the way we experience it, because it lacks senses we have, or because it only has an approximation of what consciousness is and tries to imitate that.
And now for the obvious question: If you programmed the AI however you wanted, why would you program it in a way that caused you more problems in the future? It seems counterintuitive to create robots only for the sole purpose to create laws concerning them, give them rights, etc. Fun from a problem-solving standpoint, but extremely tedious.
Platform: youtube · Video: AI Moral Status · Posted: 2017-02-23T16:1… · ♥ 35
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UghaD-5ZxaeiFHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgjxCutHJJTNAHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghrVsZWbl000XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggyWjVGG2TWQHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgjjNbr57AKOtngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgiEIF1_NIDjCngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgicYfYblhiTRngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgjXXsfNK0XwjXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgiiuYeq49lLEXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UggqMCeyDBik1HgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
```
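The batch above can be queried the way the tool's lookup does: parse the JSON array and pull out the four coding dimensions for one comment ID. The sketch below is illustrative, assuming only that the raw response is a JSON array of objects with an `id` key plus the dimension keys shown in the Coding Result table; the function name `lookup_coding` is hypothetical, not part of the tool.

```python
import json

# A one-record stand-in for the raw model output shown above (assumed shape:
# a JSON array of objects, each with "id" plus the four coding dimensions).
raw_response = """
[
  {"id": "ytc_UgjxCutHJJTNAHgCoAEC",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"}
]
"""

# The coding dimensions, as listed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str):
    """Parse a batch response and return the coding for one comment ID."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Keep only the coding dimensions, dropping the ID itself.
            return {dim: record.get(dim) for dim in DIMENSIONS}
    return None  # ID not present in this batch

print(lookup_coding(raw_response, "ytc_UgjxCutHJJTNAHgCoAEC"))
# {'responsibility': 'developer', 'reasoning': 'deontological',
#  'policy': 'none', 'emotion': 'indifference'}
```

Returning `None` for an unknown ID (rather than raising) matches a lookup-by-ID interface where misses are expected and the caller decides how to report them.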