Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Apparently I’m the only one who’s seen the Terminator - we all know how this is …
ytc_UgwfvrDzX…
For me it comes down to “Did a person make this” did someone or a group of peopl…
ytc_UgzLdqKOu…
This is fear mongering. 🙄 My partner is an expert in AI, and there’s nothing to …
ytc_UgzHzF3j0…
I'm a commercial fisherman, and I fail to see how AI could do my job. There are …
ytc_UgxHJHWSL…
If YouTube was there thousands of years ago, a similar video would have been mad…
ytc_UgzjQP_1z…
If it has onboard AI with the ability to learn you could just TEACH it to do wha…
ytc_UgzVRaVoi…
Oh. I'm not afraid of AIs becoming conscious. I'm afraid of the humans fucking u…
ytc_Ugz22MCYC…
Several issues with your arguments. Im not rage baiting, but pointing this out f…
ytc_UgzV5LCTt…
Comment
If we base rights and their possession on the grounds of conciousness than I'd argue that any being capable of arguing for the possession of said rights (no matter how well or poorly) should be granted them. After all conciousness in a concept we're all fairly familiar with, but it is seemingly impossible to "prove" that other creatures are concious or not. Because it is an enigma that can be both a yes and no answer depending on how you frame it.
I mean, let us imagine that we live in a world that is populated in half by people like you, concious, self aware and present. And in half by "people" we'll call zombies. The zombies are not traditional ones, they look, and feel, and smell just like normal people do to you. They tell jokes, they laugh, they socialize, they work, they behave exactly like "normal" people do day by day, until they die in the fashion of totally normal people. The only difference being they are not "concious", they are complex biological automatons but they lack the "spark".
How do you tell the difference?
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2017-02-23T16:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgilOhj8Fyf4LngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgjGoCVBbf8TEHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgjduTmNnjJNcngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgjOhjm5ki6gUHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugirk6qu7UUC73gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiQj8BQapeehHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgjPcgIY5kWN7HgCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Uggvs1Zv8dW-c3gCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UginGec1GLHkgngCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugg8vx-hPb5gE3gCoAEC","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
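A batch response in this shape can be parsed and sanity-checked before it is stored. The sketch below is a minimal, hypothetical validator: the per-dimension vocabularies are inferred only from the values visible in this sample, so the real codebook may define additional labels, and the function name `parse_batch` is an illustration rather than part of the pipeline.

```python
import json

# Allowed values per dimension, inferred from this sample batch only;
# the actual codebook may include labels not seen here.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "government", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID,
    rejecting any record with a missing or out-of-vocabulary value."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded
```

Indexing by ID supports the "look up by comment ID" view above, e.g. `parse_batch(raw)["ytc_Ugg8vx-hPb5gE3gCoAEC"]["policy"]` would return `"regulate"` for the sample response shown.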