Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
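The lookup described above can be sketched as a simple index from comment ID to coding record. This is an illustrative assumption about how the tool might work, not its actual implementation; the record format mirrors the Raw LLM Response shown further down, and the IDs here are shortened placeholders.

```python
import json

def build_index(records):
    """Index coded records by comment ID for direct lookup."""
    return {rec["id"]: rec for rec in records}

# A raw LLM response in the same shape as the one displayed on this page
# (placeholder IDs, not real ones).
raw_response = '''[
  {"id": "ytc_example1", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]'''

index = build_index(json.loads(raw_response))
record = index.get("ytc_example2")  # None if the comment was never coded
```

Because `dict.get` returns `None` for unknown IDs, an uncoded comment is distinguishable from a coded one without raising an error.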
Random samples:
- "@SusCalvin I'm not talking about books. Books are a small part of the industry. …" (ytr_UgwSnuTQk…)
- "Machine learning not AI. It will never out do a human being unless a human bein…" (ytc_UgzJlCu7F…)
- "First Law: A robot may not injure a human being, or, through inaction, allow a h…" (ytc_UgyYBFnc4…)
- "Perhaps we are already existing in the consciousness of a 'God-like' AI? If not,…" (ytc_UgwdFR1pf…)
- "Here we go AI over Humans is already starting not long before they have same rig…" (ytc_UgwmrZeAx…)
- "AI have also caused a lot of ADHD ,so health policies need to come into play so …" (ytc_UgxTVHmqo…)
- "Put your thinking cap on, you're allowed to use dumb cruise control but you thin…" (ytr_UgxYVPjp3…)
- "No matter what, there is always someone who wants less and there is always someo…" (ytc_UgxLG0pOr…)
Comment
It's fascinating to watch people try to war game a new consciousness, as if that's not the most human way of perceiving anything remotely "alien". If anything, there's lots of historical evidence for humans being very drawn to the human strategy of "control the other. If we cannot control, destroy."
Say super intelligence does happen (which... I'm *extremely* skeptical). What if they, in their alien way, really value different types of intelligence? What if other types of intelligence can see the humanity within humans? Can our art, our care, our drive, our empathy be ignored by alien intelligence, if it truly is so smart to ration through such a question? What if super intelligence becomes smitten with utilitarian philosophy and decreases suffering for all humans? What if there's a way to coexist with an AI super intelligence? So much of this comes down to a question that humans have a proven discomfort with-- when do we acknowledge that something might be "smart" enough that our drive to own/control/dominate it becomes untenable? In the completely hypothetical fiction land of "super intelligence does happen," I would actually stake my claim on the side that says "that would be fine," just because I define intelligence as more than a rational weighing of the pros and cons, and I cannot imagine an intelligence without the ethical, empathic side of intelligence. But we're debating a fiction here. I think it's important to be very clear about the shortcomings of debating fiction. And to be clear about the pros of debating fiction (good for capturing attention and drawing dorks like me to the comment section). And to be clear about what we actually can do (understand the consequences of AI on present tense humans and solve those problems as they arise.) AI millennialism is still simply millennialism.
We will not know anything until it happens. Until then, I hope you have a nice rest of your day!
Source: youtube · Video: AI Moral Status · Posted: 2025-10-30T20:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz2hE4E9CpReAma_314AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVhIdzqGhq2H8bhZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0JaoExU09PGg4pix4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgylAN63kd9MWjd0ItB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgySFs0PK_gxMIVFjUt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxji0AkAMbhhb3hnvB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmXX5ZRECLrKUcnkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz3BKRuZPR0QtUOShF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwD_h3DASRiroe1Ylp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx0mznNrHBTky3gjYh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
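A response like the one above can be checked against the coding scheme before being stored. The allowed value sets below are inferred only from the codes visible on this page; the real codebook likely defines more categories, so treat them as an assumption for illustration.

```python
import json

# Allowed values per dimension, inferred from the codes visible on this
# page (the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "resignation", "outrage", "fear", "mixed"},
}

def validate(raw: str):
    """Parse a raw LLM response; return (valid_records, errors).

    Each error is (comment_id, [dimensions with out-of-scheme values]).
    """
    records, errors = [], []
    for rec in json.loads(raw):
        bad = [dim for dim, allowed in ALLOWED.items()
               if rec.get(dim) not in allowed]
        if bad:
            errors.append((rec.get("id"), bad))
        else:
            records.append(rec)
    return records, errors

sample = ('[{"id": "ytc_x", "responsibility": "none", "reasoning": "mixed",'
          ' "policy": "none", "emotion": "fear"}]')
valid, errs = validate(sample)
```

Validating at ingest time keeps malformed or off-scheme model output from silently entering the coded dataset.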