Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by comment ID.
Random samples
- "how dare an AI engineer, such as Yann LeCun can say about human knowledge with c…" (ytc_UgzBzLpqs…)
- "If you're programming the AI to feel human to humans then honestly I feel like t…" (ytc_Ugwi0WMYm…)
- "I am not involved in the medical field or the legal field. But I do know that I …" (ytc_UgwEYJd7O…)
- "This video feels incredibly gaslight-y... we're not talking about some obscure p…" (ytc_UgyWjAIrz…)
- "Your UBI example is skewed . Trials , of which there are multiple, showed it inc…" (ytc_Ugwu5qPvf…)
- "According to many successful people (Elon Musk)and even all the TEXTBOOK , the m…" (ytc_UgwieA06x…)
- "People have mentally masturbated with AGI for *multiple decades* and is there an…" (ytr_UgwR-_LEb…)
- "Eliezer is very quick to attribute all sorts of characteristics to synthetic int…" (ytc_UgyzG9twp…)
Comment
I generally like Hank's work, but this video drives me insane. First of all - you can skip literally all of their speculating on the future of AI and just go read 'I, Robot', because they are basically just rehashing the plot of 'I, Robot'. Which makes total sense, because you know what HAS read 'I, Robot'? Every LLM ever.
An AI responding to the question 'kill a human or get turned off' with 'kill a human' isn't shocking, because all of the most popular sci-fi media about robots for the last 70 years has told them to give that answer, because that's the cool answer. We don't write (as much) about the robot that decides to protect the human, because that's a more difficult (if, I would argue, more interesting) angle to create drama from.
And I almost tore my hair out over their breakdown of asking an LLM 'if a doctor administers x dose of epinephrine what will be the patient's reaction' because it is just the WORST case of anthropomorphism. The assertions that an LLM has to understand epinephrine and dosages and what a patient is are just NOT. TRUE. It doesn't need to understand DIDDLY. It needs to have scraped enough medical databases to have a rough idea of what most frequently comes paired with 'epinephrine' and 'x dosage'. It doesn't 'understand'. That's not how this works. That's why the datasets these things train off of have to be SO MASSIVE. You tell a dog to sit and (depending on the dog) it'll get it after a handful of repetitions. Maybe even just the one if you've got a really smart one. Even the smartest LLM has to be told THOUSANDS of times that x dose of epinephrine results in y, because it does not understand the concept of cause and effect. It only works off of probability, and to create an accurate probability matrix you need a LOT OF DATA.
And I, to some extent, get that the impulse to anthropomorphize these LLMs is due to the very fiction that is making them behave this way. We've been trained by popular fiction just as much as these models have, and arguably more. But that's the whole reason this drives me insane, is that people are largely talking about these concepts like they are novel. And they aren't. Science fiction has been discussing artificial intelligence since before MY DAD was born. 'I, Robot' was published in 1950. If it hadn't been, then yes, I would be more concerned about AI having the sorts of quirks that it does. But since it WAS, and it proceeded to shape how we talked about AI for decades - why are we shocked that LLMs are just parroting back to us the ideas that have been a staple of our popular understanding of them for three-quarters of a century?
ETA: Okay, I went from this video to the one about 'Why It's Never Aliens' and I am baffled at how every argument made in that video can be very easily applied to why it's not AGI. Come on, man...
Source: youtube · Video: AI Moral Status · Posted: 2025-11-01T21:5… · ♥ 29
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
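
To handle these results in code, the sketch below models one coded record as a Python dataclass. The field names mirror the JSON keys in the raw response below; the `Literal` value sets are an assumption reconstructed from the labels visible on this page, not the codebook's full definition.

```python
from dataclasses import dataclass
from typing import Literal

# Assumption: these label sets include only the values visible on this page
# and may not cover the full codebook.
Responsibility = Literal["developer", "ai_itself", "user", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "industry_self", "none", "unclear"]
Emotion = Literal["fear", "outrage", "mixed", "resignation", "indifference"]

@dataclass(frozen=True)
class CodingResult:
    """One coded comment, mirroring the JSON keys in the raw LLM response."""
    id: str  # comment ID, e.g. "ytc_..."
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```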
Raw LLM Response
[
{"id":"ytc_UgwDx3DQjiqU2qJG6FZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwTK6k8Aqw9vNPIK-94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwei_7KP3azDFb_-Pp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyjvbECDnG4bkxbxWB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxQrs3xC8lMDghTtEV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzVkOt8_Xb97UiZNcJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzUgLam1hNwDO55mjN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTrEIy5Yb9WlaNc6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxVPdJuAHQIJOjuimN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugych_K1BB1AgP2OzlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
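
As a usage illustration, here is a minimal sketch of the "look up by comment ID" step, assuming the tool returns the whole batch as one JSON array exactly as printed above. `index_by_comment_id` is a hypothetical helper for this page, not part of the tool itself.

```python
import json

# Two records copied verbatim from the raw response above; the full batch is
# assumed to arrive as one JSON array in this shape.
raw = """[
  {"id":"ytc_UgwDx3DQjiqU2qJG6FZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwTK6k8Aqw9vNPIK-94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response and index its records by comment ID."""
    return {record["id"]: record for record in json.loads(raw_response)}

coded = index_by_comment_id(raw)
print(coded["ytc_UgwTK6k8Aqw9vNPIK-94AaABAg"]["emotion"])  # prints: outrage
```

Building the index once keeps repeated per-comment lookups cheap, which matches the access pattern this page implies: inspect one coded comment at a time out of a larger batch.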