Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Nah this ai art argument is bs. If the ai can model your art style I could see a…" (ytc_UgxdzNdud…)
- "Zuckerberg, Altman, Amodei, Karp, Brin, page all leading AI. Hmm something in …" (ytc_UgzJ2MBTl…)
- "Funny how everything comes back full circle. Something is going to happen and pe…" (ytc_UgyxMVpJ6…)
- "no one is saying it's the same. im saying it's convenient , which it is, less ti…" (ytr_Ugw3Fkl6Q…)
- "Satoshi is probably an AI. Bitcoin is its play to keep us from turning off the …" (ytc_UgwUfBNgL…)
- "So many people in my programming class use Chat GPT for their weekly written pro…" (ytc_UgwSyoC1I…)
- "If humans won't change their behavior on this planet, it's no surprise, if AI wi…" (ytc_UgxyxkMFu…)
- "So I'm feeling ai take over will happen soon in consideration that we are really…" (ytc_UgxvvPWh2…)
Comment
So good, thanks for creating this. It made me change some of my views or at least doubt my correctness, I was telling everyone who asks that hallucinations are similar to random shapes that emerge in the darkness when we close our eyes or random dreams and if our brains couldn't solve this after all this evolution then LLMs won't either. Hinton here reframed the process from ghosts in the network to a result of imperfect memory. I'm honestly not sure now which approach I'm leaning more towards.
On the other hand, I've been explaining why an LLM could write an abstract math paper correctly but tell you to walk to the carwash to wash your car, because it's superhuman at improvisation but doesn't really do any actual reasoning (think: what's the difference in thinking process between an amazing poet vs an innovative thinker?). I think I'll keep my opinion on this. Chain of thought feels like a hacky attempt to employ the LLMs superior improvisation to pretend reasoning, surely we can do better. Also the subjective experience Hinton described is imo simply a result of it being superhuman in a few specific narrow fields (language/words/images), and that superiority fools us into believing it exhibits subjective contextual experience beyond the data sequences it generates but it's an illusion otherwise it would be fully consistent at it and not break at random.
But as Chuck said, all of these perceived limitations need a "yet" at the end.
youtube | AI Moral Status | 2026-03-02T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
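The dimensions in the table above come from a fixed coding scheme. As a minimal sketch, assuming the label sets are exactly the values visible in this export (the real codebook may define more categories), a coded record could be modeled and validated like this:

```python
from dataclasses import dataclass

# Label vocabularies inferred from the values observed in this export;
# these are assumptions, not the published codebook.
RESPONSIBILITY = {"none", "developer", "ai_itself"}
REASONING = {"unclear", "consequentialist", "deontological", "mixed"}
POLICY = {"none", "regulate", "ban"}
EMOTION = {"approval", "fear", "outrage", "indifference", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Reject any label outside the observed vocabulary so bad
        # LLM output fails loudly instead of polluting the dataset.
        assert self.responsibility in RESPONSIBILITY
        assert self.reasoning in REASONING
        assert self.policy in POLICY
        assert self.emotion in EMOTION
```

Validating at construction time means a malformed model response (say, a misspelled emotion label) raises immediately rather than surfacing later in analysis.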
Raw LLM Response
```json
[
  {"id":"ytc_Ugzvhs84hc_y6xWJfhN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGUdPGxmxduEzxRb14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQhYMB7NU0MrYcRJN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyX-CwQRkiLLiY5UMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxQ_0-FJGbduoCu10N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6DvSQUqfMa0OHzlV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyi9IzYehAz8EJDi0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxwQPz03NWDOaIcvYZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkzMNz84LgfGWdfk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVAqzXzRP9nZuDDrt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
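The raw response is a JSON array of one record per comment, which supports the "look up by comment ID" view above. A minimal sketch of parsing such a response and indexing it by ID (the fence-stripping step is an assumption to tolerate models that wrap their output in a markdown code block; the two-record string is an abridged stand-in for a full response):

```python
import json

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM coding response and index its records by comment ID."""
    text = response_text.strip()
    # Some models wrap JSON in a markdown fence; strip it if present.
    if text.startswith("```"):
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    records = json.loads(text)
    return {rec["id"]: rec for rec in records}

raw_response = '''[
  {"id":"ytc_Ugzvhs84hc_y6xWJfhN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGUdPGxmxduEzxRb14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

codes = index_by_id(raw_response)
print(codes["ytc_Ugzvhs84hc_y6xWJfhN4AaABAg"]["emotion"])  # prints "approval"
```

Indexing by ID makes each coded record retrievable in constant time, which is all the "look up by comment ID" feature needs.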