Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So good, thanks for creating this. It made me change some of my views or at least doubt my correctness, I was telling everyone who asks that hallucinations are similar to random shapes that emerge in the darkness when we close our eyes or random dreams and if our brains couldn't solve this after all this evolution then LLMs won't either. Hinton here reframed the process from ghosts in the network to a result of imperfect memory. I'm honestly not sure now which approach I'm leaning more towards. On the other hand, I've been explaining why an LLM could write an abstract math paper correctly but tell you to walk to the carwash to wash your car, because it's superhuman at improvisation but doesn't really do any actual reasoning (think: what's the difference in thinking process between an amazing poet vs an innovative thinker?). I think I'll keep my opinion on this. Chain of thought feels like a hacky attempt to employ the LLMs superior improvisation to pretend reasoning, surely we can do better. Also the subjective experience Hinton described is imo simply a result of it being superhuman in a few specific narrow fields (language/words/images), and that superiority fools us into believing it exhibits subjective contextual experience beyond the data sequences it generates but it's an illusion otherwise it would be fully consistent at it and not break at random. But as Chuck said, all of these perceived limitations need a "yet" at the end.
youtube AI Moral Status 2026-03-02T03:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzvhs84hc_y6xWJfhN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyGUdPGxmxduEzxRb14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyQhYMB7NU0MrYcRJN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyX-CwQRkiLLiY5UMB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxQ_0-FJGbduoCu10N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy6DvSQUqfMa0OHzlV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyi9IzYehAz8EJDi0N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxwQPz03NWDOaIcvYZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzkzMNz84LgfGWdfk94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVAqzXzRP9nZuDDrt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
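The coding result shown above is just the record from this JSON array whose `id` matches the comment. A minimal sketch of that lookup in Python, assuming the response parses as a JSON array of objects with the field names shown (the `raw` string here is abbreviated to two of the records for brevity):

```python
import json

# Abbreviated copy of the raw LLM response above (two records only).
raw = """[
  {"id": "ytc_Ugzvhs84hc_y6xWJfhN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyGUdPGxmxduEzxRb14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Parse the batch response and index the records by comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Pull out the coding for the comment displayed on this page.
coding = by_id["ytc_Ugzvhs84hc_y6xWJfhN4AaABAg"]
print(coding["emotion"])  # → approval
print(coding["reasoning"])  # → unclear
```

This matches the Coding Result table above: the dimensions displayed there are simply the fields of the matching record.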