Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Excellent discussion, and thank you for having a smart talk about this subject. A few observations:

- Your guest was asked whether AI will be able to think, and responds "they are already thinking." But when he is asked what thinking is, he struggles and fumbles to explain it. That's illustrative. It feels like every person knee-deep in AI is prejudiced toward having a reverence for this technology. It still seems like the tech is predicting and performing tasks. When this guy, who claims to have been thinking about AI since he was 10 (eye roll), can't explain "thinking" without tossing out tropes, it raises my BS detector. I don't think this guest is a fraud like the AI tech company CEOs, but I think he's trying to express his life's work as being far more advanced than it is.
- Why do we keep trying to invent something that mimics the way the human brain works, when we have billions of human brains on this planet? The brain has evolved over millions of years. Yet AI researchers and tech billionaires are trying to make a computer program that does what an 8-year-old does intuitively. The guest here speculates whether AI can learn language, which a 2- or 3-year-old can do rather easily. As far as intelligence goes, my money is still on humans.
- Whenever scientists start talking about how computers defeat humans in chess and other games, that simply tells me they have ZERO new evidence that computers are "intelligent." Games have defined rules. Of course AI can play games well: it's similar to being able to multiply 40-digit numbers. It's a complex calculator problem. That's not intelligence; that's just a cool app I can run on a calculator or my smartphone.
- All one needs to do is use the available AI systems to see how limited they are.
- This man seems brilliant, but he still didn't answer the question: what is intelligence?
youtube AI Moral Status 2026-03-01T14:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwyYgX19foilwkpV054AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxUT_RLI0-hC2WZjxB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw9Gjv-Y212cPESNxd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2I7ravYCVRZv2FW54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxvT1TB5DMNoAKQcMx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzs3Pdv8xXqSSgex_x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgygsbETto-iBU5SU6x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwYwDSp-ckbj0L5shd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyE-OnmHoEbM-YQT4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwAEBd7tyIDAqjiyGN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
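A raw response like the one above can be sanity-checked by parsing the JSON and validating each record against the coding scheme. This is a minimal sketch; the allowed value sets below are inferred from the labels visible in this dump, not an authoritative codebook, and `validate_codings` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the labels observed in this
# dump (assumed, not an exhaustive codebook).
ALLOWED = {
    "responsibility": {"developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
}

def validate_codings(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors."""
    errors = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"record {i}: {dim}={value!r} not in allowed set")
    return errors

# Example with a single well-formed record (hypothetical id).
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"mixed"}]')
print(validate_codings(raw))  # → []
```

An empty error list means every record parsed and every dimension carried a recognized label; anything else pinpoints which record and dimension drifted from the scheme.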