Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The hype that AI can “replace human intelligence” in almost everything is fundamentally flawed, and it collapses under scrutiny the moment you look at what intelligence actually involves. AI is excellent at pattern recognition, generating text, or simulating reasoning—but it lacks understanding, judgment, context, and a moral compass. It doesn’t know anything; it predicts what a plausible response might look like based on data it’s been trained on. Without a human editor, the outputs can be superficial, misleading, or outright wrong. AI can’t discern some kinds of detailing, reliably detect when a source is flawed, or navigate the subtleties of human values, ethics, and culture. For example, it can generate a persuasive argument, but it can’t tell if that argument is true, ethical, or strategically wise in the real world. Saying AI can replace human intelligence in almost everything is like claiming a very fast calculator can replace a mathematician who asks why the calculation matters, chooses the right methods, or interprets the results. The calculator is powerful, but it can’t think critically, weigh consequences, or innovate meaningfully. In short: AI without human oversight isn’t intelligence—it’s a sophisticated mirage of intelligence. It can only augment real human thought, not replace it. Is that going to remain the case? I don't know.
youtube AI Moral Status 2025-08-16T18:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwbn7Tls4SKwsKIV2F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyWwrFOxTcyg72MbyN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxfWvPG-99GFEnOKS94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7onaB50SlPiC-2jN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzmmtgaxN_QSW9oy_V4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwu7s1SpTxrjkN7IpJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5QHhgBYj9KIl7sMx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyp0VC0XfGlmOjKkVZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyAhNw2ipoZl5UwHeV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw7Z91zqJwFx2PP-ZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
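The raw response is a JSON array in which each record carries the four coding dimensions for one comment id. A minimal sketch of how such output could be parsed and a single comment's coding looked up, assuming only the array format shown above (the `lookup_coding` helper is hypothetical, not part of the pipeline):

```python
import json

# Abbreviated to two records from the raw response above, for illustration.
raw_response = '''
[
  {"id": "ytc_UgxfWvPG-99GFEnOKS94AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwbn7Tls4SKwsKIV2F4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
'''

def lookup_coding(raw: str, comment_id: str):
    """Parse the model's JSON array and return the record for one comment id,
    or None if the id is absent (hypothetical helper)."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

# The id below matches the comment coded in the table above.
coding = lookup_coding(raw_response, "ytc_UgxfWvPG-99GFEnOKS94AaABAg")
print(coding["reasoning"])  # deontological
```

In practice a real pipeline would also validate each record against the allowed dimension values before storing it, since raw model output is not guaranteed to be well-formed JSON.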