Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a very, very strong disagreement with many of the opinions stated here. The idea that an A.I. superintelligence will emerge from a procedural, binary programming language invented by humans doesn't seem very likely to me. Whatever we have between the ears—call it intelligence, spirit, or something else—what makes the human brain so special won’t be replicated overnight by a simulation of intelligence. What’s coming is a very fast, very dumb robot. But a truly superintelligent, self-aware being that could threaten us? No. The questions need to go deeper, and so does our reasoning. Imagine a chess program today. It can easily beat the world champion. It’s super fast, calculates far more positions, and wins effortlessly. But here’s the thing: the program doesn’t know what chess is. It just calculates. The same goes for A.I. You ask a question, and it responds based on what it has read—but it isn’t fully aware or even conscious of what it's talking about. And the jump from that to actual understanding is a very, very large one. Let alone being self-aware, having emotions, or conquering the world—that's a whole lot of baloney, if you ask me. But hey, what do I know? I’ve just been studying computer science for the past 35 years, working as a solid sysadmin on both Linux and Windows servers, and programming full-stack in C++ and JavaScript. So, sure—let’s all keep believing the hype about the A.I. bubble.
youtube AI Governance 2025-06-18T09:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy2t1zlwYFSBB2b9Z14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzMv64fIVTZiAuIBk94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxf5u_ml5JvunlT4cd4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgziNjcT1llR579gfFx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx_uVeNCvx-DLei7UR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzyD18h5WmljePugUV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwj__cXM-zgXa167gV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzXGdLO0YApbcb8zEp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy0Q_Qh5sv7DxJo_bp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyDpfT0VdsT4i2s5NF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
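The raw response is a JSON array of coding objects, one per comment, keyed by the comment `id`. A minimal sketch of turning such a response into a per-comment lookup table, so a coded comment's dimensions can be retrieved by id (the function and variable names here are illustrative assumptions, not part of the original pipeline, and the sample uses two objects from the array above):

```python
import json

# Sample of the model's raw output: a JSON array of coding objects.
raw_response = '''[
  {"id":"ytc_Ugy2t1zlwYFSBB2b9Z14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzMv64fIVTZiAuIBk94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]'''

def parse_codings(raw: str) -> dict:
    """Parse a raw JSON coding array into a dict keyed by comment id,
    with the remaining fields (responsibility, reasoning, policy, emotion)
    as each entry's value."""
    codings = json.loads(raw)
    return {c["id"]: {k: v for k, v in c.items() if k != "id"} for c in codings}

table = parse_codings(raw_response)
print(table["ytc_UgzMv64fIVTZiAuIBk94AaABAg"]["emotion"])  # approval
```

In a real pipeline the parse step would likely also validate each field against the allowed category set (e.g. `emotion` in {outrage, approval, fear, indifference, resignation}), since LLM output can drift from the requested schema.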