Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It sounds like he believes that consciousness is a computational process and that so-called AI in computer systems operates on the same principles. This is what I refer to as being out of touch with scientific reality. He doesn’t understand how the human brain works; he doesn’t understand consciousness, and as a result, he has a global, overly simplistic definition of intelligence in general. The man believes in the possibility of creating a copy of human consciousness within a computer system. But the human brain isn’t a computational process. Neurons communicate through biochemical signals encoded by neurotransmitters—chemical molecules—within systems shaped by evolution and selection. As a result, we have an emergent property that we call consciousness. Real, genuine intelligence is directly tied to consciousness. You can’t have consciousness within a binary computer system. This is why I call him a philosopher. Like any philosopher, he’s detached from material reality and tries to overcome problems through verbiage alone. Of course, people who have no understanding of the brain, consciousness, intelligence, or even so-called AI may be impressed by his verbiage, but to me, it’s ridiculous. He’s a clown. Wolfram, as I’ve said, understands computer science, but in terms of understanding the brain and consciousness, they are both clueless.
Source: youtube · AI Governance · 2024-11-28T07:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugw-qIiIwV-YymSHgvd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgypQWCF9VagQJtuPv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgypPmjfzq25ijOSz0F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugw870D6MmUSUZIxAxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy-as17KTJwqqtzbm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzsZzaiyXkzOY2521F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyNsqqRfuqgl2VxHxx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyeG4LdxoQ9X8Zc8NF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugzv4s8QRbEx2s1BexJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwQGXjt0iKYAmC7jHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]