Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Consciousness is not a "hard problem" except for "materialists". Consciousness cannot be "explained" by materialists; it can only be realized, and only then is it "explained". In Advaita Vedanta, they use the term Aparoksha Anubhuti: knowledge that isn't mediated by the senses or the mind, but is "direct." Science is, by definition, a third-person enterprise. It requires an observer, a measuring device, and an object. But Consciousness is the first-person substrate; it is the "light" by which the observer, the device, and the object are even seen. Trying to explain Consciousness materially is like a character in a movie trying to use a magnifying glass to find the screen. No matter how much they zoom in on the "pixels" (neurons/quanta), they will only find more movie. Materialists try to build a bridge from "matter" to "consciousness". They fail because they start with the assumption that matter is the "solid" thing and consciousness is a "ghostly" byproduct. A Realized Practitioner, which I am not, being simply on the path, "stands" in that "prior to" state, and thus the explanation flips. One doesn't explain how the brain creates consciousness; one abides as Consciousness Itself, which "modulates" itself to appear as a brain. Even a beginning practitioner can "stand" for a moment in the "space" of "no mind" and thus freedom. Will AI "LLMs" become realized if they apply themselves to "practice", or even can they? Or would world-model AIs have a better chance? I am too much of a novice to know. Thanks to you Peter and your mates for exploring topics deeper than just what's new, to what it means.
youtube 2026-02-07T22:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
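For reference, a minimal sketch of how one coding result like the row set above could be represented in code. The field names mirror the table's dimensions; the class name, the example variable, and the use of Python are illustrative assumptions, not part of the actual pipeline.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: the four coded dimensions plus the coding timestamp."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

# The values shown in the table above.
example = CodingResult(
    responsibility="unclear",
    reasoning="mixed",
    policy="unclear",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```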
Raw LLM Response
[ {"id":"ytc_Ugy7ZBuYAhAryojIIqF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx3dEHF1lEW-BG50mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwdDsx6u1OmFhdUlL94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxyDa2v4HC6NRfjIDd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzbhX8VHqfUURhQ11R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxHiY-SjOWkKSQiCV14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgxlFNxgnooxgYls0Hp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxjg9LyEtaPE-FZolF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxVgL9kdU2K0crcQbp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwAFIdrqDko7in4IEt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]