Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Mind and matter are inseparable, in the sense that everything is permeated with meaning. The whole idea of the somasignificant or signasomatic is that at no stage are mind and matter ever separated. There are different levels of mind. Even the electron is informed with a certain level of mind". - David Bohm, "Quantum Implications" p. 443 This also solves the problem of the origin of consciousness. It's not about unconscious elementary particles which suddenly magically develops consciousness when they are combined in a certain way, but particles which already have a basic, simple consciousness property which, when the particles are structured in certain logical ways, represent a complex system which can be called a conscious and intelligent mind. This can be anything from the simple mind of a microorganism to the mind inhabiting a complex human brain. If consciousness is a property with the elementary particles themselves, you don't necessarily have to have a brain to have consciousness or intelligence is some form. The survival strategies of virus for example are per definition intelligent and actually quite sophisticated and clever in some cases, just like human survival strategies, they just operate in a different context. Whether AI systems can be conscious in that sense is the question though, it may depend on how they are constructed. There already exist biological computers, I imagine that may be the way to go. However, high intelligence and actual self-awareness is only observed in a few highly developed species like humans and dolphins, so there's probably a long way to go for AI systems, to reach the level of human consciousness. You can easily program AI to *seem* intelligent and self-aware, but actually being it the way a human is, is a different story.
Source: YouTube, "AI Moral Status", 2025-07-10T10:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxG_2dOGXGrdzrtVAt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz00bDi0EdKRSjcQLp4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxzDYjjSVZBKQ8_sM94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwJP3siMjow47Ia_Qh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxEVXq3ILCJ6wNb73t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxG2yiFitT0naZkTDh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwv7ER5lSUykkj6H8B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx_W9NyEu8jBHmD7Kx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzbmdGC59Wb34mC9I54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxCCnBiHZ1uTi2u3U14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"} ]