Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"...basically just hypercharged autocomplete. There is no intelligence going on there. It's just data sorting." Something I've been wondering since this AI media boom, and maybe someone in here can shed some light: Could it be that intelligence, and even consciousness, is just a hypercharged autocomplete, just data sorting, to such elevated orders of magnitude that we cannot understand its individual micro-operations? Hence, we apply the umbrella term "consciousness" to the overall constant macro-operations we perceive, which we experience as something vague and omnipresent, as an atmosphere or lens through which we watch (or create) "reality". And it's basically omnipresent and vague because we are "it" (identity, creativity, feelings, thoughts, consciousness...) and we can never see ourselves from outside (this sentence is even a paradox by itself). That could produce in us a magical or transcendental perception of this "consciousness" we are/have, since "oh my god, look what we are capable of doing and nothing else seems to be able to", thinking of it as a grandiose thing that "oh, machines cannot do, since they are just sorting data and autocompleting based on the data pool they have been provided in the first place", and just because in their case we can more or less understand the micro-operations (although this is going to change), since we designed them. And isn't that exactly what we do throughout our lives? We absorb data (in quantities, varieties and through paths we don't fully understand), process it and create models of the world (through mechanisms we don't fully understand) and act/decide based on the context (which we can only perceive when it's explicit and conscious)?
Like "oh no, language-sorting AI does not perceive itself, it does not have an identity, sense of self or emotions of its own; when it says it has, it is just emulating what a human would say in that situation, copying what it has seen based on probabilities, not a true feeling". Okay, but what are feelings or the inner thought world? Aren't they just (and I'm not trying to diminish it) emulations of things learned from the past (combining the genetic/hormonal/neurotransmitter biochemistry of life and the transfer of cultures), whose experience we then conceptualize as a "feeling" or a "thought", but not actually real? "Oh, but I feel it for real." Okay, well, that's what "it" said (pun intended). What's the difference between data sorting and consciousness, just orders of magnitude? Do you need to be the conscious being feeling it to actually be able to spot it as consciousness? For us it is easy to empathize and assume that other human beings have consciousness; we are the same species, so we say "if we are the same thing and I know I have consciousness, it's probable they also have theirs". But if there were other beings with maybe (proto)consciousness, would we dismiss that phenomenon because we are looking at it from outside and are not able to fully comprehend the similarities with us, since "this grandiose and mystical experience I'm having cannot emerge from such simple and defined ingredients"? Could language, and semantics in itself, be the basis of consciousness, and a language-sorting algorithm a protoconsciousness? I'm genuinely curious to know what you all think about this, because I feel like I'm missing something when everyone states so clearly that language AIs and human consciousness are so incredibly far apart.
youtube AI Moral Status 2023-08-21T13:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxLnoDimYMAK5uV-F14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwvcjqFZzXU7eN8kPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzJK8CiqMXd7hvSGd14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyGYq5vWp-FKN1XHUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyxyIydsOCMaF-kN9x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzabovNAp_eAJbn16Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwzjASk2d-4_KS2D1N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwK2v4wfEs2SulY5PV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweQO_grY3jBO4f5bZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwzIV0OBEF9IlHVLiJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
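A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before the per-comment codings are stored. The required keys come from the records shown here; the function name `parse_codings` is hypothetical, and no fixed codebook of allowed values is assumed.

```python
import json

# Keys every coding record must carry, per the sample response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# A short excerpt standing in for a raw LLM response (same shape as above).
raw = '''[
  {"id": "ytc_UgxLnoDimYMAK5uV-F14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_broken_record"}
]'''

def parse_codings(text):
    """Parse a raw coding response, keeping only well-formed records."""
    records = json.loads(text)          # raises ValueError on invalid JSON
    return [r for r in records
            if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

codings = parse_codings(raw)
print(len(codings))          # the malformed second record is dropped
print(codings[0]["emotion"])
```

Dropping (rather than repairing) records with missing keys keeps the downstream table honest: a dimension that the model never emitted stays absent instead of silently defaulting to "unclear".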