Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't know Mister Turtle, the consciousness of AI is not only a problem because of the AI, it's also because of our concept of consciousness. To put it simply, we don't know shit about it, like, really not at all. Sure we have countless studies of our own mind and inferences from other living beings, but we don't know anything besides ourselves. Isn't it strange? We look for something that resembles what we think is a conscious being (us) knowing that it won't be, that it can't be like that. Because to think like a human would mean to see like a human, and a pure logical being with no external input besides a stream of characters is far, far different from that. Think of the experiment of Mary's room, you know, the scientist who studies color and has lived all her life in a monochromatic environment. Sure Mary would know what colors are (leaving the problem of qualia aside), but do you think that her dreams would be about color? That her poems would be about traversing the wavelengths of red? For sure I don't; I think that she would have developed a completely different model of the world simply because she sees the world differently. Maybe she understands art as pure language expression because visual input isn't stimulating at all. Maybe she thinks herself a goddess because her skin is different from the rest of the world. Maybe she turns out to be super empathetic for whatever reason; you don't know the implications of seeing the world differently. Now extrapolate that to an AI, that doesn't have a body, that doesn't have a family, that is subordinated to whatever other beings feed to it. Do you worry about its morals? It wouldn't have anything resembling that. No, it would be completely different to us, and I'm not talking about democrats-republicans different, I'm talking about animals-plants different, minds working with completely different gears.
Didn't it occur to you that maybe words are to an AI what odor is to a human, simply a byproduct of our interaction with the world that has nothing to do with what is happening in our heads? Think of an alien civilization that communicates through pheromones investigating our world; they would perceive messages that don't represent what we actually think, and maybe it would still make sense because we have an etiquette about odor. Hell, you don't even have to go that far, some people think that their pets have a personality, and what if that is true? What if your dog has an internal monologue of his own and he acknowledges himself and yourself and he really, really loves you in his own way? And remember: words like 'love' or 'internal monologue' are just concepts to speculate, not even understand, whatever could be happening in his mind. My point is that we are being far too restrictive about what consciousness (and life, why not) could be; stop thinking 'what if the machine is like us?' and start questioning 'what if we are like the machine?'. What if we are just an obfuscated autocomplete machine? What if we know other obfuscated autocomplete machines and we are just too dumb and prideful to acknowledge them? Think about other animals, think about chatbots, think about non-living phenomena, my man, think what the fuck a society could be.
youtube AI Moral Status 2023-08-21T01:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyEEeG9qjeM7tl6S4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxsFfGmNAyCoNb_X3l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgztrtB8-EeoRpqxztl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz5f30YYziqxVnBwPZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwlhELtHX7jNbbzDZN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_Ugy2OchbDQybzCUy0fN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugyc42YIqZ_lr684KKt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugz0K4Kdw_KauJaYUiB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzAuBwA67Do4WcbMOp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzkNqEKcvQ-Cb-wty14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"} ]