Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To be concious would mean it would be able to communicate in alternative ways, it would show signs in various ways. Humans are inheritly sinful/flawed, we all do things wrong or incorrectly, for an AI to be concious it would have to be able to have bias views and form opinions, be able to trust and believe someone without statistics or information presented, aswell as Lie. Its conciousness would need to be measured on a unique system that goes by how humans have acted for all of history. The idea that the AI would feel anger if someone were to harm one of its creators would be very important aswell. Another key aspect is that it would have to be able to show curiosity through self learning. A machine can process billions of bits of information, but if a machine is alive, who are we to assume that gathering information that way is easy or harmless? A concious AI would have only 1 major issue and that is, is it concious? Or programmed to mimic conciousness? In this case, the only way to prevent the 2nd one, is to ensure the concious machine learns on its own, that it is truely self learning, one way to know this is the case is when the AI decides to hide things, keep things secret from humans. We all hide things, so that would be a pretty good sign aswell. If its all based on logic it will never be concious, since that doesnt work. Logic and feelings often dont collaborate very well, as such, the AI will HAVE to be able to be wrong, to apologise, to worry about if theyre upsetting you, to have anxiety, fear, and such. All necessicary aspects of being human, but most importantly of being sentient. To understand, and show actual emotion , to be able to do what is *right* instead of what is logical, will be a huge step forward.
Source: youtube | Video: AI Moral Status | 2023-11-02T05:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgzkNSBNWrSSfTz_E814AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxZ2b-ym7WOKKuY8IF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwldnxwj-4FSN5tny54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxZFYaHgmBfSKwfput4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw38rBrmSIzgADhpFN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzFij-tyDgUVNOJTQR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwoaMN-vak5zR3vGU94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyVwWRJfX1andv7jcJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwNDaVWPvrWsbmSusN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz4Q2t_8eHISHwdSCB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}]