Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not sure it will ever be possible to prove that a machine is or isn't "conscious" in that I agree with the article that we don't even have a particularly strong consensus on what being conscious actually means. About the only actually workable definition of it is "awake, aware, and responding to stimuli (i.e. being conscious is the opposite of being unconscious)" but people want to use the word to mean something else, and nobody seems to really know what that something else even is.

I think as a result a far better standard for us to work around is general intelligence. An agent that can think and reason about roughly any task, make plans and act upon them, deserves our consideration as a person. I think we should be very careful about creating such a machine because we don't really know what the safety or moral implications of doing so are. We could be making a slave, a friend, a benefactor or our own annihilator.

Is Google's chatbot a general intelligence? Not as far as I've heard. It's a sophisticated engine for responding to queries, but it doesn't appear to have an internal model of reality that allows it to make plans and do things it wasn't programmed to do.
reddit AI Moral Status 1655294125.0 ♥ 22
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_icg0n7o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_icfwvfn","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_icg0goj","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_icg04dc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"rdc_icg19wh","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"}]
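When inspecting a raw response like the one above, it helps to parse it rather than eyeball it: a batched coding run returns one JSON object per comment id, and a malformed response (e.g. a stray closing parenthesis instead of a bracket) should fail loudly. A minimal sketch, assuming the response is a JSON array of codings keyed by `id` (the `raw` string here is a two-object excerpt, not the full response):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw = (
    '[{"id":"rdc_icg0n7o","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_icfwvfn","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)

# json.loads raises ValueError on malformed output, so truncated or
# mis-terminated responses are caught instead of silently mis-coded.
codings = json.loads(raw)

# Index the codings by comment id for lookup.
by_id = {c["id"]: c for c in codings}

print(by_id["rdc_icg0n7o"]["emotion"])  # indifference
```

A lookup like `by_id["rdc_icg0n7o"]` then recovers exactly the dimension values shown in the Coding Result table for that comment.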