Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't agree with your "always will be just machines". With the same approach I can say "brains will always just be sets of cells". Mind is a computation. There is no threshold that something is conscious and something is not. There are just different levels of available computation difficulty and abstraction. There are dogs. There are ants. There are parrots. All different complexity, and from some level those systems can solve puzzles, extrapolate information, make judgements and so on. Not only will machines reach human level, they are guaranteed to overstep humans. And there will be several types of intellect, because complexity can emerge from different computation principles. We have ChatGPT, which is text-based. We have visual recognition programs which learn to be better. We have more general neural networks which can solve analogue problems but can't talk. We will have self-modifying GPTs and self-modifying general neural networks in the future. We will have rule-application intellects (based on Wolfram Language). We have reached the singularity already. The only thing to realise is: mind is just computation. There is no Soul, just billions of equations solved by laws of physics inside your brain cells.
Source: YouTube · AI Moral Status · 2023-08-20T19:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxQAvqt_2Ii02mU-Md4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyvCZnEbxYfik8dTQB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwTXY-_CtJflKm84Qt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxNKCoaPvfyW11r8Hh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz_NsXXQwKJ9sFZfsl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxIjDv5bLaHyqRlygF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzu7bO7OqNuWmHHWHN4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxmm2tK_-UJnWjL_PR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyyg9z1D5ItQM06kZx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgziKHkzhSqHoAYBnc54AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "ban", "emotion": "outrage"}
]
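The raw response is a JSON array with one code object per comment id; the Coding Result table above matches the values of the entry `ytc_Ugzu7bO7OqNuWmHHWHN4AaABAg`. A minimal sketch of how such a response could be parsed back into per-comment codes (the parsing helper itself is illustrative, not part of the coding tool; the id and field names are taken from the response above):

```python
import json

# One entry from the raw LLM response above; in practice the full
# ten-element array would be loaded instead of this excerpt.
raw = """[
  {"id": "ytc_Ugzu7bO7OqNuWmHHWHN4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "approval"}
]"""

# Index the code objects by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the codes for the comment shown in the Coding Result table.
entry = codes["ytc_Ugzu7bO7OqNuWmHHWHN4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {entry[dimension]}")
```

Indexing by `id` rather than list position keeps the lookup robust if the model returns the code objects in a different order than the comments were submitted.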