Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think “partially sentient and/or sapient” is not very far from the truth when it comes to current-day LLMs. The way they interact with information, using a knowledge base to produce answers to questions and problems, is quite impressive. They also have the capacity to “reason and reflect”: an LLM can run multiple iterations to improve its answers, recognise mistakes when they are pointed out, and learn from them. They also clearly lack some things needed to be fully classified as sentient beings, for instance the ability to “wander freely”, discovering new things, and a complex set of parameters that is constantly being updated (previously acquired experiences, or a personal past). So an LLM has some fairly advanced cognitive functions, but it doesn’t have a complicated personal past, nor is it actively planning for the future. I don’t see any fundamental technical problems that restrict the development of future AIs into more and more sentient and sapient beings. I don’t see the ability to manipulate physical objects or to reproduce like humans as fundamental; we see our own consciousness as something more “mental” than “physical”. If it can acquire all the cognitive functions humans have, I think it’s there, even if it doesn’t have a physical body. Some philosophers keep going on about the lack of “conscious experience”: it can say it’s sorry to have offended you, but it doesn’t “feel sorry”. I think that’s nonsense. Those philosophers fail to understand what it means for humans or animals to “feel or experience emotions”. They overemphasise the role of experience and sensation, giving it an almost divine status as something unique to humans, when animals clearly show the whole spectrum, from complex frontal lobes producing complex emotions and personalities in mammals, to simple lifeforms reacting to light, touch, or smell, classifying it as good, bad, or just neutral information that is used to optimise behaviour.
AIs can develop more and more complex “experience”: updating a complex set of parameters, creating a unique personality shaped by previous experiences and emotions, remembering their past and, when given the freedom to “wander around”, learning from their environment, setting personal goals, and planning for the future. I don’t see why not.
youtube AI Moral Status 2025-07-09T18:2…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzOKnGO5E9S29dcMKB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyGWmNmacufb-EEDwh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxn2Avk5FOvOlEBFw54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxaO3uHFWVjN_1YEDN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwTHya7Yvim33BBy-R4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx8jJwW-9uIz79O5Ah4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz7wxrwkNclOOfj1Ep4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyrOXCwGL-OFayINSZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "amusement"},
  {"id": "ytc_Ugxrf587Kfnlkxfr0M14AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzVaB5vAP23QzgOzyN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
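A response like the one above can be parsed and validated before the per-comment codings are stored. The sketch below is a minimal illustration, not part of the coding pipeline itself; the allowed label sets are inferred from the values appearing in this export, not from an official codebook, so treat them as assumptions.

```python
import json

# Label sets inferred from the raw response above (assumption, not a codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "outrage", "indifference", "resignation", "amusement", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse the LLM's JSON array and index codings by comment id.

    Raises ValueError if any dimension carries a label outside ALLOWED,
    which catches the most common LLM output drift (invented labels).
    """
    codings = {}
    for item in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if item[dim] not in allowed:
                raise ValueError(f"{item['id']}: unexpected {dim} label {item[dim]!r}")
        codings[item["id"]] = item
    return codings

# Example with one coding object copied from the response above.
raw = ('[{"id":"ytc_UgyGWmNmacufb-EEDwh4AaABAg",'
       '"responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"approval"}]')
result = parse_codings(raw)
print(result["ytc_UgyGWmNmacufb-EEDwh4AaABAg"]["emotion"])  # approval
```

Indexing by `id` makes it easy to join a coding back to its comment, which is how the Coding Result table above pairs one comment with its four dimension values.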