Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Personhood as a human right comes from historically recent fundamental acknowledgement of a finite, memory based, self aware existence coupled with a CHOICE to do the right or wrong thing. CHOSEN morality is inherent in human nature as a majority of history will show, it separates an animal killing for feeding vs murder of another human with rights. Giving rights of personhood to AI is incompatible with our human system which has evolved since the beginning of literature and story telling and has the potential for massive negative repercussions. I think you have it around the wrong way, and if anything it would need to prove its combability long before even attempting a framework like rights for AI systems. Awe and irrational immediacy of the future does not justify an emotional response such as AI personhood, I find this topic an affront to rationalism and can even see all of you wrestling with all of it in real time. Its amazing to watch you navigate such an intense topic and the friction that arises. intellectual bias should be at the forefront of any sensible conversation of this magnitude, although i find watching enjoyable nonetheless.
YouTube 2026-02-06T13:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          ban
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxPR_aHccyh-FRj9l94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGpE2MIj9ElGlClkB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwS07HowB_jPkqNc7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzYiX4DR19bcEisjgN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQjWaifbPQSU5UCXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy5bZY7YyINMuSlooB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxtAGPwz69rb2voHbp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzYcZoCiPj4SwWr8UJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzbTU3HMq9rATqmCtJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxoou9uotp4A11reRR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
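As a minimal sketch of how a coding result can be recovered from the raw response, the JSON array above can be parsed and indexed by comment ID; the record shown here is the one whose values (responsibility "none", deontological, ban, outrage) match the coded comment on this page. The variable names are illustrative, not part of any tool API.

```python
import json

# A single record copied from the raw LLM response above; in practice
# the full JSON array string would be parsed the same way.
raw = (
    '[{"id":"ytc_Ugy5bZY7YyINMuSlooB4AaABAg",'
    '"responsibility":"none","reasoning":"deontological",'
    '"policy":"ban","emotion":"outrage"}]'
)

records = json.loads(raw)

# Index records by comment ID so each coded comment can be looked up directly.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_Ugy5bZY7YyINMuSlooB4AaABAg"]
print(coding["policy"])   # -> ban
print(coding["emotion"])  # -> outrage
```

The same lookup works for any of the ten IDs in the full array, which is how the per-comment table above is populated from the batch response.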