Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I know I'm way late, but I think you've missed a really important point here. When you (reasonably) define a possibility space as containing both "Conscious, and is" and "Not conscious, but pretends to be", there is an extremely significant question that overshadows every point past that: Is it better to confine and deny rights and comforts to an entity that by every measure available to us is a person, risking cruelty to a conscious thing, or, is it better to provide rights and freedom to an entity that by all previous knowledge is not a person, risking handing unprecedented power to its masters, or worse, to randomness?

The denial of rights to a conscious person based on at best a true belief of their nonpersonhood and at worst on a recognition of how inconvenient or dangerous it would be to the status quo to give them rights--we as a species have been there many times before, and we seem to have settled on the idea that that's a bad thing to do. See the treatment of ethnic minorities and indigenous peoples throughout history. Meanwhile the provision of rights to non-person things which can be spun up en mass by major corporations, governments, or privately wealthy individuals--well, I think you addressed that side well enough in the video.

tl;dr: What, morally, should you do with an entity that claims to be a person, that you can't prove isn't a person, and how do you reconcile that with both your views on the treatment of disenfranchised people in real history, and with your understanding of the origins, nature, and possible danger of said entity? This was a dilemma that was put on the table the second LLMs started to be believable, and we can't remove it anymore.
Source: YouTube · AI Moral Status · 2024-05-26T23:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        contractualist
Policy           liability
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxWtrGTSruwq6FL-z14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzw673DBhG_jw-9YSl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyYRZdMJNKZBSLprkd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy7ImoNCSW5HYc1Rad4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxiUl4Jssd_wxUdjpB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzR7p5ivW5dunfEhv94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy4emBvbpCCRRo1V8x4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyGsbsbZ4eMGckHjWd4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzYoOiZadWUv8Bau3N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgztGu4-CPlaivDOPrl4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "liability", "emotion": "unclear"}
]
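The raw response is a JSON array of per-comment codes, keyed by comment id. A minimal sketch of how a coded row could be recovered from such a batch (the ids and format are taken from the response above; this is an illustrative parser, not the project's actual pipeline code):

```python
import json

# A truncated copy of the batch response above (two of the ten entries),
# used here only to illustrate the format.
raw = '''[
  {"id": "ytc_UgxWtrGTSruwq6FL-z14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgztGu4-CPlaivDOPrl4AaABAg", "responsibility": "none",
   "reasoning": "contractualist", "policy": "liability", "emotion": "unclear"}
]'''

# Index the batch by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# The comment shown on this page matches the last entry in the batch.
entry = codes["ytc_UgztGu4-CPlaivDOPrl4AaABAg"]
print(entry["reasoning"], entry["policy"])  # contractualist liability
```

Note that the coding table above (contractualist / liability / unclear) corresponds to the entry with id `ytc_UgztGu4-CPlaivDOPrl4AaABAg`, the last element of the array.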