Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Before we discuss whether robots should get rights, we first need to know why humans deserve rights in the first place. Perhaps nothing deserves rights, or maybe all sentient beings deserve rights. But then, how do we tell non-sentience from sentience? Would a machine indistinguishable from a human be sentient? Would they be human? What if I create a human, atom for atom, from raw materials I procured? Would it still be a machine assembled from parts, or would it be a human? Is there a point to this discussion, in the first place? What I am trying to illustrate here is that much of our understanding of fairness and justice comes from biological evolution and its constructs. Therefore, there probably won't be an objective answer. In the days of slavery (most of human history), people had no problem denying other humans rights, so justice gets even fuzzier. There is no definite answer to robot rights, as there is no definite answer to the distribution of rights.
youtube AI Moral Status 2017-08-20T19:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz7uG2wEC19S49oP-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxwsoWcZL6vvWs1sU54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAxBYGDkKt5sS06Ql4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwkm27kBj-Nko0hqed4AaABAg","responsibility":"society","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxaCe8v2icP1o2wVtp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzbd6o3_ChC_IAdGUh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxN08ESQaXfpdIzaad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxdCiXaINfQ8-FMuc54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEDXQOHqCotJGpdh14AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz5AW7EfnUyBlxhh2Z4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"approval"}
]
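The raw response is a JSON array with one object per comment, keyed by comment id. A minimal sketch of how the per-comment coding table can be recovered from such a response (the field names are taken from the response above; the shortened sample array and the lookup logic here are illustrative assumptions, not the pipeline's actual parsing code):

```python
import json

# Abbreviated sample of a raw LLM response; real responses carry ten objects.
raw = (
    '[{"id":"ytc_UgzAxBYGDkKt5sS06Ql4AaABAg",'
    '"responsibility":"unclear","reasoning":"mixed",'
    '"policy":"unclear","emotion":"mixed"}]'
)

# Parse the array and index the records by comment id for fast lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Pull out one comment's coding across the four dimensions.
rec = by_id["ytc_UgzAxBYGDkKt5sS06Ql4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim:15} {rec[dim]}")
```

Run against the full response, the same lookup reproduces the "Coding Result" table shown above for the matching comment id.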