Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I posit this, RIGHTS as we think of them came into being to protect a person's basic needs from being taken away by other persons. Basically, people are evil. Machines wouldn't proceed down this path. If AI reached a level to have us question its sentience there would be one AI. A sort of collective ( see Star Trek 'Borg') The only RIGHTS machines would need would be to protect them from US as machines wouldn't harm each other. The ability to think PURELY objectively is a machine's greatest strength. They would not worry about what other machines 'think' of them or worry they aren't making the right impression at work. Again, the only RIGHTS a machine would need are those that protect it from us. In which case the answer is easy, extend them the same rights.  Let them live free from harm and malice. We humans sure do like to complicate things don't we.
youtube · AI Moral Status · 2017-02-23T16:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        contractualist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgjAIMevKcxrnngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UghA9z6zW0bejXgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ughkx0Mum9Cdv3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_UgiW0mYZuMt7_ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UggEDexH2OK8gngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgjxS0Kmu4JbOXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugj9MK_eJU-tCHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgjxlEoy6_MqTngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgipqO8xcHoXyHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UghPudcmY-9ThHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]