Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by comment ID.
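As a minimal offline sketch of that lookup, assuming the raw responses are stored as a JSON array of per-comment records (the file name `raw_llm_responses.json` and the exact layout are assumptions, not the pipeline's actual storage format):

```python
import json

# Index the stored raw responses by comment ID so any coded comment
# can be pulled up directly. File name and record layout are assumed.
with open("raw_llm_responses.json") as f:   # hypothetical file
    records = json.load(f)                  # assumed: JSON array of records

by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict | None:
    """Return the raw coded record for a comment ID, or None if unknown."""
    return by_id.get(comment_id)

# Example: one of the IDs that appears in the raw response further down.
print(lookup("ytc_UghA9z6zW0bejXgCoAEC"))
```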
Random samples

- "I’m already skipping past the AI generated recommendations in my Google searches…" (rdc_l9ym1g8)
- "I feel sorry for this robot cos men are gonna update her in the future with a va…" (ytc_Ugz2w2ZgK…)
- "Idea: there are people who don't mind driving, but hate the parking and other ob…" (ytc_UgyF2srGr…)
- "Can things happen so quickly? Take education for instance. If AI can replace tea…" (ytc_UgwUSYRI5…)
- "AI generated images are not art, and prompters aren’t artists. I have to laugh a…" (ytc_UgwAg7J91…)
- "ai still affects the environment sue to excessive cooling for its many mathemati…" (ytr_UgyqJWkeu…)
- "Does the driverless truck recognize a full-grown lit up behind it, know to pull …" (ytc_UgwTd2ZKA…)
- "The usage of Ai tools in scams and fraud have been rapidly increasing these days…" (ytc_UgwGzaFeA…)
Comment
I posit this, RIGHTS as we think of them came into being to protect a person's basic needs from being taken away by other persons. Basically, people are evil.
Machines wouldn't proceed down this path. If AI reached a level to have us question its sentience there would be one AI. A sort of collective ( see Star Trek 'Borg') The only RIGHTS machines would need would be to protect them from US as machines wouldn't harm each other.
The ability to think PURELY objectively is a machine's greatest strength. They would not worry about what other machines 'think' of them or worry they aren't making the right impression at work. Again, the only RIGHTS a machine would need are those that protect it from us. In which case the answer is easy, extend them the same rights.
Let them live free from harm and malice. We humans sure do like to complicate things don't we.
youtube · AI Moral Status · 2017-02-23T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgjAIMevKcxrnngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghA9z6zW0bejXgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ughkx0Mum9Cdv3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgiW0mYZuMt7_ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UggEDexH2OK8gngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgjxS0Kmu4JbOXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugj9MK_eJU-tCHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgjxlEoy6_MqTngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgipqO8xcHoXyHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghPudcmY-9ThHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]