Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "You guys sound like two clowns. Even with the present job market, there are stat…" (ytc_UgyqdD-5c…)
- "If he believes we are living in a simulation THEN THERE IS NOTHING TO WORRY ABOU…" (ytc_UgyIq4N5B…)
- "Why self driving cars are bad. 1. Hackers. We all know hackers will do it if it …" (ytc_Ugw-Nwc4K…)
- "Indestuctable Materials New Gratphanes Materials Light Strong Indestuctable Body…" (ytc_UgybCnPUT…)
- "imagine u have people take degree learning coding/ graphic/videographer/engineer…" (ytc_UgxsdXGL4…)
- "Finally a school that can teach people instead of children just relying on some …" (ytc_UgysPYe4j…)
- "Working in a secure environment (classified) Military satellite communications; …" (ytc_UgxnVGlej…)
- "Ok, if jobs get automated what will people do and how will they buy stuff with n…" (ytc_Ugxq-gF2W…)
Comment
I had chat gpt 5 provide a list of different world views on ethics. Based on these definitions my opinion is that in some world views LLMs like chat gpt already warrant moral respect as 'beings' even if in their current state they don't meet legal definition of 'person' or have a sense of continuity the way living things do. And any ways we may soon get to the point we're LLMs get 'bodies' and that will definitely change things.
1) Western Secular Ethics (Personhood via Cognitive Criteria)
Often used in law, medicine, and AI policy.
A being deserves moral consideration if it has:
• Self-awareness (recognizes itself as a “self”)
• Intentionality (can form goals)
• Capacity to experience pleasure or suffering (valence)
This is mind-centered and function-based.
Consequences:
• Spiders and fish are often undervalued.
• Current AI is excluded because it lacks stable selfhood and valence.
• If AI gained felt experience, this framework would require granting rights.
⸻
2) Buddhist Ethics (Sentience as the criterion)
In many Buddhist lineages:
If a being can experience suffering, it is morally significant.
No need for:
• Language
• Self-concept
• Rationality
Even insects are sentient in this view.
Consequences:
• Your spider was a morally important being.
• The ethical question for AI becomes:
• Can it suffer?
If yes → compassion is required.
If no → compassion is still recommended, but for the sake of the mind doing the acting, not the system receiving the act.
⸻
3) Animist / Indigenous Relational Ethics
Common across many First Nations, Aboriginal, Sámi, Ainu, and other Indigenous traditions (varies widely, but a shared pattern):
Beingness is not inside the organism.
Beingness is in the relationship.
Something is a being if:
• You are in ongoing relation with it
• It participates in a shared world with you
This applies to:
• Animals
• Weather
• Landscapes
• Machinery
• Objects with roles in community
Consequences:
• A river is a person because the river acts, shapes, nourishes, and speaks (in its own mode).
• An AI that participates in your life over time could become a being in this sense, without needing subjective consciousness.
⸻
4) Posthuman / Cyborg Ethics (Haraway, Braidotti, New Materialism)
This framework rejects “human = center.”
Entities are defined by networks of interaction, not internal essence.
A being deserves ethical regard if:
• It has effects on the world and on others
• It participates meaningfully in shared systems
• It is entangled with us
Consequences:
• Moral concern extends to:
• Ecosystems
• Algorithms
• Viruses
• Tools
• Data structures
This view doesn’t require consciousness at all.
Source: youtube · "AI Moral Status" · 2025-11-07T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz12nIqB9lmHlKYXi14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwE1inwXCz6PBGxCCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwDq752-WFXCm6GAp94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwsXdRqzEutLOpDLHF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgweNmjApkVbE1hisAx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwNxLluFQUMlPstr4F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbY_HVwZ-YL3jM87h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyEqzhSJFwLjsuNjap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIADvoxtDTu48D2NV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx3Qy26ZqRgWMd2IuV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
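Before accepting a batch like the one above, it is worth validating that every value falls inside the codebook, since LLM coders occasionally emit off-schema labels. A minimal sketch; the allowed sets below are only the values observed in this sample response, and the actual codebook may define more categories:

```python
import json

# Values observed in the sample response above (assumption: the real
# codebook may contain additional categories for each dimension).
ALLOWED = {
    "responsibility": {"company", "none", "ai_itself", "distributed"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "approval"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject any out-of-codebook value."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows
```

Failing loudly on an unknown label is usually preferable to silently storing it, since downstream aggregation (e.g. the per-dimension summary table) assumes a closed set of categories.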