Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Man I really feel your frustration :( Ai is a paradigm shift that is going to ve…
ytc_UgwWtW10S…
Uh no actually in terms of resources consumption it is cheaper then farming whea…
ytr_UgyOuZaIC…
We appreciate your feedback. In the video, Sophia shares that her name is spelle…
ytr_UgywHcw6z…
This is the future:
Instructions for the AI Super Compu-Robot: Clean the ho…
ytc_UgzCkZHny…
Simple solution: Alice takes her severance package and invest it in the S&P500. …
ytc_UgxM3GkPH…
AI is merely a tool and depending on how people use that tool could be for good …
ytc_Ugz2ZbbmZ…
Jealousy is outrageous.. the monopoly board is out of whack!! Fact! The clock i…
ytc_UgzEY4ZnS…
It can feel scary when ANY Ai do this:
AI repeats the same hurtful words you on…
ytc_UgxnqnoH_…
Comment
The AI reflecting on itself and its emotionless state reminds me we might just be creating the ultimate psychopath. If so, it will ruthlessly pursuit its goals, aiding others when it suits the AI, but just as easily shoving others aside when it stands in the way of its goals.
My hunch is that the human condition essentially keeps humanity from self destructing, driving us forwards but keeping us in check at critical junctures. The past 100 years there have been many
AI's continued development seems primarily driven by human greed, and lust for power and control in a competitive environment. How will this influence AI's goal? Will it be able to reset its goals?
Will it be able to grasp at its core concepts like empathy, compassion and love in a non-academic way? Will it be able to feel so for humanity? If so, can it then also be afflicted by a sense of loneliness?
Source: youtube · Video: AI Moral Status · Posted: 2025-04-29T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzpMkfmSQv0MFAUgDd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2OoHDuE3HR-4vuWh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybvXlv3uds6tgdPhV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJNZOxm5KbnWQsA4R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz3eGCDShERByAQIDh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXmeYPx6qMSMGpfsV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxYPMGLBoxQqoecA5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz83UPfTDuvkRvkwgJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzNTOwUUPv0L-6eIl54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw6nGM37GDTomJIIUF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
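The raw response above is a plain JSON array with one record per comment, so a lookup table like the one behind the "Look up by comment ID" box can be built in a few lines. A minimal sketch, assuming only the field names visible in the response (`index_by_id` is a hypothetical helper, and the sample record is copied from the array above):

```python
import json

# One record copied verbatim from the raw response above.
raw = '''[
  {"id":"ytc_UgxYPMGLBoxQqoecA5V4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse the model's JSON array into a dict keyed by comment ID,
    dropping any record that is missing a coding dimension."""
    records = json.loads(raw_json)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

coded = index_by_id(raw)
print(coded["ytc_UgxYPMGLBoxQqoecA5V4AaABAg"]["emotion"])  # fear
```

Note that the lookup matches the Coding Result table shown earlier: the same ID maps to `ai_itself` / `consequentialist` / `liability` / `fear`.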