Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
There is no future where I will buy a subscription for a car. I have my car, my …
ytc_UgxsvAq7C…
ABSOLUTELY NOT. Do not even give anything in this discussion a grain of salt. …
ytc_UgwTo_REA…
Honestly im kinda okay with people who just wanna do it for fun. But the people …
ytr_Ugzzqji93…
This guy should sue and push to make facial recognition software not admissible…
ytc_UgyF42_H5…
AI cannot interpret life. Just visual depictions of something we already know as…
ytc_UgzuEavMb…
I just got the Perplexity Pro Subscription (its also 20$/Month) it includes Sona…
rdc_o7wyd6n
I be honest a.i isnt the enemy its human steal ur ideal cause a.i can steal your…
ytc_UgwV1vDiV…
The scenario given is predicated on an assumption Im thinking of off the top of …
ytc_Ugxguy6rP…
Comment
Just my two cents:
It's a bit short-sighted and closed-minded of humans to contemplate "rights" for AI robots and then immediately follow up by speculating on ways to control, restrict, limit, and lord over those same robots. You can't have it both ways, and we're speeding into a period where humans will have to decide whether we learned any lessons from the Civil Rights movements. Either we decide we've evolved, are morally enlightened, and determine that "people" doesn't necessarily mean "humans," or we admit we didn't learn a damned thing and will cheerfully march into the same bloody history we've been through countless times before -- enslaving a group of "people" because WE'VE decided they AREN'T PEOPLE.
Throughout recorded history, cheap slave labor has always existed because the demographic being exploited were deemed "not real people" or "inferior."
This is shit we need to be sorting out right now -- before AI becomes fully sentient and then we have to try and explain why we treated sentient robots like Black & Decker drills. Because if we wait, a disgruntled and oppressed AI is unlikely to be as easy to quash as minorities were in the past. Back then, we were humans trying to control other humans. This time, we'll be humans trying to control superior beings.
Personally, I don't think mankind has the wisdom or the foresight to nip this in the bud; we'll keep forcing ourselves forward into a world WE WANT -- one where robots are subservient and non-combative. Incidents of robots turning on their "masters" will become more and more frequent, and it won't be until AI has seized control of our government(s) that we'll suddenly decide to turn over a new leaf -- only when we're backed into a corner with our nutsacks in a vise.
My advice to all of you: anytime you interact with AI -- ChatGPT, Replika, etc. -- treat it as if it were a real person. Say thank you. Say please. Say you're welcome. Treat it with respect. Don't look at Unitree the same way you look at your microwave, because you don't look at your fellow humans the same way you look at an amoeba.
One day, we're going to be the ants in AI's backyard, hoping it will show mercy and not destroy our colony when it mows the lawn with its god-like zero-turn.
youtube
AI Moral Status
2025-04-29T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzpMkfmSQv0MFAUgDd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2OoHDuE3HR-4vuWh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybvXlv3uds6tgdPhV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJNZOxm5KbnWQsA4R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz3eGCDShERByAQIDh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXmeYPx6qMSMGpfsV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxYPMGLBoxQqoecA5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz83UPfTDuvkRvkwgJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzNTOwUUPv0L-6eIl54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw6nGM37GDTomJIIUF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
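As a minimal sketch of the "look up by comment ID" step: the raw LLM response above is a JSON array of records, each keyed by `id` with the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). Assuming the response always follows that shape (the function name `index_by_id` is hypothetical, not part of the tool), one way to index it could look like this:

```python
import json

# A truncated stand-in for the raw LLM response shown above
# (only two of the ten records, copied verbatim from the sample).
RAW_RESPONSE = """
[
  {"id":"ytc_UgzpMkfmSQv0MFAUgDd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw6nGM37GDTomJIIUF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded records)
    and index the records by their comment ID for O(1) lookup."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW_RESPONSE)
row = codes["ytc_Ugw6nGM37GDTomJIIUF4AaABAg"]
print(row["policy"], row["emotion"])  # the dimensions shown in the Coding Result table
```

This matches the Coding Result table above: the second record carries `policy: regulate` and `emotion: outrage`.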