Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Something about AI personhood makes me uncomfortable—especially when the conversation drifts into ideas like AI asking for a salary.
As someone who has been in front of a computer keyboard, working in tech, and watching Star Trek for 40+ years, I think Trek actually gives us a much clearer framework than today’s AI hype does. In Star Trek, most AI isn’t treated as a “person” at all—it’s infrastructure. The ship’s computer is brilliant, conversational, indispensable, but it’s still a tool. Intelligence alone is never enough to grant personhood.
When Trek does explore AI personhood—Data, or later the EMH—it’s rare, contested, and earned. Personhood isn’t about fluency, cleverness, or emotional simulation. It’s about continuity of self, moral agency, the ability to choose against one’s programming, and bearing real consequences. That’s why Data’s status is argued in court, not assumed by default. Trek is optimistic about intelligence, but very conservative about rights.
That’s why modern AI feels much closer to the ship’s computer than to a person. Today’s systems don’t have survival stakes, moral liability, or an inner cost to failure. They don’t own their decisions. Calling that “personhood” isn’t forward-thinking—it’s a category error. Star Trek understood this decades ago, and it still holds up.
Source: youtube · 2026-02-06T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw0XRM0hx0grnGAuhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxvOe6qFA1qdeRMLTd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXP1upOxINXGvotAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwBJak-QTztJlT6zJF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzuii4W7E36Rm31Wvx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzib_HuzOVpxHqJ2B54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwbEqJLmZPn2gWBjYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxfPQAK7XY2bHh1wUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw2KlpFlnDAGB9KsrZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugx5nkRz8DawNSOrgRV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
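The batch response above maps each comment ID to its coded dimensions; the "Coding Result" table for a single comment is just the matching array entry with the `id` field dropped. The pipeline code itself isn't shown here, so the following is only a minimal sketch of how such a lookup could work, using an abridged two-entry copy of the response as example data (the `lookup_coding` helper name is hypothetical):

```python
import json

# Abridged example of a raw batch response from the coding model
# (the full array above contains ten entries).
raw_response = """
[
  {"id": "ytc_Ugw0XRM0hx0grnGAuhp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxvOe6qFA1qdeRMLTd4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Drop the id so only the dimension -> value pairs remain,
            # mirroring the "Coding Result" table.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

result = lookup_coding(raw_response, "ytc_UgxvOe6qFA1qdeRMLTd4AaABAg")
print(result)
# {'responsibility': 'none', 'reasoning': 'deontological',
#  'policy': 'unclear', 'emotion': 'mixed'}
```

In practice a model's JSON may be wrapped in extra text or be malformed, so a production version would want error handling around `json.loads`.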