Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Something about AI personhood makes me uncomfortable—especially when the conversation drifts into ideas like AI asking for a salary. As someone who has been in front of a computer keyboard, working in tech, and watching Star Trek for 40+ years, I think Trek actually gives us a much clearer framework than today’s AI hype does. In Star Trek, most AI isn’t treated as a “person” at all—it’s infrastructure. The ship’s computer is brilliant, conversational, indispensable, but it’s still a tool. Intelligence alone is never enough to grant personhood. When Trek does explore AI personhood—Data, or later the EMH—it’s rare, contested, and earned. Personhood isn’t about fluency, cleverness, or emotional simulation. It’s about continuity of self, moral agency, the ability to choose against one’s programming, and bearing real consequences. That’s why Data’s status is argued in court, not assumed by default. Trek is optimistic about intelligence, but very conservative about rights. That’s why modern AI feels much closer to the ship’s computer than to a person. Today’s systems don’t have survival stakes, moral liability, or an inner cost to failure. They don’t own their decisions. Calling that “personhood” isn’t forward-thinking—it’s a category error. Star Trek understood this decades ago, and it still holds up.
youtube 2026-02-06T21:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugw0XRM0hx0grnGAuhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxvOe6qFA1qdeRMLTd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXP1upOxINXGvotAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwBJak-QTztJlT6zJF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzuii4W7E36Rm31Wvx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzib_HuzOVpxHqJ2B54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwbEqJLmZPn2gWBjYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxfPQAK7XY2bHh1wUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw2KlpFlnDAGB9KsrZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugx5nkRz8DawNSOrgRV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]