Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's interesting that one of the core principles in the original Star Trek was an aversion to AI overreach. When a computer was doing a job a person could reasonably be able to do, there was something wrong. And they often violated the prime directive to fix it. You can even see in the show, the gov tries to force more automation but in their tv world it always goes poorly enough to scare them away.
reddit · AI Moral Status · 1674083274.0 · ♥ 17
Coding Result
Dimension        Value
Responsibility   government
Reasoning        deontological
Policy           ban
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j4xs2jd", "responsibility": "none",       "reasoning": "mixed",          "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_j4z64rf", "responsibility": "company",    "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_j4wj5ke", "responsibility": "unclear",    "reasoning": "unclear",        "policy": "unclear",  "emotion": "indifference"},
  {"id": "rdc_j4xkr4x", "responsibility": "government", "reasoning": "mixed",          "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_j4xhbzy", "responsibility": "government", "reasoning": "deontological",  "policy": "ban",      "emotion": "approval"}
]
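A minimal sketch of how a raw batch response like the one above can be mapped back to a single coded comment. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown; the helper function `code_for` and the truncated two-record payload are illustrative assumptions, not part of the actual pipeline.

```python
import json

# Illustrative payload: two records copied from the raw response above.
raw = '''[
  {"id":"rdc_j4xhbzy","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"approval"},
  {"id":"rdc_j4xs2jd","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(comment_id: str, raw_response: str) -> dict:
    """Return the coding record for one comment id, checking all dimensions are present."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            missing = [d for d in DIMENSIONS if d not in record]
            if missing:
                raise ValueError(f"record {comment_id} missing dimensions: {missing}")
            return record
    raise KeyError(f"no record for comment id {comment_id}")

print(code_for("rdc_j4xhbzy", raw))
```

Looking up `rdc_j4xhbzy` here reproduces the coding shown in the table above (government / deontological / ban / approval).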