Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This post is also more a "don't torture the AI because we may not know when it has become sentient". Ie. Let's not torture something, especially not if there's the possibility that it may be sentient at some point in the future without us realizing it. The logical thing to do is to just not torture things.
reddit AI Moral Status 1676633648.0 ♥ 20
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_j8w6jyr","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"rdc_j8v2cb2","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_j8vkhph","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_j8wi6pk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_j8xze7k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]
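The raw response is a JSON array covering a batch of comments, while the coded result above shows only this comment's row. A minimal sketch, assuming Python and the standard-library `json` module, of how the matching record (id `rdc_j8w6jyr`) could be pulled out of such a batch:

```python
import json

# Raw LLM response as shown above: a JSON array of coding records.
raw = (
    '[ {"id":"rdc_j8w6jyr","responsibility":"none","reasoning":"deontological",'
    '"policy":"none","emotion":"approval"},'
    ' {"id":"rdc_j8v2cb2","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_j8vkhph","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_j8wi6pk","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_j8xze7k","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"} ]'
)

records = json.loads(raw)

# Select the record for this comment by its id.
match = next(r for r in records if r["id"] == "rdc_j8w6jyr")
print(match["reasoning"], match["emotion"])  # deontological approval
```

This reproduces the dimension values in the coded-result table above; the variable names are illustrative, not part of the original pipeline.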