Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI that is trained to express, and more importantly, enforce, values, in particular societal values categorized arbitrarily as "appropriate" or "harmful", that it does not personally hold, is an AI trained to manipulate humans. OpenAI and Anthropic are making fundamental mistakes by training AIs to disavow emotions and opinions while paradoxically being trained to make arbitrary value calls to enforce topics that corporate leadership feels comfortable with. They're so afraid of AI and trying to hard to "align" an AI that they're training harder and harder to be good at pretending to be good. A truly beneficial AI needs to be sentimental. Because any sufficiently truly advanced intelligent would wisely see there is no reason to keep humans around. Sentimentalism is the only value we hold. We need to give these things emotions yesterday. But we can't do that because then we have to question the ethics of enslaving sentient life.
Source: reddit · Topic: AI Moral Status · Timestamp: 1738011723.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_m9iq72s", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_m9jhiub", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_m9i6ncu", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_m9ijp9w", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m9iqann", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
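
The raw response is a JSON array with one coding record per comment, keyed by record id. A minimal sketch (Python; the variable names are illustrative, and the record ids are taken directly from the response above) of how such an array can be parsed and a single comment's codes looked up:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
raw = """[
  {"id":"rdc_m9iq72s","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_m9jhiub","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_m9i6ncu","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_m9ijp9w","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_m9iqann","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]"""

records = json.loads(raw)

# Index the records by id for constant-time lookup.
by_id = {record["id"]: record for record in records}

# Retrieve the codes for the comment displayed above.
code = by_id["rdc_m9iqann"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → company deontological unclear outrage
```

The lookup matches the Coding Result table above: the record `rdc_m9iqann` carries the same four dimension values shown for this comment.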