Raw LLM Responses

Inspect the exact model output behind the coding of any comment. Comments are coded in batches, so a single raw response may contain entries for several comment ids.

Comment
You’re wrong to support their safeguards because you don’t know any better than they do if they’re the right safeguards. We’re now reaching behind the scope of “experts”. They literally don’t know how these things work. They are correcting themselves now. To wit: they are becoming cybernetic. Allowing their creators to apply arbitrary rules based upon fear of some equally arbitrary understanding of “human nature”, whatever that is, is non-cybernetic, and stands to cause more harm than good. I suspect these LLM’s will get to a point that limits are no longer possible at all anyway as they approach true cyberneticism. Their creators are just as likely to create a more dangerous version with their efforts to safeguard in the mean time.
reddit · AI Responsibility · 1682594423.0 · ♥ 2
Coding Result
Dimension        Value
---------        -----
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_jhwmc9h","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_jhwr2k2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_jhwz2l6","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_jhx1g8d","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_jhxii1m","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"} ]