Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just because you can does NOT mean you should. A.I. is not something to develop without at least some serious concern as to what could happen. An intelligent machine, capable of independent thought and reasoning without a sense of right/wrong will backfire on us, virtually guaranteed. Then again, a sense of what is the right thing to do, if we are in the wrong, could easily turn the weapon against us. I do not see a 'win win' but a 'lose lose' disaster.
Source: reddit | Topic: Cross-Cultural | Unix timestamp: 1522941714.0
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dwuru0f", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",   "emotion": "outrage"},
  {"id": "rdc_dwuuont", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability",  "emotion": "fear"},
  {"id": "rdc_dwuoieo", "responsibility": "developer", "reasoning": "virtue",           "policy": "ban",        "emotion": "outrage"},
  {"id": "rdc_dwunqdl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",        "emotion": "fear"},
  {"id": "rdc_devjlw8", "responsibility": "unclear",   "reasoning": "mixed",            "policy": "unclear",    "emotion": "indifference"}
]
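To inspect a coded comment programmatically, the raw response can be parsed and indexed by comment id. The sketch below is a minimal example, assuming the response is valid JSON and that the entry with id `rdc_dwuuont` is the one matching the coding result shown above (its dimensions agree: developer / deontological / liability / fear); the variable names are illustrative, not part of any tool's API.

```python
import json

# Raw LLM response, copied verbatim from the export above.
raw = """[
  {"id":"rdc_dwuru0f","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_dwuuont","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"rdc_dwuoieo","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"rdc_dwunqdl","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_devjlw8","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]"""

# Parse the batch and index each coding by its comment id.
codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}

# Look up the coding for the comment shown above.
coding = by_id["rdc_dwuuont"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → developer deontological liability fear
```

Indexing by id makes it easy to cross-check any single comment's coded dimensions against the raw batch output, which is useful when auditing disagreements between runs.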