Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, sorry, you didn't read the post. I'm literally trying to defend the fact that in its current "dumb" state, AI does control humanity and that it is harmful, because it uses US to make "choices". Also, you wouldn't be able to question the output of a complex AI; in fact, very few people can do that. For instance, consider an AI that has decided that the GDP of Belarus will decrease by 5% next year. We don't know why; we might have a vague clue, but that's it. That's the point of a predictive AI: it predicts according to multiple variables given as input. If we could do that ourselves we would, but we can't, and we trust the AI to do it for us.
reddit AI Moral Status 1597084439.0
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_kykw5yc","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_kyltinv","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_g0y7v05","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_g10p5cs","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_g0ys5vt","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
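The raw response is a JSON array with one record per comment id, so a coded record can be matched back to its comment by id. A minimal Python sketch of that lookup, assuming the model output is valid JSON as shown above (the `by_id` helper name is illustrative, not part of any real pipeline):

```python
import json

# Raw batch response exactly as returned by the model: one object per comment id.
raw = """[
  {"id":"rdc_kykw5yc","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_kyltinv","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_g0y7v05","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_g10p5cs","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_g0ys5vt","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

records = json.loads(raw)

# Index the batch by comment id so each coding can be joined back to its comment.
by_id = {record["id"]: record for record in records}

# Look up the coding for the comment shown above (id rdc_g10p5cs).
coded = by_id["rdc_g10p5cs"]
print(coded["responsibility"], coded["emotion"])  # ai_itself fear
```

Matching on id rather than array position makes the join robust if the model reorders or drops records in a batch.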