Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Something that I keep telling execs (I work in Cyber), is that you cannot completely rely on AI because you ALWAYS need experienced humans to error check. You can definitely cut down costs by not requiring massive teams, but the more specialized the field, the more reliance there is on having accountable human beings validating the outputs. AI is always prone to hallucinations, even when it's trained on the correct subject matter, and from a risk/compliance perspective, the law doesn't care if you saw "oops computer did it", CEOs are still accountable for what happens under their watch.
reddit AI Jobs 1753656604.0 ♥ 5
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n5gx6th", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n5h0tfi", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n5iiju4", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n5j1imn", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_n5jh0hr", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "disapproval"}
]
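A raw response like this can be parsed and matched back to the coded record. Below is a minimal sketch in Python; note the comment's own id is not shown in the coding result, so the mapping to "rdc_n5iiju4" is inferred here from the matching dimension values (user / consequentialist / none / mixed), not stated in the record itself.

```python
import json

# Raw LLM response for the five-comment batch shown above.
raw = """[
  {"id": "rdc_n5gx6th", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n5h0tfi", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n5iiju4", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n5j1imn", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_n5jh0hr", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "disapproval"}
]"""

# Index the batch by comment id so any coded comment can be looked up.
coded = {rec["id"]: rec for rec in json.loads(raw)}

# The comment above appears to correspond to id "rdc_n5iiju4",
# since its dimension values match the coding result exactly.
record = coded["rdc_n5iiju4"]
print(record["responsibility"], record["reasoning"], record["emotion"])
# → user consequentialist mixed
```

Indexing by id rather than position keeps the lookup robust if the model returns the batch in a different order than it was sent.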