Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Unfortunately, half of NLP researchers don't think even fine-tuned LLMs can ever understand NLP (https://arxiv.org/abs/2208.12852), so unless all those experts are wrong and someone does make a breakthrough I don't see how things can get drastically better. Hence why when you use AI to do anything you're immediately impressed because it broadly came up with a good answer, but once you start drilling down into specifics and increase the complexity of the ask it struggles. That's an NLP complexity problem not a context or memory problem. Realistically speaking, I think they're going to have to walk back the idea of LLMs having fully autonomy, and instead go down a modified supervised learning path where LLMs are fine-tuned based on manual corrections made by humans.
reddit AI Jobs 1743092422.0 ♥ -1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           industry_self
Emotion          resignation
Coded at         2026-04-25T08:13:13.233606
Raw LLM Response
[
  {"id": "rdc_mhjzsi2", "responsibility": "unclear", "reasoning": "unclear",         "policy": "unclear",       "emotion": "fear"},
  {"id": "rdc_mic92ft", "responsibility": "company", "reasoning": "deontological",   "policy": "liability",     "emotion": "outrage"},
  {"id": "rdc_micfh5j", "responsibility": "company", "reasoning": "virtue",          "policy": "none",          "emotion": "outrage"},
  {"id": "rdc_mjhi7x2", "responsibility": "company", "reasoning": "deontological",   "policy": "regulate",      "emotion": "outrage"},
  {"id": "rdc_mk1922l", "responsibility": "none",    "reasoning": "consequentialist","policy": "industry_self", "emotion": "resignation"}
]
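To see how the coded dimensions above can be recovered from the raw response, here is a minimal sketch that parses the JSON array and looks up one coding by id. This assumes the raw output is a batch of codings keyed by a comment id, and that rdc_mk1922l is the id of the comment shown here (an assumption based on the matching field values, not confirmed by the record itself).

```python
import json

# The raw LLM response, reproduced verbatim from the record above:
# a JSON array with one coding object per comment id (assumed schema).
raw = '''[
  {"id":"rdc_mhjzsi2","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mic92ft","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_micfh5j","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_mjhi7x2","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_mk1922l","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]'''

# Index the batch by id so a single comment's coding can be pulled out.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the (assumed) id of this comment and read its four dimensions.
result = codings["rdc_mk1922l"]
print(result["responsibility"], result["reasoning"],
      result["policy"], result["emotion"])
# → none consequentialist industry_self resignation
```

The printed values match the Coding Result table above, which is the consistency check this sketch is meant to illustrate.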