Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The people trying to use LLMs for these purposes baffle me, as an LLM does not replicate a logical thought process. Results of what it can spit out will always be unpredictable, and can be completely illogical. Even worse is when your models are trained on random internet text, which includes a lot of complete nonsense.
reddit · AI Jobs · 1707134889.0 · ♥ 4
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          outrage

Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kozx3ui", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_kp0ib3o", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kp0jiap", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_kp1avx3", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "rdc_kozknuu", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
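The raw response is a JSON array covering a whole batch of comments, so recovering the coding shown in the table above means matching on the comment's id. A minimal sketch of that parsing step, assuming only what is visible in this record (the JSON payload is copied verbatim; the helper name `index_codings` is illustrative, not part of the original pipeline):

```python
import json

# Raw LLM response as shown above: one coding object per comment id.
raw = """[
  {"id": "rdc_kozx3ui", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_kp0ib3o", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kp0jiap", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_kp1avx3", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "rdc_kozknuu", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]"""

def index_codings(raw_response: str) -> dict:
    """Parse the JSON array and index each coding object by its comment id."""
    return {item["id"]: item for item in json.loads(raw_response)}

codings = index_codings(raw)
# Look up the coding that the dimension/value table above displays:
print(codings["rdc_kp0jiap"])
```

Indexing by id rather than by list position makes the lookup robust if the model returns the batch in a different order than it was sent.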