Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The courts are having a big problem with this, as people keep submitting AI generated stuff that *appears* to be good work but has critical errors. It then causes delays as people try to figure out what the hell is going on. So you end up needing to hire associates to research the stuff the AI spits out to make sure it is true.  Especially as AI hallucinations, if missed, can be introduced as part of the record that future AI models draw from. If that happens enough, for long enough, case law might end up being created ex nihlo from AI bugs. It needs to be banned in all filings. Using it as a research tool probably has its place, but everything needs to be manually verified to prevent the law from breaking. So we will, hopefully, still need lawyers. As not having them is a potential disaster.
Source: reddit · Topic: AI Jobs · Unix timestamp: 1753633260.0 · Score: ♥ 2811
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear

Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_n5i6ohq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_n5gu01t","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"rdc_n5gfmbw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_n5ge6mq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_n5ghajz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]