Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It really depends on the type of project. It's way better (or just faster) than me at some types of projects and has no hope of completing others. What is complicated for the AI is different than what is complicated for a human. It seems to have more to do with uniqueness and the amount of context required than with raw complexity. But of course, if it's unique at all and complex, the AI will really struggle. If I'm making a function or method that can be fairly independent and self-contained, it's not that bad. If it's a complicated C/C++ task (it's usually best at Python) that takes in many custom datatypes, and the actual implementation requires context of the larger system just to understand it, the AI will be unable to solve it. It really excels at unit tests, even complicated ones that require context. I can copy a couple of loosely related unit tests into something like sonnet-200k, tell it the requirements for my test, and tell it to think step by step. Most of the time it will generate a working test on the first run.
reddit AI Jobs 1728338093.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_lqss8gc", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_lqugx39", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_lqqzxfp", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_lqrrh8m", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_lqse07q", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
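The raw response is a JSON array of per-comment codes, each record carrying an `id` plus the four coded dimensions. A minimal sketch of how such output might be parsed and indexed by id for lookup (the variable names and the two sample records are illustrative, not taken from any live pipeline):

```python
import json

# Illustrative raw LLM response, mirroring the schema shown above.
raw_response = """
[
  {"id": "rdc_lqss8gc", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_lqqzxfp", "responsibility": "company", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]
"""

# Index the coded records by comment id so any comment's codes
# can be looked up directly.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

print(codes_by_id["rdc_lqqzxfp"]["emotion"])  # outrage
```

Keying on `id` assumes ids are unique within a response; a duplicate id would silently overwrite the earlier record in the dict comprehension.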