Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not quite as simple as that. AI models generally require a lot of compute power to run. With something like ChatGPT, the required GPU compute means that it can only really be hosted by dedicated providers. On top of this the required hardware (Nvidia GPUs) is expensive to acquire and uses a lot of power (Not to mention maintenance costs). As such, you end up paying a hefty premium to access or host models with these providers. This cost is high enough to be pretty prohibitive for most companies. Tl;dr: Generally, companies cannot pivot their existing resources to run the models they will need to automate jobs.
reddit AI Jobs 1675025937.0 ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j6e3gkp", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j6ebdmn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j6erok9", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j6eyusr", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j6ffwq2", "responsibility": "government", "reasoning": "unclear", "policy": "ban", "emotion": "mixed"}
]
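As a minimal sketch of how a raw response like the one above might be parsed and validated before the per-comment dimensions are stored, assuming the field names shown in the JSON (the allowed label sets here are only partially visible in the data and are otherwise assumptions):

```python
import json

# Two records from the raw response above, truncated for brevity
raw = '''
[ {"id":"rdc_j6e3gkp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_j6ebdmn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]
'''

# Label sets per dimension; values beyond those visible above are assumed
ALLOWED = {
    "responsibility": {"none", "government", "company", "individual"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "mixed", "anger"},
}

def parse_codes(text):
    """Parse the model's JSON array, keeping only records whose labels are valid."""
    return [
        rec for rec in json.loads(text)
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# Index valid records by comment id for lookup
by_id = {rec["id"]: rec for rec in parse_codes(raw)}
print(by_id["rdc_j6ebdmn"]["reasoning"])  # consequentialist
```

Filtering on a fixed label set per dimension guards against the model emitting values outside the codebook, which would otherwise silently pollute downstream counts.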