Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Yes people are doing this, bolting on agentic abilities to LLMs. "AutoGPT" "ChaosGPT" You're presenting this like it's an actual problem. Firstly, "ChaosGPT" is literally just AutoGPT, set up with a bot named "ChaosGPT". Second, AutoGPT is hilariously bad. Third, the process can be killed at any point.
>
> > Like a snowball being pushed down a hill. Sure someone started it, but has no real control over where it ends up, how big it's gotten to at that point, and what damage (if any) it's done.
>
> Wildly untrue, especially since you specifically name AutoGPT. You can literally follow every single step it takes, every piece of writing it does to itself, and see exactly what is going on at any point. You are trying to act as if this is already a problem. It is most certainly not. AutoGPT is fun to play with for, like, 30 minutes, and then you realize it basically can't do anything. People that make posts like "AutoGPT made its own YouTube channel" always later clarify that they themselves made the channel, and that all AutoGPT did was act like a normal ChatGPT and guide them on how to do it. What you're describing simply isn't happening.
reddit · AI Jobs · 1685862767.0 · ♥ 4
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jmu4waw", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_jmur4pp", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jmuu3dt", "responsibility": "company",     "reasoning": "mixed",            "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_k5wr6oo", "responsibility": "company",     "reasoning": "mixed",            "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_jmv0i8d", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"}
]
```
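Since the model returns one JSON array covering several comments, recovering the coding for a single comment means parsing that array and selecting the record by id. A minimal sketch of that lookup (the `code_for` helper and the `DIMENSIONS` tuple are illustrative, inferred from the records shown above, not part of a documented schema):

```python
import json

# Raw LLM response as captured above: a JSON array of coded records,
# one per comment. Truncated here to the record for this comment.
raw = ('[{"id":"rdc_jmu4waw","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')

# Dimension names inferred from the records on this page (assumption).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(records, record_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for rec in records:
        if rec.get("id") == record_id:
            return {dim: rec.get(dim) for dim in DIMENSIONS}
    return None

records = json.loads(raw)
print(code_for(records, "rdc_jmu4waw"))
# → {'responsibility': 'none', 'reasoning': 'consequentialist',
#    'policy': 'none', 'emotion': 'indifference'}
```

Selecting by id rather than by position keeps the lookup robust if the model reorders or drops records in its response.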