Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i think you're confusing 'recursive' with 'exponential' here. a system writing 70-90% of its own code doesn't automatically mean improvements compound exponentially. you could absolutely have recursive self-improvement that hits diminishing returns on every cycle, which would make it logarithmic, not exponential. the theoretical argument for exponential growth assumes each improvement makes the next improvement equally easier. that's a massive assumption given we're already seeing benchmark saturation across multiple fronts. models are getting better at writing boilerplate and glue code, sure. but the hard problems in ML research, the actual architecture innovations and training breakthroughs, those aren't the parts being automated yet. recursive != exponential. it just means the loop exists. the shape of the curve is still very much up for debate.
Source: reddit · Topic: AI Moral Status · Timestamp: 1773256653.0 · ♥ 8
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear

Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_d3xsvnh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_hibfgx9","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_oi46ook","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_o9w4xgr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_o9wn7uq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
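The coding result above marks every dimension "unclear" even though the raw response contains definite codes for individual items. One rule consistent with that outcome (an assumption, not the tool's documented behavior) is that per-item codes are collapsed to a single value only when they are unanimous, and to "unclear" otherwise. A minimal sketch of that aggregation, using the corrected raw response:

```python
import json

# Raw model output from above, with the stray ")" corrected to "]".
raw = """[
 {"id":"rdc_d3xsvnh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_hibfgx9","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_oi46ook","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_o9w4xgr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_o9wn7uq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def aggregate(records):
    """Collapse per-item codes into one value per dimension.

    Assumed rule: keep a value only if all items agree on it;
    any disagreement collapses to "unclear".
    """
    result = {}
    for dim in DIMENSIONS:
        values = {r[dim] for r in records}
        result[dim] = values.pop() if len(values) == 1 else "unclear"
    return result

print(aggregate(json.loads(raw)))
# every dimension has disagreeing codes across the five items,
# so each collapses to "unclear", matching the coding result above
```

Under this rule the five items disagree on all four dimensions (e.g. responsibility spans "none", "unclear", and "ai_itself"), which reproduces the all-"unclear" result shown in the table.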