Reddit Experience · Mar 2026 · USA

Do you think AI models will plateau?

51 upvotes 80 replies


Yes, another post about AI… it’s salient for us. Btw, lead eng here with 20 YoE. A lot of people talk about why current models can’t do our jobs, which is interesting but also self-evident: if the model could replace us, it already would have. But I’m finding it hard to look at the past few years of changes and not see a bleak future for us.

In the course of a few years, LLMs went from basically a cool party trick (early ChatGPT could write basic functions but was laughably useless beyond that), to being genuinely helpful for atomic changes within a single file (GPT-4 level models), to becoming actually useful examiners of wider codebases, workable as long as you dictated many incremental steps and supervised results closely (o1/o3 level models), to now sometimes genuinely one-shotting medium-sized features with Opus. Sometimes. Yeah, it still fails a lot at that, but doesn’t that only matter if it plateaus fairly soon?

Last week a PM created a migration. Our database relations are fairly complicated, hierarchical with recursive queries and other nonsense, and this woman, who’s never written code, described the necessary business outcome in plain English (put these assets somewhere else conditional on some filters) and the LLM correctly did the rest. It had to actually explore the codebase to do this, too. That would have been unthinkable a few years ago, even with ChatGPT. So why shouldn’t the same workflow soon apply to large feature requests?
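For anyone who hasn’t dealt with hierarchical schemas: a minimal sketch of the kind of migration I mean. The schema, table names, and filter here are all invented for illustration (not our actual database), but the shape is the same: walk a folder subtree with a recursive CTE, then re-home only the assets that match a filter.

```python
import sqlite3

# Invented toy schema: folders form a tree via parent_id; assets live in folders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE folders (id INTEGER PRIMARY KEY, parent_id INTEGER REFERENCES folders(id));
CREATE TABLE assets  (id INTEGER PRIMARY KEY,
                      folder_id INTEGER REFERENCES folders(id),
                      archived INTEGER);
""")
# folder 1 is a root; folder 2 nests under 1; folder 3 is an unrelated destination
conn.executemany("INSERT INTO folders VALUES (?, ?)", [(1, None), (2, 1), (3, None)])
conn.executemany("INSERT INTO assets VALUES (?, ?, ?)",
                 [(10, 2, 1), (11, 2, 0), (12, 3, 1)])

# "Put these assets somewhere else, conditional on some filters":
# recursively collect every folder under folder 1, then move only the
# archived assets found in that subtree into folder 3.
conn.execute("""
WITH RECURSIVE subtree(id) AS (
    SELECT id FROM folders WHERE id = 1
    UNION ALL
    SELECT f.id FROM folders f JOIN subtree s ON f.parent_id = s.id
)
UPDATE assets
SET folder_id = 3
WHERE folder_id IN (SELECT id FROM subtree) AND archived = 1
""")

in_dest = [row[0] for row in conn.execute(
    "SELECT id FROM assets WHERE folder_id = 3 ORDER BY id")]
print(in_dest)  # asset 10 moved in; 11 wasn't archived; 12 was already there
```

The point isn’t that this query is hard, it’s that the model had to discover the schema and the recursive traversal pattern on its own from a plain-English description.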
