

Model Mafia

In the world of AI dev, there’s a lot of excitement around multi-agent frameworks—swarms, supervisors, crews, committees, and all the buzzwords that come with them. These systems promise to break down complex tasks into manageable pieces, delegating work to specialized agents that plan, execute, and summarize on your behalf. Picture this: you hand a task to a “supervisor” agent, it spins up a team of smaller agents to tackle subtasks, and then another agent compiles the results into a neat little package. It’s a beautiful vision, almost like a corporate hierarchy with you at the helm. And right now, these architectures and their frameworks are undeniably cool. They’re also solving real problems as benchmarks show that iterative, multi-step workflows can significantly boost performance over single-model approaches.

But these frameworks are a temporary fix, a clever workaround for the limitations of today’s AI models. As models get smarter, faster, and more capable, the need for this intricate scaffolding will fade. We’re building hammers and hunting for nails, when the truth is that the nail (the problem itself) might not even exist in a year. Let me explain why.

Where Are All the Swarms?

Complex agent architectures are brittle. Every step in the process—every agent, every handoff—introduces a potential failure point. Unlike traditional software, where errors can often be isolated and debugged, AI workflows compound mistakes exponentially. If one agent misinterprets a task or hallucinates a detail, the downstream results may not be trustworthy. The more nodes in your graph, the higher the odds of something going wrong. That’s why, despite all the hype, we rarely see swarm-based products thriving in production. They’re high-latency, fragile, and tough to maintain.
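To see how quickly those failure points stack up, here's a back-of-the-envelope sketch. It assumes, purely for illustration, that each agent or handoff succeeds independently with a fixed probability:

```python
def pipeline_success_rate(p: float, steps: int) -> float:
    """Probability that every step in a linear agent workflow succeeds,
    assuming each step succeeds independently with probability p."""
    return p ** steps

# A step that works 95% of the time looks solid in isolation...
print(round(pipeline_success_rate(0.95, 1), 2))  # 0.95
# ...but chain six such steps and roughly one run in four fails somewhere.
print(round(pipeline_success_rate(0.95, 6), 2))  # 0.74
```

Real agents aren't independent coin flips, of course, but the direction of the math holds: every extra node multiplies in another chance for the whole run to fail.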

Let's use software development as an example, since it's the field I know best. Today's agent workflows often look like this: a search/re-ranking agent scours your code repo for relevant files to include in the context window, a smart planning agent comes up with the approach and breaks it into tasks, one or more coding agents write the code, a testing agent writes the tests, and a PR agent submits the pull request (maybe with a PR review agent thrown in for good measure). It's a slick assembly line, but every step exists because current models can't handle the whole job alone.

  • Search and re-ranking: This is only necessary because context windows are too small and it's too expensive to ingest an entire repo. It's also the step most susceptible to failure, because the model that is smart enough to plan the task should also be the one deciding which files are relevant. A context window increase and a price decrease will make this step obsolete.
  • Planning and task breakdown: The main value of this step is that your smartest model can give direction to smaller, less capable, but cheaper and faster models. There's no need for a formalized plan when models can do all of their planning inside their own reasoning process. The only other reason I can think of to have subtasks here is that a model may not be able to output enough tokens to solve the entire problem in one go. An output token limit increase and a price decrease will make this step obsolete.
  • Testing and PRs: Why separate these? A model that's capable of planning is capable of writing the code to test that plan, as long as it fits inside the output token limit. This step could be replaced by simply returning the test results to the single agent so it can act on them. That's feasible today! It's just expensive to run an agent loop with the entire codebase as context.
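Collapsing the assembly line above into a single loop is simple in principle. Here's a minimal sketch, where `call_model` and `run_tests` are hypothetical stand-ins for an LLM call and a test runner, not any real API; they're stubbed below so the example is self-contained:

```python
from typing import Callable

def agent_loop(call_model: Callable[[str], str],
               run_tests: Callable[[str], str],
               task: str, max_iters: int = 5) -> str:
    """One agent with the whole task in context: write code, run the tests,
    feed failures straight back to the same model. No planner, no separate
    testing agent, no PR agent."""
    code = call_model(task)
    for _ in range(max_iters):
        results = run_tests(code)
        if "FAILED" not in results:
            break  # tests pass; nothing left to delegate
        # The handoff becomes a prompt: same model, plus the test output.
        code = call_model(f"{task}\nYour last attempt failed:\n{results}")
    return code

# Stub model: the first attempt has a bug, the second fixes it.
attempts = iter(["def add(a, b): return a - b",
                 "def add(a, b): return a + b"])
final = agent_loop(lambda prompt: next(attempts),
                   lambda code: "ok" if "a + b" in code else "FAILED: add(1, 1) != 2",
                   "Write add(a, b).")
print(final)  # def add(a, b): return a + b
```

The expensive part isn't the loop, it's the token bill for keeping the whole codebase in context on every iteration, which is exactly the constraint that's shrinking.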

The root issue isn’t the workflow, and in most cases, it's not even the model intelligence. Limited context windows, high-priced top-tier models, and token output caps force us to chop tasks into bite-sized pieces. But what happens when those limits start to fade? Imagine even a modest 3x-5x improvement in context window size, price, and output token limits. Suddenly, you don’t need all of your tools, frameworks, and subagents.

Tech Debt

And those constraints are eroding fast. Last year, OpenAI's Assistant API launched with built-in RAG, web search, and conversation memory. It didn't gain much traction for RAG—mostly because RAG isn't a one-size-fits-all solution and devs needed control over their pipelines. Back then, RAG was an exacting science: tiny context windows, dumb and expensive models, and high hallucination risks meant you had to tune your RAG pipeline obsessively to get good results. Nowadays that stuff is much less of an issue. Chunking strategy? Throw in the whole document and let the model sort it out. Top K? F*#% it, make it 20, since prices dropped last month. Bigger context windows, lower prices, caching, and better models have made simplicity king again. Problems I've wrestled with in my own agents sometimes vanish overnight with a model update. That's not an edge case; it's a pattern.
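The before/after is easy to sketch in code. The function names and the toy relevance score below are illustrative assumptions, not any particular library's API:

```python
def word_overlap(question: str, chunk: str) -> int:
    """Toy relevance score: count shared lowercase words.
    (Real pipelines used embeddings and re-rankers for this.)"""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def build_prompt_2023(question: str, chunks: list[str], top_k: int = 5) -> str:
    """Old-style RAG: obsess over chunking, rank everything, keep only top_k."""
    ranked = sorted(chunks, key=lambda c: word_overlap(question, c), reverse=True)
    return "\n\n".join(ranked[:top_k]) + f"\n\nQuestion: {question}"

def build_prompt_now(question: str, document: str) -> str:
    """Big-context era: throw in the whole document, let the model sort it out."""
    return f"{document}\n\nQuestion: {question}"
```

The second function is barely worth writing, which is the point: the pipeline that deserved a whole framework two years ago is now a string concatenation.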

The Shelf Life of Agent Architectures

Complex agent architectures don’t last. If you build a six-step swarm today, a single model update could render three of those steps obsolete by year’s end. Then what? AI isn’t like traditional software, where architectures endure for decades. Six months in AI is an eternity—updates hit fast, and they hit hard. Why sink time perfecting fickle but beautiful multi-agent masterpieces when the next AI lab release might collapse them into a single prompt? LangChain, Crew, Swarm—all these tools are racing against a convergence point where raw model power outstrips their utility.

I’m not saying agent architectures are useless now—they’re critical for squeezing the most out of today’s tech. But they’re not evergreen. Simplicity is the smarter bet. Lean on the optimism that models will improve (they will), and design systems that don’t overcommit to brittle complexity. In my experience, the best architecture is the one that solves the problem with the fewest moving parts—especially when the parts you’re replacing get smarter every day.