There's a version of AI-assisted development that sounds like progress but functions more like regression.
A developer sits alone, orchestrating ten agents, each working a different branch of a codebase. Output is high. Pull requests are flying. And somewhere in the process, the collaboration that made software development actually work has quietly disappeared.
Martin Fowler calls this "re-soloing." It's worth taking seriously.
The social architecture of software
Software development went through a shift in the early 2000s that was as much cultural as technical. The lone programmer in an office with the door closed — pizza slid under it, no interaction required — gave way to pair programming, mob programming, daily standups, and the general acknowledgment that humans working closely together produced better software than isolated individuals optimizing in parallel.
That wasn't just process ideology. It worked. Shared understanding of a codebase, collaborative design decisions, the social pressure of having a teammate review your logic — these dynamics produced higher-quality outcomes and distributed knowledge across teams.
The concern with multi-agent orchestration is that it recreates the old solo model in new clothes. "Now, instead of 50 people on my team, I can have five and they don't have to talk to each other, and each can have 10 agents," Fowler noted in a recent discussion. He's skeptical this is the same thing as a functioning team, even if output metrics look similar.
Kent Beck — who pioneered test-driven development and pair programming — adds a different dimension. His experience pairing with two humans and one or more AI assistants has been positive. But he draws a clear line between that and the model of a single developer managing a fleet of agents. Managing six tools simultaneously is not the same as having a conversation with a colleague who sees things differently or brings a different energy to the work. The social element isn't decoration. It's part of how good decisions get made.
The mech suit argument
David Heinemeier Hansson, creator of Ruby on Rails, offers a more optimistic frame — and his views have shifted considerably as the models have improved.
A few months ago, DHH's concern was that agentic coding would promote developers into project managers. He wasn't interested in being a manager of AI agents. But as he's worked more with multi-agent setups using current models, the experience feels less like management and more like what he describes as wearing a mech suit. You're still in control, still doing the work — but you're operating at a higher power level. The agency feels like yours.
Senior engineers at 37signals, DHH reports, are gaining significantly more from these tools than junior developers. That's not a coincidence. Validating whether an agent's output is production-ready requires the kind of judgment that comes from having written a lot of production code. You have to have seen failure modes to recognize when a generated solution is introducing one.
The junior developer problem
This is where the re-soloing concern and the mech suit optimism collide.
If senior engineers benefit most from agentic tools, and agentic tools reduce the need for junior developers to write foundational code, then the pipeline that produces senior engineers may be at risk. You become a senior engineer partly by making junior-level mistakes under the supervision of people who catch and explain them. That feedback loop is different when an agent is generating the first draft.
A Stanford study published in late 2025 found that employment among software developers aged 22 to 25 fell nearly 20% between 2022 and 2025 — a period that coincides with the rapid adoption of AI coding tools. That's a data point, not a policy conclusion. But it's worth holding alongside the optimism about productivity gains.
Martin Fowler's instinct is that two-pizza teams won't shrink to one-pizza teams — they'll stay the same size and become more capable. He may be right. But the conditions that produce that outcome require intentional management. Teams that celebrate output metrics without monitoring whether junior developers are actually learning are optimizing for the wrong thing.
Avoiding burnout
Fowler offers one practical check: watch for the point where you start producing negative value. That's the signal that it's time to step back. Unhealthy performance metrics — frequency of pull requests, raw lines of code committed — push teams in the wrong direction. The real question is whether the output is working, understood, and maintainable.
The developers most at risk of burnout in a multi-agent workflow aren't the ones moving slowly. They're the ones moving fast without adequate checkpoints, accumulating what researchers are now calling "cognitive debt" — a loss of shared understanding of what the system actually does and why.
Code is easy to generate. Understanding is not. And understanding doesn't transfer automatically when an agent writes the code.
The craft question
DHH puts it directly: if he promotes himself out of programming, he turns himself into a project manager. That's not what he wants. He wants to do the work.
That instinct is worth preserving at the team level too. Using agents to amplify craft is different from using them to replace it. The teams that figure out that distinction — where the agent handles the implementation of well-specified intent, while humans maintain genuine understanding of the system — will come out ahead of the teams that just measure throughput and call it done.