The Growth Loop: How Cheap Failure Changes the Way Developers Learn
There’s a conversation happening in every engineering org right now, usually behind closed doors, about what AI means for junior developers. The optimistic version: AI handles the tedious parts and juniors focus on learning. The pessimistic version: if AI can write the code, why hire someone who’s still learning to write code?
Both versions miss the point. They’re both still thinking about developer growth as “learning to write code.” That was never really what made someone senior.
What actually makes someone senior
Ask a principal engineer what separates a senior developer from a mid-level one and they’ll almost never say “they write better code.” They’ll say things like: they know which problems are worth solving. They can smell a bad abstraction before it causes pain. They understand how their component fits into the larger system. They know when to push back on a requirement and when to just build it. They’ve made enough mistakes to have good instincts about which mistakes matter.
That last one is the key. Good engineering judgment comes from accumulated failure. Not catastrophic failure, not career-ending failure, but the kind of repeated small failures where you try an approach, it doesn’t work for reasons you didn’t anticipate, and you learn something about why. You pick the wrong data structure and hit a performance wall. You design an API that’s painful to consume and have to rework it. You build something that works perfectly in dev and breaks in production because of a concurrency issue you’d never encountered before — the kind of hidden gap between code and reality that only experience teaches you to look for.
Each of those failures deposits a little bit of judgment. Over years, the deposits compound into the thing we call “senior.”
The problem has always been that failures were expensive. When building something takes weeks, you can’t afford to let people build the wrong thing on purpose. So we front-loaded the teaching: code reviews, design reviews, pair programming, mentorship. All of it trying to transfer judgment before the junior developer needed it, which is roughly as effective as teaching someone to swim by describing water.
The reps just got cheap
Here’s what changed. When a prototype costs hours instead of weeks, the economics of learning by doing shift dramatically.
A junior developer can now build three different approaches to a problem in the time it used to take to build one. They can try the naive solution, see where it breaks, try a more sophisticated solution, see where that breaks, and then try a third approach informed by what they learned from the first two. That cycle, try-fail-learn-try-again, is the fundamental unit of engineering growth. AI didn’t invent it. AI made it affordable.
Think about what this means concretely. A mid-level developer is tasked with designing a caching layer. In the old world, they’d research approaches, pick one, build it over a week or two, and discover during load testing that their invalidation strategy doesn’t hold up. They’d rework it, probably with guidance from a senior developer, and eventually ship something that works. One cycle. Two weeks. One lesson learned.
In the new world, that same developer spends a day building rough versions of three caching approaches with AI assistance. They test each one against realistic scenarios. They discover the invalidation problems in approach A, the memory problems in approach B, and that approach C works but has operational complexity they hadn’t considered. Three cycles. One day. Three lessons learned.
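To make the exploration concrete, here’s a minimal sketch of the kind of throwaway prototype in play: a naive TTL-based cache, the sort of “approach A” whose invalidation weakness only shows up once you exercise it. This is an illustrative sketch, not a recommended design; all names are made up for the example.

```python
import time


class TTLCache:
    """Naive time-to-live cache: entries expire ttl seconds after being set.

    Cheap to prototype -- and cheap to discover its failure mode: between
    a write to the backing store and expiry, readers see stale data unless
    every write path remembers to call invalidate().
    """

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

    def set(self, key: str, value: object):
        self._store[key] = (time.monotonic() + self.ttl, value)

    def invalidate(self, key: str):
        # The explicit step that is easy to forget at some write site --
        # exactly the kind of gap a quick load test surfaces in hours
        # instead of weeks.
        self._store.pop(key, None)
```

Building three rough variants like this (TTL, write-through, event-driven invalidation) and testing them side by side is what turns one two-week lesson into three one-day lessons.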
The lesson density per unit of time just went up by an order of magnitude. That’s not a productivity improvement. It’s a growth rate improvement.
I watched this happen recently in the innovation lab I help lead. A junior developer was trying to figure out the right orchestration approach for a multi-agent ML workflow. He’d spent days reading articles comparing different approaches, stuck in research paralysis, trying to pick the right one before building anything.
I suggested he just build all of them — the same prototype-first approach I advocate for architecture decisions. With AI assistance, he prototyped graph-based orchestration, sequential pipelines, and an event-driven approach in the time it would have taken to finish his research. Through that process he identified that the graph-based approach was the right fit for the problem’s complexity. I pointed him toward articles that articulated the trade-offs more clearly, but even with that reading material, the prototyping was the thing that made it click.
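A graph-based orchestration prototype doesn’t need a framework to be explorable. Here’s a toy sketch of the shape of the thing he built: steps as nodes, dependencies as edges, execution as a topological walk over the graph. Everything here (node names, the `run_graph` helper) is hypothetical, just enough to feel where the approach bends or breaks.

```python
from collections import deque


def run_graph(nodes, edges, inputs):
    """Run a tiny dependency-graph workflow.

    nodes: {name: callable(dict) -> dict} -- each step reads named values
           from a shared context and returns new ones.
    edges: {name: [upstream names]} -- dependencies per node.
    """
    # Kahn's algorithm: a node runs once all of its dependencies have run.
    indegree = {n: len(edges.get(n, [])) for n in nodes}
    downstream = {n: [] for n in nodes}
    for n, deps in edges.items():
        for d in deps:
            downstream[d].append(n)

    ready = deque(n for n, deg in indegree.items() if deg == 0)
    context = dict(inputs)
    while ready:
        n = ready.popleft()
        context.update(nodes[n](context))
        for m in downstream[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    return context


# A three-step workflow: fetch -> clean -> report.
nodes = {
    "fetch": lambda ctx: {"raw": [1, 2, 3]},
    "clean": lambda ctx: {"clean": [x * 2 for x in ctx["raw"]]},
    "report": lambda ctx: {"report": sum(ctx["clean"])},
}
edges = {"clean": ["fetch"], "report": ["clean"]}
result = run_graph(nodes, edges, {})
```

The sequential-pipeline and event-driven variants are similarly small to rough out, which is the whole point: the comparison costs a day, not three research-paralyzed weeks.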
The unlock wasn’t more research. It was making failure cheap enough to try multiple approaches in the time it would have taken to decide on one. He didn’t need permission to be wrong. He needed permission to try all of them.
Seniors become reviewers, not gatekeepers
This changes the senior developer’s role in a way that most orgs haven’t caught up to yet.
The traditional model: senior developers review code after it’s written, catching problems late. They attend design reviews where junior developers present approaches and seniors poke holes. They do pair programming sessions where, if we’re being honest, the senior developer is mostly just typing while the junior watches. The knowledge transfer is real but slow, and it’s bottlenecked by senior developer availability.
The new model: senior developers review multiple approaches that a junior developer has already built and tested. The conversation shifts from “here’s why that won’t work” (which the junior developer hasn’t experienced yet) to “you saw that approach A failed because of X. Here’s the deeper principle behind why that failure mode exists, and here are the other places you’ll encounter it.”
That’s a fundamentally different conversation. The junior developer has already felt the failure. The senior developer is now contextualizing it, connecting it to broader patterns, accelerating the transition from “I know this specific thing doesn’t work” to “I understand why this class of things doesn’t work.” The learning sticks because it’s anchored to lived experience, not abstract instruction.
And the senior developer’s time is used better. Instead of gatekeeping every design decision (which doesn’t scale), they’re reviewing the output of exploration that’s already happened. They spend less time preventing mistakes and more time extracting maximum learning from mistakes that already occurred.
There’s a norm in consulting that captures this well: struggle with a problem for 15 minutes, then escalate to a senior. The intent is right: don’t spin indefinitely, but don’t skip the struggle either, because the struggle is where the learning happens.
AI doesn’t change the rule. It changes the content of those 15 minutes. Instead of thinking about what might work, the developer is trying approaches with AI assistance. By the time they reach the senior, they’re not arriving with an open question. They’re arriving with evidence: “I tried X and Y, here’s what I found, I think Z might be the answer. Does that hold up?” Even better, after they’ve sent the message saying “hey, I need to chat,” they don’t have to wait passively. They keep trying approaches. By the time the senior shows up, they might have already found the answer, or they’ve narrowed the question enough that the senior’s time is used far more efficiently.
I’ve noticed this shift in my own mentorship conversations. Someone is talking through a problem, raising possible approaches, trying to figure out what’s right. Rather than scheduling a follow-up or saying “let me think about that,” I type the question into a coding assistant. Within seconds there are draft answers on the table. Not final answers. Starting points. But it changes the conversation from “let’s think about what might be true” to “here’s a hypothesis, let’s pressure-test it right now.” The developer stays in motion. Judgment gets applied to something concrete immediately rather than something theoretical later.
People are figuring out they can take a swing at problems without imagining the scenario first. Type the question, get a draft, push on it together. That’s the feedback loop collapsing within the conversation itself, not just within the task.
That’s mentorship, not management.
The failure portfolio
I want to introduce a concept here: the failure portfolio. Every developer should be building one, and every engineering manager should be enabling it.
A failure portfolio is a structured record of approaches tried, results observed, and lessons extracted. Not a postmortem doc that nobody reads. A personal engineering journal where a developer documents: I tried X. It failed because Y. The underlying principle is Z. Next time I encounter a similar situation, I’ll look for these signals.
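One lightweight way to keep entries uniform is to give them a fixed shape. A sketch, using only the fields named above; the class and field names are illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field


@dataclass
class FailureEntry:
    """One failure-portfolio entry: tried X, it failed because Y,
    the principle is Z, and these are the signals to watch for."""

    tried: str                # the approach attempted
    failed_because: str       # the observed failure mode
    principle: str            # the generalization extracted
    signals: list[str] = field(default_factory=list)  # what to look for next time

    def to_markdown(self) -> str:
        lines = [
            f"**Tried:** {self.tried}",
            f"**Failed because:** {self.failed_because}",
            f"**Principle:** {self.principle}",
            "**Signals next time:**",
            *[f"- {s}" for s in self.signals],
        ]
        return "\n".join(lines)
```

Rendering entries to markdown keeps the journal readable by humans while the structure keeps it searchable later, when a familiar signal shows up in a new problem.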
AI makes this practical because the cost of generating entries is so low. When every task is an opportunity to try multiple approaches, the failure portfolio fills up fast. A junior developer with six months of AI-assisted work, who’s been deliberate about capturing their failure portfolio, may have accumulated more learning cycles than a developer with two years of traditional experience.
This isn’t theoretical. I’ve seen it on teams where we’ve been intentional about this. The developers who use AI as a strategic advantage — exploring broadly and documenting what they learn — are developing judgment noticeably faster than their peers who use AI only to generate the solution they already had in mind.
The calibration problem
Now here’s the counterargument, and it’s legitimate: if AI writes most of the code, how does a junior developer develop the mechanical skill of writing code at all?
This matters. There’s a level of syntactic fluency and language familiarity that you only get from writing code yourself, and a developer who can’t write a function without AI assistance has a real problem. Not because AI might someday be unavailable, but because debugging, reading other people’s code, and understanding error messages all require a baseline of mechanical skill that you can’t shortcut.
The answer isn’t “don’t use AI.” It’s calibration. Junior developers should be writing code by hand for fundamentals: algorithms, data structures, core language features, basic patterns. They should be using AI for exploration and acceleration in areas where the learning comes from seeing the result, not from typing the characters.
It’s the same reason experienced engineers can still read logs and debug from a terminal even with full observability dashboards available. The manual troubleshooting built the intuition to recognize when the dashboard is hiding the real problem. You don’t do everything from the command line. But you need to be able to.
What this means for engineering managers
If you’re leading a team, the practical implications are:
Structure exploration time. Give developers explicit time and permission to try multiple approaches. Don’t just measure “did they ship the feature.” Measure “did they explore the solution space before converging.”
Invest in review, not gatekeeping. Shift your senior developers’ time from up-front design reviews (where they prevent exploration) to post-exploration reviews (where they extract learning from exploration that’s already happened).
Build the failure portfolio culture. Make it normal to document failed approaches alongside successful ones. In sprint reviews, ask “what did you try that didn’t work?” as often as “what did you ship?”
Calibrate the AI assist level. For junior developers, be intentional about which tasks should involve AI and which should be done by hand. The goal is maximum learning, not maximum output.
The teams that develop talent fastest in the next five years won’t be the ones that use AI the hardest. They’ll be the ones that use AI most deliberately as a learning accelerator, treating every task as an opportunity for the kind of repeated, low-cost failure that builds real engineering judgment.
Cheap failure is the best teacher we’ve ever had. The question is whether your organization is structured to take advantage of it.
This is the third article in the “Rewiring the Feedback Loop” series on how AI compresses feedback loops across software delivery.