You gave your team Copilot licenses six months ago. PRs are flying and lines of code are up. Everyone feels productive.
Then you check the incident log. Bugs are up too. Code review is taking longer. Your best senior engineer just mass-reverted a junior's AI-generated PR at 2am because nobody understood what it did.
AI coding tools aren't making your team better. They're making your team faster at creating problems.
The productivity gains are real, but not where you think
Let's be clear. AI does speed things up in certain cases. Documentation tasks take half the time. Tests get written faster. Boilerplate that used to eat your afternoon now takes minutes.
Here's where it gets weird.
A study by METR tracked experienced developers working on codebases they'd maintained for years. These weren't newbies fumbling with unfamiliar repos. These were experts doing real tasks in their own code. The developers using AI tools like Cursor took 19% longer to complete their work.
The kicker is that those same developers believed they were 20% faster.
That's a 39-point perception gap. Engineers thought they were crushing it while actually falling behind. This is the danger nobody talks about. You can't trust vibes and you can't trust self-reports. The tool that feels like a superpower might be slowing you down on the work that actually matters.
AI is a power tool for grunt work, but it's a liability for judgment calls.
The skill atrophy problem is already here
Addy Osmani, engineering lead at Google Chrome, documented testimonials from developers experiencing skill decay. One engineer with 12 years of experience said AI made him "worse at his own craft." First he stopped reading documentation, then his debugging skills waned, and then he lost deep comprehension of his own systems. His words: "I've become a human clipboard."
This isn't hypothetical. Osmani surveyed CTOs and found that 16 of 18 reported production disasters directly caused by AI-generated code that nobody understood.
The skills you don't use are the skills you lose. AI accelerates the forgetting.
Juniors face a different problem: skills that never form
Skill atrophy assumes you had skills to begin with. For junior developers, the problem is worse because they may never develop foundational abilities in the first place.
When a junior uses AI to write code they don't understand, they skip the struggle that builds intuition. They never learn to read stack traces. They never internalize why certain patterns exist. They ship code faster while building a house on sand.
The research calls this the "trust without verification" anti-pattern. Juniors accept AI output without the experience to know when it's wrong. They can't debug what they didn't write, and they can't extend what they don't understand. After all, why argue with the averaged knowledge of the internet when you only recently learned what a unit test is?
Be careful: "junior" doesn't just mean entry-level engineers. It can also describe a senior 20 years into their career who suddenly has to be productive in an unfamiliar technology. No one is safe.
What do we do?
What can you do in the face of something this pervasive? A ban on these tools wouldn't be effective, or even possible. I'm not going to pretend I have a perfect solution. Instead, I'll share what I know works as of this writing.
Your quality gates matter more than ever
Google's 2024 DORA report quantified something uncomfortable. For every 25% increase in AI adoption, teams saw a 7.2% drop in delivery stability. More AI meant more incidents.
The DORA report's core finding matters here: AI doesn't fix a team, it amplifies what's already there. Teams with strong testing and review practices use AI to move faster while staying stable. Teams with existing dysfunction find that AI just intensifies their problems.
This is why quality gates aren't optional anymore. You need automated checks that catch issues before humans even look at the PR.
What actually works:
Set minimum 70-80% test coverage gates in CI. These should be hard gates that block merges, not warnings developers learn to ignore.
Run security scans that fail the build on critical findings. SAST should run on every PR.
Enforce small PR requirements. AI encourages mega-PRs because generating code is cheap, but big PRs hide bugs. Fight the urge to batch everything together.
Use type checking everywhere. If your language supports it, enforce it in CI.
Treat AI output like code from an enthusiastic but inexperienced contributor. Review everything.
Support a culture of learning. Start book clubs, invest in real mentorship, and pair program.
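As one illustration, the small-PR gate above can be scripted directly in CI. This is a minimal sketch, not a standard: the function name check_pr_size and the 400-line threshold are my own illustrative choices, and you'd tune both to your team.

```shell
#!/bin/sh
# Hypothetical small-PR gate for CI. Sums added + deleted lines from
# `git diff --numstat` output (tab-separated: added, deleted, path)
# and fails the build above a threshold. Threshold is illustrative.
check_pr_size() {
  awk -v max=400 '
    { s += $1 + $2 }
    END {
      if (s > max) {
        print "PR too large: " s " changed lines"
        exit 1
      }
      print "PR size OK: " s " changed lines"
    }'
}

# In a real pipeline you would pipe the actual diff against your main branch:
#   git diff --numstat origin/main...HEAD | check_pr_size
# Simulated numstat output for demonstration:
printf '120\t30\tsrc/app.py\n15\t5\ttests/test_app.py\n' | check_pr_size
# prints "PR size OK: 170 changed lines"
```

The same pattern extends to the other gates: run your coverage tool and type checker as blocking CI steps with a nonzero exit code on failure, rather than surfacing them as advisory warnings.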
The uncomfortable truth
The teams winning with AI aren't the ones generating the most code. They're the ones who kept their engineering fundamentals while selectively speeding up the boring parts.
Google now generates over 30% of their code with AI, but their internal guidance emphasizes "maintaining rigor" in code review, security analysis, and long-term maintenance. The humans review everything before it ships. The AI is a tool, not an autopilot.
AI doesn't replace engineering judgment. It just makes the consequences of skipping it arrive faster.