AI coding is a trust problem
I wrote about AI coding a while back. How the problem isn't that it's bad, it's that it doesn't know when it's bad. You have to stop trusting what AI says and make it prove that it worked.
I've been thinking more about why that's the pattern. And I keep coming back to trust.
Confidence without competence
The thing about AI coding tools is they're always confident. Every response is delivered with the same tone. "I've fixed the bug." "Here's the solution." "This should work now."
Except it often doesn't work. The bug moved instead of getting fixed. The solution breaks something else. "Should work" means "might work, I have no way to actually check."
I've seen this pattern before. Not in AI. In junior developers.
The junior who says "it's done" when it compiles but they haven't tested it. The one who confidently explains a solution they half-understand. The one who makes changes without checking if they broke anything else.
This isn't a character flaw. It's a skill gap. They don't know what they don't know. And until they've been burned enough times, they can't distinguish between "I think this works" and "I've verified this works."
AI tools have the same gap, permanently. They'll never develop the instinct that comes from being woken up at 3am because your "fix" took down production. They can't learn from consequences they don't experience.
The feedback loops you build for AI
So what do you do? You build feedback loops.
Tests that run automatically. Verification steps that check results. Review processes that catch regressions. You don't trust the AI's word. You make it prove its work.
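The loop can be as simple as a gate that runs the test suite after every AI-proposed change and reverts anything that fails. A minimal sketch (the function names, the default `pytest` command, and the apply/revert callbacks are all illustrative assumptions, not any particular tool's API):

```python
import subprocess

def verify_claim(test_command=("pytest", "-q")):
    """Run the test suite and return whether it actually passed.

    Trust the exit code, not the claim: "I've fixed the bug" only
    counts if the tests agree.
    """
    result = subprocess.run(test_command, capture_output=True, text=True)
    return result.returncode == 0

def accept_change(apply_change, revert_change, test_command=("pytest", "-q")):
    """Apply an AI-proposed change, but keep it only if verification passes."""
    apply_change()
    if verify_claim(test_command):
        return "accepted: tests passed"
    revert_change()
    return "rejected: tests failed, change reverted"
```

The point isn't the specific commands. It's that acceptance is decided by an exit code, not by whatever the tool said about its own work.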
This sounds obvious when you say it about AI. But these are the same feedback loops that bad teams are missing.
A team without code review is trusting people's word that their code works. A team without tests is trusting that changes don't break things. A team without deployment verification is trusting that production is fine.
That trust is often misplaced. Not because people are dishonest, but because confidence and competence aren't the same thing. People genuinely believe their code works. They're often wrong.
"It said it fixed it" is the same as "we have a great culture"
I wrote about assumed trust versus tested trust in organizations. How "we have a great culture" often means "we haven't been stressed yet." The trust feels real but it's never been verified.
"The AI said it fixed the bug" is the same pattern. It feels like progress. The confident response suggests competence. But until you verify, you don't actually know.
I've watched developers spend hours in loops with AI. The AI says it fixed something. The developer takes that at face value. Something else breaks. The AI says it fixed that too. Loop continues. Three hours later they're mass-reverting commits, having accomplished nothing except burning time.
The problem isn't the AI. The problem is trusting without verifying. The same thing happens in organizations. Leadership says the strategy is clear. Teams take that at face value. Execution diverges. Leadership says they've realigned. Loop continues. Six months later everyone's confused and the strategy has drifted far from where it started.
The missing loops in bad teams
I keep thinking about what makes good teams good. A lot of it is feedback loops.
Code review catches bugs, sure. But it also verifies that the developer's confidence matches reality. "I think this is good" gets checked against another person's judgment.
Retrospectives catch process problems. But they also verify that the team's perception matches reality. "We're doing great" gets pressure-tested.
One-on-ones catch individual issues. But they also verify that the manager's understanding matches the team member's actual experience.
Bad teams skip these loops or do them performatively. Code reviews that are rubber stamps. Retrospectives where nobody says anything real. One-on-ones that are pure status updates with no room for honest feedback.
The result is the same as trusting AI without verification. Confidence without competence. Assumed understanding that's never been tested. And then surprise when things break in ways nobody saw coming.
What I've learned from AI about teams
Working with AI has made me more paranoid about verification in general. Not just with AI tools, but with people and organizations.
When someone says "it's done," I ask how they verified it. Not because I don't trust them, but because I've learned that confidence isn't evidence.
When a team says "we're doing great," I ask what metrics tell them that. Not because I think they're lying, but because I've seen how easily perception diverges from reality.
When leadership says "the strategy is clear," I ask how they know teams understand it. Not because I doubt their intentions, but because I've watched clear strategies become confused execution.
AI is teaching a generation of developers that verification matters more than confidence. That you can't trust the word, you have to trust the proof. I wonder if that lesson will spread beyond AI tools into how we build teams and organizations.
What feedback loops do you rely on? What happens when you skip them? Have you noticed yourself becoming more verification-focused after working with AI?