You became the worst manager you ever had, and you called it prompt engineering

About two years ago, something split the industry down the middle. Some developers grabbed AI tools immediately and started building with them. Others looked at the output and recoiled. Both sides were frustrated with each other. The adopters thought the skeptics were stuck in the past. The skeptics thought the adopters had lost their minds.

I was in the middle of it, watching colleagues I respected land on opposite sides. And for a while I couldn't figure out why the divide was so sharp. These were all smart, experienced people. They just couldn't agree on what they were looking at.

I think I finally understand it. AI made us replay every fundamental lesson software engineering already learned, compressed into about two years. And at every stage, people genuinely forgot the lessons they'd spent careers internalizing.

Line by line, just like 1960

Think about how AI coding tools progressed. They started by generating a single line of code. You had to give exact surrounding context. Tell it what helpers to use, what interfaces to follow, what variable names existed in scope. Then it could do a block. Then a function. Then a class. Then whole modules.

That progression mirrors the actual history of programming. Assembly gave you individual instructions. Procedures let you group them. Structured programming gave you control flow. OOP gave you encapsulation. Modules gave you boundaries. Decades of evolution, replayed in months.

And at each stage of AI "growth," we were re-teaching people core programming concepts. Specify the interface. Use the existing service. Don't duplicate logic. Don't inline something that already exists as a helper. Break this into smaller pieces. Combine these pieces, they belong together.

It was as if stepping into AI erased twenty years of muscle memory. Developers who would never copy-paste a utility function were happily watching AI regenerate the same logic in three different files. People who preached DRY in code reviews were accepting massive duplication because the AI was fast. The principles didn't change. But somehow the tool made people forget them.

The fundamentalists were watching

This is why the divide was so sharp. Experienced engineers watched colleagues abandon every principle they'd spent decades building as shared knowledge. And things still "worked," which made it worse. Code shipped. Features landed. Demos looked great.

But underneath, the code was a mess. No separation of concerns. No consistent abstractions. Duplicated logic everywhere. Tight coupling to implementation details that would change next week.

The fundamentalists weren't wrong to be concerned. They were seeing real violations of real principles. Principles that exist because people spent decades learning, the hard way, that ignoring them creates unmaintainable systems.

But they missed something. It was a learning curve, not a permanent state. The adopters weren't abandoning software engineering. They were rediscovering it, in fast-forward. Give them six months and they'd start demanding the same things they always demanded. Better structure. Clear interfaces. Proper testing. They just had to relearn why those things mattered in a new context.

From specs to needs

The requirements evolution followed the same arc.

Early AI usage was hyper-prescriptive. You'd write a prompt that was basically pseudocode. "Create a function called calculateTotal that takes an array of LineItem objects, each with price and quantity fields, and returns the sum of price times quantity for each item." You told it exactly what to write, where to put it, what patterns to follow. Constant reminders about context. Don't forget the error handling. Use the existing logger. Follow the repository pattern we set up.
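That prompt maps one-to-one to code, which is exactly the point: you were writing the implementation, just in English. A sketch of what it asked for might look like this (the `LineItem` shape is an assumption based on the prompt's own wording):

```typescript
// Hypothetical shape implied by the prompt: "LineItem objects,
// each with price and quantity fields".
interface LineItem {
  price: number;
  quantity: number;
}

// The function the prescriptive prompt spelled out: sum of
// price × quantity across all line items.
function calculateTotal(items: LineItem[]): number {
  return items.reduce((total, item) => total + item.price * item.quantity, 0);
}
```

Nothing here required intelligence to produce. The prompt already contained the design; the model was only transcribing it.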

This is waterfall. Detailed specs handed to a coder who executes them mechanically. And it worked okay for small, well-defined tasks. Just like waterfall worked okay for small, well-defined projects.

Then something shifted. The AI started performing better when you described what you wanted to achieve, not how to achieve it. "Users need to see their order total update in real time as they change quantities." Define the outcome. Let it figure out the path.

This is the waterfall-to-agile transition. And just like in the real world, the best results came from collaboration. Good product thinking (what's the actual need), good design (what's the experience), good engineering (what's the solution), working together. Not throwing specs over the wall.

People who'd spent years fighting waterfall in their organizations were unconsciously recreating it in their prompts. And people who understood agile instinctively wrote better prompts because they focused on outcomes over implementation details.

Prompt engineering followed the same curve. For about 18 months, how you talked to the model was the critical skill. Exact phrasing. Token positioning. People built careers around crafting the perfect prompt. Then the models improved, the abstraction moved up, and the skill that mattered shifted from how you talk to the machine to what you ask it to do. Same thing that happened when compilers made assembly optional. The low-level skill didn't become worthless, but it stopped being the bottleneck.

We became the managers we hated

Then came the context trap.

CLAUDE.md files. AGENTS.md. Massive system prompts. Rules files with hundreds of lines of instructions. "You must provide as much context as possible" became gospel. Every project got a wall of text explaining the codebase, the patterns, the conventions, the history, the preferences, the edge cases.

Then the research started coming in. A study on repository context files found that AGENTS.md files actually reduced task success rates compared to giving the agent no context at all. Worse results, and 20% higher inference costs. The recommendation? Delete them. Or at minimum, strip them down to only the most essential instructions. A separate study on agent skills found the same thing from a different angle: focused packages with two or three modules outperformed comprehensive documentation every time.

The parallel writes itself. We became the micromanager who drowns their team in irrelevant specs, outdated docs, and constant interruptions. The manager who can't let go. Who provides so much "helpful context" that the developer can't think clearly about the actual problem. Who adds a fifteen-page onboarding doc that nobody reads because by page three it contradicts page one.

Turns out the same principle applies to AI. Too much context pollutes the signal. Outdated information drives solutions in the wrong direction. Contradictory instructions cause unpredictable behavior. Just like it does with people. The best managers give clear goals, relevant context, and room to work. The best prompts do the same thing.

We spent decades learning that. Then we forgot it the moment we had someone who couldn't push back.

We keep replaying the tape

The pattern keeps showing up everywhere you look.

Agent swarms are the new microservices. We're splitting single prompts into networks of specialized agents. One for planning, one for coding, one for testing, one for review. And we're discovering the same problems. Coordination overhead. State synchronization issues. Agents duplicating work or contradicting each other. The "maybe we over-split" regret. It's the monolith-to-microservices arc, replayed in months instead of years.

Vibe coding is the RAD era all over again. Fast to build, exciting demos, impossible to maintain. VB6 and Delphi gave us rapid application development in the 90s. Drag a button, wire an event, ship it. It felt like the future. Until you had to maintain it, extend it, or hand it to someone else. We learned that lesson. We're learning it again. The prototype that took an afternoon to vibe-code takes a week to untangle when requirements change.

Each of these could be its own article. But the individual parallels matter less than the pattern underneath them. The same lessons keep getting replayed. The same mistakes keep getting repeated. And each time, people act surprised.

The principles were never arbitrary

Software engineering's 60 years of lessons aren't rules someone made up in a textbook. They're patterns that emerge whenever you're managing complexity with limited human attention. Don't repeat yourself. Keep things that change together close together. Give clear instructions and get out of the way. These aren't opinions. They're what happens when you ignore the constraints of human cognition long enough to rediscover them.

AI just proved it faster than anyone expected.

The fundamentalists were right that the principles matter. The adopters were right that the tooling changed everything. The divide was never really about AI. It was about how fast people were willing to relearn things they thought they already knew.

Two years in, the gap is closing. The adopters have relearned the principles. The fundamentalists have adopted the tools. And most of us are somewhere in the middle, watching the next wave of lessons get speed-run in real time. Turns out history rhymes a lot faster when you give it a GPU.