AI Strategy
April 27, 2026

Are We in an AI Bubble? And Would It Be So Bad If We Were?

I spent part of this weekend watching a DW Business Beyond piece asking whether AI is the next dot-com crash. I don't usually find bubble-or-not-bubble debates particularly useful. They tend to attract people who already have a position and want it confirmed. But the piece pointed at something underneath the bubble question that I think matters more.

There are companies doing the responsible work right now. Mapping their processes. Picking narrow problems. Putting senior people in front of the tooling. Building real automation that saves real time without firing the people who understand how the business actually runs. That work is happening. Most of us just can't see it through the noise.

That's the part that's been bothering me.

The gap between the claim and the working system

Two moments from the piece kept replaying in my head.

The first was David Crawford, chairman of Bain's Global Technology, Media and Telecommunications practice. "Unlike some more recent technology disruptions, you can't just deploy the technology and expect to get efficiency. You actually have to do the hard work of mapping out your business process, changing how you want to do it differently."

Then he said the line that did the most work for me.

"This technology, for it to be potent, relies on redesigning the way those employees do their work. And that ends up being a business problem, not a technology problem."

— David Crawford, Bain & Company

If you do that work, Crawford says, companies see 10 to 25% EBITDA gains. If you skip it, you get nothing.

The second was the reporter himself. He kept finding companies that publicly claimed to be using AI or agentic AI in production. When he asked to see it, the answers were the same. "We're still testing." "The product isn't ready." Excuses.

The piece featured BMW as the counter-example. Their inventory agents are real, in production, saving real time. But it took 18 months of process redesign to get there. Eighteen months of mapping workflows, training agents on narrow tasks, building validation layers, working with suppliers. That's the work Crawford was talking about. And it's what Bain's own research acknowledges when it observes that most organizations are stuck in experimentation mode, satisfied with minor productivity gains that haven't delivered significant value.

I've watched this pattern up close in my own work.

A few years ago we ran two AI pilots in the automotive dealership world. The first was a system that read Repair Order invoices and pulled customer and service data automatically, including handwritten technician notes. The second was an AI-powered vehicle intake system that generated stock numbers, decoded VINs through the camera, and used image analysis to flag transport damage before a vehicle was accepted.

From a pure technology standpoint, both worked. The accuracy on the RO reads genuinely surprised us, even on handwriting. The intake system field-tested well. We had real proof that the AI could do the work.

Neither pilot made it to production. The dealerships were acquired, new ownership came in, and the new owners weren't interested in continuing the work. The pilots ended not because the technology failed but because the organizational layer never got built around it. That's the part Crawford keeps emphasizing. The AI is the easy part. The harder work is restructuring the intake process, the staff routines, the way information moves between service writers and accounting, the cultural change that lets the technology actually deliver the value it's capable of producing. That work takes time and organizational commitment. It also takes stable leadership willing to invest in something whose payoff comes after the next quarter.

The technology worked. The organizational layer to receive it was never built.

That experience changed how we approach AI work now. We don't start with the model. We start with the workflow. How does the user actually do this today? What changes for them tomorrow? Who sees what, when, and what do they do with it? The technology question is the last one we answer, not the first.

The DW reporter and Bain's research are describing the same phenomenon from different angles. There's a small group of companies and teams doing the work. There's a much larger group claiming to be doing the work. And the second group's noise is drowning out the first group's signal.

A business problem, not a technology problem

Crawford's line is the one I keep coming back to. It's a business problem, not a technology problem.

The default reaction to AI in most boardrooms has been to treat it as a cost-reduction story. Replace engineers, replace analysts, replace customer service. Cut and pocket the margin. That's a technology framing. It treats AI as a substitute for labor and asks "how cheaply can we run the same business?"

The Crawford framing is different. It treats AI as an occasion to redesign the business itself. What workflows have we always wished we could change but couldn't justify the cost? What products were too complex to build economically before? What service levels were impossible to staff for? Those are business questions, and they're the ones AI is actually well-suited to address.

The companies pursuing that second path aren't getting the press. The ones announcing layoffs and AI-first strategies are. That's not because the responsible work is rare. It's because it doesn't fit the narrative the market is currently rewarding.

The resources we're underutilizing are people

Here's what I think gets lost in the bubble debate.

The resources that are most underutilized in this moment aren't GPUs. They aren't capital. They aren't data centers. They're the engineers, architects, and domain experts who already understand how their businesses work and could be helping redesign those businesses if anyone asked them to.

Instead, those people are being told they're about to be replaced. Or they're watching their companies freeze hiring on the assumption that AI will make their teams obsolete. Or they're being asked to "use AI more" without any guidance on what that means or any redesign of the workflows they operate inside. Companies are buying enterprise licenses for Claude, Copilot, Cursor, ChatGPT, and a half dozen other tools, then waiting for the productivity gains to materialize on their own. Tech hiring has been frozen or contracting for most of the last two years. Engineers I know who would have had multiple offers in 2021 are taking what they can get.

I wrote about a piece of this last fall in a post on how the tech industry's AI excuse is failing workers. Some call it AI washing. Cut the payroll, cite the algorithm. The companies laying off thousands while profits rise are not struggling to imagine what to build next. They found a socially acceptable reason to cut costs and took it.

But there's a quieter version of the same problem that doesn't involve cynicism. Companies that aren't using AI as a layoff excuse are still freezing because they don't know which way the question resolves. They don't know if the licenses they bought will deliver the productivity gains the vendors promised. They don't know if their competitors will figure it out first. So they hold cash, cut headcount as a hedge, and tell their boards they're "AI-first" because that's what the market rewards.

A few months ago I wrote that we're moving too fast with AI and forgetting the people behind the code. The pace of the rollout and the casual framing of human expertise as something AI is about to make obsolete are more about narrative management than about what AI can actually do. The people who would do the Crawford work are the same people the narrative is treating as expendable. That's the contradiction at the heart of this moment.

What human and AI collaboration can actually be

I don't want this post to read as just a critique. I think there's a real opportunity here, and it's the part the noise is hiding.

Amplification: AI extending what one person can do without requiring them to stop being themselves.

Human and AI collaboration, done well, is genuinely amplifying. Not in the conference-talk sense where someone claims they "write all their code with AI now." In the practical sense where a senior engineer who knows what to build uses AI to compress the mechanical parts of building it. A solution architect who understands the domain uses AI to draft the structures and then refines them. A writer with a clear thesis uses AI to stress-test the argument. A small team uses AI to take on work that previously required twice the headcount, not because AI replaced the missing people but because it absorbed the friction that was slowing the existing team down.

That collaboration is real. I see it in my own work and in the work of the engineers I respect most. It's also not limited to professional life. The same pattern applies to learning a new subject, planning a complex trip, working through a difficult decision, helping a kid with a project that's outside your expertise. AI as a collaborator extends what one person can do without requiring them to stop being themselves.

The thing is, none of that fits the dominant narrative. The narrative says AI is here to replace people. The collaboration story says AI is here to amplify what people are already capable of, if we let it. Those are very different futures, and they require very different bets.

I think as a society we're not seeing the second version. Or we're choosing not to see it because the first version is easier to sell. Replacement makes for a clean press release. Amplification requires you to take the time to figure out what you actually want to do better.

So is there a bubble or not?

Honestly, I don't know. Nobody does. The DW piece made the same point. You can identify the conditions that historically produce bubbles. You can't predict when one will pop, or whether the underlying technology will eventually justify the valuations.

What I do know is that the current incoherence is not a stable equilibrium. Companies cannot simultaneously announce AI breakthroughs they can't demonstrate and freeze engineering hiring on the assumption that those breakthroughs are imminent. Both things can't be true. Either AI is delivering production-grade automation right now, in which case companies should be showing it, or it isn't, in which case the hiring freeze is based on a future that hasn't arrived.

The DW reporter's experience suggests it's mostly the second. The Bain research suggests the same. Something has to resolve the gap. Either the claims catch up to reality, or reality corrects the claims.

What we actually need

I'm not rooting for a crash. The dot-com bust took down hundreds of thousands of tech workers between 2001 and 2004, most of whom had nothing to do with the speculation that caused it. A real AI correction would do similar damage, possibly worse given how much of the current spend is debt-financed. Meta's Hyperion data center alone involves about $27 billion in debt issued through a special purpose vehicle, anchored by PIMCO. Across the industry, projections suggest the AI infrastructure buildout will require hundreds of billions in private credit through the rest of the decade. That's not the cash-financed boom of 2021. It's something structurally closer to the telecom collapse that followed dot-com, where dark fiber sat unused for years after the bubble cleared.

But I am rooting for the hype to clear. There's a difference.

The hype is the part that says AI replaces engineers wholesale. The hype is the layoff press releases citing AI as cover for margin optimization. The hype is the buy-the-licenses-and-hope-for-the-best playbook that lets executives report progress without doing the redesign work. The hype is what makes it harder for the responsible work to get oxygen, because every conversation about real AI implementation gets pulled back into the noise.

A correction, however it arrives, would force the question. The companies that can show production AI saving real time and money would be revealed. The companies that have been hiding behind "we're still testing" would be exposed. The hiring freezes would either resolve into actual replacements or be revealed as the hedge they probably are. The collaboration story would have room to be heard.

That clarification has value, even if the path to it is painful.

Closing

The dot-com bust eventually cleared the way for Amazon, Google, and a generation of companies that built sustainable businesses on the infrastructure that had been overbuilt. Something similar can happen with AI, but only if the hype clears first.

The bubble question is the wrong question. The right question is whether we collectively choose to see the version of AI that amplifies people, or whether we let the noise convince us the only version is the one that replaces them.

There are companies and individuals out there choosing the first version. Quietly. Without press releases. Redesigning workflows, working alongside senior people, treating AI as an occasion to do harder things rather than an excuse to do cheaper ones. That work is happening in product teams, in solo practices, in people's personal lives. It's not the loudest voice in the room. It might be the one that matters most.

At WAM DevTech, this is the bet we've been making. Senior architects directing AI tools with precision domain knowledge, not unsupervised AI generation. The name is ours; the underlying idea is just what Crawford described. Do the work, pick the right problem, put senior people in front of the tooling. The results follow. We've also started moving toward result-driven engagements where we can. The client pays for the outcome, not the timesheet. I wrote about that earlier this year in a piece called What Are We Actually Going to Do With the Time? When the work compresses, the pricing model has to follow. Otherwise the gains get pocketed by the consultancy and never reach the client.

The bubble may or may not burst. The hype has to clear either way. And when it does, the version of AI we choose to see will determine what we get to build next.

Inspired by the DW Business Beyond piece "Is AI the next dot-com crash?" which I'd recommend watching: https://youtu.be/c5tdpOVrtdA


Senior architects. AI-accelerated. Real results.

WAM DevTech's AI-Accelerated Code Intelligence methodology pairs senior architectural judgment with AI to deliver enterprise-grade software at compressed timelines and lower cost.

Jae S. Jung is the President of WAM DevTech, Inc., a consulting and development firm specializing in AI-accelerated software development, legacy system modernization, and enterprise architecture. With nearly 30 years of experience building and leading distributed development teams, he helps organizations navigate the intersection of technical infrastructure and operational effectiveness.
