David Monnerat

Dad. Husband. Product + AI. Generalist. Endlessly Curious.

The Illusion of Intelligence: How We’re Still Missing the Promise of AI


When I started my first role as a product manager, my portfolio included solutions built with AI. I leveraged my product experience and joined forces with a team of data scientists as we sought to tackle complex problems with data.

It was a great time to be in the space. Everyone wanted to work with AI, and the promise of an AI-driven future was highlighted at every quarterly meeting and town hall. There were piles of data and just enough maturity in the tools and teams to start developing and deploying powerful algorithms at scale.

But progress was much slower than most people anticipated.

We stood in front of the piles of data without a shovel, unable to make efficient use of the resource because it was in the wrong format or inside an inaccessible system. Sometimes, we’d discover too late that our intuition about the relevance of the data in a particular pile was wrong, and the data wasn’t useful to solve a particular problem.

To overcome those challenges, we’d look for more data in other piles, or we’d generate additional data by adding telemetry to our systems to close the gaps in coverage.

If we couldn’t assemble what we needed to solve that particular problem, we’d try to reshape the problem, or we’d move on to the next problem to solve. But if we happened to find what we needed, we ran into our next hurdle: armchair data scientists—people who watched a demo, skimmed an article, and came away convinced they knew how to build the model better than the experts trained to do it.

When I said that everyone wanted to work with AI, I meant everyone. In some cases, developers would head to Coursera to learn about AI and how to create algorithms. Others went back to school to get an advanced degree in machine learning or statistics. As a proponent of continuing education, I applauded these efforts to level up their knowledge and skills, and, for the most part, these individuals became curious allies, trying to learn about the process from the inside.

But the armchair data scientists, often in leadership or decision-making positions, were more disruptive. They would watch a video or read an article, then send it to the team with a brief note stating only, “We should do this.” There was no context, no understanding of how the technology worked, and no sense of whether what they found addressed a problem or challenge we were actually facing.

For the most part, we could deflect these suggestions through thoughtful responses, explaining that we were already doing what they suggested, or why it wasn’t relevant to the actual problem we were solving.

The more draining interactions were with leaders who wanted to prescribe how a model should be built, sometimes implicitly but often explicitly. They wanted the algorithm to reflect their vision of what the system should do, based on intuition, even if that vision wasn’t technically feasible or even relevant to the problem at hand. They would prescribe what data should be used, what data should be excluded, or how a model should be trained.

They tried to influence how predictions were interpreted. They challenged results that didn’t feel intuitive, even when the outcomes were reproducible, measurable, and backed by data. Sometimes this meant training a separate model built around their intuition, followed by a side-by-side comparison that consistently showed the data-driven approach outperforming the one based on gut feel.

I’ve sat with some of those same executives to review the results of a model, only to be met with disappointment, especially when they felt there should be a logical, straightforward solution to a problem so complex that it couldn’t be solved, or even attempted, without AI.

They would question why the predictions weren’t 100% accurate. Even when I pointed out that we were predicting the future from past data, and that a model right 60% of the time was a dramatic improvement over a human-driven process that was right only 10% of the time, their questions focused on achieving the impossible 100%. They’d leave value on the table chasing perfection when they could have improved the process immediately and refined it over time. Or they’d go off on tangents, hyperfocusing on edge cases that had no solution and often no data with which to even attempt to train an algorithm.

If the performance was impressive, they’d move the targets, setting an arbitrarily higher bar or switching from the measurable metric to a different one, or to an abstraction like “trust,” without providing direction on what that meant or how to measure it. Trust in what? Accuracy? Fairness? Transparency? Nobody could say. When asked for clarification, the answer was the classic Justice Potter Stewart line: “I know it when I see it.”

In the end, it always seemed like AI was a disappointment. Unless it could solve 100% of a problem 100% of the time, no matter how complex or how poorly humans performed before, leaders would keep chasing a unicorn, while ignoring the perfectly capable, faster horse already in the stable.

Over and over, the pattern was the same: impossible expectations, misunderstanding of the tools, and a tendency to chase magical thinking over measurable progress.

Here We Go Again

Fast forward a few years, and we’re back at it. Only this time, the technology looks smarter. LLMs have reignited AI’s promise with a seductive twist: they speak fluently. They write. They reason. They respond. And for many, that’s been enough to assume that LLMs also understand.

But just like before, we’ve let the illusion get ahead of the reality.

While LLMs make it easier than ever to demo something impressive, they haven’t made it easier to deliver something useful. Underneath the conversational surface, the same problems persist: inaccessible data, unclear problems, and unrealistic expectations. In fact, the expectations are even worse now, because the technology feels like it’s already “there.”

I’ve seen teams leap into building generative AI “solutions” without a clear understanding of what problems they’re solving. I’ve seen leadership get swept up in generative demos and approve massive budgets to chase abstract goals like “productivity” or “creativity” without metrics, definitions, or infrastructure.

The same pattern is playing out again. Impossible expectations, except this time they’re even higher. A misunderstanding of the tools, especially when it comes to differentiating the hype from the reality. And the same magical thinking chasing a hypothetical problem rather than focusing on a real problem with measurable outcomes.

Two years in, we’re starting to see the same disappointment creep in again, too: unrealized expectations, longer timelines, and a lack of return on the investment.

What Useful AI Actually Looks Like

Useful AI doesn’t always look like magic. In fact, the most valuable AI systems I’ve seen rarely impress anyone in a demo. They don’t write poetry, simulate conversation, or generate pitch decks with a prompt. They just quietly make things better—faster, cheaper, more consistent, more scalable.

They are, by most standards, boring.

A model that flags billing anomalies in a healthcare system might save millions. A classifier that routes customer service tickets to the right team might shave minutes off every support interaction. An optimization algorithm that suggests more efficient delivery routes could reduce fuel costs, improve ETAs, and shrink carbon footprints. None of these use generative AI. None of them are headline-worthy. But all of them create real value.

And unlike a chatbot that sometimes gives the wrong answer with great confidence, these systems are narrow by design. Purpose-built. Measured. Tightly integrated into workflows and optimized over time. They don’t need to sound human. They just need to work.
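
To make “narrow by design” concrete, here’s a minimal, hypothetical sketch of what a ticket-routing classifier like the one above might look like. The team labels and example tickets are invented for illustration; a real system would train on thousands of historical tickets and be measured against the existing routing process before it ever touched a workflow.

```python
# A hypothetical sketch of a "boring," narrow, purpose-built model:
# route incoming support tickets to the team most likely to resolve them.
# Labels and example tickets are invented; a real system would be trained
# on historical tickets and evaluated against the current routing process.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical tickets, labeled with the team that ultimately resolved them.
tickets = [
    "I was charged twice on my last invoice",
    "Payment failed but my card was still debited",
    "The app crashes when I open my statement",
    "How do I add a new user to my account?",
]
teams = ["billing", "billing", "engineering", "account_management"]

# One model, one job: turn ticket text into a team assignment.
router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(tickets, teams)

# A new ticket comes in; the prediction most likely lands on "billing".
print(router.predict(["My card was charged twice this month"]))
```

The specific libraries aren’t the point. The point is that the whole thing is measurable, replaceable, and invisible to the customer.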

We often overlook this kind of AI because it’s not exciting to watch. It doesn’t feel like the future. But that’s exactly the point: the best AI doesn’t draw attention to itself. It dissolves into the process, making things work better than they did before.

The Opportunity Cost of the Hype

The hype around LLMs has sparked a renewed interest in AI, but it’s also warped our sense of what progress looks like. Instead of focusing on impact, we’ve become obsessed with spectacle.

Executives watch a demo of a chatbot answering questions with a human-like cadence and see the realization of the vision they’ve had for AI all along. Suddenly, every team is greenlit to build a “copilot.”

LLMs make it easy to show something impressive. A few prompts, a fancy UI, and you’ve got a prototype that feels like innovation. But most of these tools don’t stand up to basic scrutiny. They hallucinate. They break when connected to real systems. They introduce ambiguity into workflows that once relied on clarity. They create new risks—ethical, operational, and technical—that teams are often unprepared to manage.

We’re pouring talent, time, and money into building AI wrappers around problems we haven’t defined. Meanwhile, the infrastructure work that would actually make AI useful—cleaning data, improving feedback loops, building explainable systems—is neglected.

This is the cost of chasing the hype: years of expensive effort with little or no realized value, certainly not at the scale that was promised. Real problems that didn’t require generative AI, problems that could have been solved and delivered real value, were ignored or neglected. Two years in, it turns out we’ve been sprinting on a treadmill. We’ve spent the energy, but we’re still in the same place.

To be clear, there’s nothing wrong with experimentation. But exploration without a clear problem or success metric isn’t innovation—it’s expensive theater. It gives the illusion of progress while distracting from work that actually moves the needle.

And we should know the difference because we’ve been here before. We let unrealistic expectations undermine the progress of last-generation AI. Now, we’re doing it faster. We’re skipping the discipline that made the old models work and replacing it with a dopamine hit from a prompt that feels smart.

A Better Mindset: Value Over Novelty

If the last two waves of AI taught us anything, it’s this: the technology is only as good as the problems we point it at and the people we trust to solve them.

Too often, we let novelty set the direction. We ask, “What can we build with this?” instead of “What’s worth solving?” But even when we pick the right problems, we don’t always empower the right people to do the work.

Instead, we’re seeing a return of the same behavior that stalled progress last time: leaders prescribing not just what to solve, but how to solve it. Dictating which data to use. Demanding specific architectures. Redefining outcomes midstream based on intuition instead of evidence. In some cases, they’re building solutions backwards from a flashy demo instead of forward from a real need.

This isn’t strategy—it’s armchair data science all over again.

And it’s especially risky now, because LLMs make it even easier to look smart without being right. It’s one thing to brainstorm ideas. It’s another to second-guess trained experts who understand the constraints, trade-offs, and mechanics of building something that works.

A better mindset means more than just optimizing for usefulness. It means creating space for people with real expertise—data scientists, engineers, researchers, designers—to lead the “how.”

It means:

  • Letting evidence drive decisions, not gut instinct or LinkedIn hype.
  • Empowering teams to solve, not just execute.
  • Recognizing that success isn’t always intuitive—and being okay with that.

Adopting this mindset doesn’t mean ignoring new tech. It means respecting the disciplines that make that tech useful. It means pairing vision with humility, ambition with trust.

Because there’s nothing wrong with being impressed by what’s possible.

But if we’re serious about delivering real value with AI, we have to get out of our own way—and let the experts do their jobs.