David Monnerat

Dad. Husband. Product + AI. Generalist. Endlessly Curious.

The Dulling of Innovation


For a few years, I was on a patent team. Our job was to drive innovation: to empower employees to come up with new ideas and shepherd those ideas through the process to see if we could turn them into patents.

I loved that job for many reasons. It leveraged an innovation framework I had already started with a few colleagues—work that earned us a handful of patents. It fed my curiosity, love for technology, and joy of being surrounded by smart people. Most of all, I loved watching someone light up as they became an inventor.

I worked with an engineer who had an idea based on his deep knowledge of a specific system. Together, we expanded on that idea and turned it into an innovative solution to a broader problem. The look on his face when his idea was approved for patent filing was one of the greatest moments of my career. For years after, he would stop me in the hallway just to say hello and introduce me as the person who helped him get a patent.

Much of the success I saw on that team came from people who deeply understood a problem, were curious to ask why, and believed there had to be a better way. That success was amplified when more than one inventor was involved, when overlapping experiences and diverse perspectives combined into something truly original.

When I moved into product management, the same patterns held true. The most successful ideas still came from a clear understanding of the problem, deep knowledge of the system, and the willingness to explore different perspectives.

Innovation used to be a web. It was messy, organic, and interconnected. The spark came from deep context and unexpected collisions.

But that process is starting to change.

Same High, Lower Ceiling

In this new age of large language models (LLMs), companies are looking for shortcuts to growth and innovation, and they see LLMs as the cheat code.

Teams are tasked with mining customer comments to synthesize feedback and generate feature ideas and roadmaps. If the ideas seem reasonable, they are executed without further analysis. Speed is the goal. Output is the metric.

Regardless of size or maturity, every company can access the tools and capabilities once reserved for tech giants. Generative AI lowers the barrier to entry. It also levels the playing field, democratizing innovation.

But what if it also levels the results?

When everyone uses the same models, trained on the same data and prompted in similar ways, the ideas start to converge. It’s innovation by template. You might move faster, but so is everyone else, and in the same direction.

Even when applied to your unique domain, the outputs often look the same. Which means the ideas are starting to look the same, too.

AI lifts companies that lacked innovation muscle, but in doing so, it risks pulling down those that had built it. The average improves, but the outliers vanish. The floor rises, but the ceiling falls.

We’re still getting the high. But it doesn’t feel like it used to.

The Dopamine of Speed

The danger is that we won’t see it happening. Worse, we’re moving forward blindly, without considering the long-term implications. We’re so fixated on speed that it’s easy to convince ourselves that moving fast is the same as innovating.

We mistake motion for momentum, and output for originality. The teams and companies that move the fastest will be rewarded. Natural selection will leave the slower ones behind. Speed will become the new sign of innovation. But just because something ships fast doesn’t mean it moves us forward.

The dopamine hit that comes from release after release is addictive, and we’ll need more and more to feel the same level of speed and growth. We’ll rely increasingly on these tools to get our fix until it stops working altogether. Meanwhile, that growing reliance dulls our effectiveness, erodes our impact, and lets our ability to create and innovate atrophy.

By the time we realize the quality of our ideas has flattened, we’ll be too dependent on the process to do anything differently.

The Dealers Own the Supply

And those algorithms? They’re owned by a handful of companies. These companies decide how the models behave, what data they’re trained on, and what comes out of them.

They also own the data. And it’s only a matter of time before they start mining it for intellectual property—filing patents faster than anyone else can, or arguing that anything derived from their models is theirs by default.

Beyond intellectual property and market control, this concentration of power raises more profound ethical and societal questions. When innovation is funneled through a few gatekeepers, it risks reinforcing existing inequalities and biases embedded in the training data and business models. The diversity of ideas and creators narrows, and communities without direct access to these technologies may be left behind, exacerbating the digital divide and limiting who benefits from AI-driven innovation.

The more we rely on these models, the more we feed them. Every prompt, interaction, and insight becomes part of a flywheel that strengthens the model and the company behind it. It’s a feedback loop: we give them our best thinking, and they return a usable version of it to everyone else.

LLMs don’t think from first principles—they remix from secondhand insight. And when we stop thinking from scratch, we start building from scraps.

Because the answers sound confident, they feel finished. That confidence masks conformity, and we mistake it for consensus.

Innovation becomes a productized service. Creative edge gets compressed into a monthly subscription. What once gave your company a competitive advantage is now available to anyone who can write a halfway decent prompt.

Make no mistake, these aren’t neutral platforms. They shape how we think, guide what we explore, and, as they become more embedded in our workflows, influence decisions, strategies, and even what we consider possible.

We used to control the process. Now we’re just users. The same companies selling us the shortcut are quietly collecting the toll.

When the supply is centralized, so is the power. And if we keep chasing the high, we’ll find ourselves dependent on a dealer who decides what we get and when we get it.

Rewiring for Real Innovation

This isn’t a call to reject the tools. Generative AI isn’t going away, and used well, it can make us faster, better, and more creative. But the key is how we use it—and what we choose to preserve along the way.

Here’s where we start:

1. Protect the Messy Middle

Innovation doesn’t happen at the point of output. It happens in the friction. The spark lives in debate, dead ends, and rabbit holes. We must protect the messy, nonlinear process that makes true insight possible.

Use AI to accelerate parts of the journey, not to skip it entirely.

2. Think from First Principles

Don’t just prompt. Reframe. Instead of asking, “What’s the answer?” ask, “What’s the real question?” LLMs are great at synthesis, but breakthroughs come from original framing.

Start with what you know. Ask “why” more than “how.” And resist the urge to outsource the thinking.

3. Don’t Confuse Confidence for Quality

A confident response isn’t necessarily a correct one. Learn to interrogate the output. Ask where it came from, what it’s assuming, and what it might be missing.

Treat every generated answer like a draft, not a destination.

4. Diversify Your Inputs

The model’s perspective is based on what it’s been trained on, which is mostly what’s already popular, published, and safe. If you want a fresh idea, don’t ask the same question everyone else is asking in the same way.

Talk to people. Explore unlikely connections. Bring in perspectives that aren’t in the data.

5. Make Thinking Visible

The danger of speed is that it hides process. Write out your assumptions. Diagram your logic. Invite others into the middle of your thinking instead of just sharing polished outputs.

We need to normalize visible, imperfect thought again. That’s where the new stuff lives.

6. Incentivize Depth

If we reward speed, we get speed. If we reward outputs, we get more of them. But if we want real innovation, we need to measure the stuff that doesn’t show up in dashboards: insight, originality, and depth of understanding.

Push your teams to spend time with the problem, not just the solution.

Staying Sharp

We didn’t set out to flatten innovation. We set out to go faster, to do more, to meet the moment. But in chasing speed and scale, we risk trading depth for derivatives, and originality for automation.

Large language models can be incredible tools. They can accelerate discovery, surface connections, and amplify creative potential. But only if we treat them as collaborators, not crutches.

The danger isn’t in using these models. The danger is in forgetting how to think without them.

We have to resist the pull toward sameness. We have to do the slower, messier work of understanding real problems, cultivating creative tension, and building teams that collide in productive ways. We have to reward originality over velocity, and insight over output.

Otherwise, the future of innovation won’t be bold or brilliant.

It’ll just be fast.

And dull.