David Monnerat

Dad. Husband. Product + AI. Generalist. Endlessly Curious.

Tag: genAI


    The Dulling of Innovation

For a few years, I was on a patent team. Our job was to drive innovation: to empower employees to come up with new ideas and to shepherd those ideas through the process to see if we could turn them into patents.

    I loved that job for many reasons. It leveraged an innovation framework I had already started with a few colleagues—work that earned us a handful of patents. It fed my curiosity, love for technology, and joy of being surrounded by smart people. Most of all, I loved watching someone light up as they became an inventor.

    I worked with an engineer who had an idea based on his deep knowledge of a specific system. Together, we expanded on that idea and turned it into an innovative solution to a broader problem. The look on his face when his idea was approved for patent filing was one of the greatest moments of my career. For years after, he would stop me in the hallway just to say hello and introduce me as the person who helped him get a patent.

    Much of the success I saw on that team came from people who deeply understood a problem, were curious to ask why, and believed there had to be a better way. That success was amplified when more than one inventor was involved, when overlapping experiences and diverse perspectives combined into something truly original.

    When I moved into product management, the same patterns held true. The most successful ideas still came from a clear understanding of the problem, deep knowledge of the system, and the willingness to explore different perspectives.

    Innovation used to be a web. It was messy, organic, and interconnected. The spark came from deep context and unexpected collisions.

    But that process is starting to change.

    Same High, Lower Ceiling

In this new age of large language models (LLMs), companies are looking for shortcuts to growth and innovation, and they see LLMs as the cheat code.

    Teams are tasked with mining customer comments to synthesize feedback and generate feature ideas and roadmaps. If the ideas seem reasonable, they are executed without further analysis. Speed is the goal. Output is the metric.

    Regardless of size or maturity, every company can access the tools and capabilities once reserved for tech giants. Generative AI lowers the barrier to entry. It also levels the playing field, democratizing innovation.

    But what if it also levels the results?

When everyone uses the same models, trained on the same data and prompted in similar ways, the ideas start to converge. It’s innovation by template. You might move faster, but so does everyone else, and in the same direction.

    Even when applied to your unique domain, the outputs often look the same. Which means the ideas are starting to look the same, too.

    AI lifts companies that lacked innovation muscle, but in doing so, it risks pulling down those that had built it. The average improves, but the outliers vanish. The floor rises, but the ceiling falls.

    We’re still getting the high. But it doesn’t feel like it used to.

    The Dopamine of Speed

    The danger is that we’re not going to see it happening. Worse, we’re blindly moving forward without considering the long-term implications. We’re so fixated on speed that it’s easy to convince ourselves that we’re moving fast and innovating.

We mistake motion for momentum, and output for originality. The teams and companies that move the fastest will be rewarded. Natural selection will leave the slower ones behind. Speed will be the new sign of innovation. But just because something ships fast doesn’t mean it moves us forward.

The dopamine hit that comes from release after release is addictive, and we’ll need more and more to feel the same level of speed and growth. We’ll rely increasingly on these tools to get our fix until it stops working altogether. Meanwhile, that growing reliance dulls our effectiveness, erodes our impact, and lets our ability to be creative and innovate atrophy.

    By the time we realize the quality of our ideas has flattened, we’ll be too dependent on the process to do anything differently.

    The Dealers Own the Supply

    And those algorithms? They’re owned by a handful of companies. These companies decide how the models behave, what data they’re trained on, and what comes out of them.

    They also own the data. And it’s only a matter of time before they start mining it for intellectual property—filing patents faster than anyone else can, or arguing that anything derived from their models is theirs by default.

    Beyond intellectual property and market control, this concentration of power raises more profound ethical and societal questions. When innovation is funneled through a few gatekeepers, it risks reinforcing existing inequalities and biases embedded in the training data and business models. The diversity of ideas and creators narrows, and communities without direct access to these technologies may be left behind, exacerbating the digital divide and limiting who benefits from AI-driven innovation.

    The more we rely on these models, the more we feed them. Every prompt, interaction, and insight becomes part of a flywheel that strengthens the model and the company behind it, making it more powerful. It’s a feedback loop: we give them our best thinking, and they return a usable version to everyone else.

    LLMs don’t think from first principles—they remix from secondhand insight. And when we stop thinking from scratch, we start building from scraps.

    Because the answers sound confident, they feel finished. That confidence masks conformity, and we mistake it for consensus.

    Innovation becomes a productized service. Creative edge gets compressed into a monthly subscription. What once gave your company a competitive advantage is now available to anyone who can write a halfway decent prompt.

    Make no mistake, these aren’t neutral platforms. They shape how we think, guide what we explore, and, as they become more embedded in our workflows, influence decisions, strategies, and even what we consider possible.

    We used to control the process. Now we’re just users. The same companies selling us the shortcut are quietly collecting the toll.

    When the supply is centralized, so is the power. And if we keep chasing the high, we’ll find ourselves dependent on a dealer who decides what we get and when we get it.

    Rewiring for Real Innovation

    This isn’t a call to reject the tools. Generative AI isn’t going away, and used well, it can make us faster, better, and more creative. But the key is how we use it—and what we choose to preserve along the way.

    Here’s where we start:

    1. Protect the Messy Middle

    Innovation doesn’t happen at the point of output. It happens in the friction. The spark lives in debate, dead ends, and rabbit holes. We must protect the messy, nonlinear process that makes true insight possible.

    Use AI to accelerate parts of the journey, not to skip it entirely.

    2. Think from First Principles

    Don’t just prompt. Reframe. Instead of asking, “What’s the answer?” ask, “What’s the real question?” LLMs are great at synthesis, but breakthroughs come from original framing.

    Start with what you know. Ask “why” more than “how.” And resist the urge to outsource the thinking.

    3. Don’t Confuse Confidence for Quality

    A confident response isn’t necessarily a correct one. Learn to interrogate the output. Ask where it came from, what it’s assuming, and what it might be missing.

    Treat every generated answer like a draft, not a destination.

    4. Diversify Your Inputs

    The model’s perspective is based on what it’s been trained on, which is mostly what’s already popular, published, and safe. If you want a fresh idea, don’t ask the same question everyone else is asking in the same way.

    Talk to people. Explore unlikely connections. Bring in perspectives that aren’t in the data.

    5. Make Thinking Visible

    The danger of speed is that it hides process. Write out your assumptions. Diagram your logic. Invite others into the middle of your thinking instead of just sharing polished outputs.

    We need to normalize visible, imperfect thought again. That’s where the new stuff lives.

    6. Incentivize Depth

    If we reward speed, we get speed. If we reward outputs, we get more of them. But if we want real innovation, we need to measure the stuff that doesn’t show up in dashboards: insight, originality, and depth of understanding.

    Push your teams to spend time with the problem, not just the solution.

    Staying Sharp

    We didn’t set out to flatten innovation. We set out to go faster, to do more, to meet the moment. But in chasing speed and scale, we risk trading depth for derivatives, and originality for automation.

    Large language models can be incredible tools. They can accelerate discovery, surface connections, and amplify creative potential. But only if we treat them as collaborators, not crutches.

    The danger isn’t in using these models. The danger is in forgetting how to think without them.

    We have to resist the pull toward sameness. We have to do the slower, messier work of understanding real problems, cultivating creative tension, and building teams that collide in productive ways. We have to reward originality over velocity, and insight over output.

    Otherwise, the future of innovation won’t be bold or brilliant.

    It’ll just be fast.

    And dull.


    AI First, Second Thoughts

    Over the past few weeks, several companies have made headlines by declaring an “AI First” strategy.

    Shopify CEO Tobi Lütke told employees that before asking for additional headcount or resources, they must prove the work can’t be done by AI.

    Duolingo’s CEO, Luis von Ahn, laid out a similar vision, phasing out contractors for tasks AI can handle and using AI to rapidly accelerate content creation.

    Both companies also stated that AI proficiency will now play a role in hiring decisions and performance reviews.

    On the surface, this all sounds reasonable. If generative AI can truly replicate—or even amplify—human effort, then why wouldn’t companies want to lean in? Compared to the cost of hiring, onboarding, and supporting a new employee, AI looks like a faster, cheaper alternative that’s available now.

    But is it really that simple?

    First, there was AI Last

    Before we talk about “AI First,” it’s worth rewinding to what came before.

    I’ve long been an advocate of what I’d call an “AI Last” approach, so the “AI First” mindset is a shift for me.

Historically, I’ve found that teams often jump too quickly to AI as the sole solution, usually under significant pressure from the top to “do more AI.” That pressure reflected a lack of understanding of what AI is, how it works, its limitations, and its cost. The mindset of sprinkling magical AI pixie dust over a problem and having it solved is naive and dangerous, often distracting teams from a much more practical solution.

    Here’s why I always pushed for exhausting the basics before reaching for AI:

    Cost

    • High development and maintenance costs: AI solutions aren’t cheap. They require time, talent, and significant financial investment.
    • Data preparation overhead: Training useful models requires large volumes of clean, labeled data—something most teams don’t have readily available.
    • Infrastructure needs: Maintaining reliable AI systems often means investing in robust MLOps infrastructure and tooling.

    Complexity

    • Simple solutions often work: Business logic, heuristics, or even minor process changes can solve the problem faster and more predictably.
    • Harder to maintain and debug: AI models are opaque by nature—unlike rule-based systems, it’s hard to explain why they behave the way they do.
    • Performance is uncertain: AI models can fail in edge cases, degrade over time, or simply underperform outside of their training environment.
    • Latency and scalability issues: Large models—especially when accessed through APIs—can introduce unacceptable delays or infrastructure costs.

    Risk

    • Low explainability: In regulated or mission-critical settings, black-box AI systems are a liability.
    • Ethical and legal exposure: AI can introduce or amplify bias, violate user privacy, or produce harmful or offensive outputs.
    • Chasing hype over value: Too often, teams build AI solutions to satisfy leadership or investor expectations, not because it’s the best tool for the job.

    What Changed?

    So why the shift from AI Last to AI First?

The shift happened not just because of what generative AI made possible, but because of how effortless it made everything look.

    Generative AI feels easy.

    Unlike traditional AI, which required data pipelines, modeling, and MLOps, generative AI tools like ChatGPT or GitHub Copilot give you answers in seconds with nothing more than a prompt. The barrier to entry feels low, and the results look surprisingly good (at first).

    This surface-level ease masks the hidden costs, risks, and technical debt that still lurk underneath. But the illusion of simplicity is powerful.

    Generalization expands possibilities.

    LLMs can generalize across many domains, which lowers the barrier to trying AI in new areas. That’s a significant shift from traditional AI, which typically had narrow, custom-built models.

    AI for everyone.

    Anyone—from marketers to developers—can now interact directly with AI. This democratization of AI access represents a significant shift, accelerating adoption, even in cases where the use case is unclear.

    Speed became the new selling point.

    Prototyping with LLMs is fast. Really fast. You can build a working demo in hours, not weeks. For many teams, that 80% solution is “good enough” to ship, validate, or at least justify further investment.

    That speed creates pressure to bypass traditional diligence, especially in high-urgency or low-margin environments.

    The ROI pressure is real.

    Companies have made massive investments in AI, whether in cloud compute, partnerships, talent, or infrastructure. Boards and executives want to see returns. “AI First” becomes less of a strategy and more of a mandate to justify spend.

    It’s worth mentioning that this pressure sometimes focuses on using AI, not using it well.

    People are expensive. AI is not (on the surface).

    Hiring is slow, expensive, and full of risk. In contrast, AI appears to offer infinite scale, zero ramp-up time, and no HR overhead. For budget-conscious leaders, the math seems obvious.

    The hype machine keeps humming.

    Executives don’t want to be left behind. Generative AI is being sold as the answer to nearly every business challenge, often without nuance or grounding in reality. Just like with traditional AI, teams are once again being told to “add AI” without understanding if it’s needed, feasible, or valuable.

    It feels like a shortcut.

    There’s another reason “AI First” is so appealing: it feels like a shortcut.

    It promises to bypass the friction, delay, and uncertainty of hiring. Teams can ship faster, cut costs, and show progress—at least on the surface. In high-pressure environments, that shortcut is incredibly tempting.

    But like most shortcuts, this one comes with consequences.

    Over-reliance on AI can erode institutional knowledge, create brittle systems, and introduce long-term costs that aren’t immediately obvious. Models drift. Prompts break. Outputs change. Context disappears. Without careful oversight, today’s efficiency gains can become tomorrow’s tech debt.

    Moving fast is easy. Moving well is harder. “AI First” can be a strategy—but only when it’s paired with rigor, intent, and a willingness to say no.

    What’s a Better Way?

    “AI First” isn’t inherently wrong, but without guardrails, it becomes a race to the bottom. A better approach doesn’t reject AI. It reframes the question.

    Yes, start with AI. But don’t stop there. Ask:

    • Is AI the right tool for the problem?
    • Is this solution resilient, or just fast?
    • Are we building something sustainable—or something that looks good in a demo?

    A better way is one that’s AI-aware, not AI-blind. That means being clear-eyed about what AI is good at, where it breaks down, and what it costs over time.

    Here are five principles I’ve seen work in practice:

    Start With the Problem, Not the Technology

Don’t start by asking, “How can we use AI?” Start by asking, “What’s the problem we’re trying to solve?”

    • What does success look like?
    • What are the constraints?
    • What’s already working—or broken?

    AI might still be the right answer. But if you haven’t clearly defined the problem, everything else is just expensive guesswork.

    Weigh the Tradeoffs, Not Just the Speed

    Yes, AI gets you something fast. But is it the right thing?

    • What happens when the model changes?
    • What’s the fallback if the prompt fails?
    • Who’s accountable when it goes off the rails?

    “AI First” works when speed is balanced by responsibility. If you’re not measuring long-term cost, you’re not doing ROI—you’re doing wishful thinking.

    Build for Resilience, Not Just Velocity

    Shortcuts save time today and create chaos tomorrow.

    • Document assumptions.
    • Build fallback paths.
    • Monitor for drift.
    • Don’t “set it and forget it.”

    Treat every AI-powered system like it’s going to break, because eventually, it will. The teams that succeed are the ones who planned for it.
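
To make that concrete, here’s a minimal sketch of two of the bullets above: a fallback path and a basic drift check wrapped around a model call. The function names (call_model, rule_based_answer) and the length-based drift heuristic are hypothetical placeholders, not any particular vendor’s API; treat it as an illustration of the principle, not a production pattern.

```python
import statistics
import time


def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call; assume it can time out or return junk."""
    raise TimeoutError("model unavailable")  # simulated outage for this sketch


def rule_based_answer(prompt: str) -> str:
    """Deterministic fallback so the feature still responds when the model can't."""
    return "We're having trouble generating an answer right now; a support agent will follow up."


recent_lengths: list[int] = []  # rolling sample of response lengths, a crude drift signal


def answer(prompt: str) -> str:
    try:
        response = call_model(prompt)
    except Exception:
        # Fallback path: the AI dependency failing shouldn't take the whole feature down.
        return rule_based_answer(prompt)

    # Drift check (illustrative): flag responses that deviate wildly from the recent baseline.
    recent_lengths.append(len(response))
    del recent_lengths[:-20]  # keep only the last 20 samples
    baseline = statistics.median(recent_lengths)
    if abs(len(response) - baseline) > 3 * baseline:
        print(f"[{time.ctime()}] possible drift: length {len(response)} vs baseline {baseline}")

    return response


if __name__ == "__main__":
    print(answer("Why was my bill higher this month?"))
```

The point isn’t the specific heuristic; it’s that failure and drift are planned for before launch, not discovered after.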

    Design Human-AI Collaboration, Not Substitution

    Over-automating can backfire. When people feel like they’re just babysitting machines—or worse, being replaced by them—you lose the very thing AI was supposed to support: human creativity, intuition, and care.

    The best systems aren’t human-only or AI-only. They’re collaborative.

    • AI drafts, people refine.
    • AI scales, humans supervise.
    • AI suggests, humans decide.

This isn’t about replacing judgment; it’s about amplifying it. “AI First” should make your people better, not make them optional.

    Measure What Actually Matters

    A lot of AI initiatives look productive because we’re measuring the wrong things.

    More output ≠ better outcomes.

    And if everyone is using the same AI tools in the same way, we risk a monoculture of solutions—outputs that look the same, sound the same, and think the same.

    Real creativity and insight don’t come from the center. They come from the edges, from the teams that challenge assumptions and break patterns. Over-reliance on AI can mute those voices, replacing originality with uniformity.

    Human memory is inefficient and unreliable in comparison to machine memory. But it’s this very unpredictability that’s the source of our creativity. It makes connections we’d never consciously think of making, smashing together atoms that our conscious minds keep separate. Digital databases cannot yet replicate the kind of serendipity that enables the unconscious human mind to make novel patterns and see powerful new analogies of the kind that lead to our most creative breakthroughs. The more we outsource our memories to Google, the less we are nourishing the wonderfully accidental creativity of our consciousness.

    Ian Leslie, Curious: The Desire to Know and Why Your Future Depends on It

    If we let AI dictate the shape of our work, we may all end up building the same thing—just faster.

    More speed ≠ more value.

    Instead of counting tasks, measure trust. Instead of tracking volume, track quality. Focus on the things your customers and teams actually feel.

    The Real “AI First” Advantage

    The companies that win with AI won’t be the ones who move the fastest.

    They’ll be the ones who move the smartest. They’ll be the ones who know when to use AI, when to skip it, and when to slow down.

    Because in the long run, discipline beats urgency. Clarity beats novelty. And thoughtfulness scales better than any model.

    The real power of AI isn’t in what it can do.

    It’s in what we choose to do with it.


    Are You Not Entertained?

    “Give them bread and circuses, and they will never revolt.”
    — Juvenal, Roman satirist

    Over the past two weeks, my LinkedIn feed has looked like an AI fever dream. Every meme from the past 10 years was turned into a Studio Ghibli production. Former colleagues changed their profile pictures into a Muppet version of themselves. And somewhere, a perfectly respectable CTO shared an image of themselves as an ’80s action figure.

Meanwhile, in boardrooms everywhere, a familiar silence falls: “But… where’s the ROI?”

    The Modern Colosseum

    The Roman Empire understood something timeless about human nature: if people are distracted, they’re less likely to notice what’s happening around them. Bread and circuses. Keep them fed and entertained, and you can buy yourself time (or at least avoid a riot).

Fast-forward a couple of thousand years, swap out the emperors and politicians for CEOs in hoodies and VCs in Patagonia vests, and the gladiators for generative AI, and the strategy hasn’t changed much.

Today’s Colosseum is our social feed. And instead of lions and swords, it’s Ghibli filters, Muppet profile pictures, and action figure avatars. Every few weeks, a new AI-powered spectacle sweeps through like a new headline act. The crowd goes wild. The algorithm delivers the dopamine. And for a moment, it feels like this is what AI was always meant for: fun, viral, harmless play.

    But here’s the thing: that spectacle serves a purpose. The companies building these tools want you in the arena.

    Every playful experiment trains their models, every viral trend props up their metrics, and every wave of AI-generated content helps justify the next round of fundraising at an even higher valuation. These modern-day emperors are profiting from the distraction.

    You get a JPEG. They get data, engagement, and another step toward platform dominance.

    Meanwhile, the harder, messier questions that actually matter get conveniently lost in the noise:

    • Where does this data come from?
    • Where does the data go?
    • Who owns it?
    • Who profits from it?
    • What happens when a handful of companies control both the models and the means of production?
    • And are these tools creating real business value — or just highly shareable distractions?

    Because while everyone’s busy turning their profile picture into a dreamy Miyazaki protagonist, the real, boring, messy, complicated work of AI is quietly stalling out as companies continue to struggle to find sustainable, repeatable ways to extract value from these tools. The promise is enormous, but the reality? It’s a little less cinematic.

    And so the cycle continues: hype on the outside, hard problems on the inside. Keep the crowd entertained long enough, and maybe nobody will ask the hardest question in the arena:

Is any of this actually working?

    Spectacle Scales Faster Than Strategy

    It’s easy to look at all of this and roll your eyes. The AI selfies. The endless gimmicks. The flood of LinkedIn posts that feel more like digital dress-up than technology strategy.

    But this dynamic exists for a reason. In fact, it keeps happening because the forces behind it are perfectly aligned.

    It’s Easy

    The barrier to entry for generative AI spectacle is incredibly low.
    Write a prompt. Upload a photo. Get a result in seconds. No infrastructure. No integration. No approvals. Just instant content, ready for likes.

    Compare that to operationalizing AI inside a company where projects can stall for months over data access, privacy concerns, or alignment between teams. It’s no wonder which version of AI most people gravitate towards.

    It’s Visible

    Executives like to see signs of innovation. Shareholders like to hear about “AI initiatives.” Employees want to feel like their company isn’t falling behind.

Generative AI content delivers that visibility without the friction of actual transformation. Everyone gets to point to something and say, “Look! We’re doing AI.”

    It’s Fun

    Novelty wins attention. Play wins engagement. Spectacle spreads faster than strategy ever will.

    People want to engage with these trends — not because they believe it will transform their business, but because it’s delightful, unexpected, and fundamentally human to want to see yourself as a cartoon.

    It’s Safe

    The real work of AI is messy. It challenges workflows. It exposes gaps in data. It forces questions about roles, skills, and even headcount.

    That’s difficult, political, and sometimes threatening. Creating a Muppet version of your team is much easier than asking, “How do we automate this process without breaking everything?”

    And that’s exactly what the model and tool providers are taking advantage of. The easier it is to generate content, the faster you train the models. The more fun it is to share, the more data you give away. The safer it feels, the less you question who controls the tools you’re using.

    The Danger of Distraction

    The Colosseum didn’t just keep the Roman crowds entertained — it kept them occupied. And that’s the real risk with today’s AI spectacle.

    It’s not that the Ghibli portraits or action figure avatars are bad. It’s that they’re incredibly effective at giving the illusion of progress while the hard work of transformation stalls out behind the scenes.

    Distraction doesn’t just waste time. It creates risk. It creates vulnerability.

    Because while everyone is busy playing with the latest AI toy, the companies building these tools are playing a very different game — and they are deadly serious about it.

    They’re not just entertaining users. They’re capturing data. Shaping behavior. Building platforms. Creating dependencies. And accelerating their lead.

    Every viral trend lowers the bar for what people expect AI to do — clever content instead of meaningful change, spectacle instead of service, noise instead of impact. Meanwhile, the companies behind the curtain aren’t lowering their ambitions at all. They’re racing ahead.

    And the longer you sit in the stands clapping, the harder it gets to catch up.

    Leaders lose urgency. Teams lose focus. Customers lower their standards. And quietly, beneath all the fun and novelty, a very real gap is opening up — between the companies who are playing around with AI and the companies who are building their future on it.

    This is the real risk: not that generative AI fails but that it succeeds at the completely wrong thing. That we emerge from this wave with smarter toys, funnier memes, faster content… but no real shift in how work gets done, how customers are served, or how value is created.

    And by the time the novelty wears off and people finally look around and ask, “Wait, what did we actually build?” it might be too late to catch up to the companies who never stopped asking that question in the first place.

    Distraction delays that reckoning. But it doesn’t prevent it.

    The crowd will eventually leave the Colosseum. The show always ends. What’s left is whatever you bothered to build while the noise was loudest.

    Leaving The Arena

    If the past year has felt like sitting in the front row of the AI Colosseum, the obvious question is: do you want to stay in your seat forever?

    Because leaving the arena doesn’t mean abandoning generative AI. It means stepping away from the noise long enough to remember why you showed up in the first place. It means holding both yourself and the technology providers to a higher standard.

    It means asking harder questions about how you’re using AI and who you’re trusting to shape your future.

    • What real problems could this technology help us solve?
    • Where are we spending time or money inefficiently?
    • Who owns the value we create with these tools?
    • Where are we giving away data, control, or customer relationships without realizing it?
    • What assumptions are these LLM providers baking into our products, our workflows, our culture?
    • What happens to our business if these providers change the rules, the pricing, or the access tomorrow?
    • Are we designing for leverage or locking ourselves into dependency?
    • What happens if these companies own both the means of production and the means of distribution?

    It means shifting the focus from what AI can do to what people need. From delight to durability. From spectacle to service. From passive adoption to active accountability.

    Because the real work isn’t viral. It doesn’t trend on social media. No one’s sharing screenshots of cleaner data pipelines or more intelligent internal tools. But that’s exactly where the lasting value gets created.

    The companies (and people) who figure that out will not only survive the hype cycle but also be the ones standing long after the crowd moves on to whatever comes next.

    The arena will always be there. The show will always go on. The next shiny demo will always drop.

But at some point, you must decide whether you’re in this to watch or to build something that lasts, and to ask the uncomfortable questions that building requires.


    The White Whale

    In Moby-Dick, Captain Ahab’s relentless pursuit of the white whale isn’t just a quest for revenge; it’s a cautionary tale about obsession. Ahab becomes so consumed by his singular goal that he ignores the needs of his crew, the dangers of the voyage, and the possibility that his mission might be misguided.

    This mirrors a common trap in problem-solving: becoming so fixated on a single solution—or even the idea of being the one to solve a problem—that we lose sight of the bigger picture. Instead of starting with a problem and exploring the best ways to address it, we often cling to a solution we’re attached to, even if it’s not the right fit or takes us away from solving the actual problem.

    A Cautionary Tale

    Call me Ishmael.1 – Herman Melville

    I once worked on a project to identify potential customer issues. The business provided the context and success metrics, and we were part of the team set out to solve the problem.

    After we started, an executive on the project who knew the domain had a specific vision for how the solution should work and directed us on exactly what approach to use and how to implement it. While their approach seemed logical to them, it disregarded key best practices and alternative solutions that could have been more effective.

    We ran experiments to test both the executive’s approach and an alternative, using data to demonstrate how a different approach produced better results and would improve business outcomes.

But the executive was undeterred. They shifted resources and dedicated teams to their solution, intent on making it work. We continued a separate effort in parallel, but without the resources or backing received by the other team.

    The Crew

    Like the crew of the Pequod, the teams working on the executive’s solution were initially excited about the attention and resources. They came up with branding and a concept that made for good presentations. The initial few months were spent creating an architecture and building data pipelines under the presumption that the solution would work. Each update gave a sense of progress and success as items were crossed off the checklist.

That success, though, was based on output, not outcomes. Along the way, the business results weren’t there, and team members began to question the approach. However, even with these questions and the evidence that our approach was improving business outcomes, the hierarchical command structure kept the crew from changing course.

    The Prophet

In Moby-Dick, Captain Ahab smuggles Fedallah, an almost supernatural harpooner, onto the ship as part of a hidden crew. Fedallah is a mysterious figure who serves as Ahab’s personal prophet, foretelling Ahab’s fate.

    Looking for a prophet of their own, our executive brought in a consulting firm to see if they could get the project on track. The firm’s recommendations largely mirrored those of our team. However, similar to Fedallah’s prophecies, the recommendations were misinterpreted. What we saw as clear signals to change course, the executive saw as a chance of success and doubled down on their solution.

    The Alternate Mission

    Near the end of the novel, the captain of another vessel, the Rachel, pleads with Ahab to help him find his missing son, lost at sea. Ahab refuses because he is too consumed by his revenge. Ultimately, the obsession costs Ahab his life as well as those of his crew, with the exception of Ishmael, who was, ironically, rescued by the Rachel, the whaling ship that had earlier begged Ahab for help.

We tried to bridge the gap between the two efforts for years, but the executive’s fixation on their solution made collaboration impossible. We made a strong case, using data, to change the mission from making their solution work back to the business goals and outcomes. Unfortunately, after many attempts, we weren’t able to convince them or overcome their conviction that their solution should work. Too many claims had already been made, and too much had been invested to change course. The success of their solution was the only acceptable end of the journey, with that success always being just over the horizon.

    A Generative White Whale

    I’ve been thinking about this story lately because I see the same pattern happening with generative AI. Just as Captain Ahab chases Moby Dick, many companies chase technological solutions without fully understanding if those solutions will solve their real business problems.

    Since ChatGPT was launched to the public in 2022, there has been pressure across industries to deliver on generative AI use cases. The impressive speed at which users signed up and the ease at which ChatGPT could respond to questions gave the appearance of an easy implementation path.

    Globally, roadmaps were blown up and rebuilt with generative AI initiatives. Traditional intent classification and dialog flows were replaced with large language models in conversational AI and customer support projects. Retrieval-augmented generation changed search and summarization use cases.

    Then, the world tried to use it. Everyone quickly learned that the models didn’t work out of the box and underestimated the amount of human oversight and iteration needed to get reliable, trustworthy results.2 We learned that their data wasn’t ready to be consumed by these models and underestimated the effort required to clean, label, and structure the data for generative AI use cases. We learned about hallucinations, toxic and dangerous language in responses, and the need for guardrails.

    But the ship had sailed. The course had been set. Roadmaps represent unchangeable commitments3. The mission to hunt for generative AI success continued.

    What started with use cases with clear business outcomes inherited from the pre-generative AI days started to change. Rather than targeting problems that could significantly impact business goals, the focus shifted to finding problems that could be solved with generative AI. Companies had already invested too much time, money, and opportunity cost, and they needed to deliver something of value to justify the voyage.4,5

    It became an obsession.

    A white whale.

    Chasing the Right Whale

    I try all things, I achieve what I can.6 – Herman Melville

    That’s not to say there isn’t a place for generative AI or other technology as possible solutions. I’ve been working with AI for almost a decade and have seen how it can be truly powerful and transformative when applied to the right use case that aligns with business outcomes and solving customer or business problems.

Experimenting with the technology can foster innovation and uncover new opportunities. However, when an organization shifts its focus away from solving its most critical business problems and toward delivering a particular solution or leveraging a specific technology for its own sake, that misalignment can put the entire mission at risk. The mission should always be the success of the business, not the technology.

    That’s the difference between chasing the white whale and chasing the right whale.

    Assess Your Mission

The longer a project goes on, the more likely it is to veer off course. Little choices over time make small adjustments to direction that can eventually leave you far from the intended destination. The same thing can happen with the overall mission. Ahab started his journey hunting whales for resources and, while he was still technically hunting a whale, his mission changed to revenge. If he had taken the time to reassess his position and motivation, Moby-Dick would have had a less dramatic ending.

As product and delivery teams, it’s a healthy practice to occasionally look up and evaluate the current position and trajectory. While there may be an argument for intuition in the beginning, as more information becomes available, it’s important to lean on data and critical thinking rather than intuition and feelings, which are more prone to bias.

    These steps can help guide that process.

1. Reaffirm Business and Customer Priorities

    Align leadership around the most critical problems. Start by revisiting the company’s core objectives and defining success. Then, identify the biggest challenges facing the business and customers before considering solutions.

    2. Audit and Categorize Existing Projects

    Identify low-impact or misaligned projects. List all ongoing and planned AI initiatives, categorizing them based on:

    • Business impact (Does it solve a top-priority problem?)
    • Customer impact (Does it improve user experience or outcomes?)
    • Strategic alignment (Is it aligned with company goals, or is it just chasing trends?)

An important factor here is articulating and measuring how the initiative impacts business and customer goals, rather than simply noting that it relates to one.

    For example, a common chatbot goal is to reduce support costs (business goal) by answering customer questions (customer goal) without the need to interact with a support agent. A project that uses generative AI to create more natural responses might look like it’s addressing a need, but it assumes that a more conversational style will increase adoption or improve outcomes. However, making responses more conversational doesn’t necessarily make them more helpful. If the chatbot still struggles with accurate issue resolution, customers will escalate to an agent anyway.

    3. Assess Generative AI’s Fit

    Ensure generative AI is a means to an end, not the goal itself.

Paraphrasing a mantra I used whenever a team approached me with an “AI problem” to solve:

    There are no (generative) AI problems. There are business and customer problems for which (generative) AI may be a possible solution.

    For each project, ask: Would this problem still be worth solving without generative AI?

    If a generative AI project has a low impact, determine if there’s a higher-priority problem where AI (or another solution) could create more value.

    4. Adjust the Roadmap with a Zero-Based Approach

    Rather than tweaking the existing roadmap, start from scratch by prioritizing projects based on impact, urgency, and feasibility.

    Reallocate resources from lower-value AI projects to initiatives that directly improve business and customer outcomes.

    5. Set Success Metrics and Kill Switches

    Define clear, measurable success criteria for every project. Establish a review cadence (e.g., every quarter) to assess whether projects deliver value. If a project fails to meet impact goals, have a predefined exit strategy to stop work and shift resources.
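
As a sketch of what a predefined kill switch might look like in practice, here is an illustrative rule expressed in code. The metric, threshold, and review cadence are hypothetical stand-ins for whatever impact measure a team actually commits to.

```python
def review(project: str, impact_scores: list[float], threshold: float, misses_allowed: int = 2) -> str:
    """Stop work when a project misses its impact goal too many reviews in a row."""
    consecutive_misses = 0
    for score in impact_scores:  # one score per review cycle, oldest first
        consecutive_misses = consecutive_misses + 1 if score < threshold else 0
    if consecutive_misses > misses_allowed:
        return f"{project}: stop work and reallocate resources"
    return f"{project}: continue; review again next cycle"


# Example: a hypothetical pilot that has missed its goal three quarters in a row.
print(review("genAI summarization pilot", [0.4, -0.1, -0.2, -0.3], threshold=0.0))
```

The value isn’t in the code itself; it’s in deciding the exit criteria up front so the call isn’t left to sunk-cost feelings later.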

    This structured approach ensures that AI projects are evaluated critically, business needs drive technology decisions, and resources are focused on solving the most important problems—not just following trends.

    Conclusion

    The lesson of Moby-Dick is not just about obsession—it’s about losing sight of the true mission. Ahab’s relentless pursuit led to destruction because he refused to reassess his course, acknowledge new information, or accept that his goal was misguided. In business and technology, the same risk exists when companies prioritize solutions over problems and fixate on a specific technology rather than its actual impact.

    Generative AI holds incredible potential, but only when applied intentionally and strategically. The key is to stay grounded in business priorities, customer needs, and measurable outcomes—not just the pursuit of AI for AI’s sake. By regularly evaluating projects, questioning assumptions, and ensuring alignment with meaningful goals, teams can avoid chasing white whales and steer toward solutions that drive success.

    The difference between success and failure isn’t whether we chase a whale—it’s whether we’re chasing the right one.

    And I only am escaped alone to tell thee.7 – Herman Melville

    1. “Call me Ishmael.” This is one of the most famous opening lines in literature. It sets the tone for Ishmael’s role as the narrator and frames the novel as a personal account rather than just an epic sea tale. ↩︎
    2. https://www.cio.com/article/3608157/top-8-failings-in-delivering-value-with-generative-ai-and-how-to-overcome-them.html ↩︎
    3. Roadmaps are meant to be flexible and adjusted as priorities and opportunities change. ↩︎
    4. https://www.journalofaccountancy.com/issues/2025/feb/generative-ais-toughest-question-whats-it-worth.html ↩︎
    5. https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025 ↩︎
    6. This quote from Ishmael reflects a spirit of perseverance and pragmatism, emphasizing the importance of effort and adaptability in the face of challenges. ↩︎
    7. The closing line of the novel echoes the biblical story of Job, in which a lone survivor brings news of disaster, underscoring the novel’s themes of fate, obsession, and destruction. ↩︎