David Monnerat

Dad. Husband. Product + AI. Generalist. Endlessly Curious.

    The White Whale

    In Moby-Dick, Captain Ahab’s relentless pursuit of the white whale isn’t just a quest for revenge; it’s a cautionary tale about obsession. Ahab becomes so consumed by his singular goal that he ignores the needs of his crew, the dangers of the voyage, and the possibility that his mission might be misguided.

    This mirrors a common trap in problem-solving: becoming so fixated on a single solution—or even the idea of being the one to solve a problem—that we lose sight of the bigger picture. Instead of starting with a problem and exploring the best ways to address it, we often cling to a solution we’re attached to, even if it’s not the right fit or takes us away from solving the actual problem.

    A Cautionary Tale

    Call me Ishmael.1 – Herman Melville

    I once worked on a project to identify potential customer issues. The business provided the context and success metrics, and we were part of the team that set out to solve the problem.

    After we started, an executive on the project who knew the domain had a specific vision for how the solution should work and directed us on exactly what approach to use and how to implement it. While their approach seemed logical to them, it disregarded key best practices and alternative solutions that could have been more effective.

    We ran experiments to test both the executive’s approach and an alternative, using data to demonstrate how a different approach produced better results and would improve business outcomes.

    But the executive was undeterred. They shifted resources and dedicated teams to their solution, intent on making it work. We continued a separate effort in parallel, but without the resources or backing the other team received.

    The Crew

    Like the crew of the Pequod, the teams working on the executive’s solution were initially excited about the attention and resources. They came up with branding and a concept that made for good presentations. The initial few months were spent creating an architecture and building data pipelines under the presumption that the solution would work. Each update gave a sense of progress and success as items were crossed off the checklist.

    That success, though, was based on output, not outcomes. Along the way, the business results weren’t there, and team members began to question the approach. However, even with these questions and the evidence that our approach was improving business outcomes, the hierarchical nature of the command structure kept the crew from changing course.

    The Prophet

    In Moby-Dick, Captain Ahab smuggles Fedallah, an almost supernatural harpooner, onto the ship as part of a hidden crew. Fedallah is a mysterious figure who serves as Ahab’s personal prophet, foretelling Ahab’s fate.

    Looking for a prophet of their own, our executive brought in a consulting firm to see if they could get the project on track. The firm’s recommendations largely mirrored our team’s. However, like Fedallah’s prophecies, the recommendations were misinterpreted: what we saw as clear signals to change course, the executive read as a chance at success, and they doubled down on their solution.

    The Alternate Mission

    Near the end of the novel, the captain of another vessel, the Rachel, pleads with Ahab to help him find his missing son, lost at sea. Ahab refuses because he is too consumed by his revenge. Ultimately, the obsession costs Ahab his life as well as the lives of his crew, all except Ishmael, who is, ironically, rescued by the Rachel, the very ship that had earlier begged Ahab for help.

    We tried for years to bridge the gap between the two efforts, but the executive’s fixation on their solution made collaboration impossible. We made a strong, data-driven case for changing the mission from making their solution work back to the business goals and outcomes. Unfortunately, after many attempts, we couldn’t overcome their conviction that their solution should work. Too many claims had already been made, and too much had been invested, to change course. The success of their solution was the only acceptable end of the journey, and that success was always just over the horizon.

    A Generative White Whale

    I’ve been thinking about this story lately because I see the same pattern happening with generative AI. Just as Captain Ahab chases Moby Dick, many companies chase technological solutions without fully understanding if those solutions will solve their real business problems.

    Since ChatGPT was launched to the public in 2022, there has been pressure across industries to deliver on generative AI use cases. The impressive speed at which users signed up and the ease with which ChatGPT could respond to questions gave the appearance of an easy implementation path.

    Globally, roadmaps were blown up and rebuilt with generative AI initiatives. Traditional intent classification and dialog flows were replaced with large language models in conversational AI and customer support projects. Retrieval-augmented generation changed search and summarization use cases.

    Then, the world tried to use it. Companies quickly learned that the models didn’t work out of the box, and they underestimated the amount of human oversight and iteration needed to get reliable, trustworthy results.2 They learned that their data wasn’t ready to be consumed by these models and underestimated the effort required to clean, label, and structure it for generative AI use cases. They learned about hallucinations, toxic and dangerous language in responses, and the need for guardrails.

    But the ship had sailed. The course had been set. Roadmaps represent unchangeable commitments.3 The mission to hunt for generative AI success continued.

    Use cases that began with clear business outcomes, inherited from the pre-generative AI days, started to change. Rather than targeting problems that could significantly impact business goals, the focus shifted to finding problems that could be solved with generative AI. Companies had already sunk too much time, money, and opportunity cost into the hunt, and they needed to deliver something of value to justify the voyage.4,5

    It became an obsession.

    A white whale.

    Chasing the Right Whale

    I try all things, I achieve what I can.6 – Herman Melville

    That’s not to say there isn’t a place for generative AI or other technologies as possible solutions. I’ve been working with AI for almost a decade and have seen how truly powerful and transformative it can be when applied to the right use case, one that aligns with business outcomes and solves a real customer or business problem.

    Experimenting with the technology can foster innovation and uncover new opportunities. However, when an organization shifts its focus away from solving its most critical business problems and toward delivering a particular solution or technology for its own sake, that misalignment can put the entire mission at risk. The mission should always be the success of the business, not the technology.

    That’s the difference between chasing the white whale and chasing the right whale.

    Assess Your Mission

    The longer a project goes on, the more likely it is to veer off course. Small choices over time nudge the direction until the destination is far from the one intended. The same thing can happen to the overall mission. Ahab started his journey hunting whales for resources, and while he was still technically hunting a whale, his mission changed to revenge. Had he taken the time to reassess his position and motivation, Moby-Dick would have had a less dramatic ending.

    As product and delivery teams, it’s a healthy practice to occasionally look up and evaluate the current position and trajectory. While there may be an argument for intuition in the beginning, as more information becomes available, it’s important to lean on data and critical thinking rather than intuition and feelings, which are more prone to bias.

    These steps can help guide that process.

    1. Reaffirm Business and Customer Priorities

    Align leadership around the most critical problems. Start by revisiting the company’s core objectives and its definition of success. Then, identify the biggest challenges facing the business and customers before considering solutions.

    2. Audit and Categorize Existing Projects

    Identify low-impact or misaligned projects. List all ongoing and planned AI initiatives, categorizing them based on:

    • Business impact (Does it solve a top-priority problem?)
    • Customer impact (Does it improve user experience or outcomes?)
    • Strategic alignment (Is it aligned with company goals, or is it just chasing trends?)

    An important factor here is articulating and measuring how the initiative impacts business and customer goals, rather than merely how it relates to them.

    For example, a common chatbot goal is to reduce support costs (business goal) by answering customer questions (customer goal) without the need to interact with a support agent. A project that uses generative AI to create more natural responses might look like it’s addressing a need, but it assumes that a more conversational style will increase adoption or improve outcomes. However, making responses more conversational doesn’t necessarily make them more helpful. If the chatbot still struggles with accurate issue resolution, customers will escalate to an agent anyway.
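
    To make the audit concrete, here is a minimal sketch of how a team might score its portfolio. The initiatives, the 0–5 scores, and the review threshold are all hypothetical; the point is that every project is rated on measurable impact across the three categories above, not on its association with a trend.

    ```python
    # A minimal portfolio-audit sketch. The initiatives, 0-5 scores, and the
    # review threshold are hypothetical assumptions, for illustration only.
    from dataclasses import dataclass


    @dataclass
    class Initiative:
        name: str
        business_impact: int      # Does it solve a top-priority problem? (0-5)
        customer_impact: int      # Does it improve user experience or outcomes? (0-5)
        strategic_alignment: int  # Aligned with company goals, or chasing a trend? (0-5)

        @property
        def score(self) -> int:
            return self.business_impact + self.customer_impact + self.strategic_alignment


    portfolio = [
        Initiative("Make chatbot replies more conversational", 2, 2, 1),
        Initiative("Detect billing issues before customers call", 5, 4, 4),
    ]

    # Surface low-impact or misaligned projects first for review.
    for project in sorted(portfolio, key=lambda p: p.score):
        status = "REVIEW" if project.score < 9 else "KEEP"
        print(f"{status:6} {project.name} ({project.score}/15)")
    ```

    Even a rough rubric like this changes the conversation: a project that merely relates to a goal tends to expose itself through low business- and customer-impact scores.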

    3. Assess Generative AI’s Fit

    Ensure generative AI is a means to an end, not the goal itself.

    Paraphrasing a mantra I used whenever a team approached me with an “AI problem” to solve:

    There are no (generative) AI problems. There are business and customer problems for which (generative) AI may be a possible solution.

    For each project, ask: Would this problem still be worth solving without generative AI?

    If a generative AI project has a low impact, determine if there’s a higher-priority problem where AI (or another solution) could create more value.

    4. Adjust the Roadmap with a Zero-Based Approach

    Rather than tweaking the existing roadmap, start from scratch by prioritizing projects based on impact, urgency, and feasibility.

    Reallocate resources from lower-value AI projects to initiatives that directly improve business and customer outcomes.

    5. Set Success Metrics and Kill Switches

    Define clear, measurable success criteria for every project. Establish a review cadence (e.g., every quarter) to assess whether projects deliver value. If a project fails to meet impact goals, have a predefined exit strategy to stop work and shift resources.
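
    As a sketch of how a kill switch could be made concrete, the check below compares measured results against the criteria agreed on at kickoff. The metric names (a containment rate and a CSAT score) and their targets are assumptions for illustration, not recommendations; what matters is that the exit decision is written down before the work begins.

    ```python
    # A minimal kill-switch sketch for a quarterly review. The metrics
    # (containment rate, CSAT) and targets are illustrative assumptions;
    # the exit criteria are defined up front, at project kickoff.
    success_criteria = {"containment_rate": 0.30, "csat": 4.0}   # agreed at kickoff
    quarterly_results = {"containment_rate": 0.18, "csat": 4.2}  # measured, not projected

    misses = {
        metric: (quarterly_results[metric], target)
        for metric, target in success_criteria.items()
        if quarterly_results[metric] < target
    }

    if misses:
        for metric, (actual, target) in misses.items():
            print(f"MISS: {metric} = {actual} (target {target})")
        print("Decision: execute the predefined exit plan and shift resources.")
    else:
        print("Decision: continue; success criteria met.")
    ```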

    This structured approach ensures that AI projects are evaluated critically, business needs drive technology decisions, and resources are focused on solving the most important problems—not just following trends.

    Conclusion

    The lesson of Moby-Dick is not just about obsession—it’s about losing sight of the true mission. Ahab’s relentless pursuit led to destruction because he refused to reassess his course, acknowledge new information, or accept that his goal was misguided. In business and technology, the same risk exists when companies prioritize solutions over problems and fixate on a specific technology rather than its actual impact.

    Generative AI holds incredible potential, but only when applied intentionally and strategically. The key is to stay grounded in business priorities, customer needs, and measurable outcomes—not just the pursuit of AI for AI’s sake. By regularly evaluating projects, questioning assumptions, and ensuring alignment with meaningful goals, teams can avoid chasing white whales and steer toward solutions that drive success.

    The difference between success and failure isn’t whether we chase a whale—it’s whether we’re chasing the right one.

    And I only am escaped alone to tell thee.7 – Herman Melville

    1. “Call me Ishmael.” This is one of the most famous opening lines in literature. It sets the tone for Ishmael’s role as the narrator and frames the novel as a personal account rather than just an epic sea tale. ↩︎
    2. https://www.cio.com/article/3608157/top-8-failings-in-delivering-value-with-generative-ai-and-how-to-overcome-them.html ↩︎
    3. Roadmaps are meant to be flexible and adjusted as priorities and opportunities change. ↩︎
    4. https://www.journalofaccountancy.com/issues/2025/feb/generative-ais-toughest-question-whats-it-worth.html ↩︎
    5. https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025 ↩︎
    6. This quote from Ishmael reflects a spirit of perseverance and pragmatism, emphasizing the importance of effort and adaptability in the face of challenges. ↩︎
    7. The closing line of the novel echoes the biblical story of Job, in which a lone survivor brings news of disaster, underscoring the novel’s themes of fate, obsession, and destruction. ↩︎

    The Democratization of (Everything)

    A few years ago, I sat across the desk from a colleague, discussing their vision for a joint AI initiative. As a product manager, I pushed for clarity—what problem were we solving? What was the measurable outcome? What was the why behind this effort? Their response was simple: democratization. Just giving people access. No clear purpose, no defined impact—just the assumption that making something available would automatically lead to progress. That conversation stuck with me because it highlighted a fundamental flaw in how we think about democratizing technology.

    The term “democratizing,” as applied to technology, began to gain traction in the late 20th century, particularly during the rise of personal computing and the internet.

    Democratizing technology typically means making it accessible to a broader audience, often by reducing cost, simplifying interfaces, or removing barriers to entry. The goal is to empower more people to use the technology, fostering innovation, equality, and progress.

    Personal computers would “democratize” access to computing power by putting it in the hands of individuals rather than large institutions or corporations. Similarly, the internet would “democratize” access to information by removing the gatekeepers from publishing and content distribution.

    By the 2010s, “democratizing” became a buzzword in tech—used to describe making advanced tools like big data, AI, and machine learning accessible to more people. What was once in the hands of domain experts was now in the hands of the masses.

    Today, the term is frequently used in discussions about generative AI and other advanced technologies. These tools are marketed as democratizing creativity, coding, and problem-solving by making complex capabilities accessible to non-experts.

    The word “democratization” resonates because it aligns with broader cultural values, signaling fairness, accessibility, empowerment, and progress. The technology industry loves grand narratives, and “democratizing” sounds more revolutionary than “making more accessible.” It suggests that technology can break down barriers and create opportunities for everyone.

    However, as we’ve seen, the reality is often more complicated, and the term can sometimes obscure the challenges and inequalities that persist. Democratization often benefits those who already have the resources and knowledge while leaving others behind.

    I’ve long thought that the word “democratization” was an interesting choice when applied to technology because it resembles the ideals of operating a democratic state.1 Both rely on the idea that giving people access will automatically lead to better outcomes, fairness, and participation. However, both involve the tension between accessibility and effective use, the gap between ideals and reality, and the complexities of ensuring equitable participation. In practice, access alone is not enough; people need education, understanding, and responsible engagement for the system to function effectively.

    Democratization ≠ Access

    I’ve encountered many leaders who equate democratization with access, as if the goal were simply to put tools in people’s hands. However, having access to a tool doesn’t mean people know what to do with it or how to use it effectively. For example, just because people can access AI, big data, or generative tools doesn’t mean they know how to use them properly or interpret their outputs.

    Similarly, just because people have the right to vote doesn’t mean they fully understand policies, candidates, or the consequences of their choices.

    In technology, access is meaningful only when it drives specific outcomes, such as innovation, efficiency, or solving real-world problems. In a democratic state, access to voting and participation is not an end but a means to achieve broader goals, such as equitable representation, effective governance, and societal progress.

    Without a clear purpose, access risks becoming superficial, failing to address deeper systemic issues or deliver tangible improvements. In both cases, democratization must be guided by a vision beyond mere access to ensure it creates a meaningful, lasting impact.

    Democratization requires not just opening doors but also empowering individuals with the knowledge, understanding, and skills to walk through them meaningfully. Without this foundation, the promise of democratization remains incomplete.

    Democratization ≠ Equality

    The future is already here, it’s just not evenly distributed.

    William Gibson2

    The U.S. was built on democratic ideals. However, political elites, corporate interests, and media conglomerates shape much of the discourse because political engagement is skewed toward those with resources, time, and education. Underprivileged communities face barriers to participation.

    The same is true in technology. The wealthy and well-educated benefit more from new technology, while others struggle to adopt it and are left behind. AI and big data were meant to be open and empowering, but tech giants still control them, setting rules and limitations.

    Both systems struggle with the reality that equal access does not automatically lead to equal outcomes, as power dynamics and systemic inequalities persist. Even when technology is democratized, those with more resources or expertise often benefit disproportionately, widening existing inequalities.

    Bridging the gap between access and outcomes demands more than good intentions—it requires deliberate action to dismantle barriers, redistribute power, and ensure that everyone can benefit equitably. By focusing on education, structural reforms, and inclusive practices, both technology and democratic systems can move closer to fulfilling their promises of empowerment and equality.

    Democratization ≠ Expertise

    These are dangerous times. Never have so many people had so much access to so much knowledge and yet have been so resistant to learning anything.

    Thomas M. Nichols, The Death of Expertise

    Critical thinking is essential for both the democratization of technology and the functioning of a democratic state. In technology, access to AI, big data, and digital tools means little if people cannot critically evaluate information, recognize biases, or understand the implications of their actions. Misinformation, algorithmic manipulation, and overreliance on automation can distort reality, just as propaganda and political rhetoric can mislead voters in a democracy. Similarly, for a democratic state to thrive, citizens must question policies, evaluate candidates beyond slogans, and resist emotional or misleading narratives. 

    Without critical thinking, technology can be misused, and democratic processes can be manipulated, undermining the very ideals of empowerment and representation that democratization seeks to achieve. In both realms, fostering critical thinking is not just beneficial—it’s necessary for meaningful progress and equity.

    Addressing the lack of critical thinking in technology and humanity at large requires a holistic approach that combines education, systemic reforms, and cultural change. We can build a more informed, equitable, and resilient society by empowering individuals with the skills and tools to think critically and creating systems that reward thoughtful engagement. This is not a quick fix but a long-term investment in the health of technological and democratic systems.

    Democratization ≠ Universality

    Both technology and governance often operate under the assumption that uniform solutions can meet the diverse needs of individuals and communities. This can result in a mismatch between what is offered and what is actually required, highlighting the limits of a one-size-fits-all approach.

    In technology, for example, AI tools and software may be democratized to allow everyone access, but these tools often assume a certain level of expertise or familiarity with the technology. While they may work well for some users, others may find them difficult to navigate or may be unable to fully harness their capabilities. A tool designed for the general public might unintentionally alienate those who need a more tailored approach, leaving them frustrated or disengaged.

    Similarly, in governance, policies are often created with the idea that they will serve all citizens equally. However, a single national policy—whether on healthcare, education, or voting rights—can fail to account for the vastly different needs and circumstances of different communities. For example, universal healthcare policies may not address the specific healthcare access issues faced by rural or low-income populations, and standardized educational curriculums may not be effective for students with different learning needs or backgrounds. When solutions are not tailored to the unique realities of diverse groups, they risk reinforcing existing inequalities and failing to deliver meaningful results.

    The challenge, then, is finding a balance between providing access and ensuring that solutions are adaptable and responsive to the needs of different communities. Democratization doesn’t guarantee universal applicability, and it’s essential to recognize that true empowerment comes not just from providing access but from ensuring that access is meaningful and relevant to everyone, regardless of their context or capabilities. Without this careful consideration, democratization can become a frustrating experience that leaves many behind, ultimately hindering progress rather than fostering it.

    Conclusion

    The democratization of technology, much like democracy itself, is harder than it sounds. Providing access to tools like AI or big data is only the first step—it doesn’t guarantee that people know how to use them effectively or equitably. Without the necessary education, critical thinking, and support, access alone can be frustrating and lead to further division rather than empowerment.

    Just as democratic governance struggles with the assumption that one-size-fits-all policies can serve diverse communities, technology struggles when tools designed to be universally accessible fail to meet the unique needs of different users, leaving many behind. Real democratization requires not just opening doors but ensuring that everyone has the resources to walk through them meaningfully.

    Democratization is challenging in both technology and governance. It’s not just about giving people access; it’s about giving them the knowledge, understanding, and opportunity to use that access in ways that truly empower them.

    Until we get this right, the promise of democratization (and democracy) remains unfulfilled.

    Footnotes

    1. The United States of America is a representative democracy (or a democratic republic). ↩︎
    2. https://quoteinvestigator.com/2012/01/24/future-has-arrived/ ↩︎