David Monnerat

Dad. Husband. Product + AI. Generalist. Endlessly Curious.

Tag: leadership

  • The Illusion of Intelligence: How We’re Still Missing the Promise of AI

    When I started my first role as a product manager, my portfolio included solutions built with AI. I leveraged my product experience and joined forces with a team of data scientists as we sought to tackle complex problems with data.

    It was a great time to be in the space. Everyone wanted to work with AI, and the promise of an AI-driven future was highlighted at every quarterly meeting and town hall. There were piles of data and just enough maturity in the tools and teams to start developing and deploying powerful algorithms at scale.

    But progress was much slower than most people anticipated.

    We stood in front of the piles of data without a shovel, unable to make efficient use of the resource because it was in the wrong format or inside an inaccessible system. Sometimes, we’d discover too late that our intuition about the relevance of the data in a particular pile was wrong, and the data wasn’t useful to solve a particular problem.

    To overcome those challenges, we’d start by looking for more data by checking other piles or generating additional data by adding telemetry to our systems to close the gaps in coverage.

    If we couldn’t assemble what we needed to solve that particular problem, we’d try to reshape the problem, or we’d move on to the next problem to solve. But if we happened to find what we needed, we ran into our next hurdle: armchair data scientists—people who watched a demo, skimmed an article, and came away convinced they knew how to build the model better than the experts trained to do it.

    When I said that everyone wanted to work with AI, I meant everyone. In some cases, developers would head to Coursera to learn about AI and how to create algorithms. Others went back to school to get an advanced degree in machine learning or statistics. As a proponent of continuing education, I applauded these efforts to level up their knowledge and skills, and, for the most part, these individuals became curious allies, trying to learn about the process from the inside.

But the armchair data scientists, often in leadership or decision-making positions, were more disruptive. They would watch a video or read an article, then send it to the team with a brief note stating only, “We should do this.” There would be no context, no understanding of how the technology worked, and no sense of whether what they found addressed a problem or challenge we were actually facing.

    For the most part, we could deflect these suggestions through thoughtful responses, explaining that we were already doing what they suggested, or why it wasn’t relevant to the actual problem we were solving.

The more draining interactions were with leaders who wanted to prescribe how a model should be built, sometimes implicitly but often explicitly. They wanted the algorithm to reflect their vision of what the system should do, based on their intuition, even when that vision wasn’t technically feasible or even relevant to the problem at hand. They would prescribe what data should be used, what data should be excluded, or how a model should be trained.

They tried to influence how predictions were interpreted. They challenged results that didn’t feel intuitive, even when the outcomes were reproducible, measurable, and backed by data. This sometimes forced another round of work: training a separate, intuition-driven model, followed by a side-by-side comparison that consistently showed the data science approach outperforming the one built on gut feel.

    I’ve sat with some of those same executives to review the results of a model, only to be met with their disappointment, especially when they felt there should be a logical, straightforward solution to a problem that was so complex that it couldn’t be solved or even attempted without AI.

They would question why the predictions weren’t 100% accurate. Even when I pointed out that we were predicting the future from past data, and that a model right 60% of the time was a vast improvement over a human-driven process that was right only 10% of the time, their questions focused on achieving an impossible 100%. They’d leave value on the table chasing perfection when they could improve a process now and refine it over time. Or they’d go off on tangents and hyperfocus on edge cases that had no solution, and often no data with which to even attempt to train a model.

If the performance was impressive, they’d move the goalposts by setting an arbitrarily higher bar, or they’d switch from the measurable metric to a different one, or to an abstraction like “trust,” without providing direction on what that meant or how to measure it. Trust in what? Accuracy? Fairness? Transparency? Nobody could say. When asked for clarification, the answer was the classic Justice Potter Stewart line: “I know it when I see it.”

    In the end, it always seemed like AI was a disappointment. Unless it could solve 100% of a problem 100% of the time, no matter how complex or how poorly humans performed before, leaders would keep chasing a unicorn, while ignoring the perfectly capable, faster horse already in the stable.

    Over and over, the pattern was the same: impossible expectations, misunderstanding of the tools, and a tendency to chase magical thinking over measurable progress.

    Here We Go Again

    Fast forward a few years, and we’re back at it. Only this time, the technology looks smarter. LLMs have reignited AI’s promise with a seductive twist: they speak fluently. They write. They reason. They respond. And for many, that’s been enough to assume that LLMs also understand.

    But just like before, we’ve let the illusion get ahead of the reality.

    While LLMs make it easier than ever to demo something impressive, they haven’t made it easier to deliver something useful. Underneath the conversational surface, the same problems persist: inaccessible data, unclear problems, and unrealistic expectations. In fact, the expectations are even worse now, because the technology feels like it’s already “there.”

    I’ve seen teams leap into building generative AI “solutions” without a clear understanding of what problems they’re solving. I’ve seen leadership get swept up in generative demos and approve massive budgets to chase abstract goals like “productivity” or “creativity” without metrics, definitions, or infrastructure.

    The same pattern is playing out again. Impossible expectations, except this time they’re even higher. A misunderstanding of the tools, especially when it comes to differentiating the hype from the reality. And the same magical thinking chasing a hypothetical problem rather than focusing on a real problem with measurable outcomes.

    Two years in, we’re starting to see the same disappointment creep in again, too. The unrealized expectations, longer timelines, and lack of returns on the investments.

    What Useful AI Actually Looks Like

    Useful AI doesn’t always look like magic. In fact, the most valuable AI systems I’ve seen rarely impress anyone in a demo. They don’t write poetry, simulate conversation, or generate pitch decks with a prompt. They just quietly make things better—faster, cheaper, more consistent, more scalable.

    They are, by most standards, boring.

    A model that flags billing anomalies in a healthcare system might save millions. A classifier that routes customer service tickets to the right team might shave minutes off every support interaction. An optimization algorithm that suggests more efficient delivery routes could reduce fuel costs, improve ETAs, and shrink carbon footprints. None of these use generative AI. None of them are headline-worthy. But all of them create real value.

    And unlike a chatbot that sometimes gives the wrong answer with great confidence, these systems are narrow by design. Purpose-built. Measured. Tightly integrated into workflows and optimized over time. They don’t need to sound human. They just need to work.

    We often overlook this kind of AI because it’s not exciting to watch. It doesn’t feel like the future. But that’s exactly the point: the best AI doesn’t draw attention to itself. It dissolves into the process, making things work better than they did before.

    The Opportunity Cost of the Hype

    The hype around LLMs has sparked a renewed interest in AI, but it’s also warped our sense of what progress looks like. Instead of focusing on impact, we’ve become obsessed with spectacle.

Executives watch a demo of a chatbot that answers questions with a human-like cadence and see the realization of the vision they’ve had for AI all along. Suddenly, every team is greenlit to build a “copilot.”

    LLMs make it easy to show something impressive. A few prompts, a fancy UI, and you’ve got a prototype that feels like innovation. But most of these tools don’t stand up to basic scrutiny. They hallucinate. They break when connected to real systems. They introduce ambiguity into workflows that once relied on clarity. They create new risks—ethical, operational, and technical—that teams are often unprepared to manage.

    We’re pouring talent, time, and money into building AI wrappers around problems we haven’t defined. Meanwhile, the infrastructure work that would actually make AI useful—cleaning data, improving feedback loops, building explainable systems—is neglected.

This is the cost of chasing the hype. Years of expensive effort with little or no realized value, certainly not at the scale that was promised. Real problems that didn’t require generative AI, but could have been solved and delivered real value, were ignored or neglected. Two years in, it turns out we’ve been sprinting on a treadmill. We’ve spent the energy, but we’re still in the same place.

    To be clear, there’s nothing wrong with experimentation. But exploration without a clear problem or success metric isn’t innovation—it’s expensive theater. It gives the illusion of progress while distracting from work that actually moves the needle.

    And we should know the difference because we’ve been here before. We let unrealistic expectations undermine the progress of last-generation AI. Now, we’re doing it faster. We’re skipping the discipline that made the old models work and replacing it with a dopamine hit from a prompt that feels smart.

    A Better Mindset: Value Over Novelty

    If the last two waves of AI taught us anything, it’s this: the technology is only as good as the problems we point it at and the people we trust to solve them.

    Too often, we let novelty set the direction. We ask, “What can we build with this?” instead of “What’s worth solving?” But even when we pick the right problems, we don’t always empower the right people to do the work.

    Instead, we’re seeing a return of the same behavior that stalled progress last time: leaders prescribing not just what to solve, but how to solve it. Dictating which data to use. Demanding specific architectures. Redefining outcomes midstream based on intuition instead of evidence. In some cases, they’re building solutions backwards from a flashy demo instead of forward from a real need.

    This isn’t strategy—it’s armchair data science all over again.

    And it’s especially risky now, because LLMs make it even easier to look smart without being right. It’s one thing to brainstorm ideas. It’s another to second-guess trained experts who understand the constraints, trade-offs, and mechanics of building something that works.

    A better mindset means more than just optimizing for usefulness. It means creating space for people with real expertise—data scientists, engineers, researchers, designers—to lead the “how.”

    It means:

    • Letting evidence drive decisions, not gut instinct or LinkedIn hype.
    • Empowering teams to solve, not just execute.
    • Recognizing that success isn’t always intuitive—and being okay with that.

    Adopting this mindset doesn’t mean ignoring new tech. It means respecting the disciplines that make that tech useful. It means pairing vision with humility, ambition with trust.

    Because there’s nothing wrong with being impressed by what’s possible.

    But if we’re serious about delivering real value with AI, we have to get out of our own way—and let the experts do their jobs.

  • The White Whale

    In Moby-Dick, Captain Ahab’s relentless pursuit of the white whale isn’t just a quest for revenge; it’s a cautionary tale about obsession. Ahab becomes so consumed by his singular goal that he ignores the needs of his crew, the dangers of the voyage, and the possibility that his mission might be misguided.

    This mirrors a common trap in problem-solving: becoming so fixated on a single solution—or even the idea of being the one to solve a problem—that we lose sight of the bigger picture. Instead of starting with a problem and exploring the best ways to address it, we often cling to a solution we’re attached to, even if it’s not the right fit or takes us away from solving the actual problem.

    A Cautionary Tale

    Call me Ishmael.1 – Herman Melville

I once worked on a project to identify potential customer issues. The business provided the context and success metrics, and we were part of the team that set out to solve the problem.

    After we started, an executive on the project who knew the domain had a specific vision for how the solution should work and directed us on exactly what approach to use and how to implement it. While their approach seemed logical to them, it disregarded key best practices and alternative solutions that could have been more effective.

    We ran experiments to test both the executive’s approach and an alternative, using data to demonstrate how a different approach produced better results and would improve business outcomes.

But the executive was undeterred. They shifted resources and dedicated teams to their solution, intent on making it work. We continued a separate effort in parallel, but without the resources or backing received by the other team.

    The Crew

    Like the crew of the Pequod, the teams working on the executive’s solution were initially excited about the attention and resources. They came up with branding and a concept that made for good presentations. The initial few months were spent creating an architecture and building data pipelines under the presumption that the solution would work. Each update gave a sense of progress and success as items were crossed off the checklist.

That success, though, was based on output, not outcomes. Along the way, the business results weren’t there, and team members began to question the approach. However, even with these questions and the evidence that our approach was improving business outcomes, the hierarchical chain of command kept the crew from changing course.

    The Prophet

In Moby-Dick, Captain Ahab smuggles Fedallah, an almost supernatural harpooner, onto the ship as part of a hidden crew. Fedallah is a mysterious figure who serves as Ahab’s personal prophet, foretelling Ahab’s fate.

Looking for a prophet of their own, our executive brought in a consulting firm to see if they could get the project on track. The firm’s recommendations largely mirrored those of our team. However, like Fedallah’s prophecies, the recommendations were misinterpreted. What we saw as clear signals to change course, the executive read as a chance at success, and they doubled down on their solution.

    The Alternate Mission

    Near the end of the novel, the captain of another vessel, the Rachel, pleads with Ahab to help him find his missing son, lost at sea. Ahab refuses because he is too consumed by his revenge. Ultimately, the obsession costs Ahab his life as well as those of his crew, with the exception of Ishmael, who was, ironically, rescued by the Rachel, the whaling ship that had earlier begged Ahab for help.

We tried to bridge the gap between the two efforts for years, but the executive’s fixation on their solution made collaboration impossible. We made a strong case, backed by data, to change the mission from making their solution work to refocusing on the business goals and outcomes. Unfortunately, after many attempts, we weren’t able to convince them or overcome their bias and conviction that their solution should work. Too many claims had already been made, and too much had been invested to change course. The success of their solution was the only acceptable end of the journey, with that success always just over the horizon.

    A Generative White Whale

    I’ve been thinking about this story lately because I see the same pattern happening with generative AI. Just as Captain Ahab chases Moby Dick, many companies chase technological solutions without fully understanding if those solutions will solve their real business problems.

Since ChatGPT launched to the public in 2022, there has been pressure across industries to deliver on generative AI use cases. The impressive speed at which users signed up and the ease with which ChatGPT could respond to questions gave the appearance of an easy implementation path.

    Globally, roadmaps were blown up and rebuilt with generative AI initiatives. Traditional intent classification and dialog flows were replaced with large language models in conversational AI and customer support projects. Retrieval-augmented generation changed search and summarization use cases.

Then, the world tried to use it. Companies quickly learned that the models didn’t work out of the box and underestimated the amount of human oversight and iteration needed to get reliable, trustworthy results.2 They learned that their data wasn’t ready to be consumed by these models and underestimated the effort required to clean, label, and structure it for generative AI use cases. They learned about hallucinations, toxic and dangerous language in responses, and the need for guardrails.

    But the ship had sailed. The course had been set. Roadmaps represent unchangeable commitments3. The mission to hunt for generative AI success continued.

    What started with use cases with clear business outcomes inherited from the pre-generative AI days started to change. Rather than targeting problems that could significantly impact business goals, the focus shifted to finding problems that could be solved with generative AI. Companies had already invested too much time, money, and opportunity cost, and they needed to deliver something of value to justify the voyage.4,5

    It became an obsession.

    A white whale.

    Chasing the Right Whale

    I try all things, I achieve what I can.6 – Herman Melville

    That’s not to say there isn’t a place for generative AI or other technology as possible solutions. I’ve been working with AI for almost a decade and have seen how it can be truly powerful and transformative when applied to the right use case that aligns with business outcomes and solving customer or business problems.

    Experimenting with the technology can foster innovation and uncover new opportunities. However, when the organization shifts focus away from solving its most critical business problems and towards delivering a solution or leveraging a specific technology for the sake of the solution or the technology, misalignment between those two paths and choosing the wrong goal can put the entire mission at risk. The mission should always be the success of the business, not the technology.

    That’s the difference between chasing the white whale and chasing the right whale.

    Assess Your Mission

The longer a project goes on, the more likely it is to veer off course. Small choices over time make small adjustments to direction that can eventually leave you far from the intended destination. The same thing can happen with the overall mission. Ahab started his journey hunting whales for resources and, while he was still technically hunting a whale, his mission changed to revenge. Had he taken the time to reassess his position and motivation, Moby-Dick would have had a less dramatic ending.

As product and delivery teams, it’s a healthy practice to occasionally look up and evaluate the current position and trajectory. While there may be an argument for intuition in the beginning, as more information becomes available, it’s important to rely on data and critical thinking rather than intuition and feelings, which are more prone to bias.

    These steps can help guide that process.

1. Reaffirm Business and Customer Priorities

    Align leadership around the most critical problems. Start by revisiting the company’s core objectives and defining success. Then, identify the biggest challenges facing the business and customers before considering solutions.

    2. Audit and Categorize Existing Projects

    Identify low-impact or misaligned projects. List all ongoing and planned AI initiatives, categorizing them based on:

    • Business impact (Does it solve a top-priority problem?)
    • Customer impact (Does it improve user experience or outcomes?)
    • Strategic alignment (Is it aligned with company goals, or is it just chasing trends?)

An important factor here is articulating and measuring how the initiative impacts business and customer goals, rather than merely how it relates to them.

    For example, a common chatbot goal is to reduce support costs (business goal) by answering customer questions (customer goal) without the need to interact with a support agent. A project that uses generative AI to create more natural responses might look like it’s addressing a need, but it assumes that a more conversational style will increase adoption or improve outcomes. However, making responses more conversational doesn’t necessarily make them more helpful. If the chatbot still struggles with accurate issue resolution, customers will escalate to an agent anyway.

    3. Assess Generative AI’s Fit

    Ensure generative AI is a means to an end, not the goal itself.

Paraphrasing a mantra I used whenever a team approached me with an “AI problem” to solve:

    There are no (generative) AI problems. There are business and customer problems for which (generative) AI may be a possible solution.

    For each project, ask: Would this problem still be worth solving without generative AI?

    If a generative AI project has a low impact, determine if there’s a higher-priority problem where AI (or another solution) could create more value.

    4. Adjust the Roadmap with a Zero-Based Approach

    Rather than tweaking the existing roadmap, start from scratch by prioritizing projects based on impact, urgency, and feasibility.

    Reallocate resources from lower-value AI projects to initiatives that directly improve business and customer outcomes.

    5. Set Success Metrics and Kill Switches

    Define clear, measurable success criteria for every project. Establish a review cadence (e.g., every quarter) to assess whether projects deliver value. If a project fails to meet impact goals, have a predefined exit strategy to stop work and shift resources.

    This structured approach ensures that AI projects are evaluated critically, business needs drive technology decisions, and resources are focused on solving the most important problems—not just following trends.

    Conclusion

    The lesson of Moby-Dick is not just about obsession—it’s about losing sight of the true mission. Ahab’s relentless pursuit led to destruction because he refused to reassess his course, acknowledge new information, or accept that his goal was misguided. In business and technology, the same risk exists when companies prioritize solutions over problems and fixate on a specific technology rather than its actual impact.

    Generative AI holds incredible potential, but only when applied intentionally and strategically. The key is to stay grounded in business priorities, customer needs, and measurable outcomes—not just the pursuit of AI for AI’s sake. By regularly evaluating projects, questioning assumptions, and ensuring alignment with meaningful goals, teams can avoid chasing white whales and steer toward solutions that drive success.

    The difference between success and failure isn’t whether we chase a whale—it’s whether we’re chasing the right one.

    And I only am escaped alone to tell thee.7 – Herman Melville

    1. “Call me Ishmael.” This is one of the most famous opening lines in literature. It sets the tone for Ishmael’s role as the narrator and frames the novel as a personal account rather than just an epic sea tale. ↩︎
    2. https://www.cio.com/article/3608157/top-8-failings-in-delivering-value-with-generative-ai-and-how-to-overcome-them.html ↩︎
    3. Roadmaps are meant to be flexible and adjusted as priorities and opportunities change. ↩︎
    4. https://www.journalofaccountancy.com/issues/2025/feb/generative-ais-toughest-question-whats-it-worth.html ↩︎
    5. https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025 ↩︎
    6. This quote from Ishmael reflects a spirit of perseverance and pragmatism, emphasizing the importance of effort and adaptability in the face of challenges. ↩︎
    7. The closing line of the novel echoes the biblical story of Job, in which a lone survivor brings news of disaster, underscoring the novel’s themes of fate, obsession, and destruction. ↩︎
  • In Defense of One-on-Ones

    Earlier this week, a former colleague forwarded me a video of Airbnb CEO Brian Chesky in an interview with Fortune saying he doesn’t believe in one-on-one meetings.

    The full context might reveal nuances specific to certain managerial levels. For example, he could have been referring to CEOs having one-on-ones with the rest of the C-suite who report to them. However, most of his references were to “employees,” and most of the comments on the video seem to generalize across all one-on-ones. Based on the video and related commentary, there appears to be growing skepticism about the value of one-on-ones.

    I’ve worked for bosses who didn’t see the value in one-on-ones, and I’ve worked for bosses who would use them to drive their agenda. When managers ignore or misuse one-on-ones, employees feel undervalued, disconnected, and unsupported.

    I’ve also been fortunate to work for bosses who modeled what a one-on-one should be. When managers prioritize regular one-on-ones, employees feel heard, supported, and valued. This fosters trust, alignment, and engagement, benefiting both the employee and the organization.

    Based on my experience, getting rid of one-on-ones is a terrible idea. (However, I favor getting rid of bad one-on-ones.)

    A few comments from the video highlighted misconceptions about one-on-ones or are signals of a bad one-on-one.

    “The employee owns the agenda. And what happens is they often don’t talk about the things you want to talk about.”

    One-on-ones are more than just about the agenda — they’re about building trust, understanding what motivates your team, and catching minor issues before they become big problems. Even if the employee’s agenda doesn’t directly overlap with your immediate goals, it gives you insight into what they’re thinking and feeling, which can help you guide them more effectively.

    That said, there are ways to make the meeting productive for both sides. The beauty of one-on-ones is that they’re a two-way conversation. While employees should have space to bring up what’s important to them, the manager also has an opportunity to steer the conversation toward topics they find valuable. It doesn’t have to be one or the other — it can be a balance.

    One way to address this is by co-creating the agenda. Before each meeting, you could ask the employee to suggest a couple of items they want to discuss, and you can add one or two topics that align with what you want to address. That way, both sides feel heard, and the meeting stays focused.

    In the spirit of the statement in the video, resist the temptation to control the full agenda. Remember, the one-on-one should be about the employee…not everything needs to be about you.

    “You become their therapist” and “They’re bringing you problems but often times they’re bringing you problems that you want other people in the room to hear. In other words, there’s very few times an employee should come to you one-on-one without other people.”

One aspect of this comment I agree with is that some conversations are more appropriate in a team setting. Employees sometimes raise work or project-status topics in one-on-ones, such as surfacing a new issue, that they should bring to the full team. The nod to Jensen Huang’s quote, “I don’t do one-on-ones because I want everyone to be part of the solution and get the wisdom,” is appropriate.

    But often, employees bring these topics up because they think that’s what their manager wants to hear. After all, that’s the only thing their manager asks about in one-on-ones.

Sometimes, an employee, especially a junior one, doesn’t know how to bring a difficult topic to the team. As a leader, those moments present opportunities to coach them and help shape how they bring those items to the team in a way that supports your organization’s culture.

    Finally, if an employee is bringing you items that are genuinely more appropriate for a therapist, you, as a leader, should set those boundaries and guide them to a more appropriate forum. However, don’t dismiss these topics when they relate to workplace well-being…the employee may be asking for accommodations, not solutions.

    “If they’re concerned about something, if they’re having a difficult time in their personal life, if they want to confide in something; they don’t feel safe telling a group. But that should be infrequent.”

    As I mentioned above, if the employee brings challenges in their personal life, look for opportunities to provide accommodations, not solutions. If the employee expects more and you are not willing, capable, or permitted to engage further, guide them to appropriate resources.

    But if they don’t feel safe bringing a topic to a group, coming to you is a gift. It’s a sign that your team has a perceived lack of safety or a potentially unhealthy dynamic that needs to be addressed.

    Also, while these items should be infrequent, this is not an exhaustive list of topics appropriate for a one-on-one conversation. Career development, goals, feedback, and recognition should be regular topics, too.

    Even with the expanded list of potential topics, there’s also no requirement that one-on-ones be weekly. It’s less about the frequency and more about the regularity and building a strong, trusting relationship that empowers the employee to thrive.

    A final note on bad one-on-ones…

    One of my first managers to schedule one-on-ones (probably from a corporate directive) said, “We’re going to have one-on-ones. Send me an agenda beforehand.”

    “Send me an agenda” is ambiguous and intimidating, especially for junior employees. I had no other context, no list of suggested topics, and no idea what I was doing. I would send a status-focused agenda because that’s all I knew. I don’t think either of us got much out of those meetings.

    Eventually, I reported to a different manager who also scheduled regular one-on-ones. When I sent the agenda, my manager stopped by to apologize for assuming that I understood the purpose of the meeting. They helped me move from a status-focused agenda to one that balanced my work, career, and where I needed help. I felt seen and supported, and that showed up in my work.

    While both the employee and the manager can be responsible for bad one-on-one meetings, the balance of responsibility skews toward the manager because they hold the leadership role and set the tone for the meetings.

    As a leader, take the responsibility and focus it on what is best for your employees and organization.

    Don’t abolish one-on-ones.

    Make them better.