David Monnerat

Dad. Husband. Product + AI. Generalist. Endlessly Curious.

Author: dave

  • It’s Agentic! (Boogie Woogie, Woogie)

    It’s Agentic! (Boogie Woogie, Woogie)

    You can’t see it
    It’s electric boogie woogie, woogie
    You gotta feel it
    It’s electric boogie woogie, woogie
    Ooooh, it’s shocking
    It’s electric boogie woogie, woogie

    Marcia Griffiths

    Just as the shine of generative AI began to wear off against the reality of actually using it, a new buzzword has re-entered the lexicon: agentic.

    The term “agentic” refers to AI systems that exhibit autonomous, goal-directed behavior. These systems go beyond simply generating responses or content based on input—they act as agents capable of making decisions, taking actions, and adapting to achieve specific objectives in dynamic environments.

    The concept of autonomous agents is not new. We’re bringing back the classics. In Artificial Intelligence: A Modern Approach (1995), Stuart Russell and Peter Norvig defined an agent as anything that perceives its environment and acts upon it.
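
    That classic perceive-and-act framing can be sketched in a few lines of code. This is only an illustration: the perceive, decide, and act functions below are hypothetical placeholders for whatever sensors, policy, and actuators a real agent would use.

    ```python
    # Minimal sketch of the classic agent loop: perceive the environment,
    # decide on an action, act, and repeat until the goal is reached.
    # All of the callables passed in are hypothetical placeholders.
    def run_agent(environment, goal_reached, perceive, decide, act):
        """Drive a simple perceive-decide-act loop against an environment."""
        while not goal_reached(environment):
            observation = perceive(environment)      # sense the current state
            action = decide(observation)             # choose an action toward the goal
            environment = act(environment, action)   # apply the action, observe the result
        return environment
    ```

    In today’s agentic systems, the decide step is typically backed by a large language model, and the act step by a set of tools or APIs the agent is allowed to call.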

    The idea that we can automate routine tasks to free up people for more challenging, more creative work is noble and has been a rallying cry of computers and software since their inception. In his essay “As We May Think,” Vannevar Bush imagined a device called the “Memex” to help humans store, organize, and retrieve information efficiently, aiming to reduce mental drudgery and aid creativity.

    Computers were first used to automate repetitive, time-consuming industrial tasks, especially in manufacturing. Early pioneers recognized that this freed humans for more complex supervisory roles.

    As computers and software became more accessible, researchers explored “expert systems” that were designed to take over repetitive knowledge-based tasks to allow professionals to focus on more challenging problems.

    Today, generative AI tools like ChatGPT, GitHub Copilot, and others are attempting to fully realize this concept by automating tasks like writing, coding, design, and data analysis, allowing humans to concentrate on strategy, creativity, and innovation.

    But since the mainstream generative AI boom in 2022, which saw the public availability of ChatGPT and GitHub Copilot, and the expansion in 2023 and 2024 as more competitors entered the space, the challenges of bringing this technology into the enterprise and driving meaningful value have dampened the early enthusiasm.

    That’s not to say that generative AI hasn’t been valuable. Many enterprises report productivity gains [1, 2] from early use cases like code generation, knowledge management, content generation, and marketing. However, several challenges have made it more difficult to scale and adopt generative AI more broadly.

    Hallucinations, where outputs are factually incorrect, fabricated, or nonsensical despite appearing plausible or confident, introduce risk and distrust into generative AI solutions. Toxic and harmful language, including encouragement of self-harm and suicide [3, 4], further exposes companies and reduces their willingness to put customers directly in front of generative AI output.

    Introducing generative AI has also highlighted systemic internal issues. Knowledge management use cases were seen as straightforward, low-risk ways to leverage generative AI. For example, retrieval-augmented generation (RAG) allows users to get context-aware answers by combining AI-generated content with real-time retrieval of relevant information from sources like internal documents and databases.
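
    As a rough sketch of the RAG pattern (not any particular product), the flow is: retrieve relevant passages first, then ask the model to answer using only that context. The retrieve_documents and generate_answer functions below are hypothetical placeholders for a real vector-store query and a real LLM call.

    ```python
    # Sketch of a retrieval-augmented generation (RAG) flow.
    # retrieve_documents() and generate_answer() are hypothetical placeholders
    # for a vector-store similarity search and an LLM completion call.
    def answer_with_rag(question, retrieve_documents, generate_answer, top_k=3):
        """Answer a question using retrieved context rather than the model's memory alone."""
        passages = retrieve_documents(question, top_k=top_k)  # e.g., top-k similar documents
        context = "\n\n".join(passages)
        prompt = (
            "Answer the question using only the context below. "
            "If the context does not contain the answer, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return generate_answer(prompt)  # hand the grounded prompt to the LLM
    ```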

    But what happens when that documentation is missing, outdated, or incorrect? What if the documentation is ambiguous or contradicts itself? What if the documentation is not in a format easily consumed by the RAG system? Generative AI is not a technical fix for poor documentation and data. As the saying goes, “Garbage in. Garbage out.”

    While the challenges above are relevant to most generative AI implementations, one that applies specifically to agentic AI and agents relates to business processes.

    Agentic AI relies on well-defined tasks, workflows, goals, and a clear understanding of how processes operate to function effectively. If processes are unclear or undocumented and data is inconsistent, incomplete, or unavailable, the AI may struggle to execute tasks properly or optimize workflows.

    Generative AI and agents are not a technical solution to inefficient processes, just as AI isn’t a solution to bad data. Automating an inefficient process with an agent could reinforce and scale those inefficiencies, creating more bottlenecks or errors.

    Companies often prefer introducing new technology rather than spending resources updating outdated documentation or optimizing processes. These challenges highlight the risks of skipping those steps, especially when agents can execute transactions automatically and interact directly with customers. The possibilities of financial and reputational damage by a wayward agent are dangerously real.

    However, the lure of automation and operational efficiency is strong, and the landscape of offerings in the agentic AI space continues to grow. In 2024, the market size was estimated to be between $5B and $31B [5, 6]. By 2032, the market is projected to reach approximately $48.5B [7]. Like a lottery jackpot, that dollar figure is causing companies to forget the struggles of implementing non-agentic generative AI in pursuit of a big payoff through automation. But what opportunities to improve business and customer outcomes without agents (or even AI) are missed while chasing that payoff?

    That’s not to say there isn’t a place for agentic AI. Similar to the gains seen from generative AI, the ecosystem of conversational, natural-language, multi-model, and adaptive agents can be a powerful tool to solve complex problems and drive value. However, it will take time, because work must be done before that value can be fully realized. To paraphrase a quote: the road to generative AI (and agentic AI) is clearer than ever before, but it’s much longer than we thought.

    Recommendations

    While we travel that road to the promised land, there are a few areas companies can focus on to prepare for an agentic world:

    Invest in documentation management and data quality. If previous AI projects failed due to poor documentation or data, an AI agent will likely have the same fate as its predecessor. Companies may see incremental gains through this effort because it’s likely that poor data and documentation are creating inefficiencies. For example, poor documentation can cause support agents to struggle to find answers, causing longer handling times.

    Invest in process optimization. The simpler a process is, the more likely it can be automated. I’ve found that companies want to keep their complex processes, which humans often find challenging to navigate, and think that automating them is a faster path to efficiency gains. The reality, however, is that complex processes have a long tail of edge cases that cause automation to break down, require extensive troubleshooting and tuning, and cancel out value.

    Simplify architectures and APIs. One aspect of autonomy for agentic AI is access to tools and functions that the agent can execute to act. An agent cannot effectively utilize complex APIs that wrap multiple functions and are not well-instrumented.
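
    As an illustrative, hypothetical contrast, a narrowly scoped tool with a clear name, typed arguments, and a docstring is far easier for an agent to select and call correctly than a catch-all endpoint that multiplexes several operations behind one loosely defined payload.

    ```python
    # Hypothetical tool definitions an agent could be given. Names and fields
    # are illustrative, not from any real system.

    # Easier for an agent: one narrow, well-described function per operation.
    def get_order_status(order_id: str) -> dict:
        """Return the current status of a single order."""
        ...

    def cancel_order(order_id: str, reason: str) -> dict:
        """Cancel an order and return a confirmation record."""
        ...

    # Harder for an agent: one endpoint that hides many operations behind
    # a loosely defined payload and an 'action' flag.
    def manage_order(payload: dict) -> dict:
        """Perform 'status', 'cancel', 'refund', or 'update' depending on payload['action']."""
        ...
    ```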

    Focus on risk mitigation. As mentioned above, generative AI and agentic AI introduce risks, including hallucinations, toxic and harmful language, and a lack of oversight and controls. If the best time to plant a tree is 20 years ago, the best time to implement guardrails and controls is before introducing agents. As business processes are reviewed, optimized, and documented, attention should be paid to identifying and securing vulnerable points.
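
    As a minimal, hypothetical illustration of that kind of control, an execution gate can check every action an agent proposes against an allow-list and a spend limit before anything runs; everything else goes to a person. The tool names and limits below are made up for the sketch.

    ```python
    # Hypothetical guardrail sketch: gate each action an agent proposes
    # before it executes. Allow-list, limits, and action shape are illustrative.
    ALLOWED_ACTIONS = {"get_order_status", "send_status_email"}
    MAX_AUTONOMOUS_REFUND = 0   # refunds require human approval in this sketch

    def approve_action(action: dict) -> bool:
        """Return True only if the proposed action passes basic guardrails."""
        if action.get("name") not in ALLOWED_ACTIONS:
            return False                                # unknown or unapproved tool
        if action.get("amount", 0) > MAX_AUTONOMOUS_REFUND:
            return False                                # exceeds autonomous spend limit
        return True

    def execute_with_guardrails(action: dict, execute, escalate_to_human):
        """Run approved actions; route everything else to a human."""
        if approve_action(action):
            return execute(action)
        return escalate_to_human(action)
    ```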

    Identify small use cases with low risk and high value. It can be tempting to throw agentic AI at the biggest or most expensive problem to maximize the return on investment. However, starting with complex, high-stakes processes increases the likelihood of errors, inefficiencies, and stakeholder resistance. Instead, focus on areas where agentic AI can deliver quick wins. This approach allows teams to refine their understanding of the technology, build trust, and develop best practices before scaling to more critical or complex use cases.

    Consider non-agentic and non-AI solutions. Of all the recommendations, this one will likely generate the most resistance as companies are pushing towards the promise of generative AI and agentic AI solving all problems. Improving customer service or reducing call volume through better internal documentation or website search won’t generate enough buzz to show up in a news feed. There is so much pressure to find problems that can be solved with generative AI, forcing a solution-first, technology-first mindset. Ultimately, it should never be about the technology. It should be about the outcomes. It should be about improving our customers’ and employees’ lives and experiences and the value we bring to the business. Start with a problem or pain point and work backward. Consider all possible solutions, and choose the one most likely to succeed, even if it doesn’t feed the hype.

    Conclusion

    While the allure of agentic AI is undeniable, achieving its promised potential requires deliberate preparation, thoughtful execution, and a focus on foundational improvements.

    Companies must resist the urge to chase the hype and prioritize efforts that enhance data quality, streamline processes, and establish robust risk mitigation strategies.

    Starting with low-risk, high-value use cases can build momentum, trust, and a clear path to scalable adoption. At the same time, leaders should remain open to non-agentic and non-AI solutions that more effectively and sustainably address pain points.

    Ultimately, the goal should not be to implement the latest technology for its own sake but to deliver meaningful outcomes that enhance customer experiences, empower employees, and drive long-term business value.

    The journey toward agentic AI may be longer and more complex than we thought, but with the right approach, we can significantly increase the likelihood of realizing its full value.

    You can’t see it. You gotta feel it. Ooooh, it’s shocking. It’s agentic.

    Footnotes

    1. https://cloud.google.com/resources/roi-of-generative-ai
    2. https://www.wsj.com/articles/its-time-for-ai-to-start-making-money-for-businesses-can-it-b476c754
    3. https://gemini.google.com/share/6d141b742a13
    4. https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
    5. https://www.emergenresearch.com/industry-report/agentic-artificial-intelligence-market
    6. The wide range in the 2024 figure is likely due to differing methodologies or market definitions.
    7. https://dataintelo.com/report/agentic-ai-market
  • In Defense of One-on-Ones

    In Defense of One-on-Ones

    Earlier this week, a former colleague forwarded me a video of Airbnb CEO Brian Chesky in an interview with Fortune saying he doesn’t believe in one-on-one meetings.

    The full context might reveal nuances specific to certain managerial levels. For example, he could have been referring to CEOs having one-on-ones with the rest of the C-suite who report to them. However, most of his references were to “employees,” and most of the comments on the video seem to generalize across all one-on-ones. Based on the video and related commentary, there appears to be growing skepticism about the value of one-on-ones.

    I’ve worked for bosses who didn’t see the value in one-on-ones, and I’ve worked for bosses who would use them to drive their agenda. When managers ignore or misuse one-on-ones, employees feel undervalued, disconnected, and unsupported.

    I’ve also been fortunate to work for bosses who modeled what a one-on-one should be. When managers prioritize regular one-on-ones, employees feel heard, supported, and valued. This fosters trust, alignment, and engagement, benefiting both the employee and the organization.

    Based on my experience, getting rid of one-on-ones is a terrible idea. (However, I favor getting rid of bad one-on-ones.)

    A few comments from the video highlighted misconceptions about one-on-ones or are signals of a bad one-on-one.

    “The employee owns the agenda. And what happens is they often don’t talk about the things you want to talk about.”

    One-on-ones are more than just about the agenda — they’re about building trust, understanding what motivates your team, and catching minor issues before they become big problems. Even if the employee’s agenda doesn’t directly overlap with your immediate goals, it gives you insight into what they’re thinking and feeling, which can help you guide them more effectively.

    That said, there are ways to make the meeting productive for both sides. The beauty of one-on-ones is that they’re a two-way conversation. While employees should have space to bring up what’s important to them, the manager also has an opportunity to steer the conversation toward topics they find valuable. It doesn’t have to be one or the other — it can be a balance.

    One way to address this is by co-creating the agenda. Before each meeting, you could ask the employee to suggest a couple of items they want to discuss, and you can add one or two topics that align with what you want to address. That way, both sides feel heard, and the meeting stays focused.

    In the spirit of the statement in the video, resist the temptation to control the full agenda. Remember, the one-on-one should be about the employee…not everything needs to be about you.

    “You become their therapist” and “They’re bringing you problems but often times they’re bringing you problems that you want other people in the room to hear. In other words, there’s very few times an employee should come to you one-on-one without other people.”

    One aspect of this comment I agree with is that some conversations are more appropriate in a team setting. Employees sometimes bring up work or project status, such as a newly surfaced issue, that they should raise with the full team. The nod to Jensen Huang’s quote, “I don’t do one-on-ones because I want everyone to be part of the solution and get the wisdom,” is appropriate.

    But often, employees bring these topics up because they think that’s what their manager wants to hear. After all, that’s the only thing their manager asks about in one-on-ones.

    Sometimes an employee, especially a junior one, doesn’t know how to bring a difficult topic to the team. As a leader, you can use those moments to coach them and help shape how they bring those items to the team in a way that supports your organization’s culture.

    Finally, if an employee is bringing you items that are genuinely more appropriate for a therapist, you, as a leader, should set those boundaries and guide them to a more appropriate forum. However, don’t dismiss these topics when they relate to workplace well-being…the employee may be asking for accommodations, not solutions.

    “If they’re concerned about something, if they’re having a difficult time in their personal life, if they want to confide in something; they don’t feel safe telling a group. But that should be infrequent.”

    As I mentioned above, if the employee brings challenges in their personal life, look for opportunities to provide accommodations, not solutions. If the employee expects more and you are not willing, capable, or permitted to engage further, guide them to appropriate resources.

    But if they don’t feel safe bringing a topic to a group, coming to you is a gift. It’s a sign that your team has a perceived lack of safety or a potentially unhealthy dynamic that needs to be addressed.

    Also, while these items should be infrequent, this is not an exhaustive list of topics appropriate for a one-on-one conversation. Career development, goals, feedback, and recognition should be regular topics, too.

    Even with the expanded list of potential topics, there’s also no requirement that one-on-ones be weekly. It’s less about the frequency and more about the regularity and building a strong, trusting relationship that empowers the employee to thrive.

    A final note on bad one-on-ones…

    One of my first managers to schedule one-on-ones (probably from a corporate directive) said, “We’re going to have one-on-ones. Send me an agenda beforehand.”

    “Send me an agenda” is ambiguous and intimidating, especially for junior employees. I had no other context, no list of suggested topics, and no idea what I was doing. I would send a status-focused agenda because that’s all I knew. I don’t think either of us got much out of those meetings.

    Eventually, I reported to a different manager who also scheduled regular one-on-ones. When I sent the agenda, my manager stopped by to apologize for assuming that I understood the purpose of the meeting. They helped me move from a status-focused agenda to one that balanced my work, career, and where I needed help. I felt seen and supported, and that showed up in my work.

    While both the employee and the manager can be responsible for bad one-on-one meetings, the balance of responsibility skews toward the manager because they hold the leadership role and set the tone for the meetings.

    As a leader, take the responsibility and focus it on what is best for your employees and organization.

    Don’t abolish one-on-ones.

    Make them better.

  • The Humanity In Artificial Intelligence

    The Humanity In Artificial Intelligence

    I wrote this essay in 2017. When I restarted the blog, I removed the posts that had already been published. But after rereading this one, I felt that, while the technology has advanced significantly since then, the sentiment still applies today.

    Dave, January 2025


    Algorithms, artificial intelligence, and machine learning are not new concepts. But they are finding new applications. Wherever there is data, engineers are building systems to make sense of that data. Wherever there is an opportunity for a machine to make a decision, engineers are building it. It could be for simple, low-risk decisions to free up a human to make a more complicated decision. Or it could be because there is too much data for a human to decide. Data-driven algorithms are making more decisions in many areas of our lives.

    Algorithms already decide what search results we see. They determine our driving routes or assign us the closest Lyft, and soon, they will enable self-driving cars and other autonomous vehicles. They’re matching job applicants with openings. They recommend the next movie you should watch or the product you should buy. They’re figuring out which houses to show you and whether you can pay the mortgage. The more data we feed them, the more they learn about us, and they are getting better at judging our mood and intention to predict our behavior.

    I’ve been thinking a lot about these systems lately. My son has epilepsy, and I’m working on a project to gauge the sentiment towards epilepsy on social media. I’m scraping epilepsy-related tweets from Twitter and feeding them to a sentiment analyzer. The system calculates a score representing whether an opinion is positive, negative, or neutral.
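
    For illustration, here is a small sketch using NLTK’s VADER analyzer, one of several off-the-shelf options (and not necessarily the one used in this project). It returns negative, neutral, positive, and compound scores for a piece of text; the example assumes nltk is installed and downloads the VADER lexicon on first run.

    ```python
    # Sketch: score the sentiment of a tweet with NLTK's VADER analyzer.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

    analyzer = SentimentIntensityAnalyzer()
    tweet = "Another sleepless night in the hospital, but my son is finally seizure-free."
    scores = analyzer.polarity_scores(tweet)
    # scores is a dict like {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...};
    # the compound score runs from -1 (most negative) to +1 (most positive).
    print(scores)
    ```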

    Companies already use sentiment analysis to understand their relationships with customers. They analyze reviews and social media mentions to measure the effectiveness of an ad. They can inspect negative comments and find ways to improve a product. They can also see when a public relations incident turns against them.

    For the epilepsy project, my initial goal was to track sentiment over time. I wanted to see why people were using Twitter to discuss epilepsy. Were they sharing positive stories, or were they sharing hardships and challenges? I also wanted to know whether people responded more to positive or negative tweets.

    While the potential is there, the technology may not be quite ready. These systems aren’t perfect, and context and the complexities of human expression can confuse even humans. “I [expletive] love epilepsy” may look like a positive sentiment to an immature algorithm, even though most people would read it as sarcasm or frustration. The effectiveness of any system built on top of these algorithms is limited by the algorithms themselves.

    I considered this as I compared two sentiment analyzers. They gave me different answers for tweets that expressed a negative sentiment. Of course, which was “right” could be subjective, but most reasonable people would have agreed that the tone of the text was negative.
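
    The two analyzers compared in the project aren’t named here, but the kind of disagreement described is easy to reproduce with any pair of off-the-shelf tools. As an illustration, VADER and TextBlob (stand-ins, not necessarily the analyzers actually compared) can be run side by side on the same text; sarcastic or bitter phrasing often scores as positive, and the two tools frequently disagree.

    ```python
    # Sketch: run two off-the-shelf sentiment analyzers on the same text.
    # These are illustrative stand-ins, not necessarily the analyzers
    # compared in the original project.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer
    from textblob import TextBlob

    nltk.download("vader_lexicon", quiet=True)

    text = "I just love spending another night in the ER. Epilepsy is the best."

    vader_compound = SentimentIntensityAnalyzer().polarity_scores(text)["compound"]
    textblob_polarity = TextBlob(text).sentiment.polarity

    # Both scores run from -1 (negative) to +1 (positive).
    print(f"VADER compound:    {vader_compound:+.2f}")
    print(f"TextBlob polarity: {textblob_polarity:+.2f}")
    ```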

    Like a child, a system sometimes gets a wrong answer because it hasn’t learned enough to know the right one. This was likely the case in my example: the wrong answer came from limitations in the algorithm. Still, imagine if I built my system to predict the mood of a patient using an immature algorithm. When the foundation is wrong, the house will crumble.

    But, also like a child, sometimes they give an answer because a parent taught them that answer. Whether through explicit coding choices or biased data sets, systems can “learn wrong”. After all, people created these systems—people, with their logic and ingenuity, but also their biases and flaws. A human told it that an answer was right or wrong. A human with a viewpoint. Or a human with an agenda.

    We create these systems with branches of code and then teach them which branch to follow. We let them learn and show enough proficiency, and then we trust them to keep getting better. We create new systems and give them more responsibility. But somewhere, back in the beginning, a fallible human wrote that first line of code. It is impossible for those actions not to influence every outcome.

    These systems will continue to be pervasive, reaching into new areas of our lives. We’ll continue to depend on and trust them because they make our lives easier. And because they get it right most of the time. The danger is assuming they always get it right and not questioning an answer that feels wrong. “The machine gave me the answer, so it must be true” is a dangerous statement, now more than ever.

    We dehumanize these programs once they enter the cold metal box in which they run. However, they are extensions of our humanity, and it’s important to remember their human origins.