David Monnerat

Dad. Husband. Product + AI. Generalist. Endlessly Curious.

Tag: data science

  • The Dulling of Innovation

    The Dulling of Innovation

    For a few years, I was on a patent team. Our job was to drive innovation and empower employees to come up with new ideas and shepherd them through the process to see if we could turn those ideas into patents.

    I loved that job for many reasons. It leveraged an innovation framework I had already started with a few colleagues—work that earned us a handful of patents. It fed my curiosity, love for technology, and joy of being surrounded by smart people. Most of all, I loved watching someone light up as they became an inventor.

    I worked with an engineer who had an idea based on his deep knowledge of a specific system. Together, we expanded on that idea and turned it into an innovative solution to a broader problem. The look on his face when his idea was approved for patent filing was one of the greatest moments of my career. For years after, he would stop me in the hallway just to say hello and introduce me as the person who helped him get a patent.

    Much of the success I saw on that team came from people who deeply understood a problem, were curious to ask why, and believed there had to be a better way. That success was amplified when more than one inventor was involved, when overlapping experiences and diverse perspectives combined into something truly original.

    When I moved into product management, the same patterns held true. The most successful ideas still came from a clear understanding of the problem, deep knowledge of the system, and the willingness to explore different perspectives.

    Innovation used to be a web. It was messy, organic, and interconnected. The spark came from deep context and unexpected collisions.

    But that process is starting to change.

    Same High, Lower Ceiling

    In this new age of large language models (LLMs), companies are looking for shortcuts to growth and innovation and see LLMs as the cheat code.

    Teams are tasked with mining customer comments to synthesize feedback and generate feature ideas and roadmaps. If the ideas seem reasonable, they are executed without further analysis. Speed is the goal. Output is the metric.

    Regardless of size or maturity, every company can access the tools and capabilities once reserved for tech giants. Generative AI lowers the barrier to entry. It also levels the playing field, democratizing innovation.

    But what if it also levels the results?

    When everyone uses the same models, trained on the same data and prompted in similar ways, the ideas start to converge. It’s innovation by template. You might be moving faster, but so is everyone else, and in the same direction.

    Even when applied to your unique domain, the outputs often look the same. Which means the ideas are starting to look the same, too.

    AI lifts companies that lacked innovation muscle, but in doing so, it risks pulling down those that had built it. The average improves, but the outliers vanish. The floor rises, but the ceiling falls.

    We’re still getting the high. But it doesn’t feel like it used to.

    The Dopamine of Speed

    The danger is that we’re not going to see it happening. Worse, we’re blindly moving forward without considering the long-term implications. We’re so fixated on speed that it’s easy to convince ourselves that moving fast is the same as innovating.

    We mistake motion for momentum, and output for originality. The teams and companies that move the fastest will be rewarded. Natural selection will leave the slower ones behind. Speed will be the new sign of innovation. But just because something ships fast doesn’t mean it moves us forward.

    The dopamine hit that comes from release after release is addictive, and we’ll need more and more to feel the same level of speed and growth. We’ll rely increasingly on these tools to get our fix until it stops working altogether. Meanwhile, that growing reliance dulls effectiveness, erodes impact, and lets our ability to create and innovate atrophy.

    By the time we realize the quality of our ideas has flattened, we’ll be too dependent on the process to do anything differently.

    The Dealers Own the Supply

    And those algorithms? They’re owned by a handful of companies. These companies decide how the models behave, what data they’re trained on, and what comes out of them.

    They also own the data. And it’s only a matter of time before they start mining it for intellectual property—filing patents faster than anyone else can, or arguing that anything derived from their models is theirs by default.

    Beyond intellectual property and market control, this concentration of power raises more profound ethical and societal questions. When innovation is funneled through a few gatekeepers, it risks reinforcing existing inequalities and biases embedded in the training data and business models. The diversity of ideas and creators narrows, and communities without direct access to these technologies may be left behind, exacerbating the digital divide and limiting who benefits from AI-driven innovation.

    The more we rely on these models, the more we feed them. Every prompt, interaction, and insight becomes part of a flywheel that makes the model, and the company behind it, more powerful. It’s a feedback loop: we give them our best thinking, and they return a usable version to everyone else.

    LLMs don’t think from first principles—they remix from secondhand insight. And when we stop thinking from scratch, we start building from scraps.

    Because the answers sound confident, they feel finished. That confidence masks conformity, and we mistake it for consensus.

    Innovation becomes a productized service. Creative edge gets compressed into a monthly subscription. What once gave your company a competitive advantage is now available to anyone who can write a halfway decent prompt.

    Make no mistake, these aren’t neutral platforms. They shape how we think, guide what we explore, and, as they become more embedded in our workflows, influence decisions, strategies, and even what we consider possible.

    We used to control the process. Now we’re just users. The same companies selling us the shortcut are quietly collecting the toll.

    When the supply is centralized, so is the power. And if we keep chasing the high, we’ll find ourselves dependent on a dealer who decides what we get and when we get it.

    Rewiring for Real Innovation

    This isn’t a call to reject the tools. Generative AI isn’t going away, and used well, it can make us faster, better, and more creative. But the key is how we use it—and what we choose to preserve along the way.

    Here’s where we start:

    1. Protect the Messy Middle

    Innovation doesn’t happen at the point of output. It happens in the friction. The spark lives in debate, dead ends, and rabbit holes. We must protect the messy, nonlinear process that makes true insight possible.

    Use AI to accelerate parts of the journey, not to skip it entirely.

    2. Think from First Principles

    Don’t just prompt. Reframe. Instead of asking, “What’s the answer?” ask, “What’s the real question?” LLMs are great at synthesis, but breakthroughs come from original framing.

    Start with what you know. Ask “why” more than “how.” And resist the urge to outsource the thinking.

    3. Don’t Confuse Confidence for Quality

    A confident response isn’t necessarily a correct one. Learn to interrogate the output. Ask where it came from, what it’s assuming, and what it might be missing.

    Treat every generated answer like a draft, not a destination.

    4. Diversify Your Inputs

    The model’s perspective is based on what it’s been trained on, which is mostly what’s already popular, published, and safe. If you want a fresh idea, don’t ask the same question everyone else is asking in the same way.

    Talk to people. Explore unlikely connections. Bring in perspectives that aren’t in the data.

    5. Make Thinking Visible

    The danger of speed is that it hides process. Write out your assumptions. Diagram your logic. Invite others into the middle of your thinking instead of just sharing polished outputs.

    We need to normalize visible, imperfect thought again. That’s where the new stuff lives.

    6. Incentivize Depth

    If we reward speed, we get speed. If we reward outputs, we get more of them. But if we want real innovation, we need to measure the stuff that doesn’t show up in dashboards: insight, originality, and depth of understanding.

    Push your teams to spend time with the problem, not just the solution.

    Staying Sharp

    We didn’t set out to flatten innovation. We set out to go faster, to do more, to meet the moment. But in chasing speed and scale, we risk trading depth for derivatives, and originality for automation.

    Large language models can be incredible tools. They can accelerate discovery, surface connections, and amplify creative potential. But only if we treat them as collaborators, not crutches.

    The danger isn’t in using these models. The danger is in forgetting how to think without them.

    We have to resist the pull toward sameness. We have to do the slower, messier work of understanding real problems, cultivating creative tension, and building teams that collide in productive ways. We have to reward originality over velocity, and insight over output.

    Otherwise, the future of innovation won’t be bold or brilliant.

    It’ll just be fast.

    And dull.

  • Automation’s Hidden Effort

    Automation’s Hidden Effort

    In the early 2000s, as the dot-com bubble burst, I found myself without an assignment as a software development consultant. My firm, scrambling to keep people employed, placed me in an unexpected role: a hardware testing lab at a telecommunications company.

    The lab tested cable boxes and was the last line of defense before new devices and software were released to customers. Each test consisted of following the steps in a script tracked in Microsoft Excel to validate different features and functions, then marking each row with an “x” in the “Pass” or “Fail” column.

    A few days into the job, I noticed that, after completing a test script, some of my colleagues would painstakingly count the “x” marks in each column and then populate the summary at the end of the spreadsheet.

    “You know, Excel can do that for you, right?” I offered, only to be met with blank stares.

    “Watch.”

    I showed them how to use simple formulas to tally results and then added conditional formatting to highlight failed steps automatically. These small tweaks eliminated tedious manual work, freeing testers to focus on more valuable tasks.

    That small win led to a bigger challenge. My manager handed me an unopened box of equipment—an automated testing system that no one had set up.

    “You know how to write code,” he said. “See if you can do something with that.”

    Inside were a computer, a video capture card, an IR transmitter, and an automation suite for running scripts written in C. My first script followed the “happy path,” assuming everything worked perfectly. It ran smoothly—until it didn’t. When an IR signal was missed, the entire test derailed, failing step after step.

    To fix it, I added verification steps after every command. If the expected screen didn’t appear, the script would retry or report a failure. Over weeks of experimentation, I built a system that ran core regression tests automatically, flagged exceptions, and generated reports.
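
    At its core, the fix was a verify-and-retry loop around every command. The sketch below shows that pattern in Python rather than the original C, and only as an illustration: send_ir_command() and screen_matches() are hypothetical stand-ins for the IR transmitter and the video-capture check, and the retry count is made up.

    ```python
    MAX_RETRIES = 3

    def send_ir_command(command: str) -> None:
        ...  # drive the IR transmitter (hardware-specific)

    def screen_matches(expected_screen: str) -> bool:
        ...  # compare the captured video frame against the expected screen
        return False

    def run_step(command: str, expected_screen: str) -> bool:
        """Send a command, verify the outcome, and retry instead of derailing the whole run."""
        for _ in range(MAX_RETRIES):
            send_ir_command(command)
            if screen_matches(expected_screen):
                return True   # step passed
        return False          # flag the failure so later steps start from a known state
    ```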

    When I showed my manager the result, he watched the screen, amazed, as the cable box navigated to different screens and tested various actions as if by magic. At the end of the demo, he directed me to automate more tests.

    What he didn’t see in the demo was the effort behind the scenes—the constant tweaking, exception handling, and fine-tuning to account for the messy realities of real-world systems.

    The polished demo sent a simple message:

    Automation is here. No manual effort is needed.

    But that wasn’t the whole story. Automation, while transformative, is rarely as effortless as it appears.

    Operator: Automation’s New Chapter

    The lessons I learned in that testing lab feel eerily relevant today.

    In January 2025, OpenAI released Operator. According to OpenAI1:

    Operator is a research preview of an agent that can go to the web to perform tasks for you. It can automate various tasks—like filling out forms, booking travel, or even creating memes—by remotely interacting with a web browser much as a person would, via mouse clicks, scrolling, and typing.

    When I saw OpenAI’s announcement, I had déjà vu. Over 20 years ago, I built automation scripts to mimic how customers interacted with cable boxes—sending commands, verifying responses, and handling exceptions. It seemed simple in theory but was anything but in practice.

    Now, AI tools like Operator promise to navigate the web “just like a person,” and history is repeating itself. The demo makes automation look seamless, much like mine did years ago. The implicit message is the same:

    Automation is here. No manual effort is needed.

    But if my experience in test automation taught me anything, it’s that a smooth demo hides a much messier reality.

    The Hidden Complexity of Automation

    At a high level, Operator achieves something conceptually similar to what I built for the test lab—but with modern machine learning. Instead of writing scripts in C, it combines large language models with vision-based recognition to interpret web pages and perform actions. It’s a powerful advancement.

    However, the fundamental challenge remains: the real world is unpredictable.

    In my cable box testing days, the obstacles were largely technological. The environment was controlled, the navigation structure was fixed, and yet automation still required extensive validation steps, exception handling, and endless adjustments to account for inconsistencies.

    With Operator, the automation stack is more advanced, but the execution environment—the web—is far less predictable. Websites are inconsistent. Navigation is not standardized. Pages change layouts frequently, breaking automated workflows. Worse, many sites actively fight automation with CAPTCHAs2, anti-bot measures, and dynamic content loading. While automation tools like Operator try to work around these defenses, both their effectiveness and the ethics of doing so are still debatable.3,4

    The result is another flashy demo in a controlled environment, paired with “brittle and occasionally erratic”5 behavior in the wild.

    The problem isn’t the technology itself—it’s the assumption that automation is effortless.

    A Demo Is Not Reality

    Like my manager, who saw a smooth test automation demo and assumed we could apply it to every test, many will see the Operator demo and believe AI agents are ready to replace manual effort for every use case.

    The question isn’t whether Operator can automate tasks—it clearly can. But the real challenge isn’t innovation—it’s the misalignment between expectations and the realities of implementation.

    Real-world implementation is messy. Beyond controlled conditions, you run into exceptions, edge cases, and failure modes that require human intervention. It isn’t clear whether companies understand the investment required to make automation work in the real world. Without that effort, automation promises will remain just that—promises.

    Many companies don’t fail at automation because the tools don’t work—they fail because they get distracted by the illusion of effortless automation. Without investment in infrastructure, data, and disciplined execution, agents like Operator won’t just fail to deliver results—they’ll pull focus away from the work that matters.

    1. https://help.openai.com/en/articles/10421097-operator ↩︎
    2. CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a security feature used on websites to differentiate between human users and bots. It typically involves challenges like identifying distorted text, selecting specific objects in images, solving simple math problems, or checking a box (“I’m not a robot”). ↩︎
    3. https://www.verdict.co.uk/captcha-recaptcha-bot-detection-ethics/?cf-view ↩︎
    4. https://hackernoon.com/openais-operator-vs-captchas-whos-winning ↩︎
    5. https://www.nytimes.com/2025/02/01/technology/openai-operator-agent.html ↩︎

  • The Democratization of (Everything)

    The Democratization of (Everything)

    A few years ago, I sat across the desk from a colleague, discussing his vision for a joint AI initiative. As a product manager, I pushed for clarity—what problem were we solving? What was the measurable outcome? What was the why behind this effort? His response was simple: democratization. Just giving people access. No clear purpose, no defined impact—just the assumption that making something available would automatically lead to progress. That conversation stuck with me because it highlighted a fundamental flaw in how we think about democratizing technology.

    The term “democratizing,” as applied to technology, began to gain traction in the late 20th century, particularly during the rise of personal computing and the internet.

    Democratizing technology typically means making it accessible to a broader audience, often by reducing cost, simplifying interfaces, or removing barriers to entry. The goal is to empower more people to use the technology, fostering innovation, equality, and progress.

    Personal computers would “democratize” access to computing power by putting it in the hands of individuals rather than large institutions or corporations. Similarly, the Internet would “democratize” access to information by removing the gatekeepers from publishing and content distribution.

    By the 2010s, “democratizing” became a buzzword in tech—used to describe making advanced tools like big data, AI, and machine learning accessible to more people. What was once in the hands of domain experts was now in the hands of the masses.

    Today, the term is frequently used in discussions about generative AI and other advanced technologies. These tools are marketed as democratizing creativity, coding, and problem-solving by making complex capabilities accessible to non-experts.

    The word “democratization” resonates because it aligns with broader cultural values, signaling fairness, accessibility, empowerment, and progress. The technology industry loves grand narratives, and “democratizing” sounds more revolutionary than “making more accessible.” It suggests that technology can break down barriers and create opportunities for everyone.

    However, as we’ve seen, the reality is often more complicated, and the term can sometimes obscure the challenges and inequalities that persist. Democratization often benefits those who already have the resources and knowledge while leaving others behind.

    I’ve long thought that the word “democratization” was an interesting choice when applied to technology because it resembles the ideals of operating a democratic state.1 Both rely on the idea that giving people access will automatically lead to better outcomes, fairness, and participation. However, both involve the tension between accessibility and effective use, the gap between ideals and reality, and the complexities of ensuring equitable participation. In practice, access alone is not enough; people need education, understanding, and responsible engagement for the system to function effectively.

    Democratization ≠ Access

    I’ve encountered many leaders who equate democratization with access, as if the goal is to put the tools in people’s hands. However, accessing a tool doesn’t mean people know what to do with it or how to use it effectively. For example, just because people can access AI, big data, or generative tools doesn’t mean they know how to use them properly or interpret their outputs.

    Similarly, just because people have the right to vote doesn’t mean they fully understand policies, candidates, or the consequences of their choices.

    In technology, access is meaningful only when it drives specific outcomes, such as innovation, efficiency, or solving real-world problems. In a democratic state, access to voting and participation is not an end but a means to achieve broader goals, such as equitable representation, effective governance, and societal progress.

    Without a clear purpose, access risks becoming superficial, failing to address deeper systemic issues or deliver tangible improvements. In both cases, democratization must be guided by a vision beyond mere access to ensure it creates a meaningful, lasting impact.

    Democratization requires not just opening doors but also empowering individuals with the knowledge, understanding, and skills to walk through them meaningfully. Without this foundation, the promise of democratization remains incomplete.

    Democratization ≠ Equality

    The future is already here, it’s just not evenly distributed.

    William Gibson2

    The U.S. was built on democratic ideals. However, political elites, corporate interests, and media conglomerates shape much of the discourse because political engagement is skewed toward those with resources, time, and education. Underprivileged communities face barriers to participation.

    The same is true in technology. The wealthy and well-educated benefit more from new technology, while others struggle to adopt it and are left behind. AI and big data were meant to be open and empowering, but tech giants still control them, setting rules and limitations.

    Both systems struggle with the reality that equal access does not automatically lead to equal outcomes, as power dynamics and systemic inequalities persist. Even when technology is democratized, those with more resources or expertise often benefit disproportionately, widening existing inequalities.

    Bridging the gap between access and outcomes demands more than good intentions—it requires deliberate action to dismantle barriers, redistribute power, and ensure that everyone can benefit equitably. By focusing on education, structural reforms, and inclusive practices, both technology and democratic systems can move closer to fulfilling their promises of empowerment and equality.

    Democratization ≠ Expertise

    These are dangerous times. Never have so many people had so much access to so much knowledge and yet have been so resistant to learning anything.

    Thomas M. Nichols, The Death of Expertise

    Critical thinking is essential for both the democratization of technology and the functioning of a democratic state. In technology, access to AI, big data, and digital tools means little if people cannot critically evaluate information, recognize biases, or understand the implications of their actions. Misinformation, algorithmic manipulation, and overreliance on automation can distort reality, just as propaganda and political rhetoric can mislead voters in a democracy. Similarly, for a democratic state to thrive, citizens must question policies, evaluate candidates beyond slogans, and resist emotional or misleading narratives. 

    Without critical thinking, technology can be misused, and democratic processes can be manipulated, undermining the very ideals of empowerment and representation that democratization seeks to achieve. In both realms, fostering critical thinking is not just beneficial—it’s necessary for meaningful progress and equity.

    Addressing the lack of critical thinking in technology and humanity at large requires a holistic approach that combines education, systemic reforms, and cultural change. We can build a more informed, equitable, and resilient society by empowering individuals with the skills and tools to think critically and creating systems that reward thoughtful engagement. This is not a quick fix but a long-term investment in the health of technological and democratic systems.

    Democratization ≠ Universality

    Both technology and governance often operate under the assumption that uniform solutions can meet the diverse needs of individuals and communities. This can result in a mismatch between what is offered and what is actually required, highlighting the limits of a one-size-fits-all approach.

    In technology, for example, AI tools and software may be democratized to allow everyone access, but these tools often assume a certain level of expertise or familiarity with the technology. While they may work well for some users, others may find them difficult to navigate or unable to fully harness their capabilities. A tool designed for the general public might unintentionally alienate those who need a more tailored approach, leaving them frustrated or disengaged.

    Similarly, in governance, policies are often created with the idea that they will serve all citizens equally. However, a single national policy—whether on healthcare, education, or voting rights—can fail to account for the vastly different needs and circumstances of different communities. For example, universal healthcare policies may not address the specific healthcare access issues faced by rural or low-income populations, and standardized educational curriculums may not be effective for students with different learning needs or backgrounds. When solutions are not tailored to the unique realities of diverse groups, they risk reinforcing existing inequalities and failing to deliver meaningful results.

    The challenge, then, is finding a balance between providing access and ensuring that solutions are adaptable and responsive to the needs of different communities. Democratization doesn’t guarantee universal applicability, and it’s essential to recognize that true empowerment comes not just from providing access but from ensuring that access is meaningful and relevant to everyone, regardless of their context or capabilities. Without this careful consideration, democratization can become a frustrating experience that leaves many behind, ultimately hindering progress rather than fostering it.

    Conclusion

    The democratization of technology, much like democracy itself, is harder than it sounds. Providing access to tools like AI or big data is only the first step—it doesn’t guarantee that people know how to use them effectively or equitably. Without the necessary education, critical thinking, and support, access alone can be frustrating and lead to further division rather than empowerment.

    Just as democratic governance struggles with the assumption that one-size-fits-all policies can serve diverse communities, the same happens with technology. Tools designed to be universally accessible often fail to meet the unique needs of different users, leaving many behind. Real democratization requires not just opening doors but ensuring that everyone has the resources to walk through them meaningfully.

    Democratization is challenging in both technology and governance. It’s not just about giving people access; it’s about giving them the knowledge, understanding, and opportunity to use that access in ways that truly empower them.

    Until we get this right, the promise of democratization (and democracy) remains unfulfilled.

    Footnotes

    1. The United States of America is a representative democracy (or a democratic republic). ↩︎
    2. https://quoteinvestigator.com/2012/01/24/future-has-arrived/ ↩︎
  • The Humanity In Artificial Intelligence

    The Humanity In Artificial Intelligence

    I wrote this essay in 2017. When I restarted the blog, I removed the posts that had already been published. But after rereading this one, I felt that, while the technology has advanced significantly since then, the sentiment still applies today.

    Dave, January 2025


    Algorithms, artificial intelligence, and machine learning are not new concepts. But they are finding new applications. Wherever there is data, engineers are building systems to make sense of that data. Wherever there is an opportunity for a machine to make a decision, engineers are building it. It could be for simple, low-risk decisions that free up a human to make a more complicated one. Or it could be because there is too much data for a human to process. Data-driven algorithms are making more decisions in many areas of our lives.

    Algorithms already decide what search results we see. They determine our driving routes or assign us the closest Lyft, and soon, they will enable self-driving cars and other autonomous vehicles. They’re matching job openings with applicants. They recommend the next movie you should watch or the product you should buy. They’re figuring out which houses to show you and whether you can pay the mortgage. The more data we feed them, the more they learn about us, and they are getting better at judging our mood and intention to predict our behavior.

    I’ve been thinking a lot about these systems lately. My son has epilepsy, and I’m working on a project to gauge the sentiment towards epilepsy on social media. I’m scraping epilepsy-related tweets from Twitter and feeding them to a sentiment analyzer. The system calculates a score representing whether an opinion is positive, negative, or neutral.

    Companies already use sentiment analysis to understand their relationships with customers. They analyze reviews and social media mentions to measure the effectiveness of an ad. They can inspect negative comments and find ways to improve a product. They can also see when a public relations incident turns against them.

    For the epilepsy project, my initial goal was to track sentiment over time. I wanted to see why people were using Twitter to discuss epilepsy. Were they sharing positive stories, or were they sharing hardships and challenges? I also wanted to know whether people responded more to positive or negative tweets.

    While the potential is there, the technology may not be quite ready. These systems aren’t perfect, and context and the complexities of human expression can confuse even humans. “I [expletive] love epilepsy” may read to an immature algorithm as a positive sentiment, and the effectiveness of any system built on top of these algorithms is limited by the algorithms themselves.

    I considered this as I compared two sentiment analyzers. They gave me different answers for tweets that expressed a negative sentiment. Of course, which was “right” could be subjective, but most reasonable people would have agreed that the tone of the text was negative.
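
    As an illustration of that kind of side-by-side check (not the specific tools I used), here is a minimal sketch using two off-the-shelf Python analyzers, VADER and TextBlob, on an invented example; each returns a polarity roughly between -1 (negative) and +1 (positive), and they can disagree on the same text.

    ```python
    from textblob import TextBlob
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    text = "Another sleepless night counting seizures."  # hypothetical tweet

    # Both analyzers score sentiment on a -1 to +1 scale, but they use
    # different approaches and often land in different places.
    vader_score = SentimentIntensityAnalyzer().polarity_scores(text)["compound"]
    textblob_score = TextBlob(text).sentiment.polarity

    print(f"VADER: {vader_score:+.2f}  TextBlob: {textblob_score:+.2f}")
    ```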

    Like a child, a system sometimes gets a wrong answer because it hasn’t learned enough to know the right one. That was likely the case in my example: the answer came down to limitations in the algorithm. Still, imagine if I had built my system to predict the mood of a patient on top of an immature algorithm. When the foundation is wrong, the house will crumble.

    But, also like a child, sometimes they give an answer because a parent taught them that answer. Whether through explicit coding choices or biased data sets, systems can “learn wrong”. After all, people created these systems—people, with their logic and ingenuity, but also their biases and flaws. A human told the system that an answer was right or wrong. A human with a viewpoint. Or a human with an agenda.

    We create these systems with branches of code and then teach them which branch to follow. We let them learn and show enough proficiency, and then we trust them to keep getting better. We create new systems and give them more responsibility. But somewhere, back in the beginning, a fallible human wrote that first line of code. It is impossible for those actions not to influence every outcome.

    These systems will continue to be pervasive, reaching into new areas of our lives. We’ll continue to depend on and trust them because they make our lives easier. And because they get it right most of the time. The danger is assuming they always get it right and not questioning an answer that feels wrong. “The machine gave me the answer, so it must be true” is a dangerous statement, now more than ever.

    We dehumanize these programs once they enter the cold metal box in which they run. However, they are extensions of our humanity, and it’s important to remember their human origins.