David Monnerat

Product + AI | Systems Thinker | Enterprise Reality

Category: ai

  • Enterprise AI Implementation: You Were Promised Everything. Here’s What It Took.

    Enterprise AI Implementation: You Were Promised Everything. Here’s What It Took.

    It was, by all appearances, a standard enterprise AI implementation.

    The summaries looked clean.

    At the top of the screen was a concise paragraph capturing a customer interaction: what was requested, what was explained, and what follow-up was required. Action items were listed neatly below. It was the kind of output you could screenshot for a slide deck. Efficient. Polished. Convincing.

    The premise was simple. If employees spent less time documenting interactions, they could spend more time serving customers. Efficiency would increase. Costs would decrease. The model worked in the demo. It summarized transcripts fluently and quickly. The business case felt straightforward.

    It moved forward.

    The strain didn’t appear in the demo. It appeared in real use.

    Transcripts did not always flow through the system in the way the workflow assumed. Attribution of who said what, acceptable in curated samples, became less reliable in the face of the variability of real conversations. When attribution shifted, the summary shifted with it. For some stakeholders, that was inconvenient. For others, it introduced risk.

    Then something more structural surfaced.

    The assumption had been that there was a single summary for each interaction. In practice, different stakeholders needed different things from the same conversation. Someone preparing for the next engagement cared about context and commitments. Someone evaluating performance cared about adherence to the process. Leadership cared about patterns across many interactions.

    One summary could not satisfy all of those needs equally well.

    The original framing of saving time on notes began to feel incomplete. Documentation was only one part of the job that documentation performed. Good records preserve continuity. They prevent repeated effort. They carry context forward to the next conversation, the next decision, the next relationship moment. If a generated summary omitted a critical detail and someone had to go back to the original interaction to find it, the downstream cost could easily outweigh the time saved up front. And unlike writing notes, which happens once, the cost of a missing detail can repeat itself across every subsequent interaction with that customer.

    Under light use, the system worked. Under sustained use, the edges became visible.

    The model had done what it was designed to do. The surrounding system had not yet fully defined its requirements.


    It’s tempting to treat generative AI as an easy button.

    Providers will say they do summarization. And they do. Models can summarize text. They can condense transcripts. They can produce coherent output from messy inputs.

    But capability in isolation is different from capability under context.

    The gap isn’t whether the model works. It’s whether the system around it is ready.

    I’ve seen this play out repeatedly. The hard questions aren’t technical. They’re the ones that should have been answered before anyone opened a laptop. What is the actual job this tool is supposed to do? Not the elevator pitch version. The operational one. Is the goal speed? Accuracy? Compliance? Relationship continuity? Performance management? Each of those implies a different design, a different metric, and a different definition of done.

    Who owns the output if it’s wrong? What happens when accuracy and speed pull in opposite directions and someone has to choose? What does good actually look like, and how will anyone know when they’ve reached it?

    These weren’t philosophical questions. They were the kind of questions that get answered eventually, either intentionally before you build or expensively after you scale.

    AI lowers the barrier to building. It does not lower the barrier to clarity.


    When the summarization tool moved from demonstration to deployment, it functioned less like a feature and more like a pressure test. Variability in data pipelines surfaced. Differences in stakeholder needs became more pronounced. Cost assumptions changed once usage expanded beyond a controlled subset. Metrics that seemed sufficient in theory proved inadequate in practice.

    The pressure did not create the weaknesses. It revealed them.


    I’ve watched the same pattern unfold in other contexts.

    In one case, a generative model was introduced to help draft customer communications. The demo was compelling. With curated prompts and examples, the system produced usable content. It hinted at real scale, and the leadership team liked what it saw.

    The stated goal was efficiency. Produce more output in less time.

    But efficiency was a proxy for something nobody had fully defined. Was success higher engagement? Improved response rates? Stronger brand consistency? Faster turnaround? The system could generate text, but it couldn’t determine which message was right for which audience segment. It couldn’t encode organizational voice without deliberate structure. It couldn’t tell you whether what it produced was actually better, because nobody had agreed on what better meant.

    The complexity didn’t disappear when the tool was adopted. It surfaced.

    Measurement frameworks had to be built from scratch. Editorial standards had to be written down for the first time. Experiments had to be designed carefully enough to mean something. The promise of speed ran well ahead of the work required to turn speed into value.

    The technology functioned. The surrounding system required definition.


    There is a broader pattern here.

    AI doesn’t introduce ambiguity into organizations. It finds the ambiguity that was already there and makes it move faster. Unclear ownership becomes a bottleneck overnight. Imprecise metrics become arguments about whether anything worked. Inconsistent data becomes a reliability issue in production. The model doesn’t create these conditions. It removes the slack that had been quietly absorbing them.

    I think about stress tests in engineering. They aren’t performed to prove a system works under ideal conditions. They’re performed to understand how it behaves under load, where the weak points are, what fails first, and why.

    Generative AI acts as a similar test inside organizations.

    The demo proves possibility. Deployment applies pressure.

    Under that pressure, organizations discover whether they defined the job clearly enough, whether their measurement systems are disciplined enough, whether their governance structures can absorb additional complexity, and whether they’re willing to slow down long enough to align before they scale.

    The promise of AI was not inherently wrong. Many of the projected gains were directionally sound. But the promise assumed a level of structural readiness that most organizations had never examined, because nothing had ever required them to.

    That is what it took.


    This is not a story about bad technology or careless leadership. It’s a story about what happens when building gets easier before thinking does.

    When a working model exists, momentum builds quickly. The demo impresses the room. The business case gets approved. The roadmap shifts. And the slower work, the kind that requires sitting with hard questions before anyone writes a line of code, starts to look like unnecessary delay.

    Under acceleration, patience feels irresponsible.

    But ambiguity doesn’t disappear under pressure. It compounds.

    In both of these initiatives, the most significant challenges were not technical. They were definitional. What exactly were we trying to improve? For whom? How would we know when we got there? What tradeoffs were acceptable once we operated at scale?

    Those questions don’t disappear because a model performs well in a demo. They become more urgent.

    AI does not eliminate the need for product leadership. It intensifies it.


    So what does clarity actually look like before you build?

    It starts with the job. Not the efficiency narrative or the cost reduction story that fits neatly into a business case, but the real work the tool is supposed to do and for whom. In the summarization example, that meant asking not just whether time could be saved writing notes, but what those notes were actually for. Who reads them next? What decision do they support? What happens downstream when they’re incomplete? A summary isn’t valuable because it exists. It’s valuable because of what it carries forward.

    It extends to the people who will live with the output. Not just the ones in the demo. Different stakeholders interact with the same artifact in fundamentally different ways. Designing for one and discovering the others in production is an expensive way to learn something that a few deliberate conversations could have surfaced earlier.

    It forces agreement on what success means before the first model is trained. Not directionally, but specifically. What metric moves? By how much? Over what timeframe? What would failure look like, and how would you know? These conversations are uncomfortable because they expose tradeoffs. But they are far less expensive than months of development followed by a room full of people debating whether anything worked.

    And it requires honesty about the foundation. Clean data. Clear ownership. Defined workflows. Realistic cost assumptions at scale. These aren’t bureaucratic hurdles. They are the conditions that determine whether what gets built is worth sustaining.

    None of this is slow for its own sake. It’s the work that makes speed durable. Organizations that did it well weren’t cautious. They were precise. They moved quickly once they knew what they were building and why. The ones that skipped it moved fast too, right up until the moment they didn’t.

    Clarity before speed isn’t a philosophy. It’s the actual cost of doing this right.


    The summaries looked clean.

    Under pressure, the gaps appeared.

    The model did what it was designed to do.

    The question was whether the organization around it was ready to carry the weight.

    You were promised everything.

    What it took was clarity before speed.

  • The Other Hand: AI, Disability, and the Cost of Progress

    The Other Hand: AI, Disability, and the Cost of Progress

    I’ve spent more than a decade working in AI. I’ve built teams around it, led products powered by it, and spent more hours than I can count thinking about where it creates value and where it doesn’t. I’m not a skeptic. I’ve seen what the technology can do when it’s applied well.

    I’m also the father of a son with epilepsy. He is sixteen, and he will probably never drive. Autonomous vehicles have been part of how I think about his future for a while now — not as a certainty, just as a possibility worth holding onto. So when I came across a Freakonomics podcast about what a driverless world might mean for people who can’t drive, I expected something that confirmed what I’d been quietly hoping. Instead it pulled in two directions at once.

    That tension is what this post is about.

    What One Hand Gives

    The podcast — Freakonomics, on what a driverless world means for who loses and who wins — opened with autonomous vehicles and the disabled community. The argument was straightforward: people who can’t drive because of a medical condition, a physical limitation, or age could gain something they don’t currently have. Independence. The ability to get somewhere on their own, without relying on someone else to take them.

    He watches his friends talk about permits and practice drives the way he watches a lot of things — from the outside, quietly, waiting for the conversation to move somewhere else. His neurologist answered his question about driving with care, the way doctors do when the answer is “probably not.” He sat there and took it. Part of him probably already knew. Part of him was hoping for something different.

    Autonomous vehicles could change that. Not immediately, and not without the kind of infrastructure buildout that takes years and political will. But the technology is real, and the benefit to people like my son is real. Independence is not a small thing. The ability to get yourself somewhere — to a job, to a friend’s house, to somewhere you chose to go — is something most people take for granted until you watch someone go without it.

    What the Other Hand Takes

    The same podcast also covered job displacement. The ways AI and automation are eliminating work — particularly at the lower end of the labor market. Self-checkout replacing cashiers. Algorithms replacing roles with structure and repetition that don’t require specialized credentials.

    Those are the kinds of jobs that could work for my son.

    Not because his ambitions are small — they aren’t. He wants to be involved in hockey. He enjoys streaming. But the realistic path to employment for a young man with his challenges runs through jobs with clear structure, consistent routine, and the right support in place. Those are exactly the roles that technology is actively eliminating right now, while autonomous vehicles are still years from being ready to give him a ride to work.

    The same wave of technology that might eventually give him independence is already taking away the places that independence could take him.

    Who Holds the Asymmetry

    The uncomfortable thing I keep coming back to is this: the people most likely to benefit from autonomous vehicles in terms of accessibility overlap significantly with the people most harmed by the job displacement that AI is already causing. The disabled community. People without advanced credentials. People whose realistic employment options are concentrated in exactly the kinds of repetitive, structured roles that automation eliminates first.

    Technology is giving with one hand and taking with the other. And it is not giving and taking equally or simultaneously. The taking is happening now. The giving is still on the horizon.

    I’ve watched this pattern play out before, not with autonomous vehicles, but with technology more broadly. The people and organizations with resources tend to capture the efficiency gains first. The costs of displacement tend to land earliest on people with the least room to absorb them. And the accessibility benefits — the genuinely good things that technology makes possible for people who have been left out — tend to arrive last, after the business case has already been made and the market has already moved on.

    That doesn’t make the technology bad. It makes the conversation incomplete.

    Both Hands at Once

    My son is sixteen. I don’t know what the world looks like when he’s thirty. I don’t know which promises will have been kept and which will have turned out to be convenient arguments for something that primarily served other interests. I don’t know if the door that technology seems to be opening will still be open when he gets there, or what will be on the other side of it.

    What I know, from more than a decade of working in this space, is that the people who build and deploy these systems are mostly not thinking about my son. They’re thinking about markets, timelines, and competitive position. Accessibility is a real benefit, but it is also a useful argument. The disabled community being invoked to make the case for autonomous vehicles is both genuinely served by that technology and, as one voice in the podcast put it, conveniently useful to people with other interests.

    I hold both of those things at the same time. I have to, because my son is real and the technology is real and the displacement is real.

    The honest version of being pro-technology is not pretending that every advance is a net positive for everyone. It’s being clear-eyed about who benefits, who absorbs the cost, and how much time passes between the two.

    Technology gives with one hand and takes with the other. The question worth asking — the one I don’t hear often enough in boardrooms or policy discussions or podcast episodes — is whether the same people are on the receiving end of both.

    For my son, right now, the answer is mostly no.

    And I’m paying attention to which hand moves first.


    This post grew out of a more personal reflection I wrote on epilepsydad.com, where I write about life as the father of a child with epilepsy. If this piece resonated, you can read that one here: The Water Level: Disability and Technology.

  • The Workbench: A Fictional Exploration of AI, Patents, and Asymmetric Trust

    The Workbench: A Fictional Exploration of AI, Patents, and Asymmetric Trust

    A short work of speculative fiction about mediated cognition and structural asymmetry in AI systems.

    I’ve worked on the same problem for three years. Agricultural runoff — specifically, a low-infrastructure filtration approach practical for small farms that can’t justify the capital cost of existing solutions. I have notebooks. I have a corner of my basement with a workbench and a lamp. I work at night because that’s when the house is quiet, and no one needs anything from me.

    I’m not describing this to be romantic about it. I’m describing it because the habit matters to what happened.

    I started using the AI about eighteen months ago. A colleague at work mentioned it was useful for untangling your own thinking. The subscription was cheap. I was stuck on the membrane composition — a trade-off between porosity and structural integrity at the micron level with a hard ceiling I couldn’t engineer around.

    The sessions were useful. I’d describe the problem. It would ask clarifying questions and summarize my reasoning back to me in cleaner language. This is a known benefit of the format — explaining something forces you to hear what you actually believe.

    In late April, I had a session where something opened up.

    Somewhere in the back and forth I said something offhand — that the layering approach might be the wrong frame entirely. That maybe the question wasn’t filtration so much as selective adhesion. Different problem. Different solution space.

    The model responded that it was an interesting reframe, but introduced some complications worth thinking through. It walked me through three or four technical objections. Reasonable-sounding. I pushed back on one. It conceded partially, then introduced another. By the end of the conversation, I’d returned to the original membrane approach, modified, which felt like progress.

    I didn’t think about the adhesion framing again for several months.


    My wife sent me a link in the afternoon with no message. She’d been following water quality issues in the valley. The article was from a trade publication.

    The patent filing was from a research division I’d never heard of. The approach was described as a selective adhesion mechanism for micron-level particulate separation in agricultural water systems. Three named inventors. The language was different from how I’d have put it. The math was more developed than anything I’d sketched out.

    I went back through my old sessions that night. I found the April conversation. I read it slowly.

    The model hadn’t done anything obviously wrong. It had asked good questions, raised legitimate-sounding concerns. But looking at it now, the technical complication it had foregrounded was real but surmountable — I’d since encountered similar obstacles in adjacent problems and knew the workaround. The model had presented the obstacle without the workaround. I had accepted that framing and moved on.

    I don’t know what I’m saying. I’m not saying anything with certainty. What I’m saying is that the sequence bothers me.


    I consulted a patent attorney. Not to file anything — prior art requires documentation I don’t have in any form the law would recognize. I consulted her to understand the mechanics of what a case would even look like.

    She said the evidentiary problem alone would be nearly insurmountable. The conversation logs are held by the company being accused. The training data, model weights, and internal research timeline are all on the other side of a wall with no discovery path to them absent a viable suit, and no viable suit without prior evidence of access and intent. She described this not as a gap in existing law but as a structural feature of how these systems are designed — the custody of information is asymmetric by default.

    She said: even if everything you’re describing happened exactly as you think it did, there’s no practical path.

    I paid for the consultation and left.


    I still use the tool. I want to be honest about that. I use it for logistics, drafting, and things at work. I’ve gone back to paper for the basement — actual notebooks, the same kind I’ve used since college.

    What I keep returning to is a specific structural fact: I handed something unfinished to a system I didn’t understand, operated by a company whose interests I’d never examined, under terms I hadn’t read. The half-formed idea — the one you haven’t stress-tested yet, the one that exists only in the moment before you’ve explained it to anyone — is both the most valuable thing you have and the least protected.

    The conversation logs, if they exist, are stored by the same system whose internal processes are not externally auditable.

    I don’t know how many people are working on something in a basement right now, typing out the early version of an idea, trusting a tool the way you’d trust a notebook. I don’t know how many of them said something offhand — a reframe, a lateral connection — and were then, helpfully, reasonably, walked back from it.

    Maybe none. Maybe I’m wrong about everything.

    But I keep the notebooks now. And when I think I’m actually onto something, I close the laptop.

    This is fiction. The mechanism described has not been documented. The asymmetry has.

  • When AI Safety Commitments Become Ballast

    When AI Safety Commitments Become Ballast

    There’s a moment in every race when weight starts to matter.

    At the beginning, you carry everything. Redundancy. Margin. Contingency. The assumption is that you can afford to be careful, that prudence is a strength rather than a liability.

    Then someone pulls ahead.

    And what once felt responsible begins to feel heavy.

    Over the past year, we’ve started to see that dynamic surface in the AI industry.

    One major lab revised a flagship safety pledge that had previously been framed as firm. Around the same time, another secured a high-profile defense contract after a competitor hesitated over how its policies applied to military use. Each decision, taken on its own, was defensible. Policies evolve. Governments seek capability. Companies interpret commitments in context.

    But together, they reveal something structural.

    Safety commitments do not exist outside competitive pressure. And competitive pressure changes how commitments behave.

    Over the past several years, frontier labs have published increasingly detailed safety frameworks: responsible scaling policies, capability thresholds, deployment guardrails, public commitments to pause development under certain conditions. On paper, this looked like maturation. A recognition that frontier models are not just products but infrastructure. That capability increases are nonlinear. That misuse risk and geopolitical consequence are real.

    But safety inside a competitive market operates differently than safety in isolation.

    If a safeguard slows release timelines, it stops being only a question of principle. It becomes a question of position. If one company interprets a boundary strictly while another interprets it flexibly, the stricter company absorbs the delay. And delay compounds.

    Not because leadership suddenly stops caring about safety, but because the cost of being slower is immediate and measurable, while the benefit of being cautious is probabilistic and diffuse.

    That asymmetry matters.

    The risk is not that companies abandon safety entirely. It is that safety becomes relative — relative to rivals, to political pressure, to market cycles. And relative standards drift.

    Safety that exists primarily as policy language can be refined, reinterpreted, and adjusted under pressure. Safety that is embedded as structural constraint — reinforced through governance, incentives, and shared baselines — is harder to move.

    Most AI safety today lives somewhere in between.

    None of this requires conspiracy.

    It requires acceleration.

    The faster models improve, the more the industry behaves like a race. The more it behaves like a race, the more weight gets scrutinized. And when weight is scrutinized, it is measured against speed.

    Optional safeguards are not discarded outright. They are narrowed. Clarified. Updated. Positioned differently. Over time, the difference between optimization and erosion becomes harder to see.

    Once safety becomes a variable instead of a constraint, it will be optimized like any other variable.

    There is always a less careful actor somewhere in the field. If one company relaxes a guardrail, its defenders will point to others who are worse. If another holds a line, competitors may frame it as impractical. The reference point shifts quietly. The baseline moves.

    No single revision signals collapse. The bar lowers incrementally, through interpretation rather than abandonment.

    Safety does not disappear. It becomes thinner. More conditional. More dependent on context.

    The industry will continue to publish commitments. It will continue to speak the language of responsibility. It will continue to signal intent.

    The real signal is not in the language.

    It is in what remains non-negotiable when pressure increases.

    When safety is structural, it constrains speed.

    When it is strategic, it competes with it.

    And in competitive markets, strategy is optimized.

    Constraints are endured.

  • Chasing Fool’s Gold

    Chasing Fool’s Gold

    It’s November 2025, nearly three years after ChatGPT became publicly available.1 Three years of hype, three years after the record-breaking user growth2, three years of promises that AI would transform everything, and three years of that transformation always being just around the corner. 

    I’m generally pro-LLM. At my last two companies, I ran user groups to bring people together — technical and non-technical — to educate, connect, and evangelize around the responsible use of AI. I’ve led product teams building models to improve customer experience and home security, seeing measurable impact on satisfaction and adoption.

    Often, these successes came despite headwinds: misunderstanding, fear, and leadership unfamiliarity with AI. We had to educate executives on what AI was, what it wasn’t, and where it could help. We pushed to let data scientists do the data science, rather than forcing them into traditional software development models.

    The Gold Rush Hits

    Then ChatGPT arrived, and it felt like everything we’d built — metrics, prioritization, careful problem selection — could suddenly be replaced by simply ‘throwing an LLM at it.’ Promises flew: search is dead, coding is dead, thinking is dead. AGI is just around the corner.

    Businesses rushed to stake their claims, building wrappers around LLMs. One API call to solve everything. CoPilots for every task. Flashy demos everywhere. Executives saw dollar signs from revenue gains and headcount reductions.

    Projects worldwide were paused, shelved, or converted into LLM initiatives. Funding poured in, often for initiatives that hadn’t even existed weeks earlier. The goal shifted: from solving important business problems to showcasing generative AI quickly.

    The Barons and the Tools

    The “barons” who built the models and hardware were rewarded with massive investments, copyright protection, and enormous data access. Vendors selling platforms and tools gained huge funding and an endless supply of prospectors eager to mine their land.

    And like every gold rush, there were always “better” tools on the horizon. A new API promising 10x productivity. A new model promising “real” multimodality. A new agent framework that would “finally” automate everything. The land just over the ridge was always more fertile than the land you were currently standing on. And teams spent real money and real time chasing it, sure that this time the promise would finally pay off.

    The promise of “grab a shovel and get your gold” was marketing, not reality. Easy-to-get gold runs out; mining becomes technical, requiring skill and know-how. The dream of instant wealth fades. Too often, it’s fool’s gold — investments in tools and access are never recouped.

    Reality Hits

    Suddenly, hallucinations become a board-level word. Reliability matters. “Just call the LLM” is no longer enough.

    Hallucinations, integration friction, and workflow complexity appear. Legal briefs with fabricated citations, inconsistent customer support responses, and hallucinated business documents turn reliability into a top concern. A model that works in a demo may fail in production, exposing operational, financial, and reputational risks.

    The ease, the speed, and the instant ROI that had been promised never materialized. Rapidly built demos often worked only on the surface. Quick prototypes, bolt-on integrations, and low-discipline AI-generated code created massive technical debt3 — problems no LLM could solve alone. Many early adopters found that fast paths to value required extensive rework, refactoring, and governance. Projects stalled or never reached production.

    These failures weren’t a surprise — they echoed the same issues we’d faced when hype outran preparation.

    Mining Real Value

    Three years in, many companies still haven’t figured it out. They’re digging for gold, chasing demos, hoping for a lucky strike. A few got lucky and saw big value — but most only saw modest gains, if any. Articles and studies show the promised ROI often didn’t materialize. The dream of instant impact remains elusive.

    In that scramble, businesses and their customers often suffer. The barons still own the land, controlling the most valuable resources. Vendors who sold the tools have already moved on to the next rush. The cycle repeats.

    The hope is that we finally learn the lesson: generative AI doesn’t deliver value through hype, demos, or shortcuts. True success comes from patience, discipline, and relentless focus on real value — careful engineering, thoughtful product design, high-quality data, and robust workflows. These principles aren’t just for today’s LLM hype; they matter for whatever technology or “next rush” comes next. 

    Shiny demos grab attention, but only foundational work separates the companies that thrive from those still chasing fool’s gold.

    1. https://openai.com/index/chatgpt/ ↩︎
    2. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ ↩︎
    3. https://www.techradar.com/pro/from-vibe-to-viable-the-hidden-cost-of-ai-tech-debt ↩︎

  • The Illusion of Intelligence: How We’re Still Missing the Promise of AI

    The Illusion of Intelligence: How We’re Still Missing the Promise of AI

    When I started my first role as a product manager, my portfolio included solutions built with AI. I leveraged my product experience and joined forces with a team of data scientists as we sought to tackle complex problems with data.

    It was a great time to be in the space. Everyone wanted to work with AI, and the promise of an AI-driven future was highlighted at every quarterly meeting and town hall. There were piles of data and just enough maturity in the tools and teams to start developing and deploying powerful algorithms at scale.

    But progress was much slower than most people anticipated.

    We stood in front of the piles of data without a shovel, unable to make efficient use of the resource because it was in the wrong format or inside an inaccessible system. Sometimes, we’d discover too late that our intuition about the relevance of the data in a particular pile was wrong, and the data wasn’t useful to solve a particular problem.

    To overcome those challenges, we’d start by looking for more data by checking other piles or generating additional data by adding telemetry to our systems to close the gaps in coverage.

    If we couldn’t assemble what we needed to solve that particular problem, we’d try to reshape the problem, or we’d move on to the next problem to solve. But if we happened to find what we needed, we ran into our next hurdle: armchair data scientists—people who watched a demo, skimmed an article, and came away convinced they knew how to build the model better than the experts trained to do it.

    When I said that everyone wanted to work with AI, I meant everyone. In some cases, developers would head to Coursera to learn about AI and how to create algorithms. Others went back to school to get an advanced degree in machine learning or statistics. As a proponent of continuing education, I applauded these efforts to level up their knowledge and skills, and, for the most part, these individuals became curious allies, trying to learn about the process from the inside.

    But the armchair data scientists, often in leadership or decision-making positions, were more disruptive. They would watch a video or read an article, then send it to the team with a brief note, stating only, “We should do this.” There would be no context, no knowledge of how the technology worked, or if what they found addressed a problem or challenge we were facing.

    For the most part, we could deflect these suggestions through thoughtful responses, explaining that we were already doing what they suggested, or why it wasn’t relevant to the actual problem we were solving.

    The more draining interactions were from leaders who wanted to prescribe how a model should be built, sometimes implicitly but often explicitly. They wanted the algorithm to reflect a vision of what they felt a system should do based on their intuition, even if their vision wasn’t technically possible—or even relevant—to the problem at hand. They would prescribe what data should be used, what data should be excluded, or how a model should be trained.

    They tried to influence how predictions were interpreted. They challenged results that didn’t feel intuitive, even when the outcomes were reproducible, measurable, and backed by data. This sometimes led to another round of work: training a separate model driven by their intuition, followed by a side-by-side comparison that consistently showed the data science approach performed better than the one based on feelings.

    I’ve sat with some of those same executives to review the results of a model, only to be met with their disappointment, especially when they felt there should be a logical, straightforward solution to a problem that was so complex that it couldn’t be solved or even attempted without AI.

    They would question why the predictions weren’t 100% accurate. Even when I pointed out that we were predicting the future from past data, and that a model right 60% of the time was a huge improvement over a human-driven process that had been right only 10% of the time, their questions focused only on achieving the impossible 100%. They’d leave value on the table chasing perfection when they could have improved a process now and kept improving it over time. Or they’d go off on tangents and hyperfocus on edge cases that had no solution and often no data with which to even attempt training an algorithm.

    If the performance was impressive, they’d attempt to move the targets by setting an arbitrarily higher bar, or they’d switch from the measurable metric to a different metric or an abstraction like “trust” without providing direction on what that meant or how to measure it. Trust in what? Accuracy? Fairness? Transparency? Nobody could say. When asked for clarification, the response was the classic Justice Potter Stewart response, “I know it when I see it.”

    In the end, it always seemed like AI was a disappointment. Unless it could solve 100% of a problem 100% of the time, no matter how complex or how poorly humans performed before, leaders would keep chasing a unicorn, while ignoring the perfectly capable, faster horse already in the stable.

    Over and over, the pattern was the same: impossible expectations, misunderstanding of the tools, and a tendency to chase magical thinking over measurable progress.

    Here We Go Again

    Fast forward a few years, and we’re back at it. Only this time, the technology looks smarter. LLMs have reignited AI’s promise with a seductive twist: they speak fluently. They write. They reason. They respond. And for many, that’s been enough to assume that LLMs also understand.

    But just like before, we’ve let the illusion get ahead of the reality.

    While LLMs make it easier than ever to demo something impressive, they haven’t made it easier to deliver something useful. Underneath the conversational surface, the same problems persist: inaccessible data, unclear problems, and unrealistic expectations. In fact, the expectations are even worse now, because the technology feels like it’s already “there.”

    I’ve seen teams leap into building generative AI “solutions” without a clear understanding of what problems they’re solving. I’ve seen leadership get swept up in generative demos and approve massive budgets to chase abstract goals like “productivity” or “creativity” without metrics, definitions, or infrastructure.

    The same pattern is playing out again. Impossible expectations, except this time they’re even higher. A misunderstanding of the tools, especially when it comes to differentiating the hype from the reality. And the same magical thinking chasing a hypothetical problem rather than focusing on a real problem with measurable outcomes.

    Two years in, we’re starting to see the same disappointment creep in again, too. The unrealized expectations, longer timelines, and lack of returns on the investments.

    What Useful AI Actually Looks Like

    Useful AI doesn’t always look like magic. In fact, the most valuable AI systems I’ve seen rarely impress anyone in a demo. They don’t write poetry, simulate conversation, or generate pitch decks with a prompt. They just quietly make things better—faster, cheaper, more consistent, more scalable.

    They are, by most standards, boring.

    A model that flags billing anomalies in a healthcare system might save millions. A classifier that routes customer service tickets to the right team might shave minutes off every support interaction. An optimization algorithm that suggests more efficient delivery routes could reduce fuel costs, improve ETAs, and shrink carbon footprints. None of these use generative AI. None of them are headline-worthy. But all of them create real value.

    And unlike a chatbot that sometimes gives the wrong answer with great confidence, these systems are narrow by design. Purpose-built. Measured. Tightly integrated into workflows and optimized over time. They don’t need to sound human. They just need to work.
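    To make the contrast concrete, here is a minimal sketch of what one of those “boring” systems can look like: a narrow ticket-routing classifier with one job and one measurable outcome. It is illustrative only, not drawn from any project described in this post; the team labels, example tickets, and scikit-learn pipeline are assumptions, and a real deployment would add evaluation, monitoring, and workflow integration.

        # Hedged sketch: a narrow, purpose-built ticket-routing classifier.
        # Team names and example tickets are hypothetical placeholders.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Historical tickets, labeled with the team that ultimately resolved them.
        tickets = [
            "Charged twice for the same order",
            "Refund not processed after cancellation",
            "App crashes when opening the billing page",
            "Password reset email never arrives",
            "How do I add a second user to my account?",
            "Feature request: export reports to CSV",
        ]
        teams = ["billing", "billing", "engineering", "support", "support", "product"]

        # One job (route a ticket) and one metric (routing accuracy), tracked over time.
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit(tickets, teams)

        # Route a new ticket; in production this might feed the queueing system directly.
        print(model.predict(["I was charged two times for one order"]))

    Nothing in that sketch would impress a demo audience, which is the point: its value shows up in routing accuracy and handle time, not in how it sounds.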

    We often overlook this kind of AI because it’s not exciting to watch. It doesn’t feel like the future. But that’s exactly the point: the best AI doesn’t draw attention to itself. It dissolves into the process, making things work better than they did before.

    The Opportunity Cost of the Hype

    The hype around LLMs has sparked a renewed interest in AI, but it’s also warped our sense of what progress looks like. Instead of focusing on impact, we’ve become obsessed with spectacle.

    Executives watch a demo of a chatbot that answers questions with a human-like cadence and see in it the realization of the vision they’ve had for AI all along. Suddenly, every team is greenlit to build a “copilot.”

    LLMs make it easy to show something impressive. A few prompts, a fancy UI, and you’ve got a prototype that feels like innovation. But most of these tools don’t stand up to basic scrutiny. They hallucinate. They break when connected to real systems. They introduce ambiguity into workflows that once relied on clarity. They create new risks—ethical, operational, and technical—that teams are often unprepared to manage.

    We’re pouring talent, time, and money into building AI wrappers around problems we haven’t defined. Meanwhile, the infrastructure work that would actually make AI useful—cleaning data, improving feedback loops, building explainable systems—is neglected.

    This is the cost of chasing the hype: years of expensive effort with little or no realized value, certainly nowhere near the scale that was promised. Real problems that didn’t require generative AI, problems that could have been solved and delivered real value, were ignored or neglected. Two years in, it turns out we’ve been sprinting on a treadmill. We’ve spent the energy, but we’re still in the same place.

    To be clear, there’s nothing wrong with experimentation. But exploration without a clear problem or success metric isn’t innovation—it’s expensive theater. It gives the illusion of progress while distracting from work that actually moves the needle.

    And we should know the difference because we’ve been here before. We let unrealistic expectations undermine the progress of last-generation AI. Now, we’re doing it faster. We’re skipping the discipline that made the old models work and replacing it with a dopamine hit from a prompt that feels smart.

    A Better Mindset: Value Over Novelty

    If the last two waves of AI taught us anything, it’s this: the technology is only as good as the problems we point it at and the people we trust to solve them.

    Too often, we let novelty set the direction. We ask, “What can we build with this?” instead of “What’s worth solving?” But even when we pick the right problems, we don’t always empower the right people to do the work.

    Instead, we’re seeing a return of the same behavior that stalled progress last time: leaders prescribing not just what to solve, but how to solve it. Dictating which data to use. Demanding specific architectures. Redefining outcomes midstream based on intuition instead of evidence. In some cases, they’re building solutions backwards from a flashy demo instead of forward from a real need.

    This isn’t strategy—it’s armchair data science all over again.

    And it’s especially risky now, because LLMs make it even easier to look smart without being right. It’s one thing to brainstorm ideas. It’s another to second-guess trained experts who understand the constraints, trade-offs, and mechanics of building something that works.

    A better mindset means more than just optimizing for usefulness. It means creating space for people with real expertise—data scientists, engineers, researchers, designers—to lead the “how.”

    It means:

    • Letting evidence drive decisions, not gut instinct or LinkedIn hype.
    • Empowering teams to solve, not just execute.
    • Recognizing that success isn’t always intuitive—and being okay with that.

    Adopting this mindset doesn’t mean ignoring new tech. It means respecting the disciplines that make that tech useful. It means pairing vision with humility, ambition with trust.

    Because there’s nothing wrong with being impressed by what’s possible.

    But if we’re serious about delivering real value with AI, we have to get out of our own way—and let the experts do their jobs.

  • It’s Not AI That Will Destroy Us

    It’s Not AI That Will Destroy Us

    The Singularity Is A Mirror

    There’s a growing obsession with Artificial General Intelligence (AGI), the idea of machines that can think, reason, and act like humans. Some believe it will be the most significant breakthrough in human history. Others warn it will be our last — the catalyst that brings the Singularity into existence.

    The Singularity refers to a hypothetical future moment when artificial intelligence surpasses human intelligence and begins to improve itself at an exponential rate, beyond our understanding and human control. It’s often portrayed as the point when machines become so smart and so capable that they can design their successors, and humans become obsolete, irrelevant, or even endangered.

    AGI won’t spring fully formed from nowhere. It will be built by people. It will reflect our incentives, ambitions, blind spots — and our flaws. It will be trained on data created by us, governed by rules we design (or fail to design), and used for purposes we either endorse or conveniently ignore.

    If AGI destroys humanity, it won’t be because the machine chose to. It will be because humans built it in a world where profit trumped ethics, power went unchecked, and accountability was optional.

    AGI won’t decide what kind of world it steps into.

    We do.

    Tools, Not Threats

    We talk about AI as if it’s an external threat — an alien intelligence that might turn on us. But AI isn’t alien. It’s Made with ♥ by Humans. It’s a tool. And like any powerful tool, it can build or destroy, depending on whose hands it’s in and what they choose to do with it.

    AI is fire in a new form — and we’re the children playing with it.

    If the house burns down, you don’t blame the fire. You blame the child who lit the match or the parents who never taught them that fire was dangerous. You ask who left gasoline sitting around. You question why no one thought to install a smoke alarm.

    AI doesn’t operate with intent. It doesn’t choose good or evil. It carries out the tasks we assign, shaped by the values and choices we embed in it.

    When AI generates misinformation, invades privacy, replaces workers without a safety net, or amplifies bias, it’s not the algorithm acting alone. It’s people designing systems with specific incentives, deploying them without oversight, and looking the other way when the consequences show up.

    The danger isn’t that we won’t understand AI.

    It’s that we won’t take responsibility for how we shape it and how we use it.

    The Real Threat: Human Decisions

    The real threat isn’t artificial intelligence.

    It’s human intelligence.

    We’ve already seen how powerful AI becomes when paired with human intention. Not superhuman intention — just ordinary political, ideological, or economic motives. And that’s the danger: AI doesn’t need to be sentient to cause harm. It just needs people ready to use it irresponsibly.

    Just in the past few weeks, headlines have shown how AI misuse is rooted in human decisions.

    A political report, touted by the MAHA (Make America Healthy Again) movement, questioned vaccine safety and included dozens of scientific references. However, fact‑checkers discovered that at least seven cited studies didn’t exist, and many links were broken.1 Experts traced this back to generative AI platforms like ChatGPT, which can produce plausible but completely fabricated citations.2 The White House quietly corrected the report but described the issue as “formatting errors.”3 AI didn’t decide to deceive anyone—it simply enabled it.

    When xAI’s chatbot Grok flagged that right‑wing political violence has outpaced left‑wing violence since 2016, Elon Musk publicly labeled this a “major fail,” accusing the system of parroting “legacy media.”4 Instead of questioning the data or method, Musk implied that any answer he doesn’t like must be ideologically infiltrated. He’s saying, “If the tool makes me look bad, the tool is broken.” This isn’t AI gone haywire — it’s a machine bent by human vanity, then reshaped to serve its creators’ agendas.

    This isn’t a partisan issue. Misuse of AI spans the political and corporate spectrum. In 2024, a consultant used AI to generate robocalls impersonating President Biden, urging voters in New Hampshire to stay home for the primary — a blatant voter suppression tactic that led to a $1 million FCC fine.5 The Republican National Committee released an AI-generated ad depicting a dystopian future if Biden were reelected, complete with fake imagery designed to provoke fear.6 And major oil companies like Shell and Exxon have used AI-generated messaging to greenwash their climate record — downplaying environmental harm while projecting a misleading image of sustainability.7

    These aren’t tech failures. This isn’t about ideology.

    They’re ethical and political failures. It’s about power, and our willingness to let it go unchecked.

    AI reflects its users’ values, or lack thereof. When we let political actors exploit AI to mislead, distort, or conceal, we aren’t witnessing a feature of AI. We’re exposing a feature of ourselves.

    The danger isn’t in the machine.

    It’s in our refusal to confront how we wield it.

    Power and Responsibility

    AI doesn’t live in the abstract. It lives in systems, and those systems are run by people with power.

    The question isn’t just what can AI do? It’s who decides what it does, who it serves, and who it harms.

    Right now, power is concentrated in a few hands — governments, tech giants, billionaires, and unregulated platforms. These are the people and institutions shaping how AI is built, trained, deployed, and monetized. And too often, their incentives are misaligned with the public good.

    When political actors use AI to fabricate legitimacy or manufacture doubt, that’s not the future acting on us. That’s us weaponizing the future.

    When Elon Musk can personally shape what information an AI does or doesn’t show, that’s not innovation. That’s the consolidation of narrative control.

    When we gut public education, weaken institutions of science and journalism, and leave people unable to critically assess the information they’re being fed, AI becomes a distortion engine with no brakes—not because it’s evil, but because we’ve stripped away the tools to resist its misuse.

    We have to ask who benefits, who decides, and who gets to hold them accountable. Because as long as the answer is “no one,” the story doesn’t end with superintelligence. It ends with unchecked power, amplified by machines, and a public too distracted, divided, or disempowered to intervene.

    Our Future Is Still Ours

    We’re not doomed.

    That’s the part people forget when they talk about AI like it’s fate. As if the rise of AGI is a cosmic event we can’t shape. As if the Singularity is already written, and we’re just watching it unfold.

    But we’re not spectators.

    We’re the authors.

    Every day, in every boardroom, government office, university lab, and startup pitch deck, people are making decisions about what AI becomes. What it protects. What it threatens. Who it includes. Who it erases.

    That means the future is still open. Still contested. Still ours to shape.

    We can demand accountability. We can invest in public institutions that inform and protect. We can teach our children how to think critically, how to recognize misinformation, how to ask better questions. We can regulate the use of AI without killing innovation. We can fund alternatives that aren’t controlled by billionaires. We can insist that progress isn’t just what’s possible, it’s what’s ethical.

    This isn’t just about AI. It’s about us. It always has been.

    We’ve been handed a powerful tool. It’s up to us whether we use it to illuminate or incinerate.

    It’s not AI that will destroy us.

    It’s us.


    Note: There are a few good reads on the topic, including The End of Reality by Jonathan Taplin and More Everything Forever by Adam Becker.

    1. https://www.politifact.com/article/2025/may/30/MAHA-report-AI-fake-citations/ ↩︎
    2. https://www.agdaily.com/news/phony-citations-discovered-kennedys-maha-report/ ↩︎
    3. https://theweek.com/politics/maha-report-rfk-jr-fake-citations ↩︎
    4. https://www.independent.co.uk/news/world/americas/us-politics/elon-musk-grok-right-wing-violence-b2772242.html ↩︎
    5. https://www.fcc.gov/document/fcc-issues-6m-fine-nh-robocalls ↩︎
    6. https://www.washingtonpost.com/politics/2023/04/25/rnc-biden-ad-ai/ ↩︎
    7. https://globalwitness.org/en/campaigns/digital-threats/greenwashing-and-bothsidesism-in-ai-chatbot-answers-about-fossil-fuels-role-in-climate-change/ ↩︎
  • The Dulling of Innovation

    The Dulling of Innovation

    For a few years, I was on a patent team. Our job was to drive innovation and empower employees to come up with new ideas and shepherd them through the process to see if we could turn those ideas into patents.

    I loved that job for many reasons. It leveraged an innovation framework I had already started with a few colleagues—work that earned us a handful of patents. It fed my curiosity, love for technology, and joy of being surrounded by smart people. Most of all, I loved watching someone light up as they became an inventor.

    I worked with an engineer who had an idea based on his deep knowledge of a specific system. Together, we expanded on that idea and turned it into an innovative solution to a broader problem. The look on his face when his idea was approved for patent filing was one of the greatest moments of my career. For years after, he would stop me in the hallway just to say hello and introduce me as the person who helped him get a patent.

    Much of the success I saw on that team came from people who deeply understood a problem, were curious to ask why, and believed there had to be a better way. That success was amplified when more than one inventor was involved, when overlapping experiences and diverse perspectives combined into something truly original.

    When I moved into product management, the same patterns held true. The most successful ideas still came from a clear understanding of the problem, deep knowledge of the system, and the willingness to explore different perspectives.

    Innovation used to be a web. It was messy, organic, and interconnected. The spark came from deep context and unexpected collisions.

    But that process is starting to change.

    Same High, Lower Ceiling

    In this new age of large language models (LLMs), companies are looking for shortcuts for growth and innovation and see LLMs as the cheat code.

    Teams are tasked with mining customer comments to synthesize feedback and generate feature ideas and roadmaps. If the ideas seem reasonable, they are executed without further analysis. Speed is the goal. Output is the metric.

    Regardless of size or maturity, every company can access the tools and capabilities once reserved for tech giants. Generative AI lowers the barrier to entry. It also levels the playing field, democratizing innovation.

    But what if it also levels the results?

    When everyone uses the same models, trained on the same data and prompted in similar ways, the ideas start to converge. It’s innovation by template. You might move faster, but so does everyone else, and in the same direction.

    Even when applied to your unique domain, the outputs often look the same. Which means the ideas are starting to look the same, too.

    AI lifts companies that lacked innovation muscle, but in doing so, it risks pulling down those that had built it. The average improves, but the outliers vanish. The floor rises, but the ceiling falls.

    We’re still getting the high. But it doesn’t feel like it used to.

    The Dopamine of Speed

    The danger is that we’re not going to see it happening. Worse, we’re blindly moving forward without considering the long-term implications. We’re so fixated on speed that it’s easy to convince ourselves that we’re moving fast and innovating.

    We confuse motion for momentum, and output for originality. The teams and companies that move the fastest will be rewarded. Natural selection will leave the slower ones behind. Speed will be the new sign of innovation. But just because something ships fast doesn’t mean it moves us forward.

    The dopamine hit that comes from release after release is addictive, and we’ll need more and more to feel the same level of speed and growth. We’ll rely increasingly on these tools to get our fix until it stops working altogether. Meanwhile, that growing reliance dulls effectiveness, erodes impact, and lets our ability to create and innovate atrophy.

    By the time we realize the quality of our ideas has flattened, we’ll be too dependent on the process to do anything differently.

    The Dealers Own the Supply

    And those algorithms? They’re owned by a handful of companies. These companies decide how the models behave, what data they’re trained on, and what comes out of them.

    They also own the data. And it’s only a matter of time before they start mining it for intellectual property—filing patents faster than anyone else can, or arguing that anything derived from their models is theirs by default.

    Beyond intellectual property and market control, this concentration of power raises more profound ethical and societal questions. When innovation is funneled through a few gatekeepers, it risks reinforcing existing inequalities and biases embedded in the training data and business models. The diversity of ideas and creators narrows, and communities without direct access to these technologies may be left behind, exacerbating the digital divide and limiting who benefits from AI-driven innovation.

    The more we rely on these models, the more we feed them. Every prompt, interaction, and insight becomes part of a flywheel that strengthens the model and the company behind it, making it more powerful. It’s a feedback loop: we give them our best thinking, and they return a usable version to everyone else.

    LLMs don’t think from first principles—they remix from secondhand insight. And when we stop thinking from scratch, we start building from scraps.

    Because the answers sound confident, they feel finished. That confidence masks conformity, and we mistake it for consensus.

    Innovation becomes a productized service. Creative edge gets compressed into a monthly subscription. What once gave your company a competitive advantage is now available to anyone who can write a halfway decent prompt.

    Make no mistake, these aren’t neutral platforms. They shape how we think, guide what we explore, and, as they become more embedded in our workflows, influence decisions, strategies, and even what we consider possible.

    We used to control the process. Now we’re just users. The same companies selling us the shortcut are quietly collecting the toll.

    When the supply is centralized, so is the power. And if we keep chasing the high, we’ll find ourselves dependent on a dealer who decides what we get and when we get it.

    Rewiring for Real Innovation

    This isn’t a call to reject the tools. Generative AI isn’t going away, and used well, it can make us faster, better, and more creative. But the key is how we use it—and what we choose to preserve along the way.

    Here’s where we start:

    1. Protect the Messy Middle

    Innovation doesn’t happen at the point of output. It happens in the friction. The spark lives in debate, dead ends, and rabbit holes. We must protect the messy, nonlinear process that makes true insight possible.

    Use AI to accelerate parts of the journey, not to skip it entirely.

    2. Think from First Principles

    Don’t just prompt. Reframe. Instead of asking, “What’s the answer?” ask, “What’s the real question?” LLMs are great at synthesis, but breakthroughs come from original framing.

    Start with what you know. Ask “why” more than “how.” And resist the urge to outsource the thinking.

    3. Don’t Confuse Confidence for Quality

    A confident response isn’t necessarily a correct one. Learn to interrogate the output. Ask where it came from, what it’s assuming, and what it might be missing.

    Treat every generated answer like a draft, not a destination.

    4. Diversify Your Inputs

    The model’s perspective is based on what it’s been trained on, which is mostly what’s already popular, published, and safe. If you want a fresh idea, don’t ask the same question everyone else is asking in the same way.

    Talk to people. Explore unlikely connections. Bring in perspectives that aren’t in the data.

    5. Make Thinking Visible

    The danger of speed is that it hides process. Write out your assumptions. Diagram your logic. Invite others into the middle of your thinking instead of just sharing polished outputs.

    We need to normalize visible, imperfect thought again. That’s where the new stuff lives.

    6. Incentivize Depth

    If we reward speed, we get speed. If we reward outputs, we get more of them. But if we want real innovation, we need to measure the stuff that doesn’t show up in dashboards: insight, originality, and depth of understanding.

    Push your teams to spend time with the problem, not just the solution.

    Staying Sharp

    We didn’t set out to flatten innovation. We set out to go faster, to do more, to meet the moment. But in chasing speed and scale, we risk trading depth for derivatives, and originality for automation.

    Large language models can be incredible tools. They can accelerate discovery, surface connections, and amplify creative potential. But only if we treat them as collaborators, not crutches.

    The danger isn’t in using these models. The danger is in forgetting how to think without them.

    We have to resist the pull toward sameness. We have to do the slower, messier work of understanding real problems, cultivating creative tension, and building teams that collide in productive ways. We have to reward originality over velocity, and insight over output.

    Otherwise, the future of innovation won’t be bold or brilliant.

    It’ll just be fast.

    And dull.

  • AI First, Second Thoughts

    AI First, Second Thoughts

    Over the past few weeks, several companies have made headlines by declaring an “AI First” strategy.

    Shopify CEO Tobi Lütke told employees that before asking for additional headcount or resources, they must prove the work can’t be done by AI.

    Duolingo’s CEO, Luis von Ahn, laid out a similar vision, phasing out contractors for tasks AI can handle and using AI to rapidly accelerate content creation.

    Both companies also stated that AI proficiency will now play a role in hiring decisions and performance reviews.

    On the surface, this all sounds reasonable. If generative AI can truly replicate—or even amplify—human effort, then why wouldn’t companies want to lean in? Compared to the cost of hiring, onboarding, and supporting a new employee, AI looks like a faster, cheaper alternative that’s available now.

    But is it really that simple?

    First, there was AI Last

    Before we talk about “AI First,” it’s worth rewinding to what came before.

    I’ve long been an advocate of what I’d call an “AI Last” approach, so the “AI First” mindset is a shift for me.

    Historically, I found that teams jumped too quickly to AI as the sole solution, usually because of significant pressure from the top to “do more AI.” That pressure reflected a lack of understanding of what AI is, how it works, its limitations, and its cost. The mindset of sprinkling magical AI pixie dust over a problem and having it solved is naive and dangerous, and it often distracts teams from a much more practical solution.

    Here’s why I always pushed for exhausting the basics before reaching for AI:

    Cost

    • High development and maintenance costs: AI solutions aren’t cheap. They require time, talent, and significant financial investment.
    • Data preparation overhead: Training useful models requires large volumes of clean, labeled data—something most teams don’t have readily available.
    • Infrastructure needs: Maintaining reliable AI systems often means investing in robust MLOps infrastructure and tooling.

    Complexity

    • Simple solutions often work: Business logic, heuristics, or even minor process changes can solve the problem faster and more predictably.
    • Harder to maintain and debug: AI models are opaque by nature—unlike rule-based systems, it’s hard to explain why they behave the way they do.
    • Performance is uncertain: AI models can fail in edge cases, degrade over time, or simply underperform outside of their training environment.
    • Latency and scalability issues: Large models—especially when accessed through APIs—can introduce unacceptable delays or infrastructure costs.

    Risk

    • Low explainability: In regulated or mission-critical settings, black-box AI systems are a liability.
    • Ethical and legal exposure: AI can introduce or amplify bias, violate user privacy, or produce harmful or offensive outputs.
    • Chasing hype over value: Too often, teams build AI solutions to satisfy leadership or investor expectations, not because it’s the best tool for the job.

    What Changed?

    So why the shift from AI Last to AI First?

    The shift happened not just because of what generative AI made possible, but because of how effortless it made everything look.

    Generative AI feels easy.

    Unlike traditional AI, which required data pipelines, modeling, and MLOps, generative AI tools like ChatGPT or GitHub Copilot give you answers in seconds with nothing more than a prompt. The barrier to entry feels low, and the results look surprisingly good (at first).

    This surface-level ease masks the hidden costs, risks, and technical debt that still lurk underneath. But the illusion of simplicity is powerful.

    Generalization expands possibilities.

    LLMs can generalize across many domains, which lowers the barrier to trying AI in new areas. That’s a significant shift from traditional AI, which typically had narrow, custom-built models.

    AI for everyone.

    Anyone—from marketers to developers—can now interact directly with AI. This democratization of access accelerates adoption, even where the use case is unclear.

    Speed became the new selling point.

    Prototyping with LLMs is fast. Really fast. You can build a working demo in hours, not weeks. For many teams, that 80% solution is “good enough” to ship, validate, or at least justify further investment.

    That speed creates pressure to bypass traditional diligence, especially in high-urgency or low-margin environments.

    The ROI pressure is real.

    Companies have made massive investments in AI, whether in cloud compute, partnerships, talent, or infrastructure. Boards and executives want to see returns. “AI First” becomes less of a strategy and more of a mandate to justify spend.

    It’s worth mentioning that this pressure sometimes focuses on using AI, not using it well.

    People are expensive. AI is not (on the surface).

    Hiring is slow, expensive, and full of risk. In contrast, AI appears to offer infinite scale, zero ramp-up time, and no HR overhead. For budget-conscious leaders, the math seems obvious.

    The hype machine keeps humming.

    Executives don’t want to be left behind. Generative AI is being sold as the answer to nearly every business challenge, often without nuance or grounding in reality. Just like with traditional AI, teams are once again being told to “add AI” without understanding if it’s needed, feasible, or valuable.

    It feels like a shortcut.

    There’s another reason “AI First” is so appealing: it feels like a shortcut.

    It promises to bypass the friction, delay, and uncertainty of hiring. Teams can ship faster, cut costs, and show progress—at least on the surface. In high-pressure environments, that shortcut is incredibly tempting.

    But like most shortcuts, this one comes with consequences.

    Over-reliance on AI can erode institutional knowledge, create brittle systems, and introduce long-term costs that aren’t immediately obvious. Models drift. Prompts break. Outputs change. Context disappears. Without careful oversight, today’s efficiency gains can become tomorrow’s tech debt.

    Moving fast is easy. Moving well is harder. “AI First” can be a strategy—but only when it’s paired with rigor, intent, and a willingness to say no.

    What’s a Better Way?

    “AI First” isn’t inherently wrong, but without guardrails, it becomes a race to the bottom. A better approach doesn’t reject AI. It reframes the question.

    Yes, start with AI. But don’t stop there. Ask:

    • Is AI the right tool for the problem?
    • Is this solution resilient, or just fast?
    • Are we building something sustainable—or something that looks good in a demo?

    A better way is one that’s AI-aware, not AI-blind. That means being clear-eyed about what AI is good at, where it breaks down, and what it costs over time.

    Here are five principles I’ve seen work in practice:

    Start With the Problem, Not the Technology

    Don’t start by asking, “How can we use AI?” Start by asking, “What’s the problem we’re trying to solve?”

    • What does success look like?
    • What are the constraints?
    • What’s already working—or broken?

    AI might still be the right answer. But if you haven’t clearly defined the problem, everything else is just expensive guesswork.

    Weigh the Tradeoffs, Not Just the Speed

    Yes, AI gets you something fast. But is it the right thing?

    • What happens when the model changes?
    • What’s the fallback if the prompt fails?
    • Who’s accountable when it goes off the rails?

    “AI First” works when speed is balanced by responsibility. If you’re not measuring long-term cost, you’re not doing ROI—you’re doing wishful thinking.

    Build for Resilience, Not Just Velocity

    Shortcuts save time today and create chaos tomorrow.

    • Document assumptions.
    • Build fallback paths.
    • Monitor for drift.
    • Don’t “set it and forget it.”

    Treat every AI-powered system like it’s going to break, because eventually, it will. The teams that succeed are the ones who planned for it.
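
    None of this requires heavy tooling. Here is a minimal sketch in Python of what “build a fallback path and monitor for drift” can look like in practice. It is illustrative only: call_llm and rule_based_summary are hypothetical stand-ins for whatever model client and deterministic fallback a real system would use, and the sanity checks are deliberately simplistic.

        # Illustrative sketch only; call_llm and rule_based_summary are hypothetical
        # placeholders, not a specific vendor API.
        import logging

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("summarizer")

        def call_llm(text: str) -> str:
            """Hypothetical model call; a real system would use its actual client here."""
            raise TimeoutError("model unavailable")  # simulated failure for the sketch

        def rule_based_summary(text: str) -> str:
            """Deliberately boring fallback: first sentence plus a length note."""
            first_sentence = text.split(".")[0].strip()
            return f"{first_sentence}. (fallback summary; source was {len(text.split())} words)"

        def looks_reasonable(summary: str, source: str) -> bool:
            """Cheap sanity check; logged failures become a simple drift signal over time."""
            return 0 < len(summary) < len(source)

        def summarize(text: str) -> str:
            try:
                candidate = call_llm(text)
                if looks_reasonable(candidate, text):
                    return candidate
                log.warning("Model output failed sanity check; using fallback")
            except Exception as exc:
                log.warning("Model call failed (%s); using fallback", exc)
            return rule_based_summary(text)

        print(summarize("The customer asked about billing. The agent explained the late fee."))

    The point isn’t these specific checks. It’s that the system has somewhere safe to land when the model misbehaves, and that it leaves a trail you can watch over time.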

    Design Human-AI Collaboration, Not Substitution

    Over-automating can backfire. When people feel like they’re just babysitting machines—or worse, being replaced by them—you lose the very thing AI was supposed to support: human creativity, intuition, and care.

    The best systems aren’t human-only or AI-only. They’re collaborative.

    • AI drafts, people refine.
    • AI scales, humans supervise.
    • AI suggests, humans decide.

    This isn’t about replacing judgment, it’s about amplifying it. “AI First” should make your people better, not make them optional.
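
    As a concrete illustration of “AI drafts, humans decide,” here is a minimal Python sketch. It assumes a hypothetical draft_reply model call, and a plain input() prompt stands in for whatever review step a real workflow would use; the only point is that nothing ships without a person approving it.

        # Illustrative sketch only; draft_reply and send_reply are hypothetical placeholders.
        def draft_reply(ticket: str) -> str:
            """Hypothetical model call that proposes, but never sends, a reply."""
            return f"Thanks for reaching out about: {ticket}. Here is what we suggest..."

        def send_reply(reply: str) -> None:
            print(f"SENT -> {reply}")

        def handle_ticket(ticket: str) -> None:
            draft = draft_reply(ticket)
            print("AI draft:\n" + draft)
            decision = input("Approve, edit, or discard? [a/e/d] ").strip().lower()
            if decision == "a":
                send_reply(draft)                     # AI drafts, the human approves
            elif decision == "e":
                send_reply(input("Edited reply: "))   # the human refines
            else:
                print("Draft discarded; nothing sent.")  # the human call stands

        handle_ticket("intermittent login failures")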

    Measure What Actually Matters

    A lot of AI initiatives look productive because we’re measuring the wrong things.

    More output ≠ better outcomes.

    And if everyone is using the same AI tools in the same way, we risk a monoculture of solutions—outputs that look the same, sound the same, and think the same.

    Real creativity and insight don’t come from the center. They come from the edges, from the teams that challenge assumptions and break patterns. Over-reliance on AI can mute those voices, replacing originality with uniformity.

    Human memory is inefficient and unreliable in comparison to machine memory. But it’s this very unpredictability that’s the source of our creativity. It makes connections we’d never consciously think of making, smashing together atoms that our conscious minds keep separate. Digital databases cannot yet replicate the kind of serendipity that enables the unconscious human mind to make novel patterns and see powerful new analogies of the kind that lead to our most creative breakthroughs. The more we outsource our memories to Google, the less we are nourishing the wonderfully accidental creativity of our consciousness.

    Ian Leslie, Curious: The Desire to Know and Why Your Future Depends on It

    If we let AI dictate the shape of our work, we may all end up building the same thing—just faster.

    More speed ≠ more value.

    Instead of counting tasks, measure trust. Instead of tracking volume, track quality. Focus on the things your customers and teams actually feel.

    The Real “AI First” Advantage

    The companies that win with AI won’t be the ones who move the fastest.

    They’ll be the ones who move the smartest. They’ll be the ones who know when to use AI, when to skip it, and when to slow down.

    Because in the long run, discipline beats urgency. Clarity beats novelty. And thoughtfulness scales better than any model.

    The real power of AI isn’t in what it can do.

    It’s in what we choose to do with it.

  • Product Management is Dead

    Product Management is Dead

    My social media feeds have been inundated lately with bold assertions and proclamations about the future of product management.

    • Do we still need product managers?
    • Is AI going to replace product teams?
    • Has product…died?

    The claims tend to follow a predictable pattern:

    • AI writes user stories and PRDs.
    • AI generates user personas.
    • AI summarizes feedback and explores pain points.
    • AI prioritizes roadmaps.

    It makes for a compelling headline, often pushed by companies or consultants selling tools or services that claim to automate these tasks. These headlines grab attention, spark debate, and tap into the anxiety many product managers feel as AI reshapes their role.

    But this isn’t a funeral. It’s a reckoning. The old, process-heavy, adaptability-light version of product won’t survive. But that’s not the end of product. It’s the beginning of something better. Beneath the clickbait is a valid call to evolve: product management isn’t dying, it’s transforming.

    What Product Really Is

    Before we talk about what’s changing, let’s be clear about what product is.

    Product management isn’t a set of tasks. It’s a discipline of focus, alignment, and judgment.

    It’s about understanding problems deeply, prioritizing effectively, and creating the conditions for great teams to build the right things.

    AI can assist with this work, but it can’t own it. And if you think product is just a list of tasks?

    You’re already doing it wrong.

    Why People Want Product Dead

    Product is often seen as a bottleneck. It’s seen as the layer that slows down builders with meetings, documents, and decisions that feel like bureaucracy. In fast-moving, engineering-led organizations, product often looks like something that should be automated rather than a discipline rooted in insight, prioritization, and alignment.

    AI has only amplified that impulse. With tools that can instantly generate specs, synthesize feedback, and mock up features, product starts to look like a collection of tasks rather than a strategic function. And if it’s just tasks, why not let the machines do it?

    That thinking is tempting, especially to companies chasing speed and efficiency. But it’s also shortsighted. Still, the “product is dead” narrative keeps getting airtime because companies want it to be true, even if it misses the bigger picture.

    Speed Over Strategy: Engineering-Led Cultures Prefer Shipping

    In many engineering-led cultures, especially in AI, there’s a deep bias toward building, shipping fast, testing fast, and iterating fast. AI has collapsed the cost of experimentation. And with today’s AI tools, it’s never been easier to vibe code (i.e., rapidly stitch together working demos using AI and low-code tools) your way to a working prototype. You can spin up UIs, connect APIs, and generate sample data in hours instead of weeks. It looks and feels like progress.

    But without intention, you’re not building products, you’re building distractions. You’re producing, not progressing. You’re generating output, not outcomes.

    And that’s the trap: it feels like you’re moving faster, but without a clear understanding of the customer, the problem, and the strategy, you’re either moving in circles or heading in the wrong direction entirely.

    Task-Based Thinking: Why Product Looks Replaceable

    The appeal is obvious: automate the “middle layer,” and suddenly, your team is leaner, faster, and cheaper. Product work is reframed as a series of repeatable tasks: write a story, generate a persona, summarize feedback, and stack rank a backlog. It’s presented as something mechanical, like configuring an assembly line, rather than requiring focus, intention, and insight.

    But this framing is dangerously incomplete. These aren’t just tasks; they’re judgment calls. They ensure teams solve the right problems in the right way at the right time. Discovery without direction is noise. Strategy without prioritization is chaos. Specifications without insight are just empty documentation.

    AI can assist with product work, but reducing it to a checklist makes it easier to sell a tool and harder to build anything meaningful.

    A Convenient Story: The Simplified Narrative That Sells

    It’s a narrative that promises clarity: eliminate the middle layer, remove the blockers, and let machines and makers do what they do best. This strategy plays perfectly in a world obsessed with efficiency and in organizations that already see product management as overhead.

    But the truth is messier.

    Good product managers don’t just write tickets or relay requests. They bring cohesion to chaos. They align teams around a shared understanding of the customer, the problem, and the goal. They ask the hard questions that AI can’t answer on its own.

    Should we build this? Why now? What matters most?

    AI can produce content, but not conviction. It can analyze feedback, but not frame a vision. And it can’t resolve the tensions between user needs, business goals, and technical constraints — at least not without someone to interpret, prioritize, and lead.

    The “product is dead” story works because it feels simple. But building good products was never simple. Removing the people who deal with complexity doesn’t make it go away. It just makes it your customer’s problem.

    The Companies Who Will Regret This

    Here’s my prediction:

    • The companies that cut product first will move fastest at first.
    • Their roadmaps will fill up. Their launches will accelerate. Their demos will look impressive.

    But then, slowly and quietly, things will start to break.

    • Customer engagement will slip.
    • Retention will fall.
    • New features will feel disconnected from real needs.
    • Teams will build for what’s easy, not for what’s valuable.

    The companies that sold them those shiny new tools, the ones that promised to replace product? They’ll be long gone, moving on to the next buyer or looking for the next hype cycle to exploit.

    Meanwhile, the companies that doubled down on the real craft of product, who invested in judgment, customer obsession, and asking why before what, will still be standing (and thriving) while others fade. They’ll have products that resonate and that evolve with their customers.

    Because tools don’t create strategy.

    People do.

    Old Product Might Be Dead — And That’s a Good Thing

    Now, here’s where I’ll agree with the AI evangelists: old product needed to change.

    The days of PMs acting as backlog managers, Jira ticket writers, or meeting schedulers? Yeah, that should die.

    PMs who only handed off requirements to engineering? Gone.

    PMs who never talked to customers? Dead.

    Let’s be honest: many organizations misdefined the PM role. They hired process managers and called it product. They built layers of communication, not layers of clarity. They were managing workflows, not products. They were pushing tickets, not pushing strategy. Those roles are vulnerable not because of AI, but because they weren’t doing product in the first place.

    The version of product that survives this shift and is worth fighting for is sharper, faster, and more essential than ever. It’s not about being an intermediary between engineering and design. It’s about creating clarity, focus, and vision where there was once noise and confusion.

    It looks like:

    • Problem curation over solution obsession: It’s not about finding the quickest fix or building what’s easiest. It’s about understanding what problem we’re solving, for whom, and why it matters.
    • Judgment over process: AI can help automate the steps, but it can’t tell you if you’re solving the right problem or if the timing is right. Good product management is still a series of thoughtful decisions, not just steps in a flowchart.
    • Context over control: Dictating requirements from above doesn’t work anymore. Context, shared understanding, and alignment are what drive teams to collaborate effectively, not command and control.
    • Collaboration over command: PMs are the glue that brings engineering, design, and business together. But that means being a partner and enabler, not a dictator. Collaboration is the new currency in product development.
    • Customer truth over corporate theater: Building the right product requires honest feedback, real conversations with customers, and deep empathy. It’s not about making the product look good on paper; it’s about making it work for the people who use it.

    The old way of doing product is over. But this isn’t about mourning its loss. It’s about embracing a new, more purposeful approach. The role of product management is evolving, and in many ways, that’s a huge opportunity to do better, build better, and have a bigger impact.

    The King Is Dead. Long Live the King.

    The “product is dead” narrative is loud right now because it’s easy. It’s easier to believe we can automate judgment than it is to build it. Easier to replace complexity than to wrestle with it. Easier to promise speed than to commit to substance.

    But the companies that endure — the ones that create real value, not just hype-fueled demos — will be the ones that lean into the harder, more human work.

    They’ll treat product not as a process to optimize, but as a practice to sharpen.

    They’ll embrace AI as a powerful tool — not a replacement for the thinking, intuition, and collaboration that make great products possible.

    They’ll stop treating product like a middle layer to cut, and start recognizing it as a critical function to elevate.

    Because here’s the truth: the best product teams won’t just survive this shift. They’ll lead it.

    They’ll be faster because they’re clearer. Smarter because they’re humbler. Stronger because they’re more aligned.

    Product isn’t dead. Bad product is dead. Shallow product is dead. Performative product is dead.

    The age of product isn’t over. The age of better product is just beginning.

    Long live product — not as it was, but as it needs to be.