Over the past few weeks, several companies have made headlines by declaring an “AI First” strategy.
Shopify CEO Tobi Lütke told employees that before asking for additional headcount or resources, they must prove the work can’t be done by AI.
Duolingo’s CEO, Luis von Ahn, laid out a similar vision, phasing out contractors for tasks AI can handle and using AI to rapidly accelerate content creation.
Both companies also stated that AI proficiency will now play a role in hiring decisions and performance reviews.
On the surface, this all sounds reasonable. If generative AI can truly replicate—or even amplify—human effort, then why wouldn’t companies want to lean in? Compared to the cost of hiring, onboarding, and supporting a new employee, AI looks like a faster, cheaper alternative that’s available now.
But is it really that simple?
First, there was AI Last
Before we talk about “AI First,” it’s worth rewinding to what came before.
I’ve long been an advocate of what I’d call an “AI Last” approach, so the “AI First” mindset is a shift for me.
Historically, I’ve found that teams jump too quickly to AI as the sole solution, usually under significant pressure from the top to “do more AI.” That pressure tends to reveal a shallow understanding of what AI is, how it works, its limitations, and its cost. The mindset of sprinkling magical AI pixie dust over a problem and expecting it to be solved is naive and dangerous, and it often distracts teams from a much more practical solution.
Here’s why I always pushed for exhausting the basics before reaching for AI:
Cost
- High development and maintenance costs: AI solutions aren’t cheap. They require time, talent, and significant financial investment.
- Data preparation overhead: Training useful models requires large volumes of clean, labeled data—something most teams don’t have readily available.
- Infrastructure needs: Maintaining reliable AI systems often means investing in robust MLOps infrastructure and tooling.
Complexity
- Simple solutions often work: Business logic, heuristics, or even minor process changes can solve the problem faster and more predictably.
- Harder to maintain and debug: AI models are opaque by nature; unlike rule-based systems, they offer little explanation for why they behave the way they do.
- Performance is uncertain: AI models can fail in edge cases, degrade over time, or simply underperform outside of their training environment.
- Latency and scalability issues: Large models—especially when accessed through APIs—can introduce unacceptable delays or infrastructure costs.
Risk
- Low explainability: In regulated or mission-critical settings, black-box AI systems are a liability.
- Ethical and legal exposure: AI can introduce or amplify bias, violate user privacy, or produce harmful or offensive outputs.
- Chasing hype over value: Too often, teams build AI solutions to satisfy leadership or investor expectations, not because it’s the best tool for the job.
What Changed?
So why the shift from AI Last to AI First?
The shift happened not just because of what generative AI made possible, but because of how effortless it made everything look.
Generative AI feels easy.
Unlike traditional AI, which required data pipelines, modeling, and MLOps, generative AI tools like ChatGPT or GitHub Copilot give you answers in seconds with nothing more than a prompt. The barrier to entry feels low, and the results look surprisingly good (at first).
This surface-level ease masks the hidden costs, risks, and technical debt that still lurk underneath. But the illusion of simplicity is powerful.
Generalization expands possibilities.
LLMs can generalize across many domains, which lowers the barrier to trying AI in new areas. That’s a significant shift from traditional AI, which typically had narrow, custom-built models.
AI for everyone.
Anyone, from marketers to developers, can now interact directly with AI. That democratization of access is a significant shift, and it accelerates adoption even when the use case is unclear.
Speed became the new selling point.
Prototyping with LLMs is fast. Really fast. You can build a working demo in hours, not weeks. For many teams, the resulting 80% solution is “good enough” to ship, validate, or at least justify further investment.
That speed creates pressure to bypass traditional diligence, especially in high-urgency or low-margin environments.
The ROI pressure is real.
Companies have made massive investments in AI, whether in cloud compute, partnerships, talent, or infrastructure. Boards and executives want to see returns. “AI First” becomes less of a strategy and more of a mandate to justify spend.
It’s worth mentioning that this pressure sometimes focuses on using AI, not using it well.
People are expensive. AI is not (on the surface).
Hiring is slow, expensive, and full of risk. In contrast, AI appears to offer infinite scale, zero ramp-up time, and no HR overhead. For budget-conscious leaders, the math seems obvious.
The hype machine keeps humming.
Executives don’t want to be left behind. Generative AI is being sold as the answer to nearly every business challenge, often without nuance or grounding in reality. Just like with traditional AI, teams are once again being told to “add AI” without understanding if it’s needed, feasible, or valuable.
It feels like a shortcut.
There’s one more reason “AI First” is so appealing.
It promises to bypass the friction, delay, and uncertainty of hiring. Teams can ship faster, cut costs, and show progress—at least on the surface. In high-pressure environments, that shortcut is incredibly tempting.
But like most shortcuts, this one comes with consequences.
Over-reliance on AI can erode institutional knowledge, create brittle systems, and introduce long-term costs that aren’t immediately obvious. Models drift. Prompts break. Outputs change. Context disappears. Without careful oversight, today’s efficiency gains can become tomorrow’s tech debt.
Moving fast is easy. Moving well is harder. “AI First” can be a strategy—but only when it’s paired with rigor, intent, and a willingness to say no.
What’s a Better Way?
“AI First” isn’t inherently wrong, but without guardrails, it becomes a race to the bottom. A better approach doesn’t reject AI. It reframes the question.
Yes, start with AI. But don’t stop there. Ask:
- Is AI the right tool for the problem?
- Is this solution resilient, or just fast?
- Are we building something sustainable—or something that looks good in a demo?
A better way is one that’s AI-aware, not AI-blind. That means being clear-eyed about what AI is good at, where it breaks down, and what it costs over time.
Here are five principles I’ve seen work in practice:
Start With the Problem, Not the Technology
Don’t start by asking, “How can we use AI?” Start by asking, “What’s the problem we’re trying to solve?”
- What does success look like?
- What are the constraints?
- What’s already working—or broken?
AI might still be the right answer. But if you haven’t clearly defined the problem, everything else is just expensive guesswork.
Weigh the Tradeoffs, Not Just the Speed
Yes, AI gets you something fast. But is it the right thing?
- What happens when the model changes?
- What’s the fallback if the prompt fails?
- Who’s accountable when it goes off the rails?
“AI First” works when speed is balanced by responsibility. If you’re not measuring long-term cost, you’re not doing ROI—you’re doing wishful thinking.
Build for Resilience, Not Just Velocity
Shortcuts save time today and create chaos tomorrow.
- Document assumptions.
- Build fallback paths.
- Monitor for drift.
- Don’t “set it and forget it.”
Treat every AI-powered system like it’s going to break, because eventually, it will. The teams that succeed are the ones who planned for it.
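To make those bullets concrete, here’s a minimal sketch of what a fallback path and basic drift logging can look like in a Python service. It’s illustrative only: call_model is a hypothetical stand-in for whatever provider client you use, the heuristic fallback is deliberately naive, and the logged fields are one reasonable starting point, not a standard.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_guardrails")

PROMPT_VERSION = "summarize-v1"  # version prompts so output changes stay traceable


def call_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client.

    Stubbed to return an empty string so the sketch runs standalone.
    """
    return ""


def heuristic_summary(text: str, max_words: int = 50) -> str:
    """Deterministic fallback: naive truncation instead of a model summary."""
    return " ".join(text.split()[:max_words])


def summarize(text: str) -> str:
    prompt = f"Summarize the following in one paragraph:\n\n{text}"
    fallback_used = False
    try:
        output = call_model(prompt)
        if not output.strip():  # validate the output; don't trust it blindly
            raise ValueError("empty model output")
    except Exception as exc:
        # Fallback path: degrade gracefully instead of failing the request.
        log.warning("model call failed (%s); using heuristic fallback", exc)
        output = heuristic_summary(text)
        fallback_used = True

    # Record enough context to audit drift when the model or prompt changes.
    log.info(json.dumps({
        "ts": time.time(),
        "prompt_version": PROMPT_VERSION,
        "output_chars": len(output),
        "fallback_used": fallback_used,
    }))
    return output
```

The specifics matter less than the shape: the system degrades predictably when the model misbehaves, and it leaves a trail you can inspect when outputs start to drift.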
Design Human-AI Collaboration, Not Substitution
Over-automating can backfire. When people feel like they’re just babysitting machines—or worse, being replaced by them—you lose the very thing AI was supposed to support: human creativity, intuition, and care.
The best systems aren’t human-only or AI-only. They’re collaborative.
- AI drafts, people refine.
- AI scales, humans supervise.
- AI suggests, humans decide.
This isn’t about replacing judgment; it’s about amplifying it. “AI First” should make your people better, not make them optional.
Measure What Actually Matters
A lot of AI initiatives look productive because we’re measuring the wrong things.
More output ≠ better outcomes.
And if everyone is using the same AI tools in the same way, we risk a monoculture of solutions—outputs that look the same, sound the same, and think the same.
Real creativity and insight don’t come from the center. They come from the edges, from the teams that challenge assumptions and break patterns. Over-reliance on AI can mute those voices, replacing originality with uniformity.
Human memory is inefficient and unreliable in comparison to machine memory. But it’s this very unpredictability that’s the source of our creativity. It makes connections we’d never consciously think of making, smashing together atoms that our conscious minds keep separate. Digital databases cannot yet replicate the kind of serendipity that enables the unconscious human mind to make novel patterns and see powerful new analogies of the kind that lead to our most creative breakthroughs. The more we outsource our memories to Google, the less we are nourishing the wonderfully accidental creativity of our consciousness.
Ian Leslie, Curious: The Desire to Know and Why Your Future Depends on It
If we let AI dictate the shape of our work, we may all end up building the same thing—just faster.
More speed ≠ more value.
Instead of counting tasks, measure trust. Instead of tracking volume, track quality. Focus on the things your customers and teams actually feel.
The Real “AI First” Advantage
The companies that win with AI won’t be the ones who move the fastest.
They’ll be the ones who move the smartest. They’ll be the ones who know when to use AI, when to skip it, and when to slow down.
Because in the long run, discipline beats urgency. Clarity beats novelty. And thoughtfulness scales better than any model.
The real power of AI isn’t in what it can do.
It’s in what we choose to do with it.