Here’s a thought experiment I’ve been using in conversations lately.

Imagine you have unlimited resources. You can assemble any team you want and you have unlimited tokens. Would you launch a brand-new product in two days, built entirely by AI?

Most people say yes, or at least a confident “probably.”

Now change the frame. You have a product with 500 million users, half of them paying. Clear market fit. Would you replace it tomorrow with something entirely built by AI?

The answer shifts quickly. And that shift is where the interesting part lives.

AI can do a lot

The euphoria is real

We’re in a moment where AI has democratized access to things that used to require entire teams. You can go from idea to prototype in hours. Startups are shipping without designers. Investors are pushing “efficiency” as shorthand for “fewer people, more output.” The temptation is obvious: maybe design is becoming optional.

I understand that temptation. In the 0-to-1 space, AI genuinely is a multiplier: two people can launch a prototype in 48 hours, test hypotheses, and capture weak signals early. That's a legitimate use of the technology, and it's worth celebrating. Let AI shine where speed and exploration matter most.

But there’s a version of this thinking that scales poorly, and it’s the version I keep seeing teams reach for without questioning it.

Output alone doesn’t hold up

AI is very good at scaling artifacts. It can generate flows, screens, copy, components. What it can’t scale is meaning, continuity, and trust, because those aren’t outputs. They’re properties of a system that has been carefully maintained over time.

When you’re operating at scale (multiple stakeholder interests, compliance and legal constraints, long-term brand equity, massive user bases with real expectations) the artifacts are the easiest part. The hard part is making sure they all cohere. Trust is the kind of thing that takes years to build and can collapse in a single release. AI can produce the screens, but designers and product managers are the ones preserving consistency across those screens so that trust doesn’t erode.

There’s a related distinction around strategic fluency. AI executes instructions well, but translating strategy into behavior that users actually feel is a different kind of work. It requires understanding context that lives outside the prompt.

The process is the product

This is the part that’s hardest to make visible, which is also why it’s the part most at risk.

The work that product managers and designers actually do now centers on sensemaking, context translation, discovery, alignment, continuity, and protecting quality. We don’t ship screens. We deliver coherence.

Investors and stakeholders want visible output, and that’s understandable. But the real differentiator in mature products is mostly invisible: better decisions, reduced waste, lower risk, and sustainable trust. These are the outcomes of a healthy process, not of faster artifact generation.

What AI actually does for PMs and designers

Here’s my current thinking on how the relationship works.

As AI makes building faster, vision becomes more important, because the bottleneck shifts from production to direction. As AI makes everything look similar (and it does, increasingly), differentiation becomes invaluable. As AI enables infinite options, someone still needs to say “no,” which might be the single most valuable skill in product development. And as AI handles more of the UI layer, UX intuition becomes a genuine market advantage.

The stronger the tool, the more the quality of the hand guiding it matters.

Two scenarios, two truths

It helps to be honest about the fact that different contexts call for different approaches.

In a startup (0-to-1), speed matters more than perfection, hypotheses matter more than long-term experience, and AI is a valid primary tool. Designers are genuinely optional at that stage. That’s not a critique of design; it’s an acknowledgment that the system’s needs are different when you’re searching for signal.

In an enterprise (1-to-n, millions of users), the equation inverts. Trust is the product. Consistency is the brand. AI is a powerful tool within the system, but designers and PMs are the ones who keep the system coherent. You can’t automate that coherence, at least not with our current understanding of how these tools work.

AI scales output, but the trust layer that makes that output matter to real people still requires human judgment.

What this means in practice

Three things I keep coming back to:

Design is how strategy becomes experience. It turns organizational intent into behavior that users actually feel. Without that translation layer, strategy stays abstract and products drift.

Let AI do the work, not the thinking. AI can produce. We decide what deserves to exist. That decision (what to build, what to kill, what to protect) is where the real leverage lives.

Think in trust, not features. The question worth asking before any release is: “If we ship this, will people still feel safe choosing us tomorrow?” If you can’t answer that confidently, more output isn’t going to help.

Where this lands

AI will keep making building easier. That trajectory is clear. But the products that actually matter to people, the ones they return to, pay for, and recommend, are products where someone cared about more than the output. Someone cared about what it meant, how it felt, and whether it would still be trustworthy next month.

AI is a real accelerator. That’s not in question. It makes us faster, it scales what we can produce, and it compresses timelines in ways that would have felt impossible a few years ago. But faster output is not the same as better output, and that distinction is becoming harder to ignore.

When we augment our processes with AI, we gain efficiency and velocity. That's the visible part. The less visible part, and the part that determines whether any of it actually lands, is the quality of what we feed into it. The input is becoming the bottleneck: the clarity of the brief, the precision of the intent, the depth of understanding behind the prompt. These are what separate useful output from noise.

As the tools get more powerful, the returns on good input compound. Getting that relationship right is very much an evolving practice, including for me.


This essay was the basis of a talk I gave at When Roles Blur on November 26th, 2025 in Berlin.
