When assessing AI tools and new design workflows, it's easy to come across as the 'get off my lawn' guy — seeing new approaches and new tools as a threat.
I don't see it that way at all.
But what people perceive as AI's threat to design is really only a threat to a small part of the design process: the presentation layer. And that threat has been around for decades. WordPress templates, 99designs, Squarespace — all have (to differing degrees) tackled the same layer. This is the layer that focuses on the design output, not the conceptual thinking, strategy, and human-centred work that go into great products.
The 80% illusion
Single-shot demos appear to get you 80% of the way there in minutes. The usual assumption is that the remaining 20% takes disproportionately longer — anyone who's shipped software knows that. But with AI-generated design, the maths is worse than that. You haven't actually covered 80% of the work. You've only covered 80% of the surface layer. The hard thinking — the decisions about what this thing is, who it's for, how it behaves, what it prioritises — hasn't been done. It's been skipped entirely. The output presents a veneer of credibility that makes it feel like progress, when the real work hasn't started.
The most compelling recent demos have been single-shot prompt-to-design outputs from tools like Gemini and Claude. They're compelling because the ratio of input to output suggests extreme value: you get so much for doing so little. But in product design, making compelling and engaging things for an audience involves much more than the surface layer. This is closer to an inverse Pareto principle: 80% of the visible output only gets you about 20% of the way to a real product.
The design part is the hard part
If everyone can use AI tools to assemble software, what makes someone better at it than someone else?
The presentation layer feels like the easy bit of design. It's the most visible, the most tangible, and now the most automatable. But producing a cohesive experience — one where the surface, structure, and strategy all align — is genuinely difficult work. It requires judgement built over years of getting things wrong, testing with real users, and understanding why something that looks right can still feel broken. That experience doesn't get bypassed just because the tooling makes it faster.
Two decades ago, Jesse James Garrett's Elements of User Experience laid this out: strategy, scope, structure, skeleton, surface. Five planes, each dependent on the one below it. Without the conceptual thinking behind it, a vibe-coded design only tackles the thinnest part of the problem. A generated interface operates almost entirely at the surface. It tells you nothing about the strategy it serves, the scope it's working within, the information architecture that organises it, or the interaction patterns that make it usable. All of those layers need to be considered.*
Why this is a design problem
But this isn't an argument against AI in design. It's an argument for expanding design's role, and recognising the value that expansion brings to making a successful product or service.
What makes design distinct from other disciplines isn't the output — it's the translation. Engineers build, and build well, but they haven't traditionally developed the skills for making things that connect with users on their terms. Product people think strategically, but strategy without the soft skill of translating human needs into product decisions produces roadmaps, not experiences. Design sits between the two: understanding users, deriving insights from research and experimentation, and turning that understanding into artefacts — prototypes, interfaces, working software — that reflect it. It's that bridge from insight to execution that designers can take forward with AI tools. Not because the tools replace the thinking, but because they accelerate everything that comes after it.
For designers who've done the conceptual work, who understand the layers beneath the surface, AI-assisted prototyping is transformative. It closes the gap between design concept and design execution in ways that genuinely matter. You can test ideas faster, iterate in code, and ship with more confidence. The tool changes the speed of making, not the speed of thinking — and that's the distinction. Vibe coding with a skilled designer behind it isn't a shortcut. It's a faster route to something that's already been thought through.
The reason all of this matters is straightforward: the people using the things we design are humans. They arrive with context — motivations, constraints, mental models — and part of design's role is to understand that context and match it with a conceptual framework for the experience. That translation — from human need to product decision — isn't something engineering or product naturally replicate, and it isn't something AI generates from a prompt. No generated interface does that on its own. It produces an output without the understanding that should sit behind it. The credible shortcut isn't dangerous because it's bad work. It's dangerous because it looks like good work, and it skips the part that makes work good.
* JJG's framework is one way to look at the layered design space. It isn't definitive, and as AI design tools reshape the field, new conceptual layers, and the interplay between them, may describe this new world better.
Further watching: Introducing Claude Design by Anthropic Labs