The rapid emergence of AI in the design space seems to have created two polarised responses. In one camp, there's angst about a coming 'design apocalypse' — no doubt amplified by job losses and a tightening market. In the other, enthusiastic futurists who see AI as a technology that amplifies and fundamentally changes what designers do.
I'm mostly in camp two.
I use AI daily — in fact hourly — for an increasing range of tasks, from development to user research. I'm always testing new tools and approaches, trying to build both my own and my organisation's capability. Currently though, most of the debate on the impact of AI on design seems focused on the presentation layer: what AI means for screens, layouts, interfaces. Tools like Figma MCP and Pencil and Paper herald 'task transfer', expanding design's reach into front-end development and beyond. Even with that, the impact is still felt most keenly at the delivery part of the process.
However, the expansion isn't just there. Where I see the greater, and less discussed, value is elsewhere in the design process — in research, in problem framing, and in what most people would recognise as the 'product space': market analysis, product strategy, frameworks for understanding the problem, the market, and the opportunity. This pushes design into business and commercial territory — a place it has always operated in, but one that, despite the success of Apple's design-first mindset, few companies have chosen to mimic.
If we frame this in terms of the Double Diamond, a lot of the focus seems to be on the end: the delivery side of things. But AI can also help with research and discovery, expanding both the types of work designers can (and should) do and the quality and speed of that work.
Ramp recently posted a job listing that quietly signals where things are heading. It asks designers to "start in an LLM" — using AI to clarify intent, draft PRDs, surface risks and edge cases before anything hits Figma. Research is framed as a velocity tool, not a gate. Prototyping means working in Cursor and Claude Code, not just pushing pixels. This isn't speculative. It's a job description.
As a way of anchoring this in real-world practice, I want to show how AI tools are changing my approach to the entire design process.
Opportunity evaluation
At the start of any project, AI is a powerful desk-research partner. I use it to build the business and commercial case for the work I'm involved with — understanding the wider landscape and playing that back to stakeholders in terms they care about. What does moving from a web-based tool to a native app mean for our customers? What's the size of that market? Where are the gaps?
I use AI to run scans across forums like Reddit (AI's ability to ingest large swathes of data is a major boost here), anchoring my analysis in users' actual pain points rather than relying on assumptions. This is essentially a retrieval-augmented generation (RAG) approach over a varied dataset, grounded in real user language and real frustrations. It doesn't replace real user research, but like a mood board, it helps frame the opportunity in a human-centred way, by drawing on real human responses.
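As a rough sketch of what that grounding step can look like before anything reaches a model: tally how often candidate pain-point phrases actually appear across a set of forum posts, so the analysis is anchored in user language rather than assumptions. The posts, phrases, and `tallyPainPoints` helper below are all hypothetical, for illustration only — not any specific tool's API.

```typescript
// Illustrative only: count how many posts mention each candidate
// pain-point phrase. Each post contributes at most one count per
// phrase, so frequent repetition in one rant doesn't skew the tally.
type Post = { title: string; body: string };

function tallyPainPoints(posts: Post[], phrases: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const phrase of phrases) counts.set(phrase, 0);
  for (const post of posts) {
    const text = `${post.title} ${post.body}`.toLowerCase();
    for (const phrase of phrases) {
      if (text.includes(phrase.toLowerCase())) {
        counts.set(phrase, (counts.get(phrase) ?? 0) + 1);
      }
    }
  }
  return counts;
}

// Made-up sample data standing in for scraped forum posts.
const posts: Post[] = [
  { title: "App keeps logging me out", body: "Session expires constantly, so frustrating" },
  { title: "Export is broken", body: "CSV export fails on large files" },
  { title: "Login loop", body: "Logging me out every few minutes" },
];

const counts = tallyPainPoints(posts, ["logging me out", "export"]);
console.log(counts.get("logging me out")); // → 2
console.log(counts.get("export"));         // → 1
```

In a real workflow the retrieval side would be richer (embeddings, clustering, an actual forum API), but even a crude frequency pass like this gives the subsequent AI analysis something real to stand on.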
Problem space framing
AI also helps me frame this work for stakeholders, making it straightforward to reformat and restructure large amounts of research data into artefacts I can bring into Figma: maps, models, and narratives that tell the user's story in ways that are credible and move beyond data dumped into a PowerPoint deck.
This includes shorthand personas (remember proto-personas?) combined with simplified customer journey maps, showing where our initiative could deliver against real areas of opportunity.
Beyond that, I've used AI video tools to create voice-of-the-customer videos that synthesise user data into compelling stories — something that previously took days of editing and composition.
All of this is about creating artefacts that help tell the story. Not just for 'stakeholders' but for your own deeper understanding of the problem space you're working in.
A caveat here: synthetic data is often biased towards whatever the models were trained on. It offers none of the nuance or cognitive richness of a real user. But for an initial framing exercise — building the case, aligning a team, identifying where to dig deeper — it's a fast and convincing starting point.
Prototyping and development
I think most of LinkedIn (or perhaps it's just my algorithm) focuses on this part of the process. It shows the most visible change, especially in the context of expanding design's skills into development.
This re-aligns digital design to its maker foundations — something many of us who remember the early web applaud. With AI, I'm working two roles: as both a design originator and a design orchestrator, directing agents to generate artefacts from screens to working prototypes, then working these back into tools like Figma for manual refinement.
How this will eventually settle into a robust workflow is anyone's guess, but the shift in how this part of the process is done is already real.
A few things I'm seeing change:
Design systems now operate as foundational design tokens that push directly into development frameworks. Tools like Figma's Code Connect make this more explicit, though the challenge remains in maintaining the quality of what the AI produces.
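One hedged sketch of what "tokens pushing directly into development frameworks" can mean in practice: flattening a nested token object into CSS custom properties that a component library consumes. The token shape below is an assumption made for the example, not Figma's (or any particular tool's) actual export format.

```typescript
// Illustrative sketch: flatten nested design tokens into CSS custom
// properties. The nesting becomes the variable name, so
// { color: { brand: "#0a66c2" } } yields "--color-brand: #0a66c2;".
type TokenTree = { [key: string]: string | TokenTree };

function tokensToCss(tokens: TokenTree, prefix = "-"): string[] {
  const lines: string[] = [];
  for (const [key, value] of Object.entries(tokens)) {
    const name = `${prefix}-${key}`;
    if (typeof value === "string") {
      lines.push(`${name}: ${value};`);
    } else {
      lines.push(...tokensToCss(value, name)); // recurse into groups
    }
  }
  return lines;
}

// Hypothetical tokens standing in for a design-system export.
const tokens: TokenTree = {
  color: { brand: "#0a66c2", surface: "#ffffff" },
  radius: { sm: "4px", lg: "12px" },
};

console.log(tokensToCss(tokens).join("\n"));
// --color-brand: #0a66c2;
// --color-surface: #ffffff;
// --radius-sm: 4px;
// --radius-lg: 12px;
```

The interesting shift isn't the transform itself (tools like Style Dictionary have done this for years) but that an AI agent can now author against these variables directly, which is exactly where the quality-maintenance challenge mentioned above shows up.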
Personally, I have built out the entire front-end component library in React and maintain a Storybook app. This toolkit is the foundation of all our prototypes.
As a result, handovers are becoming less relevant. Collaboration between design and development is happening at the level of the product and code, not at the point of throwing a Figma file over the wall and hoping for the best.
I move between Figma and code and back again — sketching in one, building in the other, iterating across both. The boundary between designing and making has blurred.
And critically: I'm now researching with real users using working prototypes, not static screens. Design decisions are more robust because I'm testing the actual product, not a picture of it.
The expanding design role
In many ways, this isn't new. Early web designers operated across the full stack — research, strategy, building, testing. The industry specialised and also optimised, but something was lost in the process. AI may help restore that breadth, reconnecting design to the maker foundations that many of us remember and value.
However, the scope of work a smaller number of people are being asked to absorb could be significant. Not only is there a real risk in centring too much on a single person; AI's capacity to generate what appear to be credible shortcuts could also reduce work quality. There have been countless demos of single-shot prompts generating a new design system and product — but is this the right thing? Does it actually solve any user needs? Without the research, the problem framing, the commercial understanding that sits behind it, a generated interface is just a picture of a product. It looks like the answer without having asked the question.
Design's value was never in the presentation layer. It was in understanding problems deeply enough to solve them well. AI is finally giving designers the tools to operate where they've always belonged — upstream, in strategy and the commercial conversation, and not just downstream at the point of delivery. And when a designer who understands the commercial case, has framed the problem space, and is working with a real prototype sits down with engineers and product people, something shifts. AI doesn't replace that collaboration. It raises the quality of what each person brings to it.
The question isn't whether design's role is expanding. It's whether organisations build the structures, the teams, and the culture to make that expansion sustainable.
Additional reading
This post is drawn from lots of different sources. I've listed some of them here: