Build the Right Thing

I previously wrote here about how product teams I have been (and continue to be) involved in, incentivised to deliver and passionate about what they are doing, are often drawn to low-hanging fruit. These are software features that are simple to execute and can be completed quickly within the cadence of the usual sprint cycle. Teams feel great about doing this, as it looks and feels like they are making an impact, even if the scale of that impact is small. I’ve been in teams where, in the rush to make something, no one has stopped to ask whether it was even the right thing to be making.

However, an approach that focuses on delivering the areas of highest customer value, I argued, can be delivered just as quickly. These are usually the most challenging things to deliver. They are hard problems and they are time-consuming, in part because they usually reflect foundational issues, attitudes or barriers your users face in doing things: the very thing your tool or service is looking to address. The thing is, if there were easy ways to solve them and they were important enough, your users may well have found ways to solve them already. Hacks, workarounds, users cobbling together things that get the job done: this is satisficing.

To quickly deliver on areas of high value, I advocated defining areas of opportunity, ranking them by perceived impact (user research helps both in framing and in understanding the scale of the opportunity), selecting one (or a few) to pursue, and slicing delivery into incremental stages. Teams can design and deliver software in smaller steps at a higher frequency, tackling these hard problems in stages and learning more about the problem as they do so.
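As a minimal sketch of this rank-and-slice process (every name, score and stage here is hypothetical, not a prescribed framework), it might look something like:

```python
# Hypothetical sketch: score areas of opportunity by perceived impact,
# pick the top one, and slice its delivery into incremental stages.
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    perceived_impact: int  # e.g. from user research, 1 (low) to 10 (high)

def rank_opportunities(opportunities):
    """Rank areas of opportunity by perceived impact, highest first."""
    return sorted(opportunities, key=lambda o: o.perceived_impact, reverse=True)

def slice_delivery(opportunity, stages):
    """Split the delivery of one opportunity into named incremental stages."""
    return [f"{opportunity.name}: stage {i} - {s}" for i, s in enumerate(stages, 1)]

# Illustrative backlog — impact scores would come from research, not guesswork.
backlog = [
    Opportunity("cosmetic settings page", 2),
    Opportunity("fix broken onboarding flow", 9),
    Opportunity("export to CSV", 5),
]

top = rank_opportunities(backlog)[0]
plan = slice_delivery(top, ["thin end-to-end slice", "widen coverage", "polish"])
```

Each delivered stage then becomes a chance to learn more about the problem before committing to the next.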

This is a challenge as it means setting up a single product team (not a split between discovery and implementation), connected to the customer, with a focus on building and learning, so the ‘right’ product emerges.

“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”‍ - The First Principle of The Agile Manifesto

Why is this good?

It boosts team morale. Prioritising ‘delivering value’ over ‘delivering features’ lifts the team psychologically. They’re delivering something needed, and they feel good about the impact they are having. The boost they receive from this is better than any burn-down chart or other delivery metric focused on activity rather than outcomes.

It avoids the feature factory

I’ve been in teams and organisations that have been hell-bent on delivering features. This is a grind, as you’re often functionally limited in how you can operate, with clear guardrails guiding the interactions between team members: designers make screens, developers write code. Over time, this felt deflating, as we had no metric of success to spur us on. We had very little idea whether what we were doing had any real impact on the people using the tool. The measure of our work was whether a feature was delivered, not whether it changed user behaviour.

Feature factories change direction often, quickly shifting to new technology or functionality and consigning old work to the dustbin. This was demoralising, as months of work could be discarded without any consideration. We became wary of this and stopped committing fully to new work or the goal. Things were going to change anyway, so why invest time and energy?

It beds down research habits in the team

By defining the problems you need to address and working out ways to address them, research becomes a fundamental mechanism in the product design process. Understanding user needs becomes a key part of the team’s approach to understanding the problems they are looking to address. The quality of this research, and the strength of the mechanisms used for generating insights and sharing this knowledge (around the team and the wider organisation), are key ways of working for well-functioning teams focused on making software that is useful and valuable.

It's building and learning

Understanding the impact of your work promotes a measurement mindset. It also develops a tacit “product sense” in the team.

This builds on the methodology of participatory action research: continually gathering data on the problem you are looking to address, and continually checking whether the thing you are making is driving the right outcomes.

But how do we know what are the areas of highest value?

My original article mapped areas of opportunity across the two axes of impact and time. Areas of opportunity are effectively problem spaces with varying levels of customer value. ‘Low-hanging fruit’ generally appeared in the bottom-left quadrant: things that were easy and quick to do, but these low-impact areas generally delivered little value to users. Areas of high impact were to the right, with those two quadrants being the most challenging.

Fig 01. When software features are mapped, teams often go for the 'low-hanging fruit' and miss greater, high-impact opportunities in their rush to deliver

However, how do you understand the difference between areas of opportunity across these axes? How can you work out the relative impact and the relative value to users of one area over another?

In short, why pick this one over that?

This is a massive area in product design, but I will try to describe the types of things I have seen teams do to understand it.

Good teams talk to users…

This might seem self-evident, but by deeply understanding the needs, attitudes and motivations of your users, you will build better products and services. However, as simple as ‘talking to users’ sounds, it actually describes a robust way of understanding the qualitative experience of your users and turning this knowledge into actionable insights.

This is a key mechanism in creating impact with knowledge. The effectiveness of this process isn’t just about the decisions you make with the data: the quality of your decision-making reflects the quality of the data you collect.

I’ve been involved with product teams that may ‘visit’ customers, but their approach is haphazard and unstructured, and they do not robustly collect data. Nor do they do this consistently over time, so the longitudinal value of their effort is reduced. As the data from each encounter is so haphazard, you simply can’t understand the evolution of your customers’ values and motivations. It’s almost the equivalent of having a coffee with a couple of customers and using that to decide your product team’s direction of travel. It may work, sometimes, but does it provide the reproducible consistency and standard you need in your organisation?

Good researchers approach their enquiry with rigour, understanding and intelligence, and don’t rely solely on water-cooler anecdotes to create the knowledge that drives effective product decisions. Their activity is a key part of any unified product team: good researchers have a range of methods and techniques for understanding the qualitative experience of your users and customers, and can match this with scale and causation to help frame problem spaces effectively for the product team.

…using this knowledge to assess the impact of what they’re making…

The challenge of framing your areas of opportunity is that you can’t know everything up front. There are many methods and routes forward to better understand the problem you’re addressing: user journey mapping, service mapping, modelling, horizon scanning, backcasting, Wardley mapping, Cynefin frameworks and more. There is a plethora of sense-making activities, and it is such a broad topic that it sits well beyond the scope of this post. However, I am going to pull out two characteristics I have seen be especially effective.

Firstly, teams I have been involved with in the past have focused on understanding leading and lagging indicators of user behaviour, and the interplay between them. Qualitative research can highlight user behaviours that give shape to the problems users may be having (leading). From there, you establish the key outcomes you want to drive, and stay disciplined about the metrics you set up to track whether you are achieving them (lagging).
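A minimal sketch of what that discipline can look like in practice, assuming a team pairs one leading indicator with one lagging outcome metric (the metric names and thresholds below are entirely illustrative):

```python
# Illustrative pairing of a leading indicator (observed behaviour hinting at
# a problem) with a lagging outcome metric (the result the team wants to drive).
# All names and numbers are hypothetical.

leading = {"onboarding_dropoffs_per_week": 34}    # behaviour surfaced by research
lagging = {"weekly_task_completion_rate": 0.62}   # the outcome being tracked

TARGET_COMPLETION_RATE = 0.75  # the pre-agreed, disciplined success threshold

def outcome_met(metrics, target=TARGET_COMPLETION_RATE):
    """Check whether the lagging outcome metric has reached its target."""
    return metrics["weekly_task_completion_rate"] >= target
```

The point is less the code than the commitment: the target is agreed before the work starts, so the team knows whether it is achieving the outcome rather than just shipping activity.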

Similarly, the lens of frequency and intensity has been a strong heuristic for assessing different problem spaces. Michael Siebel talks about this in ‘Building Product’, as well as extolling the benefits of clearly defining the problem you are addressing: understanding how intense the problem is, and how often your customers have it, is a good comparative indicator of the impact of different opportunities.
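The frequency-and-intensity lens can be reduced to a simple comparative score, something like the sketch below (the problem names and figures are hypothetical; in practice both inputs come from research, and the score only has meaning when comparing opportunities against each other):

```python
# Hypothetical comparative score for the frequency-and-intensity heuristic.
def problem_score(frequency_per_week: float, intensity: float) -> float:
    """How often users hit the problem, multiplied by how painful it is
    (intensity on a 1-10 scale). Only meaningful as a relative comparison."""
    return frequency_per_week * intensity

# Illustrative problem spaces: a mild but constant annoyance can outweigh
# a severe but rare one.
problems = {
    "manual data re-entry": problem_score(frequency_per_week=20, intensity=6),
    "occasional export crash": problem_score(frequency_per_week=1, intensity=9),
}

highest = max(problems, key=problems.get)
```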

These are some of the hallmarks of good product teams I have been involved in: a commitment to understanding users, metrics that tell us whether we were achieving what we set out to achieve, good tools for assessing different problems, and open discussions amongst the team, creating a unified vision for its direction of travel.

…and then do it repeatedly

Fig 02. The Double Diamond, developed by the Design Council UK, presents a linear and staged approach to design

This is where I see problems in how the ‘double diamond’ is often interpreted, especially its linear depiction of an innovation process. Any initial ‘discover’ phase suggests that once you’re in development, discovery is over. Furthermore, it encourages front-loading ‘discovery’ and disconnecting it from the development teams delivering software into users’ hands. The result is often that a team’s (often a separate discovery team’s) initial focus is just on finding a problem to solve, and once that is done, there is nothing left to learn. However, understanding the impact of a problem often emerges whilst doing the work, and often much later, when software is in the hands of users. It is for this reason that continually integrating research into the product design process is so vital.

For me, this is the message of continuous discovery; where understanding your users, done continually over a long period of time, yields a deep and nuanced view of your customers.

Effective user research underpins user-centred approaches.

Great teams I have been involved with in the past (across UK Govt and Healthcare) have always been keenly focused on understanding their users and using this knowledge to drive better product decisions. This knowledge is most effective when it is of high quality and gathered continually over a long period of time. This is the foundation of continuous discovery, and it reflects a focus on delivering value to users and customers, rather than delivering value for your boss, the CEO, or other key stakeholders in the business.

However, working out which areas are of highest value is really challenging, and there is often no single fixed framework for judging the relative merits of the different opportunities you may want to tackle with your product or service. I’ve highlighted a few things here, but there are many more.

Even when difficult but high-impact opportunities have been identified, building the smallest thing to address them is a key risk-mitigation tactic. You’re not investing time or effort in large guesses; you’re deciding to focus on high-impact areas (over things that are easy and quick to make) and delivering a small slice to see if what you’re doing is actually making an impact. Likewise, by doing this you’re not creating a software production line (a feature factory) of low-value ‘features’ that creates nothing more than the appearance of a team making software. You’re making incremental steps towards addressing user problems.