I recently interviewed a group of users to get their perspectives on a feature we were considering building. When I met with the stakeholders to present our findings, one of the questions that came up was whether our sample size was large enough to base a decision on.

It's a fair question, and a common one. As Alec Levine writes, it usually comes from two places. First, it can indicate that the stakeholder misunderstands the nature of qualitative research. Second, the stakeholder is (quite understandably!) nervous about making a decision based on faulty or incomplete data. They want to know that they can trust the findings. In quantitative research, people can look to sample sizes and confidence thresholds for that reassurance; in qualitative research, things are more subjective.

I wanted to pick up on Alec's post and share my own perspective on the subject of sample size in qualitative research in general and in interviews in particular. Can you make a decision? It depends. But can you refine the direction of your product? Absolutely.

What is qualitative research for?

Quantitative research as a concept is pretty easy to wrap your head around. It's about quantifying things. You're measuring how many times something happens, how often, and to how many people. Qualitative research, on the other hand, is more subjective. It involves observation and interpretation. For that reason, it can feel a lot fuzzier than quantitative research. Numbers feel scientific and rational; narratives and metaphors—frequent outputs from qualitative research—do not.

As Sam Ladner writes, the difference between qualitative and quantitative research is greater than the kinds of data they work with. They represent different philosophical stances. Quantitative research, Ladner writes, "begins with the assumption that reality is a stable, objective thing." Qualitative research, in contrast, sees meaning as subjective. Quantitative research looks to explain reality as it is; qualitative research looks to explain reality as it is interpreted and understood.

This doesn't mean that quantitative research is inherently more reliable than qualitative research. Quantitative research, too, is subject to human biases. Choices are made about why, how, and what data is collected, and about how it is weighed. Human beings define the criteria for what "counts" among the data. Quantitative data also puts a greater onus on its consumer to interpret the numbers: two individuals could read the same figures very differently. And data may have been collected not because it is the most meaningful, but for the sake of consistency with studies performed in the past. Finally, quantitative data is a lossy medium. It provides scale but no context: it can tell you what happened, but it can't explain why.

That's why it's good to combine quantitative research with qualitative methods. Qualitative research helps fill the gaps left by quantitative research, and vice versa. Its role is to help us build mental models that supply the "why" that quantitative research leaves out, explaining the contexts, problems, and motivations that shape the problem space. The questions are more open-ended and help us understand the situation at a higher level. When we do qualitative research, we're trying to synthesize a narrative about our subject. It's closer to sensemaking; it helps us build a perspective and set a direction that could subsequently be measured through quantitative methods.

Sample sizes and data saturation

Because qualitative research is less concerned with measuring scale, achieving a particular sample size is less important than in quantitative research. Instead of sample size, qualitative researchers are more concerned with data saturation.

Data saturation is simply the point at which gathering additional data would yield only minor, incremental returns. When you conduct a series of user interviews, for example, you'll reach a point at which you start to hear the same stories again and again. You can do additional interviews, but the patterns have already become well-defined. You might hear the occasional new tidbit, but you've reached the point of diminishing returns. You have a solid sense of the terrain—enough, at least, to articulate its key features.
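One way to build intuition for this diminishing-returns curve is with a toy simulation. The sketch below is purely illustrative, not a research method: it assumes a fixed pool of themes, a skewed distribution (a few themes are common, many are rare), and a crude stopping rule of "several interviews in a row with nothing new." All of the numbers here are invented for the illustration.

```python
import random

def simulate_saturation(theme_pool=30, max_interviews=40, stop_after=3, seed=42):
    """Toy model of data saturation: each interview surfaces a handful of
    themes drawn from a skewed distribution. We stop once `stop_after`
    consecutive interviews add no new themes -- a rough stand-in for the
    point of diminishing returns."""
    rng = random.Random(seed)
    # Skewed weights: theme 0 comes up far more often than theme 29,
    # mimicking how a few stories dominate real interview data.
    weights = [1.0 / (i + 1) for i in range(theme_pool)]
    seen = set()
    dry_streak = 0
    for interview in range(1, max_interviews + 1):
        themes = rng.choices(range(theme_pool), weights=weights, k=5)
        new = set(themes) - seen
        seen |= new
        dry_streak = 0 if new else dry_streak + 1
        if dry_streak >= stop_after:
            return interview, len(seen)
    return max_interviews, len(seen)

n, themes_found = simulate_saturation()
print(f"Stopped after {n} interviews; {themes_found} distinct themes surfaced.")
```

Even in this simplified model, the common themes surface within the first few interviews, while the rare ones trickle in slowly and may never appear at all. That mirrors the practical point: additional interviews keep adding something, but less and less of it.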

The trouble is that this can be highly subjective and difficult to predict. But, in general, there are a few variables that inform how many subjects you'll need to engage:

  1. How familiar is the research space? If you're conducting research on an established product or in a domain with which you are well familiar, you'll likely reach the saturation point more quickly. Your knowledge of the space can help you home in on the right questions to ask more quickly.

  2. How experienced is the researcher? A more experienced researcher can often do more with less. Indeed, with qualitative research, the richness of the data is often more a factor of the researcher's skill than the sample size. A good researcher has honed their interviewing skills and is adept at reading participants and asking the right questions at the right time. They can sense when to dig in further, and will be more comfortable chasing down leads that emerge through the discussion.

  3. How diverse is the participant group? If you're working with a narrow, homogeneous group of participants, you won't need to engage with as many of them. However, if your participants represent a wide range of needs and contexts, you will probably need to speak to more of them to reach the point of saturation.

  4. How big is the risk of the decision that will be taken based on the research? Research is a vital activity for mitigating risk in business, product, or UX strategy. If the work you are doing will inform a big, "type one" decision (the kind that there's no going back from), you may want to be more cautious before you decide that you've reached saturation.

  5. What's your budget? Unless your budget is considerable, you'll probably need to think about how many people you can actually afford to talk to. If you don't think your budget will let you reach the number of users you'll need to reach saturation, then you should be sure to flag that when presenting your research plan and the results to your stakeholders.

There are other factors that come into play as well. For example, if you are working in an Agile environment, you can get away with talking to fewer people due to the iterative nature of the work. In an Agile environment, each feature release is itself a research study; working together with the delivery team, you'll revise your understanding of the problem space through a mix of qualitative research methods and the data you get back from actual usage. As well, your line of questioning is likely to be more precise, focused narrowly on the sprint goal that you're working toward. With the release, you learn something; you take that back with you to your next round of research, and the process loops forward again and again.

What would have to be true?

Of course, the last factor to consider is your stakeholders. Going back to the question that was posed to me, I echo Alec's response: take the time to understand why a concern is being raised. Respect the stakeholder's question, and recognize that they're probably asking not out of criticism but out of concern. They want to do right by the organization, just as you do.

So take the time to learn from them: what do they feel are the areas of risk? Share the research plan with them, and ask whether they feel it addresses those risks. You may be able to assuage their fears by explaining the role of your qualitative study in growing understanding of the problem space. But it may be that more research is required. Ask them: what would have to be true for you to feel comfortable with the findings and recommendations? Then work with them to come up with a plan to resolve the concern.

Further Reading

Philip Hodgson - User Experience and the Strength of Evidence

Jerry Z. Muller - The Tyranny of Metrics

Greg Schuler - Getting Big Ideas Out of Small Numbers

Mitchel Seaman - The Right Number of User Interviews

David Travis - Why You Don't Need a Representative Sample In Your User Research

Victor Yocco - Filling Up Your Tank, or How to Justify User Research Sample Size and Data