By far the most common question I hear as a user experience researcher is “how can we draw conclusions from observing just a few people?” It comes up in many forms, with entrepreneurs, product managers, digital marketers and others nervously voicing the same concern at some point during a project. In my experience, the people most likely to lack confidence in a research project are team members who haven’t had the chance to participate in, or at least be properly briefed on, the methodology. So if that happens to be you, or if you are considering running some qualitative research with your team and share this concern yourself, you’ve come to the right place.
Questions of “What” versus questions of “Why”
Every great insight starts with a great question, and what often makes product teams successful is that they aren’t afraid to ask the pressing ones and bring them right out into the open. As a rule of thumb, quantitative metrics from most data-gathering platforms (think Google Analytics, Amplitude, or Hotjar) answer questions of “What” or “How many”. Here are some examples:
“How many people in our community use this new feature?”
“How many people have abandoned our platform in the past month?”
“What is the most popular piece of content we have published?”
“What is the sign-up or conversion rate from this traffic source?”
In contrast, qualitative methods allow us to answer questions around “Why” or “How can we”:
“How can we better engage people with content they want to see?”
“Why do new community members enjoy this new feature, while others don’t?”
“Why has a larger-than-usual group of people abandoned our platform?”
“Why do people seem to prefer this method of signing up?”
When framed this way, it becomes clearer how qualitative and quantitative methods differ in utility. Closely measured conversion rates, sign-up rates, retention rates, and the like are necessary for the health of your enterprise, but on their own they won’t tell you how to improve. In most cases, your burning question, or the hypothesis you want to test, will tell you whether you can rely solely on your quant data-gathering tools or whether you need to do some qualitative digging.
Mix and match
If you’re sharp, which clearly you are, you’ll have noticed from the previous what/why examples that from here it’s just a short hop to mixing and matching these methods to really unlock opportunity. A few years ago, organizations often made do with the raw numbers from an A/B test and guessed at the motivations behind people’s choices as best they could; most now realize that it doesn’t pay to stay in the dark, and that quick, iterative (and affordable) qualitative tests can easily yield those answers. Here are some real-life examples that I hope will inspire you to take advantage of both together:
Example 1
A popular online publisher has seen a dip in readership over the last six months, as well as increased churn of their reader base. There hasn’t been any major change in the content they create - so why are people reacting negatively?
The publisher first runs a focus group with 20 members who have lately been reading much less. They are brought in to discuss how they feel about the content and why they no longer read it as frequently. Many participants say they have begun reading a competitor’s articles more often because those articles are shorter (while still being high-quality) and arrive in a daily round-up text message.
Hypothesizing that many more readers might share this behavior pattern, the publisher runs a test in which they release newer content in shorter form, or via a daily round-up, and measure open rates and daily reading time for this new content against the older, long-format content.
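Whether any difference in open rates is real or just noise is a question the quant side can answer with a standard two-proportion z-test. Here’s a minimal sketch in Python - every count below is invented purely for illustration; in practice the publisher’s email or analytics platform would supply the real numbers:

```python
# Hypothetical numbers for illustration only -- real open counts
# would come from the publisher's email/analytics platform.
from math import sqrt
from statistics import NormalDist

opens_a, sent_a = 1_840, 10_000   # format A: long-form (assumed 18.4% open rate)
opens_b, sent_b = 2_310, 10_000   # format B: short daily round-up (assumed 23.1%)

p_a, p_b = opens_a / sent_a, opens_b / sent_b

# Two-proportion z-test: is the gap bigger than chance alone would produce?
p_pool = (opens_a + opens_b) / (sent_a + sent_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"open rates: {p_a:.1%} vs {p_b:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

With counts like these the test comes back overwhelmingly significant, and the publisher can roll out the short format with confidence; with a smaller or murkier gap, it tells them to keep collecting data before deciding.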
Example 2
Management at Monday.com, the successful tech company making project management tools for small and medium-sized teams, realizes that serving very large enterprise teams could be a great business development opportunity. But past small-scale efforts to meet the needs of those customers produced only tepid results. How can the current product be improved for these new companies when there is no data telling the product teams what they want?
With no large-scale data from the masses to lean on, the team decides to turn to individual in-depth interviews to gain qualitative insights. By interviewing a small number of people who closely match the profile of likely Monday “champions” within a large organization, they uncover a whole new set of use cases and unmet needs. The product team then goes back to the proverbial drawing board with this list, building a beta version of an enterprise product that will go on to be tested with thousands of similar people working at large organizations.
The many versus the few
Some of you might still be saying to yourselves - hold on. Doesn’t every test I run still need a statistically significant number of participants, or a statistically significant number of people exhibiting a certain action, to be valid at all? If you are looking to answer questions of human motivation, I can emphatically say: no. Having run many such tests myself, I can attest to the empirical truth of the “five is all you need” phenomenon when observing patterns of human motivation and opinion.
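For the curious, the usual backing for this claim is the problem-discovery model popularized by Nielsen and Landauer: if each participant independently surfaces a given issue with probability L, then n participants surface a 1 - (1 - L)^n share of all issues. A quick sketch, treating their reported cross-study average of L ≈ 0.31 as an assumption you would re-estimate for your own studies:

```python
# Problem-discovery model popularized by Nielsen & Landauer:
# the share of issues found by n participants is 1 - (1 - L)^n,
# where L is the chance one participant surfaces a given issue.
L = 0.31  # their reported average across studies; yours may differ

for n in (1, 3, 5, 12, 15, 20):
    found = 1 - (1 - L) ** n
    print(f"{n:>2} participants -> ~{found:.0%} of issues surfaced")
```

At L ≈ 0.31 the model predicts roughly 85% of issues surfaced by participant five, and close to 100% somewhere in the 12-20 range - which is where the numbers in the next paragraph come from.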
As long as you are not trying to answer questions of “how many” with your initial qualitative test, five participants is the widely demonstrated minimum at which you can capture almost all of the relevant insights available. Test with the ideal range of 12-20 users and you are likely to capture close to 100% of them. Once your test is done and all possible insights have been uncovered, it is time to address the question of large numbers by validating your insights with hundreds or thousands of people. Quantitative testing will show you just how many people in the wider world share each insight you observed in your qualitative testers. Here’s an example of how that works:
Example:
A large fashion e-tailer has interviewed a group of 15 frequent shoppers and found that 12 out of 15 are frustrated with the lack of size information when buying a swimsuit online. To validate how this finding translates to the wider audience, the e-tailer can do one of two things:
Run a survey with 250 returning customers and ask a question along the lines of “When shopping for swimwear, how important is it for you to understand the sizing measurements?”
Include clearer sizing information on the new swimsuit collection items, and see if more customers convert.
Clearly the second option is more time-consuming and costly, so the e-tailer might want to start with option 1 and move on from there. If they do, they’ll be pleased to see just how quickly research decisions can be made, without a lot of extra cost or risk - by combining qual and quant methods, and by progressing incrementally from less costly to more costly options. That’s the playbook in a nutshell: don’t be afraid to ask the really important questions, then hunt down their answers incrementally using all the tools available.
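And if the e-tailer does start with option 1, reading the survey result is back-of-the-envelope arithmetic. A minimal sketch, with the survey outcome invented for illustration - a 95% confidence interval around the sample proportion shows how far a 250-person result can be trusted to generalize:

```python
# Back-of-the-envelope read of the option-1 survey, with an invented
# result: suppose 170 of 250 respondents rate sizing info "very important".
from math import sqrt

k, n = 170, 250            # hypothetical survey responses
p = k / n                  # sample proportion

# 95% confidence interval via the normal approximation
margin = 1.96 * sqrt(p * (1 - p) / n)
print(f"{p:.0%} +/- {margin:.1%} of customers say sizing info matters")
# -> roughly 68% +/- 5.8%: the 12-out-of-15 qualitative signal generalizes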
Do you have an example to share, of your own mixed research process? Or are you in need of some guidance on how to best mix and match? Contact me below!