Big companies pay big agencies big bucks to get into their users’ brains and come out with insights that will lead them to product success.
If you don’t have that kind of cash in your coffers, but still want to leverage user research to gain knowledge about your customers, read on!
It is definitely easier to run a study if you have a background in UX or psychology, but the only real prerequisite to starting out is an open mind, your own curiosity and a bit of guidance.
This post is based on training sessions that I have done over the years with founders, product managers, engineers, marketers, design teams and others who wanted to dip into the well of qualitative user research.
Going for the gold
The goal of user research is to build an effective, reliable and agile feedback loop with your customers, potential customers and greater community.
The insights you learn when you put in the effort to build these feedback loops directly inform the success of your product and make it easier to act quickly on decisions that are critical for growth.
The following is a six-step process designed to take you into the realm of user research and bring you back with treasure.
#1: Focus your hypothesis
Just a few examples of the many things you can use qualitative research to find out are:
Why customers do or don’t do something
How interested people might be in something you are considering building for them
What customers think of your competitors and why
Before you start on the actual research, you will need to assess the current situation, your question about it and what data is available to you now.
You’ll generally want to run one test per question. Remember: Good answers start with good questions!
The following is a template you can use as you are planning the test to focus your hypothesis, create team consensus and give you a barometer for success.
This template is a streamlined version of the Strategyzer testing card and learning card. If you’re familiar with Alex Osterwalder’s Business Model Canvas, then you’re likely feeling that this jibes with what you’ve learned from his books. Read more about ‘em here.
#2: Know your goal: usability vs. concept testing
Broadly speaking, there are two types of user research testing. You choose which one to use depending on the situation and what you want to learn.
Usability testing or ‘tactical’ study (testing flows to see if they work - NOT related to motivation!)
The most well-known and straightforward method of user research is usability testing. Participants in this kind of test are given a series of tasks to accomplish, often within a certain amount of time, while explaining their actions out loud. It helps easily identify which features users like, which they don’t, and, most importantly, why.
This kind of testing works well together with surveys or data gathering tools like Hotjar, which are used to see where drop-offs happen during action flows.
Because usability testing won’t test whether your target audience is motivated enough to use the solution you offer, it is most helpful after you’ve done concept testing, as explained below.
Concept testing (testing out a value proposition to see if it resonates)
More advanced than usability testing, this type of method uses in-depth interviews or digital prototypes to see how well a group of participants responds to a new concept or product, how motivated they might be to use it and what positive or negative associations they might have with it.
Sometimes focus groups (in-person or virtual) work wonders for concept testing. Group settings of 3-20 participants allow them to play themed games such as card sorting, word play or role play to bring about insight-rich conversations that wouldn’t occur in other methods of testing.
#3: Choose a test method
There are multiple user research test methods. Some work better for concept testing, some better for usability testing, and some are equal opportunity.
Questioning (one-on-one interviews, small group discussions, or - much less ideally - in a survey). For more details on how to (and how not to) structure questioning, check out my blog post here.
Role play (acting out the interaction that a product or service involves). I describe how to do it effectively in this blog post.
Prototype or live product demo. You can have testers use a prototype built in whichever design tool you use, or have them use a live build of the product.
Participatory design (asking the tester to brainstorm with you about a certain problem). This kind of workshop or interview allows participants to “design along with you” - allowing them to show you exactly how they want things to work. It takes more time to plan but can be a lot of fun for testers and you gain especially rich information.
#4: Avoid clouding user data with your own biases
A classic mistake is to get a bunch of people in a room (Zoom or real) and ask them: “Hey - look at this cool new thing! Do you like it? How much would you pay for this?”
If you do this, you will most likely fail at getting any kind of accurate data. Instead, the data you get will lead your team toward building things based on its own implicit biases rather than customer need.
Nearly all of the time, you will want to focus your conversations with users around the problems they experience rather than the solutions you hope to bring. This is because users often have a hard time expressing negative opinions, and they may subconsciously paint a rosy picture of reality out of fear of disappointing an interviewer. A good study is designed to help users give you their most candid, unfiltered opinions about something, and to allow them to share perspectives that accurately match reality.
To that end, here are some golden rules. Break them at your own risk!
DO ask open-ended questions (e.g. How, How much, When…).
DON’T ask yes / no questions (e.g. Do you, Did you, Would you…).
DON’T ask leading questions that presuppose a particular response and don’t allow for disagreement. (e.g. “How much do you hate doing X?”)
DO listen lots more than you speak!
DO note expressions of positive or negative emotions (e.g. head shaking, frustration, saying “hmmm” in confusion, or “wow!” when impressed) and ask for elaboration.
DO ask the user to elaborate on what they expected to happen, see or get in any given situation.
DO invite the user to describe their current alternative to what you are building (e.g. how the person does X today, what they like about it and what they do not).
DO remember that people are not very good at predicting what they will do in a future situation (especially if that situation involves spending money). They are better at telling you what they currently do. Therefore, good ways to gauge sentiment around pricing are:
Ask how much they have already spent toward alternate ways of solving that same problem
Name a price point and ask users if it sounds low, high, or just right to them at this moment in time
#5: Do simple, fast debriefs with your team
Two heads are better than one when it comes to evaluating user research. Try to involve at least one other person in your research studies so you can bounce ideas and perspectives off each other. Teammates are ideal partners, especially those whose perspective differs from your own.
As soon as possible after the user research testing, hold a quick-and-dirty discussion with that teammate (or the whole team). Don’t wait until you have a fancy presentation. This helps everyone synthesize their thoughts while the experience is still fresh.
You can also invite stakeholders to listen to interviews or attend focus groups if possible (just be sure to brief them beforehand on the rules in section #4).
#6: Make it a habit
Feedback loops are like a muscle that gets stronger with time. It’s better to run small initiatives once a month than a large effort once a year. Sustainable efforts can be as simple as committing to interview one customer per week, or to run one small focus group per month over Zoom. Teams at some of the most successful companies - large and small - have a dedicated day in the week when users are invited to give feedback. (Here’s lookin’ at you, Dropbox, Autodesk, and more…)
Find a way that works for you to keep getting feedback and stick with it. It gets easier and the return on investment is guaranteed to be worth it.