After a few years of academic research, filled with surveys and quantitative data analysis, I was very excited to get into the qualitative side of user research. Talking to people about random topics and understanding how their minds worked was a thrilling concept. It fed just enough of my want-to-be psychologist side and incorporated the exciting pace of technology and product companies.
I spent a few years avoiding quantitative data. It's not that I didn't appreciate it or think it was helpful; I honestly wasn't comfortable using it in the context of user research. I admired data scientists and product analysts and longed to use their knowledge, but I was immediately overwhelmed every time I dove into something like R or Tableau. Or if I had a bunch of survey data to visualize—that was straight out of my nightmares.
However, I got to a point in my career where I could no longer hide behind qualitative data. Not only was the validity of some of my results called into question, but growing as a user researcher also meant acquiring new skills. It was time to take off my qualitative-colored glasses and adopt a mixed methods research approach.
As the name aptly suggests, mixed methods research means mixing different types of research during a project. Typically we look at research from two sides:
Qualitative research, which includes conversations, mental models, feelings, reactions, and attitudes.
Quantitative research, which is more geared toward behavior, product analytics, and generalizing data.
In simple terms, qualitative research enables us to get deep and rich insights from a small sample size. In contrast, quantitative research allows us to generalize our findings across a larger sample to understand the audience more broadly.
For a long time, qualies and quanties sat on opposite ends of a spectrum and often debated quantitative vs. qualitative approaches. However, we need both ends of that spectrum to understand our customers (and potential customers) accurately. Without one side, we are out of balance and cannot draw conclusions that are as valid or reliable.
When I initially started my journey with mixed methods, I thought there was only one way: collect the qualitative data and validate it with quantitative data. Essentially, I would do some interviews and follow up with a survey. However, there are three main ways to incorporate a mixed methods research design into your toolkit.
I was most familiar with this mixed methods design: exploratory sequential design. For this approach, you start with qualitative research and use your insights to frame the design and analysis of a quantitative study. The goal of an exploratory sequential design is to understand the scope of your qualitative insights across a much larger sample. In other words, how valid are your insights across a wider audience?
For example, I did an exploratory sequential design study that included a diary study (qualitative component) and an opportunity gap survey (quantitative component). We wanted to understand how people thought about fashion trends. We had about 15 participants in our two-week diary study, asking them about their thoughts on fashion. This approach generated a lot of data, and there was no way to prioritize the insights on their own. Therefore, after we identified patterns and trends (through affinity diagramming), we created an opportunity gap survey.
For example (and I've replaced the actual data with dummy data), we found the following insights from our diary study:
People wanted to share outfits with friends easily
People wanted to recreate expensive outfits on a budget
People wanted to find vintage versions of outfits
People wanted to shop online with others
Our opportunity gap survey, which went out to about 1,000 people, asked respondents to rate the importance of, and their current satisfaction with, each of these areas. We then prioritized the areas people rated as very important but were unsatisfied with. Instead of wrangling many loose qualitative data points, we came away with prioritized areas for our teams to focus on.
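The article doesn't specify how the importance and satisfaction ratings were combined, but one common way to turn them into a priority ranking is an Ulwick-style opportunity score. A minimal sketch with made-up ratings (the insight names come from the dummy list above; the numbers are illustrative, not the study's results):

```python
# Dummy insights scored on mean importance and satisfaction (1-10 scale).
# These numbers are invented for illustration, not the article's survey data.
insights = {
    "Share outfits with friends easily": (8.1, 6.5),
    "Recreate expensive outfits on a budget": (9.2, 3.1),
    "Find vintage versions of outfits": (6.4, 5.8),
    "Shop online with others": (7.3, 4.0),
}

def opportunity_score(importance, satisfaction):
    # Ulwick-style score: high importance plus the gap between
    # importance and satisfaction; no penalty if people are
    # already more satisfied than the area is important.
    return importance + max(importance - satisfaction, 0)

ranked = sorted(insights.items(),
                key=lambda kv: opportunity_score(*kv[1]),
                reverse=True)
for name, (imp, sat) in ranked:
    print(f"{opportunity_score(imp, sat):5.1f}  {name}")
```

With these dummy numbers, "very important but unsatisfying" areas (like recreating expensive outfits on a budget) float to the top, which is exactly the prioritization the survey was designed to produce.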
Explanatory sequential design goes in the opposite direction. This design begins with a quantitative component, which you then dive deeper into using qualitative research. If you are unsure where to focus your qualitative efforts next, explanatory sequential design can help. You look at the large-scale issues, see where the major pain points lie and follow up with qualitative research.
For example, at one company, we struggled to understand our customers' pain points deeply. It felt like we kept jumping from one issue to the next, with no clear understanding of why something was happening or the severity of the problem. I went into our product analytics and noted where people were dropping off in our conversion funnel. I then sent out a large-scale survey with a list of pain points we had heard from various research studies, customer support, reviews, and areas the product analytics showed we were failing. We asked people to rank the pain points (and add whatever we missed).
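The funnel check described above is simple arithmetic: compare the count of people reaching each step with the count reaching the next one. A minimal sketch with hypothetical step names and counts (not the company's actual analytics):

```python
# Hypothetical conversion funnel counts -- invented for illustration.
funnel = [
    ("Landing page", 10000),
    ("Product page", 6200),
    ("Add to cart", 2100),
    ("Checkout", 900),
    ("Purchase", 640),
]

# Drop-off between each consecutive pair of steps.
drop_offs = [
    (step, next_step, 1 - next_n / n)
    for (step, n), (next_step, next_n) in zip(funnel, funnel[1:])
]
worst = max(drop_offs, key=lambda d: d[2])

for step, next_step, drop in drop_offs:
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
print(f"Biggest drop-off: {worst[0]} -> {worst[1]}")
```

The step with the steepest drop-off is a natural candidate for the pain-point list that goes into the follow-up survey.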
This survey came back, and there were clear "winners" for major pain points. However, we still didn't know why these areas were causing issues. So I then scheduled interviews with 12 customers to get additional context into the why. My qualitative research was focused, and the outcome allowed us to fix the problems with a user-centric mindset. No guessing what was wrong and why; we had the data we needed to make impactful change.
Finally, convergent parallel design collects qualitative and quantitative data simultaneously and independently. Both data types carry the same weight, are analyzed separately, and then compared or combined to cross-validate your findings. For me, this approach is the more challenging of the three and the one I've used the least.
One (rare) example of using this approach was when we studied behavior and attitude simultaneously. We wanted to learn about interactive media in research academics' papers and how it impacted their publishing rates. We collected the publishing-rate data and held 1:1 interviews to understand academics' perceptions of interactive media in articles. Comparing the qualitative and quantitative results was challenging, but we had a shorter timeline and needed insights quickly.
Whenever I receive or create a "how" or a "double-barrelled" research question, I use mixed methods research. What does this mean?
How questions typically include both qualitative and quantitative aspects. For instance, I have used mixed methods to answer the following questions:
How does Gen Z perceive (qualitative) fashion trends, and to what extent does that impact their shopping behavior (quantitative)?
What are the major pain points (quantitative), and how are they impacting people's perceptions of our website (qualitative)?
How do research academics think about (qualitative) interactive media in their articles, and how does this impact their publishing frequency (quantitative)?
How do employee performance ratings (quantitative) impact employees' perceived job satisfaction (qualitative)?
In general, the goals of a mixed methods research study can include:
Understanding the full context behind people's experiences: what they are doing (quantitative) and why (qualitative)
Generalizing insights to a larger population
Resolving uncertainty about where to start with qualitative user research
Raising the validity and reliability of your research results through triangulation
Prioritizing large amounts of qualitative data
Mixed methods can take your user research to the next level by lending more credibility to your insights, allowing you to prioritize and generalize to a larger population, and giving your qualitative research focus. However, with these positives, mixed methods can be highly complex work. Taking in different data types and comparing the results is time-intensive. Whenever I have done mixed methods, it has added to the timeline (except for convergent parallel design), and I needed a quantitative counterpart to support my analysis. There were also times when the quantitative and qualitative data guided us to completely dissonant conclusions, which is frustrating after spending so much time and effort.
But with all these caveats, mixed methods is a fantastic approach to user research. Therefore, we need to focus on mixed methods research more in the future, either by increasing our skills or having qualitative and quantitative researchers work together more closely. Not only does it give us a holistic picture of our users, but it deepens our knowledge and allows for an even higher degree of user-centric thinking in our organizations.
Written by Nikki Anderson, User Research Lead & Instructor. Nikki is a User Research Lead and Instructor with over eight years of experience. She has worked in companies of all different sizes, ranging from a tiny start-up called ALICE to the large corporation Zalando, and also as a freelancer. During this time, she has led a diverse range of end-to-end research projects across the world, specializing in generative user research. Nikki also owns her own company, User Research Academy, a community and education platform designed to help people get into the field of user research, or learn more about how user research impacts their current role. User Research Academy hosts online classes and content, as well as personalized mentorship opportunities with Nikki. She is extremely passionate about teaching and supporting others throughout their journey in user research. To spread the word of research and help others transition and grow in the field, she writes for dscout and Dovetail. Outside of the world of user research, you can find Nikki (happily) surrounded by animals, including her dog and two cats, reading on her Kindle, playing old-school video games like Pokemon and World of Warcraft, and writing fiction novels.