Triangulating data was something I learned in academia. It was always essential to have data from multiple sources to confirm findings. Just interviews or survey data wasn’t enough for my academic research.
However, when I transitioned to user research, I didn’t consider this concept much, especially at the beginning of my career. I learned new methods and ways of working, and triangulation, honestly, fell by the wayside. I conducted and presented usability test results or insights from qualitative interviews. While I battled the “only five participants?!” question, it didn’t occur to me to include other data sources to back up my results or mitigate this concern. I believed a larger sample size was the answer.
But, there was a limit. I couldn’t always speak to more people, especially when teams needed results quickly. And, for a while, I felt stuck.
One day, I was grappling with a terrible experience in one of the products I worked on. However, we didn’t have clear quantitative data showing any issues. So I decided to take a peek at our reviews, and I struck gold: many of the negative comments were about the feature in question. That moment sparked something in my brain and reminded me of the power of multiple data sources.
There are a few ways to triangulate data:
Data triangulation: Gathering data from different times, spaces, and people, such as a longitudinal study or comparing data from different locations.
Investigator triangulation: Including multiple researchers in collecting and analyzing data and comparing code sheets or findings.
Theory triangulation: Involving different theoretical frameworks, such as testing a variety of hypotheses behind motivation.
Methodological triangulation: Using varying methods to approach the same topic, such as a survey and interviews in the same study.
In user research, we generally focus on methodological and data triangulation, as these are the easiest and most common approaches. That said, I have also seen investigator triangulation on larger teams and theory triangulation at hypothesis-driven companies.
Free? Sign me up!
Once I remembered the power of triangulation, I wanted to use it all the time. Soon, I was trying to triangulate data for every single study, much like my early-career urge to test everything with research. I couldn’t triangulate everything, and it wasn’t necessary to.
I then shifted my mindset to help me understand when it was necessary to triangulate and when I could let it go. For a while, I based my decision on time. Did I have enough time and resources to triangulate my research? However, I slowly moved away from resources and toward risk. I started asking myself how significant the risk was if we made decisions solely on one method or data source.
Finally, I used triangulation as a way to mitigate risk. After some time, I realized that triangulation was helpful in the following situations:
To see a complete picture of the research problem or users, gaining multiple perspectives into how or why someone is thinking, feeling, or acting in a certain way.
To enhance the validity of my study by combining complementary methods and mitigating biases or limitations from using only one method.
To give insights more credibility by cross-checking other sources to see if the data lines up.
Triangulating in these situations helps me reduce risky decision-making, especially when results are confusing or fuzzy.
There are quite a few ways to use triangulation in user research, and the majority of sources are accessible to most researchers. I mainly triangulate by drawing on sources and methods like:
Customer support tickets
Reviews of the product
Analytics/product usage data
Stakeholder interviews, such as speaking with account managers
Mixed methods approach, such as using a survey to follow up on qualitative interviews
Using multiple metrics to assess usability (e.g. time on task, task success, number of errors)
Intercept surveys (e.g. SUS)
Let’s look at concrete examples of how you might implement triangulation in your next study, one for each situation above.
About a week ago, we put the System Usability Scale up on our platform to assess the platform’s overall usability and satisfaction. Unfortunately, we received a few low scores regarding the platform’s ease of use. However, we had no idea why users rated it so poorly. So we spoke to account managers to understand any issues from their perspective, and they gave us some areas of concern that several users had brought to their attention.
We decided to run a usability test to better assess the underperforming areas of the platform. With this test, we uncovered the root of the issues and why users faced them. This approach helped us discover the what and the why, giving us a more holistic picture of the problem.
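As an aside, if you are new to the SUS mentioned above: its 0–100 score comes from a fixed formula over ten 1–5 Likert items (odd-numbered items are positively worded, even-numbered items negatively worded). A minimal sketch in Python, with invented example responses:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    1-5 Likert responses, using the standard SUS formula."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for item, r in enumerate(responses, start=1):
        # Odd items contribute (response - 1); even items (5 - response)
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100


# A neutral respondent (all 3s) lands exactly in the middle
print(sus_score([3, 3, 3, 3, 3, 3, 3, 3, 3, 3]))  # → 50.0
```

A score of 68 is the commonly cited average, which is a handy benchmark when you are deciding whether "a few low scores" are worth chasing.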
We ran a larger-scale persona study in which we interviewed 15 people from one of our segments. The insights came flooding in, ranging from needs to pain points, and we had a lot of qualitative data. However, we had no idea how this data generalized to the larger population or how to prioritize it.
To increase the study’s validity, we sent a follow-up opportunity gap survey to assess the importance of, and current satisfaction with, our different insights. This survey helped us determine which insights mattered most to our users and where they were least satisfied. It also allowed us to better prioritize the qualitative data as we shaped it into a persona.
We also combined this with data analytics to confirm areas that users cited as pain points.
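If you want a quick way to turn importance and satisfaction ratings into a ranking, one common formulation is Ulwick’s opportunity score: importance + max(importance − satisfaction, 0). A small sketch with made-up insights and ratings (not data from this study):

```python
def opportunity_score(importance, satisfaction):
    """Ulwick-style opportunity score: high-importance, low-satisfaction
    items bubble to the top. Ratings here are averages on a 1-10 scale."""
    return importance + max(importance - satisfaction, 0)


# Hypothetical insights with averaged survey ratings (importance, satisfaction)
insights = {
    "export reports": (9.1, 4.2),
    "dark mode": (5.0, 6.5),
    "bulk editing": (8.3, 5.1),
}

# Rank insights from biggest to smallest opportunity
ranked = sorted(insights.items(),
                key=lambda kv: opportunity_score(*kv[1]),
                reverse=True)
for name, (imp, sat) in ranked:
    print(f"{name}: opportunity {opportunity_score(imp, sat):.1f}")
```

Important-but-already-satisfied items score no bonus, so the ranking naturally surfaces the pain points users care about most.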
For this study, we focused on usability testing a newly developed feature. Unfortunately, we didn’t have much time, so we chose one segment to focus on and tested with seven participants from it. We found that four out of seven participants struggled with a particular task. However, this data alone did not feel conclusive enough to justify significant changes.
We decided to look at the analytics data we had gathered over the few weeks the feature had been available. In this data, we found a steep drop-off at the same point in the experience where participants had struggled in the usability test. In addition, we reached out to customer support and learned they had recently received an influx of tickets complaining about this feature.
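For reference, the drop-off at each step can be read straight off funnel event counts. A toy example with invented numbers (not our actual analytics):

```python
# Hypothetical funnel counts for the new feature, one row per step
funnel = [
    ("opened feature", 1000),
    ("configured settings", 820),
    ("reached review screen", 790),
    ("completed task", 310),
]

# Drop-off between consecutive steps: 1 - (this step's users / previous step's)
drop_offs = {}
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    drop_offs[name] = 1 - n / prev_n
    print(f"{prev_name} -> {name}: {drop_offs[name]:.0%} drop-off")
```

A single step that sheds a disproportionate share of users, like the last one here, is exactly the kind of signal worth cross-checking against a usability test.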
There are infinite ways to use multiple sources and methods to triangulate your data. I highly recommend trying it in your next study, especially if you are in one of the above situations and trying to mitigate risk!
Written by Nikki Anderson, User Research Lead & Instructor. Nikki is a User Research Lead and Instructor with over eight years of experience. She has worked at companies of all sizes, from a tiny start-up called ALICE to the large corporation Zalando, and also as a freelancer. During this time, she has led a diverse range of end-to-end research projects across the world, specializing in generative user research. Nikki also owns her own company, User Research Academy, a community and education platform designed to help people get into the field of user research or learn more about how user research impacts their current role. User Research Academy hosts online classes, content, and personalized mentorship opportunities with Nikki. She is extremely passionate about teaching and supporting others throughout their journey in user research. To spread the word about research and help others transition and grow in the field, she writes for dscout and Dovetail. Outside of the world of user research, you can find Nikki (happily) surrounded by animals, including her dog and two cats, reading on her Kindle, playing old-school video games like Pokemon and World of Warcraft, and writing fiction novels.