Since the dawn of corporate time, quantitative data has dominated boardrooms, annual reports, and executives' minds. The appeal is obvious: big numbers, broad trends, colorful graphs, and pie charts galore. However, quantitative data tells only half the story. Academy Xi's Eric Lutley posed a series of questions to User Research Leader Jess Nichols about the other half of the story—qualitative research.
Jess: Qualitative research focuses on context and observations. Using unstructured data, you can genuinely understand people's attitudes around their experiences and gather rich insights from your users. Quantitative data, on the other hand, focuses on behavioral insight captured through structured data like analytics and surveys. You are trying to generalize your understanding to paint a broad picture of your audience.
Jess: Quantitative research will show you 'what' is happening, while qualitative research explains 'why' something is happening. For example, when looking at quantitative analytics, you can see what process or flow people are taking, but you don't necessarily build understanding around why they're taking that flow. That gap can lead to assumptions about the 'why,' including false assumptions based on the observer's existing domain knowledge. Combining the quantitative data you're capturing with the qualitative 'why' is a research superpower that confirms hypotheses and ensures you make the right improvements to your product experience.
Jess: There are nuances around techniques and how to do research, so if it's an option, always try to engage a researcher. Researchers train in exercising neutrality and are skilled in the art of capturing insights from a variety of information sources. When this isn't possible, organizations often look to product managers or designers to conduct research. Interested in what a user researcher is and what they do? Check out this article I wrote.
Something to be aware of is the bias product managers or designers may bring when they start conversations. Suppose a designer created a particular experience and is conducting a usability test on that same experience. In that case, they may dismiss pain points users identify because those pain points don't fit the designer's mental model. Or a product manager could be influenced by a conversation they had with a high-value customer, running the risk of identifying pain points that don't reflect the broader customer base's experience.
Jess: There are multiple opportunities to research throughout the product lifecycle. I feel like people think research is always an intense and laborious process, but there are ways to do quick, light-touch research that still gives you directional validation. In my opinion, there are two main areas where qualitative research can be most effective:
Before product scoping: Before you start to build out the experience, you want to understand where the knowledge gaps are and identify essential customer pain points—helping you make decisions around product prioritization and maintain a clear focus on your customers' needs.
Before engineering: Generally, more evaluative research occurs at this stage, like usability testing or information architecture testing. You want to make sure that the experience meets your customer needs before you invest your engineering capacity heavily.
Forrester, via Forbes, calculates that $1 invested in UX returns $100. Focusing on understanding your users' needs when creating a product helps you avoid the burden of change management or product rebuilding down the track.
Jess: Soft skills for research are essential. Many people assume that if you're a researcher, then the bulk of your effort is focused on using the right methodologies and executing the research itself. What I have found throughout my career is that softer skills are just as relevant. Being skilled in communication, presenting, and influencing is particularly important because a successful research project relies on your stakeholders' buy-in. I've had experiences where product managers thought they had the right solution, but I've spent time helping them understand their own biases and the other perspectives they may need to consider to ensure they meet the customers' needs.
Jess: Although an interview seems like a simple conversation on the surface, researchers use many research techniques to maximize the conversations' output. Here are some approaches that I use in my interviews:
Building rapport: I spend the first half of my interviews focused solely on building rapport with the participant. Building rapport is incredibly important in creating trust. It enables your participants to be more open to tough topics in conversations, leading to better input and research insights. However, building rapport takes time. Just as you wouldn't share your innermost secrets on a first date, you have to take some time to build trust and rapport. Participants aren't going to share their real thoughts with you on the first question you ask if you've never spoken to them before. The first half of my interviews includes 'getting to know you' questions and setting the context for the conversation - questions generally grounded in fact. By the time I've built rapport halfway through, I can start asking for their opinions and pain points on topics.
Questioning technique: I am very thoughtful about how I ask questions. When we think of interview questions, there are two types of questioning: open questions (those with free-form answers) and closed questions (those with a yes-or-no response). We generally want to funnel from open questions to closed questions during an interview. We should never start with closed or leading questions, as they can bias our participants' answers towards our assumptions.
The way you move from open to closed questioning is through three stages. The discovery stage is where you ask open-ended questions - these could result in any response. Then there is the validation stage, where you ask probing or clarifying questions about the previous open response. Finally, in the paraphrasing stage, you repeat the answer back to the participant to make sure you have a shared understanding.
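One way to picture the funnel is to sketch a discussion guide as data, with each stage's question type made explicit. This is only an illustrative sketch: the stage names follow the interview above, but the example questions and the `check_funnel_order` helper are invented.

```python
# A hypothetical discussion guide for one interview topic, showing the
# discovery -> validation -> paraphrasing funnel described above.
# Stage names come from the article; the questions are invented examples.
FUNNEL_GUIDE = [
    {"stage": "discovery", "type": "open",
     "question": "Walk me through the last time you exported a report."},
    {"stage": "validation", "type": "probing",
     "question": "You mentioned the export felt slow - what were you expecting?"},
    {"stage": "paraphrasing", "type": "closed",
     "question": "So the delay matters because you share reports live in meetings - is that right?"},
]

def check_funnel_order(guide):
    """Verify the guide never moves backwards in the funnel
    (discovery before validation, validation before paraphrasing)."""
    order = {"discovery": 0, "validation": 1, "paraphrasing": 2}
    stages = [order[step["stage"]] for step in guide]
    return all(a <= b for a, b in zip(stages, stages[1:]))

print(check_funnel_order(FUNNEL_GUIDE))  # True: this guide funnels correctly
```

A guide that opened with the closed paraphrasing question would fail the check - which is exactly the "never start with closed or leading questions" rule above.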
Jess: When I was a consultant, I would face this situation whenever I started a new project, because I'd be working with a client and I didn't have all the context. There was pressure to know what you were talking about very quickly. To solve this, I would engage with stakeholders who were happy to help me level up on the subject matter. It's essential to be open and honest with the people you're talking to: acknowledge that you don't have deep experience and let them know you're still learning. You can always complement your learning through secondary research like industry reports, news articles, or blog posts.
On the flip side, when I am an expert in an area, I will go into these conversations, pretending not to know anything. While I think I have a pretty good idea of what they're going to say, I will go in with a beginner's mindset in these meetings to avoid any assumptions. I want to learn, and I want to relearn every single time I have conversations because there could be something I'm assuming that mightn't be correct. So even if I have the domain knowledge, I will go in as if I don't have it to ensure I learn as much as possible.
Jess: When you start to identify recurring patterns and hear the same thing repeatedly in the conversations you're having, it's probably a valuable time to pause and make sure you have the information you need to move to the next phase.
Jess: The length of research heavily depends on the methodology, scope, and participants. In some cases, you can complete usability testing for five participants in a day. In other cases, it can take up to a month to recruit participants - especially in the B2B space, when you have to secure buy-in from the participants' account manager. Don't conduct research merely to confirm hypotheses you're already confident in - that reduces research to a validation exercise, when research is so much more than that. If you're running a study and are getting repetitive answers, pause your research to determine whether any gaps remain in your knowledge or whether you have a comprehensive enough data set to begin your analysis.
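The "recurring patterns" signal above can be made concrete by tracking how many new themes each session surfaces; when sessions stop producing new themes, you may be approaching saturation. The session data and theme codes below are invented for illustration.

```python
# Hypothetical theme codes recorded per interview session. When a session
# surfaces no codes we haven't seen before, that's a rough signal of the
# saturation point described above.
SESSIONS = [
    {"slow-export", "confusing-menu"},
    {"slow-export", "missing-search"},
    {"confusing-menu"},
    {"slow-export"},
]

def new_themes_per_session(sessions):
    """Return how many previously unseen theme codes each session produced."""
    seen, new_counts = set(), []
    for codes in sessions:
        fresh = codes - seen          # codes we've never heard before
        new_counts.append(len(fresh))
        seen |= codes                 # remember everything heard so far
    return new_counts

print(new_themes_per_session(SESSIONS))  # [2, 1, 0, 0]
```

Two sessions in a row with zero new themes is the kind of repetition that, per the interview, should prompt you to pause and check whether you have enough data to start analysis.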
Jess: Dovetail is an excellent tool for this. In recent research I conducted, I found researchers fall into two main buckets of how they analyze: Team Spreadsheet or Team Sticky Note. Team Spreadsheet focuses on having a structured information architecture and finding insights that fit within an existing mental model. In contrast, Team Sticky Note is focused on organic analysis and the affinity of data to find their insights. Each approach has trade-offs you have to consider. Having a predefined information architecture can constrain you from discovering novel insights. An organic process requires a more extended timeframe to conduct your analysis. You can also see a combination of both approaches. Another incredibly important soft skill is storytelling. The insights you share need stickiness to resonate with your stakeholders. I focus on creating a structure out of the unstructured data and then moving towards the art of storytelling.
Jess: If you're doing foundational generative work, you're better off doing something organic like affinity analyses because you can find insights without having inherent biases. Generally, evaluative research is going to be much more useful for a spreadsheet because you're looking at 'what is the task?' or 'what did this participant do?'.
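As a minimal sketch of the "Team Spreadsheet" style described above: each quote is tagged against a predefined theme taxonomy, and tag counts surface the recurring pain points. The quotes, the theme names, and the `theme_counts` helper are all invented for illustration, not from the interview.

```python
from collections import Counter

# Invented interview snippets, each pre-tagged against a fixed taxonomy -
# the predefined information architecture the "Team Spreadsheet" approach uses.
TAGGED_QUOTES = [
    ("I couldn't find the export button.",       ["navigation"]),
    ("Exporting took forever on my laptop.",     ["performance"]),
    ("The menu labels didn't make sense to me.", ["navigation", "terminology"]),
    ("I gave up waiting for the page to load.",  ["performance"]),
]

def theme_counts(tagged_quotes):
    """Count how often each predefined theme appears across all quotes."""
    counts = Counter()
    for _quote, tags in tagged_quotes:
        counts.update(tags)
    return counts

for theme, n in theme_counts(TAGGED_QUOTES).most_common():
    print(f"{theme}: {n}")
# navigation and performance each appear twice; terminology once
```

The trade-off the interview names is visible here: a quote that fits none of the predefined themes simply gets no tags and disappears from the counts, which is how a fixed taxonomy can hide novel insights that an affinity-style analysis might surface.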
Jess: It depends on the type of research that you're conducting. It's essential to visualize your insights to resonate with your stakeholders if it's foundational or generative. You want to get buy-in and ensure the information can be easily digested by just looking at it, so your stakeholders understand the findings. Utilizing things like personas or journey maps can be incredibly useful to visualize the experiences and users that your research focuses on. I've heard of people building out courses or one-pagers that they stuck around their office. With evaluative research, you want to be tactical and grounded in the designs. If you're doing usability testing, you want your recommendations to be incredibly actionable so that a designer or an engineer can make that change as quickly as possible. Not many people want to look through a long list of insights. They're looking for the research point of view and the action items that they need to take as part of their role. Expertly translating the depth of data that you have into something engaging is critical in ensuring your research resonates with your stakeholders.
Jess: Collaboration can be more challenging when everyone's remote. There are many collaborative tools and platforms out there that aim to help you analyze your research data remotely. The way you make them work for you depends heavily on whether you're Team Spreadsheet or Team Sticky Note. Dovetail, Google Sheets, Miro, and Mural are the ones that come to mind.
Fundamentally, qualitative research helps articulate the 'why' to complement the 'what' of quantitative data. Regardless of whether the analysis approach is structured or unstructured, qualitative research focuses on finding actionable outcomes for teams.