When I started my user research career, I designed wireframes and prototypes. They weren’t the nicest—as I quickly found out, design is not where my skills lie—but they did the job. Since I was the only researcher, I also tested my own work. So I used to start my usability tests by saying, “We want to get your feedback on some designs I made.”
As soon as I uttered that fateful sentence, the test was doomed. Participants turned into people who just wanted to give me praise, or who uh-huh-ed and umm-ed through my questions. I was stumped and soon hated running usability tests. Because of this, I could never get deep insights from my projects. I was ashamed of my reports; my number one insight was nearly always: “The participant said it was fine.”
It was not fine. When the features or ideas went out into the real world, customer success got complaints, or customers simply didn’t use the features. This made me question whether I should be a researcher at all. My findings fell flat, and I felt like I was hurting the team rather than helping it. When I spoke to a research friend about this, they laughed. Admittedly, I was a little hurt, but they explained the problem in three words: social desirability bias.
Social desirability bias means that we are inclined to act in ways that feel acceptable to society, even if what we do or say is untruthful. How does this relate to testing designs? Some participants are more likely to over-report that the designs are “good” or “helpful.” They answer in the way they assume you want to hear. And who wants to hear that their designs are bad?
Well, we do! Learning that a prototype is helpful can be nice, but we need to focus on where we can improve. We need to uncover the things that are going wrong so we can iterate on and fix them!
As a designer, you will likely have to test your own designs at some point, which means you will likely encounter social desirability bias. Fortunately, there are quite a few ways to mitigate this effect and still get honest results, without participants worrying about hurting your feelings.
If you can avoid it, don’t test your own designs; having someone else moderate keeps the session more neutral. It is much easier for someone to test something they didn’t create. That way, you can also truthfully tell the participant the most crucial line in the introduction: “I didn’t design this, so you absolutely won’t hurt my feelings. We want you to be as honest and open with feedback as possible because that is the only way we can improve!”
Some other pieces to include in your introduction and throughout the session are:
Reminding the participant that you are looking for constructive feedback
Telling them that they are the expert and you are here to learn from them
Saying that these are just a few ideas the team came up with, but nothing is set in stone
Stating that the team loves it when people don’t like something, as it helps them improve
If you can’t get around this and must test your designs, you can tell a little white lie. For example, when I told participants that my designs weren’t actually mine, they were more likely to share constructive feedback. I kept this up, and it was one of the best small changes I made.
Take the time at the beginning of the session to build rapport with your participant. I always begin with a warm-up where I ask general questions such as, “What is your favorite hobby?” “How do you like to spend your free time?” or “Anything interesting that you’ve read or watched recently?” When they answer, I don’t just say thanks and move on; I follow up on their answers. Spending this time together helps the participant trust me and engage more openly.
Whenever we have a result we want to reach, we risk asking leading or biased questions. For example, if we want participants to say a particular thing, complete a specific action, or give a certain type of feedback, we might end up leading them there. It is therefore essential to let go of what you want from the research session and focus on the participant’s thoughts and reactions.
To do this, we need to know our biases and write open-ended questions in our discussion guide to prompt us. I become more aware of my biases by writing them all down before my session. For example, I will list everything I think will happen and everything I want to happen during the session. This list can include things like:
I believe the participant will not find value in the concept
I don’t think the participant will understand our prototype
In addition to this, I always write open-ended questions in my discussion guide. These questions remind me that I need to be as neutral and unbiased as possible. I use the acronym TEDW to frame most of my questions:
T stands for “Tell me about the last time...”
E means “Explain the last time...” or “Explain what you mean by...”
D signifies “Describe how you feel about...” or “Describe what you mean by...”
W stands for “Walk me through...”
Have you ever had a participant say, “It’s fine” or “It’s good”? Having this happen is probably the most frustrating part of usability testing. It can be so disheartening when your participant won’t give you anything substantial. This situation is a great time to use the TEDW acronym to dig into what participants mean.
Whenever participants use a subjective and vague word like “helpful” or “fine,” you can ask:
“Explain what you mean by ‘fine.’”
“Describe what you mean by ‘helpful.’”
These questions will force (in a good way!) the participant to think about and better articulate how they feel. I sometimes apologize ahead of time by saying, “I’m really sorry if I keep asking you what you mean, but I am trying to learn.”
When nothing else is working, I go to my indirect questions. These questions give participants the freedom to answer without the question relating to them directly. If they struggle to give constructive feedback because of social desirability bias or other reasons, they may find it easier to talk about how something might negatively impact someone else. For example, you can ask, “How would this impact your friend, family member, or colleague?” or, “What would your friend, family member, or colleague think about this?”
If your designs deal with a complex or emotionally loaded topic, or if you are talking to a challenging population (e.g., kids), you can use stimuli. For example, when I was helping a company design an end-of-life care website, we used different stimuli to help people explain their feelings. For this study, we brought in photos representing emotions and used notecards with emotions written on them that people could use to describe how they felt.
Overall, one of the most critical parts of getting constructive feedback from your participants is awareness. If you are aware of your biases and open to feedback, you will be more likely to ask questions that elicit honest, constructive answers. Keep these tips in mind the next time you need to test designs!
Written by Nikki Anderson, User Research Lead & Instructor. Nikki is a User Research Lead and Instructor with over eight years of experience. She has worked in companies of all sizes, ranging from a tiny start-up called ALICE to the large corporation Zalando, and also as a freelancer. During this time, she has led a diverse range of end-to-end research projects across the world, specializing in generative user research. Nikki also owns her own company, User Research Academy, a community and education platform designed to help people get into the field of user research, or learn more about how user research impacts their current role. User Research Academy hosts online classes and content, as well as personalized mentorship opportunities with Nikki. She is extremely passionate about teaching and supporting others throughout their journey in user research. To spread the word about research and help others transition and grow in the field, she writes for dscout and Dovetail. Outside of the world of user research, you can find Nikki (happily) surrounded by animals, including her dog and two cats, reading on her Kindle, playing old-school video games like Pokemon and World of Warcraft, and writing fiction novels.