
How to conduct unmoderated user testing

Published 6 October 2022

Unmoderated user research gets a bad rap. There are about a million ways it can go wrong, and you should only use it in particular situations. However, unmoderated usability tests can be a lifesaver. 

Timelines often run short, and I've had to lean out my approach to user research. For a while (and still, when necessary), I fought hard against those timelines. When colleagues asked how we could speed up the research, I didn't give them much choice: if we couldn't squeeze usability testing into the timeline, I suggested we do it another time or extend the timeline. Many of my colleagues were frustrated with the confines of user research.

At one point, one of my managers suggested trying an unmoderated study. I shook my head and turned down the idea. In my mind, and from what I had heard, unmoderated tests were a waste of time. They returned a bunch of noise or low-quality data and often left teams frustrated by confusing results. So, for some time, I refused to use unmoderated testing. I had tried it once with disastrous results, and horror stories from other researchers had biased my perspective.

And then I got stuck. I had several usability tests to run and needed the findings ready quickly. My back was against the wall, so I finally turned toward unmoderated usability testing and found that it wasn't all bad.

I know I am not the only one who has way too much to do and not nearly enough time. I've watched many time-poor designers and product managers attempt full-blown research studies on the side of their full-time jobs. While it can work, it is not always possible, and unmoderated user testing is a fantastic way to save time while still gathering useful data from your users.

First, what is unmoderated user testing?

Unmoderated testing gets a bad reputation because of how it works: there is no moderator during the test. Instead, you create your usability test and let it out into the wild. Participants record themselves and answer your questions or do the assigned tasks.

You place a lot of faith in participants when you create and send out an unmoderated user test. You trust them to think aloud, respond well to the questions, understand the tasks, use the prototype correctly, and record themselves properly.

But when you pick the right situation and create an unmoderated test, you can:

  • Gain a large amount of feedback in a short amount of time. For example, I often run unmoderated tests from Friday to Monday, so the tests primarily run over the weekend with zero effort

  • Spend less money since unmoderated tests require less effort from everyone involved and are often much shorter than moderated tests (think: 10-20 minutes)

  • Get a diverse participant base as unmoderated tests are run remotely 

  • Lean out a research process, get results faster, and shorten timelines 

However, there are some downsides to unmoderated testing. I've encountered many of these, especially when I've tried to force unmoderated testing into the wrong situation:

  • Since the test is unmoderated, you have no control over how participants respond and no way to follow up, whether to explain a task or to ask why they did something in a certain way

  • Because it is so easy to join a test, you can get people who say nothing and barely participate, joining only to get the "reward" at the end, which contributes useless data

  • Technical problems can occur, such as the participant not correctly recording themselves, leading to blank sessions

  • If the participant is confused and misinterprets the task or question, there is no way to backtrack or explain, which can lead to confusing results

  • Participants often forget to think aloud, so you end up with actions but little explanation or feedback

As you can see, the most significant risk with unmoderated tests is ending with useless or confusing data. Luckily, if we choose unmoderated testing for the right situation, we can reduce this risk considerably. 

When (and when not) to use unmoderated testing

As I mentioned, there is a time and a place for unmoderated user testing. Now, I only use it when the situation calls for it, which mitigates some of its limitations. I use unmoderated user testing when I need to:

  • Test a simple, straightforward, and short prototype, usually with one clear path

  • Gather feedback on small components or design changes (kind of like an A/B test)

  • Gather surface feedback, such as what people are doing and their initial reactions versus why they are doing something

  • Identify how minor bugs or issues we've uncovered affect a larger sample of users

  • Determine the value proposition or initial reaction of a design or brand 

Most importantly, you can't just switch out moderated testing for unmoderated testing. This approach is entirely unhelpful when you need:

  • Deep feedback or understanding on a topic, since your tasks or questions need to be simple and you can't follow up during unmoderated testing

  • To test a complex product or prototype with many different paths that could confuse the user or take a long time

  • To test a very early-stage idea, as there is no moderator to explain the limitations of the prototype - make sure it is a clickable flow!

  • Strong emotional reactions or processes because participants are "in charge," and it is challenging to get people to articulate their feelings and processes without a facilitator 

Now, that being said, here are the most common goals that have led me to choose unmoderated user testing (though not all of them at once, because, remember, unmoderated tests need to be short and focused):

  1. Determine usability issues with a simple prototype

  2. Uncover the larger impact of bugs/issues on a product

  3. Understand if users can find relevant information or do simple tasks

  4. Discover if participants understand the point (value proposition) of your product

How to create an unmoderated test

If your goals align with an unmoderated test, your next step is to create one. Once you have identified your target audience, created a screener survey, and picked what you will test, it's time to hop into the fun part (at least, for me).

Writing unmoderated tasks and questions

The hardest part of an unmoderated test is writing the tasks or questions correctly, because they are crucial to the success of the test. If you send participants poorly written tasks or questions, you can get biased or confusing results. There are a few rules to adhere to when writing great questions or tasks:

  1. Give participants some background and context on why they need to use the product, such as why they would use it in the real world.

  2. If users need to input dates, locations, or particular data in a form, give them that information.

  3. When asking someone to accomplish a task, make sure they actually can. There should be a reachable "end," which satisfies the participant and helps you record whether they could complete the task.

  4. Be direct by telling them what you need them to do (or what you don't want them to do).

  5. Avoid using biased or leading language, especially language already on the interface. For example, if you want people to hit the "register" button, ask them to "sign up" instead.

  6. Break a larger task up into several smaller steps.

  7. Remind them, a few times, to think aloud the entire time.

To demonstrate these best practices, let's look through some examples.

Example one: Brand Jeans, a fictional clothing company (B2C)

User goal: Browsing for and purchasing a pair of jeans (the average price of jeans is $50)

Bad task: Find black, boot-cut jeans in your size

Better task: You're looking for a new pair of Brand jeans. Go to brandjeans.com and buy a pair of jeans for $50 or under.

There is a delicate balance between specificity and allowing the participant to act as they would in the real world. The bad task is too specific, asking people to find black, boot-cut jeans. Forcing the participant down this path may cause them to act like they are doing the task rather than how they would typically operate. If you give them the freedom to compare different types of jeans and provide a parameter ($50 or under), they can act more naturally. 

Example two: Stay Here, a fictional hotel (B2B)

User goal: Uploading photos of the hotel grounds and rooms to the Stay Here platform

Bad task: You want to upload hotel photos to the website. Log in to the platform, click on the upload photos button, upload three images, click the submit button, and tell me how it went.

Better task: You want to include a photo of a new suite on your Stay Here page. Go to the platform and add the provided photo to your page.

Hint: You would provide the photo to the participant.

The bad task is way too prescriptive and gives too many clues. The better task gives less while still telling the user what they need to do and includes contextual information, such as providing the photo they will upload.

Example three: Plant Life, a fictional plant company

User goal: Signing up to be a subscription member on Plant Life, specifically at the Plant Parent tier

Bad task: Sign up to be a subscription member

Better task: You're interested in Plant Life's membership. Purchase the Plant Parent option. Then, use the provided test credit card details at the payment step.

I'll let you work this one out, but as a hint, always think of:

  1. The context behind the task

  2. The balance between specificity and being too general

  3. Necessary information

  4. Being action-oriented 

Overall, if your study doesn't require you to explain details and there's low risk of confused participants, your study is probably a good candidate for unmoderated testing. Remember to conduct a dry run (or two!) before you send out the test to iron out any bugs or confusion!


Written by Nikki Anderson. Nikki is a User Research Lead and Instructor with over eight years of experience. She runs User Research Academy, a community and education platform designed to help people get into the field of user research or learn more about how user research impacts their current role.
