Users value convenience. They often prefer designs with intuitive functionality and simple navigation to designs with advanced functionality, as they can move through web pages or features without becoming lost, annoyed, or overwhelmed.
It all boils down to the core concept of usability. Ask yourself:
Can your target users interact with the tool easily?
Can users complete tasks quickly?
Is the tool satisfying to use?
Have you removed unnecessary complexity to leave a simple, intuitive, and effective design?
Usability gives users a good experience when navigating your product. Poor usability could harm your brand’s reputation by creating frustration and disappointment. It could even make users turn to your competitors for reasons you can’t quite quantify or understand.
Great usability doesn’t mean sticking to a playbook of universal norms and design principles. Instead, your teams should implement usability testing to refine your design and adapt it to your audience’s needs and preferences. It starts with a usability testing plan.
Usability testing is the process of evaluating the ease with which your target users (who may be different from your target buyers) can use your product.
It involves assigning users specific tasks within an app, website, or other digital product, or simply encouraging them to explore it. Depending on the type of test and how you observe it, you can gather insights about task completion, navigation, time spent on different elements, frustration cues, and potential challenges. These insights are highly valuable, as they can guide adjustments and help resolve inefficiencies.
A usability testing plan is the internal documentation and planning that aids project management. It guides the test, setup, and preparation, as well as how you assess and translate the results into actionable insights. Here are the most important parts of a usability testing plan:
Objective: set the specific goal or area of insight you want to focus on.
Timeline: forecast the schedule for every phase of preparation and implementation. Establish deadlines.
Target audience: identify who represents your target user group and consider how you will recruit them.
Test specifics: outline tasks, activities, and the environment in which you want test users to operate.
Analysis: determine the most valuable metrics, the data types you will gather, and how you will use them.
Your usability testing plan provides a framework for maximizing the value of your usability testing. For example, you can ensure that the assigned tasks provide relevant data by aligning your goals, data collection, and analysis processes.
You can also use the plan to bring stakeholders on board, ensuring the objectives are top priorities for your design and/or sales teams. Aligning the process’s elements from the start helps you avoid delays and muddled results.
Just as importantly, it standardizes your usability testing processes. Different stakeholders will handle different aspects of the project—the plan ensures your approach is uniform and cohesive. You’ll also have a clear framework for conducting usability tests in the future—whether it’s the same project or another one entirely.
Choosing the right type of usability testing for your project is critical in creating a usability testing plan that works for you. Consider these different types:
Moderated vs. unmoderated tests: facilitated sessions with a trained moderator guiding participants vs. independent testing without direct supervision
Qualitative vs. quantitative: gathering feedback such as reactions and opinions vs. collecting numerical data
Remote vs. in-person: conducted online from any location vs. face-to-face sessions in a controlled environment
Explorative vs. comparative: open exploration for freeform feedback vs. tests that compare specific results across 2–3 versions of a digital product
Desktop vs. mobile: testing on larger screen devices vs. smartphones and tablets to ensure a consistent and satisfactory experience
Guerrilla vs. longitudinal testing: rapid and informal testing vs. extended, in-depth usage over time
Accessibility vs. standard usability testing: testing with users who have additional needs vs. testing with general users
Low-fidelity vs. high-fidelity prototyping testing: testing basic wireframes or sketches vs. a near-final design
You can mix and match different types. For example, you might run a moderated qualitative test either in person or remotely, or an unmoderated test to gather quantitative feedback.
Creating a usability testing plan is a methodical process. Following these 11 steps enables you and your team to create a useful, straightforward plan.
Usability tests have a broad focus: uncovering friction and pain points. You’ll need to define what you want to discover and the areas of usability you want to investigate. With specific goals, you can choose the best-aligned testing format. Here are some examples:
Measure user satisfaction: quantitative and explorative tests align with this goal, gathering feedback on satisfaction and the overall experience.
Prioritize features or fixes: identify usability issues that most impact user experience and prioritize solutions. You might set moderated tests across multiple devices to zero in on specific issues.
Explore alternative designs: test multiple design options to identify the most effective solution. Comparative tests work best here.
Benchmark usability: compare usability metrics against competitors or previous product versions. You may find that longitudinal tests are the best choice depending on the specific uses you’re assessing. Replicate the conditions and methodologies across subsequent tests to get consistent insights over time.
Test navigation and information architecture: comparative tests are well-suited for ensuring users can find content or features easily.
Test specific user scenarios: standard user testing, accessibility testing, and tests with specific user groups can help validate whether the product meets the needs of particular users or use cases.
This is the plan’s project management phase, where you outline logistical concerns and define your test’s concrete requirements. For example, you might set deadlines for the test and result analysis and a process for coordinating with the design team to action findings.
Whether the test occurs in person on your premises or virtually is a logistical concern. You must plan for the required equipment and space for an in-person test.
Don’t worry if you don’t have specifics during this phase. This process will give you a good idea of the tools and other logistics you need to define in detail later on.
At this stage, the generalizations from step two need to become more specific. Consider what equipment your test users need, what data analysis and communication tools you need, and what programs can assist with the test itself. These might include project management software like Asana or Notion, video conferencing and recording tools like Zoom or Loom, documentation platforms like Slab, and AI-based analytics tools like Dovetail for pulling insights from raw data.
As you’re finalizing the test, it helps to consider the options as binary choices:
Will you invite participants to a testing area for a fully supervised test on your hardware under your preferred conditions? Or will you allow remote testing, where users assess your design using their own hardware in their preferred work environments?
You might choose quantitative, qualitative, or both, depending on the insights you want to collect.
For example, you might time how long it takes users to complete each step of a task or count how many clicks it takes to navigate through a design. Alternatively, you might survey users during and after the test to understand their feelings of frustration and uncertainty. Longer tests may combine both (though it’s important not to make the test so long and demanding that it creates disruptive “noise” in your data).
Moderators are useful in interviews or interactive tests, but they can change user behavior or give participants an easy “out” at frustration points. Both moderated and unmoderated test types have pros and cons, so you’ll need to account for how a moderator can clarify or distort the collected data.
How many participants do you want to test? Your chosen method will play a role in determining your sample size.
Interviewing dozens of participants can yield many insights, but interviewing hundreds or thousands brings diminishing returns. That said, testing 10,000 participants is not necessarily more difficult than testing 100, provided you can guarantee all the participants are in your target audience.
Other factors to consider when finalizing your sample size are:
The cost of implementation per participant
Subdividing user groups into different pools
The value of starting small and building your skills
Parameters such as population size, margin of error, confidence level, and variance (an online sample size calculator can help, or see the sketch below)
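For illustration, here’s a minimal Python sketch of one standard approach to the last point: Cochran’s formula with a finite population correction. The function name and defaults are assumptions for the example, not a prescribed method; for high-stakes decisions, use a proper statistical tool.

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                confidence: float = 0.95, proportion: float = 0.5) -> int:
    """Estimate a required sample size with Cochran's formula,
    then apply a finite population correction."""
    z_scores = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}  # common confidence levels
    z = z_scores[confidence]
    # Cochran's formula for an effectively infinite population
    n0 = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    # Correct for a finite population
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# For a user base of 5,000 at a 5% margin of error and 95% confidence:
print(sample_size(5000))  # 357
```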
What tasks or activities best align with your test goals and the type of data you need to obtain?
Broadly speaking, there are two task types in usability testing:
Specific tasks: these have a correct completion method
Exploratory tasks: these don’t have one right answer
Analysts can get valuable insights from both types of tasks. With specific tasks, you can identify failure points or potential sources of confusion. With exploratory tasks, you can learn how users interact with the design overall and see which types of behaviors are common and/or valuable.
When writing your tasks, choose your evaluation metrics. Each metric will help inform the monitoring software and analytics tools you should invest in (a sketch for computing the quantitative metrics follows the lists below).
Quantitative evaluation metrics include:
Time spent on tasks
Incidents of critical and non-critical errors
Percentage of users who complete tasks without errors
Percentage of completed tasks
Qualitative evaluation metrics include:
Likes
Dislikes
Recommendations and suggestions
Concerns (before and after user testing)
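To make the quantitative metrics concrete, here’s a minimal Python sketch that rolls per-participant task results into time on task, error counts, and completion rates. The `TaskResult` structure is a hypothetical format for the example, not something your testing tools will produce out of the box.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    participant: str
    completed: bool
    seconds: float  # time spent on the task
    errors: int     # critical and non-critical errors combined

def summarize(results: list[TaskResult]) -> dict:
    """Aggregate individual task results into the quantitative metrics above."""
    total = len(results)
    completed = [r for r in results if r.completed]
    error_free = [r for r in completed if r.errors == 0]
    return {
        "avg_time_seconds": sum(r.seconds for r in results) / total,
        "completion_rate": len(completed) / total,
        "error_free_rate": len(error_free) / total,
        "total_errors": sum(r.errors for r in results),
    }

print(summarize([
    TaskResult("P1", completed=True, seconds=42.0, errors=0),
    TaskResult("P2", completed=True, seconds=75.5, errors=2),
    TaskResult("P3", completed=False, seconds=120.0, errors=4),
]))
```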
Perform a test run to reveal logistical concerns and resolve them. You can carry out a rehearsal with colleagues to run through the technical elements. Or, you can conduct a pilot test with a small percentage of your test pool to reveal potential areas of confusion and unclear wording.
You have several options for recruiting participants. You may want to request participants from your core client base so they can see the improvements you’re making, or you may offer users the chance to sign up. You can offer incentives for participation.
Ensure you leave plenty of time to get the participants you need and arrange the schedule. Also, consider using a platform to streamline the recruitment process, including your search for participants and keeping everyone organized.
Create a template that documents, structures, and communicates your findings and insights. Create it ahead of time so that it acts like a blank worksheet for your data analytics team to complete. They will know what insights they need before getting started, and you can do a review round to ensure the test is likely to provide relevant results.
It’s time to take action and perform checks. Assign team members specific implementation tasks, send invitation emails, and ensure the tools and interview rooms are reserved. Add any other steps relevant to your organization, such as arranging NDAs or getting final approval from directors.
We recommend adhering to these best practices, even if you have a more custom approach to the planning process:
Workshop the wording for tasks and interview questions to remove as much bias as possible. Consider likely follow-up questions or have your moderator follow a specific usability testing script that curbs bias as much as possible.
Focus on realistic tasks that give you the most insight for a general audience or your core clientele.
Hold a review session to revise the plan, documents, and testing protocols when the project is complete. Your usability testing will improve over time.
The right tools help you identify what’s working, what’s not, and how to improve—without guesswork.
Dovetail’s free usability testing template provides a clear framework for efficiently collecting and analyzing user feedback. Start making confident, informed design decisions today.