User-centered design is the process of creating products with human behavior in mind. The process ensures that products are easy to use, solve key issues, and provide seamless experiences.
As part of user-centered design, people typically test products and provide feedback. This helps designers optimize their creations to ultimately delight users.
The process is known as usability testing.
Usability testing is a technique designers and product developers use to create people-centered products and services. The process ensures products are fit-for-purpose, simple to use, and better than alternatives.
The technique also reduces speculation. Rather than assuming they know how users will behave, designers gather evidence of whether a product is genuinely useful.
Usability testing typically has four main goals:
Confirm whether users can easily perform intended tasks
Identify issues, challenges, and areas of friction
Discover opportunities for improvements
Learn about user behavior and needs
Typically, usability testing is for websites, apps, and software. Product developers might also use it to test physical products, services, and events.
Usability testing involves taking a small group of intended users through a series of tests. The tests generate feedback for designers and developers to iterate and optimize their creations.
It commonly involves three main groups:
A neutral moderator who runs the tests
3–5 participants who perform a series of exercises in individual sessions
UX and UI designers (or researchers on their behalf) who observe the participants completing the tests
A moderator normally asks participants to move through a product or feature while thinking aloud. A hands-off approach is crucial: the moderator takes notes and allows participants to make mistakes, because interfering or helping them would mask areas for improvement.
The key premise during a user test is to replicate a real-life situation closely. This helps you understand what a user would naturally think and do with your design.
The design team looks for clues to decipher how users experience the product. Participants’ behavior, facial expressions, verbal feedback, and number of failures all show whether the product is fit for purpose.
Observation also helps the design team gain essential feedback to iterate the product and better satisfy their intended users.
Usability testing may occur at multiple stages of the design process, such as:
During the conceptual phase
After the creation of prototypes
When a beta version of a product is ready
When the team is optimizing the final design
Typically, it’s better to complete testing early on as the results inform later stages of the process.
Usability tests allow design teams to gain unbiased information from real-life users. This prevents team members from making too many assumptions about what users may want and need from products. It increases the chance of creating truly people-centered designs.
Testing also highlights issues and roadblocks in products, so your team can find solutions before developing the full product.
For example, a product may have too many onboarding screens, so users become bored and confused when signing up.
Armed with this information, designers can streamline the onboarding process. One solution is single sign-on with Google or Facebook, so users don’t have to enter their personal information manually.
An advantage of early-stage user testing is that it prevents businesses from investing too much money and time upfront. If a product isn’t working or is full of issues, you can pause the project or make major changes before you do too much work.
If testing goes well, it can bolster a business case for further investment into the project.
Usability tests draw on two types of data: quantitative and qualitative. Considering both ensures the information you gather during testing is relevant and actionable.
Quantitative data involves measurable metrics, such as:
The time users take to complete tasks
The number of clicks they have to make
The number of failures and successes they have
This information shows how quickly users move through processes and where they are likely to become frustrated or fatigued.
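As a minimal sketch, these quantitative metrics are straightforward to summarize once each session is recorded. The field names and values below are hypothetical, not a standard schema:

```python
# Hypothetical records from five moderated sessions (illustrative values).
sessions = [
    {"participant": "P1", "task_seconds": 74, "clicks": 12, "completed": True},
    {"participant": "P2", "task_seconds": 121, "clicks": 19, "completed": False},
    {"participant": "P3", "task_seconds": 65, "clicks": 10, "completed": True},
    {"participant": "P4", "task_seconds": 98, "clicks": 15, "completed": True},
    {"participant": "P5", "task_seconds": 143, "clicks": 22, "completed": False},
]

# Three common quantitative summaries: completion rate, mean time on task,
# and mean number of clicks.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time = sum(s["task_seconds"] for s in sessions) / len(sessions)
avg_clicks = sum(s["clicks"] for s in sessions) / len(sessions)

print(f"Completion rate: {completion_rate:.0%}")  # prints "Completion rate: 60%"
print(f"Mean time on task: {avg_time:.0f} s")     # prints "Mean time on task: 100 s"
print(f"Mean clicks: {avg_clicks:.1f}")           # prints "Mean clicks: 15.6"
```

Even a small table like this makes it easy to compare rounds of testing after each design iteration.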
Qualitative data focuses on overall ease of use for the person, taking into account facial expressions, anecdotal feedback, and perceived difficulty. It recognizes that users want to be delighted by products: they may move through a process quickly, for example, yet still find it unsatisfying.
There are three main usability testing methods, and it’s helpful to define which you intend to use before planning any tests.
In-person tests are pre-organized, live, and typically the most formal, with a moderator usually present.
An advantage of in-person tests is that key team members can observe how the participants react and make helpful notes.
The disadvantage is that participants may be more engaged and focused than in the real world, which may skew results.
Remote tests are conducted by phone or video call. Testing remotely lets you observe users in their own environment, where they may be more likely to act naturally.
Guerrilla testing involves asking passersby, people chosen at random, or even colleagues to try out designs and offer feedback. It is the least formal type of testing.
While it can be fast to organize, it lacks structure and produces the least reliable results. The biggest problem with guerrilla testing is that you’re unlikely to reach people who represent your ideal persona, which makes the data you gather much less useful.
Guerrilla testing tends to be most valuable at very early design stages, when you may not have a budget for more formal testing.
To conduct valuable, informative usability tests, it’s essential to:
Set out a plan
Define key metrics for success
Choose your participants carefully
Analyze the most important data
Stages of a helpful usability test include the following:
Before you start, determine what you're testing. Defining this early is essential. It prevents participants from becoming overwhelmed with test questions and ensures a clear outcome.
If a product is large, a team will likely test it in small stages. This may be one feature, an aspect of the product (such as onboarding), or a series of screens.
Defining exactly what you’re testing will ensure relevant test results.
Clear goals and criteria for success are also important. This means that one anecdote doesn’t override the data you discover.
You need to take qualitative and quantitative data into account to ensure results are useful and relevant for the test.
Choosing an impartial, experienced moderator is vital. This ensures that team members can’t be biased, help, or push users in a certain direction. It means the design team’s assumptions don’t impact the results. The most powerful takeaways from user tests often come from unprompted user actions.
A neutral moderator also lets the team observe how users behave when left to work through tasks without assistance.
To gain accurate results, you’ll need to choose a team of relevant participants. Your participants should represent your intended users.
It’s helpful to consider your target market and have representative users to see how they behave. For example, if your target market is girls aged 13–16, participants from a senior center wouldn’t help you learn anything about your target user.
Instead, recruiting a cohort from this specific age group will give a much clearer picture of behavior, likes, and dislikes when it comes to the product.
Researchers often overlook the fact that smaller groups can provide clearer information than larger ones, which tend to overcomplicate the results.
Experts recommend running usability tests with no more than five participants per round; you’ll often see patterns emerge after just three sessions.
Research has shown that returns diminish sharply as you add more testers. Beyond five participants, you may simply complicate the tests without gaining new information.
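The diminishing returns described above are often illustrated with the Nielsen–Landauer model, which estimates the share of usability problems a study uncovers if each participant independently finds a given fraction of them. The 31% per-participant rate below is the average those researchers reported; treat this sketch as a rough illustration, not a prediction for any specific study:

```python
def problems_found(n: int, rate: float = 0.31) -> float:
    """Expected share of usability problems found with n participants,
    assuming each participant independently finds `rate` of all problems."""
    return 1 - (1 - rate) ** n

# Returns flatten quickly: five participants already cover most problems,
# and each additional participant adds less than the one before.
for n in (1, 3, 5, 10):
    print(f"{n} participants -> {problems_found(n):.0%} of problems")
```

With the 31% rate, five participants are expected to surface roughly 85% of problems, which is why small rounds of testing between iterations are usually more efficient than one large study.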
You can’t know exactly what a participant is thinking. It’s helpful for your moderator to ask them to verbalize or ‘think out loud’ as they go through the tests.
This informs the whole team of their rationale and highlights potential roadblocks you may have yet to notice.
It’s also essential to observe but not help participants as they progress. This prevents you from skewing the results.
It’s helpful to have a series of questions to ask to deepen your understanding of how participants feel about the product. These can get to the crux of why something is or isn’t working.
But every product is unique, so no script can cover all the essential questions. Instead, develop a relevant script for your designs that includes the fundamental things your team needs to know.
Some questions you might ask participants include:
If you could change one thing about the product, what would it be?
What aspects of the product did you have the most difficulty with?
How would you use the product?
Why wouldn’t you use the product?
What did you expect the product to do?
How satisfied are you with the product?
Once you’ve run the tests, collate the quantitative and qualitative data. Typically, you’ll put this into a report and feed it back to key team members.
This will inform the next steps in the process, whether that’s minor tweaks or larger reworks.
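As a minimal sketch of that collation step, qualitative observations can be tagged by theme and severity and then tallied, so the report highlights where issues cluster. The observations, themes, and severity labels here are hypothetical:

```python
from collections import Counter

# Hypothetical observations: (participant, theme, severity, note).
observations = [
    ("P1", "onboarding", "minor", "hesitated on the second signup screen"),
    ("P2", "onboarding", "major", "abandoned signup at the email step"),
    ("P2", "navigation", "minor", "could not find the settings menu"),
    ("P3", "onboarding", "major", "re-entered payment details twice"),
]

# Tally how often each theme appeared across sessions; frequent themes are
# strong candidates for the next design iteration.
theme_counts = Counter(theme for _, theme, _, _ in observations)
print(theme_counts.most_common())  # prints [('onboarding', 3), ('navigation', 1)]
```

Pairing these counts with the quantitative metrics gives stakeholders both the "how often" and the "why" behind each recommendation.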
Knowing what to test is the essential first step in running usability tests. Testing too much can overwhelm users. Still, it’s helpful to get as much information about a feature or product as you can while you have participants together.
One rule of thumb is to test a full “train of thought”: a complete task flow you’d expect a user to follow on your platform.
For inspiration, here are some examples of usability tests:
Testing a checkout flow may include entering payment information and reviewing the confirmation screens to make a mock purchase.
To test how simple it is to complete the task, you may ask users to open a meditation app, complete a mood check-in, and choose a relevant guided meditation.
You may ask users to track a mock project, change its status, and update it with comments.
To check the ease of signing up, ask users to follow the steps to onboard to a payments platform, including adding users and mock financial details.
It’s helpful to consider some critical questions for your team once you’ve completed the tests.
These will help you reflect upon the tests and develop truly human-centered products. These could include:
Did participants find the product simple to use?
Is there a simpler way to perform these tasks?
How can we remove roadblocks and friction?
Is this product better than alternative options?
Were users delighted by the product?
Allowing users to test your products uncovers a fresh perspective and provides information that you may never otherwise discover.
Usability testing uncovers how the target audience would use the product, feature, or service and highlights opportunities. You’ll see where you can reduce friction, create more seamless experiences, and ultimately delight your users. After all, that’s what people-centered design is all about.
Last updated: 24 June 2023