
A complete guide to UX preference testing

Last updated: 26 February 2024

Author: Claire Bonneau


Preference testing is a must if your brand is about to launch something new, especially if you’re entering a new market or doing a comprehensive rebrand.

Always wanted a crystal ball to see what your users want? Well, that’s preference testing. It gives you insider information about your audience’s preferences, opinions, and potential reactions to your new designs.

This key information can save you so much valuable time and energy if you use it early in the design process.

If your team is gearing up for a big logo, website, or overall brand redesign, this article is a great place to start. Let’s look at preference testing questions, A/B testing, biases, and more. 

What is preference testing?

Preference testing involves showing a set of design options to a group of people from your target audience to understand their preferences, opinions, and reactions to each option.

This research method aims to collect valuable insights about how the new designs land with your target audience. It’s ideal for the early stages of a redesign or rebranding process.

Your audience’s nuanced feedback can shape your product as you consider their emotions, perceptions, and overall thoughts on the design.

You don’t just have to use it once. Regular preference testing allows your team to dig deeper beyond the initial information about the group’s design opinions. They can ask more specific questions to direct your future work processes. 

You can use qualitative follow-up questions to uncover better insights about your target audience. These can improve the user experience and your design practices.

Questions like, “Why do you feel this way about this example design?” and “What about this design speaks to you specifically?” are great examples. 

Preference testing vs. A/B testing

If you’ve worked in marketing, preference testing will sound very similar to A/B testing, as they’re both audience-based insight collection methods.

Despite their similarities, you should be aware of some critical differences between the two:

Preference testing

Preference testing gathers insights into user preferences about different design options. 

During preference testing, participants see various design variations, and researchers encourage them to share their subjective preferences, opinions, and reactions. 

The goal is to identify which design elements resonate most strongly with the target audience. This helps designers make informed decisions about refining and optimizing their designs and user experience systems.

A/B testing 

Alternatively, A/B testing (also known as split testing) is a quantitative method that compares two versions (A and B) of a design or interface. 

The goal is to determine which option performs better in terms of predefined metrics, such as click-through rates, conversion rates, or user engagement. 

Unlike preference testing, A/B testing focuses on objective, measurable outcomes and aims to identify statistically significant differences in user behavior.

Designers often use it to optimize specific features or elements based on data-driven insights, providing a more quantitative and performance-based approach to design experimentation.
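To make “statistically significant” a little more concrete, here’s a minimal sketch of how a team might compare click-through rates between two variants using a standard two-proportion z-test. The numbers are invented for illustration, and the test shown is a generic statistical approach rather than the method of any particular A/B testing tool:

```python
# Minimal sketch: two-proportion z-test on hypothetical A/B click-through data.
# All counts below are invented for illustration.
from math import sqrt
from statistics import NormalDist

clicks_a, visitors_a = 120, 2400   # variant A: 5.0% click-through
clicks_b, visitors_b = 156, 2400   # variant B: 6.5% click-through

p_a = clicks_a / visitors_a
p_b = clicks_b / visitors_b
p_pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)

# Standard error under the null hypothesis that both variants perform the same
se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed

print(f"z = {z:.2f}, p = {p_value:.3f}")
# A p-value below your chosen threshold (commonly 0.05) suggests the difference
# in click-through rate is unlikely to be down to chance alone.
```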

The importance of proper preference testing

Preference testing allows your team to avoid significant design pitfalls and better connect with your chosen audience.

Put simply, we’d all rather design something based on what we know the target audience likes or dislikes than take a stab in the dark. Preference testing is key to uncovering insights that reduce your risks and improve your overall design reception.

Examples of ways that proper preference testing can improve your design process include:

  • Enhancing your team’s ability to make user-centric decisions with valuable insights

  • Allowing for iterative design refinement and minimizing the risk of wasted resources

  • Eliminating design ambiguity and providing clarity on user-preferred options

  • Increasing user satisfaction, retention, and recommendations

  • Improving your early-stage design planning to reduce costly revisions

When to run a preference test

Because preference testing informs early-stage design decisions, it’s important to conduct it early enough to capitalize on its full benefits.

Unsure of the best time to conduct a preference test? Here are a few guidelines to give you the best chance at maximizing its power.

As a best practice, your team should conduct a preference test for a new design before:

  • Committing to a single design

  • Investing significant time and resources in finalizing your design

  • Mentioning a redesign on any social media channels

  • Bringing a redesign to your key stakeholders for feedback

How to conduct a preference test

With all this being said, the most important part of preference testing is creating a cohesive and impactful test for your target audience. The quality of your results hinges on this crucial stage of the process.

If your team is looking to run a preference test soon, follow these steps to create a test that gives you the insights you need.

Include multiple design options

To start, you need multiple design options ready to showcase to your chosen audience. Ideally, the designs should be clearly different, as people can easily overlook minute differences.

When you conduct the test, each participant will look at the designs individually, in a random order. You’ll also give them time to analyze every option and answer your questions. 

As a best practice, you should include two or three design options in every preference test. This number of designs is ideal because it:

  • Reduces the mental burden on your audience to sort through large numbers of designs

  • Encourages your team to pick your top design options

  • Makes design ranking straightforward

Pick the right questions

Preference testing aims to collect valuable data to pick the best design option to move forward with. You won’t be able to make the right decision if you don’t ask the right questions.

It’s best to avoid generalized questions like, “Which design is the best?” because the answers are difficult to turn into helpful insights.

Why did the person think the design was the best? Was it based on a personal opinion or design merit? Did they just randomly pick because they couldn’t choose?

To collect more helpful data, here are some preference testing questions we recommend:

  • Which design option stands out to you in terms of clarity or ease of understanding?

  • Which design option do you find most visually appealing?

  • Which design speaks to [insert target design goal] the most?

  • Which color scheme, typography, or visual style resonates most with your preferences?

  • If you had to rank the designs from best to worst, what order would you put them in?

  • Does any design option evoke a particular emotional response or feeling?

These more specific questions allow you to collect nuanced data, enabling educated decisions about which design to commit to.

Ask follow-up questions

At the end of your preference test (or in a follow-up with particular participants), you can ask follow-up questions to collect additional valuable insights into the reasoning behind their decisions.

Follow-up questions are often open-ended, with an associated text box, and aim to glean qualitative data about the designs.

Examples of helpful follow-up questions include:

  • What stood out about this design option to you?

  • What are your favorite aspects of this design?

  • If you could change one thing about this design, what would it be and why?

  • What three words would you use to describe this design?

This additional data helps your team understand how your participants approached their answers, allowing you to select the top design.

Analyze your data

Finally, with all of the data at your fingertips, you can look for patterns and insights that will act as your guiding light when finalizing your new design.

With customer insight software like Dovetail, your team can collect helpful information, including:

  • Which design was selected the most for positive reasons

  • Common reasons why participants did not like a design

  • Possible concerns with your design options that you can address in the final version
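If your survey tool lets you export raw responses as a CSV, a short script can handle the first-pass tallying before the qualitative read-through. This is only a sketch: the file name and column names (preferred_design, reason) are assumptions, so adjust them to match whatever your tool actually exports:

```python
# Sketch: tally preference-test responses from a hypothetical CSV export.
# The file name and column names (preferred_design, reason) are assumed for illustration.
import csv
from collections import Counter, defaultdict

votes = Counter()
reasons = defaultdict(list)

with open("preference_test_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        design = row["preferred_design"].strip()
        votes[design] += 1
        if row.get("reason"):
            reasons[design].append(row["reason"].strip())

for design, count in votes.most_common():
    print(f"{design}: {count} votes")
    for reason in reasons[design][:3]:   # skim a few open-ended reasons per design
        print(f"  - {reason}")
```

Counting votes is only the starting point; the open-ended reasons alongside the tallies are what turn a winner into a usable insight.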

You’ve gathered results from your preference test, but it’s important to remember that they’re not always entirely accurate. The number of participants, their opinions, and any potential bias can impact your results.

Biases in preference testing to watch out for

As with any survey or test involving a group of people, it’s very easy for bias to creep into your results. Bias is an unintentional skew toward one outcome that may not accurately reflect the whole group’s values and opinions.

Bias can significantly skew your preference testing results if you don’t keep it at bay. It could lead to creating a design that does not fully resonate with your user base, which can impact user experience, engagement, and the general sentiment about your brand.

While removing 100% of the bias from preference testing is tricky given the nature of the process, here are a few types of testing bias to consider before your next test, along with ways to avoid them:

Recency bias

Recency bias is the tendency for people to place higher value or importance on an image or experience they’ve seen more recently than the others.

This type of bias is difficult to mitigate. The best defense is to increase the time between the initial viewing and the decision, though this can be tough to arrange for fast-turnaround preference testing.

Recency bias can be particularly problematic for preference tests with many options for the participant to sort through. Large numbers of options increase the mental burden, causing people to pick an option they saw recently instead of the one they most closely align with.

To reduce the impact of recency bias in your tests, we recommend using two to three design options within your test. You can also explore delayed preference testing (in-person or virtually) over multiple days, allowing your participants time to process their opinions.

Order effect

The order effect arises when people evaluate several options in sequence.

In most cases, people are more likely to favor the first or last option they see. We’re wired to assume the options in these positions carry a higher value than those in the middle.

To reduce the impact of this bias, researchers can use order variation to show different sequences to participants, decreasing the severity of order bias throughout the test.
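As a quick illustration of what order variation can look like in practice, here’s a small sketch that isn’t tied to any particular testing tool (the design names are placeholders): give each participant an independently shuffled sequence, or rotate through every possible ordering so each position is used equally often.

```python
# Sketch: two simple ways to vary presentation order across participants.
# Design names are placeholders for illustration.
import random
from itertools import permutations

designs = ["Design A", "Design B", "Design C"]

def random_order(options):
    """Give each participant an independently shuffled sequence."""
    order = list(options)   # copy so the original list stays untouched
    random.shuffle(order)
    return order

def counterbalanced_order(options, participant_index):
    """Cycle through every possible ordering so each appears equally often."""
    all_orders = list(permutations(options))
    return list(all_orders[participant_index % len(all_orders)])

for i in range(3):
    print(f"Participant {i + 1}: {counterbalanced_order(designs, i)}")
```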

Substitution effect

Substitution bias occurs when a participant does not have a direct answer to the question asked. Instead, they answer a tangentially related question.

For example, if you ask someone, “How does that design make you feel about the brand?” they may not have an answer. Instead, they may answer a simpler, related question, such as “How do you feel about this brand?”

While the differences in answers may be small, consistently receiving data for slightly different questions can lead to significant bias. And that will impact the final decision your team makes about your redesign.

To combat this type of bias (which is tough to do perfectly), we recommend being intentional about your question wording.

Can you make your queries clearer and more accessible? If so, take the time to rewrite and adjust them, as this can significantly impact the accuracy of your data.

The decoy effect

The decoy effect occurs when people see a group of closely related options, such as three very similar designs where the final design combines elements of the previous two.

The effect is most prominent when the person feels their choice “has more value” than the other options. 

In the above example, people will probably pick the final option, as they likely perceive it as the “best of both worlds” rather than assessing it purely on its own merit.

To reduce the impact of the decoy effect, we suggest selecting unique design ideas for your preference test. 

The goal is to collect information about what design choices do and don’t work. Ideally, the designs should be different enough that your participants do not just select the option that “feels like the slightly better” option of the group.

Aesthetic-usability effect

Finally, the aesthetic-usability effect describes people’s tendency to choose designs that are pleasant to look at, even if they’re more difficult to use or understand.

Collecting data with a strong aesthetic-usability bias can result in your team creating a new design that is less accessible or usable for your target audience.

Things like the colors and images you use significantly impact how your participants perceive your design and their overall experience. It’s crucial to be mindful of this concept when creating your options.

To reduce the impact of this effect, we recommend focusing on simplicity and usability over aesthetic design. We’re not saying you should test unpolished work, but you want to ensure that usability is the focus of every design to beat the bias.

Using preference testing to improve your next redesign

Preference testing is a staple practice of high-quality design. Is your company doing enough to listen to your target audience’s opinions?

During any big product, service, or brand redesign, it’s essential to consult your users to collect insights about their feelings and reactions to your planned changes.

Preference testing should be a natural part of your redesign process moving forward. It’s one of the best ways to protect your user experience and save valuable time and resources.

Changing your logo or doing a complete brand overhaul? The detailed information you can collect from a well-conducted preference test will be your guiding light for improvements. This ensures that your new work hits the mark with your desired audience.

Without preference testing, your team is going into a redesign completely blind to your consumers’ opinions (which we do not recommend for long-term success).

Use this article as your guide to getting started with preference testing. The insights you’ll collect from this initiative will be well worth the time and effort you put into it.
