Ultimate Guide to MaxDiff Analysis: Examples, Methods, Tools

What is MaxDiff Analysis?

MaxDiff Analysis measures people’s priorities by repeatedly asking them to choose the best and worst option from sets of 3-6 statements. Each time the respondent answers, a new set of statements drawn from the full ranking list is shown.


^ The example above shows a typical MaxDiff question with a set of written options and toggles representing “Best” and “Worst” on each side.

How is MaxDiff different from other types of survey questions? (Why you should avoid Rating Questions)

MaxDiff is a comparative ranking method: it forces respondents to compare options and choose between them according to their personal preferences. This means MaxDiff produces “continuous” data, where answers spread across a full range of scores from highest to lowest. Rating questions (e.g. rating a statement from 1-5 stars), on the other hand, produce “discrete” data, meaning responses pool around a handful of fixed values (1 star, 2 stars, and so on).

This pooling is bad news for researchers who are trying to compare and rank a long list of options. We end up with respondents’ answers looking like this:


^ This is exactly what we want to avoid. The forced-comparison nature of ranking questions allows us to detect even minor differences in people’s preferences.

The gaps between people’s preferences are actually the most important output of our ranked results. If we use rating questions, we lose all this rich data — people’s top priorities all get lumped in together under the 5/5 response. To avoid this, we want to make sure we’re using a research format that forces participants to compare options against each other.

When is MaxDiff research used?

MaxDiff is all about understanding people’s priorities, which is a central aspect of a bunch of different research scenarios, such as:

1. MaxDiff for Prioritization: Ranking problem statements to figure out which pain point is having the largest negative impact on your key customer segment.

2. MaxDiff for Sales and Marketing: Comparing messaging ideas or product claims to see which resonates best with your target audience.

3. MaxDiff for Pricing Research: Identifying which features deliver the most value to customers on a specific pricing tier so that you can improve feature discovery or upgrade messaging.

4. MaxDiff for Customer Segmentation: Mapping the preferences of a population and then comparing the priorities of different customer segments (i.e. needs-based segmentation).

Real Example of MaxDiff Analysis

This is a real example of a time I used comparative preference ranking for a B2B research project. Our product had been launched 6 months prior and had just lost our only subscription customer. We did a LOT of customer discovery interviews to get to that point, so before giving up, we wanted to figure out one thing — had we picked the wrong problem from our user interviews to focus on?

To answer this, we created an initial list of 20 problem statements (we ended up adding in an extra 25 that we collected from respondents mid-survey) and we used comparative ranking to rank the problems from highest to lowest priority. Within 2 hours of starting, our key problem was ranking dead last!

Rather than give up, we realized that many of the highest-ranking problems required very similar solutions to the one we had set out to solve, so we did a quick rewrite of our landing page and product onboarding experience the following day. By the following week, we had multiplied all our most important metrics and landed multiple paying customers (one of whom later became an angel investor!). You can read the full story and step-by-step walkthrough of that research project here.

This story actually used Pairwise Comparison instead of MaxDiff, but we likely would’ve reached a very similar conclusion if we had used MaxDiff instead (pairwise ranking was just easier for us to implement and quicker for participants to vote on).

What are the advantages of MaxDiff Analysis?

1. Comparative: It can be hard to rate a list of options individually when all those options are similar. But because MaxDiff is comparative, it gives respondents an easier way to differentiate between options and show which ones are their favorites — giving researchers better data about people’s preferences.

2. Quantitative: Numerical data makes it a lot easier for researchers to summarize their results, perform additional analysis, and create visual graphics that convey their insights. Numerical data tends to be the language of decision-making at the best organizations, so formats like MaxDiff and Pairwise Comparison are great for translating qualitative insights into quantitative stats.

3. Intuitive: While rating scales look simple, they actually require respondents to consciously evaluate and score items across multiple attributes simultaneously — which is a lot more work than it may seem. On the other hand, MaxDiff respondents intuitively understand how the simple most/least columns should be used.

4. Automated: The structured nature of MaxDiff voting means that collecting and analyzing MaxDiff data is just as easy with 10 respondents as it is with 10,000. This allows you to run MaxDiff surveys at scale, giving you better data for advanced analysis methods like segmentation.

What are the disadvantages of MaxDiff Analysis?

1. Complex: Because MaxDiff compares multiple options at the same time, the algorithms used to analyze votes and score the options tend to be quite complicated (e.g. Bayesian statistical models). This can make it difficult to explain to stakeholders how the scores were calculated. Other comparative ranking methods like Pairwise Comparison produce similar data from simple formulas like win-rate analysis.

2. Expensive: MaxDiff is not available on most survey tools as it is considered a complex research method. You’ll need to purchase an advanced tool like Conjointly, which costs $1,795 per user per year. Alternatively, you can use a survey tool that offers Pairwise Comparison completely free (like OpinionX).

3. Burdensome: The more options you include in each MaxDiff voting set, the more mental work is required from respondents, increasing the likelihood that they will quit mid-survey. One solution is to switch from MaxDiff’s “multiple comparisons” approach to single pairwise comparisons instead, which show only two options at a time instead of 3-6.

4. Relative: MaxDiff estimates the preferences of items relative to each other but doesn’t tell us if our list of options is a good or bad batch from an absolute perspective. You should always do qualitative research to make sure your voting list includes enough relevant options.

^ Advantages and disadvantages of using MaxDiff analysis for comparison-ranking research surveys

Sort participant opinions in one click based on consensus or importance on OpinionX.

4 Alternatives To MaxDiff Analysis

Here are four alternative methods to MaxDiff Analysis that also use comparison-based approaches:

  1. Pairwise Comparison

  2. Rank Ordering

  3. Points Allocation

  4. Conjoint Analysis

Alternative 1: Pairwise Comparison

Pairwise Comparison ranks a list of options by comparing them in head-to-head pair votes. By analyzing the number of pairs that a ranking option “wins”, you can measure people’s preferences from best to worst option. Pairwise Comparison works almost identically to MaxDiff — every pair results in a “best” (the winner) and “worst” (the unselected loser) result. Best of all, Pairwise Comparison is a more widely available research method. For example, you can create a free Pairwise Comparison on OpinionX with unlimited ranking options (including image ranking).

^ Example of Pairwise Comparison voting and results on an OpinionX survey
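The win-rate analysis behind Pairwise Comparison can be sketched in a few lines. This is a minimal illustration only, not OpinionX’s actual scoring code; the function name and data shape are assumptions made for the example:

```python
from collections import defaultdict

def win_rates(votes):
    """Score options by win rate: wins / times shown.

    `votes` is a list of (winner, loser) tuples, one per pair vote.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return {opt: wins[opt] / appearances[opt] for opt in appearances}

votes = [("apples", "bananas"), ("apples", "cherries"), ("bananas", "cherries")]
print(win_rates(votes))  # apples: 1.0, bananas: 0.5, cherries: 0.0
```

Every pair vote yields a “best” (the winner) and a “worst” (the unselected loser), so sorting options by win rate gives the same best-to-worst scale that MaxDiff produces.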

Alternative 2: Rank Ordering

Rank Order questions show respondents the full ranking list and ask them to place the items in order according to their personal preferences. It’s the simplest format of the four alternatives explained in this post, but it has some shortcomings worth noting. All the main research providers (Qualtrics, SurveyMonkey, etc.) recommend limiting Rank Order questions to 6-10 statements at most. Beyond that, you should switch to Pairwise Comparison, which is a better format for ranking long lists.

^ Example of Rank Order voting and results on an OpinionX survey

Alternative 3: Points Allocation

One disadvantage of both MaxDiff and Pairwise Comparison is that they estimate the preferences of items relative to each other but don’t tell us if our list of ranking options is a good or bad batch from an absolute perspective. That’s where Points Allocation comes in. It gives each participant a pool of credits they can allocate amongst options in whatever way best represents their preferences. The great thing about Points Allocation is it doesn’t just show the relative preference, it shows the magnitude of their preference — for example, we don’t just learn that Simon prefers apples to bananas, we see that he would give 9 of his 10 points to apples.

^ Example of Points Allocation voting and results on an OpinionX survey

Alternative 4: Conjoint Analysis

Conjoint Analysis is used to understand the influence that individual aspects/attributes of a product or offering exert on the buyer’s decision. It’s more advanced computationally than MaxDiff and used for much more specific use cases, typically in bundling analysis and pricing/packaging research. Conjoint Analysis is one of the most expensive survey formats out there, so expect to pay thousands for any platform (Conjointly’s pricing plans start at $1,795 per user per year with no monthly options).

^ Example of a Conjoint Analysis question

— — —

Additional Questions:

Is MaxDiff Analysis the same as Best-Worst Scaling?

No! MaxDiff and “Best-Worst Scaling” are not synonyms. This is a misconception about MaxDiff Analysis that people have been spreading for decades (even ChatGPT gets this one wrong). Best-Worst Scaling means plotting a set of options from best to worst using respondent data. MaxDiff surveys are a type of Best-Worst Scaling, but they are not the only way to create best-worst scaled data! You can use many different research methods to create a best-worst scale like most to least important factor, biggest to smallest influence, highest to lowest priority, and more.

How are MaxDiff results calculated? How does MaxDiff methodology work?

MaxDiff results are calculated either with advanced algorithms (e.g. Bayesian statistical models that estimate individual-level preferences) or with simpler aggregate scoring: subtract the number of times an option was chosen as least important from the number of times it was chosen as most important, then normalize those scores onto a common points range.
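As a rough illustration of the aggregate (count-based) approach, here is a minimal sketch. The function name, data shape, and normalization choice are assumptions made for this example, not any vendor’s actual implementation:

```python
from collections import Counter

def best_worst_scores(responses, scale=100):
    """Aggregate MaxDiff scoring: count(best) - count(worst) per option,
    rescaled so all scores sum to `scale` points.

    `responses` is a list of (best, worst, shown_options) tuples,
    one per MaxDiff question answered.
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for b, w, options in responses:
        best[b] += 1
        worst[w] += 1
        for opt in options:
            shown[opt] += 1
    # Raw score in [-1, 1]: net preference per time the option was shown.
    raw = {opt: (best[opt] - worst[opt]) / shown[opt] for opt in shown}
    # Shift to non-negative, then normalize onto a common points range.
    shifted = {opt: s + 1 for opt, s in raw.items()}
    total = sum(shifted.values())
    return {opt: scale * s / total for opt, s in shifted.items()}

responses = [("A", "C", ["A", "B", "C"]), ("A", "B", ["A", "B", "C"])]
print(best_worst_scores(responses))  # A highest; B and C tied lower
```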

How do you calculate a target sample size for your MaxDiff project?

While many people will tell you that you should aim for a minimum of 100 participants to generate robust MaxDiff results, this skips over some important nuance about choice-based ranking formats like MaxDiff and Pairwise Comparison.

For example, here are three different MaxDiff setups that yield completely different data:

  • 100 participants are shown 10 sets of 3 options.

  • 100 participants are shown 7 sets of 5 options.

  • 100 participants are shown 5 sets of 6 options.

These setups don’t produce anywhere near the same volume of data!

To make matters more frustrating, we really need to know how the research tool calculates ranked results to understand what to consider robust. For example, if we’re using an aggregate scoring method, then increasing the number of options in each MaxDiff set doesn’t actually create any additional data.

The company providing the tool must give you specific information on their scoring method and how this translates into a robust sample size. You should not accept blanket rules like “at least 100 respondents.”

Here’s the info I provide to people using our free ranking tool OpinionX to help them plan sample sizes for pairwise comparison surveys (more detail on the formulas below):

 

For each pair rank block, we generally recommend aiming to get enough votes so that every possible pair combination has appeared 3 times in total. The formula to calculate this is 3(n(n-1)/2)/participants.


Formula Explanation:

  1. Total number of possible pairs is n(n-1)/2, where n is the number of ranking options.

  2. Multiply the total possible pairs by 3 to calculate your target number of votes.

  3. Divide your target number of votes by your minimum estimate of participants you expect will complete the survey.

  4. The answer you get is how many pair votes you should set for that block.

Some Caveats:

  • Generally, if the formula results in a number below 10, I would just leave the default 10 votes in place as it is a very reasonable amount to ask from each participant.

  • You can also use this approach to set the number of votes you think is ideal for participants and then use that to calculate the minimum number of participants to create robust results.

  • If your survey has multiple Pair Rank questions, you should consider the total number of votes a participant will have to cast across your entire survey. The higher your total number of pair votes, the lower your completion rate will be. Generally, anything above 40 pair votes within a survey is a lot to expect of any participant (unless you provide a strong participant incentive).

  • If segmentation is going to be an important part of your analysis, you should substitute the "participants" variable for your estimate of the total number of people you expect to reach from your smallest key segment.

Example Scenarios:

  • Normal: I have 20 ranking options and expect to fill my 50-participant limit on the free tier of OpinionX = 3((20*19)/2)/50 = 11.4 → 12 votes per participant.

  • Segmented: I have 14 ranking options and expect to reach 150 participants, but I plan to segment my results to see the differences between customers on my four different pricing tiers. The segment I'll get the least responses from is Enterprise Customers, but I think I can get at least 16 participants from them. 3((14*13)/2)/16 = 17.1 → 18 votes per participant.
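The formula, rounding rule, and minimum-votes caveat above can be wrapped in a small helper. This is a sketch of the calculation described in this post; the function name and the floor-of-10 default are assumptions based on the caveats above:

```python
import math

def votes_per_participant(n_options, participants, repeats=3, minimum=10):
    """Target pair votes per participant so that every possible pair
    appears `repeats` times in total across all respondents.

    Implements the formula repeats * (n*(n-1)/2) / participants,
    rounded up, with a floor of `minimum` votes per participant.
    """
    total_pairs = n_options * (n_options - 1) // 2  # n(n-1)/2 possible pairs
    target_votes = repeats * total_pairs            # total votes needed
    per_participant = math.ceil(target_votes / participants)
    return max(per_participant, minimum)

print(votes_per_participant(20, 50))  # 11.4 rounds up to 12
print(votes_per_participant(14, 16))  # 17.1 rounds up to 18
```

For the segmented scenario, you would pass the size of your smallest key segment (16 Enterprise Customers) as `participants` rather than the full expected reach.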

If your research platform provider cannot provide you with this level of specificity, then I would recommend considering an alternative provider.

Comparing 10 Free Tools For MaxDiff Analysis Surveys

1. OpinionX ✅

OpinionX is a free survey tool for ranking people’s priorities. You can use the “Pair Rank” question to create MaxDiff-style ranking projects like this:


^ Voting and results of a Pairwise Comparison survey (an alternative to MaxDiff) created for free on OpinionX

OpinionX is used by thousands of teams around the world, from companies like Google, Amazon and Microsoft, to national governments and Ivy League academics. It comes with both a free tier and a premium tier:

OpinionX Free Tier
The free tier of OpinionX allows unlimited ranking options on your questions and even lets you set how many pair votes you’ll show to each respondent. OpinionX surveys can have multiple types of ranking questions and other survey formats too like email collection or multiple-choice questions. Free users can create unlimited surveys with unlimited questions and engage up to 50 participants per survey.

OpinionX Premium Tiers
OpinionX premium tiers start at $30/month. Our premium tiers give you larger participant limits, the ability to segment your ranked results (pick a segment of your respondents and see how they voted compared to everyone else — example video below), and other customizations (e.g. remove OpinionX branding or hide the “skip” button for forced comparison).

^ Segmenting your pair-ranked results on OpinionX

As you will see throughout the rest of this list, there are effectively no free tools on the market for a full MaxDiff project. It is considered an advanced research method, and most of the options below start at a minimum price of $1,000/year.

2. Sawtooth Software ❌

Sawtooth Software has a proprietary MaxDiff approach called “Bandit MaxDiff”, which uses Thompson Sampling to decide which sets of voting options to display next so as to gain the most information possible. Sawtooth Software starts at $4,500/year for their “Basic” package.

3. Qualtrics ❌

Qualtrics provides MaxDiff questions as an additional purchase, so you must already have a premium account and then you need to contact your Account Executive to “learn more about this product”.

4. SurveyMonkey ❌

SurveyMonkey offers MaxDiff as a separate product called “Feature Importance” under their Market Research solutions set. To get access to MaxDiff on SurveyMonkey, you have to submit an enterprise request form via their Market Research website and engage a sales rep to purchase it separately.

5. Confirmit ❌

Confirmit was a comprehensive survey software platform that offered MaxDiff capabilities; however, since merging with Forsta and Dapresy under the Forsta brand, it no longer offers MaxDiff as a research format.

6. Conjointly ❌

MaxDiff is included as one of Conjointly’s “advanced methods”, meaning it is only available on the Professional tier ($1,795 per user per year) or higher tiers.

7. QuestionPro ❌

QuestionPro offers MaxDiff and Conjoint Analysis only as part of its most expensive tier, “Research Suite”, which is available by custom quote only. Their “Workplace” product starts at $5,000/year, so the Research Suite is likely considerably more expensive.

8. SurveyKing 🟠

SurveyKing offers a limited version of MaxDiff on its free plan; however, you can only include 3 “attributes” (ranking options), so the free version is really just for testing out the product, not for running an actual MaxDiff project. The premium version starts at $19 per user per month (with some response-viewing limits).

9. Alchemer ❌

MaxDiff is only available on Alchemer’s “Full Access” (their most expensive tier) which starts at $275 per user per month or $1,895 per user per year.

10. Q Research Software ❌

No free tier available. To create MaxDiff surveys, you must purchase either their Standard ($2,235/year) or Transferable License ($6,705/year).

— — —

Create A Free Comparison-Based Ranking Survey Today

There’s no reason to be stuck on MaxDiff when similar methods are more widely available. Pairwise Comparison is used today by thousands of teams, from tech giants to governments and academics, who use OpinionX to run their ranking research projects. Create your own MaxDiff-style ranking survey using the free Pair Rank question type on OpinionX!
