Friday, Jan 28, 2022

The Five-Point Purchase Intent Question: Why I don't Like It

The five-point purchase intent question typically reads as follows:

How likely would you be to purchase this product?

  • Definitely will buy
  • Probably will buy
  • Might or might not buy
  • Probably will not buy
  • Definitely will not buy

I have a problem with this widely used, venerable question. I especially dislike it for existing brands, but even for testing new brand ideas, it has been a skeleton in marketing research’s closet for decades.

So let me be specific about what I do not like about it and what question I prefer.


Purchase intent questioning does not mirror shopper choice

In real life, when you buy one thing, you are sacrificing your option to buy other things. You are making a choice. Purchase intent questions do not reflect the choice and sacrifice elements of buying. I have run experiments where I ask about purchase intent towards numerous brands in a category, and found that many/most brands get top-two box purchase interest (“Definitely will buy” and “Probably will buy”) from the same respondents, which could obviously not translate to purchases in real life.
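
The overlap problem is easy to see with a toy simulation (purely synthetic data, not from any real study): generate positively skewed five-point ratings for several brands and count how many brands each respondent puts in the top-two box.

```python
import random

random.seed(0)
BRANDS = ["A", "B", "C", "D", "E", "F"]  # hypothetical six-brand category

# Simulate 5-point purchase intent (5 = "Definitely will buy") for 200
# respondents. Weights skew positive, as stated-intent data often does.
ratings = [
    {b: random.choices([1, 2, 3, 4, 5], weights=[5, 10, 25, 35, 25])[0] for b in BRANDS}
    for _ in range(200)
]

# For each respondent, count brands rated in the top-two box (4 or 5).
top2_counts = [sum(1 for r in resp.values() if r >= 4) for resp in ratings]
avg = sum(top2_counts) / len(top2_counts)
print(f"Average brands per respondent in top-two box: {avg:.1f} of {len(BRANDS)}")
```

With generous response weights like these, the typical respondent lands several brands in the top-two box at once, which could not all convert to purchases in a real shopping occasion.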



“Definitely will buy” responses do not translate to purchase at a high rate

I’d like to think that if someone says they definitely will do something, there is at least a 50% probability that they will, in fact, do it. That is not what I see in the data I have looked at or in published work. This lack of respondent-level validation is troubling and suggests that higher versus lower scores are subject to aggregation bias. The only response that is highly predictive is “Definitely will not buy”: if someone says that, their probability of purchase is in the low single digits. When I ran ESP (a competitor to BASES), that response carried the highest weight, though with a negative coefficient, obviously. (For context, BASES, now part of Nielsen, is the leading commercial service for testing the sales potential of new products. ESP (Estimating Sales Potential), part of the NPD Group, was the competing service I ran.)
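
Checking this kind of predictive validity requires pairing stated intent with observed purchase. A minimal sketch of the calculation, using hypothetical paired records (the numbers are illustrative only; the point is the roll-up, not the values):

```python
from collections import defaultdict

# Hypothetical paired records: (stated intent, purchased in follow-up window).
records = [
    ("Definitely will buy", True), ("Definitely will buy", False),
    ("Definitely will buy", False), ("Probably will buy", False),
    ("Probably will buy", True), ("Might or might not buy", False),
    ("Probably will not buy", False), ("Definitely will not buy", False),
]

totals = defaultdict(lambda: [0, 0])  # intent -> [purchases, respondents]
for intent, bought in records:
    totals[intent][0] += bought
    totals[intent][1] += 1

# Realized purchase rate by stated-intent level.
for intent, (bought, n) in totals.items():
    print(f"{intent}: {bought / n:.0%} purchased (n={n})")
```

On real data, the table this produces is exactly the respondent-level validation that the five-point scale usually fails.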

If you do studies across countries and cultures, you already know that you need to maintain separate norms because the question is interpreted differently by consumers in different countries.


You don’t learn much from purchase intent

I get some kind of measure of interest, but no sense of how it stacks up to the level of interest in other brands or which brands are most directly competitive.


It is not very sensitive

Top box comparisons are a bit restrictive and top-two box responses homogenize results across brands or concepts. If you have a large normative database of top box or top-two box results for products in market, I would love for you to share as a comment the mean and variance of the survey results versus the mean and variance of the actual in-market annual penetration. I’m guessing we will see that PI (purchase intent) lacks the sensitivity we would prefer.


Constant sum questions as the alternative

I am most interested in existing brand research where the PI question is least desirable (most of what I work on these days). For existing brands in brand trackers or campaign lift studies, I suggest you try constant sum. That means you are asking the respondent to allocate 10 points across the alternatives in their consideration set. This will mimic choice processes and do a good job of returning the market share of all major brands.
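
The roll-up from constant-sum allocations to estimated share is straightforward. A minimal sketch, using synthetic allocations and hypothetical brand names:

```python
# Each respondent allocates 10 points across the brands they would consider.
allocations = [
    {"Brand A": 6, "Brand B": 3, "Brand C": 1},
    {"Brand A": 2, "Brand B": 8},
    {"Brand C": 10},
    {"Brand A": 5, "Brand B": 5},
]

# A respondent's points / 10 is read as their probability of choosing each
# brand; averaging across respondents yields an estimated market share.
brands = sorted({b for a in allocations for b in a})
n = len(allocations)
share = {b: sum(a.get(b, 0) for a in allocations) / (10 * n) for b in brands}

for b, s in share.items():
    print(f"{b}: {s:.0%} estimated share")
```

Because every respondent's points sum to 10, the brand shares sum to 100% by construction, mirroring the choice-and-sacrifice property that purchase intent lacks.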

Constant sum also gives you…

  • A picture of secondary loyalties and market structure.
  • A measure of repeat rate across brands.
  • An ability to unpack buyers into segments so their brand beliefs can be compared (e.g. How do I make a more loyal consumer?) In particular, for media targeting, you can use constant sum to identify the Movable Middle consumers (five times more responsive to your advertising) and onboard them as a seed sample to your partner for lookalike modeling at scale.
  • The full picture, as you can use the dataset to model the distribution of consumers’ probabilities of purchasing your brand via a Beta distribution. (The Dirichlet is less desirable because it assumes there is no market structure, an assumption I have never seen hold.) There is a tremendous richness that comes from the Beta distribution (a subject for another blog entry).
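
One way to fit that Beta distribution is the method of moments on per-respondent purchase probabilities. A sketch under stated assumptions: the probabilities below are synthetic values derived from constant-sum points out of 10, and a production fit would likely use maximum likelihood on your own data.

```python
# Per-respondent purchase probabilities for one brand (points / 10),
# synthetic values for illustration.
p = [0.6, 0.2, 0.0, 0.5, 0.1, 0.0, 0.8, 0.3, 0.0, 0.4]

mean = sum(p) / len(p)
var = sum((x - mean) ** 2 for x in p) / (len(p) - 1)

# Method-of-moments estimates for Beta(alpha, beta):
#   common = mean*(1-mean)/var - 1; alpha = mean*common; beta = (1-mean)*common
common = mean * (1 - mean) / var - 1
alpha, beta = mean * common, (1 - mean) * common
print(f"Beta(alpha={alpha:.2f}, beta={beta:.2f})")
```

An alpha below 1 in a fit like this concentrates mass near zero, capturing the many category buyers who almost never choose the brand.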

A final point about why constant sum is the better choice: It works regardless of culture or demographics, unlike purchase intent or the NPS (Net Promoter Score) question. In every culture, in every country, for older and younger respondents, more points to one brand mean fewer points to distribute to any other brand. That anchors the survey device’s measurements in the reality of shopper choice, a reality that transcends borders.

The post Why I Don’t like the Five-Point Purchase Intent Question first appeared on GreenBook.


By: Joel Rubinson
Title: Why I Don’t like the Five-Point Purchase Intent Question
Published Date: Fri, 14 Jan 2022 12:00:29 +0000
