In the Market Research Bulletin LinkedIn group there is a great discussion going on about why 80% of new products fail. The conversation was started by Wasim Shaikh, and already has 85 comments, including the suggestion that perhaps 85% is the right figure these days.
I have posted a short version of my view that the main reason is that there is a limit to the absolute number of products that can be successful in any given year, and the number launched greatly exceeds this natural limit, so failure is mandated by the system. However, LinkedIn does not allow the space to fully explore the argument, so below is my response to Wasim's question: "80% of the products launched FAIL!!!!!!! What is going wrong???? I am sure most of the companies conduct extensive market research before any product launch. Then what is the reason for such a high failure rate?"
Compared with the query from Wasim, I’d like to address the question from the other end, so to speak, namely “How many new products should fail?” Further, I want to come at this revised question in two ways, by considering both probabilities and numbers of winners.
If we were to aim for 100% of new product launches to be successful, we would need to throw away products which we thought had a 99% chance of being successful. In order to get rid of Type I errors (false positives, i.e. launching products that fail) we would have to accept massive Type II errors (false negatives, i.e. shelving products that would have succeeded). If brands were only to launch products where it was felt there was a 100% chance of success, they might have to wait several years between launches and would find their market position eroded as other, less fussy, competitors launched their new products with less than a 100% chance of being successful. For example, a competitor might launch 8 products, each of which had an independent chance of being successful of 25%, knowing that there was only a 10% risk that all 8 would be failures.
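The arithmetic behind that last example is easy to check. As a minimal sketch (the 25% success chance and 8 launches are just the illustrative numbers from above):

```python
# Probability that all launches fail, when each launch has an
# independent chance of success (illustrative numbers from the text).
p_success = 0.25
n_launches = 8

# Each launch fails with probability (1 - p_success); the failures
# are independent, so multiply the per-launch failure probabilities.
p_all_fail = (1 - p_success) ** n_launches
print(f"Risk that all {n_launches} launches fail: {p_all_fail:.1%}")
# -> Risk that all 8 launches fail: 10.0%
```

So the "only a 10% risk that all 8 would be failures" claim holds: 0.75 to the eighth power is just over 0.10.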
As much of the discussion above has shown, the probability of success of a new launch will relate to many things, including any errors in the testing, any weaknesses in the marketing, any negative post purchase experiences, and a wide variety of ‘events’ (such as weather, news, fashion, inflation, wars etc).
However, I think a much bigger issue in the question of how many products should be successful is a simple numbers game. All the neuroscience and behavioural economics work is showing us that we humans tend not to think too much: we utilise heuristics and we find short cuts. Becoming a regular buyer of some new product carries a cognitive burden: we have to remove the old habit and install a new habit. For some products, perhaps a subtle line-extension in a chocolate bar, the load is pretty small. For other new products (for example a novel way of applying dental floss) the cognitive load is higher.
Each consumer is, IMHO, only going to become a regular buyer of a few new products each year, perhaps ranging from single figures for creatures of habit to 30-40 for people who embrace the latest coffee, sandwich spread, or hard surface cleaner. If we stick to FMCG/CPG (fast moving consumer goods/consumer packaged goods), a brand has to jump through the following hoops to be successful. First, it has to be one of the products that the major retailers agree to take, agree to give a good position, and usually agree to promote in stores; stores will only take on a certain number of new products each year. Then a sufficiently large number of consumers need to put it into their repertoire. This will usually mean removing something else from their repertoire, either because the new product is a direct replacement, or simply to keep their overall budget constant (new products that result in people increasing their weekly grocery spend are even rarer). Finally, the new product needs to embed itself in the pattern of purchase for that household.
As an example of the large numbers of products being launched and failing, consider the 1997 report by Linton, Matysiak & Wilkes [quoted in http://www.entrepreneur.com/tradejournals/article/19768190.html], which looked at 1,935 new products from just the top 20 food companies in the US, finding a failure rate then of 70-80%. This might imply that up to 30% of the new products (in that case just under 600) are capable of surviving, but far fewer will be ‘successful’.
I am still gathering the data, but in the context of a large supermarket, the maximum number of new products that can be launched and succeed in a given year is probably relatively fixed: perhaps a few hundred, perhaps 1000. Whatever the exact figure, it is much smaller than the number of products companies would like to launch; new product launches in FMCG/CPG tend to run to multiples of thousands each year.
If we take the hypothetical view that in a given year, let’s say 2012, the maximum number of products that can be successful in the major supermarkets of a given country is, say, 1000 (which probably means that something like 500 products have to be delisted by the supermarkets to make space, allowing for some growth in the size of stores and some narrowing of facings), then the proportion of products that will be successful is going to be 1000/N, where N is the number of products launched. As N increases, the percentage of products failing increases, almost irrespective of how good they are. In our hypothetical market, if 1000 were the limit for success and 5,000 were launched, then the failure rate would be 80%.
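The 1000/N relationship can be sketched numerically. The 1,000-winner ceiling is the hypothetical figure assumed above, not a measured one:

```python
# Failure rate when the market can only absorb a fixed number of winners,
# here the hypothetical ceiling of 1000 successful products per year.
MAX_WINNERS = 1000

for n_launched in (1000, 2000, 5000, 10000):
    # Success rate is MAX_WINNERS / N, so failure rate is 1 - MAX_WINNERS / N.
    failure_rate = 1 - MAX_WINNERS / n_launched
    print(f"{n_launched:>6} launches -> {failure_rate:.0%} fail")
```

At 5,000 launches the failure rate is 80%, matching the example above, and by 10,000 launches it reaches 90%, however good the individual products are.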
There are probably things that change the maximum number of successes from time to time: a recession might make it smaller; more leisure time (but probably not unemployment) might increase it; innovations such as refrigeration in stores and then refrigeration in the home no doubt changed the number. But changes in the maximum are largely beyond the control of individual brands.
As an analogy, consider Formula 1 racing, and think about its first year, 1950, and its most recent year, 2010. In 1950 there were 7 races, and Nino Farina was the overall winner, but it was possible for 7 drivers to have been at least reasonably successful, i.e. to have won at least one race. In fact, in 1950 just three drivers won at least one race. In 2010 there were 19 races, with Sebastian Vettel the overall winner, so just like the growth in the size of supermarkets between 1950 and 2010, the limit on how many winners there could be had increased, to an upper potential of 19. In fact, just five drivers won at least one race in 2010. Supermarkets, and beyond that the wider marketplace, are bigger than Formula 1, but the principle applies: the number of winners is not greatly affected by the number of entrants, and the maximum is set by the system.
So, what should researchers and, more importantly, brands do? Well, they can’t simply cut back on launches because they know only some will be successful; competitors will simply niche their share away. Brands need to use their NPD, their market research, their marketing skills, and their advertising to try to beat the market, i.e. to increase the chance of a new product being successful. If 85% of all new products in your category fail, but only 82% of your products fail, you are doing well, and the lower you can get the figure the better (because a) you are winning, and b) a competitor is losing). Other factors that come into play are low cost innovation, systems such as ‘fail and learn’, and agile marketing (where the process of failing is used to help improve the product).