Last year at the UK’s MRS Conference I suggested that there would not be any market research surveys in twenty years. Over the past year this topic seems to have been picked up by a wide range of speakers, and most recently we have seen a number of clients, such as P&G, criticising the research industry for being too wedded to surveys. However, I think the debate is too one-dimensional and tends to avoid making the point that the problem is largely the fault of the research buyers.
My starting position is that nobody ‘wants’ to do market research; people want information about markets that will help them make better decisions (in the same way that nobody wants drills, they want holes in walls). Market research is what people do when there isn’t a better, cheaper, faster alternative. Similarly, within market research, surveys should be what we do when there isn’t a better, cheaper, faster alternative.
If we were to go back twenty years, the number of occasions where a representative, quantitative survey was the best answer to a problem was large. However, over the last twenty years, and particularly over the last ten, the world has moved on. For example, online access panels are not representative, but they are cheaper and faster and have been deemed good enough. More recently, listening research has begun to tap into customer conversations, DIY options are beginning to challenge supplier models, and online research communities (MROCs) are providing qualitative answers that often outperform quantitative (e.g. survey-based) methods.
I see a big divide opening up in research between the things you can learn by just listening, e.g. through social media monitoring, and the things that require questions (put another way, the things that require conversations). I suspect that in the future much of customer satisfaction research, along with some brand and advertising tracking, will be achieved via ‘listening’ and/or ‘automated tracking’ techniques (by automated tracking I mean people allowing their phone, browsing history, credit cards, and social media conversations to be tracked by software). However, fields such as product innovation, concept testing, motivational research, and ad testing will require questions/conversations. Within the research that requires questions we will see a variety of techniques, such as MROCs, community panels, purposeful conversations*, along with traditional questioning approaches such as focus groups, depth interviews, and of course surveys**.
Which brings me back to the growing chorus of clients like P&G and Coca-Cola complaining that market researchers are too fixated on surveys. Yet researchers are offering a wide variety of techniques, including qual, MROCs, ethnography, semiotics, prediction markets, social media listening and much more. Indeed, the whole NewMR revolution in techniques and approaches means that clients have never had such a range of options to choose from. However, clients continue to spend about 80% of their budget on surveys. If there is a problem with too much market research being based on surveys, then surely the problem lies mostly with the research buyers who insist on buying surveys (and who keep asking for longer surveys, and whose input usually makes surveys more boring and less relevant).
I put two markers in the text above to highlight specific points further:
* Purposeful conversations refers to seeding a conversation with something you want to know, just as I do when I post a question in the LinkedIn AMSRS discussion group. I think brands will make much greater use of this in the future, for example by monitoring social media and asking questions to clarify issues, or by taking initiatives (such as competitions) in order to elicit responses. Most of the time people are not talking about you or your brand, so in order to learn from conversations brands will need to create topics, perhaps incentivising those discussions.
** Surveys will need to change. The 30- or 40-minute survey will, in the future, only be asked of professional respondents and will need to fit research situations where the considered and well-rewarded responses of a professional respondent are useful, which may include audit-type surveys. However, I think that most surveys will become much more focused on open-ended responses. For example, a customer satisfaction study for a service may be reduced to three questions: a closed scale asking degree of satisfaction, followed by two open-ends, ‘What did you like?’ and ‘What didn’t you like?’ Advances in text analytics will soon allow these sorts of surveys, i.e. ones based on open-ended responses, to be processed at scale.
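To make that three-question design concrete, here is a minimal, hypothetical sketch (my own illustration, not a description of any particular text analytics product) of how the two open-ended answers might be tallied into rough themes using nothing more than Python’s standard library; real text analytics would of course go much further than word counts.

```python
# Hypothetical sketch: tallying themes in the open-ended answers from the
# three-question satisfaction survey described above. Data and names are
# illustrative only; production text analytics would use richer NLP.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "was", "were", "it", "i", "to", "of", "at"}

def top_themes(responses, n=5):
    """Return the most frequent non-trivial words across open-ended answers."""
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

# Example records: (satisfaction score 1-5, 'What did you like?', 'What didn't you like?')
survey = [
    (4, "Friendly staff and fast service", "Parking was difficult"),
    (2, "The staff were friendly", "Long wait and slow service at the till"),
    (5, "Fast, helpful service", "Nothing"),
]

liked = [row[1] for row in survey]
disliked = [row[2] for row in survey]

print("Liked:", top_themes(liked))
print("Disliked:", top_themes(disliked))
```

The point is not the code itself, but that a closed satisfaction score and open-ended themes can sit side by side in the same lightweight analysis.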