One day of workshops, two days of Conference attending, and it is all over. As far as I am concerned, the trip to Singapore was well worth making. The Conference had some weak bits, but I will ignore these. There were a few daft ideas (and I will ignore all but one of them), but there was lots that was good, and 260 interesting people to meet and dine with (and wonderful food).
For me the high point was the keynote by Tony Fernandes (see yesterday’s post). Other high points included Lee Ryan and Lisa Li talking about how the digital world is impacting young people in China, and Jerry Clode and Jim Poppelwell, who showed a great knowledge of China’s Web 2.0 sites, such as qq.com.
One of the best presentations was by Ian Stewart of MTV and Graham Saxton of OTX Research. Ian and Graham shared masses of findings about trends and changes, and better still, the full deck of slides and information is available on SlideShare here. This is part of MTV/Stewart’s open source approach of making as much information as possible available. The slides are a treasure trove of information.
The saddest aspect of the Conference was the news that Rhiannon will be leaving ESOMAR after 10 years. Rhiannon has helped many people over that time and will be much missed. I’d like to wish her all the best for her next venture.
The daft idea? The issue of online reliability was given excessive prominence compared with other problems. Of course we need to ensure that we do as good a job as possible online, but we must not be spooked into thinking online is worse than offline. A video of a Coke exec perpetuated the story about the same survey being fielded twice with the same panel company and producing different results. As I understand it, the source of this urban ‘story’ is a presentation by P&G a couple of years ago. But until the data and methodology are published, it must be regarded as simply anecdotal. The industry, by contrast, has tens of thousands of positive examples of test-retest studies. Almost all online tracking studies use the same questionnaire, given to the same panel, separated by a short period of time. Do most tracking studies show wild and interesting movements? No, they are as flat and boring as their offline equivalents.
Just consider for one moment the comparison between online and CLT (central location testing). Online certainly has some issues, but the idea that we might conduct a national study in somewhere like Australia or the USA by interviewing people in small, localised parts of four cities spread across the country is a great deal more risky than an online study – yet for many years CLT was the norm for many companies, many countries, and many types of study.
Perhaps the key point is not that some clients have raised reasonable concerns. Perhaps the key point is that there do not seem to be any clients who want to pay 40% more to go back to CATI or face-to-face. We should always be improving our procedures, but we need to avoid being panicked by anecdote and by out-of-date Web 1.0 approaches to working with respondents.
A panel discussion about online research did, however, come to the view that within five years 20%–30% of all research in Asia will be conducted online – approaching the current percentage in Australia.