This section looks at the main modalities used to collect
quantitative data and highlights their key differences and implications.
According to the ESOMAR Global Market Research 2009 report, quantitative survey
research was estimated to account for about 55% of all market research in 2008.
Of that 55%, online accounted for about 39% (i.e. 20% of all research spend),
with telephone just behind at 33%, face-to-face at 22%, and postal at 9% (i.e.
5% of all research spend).
Interviewer
The main issue that divides the modalities is the presence,
or absence, of an interviewer. This difference is crucial in many ways.
Respondents tend to prefer surveys with an interviewer, especially when the
interviewer is physically present. If there are problems with the survey the
interviewer can explain, help, probe, and ensure that messages about problems
are relayed back to the project team. With self-completion modalities nobody
sees the respondent. Are they who they say they are? Are they concentrating
appropriately? Are they confused? On the other hand, there is
considerable evidence that people are more honest when an interviewer is not
present, and of course the cost is lower without an interviewer.
Online and postal surveys are both self-completion, i.e.
they do not have an interviewer. Telephone and face-to-face are both
interviewer administered. One exception is telephone interviews conducted
by automated processes such as IVR (interactive voice response).
Sampling
The key historical issue about sampling was whether it
approximated to random probability sampling. Random probability sampling is
based on an assumption that the population is known and that every member of
the population has a known, non-zero, chance of being selected. If a sample was
a random probability sample (or could be made to behave like one through the
use of techniques such as weighting) then, subject to sampling error, results
from the sample could be projected to the population.
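The weighting mentioned above can be illustrated with a minimal sketch. The age bands and percentages below are invented for illustration; the idea is simply that each respondent's weight is the population share of their group divided by the sample share, so that weighted sample proportions match the known population proportions:

```python
# Post-stratification weighting: a minimal sketch with made-up numbers.
# weight = population share / sample share for each group, so the
# weighted sample proportions match the population proportions.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share = {"18-34": 0.50, "35-54": 0.30, "55+": 0.20}

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

# The over-represented 18-34 group is weighted down (0.30 / 0.50),
# while the under-represented older groups are weighted up.
```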
If a sample is not a random probability sample then the
results cannot be projected to the population directly. With non-probability
sampling the researcher has to adopt one of the following strategies:
a) Use an alternative technique to link the sample
to the population, for example propensity weighting (for more information see http://www.schonlau.net/publication/03socialsciencecomputerreview_propensity_galley.pdf)
or by establishing a modelled link (for example experience may indicate that a
score of X in the sample equates to a Y share of purchase in the real market).

b) Assume that movements in the sample,
from one wave to another, or from one test to another, reflect changes in the
population, even if the absolute values do not. Much ad
tracking is based on this assumption.

c) Assume that the sample is a reasonable
proxy for the population. Quite often the face validity of this approach is
improved by using quotas to make some key characteristics of the sample match
the population, for example age, gender, and income.
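The propensity weighting mentioned in option a) can be sketched in miniature. The propensity scores below are invented; in practice they would come from a model (such as a logistic regression) estimating how likely someone with a given profile is to end up in the online sample. Weighting each respondent by the inverse of their propensity counteracts the over-representation of high-propensity people:

```python
# Propensity weighting: a minimal sketch with made-up numbers.
# Each respondent has an assumed propensity score p, the modelled
# probability that someone like them joins the online sample.

respondents = [
    {"id": 1, "propensity": 0.8},  # very likely to be sampled online
    {"id": 2, "propensity": 0.4},
    {"id": 3, "propensity": 0.1},  # rarely reached online
]

# Inverse-propensity weight: rare-to-reach people count for more.
for r in respondents:
    r["weight"] = 1.0 / r["propensity"]

# Normalise so the weights average to 1 across the sample.
mean_w = sum(r["weight"] for r in respondents) / len(respondents)
for r in respondents:
    r["weight"] /= mean_w
```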
Although traditional market research has, since the 1930s,
been built on an assumption of random probability sampling, the reality has, in
most cases, been very different. If there is a significant proportion of people
who can’t be reached by the medium (for example people who do not have a
landline, or who tend to be out a lot), or if there are a significant
proportion who decline to do a survey, then non-response bias can render random
probability sampling, and its implications for representativeness and
projectability, worthless. Views differ, but it is safe to assume that if the
response rate is less than 70%, then the study is not approximating to a random
probability sample.
Most online research is conducted via online access panels,
which are clearly not a method of random probability sampling. Online research
conducted with client databases (or via visitors to a website) might
approximate to random probability sampling if a) the database or the website
visitors are the relevant population, and b) the response rate is high enough.
Face-to-face data collection tends to come in two main
varieties, door-to-door and central location. In some countries both methods
are prevalent, in other countries only one of these will tend to apply. For
example, in North America and Australia face-to-face is very rarely conducted
door-to-door. Door-to-door studies can be constructed so as to approximate to a
random probability sample, if the response rate is high enough. Central
location testing is not capable of approximating to a random probability
sample, and usually relies on the sample being a reasonable proxy, e.g. by
quota controlling for variables that are felt to be key.
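The quota controlling mentioned above can be sketched simply: interviewing stops for a cell once its quota is full. The cell names and targets below are invented for illustration:

```python
# Quota control: a minimal sketch with invented cells and targets.
# A cell stays open until its completed-interview count hits the quota.

quotas = {"male": 50, "female": 50}
completed = {"male": 50, "female": 32}

def cell_open(cell):
    """Return True if interviews are still needed for this quota cell."""
    return completed[cell] < quotas[cell]

# The male quota is full, so only female respondents would now be
# admitted to the study.
```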
For a while, and in some markets (such as Australia and the USA),
telephone research was able to claim a semblance of random probability
sampling. In markets where more than 90% of households had a telephone, and
thanks to the development of RDD (Random Digit Dialling), only response rates
stood between the researcher and a credible claim of random probability
sampling. However, due to the growth of mobile (cell phone) only households,
telephone has lost its ability to assert that it is fully representative.
Postal research has declined in importance over recent years,
and the ESOMAR Global Research figures for 2008 show it accounting for only a
quarter as much spend as online. In most markets it is possible to construct a
sample frame that would be representative of the general population. The key
limitation in terms of achieving a representative sample is non-response, which
in most market research surveys results in fewer than 70% of surveys being
returned (usually a lot less than 70%).
Stimuli
The four data collection modalities being reviewed here have
very different implications for the kind of stimulus material that can be used.
In the early days of online research most surveys were
restricted to simple text, some colour, and perhaps a low resolution graphical
image or two. The limiting factors were the software being used and even more
critically the speed of most people’s connection to the internet. With the
advent of widely available broadband (in many markets), good software, new
options (such as Flash), the range of stimulus material that can be used
includes pictures, audio, video, and interactive modules. In terms of what
cannot be used, the most notable are smell, taste, touch, and detailed pictures
(for example, an A4 photo on a show card can clearly show twenty cars, whereas
most screens struggle to show more than ten, because of size and definition).
Two limiting factors in terms of stimuli for online surveys
are a) people who complete the survey at work, where sound or
movies may be disabled or unsuitable, and b) the rise in the number of people
completing surveys via smartphones (such as iPhones), who
therefore have smaller screens.
Face-to-face data collection, particularly central location
testing, allows the widest range of stimuli to be used. However, it should
be noted that some face-to-face formats are more limited, for example
in-street interviewing conducted with a clipboard, paper, and pencil. In
terms of face-to-face interviewing, the introduction of CAPI (Computer Aided
Personal Interviewing) was particularly responsible for extending the range
of stimuli that can be used, for example by making
it easier to show media files.
Telephone surveys have the most limited range of stimuli
that can readily be used. In a typical survey the interviewer reads the
questions to the respondent, who then says their answer. This means that the
questions and prompts cannot be too complex, which is why conjoint analysis is
normally considered unsuitable for telephone research. Audio files (for example
a radio advert) are suitable for telephone surveys.
Most postal surveys are very limited in the stimuli they
use; they tend to be a printed questionnaire, possibly with an image or two. However,
some researchers have used postal surveys with much more complicated stimuli,
including scratch and sniff cards, DVDs, and products to use and test.
Adaptive
Survey modalities vary in the degree to which the survey can
‘adapt’ to the respondent. The most adaptive interviews are those computer
administered surveys which are scripted to adapt the lists, stimuli, and
questions based on earlier responses. Computer administered surveys can be
online, telephone (in which case it is CATI, computer aided telephone
interviewing), and face-to-face (in which case it is CAPI, computer aided
personal interviewing).
Online surveys can be, and often are, adaptive. Unless face-to-face
surveys are computer administered, the degree of adaptivity is limited to procedures
such as skip patterns and rotations.
When telephone surveys are computer administered they can be
adaptive. If a telephone survey is not computer administered then the adaptivity
will be limited to simple options such as skip patterns and rotations.
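A scripted skip pattern, the simplest form of adaptivity, can be sketched as a routing rule that picks the next question based on the answer just given. The question names and routing below are invented for illustration:

```python
# Adaptive routing: a minimal sketch of a scripted skip pattern.
# Question ids and routing rules are invented for illustration.

def next_question(current, answer):
    """Return the next question id, given the current question and answer."""
    # Respondents without a car skip the car-specific questions.
    if current == "q1_own_car" and answer == "no":
        return "q12_public_transport"
    routing = {
        "q1_own_car": "q2_car_brand",
        "q2_car_brand": "q3_car_age",
        "q3_car_age": "q12_public_transport",
    }
    return routing.get(current)

# A "no" at q1 jumps straight to q12; a "yes" continues to q2.
```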
Postal surveys represent the least adaptive modality.
Surveys may contain instructions such as “If yes, skip to question 12 on page 3”,
but respondents are often poor at complying with these instructions.
Summary of differences
The table below summarises the key differences between the
leading data collection modalities for quantitative research.
| | Online | Face-to-face (paper & pencil) | Telephone | Postal |
| --- | --- | --- | --- | --- |
| Interviewer | Self-completion. | Interviewer mediated. | Interviewer mediated. | Self-completion. |
| Sampling | Tends to be via access panel; hard to claim random probability sampling; subject to response rate concerns. | Door-to-door may be able to claim random probability sampling. Central location cannot claim random probability sampling. | RDD may be able to claim random probability sampling, subject to response rate concerns. | Can claim to be using random probability sampling, subject to response rate concerns. |
| Stimuli | Video, audio, medium resolution images, interactive materials. | Can include video/taste/smell/high definition images. | Just that which can be conveyed by the spoken word, plus audio. | Tends to be just text and show cards; however, can include products, scratch’n’sniff etc. |
| Adaptive | Scripts can be adaptive. | Only computer aided personal interviewing is adaptive. | If computer aided then scripts can be adaptive. | Not adaptive. |
This post is part of my project to write a book on online and social media research for Wiley. The copyright for the material above is retained as it will be needed for the book. You are welcome to use the material above provided that you credit the source.
I would welcome any comments, corrections, and suggestions, but please note that any posts will be assumed to belong to the project and the use of them will not transfer any rights to the person making the post. Sorry to be so boring, but that is the reality of sharing stuff that is destined for a book.