How do you build a survey that works?

In an article published today in the journal Nature Human Behaviour, I attempt to tackle this question.

Surveys can be powerful tools, gathering information for research that can influence policy and potentially change lives. So how can we best avoid the pitfalls of survey research, and create surveys that gather high-quality data? A good survey may not need a huge budget, but it does require care to be taken at every step.

A representative sample

To avoid bias, and make sure research findings are as accurate as possible, a survey sample should represent the population in which we are interested. That means it should look as similar as possible to that population in every relevant respect. Having a big enough sample is important, but so are several other factors.

The sampling method you choose affects the make-up of the sample and the likelihood of bias. Two methods are best avoided:

  • volunteer samples – where participants choose to join a study because they answer a request or advert
  • convenience sampling – in which people are selected because they are easy to contact, such as people in a shopping centre or users of a website.

Another method, quota sampling, selects people to fill quotas based on characteristics such as age, sex and employment status. It may be better than an uncontrolled volunteer sample, but will still (like a volunteer sample) consist of the most willing and available respondents. People who volunteer to take part in a survey are unlikely to have the same views and experiences as others, so the sample may not be representative.

The best approaches are forms of random sampling. These prevent anyone – either participants or the field workers recruiting the sample – from bringing subjectivity into the process. Any extra time and effort required is likely to be worthwhile, as it should be possible to avoid bias in the sampling process.

Simple, practical ways to select a random sample include taking every nth person (e.g. users of a service or visitors to a location) or selecting people at random from a register or list of some kind, such as the list of all addresses to which the Royal Mail delivers mail.
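
As a minimal sketch (not from the article, and using an invented address list), the snippet below shows how these two approaches might be coded in Python: a systematic sample of every nth entry after a random start, and a simple random sample drawn from a frame.

```python
import random

# Hypothetical sampling frame: e.g. a list of addresses or service users.
frame = [f"address_{i}" for i in range(10_000)]
sample_size = 500

# Systematic sampling: take every nth entry after a random starting point.
interval = len(frame) // sample_size          # n = 20 in this made-up example
start = random.randrange(interval)
systematic_sample = frame[start::interval][:sample_size]

# Simple random sampling: draw entries at random, without replacement.
simple_random_sample = random.sample(frame, sample_size)
```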

Survey Futures has looked at population sub-groups that may be under-represented or missing from UK population surveys. Read the project’s working paper: Framework for identifying and addressing the risks of exclusion from social surveys

Advanced sampling

To improve the precision of a sample, techniques such as stratified random sampling can be used. This involves dividing the population into groups – such as socio-economic types – and then taking a random sample from each group, thereby ensuring that the correct proportion of each type is included in the sample.
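
A rough sketch of proportional stratified sampling, again with an invented frame in which each person is labelled with a socio-economic group, might look like this:

```python
import random
from collections import defaultdict

# Hypothetical frame: (person_id, socio-economic group) pairs, invented for illustration.
groups = ["A", "B", "C1", "C2", "D", "E"]
frame = [(i, random.choice(groups)) for i in range(10_000)]
sample_size = 1_000

# Divide the frame into strata by group.
strata = defaultdict(list)
for person, group in frame:
    strata[group].append(person)

# Take a random sample from each stratum in proportion to its size,
# so each group appears in the sample in the correct proportion.
stratified_sample = []
for group, members in strata.items():
    n_in_stratum = round(sample_size * len(members) / len(frame))
    stratified_sample.extend(random.sample(members, n_in_stratum))
```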

Multi-stage sampling can be a way to reduce the cost and difficulty of collecting data. This involves sampling people at a limited number of places – schools, hospitals, or neighbourhoods, for example. For some surveys this can be a very cost-efficient feature of a sample design.
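
A two-stage design can be sketched in the same spirit, with invented schools and pupils standing in for the places and people sampled at each stage:

```python
import random

# Hypothetical two-stage design: schools and their pupils, invented for illustration.
schools = {
    f"school_{i}": [f"pupil_{i}_{j}" for j in range(random.randint(200, 800))]
    for i in range(150)
}

# Stage 1: select a limited number of schools at random.
selected_schools = random.sample(list(schools), 10)

# Stage 2: select pupils within each selected school, concentrating
# fieldwork in a small number of locations.
two_stage_sample = []
for school in selected_schools:
    two_stage_sample.extend(random.sample(schools[school], 30))
```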

Read more about how Understanding Society develops a representative sample

Questionnaire design

Sampling is just the first step, though. The researcher must design a questionnaire which strikes a balance between the value of gathering detailed information – potentially on a significant number of topics – and the participant’s ability and willingness to answer.

A well-designed questionnaire will only ask for information that the participant can reasonably be expected to know and will use language that they’ll understand. Wording should be clear and consistent; the flow of questions should be logical and intuitive, and communications should encourage honest and accurate answers. It can be tempting to add questions in order to open up new areas of research, but there’s a risk that this becomes a burden to the respondent, encouraging them to give quick answers, which may be less accurate, or to drop out altogether.

To make sure a questionnaire is fit for purpose, it needs testing before it’s deployed. This can simply be a matter of getting experts to review it or getting a few people to fill in the questionnaire and give feedback on it. Several techniques for doing this in a structured way have been developed.

Understanding Society tests questionnaire design using the Innovation Panel. The User Guide gives details on the various experiments that have been conducted

Getting a response

For a survey to be a success, sample members must complete the questionnaire.

Self-completion methods (online or on paper) are cheaper and usually obtain more honest answers; using interviewers costs more, but can improve participation rates. Interviewers can also help participants to provide more complex information and to answer questions that are longer or more complicated. They can also help to get people to agree to linking their data to another source, or to some other kind of follow-up (a visit from a nurse for a blood sample and health measurements, perhaps).

Sometimes quite small details can make a difference to the quantity and quality of responses received, so it is worth paying attention to these. The day of the week when the survey invitation is sent out, and the reminders (how many of them, and whether they are sent by email, letter, or postcard, for example) can make a difference to participation – as can persuasive arguments in an invitation letter, and financial incentives to complete the survey. What’s important here is that different techniques work for different groups: incentives for some, arguments about improving society for others. Survey design can target people accordingly, using simple information such as sex, age, or where someone lives to shape the approach.

With longitudinal studies, there’s also the matter of maintaining response rates when you’re asking people the same questions – perhaps every year. Communicating with participants can help to give them a sense of belonging, and an understanding of the value of their data, through examples of the impact of research.

Read more about the response rates for Understanding Society and the methods the Study uses to retain participants

Analysing the data

To obtain unbiased estimates of the characteristics of the population of interest, researchers using survey data must take the sample design into account in the way they analyse the data. If the survey had a multi-stage, or stratified, sample design, the stages and the strata should be incorporated into estimation. Similarly, if the probability of being sampled varied between subgroups, this should be taken into account through a process known as weighting in order for the results to represent the population. Weights can also correct for the effects of nonresponse if people with certain characteristics are known to be less likely to take part than others. With longitudinal surveys such as Understanding Society, it may be necessary to produce weights after each wave of data collection, so researchers can use data in a timely fashion.
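
As a toy illustration of why weights matter (the numbers are invented, and this is not how the Study’s weights are computed), compare an unweighted and a weighted estimate when one subgroup was sampled at twice the rate of the others:

```python
import numpy as np

# Invented example: binary outcomes from ten respondents.
# Suppose the first four belong to a subgroup sampled at twice the rate
# of everyone else, so they carry a design weight of 0.5 relative to the rest.
outcomes = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1], dtype=float)
weights = np.array([0.5, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])

unweighted_estimate = outcomes.mean()                      # ignores the sample design
weighted_estimate = np.average(outcomes, weights=weights)  # accounts for over-sampling

print(f"unweighted: {unweighted_estimate:.2f}, weighted: {weighted_estimate:.2f}")
```

In this made-up example the unweighted mean over-represents the over-sampled subgroup; weighting its members down restores their correct share of the population.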

If you’d like to learn more, Understanding Society runs a regular training course on using weights in the Study. You can also read our weighting guidance in the main survey User Guide

So, what makes a good survey?

There is no panacea, but a good understanding of the features of survey design that can affect the utility of the data is essential. All those features should then be developed with care. This understanding and care are more important than the survey budget.

Read the paper

Peter Lynn is Professor of Survey Methodology at ISER, Director of Survey Futures, Chair of the European Social Survey Sampling and Weighting Panel, and Understanding Society Associate Director for Methodology