Survey Futures, the ESRC-funded survey data collection methods collaboration, has issued a position statement with their expert opinion on response rates.
The statement is co-signed by the UK’s leading survey experts: Professor Olga Maslovskaya (University of Southampton), Professor Peter Lynn (University of Essex), Professor Lisa Calderwood (UCL), Professor Gabriele Durrant (University of Southampton), Professor Rory Fitzgerald (City St George’s, University of London), Gerry Nicolaas (National Centre for Social Research (NatCen)) and Joel Williams (Verian).
Surveys play a crucial role in informing policy and financial decisions, as well as in enhancing our understanding of major issues. Confidence in survey quality is vital – we need to be sure that survey estimates are accurate. Response rates have for many years been a key survey performance indicator and are often interpreted as the sole or most important measure of survey quality, which is misleading.
Surveys with low response rates can still produce high-quality data, and the reverse can also be true. In recent years, declining response rates and, for some surveys, a switch from interviewer-administered to self-completion data collection have raised questions about the role of response rates and sparked public debate around survey quality.
This paper presents our expert perspective on response rates and survey quality. Supporting Material outlines the key reasons for declining response rates and offers practical recommendations.
• Measuring survey quality is important as survey data users need to be reassured that a survey data source is fit for its intended purpose. Survey quality is multi-faceted and includes various indicators that relate to the whole survey process.
• Quality requirements for data collection need to be clearly articulated at the outset. These should draw from a wide suite of relevant indicators and should reflect the needs and priorities of data users.
• Non-response bias is a key component of survey quality. It occurs when non-respondents differ significantly from respondents in ways that affect the survey’s outcome. This bias can distort estimates of population parameters, such as means, proportions, or regression coefficients. If certain groups are under- or over-represented in the survey responses, the results will not accurately reflect the target population.
• Response rates are not a reliable indicator of non-response bias, and even less so of overall survey quality.
• An assessment of non-response bias should include assessments of sample representativeness and sample composition. There are many measures of sample representativeness including comparisons of response rates across sample sub-groups, comparisons of estimates against high-quality benchmarks, and estimation of bias measures such as the dissimilarity index, R-indicators or coefficients of variation in response propensities, among other approaches. Please see Appendix 1 of the Supporting Material for details. The choice of appropriate indicators will depend on the availability of external benchmarks and other necessary information.
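Two of the representativeness measures named above – the R-indicator and the coefficient of variation of response propensities – can be illustrated with a minimal sketch. The data and the cell-based propensity model below are hypothetical (real propensities are usually modelled from richer sampling-frame variables); the formulas used are the standard ones: R = 1 − 2·SD(propensities), and CV = SD(propensities) / mean(propensities).

```python
import numpy as np

# Hypothetical issued sample: four frame subgroups (e.g. age bands) with
# deliberately different true response propensities -- toy data only.
rng = np.random.default_rng(42)
n = 10_000
subgroup = rng.integers(0, 4, size=n)
true_prop = np.array([0.55, 0.45, 0.35, 0.25])
responded = rng.random(n) < true_prop[subgroup]

# Estimate response propensities with a simple cell-based model:
# the observed response rate within each subgroup.
prop_hat = np.array([responded[subgroup == g].mean() for g in range(4)])
rho = prop_hat[subgroup]  # estimated propensity assigned to each sample unit

# R-indicator: 1 means perfectly representative response (equal propensities),
# lower values indicate more variable propensities across the sample.
r_indicator = 1 - 2 * rho.std()

# Coefficient of variation of response propensities (scale-free variability).
cv = rho.std() / rho.mean()

print(f"Overall response rate: {responded.mean():.3f}")
print(f"R-indicator:           {r_indicator:.3f}")
print(f"CV of propensities:    {cv:.3f}")
```

Note that both indicators depend only on variables known for respondents and non-respondents alike, which is why (as discussed below) they require a probability sample drawn from a frame with auxiliary information.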
• Response rates still have a role to play and can be used, alongside other indicators, as a measure of survey process quality. In this context, their interpretation is relative rather than absolute. If a survey achieves a substantially lower response rate compared to similar surveys or to previous instances of the same survey conducted recently, this may indicate problems with survey implementation that warrant investigation. Response rates should not be used in isolation to set quality targets or to communicate overall survey quality.
• Striving for high response rates remains important. Surveys should continue to strive for high response rates in an informed and targeted way, focusing on trying to achieve similar response rates across population sub-groups where possible (as well as across other dimensions, such as different types of geographic areas), rather than solely aiming for a high overall response rate.
• Response rates are important for determining issued sample sizes and have cost implications. Initial sample sizes should be based on realistic estimates of achievable response rates including among population sub-groups.
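The sample-size arithmetic behind this point is straightforward: the issued sample must be inflated by the expected response rate (and, where relevant, the expected eligibility rate). A minimal sketch, with hypothetical rates:

```python
import math

def issued_sample_size(target_responses: int,
                       expected_response_rate: float,
                       eligibility_rate: float = 1.0) -> int:
    """Issued n = target responses / (response rate x eligibility rate),
    rounded up. Rates here are illustrative assumptions, not recommendations."""
    return math.ceil(target_responses / (expected_response_rate * eligibility_rate))

# Overall sample: 1,000 responses needed at an expected 40% response rate.
print(issued_sample_size(1_000, 0.40))  # -> 2500

# A harder-to-reach subgroup with a 20% expected response rate needs a
# proportionally larger issued sample for the same number of responses.
print(issued_sample_size(250, 0.20))    # -> 1250
```

Underestimating the subgroup response rate leaves that subgroup short of responses; overestimating it inflates fieldwork costs – hence the emphasis on realistic, subgroup-level estimates.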
• Probability sampling remains strongly recommended whenever possible, even with lower response rates. Probability sampling is a simple way to avoid a myriad of potential sources of selection bias. Furthermore, only with probability sampling can non-response be evaluated and adjusted using sample data that are common to respondents and non-respondents. The distinctive advantage of probability sampling is that it gives everyone a chance of being selected, including under-represented individuals or households.
• Standardised response rates should be reported. Reporting both unweighted and design-weighted response rates can be informative for different purposes. Unweighted response rates help assess fieldwork quality, while design-weighted response rates are informative of representativeness.
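The distinction between the two rates can be shown with a minimal sketch, assuming a design with two strata sampled at unequal rates (the weights and case outcomes below are toy values):

```python
# Each issued eligible case: (design_weight, responded?) -- hypothetical data.
# Stratum A is oversampled (weight 1.0), stratum B undersampled (weight 4.0).
cases = [
    (1.0, True), (1.0, True), (1.0, False), (1.0, True),    # stratum A
    (4.0, False), (4.0, True), (4.0, False), (4.0, False),  # stratum B
]

# Unweighted response rate: respondents / eligible issued cases.
# Reflects fieldwork performance on the cases actually worked.
unweighted_rr = sum(1 for _, r in cases if r) / len(cases)

# Design-weighted response rate: weighted respondents / weighted eligible cases.
# Reflects the share of the *population* represented by the responses.
weighted_rr = (sum(w for w, r in cases if r) / sum(w for w, _ in cases))

print(f"Unweighted RR:      {unweighted_rr:.3f}")   # -> 0.500
print(f"Design-weighted RR: {weighted_rr:.3f}")     # -> 0.350
```

Here the undersampled stratum responds less often, so the design-weighted rate falls well below the unweighted one – a gap that the unweighted rate alone would conceal.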
09 June 2025
The citation for this Statement is Maslovskaya, O., Lynn, P., Calderwood, L., Durrant, G., Fitzgerald, R., Nicolaas, G. and Williams, J. (2025) Survey Futures Position Statement on Response Rates. 09 June 2025. Available at https://surveyfutures.net/wp-content/uploads/2025/06/Response-Rates-Posititon-Statement_Survey-Futures.pdf