Conference Paper European Conference on Quality and Methodology in Official Statistics
Predicting and Monitoring Design Effects: a tool for survey quality improvement
26 May 2004
A central objective of the European Social Survey (ESS) is to improve the quality of cross-national survey design and implementation. This requires advances in the standards of survey methodology in many countries and advances in the conceptualisation and implementation of ideas of standardisation and comparability. In this paper, we will describe and evaluate a novel aspect of the specification and implementation of the sample design for the ESS. The national sample size was specified in terms of the effective sample size of interviews. Guidance was provided to national teams on how to predict design effects due to variable selection probabilities (DEFF-P) and due to clustering (DEFF-C), and how to use these predictions to determine the necessary sample size. Additionally, a 4-person 'sampling panel' was created, to work closely with the national teams in developing an appropriate sample design.
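The specification described above — a national sample size stated in terms of effective interviews, inflated by predicted design effects — can be sketched as follows. This is an illustrative sketch, not the ESS's own tool; the function name and the example figures are our assumptions.

```python
import math

def required_sample_size(n_effective, deff_p, deff_c):
    """Nominal number of interviews needed so that
    n / (DEFF-P * DEFF-C) >= n_effective.

    n_effective : target effective sample size (interviews)
    deff_p      : predicted design effect due to variable selection probabilities
    deff_c      : predicted design effect due to clustering
    """
    return math.ceil(n_effective * deff_p * deff_c)

# Illustrative only: a target of 1,500 effective interviews with
# predicted DEFF-P = 1.25 and DEFF-C = 1.5 requires
# required_sample_size(1500, 1.25, 1.5) -> 2813 nominal interviews.
```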
The method used to predict DEFF-P involved applying a standard assumption of equal variance across weighting classes. We will present the predictions obtained using this method for each of the 22 countries that took part in the first round of the ESS in 2002-03. We will compare these predictions with estimates of the realised design effects from the round 1 data and will decompose the differences into a part due to imprecision in estimation of the frequency distribution of the design weights and a part due to violation of the assumption of equal variance.
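Under the equal-variance assumption across weighting classes, the prediction of DEFF-P reduces to Kish's standard formula based on the frequency distribution of the design weights, DEFF-P = n * sum(w_i^2) / (sum w_i)^2. A minimal sketch (the formula is standard; the weight values are illustrative, not ESS data):

```python
def deff_p(weights):
    """Kish's design effect due to unequal design weights,
    assuming equal element variance across weighting classes:
        DEFF-P = n * sum(w_i^2) / (sum(w_i))^2
    """
    n = len(weights)
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return n * s2 / (s1 * s1)

# Equal weights give DEFF-P = 1 (no precision loss from weighting):
# deff_p([2.0, 2.0, 2.0, 2.0]) -> 1.0
# Unequal weights inflate the variance:
# deff_p([1.0, 3.0]) -> 1.25
```

Note that the prediction depends only on the anticipated weight distribution, which is why imprecision in estimating that distribution is one of the two sources of error decomposed above.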
To predict DEFF-C, participating countries were encouraged to refer to estimates of intra-cluster correlation (roh) obtained from other national surveys. Where no such estimates were available, it was suggested that they assume roh = 0.02 and then apply the formula DEFF-C = 1 + (b_bar - 1)roh to their proposed design, where b_bar denotes the mean cluster size in the sample. We will present the predictions thus obtained for each participating country and will compare them with realised design effect estimates. We will assess the accuracy of the predictions of both b_bar and roh. Additionally, we will present some alternative methods of estimating roh based on data from surveys with complex designs. We will investigate the sensitivity of these methods to variation in b_bar and in the individual cluster sizes and will discuss the implications for assessments of this sort.
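The clustering approximation quoted above can be computed directly. A minimal sketch, using the default roh = 0.02 from the guidance described in the text; the example cluster size is illustrative:

```python
def deff_c(b_bar, roh=0.02):
    """Approximate design effect due to clustering:
        DEFF-C = 1 + (b_bar - 1) * roh
    b_bar : mean number of interviews per cluster
    roh   : intra-cluster correlation (default is the suggested 0.02)
    """
    return 1.0 + (b_bar - 1.0) * roh

# With b_bar = 20 interviews per cluster and the default roh,
# DEFF-C is approximately 1.38; an unclustered design (b_bar = 1)
# gives DEFF-C = 1 regardless of roh.
```

The linearity in b_bar is what makes the prediction sensitive to errors in the anticipated mean cluster size as well as in roh.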
We will draw conclusions regarding methods of *predicting* design effects as a tool for quality control and, in the context of regular or repeated surveys, methods of *estimating* design effects as a tool for quality maintenance and improvement. We will highlight implications for sample specification, for the provision of guidance on prediction, and for the choice of estimation methods across a wide range of sample designs.