Conference Paper Understanding Society Research Conference, 24-26 July 2013, University of Essex
Do branched rating scales have better test-retest reliability than unbranched scales? Experimental evidence from a three-wave panel survey
25 Jul 2013
The use of ‘branched’ formats for rating scales is becoming more widespread because of a belief that this format yields data that are more valid and reliable. Using this approach, the respondent is first asked about the direction of his or her attitude/belief and then, using a second question, about the intensity of that attitude/belief (Krosnick and Berent, 1993). The rationale for this procedure is that cognitive burden is reduced, leading to a higher probability of respondent engagement and superior quality data. Although this approach has recently been adopted by some major studies, notably the ANES, the empirical evidence for its presumed advantages in terms of data quality is actually quite meagre. Given that branching may involve trading off increased interview administration time for enhanced data quality, it is important that the gains are worthwhile. This paper uses data from an experiment embedded across three waves of the Innovation Panel, part of the ‘Understanding Society’ survey. Each respondent was interviewed once per year between 2009 and 2011. We capitalise on this repeated measures design to fit a series of models which compare test-retest reliability, and a range of other indices, for branched and unbranched question forms, using both single items and multi-item scales. We present the results of our empirical investigation and offer some conclusions about the pros and cons of branching.
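The branching procedure and the test-retest comparison described above can be illustrated with a minimal sketch. This is not the authors' analysis code: the category labels, the 7-point target scale, and the use of a plain Pearson correlation as the reliability index are all illustrative assumptions made here for exposition.

```python
# Sketch: combine a branched pair (direction question + intensity
# follow-up) into a single 7-point scale, then compute a simple
# test-retest index (Pearson correlation) between two waves.
# Labels, scale width, and data are hypothetical.

def combine_branched(direction, intensity):
    """Map a branched response pair onto a 1-7 scale.

    direction: 'disagree', 'neutral', or 'agree'
    intensity: 1 (slightly) to 3 (strongly); ignored when neutral
    """
    if direction == "neutral":
        return 4
    if direction == "agree":
        return 4 + intensity          # 5, 6, 7
    if direction == "disagree":
        return 4 - intensity          # 3, 2, 1
    raise ValueError(f"unknown direction: {direction}")

def pearson(x, y):
    """Plain Pearson correlation, used here as a test-retest index."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical data: the same five respondents at wave 1 and wave 2.
wave1 = [combine_branched(d, i) for d, i in
         [("agree", 3), ("agree", 1), ("neutral", 0),
          ("disagree", 2), ("disagree", 3)]]
wave2 = [combine_branched(d, i) for d, i in
         [("agree", 2), ("agree", 1), ("neutral", 0),
          ("disagree", 1), ("disagree", 3)]]
retest_r = pearson(wave1, wave2)
```

In the paper's design the same comparison is made for unbranched items, where the respondent answers a single 7-point question directly; the question of interest is whether the branched version yields a systematically higher test-retest correlation.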