A comparison of branched versus unbranched rating scales for the measurement of attitudes in surveys
The format of a survey question can affect responses. Branched survey scales are a question format that is increasingly used but little researched, and it is unclear whether branched scales function differently from unbranched scales. Based on the decomposition principle (Armstrong, Denniston, and Gordon 1975), if breaking a decision task into its component parts increases the accuracy of the final decision, one could expect that breaking an attitudinal item into its component parts would increase the accuracy of the final report. In practice, this is applied by first asking the respondent the direction of their attitude, then using a follow-up question to measure the intensity of that attitude (Krosnick and Berent 1993). A split-ballot experiment was embedded within the Understanding Society Innovation Panel, allowing a comparison of responses between branched and unbranched versions of the same questions. The reliability and validity of both versions were assessed, along with the time taken to answer the questions in each format. Within a total survey costs framework, this allows establishing whether any gains in reliability and validity are outweighed by the additional costs incurred through extended administration times. Findings show evidence of response differences between branched and unbranched scales, particularly a higher rate of extreme responding in the branched format. However, the differences in reliability and validity between the two formats are less clear cut. The branched questions took longer to administer, potentially increasing survey costs.
© The Author 2015. Published by Oxford University Press on behalf of the American Association for Public Opinion Research. All rights reserved.
Public Opinion Quarterly, 79, 443–470