It has often been said that COVID-19 has made amateur epidemiologists of us all, and perhaps less often, amateur behavioural scientists and economists too.
As society has picked its way through the pandemic, policymakers, journalists, medical professionals, school and union leaders, charities, and the general public have all sought out data to make the case for action (or inaction), or for the effectiveness or ineffectiveness of different interventions and restrictions — for outcomes ranging from school absence rates and food insecurity all the way to hospitalisations and deaths.
The Office for National Statistics, among other organisations, has done a spectacular job of documenting these outcomes during the pandemic, producing a wide range of publicly available datasets. But to truly measure the impact of any given policy – its causal effect – we need a good idea of what would have happened had a different action been taken. The extensive randomised clinical trials of the COVID-19 vaccines, undertaken both before their approval and continuing now (for example, a randomised trial of different vaccines and dosing intervals among pregnant women), provided exactly that information. On the other hand, for practical and ethical reasons there were no randomised trials of the imposition, and indeed removal, of lockdowns and the tier system, of mask-wearing guidance, or of different models of school meal provision during school closures, for example.
But it is not too late to attempt to evaluate the impacts of these policies, and certainly not too late to roll out new policies in such a way that we can evaluate their impacts in the future.
Producing the best possible outcomes for society
The aim of the NCRM course, Introduction to Impact Evaluation, which I will be delivering with my colleague Emilia Del Bono on 25-26 November, is to equip participants to do just that. We want them to be able to look at a policy and have an intuition about which evaluation method is best to use and what limitations it would have. Better still, we want participants to understand that it is much easier to evaluate a programme or policy if the evaluation is designed in from the start!
The course will run over Zoom, across two consecutive mornings. Each of the four sessions will look at one evaluation method and the situations in which it would and would not produce reliable results. Session 1 will address randomised trials and experiments; Session 2, matching and regression methods; Session 3, difference-in-differences; and Session 4, ‘instrumental variables’.
We’ll keep algebra to a minimum, and will not cover how to implement these methods in statistical software. Instead, we’ll encourage participants to think about how well each method captures the counterfactual (what would have happened to the same people had they not received the treatment), and therefore deals with selection bias. We’ll keep returning to the same set of examples, and encourage participants to critique the methods in different situations.
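The danger of ignoring the counterfactual can be shown with a tiny simulation. This is a minimal sketch of my own, not course material: every person's true treatment effect is +2, but when people with worse baseline outcomes opt in to the programme themselves, a naive treated-versus-untreated comparison is badly biased, while random assignment recovers the true effect.

```python
import random

random.seed(0)

TRUE_EFFECT = 2.0  # every treated person's outcome rises by exactly 2

# 100,000 hypothetical people with varying baseline outcomes.
baselines = [random.gauss(10, 2) for _ in range(100_000)]

def mean(xs):
    return sum(xs) / len(xs)

# Scenario 1: self-selection. People with below-average baselines opt in,
# so the treated group already differed from the controls before treatment.
treated = [b + TRUE_EFFECT for b in baselines if b < 10]
control = [b for b in baselines if b >= 10]
biased_estimate = mean(treated) - mean(control)  # far from 2.0

# Scenario 2: randomisation. A coin flip assigns treatment, so the two
# groups have the same expected baseline and the comparison is fair.
treated_r, control_r = [], []
for b in baselines:
    if random.random() < 0.5:
        treated_r.append(b + TRUE_EFFECT)
    else:
        control_r.append(b)
unbiased_estimate = mean(treated_r) - mean(control_r)  # close to 2.0

print(f"self-selected comparison: {biased_estimate:.2f}")
print(f"randomised comparison:    {unbiased_estimate:.2f}")
```

Here the self-selected comparison even gets the sign of the effect wrong: the treated group looks worse off than the controls purely because of who chose to take part. That gap between the naive estimate and the truth is selection bias.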
There will be many attempted contributions, small and large, to the process of ‘building back better’, ‘levelling up’, and every other aspect of the national and international recovery from COVID-19. Not all will ‘work’, and certainly not all will be value-for-money, but we’ll only produce the best possible outcomes for society if we take care to evaluate what works and what doesn’t, and apply those lessons in the future. This course is designed to equip researchers, charities and civil servants with the tools to find the vital first piece of that puzzle: what is the causal impact of a policy?
Read the original article on the NCRM website here