Programme Evaluation for Policy Analysis
Programme Evaluation for Policy Analysis (PEPA) is a programme of research that examines the processes and methods used to evaluate the effectiveness of state interventions such as new policies and training programmes.
PEPA’s research team aims to stimulate a step change in the conduct of programme evaluation and to maximise its value, both by improving the design of evaluations and by improving the way their findings add to the knowledge base.
The aims of the programme are to:
- advance our understanding of the value of randomised controlled trials in social science
- improve inference for policy evaluation
- clarify the key relationships between alternative methods for policy evaluation
- understand how best to combine quasi-experimental methods with dynamic behavioural models
- determine how to measure social networks and then use such data for programme evaluation
Answering questions about the effectiveness of state interventions in economic and social domains – such as ‘Did this training programme help the participants get back to work?’, or ‘Did this child health programme improve children’s outcomes?’ – is the goal of programme evaluation.
Robust programme evaluation is difficult: researchers have to estimate causal impacts credibly and understand the uncertainty in their estimates, and policy-makers have to determine how best to synthesise and generalise the lessons learned from multiple studies.
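To make the two researcher tasks concrete, here is a minimal sketch, with invented outcome data (not from any PEPA study), of the simplest credible impact estimate an RCT permits: a difference in mean outcomes between treated and control groups, together with a standard error that quantifies the uncertainty in that estimate.

```python
# Minimal sketch of an RCT impact estimate with its uncertainty.
# The outcome data below are invented for illustration only.
from math import sqrt
from statistics import mean, variance

treated = [0.9, 1.4, 1.1, 1.6, 1.2, 1.5]  # e.g. months in work after training
control = [0.8, 1.0, 0.7, 1.1, 0.9, 1.0]

def difference_in_means(t, c):
    """Estimated average treatment effect and its (Welch-style) standard error."""
    ate = mean(t) - mean(c)
    se = sqrt(variance(t) / len(t) + variance(c) / len(c))
    return ate, se

ate, se = difference_in_means(treated, control)
print(f"ATE = {ate:.3f}, 95% CI = [{ate - 1.96*se:.3f}, {ate + 1.96*se:.3f}]")
```

The confidence interval is the bridge between the two tasks: a point estimate without it tells a policy-maker little about how far the lesson can be generalised.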
Researchers who undertake programme evaluations will benefit from the new methods developed through PEPA, together with a greater understanding of how to implement them. Policy-makers commissioning evaluations will benefit from a clearer understanding of the advantages and disadvantages of different evaluation approaches. Academics and policy-makers interested in the labour market, education and health policies studied in our substantive applications will also benefit, as will the public, as the new techniques being developed and promoted ultimately lead to better policies.
What the researchers will do
The research team will undertake five separate projects as part of the PEPA programme.
A reassessment of the ERA
Can non-experimental methods replicate the results of randomised controlled trials (RCTs)? How can we combine results from RCTs with models of labour market behaviour? How do general equilibrium effects alter estimated impacts of training programmes?
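The replication question above can be illustrated with a synthetic example (the numbers are invented and have nothing to do with ERA results): when the more able are more likely to join a programme, a naive comparison of joiners with non-joiners diverges sharply from the RCT benchmark.

```python
# Synthetic illustration of selection bias in a non-experimental comparison.
import random

random.seed(0)
TRUE_EFFECT = 0.5  # assumed true impact of the programme on the outcome
ability = [random.gauss(0, 1) for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

# RCT: treatment assigned at random, so the two groups are comparable
rct_estimate = mean([a + TRUE_EFFECT for a in ability[:5000]]) - mean(ability[5000:])

# Non-experimental: the more able self-select into the programme
naive_estimate = (mean([a + TRUE_EFFECT for a in ability if a > 0])
                  - mean([a for a in ability if a <= 0]))

print(f"RCT estimate:   {rct_estimate:.2f}")   # close to the true effect
print(f"Naive estimate: {naive_estimate:.2f}") # true effect plus selection bias
```

Whether non-experimental methods can correct such bias in practice, rather than in a stylised simulation, is precisely what benchmarking against an RCT tests.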
Improving inference for policy evaluation
What are the correct inference and power calculations where data have a multi-level structure and serially correlated shocks? What is the correct inference when policy impacts are complex functions of estimated parameters? What is the effect of time-limited in-work benefits on job retention?
Control functions in policy evaluation
What is the link between control functions and structural or behavioural models? Can we weaken the control function approach to estimate bounds on treatment effects? How are lessons from multiple evaluations best synthesised?
Combining quasi-experimental methods with dynamic behavioural models
How can we best use ex post evaluations in ex ante analysis? How do life-cycle time limits on welfare receipt affect behaviour? How are education decisions affected by labour market policies?
Social networks and programme evaluation
How can we best collect data on social networks? How and why is the impact of policy affected by social networks? Can social networks explain heterogeneity in the impact of a health intervention?
Details of the PEPA team are available on the IFS website.