Publication Summary
Background: Educational interventions are often complex, and their outcomes may be due to factors not examined in the impact evaluation. Educational evaluations using a randomised controlled trial (RCT) design therefore need to go beyond obtaining the impact results alone.

Purpose: Process evaluation is embedded in the evaluation design to enhance contextual understanding of the outcome results of an RCT. In the context of evaluation studies, however, reporting on fidelity to the research design protocol is also important, and this can be done using the same process evaluation approaches used to study the mechanisms of interventions.

Research design and method: This paper reports on two RCTs led by school staff in their own schools, as part of an aggregated trial study managed and conducted at secondary school cluster level, with expert advice from us as independent evaluation advisors. The interventions evaluated were two highly structured programmes targeting improvement in the literacy attainment of pupils at risk of failing to achieve the expected levels. Assessing the effect on literacy performance was the primary objective, but the research design also included methods for understanding how the interventions were implemented. Our main findings on the feasibility of school-led aggregated trials are informed by our process evaluation, which included participant observations of the training, observations of intervention sessions, and interviews with pupils, teachers and school leaders. Data on fidelity to the interventions were collected, and, using similar techniques, we also evaluated the feasibility of the research process led by school leaders and the possible barriers to and challenges of RCT management by schools. We included information on randomisation procedures, the perceptions of pupils and teachers involved in the study, and the programme website resources.

Results: The primary outcome results suggested that the interventions showed promise in raising disadvantaged pupils' reading scores during the transition from primary to secondary school. The process evaluation revealed considerable potential benefits from involving school leaders as evaluators, and the paper describes a way forward here. However, there were some indications of incomplete compliance with the randomisation process, which might have resulted in an initial imbalance in pre-test scores between the treatment and control groups in one trial.

Conclusion: Process evaluation cannot answer research questions about impact outcomes, but for a general understanding of outcomes it is important to report how the impact results were achieved. This is where rich, in-depth data of the kind we describe can assist.
CAER Authors
Prof. Stephen Gorard
University of Durham – Professor in the School of Education