Publication Summary
We report here the overall results of a cluster randomised controlled trial of computer-aided instruction with 672 Year 7 pupils in 23 secondary school classes in the north of England. A new piece of commercial software, claimed on the basis of the publisher’s own testing to be effective in improving reading after just six weeks of classroom use, was compared over 10 weeks (one term) with standard practice in literacy provision. Pupil literacy was assessed before and after the trial using another piece of commercial software that tests precisely the kinds of skills covered by the pedagogical software. Both the treatment group and the comparison group improved their tested literacy, so in a sense the publisher’s claim was justified. However, the comparison group improved its literacy scores considerably more than the treatment group, with a standardised improvement of +0.99 compared to +0.56 (an overall “effect” size of −0.37), suggesting that the software approach yields no relative advantage and may even disadvantage some pupils. On this evidence, the use of the software, of a kind in very common use across schools in England, was a waste of resources. This could be an important corrective finding for an area of schooling that has attracted intense policy and practice attention. The particular software used has, of course, now been superseded by newer products from the same and other publishers. The paper discusses the implications of these results for the use of such software to teach literacy more widely, for the way in which publisher claims are worded, and for the research community in relation to the feasibility of conducting pragmatic trials in school settings.
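For context, one common way of defining the kind of “effect” size reported above, which we assume here as a sketch rather than the paper’s exact calculation, is the difference between the groups’ mean gain scores divided by a pooled standard deviation:

$$
d \;=\; \frac{\bar{g}_{T} - \bar{g}_{C}}{SD_{\text{pooled}}},
\qquad
SD_{\text{pooled}} \;=\; \sqrt{\frac{(n_{T}-1)\,SD_{T}^{2} + (n_{C}-1)\,SD_{C}^{2}}{n_{T} + n_{C} - 2}}
$$

where $\bar{g}_{T}$ and $\bar{g}_{C}$ are the mean gains of the treatment and comparison groups and $n_{T}$, $n_{C}$ their sizes; these symbols are illustrative placeholders, not values taken from the paper. Note that the quoted −0.37 is not simply +0.56 − 0.99 = −0.43: the per-group standardised improvements and the overall effect size are presumably standardised against different deviations, as is common in such reports.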
CAER Authors
Prof. Stephen Gorard
Professor in the School of Education, University of Durham