Publication Summary
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre- and post-test design and the post-test only design, before explaining briefly how measurement errors propagate according to error theory. The substance of the paper is a series of comparisons using the same measurements, all assumed to carry a small initial error, tracing what happens to that error under the two experimental designs. The findings from these calculations and simulations are that, although post-test only and pre- and post-test designs yield different ‘manifest’ results with the same data, the substantive conclusions drawn would be similar in most real-life situations. However, if these manifest results are assumed to be in error, stemming from small initial errors in the measurements at pre- and post-test, then these substantive conclusions could be completely wrong. In one example, the pre- and post-test design propagates an initial maximum measurement error of 10% into an error of over 60,000% in the answer. In general, and perhaps counter-intuitively, the post-test only results are less misleading. The paper ends by summarizing the lessons drawn. The key message is that, all other things being equal, the post-test only design is to be preferred. We may also need to use bigger samples, more strictly accurate measures capable of objective calibration, and to focus on seeking larger effect sizes.
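To see why the gain score amplifies error so dramatically, consider a minimal sketch in Python using purely illustrative numbers, not the paper's data: a maximum measurement error of 10% on each score is modest on its own, but the absolute errors on pre- and post-test add when the two scores are subtracted, while the true gain itself may be tiny.

```python
# A minimal sketch (illustrative numbers, not the paper's data) of how a
# small maximum measurement error propagates through a pre-/post-test
# gain score. Key point: absolute errors on pre and post add when the
# scores are subtracted, while the true gain may be very small.

pre_true, post_true = 50.0, 50.5   # hypothetical true group scores
rel_err = 0.10                     # assumed 10% maximum measurement error

true_gain = post_true - pre_true   # true gain = 0.5

# Worst cases: post measured high while pre is measured low, and vice versa
max_gain = post_true * (1 + rel_err) - pre_true * (1 - rel_err)
min_gain = post_true * (1 - rel_err) - pre_true * (1 + rel_err)

# Maximum relative error in the gain score, compared with the true gain
gain_err = max(abs(max_gain - true_gain), abs(min_gain - true_gain)) / abs(true_gain)

print(f"true gain:               {true_gain:.2f}")
print(f"observed gain could be:  [{min_gain:.2f}, {max_gain:.2f}]")
print(f"max relative error:      {gain_err:.0%}")  # about 2,000% here

# A post-test only comparison never forms this difference, so the
# relative error in each group's score stays at the initial 10%.
```

With a smaller true gain the amplification grows without bound, which is how a 10% initial error can become an error of tens of thousands of percent in the reported answer; a post-test only comparison avoids the subtraction and so keeps the error at roughly its initial size.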
CAER Authors
Prof. Stephen Gorard
University of Durham - Professor in the School of Education