Publication Summary
In the context of existing ‘quantitative’/‘qualitative’ schisms, this paper briefly reminds readers of the current practice of testing for statistical significance in social science research. This practice rests on a widespread confusion between two conditional probabilities: the probability of the observed data given that the null hypothesis is true, and the probability that the null hypothesis is true given the observed data. A worked example and other elements of logical argument demonstrate the flaw in statistical testing as currently conducted, even when strict protocols are met. Assessment of significance cannot be standardised, because it requires knowledge of an underlying figure that the analyst does not generally have and cannot usually know. Therefore, even if all assumptions are met, the practice of statistical testing in isolation is futile. Many readers then ask: what should we do instead? This is, perhaps, the wrong question. A better question might be: why should we expect to treat randomly sampled figures differently from any other kinds of numbers, or any other forms of evidence? What we could do ‘instead’ is use figures as we would most other data, with care and judgement. If all such evidence is equal, the implications for research synthesis, and for the way we generate new knowledge, are considerable.
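The gap between the two conditional probabilities can be sketched numerically. This is our own illustration, not the paper's worked example: the base rate below (the proportion of tested hypotheses where a real effect exists) is a hypothetical assumption chosen purely for demonstration, and it is exactly the kind of underlying figure the paper argues the analyst does not generally have and cannot usually know.

```python
# Hypothetical sketch: why P(significant result | H0 true) is not
# P(H0 true | significant result). All numbers are assumptions.

prior_effect = 0.10                 # assumed base rate of real effects (unknown in practice)
prior_h0_true = 1 - prior_effect    # proportion of cases where H0 is actually true
alpha = 0.05                        # P(significant | H0 true), the significance level
power = 0.80                        # P(significant | real effect), assumed test power

# Law of total probability: overall chance of a significant result
p_sig = alpha * prior_h0_true + power * prior_effect

# Bayes' theorem: chance H0 is true *given* a significant result
p_h0_given_sig = (alpha * prior_h0_true) / p_sig

print(f"P(significant | H0 true) = {alpha:.2f}")
print(f"P(H0 true | significant) = {p_h0_given_sig:.2f}")
```

Under these assumed figures, a "significant" result still leaves a 0.36 probability that the null hypothesis is true, not 0.05, and that value shifts with the assumed base rate, which is why the assessment cannot be standardised.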
CAER Authors
Prof. Stephen Gorard
University of Durham - Professor in the School of Education