2014 Mathematics Political Science
SemanticScholar ID: 55080265 MAG: 70512677

The widespread abuse of statistics by researchers: what is the problem and what is the ethical way forward?

Publication Summary

The paper presents and illustrates two areas of widespread abuse of statistics in social science research. The first is the use of techniques based on random sampling with cases that are not random and often not even samples. The second is that, even where the use of such techniques meets the assumptions for use, researchers almost universally report the results incorrectly. Significance tests and confidence intervals cannot answer the kinds of analytical questions most researchers want to answer. Once their reporting is corrected, the use of these techniques will almost certainly cease completely. There is nothing to replace them with, but there is no pressing need to replace them anyway. As this paper illustrates, removing the erroneous elements in the analysis is usually a sufficient improvement, enabling readers to judge claims more fairly. Without them, it is hoped that analysts will focus rather more on the meaning and limitations of their numeric results.

Which kind of statistics is being abused?

The term ‘statistics’ is an ambiguous one. It emerged from the collation and use of figures concerning the nation state from the seventeenth century onwards in the UK, and subsequently in the USA and elsewhere (Porter 1986). Such figures involved relatively simple analyses, and ‘political arithmetic’ was largely used to lay bare inefficiencies, inequalities and injustice (Gorard 2012). More recently, however, for many commentators the term has come to mean a set of techniques derived from sampling theory, and/or the products of those techniques. It is the abuse of such techniques that is the subject of this new paper. These techniques include the use of standard errors, confidence intervals and significance tests (both explicitly and disguised within more complex statistical modelling).
They are supposedly used to help analysts decide whether something found to be true of the sample achieved in a piece of research is also likely to be true of the known population from which that sample was drawn. All of these statistical techniques, including confidence intervals, are based on a modified form of the argument modus tollendo tollens. In formal logic, the argument of denying the consequent is as follows:

If A is true then B is true.
B is not true.
Therefore, A is not true also.

This is a perfectly valid argument, and the conclusion must be true, as long as the premising statements are all definitive. If B is not true then it is certain that A is not true. However, as soon as tentativeness or probability enters, the argument fails:

If A is true then B is probably true also
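The failure of the probabilistic form can be made concrete with a short Bayes'-rule calculation (an illustrative sketch, not taken from the paper; the probabilities chosen are hypothetical). Even when B is very probable given A, observing "not B" need not make A improbable:

```python
def p_A_given_not_B(p_A, p_B_given_A, p_B_given_not_A):
    """P(A | not B) by Bayes' rule; arguments are assumed prior and
    conditional probabilities."""
    # Total probability of observing "not B"
    p_not_B = p_A * (1 - p_B_given_A) + (1 - p_A) * (1 - p_B_given_not_A)
    # Bayes' rule: P(A | not B) = P(not B | A) * P(A) / P(not B)
    return p_A * (1 - p_B_given_A) / p_not_B

# A is common, and B almost always follows A, but B is rare otherwise:
result = p_A_given_not_B(p_A=0.95, p_B_given_A=0.9, p_B_given_not_A=0.01)
print(result)  # ≈ 0.66: after seeing "not B", A is still more likely than not
```

So "if A then probably B; not B; therefore probably not A" is not a valid inference, which is the logical gap the paper identifies at the heart of significance testing.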

CAER Authors


Prof. Stephen Gorard

University of Durham - Professor in the School of Education

