ABSTRACT Social science datasets usually have missing cases and missing values, and all such missing data has the potential to bias future research findings. However, many research reports ignore the issue of missing data, consider only some aspects of it, or do not report how it is handled. This paper rehearses the damage caused by missing data. It then briefly considers eight different approaches to handling missing data so as to minimise that damage, their underlying assumptions, and their likely costs and benefits: complete case analysis, complete variable analysis, single imputation, multiple imputation, maximum likelihood estimation, default replacement values, weighting, and sensitivity analyses. Using only complete cases should be avoided wherever possible. The paper suggests that the more complex, modelling approaches to replacing missing data rest on questionable methodological and philosophical assumptions, and may in any case offer no clear advantage over simpler approaches such as default replacements. It makes sense to report all possible forms of missing data, to report everything that is known about the characteristics of cases with missing values, to conduct simple sensitivity analyses of the potential impact of missing data on the substantive results, and to retain the knowledge of missingness when using any form of replacement value.
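The closing recommendations (retain knowledge of missingness when replacing values, and run simple sensitivity analyses) can be sketched in a few lines. This is a hypothetical illustration, not the paper's own procedure: the variable name, default values, and data are assumptions for demonstration.

```python
import numpy as np
import pandas as pd

# Illustrative data: a single variable with two missing values.
df = pd.DataFrame({"score": [12.0, np.nan, 15.0, np.nan, 9.0]})

# Record which cases were originally missing BEFORE replacing anything,
# so later analyses can still test whether missingness matters.
df["score_missing"] = df["score"].isna()

# Replace missing values with a simple default (here, the observed mean).
df["score"] = df["score"].fillna(df["score"].mean())

# A crude sensitivity analysis: re-fill the missing cases with extreme
# defaults (observed minimum and maximum) and compare the results.
low = df["score"].where(~df["score_missing"], df["score"].min())
high = df["score"].where(~df["score_missing"], df["score"].max())
print(df["score"].mean(), low.mean(), high.mean())
```

If the substantive conclusion survives across the low and high scenarios, the missing data are unlikely to be driving it; if it flips, that fragility should be reported.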