Publication Summary
Abstract

Objective: To examine whether randomized economic evaluations report clinical effectiveness estimates that are unrepresentative of the totality of the research literature.

Study Design and Setting: From 36 studies (12,294 patients) of enhanced care for depression, we compared pooled clinical effect sizes in studies published with a concurrent economic evaluation to those in studies without one, using meta-regression.

Results: The pooled clinical effect size of studies publishing an economic evaluation was almost twice as large as that of studies that did not (pooled standardized mean difference [SMD] in randomized controlled trials [RCTs] with an economic evaluation = 0.34, 95% confidence interval [CI] 0.23 to 0.46; pooled SMD in RCTs without an economic evaluation = 0.17, 95% CI 0.10 to 0.25). This difference was statistically significant (between-group difference in SMD = −0.17; 95% CI −0.31 to −0.02; P = 0.02).

Conclusion: Publication of an economic evaluation of enhanced care for depression was associated with a larger clinical effect size. Cost-effectiveness estimates should be interpreted with caution, and the representativeness of the clinical data on which they are based should always be considered. Further research is needed to explore this association and the potential for bias in other areas.
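For readers unfamiliar with the method, the analysis above amounts to pooling study-level SMDs within each subgroup under a random-effects model and testing the between-subgroup difference. The sketch below is not the paper's code: it uses the standard DerSimonian-Laird estimator and entirely hypothetical per-study effects and variances, purely to illustrate the shape of the comparison.

```python
import math

def pool_dl(effects, variances):
    """Random-effects pooling of study effects via the DerSimonian-Laird estimator."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c) if c > 0 else 0.0  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

# Hypothetical per-study SMDs and variances for the two subgroups (not the study data).
with_econ = ([0.42, 0.30, 0.28], [0.010, 0.015, 0.012])
without_econ = ([0.20, 0.15, 0.18], [0.008, 0.011, 0.009])

p1, se1 = pool_dl(*with_econ)
p2, se2 = pool_dl(*without_econ)

# Between-subgroup difference, tested with a two-sided z test.
diff = p2 - p1
se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
z = diff / se_diff
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"with econ eval:    SMD = {p1:.2f} (95% CI {p1 - 1.96*se1:.2f} to {p1 + 1.96*se1:.2f})")
print(f"without econ eval: SMD = {p2:.2f} (95% CI {p2 - 1.96*se2:.2f} to {p2 + 1.96*se2:.2f})")
print(f"difference:        {diff:.2f} (z = {z:.2f}, p = {p_value:.3f})")
```

A subgroup z test like this is algebraically equivalent to a meta-regression with a single binary covariate (economic evaluation published: yes/no), which is how the paper frames the comparison.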
CAER Authors

Prof. Simon Gilbody
University of York - Director of the Mental Health and Addictions Research Group