20/09/2016 Computer Science Medicine Psychology
DOI: 10.1080/0142159X.2016.1230184 SemanticScholar ID: 31736806 MAG: 2512748358

Setting standards in knowledge assessments: Comparing Ebel and Cohen via Rasch

Publication Summary

Abstract

Introduction: It is known that test-centered methods for setting standards in knowledge tests (e.g. Angoff or Ebel) are problematic, because expert judges are not able to consistently predict the difficulty of individual items. A different approach is the Cohen method, which benchmarks the difficulty of the test against the performance of the top candidates.

Methods: This paper investigates the extent to which the Ebel method (and also the Cohen method) produces a consistent standard in a knowledge test when comparing adjacent cohorts. The two tests are linked using common anchor items and Rasch analysis, placing all items and all candidates on the same scale.

Results: The two tests are of a similar standard, but the two cohorts differ in their average abilities. The Ebel method is entirely consistent across the two years, but the Cohen method appears less so, whilst the Rasch equating itself has complications, for example evidence of overall misfit to the Rasch model and changes in difficulty for some anchor items.

Conclusion: Based on our findings, we advocate a pluralistic and pragmatic approach to standard setting in such contexts, and recommend the use of multiple sources of information to inform the decision about the correct standard.
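As a rough illustration of the two ideas the abstract refers to, the Python sketch below computes a Cohen-style cut score (a fixed fraction of the score of a high-performing benchmark candidate) and the probability of a correct response under the dichotomous Rasch model used for linking items and candidates on a common scale. The 60% multiplier and the 95th-percentile benchmark are common choices in the standard-setting literature, and the example numbers are made up; none of these values are taken from the paper itself.

```python
import math

def cohen_cut_score(scores, percentile=0.95, multiplier=0.60):
    """Cohen-style cut score: a fixed fraction of the score achieved by the
    candidate at a high percentile (here the 95th). The 0.60 multiplier and
    0.95 percentile are common conventions, not values from this study."""
    ranked = sorted(scores)
    idx = min(int(round(percentile * (len(ranked) - 1))), len(ranked) - 1)
    return multiplier * ranked[idx]

def rasch_probability(theta, b):
    """Dichotomous Rasch model: probability that a candidate of ability theta
    answers an item of difficulty b correctly, with both parameters on the
    same logit scale (which is what linking via anchor items provides)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Illustrative use with invented raw scores (not data from the study):
scores = [41, 48, 52, 55, 57, 60, 63, 66, 70, 78]
print(cohen_cut_score(scores))              # 0.60 * 78 = 46.8
print(rasch_probability(theta=1.2, b=0.4))  # ~0.69
```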

CAER Authors


Dr. Jonathan Darling

University of Leeds - Clinical Associate Professor in Paediatrics and Child Health and Medical Education

