An Examination of Power and Type I Errors for Two Differential Item Functioning Indices Using the Graded Response Model
Document Type
Article
Publication Date
4-1-2012
Abstract
This study examined two methods for detecting differential item functioning (DIF): Raju, van der Linden, and Fleer's (1995) differential functioning of items and tests (DFIT) procedure and Thissen, Steinberg, and Wainer's (1988) likelihood ratio test (LRT). The major research questions were which test provides the best balance of Type I errors and power, and whether the tests differ in their ability to detect different types of DIF. Monte Carlo simulations were conducted to address these questions. Equal and unequal sample size conditions were fully crossed with test lengths of 10 and 20 items. In addition, a and b parameters were manipulated to simulate DIF. Findings indicate that DFIT and the LRT both had acceptable Type I error rates when sample sizes were equal, but that DFIT produced too many Type I errors when sample sizes were unequal. Overall, the LRT exhibited greater power than DFIT to detect both a and b parameter DIF. However, DFIT was more powerful than the LRT when the last two b parameters contained DIF rather than the extreme b parameters.
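The abstract does not include the authors' simulation code, but the design it describes (generating polytomous responses under the graded response model and inducing DIF by shifting b parameters for the focal group) can be sketched in a few lines. The sketch below is illustrative only: the discrimination value, threshold values, sample sizes, and the size of the b-parameter shift are assumptions, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def grm_response(theta, a, b):
    """Sample one response under Samejima's graded response model.

    b is an increasing array of K-1 category thresholds, giving
    K ordered response categories coded 0..K-1.
    """
    # Cumulative category probabilities P(X >= k), k = 1..K-1
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Category probabilities: P(X = k) = P(X >= k) - P(X >= k+1)
    probs = -np.diff(np.concatenate(([1.0], p_star, [0.0])))
    return rng.choice(len(probs), p=probs)

# Illustrative item parameters (not from the study)
a = 1.5
b = np.array([-1.0, 0.0, 1.0, 2.0])
# b-parameter DIF on the last two thresholds for the focal group
b_focal = b + np.array([0.0, 0.0, 0.4, 0.4])

theta_ref = rng.standard_normal(1000)
theta_foc = rng.standard_normal(1000)
resp_ref = np.array([grm_response(t, a, b) for t in theta_ref])
resp_foc = np.array([grm_response(t, a, b_focal) for t in theta_foc])
```

In a full replication of the study's design, responses like these would be generated for 10- or 20-item tests under the crossed sample-size conditions, and each replication would then be analyzed with both DFIT and the LRT to tally rejections.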
Repository Citation
LaHuis, D. M., & Clark, P. C. (2012). An Examination of Power and Type I Errors for Two Differential Item Functioning Indices Using the Graded Response Model. Organizational Research Methods, 15(2), 229-246.
https://corescholar.libraries.wright.edu/psychology/579
DOI
10.1177/1094428111403815