An Application Of Item Response Theory To Language Testing

An Application of Item Response Theory to Language Testing

Author: Inn-Chull Choi
Language: en
Publisher: Peter Lang Pub Incorporated
Release Date: 1992
This book explores the appropriateness of Item Response Theory (IRT) in language testing. It investigates the dimensionality of the reading tests of the Cambridge First Certificate in English (FCE) and the Test of English as a Foreign Language (TOEFL), and the relative fit of the 1-, 2-, and 3-parameter IRT models, with the Rasch model examined particularly closely. Finding that the Rasch model fails to provide an adequate fit for the data, while the 2- and 3-parameter models fit considerably better, the study recommends that the predominant use of the Rasch model in language testing be re-evaluated. Finally, it shows that moderate departures from unidimensionality do not necessarily lead to an unacceptable model fit, nor does the use of IRT in test development guarantee that the unidimensionality assumption will be satisfied.
An Application of Item Response Theory to Language Testing

Although the application of IRT to language testing has recently attracted much attention, no model-data fit research has been conducted to explore the appropriateness of IRT modeling in language testing. The tenability of the strong assumption of unidimensionality has not been studied systematically, and little is known about the effects of departures from unidimensionality on parameter estimation and model fit. Furthermore, no study has examined the adequacy of the Rasch model, which has been predominant in language testing. The present study investigated the dimensionality of the reading and vocabulary sections of two widely used English-as-a-foreign-language proficiency tests, the University of Cambridge First Certificate in English (FCE) and the Test of English as a Foreign Language (TOEFL). It also compared the relative fit of three IRT models: the 1-, 2-, and 3-parameter models. Dimensionality was investigated using Stout's method, factor analyses, and Bejar's method. Then, employing fit statistics, invariance checks, and residual analyses, the study examined the adequacy of the Rasch model and the effects of multidimensionality on parameter estimation and model fit. The results suggest the following:

(1) Even the TOEFL reading subtest, developed using the three-parameter IRT model, was multidimensional. This appears to be due to underlying factors associated with the reading passages.
(2) The FCE reading and vocabulary subtest, based on the traditional British examination system, was found to be essentially unidimensional.
(3) Bejar's approach to checking dimensionality appears to be inadequate, in that its results differ across the 1-, 2-, and 3-parameter models.
(4) The Rasch model clearly fails to provide an adequate fit for these data, suggesting that its prevailing use in language testing needs to be re-evaluated.
(5) The 3-parameter model fit the data only marginally better than the 2-parameter model, suggesting that for language tests the discrimination parameter matters more than the guessing parameter.
(6) A moderate departure from unidimensionality does not appear to invalidate IRT modeling of the data, suggesting the possibility of a more justified implementation of IRT modeling in language testing.
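The three nested models compared in the study differ only in which item parameters are free. A minimal sketch of the item characteristic curve under the 3-parameter logistic model, from which the Rasch (1PL) and 2PL models fall out as special cases (function and parameter names here are illustrative, not taken from the book):

```python
import math

def irt_prob(theta, b, a=1.0, c=0.0):
    """Probability of a correct response under the 3-parameter
    logistic (3PL) IRT model.

    theta: examinee ability
    b:     item difficulty
    a:     item discrimination (fixing a=1 gives the Rasch/1PL model)
    c:     lower asymptote, i.e. guessing parameter (c=0 gives the 2PL)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Same item and examinee under the three nested models:
rasch    = irt_prob(0.5, b=0.0)                    # 1PL: a=1, c=0
two_pl   = irt_prob(0.5, b=0.0, a=1.7)             # 2PL: a is free
three_pl = irt_prob(0.5, b=0.0, a=1.7, c=0.2)      # 3PL: adds guessing
```

Note that the guessing parameter c raises the floor of the curve: as theta falls, the 3PL probability approaches c rather than zero, which is how the model accounts for correct answers obtained by guessing on multiple-choice items.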
Applying Item Response Theory in Language Test Item Bank Building

Author: Gábor Szabó
Language: en
Publisher: Language Testing and Evaluation
Release Date: 2008
Although Item Response Theory has become a widely recognized tool in language testing research, it is still used infrequently in practical language assessment projects. This book provides a theoretical overview as well as practical guidance on applying IRT to item bank building in a language testing context, illustrated through a particular project in a higher education setting.