Applying IRT Model in Validating a Dichotomously Scored Test Item

Abstract

This study conducted an item analysis to validate a dichotomously scored test using the Rasch measurement model, an Item Response Theory (IRT) approach to test validation. It aimed to improve the quality of the test items in a departmentalized mathematics examination whose content validity had already been established by subject-matter experts. Response data gathered from randomly selected college students who had taken the examination were fitted to the model. The Rasch analysis revealed that the test was relatively difficult, indicating that it needs further revision or that a better teaching strategy is needed to facilitate learning. The results also showed several misfitting items and evidence of multidimensionality, suggesting that these items should be revised, amended, or discarded. However, both item and person reliabilities were high. These findings suggest that an objective measurement approach to test validation, such as the Rasch measurement model, can help diagnose test items with greater precision and, consequently, support the construction of a better measure of students’ abilities.
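The paper itself reports no code, but the dichotomous Rasch model the abstract refers to can be illustrated with a short sketch. The example below is an assumption-laden illustration, not the author's actual analysis: it simulates a 0/1 response matrix for hypothetical examinees and items under the Rasch model, P(correct) = exp(theta - b) / (1 + exp(theta - b)), and recovers item difficulties by joint maximum likelihood using only NumPy and SciPy. The sample sizes, parameter values, and estimation routine are all illustrative; applied Rasch analyses typically use dedicated software such as Winsteps, RUMM, or the R packages eRm and TAM.

import numpy as np
from scipy.optimize import minimize

def rasch_prob(theta, b):
    # P(correct) for each person-item pair under the dichotomous Rasch model:
    # P = 1 / (1 + exp(-(theta - b))), theta = person ability, b = item difficulty (logits).
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def neg_log_likelihood(params, responses, n_persons, n_items):
    theta = params[:n_persons]              # person abilities
    b = params[n_persons:]                  # item difficulties
    p = rasch_prob(theta, b)
    eps = 1e-9                              # guard against log(0)
    return -np.sum(responses * np.log(p + eps) + (1 - responses) * np.log(1 - p + eps))

# Hypothetical data: 200 examinees, 30 dichotomously scored items.
rng = np.random.default_rng(0)
true_theta = rng.normal(0.0, 1.0, 200)
true_b = rng.normal(0.5, 1.0, 30)           # mean difficulty above mean ability, i.e. a hard test
data = (rng.random((200, 30)) < rasch_prob(true_theta, true_b)).astype(int)

# Joint maximum likelihood over all person and item parameters. The logit scale has an
# arbitrary origin, so estimated item difficulties are centred at zero afterwards
# (the usual Rasch convention).
start = np.zeros(200 + 30)
result = minimize(neg_log_likelihood, start, args=(data, 200, 30), method="L-BFGS-B")
est_b = result.x[200:]
est_b -= est_b.mean()

print("Correlation of true vs. estimated item difficulties:",
      round(float(np.corrcoef(true_b, est_b)[0, 1]), 3))

In practice, dedicated Rasch software would additionally report infit/outfit statistics, dimensionality diagnostics, and the person and item reliabilities discussed in the abstract.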



Author Information
Arlene Nisperos Mendoza, Pangasinan State University, Philippines

Paper Information
Conference: ACE2022
Stream: Assessment Theories & Methodologies

This paper is part of the ACE2022 Conference Proceedings.

To cite this article:
Mendoza, A. (2023). Applying IRT Model in Validating a Dichotomously Scored Test Item. The Asian Conference on Education 2022: Official Conference Proceedings (ISSN: 2186-5892). https://doi.org/10.22492/issn.2186-5892.2023.75
To link to this article: https://doi.org/10.22492/issn.2186-5892.2023.75

