TMERG Seminar 8: Differential Item Functioning: Implications for the Utility of Tests/Questionnaires

TRANSDISCIPLINARY MEASUREMENT & EVALUATION RESEARCH GROUP
(TMERG) SEMINAR 8, 2017

Date: Friday, 29 September 2017
Time: 10:00am – 11:00am
Where: Kevin Marjoribanks SMaRTE Room, Level 8, Nexus Building, Pulteney Street.

The Transdisciplinary Measurement and Evaluation Research Group (TMERG) has its origins in the School of Education and provides an avenue for discussion and research collaboration in the field of measurement and evaluation across a number of Schools and Disciplines. Colleagues and postgraduate students (doctoral and master’s) in your School/Research Centres are invited to a seminar titled

SEMINAR 8: Differential Item Functioning: Implications for the Utility of Tests/Questionnaires

SPEAKERS: Ms Hnin New New Tun:
School of Education (University of Adelaide); MEd (Yangon University of Education); BEd (Physics, Chemistry & Mathematics, YUoE)
Dr Sivakumar Alagumalai:
School of Education (University of Adelaide)

REGISTER: Eventbrite – Transdisciplinary Measurement & Evaluation Research Group (TMERG) Seminar 8 2017

ABSTRACT: The Rasch Measurement Model (RMM) is well established and widely used by international organisations undertaking large-scale assessment studies. It is a fundamental measurement model in which the probability that an individual performs successfully on a given task depends only on the difference between the person’s ability and the item’s difficulty. The RMM is thus effective in gauging the viability and utility of tests and questionnaires, and in enabling objective and meaningful reporting of comparative studies. However, the biggest threat to the utility of any test or questionnaire is item bias: an item is considered biased when its difficulty differs between subjects of identical ability from different groups (for example, by sex or culture). Differential Item Functioning (DIF), the conventional term for item bias, is now an integral aspect of test/questionnaire validation and validity.
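For readers new to the model, the dichotomous Rasch model can be stated compactly. The notation below (person ability \theta_n, item difficulty \delta_i) is the standard one in the Rasch literature rather than notation taken from the abstract itself:

\[
P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
\]

When \theta_n = \delta_i the probability of success is exactly 0.5, and the probability depends on the two parameters only through their difference, which is what permits the objective comparisons the abstract describes.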

This paper examines DIF in selected constructed-response TIMSS 2015 Grade Eight mathematics items completed by students in the Trans-Tasman countries (Australia and New Zealand). The items, drawn from four content areas (algebra, number, geometry, and data and chance), were analysed using the partial credit model, an extension of the RMM; DIF techniques were applied and the findings are reported. Outputs from the Australian Council for Educational Research’s (ACER) ConQuest application are presented and discussed. We reiterate that DIF analysis should be used routinely to evaluate items, to ensure more equitable, and therefore more valid and meaningful, interpretation of tests and questionnaires.
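As background to the analysis sketched above, the partial credit model extends the dichotomous Rasch model to items scored in ordered categories x = 0, 1, ..., m_i. The formulation below, with step parameters \delta_{ik}, is Masters’ standard statement of the model rather than a quotation from the paper:

\[
P(X_{ni} = x) = \frac{\exp\left( \sum_{k=0}^{x} (\theta_n - \delta_{ik}) \right)}{\sum_{h=0}^{m_i} \exp\left( \sum_{k=0}^{h} (\theta_n - \delta_{ik}) \right)},
\qquad \text{with } \sum_{k=0}^{0} (\theta_n - \delta_{ik}) \equiv 0.
\]

One common Rasch-based DIF check, which may or may not match the ConQuest procedure used in the seminar, estimates an item’s difficulty separately for each group and compares the standardised difference (\hat{\delta}_i^{AUS} - \hat{\delta}_i^{NZ}) / \sqrt{SE_{AUS}^2 + SE_{NZ}^2} against a normal reference.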

