Background

Feedback supports learning, but providing detailed individual feedback is time-consuming. Involving students in peer marking and in providing constructive feedback can enhance engagement, and delegating marking and feedback has the potential to save staff time, but inter-rater variability limits its value. Higher levels of reliability are obtained when markers simply decide which of two assignments ‘is best’. This project employed a series of adaptive comparative judgements (ACJ) to overcome inter-rater variability.

Method

Students were assigned ten pairs of assignments and, for each pair, judged which was best. An algorithm used this series of comparative judgements (A is better than B, B is better than C, etc.) to create a rank order. Students were asked to provide constructive feedback on each assignment reviewed. Staff reviewed the appropriateness of the student feedback and moderated the rank order before using it to assign individual marks to assignments.

Results

149 students submitted assignments; 143 completed the peer-review component, making 1,415 comparative judgements. The rank order generated by ACJ was found to be in broad agreement with staff judgements made during moderation. Each assignment received feedback from 6-10 students. The mean length of feedback was 350 words per assignment (range 50-500 words), and feedback length was not related to the rank order.

Conclusion

A series of comparative judgements can be used to address inter-rater variability in peer marking. Further work is required to explore the effectiveness of peer-generated feedback.
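The ranking step described in the Method can be illustrated with a short sketch. The abstract does not specify the algorithm used; ACJ systems typically fit a Rasch or Bradley-Terry model to the pairwise outcomes. The following is a minimal, hypothetical illustration using simple Bradley-Terry maximum-likelihood iteration (the function name and data are illustrative, not taken from the paper):

```python
from collections import defaultdict

def bradley_terry_rank(judgements, iterations=100):
    """Estimate a rank order from pairwise judgements via simple
    Bradley-Terry maximum-likelihood iteration.

    judgements: list of (winner, loser) pairs, e.g. ("A", "B")
                meaning assignment A was judged better than B.
    Returns the items ordered from strongest to weakest.
    """
    wins = defaultdict(int)    # total wins per item
    pairs = defaultdict(int)   # number of comparisons per unordered pair
    items = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    # Start all items at equal strength and iterate the MLE update.
    strength = {i: 1.0 for i in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            denom = 0.0
            for j in items:
                if i == j:
                    continue
                n_ij = pairs[frozenset((i, j))]
                if n_ij:
                    denom += n_ij / (strength[i] + strength[j])
            new[i] = wins[i] / denom if denom else strength[i]
        # Rescale so strengths do not drift between iterations.
        total = sum(new.values())
        strength = {i: s * len(items) / total for i, s in new.items()}

    return sorted(items, key=strength.get, reverse=True)
```

Because each assignment appears in many comparisons (here, 1,415 judgements across 149 assignments), the fitted strengths smooth over individual rater disagreements, which is the property the project relies on to address inter-rater variability.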
Jason Hall, University of Manchester, United Kingdom
Stream: Higher education
This paper is part of the ACEID2019 Conference Proceedings.