Learning Object Evaluation: Computer Mediated Collaboration and Inter-rater Reliability

J. Vargo (New Zealand), J.C. Nesbit, K. Belfer (Canada), and A. Archambault (USA)

Keywords

Learning objects, Quality control, Collaborative knowledge construction and learning, Instructional design, Reliability, Assessment

Abstract

Learning objects make it easier to share learning resources, reducing system-wide production costs. But how can users select from a set of similar learning objects in a repository and be assured of quality? This paper describes an on-line collaborative process for evaluating learning objects, including a formative reliability analysis of the process and instrument. The study involved the use of a 10-item evaluation instrument by 12 participants evaluating eight learning objects. The reliability analysis supported the expectation that further development will yield an instrument and process that produce aggregate ratings of sufficient reliability. Specific recommendations include changes to individual items, a rater training process, and modifications to the collaborative process. Overall, the collaborative process appeared to substantially increase the reliability and validity of aggregate learning object ratings.
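The abstract does not state which reliability statistic was used, so the following is only a minimal sketch of how the reliability of aggregate ratings might be estimated for a design like this (8 learning objects rated by 12 raters). It assumes a consistency-type coefficient, Cronbach's alpha with raters treated as items, which is equivalent to ICC(C, k) for the mean rating; the rating matrix shown is illustrative, not data from the study.

```python
import numpy as np

def aggregate_rating_reliability(ratings: np.ndarray) -> float:
    """Estimate reliability of the mean rating across raters.

    ratings: array of shape (n_objects, n_raters) holding scores on one item.
    Returns Cronbach's alpha with raters as items, i.e. the consistency-type
    intraclass correlation of the aggregate (mean) rating, ICC(C, k).
    """
    n_objects, n_raters = ratings.shape
    rater_vars = ratings.var(axis=0, ddof=1)        # each rater's variance across objects
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of the summed scores
    return (n_raters / (n_raters - 1)) * (1 - rater_vars.sum() / total_var)

# Illustrative (synthetic) data: 8 learning objects, 12 raters, 1-5 scale.
rng = np.random.default_rng(0)
true_quality = rng.integers(2, 6, size=(8, 1))
ratings = np.clip(true_quality + rng.integers(-1, 2, size=(8, 12)), 1, 5)

print(f"Estimated reliability of aggregate ratings: "
      f"{aggregate_rating_reliability(ratings):.2f}")
```

With a 10-item instrument, the same computation could be repeated per item, or applied to each rater's total score across items, depending on which aggregate the analysis targets.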
