Peer review can be a beneficial pedagogical tool, providing students with both feedback and varied perspectives on their work. Despite its value, the best available mechanism for assigning reviewers to reviewees is often still blind random assignment. Better mechanisms surely exist, but they necessarily rely on having some prior knowledge about the work being reviewed. Prior research by the second author found it difficult to accurately categorize the quality of students' written work using autonomous mechanisms. This paper explores the effectiveness of a new approach to such categorization, with the intent of identifying whether it is a viable technique for providing some of the prior knowledge needed to improve the peer review matching process.
The purpose of this paper is to present findings from an effort to classify student team performance on Model-Eliciting Activities (MEAs) using a trained reviewer's gut instinct about the quality of the work. MEAs are realistic, open-ended, client-driven engineering problems for which teams of students produce a written document describing the steps needed to solve the problem. Using an archival data set from a large first-year engineering course at a Midwestern, public land-grant RU/VH institution, nearly 450 MEA solutions were evaluated by two researchers.
As part of his dissertation research, the second author evaluated all three drafts from 147 teams, investing approximately one hour per draft to produce a detailed evaluation of each sample. Intra-rater reliability was calculated on 10% of that data and found to be acceptable on all rubric items.
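The abstract does not name the reliability statistic used; as one hedged illustration only, a check of this kind could be computed with Cohen's kappa on the re-rated subsample (the rubric items and score scale below are placeholders, not the instrument used in this study):

```python
# Hypothetical sketch: intra-rater reliability via quadratic-weighted Cohen's kappa.
# Scores and scale are illustrative only, not the study's actual rubric data.
from sklearn.metrics import cohen_kappa_score

# First-pass and re-rated scores for one rubric item on the 10% subsample
first_pass = [3, 4, 2, 4, 3, 1, 4, 2]
second_pass = [3, 4, 2, 3, 3, 1, 4, 2]

# Quadratic weights penalize large disagreements more than near-misses,
# which suits ordinal rubric scales.
kappa = cohen_kappa_score(first_pass, second_pass, weights="quadratic")
print(f"Intra-rater kappa: {kappa:.2f}")
```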
The first author was trained using the same methodology used for training the course's graders, as documented in prior work, but with knowledge of the task they would be completing once trained. Once sufficiently trained, the first author was tasked with reviewing the same data set as the second author, with a goal of producing each review in two minutes or less. The goal of the research is to explore whether the reviewer can develop a sufficient heuristic for classifying general MEA quality based on the existing, successful training model. While the reviewer is unlikely to be as precise as the expert, they may be sufficiently accurate for their gut-feeling reviews to serve as baseline data for peer review matching decisions, with a comparatively minuscule investment of time.
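The abstract likewise does not specify how the quick gut-feeling classifications will be compared against the expert's detailed evaluations; as a sketch under that assumption, agreement could be tabulated with a confusion matrix and an overall agreement rate (labels and data below are hypothetical):

```python
# Hypothetical comparison of quick "gut-feeling" labels against the
# expert's detailed classifications; categories and values are illustrative only.
from sklearn.metrics import confusion_matrix, accuracy_score

labels = ["low", "medium", "high"]
expert = ["high", "medium", "low", "medium", "high", "low"]
quick = ["high", "medium", "medium", "medium", "high", "low"]

print(confusion_matrix(expert, quick, labels=labels))
print(f"Exact agreement: {accuracy_score(expert, quick):.0%}")
```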
While results are still being analyzed, and thus not available at the time of this abstract, the focus of this paper will be on:
• how the data was obtained and prepared,
• the process by which both reviewers were trained and how their review processes differed,
• how well (or poorly) the gut-feeling based reviews compared to the detailed expert’s reviews,
• how accurate the new reviews were compared to the autonomous mechanisms tested in prior research,
• what the results indicate about our implementation of MEAs and peer review,
• what the results indicate about using gut-feeling evaluations for classification, and
• how the results will inform the next stages of the research project.