When exams are run asynchronously (i.e., students take them at different times), a student can potentially gain an advantage by receiving information about the exam from someone who took it earlier. Generating random exams from pools of problems mitigates this potential advantage, but it can introduce unfairness if the problems in a given pool are not of identical difficulty. In this paper, we present an algorithm that takes a collection of problem pools and historical data on student performance on these problems and produces exams with reduced variance of difficulty (relative to naive random selection) while maintaining sufficient variation between exams to ensure security. Specifically, for a synthetic example exam, we can roughly halve the standard deviation of generated assessment difficulty levels with negligible effects on cheating cost functions (e.g., entropy).
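To make the idea concrete, the following is a minimal Python sketch of difficulty-constrained random selection, not the paper's actual algorithm: it assumes each pool carries per-problem difficulty estimates (e.g., derived from historical scores), enumerates all exams that draw one problem per pool, and keeps only those whose total difficulty falls within a tolerance of the mean. The pool names, problem IDs, difficulty values, and the tolerance are all hypothetical.

```python
import itertools
import math
import random

# Hypothetical data: each pool maps problem IDs to an estimated difficulty
# (e.g., 1 minus the mean historical score). Values are made up for illustration.
pools = {
    "pool_A": {"A1": 0.30, "A2": 0.45, "A3": 0.50},
    "pool_B": {"B1": 0.60, "B2": 0.55, "B3": 0.70},
    "pool_C": {"C1": 0.20, "C2": 0.35, "C3": 0.25},
}

def exam_difficulty(choice):
    """Total estimated difficulty of one exam (one problem per pool)."""
    return sum(pools[pool][problem] for pool, problem in choice.items())

# Enumerate every possible exam: one problem drawn from each pool.
all_exams = [
    dict(zip(pools, combo))
    for combo in itertools.product(*(pools[p] for p in pools))
]

# Keep only exams whose difficulty is within a tolerance of the mean difficulty.
target = sum(exam_difficulty(e) for e in all_exams) / len(all_exams)
tolerance = 0.10
balanced = [e for e in all_exams if abs(exam_difficulty(e) - target) <= tolerance]

def std_dev(exams):
    """Standard deviation of exam difficulty over a set of exam variants."""
    d = [exam_difficulty(e) for e in exams]
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))

def selection_entropy(exams):
    """Entropy (bits) of which variant a student receives, assuming
    variants are drawn uniformly from the allowed set."""
    return math.log2(len(exams))

print(f"naive:    sd={std_dev(all_exams):.3f}, entropy={selection_entropy(all_exams):.2f} bits")
print(f"balanced: sd={std_dev(balanced):.3f}, entropy={selection_entropy(balanced):.2f} bits")

# Each student's exam would then be drawn uniformly from the balanced set.
exam = random.choice(balanced)
```

The trade-off the abstract describes is visible here: tightening the tolerance shrinks the spread of exam difficulties, but it also shrinks the set of allowed variants and therefore the entropy available to deter collaboration; the paper's contribution is choosing the selection so that the entropy loss is negligible.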
Paras Sud led this work as his thesis project for his B.S. in Computer Science from the University of Illinois at Urbana-Champaign. He's currently working in industry.
Matthew West is an Associate Professor in the Department of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign. Prior to joining Illinois he was on the faculty of the Department of Aeronautics and Astronautics at Stanford University.
Craig Zilles is an Associate Professor in the Computer Science department at the University of Illinois at Urbana-Champaign. His research focuses on computer science education and computer architecture. His research has been recognized by two best paper awards.