Abstract
Co-robots are robots that work alongside their human counterparts toward the successful completion of a task or set of tasks. In the context of engineering education, co-robots have the potential to aid students in completing engineering assignments by providing real-time feedback about their performance, technique, or safety practices. However, determining when and how to provide feedback that advances learning remains an open research question in human-co-robot interaction. Toward addressing this knowledge gap, this work describes the data types available to both humans and co-robots in the context of engineering education and demonstrates how these data types can potentially be utilized to enable co-robot systems to provide feedback that advances students’ learning or task performance.
The authors introduce a case study pertaining to the use of a co-robot system capable of capturing students’ facial keypoint and skeletal data and providing real-time feedback. The co-robot is built from commercially available, off-the-shelf components (e.g., the Microsoft Kinect) in order to expand the reach and potential availability of such systems in engineering education. In this work, the facial expressions exhibited by students as they received instructions on how to complete a task, and feedback about their subsequent performance on that task, are analyzed. This allows the authors to explore the influence that co-robot visual feedback systems have on changing students’ behavior while performing a task. The results suggest that students’ facial keypoint data differ in a statistically significant manner depending on the feedback provided (p < 0.005). Moreover, the results suggest a statistically significant relationship between students’ facial keypoint data while receiving instructions on how to complete a task and their subsequent performance on that task (p < 0.005). These findings suggest that a co-robot system can utilize students’ facial keypoint data to learn about state changes in students as they complete a task, and to intervene when patterns are detected that have the potential to reduce students’ learning or task performance.
The outcomes of this paper contribute to advancing the National Academy of Engineering’s Grand Challenge of personalized learning by demonstrating how students’ facial data can be utilized effectively to advance human-co-robot interactions and to improve the capability of co-robot systems to provide feedback that advances students’ performance.