The Electrical and Computer Engineering (ECE) department at Gannon University has completed two successful ABET accreditation cycles, in 2005 and 2011, using the Faculty Course Assessment Report (FCAR) model. In the 2005 cycle, the essential FCAR methodology was used; in the 2011 cycle, the concept of key assignments, with a well-defined process for generating justifiable objective evidence, was introduced to augment and further improve the adopted FCAR assessment model. In both cycles, student outcomes (SOs) were assessed directly, with supporting evidence, using the well-defined performance vector termed EAMU, where E stands for Excellent, A for Adequate, M for Minimal, and U for Unsatisfactory. However, in neither cycle were refined performance indicators (PIs) defined for each SO. In the assessment model for the current cycle, a set of PIs is defined for each SO. We quickly realized, however, that if, for example, three PIs are defined for each SO, the evaluation effort becomes at least three times as time-consuming.
To further improve the assessment model, the traditional rubric-based approach is augmented by classifying courses in the curriculum into three levels: introductory, reinforced, and mastery. Traditionally, a rubric-based assessment model includes only mastery-level courses in program outcomes assessment. Looking only at mastery-level courses has two drawbacks: (1) it provides no information at the lower levels to identify the root cause of a deficiency whose symptom appears in higher-level courses; and (2) it offers no mechanism to compute a clear indicator, such as a student outcome (SO) performance index based on the performance indicators (PIs) of that SO, which would facilitate automation of the evaluation process.
In this paper, a novel approach is presented that demonstrates how a traditional rubric-based approach can be integrated with the FCAR assessment approach to allow computation of the SO performance index from rolled-up data. The performance index is calculated as a weighted average of the relevant PIs across the three course levels. Analytic results on how the SO performance index measured up against the heuristic rules used previously are discussed. Finally, results showing how the SO performance index can be used to address the overall attainment of the SO expectation are presented.
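The weighted-average computation described above can be sketched in code. The EAMU category scores and course-level weights below are illustrative assumptions for the sketch, not the values used in the paper; the paper's actual weighting scheme may differ.

```python
# Hypothetical sketch of an SO performance index as a weighted average of
# PI scores across the three course levels. All numeric weights here are
# illustrative assumptions, not the paper's actual values.

# Assumed numeric score for each EAMU category.
EAMU_SCORES = {"E": 3.0, "A": 2.0, "M": 1.0, "U": 0.0}

# Assumed weights for the three course levels (mastery weighted highest).
LEVEL_WEIGHTS = {"introductory": 0.2, "reinforced": 0.3, "mastery": 0.5}

def pi_score(eamu_counts):
    """Average score for one PI, given student counts per EAMU category."""
    total = sum(eamu_counts.values())
    if total == 0:
        return 0.0
    return sum(EAMU_SCORES[cat] * n for cat, n in eamu_counts.items()) / total

def so_performance_index(pis_by_level):
    """Weighted average of PI scores across course levels.

    pis_by_level maps a level name to a list of EAMU count dicts,
    one dict per PI measurement at that level. Levels with no data
    are excluded and the remaining weights are renormalized.
    """
    index, weight_sum = 0.0, 0.0
    for level, pis in pis_by_level.items():
        if not pis:
            continue
        level_avg = sum(pi_score(p) for p in pis) / len(pis)
        index += LEVEL_WEIGHTS[level] * level_avg
        weight_sum += LEVEL_WEIGHTS[level]
    return index / weight_sum if weight_sum else 0.0

# Example: one SO assessed at the introductory and mastery levels.
example = {
    "introductory": [{"E": 5, "A": 10, "M": 3, "U": 2}],
    "mastery": [{"E": 8, "A": 6, "M": 4, "U": 2}],
}
print(round(so_performance_index(example), 3))  # prints 1.971
```

Renormalizing over the levels that actually have data keeps the index comparable across SOs that are assessed at different subsets of the three levels.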