[dropcap]S[/dropcap]o far in this series on end-of-course ratings we have discussed how to frame a conversation with a faculty member who receives average ratings semester after semester and how to have a productive conversation with faculty who receive low evaluations. The final end-of-course ratings conversation that merits consideration is the exchange that needs to occur when there’s an overreaction to the course ratings.
Sometimes the overreaction is triggered by one of those hurtful student comments. “This instructor should use his lectures for toilet paper.” It’s pretty hard not to let a comment like that get under your skin. But the problem is that one comment gains such importance that it overshadows everything else students have said, including a multitude of very positive responses. Faculty have been known to carry around hurtful student comments for years. I saw the toilet paper comment on an evaluation form that a faculty member shared with me. I didn’t recognize the form. It wasn’t the one used at our institution. Come to find out, he’d kept the evaluation in a file for almost 20 years!
Another common overreaction is to a small increase or decrease in rating scores, and that can come from the faculty member or the department chair. A small change should not be taken as evidence of a decline or improvement in the quality of teaching. Small changes are more often a result of the form itself. These instruments do not offer precise measures of teaching effectiveness, especially if they’ve been created by a committee and not tested for validity and reliability. That both faculty and administrators misinterpret small changes in ratings, drawing erroneous conclusions from them, is illustrated in rigorous research conducted by Boysen et al. (2014). In three separate studies, small changes in mean scores (differences small enough to be within the margin of error) were consistently misinterpreted.
From the earliest research on ratings, the advice of those developing and testing the instruments has been the same. Ratings can be valid and reliable. When they are, these ratings provide valuable feedback about instructional quality, but they should not be the only source of information when making judgments about teaching. Unfortunately, they often are the only data point or they’re supplemented with weak peer review processes.
When dealing with a faculty member who’s overreacting to rating results (a couple of negative comments or a minuscule change in the scores), or one who’s taking the rating process way too seriously (obsessively anxious about the results, critical of students and the process), the best advice is a set of strong recommendations for formative feedback. If something about the rating results doesn’t make sense or if the results are contradictory, what the faculty member needs is more feedback, and not the kind of feedback provided by most end-of-course rating instruments. Items that are highly inferential (“the instructor cares about students”) do not identify the policies, practices, or behaviors that convey those messages. So, while those results may motivate change, they do not inform the change.
Decisions about what to improve and how to change it are best informed with diagnostic, descriptive details. When are students doing the reading for the course? How much time are they spending studying for an exam? What do they do when they can’t solve a problem? Here’s an excellent article (open access) that outlines best practices for soliciting instructional feedback: Gormally, C., Evans, M., and Brickman, P. (2014). Feedback about teaching in higher ed: Neglected opportunities to promote change. Cell Biology Education, 13.
At many institutions, end-of-course ratings are the most important source (too often the only source) of feedback on teaching that faculty receive. Because faculty are invested in their teaching, their ability to accurately interpret and respond to rating results is easily compromised. Most teachers consider their rating results private information. They may talk about them generically with a colleague but rarely divulge specific details. The academic leader comes to the conversation having already read the results. That puts the academic leader in a good position to help teachers gain perspective on what the results mean as well as possible next steps. However, given the evaluative responsibility the chair likely has for that faculty member, the conversation can make soliciting feedback from students and constructively responding to it a less likely outcome. It is a high-stakes conversation, and academic leaders need to conduct it carefully and constructively, focusing on a set of strong recommendations for formative feedback that will lead to improved performance.
Reference: Boysen, G. A., Kelly, T. J., Paesly, H. N., and Casner, R. W. (2014). The (mis)interpretation of teaching evaluations by college faculty and administrators. Assessment & Evaluation in Higher Education, 39 (6), 641-656.
Maryellen Weimer is a professor emerita of teaching and learning at Penn State Berks and the longtime editor of The Teaching Professor.
Editor’s Note: This article is the final installment of a three-part series on conversations about course ratings. Catch up on the series by reading parts one and two.