Pitfalls of Using Student Comments in the Evaluation of Faculty

The use—or misuse—of student ratings of instruction (SRIs) in faculty evaluation is a frequent topic in higher education news. Unfortunately, popular press articles on the topic often garner more attention than the vast empirical literature. A recent review of the research by Linse (2017) pointed to common misperceptions about SRIs, offering administrators evidence-based advice on appropriate SRI interpretation.

Linse’s work and most SRI studies have focused largely on the numerical ratings portion of student feedback forms; much less research has addressed student comments. Comments are provided in response to open-ended questions, usually placed at the end of SRI instruments, that either elicit students’ general viewpoints (e.g., “Please provide any additional comments or feedback”) or request more targeted feedback (e.g., course strengths and weaknesses, suggestions for improvement). Student comments typically become part of faculty records available to chairs, administrators, and tenure committees, where they may play a significant role in evaluation. Is this a fair practice?

Research on student comments

What do studies of student comments reveal about their utility? First, comment response rates are likely to be low. As has been widely reported, online SRI administration, now adopted by a majority of higher education institutions, results in lower overall participation rates than traditional, in-class paper collection. Among the 55–60 percent of students who complete online SRIs (e.g., Benton, Webster, Gross, & Pallett, 2010), only one-half to two-thirds are likely to provide comments, despite the fact that students appear more willing to comment electronically than on paper (e.g., Morrison, 2011; Stowell, Addison, & Smith, 2012). Online comments tend to be lengthier than paper comments, although this may not be a benefit if the comments come from a smaller and thus less representative sample of students.

Second, when general feedback is requested, comments are more often positive than negative, whether collected online or on paper (e.g., Alhija & Fresko, 2009). When viewed broadly, opinions expressed in comments tend to correlate with numerical ratings. Nonetheless, outlier comments are not uncommon and often contain strongly expressed sentiments, potentially enhancing their impact.

With student incivility on the rise (e.g., Knepp, 2012), embedded in a climate in which Internet trolls and cyberbullying are rampant, it seems likely that inappropriate or mean-spirited feedback may leak into SRI comments. Although data on the frequency of problematic comments are lacking, Lindahl and Unger (2010) were able to collect a surprising number of comments described as “cruel” from an informal poll of just 50 colleagues. Tucker (2014), by contrast, found a very low rate of abusive or unprofessional comments (fewer than 1 percent), but students at her Australian university were explicitly instructed in how to provide professional and helpful feedback before completing SRIs, a possible strategy for administrators seeking to curb the propensity of some students to lash out.

Strategies for managing response rate problems

Before giving weight to student comments, administrators must first ask how representative they are. If your SRI system doesn’t automatically calculate participation rates for comments, do it yourself: for each qualitative item, count the number of responses, divide by the total number of students in the class, and use the resulting rate to gauge how representative the comments are likely to be of the class as a whole.
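As a rough illustration, the short Python sketch below performs this calculation; the item names and counts are hypothetical examples, not data from any real course.

```python
# Illustrative sketch: compute comment participation rates per open-ended item.
# The item names and counts below are hypothetical examples.

class_size = 32  # total number of students enrolled in the class

comment_counts = {
    "Please provide any additional comments": 18,
    "What are the strengths of this class?": 14,
    "What are weaknesses of this class?": 6,
}

for item, n_responses in comment_counts.items():
    rate = n_responses / class_size
    print(f"{item}: {n_responses}/{class_size} students responded ({rate:.0%})")
```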

Participation rates are especially important when examining open-ended questions that explicitly poll for negative feedback, such as “What are weaknesses of this class?” Satisfied students may skip such items, artificially inflating the impact of the students who do respond, particularly when their responses are long or vehement. Before attempting to draw any conclusions from responses of this type, it’s essential to know whether they come from 5 percent or 50 percent of the class.

Another strategy that can help in judging the representativeness of comments is to organize them by relevant variables before reading. For SRI systems with multiple open-ended questions, categorize by student rather than by item, allowing you to determine whether four critical comments come from one unhappy student or four. You might also sort comments by variables potentially associated with SRIs, such as whether students are majors or non-majors, what grades students expect, and what kind of effort students report making in the class.
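A minimal Python sketch of that regrouping, assuming exported records with hypothetical respondent, item, and comment fields:

```python
# Illustrative sketch: regroup open-ended responses by (anonymous) respondent
# rather than by question, so that four critical remarks from one student are
# not mistaken for four unhappy students. Field names are hypothetical.
from collections import defaultdict

responses = [
    {"respondent": "S1", "item": "strengths",   "comment": "Clear lectures."},
    {"respondent": "S2", "item": "weaknesses",  "comment": "Disorganized."},
    {"respondent": "S2", "item": "suggestions", "comment": "Rework the syllabus."},
    {"respondent": "S3", "item": "strengths",   "comment": "Fair grading."},
]

by_student = defaultdict(list)
for r in responses:
    by_student[r["respondent"]].append((r["item"], r["comment"]))

for student, comments in sorted(by_student.items()):
    print(student, comments)
```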

Negative outliers and cognitive biases

Most administrators probably consider themselves adept at sifting through the occasional outlier comment. Social science research suggests otherwise.

Among the many cognitive biases that cloud human decision making, negativity bias—the tendency to attend to, remember, and be influenced by negative information more than positive information—is likely to be one of the most damaging factors in your efforts to fairly assess student comments. In short, unpleasant emotions, feedback, events, memories, and people have greater impact than their pleasant counterparts. In an exhaustive review of negativity bias research, Baumeister and colleagues (2001) described the phenomenon as “disappointingly relentless” (p. 361).

Negativity bias is especially strong in forming impressions of others, which is, in a sense, what occurs during a review of student comments. Unfavorable reviews of faculty are likely to command greater attention, be better remembered, and carry more weight in decision making than neutral or complimentary opinions. In fact, Hilbig (2009) found that negative information is also believed to be more accurate than positive information, an effect he summarized in the title of his paper as “sad, thus true.”

Reviewers of faculty files are frequently confident that they are capable of ignoring outliers. Although such self-control is theoretically possible, a second type of cognitive bias, the novelty effect, suggests that disregarding an unusual response is especially challenging. Humans generally attend to the unexpected or uncommon experience in any landscape and remember it more keenly than the normative one.

Finally, if you aren’t convinced of your own potential for unwitting bias, consider research on the bias blind spot. Scopelliti and colleagues (2015) showed that most people believe that cognitive bias affects others’ decision making but not their own. Such overconfidence ironically impedes the ability to benefit from advice or training designed to minimize bias.

Strategies for managing cognitive biases

Evaluators may be unable to ignore outliers, but computers can. If you are committed to using student comments in personnel decision making, you might enlist the assistance of a qualitative data analytic tool to sort and organize them into modal categories. This is, after all, the approach of qualitative researchers, who generally seek to find common themes in data.
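As a rough illustration of that kind of automated sorting, here is a minimal keyword-matching sketch in Python. Dedicated qualitative-analysis software is far more sophisticated; the themes, keywords, and comments below are hypothetical.

```python
# Illustrative sketch: tally comments into modal categories by keyword matching.
# Dedicated qualitative-analysis tools are far more sophisticated; the themes,
# keywords, and comments below are hypothetical.
from collections import Counter

themes = {
    "organization": ["organized", "disorganized", "structure"],
    "workload": ["workload", "too many assignments", "too much work"],
    "clarity": ["clear", "confusing", "unclear"],
}

comments = [
    "Lectures were clear and well organized.",
    "The workload felt heavy, with too many assignments.",
    "Sometimes the instructions were confusing.",
]

theme_counts = Counter()
for comment in comments:
    text = comment.lower()
    for theme, keywords in themes.items():
        if any(keyword in text for keyword in keywords):
            theme_counts[theme] += 1

print(theme_counts.most_common())
```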

Wongsurawat (2011) recommends a different tack: assess the degree to which individual comments correlate with class averages on quantitative items, and then disregard comments identified as nonrepresentative or unreliable. For example, if Professor A receives a high average rating on the organization item and a single student comment claims that Professor A was disorganized, omitting that comment from consideration seems warranted. Omitting outliers is a common practice in quantitative research; perhaps personnel decisions should use the same standard.
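One loose interpretation of that idea, not Wongsurawat’s actual procedure, is sketched below in Python: a comment is flagged when the commenting student’s own rating on the related numerical item sits far from the class average. All names and numbers are hypothetical.

```python
# Illustrative sketch of the outlier idea (not Wongsurawat's actual procedure):
# flag a comment when the commenting student's own rating on the related
# numerical item sits far from the class average. All data are hypothetical.
from statistics import mean, stdev

class_org_ratings = [5, 5, 4, 5, 4, 5, 4, 5, 5, 4, 5, 2]  # "organization" item, 1-5 scale
avg, sd = mean(class_org_ratings), stdev(class_org_ratings)

comments = [
    {"respondent": "S7", "rating": 5, "comment": "Very well organized course."},
    {"respondent": "S12", "rating": 2, "comment": "Professor A was disorganized."},
]

for c in comments:
    z = (c["rating"] - avg) / sd
    flag = "outlier; interpret with caution" if abs(z) > 2 else "consistent with class average"
    print(f"{c['respondent']}: z = {z:+.1f} ({flag})")
```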

Of course, if administrators’ reviews of student comments attempt to focus on overall patterns in the data, and these patterns most often align with numerical ratings, then it’s reasonable to ask what is gained from the time-consuming task of examining comments at all. Is their potential for bias and misuse, which may be heightened in the case of non-majority faculty (Linse, 2017), worth their potential value in decisions about annual review, reappointment, or tenure? Although comments may provide useful formative feedback to faculty, the appropriateness of their presence in personnel decision making must be carefully considered.

In summary, administrators who place undue value on student comments, and undue confidence in their ability to dodge the minefield of cognitive bias, are doing so at their own peril. Unfortunately, they may also be doing so at the peril of the faculty member in question.

References

Alhija, F. N., & Fresko, B. (2009). Student evaluation of instruction: What can be learned from students’ written comments? Studies in Educational Evaluation, 35(1), 37–44.

Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370.

Benton, S. L., Webster, R., Gross, A. B., & Pallett, W. H. (2010). An analysis of IDEA student ratings of instruction using paper versus online survey methods, 2002–2008 data. Retrieved from https://www.ideaedu.org/Portals/0/Uploads/Documents/Technical-Reports/An-Analysis-of-IDEA-Student-Ratings-of-Instruction-Using-Paper-versus-Online-Survey-Methods-2002-2008-Data_techreport-16.pdf

Hilbig, B. E. (2009). Sad, thus true: Negativity bias in judgments of truth. Journal of Experimental Social Psychology, 45(4), 983–986.

Knepp, K. A. F. (2012). Understanding student and faculty incivility in higher education. Journal of Effective Teaching, 12(1), 33–46.

Lindahl, M. W., & Unger, M. L. (2010). Cruelty in student teaching evaluations. College Teaching, 58(3), 71–76.

Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94–106.

Morrison, R. (2011). A comparison of online versus traditional student end-of-course critiques in resident courses. Assessment and Evaluation in Higher Education, 36(6), 627–641.

Scopelliti, I., Morewedge, C. K., McCormick, E., Min, H. L., Lebrecht, S., & Kassam, K. S. (2015). Bias blind spot: Structure, measurement, and consequences. Management Science, 61(10), 2468–2486.

Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment and Evaluation in Higher Education, 37(4), 465–473.

Tucker, B. (2014). Student evaluation surveys: Anonymous comments that offend or are unprofessional. Higher Education: The International Journal of Higher Education and Educational Planning, 68(3), 347–358.

Wongsurawat, W. (2011). What’s a comment worth? How to better understand student evaluations of teaching. Quality Assurance in Education: An International Perspective, 19(1), 67–83.

Melissa J. Himelein, PhD, is director of the Center for Teaching and Learning at the University of North Carolina Asheville. Reach her at himelein@unca.edu.

This article first appeared in Academic Leader on December 1, 2017. © Magna Publications. All rights reserved.
