When the pandemic began, students could no longer meet with faculty for classes and other in-person meetings. Consequently, the formative feedback students received from their faculty regarding their performance on exams and other coursework had to stand in for the lectures and in-person conversations that typically occurred between students and their instructors. Were feedback loops and low-stakes assessments supported in the emergency shift to remote learning?
To determine whether this shift occurred, we investigated how heavily instructors weighted summative exams both before and during the pandemic. In addition, we wanted to make the case for two interconnected activities: (1) designing courses that rely less on one or two assessments to assign a course grade and (2) improving the practice and frequency of formative feedback given to students.
To support the first assertion, we studied university course syllabi from before and during the pandemic to see whether there was a shift from relying on a single midterm or final exam to more frequent, lower-stakes assessments. To support the second, we reviewed literature about formative feedback practices and extrapolated these to the current situation academic institutions face.
The significance of this study is that at most universities, colleges, and K–12 schools, students are in fact learning in person, but this could change at any moment. Faculty need the support, training, and technology to improve the frequency and effectiveness of their feedback. The data this study uncovered show a historically heavy reliance on summative exams for student course grades as well as some encouraging trends.
We present summaries of our research as well as suggested strategies for both reducing reliance on assessments and improving their use for productive feedback opportunities for students.
We base our suggestions on the following tenets:
To start, we explored both the status quo and recent shifts in course grade composition. How much did a final course grade depend on summative assessment and did that change in the emergency shift to remote learning? Course syllabi had the most potential to reveal information about whether there was a heavy reliance on summative assessments for course grade. To that end, we did an informal sampling of syllabi to gain insight.
We sampled available syllabi, unit guides, and modules dating from 2010 to 2019 from top programs in the United States, Australia, New Zealand, and the United Kingdom (as ranked by U.S. News & World Report and Times Higher Education’s World University Rankings across subject areas). These samples included 77 syllabi from the United States dated 2012–2019, 195 from the UK dated 2018–2019, and 190 from Australia and New Zealand dated 2010–2019, with the mode for all samples falling in 2017–2019.
For the purposes of this study, we present our findings for the United States. Institutions with publicly available syllabi include UC Berkeley, Carnegie Mellon University, Johns Hopkins University, Stanford University, and Purdue University. We discovered that the vast majority of course syllabi indicate that faculty rely heavily on exams, particularly final summative exams, in calculating a student’s course grade (Figure 1).
Across subjects from chemistry and biology to mathematics and computer science, approximately 75 percent of a student’s grade, on average, was based on quizzes, midterms, practical exams, and final exams. Conversely, assignments counted for 10 percent of the course grade or nothing at all.
In spring 2020, educational institutions shifted to emergency remote learning within a matter of days. At that point, there was an opportunity to change the composition of course grades away from heavily weighted final summative exams.
For the spring 2020 through fall 2021 semesters, we sampled 55 publicly available syllabi from US universities, including UC Berkeley, MIT, Stanford, and Carnegie Mellon University, to determine whether there had been a shift away from heavy reliance on final summative exams during remote learning. There was some movement away from final summative exams but not as much as we had hoped.
In a detailed review of the syllabi, it’s apparent that many instructors shifted the weight of the course grade toward quizzes and midterms. One campus decreased the weighting of assessment on the course grade to less than 40 percent. There were also syllabi in which the final exam was either absent or counted for less than 25 percent of the course grade. It is important to note, however, that overall data indicate that these adjustments were not widespread and that high-stakes assessment (Figure 2), rather than low-stakes assignments and other measures, remains the most significant element in measuring learning outcomes, perhaps to our students’ and our own detriment.
The results of all the research spanning 2010 to 2021 are admittedly based on an informal and limited sampling of publicly accessible syllabi, but they indicate that there is a heavy reliance on summative assessments for student course grades, even with adjustments made for emergency remote learning.
In response to the emergency shift to remote learning, instructors did try to lessen the weight of final summative exams in course grades. The result was a shift toward other large assessments, like midterms and quizzes, which, while still high stakes, have formative elements within the student learning journey.
That said, likely because of the heavy burden grading and feedback place on instructors’ time, the shift did not extend all the way to low-stakes assignments.
A review of the literature and our own experiences reveal several shortcomings with such a heavy reliance on exams for grading:
In our study of US syllabi from 2012 to 2019, on average, 50 percent of a student’s grade was based on the final exam, which means that half of the course grade offered no opportunity to support learning via teacher interventions and feedback loops. The final exam is then, in effect, a purely evaluative and, in some cases, punitive exercise.
During emergency remote teaching, forced upon nearly everyone during the pandemic, the syllabi sampling showed that, on average, 25 percent of a student’s grade was based on the final exam. This is heartening.
But both pre-pandemic and pandemic syllabi show that 75 percent of a student’s grade was based on high-stakes exams, period. Educators may have shifted a course grade’s reliance from the final exam toward midterms, but they retained high-stakes assessments as the dominant component of the course grade.
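The arithmetic behind these weightings is easy to sketch. The short Python snippet below uses hypothetical student scores, with weights mirroring the sample averages above (a 50 percent final pre-pandemic versus a 25 percent final with a heavier midterm during remote learning), to show how a heavily weighted final lets a single poor exam performance dominate the course grade:

```python
def course_grade(scores, weights):
    """Weighted average of component scores (0-100 scale)."""
    # Weights must sum to 1 for the grade to stay on a 0-100 scale.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[k] * weights[k] for k in weights)

# Hypothetical student: weak final, solid midterm, strong assignments.
scores = {"final": 55, "midterm": 80, "assignments": 90}

# Illustrative weightings, not taken from any single syllabus.
pre_pandemic = {"final": 0.50, "midterm": 0.25, "assignments": 0.25}
pandemic     = {"final": 0.25, "midterm": 0.50, "assignments": 0.25}

print(course_grade(scores, pre_pandemic))  # 70.0: one weak final drags the grade down
print(course_grade(scores, pandemic))      # 76.25: same work, less punishing weighting
```

Even in the pandemic-era scenario, 75 percent of the grade still rides on exams; only the distribution among them has changed, which is precisely the pattern our sampling revealed.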
This problem with heavy reliance on exams is acutely significant during emergency remote learning, when feedback loops are critical for instructor-student knowledge exchange. These feedback loops are best enacted via low-stakes assignments—and optimally, they happen frequently.
When assessment is both frequent and low stakes, students can more readily incorporate the feedback they receive into future projects or assessments and “fail safely”; each assessment is given credit as well, documenting their progression. Additionally, students are less affected by poor performance on assessments or interruptions of their learning path because they have other opportunities to mitigate setbacks. Since spring 2020, setbacks and interruptions have been frequent and significant, and it behooves educators to pay attention to such disruptions and make appropriate adjustments in teaching and curriculum to support student learning.
When students’ grades are dependent on a single or very few assessments, students are likely to feel unseen and their learning journey may become more opaque to both student and teacher. It is important, particularly in emergency remote learning, for educators to increase transparency into the student learning journey and thus ensure accurate measurements of conceptual understanding.
Frequent, low-stakes assessments such as homework, assignments, and quizzes allow students to receive feedback and progress in their learning. When summative assessments are supported by prior feedback, they reduce student stress and discourage shortcut solutions.
When summative exams are the basis for a student’s final course grade, the learning journey itself is given no credit. Such a scenario also rests on many assumptions; for instance, that teaching is aligned to what is tested. Without item analysis of student responses on prior formative assessments, there is little opportunity for curriculum correction.
Additionally, such a scenario depends entirely on fair and inclusive exam design that accommodates different learning styles and measures both broad knowledge of concepts and deep, higher-order thinking about them. Fair design, too, depends on analysis of student knowledge; without prior lower-stakes assessments, there are few opportunities to improve exam design.
Most undergraduates do not receive individual feedback beyond graded work, and thus, there is no guidance for how to develop knowledge or advance skills over time. Given our research into syllabi, our own experience as faculty, and academic journal articles, we challenge educators to reframe assessment using a few main principles.
Grades without feedback may hinder student learning: they can demotivate students and affect their confidence in self-directed learning, which may then sabotage their effort and persistence. According to education researcher Dylan Wiliam (2011), “Students who only received scores made no progress from one task to the next, while those students who received comments, improved about 30 percent . . . students who received both a score and comments also made no progress.” In sum, without feedback, assessment fails to encourage student learning (Wiliam, 2011, p. 7).
Assessments may continue to be a large part of a student’s course grade, for better and for worse. When they act as a bridge between feedback and learning insights, assessments support learning; when they uphold feedback loops, they promote accurate measurement of that learning. Students and educators alike benefit from this feedback.
Remote learning in the 2020–2021 academic year forced educators to innovate and experiment with educational technologies and teaching strategies. Teachers used web conferencing platforms, like Zoom and Google Meet, as well as proctoring tools and online assessment software.
In the shift to remote learning, incorporating frequent, low-stakes assessments into the curriculum was deemed a better practice: it preserved a major channel of student-teacher exchange while promoting communication without the pressure of grading. We laud the schools and educators who encouraged this shift and urge them to continue it beyond emergency remote learning.
Educators are aware of the unique challenges of establishing a proper balance between assessment and student learning outcomes. The shift away from final exams as a large part of the course grade toward midterms and quizzes is evidence that many educators are leaning toward more formative assessment. We reiterate that, especially in remote learning and in high-enrollment courses, these checkpoints are more important than ever for building student-educator trust and supporting the learning journey. Feedback must accompany these assessments, and the weighting must continue to shift toward lower-stakes assignments.
Clark, I. (2012). Formative assessment: Assessment is for self-regulated learning. Educational Psychology Review, 24(2), 205–249. https://doi.org/10.1007/s10648-011-9191-6
Guskey, T. R. (2019, June 23). Grades versus comments: What does the research really tell us? Education Week. https://www.edweek.org/education/opinion-grades-versus-comments-what-does-the-research-really-tell-us/2019/06
Martin, A. (2020, March). How to optimize online learning in the age of coronavirus (COVID-19): A 5-point guide for educators. https://www.researchgate.net/publication/339944395_How_to_Optimize_Online_Learning_in_the_Age_of_Coronavirus_COVID-19_A_5-Point_Guide_for_Educators
Wiliam, D. (2011). Embedded formative assessment. Solution Tree Press.
Christine Lee is an educator and writer-researcher at Turnitin. Jenny Amos, PhD, is a teaching professor in bioengineering at the University of Illinois Urbana-Champaign and a two-time Fulbright specialist in engineering education.