A New Classroom Observation Tool
We need better data describing what’s happening in classrooms. Faculty’s and students’ descriptions aren’t always that accurate. End-of-course rating data is highly judgmental. Classroom observation by outsiders happens irregularly, is generally evaluative, and is often colored by the observer’s perspectives. The data collected in an individual classroom is usually confidential and almost never aggregated. Given all this, what goes on in a collection of classrooms—say, those in a department or even across an institution—is pretty much a matter of speculation. Are faculty using as much active learning as they say they are? Are students taking notes, or are they texting during class?
The absence of accurate data about classroom activities is complicated by the prevailing culture. There is a tendency within the academy to think of classrooms as very private places. Professors have the freedom to teach in the ways they prefer. Outside observers are not always welcome. Most faculty are invested in their teaching style, and many become defensive if they think data collection is about documenting the need to change.
A recently developed classroom observation instrument (the Classroom Observation Protocol for Undergraduate STEM, or COPUS), created for use in science courses but applicable elsewhere, has the potential to address many of the problems associated with collecting accurate data on classroom activities. It documents classroom behaviors: the concrete actions taken by the teacher and by the students. In two-minute intervals, what the teacher is doing and what the students are doing are coded as one or more of 25 possible actions, all of which are described in neutral terms. For example, the students are listening to the instructor/taking notes, discussing clicker questions in groups of two or more, taking a quiz or test, or answering a question posed by the instructor. And the teacher is lecturing; listening to and answering student questions; showing or conducting a demonstration, experiment, simulation, video, or animation; or waiting or observing as students complete work. (All 25 of these classroom activities are listed in the article referenced below.)
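To make the coding scheme concrete, here is a minimal sketch, in Python, of how a COPUS-style observation might be recorded and tallied. The code abbreviations (for example, "Lec" for lecturing or "CG" for group discussion of a clicker question) and the data layout are illustrative assumptions, not materials specified in the article.

```python
# Illustrative sketch of COPUS-style coding: each 2-minute interval records
# which instructor and student behavior codes were observed.
# The code abbreviations here are assumptions for illustration only.
from collections import Counter

# One dict per 2-minute interval; "instructor" and "students" hold the codes
# checked off during that interval (more than one code per interval is allowed).
observation = [
    {"instructor": {"Lec"},       "students": {"L"}},          # lecturing / listening
    {"instructor": {"Lec", "PQ"}, "students": {"L", "AnQ"}},   # poses question, a student answers
    {"instructor": {"CQ"},        "students": {"CG"}},         # clicker question / group discussion
    {"instructor": {"FUp"},       "students": {"L"}},          # follow-up on the clicker question
]

def tally(intervals, who):
    """Count how many 2-minute intervals each code appears in."""
    counts = Counter()
    for interval in intervals:
        counts.update(interval[who])
    return counts

print(tally(observation, "instructor"))  # e.g. Counter({'Lec': 2, 'PQ': 1, ...})
print(tally(observation, "students"))
```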
The neutral language makes it easier to share results with the instructor. Teachers are less likely to become defensive about concrete observations such as “administration (assign homework, return tests)” than they are about results that are more abstract and judgmental, such as “made good use of class time.” The presence or absence of concrete behaviors is easier for observers to accurately record as well.
Developers of this instrument were concerned, however, about the accuracy and reliability of the observations. If two observers watched the same class, would their recorded data match? This has been an issue with other observation instruments, which yielded reliable data only after observers completed many hours of training and practice observations. One of the goals of this instrument was to achieve a high level of inter-rater reliability with minimal training. The article describes how the instrument designers accomplished that by simplifying the instrument and then using a carefully designed hour-and-a-half training session.
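Inter-rater reliability of this kind is commonly quantified with a statistic such as Cohen's kappa, computed code by code on whether each observer marked the behavior as present in a given two-minute interval. The sketch below illustrates that calculation on invented data; it is not necessarily the exact analysis the authors performed.

```python
# Sketch: inter-rater agreement for one behavior code, treated as
# present (1) or absent (0) in each 2-minute interval for two observers.
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary ratings of the same intervals."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a = sum(rater_a) / n            # how often rater A marks the code
    p_b = sum(rater_b) / n            # how often rater B marks the code
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)   # agreement expected by chance
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Hypothetical interval-by-interval marks for a single code (e.g., lecturing).
observer_1 = [1, 1, 0, 1, 0, 0, 1, 1]
observer_2 = [1, 1, 0, 1, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(observer_1, observer_2):.2f}")
```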
An instrument like this can be used to generate data for a number of different purposes, starting with feedback for individual faculty members: “We discovered that faculty members often did not have a good sense of how much time they spent on different activities during class, and found COPUS data helpful.” (p. 626) The data, presented as simple pie charts, is nonthreatening and very informative. It also adds objectivity to the more general and subjective data produced by student ratings and comments.
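As a rough illustration of that kind of feedback, the sketch below converts hypothetical interval tallies into a simple pie chart of instructor activity. The tallies and labels are invented, and matplotlib is assumed to be available; it is not the authors' reporting tool.

```python
# Sketch: turning interval tallies into a pie chart of instructor activity.
# The tallies and labels below are hypothetical.
import matplotlib.pyplot as plt

# Number of 2-minute intervals in which each instructor behavior was observed.
# Note: more than one code can occur in an interval, so these shares reflect
# code occurrences rather than strict clock time.
instructor_tally = {
    "Lecturing": 18,
    "Posing/answering questions": 6,
    "Clicker question follow-up": 4,
    "Administration": 2,
}

labels = list(instructor_tally.keys())
counts = list(instructor_tally.values())

plt.pie(counts, labels=labels, autopct="%1.0f%%")
plt.title("Share of observed intervals by instructor activity")
plt.show()
```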
Data from the instrument can also be aggregated across a department or program. The research team describes how they did this for one cohort of science classes, sharing the collective results with a teaching and learning center, thereby enabling the center to develop targeted programming. In this case, a fairly high percentage of faculty in the cohort were using clicker questions, but they were not having students discuss their answers—which, research has indicated, promotes development of problem-solving skills.
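A department-level view like the one just described might be produced along these lines. The sketch below works from invented per-course summaries and checks how many courses use clicker questions and how many also have students discuss their answers; the course names, codes, and percentages are all hypothetical.

```python
# Sketch: aggregating per-course COPUS-style summaries across a department.
# Values are the percentage of intervals in which each code was observed;
# all data here are hypothetical.
courses = {
    "BIO 101":  {"CQ": 25, "CG": 20, "Lec": 60},
    "BIO 210":  {"CQ": 30, "CG": 0,  "Lec": 75},
    "CHEM 110": {"CQ": 0,  "CG": 0,  "Lec": 90},
    "PHYS 150": {"CQ": 40, "CG": 5,  "Lec": 55},
}

# Courses that use clicker questions (CQ) at all, and the subset that also
# show group discussion of those questions (CG).
uses_clickers = [c for c, codes in courses.items() if codes.get("CQ", 0) > 0]
discusses_answers = [c for c in uses_clickers if courses[c].get("CG", 0) > 0]

print(f"{len(uses_clickers)} of {len(courses)} courses use clicker questions")
print(f"{len(discusses_answers)} of those also have students discuss their answers")
```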
Data collected for a department, for a program, or even across an institution helps both faculty and academic leaders develop a clear and accurate understanding of the activities that are occurring in classrooms. And data collection via an instrument like this can be an efficient and much less threatening process.
Reference: Smith, M. K., Jones, F. H. M., Gilbert, S. L., and Wieman, C. E. (2013). The Classroom Observation Protocol for Undergraduate STEM (COPUS): A new instrument to characterize university STEM classroom practices. CBE-Life Sciences Education, 12 (Winter), 618-627.
Maryellen Weimer is the editor of The Teaching Professor.