Using Assessment Data to Improve Learning

Faculty often associate assessment with collecting data to produce reports for accreditors. This is certainly one purpose of assessment, but it’s neither the only purpose nor the most important. When done well, assessment benefits faculty and students, pointing toward ways to improve learning at both the institution and the program levels.

Utilization-focused assessment

Engaging faculty in meaningful assessment requires a different approach from the traditional assessment design of choosing an instrument, gathering and summarizing evidence, and creating a report.

Jo Beld, vice president for mission at St. Olaf College, advocates utilization-focused assessment built on backward design. Rather than beginning with the question of which assessment instruments to use, utilization-focused assessment starts by asking, “Who’s going to use the data, and how might they use it?” Only then does the process move to locating the learning and selecting the assessment method.

This approach to assessment aligns with what Beld sees as an evolution in the field: a shift from focusing on how to do assessment to considering how to use the results.

A utilization focus helps faculty devise questions within their courses that will provide insights that are the most meaningful to them and their students. The idea is to engage faculty in pursuing action research to create a body of evidence that one “would refer to in the course of making normal decisions in the department or classroom,” Beld says.

Focus on student learning

To help faculty better understand the purpose of assessment and its potential benefits, the college moved away from using the word assessment, which had become associated with externally driven report writing, and began using the phrase inquiry in support of student learning in its place.

“That phrase worked really well for us because it reminded us that the purpose was to sustain what was working well and to identify and address things that weren’t working so well in our students’ learning. It kept us focused on students and student learning, and I think the inquiry piece connected to the role of faculty members in understanding and investigating reality,” Beld says. “Now we can use the word assessment again because we rescued it from its earlier negative connotation.”

Another phrase frequently used at the college to describe assessment is mission-driven, meaningful, and manageable. “It’s a way of reminding us that assessment is intended to address the things that are core to our mission, that we want it to yield information that is meaningful to the faculty and administrators who will be using it, and that we do it in ways that respect faculty time,” Beld says.

Assessment as standard procedure

People have to be intentional about collecting and interpreting data and taking action based on the data. Beld recommends incorporating assessment in standard operating procedures. To that end, St. Olaf has a comprehensive assessment Web page (http://wp.stolaf.edu/ir-e/assessment-of-student-learning-2/) that provides data collection cycles, institutional reports by instrument and topic, general education assessment, program-level assessment, and other resources.

The site is not password-protected. “Faculty are naturally inquisitive people,” Beld says. “They like evidence, so it’s helpful to have the evidence readily available so they can just grab it when they need it. It just becomes part of the way they have conversations about things. … It doesn’t take long for a few people to have some positive experiences and then begin introducing those positive experiences into their normal interaction with colleagues. It becomes less and less something we have to do, and more and more something we want to do because it’s useful.”

In addition to this transparency, the institution-level assessment process is integrated into faculty governance. Faculty representatives review data and prepare comments for the provost and dean, which are then presented to the academic affairs committee each October. Then the chair of the assessment subcommittee reports to the entire faculty, providing both a summary and implications of the data. This builds in dedicated time for people to look at the results and think about what they mean, Beld says.

“At the program level, the time frame can be challenging because we have different types of programs assessed in different years,” Beld explains. “We don’t ask any program to do assessment all the time, but we’ve also built into our assessment schedule one year that focuses just on using assessment data rather than on gathering assessment data.

“We encourage people not only to look at their own data within their department, but also to take advantage of what we know at the institution level. So that builds a regular time into the sweep of successive academic years to look at data, think about what that means for you, and act on it at the department level.”

General education

One area of assessment that has gained support among the faculty is assessment of general education. The approach is straightforward: all faculty members who teach a general education course during the assessment year are asked to assess one intended general education learning outcome of their choice within one assignment that they feel aligns with that outcome.

“Basically we say, ‘As you’re grading the students’ work, also look to see how well students do on that one outcome, which may or may not be the same as the grade that you give to them,’” Beld says.

Those faculty members are then asked to reflect on what they think the results mean for their instruction. “We ask them, ‘What are you going to keep now that you have seen these results?’ and ‘What do you think you might change going forward?’” Beld says.

In a recent survey, most faculty who participated in general education assessment (70 percent) found this exercise to be useful. “What faculty found valuable was being reminded what the intended learning outcomes for that general education requirement were,” Beld says. “It also gave them a chance to check the alignment between what they were asking their students to do and those intended learning outcomes. It gave them some data they could use to determine how well they were helping their students achieve the outcomes and what they might do differently the next time they teach the course. And a number of faculty said [in effect], ‘Of course this made me think about my other courses in relation to either the general education requirements or the intended outcomes of the major.’”  

Selecting instruments

A utilization focus can help in selecting instruments that will likely yield useful data, and often more than one instrument is called for. For example, the college’s assessment of student writing used results from the National Survey of Student Engagement (NSSE), the Collegiate Learning Assessment, and program-level and general education assessments.

“This broad array of instruments allows us both to look at the quality of the students’ writing products and also to learn some things about students’ writing experiences and perceptions,” Beld says. “And so with those things taken together, we’re talking now about whether it might be time to make some changes in our general education requirements about writing. … That’s a very practical outcome of synthesizing data across a wide array of sources, addressing both the writing process or experiences and writing outcomes. We don’t have a writing crisis on campus by any means, but we do wonder whether we’re getting as much mileage out of the time and energy that faculty and students are committing to writing. … The assessment data won’t tell you what to do, but it will tell you the things to think about or where to train your sights.”

Using published assessment instruments such as NSSE enables comparisons with other institutions and sharing of best practices based on the results. However, these instruments alone may not adequately address the questions you want to ask. Instruments developed within the institution can help provide additional useful insights. There are tradeoffs to using any instrument, but using several in combination can be helpful. “We know that NSSE isn’t going to ask things exactly in the same way as we do on our local instruments. But it’s very compelling when you see that a question about a learning outcome asked one way on a local instrument points in the same direction as a question asked in a different way on an interinstitutional instrument. We use our synthesis of data across instruments as a way to overcome the limitation of any individual instrument,” Beld says.  
