“No one should be surprised to learn that faculty (in general) have not enthusiastically embraced the opportunity to see if their students measure up to those at other universities or to the expectations of their professors,” writes Diane Halpern in a “personalized review” of assessment programs in general and in her field of psychology. (p. 358) Faculty who believed assessment was another of those “trendy things” destined to pass once something new came along have been proven wrong. The assessment movement is now close to 30 years old and still very much a part of the higher education scene. Institutions found it hard to ignore once it became a condition for receiving federal funds and a review criterion used by the national accrediting associations and various professional program reviewing agencies.
Reviewing and updating some of her previous writings, Halpern suggests the list of factors important in program assessment has not changed but merits regular review. Here’s a summary of those seven factors, drawn from the more detailed discussion of them in the article referenced below:
1. “Multiple and varied measures are needed because no single number can capture the complexity of learning.” (p. 359) Sometimes these measures include nationally normed instruments; sometimes they are a collection of locally assembled measures, such as graduation rates, alumni surveys, GPAs, and data from individual courses.

2. If the objective of assessment is to improve programs, and thereby improve student learning, then faculty must be involved in assessment efforts. Their responsibility for the curriculum means they have an important role to play.

3. Performance-based funding, which directs money to programs that demonstrate quality outcomes, is basically a bad idea. Halpern’s first point is simple: if a program doesn’t demonstrate quality outcomes, it may well be that lack of funding is part of the problem. Rewarding good programs simply widens the gap between programs that are effective and those that are not. Her second point: “It is very easy to create an assessment that makes any program look good.” (p. 359)

4. Collecting and assembling assessment data is pointless unless the data are used. If students aren’t demonstrating knowledge of a program objective, the curriculum needs to be adjusted. “It is a strange paradox when professors believe that they are great teachers but few students are learning.” (p. 360)

5. Colleges offer different programs and have different objectives. They need to be able to decide how to assess their students’ learning.

6. “Value-added or talent development measures that emphasize learning gains are preferable to exit-only measures.” (p. 360) Giving seniors an exit measure does not establish how much they learned during their time in college. If you start with very bright students, they will score well on the exit exam. “With value-added measures, the researcher compares educational achievement at entry with educational achievement at a later time.” (p. 360) An institution whose students show good learning gains ought to be considered a winner.

7. Assessment data provide a fragmented picture of student learning. Even when data are collected from various parts of the campus, they still don’t present an integrated view of learning. However, a less than fully integrated picture is certainly preferable to no picture at all.
In a 1987 article Halpern wrote on assessment, she observed, “It has been said that to change educational policies and programs is a lot like moving a graveyard: you don’t get much internal support.” (p. 361) Some of that internal support for assessment is still missing; some faculty still see assessment as added work that is largely unnecessary. To those who remain unconvinced of the value and relevance of program assessment, she writes, “Yes, it is more work, and yes, we already assess learning with grades, and yes, it can be done poorly. But despite all of the drawbacks, the assessment of what and how much and how students learn is at the heart of what we do as teachers.” (p. 361)
Reference: Halpern, D. F. (2013). A is for assessment: The other scarlet letter. Teaching of Psychology, 40(4), 358–362.