An Efficient Way to Use Assignments for Program Outcomes Assessment
In an effort to make academic program assessment more efficient, Ball State University has implemented Blackboard Outcomes, a computer-based assessment tool that facilitates the reuse of course assignments for program assessment.
Reusing assignments for assessment is not new. Academic programs at Ball State and many other institutions have done this for a long time, but it has usually been a laborious process with no efficient means of gathering student artifacts or sharing results.
Getting started
The following are steps for using Blackboard Outcomes:
- Identify learning outcomes and enter them in the system.
- Create rubrics to assess those learning outcomes.
- Link learning outcomes, rubrics, and courses.
- Identify which assignments support the learning outcomes and link the assignments to those outcomes.
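Blackboard Outcomes handles these linkages through its web interface, but it may help to picture the relationships the steps establish. The following is a minimal, hypothetical Python sketch; none of the names come from Blackboard’s actual API or data model.

```python
# Hypothetical sketch of the relationships the setup steps establish.
# Blackboard Outcomes is configured through its web interface; none of
# these names come from Blackboard's actual API or data model.
from dataclasses import dataclass, field

@dataclass
class Rubric:
    name: str
    criteria: list[str]               # e.g., ["thesis", "evidence", "organization"]
    scale: tuple[int, int] = (1, 3)   # a quantitative, "quick and painless" scale

@dataclass
class LearningOutcome:
    name: str                                          # step 1: the outcome itself
    rubric: Rubric | None = None                       # step 2: its rubric
    courses: list[str] = field(default_factory=list)   # step 3: linked courses
    assignments: list[str] = field(default_factory=list)  # step 4: aligned assignments

# What an instructor's single mouse click accomplishes: aligning an
# assignment to an outcome already entered in the system.
writing = LearningOutcome(
    "written communication",
    rubric=Rubric("writing", ["thesis", "evidence", "organization"]),
)
writing.courses.append("ENG 104")                   # hypothetical course
writing.assignments.append("ENG 104: final essay")  # hypothetical assignment
```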
From an instructor’s perspective, the only thing needed to collect student assignments for assessment purposes is to align (with a mouse click) an assignment to the goal that’s been entered in Blackboard. As for students, they simply need to submit the assignment electronically. (There’s a video that shows them how to do that.) “Unless you happen to also be one of the faculty members who’s part of the evaluation group, which is a much smaller group than everyone, that’s the extent of your involvement,” says William Knight, assistant provost for institutional effectiveness at Ball State University.
The idea is to aggregate these learning artifacts and evaluate a sample to determine how well students perform relative to the program’s learning goals. Selection of artifacts can be random, or users might select assignments from certain classes or semesters. “Our recommendation is to choose a large enough sample that’s representative but not one that’s so large it’s unwieldy,” Knight says.
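In code terms, the sampling step might look something like the sketch below. This is purely illustrative; the field names (course, semester) and the helper function are assumptions, not part of Blackboard Outcomes.

```python
# Illustrative only: draw a random, representative sample of aligned
# artifacts, optionally restricted to certain classes or semesters.
import random

def sample_artifacts(artifacts, n, courses=None, semesters=None, seed=0):
    """Simple random sample from the pool of collected artifacts."""
    pool = [a for a in artifacts
            if (courses is None or a["course"] in courses)
            and (semesters is None or a["semester"] in semesters)]
    rng = random.Random(seed)   # fixed seed so the draw can be reproduced
    return rng.sample(pool, min(n, len(pool)))

# e.g., several hundred artifacts aligned to one outcome:
# sample = sample_artifacts(writing_artifacts, n=300, semesters={"Fall"})
```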
The university is using this system to assess general education learning outcomes. (Academic programs may use the system as well, but it is not mandatory.) For example, an assessment planned for next summer will pull a sample of several hundred student artifacts linked to two learning goals: written communication and critical thinking. Trained faculty evaluators will use rubrics to evaluate each artifact.
In the case of general education, Knight convened a group of faculty representing all the colleges to approve the writing and critical thinking rubrics. The general education rubrics are quantitative and designed to be “quick and painless,” Knight says. Rubrics for program-level assessment come from within those programs.
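As a rough illustration of what a quantitative, “quick and painless” rubric score amounts to, the sketch below averages one evaluator’s per-criterion ratings; the criteria and the 3-point scale are assumptions for the example.

```python
# Hypothetical: combine one evaluator's per-criterion ratings (1-3)
# into a single score for the artifact.
def score_artifact(ratings: dict[str, int]) -> float:
    """Average the per-criterion ratings into one artifact score."""
    return sum(ratings.values()) / len(ratings)

essay = {"thesis": 3, "evidence": 2, "organization": 2}
print(round(score_artifact(essay), 2))  # 2.33 on a 3-point scale
```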
Reporting results
Once evaluators have finished scoring the assignments, the system can produce an interactive summary table that enables a variety of viewing options. But the basic data this system produces (an average of 2.5 on a 3-point scale, for example) “isn’t very interesting or very actionable,” Knight says. “What you need is context. If you’ve done this for a while, you can look at trends. You can compare results with others on campus or with other institutions using this approach. Probably the most useful thing to establish context is to look at student groups and how student experiences are related to those outcomes.”
To provide that context, the institutional effectiveness office pairs those data with factors such as SAT scores, gender, classes taken, and survey results, which makes the data more useful. “We really want to make this decentralized and give people ownership of it, but we’re strongly advocating that when they’ve done this they let us append some additional data to the results to give them that important context that makes it a more useful and actionable exercise,” Knight says.
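The pairing Knight describes is, in data terms, a join of rubric scores with student-level records. A minimal sketch, assuming pandas and hypothetical column names:

```python
# Hypothetical: append contextual factors to rubric scores so the
# averages can be broken out by student group. Column names are assumed.
import pandas as pd

scores = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "outcome": ["written communication"] * 4,
    "score": [2, 3, 2, 3],                  # 3-point rubric scale
})
context = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "sat": [1050, 1190, 980, 1300],
    "gender": ["F", "M", "F", "M"],
})

merged = scores.merge(context, on="student_id")
# A bare average becomes more actionable once it can be compared
# across student groups:
print(merged.groupby("gender")["score"].mean())
```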
Advice
Knight offers the following advice on using Blackboard Outcomes for streamlining the assessment process:
- Solicit cooperation from colleagues. This approach to assessment might not be appropriate for every program, particularly those with licensing requirements. Where it is appropriate, it might take some convincing to help faculty understand the implications of collecting these data. Knight points out that the assessed artifacts are not tied back to the classes they came from when reports are produced, and that the goal is not to assess an individual’s class or teaching; it is about “getting feedback about student performance relative to the goals of the whole program.”
- Create a curriculum map. “Basically the idea is you’ve got a set of courses or maybe other experiences such as internships that students are supposed to do, and you’ve got a set of learning outcomes. Where do they intersect? If valuing diversity and inclusion is a learning goal, in what course or experience do students produce something that is a representation of that? That sort of implies that you’ve had that discussion or you’ve been thoughtful of that and can identify that,” Knight says. (A minimal sketch of such a map follows this list.)
- Have faculty identify learning outcomes and evidence. Faculty rather than assessment staff should identify learning outcomes and the types of evidence used to determine whether students have met those outcomes. In addition, faculty should create the rubrics used to assess the student artifacts.
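Here is the promised sketch of a curriculum map as a simple course-by-outcome structure. The course names, outcomes, and artifacts are hypothetical; the point is only the intersection Knight describes.

```python
# Hypothetical curriculum map: each (course, outcome) intersection names
# the artifact that serves as evidence for that outcome.
curriculum_map = {
    ("ENG 104", "written communication"): "final essay",
    ("HONR 390", "critical thinking"): "research paper",
    ("SOC 201", "valuing diversity and inclusion"): "field-observation report",
}

def evidence_for(outcome):
    """List the courses and artifacts that represent a given outcome."""
    return [(course, artifact)
            for (course, o), artifact in curriculum_map.items()
            if o == outcome]

print(evidence_for("valuing diversity and inclusion"))
# [('SOC 201', 'field-observation report')]
```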