In an effort to make academic program assessment more efficient, Ball State University has implemented the use of Blackboard Outcomes, a computer-based assessment tool that facilitates the reuse of course assignments for program assessment purposes.

Reusing assignments for assessment purposes is not new. Academic programs at Ball State and many other institutions have been doing this for a long time, but the process has typically been laborious, with no efficient means of gathering student artifacts and sharing results.

Getting started

The following are steps for using Blackboard Outcomes:

  1. Identify learning outcomes and enter them in the system.
  2. Create rubrics to assess those learning outcomes.
  3. Link learning goals, rubrics, and courses.
  4. Identify which assignments support the learning goals and link them to those learning goals.

From an instructor’s perspective, the only thing needed to collect student assignments for assessment purposes is to align (with a mouse click) each assignment with the goal that’s been entered in Blackboard. As for students, they simply need to submit the assignment electronically. (There’s a video that shows them how to do that.) “Unless you happen to also be one of the faculty members who’s part of the evaluation group, which is a much smaller group than everyone, that’s the extent of your involvement,” says William Knight, assistant provost for institutional effectiveness at Ball State University.

The idea is to aggregate these learning artifacts and evaluate a sample to determine how well students perform in regard to the program’s learning goals. Artifacts can be selected at random, or users might draw assignments from certain classes or certain semesters. “Our recommendation is to choose a large enough sample that’s representative but not one that’s so large it’s unwieldy,” Knight says.
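The sampling approach described above can be sketched in a few lines of Python. The record layout and field names here are hypothetical illustrations, not the actual Blackboard Outcomes export format.

```python
import random

# Hypothetical artifact records; in practice these would come from the
# assessment system's export (field names are illustrative only).
artifacts = [
    {"id": i, "course": course, "semester": sem}
    for i, (course, sem) in enumerate(
        (c, s)
        for c in ("ENG 104", "HIST 150", "PHIL 100")
        for s in ("Fall 2013", "Spring 2014")
    )
]

def sample_artifacts(pool, size, course=None, semester=None, seed=None):
    """Draw a random sample, optionally restricted to a course or semester."""
    filtered = [
        a for a in pool
        if (course is None or a["course"] == course)
        and (semester is None or a["semester"] == semester)
    ]
    rng = random.Random(seed)  # seed makes the draw reproducible
    return rng.sample(filtered, min(size, len(filtered)))

# A representative but manageable random sample across all courses.
sample = sample_artifacts(artifacts, size=4, seed=42)
```

The same helper covers both strategies mentioned in the article: a purely random draw, or a draw limited to particular classes or semesters via the filter arguments.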

The university is using this system to assess general education learning outcomes. (Academic programs may use the system as well, but it is not mandatory.) For example, an assessment planned for next summer will pull a sample of several hundred student artifacts linked to two learning goals: written communication and critical thinking. Trained faculty evaluators will use rubrics to evaluate each artifact.

In the case of general education, Knight convened a group of faculty representing all the colleges to approve the writing and critical thinking rubrics. The general education rubrics are quantitative and designed to be “quick and painless,” Knight says. Rubrics for program-level assessment come from within those programs.

Reporting results

Once evaluators have finished scoring the assignments, the system can produce an interactive summary table that enables a variety of viewing options. But the basic data this system produces (an average of 2.5 on a 3-point scale, for example) “isn’t very interesting or very actionable,” Knight says. “What you need is context. If you’ve done this for a while, you can look at trends. You can compare results with others on campus or with other institutions using this approach. Probably the most useful thing to establish context is to look at student groups and how student experiences are related to those outcomes.”

The institutional effectiveness office pairs those data with factors such as SAT scores, gender, classes taken, and/or survey results. This makes the data more useful. “We really want to make this decentralized and give people ownership of it, but we’re strongly advocating that when they’ve done this they let us append some additional data to the results to give them that important context that makes it a more useful and actionable exercise,” Knight says.
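As a rough illustration of that append-the-context step, the pairing amounts to a simple join of rubric scores against student records. The record layouts below are assumptions for illustration only, not the office’s actual data formats.

```python
# Hypothetical rubric scores and student records; the field names are
# illustrative, not the institutional effectiveness office's real schema.
scores = [
    {"student": "A1", "goal": "written communication", "score": 2.5},
    {"student": "A2", "goal": "written communication", "score": 1.8},
]
students = {
    "A1": {"sat": 1210, "took_writing_course": True},
    "A2": {"sat": 1050, "took_writing_course": False},
}

def append_context(score_rows, student_info):
    """Attach demographic/experience fields to each rubric score row."""
    enriched = []
    for row in score_rows:
        info = student_info.get(row["student"], {})
        enriched.append({**row, **info})  # merge score row with context fields
    return enriched

report = append_context(scores, students)
# Each row now carries both the rubric score and the contextual factors,
# so results can be broken out by student group or prior experience.
```

With the context attached, a flat average like 2.5 can be disaggregated by group, which is what makes the results actionable in the way Knight describes.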


Knight offers the following advice on using Blackboard Outcomes for streamlining the assessment process: