Assessing faculty who design and teach online courses is a complex and often confusing process for department chairs and deans. Yet with the growing number of courses taught online, it is a timely topic. We argue, along with others in the field (Taylor, 2014; Shattuck, Zimmerman, & Adair, 2014), that one way to augment the online teaching assessment process is to integrate peer assessment in a thoughtful and systematic manner. However, as simple as peer assessment sounds, it is a challenging task to implement with faculty. In this article we share a yearlong process that our faculty engaged in while creating a department-level peer assessment model. In addition we outline the salient issues that emerged during this process, along with options for addressing these issues.
Creating a department-level peer assessment model of online teaching
During fall semester 2013, the department chair (a coauthor of this article) asked a former department chair and current program coordinator (the other coauthor) to chair the newly formed Effective Online Teaching Committee. The committee was charged with creating a peer assessment model of online teaching that had faculty input and endorsement for implementation.
The need for this committee surfaced a few years earlier when the university tenure and promotion process went online and included a folder titled “Peer Assessment of Teaching.” Although the university did not require faculty to engage in peer assessment, our faculty established a rubric and procedure for peer assessment of face-to-face courses, enabling faculty to feel more secure when they went up for tenure or promotion. Since that time, however, we have created and implemented four totally online graduate programs along with numerous other online courses, resulting in several of our 47 faculty members teaching totally online. These faculty in particular noted the need to expand our department peer assessment procedures to include online teaching and learning.
The committee consisted of one experienced online teacher from each of our seven program areas and met monthly during the fall and spring semesters. The committee read literature, reviewed published rubrics, created a rubric, created a survey to get faculty input, and presented recommendations and results to the faculty at the end of the year, at which time the faculty voted to pilot the process during the 2014-2015 year.
Issues and options when developing a peer assessment model of online teaching
Issue One: What is the purpose of peer assessment in your department? As is true of any form of assessment, you need to know why it is being done. Is the purpose of the peer assessment to help faculty improve their teaching? Is the primary purpose to provide evidence of good teaching to the department chair/dean during annual evaluation or during the tenure and promotion process? Or is the main purpose to provide documentation that faculty have engaged in this process as a means of professional development and growth? Or will the form be used for more than one purpose?
The committee members in our department spent a great deal of time discussing the purpose and reading other universities’ statements of purpose, knowing that what was finally selected would guide decisions about our policies and procedures. One option for chairs who want to implement peer assessment of online teaching is to hold a faculty-wide meeting to reach consensus about what the department wants.
An associated issue that departments would be wise to address on the front end relates to what is done with the assessment after it is completed. Does the faculty member get to decide whether it is shared with the chair or anyone else? Or is the assessment automatically shared with program coordinators, chairs, and deans?
Issue Two: What should be assessed? According to our faculty, three areas should be assessed during online teaching: (1) how the course is taught, (2) the course content, and (3) the course design. One issue related to this finding is that many faculty teach courses they did not design, often leading to criticism of various design aspects of those courses. The question becomes, therefore: is an instructor effective (or ineffective) because of how the course is designed and what it covers, or because of the instructor’s own teaching skills?
There are a couple of options to consider for this concern. First, perhaps faculty should be peer assessed only in courses they developed. If there are not enough courses in a program area for all faculty to develop an online course, then faculty could co-develop courses with other faculty. Then there would be two faculty per course who could be assessed. Of course, a bigger concern relates to the importance of having all courses reviewed and approved on the front end to meet basic standards of effective online courses. If the department, college, or university does not have such a review process, then a rubric (e.g., the Quality Matters Higher Education Rubric, www.qualitymatters.org/higher-education-program) could be adopted at the department level and a committee formed to oversee this procedure.
Issue Three: Who should provide the peer assessment?
Our faculty determined that the peer assessment process should be completed by at least two people. One faculty member should be from the same content area as the person being assessed, and both should have appropriate training funded by the department. One option is to require that representatives be selected from each program area to serve on the department’s Effective Online Teaching Committee and that these committee members handle the logistics and conduct the peer assessments. Another option is to hire, or solicit the services of, faculty at other universities who teach in the faculty member’s content area to conduct the peer assessment.
Issue Four: What does the peer assessment process consist of, and how often should it occur?
Our faculty voted that the timing of the assessment should be left to the individual faculty member. The assessment should consist of three meetings centered on the developed rubric: a pre-conference, the online observation, and a post-conference. These meetings may be held online or face-to-face.
During the pre-conference, the faculty member shares a basic overview of the course with the peer reviewers, who in turn familiarize the faculty member with the peer assessment process. The online observation involves separate and independent reviews of the course content and design and of how the course is taught (e.g., use of discussions, group work, and type/frequency of instructor feedback). The post-conference is designed to share assessment outcomes and offer recommendations for improvement.
In most cases, who other than students assesses faculty who teach online? Assessing face-to-face teaching is challenging; online teaching is even more complex to assess because of constantly changing online platforms, evolving standards, and the ongoing shift of faculty to this form of delivery. When university administrators examine the issues and options associated with peer assessment, they are in a better position to support faculty who embrace this approach to improving teaching and learning.
References

Shattuck, K., Zimmerman, W. A., & Adair, D. (2014). Continuous improvement of the QM rubric and review process: Scholarship of integration and application. Internet Learning, 3.

Taylor, A. (2014). Faculty peer review of online teaching. Dutton e-Education Institute Faculty Development, The Pennsylvania State University. Retrieved September 12, 2014, from http://facdev.e-education.psu.edu/evaluate-revise/peerreviewonline
Rebecca S. Anderson is a professor and former department chair of Instruction and Curriculum Leadership at the University of Memphis. Her research interests include literacy at the K-12 levels and technology integration.
Deborah L. Lowther is department chair of Instruction and Curriculum Leadership and a professor of Instructional Design and Technology at the University of Memphis. Her work includes nationally recognized research initiatives and multiple publications focused on the impact of technology on learning.