Adaptive Learning for Faculty Development: Technology Considerations

One challenge of faculty development and training for online teaching is satisfying instructors with different levels of knowledge, skills, and experience. At our institution, we discovered that this challenge can be overcome by employing an adaptive learning strategy in our faculty development courses. Adaptive learning technologies assess an individual learner’s current knowledge so that they can “test out” of or skip past content and topics that they have already mastered and focus on areas they may not yet be familiar with.

As we considered using adaptive learning modules in our faculty development courses, we identified some potential benefits. For example, we often have faculty members who come from other institutions or positions where they may have gained prior knowledge or experience. Some of these individuals may be familiar with online teaching tools but need assistance with pedagogical strategies, while others may have extensive teaching experience but lack technology skills. Some instructors have previously taught online and are well-versed in most of the topics we cover, whereas others are completely new to teaching online. Ultimately, adaptive learning allowed us to meet our faculty members at their current level of knowledge so that they could focus on learning topics and areas beyond that level.

Assessment and comparison of adaptive technologies

After we determined that adaptive learning would be the right solution for our faculty development courses, we had to decide which tool would be best. We evaluated three available adaptive learning systems: one built into our primary learning management system (LMS), one developed in-house within our institution, and one a third-party system.

To evaluate each adaptive system, we developed a rubric containing the following seven categories, each with its own subset of features or criteria.

  1. Adaptive capabilities. Ideally, we wanted the system to provide a personalized learning experience to our faculty members. To assess this, we compared systems’ functionality for customized learning paths and their ability to lock and unlock content according to the outcomes of pre- or post-assessment. We also compared assessment item types (e.g., multiple choice, fill in the blank) and feedback capability (e.g., immediate feedback).
  2. Content authoring and editing. We frequently update our faculty development courses, so we needed a system that allowed us to do so. On top of basics such as content editor features and multimedia integration, we needed a platform that was quick and easy to use without requiring extensive time to make simple edits. Additionally, multiple designers must have access to collaborate on content development, so we were looking for a system that allowed multiple content authors to contribute without having to traverse tedious sharing permissions.
  3. Learner usability. We wanted the adaptive system to be learner friendly, especially for our new-to-online faculty members. This meant that we were looking for a system that was easy for learners to navigate, especially through content and assessment. Additionally, we wanted the chosen system to meet accessibility needs, work on mobile devices, work with our institution’s single sign-on (SSO), and have a welcoming look and feel.
  4. Integration and maintenance. We needed the system to integrate well with our current LMS. We wanted to provide a seamless learning experience for our faculty learners and have automatic grade exchange as well. We also needed the system to allow for shareable learning objects as well as easy maintenance and setup from semester to semester.
  5. Available support. While we try to support any system we use as much as we can, we simply can’t solve every problem. Thus, we needed the selected adaptive system to have dedicated support staff available to assist with issues or questions. We also considered whether the platform comes with readily available software documentation and guides, which could help us troubleshoot certain issues on our own.
  6. Cost. Because we usually have at least 40 faculty members going through our primary training course each semester, we wanted the adaptive learning system to be cost-effective for our institution. Therefore, we compared the overall cost across the systems.
  7. Data and analytics. A large part of adaptive technologies is their ability to produce robust real-time data and analytics. This includes (1) data to assess learner progress, performance, and engagement (e.g., an instructor or facilitator dashboard); (2) data to evaluate assessment items (e.g., how frequently certain items are missed); and (3) data for the learner to assess their own learning and progress. Ideally, an adaptive platform would have all three, but we scored each as its own criterion.

We conducted the assessment and comparison of the adaptive technologies on the basis of available information about the systems and the experience of the instructional designers who had used them. We used a rating scale of 1–3 to evaluate each criterion objectively and systematically: 1 equaled "does not meet needs," 2 "partially meets needs," and 3 "meets or exceeds needs." Along with the numerical scoring, we made notes in the "characteristics" column for each system, recording what each system did or didn't achieve for each criterion. For some criteria, we found it helpful to label these notes as "pros" and "cons."

Further application

We hope that the rubric and criteria we created can help others in their evaluation of adaptive technologies. Ultimately, how an institution chooses to assess adaptive systems for faculty development will depend on its needs. With this more systematic evaluation, we were able to provide our leadership with clearer and more in-depth reasoning for why we opted for one system over the others. After our evaluation of systems, we considered introducing score weighting to denote an individual criterion's level of significance depending on our need or application. Additionally, we met as a group, discussed each of the rubric criteria for each system, and then rated the systems together in one rubric. Alternatively, several individuals could evaluate the systems independently and then aggregate their scores to find the average or median.
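For readers adapting this process, the weighting and aggregation ideas above can be sketched in a few lines of code. This is a minimal illustration, not the authors' actual rubric: the criterion names follow the article's seven categories, but the weights and the sample ratings are hypothetical assumptions chosen for demonstration.

```python
from statistics import median

# Hypothetical weights per rubric category (illustrative only; the
# article's rubric scored all criteria on an unweighted 1-3 scale).
weights = {
    "adaptive_capabilities": 2.0,
    "content_authoring": 1.5,
    "learner_usability": 1.5,
    "integration_maintenance": 1.0,
    "available_support": 1.0,
    "cost": 1.0,
    "data_analytics": 1.0,
}

def weighted_score(ratings):
    """Combine per-criterion 1-3 ratings into one weighted total."""
    return sum(weights[c] * r for c, r in ratings.items())

def aggregate(raters):
    """Take the median rating per criterion across several evaluators."""
    return {c: median(r[c] for r in raters) for c in weights}

# Three hypothetical evaluators rating one system on the 1-3 scale.
rater_a = {c: 3 for c in weights}
rater_b = {c: 2 for c in weights}
rater_c = {"adaptive_capabilities": 3, "content_authoring": 2,
           "learner_usability": 3, "integration_maintenance": 2,
           "available_support": 2, "cost": 1, "data_analytics": 3}

consensus = aggregate([rater_a, rater_b, rater_c])
print(weighted_score(consensus))  # single comparable score per system
```

Using the median rather than the mean keeps one outlier rating from skewing a criterion's consensus score, which matters when only a handful of designers evaluate each system.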

Click here to download a Word version of the rubric.

Corrinne Stull, MA, is a principal learning consultant at Discover Financial Services and former associate instructional designer at the University of Central Florida. Corrinne has a passion for combining technology and education to create unique, meaningful learning experiences. She specializes in personalized and adaptive learning, web development, and online accessibility. 

Jackie Compton, MA, is a web content specialist at the Center for Distributed Learning at the University of Central Florida. Jackie develops faculty training, professional development, and other noncredit courses in the learning management system. Her passions include online accessibility and course graphic design.

Anchalee Ngampornchai, PhD, is an instructional designer at the Center for Distributed Learning at the University of Central Florida. She has designed and developed more than 100 asynchronous e-learning modules and worked with faculty members in various disciplines. Her research interests are concerned with intercultural interaction in the online classroom.
