Six Ways Deans Should Be Looking at ChatGPT and the AI Explosion
When Harry Potter’s Aunt Petunia recalls the arrival of her sister’s Hogwarts letter, she remembers her parents’ response and her own reaction: “‘We have a witch in the family. Isn’t it wonderful?’ I was the only one to see her for what she was. A freak!” (Harry Potter and the Sorcerer’s Stone). This dichotomy of responses in the Evans household is being repeated as academics explore the implications of easily accessible artificial intelligence software. It is tough to keep up with the explosion of articles that are appearing on a seemingly daily basis about the latest iteration of OpenAI’s freely available ChatGPT, although I am grateful that Peter Paccone is trying to do just that.
My goal is not to add to or replicate that raft of documents but rather to try to think through the implications of the ChatGPT explosion for academic administrators concerned with supporting faculty and undergraduate students as they confront the opportunities and challenges the technology brings with it. While it will probably be a while until we can fully imagine the impact this technology will have on our colleges and universities, here are some initial frameworks and contexts in which we would do well to try to situate our faculty’s engagement with technologies like ChatGPT.
1. ChatGPT is energizing faculty discussions of classroom pedagogy and student learning. Even though it is early February, the academic term has just gotten underway, and faculty are feeling swamped by work, formal and informal conversations, workshops, and discussions about ChatGPT abound. On my campus, for instance, it is the third week of the semester, and there have already been at least two formal faculty sessions related to the platform, with more scheduled in the next two weeks. Reports from colleagues at other campuses are similar: faculty are eager to play with ChatGPT, share their experiences, and reflect on how the technology might affect their teaching and their students. These conversations are not only about how to rethink assignments and exams to forestall students using ChatGPT to cheat; they are also about how the technology can be used to promote active learning, enhanced brainstorming, and critical reflection. In other words, simply as a heuristic tool for instructional faculty, the technology already has clear value.
2. It is motivating conversations about the need to create new courses, revise curricula, and reevaluate our general education programs. ChatGPT isn’t only energizing discussions about classroom pedagogy. It is highlighting the wide-ranging and interdisciplinary importance of creating opportunities for students to develop facility not simply with the current iteration of available AI tools but with the deep logic and infrastructure on which they are based. This is not to say that it is pushing arguments that all students should know how to code (although they certainly should) but that students will need curricular opportunities that permit them to engage in concentrated study of the nature of AI tools and their likely impact on our culture, public policies, and social structures. On my campus, discussion of ChatGPT is supplementing conversations that were already in progress about the need to ensure that all of our undergraduates develop digital literacy as part of their required general education program so that they have the opportunity to engage with such texts as Joseph Aoun’s Robot-Proof: Higher Education in the Age of Artificial Intelligence, Caroline Criado Perez’s Invisible Women: Data Bias in a World Designed for Men, and Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism.
3. While these conversations occur among individuals who are meeting AI’s burgeoning presence on our campuses with enthusiasm, we need simultaneously to guard against the possibility that such growth will contribute to inequity, the digital divide, and increased educational disparities. The question of who has the bandwidth (both literal and figurative) to access AI tools to aid them in such activities as preparing college applications and scholarship essays will need our attention, as will the question of who is able to access this technology to cheat once they are enrolled. Access to AI tools will further add to inequities if their makers decide to place them behind a paywall. At the same time, AI-assisted instruction may provide under-resourced campuses with an opportunity to decrease inequities.
Our concern with inequity should extend beyond the borders of our campuses and consider as well the ethical implications of how AI software has been developed. In addition to the implications of ChatGPT’s use of all the data that its users enter into it (and that we can see enhancing the program on a daily basis), we ought not ignore such reports as Time’s recent article on the exploitation of Kenyan workers who were paid less than $2 per hour to train the technology away from a tendency toward “blurting out violent, sexist and racist remarks.” (Apparently, ChatGPT is like Shakespeare’s Caliban: “You taught me language, and my profit on’t / Is I know how to curse.”)
4. While we are on the subject of inequity, it is important to acknowledge the fears that a long-term goal of university administrators is to reduce the number of tenure-track and full-time faculty and increase the use of contingent faculty. Given the complex and conversational way in which ChatGPT responds to prompts, there may be anxiety on some campuses that ChatGPT could lead to a devaluation of the professorial role, especially in English composition and humanities classrooms. While university administrators may strongly push back against fears of increased adjunctification, it is important to recognize the possible implications for faculty positions. Recently, I asked ChatGPT how likely it was that the technology would “lead to a reduction of tenure-track and full-time faculty and increase the use of contingent faculty on university campuses.” Here is the response I received (with emphasis added):
It is possible that ChatGPT and other language generation models could lead to a reduction in the need for some types of faculty positions, such as those that focus primarily on proofreading and editing. However, it is unlikely that ChatGPT would lead to a significant reduction in the number of tenure-track and full-time faculty positions on university campuses. Writing instruction is a complex and multifaceted endeavor, and ChatGPT is not able to provide the same level of personalized feedback and guidance that a human writing instructor can. Additionally, the role of a university faculty member extends beyond writing instruction, to include conducting research, mentoring students, and contributing to the intellectual life of the university. These are tasks that ChatGPT cannot perform. Furthermore, teaching is a human-centered activity, and the interaction between a student and a teacher is a unique and valuable experience that cannot be replaced by technology.
It is also worth noting that the use of contingent faculty on university campuses is a broader issue that is driven by multiple factors, such as budget constraints, changing student demographics and changing educational models. The use of ChatGPT or other language generation models as a tool in teaching might lead to a reduction in certain tasks or responsibilities but it is unlikely to cause a significant shift in the overall trend of using contingent faculty. (https://chat.openai.com/chat, 1/27/23)
This response suggests the reasons for anxiety. On many campuses, instructional faculty are not expected to conduct research and may have limited service responsibilities. The “certain tasks or responsibilities” that are referred to may be exactly those that anxious contingent faculty are hired to perform.
5. A common anxiety that faculty have about the technology is, of course, the fear of cheating. Our campuses will need to revise and supplement our honor codes and academic honesty policies to clarify when student use of AI-generated text is cheating and how such text should be documented in the settings in which it is allowed. Headlines such as “ChatGPT Bot Passes Law School Exam” inflame worries that cheating (which has already become a greater problem since the start of the pandemic) will become impossible to restrict. Some might argue that if a tool such as ChatGPT can pass our courses, then our instructional methods and our curricula need some revision. We already let our students use a range of technologies to make their work easier; consider the various citation generators, online map generators, and even the lowly calculator on a phone. We need to clarify our policies about when each available technology may be used and ensure that our students learn the rules. And, of course, we need to encourage our faculty to actively engage with their students and their courses to discourage cheating and motivate students to learn. While some faculty have discussed wanting to emphasize in-class, pen-on-paper written exams or oral exams to circumvent cheating, such strategies aren’t appropriate ways to measure all learning goals, even if faculty have the time to devote to them. Of course, we need to prepare for potential abuses by scholars and researchers as well.
The concerns about cheating emphasize what we know to be true about student learning: faculty and students both need to be actively engaged in the creation of classroom activities, assignments, and curricula that matter to them. Inasmuch as ChatGPT is spurring us to think creatively about our courses and how our students’ learning might be enhanced, it is a tool we should embrace with thoughtful enthusiasm.
6. Some worry that the arrival of OpenAI’s ChatGPT, a program that generates sophisticated text in response to nearly any prompt imaginable, signals the end of writing assignments altogether, and maybe even the end of writing as a gatekeeper, a metric for intelligence, a teachable skill. In this telling, the technology is the historical equivalent of the printing press, the steam drill, and the light bulb having a baby, a baby with access to the entire corpus of human knowledge and understanding, and the lives of thousands of teachers and professors, tutors and administrators are about to change drastically.
But the sky really isn’t falling. The essay became an important pedagogical tool because it gave students a structure through which to develop critical thinking and analytical skills and gave faculty a tidy object through which to observe those skills. Students still need to develop skills in thinking, analysis, and communication, and faculty will continue to need to determine whether students have developed those skills. And writing will continue to be a means by which all of us, faculty and students alike, come to know what our ideas are. The act of writing refines and clarifies our thinking. Technology won’t replace that; it is more likely to prompt students to examine their ideas more closely.
Recently, I asked ChatGPT to write an essay on how university leaders should prepare for the growth of ChatGPT. The bot’s response concluded as follows:
In conclusion, university leaders should be aware of the capabilities and potential of ChatGPT and other large language models, and take steps to explore and prepare for their growth. This includes exploring how this technology can be used in their institutions, considering the ethical implications, investing in necessary infrastructure and resources, and fostering collaboration and partnerships. By doing so, university leaders can ensure that their institutions are well-positioned to take advantage of the opportunities presented by ChatGPT and other large language models, and to stay at the forefront of the field of education and research. (https://chat.openai.com/chat, 1/26/23)
Whatever else is true, ChatGPT knows how to sound like an administrator.
Constance C. Relihan, PhD, is the dean of University College and a professor of English at Virginia Commonwealth University.