ChatGPT Unplugged: Faculty-Focused Activities that De-Mystify How Generative AI Works

Concurrent Session 2

Brief Abstract

This workshop offers activities for reducing fears related to the use of generative AI tools. Metaphorical or “unplugged” activities are leveraged to demystify how generative AI tools work, while also serving as a bridge into more technical activities that promote the use of AI as a co-designer.

Extended Abstract

Fears about the use of generative AI in higher education have surged with the release of open-access platforms such as ChatGPT and Bing AI, among many others. While these fears are certainly warranted given the impact of generative AI on teaching and learning, we must move beyond conversations about restricting access to AI tools. Our students will graduate into industries that readily use generative AI to augment their work, potentially freeing up space for higher-order thinking, creativity, and design (Fitzpatrick et al., 2023). As educators in higher education, it is our responsibility to prepare students for these spaces, which requires that we learn to leverage these tools ourselves. If we choose not to, the value of higher education in the Digital Age will suffer. That is, alternatives to higher education, such as training programs unafraid of novel technologies, may become more attractive to students who seek to be at the forefront of innovation, while our own students will be at a disadvantage. As such, it is no longer feasible to avoid the technological advancements of the world. Rather, we must change the culture of higher education so that we no longer fear the advancements of the future, but instead learn to use them in ways that augment our own, and our students', learning, creativity, and productivity.

However, this culture shift does not come easily. The nature of artificial intelligence, including its use of big data analytics that leverage personal and historical data, raises many concerns related to privacy, property, discrimination, and fairness (Benjamin, 2019; O'Neil, 2016; Zarsky, 2016; Zuboff, 2015). Beyond this, the ability of generative AI to design, develop, and create a myriad of products has translated into fears related to academic integrity and the erosion of our students' critical thinking and problem-solving skills (Fitzpatrick et al., 2023). While these fears, again, are warranted, the current reality is that generative AI tools may provide more opportunities for higher-order and personalized learning than they do harm. For instance, students can use generative AI tools as a personal study companion, and instructors can use them as a teaching assistant to offload lower-order tasks and free up space for more active learning opportunities, human engagement, and personalized feedback. So how do we get everyone on board?

In our experience, the educators most resistant to the use of generative AI are those who either 1) have never used generative AI, and/or 2) do not understand how such AI tools generate a response. Furthermore, the notions that generative AI is sentient, that it will collect and sell all of our personal data, and that it will replace us as instructors and creators seem to concern faculty the most. To reduce such fears, our Center for Teaching and Learning has offered a series of professional development opportunities centered on supporting students and faculty in the Age of AI, including a series of faculty workshops, an on-campus AI Institute, and a certification program for teaching and learning with generative AI. Across these offerings, we have found that for those who are resistant to generative AI tools, demystifying how these tools work while offering example use cases is quite effective for reducing faculty fears and promoting responsible uses of generative AI in their work. This workshop offers a glance at some of these facilitated activities.

Workshop Overview

This express workshop offers several activities that can be used with faculty and students, intended to promote a basic understanding of how AI tools work. The goal of these activities is to reduce fears related to generative AI and promote its responsible use for augmenting teaching, learning, and research. Participants will engage in two unplugged activities that position them as both users and chatbots. As users, they will learn what it takes to guide AI tools like ChatGPT to produce a desired response or product; as chatbots, they will conceptualize how such tools rely on probabilities, prompt characteristics, and context to generate a response modeled after available human text-based information. Following the unplugged activities, participants will engage with ChatGPT to first develop their prompt-craft, and then use those principles to co-design instructional (or other) materials with ChatGPT. The focus here will be on positioning oneself as a subject-matter expert and leveraging AI tools to augment our work. We will share any resulting designs and discuss the impact of these activities on our feelings towards generative AI in K-12 and higher education.
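The chatbot role-play described above hinges on one core idea: a language model repeatedly picks a likely next word given the words so far. As a rough, hand-written illustration of that loop (a toy sketch only; real models learn probabilities over tokens from vast amounts of text, and the word table here is invented for demonstration):

```python
import random

# Toy "language model": for each current word, a hand-written table of
# next-word probabilities. A real model learns these from training data;
# these values are purely illustrative.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "idea": {"spread": 1.0},
    "sat": {"quietly": 1.0},
    "ran": {"home": 1.0},
}

def generate(start, length=4, seed=0):
    """Repeatedly sample a probable next word given the current word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:  # no known continuation; stop generating
            break
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

This mirrors the unplugged activity: the "chatbot" participant consults probabilities and the preceding context to choose each next word, which is why the same prompt can yield different, yet plausible, responses.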

Workshop Outcomes

This workshop is designed to produce the following outcomes:  

1) Faculty attendees will learn, in layman's terms, how ChatGPT and similar tools generate a response, demystifying the inner workings of generative AI and potentially reducing their fears related to its use. 

2) Designers and faculty developers will receive two unplugged activities that may be used for professional development at their home institutions. 

3) All attendees will learn the basics of prompt-craft and learn to design materials with generative AI tools like ChatGPT. 

Significance 

This presentation is significant to the OLC community in that it provides several activities that promote the responsible use of generative AI tools in higher (and K-12) education. Given the impact of such tools on teaching, learning, and industry, it is essential that we 1) promote the responsible use of AI tools among faculty and students in order to prepare them for its use in their future work, and 2) meet faculty and students where they are in terms of their technical understanding of AI tools, especially those who are unfamiliar and/or uncomfortable with technology. Metaphorical or “unplugged” activities may serve as a means to accomplish this, while also serving as a bridge into more technical activities in which participants engage directly with these tools. The argument here is that by first promoting an understanding of how the tools work, faculty may be less hesitant to sign up for an account, leverage generative AI in their work, and promote responsible use among their students. This is because they now understand the potential of these tools for improving their work, as well as how the responses are generated in the first place, which is often less alarming than how generative AI is characterized in the media.

References

  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity Press.

  • Fitzpatrick, D., Fox, A., & Weinstein, B. (2023). The AI classroom: The ultimate guide to artificial intelligence in education. TeacherGoals Publishing.

  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Penguin Books.

  • Zarsky, T. (2016). The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology, & Human Values, 41(1), 118-132. https://doi.org/10.1177/0162243915605575

  • Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89. https://doi.org/10.1057/jit.2015.5