Behind the Curtain of AI: Exploring the Fine Line Between Accuracy and Misinformation

Concurrent Session 3

Brief Abstract

Artificial intelligence (AI) tools are now ubiquitous in education. In this session, presenters will share the results of an exercise conducted with several AI tools to determine the accuracy of the information they produce. Participants will engage in activities exploring the validity of AI-generated content and its uses in education.

Presenters

Dr. Olysha Magruder is the Director of Learning Design and Faculty Development at the Center for Learning Design and Technology, Whiting School of Engineering, Johns Hopkins University. She teaches graduate-level education courses at Hopkins focused on online teaching and learning. Prior to this, she worked as an instructional designer and adjunct faculty member at Hopkins and other higher education institutions. Olysha started her career in a K-12 classroom, which sparked her love for all things teaching and learning. She is a graduate of the University of Florida's Educational Technology doctoral program. Her research interests include active learning, faculty development, blended learning, instructional design, and leadership.

Extended Abstract

The use of artificial intelligence (AI) tools has become increasingly popular in education. These tools offer great promise for automating complex tasks and accelerating processes. However, there are also concerns about the accuracy and reliability of AI-generated information. One of the biggest concerns for faculty, staff, and others who design, develop, and teach curricula is how to communicate to students the benefits and drawbacks of using such technologies. In other words, how do educators harness these technologies for teaching and learning while also cautioning students about the overreliance and misinformation that may result from using them for academic purposes?

One of the major challenges is misinformation generated by AI tools. To explore this complex issue, a team conducted an exercise using several different AI tools to generate academic references. The goal of the exercise was to determine the accuracy of the AI-generated references and to identify the sources of the generated information. The presenters will share the results of this exercise with participants. Attendees will then explore an AI tool in small groups and conduct an exercise similar to the presenters'. The small groups will then share and discuss the results of the AI activity.
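For illustration only, one way a reference check of this kind could be automated is sketched below: the snippet looks up a citation's DOI through the public Crossref REST API (api.crossref.org) and compares the registered title to the title the AI tool supplied. This is a minimal Python sketch, not the presenters' actual procedure, and the example DOI and title at the bottom are hypothetical.

    # Minimal sketch: does an AI-generated citation's DOI resolve to a real
    # Crossref record, and does the registered title match the claimed title?
    import json
    import urllib.error
    import urllib.request
    from difflib import SequenceMatcher

    def check_reference(doi: str, claimed_title: str) -> str:
        """Look up a DOI on Crossref and compare titles."""
        url = f"https://api.crossref.org/works/{doi}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                record = json.load(resp)
        except urllib.error.HTTPError:
            return "DOI not found -- possibly a fabricated reference"
        real_title = (record["message"].get("title") or [""])[0]
        similarity = SequenceMatcher(
            None, claimed_title.lower(), real_title.lower()
        ).ratio()
        if similarity > 0.8:
            return f"DOI resolves and title matches ({similarity:.0%} similar)"
        return f"DOI resolves but title differs (registered: {real_title!r})"

    # Hypothetical AI-generated reference to test:
    print(check_reference("10.0000/example-doi", "A Study That May Not Exist"))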

Participants will engage in a robust conversation about using AI in teaching and learning. The presenters work in online teaching and learning and bring different perspectives on using AI in their teaching and design practice. One presenter is an engineering faculty member who teaches courses in a master's-level computer science program, specifically in data analysis and artificial intelligence; this presenter will explain some of the technical aspects of AI to help participants understand how the tools collect and share information. Another presenter is the director of a learning design team and an online education faculty member. The final presenter is a learning designer who has written and presented on academic integrity. Each brings a distinct perspective on using AI tools for teaching and learning.

While AI tools can be extremely useful for automating tasks and generating information quickly, they are not infallible and can produce inaccurate or misleading results. Participants will receive resources based on the results of the exercises. The presenters hope the results will be informative and can be shared with students to raise awareness of both the benefits and the limitations of using AI tools for academic purposes. Participants will need a device and access to an AI tool for this session.