Assessing the Quality Indicators of e-Learning Educational Studies

Concurrent Session 4

Brief Abstract

This session presents approaches to assessing the quality indicators of e-learning educational research, using examples from specific studies. The session will also introduce various scales for appraising the methodological quality of educational research that can be used across disciplines. The presenter will also share tips on study planning and manuscript preparation.

Presenters

Kadriye O. Lewis, EdD, is the Director of Evaluation and Program Development in the Department of Graduate Medical Education at Children's Mercy Hospital (CMH). She is also Professor of Pediatrics at the University of Missouri-Kansas City School of Medicine (UMKC SOM). Prior to coming to Children's Mercy, Dr. Lewis worked at Cincinnati Children's Hospital Medical Center (CCHMC) for more than 13 years, where she played a major role in the development of the Online Master's Degree in Education Program for Healthcare Professionals. This program has developed a national and international reputation for excellence and has played an important role in training future leaders in medical education. Dr. Lewis served as an education consultant to the medical center's faculty development program. She applied her educational background and academic skills to health literacy by establishing a Health Literacy Committee at CCHMC in 2007 and chaired this committee successfully for three years. She also received a Pfizer Medical and Academic Partnerships Visiting Professorship grant in Health Literacy/Clear Health Communication in 2008. Along with her many accomplishments in scholarly activities, she established the e-Learning SIG in Medical Education for the Academic Pediatrics Association (APA) and chaired this group for six years. Dr. Lewis served as an education consultant for a national-level, industry-sponsored project (Abbott Nutrition) on e-learning development in pediatric nutrition education for over six years. She also worked with the infectious disease team at CMH as Co-PI for the Pfizer-funded CoVER project (Collaboration for Vaccination Education and Research for Residents), which produced a unique training model in vaccine education for residency programs, with interactive modules implemented nationally at 26 institutions. Currently, she is involved in an NIH-funded genome-related grant project and various curriculum development projects for the graduate medical education programs at CMH. Dr. Lewis is active in medical education research; her scholarly interests focus on e-learning design and the implementation of innovative technologies for curriculum delivery at many levels of healthcare education, including performance-based assessment, the construction of new assessment tools, and the improvement and validation of existing tools and methods. Dr. Lewis presents extensively at professional meetings and conferences and has been a keynote or invited speaker at many international and national universities. In addition, she is the Medical Education Section Editor of the journal Annals of Medicine (https://www.tandfonline.com/journals/iann20/sections/medical-education).

Extended Abstract

With technological advancement, various formats of e-learning have been growing exponentially. While quality indicators for the online delivery of instruction and the sustainable development of courses have been well established through multiple educational studies (Hafeez et al., 2022; Moore, 2002; Timbi-Sisalima et al., 2022), assessing the quality of e-learning educational research has been overlooked, and the literature lacks research quality indicators or standardized appraisal criteria specific to e-learning educational research. Further, theoretical frameworks are underutilized in educational research, and experimental research remains underrepresented because of the difficulties of implementing it in e-learning contexts. It is therefore crucial to analyze the evolution of educational research on e-learning and to appraise its research quality.

Quality indicators in e-learning research refer to the criteria or measures used to assess the quality and rigor of research studies related to e-learning, that is, the use of web-based technologies to deliver educational content and facilitate learning. These indicators are used to evaluate the methodological soundness, validity, trustworthiness, and reliability of research findings in the field of e-learning. In essence, good research in education is defined by evidence that is reliable, applicable to (many) practical situations, consistent, and neutral (unbiased), regardless of whether a qualitative or a quantitative approach is taken. Although qualitative and quantitative research share similar standards for good evidence (quality criteria), the design and operationalization of these quality requirements may vary between them. However, issues such as choosing the “right” research method (qualitative, quantitative, mixed methods, etc.) and adopting rigorous strategies for data collection and consistency (collecting, recording, and processing) and for analysis remain common concerns (Cohen et al., 2017; Fraenkel et al., 2018; Panke, 2018).

This information session will focus on how to critically evaluate the quality and rigor of research studies to ensure that evidence-based practices inform decision-making and contribute to the advancement of the field of technology-integrated teaching and learning. The session will use examples from specific studies to provide rich information on the following quality indicators, which will be discussed in the context of different e-learning modalities. It will also introduce various criteria and appraisal scales for methodological quality, such as Buckley et al.’s 11 quality indicators (Buckley et al., 2009), the Appraisal tool for Cross-Sectional Studies (AXIS; Downes et al., 2016), the Medical Education Research Study Quality Instrument (MERSQI), and the Modified MERSQI (Asmri et al., 2023).

  • Research design
  • Sample size and sampling techniques/representativeness
  • Appropriate technology platforms for multiple e-learning modalities
  • Literature review and theoretical framework
  • Validity and reliability of measures
  • Data collection and analysis
  • Findings and conclusions
  • Ethical considerations
  • Generalizability and transferability
  • Peer review and publication
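
To make the idea of point-based appraisal scales concrete, the following minimal sketch (Python) scores a single study against a hypothetical rubric and totals the result. The domain names, point values, and the RubricItem/appraise helpers are illustrative assumptions only; they do not reproduce MERSQI, the Modified MERSQI, AXIS, or Buckley et al.'s indicators.

    # A minimal illustrative sketch, assuming a hypothetical point-based rubric
    # in the spirit of MERSQI-style scales; item names and values are invented.
    from dataclasses import dataclass

    @dataclass
    class RubricItem:
        domain: str        # e.g., "Study design", "Sampling", "Data analysis"
        points: float      # points awarded to the study under review
        max_points: float  # maximum points the item allows

    def appraise(items: list[RubricItem]) -> dict:
        """Total the item scores and express them as a percentage of the maximum."""
        total = sum(i.points for i in items)
        maximum = sum(i.max_points for i in items)
        return {"total": total, "max": maximum,
                "percent": round(100 * total / maximum, 1) if maximum else 0.0}

    # Example: appraising one (hypothetical) e-learning study.
    study = [
        RubricItem("Study design", 2.0, 3.0),            # e.g., single-group pre/post test
        RubricItem("Sampling", 1.5, 3.0),                # e.g., two sites, modest response rate
        RubricItem("Validity of instrument", 2.0, 3.0),  # e.g., content evidence reported
        RubricItem("Data analysis", 3.0, 3.0),           # e.g., beyond descriptive statistics
        RubricItem("Outcomes", 1.5, 3.0),                # e.g., knowledge/skills, not behavior
    ]
    print(appraise(study))  # {'total': 10.0, 'max': 15.0, 'percent': 66.7}

In practice, published instruments fix the items and point values in advance; the session will walk through how such scores are assigned and interpreted for real e-learning studies.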

Through large-group participation, we will discuss standardized criteria and appraisal scales for best practices in the methodological and reporting quality of e-learning educational research. The presenter will also share tips on study planning and manuscript preparation.

Learning Objectives

By the end of the session, participants will be able to:

  • Identify quality indicators for assessing e-learning educational studies, including various criteria with different appraisal scales for methodological quality.
  • Discuss standardized criteria and appraisal scales for best practices in the methodological and reporting quality of e-learning educational research.

Level of Participation

This 45-minute information session is structured to create a mutual learning experience through a combination of interactive presentation (20 minutes), large-group discussion (20 minutes), and participants’ questions (5 minutes). Both experienced faculty and instructors with no experience in e-learning research will gain crucial knowledge about appraising the quality indicators of e-learning educational studies.

References

  • Asmri, M. A., Haque, M. S., & Parle, J. (2023). A Modified Medical Education Research Study Quality Instrument (MMERSQI) developed by Delphi consensus. BMC Medical Education, 23(1). https://doi.org/10.1186/s12909-023-04033-6
  • Buckley, S., Coleman, J. J., Davison, I. G., Khan, K. M., Zamora, J., Malick, S., Morley, D., Pollard, D., Ashcroft, T., Popovic, C., & Sayers, J. (2009). The educational effects of portfolios on undergraduate student learning: A Best Evidence Medical Education (BEME) systematic review. BEME Guide No. 11. Medical Teacher, 31(4), 282–298. https://doi.org/10.1080/01421590902889897
  • Cohen, L., Manion, L., & Morrison, K. (2017). Research Methods in Education. Routledge.
  • Downes, M., Brennan, M. L., Williams, H. C., & Dean, R. (2016). Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open, 6(12), e011458. https://doi.org/10.1136/bmjopen-2016-011458
  • Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2018). How to Design and Evaluate Research in Education (8th ed.). McGraw Hill.
  • Hafeez, M., Naureen, S., & Sultan, S. (2022). Quality Indicators and Models for Online Learning Quality Assurance in Higher Education. Electronic Journal of E-Learning, 20(4), 374–385. https://doi.org/10.34190/ejel.20.4.2553
  • Moore, J. C. (2002). Elements of Quality: The Sloan-C™ Framework. Olin College - Sloan-C.
  • Panke, D. (2018). Research Design & Method Selection: Making Good Choices in the Social Sciences. SAGE.
  • Timbi-Sisalima, C., Sánchez-Gordón, M., Hilera-Gonzalez, J. R., & Otón, S. (2022). Quality Assurance in E-Learning: A Proposal from Accessibility to Sustainability. Sustainability, 14(5), 3052. https://doi.org/10.3390/su14053052