
Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study.

Posted on 11.06.2020 by Jason M Woods, Teresa M Chan, Damian Roland, Jeff Riddell, Andrew Tagg, Brent Thoma
INTRODUCTION: Podcasts are increasingly being used for medical education. Studies have found that assessing the quality of online resources can be challenging. We sought to determine the reliability of gestalt quality assessment of education podcasts in emergency medicine.

METHODS: An international, interprofessional sample of raters was recruited through social media, direct contact, and the extended personal network of the study team. Each participant listened to eight podcasts (selected to include a variety of accents, numbers of speakers, and topics) and rated the quality of each podcast on a seven-point Likert scale. Phi coefficients were calculated within each group and overall. Decision studies were conducted using a phi of 0.8.

RESULTS: A total of 240 collaborators completed all eight surveys and were included in the analysis. Attendings, medical students, and physician assistants had the lowest individual-level variance and thus required the fewest raters to reliably evaluate quality (phi > 0.80). Overall, 20 raters were required to reliably evaluate the quality of emergency medicine podcasts.

DISCUSSION: Gestalt ratings of quality from approximately 20 health professionals are required to reliably assess the quality of a podcast. This finding should inform future work focused on developing and validating tools to support the evaluation of quality in these resources.
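The decision (D-) study the methods describe projects how the phi (dependability) coefficient improves as ratings are averaged over more raters. A minimal sketch of that calculation is below; the variance components (`var_podcast`, `var_error`) are hypothetical illustrations chosen so the arithmetic lands near the 20-rater result, not values taken from the study, and the helper names are not from the paper.

```python
# Simplified D-study from generalizability theory (illustrative only).
# phi(n) = var_podcast / (var_podcast + var_error / n), where var_podcast is
# the object-of-measurement variance and var_error pools absolute-error
# variance (rater and residual components) for a single rating.

def phi_coefficient(var_podcast: float, var_error: float, n_raters: int) -> float:
    """Dependability (phi) of the mean rating across n_raters raters."""
    return var_podcast / (var_podcast + var_error / n_raters)

def raters_needed(var_podcast: float, var_error: float,
                  target: float = 0.8, max_n: int = 500) -> int:
    """Smallest number of raters whose mean rating reaches the target phi."""
    for n in range(1, max_n + 1):
        if phi_coefficient(var_podcast, var_error, n) >= target:
            return n
    raise ValueError("target phi not reached within max_n raters")

# HYPOTHETICAL components: error variance five times the podcast variance
# yields exactly 20 raters at the phi = 0.8 threshold used in the study.
print(raters_needed(var_podcast=1.0, var_error=5.0))  # → 20
```

The key design point is that phi shrinks the error term by 1/n, so adding raters raises dependability with diminishing returns; the real study estimated the variance components per rater group, which is why groups with lower individual-level variance needed fewer raters.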



Woods, J.M., Chan, T.M., Roland, D. et al. Evaluating the reliability of gestalt quality ratings of medical education podcasts: A METRIQ study. Perspect Med Educ (2020).


VoR (Version of Record)

Published in

Perspectives on Medical Education (Springer Science and Business Media LLC)
