In the previous two posts, I introduced effect size, walked through an effect size calculation, and offered some guidance on interpretation. Now I want to quickly highlight one application of effect size data: ACCME reaccreditation.
ACCME Criterion 11 states: “The provider analyzes changes in learners (competence, performance, or patient outcomes) achieved as a result of the overall program’s activities/educational interventions.” I can only imagine the pages and pages of material heaped on ACCME reviewers in response to this criterion. How can you succinctly describe the effectiveness of a CME program consisting of hundreds of activities over a two-, four- or six-year period? Oh yeah, effect size.
If you remember from the last post, effect sizes can be aggregated across activities as long as the educational outcome measurement (EOM) approach remains the same. So assume you’re a healthcare system that routinely produces regularly scheduled series (RSS), conferences, and eLearning activities. Furthermore, assume your standard EOM approach across these activities is to measure self-reported utilization of clinical tasks related to CME activity content. If you’ve been calculating an effect size for each of these activities, you can aggregate the effect size scores across all of them to arrive at a single effect size for competence (Level 4 outcome). Compare this effect size to the benchmarks identified in the previous post (e.g., 0.2 = small, 0.5 = medium, and 0.8 = large) and you have data-based evidence of your overall program effectiveness at this outcome level (see example Figure).
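To make the aggregation step concrete, here is a minimal sketch in Python. The activity names, effect sizes, and sample sizes are all hypothetical, and the sample-size-weighted mean is one reasonable way to combine per-activity effect sizes; the post itself doesn't prescribe a specific weighting scheme.

```python
# Hypothetical per-activity effect sizes (Cohen's d) and learner counts.
activities = [
    {"name": "RSS: heart failure",      "d": 0.40, "n": 100},
    {"name": "Annual conference",       "d": 0.60, "n": 100},
    {"name": "eLearning: anticoag",     "d": 0.50, "n": 200},
]

def aggregate_effect_size(records):
    """Sample-size-weighted mean of per-activity effect sizes."""
    total_n = sum(r["n"] for r in records)
    return sum(r["d"] * r["n"] for r in records) / total_n

def benchmark_label(d):
    """Map an effect size to the benchmarks from the previous post."""
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

overall = aggregate_effect_size(activities)
print(overall, benchmark_label(overall))  # a single program-level number
```

The single number that comes out is your program-level effect size for this outcome level, ready to plot against the small/medium/large benchmarks.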
Taking it a step further, you can stratify effect size by format type, which tells you how effective your eLearning was relative to your conferences or RSS (see example Figure 2). You can stratify even further by topic focus to see, for example, how effective your primary care CME was relative to your rheumatology-based CME.
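The stratified view is just the same aggregation run within each group. A sketch, again with made-up activity records and a sample-size-weighted mean as the assumed combining rule:

```python
from collections import defaultdict

# Hypothetical activity records tagged with delivery format.
activities = [
    {"format": "eLearning",  "d": 0.55, "n": 80},
    {"format": "eLearning",  "d": 0.45, "n": 120},
    {"format": "conference", "d": 0.30, "n": 200},
    {"format": "RSS",        "d": 0.20, "n": 100},
]

def stratified_effect_sizes(records, key):
    """Sample-size-weighted mean effect size within each group (e.g., format or topic)."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r)
    return {
        group: sum(r["d"] * r["n"] for r in rs) / sum(r["n"] for r in rs)
        for group, rs in groups.items()
    }

print(stratified_effect_sizes(activities, "format"))
```

Swapping `"format"` for a topic tag gives the primary care vs. rheumatology comparison the same way, so one small function covers both stratifications.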
Now you’re responding to Criterion 11 with just a few figures and explanatory paragraphs. And you’re using good data to do so. Maybe the next reaccreditation review won’t look so scary.