You have been calculating an effect size for each of your CME activities, right? And now you have a database full of activities with corresponding effect sizes for, say, knowledge and competence outcomes. Sound familiar? Anyone…anyone…Bueller? Okay, for the one straggler, here’s a refresher:
- What is effect size? (link)
- How to calculate effect size (link)
- Reporting effect size (link)
- Effect size – other methodologic/statistical considerations (link)
Now that we’re all on the same page, let’s move on to the next question…what exactly is a “good” effect size? Well, you would first start with Cohen (Cohen J. Statistical power analysis for the behavioral sciences [2nd ed.]. Hillsdale, NJ: Lawrence Erlbaum Associates), who identified the following general benchmarks: 0.2 = small effect, 0.5 = medium effect, and 0.8 = large effect. Although effect size is relatively new to CME, thankfully more specific effect size data are available. Starting with recent literature (specifically, meta-analyses), the following effect sizes have been reported:
- Competence effect size (live activities) = 0.85 (Drexel et al., 2011)
- Knowledge effect size (live activities) = 0.60 (Mansouri & Lockyer, 2007)
- Knowledge effect size (eLearning) = 0.82 (Casebeer et al., 2011)
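If you want to see how Cohen’s benchmarks play out in practice, here’s a minimal sketch in Python. It computes Cohen’s d for a pre/post comparison (mean difference divided by the pooled standard deviation) and labels the result against the small/medium/large cut points above. The score data are made up purely for illustration; your own calculation may differ depending on which effect size formula your shop uses.

```python
import statistics

def cohens_d(pre_scores, post_scores):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    mean_diff = statistics.mean(post_scores) - statistics.mean(pre_scores)
    n1, n2 = len(pre_scores), len(post_scores)
    s1, s2 = statistics.stdev(pre_scores), statistics.stdev(post_scores)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return mean_diff / pooled_sd

def benchmark(d):
    """Label an effect size using Cohen's general benchmarks."""
    if abs(d) >= 0.8:
        return "large"
    if abs(d) >= 0.5:
        return "medium"
    if abs(d) >= 0.2:
        return "small"
    return "negligible"

# Hypothetical pre/post knowledge test scores (percent correct)
pre = [55, 60, 50, 65, 58, 62]
post = [70, 75, 68, 80, 72, 77]
d = cohens_d(pre, post)
print(f"d = {d:.2f} ({benchmark(d)})")
```

Note that this uses the pooled-SD version of d; some CME shops divide by the pre-test SD only, which will shift the number a bit for the same data.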
It’s important to note that these effect sizes are the result of mixed measurement methods (and that measurement approach influences effect size), but they are certainly more relevant than Cohen’s benchmarks (and we know that Cohen wouldn’t take offense, because refining effect sizes through repeated measurement in a given area is exactly what he recommended).
In regard to repeated measurement, we have been measuring knowledge- and competence-level effect sizes for a variety of CME activities over the past two years. In the next post, I’ll be publishing our effect size results for a variety of live and enduring material formats. I’d love to hear how these results jibe with your findings.