Over the previous three posts, I introduced effect size, discussed its calculation and interpretation, and even provided an example of how you can use effect size to demonstrate the effectiveness of your overall CME program. My intention was to present a method for CME assessment that is both practical and powerful.

For those a bit more statistically savvy, you likely noticed that my previous effect size example focused on paired, ordinal data. That is, I used a pre- vs. post-activity survey (i.e., paired) comprised of rating-scale (i.e., ordinal) questions. I chose this path because it’s fairly common in CME outcome assessments and it’s the most straightforward calculation of Cohen’s *d* (which was the effect size measure of interest).

Here are some other scenarios:

- If you’re using pre- vs. post-activity *case-based* surveys, you’re now working with paired, *nominal* (or categorical) data that has most likely been *dichotomized* (e.g., transformed into correct/evidence-based preferred answer = 1, all other responses = 0). In this case, the road to effect size is a bit more complex: use McNemar’s test for statistical significance, calculate an odds ratio (OR), and convert the odds ratio to Cohen’s *d*. Of note, an OR is itself an effect size measure, and converting it to Cohen’s *d* is optional. The formula for this conversion is d = ln(OR)/1.81 (Chinn S: A simple method for converting an odds ratio to effect size for use in meta-analysis. *Statistics in Medicine* 2000, 19:3127-3131).
- If you’re using *post-activity* case-based surveys administered to CME participants and a representative control group, you’re now working with *unpaired*, nominal data (typically dichotomized into correct vs. incorrect answers). In this case, you’ll use a chi-square test (if the sample is large) or Fisher’s exact test (if the sample is small) and also calculate Cramer’s *V*. You’ll then need to convert Cramer’s *V* to Cohen’s *d* (which you can do here).
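As a minimal sketch of the first scenario: for paired dichotomized responses, McNemar's odds ratio is the ratio of the two discordant-pair counts, and Chinn's formula converts it to Cohen's *d*. The counts below are purely illustrative, not data from any actual CME activity:

```python
import math

# Hypothetical discordant-pair counts from a pre- vs. post-activity
# case-based survey (numbers are made up for illustration):
improved = 30  # incorrect pre, correct post
worsened = 10  # correct pre, incorrect post

# For paired dichotomous data, McNemar's odds ratio is the ratio of
# discordant pairs.
odds_ratio = improved / worsened

# Chinn (2000): d = ln(OR) / 1.81
cohens_d = math.log(odds_ratio) / 1.81

print(round(odds_ratio, 2), round(cohens_d, 2))  # 3.0 0.61
```

Note that the concordant pairs (correct both times, incorrect both times) drop out of this odds ratio entirely; only the respondents who changed answers drive the effect.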

If you’ve been doing this, or any other analysis, incorrectly (as I have in the past, often do in the present, and will surely do in the future), don’t fret. Statisticians are constantly pointing out examples of faulty use of statistics in the peer-reviewed literature (even in such prestigious journals as *JAMA* and *NEJM*). Keep making mistakes; it means you’re moving forward.



In bullet point 2 you link to a calculator to convert Cramer’s V to Cohen’s d. Nothing on the linked page mentions that particular conversion, however. Has it been removed or does it go by a different name?

Sorry for the delay in response. To convert Cramer’s V to Cohen’s d, click the tab “Correlation coefficient (r) to Effect Size” and enter your value for Cramer’s V in the box for correlation (you’ll also need to enter info for number of subjects and effect direction). I’m not entirely keen on this conversion – have not yet been able to confirm that it’s statistically okay; however, it’s the best method I’ve yet been able to find. Just make sure you disclose your methods in case this approach comes with flaws. And if you find a better method, please let me know.
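Assuming the calculator's "Correlation coefficient (r) to Effect Size" tab applies the standard r-to-d conversion, the same calculation can be sketched directly. For a 2×2 table, Cramer's V equals the phi coefficient, which is a Pearson correlation, so treating V as r is reasonable there; for larger tables it's an approximation (and, as noted above, worth disclosing):

```python
import math

def cramers_v_to_d(v):
    """Convert Cramer's V to Cohen's d by treating V as a correlation r.

    For a 2x2 table, V equals the phi coefficient (a Pearson correlation),
    so the standard conversion applies:
        d = 2r / sqrt(1 - r^2)
    For larger tables this is only an approximation.
    """
    return 2 * v / math.sqrt(1 - v ** 2)

print(round(cramers_v_to_d(0.3), 2))  # 0.63
```

The sign of *d* follows the direction of the effect, which is why the calculator also asks for effect direction; V itself is always non-negative.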

If a Cohen’s d is calculated based on the odds ratio from a McNemar test, is it equivalent to a Cohen’s d for change scores as estimated by (Mean_post – Mean_pre)/SD_change for continuous data?

No, because you’re working with different question formats: the McNemar-based d comes from a categorical variable (e.g., a case vignette or knowledge test question), while the other is based on a continuous variable. We can’t assume that different assessment formats would create equivalent effects. Well, we can’t assume that until someone collects enough data to suggest otherwise.