Tag Archives: Cramer’s V

Issues with effect size in CME

This past Thursday, I gave a short presentation on effect size at the SACME Spring Meeting in Cincinnati (a surprisingly cool city, by the way – make sure to stop by Abigail Street).  Rather than talking about why effect size is important in CME, I focused on its limitations.  I was hoping for feedback on how to refine current methods.  My main concerns were:

  1. Using mean and standard deviation from ordinal variables to determine effect size (how big of a deal is this?)
  2. Transforming Cramer’s V to Cohen’s d (is there a better method?)
  3. How many outcome questions should be aggregated for a given CME activity to determine an overall effect? (my current minimum is four)

The SACME slide deck is here.  I got some good feedback at the meeting, which may lead to some changes in the approach I’ve previously recommended. Until then, if you have any suggestions, let me know.



Filed under CME, Cohen's d, Cramer's V, Effect size, Statistics

Part IV: What? More about effect size?

Over the previous three posts, I introduced effect size, discussed its calculation and interpretation, and even provided an example of how you can use effect size to demonstrate the effectiveness of your overall CME program.  My intention was to present a method for CME assessment that is both practical and powerful.

Those of you who are a bit more statistically savvy likely noticed that my previous effect size example focused on paired, ordinal data.  That is, I used a pre- vs. post-activity survey (i.e., paired) composed of rating-scale (i.e., ordinal) questions.  I chose this path because it’s fairly common in CME outcome assessments and it’s the most straightforward calculation of Cohen’s d (which was the effect size measure of interest).
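
To make that concrete, here is a minimal sketch in Python of one common way to get Cohen’s d from paired pre- vs. post-activity rating data.  I’m assuming the d_z variant (mean of the paired differences divided by their standard deviation); the earlier posts in this series may use a different denominator, and the ratings below are hypothetical.

```python
# Minimal sketch: Cohen's d (d_z variant) for paired pre/post rating-scale data.
# The ratings are hypothetical; real analyses would use your survey responses.
import statistics

pre  = [2, 3, 3, 2, 4, 3, 2, 3]   # pre-activity ratings (e.g., 1-5 scale)
post = [4, 4, 5, 3, 5, 4, 3, 4]   # post-activity ratings from the same participants

diffs = [b - a for a, b in zip(pre, post)]                     # paired differences
cohens_d = statistics.mean(diffs) / statistics.stdev(diffs)    # d_z = mean(diff) / SD(diff)

print(f"Cohen's d (paired) = {cohens_d:.2f}")
```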

Here are some other scenarios:

  1. If you’re using pre- vs. post-activity case-based surveys, you’re now working with paired, nominal (or categorical) data that has most likely been dichotomized (e.g., transformed into correct/evidence-based preferred answer = 1, all other responses = 0).  In this case, the road to effect size is a bit more complex (i.e., use McNemar’s test to check for statistical significance, calculate an odds ratio [OR], and convert the odds ratio to Cohen’s d).  Of note, an OR is itself an effect size measure, so converting it to Cohen’s d is optional.  The formula for this conversion is d = ln(OR)/1.81 (Chinn S: A simple method for converting an odds ratio to effect size for use in meta-analysis. Statistics in Medicine 2000, 19:3127-3131).  A sketch of this workflow appears after this list.
  2. If you’re using post-activity case-based surveys administered to CME participants and a representative control group, you’re now working with unpaired, nominal data (typically dichotomized into correct vs. incorrect answers).  In this case, you’ll use a chi-square test (if the sample is large) or Fisher’s exact test (if the sample is small) and also calculate Cramer’s V.  You’ll then need to convert Cramer’s V to Cohen’s d (which you can do here).  A sketch of this workflow also appears after this list.
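
Here is a minimal sketch of the workflow in item 1, assuming Python with statsmodels: McNemar’s test on a 2×2 table of paired pre/post responses, a conditional odds ratio taken from the discordant cells (which may differ from how you prefer to compute the OR), and Chinn’s conversion to Cohen’s d.  The counts are hypothetical.

```python
# Minimal sketch: paired, dichotomized pre/post data -> McNemar's test, OR, Cohen's d.
# Counts are hypothetical; the OR here is the conditional (discordant-cell) odds ratio.
import math
from statsmodels.stats.contingency_tables import mcnemar

# Rows = pre-activity (correct, incorrect); columns = post-activity (correct, incorrect)
table = [[20, 5],    # correct pre  -> [correct post, incorrect post]
         [25, 10]]   # incorrect pre -> [correct post, incorrect post]

result = mcnemar(table, exact=True)        # exact test on the discordant cells
odds_ratio = table[1][0] / table[0][1]     # improved (25) / worsened (5)
cohens_d = math.log(odds_ratio) / 1.81     # Chinn (2000): d = ln(OR) / 1.81

print(f"McNemar p = {result.pvalue:.4f}, OR = {odds_ratio:.2f}, d = {cohens_d:.2f}")
```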
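
And a minimal sketch of the workflow in item 2, assuming Python with SciPy: a chi-square test (or Fisher’s exact test for small samples) on a 2×2 participants-vs.-controls table, Cramer’s V from the chi-square statistic, and a conversion to Cohen’s d.  The counts are hypothetical, and the V-to-d formula shown is the standard phi-to-d conversion for 2×2 tables; check that it matches whatever converter you use.

```python
# Minimal sketch: unpaired, dichotomized participant vs. control data
# -> chi-square (or Fisher's exact), Cramer's V, Cohen's d. Counts are hypothetical.
import math
from scipy.stats import chi2_contingency, fisher_exact

# Rows = group (CME participants, controls); columns = (correct, incorrect)
table = [[60, 40],
         [35, 65]]
n = sum(sum(row) for row in table)

chi2, p, dof, expected = chi2_contingency(table)
_, p_fisher = fisher_exact(table)   # small-sample alternative to the chi-square test

# For a 2x2 table, Cramer's V equals the phi coefficient: V = sqrt(chi2 / n)
cramers_v = math.sqrt(chi2 / (n * (min(len(table), len(table[0])) - 1)))
cohens_d = 2 * cramers_v / math.sqrt(1 - cramers_v**2)   # phi-to-d conversion

print(f"chi-square p = {p:.4f}, Cramer's V = {cramers_v:.2f}, d = {cohens_d:.2f}")
```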

If you’ve been doing this, or any other analysis, incorrectly (as I have in the past, often do in the present, and will surely do in the future), don’t fret.  Statisticians are constantly pointing out examples of faulty use of statistics in the peer-reviewed literature (even in such prestigious journals as JAMA and NEJM).  Keep making mistakes; it means you’re moving forward.


Filed under CME, Cohen's d, Effect size, Methodology, Statistics