# Monthly Archives: February 2012

## Calculating Effect Size, Part II

In the previous post, I introduced effect size (more specifically, Cohen’s d) as a statistical tool that can answer whether a CME activity was effective, quantify the magnitude of that effectiveness, and allow for comparisons of effectiveness across CME activities. Using Cohen’s d, a CME provider can report the effectiveness of an annual meeting in affecting, for example, participant competency (Level 4 outcomes) and then compare the magnitude of effect to previous years’ meetings and/or other CME activities of similar format or topic focus. Ultimately, a CME provider can determine benchmarks for effectiveness at each outcome level (or for each educational format) to quickly diagnose the performance of each CME activity. That sort of info comes in real handy for accreditation review and for communicating with sponsors (but that will be the focus of the next post).

So, all that being said, it’s now time to discuss how to actually calculate a Cohen’s d. One caution: you will not need a statistician, an advanced grasp of mathematics, or any specialty certification…if you can calculate (or more likely, use MS Excel to calculate) an average and a standard deviation, and you have access to the Internet, you’re good.

I’ll set the stage with a common example: assume that you are a CME provider who just produced a 2-hour, mixed didactic-interactive case discussion regarding advances in the detection, evaluation and treatment of high blood cholesterol in adults. You used a paper-based survey (administered both pre- and post-activity) to measure participants’ self-reported utilization (on a 5-point scale) of clinical tasks related to the CME activity content. Each survey consisted of eight assessment items (i.e., clinical tasks). Now you want to summarize this pre- vs. post-activity data into a single effect size. The steps are as follows:

1. Calculate a mean rating and standard deviation for each assessment item in the pre-survey.
2. Calculate a mean rating and standard deviation for each assessment item in the post-survey.
3. Type “effect size calculator” into Google and click any of the identified links (I like to use this one).
4. Enter the data from items #1 and 2 (above) into the effect size calculator.
5. Behold the effect size for your activity!
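If you’d rather skip the online calculator, the same arithmetic is easy to script yourself. Here is a minimal sketch in Python (the means and standard deviations below are made up for illustration), using the pooled-standard-deviation form of Cohen’s d that most online calculators implement:

```python
from math import sqrt

def cohens_d(mean_pre, sd_pre, mean_post, sd_post):
    """Cohen's d for a pre- vs. post-activity comparison.

    Uses the pooled standard deviation of the two measurements;
    a positive d means the post-activity mean is higher.
    """
    pooled_sd = sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
    return (mean_post - mean_pre) / pooled_sd

# Hypothetical data for one assessment item on a 5-point scale:
# pre-survey mean 3.1 (SD 0.9), post-survey mean 3.8 (SD 0.8)
d = cohens_d(3.1, 0.9, 3.8, 0.8)
print(round(d, 3))  # d of roughly 0.8
```

The same function can be applied item by item across all eight assessment items and the resulting d values averaged into a single activity-level effect size.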

There is one more step…interpretation.   For that, you need to be aware of the following:

1. Cohen’s d is expressed in standard deviation units. Accordingly, a Cohen’s d of 1.0 indicates that one standard deviation separates the pre-activity average rating from the post-activity average rating (with the post-activity rating being greater).
2. Cohen’s d is proportional. Therefore, a Cohen’s d of 1.0 is twice the magnitude of a Cohen’s d of 0.5 (or half the magnitude of a 2.0).
3. There is no mathematical upper or lower bound on Cohen’s d. In practice, however, values are expected to fall between -3 and +3, with the majority falling between -1 and +1.
4. Benchmarks are used to assess the magnitude of a Cohen’s d. Based on repeated measurement, benchmarks (or expected ranges of Cohen’s d) can be established in a given area (e.g., mixed, didactic-interactive CME). In areas where benchmarks remain to be established, the following preliminary benchmarks can be used to assess magnitude of effect: 0.2 (small), 0.5 (medium) and 0.8 (large) (Cohen 1988).
5. You can compare the Cohen’s d from one activity to the d from any other activity that used a similar outcome assessment method (e.g., a case-based survey).
6. You can aggregate Cohen’s d across activities (i.e., take an average d across all of your eLearning activities, or all of your cholesterol-focused CME – assuming you used the same outcome assessment method for these activities [see item #5 above]).
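To make the benchmark interpretation in item #4 concrete, here is a small helper function (the cutoffs are Cohen’s 1988 preliminary benchmarks; the function name and the “negligible” label for values below 0.2 are my own conventions):

```python
def interpret_d(d):
    """Label the magnitude of a Cohen's d using Cohen's (1988)
    preliminary benchmarks: 0.2 (small), 0.5 (medium), 0.8 (large)."""
    magnitude = abs(d)  # sign only indicates direction of change
    if magnitude >= 0.8:
        return "large"
    if magnitude >= 0.5:
        return "medium"
    if magnitude >= 0.2:
        return "small"
    return "negligible"

print(interpret_d(0.82))   # large
print(interpret_d(-0.6))   # medium (a decline, but a medium-sized one)
```

And per item #6, aggregating is just averaging: `sum(ds) / len(ds)` across all activities that used the same outcome assessment method.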

And just like that, you are now proficient in calculating and interpreting effect size in CME.  I told you this would be easy.  Now go forth and make this look hard to all of your competition.

Reference: Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences, 2nd edition. Hillsdale, NJ: Erlbaum.