Commitment to change: good night, sweet prince?

Commitment to change (CTC) questions are the caboose of every post-activity CME evaluation – stripped of all relevance and sustained solely by nostalgia. Thirty years after its introduction, we can all now retire this method, confident that it has served us well but that it's time for something more…app-ish.  And off it goes, grumbling toward obscurity, with none but academics to watch it fade.  Its final words: "but you never really knew me".

But wait!  What's that?  A hand?  Pulling CTC back from the edge?  Dusting off its coat, straightening its tie, and offering a fresh dab of modelling clay to re-pomp its mane are five kind investigators from the midwestern tundra.  Not just ivory-tower curators, these rescuers stand shoulder to shoulder with CTC to proclaim (or at least publish): there's value here, assuming you use it correctly.

Guess what?  People have been studying CTC for, like, a long time.  Should you use a follow-up survey?  When?  How?  How should you word the questions?  Include a rating scale?  And how should you sort through and interpret the results?  This stuff all matters. And you won't find an easier-to-digest summary than this 2010 article in Evaluation & the Health Professions.

So, yes, if you’re simply maintaining a “what are you going to change in your practice” question at the end of every CME evaluation – definitely send that packing.  Then read the aforementioned article.  You’ll find that CTC has limitations, but when done in accordance with the latest evidence, there’s a lot of good data to be had.


Filed under CME, Commitment to Change, Outcomes

Issues with effect size in CME

This past Thursday, I gave a short presentation on effect size at the SACME Spring Meeting in Cincinnati (a surprisingly cool city, by the way – make sure to stop by Abigail Street).  Instead of a talk about why effect size is important in CME, I focused on its limitations.  I was hoping for feedback on how to refine my current methods.  My main concerns:

  1. Using mean and standard deviation from ordinal variables to determine effect size (how big of a deal is this?)
  2. Transforming Cramer’s V to Cohen’s d (is there a better method?)
  3. How many outcome questions should be aggregated for a given CME activity to determine an overall effect? (my current minimum is four)
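To make concerns #1 and #2 concrete, here's a small Python sketch (standard library only). The pooled-SD formula for Cohen's d and the phi-to-d conversion (which treats Cramer's V from a 2×2 table as phi) are common conventions, not the only options, and the numbers are invented:

```python
import math

def cohens_d(mean_pre, sd_pre, n_pre, mean_post, sd_post, n_post):
    """Cohen's d using the pooled standard deviation."""
    pooled_var = (((n_pre - 1) * sd_pre ** 2 + (n_post - 1) * sd_post ** 2)
                  / (n_pre + n_post - 2))
    return (mean_post - mean_pre) / math.sqrt(pooled_var)

def cramers_v_to_d(v):
    """Phi-to-d conversion: d = 2*phi / sqrt(1 - phi^2).
    Valid when Cramer's V comes from a 2x2 table, where V equals phi."""
    return 2 * v / math.sqrt(1 - v ** 2)

# Invented numbers: a 4-point ordinal scale summarized as mean/SD (concern #1)
print(round(cohens_d(2.1, 0.9, 50, 2.8, 0.8, 48), 2))  # → 0.82
print(round(cramers_v_to_d(0.3), 2))                   # → 0.63
```

Whether treating ordinal responses as interval data (concern #1) distorts that first number is exactly the open question.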

The SACME slide deck is here.  I got some good feedback at the meeting, which may lead to some changes in the approach I’ve previously recommended. Until then, if you have any suggestions, let me know.


Filed under CME, Cohen's d, Cramer's V, Effect size, Statistics

Thoughts on organizing your outcomes data

An experiment begins with a hypothesis. For example…I suspect that the next person to enter this coffee shop will be a hipster (denied, by the way).

A neat and tidy hypothesis for CME outcome assessment might read: I suspect that participants in this CME activity will increase compliance with <insert evidence-based quality indicator here>.

Unfortunately, access to data that would answer such a question is beyond the reach of most CME providers. So we use proxy measures such as knowledge tests or case vignette surveys through which we hope to show data suggestive of CME participants increasing their compliance with <insert evidence-based quality indicator here>.

Although this data is much easier to access, it can be pretty tedious to weed through. Issue #1: How do you reduce the data across multiple knowledge or case vignette questions into a single statement about CME effectiveness? Issue #2: How do you systematically organize the outcomes data to develop specific recommendations for future CME?

For issue #1, I’d recommend using “effect size”. There’s more about that here.
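For the curious, here's a minimal sketch of that reduction, assuming simple (unweighted) averaging of per-question effect sizes and the four-question minimum mentioned in the SACME post above; both choices are conventions, not requirements:

```python
def overall_effect(effect_sizes, minimum=4):
    """Average per-question effect sizes into a single activity-level effect.
    Plain averaging (rather than, say, inverse-variance weighting) is an
    assumption, as is the four-question minimum."""
    if len(effect_sizes) < minimum:
        raise ValueError(f"need at least {minimum} outcome questions")
    return sum(effect_sizes) / len(effect_sizes)

# Hypothetical per-question Cohen's d values from one CME activity
print(round(overall_effect([0.35, 0.52, 0.18, 0.47]), 2))  # → 0.38
```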

For issue #2, consider organizing your outcome results into the following four buckets (of note, there is some overlap between these buckets):

1. Unconfirmed gap – pre-activity question data suggest that knowledge or competence is already high (typically defined as >70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no correct response). Important note: although we shouldn’t expect every anticipated gap to be present in our CME participants, one cause of an unconfirmed gap (other than a bad needs assessment) is the use of assessment questions that are too easy and/or don’t align with the education.

2. Confirmed gap – pre-activity question data suggest that knowledge or competence is sufficiently low to warrant educational focus (typically defined as <70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no correct response).

3. Residual gap

a. Post-activity data only = typically defined as <70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no evidence-based correct response

b. Pre- vs. post-activity data = no significant difference between pre- and post-activity responses

4. Gap addressed

a. Post-activity data only = typically defined as >70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no correct response

b. Pre- vs. post-activity data = significant difference between pre- and post-activity responses
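The four buckets above can be sketched as a simple sorting function. The 70% threshold and the bucket names come straight from the definitions above; treating statistical significance as a ready-made flag (rather than computing it here) is a simplification:

```python
def classify_gap(pre_pct_correct, post_pct_correct, significant, threshold=70):
    """Sort one outcome question's results into the buckets above.
    Percentages are the share of respondents choosing the evidence-based
    answer; pass None when pre- or post-activity data are unavailable."""
    buckets = []
    if pre_pct_correct is not None:
        buckets.append("confirmed gap" if pre_pct_correct < threshold
                       else "unconfirmed gap")
    if post_pct_correct is not None:
        if pre_pct_correct is not None:
            # with pre/post pairs, buckets 3b/4b hinge on significance
            buckets.append("gap addressed" if significant else "residual gap")
        else:
            # post-only data falls back to the threshold (buckets 3a/4a)
            buckets.append("gap addressed" if post_pct_correct > threshold
                           else "residual gap")
    return buckets

print(classify_gap(45, 82, significant=True))  # → ['confirmed gap', 'gap addressed']
```

Note the overlap mentioned above: a single question can land in two buckets (e.g. a confirmed gap that was then addressed).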

Most important to note: if the outcome assessment questions do not accurately reflect the gaps identified in the needs assessment, the results of the final report are not going to make any sense (no matter how you organize them).


Filed under CME, Gap analysis, Needs Assessment, Outcomes, Reporting, Statistics

Statistical analysis in CME

Statistics can help answer important questions about your CME.  For example, was there an educational effect and, if so, how big was it?  The first question is typically answered with a P value and the second with an effect size.

If this were 10 years ago, you’d either be purchasing some expensive statistical software or hiring a consultant to answer these questions.  Today (thank you, Internet), it’s simple and basically free.

A step-by-step approach can be found here.
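Here's a taste of the "basically free" route: a plain-Python sketch (standard library only) that runs a Pearson chi-square test on pre- vs. post-activity correct-response counts, giving you the P value, and reports Cramer's V as the effect size. The counts are made up for illustration:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table laid out as:
         pre-activity:  a correct, b incorrect
         post-activity: c correct, d incorrect"""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-square survival function, 1 df
    v = math.sqrt(chi2 / n)             # Cramer's V (equals phi for a 2x2 table)
    return chi2, p, v

# Hypothetical activity: 30/100 correct pre-activity, 55/100 correct post-activity
chi2, p, v = chi_square_2x2(30, 70, 55, 45)
print(round(chi2, 2), round(v, 2))  # → 12.79 0.25
print(p < 0.05)                     # → True
```

So the hypothetical activity shows a statistically significant improvement (P well under 0.05) with a small-to-medium effect.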

 


Filed under CME, CMEpalooza, Cohen's d, Effect size, P value, Statistical tests of significance, Statistics

Data analysis in Excel

Oh, was I excited to find VassarStats.  I haven’t yet encountered a CME outcome analysis that it can’t handle – and it’s free.  Yes, having to cut & paste data between Excel and VassarStats is a bit cumbersome (and subject to error), but I felt it a small price to pay.  And then I found the “Analysis ToolPak” add-in in Excel.  Well, actually, I found Jacob Coverstone’s CME/CPD blog, which unlocks this little secret here.  We’ve been sitting on the tools all along.  Thanks, Jacob, for pointing this out.


Filed under Microsoft Excel, Statistics

CMEpalooza

CMEpalooza will be held on Thursday, March 20 and Friday, March 21.  Like the annual professional meeting for CME (the Alliance for Continuing Education in the Health Professions), CMEpalooza is a collection of “best practice” talks.  Unlike the Alliance, the entire event will be online, archived, and free.  A big thank you to Derek Warnick (aka “the CME Guy“) for putting this all together.

Based on the agenda (of 21 presentations), there are many promising talks, on topics ranging from audience recruitment and adult learning theory to linking educational objectives with outcomes, qualitative analysis, and measuring patient outcomes (I’ll be representing Imedex with a presentation on statistical analysis in CME outcomes).  Regardless of your scope of work, I suspect there will be at least one presentation on the agenda of interest.

If you can’t participate live, no worries: everything will be archived, so view at your convenience – but make sure to check it out.


Filed under ACCME, Alliance Conference, Alliance for CME, Best practices, CME, CMEpalooza

Alliance effect size presentation

Thank you to everyone who attended our effect size presentation at the 2014 Alliance.  If you’re looking for a copy of the slides, here you go.   Any questions?  Post a comment or contact us by email (jason.olivieri@assessCME.com).


Filed under Uncategorized