Thoughts on organizing your outcomes data

An experiment begins with a hypothesis. For example: I suspect that the next person to enter this coffee shop will be a hipster (denied, by the way).

A neat and tidy hypothesis for CME outcome assessment might read: I suspect that participants in this CME activity will increase compliance with <insert evidence-based quality indicator here>.

Unfortunately, access to data that would answer such a question is beyond the reach of most CME providers. So we use proxy measures, such as knowledge tests or case vignette surveys, through which we hope to produce data suggesting that CME participants increased their compliance with <insert evidence-based quality indicator here>.

Although this data is much easier to access, it can be pretty tedious to sift through. Issue #1: How do you reduce the data across multiple knowledge or case vignette questions into a single statement about CME effectiveness? Issue #2: How do you systematically organize the outcomes data to develop specific recommendations for future CME?

For issue #1, I’d recommend using “effect size”. There’s more about that here.
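As a minimal sketch of the idea, here is Cohen's d computed from scratch (the scores and group sizes are hypothetical, and this is the independent-groups, pooled-SD form; matched pre/post responses from the same learners would call for a paired variant):

```python
import math

def cohens_d(pre, post):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    m1, m2 = sum(pre) / n1, sum(post) / n2
    # sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# hypothetical knowledge-test scores (number correct out of 10)
pre_scores = [3, 4, 2, 5, 3, 4]
post_scores = [6, 7, 5, 8, 6, 7]
print(round(cohens_d(pre_scores, post_scores), 2))
```

By Cohen's conventional benchmarks, d of about 0.2 is a small effect, 0.5 medium, and 0.8 large.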

For issue #2, consider organizing your outcome results into the following four buckets (of note, there is some overlap between these buckets):

1. Unconfirmed gap – pre-activity question data suggest that knowledge or competence is already high (typically defined as >70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no correct response). Important note: although we shouldn’t expect every anticipated gap to be present in our CME participants, one cause of an unconfirmed gap (other than a bad needs assessment) is the use of assessment questions that are too easy and/or don’t align with the education.

2. Confirmed gap – pre-activity question data suggest that knowledge or competence is sufficiently low to warrant educational focus (typically defined as <70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no correct response).

3. Residual gap

a. Post-activity data only = typically defined as <70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no evidence-based correct response

b. Pre- vs. post-activity data = no significant difference between pre- and post-activity responses

4. Gap addressed

a. Post-activity data only = typically defined as >70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no correct response

b. Pre- vs. post-activity data = significant difference between pre- and post-activity responses
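One way these definitions could be applied per question is sketched below (the function and its 70% default threshold are my own illustration, not a standard tool). Note that, per the overlap caveat above, a question can land in both buckets 3 and 4 – for example, >70% correct post-activity but no significant pre/post change:

```python
def classify_question(pre_pct, post_pct, significant, threshold=70.0):
    """Sort one assessment question into the four outcome buckets.

    pre_pct / post_pct: % of respondents choosing the evidence-based answer
    (or the single most-agreed-upon answer, if there is no correct response)
    before / after the activity.  significant: did the pre- vs. post-activity
    difference reach statistical significance?
    """
    buckets = []
    # Buckets 1 and 2 come from pre-activity data alone.
    buckets.append("unconfirmed gap" if pre_pct > threshold else "confirmed gap")
    # Buckets 3 and 4 can overlap when their two criteria disagree.
    if post_pct < threshold or not significant:
        buckets.append("residual gap")
    if post_pct > threshold or significant:
        buckets.append("gap addressed")
    return buckets

print(classify_question(45, 85, significant=True))   # a confirmed gap that was addressed
print(classify_question(80, 82, significant=False))  # unconfirmed gap; buckets 3 and 4 overlap
```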

Most important to note: if the outcome assessment questions do not accurately reflect the gaps identified in the needs assessment, the results of the final report are not going to make any sense, no matter how you organize them.


Filed under CME, Gap analysis, Needs Assessment, Outcomes, Reporting, Statistics

Statistical analysis in CME

Statistics can help answer important questions about your CME.  For example, was there an educational effect and, if so, how big was it?  The first question is typically answered with a P value and the second with an effect size.

If this were 10 years ago, you’d either be purchasing some expensive statistical software or hiring a consultant to answer these questions.  Today (thank you, Internet), it’s simple and basically free.

A step-by-step approach can be found here.
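To make those two questions concrete for a single pre/post multiple-choice item, here is one standard-library-only sketch: a Pearson chi-square test on the 2x2 counts for the P value, and Cohen's h for the effect size between the two proportions. The counts are hypothetical, and this treats pre and post as independent samples – for matched responses from the same learners, a paired test such as McNemar's would be more appropriate:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) for a 2x2 table:
              correct  incorrect
      pre        a         b
      post       c         d
    Returns (chi2, p).  For 1 df, the exact upper tail is erfc(sqrt(x/2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return chi2, math.erfc(math.sqrt(chi2 / 2))

def cohens_h(p1, p2):
    """Effect size for the difference between two proportions (arcsine transform)."""
    return 2 * math.asin(math.sqrt(p2)) - 2 * math.asin(math.sqrt(p1))

# hypothetical item: 40/100 correct pre-activity, 65/100 correct post-activity
chi2, p = chi_square_2x2(40, 60, 65, 35)
h = cohens_h(0.40, 0.65)
print(round(chi2, 2), round(p, 4), round(h, 2))
```

Here the P value answers "was there an effect?" and h (roughly a medium effect by Cohen's benchmarks of 0.2/0.5/0.8) answers "how big was it?"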

 


Filed under CME, CMEpalooza, Cohen's d, Effect size, P value, Statistical tests of significance, Statistics

Data analysis in Excel

Oh, was I excited to find VassarStats.  I haven’t yet encountered a CME outcome analysis that it can’t handle – and it’s free.  Yes, having to cut & paste data between Excel and VassarStats is a bit cumbersome (and subject to error), but I felt it a small price to pay.  And then I found the Analysis ToolPak in Excel.  Well, actually, I found Jacob Coverstone’s CME/CPD blog, which unlocks this little secret here.  We’ve been sitting on the tools all along.  Thanks, Jacob, for pointing this out.


Filed under Microsoft Excel, Statistics

CMEpalooza

CMEpalooza will be held on Thursday, March 20 and Friday, March 21.  Like the annual professional meeting for CME (the Alliance for Continuing Education in the Health Professions), CMEpalooza is a collection of “best practice” talks.  Unlike the Alliance meeting, the entire event will be online, archived, and free.  A big thank you to Derek Warnick (aka “the CME Guy”) for putting this all together.

Based on the agenda (of 21 presentations), there are many promising talks on topics ranging from audience recruitment and adult learning theory to linking educational objectives with outcomes, qualitative analysis, and measuring patient outcomes (I’ll be representing Imedex with a presentation on statistical analysis in CME outcomes).  Regardless of your scope of work, I suspect at least one presentation on the agenda will be of interest.

If you can’t participate live, no worries: everything will be archived, so view at your convenience – but make sure to check it out.


Filed under ACCME, Alliance Conference, Alliance for CME, Best practices, CME, CMEpalooza

Alliance effect size presentation

Thank you to everyone who attended our effect size presentation at the 2014 Alliance.  If you’re looking for a copy of the slides, here you go.   Any questions?  Post a comment or contact us by email (jason.olivieri@assessCME.com).


Filed under Uncategorized

Effect size presentation at the Alliance’s 39th Annual Conference

We’ve devoted a fair amount of attention to the use of effect size in CME.  For those of you attending this year’s Alliance meeting, we’ll be presenting Calculating and Interpreting Effect Size for Your CME Activities on Friday, January 17th at 11:15.  In addition to providing a general overview of the value of effect size in CME outcomes, we’ll demonstrate how to actually do effect size calculations for common types of CME data (using only free, online resources).  So stay awake, and you’ll leave the session with the tools to start using effect size for your own activities.

If you miss the session, stop by the Imedex exhibit booth, and either Ben or I (Jason) will be happy to walk you through some effect size examples.  See you in Orlando!


Filed under Alliance Conference, CME, Cohen's d, Effect size, Statistics

Job opportunity: Survey Research and CME Data Analysis

Just heard from a friend at the Radiological Society of North America (RSNA) that they have a position open for a Senior Manager – Survey Research & CME Data Analysis.  You can view the details here.


Filed under CME, Job, Outcomes, Survey