Monthly Archives: November 2014

Bringing boring back

I want to play guitar. I want to play loud, fast and funky.  But right now, I’m wrestling with basic open chords.  And my fingers hurt.  And I keep forgetting to breathe when I play.  And my daughter gets annoyed listening to the same three songs over and over.  But such is the way.

When my daughter “plays,” she cranks up a song on Pandora, jumps on and off the furniture, and windmills through the strings like Pete Townshend.  She’d light the thing on fire if I didn’t hide the matches.  Guess who’s more fun to watch.  But take away the adorable face and the hard rock attitude and what do you have?  Yeah…a really bad guitar player.

I was reminded of this juxtaposition while perusing the ACEhp 2015 Annual Conference schedule.  I know inserting “patient outcomes” into an abstract title is a rock star move.  But on what foundation is this claim built?  What limitations are we overlooking?  Have we truly put in the work to ensure we’re measuring what we claim?

My interests tend to be boring.  Was the assessment tool validated?  How do you ensure a representative sample?  How best to control for confounding factors?  What’s the appropriate statistical test?  Blah, blah, blah…  I like to know I have a sturdy home before I think about where to put the entertainment system.

So imagine how excited I was to find this title: Competence Assessments: To Pair or Not to Pair, That Is the Question (scheduled for Thursday, January 15 at 1:15).  Under the assumption that an interesting-sounding title and informational value are inversely proportional, I had to find out more.  Here’s an excerpt:

While not ideal, providers are often left with unpaired outcomes data due to factors such as anonymity of data, and low overall participation. Despite the common use of unpaired results, literature on the use of unpaired assessments as a surrogate for paired data in the CME setting is limited.

Yes, that is a common problem.  I very frequently have data for which I cannot match a respondent’s pre- and post-activity responses.  I assume the same respondents are in both groups, but I can’t make a direct link (i.e., I have “unpaired” data).  Statistically speaking, paired data is better.  The practical question the presenters of this research intend to answer is how unpaired data may affect conclusions about competence-level outcomes.  Yes, that may sound boring, but it is incredibly practical because it happens all the time in CME – and I bet very few people even knew it might be an issue.
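To make the paired-versus-unpaired distinction concrete, here’s a minimal sketch of my own (not the presenters’ analysis) using Python’s SciPy, with invented pre/post scores.  A paired test uses each learner as their own control, removing between-learner variability; an unpaired test is all you can run when responses can’t be matched.

```python
# Minimal sketch: paired vs. unpaired analysis of pre/post assessment
# scores. Illustration only -- the numbers are made up, not CME data.
from scipy import stats

pre  = [62, 55, 70, 58, 64, 61, 67, 59]   # pre-activity scores
post = [68, 60, 74, 63, 66, 67, 73, 62]   # post-activity scores (same learners)

# Paired test: compares each learner's own pre/post difference,
# so between-learner variability drops out.
t_paired, p_paired = stats.ttest_rel(pre, post)

# Unpaired test: treats pre and post as two independent samples,
# which is all that's possible when responses can't be linked.
t_unpaired, p_unpaired = stats.ttest_ind(pre, post)

print(f"paired:   t={t_paired:.2f}, p={p_paired:.4f}")
print(f"unpaired: t={t_unpaired:.2f}, p={p_unpaired:.4f}")
```

On these made-up numbers, the same improvement comes out clearly significant in the paired test but not in the unpaired one – which is exactly why losing the pre/post linkage can change the conclusions you draw.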

So thank you, Allison Heintz and Dr. Fagerlie.  I’ll definitely be in attendance.


Filed under ACEhp, Alliance for CME, CME, Methodology, paired data, Statistical tests of significance, Statistics, unpaired data

CME is Effective! Now what?

The ACCME just released an updated synthesis of published systematic reviews regarding the effectiveness of CME.  You can find it here.  In short, the authors offer the following conclusions (this is pulled verbatim from the report on p. 14):

  • CME does improve physician performance and patient health outcomes;
  • CME has a more reliably positive impact on physician performance than on patient health outcomes; and
  • CME leads to greater improvement in physician performance and patient health if it is more interactive, uses more methods, involves multiple exposures, is longer, and is focused on outcomes that are considered important by physicians.

Yes, there are issues of validity, heterogeneity, standardization and good-ole-fashioned publication bias in CME research, but that aside, there’s enough evidence out there to comfortably assume CME can positively affect physician performance and patient health.  While that’s good news, we can’t ignore the next question: Why is it effective?

To borrow another section from this report (p. 15):

The authors of the systematic reviews make clear that the research regarding mechanisms of action by which CME improves physician performance and patient health outcomes is in the early stages and needs greater theoretical and methodological sophistication. Several authors make the argument that future research must take account of the wider social, political, and organizational factors that play a role in physician performance and patient health outcomes.

The third bullet point above shines some light on these “mechanisms of action,” but the recipe for effective CME is still vague.  For example: How do I make my activity more interactive?  More importantly, what qualifies as interactive in the first place?  If multiple exposures are better, how many, and at what intensity?  How effective are these “mechanisms of action” across various physician audiences?  Do oncologists and internists learn the same way?  What internal and external (e.g., practice environment) factors are influential?

There are several careers’ worth of research questions here.  Anyone funding?


Filed under ACCME, CME, Effectiveness