Losing Control

CME has been walking around with spinach in its teeth for more than 10 years.  And while my midwestern mindset defaults to “don’t make waves”, I think it’s officially time to offer a toothpick to progress and pluck that pesky control group from the front teeth of our standard outcomes methodology.

That’s right, CME control groups are bunk. Sure, they make sense at first glance: randomized controlled trials (RCTs) use control groups and they’re the empirical gold standard.  However, as we’ll see, the magic of RCTs is the randomization, not the control: without the “R” the “C” falls flat.  Moreover, efforts to demographically match controls to CME participants on a few simple factors (eg, degree, specialty, practice type and self-reported patient experience) fall well short of the vast assemblage of confounders that could account for differences between these groups. In the end, only you can prevent forest fires and only randomization can ensure balance between samples.

So let’s dig into this randomization thing.  Imagine you wanted to determine the efficacy of a new treatment for detrimental modesty (a condition in which individuals are unable to communicate mildly embarrassing facts).  A review of clinical history shows that individuals who suffer this condition represent a wide range of race, ethnicity and socioeconomic strata, as well as vary in health metrics such as age, BMI and comorbidities.  Accordingly, you recruit a sufficient sample* of patients with this diagnosis and randomly assign them to two groups: 1) those who will receive the new treatment and 2) those who will receive a placebo.  The purpose of this randomization is to balance the factors that could confound the relationship you wish to examine (ie, treatment to outcome).  Assume the outcome of interest is likelihood to tell a stranger he has spinach in his teeth.  Is there a limit to the number of factors you can imagine that might influence an individual’s capacity for such candor?  And remember, clinical history indicated that patients with detrimental modesty are diverse in regard to social and physical characteristics.  How can you know that age, gender, height, religious affiliation, ethnicity or odontophobia won’t enhance or reduce the effect of your treatment?  If these factors are not evenly distributed across the treatment and control groups, your conclusion about treatment efficacy will be confounded.

So…you could attempt to match the treatment and control groups on all potential confounders, or you could take the considerably less burdensome route and simply randomize your subjects into either group.  While all of these potential confounders still exist, randomization ensures that both the treatment and control groups are equally “not uniform” across all these factors and therefore comparable.  It’s very important to note that the “control” group is simply what you call the group that doesn’t receive treatment.  The only reason it works is because of randomization.  Accordingly, simply applying a control group to your CME outcome assessment without randomization is like giving a broke man a wallet – the wallet was never the thing that mattered.
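To see the balancing act in code, here’s a minimal simulation (assumptions flagged: the subject attributes – age and a 0–100 “candor” score – are invented placeholders for any confounder you like). Shuffle the population, split it in half, and the group means land nearly on top of each other with no matching whatsoever.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population: each subject carries two potential confounders
# (invented for illustration -- swap in height, odontophobia, whatever).
population = [
    {"age": random.randint(25, 75), "candor": random.uniform(0, 100)}
    for _ in range(1000)
]

# Randomize: shuffle, then split in half -- no matching on any factor.
random.shuffle(population)
treatment, control = population[:500], population[500:]

for factor in ("age", "candor"):
    t_mean = statistics.mean(s[factor] for s in treatment)
    c_mean = statistics.mean(s[factor] for s in control)
    print(f"{factor}: treatment mean {t_mean:.1f} vs control mean {c_mean:.1f}")
```

The point of the sketch: the two halves come out comparable on both attributes at once – and on any attribute you add – which is exactly what matching on four demographics can’t promise.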

Now let’s bring this understanding to CME.  There are approximately 18,000 oncology physicians in the United States.  In only two scenarios will the participants in your oncology-focused CME represent an unbiased sample of this population: 1) all 18,000 physicians participate or 2) at least 377 participate (sounds much more likely) who have been randomly sampled (wait…what?).  For option #2, the CME provider would need access to the entire population of oncology physicians, to which they would apply a randomization scheme sized by their empirically expected response rate to invitations in order to hit the 377-participant target.  Probably not standard practice.  If neither scenario applies to your CME activity, then the participants are a biased representation of your target learners.  Of note, biased doesn’t mean bad.  It just means that there are likely factors that differentiate your CME participants from the overall population of target learners and, most importantly, these factors could influence your target outcomes.  How many potential factors? Some CME researchers suggest more than 30.
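That 377 comes straight from the standard sample-size formula for estimating a proportion (95% confidence, ±5% margin, maximally conservative p = 0.5) with a finite-population correction – a quick sketch:

```python
import math

def required_sample(population: int, z: float = 1.96,
                    p: float = 0.5, margin: float = 0.05) -> int:
    """Minimum sample for estimating a proportion at the given confidence
    level (z = 1.96 for 95%) and margin of error, corrected for a finite
    population of the given size."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size (~384)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(required_sample(18_000))  # → 377
```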

Now think about a control group. Are you pulling a random sample of your target physician population?  See scenario #2 above.  Also, are you having any difficulty attracting physicians to participate in control surveys?  What’s your typical response rate?  Maybe you use incentives to help?  Does it seem plausible that the physicians who choose to respond to your control group surveys would be distinct from the overall physician population you hope they represent?  Do you think matching this control group to participants on just profession, specialty, practice location and type is sufficient to balance these groups?  Remember, it’s not the control group, it’s the randomization that matters.  RCTs would be a lot less cumbersome if they only had to match comparison groups on four factors.  Of course, our resulting pharmacy would be terrifying.

So, based on current methods, we’re comparing a biased sample of CME participants to a biased sample of non-participants (control) and attributing any measured differences to CME exposure.  This is a flawed model.  Without balancing the inherent differences between these two samples, it is impossible to attribute any measured differences in survey responses to any specific exposure.  So why are you finding significant differences (ie, P < .05) between groups?  Because the groups are different.  The problem is we have no idea why.

By what complicated method can we pluck this pesky piece of spinach?  Simple pre- versus post-activity comparison.  Remember, we want to ensure that confounding factors are balanced between comparison groups.  While participants in your CME activity will always be a biased representation of your overall target learner population, those biases are balanced when participants are used as their own controls (as in the pre- vs. post-activity comparison).  That is, both comparison groups are equally “non-uniform” in that they are comprised of the same individuals. In the end, you won’t know how participants differ from non-participants, but you will be able to associate post-activity changes to your CME.
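Mechanically, the pre- vs. post-activity comparison is just a paired analysis of each participant’s own change score. A minimal sketch with invented percent-correct scores (this computes the paired t statistic; compare it to a t table at n − 1 degrees of freedom):

```python
import math
import statistics

def paired_t(pre: list[float], post: list[float]) -> tuple[float, float]:
    """Paired t statistic on per-participant differences (post - pre).
    Assumes the same participants, in the same order, in both lists."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical percent-correct scores for ten participants
pre  = [40, 55, 50, 60, 45, 35, 50, 65, 55, 45]
post = [60, 70, 55, 75, 60, 50, 65, 80, 70, 55]
t, df = paired_t(pre, post)
print(f"t = {t:.2f} on {df} df")  # → t = 11.22 on 9 df
```

Because each participant serves as his or her own control, all those unmeasured confounders cancel out of the difference scores.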

Leave a comment

Filed under Best practices, CME, Confounders, Control groups, Needs Assessment, Outcomes, Power calculation, Pre vs. Post

Where did the knowledge go?

What does it mean when your CME participants score worse on a post-test assessment (compared to pre-test)?

Here are some likely explanations:

  1. The difference was not statistically significant.  Significance testing determines whether we reject the null hypothesis (here, the null hypothesis is that pre- and post-test scores are equivalent).  If the difference was not significant (ie, P > .05), we can’t reject this assumption.  And if the pre/post response was too low to warrant statistical testing, the direction of change is meaningless – you don’t have a representative sample.
  2. Measurement bias (specifically, “multiple comparisons”).  This measurement bias results from multiple comparisons being conducted within a single sample (ie, asking dozens of pre/post questions within a single audience).  The issue with multiple comparisons is that the more questions you ask, the more likely you are to find a significant difference where it shouldn’t exist (and wouldn’t if subject to more focused assessment).  Yes, this is a bias to which many CME assessments are subject.
  3. Bad question design. Did you follow key question development guidelines?  If not, the post-activity knowledge drop could be due to misinterpretation of the question.  You can learn more about question design principles here.
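The multiple-comparisons math in item 2 is worth seeing: if none of your k questions truly changed, the chance of at least one spurious “significant” result at α = .05 is 1 − (1 − .05)^k. A quick illustration (the crude fix, dividing α by k, is the Bonferroni correction):

```python
# Family-wise error rate when every pre/post question is tested at alpha = .05
alpha = 0.05
for k in (1, 5, 10, 20, 30):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>2} questions -> {fwer:.0%} chance of at least one false positive")
```

The rate climbs from 5% at one question to nearly 80% at thirty – so a stray “significant” knowledge drop in a long assessment shouldn’t shock anyone.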

Leave a comment

Filed under Outcomes, question design, Statistical tests of significance

CME Outcomes Statistician, first grade

I was very excited to have my CMEPalooza session (Secrets of CME Outcome Assessment) officially sanctioned by the League of Assessors (LoA).  Accordingly, participants who passed the associated examination were awarded “CME Outcome Statistician, first grade” certifications.  It’s a grueling test, but three candidates made it through and received their certifications today (names withheld due to exclusivity).


More good news…I petitioned the LoA to extend the qualifying exam for another six weeks (expiring May 29, 2015) and was officially approved!  So you can still view the CMEPalooza session (here) and then take the qualifying exam (sorry, exam is now closed). Good luck!

Leave a comment

Filed under CME, CMEpalooza, League of Assessors, Outcomes

CMEPalooza

On Tuesday, Chicago will decide on either Rahm or Chuy.  But Wednesday, it’s all about CMEPalooza.  Thank you to our industry’s “Jane’s Addiction” for organizing the third installment of this CME free-for-all.  I’ll be presenting on CME outcomes assessment (11 AM eastern). My session is designed for those who fall into the following categories:

  • Regularly use surveys to measure learning and competence change
  • Have no formal process for reviewing survey questions
  • Are unsure how to utilize statistical tests

Oh, but there’s more…this session has been accredited by the apocryphal League of CME Assessors (sorry, can’t provide a link due to exclusivity).  If, after completing the session, you wish to be considered for eligibility as “CME Outcome Statistician, first grade”, click here (sorry, this test is now closed) to take their test. There’s even a certificate if you pass. Good luck!

2 Comments

Filed under CMEpalooza

Writing questions good

Although I’ve complained a fair bit about validity and reliability issues in CME assessment, I haven’t offered much on this blog to actually address these concerns. Well, the thought of thousands (and thousands and…) of dear and devoted readers facing each new day with the same, tired CME assessment questions has become too much to bear. That, and I was recently required to do a presentation on guidelines and common flaws in the creation of multiple-choice questions…so I thought I’d share it here.

I’d love to claim these pearls are all mine, but they’re just borrowed.  Nevertheless, this slide deck may serve as a handy single-resource when constructing your next assessment (and it contains some cool facts about shark attacks).

1 Comment

Filed under Best practices, CME, MCQs, multiple-choice questions, Reliability, Summative assessment, Survey, survey design, Validity

Effect size kryptonite

I’ve talked a lot about effect size: what it is (here), how to calculate it (here, here and here), what to do with the result (here and here)…and then some about limitations (here).  Overall, I’ve been trying to convince you that effect size is a sound (and simple) approach to quantifying the magnitude of CME effectiveness.  Now it’s time to talk about how it may be total garbage.
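Before the teardown, a quick refresher on the calculation itself. Cohen’s d with a pooled standard deviation is one common formulation (the scores below are invented; the earlier posts linked above walk through the details):

```python
import math
import statistics

def cohens_d(pre: list[float], post: list[float]) -> float:
    """Cohen's d: standardized mean difference using a pooled
    standard deviation (one common formulation among several)."""
    n1, n2 = len(pre), len(post)
    pooled = math.sqrt(((n1 - 1) * statistics.variance(pre) +
                        (n2 - 1) * statistics.variance(post)) / (n1 + n2 - 2))
    return (statistics.mean(post) - statistics.mean(pre)) / pooled

# Hypothetical pre/post percent-correct scores
print(round(cohens_d([50, 60, 55, 45, 65], [70, 75, 65, 60, 80]), 2))  # → 1.9
```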

All this effect size talk includes the supposition that the data from which it is calculated is both reliable and valid.  In CME, the data source is overwhelmingly the survey – and the questions within typically include self-efficacy scales, single-correct-answer knowledge tests and/or case vignettes.  But how do you know that your survey questions actually measure what they intend (validity) and do so with consistency (reliability)?  CME has been repeatedly dinged for not using validated measurement tools.  And if your survey isn’t valid (or reliable), why would your data be worth anything?  Effect size does not correct for bad questions.  So maybe next time you’re touting a great effect size (or trying to bury a bad one), you should also consider (and be able to document) whether you’ve demonstrated the effectiveness of your CME or the ineffectiveness of your survey.

So what can be done?  Well, you can hire a psychometrist and add complicated-sounding things like “factor analysis” and “Cronbach’s alpha” to your lexicon (yell those out during the next CME presentation you attend…and then quickly run out of the room).  Or (actually “and”), you can start with sound question-design principles.  The key thing to note: no amount of complex statistics can make a bad question good – so you need to know the fundamentals of assessing knowledge and competence in medical education.  Where do you get those?  Here are some suggestions to get you started:

  • Take the National Board of Medical Examiners (NBME) U course entitled: Assessment Principles, Methods, and Competency Framework.  This is an awesome (daresay, the best) resource for anyone assessing knowledge and competence in medical education.  Complete this course (there are 20 lessons, each under 30 minutes) and you’ll be as expert as anyone in CME.  You can register here.  And it’s free!
  • Check out Dr. Wendy Turell’s session entitled Tips to Make You a Survey Measurement Rock Star during the next CMEpalooza (April 8th at 1:30 eastern).  This is her wheelhouse – so steal every bit of her expertise you can.  Once again, it’s free.
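And for the curious: the “complicated-sounding” Cronbach’s alpha is only a few lines of arithmetic. A minimal sketch over invented 5-point survey responses (a common rule of thumb treats α ≥ 0.7 as acceptable internal consistency):

```python
import statistics

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha. `items` holds one list per survey item, each
    containing every respondent's score on that item (same order)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]       # per-respondent totals
    item_var = sum(statistics.variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Three 5-point self-efficacy items from eight hypothetical respondents
items = [
    [4, 5, 3, 4, 2, 5, 4, 3],
    [4, 4, 3, 5, 2, 5, 4, 2],
    [5, 5, 2, 4, 3, 4, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # → 0.89
```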

2 Comments

Filed under ACCME, CMEpalooza, Item writing, question design, Reliability, Validity

Bringing boring back

I want to play guitar. I want to play loud, fast and funky.  But right now, I’m wrestling basic open chords.  And my fingers hurt.  And I keep forgetting to breathe when I play.  And my daughter gets annoyed listening to the same three songs over and over.  But so is the way.

When my daughter “plays”, she cranks up a song on Pandora, jumps on and off the furniture, and windmills through the strings like Pete Townshend.  She’d light the thing on fire if I didn’t hide the matches.  Guess who’s more fun to watch.  But take away the adorable face and the hard rock attitude and what do you have?  Yeah…a really bad guitar player.

I was reminded of this juxtaposition while perusing the ACEhp 2015 Annual Conference schedule.  I know inserting “patient outcomes”  into an abstract title is a rock star move.  But on what foundation is this claim built?  What limitations are we overlooking?  Have we truly put in the work to ensure we’re measuring what we claim?

My interests tend to be boring.  Was the assessment tool validated?  How do you ensure a representative sample?  How best to control for confounding factors?  What’s the appropriate statistical test?  Blah, blah, blah…  I like to know I have a sturdy home before I think about where to put the entertainment system.

So imagine how excited I was to find this title: Competence Assessments: To Pair or Not to Pair, That Is the Question (scheduled for Thursday, January 15 at 1:15).  Under the assumption that an interesting-sounding title and informational value are inversely proportional, I had to find out more.  Here’s an excerpt:

While not ideal, providers are often left with unpaired outcomes data due to factors such as anonymity of data, and low overall participation. Despite the common use of unpaired results, literature on the use of unpaired assessments as a surrogate for paired data in the CME setting is limited.

Yes, that is a common problem.  I very frequently have data for which I cannot match a respondent’s pre- and post-activity responses.  I assume the same respondents are in both groups, but I can’t make a direct link (ie, I have “unpaired” data).  Statistically speaking, paired data is better.  The practical question the presenters of this research intend to answer is how unpaired data may affect conclusions about competence-level outcomes.  Yes, that may sound boring, but it is incredibly practical because it happens all the time in CME – and I bet very few people even knew it might be an issue.
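To see why pairing matters statistically, here’s a sketch (invented scores) running both analyses on the same data. The paired test works on each respondent’s own change, stripping out between-person variability; the unpaired test (equal groups, pooled variance) keeps all of it in the denominator:

```python
import math
import statistics

def t_tests(pre: list[float], post: list[float]) -> tuple[float, float]:
    """Return (paired t, unpaired t) for the same equal-sized samples.
    Paired assumes the lists are linked respondent-by-respondent."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    t_paired = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    pooled = math.sqrt((statistics.variance(pre) + statistics.variance(post)) / 2)
    t_unpaired = ((statistics.mean(post) - statistics.mean(pre))
                  / (pooled * math.sqrt(2 / n)))
    return t_paired, t_unpaired

# Hypothetical percent-correct scores for eight respondents
pre  = [40, 55, 50, 60, 45, 35, 50, 65]
post = [50, 70, 60, 75, 55, 45, 60, 80]
tp, tu = t_tests(pre, post)
print(f"paired t = {tp:.1f}, unpaired t = {tu:.1f}")
```

With these numbers the paired t is roughly 13 while the unpaired t is about 2.1 – borderline at best – even though the mean change is identical. That gap is exactly the sort of thing the presenters’ question probes.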

So thank you Allison Heintz and Dr. Fagerlie.  I’ll definitely be in attendance.

Leave a comment

Filed under ACEhp, Alliance for CME, CME, Methodology, paired data, Statistical tests of significance, Statistics, unpaired data