
Outcomes test drive


I bought my first car at 16. It was an awesome little blue 4×4 (a Bronco II). The test drive was perfect. I got to blast the radio and drive off-road through a subdivision under construction. Bouncing over piles of debris, I can still remember the exhilaration. Both the seller and I laughed the whole time. Only problem…he was still laughing two weeks later, while I sat on the side of the highway next to a car spitting steam and leaking oil mixed with engine coolant. That 4×4 rusted in my driveway for another year before a neighbor bought it for less than 20% of what I paid.

Yeah…I skipped the inspection part. It was just too much fun to think about that. And since it handled the test drive, what could really go wrong? I was going to be so freakin’ cool come fall in high school.

Tell me I’m the only one who’s ever dreamed of the stars and ended up on the bus.

Which brings us to outcomes. Maybe you’ve been kicking the tires of a new CME program, hoping it will generate great outcomes? Don’t get distracted by the shiny bits…there are three key things to inspect in every outcomes project (listed in descending order of importance and ascending order of coolness):

  1. Study design: the main concern here is “internal validity”, which refers to how well a study controls for the factors that could confound the relationship between the intervention and the outcome (ie, how do we know something else isn’t accelerating or braking our path toward the desired outcome?). There are many threats to internal validity and, correspondingly, many distinct study designs to address them. One-group pretest-posttest is a study design; so is posttest-only with nonequivalent groups (ie, a post-test administered to CME participants and to a non-participant “control” group). There are about a dozen more options. You should understand why a particular study design was selected and what answers it can (and cannot) provide (there’s a toy sketch of these two designs right after this list).

 

  2. Data collection: second only to study design is data collection. The big deal here is “construct validity” (ie, does the data collection tool actually measure what it claims to?). Just because you want your survey or chart abstraction to measure a certain outcome doesn’t mean it actually will. Can you point to data supporting that your tool measures what it’s intended to measure? If not, you should consider another option. Note: it is really fun to say “chart abstraction”, but it’s a data collection tool, not a study design. If your study design is flawed, you have to contend with those threats to internal validity plus any construct validity issues associated with your chart abstraction. The more issues you accumulate, the weaker your final argument about your desired outcome. An expensive study (eg, chart review) does not guarantee a result of any importance, but it does sound good.

 

  3. Analysis: this is the shiny bit and, just like your parents told you, the least important. Remember Mom’s advice: if your friends don’t think you’re cool, then they aren’t really your friends. Well, think of study design and data collection as the “beauty on the inside” and analysis as a really groovy jacket and great hair. Oh yeah, it matters, but a lot less when what’s under the hood keeps leaving you stuck on the highway. You may have heard that statisticians are nerds, but they’re the NASCAR drivers of the research community – and I’m here to tell you the car and pit crew are more important. In short, if your outcomes are all about analysis, they probably aren’t worth much.
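To make the study-design point concrete, here’s a toy sketch (Python, completely invented numbers) of the two designs named in item 1, applied to a CME activity that, by construction, does nothing. Neither design is “wrong”, but each leaves a different question open.

```python
# Toy illustration (hypothetical numbers) of the two designs named in item 1,
# both applied to a CME activity that, by construction, has ZERO effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200  # physicians per group (arbitrary)

# Self-selection: physicians who enroll in the CME start out more knowledgeable.
participant_pre = rng.normal(70, 10, n)   # participants' pre-activity test scores
nonparticipant = rng.normal(60, 10, n)    # a nonequivalent "control" group

# A secular trend (eg, a new guideline in the news) nudges everyone up 3 points,
# independent of the CME. The CME itself adds nothing (true effect = 0).
participant_post = participant_pre + 3 + rng.normal(0, 5, n)

# Design A: one-group pretest-posttest. It detects the 3-point secular trend and
# will happily attribute it to the CME unless history/maturation can be ruled out.
stat, p_prepost = stats.ttest_rel(participant_post, participant_pre)
print(f"pre vs post gain: {np.mean(participant_post - participant_pre):.1f} points, P = {p_prepost:.2g}")

# Design B: posttest only with nonequivalent groups. It detects a roughly 13-point
# gap, most of which is selection bias (who chose to enroll), not the CME.
stat, p_groups = stats.ttest_ind(participant_post, nonparticipant)
print(f"participant vs control gap: {participant_post.mean() - nonparticipant.mean():.1f} points, P = {p_groups:.2g}")

# Both designs return "significant" results for an activity with no real effect;
# knowing which threats a design leaves open is the whole game.
```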



Losing Control

CME has been walking around with spinach in its teeth for more than 10 years. And while my Midwestern mindset defaults to “don’t make waves”, I think it’s officially time to offer progress a toothpick and pluck that pesky control group from the front teeth of our standard outcomes methodology.

That’s right, CME control groups are bunk. Sure, they make sense at first glance: randomized controlled trials (RCTs) use control groups, and they’re the empirical gold standard. However, as we’ll see, the magic of RCTs is the randomization, not the control: without the “R” the “C” falls flat. Moreover, efforts to demographically match controls to CME participants on a few simple factors (eg, degree, specialty, practice type and self-reported patient experience) fall well short of the vast assemblage of confounders that could account for differences between these groups. In the end, only you can prevent forest fires, and only randomization can ensure balance between samples.

So let’s dig into this randomization thing. Imagine you wanted to determine the efficacy of a new treatment for detrimental modesty (a condition in which individuals are unable to communicate mildly embarrassing facts). A review of clinical histories shows that individuals who suffer from this condition span a wide range of races, ethnicities and socioeconomic strata, and vary in characteristics such as age, BMI and comorbidities. Accordingly, you recruit a sufficient sample* of patients with this diagnosis and randomly assign them to two groups: 1) those who will receive the new treatment and 2) those who will receive a placebo. The purpose of this randomization is to balance the factors that could confound the relationship you wish to examine (ie, treatment to outcome). Assume the outcome of interest is the likelihood of telling a stranger he has spinach in his teeth. Is there a limit to the number of factors you can imagine that might influence an individual’s capacity for such candor? And remember, clinical history indicated that patients with detrimental modesty are diverse in regard to social and physical characteristics. How can you know that age, gender, height, religious affiliation, ethnicity or odontophobia won’t enhance or reduce the effect of your treatment? If these factors are not evenly distributed across the treatment and control groups, your conclusion about treatment efficacy will be confounded.

So…you could attempt to match the treatment and control groups on all potential confounders, or you could take the considerably less burdensome route and simply randomize your subjects into one group or the other. While all of these potential confounders still exist, randomization ensures that the treatment and control groups are equally “not uniform” across all these factors and therefore comparable. It’s very important to note that the “control” group is simply what you call the people who don’t receive the treatment. The only reason it works is randomization. Accordingly, simply applying a control group to your CME outcome assessment without randomization is like giving a broke man a wallet – it’s so not the thing that matters.
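If it helps to see the gears turn, here’s a quick simulation (Python, invented numbers) of that balancing act. Randomization doesn’t make the sample uniform; it spreads the non-uniformity evenly across both arms, which self-selection will not do.

```python
# Hypothetical sketch: randomization vs self-selection on one unmeasured confounder.
import numpy as np

rng = np.random.default_rng(7)
N = 1000  # patients with detrimental modesty (made-up cohort)

# An unmeasured confounder, say an "odontophobia score" that makes people less
# likely to mention spinach-in-teeth no matter what treatment they receive.
odontophobia = rng.normal(50, 15, N)

# Randomized assignment: a coin flip per patient.
treated = rng.random(N) < 0.5
print("randomized:    treated mean %.1f vs control mean %.1f"
      % (odontophobia[treated].mean(), odontophobia[~treated].mean()))

# Self-selection: patients with less odontophobia are more likely to opt in.
p_opt_in = 1 / (1 + np.exp((odontophobia - 50) / 10))
opted_in = rng.random(N) < p_opt_in
print("self-selected: treated mean %.1f vs control mean %.1f"
      % (odontophobia[opted_in].mean(), odontophobia[~opted_in].mean()))

# The randomized arms land within sampling error of each other on this factor
# (and on every other factor, measured or not); the self-selected arms do not.
```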

Now let’s bring this understanding to CME. There are approximately 18,000 oncology physicians in the United States. In only two scenarios will the participants in your oncology-focused CME represent an unbiased sample of this population: 1) all 18,000 physicians participate, or 2) at least 377 physicians participate (sounds much more likely) who were randomly sampled from the full population (wait…what?). For option #2, the CME provider would need access to the entire population of oncology physicians, apply a randomization scheme to draw the sample, and size the invitation list according to the empirically expected response rate in order to hit the 377-participant target. Probably not standard practice. If neither scenario applies to your CME activity, then the participants are a biased representation of your target learners. Of note, biased doesn’t mean bad. It just means that there are likely factors that differentiate your CME participants from the overall population of target learners and, most importantly, these factors could influence your target outcomes. How many potential factors? Some CME researchers suggest more than 30.
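For the curious: that 377 is presumably the familiar sample-size formula for estimating a proportion at 95% confidence with a ±5% margin of error, plus the finite-population correction for 18,000 physicians. The arithmetic:

```python
# Where the 377 (presumably) comes from: sample size for estimating a proportion
# at 95% confidence, +/-5% margin of error, worst-case p = 0.5, with the finite
# population correction for N = 18,000 oncology physicians.
import math

N = 18_000   # target population
z = 1.96     # z-score for 95% confidence
p = 0.5      # most conservative assumed proportion
e = 0.05     # margin of error

n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population sample size (~384.2)
n = n0 / (1 + (n0 - 1) / N)              # finite population correction (~376.2)
print(math.ceil(n0), math.ceil(n))       # prints: 385 377
```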

Now think about a control group. Are you pulling a random sample of your target physician population? See scenario #2 above. Also, are you having any difficulty attracting physicians to participate in control surveys? What’s your typical response rate? Maybe you use incentives to help? Does it seem plausible that the physicians who choose to respond to your control group surveys are distinct from the overall physician population you hope they represent? Do you think matching this control group to participants on just profession, specialty, practice location and practice type is sufficient to balance these groups? Remember, it’s not the control group, it’s the randomization that matters. RCTs would be a lot less cumbersome if they only had to match comparison groups on four factors. Of course, the resulting pharmacy would be terrifying.

So, based on current methods, we’re comparing a biased sample of CME participants to a biased sample of non-participants (the control) and attributing any measured differences to CME exposure. This is a flawed model. Without balancing the inherent differences between these two samples, it is impossible to attribute any measured differences in survey responses to any specific exposure. So why are you finding significant differences (ie, P < .05) between groups? Because they are different. The problem is we have no idea why.
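To put hypothetical numbers on that flaw: below, the CME adds exactly nothing, yet a survey comparison between self-selected participants and a “matched” convenience control still comes back significant.

```python
# Hypothetical sketch: participant vs "matched" control survey scores when the
# CME has zero effect. The two convenience samples simply differ at baseline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# CME participants: self-selected, eg, already more engaged with the topic,
# so their survey scores run a few points higher to begin with.
participants = rng.normal(68, 12, 300)

# Controls: matched on degree, specialty and practice type, but recruited
# through incentivized panels, which is a differently biased sample.
controls = rng.normal(62, 12, 300)

stat, p_value = stats.ttest_ind(participants, controls)
print(f"difference = {participants.mean() - controls.mean():.1f} points, P = {p_value:.2g}")
# P lands far below .05: the groups really are different, just not because of CME.
```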

By what complicated method can we pluck this pesky piece of spinach? Simple pre- versus post-activity comparison. Remember, we want to ensure that confounding factors are balanced between comparison groups. While participants in your CME activity will always be a biased representation of your overall target learner population, those biases are balanced when participants serve as their own controls (as in the pre- vs. post-activity comparison). That is, both comparison groups are equally “non-uniform” because they are composed of the same individuals. In the end, you won’t know how participants differ from non-participants, but you will be able to attribute post-activity changes to your CME.
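For contrast, here’s the same kind of toy data analyzed the pre/post way, with participants compared to themselves:

```python
# Hypothetical sketch: participants as their own controls (pre vs post).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 300

pre = rng.normal(68, 12, n)            # pre-activity survey scores
post = pre + 5 + rng.normal(0, 6, n)   # suppose the CME really adds ~5 points

stat, p_value = stats.ttest_rel(post, pre)   # paired comparison
print(f"mean gain = {np.mean(post - pre):.1f} points, P = {p_value:.2g}")

# Whatever made these physicians enroll is present on both sides of the
# comparison, so selection bias cancels; what remains are threats that act
# over time (history, maturation), not differences in who showed up.
```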

