Tag Archives: methodology

Outcomes test drive

[Image: broken-down car]

I bought my first car at 16. It was an awesome little blue 4×4 (Bronco II). The test drive was perfect. I got to blast the radio and drive off-road through a sub-division under construction. Bouncing over piles of debris, I can still remember the exhilaration. Both the seller and I laughed the whole time. Only problem…he was still laughing two weeks later, while I was on the side of the highway with the engine spitting steam and pouring out oil mixed with coolant. That 4×4 rusted in my driveway for another year before a neighbor bought it for less than 20% of what I paid.

Yeah…I skipped the inspection part. It was just too much fun to think about that. And since it handled the test drive, what could really go wrong? I was going to be so freakin’ cool come fall in high school.

Tell me I’m the only one who’s ever dreamed of the stars and ended up on the bus.

Now that brings us to outcomes. Maybe you’ve been kicking the tires of a new CME program and hoping it will generate great outcomes? Don’t get distracted by the shiny bits…there are three key things to inspect for every outcomes project (in descending order of importance and ascending order of coolness):

  1. Study design: the main concern here is “internal validity”, which refers to how well a study controls for the factors that could confound the relationship between the intervention and outcome (ie, how do we know something else isn’t accelerating or braking our path toward the desired outcome?). There are many threats to internal validity and, correspondingly, many distinct study designs to address them. One-group pretest-posttest is a study design; so is posttest-only with nonequivalent groups (ie, a post-test administered to CME participants and a non-participant “control” group). There are about a dozen more options. You should understand why a particular study design was selected and what answers it can (and cannot) provide (the toy sketch after this list shows how a weak design can mislead).

 

  2. Data collection: second to study design is data collection. The big deal here is “construct validity” (ie, can the data collection tool measure what it claims?). Just because you want your survey or chart abstraction to measure a certain outcome doesn’t mean it actually will. Can you point to data supporting that your tool actually measures what it’s intended to measure? If not, you should consider another option. Note: it is really fun to say “chart abstraction”, but it’s a data collection tool, not a study design. If your study design is flawed, you have to consider those threats to internal validity plus any construct validity issues associated with your chart abstraction. The more issues you collect, the weaker your final argument regarding your desired outcome. An expensive study (eg, chart review) does not guarantee a result of any importance, but it does sound good.

 

  3. Analysis: this is the shiny bit, and just like your parents told you, the least important. Remember Mom’s advice: if your friends don’t think you’re cool, then they aren’t really your friends. Well, think about study design and data collection as the “beauty on the inside” and analysis as a really groovy jacket and great hair. Oh yeah, it matters, but rather less so if what’s under the hood keeps getting you stuck on the highway. You may have heard statisticians are nerds, but they’re the NASCAR drivers of the research community – and I’m here to tell you the car and pit crew are more important. In short, if your outcomes are all about analysis, they probably aren’t worth much.
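To make the internal validity point concrete, here’s a toy Python sketch (my own, with entirely made-up numbers, not data from any real program): with a one-group pretest-posttest design, a background trend can produce an apparent improvement even when the activity itself does nothing, and it takes a comparison group to reveal it.

```python
# Toy illustration (made-up numbers): a one-group pretest-posttest design
# can "detect" improvement that has nothing to do with the CME activity.
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Baseline knowledge scores (0-100 scale) for CME participants.
pretest = rng.normal(60, 10, n)

# Assume the activity has ZERO true effect, but a secular trend (say, a new
# guideline getting lots of press) lifts everyone by about 5 points.
secular_trend = 5
posttest = pretest + secular_trend + rng.normal(0, 5, n)
print(f"Participants' mean change: {np.mean(posttest - pretest):.1f} points")

# A nonequivalent comparison group of non-participants, exposed to the same
# trend, shows roughly the same lift and exposes the confound.
control_pre = rng.normal(60, 10, n)
control_post = control_pre + secular_trend + rng.normal(0, 5, n)
print(f"Non-participants' mean change: {np.mean(control_post - control_pre):.1f} points")
```

Both groups “improve” by about the same amount, which is exactly the answer a one-group design can never give you.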


Filed under CME, Confounders, Construct validity, Internal validity, Methodology, Uncategorized

Bringing boring back

I want to play guitar. I want to play loud, fast and funky. But right now, I’m wrestling basic open chords. And my fingers hurt. And I keep forgetting to breathe when I play. And my daughter gets annoyed listening to the same three songs over and over. But such is the way.

When my daughter “plays,” she cranks up a song on Pandora, jumps on and off the furniture, and windmills through the strings like Pete Townshend. She’d light the thing on fire if I didn’t hide the matches. Guess who’s more fun to watch. But take away the adorable face and the hard rock attitude and what do you have? Yeah…a really bad guitar player.

I was reminded of this juxtaposition while perusing the ACEhp 2015 Annual Conference schedule.  I know inserting “patient outcomes”  into an abstract title is a rock star move.  But on what foundation is this claim built?  What limitations are we overlooking?  Have we truly put in the work to ensure we’re measuring what we claim?

My interests tend to be boring.  Was the assessment tool validated?  How do you ensure a representative sample?  How best to control for confounding factors?  What’s the appropriate statistical test?  Blah, blah, blah…  I like to know I have a sturdy home before I think about where to put the entertainment system.

So imagine how excited I was to find this title: Competence Assessments: To Pair or Not to Pair, That Is the Question (scheduled for Thursday, January 15 at 1:15). Under the assumption that an interesting-sounding title and informational value are inversely proportional, I had to find out more. Here’s an excerpt:

While not ideal, providers are often left with unpaired outcomes data due to factors such as anonymity of data, and low overall participation. Despite the common use of unpaired results, literature on the use of unpaired assessments as a surrogate for paired data in the CME setting is limited.

Yes, that is a common problem.  I very frequently have data for which I cannot match a respondent’s pre- and post-activity responses.  I assume the same respondents are in both groups, but I can’t make a direct link (i.e., I have “unpaired” data).  Statistically speaking, paired data is better: pairing lets you analyze within-person change and removes person-to-person variability from the comparison.  The practical question the presenters of this research intend to answer is how unpaired data may affect conclusions about competence-level outcomes.  Yes, that may sound boring, but it is incredibly practical because it happens all the time in CME – and I bet very few people even knew it might be an issue.
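The presenters will have their own analysis, but purely as a sketch of the distinction (simulated scores, and it assumes Python with scipy available), here’s how paired and unpaired comparisons differ:

```python
# My own toy sketch (simulated scores, scipy assumed) of paired vs. unpaired
# comparisons of pre- and post-activity data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100

pre = rng.normal(65, 12, n)          # pre-activity scores
post = pre + rng.normal(4, 6, n)     # same respondents, modest true gain

# Paired: we can link each respondent's pre and post, so the test works on
# within-person differences and removes person-to-person variability.
paired = stats.ttest_rel(pre, post)

# Unpaired: anonymous data forces us to treat the two sets of responses as
# independent samples, which is generally less powerful.
unpaired = stats.ttest_ind(pre, post)

print(f"Paired t-test:   p = {paired.pvalue:.4f}")
print(f"Unpaired t-test: p = {unpaired.pvalue:.4f}")
```

With these made-up numbers both tests detect the gain, but the paired p-value is far smaller; with noisier real-world data, the unpaired comparison can easily miss an effect the paired one would catch.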

So thank you Allison Heintz and Dr. Fagerlie.  I’ll definitely be in attendance.


Filed under ACEhp, Alliance for CME, CME, Methodology, paired data, Statistical tests of significance, Statistics, unpaired data

Formative Assessment

Outcomes assessment is “summative”, which is a fancy way of saying it measures whether the desired results were achieved.  A “formative” assessment, by contrast, evaluates a program while it’s still in development to make sure it’s on track.  Moore et al. (2009) make a strong case for formative assessment in CME, but leave the “how-to” details to our imagination (I guess when you’re covering every aspect of CME you need to leave a few bits out).

Here’s one recipe for formative assessment (for live CME activities):

  1. Have your course faculty develop knowledge and/or case vignette questions relevant to their pending talks
  2. Turn these questions into a web-based survey (e.g., www.SurveyMonkey.com)
  3. At least two weeks prior to the activity date, email the survey to all activity registrants
  4. Share the registrants’ responses with your course faculty (see the sketch below for one way to summarize them)
  5. Adjust the pending talks accordingly

If you feel the need to incentivize respondents (which I never discourage), offer them a discount off registration for another activity.  If you want more detail, check out this short JCEHP article.
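For step 4, here’s a hypothetical sketch of how you might summarize registrant responses before sending them to faculty (the file name, column names, and answer key are invented, and it assumes your survey tool can export a CSV with one row per registrant):

```python
# Hypothetical sketch for step 4: summarizing registrant responses before
# sharing them with faculty. The file name, column names, and answer key are
# all invented; assumes the survey tool exports a CSV with one row per
# registrant and one column per question.
import pandas as pd

answer_key = {"q1": "B", "q2": "A", "q3": "D"}        # hypothetical correct answers

responses = pd.read_csv("pre_activity_survey.csv")    # hypothetical export

# Percent of registrants answering each question correctly.
for question, correct in answer_key.items():
    pct_correct = (responses[question] == correct).mean() * 100
    print(f"{question}: {pct_correct:.0f}% answered correctly")
```

Even a simple percent-correct-per-question summary gives faculty a quick read on which topics deserve more time.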

I’ve used this approach a few times and it’s been generally successful (i.e., a good response rate, and faculty have used some of the data to modify their presentations).  However, I don’t want to pretend this approach sets the bar for formative assessment.  If you’re not doing any such assessment, this is a good way to get started.  Play with it for a while and you’ll discover ways to get more sophisticated – just remember to share what you’re doing with the rest of us!


Filed under Formative assessment, Methodology, Needs Assessment, Summative assessment