Don’t forget to check out CMEPalooza this Wednesday (Oct 15th) – it starts at 9 AM Eastern. I’d like to catch all seven sessions, but I’m particularly interested in the 11 AM set: Death of the MECC – Fact or Fiction? If it’s fact, I guess I’m sleeping in on Thursday.
It happens. Your carefully crafted evaluation questions are administered to the survey population using a different scale pre- and post-activity. Miscommunication, cut & paste fail, whatever the cause…what do you do with the data?
- Nothing. You report it as is, don’t attempt any statistical testing, and hope it doesn’t happen again.
- Transform. Call on your inner MacGyver and make these two scales compatible.
Tempting as option #1 may be, this blog wouldn’t be much use if we took that route. So here are the simplest fixes:
- Proportional transformation: if you want to make a 5-point scale talk to a 7-point scale, you multiply each 5-point score by 7/5 (alternatively, you could reduce a 7-point scale to 5-point by multiplying each 7-point score by 5/7).
- Transform each score (e.g., all 5-point and 7-point scores) to a standard z-score using the following formula: z = (raw score – mean of raw scores)/standard deviation of raw scores.
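Both fixes above are one-liners in practice. Here’s a minimal sketch in Python using only the standard library; the score lists and function names are my own illustration, not from any particular dataset:

```python
from statistics import mean, stdev

def proportional_transform(scores, from_max, to_max):
    """Rescale by simple multiplication, e.g. 5-point -> 7-point via 7/5."""
    return [s * to_max / from_max for s in scores]

def z_transform(scores):
    """Standardize raw scores: z = (raw - mean of raw scores) / standard deviation."""
    m, sd = mean(scores), stdev(scores)
    return [(s - m) / sd for s in scores]

# Hypothetical evaluation data: pre-activity collected on a 5-point scale,
# post-activity accidentally collected on a 7-point scale.
pre = [3, 4, 2, 5, 4]
post = [6, 7, 4, 7, 5]

pre_as_7 = proportional_transform(pre, from_max=5, to_max=7)
# e.g. a pre-activity 3 becomes 3 * 7/5 = 4.2 on the 7-point scale

pre_z, post_z = z_transform(pre), z_transform(post)
# both sets now sit on the same standardized scale (mean 0, SD 1)
```

Note that `z_transform` puts both groups on a mean-0, SD-1 scale, so it’s the safer choice when the two scales aren’t simple multiples of each other.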
In this case, simple may also be right (or right enough). To see how these approaches compare to more complex transformations, check out this article.
Let’s officially retire this pre/post-activity question:
<pre-activity> How would you rate your knowledge of X? (or the common variant: How confident are you in your ability to do X?)
<post-activity> After having participated in this activity, how would you rate your knowledge of X? (or …how confident are you now in your ability to do X?)
First and foremost, it’s really lazy. Second, we’ve known for long enough that physician self-assessments are reliably unreliable (Davis et al., 2006). It’s better to ask no question than a bad one.
Oh, I so want to say I measure patient outcomes. Everyone gets so excited. Imagine these two presentation titles: 1) “Reliability and Validity in Educational Outcome Assessment” and 2) “Measuring Patient Outcomes Associated with CME Participation”. Which one are you going to attend? Well…yes, to most folks those both sound pretty boring. But this is a CME blog. And in this part of town, it’d be like asking whether you’d rather hang out with some guy who runs a strip mall accounting firm or Will Ferrell.
But I’m not Will Ferrell. And instead of an accountant, I’d like to introduce you to Drs. Cook and West, who present a very clear and thoughtful piece on ~~why Will Ferrell really isn’t that funny~~ why patient outcomes may not be the best CME outcome target (click here for the article).
Read this article and be prepared. If you’re presenting on patient outcomes, I’m going to ask about things like “dilution” and “teaching-to-the-test”. Unless, of course, you are Will Ferrell. In which case, thank you for Elf.
I love SurveyMonkey…survey creation, distribution and data collection are a snap with this service (and it’s super cheap). What could possibly be bad about making surveys so accessible to everyone? Oh, yeah…it’s probably making surveys so accessible to everyone. Surveys used to represent a significant time and financial investment (e.g., postage, envelope stuffing, data entry). Now all you need is a list of emails. Without previous barriers, the decision to survey can come a little too quickly.
Admittedly, I’ve done ~~more than one survey~~ too many surveys simply because it was easy…rather than necessary. Now I’m afraid that all this ease is actually making surveying harder than ever. There are only so many physicians, and if we’re all bombing their inboxes with survey invitations, what’s the difference between us and cheap Viagra spam?
In his recent JCEHP Editorial, Dr. Olson eloquently describes this concern:
“…a survey population is a commons, a resource that is shared by a community, and like other commons such as ocean fisheries or antibiotics, it can be degraded by overuse” (p. 94)
Dr. Olson goes on to detail five ways in which we most typically misuse this common resource – which are much easier to address than climate change. I highly recommend reading this editorial. Afterward, continue to “reduce, reuse, recycle” and add: resist.
How do you cook CME? Maybe simmer KOL in a venue sauce and add enduring material to taste? And how do you select your ingredients? Are you a student of food theory or do you just feel your way through?
Well, I’m supposed to be scientifically-minded, so my pantry is full of evidence-based options. Wait…did I say full? I meant I know these four things:
- Live activities are more savory than print
- You’ll make a better soup with multi-media
- Multiple tastes are preferred to just one
- Case-based discussions are the most important seasoning
According to Marinopoulos SS, et al., that’s all we’ve got to work with. When you don’t know who’s coming to dinner, how hungry they are, or any of their possible dietary restrictions, you’ve got to make CME magic using only these four things. That’s pretty bleak.
Why don’t we know more? Too few studies with no standardization and very little reliability or validity data to support findings. We outcome experts may all be wearing toques, but apparently we only make french fries.
Commitment to change (CTC) questions are the caboose of every post-activity CME evaluation – stripped of all relevance and sustained solely by nostalgia. Thirty years since its introduction, we can now all retire this method, confident that it has served us well, but that it’s now time for something more…app-ish. And off it goes, grumbling toward obscurity, with none but academics to watch it fade. Its final words: “but, you never really knew me”.
But wait! What’s that? A hand? Pulling CTC back from the edge? Dusting off its coat, straightening its tie, offering a fresh dab of modelling clay to re-pomp its mane are five, kind investigators from the midwestern tundra. Not just ivory tower curators, these rescuers stand shoulder-to-shoulder with CTC to proclaim (or at least publish): there’s value here, assuming you use it correctly.
Guess what? People have been studying CTC for, like, a long time. Should you use a follow-up survey? When? How? How should you word the questions? Include a rating scale? And how should you sort through and interpret the results? This stuff all matters. And you won’t find an easier-to-digest summary than this 2010 article in Evaluation & the Health Professions.
So, yes, if you’re simply maintaining a “what are you going to change in your practice” question at the end of every CME evaluation – definitely send that packing. Then read the aforementioned article. You’ll find that CTC has limitations, but when done in accordance with the latest evidence, there’s a lot of good data to be had.