Although I’ve complained a fair bit about validity and reliability issues in CME assessment, I haven’t offered much on this blog to actually address these concerns. Well, the thought of thousands (and thousands and…) of dear and devoted readers facing each new day with the same, tired CME assessment questions has become too much to bear. That, and I was recently required to do a presentation on guidelines and common flaws in the creation of multiple-choice questions…so I thought I’d share it here.
I’d love to claim these pearls are all mine, but they’re just borrowed. Nevertheless, this slide deck may serve as a handy single-resource when constructing your next assessment (and it contains some cool facts about shark attacks).
I’ve talked a lot about effect size: what it is (here), how to calculate it (here, here and here), what to do with the result (here and here)…and then some about limitations (here). Overall, I’ve been trying to convince you that effect size is a sound (and simple) approach to quantifying the magnitude of CME effectiveness. Now it’s time to talk about how it may be total garbage.
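For readers who'd like the arithmetic in front of them, here's a minimal sketch of the kind of calculation I mean: Cohen's d with a pooled standard deviation, computed on invented pre/post self-efficacy ratings (the numbers are purely illustrative, not from any real activity):

```python
from statistics import mean, stdev

def cohens_d(pre, post):
    """Cohen's d for two groups of scores, using a pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    s1, s2 = stdev(pre), stdev(post)  # sample standard deviations
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(post) - mean(pre)) / pooled_sd

# Hypothetical 5-point self-efficacy ratings before and after an activity
pre = [2, 3, 3, 2, 4, 3]
post = [4, 4, 5, 3, 4, 4]
print(round(cohens_d(pre, post), 2))  # → 1.68
```

The point of the sections that follow is that this number is only as trustworthy as the survey questions that produced the raw scores.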
All this effect size talk rests on the supposition that the data from which it is calculated are both reliable and valid. In CME, the data source is overwhelmingly the survey – and the questions within typically include self-efficacy scales, single-correct-answer knowledge tests and/or case vignettes. But how do you know that your survey questions actually measure what they intend to measure (validity) and do so with consistency (reliability)? CME has been repeatedly dinged for not using validated measurement tools. And if your survey isn’t valid (or reliable), why would your data be worth anything? Effect size does not correct for bad questions. So maybe next time you’re touting a great effect size (or trying to bury a bad one), you should also consider (and be able to document) whether you’ve demonstrated the effectiveness of your CME or the ineffectiveness of your survey.
So what can be done? Well, you can hire a psychometrist and add complicated-sounding things like “factor analysis” and “Cronbach’s alpha” to your lexicon (yell those out during the next CME presentation you attend…and then quickly run out of the room). Or (actually “and”), you can start with sound question-design principles. The key thing to note is that no amount of complex statistics can make a bad question good – so you need to know the fundamentals of assessing knowledge and competence in medical education. Where do you get those? Here are some suggestions to get you started:
- Take the National Board of Medical Examiners (NBME) U course entitled: Assessment Principles, Methods, and Competency Framework. This is an awesome (daresay, the best) resource for anyone assessing knowledge and competence in medical education. Complete this course (there are 20 lessons, each under 30 minutes) and you’ll be as expert as anyone in CME. You can register here. And it’s free!
- Check out Dr. Wendy Turell’s session entitled Tips to Make You a Survey Measurement Rock Star during the next CMEpalooza (April 8th at 1:30 eastern). This is her wheelhouse – so steal every bit of her expertise you can. Once again, it’s free.
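And if you do want to peek behind the psychometric curtain, Cronbach’s alpha is less exotic than it sounds: it’s an estimate of internal consistency built from item and total-score variances. A sketch using made-up ratings (three hypothetical 5-point items answered by five learners; the data are invented for illustration):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per question)."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def variance(xs):         # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]  # per-respondent total score
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical responses to three 5-point scale items from five learners
q1 = [4, 3, 5, 2, 4]
q2 = [4, 2, 5, 3, 4]
q3 = [3, 3, 4, 2, 5]
print(round(cronbach_alpha([q1, q2, q3]), 2))  # → 0.87
```

A rule of thumb often cited is that alpha above roughly 0.7 suggests acceptable internal consistency – but, as noted above, no statistic rescues a badly written question.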
I love SurveyMonkey…survey creation, distribution and data collection is a snap with this service (and it’s super cheap). What could possibly be bad about making surveys so accessible to everyone? Oh, yeah…it’s probably making surveys so accessible to everyone. Surveys used to represent a significant time and financial investment (e.g., postage, envelope stuffing, data entry). Now all you need is a list of emails. With those barriers gone, the decision to survey can come a little too quickly.
Admittedly, I’ve done more than one survey simply because it was easy…rather than necessary. Now I’m afraid that all this ease is actually making surveying harder than ever. There are only so many physicians, and if we’re all bombarding their inboxes with survey invitations, what’s the difference between us and cheap Viagra spam?
In his recent JCEHP Editorial, Dr. Olson eloquently describes this concern:
“…a survey population is a commons, a resource that is shared by a community, and like other commons such as ocean fisheries or antibiotics, it can be degraded by overuse” (p. 94)
Dr. Olson goes on to detail five ways in which we most typically misuse this common resource – which are much easier to address than climate change. I highly recommend reading this editorial. Afterward, continue to “reduce, reuse, recycle” and add: resist.
CMEpalooza will be held Thursday, March 20 and Friday, March 21. Like the annual professional meeting for CME (Alliance for Continuing Education in the Health Professions), CMEpalooza is a collection of “best practice” talks. Unlike the Alliance, the entire event will be online, archived and free. A big thank you to Derek Warnick (aka “the CME Guy“) for putting this all together.
Based on the agenda (of 21 presentations), there are many promising talks on topics ranging from audience recruitment, adult learning theory, and linking educational objectives with outcomes to qualitative analysis and measuring patient outcomes (I’ll be representing Imedex with a presentation on statistical analysis in CME outcomes). Regardless of your scope of work, I suspect you’ll find at least one presentation on the agenda of interest.
If you can’t participate live, no worries: everything will be archived, so view at your convenience – but make sure to check it out.