Although I’ve complained a fair bit about validity and reliability issues in CME assessment, I haven’t offered much on this blog to actually address these concerns. Well, the thought of thousands (and thousands and…) of dear and devoted readers facing each new day with the same, tired CME assessment questions has become too much to bear. That, and I was recently required to do a presentation on guidelines and common flaws in the creation of multiple-choice questions…so I thought I’d share it here.
I’d love to claim these pearls are all mine, but they’re just borrowed. Nevertheless, this slide deck may serve as a handy single resource when constructing your next assessment (and it contains some cool facts about shark attacks).
I love SurveyMonkey…survey creation, distribution and data collection are a snap with this service (and it’s super cheap). What could possibly be bad about making surveys so accessible to everyone? Oh, yeah…it’s probably making surveys so accessible to everyone. Surveys used to represent a significant time and financial investment (e.g., postage, envelope stuffing, data entry). Now all you need is a list of emails. Without the previous barriers, the decision to survey can come a little too quickly.
Admittedly, I’ve done more than one survey simply because it was easy…rather than necessary. Now I’m afraid that all this ease is actually making surveying harder than ever. There are only so many physicians, and if we’re all bombing their inboxes with survey invitations, what’s the difference between us and cheap Viagra spam?
In his recent JCEHP Editorial, Dr. Olson eloquently describes this concern:
“…a survey population is a commons, a resource that is shared by a community, and like other commons such as ocean fisheries or antibiotics, it can be degraded by overuse” (p. 94)
Dr. Olson goes on to detail five ways in which we most typically misuse this common resource – which are much easier to address than climate change. I highly recommend reading this editorial. Afterward, continue to “reduce, reuse, recycle” and add: resist.
You can’t make everyone happy. I don’t think I’ve ever seen outcome data for a CME activity that didn’t include at least one harsh comment in the open-ended feedback section. Although in the minority, something about these comments makes them feel particularly weighty. Maybe it’s because someone actually took the time to write something down – as opposed to simply checking boxes on an evaluation form. When you find yourself (or a sponsor) particularly affected by such comments, consider the following…
One consideration in the interpretation of survey data is non-response bias. Non-response bias is the possibility that individuals responding to a survey differ from non-respondents in a way that limits the generalizability of survey data to the overall CME participant population being evaluated. Generally speaking, the lower the survey response rate, the greater the potential for non-response bias. For example, a CME evaluation survey with a 20% response rate is less likely to be representative of the overall CME participant population than a survey with a 40% response rate. The concern is that the 20% who choose to complete the survey are unique in some way that creates a bias in the data. The higher the response rate, the less likely survey respondents are distinct from the overall population of CME participants.
Open-ended questions are particularly susceptible to non-response bias. Even when someone elects to respond to a survey, research has shown that these respondents complete open-ended questions less than 40% of the time (Borg, 2005; Poncheri et al., 2008; Siem, 2005). So even if survey respondents are deemed representative of the overall population (for example, based on a demographic comparison between respondents and the overall population), the subgroup of survey respondents who complete the open-ended questions may differ enough to introduce bias.
So do respondents who complete open-ended questions differ from non-respondents? Research has shown that survey respondents with lower satisfaction are more likely to respond to open-ended questions than satisfied respondents (McNeely, 1990; Poncheri et al., 2008). This is supported by the general psychological phenomenon that dissatisfied individuals are more likely to consider the causes of their dissatisfaction than satisfied individuals are to consider the source of their satisfaction – accordingly, satisfied individuals will have less to communicate than dissatisfied individuals when asked to provide comments (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001; Peeters, 1971; Harman-Poncheri, 2008).
By focusing on open-ended comments in CME evaluation surveys, we may be drawing conclusions based only on the least satisfied respondents (which are likely a minority of the overall CME participants). Although such feedback is still valuable in the identification of areas of improvement, assuming such feedback is reflective of the whole would likely skew our perception of how CME participants really feel about their CME experience.
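If you like to see this sort of skew with numbers attached, here’s a toy simulation. Every figure in it is invented for illustration (the satisfaction distribution and the commenting probabilities are assumptions, not values from the studies cited above); the only point is that if dissatisfied participants are even modestly more likely to leave a comment, the average satisfaction among commenters lands well below the population average.

```python
import random

random.seed(42)

# Hypothetical population of 1,000 CME participants rating satisfaction
# on a 1-5 scale, skewed toward "satisfied" (weights are made up).
population = random.choices([1, 2, 3, 4, 5],
                            weights=[5, 10, 20, 35, 30], k=1000)

# Assume dissatisfied participants are more likely to write an open-ended
# comment (probabilities are illustrative, not empirical).
comment_prob = {1: 0.60, 2: 0.45, 3: 0.25, 4: 0.15, 5: 0.10}
commenters = [s for s in population if random.random() < comment_prob[s]]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Population mean satisfaction: {mean(population):.2f}")
print(f"Commenter mean satisfaction:  {mean(commenters):.2f}")
print(f"Share of participants who commented: {len(commenters) / len(population):.0%}")
```

Run it a few times with different seeds and the gap persists: the commenting subgroup is both a minority of participants and reliably grumpier than the group as a whole, which is exactly why reading only the comments can mislead.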
- Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5, 323-370.
- Borg, I. (2005, April). Who writes what kinds of comments? Some new findings. In A. I. Kraut (Chair), Grappling with write-in comments in a web-enabled survey world. Symposium conducted at the 20th annual conference of the Society for Industrial and Organizational Psychology, Los Angeles, California.
- Harman-Poncheri, R. (2008). Understanding survey comment nonresponse and the characteristics of nonresponders (Doctoral dissertation). North Carolina State University.
- McNeely, R. L. (1990). Do respondents who pen comments onto mail surveys differ from other respondents? A research note on the human services job satisfaction literature. Journal of Sociology & Social Welfare, 17(4), 127-137.
- Peeters, G. (1971). The positive-negative asymmetry: On cognitive consistency and positivity bias. European Journal of Social Psychology, 1, 455-474.
- Poncheri, R. M., Lindberg, J. T., Thompson, L. F., & Surface, E. A. (2008). A comment on employee surveys: Negativity bias in open-ended responses. Organizational Research Methods, 11, 614-630.
- Siem (2005, April). History of survey comments at the Boeing Company. In K. J. Fenlason (Chair), Comments: Where have we been? Where are we going? Symposium conducted at the 20th annual conference of the Society for Industrial and Organizational Psychology, Los Angeles, California.
Every few years, someone publishes a meta-analysis which ultimately concludes we’re all doing a sad job of assessing CME outcomes. Most recently, I’d recommend the following article (link). One of the main reasons we often fall short: we’re not using validated tools.
Happy to stand on the shoulders of others, I’m always on the lookout for validated surveys…here are three useful for CME (link). If these aren’t satisfactory, here’s a handy guide to developing your own validated tool (link).
After reviewing 17 evaluation instruments associated with their current CME activities, the University of Virginia School of Medicine Office of CME identified 8 core questions they felt needed to be included in the evaluations of all live, regularly scheduled and enduring CME. Their process is discussed in CE Measure here. I’ve recreated the table containing their eight core evaluation items here.
If you have no idea where your current evaluation questions came from (or what they actually measure) AND do not foresee the opportunity to address these concerns…at least read the CE Measure article referenced above and use the evaluation questions they recommend.
CME providers should be savvy about response rates…we crank out a lot of surveys. So what’s the most effective survey format? Is it email, fax, regular mail or some combination thereof? And, outside of format, what can be done to improve response rates?
Unfortunately, we CME providers are not very good at aggregating the results of our survey efforts. Thus, with little more than anecdotal evidence, we’re forced to nod in agreement with whatever a senior CME colleague states when pressed with such questions.
Until we organize our data, I’ll recommend the following systematic review:
- VanGeest JB, et al. Methodologies for Improving Response Rates in Surveys of Physicians. Evaluation & the Health Professions 2007;30:303-21. (abstract)
Here’s the quick synopsis…postal or telephone surveys are better than fax or web. And any of the following are associated with higher response rates: brief questionnaires, endorsements by professional associations, and direct monetary incentives (as low as $1).