Thanks to Moore DE et al.,* we now recognize seven levels of CME outcomes (link). By ACCME mandate, we are required to assess the impact of CME on physician competence (Level 4), performance (Level 5), or patient health (Level 6). Implicit in this mandate is the belief that CME providers have mastered the techniques of assessing lower-level CME outcomes (e.g., satisfaction and knowledge gains). If you’re one of those who scoff at any CME outcome lower than performance change (Level 5), here are a few things to consider:
- CME outcomes assessment is designed to be stair-step: you climb to the next level after demonstrating success at the level preceding it. For example, you can’t create a change in physician performance (Level 5) without first creating a change in competency (Level 4). If physicians can’t demonstrate what you’re trying to teach them to do in the CME setting, how are they going to do it in practice?
- Each step up in outcome level requires more resources to evaluate than the one before (e.g., it is easier to evaluate changes in physician competence [Level 4] than performance [Level 5]). Why, then, would you start by evaluating physician performance change when you have no documentation of meeting a necessary precursor that is remarkably easier to assess?
- Without data on lower-level outcomes (i.e., Levels 2-4), how do you interpret the findings of higher-level outcomes? Say, for example, you are able to query the frequency of diabetic foot exams among physician participants in your CME activity addressing that issue. If the number of foot exams goes up after participation in your CME, how do you know that your CME caused the increase? A variety of factors outside of your CME activity could have led to it, and if you have no documentation of creating a knowledge or competence change, how can you claim your CME was more than a coincidence? Alternatively, let’s say the rate of diabetic foot exams remained stable after your CME activity. Was there something about the activity that did not satisfy the expectations of physician participants? Without satisfaction data, you have no information to inform improvements in subsequent CME activities.
- Research shows that certain characteristics, such as relevance, interactivity, and reinforcement, influence CME effectiveness. Satisfaction instruments should be designed to assess these key factors specifically, not generic issues like “would you recommend this CME activity to a colleague”. And it is telling that there are so few validated instruments for assessing physician satisfaction with CME (link). If we are truly so accomplished at assessing this outcome level, you’d expect a lot more validated tools – or you’d at least expect to see the few we have in regular use. Have you ever used a validated tool in your assessment of satisfaction-level CME outcomes?
*Moore DE, et al. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof 2009;29(1):1-15.