Monthly Archives: March 2011

Case-based CME evaluations

Case vignettes owe much of their popularity in CME assessment to a series of three articles published between 2000 and 2004 [1-3].  Each article reported a comparative analysis of medical charts and case vignettes, assessing how well each method measured physician performance against standardized patients (the authors' "gold standard").  And in each article, case vignettes outperformed medical chart abstraction.  Here's the conclusion from the first article in the series (published in JAMA in 2000):

CONCLUSIONS: Our data indicate that quality of health care can be measured in an outpatient setting by using clinical vignettes. Vignettes appear to be a valid and comprehensive method that directly focuses on the process of care provided in actual clinical practice. Vignettes show promise as an inexpensive case-mix adjusted method for measuring the quality of care provided by a group of physicians.

Armed with this JAMA citation, CME providers have been regularly assessing physician performance changes associated with CME participation via case vignettes.  In this setting, the typical case vignette is a 3-5 sentence patient presentation followed by multiple-choice questions about diagnosis, treatment, and perhaps perceived practice barriers.  And if you're using that format, here's the problem: it doesn't reflect the format used in the research on which it's based.

The case vignettes used in the literature cited above [1-3] were developed only in clinical areas with strong clinical guideline support (i.e., there was little argument about the proper diagnostic or treatment approach).  They were also structured around the sequence of a typical patient visit, requiring physicians to give open-ended responses describing their approach to patient history, physical exam, tests, diagnosis, and management.  Finally, these open-ended responses were scored by a trained abstractor using a scoring sheet developed by a panel of physicians.
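
To make the contrast with the multiple-choice format concrete, here's a minimal sketch of what checklist-style scoring of an open-ended vignette response might look like, assuming a simple unweighted scoring sheet.  The visit domains follow the sequence described above, but the specific criteria, the `score_response` helper, and the pneumonia example are hypothetical illustrations, not taken from the published studies (whose actual scoring sheets may have been structured or weighted differently).

```python
# Hypothetical illustration of checklist-style vignette scoring. The visit
# domains mirror the sequence described in the post; everything else here
# (criteria, helper names, the worked example) is invented.

VISIT_DOMAINS = ["history", "physical_exam", "tests", "diagnosis", "management"]

# Scoring sheet: each domain maps to the checklist items a physician panel
# agreed should appear in a complete open-ended response.
SCORING_SHEET = {
    "history": {"duration of symptoms", "smoking status"},
    "physical_exam": {"lung auscultation"},
    "tests": {"chest x-ray"},
    "diagnosis": {"community-acquired pneumonia"},
    "management": {"empiric antibiotics", "follow-up visit"},
}

def score_response(items_credited):
    """Percent of checklist items a trained abstractor credited.

    items_credited maps each domain to the checklist items the abstractor
    found in the physician's free-text response to the vignette.
    """
    total = sum(len(items) for items in SCORING_SHEET.values())
    met = sum(
        len(SCORING_SHEET[domain] & items_credited.get(domain, set()))
        for domain in VISIT_DOMAINS
    )
    return 100.0 * met / total

# Worked example: the abstractor credits 5 of the 7 items, so the
# physician's vignette score is 5/7 = 71.4%.
credited = {
    "history": {"duration of symptoms", "smoking status"},
    "physical_exam": {"lung auscultation"},
    "tests": {"chest x-ray"},
    "management": {"empiric antibiotics"},
}
print(f"Vignette score: {score_response(credited):.1f}%")
```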

Unless you're using this approach, there is no evidence that your case vignettes are actually capturing physician performance.  That doesn't mean what we're doing is wrong; it's just not an evidence-based assessment method.  As to what we're actually measuring (i.e., knowledge, competence), there may be plenty of opinion, but the burden is on us to provide some evidence.

References:

  1. Peabody JW, et al. Comparison of vignettes, standardized patients, and chart abstraction: A prospective validation study of 3 methods for measuring quality. JAMA 2000;283:1715-22. (abstract)
  2. Peabody JW, et al. Measuring the quality of physician practice by using clinical vignettes: A prospective validation study. Ann Intern Med 2004;141:771-780. (abstract)
  3. Dresselhaus TR, et al. An evaluation of vignettes for predicting variation in the quality of preventive care. J Gen Intern Med 2004;19:1013-18. (abstract)


Filed under Case vignettes, Methodology, Outcomes

Validated instruments for CME satisfaction outcomes

Here are three tools for assessing participant satisfaction with CME that have gone through some validity testing:

  1. Wood TJ, et al. The development of a participant questionnaire to assess continuing medical education presentations. Medical Education 2005;39:568-72. (abstract)
  2. Rothman AI, Sibbald G. Evaluating Medical Grand Rounds. Journal of Continuing Education in the Health Professions 2002;22:77-83. (abstract)
  3. Shewchuk RM, et al. A Standardized Approach to Assessing Physician Expectations and Perceptions of Continuing Medical Education. Journal of Continuing Education in the Health Professions 2007;27:173-82. (abstract)

I expect there are a few more, but not many (post a comment if you have other examples).

Just because satisfaction isn't one of the ACCME big three (i.e., competence, performance, or patient outcomes) doesn't mean it isn't useful.  The original Kirkpatrick model (1) and Moore's later adaptation of it (2) both treat satisfaction as a necessary precursor to those higher-level outcomes.  Satisfaction data also make it easier to tease out which elements of a CME activity deserve praise or blame when interpreting competence, performance, or patient-level outcomes.  And with so few validated instruments for assessing satisfaction, this is hardly an area we've adequately covered.

References:

1. Kirkpatrick DL. Evaluating Training Programs: The Four Levels. San Francisco, CA: Berrett-Koehler; 1994.

2. Moore DE. A framework for outcomes evaluation in the continuing professional development of physicians. In: Davis D, Barnes BE, Fox R, eds. The Continuing Professional Development of Physicians: From Research to Practice. Chicago: American Medical Association; 2003.


Filed under Outcomes, Satisfaction