When I saw the JCEHP editorial title lead with “How Significant is Statistical Significance…” I knew I’d be blogging about it. As I remember the progression through graduate school statistics courses, it began with learning how to select the appropriate significance test, progressed to application, and then concluded with all the reasons why the results didn’t really mean much. So I was ready to build a “cut-and-paste” blog post out of old class papers detailing an unhealthy dependence on the results of statistical tests (which I expected to be the opinion of this editorial). And that would have worked fine, but then I found a rabbit hole: script concordance tests (SCTs).
Casually introduced by the authors via an educational scenario illustrating the limitations of statistical significance, the SCT is a case-based assessment method designed to measure the clinical decision-making process (as opposed to simply identifying whether someone knows a correct diagnosis or treatment). For educators, this could be quite helpful in clarifying educational gaps. For evaluators, this approach has some encouraging validity data. I’ve got a way to go before I can even claim familiarity with SCTs, but will be diving into the literature immediately (and assuming expert status by hopefully next week). If anyone else is interested, here are some suggestions to learn more:
- Fournier JP, Demeester A, Charlin B. Script concordance tests: guidelines for construction. BMC Med Inform Decis Mak 2008;8:18.
- Charlin B, Roy L, Brailovsky C, Goulet F, van der Vleuten C. The script concordance test: a tool to assess the reflective clinician. Teach Learn Med 2000;12:189-195.
- Dory V, Gagnon R, Dominique V, Charlin B. How to construct and implement script concordance tests: insights from a systematic review. Med Educ 2012;46:552-563.
- Lubarsky S, Charlin B, Cook DA, Chalk C, van der Vleuten C. Script concordance method: a review of published validity evidence. Med Educ 2011;45:329-338.
FYI – it turns out SCTs were introduced in the late 1990s. So I’m less than 20 years behind the curve, and perfectly in tune with the traditional lag between evidence and clinical practice adoption (which hovers around 17 years).
Case vignettes owe much of their popularity in CME assessment to a series of three articles published between 2000 and 2004 [1-3]. These articles described three separate comparative analyses of medical charts and case vignettes. In each article, the authors assessed how well medical charts and case vignettes measured physician performance as compared to standardized patients (their “gold standard”). And in each article, case vignettes outperformed medical chart abstraction. Here’s the conclusion from the first article in this series (published in JAMA in 2000):
CONCLUSIONS: Our data indicate that quality of health care can be measured in an outpatient setting by using clinical vignettes. Vignettes appear to be a valid and comprehensive method that directly focuses on the process of care provided in actual clinical practice. Vignettes show promise as an inexpensive case-mix adjusted method for measuring the quality of care provided by a group of physicians.
Armed with this JAMA citation, CME providers have been regularly assessing physician performance changes associated with CME participation via case vignettes. In this setting, the typical case vignette format is a 3-5 sentence patient presentation followed by multiple-choice questions regarding diagnosis, treatment, and maybe even perceived practice barriers. And if you’re using that format, here’s the problem: it doesn’t reflect the format utilized in the research upon which it’s based.
The case vignettes utilized in the literature cited above [1-3] were developed only in clinical areas with strong clinical guideline support (i.e., there was little disagreement about proper diagnostic or treatment approaches). They were also designed around the sequence of a typical patient visit, requiring physicians to provide open-ended responses to a given case vignette about their approach to patient history, physical exam, tests, diagnosis, and management. Finally, these open-ended responses were assessed by a trained abstractor based on a scoring sheet developed by a panel of physicians.
Unless you’re using this approach, there is no evidence that your case vignettes are actually capturing physician performance. That doesn’t mean what we’re doing is wrong; it’s just not an evidence-based assessment method. As to what we’re actually measuring (i.e., knowledge, competence), there may be lots of opinion, but the burden is on us to provide some evidence.
- Peabody JW, et al. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA 2000;283:1715-1722.
- Peabody JW, et al. Measuring the quality of physician practice by using clinical vignettes: a prospective validation study. Ann Intern Med 2004;141:771-780.
- Dresselhaus TR, et al. An evaluation of vignettes for predicting variation in the quality of preventive care. J Gen Intern Med 2004;19:1013-1018.