Monthly Archives: October 2014

Script Concordance Tests: where have you been hiding?

When I saw the JCEHP editorial title lead with “How Significant is Statistical Significance…” I knew I’d be blogging about it.  As I remember the progression through graduate school statistics courses, it began with learning how to select the appropriate significance test, progressed to application, and then concluded with all the reasons why the results didn’t really mean much.  So I was ready to build a “cut-and-paste” blog post out of old class papers detailing an unhealthy dependence on the results of statistical tests (which I expected to be the opinion of this editorial).  And that would have worked fine, but then I found a rabbit hole: script concordance tests (SCTs).

Casually introduced by the authors via an educational scenario illustrating the limitations of statistical significance, the SCT is a case-based assessment method designed to measure the clinical decision-making process (as opposed to simply identifying whether someone knows a correct diagnosis or treatment).  For educators, this could be quite helpful in clarifying educational gaps.  For evaluators, this approach has some encouraging validity data.  I’ve got a way to go before I can even claim familiarity with SCTs, but I’ll be diving into the literature immediately (and assuming expert status by, hopefully, next week).  If anyone else is interested, here are some suggestions to learn more:

  1. Fournier JP, Demeester A, Charlin B. Script concordance tests: guidelines for construction. BMC Med Inform Decis Mak. 2008;8:18. (click here for full article)
  2. Charlin B, Roy L, Brailovsky C, Goulet F, van der Vleuten C. The script concordance test: a tool to assess the reflective clinician. Teach Learn Med. 2000;12:189-195. (click here for abstract)
  3. Dory V, Gagnon R, Vanpee D, Charlin B. How to construct and implement script concordance tests: insights from a systematic review. Med Educ. 2012;46:552-563. (click here for full article)
  4. Lubarsky S, Charlin B, Cook DA, Chalk C, van der Vleuten C. Script concordance method: a review of published validity evidence. Med Educ. 2011;45:329-338. (click here for full article)

FYI – it turns out SCTs were introduced in the late 1990s.  So I’m less than 20 years behind the curve, and perfectly in tune with the traditional adoption curve of evidence to clinical practice (which hovers around 17 years).


Filed under Case vignettes, CME, Script concordance tests, Statistical tests of significance, Statistics

Fall CMEPalooza

Don’t forget to check out CMEPalooza this Wednesday (Oct 15th) – it starts at 9 AM Eastern.  I’d like to catch all seven sessions, but I’m particularly interested in the 11 AM set: Death of the MECC – Fact or Fiction?  If it’s fact, I guess I’m sleeping in on Thursday.


Filed under CME, CMEpalooza

Same question, two different scales

It happens.  Your carefully crafted evaluation questions are administered to the survey population using a different scale pre- and post-activity.  Miscommunication, a cut-and-paste fail, whatever the cause…what do you do with the data?

  1. Nothing.  You report it as is, don’t attempt any statistical testing, and hope it doesn’t happen again.
  2. Transform.  Call on your inner MacGyver and make these two scales compatible.

Tempting as option #1 may be, this blog wouldn’t be much use if we took that route.  So here are the two simplest fixes:

  1. Proportional transformation: if you want to make a 5-point scale talk to a 7-point scale, you multiply each 5-point score by 7/5 (alternatively, you could reduce a 7-point scale to a 5-point scale by multiplying each 7-point score by 5/7).
  2. Transform each score (e.g., all 5-point and 7-point scores) to a standard z-score using the following formula: z = (raw score – mean of raw scores)/standard deviation of raw scores.
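The two fixes above are easy to script.  Here’s a minimal sketch in Python (the survey numbers are made up for illustration; the function names are mine, not from any particular stats package):

```python
from statistics import mean, stdev

def proportional(scores, from_max, to_max):
    """Rescale scores by the ratio of scale maxima (e.g., 5-point -> 7-point: multiply by 7/5)."""
    return [s * to_max / from_max for s in scores]

def z_scores(scores):
    """Standardize raw scores: z = (raw - mean of raw scores) / standard deviation of raw scores."""
    m, sd = mean(scores), stdev(scores)
    return [(s - m) / sd for s in scores]

# Hypothetical responses: pre-activity on a 5-point scale, post-activity on a 7-point scale
pre = [3, 4, 5, 2, 4]
post = [6, 7, 5, 4, 6]

# Option 1: put the pre scores on the 7-point scale
pre_on_7 = proportional(pre, from_max=5, to_max=7)  # [4.2, 5.6, 7.0, 2.8, 5.6]

# Option 2: standardize both sets so each has mean 0 and SD 1
pre_z, post_z = z_scores(pre), z_scores(post)
```

Note that after the z-transform you’re comparing relative standing within each administration, not the raw scale points, so interpret the pre/post difference accordingly.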

In this case, simple may also be right (or right enough).  To see how these approaches compare to more complex transformations, check out this article.



Filed under CME, data, Likert scale, Statistics, transformation

Physician self-assessment questions

Let’s officially retire this pre/post-activity question:

<pre-activity> How would you rate your knowledge of X? (or the common variant: How confident are you in your ability to do X?)

<post-activity> After having participated in this activity, how would you rate your knowledge of X?  (or …how confident are you now in your ability to do X?)

First and foremost, it’s really lazy.  Second, we’ve known for long enough that physician self-assessments are reliably unreliable (Davis et al, 2006).  It’s better to ask no question than a bad one.


Filed under CME, Outcomes, Self-assessment

Be patient on those outcomes

Oh, I so want to say I measure patient outcomes.  Everyone gets so excited.  Imagine these two presentation titles: 1) “Reliability and Validity in Educational Outcome Assessment” and 2) “Measuring Patient Outcomes Associated with CME Participation”.  Which one are you going to attend?  Well…yes, to most folks those both sound pretty boring.  But this is a CME blog.  And in this part of town, it’d be like asking whether you’d rather hang out with some guy who runs a strip mall accounting firm or Will Ferrell.

But I’m not Will Ferrell.  And instead of an accountant, I’d like to introduce you to Drs. Cook and West, who present a very clear and thoughtful piece on ~~why Will Ferrell really isn’t that funny~~ why patient outcomes may not be the best CME outcome target (click here for the article).

Read this article and be prepared.  If you’re presenting on patient outcomes, I’m going to ask about things like “dilution” and “teaching-to-the-test”.  Unless, of course, you are Will Ferrell.  In which case, thank you for Elf.


Filed under CME, Outcomes, Patient Health, Reliability, Validity