What does it mean when your CME participants score worse on a post-test assessment (compared to pre-test)?
Here are some likely explanations:
- The difference was not statistically significant. Significance testing determines whether we can reject the null hypothesis (here, that pre- and post-test scores are equivalent). If the difference was not significant (ie, P > .05), we cannot reject that assumption, and the apparent drop may simply be noise. And if the number of paired pre/post responses was too low to warrant statistical testing at all, the direction of change is meaningless; you don't have a representative sample.
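To make this concrete, here is a minimal sketch of a paired significance test on invented pre/post scores (the numbers and the sign-flip permutation approach are illustrative assumptions, not data from any real activity). Even though the mean score moves, the test cannot rule out chance:

```python
import random

# Hypothetical pre/post scores for 10 learners (assumed data, for illustration).
pre  = [60, 55, 70, 65, 50, 75, 60, 80, 55, 65]
post = [58, 60, 68, 70, 48, 78, 62, 79, 60, 63]

diffs = [b - a for a, b in zip(pre, post)]
observed = sum(diffs) / len(diffs)

# Paired permutation (sign-flip) test: under the null hypothesis that
# pre- and post-test scores are equivalent, each learner's difference is
# equally likely to be positive or negative.
random.seed(0)
n_perm = 10_000
extreme = 0
for _ in range(n_perm):
    flipped = [d * random.choice((1, -1)) for d in diffs]
    if abs(sum(flipped) / len(flipped)) >= abs(observed):
        extreme += 1
p_value = extreme / n_perm

print(f"mean change: {observed:+.1f} points, P = {p_value:.2f}")
```

With a sample this small, the P value lands well above .05, so neither a small gain nor a small drop would mean anything on its own.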
- Measurement bias (specifically, “multiple comparisons”). This bias results from conducting many comparisons within a single sample (ie, asking dozens of pre/post questions of a single audience). The issue with multiple comparisons is that the more questions you ask, the more likely you are to find a “significant” difference purely by chance – one that wouldn’t survive a more focused assessment. Yes, this is a bias to which many CME assessments are subject.
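The arithmetic behind the multiple-comparisons problem is easy to show. Assuming 40 pre/post questions and the conventional .05 threshold (both illustrative numbers), the chance that at least one question looks “significant” by chance alone is substantial, and a Bonferroni-style correction shows how much stricter the per-question threshold would need to be:

```python
alpha = 0.05        # conventional significance threshold
n_questions = 40    # assumed number of pre/post questions in one activity

# If every question truly had zero effect, the probability that at least
# one of 40 independent comparisons still comes out "significant":
fwer = 1 - (1 - alpha) ** n_questions
print(f"chance of >=1 false positive across {n_questions} questions: {fwer:.0%}")

# Bonferroni-corrected per-question threshold that keeps the
# family-wise error rate at 5%:
print(f"Bonferroni-corrected alpha: {alpha / n_questions:.4f}")
```

In this sketch the family-wise false-positive chance is roughly 87%, which is why a scattering of unexpected score drops (or gains) across a long assessment shouldn't be over-interpreted.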
- Bad question design. Did you follow key question-development guidelines? If not, the post-activity knowledge drop could be due to learners misinterpreting the question rather than losing knowledge. You can learn more about question design principles here.