What does it mean when your CME participants score worse on a post-test assessment (compared to pre-test)?
Here are some likely explanations:
- The post-activity change was not statistically significant. Significance testing determines whether a measured pre/post difference could plausibly be attributed to random chance. If the difference was not significant, we can’t say the result was due to anything other than chance. And if the pre/post response rate was too low to warrant statistical testing at all, the direction of the change is meaningless; you don’t have a representative sample.
- Measurement bias (specifically, the “multiple comparisons” problem). This bias results from conducting many statistical comparisons within a single sample (ie, asking dozens of pre/post questions of a single audience). The issue is that the more questions you ask, the more likely you are to find a “significant” difference where none truly exists (and where none would appear under a more focused assessment). Yes, this is a bias to which many CME assessments are subject.
- Bad question design. Did you follow key question development guidelines? If not, the post-activity knowledge drop could be due to misinterpretation of the question. You can learn more about question design principles here.
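To make the significance-testing point concrete: because the same participants answer the same question before and after the activity, a paired test is appropriate. One standard choice for paired correct/incorrect responses is an exact McNemar test on the discordant pairs. Below is a minimal sketch in plain Python (standard library only); the participant counts are hypothetical, purely for illustration.

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value for paired pre/post data.

    b: participants correct on the pre-test but wrong on the post-test
    c: participants wrong on the pre-test but correct on the post-test
    Under the null hypothesis of no true change, the discordant pairs
    follow Binomial(b + c, 0.5).
    """
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of change either way
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical counts: 8 participants dropped from correct to wrong,
# 3 improved from wrong to correct -- the average score fell post-activity.
p = mcnemar_exact(8, 3)
print(f"p = {p:.3f}")  # p = 0.227: the observed drop is not significant
```

Even though more participants got worse than got better, the p-value here is well above 0.05, so the apparent decline is consistent with chance.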
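The multiple-comparisons inflation described above is easy to quantify: with m independent questions and no true pre/post change on any of them, the chance of at least one spuriously “significant” result at a 0.05 threshold is 1 − 0.95^m. A short illustration (the 30-question panel is a hypothetical number, not taken from any particular assessment):

```python
alpha = 0.05   # per-question significance threshold
m = 30         # hypothetical number of pre/post questions asked

# Probability of at least one false positive across m independent tests
p_any_false_positive = 1 - (1 - alpha) ** m
print(f"{p_any_false_positive:.1%}")  # 78.5%

# One simple (if conservative) remedy: the Bonferroni-corrected threshold
print(f"Bonferroni alpha per question: {alpha / m:.4f}")  # 0.0017
```

In other words, a 30-question assessment is more likely than not to produce at least one “significant” pre/post shift, in either direction, by chance alone.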