I’m familiar with two types of RSS, one in which the topics and speakers change with each session (e.g., Grand Rounds) and another in which the topics and speakers remain relatively constant, but unique clinical cases are addressed in each session (e.g., M&M case conference). For the former, I’ve used a paper-based, post-activity evaluation that looks like this. For the case-based RSS activities, however, evaluation completions using this approach were very low (< 20%).
In an effort to increase the response rate, I piloted three approaches and eventually settled on a quarterly, web-based survey of participants using an adaptation of a previously validated satisfaction instrument together with a commitment-to-change evaluation (click here for an example).
With this approach, the evaluation response rate averaged 50% across seven case-based RSS activities. Click here for an example of outcomes reporting using this approach.