Category Archives: Sample size

Rule of thumb…number of participants per survey item

It’s ten.  For each question on your survey, you should have at least 10 respondents.  And if you’re distributing the survey to more than one group (e.g., participants in a CME activity and representative non-participants), there should be at least 10 respondents per survey question per group.
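If you want to audit a finished survey against this rule, a minimal sketch in Python follows (the item names and response counts are hypothetical, and items with nonresponse are allowed):

```python
# Rule of thumb: every survey item needs at least 10 responses in every group.
MIN_PER_ITEM = 10

# Hypothetical per-item response counts for each group
responses = {
    "participants":     {"Q1": 42, "Q2": 38, "Q3": 9},
    "non_participants": {"Q1": 15, "Q2": 11, "Q3": 12},
}

for group, items in responses.items():
    for item, n in items.items():
        if n < MIN_PER_ITEM:
            print(f"{group}/{item}: only {n} responses (need >= {MIN_PER_ITEM})")
```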


Filed under Methodology, Sample size

What sample size do I need?

I was recently asked the following:

Do you have any information on the sample size needed to obtain statistical significance for surveys?

That depends on the type of survey.  If you’re looking for the sample size necessary for a needs assessment survey, you can find clear instructions here.  For a comparative assessment (e.g., participants pre- vs. post-CME activity, or CME participants vs. a representative control group), the necessary sample size would be determined by a power calculation…but don’t worry about learning how to do one; odds are it doesn’t fit your assessment.

A very helpful explanation of power calculations by Professor Mean (think “average,” not “unpleasant”) can be found here.  Professor Mean details three things needed for a power calculation (a short code sketch follows the list):

  1. a research hypothesis,
  2. a standard deviation for your outcome measure, and
  3. an estimate of a clinically relevant difference for this outcome measure.
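For concreteness, here is what feeding those three inputs into a power calculation looks like; a minimal sketch using statsmodels, where the standard deviation and the clinically relevant difference are entirely hypothetical (which, as argued below, is exactly the problem):

```python
from statsmodels.stats.power import TTestIndPower

# 1. Hypothesis (assumed): post-CME participants outscore non-participants
#    on a given survey item.
# 2. Standard deviation of the outcome measure (assumed):
sd = 1.2
# 3. Clinically relevant difference on that measure (assumed):
relevant_diff = 0.5

effect_size = relevant_diff / sd  # standardized effect (Cohen's d)
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_per_group:.0f}")
```

The math is the easy part; justifying the two assumed numbers for every item on every instrument is the hard part.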

The standard CME assessment is as follows: participants in a CME activity are given a survey (consisting of case-based questions, Likert-scale questions, or both) and their responses are compared pre- vs. post-participation, post-participation vs. the responses of a representative non-participant group, or both.  Other than the umbrella expectation that CME participants will respond better to each question after CME exposure (i.e., more in accordance with the educational messages of the CME activity), there is seldom a specific hypothesis defined (see power calculation criterion #1 above).  You could argue that each survey question is a hypothesis, in which case you would need to identify a standard deviation (criterion #2) and a clinically relevant difference (criterion #3) for each.  If you’re using a Likert-scale survey, what’s the standard deviation for self-efficacy in performing a diabetic foot exam?  And if a physician’s self-efficacy climbs one point, is that clinically relevant?  If you’re using a case-based instrument, what’s the standard deviation for prescribing an LDL-lowering drug in a patient with 0-1 risk factors for CHD and an LDL level of > 190 mg/dL?  Can you imagine having to answer these questions for every assessment instrument for every CME activity?  I can’t.  Which is why I/we don’t/shouldn’t worry about power calculations.

The purpose of a power calculation is to conserve resources and protect people from harm.  In regard to clinical drug trials, each subject added to your study increases both expense and exposure to potentially harmful treatment.  Clearly a calculation to identify the minimum number of study subjects is useful in this setting.  In CME, we want to educate as many physicians as possible and each additional physician educated should decrease the amount of harm experienced by their patients.  Power calculations don’t make sense in CME planning, and we shouldn’t pretend otherwise.

Now for the best part…go ahead and run statistical tests on your survey data.  If your results achieve statistical significance, then you had adequate power.  That doesn’t mean your assessment is free of methodologic flaws…just that power isn’t one of them.  If your results don’t achieve statistical significance, then you’re left with two conclusions: 1) in this assessment, there was no difference between CME participants and the comparison group, and 2) the inability to detect a difference could be due to an insufficient number of assessment participants.

I know it sounds smart to talk about power calculations, but in most cases the truth is exactly the opposite.  Next time you hear someone claim they did a power calculation for a CME assessment, ask them to walk through each of Professor Mean’s three criteria.


Filed under Power calculation, Sample size

Calculating sample size for a needs assessment survey

Here are the steps:

1) Determine the size of your target population.  Let’s say you want to survey pediatricians in the United States…a quick Google search (search terms = “how many US general pediatricians”) points to the American Academy of Pediatrics Division of Workforce and Medical Education Policy webpage, which reports a total of 57,200 U.S. general pediatricians (based on data from the 2006 American Medical Association Masterfile).

So, in this example, the target population size = 57,200.

2) Determine how big a sample is needed to represent the target population.  Thankfully, there’s an abundance of free sample size calculators online.  I typically use this one.  Four things are needed to calculate sample size: 1) margin of error, 2) confidence level, 3) population size, and 4) response distribution.  Actually, the only thing you really need to know is population size (which for U.S. pediatricians is 57,200).  Just like we all accept P < .05 as the benchmark for statistical significance, the standards for margin of error, confidence level, and response distribution are 5%, 95%, and 50%, respectively.  Click here for a sample size calculator screen shot using U.S. pediatricians as the target population (the recommended sample size is 382).
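If you’d rather see the arithmetic those calculators run, here’s a minimal sketch of the standard formula (normal approximation with a finite population correction) under the default assumptions above:

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Sample size via normal approximation + finite population correction.

    z = 1.96 corresponds to a 95% confidence level; p is the assumed
    response distribution (0.5 is the most conservative choice).
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                # finite population correction
    return math.ceil(n)

print(sample_size(57_200))  # -> 382, matching the calculator
```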

3) Before you start surveying, there’s one more important (and often overlooked) step: pulling a random sample from your target population for your survey pool.  To do this, you’ll need to estimate your survey’s response rate, and the best way to do that is to see what’s been achieved in similar studies.  Relevant to our pediatrician example, a quick PubMed search (search terms = email + pediatricians + survey) identified the following:

  • McMahon SR, et al. Comparison of e-mail, fax, and postal surveys of pediatricians. Pediatrics 2003;111:e299-303 (abstract).

This study of pediatricians in Georgia reported a 26% response rate to an email survey (after two invitations).  So if I’m expecting a 26% response rate (assuming I’m doing a web-based survey of pediatricians) and my recommended sample size is 382, then I will need to randomly select 1,469 U.S. pediatricians from the AMA Masterfile (solving 0.26x = 382 gives x ≈ 1,469).  A 26% response rate from 1,469 U.S. pediatricians randomly selected from the AMA Masterfile will meet my sample size requirement of 382.
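The same adjustment in code, with the assumed response rate doing all the work:

```python
import math

target_n = 382        # recommended sample size from step 2
response_rate = 0.26  # assumption, borrowed from McMahon et al.

invitations = target_n / response_rate  # 0.26x = 382  ->  x = 382 / 0.26
print(round(invitations))               # -> 1469, as above
# math.ceil(invitations) = 1470 if you want to guarantee at least 382 responses
```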

You need to pull a random sample to reduce concerns such as self-selection bias (i.e., respondents’ decision to participate in your survey may be correlated with traits that affect the study, making the participants a non-representative sample).  There are a number of ways to pull a random sample, as well as a number of factors that dictate which method to use (click here for the Wikipedia summary).  In the following paragraph, I describe a method for pulling a “simple random sample”.

You can identify a random sample using MS Excel.  Start with an Excel spreadsheet containing everyone in your target population (continuing with our example, that would be 57,200 U.S. pediatricians).  Create a new column (call it “random sample”) and type this formula in the first cell: =RAND().  This returns a random number between 0 and 1.  Copy and paste this formula down the entire column so that every row gets a random number.  Sort the entire worksheet based on this column.  Select however many rows you need for your random sample (in our case, the first 1,469).  This is your survey pool.  Of note, after the sort, the random number in each row will recalculate (making it look like the rows were never sorted).  Ignore this: the rows were sorted first (ascending or descending) and the random values recalculated afterward.  This column will recalculate every time the worksheet recalculates.
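If your target population list lives outside Excel, the same simple random sample is a few lines of pandas (the file name and the 1,469 draw are from the example above; the column layout is hypothetical):

```python
import pandas as pd

# Hypothetical file: one row per U.S. pediatrician in the AMA Masterfile extract
population = pd.read_csv("pediatricians.csv")

# Draw a simple random sample of 1,469; random_state makes the draw reproducible
survey_pool = population.sample(n=1469, random_state=42)
survey_pool.to_csv("survey_pool.csv", index=False)
```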


Filed under Needs Assessment, Sample size