P.Mean: Debating the validity of snowball sampling (created 2012-10-01).
Someone on a discussion forum for IRB members criticized snowball sampling for a range of reasons, but (most interestingly, from my perspective) on the grounds that it is bad research. He asked, "Why would anybody want to use snowball sampling? As non-probability sampling the results can't be generalized to a known universe." That's an interesting perspective, but one I disagree with. Here are my thoughts on the issue.
First, I have to note that the term "non-probability sampling" is a bit unclear. There are random samples, where everyone in the population has a known (and in the simplest case equal) probability of being in the sample, and there are non-random samples where the probabilities are unknown or unequal, usually because large segments of the population have a zero probability of selection.
Almost all research uses non-random samples. The very process of seeking informed consent makes it a non-random sample. Even without informed consent, there is a certain pragmatism that is needed. Thus, much of what we know about heart disease comes from a convenience sample of patients at places like the Mayo Clinic. Sure, they randomize as to which patient gets the real drug and which gets the placebo, but the fact that all the patients are from the Mayo Clinic prevents us from generalizing to a known universe, unless you severely restrict that known universe.
Actually, it is possible to generalize from a non-random sample, but you have to make some untestable assumptions about your data. It's important not to sneer here. We make untestable assumptions all the time. Carbon 14 dating, for example, makes the untestable assumption that radioactive decay occurred in the past at the same rate that it occurs today. You can quibble about this perhaps, but the general point is still valid. All scientific endeavors require a starting point of commonly accepted but largely untestable assumptions.
Whether an untestable assumption is reasonable is worth debating, and that debate can help you decide whether a non-random sample is worth collecting.
The other point worth noting is that in many settings, a sample that is thought of as a random sample actually has problems with the sampling frame that make it only an approximation to a random sample. Just as we haven't figured out yet where Jimmy Hoffa is buried, we haven't the means to ensure that everyone in our probability sample has an equal chance of being selected. Some people in our population are pretty good at hiding from us, and they often have a strong incentive for wanting to hide (e.g., illegal immigrants).
All of this is a roundabout way of saying the fairly non-controversial statement that all samples have flaws. The question then becomes: does a snowball sample have so many flaws that the benefits of the research no longer outweigh the risks? That's a pretty hard call to make. In many settings where snowball sampling is used, other methods of sampling would be even worse. Have you ever thought about trying to get a random sample of heroin addicts? Sure, you could outline some approaches that might give you an approximation to a random sample, but your approach would likely have so many flaws that a snowball sample might be a better representation of the population.
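To make the mechanics concrete, here is a minimal sketch of how a snowball sample grows wave by wave from a few seed respondents. Everything in it is my own illustration, not anything from the original discussion: the toy contact network, the function name snowball_sample, and the parameter choices are all made up for demonstration.

```python
import random

def snowball_sample(contacts, seeds, waves=2, referrals=3, rng=None):
    """Collect a snowball sample: start from seed respondents and
    follow up to `referrals` referrals per person for `waves` waves."""
    rng = rng or random.Random(0)
    sampled = set(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            # Each respondent names a few acquaintances from the hidden network.
            named = rng.sample(contacts[person],
                               min(referrals, len(contacts[person])))
            for ref in named:
                if ref not in sampled:
                    sampled.add(ref)
                    next_frontier.append(ref)
        frontier = next_frontier
    return sampled

# Toy hidden population: 100 people, each knowing five others at random.
pop_rng = random.Random(42)
people = range(100)
contacts = {p: pop_rng.sample([q for q in people if q != p], 5) for p in people}

sample = snowball_sample(contacts, seeds=[0, 1], waves=3, referrals=2,
                         rng=random.Random(7))
print(len(sample), "people sampled")
```

The sketch also shows where the untestable assumptions live: who ends up in the sample depends entirely on the choice of seeds and on the (unobserved) structure of the contact network, which is exactly why selection probabilities can't be computed for a snowball sample.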
It reminds me of the old joke about the researcher who tested the IQs of prisoners and concluded that criminals have less intelligence on average than honest people. Someone noticed the problem with this sample and drew a different conclusion: Criminals who get caught are dumber than criminals who don't get caught. Anyone here with a good approach to getting a random sample of uncaught criminals? If so, maybe you can start a second career in the police force.
It's also worth noting here that generalization to a known universe is only one of the possible reasons to conduct research. Generation of new hypotheses is another common goal. Beyond that, we often don't understand all the possible reasons that people behave the way they do. A listing of these reasons, even when it is not possible to quantify the probabilities associated with each reason, is still an appropriate research goal. Many pilot studies, intended to assess the feasibility of a large-scale study, are better run on a non-random sample.
Remember that research is generalizable knowledge, which is not the same as knowledge that can be generalized to a specific population.
Don't get me wrong. As a statistician, I love random samples and I encourage people to use them whenever they can. And I realize that many of the untestable assumptions that you have to make for a snowball sample are difficult to accept. Even so, I would not encourage a blanket condemnation of this approach. Take everything on a case-by-case basis.
This page was written by Steve Simon and is licensed under the Creative Commons Attribution 3.0 United States License. Need more information? I have a page with general help resources. You can also browse for pages similar to this one at Observational Studies.