P.Mean: Meta-analysis with non-comparable procedures (created 2011-10-31).


Dear Professor Mean, In publications on meta-analysis, vast numbers of papers must often be culled from the analyzable dataset because they use non-comparable procedures. The resulting smaller sample sizes can reduce power, which then limits the ability to detect significance. Isn't this a serious problem?

Yes, it's a serious problem. I've always described meta-analysis as a multi-center trial where each center uses a different protocol and where some centers don't share their results. Given that haphazard state of affairs, it's a wonder that anything useful can come out of the process. But somehow, meta-analysis does end up producing useful information.

The first question that needs to be raised is "why do so many papers use non-comparable procedures?" This is an example of the NIH (Not Invented Here) syndrome. Researchers, in their arrogance, think that they know better than others how to conduct research, so they develop their own home-grown questionnaires, they put their own spin on what the most important outcome variable is, they use their own time frame for how long the treatment should last, and so on. I mention this as one of the seven deadly sins of researchers: pride.

The second question that needs to be raised is "what should meta-analysts do about this?" Well, you have the choice of the frying pan or the fire. If you include non-comparable procedures, you are mixing apples and oranges. Now, I have never had problems mixing apples and oranges, but in an extreme case, you'd be mixing apples and onions. That leads to a meaningless summary measure. But even combining apples with oranges has an issue. Combining dissimilar studies induces heterogeneity, and a random-effects model copes with that heterogeneity by adding a between-study variance component to every study's weight, which widens the confidence interval of the pooled estimate.
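To make that concrete, here is a minimal sketch in Python (using only numpy) of a random-effects meta-analysis, with the between-study variance estimated by the DerSimonian-Laird method. All of the study data are simulated for illustration; none of the numbers come from a real meta-analysis.

import numpy as np

rng = np.random.default_rng(42)

def pooled_ci(effects, variances, tau2=0.0):
    """Inverse-variance pooled estimate and 95% CI.
    With tau2 = 0 this is the fixed-effect model; a positive tau2
    (between-study variance) gives the random-effects model."""
    w = 1.0 / (variances + tau2)
    est = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

def dersimonian_laird_tau2(effects, variances):
    """Method-of-moments estimate of the between-study variance."""
    w = 1.0 / variances
    est = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - est) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(effects) - 1)) / c)

# Ten hypothetical studies of a common effect (0.3), but each run
# under its own protocol, so the true effects are scattered (sd 0.2).
k, true_effect, tau = 10, 0.3, 0.2
se_i = rng.uniform(0.1, 0.3, size=k)              # within-study SEs
theta_i = rng.normal(true_effect, tau, size=k)    # study-specific truths
y_i = rng.normal(theta_i, se_i)                   # observed effects

tau2 = dersimonian_laird_tau2(y_i, se_i ** 2)
print("fixed-effect:  ", pooled_ci(y_i, se_i ** 2))
print("random-effects:", pooled_ci(y_i, se_i ** 2, tau2))

Run it a few times with different seeds: whenever the estimated tau2 is positive, the random-effects interval is wider than the fixed-effect interval, because the between-study variance gets added to every study's within-study variance.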

Now that's the frying pan. The fire is reducing your sample size by including only those studies that are homogeneous. This also increases variation, as you recognize, because you have reduced the sample size so much. But hey, wait a minute. Each individual study must have had an adequate sample size, because you don't see studies with small sample sizes getting published. Excuse me for a second. [[Snort, guffaw]] Okay, I'm back. It is probably the case that some or most of these studies had too small a sample size to begin with, and the cumulative information in these studies is still too small.
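You can see the arithmetic with a back-of-the-envelope power calculation. Here is a minimal sketch, assuming a two-sample comparison of means with a normal approximation; the effect size, standard deviation, and sample sizes are hypothetical numbers chosen for illustration.

from scipy.stats import norm

def two_sample_power(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sample z-test for a mean
    difference delta, common sd sigma, and n per group."""
    se = sigma * (2.0 / n_per_group) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - abs(delta) / se)

# A single underpowered trial: 20 per group for a modest effect.
print(two_sample_power(delta=0.3, sigma=1.0, n_per_group=20))   # about 0.16

# Pooling the three homogeneous trials that survive the cull
# (60 per group in total) still falls short of the usual 80%.
print(two_sample_power(delta=0.3, sigma=1.0, n_per_group=60))   # about 0.38

Even after pooling, the cumulative sample falls well short of the conventional 80% power. That is exactly the fire you jump into when the cull leaves only a handful of small, homogeneous studies.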

So which is worse, the frying pan or the fire? I'd jump into the fire here, but that's just me. Either way, your meta-analysis has problems and produces confidence intervals that are much too wide. That's the price that you pay when you don't make an effort to coordinate your research efforts more carefully. Do something about it, please. What? They don't listen to your advice either. Darn it all. The world would be a nicer place if we were dictators who could bend all of the world's researchers to our will. Until that happens, we just have to suffer with meta-analyses that are unhelpful. Well, they are helpful in that they point out how haphazardly the research is being done, but they are unhelpful in that they cannot provide good guidance on clinical practice.

This page was written by Steve Simon and is licensed under the Creative Commons Attribution 3.0 United States License. Need more information? I have a page with general help resources. You can also browse for pages similar to this one at Systematic Overviews.