Someone wrote to the Evidence Based Health email discussion group about a hypothetical situation: a patient has an estimated risk of disease based on a study that showed the degree to which factors a, b, and c influence disease status. Suppose that in a different study, a factor d was shown to double the risk of disease. What could you then say about the probability of disease for a patient who has a, b, c, and d? You would think that someone with a, b, c, and d should have a greater risk of disease than someone with just a, b, and c.
The answer, unfortunately, is that nothing is predictable here: it is possible for someone with a, b, c, and d to have a lower risk, even though a study looking at d alone showed a doubling of risk.
You have to watch out for Simpson's Paradox. A good example that I saw published involved (if my memory serves me correctly) the mortality rates of two countries: Costa Rica and Sweden. There was a strong country effect, in that the number of deaths per 100,000 was lower in Costa Rica. This is surprising, since the health care available in Costa Rica is nowhere near as good as that in Sweden.
There is also a strong effect due to age, with older people dying at higher rates than younger people. This was not surprising.
When you looked at mortality for any specific age group, the country effect was reversed: Sweden had lower mortality among 20-25 year olds, among 80-85 year olds, and at every age in between. What happened was that Sweden has a much older population, because the birth rate, and therefore the number of recently born children, was much lower in Sweden. So although Sweden was better at any particular age, its overall mortality was higher because the Swedish population is weighted toward older people, who die at higher rates everywhere.
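The arithmetic behind this reversal is easy to check. Here is a minimal sketch in Python, using made-up numbers chosen only to illustrate the effect (they are not the actual Costa Rica or Sweden statistics): each country's overall mortality is the population-weighted average of its age-specific rates, so a country that is better in every age group can still be worse overall if its population is concentrated in the high-mortality groups.

```python
# Hypothetical figures for illustration only: (deaths per 100,000, population)
# for two simplified age groups in each country.
sweden = {"young": (50, 4_000_000), "old": (4000, 6_000_000)}
costa_rica = {"young": (100, 8_000_000), "old": (5000, 2_000_000)}

def overall_rate(groups):
    """Population-weighted average of the age-specific mortality rates."""
    total_pop = sum(pop for _, pop in groups.values())
    weighted_deaths = sum(rate * pop for rate, pop in groups.values())
    return weighted_deaths / total_pop

# Sweden has the lower rate within EVERY age group...
assert sweden["young"][0] < costa_rica["young"][0]
assert sweden["old"][0] < costa_rica["old"][0]

# ...yet its overall rate is higher, because 60% of its (hypothetical)
# population sits in the high-mortality "old" group, versus 20% for
# Costa Rica. This is Simpson's paradox.
print(overall_rate(sweden))      # 2420.0
print(overall_rate(costa_rica))  # 1080.0
```

The same mechanism applies to the risk-factor question above: if patients with factor d tend to come from a lower-risk stratum of a, b, and c, adding d can be associated with lower overall risk even though d doubles the risk within each stratum.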
There are lots of references to Simpson's paradox on the web. Wikipedia provides a good starting point:
That's just one explanation as to why you can't combine the estimates of the two studies. You also need to make sure that the studies were conducted under exactly the same conditions. That never happens, of course, which is why meta-analysis is so hard to do. A good list of the many ways in which one randomized trial might differ from another appears in
Horwitz RI. Complexity and contradiction in clinical trial research. Am J Med 1987; 82(3): 498-510. [Medline]