P.Mean: What is the Lan-DeMets approach to interim analysis?  (created 2008-11-21).

I read an article that talked about a trial that ended early. They describe the approach as "O'Brien-Fleming stopping boundaries determined by means of the Lan-DeMets approach." Does anyone know anything about this statistical technique, and can you tell me whether it is a valid approach?

It might help to review the O'Brien-Fleming approach first and contrast it with a competing approach by Pocock. For simplicity, I'm assuming that there are two interim analyses, conducted when 1/3 and 2/3 of the subjects have completed the study, but these approaches work for any number of interim analyses.

You're already aware that if you take multiple looks at the data, there is an increase in the risk of a Type I error. It's not much different from examining multiple outcome measures or multiple subgroups, but you shouldn't use a Bonferroni correction here, because the test statistics examined 1/3 of the way through, 2/3 of the way through, and at the end are so highly correlated. Bonferroni is extremely inefficient when there is a high degree of correlation.
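Here's a minimal simulation sketch of that inflation (my own illustration, not something from the original question or article). It assumes a one-sample z-test on normal data with known variance, a true effect of zero, and unadjusted two-sided 0.05 looks after 1/3, 2/3, and all of the subjects.

```python
# Monte Carlo sketch: repeated, unadjusted looks inflate the Type I error rate.
# Hypothetical setup: one-sample z-test, known sigma = 1, true mean 0, so every
# "significant" result is a false positive.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2008)
n_trials, n_subjects = 20_000, 300
looks = [100, 200, 300]            # 1/3, 2/3, and full enrollment
z_crit = norm.ppf(1 - 0.05 / 2)    # unadjusted two-sided 0.05 cutoff

false_positives = 0
for _ in range(n_trials):
    data = rng.standard_normal(n_subjects)   # null hypothesis is true
    for n in looks:
        z = data[:n].mean() * np.sqrt(n)     # z statistic on the data seen so far
        if abs(z) >= z_crit:
            false_positives += 1             # stop the trial and declare "success"
            break

print(f"Type I error with three unadjusted looks: {false_positives / n_trials:.3f}")
# Typically prints a value near 0.11, well above the nominal 0.05.
```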

Stuart Pocock came up with an approach that effectively compares the p-value at each look against the same adjusted alpha level. With two interim analyses and a final analysis (three looks in all), the adjusted alpha level would be 0.022 rather than 0.05. By way of comparison, Bonferroni would use an adjusted alpha level of 0.05/3, or about 0.017.

An alternative approach, proposed by Peter O'Brien and Thomas Fleming, uses a very small alpha level at the first look and loosens up at later interim evaluations. In the same study, O'Brien-Fleming would effectively use adjusted alpha levels of 0.0005 at the first interim look, 0.014 at the second interim look, and 0.045 at the end of the study.
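To see that those particular numbers hold the overall error near 0.05, here is a hedged extension of the simulation above (again my own sketch, not anything from the article): the same simulated trials, now screened against the unadjusted, Pocock, and O'Brien-Fleming cutoffs quoted here.

```python
# Compare overall Type I error for three boundary schemes at looks after
# 1/3, 2/3, and all of 300 subjects (same hypothetical z-test setup as above).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2008)
n_trials, looks = 20_000, [100, 200, 300]

schemes = {
    "Unadjusted":      [0.05, 0.05, 0.05],
    "Pocock":          [0.022, 0.022, 0.022],
    "O'Brien-Fleming": [0.0005, 0.014, 0.045],
}
z_cuts = {name: [norm.ppf(1 - a / 2) for a in alphas]
          for name, alphas in schemes.items()}

rejections = dict.fromkeys(schemes, 0)
for _ in range(n_trials):
    data = rng.standard_normal(looks[-1])                    # null is true
    z = [abs(data[:n].mean()) * np.sqrt(n) for n in looks]   # |z| at each look
    for name, cuts in z_cuts.items():
        if any(zk >= ck for zk, ck in zip(z, cuts)):
            rejections[name] += 1

for name, count in rejections.items():
    print(f"{name:16s} overall Type I error ~ {count / n_trials:.3f}")
# The Pocock and O'Brien-Fleming rows should both land close to 0.05.
```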

There are some messy formulas involving the square root of the fraction of patients enrolled at each interim look, which I would just as soon not comment on, but a standard textbook version is sketched below for the curious.
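Here is the standard form of those boundaries (following textbook presentations such as Jennison and Turnbull, not formulas quoted from the article). With K equally spaced looks, the patient fraction at look k is t_k = k/K, and Z_k is the usual standardized test statistic computed from the data available at look k.

```latex
% Textbook form of the boundaries (not quoted from the article in question).
\[
  t_k = \frac{k}{K}, \qquad
  \text{O'Brien-Fleming: reject at look } k \text{ if }
  |Z_k| \ge \frac{C_B(K,\alpha)}{\sqrt{t_k}},
\]
\[
  \text{Pocock: reject at look } k \text{ if } |Z_k| \ge C_P(K,\alpha).
\]
% The constants come from tables or numerical integration.  For K = 3 and
% alpha = 0.05, C_B is roughly 2.00 and C_P roughly 2.29, which reproduce the
% nominal levels 0.0005, 0.014, 0.045 and 0.022 mentioned above.
```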

Pocock and O'Brien-Fleming are the old-timers in the interim analysis world, and they represent two mathematical extremes. In time, statisticians looking for further prestige and glory generalized these approaches in two ways. First, they examined families of interim analysis approaches that offered compromises between the Pocock and O'Brien-Fleming extremes. Second, they examined what would happen if the interim analyses were not evenly spaced with respect to the number of subjects completing the study.

Kuang-Kuo Gordon Lan and David DeMets came up with a very simple generalization that achieved both of these objectives using an alpha spending function. You can use a linear alpha spending function that behaves much like Pocock or a cubic alpha spending function that behaves much like O'Brien-Fleming. Powers somewhere between 1 and 3 offer compromises between the two approaches.
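As a rough sketch of the spending idea (my own illustration; the exact functions Lan and DeMets proposed to mimic Pocock and O'Brien-Fleming have slightly different forms, and the power family shown here is a common stand-in for the linear and cubic functions just mentioned), the code below spends alpha according to alpha(t) = alpha * t^rho at whatever patient fractions the looks actually occur.

```python
# Minimal sketch of the alpha spending idea (an illustration, not code from this page):
# spend Type I error according to alpha(t) = alpha * t**rho, where t is the fraction
# of the planned information (roughly, patients) observed at each look.
# rho = 1 spends alpha roughly like Pocock; rho = 3 roughly like O'Brien-Fleming.

def alpha_spent(t, alpha=0.05, rho=1.0):
    """Cumulative Type I error 'spent' by information fraction t (0 < t <= 1)."""
    return alpha * t ** rho

def incremental_spend(look_fractions, alpha=0.05, rho=1.0):
    """Alpha newly spent at each look; the looks need not be evenly spaced."""
    spent, increments = 0.0, []
    for t in look_fractions:
        cum = alpha_spent(t, alpha, rho)
        increments.append(cum - spent)
        spent = cum
    return increments

# Unevenly spaced looks -- handling these gracefully is the point of spending functions.
looks = [0.40, 0.75, 1.00]
for rho, label in [(1.0, "Pocock-like (linear)"), (3.0, "O'Brien-Fleming-like (cubic)")]:
    print(label, [round(a, 4) for a in incremental_spend(looks, rho=rho)])

# Turning each increment into an actual critical value still requires the joint
# (correlated) distribution of the interim test statistics -- a job best left to
# group-sequential software or numerical integration.
```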

Now that's way too much detail, and I should have just said that the Lan-DeMets alpha spending function provides a common and well-accepted approach for controlling the Type I error rate when one or more interim analyses are conducted. But I'm hoping that some of you will be interested in the historical details.

Disclaimer: I'm not an expert on interim analyses and I relied heavily on a classic textbook in this area, Group Sequential Methods with Applications to Clinical Trials by Jennison and Turnbull.

By the way, my old webpage on this topic (under the control of my ex-employer and temporarily unavailable) was cited on a statistics blog (Realizations in Biostatistics, by Random John) in September 2007.

This work is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon and was last modified on 2010-04-01.