P.Mean >> Category >> Covariate adjustment (created 2007-06-22).

Covariate adjustment is the use of statistical methods (most notably analysis of covariance or ANCOVA) to correct for an imbalance in an important prognostic variable between a treatment/exposure group and a control group. Articles are arranged by date with the most recent entries at the top. You can find outside resources at the bottom of this page. Also see Linear regression, Logistic regression, and Modeling Issues.

2010

13. P.Mean: What is residual confounding (created 2010-01-06). Residual confounding is a frequent explanation for unusual research findings. Before I define the term and show an example, I need to address a more basic issue. The term "confounding" is used frequently but often without careful consideration of the true definition of the term. I tend to shy away from this term and typically use "covariate imbalance" instead.

Outside resources:

David Batty. Are sex and death related?. BMJ. 1998;316(7145):1671a. Excerpt: "Davey Smith et al report a significant inverse relation between frequency of orgasm and mortality due to all causes and coronary heart disease in men; however, a failure to adjust for the energy expended during sexual activity may be a weakness of their work." [Accessed January 6, 2010]. Available at: http://www.bmj.com/cgi/content/full/316/7145/1671/a.

David M. Kent, Thomas A. Trikalinos, Michael D. Hill. Are Unadjusted Analyses of Clinical Trials Inappropriately Biased Toward the Null?. Stroke. 2009;40(3):672-673. Excerpt: "One of the delights of clinical practice has proven to be a major nuisance for clinical research: patients are nonidentical. Indeed, patients have multiple characteristics that influence the likelihood of the outcome of a disease, which can make it difficult in the extreme to accurately discern the effects of therapy from casual clinical experience or even careful observational studies. Randomization, a process by which patients are assigned to a treatment arm by chance rather than by choice, was a brilliant innovation that has made possible causal inferences regarding a treatment’s effect. Although randomization is not perfect in practice, it is remarkably effective at ensuring the comparability between treatment groups, so much so that it has almost tricked us into thinking that patient differences in outcome risks have been rendered irrelevant in the context of clinical trials." [Accessed December 4, 2010]. Available at: http://stroke.ahajournals.org.

Interesting article: Assessing non-consent bias with parallel randomized and nonrandomized clinical trials. S. M. Marcus. J Clin Epidemiol 1997: 50(7); 823-8. [Medline]

Munya Dimairo. Baseline imbalances; an issue in longitudinal clinical trials. Excerpt: "My motivation here came from my little experience with clinical trial data, interests in longitudinal studies and a recent post on medstats google group by Chris Hunt (MedStats). thanks to Chris for flagging that question :-) This article is targeted to statisticians and clinical trialists, there is no hardcore stuff which i always avoided." [Accessed November 14, 2010]. Available at: http://mdimairo.blogspot.com/.

Interesting article: Baseline imbalance in randomised controlled trials. C Roberts, DJ Torgerson. British Medical Journal 1999: 319(7203); 185. [Medline] [Full text] [PDF]

Michal Abrahamowicz, Roxane du Berger, Daniel Krewski, et al. Bias due to Aggregation of Individual Covariates in the Cox Regression Model. Am. J. Epidemiol. 2004;160(7):696-706. Abstract: "The impact of covariate aggregation, well studied in relation to linear regression, is less clear in the Cox model. In this paper, the authors use real-life epidemiologic data to illustrate how aggregating individual covariate values may lead to important underestimation of the exposure effect. The issue is then systematically assessed through simulations, with six alternative covariate representations. It is shown that aggregation of important predictors results in a systematic bias toward the null in the Cox model estimate of the exposure effect, even if exposure and predictors are not correlated. The underestimation bias increases with increasing strength of the covariate effect and decreasing censoring and, for a strong predictor and moderate censoring, may exceed 20%, with less than 80% coverage of the 95% confidence interval. However, covariate aggregation always induces smaller bias than covariate omission does, even if the two phenomena are shown to be related. The impact of covariate aggregation, but not omission, is independent of the covariate-exposure correlation. Simulations involving time-dependent aggregates demonstrate that bias results from failure of the baseline covariate mean to account for nonrandom changes over time in the risk sets and suggest a simple approach that may reduce the bias if individual data are available but have to be aggregated." [Accessed January 19, 2009]. Available at: http://aje.oxfordjournals.org/cgi/content/abstract/160/7/696.

Interesting article: Causal Knowledge as a Prerequisite for Confounding Evaluation: An Application to Birth Defects Epidemiology. Miguel A. Hernán, Sonia Hernández-Díaz, Martha M. Werler and Allen A. Mitchell. Am J Epidemiol 2002: 155(2); 176-184.

Interesting article: Characteristics of good causation studies. S. Daya. Semin Reprod Med 2003: 21(1); 73-84. [Medline]

Interesting article: Choosing covariates in the analysis of clinical trials. M. L. Beach, P. Meier. Controlled Clinical Trials 1989: 10(4 Suppl); 161S-175S.

Interesting article: Clinical trials in acute myocardial infarction: Should we adjust for baseline characteristics? Ewout W. Steyerberg, Patrick M.M. Bossuyt, Kerry L. Lee. American Heart Journal 2000: 139(5); 745-751.

Interesting article: A comparison of direct adjustment and regression adjustment of epidemiologic measures. T. C. Wilcosky, L. E. Chambless. J Chronic Dis 1985: 38(10); 849-56.

David Kent, Alawi Alsheikh-Ali, Rodney Hayward. Competing risk and heterogeneity of treatment effect in clinical trials. Trials. 2008;9(1):30. Abstract: "It has been demonstrated that patients enrolled in clinical trials frequently have a large degree of variation in their baseline risk for the outcome of interest. Thus, some have suggested that clinical trial results should routinely be stratified by outcome risk using risk models, since the summary results may otherwise be misleading. However, variation in competing risk is another dimension of risk heterogeneity that may also underlie treatment effect heterogeneity. Understanding the effects of competing risk heterogeneity may be especially important for pragmatic comparative effectiveness trials, which seek to include traditionally excluded patients, such as the elderly or complex patients with multiple comorbidities. Indeed, the observed effect of an intervention is dependent on the ratio of outcome risk to competing risk, and these risks - which may or may not be correlated - may vary considerably in patients enrolled in a trial. Further, the effects of competing risk on treatment effect heterogeneity can be amplified by even a small degree of treatment related harm. Stratification of trial results along both the competing and the outcome risk dimensions may be necessary if pragmatic comparative effectiveness trials are to provide the clinically useful information their advocates intend." [Accessed December 4, 2010]. Available at: http://www.trialsjournal.com/content/9/1/30.

Interesting article: Conditions for confounding of the risk ratio and of the odds ratio. J. F. Boivin, S. Wacholder. American Journal Epidemiology 1985: 121(1); 152-8. [Medline]

Interesting article: Covariate imbalance and conditional size: dependence on model-based adjustments. S. E. Maxwell. Stat Med 1993: 12(2); 101-9. [Medline]

Interesting article: Covariate imbalance and random allocation in clinical trials. S. J. Senn. Stat Med 1989: 8(4); 467-75. [Medline]

Illustrative example: Dose-specific Meta-Analysis and Sensitivity Analysis of the Relation between Alcohol Consumption and Lung Cancer Risk. Jeffrey E. Korte, Paul Brennan, S. Jane Henley, Paolo Boffetta. Am. J of Epidemiology 2002: 155(6); 496-506.

M Hassan. Effect of male age on fertility: evidence for the decline in male fertility with increasing age. Fertility and Sterility. 2003;79:1520-1527. Abstract: "OBJECTIVE: To evaluate the effect of men's age on time to pregnancy (TTP) using age at the onset of pregnancy attempts, adjusting for the confounding effects of women's age, coital frequency, and life-style characteristics. DESIGN: Observational study. SETTINGS: Teaching hospital in Hull, United Kingdom. PATIENT(S): Two thousand one hundred twelve consecutive pregnant women. INTERVENTION(S): A questionnaire inquiring about TTP, contraceptive use, pregnancy planning, previous subfertility, previous pregnancies, age, and individual life-style characteristics of both partners. MAIN OUTCOME MEASURE(S): Time to pregnancy, conception rates, and relative risk of subfecundity for men and women's age groups. RESULTS: As with women's age, increasing men's age was associated with significantly rising TTP and declining conception rates. A fivefold increase in TTP occurred with men's age >45 years. Relative to men <25 years old, those >45 years were 4.6-fold and 12.5-fold more likely to have had TTP of >1 or >2 years. Restricting the analysis to partners of young women revealed similar effects of increasing men's age. Women >35 years were 2.2-fold more likely to be subfertile than women <25 years. The results were comparable, whether age at conception or at the onset of pregnancy attempts was analyzed, and they remained unchanged after adjustment for the confounding factors. CONCLUSION(S): Evidence for and quantification of the decline in men's fertility with increasing age is provided." [Accessed January 6, 2010]. Available at: http://www.fertstert.org/article/S0015-0282(03)00366-2/abstract.

Interesting article: How do risk factors work together? Mediators, moderators, and independent, overlapping, and proxy risk factors. H. C. Kraemer, E. Stice, A. Kazdin, D. Offord, D. Kupfer. Am J Psychiatry 2001: 158(6); 848-56.

Interesting article: Identifiability, exchangeability, and epidemiological confounding. S. Greenland, J. M. Robins. Int J Epidemiol 1986: 15(3); 413-9.

Interesting article: The impact of covariate imbalance on the size of the logrank test in randomized clinical trials. N. Kinukawa, T. Nakamura, K. Akazawa, Y. Nose. Stat Med 2000: 19(15); 1955-67. [Medline]

M M Joffe, P R Rosenbaum. Invited commentary: propensity scores. Am. J. Epidemiol. 1999;150(4):327-333. Abstract: "The propensity score is the conditional probability of exposure to a treatment given observed covariates. In a cohort study, matching or stratifying treated and control subjects on a single variable, the propensity score, tends to balance all of the observed covariates; however, unlike random assignment of treatments, the propensity score may not also balance unobserved covariates. The authors review the uses and limitations of propensity scores and provide a brief outline of associated statistical theory. They also present a new result of using propensity scores in case-cohort studies." [Accessed October 11, 2010]. Available at: http://stat.wharton.upenn.edu/~rosenbap/AJEpropen.pdf.

Interesting article: Look before You Leap: Stratify before You Standardize. Bernard C.K. Choi. American Journal of Epidemiology 1999: 149(12); 1087-1095.

Illustrative example: Maternal smoking and Down syndrome: the confounding effect of maternal age. C. L. Chen, T. J. Gilbert, J. R. Daling. Am J Epidemiol 1999: 149(5); 442-6.

Interesting article: Mediators and moderators of treatment effects in randomized clinical trials. H. C. Kraemer, G. T. Wilson, C. G. Fairburn, W. S. Agras. Arch Gen Psychiatry 2002: 59(10); 877-83.

Illustrative example: Patient volume, staffing, and workload in relation to risk-adjusted outcomes in a random stratified sample of UK neonatal intensive care units: a prospective evaluation. Tucker J, UK Neonatal Staffing Study Group. Lancet 2002: 359; 99-107. [Medline]

Ben Van Calster, Lil Valentin, Caroline Van Holsbeke, et al. Polytomous diagnosis of ovarian tumors as benign, borderline, primary invasive or metastatic: development and validation of standard and kernel-based risk prediction models. BMC Medical Research Methodology. 2010;10(1):96. Abstract: "BACKGROUND: Hitherto, risk prediction models for preoperative ultrasound-based diagnosis of ovarian tumors were dichotomous (benign versus malignant). We develop and validate polytomous models (models that predict more than two events) to diagnose ovarian tumors as benign, borderline, primary invasive or metastatic invasive. The main focus is on how different types of models perform and compare. METHODS: A multi-center dataset containing 1066 women was used for model development and internal validation, whilst another multi-center dataset of 1938 women was used for temporal and external validation. Models were based on standard logistic regression and on penalized kernel-based algorithms (least squares support vector machines and kernel logistic regression). We used true polytomous models as well as combinations of dichotomous models based on the 'pairwise coupling' technique to produce polytomous risk estimates. Careful variable selection was performed, based largely on cross-validated c-index estimates. Model performance was assessed with the dichotomous c-index (i.e. the area under the ROC curve) and a polytomous extension, and with calibration graphs. RESULTS: For all models, between 9 and 11 predictors were selected. Internal validation was successful with polytomous c-indexes between 0.64 and 0.69. For the best model dichotomous c-indexes were between 0.73 (primary invasive vs metastatic) and 0.96 (borderline vs metastatic). On temporal and external validation, overall discrimination performance was good with polytomous c-indexes between 0.57 and 0.64. However, discrimination between primary and metastatic invasive tumors decreased to near random levels. Standard logistic regression performed well in comparison with advanced algorithms, and combining dichotomous models performed well in comparison with true polytomous models. The best model was a combination of dichotomous logistic regression models. This model is available online. CONCLUSIONS: We have developed models that successfully discriminate between benign, borderline, and invasive ovarian tumors. Methodologically, the combination of dichotomous models was an interesting approach to tackle the polytomous problem. Standard logistic regression models were not outperformed by regularized kernel-based alternatives, a finding to which the careful variable selection procedure will have contributed. The random discrimination between primary and metastatic invasive tumors on temporal/external validation demonstrated once more the necessity of validation studies." [Accessed October 25, 2010]. Available at: http://www.biomedcentral.com/1471-2288/10/96.

Interesting article: Presenting statistical uncertainty in trends and dose-response relations. S Greenland, KB Michels, JM Robins, C Poole, WC Willett. AJE 1999: 149(12); 1077-86.

R B D'Agostino. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Stat Med. 1998;17(19):2265-2281. Abstract: "In observational studies, investigators have no control over the treatment assignment. The treated and non-treated (that is, control) groups may have large differences on their observed covariates, and these differences can lead to biased estimates of treatment effects. Even traditional covariance analysis adjustments may be inadequate to eliminate this bias. The propensity score, defined as the conditional probability of being treated given the covariates, can be used to balance the covariates in the two groups, and therefore reduce this bias. In order to estimate the propensity score, one must model the distribution of the treatment indicator variable given the observed covariates. Once estimated the propensity score can be used to reduce bias through matching, stratification (subclassification), regression adjustment, or some combination of all three. In this tutorial we discuss the uses of propensity score methods for bias reduction, give references to the literature and illustrate the uses through applied examples." [Accessed October 11, 2010]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/9802183.

Ylian Liem, John Wong, MG Myriam Hunink, Frank de Charro, Wolfgang Winkelmayer. Propensity scores in the presence of effect modification: A case study using the comparison of mortality on hemodialysis versus peritoneal dialysis. Emerging Themes in Epidemiology. 2010;7(1):1. Abstract: "Purpose: To control for confounding bias from non-random treatment assignment in observational data, both traditional multivariable models and more recently propensity score approaches have been applied. Our aim was to compare a propensity score-stratified model with a traditional multivariable-adjusted model, specifically in estimating survival of hemodialysis (HD) versus peritoneal dialysis (PD) patients. METHODS: Using the Dutch End-Stage Renal Disease Registry, we constructed a propensity score, predicting PD assignment from age, gender, primary renal disease, center of dialysis, and year of first renal replacement therapy. We developed two Cox proportional hazards regression models to estimate survival on PD relative to HD, a propensity score-stratified model stratifying on the propensity score and a multivariable-adjusted model, and tested several interaction terms in both models. RESULTS: The propensity score performed well: it showed a reasonable fit, had a good c-statistic, calibrated well and balanced the covariates. The main-effects multivariable-adjusted model and the propensity score-stratified univariable Cox model resulted in similar relative mortality risk estimates of PD compared with HD (0.99 and 0.97, respectively) with fewer significant covariates in the propensity model. After introducing the missing interaction variables for effect modification in both models, the mortality risk estimates for both main effects and interactions remained comparable, but the propensity score model had nearly as many covariates because of the additional interaction variables. CONCLUSION: Although the propensity score performed well, it did not alter the treatment effect in the outcome model and lost its advantage of parsimony in the presence of effect modification." [Accessed May 18, 2010]. Available at: http://www.ete-online.com/content/7/1/1.

Interesting article: Properties of simple randomization in clinical trials. J. M. Lachin. Control Clin Trials 1988: 9(4); 312-26. [Medline]

Interesting article: Research Methods: Why Covariance? A Rationale for Using Analysis of Covariance Procedures in Randomized Studies. Matthew J. Taylor. Journal of Early Intervention 1993: 17(4); 455-466.

Harrell FE. The Role of Covariable Adjustment in the Analysis of Clinical Trials. Available at: biostat.mc.vanderbilt.edu/twiki/pub/Main/FHHandouts/covadj.pdf [Accessed January 19, 2009]. Presented at the Statistics of Multi-center Trials, Henry Stewart Conference Studies, Washington DC, September 14, 2001. Note that this is a PDF of some Powerpoint slides, which I usually do not like to include, but the reputation of Dr. Harrell and the quality of the bibliography more than compensate.

George Davey Smith, Stephen Frankel, John Yarnell. Sex and death: are they related? Findings from the Caerphilly cohort study. BMJ. 1997;315(7123):1641-1644. Abstract: "Objective: To examine the relation between frequency of orgasm and mortality. Study design: Cohort study with a 10 year follow up. Setting: The town of Caerphilly, South Wales, and five adjacent villages. Subjects: 918 men aged 45-59 at time of recruitment between 1979 and 1983. Main outcome measures: All deaths and deaths from coronary heart disease. Results: Mortality risk was 50% lower in the group with high orgasmic frequency than in the group with low orgasmic frequency, with evidence of a dose-response relation across the groups. Age adjusted odds ratio for all cause mortality was 2.0 for the group with low frequency of orgasm (95% confidence interval 1.1 to 3.5, test for trend P=0.02). With adjustment for risk factors this became 1.9 (1.0 to 3.4, test for trend P=0.04). Death from coronary heart disease and from other causes showed similar associations with frequency of orgasm, although the gradient was most marked for deaths from coronary heart disease. Analysed in terms of actual frequency of orgasm, the odds ratio for total mortality associated with an increase in 100 orgasms per year was 0.64 (0.44 to 0.95). Conclusion: Sexual activity seems to have a protective effect on men's health. Key messages: Sex and death are common variables in epidemiology, but the relation between them has been little studied. In this cohort study, mortality risk was 50% lower in men with high frequency of orgasm than in men with low frequency of orgasm; there was evidence of a dose-response relation across the groups. The question of causation is complex, as with all observational epidemiological findings; several explanations are possible, but the evidence for causation is as convincing here as in many areas where causation is assumed. These findings contrast with the view common to many cultures that the pleasure of sexual intercourse may be secured at the cost of vigour and wellbeing. If these findings are replicated, there are implications for health promotion programmes." [Accessed January 6, 2010]. Available at: http://www.bmj.com/cgi/content/abstract/315/7123/1641.

Illustrative example: Sex and death: are they related? Findings from the Caerphilly cohort study. GD Smith, S Frankel, J Yarnell. British Medical Journal 1997: 315(7123); 1641-1644. [Medline] [Abstract] [Full text]

Illustrative example: Sexual intercourse and risk of ischaemic stroke and coronary heart disease: the Caerphilly study. S. Ebrahim, M. May, Y. Ben Shlomo, P. McCarron, S. Frankel, J. Yarnell, G. Davey Smith. J Epidemiol Community Health 2002: 56(2); 99-102. [Medline]

Illustrative example: Socioeconomic status and health in blacks and whites: the problem of residual confounding and the resiliency of race. J. S. Kaufman, R. S. Cooper, D. L. McGee. Epidemiology 1997: 8(6); 621-8.

Interesting article: Statistical properties of randomization in clinical trials. J. M. Lachin. Control Clin Trials 1988: 9(4); 289-311. [Medline]

Interesting article: A summary statistic for measuring change from baseline. R. M. Donahue. J Biopharm Stat 1997: 7(2); 287-99. [Medline]

Interesting article: Suspended judgment. Significance tests of covariate imbalance in clinical trials. C. B. Begg. Control Clin Trials 1990: 11(4); 223-5. [Medline]

Interesting article: Testing for imbalance of covariates in controlled experiments. T. Permutt. Stat Med 1990: 9(12); 1455-62. [Medline]

Interesting article: The use of percentage change from baseline as an outcome in a controlled trial is statistically inefficient: a simulation study. A. J. Vickers. BMC Med Res Methodol 2001: 1(1); 6. [Medline] [Abstract] [Full text] [PDF]

Interesting article: What random assignment does and does not do. M. S. Krause, K. I. Howard. J Clin Psychol 2003: 59(7); 751-66. [Medline]

Creative Commons License All of the material above this paragraph is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon and was last modified on 2017-06-15. The material below this paragraph links to my old website, StATS. Although I wrote all of the material listed below, my ex-employer, Children's Mercy Hospital, has claimed copyright ownership of this material. The brief excerpts shown here are included under the fair use provisions of U.S. Copyright laws.

2006

12. Stats: Adjusting a variable for age and sex (October 26, 2006). Someone asked me how to adjust bone mineral density (BMD) for age and sex. I presume that BMD changes as children grow (or as adults age) and that BMD is different for men and women. If you did not adjust for age and sex, then any statistical comparison that you make between a treatment group and a control group could be biased by a difference in the sex ratio and/or the average age between the two groups.
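
One common way to do this adjustment is to regress BMD on age and sex and carry the residuals, re-centered at the overall mean, forward as an age- and sex-adjusted BMD (or simply keep age and sex as covariates in the comparison model). The sketch below illustrates the residual approach; the data and variable names are simulated stand-ins, not from any real study.

```python
# A hedged sketch of adjusting BMD for age and sex; all data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
age = rng.uniform(5, 18, n)                      # children of varying ages
sex = rng.choice(["F", "M"], n)
bmd = 0.5 + 0.03 * age + 0.05 * (sex == "M") + rng.normal(0, 0.05, n)
df = pd.DataFrame({"bmd": bmd, "age": age, "sex": sex})

# Regress BMD on age and sex; the residuals, re-centered at the overall mean,
# serve as an age- and sex-adjusted version of BMD.
fit = smf.ols("bmd ~ age + C(sex)", data=df).fit()
df["bmd_adjusted"] = fit.resid + df["bmd"].mean()
print(df[["bmd", "bmd_adjusted"]].describe())
```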

11. Stats: More on propensity score models (June 26, 2006). Several months ago, I set out to develop some good examples of how to use propensity scores to adjust for covariate imbalance in an observational study. I was consulting with someone recently about this very issue and she brought some additional references to my attention. I then dug a bit further and found some additional references as well.

10. Stats: A simple application of propensity scores (April 26, 2006). In many research studies, you do not have the opportunity to randomly assign an exposure variable. The influence of the exposure variable on the outcome variable can sometimes produce misleading results because there may be other covariates that are important predictors of the outcome and that are also imbalanced across the levels of exposure. A propensity score model creates a new composite variable, the propensity score, which helps you identify pairs or groups of subjects with similar covariate patterns. Stratification or matching on the propensity score removes much of the imbalance in the observed covariates and allows for a fairer comparison of the exposure group with the control group.
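
A minimal sketch of that workflow on simulated data follows: estimate the propensity score with logistic regression, cut it into quintiles, and average the within-stratum differences. The variable names and effect sizes are invented for illustration only.

```python
# Hedged sketch of propensity-score stratification on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1000
age = rng.normal(55, 12, n)
severity = rng.normal(0, 1, n)
# Exposure depends on the covariates, which is what creates the imbalance.
p_exposed = 1 / (1 + np.exp(-(0.03 * (age - 55) + 0.8 * severity)))
exposed = rng.binomial(1, p_exposed)
outcome = 1.0 * exposed + 0.05 * age + 1.5 * severity + rng.normal(0, 2, n)
df = pd.DataFrame({"exposed": exposed, "age": age,
                   "severity": severity, "outcome": outcome})

# Step 1: propensity score = estimated P(exposed | covariates).
ps_model = smf.logit("exposed ~ age + severity", data=df).fit(disp=False)
df["ps"] = ps_model.predict(df)

# Step 2: stratify on the propensity score (quintiles here).
df["stratum"] = pd.qcut(df["ps"], 5, labels=False)

# Step 3: compare exposed and unexposed within strata, then average.
means = df.groupby(["stratum", "exposed"])["outcome"].mean().unstack()
stratified = (means[1] - means[0]).mean()
naive = (df.loc[df.exposed == 1, "outcome"].mean()
         - df.loc[df.exposed == 0, "outcome"].mean())
print("Naive difference:    ", round(naive, 2))
print("Stratified estimate: ", round(stratified, 2))
```

The naive difference mixes the exposure effect with the effect of the imbalanced covariates; the stratified estimate should land much closer to the simulated exposure effect of 1.0.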

9. Stats: Propensity scores (March 10, 2006). When I have time, I want to describe the use of propensity scores and show some examples. Propensity scores offer a simple and effective way to correct for covariate imbalance in an observational study.

2005

8. Stats: Stepwise regression to screen for covariates (November 25, 2005). Someone wrote asking about how best to use stepwise regression in a research problem where there were a lot of potential covariates. A covariate is a variable which may affect your outcome but which is not of direct interest. You are interested in the covariate only to assure that it does not interfere with your ability to discern a relationship between your outcome and your primary independent variable (usually your treatment or exposure variable).
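
A toy version of forward stepwise screening by AIC is sketched below, with the treatment variable forced to stay in the model. The data are simulated and the original post discusses when such screening is advisable, not this particular implementation.

```python
# Hedged sketch: forward stepwise screening of candidate covariates by AIC,
# always retaining the treatment variable. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(17)
n = 300
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"x{i}" for i in range(5)])
df["treat"] = rng.integers(0, 2, n)
df["y"] = 3 * df["treat"] + 2 * df["x0"] + 1.5 * df["x3"] + rng.normal(0, 2, n)

candidates = [f"x{i}" for i in range(5)]
selected = []                                   # treat is always kept
while candidates:
    base = "y ~ treat" + "".join(" + " + v for v in selected)
    current_aic = smf.ols(base, data=df).fit().aic
    aics = {v: smf.ols(base + " + " + v, data=df).fit().aic
            for v in candidates}
    best = min(aics, key=aics.get)
    if aics[best] >= current_aic:               # no candidate improves the fit
        break
    selected.append(best)
    candidates.remove(best)

print("Covariates retained:", selected)
```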

7. Stats: Adjusting for covariate imbalance (May 20, 2005). Here's a graph I want to insert in my book. It illustrates how to adjust for covariate imbalance. The data comes from the Data and Story Library, lib.stat.cmu.edu/DASL/DataArchive.html, and shows the housing prices of 117 homes in Albuquerque, New Mexico in 1993. The data set also includes variables that might influence the sales price of the home such as the size in square feet, the age in years, and whether the house was custom built.
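
In the same spirit, the sketch below compares custom-built and regular homes on price, first unadjusted and then adjusted for size. The numbers are simulated stand-ins, not the actual DASL Albuquerque file.

```python
# Hedged sketch of covariate adjustment, with simulated housing-style data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 117
custom = rng.binomial(1, 0.3, n)                     # 1 = custom built
sqft = rng.normal(1600, 300, n) + 400 * custom       # custom homes run larger
price = 30 + 0.05 * sqft + 20 * custom + rng.normal(0, 15, n)   # in $1000s
homes = pd.DataFrame({"price": price, "sqft": sqft, "custom": custom})

unadjusted = smf.ols("price ~ custom", data=homes).fit()
adjusted = smf.ols("price ~ custom + sqft", data=homes).fit()
print("Unadjusted custom-home premium:", round(unadjusted.params["custom"], 1))
print("Size-adjusted premium:         ", round(adjusted.params["custom"], 1))
```

The unadjusted comparison credits custom homes with the extra value of their larger size; adjusting for square footage separates the two sources of the price difference.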

6. Stats: Adjusting for a baseline measurement (February 28, 2005). Someone asked me today about how to analyze a two-group experiment with a baseline value. This is a common research design. Researchers will assess all patients at the beginning of the study. They then randomly assign half of these patients to receive an intervention and half to be in a control group. Then they take a second measurement of the same outcome. The measurement at the beginning of the study, the baseline value, helps improve the research design by removing some of the variation in the data. There are four common approaches for analyzing this data, two good and two bad.
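
Two of the candidate approaches can be contrasted in a few lines: analysis of covariance on the follow-up value with baseline as a covariate, versus analysis of the simple change score. The data below are simulated; the original post, not this sketch, lists and evaluates all four approaches.

```python
# Hedged sketch: ANCOVA on follow-up (adjusting for baseline) versus a
# simple analysis of the change score. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 120
baseline = rng.normal(100, 15, n)
treat = rng.integers(0, 2, n)
followup = 0.7 * baseline + 5 * treat + rng.normal(0, 10, n)
df = pd.DataFrame({"baseline": baseline, "treat": treat,
                   "followup": followup, "change": followup - baseline})

ancova = smf.ols("followup ~ treat + baseline", data=df).fit()
change = smf.ols("change ~ treat", data=df).fit()
print("ANCOVA estimate:      ", round(ancova.params["treat"], 2),
      "SE", round(ancova.bse["treat"], 2))
print("Change-score estimate:", round(change.params["treat"], 2),
      "SE", round(change.bse["treat"], 2))
```

In simulations like this one the ANCOVA estimate typically has the smaller standard error, which is the usual argument for adjusting for the baseline rather than subtracting it.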

5. Stats: Moderator variables (February 15, 2005). I've always disliked the excessive use of detailed terminology, but when someone asked me about moderator variables, I had to look up the details. Basically, a moderator variable is one that interacts with the exposure or treatment variable. It effectively forces you to qualify your findings.
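
In a regression model a moderator shows up as a treatment-by-covariate interaction, as in this simulated sketch (the moderator name is invented for illustration).

```python
# Hedged sketch: a moderator modeled as a treatment-by-covariate interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 400
treat = rng.integers(0, 2, n)
female = rng.integers(0, 2, n)          # hypothetical moderator
y = 2 * treat + 4 * treat * female + rng.normal(0, 3, n)
df = pd.DataFrame({"y": y, "treat": treat, "female": female})

fit = smf.ols("y ~ treat * female", data=df).fit()
# A clearly non-zero interaction means the treatment effect has to be
# reported separately for each level of the moderator.
print(fit.params[["treat", "treat:female"]])
```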

4. Stats: Re-weighting the data (January 25, 2005). A recent article, Two Statistical Paradoxes in the Interpretation of Group Differences: Illustrated with Medical School Admission and Licensing Data. Wainer H, Brown LM. The American Statistician 2004: 58(2); 117-23, shows how a simple re-weighting of the data can lead to a fairer comparison between two groups.
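
The mechanics of such a re-weighting (direct standardization to a common covariate mix) can be sketched in a few lines. The strata and scores below are simulated and are not the medical school data from the Wainer and Brown article.

```python
# Hedged sketch of re-weighting (direct standardization) with simulated data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
strata = ["low", "medium", "high"]
a = pd.DataFrame({"stratum": rng.choice(strata, 500, p=[0.2, 0.3, 0.5])})
b = pd.DataFrame({"stratum": rng.choice(strata, 500, p=[0.5, 0.3, 0.2])})
stratum_means = {"low": 60.0, "medium": 70.0, "high": 80.0}
a["score"] = a["stratum"].map(stratum_means) + rng.normal(0, 5, len(a))
b["score"] = b["stratum"].map(stratum_means) - 2 + rng.normal(0, 5, len(b))

# The crude comparison mixes the stratum imbalance into the group difference
# (the simulated within-stratum gap is only 2 points).
print("Crude difference:       ", round(a.score.mean() - b.score.mean(), 1))

# Re-weight group B's stratum means to group A's stratum distribution.
target = a["stratum"].value_counts(normalize=True)
b_standardized = (b.groupby("stratum")["score"].mean() * target).sum()
print("Standardized difference:", round(a.score.mean() - b_standardized, 1))
```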

3. Stats: Adjusted odds ratios (January 20, 2005). Someone asked me today how to compute an adjusted odds ratio. He has a case-control study where cases represent cancer patients. He also has various Single Nucleotide Polymorphisms (SNPs). These would be coded as 0-1 depending on whether the SNP was present or absent. He also has demographic information, such as age, sex, smoking status, and so forth.
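
The usual recipe is a logistic regression with the SNP and the demographic covariates on the right-hand side; the adjusted odds ratio is the exponentiated SNP coefficient. The sketch below uses simulated data and invented covariate names.

```python
# Hedged sketch: adjusted odds ratio from logistic regression, simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(21)
n = 800
snp = rng.binomial(1, 0.3, n)              # SNP present (1) or absent (0)
age = rng.normal(60, 10, n)
smoker = rng.binomial(1, 0.4, n)
logit_p = -3 + 0.7 * snp + 0.03 * age + 0.5 * smoker
case = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"case": case, "snp": snp, "age": age, "smoker": smoker})

fit = smf.logit("case ~ snp + age + smoker", data=df).fit(disp=False)
or_adj = np.exp(fit.params["snp"])
ci = np.exp(fit.conf_int().loc["snp"])
print(f"Adjusted OR for the SNP: {or_adj:.2f} "
      f"(95% CI {ci.iloc[0]:.2f} to {ci.iloc[1]:.2f})")
```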

2. Stats: Testing baseline imbalance in a randomized study (January 19, 2005). Randomization will roughly balance out the covariates between the treatment group and the control group because of the law of large numbers. Once in a while, though, an important amount of covariate imbalance will creep into a randomized study. Just as a flip of 100 coins will not always yield exactly 50 heads and 50 tails, a randomized study will not always yield perfect covariate balance. When such an imbalance occurs, it is called chance bias or accidental bias. It can seriously affect the quality of your analysis.
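
Rather than significance testing the imbalance, many authors recommend a descriptive check such as the standardized difference for each baseline covariate; a rough sketch of that check, on simulated data, follows.

```python
# Hedged sketch: standardized differences as a descriptive check for
# baseline imbalance (absolute values above roughly 0.1 are often flagged).
# Simulated data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(13)
n = 200
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),
    "age": rng.normal(50, 10, n),
    "weight": rng.normal(80, 12, n),
})

def standardized_difference(x_treat, x_ctrl):
    """Mean difference divided by the pooled standard deviation."""
    pooled_sd = np.sqrt((x_treat.var(ddof=1) + x_ctrl.var(ddof=1)) / 2)
    return (x_treat.mean() - x_ctrl.mean()) / pooled_sd

for cov in ["age", "weight"]:
    d = standardized_difference(df.loc[df.group == 1, cov],
                                df.loc[df.group == 0, cov])
    print(f"{cov}: standardized difference = {d:.3f}")
```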

1. Stats: Using APR-DRGs for risk adjustment (May 24, 2006). The 3M company, famous for Post-It notes, among other things, has a division for health information systems. One of their products is software that produces classifications called "All Patient Refined Diagnosis Related Groups" or APR-DRGs. These APR-DRGs are computed from information typically collected as part of the billing process. Patients in a common APR-DRG represent a reasonably homogeneous set of patients with respect to type of condition and severity of disease.

What now?

Browse other categories at this site

Browse through the most recent entries

Get help

Creative Commons License This work is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon.