|P.Mean >> Category >> Clinical importance (created 2007-06-04).|
Clinical importance represents a change or shift in the outcome between the treatment group and the control group that is large enough to have a practical impact on the patient. Articles are arranged by date, with the most recent entries at the top. Also see Category: Confidence Intervals and Category: Sample Size Justification. You can find outside resources at the bottom of this page.
8. P.Mean: Unrealistic scenarios for sample size calculations (created 2011-12-20). I'm not a doctor, so when someone presents information to me about the clinically important difference (a crucial component of any sample size justification), I should just accept their judgment. After all, I've never spent a day in a clinic in my life (at least not on the MD side), so who am I to say what's clinically important? Nevertheless, sometimes I'm presented with a scenario where the clinically important difference is so extreme that I have to raise a question. Here's a recent example.
Reliable and clinically significant change. Chris Evans. Excerpt: Reliable Change (RC) is about whether people changed sufficiently that the change is unlikely to be due to simple measurement unreliability. You determine who has changed reliably (i.e. more than the unreliability of the measure would suggest might happen for 95% of subjects) by seeing if the difference between the follow-up and initial scores is more than a certain level. That level is a function of the initial standard deviation of the measure and its reliability. www.psyctc.org/stats/rcsc.htm
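The threshold Evans describes can be sketched in a few lines. This is a minimal illustration assuming the common Jacobson-Truax formulation, in which the standard error of a difference score is the initial standard deviation times the square root of 2(1 - reliability); the function names are mine, not from the linked page.

```python
import math

def reliable_change_threshold(sd_initial, reliability, z=1.96):
    """Smallest score change unlikely (at roughly the 95% level) to be
    measurement noise, per the Jacobson-Truax formulation: the standard
    error of a difference score is sd * sqrt(2 * (1 - reliability))."""
    se_diff = sd_initial * math.sqrt(2.0 * (1.0 - reliability))
    return z * se_diff

def changed_reliably(initial, follow_up, sd_initial, reliability):
    """True if the observed change exceeds the reliable change threshold."""
    change = abs(follow_up - initial)
    return change > reliable_change_threshold(sd_initial, reliability)
```

For a measure with an initial standard deviation of 10 and reliability 0.8, the threshold works out to about 12.4 points, so a 15-point change would count as reliable while a 5-point change would not.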
Journal article: Ray Moynihan. Surrogates under scrutiny: fallible correlations, fatal consequences. BMJ. 2011;343:d5160. Excerpt: "We live in a time when much disease is measured not by symptoms but by numbers, determined by biomarkers in our blood or bone. Transforming a healthy person's risk of disease into a chronic condition has been a key characteristic of modern medicine, creating vast new markets for 'preventive' pills designed to reduce suffering and extend life. The annual global spend on cholesterol lowering drugs alone has exceeded £10bn (€11bn; $16bn), while more generally widening definitions and lowering thresholds continue to expand the patient pool. Well funded campaigns urge the public to know their numbers, and professionals are rewarded for treating to target. Yet the grand assumption underpinning this approach—that improving a person's numbers will automatically improve their health—is a delusion as dangerous as it is seductive." [Accessed on August 22, 2011]. http://www.bmj.com/content/343/bmj.d5160.extract
Journal article: Cristian Baicus, Simona Caraiola. Effect measure for quantitative endpoints: statistical versus clinical significance, or "how large the scale is?" Eur. J. Intern. Med. 2009;20(5):e124-125. Abstract: "Whenever a study finds a statistical significance for the difference between treatment and placebo, we must always ask ourselves if the difference is clinically important, too. In order to do this, we need to know at least how large the scale is, and to compare the size of the scale with the size of the effect. Sometimes, the effect of placebo is greater than the intrinsic effect of the drug. The results of these studies are expressed as averages of effects on patients who respond to treatments and patients who do not, so in our daily practice we must distinguish these categories, treating only the first." [Accessed on June 15, 2011]. http://www.baicus.com/ppt/Effect%20measure%20for%20quantitative%20endpoints.pdf.
Journal article: Helen Kirkby, Sue Wilson, Melanie Calvert, Heather Draper. Using e-mail recruitment and an online questionnaire to establish effect size: A worked example. BMC Medical Research Methodology. 2011;11(1):89. Abstract: "BACKGROUND: Sample size calculations require effect size estimations. Sometimes, effect size estimations and standard deviation may not be readily available, particularly if efficacy is unknown because the intervention is new or developing, or the trial targets a new population. In such cases, one way to estimate the effect size is to gather expert opinion. This paper reports the use of a simple strategy to gather expert opinion to estimate a suitable effect size to use in a sample size calculation. METHODS: Researchers involved in the design and analysis of clinical trials were identified at the University of Birmingham and via the MRC Hubs for Trials Methodology Research. An email invited them to participate. An online questionnaire was developed using the free online tool 'Survey Monkey©'. The questionnaire described an intervention, an electronic participant information sheet (e-PIS), which may increase recruitment rates to a trial. Respondents were asked how much they would need to see recruitment rates increased by, based on 90%, 70%, 50% and 30% baseline rates, (in a hypothetical study) before they would consider using an e-PIS in their research. Analyses comprised simple descriptive statistics. RESULTS: The invitation to participate was sent to 122 people; 7 responded to say they were not involved in trial design and could not complete the questionnaire, 64 attempted it, 26 failed to complete it. Thirty-eight people completed the questionnaire and were included in the analysis (response rate 33%; 38/115). Of those who completed the questionnaire, 44.7% (17/38) were at the academic grade of research fellow, 26.3% (10/38) senior research fellow, and 28.9% (11/38) professor.
Dependent upon the baseline recruitment rates presented in the questionnaire, participants wanted recruitment rate to increase from 6.9% to 28.9% before they would consider using the intervention. CONCLUSIONS: This paper has shown that in situations where effect size estimations cannot be collected from previous research, opinions from researchers and trialists can be quickly and easily collected by conducting a simple study using email recruitment and an online questionnaire. The results collected from the survey were successfully used in sample size calculations for a PhD research study protocol." [Accessed on June 14, 2011]. http://www.biomedcentral.com/1471-2288/11/89
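Once an effect size has been elicited this way, it feeds into a standard sample size formula. A rough sketch using the normal-approximation formula for comparing two proportions; the baseline and target recruitment rates below are illustrative values, not figures taken from the paper.

```python
import math

def n_per_group(p1, p2):
    """Approximate per-group sample size for detecting a difference
    between two proportions (normal approximation, two-sided alpha
    of 0.05, power of 0.80)."""
    z_alpha = 1.959964  # z for alpha/2 = 0.025
    z_beta = 0.841621   # z for power = 0.80
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(numerator / (p1 - p2) ** 2)

# Illustrative only: a 50% baseline recruitment rate that experts say
# must rise to 65% before they would adopt the intervention.
print(n_per_group(0.50, 0.65))
```

Larger elicited differences shrink the required sample size quickly, which is exactly why an honestly chosen clinically important difference matters so much.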
All of the material above this paragraph is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon and was last modified on 2017-06-15. The material below this paragraph links to my old website, StATS. Although I wrote all of the material listed below, my ex-employer, Children's Mercy Hospital, has claimed copyright ownership of this material. The brief excerpts shown here are included under the fair use provisions of U.S. Copyright laws.
7. Stats: What to measure in a post-marketing surveillance study (May 2, 2007). Dear Professor Mean, I am volunteering as a data analyst in a post-marketing surveillance to assess the safety and efficacy of a drug. I'm not sure what to measure and how to measure it. Can you help me figure out what really needs to be done?
6. Stats: Is my confidence interval too wide? (September 21, 2006). Dear Professor Mean, Is there a rule of thumb to judge if a 95% CI is wide or narrow?
5. Stats: Confidence intervals are needed to evaluate clinical importance (December 15, 2005). Back in March, I sent a letter to the American Journal of Psychiatry complaining about their failure to include confidence intervals in their published reports. The journal decided not to publish this letter, but since it discusses an important general issue, I thought I would place the submitted letter here.
4. Stats: Do I have enough data after 24 months of time? (April 5, 2005). Someone asked me about a correlation coefficient that he computed on a data set representing 24 months of data collection. A particular correlation of interest (a correlation between staff turnover and resident falls) was not significantly different from zero, but this person wanted to know how much more data to collect before he could safely conclude that no relation exists or is likely to be established. First, compute a confidence interval for the correlation coefficient. If that interval is so narrow that you can rule out the possibility of a clinically important shift, then your sample size is large enough.
3. Stats: Where is the confidence interval? (March 31, 2005). A recent letter to the editor in the American Journal of Psychiatry complains about an article claiming that a drug, citalopram, can reduce depressive symptoms. The letter writers dispute (among other things) the claim of a statistically and clinically significant reduction. In the original paper, the authors show several results, and the one that is perhaps the most important is the proportion of patients who score 28 or less on the Children's Depression Rating Scale. By this criterion, 36% of the treated patients and 24% of the control patients showed improvement. One way to see if the results of a study are clinically significant is to present a number needed to treat plus confidence limits.
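The arithmetic behind that suggestion is straightforward. A sketch using the improvement proportions quoted above; the group sizes of 90 are an assumption for illustration only, not figures from the original paper.

```python
import math

def risk_difference_ci(p_treat, p_control, n_treat, n_control, z=1.96):
    """Wald 95% confidence interval for the difference between
    two independent proportions."""
    rd = p_treat - p_control
    se = math.sqrt(p_treat * (1 - p_treat) / n_treat
                   + p_control * (1 - p_control) / n_control)
    return rd, rd - z * se, rd + z * se

# Improvement rates from the study: 36% treated vs 24% control.
# Group sizes of 90 are assumed purely for illustration.
rd, lo, hi = risk_difference_ci(0.36, 0.24, 90, 90)
nnt = 1 / rd  # about 8.3: treat roughly 9 patients for one extra responder
# If the risk-difference CI excludes zero, inverting its limits (1/hi, 1/lo)
# gives an NNT interval; if it crosses zero, the NNT interval is undefined,
# passing through infinity.
```

Presenting the NNT alongside the risk-difference interval, rather than a bare p-value, is what lets a reader judge clinical importance directly.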
2. Stats: Clinical importance (March 11, 2005). Many journal authors have the bad habit of looking just at the p-value of a study and ignoring the clinical importance of their findings. If they get a small p-value, which indicates a statistically significant difference between the new therapy and the standard therapy, they dance in the streets, they pop open the champagne bottles, they celebrate wildly, and they publish their results in an "A" journal. If they get a large p-value, they rend their clothes, they throw ashes on their heads, they wail and moan, and they publish their results in a "C" journal. An article about measurement of fatigue offers some valuable lessons about clinically relevant differences.
1. Stats: Clinically trivial effects (April 12, 2004). I don't like to cite articles in the New York Times, because they are free on the web only for a couple of weeks. But an article by Denise Grady, Nominal Benefits Seen in Drugs for Alzheimer's, published on April 7, is worth mentioning. Grady writes that drugs to treat Alzheimer patients are expensive, and it is unclear how much they really help.