The Science-Based Medicine blog defends itself (created 2010-11-09)


I disagreed with this comment. After all, I wrote a book about EBM, and I mention scientific plausibility in Chapter 4. Is that sufficient consideration? I have also criticized the rigid hierarchy of EBM on my website, and I'm not the only proponent of EBM to do so.

"The view is widely held that experimental methods (randomised controlled trials) are the �gold standard� for evaluation and that observational methods (cohort and case control studies) have little or no value. This ignores the limitations of randomised trials, which may prove unnecessary, inappropriate, impossible, or inadequate. Many of the problems of conducting randomised trials could often, in theory, be overcome, but the practical implications for researchers and funding bodies mean that this is often not possible. The false conflict between those who advocate randomised trials in all situations and those who believe observational data provide sufficient evidence needs to be replaced with mutual recognition of the complementary roles of the two approaches. Researchers should be united in their quest for scientific rigour in evaluation, regardless of the method used." http://www.bmj.com/content/312/7040/1215.short

But if someone wants to point out that EBM needs work, I'm fine with that. What I dislike is the claim that EBM needs to be replaced with something better.

Now, I criticized SBM when I wrote

So I think that this criticism of EBM is putting up a "straw man" to knock down. No thoughtful practitioner of EBM, to my knowledge, has suggested that EBM ignore scientific mechanisms.

and I was rightly criticized for falling for the "no true Scotsman" fallacy. I'd like to believe that most practitioners of EBM do consider scientific mechanisms, and that the people who don't are practicing PIEBM (Poorly Implemented Evidence Based Medicine). But I really don't have any data to support this belief.

I'd argue that a definition of EBM

"the integration of best research evidence with clinical expertise and patient values" Sackett DL, Straus SE, Richardson WS, et al. Evidence�based medicine: how to practice and teach EBM. 2d ed. Edinburgh: Churchill Livingstone, 2000.

allows for the incorporation of mechanisms under the umbrella of clinical expertise, but this is a stretch. Besides, how people define EBM and how they practice it are not necessarily the same thing.

I think that scientific plausibility does have some issues. What do you do, for example, when there are scientifically plausible explanations on both sides of a hypothesis? And who decides what is plausible? But I don't really want to find myself on the opposite side of the fence from those who are advocating greater use of scientific plausibility in medical research. So when I said

"I would argue further that it is a form of methodolatry to insist on a plausible scientific mechanism as a pre-requisite for ANY research for a medical intervention. It should be a strong consideration, but we need to remember that many medical discoveries preceded the identification of a plausible scientific mechanism."

that was my own version of a straw man. The SBM website believes that scientific plausibility is insufficiently considered by proponents of EBM, but as far as I can tell, they haven't advocated that scientific plausibility replace randomized trials at the top of the EBM hierarchy. In particular, Dr. Gorski's comment

We do not criticize EBM for an "exclusive" reliance on RCTs but rather for an overreliance on RCTs devoid of scientific context.

is probably a fairer characterization than mine. In my defense, I did not say that the SBM blog was guilty of insisting on a plausible scientific mechanism for any research, but I still should have been clearer.

So how would you resolve this issue? I mentioned in my comment on the SBM blog how difficult this would be.

We can each accumulate dueling anecdotes of when EBM proponents get it right or when they get it wrong, but I doubt that there will ever be any solid empirical evidence to adjudicate the controversy. Without such evidence, we'll be forever stuck accusing the other side of being too naive or too cynical. You see EBM as being wrong often enough that you see value in creating a new label, SBM. I see SBM as being that portion of EBM that is being done thoughtfully and carefully, and don't see the need for a new label.

I generally bristle when people want to create a new and improved version of EBM and then give it a new label.

There's a group trying to replace the term "evidence based medicine" with "value based medicine" and I see the same problems here. In my experience, people who practice EBM thoughtfully do incorporate patient values into the equation, but others want to create a new label that emphasizes something they see lacking overall in the term "evidence based medicine."

Instead, I prefer the Sicily statement on EBM, whose authors see EBM as something that evolves over time.

The term "Evidence-based medicine" was introduced in the medical literature in 1991 [26]. An original definition suggested the process was "an ability to assess the validity and importance of evidence before applying it to day-to-day clinical problems" [27,28]. The initial definition of evidence-based practice was within the context of medicine, where it is well recognised that many treatments do not work as hoped [29]. Since then, many professions allied to health and social care have embraced the advantages of an evidence-based approach to practice and learning [5-8,30]. Therefore we propose that the concept of evidence-based medicine be broadened to evidence-based practice to reflect the benefits of entire health care teams and organisations adopting a shared evidence-based approach. This emphasises the fact that evidence-based practitioners may share more attitudes in common with other evidence-based practitioners than with non evidence-based colleagues from their own profession who do not embrace an evidence-based paradigm.

EBP evolved from the application of clinical epidemiology and critical appraisal to explicit decision making within the clinician's daily practice, but this was only one part of the larger process of integration of evidence into practice. Initially there was a paucity of tools and programmes to help health professionals learn evidence-based practice. In response to this need, workshops based on those founded at McMaster by Sackett, Haynes, Guyatt and colleagues were set up around the world. During this period several textbooks on EBP were published accompanied by the development of on-line supportive materials.

The initial focus on critical appraisal led to debate on the practicality of the use of evidence within patient care. In particular, the unrealistic expectation that evidence should be tracked down and critically appraised for all knowledge gaps led to early recognition of practical limitations and disenfranchisement amongst some practitioners [31]. The growing awareness of the need for good evidence also led to awareness of the possible traps of rapid critical appraisal. For example problems, such as inadequate randomisation or publication bias, may cause a dramatic overestimation of therapeutic effectiveness [32]. In response, pre-searched, pre-appraised resources, such as the systematic reviews of the Cochrane Collaboration [33], the evidence synopses of Clinical Evidence [34] and secondary publications such as Evidence Based Medicine [35] have been developed [36], though these currently only cover a small proportion of clinical questions. http://www.biomedcentral.com/1472-6920/5/1

I also believe there is some societal value in testing therapies that are in wide use, even though there is no scientifically valid reason to believe that those therapies work. Dr. Gorski disagreed,

Simon then appeals to there being some sort of "societal value" to test interventions that are widely used in society even when those interventions have no plausible mechanism. I might agree with him, except for two considerations. First, no amount of studies will convince, for example, homeopaths that homeopathy doesn't work. Witness Dana Ullman if you don't believe me. Second, research funds are scarce and likely to become even more so over the next few years. From a societal perspective, it's very hard to justify allocating scarce research dollars to the study of incredibly implausible therapies like homeopathy, reiki, or therapeutic touch. (After all, reiki is nothing more than faith healing based on Eastern mystic religious beliefs rather than Christianity.) Given that, for the foreseeable future, research funding will be a zero sum game, it would be incredibly irresponsible to allocate funds to studies of magic and fairy dust like homeopathy, knowing that those are funds that won't be going to treatment modalities that might actually work.

I realize that some people would never be convinced, no matter how many negative trials are published on a topic, but I also believe that there are enough people who would be convinced to justify the expense and trouble of running these trials. I also disagree with the comment about scarce resources. Money spent on health care is a big, big pot of money, and the money spent on research is peanuts by comparison. If we spend some research money to help ensure that the big pot of money is spent well, we have been good stewards of the limited research funds.

I do have to mention a financial conflict of interest here. One of my regular clients for P.Mean Consulting has been Cleveland Chiropractic College. Some chiropractors bristle at the thought that they are part of alternative medicine, but the link is strong enough in most people's minds that I should disclose this link. The folks at Cleveland Chiropractic College have not been using me much recently, but I'd love to have them back as a regular client.

I believe that everybody deserves their day in court, and as long as someone is not trying to abuse the research method to make a point, I'm happy to work with them. I generally put aside any skeptical doubts and try to see what the data say. For what it's worth, I have found the people at Cleveland Chiropractic College to be very level-headed. They want to find out where chiropractic works and where it doesn't, because it is a waste of everybody's time and energy to continue to use ineffective therapies. This is perhaps not a general tendency among chiropractors, so I am very fortunate here.

I am also partially supported at my UMKC job through a grant looking at economic expenditures of patients who use CAM providers. This is an NIH grant, and I do not believe that holding an NIH grant on a topic makes you biased. Some people, though, feel that anyone associated with a grant has the temptation to exaggerate the problem being studied so as to increase the chances of getting future funding.

I have a view about alternative medicine that some might characterize as conflicted. I gave a talk about what alternative medicine can teach us about evidence based medicine, and you can find the handout at my old website:
 * http://www.childrens-mercy.org/stats/training/hand66.asp
and you should read this to get a sense of my perspective on alternative medicine.

The SBM blog frequently cites the p-value fallacy and the failure to adopt Bayesian methods as critical failings of EBM. I disagreed in my response to Dr. Gorski's blog post.

But I'm still confused about the Bayesian argument you are making on this site. I can imagine one Bayesian placing randomized trials at the top of the hierarchy of evidence and I can imagine another Bayesian rejecting any research that requires going "against huge swaths of science that has been well-characterized for centuries." I can even imagine a Bayesian having "a bit of a soft spot for the ol' woo." In each case, the Bayesians would incorporate their (possibly wrong-headed) beliefs into their prior distribution. I see the argument about Bayesian versus p-values as orthogonal to the arguments about SBM versus EBM. Am I missing something?

I'd be very interested in what Dr. Gorski and others on the SBM blog say about Bayesian methods.
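To illustrate the point I was trying to make about priors, here is a minimal sketch of my own (not anything from the SBM blog) in which two hypothetical Bayesians analyze the same invented trial with a conjugate beta-binomial model. The trial counts, the assumed 50% placebo response rate, and the prior parameters are all made up for illustration.

from scipy.stats import beta

# Hypothetical trial (numbers invented for illustration): 30 responders out of
# 50 patients, where the assumed background (placebo) response rate is 50%.
successes, n = 30, 50

priors = {
    # A skeptic concentrates prior belief near the placebo rate of 0.5.
    "skeptic, Beta(50, 50) prior": (50, 50),
    # An enthusiast with a soft spot for the therapy uses a flat prior.
    "enthusiast, Beta(1, 1) prior": (1, 1),
}

for label, (a, b) in priors.items():
    posterior = beta(a + successes, b + n - successes)
    prob_better = 1 - posterior.cdf(0.5)  # posterior P(response rate > 0.5)
    print(f"{label}: P(therapy beats placebo) = {prob_better:.2f}")

The two analysts see the same data and apply the same Bayesian update, yet they end up with noticeably different levels of confidence because the prior does most of the work. That is the sense in which the Bayesian-versus-p-value question seems orthogonal to the EBM-versus-SBM question.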

Summary

If I had to say one thing about EBM, I would say that it is largely self-correcting. The flaws in EBM, to a large extent, are discovered by the tools of EBM. I can cite lots of examples of this:

  1. Concato et al (2000) provided evidence that published observational studies give results comparable to those of published randomized trials. Thus, placing observational studies lower on the hierarchy than randomized trials may not be called for. http://www.nejm.org/doi/full/10.1056/NEJM200006223422507
  2. Juni et al (2002) showed that reliance on English language studies only leads to serious bias in a systematic overview. http://ije.oxfordjournals.org/content/31/1/115.full
  3. Schulz et al (1996) showed that failure to conceal the allocation list during a randomized trial could lead to biases caused by fraudulent steering of patients to a preferred arm in a clinical trial. http://www.ncbi.nlm.nih.gov/pubmed/8774577?dopt=Abstract
  4. Swaen et al (2001) showed that studies that lacked a specific a priori hypothesis were three times more likely to produce false positives. http://ije.oxfordjournals.org/content/30/5/948.long

So rather than create something new, why not just let EBM evolve to reflect a greater emphasis on plausible scientific mechanisms? The blueprint in the Swaen reference could easily be used to provide convincing evidence that studies without a plausible scientific mechanism are more likely to produce false positives.
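As a rough sketch of the kind of argument involved (my own back-of-the-envelope illustration, not the Swaen analysis itself, with assumed values for the significance level, power, and prior plausibility), here is how the probability that a statistically significant finding reflects a real effect falls off as the prior plausibility of the hypothesis shrinks.

# Back-of-the-envelope sketch with assumed values: the positive predictive
# value of a "significant" result as a function of the prior plausibility
# that the hypothesis is true, at a fixed significance level and power.

def ppv(prior, alpha=0.05, power=0.80):
    """P(effect is real | p < alpha), by Bayes' theorem."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01, 0.001):
    print(f"prior plausibility {prior:>6.3f}: "
          f"P(real effect | significant result) = {ppv(prior):.2f}")

Under these assumed inputs, a significant result for a hypothesis with even odds of being true is real about 94% of the time, while the same result for a one-in-a-thousand long shot is real less than 2% of the time. An empirical version of that argument, along the lines of the Swaen design, would of course require actual data rather than assumed inputs.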

Postscript

One of the other commenters on Dr. Gorski's blog entry noted a glaring error in an entirely different post on my website.

From the first link on post modernism: "A tulip bulb is a rhizhome." AAArgh!!! I'll get around to reading the rest. But, please, please, Mr. Simon fix that grievous mistake. I know it is a little thing, but a tulip bulb does have a center, a form, and direction and is completely different from a rhizome. Change it to an iris rhizome, or ginger rhizome (well, there is a type of bulb iris, completely different flower). Please. More people know what ginger root looks like. At least more than those who know what a tulip bulb looks like (kind of like a tapered onion, it even has layers). http://www.sciencebasedmedicine.org/?p=8151#comment-58990

I really did not know that. I guess that shows how little I truly know about science. In my defense, my mind was probably still a bit fuzzy after reading all that post-modern writing. I see a lot of value in post-modern philosophy when it isn't taken to excess, but it is very hard to read.