Hormone therapy, cyclooxygenase-2 inhibitors, and now vitamin E have all failed to fulfill their early promise. How can this be? What is wrong with the way we choose drugs for our patients?
Physicians should consider two key factors when weighing new information about the effectiveness of a treatment: the quality of the evidence supporting its use and whether the evidence focuses on patient-oriented outcomes or disease-oriented outcomes. Each of these factors is a continuum, and each physician chooses the point at which he or she is comfortable recommending a treatment to patients.
The continuum of study quality begins with case reports and case series, progresses to observational studies (cohort or case-control), and ends with randomized controlled trials (RCTs). Some physicians may adopt a new treatment when it has been evaluated only in a case series; others feel that observational studies provide strong enough evidence on which to base a change in practice. However, it is important to remember that even large observational studies are more subject to bias than well-designed RCTs. Such was the case with many observational studies of hormone therapy that found reductions—not increases—in cardiovascular mortality rates. RCTs are least subject to bias and, although sometimes limited in their generalizability, they are best able to establish causality.
At the low end of the outcome continuum are laboratory values and chemical measures, which may not represent what will happen in patients. For example, not all medicines that lower blood pressure are equally effective at reducing mortality rates. At the other end of this continuum are studies that report patient-oriented outcomes and tell us that what we do for patients helps them live longer or better lives. Some physicians recommend treatments based on results in laboratory studies and their own knowledge of physiology and biochemistry, while others will wait for evidence of clinical benefit in patients. Human beings are complex, and positive changes in biochemical markers do not always lead to a predictable benefit. In fact, they can be associated with an unpredictable harm.
The example of vitamin E, which is taken in a dosage of at least 400 IU by one in five adults older than 55 years, illustrates this seeming contradiction.1 Vitamin E prevents oxidation of low-density lipoproteins and inhibits platelet adhesion, which theoretically should reduce the risk of cardiovascular disease.2,3 Observational studies4–6 found that patients with coronary artery disease (CAD) had lower vitamin E levels and were less likely to have taken vitamin E when compared with control patients. For many patients and physicians, this combination of biochemical theory and observational data was enough to convince them to adopt a new practice, despite the absence of evidence from RCTs.
The first RCT7 of vitamin E in patients with heart disease was published in 1996. It randomized 2,002 patients with CAD to treatment with 400 or 800 IU of vitamin E per day or matching placebo. In the conclusion of their abstract, the authors emphasized that vitamin E substantially reduced the rate of nonfatal myocardial infarction, with beneficial effects apparent after one year of treatment. A careful reading of the study, though, shows what the authors did not emphasize: a nonsignificant increase in all-cause mortality among treated patients (i.e., 36 deaths among 1,035 patients in the treatment groups compared with 27 deaths among 967 patients in the control group).
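A quick back-of-the-envelope calculation, using only the death counts reported above, shows why this finding deserved more attention than the abstract gave it:

```python
# Mortality figures reported in the 1996 trial:
# 36 deaths among 1,035 treated patients vs. 27 among 967 controls.
treated_deaths, treated_n = 36, 1035
control_deaths, control_n = 27, 967

treated_rate = treated_deaths / treated_n      # roughly 3.5 percent
control_rate = control_deaths / control_n      # roughly 2.8 percent
risk_difference = treated_rate - control_rate  # about +0.7 percentage points

print(f"Treated: {treated_rate:.1%}, Control: {control_rate:.1%}, "
      f"Difference: {risk_difference:+.1%}")
```

The absolute difference of roughly 0.7 percentage points did not reach statistical significance in a trial of this size, but it pointed in the direction of harm rather than benefit.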
More recently, a series of large, well-designed RCTs8–10 in patients at high risk for developing CAD (the kind of patients who typically take vitamin E) showed no benefit from vitamin E. Despite the lack of benefit, millions of physicians continued to recommend this drug at 20 times the recommended daily allowance to their patients. The first indication that vitamin E might be harmful came from a study11 of 423 postmenopausal women with CAD who were randomized to 400 IU of vitamin E twice daily plus 500 mg of vitamin C twice daily or matching placebo (the study also randomized women to hormone therapy or placebo). After three years, all-cause mortality was significantly greater in the treatment group (5.7 versus 1.8 percent; P = .04). Finally, a recent meta-analysis12 identified 19 RCTs of vitamin E for prevention of heart disease that included 135,967 patients. It found a consistent dose-response relationship between vitamin E and all-cause mortality, with an estimated excess risk of 39 per 10,000 persons who took high dosages of vitamin E (i.e., at least 400 IU per day) for at least one year (95 percent confidence interval, 3 to 74).
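The meta-analysis estimate can be restated as a number needed to harm (NNH), the reciprocal of the absolute risk increase. The sketch below uses only the 39-per-10,000 point estimate quoted above; the NNH figure itself is our derivation, not a number reported in the meta-analysis:

```python
# Excess of 39 deaths per 10,000 persons taking >= 400 IU per day
# for at least one year, per the meta-analysis point estimate.
excess_deaths_per_10000 = 39
absolute_risk_increase = excess_deaths_per_10000 / 10000  # 0.39 percent

# NNH: how many persons must take high-dose vitamin E for one year
# for one excess death to occur, at the point estimate.
nnh = round(1 / absolute_risk_increase)

print(f"Absolute risk increase: {absolute_risk_increase:.2%}; NNH ~ {nnh}")
```

At the point estimate, roughly one excess death occurs for every 256 persons taking high-dose vitamin E for a year; the wide confidence interval (3 to 74 per 10,000) means the true NNH could plausibly range from the low hundreds to the low thousands.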
It is important to remember that biochemical theory does not equal clinical benefit. Improvements in disease-oriented outcomes, such as free-radical activity, are no substitute for patient-oriented outcomes, such as all-cause mortality. Sometimes our enthusiasm for unproven treatments may harm our patients. By learning to value well-designed studies over weaker evidence, and by focusing on patient-oriented evidence instead of disease-oriented evidence, we can be more confident that our decisions to adopt a new test or treatment will help our patients.13