Minal Kale MD
Nirav R. Shah MD, MPH
In the last few months, academic research communities have been abuzz over the unprecedented sums of money up for grabs under the American Recovery and Reinvestment Act of 2009. The federal stimulus package promises $10.4 billion administered through the National Institutes of Health (NIH), 80% of which is devoted specifically to scientific research. The frenzy to apply for these monies belies the high level of organization that will go into submitting funding applications, as academic institutions strategize to create competitive proposals that adhere to funding guidelines.
Researchers are weighing several important aspects of the new guidelines. Aggressive time limits specify that money must be doled out within two years (by September 2010), and grants will be scrutinized for their ability to demonstrate reasonable progress within this time frame, in a research environment where projects routinely extend over 5 to 10 years. Grant applications will also be assessed for their alignment with the spirit of the stimulus, namely creating and retaining jobs, a novel objective rarely considered by most scientific researchers.
A separate piece of the stimulus package, totaling $1.1 billion and administered through the NIH ($400 million), the Agency for Healthcare Research and Quality ($300 million), and the Department of Health and Human Services ($400 million), is apportioned specifically to funding comparative effectiveness research, with oversight from a newly created Federal Coordinating Council for Comparative Effectiveness Research. Here, the emphasis is on incentivizing research that “compares the clinical outcomes, effectiveness, and appropriateness of items, services, and procedures that are used to prevent, diagnose, or treat diseases, disorders, and other health conditions.”
The prominence given to comparative effectiveness research reflects an evolution in the understanding of how clinical evidence is generated. As recently as 1996, the United States Preventive Services Task Force (USPSTF) graded the various types of clinical research studies and formalized the widely held belief that the randomized controlled trial (RCT) is the gold standard for clinical evidence, superior to other methods of generating evidence such as observational studies (USPSTF 1996; see Table 1). The recent emphasis on comparative effectiveness, however, considers both RCTs and observational studies (e.g., cohort and case-control studies) as complementary forms of data generation, each with relative strengths and limitations depending on the questions being asked (Teutsch 2005). Moreover, comparative effectiveness research extends to cost and safety evaluations of medical interventions and diagnostics: studies can generate conclusions not only about the magnitude and certainty of “real-world” benefits of a therapy, but also about adverse events, harms, and cost-effectiveness.
As an example of a comparative effectiveness research study with major clinical impact, consider the ALLHAT study of 2002. In this non-industry-sponsored trial, the investigators randomized approximately 33,000 patients to receive one of three different antihypertensive agents: chlorthalidone (a thiazide diuretic), lisinopril (an ACE inhibitor), or amlodipine (a calcium channel blocker). The study’s primary outcome was combined fatal coronary heart disease or nonfatal myocardial infarction. The analysis concluded that there was no difference in the primary outcome among the three interventions, and found that the older, less expensive diuretic was significantly better on several secondary outcomes. This research went on to influence the JNC-7 national guidelines, which recommended thiazide diuretics as first-line agents in the treatment of hypertension.
As our healthcare system undergoes a major overhaul over the next decade, how will comparative effectiveness research change the way we practice medicine? The treatment of disease and the art of healing are certainly not immune to changes in the standard of care. The stimulus-funded research has the potential to generate important “translational” knowledge that can influence that standard. It can do so in a way that is relatively protected from the vested interests of the pharmaceutical industry, and that is not limited by the artificial constraint of a placebo comparison arm when known improvements over placebo already exist. Comparative effectiveness research may also help correct the current imbalance between basic science and clinical science research, as it is rooted in examining health services and answering clinical questions. The conclusions drawn from comparative effectiveness research have implications beyond changing trends in medical treatment: they will provide evidence that can be used by policymakers, payers, providers, and patients. This is a sea change in research priorities, and it is heartening to advocates of evidence-based medicine worldwide.
Table 1. USPSTF Grades of Evidence (USPSTF 1996)
I: One properly randomized controlled trial
II-1: Controlled trials without randomization
II-2: Cohort or case-control studies
II-3: Multiple time series
III: Opinions, case reports, or reports of expert committees
Federal Coordinating Council for Comparative Effectiveness Research Membership. Accessed on March 20, 2009 at http://www.hhs.gov/recovery/programs/os/cerbios.html
Grant Funding Opportunities Supported by the American Recovery and Reinvestment Act of 2009. Accessed on March 20, 2009 at http://grants.nih.gov/recovery/
Teutsch SM, Berger ML, Weinstein MC. Comparative effectiveness: asking the right questions, choosing the right method. Health Aff (Millwood). 2005 Jan-Feb;24(1):128-32.
USPSTF Guide to clinical preventive services. 2nd ed. Baltimore: Williams & Wilkins, 1996.
Furberg CD, et al. Major outcomes in high-risk hypertensive patients randomized to angiotensin-converting enzyme inhibitor or calcium channel blocker vs diuretic: the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). JAMA. 2002;288(23):2981-2997.