From The Archives: Forgoing the Fear: Contrast Nephropathy

January 30, 2014

Please enjoy this post from the archives, dated June 15, 2011

By Mario V Fusaro, MD

Faculty Peer Reviewed

There are certain laws in the universe that are just not meant to be broken.  One is gravity.  Another is relativity.  The third: don't give contrast to people with bad kidneys.  Perhaps the last one is not so much a law as something we seem to be terrified of doing.  While recently on service, I had a patient with unexplained right lower quadrant pain.  The obvious first or second or fifth step would be a contrast CT.  The rub in this case was that the pain was located directly over the site of his newly transplanted kidney, and his baseline creatinine was around 2 mg/dL.  After an MRI without gadolinium and a myriad of other tests were equivocal, the nephrologist wanted to order a contrast CT scan.  As though I'd just come to the realization that there was no Santa Claus, I stared in disbelief.  How could the nephrologist, of all people, want to put the beloved kidney in jeopardy?  As I logged on to the ordering screen and navigated my way to the CT contrast order, beads of sweat slipped down my face.  My resting tremor sent the cursor into a frenzy.  With the click of a button, I would surely be subjecting this gentleman to contrast-induced nephropathy (CIN).  Anxiety-ridden, I cornered the nephrologist and demanded an explanation.  Calmly, he told me what he believed the patient's risk of developing CIN to be and then delved into some facts and figures.  They turned out to be far more optimistic than my own imaginary ones.  Still skeptical, I hit the books.

CIN is generally, though variably, defined as an increase in serum creatinine of ≥25% above baseline or >0.5 mg/dL within 48-96 hours of administration of iodinated contrast.  The most established risk factors for CIN are type of contrast agent, diagnostic modality, chronic kidney disease (CKD), NYHA class III/IV congestive heart failure (CHF), hypotension, diabetes mellitus (DM), or a combination of these conditions.
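
To make the definition concrete, the two thresholds can be expressed as a simple check (a sketch for illustration only; the exact cutoffs and time window vary between studies, and the function name is my own):

```python
def meets_cin_criteria(baseline_cr, followup_cr):
    """Check a common CIN definition: serum creatinine rising by >=25%
    or by >0.5 mg/dL from baseline within 48-96 hours of iodinated
    contrast. Exact thresholds vary across studies (hypothetical helper)."""
    absolute_rise = followup_cr - baseline_cr
    relative_rise = absolute_rise / baseline_cr
    return relative_rise >= 0.25 or absolute_rise > 0.5

# A baseline of 1.0 mg/dL rising to 1.3 mg/dL meets the 25% criterion.
print(meets_cin_criteria(1.0, 1.3))  # True
# A rise from 2.0 to 2.2 mg/dL (10%, 0.2 mg/dL) meets neither threshold.
print(meets_cin_criteria(2.0, 2.2))  # False
```

Note that in a patient with an elevated baseline (say, 2 mg/dL), the absolute 0.5 mg/dL cutoff is reached before the 25% relative cutoff, which is one reason the reported incidence of CIN depends heavily on which definition a study uses.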

The three main classes of contrast agents are high-osmolality (1300-1800 mOsm/kg), low-osmolality (500-850 mOsm/kg), and iso-osmolality (290 mOsm/kg).  High-osmolality agents include diatrizoate and metrizoate.  The low-osmolal dyes are iopamidol, iohexol, ioxilan, and iopromide.  Lastly, iodixanol is an iso-osmolal contrast.  The amount of contrast given depends on the study in question.  Coronary angiography and computed tomography (CT) of the head, chest, abdomen, and pelvis are the main diagnostic modalities in which iodinated contrast is most extensively used and studied.  Typical contrast loads for CT are approximately 100 cc depending on the study, with cardiac catheterizations averaging around 260 cc.  Some studies suggest that iso-osmolal or low-osmolal agents cause less CIN than high-osmolality contrast[12-13].  Greater amounts of contrast are also associated with higher rates of CIN[5].

Aside from the increased contrast load, those undergoing coronary angiography tend to have worse outcomes by virtue of the fact that they have atherosclerosis and can shower atheroemboli during the procedure.  Following catheterization, CHF, CKD, hypotension, and DM are the risk factors that carry the highest rates of CIN, with incidences ranging from 3-40%[5-7].  CKD and DM also seem to be risk factors for CIN in the setting of CT scans; however, the association with CHF and hypotension is less clear.  Several recent prospective studies of patients with CKD, with or without DM, undergoing IV contrast CT scans showed an incidence of CIN ranging from approximately 5-12%[8-11].  CKD was defined as a GFR of <48 cc/min per 1.73 m2 or a baseline serum creatinine of >1.75 mg/dL.  In general, the patients more likely to develop CIN were those with DM and/or GFR values of <30 cc/min per 1.73 m2.  Interestingly, patients with DM alone did not have an increased risk of CIN.  For patients with GFR values of 45-60 cc/min/1.73 m2, the overall risk of contrast nephropathy was 0-5.7%[8,10].

Should the patient be so unfortunate as to experience CIN, what are the consequences?  Clinically significant CIN requiring dialysis within 30 days of contrast-enhanced CT is quite rare.  In several observational and prospective studies, the incidence of dialysis within 30 days of contrast exposure for CT in patients with GFRs of <60 cc/min was 0-0.2%[1,2,10].  Although a reassuring statistic, this finding should be taken with a grain of salt.

The development of CIN portends a bleak long-term outcome for those whom it affects.  In one study, every patient with a baseline GFR <30 cc/min who developed CIN required permanent dialysis 15 months after the fact.  Of those with the same creatinine clearance who did not experience CIN, 25% required dialysis over the same period[1].  In addition, several observational studies show a significant increase in in-hospital and one-year mortality, ranging from 6-34% and 12-54%, respectively, in patients who underwent coronary angiography with subsequent serum creatinine increases of >25% from baseline.  In these same studies, the incidence of in-hospital and one-year death was 0.1-7% and 2.7-19.4%, respectively, in those who did not experience CIN[4].  Although it is unclear whether the association is causal or incidental, these studies clearly show an association between CIN and worsened outcomes[3].  This link underscores the importance of identifying risk factors for CIN, forgoing contrast in those who are most predisposed, and developing ways to reduce its incidence.

Many methods have been proposed to circumvent or ameliorate CIN, so only a brief overview will be provided here.  Treatments tested in randomized controlled trials include saline hydration, N-acetylcysteine (NAC), sodium bicarbonate, dialysis, diuretics, theophylline, and vitamin C.  Currently, a popular prophylactic regimen includes IV saline (1 mL/kg/h) 6-12 hours before and after the study, as well as NAC 600-1200 mg BID the day before and the day of contrast exposure.  Addition of furosemide or mannitol to IV saline may worsen outcomes[14], theophylline has no benefit, and vitamin C has been shown to have a modest but statistically significant benefit in CKD patients[15].  Although these treatments have been studied thoroughly, there is considerable disagreement, and few randomized controlled trials have shown statistically significant results.  The lack of definitive prevention strategies and treatments may be secondary to a fundamental misunderstanding of the pathophysiology, or to variables unrelated to contrast.
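
For concreteness, the weight-based saline component of the regimen above reduces to simple arithmetic (a toy illustration of the 1 mL/kg/h rate only, not clinical guidance; the function name and the 80 kg example patient are my own):

```python
def saline_rate_ml_per_h(weight_kg):
    # IV isotonic saline at 1 mL/kg/h, run 6-12 hours before and after contrast
    return weight_kg * 1.0

# For a hypothetical 80 kg patient:
rate = saline_rate_ml_per_h(80)
print(rate)        # 80.0 mL/h
print(rate * 12)   # 960.0 mL over a 12-hour pre-contrast window
```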

In one retrospective study of about 32,000 patients who underwent a variety of imaging procedures without contrast, the authors still found significant rates of apparent acute kidney injury (AKI).  Among patients with serum creatinine levels <1.2 mg/dL, 27% had a >25% rise in serum creatinine following the study.  The same increase was found in 16% of patients with baseline creatinine levels of 2-3 mg/dL, and in 14% of those with baselines >3 mg/dL[16].  This study suggests that perhaps too much fault is placed on contrast and that the patient's concurrent diseases may be equally to blame.

Perhaps as more randomized controlled studies investigating routine imaging with contrast emerge, we will have a clearer picture of whom we should be reluctant to give contrast to and how to reduce their risk of AKI.  Currently, the strongest predictors of CIN are a GFR <30 cc/min, NYHA class III/IV CHF, DM, increased contrast loads, and coronary angiography.  Although the potential for CIN increases with worsening renal function, the incidence after CT imaging does not surpass 13% even for the most at-risk patients, and is 0-5% in those with mild or no CKD.  The most concerning repercussions of CIN are not necessarily immediate dialysis but long-term kidney survival and overall mortality.  Use of low-osmolal and iso-osmolal agents may have a benefit over their high-osmolal counterparts.  Even though not conclusively substantiated, IV hydration and NAC before and after contrast administration may reduce the incidence of CIN.

Although history and physical reign supreme in identifying a disease process, at times only an invasive radiologic study will confirm the diagnosis.  Sometimes the patient's best option is not a safe one, and the risks and benefits should be carefully weighed.  When possible, non-contrast CT or MRI without gadolinium should be employed to avoid potential CIN or nephrogenic systemic fibrosis.  Ultimately, if the results of a study won't change management, one should probably consider not ordering it.

For the patient discussed earlier, as a last resort, he underwent the CT with iodixanol contrast and the aforementioned saline/NAC prophylactic regimen.  He did not develop CIN and turned out to have colitis of the ascending colon.

Dr. Fusaro is a 1st year resident at NYU Langone Medical Center

Peer reviewer David Goldfarb, MD, Nephrology Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons.



  1. Kim SM, Cha RH, Lee JP, Kim DK, Oh KH, Joo KW, Lim CS, Kim S, Kim YS. Incidence and outcomes of contrast-induced nephropathy after computed tomography in patients with CKD: a quality improvement report. Am J Kidney Dis. 2010 Jun;55(6):1018-1025.
  2. Weisbord SD, Mor MK, Resnick AL, Hartwig KC, Sonel AF, Fine MJ, Palevsky PM. Prevention, Incidence, and Outcomes of Contrast-Induced Acute Kidney Injury. Arch Intern Med. 2008;168(12):1325-1332.
  3. Solomon RJ, Mehran R, Natarajan MK, Doucet S, Katholi RE, Staniloae CS, Sharma SK, Labinaz M, Gelormini JL, Barrett BJ. Contrast-induced nephropathy and long-term adverse events: cause and effect? Clin J Am Soc Nephrol. 2009 Jul;4(7):1162-9. Epub 2009 Jun 25.
  4. Rudnick M, Feldman H. Contrast-induced nephropathy: what are the true clinical consequences? Clin J Am Soc Nephrol. 2008 Jan;3(1):263-72.
  5. Mehran R, Aymong ED, Nikolsky E, Lasic Z, Iakovou I, Fahy M, Mintz GS, Lansky AJ, Moses JW, Stone GW, Leon MB, Dangas G. A simple risk score for prediction of contrast-induced nephropathy after percutaneous coronary intervention: development and initial validation. J Am Coll Cardiol. 2004 Oct 6;44(7):1393-9.
  6. McCullough PA, Sandberg KR. Epidemiology of contrast-induced nephropathy. Rev Cardiovasc Med. 2003;4(suppl 5):S3-S9.
  7. Rudnick MR, Goldfarb S, Wexler L, Ludbrook PA, Murphy MJ, Halpern EF, Hill JA, Winniford M, Cohen MB, VanFossen DB. Nephrotoxicity of ionic and nonionic contrast media in 1196 patients: a randomized trial. The Iohexol Cooperative Study. Kidney Int. 1995 Jan;47(1):254-61.
  8. Kim SM, Cha RH, Lee JP, Kim DK, Oh KH, Joo KW, Lim CS, Kim S, Kim YS. Incidence and outcomes of contrast-induced nephropathy after computed tomography in patients with CKD: a quality improvement report. Am J Kidney Dis. 2010 Jun;55(6):1018-25. Epub 2010 Jan 25.
  9. Weisbord SD, Mor MK, Resnick AL, Hartwig KC, Palevsky PM, Fine MJ. Incidence and outcomes of contrast-induced AKI following computed tomography. Clin J Am Soc Nephrol. 2008 Sep;3(5):1274-81. Epub 2008 May 7.
  10. Parfrey PS, Griffiths SM, Barrett BJ, Paul MD, Genge M, Withers J, Farid N, McManamon PJ. Contrast material-induced renal failure in patients with diabetes mellitus, renal insufficiency, or both. A prospective controlled study. N Engl J Med. 1989 Jan 19;320(3):143-9.
  11. Radiocontrast-associated renal dysfunction: a comparison of lower-osmolality and conventional high-osmolality contrast media [published erratum appears in AJR Am J Roentgenol 1991 Oct; 157(4):895]
  12. Barrett BJ, Parfrey PS, Vavasour HM, McDonald J, Kent G, Hefferton D, O’Dea F, Stone E, Reddy R, McManamon PJ. Contrast nephropathy in patients with impaired renal function: high versus low osmolar media. Kidney Int 1992 May;41(5):1274-9.
  13. Kuhn MJ, Chen N, Sahani DV, Reimer D, van Beek EJ, Heiken JP, So GJ. The PREDICT study: a randomized double-blind comparison of contrast-induced nephropathy after low- or isoosmolar contrast agent exposure. AJR Am J Roentgenol. 2008 Jul;191(1):151-7.
  14. Solomon R, Werner C, Mann D, D'Elia J, Silva P. Effects of saline, mannitol, and furosemide to prevent acute decreases in renal function induced by radiocontrast agents. N Engl J Med. 1994 Nov 24;331(21):1416-20.
  15. Spargias K, Alexopoulos E, Kyrzopoulos S, Iokovis P, Greenwood DC, Manginas A, Voudris V, Pavlides G, Buller CE, Kremastinos D, Cokkinos DV. Ascorbic acid prevents contrast-mediated nephropathy in patients with renal dysfunction undergoing coronary angiography or intervention. Circulation. 2004 Nov 2;110(18):2837-42. Epub 2004 Oct 18.
  16. Newhouse JH, Kho D, Rao QA, Starren J. Frequency of serum creatinine changes in the absence of iodinated contrast material: implications for studies of contrast nephrotoxicity. AJR Am J Roentgenol. 2008 Aug;191(2):376-82.

How Much Do We Know About HDL Cholesterol?

January 29, 2014

By Gregory Katz, MD

Peer Reviewed

As levels of HDL cholesterol increase, rates of heart disease go down. It’s this fact that has given HDL its reputation as the “good cholesterol,” serving a crucial role in reverse cholesterol transport. According to our models, HDL ferries cholesterol away from our arteries – where its buildup leads to heart disease and stroke – and back towards our liver, safely out of harm’s way. The epidemiology backs this up: people with higher levels of HDL tend to have a lower risk of heart disease.[1] But why don’t pharmacologic efforts to raise HDL cholesterol reduce morbidity and mortality from heart disease the same way that statins do? As more data on HDL-raising efforts are published, both scientists and the popular press [3] are beginning to eulogize the HDL hypothesis. So how can we reconcile these discordant pieces of evidence? To answer that question, we must take a look at what the data tell us about the role of HDL in heart disease.

CETP Inhibitors

Cholesteryl ester transfer protein (CETP) transfers cholesterol from HDL to the atherogenic particles of VLDL and LDL, making it a promising target for drug development. Indeed, the initial studies on CETP inhibitors showed that their administration raised HDL cholesterol while simultaneously lowering LDL cholesterol. On paper, it sounded like a win-win. And then the medical world was shocked in 2006 when Pfizer was forced to halt phase III trials of torcetrapib, a CETP inhibitor, after it was shown to increase mortality by 60% when prescribed in conjunction with atorvastatin, compared to atorvastatin alone. [5] Pfizer spent over $800 million on its failed HDL-raising attempt. [6] Some suggested that the mortality increase was due to adverse effects such as raised blood pressure. However, subsequent data analysis found that torcetrapib also failed to stop the progression of carotid intima-media thickness, [7] further confirming its lack of efficacy in halting or reversing atherosclerosis. Another CETP inhibitor, dalcetrapib, was tested in the dal-OUTCOMES study and found to produce no reduction in cardiovascular events despite earlier trials confirming its seemingly positive impact on HDL and LDL cholesterol levels. [8] The previous failures of CETP inhibitors to improve cardiovascular outcomes have not deterred pharmaceutical companies from pursuing this mechanism of disease prevention, as both Merck and Lilly are optimistic that their own medications in this class will lead to better outcomes. [9]


Niacin

Nicotinic acid, also known as vitamin B3 or niacin, raises HDL by as much as 35% by reducing cholesterol transfer from HDL to VLDL and by delaying HDL clearance. [10] Several trials have evaluated the effect of niacin on cardiovascular outcomes. The HATS trial evaluated simvastatin plus niacin and found not only a rise in HDL levels but regression of atherosclerotic lesions and a reduced incidence of coronary events. [11] This trial did not compare niacin plus a statin to statin alone, however, so the marginal benefit of raising HDL could not be tested. The ARBITER 6-HALTS study compared statin plus niacin with statin plus ezetimibe and found regression of carotid intima-media thickness in patients taking niacin plus statin versus progression in the other group. [12] As in HATS, the marginal benefit of raising HDL on top of statin use could not be evaluated. In addition, this was an open-label trial, so blinding was not preserved. The AIM-HIGH trial, published in 2011, aimed to settle some of these questions by investigating the effect of niacin versus placebo in patients already taking statins. [13] AIM-HIGH was terminated after finding no incremental benefit from raising HDL with niacin on top of statin therapy. [14] And recently, the HPS2-THRIVE study found no benefit when adding extended-release niacin plus laropiprant to background statin therapy. [15] The evidence for niacin is clearly mixed, but it should be noted that no study has shown an additional outcome benefit from the addition of niacin to statin therapy. It’s interesting to note that while niacin may increase HDL cholesterol levels, it does not improve the functioning of HDL particles. [16] If adding niacin does not increase cholesterol efflux from the vessels, it makes intuitive sense that there would be no disease-modifying benefit from its addition. This research also suggests that there may be more to the HDL story than simply increasing the amount of cholesterol it carries.

Given the paucity of evidence that pharmacologic HDL raising treatments have any clinical benefit, many have asked the question of whether high HDL is actually protective against heart disease or whether it is simply a marker for some other mechanism of decreased risk. In 2012, Voight et al published a Mendelian randomization paper in the Lancet that investigated whether genetic polymorphisms that raise HDL also protect against heart disease. The authors found that the genetic mechanisms that raise HDL do not seem to protect against heart disease. [17] When you synthesize the results of this study with the previously discussed data, it calls the currently proposed mechanisms of HDL’s protective effect into question.

HDL Hypothesis, revisited

In a previous Clinical Correlations post, [18] the benefit of measuring lipoproteins instead of cholesterol as a means of assessing heart disease risk was discussed. It appears that apolipoprotein B is a better measure of cardiac risk than LDL cholesterol alone. The reason for this has to do with the mechanism of heart disease: atherosclerosis is the result of subendothelial penetration of an apoB-containing particle that stimulates inflammatory cell recruitment and oxidation/phagocytosis of the offending particle. Measuring apoB is a better assessment of risk because LDL cholesterol measures only the amount of cholesterol contained in the particles; it says nothing about how many particles there are. However, since HDL doesn’t always have one apolipoprotein per particle like LDL, VLDL, and Lp(a), measuring apolipoproteins isn’t a simple solution. In addition, the protective role of HDL is not just a matter of particle number: the smaller the HDL particle (i.e., the less cholesterol it contains), the better its anti-atherogenic activity. [19] We should also note that HDL does more to fight atherosclerosis than simply participate in reverse cholesterol transport. HDL protects against LDL oxidation, it is anti-inflammatory, [20] it promotes maintenance of endothelial function, and it may even interfere with thrombosis. [21] And a higher number of smaller HDL particles does seem to be protective against cardiovascular disease. [22,23] Investigation into apoA1-Milano, a mutated variant of the apolipoprotein in HDL found in some Italians, has further complicated our understanding of HDL’s role in atherosclerosis. Patients with this variant of apoA1 tend to have a lower risk of coronary disease even though they have lower HDL-C and higher triglycerides. [24] And a trial of intravenous apoA-1 Milano in post-ACS patients found regression of coronary plaques after five weeks. [25]
As the role of HDL in heart disease is further investigated, it becomes clear that its role is more complicated than a simple HDL cholesterol measurement can capture.

The research we have on the pharmacologic manipulation of HDL raises more questions than it answers. There are several areas to investigate before we should move on from attempting therapeutic intervention on HDL. How do these interventions affect HDL particle size and, perhaps more importantly, HDL particle function? What impact do these interventions have on HDL particle number? Are CETP inhibitors and niacin simply increasing the amount of cholesterol carried by HDL, and thus confounding our efforts to measure their effect? Investigating methods of simply raising the cholesterol carried by HDL particles may well be barking up the wrong tree. It’s clear from epidemiological data that high HDL is protective against heart disease. It’s also likely that both HDL particle number and particle function are better markers of this protective effect than the cholesterol carried by these particles. But there is much research to be done before we are able to provide our patients with a real benefit from intervening on their low HDL numbers. We advocate that investigators in this area not hang their “HATS” on the research to date and “AIM-HIGH” for a possible therapeutic intervention that increases HDL particles or improves their functioning.

Dr. Gregory Katz is a 2nd year resident at NYU Langone Medical Center

Peer Reviewed by Robert Donnino, Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


1. Rahilly-Tierney CR, Spiro A 3rd, Vokonas P, Gaziano JM. Relation between high-density lipoprotein cholesterol and survival to age 85 years in men (from the VA normative aging study). Am J Cardiol. 2011 Apr 15;107(8):1173-7. doi: 10.1016/j.amjcard.2010.12.015. Epub 2011 Feb 4.

2.  Ng DS, Wong NC, Hegele RA. HDL-is it too big to fail? Nat Rev Endocrinol. 2013 Jan 15. doi: 10.1038/nrendo.2012.238.


4.  Brousseau ME, Schaefer EJ, Wolfe ML, Bloedon LT, Digenio AG, Clark RW, Mancuso JP, Rader DJ. Effects of an inhibitor of cholesteryl ester transfer protein on HDL cholesterol. N Engl J Med. 2004 Apr 8;350(15):1505-15.


6.  Cutler DM. The demise of the blockbuster? N Engl J Med. 2007 Mar 29;356(13):1292-3.


8.  Schwartz GG, Olsson AG, Abt M, Ballantyne CM, Barter PJ, Brumm J, Chaitman BR, Holme IM, Kallend D, Leiter LA, Leitersdorf E, McMurray JJ, Mundl H,Nicholls SJ, Shah PK, Tardif JC, Wright RS; dal-OUTCOMES Investigators. Effects of dalcetrapib in patients with a recent acute coronary syndrome. N Engl J Med. 2012 Nov 29;367(22):2089-99. doi: 10.1056/NEJMoa1206797. Epub 2012 Nov 5.


10. Grundy SM, Mok HY, Zech L, Berman M. Influence of nicotinic acid on metabolism of cholesterol and triglycerides in man. J Lipid Res. 1981 Jan;22(1):24-36.

11.  Brown BG, Zhao XQ, Chait A, Fisher LD, Cheung MC, Morse JS, Dowdy AA, Marino EK, Bolson EL, Alaupovic P, Frohlich J, Albers JJ. Simvastatin and niacin, antioxidant vitamins, or the combination for the prevention of coronary disease. N Engl J Med. 2001 Nov 29;345(22):1583-92.

12.  Villines TC, Stanek EJ, Devine PJ, et al. The ARBITER 6-HALTS trial. J Am Coll Cardiol. 2010;55(23):2721-2726.

13.  Boden WE, Probstfield JL, et al. Niacin in patients with low HDL cholesterol levels receiving intensive statin therapy. N Engl J Med. 2011;365:2255-2267. doi:10.1056/NEJMoa1107579.

14. Walton-Shirley, M. AIM-HIGH: Maybe we should hold onto our HATS.

15. Armitage J, Jiang L. HPS2-THRIVE randomized placebo-controlled trial in 25,673 high-risk patients of ER niacin/laropiprant: trial design, pre-specified muscle and liver outcomes, and reasons for stopping study treatment. Eur Heart J. 2013. doi:10.1093/eurheartj/eht055.

16.  Rader D. Niacin Added to Statin Therapy Increases HDL Cholesterol Levels Does Not Improve HDL Functionality. 62nd Annual Scientific Session of the American College of Cardiology in San Francisco (Abstract # 919-7).

17. Voight BF, Peloso GM, Orho-Melander M, et al. Plasma HDL cholesterol and risk of myocardial infarction: a mendelian randomisation study. Lancet. 2012;380(9841):572-580. doi:10.1016/S0140-6736(12)60312-2.

18.  Nair N. Should We Measure Apolipoproteins to Evaluate Coronary Heart Disease Risk? Clinical Correlations, August 24, 2012.

19.  Camont L, Chapman MJ, Kontush A. Biological activities of HDL subpopulations and their relevance to cardiovascular disease. Trends Mol Med. 2011;17(10):594-603.

20.  Barter PJ, Nicholls S, Rye KA, Anantharamaiah GM, Navab M, Fogelman AM. Antiinflammatory properties of HDL. Circ Res. 2004 Oct 15;95(8):764-72.

21. Griffin JH, Kojima K, Banka CL, Curtiss LK, Fernández JA. High-density lipoprotein enhancement of anticoagulant activities of plasma protein S and activated protein C. J Clin Invest. 1999 Jan;103(2):219-27.

22. Vergeer M, Boekholdt SM, et al. Genetic variation at the phospholipid transfer protein locus affects its activity and high-density lipoprotein size and is a novel marker of cardiovascular disease susceptibility. Circulation. 2010;122:470-477. doi:10.1161/CIRCULATIONAHA.109.912519.

23.  Kontush A, Chantepie S, Chapman MJ. Small, dense HDL particles exert potent protection of atherogenic LDL against oxidative stress. Arterioscler Thromb Vasc Biol. 2003 Oct 1;23(10):1881-8. Epub 2003 Aug 14.

24. Franceschini G, Sirtori CR, Capurso A 2nd, Weisgraber KH, Mahley RW. A-I Milano apoprotein. Decreased high density lipoprotein cholesterol levels with significant lipoprotein modifications and without clinical atherosclerosis in an Italian family. J Clin Invest. 1980;66(5):892-900. doi:10.1172/JCI109956.

25.  Nissen SE, Tsunoda T, Tuzcu E, et al. Effect of Recombinant ApoA-I Milano on Coronary Atherosclerosis in Patients With Acute Coronary Syndromes: A Randomized Controlled Trial. JAMA. 2003;290(17):2292-2300. doi:10.1001/jama.290.17.2292.



Can Young Patients Get Diverticular Disease?

January 23, 2014

By Aaron Smith, MD

Peer Reviewed

Case: A 35 year-old, overweight female presents to the emergency room with five days of left lower quadrant abdominal pain. The pain is 10/10 in severity and accompanied by nausea, bloating, and loss of appetite.

Diverticulosis, the presence of small colonic outpouchings thought to occur secondary to high pressure within the colon, is an extremely common condition in elderly patients. Recent data suggest that up to 50% of people over the age of 60 have colonic diverticula.[1] When a colonic diverticulum becomes inflamed, the result is diverticulitis, a painful condition that can result in colonic obstruction, perforation, or abscess formation. Diverticulitis is a very common cause of acute abdominal pain in elderly individuals, especially in the United States.[1]

Traditionally, diverticulosis and diverticulitis, together falling under the heading of “diverticular disease,” have been considered diseases of the elderly. That stereotype may have to change. A 2009 study by Etzioni et al. used a 1998 to 2005 nationwide inpatient sample to analyze the care given to 267,000 patients admitted with acute diverticulitis.[2] During this eight-year period, admissions for acute diverticulitis increased by 26%. During the same period, admissions in the 18 to 44 year-old age group increased by 82%, far more rapidly than in the older group. For the younger group, the incidence of diverticulitis necessitating inpatient admission increased from 1 in 6600 to 1 in 4000.
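
As a quick arithmetic check (my own, not from the paper), the shift from 1 in 6600 to 1 in 4000 corresponds to roughly a 65% rise in incidence; the larger 82% figure for admissions is consistent with this, since the size of the young-adult population itself also grew over the period:

```python
# Incidence quoted as 1-in-N: from 1 per 6600 to 1 per 4000 young adults
before = 1 / 6600
after = 1 / 4000
percent_increase = (after - before) / before * 100
print(round(percent_increase))  # 65
```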

Etzioni et al. offer several potential explanations for the rapid rise of diverticulitis cases in young patients. One is that increased use of computed tomography (CT) scanning may have led to a higher rate of detection. This would mean that the actual incidence of diverticulitis has remained stable, but that more cases have been diagnosed. A second possible explanation is that a specific racial or ethnic group with a high rate of diverticulitis, likely Hispanics, may have increased in number between 1998 and 2005, affecting the results. (It has been suggested that Hispanics are prone to a particularly virulent form of diverticulitis at a young age, but the data are scarce.)[3] The dataset used for the study did not include race or ethnicity, and therefore the authors could not examine racial or ethnic data and could not exclude the possibility of a demographic shift affecting the numbers. The authors rightly note, however, that there is a distinct possibility that from 1998 to 2005 there was a real and dramatic increase in the rate of diverticulitis in younger patients. Why? Diverticulitis has been linked to obesity, poor fiber intake, and the western lifestyle in general, and so its increased incidence is most likely related to America’s current obesity epidemic.[3-5]

Two lessons can be gleaned from the data presented in this paper. First is a reminder of that favorite medical axiom, “common things are common.” When a disease is highly prevalent in the overall population, it may be highly prevalent in subsets of the population not stereotypically associated with the disease. Take diverticular disease as an example. According to the Etzioni study, diverticulitis is roughly ten times more common in patients above the age of 75 than in patients aged 18 to 44. It is therefore tempting to dismiss diverticulitis as a potential diagnosis in young patients, because diverticulitis is so much more common in the elderly. This would be a mistake. With an incidence of 1 in 4000, diverticular disease in young patients is more common than rare causes of abdominal pain classically associated with young people. Symptomatic intestinal malrotation, for example, is classically considered a disease of the young, but is less common than diverticulitis, with a prevalence of 1 in 6000.[6] Decades of high colonic pressure in the elderly increase the chances of diverticula formation, and diverticulitis is certainly less common in the young than it is in the elderly. Still, less common does not equal uncommon.

The second lesson to be learned is that due to increases in obesity and sedentary lifestyle, clinicians should rethink which conditions are diseases of the elderly and which are not. Type II diabetes used to be called adult-onset diabetes until it became so common in children and adolescents that the term became a misnomer. Like type II diabetes, diverticular disease is associated with obesity and sedentary lifestyle, and its increased prevalence can be thought of as a correlate to the increased prevalence of other diseases of the western lifestyle (diabetes, hypertension, coronary artery disease…). If the population of the United States continues to grow more obese and inactive, diverticular disease may become more common.

The patient described in the introduction received a CT scan and was diagnosed with acute diverticulitis. Even after imaging confirmed the diagnosis, the patient’s primary physician was hesitant to accept that diverticulitis was the cause of the patient’s abdominal pain, because she was “too young” to have diverticulitis. The Etzioni paper and other recent studies suggest that this mode of thinking may need to be reexamined.[7] Diverticulitis is a diagnosis that should be considered in all patients with abdominal pain, and not just in the elderly. Remember: common things are common, even in young people.

Dr. Aaron Smith is a former medical student and now a first-year transitional medicine resident at Harbor-UCLA.

Peer reviewed by Michael Poles, MD, Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


[1] Weizman AV, Nguyen GC. Diverticular disease: epidemiology and management. Can. J. Gastroenterol. 2011;25(7):385–389.

[2] Etzioni DA, Mack TM, Beart RW Jr, Kaiser AM. Diverticulitis in the United States: 1998–2005. Annals of Surgery. 2009;249(2):210–217.

[3] Zaidi E, Daly B. CT and clinical features of acute diverticulitis in an urban U.S. population: rising frequency in young, obese adults. AJR Am J Roentgenol. 2006;187(3):689–694.

[4] Aldoori W, Ryan-Harshman M. Preventing diverticular disease: review of recent evidence on high-fibre diets. Can Fam Physician. 2002;48:1632–1637.

[5] Hjern F, Johansson C, Mellgren A, et al. Diverticular disease and migration–the influence of acculturation to a western lifestyle on diverticular disease. Aliment Pharmacol Ther. 2006;23:797–805.

[6] Berseth CL. Disorders of the intestines and pancreas. In: Taeusch WH, Ballard RA, eds. Avery’s Diseases of the Newborn. 7th ed. Philadelphia: WB Saunders; 1998:918.

[7] van de Wall BJ, Poerink JA, Draaisma WA, Reitsma JB, Consten EC, Broeders IA. Diverticulitis in young versus elderly patients: a meta-analysis. Scand J Gastroenterol. 2013.

From The Archives – The Hangover: Pathophysiology and Treatment of an Alcohol-Induced Hangover

January 16, 2014

Please enjoy this post from the archives, dated May 27, 2011

By Anthony Tolisano

Faculty Peer Reviewed

The sunlight forces its way into your eyes, stabbing at your cortex. Suddenly, a wave of nausea and diarrhea grips your stomach, threatening to evacuate its contents. You rush to the bathroom, tripping over the clothes that speckle your apartment. Your heart pounds inside your chest and your hands shake ever so subtly. Your mind is in a fog and the details of last night’s party are a blur. Sound familiar?

If you’re anything like the millions of people who have ever drunk too much alcohol, you’ll immediately recognize the sordid signs of a hangover. It’s a condition that is well recognized yet poorly understood. Descriptions abound in pop culture and literature, saturating the public consciousness. Yet, a great deal of scientific research has had only limited success in explaining its pathophysiology.

The basic concept is simple. A hangover is secondary to binge drinking, the consumption of an excessive amount of alcohol in a relatively short time period. The National Institute on Alcohol Abuse and Alcoholism defines binge drinking as a rise in blood alcohol content (BAC) to 0.08% or greater within two hours, or more simply, the consumption of at least four (for women) or five (for men) alcoholic beverages in a single drinking session.[1] Of note, most people who binge drink consume far more than 4-5 drinks, imbibing an average of eight per night.[2] Binge drinking is extremely common, especially among young adults, and remains a leading cause of death each year among college-aged students in the US,[3] with similar rates of problematic alcohol use worldwide across Europe, South America, and Australia.[4] The dangers of consuming too much alcohol are obvious: motor vehicle accidents, sexual assaults, and violence are all more common among the acutely intoxicated.[5] A hangover, however, is largely regarded as an annoyance. It occurs after acute intoxication has subsided, reaching its greatest intensity as the BAC approaches zero. It lasts for up to 24 hours and comprises a myriad of signs and symptoms, from constitutional (fatigue and malaise) and gastrointestinal (loss of appetite, diarrhea, and nausea) to nociceptive (headache and myalgias) and autonomic (tachycardia and tremor). Sleep disturbances, including increased slow-wave and decreased REM sleep, and neurocognitive symptoms, such as decreased attention and memory problems, are also common.[6] Furthermore, the impairment of psychomotor performance can be substantial. In fact, if you think sobering up is sufficient for safely driving home, you may be wrong: studies have shown that individuals suffering from hangover symptoms have a decreased ability to operate a motor vehicle.[7,8] Combined with the loss of work days and decreased productivity from hangovers, perhaps this annoying day-after affair isn’t so benign after all.

So what is the pathophysiological explanation for a hangover? If a hangover is secondary to the clearance of ethanol from the body (when the BAC returns to zero), isn’t it the same phenomenon as acute alcohol withdrawal? The answer, quite simply, is no. Alcohol withdrawal is secondary to the development of physiological alcohol dependence over the course of many drinking episodes, whereas a hangover occurs after one night of drinking and does not require alcohol dependence. Another possibility is that a hangover is the result of the direct effects of last night’s extracurricular activities; that is, the lasting effects of electrolyte imbalances, hypoglycemia, and dehydration, which persist longer than the ethanol itself. Unfortunately, studies have shown little correlation between hangover symptoms and serum electrolytes, blood glucose, or markers of dehydration such as antidiuretic hormone and renin. A similar argument has been made for acetaldehyde, a metabolic breakdown product of ethanol that has vasodilatory and gastrointestinal effects on the body. Again, only limited evidence links acetaldehyde levels with hangover severity.[9] Perhaps understanding the effect of alcohol consumption on the immune system will elucidate the underlying cause of the hangover. A number of studies have shown that following a night of heavy drinking there is an upregulation of cytokines and prostaglandins. Specifically, increased plasma levels of interleukin-10, interleukin-12, and interferon-gamma were measured in individuals suffering from hangover symptoms long after their last drink. Prostaglandin E2, thromboxane B2, and C-reactive protein were similarly increased. Most importantly, the serum levels of these inflammatory markers were directly related to the degree of hangover symptoms.[10] This association has been supported by evidence that treatment with cyclooxygenase inhibitors decreases these inflammatory factors and, in turn, hangover symptoms.[11] Unfortunately, this finding has not been consistently replicated in well-designed studies.

While the exact biological basis of a hangover remains somewhat of a mystery, there is good evidence that certain factors exacerbate hangover symptoms. Certainly, sleep deprivation and the simultaneous consumption of other psychoactive substances worsen them. Perhaps less intuitively, so does the type of alcohol, even when two drinks contain the same alcohol content by volume. Simply put, darker liquors generally cause worse hangovers. This is thought to be due to so-called congeners, byproducts of the fermentation process related to the different casks and grains used. For example, amber-colored bourbon has a congener level 37 times that of vodka and causes worse hangover symptoms than its clearer cousin.[12] At the end of the day, hangovers are unpleasant, to say the least. While a Google search for hangover remedies reveals thousands of hits for reportedly “clinically-proven” cures, there are really no good options. No evidence supports B vitamins, tomato juice, or fatty foods.[13] Other, more colorful options, such as artichoke extract, have shown no difference between study and control groups.[14] The one bright spot? Prickly pear (Opuntia ficus indica) extract has been shown to decrease the inflammation associated with hangovers and the subsequent incidence of nausea, dry mouth, and anorexia.[15] As you may have suspected, however, the most important factor in preventing a hangover remains abstinence or moderation. In other words, avoid binge drinking entirely, alternate alcoholic and non-alcoholic beverages, or spread your alcohol consumption over a longer period of time.

Anthony Tolisano is a 3rd year medical student at NYU Langone Medical Center

Peer reviewed by Barbara Porter, section editor, Clinical Correlations

Image courtesy of Google Images


1. National Institute on Alcohol Abuse and Alcoholism. NIAAA council approves definition of binge drinking. NIAAA Newsletter. 2004;No. 3:3. Available at /Newsletter/winter2004/Newsletter_Number3.htm#council. Accessed November 17, 2010.

2. Naimi TS, Nelson DE, Brewer RD. The intensity of binge alcohol consumption among U.S. adults. Am J Prev Med. 2010;38(2):201-207.

3. Boyd CJ, McCabe SE, Morales M. College students’ alcohol use: a critical review. Annu Rev Nurs Res. 2005;23:179-211.

4. Karam E, Kypri K, Salamoun M. Alcohol use among college students: an international perspective. Curr Opin Psychiatry. 2007;20(3):213-221.

5. Mukamal KJ. Overview of the risks and benefits of alcohol consumption. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2010.

6. Prat G, Adan A, Sánchez-Turet M. Alcohol hangover: a critical review of explanatory factors. Hum Psychopharmacol. 2009;24(4):259-267.

7. Prat G, Adan A, Pérez-Pàmies M, Sànchez-Turet M. Neurocognitive effects of alcohol hangover. Addict Behav. 2008;33(1):15-23. Epub 2007 May 8.

8. Stephens R, Ling J, Heffernan TM, Heather N, Jones K. A review of the literature on the cognitive effects of alcohol hangover. Alcohol Alcohol. 2008;43(2):163-70. Epub 2008 Jan 31.

9. Penning R, van Nuland M, Fliervoet LA, Olivier B, Verster JC. The pathology of alcohol hangover. Curr Drug Abuse Rev. 2010;3(2):68-75.

10. Kim DJ, Kim W, Yoon SJ, et al. Effects of alcohol hangover on cytokine production in healthy subjects. Alcohol. 2003;31(3):167-170.

11. Kaivola S, Parantainen J, Osterman T, Timonen H. Hangover headache and prostaglandins: prophylactic treatment with tolfenamic acid. Cephalalgia. 1983;3(1):31-36.

12. Rohsenow DJ, Howland J, Arnedt JT, et al. Intoxication with bourbon versus vodka: effects on hangover, sleep, and next-day neurocognitive performance in young adults. Alcohol Clin Exp Res. 2010;34(3):509-518. Epub 2009 Dec 17.

13. Pittler MH, Verster JC, Ernst E. Interventions for preventing or treating alcohol hangover: systematic review of randomised controlled trials. BMJ. 2005;331(7531):1515-1518.

14. Pittler MH, White AR, Stevinson C, Ernst E. Effectiveness of artichoke extract in preventing alcohol-induced hangovers: a randomized controlled trial. CMAJ. 2003;169(12):1269-1273.

15. Wiese J, McPherson S, Odden MC, Shlipak MG. Effect of Opuntia ficus indica on symptoms of the alcohol hangover. Arch Intern Med. 2004;164(12):1334-1340.


Is it Time to Skip the Gym?

January 15, 2014

By Robert Mocharla, MD

Peer Reviewed

No. Sorry. Despite such reasonable excuses as “I forgot my iPod,” “It’s pouring rain,” or “Game of Thrones is on,” an exhaustive literature search will not reveal a shred of evidence that you or most of your patients should skip daily exercise. However, one subset of patients should indeed skip workouts regularly: endurance athletes (e.g., cyclists, swimmers, long-distance runners, and other competitive athletes). While this may not describe the majority of our patients, growing evidence suggests that in these individuals overtraining can actually be maladaptive to overall health.

The question of too much exercise was first asked many years ago when it was noticed that, despite rigorous training schedules, the performance of certain athletes actually began to decline months into their training routines. The athletes burned out. But why? Common sense tells us that the more exercise we engage in, the better shape we will be in. Surprisingly, this is not always the case, and until recently, little was known about this phenomenon.

An entity known as Overtraining Syndrome (OTS) has gained widespread acceptance as the cause of deteriorating athletic performance [1]. OTS has been an active area of research since the early 1990s, when it was noticed that not only can athletic performance in endurance athletes decline over time, but these athletes can also experience biochemical, psychological, and immunological abnormalities [2]. Currently, there is no universally accepted theory as to the cause of OTS. One theory with growing evidence is the “Cytokine Hypothesis” [3]. It purports that the repetitive joint and muscle trauma seen with excessive physical training elicits a response similar to that seen in chronic inflammation: inflammatory cytokines at sites of injury activate monocytes, which then release pro-inflammatory IL-1β, IL-6, and TNF-α, and the body enters a catabolic, inflammatory state. As such, one might hypothesize that a chronic inflammatory state would be evident in biochemical markers (e.g., anemia or elevated ESR and CRP). However, to date, no studies have shown a relationship between OTS and any of these biomarkers of chronic inflammation. In part, this is why a diagnosis of OTS is often difficult to reach (and should always be one of exclusion, after other systemic processes are ruled out) [4].

Another theory involves dysregulation of the Hypothalamic-Pituitary Axis (as seen in amenorrheic female athletes). During exercise, the body acutely releases cortisol, epinephrine, and norepinephrine to enhance cardiovascular function and redistribute metabolic fuel. These hormones quickly return to baseline levels following a workout; interestingly, however, endurance athletes suffering from burnout show higher baseline cortisol levels [5]. This may lead to negative effects on normal metabolism, healing, and immunity. In fact, studies have shown an increased susceptibility to infection in overtrained athletes, although the mechanism is not fully understood [6,7]. Studies have not been able to identify differences in leukocyte number or distribution between overtrained and healthy athletes, and the true etiology may instead be impaired functionality of immune cells. It is important to note that the body’s normal response to intermittent exercise is adaptive overall. When allowed adequate time to recover, inflammatory cytokines and hormones decrease to normal levels, and the body adapts to repeated exercise by increasing muscle mass, capillary density, endothelial cell function, and glucose utilization, among other things [8]. It is the lack of recovery time that is problematic in OTS.

The most common initial manifestation of OTS is mood change. An otherwise emotionally stable athlete may become increasingly depressed and chronically fatigued, sleep poorly, lose appetite, and lose interest in competition [9]. Unfortunately, patients are rarely recognized in this stage and often go on to develop the hallmark of the syndrome, deteriorating athletic performance. Muscle and joint pain are often present as well. Depending on severity, symptoms can last anywhere from a few weeks to years [10]. Even when declining performance is evident, there are no diagnostic criteria or laboratory tests that can confirm a suspicion of OTS; for now, the diagnosis is purely clinical. A high index of suspicion must be kept for all at-risk groups. While competitive athletes are most classically thought of as high-risk, OTS should also be considered in recreational athletes, who may unknowingly advance their training regimens too hastily. The primary focus of management is rest. Each case must be managed individually with regard to the symptom cluster experienced by the patient. It is recommended that patients rest at least a full 3-5 weeks with minimal to no athletic training [11]. Selective serotonin reuptake inhibitors are increasingly being used to combat mood and appetite symptoms [12]. After recovery, a cyclical workout routine should be established with adequate recovery time between cycles. Patients should be advised to consume a high-carbohydrate diet (upwards of 60-70% of caloric intake) to help facilitate recovery between workouts [13]. Fortunately, athletes can and do recover after the appropriate interventions and precautions are taken.

It is difficult to predict who will develop or re-experience OTS since the threshold of exercise tolerance varies widely among athletes. Therefore, patient education and prevention are critical. Studies estimate that up to 10% of vigorously training athletes have or will experience OTS [4]. Athletes should be questioned about their exercise routines and informed about the dangers and warning signs of over-training. Any evidence of psychiatric disturbance or decreased performance should prompt a discussion on the possibility and management of OTS.

Dr. Robert Mocharla is a graduate of NYU School of Medicine

Peer reviewed by Richard Greene, MD, Internal Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Budgett R. The overtraining syndrome. British Journal of Sports Medicine. 1990; 24:231–6.

2. O’Toole, M. Overreaching and overtraining in endurance athletes. Overtraining in Sport. R.B. Kreider, A.C. Fry, M.L. O’Toole, eds. Champaign IL: Human Kinetics Publishers, Inc., 1998; 3-18.

3. Smith, LL. Cytokine hypothesis of overtraining: a physiological adaptation to excessive stress? Medicine & Science in Sports & Exercise. 2000; 32 (2): 317-31.

4. Meeusen R, Duclos M, Foster C, et al. Prevention, diagnosis, and treatment of the overtraining syndrome: joint consensus statement of the European College of Sport Science and the American College of Sports Medicine. Med Sci Sports Exerc. 2013;45(1):186-205.

5. O’Connor, P.J. et al. Mood state and salivary cortisol levels following overtraining in female swimmers. Psychoneuroendocrinology. 1989; 14 (4), 303-310.

6. Mackinnon, LT. Chronic exercise training effects on immune function. Medicine & Science in Sports & Exercise. 2000; 32 (7 Suppl): S369-76.

7. Heath, G. W., E. S. Ford, T. E. Craven, C. A. Macera, K. L. Jackson, and R. R. Pate. Exercise and the incidence of upper respiratory tract infections. Medicine & Science in Sports & Exercise. 1991; 23:152–157.

8. Mandroukas K, Krotkiewski M, Hedberg M, et al. Physical training in obese women. Effects of muscle morphology, biochemistry and function. Eur J Appl Physiol Occup Physiol. 1984;52:355-61.

9. W.P. Morgan, D.R. Brown, J.S. Raglin, P.J. O’Connor and K.A. Ellickson, Psychological monitoring of overtraining and staleness. Medicine & Science in Sports & Exercise. 1987; 21:107–114.

10. Lehmann, M., U. Gastmann, K.G. Petersen, N. Bach, A. Siedel, A.N. Khalaf, S. Fischer, and J. Keul. Training-overtraining: Performance, and hormone levels, after a defined increase in training volume versus intensity in experienced middle and long-distance runners. British Journal of Sports Medicine. 1992; 26:233-242.

11. Koutedakis, Y., Budgett, R., Fullman, L. The role of physical rest for underperforming elite competitors. British Journal of Sports Medicine. 1990; 24(4):248-52.

12. Armstrong LE, VanHeest JL. The unknown mechanism of the overtraining syndrome: clues from depression and psychoneuroimmunology. Sports Med. 2002;32:185-209.

13. Costill DL: Inside Running: Basics of Sports Physiology. Indianapolis: Benchmark Press; 1986.

The Prevention of Post-ERCP Pancreatitis

January 10, 2014

By Shivani K Patel, MD

Peer reviewed

A 61-year-old man with chronic epigastric discomfort presented to the emergency room with severe abdominal pain radiating to his back. He had presented similarly two weeks prior and been hospitalized at an outside hospital, where he underwent endoscopic retrograde cholangiopancreatography (ERCP) with placement of a pancreatic duct stent. His pain initially improved but quickly recurred after discharge. Admission labs showed only a mild leukocytosis with slight elevation of serum lipase. Computed tomography (CT) of the abdomen revealed a pancreatic duct stent associated with peripancreatic fat stranding, edema, and heterogeneity of the gland. He was admitted for management of post-ERCP pancreatitis, raising the clinical questions: how common is this complication, and how can it be prevented?

Endoscopic retrograde cholangiopancreatography (ERCP) combines endoscopy with fluoroscopy to visualize and treat problems in the upper gastrointestinal tract including the hepatobiliary tree and pancreas. There are several complications associated with ERCP, including pancreatitis, bleeding, cholangitis, cholecystitis, and perforation. Pancreatitis is the most common complication, with an incidence of 3.5% in all patients undergoing ERCP. It is usually mild or moderate in severity, but can be severe and potentially fatal in as many as 10% of cases [1]. Given the significant morbidity and healthcare expenditures associated with acute pancreatitis, much effort has been spent devising methods to anticipate and prevent its occurrence.

Various risk factors have been identified as contributors to the development of post-ERCP pancreatitis (PEP). These include both patient- and procedure-related elements, as summarized in a 2003 meta-analysis. “Definite” risk factors include suspected sphincter of Oddi dysfunction (adjusted odds ratio 4.09), female gender (OR 2.23), prior pancreatitis (OR 2.46), precut sphincterotomy (OR 2.71), and pancreatic injection (OR 2.2). Sphincter of Oddi dysfunction (SOD) had the highest odds ratio of the definite risk factors and a pooled incidence of PEP of 10.3%, versus 3.9% in patients without SOD. Several other “likely” risk factors exist, including younger patient age, non-dilated extrahepatic bile ducts, biliary balloon sphincter dilation, failure to clear bile duct stones, higher number of cannulation attempts, pancreatic sphincterotomy, absence of chronic pancreatitis, and normal serum bilirubin levels. Of these, biliary balloon sphincter dilation appeared to have the strongest association, with a pooled incidence of PEP of 9.3% versus 1.9% [2]. The presence of multiple risk factors synergistically increases the risk of PEP. These data enable the identification of patients with a high likelihood of developing PEP, so that additional energies can be concentrated on preventative measures for that population.

A number of strategies have been employed to reduce PEP. The best known among these are pancreatic duct stent placement and peri-procedural non-steroidal anti-inflammatory drug (NSAID) administration. Several studies have shown that pancreatic duct stent placement reduces PEP, most likely by offsetting the hindered ductal drainage caused by post-procedural papillary edema or sphincter of Oddi dysfunction [3]. A 2007 meta-analysis of four randomized prospective trials found a 12% absolute risk reduction in PEP (OR 0.44) in high-risk patients (primarily women with SOD) with pancreatic duct stent placement compared to those without stenting [4]. Thus, pancreatitis is prevented in one of every eight high-risk patients treated with a ductal stent. This benefit of stenting, however, depends on operator expertise and the ability to place the stent without complication: failure of appropriate stent placement has been associated with a subsequent incidence of PEP as high as 65% [5]. Further complications associated with successful pancreatic stenting include stent occlusion, spontaneous migration, and provocation of anatomic change to the duct or parenchyma [6]. Stents are placed with the goal of natural passage out of the ductal system within a few days, but if this does not occur, an endoscopy is required to remove them. Pancreatic duct stents can decrease the incidence of severe PEP, but they are not fail-safe and may add expense and morbidity if placed incorrectly or if another endoscopy is required for their removal [7]. For this reason, prophylactic pancreatic duct stent placement is deemed cost-effective in patients at high risk for PEP, but not in those at low or average risk.
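The one-in-eight figure follows directly from the reported absolute risk reduction: the number needed to treat (NNT) is the reciprocal of the ARR. A minimal sketch of that arithmetic (the 12% input is the pooled ARR quoted above; the function name is ours, for illustration only):

```python
def number_needed_to_treat(absolute_risk_reduction):
    """NNT is the reciprocal of the absolute risk reduction (as a fraction)."""
    return 1.0 / absolute_risk_reduction

# Pooled 12% absolute risk reduction from the 2007 stenting meta-analysis:
nnt = number_needed_to_treat(0.12)
print(round(nnt))  # 8 -> one case of PEP prevented per ~8 high-risk patients stented
```

The same one-line calculation applies to any of the risk reductions discussed in this post.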

Over the past decade, rectally administered NSAIDs have shown promise as a safe, straightforward, and economical alternative for all patients undergoing ERCP, regardless of level of risk. Prevention of PEP is thought to arise from NSAID-induced inhibition of phospholipase A2, cyclooxygenase, and neutrophil-endothelial interactions, all of which contribute to the pathogenesis of pancreatitis [8]. Rectal administration is favored because it allows more complete and rapid bioavailability than the oral route. The results of four randomized controlled trials of rectal NSAID use were assessed in a 2008 meta-analysis, which concluded that prophylactic rectal indomethacin prevented pancreatitis in 1 of every 15 patients treated and was not associated with any definable adverse events [9]. Additional confirmatory high-quality trials were suggested to solidify these results prior to widespread implementation of the technique.

A follow-up randomized, double-blind, placebo-controlled trial was published in the New England Journal of Medicine in 2012. The study assessed over 600 patients undergoing ERCP who were at high risk for post-ERCP pancreatitis based on the patient- and procedure-related factors identified above. Enrolled patients met one or more “major” criteria (clinical suspicion of SOD, prior PEP, pancreatic or precut sphincterotomy, greater than 8 cannulation attempts, pneumatic dilatation of the biliary sphincter, ampullectomy) or two or more “minor” criteria (age less than 50 years and female sex, 2 or more prior episodes of pancreatitis, 3 or more contrast injections into the pancreas with at least one into the tail, opacification of pancreatic acini, pancreatic duct brushing for sample acquisition). Adverse events tracked beyond the primary endpoint of post-ERCP pancreatitis and the secondary endpoint of moderate-to-severe PEP included gastrointestinal bleeding, perforation, infection, renal failure, allergic reaction, myocardial infarction, cerebrovascular accident, and death. All patients underwent endoscopy with possible placement of a pancreatic duct stent and were then given either two 50-mg indomethacin suppositories or two placebos. At least 80% of the study population received pancreatic duct stents; some patients were not stented because of difficulty with placement or access, but the observed benefit of indomethacin was consistent across all patients. PEP developed in 16.9% of patients in the placebo group and only 9.2% of patients in the rectal indomethacin group (absolute risk reduction 7.7 percentage points). Thus, treating 13 high-risk ERCP patients with rectal NSAIDs prevents one episode of pancreatitis. Subsequent stratification showed indomethacin reduced the risk of PEP from 16.1% to 9.7% in patients who received a pancreatic stent, and from 20.6% to 6.3% in non-stented patients.
Indomethacin administration was further beneficial in that it reduced the incidence of moderate-to-severe pancreatitis from 8.8% to 4.4%. The only adverse events noted at 30-day follow-up were clinically significant bleeding (4 patients in the indomethacin group and 7 in the placebo group) and acute renal failure (2 patients in the placebo group). None of the bleeding events required surgery, angiography, or more than 2 units of red cells. Based on this study, single-dose rectal NSAID administration was concluded to safely and significantly reduce PEP [10].
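As a sanity check, the NNT of 13 and the stratified risk reductions quoted above can be recovered from the trial's reported event rates alone (a sketch using only the percentages in the summary; the helper function and labels are ours):

```python
def arr_and_nnt(control_risk, treated_risk):
    """Return absolute risk reduction and number needed to treat (risks as fractions)."""
    arr = control_risk - treated_risk
    return arr, 1.0 / arr

# Overall result: placebo 16.9% vs rectal indomethacin 9.2%
arr, nnt = arr_and_nnt(0.169, 0.092)
print(f"overall: ARR {arr:.1%}, NNT {nnt:.0f}")  # overall: ARR 7.7%, NNT 13

# Risk strata reported in the trial
for label, control, treated in [("stented", 0.161, 0.097),
                                ("non-stented", 0.206, 0.063)]:
    arr, nnt = arr_and_nnt(control, treated)
    print(f"{label}: ARR {arr:.1%}, NNT {nnt:.0f}")
```

Note that the absolute benefit is larger in the non-stented stratum (NNT of about 7 versus about 16 in stented patients), which foreshadows the later argument that NSAIDs alone may be the more effective primary strategy.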

Despite these results, a paradigm for prophylactic rectal NSAID use was still not established, in part because no studies at the time had demonstrated that rectal NSAIDs were more efficacious than temporary pancreatic stents. A meta-analysis published in 2013 addressed this question, pooling results from 29 studies to compare prophylactic rectal NSAIDs against pancreatic duct stents, with reduction of PEP as the outcome of interest. Three therapeutic options were assessed: rectal NSAIDs alone, pancreatic duct stents alone, and rectal NSAIDs in conjunction with pancreatic stents. All three interventions showed significant benefit over placebo, with rectal NSAIDs alone showing the largest risk reduction (OR 0.24, 95% CI 0.14-0.42). Risk of pancreatitis was also reduced with stents alone and with both interventions combined (OR 0.50 and OR 0.41, respectively) [11]. Comparative analysis revealed no significant additional benefit from combining a pancreatic duct stent with rectal NSAIDs over either intervention alone. NSAID administration is also less technically demanding and showed little association with adverse events. These results are not sufficient to change guidelines, since they are derived from network meta-analysis rather than direct head-to-head comparison of the interventions. However, they imply that rectal NSAIDs alone, as a primary preventative approach, may be the most effective means of reducing post-procedural pancreatitis in patients undergoing ERCP.

In summary, there is high-quality data to support the use of both prophylactic pancreatic duct stent placement and rectal NSAID administration in prevention of post-ERCP pancreatitis. Pancreatic duct stent placement is effective in reducing PEP, but the main benefit is derived from preventing development of severe PEP in high-risk patients. Stents are associated with greater morbidity than rectal NSAIDs and are a more costly option, so placement should be reserved for the appropriate high-risk patients. NSAIDs comparably reduce incidence of PEP and have shown efficacy in both average- and high-risk populations, so at present are a reasonable consideration at the time of ERCP in patients without contraindication [12].

Dr. Shivani K Patel is an internal medicine resident at NYU Langone Medical Center.

Peer reviewed by Demetrios Tzimas, MD, Gastroenterology Fellow, NYU Langone Medical Center, Contributing Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


1. Dumonceau JM, Andriulli A, Deviere J, et al. European Society of Gastrointestinal Endoscopy (ESGE) guideline: prophylaxis of post-ERCP pancreatitis. Endoscopy. 2010;42:503-515.

2. Masci E, Mariani A, Curioni S, Testoni PA. Risk factors for pancreatitis following endoscopic retrograde cholangiopancreatography: a meta-analysis. Endoscopy. 2003;35:830-834.

3. Sofuni A, Maguchi H, Mukai T, et al. Endoscopic pancreatic duct stents reduce the incidence of post-endoscopic retrograde cholangiopancreatography pancreatitis in high-risk patients. Clin Gastroenterol Hepatol. 2011;9(10):851-858.

4. Andriulli A, Forlando R, Napolitano G, et al. Pancreatic duct stents in the prophylaxis of pancreatic damage after endoscopic retrograde cholangiopancreatography: a systematic analysis of benefits and associated risks. Digestion. 2007;75(2-3):156-163.

5. Freeman ML, Overby C, Qi D. Pancreatic stent insertion: consequences of failure and results of a modified technique to maximize success. Gastrointest Endosc. 2004;59(1):8-14.

6. Rashdan A, Fogel EL, McHenry L Jr, Sherman S, Temkit M, Lehman GA. Improved stent characteristics for prophylaxis of post-ERCP pancreatitis. Clin Gastroenterol Hepatol. 2004;2(4):322-329.

7. Bakman YG, Safdar K, Freeman ML. Significant clinical implications of prophylactic pancreatic stent placement in previously normal pancreatic ducts. Endoscopy. 2009;41:1095-1098.

8. Pezzilli R, Morselli-Labate AM, Corinaldesi R. NSAIDs and acute pancreatitis: a systematic review. Pharmaceuticals. 2010;3(3):558-571.

9. Elmunzer BJ, Waljee AK, Elta GH, Taylor JR, Fehmi SMA, Higgins PDR. A meta-analysis of rectal NSAIDs in the prevention of post-ERCP pancreatitis. Gut. 2008;57:1262-1267.

10. Elmunzer BJ, Scheiman JM, Lehman GA, et al. A randomized trial of rectal indomethacin to prevent post-ERCP pancreatitis. N Engl J Med. 2012;366:1414-1422.

11. Akbar A, Abu Dayyeh BK, Baron TH, Wang Z, Altyar O, Murad MH. Rectal nonsteroidal anti-inflammatory drugs are superior to pancreatic duct stents in preventing pancreatitis after endoscopic retrograde cholangiopancreatography: a network meta-analysis. Clin Gastroenterol Hepatol. 2013;11(7):778-783.

12. Tenner S, Baillie J, DeWitt J, Vege SS. American College of Gastroenterology guideline: management of acute pancreatitis. Am J Gastroenterol. 2013;108(9):1400-1415.



From The Archives: Fast Hearts and Funny Currents, Part 2: Is Tachycardia Part of the Problem in Heart Failure?

January 9, 2014

Please enjoy this post from the archives dated May 25, 2011

By Santosh Vardhana

Faculty Peer Reviewed

Please review Part 1 of this article here.

Mr. M is a 63-year-old man with a history of coronary artery disease and systolic congestive heart failure (ejection fraction 32%) on lisinopril, metoprolol, and spironolactone who presents to the Adult Primary Care Center complaining of persistent dyspnea with exertion, two-pillow orthopnea, and severely limited exercise tolerance. His vital signs on presentation are T 98.0˚F, P 84, BP 122/76. What are his therapeutic options?

A randomized, placebo controlled study of ivabradine in patients with chronic heart failure (the SHIFT trial)

Encouraged by data from observational human studies and animal models, investigators at the University of Gothenburg in Sweden designed the SHIFT study to evaluate the role of ivabradine in reducing adverse cardiovascular events in medically-optimized patients with systolic congestive heart failure (CHF) and an elevated resting heart rate.[1] The study included 6558 patients in 37 countries. Criteria for inclusion included stable, chronic, symptomatic CHF of greater than 4 weeks duration with a hospital admission for symptomatic CHF in the past 12 months, an ejection fraction (EF) of less than 35%, and an electrocardiogram showing a resting heart rate of at least 70 beats per minute (bpm) but otherwise normal sinus rhythm. Patients with congenital heart disease, primary valvular disease, a myocardial infarction in the past 2 months, ventricular pacing for greater than 40% of each day, atrial fibrillation or flutter, or symptomatic hypotension were excluded. Patients were randomized to either placebo or ivabradine 5 mg twice daily, which was titrated during 4-month follow-up visits over 2 years to a target heart rate of 50-60 bpm. The minimum acceptable dose of ivabradine was 2.5 mg daily and the maximum dose was 7.5 mg twice daily. Patients with symptomatic bradycardia on 2.5 mg daily were withdrawn from the study but included in the intention-to-treat analysis. The primary endpoint was a composite of cardiovascular death and hospital admission for worsening CHF. Secondary endpoints included an analysis of the primary endpoint in a predetermined subset of patients receiving at least 50% of their target dose of beta-blocker as well as all-cause death and all-cause hospital admission. Follow-up was comprehensive, with only 3 out of 6505 patients lost to follow-up. Patients who withdrew from the study were followed up and included in the analysis. The population was 75% male, 90% white, with an average age of 60. 
The average heart rate of the population was 80 bpm, with an average BP of 122/76 and EF of 29%. Functional class was New York Heart Association (NYHA) class II in 49%, class III in 50%, and class IV in 2%. Approximately two thirds of the patients had CHF from ischemic causes. Two thirds of the patients had hypertension, 30% had diabetes, and 17% were smokers. Over 90% of the patients were on both beta-blockers and renin-angiotensin system inhibitors; however, it is important to note that only 56% of patients achieved greater than 50% of their target dose of beta-blockers as defined by European Society of Cardiology guidelines,[2] and only 26% achieved the target dose. The authors cited obstructive pulmonary disease, hypotension, fatigue, dyspnea, dizziness, and bradycardia as reasons for failure to achieve target dose.

Two pertinent features of the study population should be noted. First, patients were optimized on medical therapy, including beta-blockers, before being treated with ivabradine. The authors were investigating the utility of ivabradine in addition to current medical therapy and not as a replacement for beta-blockers. Second, the average GFR of the patient population was 75 mL/min; this suggests that this patient population was not already undergoing end-organ hypoperfusion as indicated by renal insufficiency. It may well be that CHF patients with evidence of hypoperfusion would not be able to tolerate the addition of ivabradine; those patients were not addressed in the study.

Intention-to-treat analysis of the entire study population showed a reduction of 8 bpm in heart rate over 2 years with administration of ivabradine. This resulted in significant symptomatic improvement in the treatment group. Ivabradine also reduced the primary endpoint from 29% to 24%, a relative risk reduction of 18% and an absolute risk reduction of 5%, with a number needed to treat of 26. This reduction was primarily due to a reduction in hospital admissions for CHF; death from cardiovascular causes was not reduced by treatment with ivabradine. However, deaths due specifically to CHF were reduced in the treatment group, with a relative risk reduction of 26%. In the subgroup of patients who were at greater than 50% of target beta-blocker therapy, the effect of ivabradine was less dramatic: only a 19% relative risk reduction in hospital admissions and a non-significant reduction in the composite primary endpoint. The authors attributed this to the analysis being underpowered for the reduced event rate in this subgroup. Not surprisingly, the effect of ivabradine was most pronounced in patients with resting heart rates greater than 77 bpm. Ivabradine was generally well tolerated in this study; symptomatic bradycardia was present in 10% of the treatment group, but led to study withdrawal in only 1%. This is impressive, given that almost three quarters of the same study population could not tolerate the target dose of beta-blockers, and suggests that ivabradine-induced bradycardia is tolerated significantly better than beta-blocker-induced hypotension and bradycardia. Of note, 3% of patients taking ivabradine experienced phosphenes (transient enhanced brightness in a restricted area of the visual field).
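The effect sizes quoted above follow from the standard definitions, which can be sketched in a few lines of Python. (Note that the trial's published NNT of 26 was computed from unrounded event counts over the median follow-up; the rounded 29% and 24% rates used here give slightly different figures.)

```python
def effect_sizes(control_rate: float, treatment_rate: float) -> dict:
    """Standard effect-size arithmetic from two event rates."""
    arr = control_rate - treatment_rate   # absolute risk reduction
    rrr = arr / control_rate              # relative risk reduction
    nnt = 1.0 / arr                       # number needed to treat
    return {"ARR": arr, "RRR": rrr, "NNT": nnt}

# SHIFT primary endpoint, rounded rates: placebo 29%, ivabradine 24%
shift = effect_sizes(0.29, 0.24)
print(f"ARR {shift['ARR']:.0%}, RRR {shift['RRR']:.0%}, NNT {shift['NNT']:.0f}")
```

With rounded inputs this yields an ARR of 5% and an NNT of 20, illustrating why published NNTs should be read against the trial's exact event counts and follow-up period rather than recomputed from rounded percentages.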

A companion paper released in the same issue of The Lancet performed subgroup analyses stratifying the patient population of SHIFT by heart rate quintiles.[3] The patients in the highest quintile of heart rates (greater than 87 bpm) were much more likely at baseline to have more advanced CHF with a lower ejection fraction. As in prior studies, baseline heart rate in both the placebo and treatment groups was linearly associated with both cardiovascular death and hospitalization for CHF. The risk reduction was greatest in patients whose heart rate was reduced to less than 70 bpm, an endpoint that was achieved in three quarters of patients treated with ivabradine but less than one third of patients receiving placebo. Thus, reduction of cardiovascular morbidity and mortality by ivabradine is dependent on the extent of heart rate reduction; this finding is similar to findings demonstrated for beta-blockers and emphasizes the importance of controlling heart rate in CHF.

In a subsequent study, 60 patients with NYHA class II and III CHF with EFs less than 40% were randomized to either ivabradine or placebo therapy. Over the following 6 months, patients receiving ivabradine reported improved quality of life and improvement in NYHA functional class; objectively, they were found to have improved exercise capacity, improved peak oxygen consumption, and a significant reduction in baseline N-terminal probrain natriuretic peptide levels.[4] Furthermore, a substudy of the BEAUTIFUL trial that performed two-dimensional echocardiography on patients with stable coronary artery disease and left ventricular systolic dysfunction found that treatment with ivabradine improved left ventricular EF in a manner that was proportional to the reduction in heart rate, further supporting a role for ivabradine in preventing pathological remodeling in CHF by achieving optimal heart rate reduction.[5]

Don’t you forget about me: beta-blockers and digoxin

This study brings to light many important considerations in the management of chronic CHF. First, it reintroduces resting heart rate as an important target in the management of CHF and demonstrates for the first time that reduction of heart rate to a target goal of less than 70 bpm in the absence of other modifications results in statistically significant improvements in cardiovascular morbidity and, in some cases, mortality. Second, it introduces a new class of drug, inhibitors of the so-called funny current, as a new potential therapeutic option for patients with chronic CHF. Finally, it suggests that this drug class may be of particular efficacy in patients who cannot achieve target doses of beta-blockers for any reason.

In an editorial in the same issue of The Lancet, Teerlink noted that the majority of patients did not achieve target beta-blocker doses and that ivabradine was most efficacious in these patients.[6] Introduction of ivabradine as an alternative to beta-blockers might diminish the use of beta-blockers, which are often discontinued by patients or physicians despite little evidence that they cause adverse effects such as depression or impotence, or worsen comorbidities such as reactive airway disease.[7,8] Substitution of ivabradine for beta-blockers (which was not endorsed by the authors of SHIFT) would not only compromise patient outcomes, given the broadly demonstrated mortality benefit of beta-blockers,[9] but also raise the cost of treatment, given that beta-blocker administration costs less than $5000 per quality-adjusted life year (QALY).[10] This fiscal consideration is particularly important, given that more Medicare dollars are spent on treatment of CHF than on any other disease.[11]

The particular efficacy of ivabradine in patients who were not able to achieve optimal beta-blockade is consistent with a report in 2009 that used multivariable linear regression analysis to determine how carvedilol improves ejection fraction in patients with CHF in the presence of both ACE inhibition and digoxin. The study found that heart rate reduction was responsible for 60% of the improvement in ejection fraction seen with carvedilol, with 30% of improvement being due to increased contractility and less than 20% due to reduction of afterload secondary to alpha-blockade.[12] Patients should be optimized on the maximum tolerable dose of beta-blocker with a target heart rate of less than 70 bpm before considering addition of a funny current inhibitor such as ivabradine.

Similar questions have been asked about the mechanism of digoxin, a drug that has many functional effects similar to ivabradine.[13] Digoxin, via its inhibitory effect on the sodium-potassium ATPase pump and activating effect on parasympathetic vagal tone, has a positive inotropic and negative chronotropic effect, and it appears to decrease hospitalizations without a robust reduction in mortality.[14,15] These are similar effects to those seen with ivabradine, but at a fraction of the cost. It is worth noting that fewer than 25% of the patients enrolled in SHIFT were taking digoxin.

The bottom line

The SHIFT study reopens the debate on mechanisms of myocardial damage in CHF and reintroduces heart rate as a legitimate target in CHF management. The question of whether target heart rates should be achieved by dose optimization of beta-blockers, judicious use of digoxin, or implementation of novel therapies such as the I(f) inhibitor ivabradine remains an interesting and exciting topic for future research.

Santosh Vardhana is a 4th year medical student at NYU Langone Medical Center

Peer reviewed by Robert Donnino, MD, section editor, clinical correlations

Image courtesy of Wikimedia Commons


1. Swedberg K, Komajda M, Bohm M, et al. Ivabradine and outcomes in chronic heart failure (SHIFT): a randomised placebo-controlled study. Lancet. 2010;376(9744):875-885.

2. Swedberg K, Cleland J, Dargie H, et al. Guidelines for the diagnosis and treatment of chronic heart failure: executive summary (update 2005): The Task Force for the Diagnosis and Treatment of Chronic Heart Failure of the European Society of Cardiology. Eur Heart J. 2005;26(11):1115-1140.

3. Bohm M, Swedberg K, Komajda M, et al. Heart rate as a risk factor in chronic heart failure (SHIFT): the association between heart rate and outcomes in a randomised placebo-controlled trial. Lancet. 2010;376(9744):886-894.

4. Sarullo FM, Fazio G, Puccio D, et al. Impact of “off-label” use of ivabradine on exercise capacity, gas exchange, functional class, quality of life, and neurohormonal modulation in patients with ischemic chronic heart failure. J Cardiovasc Pharmacol Ther. 2010;15(4):349-355.

5. Ceconi C, Freedman SB, Tardif JC, et al. Effect of heart rate reduction by ivabradine on left ventricular remodeling in the echocardiographic substudy of BEAUTIFUL. Int J Cardiol. 2011;146(3):408-414.

6. Teerlink JR. Ivabradine in heart failure–no paradigm SHIFT…yet. Lancet. 2010;376(9744):847-849.

7. Ko DT, Hebert PR, Coffey CS, Sedrakyan A, Curtis JP, Krumholz HM. Beta-blocker therapy and symptoms of depression, fatigue, and sexual dysfunction. JAMA. 2002;288(3):351-357.

8. Salpeter SR, Ormiston TM, Salpeter EE. Cardioselective beta-blockers in patients with reactive airway disease: a meta-analysis. Ann Intern Med. 2002;137(9):715-725.

9. Gottlieb SS, McCarter RJ, Vogel RA. Effect of beta-blockade on mortality among high-risk and low-risk patients after myocardial infarction. N Engl J Med. 1998;339(8):489-497.

10. Phillips KA, Shlipak MG, Coxson P, et al. Health and economic benefits of increased beta-blocker use following myocardial infarction. JAMA. 2000;284(21):2748-2754.

11. Massie BM, Shah NB. Evolving trends in the epidemiologic factors of heart failure: rationale for preventive strategies and comprehensive disease management. Am Heart J. 1997;133(6):703-712.

12. Maurer MS, Sackner-Bernstein JD, El-Khoury Rumbarger L, Yushak M, King DL, Burkhoff D. Mechanisms underlying improvements in ejection fraction with carvedilol in heart failure. Circ Heart Fail. 2009;2(3):189-196.

13. Gorman S, Boos C. Ivabradine in heart failure: what about digoxin? Lancet. 2008;372(9656):2113.

14. Arnold SB, Byrd RC, Meister W, et al. Long-term digitalis therapy improves left ventricular function in heart failure. N Engl J Med. 1980;303(25):1443-1448.

15. The effect of digoxin on mortality and morbidity in patients with heart failure. The Digitalis Investigation Group. N Engl J Med. 1997;336(8):525-533.


Who Should We Screen for Hepatitis C: By Risk Or Birth Cohort?

January 8, 2014

By Jung-Eun Ha

Peer Reviewed

Over the last few years major changes have occurred in the diagnosis and treatment of hepatitis C. In 2011 the U.S. Food and Drug Administration (FDA) approved a rapid finger stick antibody test for hepatitis C virus (HCV) infection [1]. The FDA also approved the protease inhibitors telaprevir (Incivek; Vertex Pharmaceuticals, Cambridge, Massachusetts; Johnson & Johnson, New Brunswick, New Jersey) and boceprevir (Victrelis; Merck, Whitehouse Station, New Jersey) for the treatment of genotype 1 hepatitis C [1]. In August 2012, the Centers for Disease Control and Prevention (CDC) recommended one-time screening for hepatitis C in all persons born between 1945 and 1965 [2]. In June 2013, the U.S. Preventive Services Task Force (USPSTF) also recommended screening for HCV infection in high-risk individuals and one-time screening in individuals born between 1945 and 1965 (“B” recommendation) [3]. The birth-cohort recommendation exponentially expands the size of the screening population, which was previously limited to high-risk individuals: people who have ever used IV drugs, recipients of blood transfusions or organ transplants before 1992, those ever on hemodialysis, healthcare workers exposed to HCV-infected blood, children born to HCV-positive mothers, and sexual partners of HCV-positive persons.

The update affects about 82 million Americans born between 1945 and 1965 [4]. The 1999-2008 National Health and Nutrition Examination Survey revealed that HCV antibody prevalence in this cohort is 3.25%, or 2.7 million people, as opposed to 0.88% prevalence in people born outside of the cohort [2]. The prevalence is about 1.6% in the general population [5]. More than two-thirds of the chronically infected belong to the 1945-1965 baby-boomer cohort. Many of them were inadvertently exposed to HCV-infected blood before the discovery of HCV in 1989 and the development of a screening test in 1992. HCV incidence was highest during the 1980s. Given the slow progression from chronic HCV infection to cirrhosis and hepatocellular carcinoma over decades [6], now is the time to screen this birth cohort before complications start to appear. Advanced fibrosis of the liver shows poor response to HCV treatment and is also more costly to treat [7].

Relying solely on risk-based screening in this birth cohort is not sufficient, as up to 45% of people ever infected with HCV may not recall any exposure risk and thus are unlikely to present for screening [2]. Fifty to eighty percent of the infected do not know their HCV status [8]. The number of people born between 1945 and 1965 needed to screen to prevent one HCV-related death is 607.
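As a rough consistency check on the figures above (an illustrative sketch of the cited cohort size, prevalence, and number needed to screen, not the cost-effectiveness model of Rein and colleagues [9]):

```python
# Figures cited in the text
cohort = 82_000_000   # Americans born 1945-1965
prevalence = 0.0325   # HCV antibody prevalence in the birth cohort
nns = 607             # number needed to screen per HCV-related death averted

# 82 million x 3.25% recovers the ~2.7 million infected cited above
infected = cohort * prevalence
print(f"Estimated antibody-positive in cohort: {infected / 1e6:.1f} million")

# Screening 607 people identifies ~20 antibody-positive individuals;
# the gap between ~20 positives and one death averted reflects the
# downstream diagnosis-and-treatment cascade modeled in reference [9].
print(f"Antibody-positive per {nns} screened: {nns * prevalence:.1f}")
```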

Overall, one-time HCV screening of this birth cohort is estimated to cost around $15,700 per quality-adjusted life year (QALY) [9]. By comparison, screening for colorectal cancer with colonoscopy can cost about $10,000 to $25,000 per QALY [10], and requires repeated studies. HCV screening of the 1945-1965 cohort is likely a one-time screening event, as HCV incidence has decreased drastically over the years, thanks to effective blood screening and increased awareness of HCV transmission among IV drug users. Morbidity and mortality from chronic HCV infection will fall further still, with a number of direct-acting antivirals in the pipeline [11-12]. A recent proof-of-concept study of a vaccine against a single strain of HCV [13] suggests that mass screening may not even be necessary in the future if, and hopefully when, primary prevention is possible and feasible.

Commentary by Dr. Vin Pham of the Division of Infectious Diseases

The rationale for identifying persons earlier in the course of their disease includes a greater likelihood of achieving a successful outcome after treatment. The registration studies for interferon-based hepatitis C therapies have consistently shown lower rates of sustained virologic response for subjects with fibrosis scores of 3 or 4, demonstrating the need to identify and treat people in the earlier stages of fibrosis. Ironically, the development of an effective vaccine against HCV may further expand the need for testing for HCV infection, since the vaccine would only be offered to those not already infected.

Jung-Eun Ha is a 4th year medical student at NYU School of Medicine

Peer reviewed by Vinh Pham, MD, Infectious Disease, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


[1] Getchell JP, Wroblewski KE, DeMaria A, et al. Testing for HCV infection: an update of guidance for clinicians and laboratorians. MMWR Morb Mortal Wkly Rep. 2013;62(18):1-4.

[2] Smith BD, Morgan RL, Beckett GA, et al. Centers for Disease Control and Prevention. Recommendations for the identification of chronic hepatitis C virus infection among persons born during 1945–1965. MMWR Recomm Rep. 2012;61(RR-4):1-32.

[3] U.S. Preventive Services Task Force. Screening for hepatitis C virus infection in adults: U.S. Preventive Services Task Force recommendation statement. Published June 25, 2013. Accessed August 12, 2013.

[4] Centers for Disease Control and Prevention. Population projections, United States, 2004 – 2030, by state, age and sex.  Published 2005. Updated June 26, 2009. Accessed May 18, 2013.

[5] Armstrong G, Wasley A, Simard E, McQuillan GM, Kuhnert WL, Alter MJ. The prevalence of hepatitis C virus infection in the United States, 1999 through 2002. Ann Intern Med. 2006;144(10):705–714.

[6] Chen SL, Morgan TR. The natural history of hepatitis C virus (HCV) infection. Int J Med Sci. 2006;3(2):47–52.

[7] Prati GM, Aghemo A, Rumi MG, et al. Hyporesponsiveness to PegIFNalpha2B plus ribavirin in patients with hepatitis C-related advanced fibrosis. J Hepatol. 2012;56(2):341-347.

[8] Hagan H, Campbell J, Thiede H, et al. Self-reported hepatitis C virus antibody status and risk behavior in young injectors. Public Health Rep. 2006;121(6):710-719.

[9] Rein DB, Smith BD, Wittenborn JS, et al. The cost-effectiveness of birth-cohort screening for hepatitis C antibody in U.S. primary care settings. Ann Intern Med. 2012;156(4):263-270.

[10] Pignone M, Saha S, Hoerger T, Mandelblatt J. Cost-effectiveness analyses of colorectal cancer screening: a systematic review for the U.S. Preventive Services Task Force. Ann Intern Med. 2002;137(2):96-104.

[11] Schaefer E, Chung R. Antihepatitis C virus drugs in development. Gastroenterology 2012;142(6):1340–1350.

[12] Poordad F, Dieterich D. Treating hepatitis C: current standard of care and emerging direct-acting antiviral agents. J Viral Hepat. 2012;19(7):449–464.

[13] Law JL, Chen C, Wong J, et al. A hepatitis C virus (HCV) vaccine comprising envelope glycoproteins gpE1/gpE2 derived from a single isolate elicits broad cross-genotype neutralizing antibodies in humans.




Barriers to Translating Evidence into Clinical Care: the Zoster Vaccine

December 13, 2013

Zachary Elkin

Faculty Peer Reviewed

There are more than a million cases of herpes zoster (HZ) in the US annually [1-3]. The incidence of HZ, or shingles, has been rising in the US since the 1990s [2,4-7]. One third of all people in the US will get HZ, with the highest incidence in people aged 50 to 79 [2,5]. As a result of the Shingles Prevention Study (SPS), the U.S. Food and Drug Administration (FDA) and Advisory Committee on Immunization Practices (ACIP) approved the Zostavax vaccine (Merck & Co Inc, Whitehouse Station, New Jersey) for the prevention of HZ in 2006 for immunocompetent patients aged 60 and above [8]. The FDA extended the approval to patients aged 50 to 59 in March 2011 [9].

Despite strong evidence of efficacy, the uptake of zoster vaccination among eligible patients has been significantly lower than that of other standard adult vaccines [1,10-13]. CDC estimates show that only 15.8% of eligible patients received the vaccine in 2011, up from 14.4% in 2010, 10% in 2009, and 6.7% in 2008 [13,14]. The 2011 rates were even lower in African Americans (7.9%) and Hispanics (8.0%). Our 2011 survey at NYU found that only 66% of general internal medicine physicians responded that HZ vaccination was an important clinical priority, and 48% reported that less than 10% of their patients received the HZ vaccine [15]. Why has this evidence-based medicine not been translated into standard clinical care?

Evidence behind the efficacy of the HZ vaccine

Published in 2005, the Shingles Prevention Study demonstrated that the HZ vaccine reduced the burden of illness by 61%, the incidence of post-herpetic neuralgia by 66.5%, and the incidence of zoster by 51.3% [16]. Since then, the HZ vaccine has repeatedly been shown to be safe and efficacious. A 2012 study reported the efficacy in preventing HZ in persons aged 50-59 to be 69.8% [17], and another found the hazard ratio of recurrent HZ in patients under 70 to be 0.39 among vaccinated patients compared with unvaccinated patients [18]. A Cochrane review of 3 randomized controlled studies of HZ vaccination efficacy found a risk ratio for HZ of 0.49 in all vaccinated patients [19]. Furthermore, the 4-year follow-up to the SPS showed continued reductions in burden of illness of 50.1% [20].
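The studies above report effect size in two interchangeable forms: a percentage reduction in incidence and a risk ratio (RR) between vaccinated and unvaccinated groups. The two are related by efficacy = 1 − RR, a minimal sketch:

```python
def efficacy_from_rr(rr: float) -> float:
    """Vaccine efficacy implied by a risk ratio (vaccinated vs unvaccinated)."""
    return 1.0 - rr

# Cochrane pooled RR of 0.49 corresponds to ~51% efficacy, in line with
# the 51.3% reduction in zoster incidence reported by the SPS.
print(f"{efficacy_from_rr(0.49):.0%}")
```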

Current barriers to the use of the HZ vaccine

Most of the barriers to HZ vaccination use are related to difficulties distributing the vaccine and a lack of physician recommendation. Hurley and colleagues identified the top barriers for physicians to be the perceived cost to patients, reimbursement difficulties, and the up-front cost of stocking the vaccine [11]. Patients with Medicaid (which covers the vaccine) and those without insurance have overall lower HZ coverage and greater difficulty accessing adult vaccines [1,21].

Additional studies have described a lack of strong physician recommendation as a major barrier to patient adherence with national guidelines [11,12]. Hurley reported that only 41 percent of physicians strongly recommended the zoster vaccine [11]. A study in the Netherlands by Opstelten and colleagues found that a lack of strong physician recommendations accounted for an odds ratio of 4.0 for patients refusing the HZ vaccine when offered free with a yearly influenza vaccine [12]. Correspondingly, a study estimated that 95.1% of unvaccinated patients had at least one missed opportunity to be offered the zoster vaccine at an outpatient visit [1]. Even when the HZ vaccine was offered for free by ophthalmologists at the Bellevue Eye Clinic during regularly scheduled appointments, 47% of patients who refused the vaccine reported wanting to speak to their primary care physician [22]. None returned to receive the vaccine.

Framework to overcome barriers in translating evidence-based medicine into clinical practice

Cabana and colleagues proposed that overcoming barriers to physician adherence to evidence-based practice requires first a change in knowledge, then in attitude [23]. Knowledge can be limited by lack of awareness of the published guidelines. Attitude can be affected by disagreement with guidelines, a lack of clinical priority, or insecurity in performing the task. Finally, physician behaviors can be affected by patient adherence and structural barriers within the health system.

All of these issues must be addressed through interventions to increase adherence to national vaccination guidelines. System supports can be implemented to improve physician attitudes towards HZ vaccination. Chaudhry and colleagues evaluated clinical decision support software as part of the electronic medical record at the Mayo Clinic and found increases in vaccine utilization of 43% and 54% at two clinics [24]. Loo and colleagues described a statistically significant increase in pneumococcal and influenza vaccination at Beth Israel Deaconess following implementation of EMR reminders [25]. Physician reminders can be more successful when combined with patient reminders [26] or nurse-initiated reminders [27].

However, time and general changes in the healthcare system are also needed to increase utilization. For example, in 2011, 62.3% of eligible patients in the US received the pneumococcal vaccine [14]. Interventions to increase use of that vaccine started in 1981 with Medicare covering it, but by 1989 only 14.1% of eligible adults 65 and older were vaccinated [28]. The HZ vaccine was only approved for immunocompetent patients 60 and older in 2006, and healthcare system-wide changes have only begun. New York State began to allow certified pharmacists to administer the HZ vaccine with a prescription in October 2012, and Merck began a marketing campaign about shingles that year as well. More time may be needed to see overall changes in patient and provider behaviors.

In a 2003 commentary in the New England Journal of Medicine, Lenfant described how healthcare providers and the public were not applying the vast wealth of clinical knowledge gained in the 20th century to clinical practice [29]. He explained that there were structural, economic, and motivational barriers to the adoption of evidence-based medicine; however, a major problem is that research rarely focuses on translating knowledge to public use. Simple interventions like the ones described, along with demonstration of cost-effectiveness, can go a long way toward improving physician and patient adherence. More operational research is needed to determine which interventions work best in varied clinical settings. This may be the best approach to increasing use of the HZ vaccine.

Commentary by Dr. Michael Simberkoff

There were specific problems with the shingles vaccine that hampered initial enthusiasm for its use. These include the cost of the vaccine compared to others that are commonly available, the mechanism that was set up for reimbursement (Medicare Part D), the difficulty of shipping and storing the vaccine in a frozen state until it is used, and shortages of the vaccine due to production problems at a time when its use should have been promoted. Many of these issues have been resolved, but they slowed acceptance of an otherwise useful vaccine. The author suggests that more widespread use of the EMR will improve adherence to guidelines. I would agree that the EMR can provide reminders about good clinical practices, but remain skeptical that these will translate into sustainable improvements unless they are coupled with incentives for providers.

Dr. Zachary Elkin is a 4th year medical student at NYU School of  Medicine

Peer reviewed by Michael Simberkoff, MD, Infectious Diseases, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Lu PJ, Euler GL, Harpaz R. Herpes zoster vaccination among adults aged 60 years and older, in the U.S., 2008. Am J Prev Med. 2011;40(2):e1-6.

2. Rimland D, Moanna A. Increasing incidence of herpes zoster among Veterans. Clin Infect Dis. 2010;50(7):1000-1005.

3. Tseng HF, Smith N, Harpaz R, Bialek SR, Sy LS, Jacobsen SJ. Herpes zoster vaccine in older adults and the risk of subsequent herpes zoster disease. JAMA. 2011;305(2):160-166.

4. Yawn BP, Saddier P, Wollan PC, St Sauver JL, Kurland MJ, Sy LS. A population-based study of the incidence and complication rates of herpes zoster before zoster vaccine introduction. Mayo Clin Proc. 2007;82(11):1341-1349.

5. Insinga RP, Itzler RF, Pellissier JM, Saddier P, Nikas AA. The incidence of herpes zoster in a United States administrative database. J Gen Intern Med. 2005;20(8):748-753.

6. Ghaznawi N, Virdi A, Dayan A, et al. Herpes zoster ophthalmicus: comparison of disease in patients 60 years and older versus younger than 60 years. Ophthalmology. 2011;118(11):2242-2250.

7. Yawn BP, Wollan PC, Kurland MJ, St Sauver JL, Saddier P. Herpes zoster recurrences more frequent than previously reported. Mayo Clin Proc. 2011;86(2):88-93.

8. Harpaz R, Ortega-Sanchez IR, Seward JF. Prevention of herpes zoster: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Rep. 2008;57(RR-5):1-30.

9. Harpaz R, Hales CM, Bialek SR. Update on herpes zoster vaccine: licensure for persons aged 50 through 59 years. MMWR Morb Mortal Wkly Rep. 2011;60(44):1528.

10. Freed GL, Clark SJ, Cowan AE, Coleman MS. Primary care physician perspectives on providing adult vaccines. Vaccine. 2011;29(9):1850-1854.

11. Hurley LP, Lindley MC, Harpaz R, et al. Barriers to the use of herpes zoster vaccine. Ann Intern Med. 2010;152(9):555-560.

12. Opstelten W, van Essen GA, Hak E. Determinants of non-compliance with herpes zoster vaccination in the community-dwelling elderly. Vaccine. 2009;27(2):192-196.

13. Williams WW, Peng-Jun L, Singleton JA, et al. Adult vaccination coverage–United States, 2010. MMWR. 2012;61(04):66-72.

14. Williams WW, Peng-Jun L, Greby S, et al. Noninfluenza vaccination coverage among adults–United States, 2011. MMWR. 2013;62(04):66-72.

15. Elkin Z, Cohen E, Goldberg J, et al. Studying physician knowledge, attitudes, and practices regarding the herpes zoster vaccine to address perceived barriers to vaccination. Cornea. 2013;32(7):976-981.

16. Oxman MN, Levin MJ, Johnson GR, et al; Shingles Prevention Study Group. A vaccine to prevent herpes zoster and postherpetic neuralgia in older adults. N Engl J Med. 2005;352(22):2271-2284.

17. Schmader KE, Levin MJ, Gnann JW, et al. Efficacy, safety, and tolerability of herpes zoster vaccine in persons aged 50-59 years. Clin Infect Dis. 2012;54(7):922-928.

18. Tseng HF, Chi M, Smith N, Marcy SM, Sy LS, Jacobsen SJ. Herpes zoster vaccine and the incidence of recurrent herpes zoster in an immunocompetent elderly population. J Infect Dis. 2012;206(2):190-196.

19. Gagliardi AM, Gomes Silva BN, Torloni MR, Soares BG. Vaccines for preventing herpes zoster in older adults. Cochrane Database Syst Rev. 2012;10:CD008858.

20. Schmader KE, Oxman MN, Levin MJ, et al. Persistence of the efficacy of zoster vaccine in the shingles prevention study and the short-term persistence substudy. Clin Infect Dis. 2012;55(10):1320-1328.

21. Orenstein WA, Mootrey GT, Pazol K, Hinman AR. Financing immunization of adults in the United States. Clin Pharmacol Ther. 2007;82(6):764-768.

22. Jung JJ, Elkin ZP, Li X, et al. Increasing use of the vaccine against zoster through recommendation and administration by ophthalmologists at a city hospital. Am J Ophthalmol. 2013;155(5):787-795.

23. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282(15):1458-1465.

24. Chaudhry R, Schietel SM, North F, Dejesus R, Kesman RL, Stroebel RJ. Improving rates of herpes zoster vaccination with a clinical decision support system in a primary care practice. J Eval Clin Pract. 2013;19(2):263-266. 

25. Loo TS, Davis RB, Lipsitz LA, et al. Electronic medical record reminders and panel management to improve primary care of elderly patients. Arch Intern Med. 2011;171(17):1552-1558.

26. Thomas RE, Russell M, Lorenzetti D. Interventions to increase influenza vaccination rates of those 60 years and older in the community. Cochrane Database Syst Rev. 2010(9):CD005188.

27. Rhew DC, Glassman PA, Goetz MB. Improving pneumococcal vaccine rates. Nurse protocols versus clinical reminders. J Gen Intern Med. 1999;14(6):351-356.

28. Koch JA. Strategies to overcome barriers to pneumococcal vaccination in older adults: an integrative review. J Gerontol Nurs. 2012;38(2):31-39.

29. Lenfant C. Clinical research to clinical practice–lost in translation? New Engl J Med. 2003;349(9):868-874.

From The Archives: Fast Hearts and Funny Currents: Is Tachycardia Part of the Problem in Heart Failure? Part 1

December 12, 2013

Please enjoy this post from the archives dated May 18, 2011

By Santosh Vardhana

Faculty Peer Reviewed

Mr. M is a 63-year-old man with a history of coronary artery disease and systolic CHF (ejection fraction 32%) on lisinopril, metoprolol, and spironolactone who presents to Primary Care Clinic complaining of persistent dyspnea with exertion, two-pillow orthopnea, and severely limited exercise tolerance. His vital signs on presentation are T 98.0º F, BP 122/76, HR 84 bpm. What are his therapeutic options?

A Race Against Time: Tachycardia in the Failing Heart

Congestive heart failure (CHF) is a clinical syndrome that results from any disruption in the ability of the heart to maintain sufficient cardiac output to supply the body’s metabolic demand. It is manifested by the well-known symptoms of exertional dyspnea, decreased exercise tolerance, and fluid retention.[1] The yearly incidence of CHF continues to increase as more patients who have myocardial infarctions survive but subsequently develop ischemic cardiomyopathy.[2]

As the failing heart struggles to maintain cardiac output, it undergoes a series of structural changes in an attempt to maximize stroke volume. These changes, from initial hypertrophy, to progressive dilation, and finally to fibrosis and failure, have been well described.[3] Once the heart can no longer increase stroke volume by increasing either preload or intrinsic contractility, it attempts to recover cardiac output by increasing heart rate. While this compensation is able to rescue cardiac output temporarily, it rapidly leads to cardiomyocyte apoptosis, collagen deposition, and fibrosis (reviewed in [4]). Indeed, rapid atrial pacing in an otherwise healthy heart has been known for almost 50 years to produce symptomatic CHF. This is thought to be due to a combination of diminished coronary blood flow, an increase in pro-apoptotic factors such as tumor necrosis factor-alpha, and decoupling of cardiac myocyte excitation and contraction secondary to a reduction in inward rectifier potassium current.
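The compensatory logic above rests on the basic relation cardiac output = heart rate × stroke volume. A minimal Python sketch (the numbers are illustrative textbook values, not data from this post) shows why a ventricle with a reduced stroke volume must raise its rate to hold output:

```python
def cardiac_output(heart_rate_bpm, stroke_volume_ml):
    """Cardiac output in L/min = HR (beats/min) x SV (mL/beat) / 1000."""
    return heart_rate_bpm * stroke_volume_ml / 1000.0

# A healthy heart: 70 bpm x 70 mL/beat = 4.9 L/min
healthy = cardiac_output(70, 70)

# A failing ventricle whose stroke volume has fallen to 49 mL/beat must
# beat at 100 bpm to recover the same 4.9 L/min output.
failing = cardiac_output(100, 49)

print(healthy, failing)  # 4.9 4.9
```

The arithmetic makes the pathophysiology concrete: a 30% fall in stroke volume forces a roughly 43% rise in heart rate to keep output constant, which is exactly the compensation the text describes as ultimately maladaptive.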

The pharmacologic agents that have been shown to improve mortality in systolic CHF (renin-angiotensin system inhibitors [5-7], aldosterone antagonists [8], and beta blockers [9-11]) do so by preventing pathologic ventricular remodeling.[12,13] Less well studied, however, is the development of tachycardia as a compensatory mechanism in CHF. The relatively minor emphasis on heart rate is surprising, given that an elevated heart rate is a well-established risk factor for cardiovascular mortality.[14] Heart rate is additionally a source of interest, given the measurable benefit of beta blockers in patients with CHF beyond that which can be achieved with angiotensin-converting enzyme (ACE) inhibitors and diuretics.[9-11] In fact, some of the earliest studies demonstrating a mortality benefit from timolol after myocardial infarction [15] suggested that the majority of risk reduction might be due to its negative chronotropic effects.[16] Recent meta-analyses of randomized, placebo-controlled trials of beta blockers in CHF have supported this hypothesis by demonstrating that the mortality benefit of beta blockers correlates not with medication dosage but with heart rate reduction.[17,18] Thus, heart rate control may be a distinct mechanism by which beta blockers slow pathologic left ventricular remodeling, independent of the effect of ACE inhibitors.[19]

Addition of beta blockers to a CHF regimen has not eliminated the detrimental impact of increased heart rate on the progression of CHF. In a recent 10-year observational study of patients with CHF, every increment of 10 beats per minute (bpm) above baseline was associated with a sequential increase in cardiovascular death; notably, this correlation persisted in the presence of beta blockade.[20] This association between elevated heart rate and cardiovascular morbidity applies even to patients who have had implantable cardioverter-defibrillator (ICD) placement for diminished left ventricular ejection fraction, as demonstrated by studies in which patients with elevated resting heart rates despite ICD placement and optimized beta blocker therapy were at markedly increased risk of death or hospitalization for symptomatic CHF.[21] Thus, it appears that a component of heart rate elevation that is unaffected by beta blockade may be a significant contributor to CHF-associated morbidity and mortality.

The New Player: I(f) in CHF

When investigating the origins of cardiovascular autoregulation, one must give due credit to the English physician William Harvey. Harvey’s seminal text, “De Motu Cordis,” also known as “On the Motion of the Heart and Blood,” describes a series of experiments by which he determined that the heart was not a passive conduit for blood, as had been suggested by the Greek philosopher and physician Galen of Pergamon, but rather the critical driving force maintaining systemic circulation.[22] Harvey eloquently stated, “It must therefore be concluded that the blood in the animal body moves around in a circle continuously and that the action or function of the heart is to accomplish this by pumping. This is the only reason for the motion and beat of the heart.” In the course of these experiments, Harvey noted that a pigeon heart, when removed from its body, continued to beat autonomously. This was the first indication that the heart contained an intrinsic clock for initiating and maintaining contractions: a “pace maker.”

The mechanism by which the sinoatrial (SA) and atrioventricular nodes drive the regular contraction of cardiac myocytes was elucidated in 1979 with the characterization of the so-called “funny current,” or I(f): a slow, mixed sodium-potassium depolarizing current that is activated by hyperpolarization, enhanced by sympathetic stimulation, and present only in myocytes with intrinsic pacemaker function.[23] The depolarizing mixed current is carried through the I(f) channel, which contains four subunits, each with six transmembrane domains containing cation-selective voltage sensors, as well as binding sites for cyclic nucleotides on their intracellular faces. These cyclic nucleotide binding sites are where sympathetic stimulation exerts its effect: sympathetic input increases I(f) amplitude by binding to adrenergic receptors on the surface of SA nodal cells, and the resultant elevation in intracellular cyclic adenosine monophosphate (cAMP) binds and activates I(f) channels. Accordingly, beta blockers slow heart rate by blocking sympathetic-mediated increases in I(f) amplitude.[24]

Therefore, one might wonder whether beta blockade in systolic CHF is merely counteracting pathologic catecholamine release driving inappropriate increases in heart rate in an attempt to rescue cardiac output. While increased serum catecholamines are a hallmark feature of heart failure, there is evidence for decreased myocardial tissue catecholamine concentrations.[25] This suggests that the pathologic tachycardia seen in CHF is not mediated entirely by sympathetic input, and may therefore not be maximally responsive to beta blockade.

What additional mechanisms may account for the increased heart rates seen in some patients with CHF? One possible explanation came from gene analysis of ventricular samples from human patients with and without CHF. This analysis showed that two out of the four genes that encode the hyperpolarization-activated cyclic nucleotide-gated (HCN) channels (of which the funny current is one), are significantly upregulated in patients with systolic CHF. In patch clamping of the same failing ventricular myocytes, I(f) was shown to activate at a less negative potential and to be of larger inward amplitude.[26] Thus, upregulation and hypersensitization of I(f) channels may be a pathologic adaptation resulting in elevated resting heart rates in patients with CHF. An elevated heart rate secondary to upregulation of I(f) channel expression would likely be unresponsive to beta blockade.

The hypothesis that intrinsic I(f) activity contributes to the pathological heart rate elevations seen in CHF became testable with the market approval of ivabradine, an orally bioavailable I(f) channel-specific inhibitor that lowers heart rate without affecting myocardial contractility.[27] Ivabradine was initially developed as an anti-anginal drug, based on studies in which atherosclerotic mice treated with ivabradine showed reduced atherosclerotic plaque size, improved endothelial vasodilatory responses to hypoxia, and decreased systemic signs of oxidative stress.[28] Subsequently, ivabradine was shown in a randomized, placebo-controlled trial to improve exercise tolerance and time to ischemia in patients with stable chronic angina undergoing exercise stress testing, and in 2005 it was approved for use in Europe for this purpose.[29] A 2009 European Heart Journal study showed efficacy of ivabradine in patients with chronic stable angina beyond that which could be achieved with beta blockers.[30] Based on these data, a randomized, double-blinded prospective study (BEAUTIFUL) was initiated to evaluate ivabradine as secondary prevention in patients with coronary artery disease and a left ventricular ejection fraction of less than 40%. With an enrollment of roughly 11,000 patients, this study stands as the largest set of clinical data on the drug. Although ivabradine did not reduce the composite primary endpoint of hospital admission for worsening heart failure, death from cardiovascular causes, or acute MI, a subgroup analysis showed that it reduced hospital admission for MI and the need for subsequent coronary revascularization in a predetermined subgroup of patients with a resting heart rate greater than 70 bpm [31], prompting calls for further research in this subset.

These data suggested that ivabradine might be beneficial in patients with left ventricular dysfunction and elevated resting heart rates. However, due to concerns that a drug that lowers resting heart rate might induce end-organ damage in patients struggling to maintain cardiac output, the role of ivabradine in CHF was initially studied in animal models. In rats with ischemic CHF induced by coronary artery ligation, a ninety-day treatment of ivabradine resulted in decreased ventricular collagen accumulation; notably, ivabradine-treated rats maintained cardiac output by increasing stroke volume, and they supported this increased myocardial work by increasing left ventricular capillary density.[32] A subsequent study showed that ivabradine treatment in rats with ischemic cardiomyopathy specifically decreased protein expression of angiotensin-converting enzyme and angiotensin I, and reduced subsequent ventricular interstitial fibrosis, suggesting a role for ivabradine in preventing tachycardia-induced structural progression of CHF.[33] Not all animal studies showed a positive role for ivabradine, however. In rats with pressure-overload CHF generated by banding of the ascending aorta (a model for diastolic CHF), heart rate reduction by ivabradine resulted in elevated left ventricular filling pressure, hypertrophy, fibrosis, serum brain natriuretic peptide elevation, and more frequent pleural and peritoneal effusions.[34] Based on this, ivabradine was not proposed as a potential therapy for patients with diastolic dysfunction.

In human patients, initial testing showed that ivabradine could reduce heart rate without dropping left ventricular ejection fraction.[35] In an initial observational study, 10 patients with class III CHF and an average ejection fraction of 21% were given intravenous ivabradine 0.1 mg/kg over 90 minutes. After 4 hours, ivabradine reduced heart rate by 25% but maintained cardiac output by increasing stroke volume.[36]
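The observation that a 25% heart rate reduction left cardiac output unchanged implies a compensatory rise in stroke volume, which follows directly from CO = HR × SV. A short sketch makes the arithmetic explicit (the baseline heart rate here is an illustrative value, not reported in the study):

```python
# CO = HR x SV, so if heart rate falls by 25% and cardiac output is
# unchanged, stroke volume must rise by a factor of 1/0.75, i.e. ~33%.
baseline_hr = 80.0       # bpm; illustrative assumption, not from the study
hr_reduction = 0.25      # the ~25% reduction reported after ivabradine

new_hr = baseline_hr * (1 - hr_reduction)          # 60.0 bpm
required_sv_increase = 1 / (1 - hr_reduction) - 1  # ~0.333, i.e. ~33%

print(new_hr, round(required_sv_increase, 3))  # 60.0 0.333
```

This is why the finding was reassuring: the failing ventricles in the study were able to supply the roughly one-third increase in stroke volume needed to hold output steady at the lower rate.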

Would ivabradine improve mortality in patients with chronic, stable CHF on optimal medical therapy? Stay tuned…

Santosh Vardhana is a 4th year medical student at NYU Langone Medical Center

Peer reviewed by Robert Donnino, MD, Section Editor-Cardiology, Clinical Correlations

Image courtesy of Wikimedia Commons


1. Hunt SA, Abraham WT, Chin MH, et al. 2009 focused update incorporated into the ACC/AHA 2005 Guidelines for the Diagnosis and Management of Heart Failure in Adults: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines: developed in collaboration with the International Society for Heart and Lung Transplantation. Circulation. 2009;119(14):e391-479.

2. Lloyd-Jones D, Adams RJ, Brown TM, et al. Executive summary: heart disease and stroke statistics–2010 update: a report from the American Heart Association. Circulation. 2010;121(7):948-954.

3. Jessup M, Brozena S. Heart failure. N Engl J Med. 2003;348(20):2007-2018.

4. Heusch G. Heart rate and heart failure. Circ J. 2011;75(2):229-236.

5. Effect of enalapril on mortality and the development of heart failure in asymptomatic patients with reduced left ventricular ejection fractions. The SOLVD Investigators. N Engl J Med. 1992;327(10):685-691.

6. Effects of enalapril on mortality in severe congestive heart failure. Results of the Cooperative North Scandinavian Enalapril Survival Study (CONSENSUS). The CONSENSUS Trial Study Group. N Engl J Med. 1987;316(23):1429-1435.

7. Effect of enalapril on survival in patients with reduced left ventricular ejection fractions and congestive heart failure. The SOLVD Investigators. N Engl J Med. 1991;325(5):293-302.

8. Pitt B, Zannad F, Remme WJ, et al. The effect of spironolactone on morbidity and mortality in patients with severe heart failure. Randomized Aldactone Evaluation Study Investigators. N Engl J Med. 1999;341(10):709-717.

9. Effect of metoprolol CR/XL in chronic heart failure: Metoprolol CR/XL Randomised Intervention Trial in Congestive Heart Failure (MERIT-HF). Lancet. 1999;353(9169):2001-2007.

10. Packer M, Bristow MR, Cohn JN, et al. The effect of carvedilol on morbidity and mortality in patients with chronic heart failure. U.S. Carvedilol Heart Failure Study Group. N Engl J Med. 1996;334(21):1349-1355.

11. Poole-Wilson PA, Swedberg K, Cleland JG, et al. Comparison of carvedilol and metoprolol on clinical outcomes in patients with chronic heart failure in the Carvedilol Or Metoprolol European Trial (COMET): randomised controlled trial. Lancet. 2003;362(9377):7-13.

12. Eichhorn EJ, Bristow MR. Medical therapy can improve the biological properties of the chronically failing heart. A new era in the treatment of heart failure. Circulation. 1996;94(9):2285-2296.

13. Iraqi W, Rossignol P, Angioi M, et al. Extracellular cardiac matrix biomarkers in patients with acute myocardial infarction complicated by left ventricular dysfunction and heart failure: insights from the Eplerenone Post-Acute Myocardial Infarction Heart Failure Efficacy and Survival Study (EPHESUS) study. Circulation. 2009;119(18):2471-2479.

14. Palatini P, Casiglia E, Julius S, Pessina AC. High heart rate: a risk factor for cardiovascular death in elderly men. Arch Intern Med. 1999;159(6):585-592.

15. Timolol-induced reduction in mortality and reinfarction in patients surviving acute myocardial infarction. N Engl J Med. 1981;304(14):801-807.

16. Gundersen T, Grottum P, Pedersen T, Kjekshus JK. Effect of timolol on mortality and reinfarction after acute myocardial infarction: prognostic importance of heart rate at rest. Am J Cardiol. 1986;58(1):20-24.

17. McAlister FA, Wiebe N, Ezekowitz JA, Leung AA, Armstrong PW. Meta-analysis: beta-blocker dose, heart rate reduction, and death in patients with heart failure. Ann Intern Med. 2009;150(11):784-794.

18. Flannery G, Gehrig-Mills R, Billah B, Krum H. Analysis of randomized controlled trials on the effect of magnitude of heart rate reduction on clinical outcomes in patients with systolic chronic heart failure receiving beta-blockers. Am J Cardiol. 2008;101(6):865-869.

19. Udelson JE. Ventricular remodeling in heart failure and the effect of beta-blockade. Am J Cardiol. 2004;93(9A):43B-8B.

20. Fosbol EL, Seibaek M, Brendorp B, et al. Long-term prognostic importance of resting heart rate in patients with left ventricular dysfunction in connection with either heart failure or myocardial infarction: the DIAMOND study. Int J Cardiol. 2010;140(3):279-286.


21. Ahmadi-Kashani M, Kessler DJ, Day J, et al. Heart rate predicts outcomes in an implantable cardioverter-defibrillator population. Circulation. 2009;120(21):2040-2045.

22. Ribatti D. William Harvey and the discovery of the circulation of the blood. J Angiogenes Res. 2009;1:3.

23. Brown HF, DiFrancesco D, Noble SJ. How does adrenaline accelerate the heart? Nature. 1979;280(5719):235-236.

24. Yatani A, Okabe K, Codina J, Birnbaumer L, Brown AM. Heart rate regulation by G proteins acting on the cardiac pacemaker channel. Science. 1990;249(4973):1163-1166.

25. Rutenberg HL, Spann JF Jr. Alterations of cardiac sympathetic neurotransmitter activity in congestive heart failure. Am J Cardiol. 1973;32(4):472-480.

26. Stillitano F, Lonardo G, Zicha S, et al. Molecular basis of funny current (If) in normal and failing human heart. J Mol Cell Cardiol. 2008;45(2):289-299.

27. DiFrancesco D, Camm JA. Heart rate lowering by specific and selective I(f) current inhibition with ivabradine: a new therapeutic perspective in cardiovascular disease. Drugs. 2004;64(16):1757-1765.

28. Custodis F, Baumhäkel M, Schlimmer N, et al. Heart rate reduction by ivabradine reduces oxidative stress, improves endothelial function, and prevents atherosclerosis in apolipoprotein E-deficient mice. Circulation. 2008;117(18):2377-2387.

29. Borer JS, Fox K, Jaillon P, Lerebours G; Ivabradine Investigators Group. Antianginal and antiischemic effects of ivabradine, an I(f) inhibitor, in stable angina: a randomized, double-blind, multicentered, placebo-controlled trial. Circulation. 2003;107(6):817-823.

30. Tardif JC, Ponikowski P, Kahan T; ASSOCIATE Study Investigators. Efficacy of the I(f) current inhibitor ivabradine in patients with chronic stable angina receiving beta-blocker therapy: a 4-month, randomized, placebo-controlled trial. Eur Heart J. 2009;30(5):540-548.

31. Fox K, Ford I, Steg PG, Tendera M, Robertson M, Ferrari R; BEAUTIFUL Investigators. Ivabradine for patients with stable coronary artery disease and left-ventricular systolic dysfunction (BEAUTIFUL): a randomised, double-blind, placebo-controlled trial. Lancet. 2008;372(9641):807-816.

32. Mulder P, Barbier S, Chagraoui A, et al. Long-term heart rate reduction induced by the selective I(f) current inhibitor ivabradine improves left ventricular function and intrinsic myocardial structure in congestive heart failure. Circulation. 2004;109(13):1674-1679.

33. Milliez P, Messaoudi S, Nehme J, Rodriguez C, Samuel JL, Delcayre C. Beneficial effects of delayed ivabradine treatment on cardiac anatomical and electrical remodeling in rat severe chronic heart failure. Am J Physiol Heart Circ Physiol. 2009;296(2):H435-441.

34. Ciobotaru V, Heimburger M, Louedec L, et al. Effect of long-term heart rate reduction by If current inhibition on pressure overload-induced heart failure in rats. J Pharmacol Exp Ther. 2008;324(1):43-49.

35. Manz M, Reuter M, Lauck G, Omran H, Jung W. A single intravenous dose of ivabradine, a novel I(f) inhibitor, lowers heart rate but does not depress left ventricular function in patients with left ventricular dysfunction. Cardiology. 2003;100(3):149-155.

36. De Ferrari GM, Mazzuera A, Agresina L, et al. Favourable effects of heart rate reduction with intravenous administration of ivabradine in patients with advanced heart failure. Eur J Heart Fail. 2008;10(6):550-555.

Concussions and Football By The Numbers

December 6, 2013

By Benjamin G. Wu

Peer Reviewed

The news of a $765 million settlement of concussion litigation has headlined both sports news channels and popular media during the 2013 National Football League (N.F.L.) season [1]. Heralded as a victory mainly for the N.F.L., the settlement not only allows the league to avoid potentially larger liability payments but also spares it the public scrutiny of a discovery phase had the case moved forward [1]. In the wake of this settlement, many questions linger regarding research on sports-related concussions, helmet design, and safety regulations to protect players at all levels of the popular American pastime.

As concern grew that concussive and sub-concussive impacts may contribute to mild traumatic brain injury (MTBI) and chronic traumatic encephalopathy (CTE), high school, college, and professional football leagues have begun tracking the prevalence of concussions over the past decade [2,4]. High-profile cases have the medical and legal worlds paying closer attention to the response and actions taken by the N.F.L. [1,4]. The death of the ex-N.F.L. player Andre Waters made national headlines in 2006 after a partial autopsy by Dr. Bennet Omalu revealed diffuse cerebral tauopathy [5,6]. Even the United States Congress has attempted to pass a bill to create national guidelines for concussions in young athletes; the bill currently sits in committee [7,8].

The question is: how widespread are concussions in sports? An estimated 3.8 million concussions occur every year during competitive and recreational activities [2,9-10]. The rate of concussions in high school and collegiate sports is 2.4-2.5 per 10,000 athletic exposures [10]. Among females, the highest rate of concussions occurred in soccer; predictably, male football players had the highest incidence overall [10]. The true incidence of concussions in sports is likely much higher owing to under-reporting; an estimated 50% of concussions may go unreported [2]. A concussion is a clinical diagnosis based on several criteria, including a change in consciousness such as confusion or amnesia [11-13]. As we learn more about concussions and their sequelae, the threshold for diagnosis is lowered and the list of activities associated with concussions grows. This research also has clear implications beyond recreational and professional sports, for armed service members returning from Afghanistan and Iraq who have experienced repetitive blasts from improvised explosive devices (IEDs).
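The under-reporting estimate above implies a simple correction to observed incidence figures. A brief sketch shows the arithmetic, using the post's own numbers (the adjustment is illustrative: it assumes the 50% reporting fraction applies uniformly):

```python
def adjusted_incidence(observed, reported_fraction):
    """Scale an observed concussion count or rate for under-reporting.

    If only `reported_fraction` of true events are captured, the true
    figure is observed / reported_fraction.
    """
    return observed / reported_fraction

# The post's figures: an observed rate of 2.5 concussions per 10,000
# athletic exposures, with ~50% of concussions going unreported,
# implies a true rate near 5 per 10,000.
true_rate = adjusted_incidence(2.5, 0.5)
print(true_rate)  # 5.0
```

The same scaling applied to the 3.8 million annual figure suggests how much the headline numbers may understate the real burden, which is why the text calls the overall incidence "likely much higher."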

There have been many technological advances protecting players and service members alike from the perils of concussion. The first football helmet, invented in 1896, consisted of rudimentary strips of leather protecting the player’s skull [14]. Despite a century of helmet use, however, no substantial studies have shown a decrease in concussions attributable to helmets [2]. Using the Head Impact Telemetry System (HITS), researchers recorded a total of 101,944 impacts in a study of 95 high school football players in Michigan over 4 years; the average player sustained 652 impacts over the course of the study [15]. That study raised the question of whether sub-concussive impacts affect the neurologic skills of football players. A study of 46 collegiate football players at the University of North Carolina at Chapel Hill addressed this question, with inconclusive results. On average, the college players sustained more than 1000 impacts during the football season, yet neurological performance markers did not differ statistically between pre-season and post-season testing [16]. The researchers suggested that higher acceleration cutoffs might discern changes; their lower cutoff for sub-concussive impacts was chosen to discriminate normal head motion from low-intensity impacts, and they acknowledged that this threshold might be too low to detect small, incremental effects of sub-concussive impacts in their cohort [16].

Meanwhile, N.F.L. data show a total of 887 MTBIs recorded from 1996-2001 and 854 from 2002-2007 [17]. Of these, 152 players had repeat concussions in 2002-2007, and 44 players had more than 3 head injuries [18]. Five players had repeat concussions on the same day, and 18 players reported repeat concussions within the next seven days [18]. The data suggest that the most important risk factor for concussion is a history of concussion: players who have had a concussion are at 2- to 5.8-fold risk of sustaining another [2]. This does not necessarily imply that these players have underlying physiological factors that promote concussions; the difference may instead be explained by riskier play and variable adherence to safety guidelines. Thankfully, advances in rules and technology have greatly improved the safety of football players.

Decreasing the risk of concussions addresses the immediate and short-term health of our athletes, but it also raises the question of the long-term effects of being punch-drunk. Health workers and researchers have long noted the connection between repeated blows to the head and dementia pugilistica; the first published description of the punch-drunk syndrome came in 1928 from Dr. Martland in New Jersey [19]. Since then, multiple studies have linked multiple concussions or MTBIs to pathological findings such as abnormal protein deposition, neuropathies, and other neurological disorders [20]. The association of concussions with CTE is not new, but while associations exist, the natural history of mild TBI and its progression to CTE is not well understood. Highly publicized cases and recent congressional hearings signal that changes are coming to American football, and with this attention comes funding for research that may help us understand these challenges.

Notably, the increasing incidence of concussions has been attributed less to riskier play than to better and quicker recognition of concussions on the field [2]. Researchers acknowledge that it is not only the force of an impact that dictates the long-term sequelae of concussions but also the consistency and repetitive nature of these injuries that leads to pathological damage [13]. In response to this attention, the American Academy of Neurology (AAN) has recommended expert involvement in the evaluation of concussions [21]. The benefit of heightened public attention and of expert involvement in concussions and MTBIs remains to be seen. Psychological factors and expectations of recovery influence recovery from MTBI, and the implication of permanent brain damage currently colors the debate behind the concussion discussion. Research that predicts outcomes after head trauma in athletes can also help patients who lack such attention and advocacy. Hopefully, with better recognition, research, and rules, we can eliminate dangerous head injuries in sports and make them not only safer for players but also more enjoyable for players, their families, and their fans.

Dr. Benjamin Wu is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Laura Boylan, MD, Neurology, NYU Langone Medical Center


1. Belson K. “N.F.L. Agrees to Settle Concussion Suit for $765 Million.” New York Times. August 29, 2013. Retrieved September 5, 2013.

2. Harmon KG, Drezner JA, Gammons M, et al. American Medical Society for Sports Medicine position statement: concussion in sport. Br J Sports Med. 2013;47(1):15-26.

3. Roehr B. Why the NFL is investing in health research. BMJ. 2012;345:e6626.

4. Belson K. “Concussion Liability Costs May Rise, and Not Just for N.F.L.” New York Times. December 10, 2012. Retrieved January 3, 2013.

5. Omalu BI, Hamilton RL, Kamboh MI, DeKosky ST, Bailes J. Chronic traumatic encephalopathy (CTE) in a National Football League player: case report and emerging medicolegal practice questions. J Forensic Nurs. 2010;6(1):40-46.

6. Schwarz A. “Expert Ties Ex-Player’s Suicide to Brain Damage.” New York Times. January 18, 2007. Retrieved January 3, 2013.

7. Schwarz A. “Congress Considers Concussion Protections.” New York Times. September 23, 2010. Retrieved January 3, 2013.

8. S. 2840, 111th Congress: Concussion Treatment and Care Tools Act of 2009 (2009). Retrieved January 5, 2013.

9. Browne GJ, Lam LT. Concussive head injury in children and adolescents related to sports and other leisure physical activities. Br J Sports Med. 2006;40:163-168.

10. Guerriero RM, Proctor MR, Mannix R, Meehan WP 3rd. Epidemiology, trends, assessment and management of sport-related concussion in United States high schools. Curr Opin Pediatr. 2012;24(6):696-701.

11. McCrory P, Johnston K, Meeuwisse W, et al. Summary and agreement statement of the 2nd International Conference on Concussion in Sport, Prague 2004. Br J Sports Med. 2005;39(4):196-204.

12. Casson IR, Pellman EJ, Viano DC. Concussion in the National Football League: an overview for neurologists. Neurol Clin. 2008;26(1):217-241, x-xi.

13. Khurana VG, Kaye AH. An overview of concussion in sport. J Clin Neurosci. 2012;19(1):1-11.

14. Levy ML, Ozgur BM, Berry C, Aryan HE, Apuzzo ML. Birth and evolution of the football helmet. Neurosurgery. 2004;55(3):656-661; discussion 661-662.

15. Broglio SP, Eckner JT, Martini D, et al. Cumulative head impact burden in high school football. J Neurotrauma. 2011;28(10):2069-2078.

16. Gysland SM, Mihalik JP, Register-Mihalik JK, Trulock SC, Shields EW, Guskiewicz KM. The relationship between subconcussive impacts and concussion history on clinical measures of neurologic function in collegiate football players. Ann Biomed Eng. 2012;40(1):14-22.

17. Casson IR, Viano DC, Powell JW, Pellman EJ. Twelve years of National Football League concussion data. Sports Health. 2010;2(6):471-483.

18. Casson IR, Viano DC, Powell JW, Pellman EJ. Repeat concussions in the National Football League. Sports Health. 2011;3(1):11-24.

19. Martland HS. Punch drunk. JAMA. 1928;91(15):1103-1107.

20. McKee AC, Cantu RC, Nowinski CJ, et al. Chronic traumatic encephalopathy in athletes: progressive tauopathy after repetitive head injury. J Neuropathol Exp Neurol. 2009;68(7):709-735.

21. Giza CG, Kutcher JS, Ashwal S, et al. Summary of evidence-based guideline update: evaluation and management of concussion in sports: report of the Guideline Development Subcommittee of the American Academy of Neurology. Neurology. 2013;80(24):2250-2257.


To Stent or not to Stent? A Review of the Evidence on the Utility of Stenting in Renal Artery Stenosis

November 22, 2013

By Elizabeth Hammer, MD

Faculty Peer Reviewed

Renovascular hypertension, often caused by renal artery stenosis (RAS) due to atherosclerosis or fibromuscular dysplasia, is the most common potentially correctable cause of secondary hypertension. Although only approximately one percent of patients with hypertension have atherosclerotic renovascular disease (ARVD), the prevalence rises to 30-40% in patients with coronary artery disease (CAD), congestive heart failure (CHF), or peripheral vascular disease (PVD). Screening studies of asymptomatic populations in the United States demonstrate a disease prevalence of 7%, with an annual incidence of 0.5% in analyses of medical claims of asymptomatic patients over age 65 [1]. Moreover, ARVD significantly increases cardiovascular morbidity and mortality: in a recent randomized controlled trial, annual mortality was 8% in patients with ARVD, compared with 3.7% in the general population [1]. This article will therefore focus on the management of RAS due to atherosclerosis.

Currently there are three treatment options: (1) medical therapy, consisting of statins, anti-platelet agents, and blood pressure control with renin-angiotensin blockade; (2) percutaneous transluminal renal angioplasty with stent placement (PTRAS); and (3) surgical revascularization. Testing for ARVD is not without risk, especially in those with impaired kidney function. Additionally, procedures to correct RAS are associated with morbidity and mortality, including renal artery dissection, thrombosis, or perforation, acute kidney injury, and death. According to the 2005 American College of Cardiology/American Heart Association (ACC/AHA) guidelines [2], testing for RAS is therefore indicated only in those with a suggestive history and in whom a corrective procedure will be performed if ARVD is detected. Therein lies the major question: is stenting worthwhile in the treatment of RAS? More specifically, does percutaneous angioplasty deliver improvements in blood pressure control, renal disease, and cardiovascular outcomes that cannot be obtained by medication alone?

Although many studies have attempted to answer these questions, the findings thus far are unsettled. Upon review of the relevant research, a wide discrepancy emerges between the outcomes from observational data and those obtained from randomized controlled trials. Observational data have demonstrated a clear benefit from stenting in patients with ARVD. Gray et al [3] found a significant decrease in blood pressure, serum creatinine, New York Heart Association functional class, and number of annual hospitalizations for heart failure in 39 patients who had undergone PTRAS for refractory heart failure. Similarly, Kalra [4] reported significantly lower rates of mortality and renal disease progression in 89 patients treated with PTRAS in comparison with 346 patients treated with medical therapy alone. Additionally, systematic reviews of multiple observational cohort studies have consistently found that PTRAS is associated with improvement in blood pressure and kidney function [5].

Yet these impressive positive effects have failed to withstand more rigorous investigation. The three earliest RCTs, from 1998 to 2000 [6,7,8], failed to show improvement in blood pressure with procedural intervention as compared with medical therapy alone. It should be noted that the external validity of these studies is limited. In these trials, balloon angioplasty was performed without stent placement, a practice that has since been associated with worse angiographic outcomes and higher rates of restenosis and is therefore no longer the standard of care. Additionally, these studies have been criticized for their small sample sizes, short follow-up times, high crossover rates, and inclusion of patients with clinically insignificant stenosis.

Despite the limitations of these earliest RCTs, their central finding, a lack of benefit with stenting as compared with medical therapy, appears to be validated in subsequent RCTs and meta-analyses. The STAR trial [9] enrolled 140 patients with RAS >50% and renal insufficiency and found that PTRAS did not improve creatinine clearance or the secondary endpoints of cardiovascular morbidity and mortality. In the largest trial to date, ASTRAL [10], 806 patients with RAS >50% were randomized to PTRAS versus medical therapy alone. Not only did stenting fail to demonstrate benefit in terms of the primary endpoint of loss of renal function and the secondary endpoints of blood pressure, cardiovascular events, and all-cause mortality, but the rate of serious adverse events with stenting was also 6.8%. Unsurprisingly, then, a meta-analysis by Kumbhani et al [11] of the various RCTs failed to demonstrate any difference between PTRAS and drug therapy in blood pressure control, renal function, all-cause mortality, heart failure, or stroke. An additional meta-analysis by Pierdomenico et al [12] also failed to show any difference in the risk of future nonfatal myocardial infarction.

However, neither of the above RCTs is above reproach, specifically with regard to patient selection. Approximately 28% of patients randomized to PTRAS in the STAR trial [9] did not undergo stenting because angiography at the time of planned stent placement revealed insignificant (less than 50%) stenosis. And in the ASTRAL trial [10], patients were included only if their primary treating physician was uncertain whether the patient would definitely benefit from revascularization, thereby likely excluding high-risk patients.

So what conclusions can we draw? Well, here are my parting thoughts:

1. Most patients should be managed with drug therapy alone. The preponderance of the most rigorous type of evidence from RCTs clearly demonstrates the risk of harm without clear demonstrable benefit from revascularization.

2. More research is needed on the utility of revascularization in high-risk patients. The discrepancy between the observational studies and the RCTs may reflect a real benefit of PTRAS in high-risk patients that is diluted by low- to moderate-risk patients, including those with RAS found incidentally on imaging, who made up the majority of those studied.

The ongoing CORAL trial [13], whose results are due imminently, should shed some light on these issues. This prospective, multicenter, unblinded randomized controlled trial aims to determine the incremental value of stent revascularization added to optimal medical therapy. Potential subjects all underwent renal angiography and were included if they had atherosclerotic renal stenosis of >80%, or >60% with a 20-mmHg systolic pressure gradient, as well as systolic hypertension of at least 155 mmHg on a minimum of two antihypertensive agents. A total of 1,080 subjects were randomized, providing 90% power to detect a 28% reduction in the composite endpoint of cardiovascular or renal death, myocardial infarction, hospitalization for CHF, stroke, doubling of creatinine, or need for renal replacement therapy. The results of this study are expected to provide more definitive guidance, so stay tuned.

3. Current AHA/ACC guidelines [2] suggest that revascularization be considered only in those with a high likelihood of benefit from undergoing stenting, namely those with short duration of blood pressure elevation prior to diagnosis of ARVD, failure of optimal medical therapy to control blood pressure, intolerance to optimal medical therapy, and recurrent flash pulmonary edema and/or refractory heart failure.
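As an aside, power calculations like CORAL's can be reproduced with the standard two-proportion sample-size formula (normal approximation). The sketch below is illustrative only: the 35% control-arm composite event rate is a hypothetical assumption, not a figure from the trial protocol, which used its own event-rate and attrition assumptions.

```python
from math import ceil, sqrt
from statistics import NormalDist  # Python 3.8+

def n_per_group(p_control, rel_reduction, alpha=0.05, power=0.90):
    """Per-group sample size for a two-sided two-proportion z-test
    (normal approximation), detecting a given relative risk reduction."""
    p_treat = p_control * (1 - rel_reduction)
    p_bar = (p_control + p_treat) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~1.28 for 90% power
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_treat * (1 - p_treat))) ** 2
    return ceil(numerator / (p_control - p_treat) ** 2)

# Hypothetical 35% composite event rate under medical therapy alone,
# 28% relative reduction with stenting, two-sided alpha 0.05, 90% power:
print(n_per_group(0.35, 0.28))  # roughly 460 patients per group
```

Under these assumed inputs the formula gives on the order of 460 patients per group, broadly consistent in scale with the 1,080 subjects CORAL randomized in total; lower assumed event rates would demand correspondingly larger samples.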

Dr. Elizabeth Hammer is a 2nd-year resident at NYU Langone Medical Center

Peer reviewed by David Goldfarb, MD, Professor of Medicine, Department of Medicine (Nephrology),  NYU Langone Medical Center and Chief of Nephrology at the Department of Veterans Affairs New York Harbor.

Image courtesy of Wikimedia Commons


1. Ritchie J, Green D, Kalra PA. Current views on the management of atherosclerotic renovascular disease. Ann Intern Med 2012; 44(supp1): S98.

2. Hirsch AT, Haskal ZJ, Hertzer NR, et al. ACC/AHA 2005 Practice Guidelines for the management of patients with peripheral arterial disease (lower extremity, renal, mesenteric, and abdominal aortic): a collaborative report from the American Association for Vascular Surgery/Society for Vascular Surgery, Society for Cardiovascular Angiography and Interventions, Society for Vascular Medicine and Biology, Society of Interventional Radiology, and the ACC/AHA Task Force on Practice Guidelines (Writing Committee to Develop Guidelines for the Management of Patients With Peripheral Arterial Disease): endorsed by the American Association of Cardiovascular and Pulmonary Rehabilitation; National Heart, Lung, and Blood Institute; Society for Vascular Nursing; TransAtlantic Inter-Society Consensus; and Vascular Disease Foundation. Circulation 2006; 113:e463.

3. Gray BH, Olin JW, Childs MB, et al. Clinical benefit of renal artery angioplasty with stenting for the control of recurrent and refractory congestive heart failure. Vasc Med 2002; 7:275.

4. Kalra PA, Chrysochou C, Green D, et al. The benefit of renal artery stenting in patients with atheromatous renovascular disease and advanced chronic kidney disease. Catheter Cardiovasc Interv 2010; 75:1.

5. Foy A, Ruggiero NJ, Filippone. Revascularization in renal artery stenosis. Cardiology in Review 2012; 20:189.

6. Plouin PF, Chatellier G, Darne B, Raynaud A. Blood pressure outcome of angioplasty in atherosclerotic renal artery stenosis: a randomized trial. Essai Multicentrique Medicaments v Angioplastie (EMMA) Study Group. Hypertension 1998; 31:823.

7. Webster J, Marshall F, Abdalla M, et al. Randomised comparison of percutaneous angioplasty vs continued medical therapy for hypertensive patients with atheromatous renal artery stenosis. Scottish and Newcastle Renal Artery Stenosis Collaborative Group. J Hum Hypertens 1998; 12:329.

8. van Jaarsveld BC, Krijnen P, Pieterman H, et al. The effect of balloon angioplasty on hypertension in atherosclerotic renal-artery stenosis. Dutch Renal Artery Stenosis Intervention Cooperative Study Group. N Engl J Med 2000; 342:1007.

9. Bax L, Woittiez AJ, Kouwenberg HJ, et al. Stent placement in patients with atherosclerotic renal artery stenosis and impaired renal function: a randomized trial. Ann Intern Med 2009; 150:840.

10. ASTRAL Investigators, Wheatley K, Ives N, et al. Revascularization versus medical therapy for renal-artery stenosis. N Engl J Med 2009; 361:1953.

11. Kumbhani DJ, Bavry AA, Harvey JE, et al. Clinical outcomes after percutaneous revascularization versus medical management in patients with significant renal artery stenosis: a meta-analysis of randomized controlled trials. Am Heart J 2011; 161:622.

12. Pierdomenico AD, Pierdomenico AM, Cuccurullo C, et al. Cardiac events in hypertensive patients with renal artery stenosis treated with renal angioplasty or drug therapy: meta-analysis of randomized trials. Am J Hypertension 2012; 25:1209.

13. Cooper CJ, Murphy TP, Matsumoto A, et al. Stent revascularization for the prevention of cardiovascular and renal events among patients with renal artery stenosis and systolic hypertension: rationale and design of the CORAL trial. American Heart Journal 2006; 152:59.