Clinical Questions

Do Soft Drinks Cause Hypertension?

July 8, 2011

By Ivan Saraiva, MD

Faculty Peer Reviewed

Sugared soft drinks are among the most heavily consumed beverages in the US. Carbonated soft drinks were first invented as a way to make “healthier” water resembling the natural carbonated waters found in European mountain spas. The name soda comes from the bicarbonate of soda originally used to produce the carbonation (for an excellent review of the history of beverages, refer to Wolf et al.[1]). Unfortunately, we no longer realize any health benefits from carbonated waters.

Recent data show that each American drinks more than 35 gallons of caloric soft drinks yearly (for comparison, per capita yearly totals are 21 gallons for milk, 24 for coffee, and 22 for beer; data available from the USDA website). Further, research suggests that increasing calorie intake from liquids is not accompanied by a proportional decrease in calorie intake from solids. As a result, soda represents a substantial proportion of daily calories consumed and is a major contributor to the rising obesity epidemic.
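To get a sense of scale, the 35-gallon figure can be converted into a rough daily calorie load with back-of-envelope arithmetic. This is only an illustrative sketch: the assumption of roughly 150 kcal per 12-oz can of sugared soda is a typical value, not a figure from the studies cited here.

```python
# Rough per-person conversion of annual soft drink consumption
# into daily volume and calories. Assumed value: ~150 kcal per
# 12-oz can of sugared soda (typical, not from the cited data).

GALLONS_PER_YEAR = 35     # reported US per capita consumption
OZ_PER_GALLON = 128       # US fluid ounces per gallon
KCAL_PER_12OZ_CAN = 150   # assumption: typical sugared cola

oz_per_day = GALLONS_PER_YEAR * OZ_PER_GALLON / 365
kcal_per_day = oz_per_day / 12 * KCAL_PER_12OZ_CAN

print(f"{oz_per_day:.1f} oz/day, roughly {kcal_per_day:.0f} kcal/day")
```

Under these assumptions, 35 gallons per year works out to roughly one 12-oz can per person per day, on the order of 150 kcal daily from soda alone.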

Sugared soft drinks contain large amounts of either sucrose or high fructose corn syrup. Sucrose is a dimer of glucose and fructose, whereas high fructose corn syrup is a product in which about half of the glucose is converted to fructose to increase sweetness. Therefore, research evaluating the impact of soft drinks on health has long focused on the effects of fructose. In addition to the effect on weight, accumulating evidence suggests that soft drinks and other sources of dietary sugar may also be associated with hypertension, independent of their effect on obesity.

Uric acid could be a link between soft drink intake and incident hypertension

Tappy and Lê recently published a thorough review of the metabolic effects of fructose[2]. Briefly, fructose metabolism differs markedly from that of glucose. Whereas glycolysis is a tightly regulated pathway, fructose metabolism is not. Almost all dietary fructose is taken up by hepatocytes on first pass through the liver, leaving only minuscule amounts detectable in the peripheral circulation. Because the phosphorylation of fructose upon entry into the cell is essentially unregulated, a fructose load rapidly depletes hepatocyte ATP. As a consequence, AMP accumulates, stimulating the purine degradation pathway and the formation of uric acid. In addition, a fructose load stimulates triglyceride formation more than an equivalent glucose load does, and has a direct effect on insulin resistance. The mechanisms by which fructose could lead to increased blood pressure may therefore include obesity through increased caloric intake, insulin resistance, or accumulation of uric acid.

Several animal studies have given insight into the association between uric acid levels and blood pressure. Pathologic findings classically attributed to hypertension are seen in mice with hyperuricemia even after blood pressure is controlled, and in one model hypertension resolved when the hyperuricemia was treated with allopurinol (for a review of the animal data, refer to Feig et al.[3]). An observational study of soft drink intake in adolescents found increased uric acid levels correlating with both blood pressure and soft drink intake[4].

Although uric acid may cause hypertension and mediate some of the pathologic consequences classically associated with longstanding high blood pressure in mice, humans are more complex, and studies have been conflicting. Antagonizing or decreasing uric acid with vitamin C was not associated with a decreased risk of hypertension in the Nurses’ Health Study I and II and the Health Professionals Follow-up Study[5-8]. In addition, George et al. showed that although markers of endothelial dysfunction in patients with heart failure improved after reduction of hyperuricemia with allopurinol, they did not change after a similar reduction of hyperuricemia with a uricosuric agent, probenecid[9]. This result highlights the possibility that uric acid is merely a marker, or perhaps even a biological response to endothelial dysfunction, rather than its cause.

Observational studies corroborate a small but significant association of soft drink intake and incident hypertension

In 2005, Winkelmayer et al. reported observations from the prospective cohorts of the Nurses’ Health Study I and II, including almost 150,000 nurses who were not hypertensive at baseline[10]. Over the 9-12 year follow-up period, incident hypertension rose with increasing intake of both sugared and diet cola drinks, with adjusted odds ratios in the range of 1.16 to 1.44 for intakes of 4 or more cans per day. Forman et al., studying the same cohorts and others, failed to find an association between total fructose intake, vitamin C intake, and hypertension[5].

Relationships between soft drink intake and the metabolic syndrome were reported in a prospective observational study[11]. The authors analyzed diet and regular soft drinks separately, and both were significantly associated with development of the metabolic syndrome in the multivariate model (odds ratios of 1.53 for diet, 1.62 for regular, and 1.41 for both). When the components of the metabolic syndrome were analyzed individually, the association with hypertension was significant but small, much as in the Nurses’ Health Study (adjusted odds ratio 1.20).

The associations between soft drinks and high blood pressure, found in studies that were not specifically designed to address fructose intake, were corroborated by a recent, large, cross-sectional study based on the NHANES data[12]. The adjusted odds ratio for having a systolic blood pressure of 160 mmHg or more increased in direct association with the amount of fructose intake, reaching 2.1 for individuals who consumed 74 g/day of fructose or more.

Chen et al. also investigated the relationship between fructose intake and hypertension, but in the more controlled setting of subjects enrolled in a clinical trial[13]. Analyzing subjects enrolled in the PREMIER trial, the authors found that a reduction of 1 serving of sugared drink per day was significantly associated with a decrease in systolic blood pressure of 0.7 mmHg and a decrease in diastolic blood pressure of 0.4 mmHg in the fully adjusted model.

In summary

There is fair to good evidence that higher intake of fructose, whether as sucrose or high fructose corn syrup, increases one’s risk for hypertension. This increased risk is probably independent of the effect of higher caloric intake on obesity. The relationship between high fructose intake and hyperuricemia has received renewed attention, shedding new light on a subject that had been largely forgotten since the development of effective treatments for gout. Animal studies raised hope that the cause of hypertension might be an easily treatable metabolic derangement, but subsequent clinical findings have been inconsistent. It is likely that the metabolic effects of hyperuricemia will receive increasing attention from the research community.

Therefore, limiting or even completely avoiding sugared soft drinks is a reasonable option. There have been some administrative attempts to reduce consumption by taxing soft drinks. On the one hand, laboratory data and observational studies suggest that such a policy could be beneficial. On the other hand, this kind of regulation of personal habits has not been evaluated in studies addressing feasibility and impact on clinically important outcomes. It is clear that we do not yet know enough, and it is always better to exercise caution in matters of health policy.

Dr. Ivan Saraiva recently completed his residency at NYU Langone Medical Center

Peer reviewed by David Goldfarb, MD, Nephrology Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


1. Wolf A, Bray GA, Popkin BM. A short history of beverages and how our body treats them. Obes Rev. 2008;9(2):151-164.

2. Tappy L, Le KA. Metabolic effects of fructose and the worldwide increase in obesity. Physiol Rev. 2010;90(1):23-46.

3. Feig DI, Kang DH, Nakagawa T, Mazzali M, Johnson RJ. Uric acid and hypertension. Curr Hypertens Rep. 2006;8(2):111-115.

4. Nguyen S, Choi HK, Lustig RH, Hsu CY. Sugar-sweetened beverages, serum uric acid, and blood pressure in adolescents. J Pediatr. 2009;154(6):807-813.

5. Forman JP, Choi H, Curhan GC. Fructose and vitamin C intake do not influence risk for developing hypertension. J Am Soc Nephrol. 2009;20(4):863-871.

6. Gao X, Curhan G, Forman JP, Ascherio A, Choi HK. Vitamin C intake and serum uric acid concentration in men. J Rheumatol. 2008;35(9):1853-1858.

7. Huang HY, Appel LJ, Choi MJ, Gelber AC, Charleston J, Norkus EP, Miller ER 3rd. The effects of vitamin C supplementation on serum concentrations of uric acid: results of a randomized controlled trial. Arthritis Rheum. 2005;52(6):1843-1847.

8. Choi JW, Ford ES, Gao X, Choi HK. Sugar-sweetened soft drinks, diet soft drinks, and serum uric acid level: the Third National Health and Nutrition Examination Survey. Arthritis Rheum. 2008;59(1):109-116.

9. George J, Carr E, Davies J, Belch JJ, Struthers A. High-dose allopurinol improves endothelial function by profoundly reducing vascular oxidative stress and not by lowering uric acid. Circulation. 2006;114(23):2508-2516.

10. Winkelmayer WC, Stampfer MJ, Willett WC, Curhan GC. Habitual caffeine intake and the risk of hypertension in women. JAMA. 2005;294(18):2330-2335.

11. Dhingra R, Sullivan L, Jacques PF, Wang TJ, Fox CS, Meigs JB, D’Agostino RB, Gaziano JM, Vasan RS. Soft drink consumption and risk of developing cardiometabolic risk factors and the metabolic syndrome in middle-aged adults in the community. Circulation. 2007;116(5):480-488.

12. Jalal DI, Smits G, Johnson RJ, Chonchol M. Increased fructose associates with elevated blood pressure. J Am Soc Nephrol. 2010.

13. Chen L, Caballero B, Mitchell DC, Loria C, Lin PH, Champagne CM, Elmer PJ, Ard JD, Batch BC, Anderson CA, Appel LJ. Reducing consumption of sugar-sweetened beverages is associated with reduced blood pressure: a prospective study among United States adults. Circulation. 2010;121(22):2398-2406.

Fast Hearts and Funny Currents, Part 2: Is Tachycardia Part of the Problem in Heart Failure?

May 25, 2011

By Santosh Vardhana

 Faculty Peer Reviewed

Please review Part 1 of this article here.

Mr. M is a 63-year-old man with a history of coronary artery disease and systolic congestive heart failure (ejection fraction 32%) on lisinopril, metoprolol, and spironolactone who presents to the Adult Primary Care Center complaining of persistent dyspnea with exertion, two-pillow orthopnea, and severely limited exercise tolerance.  His vital signs on presentation are T 98.0˚F, P 84, BP 122/76.  What are his therapeutic options?

A randomized, placebo-controlled study of ivabradine in patients with chronic heart failure (the SHIFT trial)

Encouraged by data from observational human studies and animal models, investigators at the University of Gothenburg in Sweden designed the SHIFT study to evaluate the role of ivabradine in reducing adverse cardiovascular events in medically optimized patients with systolic congestive heart failure (CHF) and an elevated resting heart rate.[1]  The study enrolled 6558 patients in 37 countries.  Inclusion criteria were stable, chronic, symptomatic CHF of more than 4 weeks’ duration with a hospital admission for symptomatic CHF in the past 12 months, an ejection fraction (EF) of less than 35%, and an electrocardiogram showing a resting heart rate of at least 70 beats per minute (bpm) in otherwise normal sinus rhythm.  Patients with congenital heart disease, primary valvular disease, a myocardial infarction in the past 2 months, ventricular pacing for greater than 40% of each day, atrial fibrillation or flutter, or symptomatic hypotension were excluded.

Patients were randomized to either placebo or ivabradine 5 mg twice daily, which was titrated at follow-up visits over 2 years to a target heart rate of 50-60 bpm.  The minimum acceptable dose of ivabradine was 2.5 mg daily and the maximum dose was 7.5 mg twice daily.  Patients with symptomatic bradycardia on the minimum dose were withdrawn from the study but included in the intention-to-treat analysis.  The primary endpoint was a composite of cardiovascular death and hospital admission for worsening CHF.  Secondary endpoints included an analysis of the primary endpoint in a predetermined subset of patients receiving at least 50% of their target beta-blocker dose, as well as all-cause death and all-cause hospital admission.  Follow-up was comprehensive, with only 3 of 6505 patients lost to follow-up.  Patients who withdrew from the study were followed and included in the analysis.

The population was 75% male and 90% white, with an average age of 60.  The average heart rate was 80 bpm, with an average blood pressure of 122/76 and an average EF of 29%.  Functional class was New York Heart Association (NYHA) class II in 49%, class III in 50%, and class IV in 2%.  Approximately two thirds of the patients had CHF from ischemic causes.  Two thirds of the patients had hypertension, 30% had diabetes, and 17% were smokers.  Over 90% of the patients were on both beta-blockers and renin-angiotensin system inhibitors; however, it is important to note that only 56% of patients achieved greater than 50% of their target beta-blocker dose as defined by European Society of Cardiology guidelines,[2] and only 26% achieved the full target dose.  The authors cited obstructive pulmonary disease, hypotension, fatigue, dyspnea, dizziness, and bradycardia as reasons for failure to reach the target dose.

 Two pertinent features of the study population should be noted.  First, patients were optimized on medical therapy, including beta-blockers, before being treated with ivabradine.  The authors were investigating the utility of ivabradine in addition to current medical therapy and not as a replacement for beta-blockers.  Second, the average GFR of the patient population was 75 mL/min; this suggests that this patient population was not already undergoing end-organ hypoperfusion as indicated by renal insufficiency.  It may well be that CHF patients with evidence of hypoperfusion would not be able to tolerate the addition of ivabradine; those patients were not addressed in the study.

 Intention-to-treat analysis of the entire study population showed a reduction of 8 bpm in heart rate over 2 years with administration of ivabradine.  This resulted in significant symptomatic improvement in the treatment group.  Ivabradine also reduced the primary endpoint from 29% to 24%, a relative risk reduction of 18% and an absolute risk reduction of 5%, with a number needed to treat of 26.  This reduction was primarily due to a reduction in hospital admissions for CHF; death from cardiovascular causes was not reduced by treatment with ivabradine.  However, deaths due specifically to CHF were reduced in the treatment group, with a relative risk reduction of 26%.  In the subgroup of patients who were at greater than 50% of target beta-blocker therapy, the effect of ivabradine was less dramatic: only a 19% relative risk reduction in hospital admissions and a non-significant reduction in the composite primary endpoint.  The authors claim that this was due to insufficient powering for the reduced event rate in this subgroup.  Not surprisingly, the effect of ivabradine was most pronounced in patients with resting heart rates greater than 77 bpm.  Ivabradine was generally well tolerated in this study; symptomatic bradycardia was present in 10% of the treatment group, but led to study withdrawal in only 1%.  This is impressive, given that almost three quarters of the same study population could not tolerate the target dose of beta-blockers, and suggests that ivabradine-induced bradycardia is tolerated significantly better than beta-blocker-induced hypotension and bradycardia.  Of note, 3% of patients taking ivabradine experienced phosphenes (transient enhanced brightness in a restricted area of the visual field).  
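The reported effect sizes can be sanity-checked with simple arithmetic on the event rates given above. Note that this crude calculation is not identical to the trial’s own figures: dividing the raw rates gives a relative risk reduction of about 17% (the reported 18% comes from the hazard ratio), and 1/ARR gives a number needed to treat of 20, whereas the published NNT of 26 presumably accounts for follow-up time rather than raw cumulative rates.

```python
# Crude effect-size arithmetic from the SHIFT primary-endpoint
# rates quoted in the text (29% placebo vs 24% ivabradine).
# This is illustration only; the trial's published RRR (18%) and
# NNT (26) use hazard ratios and time-adjusted calculations.

control_rate = 0.29    # primary endpoint rate, placebo arm
treatment_rate = 0.24  # primary endpoint rate, ivabradine arm

arr = control_rate - treatment_rate  # absolute risk reduction
rrr = arr / control_rate             # relative risk reduction (crude)
nnt = 1 / arr                        # number needed to treat (crude)

print(f"ARR = {arr:.0%}, crude RRR = {rrr:.0%}, crude NNT = {nnt:.0f}")
```

The small gap between the crude and published numbers is a useful reminder that trial effect measures depend on how person-time and censoring are handled, not just on the headline percentages.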

 A companion paper released in the same issue of The Lancet performed subgroup analyses stratifying the patient population of SHIFT by heart rate quintiles.[3]  The patients in the highest quintile of heart rates (greater than 87 bpm) were much more likely at baseline to have more advanced CHF with a lower ejection fraction.  As in prior studies, baseline heart rate in both the placebo and treatment groups was linearly associated with both cardiovascular death and hospitalization for CHF.  The risk reduction was greatest in patients whose heart rate was reduced to less than 70 bpm, an endpoint that was achieved in three quarters of patients treated with ivabradine but less than one third of patients receiving placebo.  Thus, reduction of cardiovascular morbidity and mortality by ivabradine is dependent on the extent of heart rate reduction; this finding is similar to findings demonstrated for beta-blockers and emphasizes the importance of controlling heart rate in CHF.

In a subsequent study, 60 patients with NYHA class II and III CHF with EFs less than 40% were randomized to either ivabradine or placebo therapy.  Over the following 6 months, patients receiving ivabradine reported improved quality of life and improvement in NYHA functional class; objectively, they were found to have improved exercise capacity, improved peak oxygen consumption, and a significant reduction in baseline N-terminal pro-brain natriuretic peptide levels.[4]  Furthermore, a substudy of the BEAUTIFUL trial that performed two-dimensional echocardiography on patients with stable coronary artery disease and left ventricular systolic dysfunction found that treatment with ivabradine improved left ventricular EF in a manner proportional to the reduction in heart rate, further supporting a role for ivabradine in preventing pathological remodeling in CHF by achieving optimal heart rate reduction.[5]

 Don’t you forget about me: beta-blockers and digoxin

 This study brings to light many important considerations in the management of chronic CHF.  First, it reintroduces resting heart rate as an important target in the management of CHF and demonstrates for the first time that reduction of heart rate to a target goal of less than 70 bpm in the absence of other modifications results in statistically significant improvements in cardiovascular morbidity and, in some cases, mortality.  Second, it introduces a new class of drug, inhibitors of the so-called funny current, as a new potential therapeutic option for patients with chronic CHF.  Finally, it suggests that this drug class may be of particular efficacy in patients who cannot achieve target doses of beta-blockers for any reason. 

In an editorial in the same issue of The Lancet, Teerlink noted that the majority of patients did not achieve target beta-blocker doses and that ivabradine was most efficacious in these patients.[6]  Introduction of ivabradine as an alternative to beta-blockers might diminish the use of beta-blockers, which are often discontinued by patients or physicians despite little evidence that they cause adverse effects such as depression or impotence, or worsen comorbidities such as reactive airway disease.[7,8]  Substitution of ivabradine for beta-blockers (which was not endorsed by the authors of SHIFT) would not only compromise patient outcomes, given the broadly demonstrated mortality benefit of beta-blockers,[9] but also raise the cost of treatment, given that beta-blocker administration costs less than $5000 per quality-adjusted life year (QALY).[10]  This fiscal consideration is particularly important, given that more Medicare dollars are spent on the treatment of CHF than on any other disease.[11]

The particular efficacy of ivabradine in patients who were not able to achieve optimal beta-blockade is consistent with a 2009 report that used multivariable linear regression to determine how carvedilol improves ejection fraction in patients with CHF already receiving ACE inhibition and digoxin.  That study found that heart rate reduction accounted for 60% of the improvement in ejection fraction seen with carvedilol, with 30% of the improvement due to increased contractility and less than 20% due to reduction of afterload secondary to alpha-blockade.[12]  Patients should be optimized on the maximum tolerated dose of beta-blocker, with a target heart rate of less than 70 bpm, before addition of a funny current inhibitor such as ivabradine is considered.

Similar questions have been asked about the mechanism of digoxin, a drug with many functional effects similar to those of ivabradine.[13]  Digoxin, via its inhibitory effect on the sodium-potassium ATPase pump and its activating effect on parasympathetic vagal tone, has positive inotropic and negative chronotropic effects, and it appears to decrease hospitalizations without a robust reduction in mortality.[14,15]  These effects are similar to those seen with ivabradine, but at a fraction of the cost.  It is worth noting that fewer than 25% of the patients enrolled in SHIFT were taking digoxin.

 The bottom line

 The SHIFT study reopens the debate on mechanisms of myocardial damage in CHF and reintroduces heart rate as a legitimate target in CHF management. The question of whether target heart rates should be achieved by dose optimization of beta-blockers, judicious use of digoxin, or implementation of novel therapies such as the I(f) inhibitor ivabradine remains an interesting and exciting topic for future research.

Santosh Vardhana is a 4th year medical student at NYU Langone Medical Center

Peer reviewed by Robert Donnino, MD, section editor, clinical correlations

Image courtesy of Wikimedia Commons


1.  Swedberg K, Komajda M, Bohm M, et al. Ivabradine and outcomes in chronic heart failure (SHIFT): a randomised placebo-controlled study. Lancet. 2010;376(9744):875-885.

2.  Swedberg K, Cleland J, Dargie H, et al. Guidelines for the diagnosis and treatment of chronic heart failure: executive summary (update 2005): The Task Force for the Diagnosis and Treatment of Chronic Heart Failure of the European Society of Cardiology. Eur Heart J. 2005;26(11):1115-1140.

3.  Bohm M, Swedberg K, Komajda M, et al. Heart rate as a risk factor in chronic heart failure (SHIFT): the association between heart rate and outcomes in a randomised placebo-controlled trial. Lancet. 2010;376(9744):886-894.

4.  Sarullo FM, Fazio G, Puccio D, et al.  Impact of “off-label” use of ivabradine on exercise capacity, gas exchange, functional class, quality of life, and neurohormonal modulation in patients with ischemic chronic heart failure.  J Cardiovasc Pharmacol Ther. 2010;15(4):349-355.

5.  Ceconi C, Freedman SB, Tardif JC, et al. Effect of heart rate reduction by ivabradine on left ventricular remodeling in the echocardiographic substudy of BEAUTIFUL.  Int J Cardiol. 2011;146(3):408-414.

6.  Teerlink JR.  Ivabradine in heart failure–no paradigm SHIFT…yet. Lancet.  2010;376(9744):847-849.

7.  Ko DT, Hebert PR, Coffey CS, Sedrakyan A, Curtis JP, Krumholz HM.  Beta-blocker therapy and symptoms of depression, fatigue, and sexual dysfunction. JAMA. 2002;288(3):351-357.

8.  Salpeter SR, Ormiston TM, Salpeter EE.  Cardioselective beta-blockers in patients with reactive airway disease: a meta-analysis. Ann Intern Med. 2002;137(9):715-725.

9.  Gottlieb SS, McCarter RJ, Vogel RA.  Effect of beta-blockade on mortality among high-risk and low-risk patients after myocardial infarction. N Engl J Med.  1998;339(8):489-497.

10.  Phillips KA, Shlipak MG, Coxson P, et al.  Health and economic benefits of increased beta-blocker use following myocardial infarction. JAMA.  2000;284(21):2748-2754.

11.  Massie BM, Shah NB.  Evolving trends in the epidemiologic factors of heart failure: rationale for preventive strategies and comprehensive disease management. Am Heart J. 1997;133(6):703-712.

12.  Maurer MS, Sackner-Bernstein JD, El-Khoury Rumbarger L, Yushak M, King DL, Burkhoff D.  Mechanisms underlying improvements in ejection fraction with carvedilol in heart failure.  Circ Heart Fail. 2009;2(3):189-196.

13.  Gorman S, Boos C.  Ivabradine in heart failure: what about digoxin? Lancet.  2008;372(9656):2113.

14.  Arnold SB, Byrd RC, Meister W, et al.  Long-term digitalis therapy improves left ventricular function in heart failure. N Engl J Med.  1980;303(25):1443-1448.

15.  The effect of digoxin on mortality and morbidity in patients with heart failure. The Digitalis Investigation Group. N Engl J Med.  1997;336(8


Fast Hearts and Funny Currents: Is Tachycardia Part of the Problem in Heart Failure? Part 1

May 18, 2011

By Santosh Vardhana

Faculty Peer Reviewed 

Mr. M is a 63-year-old man with a history of coronary artery disease and systolic CHF (ejection fraction 32%) on lisinopril, metoprolol, and spironolactone who presents to Primary Care Clinic complaining of persistent dyspnea with exertion, two-pillow orthopnea, and severely limited exercise tolerance.  His vital signs on presentation are T 98.0º F, BP 122/76, HR 84 bpm.  What are his therapeutic options?

 A Race Against Time: Tachycardia in the Failing Heart

Congestive heart failure (CHF) is a clinical syndrome that results from any disruption in the ability of the heart to maintain sufficient cardiac output to supply the body’s metabolic demand.  It is manifested by the well-known symptoms of exertional dyspnea, decreased exercise tolerance, and fluid retention.[1]  The yearly incidence of CHF continues to increase as more patients survive myocardial infarctions but subsequently develop ischemic cardiomyopathy.[2]

As the failing heart struggles to maintain cardiac output, it undergoes a series of structural changes in an attempt to maximize stroke volume.  These changes, from initial hypertrophy, to progressive dilation, and finally to fibrosis and failure, have been well described.[3]  Once the heart can no longer increase stroke volume by increasing either preload or intrinsic contractility, it attempts to recover cardiac output by increasing heart rate.  While this compensation can rescue cardiac output temporarily, it rapidly leads to cardiomyocyte apoptosis, collagen deposition, and fibrosis (reviewed in [4]).  Indeed, it has been known for almost 50 years that rapid atrial pacing in an otherwise healthy heart produces symptomatic CHF.  This is thought to be due to a combination of diminished coronary blood flow, an increase in pro-apoptotic factors such as tumor necrosis factor-alpha, and decoupling of cardiac myocyte excitation and contraction secondary to a reduction in the inward rectifier potassium current.

 The pharmacologic agents that have been shown to improve mortality in systolic CHF (renin-angiotensin system inhibitors [5-7], aldosterone antagonists [8], and beta blockers [9-11]) do so by preventing pathologic ventricular remodeling.[12,13]  Less well studied, however, is the development of tachycardia as a compensatory mechanism in CHF.  The relatively minor emphasis on heart rate is surprising, given that an elevated heart rate is a well-established risk factor for cardiovascular mortality.[14]  Heart rate is additionally a source of interest, given the measurable benefit of beta blockers in patients with CHF beyond that which can be achieved with angiotensin-converting enzyme (ACE) inhibitors and diuretics.[9-11]  In fact, some of the earliest studies demonstrating a mortality benefit from timolol in CHF [15] suggested that the majority of risk reduction might be due to its negative chronotropic effects.[16]  Recent meta-analyses of randomized, placebo-controlled trials of beta blockers in CHF have supported this hypothesis by demonstrating that the mortality benefit of beta blockers correlates not with medication dosage but with heart rate reduction.[17,18]  Thus, control of heart rate may be the unique method by which beta blockers slow pathologic left ventricular remodeling in a manner that is independent of the effect of ACE inhibitors.[19] 

Addition of beta blockers to a CHF regimen has not eliminated the detrimental impact of increased heart rate on the progression of CHF.  In a recent 10-year observational study of patients with CHF, every increment of 10 beats per minute (bpm) above baseline was associated with a sequential increase in cardiovascular death; notably, this correlation persisted in the presence of beta blockade.[20]  This association between elevated heart rate and cardiovascular morbidity applies even to patients who have had implantable cardioverter-defibrillator (ICD) placement for diminished left ventricular ejection fraction, as demonstrated by studies in which patients with elevated resting heart rates despite ICD placement and optimized beta blocker therapy were at markedly increased risk of death or hospitalization for symptomatic CHF.[21]  Thus, it appears that a component of heart rate elevation that is unaffected by beta blockade may be a significant contributor to CHF-associated morbidity and mortality.

How Safe Is That Tattoo?

April 27, 2011

By Farzon A. Nahvi

Faculty Peer Reviewed

Once thought to be exclusively the domain of gang members, prisoners, and those in the military, tattoos are now increasingly popular with the general population. The increasing visibility of tattoos on high-profile individuals such as athletes, musicians, and actors, combined with their growing acceptability among professionals, has made tattoos a common part of modern culture. Nevertheless, tattoo artists are subject to little regulation, and tattoo art carries some real health risks. With an estimated 15% of Americans having at least one tattoo, it is important for the physician to be familiar with the safety and health risks of the tattooing process.

The most common health hazard associated with tattooing is localized skin infection caused by Staphylococcus aureus (with some cases of community-acquired methicillin-resistant S. aureus reported) or Pseudomonas aeruginosa. Infection arises from the procedure itself, which requires piercing the skin. The tattooing process uses a needle controlled by an electromagnetic coil, which makes it oscillate at a frequency between 80 and 150 Hz. The needle punctures the skin to a depth of 1-4 millimeters, carrying the dye into the dermis, where it is engulfed by macrophages. As the damaged skin heals, granulation tissue forms, and the dye remains permanently trapped in fibroblasts in the superficial dermis. The same process that introduces the dye can introduce the common bacteria noted above, causing infection. Physicians should remind their patients that tattooing is an invasive procedure and that the same precautions taken after a scrape or cut should be taken after obtaining a tattoo. A thorough washing of the tattoo site with soap and water is usually effective in preventing localized skin infection.

Tattooing can also cause systemic infections such as staphylococcal toxic shock syndrome, pseudomonal abscesses, and infective endocarditis.  The British Cardiac Society/Royal College of Physicians and 60% of physician members of the International Society of Adult Congenital Cardiac Disease (ISACCD) surveyed in 1999 recommend prophylactic antibiotics before obtaining a tattoo for patients with congenital heart disease, as a preventive measure against infective endocarditis. Importantly, 75% of the same group of ISACCD physicians surveyed disapprove outright of tattoos for their patients with congenital heart disease.

Tattooing has also caused tetanus, tuberculosis, hepatitis B, and hepatitis C. Patients who are considering tattoos should ensure that their tetanus and hepatitis B vaccines are up to date. Similarly, baseline testing for hepatitis and HIV may be done for patient reassurance or, if results are positive, as cautionary information for the tattoo artist. Finally, while there have been no confirmed cases of HIV infection caused by tattooing, the potential for transmission exists. The risk for all of these diseases can be lowered if tattoo artists use new needles with each patient and pour new ink into a well for each patient so as not to “double-dip” needles from different patients into the same inkwell.

 Hypersensitivity reactions can be caused by the introduction of foreign ink into the dermis. Tattoo inks do not need FDA approval and can contain a wide variety of ingredients that can cause skin reactions. While aluminum, oxygen, titanium, and carbon are the most common elements of tattoo ink and have been found to be safe, mercury, chromium, cadmium, and cobalt are also commonly used and have all been associated with delayed hypersensitivity reactions. Corticosteroids can be used to treat patients who develop hypersensitivity reactions in response to tattoos. Finally, while rare, there have been some reported cases of anaphylaxis associated with tattooing.

 Because some inks contain high levels of iron oxide, there is the potential for burns or distortions of the tattoo caused by magnetic hysteresis during magnetic resonance imaging (MRI). Nevertheless, tattoos are generally not a contraindication for having an MRI procedure. Because adverse hypersensitivity and MRI events are associated with inks that contain mercury, chromium, cadmium, and cobalt and higher levels of iron oxide, physicians can advise patients to ask their tattoo artist to use an ink that is low in these materials.

 Of course, many of the risks are in the control of the tattoo artist rather than the person receiving the tattoo. Tattoos can be obtained in a wide variety of places, ranging from luxury hotel studios in Las Vegas to side-of-the-road tents at state fairs, with a wide range of hygiene. While no site can be guaranteed safe–the NYC Dept. of Health licenses tattoo artists, not parlors–choosing a professional studio likely decreases the risk of infections and could be recommended to patients. In addition to changing needles and inkwells, it is also a good idea for patients to ensure that the tattoo artist washes his hands and changes gloves between each client, disinfects all equipment, shaves and cleans the body site with an antimicrobial wash before administering the tattoo, and provides appropriate aftercare instructions. The tattoo artist should also be educated about potential health risks and adverse events associated with tattoos, as clients might return to the tattoo artist before going to a physician if they experience a problem.

 While the risks may be small, the popularity of tattoos in our culture has greatly increased the likelihood that physicians will deal with health issues involving tattoos.  These health issues can range from mild irritations to serious lifelong diseases, some of which can be prevented or treated and some of which cannot. Regulations for tattoo artists, while generally loose, vary by state and can affect the risks involved. A physician should be prepared to address these issues, offering information, advice, preventive measures, and treatment.

 Farzon A. Nahvi is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Demetre Daskalakis, MD, Department of Medicine (Infectious Disease) NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1.  Armstrong ML, McConnell C. Tattooing in adolescents: more common than you think–the phenomenon and risks. J Sch Nurs. 1994;10(1):26-33.

 2.  Armstrong ML. Tattooing, body piercing, and permanent cosmetics: a historical and current view of state regulations, with continuing concerns. J Environ Health. 2005; 67(8):38-43,54,53.

 3.  Centers for Disease Control and Prevention (CDC). Methicillin-resistant Staphylococcus aureus skin infections among tattoo recipients–Ohio, Kentucky, and Vermont, 2004-2005. MMWR. 2006;55(24):677-679.

 4.  Cetta F, Graham LC, Lichtenberg RC, Warnes CA. Piercing and tattooing in patients with congenital heart disease: patient and physician perspectives. J Adolesc Health. 1999;24(3):160-162.

 5.  Delahaye F, Wong J, Mills PG. Infective endocarditis: a comparison of international guidelines. Heart. 2007;93(4):524-527.

 6.  Dron P, Lafourcade MP, Leprince F, et al. Allergies associated with body piercing and tattoos: a report of the Allergy Vigilance Network. Eur Ann Allergy Clin Immunol. 2007;39(6):189-192.

 7.  Farrow JA, Schwartz RH, Vanderleeuw J. Tattooing behavior in adolescence. A comparison study. Am J Dis Child. 1991;145(2):184-187.

 8.  Hayes MO, Harkness GA. Body piercing as a risk factor for viral hepatitis: an integrative research review. Am J Infect Control. 2001;29(4):271-274.

 9.  Kohut A, Parker K, Keeter S, Doherty C, Dimock M. How young people view their lives, futures and politics: a portrait of “Generation Next.” Pew Research Center 2007.

 10.  Long GE, Rickman LS. Infectious complications of tattoos. Clin Infect Dis. 1994; 18(4):610-619.

 11.  Mayers LB, Judelson DA, Moriarty BW, Rundell KW. Prevalence of body art (body piercing and tattooing) in university undergraduates and incidence of medical complications. Mayo Clin Proc. 2002;77(1):29-34.

 12.  Mayo Clinic Foundation for Medical Education and Research. Tattoos: understand risks and precautions.  Accessed on September 16, 2010.

 13.  O’Malley CD, Smith N, Braun R, Prevots DR. Tetanus associated with body piercing. Clin Infect Dis. 1998;27(5):1343-1344.

 14.  Wagle WA, Smith M. Tattoo-induced skin burn during MR imaging. AJR Am J  Roentgenol. 2000;174(6):1795.

The Porcelain Terror: Can a Toilet Give You Gonorrhea?

April 13, 2011

By Bradley Ching, Class of 2011

Faculty Peer Reviewed

What do every road trip, football game halftime, and transcontinental plane flight have in common? Usually a disgusting toilet paired with the urgent need to use it. While no one takes pleasure in these encounters, could they in fact be a risk for acquiring a sexually transmitted disease?

Gonorrhea or “the clap,” as it is lovingly nicknamed, is caused by the bacterium Neisseria gonorrhoeae and is most commonly transmitted via sexual intercourse. It is able to infect the non-cornified epithelium of the cervix, urethra, rectum, pharynx, and conjunctiva. There were 336,742 gonorrhea cases reported to the Centers for Disease Control and Prevention in 2008, making it the second most common notifiable infectious disease after Chlamydia trachomatis infection (1.2 million cases).[1]

The idea that one can contract a sexually transmitted disease from a toilet or other fomite has been around for years and is usually dismissed as either an urban legend or the desperate excuse of an unfaithful partner. There have, however, been a few case reports of such infections taking place under special circumstances. In 1939, Dr. Leon Herman described an immobile patient with bilateral leg fractures developing a gonorrhea infection in the hospital after sharing a urinal with his infected neighbor.[2]  A report from 2003 suggested that an 8-year-old girl was infected with N. gonorrhoeae during an international plane flight when she used her hand to wipe the toilet seat before urinating, possibly causing subsequent infection after cleaning herself with the same hand.[3]  While these reports are sparse and neither definitively proves the mode of acquisition, they do lead one to wonder if fomite transmission really may be possible.

N. gonorrhoeae is notoriously susceptible to drying, and this is recognized as a major factor limiting nonsexual transmission. In a study reported in the New England Journal of Medicine, Gilbaugh and Fuchs scientifically examined the possibility of transmission from a toilet seat by documenting how long suspensions of live N. gonorrhoeae remained viable after being placed on a sterile toilet seat. The authors observed that all of the organisms in a saline suspension were nonviable shortly after the sample had dried, validating previous conceptions about the gonococci. However, organisms suspended in purulent discharge from donors were viable for up to two hours after seat inoculation.[4] This proved that, theoretically, gonococcus could be living on a toilet seat near you.

Even though this theoretical risk was shown, the authors were unable to isolate a single N. gonorrhoeae organism from 72 cultures of various public restrooms.[4] In a subsequent study, gonococci could not be recovered in 38 separate attempts at culturing a toilet in a venereal-disease treatment clinic over a six-month period,[5] leading one to believe that it is extremely unlikely that live gonococci can be found on a toilet seat.

Overall, gonorrhea transmission via toilet seats seems highly improbable, and almost impossible when using good judgment. While toilet seats have a very small chance of harboring live bacteria from a previous user, there is still no mechanism for inoculation of the urethral or anal area from normal toilet usage, unless one’s hands become contaminated from pre-cleaning the seat or toilet. This author can therefore safely conclude: if the toilet looks clean and dry, just sit on it.

Bradley Ching is a 4th year medical student at NYU School of Medicine

Peer reviewed by Robert Holzman, MD, NYU School of Medicine

Image courtesy of Wikimedia Commons


1. CDC.  Summary of Notifiable Diseases–United States, 2008. MMWR. 2010;57(54):19.

2. Herman L. The Practice of Urology. Philadelphia: WB Saunders; 1939.

3. Dayan L. Transmission of Neisseria gonorrhoeae from a toilet seat. Sex Transm Infect. 2004;80(4):327.

4. Gilbaugh JH Jr, Fuchs BC. The gonococcus and the toilet seat. N Engl J Med. 1979;301(2):91-93.

5. Rein M. Nonsexual acquisition of genital gonococcal infection. N Engl J Med. 1979;301(24):1347.


From The Archives: Does Acetazolamide Prevent Altitude Sickness?

March 31, 2011

Please enjoy this post from the Clinical Correlations archives first posted May 7, 2009

Seema Pursnani, MD

Because your parents have designated you as the family doctor, your Uncle Joe calls to ask you if he should take this medication called Diamox before going trekking in the Himalayas. You work at Bellevue in New York City: who climbs mountains here? What do you say?

Why do illnesses develop from changes in altitude?

The essential culprit is the fall in atmospheric pressure with increasing altitude. At sea level, barometric pressure (Pb) is ~760 mm Hg (1 atm), whereas at the summit of Mount Everest (~8800 meters high), it drops to ~250 mm Hg. The fraction of inspired oxygen remains constant (21% of air is made of oxygen molecules), so the net result is a decrease in the pressure of inspired oxygen. Remember that the pressure of oxygen in our alveoli is determined by the alveolar gas equation: PAO2 = (FiO2 * (Pb - 47)) - (PaCO2 / 0.8). For example, the alveolar pressure of oxygen at sea level is roughly 100 mm Hg, whereas at the summit of Mount Everest it would be <50 mm Hg.
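The arithmetic behind those two numbers can be checked with a quick sketch. The summit PaCO2 used below is an illustrative assumption: climbers hyperventilate, so PaCO2 falls far below the normal ~40 mm Hg, with values around 13 mm Hg reported near the summit.

```python
def pao2(fio2, pb, paco2, rq=0.8):
    """Alveolar gas equation: PAO2 = FiO2 * (Pb - 47) - PaCO2 / RQ.
    47 mm Hg is the water vapor pressure at body temperature."""
    return fio2 * (pb - 47) - paco2 / rq

# Sea level: Pb ~760 mm Hg, normal PaCO2 ~40 mm Hg
print(round(pao2(0.21, 760, 40), 1))  # 99.7 mm Hg, i.e. roughly 100
# Everest summit: Pb ~250 mm Hg; PaCO2 assumed ~13 mm Hg with hyperventilation
print(round(pao2(0.21, 250, 13), 1))  # 26.4 mm Hg, well under 50
```

Note that if PaCO2 stayed at the sea-level value of 40 mm Hg, the summit result would be negative, which is one way to see why a large ventilatory increase is obligatory at extreme altitude.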

How do our bodies respond to this lack of oxygen?

Normal acute responses to hypoxemia include hypoxic pulmonary vasoconstriction, which shunts blood away from poorly oxygenated areas of lung, and vasodilation in other organs, notably the brain, to improve oxygen delivery. The body also compensates with an increase in minute ventilation. It is not well understood what goes wrong in altitude-related illness, but essentially these normal adaptations prove inadequate or maladaptive.

The term mountain sickness includes a spectrum of illnesses, namely the following entities: acute mountain sickness (AMS), high altitude cerebral edema (HACE), and high altitude pulmonary edema (HAPE). AMS is a clinical syndrome that occurs in someone who has ascended >2500 meters. Clinical features are the presence of a headache AND at least one of the following: GI symptoms, insomnia, fatigue, or dizziness. HACE is considered an end-stage form of AMS; ataxia and change in mental status are the key features of this syndrome. HAPE is a type of noncardiogenic pulmonary edema that results from acute pulmonary hypertension and manifests with typical symptoms of pulmonary edema: dyspnea, cough, decreased exercise tolerance, etc.

Treatment of all forms of mountain sickness includes immediate descent and oxygen supplementation. Vasodilators (calcium channel blockers, phosphodiesterase 5 inhibitors) and long-acting beta-agonists have been studied for the prevention of HAPE. In addition, steroids, diuretics, and vasodilators have also been studied for the treatment of HACE and HAPE. Treatment and prevention of AMS relies primarily on acetazolamide (Diamox).

Slow ascent to allow acclimatization is the key to preventing AMS; avoiding a direct ascent to above 2750 meters is considered a standard recommendation. Acetazolamide is a carbonic anhydrase inhibitor and works by stimulating renal bicarbonate excretion. The resulting increase in blood acidity serves as a central stimulus to increase ventilation, thus facilitating adaptation to hypoxic conditions.

Show me the data! Here are a few trials:

Basnyat et al looked at the efficacy of low-dose acetazolamide for the prophylaxis of AMS.1 In this prospective, double-blind, randomized, placebo-controlled trial, acetazolamide 125 mg bid or placebo was given to approximately 200 healthy trekkers to Mount Everest. In the treatment group, only 9 of 74 (12.2%) developed AMS versus 20 of 81 (24.7%) in the placebo group; the number needed to treat (NNT) in this trial was 8. Another randomized, double-blind, placebo-controlled trial compared ginkgo biloba and acetazolamide for the prevention of acute mountain sickness among Himalayan trekkers.2 In this trial, acetazolamide 250 mg bid, ginkgo biloba, both, or placebo was given to more than 600 western trekkers to Mount Everest. In the acetazolamide group, 12% developed AMS versus 14%, 34%, and 35% in the combined, placebo, and ginkgo groups, respectively. The NNT in this trial was only about 4.
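The NNT figures quoted above follow directly from the absolute risk reduction in each trial; a minimal sketch, using the arm sizes and event rates given in the text:

```python
def nnt(control_rate, treated_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (control_rate - treated_rate)

# Basnyat et al.: AMS in 20/81 on placebo vs 9/74 on acetazolamide 125 mg bid
print(round(nnt(20 / 81, 9 / 74)))  # 8
# Gertsch et al.: 34% on placebo vs 12% on acetazolamide 250 mg bid
print(round(nnt(0.34, 0.12), 1))    # 4.5, i.e. roughly the "about 4" reported
```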

So regarding our family member: unless Uncle Joe is allergic to sulfonamides, I would recommend acetazolamide at the 250 mg twice-daily dose. Other contraindications to acetazolamide include hepatic disease, hyponatremia, hypokalemia, adrenocortical insufficiency, hyperchloremic acidosis, severe renal dysfunction, and severe pulmonary obstruction. Addressing any underlying cardiac or pulmonary dysfunction prior to climbing to such a great height is of utmost importance. In addition, if Uncle Joe is diabetic, acetazolamide should be used with caution, as it can alter glucose control. Assuming Uncle Joe has none of the above medical conditions to cause concern, he should begin taking acetazolamide 24-48 hours before ascending and continue it for at least 48 hours after arrival at high altitude.

Dr. Pursnani is a third year resident in internal medicine at NYU Medical Center.

1. Basnyat B et al. Efficacy of low-dose acetazolamide (125 mg BID) for the prophylaxis of acute mountain sickness: a prospective, double-blind, randomized, placebo-controlled trial. High Alt Med Biol. 2003;4(1):45-52.

2. Gertsch JH et al. Randomised, double blind, placebo-controlled comparison of ginkgo biloba and acetazolamide for prevention of acute mountain sickness among Himalayan trekkers: the prevention of high altitude illness trial (PHAIT). BMJ. 2004; 328(7443):797. Epub 2004 Mar 11.

Faculty peer reviewed and commentary by Nishay Chitkara MD, Instructor of Clinical Medicine, Division of Pulmonary and Critical Care Medicine:

The hypoxic ventilatory response (HVR) is an increase in alveolar ventilation initiated by the carotid body upon ascent. When one remains at the same altitude, this peripheral chemosensor increases its sensitivity to hypoxemia, and an even greater increase in ventilation ensues. The result is a rise in arterial oxygen content and a respiratory alkalosis that is only partially compensated by the kidneys. Acetazolamide effectively mimics this normal acclimatization response by inducing a respiratory acidosis (impaired cellular delivery of CO2 to the lungs) and a metabolic acidosis (enhanced renal bicarbonate excretion), thus stimulating alveolar ventilation. It also prevents two phenomena commonly encountered in acclimatizing individuals: periodic breathing and accentuated hypoxemia during sleep.

The mechanisms of disease for both HACE and HAPE have undergone much investigation. The HVR protects against hypoxia-induced stresses which lie at the root of their development. Much evidence supports the formation of HACE as a consequence of vasogenic edema and hypoxia-induced increased permeability of the endothelium. In HAPE, exaggerated pulmonary vascular responses to hypoxia can lead to high intravascular pressures and stress injury of the pulmonary microvasculature. Acetazolamide can be effective in the prevention of HAPE, by reducing pulmonary vascular resistance. It does not however substitute for more established HAPE treatments such as calcium channel blockers, PDE-5 inhibitors, glucocorticoids, or beta-agonists.

You may want to warn your Uncle Joe about carbonated beverages during his trek to the Himalayas – acetazolamide tends to make them distasteful!

1. Schoene, Robert B. Illness at High Altitude. Chest. 2008; 134:402-416.

Is Dark Chocolate Good For You?

March 30, 2011

By Lisa Parikh, MD

Faculty Peer Reviewed

I was recently counseling an overweight patient about nutrition and exercise when he asked, “Doc, is it true what they say about dark chocolate being good for you?” I told him that although I had heard about this, I was actually not too sure about the evidence behind this. As a strong supporter of the “I wish that the best tasting foods were good for you” club, I decided this was the type of research that warranted my immediate professional evaluation. How could a food so rich in fats and sugars possibly be good for you?

The secret lies in one special ingredient of dark chocolate: catechins. Catechins are a subgroup within the flavonoid family. Flavonoids, in general, are polyphenols that contribute to the aroma, color, and taste of foods such as tea, wine, and chocolate. But it is the catechins that make chocolate, especially dark chocolate, so special. They have strong antioxidant properties and have been shown to improve endothelial function by inducing nitric oxide synthase. In vitro and in animals, they have been shown to decrease the formation of blood clots by reducing platelet activation [1]. In a study comparing the catechin content of dark chocolate, milk chocolate, and tea, dark chocolate contained the highest total catechin concentration: per 100 grams, dark chocolate had 53.5 mg of catechin, milk chocolate 15.9 mg, and black tea 13.9 mg [1]. Given the high antioxidant content of dark chocolate, several studies have been undertaken to evaluate what cardioprotective effects it may have.

Flammer et al studied the possible antiatherogenic properties of dark chocolate in 22 heart transplant recipients in a double-blind, randomized controlled trial. Transplant recipients were chosen as an experimental group because high oxidative stress and reduced antioxidant defense play a critical role in the development and progression of transplant-associated atherosclerosis [4]. Baseline coronary angiography was done before the chocolate intervention. Eleven participants then received 40 g (approximately 1 Hershey’s bar) of flavanol-free control chocolate while the other eleven received flavanol-rich dark chocolate. Repeat coronary angiography was performed two hours later. Results showed that both coronary artery diameter and endothelium-dependent vasomotion improved significantly in the dark chocolate group, but not in the control chocolate group. The group also found that shear stress-related platelet adhesion was significantly reduced by dark chocolate. These results suggest a theoretical cardioprotective effect of dark chocolate, particularly in preventing the development of atherothrombosis. However, given the short-term follow-up, no conclusion can be made about how often patients should consume chocolate or about its long-term effects on coronary artery diameter.

Ried et al conducted a meta-analysis involving 15 trials to study the effects of flavanol-rich cocoa products on blood pressure. All trials had a duration of at least 14 days. Nine of the fifteen used dark chocolate as the treatment while the other six used high-flavanol chocolate or cocoa. Results showed no significant change in blood pressure in the baseline normotensive patients on therapy. However, in hypertensive patients, there was an approximately 5 mm Hg reduction in systolic blood pressure in patients taking flavanol-rich cocoa foods. This is equivalent to a 20% reduction in cardiovascular risk over 5 years—similar to the reduction achieved by exercising 30 min/day [5]! Although this is good news for those of us with a sweet tooth, I am not sure that I would feel comfortable telling my patient about the results of this study—at least not without knowing the possible metabolic effects of consuming this amount of chocolate.

To explore this further, I looked into the maximum amount of chocolate one could eat before the metabolic risks started to outweigh the cardioprotective benefits. Desch et al [2] explored just that in a randomized controlled trial involving ninety-one patients. Half of the patients were given 6 g of chocolate (approximately 3 M&Ms) and the other half received 25 g every day for 3 months, to determine whether blood pressure reduction has a dose-dependent relationship with dark chocolate consumption. Results showed a reduction in mean arterial pressure in both groups, but no significant difference in blood pressure change between the two groups. However, the group consuming 25 g did experience a slight increase in body weight.

Overall, I found that dark chocolate does have cardioprotective effects. The drawback of these studies was their short duration and poor long-term follow-up. We still need long-term studies to quantify how much dark chocolate we can consume, and for how long, before its detrimental metabolic effects develop. When my patient returns, I will tell him that although it seems that chocolate may be good for you, there is not enough data for me to wholeheartedly recommend it to my patients. Like so many good things in life, chocolate should be used in moderation.

Dr. Parikh is a first year resident at NYU Langone Medical Center

Peer reviewed by Barbara Porter, MD, editor myths and realities section, Clinical Correlations

Image courtesy of Wikimedia Commons


1. Arts, Ilja W., Peter Hollman, and Daan Kromhout. “Chocolate as a Source of Tea Flavonoids.” The Lancet 354 (1999): 488. Print.

2. Desch, Steffen, Daniela Kobler, and Johanna Schmidt. “Low vs. Higher-dose Dark Chocolate and Blood Pressure in Cardiovascular High-Risk Patients.” American Journal of Hypertension 23.6 (2010): 694-700. Print.

3. Faridi, Zubaida, Valentine Njike, and Suparna Dutta. “Acute Dark Chocolate and Cocoa Ingestion and Endothelial Function: a Randomized Controlled Crossover Trial.” American Journal of Clinical Nutrition 88.1 (2008): 58-63. Print.

4. Flammer, Andreas, Frank Hermann, and Isabella Sudano. “Dark Chocolate Improves Coronary Vasomotion and Reduces Platelet Reactivity.” Circulation 116 (2007): 2376-2382. Print.

5. Ried, Karin, Thomas Sullivan, and Peter Fakler. “Does Chocolate Reduce Blood Pressure? A Meta-analysis.” BMC Medicine 8 (2010). Print.

From The Archives: The Skinny on Cachexia…Can it be Treated?

March 24, 2011

Please enjoy this post from the Clinical Correlations archives first posted April 22, 2009

Michael T. Tees, MD, MPH

On the wards and in the clinic, the physician is frequently presented with a patient with decreased appetite and alarming weight loss. The patient is likely frustrated with his or her own frailty, and the family is upset at the poor nutritional state of their loved one, but the healthcare provider should be the most concerned. This clinical presentation without a prior diagnosis is worrisome, and if the patient does have an underlying diagnosis, the weight loss likely represents disease progression.

Caring for the cachectic patient presents a frustrating and recurring dilemma. Cachexia is defined as ongoing weight loss, often with muscle wasting, associated with a long-standing disease. In cachexia, refeeding often does not induce weight gain. Anorexia, excluding the willful avoidance of eating, usually occurs in conjunction with cachexia (1).


The causes of cachexia and anorexia are only now beginning to be elucidated. Research has shown the important role of cytokines in causing metabolic abnormalities in chronic illnesses, such as cancer, chronic obstructive pulmonary disease (COPD), HIV-associated wasting syndrome, cardiac disease, and some rheumatologic diseases (1). Cytokines are elevated in pro-inflammatory states and have been found to activate proteolysis and lipolysis (2, 3). Many tumors are known to produce their own cytokines, which were found to cause decreased appetite and weight loss in animal models (4). The activity of the various cytokines involved produces a net negative energy balance in the patient suffering from a chronic disease.

Several other hormonal factors may induce cachexia. Decreased testosterone levels, seen in the aging and in those with disease, may lead to cachexia: testosterone normally inhibits macrophage release of pro-inflammatory cytokines, and low levels of the hormone are associated with lipolysis and anorexia (5, 6). Studies have also found alterations in insulin-like growth factor (IGF)-1, which normally works to increase muscle protein synthesis (4); low concentrations are found in malnourished individuals. Elevated glucocorticoid levels are also seen in cachectic patients; these hormones are known to suppress cell transporters involved in amino acid and glucose uptake (6).


The prognostic significance of cachexia is alarming. In cancer patients, weight loss documented prior to initiating chemotherapy predicts shorter survival than in those who had maintained weight (7). Among long-term nursing home residents, patients losing 5% or more of their body weight over a 1-month period have a 10-fold increase in mortality over 6 months compared with those who gain weight (8). This indicates that if any intervention can effectively reverse or stop weight loss, the physician should employ it early. Unfortunately, however, a meta-analysis found that nutritional supplementation alone does little to decrease mortality or complications (1).


Megestrol acetate (Megace) is the most studied progestational agent, and its efficacy likely stems from its suppression of cytokines or cytokine receptors. Megestrol can improve appetite and induce weight gain in patients with cancer, HIV-associated wasting syndrome, end-stage renal disease on dialysis, and those requiring long-term care. Studies have shown appetite increased in 50-75% of patients, and modest weight gains in 50-70%, varying by disease and study. Typically, single-digit kilogram gains were seen in those who responded to the medication, and cancer patients gained just over 1 kg on average (1). Importantly, however, megestrol has been associated with venous thromboembolism, deep venous thrombosis, hypertension, impotence, edema, and suppression of the pituitary-adrenal axis. In cancer patients, who already have an increased risk of thrombosis, the physician must prescribe it judiciously.

Dronabinol (Marinol) and nabilone (Cesamet) are cannabinoids that are often used to treat chemotherapy-related nausea and vomiting. Cannabinoids act on receptors in the central nervous system and have also been found to improve appetite in cancer and AIDS patients. However, this class was found to induce weight gain only in long-term care patients (1). The cannabinoids may induce dependence, but more commonly side effects of tachycardia, dizziness, somnolence, anxiety, paranoia, hallucinations, or confusion cause the medication to be discontinued. In a direct comparison study, appetite improvement was reported in 75% of patients taking megestrol compared with 49% of those receiving dronabinol (9).

Corticosteroids, such as dexamethasone, have been studied and used to treat anorexia and cachexia. Corticosteroids decrease the inflammatory state and are effective at preventing chemotherapy-related nausea and vomiting. This therapy improves appetite in patients with cancer or HIV-associated wasting syndrome and has been shown to induce weight gain comparable to megestrol; however, the side effects of steroid therapy caused discontinuation more frequently than with megestrol (10). Corticosteroids can potentially cause gastritis, hyperglycemia, immune suppression, and weakness. These side effects are significant enough that in cancer it is recommended to use them only near the end of life. However, if treatment is indicated and the patient has a history of DVT, corticosteroid use can be considered as an alternative to megestrol (7).

Another anti-inflammatory agent, thalidomide, works by suppressing TNF-α. Thalidomide therapy has been proven effective at inducing weight gain in those with HIV-associated wasting, but only one small study has shown weight gain in cancer patients (11). Thalidomide use has been associated with fevers, sensory neuropathy, neutropenia, thromboembolism, rash, and sedation.

Oxandrolone, an androgen that inhibits IL-1, IL-6, and TNF-α, can produce weight gain in HIV-associated wasting syndrome, but has not yet been proven efficacious in other cachectic states (1). Oxandrolone and other androgens may cause hypertension, fluid retention, sexual organ alterations, masculinization, hepatic disease, and dyslipidemia, among other effects.

Cyproheptadine, an anti-histamine, has been shown to help with anorexia in cancer patients, but not with weight gain. Cyproheptadine may cause agranulocytosis or thrombocytopenia, but more commonly dry mouth, dizziness, and fatigue are side effects.

Sadly, reviewing all available therapies shows that even the “best” medication, megestrol, rarely leads to significant weight gain. Furthermore, combination therapy has not been proven to be more effective than a single agent. Of the many potential interventions being assessed, researchers have been studying anti-cytokine and anti-inflammatory agents. Researchers are also focusing on blocking the proteolytic system, while others are looking at inducing protein synthesis. Some trials are ongoing, such as the use of oxandrolone in cancer patients (12). With multiple biological pathways and cellular signals involved in cachexia and anorexia, research will continue and hopefully yield favorable results in the near future. Until better therapies are developed, treating the patient with a poor nutritional state will continue to be a difficult problem.

Peer Review/Commentary by Theresa Ryan, MD Assistant Professor of Medicine, NYU Division of Medical Oncology:

Anorexia-cachexia syndrome, or ACS, affects up to 80% of patients with advanced solid tumor malignancies. Although definitions vary, ACS is typically defined as a weight loss of >5% of a patient’s pre-illness weight over a 2-6 month period. ACS causes both physical and psychological distress and is associated with early mortality independent of staging, functional status, or tumor histology. ACS can be categorized as primary or secondary. Primary ACS results from a hypermetabolic state caused directly by cancer, while secondary ACS results from cancer-related barriers that reduce dietary intake, such as nausea/vomiting, mucositis, and intestinal obstruction. For primary ACS, therapies have primarily been directed at appetite stimulation, with newer therapies targeting the proinflammatory/hypermetabolic state; the role for nutritional support such as TPN is predominantly in secondary ACS.

The above reviewed some of the available drugs to combat ACS as well as some of the possible future agents. But drugs are just one part of the treatment of ACS. Nutritional, psychological, and behavioral therapies are important components and should be incorporated into a multidisciplinary approach to this complex medical problem. Physicians, nurses, and dietitians, along with patients and families, can identify specific needs and plan individualized treatment. The first attempt should be to maximize oral intake by allowing the patient flexibility in the type, quantity, and timing of meals. Interventions should also aim to minimize secondary factors that may influence appetite, such as nausea, vomiting, diarrhea, pain, fatigue, and changes in taste or food preferences. Encouraging patient and family interactions and providing emotional and educational support may be helpful. For example, when family members are able to provide the patient’s favorite foods, food intake usually improves.

In addition, psychiatric disorders are common among cancer patients. ACS may result in a secondary depression, or depression may be a prime contributor to the anorexia and subsequent weight loss. Evaluations of relaxation, hypnosis, and short-term group psychotherapy have suggested some benefit with regard to anorexia and fatigue.

Ultimately, the best way to treat cancer-related ACS is to cure the cancer, but unfortunately this remains an elusive goal for most patients with advanced solid tumors. That being said, if cure is not possible, chemotherapy does have an important role in palliation, including a demonstrated ability to ameliorate ACS.  One example is in the treatment of advanced pancreatic cancer. The pivotal trial of first-line chemotherapy for advanced pancreatic cancer used the novel endpoint of clinical benefit response (CBR). CBR was defined as improvement in quality-of-life factors: pain medication use (a decrease of >50% scored as positive), performance status (an improvement of >20 points in KPS), and weight gain (of >7%). To be classified as having a positive response, a patient had to exhibit sustained (4 weeks) improvement in at least one of the CBR components, without deterioration in any of the others. The number of patients experiencing a CBR was significantly greater in the group randomized to gemcitabine (23.8%) than in the control arm (4.8%; p = 0.0022). This improvement in CBR was one of the critical endpoints that led to the FDA approval of gemcitabine and its subsequent establishment as the standard of care for metastatic pancreatic cancer.  Better cancer treatments will hopefully also continue to lead to improvements in the treatment of cancer-related ACS.
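As a rough sanity check, the reported CBR difference can be reproduced with a two-sided Fisher exact test in plain Python. The counts used below (15 of 63 gemcitabine patients vs. 3 of 63 controls) are a hypothetical reconstruction back-calculated from the published percentages and equal arm sizes, and the trial’s own p-value of 0.0022 may derive from a different statistical test:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    denom = comb(n, col1)

    def pmf(k):
        # Hypergeometric probability of k responders in the first arm
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    p_obs = pmf(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # Sum probabilities of all tables at least as extreme as the observed one
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

# Hypothetical counts: 23.8% of 63 (15 responders) vs 4.8% of 63 (3 responders)
p = fisher_exact_two_sided(15, 48, 3, 60)
print(f"two-sided p = {p:.4f}")
```

The computed p-value lands comfortably below 0.05, consistent with the trial’s conclusion that the CBR difference was significant.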

References for Part 1:

1. Thomas DR. Guidelines for the use of orexigenic drugs in long-term care. Nutr Clin Pract 2006; 21:82-7.

2. Kotler DP. Cachexia. Ann Intern Med 2000; 133:622-34.

3. Mitch WE, Goldberg AL. Mechanisms of muscle wasting. The role of the ubiquitin-proteasome pathway. N Engl J Med 1996; 335:1897-1905.

4. Jatoi A. Weight loss in patients with advanced cancer: effects, causes, and potential management. Curr Opin Support Palliat Care 2008; 2:45-8.

5. D’Agostino P, Milano S, Barbera C, et al. Sex hormones modulate inflammatory mediators produced by macrophages. Ann NY Acad Sci 1999; 876:426-9.

6. Morley JE, Thomas DR, Wilson MG. Cachexia: pathophysiology and clinical relevance. Am J Clin Nutr 2006; 83:735-43.

7. Jatoi A. Pharmacologic therapy for the cancer cachexia/weight loss syndrome: a data-driven, practical approach. J Support Oncol 2006; 4:499-502.

8. Ryan C, Bryant E, Eleazer P, Rhodes A, Guest K. Unintentional weight loss in long-term care: predictor of mortality in the elderly. South Med J 1995; 88:721-4.

9. Jatoi A, Windschitl HE, Loprinzi CL, et al. Dronabinol versus megestrol acetate versus combination therapy for cancer-associated anorexia: a North Central Cancer Treatment Group Study. J Clin Oncol 2002; 20:567-73.

10. Loprinzi CL, Kugler JW, Sloan JA, et al. Randomized comparison of megestrol acetate versus dexamethasone versus fluoxymesterone for the treatment of cancer anorexia/cachexia. J Clin Oncol 1999; 17:3299-3306.

11. Bruera E, Neumann CM, Pituskin E, et al. Thalidomide in patients with cachexia due to terminal cancer: preliminary report. Ann Oncol 1999; 10:857-9.

12. Strasser F. Appraisal of current and experimental approaches to the treatment of cachexia. Curr Opin Support Palliat Care 2007; 1:312-6.

References for Commentary:

Burris HA 3rd, Moore MJ, Andersen J, Green MR, Rothenberg ML, Modiano MR, Cripps MC, Portenoy RK, Storniolo AM, Tarassoff P, Nelson R, Dorr FA, Stephens CD, Von Hoff DD. Improvements in survival and clinical benefit with gemcitabine as first-line therapy for patients with advanced pancreas cancer: a randomized trial. J Clin Oncol 1997;15(6):2403-2413.

Michael Tees is a first-year resident in Internal Medicine at NYU Medical Center

The Myth of the Helminth: Can Worms be the Next Therapeutic Breakthrough for IBD Patients?

March 16, 2011

By Michael Guss, Class of  2012

Faculty Peer Reviewed

Helminths–parasitic worms that have co-evolved with humans and colonized our gastrointestinal (GI) tract for millennia–have developed the ability to modulate our inflammatory responses and evade our immune systems to survive [1,2]. Until the 1930s, the helminth colonization of humans was almost universal, owing to poor sanitation conditions and an impure food supply [3,4]. This changed as the economic development of the last century created improved sanitary conditions: clean running water, hygienic farming practices, and better medical services, ultimately eradicating helminth infections in the modern world [4].

But has improved sanitation come at a cost?

With the eradication of helminth infections, statistics have shown a concomitant increase in the incidence of inflammatory bowel disease, including Crohn’s disease (CD) and ulcerative colitis (UC), both chronic immune diseases of the GI tract [5]. Prior to 1940, inflammatory bowel disease (IBD) was almost non-existent worldwide. Today IBD affects more than three million people in the United States and Europe. Conversely, in developing countries with high rates of helminth colonization, the incidence of IBD remains much lower [6]. The proposed link between rising IBD rates and the improved sanitation and cleanliness of modern society has been dubbed the hygiene hypothesis.

The IBD hygiene hypothesis states that “raising children in extremely hygienic environments negatively affects immune development” [5]. This is because over thousands of years our immune systems and intestinal linings have adapted to co-evolution with infectious agents like helminths. Non-exposure can lead to altered immune system development, predisposing humans to immunologic diseases like IBD [5,11]. This hypothesis is supported by the geographical gradient in the incidence of IBD: while the US and northern Europe have high incidence rates, countries in South America, Asia, and Africa have very low incidence rates. However, the gap in IBD incidence rates is beginning to narrow as developing countries adopt modern hygienic practices and their exposure to infectious agents like helminths decreases [2]. Studies have also shown an incidence rate stratification among climates, with tropical locales seeing increased rates of infection and fewer cases of IBD, and temperate regions showing the inverse [3,4]. This hypothesis is further supported by epidemiological studies that have found an increased risk of IBD in the offspring of migrants who relocate from developing to developed countries compared to peers in their country of origin [4].

IBD is thought to be caused by an uncontrolled immune response to normal gut flora, leading to an imbalance in the response of T-helper cells [5]. Genetic susceptibility in addition to an increased ratio of Th1/Th2 cell lines is hypothesized to cause IBD. The Th1 immune response is responsible for cell-mediated actions needed for viral infections and intracellular pathogens, while the Th2 response is activated to initiate an antibody response against foreign invaders like helminths. Th1 cells release inflammatory cytokines such as interferon-gamma and tumor necrosis factor-alpha, causing subsequent mucosal inflammation of the GI tract, while Th2 cells release the anti-inflammatory cytokine interleukin-10 (IL-10). The Th1 and Th2 cell lines cross-regulate one another through their respective cytokines. The decrease in helminth exposure has led to a decrease in activation of Th2 cell response, leading to an unchecked, inappropriate Th1 predominance in the immune systems of patients living with IBD [7,8].

The imbalance in Th2 and Th1 cell response is the basis for the use of helminths as therapy for IBD.  The belief is that if patients are exposed to helminths, the immune system will respond with an increase in Th2 cell activation and production. This response will then lead to a release of host compounds that trigger the release of an anti-inflammatory cytokine such as IL-10, in addition to down-regulating the pro-inflammatory Th1 response [7,8].

So does helminth therapy actually work?

Researchers began the study of helminth therapy using murine models. Colitis was induced in mice using various chemical challenges and the mice were subsequently infected with various helminths. These studies showed that helminth infections could both prevent and alleviate colitis in animals [5,8]. Following the success in murine models, scientists performed clinical studies with the porcine whipworm, Trichuris suis, in patients with both CD and UC. T. suis was chosen as a good candidate for use in humans because the larvae and adults cannot leave the intestines or multiply within humans, and transmission from person to person is inhibited by normal hygienic practices [4,6].

The first human study was an open-label trial treating 7 patients (UC and CD combined) with a one-time dose of 2500 microscopic T. suis ova [9]. Both subsets of patients showed clinical improvement with no adverse side effects. A second study treated 29 CD patients with 2500 ova every 3 weeks for 24 weeks; 23 of 29 (79%) responded positively, with 21 going into complete remission [10]. Following this success, a double-blind, placebo-controlled, randomized clinical trial assessed the efficacy and safety of T. suis ova therapy in 54 patients with UC. The results showed improvement in 43.3% of patients treated with the regimen of 2500 T. suis ova orally every 2 weeks for 12 weeks, versus only a 16.7% improvement rate in the placebo group. The patients experienced no side effects, and the researchers concluded that “ova therapy seems safe and effective in patients with active colitis” [11].
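The headline percentages from the 54-patient trial can be checked with a pooled two-proportion z-test. The group sizes below (30 ova-treated, 24 placebo) are an assumption inferred from the reported rates (13/30 = 43.3%, 4/24 = 16.7%); the published analysis may have used a different method:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p from the standard normal tail, via the error function
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical counts: 13 of 30 treated vs 4 of 24 placebo patients improved
z, p = two_proportion_z(13, 30, 4, 24)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

Under these assumed counts the difference clears the conventional 0.05 threshold, in line with the trial’s reported finding of benefit.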

Helminth therapy research is in its infancy, and, as with all new therapies, caution should be taken to prevent potentially harmful effects of treatment. A major obstacle is the availability of safe sources of clinical-grade material for use in clinical trials. While some helminths are known to cause serious side effects, including liver fibrosis and portal hypertension, the worms that have been studied thus far have been carefully selected to cause relatively minor side effects [2]. Currently, IBD has no cure beyond surgery (for ulcerative colitis), and the main treatments are immunosuppressive medications like glucocorticoids, anti-metabolites, and biologics that can increase susceptibility to serious infections [4]. The IBD patient population is in need of new treatments, and helminth-derived therapy might be the next breakthrough in the control of these diseases. While studying the potential benefits of helminth exposure is a start, the ultimate goal for helminthic treatment may not lie in actually infecting patients with the worms, but instead discovering immunomodulatory molecules that allow helminths to effectively modulate the human immune system [5]. Research on this topic is already underway, and the filarial nematode-derived immunomodulatory molecule ES-62 is one potential therapeutic agent being actively pursued [1]. Some preliminary studies indicate that helminth extracts are as effective as treatment with live helminths, and would provide a much less offensive treatment than live worms [5].

Perhaps reuniting with the creatures that have been eliminated from the modern hygienic environment will become a legitimate source of effective treatment for those with IBD. As Joel Weinstock, one of the leading researchers in the study of helminthic therapy, said in an interview with the New York Times, “We’re part of our environment; we’re not separate from it.” Adopting this theory and embracing our surroundings in therapeutic research might be the breakthrough IBD patients have been awaiting [12].

Michael Guss is a 4th year medical student at NYU Langone Medical Center

Peer reviewed by P’ng Loke, MD, Medical Parasitology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Harnett MM, Melendez AJ, Harnett W. The therapeutic potential of the filarial nematode-derived immunomodulator, ES-62 in inflammatory disease. Clin Exp Immunol. 2010;159(3):256-267.

2. Weinstock JV, Elliott DE. Helminths and the IBD hygiene hypothesis. Inflamm Bowel Dis. 2009;15(1):128-133.

3. Elliott DE, Urban JF Jr, Argo CK, Weinstock JV. Does the failure to acquire helminthic parasites predispose to Crohn’s disease? FASEB J. 2000;14(12):1848-1855.

4. Elliott DE, Summers RW, Weinstock JV. Helminths and the modulation of mucosal inflammation. Curr Opin Gastroenterol. 2005;21(1):51-58.

5. Ruyssers NE, De Winter BY, De Man JG, et al. Worms and the treatment of inflammatory bowel disease: are molecules the answer? Clin Dev Immunol. 2008;2008:567314.

6. Elliott DE, Weinstock JV. Helminthic therapy: using worms to treat immune-mediated disease. Adv Exp Med Biol. 2009;666:157-166.

7. Wells RW, Blennerhassett MG. The increasing prevalence of Crohn’s disease in industrialized societies: the price of progress? Can J Gastroenterol. 2005;19(2):89-95.

8. Motomura Y, Wang H, Deng Y, El-Sharkawy RT, Verdu EF, Khan WI. Helminth antigen-based strategy to ameliorate inflammation in an experimental model of colitis. Clin Exp Immunol. 2009;155(1):88-95.

9. Summers RW, Elliott DE, Qadir K, Urban JF Jr, Thompson R, Weinstock JV. Trichuris suis seems to be safe and possibly effective in the treatment of inflammatory bowel disease. Am J Gastroenterol. 2003;98(9):2034-2041.

10.  Summers RW, Elliott DE, Urban JF Jr, Thompson R, Weinstock JV. Trichuris suis therapy in Crohn’s disease. Gut. 2005;54(1):87–90.

11.  Summers RW, Elliott DE, Urban JF Jr, Thompson RA, Weinstock JV. Trichuris suis therapy for active ulcerative colitis: a randomized controlled trial. Gastroenterology. 2005;128(4):825–832.

12.  Velasquez-Manoff M. Idea Lab: The Worm Turns. New York Times Magazine. June 29, 2008.

From the Archives: Myths and Realities: Does the Weather Really Affect Arthritis?

March 3, 2011

Please enjoy this post from the Clinical Correlations archives first posted March 19, 2009

Aditya Mattoo MD

Faculty Peer Reviewed

For our first post, I wanted to address the age-old belief that changes in the weather can affect arthritis pain. Since the time of Hippocrates, who wrote about the effects of hot and cold winds on people’s health, this topic has been debated. Even Osler suggested in 1892 that arthritis sufferers of wealth vacation in the South to avoid the cold, damp weather of northern winters.(1) The majority of physicians have had at least one patient claim that his or her arthritis is acting up on account of changing weather conditions. In fact, surveys have demonstrated that upwards of 90% of patients believe that weather plays a role in their arthritic pain. Some arthritis sufferers assert that they can predict the weather, as storm systems are often heralded by joint pain flares. This belief has become so commonplace that even weather reporting services have developed arthritis indices to accompany the daily forecast.

Some of the theories to support the association between weather and arthritis relate pressure and temperature to direct effects on joint biomechanics. Increased atmospheric pressure can increase intraarticular pressure, leading to pain, as evidenced by joint pains secondary to compression in deep-sea divers.(2) Similarly, lower temperature can affect the compliance of articular structures and the viscosity of synovial fluid, making joints stiffer. Another proposed mechanism is that temperature and pressure are known to sensitize nociceptors in the densely innervated periarticular structures.(3)

Enough background… let’s get down to business. So what does the data show? Well, the older literature was largely equivocal, limited by flawed study designs trying to measure a subjective complaint. Small sample sizes and single-center studies didn’t help either. In more recent years, I am sad to report that the jury is still out. In 2003, Wilder et al prospectively followed 154 patients in the same geographical area with osteoarthritis (OA) who self-reported pain at five body sites.(4) The researchers used temperature, barometric pressure, and precipitation data from the US National Oceanic and Atmospheric Administration (NOAA) to attempt to find a relationship between these weather-related variables and the severity of pain complaints. The only statistically significant finding was in a subgroup of women in which worsening pain complaints were seen on days with rising barometric pressure. All other subgroups failed to demonstrate statistically significant associations. Although this was one of the largest studies of its time, it was criticized for its limited geographical scope.

Fueled by the limitations of earlier studies, including the one mentioned above, McAlindon et al published a multi-site study that followed 200 patients with knee OA over different times of the year.(5) Using a well-validated pain questionnaire (WOMAC) and NOAA data, the study prospectively compared knee pain with different atmospheric conditions throughout the continental US. This time, positive correlations were found: both lower ambient temperatures and rising barometric pressure were independently and significantly associated with increased pain complaints.

As we have learned, whether weather is a contributing factor in arthritic pain cannot be stated definitively. Even though the literature is filled with contradictory studies, one of the largest multi-site studies conducted to date has recently demonstrated an association between the two. To quote Robert Ripley, “believe it or not.”

1 Osler W. The Principles and Practice of Medicine: Designed for the Use of Practitioners and Students of Medicine (1892). Birmingham: Classics of Medicine Library, 1978.

2 Compression Pains. In: US Navy Diving Manual. Revision 4 ed. Naval Sea Systems Command, 3-45, 1999.

3 Verges J et al. Weather Conditions can Influence Rheumatic Diseases. Proceedings of the Western Pharmacology Society, 47:134-136, 2004.

4 Wilder FV et al. Osteoarthritis Pain and Weather. Rheumatology, 43:955-958, 2003.

5 McAlindon T et al. Changes in Barometric Pressure and Ambient Temperature Influence Osteoarthritic Pain. American Journal of Medicine, 120:429-434, 2007.

Faculty Peer Reviewed By: Svetlana Krasnokutsky, MD Division of Rheumatology

Is Vasopressin Indicated in the Management of Cardiac Arrest?

February 2, 2011

By Brandon Oberweis, MD

Faculty Peer Reviewed

Case Report:

A 65-year-old male with a past medical history significant for NYHA class IV heart failure was found by his wife to be unresponsive.  Emergency Medical Services was subsequently called and, upon arrival, initiated chest compressions and defibrillation for cardiac arrest secondary to ventricular fibrillation.  Intravenous access was obtained, and despite two episodes of defibrillation, the patient remained in ventricular fibrillation.  The patient was given one dose of 40 U of vasopressin followed by 1 mg of epinephrine every 3-5 minutes for persistent ventricular fibrillation.  The patient was transported to the local hospital for further medical management.

In the setting of pulseless cardiac arrest, is there a significant survival advantage to management with vasopressin compared to epinephrine?


Cardiac arrest remains one of the most devastating medical conditions for patients and their families.  According to the literature, there are an estimated 400,000 cases of cardiac arrest in the United States annually [1].  Despite the attempts of multiple studies to elucidate the optimal management of cardiac arrest and the implementation of Basic Life Support (BLS) and Advanced Cardiovascular Life Support (ACLS) algorithms, the survival rate following cardiac arrest remains between 2-24% [2].  Given the paucity of evidence in the literature for the proper management strategy, cardiac arrest remains a significant public health dilemma.

There are four recognized cardiac rhythms that produce cardiac arrest: ventricular fibrillation (VF), ventricular tachycardia (VT), pulseless electrical activity (PEA), and asystole [3].  Of these rhythms, ventricular fibrillation accounts for 60-80% of cardiac arrests and is the most treatable rhythm.  Conversely, asystole accounts for 20-40% of cardiac arrests and is most often refractory to cardiac resuscitation [4].  Given the significant consequences and sequelae of cardiac arrest, the American Heart Association developed the ACLS protocol to attempt to provide a consistent and optimal management strategy.

ACLS Guidelines:

It is well-established that the first minutes following cardiac arrest are the most critical to a patient’s survival.  The ACLS guidelines recommend cardiopulmonary resuscitation (CPR) and defibrillation for cardiac arrest secondary to ventricular tachycardia or fibrillation.  If VT/VF persists after 1 or 2 episodes of defibrillation, vasopressors are indicated.  In cases of nonshockable rhythms, such as PEA or asystole, vasopressors are indicated following the initiation of CPR, once IV access is achieved.  Although there have been no placebo-controlled studies demonstrating an increase in neurologically intact survival with any vasopressor agent, there is evidence that vasopressors increase the rate of return of spontaneous circulation (ROSC).  According to the ACLS guidelines for vasopressor therapy, 1 mg of epinephrine may be given and repeated every 3 to 5 minutes during cardiac arrest.  Furthermore, 40 units of vasopressin may be substituted for the first or second dose of epinephrine [3].  Although the ACLS guidelines sanction the substitution of vasopressin for epinephrine, there is a lack of evidence in the literature that vasopressin increases survival compared to epinephrine.

Vasopressin vs. Epinephrine:

Epinephrine exerts its physiologic effects during cardiac arrest through stimulation of the α-adrenergic receptors, thus inducing vasoconstriction and a subsequent increase in aortic pressure.  This systemic vasoconstriction promotes an increase in coronary and cerebral perfusion pressures [5, 6].  Despite these recognized benefits, evidence has shown a negative safety profile of epinephrine, due to its ability to increase myocardial work load, reduce subendocardial perfusion, and induce ventricular arrhythmias [3].  Vasopressin, a nonadrenergic endogenous peptide that induces peripheral, coronary, and renal vasoconstriction via stimulation of the V1 receptors, lacks the adverse effects of epinephrine and has therefore gained much attention as a substitute vasopressor [7].  Another possible advantage is that through V2 receptor stimulation, vasopressin may induce vasodilation and therefore lessen the end-organ hypoperfusion thought to occur with epinephrine [8].

Although multiple studies have been conducted to determine whether either of these two interventions confers a statistically significant survival advantage, the evidence in the literature remains unclear and contradictory.  Nevertheless, a study by Wenzel et al. attempted to elucidate the differences in survival between epinephrine, vasopressin, and the concomitant use of both.  This double-blind, prospective, randomized controlled trial used primary and secondary end points of survival to hospital admission and discharge, respectively.  Interestingly, there were no significant differences in survival between the epinephrine and vasopressin groups among patients with VF or PEA.  Conversely, among patients with cardiac arrest secondary to asystole, patients who received vasopressin had a significantly higher rate of survival to hospital admission and discharge (29.0% vs. 20.3%, P=0.02 and 4.7% vs. 1.5%, P=0.04, respectively) [9].  These findings are supported by evidence that metabolic derangements, specifically metabolic acidosis, impair the functionality of α-adrenergic receptors, thus diminishing the effect of catecholamines [10].  It has therefore been suggested that the observed superiority of vasopressin over epinephrine in asystole is a result of extreme ischemia and acidosis.  However, subsequent studies conducted to evaluate the effectiveness of epinephrine and vasopressin in cardiac arrest have not demonstrated reproducible survival benefits of vasopressin in any of the four subtypes of cardiac arrest.

Epinephrine + Vasopressin vs. Epinephrine Alone:

Despite the paucity of evidence supporting the superiority of either vasopressor alone, it has been hypothesized that the combination of epinephrine and vasopressin may have a synergistic effect.  It is well-recognized that concomitant epinephrine and vasopressin increase survival following cardiac arrest in animal subjects [2].  In humans, however, this benefit has been demonstrated in only one trial.  The findings of the Wenzel et al. study are consistent with the animal research, supporting that vasopressin followed by epinephrine improves ROSC (36.7% vs. 25.9%, P=0.002) and 24-hour survival rates (6.2% vs. 1.7%, P=0.002) compared to epinephrine alone [9].  Despite the improved 24-hour survival rate, the rate of neurologically intact survival at discharge was poor.  Subsequent studies conducted to further evaluate the role of combination epinephrine and vasopressin have not found an improved outcome over epinephrine alone [11].

In the multicenter trial by Gueugniaud et al., 1442 patients were randomized to a combination of epinephrine and vasopressin and 1452 patients to epinephrine alone.  The authors report that there were no treatment-related adverse effects in this study.  Furthermore, a post hoc analysis identified that when the initial ECG rhythm was PEA, patients who received epinephrine alone had a higher rate of survival to hospital discharge (5.8% vs. 0%, P=0.02).  However, among patients with an initial ECG rhythm of VF or asystole, there were no significant differences in outcomes between combination therapy and epinephrine alone [11].  These findings further confound the question of whether vasopressin, compared to epinephrine or in combination with it, improves survival in the setting of cardiac arrest.

Given that vasopressin has a relatively long half-life of 6 minutes, it is likely that a resulting therapeutic benefit would be from the simultaneous action of both epinephrine and vasopressin [2].  As epinephrine is the standard vasopressor therapy during cardiac arrest, the vasopressin arms of randomized control trials received epinephrine after 3-5 minutes if there was no response to the administration of vasopressin.  Despite this hypothesized synergistic effect, the majority of studies in the literature have found no survival benefit to combination therapy over epinephrine alone [2].
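The overlap argument above follows directly from first-order elimination kinetics. Assuming a simple single-compartment model with the cited 6-minute half-life (a rough sketch; actual pharmacokinetics during CPR are far more complex), a substantial fraction of the vasopressin dose is still circulating when epinephrine is next given at the 3-5 minute mark:

```python
from math import log, exp

HALF_LIFE_MIN = 6.0  # vasopressin plasma half-life cited in the text
k = log(2) / HALF_LIFE_MIN  # first-order elimination rate constant

def fraction_remaining(t_min):
    """Fraction of a vasopressin dose remaining t_min after administration."""
    return exp(-k * t_min)

# At the 3-5 minute mark, when the next epinephrine dose would be given,
# roughly half to two-thirds of the vasopressin is still present.
for t in (3, 5):
    print(f"t = {t} min: {fraction_remaining(t):.0%} remaining")
```

This back-of-the-envelope calculation (about 71% remaining at 3 minutes and 56% at 5 minutes) illustrates why any observed benefit in the vasopressin arms is plausibly attributable to the combined action of both drugs.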

As neither intervention has shown superiority as a monotherapy, the increased cost of vasopressin must be weighed against the incidence of adverse effects of epinephrine.  Although it is difficult to analyze management options from a cost-benefit perspective, it must be considered that the cost of 40 U of vasopressin is 15 times greater than that of 1 mg of epinephrine.  Healthcare providers must consider that this increase in cost with vasopressin occurs without an evidence-based improvement in survival [1].


Cardiac arrest remains a significant public health issue despite the focus of research trials and implementation of practice guidelines.  Although Advanced Cardiac Life Support is performed by the most highly trained medical professionals, the only interventions shown to significantly increase survival are chest compressions and defibrillation.  ACLS guidelines currently recommend that vasopressin may be administered as a replacement of the first or second dose of epinephrine during a cardiac arrest.  Given that the overwhelming majority of evidence in the literature does not corroborate an increase in survival with epinephrine and vasopressin combination therapy, it is recommended that either epinephrine or vasopressin be administered as the first dose of vasopressor therapy, followed by repeated doses of epinephrine.

Even if the study by Wenzel et al. is accurate in that vasopressin increases survival to discharge, when cerebral performance of these patients was analyzed, there was no significant improvement in neurological performance [9].  Therefore, future studies are needed to establish the incidence of adverse effects of both vasopressors, thus enabling an objective management decision, given the similar efficacy between both interventions.

Resolution of Case:

The patient in our case received an additional episode of defibrillation with concomitant chest compressions and administration of epinephrine every 3-5 minutes.  The subsequent rhythm check demonstrated a perfusing rhythm and the ACLS algorithm was terminated.  The patient survived to discharge despite residual neurologic deficits.  He was admitted to rehabilitation where he is making modest improvements.

Dr. Oberweis is a first-year resident at NYU Langone Medical Center

Peer reviewed by Laura Evans, MD, critical care section editor, Clinical Correlations

Image courtesy of Wikimedia Commons.


1.  Aung K, Htay T. Vasopressin for Cardiac Arrest: A Systematic Review and Meta-analysis. Arch Intern Med 2005;165:17-24.

2. Sillberg VAH, Perry JJ, Stiell IG, Wells GA. Is the combination of vasopressin and epinephrine superior to repeated doses of epinephrine alone in the treatment of cardiac arrest? A systematic review. Resuscitation 2008;79:380-386.

3.  American Heart Association. Management of Cardiac Arrest. Circulation 2005;112:IV-58-IV-66.

4.  McIntyre KM. Vasopressin in Asystolic Cardiac Arrest. N Engl J Med 2004;350(2):179-181.

5. Yakaitis RW, Otto CW, Blitt CD. Relative importance of α and β adrenergic receptors during resuscitation. Crit Care Med 1979;7:293-296.

6. Michael JR, Guerci AD, Koehler RC, Shi AY, et al. Mechanisms by which epinephrine augments cerebral and myocardial perfusion during cardiopulmonary resuscitation in dogs. Circulation 1984;69:822-835.

7. Stroumpoulis K, Xanthos T, Rokas G, Kitsou V, et al. Vasopressin and epinephrine in the treatment of cardiac arrest: an experimental study. Critical Care 2008;12:R40.

8. Jing XL, Wang DP, Li H, Li H. Vasopressin and epinephrine versus epinephrine in management of patients with cardiac arrest: a meta-analysis. Signa Vitae 2010;5(1):20-26.

9. Wenzel V, Krismer AC, Arntz HR, Sitter H, et al. A Comparison of Vasopressin and Epinephrine for Out-of-Hospital Cardiopulmonary Resuscitation. N Engl J Med 2004;350(2): 105-113.

10.  Fox AW, May RE, Mitch WE. Comparison of Peptide and Nonpeptide Receptor-Mediated Responses in Rat Tail Artery. J Cardiovasc Pharmacol 1992;20:282-289.

11.  Gueugniaud PY, David JS, Chanzy E, Hubert H, et al. Vasopressin and Epinephrine vs. Epinephrine Alone in Cardiopulmonary Resuscitation. N Engl J Med 2008;359(1):21-30.


Should You Eradicate Helicobacter Pylori Prior to Chronic NSAID Treatment?

January 19, 2011

By Joshua Smith, MD

Faculty Peer Reviewed

CASE:  A 54-year-old Asian female with no significant past medical history presents to her primary care physician with the complaint of several weeks of pain in her fingers bilaterally along with pronounced, worsening morning stiffness.  She is subsequently diagnosed with rheumatoid arthritis (RA), and the decision is made to start her on long-term, high-dose non-steroidal anti-inflammatory drugs (NSAIDs).  Given the link between NSAIDs and peptic ulcer disease (PUD), should this patient first be tested, and if positive, treated for Helicobacter pylori (H. pylori)?

It is widely recognized that infection with H. pylori predisposes individuals to PUD as well as to gastric adenocarcinoma and gastric mucosa-associated lymphoid tissue (MALT) lymphoma.  Although the effect of H. pylori eradication on the development of gastric carcinoma remains unclear [1], it is well established that eliminating H. pylori significantly reduces the risk of developing PUD.  Because effective non-invasive tests for diagnosing H. pylori exist (urea breath test, serologic tests, and stool antigen assays), along with effective means of treatment, the question of whom to test and treat, and when, arises.

One specific group for whom this question has been raised is patients requiring chronic treatment with NSAIDs, a large percentage of whom have RA.  NSAIDs are well-established risk factors for the development of both uncomplicated and complicated PUD [2].  In individuals with rheumatologic diseases, NSAID-induced PUD is a frequently encountered problem, with a reported incidence as high as 1.2-1.6% per year in patients with RA [3].

Overall, it has been estimated that H. pylori, NSAIDs, or a combination of the two account for 90-95% of gastric and duodenal ulcers [4].  Although studies have historically reported conflicting data on the effect of H. pylori infection on PUD development in chronic NSAID users, a 2002 meta-analysis by Huang et al. showed that NSAID users infected with H. pylori are 3.5 times as likely to develop PUD as those who are not infected [5].  Eradicating H. pylori before the onset of chronic NSAID use therefore seems a logical step toward reducing the likelihood of PUD in these individuals.

Epidemiologic studies have shown that the incidence of PUD during NSAID treatment rises most sharply during the first 3 months of therapy [6], likely because the initiation of NSAIDs aggravates disease in patients who are already susceptible.  Patients are therefore likely to benefit most from H. pylori eradication before NSAID treatment is begun rather than after it has been initiated.

Multiple studies have investigated the development of PUD in chronic NSAID users who have undergone H. pylori eradication versus those who have not.  A 2005 meta-analysis by Vergara et al., encompassing the most influential randomized controlled trials to date, found that every included study showed a significant difference or a positive trend in favor of eradication therapy [7].  The odds ratios for the prevention of PUD in the eradication groups ranged from 0.20 to 0.82 across the 4 studies in the final analysis, with a combined OR of 0.43.  The number needed to treat (NNT) ranged from 4.8 to 70; when all studies were combined, the NNT was 16.9.  Subgroup analysis of NSAID-naïve patients versus those already on long-term NSAID therapy revealed a large discrepancy in ulcer development after H. pylori eradication.  In the NSAID-naïve group, the OR was a significant 0.26 (13.7% in non-eradicated vs. 3.8% in eradicated patients), whereas in the group already on chronic NSAID therapy, the OR was a non-significant 0.95 (12.8% vs. 12.1%).  This suggests that H. pylori eradication in patients already taking NSAIDs may not reduce ulcer development.

Importantly, Chan et al. showed in 2001 that H. pylori eradication by itself is not sufficient to protect chronic NSAID users with recent ulcer complications from further GI events [8], although the addition of long-term proton pump inhibitor (PPI) therapy does significantly reduce recurrent bleeding.  Hawkey et al. reported similar findings in their 1998 study [9].

Therefore, based on the currently available data, it is this author’s suggestion that all patients undergo H. pylori testing (and treatment, if positive) before initiating long-term NSAID treatment.  Furthermore, in patients with a history of bleeding ulcer, long-term PPI therapy in addition to H. pylori eradication is recommended.  In contrast, testing for and treating H. pylori in patients already on chronic NSAID treatment is not recommended, as it does not appear to significantly decrease the rate of ulcer development.

Dr. Smith is a first-year resident at NYU Langone Medical Center

Peer reviewed by Michael Poles, MD, Section Editor, GI, Clinical Correlations

Image courtesy of Wikimedia Commons (high-magnification micrograph of gastritis with Helicobacter pylori).


1.  Wong BC, Lam SKL, Wong WM, et al. Helicobacter pylori eradication to prevent gastric cancer in a high-risk region of China. JAMA 2004;291:187–94.

2.  Chan FKL, Leung WK. Peptic-ulcer disease. Lancet 2002;360:933–41.

3.  Fries JF, Murtagh KN, Bennett M, Zatarain E, Lingala B, Bruce B. The rise and decline of nonsteroidal antiinflammatory drug-associated gastropathy in rheumatoid arthritis. Arthritis Rheum 2004;50:2433–40.

4.  Kurata JH, Nogawa AN.  Meta-analysis of risk factors for peptic ulcer.  Nonsteroidal anti-inflammatory drugs, Helicobacter pylori, and smoking.  J Clin Gastroenterol 1997;24:2-17.

5.  Huang JQ, Sridhar S, Hunt RH. Role of Helicobacter pylori infection and non-steroidal anti-inflammatory drugs in peptic-ulcer disease: a meta-analysis. Lancet 2002;359:14–22.

6.  Langman MJ, Weil J, Wainwright P, et al. Risks of bleeding peptic ulcer associated with individual non-steroidal anti-inflammatory drugs. Lancet 1994;343:1075–8.

7.  Vergara M, Catalan M, Gisbert JP, Calvet X. Meta-analysis: role of Helicobacter pylori eradication in the prevention of peptic ulcer in NSAID users. Aliment Pharmacol Ther 2005;21:1411–8.

8.  Chan FKL, Chung SCS, Suen BY, et al. Preventing recurrent upper gastrointestinal bleeding in patients with Helicobacter pylori infection who are taking low-dose aspirin or naproxen. N Engl J Med 2001;344:967–73.

9.  Hawkey CJ, Tulassay Z, Szczepanski L, et al. Randomised controlled trial of Helicobacter pylori eradication in patients on non-steroidal anti-inflammatory drugs: HELP NSAIDs study. Lancet 1998;352:1016–21.