
The Role of Fish Oil in Arrhythmia Prevention

July 29, 2015

By Steven Bolger

Peer Reviewed

Omega-3 fatty acids were first identified as potential agents to prevent and treat cardiovascular disease through several epidemiologic studies of the Greenlandic Inuit in the 1970s, which suggested that high consumption of fish oil was associated with a decreased risk of cardiovascular disease [1,2]. Fish oil contains two omega-3 fatty acids, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), that have been shown to be beneficial in treating hypertriglyceridemia and in the secondary prevention of cardiac events [3-5].

The GISSI-Prevenzione trial, published in 1999, was one of the first multicenter, randomized controlled trials to explore the effect of omega-3 fatty acid supplementation in patients with recent myocardial infarctions [5]. The trial randomized 11,324 such patients to receive daily supplementation with either a capsule containing EPA and DHA in a 1-to-2 ratio or a placebo capsule for 3.5 years, with death from any cause, non-fatal myocardial infarction, and stroke as the composite primary endpoint. Supplementation with omega-3 fatty acids produced a significant reduction in the primary endpoint, with a relative risk reduction of 10% compared to placebo. The results suggested that a reduction in sudden cardiac death could be responsible for the decrease in mortality, sparking investigation of the potential anti-arrhythmic properties of omega-3 fatty acids.
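For readers less familiar with trial statistics, the relative risk reduction reported above reduces to simple arithmetic. A minimal sketch in Python, using hypothetical event counts chosen only to illustrate a 10% reduction (these are not the actual GISSI-Prevenzione data):

```python
# Relative risk reduction (RRR) from event counts.
# The counts below are hypothetical and illustrative only.

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl):
    """Risk in the treatment arm divided by risk in the control arm."""
    return (events_tx / n_tx) / (events_ctrl / n_ctrl)

rr = relative_risk(events_tx=900, n_tx=5000, events_ctrl=1000, n_ctrl=5000)
print(f"relative risk = {rr:.2f}, relative risk reduction = {1 - rr:.0%}")
# -> relative risk = 0.90, relative risk reduction = 10%
```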

Omega-3 fatty acids have been shown to increase the threshold of depolarization of cardiac muscle required for action potential generation in animal models, resulting in a decrease in arrhythmias. A 1994 study using a canine model showed that infusion of a fish oil emulsion resulted in a significantly decreased incidence of ventricular fibrillation compared to a control infusion in response to exercise-induced ischemia [6]. Further studies in rat cardiomyocytes revealed that the mechanism responsible for the reduction in arrhythmias is inhibition of voltage-dependent sodium and L-type calcium channels [7-9]. By shifting the cell membrane potential to a more negative value, omega-3 fatty acids increase the threshold required to generate an action potential, preventing the initiation of arrhythmias.

Several randomized controlled trials have failed to demonstrate that omega-3 fatty acid supplementation results in a reduction in ventricular arrhythmias in patients with implantable cardioverter-defibrillators. A 2005 trial of 200 patients with implantable cardioverter-defibrillators and recent episodes of sustained ventricular tachycardia or ventricular fibrillation showed no reduction in the risk of arrhythmias with fish oil supplementation [10]. The results of this trial furthermore suggested a possible pro-arrhythmic effect of omega-3 fatty acids. A 2006 trial similarly failed to show a reduction in ventricular tachycardia, ventricular fibrillation, or all-cause mortality in 546 patients with implantable cardioverter-defibrillators who received supplementation with omega-3 fatty acids [11].

A 2005 randomized controlled trial of 402 patients with implantable cardioverter-defibrillators, however, demonstrated a trend towards benefit in patients receiving supplementation with omega-3 fatty acids [12]. The primary endpoint selected for the trial was time to first episode of ventricular tachycardia, ventricular fibrillation, or death from any cause. Though the results did not show a significant reduction in the primary endpoint, patients who received omega-3 fatty acid supplementation showed a trend towards a prolonged time to the first episode of these arrhythmias or death from any cause, with a risk reduction of 28% and p-value of 0.057. Furthermore, the risk reduction was significant when probable episodes of ventricular tachycardia and ventricular fibrillation were included in the analysis, with a risk reduction of 31%.

With conflicting results from several trials, a 2008 systematic review of 12 randomized controlled trials was performed to synthesize clinical data on the effects of fish oil on mortality and arrhythmia prevention [13]. The primary outcomes were defined as the arrhythmic end points of appropriate implantable cardioverter-defibrillator intervention and sudden cardiac death. The meta-analysis showed that fish oil supplementation did not have a significant effect on arrhythmias or all-cause mortality. The review did, however, demonstrate a significant reduction in deaths from cardiac causes, consistent with previous studies, including the GISSI-Prevenzione trial.

Fish Oil For Atrial Fibrillation Prevention

In addition to trials investigating ventricular arrhythmias in patients with implantable cardioverter-defibrillators, there have been several observational studies exploring the effect of fish oil on the incidence of atrial fibrillation, which have yielded conflicting results. The Danish Diet, Cancer, and Health Study, a prospective cohort study, found that consumption of omega-3 fatty acids from fish was not associated with a reduction in the risk of atrial fibrillation or flutter [14]. The cohort for this study included 47,949 individuals living in Denmark with a mean age of 56 years. The Rotterdam Study found that consumption of EPA and DHA was similarly not associated with a reduction in the risk of developing atrial fibrillation [15]. The cohort for this study included 5184 patients with a mean age of 67.4 years who lived in the Netherlands. A 12-year prospective, observational study by Mozaffarian and colleagues of 4815 patients over the age of 65, however, found that consumption of fish was associated with a 31% reduction in the risk of atrial fibrillation [16].

The mixed results among these studies may reflect differences in the baseline characteristics of the three cohorts. The Mozaffarian study placed an age restriction on its cohort, resulting in a mean age of 72.8 years, compared to 56 years for the Danish Diet, Cancer, and Health Study and 67.4 years for the Rotterdam Study. The risk of atrial fibrillation increases with age; thus, the reduction in risk of atrial fibrillation in response to omega-3 fatty acid supplementation may only be appreciable in elderly populations at highest risk [17-18]. The assessment of dietary intake of omega-3 fatty acids also differed between the studies depending on the method of information collection. The Rotterdam Study, for example, obtained information via a questionnaire and a follow-up interview with a dietician, while the Mozaffarian study employed only a questionnaire.

The 2012 OPERA trial was the first randomized controlled trial to assess the effect of omega-3 fatty acid supplementation on atrial fibrillation [19]. The OPERA trial randomized 1516 patients with a mean age of 64 years who were scheduled for cardiac surgery to receive either a daily fish oil capsule or placebo for 3-5 days before the surgery and for 10 postoperative days or until discharge, whichever came first. The results of the trial showed that perioperative supplementation with fish oil did not reduce the risk of postoperative atrial fibrillation compared to the placebo.

Overall, the results of studies exploring the potential anti-arrhythmic effects of omega-3 fatty acids in reducing the risk of atrial fibrillation have been conflicting. A 2010 meta-analysis of 10 randomized controlled trials examining the role of omega-3 fatty acids in preventing atrial fibrillation found no evidence of significant effects of omega-3 fatty acids on atrial fibrillation prevention [20].

In conclusion, although omega-3 fatty acid supplementation has been shown to provide several potential cardiovascular benefits, trials have failed to consistently show that omega-3 fatty acids have significant anti-arrhythmic effects. The reasons for the inconsistent results are unknown but may be related to patient selection, type of fish oil preparation, fish oil dose, or other factors. Meta-analyses of randomized controlled trials have not shown a reduction in either ventricular arrhythmias or atrial fibrillation. Additional studies are necessary to further characterize the role of fish oil in preventing arrhythmias.

Steven Bolger is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Robert Donnino, MD, Cardiology Editor, Clinical Correlations, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

  1. Bang HO, Dyerberg J, Hjørne N. The composition of food consumed by Greenland Eskimos. Acta Med Scand. 1976;200(1-2):69-73. http://onlinelibrary.wiley.com/doi/10.1111/j.0954-6820.1976.tb08198.x/pdf
  2. Dyerberg J, Bang HO, Stoffersen E, Moncada S, Vane JR. Eicosapentaenoic acid and prevention of thrombosis and atherosclerosis? Lancet. 1978;2(8081):117-119. http://www.sciencedirect.com/science/article/pii/S0140673678915052
  3. Balk EM, Lichtenstein AH, Chung M, Kupelnick B, Chew P, Lau J. Effects of omega-3 fatty acids on serum markers of cardiovascular disease risk: a systematic review. Atherosclerosis. 2006;189(1):19-30. http://www.sciencedirect.com/science/article/pii/S0021915006000694
  4. Harris WS. n-3 fatty acids and serum lipoproteins: human studies. Am J Clin Nutr. 1997;65(5 Suppl):1645S-1654S. http://ajcn.nutrition.org/content/65/5/1645S.long
  5. GISSI-Prevenzione Investigators (Gruppo Italiano per lo Studio della Sopravvivenza nell’Infarto miocardico). Dietary supplementation with n-3 polyunsaturated fatty acids and vitamin E after myocardial infarction: results of the GISSI-Prevenzione trial. Lancet. 1999;354(9177):447-455. http://www.sciencedirect.com/science/article/pii/S0140673699070725
  6. Billman GE, Hallaq H, Leaf A. Prevention of ischemia-induced ventricular fibrillation by omega 3 fatty acids. Proc Natl Acad Sci U S A. 1994;91(10):4427-4430. http://www.pnas.org/content/91/10/4427.long
  7. Kang JX, Xiao YF, Leaf A. Free, long-chain, polyunsaturated fatty acids reduce membrane electrical excitability in neonatal rat cardiac myocytes. Proc Natl Acad Sci U S A. 1995;92(9):3997-4001. http://www.pnas.org/content/92/9/3997.long
  8. Xiao YF, Kang JX, Morgan JP, Leaf A. Blocking effects of polyunsaturated fatty acids on Na+ channels of neonatal rat ventricular myocytes. Proc Natl Acad Sci U S A. 1995;92(24):11000-11004. http://www.pnas.org/content/92/24/11000.long
  9. Xiao YF, Gomez AM, Morgan JP, Lederer WJ, Leaf A. Suppression of voltage-gated L-type Ca2+ currents by polyunsaturated fatty acids in adult and neonatal rat ventricular myocytes. Proc Natl Acad Sci U S A. 1997;94(8):4182-4187. http://www.pnas.org/content/94/8/4182.long
  10. Raitt MH, Connor WE, Morris C, et al. Fish oil supplementation and risk of ventricular tachycardia and ventricular fibrillation in patients with implantable defibrillators: a randomized controlled trial. JAMA. 2005;293(23):2884-2891. http://jama.jamanetwork.com/article.aspx?articleid=201082
  11. Brouwer IA, Zock PL, Camm AJ, et al; SOFA Study Group. Effect of fish oil on ventricular tachyarrhythmia and death in patients with implantable cardioverter defibrillators: the Study on Omega-3 Fatty Acids and Ventricular Arrhythmia (SOFA) randomized trial. JAMA. 2006;295(22):2613-2619. http://jama.jamanetwork.com/article.aspx?articleid=202999
  12. Leaf A, Albert CM, Josephson M, et al. Prevention of fatal arrhythmias in high-risk subjects by fish oil n-3 fatty acid intake. Circulation. 2005;112(18):2762-2768. http://circ.ahajournals.org/content/112/18/2762.long
  13. León H, Shibata MC, Sivakumaran S, Dorgan M, Chatterley T, Tsuyuki RT. Effect of fish oil on arrhythmias and mortality: systematic review. BMJ. 2008;337:a2931. http://www.bmj.com/content/337/bmj.a2931.long
  14. Frost L, Vestergaard P. n-3 Fatty acids consumed from fish and risk of atrial fibrillation or flutter: the Danish Diet, Cancer, and Health Study. Am J Clin Nutr. 2005;81(1):50-54. http://ajcn.nutrition.org/content/81/1/50.long
  15. Brouwer IA, Heeringa J, Geleijnse JM, Zock PL, Witteman JC. Intake of very long-chain n-3 fatty acids from fish and incidence of atrial fibrillation. The Rotterdam Study. Am Heart J. 2006;151(4):857-862. http://www.sciencedirect.com/science/article/pii/S000287030500757X
  16. Mozaffarian D, Psaty BM, Rimm EB, et al. Fish intake and risk of incident atrial fibrillation. Circulation. 2004;110(4):368-373. http://circ.ahajournals.org/content/110/4/368.long
  17. Psaty BM, Manolio TA, Kuller LH, et al. Incidence of and risk factors for atrial fibrillation in older adults. Circulation. 1997;96(7):2455-2461. http://circ.ahajournals.org/content/96/7/2455.long
  18. Furberg CD, Psaty BM, Manolio TA, Gardin JM, Smith VE, Rautaharju PM. Prevalence of atrial fibrillation in elderly subjects (the Cardiovascular Health Study). Am J Cardiol. 1994;74(3):236-241. http://www.sciencedirect.com/science/article/pii/0002914994903638
  19. Mozaffarian D, Marchioli R, Macchia A, et al; OPERA Investigators. Fish oil and postoperative atrial fibrillation: the Omega-3 Fatty Acids for Prevention of Post-operative Atrial Fibrillation (OPERA) randomized trial. JAMA. 2012;308(19):2001-2011. http://jama.jamanetwork.com/article.aspx?articleid=1389226
  20. Liu T, Korantzopoulos P, Shehata M, Li G, Wang X, Kaul S. Prevention of atrial fibrillation with omega-3 fatty acids: a meta-analysis of randomised clinical trials. Heart. 2011;97(13):1034-1040. http://heart.bmj.com/content/97/13/1034.l

UV Nail Lamps and Cancer: A Correlation?

July 24, 2015

By Jennifer Ng, MD

Peer Reviewed 

Beauty and suffering are often thought to be intertwined.  It is hard to have your cake and eat it too.  In the quest for beauty, women (and men) have subjected themselves to toxic and potentially deadly practices, from historically applying lead-based cosmetics to whiten their faces [1] to, more recently, going to tanning beds and/or lying out in the sun for prolonged periods to get a “healthy glow.”  As we have become increasingly health-conscious and vigilant, more and more beauty products and practices have come under scrutiny for their possible toxic effects.  Most recently, a cousin of the tanning bed, the popular ultraviolet (UV) nail lamp, has become a topic of much controversy [2].

At first glance, the UV nail lamp seems like a miracle worker.  It serves many purposes in the nail salon: quickly drying UV-cured acrylic nails and traditional nail polish, activating special topcoats that help protect the nail, and curing gel nails, which are more durable than regular nail polish [3].  However, like the tanning bed, it produces predominantly UV-A radiation, which is known to cause oxidative stress and free radical formation, leading to DNA damage [4].

The controversy over the potential carcinogenic effects of the UV nail lamp started with two case reports published in 2009 [3].  These reports described the development of non-melanoma skin cancers on the hands of two white women, both of whom had indoor occupations, little to moderate recreational UV exposure, and no personal or family history of skin cancer.  The major commonality shared by the two women was frequent visits to the nail salon – one with a 15-year history of twice-monthly UV nail light exposure and the other with a history of eight episodes of UV nail light exposure within one year.

These two case reports prompted considerable research into the amount of UV exposure delivered by UV nail lamps.  The authors of the 2009 case reports argued that, based on the power output of most nail lamps (4-54 W) compared to that of tanning beds (1200 W or more) and the amount of body surface area exposed (2% for nail lamps vs. 100% for tanning beds), the amount of radiation per square meter was actually comparable between UV nail lamps and tanning beds [3].  Another study found that the UV exposure from a typical nail session (lasting less than 10 minutes) totaled 15-22.5 joules per square meter, comparable to the day-long limit recommended for outdoor workers and recreationalists (30 joules per square meter over 8 hours) by the International Commission on Non-Ionizing Radiation Protection [4].
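Putting those figures side by side makes the comparison concrete; a quick sketch using only the numbers quoted above:

```python
# Compare a single nail-session UV dose to the ICNIRP day-long limit,
# using the figures quoted in the text (joules per square meter) [4].
SESSION_DOSE_RANGE = (15.0, 22.5)  # per nail session of <10 minutes
DAILY_LIMIT = 30.0                 # ICNIRP recommended limit over 8 hours

for dose in SESSION_DOSE_RANGE:
    print(f"{dose} J/m^2 per session = {dose / DAILY_LIMIT:.0%} of the daily limit")
# -> 15.0 J/m^2 per session = 50% of the daily limit
# -> 22.5 J/m^2 per session = 75% of the daily limit
```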

However, other researchers have argued that the risk of skin cancer from UV lamps is minimal.  In fact, UV light (especially narrowband UV-B) is frequently used in the treatment of common skin conditions such as psoriasis, vitiligo, and atopic dermatitis [5].  One study compared the UV dose of a UV nail lamp session to that of a single course of narrowband UV-B (NBUVB) phototherapy and found that more than 250 years of weekly UV nail sessions would be required to equal the exposure of one course of NBUVB therapy [6].  Since the risk of developing skin cancer from one course of NBUVB treatment was thought to be low [7], the authors concluded that the risk of skin carcinogenesis from UV exposure from nail lamps must be low as well.

Another study attempted to quantify the actual risk of squamous cell carcinoma (SCC) from UV nail lamps [8].  Based on calculations that took into account subjects’ ages and the doses of UV light to which they were exposed, the study authors derived an SCC risk model using data from six different studies on the incidence of SCC in different regions around the world, including Norway and the USA.  They used this model to compare the risk of skin cancer from day-to-day sun exposure to that from UV nail lamps and, from these calculated risks, determined the number of women who would need to be exposed to UV nail lamps in order for one woman to develop SCC on the dorsum of her hands, also known as the number needed to harm (NNH).  Depending on the age of the woman and the number of years of UV nail lamp use, this number ranged from tens of thousands to hundreds of thousands of women.

Why is there such variety in the results of different studies?  One group recently hypothesized that the range of UV lamps available for commercial use might be the explanation [9].  UV lamps differ in the number of bulbs, the power/wattage of each bulb, and the brand of the light source, all of which may lead to differing amounts of UV-A radiation produced.  This study measured the UV-A energy exposure from an average manicure visit for seventeen different UV lamps used in 16 nail salons, finding a range of 0 to 8 joules per square centimeter, with a median of 5.1.  The threshold value for DNA damage in UV-A irradiated skin cells is 60 joules per square centimeter.  Therefore, the number of visits needed for a customer to be exposed to this threshold value can range from 8 to 208, with a median of 11.8.  The take-home point is that, depending on the UV nail lamp, the number of visits needed to confer DNA damage (i.e. the potential for carcinogenesis) to skin cells can vary greatly.
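The visit counts follow directly from dividing the DNA-damage threshold by the per-visit dose. A minimal sketch: the 60 J/cm² threshold and the 5.1 J/cm² median come from the study above, while the 0.29 J/cm² low-output lamp is a hypothetical value chosen to illustrate the upper end of the reported range:

```python
import math

THRESHOLD = 60.0  # reported DNA-damage threshold for UV-A irradiated skin cells, J/cm^2 [9]

def visits_to_threshold(dose_per_visit):
    """Whole salon visits needed for the cumulative UV-A dose (J/cm^2) to reach the threshold."""
    return math.ceil(THRESHOLD / dose_per_visit)

print(visits_to_threshold(5.1))   # -> 12; the study reports a median of 11.8 visits
print(visits_to_threshold(8.0))   # -> 8; the highest-output lamp measured
print(visits_to_threshold(0.29))  # -> 207; a hypothetical low-output lamp near the
                                  #    study's reported upper bound of 208 visits
```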

While different researchers may present conflicting evidence regarding the degree of skin cancer risk posed by UV nail lamps, they do mostly agree on recommending the use of either full-spectrum sunscreens or UV-A blocking gloves to limit the exposure to only the nails [4, 8, 9].  Interestingly enough, the nail plate itself is very resistant to UV penetration – completely blocking the transmission of UV-B light and almost completely blocking UV-A light [10].  Thus, even if the debate is ongoing, sunblock and/or UV-A blocking gloves may be a way for your patients to have their cake and eat it too.

Dr. Jennifer Ng completed her internal medicine residency at NYU Langone Medical Center

Peer Reviewed by Jo-Ann Latkowski, MD, Dermatology, NYU Langone Medical Center

Image courtesy of foxnews.com: Skin cancer: Are nail salon UV lamps a skin cancer risk? Foxnews.com. May 1, 2014. http://www.foxnews.com/health/2014/05/01/are-nail-salon-uv-lamps-skin-cancer-risk/

References 

  1. Mapes D. Suffering for beauty has ancient roots: from lead eyeliner to mercury makeup, killer cosmetics over the decades. NBCNews.com. Accessed June 1, 2014. http://www.nbcnews.com/id/22546056/ns/health/t/suffering-beauty-has-ancient-roots/#.VbJaXaHD_1I
  2. Park A. 24 visits to the nail salon could trigger skin cancer. TIME.com. April 30, 2014. Accessed June 1, 2014. http://time.com/82830/24-visits-to-the-nail-salon-could-trigger-skin-cancer/
  3. MacFarlane DF, et al. Occurrence of nonmelanoma skin cancers on the hands after UV nail light exposure. Arch Dermatol. 2009;145(4):447-449. doi:10.1001/archdermatol.2008.622.
  4. Curtis J, et al. Acrylic nail curing UV lamps: high-intensity exposure warrants further research of skin cancer risk. J Am Acad Dermatol. 2013;69(6):1069-1070. doi:10.1016/j.jaad.2013.08.032.
  5. Tanew A, et al. Narrowband UV-B phototherapy vs photochemotherapy in the treatment of chronic plaque-type psoriasis: a paired comparison study. Arch Dermatol. 1999;135(5):519. http://archderm.jamanetwork.com/article.aspx?articleid=477846
  6. Markova A, et al. Risk of skin cancer associated with the use of UV nail lamp. J Invest Dermatol. 2013;133:1097-1099. doi:10.1038/jid.2012.440. Epub December 6, 2012.
  7. Diffey BL, Farr PM. The challenge of follow up in narrowband ultraviolet B phototherapy. Br J Dermatol. 2007;157:344-349. http://www.ncbi.nlm.nih.gov/pubmed/17553037
  8. Diffey BL. The risk of squamous cell carcinoma in women from exposure to UVA lamps used in cosmetic nail treatment. Br J Dermatol. 2012;167(5):1175-1178. doi:10.1111/j.1365-2133.2012.11107.x. Epub October 5, 2012.
  9. Shipp LR, et al. Further investigation into the risk of skin cancer associated with the use of UV nail lamps. JAMA Dermatol. Epub April 30, 2014. doi:10.1001/jamadermatol.2013.8740. http://archderm.jamanetwork.com/article.aspx?articleid=1862050
  10. Stern DK, et al. UV-A and UV-B penetration of normal human cadaveric fingernail plate. Arch Dermatol. 2011;147(4):439-441. doi:10.1001/archdermatol.2010.375. Epub December 20, 2010. http://archderm.jamanetwork.com/article.aspx?articleid=423480

 

A Primer on CRP and Cardiovascular Risk

July 22, 2015

By Cindy Fei, MD

Peer Reviewed

A 63-year-old woman with hypertension presents to your clinic for routine follow-up. She came across an online article regarding C-reactive protein and its purported link to heart disease, and she asks you whether she should be tested for it. She is an otherwise asymptomatic non-smoker without a family history of heart disease. Her only medication is hydrochlorothiazide. Her blood pressure measured in the office is 128/81 mmHg, her low-density lipoprotein is 110 mg/dL, and her high-density lipoprotein is 54 mg/dL. What do you tell her?

What is CRP?

C-reactive protein (CRP) is an acute-phase reactant produced by the liver in response to the inflammatory cytokines interleukin-6 and interferon. CRP primarily mediates the inflammatory response by binding to complement and damaged cell membranes, but it has also been noted to bind to low-density lipoprotein (LDL) [1]. Common stimuli of high CRP levels (conventionally defined as >3 mg/L) include infection, cancer, and surgery. CRP also increases to intermediate levels (1-3 mg/L) with age, obesity, smoking, gum disease, and related co-morbidities such as chronic lung disease, diabetes, and hypertension [2]. Interestingly, repeated CRP measurements in the same person are about as stable over time as blood pressure and cholesterol measurements [3]. While early measurements of CRP only detected levels greater than 3 mg/L, later studies capitalized on the development of improved high-sensitivity CRP (hs-CRP) assays, which detect levels as low as 0.1 mg/L.
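The conventional cut points above translate into a simple classification rule; a minimal sketch (the <1 mg/L boundary for the low category is assumed here as the conventional complement of the two ranges given in the text):

```python
def crp_category(hs_crp_mg_per_l):
    """Classify an hs-CRP level using the conventional cut points (mg/L)."""
    if hs_crp_mg_per_l < 1.0:
        return "low"
    if hs_crp_mg_per_l <= 3.0:
        return "intermediate"  # 1-3 mg/L: age, obesity, smoking, comorbidities
    return "high"              # >3 mg/L: infection, cancer, surgery

for level in (0.4, 2.2, 12.0):
    print(level, "mg/L ->", crp_category(level))
# -> 0.4 mg/L -> low / 2.2 mg/L -> intermediate / 12.0 mg/L -> high
```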

With respect to healthy adults, studies show a positive correlation between elevated CRP levels and the development of coronary heart disease, independent of other risk factors. A meta-analysis of 54 observational studies characterized this relationship as a log-linear association when adjusted for age and sex [4]. A 2009 meta-analysis of 11 good-quality studies, all of which adjusted for Framingham risk factors, calculated a relative risk of 1.58 (confidence interval 1.37-1.83) for the development of coronary artery disease in the high versus low serum CRP groups. The corresponding risk ratio for the intermediate versus low serum CRP groups was 1.22 (confidence interval 1.11-1.33) [5]. This relationship persists in individuals with known cardiovascular disease, with higher CRP values portending a worse prognosis. For instance, in subjects with stable coronary artery disease distributed fairly evenly across the low, intermediate, and high serum CRP categories, the intermediate CRP group showed a statistically significant increased risk of cardiovascular death, myocardial infarction, or stroke compared with the low CRP group (adjusted hazard ratio of 1.39). The adjusted hazard ratio rose to 1.52 for the high CRP group compared to the low CRP group [6].

Does CRP play a pathologic role in atherosclerosis?

Multiple studies demonstrate an association between elevated CRP and increased risk of heart disease, regardless of prior cardiovascular disease diagnosis. However, it is unclear if a causal mechanism governs this association. Do high CRP levels drive atherosclerosis, or are they simply a marker of disease? Atherosclerotic plaques stain positive for CRP, but the evidence for causality is less clear [1]. Proposed avenues for CRP-induced plaque build-up include monocyte adhesion and recruitment into the vessel walls, macrophage activation, and smooth muscle cell proliferation. Moreover, binding to LDL facilitates LDL oxidation and uptake by macrophages. CRP also interferes with endothelial nitric oxide synthase function and prostacyclin synthesis, leading to decreased vasodilation [7].

In addition, CRP’s classification as an acute-phase reactant and its subsequent association with inflammatory conditions offer numerous confounding variables. On one hand, lower CRP levels after statin therapy are associated with a lower risk of recurrent myocardial infarction or coronary fatalities, regardless of post-statin LDL levels [8]. Post hoc analyses of statin trials demonstrated that lower CRP was significantly and independently associated with slower progression of atherosclerosis as measured by intravascular ultrasound over 18 months [9,10]. This suggests a direct link between CRP and cardiovascular risk independent of LDL levels.

On the other hand, scenarios that attempt to directly influence or change CRP levels do not necessarily maintain this link. For example, murine models of atherosclerosis do not reliably show increased plaque build-up in transgenic mice designed to produce human CRP [7]. One mendelian randomization study from 2008 examined whether naturally occurring polymorphisms in the CRP gene, and the resulting variations in serum CRP levels, could predict cardiovascular outcomes. Genetic variation was responsible for up to a 64% change in CRP level, but this did not translate into a statistically significant increased odds ratio for ischemic heart disease. In contrast, different apolipoprotein E genotypes accounted for up to a 14% change in cholesterol level, with a statistically significant increased odds ratio of 1.29 for the development of ischemic heart disease [11]. A later mendelian randomization study also did not find a statistically significant relationship between genetically raised CRP levels and the development of heart disease [12].

How to Use CRP in Clinical Practice

To date, the main randomized clinical trial examining CRP and cardiovascular risk is the JUPITER trial, published in 2008. This trial evaluated rosuvastatin 20 mg daily for primary prevention in healthy adults with both LDL <130 mg/dL and hs-CRP >2 mg/L. The trial was stopped early at the first interim analysis because the statin’s benefit was clear. After a median of 1.9 years of follow-up, a statistically significant reduction in the primary outcome (a composite of heart attack, stroke, unstable angina, revascularization, or cardiovascular death) was found for the statin group as compared to placebo (hazard ratio 0.56, 95% confidence interval 0.46 to 0.69) [13]. This suggested a role for CRP in selecting additional patients who would benefit from statins. Although the trial only included patients with higher levels of hs-CRP, a post hoc analysis demonstrated a consistent association between higher baseline hs-CRP and increased frequency of the primary outcome [14]. Of note, the trial was criticized on the grounds of conflict of interest, as the principal investigator co-owns the patent for the hs-CRP blood test used in the study [15].

In 2003, the Centers for Disease Control and Prevention and the American Heart Association recommended against universal screening for cardiovascular risk with CRP. The document identified intermediate-risk patients as the population for which it is reasonable to measure hs-CRP twice, 2 weeks apart, for further risk stratification [16]. In healthy asymptomatic adults with an intermediate Framingham risk of 5-20%, the addition of CRP appropriately reclassified only 4.3% of subjects into the high-risk category, and only 3.6% into the low-risk category [17]. According to one model developed prior to the updated statin therapy guidelines, testing the CRP of 440 intermediate-risk patients without a coronary heart disease equivalent is needed in order to reclassify 23 individuals as high-risk. If those 23 subjects initiated statin therapy, then 1 cardiovascular event (myocardial infarction, stroke, or fatal coronary heart disease) would be averted. In effect, the number needed to “test” of 440 would avert 1 cardiovascular event over 10 years, assuming appropriate statin interventions based on the 2002 Adult Treatment Panel III guidelines [18]. However, studies that have compared the accuracy of CRP versus coronary artery calcium score and carotid intima-media thickness in reclassifying intermediate-risk patients found that coronary artery calcium score and carotid intima-media thickness both outperformed CRP [17].
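The chain of numbers in that model is worth making explicit; a short sketch using only the figures quoted above [18]:

```python
# "Number needed to test" arithmetic from the model described in the text [18].
tested = 440         # intermediate-risk patients screened with CRP
reclassified = 23    # of those, reclassified as high-risk and started on a statin
events_averted = 1   # cardiovascular events averted over 10 years

print(f"tests per patient reclassified: {tested / reclassified:.1f}")     # ~19.1
print(f"NNT among the reclassified: {reclassified / events_averted:.0f}")  # 23
print(f"number needed to test per event averted: {tested / events_averted:.0f}")  # 440
```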

More recent guidelines still fail to offer compelling indications for CRP utilization. In fact, the 2009 US Preventive Services Task Force stated that there was insufficient evidence for the use of hs-CRP for cardiovascular risk assessment [19]. Two simultaneously released guidelines in November 2013 from the American College of Cardiology/American Heart Association (ACC/AHA), on the topics of cholesterol and of cardiovascular risk assessment, discuss a possible role for hs-CRP in patients who do not fall into the outlined four major statin benefit groups or who have unclear risk even after quantitative risk assessment. The recommendation to consider hs-CRP use under these select circumstances is based on expert opinion only, and does not distinguish between CRP and other novel risk factors such as coronary artery calcium score and ankle-brachial index [20,21]. The new guidelines also suggest hs-CRP >2 mg/L as the threshold for upgrading a patient’s level of cardiovascular risk.

Conclusion

In summary, existing evidence tentatively suggests that CRP is an independent risk factor for heart disease; however, in the absence of data examining universal CRP screening, hard clinical outcomes, mortality, or cost effectiveness, the current recommendations are to use CRP sparingly under select circumstances. In the clinic, CRP may be used as a tool for further risk stratification of intermediate-risk patients in order to select candidates who may benefit the most from additional interventions and therapies.

With regard to the clinical vignette, this patient does not fall into one of the four major statin benefit groups outlined in the newly released 2013 ACC/AHA guidelines. Her calculated 10-year risk of atherosclerotic cardiovascular disease is 6%, which does not reach the 7.5% threshold for starting a statin. According to the 2013 ACC/AHA Guideline on the Assessment of Cardiovascular Risk, expert opinion states that hs-CRP may have a role in determining whether to begin statin therapy. If her measured hs-CRP were greater than 2 mg/L, one might consider upgrading her risk level and adding a statin for primary prevention, with the knowledge that this recommendation is based on very limited data.

Commentary by Robert Donnino, MD, Assistant Professor of Medicine (Cardiology)

The use of hs-CRP for cardiovascular risk stratification remains highly controversial. Analysis of existing data suggests that CRP is, at best, a weak independent risk factor for clinical cardiovascular events. Without the inclusion of patients with CRP <2 mg/L in the JUPITER trial (Ridker et al., reference 13 from above), it cannot be concluded that a CRP level >2 mg/L conferred any increased risk, nor does the trial identify patients who would have received additional benefit from statin therapy. This has led many to question whether patients with CRP <2 mg/L would have received similar benefits from statin therapy had they been included in the trial.

As mentioned in this overview on CRP, data published from the MESA cohort showed CRP was not a very effective tool for reclassifying intermediate-risk patients into higher or lower risk groups, reclassifying a total of only 8% of patients (Yeboah et al., reference 17 from above). For comparison, the coronary calcium score in that same cohort reclassified 66% of patients into higher or lower risk groups. Other studies have shown even lower reclassification ability for CRP. Thus, although supported by current guidelines and followed by some practitioners, I believe the data do not support the use of CRP as a risk stratification tool and that much more powerful stratification tools are available (i.e. coronary calcium score). For a more in-depth analysis of CRP for cardiovascular risk, I would recommend the excellent review by Yousuf and colleagues (reference 7 from above). Until we have more clarifying data, the role of CRP in clinical practice will remain controversial.

Dr. Cindy Fei is an internist at NYU Langone Medical Center

Peer review by Robert Donnino, MD, Assistant Professor of Medicine (Cardiology), NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References 

  1. Scirica BM, Morrow DA. Is C-reactive protein an innocent bystander or proatherogenic culprit? The verdict is still out. Circulation. 2006;113(17):2128-2134.
  2. Windgassen EB, Funtowicz L, Lunsford TN, Harris LA, Mulvagh SL. C-reactive protein and high-sensitivity C-reactive protein: an update for clinicians. Postgrad Med. 2011;123(1):114-119. http://www.ncbi.nlm.nih.gov/pubmed/21293091
  3. Danesh J, Wheeler JG, Hirschfield GM, et al. C-reactive protein and other circulating markers of inflammation in the prediction of coronary heart disease. NEJM. 2004;350(14):1387-1397.
  4. Emerging Risk Factors Collaboration, Kaptoge S, Di Angelantonio E, et al. C-reactive protein concentration and risk of coronary heart disease, stroke, and mortality: an individual participant meta-analysis. Lancet. 2010;375(9709):132-140. http://www.ncbi.nlm.nih.gov/pubmed/20031199
  5. Buckley DI, Fu R, Freeman M, Rogers K, Helfand M. C-reactive protein as a risk factor for coronary heart disease: a systematic review and meta-analyses for the U.S. Preventive Services Task Force. Ann Intern Med. 2009;151(7):483-495. http://www.ncbi.nlm.nih.gov/pubmed/19805771
  6. Sabatine MS, Morrow DA, Jablonski KA, et al. Prognostic significance of the Centers for Disease Control/American Heart Association high-sensitivity C-reactive protein cut points for cardiovascular and other outcomes in patients with stable coronary artery disease. Circulation. 2007;115(12):1528-1536. http://www.ncbi.nlm.nih.gov/pubmed/17372173
  7. Yousuf O, Mohanty BD, Martin SS, et al. High-sensitivity C-reactive protein and cardiovascular disease: a resolute belief or an elusive link? J Am Coll Cardiol. 2013;62(5):397-408. http://www.ncbi.nlm.nih.gov/pubmed/23727085
  8. Ridker PM, Cannon CP, Morrow D, et al. C-reactive protein levels and outcomes after statin therapy. NEJM. 2005;352(1):20-28. http://www.nejm.org/doi/full/10.1056/NEJMoa042378
  9. Cannon CP, Braunwald E, McCabe CH, et al. Intensive versus moderate lipid lowering with statins after acute coronary syndromes. NEJM. 2004;350(15):1495-1504. http://www.nejm.org/doi/full/10.1056/NEJMoa040583
  10. Nissen SE, Tuzcu EM, Schoenhagen P, et al. Statin therapy, LDL cholesterol, C-reactive protein, and coronary artery disease. NEJM. 2005;352(1):29-38. http://www.nejm.org/doi/full/10.1056/NEJMoa042000
  11. Zacho J, Tybjaerg-Hansen A, Jensen JS, Grande P, Sillesen H, Nordestgaard BG. Genetically elevated C-reactive protein and ischemic vascular disease. NEJM. 2008;359(18):1897-1908. http://www.ncbi.nlm.nih.gov/pubmed/18971492
  12. C Reactive Protein Coronary Heart Disease Genetics Collaboration (CCGC), Wensley F, Gao P, et al. Association between C reactive protein and coronary heart disease: mendelian randomisation analysis based on individual participant data. BMJ. 2011;342:d548. http://www.ncbi.nlm.nih.gov/pubmed/21325005
  13. Ridker PM, Danielson E, Fonseca FA, et al. Rosuvastatin to prevent vascular events in men and women with elevated C-reactive protein. NEJM. 2008;359(21):2195-2207. http://www.nejm.org/doi/full/10.1056/NEJMoa0807646
  14. Ridker PM, MacFadyen J, Libby P, Glynn RJ. Relation of baseline high-sensitivity C-reactive protein level to cardiovascular outcomes with rosuvastatin in the Justification for Use of statins in Prevention: an Intervention Trial Evaluating Rosuvastatin (JUPITER). Am J Cardiol. 2010;106(2):204-209. http://www.ncbi.nlm.nih.gov/pubmed/20599004
  15. de Lorgeril M, Salen P, Abramson J, et al. Cholesterol lowering, cardiovascular diseases, and the rosuvastatin-JUPITER controversy: a critical reappraisal. Arch Intern Med. 2010;170(12):1032-1036. http://archinte.jamanetwork.com/article.aspx?articleid=416101
  16. Pearson TA, Mensah GA, Alexander RW, et al. Markers of inflammation and cardiovascular disease: application to clinical and public health practice: a statement for healthcare professionals from the Centers for Disease Control and Prevention and the American Heart Association. Circulation. 2003;107(3):499-511.
  17. Yeboah J, McClelland RL, Polonsky TS, et al. Comparison of novel risk markers for improvement in cardiovascular risk assessment in intermediate-risk individuals. JAMA. 2012;308(8):788-795. http://jama.jamanetwork.com/article.aspx?articleid=1352110
  18. Emerging Risk Factors Collaboration, Kaptoge S, Di Angelantonio E, et al. C-reactive protein, fibrinogen, and cardiovascular disease prediction. NEJM. 2012;367(14):1310-1320. http://www.ncbi.nlm.nih.gov/pubmed/23034020
  19. US Preventive Services Task Force. Using nontraditional risk factors in coronary heart disease risk assessment. Oct 2009. Accessed Nov 2013. http://www.uspreventiveservicestaskforce.org/uspstf/uspscoronaryhd.htm
  20. Stone NJ, Robinson J, Lichtenstein AH, et al. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. http://www.ncbi.nlm.nih.gov/pubmed/24222016
  21. Goff DC Jr, Lloyd-Jones DM, Bennett G, et al. 2013 ACC/AHA guideline on the assessment of cardiovascular risk: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. http://www.ncbi.nlm.nih.gov/pubmed/24222018

Diagnostic Challenges in Latent Tuberculosis Infection: A Brief Review of Available Tests and their Appropriate Use

July 15, 2015

By: Miguel A. Saldivar, MD

Peer Reviewed 

“Indeterminate.” Many clinicians have expressed frustration when reading this word on a QuantiFERON-TB Gold test result. The obligatory follow-up question is: what is the next best step? Repeat the QuantiFERON? Ignore it altogether and perform a Tuberculin Skin Test (TST) instead? Even worse, what happens when both tests are performed with discordant results? In order to answer some of these questions, this article begins with a very brief overview of Mycobacterium tuberculosis (TB) infection epidemiology. This is followed by a review of the tools currently available for the diagnosis of latent tuberculosis infection (LTBI). The last section explores some of the most important attributes of each test, which finally leads to a summary of a few of the current recommendations and the logic behind them.

A very brief overview of TB epidemiology and relevant definitions

Although it was declared a global health emergency by the World Health Organization (WHO) over 15 years ago, TB remains one of the leading infectious causes of morbidity in the world [5]. Every year, 8-10 million people globally develop active TB, with an estimated 2 million annual deaths [2, 4, 5]. It is also estimated that one third of the world’s population (approximately 2 billion people) has LTBI [2, 6]. Definitions of LTBI vary slightly from organization to organization, but perhaps the most useful working definition is the one proposed by the WHO: “a state of persistent immune response to prior-acquired Mycobacterium tuberculosis antigens without evidence of clinically manifested active TB” [18]. Persons with latent TB are classically considered to be not only asymptomatic but also noninfectious, and current evidence suggests that only 5-10% of people with LTBI develop active disease in their lifetime [1].

In the United States, the situation is slightly better than in other places in the world: the prevalence of active TB has declined from 6.2 cases per 100,000 people in 1998 to 4.2 cases per 100,000 in 2008. A TST survey in 2000 showed that approximately 11 million U.S. residents had LTBI, a 60% decline from 1972—although the decline was not uniform among all segments of the population and rates varied considerably [2, 7].

The currently available diagnostic tools

Despite the severity of this situation, the diagnostic tools for LTBI are not only few but also have specific limitations. The most common tool in the arsenal of TB diagnosis is over 100 years old: the TST. It was not until 2001 that a new test became available, the QuantiFERON-TB test (QFT). This was replaced in 2005 by its slightly more reliable descendant, the QuantiFERON-TB Gold test (QFT-G), which in turn was replaced in 2007 by the most reliable version to date: the QuantiFERON-TB In-Tube test (QFT-IT) (all from Cellestis Limited, Carnegie, Victoria, Australia). Lastly, a separate tool that works by a mechanism similar to that of the QuantiFERON tests became available in 2008: the T-SPOT.TB test (T-Spot) (Oxford Immunotec Limited, Abingdon, United Kingdom) [2].

In summary, three tools are currently in use for the diagnosis of LTBI, two of which fall under the same category:

  1. The time-honored Tuberculin Skin Test (TST)
  2. The Interferon-Gamma Release Assays (IGRAs), including:
    1. the QuantiFERON-TB In-Tube test (QFT-IT)
    2. the T-SPOT.TB test (T-Spot)

An Important Consideration

It is important to realize that both the TST and the IGRAs are useful for the diagnosis of latent TB, but have proven inadequate in the diagnosis of active TB. Three different systematic reviews/meta-analyses have consistently concluded that IGRAs can neither rule in nor rule out active TB (including extrapulmonary TB) [12, 13, 14]; the TST appears to have the same limitation.

The Tuberculin Skin Test 

The TST is the most commonly used tool worldwide for the diagnosis of LTBI. It consists of an intradermal injection of a poorly defined mixture of over 200 proteins derived from M. tuberculosis. A person with pre-existing cell-mediated immunity to these antigens will develop a delayed-type hypersensitivity reaction approximately 48-72 hours after the injection. This will cause swelling and induration at the site. A trained individual measures the lesion’s diameter, and the result is interpreted using pre-defined, risk-stratified cutoff points [1, 5, 6].
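Those pre-defined, risk-stratified cutoff points are conventionally set at 5, 10, and 15 mm depending on the patient’s risk group (the full criteria appear on the CDC fact sheet linked later in this article). A simplified sketch of that decision rule, with illustrative risk-group examples; it is an outline of the convention, not a substitute for the complete CDC list:

```python
def tst_positive(induration_mm, risk_group):
    """Simplified TST interpretation using the conventional 5/10/15-mm cut points.

    risk_group: "high"   -- e.g., HIV infection, recent TB contact, immunosuppression
                "medium" -- e.g., recent arrival from a high-prevalence country,
                            residents/employees of congregate settings, certain comorbidities
                "low"    -- no known risk factors
    """
    cutoff_mm = {"high": 5, "medium": 10, "low": 15}[risk_group]
    return induration_mm >= cutoff_mm

print(tst_positive(7, "high"))    # -> True  (>=5 mm suffices in the highest-risk group)
print(tst_positive(7, "medium"))  # -> False (the cutoff is 10 mm)
```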

The Interferon-Gamma Release Assays

Just like the TST, the IGRAs (QFT-IT and T-Spot) measure a cell-mediated response. However, in the case of the IGRAs, a different aspect of the mechanism is analyzed. During infection, Th1 T-cells become sensitized to antigens naturally produced by M. tuberculosis and bind to them, releasing interferon-gamma (IFN-γ) in the process. The IGRAs work by using synthetic isolated antigens to induce a response in the existing T-cells of a patient’s whole blood sample. The T-cells bind to these antigens, their response is analyzed, and the results are interpreted by one of two protocols, depending on the specific IGRA:

QuantiFERON-TB In-Tube test

In the case of the QFT-IT, test antigens include early secretory antigenic target-6 (ESAT-6), culture filtrate protein 10 (CFP-10), and part of the peptide sequence for TB7.7. These antigens were specifically chosen because they are absent from BCG vaccine strains and most nontuberculous mycobacteria (with the exception of M. kansasii, M. szulgai, and M. marinum), thus increasing the QFT-IT’s specificity [2].

The test’s process is relatively simple: the antigens are incubated with the patient’s whole blood samples, and the amount of IFN-γ produced by T-cells is quantified via a single-step enzyme-linked immunosorbent assay (ELISA). The test relies on the basic principle that T-cells of an infected individual will release a significantly higher level of IFN-γ than those of a non-infected individual [3, 6].

In order to make the QFT-IT test more reliable, a total of three test tubes are provided for incubation (i.e. the test requires three blood samples): (1) one containing the test antigens, (2) one containing heparin alone (a negative control, referred to as Nil), and (3) one containing heparin, dextrose, and phytohemagglutinin (a positive control, referred to as the mitogen response). The results of the test are based on the amount of IFN-γ produced in each of the three test tubes, and the values of positive, negative, or indeterminate are defined based either on the manufacturer’s recommended criteria or on the criteria recommended by the specific country’s governing body (e.g. in the U.S., results are based on the criteria required by the FDA).
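As an illustration of how the three tubes combine into a result, here is a simplified sketch based on the FDA-approved criteria summarized in CDC guidance [2]; the actual criteria contain additional nuances, so treat this as an outline rather than a diagnostic tool:

```python
def qft_it_result(nil, tb_antigen, mitogen):
    """Simplified QFT-IT interpretation (IFN-gamma in IU/mL for each tube).

    The TB response is measured relative to the Nil tube; the two control
    tubes guard against high background (Nil) and a failed positive control
    (mitogen), either of which makes the test uninterpretable.
    """
    tb_response = tb_antigen - nil
    if nil > 8.0:
        return "indeterminate"  # excessive background in the negative control
    if tb_response >= 0.35 and tb_response >= 0.25 * nil:
        return "positive"
    if mitogen - nil >= 0.5:
        return "negative"       # negative only if the positive control responded
    return "indeterminate"      # inadequate mitogen (positive-control) response

print(qft_it_result(nil=0.1, tb_antigen=1.2, mitogen=6.0))  # -> positive
print(qft_it_result(nil=0.1, tb_antigen=0.2, mitogen=0.3))  # -> indeterminate
```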

T-SPOT.TB test

Unlike the QFT-IT, the T-Spot’s antigens include only ESAT-6 and CFP-10. The test relies on the same basic principle as the QFT-IT, but instead of using ELISA to measure the amount of IFN-γ produced, the T-Spot uses an enzyme-linked immunospot assay (ELISPOT) on separated and counted peripheral blood mononuclear cells (PBMCs) to quantify the number of cells producing IFN-γ. The secreting cells appear as “spots” in each test well. The T-Spot also includes three tubes (requiring three blood samples): the test antigens, a negative control (Nil), and a positive control (mitogen response) [1, 2].

The results are based on the quantified number of spots (i.e. a representation of IFN-γ-secreting PBMCs) and, just like the QFT-IT, the values of positive, negative, or indeterminate are defined either by using the manufacturer’s recommendations or the criteria required by the specific country’s governing body [2].
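A comparable sketch for the T-Spot, simplified from the spot-count criteria approved in the U.S. (which also define a borderline zone and control-failure rules); note that the real test evaluates the ESAT-6 and CFP-10 panels separately and takes the stronger response, so the single antigen count here is a simplification:

```python
def t_spot_result(nil_spots, antigen_spots, mitogen_spots):
    """Simplified T-SPOT.TB interpretation based on U.S. spot-count criteria."""
    response = antigen_spots - nil_spots
    if nil_spots > 10:
        return "invalid"      # excessive background in the negative control
    if response >= 8:
        return "positive"
    if response >= 5:
        return "borderline"   # U.S. criteria treat 5-7 spots as borderline
    if mitogen_spots >= 20:
        return "negative"
    return "invalid"          # failed positive control

print(t_spot_result(nil_spots=1, antigen_spots=12, mitogen_spots=50))  # -> positive
```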

The pros and cons of each test

TST

Some of the advantages of the TST are readily apparent. First, it is inexpensive—costs will vary from region to region, but as an example, the Los Angeles County Department of Public Health in California lists a per patient cost of $12.95 for a TST, compared to $21.27 for an IGRA [17]. The TST is also widely available and does not require complicated or expensive laboratory equipment. Furthermore, staff can be easily trained to measure the diameter of the induration and, based on the established protocols, come up with an interpretation of the test result (i.e. positive or negative).

The drawbacks, unfortunately, are significant. The first challenges are logistic: the test requires that the patient return to the clinic 2-3 days after administration of the injection, and the results are based on observation by trained staff, which introduces an element of subjectivity and therefore becomes a source of potential variability in interpretation [1]. Additionally, there are multiple factors that can affect TST interpretation—some of which may not always be reported by the patient or known beforehand by the clinician. Examples of these are those listed by the American Thoracic Society in conjunction with the Centers for Disease Control and Prevention [16], and include a history of gastrectomy or jejunoileal bypass, immigration from a high-prevalence country within five years of testing, silicosis, diabetes, certain types of cancer, recent weight loss of >10% ideal body weight, and many more. A relatively simplified list of the interpretation criteria for the TST is listed by the CDC at the following hyperlink [20]: http://www.cdc.gov/tb/publications/factsheets/testing/skintesting.htm.

Furthermore, there are two known potential sources of false positives: nontuberculous mycobacterium (NTM) infection and prior BCG vaccination [1, 6]. A study published in 2006 looked specifically at these two potential confounders [8], and its conclusions were as follows:

  • If the BCG is received in infancy (i.e. within the first year of life), the effects on the TST are minimal—especially at greater than 10 years post-vaccination. Specifically, the study found that BCG vaccination in this group caused an overall rate of 8.5 false-positive TST reactions per 100 vaccinees, with 2.6 false positives per 100 vaccinees causing reactions of 15 mm or more. When a TST was performed 10 years after vaccination, there was only one false positive per 100 vaccinees.
  • By contrast, BCG received after infancy or given multiple times (delivery of booster doses is common practice in some countries) produces more frequent, persistent, and larger TST responses. In this case, an overall rate of 41.8 false-positive TSTs (10 mm or more) per 100 vaccinees was observed. Of these, approximately half of the reactions measured greater than 15 mm. Furthermore, this effect persisted when subjects were re-tested 10+ years after vaccination, albeit with a reduced rate of 21.2 false positives per 100 vaccinees.
  • NTM is not a clinically important cause of TST false positives, except in populations with a high prevalence of NTM sensitization and very low prevalence of TB infection, e.g., reasonably healthy adults in the Southern and Central U.S., certain parts of Sweden, and other industrialized countries.

Finally, false negatives may occur in particular patient subgroups, most importantly the immunosuppressed—whether due to medical conditions such as HIV infection or to iatrogenic causes, for example the immunosuppressive treatment of immune-mediated inflammatory diseases like Crohn’s disease and rheumatoid arthritis [1, 5].

IGRAs (including QFT-IT and T-Spot)

The major advantage of IGRAs is their improved specificity in BCG-vaccinated patients, which is particularly important in countries where the BCG vaccine is administered after infancy or where booster shots are given [1, 6]. Logistically, the IGRAs only require one visit by the patient, result within 24 hours, and are free of the potential variability errors associated with TST placement and reading [2, 6].

A potential drawback of IGRAs is their significantly greater cost (including the need for specialized equipment) as compared to the TST; however, this cost may be offset by a decrease in false positives resulting in fewer resources spent evaluating and treating persons with positive test results [2, 5, 9]. Such a situation arises, for example, when testing BCG-vaccinated populations.

At least one study in the literature [6] argues that the main drawback of IGRAs is that their results “have not been validated prospectively, through follow-up of large cohorts, to determine the subsequent incidence of active TB.” This becomes particularly relevant in cases where an individual is IGRA-positive but TST-negative. This phenomenon remains unexplained, resulting in difficulties managing such patients.

When it comes to the immunosuppressed population, the available evidence suggests that IGRAs perform similarly to TST in detecting LTBI in HIV-infected individuals, and both TST and IGRAs have suboptimal sensitivity to detect active TB [1]. One could argue that the same limitations are likely to be present in the iatrogenically immunosuppressed population.

Finally, there is the question of whether a previous TST can affect IGRA results. The data are scarce at present, but a systematic review in 2009 concluded, “The TST appeared to affect IGRA responses only after 3 days and may apparently persist for several months” [10]. In other words, there is ongoing concern that a TST performed three or more days prior to an IGRA could lead to a false-positive IGRA.

Conclusions and pragmatic considerations when testing for LTBI

The information described above is but a glimpse into all of the current studies involving LTBI and its diagnostic challenges. In trying to summarize these into a pragmatic approach for the clinician, the following considerations seem reasonable:

Who to test

Both the TST and IGRAs may play a role in LTBI diagnosis. Use of these tests is appropriate among patients who are at risk for LTBI and would benefit from treatment (i.e. those at increased risk for developing active TB) [1, 2].

Who should NOT be tested

Generally, testing with TST and IGRAs should be avoided for persons at low risk for both latent infection and progression to active TB (unless they are likely to be at increased risk in the future) [2]. Additionally, as stated above, both tests have been shown to be inadequate for the diagnosis of active TB. If active TB is suspected, the clinician should proceed to acid-fast stain, culture, tissue pathology, imaging, bronchoscopy, etc. Consultation with an Infectious Diseases and/or Pulmonary specialist is warranted at this point.

Which test to use

Guidelines vary slightly from nation to nation—and within nations, from institution to institution. In general, the clinician should tailor the choice of test to the specific clinical scenario. That being said, given the overall characteristics of the available tests, some of the CDC’s 2010 recommendations state that “IGRA is preferred for testing persons from groups that historically have low rates of returning to have TSTs read” and “persons who have received BCG (as a vaccine or for cancer therapy).” On the other hand, “a TST is preferred for testing children aged <5 years” [2]. A full list of the CDC’s recommendations is beyond the scope of this article, but is available at the following website: http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5905a1.htm.

If the choice of test hinges on BCG vaccination status and there is uncertainty regarding whether the patient has received the vaccine, the following BCG World Atlas [15] may prove useful to the clinician: http://www.bcgatlas.org/.

Should both tests be used sequentially? Simultaneously?

When a TST is read as borderline, the clinician may be tempted to draw an IGRA for confirmation. However, as mentioned above, there is ongoing concern about whether administration of a TST can produce a false-positive IGRA. The data are scarce at present and the question is still being studied, but in the interim, based on the findings of van Zyl-Smit et al’s systematic review [10], drawing an IGRA should be avoided if the patient has had a TST placed three or more days prior to the blood draw.

Regarding simultaneous use of both tests, a review of the literature revealed only one set of proposed guidelines for situations in which concurrent use of both tests is being considered: the Public Health Agency of Canada. Per its recommendation, if the clinician is to use both tests, in order to avoid problems with interpretation, blood samples for an IGRA should be drawn before or on the same day as placement of the TST [19].

Statistical validation of TST and IGRAs

The reader will have noticed that specific numbers for specificity, sensitivity, number needed to treat, and other statistical measures have not been listed. The reason for this is that assessments of such statistical measures (for both TST and IGRAs) vary widely depending on the source and, most importantly, are hampered by the basic fact that there is currently no gold standard to confirm a diagnosis of LTBI [2]. This is further complicated by the fact that test result interpretation criteria change from country to country, sometimes from organization to organization. Furthermore, determination of these values will change among different patient populations (e.g. infants, young children, HIV-positive patients, the immunocompromised, and so on). Sensitivity values for TST and IGRAs have been reported anywhere from ~60% to ~90% depending on the source, patient population, interpretation criteria, and so on.

That being said, in general, the IGRA’s sensitivity is estimated to be similar to that of the TST, but the specificity of IGRAs is generally considered to be higher—given that the antigens used in IGRAs are relatively specific to M. tuberculosis [2, 4]. There are discrepancies in the literature, but as an example, in persons unlikely to have M. tuberculosis infection, the CDC cites the QFT-IT’s specificity as 99%, compared to 85% for the TST [2]. Once again, these numbers will vary significantly based on the source, patient population, interpretation criteria, BCG status, and so on.
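
To make the interplay between specificity and pretest probability concrete, the short sketch below applies Bayes’ rule to illustrative figures: a mid-range sensitivity of 80% for both tests, the specificities quoted above (99% for the QFT-IT vs. 85% for the TST), and a hypothetical screening population with 5% LTBI prevalence. These inputs are assumptions chosen for illustration, not validated test characteristics.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a binary test via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Illustrative assumptions: sensitivity 80% for both tests;
# specificity 99% (QFT-IT) vs. 85% (TST); LTBI prevalence 5%.
for name, spec in [("QFT-IT", 0.99), ("TST", 0.85)]:
    ppv, npv = predictive_values(0.80, spec, 0.05)
    print(f"{name}: PPV = {ppv:.0%}, NPV = {npv:.0%}")
```

Under these assumptions, nearly 80% of positive TSTs would be false positives (PPV ≈ 22%), whereas a positive QFT-IT would be a true positive roughly 80% of the time (PPV ≈ 81%); both tests retain an NPV near 99%. This is one way to see the intuition behind the CDC’s preference for IGRAs in BCG-vaccinated persons.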

Thus, in summary, the choice of test should be based on the clinical scenario, institution-specific protocols, and expert recommendations (CDC, WHO, local medical authorities).

A few additional considerations regarding result interpretation of both tests

Regarding the TST, when interpreting a positive value, it is important to consider more than just the size of the induration. The clinician should weigh three different aspects: (1) the size of the induration, (2) the pretest probability of infection, and (3) the risk of disease if the person were truly infected [1].

Regarding IGRAs, “indeterminate” results are not uncommon with the QFT-IT—although this has improved when compared to its predecessors, the QFT and QFT-G. Indeterminate results are most often associated with age <5 years or >80 years, and with immunosuppression, e.g. from HIV infection or iatrogenic causes. In some instances, indeterminate results can be secondary to improper handling or insufficient samples (this is particularly true of the QFT and QFT-G, but also the QFT-IT). Per CDC recommendations, a repeat test can be useful when the initial IGRA was indeterminate and “a reason for testing persists,” or when assay measurements are unusual, e.g. when the mitogen response is lower than expected for the population being tested [2]. Otherwise, if available, a T-Spot may be more useful as it is associated with significantly fewer “indeterminate” results [2, 5]. If appropriate in such cases, a TST may also be considered. If doubt remains in spite of repeated, expanded testing, consultation with an Infectious Diseases specialist is warranted.

When to treat

A discussion regarding appropriate tests and treatment after the initial diagnosis of LTBI is beyond the scope of this article. However, a few items are worth mentioning: per the CDC’s recommendations, the diagnosis and treatment of M. tuberculosis infection should NOT be based on IGRA or TST results alone. Other considerations need to be included in the decision, such as epidemiologic and medical history, risk factors, and the overall clinical picture [2]. A useful tool for the clinician is the Online TST/IGRA Interpreter [11] at http://www.tstin3d.com/. This website helps the clinician estimate the risk of progression to active TB for an individual who has undergone TST or IGRA testing given the specific clinical picture, including items such as country of birth, age at immigration to a country with low TB incidence, and existing comorbidities.

Mycobacterium tuberculosis infection is a complex, even elegant process with significant individual and public health implications. Clearly further research is needed in the field of diagnostics. In the meantime, it is the writer’s hope that this article sheds some light on the advantages and limitations of the currently available tests for latent disease, which will hopefully in turn assist the clinician in making a better-informed test choice.

Dr. Miguel A. Saldivar is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Howard Leaf, MD, Internal Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons 

References

1. Pai M, Denkinger CM, Kik SV, et al. Gamma interferon release assays for detection of Mycobacterium tuberculosis infection. Clin Microbiol Rev. 2014 Jan;27(1):3-20. doi: 10.1128/CMR.00034-13. PMID 24396134. http://cmr.asm.org/content/27/1/3.long
2. Mazurek GH, Jereb J, Vernon A, et al. Updated guidelines for using Interferon Gamma Release Assays to detect Mycobacterium tuberculosis infection – United States, 2010. MMWR Recomm Rep. 2010 Jun 25;59(RR-5):1-25. PMID 20577159. http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5905a1.htm
3. Smith DS. Interferon Gamma Release Assays. Stanford University. http://web.stanford.edu/group/parasites/ParaSites2006/TB_Diagnosis/Interferon%20Gamma%20Release%20Assays.html
4. Lalvani A. Diagnosing tuberculosis infection in the 21st century: new tools to tackle an old enemy. Chest. 2007 Jun;131(6):1898-906. PMID 17565023. http://journal.publications.chestnet.org/article.aspx?articleid=1085168
5. Lalvani A, Pareek M. Interferon gamma release assays: principles and practice. Enferm Infecc Microbiol Clin. 2010 Apr;28(4):245-52. doi: 10.1016/j.eimc.2009.05.012. Epub 2009 Sep 24. PMID 19783328. http://www.elsevier.es/en-revista-enfermedades-infecciosas-microbiologia-clinica-28-articulo-interferon-gamma-release-assays-principles-13149868
6. Landry J, Menzies D. Preventive chemotherapy. Where has it got us? Where to go next? Int J Tuberc Lung Dis. 2008 Dec;12(12):1352-64. PMID 19017442. http://www.ingentaconnect.com/content/iuatld/ijtld/2008/00000012/00000012/art00005?token=0051132cf33437a63736a6f3547414c7d703444532e5b6f644a467b4d616d3f4e4b3485763504194f
7. Bennett DE, Courval JM, Onorato I, et al. Prevalence of tuberculosis infection in the United States population: the national health and nutrition examination survey, 1999–2000. Am J Respir Crit Care Med. 2008 Feb 1;177(3):348-55. Epub 2007 Nov 7. PMID 17989346. http://www.atsjournals.org/doi/abs/10.1164/rccm.200701-057OC?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed#.VOpDnnaR_RI
8. Farhat M, Greenaway C, Pai M, Menzies D. False-positive tuberculin skin tests: what is the absolute effect of BCG and nontuberculous mycobacteria? Int J Tuberc Lung Dis. 2006;10(11):1192-1204. PMID 17131776. http://www.ingentaconnect.com/content/iuatld/ijtld/2006/00000010/00000011/art00003?token=00561c5878e77900a6855c5f3b3b47465248703b444549794624734f582a2f4876753375686f49530b0a5c
9. Marra F, Marra CA, Sadatsafavi M, et al. Cost-effectiveness of a new interferon-based blood assay, QuantiFERON-TB Gold, in screening tuberculosis contacts. Int J Tuberc Lung Dis. 2008 Dec;12(12):1414-24. PMID 19017451. http://www.ingentaconnect.com/content/iuatld/ijtld/2008/00000012/00000012/art00014?token=00501d71712f1753f11c939412f415d766b2544453a4a6c7b73516f253048296a7c2849266d656cc
10. van Zyl-Smit RN, Zwerling A, Dheda K, Pai M. Within-subject variability of interferon-g assay results for tuberculosis and boosting effect of tuberculin skin testing: a systematic review. PLoS One. 2009 Dec 30;4(12):e8517. doi: 10.1371/journal.pone.0008517. PMID 20041113. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0008517
11. Law S, Menzies D, Pai M, et al. The Online TST/IGRA Interpreter. McGill University & McGill University Health Center, Montreal, Quebec, Canada. http://www.tstin3d.com/
12. Metcalfe JZ, Everett CK, Steingart KR, et al. Interferon-γ release assays for active pulmonary tuberculosis diagnosis in adults in low- and middle-income countries: systematic review and meta-analysis. J Infect Dis. 2011 Nov 15;204 Suppl 4:S1120-9. doi: 10.1093/infdis/jir410. PMID 21996694. http://jid.oxfordjournals.org/content/204/suppl_4/S1120.full?sid=1dad36e7-34b2-4950-a694-108659642e9b
13. Sester M, Sotgiu G, Lange C, et al. Interferon-γ release assays for the diagnosis of active tuberculosis: a systematic review and meta-analysis. Eur Respir J. 2011 Jan;37(1):100-11. doi: 10.1183/09031936.00114810. Epub 2010 Sep 16. PMID 20847080. http://erj.ersjournals.com/content/37/1/100.long
14. Fan L, Chen Z, Hao XH, et al. Interferon-gamma release assays for the diagnosis of extrapulmonary tuberculosis: a systematic review and meta-analysis. FEMS Immunol Med Microbiol. 2012 Aug;65(3):456-66. doi: 10.1111/j.1574-695X.2012.00972.x. Epub 2012 Jun 18. PMID 22487051. http://femsim.oxfordjournals.org/content/65/3/456
15. Zwerling A, Behr M, Verma A, et al. The BCG World Atlas: a database of global BCG vaccination policies and practices. PLoS Med. 2011;8(3):e1001012. doi: 10.1371/journal.pmed.1001012. McGill University & McGill University Health Center, Montreal, Quebec, Canada. http://www.bcgatlas.org
16. American Thoracic Society/Centers for Disease Control and Prevention. Targeted tuberculin testing and treatment of latent tuberculosis infection. Am J Respir Crit Care Med. 2000;161(suppl 3):S221-S247. doi: 10.1164/ajrccm.161.supplement_3.ats600. http://www.atsjournals.org/doi/full/10.1164/ajrccm.161.supplement_3.ats600#.VOoQsnaR_RI
17. Direct Costs of TST/IGRA Cost Effectiveness. Los Angeles County Department of Public Health, Division of HIV and STD Programs. State of California. http://publichealth.lacounty.gov/dhsp/MAC/IGRAcosteffectiveness.pdf
18. World Health Organization (WHO). Guidelines on the Management of Latent Tuberculosis Infection. ISBN 978 92 4 154890 8. WHO/HTM/TB/2015.01. http://www.who.int/tb/publications/ltbi_document_page/en/
19. Updated Recommendations on Interferon Gamma Release Assays for Latent Tuberculosis Infection. An Advisory Committee Statement of the Canadian Tuberculosis Committee. Volume 34, ACS-6, October 2008. http://www.phac-aspc.gc.ca/publicat/ccdr-rmtc/08vol34/acs-6/index-eng.php
20. Fact Sheets: Tuberculin Skin Testing. Centers for Disease Control and Prevention. http://www.cdc.gov/tb/publications/factsh

Neurologic Complications In Infective Endocarditis: To Anticoagulate Or Not To Anticoagulate

July 10, 2015

By Shannon Chiu, MD

Peer Reviewed

The annual incidence of infective endocarditis (IE) is estimated to be 3 to 9 cases per 100,000 persons in developed countries [1-2]. Neurologic complications are the most severe and frequent extracardiac complications of IE, affecting 15-20% of patients [3-4]. They consist of 1) ischemic infarction secondary to septic emboli from the valvular vegetation, which can eventually undergo hemorrhagic transformation; 2) focal vasculitis/cerebritis from septic emboli obstructing the vascular lumen, which can then develop into brain abscess or meningoencephalitis; and 3) mycotic aneurysm secondary to inflammation from septic emboli penetrating the vessel wall [5]. Amongst these complications, stroke is the most common and is the presenting feature in 50-75% of patients [6]. An ongoing debate amongst physicians concerns the appropriateness of anticoagulation in patients with IE: how to balance the risk of thromboembolism against that of hemorrhagic transformation of stroke.

Specific risk factors have been associated with an increased risk of symptomatic embolic events. Embolic risk is especially high within the first 2 weeks after diagnosis, decreasing in frequency after initiation of antibiotics [7]. Size, location, and mobility of vegetations are key predictors; in fact, surgery may be indicated for prevention of embolism with involvement of the anterior mitral leaflet, vegetation size >10mm, or increasing vegetation size despite appropriate antibiotics [5,8]. Additional risk factors for embolism in IE include advanced age and S. aureus infection. Importantly, S. aureus prosthetic valve endocarditis is known to be associated with higher overall mortality and severe neurologic complications such as hemorrhagic stroke [3,9-10]. Mechanisms for intracranial hemorrhage (ICH) in patients with IE include hemorrhagic transformation (HT) of ischemic infarct, rupture of mycotic aneurysms, and erosion of septic arteritic vessels [11].

Currently, evidence regarding anticoagulants primarily stems from observational studies. One of the arguments against anticoagulation in IE is the fear of early ICH and HT of ischemic stroke. In Tornos et al.’s retrospective observational series of 56 patients with native and prosthetic valve S. aureus IE, mortality was higher in prosthetic valve IE than in native valve IE (p=.02; odds ratio [OR], 4.23; 95% confidence interval [CI], 1.15-16.25) [12]. The authors inferred that part of this difference stemmed from the deleterious effect of anticoagulation leading to lethal neurologic damage, as 90% of patients with prosthetic valve IE due to S. aureus were receiving oral anticoagulant treatment on admission, whereas no patient with native valve IE due to S. aureus was receiving such treatment. Meanwhile, in Heiro et al.’s retrospective study, a sub-analysis of 32 patients with S. aureus IE showed that 57% of patients receiving anticoagulant therapy died within 3 months of admission vs. 20% of those not receiving anticoagulant therapy, though the difference was not statistically significant (p=0.1) [9]. Garcia-Cabrera et al. conducted a retrospective analysis of 1,345 cases of left-sided IE and likewise found that hemorrhagic complications were significantly associated with anticoagulant therapy, which was used primarily in patients with mechanical valves (hazard ratio [HR] 2.71, 95% CI 1.54-4.76, p=0.001) [13]. On this basis, these authors have recommended stopping anticoagulants as soon as a diagnosis of IE is suspected, at least until past the septic phase of the disease. Despite these reported associations of poor outcome in S. aureus IE and detrimental effect of anticoagulant therapy, these results arose from nonrandomized retrospective studies without matched cohorts. Moreover, Tornos et al.’s study was primarily designed to compare native valve with prosthetic valve IE patients, and the sample size of those receiving anticoagulation was small (19 out of 56) [12]. Similarly, Heiro et al.’s study was of limited statistical power, as only 2 of the 4 patients with lethal S. aureus IE actually died of hemorrhagic complications while taking anticoagulant therapy.

On the opposing end, more recent prospective studies show no significant association between anticoagulation and increased risk of hemorrhagic complications, suggesting that the risk of ICH due to anticoagulation after IE-related stroke is overestimated. Rasmussen et al. conducted a prospective cohort study of 175 S. aureus IE patients, of whom 70 (40%, 95% CI 33-47%) experienced major cerebral events during the course of the disease [14]. Stroke was the most common complication (34%, 95% CI 27-41%), but the incidence of cerebral hemorrhage was low (3%, 95% CI 0.5-6%). None of the patients who experienced cerebral hemorrhage were receiving anticoagulant treatment. In fact, Rasmussen et al. found that patients receiving anticoagulation were less likely to have experienced a major cerebral event at the time of admission compared to those not receiving such treatment (15% vs 37%, p=0.009). The indication for anticoagulation in the majority of patients in this study was a prosthetic heart valve. Anticoagulation at the time of admission was associated with a significant reduction in the number of major cerebral events in patients with native valve IE (0 vs. 39%, p=0.008); however, this was not evident in those with prosthetic valve IE. The in-hospital mortality rate was 23% (95% CI 17-29%), with no significant difference between patients with or without anticoagulant therapy.

An added complication to the picture is the decision for cardiac surgery in patients with IE who suffer a neurologic event. Except for clinically severe ICH, neurologic complications are not a contraindication to surgical treatment [5]. The decision to perform cardiopulmonary bypass remains controversial, as the surgery can cause or aggravate cerebral damage in several ways, such as ICH related to heparinization during the procedure and possible hemodynamic worsening of the ischemic infarction (e.g. additional embolism, hypoperfusion) [5,15]. The timing of the surgery is also hotly debated, and evidence supporting surgical intervention is of limited quality, based primarily on observational studies. However, when needed, cardiac surgery can be performed promptly after a silent cerebral embolism or transient ischemic attack, but must be postponed for at least 1 month following ICH [8].

Despite controversy over anticoagulant therapy, recommendations regarding antiplatelet therapy are more clear-cut: antiplatelets are not recommended for patients with IE. In a double-blind, placebo-controlled trial comparing aspirin 325mg with placebo for 4 weeks in 115 IE patients, there was no significant decrease in the incidence of embolic events (OR 1.62, 95% CI 0.68-3.86) [16]. In fact, there was a trend toward more bleeding in the aspirin group (OR 1.92, 95% CI 0.76-4.86), and aspirin had no effect on vegetation size. While there are conflicting findings from observational studies regarding the use of chronic antiplatelet treatment before IE, in terms of risks of death and embolic events, current available evidence suggests that antiplatelet therapy is not indicated in IE [17-19]. Patients on antiplatelet therapy for other indications may continue taking it in the absence of major bleeding.

So where does this leave us? According to the most recent European Society of Cardiology guidelines, there is no indication to start anticoagulation in patients with IE [8]. For those already receiving anticoagulant therapy in whom IE is complicated by ischemic (non-hemorrhagic) stroke, the oral anticoagulant should be replaced by unfractionated heparin for 2 weeks. For those complicated by ICH, all anticoagulation should be stopped; in prosthetic valve IE, however, the recommendation is to reinitiate unfractionated heparin “as soon as possible” (no specific time frame is given in the guidelines). Critically, the European Society of Cardiology guidelines acknowledge the low level of evidence supporting these recommendations.
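
As a compact restatement of the guidance just summarized, the sketch below encodes those decision points in a single function. It is a simplification for illustration only: the function name and inputs are invented here, and cases not covered by the summarized recommendations fall through to an explicit “not addressed” result. It is not clinical decision software.

```python
def esc_anticoagulation_guidance(on_anticoagulation: bool,
                                 ischemic_stroke: bool,
                                 intracranial_hemorrhage: bool,
                                 prosthetic_valve: bool) -> str:
    """Simplified restatement of the ESC recommendations summarized
    above (illustrative sketch only, not a clinical tool)."""
    if not on_anticoagulation:
        # No indication to start anticoagulation for IE itself.
        return "do not initiate anticoagulation"
    if intracranial_hemorrhage:
        if prosthetic_valve:
            # Reinitiate unfractionated heparin (UFH) "as soon as
            # possible"; the guidelines give no specific time frame.
            return "stop all anticoagulation; reinitiate UFH as soon as possible"
        return "stop all anticoagulation"
    if ischemic_stroke:
        return "replace oral anticoagulant with UFH for 2 weeks"
    # Scenarios without stroke or ICH are not covered by the
    # recommendations summarized in this article.
    return "not addressed by the recommendations summarized here"
```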

Anticoagulation is undoubtedly a double-edged sword. Whenever cerebrovascular complications of IE are suspected, there should be a low threshold to perform diagnostic brain imaging to rule out cerebral hemorrhage, which would definitively justify discontinuation of anticoagulation and likely postpone planned cardiac surgery. Repeat echocardiography and neuroimaging play an important role in the management of IE patients. At this time, the lack of robust information on anticoagulant therapy in IE underscores the need for large randomized controlled trials.

Dr. Shannon Chiu is a 2nd year resident at NYU Langone Medical Center

Peer Reviewed by Albert Jung, MD, Internal Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

  1. Correa de Sa DD, Tleyjeh IM, Anavekar NS, et al.; Epidemiological trends of infective endocarditis: a population-based study in Olmsted County, Minnesota. Mayo Clin Proc. 2010;85:422-426. http://www.mayoclinicproceedings.org/article/S0025-6196%2811%2960327-3/fulltext 
  2. Duval X, Delahaye F, Alla F, et al.; Temporal trends in infective endocarditis in the context of prophylaxis guideline modifications: three successive population-based surveys. J Am Coll Cardiol. 2012;59:1968-1976. http://www.sciencedirect.com/science/article/pii/S0735109712009849
  3. Thuny F, Avierinos JF, Tribouilloy C, et al.; Impact of cerebrovascular complications on mortality and neurologic outcome during infective endocarditis: a prospective multicentre study. Eur Heart J. 2007;28:1155-1161. http://eurheartj.oxfordjournals.org/content/28/9/1155.long 
  4. Sonneville R, Mirabel M, Hajage D, et al.; Neurologic complications and outcomes of infective endocarditis in critically ill patients: the ENDOcardite en REAnimation prospective multicenter study. Crit Care Med. 2011;39:1474-1481. http://journals.lww.com/ccmjournal/Abstract/2011/06000/Neurologic_complications_and_outcomes_of_infective.35.aspx
  5. Ferro JM, Fonesca C. Infective endocarditis. Handb Clin Neurol, Neurologic Aspects of Systemic Disease Part I. 2014;119:75-91. http://www.sciencedirect.com/science/article/pii/B9780702040863000072
  6. Sila C. Anticoagulation should not be used in most patients with stroke with infective endocarditis. Stroke. 2011;42:1797-1798. stroke.ahajournals.org/content/42/6/1797.full.pdf
  7. Snygg-Martin U, Gustafsson L, Rosengren L, et al.; Cerebrovascular complications in patients with left-sided infective endocarditis are common: a prospective study using magnetic resonance imaging and neurochemical brain damage markers. Clin Infect Dis. 2008;47:23-30.  http://cid.oxfordjournals.org/content/47/1/23.long
  8. Habib G, Hoen B, Tornos P, et al.; The task force on the prevention, diagnosis, and treatment of infective endocarditis of the European Society of Cardiology (ESC). Eur Heart J. 2009;30:2369-2413. http://eurheartj.oxfordjournals.org/content/ehj/30/19/2369.full.pdf
  9. Heiro M, Nikoskelainen J, Engblom E, et al.; Neurologic manifestations of infective endocarditis: A 17-Year Experience in a Teaching Hospital in Finland. Arch Intern Med. 2000;160:2781-2787. http://archinte.jamanetwork.com/article.aspx?articleid=485459
  10. Di Salvo G, Habib G, Pergola V, et al.; Echocardiography predicts embolic events in infective endocarditis. J Am Coll Cardiol. 2001;37:1069-1076. http://www.sciencedirect.com/science/article/pii/S0735109700012067
  11. Molina CA, Selim MH. Anticoagulation in patients with stroke with infective endocarditis: the sword of Damocles. Stroke. 2011;42:1799-1800. http://stroke.ahajournals.org/content/42/6/1799.full.pdf
  12. Tornos P, Almirante B, Mirabet S, et al.; Infective endocarditis due to Staphylococcus aureus: deleterious effect of anticoagulant therapy. Arch Intern Med. 1999;159:473-475. http://archinte.jamanetwork.com/article.aspx?articleid=414876
  13. Garcia-Cabrera E, Fernandez-Hidalgo N, Almirante B, et al.; Neurological complications of infective endocarditis: risk factors, outcome, and impact of cardiac surgery: a multicenter observational study. Circulation. 2013;127(23):2272-84. http://circ.ahajournals.org/content/127/23/2272.long
  14. Rasmussen RV, Snygg-Martin U, Olaison L, et al.; Major cerebral events in Staphylococcus aureus infective endocarditis: is anticoagulant therapy safe? Cardiology. 2009;114:284-291. http://www.karger.com/Article/FullText/235579
  15. Goldstein LB, Husseini NE. Neurology and cardiology: points of contact. Rev Esp Cardiol. 2011;64(4):319-27. http://www.sciencedirect.com/science/article/pii/S188558571100154X
  16. Chan KL, Dumesnil JG, Cujec B, et al.; A randomized trial of aspirin on the risk of embolic events in patients with infective endocarditis. J Am Coll Cardiol. 2003;42:775-780. http://content.onlinejacc.org/article.aspx?articleid=1132570
  17. Anavekar NS, Tleyjeh IM, Anavekar NS, et al.; Impact of prior antiplatelet therapy on risk of embolism in infective endocarditis. Clin Infect Dis. 2007;44:1180-1186. http://cid.oxfordjournals.org/content/44/9/1180.long
  18. Pepin J, Tremblay V, Bechard D, et al.; Chronic antiplatelet therapy and mortality among patients with infective endocarditis. Clin Microbiol Infect. 2009;15:193-199. http://onlinelibrary.wiley.com/doi/10.1111/j.1469-0691.2008.02665.x/full
  19. Chan KL, Tam J, Dumesnil JG, et al.; Effect of long-term aspirin use on embolic events in infective endocarditis. Clin Infect Dis. 2008;46:37-41. http://cid.oxfordjournals.org/content/46/1/37.long

Microbiome Blues in E

April 1, 2015

By M tanner

Many bacteria live in and on me—I’ve always known that. But when I learned that bacteria make up 90% of the cells in my body, it made me feel so sucio, so unclean.

I went through my day, realizing for the first time that I am entertaining 100 trillion houseguests who never go home. And who lack all sense of decorum. I know that, technically speaking, bacteria are asexual. But then I read: “one special type of pilus found in ‘male’ strains of E coli is involved in conjugation, with the transfer of DNA from donor to recipient, probably accomplished through its core.” Why the quotation marks around ‘male’? Sounds like a typical guy to me.

I’ve always liked to think of myself as a eukaryote, the kind of standup, biped guy whose cells all have a nucleus. A member in good standing of the kingdom Animalia, the phylum Chordata, the class Mammalia, the order of Primates, the family Hominidae, the genus Homo, and the species Sapiens. Einstein’s species, the crown of creation.

Now I see myself as one of Car & Driver’s 10 best in the bacteria class, with a refined, roomy interior (30 linear feet of gastrointestinal tract); a beguiling, sporty exterior (20 square feet of skin); and an eagerness to romp. Looking at my reflection in the bathroom mirror I don’t see just me any longer. I’m trying to determine whether the man in the mirror is overall more gram-negative or anaerobic.

There is much about bacteria to admire. They have a lot of seniority: 3.5 billion years, compared with Homo’s 2 million. They reproduce (and mutate) in minutes, where it takes us 40 weeks. They produced the oxygen that makes our existence possible. There is no actual malice behind even their most horrendous infections. And most bacteria cause no trouble.

When I cut my finger, I do what everyone does. I put my finger in my mouth and clean off the blood with my tongue. There are 100 million bacteria in every milliliter of saliva. If Streptococcus salivarius were pathogenic, we’d all be dead.

The bacterial population of the gaps between our teeth is huge. How do I not get endocarditis every time I launch a diaspora with a strand of dental floss?

Everyone has heard of the serial killers E coli, Staph aureus, Strep pneumoniae, and M tuberculosis, but our heroes are unsung. No virgins strew rose petals in the paths of the benign, commensal bacteria holding down spots that could otherwise be occupied by superbugs producing the dreaded New Delhi metallo-beta-lactamase-1. I only recently learned the names of the Firmicutes and Bacteroidetes, benign anaerobes that make up over 98% of our bacteria.

On further reflection I feel better about the 10-to-1 situation. They only weigh three pounds. And it’s not like there are ten of them loitering in every one of my cells. There are strict divisions between the body’s sterile and microbial regions. Bacteria are confined to the skin, the gastrointestinal tract, the nose, the ears, and the vagina. The vital organs—heart, liver, kidneys, and brain—are sterile. Likewise, many bodily fluids—blood, urine, cerebrospinal fluid, and amniotic fluid—are sterile or close to it. That’s why I get mad when the border is crossed. Really mad.

I know I shouldn’t. Nobel laureate Joshua Lederberg wrote in 2005 that we should “supercede the twentieth-century metaphor of war for describing the relationship between people and infectious agents. A more ecologically-informed metaphor, which includes the germs’-eye view of infection, might be more fruitful.” Yes, but in the hospital I can’t get myself to be all ecologically minded when my patient has a fever of 105 and shaking chills.

Ask If My Hands Are Clean (poster by the elevators at Bellevue Hospital)

Lunchtime. I’m walking across the Bellevue atrium when a strange man comes up to me.

Patient:

Are your hands clean?

Me:

I beg your pardon?

Patient:

Are your hands squeaky clean?

Me:

Do I know you?

Patient:

No, but I know you. You’re a doctor. I’ve seen you around the clinic for years. You ride a bike to work.

Me:

Do you also happen to know the name of my first childhood pet?

Patient:

Huh? That sign over there says “Ask if my hands are clean.” So I’m just doing what it says. Are your hands clean?

Me:

May I ask? What is your diagnosis?

Patient:

Obsessive-compulsive disorder. So! Are they clean?

Me:

That’s a complex question. Have you heard of Stanley Falkow?

Patient:

Who?

Me:

He’s a microbiologist at Stanford. Stanley Falkow wrote: “The world is covered with a fine patina of feces.”

Patient:

That’s disgusting.

Me:

Disgusting but true, my friend. I rode 4 miles on my bike this morning through the streets of Manhattan, holding handlebar grips that I’ve never washed.

Patient:

You haven’t? I use Purell 20 times a day. I love these Purell dispensers they’ve placed all over the hospital.

Me:

I bet you do. You might want to back off on the Purell. Have you heard of Dr. Martin Blaser?

Patient:

Oh yeah, he was on Jon Stewart.

Me:

He wrote: “What’s wrong with good old soap and water?” So let’s say I walk into the hospital with 10 billion bacteria on my hands, and that Purell, if the claim on the label is true, kills 99.99% of them. That still leaves a million bacteria on my hands. The whole notion of getting out of the shower and being “squeaky clean” is a joke. You haven’t been squeaky clean since you were bunking with a placenta. But don’t be blue. The bacteria you acquired on that ride down your mom’s birth canal are actually helping you.

Patient:

Can you write me a prescription for a good colon cleanser?

Me:

Why don’t you take that up with your primary care physician?

Most of Our DNA is Bacterial

Martin Blaser, in his book Missing Microbes: How the Overuse of Antibiotics Is Fueling Our Modern Plagues, maintains that antibiotics, modern hygiene, and overreliance on Caesarean section have disrupted our microbiota and led to the proliferation of type 1 diabetes, asthma, obesity, and celiac disease. But I found this even more disturbing:

…your microbes and mine have millions of unique genes, and a more current estimate is 2 million. Your human genome, by comparison, has about 23,000 genes. In other words, 99 percent of the unique genes in your body are bacterial, and only about 1 percent are human…

This brought to mind the classic passage from Richard Dawkins’s The Selfish Gene about the Replicators (self-copying molecules, DNA):

Now the Replicators swarm in huge colonies, safe inside gigantic lumbering robots, sealed off from the outside world, communicating with it by tortuous indirect routes, manipulating it by remote control. They are in you and in me; they created us, body and mind; and their preservation is the ultimate rationale for our existence. They have come a long way, those replicators. Now they go by the name of genes, and we are their survival machines.

Shared Decision Making I

I walk out of the hospital and suddenly feel like driving to Jones Beach. But is it really I who want to go to the beach? Or is it that erogenous zone on my 30 trillion Y chromosomes—the DNA sequence responsible for the atavistic tendency of the Tanner males to drift toward potential Tanner-germinators—that wants to go to the beach, while “I”, the lumbering robot, the survival machine, will just be handling the driving and tolls?

Shared Decision Making II

My wife and I are at our neighborhood restaurant for our anniversary. “Let me tell you about tonight’s dessert specials,” our waiter says. “We have an orange soufflé topped off with Grand Marnier sauce, a citrus tart drizzled with pomegranate compote, and, lastly, a chocolate mousse. It is scandalous.”

Carlo Maley of UCSF believes that our cravings for certain foods are bacterial in origin. The brain may be sterile, but it is subject to remote control. A neural impulse originating in my abdomen zooms up the vagus nerve to my hypothalamus and I can almost make out the words of a dull chant: “Order the mousse! Get the mousse!” Without too much deliberation I tell the waiter: “We’ll have the mousse. Two spoons, please.”

Microbiome Blues

It’s not that bad being bossed around by my bacteria. I don’t even notice it. I’m worth more to them alive than dead, so our interests are pretty much aligned. We’re so intertwined that it’s like a marriage. At various times bacteria are our friends, our enemies, and our bosses. Servants too: every time I prescribe human insulin for a patient, E coli have done the heavy lifting. Our behavior is an amalgam of many forces, and our microbiome is part of the mix. What really determines our daily choices? Hormones? Our parents? Our past experiences? Our genes? Our bacterial genes? The configuration of the planets on the day we were born? When we don’t have a good explanation for something we tend to put all our bad explanations together and call it “multifactorial.”

It’s bedtime. I brush my teeth and gargle with Listerine, which kills 99% of odor-causing bacteria. The other 1%, however, will be at it all night. Then I wash my hands one last time and say the Lavabo, from Psalm 26. Said fast, it takes 15 seconds—the amount of time we doctors are supposed to spend washing our hands:

“I wash my hands in innocence, O Lord. Giving voice to my thanks and marveling at your miraculous universe. O Lord I love the house in which you dwell, the tenting place of your glory. Gather not my soul with those of sinners, nor with men of blood my life. On their hands are crimes and their right hands are full of bribes. But I walk in integrity. Redeem me and have pity on me.”

Dr. Michael Tanner is Executive Editor of Clinical Correlations

Diabetic Foot Ulcers: Pathogenesis and Prevention

March 19, 2015

By Shilpa Mukunda, MD

Peer Reviewed

On my first day on inpatient medicine at the VA Hospital, Mr. P came in with an oozing foot ulcer. Mr. P, a 60-year-old man with a 30 pack-year smoking history, poorly controlled diabetes, peripheral vascular disease, and chronic renal disease, had already had toes amputated. He knew all too well the routine of what would happen now with his newest ulcer. After two weeks of IV antibiotics and waiting for operating room time, Mr. P eventually had his toe amputated. It was his fourth amputation.

Mr. P unfortunately is not alone in this chronic complication of diabetes. Approximately 15-25% of individuals with type 2 diabetes mellitus develop a diabetic foot ulcer [1]. Not all ulcers, however, require amputation. Ulcers can also be treated with sharp debridement, offloading techniques to redistribute pressure from the ulcer, and wound dressings, with hydrogels being the most frequently used [2]. Weekly sharp debridement is associated with more rapid healing of ulcers [2]. In addition, in patients with severe peripheral vascular disease and critical limb ischemia, early surgical revascularization can prevent ulcer progression and decrease rates of amputation [3]. Even with immediate and intensive treatment, however, many foot ulcers will take months to heal or may not heal at all. Diabetic foot ulcers are the most common cause of non-traumatic amputations in the United States, with 14-24% of patients with an ulcer subsequently undergoing amputation [4]. Amputation leads to physical disability and greatly reduced quality of life [5]. In addition to their detrimental effects on the lives of individual patients, ulcers also have a great economic cost to society. Patients with ulcers often have lengthy inpatient stays with involvement of specialists. According to a 1999 study, the healthcare costs of a single ulcer are estimated to be approximately $28,000 [4].

The pathogenesis of diabetic foot ulcers is multifaceted. Neuropathy, abnormal foot mechanics, peripheral artery disease, and poor wound healing all contribute. Neuropathy, a microvascular complication of diabetes, occurs in approximately 50% of individuals with long-standing type 1 and type 2 diabetes mellitus, and causes diabetic foot ulcers through a variety of mechanisms [6]. First, distal symmetric polyneuropathy of sensory fibers, the most common neuropathy in diabetes, leads to distal sensory loss in a glove-and-stocking distribution. Without the ability to sense pain, patients with diabetic neuropathy can inadvertently sustain repeated trauma to the foot. Neuropathy can also manifest with disordered proprioception, resulting in improper weight bearing and ulceration [6]. Motor and sensory neuropathy together can lead to disordered foot mechanics, manifesting variably as hammertoe, claw toe deformity, and Charcot foot. These structural changes cause abnormal pressure points and increased shear stress on the foot, both of which increase the risk for ulcer formation [7]. Diabetic neuropathy can also affect autonomic fibers. Autonomic neuropathy results in decreased sweating of the foot and dry skin, leading to cracks and fissures that can serve as entry points for bacteria [1]. In addition to neuropathy, many diabetics have peripheral artery disease, a macrovascular complication of diabetes and an independent risk factor for lower extremity amputation [8]. Peripheral artery disease leads to decreased tissue perfusion, which then impedes wound healing. In addition, impaired cell-mediated immunity and phagocyte function further reduce wound healing in diabetics [6]. A study by Lavery and colleagues found that the risk of ulceration in diabetics was proportional to the number of risk factors, with the risk increased 1.7-fold in diabetics with isolated peripheral neuropathy and 36-fold in diabetics with peripheral neuropathy, deformity, and a previous amputation [9].

How can ulcers be prevented? Optimizing glycemic control is the most important initial step. One study found that the risk of an ulcer increased in direct proportion to each 1% rise in the hemoglobin A1c [10]. In the primary care setting, diabetic patients should be screened for foot ulcers annually, with higher-risk patients screened more frequently. The annual foot exam should include visual inspection of the feet for calluses, skin integrity, and bony deformities. Patients with ulcerations or gross deformities should be referred to a podiatrist. The foot exam should also include screening for loss of protective sensation with the Semmes-Weinstein monofilament. Inability to perceive the 10-gram load imparted by the filament is associated with large-fiber neuropathy and a 7-fold increase in the risk of ulceration [11]. In addition, diabetic patients should be screened for peripheral vascular disease through palpation of the dorsalis pedis and posterior tibialis pulses and measurement of ankle-brachial index. Patients with peripheral vascular disease should be given additional counseling on smoking cessation, as smoking worsens peripheral artery disease, and referral to a vascular surgeon should be considered. All diabetic patients, especially those who have lost monofilament sensation, should be educated about foot precautions, including daily inspection of the toes and feet, wearing well-fitting socks and shoes, and keeping the skin clean and moist [12]. A 2014 Cochrane review of patient education for preventing diabetic foot ulceration found that foot care knowledge and self-reported patient behavior are positively influenced by education in the short term, yet robust evidence is lacking to show that education alone can achieve clinically relevant reductions in ulcer and amputation incidence [13]. While patient education alone may not be enough to prevent ulcers, studies have shown that multidisciplinary foot care involving physicians, educators, podiatrists, surgeons, home care nurses, nutritionists, and social services can lead to improved outcomes [14]. In Sweden, patients with diabetes managed with a multidisciplinary approach had a 50% reduction (7.9/1000 to 4.1/1000) in amputations over 11 years [15].
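
To gather the screening elements above in one place, here is a minimal sketch mapping annual foot exam findings to the follow-up steps described in this section. The function, its inputs, and the triage logic are simplifications invented for illustration; they are not drawn from any cited guideline.

```python
def annual_foot_exam_followup(monofilament_intact, deformity_or_ulcer,
                              abnormal_pulses_or_abi, smoker):
    """Map simplified diabetic foot exam findings to follow-up actions
    (illustrative sketch only)."""
    actions = ["counsel on daily foot inspection, well-fitting socks/shoes, skin care"]
    if deformity_or_ulcer:
        # Ulcerations or gross deformities warrant podiatry referral.
        actions.append("refer to podiatry")
    if not monofilament_intact:
        # Loss of the 10-g monofilament sensation suggests large-fiber
        # neuropathy and a ~7-fold higher ulceration risk [11].
        actions.append("reinforce foot precautions; screen more often than annually")
    if abnormal_pulses_or_abi:
        actions.append("consider vascular surgery referral")
        if smoker:
            actions.append("counsel smoking cessation")
    return actions

# Example: neuropathic smoker with an abnormal ankle-brachial index
print(annual_foot_exam_followup(False, False, True, True))
```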

As the number of people living with diabetes rises, with an estimated 300 million people expected to have diabetes by 2025 [14], the complications associated with diabetes are also likely to increase. Despite this rise, major amputation rates among diabetics are falling, as shown in a 2006 study in Helsinki [3]. This is likely due to preventive measures, including improved glycemic control, the establishment of multidisciplinary diabetic foot teams, and earlier revascularization procedures [3]. Ultimately, prevention is the best approach to diabetic foot ulcers. It is our goal as physicians to ensure that all our diabetic patients can live long lives with all 10 toes intact. That goal is ambitious but possible.

Dr. Shilpa Mukunda is a 1st year Internal Medicine resident at Boston University

Peer reviewed by Robert Lind, MD, Internal Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Singh N, Armstrong DG, Lipsky BA. Preventing foot ulcers in patients with diabetes. JAMA. 2005;293(2):217-228.  http://www.ncbi.nlm.nih.gov/pubmed/15644549

2. Yazdanpanah L, Nasiri M, Adarvishi S. Literature review on the management of diabetic foot ulcer. World J Diabetes. 2015;6(1):37-53.

3. Eskelinen E, Eskelinen A, Albäck A, Lepäntalo M. Major amputation incidence decreases both in non-diabetic and in diabetic patients in Helsinki. Scand J Surg. 2006;95(3):185-189.

4. Ramsey SD, Newton K, Blough D, et al. Incidence, outcomes, and cost of foot ulcers in patients with diabetes. Diabetes Care. 1999;22(3):382-387.  http://www.ncbi.nlm.nih.gov/pubmed/10097914

5. Consensus Development Conference on Diabetic Foot Wound Care: 7–8 April 1999, Boston, Massachusetts. American Diabetes Association. Diabetes Care. 1999;22(8):1354-1360.  http://www.ncbi.nlm.nih.gov/pubmed/10480782

6. Powers AC. Diabetes mellitus. In: Longo DL, Fauci AS, Kasper DL, Hauser SL, Jameson JL, Loscalzo J, eds. Harrison’s Principles of Internal Medicine. 18th ed. New York: McGraw-Hill; 2012. http://www.accessmedicine.com/content.aspx?aID=9141196. Accessed November 19, 2012.

7. Sumpio B. Foot ulcers. N Engl J Med. 2000;343(11):787-793.

8. Adler AI, Boyko EJ, Ahroni JH, Smith DG. Lower-extremity amputation in diabetes. The independent effects of peripheral vascular disease, sensory neuropathy, and foot ulcers. Diabetes Care. 1999;22(7):1029-1035.

9. Lavery LA, Armstrong DG, Vela SA, Quebedeaux TL, Fleischli JG. Practical criteria for screening patients at high risk for diabetic foot ulceration. Arch Intern Med. 1998;158(2):157-162.

10. Boyko EJ, Ahroni JH, Cohen V, Nelson KM, Heagerty PJ. Prediction of diabetic foot ulcer occurrence using commonly available clinical information: the Seattle Diabetic Foot Study. Diabetes Care. 2006;29(6):1202-1207. http://www.ncbi.nlm.nih.gov/pubmed/16731996

11. McNeely MJ, Boyko EJ, Ahroni JH, et al. The independent contributions of diabetic neuropathy and vasculopathy in foot ulceration: how great are the risks? Diabetes Care. 1995;18(2):216-219.

12. Calhoun JH, Overgaard KA, Stevens CM, Dowling JP, Mader JT. Diabetic foot ulcers and infections: current concepts. Adv Skin Wound Care. 2002;15(1):31-42.

13. Dorresteijn JA, Kriegsman DM, Assendelft WJ, Valk GD. Patient education for preventing diabetic foot ulceration. Cochrane Database Syst Rev. 2014;12:CD001488. http://www.ncbi.nlm.nih.gov/pubmed/20464718

14. Bartus CL, Margolis DJ. Reducing the incidence of foot ulceration and amputation in diabetes. Curr Diab Rep. 2004;4(6):413-418. http://www.ncbi.nlm.nih.gov/pubmed/15539004

15. Larsson J, Apelqvist J, Agardh CD, Stenström A. Decreasing incidence of major amputation in diabetic patients: a consequence of a multidisciplinary foot care team approach? Diabet Med. 1995;12(9):770–776. http://www.ncbi.nlm.nih.gov/pubmed/8542736

Ethical Considerations in the Use of Cordons Sanitaires

February 19, 2015

By Rachel Kaplan Hoffmann, M.D., M.S.Ed., and Keith Hoffmann, J.D.

Peer Reviewed

On December 6, 2013, a two-year-old boy living in southeastern Guinea became the first victim of the latest epidemic of Ebola Virus Disease (EVD). Since the death of Patient Zero, EVD has spread throughout West Africa, becoming the largest outbreak of the deadly virus ever [1]. In its most recent report (2/18/15), the World Health Organization (WHO) reported over 20,000 cases of EVD, with over 9,000 reported deaths [2], but the actual number of infections may be higher, due to difficulties diagnosing and reporting the disease in rural areas [3]. In the destructive wake of the outbreak, impoverished African nations sought to prevent the spread of the disease with the limited resources at their disposal. The rebirth of the cordon sanitaire, a primitive form of contagion containment, to prevent the recent EVD outbreak’s spread raises a host of ethical issues.

The cordon sanitaire is a French term dating from the 17th century that means “sanitary cordon.” It denotes a disease outbreak-control method in which a quarantine zone is determined and those inside are not allowed to leave [4]. Traditionally, the line around a cordon sanitaire was quite physical: a fence or wall was built, a regiment of armed troops patrolled, and, inside, terrified inhabitants were left to battle the affliction without outside help. First developed during the Black Death of the Middle Ages, cordons sanitaires have since been used to quarantine inhabitants of Georgia, Texas, and Florida during the 1880s to combat the spread of yellow fever; Honolulu’s Chinatown during a bubonic plague outbreak in 1900 [5]; and Poland during a typhus outbreak after World War I [6]; historical examples also include infected communities voluntarily cordoning themselves [7]. These cordons achieved varying levels of medical success; at their worst, cordons sanitaires, including most American examples of the practice, have been exercises in callousness and racism that unnecessarily victimized minority communities. However, an EVD outbreak in 1995 in Kikwit, Zaire was reportedly contained by “heartless but effective” cordons sanitaires [8].

The initiation of cordons sanitaires in the current EVD crisis was announced on August 1st, after an emergency public health summit of the three nations most affected by the EVD outbreak was held in Conakry, Guinea [9]. Government and public health officials of the Mano River Union bloc, representing Guinea, Sierra Leone, and Liberia, determined that the highly affected cross-border region between the three nations—a triangular area where 70% of the then-reported EVD cases were located—would be isolated by police and military. Soon after, The New York Times reported that large sections of Sierra Leone, and areas north of Liberia’s capital, Monrovia, had military roadblocks where soldiers checked credentials and took temperatures of those going in or out [10]. After the Mano River Union bloc of nations initiated these cordons sanitaires, the quarantines did not remain peaceful: four people were injured after police opened fire on the West Point slum in Monrovia, where between 50,000 and 100,000 people were cordoned [11]. Those quarantined stated that the barbed wire of the cordon sanitaire prevented them from accessing food and leaving the area for work—many people were starving. The slum’s inhabitants also attacked a quarantine center, looted mattresses and other goods, and helped suspected EVD patients to escape—evidence of the fear, desperation, and misunderstanding of the disease among the cordoned inhabitants.

In addition to these regional, city, and neighborhood-wide cordons sanitaires, the Financial Times reported that several less-affected African nations, including Ivory Coast, Chad, South Africa, Kenya, and Senegal, initiated travel, trade, and border restrictions with the three highly affected western countries [12]. Senegal completely closed its border with Guinea. Regional and international airlines halted flights into the hardest-hit countries. In our increasingly interconnected world, millions of uninfected citizens were being isolated out of fear of a few thousand infected.

The question has remained whether these cordons sanitaires are ethical. A WHO spokesperson stated that “[t]his is an extraordinary event in so many ways and with this extraordinary event extraordinary measures are probably going to be necessary. [The WHO] would not be against a cordon sanitaire, but it must respect human rights.” According to the New York Times, other health organizations, including the CDC, stated that such tactics could be effective, but must be used humanely in order to be ethical. The provision of food, water, and medical care to those inside a cordon, and a good line of communication with leaders inside the cordon are cited as examples of humane use of a cordon sanitaire [13]. However, the WHO stopped short of endorsing cordons that cover large geographic areas. In a statement, the WHO indicated that no general ban on international travel or trade should be implemented, due to detrimental effects on international relief efforts and the trade of necessities like food and clothing. Rather, the WHO endorsed exit screening (consisting of, at a minimum, a questionnaire, temperature assessment, and evaluation of unexplained fevers) of persons at international airports, seaports, and major land crossings to identify people with potential EVD infection. But even these limited international cordons, along with localized cordons, raise similar ethical issues.

When evaluating cordons sanitaires according to the useful framework of the four fundamental ethical principles of autonomy, beneficence, non-maleficence, and justice, it becomes questionable whether cordons sanitaires are an ethical medical practice [14]. Cordons limit many competent adults’ autonomy over their care by limiting their ability to leave a cordoned area. Furthermore, the beneficence of quarantines for some—those outside of cordons—must be balanced against non-maleficence, the responsibility not to harm those who are cut off from the outside world by a cordon; these quarantined persons are not only exposed to disease, but may find it difficult to obtain food, water, sanitary living conditions, and work for as long as an infection danger remains within a cordon. With regard to the principle of justice, a cordon sanitaire may devalue the lives of those within a cordon, unequally distributing the burdens a disease places upon an entire society.

The ethical case against the cordon sanitaire stands in opposition to the pragmatic utilitarianism offered by philosophers like Jeremy Bentham, who advocated taking actions that achieve the greatest good for the greatest number of people. During a disease outbreak, and the breakdown of society such outbreaks can cause, ethical theories may give way to the practical goal of containing the immediate spread of disease. According to Laurie Garrett, a public health expert and Fellow of the Council on Foreign Relations who was present during the 1995 EVD outbreak in Kikwit, Zaire, cordons can be effective in controlling outbreaks. She warns that current efforts may be too little, too late, but urges that controlling the current outbreak will require the imposition of strict cordons sanitaires [15].

In the current outbreak, these cordons have had variable effectiveness. Clinically, very small-scale cordons—quarantining individual patients and those with whom EVD patients have come into direct contact—have demonstrated effectiveness [16], while medium- and large-scale cordons around neighborhoods, regions, and nations have proven ethically troubling, largely ineffective, and difficult to enforce, as even wealthy nations like the United States have found border control to be porous. Large-scale cordons also present the possibility of devastating effects on national economies and public health.

Thus, public health officials should focus on the containment of EVD by zeroing in on those already infected and containing its spread through small-scale cordons sanitaires—like those that have been successful in Nigeria and Senegal—conducted in the most ethical manner possible. Fortunately this type of effort has demonstrated effectiveness; in their most recent report, the WHO states that on a national level, Guinea, Liberia, and Sierra Leone have achieved the capacity to isolate and treat all reported EVD cases and to bury all EVD-related deaths safely and with dignity. They still note that local variations exist and the average capacity is insufficient in some areas to isolate the disease [17]. These smaller-scale cordons will be unable to prevent people like the first EVD patient in the United States from travelling from West Africa to the United States while incubating the virus. But a strict focus on small-scale cordons will prevent the sorts of blunders that occurred in Dallas, where an emergency department initially failed to diagnose the patient, and those with whom he had direct contact were not effectively quarantined even after public health officials learned of the patient’s diagnosis [18]. Even while strictly enforcing small-scale cordons, public health officials should be vigilant to prevent unnecessarily harsh or capricious cordons as inappropriate quarantines raise ethical issues, may create public health panic, and waste resources [19].

After treating a healthcare worker affected by EVD at Bellevue Hospital, NYU health care practitioners have gained firsthand insight into the vast human and medical resources that must be utilized to prevent the spread of just one case of EVD [20]. This public health challenge allowed New York’s health care establishment to gain greater understanding of the ethical, legal, and financial struggles faced by nations attempting to contain a very contagious disease like EVD; we must note that this struggle is amplified in countries where medical care is less advanced and basic resources much more limited. We should continue to prepare our medical response so that future quarantines are done in the most ethical and effective manner possible, because this will not be the last time an infectious disease outbreak must be contained through the brutally pragmatic use of limited cordons sanitaires.

The Inherent Quandary of Medical Ethics. Commentary by Antonella Surbone, MD, PhD, FACP, Ethics Editor, Clinical Correlations.

The article entitled “Ethical Considerations in the Use of Cordons Sanitaires” by Rachel Kaplan Hoffmann and Keith Hoffmann presents an insightful analysis and discussion of the ethical implications of the practice of cordons sanitaires, as now applied to limit the potential widespread diffusion of Ebola virus infections. In their concluding statement, the Authors say that “We should continue to prepare our medical response so that future quarantines are done in the most ethical and effective manner possible, because this will not be the last time an infectious disease outbreak must be contained through the brutally pragmatic use of limited cordons sanitaires.”

In clinical medicine, whether applied to individuals or populations, we often find a surreptitious and troublesome admixture of brutality and tenderness, which are rarely discussed as part of professionalism or medical ethics. I shall offer some brief reflections.

Clinical practice, based on both distance and intimacy at physical, psychological and spiritual levels, may entail radical acts, such as invasive diagnostic procedures or complex interventions, that occur under the overarching principles of attention and solicitude for the sick person entrusted to our care, or the highest good for populations. These contrasting elements generate tensions that can result in expressions of brutality or of tenderness, at times simultaneously. Tenderness, a quality of being moved to compassion and of being warmheartedly responsive to others, is always expressed in delicate manners. In response to patients’ suffering, tenderness reveals grief over patients’ anguish through gentle gestures. Brutality is rather reflected in crude actions or behaviors that may be incisive and accurate but always are harsh, physically painful or invasive, and devoid of human mercy or compassion.

The technological interventions of modern medicine often come with major physical intrusion, which can be, or can be perceived by patients and their families as, a form of brutality that may be exacerbated or mitigated by the conduct of doctors, nurses, and other members of the clinical team.

A different brutality is the one described by the authors: the severe restrictions of liberty associated with highly contagious infectious diseases. This brutality, too, can be mitigated by the compassion and tenderness of those courageous health care workers who provide medical care and assistance while sharing, to some extent, the risk of being infected.

The question to ask ourselves here is: in the case of the ‘brutally pragmatic use of limited cordons sanitaires,’ are we facing an inherent quandary of medical ethics, or are we giving in to political considerations that benefit wealthier countries and protect affluent populations over those who already live in dire poverty? In the latter case, we would not be facing an inherent quandary of medical ethics, but yet another socio-political injustice. We, as ethical physicians and nurses, are committed to caring tenderly for all our patients.

Dr. Rachel Kaplan Hoffmann is a 2nd year resident at NYU Langone Medical Center

Peer reviewed by Antonella Surbone, MD, PhD, FACP, Ethics Editor, Clinical Correlations

Image courtesy of Wikimedia Commons

References

1. Grady D, Fink S. Tracing Ebola’s Breakout to an African 2-Year-Old. New York Times. Aug 9, 2014. http://www.nytimes.com/2014/08/10/world/africa/tracing-ebolas-breakout-to-an-african-2-year-old.html?_r=0

2. Ebola Response Situation Report. WHO. December 10, 2014. http://www.who.int/csr/disease/ebola/situation-reports/en/

3. Wallis W. Cordon Sanitaire tightens around West African states to beat Ebola. Financial Times. http://www.ft.com/cms/s/0/d7419538-2a0e-11e4-8139-00144feabdc0.html#axzz3BMF6J72g

4. McNeil DG. Using a Tactic Unseen in a Century, Countries Cordon Off Ebola-Racked Areas. New York Times. August 12, 2014. http://www.nytimes.com/2014/08/13/science/using-a-tactic-unseen-in-a-century-countries-cordon-off-ebola-racked-areas.html

5. Onion R. The Disastrous Cordon Sanitaire Used on Honolulu’s Chinatown in 1900. August 15, 2014. Slate Magazine Online. http://www.slate.com/blogs/the_vault/2014/08/15/history_of_the_cordon_sanitaire_honolulu_hawaii_bubonic_plague_in_1899.html

6. Byrne JP. Flu Pandemics Past and Present. History and the Headlines. © 2011 ABC-CLIO. http://www.historyandtheheadlines.abc-clio.com/ContentPages/ContentPage.aspx?entryId=1319621&currentSection=1319465

7. Byrne JP. Flu Pandemics Past and Present. History and the Headlines. © 2011 ABC-CLIO. http://www.historyandtheheadlines.abc-clio.com/ContentPages/ContentPage.aspx?entryId=1319621&currentSection=1319465

8. Garrett L. Heartless but Effective: I’ve Seen ‘Cordon Sanitaire’ Work Against Ebola. New Republic. August 14, 2014. http://www.newrepublic.com/article/119085/ebola-cordon-sanitaire-when-it-worked-congo-1995

9. West Africa seeks to seal off Ebola-hit regions. Agence-France-Presse. August 1, 2014. http://news.yahoo.com/african-states-launch-100mn-ebola-response-plan-170858040.html

10. McNeil DG. Using a Tactic Unseen in a Century, Countries Cordon Off Ebola-Racked Areas. New York Times. August 12, 2014. http://www.nytimes.com/2014/08/13/science/using-a-tactic-unseen-in-a-century-countries-cordon-off-ebola-racked-areas.html

11. Wallis W. Cordon Sanitaire tightens around West African states to beat Ebola. Financial Times. http://www.ft.com/cms/s/0/d7419538-2a0e-11e4-8139-00144feabdc0.html#axzz3BMF6J72g

12. WHO Statement on the Meeting of the International Health Regulations Emergency Committee Regarding the 2014 Ebola Outbreak in West Africa. August 8, 2014. http://www.who.int/mediacentre/news/statements/2014/ebola-20140808/en/

13. Ebola crisis: Liberia police fire at Monrovia protests. BBC News. August 21, 2014. http://www.bbc.com/news/world-africa-28879471

14. Gracyk T. The Four Fundamental Ethical Principles. Accessed August 15, 2014. http://web.mnstate.edu/gracyk/courses/phil%20115/Four_Basic_principles.htm

15. Garrett L. Heartless but Effective: I’ve Seen ‘Cordon Sanitaire’ Work Against Ebola. New Republic. August 14, 2014. http://www.newrepublic.com/article/119085/ebola-cordon-sanitaire-when-it-worked-congo-1995

16. Steenhuysen, J. Ebola outbreaks in Nigeria, Senegal, appear contained: CDC reports. Reuters. October 1, 2014. http://in.reuters.com/article/2014/09/30/us-health-ebola-containment-idINKCN0HP29U20140930

17. Ebola Response Situation Report. WHO. December 10, 2014. http://www.who.int/csr/disease/ebola/situation-reports/en/

18. Shoichet CE, Fantz A, Yan H. Hospital ‘dropped the ball’ with Ebola patient’s travel history, NIH official says. CNN.com. October 1, 2014. http://www.cnn.com/2014/10/01/health/ebola-us/

19. Fantz A. New Jersey releases nurse quarantined in Ebola scare. CNN. October 27, 2014. http://www.cnn.com/2014/10/27/health/us-ebola/

20. Evans H. NYC’s Famed Bellevue Hospital Put to the Test with Ebola Patient. New York Daily News. November 2, 2014. http://www.nydailynews.com/new-york/bellevue-hospital-put-test-ebola-patient-article-1.1995943

Shifting Paradigms in Cancer: Vaccines

January 22, 2015

Joshua Horton

Peer Reviewed

We are not winning the war against cancer, if war is even an appropriate metaphor. When Richard Nixon signed the National Cancer Act into law in 1971, many predicted that cancer would be a thing of the past within 5 years. It was likened to polio, smallpox, and other long-since-forgotten scourges of mankind; with appropriate funding and research, surely cancer, too, would vanish. With that act in 1971, the National Cancer Institute received a budget of $200 million, a figure that has grown to nearly $4.8 billion today. Time and time again, glimmers of hope have emerged. Angiostatins made the front page of the New York Times in the late 1990s, but development was quickly halted by Big Pharma when it became clear that their effect was short-lived and their cost exorbitant. A similarly discouraging story has played out more recently with many of the targeted drugs, epitomized by the emergence of resistance mutations in melanomas treated with BRAF inhibitors [1].

The story of cancer is not all grim. With the widespread adoption of early detection and screening programs, 5-year survival has increased by as much as 33% since 1971. The most transformative advance, gaining traction only in the last decade or two, is the redefinition of cancer from a disease of tissue tumors to one of molecular cell defects and deregulation (often inflammatory) of the surrounding “microenvironment.” This shifting paradigm has introduced a role for host immunity in cancer, along with drugs that target tumor and non-tumor oncoantigens to perturb the microenvironment. Perhaps the most optimistic development to arise from this new cancer model is the idea of a cancer vaccine [2].

There are two types of vaccines for cancer: those that prevent it and those aimed at treating it once it occurs. Examples of the former include recombinant human papillomavirus vaccine [types 6, 11, 16, 18] (Gardasil; Merck and Co., Whitehouse Station, NJ) for the prevention of cervical cancer, and the hepatitis B vaccine to prevent hepatocellular carcinoma (HCC) [3]. These vaccines are similar to traditional antiviral vaccines in that they are raised against viral target antigens and prevent propagation of infection by bolstering the host immune clearance of the virus. However, they are unique in that, by curtailing the infection, they also prevent the virus-induced oncogenic transformation that often follows. Hepatitis C vaccines are currently in the pipeline [4] and may have similar effectiveness in reducing HCC incidence. Perhaps further in the future we may see a vaccine against Epstein-Barr virus [5] with reduction in its associated malignancies (nasopharyngeal carcinoma and Burkitt’s lymphoma). The efficacy of currently available preventive, viral-targeted vaccines is undeniable, exemplified by the recommendation for routine vaccination of teenagers and young adults with the Gardasil vaccine.

At first, it may not be obvious how an entity as enigmatic as cancer can arise from commonplace infectious causes, when it is more often thought of as a non-communicable disease of genetics or aging. Inflammation, however, provides a mechanistic link between these cancers and those arising from somatic oncogene mutations. Chronic, low-grade inflammation and its associated cytokines and C-reactive protein, whether from infection, autoimmunity, or environmental causes, play a proven role in carcinogenesis [6]. An understanding of these pathways and the steps where we can intervene is essential to the development of targeted cancer therapeutics. Still, despite the extensive pathogenic overlap between virus-induced and de novo neoplasms, treatment vaccines (compared to preventive vaccines) have been mostly disappointing. In brief, tumor growth kinetics [7], tumor-derived immunosuppressive mechanisms [8], and genomic instability leading to resistance [9] all play significant roles.

If we are to design better cancer treatment vaccines, we must understand why the host immune system is ineffective at clearing cancer in the first place. Ideally, the first cell to obtain a transforming mutation would immediately be detected by natural killer cells via recognition of surface ligands. Destruction and antigen uptake and processing by macrophages would follow, and eventually long-lived, tumor-specific B- and T-cells producing cancer-directed antibodies would be generated. Sound familiar? Replace the word “tumor” with “virus” and we are transported back to immunology courses and discussions of the cell-mediated response to infection. With such a robust defense mechanism in place, what provides the escape mechanism that allows cancer to be so much more devastating than, say, chickenpox? In a word: the microenvironment. Understanding this concept is crucial to developing robust and efficacious vaccines. The microenvironment provides a niche for cancer cells by harboring them physically, supporting them with nutrition and blood flow [10], and providing an environment that promotes development of antigen-negative (read: resistant) tumor cells through selective pressure. These antigen-negative tumor cells are unaffected by single-epitope monoclonal antibodies [9], be they host- or laboratory-derived. Thus, administration of mono-antigenic vaccines in the face of a competent microenvironment seems doomed to fail.

However, were it possible to administer the vaccine earlier in the course of the cancer, in the context of an immature microenvironment, there is potential that the antibody response could evolve alongside the cancer and its niche. Cancer biologists are hopeful that cross-presentation between antigen-presenting cells would generate propagation of the immune response that both persists after elimination of the original monoclonal antibody and generates a polyclonal response to multiple antigenic epitopes [11]. Theoretically, antigen-negative tumor escape would be attenuated and the effector T-cell response might generate immunological memory [12]. Ultimately, this proposed solution to the failure of treatment vaccines still relies on early detection, emphasizing further the role of screening for targetable cancers.

Even in the face of these obstacles, meaningful clinical responses have been observed with therapeutic cancer vaccines in clinical trials. Jean-Claude Bystryn, Ruth Oratz, and Richard Shapiro performed some of the pioneering studies on a polyvalent melanoma vaccine here at the New York University School of Medicine in the late 1990s and early 2000s. In the proof-of-concept study, significant populations of CD8+ T-cells were derived against one or both of two melanoma-associated antigens in 60% of a small cohort of patients. More importantly, those T-cell responders remained recurrence-free for at least 12 months, compared to a 3-5 month range of recurrence in nonresponders [13]. In the subsequent randomized, controlled trial, patients with metastatic disease and particularly poor prognosis were randomized to melanoma vaccine or placebo groups with mean follow-up of 2.5 years. Kaplan-Meier analysis demonstrated a trend towards lengthened median time to disease progression, and Cox proportional hazards analysis found this trend to be statistically significant [14]. There was also a trend towards increased overall survival, though the study was not powered to detect this difference with statistical significance. Two additional randomized, controlled trials [15,16] would follow and find this difference in overall survival in vaccine-treated patients to be significant. From these studies and those that came later, it seems clear that vaccines are safe, have improved toxicity profiles compared to chemotherapeutic options, and have the potential to demonstrate clinical efficacy. More recently, studies of anti-HER-2/neu peptide vaccines for breast cancer, reviewed by Mandell in 2014, have demonstrated similarly promising results [17-21].
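For readers less familiar with the survival statistics cited above, the sketch below shows how a time-to-progression comparison of this kind is typically run in Python with the lifelines library: Kaplan-Meier estimates for each arm plus a Cox proportional hazards model. The data frame, column names, and numbers are hypothetical placeholders, not the trial’s actual dataset.

```python
# A minimal sketch of the survival analysis described above, using
# hypothetical data. Requires: pip install lifelines pandas
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical follow-up data: months to progression (or censoring),
# whether progression was observed, and treatment arm (1 = vaccine).
df = pd.DataFrame({
    "months":     [3, 6, 9, 12, 5, 8, 14, 20, 16, 30],
    "progressed": [1, 1, 1,  0, 1, 1,  1,  0,  0,  0],
    "vaccine":    [0, 0, 0,  0, 1, 1,  1,  1,  1,  1],
})

# Kaplan-Meier estimate of median time to progression in each arm.
for arm, group in df.groupby("vaccine"):
    km = KaplanMeierFitter()
    km.fit(group["months"], event_observed=group["progressed"])
    print(f"arm={arm}: median time to progression = {km.median_survival_time_}")

# Cox proportional hazards model: the coefficient on the `vaccine` column
# estimates the log hazard ratio for progression on the vaccine arm.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="progressed")
cph.print_summary()
```

A hazard ratio below 1 with a confidence interval excluding 1 corresponds to the kind of statistically significant delay in progression the trial reported.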

The drawbacks of cancer vaccine treatment are few, and the serious side effects are thankfully rare and mostly theoretical. Commonly, induration, pain, swelling, lymphadenopathy, chills, and fever are observed. More infrequently, transfusion-like reactions can occur; these are usually self-limited but may progress to life-threatening systemic toxicity in atypical cases. Theoretical side effects include untoward enhancement of tumor growth and the development of autoimmunity to normal host cells and tissues. The former is due to induction of an activating immune response and has been observed in animal models and in only a handful of human patients. However, it is unclear if this is due to the vaccine or to particularly aggressive behavior of the malignancy. Autoimmunity has occurred, presenting as vitiligo in melanoma patients treated with anti-melanocyte vaccines, with no systemic effects or autoimmunity to vital organs detected.

From napalm-esque chemotherapeutics to targeted biologics more closely resembling homing missiles and now vaccines acting as double agents to eradicate cancer, maybe war is a proper analogy. If so, we find ourselves at a draw at best, with cancer causing 1 in 8 deaths worldwide, over half a million deaths in the United States each year, and nearly $1 trillion in yearly economic losses from premature death and disability. Many strides have been made to push the understanding of cancer to the limit of our technological and intellectual abilities. Whether or not vaccines will be integral to the future of cancer treatment remains to be seen; however, their development is an illustration of the conceptual shift currently occurring within the field of cancer therapeutics.

Joshua Horton is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Ruth Oratz, MD, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Smalley KS, Lioni M, Dalla Palma M, et al. Increased cyclin D1 expression can mediate BRAF inhibitor resistance in BRAF V600E-mutated melanomas. Mol Cancer Ther. 2008;7:2876-2883.

2. Lollini PL, Cavallo F, Nanni P, Forni G. Vaccines for tumour prevention. Nat Rev Cancer. 2006;6(3):204-216.  http://www.ncbi.nlm.nih.gov/pubmed/16498443

3. Frazer IH, Lowy DR, Schiller JT. Prevention of cancer through immunization: Prospects and challenges for the 21st century. Eur J Immunol. 2007;37 Suppl 1:S148-S155.

4. Verma R, Khanna P, Chawla S. Hepatitis C vaccine: Need of the hour. Hum Vaccin Immunother. 2014;10(8). http://www.ncbi.nlm.nih.gov/pubmed/24809484

5. Taylor GS, Jia H, Harrington K, et al. A Recombinant Modified Vaccinia Ankara Vaccine Encoding Epstein-Barr Virus (EBV) Target Antigens: A Phase I Trial in UK Patients with EBV-Positive Cancer. Clin Cancer Res. 2014;20(19):5009-5022.

6. Okada F. Inflammation-related carcinogenesis: current findings in epidemiological trends, causes and mechanisms. Yonago Acta Med. 2014;57(2):65-72. http://www.ncbi.nlm.nih.gov/pubmed/25324587

7. Hanson HL, Donermeyer DL, Ikeda H, et al. Eradication of established tumors by CD8+ T cell adoptive immunotherapy. Immunity. 2000;13(2):265-276.

8. Ochoa A, ed. Mechanisms of tumor escape from the immune response. London, England: Taylor & Francis; 2003.

9. Vogelstein B, Kinzler KW. Cancer genes and the pathways they control. Nat Med. 2004;10(8):789-799. http://www.ncbi.nlm.nih.gov/pubmed/15286780

10. Singh S, Ross SR, Acena M, Rowley DA, Schreiber H. Stroma is critical for preventing or permitting immunological destruction of antigenic cancer cells. J Exp Med. 1992;175(1):139-146.

11. Topalian SL, Weiner GJ, Pardoll DM. Cancer immunotherapy comes of age. J Clin Oncol. 2011;29(36):4828-4836.  http://www.ncbi.nlm.nih.gov/pubmed/22042955

12. Antia R, Ganusov VV, Ahmed R. The role of models in understanding CD8+ T-cell memory. Nat Rev Immunol. 2005;5(2):101-111.

13. Reynolds SR, Oratz R, Shapiro RL, et al. Stimulation of CD8+ T cell responses to MAGE-3 and Melan A/MART-1 by immunization to a polyvalent melanoma vaccine. Int J Cancer. 1997;72(6):972-976.

14. Bystryn JC, Zeleniuch-Jacquotte A, Oratz R, Shapiro RL, Harris MN, Roses DF. Double-blind trial of a polyvalent, shed-antigen, melanoma vaccine. Clin Cancer Res. 2001;7(7):1882-1887.

15. Hsueh EC, Essner R, Foshag LJ, et al. Prolonged survival after complete resection of disseminated melanoma and active immunotherapy with a therapeutic cancer vaccine. J Clin Oncol. 2002;20(23):4549-4554.

16. Bystryn J-C, Laky D, Shapiro RL, et al. Prolonged survival of vaccine-treated patients with resected stage IV melanoma (abstract 2911). Proc Am Soc Clin Oncol. 2003;22:724.

17. Carmichael MG, Benavides LC, Holmes JP, et al. Results of the first phase 1 clinical trial of the HER-2/neu peptide (GP2) vaccine in disease-free breast cancer patients: United States Military Cancer Institute Clinical Trials Group Study I-04. Cancer. 2010;116(2):292-301.

18. Miles D, Roché H, Martin M, et al. Phase III multicenter clinical trial of the sialyl-TN (STn)-keyhole limpet hemocyanin (KLH) vaccine for metastatic breast cancer. Oncologist. 2011;16(8):1092-1100.

19. Positive Phase II interim data on AE37 cancer vaccine released. Hum Vaccin Immunother. 2012;8(2):152.

20. Mittendorf EA, Clifton GT, Holmes JP, et al. Clinical trial results of the HER-2/neu (E75) vaccine to prevent breast cancer recurrence in high-risk patients: from US Military Cancer Institute Clinical Trials Group Study I-01 and I-02. Cancer. 2012;118(10):2594-2602.

21. Mandell BF. To dream the maybe possible dream: a breast cancer vaccine. Cleve Clin J Med. 2014;81(10):584-585.


Mechanisms of Angiotensin Blockade in the Management of Diabetic Nephropathy

December 11, 2014

By Miguel A. Saldivar, MD

Peer Reviewed 

When a patient with diabetes comes into a clinic or hospital, it is not uncommon to hear the question, “Is he/she on an angiotensin-converting-enzyme (ACE) inhibitor (ACEI) or an angiotensin-receptor blocker (ARB)?” Most clinicians know the mantra: ACEIs are renoprotective in diabetes. Most are aware that clinical studies dating back to the 1990s have indeed shown the protective effects of ACEIs, such as captopril, against renal function deterioration in diabetes [1]. Most are even aware that multiple systematic reviews and meta-analyses have demonstrated not only the renoprotective effects of ACEIs, but also their superiority compared to other antihypertensives [2, 3, 4]. But why is angiotensin blockade renoprotective in the diabetic population? This question is made more pertinent by a systematic review that concluded that there is insufficient evidence that ACEIs provide the same protection for patients with non-diabetic chronic kidney disease [5]. So what is it about angiotensin and the renin-angiotensin system (RAS) that makes ACEIs (and ARBs) effective at protecting the kidneys in the diabetic population?

In order to answer these questions, it is useful to begin by reviewing the pathophysiology of renal dysfunction in diabetes to elucidate the benefits of ACE/angiotensin/RAS blockade. The end result of diabetic nephropathy [6] consists mainly of:

* Glomerular hypertrophy, i.e., excessive accumulation of extracellular matrix (ECM), leading to thickening of the glomerular and tubular basement membranes

* Enlargement of the mesangial matrix leading to microvascular changes in the glomerular capillaries

* Eventual progression to glomerulosclerosis and tubulo-interstitial fibrosis

Like most diabetes-related complications, the pathways leading to diabetic nephropathy are multifactorial, interrelated, and far from simple. But in general, there are two interwoven, major factors at play: (1) direct damage from hyperglycemic states, e.g., increased generation of advanced glycosylation end products (AGEs) and reactive oxygen species (ROS), and (2) hemodynamic modifications, e.g., glomerular hyperfiltration, thrombotic microangiopathy, and shear stress [6, 7].

Regarding the first, high serum glucose levels lead to the excessive formation of AGEs. Upon interaction with their receptor (RAGE), multiple pathways are initiated resulting in increased activity of growth factors. This, in turn, leads to abnormal expression of ECM proteins (e.g., multiple types of collagen, fibronectin, laminin, and many others), which causes anomalous polymerization and expansion of the ECM. Of note, excessive TGF-beta1 is believed to be the primary cytokine responsible for ECM pathology because it induces the excessive production and deposition of proteins. Very importantly, AGE-related intracellular events also lead to the formation of ROS, which exacerbate the damage.

The second set of mechanisms—hemodynamic modifications—is closely related to the excessive production of AGEs. AGEs lead to perturbed interactions between the cell and matrix and changes in capillary permeability, all of which lead to vascular abnormalities. Among other cascades, AGEs lead to excessive activation of protein kinase C (PKC), which is believed to cause endothelial dysfunction and, importantly, decreased nitric oxide production. This, in turn, results in the loss of the endothelium’s vasodilatory effect. Multiple other proteins are also involved (e.g., NF-kappa B and PAI-1) and together lead to local tissue inflammatory responses and thrombotic microangiopathy, all of which are further worsened by ROS.

So where do ACE, angiotensin, and RAS come into the picture? These proteins’ actions are numerous and a full explanation of their functions is beyond the scope of this article. But what follows is a brief summary of some of their functions that are directly relevant to the current discussion.

One of ACE’s best-known functions is its role in the degradation of bradykinin. There is growing evidence that bradykinin, among other functions, has two effects: (1) it acts indirectly as a vasodilator by producing signals that increase production of nitric oxide, and (2) it enhances insulin sensitivity in skeletal muscle, most notably during conditions of insulin resistance [8]. Increased ACE activity results in decreased levels of bradykinin, leading to a decrease in nitric oxide production as well as a loss of bradykinin’s effect on insulin sensitivity.

Next, there is the obvious role of ACE in the activation of angiotensin I to angiotensin II, which allows angiotensin II to interact with its receptor and ultimately cause vasoconstriction. It turns out that hypertension (or, more specifically, increased glomerular capillary pressure) greatly contributes to the acceleration of all the complications described above. Hyperglycemia is believed to sensitize target organs to blood pressure-induced damage, “most likely by activation of [RAS] with local production of angiotensin II in the kidney” [6]. Multiple kidney cells—most importantly, the podocytes—are not only involved in the activation of the local RAS, but also themselves produce angiotensin II and express angiotensin II receptors. Interestingly, mesangial and tubular cells have been shown to increase expression of renin and angiotensinogen during hyperglycemic states. High levels of angiotensin II increase serum and renal accumulation of AGEs, thus perpetuating the process. Furthermore, an increase in capillary pressure results in the stretch and stress of glomerular cells; in vitro studies have shown that repetitive stretch/relaxation cycles enhance mesangial cell proliferation, thereby increasing synthesis of ECM while simultaneously decreasing expression of ECM catabolic enzymes.

Finally, there is evidence of a “local” RAS that directly and exclusively affects the pancreas and helps regulate islet perfusion; overactivation of this system can lead to reduced insulin secretion [9].

Thus, after reviewing some of the mechanisms involved in diabetic nephropathy, we return to the original question: what is it about angiotensin and RAS blockade that makes it effective in protecting the kidneys in the diabetic population? The pathways are elegantly intricate and strongly interwoven, but it can be argued that some of the primary effects are as follows:

* Downregulation of blood pressure and therefore a decrease in glomerular capillary pressure, with resultant reduction in glomerular hypertrophy

* Decrease in the production of angiotensin II (or blockage of its receptors in the case of ARBs) in the kidney, with resultant reduction in shear stress and a possible reduction in the serum and renal accumulation of AGEs

* In the case of ACEIs, decrease in the breakdown of bradykinin, which allows bradykinin to continue producing two major effects: increased production of nitric oxide leading to improved blood flow, and increased insulin sensitivity in skeletal muscle

* ACEIs and ARBs have been associated with a decrease in TGF-beta production [6]—though the specifics of this relationship have not yet been elucidated, TGF-beta is believed to be the primary cytokine responsible for ECM abnormalities in diabetic nephropathy, and it is therefore reasonable to expect that a reduction in its production is renoprotective [6]

* Downregulation of the pancreatic “local” RAS, leading to increased perfusion of islet cells and resultant improvement in first-phase insulin secretion [9]

As stated before, the pathways of diabetic nephropathy and ACE/angiotensin/RAS blockade-induced renal protection are intricate. This article addresses only a small portion of what is known, but it is the hope of the author that it covers enough to assist the clinician in having a better-informed discussion with the patient, ultimately leading to better clinical decision-making. It will be interesting to see what future discoveries help further our understanding of these processes.

Dr. Miguel A. Saldivar is a 2nd year resident at NYU Langone Medical Center

Peer reviewed by David Goldfarb, MD, Nephrology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Lewis EJ, Hunsicker LG, Bain RP, Rohde RD. The effect of angiotensin-converting-enzyme inhibition on diabetic nephropathy. The Collaborative Study Group. N Engl J Med. 1993;329(20):1456-62. http://www.nejm.org/doi/full/10.1056/NEJM199311113292004

2. Vejakama P, Thakkinstian A, Lertrattananon D, et al. Reno-protective effects of renin-angiotensin system blockade in type 2 diabetic patients: a systematic review and network meta-analysis. Diabetologia. 2012 Mar;55(3):566-78. http://www.ncbi.nlm.nih.gov/pubmed/?term=22189484

3. Strippoli GF, Craig M, Craig JC, et al. Antihypertensive agents for preventing diabetic kidney disease. Cochrane Database Syst Rev. 2005 Oct 19;(4):CD004136. http://www.ncbi.nlm.nih.gov/pubmed/?term=16235351

4. Wu HY, Huang JW, Lin HJ, et al. Comparative effectiveness of renin-angiotensin system blockers and other antihypertensive drugs in patients with diabetes: systematic review and bayesian network meta-analysis. BMJ. 2013 Oct 24;347:f6008. http://www.bmj.com/content/347/bmj.f6008?view=long&pmid=24157497

5. Sharma P, Blackburn RC, Parke CL, et al. Angiotensin-converting enzyme inhibitors and angiotensin receptor blockers for adults with early (stage 1 to 3) non-diabetic chronic kidney disease. Cochrane Database Syst Rev. 2011 Oct 5;(10):CD007751. http://www.ncbi.nlm.nih.gov/pubmed/?term=21975774

6. Kanwar YS, Wada J, Sun L, et al. Diabetic Nephropathy: Mechanisms of Renal Disease Progression. Exp Biol Med (Maywood). 2008 Jan;233(1):4-11. http://www.ncbi.nlm.nih.gov/pubmed/?term=18156300

7. Schena FP, Gesualdo L. Pathogenetic mechanisms of diabetic nephropathy. J Am Soc Nephrol. 2005 Mar;16 Suppl 1:S30-3. http://jasn.asnjournals.org/content/16/3_suppl_1/S30.long

8. Henriksen EJ, Jacob S. Modulation of metabolic control by angiotensin converting enzyme (ACE) inhibition. J Cell Physiol. 2003 Jul;196(1):171-9. http://onlinelibrary.wiley.com/doi/10.1002/jcp.10294/full

9. Stump CS, Hamilton MT, Sowers JR. Effect of antihypertensive agents on the development of type 2 diabetes mellitus. Mayo Clin Proc. 2006 Jun;81(6):796-806. http://www.mayoclinicproceedings.org/article/S0025-6196%2811%2961734-5/fulltext


Unraveling The Mysteries of Prinzmetal’s Angina: What Is It And How Do We Diagnose It?

October 8, 2014

By Anjali Varma Desai, MD

Peer Reviewed

Mr. Q is a 55-year-old male smoker who presents with recurrent chest pain in the mornings over the past several months. The patient reports being awakened from sleep at approximately 5:00 a.m. each morning with the same diffuse chest “pressure.” The pain typically lasts on the order of minutes, resolves, and then recurs at five-minute intervals in the same fashion for a total duration of two hours. The pain always occurs at rest and is never precipitated by exertion or emotional stress. The chest pain is generally associated with a sense of palpitations and occasional dizziness and light-headedness. An exercise stress test showed good exercise capacity without ST segment changes, even at target heart rate. Given the history, a diagnosis of coronary artery spasm was suggested. The patient was given a trial of diltiazem therapy, with marked improvement in his chest pain episodes thereafter.

In his landmark article in 1959, Dr. Myron Prinzmetal described a distinct type of “variant angina,” termed Prinzmetal’s angina. This chest pain tended to occur at rest (i.e., not associated with increased cardiac work), waxed and waned cyclically, occurred at the same time each day, and could be accompanied by arrhythmias including ventricular ectopy, ventricular tachycardia, ventricular fibrillation, and various forms of AV block [1]. The patient’s EKG during painful episodes typically showed ST segment elevations (occasionally accompanied by reciprocal ST depressions), whereas the EKG obtained after the pain had resolved showed resolution of these ST segment changes [1]. Prinzmetal postulated that this separate clinical entity was due to transient spasm (“increased tonus”) of a large arteriosclerotic artery, causing temporary transmural ischemia in the distribution supplied by that artery.

It is important to note that, although ST elevation would be diagnostic, it is frequently not observed in cases of coronary artery spasm. Rather, the diagnosis of coronary artery spasm should be suspected based on the timing of chest pain and the presence of syncope, arrhythmia or cardiac arrest.

It was subsequently demonstrated that such episodes of coronary artery spasm can occur not only in patients with underlying fixed coronary artery obstruction but also in patients whose coronary arteries are anatomically normal [2-7]. Selzer and colleagues compared the syndromes of coronary artery spasm in nine patients with anatomically normal coronary arteries and 20 patients with obstructive coronary lesions [8]. They found that the non-coronary artery disease (CAD) group was more likely to have a long history of nonexertional angina without prior infarction, a normal EKG at rest with ST elevations in the inferior leads, conduction disease, and bradyarrhythmias during episodes of arterial spasm. Conversely, the CAD group was more likely to have prior “effort angina” and prior infarction, as well as ST elevation in the anterolateral leads, ventricular ectopy, and ventricular tachyarrhythmias.

Castello et al. also compared the syndromes of coronary artery spasm in 77 patients with underlying CAD (fixed coronary stenosis greater than or equal to 50%) and 35 patients with normal or minimally diseased coronary arteries [4]. These authors found, similarly, that angina exclusively at rest tends to occur in patients with structurally normal coronary arteries and that these patients tended to have more diffuse coronary artery spasms affecting more than one artery. In contrast, patients with underlying CAD usually had more focal coronary artery spasms superimposed on their fixed stenotic lesions.

The question arises: what could be triggering coronary artery spasm in patients with structurally normal coronary arteries? As Prinzmetal suggested, “the distinctive dissimilarities [between typical angina and variant angina] are due to profound physiological and chemical rather than anatomical differences” [1]. These physiological and chemical differences are multifactorial. Kugiyama et al. demonstrated that there is a deficiency in endothelial nitric oxide (NO) bioactivity in Prinzmetal’s angina-prone arteries; this defect makes those arteries especially sensitive to the vasodilator effect of nitroglycerin and the vasoconstrictor effect of acetylcholine [9]. Miyao et al. used intravascular ultrasound to show that Prinzmetal’s angina patients had diffuse intimal thickening of their coronary arteries, despite an angiographically normal appearance. This intimal hyperplasia was thought to be mediated by deficient NO activity [10]. NO is involved in the regulation of basal vascular tone and helps to mediate flow-dependent vasodilation, as well as suppressing the production of endothelin-1 and angiotensin II, both of which are powerful vasoconstrictors [11]. Through all of these effects, deficient endothelial NO activity predisposes to coronary artery spasm. Endothelial NO is produced by endothelial NO synthase (eNOS), and many polymorphisms of the eNOS gene have been associated with coronary artery spasm [11]. It is important to note, however, that eNOS polymorphisms are found in only one-third of patients with coronary spasm; accordingly, other genes or factors are most likely involved [11].

In a review article, Kusama et al. [12] highlighted several additional pathophysiologic contributors to Prinzmetal’s angina, including enhanced vascular smooth muscle contractility mediated by the Rho/Rho-kinase pathway [13-14], elevated markers of oxidative stress [11,15], low-grade chronic inflammation [11], and cigarette smoking [11,15], in addition to the eNOS polymorphisms noted above [11,15]. Polymorphisms of various genes may explain the higher incidence of Prinzmetal’s angina in the Japanese population compared to the Caucasian population [12].

As our understanding of the pathophysiology behind Prinzmetal’s angina has evolved, new ways of diagnosing Prinzmetal’s angina have emerged. These diagnostic maneuvers typically involve provoking episodes of Prinzmetal’s angina under controlled settings (e.g. during coronary angiography) with acetylcholine, ergonovine, hyperventilation, and cold pressor stress testing. Okumura et al. showed that intracoronary injection of acetylcholine could be reliably used to induce coronary artery spasm with 99% specificity [16], a conclusion further supported by Miwa et al. [17]. Ergonovine, an ergot alkaloid and alpha-agonist that causes vasoconstriction, can similarly be used to induce episodes of coronary artery spasm accompanied by the characteristic chest pain and EKG changes that occur during spontaneous episodes of Prinzmetal’s angina [18-19]. Song et al. suggested ergonovine echocardiography as an effective screening test for coronary artery spasm, even before coronary angiography, with a sensitivity of 91% and a specificity of 88% [20]. Subsequent studies found that this was indeed an effective, safe, and well-tolerated screening test for coronary artery spasm [21-22].

It is important to note that provocation of arterial spasm with acetylcholine or ergonovine confers a multitude of risks, including arrhythmias, hypertension, hypotension, abdominal cramps, nausea, and vomiting [11]. More serious complications include ventricular fibrillation, myocardial infarction, and death [23,24]. Quantitative estimates of the risks incurred by such invasive testing are on the order of 1% [25,26]. In one study, serious major complications, such as sustained ventricular tachycardia, shock, and cardiac tamponade, occurred in four of 715 patients (0.56%) receiving provocative acetylcholine testing [25]. In another study, nine of 921 patients (1%) had minor complications (nonsustained ventricular tachycardia [n=1], fast paroxysmal atrial fibrillation [n=1], symptomatic bradycardia [n=6], and catheter-induced spasm [n=1]) after undergoing acetylcholine provocation testing [26]. While such invasive testing is generally considered a safe technique to assess coronary vasomotor dynamics, these maneuvers should only be performed by qualified physicians in carefully controlled settings, where the patient may be properly and quickly resuscitated if needed [11].
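To put the “order of 1%” figure in perspective, the sketch below computes the complication proportions reported in the two studies along with Wilson 95% confidence intervals. The counts come from the text above, but the choice of statsmodels and of the Wilson interval is mine, for illustration only.

```python
# Observed complication rates from the two provocation-testing studies
# cited above, with Wilson 95% confidence intervals.
# Requires: pip install statsmodels
from statsmodels.stats.proportion import proportion_confint

studies = {
    "Sueda et al., serious complications": (4, 715),
    "Ong et al., minor complications": (9, 921),
}

for name, (events, n) in studies.items():
    rate = events / n
    lo, hi = proportion_confint(events, n, alpha=0.05, method="wilson")
    print(f"{name}: {events}/{n} = {rate:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```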

Testing a different diagnostic strategy, Hirano et al. noted that a diagnostic algorithm of hyperventilation for six minutes, followed by cold pressor testing for two minutes under continuous EKG and echocardiographic monitoring, had 90% sensitivity, 90% specificity, a 95% positive predictive value, and an 82% negative predictive value for diagnosing vasospastic angina [27]. The combination of respiratory alkalosis from hyperventilation and reflex sympathetic coronary vasoconstriction in response to the cold pressor test [28] helped to induce coronary artery spasm and diagnose Prinzmetal’s angina. More recently, Hwang et al. suggested that measuring the change in coronary flow velocity of the distal left anterior descending artery (LAD) via transthoracic echo during the cold pressor test may provide additional diagnostic utility, with a sensitivity of 93.5% and a specificity of 82.4% for diagnosing coronary artery spasm [29].
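It is worth noting that the positive and negative predictive values quoted above depend on the prevalence of vasospastic angina in the tested cohort, not just on the test’s sensitivity and specificity. The sketch below shows the standard calculation; the prevalence value is an assumption chosen for illustration, not a figure reported by Hirano et al.

```python
# Converting sensitivity and specificity into predictive values for an
# assumed pretest prevalence (illustrative only).
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    tp = sensitivity * prevalence              # true positive fraction
    fp = (1 - specificity) * (1 - prevalence)  # false positive fraction
    fn = (1 - sensitivity) * prevalence        # false negative fraction
    tn = specificity * (1 - prevalence)        # true negative fraction
    return tp / (tp + fp), tn / (tn + fn)      # (PPV, NPV)

# With 90% sensitivity and 90% specificity, an assumed prevalence of about
# two-thirds (plausible in a cohort referred for suspected vasospasm)
# reproduces the quoted values: PPV near 95% and NPV near 82%.
ppv, npv = predictive_values(0.90, 0.90, 2 / 3)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")
```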

In an article published in JACC in 2013, the Japanese Coronary Spasm Association (JCSA) described a comprehensive clinical risk score to aid in the prognostic stratification of patients with coronary artery spasm [30]. A multicenter registry study of 1429 patients (median age 66 years, median follow-up 32 months) was performed. The primary endpoint was defined as major adverse cardiac events (MACE), including cardiac death, nonfatal myocardial infarction, hospitalization due to unstable angina pectoris, heart failure, and appropriate ICD shocks during the follow-up period, which began at the date of diagnosis of coronary artery spasm. Cardiac death, nonfatal myocardial infarction, and appropriate ICD shocks were categorized as hard MACE. The secondary endpoint was all-cause mortality. The study identified seven predictors of MACE: history of out-of-hospital cardiac arrest (4 points); smoking, angina at rest alone, organic coronary stenosis, and multivessel spasm (2 points each); and ST segment elevation during angina and beta-blocker use (1 point each). Based on total score, three risk categories were defined: low risk (score of 0 to 2; 598 patients), intermediate risk (score of 3 to 5; 639 patients), and high risk (score of 6 or more; 192 patients). The incidences of major adverse cardiac events in the low-, intermediate-, and high-risk patients were 2.5%, 7.0%, and 13.0%, respectively (p<0.001). This scoring system, known as the JCSA risk score, may provide a comprehensive risk assessment and prognostic stratification scheme for patients with coronary artery spasm.
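As a concrete illustration of how the score works, the sketch below encodes the point assignments and risk bands exactly as described above. The dictionary keys and function names are my own invention for teaching purposes; this is not a validated implementation of the published instrument.

```python
# A minimal sketch of the JCSA risk score described above. Point values
# and risk bands follow the text; variable names are illustrative only.
JCSA_POINTS = {
    "out_of_hospital_cardiac_arrest": 4,
    "smoking": 2,
    "angina_at_rest_alone": 2,
    "organic_coronary_stenosis": 2,
    "multivessel_spasm": 2,
    "st_elevation_during_angina": 1,
    "beta_blocker_use": 1,
}

def jcsa_score(patient: dict) -> int:
    """Sum the points for each predictor present in the patient record."""
    return sum(pts for factor, pts in JCSA_POINTS.items() if patient.get(factor))

def jcsa_risk_category(score: int) -> str:
    if score <= 2:
        return "low risk"           # 2.5% MACE in the registry
    if score <= 5:
        return "intermediate risk"  # 7.0% MACE
    return "high risk"              # 13.0% MACE

# Example: a smoker with rest angina and multivessel spasm scores 6 points.
patient = {"smoking": True, "angina_at_rest_alone": True, "multivessel_spasm": True}
score = jcsa_score(patient)
print(score, "->", jcsa_risk_category(score))  # 6 -> high risk
```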

In terms of treatment, calcium channel blockers (e.g., nifedipine, diltiazem, and verapamil) are the mainstay of therapy for coronary artery spasm. The goal of such therapy is to prevent vasoconstriction and promote coronary artery vasodilation. In one study of 245 patients with coronary artery spasm who were followed for an average of 80.5 months, calcium channel blocker therapy was an independent predictor of myocardial infarction-free survival [31]. In another observational study of 300 patients with coronary artery spasm, calcium channel blockers were effective in alleviating symptoms in over 90% of patients [32]. Drug response was ranked as follows: markedly effective (complete elimination of angina attacks within 2 days); effective (complete elimination of attacks after 2 days, or a reduction in the number of attacks to less than half, during the period of in-hospital drug administration); or ineffective (no reduction in attacks to less than half). Efficacy rates (combining the markedly effective and effective categories) for nifedipine, diltiazem, and verapamil were 94.0%, 90.8%, and 85.7%, respectively. Rarely, cases are refractory to medical therapy, and there is literature supporting the effectiveness of surgical revascularization in these circumstances [33].

It is clear that the phenomenon of “variant angina” is a complicated, multifaceted product of forces that are not only anatomical, but also genetic, chemical, physiological, and behavioral in nature. While endothelial nitric oxide bioactivity appears to play a critical role in this process, there are undoubtedly several other factors involved. Over time, our knowledge of the pathophysiology driving Prinzmetal’s angina will continue to expand, as will our diagnostic and therapeutic repertoire for this fascinating clinical entity.

Dr. Anjali Varma Desai is a 3rd year resident at NYU Langone Medical Center

Peer Reviewed by Harmony R. Reynolds, MD, Medicine (Cardio Div), NYU Langone Medical Center

References:

1. Prinzmetal M, Kennamer R, Merliss R, Wada T, Bor N. Angina pectoris: I: a variant form of angina pectoris: preliminary report. Am J Med. 1959;27:375-388. http://www.ncbi.nlm.nih.gov/pubmed/14434946

2. Maseri A, Severi S, Nes MD, et al. “Variant” angina: one aspect of a continuous spectrum of vasospastic myocardial ischemia. Pathogenetic mechanisms, estimated incidence and clinical and coronary arteriographic findings in 138 patients. Am J Cardiol. Dec 1978;42(6):1019-35 http://www.ncbi.nlm.nih.gov/pubmed/727129

3. Cheng TO, Bashour R, Kelser GA Jr, et al: Variant angina of Prinzmetal with normal coronary arteriograms: a variant of the variant. Circulation 1973; 47: 476-485. http://circ.ahajournals.org/content/47/3/476.abstract

4. Castello R, Alegria E, Merino A, Soria F, Martinez-Caro D. Syndrome of coronary artery spasm of normal coronary arteries: Clinical and angiographic features. Angiology 1988; 39: 8-15. http://www.ncbi.nlm.nih.gov/pubmed/3341608

5. Oliva PB, Potts DE, Pluss RG. Coronary arterial spasm in Prinzmetal angina: documentation by coronary arteriography. N Engl J Med 1973; 288: 745-751. http://www.ncbi.nlm.nih.gov/pubmed/4688712

6. Endo M, Kanda I, Hosoda S, et al. Prinzmetal’s variant form of angina pectoris: Re-evaluation of mechanisms. Circulation 1975; 52: 33-37. http://circ.ahajournals.org/content/52/1/33.abstract?cited-by=yes&legid=circulationaha;52/1/33&related-urls=yes&legid=circulationaha;52/1/33

7. Huckell VF, McLaughlin PR, Morch JE, Wigle ED, Adelman AG: Prinzmetal’s angina with documented coronary artery spasm: Treatment and follow-up. Br Heart J 1981 June; 45(6): 649-655. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC482578/

8. Selzer A, Langston M, Ruggeroli C, et al: Clinical syndrome of variant angina with normal coronary arteriogram. N Engl J Med 1976; 295: 1343-1347. http://www.ncbi.nlm.nih.gov/pubmed/980080

9. Kugiyama K, Yasue H, Okumura K, et al. Nitric oxide activity is deficient in spasm arteries of patients with coronary spastic angina. Circulation 1996 Aug 1; 94(3): 266-71. http://www.ncbi.nlm.nih.gov/pubmed/8759065

10. Miyao Y, Kugiyama K, Kawano H, et al. Diffuse intimal thickening of coronary arteries in patients with coronary spastic angina. J Am Coll Cardiol. 2000 Aug; 36(2): 432-7. http://www.ncbi.nlm.nih.gov/pubmed/10933354

11. Yasue H, Nakagawa H, Itoh T, Harada E, Mizuno Y. Coronary artery spasm – clinical features, diagnosis, pathogenesis and treatment. J Cardiol 2008; 51: 2-17. http://www.ncbi.nlm.nih.gov/pubmed/18522770

12. Kusama Y, Kodani E, Nakagomi A, et al. Variant angina and coronary artery spasm: the clinical spectrum, pathophysiology and management. J Nihon Med Sch. 2011;78(1):4-12. Review. http://www.researchgate.net/publication/50351691_Variant_angina_and_coronary_artery_spasm_the_clinical_spectrum_pathophysiology_and_management

13. Shimokawa H, Seto M, Katsumata N, et al. Rho-kinase mediated pathway induces enhanced myosin light chain phosphorylations in a swine model of coronary artery spasm. Cardiovasc Res 1999; 43: 1029-1039. http://cardiovascres.oxfordjournals.org/content/43/4/1029.full

14. Masumoto A, Mohri M, Shimokawa H, et al. Suppression of coronary artery spasm by a Rho-kinase inhibitor fasudil in patients with vasospastic angina. Circulation 2002; 105: 1545-1547. http://circ.ahajournals.org/content/105/13/1545.abstract

15. Miwa K, Fujita M, Sasayama S. Recent insights into the mechanisms, predisposing factors and racial differences of coronary vasospasm. Heart Vessels 2005; 20: 1-7. http://www.ncbi.nlm.nih.gov/pubmed/15700195

16. Okumura K, Yasue H, Matsuyama K, et al. Sensitivity and specificity of intracoronary injection of acetylcholine for the induction of coronary artery spasm. J Am Coll Cardiol. 1988 Oct;12(4):883-8. http://www.unboundmedicine.com/evidence/ub/citation/3047196/Sensitivity_and_specificity_of_intracoronary_injection_of_acetylcholine_for_the_induction_of_coronary_artery_spasm_

17. Miwa K, Fujita M, Ejiri M, Sasayama S. Usefulness of intracoronary injection of acetylcholine as a provocative test for coronary artery spasm in patients with vasospastic angina. Heart Vessels. 1991;6(2):96-101 http://www.ncbi.nlm.nih.gov/pubmed/1906457

18. Schroeder JS, Bolen JL, Quint RA, et al. Provocation of coronary spasm with ergonovine maleate: new test with results in 57 patients undergoing coronary arteriography. Am J Cardiol 1977; 40: 487-491. http://www.ncbi.nlm.nih.gov/pubmed/910712

19. Heupler FA, Proudfit WL, Razavi M, et al. Ergonovine maleate provocative test for coronary arterial spasm. Am J Cardiol 1978; 41: 631-640. http://www.ncbi.nlm.nih.gov/pubmed/645566

20. Song JK, Lee SJ, Kang DH, Cheong SS, Hong MK, Kim JJ, Park SW, Park SJ. Ergonovine echocardiography as a screening test for diagnosis of vasospastic angina before coronary angiography. J Am Coll Cardiol. 1996 Apr;27(5):1156-61. http://www.ncbi.nlm.nih.gov/pubmed/8609335

21. Palinkas A, Picano E, Rodriguez O, et al. Safety of ergot stress echocardiography for non-invasive detection of coronary vasospasm. Coron Artery Dis 2001 Dec; 12(8): 649-54. http://www.ncbi.nlm.nih.gov/pubmed/11811330

22. Djordjevic-Dikic A, Varga A, Rodriguez O, et al. Safety of ergotamine-ergic pharmacologic stress echocardiography for vasospasm testing in the echo lab: 14 year experience on 478 tests in 464 patients. Cardiologia 1999 Oct; 44(10): 901-6. http://www.ncbi.nlm.nih.gov/pubmed/10630049

23. Nakamura M, Takeshita A, Nose Y. Clinical characteristics associated with myocardial infarction, arrhythmias and sudden death in patients with vasospastic angina. Circulation 1987; 75: 1110-1116.

24. Myerburg RJ, Kessler KM, Mallon SM, et al. Life-threatening ventricular arrhythmias in patients with silent myocardial ischemia due to coronary artery spasm. N Engl J Med 1992; 326: 1451-1455.

25. Sueda S, Saeki H, Otani T, et al. Major complications during spasm provocation tests with an intracoronary injection of acetylcholine. Am J Cardiol. 2000; 85(3): 391.

26. Ong P, Athanasiadis A, Borgulya G, et al. Clinical usefulness, angiographic characteristics, and safety evaluation of intracoronary acetylcholine provocation testing among 921 consecutive white patients with unobstructed coronary arteries. Circulation 2014; 129(17): 1723.

27. Hirano Y, Ozasa Y, Yamamoto T, et al. Diagnosis of vasospastic angina by hyperventilation and cold-pressor stress echocardiography: comparison to I-MIBG myocardial scintigraphy. J Am Soc Echocardiogr. 2002 Jun;15(6):617-23. http://www.unboundmedicine.com/washingtonmanual/ub/citation/12050603/Diagnosis_of_vasospastic_angina_by_hyperventilation_and_cold_pressor_stress_echocardiography:_comparison_to_I_MIBG_myocardial_scintigraphy_

28. Raizner AE, Chahine RA, Ishimori T, et al. Provocation of coronary artery spasm by the cold pressor test. Hemodynamic, arteriographic and quantitative angiographic observations. Circulation. 1980; 62: 925-932. http://circ.ahajournals.org/content/62/5/925.citation

29. Hwang HJ, Chung WB, Park JH, et al. Estimation of coronary flow velocity reserve using transthoracic Doppler echocardiography and cold pressor test might be useful for detecting of patients with variant angina. Echocardiography. 2010 Apr;27(4):435-41. http://www.ncbi.nlm.nih.gov/pubmed/20113325

30. Takagi Y, Takahashi J, Yasuda S, et al. Prognostic stratification of patients with vasospastic angina: a comprehensive clinical risk score developed by the Japanese Coronary Spasm Association. J Am Coll Cardiol. 2013; 62(13): 1144-1153.

31. Yasue H, Takizawa A, Nagao M, et al. Long-term prognosis for patients with variant angina and influential factors. Circulation. 1988;78(1):1.

32. Kimura E, Kishida H. Treatment of variant angina with drugs: a survey of 11 cardiology institutes in Japan. Circulation 1981 April; 63(4): 844-8.

33. Ono T, Ohashi T, Asakura T, Shin T. Internal mammary revascularization in patients with variant angina and normal coronary arteries. Interact Cardiovasc Thorac Surg. 2005;4:426–428.


Are Health Care Providers PrEPared?

September 24, 2014

By Nathan King

Faculty Reviewed

Doctors are known to be some of the worst patients, and from personal experience I predict that medical students are not too far behind. That’s why, when I finally found the time to take a proactive step in maintaining my good health, the last thing I hoped to run into was a barrier, but that’s exactly what I hit. To my surprise, it was not at the hands of insurance companies, overbooked doctors, or the general bureaucracy of the medical system; it stemmed from ignorance on the part of health care professionals. Ignorance as in a lack of knowledge, and in this case, the subject was pre-exposure prophylaxis against HIV, more commonly known as PrEP.

The combination of emtricitabine and tenofovir (Truvada; Gilead Sciences, Foster City, California) was first approved for use as PrEP by the FDA in 2012 after the landmark iPrEx study found it to be effective in preventing HIV infection in seronegative individuals.[1] Thus, I was taken aback when I asked a provider if she prescribed the medication and, after first having to explain to her what it is, got the response, “Why do you need it? Are you planning to travel to a developing country soon?” My jaw dropped. I attempted to salvage the conversation by explaining the indications a little more, but was met with uncomfortable resistance and was urged to ask someone else at a follow-up visit. Weeks later I transferred my care to a primary care physician at a renowned academic medical center in Manhattan, and again I asked the same question. Though this time the response was less uninformed and pejorative, the practitioner stated that she had never prescribed the medication and did not feel comfortable doing so. Her solution was to offer me a referral to an infectious disease specialist.

My first thought: the last time I spoke with an infectious disease specialist I was consulting them for a patient with severe neutropenia who had spiked a fever with a blood culture growing gram-positive cocci. My second and third revolved around the lack of time and money I had to see an outside specialist for what I understood to be routine preventive care. I then began to get frustrated with the many obstacles I had faced. Should primary care providers be responsible for prescribing PrEP and, if so, are they effectively prepared to do so?

In order to begin answering this question, a burden of proof for the treatment itself must be met: is PrEP something that should even be recommended? The landmark PrEP study by Grant and colleagues, “Preexposure Chemoprophylaxis for HIV Prevention in Men Who Have Sex with Men,” appeared in the New England Journal of Medicine in December 2010.[1] The study followed 2499 seronegative men who have sex with men (MSMs) for a median of 1.2 years and found an overall 44% reduction in HIV infection using an intention-to-treat analysis and a 90% reduction when corrected for laboratory-tested medication adherence (per-protocol analysis).[1] This and other studies led the FDA to approve Truvada for use as PrEP in July 2012, following early advocacy by the CDC.[3] This was groundbreaking news: the first time any medication had been approved for preventing HIV. Naturally, skepticism arose. People asked the following legitimate questions:

Is Truvada truly effective?

Who warrants treatment with PrEP?

Will this medication encourage unsafe sex practices? [3]

Since the initial study, several other publications have shown similar reductions in HIV seroconversion with the use of PrEP, including studies of heterosexual serodiscordant couples.[4] Most recently, a study that followed a large percentage of the participants in the original iPrEx study was published and presented at the 2014 World AIDS Conference. Of the 1600 participants, all followed for 17 months, none who took the pill four or more times per week seroconverted.[5] This study also supported the original study’s findings that, in those taking PrEP compared to placebo, self-reported condom use did not decrease, the number of sexual partners did not increase, and STI testing revealed no increase in syphilis or herpes infections, a more objective measure of risk-taking behavior.[1,5] This aggregation of promising data supporting the effectiveness and potential of PrEP led the World Health Organization (WHO) to make an official statement on July 11, 2014, advocating that all men who have sex with men consider using PrEP as further protection against HIV infection.[6]
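For readers who want to see how headline figures such as iPrEx’s 44% relative risk reduction are derived, the sketch below computes the reduction from arm-level seroconversion counts. The counts are illustrative values chosen to reproduce the 44% figure, not data taken from the trial report.

```python
# How a relative risk reduction (RRR) is computed from seroconversion
# counts in each arm of a trial. Counts below are illustrative only.
def relative_risk_reduction(events_tx: int, n_tx: int,
                            events_pbo: int, n_pbo: int) -> float:
    risk_tx = events_tx / n_tx     # incidence in the treatment arm
    risk_pbo = events_pbo / n_pbo  # incidence in the placebo arm
    return 1 - risk_tx / risk_pbo  # 1 minus the relative risk

# Illustrative example: 36 infections among 1250 participants on PrEP
# versus 64 among 1250 on placebo yields an RRR of about 44%.
print(f"RRR = {relative_risk_reduction(36, 1250, 64, 1250):.0%}")
```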

The question of why MSMs are being singled out for this treatment warrants attention and further highlights the potential implications of PrEP. A recent research letter published in JAMA analyzed HIV infection rates in the US from 2002 to 2011. At first glance, the results are encouraging, reporting a 33.2% reduction in the rate of new infections over the 9-year span.[7] However, the details paint a different picture. Although the rates of infection attributed to heterosexual contact or injection drug use significantly declined, the rates of infection from male-to-male sexual contact significantly increased. MSMs aged 13-24 bore most of the brunt, with an overall increase in infection rate of 132.5%.[7] In New York City, this was accompanied by increasing rates of gonorrhea infection as well as outbreaks of meningitis. This discrepancy, in addition to data showing that MSMs already carry the majority of the HIV burden within the US, identifies MSMs as the population most vulnerable to HIV infection. Moreover, it shows that current measures, including widespread advocacy of condom use, are not working. Therefore, it is this population that is most likely to benefit from additional, innovative measures of HIV prevention such as PrEP.

Unfortunately, when comparing the promise of PrEP with its actual usage, the numbers don’t line up. Although exact statistics are hard to come by, anecdotal data suggest that the use of Truvada has not taken off within the LGBT community. For instance, in 2013, Whitman-Walker Health, a large LGBT-serving clinic in Washington, DC, whose patients mostly consist of African-American gay or bisexual men, reported that only 90 of its 3000 HIV-negative patients (3%) had started PrEP. Another well-known HIV advocate told Out.com magazine that one of the largest national health insurers covered just over 300 prescriptions for PrEP in 2013.[7] While tangible evidence to support these specific claims is lacking, numerous similar reports from LGBT advocates and health care providers alike suggest that only a small minority of MSMs are using PrEP.

There are many explanations for the lack of uptake within the LGBT community. Gilead itself admits to relying on LGBT health organizations for public relations because it worries about the backlash it might receive if it advertised the medication directly.[3] Moreover, there have been reports of a general stigma within the LGBT community that labels people taking the medication as highly promiscuous, leading to the pejorative term “Truvada-whore.” One study conducted in NYC, however, points to another possible explanation: misunderstanding. The study surveyed 629 MSMs in three different NYC sex clubs and found that 78% of the men did not consider themselves candidates for PrEP based on their perceived risk, even though over 80% met eligibility criteria.[8,9] All three of these potential obstacles to wider PrEP usage highlight one important trait among the individuals who could potentially benefit from treatment: ignorance.

Ironically, this ignorance is not unlike the ignorance of the practitioners I encountered. The difference, however, lies in the responsibility these two populations, patients and practitioners, bear for becoming informed. It is convenient to place the onus on patients to take care of their own health needs, especially when those needs relate to a characteristic that places them in a minority, such as race, ethnicity, or sexual orientation. Yet as a medical student, I am taught and expected to know which antihypertensives are preferred in African Americans, which genetic disorders warrant screening in the Ashkenazi Jewish population, and under what circumstances a Jehovah’s Witness may refuse life-saving treatments. Why, then, do I feel such a heavy burden to advocate for the appropriate preventive measure to lower my own risk of becoming infected with HIV?

The answer again is ignorance: practitioners are not well informed about PrEP or equipped to promote its usage. The reasons range from the novelty of the treatment to providers’ reluctance to discuss gay sex, and both probably contribute to the overall problem. Regardless, it is unacceptable. Primary care providers, the gatekeepers of preventive health, have a responsibility to oversee their patients’ medical well-being. One cannot force an obese, hyperlipidemic patient to diet and exercise any more than one can force an MSM to practice safe sex, yet the former is almost automatically prescribed a statin, while the latter is often left to fend for himself. History also shows that information and conversation breed normalization, as was the case with contraceptives, multiple medical and surgical therapies, and even AIDS itself. It is likely that as practitioners become more aware, informed, and upfront about PrEP, the stigma around it will diminish. Yet as of now, how can we expect the LGBT community to embrace a treatment that the medical community has been too reluctant to embrace itself?

It is imperative that all health care professionals, especially those on the front line serving as primary care providers, educate themselves about PrEP. It is their responsibility to inform patients of this option, to carry out its management, and to help the community at large fight the HIV epidemic. Thus, I encourage every health care professional to ask: the next time someone who could benefit from further HIV prevention comes to your office, will you be PrEPared?

Appendix

Recommended Indications for PrEP Use by MSM (adapted from the 2014 US Public Health Service/CDC clinical practice guideline) [9]:

Adult man without acute or established HIV infection

Any male sex partners in the past 6 months

Not in a monogamous partnership with a recently tested, HIV-negative partner

AND at least one of the following: any anal sex without condoms (receptive or insertive) in the past 6 months; any sexually transmitted infection diagnosed or reported in the past 6 months; or an ongoing sexual relationship with an HIV-positive male partner

Commentary by Dr. Richard Greene

The conversation about PrEP so far has been focused on “Should we or shouldn’t we?” There are now data establishing that indeed we should, along with recommendations to do so. In 2011, the Institute of Medicine produced a report on LGBT health [10] stating that the greatest health disparities facing LGBT patients are a lack of evidence-based information and a lack of provider knowledge about the evidence that does exist for caring for this community. In my clinic and in my personal life as a gay primary care physician, I hear stories at least once a week of patients who have asked for PrEP and been dismissed or told they do not need it, when in fact they are excellent candidates. In NYC, as many as 1 in 5 young MSM have HIV. For 30 years, we have been recommending the use of condoms for all sex, but this has been ineffective in stopping the spread of HIV in this country. Here we have both data and clear recommendations from the CDC and WHO for the use of PrEP in a high-risk population, and it behooves providers to be aware of the intervention and able to provide it. Isn’t it time we educate ourselves and offer our patients the full arsenal of protection they deserve?

By Nathan King, a 2nd-year medical student at NYU School of Medicine

Reviewed with a commentary by Richard Greene, MD, Department of Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Grant RM, Lama JR, Anderson PL, et al. Preexposure chemoprophylaxis for HIV prevention in men who have sex with men. N Engl J Med. 2010;363(27):2587–2599. http://www.nejm.org/doi/full/10.1056/NEJMoa1011205

2. Centers for Disease Control and Prevention. CDC Statement on FDA approval of drug for HIV prevention. http://www.cdc.gov/nchhstp/newsroom/FDA-ApprovesDrugStatement.html Published July 16, 2012. Accessed July 23, 2014.

3. Murphy T. Is this the new condom? Out. September 9, 2013. http://www.out.com/news-opinion/2013/09/09/hiv-prevention-new-condom-truvada-pill-prep?page=full Accessed July 23, 2014.

4. Baeten JM, Donnell D, Ndase P, et al. Antiretroviral prophylaxis for HIV prevention in heterosexual men and women. N Engl J Med. 2012;367(5):399-410. http://www.nejm.org/doi/full/10.1056/NEJMoa1108524

5. CBS/AP website. HIV pill Truvada shows more promise against infection. http://www.cbsnews.com/news/hiv-pill-truvada-shows-more-promise-to-prevent-infection/ Published July 22, 2014. Accessed July 23, 2014.

6. World Health Organization. HIV/AIDS: Guidance on oral pre-exposure prophylaxis (PrEP) for serodiscordant couples, men and transgender women who have sex with men at high risk of HIV. Recommendations for use in the context of demonstration projects. http://www.who.int/hiv/pub/guidance_prep/en/ Published July 2014. Accessed July 23, 2014.

7. Johnson AS, Hall HI, Hu X, Lansky A, Holtgrave DR, Mermin J. Trends in diagnoses of HIV infection in the United States. JAMA. 2014;312(4):432-434.

8. AIDSMeds website. Gay men at risk may not see themselves as PrEP candidates. July 1, 2014. http://www.aidsmeds.com/articles/MSM_candidacy_PrEP_1667_25848.shtml Accessed July 23, 2014.

9. Centers for Disease Control and Prevention website. US Public Health Service. Preexposure prophylaxis for the prevention of HIV infection in the United States—2014. A clinical practice guideline. http://www.cdc.gov/hiv/pdf/prepguidelines2014.pdf Accessed September 13, 2014.

10. Institute of Medicine, Committee on Lesbian, Gay, Bisexual, and Transgender Health Issues and Research Gaps and Opportunities; Board on the Health of Select Populations. The health of lesbian, gay, bisexual, and transgender (LGBT) people: building a foundation for better understanding. http://www.iom.edu/~/media/Files/Report%20Files/2011/The-Health-of-Lesbian-Gay-Bisexual-and-TransgenderPeople/LGBT%20Health%202011%20Report%20Brief.pdf Published 2011. Accessed August 5, 2014.