Class Act

How Bad is Binge Drinking, Really?

July 12, 2012

By Patrick Olivieri

Faculty Peer Reviewed

Alcohol is a well-established part of our culture, as a social lubricant or a way to wind down at the end of the day. Recently, however, binge drinking (4 or more drinks on one occasion for a woman, 5 or more for a man) has been rapidly increasing, with as many as 32% of Americans reporting at least occasional bingeing.[1] Additionally, men have been shown to binge drink 30% of the time when they go out socially.[2] It is well known that alcoholism is a risk factor for cirrhosis, tuberculosis, pancreatitis, pneumonia, hypertension, and cardiomyopathy;[3] however, there is a lack of knowledge about morbidity related to binge drinking.

When asked about the body’s ability to recover from binge drinking, physicians often relay the widely known catastrophes that can occur during acute intoxication (car accidents, homicide, suicide, domestic violence, emergency room visits from injuries),[4] but patients rarely hear about the actual effects of binge drinking on the human body. This knowledge gap is due to a combination of conflicting arguments and a lack of data.

The good news: when controlling for other alcohol usage, there is no increased risk of cirrhosis among binge drinkers over nondrinkers.[5] Furthermore, brain tissue shrinkage and difficulty in object recognition are transient, with subjects returning to baseline after binge drinking has ceased.[6,7] A small study comparing binge drinkers with nondrinkers actually showed some increased activity on MRI in the frontal and parietal regions, thought to be due to increased engagement of working memory.[8] These effects are likely due to the ability of the body to repair itself during short periods of abstinence from alcohol (between binges) as well as a lack of the cumulative inflammation necessary to cause lasting damage.

Binge drinking has pathological effects, even with "rest" between episodes. Effects are seen on the heart, where arrhythmias are common in the setting of binge drinking, often called "holiday heart" syndrome.[9,10,11] While these arrhythmias are often transient, they can lead to more dangerous arrhythmias or even emboli in high-risk patients. Binge drinking has been linked to higher levels of cerebral bleeds, death from coronary artery disease, and strokes, with increases as high as 20%.[12] Normally, the changes that would lead to alcoholic cardiomyopathy are reversed with time; however, in patients with acute viral myocarditis, binge drinking can lead to a deleterious drop in systolic function.[13] These cardiac risks are significant and can be prevented by not binge drinking.

In the gastrointestinal system, the liver and pancreas are the organs most commonly affected. Although not associated with cirrhosis, binge drinking is the largest activator of the liver enzymes ERK1 and ERK2, which have been linked to excess cell growth.[14] In addition to pancreatitis, pancreatic cancer was found to be more common in binge drinkers compared with nondrinkers (OR 3.5, 95% CI: 1.6-7.5).[15]

Renal issues are important when counseling patients, as binge drinking along with NSAID use can lead to an increased risk of acute renal failure.[16] The effects on the brain do not appear to be limited to acute intoxication, either. MRI studies show hypoactivation of the hippocampus along with a long-lasting defect in spatial recognition in individuals who report multiple episodes of binge drinking.[6]

Binge drinking causes damage beyond the direct chemical insult. These effects are most commonly seen in younger males, with the highest alcohol-attributable fraction (AAF) seen with motor vehicle accidents.[17] The AAF refers to the proportion of events that would not have occurred without the consumption of alcohol. Binge drinking correlates most highly with violent injuries.[18] The suicide rate among binge drinkers is 6 times that of moderate alcohol consumers.[19] The economic toll of binge drinking is estimated to be $170 billion, comprising over three quarters of the total economic cost of all drinking.[20]
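
As a rough illustration of how an attributable fraction is calculated, the sketch below uses the standard Levin formula with invented prevalence and relative-risk values; it is not drawn from the cited study, which applies a more elaborate method to injury mortality.[17]

    def attributable_fraction(prevalence: float, relative_risk: float) -> float:
        """Levin's formula: AF = p(RR - 1) / [p(RR - 1) + 1]."""
        excess = prevalence * (relative_risk - 1.0)
        return excess / (excess + 1.0)

    # Invented example: if 30% of a population binge drinks (p = 0.30) and binge
    # drinkers have 3 times the risk of a given injury (RR = 3), about 37.5% of
    # those injuries are attributable to binge drinking.
    print(round(attributable_fraction(0.30, 3.0), 3))  # 0.375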

Binge drinking has been shown to have serious effects on the human body, despite the recovery period between episodes. Beyond the obvious legal problems and accidental injuries associated with punctuated intoxication, there are detrimental effects on the heart, gastrointestinal system, brain, and kidneys. Patients who binge need to be aware of the harm they are doing to themselves even if they manage to avoid injury and arrest. Continued education is needed to prevent further normalization of binge drinking.

Patrick Olivieri is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Joshua Lee, MD, Assistant Professor, Department of Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Charles J, Valenti L, Miller G. Binge drinking. Aust Fam Physician. 2011;40(8):569. http://www.ncbi.nlm.nih.gov/pubmed/21814649

2. Mathurin P, Deltenre P. Effect of binge drinking on the liver: an alarming public health issue? Gut. 2009;58(5):613-617. http://gut.bmj.com/content/58/5/613.extract

3. Zaridze D, Brennan P, Boreham J, Boroda A, Karpov R, Lazarev A et al. Alcohol and cause-specific mortality in Russia: a retrospective case-control study of 48,557 adult deaths. Lancet. 2009;373(9682):2201.

4. Pletcher MJ, Maselli J, Gonzales R. Uncomplicated alcohol intoxication in the emergency department: an analysis of the National Hospital Ambulatory Medical Care Survey. Am J Med. 2004;117(11):863. http://www.ncbi.nlm.nih.gov/pubmed/15589492

5. Hatton J, Burton A, Nash H, Munn E, Burgoyne L, Sheron N. Drinking patterns, dependency and life-time drinking history in alcohol-related liver disease. Addiction. 2009;104(4):587-592.

6. Cippitelli A, Zook M, Bell L, Damadzic R, Eskay RL, Schwandt M et al. Reversibility of object recognition but not spatial memory impairment following binge-like alcohol exposure in rats. Neurobiol Learn Mem. 2010;94(4):538-546.

7. Zahr NM, Mayer D, Rohlfing T, Hasak MP, Hsu O, Vinco S et al. Brain injury and recovery following binge ethanol: evidence from in vivo magnetic resonance spectroscopy. Biol Psychiatry. 2010;67(9):846-854. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2854208/

8. Schweinsburg AD, McQueeny T, Nagel BJ, Eyler LT, Tapert SF. A preliminary study of functional magnetic resonance imaging response during verbal encoding among adolescent binge drinkers. Alcohol. 2010;44(1):111-117.

9. Ettinger PO, Wu CF, De La Cruz C Jr, Weisse AB, Ahmed SS, Regan TJ. Arrhythmias and the “Holiday Heart”: alcohol-associated cardiac rhythm disorders. Am Heart J. 1978;95(5):555.

10. Frost L, Vestergaard P. Alcohol and risk of atrial fibrillation or flutter: a cohort study. Arch Intern Med. 2004;164(18):1993. http://www.ncbi.nlm.nih.gov/pubmed/15477433

11. Mukamal KJ, Tolstrup JS, Friberg J, Jensen G, Grønbaek M. Alcohol consumption and risk of atrial fibrillation in men and women: the Copenhagen City Heart Study. Circulation. 2005;112(12):1736.

12. Puddey IB, Beilin LJ. Alcohol is bad for blood pressure. Clin Exp Pharmacol Physiol. 2006;33(9):847-852.

13. Zagrosek A, Messroghli D, Schulz O, Dietz R, Schulz-Menger J. Effect of binge drinking on the heart as assessed by cardiac magnetic resonance imaging. JAMA. 2010;304(12):1328-1330.

14. Aroor AR, Jackson DE, Shukla SD. Elevated activation of ERK1 and ERK2 accompany enhanced liver injury following alcohol binge in chronically ethanol-fed rats. Alcohol Clin Exp Res. 2011; Epub ahead of print.

15. Gupta S, Wang F, Holly EA, Bracci PM. Risk of pancreatic cancer by alcohol dose, duration, and pattern of consumption, including binge drinking: a population-based study. Cancer Causes Control. 2010;21(7):1047-1059.  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2883092/

16. Skuladottir HM, Andresdottir MB, Hardarson S, Arnadottir M. Acute flank pain syndrome: a common presentation of acute renal failure in young men in Iceland. Laeknabladid. 2011;97(4):215-221.

17. Taylor BJ, Shield KD, Rehm JT. Combining best evidence: a novel method to calculate the alcohol-attributable fraction and its variance for injury mortality. BMC Public Health. 2011;11:265.

18. Brewer RD, Swahn MH. Binge drinking and violence. JAMA. 2005;294(5):616-618. http://jama.jamanetwork.com/article.aspx?articleid=201309

19. Klatsky AL, Armstrong MA. Alcohol use, other traits, and risk of unnatural death: a prospective study. Alcohol Clin Exp Res. 1993;17(6):1156-1162.

20. Bouchery EE, Harwood HJ, Sacks JJ, Simon CJ, Brewer RD. Economic costs of excessive alcohol consumption in the U.S., 2006. Am J Prev Med. 2011;41(5):516-524.

Gout: A Disease of the Blessed or a Blessing in Disguise?

June 8, 2012

By Krithiga Sekar

Faculty Peer Reviewed

“The patient goes to bed and sleeps quietly until about two in the morning when he is awakened by a pain which usually seizes the great toe, but sometimes the heel, the calf of the leg or the ankle… so exquisitely painful as not to endure the weight of the clothes nor the shaking of the room from a person walking briskly therein.”

—Thomas Sydenham  (1683)[1,2]

Gout, an excruciatingly painful but relatively benign form of arthritis in the modern world, has an intriguing pathology with ancient roots. Initially described by the Egyptians in 2640 BC, gout quickly gained its reputation as an "arthritis of the rich" in the writings of Hippocrates during the fifth century BC.[1,3] Six centuries later, Galen first recognized tophi, the crystallized monosodium urate deposits that result from chronic hyperuricemia, and further emphasized the growing link between gout, "debauchery," and "intemperance."[1,4] Throughout history, the early link between gout and an affluent lifestyle has been reinforced by the consistent association between this disease and the consumption of rich foods and excessive alcohol. Indeed, this is aptly summarized by a clever comment noted in the London Times in 1900: "The common cold is well named – but the gout seems instantly to raise the patient's social status."[1]

The heart of this historical association remains relevant in the modern world. Gout is characterized by chronic hyperuricemia that exceeds the physiologic saturation threshold of urate, around 6.8 mg/dL.[5] At excessive levels, urate can precipitate as its monosodium salt, forming needle-like crystals that deposit in joints and trigger an acute inflammatory arthritis in certain individuals.[6] The number of individuals with this predilection is on the rise. The prevalence of gout in America has doubled in the past few decades and is rising rapidly in other countries with growing economies.[5] Indulgent features of a Western diet, including heavy consumption of meats, seafood, fructose-sweetened beverages, and beer, have added significantly to the overproduction of uric acid in affected individuals. More importantly, cardiovascular disease, renal dysfunction, metabolic syndrome, hypertension, and the widespread use of thiazide and loop diuretics have led to decreased renal excretion of uric acid in hyperuricemic states. Together, overproduction and underexcretion have created a "perfect storm"[7] for worsening and difficult-to-treat gouty arthritis in the modern setting.

So gout continues its legacy as a disease of the affluent or the “blessed,” so to speak. There are also evolutionary reasons for the persistence of this ancient and painful pest.

Uric acid is a waste product resulting from the biologic oxidation of purines, including adenine and guanine–components of deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and adenosine triphosphate (ATP).[8] Most microorganisms have the complete enzymatic profile needed to catabolize purines to their final degradation products ammonia and carbon dioxide. However, birds and reptiles have lost some intermediary enzymes required for this process. Notably, a mutation resulting in the loss of uricase occurred during the evolution of birds and uricotelic (urate-excreting) reptiles, possibly including Tyrannosaurus rex.[9] Thus, uric acid accumulates in these organisms. The survival advantage involves the large-scale excretion of nitrogenous wastes as insoluble uric acid–a method of maintaining nitrogen balance with minimal water loss.[8] Uricase reappears in most mammals, allowing for the conversion of uric acid to soluble allantoin that is subsequently excreted in the urine. Although humans also possess the gene for uricase, it is inactive,[10] forcing us to excrete relatively insoluble uric acid, with the potential for precipitation and crystal formation.

Is there a survival advantage for a hyperuricemic state in man? This question has recently received substantial scientific attention.

Programmed cell death, as seen in the context of infection, leads to increased purine catabolism and increased uric acid buildup. Recent evidence suggests that high concentrations of uric acid, specifically in crystalline form, serve as an adjuvant signal in addition to antigen presentation by major histocompatibility complex molecules. This second signal is thought to stimulate dendritic cell maturation and transform a low-grade T-cell immune response into a fulminant response by confirming pathogen invasion in the context of cell damage.[11,12] Results from a related study validate these findings by demonstrating that tumor rejection in mice is enhanced by injecting crystalline uric acid into the mice and suppressed by decreasing uric acid levels with uricase or allopurinol.[12] Indeed, the potentially protective role of hyperuricemia, and by association gout, was suggested as early as 1773 by the social and political writer Horace Walpole, who predicted that gout may prevent other illnesses and prolong life.[13]

In the same vein, it has been proposed that hyperuricemia may provide a selective advantage by acting as an antioxidant in the nervous system, liver, lungs, and arterial walls.[10] Uric acid is a powerful scavenger of the free radicals that can cause oxidative damage to tissues.[14] In the nervous system, treatment with uric acid is correlated with decreased Wallerian degeneration and thermal hyperalgesia after sciatic nerve injury.[10,15] In the liver, addition of uric acid prevents oxidative damage to hepatocytes in animal models of hemorrhagic shock.[10,16] High levels of uric acid and total antioxidant activity have also been identified in tracheobronchial aspirates of mechanically ventilated premature infants, suggesting that uric acid may protect against oxidative lung damage early in life.[10,17] Finally, increased serum uric acid levels may also explain measured increases in antioxidant capacity in patients with significant atherosclerosis.[10,18]

In addition to its possible antioxidant features, hyperuricemia may have an evolutionary role in blood pressure maintenance in the context of bipedal human locomotion, an evolutionary step that placed significant demands on cardiovascular function.[11] Evidence for this model comes from a recent finding that animals on low-sodium diets are able to maintain blood pressure only when a uricase inhibitor is introduced.[11,19] This is further reinforced by a study showing lowered blood pressure (to the normal range) in adolescents with essential hypertension when allopurinol, the current therapy for hyperuricemia and gout prophylaxis, is introduced.[11,20]

These findings add to the mounting evidence that hyperuricemia and its less pleasant partner, gout, may have survived natural selection for the various advantages they offer human life. The potential immunological, antioxidant, and hemodynamic roles of uric acid in human biophysiology are areas of growing research that offer insights into an ancient disease that has stood the test of time. With the rising incidence of gout in the context of worsening and refractory illness, understanding the mechanisms involved may be valuable in the design of future treatments. To both the healthcare and scientific community, gout remains a striking example of the subtle balance between health and disease, offering us a reminder that there is seldom any gain without at least a little pain.

Commentary by Dr. Michael Pillinger

In this very nice summary, the author points out the potential benefits of the higher uric acid levels that characterize purine metabolism in humans and other primates. As noted, humans—along with other apes and New World monkeys—suffer from an absence of uricase, the enzyme that converts uric acid to the more soluble allantoin. During the Miocene era (10-20 million years ago) evolution pounced on uricase with a vengeance, deleting it from primates multiple times. Surely there was a reason—or multiple reasons—for this selection.

That said, we must express our enthusiasm for high urate levels with some trepidation. For one thing, as a species we have moved on. The low-salt environment—and the need to stand up and not fall down from trees—that may once have selected for higher uric acid levels to help us maintain blood pressure has given way to a high-salt, high-purine environment in which we need no help from evolution to maintain our urate levels and blood pressures. On the contrary, our problems now are obesity and hypertension. (Interestingly, studies during the Depression suggest that less than a century ago—before McDonald’s and Pepsi—our average serum urate was closer to 3-4 mg/dL, rather than the current 5-6 mg/dL. That level of urate, rather than the current one, may have been the actual result of uricase inactivation.)

The antioxidant hypothesis—urate is the most potent small-molecule antioxidant in plasma—is evolutionarily attractive, to the extent that humans and other primates, but not other mammals, lost the ability to synthesize the alternative antioxidant ascorbate only a few million years before we found a way to knock out uricase. Could this have been the evolutionary pressure that led us to raise our uric acid levels? The antioxidant effect of urate—well established in vitro—may indeed be important for the central nervous system, as clinical studies suggest that patients with neurodegenerative diseases (Parkinson disease, multiple sclerosis, Alzheimer disease) appear to do better when their urate levels are higher. (Studies are ongoing, and cause and effect has not been established.) On the other hand, in the periphery this effect may be less important, since red blood cell membranes are abundant and seem to have ample antioxidant capacity even when urate levels are low. Moreover, the potential benefits of higher levels of urate need to be balanced against the potential liabilities; increasing evidence implicates high urate levels in hypertension, renal disease, heart disease, and maybe even diabetes and obesity. And finally we should note that patients who have the highest levels of serum urate—those susceptible to gout—may experience no additional benefits while suffering from their debilitating disease.

In a sense, hyperuricemia may be like an old, old friend, who has done much good for our family in the past, who still has a number of charming attributes, but who in other ways has visited too long, and whose virtues are now confounded by cranky bad behavior. As physicians we should learn from our “friend,” treat it respectfully, and seek through research to exploit hyperuricemia’s possible virtues while refusing to accept its lamentable vices.

Krithiga Sekar is a 3rd year medical student at NYU Langone Medical Center

Peer reviewed by Michael Pillinger, MD, Associate Professor, Department of Medicine, Rheumatology Divison, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Nuki G, Simkin PA. A concise history of gout and hyperuricemia and their treatment. Arthritis Res Ther. 2006;8 Suppl 1:S1. http://arthritis-research.com/content/8/S1/S1

2. Sydenham T. Tractatus de Podagra et Hydrope. London, England: G Kettilby; 1683.

3. Hippocrates. The Genuine Works of Hippocrates. Adams F, trans-ed. New York, NY: Wood; 1886. http://archive.org/details/genuineworkship02hippgoog

4. Galen C. Claudii Galeni Opera Omni. Kühn CG, ed. Leipzig, Germany: Cnobloch; 1821–1833.

5. Terkeltaub R. Update on gout: new therapeutic strategies and options. Nat Rev Rheumatol. 2010;6(1):30-38.

6. Chen LX, Schumacher HR. Gout: an evidence-based review. J Clin Rheumatol. 2008;14(5 Suppl):S55-S62.

7. Bieber JD, Terkeltaub RA. Gout: on the brink of novel therapeutic options for an ancient disease. Arthritis Rheum. 2004;50(8):2400-2414.

8. Gutman AB, Yü TF. Uric acid metabolism in normal man and in primary gout. N Engl J Med. 1965;273(6):313-321.

9. Rothschild BM, Tanke D, Carpenter K. Tyrannosaurs suffered from gout. Nature. 1997; 387(6631):357.

10. Agudelo CA, Wise CM. Gout: diagnosis, pathogenesis, and clinical manifestations. Curr Opin Rheumatol. 2001;13(3):234-239. http://www.ncbi.nlm.nih.gov/pubmed/11333355

11. Pillinger MH, Rosenthal P, Abeles AM. Hyperuricemia and gout: new insights into pathogenesis and treatment. Bull NYU Hosp Jt Dis. 2007;65(3):215-221. http://www.ncbi.nlm.nih.gov/pubmed/17922673

12. Hu DE, Moore AM, Thomsen LL, Brindle KM. Uric acid promotes tumor immune rejection. Cancer Res. 2004;64(15):5059-5062.

13. Walpole H. Letter to Sir Horace Mann, 8 May 1773. In: Yale Editions of Horace Walpole’s Correspondence, Vol 25. Lewis WS, ed. New Haven, CT: Yale University Press; 1937-1982:402.

14. Ames BN, Cathcart R, Schwiers E, Hochstein P. Uric acid provides an antioxidant defense in humans against oxidant- and radical-caused aging and cancer: a hypothesis. Proc Natl Acad Sci USA. 1981;78(11):6858-6862.

15. Liu T, Knight KR, Tracey DJ. Hyperalgesia due to nerve injury-role of peroxynitrite. Neuroscience. 2000;97(1):125-131.

16. Tsukada K, Hasegawa T, Tsutsumi S, et al. Effect of uric acid on liver injury during hemorrhagic shock. Surgery. 2000;127(4):439-446.

17. Vento G, Mele MC, Mordente A, et al. High total antioxidant activity and uric acid in tracheobronchial aspirate of preterm infants during oxidative stress: an adaptive response to hyperoxia? Acta Paediatr. 2000;89(3):336-342.  http://onlinelibrary.wiley.com/doi/10.1111/j.1651-2227.2000.tb01336.x/pdf

18. Nieto FJ, Iribarren C, Gross MD, Comstock GW, Cutler RG. Uric acid and serum antioxidant capacity: a reaction to atherosclerosis? Atherosclerosis. 2000;148(1):131-139.

19. Mazzali M, Hughes J, Kim YG, et al. Elevated uric acid increases blood pressure in the rat by a novel crystal-dependent mechanism. Hypertension. 2001;38(5):1101-1106.

20. Feig DI, Johnson RJ. The role of uric acid in pediatric hypertension. J Ren Nutr. 2007;17(1):79-83.  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1847593/

BiDil: The Future of Medicine or a Return to a Dark Past?

May 31, 2012

By Christopher David Velez, MD

Faculty Peer Reviewed

Given the traumatic and often criminal role that medicine and the larger scientific community played in some of the most shameful acts of the 20th century, it is natural that the consequences of these collaborations have continued to reverberate to the present day. Chills run down our spines when we read treatises purporting to demonstrate undeniable genetic evidence of racial superiority, or when we confront the degrees of complicity needed for the horrific Tuskegee experiment to proceed with such devastating success. It is easy to see why, as a society, we are distrustful of the study of race in our current century.

It consequently should not be surprising that an innocently titled manuscript, “The African-American Heart Failure Trial: Background, Rationale, and Significance,” was published in 2002 [1]. In a remarkable departure from typical manuscripts detailing the findings of randomized controlled trials, this was an article focused solely on the justification for a drug trial in a specific study cohort. The trial agent was “BiDil,” a combination pill of isosorbide dinitrate and hydralazine, and the study population was African American. But it almost does not matter what the agent being discussed is. BiDil serves as a useful lens through which to view two phenomena: (a) the role that pharmacogenetics will play in the future of drug therapy for illness; and (b) the degree to which ancestry and its association (or lack thereof) with race and genetics can influence the heritable risk factors of illness [2]. The latter phenomenon will be (partially) addressed, not only in African Americans, but in other communities with complex ancestries in the US, such as Hispanics/Latinos.

The aforementioned 2002 communication was framed as anticipated results, but in reality, this was not the case. BiDil was an agent that had initially failed to receive Food and Drug Administration (FDA) approval because it lacked statistically significant benefit in the overall study cohort, with one exception: African Americans [3]. Not surprisingly, when the drug made its way again through the approval process as a therapy explicitly for African Americans, the federal regulatory authorities were eventually called upon to enter the fray. In 2005, the FDA approved for the first time in its history a pharmacotherapy for a specific racial group: “black patients” [4]. In justifying its rationale, the FDA made sure to emphasize the undue burden of heart failure on African American patients, and why such unconventional approaches needed to be taken given the statistically significant improvement made with the regimen in this group. This did not allay the controversy surrounding the decision, which became further magnified both within the popular press and academia. Articles were penned ranging from the measured “What’s right (and wrong) with racially stratified research and therapies” [5] to the more inflammatory “There is no scientific rationale for race-based research” [6]. One consistent criticism was the conflation of race and genetics: if the drug is targeted for African Americans, how do we know who the African Americans are to whom we should be prescribing BiDil?

An exhaustive analysis of the complexities surrounding the formation of the modern African American community is beyond the scope of this discussion. Yet the term “African American” and the racial category of “black” are inherently social inventions that are not descriptive of inherent traits. They are definitions derived from the tormented relationship the United States had with slavery as the first modern republic based on democratic rule, one that officially excluded, until the 1960s, one segment of the demos: people of West African ancestry. A complex pigmentocracy eventually became distilled into a bi-racial hierarchy determined by the rule of the famed “one drop”: if you were of part African ancestry, you were black or African American. Naturally, one can see the dilemma posed when such a social construct is used within scientific discourse.

Evidence of this incoherent relationship between race and genetics emerged from whole-genome analysis of African Americans in comparison with African populations [7]. When looking at markers derived from single nucleotide polymorphisms (SNPs) that can differentiate continental ancestry, it was found that not only was the average European contribution to the African American genome about 18.5%, but that there was notable variability in this contribution. This phenomenon is not limited to African Americans. For example, Hispanics/Latinos have posed such a conundrum to the US Census that obtuse racial categories like “White Non-Hispanic” are conjured up, treating origin from Latin America as an ethnicity. When you compare individuals from communities with origins in continental or Caribbean Latin America to Native American, European, and African populations, you easily see the inadequacy of treating Hispanics/Latinos as one homogeneous group [8]. The country-specific designations themselves are problematic, with some significant differences notable even within Puerto Rico, a small island comparable in size to Yellowstone National Park.

This raises a natural question: what medical benefit is derived from studying such differences in disease severity and outcome in specific communities that are inherently social inventions? As BiDil demonstrated to us, even though the days of Jim Crow and Tuskegee seem consigned to dusty history books, the collective trauma they have inflicted on society’s psyche is still evident. Such a legacy must be addressed as the study of the heritable risk for non-Mendelian illnesses like cancer, hypertension, and diabetes flourishes and high-density genomic data become increasingly available. Undoubtedly, an aspect of this inheritance will continue to bear the same genetic markers that can also differentiate people based on place of origin, in what we today assign the label “race.” BiDil demonstrates that without moving past the darkness of the 20th century, we will not be able to explore the full promise of genomics in the 21st century and its application to better disease prevention, management, and treatment. Perhaps fulfilling such a promise will be a first act of atonement for the horrors that some medical practices have inflicted on humanity and a way to demonstrate the true aspirations of our profession: caring with compassion and expertise for all sick persons through the rational and unbiased application of new knowledge, thus contributing to the advancement of our society.

Dr. Christopher David Velez is a 2011 graduate of NYU School of Medicine

Peer Reviewed by Antonella Surbone, MD, Ethics Editor, Clinical Correlations

Image courtesy of  Wikimedia Commons

References:

1. Taylor AL, Cohn JN, Worcel M, et al. The African-American Heart Failure Trial: background, rationale, and significance. Journal of the National Medical Association. 2002; 93 (9): 762-769  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2594160/

2. Kahn JD. Pharmacogenetics and ethnically targeted therapies. BMJ. 2005; 350: 1508

3. Bloche MG. Race-based therapeutics. New England Journal of Medicine. 2004; 351(20): 2035-2038 http://www.nejm.org/doi/full/10.1056/NEJMp048271

4. FDA Consumer Magazine September-October 2005 Issue. FDA approves Heart Drug for Black Patients. http://permanent.access.gpo.gov/lps1609/www.fda.gov/fdac/features/2005/505_BiDil.html Accessed February 24, 2011.

5. Sade RM. What’s right (and wrong) with racially stratified research and therapies. Journal of the National Medical Association. 2007; 99(6): 693-696

6. Hoover EL. There is no scientific rationale for race-based research. Journal of the National Medical Association. 2007; 99(6): 690-692. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2574368/

7. Bryc K, Auton A, Nelson MR, et al. Genome-wide patterns of population structure and admixture in West Africans and African Americans. PNAS. 2010;107(2):786-791. http://www.pnas.org/content/107/2/786.full

8. Bryc K, Velez CD, Karafet T, et al. Genome-wide patterns of population structure and admixture among Hispanic/Latino populations. PNAS. 2010 107: 8954–8961 http://www.pnas.org/content/107/suppl.2/8954.full

Nothing QT (Cute) About It: Rethinking the Use of the QT Interval to Evaluate the Risk of Drug-Induced Arrhythmias

April 27, 2012

By Aneesh Bapat, MD

Faculty Peer Reviewed

Perhaps it’s the French name, the curvaceous appearance on electrocardiogram (EKG), or its elusive and mysterious nature, but torsade de pointes (TdP), a polymorphic ventricular arrhythmia, is certainly the sexiest of all ventricular arrhythmias. Very few physicians and scientists can explain its origin in an early afterdepolarization (EAD), and fewer still can explain its “twisting of the points” morphology on EKG. Despite its rare occurrence (only 761 cases reported to the WHO Drug Monitoring Center between 1983 and 1999),[1] every medical student is taught that it is an arrhythmia caused by prolongation of the QT interval. The more savvy medical student will implicate the corrected QT interval (QTc) and recite a fancy formula with a square root sign to determine this value. Suffice it to say, the mystic nature of torsade de pointes and the negative attitude towards QT prolongation have been closely intertwined over the years. While the most common culprits, such as cisapride, macrolides, terfenadine, and haloperidol,[1] have become maligned for this reason, there are many more that do not even make it to the market because they prolong the QT interval.[2] (A comprehensive list of QT-prolonging drugs with associated arrhythmia risk can be found at <http://www.qtdrugs.org/>.) Although our conceptions of a long QT interval have been inculcated repeatedly, there is growing evidence that QT interval prolongation may not be sufficient to predict the risk of drug-induced TdP, and that other, more sensitive and specific markers should be utilized.
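
The square-root formula alluded to above is most often Bazett’s correction, which scales the measured QT interval by the square root of the RR interval. The minimal sketch below uses made-up example values for illustration only; Bazett’s formula is just one of several corrections in clinical use and is known to overcorrect at fast heart rates.

    import math

    def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
        """Bazett's correction: QTc = QT / sqrt(RR), with RR in seconds."""
        rr_sec = 60.0 / heart_rate_bpm  # RR interval in seconds
        return qt_ms / math.sqrt(rr_sec)

    # Example: a measured QT of 400 ms at a heart rate of 75 bpm (RR = 0.8 s)
    # yields a QTc of roughly 447 ms.
    print(round(qtc_bazett(400, 75)))  # ~447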

The QT interval, which is the amount of time between the start of the QRS complex and the end of the T wave on EKG, is a marker of the amount of time required for ventricular tissue depolarization and repolarization. On a cellular basis, it is closely related to the duration of the cardiac myocyte action potential (AP). In pathological conditions, myocyte depolarization can even occur during phase 2 or 3 of the action potential, producing an EAD. These EADs can give rise to ectopic beats in the tissue and produce arrhythmias such as TdP via the R-on-T phenomenon. It is generally thought that EADs are caused by problems with myocyte repolarization, which would prolong the QT interval, and thus QT prolongation has been linked to EAD-mediated arrhythmias such as TdP. Drugs such as haloperidol have been shown to block the potassium channels responsible for AP repolarization, and as a result prolong the QT interval. However, it has become more apparent in recent years that prolonged repolarization (or a prolonged QT interval) is neither sufficient nor necessary to produce the EADs (or ectopic beats) that cause arrhythmias.[3-6]

The EAD is an arrhythmogenic entity that has been implicated in a variety of abnormal rhythms, including ventricular tachycardias such as TdP, ventricular fibrillation, and atrial arrhythmias. In the normal cardiac action potential, Na+ channel opening produces the inward current necessary for the initial upstroke of the action potential, L-type calcium channels produce a plateau caused by influx of calcium, and K+ channels produce an outward current to assure full repolarization. The simplistic explanation for EADs has always been that they occur when inward currents (Na+ and Ca2+) are greater than outward currents (K+). This explains why EADs can occur as a result of K+ current inhibition, as seen with the use of antiarrhythmics such as sotalol or in the presence of hypokalemia. The advantage of this simplistic viewpoint is just that: it is simple. However, it is far from comprehensive. In fact, there are cases where potassium current inhibition does not cause EADs, and others where potassium currents are augmented and EADs do occur.[3,4,7] The key to the genesis of EADs is not the duration or magnitude of the various currents that make up the action potential, but rather the timing of channel openings.[8] The classic example contradicting the simplistic idea of EAD genesis is the antiarrhythmic drug amiodarone, which acts via potassium channel blockade and causes QT prolongation, but does not produce EADs or increase TdP risk.[7,9] Since the occurrence of EADs is not solely determined by the duration of the action potential, it follows that the risk of TdP is not solely determined by the QT interval.

Although much basic science and clinical research has called into question the validity of using QT prolongation to determine TdP risk, the message has been lost in translation to the bedside. In the clinic or hospital setting, too much weight is put on the baseline QT interval when deciding whether a drug can be used. A recent clinical study has shown that the degree of QT prolongation does not correlate with the baseline QT interval.[10] Another study has proposed the use of the Tp-e (the time from the peak to the end of the T wave) or the Tp-e/QT ratio as an indicator of arrhythmia risk, regardless of whether these changes occur in the presence of a long QT, short QT, or unchanged QT.[11] Yet another study proposes the use of three EKG criteria (T-wave flattening, reverse use dependence of the QT interval, and instability of the T wave) to determine whether a drug is arrhythmogenic. This study cites better sensitivity for arrhythmia risk and an earlier onset of changes compared with QT prolongation. This set of criteria even stands tall in the face of paradoxical situations of prolonged QT and decreased arrhythmia risk, as with amiodarone use.[7] When the day comes that QT prolongation is deemed unsatisfactory, better alternatives exist.

The study of arrhythmias has advanced significantly over the years, but unfortunately clinical practice has lagged behind. The major shortcoming of arrhythmia treatment in the clinic has been tunnel vision. For example, in the landmark CAST trial, Na+ channel blockade was used to prevent post-MI premature ventricular complexes. However, the study had to be terminated because of increased mortality, which was partially a result of arrhythmias.[3,12] The lesson from that trial should have been that a multi-faceted problem involving a variety of players cannot be eliminated by targeting one of them. Unfortunately, a similar approach has been taken in using QT prolongation as a marker for TdP risk. The factors that influence arrhythmogenesis are far too numerous to focus on only one, and a new, more comprehensive approach should be considered.

Aneesh Bapat is a 4th year medical student at NYU Langone Medical Center

Peer reviewed by Neil Bernstein, MD, Departments of Medicine (Cardio Div) and Pediatrics, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Darpo B. Spectrum of drugs prolonging QT interval and the incidence of torsades de pointes. European Heart Journal Supplements. 2001;3:K70-K80. http://eurheartjsupp.oupjournals.org/cgi/doi/10.1016/S1520-765X(01)90009-4

2. Kannankeril P, Roden DM, Darbar D. Drug-induced long QT syndrome. Pharmacological Reviews. 2010;62(4):760-781.

3. Weiss JN, Garfinkel A, Karagueuzian HS, Chen P-S, Qu Z. Early afterdepolarizations and cardiac arrhythmias. Heart Rhythm. 2010;7(12):1891-1899. http://www.ncbi.nlm.nih.gov/pubmed/20868774

4. Ding C. Predicting the degree of drug-induced QT prolongation and the risk for torsades de pointes. Heart Rhythm. 2011. http://www.ncbi.nlm.nih.gov/pubmed/21699823

5. Couderc J-P, Lopes CM. Short and long QT syndromes: does QT length really matter? Journal of Electrocardiology. 2010;43(5):396-399. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2928258&tool=pmcentrez&rendertype=abstract

6. Hondeghem LM. QT prolongation is an unreliable predictor of ventricular arrhythmia. Heart Rhythm. 2008;5(8):1210-1212. http://www.ncbi.nlm.nih.gov/pubmed/18675236

7. Shah RR, Hondeghem LM. Refining detection of drug-induced proarrhythmia: QT interval and TRIaD. Heart Rhythm. 2005;2(7):758-772. http://www.ncbi.nlm.nih.gov/pubmed/15992736

8. Tran D, Sato D, Yochelis A, et al. Bifurcation and chaos in a model of cardiac early afterdepolarizations. Physical Review Letters. 2009;102(25):258103. http://link.aps.org/doi/10.1103/PhysRevLett.102.258103

9. van Opstal JM, Schoenmakers M, Verduyn SC, et al. Chronic amiodarone evokes no torsade de pointes arrhythmias despite QT lengthening in an animal model of acquired long-QT syndrome. Circulation. 2001;104(22):2722-2727. http://circ.ahajournals.org/cgi/doi/10.1161/hc4701.099579

10. Kannankeril PJ, Norris KJ, Carter S, Roden DM. Factors affecting the degree of QT prolongation with drug challenge in a large cohort of normal volunteers. Heart Rhythm. 2011. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3154568&tool=pmcentrez&rendertype=abstract

11. Gupta P, Patel C, Patel H, et al. T(p-e)/QT ratio as an index of arrhythmogenesis. Journal of Electrocardiology. 2008;41(6):567-574. http://www.ncbi.nlm.nih.gov/pubmed/18790499

12. The Cardiac Arrhythmia Suppression Trial (CAST) Investigators. Preliminary report: effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction. The New England Journal of Medicine. 1989;321(6):406-412. http://www.ncbi.nlm.nih.gov/pubmed/2473403

A Study of Cultural Complications in the Management of Diabetes

April 18, 2012

By Kimberly Jean Atiyeh

Faculty Peer Reviewed

Ms. KS is a 49-year-old Bangladeshi woman with a history of diabetes mellitus and non-adherence to medical treatment and follow-up, who was reluctantly brought to the Bellevue ER by her family for nausea, vomiting, and fevers for one day. Her most recent hospitalization was 9 months prior, for epigastric discomfort in the setting of uncontrolled diabetes with a hemoglobin A1c of 12.4%. On arrival, her physical exam was significant for tachypnea, tachycardia, and dry mucous membranes. Her labs revealed hyponatremia, severe hyperglycemia, metabolic acidosis, and prerenal azotemia. She was diagnosed with hyperosmolar hyperglycemic nonketotic syndrome and treated with fluid replacement and IV insulin.
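
A brief technical aside on labs like these: when severe hyperglycemia is present, the measured sodium is often re-interpreted with a glucose-corrected value before the hyponatremia is treated as a free-water problem. The sketch below is illustrative only; the numbers are invented rather than Ms. KS’s actual results, and the correction factor of 1.6 mEq/L per 100 mg/dL of glucose above 100 is a common convention (some authors use 2.4).

    def corrected_sodium(measured_na: float, glucose: float, factor: float = 1.6) -> float:
        """Corrected Na (mEq/L) = measured Na + factor x (glucose - 100) / 100."""
        return measured_na + factor * max(glucose - 100.0, 0.0) / 100.0

    # Example with invented values: measured Na 126 mEq/L and glucose 900 mg/dL
    # give a corrected Na of about 138.8 mEq/L, suggesting the hyponatremia is
    # largely dilutional from the hyperglycemia.
    print(round(corrected_sodium(126, 900), 1))  # 138.8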

Diabetes is a rapidly growing, deadly, and costly epidemic. One in every 12 New Yorkers has diabetes, and that number is projected to double by 2050. Diabetes is the sixth most common cause of death, and one of every five U.S. federal health care dollars is spent on treating people with diabetes [1]. Most health care workers know about the medical challenges associated with diabetic patients, but the social and cultural dilemmas are not as well characterized. Ms. KS’s case was a striking example of anthropologic complexity. She spoke Bengali, but refused to use the translator phone, insisting on using only her son as a translator. She avoided contact with the medical team—her son confirmed that she was extremely anxious in a hospital setting and generally avoided doctors. Furthermore, she adamantly protested any blood draws or IVs, as she believed that taking her blood was equivalent to “taking her energy”. While well-known complications of diabetes include neuropathy, renal disease, and retinopathy, Ms. KS’s case was complicated by generalized distrust of Western medicine and non-adherence to medical advice. While it is entirely possible that her distrust, anxiety, and beliefs are purely personal, her case is a valuable opportunity to learn more about Bangladeshi-American culture and to explore how cultural background can influence healthcare, both at home and in the hospital.

Records of small numbers of Bangladeshi immigrants to the US trace back to the 1880s, but the 2000 US Census estimates a Bangladeshi-American population of over 140,000 [2]. This increase was primarily the result of a major influx of immigrants during the 1990s, which, not surprisingly, is when Ms. KS’s family immigrated to America. During this decade, the Bangladeshi population grew by over 450% [2]. There are dense communities of Bangladeshi people in New York City, Los Angeles, Miami, Washington D.C., Atlanta, and Dallas. Limited English proficiency is quite common among this population: 65% of working-age Bangladeshi New Yorkers and 83% of Bangladeshi senior citizens speak limited English [3]. These numbers are markedly elevated compared to the city-wide averages of 25% for adults and 27% for senior citizens, respectively. Although English is a common second language in Bangladesh, some anthropological readings suggest that many immigrants may speak primarily Bengali as a means of maintaining cultural identity [4]. Ms. KS refused to use a medical translator, but these services are fortunately becoming increasingly available. However, financial strain remains a common obstacle to healthcare, especially for Bangladeshi patients. Census data indicate that Bangladeshi populations in New York City have, on average, incomes of less than half the city-wide average and are 50% more likely to live in poverty [3]. Based on these data, Ms. KS’s limited English proficiency is not surprising even though she has lived here for many years. She is also more likely to have financial hardships than the average New Yorker, possibly requiring the help of a social worker.

An interesting article published in the British Medical Journal in 1998 provides insight into Bangladeshi health beliefs in the context of Western medicine. This qualitative study used narratives, semi-structured interviews, focus groups, and pile-sorting exercises to explore the influence of cultural background on 40 British Bangladeshi diabetic patients as compared with 10 non-Bangladeshi controls. The study explores important aspects of diabetes education and management and reveals some interesting similarities and differences between the Bangladeshi subjects and the non-Bangladeshi controls:

Body Concept/Exercise. When asked to pick out images of “healthy-appearing” individuals, Bangladeshi subjects were more likely to select those of a larger body habitus, correlating size with strength. This correlation only held to a certain extent though, with morbidly obese patients being identified as less healthy [5]. Although most subjects acknowledged that they had been given advice to exercise, this advice appeared to have little cultural meaning. Some viewed exercise as a cause for illness exacerbation. Notably, some dialects of Bengali do not even have a word for “exercise” [5]. Sports and games are infrequent among adults in Bangladesh [6]. This information suggests that physicians should steer clear of advising Bangladeshi patients to “lose weight” and “exercise”. Rather, the emphasis should be placed on an active lifestyle, the significance of visceral fat, and replacing fat with muscle.

Diet/nutrition. When asked to sort foods as healthy or unhealthy, Bangladeshi subjects most commonly picked foods perceived to provide the most energy and strength as “healthy”, including white sugar, lamb, beef, solid fat, and spices. Interestingly, raw, baked, and grilled foods were largely considered “indigestible” and “unsuitable” for elderly, debilitated, or young people [5]. A common nutritional recommendation to bake or grill foods rather than fry them may therefore be seen as discordant with cultural dietary perceptions. Although time-saving, broad generalizations in dietary advice may be less successful and should be replaced by customized meal plans in accordance with both the diabetic diet and cultural beliefs. Furthermore, the impact of snacks on glycemic control should be directly addressed with Bangladeshi patients. Eighteen of 18 study participants were in favor of eating snacks between meals to “sustain strength”, and only 5 of these 18 subjects thought snacks could cause any harm [5]. Ms. KS commonly snacked throughout the day, and although a diabetic diet was delivered to her for each meal, she never ate the prescribed food and instead chose to eat what her family brought, which commonly featured beef and white rice. Ms. KS is a perfect example of how cultural preferences and perceptions of food can make diabetes difficult to control.

Diabetic monitoring. Most patients in this study reported that they monitored glucose levels as instructed by their doctors; however, understanding of the importance of surveillance was limited. Most Bangladeshi patients believed that a lack of symptoms implied health and well-controlled diabetes. They did not see a need to visit a doctor if they were feeling well [5]. This appeared to be the case for Ms. KS—she had a glucometer at home but reported rarely using it, and her only visits to a medical professional were in an emergency setting. For patients like her, repeated education about routine glucose testing and preventive care visits to primary care, podiatry, and ophthalmology clinics cannot be stressed enough.

While these study results provide interesting insights, it should be acknowledged that no scientific or anthropological study of Bangladeshi culture can be uniformly applied to all Bangladeshis, nor should cultural beliefs be considered static. However, such studies can be invaluable as a basis for culturally sensitive diabetes education. A thorough interview of a Bangladeshi patient with diabetes should include questions about the patient’s beliefs regarding the cause and treatment of diabetes and the role of exercise, as well as attempts to reconcile the patient’s traditional diet with the diabetic diet. While the statistics and beliefs identified in this discussion are targeted toward the Bangladeshi population, the information can be conceptually extended to patients of other cultural backgrounds. Ms. KS’s history of medical non-adherence and poor glycemic control may indicate that her future care will be difficult, but given her unique background, her case was also an opportunity for cultural sensitivity to prevail over differences in beliefs.

Kimberly Jean Atiyeh is a student at NYU School of Medicine

Peer reviewed by Val Perel, MD, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. New York State Department of Health. Information for a Healthy New York: Diabetes. www.health.state.ny.us.   2010.

2. U.S Census Bureau. American Community Survey. American FactFinder. 2007. Factfinder.census.gov.  http://factfinder2.census.gov/legacy/aff_sunset.html

3. Asian American Federation of New York. Census Profile: New York City’s Bangladeshi American Population. Asian American Federation of New York Census Information Center. 2005. www.aafny.org

4. Jones J. Bangladeshi Americans. Countries and Their Cultures. Everyculture.com.  Accessed September 26, 2010.

5. Greenhalgh T, Helman C, Chowdhury A. Health beliefs and folk models of diabetes in British Bangladeshis: a qualitative study. BMJ. 1998;316(7136):978. http://www.bmj.com/content/316/7136/978.abstract

6. Chowdhury A. Household kin and community in a Bangladesh village. Exeter: University of Exeter. 1986.

Does the BCG Vaccine Really Work?

March 14, 2012

By Mitchell Kim

Faculty Peer Reviewed

Mycobacterium tuberculosis, an acid-fast bacillus, is the causative agent of tuberculosis (TB), an infection that causes significant morbidity and mortality worldwide. A highly contagious infection, TB is spread by aerosolized pulmonary droplet nuclei containing the infective organism. Most infections manifest as pulmonary disease, but TB is also known to cause meningitis, vertebral osteomyelitis, and other systemic diseases through hematogenous dissemination.[1] In 2009, there were an estimated 9.4 million incident and 14 million prevalent cases of TB worldwide, with a vast majority of cases occurring in developing countries of Asia and Africa. Approximately 1.7 million patients died of TB in 2009.[2]

TB has afflicted human civilization throughout known history, and may have killed more people than any other microbial agent. Robert Koch first identified the bacillus in 1882, for which he was awarded the Nobel Prize in 1905. In 1921, Albert Calmette and Camille Guérin developed a live TB vaccine known as the bacille Calmette-Guérin (BCG) from an attenuated strain of Mycobacterium bovis.[3]

As the only TB vaccine, BCG has been in use since 1921,[4] and is now the most widely used vaccine worldwide,[5] with more than 3 billion total doses given. BCG was initially administered as a live oral vaccine. This route of administration was abandoned in 1930 following the Lübeck (Germany) disaster, in which 27% of 249 infants receiving the vaccine developed tuberculosis and died. It was later discovered that the Lübeck vaccine had been contaminated with virulent M tuberculosis. The intradermal route of administration was subsequently shown to be safe for mass vaccination in studies conducted in the 1930s.[6] The World Health Organization currently recommends BCG vaccination for newborns in high-burden countries, although the protection against TB is thought to dissipate within 10-20 years.[7] The BCG vaccine is not used in the US, where TB control emphasizes treatment of latently infected individuals.[3]

Although the vaccine is widely used, its efficacy in preventing pulmonary TB is uncertain, with studies showing 0-80% protective benefit. A meta-analysis performed in 1994 showed that the BCG vaccine reduces the risk of pulmonary TB by 50% on average, with greater reductions in the risk of disseminated TB and TB meningitis (78% and 64%, respectively).[8] It is currently accepted that the BCG vaccine provides protection against TB meningitis and disseminated TB in children, as well as against leprosy in endemic areas such as Brazil, India, and Africa.[9]

There are several possible explanations for the variations in BCG vaccine efficacy found in different studies. Based on the observation that BCG vaccine trials showed greater efficacy at higher latitudes than at lower latitudes (P<0.00001), it has been hypothesized that exposure to certain endemic mycobacteria, thought to be more common at lower latitudes, might provide natural immunity to the indigenous population, so that the BCG vaccine adds little to this natural protection. The higher prevalence of skin reactivity to PPD-B (a Mycobacterium avium-intracellulare antigen) at lower latitudes supports this theory. However, no conclusive link has been found between endemic mycobacterial exposure and protection against TB. In addition, TB infection rates are highest at lower latitudes, where natural immunity should be the greatest;[5] this may indicate that other factors are at play. Another reason the observed efficacy of BCG vaccines may vary so widely is that they are produced at different sites around the world, with inconsistent quality control.[4] Also, the vaccine’s efficacy depends on the viability of the BCG organisms, which can be markedly altered by storage conditions.[10]

BCG is considered a safe vaccine,[4] with the main side effect being a localized reaction at the injection site with erythema and tenderness, followed by ulceration and scarring. This occurs almost invariably after correct intradermal administration. Overall, the rate of any adverse reaction has been reported to be between 0.1% and 19%,[11] and serious adverse reactions such as osteitis, osteomyelitis, and disseminated BCG infection are rare,[7] estimated to occur less than once per 1 million doses given.[11] Disseminated BCG infection is a serious complication seen almost exclusively in immunized patients with underlying immunodeficiency, such as HIV infection or severe combined immunodeficiency. This complication carries a high mortality rate of 80-83%, with fatalities estimated at 0.19-1.56 cases per 1 million vaccinations.[7]

Immunization with the BCG vaccine increases the risk of a positive purified protein derivative (PPD) tuberculin skin test. This can complicate the interpretation of a PPD test and may lead to unnecessary preventive treatment in people who do not truly have latent TB infection. However, it has been shown that a person’s age at the time of BCG vaccination, as well as the years since vaccination, affects the risk of PPD positivity. The US Preventive Services Task Force therefore recommends PPD screening of high-risk patients and advises that a >10 mm induration after PPD administration should not be attributed to the BCG vaccine. If a patient has previously received the BCG vaccine, the CDC recommends using the QuantiFERON-TB Gold test (QFT-G, Cellestis Limited, Carnegie, Victoria, Australia), an interferon-gamma release assay, to detect TB exposure instead of the PPD. This test is specific for M tuberculosis proteins and does not cross-react with BCG. The major drawback of the QFT-G test is that it is roughly 3 times more expensive than the PPD test.[12]
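
To make the interplay between prior BCG vaccination and screening concrete, here is a minimal sketch of the decision logic described above. It is a simplification for illustration only: the 10 mm cutoff shown applies to high-risk patients, other cutoffs (5 mm, 15 mm) used for other risk groups are not modeled, and the function names are invented for this example.

    def preferred_screening_test(prior_bcg: bool) -> str:
        # An interferon-gamma release assay (e.g., QFT-G) avoids BCG cross-reactivity
        # but costs roughly three times as much as the PPD.
        return "QFT-G (interferon-gamma release assay)" if prior_bcg else "PPD skin test"

    def interpret_ppd(induration_mm: float, high_risk: bool) -> str:
        """Toy PPD interpretation at the 10 mm high-risk cutoff (illustrative only)."""
        if high_risk and induration_mm >= 10:
            # Per the discussion above, >=10 mm induration in a high-risk patient
            # should not be attributed to prior BCG vaccination.
            return "Positive: evaluate for latent or active TB."
        return "Not positive at the 10 mm high-risk cutoff."

    print(preferred_screening_test(prior_bcg=True))
    print(interpret_ppd(12, high_risk=True))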

In summary, the BCG vaccine has been in use for 90 years to reduce the prevalence of TB infection. It is the most widely used vaccine worldwide, with 100 million doses administered every year.[7] Although the vaccine is compulsory in 64 countries and recommended in another 118, its use is uncommon in the US, where treatment of latent infection is the major form of TB control. The vaccine limits multiplication and systemic dissemination of TB [13] and decreases the morbidity and mortality of TB infection, but has no effect on its transmission [7] and has no role in the secondary prevention of TB.[13] The vaccine’s efficacy in preventing pulmonary TB is highly variable, but it is thought to be efficacious in preventing TB meningitis, disseminated TB, and leprosy. To make up for the BCG vaccine’s shortcomings in preventing pulmonary TB, substantial progress is being made in the field of TB vaccines. In 2010, 11 vaccine candidates were being evaluated in clinical trials, with 2 being evaluated for efficacy.[9] Future advances in TB vaccine development may improve on the foundations built by the BCG vaccine in reducing the worldwide health burden of this ancient disease.

Mitchell Kim is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Robert Holzman, MD, Professor Emeritus of Medicine and Environmental Medicine; Departments of Medicine (Infectious Disease and Immunology) and Environmental Medicine

Image courtesy of Wikimedia Commons

References

1. Raviglione MC, O’Brien RJ. Tuberculosis. In: Fauci AS, Braunwald E, Kasper DL, Hauser SL, Longo DL, Jameson JL, Loscalzo J, eds. Harrison’s Principles of Internal Medicine. 17th ed. New York, NY: McGraw-Hill; 2008: 1006-1020.

2. World Health Organization. Global Tuberculosis Control: WHO Report 2010. http://www.who.int/tb/publications/global_report/en/. Accessed September 11, 2011.

3. Daniel TM. The history of tuberculosis. Respir Med. 2006;100(11):1862-1870.

4. World Health Organization. Initiative for Vaccine Research: BCG–the current vaccine for tuberculosis. http://www.who.int/vaccine_research/diseases/tb/vaccine_development/bcg/en/. Published 2011. Accessed September 11, 2011.

5. Fine PE. Variation in protection by BCG: implications of and for heterologous immunity. Lancet. 1995;346(8986):1339-1345.

6. Anderson P, Doherty TM. The success and failure of BCG–implications for a novel tuberculosis vaccine. Nat Rev Microbiol. 2005;3(8):656-662.

7. Rezai MS, Khotaei G, Mamishi S, Kheirkhah M, Parvaneh N. Disseminated Bacillus Calmette-Guérin infection after BCG vaccination. J Trop Pediatr. 2008; 54(6): 413-416.

8. Colditz GA, Brewer TF, Berkey CS, et al. Efficacy of BCG vaccine in the prevention of tuberculosis: Meta-analysis of the published literature. JAMA. 1994;271(9):698-702.

9. McShane H. Tuberculosis vaccines: beyond bacille Calmette-Guérin. Philos Trans R Soc Lond B Biol Sci. 2011;366(1579):2782-2789.

10. World Health Organization. Temperature sensitivity of vaccines. http://www.who.int/vaccines-documents/DocsPDF06/847.pdf. Published August, 2006. Accessed October 30, 2011.

11. Turnbull FM, McIntyre PB, Achat HM, et al. National study of adverse reactions after vaccination with bacille Calmette-Guérin. Clin Infect Dis. 2002;34(4):447-453.

12. Rowland K, Guthmann R, Jamieson B. Clinical inquiries. How should we manage a patient with a positive PPD and prior BCG vaccination? J Fam Pract. 2006;55(8):718-720.

13. Thayyil-Sudhan S, Kumar A, Singh M, Paul VK, Deorari AK. Safety and effectiveness of BCG vaccination in preterm babies. Arch Dis Child Fetal Neonatal Ed. 1999;81(1):F64-F66.

Tales of Survival: An Open Letter to My Patient Mrs. B.

March 2, 2012

By Vivek Murthy

Case report:

Mrs. B is a 68-year-old female with a PMH of small cell lung CA metastatic to the liver s/p last chemo six weeks ago, presenting with RUQ pain migrating to her RLQ for the last 24 hours. Physical exam reveals a fatigued but pleasant African-American female appearing her stated age, in obvious pain that is making her eyes water. Exam is significant for R supraclavicular LAD, a distended abdomen, + Murphy's sign, and exquisite tenderness to palpation and guarding in the RUQ and RLQ. A liver span of 12-13 cm by scratch test is also noted. She is afebrile (T 98.1F); labs are significant for WBC 14.0 and LFTs wnl. U/S, HIDA scan, and CT abdomen are consistent with distention of the gallbladder by stones and cholecystitis secondary to obstruction of the cystic duct by periportal LAD and expansion of liver metastases, representing progression of the patient's primary disease. The patient was discharged on PO antibiotics, with arrangements for laparoscopic cholecystectomy in two weeks as well as follow-up with her oncologist for management of stage IV disease.

An open letter to my patient, Mrs. B:

Dear Mrs. B,

I first saw you in the hallway, in your wheeled bed, a silent stranger. You held a closed book with a finger reserving your page. You reached into your pocket for your phone and took a call, suddenly becoming animated and showing the beauty and complexity of your life in your conversation. And then you hung up, smiled to no one, and took up your book again, retreating into your anonymity. And as I watched, I was told by my resident that I would be following your care in the hospital. It was a memorable moment for me, especially because two weeks later, after having listened to your heart every morning as the sun leaked through the window, having traded amusing stories with you from our lives outside the hospital, having felt the hardened lymph nodes in your neck every day and learning their shapes, pressing on your belly and feeling your cancer, measuring your pain and understanding it and coming to hate it as much as you did, it would seem impossible to me that I had ever regarded you as a stranger.

I had become a reporter on the condition of your body. I found the remaining muscle deficits from your ancient Bell’s palsy and wrote about them in a note. I worried about your worsening pain at night after I left the hospital. I followed your white blood cell count religiously and in those two weeks it became the first thing I looked at each morning, a significant number in my life. Our new closeness was obvious and apparent to me. But I was ignorant of its depth, because it did not occur to me until later that I had gotten to know the regularities of your body better than I knew my own. I had never listened to my own heart with a stethoscope before and obsessed over systole, as I had with you. I had never become familiar with the noises of my own breathing. I did not know whether I could feel my own liver or spleen, and I had never before assessed with any degree of precision my own ins and outs.

I had never seen people like that before, as measurable and observable, full of physical exam findings waiting to receive the accolade of documentation in a medical student note. But after examining you twice a day for two weeks, Mrs. B, it soon became difficult to see people in any other way. It was as if a whole realm of detail on the human body had gradually come into existence. There was the dip of a stranger’s clavicle as they hailed a cab. Two of the four dreaded traits of a mole on someone’s back on Lexington Avenue. Perfectly accessible veins on the forearm of the barista handing me a cup of coffee. Something that looked like Russell’s sign on the knuckles of a teenage girl at a restaurant. Strabismus. Bulging neck vessels and swollen legs, those classic stigmata of heart failure, in the owner of my favorite carryout place. The barrel chest of a homeless man. Anisocoria in a cab driver’s mirror. Vitiligo on the subway.

These accidental physical exams disturbed me. I was quietly invading the private space of strangers, and I knew details about their lives without having ever met them. Perhaps I even knew things about them that they did not know: the unusual mole, the swollen legs, the barrel chest. It would never be appropriate, obviously, to approach them with an unsolicited evaluation. This worried me. I had never before understood what people meant by the phrase ‘burden of knowledge’. It had not occurred to me how knowledge could be a burden, but suddenly here it was, burdening me. Those precious details at the hospital bedside, the physical exam findings celebrated in medicine as proof of a doctor’s meticulousness, were following me home and adding strange color to my life.

Over the course of the past two months, Mrs. B, this unexpected feeling of guilty trespassing has gone, and in its place remains something even more unexpected: a tenderness towards the sick and the confidence of knowing that I am not only learning the art of medicine but practicing it every day, unintentionally. A surgical scar on a passerby now prompts a search in my head for an explanation. Abnormal gaits are scrutinized, converted into the starched nomenclature of neurology, and confronted with the question, where is the lesion? Recreational diagnosis no longer seems the burden it once was. It now offers an opportunity to heighten the senses, to practice, to improve.

It was you, Mrs. B, who started to open my eyes to this world of detail that previously went unnoticed, for a very long time, right in front of me.

Yours,

Vivek

Vivek Murthy is a 4th year medical student at NYU School of Medicine

What are the Barriers to Using Low Dose CT to Screen for Lung Cancer?

February 23, 2012

By Benjamin Lok

Faculty Peer Reviewed

Lung cancer is the most common cause of cancer deaths globally [1] and responsible for an estimated 221,120 new cases and 156,940 deaths in 2011 in the United States.[2] Presently, the United States Preventive Services Task Force, the National Cancer Institute (NCI), the American College of Chest Physicians, and most other evidence-based organizations do not recommend screening for lung cancer with chest x-ray or low-dose helical computed tomography (CT) due to inadequate evidence to support mortality reduction.[3] This recommendation, however, may soon change.

LOW-DOSE CT SCREENING REDUCES MORTALITY

In October 2010, the NCI announced that the National Lung Screening Trial (NLST) was concluded early because the study showed that low-dose CT screening, when compared with screening by chest radiography, resulted in a 20.0% relative reduction in lung cancer-related mortality and an all-cause mortality reduction of 6.7%. The number needed to screen with low-dose CT to prevent one lung cancer death was 320.[4] This report, published in the August 4th, 2011 issue of the New England Journal of Medicine, is the first randomized controlled trial of lung cancer screening to show a significant mortality benefit.

The trial enrolled 53,454 high-risk current and former smokers aged 55 to 74 years who had a history of at least 30 pack-years. Former smokers (52% of the total) had to have quit within the previous 15 years. Participants underwent three annual screenings with low-dose CT or chest x-ray and were then followed for an additional 3.5 years. Suspicious screening results were three-fold higher in the low-dose CT group than in the radiography group across all three screening rounds (24.2% vs 6.9%). More than 90% of positive tests in the first round of screening led to follow-up, mainly consisting of further imaging, with invasive procedures rarely performed. More cancers were diagnosed after the screening period in the chest radiography group than in the low-dose CT group, suggesting that radiography missed more cancers during the screening period. Furthermore, cancers detected in the low-dose CT arm were more likely to be early-stage than those discovered after chest radiography.

NLST STUDY LIMITATIONS

Lung cancer-specific deaths were 247 and 309 per 100,000 person-years in the low-dose CT and chest radiography groups, respectively, a statistically significant difference (P=0.004).[4] The internal validity of the study is firm, based on similar baseline characteristics and rates of follow-up between the two study groups.[4-6] Whether these results can be applied to the general population, however, is uncertain. Trial participants were, on average, younger urban dwellers with a higher level of education than a random sample of smokers 55 to 74 years old,[4,5] which might have increased adherence in the study. Furthermore, the radiologists interpreting the screening images had additional training in reading low-dose CT scans and presumably greater experience due to a high workload.
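The 20% figure quoted earlier follows directly from these two person-year mortality rates. A minimal Python sketch of that arithmetic (purely illustrative; the published number needed to screen of 320 was derived from cumulative deaths over the whole follow-up period and is therefore quoted rather than recomputed here):

# Lung cancer deaths per 100,000 person-years, as reported by the NLST
rate_low_dose_ct = 247.0
rate_chest_xray = 309.0

relative_reduction = (rate_chest_xray - rate_low_dose_ct) / rate_chest_xray
print(f"Relative reduction in lung cancer mortality: {relative_reduction:.1%}")  # ~20%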

COST EFFECTIVENESS

One major barrier to implementation of any screening program is its cost. Eventual analysis of the results from the NLST will answer this question definitively; in the meantime, a recent Australian study can provide some provisional guidance.[7] Manser and colleagues examined the cost-effectiveness of CT screening in a baseline cohort of high-risk male current smokers aged 60 to 64 years, with an assumed annual lung cancer incidence of 552 per 100,000, and determined that the incremental cost-effectiveness ratio was $105,090 per quality-adjusted life year (QALY) saved.[7] This is less than the generally accepted upper limit of $113,000 per QALY in the United States, but far above the $50,000 per QALY threshold that many authors of cost-effectiveness analyses advocate.[8] The NLST study population had an approximate annual lung cancer incidence of 608.5 per 100,000,[4] which is similar to the incidence rate in the Australian analysis. Though this extrapolation is purely speculative, it suggests that if the upper limit of $113,000 per QALY were the cut-off, low-dose CT screening in the United States might be cost-effective.
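For readers unfamiliar with the metric, an incremental cost-effectiveness ratio is simply the extra cost of the screening strategy divided by the extra QALYs it buys, compared against a willingness-to-pay threshold. The sketch below illustrates the calculation with hypothetical program costs and QALY totals; these are not the Manser figures, although the resulting ratio is deliberately in the same range:

# Minimal ICER sketch; all costs and QALY totals are hypothetical placeholders.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per quality-adjusted life year gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

ratio = icer(cost_new=2_500_000, cost_old=1_450_000,  # total program costs (hypothetical)
             qaly_new=1510.0, qaly_old=1500.0)        # total QALYs (hypothetical)

for threshold in (50_000, 113_000):  # the two thresholds discussed in the text
    verdict = "cost-effective" if ratio <= threshold else "not cost-effective"
    print(f"ICER ${ratio:,.0f}/QALY vs ${threshold:,}/QALY threshold: {verdict}")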

IMPORTANCE OF IDENTIFYING HIGH-RISK PATIENTS

A second important issue is the identification of patients most likely to benefit from CT screening. In the Australian study, screening a lower-risk group with an annual lung cancer incidence of 283 per 100,000 cost $278,219 per QALY saved.[7] The national incidence in the US in 2006 was 63.1 per 100,000 person-years.[9] Screening the average person in the US general population would therefore require an astronomical expenditure of resources (greater than $1 million per QALY saved), dramatically increase the false-positive rate, and promote unnecessary exposure to carcinogenic radiation. Accordingly, only high-risk patients (eg, advanced age, positive family history, and heavy smoking history) should be considered for this screening modality.
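These incidence figures translate into a stark difference in screening yield. Assuming, as a simplification, that the quoted annual incidence applies uniformly to everyone screened:

# Rough illustration: expected annual lung cancer cases per 1,000 people screened
for label, incidence_per_100k in [("NLST-like high-risk cohort", 608.5),
                                  ("US general population, 2006", 63.1)]:
    cases_per_1000 = incidence_per_100k / 100
    print(f"{label}: ~{cases_per_1000:.1f} cases per 1,000 screened per year")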

ADDITIONAL BARRIERS AND ISSUES

The United Kingdom Lung Screening Trial investigators listed other issues that will need to be addressed prior to implementation of a screening program:[10]

1. Synchronization of CT technique and scan interpretations

2. Value of the diagnostic work-up techniques for positive screening findings and establishing standards for follow-up

3. Optimal surgical management of detected nodules in patients

4. Optimal screening interval for both screen-negative and screen-positive patients

5. Continued study and collaboration by academic organizations, federal institutions and policymakers.

CONCLUSIONS ON HOW TO COUNSEL PATIENTS

With all this in mind, how are we to counsel patients interested in lung cancer screening? First, only patients at high risk for lung cancer should be considered for low-dose CT screening. Even in the high-risk NLST cohort, positive images were false roughly 95% of the time in both study arms.[4] In patients at lower risk, these false-positive rates will undoubtedly be even higher. Second, patients should be informed about the potential harms of detecting benign abnormalities that require follow-up and possibly invasive work-up, which can result in adverse outcomes. Finally, even with the exciting demonstration of mortality reduction by a lung cancer screening modality, smoking cessation will remain one of the most important interventions for reducing mortality from lung cancer.
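The warning about false positives in lower-risk patients is Bayes' rule at work: as pretest risk falls, the positive predictive value of a positive scan collapses. The sketch below uses assumed, illustrative test characteristics (not NLST-derived values) to show the trend:

# Illustrative positive predictive value calculation; sensitivity and
# specificity are assumptions chosen only to demonstrate the trend.
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.93, 0.76              # assumed test characteristics (illustrative)
for prev in (0.01, 0.001):           # roughly high-risk vs lower-risk annual probability
    p = ppv(sens, spec, prev)
    print(f"Prevalence {prev:.1%}: PPV {p:.1%}, so about {1 - p:.1%} of positives are false")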

Benjamin Lok is a 4th year medical student at NYU School of Medicine

Peer reviewed by Craig Tenner, MD, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Parkin DM, Bray F, Ferlay J, Pisani P. Global cancer statistics, 2002. CA Cancer J Clin. 2005;55(2):74-108.

2. American Cancer Society. Cancer facts and figures 2011. http://www.cancer.org/Research/CancerFactsFigures/CancerFactsFigures/cancer-facts-figures-2011. Accessed July 7, 2011.

3. National Cancer Institute: PDQ® Lung Cancer Screening. 2011; http://cancer.gov/cancertopics/pdq/screening/lung/HealthProfessional. Accessed July 21, 2011.

4. National Lung Screening Trial Research Team. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med. 2011;365(5):395-409.

5. National Lung Screening Trial Research Team. Baseline characteristics of participants in the randomized national lung screening trial. J Natl Cancer Inst. 2010;102(23):1771-1779.

6. Sox HC. Better evidence about screening for lung cancer. N Engl J Med. 2011;365(5):455-457.

7. Manser R, Dalton A, Carter R, Byrnes G, Elwood M, Campbell DA. Cost-effectiveness analysis of screening for lung cancer with low dose spiral CT (computed tomography) in the Australian setting. Lung Cancer. 2005;48(2):171-185.

8. Weinstein MC. How much are Americans willing to pay for a quality-adjusted life year? Med Care. 2008;46(4):343-345.

9. American Lung Association. State of lung disease in diverse communities, 2010. http://www.lungusa.org/assets/documents/publications/lung-disease-data/solddc_2010.pdf. Accessed July 28, 2011.

10. Field JK, Baldwin D, Brain K, et al. CT screening for lung cancer in the UK: position statement by UKLS investigators following the NLST report. Thorax. 2011;66(8):736-737.

EKG Websites: A Review of the Most Viewed Websites

February 3, 2012

By Melissa Mroz and Rachel Bond

Faculty Peer Reviewed

The electrocardiogram (EKG) is not a test interpreted only by cardiologists.

In fact, it is usually early in the year that the new medical student is handed an EKG, top flipped down so as not to "cheat," and asked to interpret the rhythmic black squiggles on red graph paper. I still remember the anxiety-provoking questions asked on my medicine clerkship. As with many skills I thought would magically become part of my repertoire on July 1st of my intern year, reading an EKG was still anxiety-provoking. After having read Dubin, I was at least able to keep my hands from shaking and go through the steps, including rate, rhythm, and axis. Now, in an era when an almost infinite amount of information is at our fingertips, it is only natural that the internet plays a role in learning how to read EKGs.

Typing "EKG interpretation" into Google returns about 1,180,000 results. The following sites are among the first results and vary in usability.

To simplify the review, websites are grouped into highly recommended, recommended, or not recommended based on my evaluation; however, please feel free to explore all of the websites at your own discretion.

Highly Recommended:

http://www.rnceus.com/course_frame.asp?exam_id=16&directory=ekg An excellent website that covers only rhythms using rhythm strips (no 12-leads here). This site goes through very basic physiology, how to read an EKG, and common abnormal EKGs. It is user friendly and has quiz questions interspersed throughout the course and at completion. The table of contents that is present throughout the course makes this site especially easy to navigate. This is a recommended site for beginners to EKG interpretation.

http://library.med.utah.edu/kw/ecg/ A helpful site with advanced physiology, a robust library of EKGs, and an advanced array of quiz questions. It is easy to navigate. It includes a section with the AHA competencies for EKG interpretation, with examples of each diagnosis considered to be the minimum knowledge needed to be deemed competent in EKG interpretation. This is a good site for someone more advanced.

http://en.ecgpedia.org/ A great website modeled after Wikipedia, with a basic introduction to EKG reading. There is a lot of information on the site, which breaks EKG reading down into "7+2" easy steps. It stresses the importance of step-wise interpretation of EKGs so nothing is missed. It also includes a lot of the physiology behind the waves we interpret on the EKG, which helps us understand the tracing a little better. The only downside I can see is that this website, just like Wikipedia, can be edited by the public. As such, interpret its information at your own risk and check it against another website or ECG book.

Recommended:

http://www.themdsite.com/ Unfortunately, though Dr. Dale Dubin's site may have all of the necessary information, I was distracted by its 80's video game feel, complete with neon green and blue lettering and flashing animation. It includes many helpful images from his book, but it is difficult to read. It feels more like an advertisement than a learning site.

http://www.ecglibrary.com/ As the name suggests, this website is a library of different EKGs, with brief explanations of abnormal findings. It is useful for those who do not need a lengthy explanation or who prefer lists, such as a list of causes of right axis deviation or a description of trifascicular blocks. It also has a section describing the history of electrocardiography for the history buffs out there.

http://www.mauvila.com/ECG/ecg.htm This website provides a great overview of all the information one would need to master EKG interpretation. It does this in a sophisticated way that is nonetheless simple enough for a beginner to understand, with the help of fundamental pictures. In addition, it includes some basic pathophysiology and historical facts for the self-proclaimed history buffs out there. It also includes a quiz at the end to test your skills. The downside of the website is that each section is a long and rather wordy read, so it should definitely be complemented with another EKG website or tutorial book.

http://www.learntheheart.com A basic review of EKG findings with explanations. It is similar in structure to a few of the websites already reviewed; however, it includes EKG case scenarios as well as quiz reviews that place the EKG in a clinical situation any of us in the medical field might encounter.

http://www.emedu.org/ecg/index.htm This website offers a brief and incomplete overview of EKGs, covering both commonly seen and not-so-commonly seen rhythms. The great part of the site is that it includes actual EKGs with arrows and circles pointing out the findings typical of each situation. It also provides explanations and differentials for why certain findings appear on EKGs. This is a good website for someone already comfortable with the basics of EKG reading. Negatives include that it is not easy to navigate (you need to click back on your browser to return to the home page every time you want a new page), the "answers" for the EKGs are very sparse on explanation (e.g., "ischemia"), and the library of EKGs is not diverse.

Conclusion:

There are no websites listed above that I would not recommend. Everyone can get some benefit from evaluating each of these websites and placing that evaluation in a clinical context. For those unable to evaluate them all, the 'highly recommended' sites should be explored before delving into the 'recommended' ones.

Dr. Melissa Mroz is a 3rd year resident at NYU Langone Medical Center

Dr. Rachel Bond is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Robert Donnino, MD,  cardiology editor, Clinical Correlations

Image courtesy of Wikimedia Commons

When Is Hemoglobin A1c Inaccurate In Assessing Glycemic Control?

February 1, 2012

By Joseph Larese

Faculty Peer Reviewed

Hemoglobin A1C (HbA1c) is an invaluable tool for monitoring long-term glycemic control in diabetic patients. However, many clinicians managing diabetics have encountered the problem of HbA1c values that do not agree with fingerstick glucose logs. Before suspecting an improperly calibrated glucometer or poor patient record keeping, it is useful to consider the situations in which HbA1c may be spuriously elevated or depressed. These issues are best understood after reviewing how HbA1c is defined and measured–topics fraught with considerable confusion.

Glycosylation is a non-enzymatic, time-dependent chemical reaction in which glucose binds to the amino groups of proteins.[1] Historically, and long before its precise chemistry was discovered, glycosylated Hb was defined as an area of an elution chromatogram containing hemoglobin glycosylation products. This elution peak was labeled as HbA1, in keeping with the existing nomenclature (HbA, HbA2, HbF, etc. had been identified previously). Later it was recognized that the chromatographic HbA1 region is not homogeneous and consists of several component peaks, designated A1a, A1b and A1c, with HbA1c being the dominant one.[1] The HbA1c fraction also turned out to correlate best with mean serum glucose concentrations, ie, to be a better index of long-term glycemia.

Relatively recently HbA1c was redefined chemically: now glycohemoglobin refers to hemoglobin glycosylated at any of its amino groups, while HbA1c is defined as glycohemoglobin with glucose bound specifically to the terminal valine of the beta-globin chain. Consequently, the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has developed a standard reference method for HbA1c in which hemoglobin is cleaved with a specific peptidase into multiple oligopeptides, including a terminal hexapeptide containing the site of glycosylation in HbA1c. Glycosylated and non-glycosylated hexapeptides are separated by high-performance liquid chromatography (HPLC) and quantified. The ratio of concentrations of HbA1c to total Hb A is then reported.[2,3] All methods currently in use at NYU-affiliated hospitals are calibrated against that new IFCC standard.

Multiple epidemiologic studies and, most critically, the Diabetes Control and Complications Trial (DCCT) and the UK Prospective Diabetes Study (UKPDS) showed that HbA1c was a strong predictor of at least some forms of diabetes morbidity.[4,5] Based on that, HbA1c has been universally accepted not just as a measure of long-term glycemia, but as a clinically relevant surrogate measure of glycemic control. Consequently, an enormous (and impressively successful) international effort was undertaken to standardize all existing A1c assays so that their results can be referenced to the DCCT and UKPDS. However, both these trials measured HbA1c chromatographically. Compared to the modern reference standard these techniques overestimate HbA1c, as they detect various hemoglobin variants and non-HbA1c glycohemoglobins. Nonetheless, these two trials have formed the basis of HbA1c-based diabetes management, and in order to ensure historic continuity and interpretability, most of the HbA1c methods in current use (and all in use at NYU-affiliated hospitals) are back-referenced to DCCT “units.” For the most part, HbA1c of 7% in the UKPDS trial has the same meaning as HbA1c of 7% obtained on a patient in 2012.
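In practice, this back-referencing amounts to a linear conversion between IFCC-standardized results (reported in mmol/mol) and NGSP/DCCT-aligned results (reported in %). A hedged sketch using the commonly published master-equation coefficients, which are not taken from this article and should be treated as approximate:

# Approximate IFCC (mmol/mol) to NGSP/DCCT-aligned (%) conversion; coefficients
# are the commonly published values, quoted here as an external approximation.
def ifcc_to_ngsp(ifcc_mmol_per_mol):
    return 0.0915 * ifcc_mmol_per_mol + 2.15

for ifcc in (48, 53, 64):  # example IFCC values
    print(f"IFCC {ifcc} mmol/mol is approximately NGSP {ifcc_to_ngsp(ifcc):.1f}%")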

A multitude of HbA1c assays employing a variety of methods are available commercially. All of them are essentially two tests packaged in one: HbA1c and HbA, with the ratio of the two reported as A1c%. Multiple conditions interfere with one or both measurements: some are purely physiologic and assay-independent, while others depend on the particular test being used.

Physiologic “errors” occur when the average age of red blood cells is significantly altered and therefore the time that each molecule of HbA is exposed to blood glucose deviates from usual. Conditions that decrease mean erythrocyte age, such as recent transfusions or increased erythropoiesis secondary to hemolysis or blood loss, lower hemoglobin A1c[1,6,7], while those increasing mean erythrocyte age, such as asplenia, tend to increase HbA1c levels.[7] In all these instances, even if HbA1c itself is measured correctly, a given value of A1c% will correspond to a different average serum glucose concentration. Such “physiologic” errors do not depend on the particular assay used.
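The clinical consequence is that the nominal mapping from A1c% to average glucose no longer holds for such patients. As a reference point, the sketch below converts an HbA1c value to an estimated average glucose (eAG) using the widely cited ADAG regression, eAG (mg/dL) ≈ 28.7 × HbA1c (%) − 46.7; that formula is not from this article and describes an average patient with normal red-cell kinetics, which is precisely the assumption that breaks down here:

# Hypothetical example: compare an A1c-derived estimated average glucose (eAG)
# with a fingerstick log. The ADAG coefficients are an external approximation.
def estimated_average_glucose(hba1c_percent):
    return 28.7 * hba1c_percent - 46.7

fingerstick_readings = [132, 158, 141, 170, 149]  # hypothetical log, mg/dL
measured_a1c = 9.2                                # hypothetical lab result, %

eag = estimated_average_glucose(measured_a1c)
mean_fingerstick = sum(fingerstick_readings) / len(fingerstick_readings)
print(f"eAG from HbA1c {measured_a1c}%: about {eag:.0f} mg/dL")
print(f"Mean of fingerstick log: {mean_fingerstick:.0f} mg/dL")
# A large, persistent discrepancy like this one is a prompt to consider altered
# red-cell lifespan or assay interference before blaming the glucometer or the log.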

Because HbA1c is reported as a ratio of HbA1c to HbA, errors in either HbA1c or HbA measurement will cause spurious A1c% results. Most problems in HbA measurements occur when, instead of measuring HbA concentration directly, it is calculated from total hemoglobin. These calculations assume normal hemoglobin fractionation, and any condition in which HbA constitutes a smaller than normal fraction will affect the results. For example, in hereditary persistence of hemoglobin F, a calculation of HbA would overestimate HbA concentration. If absolute HbA1c concentration is measured accurately, the reported HbA1c% will be spuriously decreased.[1,8]
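A small numerical sketch makes the direction of this error concrete. All of the values below are illustrative, chosen only to show how an HbA concentration calculated from total hemoglobin inflates the denominator when HbF is elevated and thereby depresses the reported A1c%:

# Illustrative ratio error when HbA is calculated from total hemoglobin
total_hb = 14.0         # g/dL
hbf_fraction = 0.20     # elevated HbF, e.g. hereditary persistence of HbF
true_hba = total_hb * (1 - hbf_fraction - 0.025)  # assume ~2.5% HbA2, remainder HbA
true_glycation = 0.08   # fraction of HbA molecules actually carrying the A1c modification

hba1c_abs = true_glycation * true_hba  # absolute HbA1c, assumed measured correctly

a1c_with_measured_hba = hba1c_abs / true_hba             # correct: divide by measured HbA
a1c_with_calculated_hba = hba1c_abs / (0.97 * total_hb)  # spurious: assumes ~97% of total Hb is HbA

print(f"A1c% using measured HbA:   {a1c_with_measured_hba:.1%}")    # 8.0%
print(f"A1c% using calculated HbA: {a1c_with_calculated_hba:.1%}")  # spuriously low, ~6.4%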

A particular HbA1c assay may not be sufficiently specific for HbA1c itself and may cross-react with either genetically determined hemoglobin variants or chemically modified Hb species. Patients heterozygous for a variant hemoglobin, including hemoglobin S, C, E, and rarer variants, can thus have falsely elevated or lowered glycohemoglobin results.[1,9,10] This is especially important to remember in diabetic patients of African, Mediterranean, or Southeast Asian descent, as these populations have a high prevalence of hemoglobin variants and heterozygotes can be undiagnosed and asymptomatic.[11] The National Glycohemoglobin Standardization Program maintains an online spreadsheet of commercially available HbA1c assays, reporting interference from each of the major hemoglobin variants and modified hemoglobins.[12] Information on the assays currently used at NYU-affiliated hospitals is provided in the table below.

In real patients, HbA1c measurements are affected by a complex combination of the above factors. For example, in patients with sickle cell trait, care must be taken to select an assay that does not cross-react with HbS or hemoglobin F. The lab should also measure, not calculate, the concentration of HbA. Because the lifespan of erythrocytes is normal in sickle cell trait, selection of an appropriate assay should produce valid HbA1c results.[1,13] In patients homozygous for a variant hemoglobin, such as those with sickle cell anemia or hemoglobin C disease, hemolysis significantly shortens erythrocyte lifespan. Consequently, the fraction of terminally glycosylated variant hemoglobin measured by HbA1c assays is not a useful measure of glycemia in such patients, regardless of the assay used. Glycemic control in these patients must be assessed through other means (for example, by measuring serum fructosamine).

In diabetic patients with end-stage renal disease, erythrocyte lifespan tends to be decreased. This may result in part from iron deficiency anemia, recent transfusions, or other effects of kidney disease on erythrocyte survival. Uremic patients with blood urea nitrogen levels greater than about 85 mg/dL also develop significantly high levels of carbamylated hemoglobin, which interferes with some HbA1c assays.[14] In spite of these complications, HbA1c (measured with an assay that does not cross-react with carbamylated hemoglobin) has been shown to correlate well with average blood glucose levels in diabetic patients on hemodialysis, with questionably significant overestimation of average blood glucose for values of HbA1c greater than 7.5%.[15]

Other factors, such as chronic alcohol, opioid, and salicylate abuse and ingestion of large amounts of vitamins C and E, have been reported to skew A1c results.[10,13,16] Given the lack of large studies, the sometimes contradictory conclusions of these reports, and the unclear mechanisms of these effects, it is impractical to dismiss A1c as invalid in all of these cases. However, knowing that these effects may exist can help guide decision-making when the index of suspicion for an inaccurate A1c is already high.

Since HbA1c measurement is ubiquitous, it seems advisable for providers to become familiar with factors affecting the test in general and the limitations of the assays offered in their laboratory in particular.

Assays in use at NYU-affiliated hospitals at the time of writing, and whether HbA1c is spuriously affected by variant hemoglobins:

Tisch and Manhattan VA (Tosoh G7, variant analysis mode): HbS, no; HbC, yes; HbE, yes; HbD, no; HbF, no (for HbF <30%); carbamylated Hb, no.

Bellevue (Quest Diagnostics, Teterboro, NJ; Roche Tina-quant Hemoglobin A1c Gen 2): HbS, no; HbC, no; HbE, no; HbD, no; HbF, no data but probably yes for HbF >10-15%; carbamylated Hb, no.

Source: HbA1c assay interferences. National Glycohemoglobin Standardization Program Web site. http://www.ngsp.org/interf.asp and http://www.ngsp.org/factors.asp. Updated July 2013. Accessed 1/11/2014.

Joseph Larese is a 4th year medical student at NYU Langone Medical Center

Peer reviewed by Gregory Mints, MD, Medicine (GIM div.), NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Little RR, Roberts WL. A review of variant hemoglobins interfering with hemoglobin A1c measurement. J Diabetes Sci Technol. 2009;3(3):446-451.

2. Jeppsson JO, Kobold U, International Federation of Clinical Chemistry and Laboratory Medicine (IFCC). Approved IFCC reference method for the measurement of HbA1c in human blood. Clin Chem Lab Med. 2002;40:78–89.

3. Hanas R, John G; International HBA1c Consensus Committee. 2010 consensus statement on the worldwide standardization of the hemoglobin A1C measurement. Diabetes Care. 2010;33(8):1903-1904.

4. Diabetes Control and Complications Trial Research Group. The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. N Engl J Med. 1993;329:977-986.

5. UK Prospective Diabetes Study Group. Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). Lancet. 1998;352:837-853.

6. Factors that interfere with HbA1c test results. National Glycohemoglobin Standardization Program Web site. http://www.ngsp.org/factors.asp.  Updated April, 2010. Accessed September 19, 2010.

7. Panzer S, Drorik G, Lechner K, Bettelheim P, Neumann E, Dudezak R. Glycosylated hemoglobins (GHb): an index of red cell survival. Blood. 1982;59(6):1348–1350.

8. Rohlfing CL, Connolly SM, England JD, et al. The effect of elevated fetal hemoglobin on hemoglobin A1c results: five common hemoglobin A1c methods compared with the IFCC reference method. Am J Clin Pathol. 2008;129(5):811-814.

9. Little RR, Rohlfing CL, Hanson S, et al. Effects of hemoglobin (Hb) E and HbD traits on measurements of glycated Hb (HbA1c) by 23 methods. Clin Chem. 2008;54(8):1277-1282.

10. Sacks DB, Bruns DE, Goldstein DE, Maclaren NK, McDonald JM, Parrott M. Guidelines and recommendations for laboratory analysis in the diagnosis and management of diabetes mellitus. Clin Chem. 2002;48(3):436-472.

11. Sickle cell trait and other hemoglobinopathies and diabetes: important information for physicians. National Diabetes Information Clearinghouse (NDIC) Web site. http://diabetes.niddk.nih.gov/dm/pubs/hemovari-A1C/index.htm.  Published November 2008. Accessed September 19, 2010.

12. HbA1c assay interferences. National Glycohemoglobin Standardization Program Web site. http://www.ngsp.org/interf.asp.  Updated April 1, 2010. Accessed September 19, 2010.

13. Bunn FH, Forget BG. Hemoglobin: molecular, genetic and clinical aspects. Philadelphia, PA: WB Saunders Co; 1986:425-427.

14. Ansari A, Thomas S, Goldsmith D. Assessing glycemic control in patients with diabetes and end-stage renal failure. Am J Kidney Dis. 2003;41(3):523-531.

15. Joy MS, Cefalu WT, Hogan SL, Nachman PH. Long-term glycemic control measurements in diabetic patients receiving hemodialysis. Am J Kidney Dis. 2002;39(2):297-307.

16. Davie SJ, Gould BJ, Yudkin JS. Effects of vitamin C on glycosylation of proteins. Diabetes. 1992;41:167-173.

Death, Be Not Proud: The Case for Organ Donation

January 27, 2012

By Tracie Lin

Faculty Peer Reviewed

DEATH, be not proud, though some have callèd thee

Mighty and dreadfull, for, thou art not so;

For those whom thou think’st thou dost overthrow

Die not, poore death, nor yet canst thou kill me.

From rest and sleepe, which but thy pictures bee,

Much pleasure, then from thee, much more must flow,

And soonest our best men with thee doe goe,

Rest of their bones, and soul’s deliverie.

Thou art slave to Fate, Chance, kings, and desperate men,

And dost with poyson, warre, and sicknesse dwell,

And poppie, or charmes can make us sleep as well,

And better than thy stroake; why swell’st thou then?

One short sleepe past, wee wake eternally,

And death shall be no more; death, thou shalt die.

-John Donne

All of us have fears and uncertainties surrounding the end of life. When will I die? How? Where? How much warning will I have? Will I have accomplished everything I set out to do? What will become of me after death?

While thoughts regarding the end of one’s life are inherently personal, it is a physician’s role to help patients clarify their goals and to take care of the logistics necessary to ensure that patients’ wishes are respected. Among the most challenging and misconception-ridden end-of-life subjects is that of organ donation, and yet it is also the one with the most potential for hope.

Organ transplantation gives patients suffering from fatal conditions such as kidney failure from diabetes, liver failure from primary biliary cirrhosis, or congenital heart abnormalities a chance at a healthier and longer life. The donor can be living, as in the case of someone donating to a close relative in need, or donations can be made after a person has passed away, in which case the recipient is usually not known to the donor. The most commonly donated major organs are kidneys, livers, and hearts.

In New York State, people can designate themselves as potential posthumous donors by checking a box when registering or renewing their driver’s licenses, after which their licenses will be printed with “Organ Donor” and their names will be added to the national donor registry.[1,2] People also have the option of registering as donors online or by mail.

When an organ donor has been pronounced dead, the hospital calls the closest of 59 federally designated organ procurement organizations (OPOs), independent non-profit organizations that coordinate the transplant process. By federal law, hospitals must report all deaths (donor or otherwise) to the local OPO. The OPO is then responsible for confirming the patient's wish to be a donor, assessing whether the person is medically able to donate, and evaluating factors such as the patient's blood and tissue types to determine possible recipient matches. The recipient or recipients are the next matching patients on the national computerized waiting list. The order of patients on the waiting list is determined by standardized algorithms that take into account medical urgency and time waited, with no role played by other factors such as financial or celebrity status.
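To make the allocation logic less abstract, the toy sketch below captures the idea of ordering a waiting list by medical urgency first and time waited second, with compatibility as a gate and no financial or social inputs. It is a deliberately simplified illustration, not the actual UNOS/OPTN algorithm (which also weighs tissue matching, geography, and organ-specific scores), and it treats blood-type compatibility as an exact match, which is itself a simplification:

# Simplified, hypothetical waiting-list ordering; not the real allocation algorithm.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    urgency: int        # higher = more medically urgent (illustrative scale)
    days_waiting: int
    blood_type: str

def match_order(candidates, donor_blood_type):
    # Gate on (simplified, exact-match) blood-type compatibility, then sort by
    # urgency first and waiting time second.
    compatible = [c for c in candidates if c.blood_type == donor_blood_type]
    return sorted(compatible, key=lambda c: (-c.urgency, -c.days_waiting))

waitlist = [Candidate("A", urgency=2, days_waiting=400, blood_type="O"),
            Candidate("B", urgency=3, days_waiting=120, blood_type="O"),
            Candidate("C", urgency=3, days_waiting=90,  blood_type="A")]
print([c.name for c in match_order(waitlist, donor_blood_type="O")])  # ['B', 'A']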

For me, the question of whether to designate myself as a potential organ donor was an easy one. As a medical professional, I have (1) devoted my life to helping people live as well as possible, and (2) invested so much in time, loans, sweat, tears, and stress-induced gastric ulcers towards fine-tuning my brain, that existence would not be worth much to me upon brain death. Facetiousness aside, when my mind and my strength are no longer able to make life better for other people, why not let my body continue to save lives? This is likely the mindset of other organ donors as well. Inherent in our decision, however, is a great deal of trust in the healthcare system. Understandably, when it comes to a dramatic decision such as having one’s organs removed, many people have concerns. Some common concerns include the following:

What if I’m not really dead?

By New York State law, before a patient can become an organ donor, brain death must be confirmed on two separate instances, hours apart, by two separate physicians, neither of whom is involved in the transplantation process. The declaration of death must include absence of brainstem reflexes, absence of respirations with a PCO2 >60 mmHg (no drive to breathe), absence of motor responses, and documentation of the etiology and irreversibility of the unresponsiveness,[3] and may include additional confirmatory testing if deemed appropriate. These criteria for declaring death are more stringent than for patients who have not designated themselves as potential donors. Furthermore, unlike coma, in which limited brain function is still intact and the patient may eventually “wake up,” there have been no documented cases of revival after brain death.

Will my organs be given to alcoholics or drug abusers?

Although potential donors cannot specify the race, sex, age, or religious affiliation of their potential recipients, all possible organ transplant recipients are screened before being admitted onto the waiting list, and those with current drug or alcohol abuse problems are disqualified from being on the list.

Will I get less care in the hospital if people know I’m a potential donor?

As always, the priority when a patient enters the hospital is to save that patient’s life. Every hospital strives to minimize fatalities. Higher mortality rates are not in the best interest of any physician or hospital. Furthermore, the physicians who attend to patients in the hospital are not the same doctors who are involved in the transplant process; they have no knowledge of who the next transplant recipient might be. Compatible matches are often miles away, so they have no reason not to save the patient in front of them in favor of an unknown potential recipient.

It is no surprise that organ donation can be challenging to approach in discussion; for many, it is an uncomfortable subject that is further laden with the subtleties of legal writ. Yet think of all the benefits of overcoming that hurdle of discomfort: hope for the child with a congenital heart disorder, or the ailing diabetic who is someone’s father, brother, sister, mother, or friend. The technology is present to do much good through organ transplantation. Overcoming the gap between the crucial need for transplants and the dire shortage of donors largely comes down to a matter of allaying fears.

With organ donation, all of us–regardless of whether we’ve received a medical education, regardless of our physical strength, regardless of the circumstances of our lives–equalized by our common fate, have the opportunity to face pompous Death eye-to-eye and say with strength and dignity, “Death, thou shalt die.” Regardless of our beliefs about what becomes of us at the end of our lives, with an understanding of everything we can do beforehand and with hopes of all the good that may come afterwards, death can die in its power to make us feel helpless or afraid. It is the role of physicians to help empower our patients in that way.

Commentary by Dr. Susan Cohen

Tracie Lin’s article highlights a number of important issues for physicians and health providers. The first is the idea of reframing, a concept we borrow from mental health professionals. The focus is not on the health providers’ feelings of failure in the setting of a death but rather on our ability to continue to help the patient, family, and larger community in some way. This ongoing help comes with clear communication about options, one of which is organ donation. Another important factor raised by the author is the potential for mistrust in the health care system that may impact a family decision about organ donation. This mistrust is real and often valid and we mustn’t forget that. We must not take a patient/family decision personally. Our job is to help inform and offer the information needed to make choices consistent with the beliefs and values of that patient and family. Sometimes their answer will be what we recommend or what we would do ourselves (donating organs to help another person, signing a DNR order, consenting to a biopsy) but sometimes it will not. Lastly, while the current guidelines for determining brain death are for 2 examinations by qualified providers spaced at least 6 hours apart, there is literature and movement to support changing this protocol to 1 examination. Lustbader and colleagues reported in Neurology that the actual time interval between exams is not 6 hours but closer to an average of 19 hours.[4] In addition, they note that no second exam changed the outcome, diagnosis, or prognosis. However, the viability of organs for donation does decrease over time; consent for donation decreased with the increased time interval to declaration of brain death. We must balance a desire to encourage protocols more useful to organ donation with our responsibility to grieving families and recognize a sometimes warranted mistrust of our own system.

Tracie Lin is a 4th year medical student at NYU School of Medicine

Peer reviewed by Susan Cohen, MD, Department of Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. New York Organ Donor Network. www.donatelifeny.org. Accessed June 14, 2011.

2. Organ and tissue donations. New York State Department of Health. http://www.health.state.ny.us/professionals/patients/donation/organ/. Revised July 2011. Accessed June 14, 2011.

3. Guidelines for determining brain death. New York State Department of Health. http://www.health.state.ny.us/professionals/doctors/guidelines/determination_of_brain_death/docs/determination_of_brain_death.pdf. Published December 2005. Accessed June 14, 2011.

4. Lustbader D, O’Hara D, Wijdicks EF, et al. Second brain death examination may negatively affect organ donation. Neurology. 2011;76(2):119-124.

Male Hormonal Contraception

January 20, 2012

By Kaley Myer, Class of 2012

Faculty Peer Reviewed

As a female, I like the idea of males taking hormonal contraceptives. In a semi-sadistic way, I relish the idea of a man taking a pill every day to prevent impregnation of my gender. Traditionally, contraception has been a female responsibility, from diaphragms to oral contraceptive pills to intrauterine devices. Male condoms, coitus interruptus, and the more permanent vasectomy require male participation, but these methods do not dominate the contraceptive market. Indeed, couples are encouraged to go beyond condom use (which is often inconsistent) with a form of female birth control. Vasectomy is not advisable unless a man is certain that he does not desire future fertility. And coitus interruptus is ineffective at preventing pregnancy.

In 2006-2008, the National Survey of Family Growth collected data representing the 61.8 million US women of childbearing age (15-44 years old).[1] Sixty-two percent were using some form of contraception. The most common methods were…

Oral contraceptives – 28%

Tubal ligation – 27%

Condom – 16%

Vasectomy – 10%

Intrauterine device – 6%

Withdrawal – 5%

That the burden of birth control falls upon females is related to the female hormonal cycle and its production of a single egg per month, which is easily manipulated. Also, placing this burden on females just makes sense. If birth control fails, it is the woman who becomes pregnant, who experiences physical and psychological changes, and who must take time away from her life and career. A woman thus has more of a stake in taking control of her reproductive status and only creating a life when desired.

This thinking can be a bit unfair, however. There are males who take great responsibility for their reproductive potential and who, upon creating a life, care for the child with equal or greater zeal than their female counterparts. Shouldn’t these men have the same opportunity to control their ability to reproduce?

The problem that remains for these men is that they constantly produce millions of spermatozoa, without the variation of a hormonal cycle. However, research on hormonal suppression of the male hypothalamic-pituitary axis has revealed a safe, reliable mechanism for inhibiting spermatogenesis while maintaining normal circulating testosterone levels. Testosterone alone reduces sperm counts, but not to levels low enough to prevent pregnancy reliably. Near-azoospermia can be accomplished by combining oral medroxyprogesterone acetate and percutaneous testosterone (OMP/PT).[2]

An open-label, non-placebo-controlled French clinical trial of OMP/PT treated 35 men with normal spermiograms with oral medroxyprogesterone acetate 20 mg daily and percutaneous testosterone 50-125 mg daily for up to 18 months.[3] At 3 months, 80% of the men had sperm counts less than 1 million per milliliter. Sperm counts returned to normal within months of stopping the regimen. The subjects cited a number of reasons for electing to participate, including adverse events with other forms of contraception and a desire to share the responsibility of contraceptive management.

The adverse effects of OMP/PT are not yet completely known, but men may be attracted by the anabolic effects of testosterone on lean muscle mass. However, as with anabolic steroids, testicular volume can decrease because of the lack of spermatogenesis. This decrease is usually minimal and not reported by patients,[4] but the prospect may be unappealing to potential users. Other known side effects of supplemental testosterone include acne, hair loss, and gynecomastia. The frequency of these side effects at the testosterone doses used for male contraception has not yet been established.

OMP/PT seems promising, but one wonders how trusting our patients will be of this novel approach. Male sperm counts are reduced to near-zero levels, but not to zero. These levels are low enough for contraceptive efficacy,[3] but will they be adequate to gain the trust of the general population? Will females trust their reproductive fitness to their mates? For females, it can be unnerving to rely upon someone else in a matter as serious as reproduction. Indeed, one of the women in the Soufir clinical trial became pregnant because of her partner's nonadherence to the regimen.[3]

Also, do male patients want to take on this responsibility? They have long entrusted this duty to their female partners, and taking a medication every day can be a burden that leads to nonadherence. Testosterone is delivered not by a pill like female contraception, but via patches, gels, or injections, which are unappealing to some patients.

Still, increasing contraceptive options is inherently beneficial. Some couples desire male-initiated birth control for a variety of reasons, and it will be freeing for some women to trust the responsibility to someone else for a change. Other reversible methods of male contraception should be explored so that men, like women, have options.

Kaley Myer is a 4th year medical student at NYU School of Medicine

Peer reviewed by Robert Lind, MD, Assistant Professor Dept of Medicine (endocrine) and Orthopedic Surgery, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Mosher WD, Jones J. Use of contraception in the United States: 1982-2008. Data from the National Survey of Family Growth. Vital and Health Statistics, 2010, Series 23, No. 29.  http://www.cdc.gov/nchs/data/series/sr_23/sr23_029.pdf

2. Nieschlag E. Male hormonal contraception. Handb Exp Pharmacol. 2010;(198):197-223.  http://www.ncbi.nlm.nih.gov/pubmed/20839093

3. Soufir J-C, Meduri G, Ziyyat A. Spermatogenetic inhibition in men taking a combination of oral medroxyprogesterone acetate and percutaneous testosterone as a male contraceptive method. Hum Reprod. 2011;26(7):1708-1714.  http://humrep.oxfordjournals.org/content/26/7/1708.full

4. Ilani N, Swerdloff RS, Wang C. Male hormonal contraception: potential risks and benefits. Rev Endocr Metab Disord. 2011;12(2):107-117.