Clinical Questions

Are We Too Clean or Too Dirty? The Hygiene Hypothesis in Asthma

September 21, 2016

By James Barger

Peer Reviewed

Asthma, an obstructive pulmonary disease characterized by bronchospasm and chronic airway inflammation, has afflicted mankind for millennia. In the 1st century AD, the Greek physician Aretaeus of Cappadocia described an attack thus:

“the cheeks are ruddy, eyes protuberant, as if from strangulation…voice liquid and without resonance…they breathe standing, as if desiring to draw in all the air which they possibly can inhale, and, in their want of air they also open the mouth as if thus to enjoy the more of it…cough incessant and laborious, expectoration small, thin…and if these symptoms increase, they sometimes produce suffocation.”1

At the dawn of modern medicine in the 19th century, many physicians believed that asthma had an infectious source; others, like William Osler, believed it to be primarily psychogenic. English physician Henry Hyde Salter, however, was able to deduce the truth in 1860: “the vice in asthma consists, not in the production of any special irritant, but in the irritability of the part being irritated.”2

Successive generations have verified Salter’s statement, finding most asthma to be a form of type 1 hypersensitivity, often accompanied by atopic dermatitis, allergies, eosinophilia, and elevated serum IgE. While some of the myriad genetic and environmental factors that contribute to the development of asthma have been pinpointed, a good deal of mystery remains as to why some people develop asthma while others do not. It is certain, however, that more and more people have been developing the disease. This is why, despite the history described above, asthma is often thought of as an illness of modernity.

Asthma prevalence in the United States increased from 3.5% to 8.2% between 1982 and 2009.3 Many hypotheses have been advanced in attempts to explain this. Some, such as increased awareness of the disease and increased rates and survival of premature birth, explain small parts of the increase.4,5 Others, such as changing rates of breastfeeding and increased outdoor pollution, have been proven incorrect.5 Perhaps the most important and poorly understood, at least from the public viewpoint, is the “hygiene hypothesis.”

The hygiene hypothesis was first advanced in 1989 by David Strachan in the British Medical Journal. It posits that allergic disease has increased in prevalence as the diversity of the human microbiome has been depleted by antibiotic use, improved sanitation, and other advances in public health.6 Activation of Th1-mediated cellular immunity by certain infections at a young age is thought to suppress the proliferation of allergen-specific Th2 lymphocytes, overactivity of which underlies atopic diseases such as asthma.7 Now that many infectious diseases have been largely eliminated from the human experience, we are more disposed to develop atopic disease, or so the theory goes.

There is a good deal of evidence supporting the hypothesis that modern lifestyles have contributed to the exploding prevalence of atopic diseases. For example, Amish children were shown to have lower rates of asthma, eczema, and sensitization to aeroallergens than farm children from the region of Switzerland from which the Amish emigrated, who in turn had significantly lower rates than Swiss non-farm children.8 Matricardi and colleagues found seropositivity for hepatitis A antibodies to be associated with significantly lower rates of allergic disease (whether hepatitis A vaccination, as opposed to actual infection, confers the same protection is unknown).7,9 Having older siblings is also a protective factor, presumably because they expose their younger siblings to various illnesses.7 Infants with older siblings were also shown to have significant differences in their gut microbiota compared to those without, which may contribute to the development of atopic disease.10 Helminth eradication efforts have shown strong associations with increased allergic reactivity. However, studies looking at other microbial exposures have produced contradictory evidence, and it is difficult to establish clear-cut inverse associations between most disease exposures and atopy. Some Latin American countries with high rates of exposure to various infectious diseases also have high rates of asthma.11 A German longitudinal birth cohort study showed that increased exposure to endotoxin in infancy and increased exposure to muramic acid (a marker of bacterial cell walls) at school age both reduced rates of asthma and other atopic conditions at school age. However, while personal and household hygiene habits, such as frequent handwashing and dusting, were associated with lower levels of exposure to these substances, neither personal nor household hygiene practices showed an independent association with asthma rates.12 So, while the hygiene hypothesis has a strong basis in evidence, it is not as simple as its name suggests. Indeed, it has been argued that the term “hygiene hypothesis” has been counterproductive from a public health standpoint.6

The harm that comes from oversimplifying the hygiene hypothesis can be seen in the rhetoric of the anti-vaccine movement. “Anti-vaxxers,” as they’re known, believe that vaccines cause several noninfectious illnesses that are proliferating in our scrubbed, sterilized, and disinfected world, such as asthma, allergies, and autism (a Google search of those three diseases plus “vaccines” gives some startling results). They also believe that letting children acquire formerly endemic illnesses allows them to develop “natural” immunity, which they hold to be stronger and more balanced than the vaccine-derived kind.13 Any evidence that could be misinterpreted as supporting this belief—like the studies mentioned above—is used to bolster their arguments. However, the simplistic idea that cleanliness and vaccination lead to atopic disease is grossly mistaken. While it is true that growing up on a farm or acquiring hepatitis A or certain parasitic infections during infancy is protective, it is not true—incredible as it may seem—that filth and infection are good for health. If they were, one might surmise that living in the inner city, where people live in close contact and are exposed to relatively high levels of various antigens, would protect against allergies and asthma. The opposite is the case.

The communities most burdened by allergies and asthma are poor, majority-minority areas of inner cities.14 Prevalence of disease is higher there, and morbidity and mortality are much higher still: mortality from asthma is seven times higher for black children than for white children. The reasons for this disparity are thought to include greater exposure to indoor allergens and to indoor and outdoor pollutants, as well as poorer access to quality healthcare.14

Children can develop asthma and allergies after sensitization to many indoor allergens, including pets, mice, dust mites, mold, and cockroaches, with nonallergic exposures such as tobacco smoke, nitrogen dioxide, and endotoxin adding to the burden. Most of these exposures are more common in the inner city than in the suburbs.15 Among children admitted for asthma at Cincinnati Children’s Hospital, black children were more likely to be sensitized to Alternaria alternata, Aspergillus, cockroach, and dust mites.16 While it is logical that children disproportionately exposed to a given allergen will have the highest rates of sensitization to it, evidence differs as to whether exposure to these allergens at a young age is protective or a risk factor for the development of asthma. The longitudinal Urban Environment and Childhood Asthma (URECA) cohort study found that while cumulative exposure to allergens predicted atopy, home exposure to cockroach, mouse, and cat allergens during the first year of life was associated with additive protection from the development of recurrent wheeze at age 3.17 This suggests there may be a critical period in which exposure to allergens is protective. In addition, the investigators found the presence of cockroaches and mice to be associated with certain bacterial populations that may have had a protective effect, suggesting an influence of these household pests on the microbiota of inner-city children. Simons and colleagues, who did not focus on the first year of life, found that exposure to cockroaches at home was associated with a risk ratio of 1.96 for the development of asthma.18 In another study, pregnant Dominican and African-American women in New York City wore air samplers that measured a variety of allergens; the presence of cockroach antigen, measured prenatally, predicted allergic sensitization of the child at ages 5-7.19 So, while the URECA study suggests that the hygiene hypothesis may hold true in the inner city, other research suggests that inner-city residents suffer from allergic diseases not because the hygiene of their environment has improved, but possibly because the opposite is true.

What can be done about this? Just getting rid of the cockroaches doesn’t help, as most patients are sensitized to a variety of antigens.20 The Inner-City Asthma Study investigated whether addressing all of the indoor allergens at once could improve outcomes for poor children with moderate-to-severe asthma and positive allergy skin testing. The intervention group received a median of five home visits during which the child’s caregiver was educated about reducing allergen exposure and provided with hypoallergenic bed sheets, a vacuum cleaner equipped with a high-efficiency particulate air (HEPA) filter, a HEPA air purifier (in most cases), and professional pest control if the child was sensitized to cockroach. The intervention group experienced substantial reductions in asthmatic symptoms relative to the control group, beginning two months into the study and persisting for the entirety of its two-year duration. There was one fewer unscheduled visit for asthma for every 2.85 children treated. The effects of this intervention were comparable to those seen in previous trials of inhaled corticosteroids, and its cost, $750-1000 per year, was similar to that of steroids. Therefore, reduction of indoor allergen exposure is a relatively cost-effective method of reducing the burden of asthma in the inner city.21 However, these are measures taken after asthma and allergies have developed, rather than during the critical period in which the URECA study suggested allergen exposure may actually be beneficial.
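The 2.85 figure is a number needed to treat (NNT): the reciprocal of the absolute difference in event rates between the groups. As a rough illustration only (the event rates below are hypothetical values chosen to reproduce the reported NNT, not the study's data):

```python
# Illustrative only: the rates below are invented to yield an NNT near the
# 2.85 reported by the Inner-City Asthma Study; they are not trial data.
def number_needed_to_treat(control_rate: float, intervention_rate: float) -> float:
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = control_rate - intervention_rate
    return 1.0 / arr

# e.g., if 85% of control children vs. 50% of intervention children had at
# least one unscheduled asthma visit, ARR = 0.35 and NNT is about 2.86
print(number_needed_to_treat(0.85, 0.50))  # ~2.86
```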

The Inner-City Asthma Study (and countless others) showed that improved hygiene can alleviate asthma and allergies once they have developed. These results contradict a popular misconception regarding the hygiene hypothesis: namely, that hygiene causes allergies. The truth seems to be that depletion of the human microbiome, coupled with increased exposure to the indoor antigens described above, has resulted in increased prevalence of allergic diseases (and autoimmune diseases), and that these effects may occur during a critical period in the first year of life. Luckily, research in this field is also starting to suggest solutions, namely biome restitution. One proposal has been to purposefully dose humans with bovine or rat tapeworms, as these helminthic guests cause few side effects and have been shown to reduce rates of allergy and asthma.22 It remains to be seen whether this will catch on, but other biome restitution treatments, such as probiotics for ulcerative colitis, have shown great experimental promise and are much more palatable for patients. Mouse studies published by Fujimura and colleagues have shown that supplementation with Lactobacillus johnsonii alters the microbiome in ways that may protect against asthma.23

In the end, the increased prevalence of allergies and asthma cannot be attributed simply to our cleaner, vaccinated society. Rather, the elimination of specific diseases and the proliferation of various indoor antigens have produced a more pro-atopic environment, although the precise ways that these pathogens, antigens, and the microbiome interact to promote or prevent atopic disease have proven difficult to parse.24 Hopefully, future research will allow us to harness the anti-atopic effects of certain microbes so that we can get “dirty” in just the right ways to prevent allergic diseases.

James Barger is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Carlo Ciotoli, MD, internal medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

  1. Marketos SG, Ballas CN. Bronchial asthma in the medical literature of Greek antiquity. J Asthma. 1982;19(4):263-269.  http://www.ncbi.nlm.nih.gov/pubmed/6757243
  2. McFadden ER Jr. A century of asthma. Am J Respir Crit Care Med. 2004;170(3):215-221.  http://www.atsjournals.org/doi/full/10.1164/rccm.200402-185OE#.V-K5qk_ruDk
  3. Centers for Disease Control and Prevention. Vital signs: asthma prevalence, disease characteristics, and self-management education—United States, 2001-2009. MMWR Morb Mortal Wkly Rep. 2011;60(17):547-552.
  4. Sonnenschein-van der Voort AM, Arends LR, de Jongste JC, et al. Preterm birth, infant weight gain, and childhood asthma risk: a meta-analysis of 147,000 European children. J Allergy Clin Immunol. 2014;133(5):1317-1329.
  5. Weiss KB, Gergen PJ, Wagener DK. Breathing better or wheezing worse? The changing epidemiology of asthma morbidity and mortality. Annu Rev Public Health. 1993;14:491-513. http://www.ncbi.nlm.nih.gov/pubmed/8323600
  6. Parker W. The “hygiene hypothesis” for allergic disease is a misnomer. BMJ. 2014;348:g5267.
  7. Matricardi PM, Rosmini F, Ferrigno L, et al. Cross sectional retrospective study of prevalence of atopy among Italian military students with antibodies against hepatitis A virus. BMJ. 1997; 314(7086):999–1003.
  8. Holbreich M, Genuneit J, Weber J, Braun-Fahrlander C, Waser M, von Mutius E. Amish children living in northern Indiana have a very low prevalence of allergic sensitization. J Allergy Clin Immunol. 2012;129(6):1671-1673.
  9. McIntire JJ, Umetsu SE, Macaubas C, et al. Immunology: hepatitis A link to atopic disease. Nature. 2003;425(6958):576. http://www.ncbi.nlm.nih.gov/pubmed/14534576
  10. Penders J, Gerhold K, Stobberingh EE, et al. Establishment of the intestinal microbiota and its role for atopic dermatitis in early childhood. J Allergy Clin Immunol. 2013;132:601-607.e8.
  11. Brooks C, Pearce N, Douwes J. The hygiene hypothesis in allergy and asthma: an update. Curr Opin Allergy Clin Immunol. 2013;13(1):70-77.
  12. Weber J, Illi S, Nowak D, et al. Asthma and the hygiene hypothesis. Does cleanliness matter? Am J Respir Crit Care Med. 2015; 191(5):522-529.
  13. Cage A. Vaccinations, natural immunity and patient rights: what you need to make an informed decision for yourself and your child. http://www.southbaytotalhealth.com/Vaccinations.htm. Accessed February 18, 2015.
  14. Togias A, Fenton MJ, Gergen PJ, Rotrosen D, Fauci AS. Asthma in the inner city: the perspective of the National Institute of Allergy and Infectious Diseases. J Allergy Clin Immunol. 2010;125(3):540-544. http://www.ncbi.nlm.nih.gov/pubmed/20226290
  15. Kanchongkittiphon W, Gaffin JM, Phipatanakul W. The indoor environment and inner-city childhood asthma. Asian Pac J Allergy Immunol. 2014;32(2):103–110.
  16. Beck AF, Huang B, Kercsmar CM, et al. Allergen sensitization profiles in a population-based cohort of children hospitalized for asthma. Ann Am Thorac Soc. 2015;12(3):376-384. http://www.ncbi.nlm.nih.gov/pubmed/25594255
  17. Lynch SV, Wood RA, Boushey H, et al. Effects of early-life exposure to allergens and bacteria on recurrent wheeze and atopy in urban children. J Allergy Clin Immunol. 2014;134(3):593-601.e12. http://www.ncbi.nlm.nih.gov/pubmed/24908147
  18. Simons E, To T, Dell S. The population attributable fraction of asthma among Canadian children. Can J Public Health. 2011;102(1):35-41.
  19. Perzanowski MS, Chew GL, Divjan A, et al. Early-life cockroach allergen and polycyclic aromatic hydrocarbon exposures predict cockroach sensitization among inner-city children. J Allergy Clin Immunol. 2013;131(3):886-893.
  20. Marks GB, Mihrshahi S, Kemp AS, et al. Prevention of asthma during the first 5 years of life: a randomized controlled trial. J Allergy Clin Immunol. 2006;118(1):53-61.  http://www.ncbi.nlm.nih.gov/pubmed/16815138
  21. Morgan WJ, Crain EF, Gruchalla RS, et al. Results of a home-based environmental intervention among urban children with asthma. N Engl J Med. 2004;351(11):1068-1080.
  22. Parker W, Perkins SE, Harker M, Muehlenbein MP. A prescription for clinical immunology: the pills are available and ready for testing. A review. Curr Med Res Opin. 2012;28(7):1193-1202.
  23. Fujimura KE, Demoor T, Rauch M, et al. House dust exposure mediates gut microbiome Lactobacillus enrichment and airway immune defense against allergens and virus infection. Proc Natl Acad Sci U S A. 2014;111(2):805-810.
  24. Liu AH. Revisiting the hygiene hypothesis for allergy and asthma. J Allergy Clin Immunol. 2015;136(4):860-865.

Sex or Drugs: Why Do We See An Increased Incidence of Oropharyngeal Cancer?

July 13, 2016

By Tyler Litton, MD

Peer Reviewed

Oropharyngeal squamous cell carcinoma (OPSCC) is relatively rare, but its incidence has increased in the US over the past 40 years.[1] Tonsillar cancer is the most common type of OPSCC, followed by base of tongue cancer; together they account for 90% of all OPSCCs.[2] The incidence of both tonsillar and base of tongue cancers individually has also increased in the US.[3] OPSCC is more common in men than in women, and smoking and alcohol are well-known risk factors.[1,4] However, the rise in OPSCC incidence in the US has not been accompanied by a parallel rise in smoking and alcohol consumption.[5] This implies that some other factor may be responsible for the epidemiologic changes in OPSCC rates.

Human papillomavirus (HPV) is the most common sexually transmitted infection in the US and can cause cervical, vulvar, and anal cancer.[6] HPV exists in over 100 types and is found in skin and mucosal tissue. It is strongly associated with OPSCC, especially of the tonsils and base of tongue.[5] As OPSCC, tonsillar, and base of tongue cancers have increased in incidence, the proportion of HPV-positive OPSCC has also increased.[2,5] By one estimate, the percentage of HPV-positive oropharyngeal tumors increased from 16% in the 1980s to 73% in the 2000s,[1] and multiple other studies support these findings.[7,8] In addition, the absolute number of HPV-positive cancers has increased while the number of HPV-negative cancers has declined; this decline in HPV-negative OPSCCs parallels a decline in smoking in the US.[1] Detection of HPV DNA in OPSCC varies widely, from 25% in some studies to 100% in others.[2,9,10,11] This may be due to the variety of tumor sites studied, the techniques used for detection, and how long samples were stored before testing.[2] For tonsillar cancer specifically, one study estimated HPV to be present in 40-60% of cases in Western countries.[12] Thus, there is a significant epidemiologic connection between HPV and OPSCC.

A role for HPV in OPSCC pathogenesis is also supported by molecular findings. In certain high-risk types of HPV, the viral E6 and E7 proteins can deregulate the cell cycle by preventing the normal function of p53 and retinoblastoma (Rb) protein, respectively.[2] As an example, HPV-16 DNA is a risk factor for tonsillar cancer, and the oncogenes E6 and E7 are generally expressed in HPV-positive tonsillar carcinoma.[5] There is also an association of p16 over-expression with HPV-positive OPSCC, an indicator that E7 has inactivated Rb, which in turn upregulates p16.[2] HPV-positive tumors infrequently have mutated p53 and often show chromosome 3q amplification (similar to that in HPV-positive cervical and vulvar cancer).[2] Conversely, p53 mutations in OPSCC are associated with alcohol and tobacco use.[13] HPV-positive OPSCCs occur at younger ages, and affected patients are less likely to have a history of alcohol or tobacco use than patients with HPV-negative OPSCCs.[3] This evidence suggests that HPV and smoking or alcohol are molecularly distinct etiologies of OPSCC.

Reported prevalence of oral HPV in individuals without OPSCC is highly variable. Despite this uncertainty, incidence rates of HPV-related tonsillar and base of tongue cancers have increased in recent years and birth cohorts.[3] In one study, high-risk HPV types were found in the oral cavity of 4.5% of HIV-seronegative and 13.7% of HIV-seropositive individuals, with HPV-16 the most common type.[12] The increased rate found in HIV-seropositive individuals may be due to abnormal oral mucosal immunity resulting from HIV infection. Also, oral HPV infection in both groups was associated with HSV-2 seropositivity, a marker of sexual behavior. In the HIV-seropositive group, oral-genital contact was associated with oral HPV infection.[12] Other studies have shown that a high number of oral sexual partners, a young age at first intercourse, and an increased number of open-mouth kissing partners are associated with HPV infection.[14,15,16] Furthermore, markers of high-risk sexual behavior have increased in recent birth cohorts.[3] Therefore, increasing HPV infection rates are likely a result of changes in sexual practices.

Not surprisingly, risky sexual behavior is also a risk factor for HPV-positive OPSCC.[1,13,14] Patients with a history of oral-genital sex are more likely to present with an HPV-positive OPSCC than those without such a history.[5] In men, risk of OPSCC increases with younger age at first intercourse and increasing number of partners.[5] Interestingly, the combination of current cigarette use and HPV-16 seropositivity is associated with additional risk of OPSCC beyond what would be expected by summing the individual risks alone.[13] One hypothesis is that HPV induces an immune response and that smoking abates it.[2]

Considering the evidence, it seems clear that recent changes in sexual behaviors have increased the risk of oral HPV infection and led to the observed increase in OPSCC. If trends continue, by 2020 the number of HPV-positive OPSCCs is expected to surpass the number of cervical cancers in the US.[1] Unfortunately, the potential of the HPV vaccine to prevent oropharyngeal infection has not yet been conclusively demonstrated. A double-blind controlled trial did show significantly fewer oral HPV16/18 infections in women four years after receiving the bivalent vaccine compared to those who did not.[17] However, the study was limited in that baseline HPV prevalence was not assessed. The vaccine is effective in preventing cervical, vaginal, vulvar, penile, and anal infections, suggesting it would be comparably effective against oropharyngeal infection and, by extension, OPSCC. Regardless, these epidemiologic trends are strong enough to warrant not only HPV vaccination of preteen males and females, as currently recommended by the CDC, but also safe-sex counseling and screening for cancer risk factors in all patients.

Dr. Tyler Litton, a former NYU medical student, is currently a radiology resident at Saint Louis University School of Medicine

Peer reviewed by Howard Leaf, MD, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons 

References

  1. Chaturvedi AK, Engels EA, Pfeiffer RM, et al. Human papillomavirus and rising oropharyngeal cancer incidence in the United States. J Clin Oncol. 2011;29(32):4294-4301. http://www.ncbi.nlm.nih.gov/pubmed/21969503/
  2. Ramqvist T, Dalianis T. Oropharyngeal cancer epidemic and human papillomavirus. Emerg Infect Dis. 2010;16(11):1671-1677. http://www.ncbi.nlm.nih.gov/pubmed/21029523
  3. Chaturvedi AK, Engels EA, Anderson WF, Gillison ML. Incidence trends for human papillomavirus-related and -unrelated oral squamous cell carcinomas in the United States. J Clin Oncol. 2008;26(4):612-619. http://www.ncbi.nlm.nih.gov/pubmed/18235120
  4. Licitra L, Bernier J, Grandi C, Merlano M, Bruzzi P, Lefebvre JL. Cancer of the oropharynx. Crit Rev Oncol Hematol. 2002;41(1):107-122. http://www.ncbi.nlm.nih.gov/pubmed/11796235
  5. Hammarstedt L, Lindquist D, Dahlstrand H, et al. Human papillomavirus as a risk factor for the increase in incidence of tonsillar cancer. Int J Cancer. 2006;119(11):2620-2623. http://www.ncbi.nlm.nih.gov/pubmed/16991119
  6. Anic GM, Lee JH, Stockwell H, et al. Incidence and human papillomavirus (HPV) type distribution of genital warts in a multinational cohort of men: the HPV in men study. J Infect Dis. 2011;204(12):1886-1892.  http://www.ncbi.nlm.nih.gov/pubmed/22013227
  7. Mehanna H, Beech T, Nicholson T, et al. Prevalence of human papillomavirus in oropharyngeal and nonoropharyngeal head and neck cancer—systematic review and meta-analysis of trends by time and region. Head Neck. 2013;35(5):747-755. http://www.ncbi.nlm.nih.gov/pubmed/22267298
  8. Näsman A, Nordfors C, Holzhauser S, et al. Incidence of human papillomavirus positive tonsillar and base of tongue carcinoma: a stabilisation of an epidemic of viral induced carcinoma? Eur J Cancer. 2015 Jan;51(1):55-61.
  9. Schwartz SM, Daling JR, Doody DR, et al. Oral cancer risk in relation to sexual history and evidence of human papillomavirus infection. J Natl Cancer Inst. 1998;90(21):1626-1636.
  10. Ang KK, Harris J, Wheeler R, et al. Human papillomavirus and survival of patients with oropharyngeal cancer. N Engl J Med. 2010 Jul 1;363(1):24-35.
  11. Mehanna H, Franklin N, Compton N, et al. Geographic variation in human papillomavirus-related oropharyngeal cancer: Data from four multinational randomized trials. Head Neck. 2016 Jan 8. [Epub ahead of print] http://onlinelibrary.wiley.com/doi/10.1002/hed.24336/references
  12. Kreimer AR, Alberg AJ, Daniel R, et al. Oral human papillomavirus infection in adults is associated with sexual behavior and HIV serostatus. J Infect Dis. 2004;189(4):686-698.
  13. Gillison ML, D’Souza G, Westra W, et al. Distinct risk factor profiles for human papillomavirus type 16-positive and human papillomavirus type 16-negative head and neck cancers. J Natl Cancer Inst. 2008;100(6):407-420.
  14. Anaya-Saavedra G, Ramirez-Amador V, Irigoyen-Camacho ME, et al. High association of human papillomavirus infection with oral cancer: a case-control study. Arch Med Res. 2008;39(2):189-197. http://www.ncbi.nlm.nih.gov/pubmed/18164962
  15. D’Souza G, Agrawal Y, Halpern J, Bodison S, Gillison ML. Oral sexual behaviors associated with prevalent oral human papillomavirus infection. J Infect Dis. 2009;199(9):1263-1269.
  16. D’Souza G, Cullen K, Bowie J, Thorpe R, Fakhry C. Differences in oral sexual behaviors by gender, age, and race explain observed differences in prevalence of oral human papillomavirus infection. PLoS One. 2014 Jan 24;9(1):e86023.
  17. Herrero R, Quint W, Hildesheim A, et al. Reduced prevalence of oral human papillomavirus (HPV) 4 years after bivalent HPV vaccination in a randomized clinical trial in Costa Rica. PLoS One. 2013 Jul 17;8(7):e68329. http://oralcancerfoundation.org/hpv/pdf/Oral-VE-CVT-2013.pdf

 

Gun Violence: A Public Health Concern?

June 9, 2016

By Matthew B. McNeill, MD

Peer Reviewed

One can often feel numb or indifferent to the seemingly nightly reports of gun deaths on American news programs. Individual homicides, suicides, and accidental gun deaths are tragic and tragically commonplace. However, over the last two decades, a tide of unrest with the current role of guns in America has arisen in the wake of mass school shootings in places such as Jonesboro, AR (1998, 5 killed, 10 injured), Columbine, CO (1999, 13 killed, 24 injured), Red Lake Indian Reservation, MN (2005, 9 killed, 7 injured), Nickel Mines, PA (2006, 5 killed, 5 injured), Blacksburg, VA (2007, 32 killed, 17 injured), Newtown, CT (2012, 27 killed, 1 injured), and Roseburg, OR (2015, 9 killed, 9 injured), as well as large public shootings in Aurora, CO, Washington D.C., Ft. Hood, TX, Charleston, SC, and San Bernardino, CA. Initially confined to the political class and social activists, the call for action on firearms has reached a fever pitch as the issue has become more visible to the general public. While debates over the right to possess firearms and ammunition are best left to legal authorities and Constitutional scholars, the more than 30,000 annual deaths by firearm must at least prompt consideration of gun violence as a significant public health concern.

The Current Legal Precedent

Before assessing the current state of gun violence in the United States, it is important to understand the legal precedent for gun ownership. The Second Amendment to the United States Constitution states, “a well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” Over the course of American history, there have been numerous interpretations and re-interpretations of these twenty-seven words. The current legal precedent was established by two recent Supreme Court cases. The first, District of Columbia v. Heller (2008), was a 5-4 ruling that the Second Amendment protects the individual’s right to keep and bear arms unconnected with any militia, most specifically for purposes of hunting or self-defense.[[i]] The ruling allowed for limited restrictions, including banning firearm possession by felons and the mentally ill, limiting access in sensitive places such as schools and government buildings, and limiting sales of firearms so they could be closely monitored. The ruling also struck down the Firearms Control Regulations Act of 1975, which had previously required trigger locks and restricted ownership of handguns. While this initial decision applied only to Washington, DC, a subsequent case, McDonald v. Chicago (2010), extended the ruling to all cities and states.[[ii]]

The Second Amendment and Physicians

In the setting of the McDonald decision and the debate over the Affordable Care Act in 2010, push-back began over the role of physicians in discussing gun ownership with patients. In 2011, Florida became the first state to pass a restriction on physician-patient encounters concerning firearms. The Firearm Owner’s Privacy Act (2011) barred licensed health care practitioners and facilities from entering information concerning a patient’s ownership of firearms into the medical record that the practitioner knows is “not relevant to the patient’s medical care or safety, or the safety of others.”[[iii]] It maintained that practitioners “shall respect a patient’s right to privacy and should refrain” from inquiring as to whether a patient or his or her family owns firearms, unless the practitioner or facility believes in good faith that the “information is relevant to the patient’s medical care or safety, or the safety of others.” Further, it averred that practitioners “may not discriminate” against a patient on the basis of firearm ownership and “should refrain from unnecessarily harassing a patient about firearm ownership.” A physicians’ organization, angered at what it perceived as a restriction of its right to free speech, sued the state in an attempt to strike the law down. In the so-called “Docs v. Glocks” case, Wollschlaeger v. Governor of Florida (2011), the law was initially found unconstitutional; however, on appeal and re-appeal to the 11th Circuit Court of Appeals, it was upheld as constitutional on the grounds that it only limits speech in a permissible professional-client setting and mostly serves as a restriction on physician conduct.[[iv]] This decision is currently being appealed to the Supreme Court of the United States. Since 2011, 14 other states have passed similar “firearm gag” laws on physicians.[[v]]

Gun Violence As A Public Health Issue

Both the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) consider violence to be a public health threat.[[vi],[vii]] The American College of Physicians has referred to gun violence as an “epidemic” since 1995.[[viii]] To put things in perspective: in the United States, the Spanish flu of 1918-1919 caused 500,000 deaths,[[ix]] and seasonal influenza causes an average of 23,607 deaths annually.[[x]] In the US in 2013, there were 33,169 deaths by firearm (more than 80 per day), of which 11,208 were homicides, 21,175 were suicides with a firearm, 505 were due to accidental discharge of a firearm, and 281 were due to firearm use with “undetermined intent.”[[xi]] A 2011 Department of Justice analysis described 478,400 fatal and non-fatal violent crimes involving firearms annually in the US.[[xii]] The annual rate of homicide by firearm per 1 million people is 0.7 in England and Wales, 1.7 in Germany, and 5.1 in Canada, while the United States’ rate is 29.7, roughly 6 to more than 40 times that of its peer nations.[[xiii]]
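The comparison can be made explicit by dividing the US rate by each peer nation’s rate; the inputs below are simply the per-million figures quoted above.

```python
# Annual firearm homicide rates per 1 million people, as quoted above.
rates_per_million = {
    "England and Wales": 0.7,
    "Germany": 1.7,
    "Canada": 5.1,
    "United States": 29.7,
}

us = rates_per_million["United States"]
for country, rate in rates_per_million.items():
    if country != "United States":
        print(f"US rate is {us / rate:.1f}x that of {country}")
# US rate is 42.4x that of England and Wales
# US rate is 17.5x that of Germany
# US rate is 5.8x that of Canada
```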

Like all public health concerns (HIV/AIDS, cancer, diabetes, etc.), data collection and research are vital to evidence-based policy development. Although annual funding for the National Institutes of Health and the CDC is $30.1 billion and $11.1 billion respectively, since 1996 there has been a ban on funding firearm-related research.[[xiv],[xv]] A 1996 funding bill, influenced by the National Rifle Association, included a rider stating that “none of the funds made available for injury prevention and control may be used to advocate or promote gun control.”[[xvi]] This has stymied the quality and quantity of gun violence research for the last two decades. In 2013, President Obama clarified in an executive order that agencies are instructed to “conduct or sponsor research into the causes of gun violence and the ways to prevent it,” but no funding or law change has occurred in Congress.[[xvii]] In January 2016, President Obama again released an executive order with specific deadlines and funding guidelines regarding research and implementation of gun safety technology.[[xviii]] The ultimate outcome of this new order or any change in policy and/or funding is still to be determined.

What Is Known From The Research We Have

While new research may take years and careful political navigation to come to fruition, some data on firearm safety already exist. A 1992 study published in the New England Journal of Medicine on suicide and gun ownership found that guns in the home are associated with an increased risk of completed suicide (adjusted OR 4.8).[[xix]] A similar study from 1993, also published in the NEJM, found that guns in the home are associated with an increased risk of homicide (adjusted OR 2.7).[[xx]] In terms of safe firearm storage, a 2000 study found that among homes with children and firearms, 43% had at least 1 unlocked firearm (i.e., not in a locked place and not locked with a trigger lock or other locking mechanism).[[xxi]] Appropriate firearm storage is a valid concern, as demonstrated in a 2005 JAMA publication on gun storage practices and the risk of youth suicide and unintentional firearm injuries. Among gun-owning households with adolescents, guns in households with adolescent firearm events (suicide, unintentional, violent) were less likely to be stored unloaded (odds ratio [OR], 0.30; 95% confidence interval [CI], 0.16-0.56), less likely to be stored locked (OR, 0.27; 95% CI, 0.17-0.45), less likely to be stored separately from ammunition (OR, 0.45; 95% CI, 0.34-0.93), and less likely to have ammunition that was locked (OR, 0.39; 95% CI, 0.23-0.66) than were control guns in households without such events.[[xxii]] In terms of suicide, which is the 10th leading cause of death in the US, studies have shown that 45% of victims have contact with their primary care physician within one month of their suicide.[[xxiii]] Around 50% of suicides are by firearm, and suicides make up the largest portion of firearm deaths.[[xxiv]]
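For readers unfamiliar with the odds ratios quoted above, here is a minimal sketch of how an OR is computed from a 2×2 contingency table. The counts are invented for illustration and are not the data from the Grossman study.

```python
# Hypothetical 2x2 table (NOT the study's data). Rows: case vs. control
# guns; columns: exposed (stored locked) vs. unexposed. An OR below 1 means
# the exposure (safe storage) was less common among case guns.
def odds_ratio(case_exposed, case_unexposed, control_exposed, control_unexposed):
    """OR = (a/b) / (c/d) for a 2x2 contingency table."""
    return (case_exposed / case_unexposed) / (control_exposed / control_unexposed)

# e.g., 30 of 130 case guns stored locked vs. 480 of 1040 control guns:
print(odds_ratio(30, 100, 480, 560))  # 0.35 -> locked storage far less likely among case guns
```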

What Can A Health Provider Do

Both the American Medical Association and the American College of Physicians have released strong policy statements regarding the physician’s role in preventing gun violence. Along with a plea for increased gun violence research, a prohibition on assault rifles and high-capacity ammunition, increased funding for mental health services, and more stringent background checks, they offer some suggestions for how individual providers can make a difference.[[xxv],[xxvi]] Just as providers screen obese patients and tobacco users for diabetes and hypertension, it is important to identify and screen patients who are at high risk for firearm injury.

Providers should screen for potential self-directed or homicidal gun violence in patients:

  • With a recent psychiatric hospitalization
  • Following a prolonged medical hospitalization
  • Living in households with adolescents
  • With severe depression

Additionally, given the often strong dedication to the absolute right to gun ownership, it is more prudent to focus on gun storage as opposed to gun removal.

Providers should encourage all gun owners to:

  • Keep guns locked
  • Keep guns unloaded
  • Store ammunition locked
  • Store ammunition in separate location from the firearm

The most important impact physicians can have in addressing firearm safety is lobbying for and conducting more research on effective methods of preventing firearm-related morbidity and mortality. In a culture where scientific outcomes and evidence-based decision-making have great influence on policy and societal ideals, the medical community can hopefully be a force in addressing this controversial yet very real public health epidemic.

Dr. Matthew B. McNeill is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by David Alfandre, MD, Medical Ethics, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References 

[[i]] Supreme Court of the United States. District of Columbia et al. v. Heller. http://www.supremecourt.gov/opinions/07pdf/07-290.pdf. Published June 26, 2008. Accessed January 19, 2016.

[[ii]] Supreme Court of the United States. McDonald et al. v. City of Chicago, Illinois. http://www.supremecourt.gov/opinions/09pdf/08-1521.pdf. Published June 28, 2010. Accessed January 19, 2016.

[[iii]] Florida House of Representatives. Firearm Owner’s Privacy Act. http://www.myfloridahouse.gov/sections/Bills/billsdetail.aspx?BillId=44993. Published June 2, 2011. Accessed January 19, 2016.

[[iv]] Volokh, Eugene. Court upholds Florida law restricting doctor-patient speech about guns. The Washington Post. Published July 29, 2015.

[[v]] Khazan, Olga. The Strange Laws That Dictate What Your Doctor Tells You. The Atlantic. Published October 16, 2015.

[[vi]] World Health Organization. Violence and Injury Prevention. http://www.who.int/violence_injury_prevention/violence/en/. Published 2016. Accessed January 19, 2016.

[[vii]] Centers for Disease Control and Prevention. The Public Health Approach to Violence Prevention. http://www.cdc.gov/violenceprevention/overview/publichealthapproach.html. Published March 25, 2015. Accessed January 19, 2016.

[[viii]] Butkus R, Doherty R, Daniel H. Reducing firearm-related injuries and deaths in the United States: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2014;160(12):858-860.

[[ix]] U.S. Department of Health & Human Services. The Great Pandemic: The United States in 1918–1919. http://www.flu.gov/pandemic/history/1918/. Revised 2016. Accessed January 19, 2016.

[[x]] Centers for Disease Control and Prevention. Estimates of Deaths Associated with Seasonal Influenza - United States, 1976-2007. http://www.cdc.gov/MMWR/preview/mmwrhtml/mm5933a1.htm?s_cid=mm5933a1_w. Published August 27, 2010. Accessed January 19, 2016.

[[xi]] Centers for Disease Control and Prevention. Deaths: Final Data for 2013. http://www.cdc.gov/nchs/data/nvsr/nvsr64/nvsr64_02.pdf. Updated February 16, 2015. Accessed January 19, 2016.

[[xii]] US Department of Justice. Firearm Violence, 1993-2011. http://www.bjs.gov/content/pub/pdf/fv9311.pdf. Published May 2013. Accessed January 19, 2016.

[[xiii]] Rogers, Simon. Gun homicides and gun ownership listed by country. The Guardian. Published July 22, 2012.

[[xiv]] National Institutes of Health. Budget. http://www.nih.gov/about-nih/what-we-do/budget. Updated October 14, 2015. Accessed January 19, 2016.

[[xv]] US Department of Health and Human Services. HHS FY 2015 Budget in Brief. http://www.hhs.gov/about/budget/fy2015/budget-in-brief/cdc/index.html Published June 4, 2014. Accessed January 19, 2016.

[[xvi]] Stein, Sam. The Congressman Who Restricted Gun Violence Research Has Regrets. The Huffington Post. Published October 6, 2015.

[[xvii]] The White House. Presidential Memorandum – Engaging in Public Health Research on the Causes and Prevention of Gun Violence. https://www.whitehouse.gov/the-press-office/2013/01/16/presidential-memorandum-engaging-public-health-research-causes-and-preve . Published January 16, 2013. Accessed January 19, 2016.

[[xviii]] The White House. FACT SHEET: New Executive Actions to Reduce Gun Violence and Make Our Communities Safer. https://www.whitehouse.gov/the-press-office/2016/01/04/fact-sheet-new-executive-actions-reduce-gun-violence-and-make-our. Published January 4, 2016. Accessed January 19, 2016.

[[xix]] Kellermann AL, Rivara FP, Somes G, et al. Suicide in the home in relation to gun ownership. N Engl J Med. 1992;327(7):467-472.

[[xx]] Kellermann AL, Rivara FP, Rushforth NB, et al. Gun ownership as a risk factor for homicide in the home. N Engl J Med. 1993;329(15):1084-1091.

[[xxi]] Schuster MA, Franke TM, Bastian AM, et al. Firearm storage patterns in US homes with children. Am J Public Health. 2000;90(4):588.

[[xxii]] Grossman DC, Mueller BA, Riedy C, et al. Gun storage practices and risk of youth suicide and unintentional firearm injuries. JAMA. 2005;293(6):707-714.

[[xxiii]] Luoma JB, Martin CE, Pearson JL. Contact with mental health and primary care providers before suicide: a review of the evidence. Am J Psychiatry. 2002;159(6):909-916.

[[xxiv]] American Association of Suicidology. U.S.A. SUICIDE: 2014 OFFICIAL FINAL DATA. http://www.suicidology.org/Portals/14/docs/Resources/FactSheets/2014/2014datapgsv1b.pdf. Published December 22, 2015. Accessed January 19, 2016.

[[xxv]] American College of Physicians. American College of Physicians offers policy recommendations for reducing gun-related injuries and deaths in the U.S. https://www.acponline.org/newsroom/policy_recommendations_reducing_gun-related_injuries.htm. Revised 2016. Accessed January 19, 2016

[[xxvi]] American Medical Association. Violence Prevention. http://www.ama-assn.org/ama/pub/advocacy/topics/violence-prevention.page. Revised 2016. Accessed January 19, 2016.

Wearable Health Trackers: Better Behaviors or Fashion Fads?

May 25, 2016

By David Valentine, MD

Peer Reviewed

Currently, well over one third of US adults use at least one health-related online service or app, with almost half of those focused on physical activity 1. With the growing popularity of wearable health tracking devices such as the Fitbit, Nike Fuel, Jawbone and more, the prevalence of these technologies is only set to grow. However, while more and more people know more and more about their health and habits by the day, little is known about perhaps the most important aspect of these wearables: do they actually promote positive change in the behavior of their users?

Much has been said about the techniques that health-related apps and fitness trackers employ to monitor and promote a healthy lifestyle. Generally, they fall into educational or motivational categories, or something in between 2, and utilize between 9 and 12 different behavior change techniques; these include instruction, demonstrations of how to perform a given action or exercise, feedback on performance or progress, goal-setting for a desired outcome, and social support from other users of the app 2. Notably, however, the majority of apps currently available do not include steps for action planning, which is among the most well-established methods of effecting lasting behavioral change 3,4.

The use of wearable monitors adds a new layer of complexity and opportunity to such traditional fitness apps, via self-monitoring, instant feedback, and environmental input. Many of the accompanying apps or online portals for these wearables utilize the same behavior change techniques that traditional phone-based apps do, but there are some notable differences. Wearables, for instance, tend to follow social cognitive theory—which states that behaviors can be changed by observing a model of that behavior and its results—more than phone apps do, and they employ a greater number of behavior change techniques, such as real-time feedback and social comparison, more closely matching traditional clinical interventions. They also allow for improved feedback on discrepancies between goals and actual behaviors, because they capture more real-world data about those behaviors 5. Furthermore, because wearables record historical trends like steps per day or heart rate, they can also reward past successes, like achieving a certain threshold of steps per day or sleeping for a certain period of uninterrupted time, or prompt the user toward activity during prolonged sedentary periods.

Despite the abundance of data that wearables collect and then present in charts, tables, and summaries, there is a large difference between recording data and acting on it. Thus, the question remains: do people actually change behavior based on a wearable device? A 2012 study sought to answer this question, dividing 51 overweight or obese subjects into equal groups receiving in-person, in-person plus wearable, or wearable-only interventions for health tracking and activity promotion 6. The investigators found that while all groups showed improvement in weight loss, body fat measurements, cardiorespiratory fitness, and dietary changes, there was no significant difference in these changes across the groups. This suggests that wearable devices can match the efficacy of traditional clinical interventions for weight loss and other positive health changes, at least in the controlled setting of a clinical trial, but may not be superior to current approaches.

Given the above results, a 12-week trial sought to determine whether wearables’ efficacy could be improved by adding an interpersonal component to health tracking. Investigators looked at the changes in weight, leisure-time activity, and diet of 58 participants split into three groups; all groups received regular in-person counseling sessions, but one group also used a wearable monitoring device 25% of the time while another used the device 100% of the time. While differences in absolute weight loss (in kilograms) were not significant across groups, there was a statistically significant difference in relative weight loss (percent lost from starting weight), with the continuous monitoring group losing more weight than the other two groups (4.6±3.2% in-person, 3.8±3.8% intermittent, and 7.1±4.6% continuous) 7. Notably, leisure-time activity and energy intake did not significantly differ between the groups, and self-monitoring of eating and exercise decreased with time across all groups.
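The distinction between absolute and relative weight loss matters here: the same loss in kilograms represents a different percentage depending on starting weight, which is why the two metrics can diverge across groups. A toy calculation (numbers invented for illustration, not trial data):

```python
# Hypothetical participants (not trial data): an identical absolute loss in
# kg yields a different relative (percent) loss, the metric on which the
# trial's groups differed significantly.
def relative_loss(start_kg: float, end_kg: float) -> float:
    """Percent of starting body weight lost."""
    return 100 * (start_kg - end_kg) / start_kg

print(relative_loss(110, 105))  # 5 kg lost from 110 kg -> ~4.5%
print(relative_loss(70, 65))    # 5 kg lost from 70 kg  -> ~7.1%
```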

It seems, therefore, that wearables can match and possibly improve upon the efficacy of traditional interpersonal counseling for health and behavior changes. However, they suffer from a dropoff in use that could limit their potential utility. In a survey of over 6000 people, over half of those who had bought a wearable device stopped using it over time; a third of these stopped within half a year 8. Part of this falloff may be due to the fact that wearables are typically bought by the people who need them least: young (mostly under 35 years old), affluent (making over $100,000 per year), and relatively healthy individuals 9. In this population, wearables are most often used to optimize pre-existing fitness regimens rather than begin new ones, and it is possible that once such optimization is achieved, there is less use for them 8. As such, broadening the target market for wearables might allow for more sustained and dramatic improvements in health and fitness, but this depends on improving both the accessibility (mostly via price) of wearables and the consistency of their use.

While not a study of wearables per se, a recent investigation looked at how monetary payment could enhance technology-driven health improvements, splitting 75 patients into a control group, a low-incentive ($1.40/day) group, and a high-incentive ($2.80/day) group 10. Participants used three devices per day—a blood pressure cuff, glucometer, and scale—along with a transmitter to send their measurements to an online server for tracking. While all three groups started with approximately equal levels of adherence, by the end of the tracking period (3 months) the incentive groups showed significantly higher adherence than the controls (81% for low incentive, 78% for high incentive, 58% for control). Interestingly, there was no significant difference between the two incentive groups. This does not mean, however, that the size of the payment makes no difference to behavior; rather, in the three months after financial incentives were withdrawn, adherence in the high-incentive group dropped to levels equal to the control group—around 30%—whereas adherence in the low-incentive group remained significantly higher at over 60%. These findings suggest that financial incentives can help drive behavioral change, but only if the incentive does not outweigh the behavior itself—for the high-incentive group, the dollar amount seemed to become more important than the healthy changes, whereas the lower amount was enough to drive change without supplanting its importance.

While deploying financial incentives on a mass scale may seem prohibitively expensive at first, studies suggest that the cost of such promotions may be offset by consequent savings at the population level. A 2014 trial enrolled 74 diabetic patients in an automated messaging system of prompts and questions, sent to their cell phones, pertaining to their diabetes management 11. Patient responses were screened automatically, and anything outside the ordinary range of responses was reviewed by a nurse and followed up with the patient. Healthy eating, monitoring of blood glucose, foot care, and adherence to medications all improved significantly over the 6-month course of the trial, saving an estimated $30,000 in that time.

The above evidence is not perfect—as previously stated, many studies have focused on populations with chronic disease, while most users of health trackers are relatively healthy, and many previous studies have used only cell phone apps or messaging services. Still, existing data suggest that while wearable health trackers may be an effective tool for weight loss, improved physical activity, and chronic disease management, it is difficult to maintain use of such devices and to ensure access for the populations that could benefit most from them. Furthermore, while the goal of a wearable device would be to create a new internal motivation for good health rather than dependence on long-term external motivators, the lack of information about the consequences of behavior, or of action planning toward change, might make this difficult 12, and results may depend on supplemental interpersonal interaction or another form of incentive. Still, with big data becoming increasingly manageable and prolific both inside and outside the medical realm, the opportunities that more nuanced and continuous patient data can afford providers and other caretakers must continue to be investigated.

Dr. David Valentine is an internist at NYU Langone Medical Center

Peer reviewed by Neil Shapiro, MD, Editor-In-Chief, Clinical Correlations

Image courtesy of Wikimedia Commons

References 

  1. Fox S, Duggan M. Tracking for Health. Pew Research Center; January 28, 2013.
  2. Conroy DE, Yang CH, Maher JP. Behavior change techniques in top-ranked mobile apps for physical activity. Am J Prev Med. 2014;46(6):649-652. http://www.ncbi.nlm.nih.gov/pubmed/24842742
  3. Araujo-Soares V, McIntyre T, Sniehotta FF. Predicting changes in physical activity among adolescents: the role of self-efficacy, intention, action planning and coping planning. Health Educ Res. 2009;24(1):128-139. http://www.ncbi.nlm.nih.gov/pubmed/18344230
  4. Caudroit J, Boiche J, Stephan Y. The role of action and coping planning in the relationship between intention and physical activity: a moderated mediation analysis. Psychol Health. 2014;29(7):768-780. http://www.ncbi.nlm.nih.gov/pubmed/24446685
  5. Lyons EJ, Lewis ZH, Mayrsohn BG, Rowland JL. Behavior change techniques implemented in electronic lifestyle activity monitors: a systematic content analysis. J Med Internet Res. 2014;16(8):e192. http://www.ncbi.nlm.nih.gov/pubmed/25131661
  6. Pellegrini CA, Verba SD, Otto AD, Helsel DL, Davis KK, Jakicic JM. The comparison of a technology-based system and an in-person behavioral weight loss intervention. Obesity (Silver Spring). 2012;20(2):356-363. http://www.ncbi.nlm.nih.gov/pubmed/21311506
  7. Polzien KM, Jakicic JM, Tate DF, Otto AD. The efficacy of a technology-based system in a short-term behavioral weight loss intervention. Obesity (Silver Spring). 2007;15(4):825-830. http://www.ncbi.nlm.nih.gov/pubmed/17426316
  8. Ledger D, McCaffrey D. Inside Wearables: How the Science of Human Behavior Change Offers the Secret to Long-term Engagement. Endeavour Partners; 2014. http://endeavourpartners.net/assets/Wearables-and-the-Science-of-Human-Behavior-Change-EP4.pdf
  9. Nielsen. Tech Styles: Are Consumers Really Interested in Wearing Tech on Their Sleeves? 2014. http://www.nielsen.com/us/en/insights/news/2014/tech-styles-are-consumers-really-interested-in-wearing-tech-on-their-sleeves.html
  10. Sen AP, Sewell TB, Riley EB, et al. Financial incentives for home-based health monitoring: a randomized controlled trial. J Gen Intern Med. 2014;29(5):770-777. http://www.ncbi.nlm.nih.gov/pubmed/24522623
  11. Nundy S, Dick JJ, Chou CH, Nocon RS, Chin MH, Peek ME. Mobile phone diabetes project led to improved glycemic control and net savings for Chicago plan participants. Health Aff (Millwood). 2014;33(2):265-272. http://www.ncbi.nlm.nih.gov/pubmed/24493770
  12. Patel MS, Asch DA, Volpp KG. Wearable devices as facilitators, not drivers, of health behavior change. JAMA. 2015;313(5):459-460. http://www.ncbi.nlm.nih.gov/pubmed/25569175

Can N-Acetylcysteine be Used in Non-Acetaminophen Induced Acute Liver Failure?

May 20, 2016

By David Pineles, MD

Peer Reviewed

In the early 1970s, scientists discovered in animal models that a minor metabolite of acetaminophen, N-acetyl-p-benzoquinone imine (NAPQI), accumulates in the body after ingestion. This metabolite is normally conjugated by glutathione, but when acetaminophen is taken in excess, the body’s glutathione reserves are inadequate to inactivate all of the toxic NAPQI. The metabolite is then free to damage hepatocytes directly, and if present in high enough concentrations, the liver damage can be extensive enough to cause liver failure. As a corollary to that observation, measures that increase hepatic glutathione were found to abrogate the toxic effects of NAPQI [1,2]. Two medications known to increase hepatic glutathione concentration, methionine and cysteamine, were reported to prevent acetaminophen-induced hepatic injury [3,4]. However, these two agents caused severe side effects including flushing, vomiting, and “misery” [3].

A third agent, N-acetylcysteine (NAC), was subsequently found to replenish hepatic stores of glutathione without causing major side effects. Multiple large trials have established the efficacy of NAC for the treatment of acetaminophen overdose, the largest of which was performed by Smilkstein et al. and published in the New England Journal of Medicine in 1988. In this study, 2,540 patients suspected of acetaminophen overdose were given oral N-acetylcysteine therapy. AST or ALT concentrations rose above 1000 IU per liter, indicative of severe liver injury, in only 6.1% of patients treated with NAC within 10 hours after ingestion and in 26.4% of those treated between 10 and 24 hours after ingestion. The authors noted that these rates were lower than those of historical controls and concluded that NAC treatment should be initiated within 8 hours of an acetaminophen overdose [5]. Two additional smaller studies demonstrated the efficacy of intravenous N-acetylcysteine therapy in patients in whom acetaminophen-induced hepatic failure had already developed [6,7]. In the first, published in the Lancet in 1990, Harrison et al. retrospectively analyzed 98 patients and reported that those treated with intravenous NAC had a 21% reduction in mortality [6]. A later randomized, placebo-controlled trial that included 50 patients treated with intravenous NAC showed a 28% reduction in mortality as well as a significantly lower incidence of cerebral edema and hypotension requiring inotropic support [7]. Owing to the resounding evidence in its favor, NAC has become the universally accepted antidote for acetaminophen poisoning.

Given its efficacy in acetaminophen-induced acute liver failure, the question arises whether NAC might also benefit patients with non-acetaminophen-induced acute liver failure. While acetaminophen is responsible for approximately 50% of all acute liver failure cases in the United States, idiosyncratic drug reactions account for 12%, hepatitis B for 7%, autoimmune hepatitis for 5%, and hepatitis A for 3% of cases. Approximately 15% of cases of acute liver failure remain indeterminate [8]. Non-acetaminophen-induced acute liver failure carries substantial morbidity and mortality [9].

In addition to its ability to replenish hepatic stores of glutathione, NAC has also been shown to have anti-inflammatory, antioxidant, inotropic, and vasodilating effects, which improve microcirculatory blood flow and oxygen delivery to vital organs [10,11]. In a prospective study published in Hepatology in 2009, 47 patients with non-acetaminophen-induced acute liver failure were treated with oral NAC at a dose of 140 mg/kg, followed by 70 mg/kg every 4 hours for a total of 17 doses, beginning within 6 hours of admission. Compared with historical controls who did not receive NAC, patients treated with oral NAC had a modest but statistically significant reduction in overall mortality (53.2% versus 72.7%, p=0.05) [12]. The study is intrinsically flawed by its use of historical controls, who were not necessarily treated during the same time period as the study subjects.

In a prospective, double-blind trial of 173 patients with non-acetaminophen-induced acute liver failure, patients were randomized to receive an infusion of either 5% dextrose (placebo) or 5% dextrose with N-acetylcysteine at a loading dose of 150 mg/kg over one hour, followed by 12.5 mg/kg/hour for 4 hours, then a continuous infusion of 6.25 mg/kg/hour for the remaining 67 hours. The intravenous NAC group had significantly higher transplant-free survival than the placebo group (40% versus 27%, p=0.043). Of note, subgroup analysis revealed that the transplant-free survival benefit was restricted to patients with West Haven encephalopathy coma grade I-II who received NAC, rather than those with more severe encephalopathy (West Haven coma grade III-IV) [13]. While this may be interpreted to mean that NAC benefits only those at early stages of disease, it is also possible that patients with coma grade I-II were simply less sick overall than those with coma grade III-IV and would have had a better prognosis regardless of NAC treatment. In 2013, Singh et al. analyzed the same cohort of 173 patients with non-acetaminophen-induced acute liver failure, stratified by coma grade (I-II vs. III-IV), to examine the effect of NAC on hepatic serologic biomarkers rather than overall mortality. Patients with coma grade I-II treated with intravenous NAC showed a significant improvement in bilirubin and ALT levels compared with placebo-treated patients of similar coma grade (p<0.02) [14]. These findings support the hypothesis that individuals with coma grade I-II benefit most from NAC treatment. However, the study is limited by the small number of patients with coma grade III-IV encephalopathy at the time of randomization.
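To make the two regimens concrete, the short calculation below tallies the cumulative dose each 72-hour protocol delivers. This is a back-of-the-envelope sketch: the dosing schedules are those described above, but the 70-kg body weight is a hypothetical example, not a figure from either study.

    # Cumulative NAC dose for the oral and IV regimens described above,
    # for a hypothetical 70-kg patient (the weight is illustrative only).
    weight_kg = 70

    # Oral regimen [12]: 140 mg/kg load, then 70 mg/kg every 4 hours
    # for 17 doses (roughly 72 hours of therapy).
    oral_total_mg_per_kg = 140 + 17 * 70              # 1330 mg/kg
    oral_total_g = oral_total_mg_per_kg * weight_kg / 1000

    # IV regimen [13]: 150 mg/kg over 1 hour, 12.5 mg/kg/hour for
    # 4 hours, then 6.25 mg/kg/hour for 67 hours (72 hours in total).
    iv_total_mg_per_kg = 150 + 12.5 * 4 + 6.25 * 67   # 618.75 mg/kg
    iv_total_g = iv_total_mg_per_kg * weight_kg / 1000

    print(f"Oral 72-hour course: {oral_total_g:.1f} g")  # ~93.1 g
    print(f"IV 72-hour course: {iv_total_g:.1f} g")      # ~43.3 g

The oral protocol delivers roughly twice the total dose of the intravenous one, a reminder that results obtained with one route of administration do not translate directly to the other.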

Unfortunately, the above studies are all limited by small sample sizes and low statistical power, which restricts their generalizability. A recent meta-analysis assessed the efficacy and safety of NAC in non-acetaminophen-induced acute liver failure. It included four clinical trials with a total of 331 patients who received NAC and 285 control patients (two of these studies were discussed earlier [12,13]). No statistical difference was identified between the NAC group and the control group with regard to overall survival (71% versus 67%; odds ratio 1.16, 95% CI 0.81-1.67; p=0.42). However, there was a significant difference in survival with native liver (41% versus 30%; odds ratio 1.61, 95% CI 1.11-2.34; p=0.01) and in post-transplantation survival (85.7% versus 71.4%; odds ratio 2.44, 95% CI 1.11-5.37; p=0.03). Lastly, the authors found that the side effects of NAC therapy were mainly nausea, vomiting, diarrhea, and constipation [15]. One limitation of this meta-analysis is that the authors did not stratify patients by coma grade, which may have biased the results toward demonstrating no benefit of NAC therapy. Based on these results, it would appear that although overall survival is not improved, the benefit of NAC therapy in non-acetaminophen-induced acute liver failure outweighs the risk.
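As a sanity check on how such pooled figures arise, the sketch below reconstructs the native-liver-survival odds ratio from the reported percentages. The event counts are back-calculated from 41% of 331 and 30% of 285, so they are approximations rather than the paper’s exact numbers.

    import math

    # Approximate 2x2 counts reconstructed from the reported percentages.
    a, b = 136, 331 - 136   # NAC arm: survivors with native liver, others
    c, d = 86, 285 - 86     # control arm: survivors with native liver, others

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    # -> OR = 1.61 (95% CI 1.16-2.26), close to the published 1.61 (1.11-2.34)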

Based on the available data, the American Association for the Study of Liver Diseases (AASLD) recommends (with level III evidence) that NAC be used in cases of acute liver failure in which acetaminophen ingestion is possible or when knowledge of the circumstances surrounding admission is inadequate but aminotransferases suggest acetaminophen poisoning. In addition, the AASLD recommends (with level I evidence) that NAC may be beneficial for acute liver failure due to drug-induced liver injury [16].

Given the marginal benefit of NAC therapy at coma grade I-II, the routine use of NAC in non-acetaminophen-induced acute liver failure is not strongly promoted. However, given the available data and the relatively safe side effect profile of NAC, practitioners in non-transplant centers should strongly consider early administration of NAC in patients with non-acetaminophen-induced acute liver failure, especially those with coma grade I-II, while awaiting referral or when transplantation is not an option.

Dr. David Pineles is an internal medicine resident at NYU Langone Medical Center

Peer Reviewed by Michael Poles, MD, Associate Professor of Medicine, Division of Gastroenterology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

  1. Mitchell JR, Jollow DJ, Potter WZ, et al. Acetaminophen-induced hepatic necrosis. IV. Protective role of glutathione. J Pharmacol Exp Ther. 1973;187(1):211-7.
  2. Algren DA. Review of N-acetylcysteine for the treatment of acetaminophen (paracetamol) toxicity in pediatrics. WHO. Geneva, 29 Sept 2008. Presentation. http://www.who.int/selection_medicines/committees/subcommittee/2/acetylcysteine_rev.pdf
  3. Prescott LF, Sutherland GR, Park J, et al. Cysteamine, methionine, and penicillamine in the treatment of paracetamol poisoning. Lancet. 1976;2(7977):109-13.
  4. Prescott LF, Newton RW, Swainson CP, et al. Successful treatment of severe paracetamol overdosage with cysteamine. Lancet. 1974;1(7858):588-92.
  5. Smilkstein MJ, Knapp GL, Kulig KW, et al. Efficacy of oral N-acetylcysteine in the treatment of acetaminophen overdose. Analysis of the national multicenter study (1976 to 1985). N Engl J Med. 1988;319(24):1557-62.
  6. Harrison PM, Keays R, Bray GP, et al. Improved outcome of paracetamol-induced fulminant hepatic failure by late administration of acetylcysteine. Lancet. 1990;335(8705):1572-3.
  7. Keays R, Harrison PM, Wendon JA, et al. Intravenous acetylcysteine in paracetamol induced fulminant hepatic failure: a prospective controlled trial. BMJ. 1991;303(6809):1026-9.
  8. Lee WM. Etiologies of acute liver failure. Semin Liver Dis. 2008;28(2):142-52.
  9. Fontana RJ, Hayashi PH, Gu J, et al. Idiosyncratic drug-induced liver injury is associated with substantial morbidity and mortality within 6 months from onset. Gastroenterology. 2014;147(1):96-108.e4.  http://www.ncbi.nlm.nih.gov/pubmed/24681128
  10. Harrison P, Wendon J, Williams R. Evidence of increased guanylate cyclase activation by acetylcysteine in fulminant hepatic failure. Hepatology. 1996;23(5):1067-72.
  11. Harrison PM, Wendon JA, Gimson AE, et al. Improvement by acetylcysteine of hemodynamics and oxygen transport in fulminant hepatic failure. N Engl J Med. 1991;324(26):1852-7.  http://www.nejm.org/doi/full/10.1056/NEJM199106273242604
  12. Mumtaz K, Azam Z, Hamid S, et al. Role of N-acetylcysteine in adults with non-acetaminophen-induced acute liver failure in a center without the facility of liver transplantation. Hepatol Int. 2009;3(4):563-70. http://www.ncbi.nlm.nih.gov/pubmed/19727985
  13. Lee W, Hynan L, Rossaro L, et al. Intravenous n-acetylcysteine improves transplant-free survival in early stage non-acetaminophen acute liver failure. Gastroenterology. 2009;137(3):856-864.
  14. Singh S, Hynan LS, Lee WM, et al. Improvements in hepatic serological biomarkers are associated with clinical benefit of intravenous N-acetylcysteine in early stage non-acetaminophen acute liver failure. Dig Dis Sci. 2013;58(5):1397-402. http://www.ncbi.nlm.nih.gov/pubmed/23325162
  15. Hu J, Zhang Q, Ren X, et al. Efficacy and safety of acetylcysteine in “non-acetaminophen” acute liver failure: A meta-analysis of prospective clinical trials. Clin Res Hepatol Gastroenterol. 2015; 39(5):594-9.  http://www.sciencedirect.com/science/article/pii/S2210740115000315
  16. Lee WM, Larson AM, Stravitz RT. AASLD position paper: the management of acute liver failure: update 2011. Hepatology. September 2011.  https://www.tripdatabase.com/doc/1359920-AASLD-position-paper–the-management-of-acute-liver-failure–update-2011—American-Association-for-the#content

Bedside Rounds: How Useful are the Kernig and Brudzinski signs for Predicting Meningitis?

April 27, 2016

By Chio Yokose, MD

Peer Reviewed

Even in this era of modern medicine, bacterial meningitis remains a widely feared diagnosis in both resource-rich and -poor settings worldwide. Bacterial meningitis is among the ten most common infectious causes of death and kills approximately 135,000 people around the world each year [1].

It is a medical, neurologic, and sometimes neurosurgical emergency that affects 4 to 6 per 100,000 adults annually [2]. Many healthcare providers consider the diagnosis when evaluating a patient, but it can nonetheless be difficult to recognize and act on, and any delay can be the difference between life and death. In one retrospective study, the median time between arrival at the emergency department and administration of antibiotics was 4 hours [2].

A lumbar puncture is the diagnostic test of choice for meningitis. However, lumbar puncture is an invasive and unpleasant test with its own set of complications. How do we identify those patients who truly need one? For this purpose, we resort to the timeless history and physical. However, clinical history alone is not sufficient to arrive at a diagnosis of meningitis. According to a systematic review by Attia et al, the symptoms of nonpulsatile headache, generalized headache, and nausea or vomiting offer only 15%, 50%, and 60% specificity for meningitis, respectively [3]. As for the physical exam, the classic triad associated with meningitis is fever, neck stiffness, and change in mental status. In the same review, although few patients presented with all three findings, 95% demonstrated two or more and 99-100% had at least one [3]. Van de Beek et al reported similar findings from their study of 1108 cases of meningitis identified in the Netherlands Reference Laboratory for Bacterial Meningitis database. They found that, while only 44% of patients demonstrated all three features of the triad, 95% presented with at least two of four signs (the classic triad plus headache) and only 1% had none of the four findings [4].

The single most common finding associated with meningitis is uncertain, as data vary across observational studies. In a single-center, multi-decade study, fever appeared to be the most common, with 95% of patients presenting with fever and another 4% developing fever within the first 24 hours of hospitalization [5]. In contrast, van de Beek’s Netherlands study found that headache was the most common finding, occurring in 87% of patients [4]. Both studies found neck stiffness to be the second most common finding, seen in 88% and 83% of patients, respectively.

No physical exam for meningitis is complete without mentioning the Kernig and Brudzinski signs. Although nuchal rigidity, one of the hallmark features of bacterial meningitis, was recognized as early as the 5th century BCE, these major eponymic signs – still so closely linked to the disease to this day – were not described until the late 19th century [6].

Vladimir Mikhailovich Kernig (1840-1917) was a clinical neurologist of Russian-Baltic German descent who was born in Liepāja, Latvia but received the majority of his professional training in Russia [7]. In 1882, Kernig described the sign that now bears his name to a group of his colleagues in St. Petersburg:

“I have observed for a number of years in cases of meningitis a symptom which is apparently rarely recognized although it is, in my opinion, of significant practical value. I am referring to flexion contracture of the legs or occasionally also in the arms which becomes evident only after the patient sits up…If [with the patient sitting on the edge of the bed and legs dangling] one attempts to extend the patient’s knees one will succeed only to an angle of approximately 135 degrees.  In cases in which this phenomenon is pronounced, the angle may even remain at 90 degrees [6].”

Today, this maneuver is performed with the patient in the supine position with the hips and knees in flexion. The knee is then slowly extended.  The Kernig sign is said to be positive if this maneuver elicits pain along the hamstring muscle as a result of stretching of the inflamed sciatic nerve [7].

Josef Brudzinski (1874-1917) was a Polish-born pediatrician who also received most of his training in Russia [7]. He described several different physical signs of meningitis (e.g. the cheek sign and symphyseal sign, both described further below, and the Brudzinski contralateral reflex, which consists of reflex flexion of a lower extremity in response to passive flexion of the contralateral lower extremity [8]); however, the sign now known simply as the “Brudzinski sign” was described as follows in 1909:

“I have noted a new sign in cases of meningitis: passive flexion of the neck causes the lower extremities to flex at the knees and the pelvis…With the child in the supine position, the examiner flexes the neck of the child with the left hand while resting his right hand on the patient’s chest to prevent it from rising [6].”

A patient is said to have a positive Brudzinski sign if passive flexion of the neck elicits automatic flexion at the hips and knees [9]. Interestingly, the cheek sign (considered positive if applying pressure on both cheeks inferior to the zygomatic arch causes spontaneous flexion of the forearm and arm [10]) and the symphyseal sign (considered positive if pressure applied to the pubic symphysis elicits reflex hip and knee flexion and abduction of the leg) were most commonly observed in children with Mycobacterium tuberculosis meningitis, which was much more prevalent in Brudzinski’s time [7].

The utility of the Kernig and Brudzinski signs for identifying patients likely to have meningitis has long been debated. Both signs are indicators of meningeal inflammation, but neither is pathognomonic for meningitis [9]. Interestingly enough, Brudzinski himself published a study in 1909 reporting the sensitivities of the Brudzinski and Kernig signs as 97% and 42%, respectively [11]. These sensitivities are higher than those reported in more recent studies. Some attribute this difference to the fact that the two most common causes of meningitis in Brudzinski’s time were Streptococcus pneumoniae and M. tuberculosis, both of which are known to cause greater degrees of meningeal inflammation than the infectious etiologies more prevalent today [7].

A study by Durand et al conducted at Massachusetts General Hospital between 1962 and 1988 demonstrated that, while S. pneumoniae was the most common pathogen identified in cases of community-acquired meningitis overall (responsible for 24% of the 493 observed episodes), it was not an overwhelming majority [5]. In this study, gram-negative bacilli other than Haemophilus influenzae were responsible for 17% of cases, whereas N. meningitidis, other streptococci, Staphylococcus aureus, and Listeria monocytogenes caused 7-8% each. H. influenzae was identified as the causative organism in only 4% of cases, and in 15% no pathogen was ever identified. The relative frequency of S. pneumoniae waned significantly in the period 1980-1988 compared with the earlier decade of 1962-1970. In this study, tuberculous meningitis was not identified in a single community-acquired or nosocomial case. Today, S. pneumoniae meningitis carries a poor prognosis independent of other factors, with odds of an unfavorable outcome six times those of N. meningitidis (95% confidence interval, 2.61-13.91; p<0.001) [4].

More recently, Thomas et al published a study in 2002 examining the diagnostic accuracy of the Kernig and Brudzinski signs and nuchal rigidity in adult patients with suspected meningitis [12]. Adults (age >16 years) presenting to the Yale-New Haven Hospital Emergency Department between July 1995 and June 1999 with clinically suspected meningitis were eligible for participation. The study found that both the Brudzinski and Kernig signs had a sensitivity of 5% with a positive likelihood ratio (LR+) of 0.97. The study concluded that these physical exam findings did not accurately discriminate between patients with meningitis (defined as a CSF white blood cell [WBC] count of 6 × 10⁶/L or more) and those without. The diagnostic accuracy of these signs was not significantly improved in patients with moderate meningeal inflammation (defined as a CSF WBC count of 100 × 10⁶/L or more).

Similar results were published by Waghdhare et al in a blinded study of 190 patients diagnosed with meningitis at a rural teaching hospital [1]. The Kernig sign had a reported sensitivity of 14.1%, specificity of 92.3%, LR+ of 1.84, and negative likelihood ratio (LR-) of 0.93. The Brudzinski sign had a reported sensitivity of 11.1%, specificity of 93.4%, LR+ of 1.69, and LR- of 0.95. This study also examined the head jolt sign, which is considered positive if the baseline headache worsens when the patient turns the head horizontally at a frequency of 2-3 rotations per second. The head jolt sign had a sensitivity of 6.1%, specificity of 98.9%, LR+ of 5.52, and LR- of 0.95. In this study, the Kernig sign was positive in only 12% of the 190 patients. Thus, although all of these signs have high specificities, their positive predictive values remain low, suggesting they are of little clinical utility in identifying patients who warrant further diagnostic workup or treatment for meningitis.
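The arithmetic behind that conclusion is simple enough to verify directly: likelihood ratios follow from sensitivity and specificity, and positive predictive value depends heavily on pretest prevalence. The sketch below uses the Waghdhare figures quoted above; the 10% pretest prevalence is an illustrative assumption, not a number from the study.

    # Likelihood ratios and PPV from reported sensitivity/specificity.
    def lr_pos(sens, spec):
        return sens / (1 - spec)

    def ppv(sens, spec, prevalence):
        true_pos = sens * prevalence
        false_pos = (1 - spec) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    kernig = (0.141, 0.923)     # sensitivity, specificity
    head_jolt = (0.061, 0.989)

    print(f"Kernig LR+ = {lr_pos(*kernig):.2f}")         # ~1.83
    print(f"Head jolt LR+ = {lr_pos(*head_jolt):.2f}")   # ~5.55
    print(f"Kernig PPV at 10% prevalence = {ppv(*kernig, 0.10):.0%}")  # ~17%

Even a highly specific sign yields a modest post-test probability when the pretest probability is low, which is why these maneuvers cannot by themselves select patients for lumbar puncture.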

Of note, the age of the patient may affect the utility of these physical exam tests. Interestingly, the majority of patients examined in Kernig’s and Brudzinski’s original papers were children. Ironically, it has become widely accepted that neither the Kernig nor the Brudzinski sign is reliable for diagnosing meningitis in infants younger than six months [7]. A retrospective study by Levy et al demonstrated that, as the age of the patients increased (from 2-24 months to 5-12 years), the sensitivity of both the Kernig and Brudzinski signs also increased [13]. A similar trend is noted at the other end of the age spectrum: Puxty et al reported that the Kernig sign was positive in 12% and the Brudzinski sign in 8% of elderly patients on general medicine wards without bacterial meningitis, a finding hypothesized to reflect the increasing incidence of cervical spine pathology in this age group, which can complicate interpretation of these maneuvers [14].

Thus, although we continue to test for and document the Kernig and Brudzinski signs in our physical exams, the current data suggest that they are more historical findings than ones whose presence or absence can, alone, delineate a case of bacterial meningitis.

Dr. Chio Yokose is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Michael Janijigian, MD, internal medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References 

  1. Waghdhare, S., et al., Accuracy of physical signs for detecting meningitis: a hospital-based diagnostic accuracy study. Clin Neurol Neurosurg, 2010. 112(9): p. 752-7. http://www.ncbi.nlm.nih.gov/pubmed/20615607
  2. van de Beek, D., et al., Community-acquired bacterial meningitis in adults. N Engl J Med, 2006. 354(1): p. 44-53. http://www.nejm.org/doi/full/10.1056/NEJMra052116
  3. Attia, J., et al., The rational clinical examination. Does this adult patient have acute meningitis? JAMA, 1999. 282(2): p. 175-81. http://www.ncbi.nlm.nih.gov/pubmed/10411200
  4. van de Beek, D., et al., Clinical features and prognostic factors in adults with bacterial meningitis. N Engl J Med, 2004. 351(18): p. 1849-59. http://www.nejm.org/doi/full/10.1056/NEJMoa040845
  5. Durand, M.L., et al., Acute bacterial meningitis in adults. A review of 493 episodes. N Engl J Med, 1993. 328(1): p. 21-8.
  6. Tyler, K.L., Chapter 28: a history of bacterial meningitis. Handb Clin Neurol, 2010. 95: p. 417-33.
  7. Ward, M.A., et al., Josef Brudzinski and Vladimir Mikhailovich Kernig: signs for diagnosing meningitis. Clin Med Res, 2010. 8(1): p. 13-7. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2842389/
  8. Mehndiratta, M., et al., Appraisal of Kernig’s and Brudzinski’s sign in meningitis. Ann Indian Acad Neurol, 2012. 15(4): p. 287-8.
  9. Wartenberg, R., The signs of Brudzinski and of Kernig. J Pediatr, 1950. 37(4): p. 679-84.
  10. Verghese, A. and G. Gallemore, Kernig’s and Brudzinski’s signs revisited. Rev Infect Dis, 1987. 9(6): p. 1187-92. http://www.ncbi.nlm.nih.gov/pubmed/3321367
  11. Brody, I.A. and R.H. Wilkins, The signs of Kernig and Brudzinski. Arch Neurol, 1969. 21(2): p. 215-8.
  12. Thomas, K.E., et al., The diagnostic accuracy of Kernig’s sign, Brudzinski’s sign, and nuchal rigidity in adults with suspected meningitis. Clin Infect Dis, 2002. 35(1): p. 46-52.  http://cid.oxfordjournals.org/content/35/1/46.long
  13. Levy, M., E. Wong, and D. Fried, Diseases that mimic meningitis. Analysis of 650 lumbar punctures. Clin Pediatr (Phila), 1990. 29(5): p. 254-5, 258-61. http://www.ncbi.nlm.nih.gov/pubmed/2340687
  14. Puxty, J.A., R.A. Fox, and M.A. Horan, The frequency of physical signs usually attributed to meningeal irritation in elderly patients. J Am Geriatr Soc, 1983. 31(10): p. 590-2.

 

Could Metformin be the First Anti-Aging Drug?

February 11, 2016

By Amy Shen Tang, MD

Peer Reviewed

“I would pay you if you took it away from me. I’d try to buy it back,” said Irving Kahn, the late Wall Street investment advisor, when asked if he would ever retire from work [1]. Mr. Kahn, who founded Kahn Brothers Group, Inc. with his sons more than 40 years ago, took an active role as chair of his company until his passing last winter at the ripe age of 109 years. Kahn and his siblings all lived to be centenarians and were enrolled in the Longevity Genes Project at the Albert Einstein College of Medicine’s Institute for Aging Research, a study of more than 500 healthy seniors between the ages of 95 and 112 and their children [2]. The project aims to identify the longevity genes that allow “super agers” to live far beyond their age-matched peers despite unhealthy behaviors such as cigarette smoking. In hopes of discovering therapies that target the aging process, Dr. Nir Barzilai and his colleagues at the Institute for Aging Research are looking to make a cheap, generic, widely used drug—metformin—the first to be Food and Drug Administration (FDA)-approved for the indication of aging.

Metformin is an oral biguanide antidiabetic medication that has been used for over 50 years. It is commonly prescribed as a first-line treatment for type 2 diabetes mellitus, often in combination with other antidiabetic medications including insulin. Metformin reduces blood glucose levels by suppressing hepatic glucose production and increasing peripheral glucose utilization. Lower glucose levels lead to lower levels of insulin and insulin-like growth factor 1 (IGF-1) and to increased insulin sensitivity [3-5]. High blood glucose and insulin levels are important factors in aging and cancer [6-8]. Inactivation of insulin and insulin-like signaling has been shown to increase lifespan in nematodes, fruit flies, and mice [9-14]. Furthermore, studies have shown an association between metformin and decreased cancer risk and cancer mortality, possibly mediated by suppression of tumor growth through decreased IGF-1 signaling and by attenuation of senescent processes [15,16]. Prospective trials are required to further evaluate these findings. Additionally, clinical trials demonstrate that metformin, compared with other glucose-lowering drugs, may decrease the risk of cardiovascular disease [17].

Perhaps the most striking study to date on metformin and aging is a case-control study of approximately 90,000 patients with type 2 diabetes treated with metformin or a sulfonylurea and 90,000 matched non-diabetic controls [18]. Consistent with previous studies [17,19], sulfonylurea-treated diabetics had an approximately 40% greater mortality rate than non-diabetic controls, whereas metformin-treated diabetics had mortality similar to non-diabetic controls. Of note, metformin-treated patients in their 70s had a 15% reduction in mortality compared with non-diabetic controls. This observation suggests that the protective effect of metformin may extend beyond its role as a glucose-lowering treatment [18].

Subsequent to the aforementioned findings that metformin may have a mortality benefit beyond treating diabetes, Dr. Barzilai designed the Targeting Aging with Metformin (TAME) trial. Sponsored by the American Federation for Aging Research, the TAME trial will recruit 3,000 adults aged 70 to 80 years in approximately 15 centers across the United States who will be followed for 5 to 7 years. The study will include adults with one or two of three conditions—cancer, heart disease, or cognitive impairment—or who are at risk for them. Type 2 diabetics have been excluded from the trial in order to measure the anti-aging effects of metformin apart from its known benefit in diabetes. Participants will be monitored to measure whether metformin forestalls the onset of cancer, heart disease, cognitive impairment, diabetes, and death. Dr. Barzilai explicitly states that his goal for the TAME trial is to convince the FDA to approve aging not only as an indication for metformin, but also as a target for future and improved medications. “Without such a determination, the progress the field has made will not be realized because pharmaceutical companies will not develop drugs that have no indication, which is required for reimbursement by insurance companies,” said Dr. Barzilai in an interview with the Healthspan Campaign, a partner organization in his promotion of aging research [20].

If Dr. Barzilai’s trial were to show that metformin delays age-related diseases and mortality, and the FDA were to approve aging as an indication for metformin, most people would be candidates for such a therapy. What are the potential unforeseen effects of prescribing metformin to the masses for the prevention of aging? Luckily, metformin’s safety profile is well known after decades of use as a diabetes medication in the United Kingdom and United States. The most common side effects, which are gastrointestinal, usually improve with time and dose titration. Additionally, unlike many other diabetes drugs, metformin is not associated with hypoglycemia. Perhaps the biggest limiting factor to its use in elderly adults is the historical concern over the rare adverse effect of lactic acidosis in the setting of renal insufficiency and hepatic impairment. In fact, concern over lactic acidosis led to the withdrawal of biguanides from the US market in the 1970s, although most cases occurred with a related molecule, phenformin, which carries a 10- to 20-fold higher risk of lactic acidosis compared with metformin [21]. Although the prescribing information for metformin explicitly lists a creatinine cutoff of 1.4 or 1.5 mg/dL in women and men, respectively, the American Diabetes Association and the European Association for the Study of Diabetes report that metformin seems safe unless the estimated GFR is less than 30 mL/min [22].
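The tension between a fixed creatinine cutoff and a GFR threshold is easy to illustrate: the same serum creatinine maps to very different renal function depending on age, weight, and sex. Below is a minimal sketch using the Cockcroft-Gault estimate of creatinine clearance; the two patients are hypothetical.

    # Cockcroft-Gault creatinine clearance (mL/min).
    def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
        crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
        return crcl * 0.85 if female else crcl

    # Both hypothetical patients have a creatinine of 1.4 mg/dL.
    man_45 = cockcroft_gault(age=45, weight_kg=90, scr_mg_dl=1.4, female=False)
    woman_82 = cockcroft_gault(age=82, weight_kg=55, scr_mg_dl=1.4, female=True)

    print(f"45-year-old 90-kg man: ~{man_45:.0f} mL/min")     # ~85
    print(f"82-year-old 55-kg woman: ~{woman_82:.0f} mL/min") # ~27

At the same creatinine of 1.4 mg/dL, one patient sits comfortably above the 30 mL/min threshold cited by the diabetes associations while the other falls below it.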

“The perception is that we are looking for a fountain of youth,” said Dr. Barzilai as he displayed a slide of the iconic 16th century Ponce de Leon painting. “We want to avoid that; what we’re trying to do is increase health span, not look for eternal life.” Current treatments for diseases related to aging “just exchange one disease for another,” said Dr. Barzilai, whereas he and his colleagues seek a treatment that delays the onset not only of a single disease but of age-related diseases in general, and in so doing, extends one’s healthy years.

Dr. Amy Shen Tang is an Internist at NYU Langone Medical Center

Peer reviewed by Michael Bergman, MD, Endocrinologist at NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References 

  1. Meet the Super Agers: Irving Kahn, Age 104. Albert Einstein College of Medicine of Yeshiva University; 2010. https://www.einstein.yu.edu/centers/aging/longevity-genes-project. Accessed October 21, 2015.
  2. Longevity Genes Project. Institute for Aging Research. Albert Einstein College of Medicine. https://www.einstein.yu.edu/centers/aging/longevity-genes-project. Accessed October 21, 2015.
  3. Clemmons DR. Involvement of insulin-like growth factor-I in the control of glucose homeostasis. Current Opinion in Pharmacology. 2006; 6:620-625. http://www.sciencedirect.com.ezproxy.med.nyu.edu/science/article/pii/S1471489206001676
  4. Merimee TJ, Zapf J, Froesch ER. Insulin-like growth factors in the fed and fasted states. J Clin Endocrinol Metab. 1982;55:999-1002. http://press.endocrine.org/doi/10.1210/jcem-55-5-999?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed
  5. Pao CI, Farmer PK, Begovic S, Goldstein S, Wu GJ, Phillips LS. Expression of hepatic insulin-like growth factor-I and insulin-like growth factor-binding protein-1 genes is transcriptionally regulated in streptozotocin-diabetic rats. Mol Endocrinol. 1992;6:969-977. http://press.endocrine.org/doi/10.1210/mend.6.6.1379675?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed
  6. Gapstur SM, Gann PH, Lowe W, Liu K, Colangelo L, Dyer A. Abnormal glucose metabolism and pancreatic cancer mortality. JAMA. 2000;283:2552-2558. http://jama.jamanetwork.com/article.aspx?articleid=192709
  7. Khandwala HM, McCutcheon IE, Flyvbjerg A, Friend KE. The effects of insulin-like growth factors on tumorigenesis and neoplastic growth. Endocr Rev. 2000;21:215-244. http://press.endocrine.org/doi/abs/10.1210/edrv.21.3.0399
  8. Mobbs CV. Genetic influences on glucose neurotoxicity, aging, and diabetes: a possible role for glucose hysteresis. Genetica. 1993;91(1-3):239-252. http://link.springer.com.ezproxy.med.nyu.edu/article/10.1007%2FBF01436001
  9. Anisimov VN, Berstein LM, Egormin PA, Piskunova TS, Popovich IG, Zabezhinski, et al. Metformin slows down aging and extends life span of female SHR mice. Cell Cycle. 2008;7(17):2769-2773. http://www.tandfonline.com/doi/abs/10.4161/cc.7.17.6625
  10. Anisimov VN, Berstein LM, Popovich IG, Zabezhinski MA, Egormin PA, Piskunova TS, et al. If started early in life, metformin treatment increases life span and postpones tumors in female SHR mice. Aging. 2011;3(2):148-157. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3082009/
  11. Smith DL Jr, Elam CF Jr, Mattison JA, Lane MA, Roth GS, Ingram DK, et al. Metformin supplementation and life span in Fischer-344 rats. J Gerontol A Biol Sci Med Sci. 2010; 65(5):468-474. http://biomedgerontology.oxfordjournals.org/content/65A/5/468.long
  12. Anisimov VN, Egormin PA, Piskunova TS, Tyndyk ML, Yurova MN, Zabezhinski MA, et al. Metformin extends life span of HER-2/neu transgenic mice and in combination with melatonin inhibits growth of transplantable tumors in vivo. Cell Cycle. 2010;9(1):188-197. http://www.tandfonline.com/doi/abs/10.4161/cc.9.1.10407#.Vi0A7rzC3NU
  13. Martin-Montalvo A, Merken EM, Mitchell SJ, Palacios HH, Mote PL, Scheibye-Knudsen M, et al. Metformin improves healthspan and lifespan in mice. Nat Commun. 2013;4:2192. http://www.nature.com/ncomms/2013/130730/ncomms3192/full/ncomms3192.html
  14. Onken B, Driscoll M. Metformin Induced a Dietary Restriction-Like State and the Oxidative Stress Response to Extend C. elegans Healthspan via AMPK, LKB1, and SKN-1. PLoS One. 2010;5(1):e8758. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0008758
  15. Quinn BJ, Dallos M, Kitagawa H, Kunnumakkara AB, Memmott RM, Hollander MC, et al. Inhibition of lung tumorigenesis by metformin is associated with decreased plasma igf-I and diminished receptor tyrosine kinase signaling. Cancer Prev Res. 2013;6(8):801. http://cancerpreventionresearch.aacrjournals.org/content/6/8/801.long
  16. Liu B, Fan Z, Edgerton SM, Yang X, Lind SE, Thor AD. Potent anti-proliferative effects of metformin on trastuzumab-resistant breast cancer cells via inhibition of erbB2/IGF-1 receptor interactions. Cell Cycle. 2011;10(17):2959-2966. http://www.tandfonline.com/doi/abs/10.4161/cc.10.17.16359
  17. UK Prospective Diabetes Study (UKPDS) Group. Effect of intensive blood-glucose control with metformin on complications in overweight patients with type 2 diabetes (UKPDS 34). Lancet. 1998;352(9131):854-865. http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(98)07037-8/abstract
  18. Bannister CA, Holden SE, Jenkins-Jones S, Morgan CL, Halcox JP, Schernthaner G. Can people with type 2 diabetes live longer than those without? A comparison of mortality in people initiated with metformin or sulphonylurea monotherapy and matched, non-diabetic controls. Diabetes Obes Metab. 2014;16(11):1165-1173. http://onlinelibrary.wiley.com.ezproxy.med.nyu.edu/doi/10.1111/dom.12354/abstract
  19. Roumie CL, Hung AM, Greevy RA et al. Comparative effectiveness of sulfonylurea and metformin monotherapy on cardiovascular events in type 2 diabetes mellitus: a cohort study. Ann Intern Med. 2012;157:601-610. http://annals.org/article.aspx?articleid=1389845
  20. Dr. Nir Barzilai on the TAME Study. The Healthspan Imperative: A film about America’s next great priority. The Alliance for Aging Research. http://www.healthspancampaign.org/2015/4/28/dr-nir-barzilai-on-the-tame-study. Accessed October 22, 2015.
  21. Lipska KJ, Bailey CJ, Inzucchi SE. Use of Metformin in the Setting of Mild-to-Moderate Renal Insufficiency. Diabetes Care. 2011;34(6):1431-1437. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3114336/
  22. Nathan DM, Buse JB, Davidson MB, Ferrannini E, Holman RR, Sherwin R. Medical management of hyperglycemia in type 2 diabetes: a consensus algorithm for the initiation and adjustment of therapy: a consensus statement of the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care. 2009;32(1):193-203. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2606813/

 

When and How Should We Examine the Spleen?

January 28, 2016

By Jenna Tarasoff

Peer Reviewed

A 65-year-old African woman presents with two months of fevers and 25-pound weight loss along with a month of nausea and retching, accompanied by left-sided abdominal pain. The exam is significant for axillary lymphadenopathy, abdominal distension, splenomegaly, and palpable purpura on her arms, legs, and back. Labs are significant for leukocytosis, lymphopenia, microcytic anemia, increased ferritin, and positive hepatitis C virus PCR. Abdominal CT shows multiple enlarged nodes and an enlarged spleen (splenomegaly). 

As I prepare to present my diagnosis of hepatitis C-associated lymphoma at the Clinical Pathology Conference, I focus on one of the key findings–splenomegaly–observed on exam and CT scan. I wonder if suspicion for splenomegaly played a role in the decision to examine the spleen and which technique was used. In other words, what is the recommendation for when and how we should examine for splenomegaly?

The role of the spleen

Hippocrates and Galen referred to the spleen as the repository of the most noxious bodily substance–“black bile”–an excess of which caused melancholia [1]. Hence, it was thought that the spleen prevented depression by sequestering black bile from the rest of the body.

For many years, the spleen was regarded as a useless organ, similar to the appendix. It was not until the past 50 years that the spleen’s role in the immune response to infection and the potentially fatal consequences of its removal became increasingly recognized [2].

The normal spleen is about the size of a fist. Splenomegaly is most commonly caused by hepatic disease (cirrhosis), malignancy (leukemia and lymphoma), and infectious disease (HIV, mononucleosis, and malaria) [3]. Physicians may need to examine the spleen in situations ranging from a patient with B symptoms and lymphadenopathy suspicious for malignancy to a patient with splenomegaly secondary to infectious mononucleosis eager to return to sports. 

How to examine for splenomegaly

Splenomegaly can be assessed by two primary techniques: palpation and percussion. There are three well-studied palpation maneuvers:

–Supine two-handed palpation

–Right lateral decubitus one-handed palpation

–The supine hooking maneuver of Middleton [4].

A normal-sized spleen almost always lies entirely within the rib cage and thus cannot be palpated, but with enlargement it displaces the stomach and descends below the rib cage [4]. Splenomegaly is therefore suggested by palpation of the descending spleen on inspiration using any palpation maneuver, with a sensitivity of 18-78%, specificity of 89-99%, positive likelihood ratio (+LR) of 8.5, and negative likelihood ratio (-LR) of 0.5 [3]. Palpation may yield false positives in the presence of left hepatic lobe enlargement and intra-abdominal tumors [5]. False negatives are also possible, given that the spleen generally needs to be enlarged by at least 40% before becoming palpable [6].

Three percussion maneuvers have been validated against imaging:

–Supine percussion of Traube’s space (defined by the sixth rib superiorly, the mid-axillary line laterally, and the left costal margin inferiorly)

–Castell’s method while supine (percussing in the lowest intercostal space on the left axillary line)

–Nixon’s method in the right lateral decubitus position (percussing from the midpoint of the costal margin perpendicularly to the left mid-axillary line) [4].

Splenomegaly is suggested by dullness to percussion of Traube’s space (sensitivity of 11-76%, specificity of 63-95%, +LR = 2.1, -LR = 0.8), Castell’s spot (sensitivity of 25-85%, specificity of 32-94%, +LR = 1.7, -LR = 0.7), and Nixon’s method if greater than 8 cm (sensitivity of 25-66%, specificity of 68-95%, +LR = 2.0, -LR = 0.7) [3]. Percussion may yield both false positives and negatives. For instance, percussion of Traube’s space may be falsely positive in patients examined too soon after a meal [7] or with pleural effusions [8], and may be falsely negative in obese patients [7].

Positive percussion is less convincing than positive palpation (+LR = 1.7 to 2.1 for percussion vs. 8.5 for palpation) [3]. More extensive evaluation has been devoted to Traube’s space percussion and supine one-handed palpation, so there is greater confidence in these maneuvers [4]. Combining Traube’s space percussion with palpation produces a sensitivity and specificity of 46% and 97%, respectively [9].
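These likelihood ratios translate into post-test probabilities via the odds form of Bayes’ theorem, as in the minimal sketch below. The 10% pretest probability mirrors the examination threshold discussed in the next section; it is an illustrative assumption.

    # Post-test probability from pretest probability and likelihood ratio.
    def post_test_prob(pretest, lr):
        pretest_odds = pretest / (1 - pretest)
        post_odds = pretest_odds * lr
        return post_odds / (1 + post_odds)

    pretest = 0.10
    print(f"Positive palpation (LR+ 8.5): {post_test_prob(pretest, 8.5):.0%}")   # ~49%
    print(f"Dull Traube's space (LR+ 2.1): {post_test_prob(pretest, 2.1):.0%}")  # ~19%
    print(f"Negative palpation (LR- 0.5): {post_test_prob(pretest, 0.5):.0%}")   # ~5%

A positive palpation moves a 10% suspicion to roughly even odds, whereas dull percussion alone moves it far less, which is why palpation carries more diagnostic weight.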

When to examine for splenomegaly

It is recommended that if suspicion for splenomegaly is sufficiently high (ie, the pretest probability is greater than 10%), examination should start with Traube’s space percussion [4]. If percussion is not dull, there is no need to palpate, as the results will not effectively rule in or out splenomegaly [4]. If the possibility of missing splenomegaly remains a concern after negative percussion, imaging is indicated [4]. If percussion is dull, it should be followed by supine one-handed palpation [4]. If only one technique is performed, palpation may be superior to percussion, particularly in lean patients [4]. If both tests are performed and are positive, splenomegaly is diagnosed. If palpation following percussion is negative, imaging is required to confidently rule in or out splenomegaly. In contrast, if the initial suspicion for splenomegaly is low, routine examination cannot definitively rule in or out splenomegaly [4].
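The recommended sequence reads naturally as a small decision procedure. The function below is a schematic encoding of the algorithm just described (after Grover et al [4]), intended as an illustration rather than a clinical decision rule.

    # Schematic encoding of the recommended splenomegaly exam sequence.
    def spleen_exam_plan(pretest_prob, traube_dull=None, palpable=None):
        if pretest_prob <= 0.10:
            return "Low suspicion: routine exam cannot rule splenomegaly in or out"
        if traube_dull is None:
            return "Start with percussion of Traube's space"
        if not traube_dull:
            return "Percussion not dull: palpation will not help; image if concern persists"
        if palpable is None:
            return "Percussion dull: proceed to supine one-handed palpation"
        if palpable:
            return "Percussion and palpation both positive: splenomegaly diagnosed"
        return "Palpation negative despite dull percussion: imaging required"

    print(spleen_exam_plan(0.25, traube_dull=True, palpable=True))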

Summary of splenomegaly on exam

The examination for splenomegaly is more specific than sensitive and thus is best used to rule in the diagnosis, provided the clinical suspicion of splenomegaly is sufficiently high. Positive palpation is more useful than positive percussion. The finding of a palpable spleen greatly increases the probability of splenomegaly [3], and combining techniques increases it further [9]. Splenomegaly detected on exam indicates a congested spleen and, whether full of black bile or lymphoma cells, it hints at a possible underlying disease, helping to generate and narrow the differential diagnosis.

Jenna Tarasoff is a 3rd year medical student (3-year program) at NYU Langone Medical Center

Reviewed by Dr. Michael Tanner, Executive Editor, Clinical Correlations

Image courtesy of Wikimedia Commons

References

  1. Black DW, Grant JE. DSM-5 Guidebook: The Essential Companion to the Diagnostic and Statistical Manual of Mental Disorders. Washington, DC: American Psychiatric Publishing; 2014.
  2. Aygencel G, Dizbay M, Turkoglu MA, Tunccan OG. Cases of OPSI syndrome still candidate for medical ICU. Braz J Infect Dis. 2008;12(6):549-551. http://www.ncbi.nlm.nih.gov/pubmed/19287851
  3. McGee SR. Evidence-Based Physical Diagnosis. 3rd ed. Philadelphia, Pa: Elsevier Saunders; 2012: 428-440.
  4. Grover SA, Barkun AN, Sackett DL. Does this patient have splenomegaly? JAMA. 1993;270(18):2218-2221.  http://jama.jamanetwork.com/article.aspx?articleid=409174
  5. Sullivan S, Williams R. Reliability of clinical techniques for detecting splenic enlargement. BMJ. 1976;2(6043):1043-1044. https://www.researchgate.net/publication/22163728_Reliability_of_clinical_techniques_for_detecting_splenic_enlargement
  6. Blackburn C. On the clinical detection of enlargement of the spleen. Australas Ann Med. 1953;2(1):78-80.
  7. Barkun AN, Camus M, Meagher T, et al. Splenic enlargement and Traube’s space: how useful is percussion? Am J Med. 1989;87(5):562-566.  http://www.ncbi.nlm.nih.gov/pubmed/2683766
  8. Verghese A, Krish G, Karnad A. Ludwig Traube: the man and his space. Arch Intern Med. 1992;152(4):701-703.
  9. Barkun AN, Camus M, Green L, et al. The bedside assessment of splenic enlargement. Am J Med. 1991;91(5):512-518.

The Quest for the HIV Vaccine: Are We Closer Than We Think?

January 20, 2016

By Amar Parikh, MD

Peer Reviewed

Amidst the global panic over the recent Ebola outbreak, another well-known pathogen that has been devastating the world for decades continues to smolder—the human immunodeficiency virus (HIV). According to the World Health Organization (WHO), in 2013 there were 35 million people worldwide living with HIV, 2.1 million of whom were newly infected that year [1]. HIV/AIDS has claimed the lives of nearly 40 million people to date, with 1.5 million people dying from AIDS in 2013 alone. Although highly active antiretroviral therapy (HAART) has been remarkably successful, it is not a cure for HIV infection. The failure to cure HIV/AIDS is due to the reservoir of latently infected T cells formed in the acute phase of infection, which re-establishes viral loads upon treatment interruption [2]. The use of HAART as treatment and prophylaxis, the practice of male circumcision in high-risk individuals, increased public awareness of safer sex practices, and the expansion of clean needle programs have all led to decreased HIV transmission rates. However, infection statistics from the United Nations remain sobering: for every 1 person who began treatment for HIV last year, 1.3 people were newly infected [3, 4, 5]. As such, the development of an effective vaccine against HIV remains one of the most pressing priorities for the medical community. Results from recent clinical trials and advances in basic science research are encouraging, and suggest that a successful HIV vaccine may not be far away.

RV 144 – A Glimmer of Hope

The general principle underlying vaccination is to isolate the pathogen, inactivate or weaken it, and then inject it into the body so the immune system can mount an antigen-specific response without an actual infection. Applying this concept to HIV has been difficult, as its genetic diversity and ability to rapidly mutate have presented formidable challenges. To date, scientists have conducted over 250 HIV vaccine clinical trials, most without much success.

The tides began to turn in 2009, when the world’s largest HIV vaccine trial was conducted in Thailand [6]. The Phase III study, referred to as RV 144 or the “Thai trial,” involved more than 16,000 participants (aged 18 to 30) who received either placebo or a “prime-boost” vaccination regimen. The regimen consisted of 4 priming injections with a recombinant canary pox vector vaccine (ALVAC-HIV [vCP1521]), followed by 2 booster injections with a recombinant glycoprotein 120 subunit vaccine (AIDSVAX B/E). The vaccines were based on HIV clades B and E, the most common subtypes of HIV found in Thailand. After the 3-year follow-up period, the investigators found that people who received the vaccine series were 31% less likely to contract HIV than those given placebo. While the RV 144 vaccine regimen did not affect viral loads or CD4+ counts among patients who did contract HIV, the partial protection attributed to the vaccine warranted further investigation.
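The 31% figure is worth unpacking, since vaccine efficacy is simply one minus the ratio of attack rates in the vaccine and placebo arms. The sketch below uses the trial’s published modified intention-to-treat counts (51 infections among roughly 8,200 vaccinees versus 74 among roughly 8,200 placebo recipients); treat the exact denominators as approximate.

    # Vaccine efficacy = 1 - (attack rate, vaccine) / (attack rate, placebo).
    vaccine_infected, vaccine_n = 51, 8197
    placebo_infected, placebo_n = 74, 8198

    attack_vaccine = vaccine_infected / vaccine_n
    attack_placebo = placebo_infected / placebo_n
    efficacy = 1 - attack_vaccine / attack_placebo

    print(f"Estimated vaccine efficacy: {efficacy:.1%}")   # ~31.1%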

Of note, the RV 144 vaccine regimen’s ability to protect against HIV was most evident soon after vaccination and waned over time. The potential for an early, robust immune response suggested that the addition of late boosters could prolong efficacy. Furthermore, an immune-correlates analysis examined how the RV 144 vaccine modulated T-cell, IgG antibody, and IgA antibody responses and whether these responses were related to infection risk [7]. The authors found that binding of IgG antibodies to variable regions 1 and 2 (V1V2) of HIV envelope proteins (Env) was inversely associated with the rate of infection, while binding of IgA antibodies to Env was directly correlated with the rate of infection. These results suggested that V1V2 IgG antibodies might contribute to protection against HIV, whereas high levels of Env-specific IgA antibodies may reduce the effectiveness of protective antibodies. Taken together, a vaccine engineered to induce higher levels of V1V2 IgG antibodies and lower levels of Env-specific IgA antibodies than the RV 144 vaccine may provide enhanced efficacy.

Forging Ahead in South Africa

Building on the success of the RV 144 study, a pilot study called the HVTN 097 trial was carried out in South Africa. The preliminary results were presented at the HIV Research for Prevention conference in Cape Town, South Africa in late October 2014 [8]. HVTN 097 was a small Phase I trial conducted to ascertain whether the vaccine regimen tested in Thailand would safely induce a similar immune response profile in South Africans. It did not evaluate the effectiveness of the vaccine in preventing HIV acquisition; rather, it sought to establish proof of concept as to whether the RV 144 results could be reproduced in a geographically distinct patient population [9]. The trial employed the same primer and booster vaccines from RV 144 and was tested in 100 healthy South African adults. Immune responses were measured 2 weeks after the last immunization by quantifying the number of HIV-Env-specific T cells expressing IFN-γ and/or IL-2. The RV 144 regimen was well tolerated and induced an immune response similar to, if not better than, that seen in the original Thai trial. Immunogenicity was detected regardless of age, gender, or BMI, a critical observation because these factors have previously been shown to affect immune responses to HIV vaccines.

In January 2015, Phase 1 and Phase 2 trials were started using a modified form of the vaccine specifically tailored to clade C, the subtype of HIV most commonly found in South Africa. In the future, researchers plan to add a protein adjuvant to augment the immune response. Furthermore, since the RV 144 trial demonstrated waning protection beyond the initial immune response, an additional booster vaccination will be added at 12 months. This efficacy trial is expected to enroll approximately 7,000 participants, with results to be released in 2018 [10]. If the trials are successful, a vaccine product could be taken to licensure as early as 2019, according to lead investigator Glenda Gray [10]. It is worth noting, however, that while the success of the RV 144 trial and the early results from South Africa have been promising, the implementation of regional, clade-specific vaccines would be a logistical challenge, as it would require the development, approval, and worldwide distribution of multiple vaccines tailored to the local HIV strains of each region.

Back to the Bench – Broadening the Range of HIV Vaccines

In Seattle, Washington, researchers have made a discovery that could fundamentally alter the way we attack HIV. To date, one of the main obstacles to creating an effective HIV vaccine is that vaccines typically elicit antibodies against only a narrow range of existing HIV-1 strains. In a recent study published in Science, McGuire et al found that immunogenic proteins derived from the envelope glycoprotein of HIV-1 preferentially activated B cells to produce narrow neutralizing antibodies (nNAbs) rather than stimulating broadly neutralizing antibodies (bNAbs) that would protect against a wide range of strains [11]. Since most HIV vaccines consist of proteins derived from the HIV-1 envelope glycoprotein, this could explain why vaccines thus far have been ineffective. To overcome this limitation, the investigators developed a recombinant form of HIV envelope-derived glycoproteins that preferentially activated B cells to produce bNAbs instead of nNAbs. They also showed that the recombinant immunogenic proteins could induce production of bNAbs in patients prior to any exposure to HIV, thus potentially serving as the basis for an effective vaccine. Another strategy under investigation uses sequence information from multiple circulating virus strains to design a composite HIV-1 vaccine based on these “mosaic” sequences. Mosaic antigen sequences have been shown to induce broader cellular and humoral immune responses than conventional wild-type HIV antigens [12]. Barouch et al investigated the effects of mosaic vaccines in rhesus monkeys and found that the vaccines reduced the per-exposure risk of acquiring simian-human immunodeficiency virus (SHIV) by 90% [13]. The ability of the mosaic vaccine to induce significant protection against a highly potent SHIV strain in simian subjects suggests that it may also be effective against highly pathogenic HIV strains in human trials.
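To see why a 90% per-exposure risk reduction matters, it helps to compound the risk over repeated exposures: the cumulative probability of acquisition after n independent exposures, each of risk p, is 1-(1-p)^n. The sketch below assumes a 1% per-exposure baseline risk purely for illustration; that figure is not from the study.

    # Cumulative acquisition risk over repeated exposures,
    # with and without a 90% per-exposure risk reduction.
    base_risk = 0.01                       # assumed per-exposure risk
    reduced_risk = base_risk * (1 - 0.90)  # 90% per-exposure reduction

    for n in (10, 50, 100):
        p_unprotected = 1 - (1 - base_risk) ** n
        p_protected = 1 - (1 - reduced_risk) ** n
        print(f"{n:>3} exposures: {p_unprotected:.1%} vs {p_protected:.1%}")
    # 10: 9.6% vs 1.0%; 50: 39.5% vs 4.9%; 100: 63.4% vs 9.5%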

A New Era

While the medical community has made significant strides, the global threat posed by HIV underscores the urgent need for an effective vaccine to curtail this pandemic. The results from the vaccine trials highlighted above are therefore all the more exciting, and the eyes of the world will be on South Africa as the next phase of these studies rolls out. Recent advances in the laboratory also suggest that a vaccine candidate that induces bNAbs or utilizes mosaic sequences could potentially be added to the vaccine regimens currently being tested for combined efficacy. All in all, advances in HIV vaccinology may spawn a new era in our fight against this devastating virus, and make the idea of a world without HIV/AIDS seem more of a tangible reality than an idyllic dream.

Dr. Amar Parikh is a 2nd year resident at NYU Langone Medical Center.

Peer reviewed by Thomas Norton, MD, Assistant Professor in the Division of Infectious Diseases at the NYU School of Medicine

Image courtesy of www.pedaids.org 

References

  1. World Health Organization Fact Sheet – http://www.who.int/mediacentre/factsheets/fs360/en/. Accessed 11 December 2014.
  2. Finzi et al. Identification of a reservoir for HIV-1 in patients on highly active antiretroviral therapy. Science. 1997 Nov 14;278(5341):1295-300.
  3. Initiation of antiretroviral treatment protects uninfected sexual partners from HIV infection (HPTN Study 052). HIV Prevention Trials Network website. http://www.hptn.org/web%20documents/PressReleases/HPTN052PressReleaseFINAL5_12_118am.pdf. Published May 11, 2011. Accessed December 11, 2014.
  4. Bailey RC, Moses S, Parker CB, et al. Male circumcision for HIV prevention in young men in Kisumu, Kenya: a randomised controlled trial. Lancet. 2007;369(9562):643-656. http://www.ncbi.nlm.nih.gov/pubmed/17321310
  5. UNAIDS – Fact Sheet : http://www.unaids.org/en/regionscountries/countries/southafrica
  6. Rerks-Ngarm S, Pitisuttithum P, Nitayaphan S, et al. Vaccination with ALVAC and AIDSVAX to prevent HIV-1 infection in Thailand. N Engl J Med. 2009;361(23):2209-2220. http://www.nejm.org/doi/full/10.1056/NEJMoa0908492
  7. Haynes BF, Gilbert PB, et al. Immune-correlates analysis of an HIV-1 vaccine efficacy trial. N Engl J Med. 2012;366:1275-1286.
  8. In South Africa, RV144 HIV Vaccine Regimen Induces Immune Responses Similar to Those Seen in Thailand  – http://www.hivresearch.org/news.php?NewsID=304
  9. Gray GE, Andersen-Nissen E, Grunenberg N, et al. HVTN 097 : Evaluation of the RV144 Vaccine Regimen in HIV Uninfected South African Adults. http://online.liebertpub.com/doi/pdfplus/10.1089/aid.2014.5052a.abstract. Accessed December 11, 2014.
  10. Mccullom, T. A Promising HIV Vaccine in South Africa. The Atlantic. Dec 3 2014. http://www.theatlantic.com/health/archive/2014/12/a-promising-hiv-vaccine-in-south-africa/383350/2/
  11. McGuire AT, Dreyer AM, Stamatatos L, et al. Antigen modification regulates competition of broad and narrow neutralizing HIV antibodies. Science12 December 2014:346 (6215), 1380-1383. [DOI:10.1126/science.1259206] http://www.sciencemag.org/content/346/6215/1380.full
  12. Barouch DH, O’Brien KL, et al. Mosaic HIV-1 vaccines expand the breadth and depth of cellular immune responses in rhesus monkeys. Nature Medicine 2010; 16: 319-323
  13. Barouch DH, Stephenson KE, et al. Protective Efficacy of a Global HIV-1 Mosaic Vaccine against Heterologous SHIV Challenges in Rhesus Monkeys. Cell 2013; 155 (3): 531-539


Is It Time to Reconsider Who Should Get Metformin?

December 11, 2015

By Lauren Strazzulla

Current FDA guidelines for the use of metformin stipulate that it not be prescribed to patients with an elevated creatinine (at or above 1.5 mg/dL for men and 1.4 mg/dL for women). It is also contraindicated in patients with heart failure requiring pharmacologic treatment and in patients over age 80, unless measurement of creatinine clearance demonstrates that renal function is not reduced. These guidelines are in place to prevent lactic acidosis, an understandably feared complication of metformin. Yet metformin is, by consensus, the initial drug of choice in type 2 diabetes and may prevent or delay the disease in people with pre-diabetes. It is used successfully with fewer restrictions throughout Europe, where prescribing is considered acceptable as long as the patient's glomerular filtration rate (GFR) exceeds 30 mL/minute [1].

Biguanides such as metformin act by improving insulin sensitivity and by suppressing inappropriate hepatic gluconeogenesis. They inhibit the mitochondrial respiratory chain, shifting energy production from aerobic to anaerobic metabolism and generating lactic acid as a byproduct [2]. Much of the concern for lactic acidosis (LA) arose from the legacy of metformin's predecessor, phenformin, which was removed from the market in 1978 due to a high incidence of LA. But the pharmacokinetics of metformin differ markedly from those of phenformin, which has a longer half-life and causes LA at a lower blood level than metformin [3,4].

The actual risk of lactic acidosis may be lower than widely believed. In fact, several studies have demonstrated that the vast majority of patients who develop LA have serious underlying conditions, most commonly infection, acute liver or kidney injury, and cardiovascular collapse [5,6,7]. A study by Lalau and colleagues found that survival in patients with LA correlates with the severity of the associated condition, not with the degree of metformin accumulation. Metformin levels carried no diagnostic or prognostic significance in patients with LA, and in some cases higher levels were associated with reduced mortality [8]. These data call into question how significant a role metformin truly plays in potentiating lactic acidosis.

There has been speculation that patients with type 2 diabetes carry a baseline risk of LA that is separate from any risk conferred by metformin use. Brown and colleagues (1998) showed that rates of LA were indistinguishable between patients with type 2 diabetes who used metformin and those who did not, implying that the pathogenesis of LA may be more closely related to the disease itself [9]. Other studies have shown that the overall incidence of LA among metformin users is about 1 per 23,000-30,000 person-years, compared to 1 per 18,000-21,000 person-years among diabetic patients on other agents [10,11]. Thus, metformin may not be as dangerous as previously thought.
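To make those person-year figures easier to compare, here is a minimal Python sketch that converts them to rates per 100,000 person-years; the midpoints used are assumptions drawn from the published ranges.

```python
# Converting the 'one case per N person-years' figures above into rates per
# 100,000 person-years. The midpoints are assumptions taken from the cited
# ranges (metformin: 1 per 23,000-30,000; other agents: 1 per 18,000-21,000).

def rate_per_100k(person_years_per_case: float) -> float:
    """Convert '1 case per N person-years' into cases per 100,000 person-years."""
    return 100_000 / person_years_per_case

metformin_rate = rate_per_100k(26_500)     # midpoint of 23,000-30,000
other_agents_rate = rate_per_100k(19_500)  # midpoint of 18,000-21,000

print(f"Metformin:    {metformin_rate:.1f} per 100,000 person-years")    # ~3.8
print(f"Other agents: {other_agents_rate:.1f} per 100,000 person-years") # ~5.1
print(f"Rate ratio (metformin/other): {metformin_rate / other_agents_rate:.2f}")  # ~0.74
```

On these illustrative midpoints, LA incidence among metformin users is, if anything, somewhat lower than among diabetic patients on other agents.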

Moreover, metformin has numerous health benefits that reduce both progression to diabetes and the burden of established disease. It can also be combined with every other oral antidiabetic agent, as well as with insulin [12]. The Diabetes Prevention Program, which followed more than 2,000 participants randomized to metformin or placebo for an average of 3.2 years followed by a 7-8 year open-label extension, showed that the drug produced significant weight loss and delayed or prevented diabetes. There were no cases of LA during the 18,000 patient-years of follow-up [13]. A retrospective analysis of over 19,000 patients enrolled in the REACH registry found that metformin was associated with a 24% reduction in all-cause mortality after only 2 years of use [14].

Yet metformin is contraindicated in groups of patients for whom it has proven benefit. For example, metformin is contraindicated in heart failure because of a presumed increase in LA risk, even though a 2007 meta-analysis showed metformin to be the only antidiabetic drug not associated with harm in patients with both diabetes and heart failure; it also reduced mortality in these patients [15]. Many cardiac catheterization lab protocols require withholding metformin 48 hours before and after the procedure, yet there is concern that hyperglycemia from temporary cessation of metformin could be harmful during high-risk cardiac interventions [16,17]. Khurana and colleagues point out that metformin is not nephrotoxic and has no known interaction with iodinated contrast [12]. Similarly, among patients with moderate renal failure, metformin is associated with reduced mortality, though the drug is contraindicated in these patients under current guidelines [14]. Overall, the evidence suggests that the benefits of metformin likely outweigh the risks in patients with heart failure and moderate renal failure–at least in those younger than 80 [14].

Metformin is a medication that helps mitigate the consequences of diabetes. Current FDA contraindications do not reflect the evidence that adverse events from metformin are uncommon, even among at-risk groups. The 2015 position statement of the American Diabetes Association and the European Association for the Study of Diabetes maintains that the current renal-safety cutoffs are overly restrictive and recognizes that many practitioners use metformin even when GFR falls below 60 mL/min [18]. In fact, other studies suggest that metformin remains within the therapeutic range and that lactate levels are not significantly affected as long as estimated GFR exceeds 30 mL/minute [5]. It is therefore time to re-evaluate metformin prescribing practices, given that this medication can safely improve the outlook for many patients who are not currently eligible for the drug.


Commentary by Michael Tanner, MD, Executive Editor, Clinical Correlations
Dimethyl biguanide (metformin) was first synthesized from Galega officinalis (French lilac) in the 1920s. Jean Sterne, the French physician who developed it in the 1950s, coined its first trade name, "Glucophage" (glucose eater). It was added to the British National Formulary in 1958. Metformin was not approved in the United States until 1994, largely due to guilt by association with the truly dangerous biguanides phenformin and buformin. In 1998, the United Kingdom Prospective Diabetes Study (UKPDS 34) found that metformin monotherapy in overweight diabetics reduced all-cause mortality by 36% at 10.7 years compared to diet, and was associated with better patient outcomes than the insulin supply-side drugs–glyburide, chlorpropamide, and insulin itself [19]. The UKPDS was largely responsible for the American Diabetes Association's eventual recommendation that metformin, barring contraindications, be the first-line pharmacologic agent in most cases of type 2 diabetes.

Citizen petitions were submitted in 2012 and 2013 to relax the FDA’s draconian metformin rules, which are based, inexplicably, on creatinine level rather than GFR. The FDA needs to relax the no-metformin cutoff to a GFR of <30 mL/minute, so that the nearly one million diabetic patients for whom metformin is unnecessarily contraindicated can benefit.
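To see why a creatinine-based rule is a poor proxy for renal function, consider the Cockcroft-Gault estimate of creatinine clearance. The sketch below uses two hypothetical patients, chosen only to show how the same creatinine value can map to very different clearances.

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min) via the Cockcroft-Gault equation."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Two hypothetical patients with the SAME serum creatinine of 1.4 mg/dL,
# the FDA's no-metformin threshold for women:
young_large = cockcroft_gault(age_years=35, weight_kg=90,
                              serum_creatinine_mg_dl=1.4, female=False)
old_small = cockcroft_gault(age_years=82, weight_kg=50,
                            serum_creatinine_mg_dl=1.4, female=True)

print(f"35-year-old 90-kg man:   CrCl ~{young_large:.0f} mL/min")  # ~94 mL/min
print(f"82-year-old 50-kg woman: CrCl ~{old_small:.0f} mL/min")    # ~24 mL/min
```

Both patients sit at a creatinine of 1.4 mg/dL, yet their estimated clearances fall on opposite sides of the proposed 30 mL/minute cutoff, which is precisely the commentary's point.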

Lauren Strazzulla is a third-year medical student at NYU Langone School of Medicine

Michael Tanner, MD, is an Associate Professor of Medicine and Executive Editor, Clinical Correlations

References

  1. Nathan DM, Buse JB, Davidson MB, et al. Medical management of hyperglycemia in type 2 diabetes: a consensus algorithm for the initiation and adjustment of therapy: a consensus statement of the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care. 2009;32(1):193-203. http://care.diabetesjournals.org/content/32/1/193.full
  2. Cho YM, Kieffer TJ. New aspects of an old drug: metformin as a glucagon-like peptide 1 (GLP-1) enhancer and sensitiser. Diabetologia. 2011;54(2):219-222. http://www.ncbi.nlm.nih.gov/pubmed/21116606
  3. Sirtori CR, Franceschini G, Galli-Kienle M, et al. Disposition of metformin (N,N-dimethylbiguanide) in man. Clin Pharmacol Ther. 1978;24(6):683-693. http://www.ncbi.nlm.nih.gov/pubmed/710026
  4. Pernicova I, Korbonits M. Metformin—mode of action and clinical implications for diabetes and cancer. Nat Rev Endocrinol. 2014;10(3):143-156. http://www.nature.com/nrendo/journal/v10/n3/full/nrendo.2013.256.html
  5. Inzucchi SE, Lipska KJ, Mayo H, Bailey CJ, McGuire DK. Metformin in patients with type 2 diabetes and kidney disease: a systematic review. JAMA. 2014;312(24):2668-2675. http://jama.jamanetwork.com/article.aspx?articleid=2084896
  6. Misbin RI, Green L, Stadel BV, Gueriguian JL, Gubbi A, Fleming GA. Lactic acidosis in patients with diabetes treated with metformin. N Engl J Med. 1998;338:265-266. http://www.nejm.org/doi/full/10.1056/NEJM199801223380415
  7. Wilholm BE, Myrhed M. Metformin-associated lactic acidosis in Sweden 1977-1991. Eur J Clin Pharmacol. 1993;44:589-591. http://www.ncbi.nlm.nih.gov/pubmed/8405019
  8. Lalau JD, Lacroix C, Compagnon P, et al. Role of metformin accumulation in metformin-associated lactic acidosis. Diabetes Care. 1995;18(6):779-784. http://care.diabetesjournals.org/content/18/6/779.full.pdf
  9. Brown JB, Pedula K, Barzilay J, Herson MK, Latare P. Lactic acidosis rates in type 2 diabetes. Diabetes Care. 1998;21(10):1659-1663. http://care.diabetesjournals.org/content/21/10/1659.full.pdf
  10. Salpeter SR, Greyber E, Pasternak GA, Salpeter EE. Risk of fatal and nonfatal lactic acidosis with metformin use in type 2 diabetes mellitus. Cochrane Database Syst Rev. 2010;(4):CD002967. http://www.ncbi.nlm.nih.gov/pubmed/12076461
  11. Bodmer M, Meier C, Krähenbühl S, Jick SS, Meier CR. Metformin, sulfonylureas, or other antidiabetes drugs and the risk of lactic acidosis or hypoglycemia: a nested case-control analysis. Diabetes Care. 2008;31(11):2086-2091. http://www.ncbi.nlm.nih.gov/pubmed/18782901
  12. Khurana R, Malik IS. Metformin: safety in cardiac patients. Postgrad Med J. 2010;86:371-373. http://www.ncbi.nlm.nih.gov/pubmed/19564648
  13. Diabetes Prevention Program Research Group. Long-term safety, tolerability, and weight loss associated with metformin in the Diabetes Prevention Program Outcomes Study. Diabetes Care. 2012;35(4):731-737. http://www.ncbi.nlm.nih.gov/pubmed/22442396
  14. Roussel R, Travert F, Pasquet B, et al; Reduction of Atherothrombosis for Continued Health (REACH) Registry Investigators. Metformin use and mortality among patients with diabetes and atherothrombosis. Arch Intern Med. 2010;170(21):1892-1899. http://www.ncbi.nlm.nih.gov/pubmed/21098347
  15. Eurich DT, McAlister FA, Blackburn DF, et al. Benefits and harms of antidiabetic agents in patients with diabetes and heart failure: systematic review. BMJ. 2007;335(7618):497. http://www.ncbi.nlm.nih.gov/pubmed/17761999
  16. Willfort-Ehringer A, Ahmadi R, Gessl A, et al. Neointimal proliferation within carotid stents is more pronounced in diabetic patients with initial poor glycaemic state. Diabetologia. 2004;47(3):400-406. http://www.ncbi.nlm.nih.gov/pubmed/14985968
  17. Timmer JR, Ottervanger JP, de Boer MJ, et al. Hyperglycemia is an important predictor of impaired coronary flow before reperfusion therapy in ST-segment elevation myocardial infarction. J Am Coll Cardiol. 2005;45(7):999-1002. http://www.ncbi.nlm.nih.gov/pubmed/15808754
  18. Inzucchi SE, Bergenstal RM, Buse JB, et al. Management of hyperglycemia in type 2 diabetes, 2015: a patient-centered approach: update to a position statement of the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care. 2015;38(1):140-149. http://care.diabetesjournals.org/content/38/1/140.extract
  19. Effect of intensive blood-glucose control with metformin on complications in overweight patients with type 2 diabetes (UKPDS 34). UK Prospective Diabetes Study (UKPDS) Group. Lancet. 1998;352(9131):854-865. http://www.ncbi.nlm.nih.gov/pubmed/9742977


Are We Overusing Proton Pump Inhibitors?

November 13, 2015

By Shimwoo Lee
Peer Reviewed
Case: A 31-year-old man with poorly controlled type 2 diabetes was hospitalized for community-acquired pneumonia. His home medications included esomeprazole. When asked why he was taking this medication, the patient said it had been started for "ulcer prevention" during a hospitalization eight months earlier and that he had continued it since. He denied any history of upper gastrointestinal symptoms. Esomeprazole was tapered off during this admission, and at discharge, after successful treatment of his pneumonia, he was told he no longer needed it.

Proton pump inhibitors (PPIs) are among the most widely used medications in the US. In 2014, esomeprazole ranked among the top three best-selling drugs in the nation, with 17.8 million prescriptions [1]. PPIs are the most potent inhibitors of gastric acid secretion and are used to treat common upper gastrointestinal disorders such as gastroesophageal reflux disease (GERD) and peptic ulcer disease. Their effectiveness and perceived low toxicity have led to their popularity and even to inappropriate overutilization in the medical setting, as exemplified by the patient case above. However, PPI use can have potentially serious medical consequences, including an increased risk of infections, malabsorption, and adverse drug-drug interactions.

Physicians use empiric PPI therapy to diagnose GERD, one of the most common gastrointestinal diseases. If symptoms improve with empiric therapy, PPIs are then continued, often indefinitely, though it may be possible to step down to acid suppression with H2 blockers such as ranitidine. PPIs work by irreversibly inhibiting the parietal cell H+/K+-ATPase, the pump that actively secretes protons into the gastric lumen in exchange for potassium ions. Because PPIs take several days to achieve maximal suppression of acid output, short-term use does not provide optimal acid inhibition [2]. Upon discontinuation of the drug, patients can experience rebound acid hypersecretion due to hypergastrinemia, worsening GERD symptoms. For these reasons, many physicians simply keep patients on daily PPIs indefinitely. Currently, there are no evidence-based guidelines for discontinuing PPIs.

Prolonged PPI use carries serious infectious risks. Reduced acid production compromises the sterility of the gastric lumen, making it easier for pathogens to colonize the upper gastrointestinal tract and subsequently alter the colonic microbiome [3]. The best-documented enteric infection linked to PPI use is Clostridium difficile, the leading cause of gastroenteritis-associated death in the US [4]. In 2012, a meta-analysis of 42 studies linked PPI use to a significantly increased risk of both incident and recurrent C difficile infection (odds ratio 1.7) [5]. Through a similar mechanism of decreased gastric sterility, PPIs predispose patients to other bacterial gastroenteritides, as well as to both community-acquired and nosocomial pneumonia (as, perhaps, in our patient above). A 2011 meta-analysis of 31 studies found that patients taking PPIs were at increased risk of pneumonia (odds ratio 1.27) [6].
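For a sense of what these odds ratios could mean in absolute terms, the sketch below converts an odds ratio into an exposed-group risk for an assumed baseline risk. The baseline risks used are illustrative assumptions, not values taken from the cited meta-analyses.

```python
def risk_from_odds_ratio(baseline_risk: float, odds_ratio: float) -> float:
    """Risk in the exposed group implied by an odds ratio and a baseline risk."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    exposed_odds = baseline_odds * odds_ratio
    return exposed_odds / (1 + exposed_odds)

# Hypothetical baseline risks, chosen only for illustration:
for label, p0, or_ in [("C. difficile infection", 0.01, 1.7),
                       ("Pneumonia", 0.02, 1.27)]:
    p1 = risk_from_odds_ratio(p0, or_)
    print(f"{label}: {p0:.1%} baseline -> {p1:.1%} with PPI "
          f"(absolute increase {p1 - p0:.2%})")
# e.g., 1.0% -> ~1.7% for C. difficile; 2.0% -> ~2.5% for pneumonia
```

When the baseline risk is low, the odds ratio approximates the relative risk, so a 1.7 odds ratio translates into roughly a 70% relative increase on a small absolute base.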

PPI use has also been implicated in gut malabsorption. In 2011, the FDA issued a safety warning regarding the risk of hypomagnesemia in patients on PPIs for more than a year [7]. PPIs promote magnesium loss by disrupting the transport proteins that actively absorb magnesium in the gut; magnesium is essential for nucleic acid synthesis [8]. Hypomagnesemia is associated with a host of conditions, including hypertension and type 2 diabetes. Furthermore, PPI-induced hypochlorhydria can reduce calcium absorption and thus decrease bone density. The Nurses' Health Study in 2012 demonstrated that the risk of hip fracture was 36 percent higher among postmenopausal women who regularly used PPIs for at least two years than among nonusers [9].

Another reason for caution when prescribing PPIs is their potential, though uncommon, to cause deleterious drug interactions. PPIs are metabolized via hepatic cytochrome P450 enzymes, CYP2C19 being the predominant isoenzyme [10]. PPIs can interfere with many other drugs sharing the same hepatic metabolic pathway, especially in individuals with CYP2C19-inactivating polymorphisms. For instance, patients on warfarin can have a 10 percent decrease in prothrombin time with concomitant use of omeprazole, and the same PPI can increase the half-life of diazepam by 130 percent [11]. Furthermore, after studies linked omeprazole to decreased activation of clopidogrel, the FDA issued an alert in 2009 regarding this potential drug interaction and the concern for adverse cardiovascular events; however, the clinical significance of this interaction remains controversial [12].
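As a rough illustration of what a 130 percent increase in half-life implies, the sketch below estimates the time needed to eliminate 90 percent of a dose under simple first-order kinetics. The 43-hour baseline half-life for diazepam is an assumption chosen for illustration, since published values vary widely.

```python
import math

# What a 130% increase in diazepam half-life implies for drug clearance,
# assuming simple first-order elimination. The 43-h baseline half-life
# is an illustrative assumption.
t_half_base = 43.0
t_half_ppi = t_half_base * 2.3   # a 130% increase

def time_to_fraction(t_half: float, fraction_remaining: float) -> float:
    """Hours until only `fraction_remaining` of the drug is left."""
    return t_half * math.log(1 / fraction_remaining) / math.log(2)

for label, th in [("baseline", t_half_base), ("with omeprazole", t_half_ppi)]:
    print(f"{label}: ~{time_to_fraction(th, 0.10) / 24:.1f} days to clear 90%")
# Roughly 6 days vs 14 days -- a clinically meaningful prolongation.
```

Under these assumptions, the same dose lingers for more than twice as long, which is why such interactions matter for sedating drugs like benzodiazepines.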

Given the possible dangers of PPIs, the widespread practice of keeping patients on prolonged PPI therapy is concerning. While we currently lack an evidence-based approach to discontinuing PPIs, a general guideline is that patients with GERD or dyspepsia deserve consideration of a PPI taper after being asymptomatic for three to six months. To prevent rebound acid secretion when stopping a PPI, it may be necessary to overlap therapy temporarily with an H2 blocker, which does not cause acid rebound when stopped. Unfortunately, physicians frequently over-prescribe PPIs in the first place and fail to follow up their patients with a goal of stopping unnecessary therapy [13]. A 2010 study conducted in a Veterans Administration ambulatory care center showed that, of 946 patients receiving PPI therapy, 36 percent had no documented appropriate indication for initiating therapy, and 42 percent lacked re-evaluation of their upper-GI symptoms, precluding any potential for step-down therapy [14].

Overutilization of PPIs occurs in the inpatient setting as well. In the intensive care unit (ICU), PPIs are indicated specifically for stress ulcer prophylaxis in select patients at high risk of GI bleeding, including those with coagulopathies, traumatic brain injury, or severe burns, or those on long-term mechanical ventilation [15]. No such indications exist in non-ICU settings. Yet a study of 1769 non-ICU patients found that 22 percent received PPIs for stress ulcer prophylaxis, and over half of these patients were subsequently discharged home on PPIs inappropriately [16]. Most physicians who prescribe PPIs in non-ICU settings appear to do so out of fear of upper GI bleeding and associated legal repercussions [17]. Such practice not only incurs unnecessary costs but can also cause serious harm, since PPIs add to the already high rates of hospital-acquired C difficile infection and pneumonia.

Even if physicians were to stop over-prescribing PPIs, the problem of PPI overuse would not be eliminated. Since the FDA approval of over-the-counter omeprazole (Prilosec OTC) in 2003, more individuals have had direct access to PPIs. Advertised as "on-demand" relief for people with frequent heartburn, Prilosec OTC carries a label warning against use for more than 14 days. The trouble with this message is that it may promote chronic on-and-off usage, which is suboptimal, given that PPIs take several days to reach maximal effect and can cause rebound acid reflux when stopped abruptly. Hence, over-the-counter PPIs may provide only suboptimal relief of symptoms while exposing patients to the same adverse effects.

We need more judicious use of PPIs in the face of their ever-rising popularity. Their widespread use certainly attests to their effectiveness, but more care must be taken to minimize overuse. Physicians have a major role as stewards of proper PPI use, even of Prilosec OTC, by educating patients about the adverse effects of PPIs and keeping close track of both prescription and over-the-counter medication lists. It is crucial for physicians to check for proper indications before prescribing PPIs and to reassess patients' symptoms regularly for possible step-down therapy. Just as important is their role in counseling patients on lifestyle changes that can improve reflux symptoms–avoiding acidic foods, quitting smoking, and losing weight–to decrease or even eliminate the need for PPIs.

Shimwoo Lee is a third-year medical student at NYU School of Medicine

Peer Reviewed by Michael Poles, MD Associate Professor of Medicine, Division of Gastroenterology

References
1. Brooks M. Top 100 most prescribed, top selling drugs. Medscape Medical News. http://www.medscape.com/viewarticle/829246. Published August 1, 2014. Accessed May 15, 2015.
2. Wolfe MM, Sachs G. Acid suppression: optimizing therapy for gastroduodenal ulcer healing, gastroesophageal reflux disease, and stress-related erosive syndrome. Gastroenterology. 2000;118(2 Suppl 1):S9-S31.  http://www.gastrojournal.org/article/S0016-5085%2800%2970004-7/references?mobileUi=0
3. DuPont HL. Acute infectious diarrhea in immunocompetent adults. New Engl J Med. 2014;370(16):1532-1540. http://www.nejm.org/doi/full/10.1056/NEJMra1301069
4. Hall AJ, Curns AT, McDonald LC, Parashar UD, Lopman BA. The roles of Clostridium difficile and norovirus among gastroenteritis-associated deaths in the United States, 1999-2007. Clin Infect Dis. 2012;55(2):216-223. http://cid.oxfordjournals.org/content/55/2/216.full.pdf
5. Kwok CS, Arthur AK, Anibueze CI, Singh S, Cavallazzi R, Loke YK. Risk of Clostridium difficile infection with acid suppressing drugs and antibiotics: meta-analysis. Am J Gastroenterol. 2012;107(7):1011-1019. http://www.nature.com/ajg/journal/v107/n7/abs/ajg2012108a.html
6. Eom CS, Jeon CY, Lim JW, Cho EG, Park SM, Lee KS. Use of acid-suppressive drugs and risk of pneumonia: a systematic review and meta-analysis. CMAJ. 2011;183(3):310-319. http://www.cmaj.ca/content/183/3/310.short
7. U.S. Food and Drug Administration. FDA Drug Safety Communication: Low magnesium levels can be associated with long-term use of proton pump inhibitor drugs (PPIs). http://www.fda.gov/Drugs/DrugSafety/ucm245011.htm. Published March 2, 2011. Accessed May 15, 2015.
8. Perazella MA. Proton pump inhibitors and hypomagnesemia: a rare but serious complication. Kidney Int. 2013;83(4):553-556. http://www.nature.com/ki/journal/v83/n4/full/ki2012462a.html
9. Khalili H, Huang ES, Jacobson BC, Camargo CA, Jr, Feskanich D, Chan AT. Use of proton pump inhibitors and risk of hip fracture in relation to dietary and lifestyle factors: a prospective cohort study. BMJ. 2012;344:e372. http://www.bmj.com/content/344/bmj.e372
10. Klotz U, Schwab M, Treiber G. CYP2C19 polymorphism and proton pump inhibitors. Basic Clin Pharmacol Toxicol. 2004;95(1):2-8. http://tag.sagepub.com/content/5/4/219.short
11. Wolfe MM. Overview and comparison of the proton pump inhibitors for the treatment of acid-related disorders. Up to Date. https://www-uptodate-com.ezproxy.med.nyu.edu/contents/overview-and-comparison-of-the-proton-pump-inhibitors-for-the-treatment-of-acid-related-disorders?source=related_link. Updated July 22, 2014. Accessed May 15, 2015.
12. U.S. Food and Drug Administration. Information for Healthcare Professionals: Update to the labeling of Clopidogrel Bisulfate (marketed as Plavix) to alert healthcare professionals about a drug interaction with omeprazole (marketed as Prilosec and Prilosec OTC). http://www.fda.gov/Drugs/DrugSafety/PostmarketDrugSafetyInformationforPatientsandProviders/DrugSafetyInformationforHeathcareProfessionals/ucm190787.htm. Published November 17, 2009. Accessed May 15, 2015.
13. Heidelbaugh JJ, Kim AH, Chang R, Walker PC. Overutilization of proton-pump inhibitors: what the clinician needs to know. Therap Adv Gastroenterol. 2012;5(4):219-232. http://www.nature.com/ajg/journal/v101/n10/abs/ajg2006412a.html
14. Heidelbaugh JJ, Goldberg KL, Inadomi JM. Magnitude and economic effect of overuse of antisecretory therapy in the ambulatory care setting. Am J Manag Care. 2010;16(9):e228-234. http://sma.org/southern-medical-journal/article/why-do-physicians-prescribe-stress-ulcer-prophylaxis-to-general-medicine-patients/
15. American Society of Health-System Pharmacists. ASHP therapeutic guidelines on stress ulcer prophylaxis. Am J Health Syst Pharm. 1999;56(4):347-379.
16. Heidelbaugh JJ, Inadomi JM. Magnitude and economic impact of inappropriate use of stress ulcer prophylaxis in non-ICU hospitalized patients. Am J Gastroenterol. 2006;101(10):2200-2205.
17. Hussain S, Stefan M, Visintainer P, Rothberg M. Why do physicians prescribe stress ulcer prophylaxis to general medicine patients? South Med J. 2010;103(11):1103-1110.


Beta-blockers in Uncomplicated Hypertension: Is it Time for Retirement?

October 7, 2015

By Robin Guo, MD

Peer Reviewed 

Beta-blockers were among the first modern medications used to treat high blood pressure. Before 1950, treatment options for hypertension were limited. The alphabet soup of available medications—reserpine, pentaquine, hydralazine, and guanethidine—was notorious for inducing orthostasis, sedation, constipation, impotence, or blurry vision (1). Then, in the late 1950s and 1960s, chlorothiazide and propranolol were developed. Initially designed to treat angina pectoris, propranolol was serendipitously discovered to also lower blood pressure. Oddly, propranolol, like the other beta-blockers of its generation and thereafter, did not act directly on blood vessels to lower blood pressure. Instead, its purported mechanism of action was inhibition of beta-adrenergic receptors, reduction of cardiac output, and decreased sympathetic outflow (2).

Despite beta-blockers' odd mechanism of action, they were (and still are) as effective as the earlier antihypertensive agents in decreasing blood pressure (3, 4). Beta-blockers became extremely popular once they were found to have far fewer side effects than the older generation of drugs. Since then, many derivatives and generations of beta-blockers have come into use and become a mainstay in the treatment of hypertension. For the subsequent 40 years, beta-blockers battled high blood pressure on the front lines.

But in 2014, after a 10-year hiatus, the Joint National Committee on Detection, Evaluation, and Treatment of High Blood Pressure (JNC) released its 8th guideline on treating hypertension (5). In these new recommendations, beta-blockers were relegated to second-line treatment, behind thiazide diuretics, calcium channel blockers (CCB), angiotensin-converting enzyme inhibitors (ACEi), and angiotensin II receptor blockers (ARBs). After being the crux of hypertension treatment for decades, are beta-blockers becoming obsolete?

Several studies from the early 2000s rendered the beta-blocker a less attractive treatment option for hypertension. In 2002, enrolling more than 30,000 hypertensive individuals, the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) showed that treatment with the thiazide diuretic chlorthalidone decreased the rate of heart failure (HF) significantly more than either an ACEi or a CCB (6). Additionally, African Americans treated with chlorthalidone were less likely to have a stroke (6). A smaller trial published around the same time, Intervention as a Goal in Hypertension Treatment (INSIGHT), also showed a reduction in HF rates with diuretic therapy (7). In contrast, these outcomes have not been demonstrated with beta-blockers (3, 5). Thus, ALLHAT and INSIGHT brought thiazide diuretics to the forefront of hypertension treatment (8).

A second hit befell beta-blockers in 2004, when a meta-analysis published in The Lancet suggested that atenolol performed worse than other antihypertensive medications in reducing the risk of stroke (RR 1.13; 95% CI 1.02-1.25) (9). Subsequent studies and larger meta-analyses confirmed this relationship and exposed propranolol as sharing the same deficiency (3, 4, 10). Based on the mounting evidence, the JNC-8 guidelines relegated beta-blockers to second-line therapy for high blood pressure (5).

Several theories have been proposed to explain this observed stroke risk. One gaining strength is based on pulse wave dyssynchrony, whereby pressure waves are prematurely reflected back from the periphery by older, stiffer arteries, raising central systolic pressures (11). Because beta-blockers lengthen systole, they exacerbate pulse wave dyssynchrony, thereby increasing central aortic pressures. Using pressure sensors designed to estimate central pressures from the radial artery (radial artery applanation tonometry), one study showed not only that higher central aortic pressures led to worse cardiovascular outcomes, but also that beta-blockers were less capable of lowering central blood pressure than CCBs (12). Atenolol, in particular, may further worsen this dyssynchrony by causing relative peripheral vasoconstriction (13).

Keep in mind, however, that not all beta-blockers are created equal (3). For instance, the newer generation of beta-blockers has shed many of the older agents' undesirable side effects, including the hyperglycemia and hyperlipidemia of the first and second generations (2). Moreover, beta-blockers like carvedilol and nebivolol appear to decrease peripheral vascular resistance either through alpha-1 blockade or by promoting nitric oxide (NO) release, respectively, rather than by reducing cardiac output. These mechanisms translated to modest reductions in central aortic pressure when compared with atenolol in small studies (2, 14). Several clinical trials are already underway to study the blood-pressure-lowering effects of third-generation beta-blockers (15). However, further studies will be needed to examine the long-term effects of the newer beta-blockers and how they influence stroke risk, coronary heart disease outcomes, and overall mortality in uncomplicated hypertension.

Despite all the shortcomings of the beta-blocker, it is not yet time to give up on our old friend. A recent Cochrane review showed that, compared with placebo, beta-blockers have a modest effect in decreasing the risk of stroke (RR 0.80; 95% CI 0.66-0.96, with an ARR of 0.5%) (3). Additionally, a 2006 meta-analysis showed that most of the previously observed stroke risk was confounded by older populations: the excess risk associated with beta-blockers relative to other antihypertensive agents disappeared when the analysis was restricted to patients under 60 years old (16). Furthermore, most analyses of the cardiovascular outcomes of beta-blockers were derived from studies of atenolol and propranolol and may not apply to other agents in the family. For instance, the Metoprolol Atherosclerosis Prevention in Hypertensives (MAPHY) study showed that there could be a reduction in stroke and coronary heart disease when using a long-acting beta-blocker, such as metoprolol succinate (17). Finally, newer agents in the class with fewer metabolic side effects may become standard in our armamentarium against hypertension. There are reasons why our counterparts up north still recommend beta-blockers as first-line therapy for younger patients and why they remain an appropriate treatment option, especially in resistant hypertension (18).
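As a back-of-the-envelope check on how modest that stroke benefit is, the number needed to treat (NNT) falls directly out of the cited absolute risk reduction. The sketch below uses only the Cochrane figures quoted above.

```python
# NNT and implied baseline risk from the Cochrane figures quoted above:
# stroke RR 0.80 vs placebo, absolute risk reduction (ARR) 0.5%.

rr = 0.80
arr = 0.005                      # 0.5% absolute risk reduction
nnt = 1 / arr                    # patients treated to prevent one stroke
baseline_risk = arr / (1 - rr)   # since ARR = baseline_risk * (1 - RR)

print(f"NNT = {nnt:.0f}")                                         # 200
print(f"Implied control-group stroke risk ~{baseline_risk:.1%}")  # ~2.5%
```

Treating on the order of 200 patients to prevent one stroke helps explain why head-to-head differences among drug classes carry so much weight in guideline decisions.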

Comments on "Beta-blockers in Uncomplicated Hypertension: Is it Time for Retirement?"

Lois Anne Katz, MD

While beta-blocking drugs are no longer recommended as first-line therapy for hypertension in JNC 8, they still have a place in treating hypertension, especially in patients with coronary artery disease, certain arrhythmias, CHF, or other indications for beta-blocker therapy. Beta-blockers have only a modest effect on stroke, and other classes of antihypertensive drugs reduce mortality and cardiovascular disease more than beta-blockers do [3]. For these reasons, diuretics, drugs interfering with the renin-angiotensin system, and CCBs are preferred as first-line therapy for hypertension; they are also generally better tolerated than beta-blockers. However, in patients whose blood pressure is not adequately controlled with a diuretic, an ACE inhibitor or ARB, and a CCB, the addition of a beta-blocker may improve blood pressure control. As noted above and in a commentary by Ram, newer vasodilatory beta-blocking drugs such as carvedilol and nebivolol may be more beneficial in treating hypertension than the older beta-blockers, but trial results with these newer agents are not yet available [19].

Dr. Robin Guo is an Internal Medicine Resident at NYU Langone Medical Center

Peer reviewed by Lois Anne Katz, MD, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

  1. Hamdy RC. Hypertension: a turning point in the history of medicine…and mankind. South Med J. 2001;94(11):1045-7.
  2. Ripley TL, Saseen JJ. beta-blockers: a review of their pharmacological and physiological diversity in hypertension. Ann Pharmacother. 2014;48(6):723-33.
  3. Wiysonge CS, Bradley HA, Volmink J, Mayosi BM, Mbewu A, Opie LH. Beta-blockers for hypertension. Cochrane Database Syst Rev. 2012;11:CD002003.
  4. Lindholm LH, Carlberg B, Samuelsson O. Should beta blockers remain first choice in the treatment of primary hypertension? A meta-analysis. Lancet. 2005;366(9496):1545-53.
  5. James PA, Oparil S, Carter BL, Cushman WC, Dennison-Himmelfarb C, Handler J, et al. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA. 2014;311(5):507-20.
  6. ALLHAT Officers and Coordinators for the ALLHAT Collaborative Research Group. Major outcomes in high-risk hypertensive patients randomized to angiotensin-converting enzyme inhibitor or calcium channel blocker vs diuretic: The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). JAMA. 2002;288(23):2981-97.
  7. Brown MJ, Palmer CR, Castaigne A, de Leeuw PW, Mancia G, Rosenthal T, et al. Morbidity and mortality in patients randomised to double-blind treatment with a long-acting calcium-channel blocker or diuretic in the International Nifedipine GITS study: Intervention as a Goal in Hypertension Treatment (INSIGHT). Lancet. 2000;356(9227):366-72.
  8. Jones DW, Hall JE. Seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure and evidence from new hypertension trials. Hypertension. 2004;43(1):1-3.
  9. Carlberg B, Samuelsson O, Lindholm LH. Atenolol in hypertension: is it a wise choice? Lancet. 2004;364(9446):1684-9.
  10. Dahlof B, Sever PS, Poulter NR, Wedel H, Beevers DG, Caulfield M, et al. Prevention of cardiovascular events with an antihypertensive regimen of amlodipine adding perindopril as required versus atenolol adding bendroflumethiazide as required, in the Anglo-Scandinavian Cardiac Outcomes Trial-Blood Pressure Lowering Arm (ASCOT-BPLA): a multicentre randomised controlled trial. Lancet. 2005;366(9489):895-906.
  11. Cohen DL, Townsend RR. Update on pathophysiology and treatment of hypertension in the elderly. Curr Hypertens Rep. 2011;13(5):330-7.
  12. Williams B, Lacy PS, Thom SM, Cruickshank K, Stanton A, Collier D, et al. Differential impact of blood pressure-lowering drugs on central aortic pressure and clinical outcomes: principal results of the Conduit Artery Function Evaluation (CAFE) study. Circulation. 2006;113(9):1213-25.
  13. Kuyper LM, Khan NA. Atenolol vs nonatenolol beta-blockers for the treatment of hypertension: a meta-analysis. Can J Cardiol. 2014;30(5 Suppl):S47-53.
  14. Polonia J, Barbosa L, Silva JA, Bertoquini S. Different patterns of peripheral versus central blood pressure in hypertensive patients treated with beta-blockers either with or without vasodilator properties or with angiotensin receptor blockers. Blood Press Monit. 2010;15(5):235-9.
  15. ClinicalTrials.gov. https://clinicaltrials.gov/.
  16. Khan N, McAlister FA. Re-examining the efficacy of beta-blockers for the treatment of hypertension: a meta-analysis. CMAJ. 2006;174(12):1737-42.
  17. Wikstrand J, Warnold I, Tuomilehto J, Olsson G, Barber HJ, Eliasson K, et al. Metoprolol versus thiazide diuretics in hypertension. Morbidity results from the MAPHY Study. Hypertension. 1991;17(4):579-88.
  18. Daskalopoulou SS, Rabi DM, Zarnke KB, Dasgupta K, Nerenberg K, Cloutier L, et al. The 2015 Canadian Hypertension Education Program recommendations for blood pressure measurement, diagnosis, assessment of risk, prevention, and treatment of hypertension. Can J Cardiol. 2015;31(5):549-68.