Are Dentists Really Causing Infective Endocarditis?

August 29, 2012

By Jeffrey Krutoy, DDS

Faculty Peer Reviewed

Bacterial infective endocarditis is a potentially devastating disease, and while blaming the dentist is an easy tradition, recent research and new guidelines from the American Heart Association (AHA) indicate that the story may not be so simple.

Infective endocarditis (IE), while relatively uncommon (with yearly incidence rates ranging from 2 to 6 cases per 100,000 people), results in high rates of morbidity and mortality even when treated.[1] For this reason, physicians have emphasized the importance of identifying the offending pathogens and sources of infection, with the goal of preventing new cases.  Many bacterial species, especially streptococci, staphylococci, and enterococci, are well-known causes of endocarditis.  Although recent studies indicate an increasing incidence of multidrug-resistant staphylococcal IE, viridans streptococci have long been the microbes most frequently implicated in subacute infections of heart valves.[2,3]

The knowledge that viridans streptococci are normal residents of the oral cavity led to the assumption that dental procedures are responsible for many cases of infective endocarditis.   For this reason, the first set of guidelines written in the 1950s  recommended antibiotic prophylaxis prior to dental procedures in an attempt to prevent cases of IE in susceptible individuals.  A recent overhaul of these guidelines has caused confusion among patients and medical practitioners alike.

The American Heart Association’s guidelines on antibiotic prophylaxis for IE have been updated many times since they first appeared in 1955.  By 1997, antibiotic premedication was recommended in all high- and moderate-risk cardiac patients.  These included people with congenital malformations, prosthetic valves, acquired valve disorders (eg, rheumatic heart disease), hypertrophic cardiomyopathy, and mitral valve prolapse with regurgitation (see Table 1).   It was suggested that patients stratified into these high and moderate risk categories should be prescribed prophylactic antibiotics prior to undergoing certain dental procedures known to cause bleeding.  The listed treatments included extractions, periodontal surgery and scaling, dental implant placement, and complicated endodontic (root canal) treatments.  Excluded were less invasive procedures such as standard restorative dentistry (crowns, bridges, fillings), local anesthesia injections, and most orthodontic work.[4] These guidelines were complicated for both patients and practitioners, and it was therefore not uncommon to note over-prescription of antibiotic prophylaxis to low-risk patients for even non-invasive procedures “just in case.”[5] As time passed, the quality of the evidence behind the recommendations came under increasing scrutiny.

Table 1 (From 1997 Guidelines)[4]

High risk:
• Prosthetic cardiac valves
• Cyanotic congenital heart disease
• History of bacterial endocarditis
• Surgically created pulmonary shunts

Moderate risk:
• Any other congenital cardiac abnormality
• Hypertrophic cardiomyopathy
• Mitral valve prolapse with regurgitation (or with thickened leaflets)
• Acquired valve injuries (eg, rheumatic heart disease)

Low risk (no prophylaxis necessary):
• Isolated secundum atrial septal defect
• Surgical repair of atrial septal defect, ventricular septal defect, or patent ductus arteriosus (without residua >6 months)
• Previous coronary artery bypass graft surgery
• Mitral valve prolapse without valvular regurgitation
• Physiological, functional, or innocent heart murmurs
• Previous Kawasaki disease without valvular dysfunction
• Cardiac pacemakers and implanted defibrillators

The 2007 guidelines took a less aggressive, more evidence-based approach to endocarditis prophylaxis.  The AHA recognized that many of the previous guidelines were based on “expert opinion, clinical experience, and descriptive studies” rather than solid scientific evidence.  They acknowledged that even with perfect compliance using the most efficacious antibiotic, dental prophylaxis would be unlikely to prevent most cases of infective endocarditis.  They also recognized that unnecessary prophylaxis is problematic for several other reasons: adverse medication reactions, antibiotic resistance, and cost.  For these reasons, the AHA narrowed the prophylaxis recommendations to include only those patients at highest risk for endocarditis (Table 2) when undergoing procedures during which bleeding is expected.[6]

Table 2 (From 2007 Guidelines) [6]
Highest Risk (only category recommended to continue receiving prophylaxis as per AHA)
Prosthetic cardiac valve or prosthetic material used for cardiac valve repair
Previous IE
Congenital heart disease (CHD):
• Unrepaired cyanotic CHD, including palliative shunts and conduits
• Completely repaired congenital heart defect with prosthetic material or device, whether placed by surgery or by catheter intervention, during the first 6 months after the procedure
• Repaired CHD with residual defects at or adjacent to the site of a prosthetic patch or prosthetic device (which inhibit endothelialization)
Cardiac transplantation recipients who develop cardiac valvulopathy

Other organizations went even further. In 2008, the National Institute for Health and Clinical Excellence (NICE) in England published recommendations against any antibiotic prophylaxis prior to dental procedures.[7]  The same year, the Cochrane Collaboration not only found a lack of data to support dental prophylaxis, but also expressed uncertainty that dental procedures actually cause many cases of IE.[8]

A question arose: if it is not clear that dental procedures cause IE, why are oral bacteria often found in infected heart valves?  Numerous studies have been conducted in recent years to investigate the idea that transient bacteremia from daily hygiene activities such as tooth brushing and flossing causes endocarditis. In a randomized controlled trial published in Circulation by Lockhart and colleagues, researchers drew blood cultures from participants at 6 time points before, during, and after either tooth extraction or tooth brushing.[9]  They demonstrated bacteremia after extraction and, to a lesser extent, after tooth brushing.  This finding has an important implication: the average person brushes his or her teeth twice per day (and, realistically, flosses much less) but only visits the dentist twice per year, making the cumulative daily transient bacteremia riskier than almost any procedure performed during a single dental visit. [9-12]

Patients may be concerned about engaging in a daily hygiene ritual that could expose them to harm.  Reassuringly, other studies reveal that dental plaque accumulation and gingival inflammation (gingiva that bleed during brushing) are associated with a greater degree of bacteremia following daily oral health care.[13,14] Patients who maintain good dental and periodontal health therefore seed fewer bacteria into the bloodstream each day.

A few closing observations can be made.  First, solid evidence linking dental procedures to endocarditis is scarce and will likely remain so, as the low incidence of IE makes definitive research difficult to perform. Next, in light of current evidence and guidelines, the routine prescription of antibiotic prophylaxis to low-risk patients should be strongly discouraged.  There is no such thing as a totally safe drug, and antibiotic resistance is a major concern.  Finally, and perhaps most important, emphasis should be placed on educating patients and improving oral health.

Jeffrey Krutoy, DDS, is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Robert Donnino, MD, Division of Cardiology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


[1] Tleyjeh IM, Abdel-Latif A, Rahbi H, et al. A systematic review of population-based studies of infective endocarditis. Chest. 2007;132(3):1025-1035.

[2] Baddour LM, Wilson WR, Bayer AS, et al. Infective endocarditis: diagnosis, antimicrobial therapy, and management of complications: a statement for healthcare professionals from the Committee on Rheumatic Fever, Endocarditis, and Kawasaki Disease, Council on Cardiovascular Disease in the Young, and the Councils on Clinical Cardiology, Stroke, and Cardiovascular Surgery and Anesthesia, American Heart Association. Circulation. 2005;111(23):e394-e434.

[3] Karchmer AW. Infective endocarditis. In: Longo DL, Fauci AS, Kasper DL, Hauser SL, Jameson JL, Loscalzo J, eds. Harrison’s Principles of Internal Medicine. 18th ed. New York: McGraw-Hill; 2012.

[4] Dajani AS, Taubert KA, Wilson W, et al. Prevention of bacterial endocarditis: Recommendations by the American Heart Association. Circulation. 1997;96(1):358–366.

[5] Shaw D, Conway DI. Pascal’s Wager, infective endocarditis and the ‘‘no-lose’’ philosophy in medicine. Heart. 2010;96(1):15–18.

[6] Wilson W, Taubert KA, Gewitz M, et al. Prevention of infective endocarditis. Guidelines from the American Heart Association Rheumatic Fever, Endocarditis and Kawasaki Disease Committee, Council on Cardiovascular Disease in the Young, and the Council on Clinical Cardiology, Council on Cardiovascular Surgery and Anesthesia, and the Quality of Care and Outcomes Research Interdisciplinary Working Group. Circulation. 2007; 116(15):1736–1754.

[7]  National Institute for Health and Clinical Excellence. Prophylaxis against infective endocarditis: antimicrobial prophylaxis against infective endocarditis in adults and children undergoing interventional procedures. NICE Clinical Guideline No 64. London: National Institute for Health and Clinical Excellence. Published March 2008. Updated January 23, 2011.

[8]  Oliver R, Roberts G, Hooper L, Worthington HV. Antibiotics for the prophylaxis of bacterial endocarditis in dentistry. Cochrane Database Syst Rev. 2008; 4:CD003813.

[9] Lockhart PB, Brennan MT, Sasser HC, Fox PC, Paster BJ, Bahrani-Mougeot FK.  Bacteremia associated with toothbrushing and dental extraction. Circulation. 2008; 117(24):3118–3125.

[10]  Lucas VS, Gafan G, Dewhurst S, Roberts GJ. Prevalence, intensity and nature of bacteraemia after toothbrushing. J Dent. 2008;36(7):481–487.

[11] Crasta K, Daly CG, Mitchell D, Curtis B, Stewart D, Heitz-Mayfield LJ. Bacteraemia due to dental flossing. J Clin Periodontol. 2009;36(4):323–332.

[12]  Sakamoto H, Karakida K, Otsuru M, Aoki T, Hata Y, Aki A.  Antibiotic prevention of infective endocarditis due to oral procedures: myth, magic, or science? J Infect Chemother. 2007;13(4):189–195.

[13]  Tomás I, Diz P, Tobías A, Scully C, Donos N. Periodontal health status and bacteraemia from daily oral activities: systematic review/meta-analysis. J Clin Periodontol. 2012;39(3):213–228.

[14] Lockhart PB, Brennan MT, Thornhill M, et al. Poor oral hygiene as a risk factor for infective endocarditis-related bacteremia. J Am Dent Assoc. 2009;140(10):1238-1244.

RTS,S/AS01: Is This The Beginning Of The End Of Malaria?

April 12, 2012

By Nicole Sunseri

Faculty Peer Reviewed

In Africa, there lurks a stealthy and powerful beast. Is it a lion, a black mamba, or a crocodile? No, it is the Anopheles mosquito. Although smaller than a paperclip, these insects inflict an incapacitating blow, inoculating their larger human prey with Plasmodium spp., the parasites responsible for malaria. According to the World Health Organization, there were 225 million cases of malaria worldwide in 2009, with a death toll of 781,000.[1] Most of these fatalities were African children;[2] in fact, it is estimated that a child in Africa dies of malaria every 45 seconds.[1]

Malaria in humans is caused by 1 of 5 species of Plasmodium: P falciparum, P vivax, P malariae, P knowlesi, and P ovale.[2] Of these, P falciparum and P vivax are the most common, with P falciparum responsible for the most severe disease. Clinical manifestations of malaria occur during the erythrocytic phase of the parasitic life cycle and are nonspecific. Infected individuals experience cyclic fevers, chills, diaphoresis, headache, cough, nausea, vomiting, abdominal pain, diarrhea, arthralgias, myalgias, and fatigue.[2] Physical findings typically include anemia and may include splenomegaly and mild jaundice. In some instances, the infection spreads to the central nervous system, leading to rapidly fatal cerebral malaria characterized by impaired consciousness, delirium, seizures, and coma.

Current measures aimed at the control of malaria consist of prompt and effective treatment of infected individuals with combinations of antimalarial agents such as chloroquine and artemisinin-based compounds, use of insecticide-treated bed nets, and indoor spraying of insecticide.[2,3] Organizations and initiatives such as the Roll Back Malaria Partnership; the Global Fund to Fight AIDS, Tuberculosis and Malaria; and the Millennium Development Goals have aided in the development and coordination of these efforts. Indeed, the incidence of malaria has declined significantly, with elimination achieved in 29 countries.[1,4,5] Control measures, however, have not been sufficient to quell the infection completely. The fall in infection rates has been slowed by increasing parasite resistance to antimalarial therapy, mosquito resistance to insecticides, and incorrect use of medications.[6-8] Vector eradication is also a problem, given the tropical environments in which the Anopheles mosquito thrives. Locations like sub-Saharan Africa are stable zones for transmission because climate and other less well understood ecological factors confer a longer lifespan on the mosquito, allowing for increased cycles of reproduction and expansion of both the mosquito and the parasite. In these locations, it is estimated that human exposure may be as high as 1000 mosquito bites per year.[9,10] Even if insecticide sprays or treated bed nets covered 100% of the people residing in these areas, exposure would fail to drop below the level needed for local elimination.[11] Further complicating the picture is limited access to both diagnosis and treatment as a result of cost and civil unrest in these locations.[12] Thus, other means of eradication are necessary.

The ultimate hope for prevention is an effective vaccine, which until now has proven elusive. However, newly published results from a phase 3 trial show promise for the vaccine RTS,S/AS01, a product of the public-private partnership between GlaxoSmithKline and the Program for Appropriate Technology in Health (PATH) Malaria Vaccine Initiative, supported by the Bill and Melinda Gates Foundation.[13] The recombinant subunit vaccine consists of the hepatitis B surface antigen linked to epitopes derived from the circumsporozoite surface protein of the P falciparum sporozoite. Targeting the sporozoite phase of the parasite life cycle is advantageous because it attacks the parasite as it enters the host following the Anopheles bite.[2] To eradicate the parasite, the host must mount a robust immune response; in initial studies, the vaccine stimulated a strong antibody and Th1 cell-mediated response.[14-16] The polymeric nature of the RTS,S particles and the inclusion of the proprietary adjuvant AS01 aid the vaccine's immunogenicity. Subsequent investigations suggested that it was safe and well tolerated as well.[17,18]

Although the phase 3 trial is expected to conclude in 2014, the RTS,S Clinical Trials Partnership published an interim report in the November 17th issue of the New England Journal of Medicine.[13] The target population of this trial is young children, as the vaccine can be easily given with other childhood immunizations. Moreover, this population comprises the largest percentage of malaria-associated deaths worldwide.[2] The multicenter phase 3 trial enrolled 15,460 children from 11 centers in 7 African countries and separated the participants into 2 age categories: 6-12 weeks and 5-17 months. Results from the first 6000 children in the older group, as well as from the first 250 cases of severe malaria in the combined groups, were presented. The clinical endpoint was the occurrence of malaria during the 12 months following the third dose of the vaccine. The vaccine provided 55% protection against all malarial episodes, whereas 35% protection was found for severe malaria.[13] Although the results for severe malaria were lower than expected based on extended phase 2 trials, the protection elicited against uncomplicated malaria was higher.[16,18,19]

The protective efficacy of the vaccine is impressive, but an important question remains: how long does the protective immunity last? Unlike the smallpox and hepatitis B vaccines, which confer durable immunity, it is unclear whether RTS,S/AS01 affords long-term protection or whether repeated boosters will be necessary as immunity wanes. The investigators suggest that the evaluation of an 18-month booster dose at the conclusion of the trial will begin to answer this question; nonetheless, follow-up over a longer period will be needed for a complete answer. As with most other vaccines, the protection comes at a price for some of those vaccinated. Most vaccines carry the risk of local injection-site tenderness and redness; more serious adverse events, such as anaphylaxis and seizures, occur with some. RTS,S/AS01 is no exception, as the preliminary data suggest an increased risk of meningitis. However, these events may reflect the increased reactogenicity of the vaccine compared with its control vaccine, or the increase in meningitis may be a chance finding. A more definitive answer awaits the conclusion of the study.

Another potential pitfall is cost. Cost-effectiveness is key, since the highest burden of infection resides in the poorest nations, with over 90% of worldwide malaria deaths occurring in Africa.[1] As mentioned previously, poor access to malaria treatments is one factor limiting the disease's decline,[1] and the same may hold true for the vaccine. A further pitfall lies in the nature of the parasite itself: if even one sporozoite survives immune attack, infection may result, since the host has not been primed to detect the other phases of the life cycle. A single sporozoite can multiply rapidly, producing tens of thousands of merozoites, which multiply tenfold within 48 hours.[2] Thus, fully sterilizing protection is essential to prevent infection, a goal that is not attainable at this time. Although not the silver bullet the community had hoped for, RTS,S/AS01 may be combined with the other established preventive measures against malaria to reduce the threat this age-old pathogen poses. Even more important, its success underscores the value of the global community and the effectiveness of collaboration.

Dr. Nicole Sunseri is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Nathan Bertelsen, MD, Department of Medicine, Med Dir BV/NYU Pgm Survivors of Torture, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. World Health Organization. World Malaria Report: 2010.  Published 2010.  Accessed November 21, 2011.

2. Garcia LS. Malaria. Clin Lab Med. 2010;30(1):93-129.

3. Guerin PJ, Olliaro P, Nosten F, et al. Malaria: current status of control, diagnosis, treatment, and a proposed agenda for research and development. Lancet Infect Dis. 2002;2(9):564-573.

4. O’Meara WP, Mangeni JN, Steketee R, Greenwood B. Changes in the burden of malaria in sub-Saharan Africa. Lancet Infect Dis. 2010;10(8):545-555.

5. Steketee RW, Campbell CC. Impact of national malaria control scale-up programmes in Africa: magnitude and attribution of effects. Malar J. 2010;9:299.

6. Plowe CV, Doumbo OK, Djimde A, et al. Chloroquine treatment of uncomplicated Plasmodium falciparum malaria in Mali: parasitologic resistance versus therapeutic efficacy. Am J Trop Med Hyg. 2001;64(5-6):242-246.

7. Borrmann S, Sasi P, Mwai L, et al. Declining responsiveness of Plasmodium falciparum infections to artemisinin-based combination treatments on the Kenyan Coast. PLoS ONE. 2011;6(11):e26005.

8. Sangaré LR, Weiss NS, Brentlinger PE, et al. Patterns of anti-malarial drug treatment among pregnant women in Uganda. Malar J. 2011;10:152.

9. Smith DL, Dushoff J, Snow RW, Hay SI. The entomological inoculation rate and Plasmodium falciparum infection in African children. Nature. 2005;438(7067):492-495.

10. Smith DL, McKenzie FE, Snow RW, Hay SI. Revisiting the basic reproductive number for malaria and its implications for malaria control. PLoS Biol. 2007;5(3):e42.

11. Ferguson HM, Dornhaus A, Beeche A, et al. Ecology: a prerequisite for malaria elimination and eradication. PLoS Med. 2010;7(8):e1000303.

12. Croft SL, Vivas L, Brooker S. Recent advances in research and control of malaria, leishmaniasis, trypanosomiasis and schistosomiasis. East Mediterr Health J. 2003;9(4):518-533.

13. Agnandji ST, Lell B, Soulanoudjingar SS, et al. First results of phase 3 trial of RTS,S/AS01 malaria vaccine in African children. N Engl J Med. 2011;365(20):1863-1875.

14. Agnandji ST, Asante KP, Lyimo J, et al. Evaluation of the safety and immunogenicity of the RTS,S/AS01E malaria candidate vaccine when integrated in the expanded program of immunization. J Infect Dis. 2010;202(7):1076-1087.

15. Macete EV, Sacarlal J, Aponte JJ, et al. Evaluation of two formulations of adjuvanted RTS, S malaria vaccine in children aged 3 to 5 years living in a malaria-endemic region of Mozambique: a Phase I/IIb randomized double-blind bridging trial. Trials. 2007;8:11.

16. Olotu A, Lusingu J, Leach A, et al. Efficacy of RTS,S/AS01E malaria vaccine and exploratory analysis on anti-circumsporozoite antibody titres and protection in children aged 5-17 months in Kenya and Tanzania: a randomised controlled trial. Lancet Infect Dis. 2011;11(2):102-109.

17. Lusingu J, Olotu A, Leach A, et al. Safety of the malaria vaccine candidate, RTS,S/AS01E in 5 to 17 month old Kenyan and Tanzanian Children. PLoS ONE. 2010;5(11):e14090.

18. Asante KP, Abdulla S, Agnandji S, et al. Safety and efficacy of the RTS,S/AS01E candidate malaria vaccine given with expanded-programme-on-immunisation vaccines: 19 month follow-up of a randomised, open-label, phase 2 trial. Lancet Infect Dis. 2011;11(10):741-749.

19. Bejon P, Lusingu J, Olotu A, et al. Efficacy of RTS,S/AS01E vaccine against malaria in children 5 to 17 months of age. N Engl J Med. 2008;359(24):2521-2532.

Should Physicians Offer The HPV Vaccine To Men And Boys?

March 23, 2012

By Kevin Burns

Faculty Peer Reviewed

On December 22, 2010, the US Food and Drug Administration (FDA) approved the quadrivalent human papillomavirus (HPV) vaccine (Gardasil; Merck, Whitehouse Station, New Jersey) for prevention of anal cancer and anal intraepithelial neoplasia (AIN) for males and females 9 to 26 years old.[1] HPV is the most common sexually transmitted infection in the United States and the high-risk subtypes 16 and 18 are linked to development of cervical, vaginal, vulvar, anal, penile, and oropharyngeal malignancies. The FDA-approved uses for Gardasil have followed the epidemiological evidence linking HPV infection to a growing number of cancers in both males and females. The FDA originally approved the quadrivalent HPV vaccine in 2006 for cervical cancer prevention in females aged 9 to 26 years. In 2008, the FDA approved its use for the prevention of vaginal and vulvar cancers in the same population.[2] In 2009, the FDA approved its use in males and females from 9 to 26 years old for prevention of anogenital warts.[3]

Human papillomavirus DNA can be detected in the cervical tissue of 27% of women, according to the National Health and Nutrition Examination Survey, and in up to 45% of women 20-24 years of age.[4] In men, the prevalence varies depending on the site sampled, ranging from 20% to 65%.[5] In a group of HIV-negative men who have sex with men (MSM), 48% were infected with HPV; the high-risk HPV types 16 and 18 (associated with malignancy) were found in 13.7% and 8.1%, respectively, and the low-risk types 6 and 11 (associated with anogenital warts) in 13.4% and 6.8%. The strains targeted by the quadrivalent HPV vaccine (6, 11, 16, and 18) are found in the anal canal of 25.2% of men and at penile sites in 11.2%.[6] The highest-risk population, HIV-positive MSM, have a 95% rate of anogenital HPV infection.[6] Risk factors for anal HPV infection in MSM include 3-6 lifetime male sexual partners, younger age, and smoking; males with 2 or more lifetime male sexual partners have a 3-4 times increased risk of anal HPV infection.[6]

The high prevalence of anogenital HPV infection in men, and in MSM specifically, underlies its association with both benign and malignant lesions. The low-risk HPV types 6 and 11 account for 90% of condylomata acuminata (genital warts) and for the majority of the HPV disease burden in men.[5] Penile cancer is uncommon, occurring in 8 per 1 million men per year, but HPV types 16 and 18 cause 40% of the roughly 1500 US cases annually. The incidence of anal cancer, rare in the general population (1/100,000), is increased 37-fold in HIV-positive men and up to 100-fold in HIV-positive MSM.[5] HPV type 16 causes 66% of anal cancers and type 18 causes 5%.[7] In the general population, the risk of HPV-related anal cancer in men is much lower than the risk of cervical cancer in women, but in HIV-positive MSM the incidence of anal cancer exceeds that of cervical cancer in the era before screening programs.[8] HIV-positive patients are at increased risk for anogenital malignancy because immunosuppression reduces clearance of HPV and enhances oncogenesis in cells co-infected with HPV and HIV.[12] The quadrivalent HPV vaccine is safe and effective in HIV-1-infected men, with seroconversion rates of 98%, 99%, 100%, and 95% for HPV types 6, 11, 16, and 18, respectively.[7]

Implementing widespread vaccination programs is a societal economic decision, relying on the cost per quality-adjusted life-year (QALY) gained compared with other prevention programs. Currently, the HPV vaccine is routinely offered only to females aged 9 to 26 because several studies have shown it to be cost-effective at preventing cervical cancer in this population. A study by Kim and Goldie of the Harvard School of Public Health concluded that vaccinating 12-year-old girls to prevent cervical cancer cost $43,600 per QALY gained, but that the cost rose to $152,700 per QALY when vaccination was extended to females up to age 26.[9] Including the disease burden from HPV-related genital warts reduced the cost only to $133,600 per QALY. In another model that added 12-year-old boys to the HPV vaccination program, Kim and Goldie concluded that the cost of cervical cancer prevention was $290,290 per QALY compared with vaccinating girls only; including all HPV-related diseases in both sexes lowered the figure to $120,000 per QALY.[10] They therefore concluded that expanding widespread vaccination to boys is not cost-effective compared with vaccinating girls alone.[10] In contrast to these findings, a study by Elbasha and Dasbach of Merck & Co, the manufacturer of Gardasil, proposed that expanding the HPV vaccine indication to boys would cost only $25,664 per QALY for all HPV-related outcomes compared with vaccinating girls alone.[11] This vastly different result stems from different modeling procedures: the Merck model included the estimated diminished quality of life of women living with cervical intraepithelial neoplasia, assumed a higher vaccine efficacy, and included a full-page discussion debating the findings of Kim and Goldie.[10,11]

When focusing specifically on the prevention of anal cancer and anogenital warts in the high-risk population of men who have sex with men, the vaccination cost was only $15,290 per QALY for 12-year-old males without prior exposure to HPV and $37,830 per QALY for MSM aged 26 with prior HPV infection.[8] Compared with the cost of cervical cancer prevention in women ($43,600 to $152,700 per QALY), vaccinating MSM to prevent anal cancer appears even more cost-effective. Given the reduced rate of HPV clearance in HIV-positive individuals, it is important for physicians to recommend that MSM, especially those who are HIV-positive, receive the HPV vaccine to prevent anogenital malignancy. Whether to offer the vaccine to other, low-risk male populations is debated among public health officials and policymakers. It is a philosophical and ethical question whether to offer the HPV vaccine to low-risk males, who will obtain little if any benefit from vaccination themselves, with the goal of preventing disease in their future sexual partners.

Commentary by Andrew B. Wallach, MD, FACP

Prevention–both primary (preventing disease from occurring at all) and secondary (detecting asymptomatic disease early and preventing progression)–remains the crux of primary care. Kevin Burns succinctly summarizes the clinical and economic data surrounding HPV vaccination of men and boys. Of note, on October 25, 2011, the Advisory Committee on Immunization Practices (ACIP) voted to expand the routine use of quadrivalent HPV vaccine (HPV4). The expanded recommendations for HPV4 vaccination include:

• Routine vaccination of adolescent boys 11-12 years of age,

• Catch-up vaccination of males 13-21 years of age,

• Permissive use of vaccine among males 9-10 and 22-26 years of age, and

• Routine vaccination of men ages 22-26 who have HIV infection or who have sex with men.

ACIP also updated the federal Vaccines for Children (VFC) Program resolution for HPV vaccine to allow routine use and catch-up vaccination with HPV4 for VFC-eligible boys 9-18 years of age. As a result, in New York State, boys 9-18 years of age who are VFC-eligible or who are enrolled in Child Health Plus may receive publicly-purchased HPV4 in their medical homes. Boys 9-18 years of age who are considered “underinsured” under the State’s expansion provision of the VFC program may also receive the HPV4 in their medical homes. And, under New York Insurance Law, private insurance plans that are regulated by New York State are now required to cover HPV4 for boys 9-18 years of age. As Benjamin Franklin once stated, “an ounce of prevention is worth a pound of cure.”

Kevin Burns is a 4th year medical student at NYU School of Medicine

Peer reviewed by Andrew Wallach, MD, Department of Medicine (GIM Div), NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. U.S. Food and Drug Administration. Gardasil approved to prevent anal cancer.   Published December 22, 2010.  Accessed January 27, 2011.

2. U.S. Food and Drug Administration. FDA approves expanded uses for Gardasil to include preventing certain vulvar and vaginal cancers.  Published September 12, 2008. Accessed January 27, 2011.

3. U.S. Food and Drug Administration. FDA approves new indication for Gardasil to prevent genital warts in men and boys. U.S. Food and Drug Administration.  Published October 16, 2009.  Accessed January 27, 2011.

4. Dunne EF, Unger ER, Sternberg M, et al. Prevalence of HPV infection among females in the United States. JAMA. 2007;297(8):813-819.

5. Barroso LF 2nd, Wilkin T. Human papillomavirus vaccination in males: the state of the science. Curr Infect Dis Rep. 2011;13(2):175-181.

6. Goldstone S, Palefsky JM, Giuliano AR, et al. Prevalence of and risk factors for human papillomavirus (HPV) infection among HIV-seronegative men who have sex with men. J Infect Dis. 2011;203(1):66–74.

7. Wilkin T, Lee J, Lensing S, et al. Safety and immunogenicity of the quadrivalent human papillomavirus vaccine in HIV-1-infected men. J Infect Dis. 2010;202(8):1246-1253.

8. Kim JJ. Targeted human papillomavirus vaccination of men who have sex with men in the USA: a cost-effectiveness modelling analysis. Lancet Infect Dis. 2010;10(12):845-852.

9. Kim JJ, Goldie SJ. Health and economic implications of HPV vaccination in the United States. N Engl J Med. 2008;359(8):821-832.

10. Kim JJ, Goldie SJ. Cost effectiveness analysis of including boys in a human papillomavirus vaccination programme in the United States. BMJ. 2009;339:b3884.

11. Elbasha E, Dasbach E. Impact of vaccinating boys and men against HPV in the United States. Vaccine. 2010;28:6858-6867.

12. Vernon SD, Hart CE, Reeves WC, Icenogle JP. The HIV-1 tat protein enhances E2-dependent human papillomavirus 16 transcription. Virus Res. 1993;27(23):133-145.

Challenges in the Treatment of TB and HIV Co-Infection

March 16, 2012

By Santosh Vardhana, MD/PhD

Faculty Peer Reviewed

Ms. T is a 32-year-old woman with no past medical history who presents with a three-month history of productive cough, shortness of breath, and a twenty-pound weight loss. On review of systems, she also reports night sweats. On physical exam, she is cachectic. Pulmonary exam reveals dry bibasilar inspiratory crackles. Rapid HIV test is positive, and CD4 count returns at 46. Chest X-ray reveals bilateral increased interstitial markings at the lung bases as well as pleural thickening adjacent to the right lower lobe. Sputum for AFB is positive x2.

What are the concerns with co-initiation of HAART and anti-TB therapy, and what are the recommendations based on the most recent randomized controlled trials?

Cases of co-infection with Mycobacterium tuberculosis (TB) and the human immunodeficiency virus (HIV) have been prevalent since initial descriptions of an acquired immune deficiency syndrome in Zambia [1], Haiti [2], and New York City [3]. Indeed, despite substantial progress in both anti-HIV and anti-TB therapy, co-infection with TB and HIV continues to be a common problem, with estimates ranging as high as 30% both in the United States and particularly in developing countries [4]. Beyond simple coincidence, HIV seropositivity has been found to be a significant risk factor for reactivation of latent tuberculosis [5]. The molecular mechanisms underlying this link have been largely elucidated: CD4+ T cells activated by TB infection produce pro-inflammatory cytokines such as IFN-gamma [6], which in turn recruit and activate macrophages [7] that are ultimately required for formation of the granulomas that wall off sites of infection [8]. Subsequent studies have found that anti-tuberculosis T cells are rapidly depleted following acute HIV infection [9]. In turn, pulmonary tuberculosis infection has been found to accelerate HIV viral replication within involved lung segments [10]. This may account for the high rate of immunological failure and poor CD4+ T cell count recovery observed in patients diagnosed with incident tuberculosis during anti-retroviral therapy (ART) [11]. Initiation of ART is critical to preventing treatment failure in TB and results in an 80% reduction in incident TB even in the absence of anti-TB therapy [12].

Thus, initiation of ART along with anti-TB therapy is critical both to elimination of TB and to prevention of HIV progression. However, a number of concerns surround co-initiation of anti-TB and anti-retroviral therapy. One concern with initiating ART prior to complete resolution of tuberculosis infection is immune reconstitution inflammatory syndrome (IRIS). IRIS is thought to be the product of ART-induced rapid recovery of antigen-experienced CD4+ T cells, which initiate a massive, Th1-predominant inflammatory response to a disseminated yet previously clinically silent opportunistic infection (OI). This manifests as either a new, disseminated presentation of an OI or marked worsening of a previously diagnosed OI. IRIS was first described in 1998, with the identification of five HIV+ patients with CD4 counts less than 50 cells/microliter who developed the systemic inflammatory response syndrome (SIRS) shortly after treatment with the protease inhibitor indinavir; subsequent lymph node biopsies revealed a massive immune response to disseminated Mycobacterium avium complex (MAC) infection [13]. IRIS has been described in response to a number of other organisms, but occurs particularly in the setting of mycobacterial, fungal, and viral infections (all of which potently activate CD4+ T cells) [14]. (A comprehensive review published in 2002 provides an in-depth characterization of early immune reconstitution as well as a discussion of the various infections known to induce IRIS [15].) Some studies suggest that up to 45% of patients with TB/HIV co-infection may be at risk of developing IRIS following initiation of ART; risk factors for development of IRIS include CD4 counts less than 100 and disseminated TB infection at baseline, as well as rapid increases in absolute CD4 count following initiation of ART [16].

In 2006, a consortium of clinicians and researchers formed the International Network for the Study of HIV-associated IRIS (INSHI) and developed consensus case definitions for tuberculosis-associated IRIS. The initial definition of IRIS, provided by French et al in 2004 [17], required an atypical, exaggerated, or paradoxically worsening clinical presentation of an OI in the context of a decreasing HIV viral load or an increasing CD4 T cell count. The INSHI consensus criteria improved on this definition in two ways. First, they eliminated the requirement for objective measures of improvement in response to HAART, as studies had shown that IRIS can emerge before either viral load reduction or CD4 count increases are detectable. Second, they definitively split TB-IRIS into two categories: paradoxical TB-associated IRIS, in which patients with known TB undergoing anti-TB therapy experience worsening of symptoms following HAART initiation, and unmasking TB-associated IRIS, in which HAART results in a new presentation of previously undetected (likely subclinical) TB infection. Paradoxical and unmasking TB-associated IRIS have distinct risk factors and prognoses. Paradoxical TB-IRIS is generally self-limited, with a median symptom duration of 2 months [18] and a low risk of mortality. Unmasking TB-associated IRIS has been more difficult to characterize, in part because of the difficulty of distinguishing it from new TB infection acquired during early HAART (when the patient is likely still immunocompromised and susceptible to infection); however, some studies suggest that unmasking TB-associated IRIS may carry a worse prognosis due to a heightened inflammatory response to a more diffuse, subclinical infectious process [19].

Thus, the potential of IRIS may serve as a relative contraindication to initiating antiretroviral therapy in a patient with known tuberculosis. The severity of this contraindication would depend largely on:

a) the prevalence of IRIS following initiation of ART

b) the mortality from IRIS as compared to delayed ART initiation, and

c) the balance between improvement in IRIS-related morbidity and AIDS-related morbidity from delaying ART initiation.

Reports estimating the prevalence of TB-IRIS in patients newly starting ART are variable, ranging from as low as 7.6% in one observational study conducted in India [20] to as high as 32% in the US [21]; rates of IRIS appear to be particularly high in patients who initiated anti-TB therapy in the two months prior to starting ART (likely due to high circulating levels of antigen) [22]. A meta-analysis of 54 cohort studies containing over 13,000 patients starting ART, of whom nearly 1700 developed IRIS, reported a prevalence of 15.7% in patients with tuberculosis (compared with nearly 20% of patients with cryptococcal meningitis and 38% of patients with CMV retinitis) [23]. The analysis also reported a cumulative mortality rate of 3.2% in patients with TB-IRIS, although mortality in the individual studies ranged from less than 1% to nearly 10%. Notably, the overall mortality rate of TB-IRIS was far lower than that of cryptococcal IRIS (over 20%). Also of note in this meta-analysis, development of IRIS was independent of socioeconomic status, suggesting that IRIS is not restricted to developing countries, in which the prevalence of opportunistic infections is presumably higher.

Thus, existing evidence suggested that IRIS is a common complication of combined antiretroviral and anti-TB therapy, and raised the possibility that delaying ART until anti-TB therapy was either in the continuation (less intensive) phase or completed might be beneficial by reducing the infectious burden and thus the risk of IRIS. The first definitive insight into this question came from the preliminary results of the Starting Antiretroviral therapy at three Points In Tuberculosis (SAPIT) study, published in the New England Journal of Medicine in early 2010 [24].

In this study, Abdool Karim and colleagues randomized 642 patients in South Africa with diagnoses of both tuberculosis (sputum smear confirmed) and HIV (CD4 count less than 500) to 3 groups: initiation of ART at the onset of TB treatment, following the intensive phase of TB treatment (2 months), or following completion of treatment (6 months). This report was published following early termination of the sequential ART treatment arm after one year of follow-up, due to a 54% relative reduction in mortality in the integrated arms compared with the sequential arm (6% vs. 13%, p<0.001). However, the study also reported an almost threefold increase in the incidence of IRIS (12.4% vs. 3.8%, p < 0.001) in the integrated arms, demonstrating a clear adverse component of co-initiation of anti-TB and anti-retroviral therapy that might still benefit from a delay within the integrated strategy.

The initial SAPIT study established that integrated ART and anti-TB therapy was superior to sequential anti-TB and ART; however, the question of whether to delay initiation of ART until completion of the intensive anti-TB treatment phase remained unanswered until publication of the 18-month follow-up of the patients in the integrated arms of the SAPIT study in October 2011 [25]. Among the 429 patients randomized to anti-retroviral therapy during either the intensive or the continuation phase of anti-TB therapy, early initiation of ART was associated with a twofold increase in IRIS (20% vs. 8%, p < 0.001) without any improvement in mortality (7% vs. 7%, NS). Notably, however, subgroup analysis demonstrated a 60% reduction in mortality (8% vs. 20%, p = 0.17) in patients with a CD4 count <50, although this too came at the expense of an increased risk of IRIS (38% vs. 11%, p = 0.01). In patients with CD4 counts of 50-500, early ART was associated with a twofold increased risk of IRIS without an improvement in mortality. Thus, the composite SAPIT study results suggested immediate initiation of ART in patients with CD4 counts less than 50 but deferral of ART until completion of the intensive TB therapy phase in patients with CD4 counts of 50-500.

It is notable, however, that the inclusion criteria for the SAPIT study admitted patients for whom ART is not necessarily indicated (CD4 counts >350) [26]; it remained possible that in patients in whom ART is definitively indicated (CD4 <200), co-initiation of ART and anti-TB therapy might still be beneficial. That question was the subject of the Cambodian Early versus Late Introduction of Antiretrovirals (CAMELIA) trial, in which patients with a CD4 count <200 were randomized to ART either two or eight weeks after initiation of anti-TB therapy. Preliminary reports from this study showed a 34% improvement in mortality in the early treatment arm, despite an almost fourfold increase in the incidence of IRIS [27]. At the completion of two years of follow-up, this benefit was sustained: a 35% reduction in mortality was seen in the early ART group (17% vs. 26%, p = 0.002) despite a twofold increased risk of IRIS (33% vs. 14%, p < 0.001) [28]. However, despite the inclusion criterion of a CD4 count less than 200, the average CD4 count of participants in this study was 25. Therefore, while confirming the benefit of early ART initiation in patients with CD4 counts less than 50, this study did not resolve the question of when to initiate ART in patients with CD4 counts between 50 and 200. That answer may have come with the publication of AIDS Clinical Trials Group Study A5221 [29]. In this study, 809 patients with known or suspected TB as well as HIV with CD4 counts <250 were randomized to initiation of ART during either the intensive or the continuation phase of anti-TB therapy. Of note, the average CD4 count in this population was 77. At one year of follow-up, early initiation of ART resulted in a twofold increase in IRIS (11% vs. 5%, p = 0.002) without any improvement in mortality (6% vs. 7%, NS). Subgroup analysis of patients with CD4 counts less than 50 confirmed the results of prior studies, with a 42% reduction in the combined endpoint of mortality and AIDS-defining conditions (16% vs. 27%, p = 0.02), while no difference was seen in patients with CD4 counts of 50-250 (12% vs. 10%, NS). The results of all four studies are summarized in the table below (Table 1).
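The relative risk reductions quoted for these trials are simple arithmetic on the reported event rates. As a sanity check, here is a minimal Python sketch (the function name is illustrative, not from any of the trials; small discrepancies from the published figures reflect rounding of the event rates):

```python
def relative_risk_reduction(early_pct, late_pct):
    """Relative risk reduction for the early-ART arm vs the late-ART arm,
    computed from the event rates (in percent) reported for each trial."""
    return 1 - early_pct / late_pct

# SAPIT (initial): mortality 6% integrated vs. 13% sequential -> ~54% reduction
print(round(relative_risk_reduction(6, 13) * 100))   # 54

# CAMELIA: mortality 17% early vs. 26% late -> ~35% reduction
print(round(relative_risk_reduction(17, 26) * 100))  # 35

# A5221, CD4<50 subgroup: AIDS/death 16% early vs. 27% late
# -> ~41% reduction (quoted as 42% from the unrounded trial rates)
print(round(relative_risk_reduction(16, 27) * 100))  # 41
```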

Thus, these studies indicate that the choice of when to initiate ART depends on the risk of progression to AIDS and death. Based on the SAPIT study, sequential therapy clearly results in higher mortality; however, synthesis of the SAPIT, CAMELIA, and A5221 studies suggests that initiation of ART during the intensive phase of anti-TB therapy should be restricted to patients with CD4 counts <50, while the remainder of patients should initiate ART during the continuation phase of anti-TB therapy.

Given the clear mortality benefit of integrated as compared to sequential ART/anti-TB therapy seen in SAPIT, integrated therapy is likely to become the standard of care in patients with new diagnoses of TB and HIV, and as demonstrated in all three RCTs, this carries with it a significantly elevated risk of IRIS. A continuing area of interest, therefore, is whether suppression of the hyperinflammatory response using steroids may help either prevent or alleviate IRIS in response to co-initiation of ART and anti-TB therapy, given the established benefit of adjunctive steroid therapy in tuberculous pericarditis [30, 31] as well as data suggesting a possible, although controversial, benefit of dexamethasone pretreatment for tuberculous meningitis [32, 33]. To date, no studies have investigated the use of prophylactic steroids at the time of initiation of combined ART and anti-TB therapy, but one RCT of prednisone given to patients who developed paradoxical TB-IRIS following initiation of ART showed a reduction in inpatient hospital days, improvement in quality of life, and improvement in chest radiographic findings at early, but not late, timepoints; this benefit was somewhat tempered by an increased risk of infection in the treatment group [34]. One other study noted an improvement with prednisone therapy in the hypercytokinemia seen in TB-IRIS patients, particularly in the pro-inflammatory cytokines IL-6 and TNF-alpha [35]. Further studies will be needed to assess for definitive clinical benefit of steroid therapy either at the time of initiation of ART or for resultant IRIS.


Current evidence suggests that development of IRIS is a common risk of co-initiation of anti-TB and anti-retroviral therapy. Evidence from RCTs suggests that the benefits of co-initiation outweigh the risks of delaying HAART until completion of anti-TB therapy; however, delaying initiation of ART until completion of the intensive phase of anti-TB treatment appears to lower the incidence of IRIS without affecting mortality in all patients except those with severely depressed CD4 counts. All patients receiving simultaneous treatment for TB and HIV should be closely monitored for development of TB-IRIS; if it develops, supportive care is appropriate, with some evidence supporting prednisone in patients with severe inflammatory manifestations.

Table 1. Comparison of recent RCTs evaluating co-initiation of ART and anti-TB therapy. (ACM = all-cause mortality; HR = hazard ratio, early vs. late arm; w = weeks; y = years. Trial columns are identified by their NEJM citations, references 24, 25, 29, and 28, respectively.)

                         SAPIT (initial)  SAPIT (follow-up)  A5221        CAMELIA
NEJM issue               362:697          365:1492           365:1482     365:1471
CD4 cutoff               500              500                250          200
TB status                known            known              known/susp   known
Early ART start (w)      0-8              0                  2            2
Late ART start (w)       26               8                  8-12         8
Age                      34               34                 34           35
CD4 count                147              150                77           25
Follow-up (y)            1                1.5                1            2
AIDS/ACM (early)         N/A              8%                 13%          N/A
AIDS/ACM (late)          N/A              9%                 16%          N/A
ACM (early)              6%               7%                 6%           17%
ACM (late)               13%              7%                 7%           26%
ACM HR                   0.46             1.00               0.95         0.65
IRIS (early)             12%              20%                11%          33%
IRIS (late)              4%               8%                 5%           14%
IRIS HR                  3.3              2.4                2.2          2.4
CD4<50 subgroup:
  AIDS/ACM (early)       --               11%                16%          --
  AIDS/ACM (late)        --               29%                27%          --
  AIDS/ACM HR            --               0.38               0.58         --
CD4>50 subgroup:
  AIDS/ACM (early)       --               8%                 12%          --
  AIDS/ACM (late)        --               5%                 10%          --
  AIDS/ACM HR            --               1.6                1.1          --

Santosh Vardhana is a recent graduate from the MSTP program at NYU School of Medicine

Peer Reviewed by Melanie Maslow, Infectious Disease Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


1. Melbye, M., et al., Evidence for heterosexual transmission and clinical manifestations of human immunodeficiency virus infection and related conditions in Lusaka, Zambia. Lancet, 1986. 2(8516): p. 1113-5.

2. Pitchenik, A.E., et al., Tuberculosis, atypical mycobacteriosis, and the acquired immunodeficiency syndrome among Haitian and non-Haitian patients in south Florida. Ann Intern Med, 1984. 101(5): p. 641-5.

3. Masur, H., et al., Opportunistic infection in previously healthy women. Initial manifestations of a community-acquired cellular immunodeficiency. Ann Intern Med, 1982. 97(4): p. 533-9.

4. Awoyemi, O.B., O.M. Ige, and B.O. Onadeko, Prevalence of active pulmonary tuberculosis in human immunodeficiency virus seropositive adult patients in University College Hospital, Ibadan, Nigeria. Afr J Med Med Sci, 2002. 31(4): p. 329-32.

5. Selwyn, P.A., et al., A prospective study of the risk of tuberculosis among intravenous drug users with human immunodeficiency virus infection. N Engl J Med, 1989. 320(9): p. 545-50.

6. Oftung, F., et al., Epitopes of the Mycobacterium tuberculosis 65-kilodalton protein antigen as recognized by human T cells. J Immunol, 1988. 141(8): p. 2749-54.

7. Orme, I.M., et al., Cytokine secretion by CD4 T lymphocytes acquired in response to Mycobacterium tuberculosis infection. J Immunol, 1993. 151(1): p. 518-25.

8. Hansch, H.C., et al., Mechanisms of granuloma formation in murine Mycobacterium avium infection: the contribution of CD4+ T cells. Int Immunol, 1996. 8(8): p. 1299-310.

9. Geldmacher, C., et al., Early depletion of Mycobacterium tuberculosis-specific T helper 1 cell responses after HIV-1 infection. J Infect Dis, 2008. 198(11): p. 1590-8.

10. Nakata, K., et al., Mycobacterium tuberculosis enhances human immunodeficiency virus-1 replication in the lung. Am J Respir Crit Care Med, 1997. 155(3): p. 996-1003.

11. Hermans, S.M., et al., Incident tuberculosis during antiretroviral therapy contributes to suboptimal immune reconstitution in a large urban HIV clinic in sub-Saharan Africa. PLoS One, 2010. 5(5): p. e10527.

12. Miranda, A., et al., Impact of antiretroviral therapy on the incidence of tuberculosis: the Brazilian experience, 1995-2001. PLoS One, 2007. 2(9): p. e826.

13. Race, E.M., et al., Focal mycobacterial lymphadenitis following initiation of protease-inhibitor therapy in patients with advanced HIV-1 disease. Lancet, 1998. 351(9098): p. 252-5.

14. Meintjes, G., et al., Tuberculosis-associated immune reconstitution inflammatory syndrome: case definitions for use in resource-limited settings. Lancet Infect Dis, 2008. 8(8): p. 516-23.

15. Shelburne, S.A., 3rd, et al., Immune reconstitution inflammatory syndrome: emergence of a unique syndrome during highly active antiretroviral therapy. Medicine (Baltimore), 2002. 81(3): p. 213-27.

16. Breton, G., et al., Determinants of immune reconstitution inflammatory syndrome in HIV type 1-infected patients with tuberculosis after initiation of antiretroviral therapy. Clin Infect Dis, 2004. 39(11): p. 1709-12.

17. French, M.A., P. Price, and S.F. Stone, Immune restoration disease after antiretroviral therapy. AIDS, 2004. 18(12): p. 1615-27.

18. Olalla, J., et al., Paradoxical responses in a cohort of HIV-1-infected patients with mycobacterial disease. Int J Tuberc Lung Dis, 2002. 6(1): p. 71-5.

19. Breen, R.A., et al., Does immune reconstitution syndrome promote active tuberculosis in patients receiving highly active antiretroviral therapy? AIDS, 2005. 19(11): p. 1201-6.

20. Kumarasamy, N., et al., Incidence of immune reconstitution syndrome in HIV/tuberculosis-coinfected patients after initiation of generic antiretroviral therapy in India. J Acquir Immune Defic Syndr, 2004. 37(5): p. 1574-6.

21. Shelburne, S.A., et al., Incidence and risk factors for immune reconstitution inflammatory syndrome during highly active antiretroviral therapy. AIDS, 2005. 19(4): p. 399-406.

22. Lawn, S.D., et al., Tuberculosis-associated immune reconstitution disease: incidence, risk factors and impact in an antiretroviral treatment service in South Africa. AIDS, 2007. 21(3): p. 335-41.

23. Muller, M., et al., Immune reconstitution inflammatory syndrome in patients starting antiretroviral therapy for HIV infection: a systematic review and meta-analysis. Lancet Infect Dis, 2010. 10(4): p. 251-61.

24. Abdool Karim, S.S., et al., Timing of initiation of antiretroviral drugs during tuberculosis therapy. N Engl J Med, 2010. 362(8): p. 697-706.

25. Abdool Karim, S.S., et al., Integration of antiretroviral therapy with tuberculosis treatment. N Engl J Med, 2011. 365(16): p. 1492-1501.

26. Writing committee for the CASCADE collaboration, Timing of HAART initiation and clinical outcomes in Human Immunodeficiency Virus type 1 seroconverters. Arch Intern Med, 2011. 171(17): p. 1560-1569.

27. Piggott, D.A. and P.C. Karakousis, Timing of antiretroviral therapy for HIV in the setting of TB treatment. Clin Dev Immunol, 2011. 2011: p. 103917.

28. Blanc, F., et al., Earlier versus later start of antiretroviral therapy in HIV-infected adults with tuberculosis. N Engl J Med, 2011. 365(16): p. 1471-81.

29. Havlir, D.V., et al., Timing of antiretroviral therapy for HIV-1 infection and tuberculosis. N Engl J Med, 2011. 365(16): p. 1482-91.

30. Hakim, J. G., et al., Double blind randomised placebo controlled trial of adjunctive prednisolone in the treatment of effusive tuberculous pericarditis in HIV seropositive patients. Heart, 2000. 84: p. 183-188.

31. Strang, J. I. G., et al., Controlled trial of prednisolone as adjuvant in treatment of tuberculous constrictive pericarditis in Transkei. Lancet, 1987. 2(8573): p. 1418-22.

32. Thwaites, G.E., et al., Dexamethasone for the treatment of tuberculous meningitis in adolescents and adults. N Engl J Med, 2004. 351(17): p. 1741-51.

33. Prasad, K., Singh, M.B., Corticosteroids for managing tuberculous meningitis. Cochrane Database Syst Rev, 2008. 1: CD002244.

34. Meintjes, G., et al., Randomized placebo-controlled trial of prednisone for paradoxical tuberculosis-associated immune reconstitution inflammatory syndrome. AIDS, 2010. 24(15): p. 2381-90.

35. Tadokera, R., et al., Hypercytokinaemia accompanies HIV-tuberculosis immune reconstitution inflammatory syndrome. Eur Respir J, 2010.

Does the BCG Vaccine Really Work?

March 14, 2012

By Mitchell Kim

Faculty Peer Reviewed

Mycobacterium tuberculosis, an acid-fast bacillus, is the causative agent of tuberculosis (TB), an infection that causes significant morbidity and mortality worldwide. A highly contagious infection, TB is spread by aerosolized pulmonary droplet nuclei containing the infective organism. Most infections manifest as pulmonary disease, but TB is also known to cause meningitis, vertebral osteomyelitis, and other systemic diseases through hematogenous dissemination.[1] In 2009, there were an estimated 9.4 million incident and 14 million prevalent cases of TB worldwide, with a vast majority of cases occurring in developing countries of Asia and Africa. Approximately 1.7 million patients died of TB in 2009.[2]

TB has afflicted human civilization throughout recorded history, and may have killed more people than any other microbial agent. Robert Koch first identified the bacillus in 1882, for which he was awarded the Nobel Prize in 1905. In 1921, Albert Calmette and Camille Guérin developed a live TB vaccine known as bacille Calmette-Guérin (BCG) from an attenuated strain of Mycobacterium bovis.[3]

As the only available TB vaccine, BCG has been in use since 1921,[4] and is now the most widely used vaccine worldwide,[5] with more than 3 billion total doses given. BCG was initially administered as a live oral vaccine. This route of administration was abandoned in 1930 following the Lübeck (Germany) disaster, in which 27% of 249 infants receiving the vaccine developed TB and died. It was later discovered that the Lübeck vaccine had been contaminated with virulent M tuberculosis. Studies conducted in the 1930s subsequently established the intradermal route as safe for mass vaccination.[6] The World Health Organization currently recommends BCG vaccination for newborns in high-burden countries, although the protection against TB is thought to dissipate within 10-20 years.[7] The BCG vaccine is not used in the US, where TB control emphasizes treatment of latently infected individuals.[3]

Although the vaccine is widely used, its efficacy in preventing pulmonary TB is uncertain, with studies showing anywhere from 0% to 80% protective benefit. A meta-analysis performed in 1994 showed that the BCG vaccine reduces the risk of pulmonary TB by 50% on average, with greater reductions in the risk of disseminated TB and TB meningitis (78% and 64%, respectively).[8] It is currently accepted that the BCG vaccine provides protection against TB meningitis and disseminated TB in children, as well as against leprosy in endemic areas such as Brazil, India, and Africa.[9]

There are several possible explanations for the variation in BCG vaccine efficacy across studies. Based on the observation that BCG vaccine trials showed greater efficacy at higher latitudes than at lower latitudes (P<0.00001), it has been hypothesized that exposure to certain endemic mycobacteria, thought to be more common at lower latitudes, provides natural immunity to the local population, so that BCG vaccination adds little to this natural protection. The higher prevalence of skin reactivity to PPD-B (Mycobacterium avium-intracellulare antigen) at lower latitudes supports this theory. However, no conclusive link has been found between endemic mycobacterial exposure and protection against TB. In addition, TB infection rates are highest at lower latitudes, where natural immunity should be greatest;[5] this may indicate that other factors are at play. Another reason the observed efficacy of BCG vaccines may vary so widely is that they are produced at different sites around the world, with inconsistent quality control.[4] The vaccine’s efficacy also depends on the viability of the BCG organisms, which can be markedly altered by storage conditions.[10]

BCG is considered a safe vaccine,[4] with the main side effect being a localized reaction at the injection site with erythema and tenderness, followed by ulceration and scarring; this occurs almost invariably after correct intradermal administration. Overall, the rate of any adverse reaction has been reported to be between 0.1% and 19%,[11] and serious adverse reactions such as osteitis, osteomyelitis, and disseminated BCG infection are rare,[7] estimated to occur less than once per 1 million doses given.[11] Disseminated BCG infection is a serious complication seen almost exclusively in vaccinated patients with underlying immunodeficiency, such as HIV infection or severe combined immunodeficiency. This complication carries a high mortality rate of 80-83%, and fatal cases are estimated to occur in 0.19-1.56 per 1 million vaccinations.[7]

Immunization with the BCG vaccine increases the likelihood of a positive purified protein derivative (PPD) tuberculin skin test. This can complicate the interpretation of a PPD test and may lead to unnecessary preventive treatment in people who do not truly have latent TB infection. However, it has been shown that a person’s age at the time of BCG vaccination, as well as the number of years since vaccination, affects the risk of PPD positivity. The US Preventive Services Task Force therefore recommends PPD screening of high-risk patients, and advises that an induration >10 mm after PPD administration should not be attributed to the BCG vaccine. If a patient has previously received the BCG vaccine, the CDC recommends using the QuantiFERON-TB Gold test (QFT-G, Cellestis Limited, Carnegie, Victoria, Australia), an interferon-gamma release assay, to detect TB exposure instead of the PPD. This test is specific for M tuberculosis proteins and does not cross-react with BCG. The major drawback of the QFT-G test is that it is roughly 3 times more expensive than the PPD test.[12]

In summary, the BCG vaccine has been in use for 90 years to reduce the prevalence of TB infection. It is the most widely used vaccine worldwide, with 100 million doses administered every year.[7] Although the vaccine is compulsory in 64 countries and recommended in another 118, its use is uncommon in the US, where treatment of latent infection is the major form of TB control. The vaccine limits multiplication and systemic dissemination of TB [13] and decreases the morbidity and mortality of TB infection, but it has no effect on transmission [7] and no role in the secondary prevention of TB.[13] The vaccine’s efficacy in preventing pulmonary TB is highly variable, but it is thought to be efficacious in preventing TB meningitis, disseminated TB, and leprosy. To make up for the BCG vaccine’s shortcomings in preventing pulmonary TB, substantial progress is being made in the field of TB vaccine development: in 2010, 11 vaccine candidates were being evaluated in clinical trials, 2 of them for efficacy.[9] Future developments may improve on the foundation built by the BCG vaccine and reduce the worldwide health burden of this ancient disease.

Mitchell Kim is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Robert Holzman, MD, Professor Emeritus of Medicine and Environmental Medicine; Departments of Medicine (Infectious Disease and Immunology) and Environmental Medicine

Image courtesy of Wikimedia Commons


1. Raviglione MC, O’Brien RJ. Tuberculosis. In: Fauci AS, Braunwald E, Kasper DL, Hauser SL, Longo DL, Jameson JL, Loscalzo J, eds. Harrison’s Principles of Internal Medicine. 17th ed. New York, NY: McGraw-Hill; 2008: 1006-1020.

2. World Health Organization. Global tuberculosis Control: WHO report 2010.   Accessed September 11, 2011.

3. Daniel TM. The history of tuberculosis. Respir Med. 2006;100(11):1862-1870.

4. World Health Organization. Initiative for Vaccine Research: BCG–the current vaccine for tuberculosis.  Published 2011 .  Accessed September 11, 2011.

5. Fine PE. Variation in protection by BCG: implications of and for heterologous immunity. Lancet. 1995;346(8986):1339-1345.

6. Anderson P, Doherty TM. The success and failure of BCG–implications for a novel tuberculosis vaccine. Nat Rev Microbiol. 2005;3(8):656-662.

7. Rezai MS, Khotaei G, Mamishi S, Kheirkhah M, Parvaneh N. Disseminated Bacillus Calmette-Guérin infection after BCG vaccination. J Trop Pediatr. 2008; 54(6): 413-416.

8. Colditz GA, Brewer TF, Berkey CS, et al. Efficacy of BCG vaccine in the prevention of tuberculosis: Meta-analysis of the published literature. JAMA. 1994;271(9):698-702.

9. McShane H. Tuberculosis vaccines: beyond bacille Calmette-Guérin. Philos Trans R Soc Lond B Biol Sci. 2011;366(1579):2782-2789.

10. World Health Organization. Temperature sensitivity of vaccines. Published August, 2006. Accessed October 30, 2011.

11. Turnbull FM, McIntyre PB, Achat HM, et al. National study of adverse reactions after vaccination with bacille Calmette-Guérin. Clin Infect Dis. 2002;34(4):447-453.

12. Rowland K, Guthmann R, Jamieson B. Clinical inquiries. How should we manage a patient with a positive PPD and prior BCG vaccination? J Fam Pract. 2006;55(8):718-720.

13. Thayyil-Sudhan S, Kumar A, Singh M, Paul VK, Deorari AK. Safety and effectiveness of BCG vaccination in preterm babies. Arch Dis Child Fetal Neonatal Ed. 1999;81(1):F64-F66.

From The Archives: Why is Syphilis Still Sensitive to Penicillin?

January 13, 2012

Please enjoy this post from the Archives, first published on July 30, 2009

By Sam Rougas, MD

Faculty Peer Reviewed

It seems that every week a new article in a major newspaper reports what most infectious disease physicians have been preaching for years: antibiotic resistance is rapidly spreading. Pathogens such as methicillin-resistant Staphylococcus aureus, extensively drug-resistant tuberculosis, and vancomycin-resistant enterococci have journeyed from the intensive care unit to the locker rooms of the National Football League. That said, some bacteria have behaved strangely and, until recently, inexplicably. Syphilis, a disease caused by the spirochete Treponema pallidum, was first reported in Europe around the 15th century but has likely been in North America since the dawn of mankind. Its rapid spread in Europe began shortly after Christopher Columbus returned from the New World[1] and continued unabated until it was noted that penicillin (PCN) could cure the disease.[2] Since that time, syphilis, once the great pox, has fallen to the bottom of most differentials. How is it, then, that one of our oldest diseases remains sensitive to our first antibiotic?

Penicillin resistance in staphylococcal species was reported as early as 1946, and multiple cases were noted worldwide before the turn of the decade.[3] Within ten years of the introduction of PCN there was resistance among staphylococcal species; yet after more than 50 years, PCN-resistant syphilis remains worthy of a case report. In practically every reported case, the infection was cured by increasing the dose or duration of therapy or by switching to another beta-lactam antibiotic.[4,5] One tempting explanation is that spirochetes are incapable of developing PCN resistance; however, that is not true: Brachyspira pilosicoli, an intestinal spirochete, has shown PCN resistance.[6] A second thought is that T. pallidum is incapable of developing antibiotic resistance at all, but this too has been disproven. Case reports of azithromycin resistance in T. pallidum became increasingly common at the beginning of this century, and gene sequencing of these isolates mapped the mutation responsible for the macrolide-resistant phenotype.[7] The mechanism of action of a macrolide is of course different from that of a beta-lactam, as is the resistance profile, but these reports show that syphilis is capable of developing resistance to at least one class of antibiotic.

The classic teaching is that beta-lactam antibiotics act at the level of the cell wall by binding to penicillin-binding proteins (PBPs). Once bound, beta-lactams interfere with the production of specific peptidoglycans critical for cell wall structure; without these peptides the cell wall ruptures and the bacterium dies. Resistance occurs when bacteria, either through an innate mutation or through DNA exchange, acquire the ability to produce beta-lactamase, an enzyme capable of cleaving the antibiotic and rendering it useless. In syphilis the mechanism of action is thought to be the same, yet resistance has never developed. This may be a direct consequence of one of the more recently discovered PBPs, Tp47.[8] Tp47 functions as both a PBP and a beta-lactamase, yet it may paradoxically be responsible for the persistence of PCN sensitivity in syphilis. The binding of PCN to Tp47 results in hydrolysis of the antibiotic's beta-lactam bond, but the reaction creates several byproducts that are thought to have a higher affinity for Tp47 than the beta-lactam itself.[9] Thus, as PCN is broken down, products are released that make it more difficult for the beta-lactamase to bind and inactivate the antibiotic.

While this is one current theory behind the exquisite sensitivity of syphilis to PCN, it is clearly not cause for celebration. Cases of syphilis are increasing worldwide[10] as the medical community has been unable to eradicate this disease. As the number of cases increases, so too does the potential for antibiotic resistance. In theory, a mutation in Tp47 could alter the protective byproducts on which the sensitivity of syphilis to PCN depends. Such a mutation would likely bring an end to the gravy train that has been the treatment of syphilis.

Faculty Peer Reviewed with commentary by Meagan O'Brien, MD, NYU Division of Infectious Diseases and Immunology

It is true that Treponema pallidum remains highly susceptible to penicillin even while it has developed resistance to azithromycin through an A→G mutation at position 2058 of its 23S rRNA gene; this mutation confers resistance by precluding macrolide binding to the bacterial 50S ribosomal subunit, of which 23S rRNA is a structural component.[7] Nonetheless, the mechanisms of retained penicillin sensitivity are not fully understood. The discovery of Tp47 as a dual PBP and beta-lactamase is interesting and important, but more studies would be needed to attribute the persistence of T. pallidum's penicillin sensitivity to this mechanism. Luckily, we do not have many clinical isolates with which to test this theorized mechanism. One key clinical point to remember is that eradication of the infection depends not only on the invading organism but also on the host defense system. In our HIV-positive, immunocompromised patient population, we routinely worry about treatment failure in syphilis due not to penicillin resistance but to dysfunctional host responses. A body of evidence now supports the recommendation that an HIV-positive patient with a CD4 T-cell count ≤350 cells/uL and a blood RPR titer ≥1:32 in the setting of latent syphilis or syphilis of unknown duration should undergo lumbar puncture to rule out neurosyphilis and, if the findings are positive, should receive intravenous penicillin instead of intramuscular (IM) benzathine penicillin.[11-14] Additionally, after treatment of late or latent syphilis, a fourfold fall in RPR titer should be observed over 12 months; otherwise the patient should be evaluated for treatment failure or neurosyphilis, with the understanding that the CNS may be a more privileged site for treponemal survival in the face of IM benzathine penicillin.


1. Rose M. Origins of syphilis. Archaeology. 1997;50(1).

2. Mahoney J, Arnold R, Harris A. Penicillin treatment of early syphilis: a preliminary report. Vener Dis Inform. 1943;24:355-357.

3. Shanson DC. Antibiotic-resistant Staphylococcus aureus. J Hosp Infect. 1981;2:11-36.

4. Cnossen W, Niekus H, Nielsen et al. Ceftriaxone treatment of penicillin resistant neurosyphilis in alcoholic patients. J Neurol Neurosurg Psychiatry. 1995;59:194-195.

5. Stockli H. Current aspects of neurosyphilis: therapy resistant cases with high-dosage penicillin? Schweiz Rundsch Med Prax. 1992;81(49):1473-1480.

6. Mortimer-Jones S, Phillips N, Ram Naresh T, et al. Penicillin resistance in the intestinal spirochaete Brachyspira pilosicoli associated with OXA-136 and OXA-137, two new variants of the class D beta-lactamase OXA-63. J Med Microbiol. 2006;57:1122-1128.

7. Katz K, Klausner J. Curr Opin Infect Dis. 2008;21:83-91.

8. Deka R, Machius M, Norgard M, et al. Crystal structure of the 47-kDa lipoprotein of Treponema pallidum reveals a novel penicillin-binding protein. J Biol Chem. 2002;277(44):41857-41864.

9. Cha J, Ishiwata A, Mobashery S. A novel beta-lactamase activity from a penicillin-binding protein of Treponema pallidum and why syphilis is still treatable with penicillin. J Biol Chem. 2004;279(15):14917-14921.

10. Gerbose A, Rawley J, Heymann D, et al. Global prevalence and incidence estimates of selected curable STDs. Sex Transm Infect. 1998;74:512-516.

11. Marra CM, Maxwell CL, Smith SL, et al. Cerebrospinal fluid abnormalities in patients with syphilis: association with clinical and laboratory features. J Infect Dis. 2004;189(3):369-376.

12. Marra CM, Maxwell CL, Tantalo L, et al. Normalization of cerebrospinal fluid abnormalities after neurosyphilis therapy: does HIV status matter? Clin Infect Dis. 2004;38(7):1001-1006.

13. Ghanem KG, Moore RD, Rompalo AM, Erbelding EJ, Zenilman JM, Gebo KA. Lumbar puncture in HIV-infected patients with syphilis and no neurologic symptoms. Clin Infect Dis. 2009;48(6):816-821.

14. Ghanem KG, Moore RD, Rompalo AM, Erbelding EJ, Zenilman JM, Gebo KA. Neurosyphilis in a clinical cohort of HIV-1-infected patients. AIDS. 2008;22(10):1145-1151.

Does Culturing the Catheter Tip Change Patient Outcomes?

November 17, 2011

By Todd Cutler, MD

Faculty Peer Reviewed

An 82-year-old man is admitted to the intensive care unit with fevers, hypoxic respiratory failure, and hypotension. He is intubated and resuscitated with intravenous fluids. A central venous catheter is placed via the internal jugular vein. A chest x-ray shows a right lower lobe infiltrate, and he is treated empirically with antibiotics for pneumonia. Blood cultures grow Streptococcus pneumoniae. After four days he is successfully extubated. The night following extubation, the patient has a fever of 100.8°F without hemodynamic instability. Peripheral blood cultures are drawn and his central venous catheter is removed. The next morning, during rounds, a debate ensues over whether the catheter tip should also have been sent to the microbiology lab for culture. What is the evidence for, and what are the current recommendations regarding, the culturing of central venous catheter tips?

The central venous catheter has an essential role in the field of critical care medicine, yet, as a foreign body, its use is associated with an increased risk of infection. While much effort has been devoted to improving aseptic techniques for central line insertion, over 250,000 bloodstream infections each year are believed to be attributable to catheters.[1] Furthermore, because of substantial variations in retrieving, culturing, quantifying, and defining catheter infections, there are many methodological differences among the experimental studies that investigate this topic. This article will highlight and evaluate the expert recommendations regarding catheter tip cultures, the evidence behind those recommendations, and other studies that have been performed in this field.

In 2009, the Infectious Diseases Society of America (IDSA) proposed that the diagnosis of a catheter-related bloodstream infection (CRBSI) require that a peripheral blood culture grow the same organism as a concurrently retrieved catheter tip culture.[2] Other possible criteria for CRBSI include microbiologic concordance between two positive blood cultures, one drawn from the catheter hub and the other from a peripheral vein. While the exact criteria for positive results remain unresolved, blood cultures drawn from the catheter that turn positive more rapidly, or in greater quantitative degree, than cultures from peripheral blood are believed to serve as evidence of catheter infection. Because these techniques do not require removal of the catheter, they are considered catheter-sparing diagnostic methods and are termed, respectively, "simultaneous quantitative blood cultures" and "differential time to positivity" (DTP).[3,4,5] Further discussion of their utility, or of related techniques for diagnosing CRBSI, is beyond the scope of this brief review.

In their 2009 guidelines, the IDSA recommended that "catheter cultures should be done when a catheter is removed because of suspected [CRBSI]."[2] The most widely accepted technique for culturing central venous catheters, known as the semiquantitative culture method, was described in 1977 in a seminal paper by Maki and colleagues.[6] In this technique, the catheter tip is rolled over a culture dish and the microorganism burden is indirectly quantified. In the original study, 250 catheter tips were cultured, of which 25 grew more than 15 colony-forming units; of these, four came from patients who ultimately developed bacteremia. Popular for its ease of use, this method has been widely cited as a way to determine, in a clinical situation suspicious for bacteremia, whether the catheter can be implicated as the source. The premise of subsequent studies of catheter tip infection was that early and precise identification of the causal organism should improve clinical outcomes. Unfortunately, randomized controlled trials have not been performed to support that premise. Of the studies cited in the 2009 IDSA guidelines regarding catheter tip cultures,[7,8,9] none evaluated whether the information obtained from culturing catheter tips had any significant impact on patient outcomes.

The utility of this practice was, however, evaluated in a 1992 study by Widmer and colleagues, in which 157 consecutive catheter tips were cultured in order to determine whether the results had an impact on patient management. The authors assessed whether results prompted a change in, or the initiation of, an antibiotic regimen. While they determined that 4% of catheter culture results led to changes in patient management, the clinical significance of these changes was found to be "questionable or even misleading." The authors concluded that management of catheter infection was driven primarily by peripheral blood culture results and that catheter tip cultures contributed no benefit.[10]

In a 2009 publication, a retrospective analysis of 120 septic patients evaluated 238 retrieved and cultured catheter tips. Although 48.4% of all catheters grew positive tip cultures, blood and catheter tip cultures grew concordant organisms in only 5.5% of catheters tested. The positive and negative predictive values of a catheter tip culture were calculated to be 11% and 91%, respectively. This low positive predictive value was consistent with the results of the Widmer study. Based on these findings, and in consideration of the limited clinical impact and associated costs of the practice, catheter cultures were discontinued in the hospital where the study was performed.[11] A smaller study examined whether catheter tip cultures could predict clinical bacteremia; its authors concluded that culture results from catheter tips provide minimal clinical benefit and are processed at considerable expense of time and effort by the laboratory.[12]
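The predictive values above follow directly from a 2x2 table of tip-culture results against blood-culture concordance. As a back-of-envelope sketch, the counts below are approximate reconstructions from the percentages reported in the 2009 study (238 catheters, roughly 48.4% positive tip cultures, roughly 5.5% concordant with blood cultures); the helper function is illustrative and not taken from any of the cited papers.

```python
def predictive_values(tp, fp, fn, tn):
    """Positive and negative predictive values from 2x2 counts.

    PPV = TP / (TP + FP): of the positive tests, the fraction that were true.
    NPV = TN / (TN + FN): of the negative tests, the fraction that were true.
    """
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Counts back-calculated (approximately) from the percentages reported in
# the 2009 study; they are illustrative only, not the published raw data.
ppv, npv = predictive_values(tp=13, fp=102, fn=11, tn=112)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # PPV = 11%, NPV = 91%
```

The asymmetry is the clinical point: a negative tip culture is reasonably reassuring, but a positive one is far more likely to reflect colonization than true catheter-related infection.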

The authors of these studies concluded that the results of catheter tip cultures are unlikely to significantly change clinical management. One reason is that removal of a central venous catheter often results in resolution of CRBSI regardless of antibiotic use, while alternate sources are often found when infections do not quickly resolve.[13] In addition, because removal of the catheter is a prerequisite for tip culture, the decisive action has already been taken before the result is available.

Typically, in a febrile patient with an indwelling catheter, clinically significant catheter infections will be detected by positive blood cultures and effectively ruled out by negative blood cultures. When catheter infection is suspected, antimicrobial therapy is usually given adjunctively, but there is general agreement that catheter removal is an absolute necessity.[14] Studies suggest that management based on peripheral blood cultures and clinical assessment leads to outcomes as good as or better than management that includes culturing the central venous catheter,[15] and that no laboratory test reliably outperforms management guided by clinical judgment.[16]

Dr. Todd Cutler is an associate editor, Clinical Correlations

Peer reviewed by Howard Leaf, MD, Assistant Professor, Department of Medicine (ID), NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. O’Grady NP, Alexander M, Burns LA. Guidelines for the prevention of intravascular catheter-related infections. Am J Infect Control. 2011 May;39(4 Suppl 1):S1-34.

2. Mermel LA, Allon M, Bouza E. Clinical practice guidelines for the diagnosis and management of intravascular catheter-related infection: 2009 Update by the Infectious Diseases Society of America. Clin Infect Dis. 2009 Jul 1;49(1):1-45.

3. Edgeworth J. Intravascular catheter infections. J Hosp Infect. 2009 Dec;73(4):323-30.

4. Kite P, Dobbins BM, Wilcox MH. Rapid diagnosis of central-venous-catheter-related bloodstream infection without catheter removal. Lancet. 1999 Oct 30;354(9189):1504-7.

5. Blot F, Nitenberg G, Chachaty E. Diagnosis of catheter-related bacteraemia: a prospective comparison of the time to positivity of hub-blood versus peripheral-blood cultures. Lancet. 1999 Sep 25;354(9184):1071-7.

6. Maki DG, Weise CE, Sarafin HW. A semiquantitative culture method for identifying intravenous-catheter-related infection. N Engl J Med. 1977 Jun 9;296(23):1305-9.

7. Brun-Buisson C, Abrouk F, Legrand P. Diagnosis of central venous catheter-related sepsis: critical level of quantitative tip cultures. Arch Intern Med. 1987;147:873-877.

8. Cleri DJ, Corrado ML, Seligman SJ. Quantitative culture of intravenous catheters and other intravascular inserts. J Infect Dis 1980;141:781-6

9. Sherertz RJ, Raad II, Belani A. Three-year experience with sonicated vascular catheter cultures in a clinical microbiology laboratory. J Clin Microbiol. 1990;28(1):76-82.

10. Widmer AF, Nettleman M, Flint K. The clinical impact of culturing central venous catheters. A prospective study. Arch Intern Med. 1992 Jun;152(6):1299-302.

11. Smuszkiewicz P, Trojanowska I, Tomczak H. Venous catheter microbiological monitoring. Necessity or a habit? Med Sci Monit. 2009 Feb;15(2):SC5-8.

12. Nahass RG, Weinstein MP. Qualitative intravascular catheter tip cultures do not predict catheter-related bacteremia. Diagn Microbiol Infect Dis. 1990 May-Jun;13(3):223-6.

13. Bozzetti F, Terno G, Camerini E. Pathogenesis and predictability of central venous catheter sepsis. Surgery. 1982;91(4):383-389.

14. Cunha BA. Intravenous line infections. Crit Care Clin. 1998 Apr;14(2):339-46.

15. Bozzetti F, Bonfanti G, Regalia E. A new approach to the diagnosis of central venous catheter sepsis. JPEN J Parenter Enteral Nutr. 1991 Jul-Aug;15(4):412-6.

16. Raad I, Hanna H, Maki D. Intravascular catheter-related infections: advances in diagnosis, prevention, and management. Lancet Infect Dis. 2007 Oct;7(10):645-57.

Does Cranberry Juice Prevent Urinary Tract Infections?

November 9, 2011

By Jessie Yu

Faculty Peer Reviewed

A healthy 21-year-old female college student presents to clinic after one day of dysuria and increased frequency. You diagnose her with a recurrent urinary tract infection (UTI), and as you hand her a prescription for empiric antibiotic treatment, she asks you if drinking cranberry juice will prevent these in the future…

Drinking cranberry juice to prevent urinary tract infections (UTIs) has been a traditional folk remedy for hundreds of years.[1] Stroll into any New York City pharmacy and you will find that cranberry-extract tablets are not in the vitamins section but on the same shelves as the UTI medications. As concerns regarding antibiotic-resistant bacteria and the high cost of prescription medications continue to mount, patients and physicians are searching for cheap and effective alternatives. But is there solid evidence for using cranberries to prevent UTIs?


Urinary tract infections (UTIs) are the second most common infection in the United States.[2] They occur when uropathogens from fecal flora colonize the vaginal introitus and invade the urethra and bladder. Cystitis comprises about 95% of all symptomatic UTIs, and Escherichia coli (E coli) causes 80-85% of infections in healthy young persons.[3,4] Acute cystitis occurs in people of all ages and both genders, but the highest incidence occurs among sexually active 18-24 year old women. Overall, 40-50% of adult females have experienced at least one UTI in their lifetimes.[3]

Recurrent UTIs are a common problem in young women; approximately 27% of women have a recurrence within 6 months of a first infection.[5] The majority of recurrences are reinfections by a different strain of bacteria.[6] Disparate factors, including ABO blood type,[7] glycolipid receptor expression on uroepithelial cells,[8] intercourse with a new partner during the past year,[8,9] vaginal colonization with Enterobacteriaceae,[10] and even the use of spermicide-coated diaphragms,[11] are all thought to affect the propensity for recurrent UTIs.


Cranberries (Vaccinium macrocarpon) are a native North American fruit.[12] Cranberry juice has been manufactured since at least 1683. In addition to being consumed as food, cranberries have a long medicinal history. American Indians used the berries to draw poison from arrow wounds, calm nerves, and treat bladder and kidney ailments. American whalers and mariners ate cranberries to prevent scurvy while traveling the high seas.[1,13] Indeed, cranberry has a very good safety profile, and unless the patient is at risk for uric-acid kidney stones, on warfarin, or taking aspirin, there are not many contraindications.[12]


For decades, cranberry was believed to prevent UTIs by acidifying the urine[14], but this theory has now largely been debunked.[15] Current data suggest that specific components within cranberries may directly inhibit uropathogen adherence to uroepithelial cells or prevent bacterial growth.

Proanthocyanidins give cranberries their intense red color; these molecules appear to inhibit the adherence of P-fimbriated E coli to uroepithelial cells.[16] Cranberry also contains fructose, and some data suggest that it too may interfere with adhesion of type 1 fimbriated E coli to the uroepithelium.[17] The ascorbic acid content of cranberries may also play a role in decreasing urinary bacterial growth.[18] While the pharmacologically active component of cranberries remains debated, several recent in vitro studies generated interesting data suggesting that cranberry decreases bacterial adhesion to uroepithelial bladder cells in a concentration-dependent manner.[19] Pinzón-Arango and colleagues also demonstrated that the longer E coli were exposed to either cranberry juice or proanthocyanidin alone, the more bacterial attachment decreased, and that this effect was reversed by removing the cranberry juice or proanthocyanidin.[20]

Despite advances in basic science, data from human trials have not established the clinical merit of cranberry as a preventive treatment for recurrent UTIs. Many studies are limited by small sample size; inadequate control, randomization, and blinding; and weak statistical analysis. There is also no measurable marker to confirm medication compliance among study subjects. Finally, it is difficult to compare data across trials, as there is no standard form of cranberry used and no established unit for proanthocyanidin concentration.

The Cochrane Database of Systematic Reviews has published several meta-analyses attempting to synthesize this disparate body of literature. Jepson and colleagues (2000) assessed the literature for randomized trials of cranberry as a treatment for UTIs and concluded that there was no good-quality evidence to support this use.[21] In a separate 2008 review, however, they found the data more optimistic regarding the use of cranberry in preventing symptomatic UTIs. They identified 4 good-quality randomized or quasi-randomized clinical trials and found that, taken together, these trials showed that cranberry products significantly reduced the incidence of symptomatic UTIs at 12 months compared with placebo (overall RR 0.65; 95% CI, 0.46-0.90). They concluded that cranberry may especially decrease the risk of recurrence in women with recurrent UTIs.[22]
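For readers unfamiliar with how an estimate such as RR 0.65 (95% CI 0.46-0.90) is obtained, the sketch below computes a relative risk and its log-scale confidence interval from a single trial's 2x2 counts using the standard Katz method. The counts are invented purely for illustration and are not the Cochrane data.

```python
import math

def relative_risk_ci(a, n1, c, n2, z=1.96):
    """Relative risk of an event (a/n1 vs c/n2) with a Katz log-scale CI.

    The standard error of ln(RR) is sqrt(1/a - 1/n1 + 1/c - 1/n2), so the
    interval is computed on the log scale and exponentiated back.
    """
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts, invented for illustration (NOT the pooled Cochrane
# data): 30/150 UTIs in the cranberry arm vs 46/150 in the placebo arm.
rr, lo, hi = relative_risk_ci(30, 150, 46, 150)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.65 (95% CI 0.44-0.97)
```

A confidence interval that excludes 1.0, as in the pooled Cochrane estimate, is what makes the reduction statistically significant.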

Interestingly, data from one of the largest and most rigorous studies of cranberry and UTI prevention were published in January 2011. Barbosa-Cesnik and coauthors reported that treatment with cranberry did not significantly decrease symptomatic UTI recurrence compared with placebo. Theirs was a double-blind, randomized, placebo-controlled trial of 319 college-aged women, with cranberry and placebo groups of similar sociodemographic characteristics and UTI risk factors. Follow-up lasted 6 months or until a second UTI occurred, whichever came first. The study used an intention-to-treat analysis and was designed to detect a twofold difference between treatment and placebo (α=0.05; power=99%). The overall UTI recurrence rate was 16.9%; surprisingly, the cranberry group actually had a slightly higher recurrence rate than the placebo group (19.3% vs 14.6%, p=0.21). Even when the authors adjusted for frequency of sexual activity and history of UTI, there was still no difference in recurrence risk between the groups. While the authors acknowledge that prior studies reached more favorable conclusions regarding the efficacy of cranberry juice, they assert that many of those studies were either not blinded or statistically underpowered. They also speculate that the similar recurrence rates may have been due to (1) an unanticipated active ingredient that was also present in the placebo, perhaps ascorbic acid, or (2) a study design that led participants in both groups to improve hydration and increase urinary frequency, thereby decreasing bacterial growth and mild symptoms.[23]


The jury is still out on whether cranberry should be recommended to women who suffer from recurrent uncomplicated UTIs. The American College of Obstetricians and Gynecologists’ guidelines state that in both prophylactic and intermittent therapies for non-pregnant women with uncomplicated acute bacterial cystitis, antibiotics are the first-line treatment.[24] While these same guidelines also acknowledge that some data suggest cranberry may decrease the recurrence of symptomatic UTIs, they caution that parameters necessary for therapeutic effects have not been determined or optimized.[24]

Jessie Yu is a 4th year medical student at NYU School of Medicine

Peer Reviewed by Vinh Pham, MD, Assistant Professor of Medicine (Infectious Disease and Immunology), NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Cranberry history. Cranberry Marketing Committee Web site. Published 2006. Accessed December 10, 2010.

2. U.S. Dept. of Health and Human Services. Ambulatory care visits to physician offices, hospital outpatient departments, and emergency departments: United States, 1999–2000. Vital and health statistics. Series 13, No. 157. Published September, 2004. Accessed December 10, 2010.

3. Foxman B. Epidemiology of urinary tract infections: incidence, morbidity, and economic costs. Am J Med. 2002:113 Suppl 1A:5S-13S.

4. Hooton TM, Besser R, Foxman B, Fritsche TR, Nicolle LE. Acute uncomplicated cystitis in an era of increasing antibiotic resistance: a proposed approach to empirical therapy. Clin Infect Dis. 2004:39(1):75-80.

5. Foxman B. Recurring urinary tract infection: incidence and risk factors. Am J Public Health. 1990:80(3):331-333.

6. Ikäheimo R, Siitonen A, Heiskanen T, Kärkkäinen U, Kuosmanen P, Mäkelä PH. Recurrence of urinary tract infection in a primary care setting: analysis of a 1-year follow-up of 179 women. Clin Infect Dis. 1996:22(1):91-99.

7. Ziegler T, Jacobsohn N, Fünfstück R. Correlation between blood group phenotype and virulence properties of Escherichia coli in patients with chronic urinary tract infection. Int J Antimicrob Agents. 2004;24(Suppl 1):S70-S75.

8. Stapleton A, Nudelman E, Clausen H, Hakomori S, Stamm WE. Binding of uropathogenic Escherichia coli R45 to glycolipids extracted from vaginal epithelial cells is dependent on histo-blood group secretor status. J Clin Invest. 1992:90(3):965-972.

9. Scholes D, Hooton TM, Roberts PL, Stapleton AE, Gupta K, Stamm WE. Risk factors for recurrent urinary tract infection in young women. J Infect Dis. 2000:182(4):1177-1182.

10. Stamey TA, Sexton CC. The role of vaginal colonization with enterobacteriaceae in recurrent urinary infections. J Urol. 1975:113(2):214-217.

11. Fihn SD, Boyko EJ, Normand EH, et al. Association between use of spermicide-coated condoms and Escherichia coli urinary tract infection in young women. Am J Epidemiol. 1996:144(5):512-520.

12. Cranberry (Vaccinium macrocarpon): natural drug information. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2010. Accessed December 10, 2010.

13. American Cranberry (Vaccinium macrocarpon). Research Guides. University of Wisconsin- Madison. Updated July 8, 2010. Accessed December 10, 2010.

14. Blatherwick NR, Long ML. Studies on urinary acidity. The increased acidity produced by eating prunes and cranberries. J Biol Chem. 1923:57:815-818.

15. Sobota AE. Inhibition of bacterial adherence by cranberry juice: potential use for the treatment of urinary tract infections. J Urol. 1984:131(5):1013-1016.

16. Howell AB, Botto H, Combescure C, et al. Dosage effect on uropathogenic Escherichia coli anti-adhesion activity in urine following consumption of cranberry powder standardized for proanthocyanidin content: a multicentric randomized double blind study. BMC Infect Dis. 2010:10:94-104.

17. Zafriri D, Ofek I, Adar R, Pocino M, Sharon N. Inhibitory activity of cranberry juice on adherence of type 1 and type P fimbriated Escherichia coli to eukaryotic cells. Antimicrob Agents Chemother. 1989:33(1):92-98.

18. Carlsson S, Wiklund NP, Engstrand L, Weitzberg E, Lundberg JO. Effects of pH, nitrite, and ascorbic acid on nonenzymatic nitric oxide generation and bacterial growth in urine. Nitric Oxide. 2001:5(6):580-586.

19. Di Martino P, Agniel R, David K, et al. Reduction of Escherichia coli adherence to uroepithelial bladder cells after consumption of cranberry juice: a double-blind randomized placebo-controlled cross-over trial. World J Urol. 2006:24(1):21-27.

20. Pinzón-Arango PA, Liu Y, Camesano TA. Role of cranberry on bacterial adhesion forces and implications for Escherichia coli-uroepithelial cell attachment. J Med Food. 2009:12(2):259-270.

21. Jepson RG, Mihaljevic L, Craig J. Cranberries for treating urinary tract infections. Cochrane Database System Rev. 2000:(2):CD001322.

22. Jepson RG, Craig JC. Cranberries for preventing urinary tract infections. Cochrane Database System Rev. 2008:23(1):CD001321.

23. Barbosa-Cesnik C, Brown MB, Buxton M, Zhang L, DeBusscher J, Foxman B. Cranberry juice fails to prevent recurrent urinary tract infection: results from a randomized placebo-controlled trial. Clin Infect Dis. 2011:52(1):23-30.

24. American College of Obstetricians and Gynecologists. Treatment of Urinary Tract Infections in Nonpregnant Women. Washington, DC: American College of Obstetricians and Gynecologists; 2008. ACOG Practice Bulletin No. 91.

A New Era of Therapy for Hepatitis C

November 4, 2011

By Alexander Jow, MD

Faculty Peer Reviewed

Hepatitis C virus (HCV) is a major public health issue: it is the leading cause of chronic liver disease and of death from liver disease, and a principal indication for liver transplantation, in the US.[1] An estimated 3-4 million people worldwide are newly infected with HCV each year; 130-170 million people are chronically infected, and more than 350,000 die from HCV-related liver disease annually.[2] Although the natural history of HCV infection is highly variable, cirrhosis develops in 15% to 20% of chronically infected patients, who may then progress to the complications of portal hypertension. The risk of hepatocellular carcinoma is up to 3% per year among patients with HCV cirrhosis.[4] Thus, for patients with clinically significant hepatic fibrosis, there is general agreement that antiviral therapy is indicated to prevent progression to cirrhosis.

Treatment guidelines for HCV published in 2009 by the American Association for the Study of Liver Diseases (AASLD) recommend treatment of eligible candidates with pegylated interferon (PEG-IFN) alfa and ribavirin.[1] Since then, however, the development of two effective inhibitors of the HCV NS3/4A protease, boceprevir and telaprevir, has represented a major advance in HCV treatment and marked a new era of HCV therapeutics.

The goal of therapy is to eradicate HCV RNA, as reflected in the achievement of a sustained virologic response (SVR), defined as the absence of HCV RNA by polymerase chain reaction 24 weeks after discontinuation of therapy. This is generally regarded as a “virological cure” and has been associated with a 99% chance of remaining HCV RNA-negative during long-term follow-up.[5] For the past decade, treatment with PEG-IFN and ribavirin has been shown to result in SVR rates of 40-50%[6] among patients with HCV genotype 1, the most common genotype in the US and much of Europe.[3] Treatment regimens of at least 48 weeks are typically required for these patients, and the treatment course is often limited by the adverse effects of the drugs.

Over the past decade, increased understanding of the HCV viral life cycle has led to the development of direct-acting antivirals (DAAs), with the goal of improving efficacy with fewer toxic effects compared with interferon-based regimens. Much like current HIV antiretroviral therapy, therapeutics are in development that target viral enzymes and host factors essential for HCV replication, including polymerase inhibitors, protease inhibitors, and cyclophilin inhibitors.[7-12] The protease inhibitors have become an exciting add-on to PEG-IFN plus ribavirin, with the recent FDA approvals of telaprevir (Incivek) on May 23, 2011 and boceprevir (Victrelis) on May 13, 2011.

The recent FDA approvals were based on the results of several international phase III trials published in 2011. The ADVANCE trial showed a favorable SVR rate (75% vs 44%) in treatment-naive patients with genotype 1 HCV receiving telaprevir combined with PEG-IFN-ribavirin, compared with standard therapy with PEG-IFN-ribavirin alone for up to 48 weeks. The experimental arm received telaprevir combined with PEG-IFN-ribavirin for 12 weeks, followed by 12 or 36 weeks of PEG-IFN-ribavirin based on HCV RNA levels at weeks 4 and 12; 58% of telaprevir-treated patients were eligible for the shorter, 24-week course of therapy.[9] Similarly, the REALIZE trial showed that telaprevir combined with PEG-IFN-ribavirin significantly improved SVR rates in previously treated patients with genotype 1 HCV infection.[10]

Equally promising, the SPRINT-2 trial showed higher SVR rates (63% and 66% vs 38%) in treatment-naive patients with genotype 1 HCV receiving boceprevir combined with PEG-IFN-ribavirin, compared with traditional therapy. All of the experimental regimens in this study included a 4-week lead-in period of PEG-IFN-ribavirin before initiating boceprevir plus PEG-IFN-ribavirin. In one experimental arm, boceprevir plus PEG-IFN-ribavirin was given for 24 weeks, followed by an additional 20 weeks of PEG-IFN-ribavirin depending on HCV RNA levels between weeks 8 and 24; in this arm, 44% of patients received the shorter, 28-week course. The other arm received boceprevir plus PEG-IFN-ribavirin for 44 weeks. SVR rates were similar between those receiving 24 and 44 weeks of boceprevir.[11] The HCV RESPOND-2 trial likewise showed that the addition of boceprevir to PEG-IFN-ribavirin resulted in significantly higher SVR rates in previously treated patients with chronic HCV genotype 1 infection (59% and 66% in the treatment groups vs 21% in the control).[12]

While the improved SVR rates with telaprevir and boceprevir are impressive, they do not come without cost. Anemia with boceprevir is common: in the SPRINT-2 and HCV RESPOND-2 studies, more than 40% of patients required erythropoietin administration, for up to approximately 150 days. In HCV RESPOND-2, more than 8% of patients in the fixed-duration boceprevir group had a reduction in hemoglobin to less than 8.0 g per deciliter, and 9% required blood transfusions.[11,12] Telaprevir is associated primarily with cutaneous reactions, including rash and pruritus, which may occur in up to half of patients. The rash is usually mild to moderate and reversible after discontinuation; in ADVANCE, only 5-7% of patients stopped telaprevir because of rash, although one case of Stevens-Johnson syndrome was documented.[9] Rapid emergence of drug resistance is also a consideration in previous non-responders, in patients not fully adherent to therapy, and in those who cannot tolerate optimal doses.[13] Further, as the studies cited make clear, telaprevir and boceprevir regimens still require PEG-IFN and ribavirin, and thus the new regimens, while permitting greater SVR rates, are also very expensive.

While the 2009 AASLD treatment guidelines for HCV in the United States predate the publication of data from these randomized trials, newer European guidelines have taken the new data into account.[13] There remains a large unmet need, including patients unable or unwilling to receive IFN or ribavirin and previous non-responders. The next step in DAA research is to develop an IFN-free oral combination treatment regimen. A phase I study, INFORM-1, combining a protease inhibitor and a nucleoside polymerase inhibitor, showed a significant decline in HCV viral load in both treatment-naive patients and those with previous treatment failures. Although it did not include patients with cirrhosis, the study established proof of concept that will permit further evaluation of combination oral therapies for HCV.[14] Several other polymerase inhibitors are in phase II trials.[15] The addition of protease inhibitors to PEG-IFN plus ribavirin has put us one step closer to more effective HCV treatment; however, further research on other DAA therapies and combinations is needed to develop a tolerable, short-duration, efficacious, IFN-free oral combination therapy.

Dr. Alexander Jow is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Michael Poles, MD, Section Editor, GI, Clinical Correlations


1. Ghany MG, et al. Diagnosis, management, and treatment of hepatitis C: an update. Hepatology. 2009 Apr; 49(4): 1335-74.

2. Hepatitis C. Accessed August 1, 2011.

3. Rosen, HR. Chronic Hepatitis C Infection. N Engl J Med. 2011 Jun 23; 364(25): 2429-38.

4. Jou JH, Muir AJ. In the Clinic. Hepatitis C. Ann Intern Med. 2008 Jun 3; 148(11):ITC6-1-ITC6-16.

5. Swain MG, Lai MY, et al. A sustained virologic response is durable in patients with hepatitis C treated with peginterferon alfa-2a and ribavirin. Gastroenterology. 2010; 139(5):1593.

6. Fried MW, Shiffman ML, et al. Peginterferon alfa-2a plus ribavirin for chronic hepatitis C virus infection. N Engl J Med. 2002;347:975-982.

7. Burton JR Jr, Everson GT. HCV NS5B polymerase inhibitors. Clin Liver Dis. 2009; 13: 453-465.

8. Gallay PA. Cyclophilin Inhibitors. Clin Liver Dis. 2009; 13: 403-417.

9. Jacobson IM, et al; for the ADVANCE Study Team. Telaprevir for previously untreated chronic hepatitis C virus infection. N Engl J Med. 2011 Jun 23; 364(25): 2405-16.

10. Zeuzem S, et al; for the REALIZE Study Team. Telaprevir for retreatment of HCV infection. N Engl J Med. 2011 Jun 23; 364(25): 2417-28.

11. Poordad F, et al; for the SPRINT-2 Investigators. Boceprevir for untreated chronic HCV genotype 1 infection. N Engl J Med. 2011 Mar 31; 364 (13): 1195-206.

12. Bacon BR, et al; for the HCV RESPOND-2 Investigators. Boceprevir for previously treated chronic HCV genotype 1 infection. N Engl J Med. 2011 Mar 31; 364(13): 1207-17.

13. European Association for the Study of the Liver. EASL Clinical Practice Guidelines: Management of Hepatitis C virus infection. J Hepatol . 2011 Aug; 55(2): 245-64. Epub 2011 Mar 1.

14. Gane EJ, et al. Oral combination therapy with a nucleoside polymerase inhibitor (RG7128) and danoprevir for chronic hepatitis C genotype 1 infection (INFORM-1): a randomized, double-blind, placebo-controlled, dose-escalation trial. Lancet. 2010 Oct 30; 376(9751): 1467-75.

15. Jensen DM, et al. High rates of early viral response, promising safety profile and lack of resistance-related breakthrough in HCV GT 1/4 patients treated with RG7128 plus PEGIFN alfa-2a (40kd)/RBV: Planned week 12 interim analysis from the PROPEL study. Hepatology. 2010; 52:360A.

Cholera in Haiti

October 7, 2011

By Matt Johnson, MD

Faculty Peer Reviewed

In the fall of 2010, after Haiti had been devastated by a magnitude 7.0 earthquake that left over 316,000 people dead, cholera was injected into the tumult, adding to the growing list of Haiti’s struggles [1]. Cholera is an ancient scourge believed to have originated in the Ganges River delta of India [2]. It affects up to 5 million people worldwide and causes over 100,000 deaths per year [3]. The outbreak in Haiti was unexpected in that Haiti had not had cholera for over 100 years [4]. As of January 2011, it had already infected over 200,000 people and killed at least 4,000 [5].

Vibrio cholerae is a curved Gram-negative rod with a single flagellum [6]. After toxigenic strains of V. cholerae are ingested, there is an incubation period of about 18 hours to 5 days before the diarrheal illness ensues [2]. The bacteria have to survive the harsh acidic gastric milieu and make it to the small intestine, where they propel themselves with their flagella through the thick mucus lining of the gut [2]. When they reach the epithelial cells, they attach themselves and rapidly proliferate [2]. They do not invade the epithelial cells; rather, they release their classic enterotoxin in high concentrations directly onto the mucosal cells [2]. A subunit of the cholera toxin is transported into the epithelial cell, where it stimulates adenylate cyclase to increase cyclic AMP, which ultimately causes active secretion of sodium and chloride [6]. This leads to a massive osmotic outpouring of sodium, chloride, potassium, and bicarbonate that overwhelms the absorptive capacity of the bowel and results in severe watery diarrhea [2]. The typical “rice water stool” of cholera patients is loaded with V. cholerae and is highly infectious [2].

The treatment of cholera’s deadly desiccating diarrhea involves prompt rehydration with electrolyte-rich oral or intravenous rehydration solutions. Antibiotics, most frequently doxycycline, can also be used to shorten the duration and severity of the diarrhea [2]. But prevention is the key, with a focus on clean water, sanitation, and efficient surveillance systems to halt budding outbreaks. There is also an oral, inactivated whole-cell cholera vaccine that has been shown to be up to 80% effective, but it is unclear whether it is a cost-effective and sustainable prevention measure [2].

Large-scale outbreaks of cholera have been frequent over the last few centuries, with seven major pandemics recognized since the 1800s. The first six pandemics all originated in India and were due to the “classic” V. cholerae biotype [7]. However, the seventh pandemic, which began in Indonesia in 1961 and is still ongoing, is predominantly due to the newer “El Tor” biotype, first isolated in 1905 from Indonesian pilgrims at a quarantine station in the town of El Tor, Egypt [2]. The particular strain of V. cholerae now in Haiti is further distinguished by several genetic polymorphisms that appear to make it more virulent than many previous cholera strains [4]. It can survive longer in the human host and induce more diarrhea, and is thus able to disseminate faster and more broadly than other strains [4]. This new strain is not only more fit; it also appears to have developed more antibiotic resistance than other strains [4]. There is great concern that this hardier and deadlier strain will not only continue to batter Haiti but may also establish itself throughout the Americas.

So where did this cholera strain come from? Is it domestic or imported? Through DNA sequencing of V. cholerae strains, it appears that the Haitian outbreak strain is nearly identical to the El Tor strain that is endemic to South Asia [4]. It is distinct from the circulating V. cholerae strains in Latin America. Cholera can actually be a part of normal estuarine flora, known to cling to zooplankton in brackish waters, so some have claimed that the cholera outbreak was home-grown [2]. Climatic and meteorological phenomena can create conditions favorable to the growth and spread of V. cholerae, which could cause an outbreak [4]. However, the genetic similarity of the Haitian strain and South Asian strains of V. cholerae makes it most likely that the V. cholerae was imported [4].

This news was met with much consternation by the United Nations, as it indirectly implicates Nepalese UN forces who were working in the interior of Haiti. The first cases of cholera were reported near the town of Meille, which was the location of a large UN camp of Nepalese peacekeepers [8]. It was reported that a septic tank inside the UN camp was draining into a local stream that provided drinking water to the community [8]. Cholera is endemic to Nepal, so the Nepalese peacekeepers were singled out as the carriers. This was believed to be the source of the contamination of the Artibonite River, the longest and most important river in Haiti. There have since been many protests against the Nepalese and UN presence, and some have called for their expulsion.

Many experts feel that this outbreak could swell to over half a million cases over the next year alone [9]. The lack of infrastructure in Haiti, along with the fact that the current Haitian population has never before been exposed to cholera, creates an unfortunate and precarious scenario without any clear solutions [9]. Antibiotics are often reserved for only severe cases of cholera, but there have been discussions about broadening coverage to include both moderate and severe diarrhea, to help curtail the shedding of V. cholerae and reduce secondary infections [10]. But antibiotics are costly, and there is the ever-present specter of antibiotic resistance to consider. There are also plans for a large-scale vaccination program, which have been met with much skepticism; critics point out that the vaccine supply is limited, that the vaccine is unproven on such a large scale, and that it will be very cumbersome to organize such a massive campaign throughout the ravaged districts of Haiti [11]. However, the key to controlling cholera lies in basic preventive measures centered on education, improved sanitation, and access to clean drinking water. All agree that cholera has established itself in Haiti, and cases have already been reported in the neighboring Dominican Republic and in Florida [12]. This unwelcome old foe that has fallen into Haiti’s lap is here to stay, and a committed, measured, and coordinated response is needed to keep this pestilence at bay.

Dr. Matt Johnson is a former NYU internal medicine resident and current Epidemic Intelligence Service Officer, Oklahoma State Department of Health

Peer reviewed by Howard Leaf, MD, Infectious Diseases, Medicine, NYU School of Medicine

Image courtesy of Wikimedia Commons


1. Schiermeier Q. Quake Threat Looms over Haiti. Nature. 2010; 467: 1018-1019. [PMID 20981064]

2. Sack DA et al. Cholera. Lancet. 2004; 363: 223-233. [PMID 14738797]

3. Cholera Vaccines. A Brief Summary of the March 2010 Position Paper. World Health Organization. 2010.

4. Chin CS et al. The Origin of the Haitian Cholera Outbreak Strain. N Engl J Med. 2011; 364: 33-42. [PMID 21142692]

5. Tuite AR et al. Cholera Epidemic in Haiti, 2010: Using a Transmission Model to Explain Spatial Spread of Disease and Identify Optimal Control Interventions. Ann Intern Med. 2011; Epub ahead of print. [PMID 21383314]

6. Gladwin M, Trattler B. Clinical microbiology. Miami, FL: MedMaster, Inc. 2004.


8. Enserink M. Despite Sensitivities, Scientists Seek to Solve Haiti’s Cholera Riddle. Science. 2011; 331: 388-389. [PMID 21273460]

9. Butler D. Cholera Tightens Grip on Haiti. Nature. 2010; 468: 483-484. [PMID 21107395]

10. Nelson EJ et al. Antibiotics for Both Moderate and Severe Cholera. N Engl J Med. 2011; 364: 5-7. [PMID 21142691]

11. Cyranoski D. Cholera Vaccine Plan Splits Experts. Nature. 2011; 469: 273-274. [PMID 21248807]

12. Harris JB. Cholera’s Western Front. Lancet. 2010; 376: 196.

Stemming the Tide: The Promise and Pitfalls of HIV Prevention Research

September 28, 2011

By Benjamin Bearnot

Faculty Peer Reviewed 

Since the discovery of zidovudine (AZT) in the mid-1980s, advances in antiretroviral (ARV) therapy for patients with chronic human immunodeficiency virus (HIV) infection have, until recently, outpaced concomitant improvements in methods for HIV prevention. Over the past few years, HIV prevention research has been building an impressive head of steam. While a completely effective vaccine for HIV prevention has continued to prove elusive, results of a modestly successful (~30% protective) vaccine trial based in Thailand were announced in 2009,[1] and a naturally occurring antibody that attaches to a well-conserved CD4 binding site of HIV and is capable of neutralizing most strains of HIV-1 was discovered in 2010.[2,3] Additional advances have recently emerged from large multi-site, international trials, including the protective effects of a vaginal tenofovir microbicidal gel in women,[4] male circumcision,[5,6] and daily pre-exposure prophylaxis with tenofovir and emtricitabine (Truvada; Gilead Sciences, Foster City, California) in HIV-negative men who have sex with men.[7] In June 2011 it was announced that early initiation of ARVs by HIV-infected individuals protects their HIV-negative sexual partners from acquiring the infection, with an astonishing 96% reduction in the risk of transmission.[8]

The burgeoning area of HIV prevention research hit a temporary stumbling block in April 2011 with the closure of the FEM-PrEP trial. FEM-PrEP randomized 1951 HIV-negative, at-risk women aged 18 to 35 to Truvada or placebo at sites in Bondo, Kenya; Bloemfontein and Pretoria, South Africa; and Arusha, Tanzania. After a scheduled review of interim data, Family Health International announced that the trial would be unable to demonstrate the effectiveness of Truvada in preventing HIV infection in the study population, even if continued.[9]

Possible reasons for the disappointing preliminary findings include low adherence to the study regimen, differential risk behaviors between the two trial arms, and biological differences between topically and systemically administered ARVs.[10-12] Another important possibility is that HIV prevention strategies may show differing levels of effectiveness because of differences in the cultures and communities in which they are studied. In other words, the impact of antiretrovirals as an HIV prevention technique may depend more on context than on biological differences between study populations.

The cultural acceptability of any prevention strategy must be examined closely. If study participants are uncomfortable or unwilling to adhere to a given intervention for cultural, religious, or practical reasons, the intervention will fail, regardless of its biological effectiveness. By capturing data about why individuals do or do not adhere to a given prevention strategy, we can better understand and eventually overcome psychosocial, economic, and physical barriers to effective prevention in at-risk communities. 

Prevention is not a one-size-fits-all proposition, and the success of HIV prevention  depends on interventions that are thoughtfully tailored to a particular population at a particular time.  In a 2010 New England Journal of Medicine piece titled “AIDS in America–Forgotten but Not Gone,” El-Sadr and colleagues wrote about this idea: “Focused studies of the sociocultural dynamics that facilitate [HIV] transmission are needed, as well as large studies assessing the effectiveness of multidimensional interventions, including behavioral, biomedical, and structural components. Disenfranchised communities must be engaged as partners in such efforts, along with new researchers drawn from the affected populations, if the nuances of local epidemics are to be addressed.”[13] 

Dr. El-Sadr’s ambitious goals need to be realized in order to stem the tide of a global HIV/AIDS pandemic that is still growing—and growing most quickly in communities with the fewest resources, both in sub-Saharan Africa and in the US.

Benjamin Bearnot is a 4th year medical student at NYU School of Medicine

Peer reviewed by Judith Aberg, MD, ID & Immunology, NYU School of Medicine

Image courtesy of Wikimedia Commons


1. Rerks-Ngarm S, Pitisuttithum P, Nitayaphan S, et al. Vaccination with ALVAC and AIDSVAX to prevent HIV-1 infection in Thailand. N Engl J Med. 2009;361(23):2209-2220.

2. Zhou T, Georgiev I, Wu X, et al. Structural basis for broad and potent neutralization of HIV-1 by antibody VRC01. Science. 2010;329(5993):811-817.

3. Wu X, Yang ZY, Li Y, et al. Rational design of envelope identifies broadly neutralizing human monoclonal antibodies to HIV-1. Science. 2010;329(5993):856-861.

4. Abdool Karim Q, Abdool Karim SS, Frohlich JA, et al. Effectiveness and safety of tenofovir gel, an antiretroviral microbicide, for the prevention of HIV infection in women. Science. 2010;329(5996):1168-1174.

5. Bailey RC, Moses S, Parker CB, et al. Male circumcision for HIV prevention in young men in Kisumu, Kenya: a randomised controlled trial. Lancet. 2007;369(9562):643-656.

6. Gray RH, Kigozi G, Serwadda D, et al. Male circumcision for HIV prevention in men in Rakai, Uganda: a randomised trial. Lancet. 2007;369(9562):657-666.

7. Grant RM, Lama JR, Anderson PL, et al. Preexposure chemoprophylaxis for HIV prevention in men who have sex with men. N Engl J Med. 2010;363(27):2587-2599.

8. Initiation of antiretroviral treatment protects uninfected sexual partners from HIV infection (HPTN Study 052). HIV Prevention Trials Network website. 2011. Accessed June 3, 2011.

9. FHI statement on the FEM-PrEP HIV prevention study. Family Health International website. Published April 18, 2011. Accessed April 20, 2011.

10. Important announcement about FEM-PrEP study. Microbicides Trials Network website. Published April 18, 2011. Accessed April 20, 2011.

11. CAPRISA thanks FHI for important interim FEM-PrEP trial results. Centre for the AIDS Programme of Research in South Africa (CAPRISA) website. 2011. Accessed April 20, 2011.

12. Verma NA, Lee AC, Herold BC, Keller MJ. Topical prophylaxis for HIV prevention in women: becoming a reality. Curr HIV/AIDS Rep. 2011;8(2):104-113.

13. El-Sadr WM, Mayer KH, Hodder SL. AIDS in America—forgotten but not gone. N Engl J Med. 2010;362(11):967-970.

Should you Treat a COPD Exacerbation with Antibiotics?

September 3, 2011

By Aviva Regev

Mr. S is a 68-year-old man with longstanding COPD and a 40-pack-year smoking history. He presents to clinic with three days of increasing shortness of breath and complains that he has been coughing up “more junk” than usual. As I watch him spit a wad of chartreuse sputum into his tissue, I reach for the prescription pad and tell him he’ll need a week of antibiotics. He wants to know why he can’t just go up on his inhaled medications instead of taking more pills.

The use of antibiotics in treating COPD exacerbations is common practice, but what is the evidence behind their use?

The search for proof of a bacterial etiology of COPD exacerbations has been on for decades, but imperfect techniques led to inconclusive results.  In fact, a major review published in the NEJM in 1975 concluded that the available data was not sufficient to support a bacterial role in COPD [1].  Current techniques using bronchoscopy and the protected specimen brush (PSB) method provide lower airway samples without the upper airway contaminants found in sputum samples, and newer technology allows reliable assessment of acute immune response to new bacterial exposures.  Analysis of newer data found that about 50% of COPD exacerbations are caused by bacterial infections, with the remainder caused by either viral infections or environmental factors such as pollution [2], but it can be difficult to determine the etiology of a specific exacerbation in practice.

Pooled data from six studies using the PSB method found significant bacterial colonization in 54% of patients with a COPD exacerbation, 29% of those with stable COPD, and in only 4% of healthy controls [3].  Haemophilus influenzae and Streptococcus pneumoniae were the most commonly encountered bacteria in stable COPD airways, while H. influenzae and Pseudomonas aeruginosa were most strongly associated with exacerbations.  Multiple studies using immunologic analysis have shown that exacerbations are associated with colonization by new strains of bacteria, specifically H. influenzae, S. pneumoniae, Moraxella catarrhalis, or Pseudomonas [2].

The Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines [4] recommend antibiotics in exacerbations with all three cardinal symptoms (increased dyspnea, increased sputum volume, and increased sputum purulence); in exacerbations with only two of the cardinal symptoms, provided one of them is increased purulence; and in exacerbations requiring mechanical ventilation. While it’s always nice to have guidelines, it’s also important to know the evidence behind them.

A landmark study in 1987 of 362 acute exacerbations in 173 patients treated with either placebo or antibiotic demonstrated a benefit for antibiotic use in exacerbations classified as moderate or severe (two or three cardinal symptoms), but not in mild cases (only one symptom) [5]. The antibiotics used were TMP-SMX, amoxicillin, or doxycycline, which were considered equally effective based on multiple published studies. Though none of those covers Pseudomonas, and amoxicillin does not cover non-typeable H. influenzae, it is likely that the most common pathogens and resistance profiles have changed in the 25 years since the study’s publication, and that the antibiotic regimens were suitable for the bacterial milieu of the time. The study evaluated the success rate, defined as complete resolution within 21 days, as well as the rate of failures with deterioration, defined as patients worsening to the point of needing to be hospitalized or removed from blinding to be treated. While there was no major clinical difference in success rate between the two groups, failures with deterioration were twice as common in the placebo group (30.5%) as in the treatment group (14.3%) among patients with severe exacerbations. Peak flow data from 64 patients who had experienced two or more exacerbations during the course of the study, and who were treated with both placebo and antibiotic, were analyzed as paired controls. The peak flow in these patients declined from an average of 215 L/min at baseline to 190 L/min at the onset of the exacerbation. Both groups recovered to their baseline peak flow, but the rate of increase was greater in the antibiotic-treated group. Though the difference was statistically significant, it is unclear whether it is clinically significant, as the greatest difference between groups was 10 L/min at day 6, and by day 12 the groups were equivalent.

A 1995 meta-analysis [6] has provided the most evidence favoring antibiotic use in COPD exacerbations, based on the pooled results of nine RCTs. A caveat in interpreting the results is that the nine studies used different antibiotics and different outcome measures, including overall score by physician, days of illness, change in PEFR, and overall symptom score. The authors calculated an overall effect size of 0.22, which is considered “small” and signifies that the two groups have about 85% overlap [7]. So, though the pooled results suggest a slight benefit, there is no direct way to translate the effect size into clinical improvement.
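The “about 85% overlap” figure follows from Cohen’s conventions for standardized effect sizes; as an illustrative back-of-the-envelope check (assuming two normal distributions with equal variance, the assumption behind Cohen’s tables), the overlap for a given effect size can be computed directly:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def percent_overlap(d: float) -> float:
    """Percent overlap of two equal-variance normal distributions
    separated by Cohen's d, via Cohen's U1 non-overlap statistic."""
    p = normal_cdf(d / 2.0)       # area of one curve beyond the midpoint
    u1 = (2.0 * p - 1.0) / p      # Cohen's U1: proportion of non-overlap
    return 100.0 * (1.0 - u1)

# Effect size reported in the 1995 meta-analysis
print(f"{percent_overlap(0.22):.1f}% overlap")  # roughly 84%, in line with the text
```

This is only a sketch of where the “85%” comes from; the meta-analysis itself did not report the calculation this way.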

Interestingly, six of the nine studies reported change in peak expiratory flow rate (PEFR) as one of the main outcomes, with an overall effect size of 0.19 [6].  The average change in PEFR across studies was 10.75 L/min with antibiotic therapy. How clinically significant is that difference?

Based on the available data, it appears that both of these papers suffer from relying on an endpoint that may not be clinically significant. The baseline PEFR in the 173 patients in the 1987 Annals study was 227.5 L/min, with a standard deviation (SD) of 96.1 [6]. Another study of 73 COPD patients reported a baseline of 235 L/min with a SD of 89 [8]. Overall, the 10.75 L/min difference in PEFR with antibiotic use represents less than 5% of baseline values and less than 12% of the standard deviation reported in multiple studies, which calls into question the clinical effect of such a small increase. Moreover, patients may experience daily variation in their baseline PEFR that exceeds 10.75 L/min: Donohue et al [9] collected baseline values from 623 patients daily for six months and found that they averaged 233 L/min in the morning and 246 L/min in the evening. In other words, though the results achieved statistical significance, the clinical difference from a PEFR standpoint is negligible.
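The percentages above follow from simple arithmetic on the figures quoted from the two studies; a quick sanity check:

```python
# Figures quoted in the text: pooled mean PEFR improvement with antibiotics,
# and baseline PEFR (mean, SD) from the 1987 Annals study.
delta_pefr = 10.75       # L/min, mean improvement with antibiotic therapy
baseline_mean = 227.5    # L/min
baseline_sd = 96.1       # L/min

pct_of_baseline = 100 * delta_pefr / baseline_mean   # under 5% of baseline
pct_of_sd = 100 * delta_pefr / baseline_sd           # under 12% of one SD

print(f"{pct_of_baseline:.1f}% of baseline, {pct_of_sd:.1f}% of one SD")
```

Note that the diurnal swing reported by Donohue et al (233 vs 246 L/min) is itself larger than the 10.75 L/min treatment effect.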

A more recent review and meta-analysis from 2008 [10] included only 3 papers published since 1987, but did not rely on PEFR as an endpoint. Instead, its two main endpoints were treatment failure, defined as requiring additional antibiotics within the first week or unchanged or worsening symptoms within 3 weeks, and in-hospital mortality. The authors concluded that for antibiotics, “although the combined analysis showed a survival benefit, each individual study was relatively small in size and was underpowered to answer this critical question.” When the studies were analyzed by inpatient versus outpatient setting, a reduction in treatment failures was found only among inpatients, and in-hospital mortality was also decreased. The authors postulate that this could be a result of the more severe exacerbations found in hospitalized patients, or that the antibiotics may help to prevent nosocomial infections, though others believe that the use of broad-spectrum antibiotics in this setting may contribute to resistance and lead to dangerous superinfections.

Though antibiotics appear to have a solid place in inpatient treatment guidelines, among outpatients some evidence suggests that sputum purulence may be the most important factor driving antibiotic treatment. Stockley et al [11] found that sputum purulence correlates with a positive bacterial culture, with a sensitivity of 94.4% and a specificity of 77.0% for a high bacterial load. In their study, COPD patients presenting with an acute exacerbation were assigned to treatment based on the color of their sputum. Of the purulent sputum samples, 84% were culture-positive, compared with only 38% of the mucoid samples, a rate comparable to the baseline positive-culture rate of 38-41%. Perhaps most relevant to the current discussion, 32 of 34 patients with mucoid exacerbations recovered without antibiotics; the two who did not recover subsequently developed purulent sputum and required antibiotic treatment.
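To see what those operating characteristics mean at the bedside, one can plug them into Bayes’ rule. The sketch below uses the reported sensitivity (94.4%) and specificity (77.0%) together with an assumed pre-test probability of roughly 40%, in line with the baseline positive-culture rate quoted above; the function name and the chosen prevalence are illustrative, not from the study:

```python
def predictive_values(sens: float, spec: float, prevalence: float):
    """Positive and negative predictive value from sensitivity,
    specificity, and pre-test probability (all as fractions)."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    true_neg = spec * (1 - prevalence)
    false_neg = (1 - sens) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Sputum purulence as a "test" for high bacterial load (Stockley et al),
# with an assumed ~40% pre-test probability of a positive culture
ppv, npv = predictive_values(sens=0.944, spec=0.770, prevalence=0.40)
print(f"PPV ~{ppv:.0%}, NPV ~{npv:.0%}")
```

Under these assumptions, purulent sputum raises the probability of a positive culture to roughly 73% and mucoid sputum lowers it to about 5%, in the same direction as the 84% vs 38% culture rates observed; the exact figures depend on the true pre-test probability.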

In short, the evidence supporting antibiotic use in this select population is less robust than its wide use in clinical practice might suggest.  In hospitalized patients, antibiotics seem to confer a real benefit, though how much of that is due to the increased risk of bacterial infection associated with inpatient treatment is impossible to say.  Macrolides may play an increasingly important role in treating COPD exacerbations, as they have demonstrated anti-inflammatory and immunomodulatory potential on top of their antibacterial effects.  Moving forward, we may see a paradigm shift in the management of outpatient exacerbations, with a focus on only the more colorful of the cardinal symptoms. [12]

Aviva Regev is a 4th-year medical student at NYU Langone School of Medicine.

Peer Reviewed by: Robert Smith, MD, Associate Professor of Medicine (Pulmonary/Critical Care)


[1] Tager I, Speizer FE. Role of infection in chronic bronchitis. N Engl J Med. 1975;292:563-571.

[2] Sethi S, Murphy TF. Infection in the pathogenesis and course of chronic obstructive pulmonary disease. N Engl J Med. 2008;359:2355-2365.

[3] Rosell A, Monsó E, Soler N, et al. Microbiologic determinants of exacerbation in chronic obstructive pulmonary disease. Arch Intern Med. 2005;165:891-897.

[4] Global Initiative for Chronic Obstructive Lung Disease (GOLD). Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease. Bethesda (MD): Global Initiative for Chronic Obstructive Lung Disease (GOLD); 2009.

[5] Anthonisen NR, Manfreda J, Warren C, et al. Antibiotic therapy in exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 1987;106:196-204.

[6] Saint S, Bent S, Vittinghoff E, et al. Antibiotics in chronic obstructive pulmonary disease exacerbations. JAMA. 1995;273:957-960.

[7] Becker L. Effect size (ES) [lecture notes]. University of Colorado, Colorado Springs; 2000.

[8] Seemungal T, Donaldson G, Paul E, et al. Effect of exacerbation on quality of life in patients with chronic obstructive pulmonary disease. Am J Respir Crit Care Med. 1998;157:1418-1422.

[9] Donohue J, Van Noord J, Bateman E, et al. A 6-month, placebo-controlled study comparing lung function and health status changes in COPD patients treated with tiotropium or salmeterol. Chest. 2002;122(1):47-55.

[10] Quon B, Gan W, Sin D. Contemporary management of acute exacerbations of COPD. Chest. 2008;133:756-766.

[11] Stockley RA, O’Brien C, Pye A, Hill S. Relationship of sputum color to nature and outpatient management of acute exacerbations of COPD. Chest. 2000;117:1638-1645.

[12] Allegra L, Blasi F, Diano P, et al. Sputum color as a marker of acute bacterial exacerbations of chronic obstructive pulmonary disease. Respir Med. 2005;99(6):742-747.