Class Act

Spotlight Case Part 2: Hypergammaglobulinemia and defective humoral immunity in HIV-infected patients

June 12, 2015

By Stephen Armenti, MD

Peer Reviewed

Please see Part 1 of this Spotlight Case, which can be found here.

Case Report

A 45-year-old man with a history of mild intermittent asthma presented with two days of right knee pain and swelling accompanied by subjective fevers, shaking chills, and night sweats. He also reported one day of right calf and left groin pain. The patient denied a history of joint trauma, underlying joint disease, or surgery. There was no history of intravenous drug use, recent travel, or preceding illnesses. He was sexually active with women and reported inconsistent condom use in the past.

On physical examination, the patient was in no distress but was unable to get out of bed due to pain. His heart rate was 107/min and temperature was 101°F. The right knee was erythematous and warm with diffuse swelling and decreased range of motion. He had pitting edema extending from the right foot to the base of the right knee. There was tenderness in the left groin, and movement of the left hip elicited severe pain. Two 4 x 4 centimeter areas of fluctuance were noted over the right medial calf and left sternoclavicular joint. There were no meningeal signs or abnormal findings on pulmonary or cardiac exam.

The white blood cell count was 21,000/mm3 with 86% neutrophils. The erythrocyte sedimentation rate and C-reactive protein were elevated to 126 mm/hr and 261 mg/L, respectively. Labs were otherwise significant for a normocytic anemia (hemoglobin 9.5 g/dL) and serum protein-albumin gap of 5. A polyclonal hypergammaglobulinemia with IgG predominance was noted. A chest radiograph showed bilateral pleural effusions, greater on the right than on the left.

Arthrocentesis of the right knee performed before the initiation of antibiotics revealed purulent synovial fluid with a white blood cell count of 9960/mm3 (88% polymorphonuclear leukocytes). Two sets of joint fluid cultures were positive for Streptococcus pneumoniae. Blood cultures were sterile. Magnetic resonance imaging of the left hip showed a thickened enhancing synovium with surrounding myositis, consistent with a septic joint. Soft tissue abscesses of the right calf and left chest were also noted on imaging. A transthoracic echocardiogram revealed no evidence of vegetations or significant valvular disease. A transesophageal echocardiogram was not performed. The patient was started on intravenous ceftriaxone for disseminated pneumococcal disease. Surgical drainage of the left hip and repeat aspiration of the right knee disclosed significant purulence at both sites, although synovial fluid cultures were negative. On hospital day 3, the patient was found to be HIV positive with a CD4 count of 283 and a viral load of 3070 copies/mL.

Discussion

Invasive pneumococcal disease can be a life-threatening infection in immunocompromised patients, particularly in developing countries. Not only are HIV-positive patients more frequently colonized with multiple serotypes of Streptococcus pneumoniae, but defects in mucosal immunity brought on by HIV infection allow for invasive disease. Invasive infection remains a significant risk even in patients with a low viral load and a “reconstituted” immune system following adequate treatment with anti-retroviral therapy (ART) [1]. Thus, there appear to be permanent derangements in immune surveillance brought on by HIV infection that predispose to invasive pneumococcal disease. Here, we will explore how dysregulation of humoral immunity contributes to disseminated pneumococcal infection in HIV-positive patients.

Although loss of T-cell-mediated immunity is the hallmark of HIV infection, initial observations pointed towards parallel deficits in humoral immunity. These studies showed that B-cells from HIV-positive patients had reduced responses to T-cell-independent antigen stimulation [2]. This deficiency was evidenced by the lack of lasting immunity in HIV-positive patients following immunization with pneumococcal vaccines, which require B-cell function. How does HIV affect B-cell responses? Paradoxically, rather than reduced immunoglobulin production, HIV-positive patients typically display elevated immunoglobulin levels. The key to understanding this dichotomy was the recognition that the overall number of immunoglobulin-producing B-cells matters less than their identity and fate. During early HIV infection, there are striking shifts in B-cell populations. The majority of B-cells in the peripheral blood of uninfected individuals are either naïve or memory B-cells. During HIV infection, naïve and memory B-cells decline, while plasma cells and terminally differentiated “exhausted” B-cells expand [3]. Exhausted B-cells, “burned out” by chronic viral infection and antigen stimulation, often produce poorly effective anti-HIV antibodies. These cells fail to migrate appropriately to germinal centers in lymph nodes, show reduced diversification of immunoglobulin production, and overproduce a relatively narrow repertoire of immunoglobulins. Conversely, memory B-cells, essential for the continued response to encapsulated pathogens such as S. pneumoniae, are specifically diminished. Furthermore, specific immunoglobulin subclasses appear to be more affected than others. Pneumococcal polysaccharide capsular antigens most often stimulate production of the IgG2 subclass. Although overall levels of IgG are elevated in HIV-positive individuals, IgG2 specifically fails to expand following vaccination with pneumococcal vaccines [4]. Thus, chronic HIV viremia is associated with the expansion of ineffective B-cell subpopulations, including hyperactivated and exhausted B cells, which collectively contribute to humoral dysregulation [5].

HIV itself is incapable of directly infecting B-cells or plasma cells. Still, as mentioned above, many of the effects of HIV on humoral immunity appear to occur in a T-cell-independent manner. Given this paradox, what is the molecular mechanism of humoral dysregulation? Recent studies have started to directly interrogate the role of HIV viral antigens in this process. The HIV glycoprotein gp120 can bind directly to antigen-presenting dendritic cells, inhibit cytokine secretion, and prevent specific humoral immune responses to invading pathogens by suppressing the secretion of B-cell activating factor of the TNF family (BAFF) [6, 7]. In contrast, gp120 can also function as a superantigen, resulting in aberrant activation and expansion of B-cells and grossly elevated oligoclonal immunoglobulin levels. Thus, although likely not the sole determinant, molecules like gp120 represent a potential link between the grossly elevated immunoglobulin levels seen in HIV infection and the concomitant, paradoxical immune suppression.

Disseminated pneumococcal disease and septic arthritis are well documented, albeit relatively uncommon, presentations in HIV-positive patients. Specific correlations have not been drawn between the relative incidence of gammopathy and the risk of pneumococcal infection. Nevertheless, an attractive hypothesis is that deficits in specific immunoglobulin subtypes predispose to disseminated infection. Going forward, what are the best therapeutic recommendations given the current evidence and natural history of humoral immune dysfunction in HIV-positive patients? As mentioned above, delayed ART initiation may lead to permanent changes in B-cell distributions; conversely, rapid initiation of ART may reverse the B-cell population changes that occur early in disease [3]. Vaccination is an essential preventive intervention against S. pneumoniae infection in HIV patients. The 7-valent conjugate vaccine (PCV-7) has been shown to effectively prevent pneumococcal disease in HIV patients [8]. Furthermore, although it has not been directly studied in HIV-positive individuals, PCV-13 is empirically recommended over PCV-7 given its broader serotype coverage. Conjugate vaccines should be administered prior to the polysaccharide vaccine PPSV-23 [9]. This combined “prime-boost” strategy is the currently recommended immunization schedule in HIV patients. However, if vaccination is not initiated early, the effectiveness of PPSV-23 falls off once CD4 counts drop below 200 cells/mm3. Lastly, patients with CD4 counts below 200 cells/mm3 are advised to take co-trimoxazole prophylaxis, which reduces the rates of pneumonia, diarrhea, and malaria. Although this prevention strategy has little effect on the rate of colonization with S. pneumoniae and, if anything, promotes antibiotic resistance [10], it significantly decreases the rates of bacteremia and pneumonia due to S. pneumoniae [11].

As a final comment, HIV infection is a well-established risk factor for plasma cell disorders, and follow-up surveillance for the development of a more worrisome monoclonal gammopathy may therefore be warranted. Regardless of precautions, disseminated pneumococcal infection will remain a risk in this patient population, and consistent monitoring of anti-retroviral efficacy together with close follow-up are essential for long-term success.

Dr. Stephen Armenti is a recent graduate of NYU School of Medicine

Peer Reviewed by Howard Leaf, MD, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Jordano Q, Falco V, Almirante B, et al. Invasive pneumococcal disease in patients infected with HIV: still a threat in the era of highly active antiretroviral therapy. Clinical Infectious Diseases 2004:38(11), 1623–1628.  http://www.ncbi.nlm.nih.gov/pubmed/15156452

2. Lane HC, Masur H, Edgar LC, Whalen G, Rook AH, Fauci AS. Abnormalities of B-cell activation and immunoregulation in patients with the acquired immunodeficiency syndrome. NEJM 1983:309, 453–458.  http://www.ncbi.nlm.nih.gov/pubmed/6224088

3. Moir S, Buckner CM, Ho J, et al. B cells in early and chronic HIV infection: evidence for preservation of immune function associated with early initiation of antiretroviral therapy. Blood 2010:116(25), 5571–5579.  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3031405/

4. Payeras A, Martinez P, Mila J, et al. Risk factors in HIV-1-infected patients developing repetitive bacterial infections: toxicological, clinical, specific antibody class responses, opsonophagocytosis and Fc-gamma RIIa polymorphism characteristics. Clinical and Experimental Immunology 2002:130(2), 271–278.

5. Moir S, Fauci AS. Pathogenic mechanisms of B-lymphocyte dysfunction in HIV disease. Journal of Allergy and Clinical Immunology 2008:122(1), 223-248.   http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2708937/

6. Chung NPY, Matthews K, Klasse PJ, Sanders RW, Moore JP. HIV-1 gp120 impairs the induction of B cell responses by TLR9-activated plasmacytoid dendritic cells. Journal of Immunology 2012:189(11), 5257–5265.

7. Martinelli E, et al. HIV-1 gp120 inhibits TLR9-mediated activation and IFN-alpha secretion in plasmacytoid dendritic cells. Proceedings of the National Academy of Sciences 2007:104(9), 3396–3401.

8. French N, Gordon SB, Mwalukomo T, et al. A trial of a 7-valent pneumococcal conjugate vaccine in HIV-infected adults. New England Journal of Medicine 2010:362(9), 812-22.  http://www.ncbi.nlm.nih.gov/pubmed/20200385

9. Lesprit P, Pedrono G, Molina JM, et al. Immunological efficacy of a prime-boost pneumococcal vaccination in HIV-infected adults. AIDS 2007:21(18), 2425-34.

10. Everett DB, Mukaka M, Denis B, et al. Ten years of surveillance for invasive Streptococcus pneumoniae during the era of antiretroviral scale-up and cotrimoxazole prophylaxis in Malawi. PLoS One 2011:6(3), e17765.  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3058053/

11. Anglaret X, Chene G, Attia A, et al. Early chemoprophylaxis with trimethoprim-sulfamethoxazole for HIV-1-infected adults in Abidjan, Côte d’Ivoire: a randomised trial. Cotrimo-CI Study Group.

Defiance

June 5, 2015

By Amar Parikh, MD

I recently visited The Metropolitan Museum of Art and stumbled across this sculpture called “Woman of Venice II” by Alberto Giacometti. It made me recall an experience I had with a patient on the hematology service this past autumn, and I could not help but marvel at how my patient and this work of art seemed to echo each other. Below is my effort at articulating some of the thoughts I had when I saw this sculpture.

****

[Image: Alberto Giacometti, “Woman of Venice II” [1]]

I gaze upon the sculpture before me, and what I see is defiance. As I approach the halfway point of my intern year, her gaunt frame instantly triggers a subconscious clinical evaluation, as differential diagnoses quickly form in my mind. But above all else, I see defiance. Defiance in the face of disease, disease of every kind—natural disaster, political injustice, artistic censorship, vicious genocide. These external evils stand alongside enemies that spring from within, enemies with hard-sounding names—Crohn’s, cholangitis, cancer, and perhaps the most fittingly titled of all, “consumption,” as the devastating Mycobacterium tuberculosis was known to consume its victims from the inside out. Yet against such deadly forces, this is a woman who stands tall, impossibly tall, her physicality stretched to its absolute limits in the face of such horrors, emphasizing her frail frame yet at the same time exaggerating her power and emboldening her presence.

My mind wanders to a patient I helped take care of recently, an 89-year-old woman who had once been a prisoner at Auschwitz. She was suffering from complications stemming from advanced cancer and Clostridium difficile colitis. Like the harsh C’s of the very illnesses she bore, she was chillingly cachectic—thin, emaciated, and wasting away before my eyes. Yet she also held her head high. During my ritual morning physical exam, her silence was our strength as my stethoscope eavesdropped on the horrors brewing in her lungs. She would smile weakly as my exam continued, my fingertips gingerly pressed against her belly, “appreciating” her strikingly distended abdomen. She would summon a quiet chuckle as she told me about her “overnight events,” ranging from a rare night of restful sleep to, more often than not, intractable diarrhea, incessant itching, and insufferable pain, sometimes all at once. She would stop mid-sentence and stare out the window, lost in her view of the New York City skyline. She would shake her head while recounting her many childhood stories and she would ask, “where were we?” She would, but after passing away on a cold autumn morning, she never will again.

My patient’s steadfast courage reminds me of the woman in this sculpture, how she stands in a way that somehow reduces her to mere shreds yet also imbues her with towering command. Isn’t this how we wish all our terminally ill patients felt? Standing tall in the face of ravaging illness, mind over matter?

In the sculpture before me, her feet arise from the cold clay, anchored or escaping, I cannot tell which, as if to say to us, “that from which we came is to which we return.” We all flicker for our brief time in this world, trying our best to shine bright before all turns back to black, before our vital elements are reduced to the cold earth from which we sprang. But let her brief moment of strength be fixed for eternity, this woman in her grace facing the horrors of the world with a moment of defiant beauty and intransigent pride, an inspiration to all those who have the fortune of locking eyes with her unrelenting stare. Defiance. Yes, what I see is not the wasting away of a once-healthy human body and spirit, but defiance, defiance burning bright amidst the heavy shadow of innumerable evils.

Dr. Amar Parikh is a 1st year resident at NYU Langone Medical Center.

Image Courtesy of The Metropolitan Museum of Art – The Collection Online

References

1. Giacometti, Alberto. Woman of Venice II. http://www.metmuseum.org/collection/the-collection-online/search/489981 Accessed December 18, 2014.


Spotlight Case Part 1: Oligoarticular Septic Arthritis-A Case of Disseminated Pneumococcal Disease

May 13, 2015

By Jennifer S. Mulliken, M.D.

Peer Reviewed

Case Report

A 45-year-old man with a history of mild intermittent asthma presented with two days of right knee pain and swelling accompanied by subjective fevers, shaking chills, and night sweats. He also reported one day of right calf and left groin pain. The patient denied a history of joint trauma, underlying joint disease, or surgery. There was no history of intravenous drug use, recent travel, or preceding illnesses. He was sexually active with women and reported inconsistent condom use in the past.

On physical examination, the patient was in no distress but was unable to get out of bed due to pain. His heart rate was 107/min and temperature was 101°F. The right knee was erythematous and warm with diffuse swelling and decreased range of motion. He had pitting edema extending from the right foot to the base of the right knee. There was tenderness in the left groin, and movement of the left hip elicited severe pain. Two 4 x 4 centimeter areas of fluctuance were noted over the right medial calf and left sternoclavicular joint. There were no meningeal signs or abnormal findings on pulmonary or cardiac exam.

The white blood cell count was 21,000/mm3 with 86% neutrophils. The erythrocyte sedimentation rate and C-reactive protein were elevated to 126 mm/hr and 261 mg/L, respectively. Labs were otherwise significant for a normocytic anemia (hemoglobin 9.5 g/dL) and serum protein-albumin gap of 5. A polyclonal hypergammaglobulinemia with IgG predominance was noted. A chest radiograph showed bilateral pleural effusions, greater on the right than on the left.

Arthrocentesis of the right knee performed before the initiation of antibiotics revealed purulent synovial fluid with a white blood cell count of 9960/mm3 (88% polymorphonuclear leukocytes). Two sets of joint fluid cultures were positive for Streptococcus pneumoniae. Blood cultures were sterile. Magnetic resonance imaging of the left hip showed a thickened enhancing synovium with surrounding myositis, consistent with a septic joint. Soft tissue abscesses of the right calf and left chest were also noted on imaging. A transthoracic echocardiogram revealed no evidence of vegetations or significant valvular disease. A transesophageal echocardiogram was not performed. The patient was started on intravenous ceftriaxone for disseminated pneumococcal disease. Surgical drainage of the left hip and repeat aspiration of the right knee disclosed significant purulence at both sites, although synovial fluid cultures were negative. On hospital day 3, the patient was found to be HIV positive with a CD4 count of 283 and a viral load of 3070 copies/mL.

Discussion

Infections of native joints can be caused by a wide range of microorganisms including bacterial, viral, and fungal pathogens. The greatest morbidity is seen in bacterial (or septic) arthritis because of the high potential for rapid and irreversible joint destruction [1]. The majority of infections are monoarticular, with the knee being the most commonly involved joint. Abnormal joint architecture is the most important risk factor for bacterial arthritis, and patients with rheumatoid arthritis are at especially high risk [2]. Other common predisposing factors include chronic systemic diseases, immunosuppressive states, local trauma or surgery, and the presence of a prosthetic joint [3-5].

Bacterial arthritis usually occurs by hematogenous spread. The highly vascular environment of the synovium as well as the absence of a synovial basement membrane allow bacterial pathogens to easily access the synovial space [6]. While certain bacterial toxins and virulence factors directly mediate joint injury, the damage in septic arthritis owes more to the host inflammatory response to the infection than to the infection itself [1]. Within the synovial membrane, bacteria trigger an inflammatory cascade that induces a suppurative proliferative synovitis. This, in turn, can progress to destruction of articular cartilage and subchondral bone loss [7].

By far the most commonly isolated pathogen in native joint septic arthritis is Staphylococcus aureus [3,4,8]. The incidence is highest in patients with rheumatoid arthritis, where S. aureus is reportedly responsible for 75% of cases [9]. Streptococcus species are the next most commonly associated pathogen, and group A streptococci account for the majority of these infections [10,11]. While Streptococcus pneumoniae is a less frequent cause of bacterial arthritis, it has been identified as the underlying pathogen in 6-10% of patients [12,13]. Gram-negative bacilli are isolated in 10-20% of cases, with Pseudomonas aeruginosa and Escherichia coli being the most common causative agents. The highest risk groups for gram-negative septic arthritis are intravenous drug users, patients at extremes of age, and those with underlying immunocompromise [11]. Neisseria gonorrhoeae is an important cause of monoarticular and polyarticular bacterial arthritis among young sexually active individuals; however, the prevalence of gonococcal arthritis has declined significantly in recent decades [1,14].

In the preceding case, the patient presented with oligoarticular pneumococcal septic arthritis involving the right knee and left hip. During his hospitalization he was diagnosed with HIV, an established risk factor for invasive pneumococcal disease [15]. A retrospective study from San Francisco conducted before the era of anti-retroviral therapy (ART) showed that 54.2% of patients with invasive pneumococcal disease were also infected with HIV [16]. The incidence of pneumococcal disease per 100,000 person-years was increased 23-fold in patients with AIDS. In addition, 82.5% of pneumococcal isolates in HIV-infected patients were serotypes included in the pneumococcal polysaccharide vaccine (PPSV23). In 2012 the Advisory Committee on Immunization Practices recommended the sequential administration of both the polysaccharide and conjugate (PCV13) pneumococcal vaccines to HIV-infected patients. The widespread use of ART as well as increased pneumococcal vaccination efforts have helped reduce disease burden. Nevertheless, HIV remains an important risk factor for pneumococcal disease.

In general, the most common risk factors for pneumococcal joint infections are rheumatoid arthritis and alcoholism, but other predisposing factors include B-cell deficiencies, multiple myeloma, and osteoarthritis [15,17-19]. Large joints are affected more often than small joints, with the knee being the most commonly affected site [20]. Infections of the hip, shoulder, elbow, and ankle have also been reported [14]. Polyarticular infections occur in up to 25% of cases, with worse overall outcomes than in monoarticular infections [17,21]. Although pneumococcal joint infections are relatively uncommon, they are an important cause of bacterial arthritis that must be promptly recognized and treated.

Patients typically present acutely with a severely inflamed joint or joints. As with other causes of bacterial arthritis, systemic symptoms are common, including fever and shaking chills [12,22]. The white blood cell count in the synovial fluid may range from less than 10,000 cells/mm3 to greater than 100,000 cells/mm3 [23]. In one review of 90 patients with pneumococcal septic arthritis, the synovial white blood cell count was greater than 11,000/mm3 in more than 50% of patients. A preceding or concurrent site of pneumococcal infection is found in the majority of cases, with pneumonia and meningitis being the most frequent concomitant infections [16]. Endocarditis has also been reported [12,22]. Bacteremia has been found to occur in greater than 70% of patients – more frequently than in all patients with bacterial arthritis combined [12,17].

Antibiotic therapy is recommended for three to four weeks for uncomplicated infections, with one to two weeks of initial intravenous therapy [12,19]. Most pneumococcal infections are sensitive to penicillin, but third-generation cephalosporins (cefotaxime, ceftriaxone) or vancomycin are reasonable alternatives given increasing rates of antimicrobial resistance in S. pneumoniae [1,24]. Repeat joint aspirations are generally adequate for the initial drainage of infected joints. Surgical drainage is indicated for septic arthritis of the hip, for loculated infections, and for infections that respond inadequately to arthrocentesis after five to seven days of antibiotics [12,25]. The prognosis for pneumococcal septic arthritis is generally favorable. More than 80% of patients survive the infection, and more than 60% have good functional outcomes [17].

In summary: S. pneumoniae is an uncommon yet well-documented cause of septic arthritis. Symptoms are similar to those of other bacterial arthritides; however, oligoarticular and polyarticular infections are more common when S. pneumoniae is the underlying pathogen. Given the high rate of concomitant infections, pneumonia and meningitis should be ruled out in all patients with pneumococcal septic arthritis. An echocardiogram should be strongly considered to rule out endocarditis. In addition, all patients should be tested for HIV given the association between immunodeficiency and invasive pneumococcal disease. Prognosis is usually favorable with appropriate treatment.

Dr. Jennifer S. Mulliken is a 2nd year resident at NYU Langone Medical Center

Peer reviewed by Howard Leaf, MD, Internal Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Ohl CA. Infectious arthritis of native joints. In: Mandell GL, Bennett JE, Dolin R, eds. Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases. 7th edition. Philadelphia, PA: Saunders Elsevier, 2009, pp. 1443-1456.

2. Kaandorp CJ, Van Schaardenburg D, Krijnen P, et al. Risk factors for septic arthritis in patients with joint disease. A prospective study. Arthritis Rheum. 1995;38(12):1819-1825.

3. Gupta MN, Sturrock RD, Field M. A prospective 2-year study of 75 patients with adult-onset septic arthritis. Rheumatology (Oxford). 2001;40(1):24-30. http://www.ncbi.nlm.nih.gov/pubmed/11157138

4. Kaandorp CJ, Dinant HJ, van de Laar MA, et al. Incidence and sources of native and prosthetic joint infection: a community based prospective survey. Ann Rheum Dis. 1997;56(8):470-475.

5. Saraux A, Taelman H, Blanche P, et al. HIV infection as a risk factor for septic arthritis. Br J Rheumatol. 1997;36(3):333-337.

6. Goldenberg DL, Reed DI. Bacterial arthritis. N Engl J Med. 1985;312:764-771. http://www.ncbi.nlm.nih.gov/pubmed/3883171

7. Goldenberg DL, Chisholm PL, Rice PA, et al. Experimental models of bacterial arthritis: A microbiologic and histopathologic characterization of the arthritis after the intraarticular injections of Neisseria gonorrhoeae, Staphylococcus aureus, group A streptococci, and Escherichia coli. J Rheumatol 1983;10:5-11.

8. Morgan DS, Fisher D, Merianos A, Currie BJ. An 18 year clinical review of septic arthritis from tropical Australia. Epidemiol Infect. 1996;117(3):423-428. http://www.ncbi.nlm.nih.gov/pubmed/8972665

9. Goldenberg DL. Infectious arthritis complicating rheumatoid arthritis and other chronic rheumatic disorders. Arthritis Rheum. 1989;32(4):496-502.

10. Goldenberg DL. Septic arthritis. Lancet. 1998;351(9097):197-202.

11. Shirtliff ME, Mader JT. Acute septic arthritis. Clin Microbiol Rev. 2002;15(4):527-544.

12. Ross JJ, Saltzman CL, Carling P, Shapiro DS. Pneumococcal septic arthritis: review of 190 cases. Clin Infect Dis. 2003;36(3):319-327.

13. Ryan MJ, Kavanagh R, Wall PG, Hazleman BL. Bacterial joint infections in England and Wales: analysis of bacterial isolates over a four year period. Br J Rheumatol. 1997;36(3):370-373.

14. Bardin T. Gonococcal arthritis. Best Pract Res Clin Rheumatol. 2003;17(2):201-208.  http://www.ncbi.nlm.nih.gov/pubmed/12787521

15. Frankel RE, Virata M, Hardalo C, et al. Invasive pneumococcal disease: clinical features, serotypes, and antimicrobial resistance patterns in cases involving patients with and without human immunodeficiency virus infection. Clin Infect Dis. 1996;23(3):577-584.

16. Nuorti JP, Butler JC, Gelling L, et al. Epidemiologic relation between HIV and invasive pneumococcal disease in San Francisco County, California. Ann Intern Med. 2000;132(3):182-190.

17. Raad J, Peacock JE Jr. Septic arthritis in the adult caused by Streptococcus pneumoniae: a report of 4 cases and review of the literature. Semin Arthritis Rheum. 2004;34(2):559-569.

18. Ispahani P, Weston VC, Turner DP, Donald FE. Septic arthritis due to Streptococcus pneumoniae in Nottingham, United Kingdom, 1985-1998. Clin Infect Dis. 1999;29(6):1450-1454.

19. James PA, Thomas MG. Streptococcus pneumoniae septic arthritis in adults. Scand J Infect Dis. 2000;32(5):491-494.

20. Epstein JH, Zimmermann B, Ho G Jr. Polyarticular septic arthritis. J Rheumatol. 1986;13(6):1105-1107.

21. Musher DM. Streptococcus pneumoniae. In: Mandell GL, Bennett JE, Dolin R, eds. Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases. 7th edition. Philadelphia, PA: Saunders Elsevier, 2009, pp. 2623-2642.

22. Kauffman CA, Watanakunakorn C, Phair JP. Pneumococcal arthritis. J Rheumatol. 1976;3(4):409-419. http://www.ncbi.nlm.nih.gov/pubmed/1022873

23. Baraboutis I, Skoutelis A. Streptococcus pneumoniae septic arthritis in adults. Clin Microbiol Infect. 2004;10(12):1037-1039.

24. Kaplan SL, Mason EO Jr. Management of infections due to antibiotic-resistant Streptococcus pneumoniae. Clin Microbiol Rev. 1998;11(4):628-644.

25. Pioro MH, Mandell BF. Septic arthritis. Rheum Dis Clin North Am. 1997;23(2):239-258.

 

Iron Deficiency Anemia: A Guide to Oral Iron Supplements

March 26, 2015

By Cindy Fei, MD

Peer Reviewed

Iron deficiency is the most common cause of anemia in the United States. Despite this, many questions surround the best choice of supplementation. Which iron formulation should be prescribed? Do newer preparations such as enteric-coated tablets help? How long should treatment last? The following is a review of the literature addressing these questions.

In order to best understand the dosing regimens, let’s first review the metabolism of iron.

The body contains approximately 45mg/kg of elemental iron, of which two-thirds is in hemoglobin, 15-20% is in storage form, 10% is in myoglobin, and 5% is in other iron-containing enzymes. Iron mostly circulates in a closed system and is recycled from old red blood cells, with the only significant losses occurring with major bleeding. Less than 0.1% of total iron is lost on average daily through urine, sweat, feces, skin sloughing, menses, and childbirth. These last two sources of loss explain why women store proportionately less iron as ferritin or hemosiderin in the liver and macrophages. Overall, 1-2mg of iron must be replenished orally each day to cover these physiologic losses, with women at the higher end of iron requirements.[1] The government-recommended dietary allowance of iron (8mg/day for men and postmenopausal women, versus 18mg/day for menstruating women) covers only this bare minimum and cannot replace iron losses beyond the physiologic state.[2] This discrepancy explains why supplemental iron is required to treat iron-deficiency anemia.

In the absence of major bleeding, overt iron-deficiency anemia develops in a progressive manner. Reduction of iron stores in the form of ferritin is the first sign of iron supply-demand mismatch, as iron is mobilized from the liver and reticuloendothelial system. However, serum iron, total iron-binding capacity (TIBC), and red cell morphology remain normal until iron stores are exhausted. Afterwards, serum iron levels decrease, while TIBC increases as transferrin production rises to capture the iron that remains available. Dysfunctional erythropoiesis and microcytosis occur only once the transferrin saturation drops below 15%. Only at that point do anemia and low hemoglobin levels develop.[3]
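For readers who like to see the arithmetic, the short Python sketch below applies the standard definition of transferrin saturation (serum iron divided by TIBC); the laboratory values shown are hypothetical and chosen only to illustrate the 15% threshold mentioned above.

```python
def transferrin_saturation(serum_iron_ug_dl: float, tibc_ug_dl: float) -> float:
    """Transferrin saturation (%) = serum iron / TIBC x 100."""
    return 100.0 * serum_iron_ug_dl / tibc_ug_dl

# Hypothetical labs in established iron deficiency: low serum iron, compensatory rise in TIBC.
tsat = transferrin_saturation(serum_iron_ug_dl=30, tibc_ug_dl=450)
print(f"Transferrin saturation = {tsat:.1f}%")  # 6.7%, well below the 15% threshold
```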

Iron Supplement Formulations

Oral iron supplements offer a more robust avenue for iron repletion than diet alone. The most commonly prescribed preparations, the ferrous salts, include ferrous sulfate, ferrous gluconate, and ferrous fumarate. These ferrous (Fe+2) forms are more soluble than the dietary ferric (Fe+3) form, with twice the absorbability. The estimated absorption rate of the ferrous salts is 10-15%, with no difference in absorbability found among the three main formulations in a small but randomized controlled trial.[4] The three ferrous salts also perform similarly in practice, raising hemoglobin by 0.25g/dL per day from an average baseline hemoglobin of 5g/dL in one study.[5] The low absorption rate results in less-than-ideal dosing of three pills per day in order to reverse iron deficiency. For example, a single ferrous sulfate 325mg tablet contains 60mg of elemental iron, so thrice-daily dosing provides 180mg of elemental iron per day, well within the recommended daily range of 150-200mg for iron-deficient patients. Assuming an absorption rate of 10%, the roughly 500mg of bioavailable iron accumulated after one month of therapy should be available to produce 500mL of packed red blood cells, or an increase of 2g/dL in hemoglobin.[6]
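The arithmetic in the preceding paragraph can be laid out explicitly. The sketch below is illustrative only, not a dosing tool; the figure of roughly 250mg of absorbed iron per 1g/dL rise in hemoglobin is back-calculated from the 500mg-to-2g/dL estimate quoted above.

```python
# Back-of-the-envelope iron repletion arithmetic (illustrative only, not a dosing tool).
ELEMENTAL_IRON_PER_TABLET_MG = 60    # one 325mg ferrous sulfate tablet
TABLETS_PER_DAY = 3
ABSORPTION_FRACTION = 0.10           # ~10% absorption of ferrous salts
MG_ABSORBED_IRON_PER_G_DL_HGB = 250  # ~500mg absorbed iron -> ~2g/dL hemoglobin rise (see text)

daily_elemental_iron_mg = ELEMENTAL_IRON_PER_TABLET_MG * TABLETS_PER_DAY         # 180 mg/day
absorbed_over_month_mg = daily_elemental_iron_mg * ABSORPTION_FRACTION * 30      # ~540 mg
expected_hgb_rise_g_dl = absorbed_over_month_mg / MG_ABSORBED_IRON_PER_G_DL_HGB  # ~2.2 g/dL

print(f"{daily_elemental_iron_mg} mg/day elemental iron; "
      f"~{absorbed_over_month_mg:.0f} mg absorbed over a month; "
      f"expected hemoglobin rise ~{expected_hgb_rise_g_dl:.1f} g/dL")
```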

Of note, a randomized controlled trial showed that incrementally higher doses of iron in elders with iron-deficiency anemia did not provide any additional benefit in iron status, and in fact caused more gastrointestinal upset. Patients aged 80 or older with iron-deficiency anemia were randomized to receive 15mg, 50mg, or 150mg elemental iron daily, which resulted in comparable increases in hemoglobin and ferritin after 60 days without any statistically significant differences among the three groups. However, the higher doses resulted in statistically significant increases in abdominal discomfort, nausea, vomiting, diarrhea, constipation, and black stools.[7] This suggests that the elderly may benefit from less than one pill of ferrous sulfate daily without sacrificing effectiveness.

Oral iron therapy is notorious for its side effects, namely constipation, diarrhea, heartburn, nausea, and epigastric pain, which may affect up to 20% of patients and limit compliance with all of the iron formulations. The estimated adherence rate hovers around 40-60%.[8] The upper gastrointestinal side effects, such as nausea and epigastric pain, are more dose-dependent and can be managed with lower or less frequent dosing initially. In contrast, lower gastrointestinal effects, such as altered bowel habits, are less related to dosing.[9] The strategy of administering iron with meals in order to minimize gastrointestinal upset unfortunately impairs iron absorption, by as much as 50% in one small study.[10]

A randomized controlled trial did not show any statistically significant difference in gastrointestinal side effects among equivalent dosages of the three different ferrous salt preparations.[11] This finding was confirmed in a systematic review of 111 studies that compared the different ferrous salt formulations at doses of 80-120mg elemental iron per day.[12]

Enteric Coating Formulations

In light of these therapy-limiting side effects, enteric-coated formulations of iron arrived on the market in an attempt to decrease gastrointestinal upset and simplify the dosing schedule. However, this comes at the cost of absorption, as the coating may carry the iron past the duodenum, its principal site of absorption. A comparison between enteric-coated and elixir ferrous sulfate in healthy volunteers did not show a statistically significant increase in serum iron concentration from baseline in the enteric-coated group; the estimated bioavailability of the enteric-coated preparation was 30% of that of the regular oral preparation.[13]

Niferex

Another formulation, Niferex, a polysaccharide-iron complex, is designed to minimize gastrointestinal upset via delayed iron release in the intestines. This combination of ferric iron and low-molecular-weight polysaccharide contains 150mg of elemental iron. The delayed-release formulation raises concerns about inadequate intestinal absorption. In a randomized open-label study comparing equivalent daily doses of Niferex and ferrous fumarate, patients with iron-deficiency anemia achieved significantly greater increases in hemoglobin after 12 weeks in the ferrous fumarate group (2.84 vs 0.60 with Niferex, p<0.0001). Statistically significant increases in ferritin and mean corpuscular volume were also seen in patients taking ferrous fumarate rather than Niferex, although the ferrous iron group suffered from significantly more nausea and diarrhea.[14]

Vitamin C

Ascorbic acid has been theorized to improve absorption by reducing iron to the ferrous (Fe+2) state, optimizing its solubility. Increasing doses of vitamin C exhibited a dose-dependent effect on iron absorption when co-administered in healthy volunteers, ranging from no change in ferrous sulfate absorption with ascorbic acid doses below 100mg, to a 48% increase in absorption of a 30mg dose of elemental iron when given with 500mg of ascorbic acid.[15] Although long-term studies have not been conducted, the relatively benign nature of vitamin C leads to a low threshold for co-administration with iron.

Response to Iron Therapy

The goal of iron supplementation is two-fold: to reverse the anemia and to replete iron stores. The expected response to a course of iron is a reticulocytosis in 3-5 days, peaking after one week, followed shortly by a rise in hemoglobin. A response in hemoglobin should be apparent three weeks into therapy.[16] An increase in hemoglobin by 1g/dL after one month qualifies as an adequate response.[17] Little research exists on the optimal duration of therapy, but an acceptable regimen is to continue therapy for three months after normalization of hemoglobin, in order to replenish iron stores per British Society of Gastroenterology guidelines.[18]

Conclusions

Iron deficiency anemia can be treated with oral iron supplements, of which the most commonly prescribed form is ferrous sulfate 325mg three times daily, with the option of lower and less frequent dosing in the elderly. The three ferrous salt preparations have similar side effects, bioavailability, and effectiveness. Gastrointestinal upset is a common side effect that limits patient compliance with iron therapy. Proposed strategies to avoid this include reducing the dose, administering iron during mealtimes, or opting for enteric-coated preparations or a polysaccharide-iron complex (Niferex). However, all these strategies impair iron absorption and may result in suboptimal clinical outcomes. Lastly, further investigation is needed in the areas of vitamin C co-administration and the optimal duration of iron therapy.

Dr. Cindy Fei is an Internal Medicine resident at NYU Langone Medical Center

Peer reviewed by David Green, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Brittenham GM. Pathophysiology of iron homeostasis. In: Hoffman R, Benz EJ, Silberstein LE, eds. Hematology: basic principles and practice. 6th ed. Philadelphia: Elsevier Saunders, 2013:427-436.

2. National Institutes of Health: Office of Dietary Supplements. Dietary supplement fact sheet: iron. 24 August 2007. Accessed 9 November 2013. http://ods.od.nih.gov/factsheets/Iron-HealthProfessional/

3. Adamson JW. Iron deficiency and other hypoproliferative anemias. In: Longo DL, Fauci AS, Kasper DL, eds. Harrison’s online. 18th ed. Access Medicine. www.accessmedicine.com

4. Brise H, Hallberg L. Absorbability of different iron compounds. Acta Med Scand Suppl 1962;376: 23-37. http://www.ncbi.nlm.nih.gov/pubmed/13873149

5. Pritchard JA. Hemoglobin regeneration in severe iron-deficiency anemia. JAMA 1966;195(9): 97-100. http://www.ncbi.nlm.nih.gov/pubmed/5951874

6. Alleyne M, Horne MK, and Miller JL. Individualized treatment for iron-deficiency anemia in adults. Am Jour of Med 2008;121: 943-948. http://www.ncbi.nlm.nih.gov/pubmed/18954837

7. Rimon E, Kagansky N, Kagansky M, et al. Are we giving too much iron? Low-dose iron therapy is effective in octogenarians. Am J of Med 2005;118(10): 1142-1147. http://www.ncbi.nlm.nih.gov/pubmed/16194646

8. Cancelo-Hidalgo MJ, Castelo-Branco C, Palacios S, et al. Tolerability of different oral iron supplements: a systematic review. Current Med Research & Opinion. 2013;29(4): 291-303. http://www.ncbi.nlm.nih.gov/pubmed/23252877

9. Alleyne (see reference 6).

10. Brise H. Influence of meals on iron absorption in oral iron therapy. Acta Med Scand Suppl 1962;376: 39-45. http://www.ncbi.nlm.nih.gov/pubmed/13873153

11. Hallberg L, Ryttinger L, and Solvell L. Side-effects of oral iron therapy: a double-blind study of different iron compounds in tablet form. Acta Med Scand Suppl 1966;459: 3-10. http://www.ncbi.nlm.nih.gov/pubmed/5957969

12. Cancelo-Hidalgo (see reference 8).

13. Walker SE, Paton TW, Cowan DH. Bioavailability of iron in oral ferrous sulfate preparations in healthy volunteers. Canadian Med Assoc Jour 1989;141(6):543-547. http://www.ncbi.nlm.nih.gov/pubmed/2776093

14. Liu T, Lin S, Chang C, Yang W, Chen T. Comparison of a combination ferrous fumarate product and a polysaccharide iron complex as oral treatments of iron deficiency anemia: a Taiwanese study. Int J Hematol. 2004;80: 416-420. http://www.ncbi.nlm.nih.gov/pubmed/15646652

15. Brise H, Hallberg L. Effect of ascorbic acid on iron absorption. Acta Med Scand Suppl 1962;376: 51-58. http://www.ncbi.nlm.nih.gov/pubmed/13873150

16. Brittenham GM. Disorders of iron homeostasis: iron deficiency and overload. In: Hoffman R, Benz EJ, Silberstein LE, eds. Hematology: basic principles and practice. 6th ed. Philadelphia: Elsevier Saunders, 2013:437-449.

17. Short MW, Domagalski JE. Iron deficiency anemia: evaluation and management. Am Fam Physician 2013;87(2): 98-104. http://www.ncbi.nlm.nih.gov/pubmed/23317073

18. Goddard AF, James MW, McIntyre AS, Scott BB, British Society of Gastroenterology. Guidelines for the management of iron deficiency anemia. Gut 2011;60(10): 1309-1316. http://www.ncbi.nlm.nih.gov/pubmed/21561874


Diabetic Foot Ulcers: Pathogenesis and Prevention

March 19, 2015

By Shilpa Mukunda, MD

Peer Reviewed

On my first day on inpatient medicine at the VA Hospital, Mr. P came in with an oozing foot ulcer. Mr. P, a 60-year-old man with a 30 pack-year smoking history, poorly controlled diabetes, peripheral vascular disease, and chronic renal disease, had already had toes amputated. He knew all too well the routine of what would happen now with his newest ulcer. After two weeks of IV antibiotics and waiting for operating room time, Mr. P eventually had his toe amputated. It was his fourth amputation.

Mr. P unfortunately is not alone in this chronic complication of diabetes. Approximately 15-25% of individuals with type 2 diabetes mellitus develop a diabetic foot ulcer [1]. Not all ulcers, however, require amputation. Ulcers can also be treated with sharp debridement, offloading techniques to redistribute pressure from the ulcer, and wound dressings, with hydrogels being the most frequently used [2]. Weekly sharp debridement is associated with more rapid healing of ulcers [2]. In addition, in patients with severe peripheral vascular disease and critical limb ischemia, early surgical revascularization can prevent ulcer progression and decrease rates of amputation [3]. Even with immediate and intensive treatment, however, many foot ulcers will take months to heal or may not heal at all. Diabetic foot ulcers are the most common cause of non-traumatic amputations in the United States, with 14-24% of patients with an ulcer subsequently undergoing amputation [4]. Amputation leads to physical disability and greatly reduced quality of life [5]. In addition to their detrimental effects on the lives of individual patients, ulcers also have a great economic cost to society. Patients with ulcers often have lengthy inpatient stays with involvement of specialists. According to a 1999 study, the healthcare costs of a single ulcer are estimated to be approximately $28,000 [4].

The pathogenesis of diabetic foot ulcers is multifaceted. Neuropathy, abnormal foot mechanics, peripheral artery disease, and poor wound healing contribute to diabetic foot ulcers. Neuropathy, a microvascular complication of diabetes, occurs in approximately 50% of individuals with long-standing type 1 and type 2 diabetes mellitus, and causes diabetic foot ulcers through a variety of mechanisms [6]. First, distal symmetric polyneuropathy of sensory fibers, the most common neuropathy in diabetes, leads to distal sensory loss in a glove-and-stocking distribution. Without the ability to sense pain, patients with diabetic neuropathy can inadvertently sustain repeated trauma to the foot. Neuropathy can also manifest with disordered proprioception, resulting in improper weight bearing and ulceration [6]. Motor and sensory neuropathy together can lead to disordered foot mechanics, manifesting variably as hammertoe, claw toe deformity, and Charcot foot. These structural changes cause abnormal pressure points and increased shear stress on the foot, both of which increase the risk for ulcer formation [7]. Diabetic neuropathy can also affect autonomic fibers. Autonomic neuropathy results in decreased sweating of the foot and dry skin, leading to cracks and fissures that can serve as entry points for bacteria [1]. In addition to neuropathy, many diabetics have peripheral artery disease, a macrovascular complication of diabetes and an independent risk factor for lower extremity amputation [8]. Peripheral artery disease leads to decreased tissue perfusion, which then impedes wound healing. In addition, impaired cell-mediated immunity and phagocyte function further reduce wound healing in diabetics [6]. A study by Lavery and colleagues found that the risk of ulceration in diabetics was proportional to the number of risk factors, with the risk increased 1.7-fold in diabetics with isolated peripheral neuropathy and 36-fold in diabetics with peripheral neuropathy, deformity, and a previous amputation [9].

How can ulcers be prevented? Optimizing glycemic control is the most important initial step. One study found that the risk of an ulcer increased in direct proportion to each 1% rise in the hemoglobin A1c [10]. In the primary care setting, diabetic patients should be screened for foot ulcers annually, with higher-risk patients screened more frequently. The annual foot exam should include visual inspection of the feet for calluses, skin integrity, and bony deformities. Patients with ulcerations or gross deformities should be referred to a podiatrist. The foot exam should also include screening for loss of protective sensation with the Semmes-Weinstein monofilament. Inability to perceive the 10-gram load imparted by the filament is associated with large-fiber neuropathy and a 7-fold increase in the risk of ulceration [11]. In addition, diabetic patients should be screened for peripheral vascular disease through palpation of the dorsalis pedis and posterior tibialis pulses and measurement of ankle-brachial index. Patients with peripheral vascular disease should be given additional counseling on smoking cessation, as smoking worsens peripheral artery disease, and referral to a vascular surgeon should be considered. All diabetic patients, especially those who have lost monofilament sensation, should be educated about foot precautions, including daily inspection of the toes and feet, wearing well-fitting socks and shoes, and keeping the skin clean and moist [12]. A 2014 Cochrane review of patient education for preventing diabetic foot ulceration found that foot care knowledge and self-reported patient behavior are positively influenced by education in the short term, yet robust evidence is lacking to show that education alone can achieve clinically relevant reductions in ulcer and amputation incidence [13]. While patient education alone may not be enough to prevent ulcers, studies have shown that multidisciplinary foot care involving physicians, educators, podiatrists, surgeons, home care nurses, nutritionists, and social services can lead to improved outcomes [14]. In Sweden, patients with diabetes managed with a multidisciplinary approach had a 50% reduction (7.9/1000 to 4.1/1000) in amputations over 11 years [15].
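Because the ankle-brachial index figures in the screening exam described above, a minimal sketch of its calculation follows; the 0.90 cutoff is the conventional threshold for peripheral artery disease, and the pressures used are hypothetical.

```python
def ankle_brachial_index(ankle_systolic_mmhg: float, brachial_systolic_mmhg: float) -> float:
    """ABI = higher ankle systolic pressure divided by higher brachial systolic pressure."""
    return ankle_systolic_mmhg / brachial_systolic_mmhg

# Hypothetical pressures for illustration.
abi = ankle_brachial_index(ankle_systolic_mmhg=95, brachial_systolic_mmhg=130)
print(f"ABI = {abi:.2f}")  # 0.73
print("Suggestive of peripheral artery disease" if abi <= 0.90 else "Within the normal range")
```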

As the number of people living with diabetes rises, with an estimated 300 million affected worldwide by 2025 [14], the complications associated with diabetes are also likely to increase. Despite this rise in numbers, it is important to note that major amputation rates among diabetics are falling, as shown in a 2006 study from Helsinki [3]. This decline is likely due to preventive measures such as improved glycemic control, the establishment of multidisciplinary diabetic foot care teams, and earlier revascularization procedures [3]. Ultimately, prevention is the best approach to diabetic foot ulcers. It is our goal as physicians to ensure that all our diabetic patients can live long lives with all 10 toes intact. That goal is ambitious but possible.

Dr. Shilpa Mukunda is a 1st year Internal Medicine resident at Boston University

Peer reviewed by Robert Lind, MD, Internal Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Singh N, Armstrong DG, Lipsky BA. Preventing foot ulcers in patients with diabetes. JAMA. 2005;293(2):217-228.  http://www.ncbi.nlm.nih.gov/pubmed/15644549

2. Yazdanpanah L, Nasiri M, Adarvishi S. Literature review on the management of diabetic foot ulcer. World J Diabetes. 2015;6(1):37-53.

3. Eskelinen E, Eskelinen A, Albäck A, Lepäntalo M. Major amputation incidence decreases both in non-diabetic and in diabetic patients in Helsinki. Scand J Surg. 2006;95(3):185-189.

4. Ramsey SD, Newton K, Blough D, et al. Incidence, outcomes, and cost of foot ulcers in patients with diabetes. Diabetes Care. 1999;22(3):382-387.  http://www.ncbi.nlm.nih.gov/pubmed/10097914

5. Consensus Development Conference on Diabetic Foot Wound Care: 7–8 April 1999, Boston, Massachusetts. American Diabetes Association. Diabetes Care. 1999;22(8):1354-1360.  http://www.ncbi.nlm.nih.gov/pubmed/10480782

6. Powers AC. Diabetes mellitus. In: Longo DL, Fauci AS, Kasper DL, Hauser SL, Jameson JL, Loscalzo J, eds. Harrison’s Principles of Internal Medicine. 18th ed. New York: McGraw-Hill; 2012. http://www.accessmedicine.com/content.aspx?aID=9141196. Accessed November 19, 2012.

7. Sumpio B. Foot ulcers. N Engl J Med. 2000;343(11):787-793.

8. Adler AI, Boyko EJ, Ahroni JH, Smith DG. Lower-extremity amputation in diabetes. The independent effects of peripheral vascular disease, sensory neuropathy, and foot ulcers. Diabetes Care. 1999;22(7):1029-1035.

9. Lavery LA, Armstrong DG, Vela SA, Quebedeaux TL, Fleischli JG. Practical criteria for screening patients at high risk for diabetic foot ulceration. Arch Intern Med. 1998;158(2):157-162.

10. Boyko EJ, Ahroni JH, Cohen V, Nelson KM, Heagerty PJ. Prediction of diabetic foot ulcer occurrence using commonly available clinical information: the Seattle Diabetic Foot Study. Diabetes Care. 2006;29(6):1202-1207.  http://www.ncbi.nlm.nih.gov/pubmed/16731996

11. McNeely MJ, Boyko EJ, Ahroni JH, et al. The independent contributions of diabetic neuropathy and vasculopathy in foot ulceration: how great are the risks? Diabetes Care. 1995;18(2):216-219.

12. Calhoun JH, Overgaard KA, Stevens CM, Dowling JP, Mader JT. Diabetic foot ulcers and infections: current concepts. Adv Skin Wound Care. 2002;15(1):31-42.

13. Dorresteijn JA, Kriegsman DM, Assendelft WJ, Valk GD. Patient education for preventing diabetic foot ulceration. Cochrane Database Syst Rev. 2014;12:CD001488. http://www.ncbi.nlm.nih.gov/pubmed/20464718

14. Bartus CL, Margolis DJ. Reducing the incidence of foot ulceration and amputation in diabetes. Curr Diab Rep. 2004;4(6):413-418. http://www.ncbi.nlm.nih.gov/pubmed/15539004

15. Larsson J, Apelqvist J, Agardh CD, Stenström A. Decreasing incidence of major amputation in diabetic patients: a consequence of a multidisciplinary foot care team approach? Diabet Med. 1995;12(9):770–776. http://www.ncbi.nlm.nih.gov/pubmed/8542736

 

Why Do We Do What We Do: Common Hospital Practices Revealed

February 27, 2015

By Dana Zalkin

Peer Reviewed

A code is called on the overhead speaker and the on-call teams rush to the scene to see what awaits them. EKG leads are being placed, medications are being ordered, and labs are being drawn. A medical student stands with a bag of ice, ready to grab the arterial blood gas (ABG) and run it down to the lab. “Why do we put the ABG on ice right away?” the student wonders. But in this moment, while a patient teeters on the border of life and death, it seems inappropriate to ask this simple question, one that can always wait till later.

Every day in the hospital great questions arise that may at times seem trivial compared to the enormous mission of taking care of patients or compared to the overwhelming amount of medical knowledge that students and residents constantly try to amass. However, to be the best physicians we can be, it is important to answer these questions and to know: why do we do what we do, and what is the evidence for it?

Q: Why do we put an ABG on ice immediately?

A: Arterial blood gas collection is a crucial step in determining a patient’s acid-base status and in evaluating ventilation and gas exchange. Because ABGs are commonly used in emergency departments and ICUs, it is imperative that the values be obtained quickly and accurately. Many studies have assessed the effects of storing blood in different syringe types, over various periods of time, and at differing temperatures. As early as the 1970s, investigators examined the differences that arise from storing blood in glass versus plastic syringes. A study from 1971 published in BMJ demonstrated much greater changes in oxygen tension over time for samples stored in each of 5 different models of plastic syringes as compared with a glass syringe [1]. Another study assessing blood gas results by syringe type demonstrated that samples stored in glass syringes provided adequate results over a much longer storage period than those stored in plastic syringes [2]. However, it has been suggested that, despite these results, glass syringes are not practical in all clinical situations and that the variation in measurements may not even influence clinical decision-making.

Some data suggest that if analysis of a sample is delayed even ten minutes at room temperature, PaO2 values will be significantly lowered due to continued consumption of oxygen by the leukocytes and platelets in the sample [3, 4]. Other sources demonstrate that PaO2 values are actually significantly higher when analysis is delayed. For example, one study showed that when samples were stored in plastic syringes for 30 minutes at 22°C, the PaO2 increased by 11.9mmHg compared with immediate analysis or storage in a glass syringe [5]. Theories to explain the increase in PaO2 include the presence of air bubbles in the samples and diffusion of gas through the pores of the syringe [5, 6].

Finally, a significant amount of data addresses the practice of placing samples on ice versus keeping them at room temperature prior to analysis. With regard to the theory that cellular metabolism reduces the PaO2 over time, some of the literature suggests that if the sample is put on ice immediately, the metabolic activity of these cells is reduced and they consume less oxygen [4, 7]. However, many studies have shown increases in PaO2 in samples over time, necessitating alternate explanations (two of which are described above) and additional analysis of the effect of cooling samples. Two different studies showed greater increases in PaO2 when plastic syringes were stored at 0-4°C than at 22°C (increases of 8.4-13.7mmHg versus 2.6-11.9mmHg, respectively) [5, 8]. One theory to explain the increase in PaO2 in cooled samples collected in plastic syringes postulates that the plastic contracts when cooled, opening larger pores through which oxygen can diffuse and thereby falsely elevating the PaO2 [9]. So, what is the bottom line? After reviewing all the data, we are left more confused than when we started. Nevertheless, the one point on which all these studies agree is the need for rapid analysis. So whether you put it on ice or keep it at room temperature, bringing that syringe to the lab as fast as possible is sure to yield the best results.

Q: Why do we have to fill the blue-top coagulation tube to the top?

A: You finish morning rounds and begin looking at your long list of things to do that afternoon. One patient needs additional labs done as soon as possible, so instead of calling phlebotomy, you do the blood draw yourself. After anxiously awaiting the results, you check the computer and see “insufficient sample” under the coagulation panel. Heartbreak ensues. The root of this unfortunate tale is the very specific ratio of sodium citrate in the collection tube to the blood that is collected. This 1:9 sodium citrate-to-blood ratio exists to prevent coagulation of the sample; the citrate ions in the tube chelate the calcium in the blood and form calcium citrate complexes, thereby blocking the clotting mechanism [16, 17]. If a sample is “insufficient,” there will be excess anticoagulant relative to blood, and the clotting times will be falsely prolonged. So, the next time you’re drawing blood in a coagulation tube, make sure not to skimp on the blood. Understanding the reasoning behind what we do: improving efficiency and preventing heartbreak.
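
For readers who like to see the arithmetic, here is a minimal sketch of why underfilling matters. It assumes a typical blue-top tube that draws 2.7 mL of blood onto 0.3 mL of 3.2% sodium citrate; exact volumes vary by manufacturer, and the 10% fill tolerance is only an illustrative convention, not a universal laboratory rule.

```python
# Illustrative volumes for a typical 2.7 mL blue-top tube; check your tube's specifications.
CITRATE_VOLUME_ML = 0.3        # anticoagulant pre-filled in the tube (assumed)
TARGET_BLOOD_VOLUME_ML = 2.7   # blood volume that gives the intended 1:9 citrate:blood ratio


def citrate_to_blood_ratio(blood_volume_ml: float) -> float:
    """Return the citrate:blood ratio for a given fill volume."""
    if blood_volume_ml <= 0:
        raise ValueError("blood volume must be positive")
    return CITRATE_VOLUME_ML / blood_volume_ml


def is_adequately_filled(blood_volume_ml: float, tolerance: float = 0.10) -> bool:
    """Flag a fill within ~10% of the target volume (an illustrative cutoff only)."""
    return blood_volume_ml >= TARGET_BLOOD_VOLUME_ML * (1 - tolerance)


# A full tube preserves the intended ratio (0.3/2.7, roughly 0.111, i.e., 1:9)...
print(round(citrate_to_blood_ratio(2.7), 3), is_adequately_filled(2.7))    # 0.111 True
# ...while a half-filled tube roughly doubles the relative citrate excess.
print(round(citrate_to_blood_ratio(1.35), 3), is_adequately_filled(1.35))  # 0.222 False
```

The point of the sketch is simply that every missing milliliter of blood raises the proportion of citrate, which is why an underfilled tube yields falsely prolonged results.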

Dana Zalkin is a 4th year medical student at NYU Langone Medical Center

Peer Reviewed by Neil Shapiro, Editor-In-Chief, Clinical Correlations

Image courtesy of Wikimedia Commons

References:

1) Scott PV, Horton JN, Mapleson WW. Leakage of oxygen from blood and water samples stored in plastic and glass syringes. Br Med J. 1971 Aug 26;3(5773):512-6.  http://www.ncbi.nlm.nih.gov/pubmed/5565518

2) Picandet V, Jeanneret S, Lavoie JP. Effects of syringe type and storage temperature on results of blood gas analysis in arterial blood of horses. J Vet Intern Med. 2007 May-Jun;21(3):476-81.

3) Trulock EP III. Arterial Blood Gases. In: Walker HK, Hall WD, Hurst JW, editors. Clinical Methods: The History, Physical, and Laboratory Examinations. 3rd edition. Boston: Butterworths; 1990. Chapter 49.

4) Verma AK, Paul R. The interpretation of Arterial Blood Gases. Australian Prescriber. 2010;33:124–9.  http://www.australianprescriber.com/magazine/33/4/124/9

5) Knowles TP, Mullin RA, Hunter JA, Douce FH. Effects of syringe material, sample storage time, and temperature on blood gases and oxygen saturation in arterialized human blood samples. Respir Care. 2006 Jul;51(7):732-6.

6) Lu JY, Kao JT, Chien TI, Lee TF, Tsai KS. Effects of air bubbles and tube transportation on blood oxygen tension in arterial blood gas analysis. J Formos Med Assoc. 2003 Apr;102(4):246-9.  http://www.ncbi.nlm.nih.gov/pubmed/12833188

7) Schmidt C, Müller-Plathe O. Stability of pO2, pCO2 and pH in heparinized whole blood samples: influence of storage temperature with regard to leukocyte count and syringe material. Eur J Clin Chem Clin Biochem. 1992 Nov;30(11):767-73.

8) Mahoney JJ, Harvey JA, Wong RJ, Van Kessel AL. Changes in oxygen measurements when whole blood is stored in iced plastic or glass syringes. Clin Chem. 1991 Jul;37(7):1244-8.

9) Beaulieu M, Lapointe Y, Vinet B. Stability of PO2, PCO2, and pH in fresh blood samples stored in a plastic syringe with low heparin in relation to various blood-gas and hematological parameters. Clin Biochem. 1999 Mar;32(2):101-7.    http://www.ncbi.nlm.nih.gov/pubmed/10211625

16) “Buffered Sodium Citrate 3.2% (0.109M)(100 Ml (10 Pouches)).” Aniara. N.p., n.d. Web. 12 July 2014. http://www.aniara.com/PROD/A12-8480-10.html.

17) “Lab Manual for UCSF Clinical Laboratories.” UCSF Departments of Pathology and Laboratory Medicine. N.p., n.d. Web. 12 July 2014. http://labmed.ucsf.edu/sfghlab/test/CoagulationProcedures.html.

Acupuncture and Immune Modulation

January 9, 2015

By Michael Lee, MD

Peer Reviewed

Clinical Case: Ms. A, an 84-year-old retired physician with a history of bronchiectasis of unclear etiology, is admitted with the chief complaint of chronic cough. Further inquiry into her medical history reveals that she contracted malaria as a child while living in Korea. She had been prescribed chloroquine by multiple doctors, but her symptoms of fevers and night sweats did not improve. It was a trial of acupuncture therapy, she says, that finally cured her of malaria.

Acupuncture refers to the act of inserting needles into specific locations on the body surface, known as acupoints or meridian points, in order to alleviate pain or treat various medical conditions [1, 2]. Acupuncture allegedly originates from ancient shamanic healing performances of the Neolithic Age (8000-5000 BC), and it was subsequently developed into formalized medical therapy in China, with the earliest description of needles’ therapeutic uses noted in 90 BC. Since the early 19th century, acupuncture has gained interest in the Western Hemisphere. Its popularity in the United States markedly grew in 1971 when an American journalist named James Reston wrote a story about receiving acupuncture for the New York Times [1]. Today, this ancient act of healing represents perhaps the most commonly practiced alternative medical therapy in the US, with approximately 2.1 million adult Americans receiving acupuncture each year [3]. While no large randomized studies have proven its efficacy, acupuncture continues to be utilized in a wide range of disorders, including cases as complex as post-stroke hemiplegia and mood disorders like anxiety and depression [3-5]. Acupuncture has been implemented even in developmental conditions, such as autism spectrum disorders [6].

Despite the popularization of acupuncture, its application in infectious conditions like malaria is not a widely accepted practice in Western medicine. A considerable volume of evidence, however, suggests that acupuncture could theoretically improve infectious, autoimmune, atopic, and even malignant conditions by modulating the immune system. Although there is a dearth of strong clinical trials supporting its efficacy, acupuncture and its permutations, such as electroacupuncture (wherein electric currents are applied through the acupuncture needle), have been used to treat bacterial infections and immunologic conditions, such as Hashimoto’s thyroiditis and ulcerative colitis [3]. These observations raise the following question: how might acupuncture affect the immune system and how convincing is the scientific evidence? Discussed below are the 2 most plausible theories explaining the potential immune-modulatory roles of acupuncture.


1) Reinforcement of Innate Immunity through Natural Killer Cell Activation

One theorized mechanism of acupuncture-induced immune system enhancement involves activation of natural killer (NK) cells. Some of the earliest evidence suggesting this notion came from a study by Sato and colleagues, who showed that electroacupuncture may result in an increased level of NK cell activity in rats [7]. The investigators performed electroacupuncture on the rats’ anterior tibias, specifically at the location equivalent to the “Zusanli” or “ST36” acupoint of the human tibia. Stimulation of this acupoint is known to produce analgesic effects in animals and possibly humans, and it is one of the most commonly used acupoints in animal and clinical studies examining immune-modulatory roles of acupuncture [8]. The ST36 acupoint refers to the posterolateral region of the leg below the popliteal fossa in humans and the area 5 millimeters distal and lateral to the anterior tubercle in rats [9,10].

In this study, a total of 17 Wistar rats received 2 hours of daily electroacupuncture stimulation at the ST36 acupoint for 3 days, followed by splenectomy for NK cell analyses [7]. The isolated splenic NK cells (or, effector cells) were incubated in the presence of standardized target cells containing chromium-51. Subsequently, chromium-51 release assays were performed where the amount of chromium liberated from lysed target cells served as a surrogate marker for NK cell cytotoxicity against tumor cells. NK cell cytotoxic activity was significantly higher in the tibia-stimulated rats when compared to control rats that either were stimulated in the abdominal muscle (i.e., needle application to non-specific areas) or received no needle stimulation (percent lysis in tibia-stimulated rats vs. non-stimulated rats: 50.3 ± 1.9% vs. 42.3 ± 2.6% at effector cell to target cell ratio of 100:1; P < 0.05). This outcome suggests that acupuncture could enhance innate immunity through NK cell activation.
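
The paper reports cytotoxicity as percent lysis. The source does not spell out the exact calculation, but chromium-51 release assays are conventionally expressed as specific lysis relative to spontaneous and maximum release; the sketch below shows that conventional formula (not necessarily the precise method used by Sato and colleagues), with hypothetical counts chosen only for illustration.

```python
def percent_specific_lysis(experimental_cpm: float,
                           spontaneous_cpm: float,
                           maximum_cpm: float) -> float:
    """Conventional chromium-51 release calculation:
    specific lysis = (experimental - spontaneous) / (maximum - spontaneous) * 100.

    experimental_cpm: counts released when targets are incubated with effector (NK) cells
    spontaneous_cpm:  counts released by targets incubated in medium alone
    maximum_cpm:      counts released when targets are fully lysed (e.g., with detergent)
    """
    denominator = maximum_cpm - spontaneous_cpm
    if denominator <= 0:
        raise ValueError("maximum release must exceed spontaneous release")
    return 100.0 * (experimental_cpm - spontaneous_cpm) / denominator


# Hypothetical counts per minute, for illustration of the arithmetic only.
print(round(percent_specific_lysis(experimental_cpm=2600,
                                   spontaneous_cpm=500,
                                   maximum_cpm=4700), 1))  # 50.0
```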

The NK cell activation theory is also supported by a clinical trial in humans [11]. In a small crossover study, Yamaguchi and colleagues examined the effect of acupuncture on NK cell markers in 17 healthy human subjects with a mean age of 35.3 years. One hour after baseline blood sample collection, these subjects received acupuncture therapies at various acupoints, including, but not limited to, the ST36 acupoint. Post-procedural blood samples were obtained 1, 2, and 8 days after the acupuncture session. Flow cytometry showed no significant change in the mean absolute lymphocyte count after the acupuncture therapies, but there was a significant increase in the subset of lymphocytes expressing CD16 (0.8 ± 0.2% before acupuncture vs. 2.1 ± 0.4% 8 days after; P < 0.05) and CD56 (5.8 ± 0.7% before acupuncture vs. 11.2 ± 1.8% 8 days after; P < 0.01), which are markers representative of NK cells. Considering the role of NK cells in fighting viral infections, this study’s findings provide some rationale for the use of acupuncture in viral conditions like upper respiratory tract infections.

Subsequent murine studies following the aforementioned work by Sato provided further clues to the potential mechanism of acupuncture-induced NK cell activation. These studies showed that splenic extracts from tibia-stimulated rats had significantly (P < 0.01) higher levels of IL-2, IFN-gamma, and beta-endorphin compared to abdomen-stimulated and non-stimulated rats [12, 13]. Moreover, in vivo administration of anti-IFN-gamma antibodies or naloxone seemed to abolish the NK cell-enhancing effect of electroacupuncture [12]. The authors speculated from these observations that electroacupuncture could activate NK cells via release of endogenous cytokines and opioids like IFN-gamma and beta-endorphin. Although it has been further hypothesized that the hypothalamus, a site of beta-endorphin secretion, is involved in acupuncture-induced NK cell activation, no definitive data on this subject have been published to date [14].

2) Modulation of Th1/Th2 Balance

Traditionally, acupuncturists’ understanding of human health has been based on the notion of Yin and Yang, the equilibrium between two opposite yet interdependent forces [1]. Per this theory, offsetting this balance toward one side would result in an illness, and acupuncture could restore health by reinstating the equilibrium. This historical view of two distinct yet interrelated processes that contribute to one’s overall health can be compared to various concepts in modern medicine, such as the balance between the sympathetic and parasympathetic nervous systems.

Another example of equilibria influencing human health is the interplay between the Th1 and Th2 subtypes of CD4 T cells. Following thymic positive and negative selection, naïve CD4 T cells commit to either the Th1 or the Th2 lineage, depending on the molecular milieu at the time of differentiation. Th1 CD4 cells are implicated in cell-mediated immunity, granuloma formation, and delayed-type hypersensitivity, which are characterized by a spectrum of cytokines, including IL-2, IFN-gamma, and TNF-beta. On the other hand, Th2 cells mediate humoral immunity and allergic reactions via IL-4, IL-5, IL-10, and IL-13. The Th1/Th2 modulation theory asserts that the clinical benefit of acupuncture may result from reinstituting the disrupted balance between Th1 and Th2 activities [14].

In 2004, Park and colleagues published data supporting this theory using a mouse model [9]. In this study, 10 mice were intraperitoneally immunized with a type of protein (DNP-KLH) known to induce Th2-skewed conditions in mice. Half of the immunized (i.e., Th2-skewed) mice were stimulated at the ST36 acupoint with electroacupuncture for twenty minutes, and this intervention was repeated daily for 21 days. At the 7, 14, and 21-day marks, serum samples and splenocytes were collected and analyzed for IgE and cytokine levels, respectively. The IgE and cytokine measurements served as surrogate markers for Th2 activity, and they were compared to values obtained from the 5 immunized mice that did not undergo acupuncture. A separate group of 5 control mice received neither the immunization nor the acupuncture therapy.

Among the Th2-skewed mice, the average serum total IgE level was initially higher in the electroacupuncture group at the 7-day mark (P < 0.01), but after 14 (P < 0.05) and 21 (P < 0.001) days, electroacupuncture was associated with lower total IgE levels [9]. Similarly, electroacupuncture resulted in significantly lower antigen-specific IgE levels at the 14 and 21-day marks. Splenic production of IL-4, a cytokine implicated in the differentiation and proliferation of Th2 lymphocytes, was also reduced in the electroacupuncture group compared to the no-acupuncture group. A subsequent study further suggested that the IgE and IL-4-lowering effects of electroacupuncture in Th2-skewed mice might be acupoint-specific; IgE and IL-4 production was not suppressed when needles were inserted at non-specific locations [15]. Overall, these findings provide preliminary evidence for the possible therapeutic role of acupuncture in Th2-dominant conditions like allergic rhinitis.

Interestingly, other murine studies suggest that acupuncture may also confer Th1-inhibitory effects. Using a murine model of ulcerative colitis, Tian and colleagues demonstrated a significant association between electroacupuncture and reductions in serum TNF-alpha and colonic TNF-alpha mRNA levels [16]. While ulcerative colitis is not considered a classic Th1-induced disease, TNF-alpha is closely related to Th1 activity, and the TNF-alpha lowering effect could explain the clinical benefit of acupuncture in true Th1-dominant entities, such as rheumatoid arthritis and delayed type hypersensitivities [14]. Another study of an inflammatory arthritic mouse model demonstrated a potential role of electroacupuncture in preventing joint destruction and suppressing serum IFN-gamma and TNF-alpha levels [17]. Again, considering that IFN-gamma and TNF-alpha are cytokines involved in the differentiation and proliferation of activated CD4 T cells into the Th1 subclass, a reduction of these cytokines could be a mechanism by which acupuncture might treat Th1-skewed disease entities [14].

These seemingly bi-directional effects of acupuncture on T helper cells are in line with clinical studies that suggest its efficacy in both Th1-dominant diseases like rheumatoid arthritis and Th2-dominant conditions like allergic rhinitis [18-20]. There remain, however, many unanswered questions regarding the Th1/Th2 modulation theory of acupuncture. Most importantly, mechanisms by which acupuncture affects helper T cells and their associated cytokines remain unclear [14]. The absence of a plausible mechanism is further complicated by the counterintuitive observation that 2 different outcomes (i.e., Th1 and Th2 suppression) may be achieved from stimulation of the same ST36 acupoint depending on the clinical circumstance. Moreover, potential differences between the murine immune system and that of humans must be considered when interpreting the animal data.

Numerous trials examining acupuncture’s role in various human diseases have been published [21]. An accurate assessment of its clinical efficacy, however, has proven to be extremely challenging as most of these clinical trials are underpowered [1]. More importantly, data from the small studies cannot be easily combined to construct meaningful meta-analyses as the individual studies greatly vary in their design. For instance, the types of investigated interventions range from traditional needle stimulation to electroacupuncture, and some trials involve even more advanced techniques like laser acupuncture, which delivers laser beams to acupoints without penetrating the skin barrier [1, 22]. Each of these interventions is administered for various durations and frequencies, and trials examining the same medical condition often involve different acupoints [1]. Overall, the small sample sizes and inter-study variations make it difficult to draw useful conclusions despite the abundance of clinical trials on acupuncture. As an example, 7 meta-analyses examining the role of acupuncture in treating headache syndromes were published prior to 2005. Of these, 6 had inconclusive outcomes, largely owing to substantial heterogeneity among the individual studies.

Progress has been made, nonetheless, in the field of clinical acupuncture trials. Sham acupuncture devices, which generate the sensation of needle insertion without physically penetrating the skin, became available in the late 1990s [23]. Although a number of studies using sham needles have called the true efficacy of acupuncture into question, their introduction has allowed for more objective, patient-blinded trials [1]. Moreover, adequately powered acupuncture trials are underway, which should reduce the need to draw conclusions from dissimilar and underpowered studies.

Whether acupuncture was responsible for Ms. A’s cure of malaria remains a mystery. Further clarification of potential mechanisms behind acupuncture-induced immune modulation, along with completion of robustly designed large clinical trials, may one day provide a full explanation of Ms. A’s acupuncture success story.

Michael Lee, MD is a 2nd year resident at NYU Langone Medical Center

Peer Reviewed by Jason Siefferman, MD, Anesthesiology, Division of Pain Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Ernst E. Acupuncture–a critical analysis. J Intern Med. 2006 Feb;259(2):125-37.  http://www.ncbi.nlm.nih.gov/pubmed/16420542

2. Kavoussi B, Ross BE. The neuroimmune basis of anti-inflammatory acupuncture. Integr Cancer Ther. 2007 Sep;6(3):251-7.  http://www.ncbi.nlm.nih.gov/pubmed/17761638

3. Cabioglu MT, Cetin BE. Acupuncture and immunomodulation. Am J Chin Med. 2008;36(1):25-36.  http://www.ncbi.nlm.nih.gov/pubmed/18306447

4. Lou H, Shen Y, Zhou D, Jia K. A comparative study of the treatment of depression by electro-acupuncture. Acupunct Sci Int J. 1990;1:20-6.

5. Wong AM, Su TY, Tang FT, Cheng PT, Liaw MY. Clinical trial of electrical acupuncture on hemiplegic stroke patients. Am J of Phys Med Rehabil. 1999 Mar-Apr;78(2):117-22.  http://www.ncbi.nlm.nih.gov/pubmed/10088585

6. Lee MS, Choi TY, Shin BC, Ernst E. Acupuncture for children with autism spectrum disorders: a systematic review of randomized clinical trials. J Autism Dev Disord. 2012 Aug;42(8):1671-83. http://www.ncbi.nlm.nih.gov/pubmed/22124580

7. Sato T, Yu Y, Guo SY, Kasahara T, Hisamitsu T. Acupuncture stimulation enhances splenic natural killer cell cytotoxicity in rats. Jpn J Physiol. 1996 Apr;46(2):131-6.  http://www.ncbi.nlm.nih.gov/pubmed/8832330

8. Zhao ZQ. Neural mechanism underlying acupuncture analgesia. Prog Neurobiol. 2008 Aug;85(4):355-75.

9. Park MB, Ko E, Ahn C, et al. Suppression of IgE production and modulation of Th1/Th2 cell response by electroacupuncture in DNP-KLH immunized mice. J Neuroimmunol. 2004 Jun;151(1-2):40-4.

10. Yu JB, Dong SA, Gong LR, et al. Effect of electroacupuncture at Zusanli (ST36) and Sanyinjiao (SP6) acupoints on adrenocortical function in etomidate anesthesia patients. Med Sci Monit. 2014 Mar 12;20:406-12.

11. Yamaguchi N, Takahashi T, Sakuma M, et al. Acupuncture regulates leukocyte subpopulations in human peripheral blood. Evid Based Complement Alternat Med. 2007 Dec;4(4):447-53.

12. Yu Y, Kasahara T, Sato T, et al. Role of endogenous interferon-gamma on the enhancement of splenic NK cell activity by electroacupuncture stimulation in mice. J Neuroimmunol. 1998 Oct 1;90(2):176-86.

13. Yu Y, Kasahara T, Sato T, et al. Enhancement of splenic interferon-gamma, interleukin-2, and NK cytotoxicity by S36 acupoint acupuncture in F344 rats. Jpn J Physiol. 1997 Apr;47(2):173-8.

14. Kim SK, Bae H. Acupuncture and immune modulation. Auton Neurosci. 2010 Oct 28;157(1-2):38-41.

15. Kim SK, Lee Y, Cho H, et al. A Parametric Study on the Immunomodulatory Effects of Electroacupuncture in DNP-KLH Immunized Mice. Evid Based Complement Alternat Med. 2011;2011:389063.  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3135419/

16. Tian L, Huang YX, Tian M, Gao W, Chang Q. Downregulation of electroacupuncture at ST36 on TNF-alpha in rats with ulcerative colitis. World J Gastroenterol. 2003 May;9(5):1028-33.

17. Yim YK, Lee H, Hong KE, et al. Electro-acupuncture at acupoint ST36 reduces inflammation and regulates immune activity in Collagen-Induced Arthritic Mice. Evid Based Complement Alternat Med. 2007 Mar;4(1):51-7. Epub 2006 Aug 18.

18. Lee H, Lee JY, Kim YJ, et al. Acupuncture for symptom management of rheumatoid arthritis: a pilot study. Clin Rheumatol. 2008 May;27(5):641-5.

19. Wang R, Jiang C, Lei Z, Yin K. The role of different therapeutic courses in treating 47 cases of rheumatoid arthritis with acupuncture. J Tradit Chin Med. 2007 Jun;27(2):103-5.

20. Ng DK, Chow PY, Ming SP, et al. A double-blind, randomized, placebo-controlled trial of acupuncture for the treatment of childhood persistent allergic rhinitis. Pediatrics. 2004 Nov;114(5):1242-7.

21. World Health Organization. Acupuncture: review and analysis of reports on controlled clinical trials. 2002. http://www.who.int/iris/handle/10665/42414

22. Zhang J, Li X, Xu J, Ernst E. Laser acupuncture for the treatment of asthma in children: a systematic review of randomized controlled trials. J Asthma. 2012 Sep;49(7):773-7.

23. Moffet HH. Sham acupuncture may be as efficacious as true acupuncture: a systematic review of clinical trials. J Altern Complement Med. 2009 Mar;15(3):213-6.


Falls in Older Adults—Risk Factors and Strategies for Prevention

October 15, 2014

By Joseph Plaksin

Peer Reviewed

Falls are a major health problem for older adults. Various reviews and meta-analyses have estimated that 30% of people over age 65 [4,6,8,10,11,13,14,19,21,22,23] and 50% of people over age 85 [14] who live in the community will fall at least once. The prevalence of falls is even higher in long-term care facilities, occurring in more than 50% of people over age 65 [3,10,23]. Fall-related injuries occur in 10-40% of falls and can range from minor bruises or lacerations to wrist or hip fractures [3,6,10,11,14,22,23]. Falls are the main risk factor for fractures and are even more important than decreased bone mineral density or osteoporosis, as indicated by the fact that 80% of low trauma fractures occur in people who do not have osteoporosis [9] and 95% of hip fractures result from falls [11]. Overall, significant injuries occur in 4-15% of falls and 23-40% of injury-related deaths in older adults are due to falls [6,10,11,14,23].

Fall Risk Factors

Risk factors for falls can be broken down into two categories: intrinsic and extrinsic [8,10,11,14,15,19,23]. Several well-studied intrinsic risk factors are age, female gender, and previous history of falls [4,10,14,22,23]. Many individual medical conditions, as well as the presence of multiple comorbid illnesses, increase the risk of falls [14]. Three examples that will be discussed are orthostatic hypotension [12,13,21,23], musculoskeletal disease [8,11,15], and visual impairment [5,18,19]. Other medical conditions include low systolic blood pressure, stroke, cognitive impairments, Parkinson’s disease, gait disorders, balance disorders, and other sensory impairments [3,6,8,10,14,15,19,23]. Similarly, many medications, either alone or in combination, increase the risk of falls. Specific medications include benzodiazepines, sedative-hypnotics, antidepressants, anti-hypertensives, anti-arrhythmics, diuretics, and anti-seizure medications [3,6,10,14,22,23].

Orthostatic hypotension (OH) is defined as a drop in systolic blood pressure ≥ 20 mmHg or a drop in diastolic blood pressure ≥ 10 mmHg within three minutes of standing from a supine position. This drop can be accompanied by symptoms including tachycardia, visual changes, dizziness, or syncope [12,13,21]. Similar to falls, the prevalence of OH increases with age and is present in an estimated 5-30% of people over age 65 who live in the community and 50% of people who live in long-term care facilities [12,13,21]. Many medical conditions increase the risk of developing OH. These include hypertension, atherosclerosis, varicose veins, congestive heart failure, chronic kidney disease, diabetes mellitus, Parkinson’s disease, autonomic nervous system disorders, and autoimmune neuropathies [12,13]. Medications such as diuretics, α-blockers, β-blockers, calcium channel blockers, tricyclic antidepressants, anti-histamines, nitrates, acetylcholinesterase inhibitors, and dopamine agonists also increase the risk of OH [12,13]. OH is thought to cause falls through both direct and indirect mechanisms. The direct mechanism is through syncope, which is thought to be related to as many as 10% of falls [21,23]. Syncope can be difficult to diagnose as the cause of a fall because it can involve retrograde amnesia and many falls occur in the absence of witnesses [23]. Indirect mechanisms include poor balance control during presyncope and cognitive impairment due to cerebral hypoperfusion [21].
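
As a simple illustration of the numeric criteria above, the sketch below flags OH from paired supine and standing blood pressure readings. The function name and structure are my own and are not drawn from any cited guideline; it assumes the standing measurement was obtained within the three-minute window.

```python
def has_orthostatic_hypotension(supine_sbp: float, supine_dbp: float,
                                standing_sbp: float, standing_dbp: float) -> bool:
    """Apply the standard numeric criteria: a fall in systolic BP of at least
    20 mmHg or in diastolic BP of at least 10 mmHg on standing from supine.
    (The within-3-minutes timing requirement is assumed to be met by the caller.)"""
    sbp_drop = supine_sbp - standing_sbp
    dbp_drop = supine_dbp - standing_dbp
    return sbp_drop >= 20 or dbp_drop >= 10


# Example: 142/84 mmHg supine falling to 118/78 mmHg on standing meets the systolic criterion.
print(has_orthostatic_hypotension(142, 84, 118, 78))  # True
```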

Musculoskeletal diseases are a heterogeneous group of conditions that are extremely common in older adults and constitute a major intrinsic risk factor for falls and fall-related injuries. A study of 16,080 Korean adults found a significant association between pre-existing osteoarthritis, osteoporosis, or lower back pain and the one-year incidence of fall-related injuries. The presence of multiple musculoskeletal diseases further increased this risk [11]. In Japan, the ROAD study examined the association of baseline physical performance measures and musculoskeletal disease with the three-year incidence of falls in 1,348 adults. The overall incidence of falls was higher in women than in men. In men, slower chair stand time was the only independent risk factor for falls. In women, longer 6-meter walking time was a risk factor, as were knee pain, vertebral fracture, and cognitive impairment. Interestingly, radiographic severity of knee osteoarthritis, lumbar spondylosis, and lower back pain were not significant risk factors for falls in either gender [15].

Visual impairment is the third most common chronic medical condition in older adults [19], affecting an estimated 10% of people over age 65 and 20% of people over age 75 [5]. The prevalence of visual impairment increases with age and has been shown to be an independent risk factor for falls and fractures. Impairment can occur in any aspect of vision, including visual acuity, visual fields, contrast sensitivity, depth perception, or color perception [5,19]. It is still unclear which of these factors plays the largest role in fall risk [5]. The most common ophthalmologic diseases causing visual impairment in older adults are presbyopia, cataracts, glaucoma, and age-related macular degeneration (ARMD) [5,19]. Due to the different pathophysiology of these diseases, each one produces a different pattern of visual impairment. Presbyopia, an impairment in the ability to see objects at close range, is corrected by the use of bifocal lenses. These lenses impair depth perception and edge contrast, which increases the risk of falls when walking outside or using stairs [19]. Cataracts, a clouding of the lens, cause generalized blurry vision and glaring of bright lights. This increases the risk of running into objects and limits the ability to drive at night. Glaucoma, an increase in intraocular pressure that can damage the optic nerve, causes a loss of peripheral vision. ARMD, deterioration of the retina, causes loss of central vision, distortion of straight lines, impairment of color vision, and difficulty recognizing faces. These deficits limit activities of daily living and cause problems with balance that contribute to fall risk [5,19]. A study of 3,203 Latin-American adults found a significant association between central visual impairment, peripheral visual impairment, or use of bifocal lenses and the one-year incidence of falls and fall-related injuries [18]. Aside from ophthalmologic diseases, many medications have side effects that can affect vision. These include anticholinergics, α-blockers, anti-arrhythmics, cardiac glycosides, benzodiazepines, selective serotonin reuptake inhibitors, anti-epileptics, phosphodiesterase type 5 inhibitors, and anti-malarials [19].

Extrinsic risk factors consist of anything in the environment that causes tripping, slipping, or loss of balance. At home, older people can trip over rugs, electrical cords, pets, or other items on the floor. They can slip on stairs, especially if there are no handrails, or in the bathtub. Low toilets or chairs, chairs without armrests, and poor lighting also contribute to the risk for falls at home [10,14,19,23]. Outside, uneven sidewalks, inappropriate footwear, and snow or ice can cause falls [10,14]. In 50-80% of falls, at least one environmental risk factor is reported [10]. Intrinsic risk factors, such as visual or other sensory impairments that affect how an individual interacts with the environment, can increase the risk of falls due to extrinsic risk factors.

Assessing Fall Risk

Due to the prevalence of falls in older adults, the American Geriatrics Society (AGS) recommends screening all older adults for fall risk by asking if they have fallen in the past year and then, if they report a fall, conducting a multifactorial fall risk assessment [17]. One aspect of that assessment that can be performed quickly in the clinical setting is the Timed Up and Go Test (TUGT). This test measures the time it takes for a person to rise from a chair, walk three meters, walk back to the chair, and sit down [1,2,16,20]. The test is usually completed at a comfortable walking speed, but variations exist that ask the person to walk as fast as possible, complete a cognitive task while walking, or walk around various obstacles in the room [20]. The test involves standing up, sitting down, walking, and turning, making it a useful way to evaluate functional mobility [16,20]. However, systematic reviews have found mixed results when examining the relationship between TUGT performance and fall risk. One systematic review found that older adults who experienced a fall were slower when completing the test than those who had not. It also found a significant association between time to complete the TUGT and a history of falls in all retrospective studies but only one prospective study [2]. Furthermore, the cut-off value of the test that separated people who fell from those who did not was very wide, ranging from 10 to 32.6 seconds [1,2].

A more recent systematic review found that, in most studies, there was a significant association between TUGT time and fall risk in univariate analyses, but found that this association disappeared in 75% of multivariate regression models that accounted for demographics and other fall risk factors. This review also more closely examined the differences in TUGT results between studies performed in the community and studies performed in long-term care facilities. In older adults living in the community, the pooled mean difference in TUGT time between those that fell and those who did not was 0.63 seconds, which was statistically significant but not clinically significant, and the cut-off value of the test that separated people who fell from those that did not ranged from 8.1-16 seconds. In older adults living in a long-term care facility, the pooled mean difference in TUGT time was 3.59 seconds and the cut-off value of the test ranged from 13-32.6 seconds, possibly indicating that the test is better at predicting fall risk in lower-functioning populations [20]. Interestingly, a study of 183 Swedish adults living in long-term care facilities found that both a history of falls and the staff’s global rating of fall risk were better predictors of future falls than the TUGT [16]. Given these mixed results, it is still recommended to use the TUGT to screen for gait and balance disorders [2] but to look more broadly at the clinical picture when making decisions about an individual patient’s fall risk [20].
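
Purely as an illustration of how such a time-based cutoff might be applied in screening, and not as a validated tool, here is a minimal sketch. The 13.5-second default is an arbitrary value chosen from within the community-dwelling ranges cited above, and any real decision should rest on the broader clinical picture described in the preceding paragraph.

```python
def tugt_flags_elevated_risk(tugt_seconds: float, cutoff_seconds: float = 13.5) -> bool:
    """Screening heuristic only: flag elevated fall risk if the Timed Up and Go
    time meets or exceeds a chosen cutoff. The 13.5-second default is an
    illustrative value within the community-dwelling ranges reported above
    (8.1-16 seconds); long-term care populations were associated with higher
    cutoffs (13-32.6 seconds), and no single threshold replaces a
    multifactorial fall risk assessment."""
    return tugt_seconds >= cutoff_seconds


print(tugt_flags_elevated_risk(10.2))  # False
print(tugt_flags_elevated_risk(18.0))  # True
```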

Fall Prevention

Given the prevalence of falls and fall-related injuries, many different fall prevention strategies have been developed. Guides for conducting home safety assessments to identify and modify extrinsic risk factors are readily available [7], but the evidence that these interventions are effective is mixed. One review found that these interventions alone do not significantly reduce falls [4]. Other reviews have found that high-risk populations, consisting of individuals with many intrinsic risk factors and a previous history of falling, benefit from environmental modifications but low-risk populations do not [6,10,14,22]. Despite this somewhat mixed evidence, it is a Grade A recommendation from the AGS to assess the home environment and mitigate hazardous factors in the home in order to reduce fall risk [17].

Some intrinsic risk factors, such as age, gender, and history of previous falls, cannot be modified [10]. Therefore, interventions have targeted other intrinsic risk factors, including medication reduction, vitamin D supplementation, correcting visual impairment, and exercise [3,4,5,6,10,14,22]. Reducing the number of medications, especially psychotropic medications, can significantly decrease the risk of falls [6,10,14,22]. However, the benefit is limited by compliance: 47% of individuals who stopped taking a psychotropic medication restarted it within one month [14,22]. The effect of vitamin D supplementation on risk of falls is mixed. One review found no risk reduction [14], others only found a significant risk reduction in individuals who were vitamin D deficient at baseline [6,10], and another only found a significant risk reduction in individuals who live in long-term care facilities [3]. Based on this evidence, it is a Grade A recommendation from the AGS to supplement vitamin D in all individuals who are proven to be vitamin D deficient [17]. Multiple reviews have found that visual assessment alone is not effective in reducing fall risk [5,14,22]. However, the effect of interventions to correct visual impairment is mixed. For example, several studies have shown a reduced risk of falls after a patient’s first cataract surgery, but not after subsequent cataract surgery on the other eye [6,10,22].

In all reviews of community-based interventions, the best single intervention to prevent falls was exercise [4,5,6,10,14,19]. Exercise was the only intervention that reduced both the number of people who fall and the rate of falls in those who do [10]. The most important components of exercise were improving balance and muscle strength. These components could either be trained in separate exercise modalities or trained together in a single exercise, such as Tai Chi [10,14,22]. Other important components of exercise included flexibility and endurance. As a result, it is a Grade A recommendation from the AGS to offer an exercise program that targets strength, gait, and balance as an intervention to reduce falls [17]. One alternative to traditional exercises is functional-based training, which was shown to significantly reduce the risk of falls in high-risk populations [10]. All types of exercise that effectively reduced the risk of falls also reduced the risk of fractures [6]. However, not all types of exercise were effective. Neither walking nor muscle-strengthening exercise reduced the risk of falls when these interventions were used in isolation [6,10,14]. No type of exercise was as effective in long-term care facilities as it was in the community [3,10].

Given the vast number of intrinsic and extrinsic risk factors, as well as their complex interactions that lead to falls in each individual, it is not surprising that all reviews, as well as the AGS guidelines, recommend a multifactorial approach to fall prevention [3,4,6,10,14,17,19,22,23]. A multifactorial fall prevention program should include physical exercise, especially balance and strength training, and modifications of as many risk factors as possible. Additionally, patients and their families must be involved in the decision-making process about their care. Patient education about the risk of falls and why each part of the fall prevention program is being implemented should improve compliance, which in turn should lead to better outcomes.

Joseph Plaksin is a 4th year medical student at NYU School of Medicine

Peer reviewed by Sathya Maheswaran, MD, Medicine, NYU Langone Medical Center

References

1. Alexandre TS, Meira DM, Rico NC, Mizuta SK. Accuracy of Timed Up and Go Test for screening risk of falls among community-dwelling elderly. Rev Bras Fisioter. 2012;16(5);381-388. http://www.scielo.br/scielo.php?script=sci_arttext&pid=S1413-35552012005000041&lng=en&nrm=iso&tlng=en

2. Beauchet O, Fantino V, Allali G, Muir SW, Monter-Odasso M, Annweiler C. Timed Up and Go Test and risk of falls in older adults: A systematic review. The Journal of Nutrition, Health, and Ageing. 2011;15(10);933-938.  http://link.springer.com/article/10.1007/s12603-011-0062-0

3. Cameron ID, Gillespie LD, Robertson MC, Murray GR, Hill KD, Cumming RG, Kerse N. Interventions for preventing falls in older people in care facilities and hospitals (Review). The Cochrane Library. 2013;3:1-179. http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD005465.pub3/abstract

4. Chang JT, Morton SC, Rubenstein LZ, Mojica WA, Maglione M, Suttorp MJ, et al. Interventions for the prevention of falls in older adults: systematic review and meta-analysis of randomised clinical trials. British Medical Journal. 2004;328:680-687. http://www.bmj.com/content/328/7441/680?view=long&pmid=15031239

5. Dhital A, Pey T, Stanford MR. Visual loss and falls: A review. Eye. 2010;24;1437-1446. http://www.nature.com/eye/journal/v24/n9/full/eye201060a.html

6. Gillespie LD, Robertson MC, Gillespie WJ, Sherrington C, Gates S, Clemson LM, Lamb SE. Interventions for preventing falls in older people living in the community (review). Cochrane Database Systematic Reviews. 2012;11:1-416. http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD007146.pub3/abstract

7. Home and Recreational Safety resources page. Centers for Disease Control and Prevention web site. http://www.cdc.gov/HomeandRecreationalSafety/Falls/pubs.html. February 24, 2012. Accessed December 18, 2013. http://www.cdc.gov/HomeandRecreationalSafety/pubs/English/booklet_Eng_desktop-a.pdf http://www.cdc.gov/HomeandRecreationalSafety/pubs/English/brochure_Eng_desktop-a.pdf

8. Hoops ML, Rosenblatt NJ, Hurt CP, Crenshaw J, Grabiner MD. Does lower extremity osteoarthritis exacerbate risk factors for falls in older adults? Women’s Health. 2012:8(6);685-698. http://www.futuremedicine.com/doi/abs/10.2217/whe.12.53?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dwww.ncbi.nlm.nih.gov

9. Jarvinen TLN, Sievanen H, Khan KM. Shifting the focus in fracture prevention from osteoporosis to falls. British Medical Journal. 2008;336:124-126. http://www.bmj.com/content/336/7636/124?view=long&pmid=18202065

10.  Karlsson MK, Magnusson H, von Schewelov T, Rosengren BE. Prevention of falls in the elderly- a review. Osteoporosis International. 2013;24:747-762. http://link.springer.com/article/10.1007%2Fs00198-012-2256-7

11.  Lee WK, Kong KA, Park H. Effect of preexisting musculoskeletal diseases on the 1-year incidence of fall-related injuries. Journal of Preventative Medicine & Public Health. 2012:45;283-290. http://jpmph.org/DOIx.php?id=10.3961/jpmph.2012.45.5.283

12.  Low PA. Prevalence of orthostatic hypotension. Clin Auton Res. 2008;18(1), 8-13. http://link.springer.com/article/10.1007%2Fs10286-007-1001-3

13.  Mager DR. Orthostatic hypotension: Pathophysiology, problems, and prevention. Home Healthcare Nurse. 2012;30(9); 525-530. http://www.ncbi.nlm.nih.gov/pubmed/23026987

14.  Medical Advisory Secretariat. Prevention of falls and fall-related injuries in community-dwelling seniors: an evidence-based analysis. Ontario Health Technology Assessment Series. 2008;9(2):1-78. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3377567/

15.  Muraki S, Akune T, Ishimoto Y, Nagata K, Yoshida M, Tanaka S, et al. Risk factors for falls in a longitudinal population-based cohort study of Japanese men and women: the ROAD study. Bone. 2013;52:516-523. http://www.sciencedirect.com/science/article/pii/S8756328212013282

16.  Nordin E, Lindelof N, Rosendahl, Jensen J, Lundin-Olsson. Prognostic validity of the Timed Up-and Go Test, a modified Get-Up-and-Go Test, staff’s global judgement and fall history in evaluating fall risk in residential care facilities. Age and Ageing. 2008;37;442-448.  http://ageing.oxfordjournals.org/content/37/4/442.long

17.  Panel on Prevention of Falls in Older Persons, American Geriatrics Society and British Geriatrics Society. Summary of the updated American Geriatrics Society/British Geriatrics Society clinical practice guideline for prevention of falls in older persons. Journal of the American Geriatrics Society. 2011;59(1);148-157. http://www.americangeriatrics.org/files/documents/health_care_pros/JAGS.Falls.Guidelines.pdf

18.  Patino CM, McKean-Cowdin R, Azen SP, Allison JC, Choudhury F, Varma R. Central and peripheral visual impairment and the risk of falls and falls with injury. Ophthalmology. 2010;117(2);199-206. http://www.sciencedirect.com/science/article/pii/S0161642009007386

19.  Reed-Jones RJ, Solis GR, Lawson KA, Loya AM, Cude-Islas D, Berger CS. Vision and falls: A multidisciplinary review of the contributions of visual impairment to falls among older adults. Maturitas. 2013;75;22-28.  http://www.sciencedirect.com/science/article/pii/S0378512213000285

20.  Schoene D, Wu SMS, Mikolaizak AS, Menant JC, Smith ST, Delbaere K, Lord SR. Discriminative ability and predictive validity of the Timed Up and Go Test in identifying older people who fall: Systematic review and meta-analysis. JAGS. 2013;61;202-208.   http://onlinelibrary.wiley.com/doi/10.1111/jgs.12106/abstract;jsessionid=C6892EE723F616EB83A5EF81DFF96A93.f04t04

21.  Shaw BH, Claydon VE. The relationship between orthostatic hypotension and falling in older adults. Clin Auton Res. 2013.  http://link.springer.com/article/10.1007%2Fs10286-013-0219-5

22.  Tinetti ME, Kumar C. The patient who falls: “It’s always a trade-off.” JAMA. 2010;303(3);258-266. http://jama.jamanetwork.com/article.aspx?articleid=185213

23.  Ungar A, Rafanelli M, Lacoemlli L, Brunetti MA, Ceccofiglio A, Tesi F, Marchionni N. Fall prevention in the elderly. Clinical Cases in Mineral and Bone Metabolism. 2013;10(2);91-95. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3797008/

It Was Almost Called the Cylinder (& Other Who-Knew Facts about the Stethoscope)

October 10, 2014

By Cindy Fang, MD

Peer Reviewed

“A wonderful instrument…is now in complete vogue in Paris…It is quite a fashion, if a person complains of cough, to have recourse to the miraculous tube which however cannot effect a cure but should you unfortunately perceive in the countenance of the doctor that he fancies certain symptoms exist it is very likely that a nervous person might become seriously indisposed and convert the supposition into reality.” —The London Times, September 19, 1824. [1]

The novel medical instrument described above is the stethoscope. Today, it is difficult for us to imagine that the stethoscope was once briefly judged to be fashionable yet not practical enough to bear any therapeutic influence. Little did the author of The London Times article know, the invention and evolution of the stethoscope would radically revolutionize the way physicians practiced medicine in the 19th century.

In 350 B.C., Hippocrates advocated for a method called “succussion,” which entails shaking a patient by the shoulders and directly listening for the sound in the chest. For thousands of years since then, physicians listened to the internal sounds of the body by pressing one ear against a patient’s body, a method known as immediate auscultation. This was the norm until 1816, when Dr. René Laënnec, a 35-year-old French physician, was consulted to see a “young woman laboring under general symptoms of diseased heart.” Reluctant to press his ear against the patient’s chest, Dr. Laënnec rolled a sheet of paper to create a cylinder. When he pressed one end to the patient’s chest and the other end to his ear, he was delighted by how much more clearly and loudly he heard the heart sound. [2]

Dr. Laënnec practiced medicine in an era when tuberculosis was a common disease. Interested in studying the sounds of the diseased chest filled with pus, fluid, or cavities, he spent three years improving his instrument to indirectly auscultate, a method he named mediate auscultation. In 1819, he published his design of a hollow wooden tube that was 3.5 cm wide and 30 cm long in his book, “A Treatise on the Diseases of the Chest and on Mediate Auscultation.” At first, Dr. Laënnec was tempted to name his great invention “the cylinder” based on its shape. Thankfully, he settled for the “stethoscope” (“stetho-” for chest, “-scope” for viewing). Dr. Laënnec ultimately died at age 45 of tuberculosis, the very disease he spent most of his life studying. [3]

Dr. Laënnec’s stethoscope was monaural, meaning it was used with only one ear against the instrument. Almost immediately after he published his book in 1819, physicians started attempting to perfect his design by adding earpieces and changing the shape of the bell. In 1843, Charles Williams invented the binaural stethoscope using two bent pipes and lead earpieces. After rubber became commercially available, Phillip Cammann came out with a flexible design in 1851 with a shape similar to the one we know today. His version had ivory earpieces, a wooden chest piece, and a woven tube held together by a rubber band. [4, 5] In the 1960s and 1970s, Dr. David Littmann, a Harvard Medical School professor, developed a lighter stethoscope with a tunable diaphragm and better acoustics. [6]

The stethoscope was embraced soon after its invention in 1816, first in France and then rapidly in the English-speaking world. Over 300 medical students attended Dr. Laënnec’s lectures to learn how to use this novel instrument. In 1826, the first article with directions for using the stethoscope was published in The Lancet. Within a decade, the stethoscope was considered the high-tech gadget in the medical field. Physicians felt their reputations would be in danger if they were seen examining patients without stethoscopes. [7-9] The binaural stethoscope, however, did not become widely popular until the early 1900s due to its higher price, and the fact that its bigger size made it uncomfortable for physicians to carry in their top hats or purses during home visits. [8] As a result, monaural stethoscopes were still commonly used through the early 1900s, after which improvements in the design and material of the binaural stethoscope made it more convenient to use.

Despite the popularity of the stethoscope, not all doctors were ready to stop pressing their ears against their patients’ chests. In a textbook on auscultation and percussion written in 1890, a Harvard professor in clinical medicine defined the art of auscultation as both immediate and mediate. Although he admitted that the stethoscope provided “aesthetic quality,” better acoustics, and convenience when examining dirty or female patients, he commented that “a good auscultator is not dependent on his stethoscope,” as he urged his students to practice both immediate and mediate auscultation. [10] Even as late as 1975, an Italian physician felt the need to submit a letter to the editor of Circulation titled, “Immediate auscultation — an old method not to be forgotten.” [11] The editor quickly rejected the physician’s argument that fine vibration could not be appreciated with a stethoscope and that palpation by hand was not sensitive enough, stating instead that palpation combined with proper patient positioning usually brought out gallops adequately.

Today, the stethoscope remains an indispensable bedside diagnostic instrument that all medical students must possess at the start of their education. New versions of this non-invasive piece of equipment, such as the electronic stethoscope, the recording stethoscope, and even the Doppler stethoscope, are constantly invented and improved. As Dr. Laënnec correctly stated in his will in 1826, the stethoscope is certainly the best part of his legacy.

Dr. Laënnec’s stethoscope. Courtesy of the U.S. National Library of Medicine.

A L’Hopital Necker, Ausculte Un Phtisique (Laënnec, at the Hopital Necker, Examining a Consumptive Patient by Auscultation). Painting by Théobald Chartran (1849-1907). Courtesy of the U.S. National Library of Medicine.

Catalog illustration of stethoscopes, 1869. Courtesy of the U.S. National Library of Medicine.

Medical collectibles, circa 1905. Left, a female obstetrician’s doctor’s bag. Middle and right, an early wooden monaural stethoscope with early painkillers and medications. Courtesy of Antiques Roadshow. URL: http://www.pbs.org/wgbh/roadshow/archive/199705A35.html.

Left, the Cammann stethoscope, mid-19th century. Right, Corwin’s Compound Stethoscope, circa 1896, was a variation of the Cammann stethoscope that allowed two individuals, such as a teacher and student, to listen simultaneously. Courtesy of the National Museum of Health and Medicine.

Dr. Cindy Fang is a resident at NYU Langone Medical Center

Peer Reviewed by Neil Shapiro, Editor-In-Chief, Clinical Correlations

References

1. Baldry PE. The Battle Against Heart Disease: A Physician traces the History of Man’s Achievements in this Field for the General Reader. London: University Press; 1971.

2. Laënnec RTH. A Treatise on the Diseases of the Chest and on Mediate Auscultation. 3rd ed. London: Gilbert, St. John’s Square; 1829.

3. Roguin A. Rene Theophile Hyacinthe Laënnec (1781-1826): The man behind the stethoscope. Clin Med Res. 2006;4(3):230-235.

4. Weinberg F. The history of the stethoscope. Can Fam Physician. 1993;39:2223-2224.

5. Cammann DM. An historical sketch of the stethoscope. Trans Am Climatol Assoc Meet. 1885;2:170-174.

6. Stethoscope. Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/wiki/stethoscope.  Updated August 16, 2014. Accessed August 20, 2014.

7. Bishop PJ. Reception of the stethoscope and Laënnec’s book. Thorax. 1981;36(7):487-492.

8. Levin S. The venerable stethoscope. S Afr Med J. 1968;42(10):232-234.

9. Walker HK, Hall WD, Hurst JW, eds. Clinical Methods: The History, Physical, and Laboratory Examinations. 3rd ed. Boston, MA: Butterworths; 1990.

10. Shattuck FC. Auscultation and Percussion. 1st ed. Detroit, MI: Davis; 1890.

11. Puddu V. Letter: Immediate auscultation — an old method not to be forgotten. Circulation. 1975;52(3):526-527.


From The Archives: Ethical Considerations on the Use of Fear in Public Health Campaigns

October 9, 2014

Please enjoy this post from the archives dated November 23, 2011

By Ishmeal Bradley, MD

Faculty Peer Reviewed

The goal of public health is to prevent or minimize disease and injury on a population level. How to achieve this end has changed over time, though. In previous decades, communicable diseases posed the greatest health risks. Consequently, public health officials used the tools of isolation, quarantine, and (forced) vaccination to combat these threats. Today, however, the major causes of morbidity and mortality are chronic conditions, many of which are thought to be due to lifestyle behaviors. Consider obesity, premature heart disease, and tobacco use as examples. The traditional tools of public health fail to address these newer threats to human welfare.

Understanding this limitation, the public health community has undergone a paradigmatic shift, moving away from contagion control to focus instead on these modifiable risk factors. Using mass media, public health campaigns now seek to change people’s behavior to forestall bad health outcomes. Within this new framework, these campaigns often use fear as a motivating force. The question thus arises, is fear an appropriate tool for a public health agency to use to effect behavioral change and advance a health agenda?

In order to examine the use of fear-based advertising in public health campaigns, one must define what advertising is exactly. Essentially, advertising is the use of media to persuade people to consume a product or service. When a government agency or other public interest group uses advertising to deliver its message for societal betterment, this becomes social marketing. Social theorists have described this as “the application of commercial marketing technologies…to influence [the] voluntary behavior of target audiences in order to improve their personal welfare and that of their society.”[1] Furthermore, social marketing borrows the key feature of commercial marketing, that of the “individual-as-consumer.”[2] Influencing that individual’s consumption of health information and the adoption of specific behaviors through social marketing can improve social good overall.

Fear-based advertising is a specific type of social marketing that employs scare tactics or other anxiety-producing mechanisms to highlight the dangers of engaging (or not engaging) in a certain practice, like smoking or drunken driving. This strategy can be a cost-effective way to reach a wide audience, as the New York City Department of Health and Mental Hygiene has demonstrated. The Department has released several graphic and hard-hitting ads in recent years attacking smoking, encouraging influenza vaccinations, and warning of the dangers of sugar-sweetened beverages. Regarding the “Smoking Kills” poster campaign from the spring of 2009, which showed graphic depictions of smoking-induced lung cancer, NYC Health Commissioner Thomas Farley argued that “[s]mokers who are more aware of health risks are more likely to quit…[and having warnings] be more graphic makes [people] more aware of health risks.”[3]

Not surprisingly, fear-based advertising is not without its discontents. Different countries have different tolerances for the amount of fear and negativity that public health agencies are allowed to use. Australia, the United States, the United Kingdom, Quebec, and several nations in Southeast Asia have used fear and gore in combating drunken driving and smoking. Cigarette packs in Malaysia not only explicitly warn the consumer that smoking will cause cancer, but they also show graphic photos of neck and lung tumors. Anti-drunken driving commercials in Australia show the horrendous aftermaths of car crashes. On the other hand, nations like Canada (the English-speaking provinces) and Holland are far less likely to use this tactic, instead relying on humor and gain-framed messaging. Whether one strategy works more than the other has been the subject of much debate and research, with both sides claiming victory.

If this media tactic is controversial and potentially ethically problematic, why do public health agencies use it? Does it really work, and if so, how? Numerous studies have examined the effectiveness of negative ads on smoking cessation and drunken driving. For example, when the Massachusetts Tobacco Control Program launched a series of anti-smoking ads on television, one study found that viewers responded much more strongly to the negative ads that evoked fear and sadness than other ads without fear appeals.[4] The study participants felt that those ads would make them more likely to stop smoking. Similar results were found in other American and Australian studies.

That these fear-based ads can work is not the issue, but rather, how they work. Some argue that the shock tactics used by these ads need to be intense to get people’s attention, in order to cut through the chatter of everyday life. We live in a time when information is all around us, and each message is fighting for the viewer’s limited attention. Consequently, the more graphic and visually jarring messages are more likely to get noticed. Also, convincing people to make (unwanted) behavioral changes may require the use of forceful language and strong motivators.[5]

Despite the laudable public health goals of limiting morbidity and mortality, doing so through fear remains troublesome. This debate can be examined from two opposing ethical frameworks: deontology and teleology. Whereas deontology is concerned with absolute moral foundations, teleology focuses on outcomes. Which approach one adopts governs how one views the acceptability of fear-based advertising.

The deontological position holds that all public health measures must be grounded in a priori moral certainties. Invoking the principle of beneficence, a deontologist would “reject the use of fear appeals outright on the grounds that, regardless of the ultimate societal gains, it is wrong to engender anxiety and distress.”[6] The essential feature of fear-based advertising is this very anxiety and distress that deontology would not allow. For example, the NYC DOHMH “Smoking Kills” ads could arguably invoke tremendous guilt and feelings of personal mortality in the smoker. Of course, seeing these images may encourage the smoker to quit, but the attack on his mental well-being would be unsupportable.

On the other hand, the teleologist would posit that the ends can justify the means if those ends are socially beneficial. Recognizing that any intervention can have both positive and negative consequences, the teleologic goal is a net positive result. The measurable increase in the public’s health status can take precedence over any anxiety and social distress that an intervention creates. Using the language of utilitarianism, “[c]hoices are deemed ethical if they result in the greatest good for the greatest number of people.”[7] If a hundred people decide to stop smoking after seeing that same cancerous poster, then the intervention is both successful and ethical, despite the personal distress that any one individual may have felt.

Reconciling these two disparate paradigms is by no means easy, and perhaps not even possible. Population-based disciplines like public health are fundamentally consequentialist, and they subsume the thoughts and wishes of the individual to the needs of the populace. This form of paternalism differs, though, from the strict paternalism that many of us expect, like mandatory immunizations or workplace safety standards. However, if the idea of paternalism centers on the use of state power and authority to guide the behavior of individuals, then it becomes quite clear that the state’s use of fear-based advertising is frankly paternalistic.

But we must ask ourselves: who really stands to benefit from these ads? Is it the individual viewer, the state, or both? The sole beneficiary cannot be the state, or even the general population. But if the individual stands to make significant gains in health, then it becomes difficult not to want to use these scare tactics. Although utilitarian ethics would take into consideration the broader social betterment created by the improvement in individual health, the rights of the individual cannot be wholly ignored.

A crucial premise of this libertarian critique is that fear-based ads are imposed upon the viewer without the viewer’s consent. John Stuart Mill, a leading advocate of 19th-century political and social libertarianism, was explicit in his belief that the state cannot enforce its will on the governed without their permission. Furthermore, one could posit that the state’s use of the media to carry its message into the home is a form of intrusion. A family at home watching television does not have control over the commercials and ads that appear on their screen. This family cannot call the local cable provider and “opt out” of distressing public health advertisements.

On the surface, this example may seem clear cut: the family has not given its consent to receive these messages at home. Yet although they have not given explicit consent, the simple act of turning on the television or opening a newspaper gives implied consent. The public should expect some risk of seeing distressing images in the media. Much as getting behind the wheel of a car implicitly means accepting the risk of an auto accident, turning on the television means accepting the risk of seeing a graphic health message. Perhaps more relevant, a media viewer always has the option to remove himself from the situation by simply changing the channel or looking away from the poster. It could be claimed that the individual is by no means held hostage by the public health message, and that the unsolicited advice could easily be avoided. Yet ads are purposely designed not to be easily shunned.

Furthermore, the actual content of the ad may matter more than its emotional style. Fear-based ads that solely rely on a haunting message may scare the viewer, but they do not necessarily lead to results. “Messages…which do induce fear, but whose behaviour [sic] recommendations are insufficiently feasible…have the strongest opposite effects in terms of rejection of and resistance to the message.”[8] The key component should be to provide advice on how not to succumb to the health threat. This advice is necessary to provide the viewer with the self-efficacy needed to effect necessary change and to overcome the fear engendered by the ad.

Even more troublesome is the risk that these ads run of victim blaming. Fear-based ads must be incredibly cautious about walking the fine line between warning about health dangers and blaming those already affected by those dangers. For example, safe-sex ads that encourage people to use condoms to prevent the spread of HIV could potentially stigmatize people living with HIV. These ads could imply that those persons infected with HIV were not cautious enough or careful enough to avoid infection. If only they had followed the advice of the public health community, they would have remained HIV-free.

On the other hand, fear-based ads can actually use the victim as their spokesperson. In 2008, the NYC Health Department ran an anti-smoking ad featuring a Bronx woman with Buerger’s disease. This condition predisposes patients to peripheral vascular disease, which can lead to finger and toe amputations, and the risk is greatly magnified by smoking. In these television ads, the woman directly faces the viewer, shows her gnarled hands, and tells us that she has undergone twenty amputations because of her smoking. The last line of the ad is, “I don’t smoke anymore.” Far from blaming this woman for the years of medical complications that she has endured because of her smoking, this frankly shocking ad attempts to empower her to help other smokers quit.

Similar to victim-blaming, fear-based ads may target already politically and socially disenfranchised communities. Health promotion campaigns, unfortunately, tend to produce social inequities: people of higher socioeconomic status usually adopt healthy behaviors before those of lower SES. Research on decades of anti-smoking educational campaigns has shown that the biggest declines in smoking rates have occurred among the wealthy and middle classes, while the working classes continue to bear the burden of tobacco addiction. To be more effective, anti-smoking advertisements would logically have to be directed at those communities where smoking rates are high. Such targeted selection could, however, stigmatize this class of people. These ads try to effect behavioral change in a community already lacking the resources to sustain that change. Inundating this community with fear about their lifestyle choices, coupled with a lack of means to make fundamental changes and improvements, is far from ethical. Doing so simply marginalizes the community further and may induce varying degrees of medical nihilism.

Another drawback of using fear-based advertising, especially for targeted communities, is defining specifically who is at risk. Although particular ads may highlight the risk in defined populations, these ads may allow people who do not fit these descriptions to delude themselves into thinking that they are not at risk. This would be an extremely dangerous misreading of the public health message and would nullify any societal gains made by these ads.

To get around this problem, some ads try to avoid targeting and instead emphasize the universality of risk. This also avoids the ethical issues of implying that one particular group of people is more susceptible than another. One Israeli ad from the late 1990s showed two women: one an older, grandmotherly woman in an ankle-length skirt, the other a young woman in heels and fishnet stockings. The caption read, “AIDS makes no distinction among people.”[5] Rather than focusing on young, sexually active adults, the Israeli Task Force chose a more ethically palatable approach to show that everyone is potentially at risk.

Fear-based advertising is also subject to the law of diminishing returns. With each viewing of a negative ad, the viewer experiences less shock and emotional appeal. Since fear-based ads have been around for several decades, with varying degrees of intensity, one must question whether fear appeals today still work with the same efficacy that they once did. In 2004, one research group in Australia and New Zealand studied this phenomenon with regard to fear-based advertising in preventing reckless driving. They found that “participants indicated growing tired of such negative appeals and feeling numbed to ‘shock tactic advertising.’”[9] This was disconcerting to the researchers because the loss of the ads’ “persuasive ability” essentially rendered the ads impotent.

To grab the same level of attention, each subsequent ad must be more shocking and jarring than the last. “[W]ith high-threat advertising…there is a need to intensify the threat on each subsequent occasion to produce the same level of fear.”[6] If one has to keep increasing this intensity, where does one stop? Would it be acceptable to show a man dying from lung cancer to promote smoking cessation? Should health and transportation departments show footage of actual car crashes and fatalities to reduce drunken driving?

Many would argue against such drastic measures, even in the name of public health and safety. Bombarding citizens with disturbing images of death and destruction could easily be seen as lacking any ethical foundation, even from a utilitarian perspective, as the effects of such graphic ads would be far too unsettling to justify.

Furthermore, such graphic advertisements could damage the reputation of the public health service. If the agency frequently employs scare tactics, the public could come to consider it a supplier of fear rather than health. “The [agency] could [then] become irretrievably linked with the negative and the threatening.”[6] This would erode the necessary trust that the public must have in its health authorities and hamper future endeavors, even those not based on fear.

Regardless, negative advertising may become a permanent fixture of public health campaigns. Is there a solution that addresses both the ethical and practical complications of this tactic? Can fear-based ads be used in a way that does not infringe upon personal liberties and still promotes the general good?

First, public health officials need to know to whom these types of ads would appeal. Not all people are equally affected by fear appeals and not all want to view them. Selective distribution to those groups of people would limit the impact of anxiety on those who would not benefit from these ads. Focus group analysis would be helpful here. Although difficult to accomplish, this approach is still worthy of consideration in public policy planning.

Also, instead of traditional targeting based on demographics like age, race, or SES, a new type of targeting could focus on the locations where unhealthy behaviors happen. In New York City, we recently introduced point-of-sale anti-smoking ads. These striking posters specifically target people buying cigarettes rather than the general public. This style would take much more forethought and planning, but it would help limit the ads’ intrusion to those most likely to benefit from their message.

Furthermore, to avoid the Millian critique of unjustified paternalism, negative ads could use a harm-to-others appeal. Instead of focusing on the harmful health effects to the individual, the ads could instead discuss the effects on others around that individual. Secondhand smoke around children is a classic example. This technique would have a more solid ethical foundation than the simple harm-to-self approach.

Finally, we must ask ourselves whether we are better off with this form of health promotion or whether other, more positive techniques are more desirable. In designing a health promotion campaign, one has to focus on both the target goal and the steps taken to reach that goal. Unfortunately, there is no formula for weighing the relative importance of intangible factors like anxiety, fear, and self-efficacy that can be crucial to a successful media strategy. We do know that these ads can work when used appropriately, but also that they have numerous disadvantages and can backfire. Regardless of the ethical framework used to examine fear-based advertising, the question of whether the use of fear is acceptable cannot be readily answered. Without a univocal ethical solution to guide policymakers, public health officials must rely on the best evidence of efficacy and on their professional and moral judgment in using the mass media as an educational tool to promote health, to the benefit of individuals and the community.

Commentary by Antonella Surbone, MD, Ethics Editor, Clinical Correlations

The piece on “Ethical Considerations on the Use of Fear in Public Health Campaigns” offers an in-depth analysis of the ethical pros and cons of fear-based public health campaigns that seek to change people’s behaviors and lifestyles to prevent or limit morbidity and mortality. In reading this informative and interesting piece, I believe we may also reflect on cultural differences in communication styles, symbols, and the metaphoric meaning of words. Even when a fear-evoking image is deemed appropriate by some or most people in a Western context, the same image may be considered disrespectful in a different cultural context. Furthermore, it may not elicit the same feelings and reactions in those who see it. For example, the war language often used in speaking about cancer prevention and treatment may not be suitable for all cancer patients and their families or communities, for whom cancer itself can be a serious illness, or a metaphor of death or of shame and guilt.

Dr. Ishmeal Bradley is a Section Editor, Clinical Correlations

Peer reviewed by Antonella Surbone, MD, Ethics Editor, Clinical Correlations

Image courtesy of Wikimedia Commons

References:

1. Andreasen, Alan. “Social Marketing: Definition and Domain.” Journal of Public Policy and Marketing 1994;13(1):101-114.

2. Gagnon, Marilou, Jean Daniel Jacob, and Dave Holmes. “Governing Through (In)Security: A Critical Analysis of a Fear-based Public Health Campaign.” Critical Public Health 2010;20(2):245-256.

3. Dejohn, Irving and Adam Lisberg. “Health commissioner Thomas Farley wants to post grim anti-smoking signs anywhere cigarettes are sold.” NY Daily News [online], 25 June 2009 [cited 13 December 2010]. Available from: http://www.nydailynews.com/ny_local/2009/06/25/2009-06-25_city_scare_tactic_health_commish_wants_grim_antismoking_signs_in_stores.html

4. Biener, Lois, Garth McCallum-Keeler, and Amy L Nyman. “Adults’ Response to Massachusetts Anti-Tobacco Television Advertisements: Impact of Viewer and Advertisement Characteristics.” Tobacco Control 2000;9:401-407. http://tobaccocontrol.bmj.com/content/9/4/401.full

5. Guttman, Nurit and Charles T. Salmon. “Guilt, Fear, Stigma and Knowledge Gaps: Ethical Issues in Public Health Communication Interventions.” Bioethics 2004;18(6):531-552. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=608425

6. Hastings, Gerard, Martine Stead, and John Webb. “Fear Appeals in Social Marketing: Strategic and Ethical Reasons for Concern.” Psychology and Marketing 2004;21(11):961-986. http://www.citeulike.org/user/suizan/article/311463

7. Bouman, Martine P. A. and William J. Brown. “Ethical Approaches to Lifestyle Campaigns.” Journal of Mass Media Ethics 2010;25:34-52. http://www.media-health.nl/Downloads/Bouman,%20M.P.A.%20&%20Brown,%20W.J%20(2010).%20Ethical%20Approaches%20to%20Lifestyle%20Campaigns.%20Journal%20of%20Mass%20Media%20EthicsExploring%20Questions%20of%20Media%20Morality,%2025%20(1),%20pp.%2034-52..pdf

8. Institute for Road Safety Research. “SWOV Fact Sheet: Fear-based information campaigns” [Internet]. Leidschendam, the Netherlands; 2009 April [cited 7 December 2010]. Available from: http://www.swov.nl/rapport/Factsheets/UK/FS_Fear_appeals.pdf.

9. Lewis, Ioni M. et al. “Promoting Public Health Messages: Should We Move Beyond Fear-Evoking Appeals in Road Safety.” Qualitative Health Research 2007;17(1):61-74. http://qhr.sagepub.com/content/17/1/61.full.pdf

Unraveling The Mysteries of Prinzmetal’s Angina: What Is It And How Do We Diagnose It?

October 8, 2014

By Anjali Varma Desai, MD

Peer Reviewed

Mr. Q is a 55-year-old male smoker who presents with recurrent chest pain in the mornings over the past several months. The patient reports being awakened from sleep at approximately 5:00 a.m. each morning with the same diffuse chest “pressure.” The pain typically lasts on the order of minutes, resolves, and then recurs at five-minute intervals in the same fashion for a total duration of two hours. The pain always occurs at rest and is never precipitated by exertion or emotional stress. The chest pain is generally associated with a sense of palpitations and occasional dizziness and light-headedness. An exercise stress test showed good exercise capacity without ST segment changes, even at target heart rate. Given the history, a diagnosis of coronary artery spasm was suggested. The patient was given a trial of diltiazem therapy, with marked improvement in his chest pain episodes thereafter.

In his landmark article in 1959, Dr. Myron Prinzmetal described a distinct type of “variant angina,” termed Prinzmetal’s angina. This chest pain tended to occur at rest (i.e. was not associated with increased cardiac work), waxed and waned cyclically, occurred at the same time each day, and could be accompanied by arrhythmias including ventricular ectopy, ventricular tachycardia, ventricular fibrillation, and various forms of AV block [1]. The patient’s EKG during painful episodes typically showed ST segment elevations (occasionally accompanied by reciprocal ST depressions), whereas the EKG obtained after the pain had resolved showed resolution of these ST segment changes [1]. Prinzmetal postulated that this separate clinical entity was due to transient spasm (“increased tonus”) of a large arteriosclerotic artery, causing temporary transmural ischemia in the distribution supplied by that artery.

It is important to note that, although ST elevation would be diagnostic, it is frequently not observed in cases of coronary artery spasm. Rather, the diagnosis of coronary artery spasm should be suspected based on the timing of chest pain and the presence of syncope, arrhythmia or cardiac arrest.

It was subsequently demonstrated that such episodes of coronary artery spasm can occur not only in patients with underlying fixed coronary artery obstruction but also in patients whose coronary arteries are anatomically normal [2-7]. Selzer et al. actually compared the syndromes of coronary artery spasm between nine patients with anatomically normal coronary arteries and 20 patients with obstructive coronary lesions [8]. Selzer et al. found that the non-coronary artery disease (CAD) group of patients was more likely to have a long history of nonexertional angina without prior infarction, normal EKG at rest with ST elevations in the inferior leads, conduction disease, and bradyarrhythmias during episodes of arterial spasm. Conversely, the CAD group of patients was more likely to have prior “effort angina” and prior infarction, as well as ST elevation in the anterolateral leads, ventricular ectopy and ventricular tachyarrhythmias.

Castello et al. also compared the syndromes of coronary artery spasm in 77 patients with underlying CAD (fixed coronary stenosis greater than or equal to 50%) and 35 patients with normal or minimally diseased coronary arteries [4]. These authors found, similarly, that angina exclusively at rest tends to occur in patients with structurally normal coronary arteries and that these patients tended to have more diffuse coronary artery spasms affecting more than one artery. In contrast, patients with underlying CAD usually had more focal coronary artery spasms superimposed on their fixed stenotic lesions.

The question arises: what triggers coronary artery spasm in patients with structurally normal coronary arteries? As Prinzmetal suggested, “the distinctive dissimilarities [between typical angina and variant angina] are due to profound physiological and chemical rather than anatomical differences” [1]. These physiological and chemical differences are multifactorial. Kugiyama et al. demonstrated a deficiency in endothelial nitric oxide (NO) bioactivity in Prinzmetal’s angina-prone arteries; this defect makes those arteries especially sensitive to the vasodilator effect of nitroglycerin and the vasoconstrictor effect of acetylcholine [9]. Miyao et al. used intravascular ultrasound to show that patients with Prinzmetal’s angina had diffuse intimal thickening of their coronary arteries despite an angiographically normal appearance; this intimal hyperplasia was thought to be mediated by deficient NO activity [10]. NO is involved in the regulation of basal vascular tone, helps to mediate flow-dependent vasodilation, and suppresses the production of endothelin-1 and angiotensin II, both of which are powerful vasoconstrictors [11]. As a result of these effects, deficient endothelial NO activity predisposes to coronary artery spasm. Endothelial NO is produced by endothelial NO synthase (eNOS), whose gene harbors many polymorphisms associated with coronary artery spasm [11]. It is important to note, however, that eNOS polymorphisms are found in only one-third of patients with coronary spasm; accordingly, other genes or factors are most likely involved [11].

In a review article, Kusama et al. [12] highlighted several additional pathophysiologic contributors to Prinzmetal’s angina, including enhanced vascular smooth muscle contractility mediated by the Rho/Rho-kinase pathway [13-14], elevated markers of oxidative stress [11,15], low-grade chronic inflammation [11], and cigarette smoking [11,15], in addition to genetic polymorphisms of endothelial NO synthase [11,15]. Polymorphisms of various genes may explain the higher incidence of Prinzmetal’s angina in the Japanese population as compared to the Caucasian population [12].

As our understanding of the pathophysiology behind Prinzmetal’s angina has evolved, new ways of diagnosing Prinzmetal’s angina have emerged. These diagnostic maneuvers typically involve provoking episodes of Prinzmetal’s angina under controlled settings (e.g. during coronary angiography) with acetylcholine, ergonovine, hyperventilation, and cold pressor stress testing. Okumura et al. showed that intracoronary injection of acetylcholine could be reliably used to induce coronary artery spasm with 99% specificity [16], a conclusion further supported by Miwa et al. [17]. Ergonovine, an ergot alkaloid and alpha-agonist that causes vasoconstriction, can similarly be used to induce episodes of coronary artery spasm accompanied by the characteristic chest pain and EKG changes that occur during spontaneous episodes of Prinzmetal’s angina [18-19]. Song et al. suggested ergonovine echocardiography as an effective screening test for coronary artery spasm, even before coronary angiography, with a sensitivity of 91% and a specificity of 88% [20]. Subsequent studies found that this was indeed an effective, safe, and well-tolerated screening test for coronary artery spasm [21-22].

It is important to note that provocation of arterial spasm with acetylcholine or ergonovine confers a multitude of risks, including arrhythmias, hypertension, hypotension, abdominal cramps, nausea, and vomiting [11]. More serious complications include ventricular fibrillation, myocardial infarction, and death [23,24]. Quantitative estimates of the risks incurred by such invasive testing are on the order of 1% [25,26]. In one study, serious major complications, such as sustained ventricular tachycardia, shock, and cardiac tamponade, occurred in four of 715 patients (0.56%) undergoing provocative acetylcholine testing [25]. In another study, nine of 921 patients (1%) had minor complications (nonsustained ventricular tachycardia [n=1], fast paroxysmal atrial fibrillation [n=1], symptomatic bradycardia [n=6], and catheter-induced spasm [n=1]) after acetylcholine provocation testing [26]. While such invasive testing is generally considered a safe way to assess coronary vasomotor dynamics, these maneuvers should only be performed by qualified physicians in carefully controlled settings, where the patient can be properly and quickly resuscitated as needed [11].

Testing a different diagnostic strategy, Hirano et al. found that a protocol of hyperventilation for six minutes followed by a two-minute cold pressor test, performed under continuous EKG and echocardiographic monitoring, had 90% sensitivity, 90% specificity, a 95% positive predictive value, and an 82% negative predictive value for diagnosing vasospastic angina [27]. The combination of respiratory alkalosis from the hyperventilation and reflex sympathetic coronary vasoconstriction in response to the cold pressor test [28] helped to induce coronary artery spasm and diagnose Prinzmetal’s angina. More recently, Hwang et al. suggested that measuring the change in coronary flow velocity in the distal left anterior descending artery (LAD) by transthoracic echocardiography during the cold pressor test may provide additional diagnostic utility, with a sensitivity of 93.5% and a specificity of 82.4% for diagnosing coronary artery spasm [29].
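
Because predictive values depend on how common vasospasm is in the population being tested, reported PPV and NPV figures cannot be interpreted apart from pre-test probability. The brief Python sketch below shows how PPV and NPV follow from sensitivity, specificity, and prevalence; the 70% pre-test probability used here is an arbitrary illustration, not a figure from Hirano et al.

```python
# Derive predictive values from sensitivity, specificity, and pre-test
# probability. The 0.70 prevalence below is an assumed value for illustration.

def predictive_values(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

ppv, npv = predictive_values(0.90, 0.90, 0.70)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # PPV = 95%, NPV = 79%
```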

In an article published in JACC in 2013, the Japanese Coronary Spasm Association (JCSA) described a comprehensive clinical risk score to aid in the prognostic stratification of patients with coronary artery spasm [30]. The authors performed a multicenter registry study of 1429 patients (median age 66 years) with a median follow-up of 32 months. The primary endpoint was major adverse cardiac events (MACE), including cardiac death, nonfatal myocardial infarction, hospitalization for unstable angina pectoris, heart failure, and appropriate implantable cardioverter-defibrillator (ICD) shocks during the follow-up period, which began at the date of diagnosis of coronary artery spasm. Cardiac death, nonfatal myocardial infarction, and ICD shocks were categorized as hard MACE. The secondary endpoint was all-cause mortality. The study identified seven predictors of MACE: history of out-of-hospital cardiac arrest (4 points); smoking, angina at rest alone, organic coronary stenosis, and multivessel spasm (2 points each); and ST-segment elevation during angina and beta-blocker use (1 point each). Based on total score, three risk categories were defined: low risk (score of 0 to 2; 598 patients), intermediate risk (score of 3 to 5; 639 patients), and high risk (score of 6 or more; 192 patients). The incidences of MACE in the low-, intermediate-, and high-risk groups were 2.5%, 7.0%, and 13.0%, respectively (p<0.001). This scoring system, known as the JCSA risk score, may provide a comprehensive risk assessment and prognostic stratification scheme for patients with coronary artery spasm.
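
As a concrete illustration of how this scoring works, the following minimal sketch tallies the published point values and maps the total to the three risk categories. The function and variable names are invented for this example and are not part of the JCSA publication.

```python
# Illustrative tally of the JCSA risk score described above. Point values and
# category cutoffs are taken from the text; function and variable names are
# hypothetical.

def jcsa_risk_score(ohca_history, smoking, rest_angina_only, organic_stenosis,
                    multivessel_spasm, st_elevation_during_angina,
                    beta_blocker_use):
    score = (4 * ohca_history                  # out-of-hospital cardiac arrest
             + 2 * smoking
             + 2 * rest_angina_only            # angina at rest alone
             + 2 * organic_stenosis            # organic coronary stenosis
             + 2 * multivessel_spasm
             + 1 * st_elevation_during_angina  # ST elevation during angina
             + 1 * beta_blocker_use)
    if score <= 2:
        category = "low"                       # observed MACE 2.5%
    elif score <= 5:
        category = "intermediate"              # observed MACE 7.0%
    else:
        category = "high"                      # observed MACE 13.0%
    return score, category

# Example: a smoker with angina at rest alone and multivessel spasm
print(jcsa_risk_score(False, True, True, False, True, False, False))  # (6, 'high')
```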

In terms of treatment, calcium channel blockers (e.g. nifedipine, diltiazem, and verapamil) are the mainstay of therapy for coronary artery spasm. The goal of such therapy is to prevent vasoconstriction and promote coronary artery vasodilation. In one study of 245 patients with coronary artery spasm followed for an average of 80.5 months, calcium channel blocker therapy was an independent predictor of myocardial infarction-free survival [31]. In another observational study of 300 patients with coronary artery spasm, calcium channel blockers were effective in alleviating symptoms in over 90% of patients [32]. The drugs were evaluated and ranked as follows: markedly effective, meaning complete elimination of angina attacks within 2 days; effective, meaning complete elimination of attacks after 2 days or a reduction in the number of attacks to less than half during the period of drug administration in the hospital; and ineffective, meaning failure to reduce the number of attacks to less than half during the period of drug administration. Efficacy rates (combining the markedly effective and effective categories) for nifedipine, diltiazem, and verapamil were 94.0%, 90.8%, and 85.7%, respectively. Rarely, cases are refractory to medical therapy; there is literature supporting the effectiveness of surgical revascularization in these circumstances [33].

It is clear that the phenomenon of “variant angina” is a complicated, multifaceted product of forces that are not only anatomical but also genetic, chemical, physiological, and behavioral in nature. While endothelial nitric oxide bioactivity appears to play a critical role in this process, there are undoubtedly several other factors involved. Over time, our knowledge of the pathophysiology driving Prinzmetal’s angina will continue to expand, as will our diagnostic and therapeutic repertoire for this fascinating clinical entity.

Dr. Anjali Varma Desai is a 3rd year resident at NYU Langone Medical Center

Peer Reviewed by Harmony R. Reynolds, MD, Medicine (Cardio Div), NYU Langone Medical Center

References:

1. Prinzmetal M, Kennamer R, Merliss R, Wada T, Bor N. Angina pectoris: I: a variant form of angina pectoris: preliminary report. Am J Med. 1959; 27: 375–388 http://www.ncbi.nlm.nih.gov/pubmed/14434946

2. Maseri A, Severi S, Nes MD, et al. “Variant” angina: one aspect of a continuous spectrum of vasospastic myocardial ischemia. Pathogenetic mechanisms, estimated incidence and clinical and coronary arteriographic findings in 138 patients. Am J Cardiol. Dec 1978;42(6):1019-35 http://www.ncbi.nlm.nih.gov/pubmed/727129

3. Cheng TO, Bashour R, Kelser GA Jr, et al: Variant angina of Prinzmetal with normal coronary arteriograms: a variant of the variant. Circulation 1973; 47: 476-485. http://circ.ahajournals.org/content/47/3/476.abstract

4. Castello R, Alegria E, Merino A, Soria F, Martinez-Caro D. Syndrome of coronary artery spasm of normal coronary arteries: Clinical and angiographic features. Angiology 1988; 39: 8-15. http://www.ncbi.nlm.nih.gov/pubmed/3341608

5. Oliva PB, Potts DE, Pluss RG. Coronary arterial spasm in Prinzmetal angina: documentation by coronary arteriography. N Engl J Med 1973; 288: 745-751. http://www.ncbi.nlm.nih.gov/pubmed/4688712

6. Endo M, Kanda I, Hosoda S, et al. Prinzmetal’s variant form of angina pectoris: Re-evaluation of mechanisms. Circulation 1975; 52: 33-37. http://circ.ahajournals.org/content/52/1/33.abstract?cited-by=yes&legid=circulationaha;52/1/33&related-urls=yes&legid=circulationaha;52/1/33

7. Huckell VF, McLaughlin PR, Morch JE, Wigle ED, Adelman AG: Prinzmetal’s angina with documented coronary artery spasm: Treatment and follow-up. Br Heart J 1981 June; 45(6): 649-655. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC482578/

8. Selzer A, Langston M, Ruggeroli C, et al: Clinical syndrome of variant angina with normal coronary arteriogram. N Engl J Med 1976; 295: 1343-1347. http://www.ncbi.nlm.nih.gov/pubmed/980080

9. Kugiyama K, Yasue H, Okumura K, et al. Nitric oxide activity is deficient in spasm arteries of patients with coronary spastic angina. Circulation 1996 Aug 1; 94(3): 266-71. http://www.ncbi.nlm.nih.gov/pubmed/8759065

10. Miyao Y, Kugiyama K, Kawano H, et al. Diffuse intimal thickening of coronary arteries in patients with coronary spastic angina. J Am Coll Cardiol. 2000 Aug; 36(2): 432-7. http://www.ncbi.nlm.nih.gov/pubmed/10933354

11. Yasue H, Nakagawa H, Itoh T, Harada E, Mizuno Y. Coronary artery spasm – clinical features, diagnosis, pathogenesis and treatment. J Cardiol 2008; 51: 2-17. http://www.ncbi.nlm.nih.gov/pubmed/18522770

12. Kusama Y, Kodani E, Nakagomi A, et al. Variant angina and coronary artery spasm: the clinical spectrum, pathophysiology and management. J Nihon Med Sch. 2011;78(1):4-12. Review. http://www.researchgate.net/publication/50351691_Variant_angina_and_coronary_artery_spasm_the_clinical_spectrum_pathophysiology_and_management

13. Shimokawa H, Seto M, Katsumata N, et al. Rho-kinase mediated pathway induces enhanced myosin light chain phosphorylations in a swine model of coronary artery spasm. Cardiovasc Res 1999; 43: 1029-1039. http://cardiovascres.oxfordjournals.org/content/43/4/1029.full

14. Masumoto A, Mohri M, Shimokawa H, et al. Suppression of coronary artery spasm by a Rho-kinase inhibitor fasudil in patients with vasospastic angina. Circulation 2002; 105: 1545-1547. http://circ.ahajournals.org/content/105/13/1545.abstract

15. Miwa K, Fujita M, Sasayama S. Recent insights into the mechanisms, predisposing factors and racial differences of coronary vasospasm. Heart Vessels 2005; 20: 1-7. http://www.ncbi.nlm.nih.gov/pubmed/15700195

16. Okumura K, Yasue H, Matsuyama K, et al. Sensitivity and specificity of intracoronary injection of acetylcholine for the induction of coronary artery spasm. J Am Coll Cardiol. 1988 Oct;12(4):883-8. http://www.unboundmedicine.com/evidence/ub/citation/3047196/Sensitivity_and_specificity_of_intracoronary_injection_of_acetylcholine_for_the_induction_of_coronary_artery_spasm_

17. Miwa K, Fujita M, Ejiri M, Sasayama S. Usefulness of intracoronary injection of acetylcholine as a provocative test for coronary artery spasm in patients with vasospastic angina. Heart Vessels. 1991;6(2):96-101 http://www.ncbi.nlm.nih.gov/pubmed/1906457

18. Schroeder JS, Bolen JL, Quint RA, et al. Provocation of coronary spasm with ergonovine maleate: new test with results in 57 patients undergoing coronary arteriography. Am J Cardiol 1977; 40: 487-491. http://www.ncbi.nlm.nih.gov/pubmed/910712

19. Heupler FA, Proudfit WL, Razavi M, et al. Ergonovine maleate provocative test for coronary arterial spasm. Am J Cardiol 1978; 41: 631-640. http://www.ncbi.nlm.nih.gov/pubmed/645566

20. Song JK, Lee SJ, Kang DH, Cheong SS, Hong MK, Kim JJ, Park SW, Park SJ. Ergonovine echocardiography as a screening test for diagnosis of vasospastic angina before coronary angiography. J Am Coll Cardiol. 1996 Apr;27(5):1156-61. http://www.ncbi.nlm.nih.gov/pubmed/8609335

21. Palinkas A, Picano E, Rodriguez O, et al. Safety of ergot stress echocardiography for non-invasive detection of coronary vasospasm. Coron Artery Dis 2001 Dec; 12(8): 649-54. http://www.ncbi.nlm.nih.gov/pubmed/11811330

22. Djordjevic-Dikic A, Varga A, Rodriguez O, et al. Safety of ergotamine-ergic pharmacologic stress echocardiography for vasospasm testing in the echo lab: 14 year experience on 478 tests in 464 patients. Cardiologia 1999 Oct; 44(10): 901-6. http://www.ncbi.nlm.nih.gov/pubmed/10630049

23. Nakamura M, Takeshita A, Nose Y. Clinical characteristics associated with myocardial infarction, arrhythmias and sudden death in patients with vasospastic angina. Circulation 1987; 75: 1110-1116.

24. Myerburg RJ, Kessler KM, Mallon SM, et al. Life-threatening ventricular arrhythmias in patients with silent myocardial ischemia due to coronary artery spasm. N Engl J Med 1992; 326: 1451-1455.

25. Sueda S, Saeki H, Otani T, et al. Major complications during spasm provocation tests with an intracoronary injection of acetylcholine. Am J. Cardiol. 2000; 85(3): 391.

26. Ong P, Athanasiadis A, Borgulya G, et al. Clinical usefulness, angiographic characteristics, and safety evaluation of intracoronary acetylcholine provocation testing among 921 consecutive white patients with unobstructed coronary arteries. Circulation 2014; 129(17): 1723.

27. Hirano Y, Ozasa Y, Yamamoto T, et al. Diagnosis of vasospastic angina by hyperventilation and cold-pressor stress echocardiography: comparison to I-MIBG myocardial scintigraphy. J Am Soc Echocardiogr. 2002 Jun;15(6):617-23. http://www.unboundmedicine.com/washingtonmanual/ub/citation/12050603/Diagnosis_of_vasospastic_angina_by_hyperventilation_and_cold_pressor_stress_echocardiography:_comparison_to_I_MIBG_myocardial_scintigraphy_

28. Raizner AE, Chahine RA, Ishimori T, et al. Provocation of coronary artery spasm by the cold pressor test. Hemodynamic, arteriographic and quantitative angiographic observations. Circulation. 1980; 62: 925-932. http://circ.ahajournals.org/content/62/5/925.citation

29. Hwang HJ, Chung WB, Park JH, et al. Estimation of coronary flow velocity reserve using transthoracic Doppler echocardiography and cold pressor test might be useful for detecting of patients with variant angina. Echocardiography. 2010 Apr;27(4):435-41. http://www.ncbi.nlm.nih.gov/pubmed/20113325

30. Takagi Y, Takahashi J, Yasuda S, et al. Prognostic stratification of patients with vasospastic angina: a comprehensive clinical risk score developed by the Japanese Coronary Spasm Association. J Am Coll Cardiol 2013; 62(13): 1144-1153.

31. Yasue H, Takizawa A, Nagao M, et al. Long-term prognosis for patients with variant angina and influential factors. Circulation. 1988;78(1):1.

32. Kimura E, Kishida H. Treatment of variant angina with drugs: a survey of 11 cardiology institutes in Japan. Circulation 1981 April; 63(4): 844-8.

33. Ono T, Ohashi T, Asakura T, Shin T. Internal mammary revascularization in patients with variant angina and normal coronary arteries. Interact Cardiovasc Thorac Surg. 2005;4:426–428.

Pets Gone Wild: A Review of Animal Attacks

October 1, 2014

By Thomas Lee

Peer Reviewed

The age-old question that every one of us has been asked at least once: Are you a cat or a dog person? The answer is subjective, as both choices depend on a person’s values, preferences, and lifestyle. A different question, and perhaps a more objective one is: Which would you rather be bitten by? With news stories of pit bull attacks, the common sight of German shepherd police dogs in New York, and the relatively benign appearance of most domesticated felines, many might spring to answer with the cat. However, the data suggest a more nuanced response.

Animal bites are extremely common: an estimated 2-5 million occur each year in the United States alone. Dogs account for the overwhelming majority of attacks at 90%, cats are a distant second at 5%, and rodents come in third at 2-3% [1]. Around 20 deaths in the US can be attributed to animal bites each year, and morbidity is high for bites around the hand because of the many superficial bones and joints in that area [2]. The most worrisome complications are trauma and infection.

German shepherds, pit bull terriers, and mixed breeds are implicated in most dog bites in the US [3]. Victims, usually males between the ages of 5 and 9, frequently know the dog that attacked them [4]. The animal can inflict a wide range of injuries, including scratches, deep cuts, puncture wounds, and crush injuries [5]. Larger dog breeds can cause more severe crush injuries because their powerful jaws can generate up to 450 pounds per square inch of pressure, creating a greater risk of major organ or vessel injury [6]. The head and neck are the most common sites of injury, most likely because a child’s head is near the level of a larger dog’s mouth [7]. Dog attacks rarely cause death, except in infants [6]. Even so, around 2-5% of all bites will develop a local infection, usually polymicrobial, involving common human skin flora and other microorganisms that are part of the dog’s normal flora [8]. Pasteurella is the most common organism found in dog bite infections, present in almost 50% of cases, and can cause septic arthritis and osteomyelitis [9]. Capnocytophaga is part of normal animal flora and can cause fulminant sepsis and meningitis; it is most dangerous in asplenic patients. Brucella may lead to fever with nonspecific symptoms. The rabies virus can cause a lethal viral encephalitis [5].

Cat injuries can be dangerous as well. Cats can inflict painful wounds with their teeth and claws; bites usually affect the arms and hands, while scratches are common on the face [5]. The high rate of deep puncture wounds sets cat injuries apart from most dog bites: a cat’s long, thin, sharp teeth can create a wound that is difficult to clean properly. This leads to infection rates reported to range from 20-80% [10]. Hand wounds are at even greater risk; redness, swelling, and intense pain can develop as quickly as 12-24 hours after a bite [11], with complications including osteomyelitis and abscess formation. The zoonoses are similar to those of dog attacks but also include potentially life-threatening Bartonella, the cause of cat scratch disease. Interestingly, kittens transmit Bartonella infection more commonly than full-grown cats because of their more playful nature and higher propensity to host the organism [12].

Initial management of bites from both animals is identical: stabilize the patient, debride and irrigate the wound, obtain x-rays of deep injuries to assess for bone involvement or foreign bodies, and dress the wound [13]. Primary closure with sutures is performed to reduce scarring unless the injury is a crush injury, involves the hands or feet, occurred more than 12 hours earlier, or the victim is immunocompromised. If any of these apply, general practice is simply to irrigate and dress the wound, elevate the extremity to allow drainage if indicated, and arrange close follow-up [14]. Antibiotic prophylaxis, with amoxicillin-clavulanate for 3-5 days as the standard of care, may be indicated for deep puncture wounds, crush injuries, hand bites, and immunocompromised patients [6]. Tetanus and rabies prophylaxis should always be considered as well. One study showed that antibiotic prophylaxis was most effective when started 9-24 hours after injury; if the patient was seen and treated within 8 hours, no benefit was seen [14]. Moreover, some data suggest that antibiotics may only reduce infection rates in injuries to the hand [15]. Regardless, even with prophylactic treatment, cat bites are much more prone to infection than dog bites [5].
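
As a rough, purely illustrative summary of these rules (not a clinical algorithm), the sketch below encodes the closure and prophylaxis criteria described above; all function and argument names are invented for this example.

```python
# Simplified sketch of the bite-wound management principles described above.
# Illustrative only; real management requires clinical judgment.

def manage_bite_wound(crush_injury=False, involves_hand_or_foot=False,
                      hours_since_bite=0.0, immunocompromised=False,
                      deep_puncture=False):
    plan = ["stabilize patient",
            "debride and irrigate",
            "x-ray deep injuries to assess bone or foreign bodies",
            "dress the wound"]

    # Primary closure with sutures unless an exclusion criterion applies
    defer_closure = (crush_injury or involves_hand_or_foot or
                     hours_since_bite > 12 or immunocompromised)
    if defer_closure:
        plan.append("leave open: irrigate, dress, elevate if indicated, close follow-up")
    else:
        plan.append("primary closure with sutures")

    # Prophylaxis for deep punctures, crush injuries, hand bites (hand and foot
    # are grouped here for simplicity), and immunocompromised patients
    if deep_puncture or crush_injury or involves_hand_or_foot or immunocompromised:
        plan.append("antibiotic prophylaxis: amoxicillin-clavulanate for 3-5 days")

    plan.append("consider tetanus and rabies prophylaxis")
    return plan

for step in manage_bite_wound(involves_hand_or_foot=True, deep_puncture=True,
                              hours_since_bite=16):
    print(step)
```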

Overall, the data show that dog bites are more frequent than cat bites and can cause greater initial trauma. However, cat bites have significantly higher infection rates because of the difficulty inherent in cleaning a puncture wound, leading to high morbidity, particularly in the vulnerable hand. Both should be rare occurrences. Responsible pet ownership, education, and safety precautions can make a significant impact on the morbidity and mortality of these attacks.

Thomas Lee is a 2nd year medical student at NYU School of Medicine

Peer reviewed  by Thomas Norton, MD, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Gilchrist J, Sacks JJ, White D, Kresnow MJ. Dog bites: still a problem? Inj Prev. 2008;14(5):296-301.

2. Callaham M. Controversies in antibiotic choices for bite wounds. Ann Emerg Med. 1988;17(12):1321-1330.

3. Morgan M, Palmer J. Dog bites. BMJ. 2007;334(7590):413-417. http://www.bmj.com/content/334/7590/413

4. Schalamon J, Ainoedhofer H, Singer G, et al. Analysis of dog bites in children who are younger than 17 years. Pediatrics. 2006;117(3):e374-379. http://pediatrics.aappublications.org/content/117/3/e374.long

5. Goldstein EJ. Bite wounds and infection. Clin Infect Dis. 1992;14(3):633-638. http://cid.oxfordjournals.org/content/14/3/633.long

6. Oehler RL, Velez AP, Mizrachi M, Lamarche J, Gompf S. Bite-related and septic syndromes caused by cats and dogs. Lancet Infect Dis. 2009;9(7):439-447. http://www.sciencedirect.com/science/article/pii/S1473309909701100

7. Talan DA, Citron DM, Abrahamian FM, Moran GJ, Goldstein EJ. Bacteriologic analysis of infected dog and cat bites. N Engl J Med. 1999;340(2):85-92. http://www.nejm.org/doi/full/10.1056/NEJM199901143400202

8. Dire DJ, Hogan DE, Riggs MW. A prospective evaluation of risk factors for infections from dog-bite wounds. Acad Emerg Med. 1994;1(3):258-266.

9. Abrahamian FM, Goldstein EJC. Microbiology of animal bite wound infections. Clin Microbiol Rev. 2011;24(2):231-246. http://cmr.asm.org/content/24/2/231.full.pdf+html

10. Thomas N, Brook I. Animal bite-associated infections: microbiology and treatment. Expert Rev Anti Infect Ther. 2011;9(2):215-226. http://www.medscape.com/viewarticle/739023_5

11. Fleisher GR. The management of bite wounds. N Engl J Med. 1999;340(2):138-140. http://www.sonny2.com/articles/Bite1.htm

12. Centers for Disease Control and Prevention. Cat scratch disease (Bartonella henselae infection). http://www.cdc.gov/healthypets/diseases/cat-scratch.html.  Published June 23, 2011.  Accessed April 14, 2014.

13. Paschos, NK, Makris EA, Gantsos A, Georgoulis AD. Primary closure versus non-closure of dog bite wounds: a randomized controlled trial. Injury. 2014;45(1):237-240. http://www.sciencedirect.com/science/article/pii/S0020138313003173

14. Brakenbury PH, Muwanga C. A comparative double blind study of amoxycillin/clavulanate vs placebo in the prevention of infection after animal bites. Arch Emerg Med. 1989;6(4):251-256.

15. Henton J, Jain A. Cochrane corner: antibiotic prophylaxis for mammalian bites (intervention review). J Hand Surg Eur Vol. 2012;37:804-806. http://jhs.sagepub.com/content/37/8/804.long