Are We Too Hesitant to Anticoagulate Elderly Patients with Atrial Fibrillation? A Risk-Benefit Analysis

June 28, 2013

By Sunny N. Shah, MD

Faculty Peer Reviewed


Atrial fibrillation (AF) is the most common cardiac arrhythmia, and its prevalence increases with age. In fact, the lifetime risk of AF is approximately 25% by age 80, with the incidence nearly doubling with each decade of life after age 50. (1) Multiple randomized controlled trials have shown that oral antithrombotic therapy with warfarin or aspirin decreases the risk of ischemic stroke in patients with AF. (2-6) Meta-analyses reveal a relative risk reduction of approximately 60% with warfarin and 20% with aspirin compared to placebo. (7) Pooled analyses from these studies show a less than 0.3% increase in the absolute risk of intracranial hemorrhage with antithrombotic therapy. (7) This risk is far lower than the annual risk of ischemic stroke, which is increased more than five-fold in patients with atrial fibrillation who are not receiving anticoagulation. (8)

Given these results, the current American Heart Association/American College of Cardiology/European Society of Cardiology guidelines for the management of AF recommend the use of anticoagulation based on a patient’s risk of ischemic stroke. This risk is determined using known risk factors such as those identified by the CHA2DS2-VASc scoring system. (9) These guidelines thus give a Class IA recommendation to the statement: “the selection of the antithrombotic agent should be based upon the absolute risk of stroke and bleeding and the relative risk and benefit for a given patient.” (9)
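As a rough illustration of how such a risk score is tallied, the CHA2DS2-VASc components reduce to simple additive arithmetic. The sketch below is for exposition only and is not a clinical tool:

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia,
                 vascular_disease, female):
    """Illustrative CHA2DS2-VASc tally (exposition only, not clinical use).

    One point each: CHF, hypertension, diabetes, vascular disease,
    age 65-74, female sex. Two points each: age >= 75, prior
    stroke/TIA/thromboembolism. Maximum score: 9.
    """
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score

# An 80-year-old hypertensive man with no other risk factors:
# 2 (age >= 75) + 1 (hypertension) = 3, a score favoring anticoagulation
print(cha2ds2_vasc(chf=False, hypertension=True, age=80, diabetes=False,
                   stroke_tia=False, vascular_disease=False, female=False))
```

The point of the sketch is that the score itself is mechanical; the clinical judgment lies in weighing it against bleeding risk, as the guideline statement above emphasizes.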


It is also evident in clinical practice that the elderly are at increased risk for falls. Studies demonstrate that approximately one-third of community-dwelling individuals over the age of 65 fall every year. (10) In addition, those who fall once are at increased risk of additional falls, and approximately 10% of falls result in serious injury. Accordingly, studies have shown that physicians are hesitant to prescribe antithrombotic therapy in elderly patients with atrial fibrillation who are perceived to have an increased fall risk, for fear of provoking an intracranial hemorrhage. One review of a hospital's electronic medical records revealed that the most common reason cited for not prescribing warfarin to a patient with AF was bleeding risk from falls. (11) A survey of residents, fellows, and attending physicians also highlighted fall risk as the most common reason for not anticoagulating a case patient with atrial fibrillation residing in a nursing home. (12)

Is this fear justified?

Review of the Literature:

A prospective study of over 500 patients who were discharged on oral anticoagulation for AF found no significant difference in the risk of major bleeds between those deemed at high versus low risk for falls (8.0 versus 6.8 events per 100 patient-years, respectively; p = 0.64). (13) In a retrospective study of 1,245 patients at high risk of falls and 18,261 other patients with AF, there was a significant difference in rates of intracranial hemorrhage between the high-risk and low-risk groups (2.8 versus 1.1 events per 100 patient-years, respectively; p < 0.005). However, for a composite outcome of out-of-hospital death or hospitalization for stroke, myocardial infarction, or hemorrhage, the hazard ratio with warfarin therapy was 0.75 (95% CI 0.61 to 0.91, p = 0.004). This led the authors to conclude that in patients at high risk for stroke – those with a CHADS2 score of 2 or greater – the overall benefit of warfarin therapy outweighed the risks. (14)

Given the potential discrepancy in the interpretation of these studies, a decision analysis was conducted to help guide antithrombotic therapy for AF in elderly patients at risk for falls. The authors used a Markov decision model to calculate that an individual taking warfarin would need to fall about 295 times in one year before the risks of warfarin outweighed the benefits. (15) Of course, a major concern regarding the validity of these results is the potential imprecision in the input variables used to calculate this “number needed to fall” – e.g., incidence of falls per year, risk of intracranial hemorrhage (ICH) per fall. The authors concluded that a patient’s tendency to fall should not play a significant role in the decision to prescribe anticoagulant therapy in this population.
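The break-even arithmetic behind such a "number needed to fall" can be sketched with deliberately invented inputs. The values below are not the published model's parameters (the Markov model is far richer); the sketch only shows the form of the calculation and why it is so sensitive to its inputs:

```python
# Hypothetical break-even: how many falls per year would be needed before
# the expected harm from fall-related ICH on warfarin exceeds the net
# stroke-prevention benefit? ALL inputs below are invented for
# illustration and are not the cited model's actual parameters.
net_annual_benefit = 0.02    # hypothetical net utility gained per year on warfarin
risk_ich_per_fall = 0.0001   # hypothetical probability of ICH per fall
harm_per_ich = 0.7           # hypothetical utility lost per ICH event

falls_to_outweigh = net_annual_benefit / (risk_ich_per_fall * harm_per_ich)
print(round(falls_to_outweigh))  # -> 286 falls/year with these invented inputs
```

Because the result is a ratio of small, imprecisely known quantities, modest changes in any input shift the break-even by hundreds of falls – exactly the imprecision concern raised above.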

The same authors followed up their original study with a subsequent appraisal of the literature to determine whether additional clinical risk factors for bleeding affected the risk of hemorrhage in elderly patients anticoagulated for AF. (16) Based on their literature search, they identified three such risk factors for intracranial hemorrhage – hypertension, participation in activities that increase the risk of head trauma, and prior stroke. They concluded that these, too, should not change the decision to anticoagulate patients with AF at high risk of stroke. (16)

A subsequent case-control study analyzed the level of anticoagulation, age, and stroke risk in AF patients on warfarin. The authors showed that the risk of ICH was increased in patients over the age of 85 compared to those aged 70-74 (OR 2.5, 95% CI 1.3 to 4.7). (17) They thus advocated strict monitoring of INR values and avoidance of an INR > 3.5, which they found to be associated with increased risk of ICH.

A more recent review on the subject analyzed the above studies, among others, and concluded that physicians remain guided by their own concerns regarding a patient’s risk of hemorrhage rather than the risk of ischemic/embolic stroke, and that warfarin is consequently underused in the treatment of AF in the elderly. (18)

Summary and Conclusions:

The decision to anticoagulate an elderly person with AF at high risk for stroke is a common problem faced by physicians. The benefit of warfarin in reducing the risk of stroke in patients with AF is indisputable, as is its superior efficacy compared to aspirin. Some studies have shown that rates of ICH may be comparable between patients treated with aspirin and those on well-managed warfarin regimens. (19,20) Importantly, this review has focused on the use of warfarin in patients with AF. As data on the newer anticoagulants continue to accrue, future risk-benefit analyses of these agents should be conducted as well.

Because stroke risk increases with age, the elderly stand to benefit the most from warfarin therapy. Therefore, most experts recommend an individualized risk-benefit analysis for each patient, with careful attention to the risk of ischemic stroke in this vulnerable population, and in general advocate warfarin therapy for those at high risk of stroke – i.e., a CHADS2 or CHA2DS2-VASc score of 2 or greater – despite their fall risk. This is especially true for patients who remain adherent to their regimen and have well-controlled INRs. Importantly, the patient should be included in a conversation regarding the risks, benefits, and lifestyle changes that accompany chronic anticoagulation therapy.

To help allay physician concern and potentially decrease the risk of ICH in an elderly patient, more frequent INR checks should be obtained to ensure that the INR remains at the goal of 2 to 3. In addition, minimizing a patient’s fall risk through environmental changes, medication management, and treatment of underlying diseases that contribute to falls should be emphasized. (21)

Dr. Sunny N. Shah is a 3rd year Resident at NYU Langone Medical Center

Peer reviewed by Rob Donnino, MD, Cardiology Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


1. Magnani JW, Rienstra M, Lin H, Sinner MF, Lubitz SA, McManus DD, et al. Atrial fibrillation: current knowledge and future directions in epidemiology and genomics. Circulation. 2011;124(18):1982-93.  http://www.ncbi.nlm.nih.gov/pubmed/22042927

2. Warfarin versus aspirin for prevention of thromboembolism in atrial fibrillation: Stroke Prevention in Atrial Fibrillation II Study. Lancet. 1994;343(8899):687-91.  http://www.ncbi.nlm.nih.gov/pubmed/7907677

3. The effect of low-dose warfarin on the risk of stroke in patients with nonrheumatic atrial fibrillation. The Boston Area Anticoagulation Trial for Atrial Fibrillation Investigators. N Engl J Med. 1990;323(22):1505-11.

4. Petersen P, Boysen G, Godtfredsen J, Andersen ED, Andersen B. Placebo-controlled, randomised trial of warfarin and aspirin for prevention of thromboembolic complications in chronic atrial fibrillation. The Copenhagen AFASAK study. Lancet. 1989;1(8631):175-9.

5. Connolly SJ, Laupacis A, Gent M, Roberts RS, Cairns JA, Joyner C. Canadian Atrial Fibrillation Anticoagulation (CAFA) Study. J Am Coll Cardiol. 1991;18(2):349-55.

6. Ezekowitz MD, Bridgers SL, James KE, Carliner NH, Colling CL, Gornick CC, et al. Warfarin in the prevention of stroke associated with nonrheumatic atrial fibrillation. Veterans Affairs Stroke Prevention in Nonrheumatic Atrial Fibrillation Investigators. N Engl J Med. 1992;327(20):1406-12.

7. Hart RG, Pearce LA, Aguilar MI. Meta-analysis: antithrombotic therapy to prevent stroke in patients who have nonvalvular atrial fibrillation. Ann Intern Med. 2007;146(12):857-67.  http://www.ncbi.nlm.nih.gov/pubmed/17577005

8. Wolf PA, Abbott RD, Kannel WB. Atrial fibrillation as an independent risk factor for stroke: the Framingham Study. Stroke. 1991;22(8):983-8.  http://www.ncbi.nlm.nih.gov/pubmed/1866765

9. Fuster V, Ryden LE, Cannom DS, Crijns HJ, Curtis AB, Ellenbogen KA, et al. ACC/AHA/ESC 2006 guidelines for the management of patients with atrial fibrillation–executive summary: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the European Society of Cardiology Committee for Practice Guidelines (Writing Committee to Revise the 2001 Guidelines for the Management of Patients With Atrial Fibrillation). J Am Coll Cardiol. 2006;48(4):854-906.

10. Tinetti ME, Speechley M, Ginter SF. Risk factors for falls among elderly persons living in the community. N Engl J Med. 1988;319(26):1701-7.  http://www.ncbi.nlm.nih.gov/pubmed/3205267

11. Rosenman MB, Baker L, Jing Y, Makenbaeva D, Meissner B, Simon TA, et al. Why is warfarin underused for stroke prevention in atrial fibrillation? A detailed review of electronic medical records. Curr Med Res Opin. 2012;28(9):1407-14.  http://www.ncbi.nlm.nih.gov/pubmed/22746356

12. Dharmarajan TS, Varma S, Akkaladevi S, Lebelt AS, Norkus EP. To anticoagulate or not to anticoagulate? A common dilemma for the provider: physicians’ opinion poll based on a case study of an older long-term care facility resident with dementia and atrial fibrillation. J Am Med Dir Assoc. 2006;7(1):23-8.

13. Donze J, Clair C, Hug B, Rodondi N, Waeber G, Cornuz J, et al. Risk of falls and major bleeds in patients on oral anticoagulation therapy. Am J Med. 2012;125(8):773-8.

14. Gage BF, Birman-Deych E, Kerzner R, Radford MJ, Nilasena DS, Rich MW. Incidence of intracranial hemorrhage in patients with atrial fibrillation who are prone to fall. Am J Med. 2005;118(6):612-7. http://www.ncbi.nlm.nih.gov/pubmed/15922692

15. Man-Son-Hing M, Nichol G, Lau A, Laupacis A. Choosing antithrombotic therapy for elderly patients with atrial fibrillation who are at risk for falls. Arch Intern Med. 1999;159(7):677-85.

16. Man-Son-Hing M, Laupacis A. Anticoagulant-related bleeding in older persons with atrial fibrillation: physicians’ fears often unfounded. Arch Intern Med. 2003;163(13):1580-6.

17. Fang MC, Chang Y, Hylek EM, Rosand J, Greenberg SM, Go AS, et al. Advanced age, anticoagulation intensity, and risk for intracranial hemorrhage among patients taking warfarin for atrial fibrillation. Ann Intern Med. 2004;141(10):745-52.  http://www.ncbi.nlm.nih.gov/pubmed/15545674

18. Sellers MB, Newby LK. Atrial fibrillation, anticoagulation, fall risk, and outcomes in elderly patients. Am Heart J. 2011;161(2):241-6.  http://www.ncbi.nlm.nih.gov/pubmed/21315204

19. Mant J, Hobbs FD, Fletcher K, Roalfe A, Fitzmaurice D, Lip GY, et al. Warfarin versus aspirin for stroke prevention in an elderly community population with atrial fibrillation (the Birmingham Atrial Fibrillation Treatment of the Aged Study, BAFTA): a randomised controlled trial. Lancet. 2007;370(9586):493-503.

20. Secondary prevention in non-rheumatic atrial fibrillation after transient ischaemic attack or minor stroke. EAFT (European Atrial Fibrillation Trial) Study Group. Lancet. 1993;342(8882):1255-62.  http://www.ncbi.nlm.nih.gov/pubmed/7901582

21. Garwood CL, Corbett TL. Use of anticoagulation in elderly patients with atrial fibrillation who are at risk for falls. Ann Pharmacother. 2008;42(4):523-32.  http://www.ncbi.nlm.nih.gov/pubmed/18334606

Is Personalized Medicine Really the Cure? Looking Through the Lens of Breast Cancer

May 3, 2013

By Jessica Billig

Faculty Peer Reviewed 

Although millions of dollars are spent on cancer research every year, progress toward a cure is less than ideal. Last year the New York Times published a piece about the burgeoning improvements on the genomic front that could lead to a new approach to cancer treatment: “The promise is that low-cost gene sequencing will lead to a new era of personalized medicine, yielding new approaches for treating cancers and other serious diseases” [1]. Through genomic technology, physicians will be able to tailor chemotherapeutic regimens and treatments to each patient’s specific cancer. While this sounds like the holy grail of cancer treatment, it’s not as easy as it seems.

The notion of personalized medicine sprang from the Human Genome Project, which sequenced human DNA and estimated that it contains 30,000-40,000 protein-coding genes [2]. Each type of cancer has a unique genome with recurrent mutations and coding regions that can be exploited as possible drug targets.

An excellent example of the application of genomics in cancer medicine is breast cancer. Gene-expression profiles have been generated that identify different biomarkers for each subtype of breast cancer (estrogen receptor/progesterone receptor/HER2/neu receptor). These biomarkers have shaped the way we currently treat breast cancer. Every subtype of breast cancer produces unique proteins and relies upon discrete growth factors. It is these protein differences that make targeted therapy possible [3]. For example, the discovery of the HER2 receptor and its associated monoclonal antibody, trastuzumab, changed the battlefield of breast cancer. Before the advent of trastuzumab, HER2 positivity was a poor prognostic marker, associated with an increased rate of relapse after surgery. With the addition of trastuzumab to standard chemotherapy, women with metastatic HER2-positive breast cancer who had not received prior chemotherapy had an increase in median time to disease progression from 4.6 to 7.4 months [4]. More recently, the application of HER2-targeted adjuvant therapy in early-stage disease has changed the prognosis of HER2-positive breast cancer forever, preventing recurrences and significantly prolonging survival in patients whose cancer does relapse.

The Cancer Genome Project, along with other research endeavors, began sequencing multiple tumors from each type of cancer. Through sequencing, researchers can identify the mutations that “drive” the cancer to grow, invade, and metastasize. By targeting these driver mutations, therapies can cut to the core of the cancer, destroying the tumor’s foundation [5]. With genotyping becoming more affordable, we will be able to sequence each patient’s tumor to determine which oncogene or tumor suppressor is fueling his or her specific cancer. In a perfect world, this might be the answer, but the molecular biology of cancer is not so straightforward. A 2012 article in the New England Journal of Medicine describes the heterogeneous molecular landscape of a cancer. Intratumor heterogeneity is a major problem, with each tumor undergoing its own evolutionary process within the patient. Every tumor is made up of millions of cells that accumulate mutations at various loci and at different rates. By sequencing just one part of the tumor, physicians may miss the essential region that is driving growth or allowing metastasis. Phylogenetic reconstruction from multiple regions of a patient’s tumor reveals marked branched evolutionary growth. This tumor heterogeneity may stall the prospects of personalized medicine by opening a Pandora’s box of mutations [6]. Because of these disparate mutations, a tumor may harbor many different biomarkers, making it more challenging to find a single perfect drug target for each cancer.

Chemotherapy is a mainstay for shrinking a tumor before surgery or killing residual cancer cells that continue to grow and divide after surgery. Although the side effects of chemotherapy have been reduced by newer and better anti-nausea regimens and the use of growth factors to prevent low blood counts and infections, chemotherapy treatment remains difficult and unpleasant for most. The hope is that chemotherapy will be replaced by more specific targeted therapies with fewer side effects. However, more basic research and clinical trials will be needed to define appropriate targets and to combine therapies so that we may address the important issue of tumor heterogeneity. Personalized medicine is still the goal, but until more work is done at the bench and at the bedside, it may fall short of its promise.

Jessica Billig is a 4th year medical student at NYU School of Medicine

Peer reviewed by Ruth Oratz, MD, Department of Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Markoff J. Cost of gene sequencing falls, raising hopes for medical advances. New York Times. Mar 10, 2012. http://www.nytimes.com/2012/03/08/technology/cost-of-gene-sequencing-falls-raising-hopes-for-medical-advances.html.Accessed March 21, 2012.

2. Lander ES, Linton LM, Birren B, et al. Initial sequencing and analysis of the human genome. Nature. 2001;409(6822):860-921.

3. Sotiriou C, Pusztai L. Gene-expression signatures in breast cancer. N Engl J Med. 2009;360(8):790-800.  http://www.ncbi.nlm.nih.gov/pubmed/19228622

4. Hudis CA. Trastuzumab—mechanism of action and use in clinical practice. N Engl J Med. 2007;357(1):39-51.

5. Stratton MR, Campbell PJ, Futreal PA. The cancer genome. Nature 2009; 458(7239):719-724.  http://www.ncbi.nlm.nih.gov/pubmed/19360079

6. Gerlinger M, Rowan AH, Horswell S, et al. Intratumor heterogeneity and branched evolution revealed by multiregion sequencing. N Engl J Med 2012; 366(10):883-892.

Anal Cancer Screening – A Case for the Anal Pap Smear

January 24, 2013

By Nelson Sanchez, MD

Faculty Peer Reviewed


A 56-year-old homosexual male presents to your clinic asking whether he should have an anal Pap smear. The patient is HIV-positive, has been on HAART for five years, and has no history of opportunistic infections. He denies any anal pain, bleeding, or masses.

While efforts to improve knowledge about colorectal cancer in various communities continue to grow, awareness of anal cancer remains limited and misconceptions persist. Over the past couple of years there has been more discussion of anal cancer, in large part because it was the seemingly unlikely cause of death of the actress Farrah Fawcett. The danger in the way this issue has come to light is that anal cancer might be dismissed as a curious anomaly, unlikely to affect anyone and lacking a screening modality for early detection.

A good screening test should meet certain criteria for its recommended use: early diagnosis of a common disease, a treatable condition, a high sensitivity and specificity, and ease of use. For example, cervical cancer has no symptoms early on, but a cervical pap smear can detect dysplastic cells or early stage cancer. Early detection can lead to complete cure, and the cervical pap smear is both reliable and easy to use.

Anal cancer remains an uncommon cancer with a slowly rising national incidence. The current age-adjusted incidence rate for anorectal cancer is 1.6 per 100,000 men and women per year, and it is estimated that approximately 2,000 men and 3,260 women were diagnosed with cancer of the anus, anal canal, and anorectum in 2010 [1]. The median age at diagnosis is 60 years. Among men, blacks have the highest incidence rate (1.9 per 100,000), and among women, whites have the highest (2.0 per 100,000). The age-adjusted death rate is 0.2 per 100,000 per year. The overall 5-year mortality rate of 35.1% from 2001-2007 was on par with or better than that of more widely recognized malignancies with poor outcomes, such as acute myeloid leukemia, multiple myeloma, gastric cancer, and ovarian cancer [2].

Cancer of the anus develops in the canal’s transition zone at the pectinate (dentate) line [3]. Anal cancer is preceded by the development of anal squamous intraepithelial lesions (ASILs) [4]. Human papillomavirus (HPV) infection is responsible for 90% of ASILs [5]. Other risk factors for ASILs include multiple sexual partners, tobacco use, and immunosuppression. ASILs are further classified into low-grade squamous intraepithelial lesions (LSILs) and high-grade squamous intraepithelial lesions (HSILs). LSILs resolve spontaneously in the majority of cases, while HSILs are a more likely precursor of invasive tumor [3].

The anal Pap smear is an available but underutilized screening tool for anal cancer, with a sensitivity of 50-75% for detection of ASIL and a specificity of 50% [6,7]. Like a cervical Pap smear, the anal Pap test evaluates the morphology of epithelial cells from the sampled region. Unlike a cervical Pap smear, no speculum is required; only a small brush (measuring 3 millimeters) is inserted into the anal canal. Although the test is easily performed, there are numerous barriers to its widespread use. One obstacle is the limited number of physicians who are aware of the test and trained to perform it. Primary care doctors, gastroenterologists, gynecologists, and general surgeons could all potentially perform the anal Pap smear; however, there is no training requirement for anal Pap screening in residency programs. Discomfort with discussing sexual behaviors and testing may pose a significant barrier that dissuades clinical discussion of the test. In addition, insurance coverage for anal Pap smears is very limited.

Another issue raised by anal cancer screening is the question of treatment. Unlike cervical dysplasia and localized cervical cancer, in which large excisions or complete resections have high cure rates (e.g., a 90.9% 5-year survival rate for localized cervical cancer) [8], anal dysplasia and cancer do not have comparably successful treatments. Current treatment modalities for anal dysplasia include topical agents, immune modulation, cryotherapy, laser therapy, and surgery. These treatments often do not lead to cure and are associated with recurrence rates as high as 50-85% [9]. Anal cancer is currently treated with chemoradiation and surgery, depending on oncologic staging, with an overall 5-year mortality rate of 35.1% [1].

Additionally, the cost-effectiveness of anal cancer screening is questionable. In the United States, a screening program is generally considered cost-effective if it costs under $30,000-50,000 per year of life saved (PYLS) [9]. Taking cervical cancer as an example, Pap smears every three years for HIV-negative women cost approximately $11,800 PYLS [10]. In HIV-positive women undergoing yearly cervical Pap smears, the cost is about $13,000 PYLS [11]. Estimates for anal Pap screening showed that annual testing in HIV-positive men costs about $11,000 PYLS, similar to cervical cancer screening [12]. Testing every three years in HIV-negative men was estimated to cost about $7,800 PYLS [13].
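The comparison above reduces to checking each estimate against the accepted threshold. As a trivial sketch using the figures quoted in this paragraph (taking the upper $50,000/PYLS bound):

```python
# Cost-per-year-of-life-saved (PYLS) estimates quoted above, checked
# against the upper end of the commonly cited threshold.
THRESHOLD = 50_000  # dollars per year of life saved

estimates = {
    "cervical Pap q3y, HIV-negative women": 11_800,
    "cervical Pap yearly, HIV-positive women": 13_000,
    "anal Pap yearly, HIV-positive men": 11_000,
    "anal Pap q3y, HIV-negative men": 7_800,
}

for program, cost in estimates.items():
    verdict = "within threshold" if cost <= THRESHOLD else "above threshold"
    print(f"{program}: ${cost:,}/PYLS -> {verdict}")
```

By the U.S. estimates, every program listed falls well within the threshold; the UK analysis discussed next reaches the opposite conclusion, which is why the discrepancy matters.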

However, a cost-effectiveness analysis of anal Pap screening in the United Kingdom did not yield promising results [14]. The UK study examined the cost-effectiveness of screening high-risk HIV-positive men who have sex with men (MSM). The researchers concluded that screening this high-risk group would not generate health improvements at a reasonable cost: the estimated economic burden was £66,000 ($102,227) per quality-adjusted life-year (QALY) gained, well above accepted cost-effectiveness thresholds. The authors suggest that the main difference from the U.S. analysis is that the UK model combines HIV-negative, undiagnosed HIV-positive, and diagnosed HIV-positive MSM, although this explanation does not completely account for the large discrepancy.

Currently there are no national recommendations for anal Pap screening. The test is best suited for specific high-risk populations: those with multiple sexual partners, a history of sexually transmitted disease, or HIV infection or another chronically immunosuppressed state. Among men who have sex with men, the incidence of anal cancer in HIV-positive patients is 69 per 100,000 person-years [15]. Rates of anal cancer among HIV-positive patients have risen during the HAART era because patients are living longer with HIV, allowing time for anal dysplasia to progress to cancer [16]. Screening these patients may yield significant health benefits.

Based on our patient’s HIV status, his increased risk of developing anal cancer, the ease of the screening test, and the potential for life-saving treatment if cancer is diagnosed, the anal Pap should be recommended. The patient should be advised that any abnormal Pap findings will prompt anoscopy and biopsy. If the baseline screening is negative, annual surveillance screening should be discussed with a physician, weighing the risks and benefits of testing. Testing is also recommended if pain, bleeding, or palpable masses develop in the anorectal region.

More research is needed to clarify the controversies surrounding anal cancer screening. Large population-based randomized controlled trials (RCTs) are needed to further examine the survival benefit and cost-effectiveness of screening for and treating anal cancer in high-risk populations. Currently, there is a lack of RCTs to conclusively support or refute the use of anal Pap smears, and it remains unknown when such data will become available. In addition, clinician training and insurance policy changes are needed for more widespread application of this screening modality.

Commentary by Michelle Cespedes, MD, Assistant Professor, Department of Medicine (Infectious Disease and Immunology)

This commentary on the benefit of anal Pap screening in HIV-infected populations is timely and will familiarize health care providers with this simple but underutilized tool, which can improve the health outcomes of our patients. Investigators from the North American AIDS Cohort Collaboration on Research and Design recently analyzed findings from 13 US and Canadian studies. Their data suggest that 3% of all HIV-infected adults (including non-gay HIV-infected men, HIV-infected women, and HIV-infected men who have sex with men (MSM)) will develop anal cancer by age 60. HIV-infected MSM are 80 times more likely to develop anal cancer than HIV-negative men, and HIV-infected non-gay men are 27 times more likely.

This study suggests that anal cancer screening for HIV-infected patients is likely to be cost-effective. The current New York State AIDS Institute guidelines now recommend targeted anal Paps for HIV-infected MSM, individuals with a history of anogenital warts, and women with a history of abnormal cervical or vulvar histology.

Silverberg MJ, Lau B, Justice AC, et al. Risk of anal cancer in HIV-infected and HIV-uninfected individuals in North America. Clin Infect Dis. 2012 Apr; 54(7):1026-34.

Dr. Nelson Sanchez is a former resident at NYU Langone Medical Center and a current Instructor, Clinical Medicine at Memorial Sloan Kettering Hospital

Peer reviewed by Dr. Francois, Assistant Professor of Medicine (Gastroenterology), NYU Langone Medical Center

Image Courtesy of Wikimedia Commons


1. National Cancer Institute’s Surveillance, Epidemiology, and End Results Program (http://seer.cancer.gov/statfacts/html/anus.html)

2. National Cancer Institute’s Surveillance, Epidemiology, and End Results Program (http://seer.cancer.gov/statfacts/)

3. Calore EE, et al. Prevalence of anal cytological abnormalities in women with positive cervical cytology. Diagn Cytopathol 2011;39(5):323-7

4. Oon SF, et al. Perianal condylomas, anal squamous intraepithelial neoplasms and screening: a review of the literature. J Med Screen 2010;17(1):44-9.  http://jms.rsmjournals.com/content/17/1/44.full

5. Hakim AA, et al. Indications and efficacy of the human papillomavirus vaccine.  Curr Treat Options Oncol 2007;8(6):393-401

6. Arain S, et al. The Anal Pap Smear: Cytomorphology of squamous intraepithelial lesions. Cytojournal 2005;2(1):4  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC551597/

7. Ferraris A, et al. Anal pap smear in high-risk patients: a poor screening tool.  South Med J 2008;101(11):1185-6

8. National Cancer Institute’s Surveillance, Epidemiology, and End Results Program.  http://seer.cancer.gov/statfacts/html/cervix.html

9. Matthews WC. Screening for anal dysplasia associated with human papillomavirus. Top HIV Med 2003;11(2):45-9

10. Mandelblatt JS, et al. Benefits and costs of using HPV testing to screen for cervical cancer. JAMA 2002;287(18):2372-81

11. Goldie SJ, et al. The costs, clinical benefits, and cost-effectiveness of screening for cervical cancer in HIV-infected women. Ann Int Med 1999;130(2):97-107

12. Goldie SJ, et al. The clinical effectiveness and cost-effectiveness of screening for anal squamous intraepithelial lesions in homosexual and bisexual HIV-positive men. JAMA 1999;281(19):1822-9

13. Goldie SJ, et al. Cost-effectiveness of screening for anal squamous intraepithelial lesions and anal cancer in human immunodeficiency virus-negative homosexual and bisexual men. Am J Med 2000;108(8):634-41

14. Czoski-Murray C, et al. Cost-effectiveness of screening high-risk HIV-positive men who have sex with men (MSM) and HIV-positive women for anal cancer. Health Technol Assess 2010;14(53):1-101  http://www.unboundmedicine.com/evidence/ub/citation/21083999/Cost_effectiveness_of_screening_high_risk_HIV_positive_men_who_have_sex_with_men__MSM__and_HIV_positive_women_for_anal_cancer_

15. D’Souza G, et al. Incidence and epidemiology of anal cancer in the multicenter AIDS cohort study. J Acquir Immune Defic Syndr 2008;48(4):491-9

16. Reed AC, et al. Gay and bisexual men’s willingness to receive anal Papanicolaou testing. Am J Public Health 2010;100(6):1123-9  http://www.unboundmedicine.com/evidence/ub/citation/20395576/Gay_and_bisexual_men’s_willingness_to_receive_anal_Papanicolaou_testing_

Breaking News: The Downfall of the PSA

May 23, 2012

The United States Preventive Services Task Force stands its ground in this week’s Annals of Internal Medicine and recommends against routine use of the PSA as a screening tool for prostate cancer. This grade D recommendation is grounded in data suggesting a “very small” mortality benefit at the risk of significant overdiagnosis and unnecessary treatment. The PSA should still be used to follow response to treatment in those already diagnosed with prostate cancer. This recommendation has already set off a media frenzy, and we are sure we have not heard the last word on this extremely controversial topic.


Should Patients With Nephrotic Syndrome Receive Anticoagulation?

May 9, 2012

By Jennifer Mulliken

Faculty Peer Reviewed

Case 1:

A 30-year-old African-American male with a history of bilateral pulmonary emboli presents with a 1-week history of bilateral lower extremity edema. Blood pressure is 138/83 mmHg, total cholesterol 385 mg/dL, LDL 250 mg/dL, albumin 2.9 g/dL. Urinalysis shows 3+ protein. Twenty-four-hour urinary protein is 7.2 grams.

Case 2:

A 47-year-old Hispanic male with a history of mild hypertension and venous insufficiency presents with a 3-month history of bilateral lower extremity edema. Blood pressure is 146/95 mmHg, total cholesterol 241 mg/dL, LDL 165 mg/dL, albumin 1.9 g/dL. Urinalysis shows 3+ protein. Twenty-four-hour urinary protein is 4.6 grams.

What is the evidence to support prophylactic anticoagulation in patients with nephrotic syndrome?

Nephrotic syndrome classically presents with heavy proteinuria (>3.5 g per day), hypoalbuminemia, edema, and hyperlipidemia. Damage to the glomerular basement membrane results in the loss of either charge or size selectivity, which then results in the leakage of glomerular proteins such as albumin, clotting factors, and immunoglobulins.[1] The heavy proteinuria seen in affected patients leads to a series of clinically important sequelae, including sodium retention, hyperlipidemia, greater susceptibility to infection, and a higher risk of both venous thromboembolism (VTE) and arterial thromboembolism (ATE).

The predisposition to a hypercoagulable state in nephrotic syndrome results from the urinary loss of clotting inhibitors such as antithrombin III and plasminogen. Hypercoagulability, in turn, can lead to a variety of complications including pulmonary embolism, renal vein thrombosis, and recurrent miscarriage. The incidence of both venous and arterial thrombosis is much higher in patients with nephrotic syndrome than in the general population. Mahmoodi and colleagues' retrospective study of 298 patients with nephrotic syndrome followed for 10 years found annual absolute risks of venous and arterial thromboembolism of 1.02% and 1.48%, respectively.[2] These rates are approximately 8 times higher than in the general population (matched for age and sex). The authors also found that the risk of venous and arterial thrombosis is highest in the first 6 months after diagnosis.[2-4] The risk of thrombosis varies with the underlying cause of nephrotic syndrome: it is highest in membranous nephropathy, followed by membranoproliferative glomerulonephritis and minimal change disease.[5,6] While minimal change disease frequently responds to treatment with corticosteroids, membranous nephropathy is more difficult to treat and therefore more likely to lead to thrombosis.

Renal vein thrombosis typically presents with flank pain, gross hematuria, and loss of renal function, while pulmonary embolus usually presents with dyspnea, pleuritic chest pain, and tachypnea. A surprising number of patients with nephrotic syndrome who experience thromboembolic events present without any symptoms at all. Only one-tenth of patients with renal vein thrombosis and one-third of patients with pulmonary emboli are symptomatic.[5] With regard to renal vein thrombosis, Chugh and colleagues noted that because the development of venous occlusion is often slow and incomplete in patients with nephrotic syndrome, the clinical features of thrombosis are not easily distinguished from the primary renal disease.[6]

The overall risk of ATE in patients with nephrotic syndrome is related to both the glomerular filtration rate and the classic risk factors for atherosclerosis.[2] Increased risk of VTE, in contrast, is generally associated with high rates of proteinuria, low serum albumin levels, high fibrinogen levels, low antithrombin III levels, and hypovolemia.[7] Unfortunately, these are relatively unreliable predictors of patient outcomes. That potentially fatal thrombotic events can be clinically silent in nephrotic patients has important implications for treatment. Given that the incidence of thromboembolic complications in these patients is higher than in the general population, anticoagulant prophylaxis must be considered.

The likelihood of benefit from prophylactic anticoagulation depends on both the possibility of future thrombotic events, as discussed above, and the risks associated with anticoagulation, such as intracranial hemorrhage and gastrointestinal bleeding. Unfortunately, data on prophylactic therapy in nephrotic syndrome are seriously limited; there are no firm recommendations for or against anticoagulation. The decision to treat prophylactically must therefore be individualized, based on risk factors and prior history.

In circumstances where a patient has demonstrated a tendency toward hypercoagulability, such as the previous pulmonary emboli in case 1 above, prophylactic anticoagulation would likely be warranted. In this case the benefit of anticoagulation outweighs the risk, particularly in the first 6 months after diagnosis when the risk of thromboembolism is highest. Patients with massive proteinuria and very low albumin are at especially high risk of VTE, and in these cases prophylactic anticoagulation should probably be given.

The patient in case 2 has milder disease and no history of thrombosis. Many physicians favor a more conservative approach to prophylaxis in this setting.* The risks of chronic anticoagulation frequently outweigh the benefits. In addition, while nephrotic syndrome predisposes to a hypercoagulable state, the risk of thrombosis is not necessarily lowered by anticoagulation. For example, the loss of antithrombin III in the urine contributes to the risk of hypercoagulability, but it also decreases the effectiveness of heparin. While this patient’s venous insufficiency increases the likelihood of deep vein thrombosis, chronic anticoagulation in a patient with no history of hypercoagulability seems unnecessary. That being said, no studies have addressed this issue and therefore practice will vary considerably.

The lack of evidence to support or refute prophylactic anticoagulation in patients with nephrotic syndrome means that the decision to treat must be made on a case-by-case basis. Special consideration should be given to patients with a demonstrated history of hypercoagulability and to those with known hypercoagulable states. In addition, the literature suggests that patients should be monitored closely for evidence of thrombosis in the first 6 months after diagnosis. In all cases the risk of hemorrhage should be weighed carefully against the potential benefits of anticoagulation.

* Many thanks to Drs. Jerome Lowenstein, Gregory Mints, Manish Ponda, and Joseph Weisstuch for their input on this topic.

Commentary by Dr. Jerome Lowenstein

Jennifer Mulliken makes a reasonable argument for individualizing the decision to anticoagulate, as there are no appropriate trials comparing conservative management with anticoagulation. Unfortunately, individualization is not very easy when there are no good markers of high risk and there is no good evidence of the effect of anticoagulation in renal vein thrombosis.

Jennifer Mulliken is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Jerome Lowenstein, MD, Department of Medicine (Nephrology), NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Orth SR, Ritz E. The nephrotic syndrome. N Engl J Med. 1998;338(17):1202-1211.  http://www.nejm.org/doi/full/10.1056/NEJM199804233381707

2. Mahmoodi BK, ten Kate MK, Waanders F, et al. High absolute risks and predictors of venous and arterial thromboembolic events in patients with nephrotic syndrome: results from a large retrospective cohort study. Circulation. 2008;117(2):224-230.

3. Anderson FA Jr, Wheeler HB, Goldberg RJ, et al. A population-based perspective of the hospital incidence and case-fatality rates of deep vein thrombosis and pulmonary embolism. The Worcester DVT Study. Arch Intern Med. 1991;151(5):933-938.

4. Thom TJ, Kannel WB, Silbershatz H, D’Agostino RB. Incidence, prevalence and mortality of cardiovascular diseases in the United States. In: Alexander RW, Schlant RC, Fuster V, eds. Hurst’s The Heart. 9th ed. New York, NY: McGraw-Hill; 1998: 3.

5. Llach F, Papper S, Massry SG. The clinical spectrum of renal vein thrombosis: acute and chronic. Am J Med. 1980;69(6):819-827.  http://www.amjmed.com/article/S0002-9343(80)80006-4/abstract

6. Chugh KS, Malik N, Uberoi HS, et al. Renal vein thrombosis in nephrotic syndrome – a prospective study and review. Postgrad Med J. 1981;57(671):566-570.

7. Robert A, Olmer M, Sampol J, Gugliotta JE, Casanova P. Clinical correlation between hypercoagulability and thrombo-embolic phenomena. Kidney Int. 1987;31(3):830-835.

What are the Barriers to Using Low Dose CT to Screen for Lung Cancer?

February 23, 2012

By Benjamin Lok

Faculty Peer Reviewed

Lung cancer is the most common cause of cancer deaths globally [1] and responsible for an estimated 221,120 new cases and 156,940 deaths in 2011 in the United States.[2] Presently, the United States Preventive Services Task Force, the National Cancer Institute (NCI), the American College of Chest Physicians, and most other evidence-based organizations do not recommend screening for lung cancer with chest x-ray or low-dose helical computed tomography (CT) due to inadequate evidence to support mortality reduction.[3] This recommendation, however, may soon change.


In October 2010, the NCI announced that the National Lung Screening Trial (NLST) was concluded early because the study showed that low-dose CT screening, when compared with screening by chest radiography, resulted in a 20.0% relative reduction in lung cancer-related mortality and an all-cause mortality reduction of 6.7%. The number needed to screen with low-dose CT to prevent one lung cancer death was 320.[4] This report, published in the August 4th, 2011 issue of the New England Journal of Medicine, is the first randomized controlled trial of lung cancer screening to show a significant mortality benefit.

The trial enrolled 53,454 high-risk current and former smokers aged 55 to 74 years with a history of at least 30 pack-years. Former smokers (52% of the total) had to have quit within the previous 15 years. Participants underwent three annual screenings with low-dose CT or chest X-ray and were then followed for an additional 3.5 years. The rate of suspicious screening results was more than three-fold higher in the low-dose CT group than in the radiography group across the three screening rounds (24.2% vs 6.9%). More than 90% of the positive screening tests in the first round led to follow-up, mainly further imaging; invasive procedures were rarely performed. More cancers were diagnosed after the screening period in the chest radiography group than in the low-dose CT group, suggesting that radiography missed more cancers during the screening period. Furthermore, cancers detected in the low-dose CT arm were more likely to be early-stage than those detected by chest radiography.


Lung cancer-specific deaths were 247 and 309 per 100,000 person-years in the low-dose CT and chest radiography groups, respectively, a statistically significant difference (P=0.004).[4] The internal validity of the study is strong, based on similar baseline characteristics and rates of follow-up between the two study groups.[4-6] Whether these results can be applied to the general population, however, is uncertain. Trial participants were, on average, younger urban dwellers with a higher level of education than a random sample of smokers 55 to 74 years old,[4-5] which might have increased adherence in the study. Furthermore, the radiologists interpreting the screening images had additional training in reading low-dose CT scans and presumably greater experience due to the trial's high scan volume.
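The reported ~20% relative mortality reduction can be recovered from these per-person-year rates with simple arithmetic. A minimal sketch (note that the trial's published number needed to screen of 320 was computed from cumulative per-participant deaths rather than these rates, so it is not reproduced here):

```python
# Lung-cancer death rates per 100,000 person-years in the two NLST arms,
# as reported in the trial.
deaths_ct = 247    # low-dose CT arm
deaths_cxr = 309   # chest radiography arm

# Relative risk reduction: (control rate - intervention rate) / control rate.
rrr = (deaths_cxr - deaths_ct) / deaths_cxr
print(f"relative risk reduction: {rrr:.1%}")   # roughly 20%

# Absolute risk reduction, expressed per person-year of follow-up.
arr = (deaths_cxr - deaths_ct) / 100_000
print(f"absolute risk reduction: {arr:.5f} per person-year")
```

The small absolute difference alongside the large relative difference is exactly why the discussion below turns to cost-effectiveness and patient selection.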


One major barrier to implementation of any screening program is its cost. Eventual cost-effectiveness analysis of the NLST results will answer this question more definitively; in the meantime, a recent Australian study provides some provisional guidance.[7] Manser and colleagues examined the cost-effectiveness of CT screening in a baseline cohort of high-risk male current smokers aged 60 to 64 with an assumed annual lung cancer incidence of 552 per 100,000 and determined that the incremental cost-effectiveness ratio was $105,090 per quality-adjusted life-year (QALY) saved.[7] This is below the generally accepted upper limit of $113,000 per QALY in the United States, but far above the $50,000-per-QALY threshold that many authors of cost-effectiveness analyses advocate.[8] The NLST study population had an approximate annual lung cancer incidence of 608.5 per 100,000,[4] similar to the incidence in the Australian analysis. Though this extrapolation is purely speculative, it suggests that if the upper limit of $113,000 per QALY were the cut-off, low-dose CT screening in the United States may be cost-effective.
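For readers unfamiliar with the metric, an incremental cost-effectiveness ratio is simply the extra cost of the new strategy divided by the extra quality-adjusted life-years it buys. A minimal sketch with made-up illustrative numbers (the Manser figure came from a far richer decision model; these inputs are chosen only so the ratio lands near it):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars spent per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical illustration: the screening strategy costs $26,250 more per person
# and yields 0.25 additional QALYs -> $105,000 per QALY gained.
print(icer(cost_new=26_750, cost_old=500, qaly_new=10.25, qaly_old=10.0))  # 105000.0
```

A strategy is then judged against a willingness-to-pay threshold such as the $50,000 or $113,000 per QALY figures discussed above.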


A second important issue is identifying the patients most likely to benefit from CT screening. In the Australian study, screening a risk group with an annual lung cancer incidence of 283 per 100,000 cost $278,219 per QALY saved.[7] The national incidence in the US in 2006 was 63.1 per 100,000 person-years.[9] Screening the average person in the US general population would therefore require an astronomical expenditure of resources (greater than $1 million per QALY saved), dramatically increase the false-positive rate, and promote unnecessary exposure to carcinogenic radiation. Accordingly, only high-risk patients (eg, those of advanced age with a positive family history or heavy smoking history) should be considered for this screening modality.


The United Kingdom Lung Screening Trial investigators listed other issues that will need to be addressed prior to implementation of a screening program:[10]

1. Synchronization of CT technique and scan interpretations

2. Value of the diagnostic work-up techniques for positive screening findings and establishing standards for follow-up

3. Optimal surgical management of detected nodules in patients

4. Optimal screening interval for both screen-negative and screen-positive patients

5. Continued study and collaboration by academic organizations, federal institutions and policymakers.


With all this in mind, how are we to counsel patients interested in lung cancer screening? First, only high-risk patients for lung cancer should be considered for low-dose CT screening. Even in the high-risk NLST cohort, positive images were false roughly 95% of the time in both study arms.[4] In patients at lower risk, these false-positive rates will undoubtedly be much higher. Second, patients should be informed about the potential harms from detection of benign abnormalities requiring follow-up and potential invasive interrogation, which can result in adverse outcomes. Finally, even with the exciting revelation of mortality reduction by a lung cancer screening modality, smoking cessation will remain one of the most important interventions in reducing mortality from lung cancer.

Benjamin Lok is a 4th year medical student at NYU School of Medicine

Peer reviewed by Craig Tenner, MD, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Parkin DM, Bray F, Ferlay J, Pisani P. Global cancer statistics, 2002. CA Cancer J Clin. 2005;55(2):74-108.

2. American Cancer Society. Cancer facts and figures 2011. http://www.cancer.org/Research/CancerFactsFigures/CancerFactsFigures/cancer-facts-figures-2011.  Accessed July 7, 2011.

3. National Cancer Institute: PDQ® Lung Cancer Screening. 2011; http://cancer.gov/cancertopics/pdq/screening/lung/HealthProfessional. Accessed July 21, 2011.

4. National Lung Screening Trial Research Team. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med. 2011;365(5):395-409.

5. National Lung Screening Trial Research Team. Baseline characteristics of participants in the randomized national lung screening trial. J Natl Cancer Inst. 2010;102(23):1771-1779.

6. Sox HC. Better evidence about screening for lung cancer. N Engl J Med. 2011;365(5):455-457.

7. Manser R, Dalton A, Carter R, Byrnes G, Elwood M, Campbell DA. Cost-effectiveness analysis of screening for lung cancer with low dose spiral CT (computed tomography) in the Australian setting. Lung Cancer. May 2005;48(2):171-185.

8. Weinstein MC. How much are Americans willing to pay for a quality-adjusted life year? Med Care. 2008;46(4):343-345.

9. American Lung Association. State of lung disease in diverse communities, 2010. http://www.lungusa.org/assets/documents/publications/lung-disease-data/solddc_2010.pdf. Accessed July 28, 2011.

10. Field JK, Baldwin D, Brain K, et al. CT screening for lung cancer in the UK: position statement by UKLS investigators following the NLST report. Thorax. 2011;66(8):736-737.

To Premed or Not to Premed: Are Tylenol and Benadryl Really Necessary Prior to All Transfusions?

January 19, 2012

By Robert Gianotti, MD

Faculty Peer Reviewed

Case: Mr. T is a 32-year-old man being treated by the oncology service for acute myelogenous leukemia. You are the night float intern covering overnight when the nurse calls to inform you that his CMV-negative platelets have finally arrived from the blood bank. The nurse notices that the day team has not ordered Benadryl or Tylenol to be given prior to the transfusion and asks if you could place the order. As you start to enter the premedications, you stop and ask yourself whether this is really necessary. After all, Mr. T is an avid Yankees fan and would rather not miss the big game when the Benadryl knocks him out. You call the nurses' station and request that the premedications be held. Is Mr. T now at increased risk for a transfusion reaction?

Premedication with diphenhydramine and acetaminophen before blood product transfusion is a common and accepted practice throughout US hospitals, with over 50% of blood product transfusions given with these premedications (1). In most cases, we give these medications to patients without a history of a transfusion reaction. In 2001, approximately 14 million units of packed red blood cells (PRBCs) and 1.5 million units of platelets were given in the US (2). Common reactions to these blood products are febrile non-hemolytic transfusion reactions (FNHTR) and allergic reactions. FNHTR is the most common adverse reaction to blood products and is defined as a rise in temperature of >1 degree Celsius above baseline with associated chills and/or rigors (3). The causes of these reactions are likely multifactorial and may include the release of cytokines from cellular components and recipient antibodies directed against donor leukocytes or HLA antigens.

Allergic reactions are defined by urticaria, pruritus, and/or pulmonary symptoms and hypotension. Many of these allergic reactions are characterized as type 1 hypersensitivity and are mediated by IgE and mast cell activation, although evidence exists that many are mediated by IgG interaction with plasma proteins such as complement and IgA (4). Acetaminophen and diphenhydramine are administered with the majority of blood transfusions in the US as prophylaxis against fever, generated via cyclooxygenase-induced mechanisms following cytokine release, and against histamine-mediated reactions from mast cell degranulation. The logic is straightforward, but does it work, and is the cost worth it?

A review of the literature by Geiger and Howard (2) found wide variability in the incidence of FNHTR, with the rate of reaction to platelets ranging from 0.09% to 27% and a similar range observed for allergic reactions. The differences are hypothesized to be multifactorial, with factors such as premedication use, storage technique, and reporting rates contributing to the large variability.

To date, two prospective, randomized, controlled trials have examined the effectiveness of acetaminophen and diphenhydramine in preventing FNHTR and allergic reactions. Wang et al. (2002) conducted the first randomized trial of the effect of premedication on reactions to platelet transfusion in patients with hematologic malignancy (5). Fifty-five patients were enrolled for a total of 122 leukoreduced, single-donor platelet transfusions. The treatment arm received 650 mg Tylenol orally and 25 mg Benadryl intravenously. Patients with a history of hemolytic reactions, as well as those with fever within 24 hours of transfusion, were excluded; all administered platelets were the same age. The overall rate of non-hemolytic transfusion reactions (NHTR) was similar in the treatment and placebo groups, with NHTR occurring in 15.4% of treatment transfusions and 15.2% of placebo transfusions. Patients with a history of NHTR were more likely to experience a transfusion reaction (25.9% vs. 11.3%) but were not protected by premedication. The results of this small randomized study suggest that premedication is ineffective in preventing non-hemolytic transfusion reactions.

Kennedy et al. (2008) published the second randomized trial that failed to demonstrate the effectiveness of premedication in preventing NHTR (6). In this study, a total of 4199 transfusions (both platelets and PRBCs) were administered, with 2008 preceded by 500 mg acetaminophen and 25 mg diphenhydramine and 2191 by placebo. NHTR rates did not differ significantly between groups (1.4% premed vs. 1.5% placebo, p=0.43). Subgroup analysis revealed that febrile reactions occurred at a very low rate, with fewer in the treatment group (0.35% premed vs. 0.64% placebo, p=0.084). This evidence is relatively weak: premedication would need to be given before 344 transfusions to prevent one febrile reaction, an absolute risk reduction of 0.29%. It is important to note that these low NHTR rates occurred in the setting of 100% bedside leukoreduction with a leukofilter. In addition, a febrile reaction was classified as a temperature >100.5 degrees Fahrenheit or a >1 degree Fahrenheit change in temperature, which may have increased false-positive rates in the bone marrow transplant and leukemic patients. Another finding, which approached but did not reach statistical significance, was the number of transfusions required before a NHTR developed: more patients in the placebo group than in the premedication group developed a NHTR after 10 transfusions (10.9% vs. 2.4%, p=0.074).
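The number-needed-to-treat figure follows directly from the subgroup rates. A quick check (1/0.0029 rounds to 345 rather than the quoted 344, presumably because the published rates are themselves rounded):

```python
# Febrile-reaction rates from the Kennedy trial subgroup analysis.
rate_placebo = 0.0064  # 0.64%
rate_premed = 0.0035   # 0.35%

arr = rate_placebo - rate_premed  # absolute risk reduction, ~0.29%
nnt = 1 / arr                     # premedications given per febrile reaction prevented
print(f"ARR = {arr:.2%}, NNT ~ {nnt:.0f}")
```

An NNT in the hundreds for a benign, self-limited outcome is the crux of the argument against routine premedication.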

Several non-randomized, non-blinded studies also suggest that premedication does not significantly reduce the incidence of NHTR. Patterson et al. (2000) showed that after the introduction of premedication administration guidelines, premedication use fell by approximately 25% (7). Even with this reduction, rates of NHTR, including fever, chills, and hives, did not increase significantly. Sanders et al. (2005), in a retrospective study of pediatric oncology patients, found a paradoxical, non-significant increase in the odds of a NHTR when acetaminophen and diphenhydramine were administered (1).

It is also important to analyze the cost of this routine practice. Tylenol and Benadryl, although over-the-counter, are not benign medications; they are associated with adverse events including hepatotoxicity, urinary retention, altered mental status, somnolence, and agitation. These adverse effects must be weighed carefully in the transfusion population, who may already have underlying hepatic or neurologic dysfunction and higher overall morbidity from administered medications. Second, premedication with Tylenol may mask the fever curve that we rely on for making clinical decisions. If fever is masked, valuable information is lost, and lifesaving interventions based on blood culture data and antibiotic administration may be delayed. This is particularly important in neutropenic patients and those with life-threatening infections such as endocarditis. In addition, these medications carry financial and time costs. Sanders et al. (1) calculated a cost of $40,000 per year for premedications at their institution, along with over 700 hours of nursing and pharmacy time spent administering them.

The minimal absolute risk reduction from premedication against what are often benign reactions, the potential for adverse effects, delays in treatment decisions, and the significant cost and manpower burden should make us all stop and reconsider the routine administration of Tylenol and Benadryl. In the end, Mr. T may suffer more from missing the Yankees beat the Red Sox than he would from missing his premeds.

Dr. Robert Gianotti  is associate editor of Clinical Correlations and current Chief Resident at NYU Langone Medical Center

Peer reviewed by Susan Talbot, MD,  Assistant Professor, Dept. Medicine (hematology/oncology), NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Sanders RP, Maddirala SD, Geiger TL, et al. Premedication with acetaminophen or diphenhydramine for transfusion with leucoreduced blood products in children. Br J Haematol 2005;130:781-787.

2. Geiger TL and Howard SC. Acetaminophen and diphenhydramine premedication for allergic and febrile nonhemolytic transfusion reactions: good prophylaxis or bad practice? Transfusion Medicine Reviews 2007;21:1-12.

3. Dzieczkowski Jeffery S, Anderson Kenneth C, “Chapter 107. Transfusion Biology and Therapy” (Chapter). Fauci AS, Braunwald E, Kasper DL, Hauser SL, Longo DL, Jameson JL, Loscalzo J: Harrison’s Principles of Internal Medicine, 17th edition.

4. Gilstad CW. Anaphylactic transfusion reactions. Curr Opin Hematol 2003;10:419-423.

5. Wang SE, Lara Jr PN, Lee-Ow A, et al. Acetaminophen and diphenhydramine as premedication for platelet transfusions: a prospective randomized double-blind placebo-controlled trial.Am J Hematol 2002;70:191-194.

6. Kennedy LA, Case LD, Hurd DD, Cruz JM, and Pomper GJ. A prospective, randomized, double-blind controlled trial of acetaminophen and diphenhydramine pretransfusion medication versus placebo for the prevention of transfusion reactions. Transfusion 2008; 48:2285-2291.

7. Patterson BJ, Freedman J, Blanchette V, et al. Effect of premedication guidelines and leukoreduction on the rate of febrile nonhaemolytic platelet transfusion reactions. Transfusion Med 2000; 10:199-206.

What Is the Significance of Monoclonal Gammopathy of Undetermined Significance (MGUS)?

December 22, 2011

By Maryann Kwa, MD

Faculty Peer Reviewed

Clinical Case:

A.D. is a healthy 65-year-old African American male with no prior medical history who presents to his primary care physician for an annual checkup. He feels well and has no complaints. Physical exam is normal. Routine laboratory tests are ordered and are significant for an elevated total serum protein with normal albumin. A serum protein electrophoresis (SPEP) is then performed. The patient is found to have a monoclonal protein (M protein) of 12 g/L, IgG type, with a normal free light chain ratio. All other results, including hemoglobin, creatinine, and calcium, are within normal ranges. A skeletal survey is negative for lytic lesions. A bone marrow biopsy reveals <10% plasma cells. The patient is referred to a hematologist, who informs him that he has monoclonal gammopathy of undetermined significance (MGUS). The patient asks whether any treatment is recommended.

Monoclonal gammopathy of undetermined significance (MGUS) is a premalignant plasma cell dyscrasia defined as a serum M protein <30 g/L, clonal plasma cells <10% in the bone marrow, and the absence of end-organ damage that can be attributed to a plasma cell proliferative disorder (see table) [1]. End-organ damage is defined by the presence of hypercalcemia, renal insufficiency, anemia, and bony lesions (remembered by the acronym CRAB). MGUS is usually discovered incidentally during routine laboratory testing. It affects approximately 3% of individuals older than 50 years [2]. Prevalence is twice as high among African Americans and is lower in Asians. Older age, male sex, family history, and immunosuppression also increase the risk of MGUS [3]. So why do we worry about MGUS? It is important to clinicians because it carries a 1% annual risk of progression to multiple myeloma or a related malignancy [4]. According to the International Myeloma Working Group (IMWG), the diagnostic criteria for multiple myeloma are clonal plasma cells ≥10% on bone marrow biopsy, the presence of monoclonal protein in either serum or urine, and evidence of end-organ damage related to the plasma cell disorder. A prospective study by Landgren et al. (2009) demonstrated that multiple myeloma is consistently preceded by MGUS [5]. Of the approximately 77,400 healthy adults in the United States who were observed for up to ten years, 71 developed multiple myeloma. Prior evidence of MGUS was demonstrated in all of these patients by assays for protein abnormalities in prediagnostic serum samples.

The pathophysiology of the transition from normal plasma cells to MGUS to multiple myeloma involves many overlapping oncogenic events [6]. The first step in the pathogenesis is usually an abnormal response to antigenic stimulation, possibly mediated by overexpression of interleukin (IL)-6 receptors and dysregulation of the cyclin D gene. These changes result in the development of primary cytogenetic abnormalities, either hyperdiploidy or immunoglobulin heavy chain translocation (the most common are t(4;14), t(14;16), t(6;14), t(11;14), and t(14;20)). The progression of MGUS to multiple myeloma is likely secondary to a random second hit, the manner of which is unknown. Mutations with Ras and p53, methylation of p16, myc abnormalities, and induction of angiogenesis have also been associated with progression.

Since MGUS was first described approximately thirty years ago, there have been new concepts and advances in its classification and management. There are currently three distinct clinical types of MGUS: 1) non-IgM (IgG or IgA); 2) IgM; and 3) light chain. Non-IgM MGUS is the most common type; its more advanced premalignant stage of plasma cell proliferation, smoldering (asymptomatic) multiple myeloma, is characterized by a higher risk of progression to multiple myeloma. Smoldering myeloma is defined by a serum monoclonal protein (IgG or IgA) ≥30 g/L and/or clonal plasma cells ≥10% in bone marrow, with absence of end-organ damage [7]. It is associated with a 10% annual risk of progression to multiple myeloma. The IgM subtype of MGUS, in contrast, predisposes mainly to Waldenström macroglobulinemia and less frequently to IgM multiple myeloma. Finally, the light chain type accounts for approximately 20% of new cases of multiple myeloma.

Regarding the outcome of MGUS, Kyle et al. (2002) published a cohort study of 1384 patients from Minnesota with MGUS who were followed for up to 35 years (median, 15.4 years) [8]. Eight percent of patients progressed: to multiple myeloma (n=75), IgM lymphoma (n=7), AL amyloidosis (n=10), leukemia (n=3), or plasmacytosis (n=1). The cumulative probability of progression was 12% at 10 years, 25% at 20 years, and 30% at 25 years. The overall risk of progression was about 1% per year.
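It is worth noting that a constant 1% annual risk compounds to cumulative figures somewhat below those Kyle observed, a reminder that the per-year rate is an average rather than a constant hazard. A quick sketch of the compounding arithmetic:

```python
def cumulative_risk(annual_risk, years):
    """Chance of at least one progression event, assuming a constant,
    independent annual risk (a simplifying assumption, not the study's model)."""
    return 1 - (1 - annual_risk) ** years

# A flat 1%/year predicts ~9.6%, ~18.2%, and ~22.2% at 10, 20, and 25 years,
# versus the observed 12%, 25%, and 30%.
for years in (10, 20, 25):
    print(f"{years} years: {cumulative_risk(0.01, years):.1%}")
```

The gap suggests the observed progression risk does not simply decay toward zero over time, which is one argument for lifelong follow-up.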

When evaluating a patient for the first time, a complete history and physical examination should be performed, with emphasis on symptoms and findings that may suggest multiple myeloma. A complete blood count, serum creatinine, serum calcium, and a qualitative test for urine protein should also be obtained. If serum abnormalities or proteinuria are found, electrophoresis and immunofixation are indicated. At the time of diagnosis, it is very difficult to predict which patients with MGUS will remain stable and which will progress. Patients with a non-IgG type, a high serum M protein level (≥15 g/L), or an abnormal serum free light chain ratio (i.e., the ratio of free immunoglobulin kappa to lambda light chains in the serum) are at increased risk of progression to smoldering myeloma and then to multiple myeloma.

In June 2010, the IMWG released consensus guidelines for monitoring and managing patients with MGUS and smoldering myeloma. Patients with MGUS are stratified into low-, intermediate-, and high-risk categories. If the serum monoclonal protein is <15 g/L, the isotype is IgG, and the free light chain ratio is normal, the risk of eventual progression to multiple myeloma or a related malignancy is low. In this low-risk setting, a baseline bone marrow examination or skeletal survey is not routinely indicated if the clinical evaluation and laboratory values are consistent with MGUS. Patients should be followed with serum protein electrophoresis (SPEP) 6 months after diagnosis and, if stable, can be followed every 2-3 years (or sooner if symptoms suggestive of disease progression arise).

However, patients who fall into the intermediate- and high-risk MGUS categories are managed differently. They typically have a serum monoclonal protein >15 g/L, an IgA or IgM isotype, and/or an abnormal free light chain ratio. In this situation, a bone marrow biopsy should be performed at baseline, with both conventional cytogenetics and fluorescence in situ hybridization. These patients are followed with SPEP, complete blood count, and serum calcium and creatinine levels 6 months after diagnosis and then yearly for life. It is important to note, however, that a bone marrow biopsy and skeletal survey are always indicated if a patient with presumed MGUS has unexplained anemia, renal insufficiency, hypercalcemia, or skeletal lesions.
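The three risk factors described above (non-IgG isotype, M protein ≥15 g/L, abnormal free light chain ratio) amount to a simple counting rule. The sketch below illustrates that stratification logic only, not a clinical tool; the function name is hypothetical, and the 0.26-1.65 normal range assumed for the kappa/lambda free light chain ratio is a commonly quoted laboratory reference interval, not a cutoff specified in this article:

```python
NORMAL_FLC_RATIO = (0.26, 1.65)  # assumed normal kappa/lambda reference range

def mgus_risk_factor_count(m_protein_g_per_l, isotype, flc_ratio):
    """Count the three MGUS progression risk factors named in the text.

    0 factors  -> low risk: no routine baseline marrow or skeletal survey;
                  SPEP at 6 months, then every 2-3 years if stable.
    >=1 factor -> intermediate/high risk: baseline bone marrow biopsy with
                  cytogenetics and FISH; SPEP, CBC, calcium, and creatinine
                  at 6 months, then yearly for life.
    """
    low, high = NORMAL_FLC_RATIO
    return sum([
        m_protein_g_per_l >= 15,         # high serum M protein
        isotype.upper() != "IGG",        # non-IgG isotype
        not (low <= flc_ratio <= high),  # abnormal free light chain ratio
    ])

# An IgG MGUS with low M protein and a normal ratio carries zero factors:
print(mgus_risk_factor_count(10, "IgG", 1.0))  # 0 (low risk)
print(mgus_risk_factor_count(20, "IgA", 2.4))  # 3 (high risk)
```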

Finally, patients with smoldering (asymptomatic) multiple myeloma should always receive a baseline bone marrow biopsy and a mandatory skeletal survey. An MRI of the spine and pelvis is also recommended because it can detect occult lesions that predict more rapid progression to multiple myeloma. Wang et al. (2003) estimated the risk of progression in 72 patients with smoldering myeloma who underwent a baseline MRI of the spine [9]. The median time to progression was significantly shorter with an abnormal MRI than with a normal MRI (1.5 years versus 5 years). If laboratory values, bone marrow biopsy, and MRI results are stable, these studies should be repeated every 4-6 months for one year and, if still stable, every 6 to 12 months thereafter.

An estimated 20,580 new cases of multiple myeloma were diagnosed in the United States in 2009. Median survival is about 3 to 4 years following diagnosis, although survival has improved with newer therapies such as autologous stem cell transplantation, immunomodulatory drugs (thalidomide and lenalidomide), and proteasome inhibitors (bortezomib) [10]. Given these outcomes, should patients with the precursor conditions MGUS and smoldering myeloma also be treated? According to the current IMWG guidelines, MGUS and smoldering myeloma should not be treated outside of clinical trials. Patients with MGUS are relatively healthy and have a low lifetime risk of progression.

On the other hand, patients with smoldering myeloma have a relatively high rate of progression to multiple myeloma, about 10% yearly. Prior to the advent of novel therapies, a 1993 randomized controlled trial of melphalan-prednisone given initially versus at progression to multiple myeloma showed no significant difference in response rate or overall survival [11]. A single-group trial in 2008 using thalidomide in 76 patients with smoldering myeloma failed to show a clear benefit [12]. Currently, a study by Mateos et al. (2009) randomizing patients with smoldering myeloma to lenalidomide-dexamethasone versus active surveillance is ongoing [13]. At 19 months of follow-up, interim analysis showed that approximately 50% of patients in the surveillance group had progressed to multiple myeloma, while none of the patients in the treatment group had. In general, it remains unknown whether treating patients with smoldering myeloma improves overall survival.

Returning to patient A.D. in the clinical case, he is diagnosed with low-risk MGUS. Of note, he underwent a bone marrow biopsy and skeletal survey, which are not routinely indicated in this setting. His hematologist advised repeating the SPEP in 6 months. If he remains stable at that time, he can be followed every two to three years. No treatment is indicated at this stage.

Table: Diagnostic criteria for the plasma cell disorders

Disorder and disease definition

Monoclonal gammopathy of undetermined significance (MGUS)
  • Serum monoclonal protein <30 g/L
  • Clonal bone marrow plasma cells <10%
  • No end organ damage attributable to the plasma cell proliferative disorder (hypercalcemia, renal insufficiency, anemia, or bone lesions)

Smoldering (asymptomatic) multiple myeloma
  • Serum monoclonal protein (IgG or IgA) ≥30 g/L and/or
  • Clonal bone marrow plasma cells ≥10%
  • No end organ damage

Multiple myeloma
  • Clonal bone marrow plasma cells ≥10%
  • Presence of serum and/or urinary monoclonal protein
  • Evidence of end organ damage:
    - Hypercalcemia: serum calcium ≥11.5 mg/dL
    - Renal insufficiency: serum creatinine >2 mg/dL or estimated creatinine clearance <40 mL/min
    - Anemia: normochromic, normocytic, with hemoglobin >2 g/dL below the lower limit of normal or <10 g/dL
    - Bone lesions: lytic lesions, severe osteopenia, or pathologic fractures

Table adapted from Kyle RA, et al. Leukemia. 2010;24:1121-1127.
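The table's cutoffs can be read as a small decision rule. The sketch below encodes them as an illustration only; it is a deliberate simplification (end organ damage is collapsed to a single flag, and edge cases such as nonsecretory myeloma are omitted), and the function and parameter names are hypothetical:

```python
def classify_plasma_cell_disorder(m_protein_g_per_l, clonal_marrow_pc_pct,
                                  end_organ_damage):
    """Classify per the table's cutoffs (simplified illustration only).

    end_organ_damage: True if hypercalcemia, renal insufficiency, anemia,
    or bone lesions are attributable to the plasma cell disorder.
    """
    if end_organ_damage and clonal_marrow_pc_pct >= 10:
        return "multiple myeloma"
    if m_protein_g_per_l >= 30 or clonal_marrow_pc_pct >= 10:
        # meets a size criterion but no end organ damage
        return "smoldering multiple myeloma"
    return "MGUS"

print(classify_plasma_cell_disorder(10, 5, False))   # MGUS
print(classify_plasma_cell_disorder(35, 12, False))  # smoldering multiple myeloma
print(classify_plasma_cell_disorder(35, 12, True))   # multiple myeloma
```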

Dr. Maryann Kwa is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Harold Ballard, MD Clinical Professor of Medicine, Division of Hematology and Oncology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


[1] Kyle RA, Durie BG, Rajkumar SV, et al. Monoclonal gammopathy of undetermined significance (MGUS) and smoldering (asymptomatic) multiple myeloma: International Myeloma Working Group (IMWG) consensus perspectives risk factors for progression and guidelines for monitoring and management. Leukemia. 2010;24(6):1121-1127. Available from: http://www.nature.com/leu/journal/v24/n6/full/leu201060a.html

[2] Kyle RA, Therneau TM, Rajkumar SV, et al. Prevalence of monoclonal gammopathy of undetermined significance. N Engl J Med. 2006;354(13):1362-1369. Available from: http://www.nejm.org/doi/full/10.1056/NEJMoa054494

[3] Rajkumar SV, Kyle RA, Buadi FK. Advances in the diagnosis, classification, risk stratification, and management of monoclonal gammopathy of undetermined significance: implication for recategorizing disease entities in the presence of evolving scientific evidence. Mayo Clin Proc. 2010;85(10):945-948. Available from:  http://www.mayoclinicproceedings.com/content/85/10/945.full

[4] Landgren O, Waxman AJ. Multiple myeloma precursor disease. JAMA. 2010;304(21):2397-2404. Available from: http://jama.ama-assn.org/content/304/21/2397.full

[5] Landgren O, Kyle RA, Pfeiffer RM, et al. Monoclonal gammopathy of undetermined significance (MGUS) consistently precedes multiple myeloma: a prospective study. Blood. 2009;113(22):5412-5417. Available from: http://bloodjournal.hematologylibrary.org/cgi/content/full/113/22/5412

[6] Chng WJ, Glebov O, Bergsagel PD, Kuehl WM. Genetic events in the pathogenesis of multiple myeloma. Best Pract Res Clin Haematol. 2007;20(4): 571-596. Available from: http://www.bprch.com/article/S1521-6926(07)00064-3/fulltext

[7] Kyle RA, Remstein ED, Therneau TM, et al. Clinical course and prognosis of smoldering (asymptomatic) multiple myeloma. N Engl J Med. 2007;356(25):2582-2590. Available from: http://www.nejm.org/doi/full/10.1056/NEJMoa070389

[8] Kyle RA, Therneau TM, Rajkumar SV, et al. A long-term study of prognosis in monoclonal gammopathy of undetermined significance. N Engl J Med. 2002;346(8):564-569. Available from:  http://www.nejm.org/doi/full/10.1056/NEJMoa01133202

[9] Wang M, Alexanian R, Delasalle K, Weber D. Abnormal MRI of spine is the dominant risk factor for abnormal progression of asymptomatic multiple myeloma. Blood. 2003;102:687a (abstract). Available from: http://bloodjournal.hematologylibrary.org/archive/2003.dtl

[10] Kumar SK, Rajkumar SV, Dispenzieri A, et al. Improved survival in multiple myeloma and the impact of novel therapies. Blood. 2008;111(5):2516-2520. Available from: http://bloodjournal.hematologylibrary.org/cgi/content/full/111/5/2516

[11] Hjorth M, Hellquist L, Holmberg E, et al. Initial versus deferred melphalan-prednisone therapy for asymptomatic multiple myeloma stage I—a randomized study. Eur J Haematol. 1993;50(2):95-102. Available from: http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0609.1993.tb00148.x/abstract

[12] Barlogie B, van Rhee F, Shaughnessy JD Jr., et al. Seven-year median time to progression with thalidomide for smoldering myeloma: partial response identifies subset requiring earlier salvage therapy for symptomatic disease. Blood. 2008;112(8):3122-3125. Available from: http://bloodjournal.hematologylibrary.org/cgi/content/full/112/8/3122

[13] Mateos MV, Lopez-Corral L, Hernandez MT, et al. Multicenter, randomized, open-label, phase III trial of lenalidomide-dexamethasone vs therapeutic abstention in smoldering multiple myeloma at high risk of progression to symptomatic multiple myeloma: results of the first interim analysis. In: 51st American Society of Hematology Annual Meeting and Exposition; December 5-8; New Orleans, LA. Abstract 614. Available from: http://ash.confex.com/ash/2009/webprogram/Paper21268.html

Mystery Quiz- The Answer

December 14, 2011
Vivian Hayashi MD and Robert Smith MD, Mystery Quiz Section Editors

The answer to the mystery quiz is Kaposi’s sarcoma. The CXR shows bilateral confluent airspace opacities, which have a wide differential diagnosis in this case. The CT narrows the differential: specifically, the opacities appear to emanate from the central hilar areas, cuff the airways, and fan out into the more distal airspaces (Images 3 and 4). This appearance is very suggestive of Kaposi’s sarcoma. CT scans may also reveal mediastinal lymphadenopathy and large pleural effusions, which are not present in our case. The absence of fever and the one-month chronicity argue against bacterial pathogens such as pneumococcus and Legionella. Of note, there is no cavitation as seen in necrotizing infections, scant ground-glass opacity as found in Pneumocystis pneumonia, and no predominant nodularity as might be seen in pulmonary lymphoma. Kaposi’s sarcoma was the most common neoplastic disease in patients with HIV infection and low CD4 counts early in the AIDS epidemic. It results from coinfection with human herpesvirus 8 (HHV-8). Inflammatory pathways involve vascular tissue; pathology typically shows spindle cells, sometimes accompanied by blood in the vascular spaces. The lesions are commonly seen in the large airways as red-to-purple plaques, sometimes spreading throughout all the visualized airways. There is still debate about whether Kaposi’s sarcoma is truly a neoplasm, given the cellular multiclonality that is often reported. Cutaneous Kaposi’s sarcoma is by far the most common presentation; pulmonary involvement is seen in about 30% of cases, and the GI tract is involved in as many as 40%. Kaposi’s sarcoma was seen much more frequently in the 1980s but inexplicably declined in incidence even before the HAART era, which further hastened the decline.

Our patient underwent bronchoscopy, which revealed red plaques throughout most of the airways. Transbronchial biopsy showed spindle cells that stained positive for CD34, a marker of early hematopoietic and vascular-associated tissue. No pathogens were found on BAL or biopsy. These findings confirmed the diagnosis of Kaposi’s sarcoma. The patient was treated with ten cycles of liposomal doxorubicin along with HAART. His recovery was quite dramatic, as can be seen on a CT scan six years after his presentation (Image 5). The patient also gained weight, and his CD4 count rose to nearly 300 cells/mm3.

Breast Self-Examination: Worth the Effort?

October 5, 2011

By Katherine Husk

Faculty Peer Reviewed

A healthy 40-year-old woman comes into your office for a routine health exam.  After you have performed a clinical breast exam, she asks you whether she should be examining her breasts on her own at home…

Breast self-exam (BSE) seems sensible. Empowering a patient to develop a sense of a personal norm could allow for easier recognition of breast changes, and could perhaps lead to earlier evaluation by a medical professional. There is a great deal of controversy, however, about the recommendation of this practice to patients.  Few studies have looked at the efficacy of BSE, and therefore the risks and benefits remain somewhat unclear.  Patients and physicians are faced with ambiguous or even disparate guidelines, including those from the American Cancer Society that suggest counseling women about the risks and benefits of BSE,[1] or from the National Comprehensive Cancer Network that recommend that women have “breast awareness,” but recommend against routine instruction in BSE.[2]

The literature suggests that technique and level of skill may be important variables to consider when evaluating BSE. A case-control study by Newcomb and colleagues found that more thorough breast self-exams were associated with a 35% decrease in the occurrence of late-stage breast cancer compared with not performing breast self-exams, independent of exam frequency. Thoroughness was evaluated by self-reported description of recommended BSE techniques, and only a small number of women met criteria for a thorough exam. Ultimately, the authors concluded that, despite the decrease in late-stage breast cancer associated with a more thorough exam, the typical performance of BSE provides no benefit, because the majority of women in the study reported a lack of self-proficiency in BSE.[3] The importance of a thorough exam is further supported by a study by Harvey and colleagues that stresses three components of BSE: visual inspection of the breast, palpation with the finger pads, and examination with the three middle fingers. Compared with women who performed all three components in regular breast self-exams, women who omitted one or more had an increased chance of death from breast cancer or distant metastases (OR=2.20, 95% CI 1.30-3.71, p=0.003), even after adjustment for confounding variables.[4] Based on these data, it seems possible that patient education in the components of a thorough BSE could produce a benefit, even though no benefit was seen with typical practice; however, a meta-analysis of trials involving BSE instruction failed to identify lower mortality in the BSE group (pooled RR=1.01, 95% CI 0.92-1.12).[4] The benefit of a thorough exam shown by the Newcomb and Harvey studies likely represents only a theoretical benefit without supportive data in practice.

A 2003 Cochrane meta-analysis examined the only two randomized clinical trials that have studied BSE versus no intervention, in an attempt to determine whether screening for breast cancer by BSE reduces mortality.[8,9] The systematic review found no statistically significant difference in breast cancer mortality between the BSE and control groups (RR=1.05, 95% CI 0.90-1.24), but close to twice as many biopsies with benign results were performed in the BSE group (RR=1.89, 95% CI 1.79-2.00). This review indicates that the performance of breast self-exams represents a potential harm with no concomitant benefit,[5] a conclusion further supported by two additional meta-analyses that examined observational and case-control studies in addition to the randomized clinical trials.[6,7] Of note, while both randomized clinical trials reported increased identification of benign tumors in the BSE group, the Russian study by Semiglazov and colleagues also found increased identification of malignant tumors in the BSE group (RR=1.24, 95% CI 1.09-1.41),[8] a finding not supported by the study of women in Shanghai by Thomas and colleagues (RR=0.97, 95% CI 0.88-1.06).[9] Although the Russian results suggest that BSE may promote identification of malignant tumors, both studies found no significant difference in tumor size or stage at diagnosis between the BSE and control groups,[8,9] implying that increased identification is not synonymous with earlier identification. The failure of BSE to confer a mortality benefit can seemingly be explained by this lack of earlier identification of malignancy. Furthermore, both studies included intensive instruction in BSE, and their failure to show decreased mortality indicates that the benefit of a thorough exam proposed by the Newcomb and Harvey studies is only theoretical, without evidentiary support in these subsequent trials.

Based on currently available data, several groups have reached very different conclusions. After evaluating BSE through a systematic review of the literature and the use of population modeling techniques, the U.S. Preventive Services Task Force (USPSTF) recommends against physicians teaching patients how to perform BSE (grade D recommendation).[10] This is in direct opposition to the recommendation of the American College of Obstetricians and Gynecologists (ACOG), which endorses BSE, despite the lack of definitive data to support or refute it, because the practice has the potential to identify palpable breast cancer.[11] Despite the lack of concrete evidence, surveys show that a majority of primary care providers feel that BSE is “somewhat effective” and recommend it to their patients aged 40 and older, though only a minority classify it as “very effective.”[12] Ultimately, the guidelines are disparate, but the accumulated data on the efficacy of BSE do not support physicians recommending the practice or teaching their patients the techniques required to perform it. Studies have failed to show a mortality benefit with BSE and instead have shown a potential harm: increased biopsy rates of benign tissue in the BSE groups. While intensive instruction in BSE does not decrease mortality, the potential benefit associated with certain components of BSE technique remains largely unaddressed and represents a possible focus of future studies.

Katherine Husk is a 4th year medical student at NYU School of Medicine

Peer reviewed by Nate Link, MD, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


[1]. Smith RA, Saslow D, Sawyer KA, et al.  American Cancer Society guidelines for breast cancer screening: update 2003.  CA Cancer J Clin. 2003:53(3):141-169.

[2]. Bevers TB, Anderson BO, Bonaccio E, et al.  NCCN clinical practice guidelines in oncology: breast cancer screening and diagnosis.  J Natl Compr Canc Netw. 2009:7(10):1060-1096.

[3]. Newcomb PA, Weiss NS, Storer BE, Scholes D, Young BE, Voight LF.  Breast self-examination in relation to the occurrence of advanced breast cancer.  J Natl Cancer Inst. 1991:83(4):260-265.

[4]. Harvey BJ, Miller AB, Baines CJ, Corey PN.  Effect of breast self-examination techniques on the risk of death from breast cancer.  CMAJ. 1997:157(9):1205-1212.

[5]. Kösters JP, Gøtzsche PC.  Regular self-examination or clinical examination for early detection of breast cancer.  Cochrane Database Syst Rev. 2003:(2):CD003373.  http://www.ncbi.nlm.nih.gov/pubmed/12804462

[6]. Hackshaw AK, Paul EA.  Breast self-examination and death from breast cancer: a meta-analysis.  Br J Cancer. 2003:88(7):1047-1053.

[7]. Baxter N; Canadian Task Force on Preventive Health Care.  Preventive health care, 2001 update: should women be routinely taught breast self-examination to screen for breast cancer?  CMAJ. 2001:164(13):1837-1846.

[8]. Semiglazov VF, Manikhas AG, Moiseenko VM, et al.  Results of a prospective randomized investigation [Russia (St.Petersburg)/WHO] to evaluate the significance of self-examination for the early detection of breast cancer.  Vopr Onkol. 2003:49(4):434-441.

[9]. Thomas DB, Gao DL, Ray RM, et al.  Randomized trial of breast self-examination in Shanghai: final results.  J Natl Cancer Inst. 2002:94(19):1445-1457.  http://jnci.oxfordjournals.org/content/94/19/1445.full

[10]. US Preventive Services Task Force.  Screening for breast cancer: U.S. Preventive Services Task Force recommendation statement.  Ann Intern Med. 2009:151(10):716-726, W-236.  http://www.uspreventiveservicestaskforce.org/uspstf09/breastcancer/brcanrs.pdf

[11]. American College of Obstetricians and Gynecologists Committee on Gynecologic Practice.  ACOG Committee Opinion No. 452: Primary and preventive care: periodic assessments.  Obstet Gynecol. 2009:114(6):1444-1451.

[12]. Meissner HI, Klabunde CN, Han PK, Benard VB, Breen N.  Breast cancer screening beliefs, recommendations and practices: Primary care physicians in the United States.  Cancer. 2011;117(14):3101-3111. http://www.ncbi.nlm.nih.gov/pubmed/21246531

What is Sister Mary Joseph’s Nodule And Why Is It Significant?

September 15, 2011

By Keri Herzog, MD
Faculty Peer Reviewed

The patient is a 62-year-old man who presented to an outpatient medical clinic complaining of a growing, slightly painful periumbilical mass and mild lower gastrointestinal discomfort over the last 4 months. On examination, the patient appeared cachectic, with an erythematous soft nodule within the umbilicus. Laboratory evaluation revealed anemia (Hct: 28%), and colonoscopy detected a tumor in the sigmoid colon. Biopsies of both the sigmoid mass and the umbilical nodule revealed adenocarcinoma. Given the advanced stage of the disease, the patient received chemotherapy as his primary treatment.

Umbilical tumors may be the first sign of an underlying cancer or of a recurrence of a previous cancer [1]. Metastatic cancer of the umbilicus is known as Sister Mary Joseph’s nodule [2]. It is encountered in about 1–3% of patients with an intra-abdominal and/or pelvic malignancy, with gastric carcinoma being the most common cause in men, and ovarian carcinoma the most common cause in women [3].

This condition was named for Sister Mary Joseph (1856–1939), a daughter of Irish immigrants born in Salamanca, New York. From 1890 to 1915, Sister Mary Joseph was the first surgical assistant to William James Mayo, and in September 1892 she was appointed nursing superintendent of St. Mary’s Hospital. She noted the association between paraumbilical nodules observed during skin preparation for surgery and metastatic intraabdominal cancer confirmed at surgery [2,4]. Hamilton Bailey coined the term “Sister Mary Joseph’s nodule” in her honor in 1949 [5].

Metastasis to the umbilical region has been hypothesized to occur by several routes, the most important being hematogenous spread through the region’s extensive arterial and venous networks. Additional possibilities include lymphatic spread via communications between the umbilicus and the axillary, para-aortic, inguinal, external iliac, and internal mammary nodes. Another theorized route is via ligaments of embryonic origin, such as the vitelline duct, which connects the umbilicus to the ileum [4]. From the umbilicus, tumor cells can then spread through the rest of the body [6].

Sister Mary Joseph’s nodule is significant because it may be the first and only presenting sign of malignancy, as has been demonstrated in about 30% of cases [4]. The clinical appearance is often that of a painful umbilical nodule with irregular margins and a hard consistency. The surface is typically described as necrotic-appearing, with bloody, serous, or purulent discharge. The nodule is most often less than 5 cm in diameter but has been documented to reach as large as 10 cm [4,6,7].

The differential diagnosis of umbilical lesions is extensive and can be divided into benign causes vs. primary malignancies vs. an umbilical nodule due to metastases (Sister Mary Joseph’s nodule). Benign causes include cysts, umbilical hernias, skin tags, teratomas, angiomas, abscess, pyogenic granulomas, formation of an omphalith (due to concretions of the umbilicus), or endometriosis. Primary umbilical malignancies include basal cell carcinomas, melanomas, and mesenchymal tumors [6]. To make the diagnosis of Sister Mary Joseph’s nodule, physicians rely on the histopathologic examination of biopsies taken from the umbilical tumor, which can also be used to detect primary origin of the cancer [1, 8].

A retrospective study from the Mayo Clinic reviewed 85 cases of umbilical tumors due to metastases between 1950 and 1982. Of the 85 patients reviewed, 40 were men and 45 were women. Twelve patients (14%) had umbilical nodules as the initial presentation of internal malignancy, while 45 (53%) developed umbilical nodules within 12 months of diagnosis [6]. A study by Dubreuil et al reviewed 368 cases of umbilical metastases [1]. In 152 of the 368 cases (41%), the umbilical metastasis was discovered before the primary cancer, and in 97 of those 152 cases (64%), the nodule was the only initial presenting sign of malignancy. The Dubreuil study also specifically evaluated the location of the primary malignancy: of the 368 umbilical nodules, 96 were metastases from adenocarcinoma of the stomach (30% of male cases, 9% of female cases), 74 from adenocarcinoma of the rectum, colon, or small bowel (25% of male cases, 12% of female cases), and 59 from the ovary (64% of female cases), while others were due to squamous cell carcinoma (4%) or arose from the cervix (4%), pancreas (10%), gallbladder (2%), breast, lung, prostate, or penis. Forty-one of the 368 cases (11%) were of unknown etiology [1].

Since Sister Mary Joseph’s nodule usually reflects metastatic disease, most patients who develop one have a poor prognosis and die within 10 months of its discovery. While we may not be able to change the course of advanced disease, it is imperative that we be aware of Sister Mary Joseph’s nodule and its association with malignancy [3,4,7,8].

Dr. Keri Herzog is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Michael Poles,  GI Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons.


1. Dubreuil A, Dompmartin A, Barjot P, Louvet S, Leroy D. Umbilical metastasis or Sister Mary Joseph’s nodule. International Journal of Dermatology 1998; 37: 7-13.

2. Albano EA, Kanter J. Sister Mary Joseph’s nodule. New England Journal of Medicine 2005; 352 (18): 1913. http://www.nejm.org/doi/full/10.1056/NEJMicm040708

3. Piura B, Meirovitz M, Bayne M, Shaco-Levy R. Sister Mary Joseph’s nodule originating from endometrial carcinoma incidentally detected during surgery for an umbilical hernia: a case report. Arch Gynecol Obstet 2006; 274:385–388.

4. Abu-Hilal M, Newman JS. Sister Mary Joseph and her nodule: Historical and clinical perspective. The American Journal of the Medical Sciences 2009; 337: 271-273.  http://www.ncbi.nlm.nih.gov/pubmed/19365173

5. Al-Wadi K, Bernier M. Sister Mary Joseph’s nodule. J Obstet Gynaecol Can 2010; 32(8): 72.

6. Sina B, Deng A. Umbilical metastasis from prostate carcinoma (Sister Mary Joseph’s nodule): a case report and review of literature. J Cutan Pathol 2007; 34: 581–583.  http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0560.2006.00658.x/pdf

7. Al-Mashat F, Sibiany AM. Sister Mary Joseph’s nodule of the umbilicus: Is it always of gastric origin? A review of eight cases at different sites of origin. Indian Journal of Cancer 2010 ; 47: 65-69. http://www.indianjcancer.com/article.asp?issn=0019-509X;year=2010;volume=47;issue=1;spage=65;epage=69;aulast=Al-Mashat

8. Powell FC, Cooper AJ, Massa MC, et al. Sister Mary Joseph’s nodule: A clinical and histologic study. Journal of the American Academy of Dermatology. http://www.sciencedirect.com/science/article/pii/S0190962284802650

Avastin and the Meaning of Evidence

September 9, 2011

By Antonella Surbone MD PhD and Jerome Lowenstein MD

The recent hearings at the Food and Drug Administration regarding the revocation of approval for the use of Avastin in the treatment of breast cancer [1,2,3] bring into sharp focus several very important issues in medicine today.

The pharmaceutical industry, armed with powerful new tools for deciphering the signaling mechanisms and mutations responsible for the development and progression of malignancies, has developed new therapies for cancer. The cost of developing each new drug, and of the controlled trials needed to evaluate its efficacy and safety, is very great, usually hundreds of millions of dollars. Once a drug is approved, the manufacturer can make a tremendous profit, which in the case of Avastin has been estimated at about 6.8 billion dollars in 2010 from its sale for the treatment of breast cancer alone.

There is pressure from oncologists and patients to move new treatments “through the pipeline.” Avastin received such “accelerated approval” for the treatment of breast cancer in 2008, after studies showed that bevacizumab added to paclitaxel yielded a 5.5-month increase in median progression-free survival over paclitaxel alone.[4,5] Subsequent studies did not confirm this effectiveness, showing a more limited progression-free survival benefit of only 1.2 to 2.9 months.[6-10] Furthermore, several serious side effects have been reported, leading to the present hearing before the FDA to reevaluate the use of Avastin in breast cancer.

The FDA’s role in approving or disapproving new treatment agents through intense scrutiny and ‘tough decisions’ is the best safeguard protecting sick and vulnerable patients from treatment with ineffective or dangerous drugs, though some decisions may be questioned and, eventually, reconsidered by the Agency. The recent Avastin hearing pitted the drug’s manufacturer, Genentech, and a number of women who feel they have benefited from Avastin, against a panel of experts who decided that the drug should be disapproved for this indication. The decision of the panel will be submitted to the FDA commissioner, Dr. Margaret Hamburg. This final FDA decision will be an important consideration in whether Medicare and major US health plans will cover the cost of treating breast cancer with Avastin. Medicare has already announced that it will continue to reimburse for the use of Avastin,[11] but this decision is probably susceptible to “decisions and revisions which a minute will reverse.”

This very current issue is, we submit, the tip of a large iceberg. The decision regarding Avastin will hinge on the analysis of several large randomized treatment trials, which form the cornerstone of “evidence-based medicine,” a discipline heralded since 1992 as de-emphasizing “intuition, unsystematic clinical experience and pathophysiologic rationale.”[12] Randomized double-blind clinical trials (RCTs), which have greatly contributed to progress in oncology, are seen as the “gold standard” for clinical decision-making.

Randomization, a keystone in the holy grail of evidence based-medicine, assures that treatment groups are closely matched for all identifiable important variables, however, within each treatment group there is always considerable patient-to-patient variability. The finding that the frequency of beneficial or desired effects does not differ between or among treatments in a RCT does not necessarily mean that the patients who respond demonstrate a “non-specific” or placebo effect. In any study, there may be a subset of patients who are “responders” because they carry a specific gene mutation, are exposed to specific environmental influences, or have differing underlying causes of their identifying disease. If such a subset makes up only a small proportion of the test group, the overall findings of a randomized trial might be judged as “not significantly different”, when in fact, some participants have truly responded. A simple example: if a trial of vitamin B12 in the treatment of anemia had been undertaken before the relatively uncommon cause, pernicious anemia, was recognized, it might well be that this highly effective, specific treatment would have been disapproved by the FDA. Today, would Gleevac have been approved if we didn’t know about the specific tyrosine kinase mutations that identify the subsets of patients who benefit from this specific therapy?A reasonable argument can be made that some women who “respond” to Avastin do so because they have a genetic polymorphism that makes them susceptible. This appears to be the case for patients with the BRAF mutation that makes metastatic melanoma more responsive to a kinase inhibitor. This is an important issue that will need to be pursued with many new drugs, including chemotherapy and others. It is an extension of what we already practice in avoiding some drugs in patients who are G6PD deficient or patients sensitive to CYP 450 inhibition. 
Simply randomizing patients with a given disease, or a given stage of a disease, is no longer an adequate way to adduce evidence.

However, we see additional scientific and ethical problems related to the application of “evidence-based medicine” to the determination of approval or disapproval of Avastin for the treatment of advanced breast cancer. First, it is clear that the overall conclusions regarding efficacy (or toxicity) in a treatment trial can never be applied to each individual in that trial, as subsets of patients may have responded even in a “negative” trial. Subset analysis, however, is of little value unless a subset of responders or non-responders can be identified with certainty. Beyond this, there are several ethical issues related to potential differences among patient subsets. If some patients are recognized as “true responders,” wouldn’t the withdrawal of approval, and the consequent likely cessation of coverage of the cost of the treatment, conflict with the ethical principles of beneficence and of “do no harm” (to those who have responded, and to future patients who may also be responders)? As many have remarked, the use of a drug such as Avastin in breast cancer would end up being limited to those who can afford to pay more than $80,000 yearly: hardly an ethical solution. Should we not, therefore, invest more research and funds in finding out, on a scientific basis, why some patients responded? For example, Gelmon and colleagues recently reported PARP inhibitor activity in non-BRCA ovarian cancer as well, a new finding that might benefit all women with ovarian cancer (13, 14). Hua and colleagues have reported an association between VEGF polymorphism and survival in breast cancer, which might be explored in relation to treatment responses (15).

Surely, it is important to place restrictions on the prescription of drugs (expensive or not) of doubtful efficacy, while avoiding the scientific and ethical issues that such restrictions may raise. Finding appropriate and just solutions is not easy, yet we wish to offer two possible strategies for consideration by all involved parties, based on all the evidence derived from a given trial and not limited to its global results. At the conclusion of a clinical trial judged to show overall evidence of “no efficacy” of a drug under investigation, it might be decided that those patients (now termed “patients” rather than “participants”) who meet scientifically sound, pre-determined criteria defining a beneficial effect be enrolled in a follow-up study to examine the duration and magnitude of that benefit. The duration of such a follow-up study might depend on the clinical response, and the drug would have to be provided free of charge by the pharmaceutical industry. During this follow-up phase, the drug might be classified as “conditionally approved.” Patients currently being treated with Avastin who are judged to have benefited might be similarly enrolled in a follow-up study. This solution would apply only to those cancer patients who have already proven to be responders in the trial. Since the reason for response to Avastin in some women on the trial is not yet known, it would be on the safe side to assume that there might be a specific genetic reason.

The second solution is, therefore, intense investigation of the genetic and/or environmental factors that make some patients likely responders. Taken together, these could be relatively simple steps toward a sound, more equitable approach to all patients enrolled in RCTs.

Indeed, in the era of patient-centered personalized medicine, we need a reappraisal of the notion of “evidence” to include new objective data based on genomic assessment and pharmacogenetics (16-19), as well as quality of life and psychosocial considerations (20). Only by abandoning a narrow perspective of what constitutes “evidence” can we consider and treat each patient in his or her uniqueness. This is very much the way the best physicians cared for patients before the advent of “evidence-based medicine”: a treatment was initiated based on the physician’s knowledge and experience, and continued or discontinued depending on the response of the patient. Evaluating evidence is especially difficult in patients with advanced cancer (8), yet today the old approach can be supported by embracing a broader, holistic notion of evidence. The value of our proposed solutions, viewed from the perspective of science and the ethical practice of medicine, might be substantial.

Dr. Antonella Surbone is the Ethics Editor of Clinical Correlations.

Dr. Jerome Lowenstein is a Professor of Medicine, Division of Nephrology, NYU School of Medicine.


1. Pollack A. Cancer Survivors Appeal to FDA over Avastin. NY Times, June 28, 2011.

2. Pollack A. FDA Panel Rejects Use of Avastin in Breast Cancer. NY Times, June 29, 2011. http://prescriptions.blogs.nytimes.com/2011/06/29/f-d-a-panel-still-sees-no-benefit-of-avastin-for-breast-cancer/

3. Ratner M. FDA panel votes to pull Avastin in breast cancer, again. Nat Biotechnol 2011;29:676.

4. Carpenter D, Kesselheim AS, Joffe S. Reputation and precedent in the bevacizumab decision. N Engl J Med 2011;365:e3. http://www.ncbi.nlm.nih.gov/pubmed/21707383

5. Miles D, Chan A, Romieu G, et al. Randomized, double-blind, placebo-controlled, phase III study of bevacizumab with docetaxel or docetaxel with placebo as first-line therapy for patients with locally recurrent or metastatic breast cancer (MBC): AVADO. J Clin Oncol 2008;26(suppl):43s, abstr LBA1011.

6. O’Shaughnessy J, Miles D, Gray RJ, et al. A meta-analysis of overall survival data from three randomized trials of bevacizumab (BV) and first-line chemotherapy as treatment for patients with metastatic breast cancer (MBC). J Clin Oncol 2010;28(suppl 15s): abstract. http://meeting.ascopubs.org/cgi/content/abstract/28/15_suppl/1005

7. Miller K, Wang M, Gralow J, et al. Paclitaxel plus bevacizumab versus paclitaxel alone for metastatic breast cancer. N Engl J Med 2007;357:2666-2676.

8. Robert NJ, Diéras V, Glaspy J, et al. RIBBON-1: randomized, double-blind, placebo-controlled, phase III trial of chemotherapy with or without bevacizumab for first-line treatment of human epidermal growth factor receptor 2-negative, locally recurrent or metastatic breast cancer. J Clin Oncol 2011;29:1252-1260.

9. Ray R, Bhattacharya S, Bowden C, et al. Independent review of E2100: a phase III trial of bevacizumab plus paclitaxel versus paclitaxel in women with metastatic breast cancer. J Clin Oncol 2009;27:4966-4972.

10. Ocaña A, Amir E, Vera F, Eisenhauer EA, Tannock IF. Addition of bevacizumab to chemotherapy for treatment of solid tumors: similar results but different conclusions. J Clin Oncol 2011;29:254-256.

11. Pollack A. Medicare Will Pay for Avastin in Treating Breast Cancer. NY Times, June 30, 2011.

12. Lowenstein J. “Shaky Evidence,” in The Midnight Meal and Other Essays About Doctors, Patients and Medicine. Yale University Press, 1997; University of Michigan Press, 2005. http://press.umich.edu/pdf/9780472030842-fm.pdf

13. Gelmon KA, et al. Olaparib in patients with recurrent high-grade serous or poorly differentiated ovarian carcinoma or triple-negative breast cancer: a phase II, multicenter, open-label, nonrandomized study. Lancet Oncol 2011;12:852-861.

14. Telli ML. PARP inhibitors in cancer: moving beyond BRCA. Lancet Oncol 2011;12:827-828.

15. Hua L, et al. Association of genetic polymorphisms in the VEGF gene with breast cancer survival. Cancer Res 2005;65:5015-5019.

16. Jubb AM, Harris AH. Biomarkers to predict the clinical efficacy of bevacizumab in cancer. Lancet Oncol 2010;11:1172-1183. http://www.ncbi.nlm.nih.gov/pubmed/21126687

17. Ashley EA, Butte AJ, Wheeler MT et al. Clinical assessment incorporating a personal genome. Lancet 2010; 375:1525-1535.

18. Ormond KE, Wheeler MT, Hudgins L, et al. Challenges in the clinical application of whole-genome sequencing. Lancet 2010;375:1749-1751.

19. Samani NJ, Tomaszewski M, Shunkert H. The personal genome and the future of personalized medicine? Lancet 2010;375:1497-1498.

20. Surbone A, Baider L, Weitzman TS, et al., on behalf of the MASCC Psychosocial Study Group (www.mascc.org). Psychosocial care for patients and their families is integral to supportive care in cancer: MASCC position statement. Supp Care Cancer 2010;18:255-263.