Class Act: The Use of MRI in Breast Cancer Screening

August 28, 2008

Class Act is a feature of Clinical Correlations written by NYU 3rd- and 4th-year medical students. Prior to publication, each commentary is thoroughly reviewed for content by a faculty member.

Commentary by Daniel Green MSIV and Boris Kobrinsky MD, Assistant Professor, NYU Division of Oncology

In 2008, an estimated 182,460 women in the United States will be diagnosed with invasive breast cancer, and 40,480 women will die of the disease, which remains the second leading cause of cancer death among women.(1) Fortunately, breast cancer is one of the screenable cancers, and screening mammography has been shown to detect asymptomatic breast cancer at an early stage and, when followed with appropriate treatment, to reduce all-cause mortality.(2,3) However, plain mammography has a sensitivity of only about 85 percent, and an estimated 21.8 percent of cases are node positive at the time of diagnosis.(4)

The use of MRI in breast cancer screening is receiving increased attention, and the American Cancer Society has recently recommended it as an adjunct to plain mammography in the screening of high risk patients.(5) These patients include women with dense breast tissue, a personal history of breast cancer or lobular carcinoma in situ, prior mantle irradiation for Hodgkin’s lymphoma, and a strong family history of breast cancer.

Women with inherited BRCA1 and BRCA2 mutations have the greatest risk of breast cancer. Though only five to ten percent of women with breast cancer have one of the two mutations, those with a BRCA genotype have a lifetime risk of 65 to 80 percent of developing the disease.(5) This population tends to develop more aggressive breast cancers with significant risk of disease starting as early as age 30.

Several large, prospective, nonrandomized trials have been conducted to evaluate the use of MRI as an adjunct to plain mammography in screening high risk women for breast cancer. The largest of these studies, conducted in The Netherlands and published in 2004, evaluated both MRI and mammography in 1,909 high risk women.(6) The investigators found high sensitivity of MRI (80 percent) compared to that of mammography, whose sensitivity plummets to 33 percent in this high risk population. On the other hand, MRI had lower specificity than mammography (90 and 95 percent, respectively). Several other studies in North America and Europe have recapitulated these results.(7-11)

These studies were included in a recently published meta-analysis of 11 prospective studies of screening women at high risk of breast cancer with a combination of MRI and plain mammography. The investigators concluded that screening with mammography plus MRI may exclude breast cancer better than mammography alone in women with a strong genetic predisposition to the disease.(12)

The increased sensitivity of MRI comes from gadolinium enhancement of invasive breast carcinomas in contrast studies. However, many benign breast lesions also enhance with gadolinium, resulting in lower specificity. In women not characterized as high risk, the likelihood of false positives may lead to an unacceptable number of recalls and biopsies. Because of the increased cancer rate in high risk women, the incidence of benign biopsy following MRI is similar to that of a population-based study using plain mammography.(13) In these patients, the benefit of high sensitivity may outweigh the effects of lower specificity, though data on survival are not yet available.
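The effect of prevalence on the false-positive burden can be made concrete with a little arithmetic. The sketch below uses the MRI operating characteristics quoted above (80 percent sensitivity, 90 percent specificity) together with per-round cancer prevalences that are purely illustrative assumptions, not figures from the cited studies.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of a screening test, via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# MRI operating characteristics reported in the Dutch screening study
sens, spec = 0.80, 0.90

# Hypothetical per-round cancer prevalences (illustrative only)
for label, prev in [("average-risk", 0.005), ("high-risk", 0.03)]:
    print(f"{label}: PPV = {ppv(sens, spec, prev):.1%}")
```

With the same test, a sixfold rise in assumed prevalence takes the positive predictive value from roughly 4 percent to roughly 20 percent, which is why the same false-positive rate produces far fewer benign biopsies per cancer found when high-risk women are screened.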

A recent study from Stanford University evaluated the cost-effectiveness of supplementing screening mammography with MRI for carriers of BRCA mutations.(14) Health benefits were measured in terms of total health-related costs and quality-adjusted life years. The researchers found that screening MRI was more cost-effective in BRCA1-positive women compared to BRCA2-positive women because BRCA1 mutations confer a higher risk of breast cancer. However, they did not find that women with BRCA1 mutations aged 25-34 were at high enough risk to justify annual MRI screening. In addition, women with BRCA1 mutations over the age of 55 suffered from declining quality of life and competing risks for death, thus rejecting MRI as cost-effective for this age group. This leaves women with BRCA1 mutations aged 35-54 as the group most likely to benefit from MRI while taking cost into account.
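Analyses of this kind typically compare strategies by their incremental cost-effectiveness ratio (ICER): the extra dollars spent per extra quality-adjusted life year gained. The sketch below shows only the arithmetic; the costs and QALY values are invented for illustration and are not the Stanford study's results.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Invented lifetime figures for mammography alone vs. mammography plus MRI
ratio = icer(cost_new=110_000, qaly_new=20.5,
             cost_old=80_000,  qaly_old=20.2)
print(f"${ratio:,.0f} per QALY gained")
```

A group at higher baseline risk gains more QALYs from the added test for a similar added cost, lowering the ICER; that is the sense in which MRI fared better in BRCA1 than in BRCA2 carriers.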

MRI is also being used to screen both the ipsilateral and contralateral breasts in women recently diagnosed with breast cancer. The prevalence of synchronous MRI-detected breast cancer is estimated at between 1 and 9.5 percent, and these cancers are often both mammographically and clinically occult.(15,16) As with standard breast cancer screening, the false positive rate is high due to limited specificity.

Current evidence suggests that MRI can benefit women at high risk, while there is much weaker evidence supporting its use in women of normal risk. Further research is necessary to develop the best method to improve screening in women at an intermediate level of risk, in whom the benefit of MRI remains unclear.


1. Jemal A, Siegel R, Ward E, et al. Cancer statistics, 2008. CA Cancer J Clin 2008;58(2):71-96.

2. Nyström L, Andersson I, Bjurstam N, et al. Long-term effects of mammography screening: updated overview of the Swedish randomised trials. Lancet 2002;359(9310):909-19.

3. Glass AG, Lacey JV Jr, Carreon JD, Hoover RN. Breast cancer incidence, 1980-2006: combined roles of menopausal hormone therapy, screening mammography, and estrogen receptor status. J Natl Cancer Inst 2007;99(15):1152-61.

4. Weaver DL, Rosenberg RD, Barlow WE, et al. Pathologic findings from the Breast Cancer Surveillance Consortium: population-based outcomes in women undergoing biopsy after screening mammography. Cancer 2006;106(4):732-42.

5. Saslow D, Boetes C, Burke W, et al. American Cancer Society guidelines for breast screening with MRI as an adjunct to mammography. CA Cancer J Clin 2007;57:75-89.

6. Kriege M, Brekelmans CT, Boetes C, et al. Efficacy of MRI and mammography for breast cancer screening in women with a familial or genetic predisposition. N Engl J Med 2004;351:427-437.

7. Kuhl CK, Schrading S, Leutner CC, et al. Mammography, breast ultrasound, and magnetic resonance imaging for surveillance of women at high familial risk for breast cancer. J Clin Oncol 2005;23:8469-8476.

8. Leach MO, Boggis CR, Dixon AK, et al. Screening with magnetic resonance imaging and mammography of a UK population at high familial risk of breast cancer: a prospective multicentre cohort study (MARIBS). Lancet 2005;365:1769-1778.

9. Lehman CD, Blume JD, Weatherall P, et al. Screening women at high risk for breast cancer with mammography and magnetic resonance imaging. Cancer 2005;103:1898-1905.

10. Sardanelli F, Podo F. Breast MR imaging in women at high risk of breast cancer. Is something changing in early breast cancer detection? Eur Radiol 2007;17(4):873-87.

11. Warner E, Plewes DB, Hill KA, et al. Surveillance of BRCA1 and BRCA2 mutation carriers with magnetic resonance imaging, ultrasound, mammography, and clinical breast examination. JAMA 2004;292:1317-1325.

12. Warner E, Messersmith H, Causer P, et al. Systematic Review: Using magnetic resonance imaging to screen women at high risk for breast cancer. Ann Intern Med 2008;148:671-79.

13. Warren RM, Pointon L, Caines R, et al. What is the recall rate of breast MRI when used for screening asymptomatic women at high risk? Magn Reson Imaging 2002;20(7):557-65.

14. Plevritis SK, Kurian AW, Sigal BM, et al. Cost-effectiveness of screening BRCA1/2 mutation carriers with breast magnetic resonance imaging. JAMA 2006;295:2374-2384.

15. Lehman CD, Gatsonis C, Kuhl CK, et al. MRI evaluation of the contralateral breast in women with recently diagnosed breast cancer. N Engl J Med 2007;356(13):1295-303.

16. Lee SG, Orel SG, Woo IJ, et al. MR imaging screening of the contralateral breast in patients with newly diagnosed breast cancer: preliminary results. Radiology 2003;226(3):773-8.

Breaking News: USPSTF Issues New Prostate Cancer Screening Guidelines

August 5, 2008

Commentary by Cara Litvin MD, Executive Editor, Clinical Correlations

The US Preventive Services Task Force issued new guidelines for prostate cancer screening on Monday. For the first time, the task force recommends AGAINST routine screening in patients over 75 years of age, citing the “moderate to substantial” harms against small to no benefits from screening. The task force reports that there continues to be inadequate evidence that PSA screening improves health outcomes at any age. Moreover, even if evidence eventually shows that screening benefits younger patients, men over age 75 have an average life expectancy of about 10 years or less, so screening older men would be unlikely ever to yield a mortality benefit. On the other hand, screening does result in harms from unwarranted treatment, such as erectile dysfunction, incontinence, and even death. Furthermore, there are harms associated with biopsy, including pain and psychological effects. The task force continues to cite “insufficient evidence” for screening in younger patients, and again suggests that the “uncertain benefits and the known harms” be discussed with all patients younger than 75 prior to ordering the test.

Neutropenic Precautions Demystified

June 13, 2008

Commentary by Rachana Jani MD, PGY-1 and Neal Steigbigel MD, Professor of Medicine (Infectious Diseases/Immunology)

Rachana Jani MD:  Walking onto an oncology floor, one cannot help but notice the precautionary signs that segregate these patients from the rest of the hospital. “No fresh fruits or flowers.” “Neutropenic isolation, please see nurse before entering.” The idea of neutropenic precautions first emerged in the 1960s when myelosuppressive therapy came to the forefront of cancer treatment. It only made sense that patients with an impaired immune system be nursed in strict isolation. However, these practices were based on clinical philosophy and have continued out of tradition. It is worth asking: if there was once a rationale for protective isolation, has it been made obsolete by the advent of antimicrobial prophylaxis and systemic growth factors?

Typical strategies to prevent infection among neutropenic patients have included a protective environment, dietary constraints, and protective clothing. Given the resource burden of maintaining protective measures, there are surprisingly few studies systematically monitoring infection rates in neutropenic patients. In the early eighties, investigators studied the effect of laminar airflow and HEPA filtration on rates of infection in neutropenic patients [1,2]. Although they were able to show some protective benefit against infection, particularly Aspergillus infections, there was no measurable effect on mortality. Other studies conducted in the early 2000s failed to show a difference between patients treated in protective isolation and those who were not, either in median time to fever or in mortality rate [3]. Most hospitals also institute low-microbial or neutropenic diets; however, no recent studies associate dietary restriction with decreased rates of infection [4,5]. The efficacy of gloves/masks, cover gowns, and single-patient rooms has also been studied, again showing no mortality benefit [6,7].


Future Medicine: The Search for a New Anticoagulant

April 16, 2008

Future Medicine is a new section of Clinical Correlations devoted to hot areas of research and development in various fields of medicine. In this series, we will highlight treatments in their infancy, from basic research opening up new targets for treatment to small molecules followed throughout their clinical investigation. We will also bring you the latest on technology and devices, as well as perspectives on drug discovery from a business point of view. Watch out – the future is just around the corner!

Commentary by Aaron Lord MD, PGY-1

We have all been there before: a patient sitting in front of you, be it in clinic, the ER, or as an inpatient, with newly diagnosed atrial fibrillation (AF), and it’s up to you and the patient to decide on a plan for anticoagulation. With an aging population, AF has not only become more prevalent, but the decision of whether to anticoagulate has become more difficult – Can my patient reliably take Coumadin every day? Can they understand the complex and changing dosing? Will they follow up in Coumadin clinic? All of the pitfalls of Coumadin therapy have driven a number of pharmaceutical companies to develop new anticoagulants that have far fewer drug-drug and drug-food interactions and do not require frequent INR checks. We will quickly review the necessity of anticoagulation in atrial fibrillation and then see whether any of the new drugs compare favorably to the efficacy and safety profile of that old workhorse, Coumadin.

Mystery Quiz- The Answer

February 6, 2008

Posted By: Vivian Hayashi MD, Instructor of Clinical Medicine, Division of General Internal Medicine, and Robert Smith MD, Associate Professor of Medicine, Division of Pulmonary and Critical Care Medicine

The answer to the mystery quiz is lung cancer, in particular adenocarcinoma with a predominantly bronchoalveolar cell pattern (BAC).  The clue to the mystery was the “cough productive of voluminous frothy, watery sputum”: bronchorrhea, which is often the presenting complaint of patients with BAC. Other entries in the differential diagnosis, such as UIP, sarcoid, inorganic pneumoconiosis, and other interstitial lung diseases, are characterized by a non-productive cough.  The imaging shows a diffuse infiltrating process that involves the airspace and interstitium. Some areas show ground glass opacity while others are frankly consolidated.  The process was clearly worsening on repeat imaging. Prone imaging was done to exclude the presence of dependent pulmonary congestion; if the infiltrates were due to heart failure, we would expect them to shift to the dependent regions on prone imaging.  A negative PAS stain ruled out alveolar proteinosis.

BAC is a type of adenocarcinoma in which the tumor cells grow along the alveoli, the so-called “lepidic” pattern.  It can present in one of three ways:  (1) a solitary nodule that is characteristically ground glass on imaging and, due to relatively slow growth, tends to have low uptake on PET scanning (hypometabolic, unlike most other lung cancers); (2) multiple ground glass opacities that may arise from aerogenous spread of tumor through the airways; and (3) a pneumonic pattern, as in our case, that frequently causes hypoxemia due to shunting through airspaces filled with tumor and secretions. Patients with the third pattern have the worst prognosis, while those presenting with a solitary nodule have the best.  Compared to patients with a similar stage of non-small cell lung carcinoma, those with BAC tend to have a better prognosis overall.  Also, some patients with BAC respond to erlotinib (Tarceva), an epidermal growth factor receptor inhibitor.  Erlotinib may also reduce bronchorrhea.

BAC can usually be diagnosed by transbronchial biopsy.  In this regard, our case was unusual. As seen below, the degree of inflammation in the left lower lobe specimen obscured the presence of the tumor (fig 1), which was better seen on the open lung biopsy of the right middle lobe (fig 2).







Figure 1 (transbronchial biopsy, left lower lobe)                         Figure 2 (open lung biopsy, right middle lobe)

Grand Rounds: Breast Cancer Genomics

November 20, 2007

Commentary by Jonathan Willner MD, PGY-2

This week’s Medicine Grand Rounds speaker was Lisa Carey, MD, Associate Professor in Hematology/Oncology at the University of North Carolina and Medical Director of the UNC Breast Center.  Much of Dr. Carey’s research focuses on how an understanding of breast cancer genomics may tailor clinical therapy.

While the incidence of breast cancer has plateaued over the past few years, the number of breast cancer deaths has declined. The reason is thought to be two-fold: improved screening and more effective medical therapy.  As treatment has improved, so too has the variability in choice and duration of therapy. However, our ability to personalize adjuvant therapy so that women receive only the drugs they need is limited; most women are either over- or under-treated.

The traditional model of cancer development holds that malignancy develops as a linear progression of genetic “hits:” successive loss of genetic integrity over time allowing for unchecked proliferation and spread of malignant cells. The biologic model, by contrast, argues that inborn traits and biologic variability sum to create a cancer subtype, which is later evidenced by the phenotypic variability we see in disease progression and treatment response.  The biologic model has been largely informed by genomics, the study of large-scale genetic mapping, as its tool for defining breast cancer subtypes. Analysis of approximately 8000 genes via microarray has elucidated gene sets that reliably identify several breast cancer subtypes. These subtypes are definable early in the course of disease, and are present both after chemotherapy and in metastatic disease.  Dr. Carey’s work has focused on the ‘basal-like’ subtype in particular, so named because of its high density of genes coding for EGFR, basal cytokeratins, and other basal proteins. The basal-like subtype also typically has low HER2 and ER/PR expression, and is highly proliferative.

Breast cancer subtyping is being increasingly used to predict the clinicopathologic characteristics of a patient’s disease. Population-based studies, such as the Carolina Breast Cancer Study, have found a preponderance of the basal-like subtype among pre-menopausal African-American women. Patients who carry germline BRCA1 mutations also tend to develop the basal-like subtype. Somewhat surprisingly, traditional breast cancer risk factors (degree of parity or OCP use, for example) fail to predict ER-negative subtypes, and some factors may actually be protective in some subtypes. Such variability has led researchers to try to identify particular gene profiles that can predict outcome across all subtypes.  From these studies, the basal-like subtype has emerged as the most powerfully predictive of clinical progression and outcome.

Breast cancer relapse is known to be heterogeneous. While ER-positive breast cancer tends to maintain a low but constant relapse rate, the basal-like subtype typically shows a high early relapse rate followed by a rapid decline in the number of relapses. This may explain why these two subtypes respond differently to chemotherapy: chemotherapy tends to decrease early relapses and is more effective in ER-negative subtypes such as the basal-like subtype, whereas ER-positive breast cancer carries a constant risk of relapse over many years and is less affected by chemotherapy.

Lastly, though genomic subtyping has shown promise as a model for predicting clinical outcome, it has yet to prove itself as a tool for choosing medical therapy. A number of agents have shown promise in the treatment of basal-like breast cancer, but it has not yet been proven that chemotherapy regimens can be tailored on the basis of genetic subtyping. We do not yet have targeted drugs for this subtype, which is the subject of active study.  Future efforts will focus on clarifying risk factors by breast cancer subtype (which may inform directed lifestyle or chemopreventive strategies), improving chemotherapy, and identifying targeted drugs that treat effectively with a minimum of toxicity.

Grand Rounds: “Towards Biologically Rational Therapy for Myelodysplastic Syndrome”

November 2, 2007

Welcome to our new Grand Rounds Series. Each week, we plan to post a summary of the week’s Medicine Grand Rounds lecture. The summaries are reviewed and approved by the grand rounds speaker prior to posting. Enjoy.

Commentary by Marshall Fordyce MD, Senior Chief Resident 

This week’s Medicine Grand Rounds guest lecturer was Dr. Steven Gore, currently Associate Professor of Oncology, and Faculty Member of Cell and Molecular Medicine, at the Johns Hopkins University School of Medicine. Dr. Gore’s research focuses on improving our understanding of the epigenetic determinants of myeloid neoplasms. Early in his career, he had the insight and opportunity to translate bench observations to clinical trials at the bedside. Dr. Gore is the principal investigator for several multicenter Phase 1 and Phase 2 trials testing new regimens and novel therapies in the treatment of myelodysplastic syndrome, for which he has won multiple young investigator awards. His talk was entitled: “Towards Biologically Rational Therapy for Myelodysplastic Syndrome.”

Dr. Gore proposed a clear approach to the management of myelodysplastic syndrome (MDS), and described the rationale for DNA methyltransferase inhibitors in selected patients. MDS is a group of clonal bone marrow stem cell disorders characterized by a hypercellular marrow and ineffective erythropoiesis. The syndrome is heterogeneous, with variable natural histories. Dr. Gore emphasized that distinguishing MDS from acute myeloid leukemia by the number of blasts in the bone marrow is somewhat arbitrary, and fails to appreciate the molecular similarity of these diseases, and thus their similar response to therapy. Thus, he points out, acute myeloid leukemia with trilineage dysplasia (AML-TLD) is not “AML,” but MDS.

Dr. Gore defines MDS more generally as a chronic leukemia that is progressive and lethal. Currently, the only known curative therapy for MDS is allogeneic bone marrow transplant. If a patient is not a candidate for transplant, the risk of progression is assessed using the International Prognostic Scoring System (IPSS), which is based on the percentage of blasts in the bone marrow, karyotype, and cytopenias. In patients who do not receive transplant, prognosis is generally poor, and cytotoxic chemotherapy generally fails. A variety of medications are being used for these patients. One class of drugs that has shown promise is the DNA methyltransferase inhibitors, 5-azacytidine and decitabine, which have produced an overall response of 50% as monotherapy.

Dr. Gore presented early clinical trial data suggesting a benefit from methyltransferase inhibitors used in combination with histone deacetylase inhibitors in these patients. Because DNA methyltransferase silences genes by methylating DNA, and histone deacetylase helps maintain chromatin in its transcriptionally inactive form (heterochromatin), inhibiting both steps leads to transcriptionally active DNA and reactivation of gene expression. This attempt to target the epigenetic control of MDS holds great promise for patients.

Elevated Total Protein and the Interpretation of Serum Protein Electrophoresis

November 1, 2007

Commentary by Jamie Hoffman, MD 

A healthy 54-year-old man without past medical history presents for a routine physical exam for his insurance company. His blood work reveals a total protein (TP) of 9.4 g/dl and an albumin of 3.0 g/dl. What should be included in this patient’s diagnostic workup?

An elevated TP:albumin ratio often necessitates identifying the protein(s) responsible for the elevation. Plasma proteins largely consist of albumin and globulins such as immunoglobulins, carrier proteins, and acute phase reactants. Elevated globulin levels warrant further workup. An important question to ask oneself in the workup of elevated proteins is whether there is an increase in multiple immunoglobulins (i.e., a polyclonal gammopathy, as seen in HIV, viral hepatitis, liver disease, connective tissue disease, or anything that stimulates a generalized immune response) or in one specific ‘clone’ (i.e., a monoclonal protein, produced by a malignant plasma cell or other B-cell malignancy).

Monoclonal proteins are made by the proliferation of a single clone of plasma cells. In order to assess for the presence of a monoclonal protein, an SPEP should be ordered. In this test, a patient’s serum is placed into an agarose medium that separates the proteins based on size and charge (+ charge on left).

For reference, a normal SPEP:

Alpha-1 fraction = alpha-1 antitrypsin, thyroid-binding globulin.

Alpha-2 fraction = ceruloplasmin, haptoglobin.

Beta-1 = transferrin. Beta-2 = beta-lipoprotein [IgA, IgM, even IgG at times].

Between Beta and Gamma = CRP, fibrinogen.

Gamma = immunoglobulins. (1)



A typical gamma M spike, meaning that a large amount of one very specific protein (i.e., produced by a single clone) is present.




Tumor Lysis Syndrome and the Role of Urinary Alkalinization

September 13, 2007

Commentary by Bani Chander MD, PGY-2, and Sergio Obligado MD, Attending Physician, Nephrology

Tumor lysis syndrome (TLS) is characterized by a group of metabolic abnormalities including hyperkalemia, hyperuricemia, and hyperphosphatemia with secondary hypocalcemia, following the initiation of cytotoxic therapy. Although there is no well-established definition for this syndrome, the Cairo-Bishop definition [1] is a commonly used classification system that stratifies the degree of severity by utilizing specific laboratory data and clinical features. The constellation of abnormalities that occurs in TLS is due to a rapid release of potassium, purine nucleic acids, and phosphorus when tumor cells abruptly lyse, spilling their contents into the extracellular space. These abnormalities can subsequently lead to acute renal failure and may result in multiple organ failure and/or death. It is therefore important to identify those patients who are at risk for this syndrome in order to initiate early preventive treatments [2], most often including a combination of allopurinol, IV hydration, and/or urinary alkalinization.
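As a rough illustration of how the laboratory arm of the Cairo-Bishop classification works, the sketch below checks commonly cited adult laboratory thresholds, with two or more abnormalities (within the defined window around chemotherapy initiation) constituting laboratory TLS. The cutoffs and the sample lab values here are illustrative and should be verified against the primary reference [1].

```python
# Commonly cited adult laboratory thresholds (verify against reference [1]);
# a >=25% change from baseline also qualifies but is omitted in this sketch.
ADULT_CRITERIA = {
    "uric_acid_mg_dl":  lambda v: v >= 8.0,
    "potassium_meq_l":  lambda v: v >= 6.0,
    "phosphorus_mg_dl": lambda v: v >= 4.5,
    "calcium_mg_dl":    lambda v: v <= 7.0,   # hypocalcemia is the low one
}

def laboratory_tls(labs):
    """Laboratory TLS requires two or more criteria to be met."""
    hits = [name for name, meets in ADULT_CRITERIA.items() if meets(labs[name])]
    return len(hits) >= 2, hits

# Hypothetical patient shortly after induction chemotherapy
met, which = laboratory_tls({"uric_acid_mg_dl": 9.1, "potassium_meq_l": 6.2,
                             "phosphorus_mg_dl": 3.8, "calcium_mg_dl": 8.5})
print(met, which)
```

The full Cairo-Bishop scheme then grades clinical TLS by adding creatinine elevation, arrhythmia, and seizure criteria on top of the laboratory definition.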

TLS is most often seen in acute lymphoblastic leukemia and high-grade non-Hodgkin’s lymphoma, most commonly Burkitt’s lymphoma [3]. Other malignancies associated with TLS include CLL, AML, multiple myeloma, small cell lung cancer, breast and ovarian cancer, medulloblastomas, and sarcomas. The kidney is the primary organ involved in the clearance of potassium, phosphorus, and uric acid. Uric acid can precipitate within the renal tubules in an acidic environment or when tubular flow rate is decreased. When uric acid crystals form, they can precipitate within the collecting ducts and ureters and cause obstructive uric acid nephropathy. Calcium phosphate deposition in the renal collecting ducts, vessels, and ureters may also contribute to acute renal failure in this setting.

The standard of care in patients who are to receive chemotherapy or radiation for a cancer with high cell turnover is at least 2 days of allopurinol prior to initiation of therapy. IV hydration should also be used to maintain urine output at a minimum of 2.5 liters/day. More recently, rasburicase has been introduced to prevent uric acid nephropathy in patients at increased risk for tumor lysis syndrome. High-risk features include increased uric acid levels, LDH levels greater than two times normal or WBC >50,000/microL (both indicative of high tumor burden), certain tumors (including Burkitt’s lymphoma, lymphoblastic lymphoma, ALL, and AML), decreased intravascular volume, and tumor infiltration of the kidney.

Early recognition of a fall in glomerular filtration or diuresis in patients with tumor lysis is critical, as prompt initiation of dialysis in this group can prevent both further renal deterioration as well as dangerous metabolic derangements. In contrast to most causes of acute kidney injury in which dialysis is initiated to treat the sequelae of decreased GFR (i.e. hyperkalemia, uremia, volume overload), dialysis can actually reverse the kidney injury, as both uric acid and phosphate are efficiently cleared by the dialysis membrane.


How do you assess a patient’s risk for recurrent DVT?

July 6, 2007

Commentary by Sean Cavanaugh MD, Associate Editor, Clinical Correlations

A 51-year-old man with a history of DVT diagnosed seven months ago presents to your clinic for follow up. He has no family history of blood clots. He has been on Coumadin since his DVT was diagnosed. No testing for thrombophilia has been done. How do you proceed?

Recently, The Annals of Internal Medicine released an excellent statement about the treatment of venous thrombosis (see prior post). Unfortunately, it does not address the more interesting questions of how and why we would evaluate a patient further for hypercoagulability and risk of recurrent DVT.

There is a general consensus that patients with the following disorders have a higher risk of recurrence and should have long-term anticoagulation:

1. Antiphospholipid syndrome (APS)

A. Anti-cardiolipin antibodies: antibodies against lipid complexes.

B. Anti-Beta 2 glycoprotein I

C. Lupus anticoagulants: a reproducible lupus anticoagulant assay (documented positive on more than one occasion)

** Evidence suggests that lupus anticoagulants and anti-Beta 2 glycoprotein I antibodies are more thrombogenic than the other antiphospholipid antibodies

2. Antithrombin III deficiency (and, to a lesser extent, Protein C and S deficiency)

3. Homozygous factor V Leiden (R506Q)

4. Cancer

5. Compound heterozygosity for factor V Leiden (FVL) plus the prothrombin 20210A mutation (not much is known about the increased risk of the homozygous prothrombin gene mutation, but it presumably belongs in this category). Patients who are heterozygous for factor V Leiden or prothrombin 20210A alone have lower odds ratios for recurrent DVT. Many physicians do not recommend testing for these conditions after an initial DVT under routine circumstances.


Should All Patients with Hepatitis C Be Screened for Hepatocellular Carcinoma?

July 3, 2007

Should patients with Hepatitis C (HCV) with no evidence of cirrhosis undergo screening for hepatocellular carcinoma (HCC)? Is there any reason to check for HCC when the liver associated enzymes (LAEs) are normal?

-Sandeep Mangalmurti, PGY-2

Commentary by Mike Poles MD, Associate Editor Clinical Correlations and Assistant Professor, Division of Gastroenterology

HCC continues to be one of the most common solid malignancies worldwide. Further, almost all cases of HCC occur in the background of a histologically abnormal liver; approximately 90% of cases occur in the background of cirrhosis. It is important to note that cirrhosis of any etiology can result in the development of HCC, though cirrhosis due to the viral hepatitides is the most common cause. In the Far East, where HBV is highly endemic, it is the most common cause of cirrhosis-related HCC. On the other hand, in the U.S. and Western Europe, HCV-related cirrhosis is more commonly associated with HCC development. As noted above, approximately 10% of patients with HCC do not have cirrhosis. Worldwide, the majority of those with HCC but without cirrhosis are infected with HBV, which is believed to be oncogenic, in part, by virtue of its ability to integrate its DNA into the human genome. HCV, on the other hand, is an RNA virus that is not capable of integration, but it can nonetheless, rarely, cause HCC through its effect on hepatic inflammation and increased hepatocyte activation and proliferation. A recent article in the Annals of Internal Medicine (Ikeda K et al. Antibody to Hepatitis B Core Antigen and Risk for HCV-Related HCC: A Prospective Study. Ann Int Med. 2007;146(9):649-656) [http://www.annals.org/cgi/content/full/146/9/649] supports past studies showing that the development of HCC in HCV-infected patients may also be related to occult (latent) HBV infection, in which the patient has been exposed to HBV and harbors integrated HBV in the liver but shows no signs of infection in the periphery except antibody against the HBV core antigen. Thus, in HCV-infected patients without cirrhosis, the development of HCC may be related to co-existent HBV infection. So, is the risk of developing HCC in non-cirrhotic HCV patients enough to trigger us to perform surveillance for HCC?
It is generally accepted that screening for HCC is not cost-effective unless the cancer is expected to occur at a rate greater than 0.2% per year (Di Bisceglie AM. Issues in Screening and Surveillance for Hepatocellular Carcinoma. Gastroenterology. 2004;127:S104-S107). This threshold is exceeded in patients with established cirrhosis, in whom HCC develops at a rate of 1-4% per year. Since the risk of HCC in patients with chronic HCV but without cirrhosis is very low (below 0.2% yearly), surveillance is not cost-effective in such patients. Whether this recommendation would be modified by the presence of anti-HBV core antibody requires further study.

In response to the second question, it is important to realize that patients with chronic HCV but without abnormal liver-associated enzymes (LAEs) can still have significantly abnormal liver histology, including cirrhosis, though the risk of significant liver damage is decreased in this population. Thus, evidence of cirrhosis is far more pertinent to HCC risk than the presence of abnormal LAEs.

Meeting Perspectives-ASCO 2007

June 26, 2007

Commentary by Theresa Ryan, MD, Assistant Professor, Division of Oncology

During the first five days in June, the American Society of Clinical Oncology met in Chicago for its 43rd annual meeting. The theme of this meeting was “Translating Research into Practice,” emphasizing the society’s goal of enhancing patient care by creating a forum wherein the latest advances in translational and clinical cancer research are presented in the context of our current understanding of cancer biology. Many of the abstracts presented will lay the groundwork for further research; a number will likely have immediate impact. While I cannot do justice to the scope of the meeting, I have highlighted a few of the presentations that I believe will have the greatest immediate impact on the practice of oncology, as well as outlined some challenges for the future.

Abstract #1: “Randomized phase III trial of Sorafenib vs. placebo in patients with advanced hepatocellular carcinoma.” Results of the SHARP trial.

Importance: HCC is the third leading cause of cancer death globally, and its incidence is expected to rise in the West. No standard therapy exists for advanced HCC.
Background: Sorafenib is an orally bioavailable multikinase inhibitor with anti-angiogenic, pro-apoptotic, and Raf kinase inhibitory activity.
The Trial: A large, multicenter, randomized, placebo-controlled phase III trial evaluated the efficacy and safety of sorafenib vs. placebo. The trial was stopped early after it met its pre-defined early stopping criteria. The hazard ratio (HR) for overall survival was 0.69 (95% CI: 0.55-0.87; p=0.0006), representing a 44% improvement in overall survival vs. placebo. Median overall survival was 10.7 vs. 7.9 months. The most frequent toxicities were diarrhea, hand-foot skin reaction, fatigue, and bleeding. The authors concluded that sorafenib was well tolerated and is the first agent to demonstrate a statistically significant improvement in overall survival for patients with advanced HCC. This effect is clinically meaningful and establishes sorafenib as first-line treatment for these patients.
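For readers wondering how a hazard ratio of 0.69 corresponds to a roughly 44% improvement in survival: the HR means the death rate in the sorafenib arm was 69% of that in the placebo arm, and inverting the ratio gives the relative improvement. A minimal sketch of that arithmetic (my illustration, not part of the trial report):

```python
# Converting a hazard ratio (HR) into a relative improvement in
# overall survival. An HR of 0.69 means the treatment arm's death
# rate was 69% of the placebo arm's; the reciprocal expresses the
# placebo arm's hazard relative to treatment.
hr = 0.69
relative_improvement = 1 / hr - 1  # ~0.45, i.e. roughly 44-45%
print(f"Relative improvement in overall survival: {relative_improvement:.1%}")
```

The small gap between the computed ~45% and the reported 44% reflects rounding of the published hazard ratio.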
The editorial: While the improvement in overall survival is probably one only an oncologist could become excited about, this truly does represent a significant advance in our understanding and treatment of HCC. Traditional chemotherapy agents are essentially ineffective. This trial, combined with the encouraging positive results of other (smaller) trials employing an “anti-angiogenic” strategy (bevacizumab, sunitinib), provides a rationale upon which to develop future trials in HCC.