Clinical Questions

Is There a Long-Term Mortality Benefit From Bariatric Surgery?

March 8, 2012

By Marc O’Donnell

Faculty Peer Reviewed

Obesity is defined as a body mass index (BMI) of ≥30 kg/m2. The rate of obesity in the United States has skyrocketed over the last several decades, becoming a disease of epidemic proportions. According to the Centers for Disease Control and Prevention, in 2009, 32 states had a prevalence of obesity of ≥25%, while 9 of these states had a prevalence of ≥30%. The economic cost of treating obesity and its complications, including type 2 diabetes mellitus, heart disease, stroke, osteoarthritis, certain cancers, obstructive sleep apnea, and depression, has been estimated at approximately $100 billion annually in the United States.[1] Over the past several years, bariatric surgery has soared in popularity as an effective weight loss modality; in fact, NYU Langone Medical Center and Bellevue Hospital Center have both been designated Bariatric Surgery Centers of Excellence. Although several studies have shown a mortality benefit with bariatric surgery, a 2011 study published in JAMA refutes this claim, posing the question: is there really a long-term mortality benefit with bariatric surgery?

Several studies have demonstrated a relationship between weight loss from bariatric surgery and improved survival compared to medical management for obesity. The Swedish Obese Subjects (SOS) study was a prospective, matched, surgical intervention trial with 4047 patients (71% female), an average age of 47 years, and an average follow-up of 10.9 years.[2] After 10 years, the control cohort's average weight had changed by no more than ±2%, while the surgery cohort had an average weight loss of 14%-25%, depending on surgery type, with a 21% decrease in mortality (absolute risk reduction 1.3%). Importantly, this study was not powered to elucidate the mechanism through which bariatric surgery decreased mortality.
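
To make these summary statistics concrete, the short sketch below converts the reported absolute risk reduction into a number needed to treat (NNT = 1/ARR). This is illustrative arithmetic based on the figures quoted above, not a reanalysis of the trial data.

```python
# Back-of-the-envelope check on the SOS figures quoted above.
# A 1.3% absolute risk reduction (ARR) over ~10.9 years of follow-up
# implies the number needed to treat (NNT) computed below.
# Illustrative arithmetic only, not a reanalysis of the trial.

arr = 0.013        # absolute risk reduction from the text
nnt = 1 / arr      # NNT = 1 / ARR

print(f"NNT over ~11 years: {nnt:.0f} operations to prevent one death")
```

In other words, roughly 77 patients would need to undergo surgery over a decade of follow-up to prevent one death.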

A second study, this one a retrospective analysis, included 7925 patients (85% female), with an average age of 39 years and a mean follow-up of 7.9 years.[3] This study demonstrated a 40% decrease in mortality (absolute risk reduction 1.5%) in the surgery cohort and, importantly, stratified the mortality benefit: a 56% decrease in mortality from coronary artery disease, a 92% decrease in mortality from diabetes, and a 60% decrease in mortality from cancer. Interestingly, death not caused by disease was 58% higher in the surgery cohort. Both of these studies demonstrated a significant decrease in long-term mortality for obese patients who underwent bariatric surgery compared to matched obese patients treated with medical management.

The long-term decrease in mortality must be weighed against the perioperative mortality rate of bariatric surgery. According to the LABS Consortium, the 30-day mortality rate for bariatric surgery is 0.3% overall (0% for laparoscopic adjustable gastric banding, 0.2% for laparoscopic Roux-en-Y gastric bypass, and 2.1% for open Roux-en-Y gastric bypass).[4] By these figures, the long-term mortality benefit exceeds the perioperative mortality for laparoscopic adjustable gastric banding and laparoscopic Roux-en-Y gastric bypass, while for open Roux-en-Y gastric bypass the perioperative mortality surpasses the long-term benefit conferred by the resulting weight loss. Open bariatric surgery, however, constitutes less than 10% of all bariatric surgery performed in the United States.[5] The perioperative mortality rate must be carefully considered when recommending bariatric surgery to a patient.
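
The weighing described above can be made explicit. The sketch below subtracts each procedure's 30-day mortality (the LABS figures) from the lower bound of the long-term absolute risk reduction cited earlier (1.3%); this is a rough illustration, since the underlying studies differ in population and time horizon.

```python
# Rough net-benefit arithmetic using the figures quoted above:
# long-term ARR (~1.3%) minus 30-day mortality per procedure (LABS).
# Illustrative only; the source studies differ in population
# and follow-up duration.

long_term_arr = 0.013  # lower bound of the ARR range cited above

thirty_day_mortality = {
    "laparoscopic adjustable gastric banding": 0.000,
    "laparoscopic Roux-en-Y gastric bypass":   0.002,
    "open Roux-en-Y gastric bypass":           0.021,
}

for procedure, risk in thirty_day_mortality.items():
    net = long_term_arr - risk
    verdict = "net mortality benefit" if net > 0 else "net mortality harm"
    print(f"{procedure}: {net:+.1%} -> {verdict}")
```

Only the open procedure nets out negative, matching the text's conclusion that its perioperative mortality outweighs the long-term benefit.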

While the aforementioned studies demonstrated a 21%-40% reduction in long-term mortality after bariatric surgery compared to matched controls prescribed medical management for obesity, a 2011 study in JAMA by Maciejewski and colleagues did not demonstrate a statistically significant mortality benefit. The study was a retrospective analysis of 1695 patients (74% male) in Veterans Affairs medical centers, with an average age of 50 years and an average follow-up of 6.7 years.[6] With propensity-matched controls and after Cox regression, bariatric surgery was not associated with decreased long-term mortality compared to medical management for obesity (hazard ratio 0.83; 95% confidence interval, 0.61-1.14). The study's authors note that their surgery cohort was at high risk: the majority of patients were men, with an older average age and a higher average BMI than in previous studies with a similar endpoint. In the 2 previously examined studies, the majority of patients were female, younger on average, and had a lower average BMI. Also, the BMI of the patients in the VA surgery group was 47.4 kg/m2 compared to 42.0 kg/m2 in the nonsurgical controls. The lack of mortality benefit in such a high-risk cohort may reflect the advanced comorbid illnesses these patients had already accrued from obesity. This suggests that for older and sicker patients, bariatric surgery does not confer a mortality benefit; the benefit is found in younger and healthier patients who have yet to experience end-organ dysfunction from obesity and its related complications.

Historically, bariatric surgery has been associated with improvements in quality of life, morbidity, and, most importantly, mortality. To determine the true mortality benefit of surgery, the perioperative mortality rate must be weighed against the long-term improvement in mortality that surgery confers. Recent data, however, have shown no mortality benefit for bariatric surgery in high-risk obese patients. This lack of benefit is likely explained by irreversible end-organ damage that might have been prevented had surgery been performed at a younger age. A more careful examination of comorbid illnesses and end-organ dysfunction should therefore be performed before recommending bariatric surgery, in order to select those patients most likely to derive a mortality benefit. More studies on the long-term risks and benefits of bariatric surgery are sure to be published, improving the quality of evidence on its long-term implications.[7]

Marc O’Donnell is a 4th year student at NYU School of Medicine

Peer reviewed by Manish Parikh, MD, Assistant Professor, Department of Surgery, NYU Bariatric Surgery Associates


References

1. Ludwig DS, Pollack HA. Obesity and the economy: from crisis to opportunity. JAMA. 2009;301(5):533-535. http://jama.ama-assn.org/content/301/5/533.full

2. Sjöström L, Narbro K, Sjöström CD, et al. Effects of bariatric surgery on mortality in Swedish obese subjects. N Engl J Med. 2007;357(8):741-752. http://www.nejm.org/doi/full/10.1056/NEJMoa066254#t=articleTop

3. Adams TD, Gress RE, Smith SC, et al. Long-term mortality after gastric bypass surgery. N Engl J Med. 2007;357(8):753-761. http://www.nejm.org/doi/full/10.1056/NEJMoa066603#t=articleTop

4. Longitudinal Assessment of Bariatric Surgery (LABS) Consortium. Flum DR, Belle SH, King WC, et al. Perioperative safety in the longitudinal assessment of bariatric surgery. N Engl J Med. 2009;361(5):445-454. http://www.nejm.org/doi/full/10.1056/NEJMoa0901836#t=articleTop

5. DeMaria EJ, Pate V, Warthen M, Winegar DA. Baseline data from American Society for Metabolic and Bariatric Surgery-designated Bariatric Surgery Centers of Excellence using the Bariatric Outcomes Longitudinal Database. Surg Obes Relat Dis. 2010;6(4):347-355. http://www.sciencedirect.com/science/article/pii/S1550728909007709

6. Maciejewski ML, Livingston EH, Smith VA, et al. Survival among high-risk patients after bariatric surgery. JAMA. 2011;305(23):2419-2426. http://jama.ama-assn.org/content/305/23/2419.full

7. Padwal R, Klarenbach S, Wiebe N, et al. Bariatric surgery: a systematic review of the clinical and economic evidence. J Gen Intern Med. 2011;26(10):1183-1194. http://www.springerlink.com/content/k48h0686338u7012/

FROM THE ARCHIVES: How Do You Diagnose Polymyalgia Rheumatica?

March 1, 2012

Please enjoy this post from the archives dated August 12, 2009

By Eve Wadsworth MD

Faculty Peer Reviewed

Polymyalgia rheumatica (PMR) resembles several different disorders, including osteoarthritis, and can be difficult to diagnose. In addition to osteoarthritis, PMR can resemble conditions as diverse as depression, fibromyalgia, myopathic drug reactions, and malignancy. PMR, however, can have dangerous consequences, namely blindness, and it responds to well-established treatment regimens. Familiarity with PMR's presentation and its distinguishing features is therefore critical to avoid the serious complications that can result from a delayed or missed diagnosis.

PMR has a well-known association with temporal arteritis, occurring in as many as 50% of patients with temporal arteritis. (1) Temporal arteritis has a characteristic presentation that includes headache ipsilateral to the inflamed vessel; jaw claudication; and constitutional symptoms such as fever, night sweats, fatigue, and anorexia. Patients presenting later in the course of temporal arteritis often complain of visual changes or loss, heralding its most serious consequence: blindness secondary to ischemic optic neuropathy. In a patient with this presentation who also complains of morning stiffness in the shoulder and hip girdles, physicians are likely to identify PMR in the setting of temporal arteritis. However, symptoms of PMR do not always occur at the same time as the symptoms of temporal arteritis, and PMR can occur completely independently of temporal arteritis. When it presents without the distinctive symptoms of temporal arteritis, PMR can pose a diagnostic dilemma.

Familiarity with the full range of manifestations of PMR is imperative. In 1979, Howard Bird characterized the clinical features most helpful in the diagnosis of PMR by determining their specificity and sensitivity. Seven features were meaningful in their sensitivity or specificity for PMR: bilateral shoulder pain and/or stiffness, onset of illness of <2 weeks' duration, initial ESR >40 mm/hr, morning stiffness lasting >1 hour, age >65 years, depression and/or weight loss, and bilateral upper arm tenderness. Dr. Bird and his colleagues proposed that the presence of three or more of these characteristics is sufficient to identify probable PMR. (2) In the wake of the publication of the Bird criteria, several other criteria were developed for accurate identification of PMR at first presentation, incorporating features such as disease duration >2 months and absence of hand swelling. (3,4,5) In sum, patients with PMR are typically over 50 years of age and present with discomfort they describe as pain and stiffness, often worst in the morning, limited to the shoulders, hip girdle, neck, and torso. (6) Because of the typically proximal nature of the discomfort, patients complain of difficulty with dressing, using a hairdryer, and similar tasks. While it is not unusual for patients to present with some constitutional symptoms, a high spiking fever is more typical of temporal arteritis than of lone PMR. (4)
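
As a compact illustration of how the Bird rule operates, the sketch below counts the seven features and flags probable PMR at a threshold of three. The feature labels are paraphrased from the text, and the function is a hypothetical teaching aid, not a validated clinical tool.

```python
# Sketch of the Bird criteria described above: "probable PMR" when at
# least 3 of the 7 features are present. Feature labels are paraphrased
# from the text; hypothetical teaching aid, not a validated tool.

BIRD_FEATURES = {
    "bilateral shoulder pain/stiffness",
    "onset < 2 weeks",
    "initial ESR > 40 mm/hr",
    "morning stiffness > 1 hour",
    "age > 65 years",
    "depression and/or weight loss",
    "bilateral upper arm tenderness",
}

def probable_pmr(present: set) -> bool:
    """Return True when 3 or more Bird features are present."""
    return len(present & BIRD_FEATURES) >= 3

# Example: three features present -> meets the threshold.
patient = {"bilateral shoulder pain/stiffness",
           "initial ESR > 40 mm/hr",
           "age > 65 years"}
print(probable_pmr(patient))  # True
```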

Rheumatoid arthritis (RA) is one of the conditions that can be confused with PMR. Pre-treatment CD8 lymphocyte levels, rheumatoid factor (RF) positivity, and involvement of peripheral joints have been pursued as potentially useful metrics for differentiating RA from PMR. Peripheral joint involvement is significantly more common in patients with RA, while a decreased CD8 lymphocyte count is more common in patients with PMR. RF positivity, however, does not differ significantly between the two conditions. (7) Imaging can also help differentiate RA from PMR. Routine x-rays of patients with PMR are usually normal. However, MRI examination of PMR patients reveals meaningful differences in the nature of the arthropathy in RA versus PMR. In PMR, the site of inflammation is largely the extraarticular synovial structures; common sites of involvement include the flexor, posterior tibial, and peroneal tenosynovial sheaths. In a small study comparing MRI results of PMR and RA patients, 9 of 14 patients (10/20 joints) with PMR but only 2 of 14 (2/20 joints) with RA had prominent edema at extracapsular sites adjacent to the joint capsule or in the soft tissues (p = 0.02). Both groups had comparable degrees of joint effusion (18 PMR, 17 RA), bursitis (18 PMR, 16 RA), and tenosynovitis (3 PMR, 2 RA). (8,9) It is worth mentioning, however, that MRI is expensive and not always immediately accessible. Given this reality, a more cost- and time-efficient approach to clarifying a picture that is neither clearly RA nor clearly PMR is to treat empirically with steroids, to which PMR responds rapidly.

Proper diagnosis of PMR requires familiarity with its signs and symptoms and a systematic approach to the work-up that reflects an understanding of the conditions that mimic it. A detailed history and physical examination focused on the features common to the diagnostic criteria described earlier is essential and may eliminate the need for more costly and elaborate diagnostic procedures. Any visual changes warrant prompt referral to ophthalmology. Checking routine labs, including ESR, TSH, and a hepatic panel, is prudent, whereas imaging should be reserved for ambiguous cases or those in which empiric treatment with steroids is contraindicated.

Peer reviewed by Peter Izmirly MD, NYU Division of Rheumatology

References:

1. Brooks RC, McGee SR. Diagnostic dilemmas in polymyalgia rheumatica. Arch Intern Med. 1997;157(2):162-168.

2. Bird HA, Esselinckx W, Dixon AS, Mowat AG, Wood PH. An evaluation of criteria for polymyalgia rheumatica. Ann Rheum Dis. 1979;38(5):434-439.

3. Jones JG, Hazleman BL. Prognosis and management of polymyalgia rheumatica. Ann Rheum Dis. 1981;40(1):1-5.

4. Chuang TY, Hunder GG, Ilstrup DM, Kurland LT. Polymyalgia rheumatica: a 10-year epidemiologic and clinical study. Ann Intern Med. 1982;97(5):672-680.

5. Nobunaga M, Yoshioka K, Ilstrup DM, Kurland LT. Clinical studies of polymyalgia rheumatica: a proposal of diagnostic criteria. Jpn J Med. 1989;28(4):155-159.

6. Brooks RC, McGee SR. Diagnostic dilemmas in polymyalgia rheumatica. Arch Intern Med. 1997;157(2):162-168.

7. Caporali R, Cimmino MA, Ferraccioli G, et al. Presenting features of polymyalgia rheumatica (PMR) and rheumatoid arthritis with PMR-like onset: a prospective study. Ann Rheum Dis. 2001;60(11):1021-1024.

8. Salvarani C, Cantini F, Olivieri I, Hunder GG. Polymyalgia rheumatica: a disorder of extra-articular synovial structures? J Rheumatol. 1999;26(3):517-521.

9. McGonagle D, Pease C, Marzo-Ortega H, O’Connor P, Gibbon W, Emery P. Comparison of extracapsular changes by magnetic resonance imaging in patients with rheumatoid arthritis and polymyalgia rheumatica. J Rheumatol. 2001;28(8):1837-1841.

How Should You Choose the Best Anti-platelet Agents for Secondary Stroke Prevention?

February 16, 2012

By Demetrios Tzimas, MD

Faculty Peer Reviewed

You are about to discharge a 75-year-old woman with hyperlipidemia, hypertension, and peripheral vascular disease who was admitted to the hospital for an ischemic stroke. Being an astute physician, you would like to mitigate this patient’s risk of having a second stroke. But you ask yourself, “with all of the agents available today, which anti-platelet agent should I choose to decrease her risk of a second stroke?”

The etiology of an ischemic stroke, as described in Adams and Victor’s Principles of Neurology, is thrombosis from atheromatous plaques in the cerebral arteries [1]. Thus, it makes sense that after a stroke, as in cardiovascular disease, anti-platelet therapy can help prevent a second event. Although aspirin has traditionally been the anti-platelet agent of choice [2], a variety of anti-platelet agents are now at our disposal for the secondary prevention of ischemic stroke, including aspirin (an irreversible inhibitor of platelet aggregation), aspirin-dipyridamole (an inhibitor of platelet aggregation and adhesion), clopidogrel (an inhibitor of adenosine diphosphate-induced platelet aggregation), and cilostazol (an inhibitor of cellular phosphodiesterase and thus of platelet aggregation).

In 2006, the American Heart Association and the American Stroke Association published “Guidelines for Prevention of Stroke in Patients With Ischemic Stroke or Transient Ischemic Attack.” Its Class I, Level A recommendations for secondary stroke prevention in patients with non-cardioembolic ischemic stroke or transient ischemic attack identify aspirin (at any dose), aspirin-dipyridamole, and clopidogrel as all acceptable options for secondary prophylaxis [3]; yet this does not solve our dilemma of which agent to start in our patient. Therefore, you decide to look at the literature yourself to determine which anti-platelet agent to prescribe post-hospitalization.

The CAPRIE (Clopidogrel versus Aspirin in Patients at Risk of Ischaemic Events) Trial, conducted by Gent et al in 1996, compared clopidogrel 75 mg with aspirin 325 mg in 19,185 patients with recent ischemic stroke, myocardial infarction, or symptomatic peripheral vascular disease [4]. The primary outcome was a composite of ischemic stroke, myocardial infarction, or vascular death. Investigators found a statistically significant 8.7% relative risk reduction in favor of clopidogrel for the primary outcome (939 events in the clopidogrel group versus 1021 events in the aspirin group), although for recurrent stroke specifically there was no difference between the aspirin and clopidogrel groups. Of note, gastrointestinal bleeding was significantly more common in the aspirin group.

Diener et al (2004) conducted the MATCH (Management of Atherothrombosis with Clopidogrel in High Risk Patients) Trial, a follow-up to CAPRIE. The authors investigated the efficacy of aspirin 75 mg plus clopidogrel 75 mg vs clopidogrel 75 mg plus placebo in 7,276 patients with risk factors for stroke (previous stroke, previous MI, angina, diabetes mellitus, or symptomatic peripheral artery disease) as well as previous manifestations of atheroembolic disease (previous transient ischemic attack or ischemic stroke) [5]. The study’s primary endpoint was the first occurrence of ischemic stroke, myocardial infarction, vascular death, or rehospitalization for any ischemic event over 18 months. Although there was no difference between the two groups in reaching the primary endpoint (15.7% with aspirin plus clopidogrel vs 16.7% with clopidogrel plus placebo), there was a significantly higher rate of life-threatening bleeding with clopidogrel plus aspirin than with clopidogrel plus placebo (3% versus 1%, respectively). The study demonstrated that the addition of aspirin to clopidogrel did not add any benefit in terms of stroke prevention, and actually increased the rate of serious bleeding.

Building on the MATCH Trial, Bhatt et al (2006) followed 15,603 patients at high risk for atherothrombotic events in the CHARISMA (Clopidogrel for High Atherothrombotic Risk and Ischemic Stabilization, Management, and Avoidance) Trial, randomized to either low-dose aspirin plus clopidogrel 75 mg or low-dose aspirin plus placebo [6]. The primary endpoint was the first occurrence of myocardial infarction, stroke, or death from cardiovascular causes over a median 28-month follow-up. As in the MATCH Trial, there was no difference between the two groups in the rate of the primary outcome (6.8% in the aspirin plus clopidogrel group and 7.3% in the aspirin plus placebo group). Unlike in the MATCH Trial, there was no difference in severe bleeding, but there was a significantly increased relative risk of moderate bleeding in the aspirin plus clopidogrel group compared to the aspirin plus placebo group. Thus, as in MATCH, the authors found no benefit of clopidogrel plus aspirin in reducing the rate of stroke in patients with multiple cardiovascular risk factors.

Although aspirin and clopidogrel together show no added benefit, newer agents have made other combinations possible. One such agent, dipyridamole, was studied in combination with aspirin in the ESPS2 (European Stroke Prevention Study 2) Trial in 1996 [7]. The authors studied the efficacy of low-dose aspirin, dipyridamole, and the two agents in combination for the secondary prevention of ischemic stroke in 6,602 patients with prior stroke or transient ischemic attack. The primary endpoint was stroke or death over two years. Compared with placebo, stroke risk was significantly reduced by 18.1% with aspirin 25 mg twice daily, 16.3% with dipyridamole 200 mg twice daily, and 37% with aspirin/dipyridamole 25 mg/200 mg twice daily. There was no significant difference in the rate of stroke or transient ischemic attack between aspirin alone and dipyridamole alone, but the combination pill was significantly better than either agent alone in reducing stroke. In terms of bleeding, all aspirin-containing groups had significantly more bleeding than the non-aspirin groups (aspirin alone 8.4%, aspirin/dipyridamole 8.7%, dipyridamole alone 4.7%, placebo 4.5%).

The PRoFESS (Prevention Regimen for Effectively Avoiding Second Strokes) Trial in 2008 enrolled 20,332 patients who had had an ischemic stroke in the previous 90 days and, in a two-by-two factorial design, assigned them to aspirin/dipyridamole 25 mg/200 mg twice daily or clopidogrel 75 mg daily, each with telmisartan 80 mg daily or placebo; the predefined primary endpoint was recurrent stroke of any kind [8]. In the anti-platelet arms of the trial, the authors found no significant difference in recurrent strokes between the aspirin/dipyridamole group and the clopidogrel group, but there were more major hemorrhagic events with aspirin/dipyridamole than with clopidogrel (4.1% vs 3.6%; hazard ratio 1.15).

A newer agent, cilostazol, was studied in the CSPS 2 (Cilostazol for the Prevention of Secondary Stroke) Trial, a non-inferiority study published in The Lancet Neurology [9]. The authors compared cilostazol 100 mg with aspirin 81 mg for stroke prevention in 2,757 patients who had had a cerebral infarction in the previous 26 weeks. The primary endpoint was the first occurrence of cerebral infarction, cerebral hemorrhage, or subarachnoid hemorrhage over a mean of 29 months. Cilostazol was non-inferior to aspirin in reducing the risk of recurrent stroke, and patients in the cilostazol group had significantly fewer hemorrhagic events (hazard ratio 0.458 compared with aspirin).

The trials reviewed here demonstrate that there are several options for secondary stroke prophylaxis. In summary:

Trial | Anti-Platelet Agents | Result | Caveat
CAPRIE | Clopidogrel vs aspirin | No difference in rates of stroke | Increased bleeding with aspirin
MATCH | Aspirin + clopidogrel vs clopidogrel | No difference in rates of stroke | Increased bleeding with aspirin + clopidogrel
CHARISMA | Aspirin + clopidogrel vs aspirin | No difference in rates of stroke | Increased bleeding with aspirin + clopidogrel
ESPS2 | Aspirin + dipyridamole vs aspirin vs dipyridamole | Combination better than either agent alone for secondary stroke prevention | More bleeding in all aspirin-containing groups
PRoFESS | Aspirin + dipyridamole vs clopidogrel | No difference in rates of stroke | More bleeding with aspirin + dipyridamole
CSPS 2 | Cilostazol vs aspirin | No difference in rates of stroke | More bleeding in aspirin group

Because the current guidelines on choosing among anti-platelet agents are not definitive, it is our responsibility as physician-scientists to know the current data and to evaluate each patient individually rather than relying on vague guidelines. On this review of the literature, all of these anti-platelet agents have similar efficacy in reducing recurrent strokes in high-risk patients; a recurring theme across trials is more bleeding in the aspirin-containing and combination groups. Since we always weigh the risks and benefits of our treatments, in an older population we must seriously consider the risk of GI bleeding when choosing secondary prophylaxis. Since the combination regimens mostly showed an increased risk of bleeding with no real benefit, a single agent is probably the best approach to secondary prevention. Although aspirin has traditionally been the anti-platelet agent of choice, it has been the culprit for bleeding in many trials. Thus, given similar efficacy in preventing recurrent strokes, the choice among aspirin, clopidogrel, aspirin/dipyridamole, and cilostazol will depend on the patient’s ability to tolerate the regimen; cost (aspirin, pennies per pill; clopidogrel, about 5 dollars per pill; aspirin/dipyridamole, about 3 dollars per pill; cilostazol, about 1 dollar per pill); and compelling indications for certain agents (eg, coronary stents requiring clopidogrel).

Dr. Demetrios Tzimas is a contributing editor of Clinical Correlations and a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Saran Jonas, Professor of Neurology, Director of Bellevue Department of Neurology


References:

1. Ropper AH, Samuels MA. Adams and Victor’s Principles of Neurology. 9th ed. New York: McGraw-Hill; 2009:773.

2. Antiplatelet Trialists’ Collaboration. Collaborative overview of randomised trials of antiplatelet therapy–I: prevention of death, myocardial infarction, and stroke by prolonged antiplatelet therapy in various categories of patients. BMJ. 1994;308(6921):81-106. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2539220/?tool=pubmed

3. Sacco RL, Adams R, Albers G, et al. Guidelines for prevention of stroke in patients with ischemic stroke or transient ischemic attack: a statement for healthcare professionals from the American Heart Association/American Stroke Association Council on Stroke. Stroke. 2006;37(2):577-617. http://stroke.ahajournals.org/cgi/reprint/37/2/577

4. Gent M, Beaumont D, Blanchard J, et al. A randomised, blinded, trial of clopidogrel versus aspirin in patients at risk of ischaemic events (CAPRIE). Lancet. 1996;348(9038):1329-1339.

5. Diener HC, Bogousslavsky J, Brass LM, et al. Aspirin and clopidogrel compared with clopidogrel alone after recent ischaemic stroke or transient ischaemic attack in high-risk patients (MATCH): randomised, double-blind, placebo-controlled trial. Lancet. 2004;364(9431):331-337.

6. Bhatt DL, Fox KA, Hacke W, et al. Clopidogrel and aspirin versus aspirin alone for the prevention of atherothrombotic events. N Engl J Med. 2006;354(16):1706-1717. http://www.nejm.org/doi/full/10.1056/NEJMoa060989

7. Diener HC, Cunha L, Forbes C, et al. European Stroke Prevention Study 2. Dipyridamole and acetylsalicylic acid in the secondary prevention of stroke. J Neurol Sci. 1996;143(1-2):1-13.

8. Sacco RL, Diener HC, Yusuf S, et al. Aspirin and extended-release dipyridamole versus clopidogrel for recurrent stroke. N Engl J Med. 2008;359(12):1238-1251. http://www.nejm.org/doi/pdf/10.1056/NEJMoa0805002

9. Shinohara Y, Katayama Y, Uchiyama S, et al. Cilostazol for prevention of secondary stroke (CSPS 2): an aspirin-controlled, double-blind, randomised non-inferiority trial. Lancet Neurol. 2010;9(10):959-968.

From The Archives: Why is Syphilis Still Sensitive to Penicillin?

January 13, 2012

Please enjoy this post from the Archives, first published on July 30, 2009

By Sam Rougas MD

Faculty Peer Reviewed

It seems that every week an article in a major newspaper reports what most infectious disease physicians have been preaching for several years: antibiotic resistance is rapidly spreading. Organisms such as methicillin-resistant Staphylococcus aureus, extensively drug-resistant tuberculosis, and vancomycin-resistant Enterococcus have journeyed from the intensive care units to the locker rooms of the National Football League. That said, some bacteria have behaved strangely and, until recently, inexplicably. Syphilis, a disease caused by the spirochete Treponema pallidum, though first reported in Europe around the 15th century, has likely been in North America since the dawn of mankind. Its rapid spread in Europe began shortly after Christopher Columbus returned from the New World[1] and continued unabated until it was first noted that penicillin (PCN) could cure the disease[2]. Since that time, syphilis, once the great pox, has fallen to the bottom of most differentials. How is it, then, that one of our oldest diseases remains sensitive to our first antibiotic?

Penicillin resistance in staphylococcal species was reported as early as 1946, and multiple cases were noted worldwide before the turn of the decade[3]. Within ten years of PCN’s introduction there was resistance among staphylococcal species; yet after more than 50 years, PCN-resistant syphilis remains worthy of a case report. In practically every reported case, the infection was cured by increasing the dose or duration of therapy or by using another beta-lactam antibiotic[4,5]. One tempting explanation is that spirochetes are incapable of developing PCN resistance; however, that is not true. Brachyspira pilosicoli, an intestinal spirochete, has shown PCN resistance[6]. A second thought is that syphilis is incapable of developing antibiotic resistance at all, though this too has been shown to be false. Case reports of azithromycin resistance in T. pallidum became increasingly common at the beginning of this century, and gene sequencing of these isolates mapped the mutation responsible for the macrolide-resistant phenotype[7]. Obviously the mechanism of action of a macrolide antibiotic differs from that of a beta-lactam, as does the resistance profile, but this does show that syphilis is capable of developing resistance to at least one class of antibiotic.

The classic teaching is that beta-lactam antibiotics act at the level of the cell wall by binding to penicillin-binding proteins (PBPs). Once bound, beta-lactams interfere with the production of specific peptidoglycans critical for cell wall structure; once these peptides are eliminated, the cell wall ruptures and the bacterium dies. Resistance occurs when bacteria, either via an innate mutation or via DNA exchange, acquire the ability to produce beta-lactamase, an enzyme capable of cleaving the antibiotic and rendering it useless. In syphilis the mechanism of action is thought to be the same, but resistance has never developed. This may be a direct consequence of one of the more recently discovered PBPs, Tp47[8]. Tp47 functions as both a PBP and a beta-lactamase, yet it may paradoxically be responsible for the persistence of PCN sensitivity in syphilis. The binding of the beta-lactam component of PCN to Tp47 results in hydrolysis of the beta-lactam bond of the antibiotic, but this reaction creates several byproducts that are thought to have a higher affinity for Tp47 than the beta-lactam itself[9]. Thus, as a consequence of PCN being broken down, products are released that make it more difficult for the beta-lactamase to bind the antibiotic.

While this is one current theory behind the exquisite sensitivity of syphilis to PCN, it is clearly not cause for celebration. Cases of syphilis are increasing worldwide[10] as the medical community has been unable to eradicate the disease. As the number of cases increases, so too does the potential for antibiotic resistance. Theoretically, a mutation in Tp47 could alter the protective byproducts upon which the sensitivity of syphilis to PCN depends. Such a mutation would likely bring an end to the gravy train that has been the treatment of syphilis.

Faculty Peer Reviewed with commentary by Meagan O’Brien, MD, NYU Division of Infectious Diseases and Immunology

It is true that Treponema pallidum remains highly susceptible to penicillin and has developed resistance to azithromycin through an A→G mutation at position 2058 of the 23S rRNA gene of T. pallidum, which confers resistance by precluding macrolide binding to the bacterial 50S ribosomal subunit, of which 23S rRNA is a structural component. Nevertheless, the mechanisms of retained penicillin sensitivity are not fully understood[7]. The discovery of Tp47 as a dual PBP and beta-lactamase is interesting and important, but more studies would be needed to attribute the persistence of Treponema pallidum’s penicillin sensitivity to this mechanism. Luckily, we do not have many clinical isolates with which to test this theorized mechanism. One key clinical point to remember is that eradication of the infection depends not only on the invading organism but also on the host defense system. In our HIV-positive, immunocompromised patient population, we routinely worry about treatment failure in syphilis infection due not to penicillin resistance but to dysfunctional host responses. A body of evidence now supports the recommendation that if an HIV-positive patient has a CD4 T-cell count ≤350 cells/uL and a blood RPR titer ≥1:32 with latent syphilis or syphilis of unknown duration, a lumbar puncture should be performed to rule out neurosyphilis and, if positive, intravenous penicillin should be given instead of IM benzathine penicillin[11-14]. Additionally, after treating late or latent syphilis, a fourfold fall in RPR titer should be observed over 12 months; otherwise the patient should be evaluated for treatment failure or neurosyphilis, with the understanding that the CNS may be a more privileged site for treponemal survival in the face of IM benzathine penicillin.

References:

1. Rose M. Origins of syphilis. Archaeology. 1997;50(1).

2. Mahoney J, Arnold R, Harris A. Penicillin treatment of early syphilis: a preliminary report. Vener Dis Inform. 1943;24:355-357.

3. Shanson DC. Antibiotic-resistant Staphylococcus aureus. J Hosp Infect. 1981;2(1):11-36.

4. Cnossen W, Niekus H, Nielsen O, et al. Ceftriaxone treatment of penicillin resistant neurosyphilis in alcoholic patients. J Neurol Neurosurg Psychiatry. 1995;59(2):194-195.

5. Stockli H. Current aspects of neurosyphilis: therapy-resistant cases with high-dosage penicillin? Schweiz Rundsch Med Prax. 1992;81(49):1473-1480.

6. Mortimer-Jones S, Phillips N, Ram Naresh T, et al. Penicillin resistance in the intestinal spirochaete Brachyspira pilosicoli associated with OXA-136 and OXA-137, two new variants of the class D beta-lactamase OXA-63. J Med Microbiol. 2008;57:1122-1128.

7. Katz KA, Klausner JD. Azithromycin resistance in Treponema pallidum. Curr Opin Infect Dis. 2008;21(1):83-91.

8. Deka R, Machius M, Norgard M, et al. Crystal structure of the 47-kDa lipoprotein of Treponema pallidum reveals a novel penicillin-binding protein. J Biol Chem. 2002;277(44):41857-41864.

9. Cha J, Ishiwata A, Mobashery S. A novel beta-lactamase activity from a penicillin-binding protein of Treponema pallidum and why syphilis is still treatable with penicillin. J Biol Chem. 2004;279(15):14917-14921.

10. Gerbase AC, Rowley JT, Heymann DH, et al. Global prevalence and incidence estimates of selected curable STDs. Sex Transm Infect. 1998;74(Suppl 1):S12-S16.

11. Marra CM, Maxwell CL, Smith SL, et al. Cerebrospinal fluid abnormalities in patients with syphilis: association with clinical and laboratory features. J Infect Dis. 2004;189(3):369-376.

12. Marra CM, Maxwell CL, Tantalo L, et al. Normalization of cerebrospinal fluid abnormalities after neurosyphilis therapy: does HIV status matter? Clin Infect Dis. 2004;38(7):1001-1006.

13. Ghanem KG, Moore RD, Rompalo AM, Erbelding EJ, Zenilman JM, Gebo KA. Lumbar puncture in HIV-infected patients with syphilis and no neurologic symptoms. Clin Infect Dis. 2009;48(6):816-821.

14. Ghanem KG, Moore RD, Rompalo AM, Erbelding EJ, Zenilman JM, Gebo KA. Neurosyphilis in a clinical cohort of HIV-1-infected patients. AIDS. 2008 Jun 19;22(10):1145-51.

Does Perioperative Smoking Cessation Improve Outcomes?

January 6, 2012

By Benjamin Wu, MD

Faculty Peer Reviewed

Mr. T is a 53-year-old man with a history significant for cholelithiasis. He decides to have an elective cholecystectomy after years of biliary colic. Mr. T is an active smoker and wants to know whether he should stop smoking prior to surgery.

Smoking is associated with adverse surgical outcomes; however, debate continues regarding the safety of perioperative smoking cessation. The traditional teaching holds that smokers who stop smoking close to surgery have a higher risk of pulmonary and overall perioperative complications. Warner et al first described the phenomenon in 1989, demonstrating that patients who stopped smoking less than two months before surgery had four times as many pulmonary complications as patients who had stopped more than two months before. They surmised that the increased complication rate was likely due to decreased cough and increased sputum production. [1,3] This prevailing wisdom persisted: ten years later, in 1999, Smetana suggested that smoking cessation must be initiated eight weeks before surgery and continued to prevent increased pulmonary complications. [2] If patients truly face the increased risk described by Warner et al, would it be prudent to continue smoking when surgery is less than 4 to 8 weeks away? Most physicians would say absolutely not, and a growing body of literature shows that 4 to 8 weeks of smoking abstinence does not increase pulmonary or overall perioperative complication rates in surgical patients. [5,6]

Møller chipped away at the status quo in 2002, analyzing data from 120 patients who were assigned either to a smoking intervention program 6 to 8 weeks prior to surgery or to standard care. The study addressed the question of an ideal time to stop smoking: would 6 weeks be sufficient to show decreased complications? Møller found that smoking cessation 6 to 8 weeks before surgery actually reduced the overall complication rate (18% vs 52% in controls), with a decreased relative risk for any complication (RR 0.34; 95% CI, 0.17-0.58), while pulmonary complications occurred at the same rate in the intervention and control groups (2% vs 2%). [6] Subsequently, Sørensen and Jørgensen examined 60 patients who were randomly assigned to either abstinence or continued smoking 2 to 3 weeks prior to surgery. They, too, found that even a few weeks of smoking cessation produced no difference in pulmonary or overall complication rates between those who stopped smoking and those who continued: pulmonary complications occurred in 11% of the intervention group and 16% of the control group, and overall surgical complications did not differ significantly (intervention 41% vs control 43%). [7] Both of these trials showed that varying lengths of preoperative smoking cessation did not increase pulmonary complications and, in Møller’s trial, cessation reduced the risk of overall complications.

Barrera et al in 2005 prospectively studied pulmonary complications in lung cancer patients who underwent thoracotomy. They examined 300 lung cancer patients categorized as non-smokers, past quitters (>2 months prior to surgery), recent quitters (<2 months prior to surgery), and continuous smokers. The pulmonary complication rate was 19% in recent quitters and 23% in ongoing smokers. [8] Comparing those who stopped smoking within 8 weeks of surgery and those who continued to smoke until the time of surgery, no paradoxical increase in pulmonary complications was found. [8] Furthermore, the independent risk factors for developing pulmonary complications were a lower predicted DLCO (OR 1.42; 95% CI 1.17-1.70), a history of more than 60 pack-years of smoking (OR 2.54; 95% CI 1.28-5.04), and primary lung cancer (OR 3.94; 95% CI 1.34-11.59). [8] Barrera thus provided further evidence that stopping smoking less than 8 weeks prior to surgery carries no increased risk compared with continuing to smoke. Unfortunately, the recent quitters in this study varied widely in their length of smoking cessation.

More recently, Lindström et al published a small, multicenter, randomized controlled trial showing that smoking cessation prior to surgery reduces postoperative complications. The researchers examined outcomes of 117 general and orthopedic surgical patients over a 4-week preoperative period. The intervention group received intensive smoking cessation support with the goal of abstinence, while the control group received the standard of care, including neutral, general information about the harms of smoking. The main outcome was any postoperative complication within 30 days. With only 4 weeks of smoking cessation, the intervention produced a relative risk of 0.51 (95% CI, 0.27-0.97) for any postoperative complication; there were no pulmonary complications in the intervention group and only one in the control group. Furthermore, the authors calculated a number needed to treat (NNT) of 5 (95% CI, 3-40). [9] In 2010, Thomsen, Møller, and colleagues studied breast cancer patients who required surgery in less than 4 weeks. The researchers examined 130 patients assigned to either a brief intervention or standard care; patients in the brief intervention arm abstained from smoking from 2 days preoperatively to 10 days postoperatively. Postoperative complications were similar between those who stopped smoking and those who continued, with a relative risk of 1.00 (95% CI, 0.75-1.33), and the authors concluded that such brief cessation is not of clinical relevance. [10] Again, these conclusions were based on a short follow-up period and heterogeneous types of surgery.
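
As a consistency check on the Lindström figures, the sketch below back-calculates the event rates implied by the quoted relative risk and NNT (since ARR = control rate × (1 − RR) and NNT = 1/ARR). The implied rates are arithmetic inferences from the summary statistics above, not numbers reported in the text.

```python
# Consistency check on the Lindström trial figures quoted above
# (RR ~0.51, NNT ~5). Since ARR = control_rate * (1 - RR) and
# NNT = 1 / ARR, we can back out the implied event rates.
# Arithmetic inference only, not data reported in the text.

rr, nnt = 0.51, 5

control_rate = 1 / (nnt * (1 - rr))    # implied complication rate, control arm
intervention_rate = control_rate * rr  # implied rate, cessation arm

print(f"implied control event rate:      {control_rate:.0%}")       # ~41%
print(f"implied intervention event rate: {intervention_rate:.0%}")  # ~21%
```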

The Cochrane Review subsequently examined a series of eight clinical trials for the effect of smoking cessation programs upon pulmonary and overall complications. [11] They found that several studies showed no difference or even possibly reduced rates of pulmonary complications. [6,7,9,11] The authors of the review concluded that perioperative intervention 4 to 8 weeks prior to surgery with nicotine replacement therapy (NRT) is supported by evidence and likely to reduce any complication (RR 0.70; 95% CI 0.56-0.88). [11]

Another recent systematic review and meta-analysis in the Archives of Internal Medicine examined nine studies comparing surgical patients who recently quit smoking with those who continued smoking. The meta-analysis showed no difference in perioperative complications between quitting within 8 weeks of surgery and continued smoking (RR 0.78; 95% CI 0.57-1.07). [3] In a further analysis restricted to studies that validated self-reported abstinence, the trend favored recent quitters, but again no significant difference was seen between recent quitters and continuing smokers (RR 0.57; 95% CI, 0.16-2.01). [3] Interestingly, in the four studies that examined pulmonary complications specifically, the trend favored higher risk in recent quitters, but the confidence interval crossed 1 (RR 1.18; 95% CI 0.95-1.46). [3] This trend is largely explained by the heavy weight placed on Warner’s study (80.76%) relative to the studies showing no difference between quitters and smokers. [1] The authors also suggest that certain surgeries or patient populations could be at higher risk for pulmonary complications; in aggregate, however, the evidence shows no difference between those who stopped smoking and those who continued. A limitation noted by the authors was the significant heterogeneity within the category of recent quitters, which included patients who stopped smoking anywhere from 2 days to 8 weeks prior to surgery. They also noted that only three studies validated abstinence with urinary cotinine testing or exhaled carbon monoxide readings, lending these studies higher quality. [3] Finally, the meta-analysis found that two studies showed an increased cough reflex in those who stopped smoking and two showed a decreased cough reflex. In other words, there is no satisfactory support for the assumption that a decreased cough reflex after quitting causes pulmonary complications. [3]

Observational studies, randomized controlled trials, and meta-analyses all suggest that smoking cessation does not increase pulmonary or perioperative complications, and it may reduce complications when begun four weeks prior to surgery. [6] Moreover, a short period of cessation immediately before surgery showed no significant change in clinical outcomes. [10] Ideally, patients should stop smoking 8 weeks prior to surgery; if that is not possible, cessation 4-8 weeks prior to surgery will not adversely affect pulmonary or perioperative complications, contrary to the conclusions established by Warner. [1,10] Certain prospective and randomized controlled trials suggest that even 4 weeks of smoking cessation may decrease pulmonary and overall perioperative complications. [7,9,11] The limitations of many of these studies include small sample sizes; restriction to single centers or, if multicenter, to European populations; limited follow-up; and significant heterogeneity in amount of smoking, time to surgery, and type of surgery. Broader studies, with standardized definitions of complications and types of surgery, will be needed to establish the ideal time to stop smoking, and multicenter, multinational studies must be performed to increase the generalizability of their conclusions.

Mr. T stops smoking 4 weeks prior to surgery and aside from issues with post-surgical pain control does very well. The patient decides to continue with his smoking abstinence and is smoke-free to this day.

Conclusions

1. Smoking cessation 4 to 8 weeks prior to surgery carries no increased risk of pulmonary or other postoperative complications compared with continued smoking until surgery. Emerging evidence suggests that smoking cessation may actually reduce postoperative complications. [7,8]

2. Pulmonary function tests and the amount of smoking may better predict pulmonary complications in smokers. [8]

3. Further research should be completed to determine the ideal time to stop smoking prior to surgery, but even brief episodes of smoking cessation may be beneficial for the patient. [10,11]

Dr. Benjamin Wu is a 2nd year resident at NYU Langone Medical Center

Peer Reviewed by Nishay Chitkara, MD, Medicine (Pulmonary) at NYU Langone Medical Center


References

1. Warner MA, Offord KP, Warner ME, Lennon RL, Conover MA, Jansson-Schumacher U. “Role of preoperative cessation of smoking and other factors in postoperative pulmonary complications: a blinded prospective study of coronary artery bypass patients.” Mayo Clin Proc. 1989 Jun;64(6):609-16. http://www.ncbi.nlm.nih.gov/pubmed/2787456

2. Smetana GW. “Preoperative pulmonary evaluation.” N Engl J Med. 1999 Mar 25;340(12):937-44.  http://www.ncbi.nlm.nih.gov/pubmed/10089188

3. Myers K, Hajek P, Hinds C, McRobbie H. Stopping Smoking Shortly Before Surgery and Postoperative Complications: A Systematic Review and Meta-analysis. Arch Intern Med. 2011 Mar 14. [Epub ahead of print, cited 2011, March 22]

4. Warner DO. Perioperative abstinence from cigarettes: physiologic and clinical consequences. Anesthesiology. 2006 Feb;104(2):356-67.  http://www.ncbi.nlm.nih.gov/pubmed/16436857

5. Johnson RG, Arozullah AM, Neumayer L, Henderson WG, Hosokawa P, Khuri SF. “Multivariable predictors of postoperative respiratory failure after general and vascular surgery: results from the patient safety in surgery study.” J Am Coll Surg. 2007 Jun;204(6):1188-98.  http://www.ncbi.nlm.nih.gov/pubmed/17544077

6. Møller AM, Villebro N, Pedersen T, Tønnesen H. “Effect of preoperative smoking intervention on postoperative complications: a randomised clinical trial.” Lancet. 2002 Jan 12;359(9301):114-7.  http://www.ncbi.nlm.nih.gov/pubmed/11809253

7. Sørensen LT, Jørgensen T. “Short-term pre-operative smoking cessation intervention does not affect postoperative complications in colorectal surgery: a randomized clinical trial.” Colorectal Dis. 2003 Jul;5(4):347-52.  http://www.ncbi.nlm.nih.gov/pubmed/12814414

8. Barrera R, Shi W, Amar D, Thaler HT, Gabovich N, Bains MS, White DA. Smoking and timing of cessation: impact on pulmonary complications after thoracotomy. Chest. 2005 Jun;127(6):1977-83.  http://www.ncbi.nlm.nih.gov/pubmed/15947310

9. Lindström D et al. “Effects of a perioperative smoking cessation intervention on postoperative complications: a randomized trial.” Ann Surg. 2008 Nov;248(5):739-45.  http://www.ncbi.nlm.nih.gov/pubmed/18948800

10. Thomsen T, Tønnesen H, Okholm M, Kroman N, Maibom A, Sauerberg ML, Møller AM. “Brief smoking cessation intervention in relation to breast cancer surgery: a randomized controlled trial.” Nicotine Tob Res. 2010 Nov;12(11):1118-24.  http://www.ncbi.nlm.nih.gov/pubmed/20855414

11. Thomsen T, Villebro N, Møller AM. Interventions for preoperative smoking cessation. Cochrane Database Syst Rev. 2010 Jul 7;(7):CD002294.  http://www.ncbi.nlm.nih.gov/pubmed/20614429

What Is the Significance of Monoclonal Gammopathy of Undetermined Significance (MGUS)?

December 22, 2011

By Maryann Kwa, MD

Faculty Peer Reviewed

Clinical Case:

A.D. is a healthy 65-year-old African American man with no prior medical history who presents to his primary care physician for an annual checkup. He feels well and has no complaints, and his physical exam is normal. Routine laboratory tests are significant for an elevated total serum protein with a normal albumin. A serum protein electrophoresis (SPEP) is then performed, revealing a monoclonal protein (M protein) of 12 g/L, IgG type, with a normal free light chain ratio. All other lab results, including hemoglobin, creatinine, and calcium, are within their normal ranges. A skeletal survey is negative for lytic lesions, and a bone marrow biopsy reveals <10% plasma cells. The patient is referred to a hematologist, who informs him that he has monoclonal gammopathy of undetermined significance (MGUS). The patient inquires whether any treatment is recommended.

Monoclonal gammopathy of undetermined significance (MGUS) is a premalignant plasma cell dyscrasia defined as a serum M protein <30 g/L, clonal plasma cells <10% in the bone marrow, and the absence of end-organ damage attributable to a plasma cell proliferative disorder [1]. End-organ damage is defined by the presence of hypercalcemia, renal insufficiency, anemia, and bony lesions (remembered by the acronym CRAB). MGUS is usually discovered incidentally on routine laboratory tests. It affects approximately 3% of individuals older than 50 years [2]. Prevalence is twice as high among African Americans and is lower in Asians. Older age, male sex, family history, and immunosuppression also increase the risk of MGUS [3]. So why do we worry about MGUS? It matters to clinicians because it carries a 1% annual risk of progression to multiple myeloma or a related malignancy [4]. According to the International Myeloma Working Group (IMWG), the diagnostic criteria for multiple myeloma are clonal plasma cells ≥10% on bone marrow biopsy, the presence of monoclonal protein in either serum or urine, and evidence of end-organ damage related to the plasma cell disorder. A prospective study by Landgren et al. (2009) demonstrated that multiple myeloma is consistently preceded by MGUS [5]. Of the approximately 77,400 healthy adults in the United States who were observed for up to ten years, 71 developed multiple myeloma, and prior evidence of MGUS was demonstrated in all of these patients by assays for protein abnormalities in prediagnostic serum samples.

The pathophysiology of the transition from normal plasma cells to MGUS to multiple myeloma involves many overlapping oncogenic events [6]. The first step in the pathogenesis is usually an abnormal response to antigenic stimulation, possibly mediated by overexpression of interleukin (IL)-6 receptors and dysregulation of the cyclin D gene. These changes result in primary cytogenetic abnormalities, either hyperdiploidy or immunoglobulin heavy chain translocations (most commonly t(4;14), t(14;16), t(6;14), t(11;14), and t(14;20)). The progression of MGUS to multiple myeloma is likely secondary to a random second hit, the nature of which is unknown. Mutations in Ras and p53, methylation of p16, myc abnormalities, and induction of angiogenesis have also been associated with progression.

Since MGUS was first described approximately thirty years ago, there have been new concepts and advances in its classification and management. There are currently three distinct clinical types of MGUS: 1) non-IgM (IgG or IgA); 2) IgM; and 3) light chain. Non-IgM MGUS is the most common type; its more advanced premalignant stage of plasma cell proliferation, smoldering (asymptomatic) multiple myeloma, is characterized by a higher risk of progression to multiple myeloma. Smoldering myeloma is defined by a serum monoclonal protein (IgG or IgA) ≥30 g/L and/or clonal plasma cells ≥10% in bone marrow, with absence of end-organ damage [7]. It is associated with a 10% annual risk of progression to multiple myeloma. The IgM subtype of MGUS, on the other hand, is mainly associated with predisposition to Waldenström macroglobulinemia and less frequently to IgM multiple myeloma. Finally, the light chain type is the precursor of light chain multiple myeloma, which comprises approximately 20% of new cases of multiple myeloma.

In terms of outcomes in MGUS, Kyle et al. (2002) published a cohort study of 1384 patients from Minnesota with MGUS who were followed for up to 35 years (median, 15.4 years) [8]. Eight percent of patients progressed, developing multiple myeloma (n=75), IgM lymphoma (7), AL amyloidosis (10), leukemia (3), or plasmacytoma (1). The cumulative probability of progression was 12% at 10 years, 25% at 20 years, and 30% at 25 years; the overall risk of progression was about 1% per year.
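
A quick sanity check relates the "about 1% per year" figure to the cumulative probabilities Kyle et al. report. Assuming a constant annual hazard (a simplification that ignores competing mortality and any change in risk over time), the cumulative risk is 1 − (1 − p)^years; the sketch below shows that a flat 1% per year slightly under-predicts the reported figures, so "about 1% per year" should be read as a rough average.

```python
# Sanity check: compare a constant 1%/year progression hazard with the
# cumulative probabilities reported by Kyle et al. Constant-hazard
# cumulative risk = 1 - (1 - p) ** years. Simplified model: ignores
# competing mortality and any time-varying risk.

p = 0.01  # "about 1% per year" from the text
for years, reported in [(10, 0.12), (20, 0.25), (25, 0.30)]:
    modeled = 1 - (1 - p) ** years
    print(f"{years:>2} years: modeled {modeled:.0%} vs reported {reported:.0%}")
```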

When evaluating a patient for the first time, a complete history and physical examination should be performed, with emphasis on symptoms and findings that may suggest multiple myeloma. A complete blood count, serum creatinine, serum calcium, and a qualitative test for urine protein should also be obtained. If serum abnormalities or proteinuria are found, electrophoresis and immunofixation are indicated. At the time of diagnosis, it is very difficult to predict which patients with MGUS will remain stable and which will progress. Patients with a non-IgG isotype, a high serum M protein level (≥15 g/L), or an abnormal serum free light chain ratio (i.e., the ratio of free immunoglobulin kappa to lambda light chains in the serum) are at increased risk of progression to smoldering myeloma and then to multiple myeloma.
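
The risk factors just listed lend themselves to a simple tally, in the spirit of the risk categories discussed below. The helper in this sketch is hypothetical (the thresholds are paraphrased from the text, and formal risk models involve more nuance than a three-item count), but it captures the basic logic.

```python
# Sketch of the risk-factor tally described above: non-IgG isotype,
# M protein >= 15 g/L, and an abnormal free light chain (FLC) ratio.
# Hypothetical helper with thresholds paraphrased from the text;
# not a validated risk calculator.

def mgus_risk_factor_count(isotype: str,
                           m_protein_g_per_l: float,
                           flc_ratio_abnormal: bool) -> int:
    count = 0
    if isotype.upper() != "IGG":
        count += 1
    if m_protein_g_per_l >= 15:
        count += 1
    if flc_ratio_abnormal:
        count += 1
    return count

# The patient in the clinical case: IgG, 12 g/L, normal FLC ratio.
print(mgus_risk_factor_count("IgG", 12, False))  # 0 -> low-risk category
```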

In June 2010, the IMWG released consensus guidelines for monitoring and managing patients with MGUS and smoldering myeloma. Patients with MGUS are stratified into low-, intermediate-, and high-risk categories. If the serum monoclonal protein is <15 g/L, of IgG type, and the free light chain ratio is normal, the risk of eventual progression to multiple myeloma or a related malignancy is low. In this low-risk setting, a baseline bone marrow examination or skeletal survey is not routinely indicated if the clinical evaluation and laboratory values suggest MGUS. Patients should be followed with serum protein electrophoresis (SPEP) 6 months after diagnosis and, if stable, can be followed every 2-3 years (or sooner if symptoms suggestive of disease progression arise).

Patients who fall into the intermediate- and high-risk MGUS categories are managed differently. They usually have a serum monoclonal protein ≥15 g/L, an IgA or IgM type, and/or an abnormal free light chain ratio. In this situation, a bone marrow biopsy should be carried out at baseline, with both conventional cytogenetics and fluorescence in situ hybridization. These patients are followed with SPEP, complete blood count, and serum calcium and creatinine levels 6 months after diagnosis and then yearly for life. It is important to note, however, that a bone marrow biopsy and skeletal survey are always indicated if a patient with presumed MGUS has unexplained anemia, renal insufficiency, hypercalcemia, or skeletal lesions.
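To make this stratification concrete, here is a minimal sketch in Python of the risk features described above. It is an illustration only, not a validated clinical tool; the function name and the equal weighting of the three features are our own simplification of the IMWG scheme.

    def mgus_risk_category(m_protein_g_per_l, isotype, flc_ratio_normal):
        """Count the three MGUS risk features described in the text above."""
        risk_features = 0
        if m_protein_g_per_l >= 15:          # serum M protein >= 15 g/L
            risk_features += 1
        if isotype.upper() != "IGG":         # non-IgG (IgA or IgM) isotype
            risk_features += 1
        if not flc_ratio_normal:             # abnormal free light chain ratio
            risk_features += 1
        return "low risk" if risk_features == 0 else "intermediate/high risk"

    # A hypothetical patient: IgG M protein below 15 g/L with a normal ratio
    print(mgus_risk_category(8, "IgG", flc_ratio_normal=True))  # -> low risk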

Finally, patients with smoldering (asymptomatic) multiple myeloma should always undergo a baseline bone marrow biopsy and a mandatory skeletal survey. An MRI of the spine and pelvis is also recommended because it can detect occult lesions, which predict a more rapid progression to multiple myeloma. Wang et al. (2003) estimated the risk of progression in 72 patients with smoldering myeloma in whom an MRI of the spine was performed at baseline [9]. The median time to progression was significantly shorter with an abnormal MRI than with a normal MRI (1.5 years versus 5 years). If laboratory values, bone marrow biopsy, and MRI results are stable, these studies should be repeated every 4-6 months for one year and then every 6 to 12 months as long as they remain stable.

An estimated 20,580 new cases of multiple myeloma were diagnosed in the United States in 2009. Median survival is about 3 to 4 years after diagnosis, although survival has improved with newer therapies such as autologous stem cell transplantation, immunomodulatory drugs (thalidomide and lenalidomide), and proteasome inhibitors (bortezomib) [10]. Given these outcomes, should patients with the precursor conditions MGUS and smoldering myeloma also be treated? According to the current IMWG guidelines, MGUS and smoldering myeloma should not be treated outside of clinical trials. Patients with MGUS are relatively healthy and have a low lifetime risk of progression.

On the other hand, patients with smoldering myeloma have a relatively high rate of progression to multiple myeloma, at 10% yearly. Prior to the advent of novel therapies, a 1993 randomized controlled trial of melphalan-prednisone given initially versus at progression to multiple myeloma showed no significant difference in response rate or overall survival [11]. A single-group trial in 2008 using thalidomide in 76 patients with smoldering myeloma failed to show a clear benefit for treatment [12]. A study by Mateos et al. (2009) that randomized patients with smoldering myeloma to lenalidomide-dexamethasone versus active surveillance is ongoing [13]. At 19 months of follow-up, interim analysis showed that approximately 50% of patients in the surveillance group had progressed to multiple myeloma, while none of the patients in the treatment group had. It remains unknown whether treating patients with smoldering myeloma improves overall survival.

Returning to patient A.D. in the clinical case, he is diagnosed with low-risk MGUS. Of note, he underwent a bone marrow biopsy and skeletal survey, which are not routinely indicated in this setting. His hematologist advised him to repeat a SPEP in 6 months. If he remains stable at that time, he can be followed every two to three years. Treatment is not indicated at this stage.

Table: Diagnostic criteria for the plasma cell disorders

Disorder / Disease definition

Monoclonal gammopathy of undetermined significance (MGUS)
  • Serum monoclonal protein <30 g/L
  • Clonal bone marrow plasma cells <10%
  • No end organ damage that can be attributed to the plasma cell proliferative disorder (hypercalcemia, renal insufficiency, anemia, and bone lesions)

Smoldering (asymptomatic) multiple myeloma
  • Serum monoclonal protein (IgG or IgA) ≥30 g/L and/or
  • Clonal bone marrow plasma cells ≥10%
  • No end organ damage

Multiple myeloma
  • Clonal bone marrow plasma cells ≥10%
  • Presence of serum and/or urinary monoclonal protein
  • Evidence of end organ damage:
      – hypercalcemia: serum calcium ≥11.5 mg/dL, or
      – renal insufficiency: serum creatinine >2 mg/dL or estimated creatinine clearance <40 mL/min, or
      – anemia: normochromic, normocytic, with hemoglobin >2 g/dL below the lower limit of normal or <10 g/dL, or
      – bone lesions: lytic lesions, severe osteopenia, or pathologic fractures

Table adapted from Kyle RA, et al. Leukemia. 2010;24:1121-1127.

Dr. Maryann Kwa is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Harold Ballard, MD Clinical Professor of Medicine, Division of Hematology and Oncology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

[1] Kyle RA, Durie BG, Rajkumar SV, et al. Monoclonal gammopathy of undetermined significance (MGUS) and smoldering (asymptomatic) multiple myeloma: International Myeloma Working Group (IMWG) consensus perspectives risk factors for progression and guidelines for monitoring and management. Leukemia. 2010;24(6):1121-1127. Available from: http://www.nature.com/leu/journal/v24/n6/full/leu201060a.html

[2] Kyle RA, Therneau TM, Rajkumar SV, et al. Prevalence of monoclonal gammopathy of undetermined significance. N Engl J Med. 2006;354(13):1362-1369. Available from: http://www.nejm.org/doi/full/10.1056/NEJMoa054494

[3] Rajkumar SV, Kyle RA, Buadi FK. Advances in the diagnosis, classification, risk stratification, and management of monoclonal gammopathy of undetermined significance: implication for recategorizing disease entities in the presence of evolving scientific evidence. Mayo Clin Proc. 2010;85(10):945-948. Available from:  http://www.mayoclinicproceedings.com/content/85/10/945.full

[4] Landgren O, Waxman AJ. Multiple myeloma precursor disease. JAMA. 2010;304(21):2397-2404. Available from: http://jama.ama-assn.org/content/304/21/2397.full

[5] Landgren O, Kyle RA, Pfeiffer RM, et al. Monoclonal gammopathy of undetermined significance (MGUS) consistently precedes multiple myeloma: a prospective study. Blood. 2009;113(22):5412-5417. Available from: http://bloodjournal.hematologylibrary.org/cgi/content/full/113/22/5412

[6] Chng WJ, Glebov O, Bergsagel PD, Kuehl WM. Genetic events in the pathogenesis of multiple myeloma. Best Pract Res Clin Haematol. 2007;20(4): 571-596. Available from: http://www.bprch.com/article/S1521-6926(07)00064-3/fulltext

[7] Kyle RA, Remstein ED, Therneau TM, et al. Clinical course and prognosis of smoldering (asymptomatic) multiple myeloma. N Engl J Med. 2007;356(25):2582-2590. Available from: http://www.nejm.org/doi/full/10.1056/NEJMoa070389

[8] Kyle RA, Therneau TM, Rajkumar SV, et al. A long-term study of prognosis in monoclonal gammopathy of undetermined significance. N Engl J Med. 2002;346(8):564-569. Available from:  http://www.nejm.org/doi/full/10.1056/NEJMoa01133202

[9] Wang M, Alexanian R, Delasalle K, Weber D. Abnormal MRI of spine is the dominant risk factor for abnormal progression of asymptomatic multiple myeloma. Blood. 2003;102:687a (abstract). Available from: http://bloodjournal.hematologylibrary.org/archive/2003.dtl

[10] Kumar SK, Rajkumar SV, Dispenzieri A, et al. Improved survival in multiple myeloma and the impact of novel therapies. Blood. 2008;111(5):2516-2520. Available from: http://bloodjournal.hematologylibrary.org/cgi/content/full/111/5/2516

[11] Hjorth M, Hellquist L, Holmberg E, et al. Initial versus deferred melphalan-prednisone therapy for asymptomatic multiple myeloma stage I—a randomized study. Eur J Haematol. 1993;50(2):95-102. Available from: http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0609.1993.tb00148.x/abstract

[12] Barlogie B, van Rhee F, Shaughnessy JD Jr., et al. Seven-year median time to progression with thalidomide for smoldering myeloma: partial response identifies subset requiring earlier salvage therapy for symptomatic disease. Blood. 2008;112(8):3122-3125. Available from: http://bloodjournal.hematologylibrary.org/cgi/content/full/112/8/3122

[13] Mateos MV, Lopez-Corral L, Hernandez MT, et al. Multicenter, randomized, open-label, phase III trial of lenalidomide-dexamethasone vs therapeutic abstention in smoldering multiple myeloma at high risk of progression to symptomatic multiple myeloma: results of the first interim analysis. In: 51st American Society of Hematology Annual Meeting and Exposition; December 5-8; New Orleans, LA. Abstract 614. Available from: http://ash.confex.com/ash/2009/webprogram/Paper21268.html

Is There Really Any Role For Steroids In Acute Alcoholic Hepatitis?

December 8, 2011

By Keri Herzog, MD

Faculty Peer Reviewed

The patient is a 48-year-old male with a history of heavy alcohol use (he drinks about 1 pint of vodka daily) who presented to the hospital when he noticed that he had become increasingly jaundiced. He was hemodynamically stable and afebrile on admission, with jaundice and scleral icterus on exam. Laboratory data were significant for a total bilirubin of 6.6 mg/dL, an INR of 2.3, an AST of 83 U/L, an ALT of 72 U/L, and a Maddrey discriminant function (MDF) calculated to be >32. The patient was diagnosed with alcoholic hepatitis.

The question we now seek to answer: would administration of corticosteroids benefit this patient in the management of his alcoholic hepatitis?

The term alcoholic hepatitis (AH) was first used in 1961, although the medical literature contains many antecedent descriptions of patients who developed jaundice following ethanol consumption [1]. Evidence suggests that the increased risk of cirrhosis and AH is due not only to daily ingestion of large amounts of alcohol (greater than 10-20 g/day for women and 20-40 g/day for men) [1], but also to genetic and environmental factors. Chronic alcohol consumption has been shown to cause hepatocyte dysfunction, apoptosis, and necrosis through a combination of oxidative stress and endotoxin-mediated cytokine release [2], with resultant liver abnormalities ranging from steatosis (fatty liver) to cirrhosis to hepatocellular carcinoma [2]. Steatosis and hepatitis related to alcohol use can often be reversed with sobriety, though evidence for reversal of cirrhosis is more limited [1,2].

The Maddrey discriminant function (MDF) was introduced in 1978 in an attempt to identify patients with severe AH who face very high mortality rates. The equation for calculation of the MDF is as follows:

MDF = 4.6 × (patient's prothrombin time − control prothrombin time) + total bilirubin

where prothrombin times are in seconds and total bilirubin is in mg/dL. A value of 32 or greater indicates severe alcoholic hepatitis, with a mortality rate of 50% without treatment [3]. Once a diagnosis has been made and the severity of AH characterized, it is necessary to address treatment. All patients should cease alcohol intake, and nutritional deficiencies must be corrected. Steroids have been used in the treatment of severe alcoholic hepatitis (MDF ≥32), and we will examine the evidence for this approach in the remainder of this article.
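As a worked illustration of the formula, the short Python snippet below computes the MDF for a hypothetical patient. The prothrombin times are invented for the example (the control value is laboratory-dependent); only the bilirubin of 6.6 mg/dL echoes the case above.

    def maddrey_df(patient_pt_sec, control_pt_sec, total_bilirubin_mg_dl):
        """Maddrey discriminant function, exactly as defined above."""
        return 4.6 * (patient_pt_sec - control_pt_sec) + total_bilirubin_mg_dl

    # Hypothetical values: PT 25 s, control PT 12 s, bilirubin 6.6 mg/dL
    mdf = maddrey_df(25.0, 12.0, 6.6)   # 4.6 * 13 + 6.6 = 66.4
    print(mdf >= 32)                    # True -> severe alcoholic hepatitis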

Corticosteroids are the most extensively studied intervention in AH, though their use has engendered controversy in the literature. Corticosteroids inhibit the inflammatory process in alcoholic hepatitis by reducing circulating levels of inflammatory cytokines and by downregulating expression of the adhesion molecules responsible for attracting immunocytes to the damaged liver [4]. Between 1971 and 1992, 13 clinical trials assessed the efficacy of steroids in alcoholic hepatitis. Eight of these trials concluded that there was no improvement in the outcomes of steroid-treated patients compared with those who received placebo [5]. The majority of these trials were limited by small sample sizes (limiting statistical power) and varying inclusion/exclusion criteria. Meta-analyses have therefore been used to further examine the data. Imperiale et al studied eleven randomized trials from 1966 to 1989 and found that treatment of alcoholic hepatitis with steroids was associated with a 37% reduction in mortality; in this meta-analysis, the mortality benefit was evident only in patients with hepatic encephalopathy [1]. Similarly, Rambaldi et al examined 15 randomized trials with a total of 721 patients and found that glucocorticosteroids did not reduce mortality in the overall population of patients with alcoholic hepatitis compared with placebo (or no intervention), but mortality was significantly reduced in the subgroups with hepatic encephalopathy or with severe alcoholic hepatitis (MDF ≥32) [6].

Another study, by Mathurin et al, used pooled primary data from three placebo-controlled trials to compare corticosteroids with placebo in patients who all had severe AH (MDF ≥32). In this group, 28-day survival was significantly higher among patients who received corticosteroids than among those who received placebo (84.6% vs 65.1%). Increasing age and serum creatinine were independent prognostic factors for death from severe alcoholic hepatitis in this study [7]. This survival advantage corresponds to a number needed to treat (NNT) of 5 [3]. Thus, based on the recent literature, we can conclude that patients with severe alcoholic hepatitis, defined by an MDF ≥32 or by hepatic encephalopathy, may derive a benefit from corticosteroid treatment. It is important to note that another agent, pentoxifylline (a phosphodiesterase inhibitor), has been demonstrated in various studies to reduce short-term mortality among patients with AH; these mortality effects may be related to the prevention of hepatorenal syndrome (unlike steroids, which reduce hepatic inflammation). Pentoxifylline is therefore also worth considering for some patients [5].
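The NNT of 5 follows directly from the 28-day survival figures quoted above, as this quick check in Python shows:

    surv_steroids, surv_placebo = 0.846, 0.651  # 28-day survival, Mathurin et al
    arr = surv_steroids - surv_placebo          # absolute risk reduction = 0.195
    nnt = 1 / arr                               # number needed to treat
    print(round(arr, 3), round(nnt, 1))         # 0.195 5.1 -> NNT of 5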

After initiation of steroid therapy, it is necessary to assess response, since steroid side effects are associated with increased morbidity and nonresponders should have therapy stopped. A study by Mathurin et al included 238 patients with AH and found that an early change in bilirubin levels (ECBL), defined as a decrease in bilirubin level after seven days of treatment, was associated with an improved outlook: 95% of patients with an ECBL continued to have improved liver function during treatment and had a significantly higher 6-month survival rate (83% vs 23% in patients whose bilirubin did not decrease on treatment). The absence of an ECBL may therefore identify nonresponders to steroid therapy, and the authors recommended discontinuation of steroids after 7 days in nonresponders [8]. Louvet and colleagues in Lille, France, developed a specific prognostic model (the Lille model) to identify nonresponders. To validate the Lille model, they prospectively studied 328 patients with AH treated with corticosteroids. The model combines six reproducible variables (age, renal insufficiency, albumin, prothrombin time, bilirubin, and evolution of bilirubin by day 7); patients with a Lille score above 0.45 have a markedly decreased 6-month survival compared with others (25% versus 85%). This model may further identify patients who are nonresponders to corticosteroids and who may be candidates for alternative therapy [9]. Lastly, a study by Mendenhall et al suggested that patients with very severe AH, defined by an MDF of 54 or higher, may actually face higher mortality risk from the use of steroids [10].
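Here is a minimal sketch of the ECBL rule in Python. The responder definition (any decrease in bilirubin by day 7 of steroids) follows the study described above [8]; the numeric values in the example are invented.

    def has_ecbl(bilirubin_day0_mg_dl, bilirubin_day7_mg_dl):
        """Early change in bilirubin level: a decrease at day 7 of steroids.

        Per the study above, absence of an ECBL suggests nonresponse, and
        discontinuation of steroids should be considered at day 7.
        """
        return bilirubin_day7_mg_dl < bilirubin_day0_mg_dl

    # Hypothetical course: bilirubin falls from 6.6 to 4.9 mg/dL by day 7
    print(has_ecbl(6.6, 4.9))  # True -> responder; continue steroids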

A common concern about the use of steroids in severe AH is the risk of inducing or exacerbating infection. Patients with AH commonly experience low-grade fevers as a manifestation of their systemic inflammatory process. In addition, patients with severe AH may suffer significant clinical deterioration and are at risk of community- and hospital-acquired infections. Thus, the decision to use steroids in these patients is often fraught with uncertainty. The risk of steroid use in infected patients with AH was examined in a study by Louvet et al. The investigators examined 246 patients with severe AH, of whom 63 were infected at admission and 57 developed an infection after initiation of steroid treatment. Infection developed significantly more frequently in nonresponders to steroids than in responders (42.5% vs 11.1%, respectively). Therefore, infection is not a contraindication to steroid use in these patients, and screening for infection prior to initiation of steroid therapy is unwarranted. In fact, nonresponse to steroids appeared to be the primary risk factor for development of infection in this study [11].

It is important to recognize that the efficacy of steroids has not been evaluated in patients with severe AH and concomitant pancreatitis or gastrointestinal bleeding, as these were exclusion criteria in many of the early studies of alcoholic hepatitis [5]. Side effects of corticosteroids must also be taken into consideration: approximately 16% of patients will experience adverse effects, primarily hyperglycemia, but also an increased risk of infection [3]. Another practical issue is how and when to discontinue treatment [3]. The best available evidence suggests that prednisolone should be the steroid of choice, administered at a dose of 40 mg/day for 4 weeks [5].

Based on the data presented, we can conclude that treatment with prednisolone is likely to improve short-term survival in our patient, who had an MDF of at least 32 but below 54 without evidence of gastrointestinal bleeding or pancreatitis. However, steroids should be stopped if our patient does not have an ECBL, since he would then most likely be a nonresponder and could suffer adverse events from continued steroid use. If our patient had evidence of hepatic encephalopathy, treatment with corticosteroids would also be indicated.

Dr. Keri Herzog is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Neil Shapiro, Editor-In-Chief, Clinical Correlations

Image courtesy of Wikimedia Commons

References:

1. Imperiale TF, McCollough AJ. Do corticosteroids reduce mortality from alcoholic hepatitis? A meta-analysis of the randomized trials. Ann Intern Med. 1990 Aug 15; 113(4):299-307. www.ncbi.nlm.nih.gov/pubmed/2142869

2. Day CP. Treatment of alcoholic liver disease. Liver Transplantation. 2007;13(11):S69-S75.

3. Amini M, Runyon BA. Alcoholic hepatitis 2010: A clinician’s guide to a diagnosis and therapy. World J Gastroenterol 2010 October 21; 16 (39): 4905-4912. www.wjgnet.com/1007-9327/16/4905.pd

4. Lucey, MR, Mathurin P, Morgan TR. Alcoholic Hepatitis. The New England Journal of Medicine 2009; 360: 2758-2769.

5. O’Shea RS, Dasarathy S, McCullough AJ, et al. Alcoholic liver disease: AASLD practice guidelines. Hepatology 2010; 51: 307-328.

6. Rambaldi A, Saconato HH, Christensen E, et al. Systematic review: glucocorticosteroids for alcoholic hepatitis-a Cochrane Hepato-Biliary Group systemic review with meta-analyses and trial sequential analyses of randomized clinic trials. Aliment Pharmacol Ther 2008; 27: 1167–1178.

7. Mathurin P, Mendenhall CL, Carither RL, et al. Corticosteroids improve short-term survival in patients with severe alcoholic hepatitis (AH): individual data analysis of the last three randomized placebo controlled double blind trials of corticosteroids in severe AH. Journal of Hepatology 2002; 36: 480-487.

8. Mathurin P, Abdelnour M, Raymond MJ, et al. Early change in bilirubin levels in an important prognostic factor in severe alcoholic hepatitis treated with prednisolone. Hepatology 2003; 38 (6): 1363-1369.

9. Louvet A, Naveau S, Abdelnour M, et al. The Lille Model: A new tool for therapeutic strategy in patients with severe alcoholic hepatitis treated with steroids. Hepatology 2007; 45 (6): 1348-1354. http://acutemed.co.uk/docs/AAH-steroids,%20Hepatology,%2006.pdf

10. Mendenhall C, Roselle GA, Gartside P, et al. Relationship of protein calorie malnutrition to alcoholic liver disease: a reexamination of data from two Veterans Administration Cooperative Studies. Alcohol Clin Exp Res. 1995;19:635-641.

11. Louvet A, Wartel F, Castel H, et al. Infection in patients with severe alcoholic hepatitis treated with steroids: early response to therapy is the key factor. Gastroenterology 2009; 137: 541-548. http://gastrojournal.org/article/S0016-5085(09)00743-4/

Does Culturing the Catheter Tip Change Patient Outcomes?

November 17, 2011

By Todd Cutler, MD

Faculty Peer Reviewed

An 82-year-old man is admitted to the intensive care unit with fevers, hypoxic respiratory failure, and hypotension. He is intubated and resuscitated with intravenous fluids. A central venous catheter is placed via the internal jugular vein. A chest x-ray shows a right lower lobe infiltrate, and he is treated empirically with antibiotics for pneumonia. Blood cultures grow Streptococcus pneumoniae. After four days he is successfully extubated. The night following extubation, the patient has a fever of 100.8°F without hemodynamic instability. Peripheral blood cultures are drawn and his central venous catheter is removed. The next morning, during rounds, a debate ensues over whether the catheter tip should also have been sent to the microbiology lab for culture. What is the evidence for, and what are the current recommendations regarding, the culturing of central venous catheter tips?

The central venous catheter has an essential role in the field of critical care medicine, yet, as a foreign body, its use is associated with an increased risk of infection. While much effort has been devoted to improving aseptic techniques for central line insertion, over 250,000 bloodstream infections each year are believed to be attributable to catheters.[1] Furthermore, because of substantial variation in how catheter infections are retrieved, cultured, quantified, and defined, there are many methodological differences among the experimental studies investigating this topic. This article will highlight and evaluate the expert recommendations regarding catheter tip cultures, the evidence behind those recommendations, and other studies performed in this field.

In 2009, the Infectious Diseases Society of America (IDSA) proposed that the diagnosis of a catheter-related bloodstream infection (CRBSI) require that a peripheral blood culture grow the same organism as a concurrently retrieved catheter tip culture.[2] Other possible criteria for CRBSI include microbiologic concordance between two positive blood cultures, one drawn from the catheter hub and the other from a peripheral vein. While the exact thresholds defining a positive result remain unresolved, blood cultures drawn from the catheter that produce microbiologic colonies more rapidly, or in greater quantity, than cultures from peripheral blood are believed to serve as evidence of catheter infection. As these techniques do not require removal of the catheter, they are considered catheter-sparing diagnostic methods and are, respectively, termed “simultaneous quantitative blood cultures” and “differential time to positivity” (DTP).[3,4,5] Further discussion of their utility, or of related techniques for diagnosing CRBSI, is beyond the scope of this brief review.
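As a sketch of the DTP criterion in Python: the commonly cited rule is that the hub-drawn culture must turn positive well before the peripheral culture. Note that the 2-hour cutoff below is an assumption for illustration, taken from the DTP literature rather than from the text above.

    def dtp_suggests_crbsi(hub_hours, peripheral_hours, threshold_hours=2.0):
        """Differential time to positivity, the catheter-sparing method above.

        The 2-hour threshold is an assumption for illustration only.
        """
        return (peripheral_hours - hub_hours) >= threshold_hours

    # Hypothetical: hub culture positive at 14 h, peripheral culture at 18 h
    print(dtp_suggests_crbsi(14.0, 18.0))  # True -> suggests catheter source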

In their 2009 guidelines, the IDSA recommended that “catheter cultures should be done when a catheter is removed because of suspected [CRBSI].”[2] The most widely accepted technique for culturing central venous catheters, known as the semiquantitative culture method, was described in 1977 in a seminal paper by Maki et al.[6] In this technique, the catheter tip is rolled over a culture dish and the microorganism burden is indirectly quantified. In that study, 250 catheter tips were cultured, of which 25 grew greater than 15 colony-forming units; of these, four came from patients who ultimately developed bacteremia. The method is popular for its ease of use, and the article has been widely cited to support its use in determining, in a clinical situation suspicious for bacteremia, whether the catheter can be implicated as the source. The premise of subsequent studies evaluating catheter tip infections was that early and precise identification of the causal organism should improve clinical outcomes. Unfortunately, randomized controlled trials have not been performed to support that premise. Of the studies cited in the 2009 IDSA guidelines regarding catheter tip cultures [7,8,9], none evaluated whether the information obtained from culturing catheter tips had any significant impact on patient outcomes.

The utility of this practice was evaluated in a 1992 study by Widmer et al, in which 157 consecutive catheter tips were cultured in order to determine whether the results had an impact on patient management. The authors assessed whether results prompted a change in, or the initiation of, an antibiotic regimen. While they determined that 4% of catheter culture results led to changes in patient management, the clinical significance of these changes was found to be “questionable or even misleading.” The authors concluded that management of catheter infection was driven primarily by peripheral blood culture results and that catheter tip cultures contributed no benefit.[10]

In a 2009 publication, a retrospective analysis of 120 septic patients evaluated 238 retrieved and cultured catheter tips. Blood and catheter tip cultures grew concordant organisms in only 5.5% of catheters tested, yet 48.4% of catheter tips grew positive cultures. The positive and negative predictive values of a catheter tip culture were calculated to be 11% and 91%, respectively. This low positive predictive value was consistent with the results of the Widmer study. Based on these findings, and in consideration of the clinical impact and associated costs of the practice, catheter cultures were discontinued in the hospital where the study was performed.[11] A smaller study examined whether catheter tip cultures could predict the likelihood of clinical bacteremia; the authors concluded that culture results from catheter tips provide minimal clinical benefit and are processed at considerable expense of laboratory time and effort.[12]
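The reported positive predictive value can be reproduced from the study's own proportions; the counts below are simply rounded from the percentages quoted above.

    total_tips = 238
    positive = round(0.484 * total_tips)    # tips growing any organism: 115
    concordant = round(0.055 * total_tips)  # tip matched the blood culture: 13
    ppv = concordant / positive             # 13 / 115 = 0.113
    print(round(ppv, 2))                    # 0.11, matching the reported 11%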

The authors of these studies concluded that the results of catheter tip cultures are unlikely to significantly change clinical management. One reason is that removal of a central venous catheter often results in resolution of CRBSI regardless of antibiotic use, while alternate sources are often found when infections do not quickly resolve.[13] In addition, because the catheter must be removed before its tip can be cultured, the decisive intervention has already occurred by the time culture results return.

Typically, in a febrile patient with an indwelling catheter, clinically significant catheter infections will be detected by positive blood cultures and effectively ruled out by negative blood cultures. When catheter infection is suspected, antimicrobial therapy is usually given adjunctively, but there is general agreement that catheter removal is an absolute necessity.[14] Studies suggest that management based on peripheral blood cultures and clinical assessment leads to outcomes as good as or better than those achieved when central venous catheters are cultured [15], and that no laboratory test reliably outperforms management guided by clinical judgment.[16]

Dr. Todd Cutler is an associate editor, Clinical Correlations

Peer reviewed by Howard Leaf, MD, Assistant Professor, Department of Medicine (ID), NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. O’Grady NP, Alexander M, Burns LA. Guidelines for the prevention of intravascular catheter-related infections. Am J Infect Control. 2011 May;39(4 Suppl 1):S1-34.

2. Mermel LA, Allon M, Bouza E. Clinical practice guidelines for the diagnosis and management of intravascular catheter-related infection: 2009 Update by the Infectious Diseases Society of America. Clin Infect Dis. 2009 Jul 1;49(1):1-45.  http://www.ncbi.nlm.nih.gov/pubmed/19489710

3. Edgeworth J. Intravascular catheter infections. J Hosp Infect. 2009 Dec;73(4):323-30.  http://www.ncbi.nlm.nih.gov/pubmed/19699555

4. Kite P, Dobbins BM, Wilcox MH. Rapid diagnosis of central-venous-catheter-related bloodstream infection without catheter removal. Lancet. 1999 Oct 30;354(9189):1504-7.

5. Blot F, Nitenberg G, Chachaty E. Diagnosis of catheter-related bacteraemia: a prospective comparison of the time to positivity of hub-blood versus peripheral-blood cultures. Lancet. 1999 Sep 25;354(9184):1071-7.

6. Maki DG, Weise CE, Sarafin HW. A semiquantitative culture method for identifying intravenous-catheter-related infection. N Engl J Med. 1977 Jun 9;296(23):1305-9. http://www.ncbi.nlm.nih.gov/pubmed/323710

7. Brun-Buisson C, Abrouk F, Legrand P. Diagnosis of central venous catheter-related sepsis: critical level of quantitative tip cultures. Arch Intern Med. 1987;147:873-877.

8. Cleri DJ, Corrado ML, Seligman SJ. Quantitative culture of intravenous catheters and other intravascular inserts. J Infect Dis 1980;141:781-6

9. Sherertz RJ, Raad II, Belani A. Three-year experience with sonicated vascular catheter cultures in a clinical microbiology laboratory. J Clin Microbiol. 1990;28(1):76-82. http://jcm.asm.org/content/28/1/76.abstract

10. Widmer AF, Nettleman M, Flint K. The clinical impact of culturing central venous catheters. A prospective study. Arch Intern Med. 1992 Jun;152(6):1299-302. http://www.ncbi.nlm.nih.gov/pubmed/1599360

11. Smuszkiewicz P, Trojanowska I, Tomczak H. Venous catheter microbiological monitoring. Necessity or a habit? Med Sci Monit. 2009 Feb;15(2):SC5-8. http://www.ncbi.nlm.nih.gov/pubmed/19179982

12. Nahass RG, Weinstein MP. Qualitative intravascular catheter tip cultures do not predict catheter-related bacteremia. Diagn Microbiol Infect Dis. 1990 May-Jun;13(3):223-6.  http://www.ncbi.nlm.nih.gov/pubmed/2383972

13. Bozzetti F, Terno G, Camerini E. Pathogenesis and predictability of central venous catheter sepsis. Surgery. 1982;91(4):383-389. http://www.ncbi.nlm.nih.gov/pubmed/6801797

14. Cunha BA. Intravenous line infections. Crit Care Clin. 1998 Apr;14(2):339-46. http://www.ncbi.nlm.nih.gov/pubmed/9561821

15. Bozzetti F, Bonfanti G, Regalia E. A new approach to the diagnosis of central venous catheter sepsis. JPEN J Parenter Enteral Nutr. 1991 Jul-Aug;15(4):412-6.  http://www.ncbi.nlm.nih.gov/pubmed/1895486

16. Raad I, Hanna H, Maki D. Intravascular catheter-related infections: advances in diagnosis, prevention, and management. Lancet Infect Dis. 2007 Oct;7(10):645-57. http://www.ncbi.nlm.nih.gov/pubmed/17897607

The Diagonal Earlobe Crease: Historical Trivia or a Useful Sign of Coronary Artery Disease?

November 2, 2011

Nicholas Mark, MD & Sarah Buckley, MD

Faculty Peer Reviewed

Background

Publius Aelius Hadrianus, better known as Hadrian, emperor of Rome (117-138 CE), traveler, warrior, and lover of all things Greek, fell ill at the age of 60. He developed progressive edema and episodic epistaxis, fell into a depression soothed by rich food and drink, and died within 2 years. The exact cause of Hadrian's death, whether heart failure, glomerulonephritis, or even hereditary hemorrhagic telangiectasia, has been a topic of debate among paleopathologists. It was not until 1980 that a crucial clue was noted, memorialized in stone busts of the late emperor: he was sculpted with a deep diagonal crease in both earlobes.[1]

Since its first description by Frank in the New England Journal of Medicine in 1973 [2], the diagonal earlobe crease (ELC) has been recognized as a marker of coronary artery disease (CAD). Subsequent studies confirmed the ELC (or Frank's sign) as a predictor of CAD independent of age, cholesterol, blood pressure, or smoking status. On the other hand, several studies found no correlation between ELC and CAD and suggested that it is simply a marker of advancing age. Over 50 papers have been published on this physical diagnosis sign, and for almost 4 decades controversy has raged over its utility. Is the ELC a clinically useful predictor of CAD? To answer this question we performed a meta-analysis of all published studies evaluating the role of the ELC as a predictor of CAD.

Methods

Published articles, abstracts, and letters were obtained using the search term “ear lobe crease.” Raw data on the prevalence of ELC and CAD were collected and analyzed to calculate sensitivity, specificity, and likelihood ratios (LR). Significance was determined using the Fisher exact test or chi-squared test (depending on the number of patients in the study), and P values and 95% confidence intervals (CI) were calculated.

Results

There was significant variation in the design of the studies, ranging from large population screening studies to smaller studies of patients undergoing angiography for suspected CAD. The majority of studies found that ELC was a statistically significant predictor of CAD: the results of 6 of the 22 studies analyzed were not significant, while the remaining 16 demonstrated varying degrees of predictive value, with likelihood ratios ranging from 1.33 to 9.20 (see figure 1). Pooling all data, we found an overall sensitivity of 60.4% and specificity of 74.4%, giving the presence of ELC a LR of 2.37 (CI, 2.26 to 2.48) for predicting CAD. In the subset of studies that included only cardiac patients, the utility of ELC was lower than in an unselected patient population (LR of 1.88 vs 2.44). Similarly, when the higher-risk diabetic population of the Fremantle Diabetes Study was excluded, the LR was slightly higher.
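The pooled positive likelihood ratio follows directly from the pooled sensitivity and specificity, as this one-line Python check shows:

    sensitivity, specificity = 0.604, 0.744        # pooled values from above
    lr_positive = sensitivity / (1 - specificity)  # LR+ = sens / (1 - spec)
    print(round(lr_positive, 2))  # 2.36, matching the reported 2.37 to rounding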

Discussion

Frank suggested that the presence of “a prominent crease in a lobule portion of the auricle” may be indicative of small vessel pathology, possibly explaining the correlation with CAD.[2] Shoenfeld and colleagues performed histological examination of earlobe creases and found substantial thickening of the arteriolar walls relative to controls without ELC.[25]

Many authors performed multivariate analyses to ascertain whether ELC is truly an independent marker of CAD risk or just a surrogate for other known risk factors. Several studies found that though ELC correlates with CAD, it is independent of other CAD risk factors (hypertension, hyperlipidemia, diabetes, and smoking).[13,15]

Other studies reported that the LR of bilateral ELC is higher than that of unilateral ELC [15], and that a deeper ELC portends a greater likelihood of CAD. It has also been proposed that the presence of additional features, such as earlobe hair [17], could further increase the utility of the sign.

The utility of ELC may vary depending on several factors. Age appears to be a significant potential confounder; indeed, the prevalence of ELC increased with age in every study. However, as expected, the incidence of CAD also increases with age, and it is unclear at what ages the utility of the sign is highest. Several studies performed subgroup analyses of the utility of the ELC in patients of different ages, with conflicting results. In general, though, these studies found that the presence of ELC has some predictive value across all age ranges.

A few studies reported negative results in specific ethnic groups. Fisher and colleagues found no significant relationship between ELC and CAD in American Indians [11], and Rhoads and colleagues found no relationship in Japanese-Americans living in Hawaii.[8] Overall, the published prevalence of ELC varies widely among populations; the meaning of this variation is unclear.

The utility of ELC may also be lower in patients with a higher pretest probability of CAD, such as those undergoing angiography for assessment of suspected CAD.[3,17,22] Furthermore, the Fremantle Diabetes Study suggested that among patients with diabetes, one of the most important CAD risk factors, there is no correlation between ELC and CAD.[22]

Given these limitations, what is the utility of ELC relative to more established CAD risk factors such as diabetes, hypertension, hyperlipidemia, and smoking? Unlike the modifiable risk factors, which can be addressed by medical management, ELC is only a marker for coronary disease. An epidemiological study by Greenland and colleagues [26] and a review by Weissler in JAMA [27] showed that the value of these traditional risk factors for predicting CAD complications (MI or death) was quite low (LRs ranging from 1.07 to 1.39). Though our meta-analysis examined ELC as a predictor of CAD (rather than of its complications), the LR of ELC for predicting CAD, while modest (2.37; CI, 2.13-2.88), is significantly higher than that of the traditional risk factors. Although the presence of ELC is neither especially sensitive nor specific for CAD, it appears useful when compared with other known risk factors. Thus, we propose that ELC may be a useful additional marker for identifying patients with CAD.

Perhaps if Hadrian were alive today, his sculptors would not have been the only ones to take note of his ears.

Figure 1 [image not available]

Drs. Nicholas Mark and Sarah Buckley are former students of NYU School of Medicine

Reviewed by Beno Oppenheimer, MD, Assistant Professor Medicine, Division Pulmonary/Critical Care, Course director Introduction to Bedside Diagnosis, NYU School of Medicine

Image courtesy of Wikimedia Commons (Hadrian, emperor of Rome)

References:

1. Petrakis NL. Diagonal earlobe creases, type A behavior and the death of Emperor Hadrian. West J Med. 1980;132(1):87–91. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1216678/

2. Frank ST. Aural sign of coronary-artery disease. N Engl J Med. 1973;289(6):327-328.

3. Lichstein E, Chadda KD, Naik D, Gupta PK. Diagonal ear-lobe crease: prevalence and implications as a coronary risk factor. N Engl J Med. 1974;290(11):615-616.

4. Mehta J, Hamby RI. Letter: Diagonal ear-lobe crease as a coronary risk factor. N Engl J Med. 1974. 291(5):260.

5. Christiansen JS, Mathiesen B, Andersen AR, Calberg H. Letter: Diagonal ear-lobe crease in coronary heart disease. N Engl J Med. 1975;293(6):308-309.

6. Sprague DH. Diagonal ear-lobe crease as an indicator of operative risk. Anesthesiology. 1976;45(3):362-364.

7. Doering C, Ruhsenberger C, Phillips DS. Ear lobe creases and heart disease. J Am Geriatr Soc. 1977;25(4):183-185.

8. Rhoads GG, Yano K. Ear-lobe crease and coronary-artery heart disease. Ann Intern Med. 1977;87(2):245.  http://www.ncbi.nlm.nih.gov/pubmed/889207

9. Kaukola S. The diagonal ear-lobe crease, a physical sign associated with coronary heart disease. Acta Med Scand Suppl. 1978;619:1-49.

10. Wermut W, Jaszczenko S, Ruszel A. Ear lobe crease as a risk factor in coronary disease. Wiad Lek. 1980;33(6):435-438.

11. Fisher JR, Sievers ML. Ear-lobe crease in American Indians. Ann Intern Med. 1980;93(3):512.

12. Kaukola S. The diagonal ear-lobe crease, heredity and coronary heart disease. Acta Med Scand Suppl. 1982;668:60-63.

13. Elliott WJ. Ear lobe crease and coronary artery disease. 1,000 patients and review of the literature. Am J Med. 1983;75(6):1024-1032.

14. Wagner RF Jr, Reinfeld HB, Wagner KD, et al. Ear-canal hair and the ear-lobe crease as predictors for coronary-artery disease. N Engl J Med. 1984;311(20):1317-1318.

15. Guţiu I, el Rifai C, Mallozi M. Relation between diagonal ear lobe crease and ischemic chronic heart disease and the factors of coronary risk. Med Interne. 1986;24(2):111-116.

16. Gibson TC, Ashikaga T. The ear lobe crease sign and coronary artery disease in aortic stenosis. Clin Cardiol. 1986;9(8):388-390.

17. Verma SK, Khamesra R, Bordia A. Ear-lobe crease and ear-canal hair as predictors of coronary artery disease in Indian population. Indian J Chest Dis Allied Sci. 1988;30(3):189-196.

18. Kenny DJ, Gilligan D. Ear lobe crease and coronary artery disease in patients undergoing coronary arteriography. Cardiology. 1989;76(4):293-298.

19. Mirić D, Rumboldt Z, Pavić M, Kuzmanić A, Bagatin J. The role of diagonal ear lobe crease in the clinical evaluation of coronary risk. Lijec Vjesn. 1990;112(7-8):206-207.

20. Moraes D, McCormack P, Tyrrell J, Feely J. Ear lobe crease and coronary heart disease. Ir Med J. 1992;85(4):131-132. http://www.ncbi.nlm.nih.gov/pubmed/1473944

21. Motamed M, Pelekoudas N. The predictive value of diagonal ear-lobe crease sign. Int J Clin Pract. 1998;52(5):305-306.  http://www.ncbi.nlm.nih.gov/pubmed/9796561

22. Davis TM, Balme M, Jackson D, Stuccio G, Bruce DG. The diagonal ear lobe crease (Frank’s sign) is not associated with coronary artery disease or retinopathy in type 2 diabetes: the Fremantle Diabetes Study. Aust N Z J Med. 2000;30(5):573-577.

23. Dytfeld D, Le?na J, Protasewicz A, Sarnowski W, Dyszkiewicz W, Paradowski S. Ear lobe crease as a factor of potential risk for coronary artery disease?–World news review and own research. Pol Arch Med Wewn. 2002;108(1):633-638.

24. Bahcelioglu M, Isik AF, Demirel B, Senol E, Aycan S. The diagonal ear-lobe crease. As sign of some diseases. Saudi Med J. 2005;26(6):947-951.

25. Shoenfeld Y, Mor R, Weinberger A, Avidor I, Pinkhas J. Diagonal ear lobe crease and coronary risk factors. J Am Geriatr Soc. 1980;28(4):184-187.

26. Greenland P, Knoll MD, Stamler J, et al. Major risk factors as antecedents of fatal and nonfatal coronary heart disease events. JAMA. 2003;290(7):891-897.

27. Weissler AM. Traditional risk factors for coronary heart disease. JAMA. 2004;291(3):299-300. http://jama.ama-assn.org/content/291/3/299.3

Should Women Be Screened For Abdominal Aortic Aneurysms?

October 26, 2011

By Michael Boffa

Faculty Peer Reviewed

Laura K. sits in the office of her cardiologist waiting for the results of her follow-up aorto-iliac duplex scan. Six months ago, Laura had an endostent placed in her abdominal aorta after a 5.2 cm x 5.4 cm abdominal aortic aneurysm (AAA) was discovered. Laura, now 70, smoked for a large portion of her life, though she recently quit. Her advanced age and smoking history put her at increased risk of a potentially life-threatening aneurysm rupture.

Approximately 30,000 deaths annually in the United States are attributed to abdominal aortic aneurysms (AAA), and about one fifth of these deaths are in women.[1] An abdominal aortic aneurysm is defined as an aorta with a diameter of greater than 3 cm.  Ruptured AAA is the thirteenth most common cause of death in the United States and is responsible for 4-5% of sudden deaths.[2] The prevalence of AAA is 6 times greater in men than in women,[3] with one study demonstrating a prevalence of 1.3% in women and 7.6% in men.[4] Due to this low prevalence, women were excluded from many large trials of screening for AAA. But is it really prudent to assume that women don’t need to be screened?

The United States Preventive Services Task Force (USPSTF) published screening recommendations for abdominal aortic aneurysm in 2005 and recommended against routine screening for AAA in women.[6] According to the report, “Because of the low prevalence of large AAAs in women, the number of AAA-related deaths that can be prevented by screening this population is small. There is good evidence that screening and early treatment result in important harms, including an increased number of surgeries with associated morbidity and mortality, and psychological harms.”[6] The report cited operative mortality rates of 4-5% for open surgical repair and high rates of cardiac and pulmonary complications as reasons why the costs of screening outweigh the benefits.[6] However, “endovascular repair has even better statistics, with an annual rate of rupture of 1%.”[6] In contrast, the overall survival rate of patients who experience a ruptured abdominal aortic aneurysm is only about 25%.[7] Of patients with ruptured AAA, about half will reach the hospital alive; of those, 50% will not survive emergent repair.[8-9] So which has a higher cost: psychological distress or a 75% mortality rate?

The incidence of AAA has risen over the past 40 years. Interestingly, the rate of hospital admission for aneurysm rupture in men has been decreasing, while the number of women admitted for AAA has been steadily increasing. Smoking is the risk factor most strongly associated with AAA: smoking was a risk factor in 75% of all aneurysms ≥4.0 cm, and the risk increases with the number of years spent smoking.[10] The gradual increase in AAA among women may be related to changes in smoking demographics over the past 50 years.[11] Smoking became popular among women in the 1950s, many years after men began smoking in large numbers.[11] Additionally, men have a higher rate of successful smoking cessation.[12] Since AAA follows the smoking trend, women may be increasingly at risk. Family history also plays a role; one study showed a prevalence of 8.3% in women with a family history of AAA.[5] Shouldn't these women be screened?

Gender plays an important part in the issue of screening because the male aorta is generally larger than the female aorta. One study showed that a 5.2 cm AAA in a woman is equivalent to a 5.5 cm AAA in a man.[13] Additionally, as the diameter of an AAA increases, the likelihood of rupture increases exponentially in women.[13] Therefore, the same cutpoints (3 cm to define an aneurysm and 5.5 cm for surgical intervention) should not apply to both sexes. The USPSTF screening recommendations, however, use the same cutoffs for men and women. The American Association for Vascular Surgery and the Society for Vascular Surgery recommend performing elective repair at 4.5 to 5 cm in women.[14] Even though there is a clear difference, very few studies have examined the role of gender. One study showed that, even without accounting for gender-related differences in aortic diameter, the annual risk of rupture of an aneurysm greater than 5 cm was 18% in women versus 12% in men.[15]

As with all screening tests, it is prudent to consider cost. A 2006 study that reviewed data from many previous studies found that the cost per year of life gained by screening and operating on women was about $5911, a cost-effectiveness ratio similar to that of screening men. The authors explained that the lower AAA prevalence in women (1.1%) is balanced by a significantly higher rupture rate (3.1%), and concluded that screening women is indeed cost-effective.[16]
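The structure of that cost-effectiveness figure is simple: the incremental cost of the screening strategy divided by the life-years it gains. The Python sketch below shows only the metric; the inputs are entirely hypothetical, and only the roughly $5900-per-life-year result quoted above comes from the study [16].

    def cost_per_life_year(incremental_cost_usd, life_years_gained):
        """Cost-effectiveness ratio: incremental cost / life-years gained."""
        return incremental_cost_usd / life_years_gained

    # Purely hypothetical inputs, chosen to land near the reported figure
    print(round(cost_per_life_year(1_000_000, 170)))  # ~5882 dollars/life-year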

Laura is relieved to hear that the stent is in place, her aneurysm is no longer enlarging, and there are no leaks or complications. She will go home to her husband and continue her active lifestyle. At the age of 70 she jokes that she has only 30 good years left. Having her aorta imaged probably prolonged her life. She was saved because her cardiologist decided it was time to take a minute, think about what was going on despite the USPSTF recommendations, and take a look. Isn't she worth it?

Michael Boffa is a 4th year medical student at NYU School of Medicine

Peer reviewed by Nate Link, MD, Chief of Medicine, Bellevue Hospital, Associate Professor of Medicine

Image courtesy of Wikimedia Commons

References:

1. DeRubertis BG, Trocciola SM, Ryer EJ, et al. Abdominal aortic aneurysm in women: Prevalence, risk factors, and implications for screening. J Vasc Surg. 2007;46(4):630-635.  http://www.hopkinsguides.com/hopkins/ub/citation/17903646/Abdominal_aortic_aneurysm_in_women:_prevalence_risk_factors_and_implications_for_screening_

2. Schermerhorn M. A 66-year-old man with an abdominal aortic aneurysm: review of screening and treatment. JAMA. 2009;302(18):2015-2022.  http://jama.ama-assn.org/content/302/18/2015

3. Pleumeekers HJCM, Hoes AW, Van Der Does E, et al. Aneurysms of the abdominal aorta in older adults. The Rotterdam Study. Am J Epidemiol. 1995;142(12):1291–1299.  http://aje.oxfordjournals.org/content/142/12/1291.full.pdf

4.  Scott RA, Bridgewater S, Ashton HA. Randomized clinical trial of screening for abdominal aortic aneurysm in women. Br J Surg. 2002;89(3):283–285.

5. Le Hello C, Koskas F, Cluzel P, et al. French women from multiplex abdominal aortic aneurysm families should be screened. Ann Surg. 2005;242(5):739-744.

6. U.S. Preventive Services Task Force. Screening for abdominal aortic aneurysm: recommendation statement. Ann Intern Med. 2005;142(3):198-202.  http://www.annals.org/content/142/3/198.full

7. Lambert ME, Baguley P, Charlesworth D. Ruptured abdominal aortic aneurysms. J Cardiovasc Surg (Torino). 1986;27(3):256-261.

8. Thomas PR, Stewart RD. Abdominal aortic aneurysm. Br J Surg. 1988;75(8):733-736.  http://onlinelibrary.wiley.com/doi/10.1002/bjs.1800750804/pdf

9. Harris LM, Faggioli GL, Fiedler R, Curl GR, Ricotta JJ. Ruptured abdominal aortic aneurysms: factors affecting mortality rates. J Vasc Surg. 1991;14(6):812-818; discussion 819-820.

10. Lederle FA, Johnson GR, Wilson SE, et al. Prevalence and associations of abdominal aortic aneurysm detected through screening. Ann Intern Med. 1997;126(6):441-449.

11. Norman PE and Powell JT. Abdominal aortic aneurysm: the prognosis in women is worse than in men. Circulation. 2007;115(22):2865-2869. http://circ.ahajournals.org/content/115/22/2865.full

12. Peto R, Darby S, Deo H, Silcocks P, Whitley E, Doll R. Smoking, smoking cessation, and lung cancer in the UK since 1950: combination of national statistics with two case-control studies. BMJ. 2000;321(7257):323–329. http://www.bmj.com/content/321/7257/323.full

13. Forbes TL, Lawlor DK, DeRose G, Harris KA. Gender differences in relative dilatation of abdominal aortic aneurysms. Ann Vasc Surg. 2006;20(5):564–568.

14. Brewster DC, Cronenwett JL, Hallett JW Jr, Johnston KW, Krupski WC, Matsumura JS. Guidelines for the treatment of abdominal aortic aneurysms: report of a subcommittee of the Joint Council of the American Association for Vascular Surgery and Society for Vascular Surgery. J Vasc Surg. 2003;37(5):1106–1117. http://www.ncbi.nlm.nih.gov/pubmed/12756363

15. Wong F, Brown LC, Powell JT. Can we predict aneurysm rupture? In: Becquemin J-P, Alimi YS, eds. Controversies and Updates in Vascular Surgery. Torino, Italy: Minerva Medica Italy; 2006:35-43

16. Wanhainen A, Lundkvist J, Bergqvist D, Björck M. Cost-effectiveness of screening women for abdominal aortic aneurysm. J Vasc Surg. 2006;43(5):908-914. http://www.ncbi.nlm.nih.gov/pubmed/16678681

Bariatric Surgery: A Cure for Diabetes?

October 20, 2011

By Amy Dinitz

Faculty Peer Reviewed

The lifetime risk of developing diabetes for persons born in 2000 is around 35% [1], and the NHANES database suggests a greater than fourfold increase in prevalence over the last three generations. While bariatric surgery has become the most effective treatment for obesity, it has also proven to be an extremely effective treatment for type 2 diabetes. It was initially thought that the weight loss experienced after bariatric surgery was responsible for improved glycemic control. However, patients improve within only a few days of surgery, suggesting that hormonal changes are partly responsible.[2] Discovering exactly which hormones are involved and how they “cure” diabetes has proven difficult.

The major players seem to be the incretin hormones glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP); peptide YY (PYY); and ghrelin.  GLP-1 is secreted by the L cells of the distal ileum in response to ingested nutrients, and acts as a potent insulin secretagogue.[3] It has also been shown to slow gastric emptying and induce satiety in the central nervous system.[4] GLP-1 increases lipogenesis in adipocytes and glycogenesis in liver cells and skeletal muscle.[5]

GIP is secreted by the K cells of the duodenum and jejunum in response to carbohydrate and fat intake, and acts on pancreatic beta cells as an insulin secretagogue.[6] However, it has no effect on gastric emptying or satiety.[7] Like GLP-1, PYY is secreted by the L cells of the ileum, increases satiety, and slows gastric emptying through binding of receptors in the central and peripheral nervous systems.[8]

Ghrelin is a hormone secreted by cells in the gastric fundus and proximal gut that acts on the hypothalamus to stimulate appetite and food intake, as well as decrease energy expenditure and fat catabolism. Serum ghrelin levels are high before a meal to stimulate appetite and decrease afterward.[9] Ghrelin also acts in a paracrine manner in the pancreas to inhibit insulin secretion.[10] Serum ghrelin levels are inversely proportional to body weight, while weight loss causes increased ghrelin levels [11], both of which suggest that ghrelin is important in maintaining body weight at a “set point.”

The hypothesis that the caloric restriction induced by bariatric surgery is responsible for improved blood glucose levels does not explain why bypass procedures have better diabetes remission rates than restrictive procedures. Moreover, bypass procedures can cause remission within a few days [12], whereas remission does not occur until months after laparoscopic adjustable gastric banding (LAGB).[13]

There are two theories, both supported by studies of surgical procedures in rodents, to explain the rapid improvement in glucose metabolism following bypass procedures. In the hindgut hypothesis, rapid delivery of nutrients to the distal bowel increases secretion of GLP-1 and PYY, thus increasing glucose-dependent insulin secretion.[14] The foregut theory, in contrast, suggests that diverting food past the duodenum and jejunum prevents secretion of an unidentified “putative signal” that contributes to insulin resistance and type 2 diabetes.[15]

Consistent with the hindgut theory, GLP-1 levels increase as much as threefold soon after bypass, but not after gastric banding [16], and PYY has been shown to increase as soon as two days after bypass.[17] The effect of LAGB and bypass on GIP secretion is less well understood, though studies have shown decreased levels two weeks after bypass.[18] This makes sense physiologically, as GIP is secreted by cells in the proximal gut that the procedure bypasses. Ghrelin levels after gastric bypass are more variable and seem to depend on surgical technique; the amount of residual ghrelin-producing tissue and vagal innervation appears to determine postoperative levels.[19]

As more is learned about the hormonal changes seen after bypass and gastric banding, it becomes clear that it is not simply weight loss that improves glucose tolerance. Gastric banding is an effective treatment for diabetes; thus, more research should be done to assess its safety in patients with diabetes who are not obese. Further studies of patients after bariatric surgery will continue to elucidate the pathophysiologic mechanisms involved in diabetes. Based on these studies, medications could be developed that mimic the effects of bypass to treat diabetes effectively without an invasive surgical procedure.

Amy Dinitz is a fourth-year medical student at NYU School of Medicine.

Reviewed by Manish Parikh, MD, Assistant Professor, Bariatric Surgery, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Narayan KM, Boyle JP, Thompson TJ, Sorensen SW, Williamson DF. Lifetime risk for diabetes mellitus in the United States. JAMA. 2003;290(14):1884-1890. http://www.cdc.gov/diabetes/news/docs/lifetime.htm

2. Rubino F. Bariatric surgery: effects on glucose homeostasis. Curr Opin Clin Nutr Metab Care. 2006;9(4):497-507.  http://www.ncbi.nlm.nih.gov/pubmed/16778583

3. Holst JJ. The physiology of glucagon-like peptide 1. Physiol Rev. 2007;87(4):1409-1439.  http://physrev.physiology.org/content/87/4/1409.full.pdf

4. Flint A, Raben A, Ersbøll AK, Holst JJ, Astrup A. The effect of physiological levels of glucagon-like peptide-1 on appetite, gastric emptying, energy and substrate metabolism in obesity. Int J Obes Relat Metab Disord. 2001;25(6):781-792.  http://www.nature.com/ijo/journal/v25/n6/full/0801627a.html

5. Luque MA, González N, Márquez L, et al. Glucagon-like peptide-1 (GLP-1) and glucose metabolism in human myocytes. J Endocrinol. 2002;173(3):465-473.  http://www.ncbi.nlm.nih.gov/pubmed/12065236

6. Hansotia T, Drucker DJ. GIP and GLP-1 as incretin hormones: lessons from single and double incretin receptor knockout mice. Regul Pept. 2005;128(2):125-134.

7. Meier JJ, Nauck MA, Schmidt WE, Gallwitz B. Gastric inhibitory polypeptide: the neglected incretin revisited. Regul Pept. 2002;107(1-3):1-13.  http://www.ncbi.nlm.nih.gov/pubmed/12137960

8. Ballantyne GH. Peptide YY(1-36) and peptide YY(3-36): Part I. Distribution, release and actions. Obes Surg. 2006;16(5):651-658.  http://www.springerlink.com/content/73p70u3312n10675/

9. Cummings DE, Overduin J. Gastrointestinal regulation of food intake. J Clin Invest. 2007;117(1):13-23.

10. Kageyama H, Funahashi H, Hirayama M, et al. Morphological analysis of ghrelin and its receptor distribution in the rat pancreas. Regul Pept. 2005;126(1-2):67-71.

11. Cummings DE, Shannon MH. Ghrelin and gastric bypass: is there a hormonal contribution to surgical weight loss? J Clin Endocrinol Metab. 2003;88(7):2999-3002.  http://www.ncbi.nlm.nih.gov/pubmed/12843132

12. Pories WJ, Swanson MS, MacDonald KG, et al. Who would have thought it? An operation proves to be the most effective therapy for adult-onset diabetes mellitus. Ann Surg. 1995;222(3):339-350; discussion 350-352.

13. Dixon JB, O’Brien PE, Playfair J, et al. Adjustable gastric banding and conventional therapy for type 2 diabetes: a randomized controlled trial. JAMA. 2008;299(3):316-323.  http://jama.ama-assn.org/content/299/3/316.full

14. Cummings DE, Overduin J, Foster-Schubert KE, Carlson MJ. Role of the bypassed proximal intestine in the anti-diabetic effects of bariatric surgery. Surg Obes Relat Dis. 2007;3(2):109-115.

15. Rubino F. Is type 2 diabetes an operable intestinal disease? A provocative yet reasonable hypothesis. Diabetes Care. 2008;31 Suppl 2:S290-296.  http://care.diabetesjournals.org/content/31/Supplement_2/S290.short

16. Korner J, Bessler M, Inabnet W, Taveras C, Holst JJ. Exaggerated glucagon-like peptide-1 and blunted glucose-dependent insulinotropic peptide secretion are associated with Roux-en-Y gastric bypass but not adjustable gastric banding. Surg Obes Relat Dis. 2007;3(6):597-601.

17. Moriñigo R, Moizé V, Musri M, et al. Glucagon-like peptide-1, peptide YY, hunger, and satiety after gastric bypass surgery in morbidly obese subjects. J Clin Endocrinol Metab. 2006;91(5):1735-1740.  http://jcem.endojournals.org/content/91/5/1735.full

18. Clements RH, Gonzalez QH, Long CI, Wittert G, Laws HL. Hormonal changes after Roux-en-Y gastric bypass for morbid obesity and the control of type-II diabetes mellitus. Am Surg. 2004;70(1):1-4; discussion 4-5. http://www.ncbi.nlm.nih.gov/pubmed/14964537

19. Cummings DE, Shannon MH. Ghrelin and gastric bypass: is there a hormonal contribution to surgical weight loss? J Clin Endocrinol Metab. 2003;88(7):2999-3002. http://www.ncbi.nlm.nih.gov/pubmed/12843132

Subclinical Hypothyroidism: To Screen or Not to Screen?

August 17, 2011

By Addie Peretz, MD

Faculty Peer Reviewed

Despite the ease of screening for hypothyroidism with hormone assays and the availability of thyroxine replacement therapy, no recommendation for routine screening for hypothyroidism in adults is universally accepted. The American Academy of Family Physicians[1] and the American Association of Clinical Endocrinologists[2] recommend periodic assessment of thyroid function in older women. The American Thyroid Association advocates earlier and more frequent screening, recommending measurement of thyroid-stimulating hormone (TSH) beginning at age 35 and every 5 years thereafter.[3]

Whereas screening adults for hypothyroidism is controversial, screening infants for congenital hypothyroidism is routine. Congenital hypothyroidism screening is universally accepted because of the condition's relatively high prevalence (about 1 in 4000 births), the severity of the consequences if even mild hypothyroidism is left untreated, and the efficacy of levothyroxine replacement. Adults have a lower prevalence of overt disease, and the necessity, efficacy, and cost-effectiveness of treating subclinical hypothyroidism are uncertain.[4]

Subclinical hypothyroidism (SCH) is defined as a normal serum free thyroxine (T4) concentration in the presence of an elevated serum thyrotropin (TSH) concentration.[5] In the Third National Health and Nutrition Examination Survey (NHANES III), 4.3% of the 16,533 people studied, excluding subjects with known thyroid disease, were found to have SCH.[6] Other population-based studies report a prevalence of subclinical hypothyroidism as high as 15%. A higher prevalence has been reported among women, Caucasian populations, those over 50 years of age, and men and women with a family history of thyroid disease.

One of the most compelling reasons to screen for hypothyroidism is to mitigate the potential consequences of SCH. In a prospective study of patients with subclinical hypothyroidism followed for 10 to 20 years, approximately 33%-55% of patients went on to develop overt hypothyroidism. The risk of progression tended to correlate with the initial serum TSH concentration and the presence of antithyroid peroxidase antibodies.[7]

Another clinically relevant potential consequence of SCH is cardiovascular disease. While the data are inconsistent, some observational studies have reported an increased risk of cardiovascular disease among subjects with SCH.[8] This increased risk may be related to the reported association between elevated TSH and elevated total and LDL-cholesterol concentrations.[9] Other proposed mechanisms involve links between SCH and additional markers of cardiovascular risk, including inflammatory markers, impaired vascular reactivity and endothelial function, and increased carotid intima-media thickness.[10]

While overt hypothyroidism and cardiovascular disease are two of the more severe potential consequences of SCH, the condition also affects quality of life and has been associated with neuropsychiatric disease. SCH has been linked to deficits in verbal memory and executive function, deficits that improve with levothyroxine treatment.[11] In addition, the lifetime frequency of depression has been found to be higher in subjects with SCH than in those without.[12]

Though the potential consequences of SCH can be severe, standardized treatment recommendations are lacking. Levothyroxine replacement therapy is recommended for all patients with a TSH greater than 10 mIU/L; for patients with a serum TSH between 5 and 10 mIU/L, treatment remains controversial. Large-scale randomized trials establishing a benefit of treating SCH have yet to be conducted, and such studies are needed to confirm that detecting SCH allows physicians to intervene effectively with levothyroxine therapy.
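
For illustration only, the treatment thresholds described above can be expressed as a simple decision rule. The sketch below (in Python) is a minimal illustration, not a clinical tool: the cutoffs are those quoted in the text, while the function name, parameter, and messages are hypothetical.

# A minimal sketch of the SCH treatment thresholds quoted above.
# Assumes a normal free T4 (ie, subclinical rather than overt disease);
# the cutoffs (in mIU/L) come from the text, and all names are illustrative.
def sch_treatment_suggestion(tsh_miu_per_l: float) -> str:
    if tsh_miu_per_l > 10:
        # Levothyroxine replacement is recommended above 10 mIU/L.
        return "levothyroxine replacement recommended"
    if 5 <= tsh_miu_per_l <= 10:
        # Evidence is inconclusive in this range; the decision is individualized.
        return "controversial: individualize based on symptoms and risk factors"
    return "TSH below the range discussed here"

print(sch_treatment_suggestion(11.2))  # -> levothyroxine replacement recommended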

Despite the lack of randomized trials, screening remains important. Until such trials are conducted, clinicians ought to have an especially low threshold for screening higher-risk populations, including women with vague symptoms suggestive of hypothyroidism (such as fatigue or depression), women who are pregnant or anticipate becoming pregnant, those with a strong family history of autoimmune thyroid disease,[13] and patients with type 1 diabetes.[14,15]

Dr. Addie Peretz is a former medical student at NYU School of Medicine.

Peer reviewed by Manfred Blum, MD, Medicine, Endocrinology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. American Academy of Family Physicians. Age charts for periodic health examination. Kansas City, MO: American Academy of Family Physicians; 1994. (Reprint no. 510).

2. American Association of Clinical Endocrinologists and American College of Endocrinology. AACE clinical practice guidelines for the evaluation and treatment of hyperthyroidism and hypothyroidism. Endocr Pract. 1995;1:54-62.

3. Ladenson PW, Singer PA, Ain KB, et al. American Thyroid Association guidelines for detection of thyroid dysfunction. Arch Intern Med. 2000;160:1573-1575.

4. Weetman AP. Hypothyroidism: screening and subclinical disease. BMJ. 1997;314(7088):1175-1178.

5. An elevated TSH concentration is defined as one above the upper limit of the normal TSH reference range of 4-5 mIU/L. The majority of patients with subclinical hypothyroidism have a serum TSH level <10 mIU/L and are asymptomatic.

6. Hollowell JG, Staehling NW, Flanders WD, et al. Serum TSH, T(4), and thyroid antibodies in the United States population (1988 to 1994): National Health and Nutrition Examination Survey (NHANES III). J Clin Endocrinol Metab. 2002;87(2):489-499.

7. Vanderpump MP, Tunbridge WM, French JM, et al. The incidence of thyroid disorders in the community: a twenty-year follow-up of the Whickham Survey. Clin Endocrinol (Oxf). 1995;43(1):55-68. http://www.ncbi.nlm.nih.gov/pubmed/7641412

8. Hak AE, Pols HA, Visser TJ, Drexhage HA, Hofman A, Witteman JC. Subclinical hypothyroidism is an independent risk factor for atherosclerosis and myocardial infarction in elderly women: the Rotterdam Study. Ann Intern Med. 2000;132(4):270-278. http://www.ncbi.nlm.nih.gov/pubmed/10681281

9. Biondi B, Cooper DS. The clinical significance of subclinical thyroid dysfunction. Endocr Rev. 2008;29(1):76-131. http://www.ncbi.nlm.nih.gov/pubmed/17991805

10. Cikim AS, Oflaz H, Ozbey N, et al. Evaluation of endothelial function in subclinical hypothyroidism and subclinical hyperthyroidism. Thyroid. 2004;14(8):605-609.

11. Samuels MH, Schuff KG, Carlson NE, Carello P, Janowsky JS. Health status, mood, and cognition in experimentally induced subclinical hypothyroidism. J Clin Endocrinol Metab. 2007;92(7):2545-2551.

12. Haggerty JJ Jr, Stern RA, Mason GA, Beckwith J, Morey CE, Prange AJ Jr. Subclinical hypothyroidism: a modifiable risk factor for depression? Am J Psychiatry. 1993;150(3):508-510. http://ajp.psychiatryonline.org/cgi/content/abstract/150/3/508

13. Glinoer D, Riahi M, Grün JP, Kinthaert J. Risk of subclinical hypothyroidism in pregnant women with asymptomatic autoimmune thyroid disorders. J Clin Endocrinol Metab. 1994;79(1):197-204. http://www.ncbi.nlm.nih.gov/pubmed/8027226

14. Perros P, McCrimmon RJ, Shaw G, Frier BM. Frequency of thyroid dysfunction in diabetic patients: value of annual screening. Diabet Med. 1995;12(7):622-627. http://www.ncbi.nlm.nih.gov/pubmed/7554786

15. Fatourechi V. Subclinical hypothyroidism: an update for primary care physicians. Mayo Clin Proc. 2009;84(1):65-71.