Clinical Questions

Can Young Patients Get Diverticular Disease?

January 23, 2014

By Aaron Smith, MD

Peer Reviewed

Case: A 35-year-old, overweight woman presents to the emergency room with five days of left lower quadrant abdominal pain. The pain is 10/10 in severity and accompanied by nausea, bloating, and loss of appetite.

Diverticulosis, the presence of small colonic outpouchings thought to occur secondary to high pressure within the colon, is an extremely common condition in elderly patients. Recent data suggest that up to 50% of people over the age of 60 have colonic diverticula.[1] When a colonic diverticulum becomes inflamed, the result is diverticulitis, a painful condition that can result in colonic obstruction, perforation, or abscess formation. Diverticulitis is a very common cause of acute abdominal pain in elderly individuals, especially in the United States.[1]

Traditionally, diverticulosis and diverticulitis, together falling under the heading of “diverticular disease,” have been considered diseases of the elderly. That stereotype may have to change. A 2009 study by Etzioni et al. used the 1998-2005 Nationwide Inpatient Sample to analyze the care given to 267,000 patients admitted with acute diverticulitis.[2] During this eight-year period, admissions for acute diverticulitis increased by 26%. Over the same period, admissions in the 18 to 44 year-old age group increased by 82%, far more rapidly than in the older group. For the younger group, the incidence of diverticulitis necessitating inpatient admission increased from 1 in 6600 to 1 in 4000.
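For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. The inputs are the figures quoted above; the closing population-growth interpretation is an inference, not a result reported by Etzioni et al.

```python
# Back-of-envelope check of the figures quoted above. The inputs come from
# the text; the closing interpretation is an inference, not from the paper.
rate_1998 = 1 / 6600   # diverticulitis admissions per person, ages 18-44, 1998
rate_2005 = 1 / 4000   # the same rate in 2005

rate_rise = rate_2005 / rate_1998 - 1
print(f"Rise in admission rate: {rate_rise:.0%}")   # ~65%

# Raw admission counts in the 18-44 group rose 82%, outpacing the ~65% rise
# in the rate itself -- consistent with modest growth (~10%) of the 18-44
# population over the same period: 1.82 / 1.65 ~= 1.10.
```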

Etzioni et al. offer several potential explanations for the rapid rise of diverticulitis cases in young patients. One is that increased use of computed tomography (CT) scanning may have led to a higher rate of detection. This would mean that the actual incidence of diverticulitis has remained stable, but that more cases have been diagnosed. A second possible explanation is demographic: a racial or ethnic group with a high rate of diverticulitis at a young age, likely Hispanics, may have grown in number between 1998 and 2005, affecting the results. (It has been suggested that Hispanics are prone to a particularly virulent form of diverticulitis at a young age, but the data are scarce.)[3] The dataset used for the study did not include race or ethnicity, so the authors could not exclude the possibility of a demographic shift affecting the numbers. The authors rightly note, however, that there is a distinct possibility that from 1998 to 2005 there was a real and dramatic increase in the rate of diverticulitis in younger patients. Why? Diverticulitis has been linked to obesity, poor fiber intake, and the western lifestyle in general, and so its increased incidence is most likely related to America’s current obesity epidemic.[3-5]

Two lessons can be gleaned from the data presented in this paper. First is a reminder of that favorite medical axiom, “common things are common.” When a disease is highly prevalent in the overall population, it may be highly prevalent in subsets of the population not stereotypically associated with the disease. Take diverticular disease as an example. According to the Etzioni study, diverticulitis is roughly ten times more common in patients above the age of 75 than in patients aged 18 to 44. It is therefore tempting to dismiss diverticulitis as a potential diagnosis in young patients, because diverticulitis is so much more common in the elderly. This would be a mistake. At 1 in 4000, the rate of diverticulitis requiring admission in young patients exceeds that of rarer causes of abdominal pain classically associated with young people. Symptomatic intestinal malrotation, for example, is classically considered a disease of the young, but is less common than diverticulitis, occurring in about 1 in 6000.[6] Decades of high colonic pressure in the elderly increase the chances of diverticula formation, and diverticulitis is certainly less common in the young than it is in the elderly. Still, less common does not equal uncommon.

The second lesson to be learned is that due to increases in obesity and sedentary lifestyle, clinicians should rethink which conditions are diseases of the elderly and which are not. Type II diabetes used to be called adult-onset diabetes until it became so common in children and adolescents that the term became a misnomer. Like type II diabetes, diverticular disease is associated with obesity and sedentary lifestyle, and its increased prevalence can be thought of as a correlate to the increased prevalence of other diseases of the western lifestyle (diabetes, hypertension, coronary artery disease…). If the population of the United States continues to grow more obese and inactive, diverticular disease may become more common.

The patient described in the introduction received a CT scan and was diagnosed with acute diverticulitis. Even after imaging confirmed the diagnosis, the patient’s primary physician was hesitant to accept that diverticulitis was the cause of the patient’s abdominal pain, because she was “too young” to have diverticulitis. The Etzioni paper and other recent studies suggest that this mode of thinking may need to be reexamined.[7] Diverticulitis is a diagnosis that should be considered in all patients with abdominal pain, and not just in the elderly. Remember: common things are common, even in young people.

Dr. Aaron Smith is a former medical student and now a 1st-year transitional medicine resident at Harbor-UCLA

Peer reviewed by Michael Poles, MD, Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons

References

[1] Weizman AV, Nguyen GC. Diverticular disease: epidemiology and management. Can. J. Gastroenterol. 2011;25(7):385–389. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3174080/

[2] Etzioni DA, Mack TM, Beart RW Jr, Kaiser AM. Diverticulitis in the United States: 1998–2005. Annals of Surgery. 2009;249(2):210–217. http://journals.lww.com/annalsofsurgery/pages/articleviewer.aspx?year=2009&issue=02000&article=00006&type=abstract

[3] Zaidi E, Daly B. CT and clinical features of acute diverticulitis in an urban U.S. population: rising frequency in young, obese adults. AJR Am J Roentgenol. 2006;187(3):689–694. http://www.ajronline.org/doi/abs/10.2214/AJR.05.0033

[4] Aldoori W, Ryan-Harshman M. Preventing diverticular disease: review of recent evidence on high-fibre diets. Can Fam Physician. 2002;48:1632–1637. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2213940/

[5] Hjern F, Johansson C, Mellgren A, et al. Diverticular disease and migration–the influence of acculturation to a western lifestyle on diverticular disease. Aliment Pharmacol Ther. 2006;23:797–805. http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2036.2006.02805.x/abstract;jsessionid=E566F268C85EEE5067FA4E1664437630.d03t02

[6] Berseth CL. Disorders of the intestines and pancreas. In: Taeusch WH, Ballard RA, eds. Avery’s Diseases of the Newborn. 7th ed. Philadelphia: WB Saunders; 1998:918.

[7] van de Wall BJ, Poerink JA, Draaisma WA, Reitsma JB, Consten EC, Broeders IA. Diverticulitis in young versus elderly patients: a meta-analysis. Scand J Gastroenterol. 2013. http://informahealthcare.com/doi/abs/10.3109/00365521.2012.758765

Is it Time to Skip the Gym?

January 15, 2014

By Robert Mocharla, MD

Peer Reviewed

No. Sorry. Despite such reasonable excuses as “I forgot my iPod,” “It’s pouring rain,” or “Game of Thrones is on,” an exhaustive literature search will not reveal a shred of evidence that you or most of your patients should skip daily exercise. However, a subset of your patients should indeed be skipping workouts regularly: endurance athletes (e.g., cyclists, swimmers, long-distance runners, and other competitive athletes). While this may not describe the majority of our patients, growing evidence suggests that overtraining can actually be maladaptive to overall health in certain individuals.

The question of too much exercise was first asked many years ago when it was noticed that, despite rigorous training schedules, the performance of certain athletes actually began to decline months into their training routines. The athletes burned out. But why? Common sense tells us that the more exercise we engage in, the better shape we will be in. Surprisingly, this is not always the case, and until recently, little was known about this phenomenon.

An entity known as overtraining syndrome (OTS) has gained widespread acceptance as the cause of deteriorating athletic performance [1]. OTS has been an active area of research since the early 1990s, when it was noticed that not only can athletic performance in endurance athletes decline over time, but these athletes can also experience biochemical, psychological, and immunological abnormalities [2]. Currently, there is no universally accepted theory as to the cause of OTS. One theory with growing evidence is the “Cytokine Hypothesis” [3]. It posits that the repetitive joint and muscle trauma of excessive physical training elicits a response similar to that seen in chronic inflammation. Inflammatory cytokines at sites of injury activate monocytes, which then release pro-inflammatory IL-1β, IL-6, and TNF-α. The body then enters a catabolic, inflammatory state. As such, one might hypothesize that a chronic inflammatory state would be evident via biochemical markers (e.g., anemia, ESR, CRP). However, to date, no studies have been able to show a relationship between OTS and any of these biomarkers of chronic inflammation. In part, this is why a diagnosis of OTS is often difficult to reach (and should always be one of exclusion, after other systemic processes are ruled out) [4].

Another theory involves dysregulation of the hypothalamic-pituitary axis (as seen in amenorrheic female athletes). During exercise, the body acutely releases cortisol, epinephrine, and norepinephrine to enhance cardiovascular function and redistribute metabolic fuel. However, these hormones quickly return to baseline levels following a workout. Interestingly, endurance athletes suffering from burnout show higher baseline cortisol levels [5]. This may in turn impair normal metabolism, healing, and immunity. In fact, studies have shown an increased susceptibility to infection in overtrained athletes, although the mechanism is not fully understood [6,7]. Studies have not been able to identify differences in leukocyte number or distribution between overtrained and healthy athletes, and the true etiology may instead be impaired functionality of immune cells. It is important to note that the body’s normal response to intermittent exercise is adaptive overall. When allowed adequate time to recover, inflammatory cytokines and hormones decrease to normal levels. The body is then able to adapt to repeated exercise by increasing muscle mass, capillary density, endothelial cell function, and glucose utilization, among other adaptations [8]. It is the lack of recovery time that is problematic in OTS.

The most common initial manifestation of OTS involves mood changes. An otherwise emotionally stable athlete may become increasingly depressed, chronically fatigued, unable to sleep, have decreased appetite, and lose interest in competition [9]. Unfortunately, patients are rarely recognized in this stage and often go on to develop the hallmark of the syndrome, deteriorating athletic performance. Muscle and joint pain are often present as well. Depending on the severity, symptoms can last anywhere from a few weeks to years [10]. Even when declining performance is evident, there are no diagnostic criteria or laboratory tests that can confirm a suspicion of OTS. For now, the diagnosis is purely clinical. A high index of suspicion must be maintained for all at-risk groups. While competitive athletes are most classically thought of as high risk, OTS should also be considered in recreational athletes (who may unknowingly advance their training regimens too hastily). The primary focus of management is rest. Each case must be managed individually with regard to the symptom cluster experienced by the patient. It is recommended that patients rest for at least 3-5 weeks with minimal to no athletic training [11]. Selective serotonin reuptake inhibitors are increasingly being used to combat mood and appetite symptoms [12]. After recovery, a cyclical workout routine should be established with adequate recovery time between cycles. Patients should be advised to consume a high-carbohydrate diet (upwards of 60-70% of caloric intake) to help facilitate recovery between workouts [13]. Fortunately, athletes can and do recover when the appropriate interventions and precautions are taken.

It is difficult to predict who will develop or re-experience OTS, since the threshold of exercise tolerance varies widely among athletes. Therefore, patient education and prevention are critical. Studies estimate that up to 10% of vigorously training athletes have experienced or will experience OTS [4]. Athletes should be questioned about their exercise routines and informed about the dangers and warning signs of overtraining. Any evidence of psychiatric disturbance or decreased performance should prompt a discussion of the possibility and management of OTS.

Dr. Robert Mocharla is a graduate of NYU School of Medicine

Peer reviewed by Richard Greene, MD, Internal Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Budgett R. The overtraining syndrome. British Journal of Sports Medicine. 1990; 24:231–6.  http://bjsm.bmj.com/content/24/4/231.long

2. O’Toole, M. Overreaching and overtraining in endurance athletes. Overtraining in Sport. R.B. Kreider, A.C. Fry, M.L. O’Toole, eds. Champaign IL: Human Kinetics Publishers, Inc., 1998; 3-18.

3. Smith, LL. Cytokine hypothesis of overtraining: a physiological adaptation to excessive stress? Medicine & Science in Sports & Exercise. 2000; 32 (2): 317-31. http://journals.lww.com/acsm-msse/pages/articleviewer.aspx?year=2000&issue=02000&article=00011&type=abstract

4. Meeusen R, Ducios M, Foster C, et al. Prevention, diagnosis, and treatment of the overtraining syndrome: joint consensus statement of the European College of Sport Science and the American College of Sports Medicine. Med Sci Sports Exerc. 2013; 45(1):186-205  http://journals.lww.com/acsm-msse/Citation/2013/01000/Prevention,_Diagnosis,_and_Treatment_of_the.27.aspx

5. O’Connor, P.J. et al. Mood state and salivary cortisol levels following overtraining in female swimmers. Psychoneuroendocrinology. 1989; 14 (4), 303-310.

6. Mackinnon, LT. Chronic exercise training effects on immune function. Medicine & Science in Sports & Exercise. 2000; 32 (7 Suppl): S369-76. http://journals.lww.com/acsm-msse/pages/articleviewer.aspx?year=2000&issue=07001&article=00001&type=abstract

7. Heath, G. W., E. S. Ford, T. E. Craven, C. A. Macera, K. L. Jackson, and R. R. Pate. Exercise and the incidence of upper respiratory tract infections. Medicine & Science in Sports & Exercise. 1991; 23:152–157.

8. Mandroukas K, Krotkiewski M, Hedberg M, et al. Physical training in obese women. Effects of muscle morphology, biochemistry and function. Eur J Appl Physiol Occup Physiol. 1984;52:355-61.

9. Morgan WP, Brown DR, Raglin JS, O’Connor PJ, Ellickson KA. Psychological monitoring of overtraining and staleness. British Journal of Sports Medicine. 1987; 21(3):107–114. http://bjsm.bmj.com/content/21/3/107.long

10. Lehmann, M., U. Gastmann, K.G. Petersen, N. Bach, A. Siedel, A.N. Khalaf, S. Fischer, and J. Keul. Training-overtraining: Performance, and hormone levels, after a defined increase in training volume versus intensity in experienced middle and long-distance runners. British Journal of Sports Medicine. 1992; 26:233-242.  http://bjsm.bmj.com/content/26/4/233.long

11. Koutedakis, Y., Budgett, R., Fullman, L. The role of physical rest for underperforming elite competitors. British Journal of Sports Medicine. 1990; 24(4):248-52.

12. Armstrong LE, VanHeest JL. The unknown mechanism of the overtraining syndrome: clues from depression and psychoneuroimmunology. Sports Med. 2002;32:185-209. http://link.springer.com/article/10.2165%2F00007256-200232030-00003

13. Costill DL: Inside Running: Basics of Sports Physiology. Indianapolis: Benchmark Press; 1986.

Who Should We Screen for Hepatitis C: By Risk Or Birth Cohort?

January 8, 2014

By Jung-Eun Ha

Peer Reviewed

Over the last few years, major changes have occurred in the diagnosis and treatment of hepatitis C. In 2011 the U.S. Food and Drug Administration (FDA) approved a rapid fingerstick antibody test for hepatitis C virus (HCV) infection [1]. The FDA also approved the protease inhibitors telaprevir (Incivek; Vertex Pharmaceuticals, Cambridge, Massachusetts; Johnson & Johnson, New Brunswick, New Jersey) and boceprevir (Victrelis; Merck, Whitehouse Station, New Jersey) for the treatment of genotype 1 hepatitis C [1]. In August 2012, the Centers for Disease Control and Prevention recommended one-time screening for hepatitis C in all persons born between 1945 and 1965 [2]. In June 2013, the U.S. Preventive Services Task Force (USPSTF) also recommended screening for HCV infection in high-risk individuals and one-time screening in individuals born between 1945 and 1965 (“B” recommendation) [3]. The birth-cohort recommendation greatly expands the size of the screening population, which was previously limited to high-risk individuals: people who have ever injected drugs, recipients of blood transfusions or organ transplants before 1992, those ever on hemodialysis, healthcare workers exposed to HCV-infected blood, children born to HCV-positive mothers, and sexual partners of HCV-positive persons.

The update affects about 82 million Americans born between 1945 and 1965 [4]. The 1999-2008 National Health and Nutrition Examination Survey revealed that HCV antibody prevalence in this cohort is 3.25%, or 2.7 million people, as opposed to 0.88% in people born outside of the cohort [2]. The prevalence is about 1.6% in the general population [5]. More than two-thirds of the chronically infected belong to the 1945-1965 baby-boomer cohort. Many of them were inadvertently exposed to HCV-infected blood before the discovery of HCV in 1989 and the development of a screening test in 1992. HCV incidence was highest during the 1980s. Given the slow progression from chronic HCV infection to cirrhosis and hepatocellular carcinoma over decades [6], now is the time to screen this birth cohort, before complications start to appear. Advanced fibrosis of the liver responds poorly to HCV treatment and is also more costly to treat [7].

Relying solely on risk-based screening in this birth cohort is not sufficient, as up to 45% of people ever infected with HCV may not recall any exposure risk and thus are unlikely to volunteer for screening [2]. Fifty to eighty percent of those infected do not know their HCV status [8]. The number of people born between 1945 and 1965 needed to screen to prevent one HCV-related death is 607.
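As a rough sketch of the arithmetic behind these figures, the following Python snippet uses only the numbers quoted above; applying the number needed to screen to the entire cohort is a back-of-envelope extrapolation, not a published estimate.

```python
# Back-of-envelope arithmetic using only the figures quoted above.
cohort_size = 82_000_000   # Americans born 1945-1965 [4]
prevalence = 0.0325        # HCV antibody prevalence in the cohort [2]

infected = cohort_size * prevalence
print(f"Estimated HCV antibody-positive: {infected / 1e6:.1f} million")  # ~2.7 million

# Number needed to screen (NNS) to prevent one HCV-related death.
# Applying it to the whole cohort is a rough extrapolation.
nns = 607
print(f"Deaths potentially averted: {cohort_size / nns:,.0f}")  # ~135,000
```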

Overall, one-time HCV screening of this birth cohort is estimated to cost around $15,700 per quality-adjusted life year (QALY) [9]. By comparison, screening for colorectal cancer with colonoscopy can cost about $10,000 to $25,000 per QALY [10] and requires repeated studies. HCV screening of the 1945-1965 cohort is likely a one-time screening event, as HCV incidence has decreased drastically over the years, thanks to effective blood screening and increased awareness of HCV transmission among IV drug users. Morbidity and mortality from chronic HCV infection should fall further still, with a number of direct-acting antivirals in the pipeline [11-12]. A recent proof-of-concept study of a vaccine against a single strain of HCV [13] suggests that mass screening may not even be necessary in the future if, and hopefully when, primary prevention is possible and feasible.

Commentary by Dr. Vinh Pham of the Division of Infectious Diseases

The rationale for identifying persons earlier in the course of their disease includes the greater likelihood of achieving a successful outcome after treatment. The registrational studies for interferon-based hepatitis C therapies have consistently shown lower rates of sustained virologic response in subjects with fibrosis scores of 3 or 4, demonstrating the need to identify and treat people in the earlier stages of fibrosis. Ironically, the development of an effective vaccine against HCV may further expand the need for HCV testing, since the vaccine would only be offered to those not already infected.

Jung-Eun Ha is a 4th year medical student at NYU School of Medicine

Peer reviewed by Vinh Pham, MD, Infectious Disease, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

[1] Getchell JP, Wroblewski KE, DeMaria A, et al. Testing for HCV infection: an update of guidance for clinicians and laboratorians. MMWR Morb Mortal Wkly Rep. 2013;62(18):1-4.  http://www.medscape.com/viewarticle/804472

[2] Smith BD, Morgan RL, Beckett GA, et al. Centers for Disease Control and Prevention. Recommendations for the identification of chronic hepatitis C virus infection among persons born during 1945–1965. MMWR Recomm Rep. 2012;61(RR-4):1-32.  http://www.cdc.gov/mmwr/preview/mmwrhtml/rr6104a1.htm

[3] U.S. Preventive Services Task Force. Screening for hepatitis C virus infection in adults: U.S. Preventive Services Task Force recommendation statement. http://www.uspreventiveservicestaskforce.org/uspstf12/hepc/hepcfinalrs.htm Published June 25, 2013. Accessed August 12, 2013.

[4] Centers for Disease Control and Prevention. Population projections, United States, 2004 – 2030, by state, age and sex. http://wonder.cdc.gov/population-projections.html.  Published 2005. Updated June 26, 2009. Accessed May 18, 2013.

[5] Armstrong G, Wasley A, Simard E, McQuillan GM, Kuhnert WL, Alter MJ. The prevalence of hepatitis C virus infection in the United States, 1999 through 2002. Ann Intern Med. 2006;144(10):705–714. http://www.ncbi.nlm.nih.gov/pubmed/16702586

[6] Chen SL, Morgan TR. The natural history of hepatitis C virus (HCV) infection. Int J Med Sci. 2006;3(2):47–52.  http://www.medsci.org/v03p0047

[7] Prati GM, Aghemo A, Rumi MG, et al. Hyporesponsiveness to PegIFNalpha2B plus ribavirin in patients with hepatitis C-related advanced fibrosis. J Hepatol. 2012;56(2):341-347.

[8] Hagan H, Campbell J, Thiede H, et al. Self-reported hepatitis C virus antibody status and risk behavior in young injectors. Public Health Rep. 2006;121(6):710-719.  http://www.ncbi.nlm.nih.gov/pubmed/17278406

[9] Rein DB, Smith BD, Wittenborn JS, et al. The cost-effectiveness of birth-cohort screening for hepatitis C antibody in U.S. primary care settings. Ann Intern Med. 2012;156(4):263-270.  http://www.ncbi.nlm.nih.gov/pubmed/22056542

[10] Pignone M, Saha S, Hoerger T, Mandelblatt J. Cost-effectiveness analyses of colorectal cancer screening: a systematic review for the U.S. Preventive Services Task Force. Ann Intern Med. 2002;137(2):96-104.  http://www.ncbi.nlm.nih.gov/pubmed/12118964

[11] Schaefer E, Chung R. Antihepatitis C virus drugs in development. Gastroenterology 2012;142(6):1340–1350.  http://www.ncbi.nlm.nih.gov/pubmed/22537441

[12] Poordad F, Dieterich D. Treating hepatitis C: current standard of care and emerging direct-acting antiviral agents. J Viral Hepat. 2012;19(7):449–464.  http://www.ncbi.nlm.nih.gov/pubmed/22676357

[13] Law JL, Chen C, Wong J, et al. A hepatitis C virus (HCV) vaccine comprising envelope glycoproteins gpE1/gpE2 derived from a single isolate elicits broad cross-genotype neutralizing antibodies in humans. PLoS One. 2013;8(3):e59776. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3602185/

To Stent or not to Stent? A Review of the Evidence on the Utility of Stenting in Renal Artery Stenosis

November 22, 2013

By Elizabeth Hammer, MD

Faculty Peer Reviewed

Renovascular hypertension, often caused by renal artery stenosis (RAS) due to atherosclerosis or fibromuscular dysplasia, is the most common potentially correctable cause of secondary hypertension. Although only approximately one percent of patients with hypertension have atherosclerotic renovascular disease (ARVD), the prevalence increases to 30-40% in patients with coronary artery disease, congestive heart failure, and peripheral vascular disease. Screening studies of asymptomatic populations in the United States demonstrate a disease prevalence of 7%, with an annual incidence of 0.5% per year in analyses of medical claims of asymptomatic patients over age 65 [1]. Moreover, ARVD has been found to significantly increase cardiovascular morbidity and mortality; in a recent randomized controlled trial, annual mortality was 8% in patients with ARVD, compared with 3.7% in the general population [1]. As such, this article will focus on the management of RAS due to atherosclerosis in particular.

Currently there are three treatment options: (1) medical therapy, consisting of statins, anti-platelet agents, and blood pressure control with renin-angiotensin blockade; (2) percutaneous transluminal renal angioplasty with stent placement (PTRAS); and (3) surgical revascularization. Testing for ARVD is not without risk, especially in those with impaired kidney function. Additionally, procedures to correct RAS are associated with morbidity and mortality, including renal artery dissection, thrombosis, or perforation, acute kidney injury, and death. According to the 2005 American College of Cardiology/American Heart Association guidelines [2], then, testing for RAS is indicated only in those with a suggestive history and in whom a corrective procedure will be performed if ARVD is detected. Therein lies the major question: is stenting worthwhile in the treatment of RAS? More specifically, does percutaneous angioplasty deliver improvements in blood pressure control, renal disease, and cardiovascular outcomes that cannot be obtained by medication alone?

Although many studies have attempted to answer these questions, the findings thus far are unsettling. Upon review of the relevant research, a wide discrepancy emerges between the outcomes from observational data and those obtained from randomized controlled trials. Observational data have demonstrated a clear benefit from stenting in patients with ARVD. Gray et al [3] found a significant decrease in blood pressure, serum creatinine, New York Heart Association functional class, and number of annual hospitalizations for heart failure in 39 patients who had undergone PTRAS for refractory heart failure. Similarly, Kalra [4] reported significantly lower rates of mortality and renal disease progression in 89 patients treated with PTRAS in comparison with 346 patients treated with medical therapy alone. Additionally, systematic reviews of multiple observational cohort studies have consistently found that PTRAS is associated with improvement in blood pressure and kidney function [5].

Yet these impressive positive effects have failed to withstand more rigorous investigation. The three earliest RCTs, from 1998 to 2000 [6,7,8], failed to show improvement in blood pressure with procedural intervention as compared with medical therapy alone. It should be noted that the external validity of these studies is limited. In these trials, balloon angioplasty was performed without stent placement, a practice that has since been associated with worse angiographic outcomes and higher rates of restenosis, and one that is therefore no longer standard of care. Additionally, these studies have been criticized for their small sample sizes, short follow-up times, high cross-over rates, and inclusion of patients with clinically insignificant stenosis.

Despite the limitations of these earliest RCTs, their findings, namely a lack of benefit with stenting as compared with medical therapy, appear to be validated in subsequent RCTs and meta-analyses. The STAR trial [9] enrolled 140 patients with RAS >50% and renal insufficiency and found that PTRAS did not improve creatinine clearance or the secondary endpoints of cardiovascular morbidity and mortality. In the largest trial to date, ASTRAL [10], 806 patients with RAS >50% were randomized to PTRAS versus medical therapy alone. Not only did stenting fail to demonstrate benefit in terms of the primary endpoint of loss of renal function and the secondary endpoints of blood pressure, cardiovascular events, and all-cause mortality, but the rate of serious adverse events with stenting was 6.8%. Unsurprisingly, then, a meta-analysis by Kumbhani et al [11] of the various RCTs failed to demonstrate any difference between PTRAS and drug therapy in terms of blood pressure control, renal function, all-cause mortality, heart failure, and stroke. An additional meta-analysis by Pierdomenico et al [12] also failed to show any difference in the risk of future nonfatal myocardial infarction.

However, neither of the above RCTs is above reproach, specifically with regard to patient selection. Approximately 28% of patients randomized to PTRAS in the STAR trial [9] did not get stented because they were found to have insignificant (less than 50%) stenosis when angiography for stent placement was performed. And in the ASTRAL trial [10], patients were included only if their primary treating physician was uncertain whether the patient would definitely benefit from revascularization, thereby likely excluding high-risk patients.

So what conclusions can we draw? Well, here are my parting thoughts:

1. Most patients should be managed with drug therapy alone. The preponderance of the most rigorous type of evidence from RCTs clearly demonstrates the risk of harm without clear demonstrable benefit from revascularization.

2. More research is needed on the utility of revascularization in high-risk patients. It is possible that the discrepancy between outcomes in the observational trials and RCTs is due to the fact that high-risk patients derive benefit from PTRAS whereas low to moderate risk patients or patients with incidental finding of RAS on imaging, the majority of patients studied, do not.

The ongoing CORAL trial [13] – whose results are due imminently – should shed some light on these issues. This prospective, multicenter, unblinded randomized controlled trial aims to determine the incremental value of stent revascularization in addition to optimal medical therapy. Potential subjects all underwent renal angiography and were included in the study if they had atherosclerotic renal stenosis >80%, or >60% with a 20 mmHg systolic pressure gradient, as well as systolic hypertension of at least 155 mmHg on a minimum of two anti-hypertensive agents. A total of 1,080 subjects were randomized, giving 90% power to detect a 28% reduction in the composite endpoint of cardiovascular or renal death, MI, hospitalization for CHF, stroke, doubling of creatinine, and need for renal replacement therapy (a rough sketch of this kind of sample-size arithmetic follows the final point below). The results of this study are expected to provide more definitive guidance, so stay tuned.

3. Current AHA/ACC guidelines [2] suggest that revascularization be considered only in those with a high likelihood of benefit from undergoing stenting, namely those with short duration of blood pressure elevation prior to diagnosis of ARVD, failure of optimal medical therapy to control blood pressure, intolerance to optimal medical therapy, and recurrent flash pulmonary edema and/or refractory heart failure.
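As promised above, here is a toy version of the sample-size arithmetic behind a trial like CORAL, sketched in Python with statsmodels. The 90% power and 28% relative reduction come from the trial description; the control-arm event rate and the two-sided alpha of 0.05 are assumptions, and a real survival trial would power on hazard ratios rather than simple proportions.

```python
# Toy sample-size sketch for a two-arm trial powered to detect a 28%
# relative reduction with 90% power (figures from the CORAL description).
# The control-arm event rate and two-sided alpha of 0.05 are assumed, and
# a real time-to-event trial would power on hazard ratios, not proportions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control = 0.35                     # hypothetical composite event rate
p_stent = p_control * (1 - 0.28)     # 28% relative reduction -> 0.252

effect = proportion_effectsize(p_control, p_stent)  # Cohen's h
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.90, alternative="two-sided"
)
print(f"~{n_per_arm:.0f} subjects per arm")
# ~230 per arm under these toy assumptions; the actual trial, powered on
# time-to-event data with its own event-rate assumptions, enrolled 1,080.
```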

Dr. Elizabeth Hammer is a 2nd-year resident at NYU Langone Medical Center

Peer reviewed by David Goldfarb, MD, Professor of Medicine, Department of Medicine (Nephrology),  NYU Langone Medical Center and Chief of Nephrology at the Department of Veterans Affairs New York Harbor.

Image courtesy of Wikimedia Commons

References

1. Ritchie J, Green D, Kalra PA. Current views on the management of atherosclerotic renovascular disease. Ann Med 2012; 44(Suppl 1): S98.  http://www.ncbi.nlm.nih.gov/pubmed/22713155

2. Hirsch AT, Haskal ZJ, Hertzer NR, et al. ACC/AHA 2005 Practice Guidelines for the management of patients with peripheral arterial disease (lower extremity, renal, mesenteric, and abdominal aortic): a collaborative report from the American Association for Vascular Surgery/Society for Vascular Surgery, Society for Cardiovascular Angiography and Interventions, Society for Vascular Medicine and Biology, Society of Interventional Radiology, and the ACC/AHA Task Force on Practice Guidelines (Writing Committee to Develop Guidelines for the Management of Patients With Peripheral Arterial Disease): endorsed by the American Association of Cardiovascular and Pulmonary Rehabilitation; National Heart, Lung, and Blood Institute; Society for Vascular Nursing; TransAtlantic Inter-Society Consensus; and Vascular Disease Foundation. Circulation 2006; 113:e463.

3. Gray BH, Olin JW, Childs MB, et al. Clinical benefit of renal artery angioplasty with stenting for the control of recurrent and refractory congestive heart failure. Vasc Med 2002; 7:275.

4. Kalra PA, Chrysochou C, Green D, et al. The benefit of renal artery stenting in patients with atheromatous renovascular disease and advanced chronic kidney disease. Catheter Cardiovasc Interv 2010; 75:1.  https://www.ncbi.nlm.nih.gov/m/pubmed/19937777/

5. Foy A, Ruggiero NJ, Filippone. Revascularization in renal artery stenosis. Cardiology in Review 2012; 20:189.  http://www.ncbi.nlm.nih.gov/pubmed/22314144

6. Plouin PF, Chatellier G, Darne B, Raynaud A. Blood pressure outcome of angioplasty in atherosclerotic renal artery stenosis: a randomized trial. Essai Multicentrique Medicaments v Angioplastie (EMMA) Study Group. Hypertension 1998; 31:823.

7. Webster J, Marshall F, Abdalla M, et al. Randomised comparison of percutaneous angioplasty vs continued medical therapy for hypertensive patients with atheromatous renal artery stenosis. Scottish and Newcastle Renal Artery Stenosis Collaborative Group. J Hum Hypertens 1998; 12:329.

8. van Jaarsveld BC, Krijnen P, Pieterman H, et al. The effect of balloon angioplasty on hypertension in atherosclerotic renal-artery stenosis. Dutch Renal Artery Stenosis Intervention Cooperative Study Group. N Engl J Med 2000; 342:1007.

9. Bax L, Woittiez AJ, Kouwenberg HJ, et al. Stent placement in patients with atherosclerotic renal artery stenosis and impaired renal function: a randomized trial. Ann Intern Med 2009; 150:840.  http://www.ncbi.nlm.nih.gov/pubmed/19414832

10. ASTRAL Investigators, Wheatley K, Ives N, et al. Revascularization versus medical therapy for renal-artery stenosis. N Engl J Med 2009; 361:1953.

11. Kumbhani DJ, Bavry AA, Harvey JE, et al. Clinical outcomes after percutaneous revascularization versus medical management in patients with significant renal artery stenosis: a meta-analysis of randomized controlled trials. Am Heart J 2011; 161:622. http://www.ncbi.nlm.nih.gov/pubmed/21392620

12. Pierdomenico AD, Pierdomenico AM, Cuccurullo C, et al. Cardiac events in hypertensive patients with renal artery stenosis treated with renal angioplasty or drug therapy: meta-analysis of randomized trials. Am J Hypertension 2012; 25:1209.

13. Cooper CJ, Murphy TP, Matsumoto A, et al. Stent revascularization for the prevention of cardiovascular and renal events among patients with renal artery stenosis and systolic hypertension: rationale and design of the CORAL trial. American Heart Journal 2006; 152:59.  http://www.ncbi.nlm.nih.gov/pubmed/16824832

FROM THE ARCHIVES – Kayexalate: What is it and does it work?

November 7, 2013

Please enjoy this post from the archives dated December 1, 2010

By Todd Cutler, MD

Faculty Peer Reviewed

A 62-year-old man is hospitalized with an acute congestive heart failure exacerbation. On hospital day three, the patient’s symptoms have significantly improved with furosemide 80 mg IV twice daily. He is continued on IV diuretics and aggressive electrolyte repletion. On day five of his admission, his basic metabolic panel is significant for a creatinine of 2.3 mg/dL (increased from 1.3 mg/dL on admission) and a potassium concentration of 5.9 mEq/L. His EKG is unchanged from admission. His furosemide is discontinued and he is given 15 g of Kayexalate. Overnight he has a large bowel movement. The next morning his creatinine is 1.9 mg/dL and his potassium is 5.1 mEq/L.

Should Kayexalate be used in the management of hyperkalemia?

Developed in the 1930s, synthetic ion-exchange resins are insoluble polymers combined with a reactive acid group and saturated with a specific ion.[1] Once introduced into complex solvents, the resins exchange their preloaded ions for others in the solution. Their utility was predominantly industrial until 1946, when resins were proposed as a tool for removing dietary sodium in patients with heart failure and other “edematous states.”[2,3,4] While ion-exchange resins were ultimately found to be ineffective in heart failure management, small studies showed promise in the treatment of “potassium toxicity” using a polymer called sodium polystyrene sulfonate (SPS), marketed as Kayexalate.[5] The exchange of sodium for potassium within the large bowel was believed to induce a form of gut dialysis, resulting in diminished total body potassium.

In 1961, an uncontrolled study evaluated 32 patients with renal failure. Patients received either oral or rectally administered SPS while their intake of dietary potassium was tightly controlled. Patients were treated and monitored for varying lengths of time with one patient receiving SPS three times weekly for 280 days. Over the first 24 hours, plasma potassium concentrations decreased 1.0 mEq/L and 0.8 mEq/L in patients receiving oral and rectal SPS, respectively, with a few patients developing hypokalemia. The authors also reported complete reversion of abnormal EKG findings after SPS administration.[6]

An accompanying report in the same journal evaluated ten patients with renal failure who were treated for five days with either an SPS and sorbitol mixture or sorbitol alone. Prior to this study, it had been noted that an adverse effect of SPS was constipation leading, in some cases, to fecal impaction. A proposed solution to this problem involved the co-administration of sorbitol, an osmotic laxative, at a concentration of 70%. This sped delivery of SPS to the colon, where the majority of ion-exchange activity was believed to occur, while simultaneously inducing defecation – ultimately, the desired cathartic effect of the drug.

In the study, patients who received the co-administered formulation and those who received sorbitol alone both demonstrated decreased serum potassium concentrations. Furthermore, sodium concentrations were increased in patients who received SPS with sorbitol but not in those who received sorbitol alone. The authors noted, “That this rise is caused by the sodium released from the resin in exchange for potassium is evident since there is no elevation of serum sodium when sorbitol is used alone,” while ultimately concluding that “sorbitol alone is as effective as a combination of resin and sorbitol in removing potassium, or more so. However, sorbitol alone necessitated a greater volume of debilitating diarrhea. In either case the predictability of the fall in serum potassium was impressive.”[7]

Since the 1960s, investigations into the efficacy of SPS in treating hyperkalemia have been limited. One small study in 1998 showed no change in serum potassium concentration after a single dose of SPS or placebo, with or without a sorbitol additive.[8] The efficacy of SPS and any additive effect of sorbitol on serum potassium concentrations have never been elucidated in larger studies. Meanwhile, SPS became widely accepted as a means of treating hyperkalemia based on the results of uncontrolled reports and empiric observations.

While the efficacy of these drugs remains a matter of debate, their toxicities are widely recognized. Multiple reports have implicated sorbitol in the development of SPS crystals and resultant intestinal bleeding, ischemia, colitis, necrosis, and bowel perforation.[9,10,11,12] In 2007, the FDA mandated a decrease in the concentration of sorbitol in SPS formulations from 70% to 33%; however, episodes of ischemic colitis continued to be reported with the less concentrated mixture. In late 2009, the FDA issued a non-mandatory recommendation against the practice of combining SPS and sorbitol in a prepackaged mixture.[13] Compliance with this recommendation would effectively end current practice, as most pharmacies supply SPS only in the prepackaged formulation. Furthermore, any further use of SPS would necessitate the co-administration of a laxative, given the drug’s known constipating effects.

While empiric evidence supports the effectiveness of SPS when used over an extended period of time, the argument remains that, in published studies, any perceived short-term effect cannot be definitively attributed to SPS due to confounding factors such as a low-potassium diet or fluid repletion.[8] Others have suggested that apparent decreases in serum potassium concentrations after single doses of SPS may be explained by extracellular volume expansion following absorption of sodium released from the SPS resin.[8]

The dearth of clinical evidence supporting the efficacy of SPS prompted the authors of a recent commentary in the Journal of the American Society of Nephrology to call for careful consideration before using SPS, remarking, “It would be wise to exhaust other alternatives for managing hyperkalemia before turning to these largely unproven and potentially harmful therapies.”[13] The utilization of dietary restriction, diuretics, bicarbonate, beta-agonists, and insulin with dextrose, along with a careful investigation into the etiology of an individual patient’s hyperkalemia, may obviate the perceived need for SPS administration. Until future studies clarify a role for this controversial drug, physicians should take into account the compiled evidence when weighing the risks and benefits of SPS administration.

Special thanks to Dr. John Papadopoulos for his helpful commentary and assistance in the drafting of this article.

Commentary by Dr. John Papadopoulos

The skill to select and dose an optimal pharmacotherapeutic regimen to treat our patients develops during our training and over the course of our professional careers.  When learning about a medication, we focus on pharmacology, pharmacokinetics, potential adverse events, and data to support use in clinical practice.  Unfortunately, we have a paucity of data for guidance when using medications developed and marketed before the rigor of our current drug review standards.  The use of Kayexalate in the management of hyperkalemia has been propagated by bedside teaching and without the rigor of evidence-based clinical trials.  Dr. Cutler cogently summarizes the available literature and highlights the potential complications of Kayexalate.

In my experience, Kayexalate (per os and per rectum) is able to lower potassium levels modestly over the course of a few hours.  It is a second-line agent that may be used when there is a need to lower total body potassium, as other interventions (except renal replacement therapies) only temporarily move potassium into the intracellular fluid.

Dr. Cutler is a second year resident at NYU Langone Medical Center and co-editor, pharmacology section of Clinical Correlations

Faculty Peer Reviewed by Neil Shapiro, MD, Editor-In-Chief, Clinical Correlations

Image courtesy of Wikimedia Commons

References:

1. Cation exchange resins in the treatment of congestive heart failure. Hay SH, Wood JE Jr. Ann Intern Med. 1950 Nov;33(5):1139-49.

2. The use of a carboxylic cation exchange resin in the therapy of congestive heart failure. Feinberg AW, Rosenberg B. Am Heart J. 1951 Nov;42(5):698-709.

3. Prolonged cation-exchange resin therapy in congestive heart failure. Voyles C Jr, Orgain ES. N Engl J Med. 1951 Nov 22;245(21):808-11.

4. The effect of a cation exchange resin on electrolyte balance and its use in edematous states. Irwin L, Berger EY, Rosenberg B, Jackenthal R. J Clin Invest. 1949 Nov;28(6, Pt. 2):1403-11.

5. Ion-exchange resins in the treatment of anuria. Evans BM, Jones NC, Milne MD, Yellowlees H. Lancet. 1953 Oct 17;265(6790):791-5.

6. Management of hyperkalemia with a cation-exchange resin. Scherr L, Ogden DA, Mead AW, Spritz N, Rubin AL. N Engl J Med. 1961 Jan 19;264:115-9.

7. Treatment of the oliguric patient with a new sodium-exchange resin and sorbitol; a preliminary report. Flinn RB, Merrill JP, Welzant WR. N Engl J Med. 1961 Jan 19;264:111-5.

8. Effect of single dose resin-cathartic therapy on serum potassium concentration in patients with end-stage renal disease. Gruy-Kapral C, Emmett M, Santa Ana CA, Porter JL, Fordtran JS, Fine KD. J Am Soc Nephrol. 1998 Oct;9(10):1924-30.

9. Upper gastrointestinal tract injury in patients receiving kayexalate (sodium polystyrene sulfonate) in sorbitol: clinical, endoscopic, and histopathologic findings. Abraham SC, Bhagavan BS, Lee LA, Rashid A, Wu TT. Am J Surg Pathol. 2001 May;25(5):637-44.

10. Intestinal necrosis due to sodium polystyrene (Kayexalate) in sorbitol enemas: clinical and experimental support for the hypothesis. Lillemoe KD, Romolo JL, Hamilton SR, Pennington LR, Burdick JF, Williams GM. Surgery. 1987 Mar;101(3):267-72.

11.  Necrosis of the gastrointestinal tract in uremic patients as a result of sodium polystyrene sulfonate (Kayexalate) in sorbitol: an underrecognized condition. Rashid A, Hamilton SR. Am J Surg Pathol. 1997 Jan;21(1):60-9.

12. From hyperkalemia to ischemic colitis: a resinous way. Tapia C, Schneider T, Manz M. Clin Gastroenterol Hepatol. 2009 Aug;7(8):e46-7.

13. Ion-exchange resins for the treatment of hyperkalemia: are they safe and effective? Sterns RH, Rojas M, Bernstein P, Chennupati S. J Am Soc Nephrol. 2010 May;21(5):733-5.

To Stent or Not to Stent?

November 6, 2013

By Anish Vani

Faculty Peer Reviewed

According to the 2010 Heart Disease and Stroke Statistics update of the American Heart Association, there are 17.6 million Americans living with coronary heart disease (CHD) [1]. Fortunately, mortality from heart disease is on the decline in the United States and in countries with advanced health care, likely due to better management of acute coronary syndrome (ACS) and a reduction in lifestyle risk factors such as smoking. However, for the millions of Americans with stable ischemic heart disease (SIHD), there is ambiguity over the best course of treatment. While there is little doubt that revascularization reduces mortality in the setting of ACS, significant controversy remains over how to manage SIHD. For example, if a patient has angina and documented severe coronary artery disease, should he receive guideline-directed medical therapy (GDMT), followed by revascularization if his symptoms persist? GDMT entails beta blockers, ACE inhibitors, statins, and a daily aspirin, in addition to lifestyle modifications that include smoking cessation, exercise, and a healthy diet. Or should he instead receive immediate revascularization plus GDMT?

The COURAGE* trial in 2007 demonstrated that initial management with revascularization plus optimal medical therapy (OMT) did not reduce the risk of death or non-fatal myocardial infarction (MI) compared with OMT alone [2]. In COURAGE, which followed 2287 patients with SIHD in multiple centers across the United States and Canada, the revascularization and conservative medical management groups had similar outcomes in the primary endpoint of death from any cause and non-fatal MI (19% and 18.5%, respectively, p=0.62). Additionally, hospitalization for ACS was approximately the same in both groups at 12% (p=0.56). The BARI 2D* trial showed similar results in patients with diabetes mellitus and stable angina. There was no significant difference in survival rates between the revascularization and medical therapy groups at 5 years of follow-up (88.3% and 88.2%, respectively, p=0.89) [3]. Interestingly, both COURAGE and BARI 2D showed greater freedom from angina with prompt revascularization, but these differences were no longer significant after several years of follow-up, which may partly be a result of the high rate (almost 40%) of revascularization in the OMT-only group [4]. More recently, Stergiopoulos and Brown (2012) performed a meta-analysis of 8 randomized controlled trials from 1970 to 2011 and arrived at conclusions similar to those of the aforementioned trials [5].

Taken together, these studies suggest that GDMT should be started before percutaneous coronary intervention (PCI), and they have changed the way the medical profession thinks about stents. They prompted a paradigm shift away from the “clogged pipe” model, in which a coronary artery is thought to slowly accumulate plaque and cause decreasing blood flow to the distal myocardium over time. Newer models suggest a systemic atherosclerotic process, with multiple plaques throughout the coronary arteries; any of these plaques can rupture at any time, forming a clot and causing an MI [6]. Therefore, those opposed to immediate revascularization argue that it is not feasible to balloon and stent each plaque, and instead advocate treating the underlying disease with medical therapy.

However, a study by Spertus and colleagues (2011) questions the impact of the COURAGE trial on general medical practice. The authors examined the National Cardiovascular Data Registry and found that patients undergoing PCI for SIHD received a trial of GDMT less than half the time, and the rates did not change significantly after publication of the COURAGE trial [7]. This may be due to important limitations of both COURAGE and BARI 2D. It can be argued that these trials did not enroll enough high-risk patients. For example, approximately 30% of patients in BARI 2D had a myocardial jeopardy index (MJI) <25%, and another 40% of patients had an MJI <50%. Eighteen percent of the screened population in COURAGE was excluded for “logistic reasons,” and it is unclear how patients enrolled in the study differed from those not enrolled. Additionally, thousands of patients with SIHD were excluded from these trials, raising the question of selection bias and whether the results can be applied to the general patient population. Lastly, 97% of PCIs in the COURAGE trial used bare metal stents, as drug-eluting stents were not available until the closing years of the trial. Although the authors do not consider this a limitation, more recent data, including a meta-analysis by Bangalore and colleagues (2012) and a study by Palmerini and colleagues (2012), demonstrate significant reductions in stent thrombosis and MI with newer-generation drug-eluting (everolimus) stents [8-9].

The National Institutes of Health (NIH) recently awarded NYU Langone Medical Center an $84 million grant to determine the best course of management for patients with SIHD. The trial, called ISCHEMIA*, hypothesizes that patients with moderate-to-severe ischemia (as determined by stress test imaging) will benefit from early revascularization via PCI or coronary artery bypass grafting (CABG) [10]. The trial, which expects to enroll 8000 patients across 400 research centers around the world, partly bases this hypothesis on an observational study from Cedars-Sinai. In that study of 10,627 patients who underwent myocardial perfusion imaging, those with at least moderate ischemia (>10%) had fewer cardiac deaths and greater reduction of ischemia with revascularization than with GDMT alone over a 3-year follow-up period [11]. Furthermore, a COURAGE trial nuclear substudy looked at COURAGE patients with moderate-to-severe pre-treatment ischemia and found that PCI combined with OMT resulted in greater reduction of ischemia than OMT alone [12]. However, as this was a substudy, there remains a need for a randomized clinical trial comparing coronary revascularization plus OMT with OMT alone in a high-risk population, and this is where ISCHEMIA will fill the void. The ISCHEMIA authors also argue that COURAGE and BARI 2D randomized patients after catheterization, not before, and may have excluded higher-risk patients who might have benefited more, creating a significant selection bias. The ISCHEMIA study uses a unique protocol in which eligible patients first undergo CT angiography (CTA) to rule out severe left main disease and normal coronary arteries. The results of the CTA are otherwise blinded, such that patients without the above findings are randomized to coronary revascularization with OMT or OMT alone prior to coronary angiography, thereby reducing the risk of selection bias related to knowledge of exact coronary anatomy [10].

According to the American Heart Association, an estimated 600,000 stents are placed per year, costing upwards of $12 billion per year [13]. Although the risks of stroke, myocardial infarction, kidney damage, and life-threatening allergic reactions are low in the setting of contemporary technique, adjunctive medical therapy, and newer-generation drug-eluting stents, when these events do occur they are serious and sometimes fatal. After a stent is placed, there is also a possibility that it will occlude over time and the patient will need repeat revascularization. Additionally, patients need to be placed on anti-platelet medication for a period of time, which can significantly increase the chance of adverse bleeding events, and there is still uncertainty over the optimal duration of therapy. Studies such as PRODIGY* show no difference between 6 months and 2 years of dual-antiplatelet therapy after PCI, while studies such as DAPT* are currently ongoing [14].

In summary, the jury is still out over the best course of treatment for patients with stable angina, especially those categorized as high-risk based on imaging. Current guidelines suggest revascularization in patients with SIHD who have severe angina refractory to medical treatment, while acknowledging that PCIs do not improve survival in non-ACS settings. Trials such as ISCHEMIA will help guide the management of patients with stable angina and evidence of moderate to severe ischemia on physiologic studies, and ultimately, the decision to stent or not to stent in SIHD will have significant repercussions on the landscape of health care, and on the management of millions of Americans.

*Clinical Trial Acronyms:

COURAGE: Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation

BARI 2D: Bypass Angioplasty Revascularization Investigation 2 Diabetes

ISCHEMIA: International Study of Comparative Health Effectiveness with Medical and Invasive Approaches

PRODIGY: Prolonging Dual Antiplatelet Treatment after Grading Stent-Induced Intimal Hyperplasia

DAPT: Dual Antiplatelet Therapy Study

Anish Vani is a 4th year medical student at NYU School of Medicine

Peer reviewed by Binita Shah, MD, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Lloyd-Jones D, Adams RJ, Brown TM, et al. American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Executive summary: heart disease and stroke statistics—2010 update: a report from the American Heart Association. Circulation. 2010;121(7):948-954.  http://www.ncbi.nlm.nih.gov/pubmed/20177011

2. Boden WE, O’Rourke RA, Teo KK, et al. COURAGE Trial Research Group. Optimal medical therapy with or without PCI for stable coronary disease. N Engl J Med. 2007;356(15):1503-1516.  http://www.ncbi.nlm.nih.gov/pubmed/17387127

3. BARI 2D Study Group, Frye RL, August P, Brooks MM, et al. A randomized trial of therapies for type 2 diabetes and coronary artery disease. N Engl J Med. 2009;360(24):2503-2515. http://www.ncbi.nlm.nih.gov/pubmed/19502645

4. Shah B, Srinivas VS, Lu J, et al. Change in enrollment patterns, patient selection, and clinical outcomes with the availability of drug-eluting stents in the Bypass Angioplasty Revascularization Investigation 2 Diabetes trial. Am Heart J. 2013;166(3):519-526.  http://www.ncbi.nlm.nih.gov/m/pubmed/24016502/

5. Stergiopoulos K, Brown DL. Initial coronary stent implantation with medical therapy vs medical therapy alone for stable coronary artery disease: meta-analysis of randomized controlled trials. Arch Intern Med. 2012;172(4):312-319.  http://www.ncbi.nlm.nih.gov/pubmed/22371919

6. Bakalar N. No extra benefits are seen in stents for coronary artery disease. February 27, 2012. http://www.nytimes.com/2012/02/28/health/stents-show-no-extra-benefits-for-coronary-artery-disease.html?pagewanted=all. Accessed November 16, 2012.

7. Borden WB, Redberg RF, Mushlin AI, et al. Patterns and intensity of medical therapy in patients undergoing percutaneous coronary intervention. JAMA. 2011;305(18):1882-1889. http://jama.jamanetwork.com/article.aspx?articleid=899881

8. Bangalore S, Kumar S, Fusaro M, et al. Outcomes with various drug eluting or bare metal stents in patients with diabetes mellitus: mixed treatment comparison analysis of 22 844 patient years of follow-up from randomised trials. BMJ. 2012;345;e5170.  http://www.bmj.com/content/345/bmj.e5170

9. Palmerini T, Biondi-Zaccai G, Della Riva D, et al. Stent thrombosis with drug-eluting and bare-metal stents: evidence from a comprehensive network meta-analysis. Lancet. 2012;379(9824):1393-1402.  http://www.ncbi.nlm.nih.gov/pubmed/22445239

10. Executive Summary of the ISCHEMIA Trial. January 2012. https://www.ischemiatrial.org/for-physicians/Executive_Summary_ISCHEMIA_November_2011_Final.pdf.   Accessed November 16, 2012.

11. O’Keefe JH Jr, Bateman TM, Ligon RW, et al. Outcome of medical versus invasive treatment strategies for non-high-risk ischemic heart disease. J Nucl Cardiol. 1998;5(1):28-33.

12. Shaw LJ, Berman DS, Maron DJ, et al. Optimal medical therapy with or without percutaneous coronary intervention to reduce ischemic burden: results from the Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) trial nuclear substudy. Circulation. 2008;117(10):1283-1291.  http://www.ncbi.nlm.nih.gov/pubmed/18268144

13. Cadet J. AHA adjusts angioplasty stats to lower annual figure. Cardiovascular Business News. December 19, 2010. http://www.cardiovascularbusiness.com/index.php?option=com_articles&article=25634&publication=22&view=portals.   Accessed November 16, 2012.

14. Valgimigli M, Campo G, Monti M, et al. Short- versus long-term duration of dual-antiplatelet therapy after coronary stenting: a randomized multicenter trial. Circulation. 2012;125(16):2015-2026. http://circ.ahajournals.org/content/125/16/2015.long


Can we teach the immune system to fight cancer?

November 1, 2013

By Jenny Gartshteyn

Faculty Peer Reviewed

Since the start of vaccination, we have eradicated smallpox, driven polio to the brink of eradication, saved college kids from meningitis, averted flu epidemics, and decreased the incidence of HPV-related cervical cancer. But can we teach our immune systems to actively fight existing cancer?

Here’s the mechanism for an ideal anti-cancer vaccine:

With the growth and turnover of cancerous cells, cancer-specific tumor-associated antigens (TAAs) would be recognized and processed by professional antigen-presenting cells (APCs), such as dendritic cells and macrophages, which would present the antigens to T-cell receptors (TCRs) via MHC:TCR binding. Intracellular antigens would be presented via MHC class I molecules directly to cytotoxic CD8 cells, whereas extracellular and cell-membrane antigens would be presented via MHC class II molecules to CD4 helper cells. There are exceptions to this general rule: a minority of dendritic cell subtypes are capable of cross-presentation, by which extracellular antigens can be presented to CD8 cells via MHC class I (and vice versa). For example, as rapidly proliferating cancer cells undergo breakdown (autophagy), cell-membrane components that would normally be presented to CD4 cells via MHC class II are engulfed by phagosomes, which then fuse with and are processed by lysosomes, allowing these components to be presented to CD8 cells as well. Meanwhile, the cytotoxic CD8 response would be enhanced by the supporting CD4 helper response.

Here’s the reality:

Cancers are indeed “infiltrated with dendritic cells early in the course of disease – approximately 30% of node-negative early stage breast cancers have significant dendritic cell infiltration” [1]. The immune anti-tumor response may even be a prognostic indicator: in ER-/HER2- breast cancer, for example, each 10% increase in intratumoral lymphocytic infiltration correlates with a 17% risk reduction for relapse and a 27% risk reduction for death [2]. The problem, however, is that the majority of tumor-associated antigens are over-expressed products of normal cellular genes. As a result of negative thymic selection early in T-cell development, only mild-to-moderate-affinity MHC:TCR binding occurs, and thus a less-than-ideal cytotoxic T-cell response [3]. Furthermore, most tumor-associated antigens are of intracellular protein origin and thus presented via MHC class I to CD8 T-cells only, resulting in a brief cytotoxic response with little CD4 enhancement or antibody response and only a moderate-affinity CD8 response [1]. To complicate matters further, while Th1 CD4 cells augment the CD8 and macrophage response via cytokines like IL-2 and TNF-alpha, Th2 CD4 cells are less helpful and may even be detrimental in cancer immunotherapy. Th2 CD4 cells produce IL-4 (a regulator of B-cell expansion that can enhance cancer cell survival) as well as IL-13, which is less well understood but known to correlate with metastatic tumor spread [4]. In the world of cancer vaccines, therefore, the focus is on enhancing the CD8 and Th1 CD4 responses while down-regulating the Th2 CD4 response – but more on this later.
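
As an aside on the prognostic figures above: if one assumes, as a simplification of the reported per-10% estimates, that the effect compounds multiplicatively across increments, then a tumor with 30% more lymphocytic infiltration would carry a relative risk of relapse of roughly

\[ (1 - 0.17)^3 = 0.83^3 \approx 0.57, \]

that is, about a 43% relative risk reduction. The study itself reported adjusted hazard ratios, so this is an illustration rather than a re-analysis.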

The making of a cancer vaccine:

There are different approaches to tumor antigen delivery. Tumor antigens can be whole tumor cells (i.e., irradiated or lysed cancer cells from an autologous or allogeneic source, such as a previously resected tumor) or parts of a cell (full-length proteins). Alternatively, antigens can be specific peptides, which in turn can be loaded onto dendritic cells (DCs) in vivo using chimeric proteins made of anti-DC-receptor antibody, and subsequently re-infused into the patient. Finally, antigens can also be DNA or RNA strands transduced or transfected in a vector. For a more vigorous immune response to a vaccine, antigens can be combined with adjuvants, boosters known to augment immune system activation (common examples include general immune stimulants such as the BCG vaccine or nonspecific bacterial products, as well as more specific immune activators such as the granulocyte colony-stimulating factors). [5, 6]

Peptide vaccines are the most common, and one example is the gp100 peptide from a melanoma antigen. In 2011, a phase 3 randomized controlled trial compared the then-standard treatment for advanced melanoma, interleukin-2 (IL-2), with a gp100 peptide vaccine plus IL-2 [7]. A complete response was seen in 9% of the vaccine group versus 1% of the control group (p=0.02). Although the benefit was modest, the interesting finding was that on immunologic analysis, anti-peptide reactivity developed in none of the 12 control patients tested, compared with 7 of the 37 tested patients in the experimental group.
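
For perspective, the absolute difference in complete response rates translates into a number needed to treat (a standard calculation, not one reported by the trial) of roughly

\[ \text{NNT} = \frac{1}{0.09 - 0.01} = \frac{1}{0.08} \approx 13, \]

that is, about 13 patients would need to receive the vaccine plus IL-2 for one additional complete response.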

Why did only 7 of these 37 patients show an immunologic response? Several factors play a role in inducing, and maintaining, an immunologic response. First, a strong cytotoxic T-cell response must be induced. This is limited by self-tolerance and the difficulty of identifying an effective antigen. Furthermore, even if good activation of cytotoxic CD8 T-cells is achieved, this activity may not be enough for a sustained long-term response without the support of helper/memory T-cells. Animal studies have shown that CD8 activation via MHC class I-presented antigens did induce a direct antitumor effect, but these same CD8 cells were unable to sustain themselves in the absence of helper activity from CD4 Th cells, which in turn are activated via MHC class II antigens [1]. This brings us to the second point: induction of a CD4 Th1 helper response is important in maintaining anti-tumor CD8 T-cell responsiveness. Third, the counterregulatory T-cell response (such as the IL-4- and IL-13-producing Th2 cells) must be eliminated or suppressed. The failure of the gp100 peptide to induce and maintain a good cytotoxic response may therefore stem from failure to induce direct CD8 activation, failure to boost the cytotoxic response with a helper CD4 response, or interference by regulatory cells and cytokines.

So, what do we have so far that works?

An alternative approach by Kantoff et al. resulted in the first FDA-approved anti-cancer vaccine, sipuleucel-T, approved for castration-resistant, metastatic prostate cancer [8]. The vaccine consists of blood cells collected from individual patients by leukapheresis, fused with a protein called PA2024 (a combination of a prostate antigen, prostatic acid phosphatase, and granulocyte-macrophage colony-stimulating factor [GM-CSF]), and re-infused. The phase 3 trial involved 512 men with metastatic, castration-resistant prostate cancer and showed a relative risk reduction for mortality of 22% (p=0.03). However, the validity of the trial was subsequently questioned [9]. Specifically, it was noted that patients older than 65 years in the placebo arm had a higher-than-predicted mortality, and that it was this age group that contributed to the significantly improved mortality in the vaccine arm, whereas patients younger than 65 did not significantly benefit from the vaccine. It was therefore suggested that, while leukapheresis removed the majority of mononuclear cells in both arms, only the vaccine group received cells back together with GM-CSF, and that the ultimate difference in mortality may have been driven by an induced “immunodeficient” state in the control patients rather than by a benefit from the vaccine itself. In support of this theory is the disconcerting finding that, at the end of the trial, T-cell and B-cell reactivity against the original PA2024 antigen was only 28% and 27%, respectively, suggesting a nonspecific stimulating effect of GM-CSF with little selection for the original peptide antigen.

A similar concept of activating autologous immune cells against cancer is adoptive transfer of tumor-infiltrating lymphocytes (TILs). Simply put, autologous TILs are obtained from the patient’s tumor, expanded in vitro, and re-infused into the patient after a course of lymphodepleting chemotherapy. In metastatic melanoma, response rates range from 40-50%, with a small subpopulation achieving complete remission, although overall 3-year survival in those who do not achieve complete remission remains low at 36% [10, 11]. An alternative to TIL therapy is genetically engineered lymphocytes selected against a specific tumor antigen. Early phase 1 trials in metastatic colon cancer and melanoma have shown clinical benefit, but have also raised concerns about significant side effects when normal tissue is attacked [12].

Approaching cancer immunotherapy from a different angle are attempts to block inhibitory T-cell checkpoints, thereby allowing a more potent, less regulated cytotoxic T-cell response. Examples of such regulatory molecules are CTLA-4 (cytotoxic T-lymphocyte-associated antigen 4) and PD-1 (programmed death 1), both inhibitory receptors expressed on T-cells and involved in down-regulating T-cell activation [13]. In 2010, a monoclonal antibody blocking CTLA-4 (ipilimumab) was shown to improve survival by four months in patients with unresectable stage III or IV melanoma [14]. More recently, in July 2013, two phase 1 clinical trials showed that monotherapy with an anti-PD-1 antibody, as well as combination therapy with anti-CTLA-4 and anti-PD-1, can induce an objective response in up to 50% of patients [15, 16].

Summary:

The concept of boosting the immune system’s innate ability to recognize and destroy cancer cells is ambitious and fraught with difficulty, but perhaps not impossible. Ongoing research to improve immune therapies ranges from molecular activation of immune cell receptors (e.g., CD40) that mediate an overall pro-inflammatory response [17] to the use of ionizing radiation to induce a pro-inflammatory cell-injury response as a booster for antigen recognition and immune system activation [18]. So although we cannot yet use vaccines to eliminate cancer the way we eliminated smallpox, we are, one step at a time, making advances in that direction.

Dr. Jenny Gartshteyn is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Sylvia Adams, Associate Professor, Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons (A – normal cell division, B – cancer cell division; 1 – apoptosis; 2 – damaged cell. From the National Cancer Institute)

References:

1. Knutson, K.L. and M.L. Disis, Augmenting T helper cell immunity in cancer. Curr Drug Targets Immune Endocr Metabol Disord, 2005. 5(4): p. 365-71.  http://www.ncbi.nlm.nih.gov/pubmed/16375690

2. Loi, S., et al., Prognostic and predictive value of tumor-infiltrating lymphocytes in a phase III randomized adjuvant breast cancer trial in node-positive breast cancer comparing the addition of docetaxel to doxorubicin with doxorubicin-based chemotherapy: BIG 02-98. J Clin Oncol, 2013. 31(7): p. 860-7.  http://www.ncbi.nlm.nih.gov/pubmed/23341518

3. Durrant, L.G. and J.M. Ramage, Development of cancer vaccines to activate cytotoxic T lymphocytes. Expert Opin Biol Ther, 2005. 5(4): p. 555-63.  http://www.ncbi.nlm.nih.gov/pubmed/15934833

4. Hallett, M.A., K.T. Venmar, and B. Fingleton, Cytokine stimulation of epithelial cancer cells: the similar and divergent functions of IL-4 and IL-13. Cancer Res, 2012. 72(24): p. 6338-43.  http://www.ncbi.nlm.nih.gov/pubmed/23222300

5. Renno, T., et al., What’s new in the field of cancer vaccines? Cell Mol Life Sci, 2003. 60(7): p. 1296-310.

6. Palucka, K., H. Ueno, and J. Banchereau, Recent developments in cancer vaccines. J Immunol, 2011. 186(3): p. 1325-31. http://www.ncbi.nlm.nih.gov/pubmed/21248270

7. Schwartzentruber, D.J., et al., gp100 peptide vaccine and interleukin-2 in patients with advanced melanoma. N Engl J Med, 2011. 364(22): p. 2119-27.  http://www.ncbi.nlm.nih.gov/pubmed/21631324

8. Kantoff, P.W., et al., Sipuleucel-T immunotherapy for castration-resistant prostate cancer. N Engl J Med, 2010. 363(5): p. 411-22.  http://www.ncbi.nlm.nih.gov/pubmed/20818862

9. Huber, M.L., et al., Interdisciplinary critique of sipuleucel-T as immunotherapy in castration-resistant prostate cancer. J Natl Cancer Inst, 2012. 104(4): p. 273-9.  http://jnci.oxfordjournals.org/content/early/2012/01/09/jnci.djr514.full

10. Besser, M.J., et al., Adoptive Transfer of Tumor Infiltrating Lymphocytes in Metastatic Melanoma Patients: Intent-to-Treat Analysis and Efficacy after Failure to Prior Immunotherapies. Clin Cancer Res, 2013.

11. Rosenberg, S.A., et al., Durable complete responses in heavily pretreated patients with metastatic melanoma using T-cell transfer immunotherapy. Clin Cancer Res, 2011. 17(13): p. 4550-7.  http://www.ncbi.nlm.nih.gov/pubmed/21498393

12. Park, T.S., S.A. Rosenberg, and R.A. Morgan, Treating cancer with genetically engineered T cells. Trends Biotechnol, 2011. 29(11): p. 550-7.  http://www.ncbi.nlm.nih.gov/pubmed/21663987

13. Pardoll, D.M., The blockade of immune checkpoints in cancer immunotherapy. Nat Rev Cancer, 2012. 12(4): p. 252-64.  http://www.ncbi.nlm.nih.gov/pubmed/22437870

14. Hodi, F.S., et al., Improved survival with ipilimumab in patients with metastatic melanoma. N Engl J Med, 2010. 363(8): p. 711-23.

15. Hamid, O., et al., Safety and tumor responses with lambrolizumab (anti-PD-1) in melanoma. N Engl J Med, 2013. 369(2): p. 134-44.

16. Wolchok, J.D., et al., Nivolumab plus ipilimumab in advanced melanoma. N Engl J Med, 2013. 369(2): p. 122-33.

17. Zhang, B., et al., The CD40/CD40L system: A new therapeutic target for disease. Immunol Lett, 2013. 153(1-2): p. 58-61.  http://scibite.com/site/library/2013_7/1/0/23892087.html

18. Formenti, S.C. and S. Demaria, Combining radiotherapy and cancer immunotherapy: a paradigm shift. J Natl Cancer Inst, 2013. 105(4): p. 256-65.


Corticosteroids and Prophylaxis. What complications should you try to prevent in patients on chronic corticosteroids?

October 30, 2013

By Robert Joseph Fakheri, MD

Faculty Peer Reviewed

A 55-year-old man has recently been diagnosed with systemic sarcoidosis. The patient is started on prednisone 40mg daily, with the plan to decrease the dose after remission of symptoms, which may take a number of months. What kind of prophylaxis should the patient receive?

Corticosteroids are an effective treatment option for a number of diseases spanning many specialties. However, long-term corticosteroid treatment is marred by a number of side effects, including hypertension, hyperglycemia, weight gain, adrenal suppression, osteoporosis, peptic ulcer disease (PUD), and increased risk of infections [1,2]. In general, the risk of side effects has a direct relationship with dose and duration of treatment. In the past, the standard of care was to monitor for these effects and address them accordingly. But with our expanding armamentarium of medications and a growing literature, when is it appropriate to prevent complications before they even start? The most common targets of prophylaxis are the latter three: osteoporosis, PUD, and infections (specifically Pneumocystis jiroveci pneumonia, or PCP).

Starting with osteoporosis, the agents available for prophylaxis are calcium, vitamin D, bisphosphonates, and lastly teriparatide (an analogue of parathyroid hormone). The literature has been reviewed by the American College of Rheumatology (ACR), which issued guidelines with its recommendations [3]. First, per these recommendations, all patients should receive 1,200-1,500mg of calcium per day and 800-1,000IU of vitamin D per day, or enough to achieve a therapeutic level of 25-hydroxyvitamin D. Though there is some variation based on age and gender, the general consensus is that everyone should have a baseline dual-energy X-ray absorptiometry (DEXA) scan and that patients at high fracture risk (post-menopausal women, men older than 50 years, low DEXA scores) on a prednisone-equivalent dose of 7.5mg or more for at least 3 months should also be on a bisphosphonate. Very high-risk groups should be started on a bisphosphonate even with 5mg of prednisone for 1 month. A minimal sketch of this decision logic follows.
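
Sketched in Python, using only the thresholds quoted above (the actual ACR recommendations stratify risk in more detail by age, sex, and fracture-risk category, so this is illustrative rather than a clinical tool):

# Illustrative only -- the two boolean flags are a simplification of
# the ACR risk stratification described above.
def suggest_bisphosphonate(prednisone_mg_per_day, duration_months,
                           high_fracture_risk, very_high_risk):
    # All patients on chronic steroids also receive calcium 1,200-1,500mg/day,
    # vitamin D 800-1,000IU/day, and a baseline DEXA scan (not modeled here).
    if very_high_risk and prednisone_mg_per_day >= 5 and duration_months >= 1:
        return True
    if high_fracture_risk and prednisone_mg_per_day >= 7.5 and duration_months >= 3:
        return True
    return False

# Example: the vignette patient (prednisone 40mg, anticipated for months)
# would be flagged if he falls into a high-fracture-risk group.
print(suggest_bisphosphonate(40, 4, high_fracture_risk=True, very_high_risk=False))  # True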

Though calcium and vitamin D are widely encouraged, the actual evidence is fairly limited. One study of 62 patients (average prednisone dose 16-21mg) compared the combination of vitamin D 50,000IU weekly and calcium 1,000mg daily to placebo and found no difference in loss of bone mineral density (BMD) after 3 years of follow-up [4]. A similar study of 17 patients with inflammatory bowel disease found no benefit in BMD after 1 year (calcium 1,000mg plus vitamin D 250IU daily, average prednisone dose 12-14mg) [5]. Another study of 41 women on an average prednisone dose of 15mg/day compared 0.25-1ug/day of alfacalcidol (a vitamin D analogue) to 500mg/day of calcium over a period of 3 years and found that the former maintained BMD while patients on calcium alone lost BMD as early as 6 months into therapy [6]. Lastly, a study of 81 patients with systemic lupus erythematosus (SLE) on an average prednisone dose of 10-11mg compared calcium 1,200mg/day plus calcitriol 0.5ug/day with calcium 1,200mg/day alone and with placebo, and found a minimal increase in BMD in the combination group that was not significantly different from the other two groups over the 2-year study period [7]. It makes sense in this population to screen for vitamin D deficiency and treat when necessary, but routine supplementation has limited evidence. Although calcium and vitamin D are considered benign, they may be problematic in patients prone to hypercalcemia, such as those with sarcoidosis or other granulomatous diseases, and may interfere with the absorption of other medications, such as mycophenolate mofetil.

On the other hand, the data supporting bisphosphonates are more robust. One of the first studies randomized 477 patients receiving corticosteroids (average prednisone dose 9-10mg/day) for diagnoses across specialties to 48 weeks of alendronate 5mg, alendronate 10mg, or placebo [8]. All patients received 800-1,000mg of calcium and 250-500IU of vitamin D daily. The authors found that both bisphosphonate groups had increased BMD in the lumbar spine (+2-3%) and femoral neck (+1%), while the placebo group had decreased BMD in the lumbar spine (-0.4%) and femoral neck (-1.2%), p<0.01.

Different specialty groups have individually confirmed this benefit of bisphosphonates. In rheumatology, a study of 200 patients with rheumatic diseases compared alendronate 10mg to alfacalcidol over 18 months and found an increase in BMD of +2.1% in the bisphosphonate group compared with a loss of -1.9% in the alfacalcidol group, a net difference of 4% [9]. In dermatology, 29 patients with immunobullous disease were randomized to alendronate or placebo for 12 months; BMD increased in the treatment group by +3.5% and +3.7% in the lumbar spine and femoral neck, respectively, while it decreased in the control group by -1.4% and -0.7% (p=0.01) [10]. In gastroenterology, 39 patients with ulcerative colitis were randomized to alendronate 5mg or alfacalcidol for 12 months; the investigators demonstrated an increase in lumbar spine BMD of +4.1% in the bisphosphonate group compared with +0.9% in the alfacalcidol group (p<0.0005), though smaller differences in femoral neck BMD did not reach statistical significance [11]. Lastly, in pulmonology, 30 patients with sarcoidosis were randomized to alendronate 5mg or placebo for 12 months, with a change in radial BMD of +0.8% in the treatment group compared with -4.5% in the placebo group (p<0.01) [12]. Although BMD is merely a surrogate marker for fracture risk, with inconsistent correlation, it is commonly used because of its ease of measurement and will likely remain the measuring stick until better data on fracture risk are available [13].

Moving on to PUD, there is a theoretical benefit to reducing gastric acidity with agents such as proton-pump inhibitors (PPIs) to prevent steroid-induced peptic ulcers. However, there is limited data on the subject, largely because, despite much notoriety, there is limited data even to show that corticosteroids cause peptic ulcers in the first place. Initial reports of an association date as far back as 1951, with case series and studies showing that corticosteroids increase gastric acidity [14]. Since then, a multitude of trials followed by meta-analyses have reviewed the topic. In 1976, a meta-analysis of 3558 patients from 26 prospective, randomized, double-blind, placebo-controlled trials found no difference in ulcer risk over placebo, with an absolute risk of about 1% [15]. Another meta-analysis, in 1983, challenged these findings and found a relative risk (RR) of 2.3 (CI 1.4-3.7) [16]. However, this analysis was critiqued primarily for its use of non-blinded studies and of suspected cases of PUD based on symptoms of severe dyspepsia [17].

Then, in 1991, a nested case-control study of 1415 patients found that ulcer risk was increased only in concurrent users of non-steroidal anti-inflammatory drugs (NSAIDs): compared with controls, the RR was 1.1 for corticosteroids alone, 4.4 for NSAIDs alone, and 14.6 for corticosteroids plus NSAIDs [18]. This suggested that corticosteroids do not cause ulcers by themselves, but may impair wound healing and thereby exacerbate ulcers caused by other etiologies. In 2001, a similar nested case-control study from the UK of 2105 cases of upper gastrointestinal complications found odds ratios (OR) of 1.8 for users of corticosteroids alone, 4.0 for users of NSAIDs alone, and 8.9 for users of both [19].
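
One way to read these numbers: if corticosteroids and NSAIDs acted independently, the combined relative risk would be expected to approximate the product of the individual risks. On that simplified multiplicative reading (an illustration, not a formal interaction analysis), the 1991 data suggest synergy, while the 2001 data are closer to independent effects:

\[ 1.1 \times 4.4 \approx 4.8 \ll 14.6, \qquad \text{whereas} \qquad 1.8 \times 4.0 = 7.2 \approx 8.9. \]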

Thus, the debate continues. It is clear that corticosteroid use in conjunction with NSAIDs is a major risk factor for PUD and should be avoided; when unavoidable, patients would likely benefit from acid-suppressive therapy. In fact, use of NSAIDs alone should warrant consideration of PPI prophylaxis, as some data show PPIs can reduce hospitalization for PUD complications of NSAID use by 67% [20]. Given the data that corticosteroids increase gastric acidity, this mechanism may contribute to symptoms of dyspepsia without mucosal breakdown; consequently, acid-suppressive therapy may relieve these symptoms. Given the lack of evidence of benefit, however, and the multiple potential complications of PPIs, including enteric infections such as Clostridium difficile colitis, nutrient malabsorption, pneumonia, gastrointestinal neoplasms, acute interstitial nephritis, and gallbladder dyskinesia [21], the role of PPIs should largely be reserved for treatment rather than prophylaxis of gastrointestinal complications of corticosteroids.

Lastly is the topic of PCP prophylaxis. While there are clear data and guidelines for patients with acquired immune deficiency syndrome (AIDS), the data are not as clear for other states of immunosuppression. Although the first-line agent, trimethoprim-sulfamethoxazole (TMP-SMX), is very effective at preventing PCP, it comes with its own consequences, including adverse drug reactions, cost, and risk of antibiotic resistance.

As before, the risk of PCP correlates with the dose and duration of corticosteroid use, but with PCP there is the added variable of the relative immune dysregulation caused by the underlying disease being treated. One retrospective study analyzed 116 PCP patients without AIDS and compared them by underlying disease process, including hematologic malignancy, solid tumors, organ transplants, inflammatory diseases, and other. Among patients with inflammatory diseases, the median dose of prednisone at the time of diagnosis was 40mg (range 12-100, interquartile range 20-56) with a median duration of 16 weeks (range 5-672, interquartile range 8-60) [22]. Another retrospective study of 15 patients with SLE and PCP compared with matched controls found that patients who developed PCP had higher doses of prednisone (49 vs. 20mg), lower total lymphocyte counts (1040 vs. 1842 cells/mm3), and lower CD4 lymphocyte counts (156 vs. 276 cells/mm3). The authors suggested prophylaxis when the total lymphocyte count is less than 750 or the CD4 count less than 200 [23].

CD4 counts thus appear to be a useful tool in assessing risk, but other factors, such as lung architecture, also contribute. In a retrospective study of 74 patients with interstitial lung disease on corticosteroids, 7 patients developed PCP. The mean prednisone dose at the time of diagnosis was 37mg, with a mean duration of 10 weeks. CD4 counts ranged from 59 to 836, with a mean of 370 [24]. The authors argued that, because of their underlying lung disease, these patients were at higher risk for PCP and became infected at higher CD4 counts than patients with other underlying diseases.

In some cases, such as bone marrow transplant recipients, PCP prophylaxis is recommended regardless of corticosteroid use [25]. A meta-analysis of transplant patients and patients with hematologic malignancies estimated the number needed to harm from prophylaxis at 32, and thereby recommended prophylaxis for patients with a PCP risk of 3.5% or higher [26]. This risk is difficult to determine without good-quality studies of specific patient populations, and it will clearly be modified by individual co-morbidities and concurrent therapies. Patients who meet this criterion include those with transplants, acute lymphoblastic leukemia, severe combined immunodeficiency syndrome, and Wegener’s granulomatosis [26].
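
The 3.5% cutoff follows from a standard number-needed-to-treat argument: prophylaxis is net-beneficial when the number needed to prevent one case of PCP falls below the number needed to harm. Taking the efficacy of prophylaxis as roughly a 90% reduction in PCP occurrence (the exact figure is approximated here), the break-even baseline risk is

\[ \frac{1}{\text{risk} \times 0.9} < 32 \quad\Longrightarrow\quad \text{risk} > \frac{1}{32 \times 0.9} \approx 3.5\%. \]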

Using the above metric as a guide, routine prophylaxis is not indicated in the average patient with skin disease requiring immunosuppressive therapy. In one study of 198 dermatology patients on immunosuppressive therapy, only 0.7% of at-risk patients developed PCP. The majority of patients (79%) were on corticosteroids, either alone or in conjunction with other agents, with a median duration of 28.5 months (average dosage unavailable) [27].

Multiple authors have recommended prophylaxis in patients with an underlying immunologic disorder or malignancy receiving a prednisone-equivalent daily dose of 20mg or more for at least 1 month [28,29], but the current data suggest that this one-size-fits-all approach may subject many patients unnecessarily to prophylaxis while failing to protect a number of patients at significant risk for PCP. The decision for PCP prophylaxis ultimately rests on a clinician’s assessment of multiple variables: not only corticosteroid dose and duration, but also the underlying disease and co-morbidities that may affect immune function and lung architecture, concurrent therapies, and measurements of total lymphocyte and CD4 counts. One way the published heuristics might be combined is sketched below.
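
Purely as an illustration (a screening flag, not a validated rule), the cutoffs quoted in this section can be collected into a single heuristic. Requiring both steroid exposure and lymphopenia, and grouping the high-risk diagnoses into one flag, are simplifying assumptions of this sketch, not published criteria:

# Aggregates the heuristics cited above: prednisone >=20mg for >=1 month,
# total lymphocytes <750/mm3 or CD4 <200/mm3, and the high-risk diagnoses
# named in the text (transplant, acute lymphoblastic leukemia, severe
# combined immunodeficiency, Wegener's granulomatosis, or structural lung
# disease, which may raise risk even at higher CD4 counts).
def flag_for_pcp_prophylaxis(prednisone_mg_per_day, duration_weeks,
                             total_lymphocytes, cd4_count,
                             high_risk_disease):
    on_significant_steroids = prednisone_mg_per_day >= 20 and duration_weeks >= 4
    lymphopenic = total_lymphocytes < 750 or cd4_count < 200
    return high_risk_disease or (on_significant_steroids and lymphopenic)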

In summary, there is fairly strong evidence for DEXA screening and bisphosphonate use for osteoporosis prevention in most patients on chronic corticosteroids (particularly patients over age 50 on a prednisone-equivalent dose of 7.5mg or more for at least 3 months), hardly any evidence for acid-suppressive therapy for PUD prevention, and a need for case-by-case risk assessment for antibiotic prophylaxis against PCP.

Despite this evidence, there appears to be a lack of awareness in the medical community. A study of 360 physicians in the Czech Republic (100 from gastroenterology, 100 from general practice, 80 from pulmonology/immunology, and 80 from neurology/neurosurgery) found that 82% of physicians (61% of gastroenterologists) believed that corticosteroids significantly increase the risk of PUD, and 75% of physicians (55% of gastroenterologists) believed that gastroprotective therapy was appropriate for patients on systemic corticosteroids alone, without concurrent NSAID therapy [30]. Moreover, despite the evidence for bisphosphonates, they are still not widely used. One study in the United Kingdom assessed adherence to guidelines by rheumatologists and found that about half did not order a DEXA scan when it was indicated [31]. Another study, in an urban multispecialty rheumatology practice, found that only 39% of patients with rheumatoid arthritis on chronic corticosteroids received the recommended DEXA screening and treatment/prophylaxis according to ACR guidelines. The dermatology literature is similar: among patients referred to a tertiary center on chronic oral corticosteroids for a median duration of 6 months, only 20% had received bisphosphonates [32]. The reason for this discrepancy between evidence and practice is unclear, but it is likely due in part to limited awareness of the scientific literature, as demonstrated in the survey above. Treating physicians may also have varying comfort levels in prescribing different medications (e.g., high comfort with PPIs leading to over-prescription, and low comfort with bisphosphonates leading to under-prescription). Thus, across specialties, there is a need to increase awareness and thereby increase the use of screening DEXA scans, increase the use of bisphosphonates, and decrease the use of acid-suppressive therapy in patients on chronic corticosteroids.

Dr. Robert Joseph Fakheri is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Peter Izmirly, MD, Division of Rheumatology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Buchman AL. Side effects of corticosteroid therapy. J Clin Gastroenterol. 2001;33(4):289-294. http://www.ncbi.nlm.nih.gov/pubmed/11588541

2. Huscher D, Thiele K, Gromnica-Ihle E, et al. Dose-related patterns of glucocorticoid-induced side effects. Ann Rheum Dis. 2009;68(7):1119-1124. http://www.ncbi.nlm.nih.gov/pubmed/18684744

3. Grossman JM, Gordon R, Ranganath VK, et al. American college of rheumatology 2010 recommendations for the prevention and treatment of glucocorticoid-induced osteoporosis. Arthritis Care Res (Hoboken). 2010;62(11):1515-1526. http://www.ncbi.nlm.nih.gov/pubmed/20662044

4. Adachi JD, Bensen WG, Bianchi F, et al. Vitamin d and calcium in the prevention of corticosteroid induced osteoporosis: A 3 year followup. J Rheumatol. 1996;23(6):995-1000. http://www.ncbi.nlm.nih.gov/pubmed/8782129

5. Bernstein CN, Seeger LL, Anton PA, et al. A randomized, placebo-controlled trial of calcium supplementation for decreased bone density in corticosteroid-using patients with inflammatory bowel disease: A pilot study. Aliment Pharmacol Ther. 1996;10(5):777-786. http://www.ncbi.nlm.nih.gov/pubmed/8899087

6. Lakatos P, Nagy Z, Kiss L, et al. Prevention of corticosteroid-induced osteoporosis by alfacalcidol. Z Rheumatol. 2000;59 Suppl 1:48-52.   http://www.ncbi.nlm.nih.gov/pubmed/10769437

7. Lambrinoudaki I, Chan DT, Lau CS, Wong RW, Yeung SS, Kung AW. Effect of calcitriol on bone mineral density in premenopausal chinese women taking chronic steroid therapy. A randomized, double blind, placebo controlled study. J Rheumatol. 2000;27(7):1759-1765. http://www.ncbi.nlm.nih.gov/pubmed/10914864

8. Saag KG, Emkey R, Schnitzer TJ, et al. Alendronate for the prevention and treatment of glucocorticoid-induced osteoporosis. Glucocorticoid-induced osteoporosis intervention study group. N Engl J Med. 1998;339(5):292-299. http://www.ncbi.nlm.nih.gov/pubmed/9682041

9. de Nijs RN, Jacobs JW, Lems WF, et al. Alendronate or alfacalcidol in glucocorticoid-induced osteoporosis. N Engl J Med. 2006;355(7):675-684. http://www.ncbi.nlm.nih.gov/pubmed/16914703

10. Tee SI, Yosipovitch G, Chan YC, et al. Prevention of glucocorticoid-induced osteoporosis in immunobullous diseases with alendronate: A randomized, double-blind, placebo-controlled study. Arch Dermatol. 2012;148(3):307-314. http://www.ncbi.nlm.nih.gov/pubmed/22105813

11. Kitazaki S, Mitsuyama K, Masuda J, et al. Clinical trial: Comparison of alendronate and alfacalcidol in glucocorticoid-associated osteoporosis in patients with ulcerative colitis. Aliment Pharmacol Ther. 2009;29(4):424-430. http://www.ncbi.nlm.nih.gov/pubmed/19035979

12. Gonnelli S, Rottoli P, Cepollaro C, et al. Prevention of corticosteroid-induced osteoporosis with alendronate in sarcoid patients. Calcif Tissue Int. 1997;61(5):382-385. http://www.ncbi.nlm.nih.gov/pubmed/9351879

13. Divittorio G, Jackson KL, Chindalore VL, Welker W, Walker JB. Examining the relationship between bone mineral density and fracture risk reduction during pharmacologic treatment of osteoporosis. Pharmacotherapy. 2006;26(1):104-114. http://www.ncbi.nlm.nih.gov/pubmed/16506352

14. Gray SJ, Benson JA, Jr., Reifenstein RW. Chronic stress and peptic ulcer. I. Effect of corticotropin (acth) and cortisone on gastric secretion. J Am Med Assoc. 1951;147(16):1529-1537. http://www.ncbi.nlm.nih.gov/pubmed/14873720

15. Conn HO, Blitzer BL. Nonassociation of adrenocorticosteroid therapy and peptic ulcer. N Engl J Med. 1976;294(9):473-479. http://www.ncbi.nlm.nih.gov/pubmed/173997

16. Messer J, Reitman D, Sacks HS, Smith H, Jr., Chalmers TC. Association of adrenocorticosteroid therapy and peptic-ulcer disease. N Engl J Med. 1983;309(1):21-24. http://www.ncbi.nlm.nih.gov/pubmed/6343871

17. Conn HO, Poynard T. Adrenocorticosteroid therapy and peptic-ulcer disease. N Engl J Med. 1984;310(3):201-202. http://www.ncbi.nlm.nih.gov/pubmed/6690935

18. Piper JM, Ray WA, Daugherty JR, Griffin MR. Corticosteroid use and peptic ulcer disease: Role of nonsteroidal anti-inflammatory drugs. Ann Intern Med. 1991;114(9):735-740. http://www.ncbi.nlm.nih.gov/pubmed/2012355

19. Hernandez-Diaz S, Rodriguez LA. Steroids and risk of upper gastrointestinal complications. Am J Epidemiol. 2001;153(11):1089-1093. http://www.ncbi.nlm.nih.gov/pubmed/11390328

20. Vonkeman HE, Fernandes RW, van der Palen J, van Roon EN, van de Laar MA. Proton-pump inhibitors are associated with a reduced risk for bleeding and perforated gastroduodenal ulcers attributable to non-steroidal anti-inflammatory drugs: A nested case-control study. Arthritis Res Ther. 2007;9(3):R52. http://www.ncbi.nlm.nih.gov/pubmed/17521422

21. Cote GA, Howden CW. Potential adverse effects of proton pump inhibitors. Curr Gastroenterol Rep. 2008;10(3):208-214. http://www.ncbi.nlm.nih.gov/pubmed/18625128

22. Yale SH, Limper AH. Pneumocystis carinii pneumonia in patients without acquired immunodeficiency syndrome: Associated illness and prior corticosteroid therapy. Mayo Clin Proc. 1996;71(1):5-13. http://www.ncbi.nlm.nih.gov/pubmed/8538233

23. Lertnawapan R, Totemchokchyakarn K, Nantiruj K, Janwityanujit S. Risk factors of pneumocystis jeroveci pneumonia in patients with systemic lupus erythematosus. Rheumatol Int. 2009;29(5):491-496. http://www.ncbi.nlm.nih.gov/pubmed/18828021

24. Enomoto T, Azuma A, Matsumoto A, et al. Preventive effect of sulfamethoxasole-trimethoprim on pneumocystis jiroveci pneumonia in patients with interstitial pneumonia. Intern Med. 2008;47(1):15-20. http://www.ncbi.nlm.nih.gov/pubmed/18175999

25. Dykewicz CA. Summary of the guidelines for preventing opportunistic infections among hematopoietic stem cell transplant recipients. Clin Infect Dis. 2001;33(2):139-144. http://www.ncbi.nlm.nih.gov/pubmed/11418871

26. Green H, Paul M, Vidal L, Leibovici L. Prophylaxis of pneumocystis pneumonia in immunocompromised non-hiv-infected patients: Systematic review and meta-analysis of randomized controlled trials. Mayo Clin Proc. 2007;82(9):1052-1059. http://www.ncbi.nlm.nih.gov/pubmed/17803871

27. Lehman JS, Kalaaji AN. Role of primary prophylaxis for pneumocystis pneumonia in patients treated with systemic corticosteroids or other immunosuppressive agents for immune-mediated dermatologic conditions. J Am Acad Dermatol. 2010;63(5):815-823. http://www.ncbi.nlm.nih.gov/pubmed/20643496

28. Sepkowitz KA. Pneumocystis carinii pneumonia without acquired immunodeficiency syndrome: Who should receive prophylaxis? Mayo Clin Proc. 1996;71(1):102-103. http://www.ncbi.nlm.nih.gov/pubmed/8538221

29. Worth LJ, Dooley MJ, Seymour JF, Mileshkin L, Slavin MA, Thursky KA. An analysis of the utilisation of chemoprophylaxis against pneumocystis jirovecii pneumonia in patients with malignancy receiving corticosteroid therapy at a cancer hospital. Br J Cancer. 2005;92(5):867-872. http://www.ncbi.nlm.nih.gov/pubmed/15726101

30. Martinek J, Hlavova K, Zavada F, et al. “A surviving myth”–corticosteroids are still considered ulcerogenic by a majority of physicians. Scand J Gastroenterol. 2010;45(10):1156-1161. http://www.ncbi.nlm.nih.gov/pubmed/20569095

31. Wall E, Walker-Bone K. Use of bisphosphonates and dual-energy x-ray absorptiometry scans in the prevention and treatment of glucocorticoid-induced osteoporosis in rheumatology. QJM. 2008;101(4):317-323. http://www.ncbi.nlm.nih.gov/pubmed/18270228

32. Liu RH, Albrecht J, Werth VP. Cross-sectional study of bisphosphonate use in dermatology patients receiving long-term oral corticosteroid therapy. Arch Dermatol. 2006;142(1):37-41. http://www.ncbi.nlm.nih.gov/pubmed/16415384


From The Archives: Creatine Kinase: How Much is Too Much?

October 24, 2013

Please enjoy this post from the archives dated November 3, 2010

By Jon-Emile Kenny, MD

Faculty Peer Reviewed

A 37-year-old man with no past medical history, taking finasteride for male pattern baldness, is admitted to Medicine with profound lower extremity weakness after a weekend of performing multiple quadriceps exercises. His measured creatine phosphokinase (CPK) is over 35,000 IU/L. I wonder to myself: what is the risk to his kidneys, and can I mitigate the damage?

Rhabdomyolysis means destruction of striated muscle. Physical manifestations range from an asymptomatic illness with an elevation in the CPK level, to a life-threatening condition associated with extreme elevations in CPK, electrolyte imbalances, disseminated intravascular coagulation (DIC), and acute kidney injury (AKI)[1].

CPK elevations are frequently classified as mild, moderate, or severe. These classifications roughly correspond to less than 10 times the upper limit of normal (under about 2,000 IU/L), 10 to 50 times the upper limit of normal (about 2,000 to 10,000 IU/L), and greater than 50 times the upper limit of normal (greater than about 10,000 IU/L), respectively [2]. The risk of renal failure increases above 5,000 to 6,000 IU/L [2]. Interestingly, one series found that only patients with a peak CPK greater than 20,000 IU/L failed to respond to diuresis and required dialysis [3].
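
As a quick sketch of this classification (assuming an upper limit of normal of roughly 200 IU/L, consistent with the cutoffs quoted above):

def classify_cpk(cpk_iu_per_l, uln=200.0):
    # Rough cutoffs quoted above: <10x ULN mild, 10-50x ULN moderate,
    # >50x ULN severe.
    if cpk_iu_per_l < 10 * uln:      # under ~2,000 IU/L
        return "mild"
    elif cpk_iu_per_l <= 50 * uln:   # ~2,000-10,000 IU/L
        return "moderate"
    else:                            # over ~10,000 IU/L
        return "severe"

print(classify_cpk(35000))  # "severe" -- e.g., the patient in the vignette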

No studies have defined a normal range of CPK levels following exercise, and the incidence of renal failure does not correlate with CPK levels alone. After triathlons, athletes may have CPK elevations greater than 20,000 IU/L without any renal compromise [2]. A review of 35 patients with exercise-induced rhabdomyolysis, with an average admission CPK level of 40,000 IU/L, revealed no cases of acute renal failure [4]. The risk of renal failure increases with co-morbid conditions such as sepsis, dehydration, and acidosis [5].

Hypovolemia and aciduria are felt to be the key pathophysiologic events leading to acute kidney injury in the setting of muscle breakdown. Damage to the kidneys is mediated by heme proteins, chiefly myoglobin, released from injured muscle [6]. There are four converging pathways by which heme proteins harm the kidneys: 1) renal vasoconstriction; 2) cytokine activation; 3) precipitation with Tamm-Horsfall protein at an acid pH, with subsequent cast nephropathy; and 4) acid-sensitive renal free-radical production [6]. Because many liters of fluid are sequestered in injured muscle, patients with rhabdomyolysis are profoundly volume depleted. Consequently, homeostatic mechanisms such as the renin-angiotensin-aldosterone and vasopressin systems are activated, leading to renal vasoconstriction. Various cytokines induced in rhabdomyolysis have been shown to have similar effects on renal perfusion [6]. Because myoglobin becomes concentrated in the presence of aciduria, it precipitates with Tamm-Horsfall protein and also induces free-radical production [7]. Given these mechanisms of acute kidney injury, evidence of CPK elevation should prompt attempts to protect the kidneys. Treatment should include reversal of fluid deficits, with or without urinary alkalinization.

Reversal of hypovolemia with copious amounts of intravenous (IV) normal saline, with individualized urine output goals, is the mainstay of therapy [7]. While no prospective clinical trials have proven the efficacy of volume resuscitation, retrospective analyses support its use [6]. In one study, investigators compared the clinical outcomes of two groups of patients who developed crush syndrome during building collapses. All seven patients in the group that had IV fluids delayed for more than six hours required dialysis, whereas none of the seven patients with similar injuries in the group that received IV fluids at the time of extrication developed acute renal failure [8].

Despite the protective effects of urinary alkalinization in experimental models of heme-protein nephrotoxicity [9], and similarly positive reports from various case series [10], evidence from randomized controlled trials is lacking. A retrospective study of 24 patients demonstrated that augmentation with mannitol and bicarbonate may have no benefit over aggressive fluid resuscitation with saline alone [11]. Further, Brown and colleagues retrospectively identified patients with trauma-induced renal failure and CPK levels greater than 5,000 IU/L. Roughly 40% of these patients received mannitol and bicarbonate with fluid resuscitation, while the remainder received saline alone. No significant differences in the incidence of dialysis or in mortality were observed between the two groups [12]. Nevertheless, large-volume saline repletion without alkalinization raises the risk of hyperchloremic acidosis and may perpetuate kidney injury. In a recent NEJM review, Bosch et al. specifically recommend both normal saline and sodium bicarbonate in patients with metabolic acidosis [13]. Importantly, studies comparing saline versus saline plus urinary alkalinization are complicated by variable definitions of renal failure (e.g., creatinine >2.0 mg/dL versus need for dialysis), large variations in study design and patient selection, small numbers of patients, and inconsistent intervals between injury and treatment [14].

In summary, renal injury from high serum CPK becomes a true concern when CPK levels reach 5,000 IU/L and the patient has serious co-morbid disease such as volume depletion, sepsis, or acidosis. Otherwise, values of up to 20,000 IU/L may be tolerated without untoward event. The key pathophysiologic events are volume depletion and aciduria, which should be corrected immediately, primarily with ample IV normal saline and secondarily with urinary alkalinization. As our patient was young and healthy, he was given IV normal saline only, with a goal of 200 cc per hour of urine output, until his CPK trended below 6,000 IU/L. He was counseled on appropriate exercise routines and urged to stop his 5-alpha reductase inhibitor, as this class of drugs has been associated with rhabdomyolysis. He did not experience any renal injury, and his weakness improved. He was discharged home 36 hours after admission.

Editorial comment:

Studies have suggested that there is a limited window in which to prevent renal injury, perhaps as little as 6 hours after rhabdomyolysis occurs. Patients should always have their extracellular volume repleted after third-spacing of plasma volume into injured muscle, which is attributed in part to the osmotic effects of local proteolysis. However, if kidney injury is already established, continuing to force IV fluids into a patient with renal failure may lead to volume overload and pulmonary edema. This same limitation may explain why alkalinization is of unproven benefit: it is difficult to get bicarbonate into the urine if the GFR is low. If urine pH fails to rise after volume repletion is achieved, the risks of continued sodium bicarbonate administration far outweigh the small chance of benefit at that late point.

Of note, third-spacing into muscle may lead to compartment syndrome with compression of arteries and nerves; surgical consultation and measurement of compartment hydrostatic pressure are sometimes needed, though the risks and benefits of fasciotomy are debated.

Dr. Kenny is a chief resident in internal medicine at NYU Langone Medical Center

Peer reviewed by David Goldfarb, MD, Professor of Medicine, Department of Medicine (Nephrology), NYU Langone Medical Center and Chief of Nephrology at the Department of Veterans Affairs New York Harbor.

Image (model of finasteride) courtesy of Wikimedia Commons.

References:

(1) Huerta-Alardín et al. Bench-to-bedside review: Rhabdomyolysis – an overview for Clinicians. Critical Care April 2005 Vol 9 No 2. 158 – 169. http://www.biomedcentral.com/content/pdf/cc2978.pdf

(2) Latham and Nichols. How Much can Exercise Raise the CK level – and does it matter? The Journal of Family Practice. Vol: 57 (8) 545-546.

(3) Eneas et al. The effect of infusion of mannitol–sodium bicarbonate on the clinical course of myoglobinuria. Arch Intern Med 1979;139(7):801-5. http://archinte.ama-assn.org/cgi/reprint/139/7/801.pdf

(4) Sinert et al. Exercise-induced rhabdomyolysis. Ann Emerg Med. 1994 Jun;23(6):1301-6. http://www.charlydmiller.com/LIB04/1994exerciserhabdo.html

(5) Ward MM. Factors predictive of acute renal failure in rhabdomyolysis. Arch Intern Med 1988;148:1553-7.

(6) Bagley et al. Rhabdomyolysis. Intern Emerg Med. 2007 Oct;2(3):210-8

(7) Zager R: Rhabdomyolysis and myohemoglobinuric acute renal failure. Kidney Int 1996, 49:314-326.

(8) Ron et al. Prevention of acute renal failure in traumatic rhabdomyolysis. Arch Intern Med 144:277—280, 1984. http://archinte.ama-assn.org/cgi/content/abstract/144/2/277

(9) Salahudeen et al. Synergistic renal protection by combining alkaline-diuresis with lipid peroxidation inhibitors in rhabdomyolysis: possible interaction between oxidant and nonoxidant mechanisms. Nephrol Dial Transplant 1996; 11(4):635–42.

(10) Mathes et al. Rhabdomyolysis and myonecrosis in a patient in the lateral decubitus position. Anesthesiology 1996;84(3):727–9.   http://journals.lww.com/anesthesiology/fulltext/1996/03000/rhabdomyolysis_and_myonecrosis_in_a_patient_in_the.30.aspx

(11) Homsi et al. Prophylaxis of acute renal failure in patients with rhabdomyolysis. Ren Fail 1997, 19:283-288.

(12) Brown et al. Preventing renal failure in patients with rhabdomyolysis: do bicarbonate and mannitol make a difference? J Trauma 2004, 56:1191-1196. http://journals.lww.com/jtrauma/Abstract/2004/06000/Preventing_Renal_Failure_in_Patients_with.4.aspx

(13) Bosch et al. Rhabdomyolysis and acute kidney injury. N Engl J Med. 2009 Jul 2;361(1):62-7

(14) Malinoski et al. Crush injury and rhabdomyolysis Critical Care Clinics – Volume 20, Issue 1 (January 2004). http://www.ubccriticalcaremedicine.ca/academic/jc_article/Crush%20injuries%20and%20rhabdomyolysis%20(Feb-14-08).pdf


Why Aren’t Patients Using Advance Directives?

October 23, 2013

By Abigail Maller, MD

Faculty Peer Reviewed

Advance directives are a means for patients to communicate their wishes regarding medical decisions to their families and health care professionals in the event that they become unable to make these decisions themselves. These documents, together with the assignment of health care proxies, help avoid discrepancies between the end-of-life care a patient wanted and the care they end up receiving [1]. These resources also prevent confusion and promote mutual understanding between providers and family members.

However, advance directives are used by only a small percentage of patients [2]. The literature shows that only 18% to 30% of Americans have completed an advance directive, and only 1 in 3 chronically ill patients has a documented advance directive [3]. Even among patients admitted to medical ICUs, only a minority has advance care planning documented prior to admission [1]. Analysis of most states’ advance directive documents has shown that these forms tend to be riddled with complex medical and legal jargon [4, 5] and are written at a reading level significantly higher than the national average (11th grade, versus an average of 8th grade) [6]. The complexity of these documents creates a barrier between advance care planning and the patients who could potentially benefit from it.

Even though only a minority of patients overall use advance directives, research has shown that people of different ethnicities have varying rates of advance directive utilization [7, 8]. Additionally, attitudes toward end-of-life care and advance care planning vary among patient populations [8]. Patients of all populations tend to have misconceptions about advance care planning, including fears of early withdrawal of care, neglect, and poor administration of care. Patient surveys conducted at the University of Kansas Medical Center showed that hospitalized African Americans have fewer advance directives and living wills than Caucasian patients [9]. Patients in this study group also expressed the belief that they would be treated differently or cared for less if they had a living will, noting mistrust of the system. They also noted a preference for more aggressive medical care in the case of terminal illness. Latino patients studied in the same survey also had lower rates of advance directive use, but attributed this to language barriers rather than mistrust. Focus groups of Latino populations at Massachusetts General Hospital have shown that Latino patients are unfamiliar with the language and documentation of advance directives [10]. Additionally, they were confused not only about the purpose of these forms but about their legality as well. Some patients did not understand the difference between a living will and a last will and testament.

While some ethnic minorities have shown lower rates of advance directive use, a study conducted in Florida showed that patients of Asian descent are as likely as Caucasian patients to have a surrogate decision maker assigned [11]. Judging by these results, the authors concluded that the differences in advance directive use and end-of-life planning, while partially influenced by culture, were at root significantly associated with socioeconomic status. This association, however, has not been consistently demonstrated in the literature. More recent studies have shown that the discrepancies identified between demographic groups, while associated with ethnicity, socioeconomic status, and level of education, may be caused at root by underlying differences in literacy level [12, 13]. Low health literacy is a stronger predictor of a person’s health than age, income, employment status, education level, or race [14]. In fact, lower health literacy has been shown to be independently associated with poorer physical and mental health status [15].

The Volandes group at Massachusetts General Hospital performed a study of end-of-life preferences among a group of African American and Caucasian patients. Subjects were initially given a verbal description of advanced dementia and asked to report their preferences for end-of-life care, which were dichotomized into comfort care and aggressive care. Subjects were then shown a 2-minute video of a patient with advanced dementia and asked to report their preferences again. They were also tested for health literacy using the Rapid Estimate of Adult Literacy in Medicine (REALM) score [16]. The group found that African American patients were more likely than Caucasian patients to prefer aggressive care after the verbal description, and that patients with low to marginal health literacy were likewise more likely than patients with adequate health literacy to prefer aggressive care after the verbal description. However, after viewing the video, there were no differences in preference attributable to race or level of health literacy [17]. Additional research using this video-based education model aims to reduce the bias that differences in health literacy introduce into advance care planning [18]. These studies have shown that after video-based education, the associations between choice of end-of-life care and education/literacy level decrease substantially [19, 20].

A research group at the University of California, San Francisco (UCSF) developed a low-literacy form for advance care planning and compared it with the standard advance directive documents in regular use in California [21]. Patients were randomized to review either the redesigned form or the standard form and were asked to rate its acceptability and usefulness in advance care planning. Subjects were then given the alternative form and asked which they preferred. Completion of advance directive documentation was also monitored. The group showed that patients overall preferred the redesigned form to the standard documentation, and this was especially true of patients with limited literacy. Additionally, patients randomized to initially receive the redesigned form had higher rates of advance directive completion at 6-month follow-up.

Despite this, research on differences in health literacy and end-of-life decision-making remains limited. Further investigation into how redesigned advance care planning documents can improve utilization rates is important. Additionally, while educational video intervention has been successful in the case of advanced dementia, it is not clear whether similar techniques would work for terminal illnesses that present less overtly; clearly, more research in this area is necessary. More importantly, however, there has been very little initiative to change usual practice in clinics across the country to promote advance directive use.

Overall, it seems that multiple approaches and interventions are necessary to promote the use of advance directives. Potential targets include improving health literacy, addressing cultural biases and language barriers, and reducing document complexity. Redesigning the documents would only be a first step, though; education must also be a primary goal. Improved communication, education, and understanding regarding advance directives has the potential to bring major improvements in health outcomes in our hospital population.

Dr. Abigail Maller is a second year Internal Medicine resident at NYU Langone Medical Center

Peer reviewed by Antonella Surbone MD, PhD FACP, Ethics Editor, Clinical Correlations

References

1. Johnson RF Jr, Baranowski-Birkmeier T, O’Donnell JB. Advance directives in the medical intensive care unit of a community teaching hospital. Chest. 1995 Mar;107(3):752-6. PubMed PMID: 7874948.

2. Schickedanz AD, Schillinger D, Landefeld CS, Knight SJ, Williams BA, Sudore RL. A clinical framework for improving the advance care planning process: start with patients’ self-identified barriers. J Am Geriatr Soc. 2009 Jan;57(1):31-9. PubMed PMID: 19170789; PubMed Central PMCID: PMC2788611.  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2788611/

3. Wilkinson A, Wenger N, Shugarman LR. Literature review on advance directives. Report prepared for the U.S. Department of Health and Human Services; Assistant Secretary for Planning and Evaluation; Office of Disability, Aging and Long-Term Care Policy: Washington, DC. 2007. http://aspe.hhs.gov/daltcp/reports/2007/advdirlr.htm

4. Castillo LS, Williams BA, Hooper SM, Sabatino CP, Weithorn LA, Sudore RL. Lost in translation: the unintended consequences of advance directive law on clinical care. Ann Intern Med. 2011 Jan 18;154(2):121-8. Review. PubMed PMID: 21242368; PubMed Central PMCID: PMC3124843.  http://www.ncbi.nlm.nih.gov/pubmed/21242368

5. Gunter-Hunt G, Mahoney JE, Sieger CE. A comparison of state advance directive documents. Gerontologist. 2002 Feb;42(1):51-60. PubMed PMID: 11815699.

6. Mueller LA, Reid KI, Mueller PS. Readability of state-sponsored advance directive forms in the United States: a cross sectional study. BMC Med Ethics. 2010 Apr 25;11:6. PubMed PMID: 20416105; PubMed Central PMCID: PMC2868033.

7. Kwak J, Haley WE. Current research findings on end-of-life decision making among racially or ethnically diverse groups. Gerontologist. 2005 Oct;45(5):634-41. Review. PubMed PMID: 16199398.

8. Muni S, Engelberg RA, Treece PD, Dotolo D, Curtis JR. The influence of race/ethnicity and socioeconomic status on end-of-life care in the ICU. Chest. 2011 May;139(5):1025-33. Epub 2011 Feb 3. PubMed PMID: 21292758.   http://www.ncbi.nlm.nih.gov/pubmed/21292758

9. Born W, Greiner KA, Sylvia E, Butler J, Ahluwalia JS. Knowledge, attitudes, and beliefs about end-of-life care among inner-city African Americans and Latinos. J Palliat Med. 2004 Apr;7(2):247-56. PubMed PMID: 15130202.

10. Cohen MJ, McCannon JB, Edgman-Levitan S, Kormos WA. Exploring attitudes toward advance care directives in two diverse settings. J Palliat Med. 2010 Dec;13(12):1427-32. Epub 2010 Nov 22. PubMed PMID: 21091225.

11. Kwak J, Haley WE. Current research findings on end-of-life decision making among racially or ethnically diverse groups. Gerontologist. 2005 Oct;45(5):634-41. Review. PubMed PMID: 16199398.  http://www.ncbi.nlm.nih.gov/pubmed/16199398

12. Shea JA, Beers BB, McDonald VJ, Quistberg DA, Ravenell KL, Asch DA. Assessing health literacy in African American and Caucasian adults: disparities in rapid estimate of adult literacy in medicine (REALM) scores. Fam Med. 2004 Sep;36(8):575-81. PubMed PMID: 15343419.

13. Melhado L, Bushy A. Exploring Uncertainty in Advance Care Planning in African Americans: Does Low Health Literacy Influence Decision Making Preference at End of Life? Am J Hosp Palliat Care. 2011 Mar 10. [Epub ahead of print] PubMed PMID: 21398263.

14. Ad Hoc Committee on Health Literacy for the Council on Scientific Affairs, American Medical Association. Health literacy: report of the Council on Scientific Affairs. JAMA. 1999 Feb 10;281(6):552-557.

15. Wolf MS, Gazmararian JA, Baker DW. Health literacy and functional health status among older adults. Arch Intern Med. 2005 Sep 26;165(17):1946-52. PubMed PMID: 16186463. http://www.ncbi.nlm.nih.gov/pubmed/16186463

16. Doak CC, Doak LG, Root JH. Teaching Patients with Low Literacy Skills. 2nd ed. Philadelphia: J.B. Lippincott; 1996.

17. Volandes AE, Paasche-Orlow M, Gillick MR, Cook EF, Shaykevich S, Abbo ED, Lehmann L. Health literacy not race predicts end-of-life care preferences. J Palliat Med. 2008 Jun;11(5):754-62. PubMed PMID: 18588408.

18. Volandes AE, Ferguson LA, Davis AD, Hull NC, Green MJ, Chang Y, Deep K, Paasche-Orlow MK. Assessing end-of-life preferences for advanced dementia in rural patients using an educational video: a randomized controlled trial. J Palliat Med. 2011 Feb;14(2):169-77. Epub 2011 Jan 21. PubMed PMID: 21254815.

19. Volandes AE, Ariza M, Abbo ED, Paasche-Orlow M. Overcoming educational barriers for advance care planning in Latinos with video images. J Palliat Med. 2008 Jun;11(5):700-6. PubMed PMID: 18588401.  http://www.ncbi.nlm.nih.gov/pubmed/18588401

20. Volandes AE, Barry MJ, Chang Y, Paasche-Orlow MK. Improving decision making at the end of life with video images. Med Decis Making. 2010 Jan-Feb;30(1):29-34. Epub 2009 Aug 12. PubMed PMID: 19675323.

21. Sudore RL, Landefeld CS, Barnes DE, Lindquist K, Williams BA, Brody R, Schillinger D. An advance directive redesigned to meet the literacy level of most adults: a randomized trial. Patient Educ Couns. 2007 Dec;69(1-3):165-95. Epub 2007 Oct 17. PubMed PMID: 17942272; PubMed Central PMCID: PMC2257986.  http://www.ncbi.nlm.nih.gov/pubmed/17942272

ADVANCE DIRECTIVES: FURTHER ELEMENTS OF COMPLEXITY.

By Antonella Surbone MD PhD FACP, Clinical Correlations Ethics Editor.

The interesting piece by Dr. Maller, “Why aren’t patients using advanced directives?”, addresses in detail two main aspects of this issue: cross-cultural and ethnic differences that affect people’s attitudes toward advance directives (ADs), and health literacy. (1) The latter, as Dr. Maller indicates, relates to individual educational level, but also to the fact that most AD forms are “riddled with complex medical and legal jargon.” Different methods to improve the readability and cross-cultural comprehension of AD forms must be designed and implemented in order to increase their utilization, yet the key remains individually and culturally sensitive communication between physicians and their patients and families (2,3).

In 2005, JAMA published a poignant piece entitled “Beyond advance directives: importance of communication skills at the end of life” (4). Yet most Americans, and people around the world, still do not discuss end-of-life issues with their doctors and do not have ADs, regardless of gender, age, ethnicity, and educational level. Why? Because in western countries we tend to avoid speaking of death, as if this could make death less a part of our lives. As physicians, we are especially reluctant to engage in difficult conversations about poor prognosis and dying: in medical school we study and train to overcome disease, to cure our patients, and to constantly improve our success rates, while forgetting that our profession, and our “mission” for those who still believe in this term, is to care for our patients during the course of their entire lives and illnesses, including the palliative phase as death approaches.

In 2011, the American Society of Clinical Oncology (ASCO) issued statements for both oncologists and patients regarding the importance and benefit of early, or concomitant, palliative care for patients with advanced cancer. ASCO stressed that oncologists should initiate candid communication about palliative and end-of-life care as early as possible when death appears inevitable. (5) ASCO also identified oncologists’ difficulty with discussions of poor prognosis and impending death as the main barrier to such conversations: oncologists perceive these discussions as an indication of their own “failure” and as time consuming and inadequately recognized or compensated, and medical schools and specialty training programs provide insufficient education and training in communication. (6)

By contrast, studies show the many benefits of open communication with patients and their families, including improved end-of-life care that is more respectful of patients’ wishes, decreased number of hospitalizations, reduced use of chemotherapy in the last days of life, and less complicated grief in surviving relatives. (7)

The 1997 Institute of Medicine report “Approaching Death: Improving Care at the End of Life” identified the many barriers that impede the delivery of high-quality, compassionate care to patients with advanced illness. (8) Now that “quality of life” has become a standard measure of care, it is time to address “quality of death” as well, including greater attention to the use of ADs. “Having Your Own Say: Getting the Right Care When It Matters Most,” edited in 2012 by B.J. Hammes, calls for “a true transformation [of health care] so that all Americans with advanced illness, especially the sickest and most vulnerable, will receive comprehensive, high-quality care that is consistent with their goals and values and that honors their dignity.” (9) New laws that permit adults to sign documents specifying their preferences for future health-care decisions are part of more supportive, coordinated, and patient-centered care.

Yet living wills and powers of attorney for health care, while important tools for documentation, do not by themselves address all the morally complex questions that physicians at times face with regard to the care of individual persons in particular circumstances. For example, what should be done when the patient’s advance directive does not seem to the treating physician to be in the best interest of a patient with a surrogate? A 2013 article in JAMA Internal Medicine describes a framework to tackle this challenging dilemma and applies it to two clinical cases. (10) The authors recommend that doctors ask themselves the following questions in attempting to address the potential dilemmas of ADs: “1) Is the clinical situation an emergency that allows no time for deliberation? 2) In view of the patient’s values and goals, how likely is it that the benefits of the intervention will outweigh the burdens? 3) How well does the AD fit the situation at hand? 4) How much leeway did the surrogate provide for overriding the AD? 5) How well does the surrogate represent the patient’s best interest?” (10)

No study has ever shown ADs to be associated with worse survival or worse end-of-life care. Yet a 2008 survey of members of the American Society of Clinical Oncology showed that despite caring for patients with a high mortality rate, and often discussing end-of-life planning with them, oncologists do not often write ADs for themselves. Those who have, however, have a more comprehensive understanding of their patients’ end-of-life wishes, discuss ADs more frequently with their patients, and feel more knowledgeable and comfortable in helping patients to complete ADs. By contrast, almost 75% of oncologists without ADs tended to hold end-of-life discussions not directly with their patients, but rather with family members or loved ones. (11)

The issue of advance directives is complex. Writing ADs involves deep feelings and considerations of our own mortality and potential suffering. As physicians, we must strive to improve our communication with our patients and their families and to respect their wishes and priorities, while always remaining at their side.

1. Searight HR, Gafford J. Cultural diversity at the end of life. American Family Physician 2007; 72: 515-522.

2. Surbone A, Rajer M, Stiefel R, Zwitter M, eds. New challenges in communication with cancer patients. New York, NY: Springer; 2012.

3. Fallowfield L, Jenkins V. Communicating sad, bad, and difficult news in medicine. Lancet 2004; 363: 312-319.

4. Tulsky JA. Beyond advance directives: importance of communication skills at the end of life. JAMA 2005; 294: 359-365.

5. American Society of Clinical Oncology. Advanced Cancer Care Planning, 2011. Available at http://www.cancer.net/coping/advanced-cancer-care-planning. (last accessed August 10th 2013)

6. OncoTalk Program, available at http://depts.washington.edu/oncotalk/modules.php (last accessed August 10th 2013)

7. Mack JW, Cronin A, Keating NL et al. Associations Between End-of-Life Discussion Characteristics and Care Received Near Death: A Prospective Cohort Study. J Clin Oncol 2012; 30: 4387-95.

8. Institute of Medicine (IOM). Approaching Death: Improving Care at the End of Life. Washington DC, National Academy Press 1997.

9. Hammes BJ, ed. Having your own say: Getting the right care when it matters most. CHT Press 2012.

10. Smith AK, Lo B, Sudore R. When previously expressed wishes conflict with best interests. JAMA Intern Med 2013; 173: 1241-1245.

11. Schroeder JE, Mathiason MA, Meyer CM, Frisby KA, Williams E and Go RS. Advance directives (ADs) among members of the American Society of Clinical Oncology (ASCO)  Journal of Clinical Oncology, 2008 ASCO Annual Meeting Proceedings, Vol 26, No 15S (May 20 Supplement), 2008: 20611.

 

Is there a Non-Invasive Method to Diagnose Cirrhosis/Hepatic Fibrosis?

October 11, 2013

By Becky Naoulou, MD

Faculty Peer Reviewed

Clinical Question:

You are asked to see a 45 year-old male with a medical history significant for untreated hepatitis C (HCV RNA 5,000,000 copies/mL, genotype 1a). He presents complaining of worsening fatigue and weakness for several months. Labs are remarkable for mildly elevated transaminases, low albumin, and an elevated INR. The patient is very worried because he has heard that hepatitis C can cause liver cancer and asks you if there is a non-invasive screening test for liver cancer. You suspect that the patient has cirrhosis. Acknowledging that the development of cirrhosis is the primary risk factor for development of hepatocellular carcinoma (HCC) in patients with HCV, you plan to look up the data examining whether a liver biopsy is necessary to assess the degree of fibrosis/cirrhosis or whether non-invasive methodologies are available.

Currently, liver biopsy remains the gold standard for staging the extent of fibrosis/cirrhosis in patients with chronic liver disease; however, there is active research examining a number of serologic markers and imaging procedures that may obviate the need for liver biopsy. This article will focus on the imaging modalities currently being investigated for diagnosis of liver fibrosis.

Fibrosis is the final common pathway for almost all forms of chronic liver disease. Fibrosis results from the accumulation of collagen and proteoglycans in the extracellular matrix in response to repetitive liver injury [1]. There are multiple staging systems for fibrosis; the most commonly used is the Metavir classification (figure 1) [2]. Staining for collagen is performed on biopsy specimens using trichrome stain. The scoring system reflects the natural history of fibrosis development. Initially, fibrous connective tissue surrounds the portal triads (stage 1). Stage 2 fibrosis is characterized by extension of the collagen fibers into the periportal space, while fibrous connective tissue linking neighboring portal triads and extending to central veins is considered stage 3 disease. Finally, in cirrhosis (stage 4), most portal areas are connected by fibrous tissue and hepatocyte clusters are completely surrounded by fibrous tissue, forming cirrhotic nodules [6]. The detection of hepatic fibrosis has important clinical implications for the management of chronic liver disease. For example, the presence of clinically significant hepatic fibrosis (histologic stage greater than or equal to 2) influences the timing of antiviral treatment for patients with chronic hepatitis B or C [3]. Similarly, patients with nonalcoholic fatty liver disease who are found to have fibrosis will need closer monitoring and follow-up for the development of cirrhosis [3], as well as more aggressive counseling for weight loss and lipid control. Furthermore, the presence of cirrhosis requires initiation of screening for HCC as well as for gastroesophageal varices.

Figure 1: Metavir classification for the stage of liver disease

Stage 0: No scarring
Stage 1: Minimal scarring
Stage 2: Scarring has occurred and extends outside the areas of the liver that contain blood vessels
Stage 3: Bridging fibrosis is spreading and connecting to other areas that contain fibrosis
Stage 4: Cirrhosis, or advanced scarring of the liver
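For readers who think in code, the staging scheme and the management thresholds described above can be captured in a few lines. This is a minimal, purely illustrative sketch (the dictionary and function names are hypothetical, not from any clinical system); the thresholds mirror those described in the preceding paragraph:

```python
# Metavir fibrosis stages (figure 1) and the management implications
# described in the text. Illustrative only; not a clinical decision tool.
METAVIR_STAGES = {
    0: "No scarring",
    1: "Minimal scarring (fibrosis around portal triads)",
    2: "Fibrosis extends into the periportal space",
    3: "Bridging fibrosis linking portal triads and central veins",
    4: "Cirrhosis: hepatocyte nodules surrounded by fibrous tissue",
}

def management_implications(stage):
    """Return the follow-up actions suggested by the text for a given stage."""
    actions = []
    if stage >= 2:  # "clinically significant" fibrosis
        actions.append("consider timing of antiviral therapy in chronic HBV/HCV")
    if stage == 4:  # cirrhosis
        actions.append("initiate screening for HCC and gastroesophageal varices")
    return actions

print(management_implications(4))
```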

While liver biopsy remains the gold standard for diagnosing hepatic fibrosis, it is an invasive test with many possible complications. The most common complication is pain (84%), but other, more serious complications may occur, including hemorrhage (intraperitoneal or intrathoracic); puncture of the gallbladder, colon, or pleura; accidental biopsy of the kidney or pancreas; and creation of an arteriovenous fistula in the liver [4]. The mortality rate of the procedure is estimated to range from 0.01% to 0.1%, with bleeding accounting for the largest percentage of deaths [4]. Sampling error also remains a major, difficult-to-overcome limitation of liver biopsy: only about 1/50,000 of the liver is analyzed by a standard biopsy. Further, pathologic examination of a liver biopsy is subject to interobserver and intraobserver variability. Given the potential complications and the diagnostic limitations of liver biopsy, there is a pressing need for more accurate, non-invasive tools for the diagnosis and assessment of liver fibrosis.

Conventional cross-sectional imaging studies (CT and MRI), and to a lesser extent abdominal ultrasound, are very useful in the diagnosis of advanced cirrhosis. The cirrhotic liver develops characteristic changes in morphology that include increased surface nodularity, enlargement of the gallbladder fossa, atrophy of the right lobe, and relative increase in size of the caudate lobe [5]. However, these imaging studies have lower sensitivity for earlier stages of disease and are not suitable for staging liver fibrosis over its entire spectrum. Newer techniques must be capable of detecting hepatic fibrosis at an early stage.

Ultrasound elastography is a promising new technique for the detection of hepatic fibrosis. Elastography refers to a method for estimating liver stiffness; it can currently be performed with either US or MR imaging. In US elastography, a device creates a mild-amplitude, low-frequency shear wave that travels through the liver. The shear wave is then detected by pulse-echo US [6]. The wave travels faster through stiffer tissue, allowing the radiologist to estimate tissue stiffness and thus, indirectly, the degree of hepatic fibrosis. This technique has several advantages: it is non-invasive and low-cost, can be repeated at intervals relatively easily, and can assess a larger portion of the liver parenchyma than liver biopsy can. A meta-analysis of nine studies [7] showed that US elastography had a sensitivity of 87% (95% CI 84%-90%) and a specificity of 91% (95% CI 89%-92%) for a diagnosis of cirrhosis. More impressive, though, was that in seven of the nine studies, US elastography diagnosed stage II to IV fibrosis with 70% sensitivity (95% CI 67%-73%) and 84% specificity (95% CI 80%-88%), suggesting that this method can reliably identify liver disease in its earlier stages. Limitations of US elastography, as with all US techniques, include its operator-dependent nature and its reduced effectiveness in obese patients. In addition, conditions other than fibrosis can increase liver stiffness, including heart failure, acute inflammation, and portal hypertension, so results must be interpreted with caution [8].
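The arithmetic behind the wave-speed measurement can be made explicit. Transient elastography devices commonly report stiffness as a Young's modulus computed from the shear-wave speed, under the standard assumptions of incompressible, homogeneous tissue with a density of about 1000 kg/m3. The sketch below is illustrative only; the wave speeds shown are assumed values, not data from the cited meta-analysis:

```python
# Young's modulus from shear-wave speed: E = 3 * rho * v^2 for an
# incompressible, homogeneous medium (a standard elastography assumption).
TISSUE_DENSITY_KG_M3 = 1000.0  # soft tissue approximated as water

def stiffness_kpa(shear_wave_speed_m_s):
    e_pa = 3.0 * TISSUE_DENSITY_KG_M3 * shear_wave_speed_m_s ** 2
    return e_pa / 1000.0  # convert Pa to kPa

# A faster wave implies a stiffer, more fibrotic liver:
print(stiffness_kpa(1.0))  # 3.0 kPa
print(stiffness_kpa(2.2))  # ~14.5 kPa, markedly stiffer
```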

MR imaging offers several distinct modalities for the detection of hepatic fibrosis. These include double contrast-enhanced MR imaging, diffusion-weighted imaging, perfusion imaging, and MR elastography [5]. Administration of a gadolinium-based contrast agent improves the visibility of fibrosis. Preferential accumulation of contrast in the extracellular fluid, where collagen is deposited, allows fibrotic areas to be more readily detected on MR imaging. Double contrast-enhanced MRI involves the administration of gadolinium and a second contrast agent, superparamagnetic iron oxide (SPIO). SPIO is a reticuloendothelial system-specific contrast agent; after intravenous administration, approximately 80% of it is taken up by the liver [5]. As a result, normal liver parenchyma has low signal intensity on MR imaging. Fibrotic areas, however, accumulate less iron oxide, due largely to a reduced density of Kupffer cells (reticuloendothelial cells), and appear as high-signal-intensity reticulations [9]. Individually, gadolinium and SPIO are of limited efficacy; in combination, however, they are synergistic and demonstrate fibrosis with greater clarity than either agent alone. In a study by Aguirre et al. [10], the sensitivity and specificity for the detection of grade 3 or higher fibrosis exceeded 90% with this MR modality. Limitations of this method include high cost; the need for high-quality images free of motion (requiring prolonged breath-holds by patients); and some minor adverse reactions associated with the contrast agents, the most common of which was back pain (10%) [10]. Further, the sensitivity at lower stages of fibrosis is significantly reduced.

Diffusion-weighted imaging (DWI) measures the ability of water protons to diffuse in a particular tissue. This is expressed as the apparent diffusion coefficient (ADC), which increases as the molecular diffusion of water increases [11]. As collagen accumulates in a fibrotic liver, the free diffusion of water molecules is restricted, and the ADC in these areas is therefore lower than in normal hepatic parenchyma; by measuring the ADC in different sections of the liver, one can theoretically identify the fibrotic areas. The challenging aspects of this technique, including confounding factors that can alter the ADC and technical imaging parameters, mean that DWI needs further refinement before large-scale adoption.

MR perfusion imaging is another technique being applied to the detection of liver fibrosis. It takes advantage of physiologic changes that occur as fibrosis progresses: decreased portal venous blood flow, increased hepatic arterial blood flow, and formation of intrahepatic shunts [5]. This technique is sensitive to blood flow at the microscopic level and measures the rate at which blood is delivered to the tissues. In one study [12], for example, researchers reported significant perfusion differences between patients with and without advanced fibrosis: patients with advanced fibrosis had increased absolute arterial blood flow and an increased arterial fraction. This imaging technique is still in its infancy and is limited by many of the same factors as DWI.
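Returning to the DWI discussion above: with a single diffusion weighting, the ADC can be estimated from the mono-exponential signal decay S(b) = S0 x exp(-b x ADC). This is a minimal two-point sketch of that standard model; the signal values and b-value are illustrative assumptions, not data from the cited work:

```python
import math

# Two-point ADC estimate from the mono-exponential DWI decay model:
# S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0 / Sb) / b
# b is in s/mm^2, so the ADC comes out in mm^2/s.
def apparent_diffusion_coefficient(s0, sb, b):
    return math.log(s0 / sb) / b

# Restricted diffusion (as in a fibrotic region) yields a lower ADC:
adc = apparent_diffusion_coefficient(s0=100.0, sb=45.0, b=800.0)
print(f"ADC = {adc:.2e} mm^2/s")  # ~1.0e-03 mm^2/s
```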

Finally, MR elastography (MRE) is a new technique that may hold the most promise for becoming a clinically useful tool in the near future. As in US elastography, a mechanical wave is propagated through the liver, and measurement of its velocity allows quantification of liver stiffness: the stiffer (more fibrotic) the liver, the faster the wave propagation. In a study by Yin et al., which examined the performance of this technique in 50 patients with various forms of chronic liver disease and 35 healthy volunteers, a cutoff mean liver stiffness value of 2.93 kPa gave MRE a sensitivity of 98% and a specificity of 99% for differentiating any stage of fibrosis from normal liver tissue [13]. The advantages of MR elastography over US include sampling of the entire liver, insensitivity to body habitus, and operator independence. Standard contraindications to MRI, confounding contributors to liver stiffness, and high cost are among the limitations of this technique.

In addition to the imaging techniques described above, a number of researchers are examining the use of serologic markers that could identify the presence of hepatic fibrosis or cirrhosis with a simple blood test. Markers that estimate the turnover or metabolism of the extracellular matrix, such as procollagen type III amino-terminal peptide (P3NP) and matrix metalloproteinase 2, may serve as direct markers of fibrosis. There are also several indirect markers of fibrosis. The Sequential Algorithm for Fibrosis Evaluation (SAFE) combines two such markers, the APRI and the Fibrotest-Fibrosure test, which are used in sequence to test for fibrosis and cirrhosis. The AST-to-platelet ratio index (APRI) is calculated by the following formula: (AST / upper limit of normal for AST) x 100 / platelet count (in 10^9/L). Fibrotest and Fibrosure are commercial tests that use a mathematical formula to predict fibrosis from the levels of alpha-2-macroglobulin, alpha-2 globulin, and other proteins [6]. In a large multicenter study that examined the ability of the SAFE algorithm to detect significant fibrosis, its accuracy was 90.1%, the area under the receiver operating characteristic curve was 0.89 (95% CI 0.87-0.90), and it reduced the number of liver biopsies needed by 46.5%. When the algorithm was used to detect cirrhosis, its accuracy was 92.5%, the area under the curve was 0.92 (95% CI 0.89-0.94), and it reduced the number of liver biopsies needed by 81.5% [14].
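Of these markers, the APRI is simple enough to compute at the bedside. A minimal sketch of the formula quoted above; the input values are illustrative, not drawn from the cited study:

```python
def apri(ast, ast_uln, platelets_per_nl):
    """AST-to-platelet ratio index, per the formula in the text:
    (AST / upper limit of normal for AST) * 100 / platelet count (10^9/L)."""
    return (ast / ast_uln) * 100.0 / platelets_per_nl

# Illustrative values: AST 80 U/L with an upper limit of normal of 40 U/L,
# and a platelet count of 100 x 10^9/L.
print(apri(ast=80, ast_uln=40, platelets_per_nl=100))  # 2.0
```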

Currently, liver biopsy remains the only validated, reproducible test capable of determining the extent of liver damage. However, the eventual adoption of a non-invasive test for the detection and staging of fibrosis seems inevitable, as active research on many such imaging and serologic tests is close to fruition.

Dr. Becky Naoulou is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Michael Poles, MD, Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons

References

1. Friedman SL. Hepatic fibrosis: overview. Toxicology 2008; 254(3): 120-129.  http://www.ncbi.nlm.nih.gov/pubmed/18662740

2. Theise ND. Liver biopsy assessment in chronic viral hepatitis: a personal, practical approach. Mod Pathol 2007; 20:S3-14.  http://www.ncbi.nlm.nih.gov/pubmed/17486049

3. Schmeltzer PA, Talwalkar JA. Noninvasive tools to assess hepatic fibrosis: Ready for prime time? Gastroenterol Clin N Am 2011; 40: 507-521.

4. Sanai FM, Keeffe EB. Liver biopsy for histological assessment – the case against. Saudi J Gastroenterol 2010; 16:124-32.  http://www.ncbi.nlm.nih.gov/pubmed/20339187

5. Faria SC, et al. MR Imaging of Liver Fibrosis: Current State of the Art. Radiographics 2009; 29:1615-1635.

6. Carey E, Carey WD. Noninvasive tests for liver disease, fibrosis, and cirrhosis: Is liver biopsy obsolete? Cleve Clin J Med 2010; 77(8): 519-27.

7. Talwalkar JA, Kurtz DM, Schoenleber SJ, West CP, Montori VM. Ultrasound-based transient elastography for the detection of hepatic fibrosis: systematic review and meta-analysis. Clin Gastroenterol Hepatol 2007; 5:1214-1220.  http://www.ncbi.nlm.nih.gov/pubmed/17916549

8. Millonig G, Friedrich S, Adolf S, et al. Liver stiffness is directly influenced by central venous pressure. J Hepatol 2010; 52(2):206-210.

9. Lucidarme O, Baleston F, Cadi M, et al. Non-invasive detection of liver fibrosis: is superparamagnetic iron oxide particle-enhanced MR imaging a contributive technique? Eur Radiol 2003; 13(3):467-74.  http://www.unboundmedicine.com/harrietlane/ub/citation/12594548/Non_invasive_detection_of_liver_fibrosis:_Is_superparamagnetic_iron_oxide_particle_enhanced_MR_imaging_a_contributive_technique

10. Aguirre DA, Behling CA, Alpert E, et al. Liver fibrosis: noninvasive diagnosis with double contrast material-enhanced MR imaging. Radiology 2006; 239(2):425-437.

11. Guiu B, Cercueil JP. Liver diffusion-weighted MR imaging: the tower of Babel? Eur Radiol. Forthcoming 2011.

12. Miyazaki S, Yamazaki Y, Murase K. Error analysis of the quantification of hepatic perfusion using a dual-input single-compartment model. Phys Med Biol 2008; 53(21):5927-5946.

13. Yin M, Talwalkar JA, Glaser KJ, et al. Assessment of hepatic fibrosis with magnetic resonance elastography. Clin Gastroenterol Hepatol 2007; 5:1207-1213.

14. Sebastiani G, Halfon P, Castera L, et al. SAFE biopsy: a validated method for large-scale staging of liver fibrosis in chronic hepatitis C. Hepatology 2009; 49: 1821-1827.  http://onlinelibrary.wiley.com/doi/10.1002/hep.22859/pdf

15. Abeysekera KWM, Fitzpatrick J, Lim AKP, et al. Recent advances in imaging hepatic fibrosis and steatosis. Expert Review of Gastroenterology & Hepatology 2011; http://dx.doi.org/10.1586/egh.10.85.

16. Patel K. Noninvasive tools to assess liver disease. Current Opinion in Gastroenterology 2010; 26:227-233.

 

The DLO: Does FFP Correct INR?

September 20, 2013

By Nicole A Lamparello, MD

Faculty Peer Reviewed

Page from the hematology laboratory: critical lab value; INR 1.9. Liver biopsy scheduled for tomorrow. What is a knowledgeable physician practicing evidence-based medicine to do?

Fresh frozen plasma (FFP) is the liquid, acellular component of blood. FFP contains water, electrolytes, and the majority of the coagulation proteins [1]. It is frequently transfused to patients with an elevated prothrombin time (PT), a measure of the activity of the common coagulation pathway (involving factors X and V, prothrombin, and fibrinogen) and of the extrinsic pathway (involving factor VII). In clinical practice, the PT is better interpreted using the international normalized ratio (INR), which accounts for variability among different thromboplastin reagents. Most commonly, FFP transfusions are administered in an effort to “correct” coagulopathy and reduce the risk of bleeding.
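For reference, the standard INR calculation (not spelled out in the article) raises the PT ratio to the international sensitivity index (ISI) of the thromboplastin reagent, which is what makes results comparable across laboratories. A minimal sketch with illustrative numbers:

```python
# INR = (patient PT / mean normal PT) ** ISI, where the ISI calibrates the
# local thromboplastin reagent against an international reference standard.
def inr(patient_pt_sec, mean_normal_pt_sec, isi):
    return (patient_pt_sec / mean_normal_pt_sec) ** isi

print(round(inr(18.0, 12.0, 1.0), 2))  # 1.5: PT ratio 1.5 with a reference-like reagent
print(round(inr(16.8, 12.0, 1.2), 2))  # ~1.5: a less sensitive reagent gives a smaller
                                       # raw ratio, but the ISI exponent restores the INR
```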

In the very recent past, indications for the transfusion of FFP included prophylaxis in non-bleeding patients with acquired coagulation defects prior to invasive procedures. Less than eight years ago, guidelines from the New York State Council on Human Blood and Transfusion Services approved the use of prophylactic FFP transfusions in patients with a PT or aPTT greater than 1.5 times the normal range [1,2]. However, an examination of the literature suggests that FFP does not correct the INR and that, in fact, prophylactic FFP transfusions do not result in fewer bleeding events.

One of the first studies to examine the effect of large-volume FFP transfusion on the INR was performed in 1966 by Spector et al. Thirteen patients with liver disease were given 3-9 units of FFP. The PT decreased to within 60% of normal activity in only eight patients, and the effect was short-lived, as the elevation in coagulation factor levels declined by 50% within the first 2-4 hours after transfusion [3]. Forty years later, a study at Massachusetts General Hospital examined a larger cohort of 121 patients. The patients had an INR between 1.1 and 1.85 and a repeat INR within 8 hours after receiving FFP. The main indications for transfusion were an elevated INR before a procedure and bleeding with an elevated INR. Normalization of the INR (INR <1.1) occurred in only 0.8% of patients, and the INR fell halfway to normal in only 15% of patients [4]. On average, the INR decreased by only 0.07.

Further, a retrospective study by Holland and Brooks introduced a control group to examine the effect of medical treatment alone on mildly prolonged coagulation tests. In this study, 224 patients receiving 295 FFP transfusions and a control group of 71 patients were included in the analysis. Mildly elevated INRs (1.3-1.6) decreased without FFP, via supportive care and treatment of the underlying medical condition, after a median time of 8.5 hours. The proposed mechanism for this “natural correction” was the treatment of dehydration, anemia, and metabolic disturbances that had led to organ hypoperfusion, systemic hypoxia, and pH abnormalities [5]. Interestingly, a linear relationship was derived to predict the change in INR per unit of FFP:

INR change = 0.37 x (pretransfusion INR) - 0.47

While the pre-transfusion INR was predictive of the response to FFP, the strongest response was found when the pre-transfusion INR was greater than 2, not for mild elevations. FFP treatment was minimally effective in correcting mild INR elevations (<1.7) [5]. While there may be a consumptive process depleting factors faster than FFP transfusions can replace them, another likely explanation is that the dose of FFP may be inadequate. Standard doses of FFP are in the range of 15-30 mL/kg. In a 70-kg patient, a transfusion of 2 units of FFP is only 400-450 mL, or about 6 mL/kg. With this volume, coagulation factor levels are expected to increase by only 6%, which is unlikely to change the INR significantly [6]. To achieve hemostatically adequate coagulation factor levels (20-30%), patients would need to receive a much larger volume of plasma, a potential problem in many patient populations, especially already volume-overloaded, critically ill patients.
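Putting the Holland and Brooks regression together with the dose arithmetic above makes the futility argument easy to check. In this minimal sketch, the regression coefficients come from the cited study [5]; the per-unit volume and patient weight are the worked-example assumptions from the text, not study parameters:

```python
# Predicted INR change per unit of FFP, from the regression quoted above [5].
def predicted_inr_change_per_unit(pretransfusion_inr):
    return 0.37 * pretransfusion_inr - 0.47

# Dose delivered per kg, assuming ~225 mL per unit (2 units ~ 400-450 mL).
def dose_ml_per_kg(units, weight_kg, ml_per_unit=225.0):
    return units * ml_per_unit / weight_kg

print(predicted_inr_change_per_unit(1.5))     # ~0.085: minimal effect at mild elevations
print(predicted_inr_change_per_unit(2.5))     # ~0.455: larger effect once INR > 2
print(dose_ml_per_kg(units=2, weight_kg=70))  # ~6.4 mL/kg, well below the 15-30 mL/kg range
```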

The above studies support the modification made in recent guidelines, which now indicate prophylactic FFP transfusion in non-bleeding patients with hereditary coagulation defects prior to invasive procedures. In general, prophylaxis may be indicated if coagulation factor activity is <30%, or in patients with a history of recurrent significant bleeding despite coagulation factor activity >30% [7]. Coagulation activity is determined by ordering coagulation factor levels, such as factor V and factor VII, which are assayed from the PT [6]. These newer guidelines do not clearly define recommendations for critically ill patients or for patients with acquired coagulation deficits prior to invasive procedures.

Regardless of the guidelines, mild elevations in the INR may be normal. When interpreting the INR, as with any laboratory value, it is important to remember that the “normal range” of 0.8-1.2 represents 95% of individuals; 5% of healthy individuals fall outside the reported reference range. Additionally, INR levels may be influenced by multiple factors. Most commonly, a sample with an elevated hematocrit has a proportionally reduced volume of plasma; the ratio of citrate anticoagulant in the collection tube to plasma is therefore increased, resulting in a spuriously prolonged clotting time and an elevated INR [8]. It is also important to recognize the limitations of the INR. The INR is validated for patients who are stably anticoagulated on Coumadin, but not for patients with the coagulopathy of liver disease or isolated factor VII deficiency. In the latter settings, interlaboratory agreement may be poor, and bleeding risk correlates poorly with the INR.
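The hematocrit artifact can be quantified. One widely cited laboratory adjustment (drawn from clinical laboratory standards guidance, not from the article) scales the citrate volume in the collection tube to the patient's plasma volume when the hematocrit exceeds roughly 55%; a minimal sketch under that assumption:

```python
# Excess citrate relative to plasma at high hematocrit spuriously prolongs the
# PT/INR. A commonly cited adjustment (an assumption here, not from the text):
#   citrate (mL) = 0.00185 x blood volume (mL) x (100 - hematocrit %)
def adjusted_citrate_ml(blood_volume_ml, hematocrit_pct):
    return 0.00185 * blood_volume_ml * (100.0 - hematocrit_pct)

# A standard tube pairs 0.5 mL citrate with 4.5 mL blood. At a hematocrit of
# 60%, the appropriate citrate volume falls to ~0.33 mL; an unadjusted tube
# over-anticoagulates the reduced plasma volume and falsely elevates the INR.
print(round(adjusted_citrate_ml(4.5, 60.0), 2))  # 0.33
```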

The purpose of FFP transfusion is to lower the risk of bleeding in patients with coagulopathy. However, studies have found no difference in bleeding events between patients who receive FFP and those who do not. In a retrospective study of 115 patients, 44 critically ill, non-bleeding patients received FFP (approximately 12 mL/kg), and only 36% of this group had the INR “corrected,” or reduced to <1.5; the remaining 71 patients did not receive FFP. There was no statistically significant difference between the groups in INR reduction, bleeding episodes, hospital deaths, or length of ICU stay [9]. Even more striking, the group receiving FFP had a statistically significantly greater incidence of new-onset acute lung injury (18% vs. 4%, p=0.021). This begs the question: what are the risks of FFP, especially when the benefits are seemingly slim to none?

While infection is a well-known but rare risk of blood product transfusion, other risks of plasma therapy include severe allergic reaction, transfusion-associated volume overload (the most common), and transfusion-related acute lung injury [2]. Between 2005 and 2008, transfusion-related acute lung injury (TRALI) was the leading cause of transfusion-associated death, responsible for 35% to 65% of all fatalities [10]. TRALI is an acute pulmonary injury occurring within six hours of transfusion and presenting with tachypnea, cyanosis, hypoxemia, and decreased pulmonary compliance [11]. Plasma-containing products are most frequently implicated, and FFP was the most commonly transfused product in TRALI fatalities, accounting for 51 of 115 deaths (44%) [10]. One strategy in place at many blood banks to reduce TRALI risk is the use of male-only plasma donors.

A large proportion of FFP use in medical institutions is inappropriate; a glaring example is the reversal of the Coumadin anticoagulant effect in patients undergoing elective procedures or surgery because of scheduling constraints. Use of FFP in non-bleeding patients with acquired coagulation deficits is more often dictated by convention than by rationally based approaches. Furthermore, physician uncertainty in a medico-legal society leads to the practice of “precautionary medicine”: in the eyes of many clinicians, withholding transfusion and accepting the risk of bleeding seems comparatively less desirable. However, awareness of the harmful consequences of FFP use must be underscored among healthcare providers.

What is the take-home message? Review of recent evidence suggests that FFP does not correct mild elevations in the INR, much less sustain any reduction beyond several hours. Furthermore, an elevated INR does not predict bleeding in the setting of a procedure, nor do prophylactic FFP transfusions result in fewer bleeding events. Given the absence of evidence that FFP corrects the INR, in conjunction with the known risks of FFP, transfusion of FFP to non-bleeding patients with mild elevations in the INR cannot be supported.

Dr. Nicole Lamparello, Internal Medicine Resident, NYU Langone Medical Center

Peer reviewed by David Green, MD Assistant Professor, Department of Medicine (Hematology/Oncology), Director Adult Coagulation Lab Tisch/Bellevue Hospitals

Image courtesy of Wikimedia Commons

References:

(1) New York State Council on Human Blood and Transfusion Services. Guidelines for the administration of plasma, 2004. Available at http://www.wadsworth.org/labcert/blood_tissue/FFPadminfinal1204.pdf. Accessed September 28, 2012.

(2) Gajic O, et al. Fresh frozen plasma and platelet transfusion for nonbleeding patients in the intensive care unit: benefit or harm? Critical Care Medicine. 2006; 34:170-174.

(3) Spector I, Corn M, Ticktin HE. Effect of plasma transfusions on the prothrombin time and clotting factors in liver disease. New England Journal of Medicine. 1966;275:1032-7.  http://www.nejm.org/doi/full/10.1056/NEJM196611102751902

(4) Abdel-Wahab OI, Healy B, Dzik WH. Effect of fresh-frozen plasma transfusion on prothrombin time and bleeding in patients with mild coagulation abnormalities. Transfusion. 2006; 46:1279-1285.  http://www.ncbi.nlm.nih.gov/pubmed/16934060

(5) Holland LL and Brooks JP. Toward rational fresh frozen plasma transfusion: the effect of plasma transfusion on coagulation test results. American Journal of Clinical Pathology. 2006; 126:133-139.  http://ajcp.ascpjournals.org/content/126/1/133.abstract

(6) Ciavarella D, Reed RL, Counts RB, et al. Clotting factor levels and the risk of diffuse microvascular bleeding in the massively transfused patient. British Journal of Haematology. 1987; 67:365-368.

(7) Indications- New York State Council on Human Blood and Transfusion Services, 2010. Available at www.wadsworth.org/labcert/blood_tissue/pdf/txoptsalts0411.pdf. Accessed September 28, 2012.

(8) West KL, Adamson C, Hoffman M. Prophylactic correction of the international normalized ratio in neurosurgery: a brief review of a brief literature. Journal of Neurosurgery. 2011; 114:9-18.  http://www.ncbi.nlm.nih.gov/pubmed/20815695

(9) Dara SI, Rana R, Afessa B, Moore SB, Gajic O: Fresh frozen plasma transfusion in critically ill medical patients with coagulopathy. Critical Care Medicine. 2005; 33:2667–2671.  http://www.ncbi.nlm.nih.gov/pubmed/16276195

(10) Center for Biologics Evaluation and Research. Fatalities Reported to FDA Following Blood Collection and Transfusion. Annual Summary for Fiscal Year 2008. Available at www.fda.gov/BiologicsBloodVaccines/SafetyAvailability/ReportaProblem/TransfusionDonationFatalities. Accessed September 28, 2012.

(11) Popovsky MA and Moore SB. Diagnostic and pathogenetic considerations in transfusion-related acute lung injury. Transfusion. 1985;25(6):573–7. http://www.ncbi.nlm.nih.gov/pubmed/4071603