From the Archives - The DLO: Does FFP Correct INR?

October 5, 2017

Please enjoy this post from the archives dated September 20, 2013

By Nicole A. Lamparello, MD

Faculty Peer Reviewed

Page from the hematology laboratory: critical lab value; INR 1.9. Liver biopsy scheduled for tomorrow. What is a knowledgeable physician practicing evidence-based medicine to do?

Fresh frozen plasma (FFP) is the liquid, acellular component of blood. FFP contains water, electrolytes, and the majority of the coagulation proteins [1]. It is frequently transfused to patients with an elevated prothrombin time (PT), a measure of the activity of the common coagulation pathway (involving factors X, V, prothrombin and fibrinogen) and the activity of the extrinsic pathway (involving factor VII). In clinical practice, PT is better understood using the international normalized ratio (INR), which takes into account variability due to different thromboplastin reagents. Most commonly, FFP transfusions are administered in an effort to “correct” coagulopathy and prevent the risk of bleeding.
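For readers who want the arithmetic behind that standardization: the INR is the patient's PT divided by the laboratory's mean normal PT, raised to the power of the reagent's international sensitivity index (ISI). A minimal sketch (the function name and example values are illustrative, not from any particular laboratory):

```python
def inr(pt_patient_sec: float, pt_mean_normal_sec: float, isi: float) -> float:
    """International normalized ratio: (PT_patient / PT_mean_normal) ** ISI.

    The ISI calibrates each thromboplastin reagent against the WHO
    reference preparation, which is what makes results comparable
    across laboratories using different reagents.
    """
    return (pt_patient_sec / pt_mean_normal_sec) ** isi

# Illustrative values: a PT of 19 s against a mean normal PT of 12 s,
# with a reagent ISI near 1.0, gives an INR of about 1.6.
print(round(inr(19.0, 12.0, 1.0), 2))
```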

In the very recent past, indications for transfusion of FFP included prophylaxis in non-bleeding patients with acquired coagulation defects prior to invasive procedures. Less than eight years ago, guidelines from the New York State Council on Human Blood and Transfusion Services approved prophylactic FFP transfusion in patients with a PT or aPTT greater than 1.5 times the normal range [1,2]. However, an examination of the literature suggests that FFP does not correct the INR and that, in fact, prophylactic FFP transfusions do not result in fewer bleeding events.

One of the first studies to examine the effect of large-volume FFP transfusion on correcting the prothrombin time was conducted in 1966 by Spector and colleagues. Thirteen patients with liver disease were given 3-9 units of FFP. In only eight patients did the PT decrease to within 60% of normal activity, and the effect was short-lived: the elevation in coagulation factors declined by 50% within the first 2-4 hours after transfusion [3]. Forty years later, a study at Massachusetts General Hospital examined a larger cohort of 121 patients who had an INR between 1.1 and 1.85 and a repeat INR within 8 hours after receiving FFP. The main indications for transfusion were an elevated INR before a procedure and bleeding with an elevated INR. Normalization of the INR (<1.1) occurred in only 0.8% of patients, and correction halfway to normal occurred in only 15% of patients [4]. On average, the INR decreased by only 0.07.

Further, a retrospective study by Holland and Brooks introduced a control group to examine the effect of medical treatment alone on mildly prolonged coagulation test results. In this study, 224 patients receiving 295 FFP transfusions and a control group of 71 patients were included in the analysis. Mildly elevated INRs (1.3-1.6) decreased without FFP, through supportive care and treatment of the underlying medical condition, after a median of 8.5 hours. The proposed mechanism for this “natural correction” was treatment of the dehydration, anemia, and metabolic disturbances that had led to organ hypoperfusion, systemic hypoxia, and pH abnormalities [5]. Interestingly, a linear relationship was derived to predict the change in INR per unit of FFP:

INR change = 0.37 × (pretransfusion INR) − 0.47
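Plugging numbers into this regression makes the clinical point concrete. A small sketch (assuming, as the text describes, that the equation predicts the INR decrease per unit of FFP):

```python
def predicted_inr_change(pretransfusion_inr: float) -> float:
    """Holland and Brooks' linear fit: predicted INR decrease per unit of FFP [5]."""
    return 0.37 * pretransfusion_inr - 0.47

# For the vignette's page (INR 1.9), one unit of FFP is predicted to
# lower the INR by only ~0.23; at an INR of 1.3 the predicted change
# is ~0.01, i.e., essentially nothing.
for inr_value in (1.3, 1.7, 1.9, 2.5):
    print(inr_value, round(predicted_inr_change(inr_value), 2))
```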

While the pre-transfusion INR was predictive of the response to FFP, the strongest response was seen when the pre-transfusion INR was greater than 2, not with mild elevations; FFP was minimally effective in correcting mild INR elevations below 1.7 [5]. While a consumptive process may deplete factors faster than FFP transfusions can replace them, another likely explanation is that the dose of FFP is often inadequate. Standard doses of FFP are in the range of 15-30 mL/kg. In a 70-kg patient, a transfusion of 2 units of FFP is only 400-450 mL, or about 6 mL/kg. With this volume, coagulation factor levels are expected to increase by only about 6% and would be unlikely to change the INR significantly [6]. To achieve hemostatically adequate coagulation factor levels (20-30%), patients would need to receive a much larger volume of plasma, a potential problem in many patient populations, especially already volume-overloaded critically ill patients.
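The dosing arithmetic can be made explicit with the rule of thumb implied above: factor levels rise by roughly 1% for every mL/kg of plasma transfused. A rough sketch (the per-unit volume is an assumption; FFP units vary around 200-250 mL):

```python
UNIT_VOLUME_ML = 225.0  # assumed average volume of one unit of FFP

def expected_factor_rise_pct(units: int, weight_kg: float) -> float:
    """Approximate rise in coagulation factor levels (%), using the
    rule of thumb of ~1% rise per mL/kg of plasma transfused."""
    ml_per_kg = units * UNIT_VOLUME_ML / weight_kg
    return ml_per_kg  # ~1% rise per mL/kg

# A typical 2-unit order in a 70-kg patient delivers ~6.4 mL/kg, i.e.,
# roughly a 6% factor rise, far short of the 20-30% hemostatic target,
# which would require on the order of 15-30 mL/kg (1-2 liters) of plasma.
print(round(expected_factor_rise_pct(2, 70.0), 1))
```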

The above studies support the modification in recent guidelines, which now indicate prophylactic FFP transfusion in non-bleeding patients with hereditary coagulation defects prior to invasive procedures. In general, prophylaxis may be indicated if coagulation factor activity is <30%, or in patients with a history of recurrent significant bleeding despite a coagulation factor activity >30% [7]. Coagulation activity is determined by ordering coagulation factor levels, such as factor V and factor VII, which are assayed based on the PT [6]. These newer guidelines do not clearly define recommendations for critically ill patients or for patients with acquired coagulation deficits prior to invasive procedures.

Regardless of the guidelines, mild elevations in INR may be normal. When interpreting the INR, as with any laboratory value, it is important to remember that the “normal range” of 0.8-1.2 represents 95% of individuals; by definition, 5% of healthy individuals fall outside the reported reference range. Additionally, INR levels may be influenced by multiple factors. Most commonly, a markedly elevated hematocrit leaves a proportionally reduced volume of plasma in the collection tube, so the ratio of citrate anticoagulant to plasma is increased, prolonging the measured clotting time and spuriously elevating the INR [8]. It is also important to recognize the limitations of the INR: it is validated for patients on stable Coumadin anticoagulation, but not for patients with the coagulopathy of liver disease or isolated factor VII deficiency. In the latter settings, interlaboratory agreement may be poor and bleeding risk correlates poorly with the INR.
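As an aside to the hematocrit point above, coagulation laboratories can compensate for extreme hematocrits by adjusting the citrate volume in the collection tube. The sketch below uses the commonly cited CLSI-style adjustment formula; treat it as an illustration rather than a statement of any particular laboratory's procedure:

```python
def adjusted_citrate_ml(hematocrit_pct: float, blood_volume_ml: float) -> float:
    """Citrate volume suited to a given hematocrit:
    C = 0.00185 x (100 - Hct) x blood volume (mL).

    Standard tubes assume a roughly normal hematocrit; when the
    hematocrit is very high (> ~55%), the excess citrate relative to
    plasma prolongs the measured clotting time unless the citrate
    volume is reduced.
    """
    return 0.00185 * (100.0 - hematocrit_pct) * blood_volume_ml

# For a standard 4.5-mL draw: ~0.5 mL of citrate suits a hematocrit
# of 40%, but at a hematocrit of 65% only ~0.29 mL is needed.
print(round(adjusted_citrate_ml(40.0, 4.5), 2), round(adjusted_citrate_ml(65.0, 4.5), 2))
```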

The purpose of FFP transfusion is to lower the risk of bleeding in patients with coagulopathy. However, studies have found no difference in bleeding events between patients who receive FFP and those who do not. In a retrospective study of 115 critically ill, non-bleeding patients, 44 received FFP and 71 did not. Only 36% of the transfused group had the INR “corrected,” or reduced to <1.5, after receiving approximately 12 mL/kg of FFP. There was no statistically significant difference in INR, bleeding episodes, hospital deaths, or ICU length of stay [9]. Even more interesting, the group receiving FFP had a statistically significantly greater incidence of new-onset acute lung injury (18% vs. 4%, p=0.021). This begs the question: what are the risks of FFP, especially when the benefits are seemingly slim to none?

While infection is a well-known but rare risk of blood product transfusion, other risks of plasma therapy include severe allergic reaction, transfusion-associated volume overload (the most common), and transfusion-related acute lung injury [2]. Between 2005 and 2008, transfusion-related acute lung injury (TRALI) was the leading cause of transfusion-associated death, responsible for 35% to 65% of all fatalities [10]. TRALI is an acute pulmonary injury occurring within six hours of transfusion and presenting with tachypnea, cyanosis, hypoxemia, and decreased pulmonary compliance [11]. Plasma-containing products are most frequently implicated, and FFP was the most commonly transfused product in TRALI fatalities, accounting for 51 of 115 fatalities, or 44% [10]. One strategy in place at many blood banks to reduce TRALI risk is to use plasma from male donors only.

There is a large proportion of inappropriate FFP use in medical institutions; a glaring example is reversal of the Coumadin anticoagulant effect in patients undergoing elective procedures or surgery because of scheduling constraints. Use of FFP in non-bleeding patients with acquired coagulation deficits is more often dictated by convention than by rationally based approaches. Furthermore, physician uncertainty in a medico-legal society leads to the practice of “precautionary medicine”: in the eyes of many clinicians, withholding transfusion and accepting the risk of bleeding seems comparatively less desirable. However, awareness of the harmful consequences of FFP must be underscored among healthcare providers.

What is the take-home message? Review of recent evidence suggests that FFP does not correct mild elevations in INR, and that any reduction it produces does not last beyond several hours. Furthermore, an elevated INR does not predict bleeding in the setting of a procedure, nor do prophylactic FFP transfusions result in fewer bleeding events. Given the absence of evidence that FFP corrects the INR, in conjunction with the known risks of FFP, transfusing FFP to non-bleeding patients with mild elevations in INR cannot be supported.

Dr. Nicole Lamparello, Internal Medicine Resident, NYU Langone Medical Center

Peer reviewed by David Green, MD, Assistant Professor, Department of Medicine (Hematology/Oncology), Director, Adult Coagulation Lab, Tisch/Bellevue Hospitals

Image courtesy of Wikimedia Commons

References:

(1) New York State Council on Human Blood and Transfusion Services. Guidelines for the administration of plasma, 2004. Available at http://www.wadsworth.org/labcert/blood_tissue/FFPadminfinal1204.pdf. Accessed September 28, 2012.

(2) Gajic O, et al. Fresh frozen plasma and platelet transfusion for nonbleeding patients in the intensive care unit: benefit or harm? Critical Care Medicine. 2006;34:170-174.

(3) Spector I, Corn M, Ticktin HE. Effect of plasma transfusions on the prothrombin time and clotting factors in liver disease. New England Journal of Medicine. 1966;275:1032-7.  http://www.nejm.org/doi/full/10.1056/NEJM196611102751902

(4) Abdel-Wahab, OI, Healy B, Dzik WH. Effect of fresh-frozen plasma transfusion on prothrombin time and bleeding in patients with mild coagulation abnormalities. Transfusion. 2006; 46: 1279-1285.  http://www.ncbi.nlm.nih.gov/pubmed/16934060

(5) Holland LL and Brooks JP. Toward rational fresh frozen plasma transfusion: the effect of plasma transfusion on coagulation test results. American Journal of Clinical Pathology. 2006; 126:133-139.  http://ajcp.ascpjournals.org/content/126/1/133.abstract

(6) Ciavarella D, Reed RL, Counts RB, et al. Clotting factor levels and the risk of diffuse microvascular bleeding in the massively transfused patient. British Journal of Haematology. 1987; 67:365-368.

(7) Indications- New York State Council on Human Blood and Transfusion Services, 2010. Available at www.wadsworth.org/labcert/blood_tissue/pdf/txoptsalts0411.pdf. Accessed September 28, 2012.

(8) West KL, Adamson C, Hoffman M. Prophylactic correction of the international normalized ratio in neurosurgery: a brief review of a brief literature. Journal of Neurosurgery. 2011; 114:9-18.  http://www.ncbi.nlm.nih.gov/pubmed/20815695

(9) Dara SI, Rana R, Afessa B, Moore SB, Gajic O: Fresh frozen plasma transfusion in critically ill medical patients with coagulopathy. Critical Care Medicine. 2005; 33:2667–2671.  http://www.ncbi.nlm.nih.gov/pubmed/16276195

(10) Center for Biologics Evaluation and Research: Fatalities Reported to FDA Following Blood Collection and Transfusion. Annual Summary for Fiscal Year, 2008. Available at www.fda.gov/BiologicsBloodVaccines/SafetyAvailability/ReportaProblem/TransfusionDonationFatalities. Accessed September 28, 2012.

(11) Popovsky MA and Moore SB. Diagnostic and pathogenetic considerations in transfusion-related acute lung injury. Transfusion. 1985;25(6):573–7. http://www.ncbi.nlm.nih.gov/pubmed/4071603

From the Archives: Does Running Cause Knee Osteoarthritis?

September 15, 2017

Please enjoy this post from the archives dated September 14, 2013

By Karin Katz, MD

Faculty Peer Reviewed

Post-summer is here. Despite the heat and what feels like 100% humidity, the East River Path is packed with runners. No amount of car fumes pouring onto the path could stop those in training. Others are circling the 6-mile loop around Central Park. Or, if you are bored of running the typical routes, for a few Saturdays Park Avenue will be closed to automobile traffic. New Yorkers love to run (well, some do). And while unforeseen circumstances led to the cancellation of the NYC Marathon in 2012, in 2011 almost 47,000 people ran those well-known 26.2 miles. Given the epidemic of obesity, we should be celebrating this phenomenon! But is there any reason to advise our patients to be cautious of such strenuous weight-bearing activity? Osteoarthritis is thought of as a disease of “wear and tear” on the joints. Is running a risk factor for developing osteoarthritis? If so, is the damage worse for marathon runners? Maybe we should be advising our patients to stick to water aerobics.

Approximately 19-28% of adults aged 45 or older have knee osteoarthritis (OA) [1]. OA is a degenerative joint disease characterized by articular cartilage failure, although all structures of the joint are involved in the pathologic process. Risks for developing osteoarthritis include systemic factors (age, female gender, genetic susceptibility), intrinsic joint vulnerabilities (previous damage such as meniscal tears, muscle weakness, increased bone density, malalignment), and joint stressors (such as obesity). Individuals who are overweight or obese may have three times the risk of incident knee arthritis [2]. While the risk that obesity confers on osteoarthritis is well established, the impact of exercise on weight-bearing joints is complex. Exercise in different forms has been shown to prevent, cause, accelerate, or treat osteoarthritis.

Let’s start slow, with a study on the effects of walking on the development of knee osteoarthritis. Felson et al. published a longitudinal study of the Framingham Offspring cohort to evaluate the long-term effect of recreational exercise on the development of knee OA in older adults [3]. A total of 1,279 subjects were included, with a mean age at baseline of 53 years; most reported walking for exercise. Subjects were asked about their knee pain, and anteroposterior and lateral knee radiographs were obtained. Nine years later, subjects were reexamined for osteoarthritis. The primary outcomes of the study were incident radiographic OA, symptomatic OA, and tibiofemoral joint space loss. Walking (categorized as less than 6 miles per week, or 6 or more miles per week) was not associated with an increased or decreased risk of radiographic or symptomatic OA compared with subjects who did not walk for exercise. Joint space loss was also unaffected by this activity.

Moving on to running, a study in the American Journal of Preventive Medicine investigated differences in the progression of knee OA in middle- to older-aged runners compared with healthy non-runners over two decades [4]. This study included 45 long-distance runners and matched controls with a mean age of 58 years; most of the runners had been running for over a decade. The study examined radiographic knee OA (specifically tibiofemoral disease) on serial radiographs. At the start of the study, members of the runners’ club were running an average of 214 minutes per week; by the completion of the trial, their running time had decreased by 55%. A small proportion of the controls ran for exercise at baseline, but almost all had stopped running by the time the final radiograph was obtained. In the analysis, long-distance running was not associated with accelerated incidence or severity of radiographic OA over a mean observation time of 11.7 years. While this study has a number of limitations, perhaps the greatest is that there was no analysis of subjects’ clinical symptoms. It is also important to note that the running group was slightly younger, with a slightly lower BMI, than the control group; these differences could have confounded the results. Finally, one must also wonder: why did the runners stop running? What if a runner acquired a knee injury and had to cut down the usual workout? In such cases, longer follow-up might be necessary to detect the development of OA in “retired” runners.

Another study, in the American Journal of Sports Medicine, showed that formerly competitive runners did not have higher rates of arthritis in their hips, knees, or ankles when compared with nonrunners [5]. This retrospective study, published in 1990, included Danish male runners who had qualified for county teams from 1950 to 1955. Only 30 subjects were included; they were assessed using pain scores, clinical examination, and x-rays. There was no difference between runners and non-runners with regard to narrowing of the joint space or osteophytosis in the lower-extremity joints, nor was there a difference in range of motion of the joints. Some of the runners experienced pain, but clinical and radiographic findings in this particular group were considered normal. While this study is also not without limitations (it is small and retrospective, and much of the data are subjective), it is interesting to look at the effects of long-distance running on the joints over an extended period in this athletic population.

Although there are other conflicting data, the medical literature generally does not support the idea that running contributes to the degeneration of articular cartilage [6]. Nonetheless, more advanced imaging techniques raise new questions about this potential association. A cohort study by Luke et al. used advanced MRI techniques to detect changes that could signify early osteoarthritis [7]. The loss of proteoglycan or glycosaminoglycan may be the initiating event in OA, and this study used imaging markers to detect proteoglycan loss and collagen breakdown. The knee cartilage of 10 asymptomatic individuals was evaluated before and after a marathon (26.2 miles) using 3-T MRI (an MRI with a 3.0-tesla magnet, an advanced technique that has been used to assess the biochemical degradation of articular cartilage). The runners were between the ages of 18 and 40 years, had a BMI of less than 30, had not participated in a marathon for at least 4 months, and were matched to non-runners. Runners had their first MRI within 2 weeks before the marathon, then repeat imaging within 48 hours post-marathon and again at 10 to 12 weeks after the race.

One imaging technique used to describe the biochemical composition of cartilage is the T1rho measurement, which has been proposed for detecting damage to the proteoglycan-cartilage matrix. The study by Luke et al. demonstrated a significant increase in T1rho values on the post-marathon MRIs of runners, and the values remained elevated 3 months later. Another MRI mapping technique for assessing biochemical changes in the joint is T2 mapping, which characterizes water content and collagen degradation. T2 signals were increased immediately after the marathon but returned to baseline at 3 months. While this study shows that the biochemistry of the knee joint changes as a result of marathon running, these changes are not necessarily detrimental to the individual or their joints. The T1rho technique was previously validated in patients with OA, and T2 relaxation time measurements have also been found to be a reliable means of detecting early degenerative changes of the cartilage. However, whether changes in T1rho and T2 after running lead to joint degeneration over time is not known, and it is not clear how these changes correlate with patients’ symptoms [8,9].

Another interesting article assessing OA with advanced MRI techniques showed that moderate exercise in subjects at high risk for OA was associated with joint composition changes that could have chondroprotective effects on the knee [10]. In this study, 45 subjects who had undergone partial medial meniscus resection (a group at high risk of developing OA) were randomized to a 4-month supervised exercise regimen or to no intervention. The primary outcome was an estimate of cartilage glycosaminoglycan (GAG) content using delayed gadolinium-enhanced MRI. The investigators found that the exercise group showed an improvement in cartilage quality, suggesting that the biochemical effects of exercise on the joint can be protective rather than harmful.

At present, there is not enough long-term data to suggest that running is a risk factor for knee osteoarthritis. In addition, if patients are counseled that running is bad for their knees, this may deter them from physical exercise and deprive them of the benefits of staying active. An important consideration, however, is that in a patient with a prior knee injury, your advice may differ. Knee injury has been shown to confer a four-fold increased risk of developing knee osteoarthritis, and 50% of individuals with an ACL or meniscus tear may develop knee OA [2,11]. There is limited data to guide counseling patients on the prevention of OA after knee injury. The study by Roos et al. would suggest that some exercise is better than no exercise in such patients [10]. However, it is important to keep in mind that the exercise regimen in that study was supervised, and these patients were not running marathons. Athletes with a history of a knee injury who continue long-distance running may have a very different clinical course of disease.

Although there are more questions to be answered, this summer there’s no reason to tell your healthy runners to stick to the swimming pool and quit training for their next marathon.

Karin Katz, MD is a second-year internal medicine resident at NYU Langone Medical Center

Peer reviewed by Michael Pillinger, MD, Associate Professor, Department of Medicine, Rheumatology Division, NYU Langone Medical Center

References

1. Lawrence RC, Felson DT, Helmick CG, et al. Estimates of the prevalence of arthritis and other rheumatic conditions in the United States. Part II. Arthritis Rheum. Jan 2008;58(1):26-35.

2. Blagojevic M, Jinks C, Jeffery A, Jordan KP. Risk factors for onset of osteoarthritis of the knee in older adults: a systematic review and meta-analysis. Osteoarthritis Cartilage. Jan 2010;18(1):24-33.

3. Felson DT, Niu J, Clancy M, Sack B, Aliabadi P, Zhang Y. Effect of recreational physical activities on the development of knee osteoarthritis in older adults of different weights: the Framingham Study. Arthritis Rheum. Feb 15 2007;57(1):6-12.

4. Chakravarty EF, Hubert HB, Lingala VB, Zatarain E, Fries JF. Long distance running and knee osteoarthritis. A prospective study. Am J Prev Med. Aug 2008;35(2):133-138.

5. Konradsen L, Hansen EM, Sondergaard L. Long distance running and osteoarthrosis. Am J Sports Med. Jul-Aug 1990;18(4):379-381.

6. Willick SE, Hansen PA. Running and osteoarthritis. Clin Sports Med. Jul 2010;29(3):417-428.

7. Luke AC, Stehling C, Stahl R, et al. High-field magnetic resonance imaging assessment of articular cartilage before and after marathon running: does long-distance running lead to cartilage damage? Am J Sports Med. Nov 2010;38(11):2273-2280.

8. Li X, Benjamin Ma C, Link TM, et al. In vivo T(1rho) and T(2) mapping of articular cartilage in osteoarthritis of the knee using 3 T MRI. Osteoarthritis Cartilage. Jul 2007;15(7):789-797.

9. Nishii T, Kuroda K, Matsuoka Y, Sahara T, Yoshikawa H. Change in knee cartilage T2 in response to mechanical loading. J Magn Reson Imaging. Jul 2008;28(1):175-180.

10. Roos EM, Dahlberg L. Positive effects of moderate exercise on glycosaminoglycan content in knee cartilage: a four-month, randomized, controlled trial in patients at risk of osteoarthritis. Arthritis Rheum. Nov 2005;52(11):3507-3514.

11. Ratzlaff CR, Liang MH. New developments in osteoarthritis. Prevention of injury-related knee osteoarthritis: opportunities for the primary and secondary prevention of knee osteoarthritis. Arthritis Res Ther. 2010;12(4):215.

 

Health Care: Do Celebrities Know Best?

August 24, 2017

Please enjoy this post from the archives dated August 25, 2013

By Emma Gorynski

Faculty Peer Reviewed

The power that celebrities have over Americans is undeniable. We look to them for guidance on what to listen to, what to wear, and even what to name our children. Celebrities even affect the decisions we make about our own health care. With the increasing popularity of direct-to-consumer advertising, celebrities are promoting pharmaceuticals and other health-related products.

Is there a role for celebrities in health advocacy? On one hand, celebrities can increase public awareness of medical conditions and encourage people to be more proactive with their health. However, celebrities are not medically trained, and the uneducated public may be inappropriately swayed by their advice. The public is much more likely to listen to their favorite actress’s opinions on breast cancer screening than that of the United States Preventive Services Task Force. While we may not be able to change celebrities’ actions, we should be aware of the information our patients are receiving so that we can encourage the beneficial messages and correct any misinformation.

By discussing health issues, celebrities serve as free public health campaigns. They can increase awareness and funding and encourage early detection of diseases. In 2000, Katie Couric demonstrated the power of celebrity opinion when she advocated for colon cancer screening after the death of her husband. Aware that a screening colonoscopy is a dreaded and feared procedure for many patients, Ms. Couric underwent a live, on-air colonoscopy on The Today Show following a weeklong cancer awareness campaign.[1] The impact she had on the public, now known as the “Couric Effect,” was remarkable: Cram and colleagues found a 20% increase in screening colonoscopies after her campaign.[1]

In the same year, Michael J. Fox became the face of Parkinson’s disease after his diagnosis. The Michael J. Fox Foundation for Parkinson’s Research has raised $85 million.[2] After Olympic cyclist Lance Armstrong’s diagnosis of testicular cancer, it became fashionable to wear yellow Livestrong bracelets showing support for the battle against cancer. Since its creation in 2004, over 55 million bracelets, each costing $1, have been sold.[2] This idea has expanded to include different colored bracelets for different types of cancer: pink for breast, gray for brain, and light blue for prostate. Thanks to the money raised by these celebrity advocates, more research can be done to advance scientific and clinical knowledge about these diseases.

But at what cost do these celebrity health endorsements come? While most people regard the efforts of celebrities as beneficial and even heroic, we must consider the negative impact they can have on the healthcare system. Celebrity recommendations are often not based on evidence, and media coverage of a celebrity’s illness can lead to inappropriate healthcare budget prioritization, as shown by a study in the International Journal of Epidemiology. Kelaher and colleagues studied the incidence of breast imaging, image-guided biopsy, and cancer excisions among 25- to 44-year-old Australian women following singer Kylie Minogue’s widely publicized breast cancer diagnosis in 2005.[3] Breast imaging in this population increased by 20% following Minogue’s diagnosis; however, the number of breast cancers requiring surgical excision did not change. This suggests that there was excessive breast imaging in women for whom routine screening was not recommended. (See reference 4 for the National Cancer Institute’s recommendations on breast cancer screening.)

Just as people seek to emulate celebrities’ taste in clothes and style, they seek the same treatment that celebrities receive for their illnesses. Unfortunately, many people fail to realize that celebrities can invest more money, time, and effort in health care than the average American. For example, following his equestrian accident in 1995, Christopher Reeve received extensive treatment for his spinal cord injury. He reported recovering the ability to move his left index finger, his feet, and his right wrist, and regaining sensation over much of his body, attributed to electrical stimulation therapy.[5] Patients with spinal cord injuries were hopeful that they too could have such a recovery. However, Reeve received a great deal of free care and equipment because of his celebrity status and was able to spend hundreds of thousands of dollars of his own money on his medical care. These resources are not available to the general public, and such stories can create a false sense of hope.

In this age of celebrity worship, the public, as well as the medical community, is easily influenced by media coverage of celebrity health campaigns. While publicity about a celebrity’s illness can provide an opportunity to promote public health, it can also result in inappropriate health care spending. These factors need to be considered by both doctors and patients to ensure proper treatment for the individual and society at large.

By Emma Gorynski, 4th year medical student at NYU School of Medicine

Peer reviewed by Ishmeal Bradley, MD, Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons

References

1) Cram P, Fendrick AM, Inadomi J, Cowen ME, Carpenter D, Vijan S. The impact of a celebrity promotional campaign on the use of colon cancer screening: the Katie Couric effect. Arch Intern Med. 2003:163(13):1601-1605.  http://www.ncbi.nlm.nih.gov/pubmed/12860585

2) Van Dusen A. Celebrity health causes. www.forbes.com. http://www.forbes.com/2006/11/21/celebrities-health-causes-forbeslife-health-cx_avd_1122medical.html.  Published November 22, 2006. Accessed March 5, 2012.

3) Kelaher M, Cawson J, Miller J, Kavanagh A, Dunt D, Studdert DM. Use of breast cancer screening and treatment services by Australian women aged 25-44 years following Kylie Minogue’s breast cancer diagnosis. Int J Epidemiol. 2008:37(6):1326-1332.  http://www.ncbi.nlm.nih.gov/pubmed/18515324

4) National Cancer Institute. Breast cancer screening (PDQ®). http://www.cancer.gov/cancertopics/pdq/screening/breast/HealthProfessional.  Updated March 30, 2012.  Accessed February 19, 2012.

5) Blakeslee S. Christopher Reeve regains some movement, doctor says. New York Times. http://www.nytimes.com/2002/09/12/health/12REEV.html.  Published September 12, 2002. Accessed March 5, 2012.

From the Archives: Are We Too Hesitant to Anticoagulate Elderly Patients with Atrial Fibrillation? A Risk-Benefit Analysis

August 17, 2017

Please enjoy this post from the archives dated June 28, 2013

By Sunny N. Shah, MD

Faculty Peer Reviewed

Background:

Atrial fibrillation (AF) is the most common cardiac arrhythmia, and its prevalence increases with age. In fact, the lifetime risk of AF is approximately 25% by age 80, with prevalence nearly doubling with each decade of life after age 50. (1) Multiple randomized controlled trials have shown that oral antithrombotic therapy with warfarin or aspirin decreases the risk of ischemic stroke in patients with AF. (2-6) Meta-analyses reveal a relative risk reduction of approximately 60% with warfarin and 20% with aspirin as compared with placebo. (7) Pooled analyses from these studies show a less than 0.3% increase in the absolute risk of intracranial hemorrhage with antithrombotic therapy. (7) This risk is much lower than the annual risk of ischemic stroke, which is increased more than five-fold in patients with atrial fibrillation who are not receiving anticoagulation. (8)

Given these results, the current American Heart Association/American College of Cardiology/European Society of Cardiology guidelines for the management of AF recommend the use of anticoagulation based on a patient’s risk of ischemic stroke. This risk is determined using known risk factors such as those identified by the CHA2DS2-VASc scoring system. (9) These guidelines thus give a Class IA recommendation to the statement: “the selection of the antithrombotic agent should be based upon the absolute risk of stroke and bleeding and the relative risk and benefit for a given patient.” (9)
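For reference, the CHA2DS2-VASc score mentioned above is a simple additive tally of well-defined risk factors. A minimal sketch (the function and parameter names are my own):

```python
def cha2ds2_vasc(chf: bool, htn: bool, age: int, diabetes: bool,
                 stroke_tia_te: bool, vascular_disease: bool,
                 female: bool) -> int:
    """CHA2DS2-VASc stroke-risk score for atrial fibrillation.

    One point each for CHF, hypertension, diabetes, vascular disease
    (prior MI, peripheral arterial disease, or aortic plaque),
    age 65-74, and female sex; two points each for age >= 75 and
    prior stroke/TIA/thromboembolism. Maximum score: 9.
    """
    score = int(chf) + int(htn) + int(diabetes) + int(vascular_disease) + int(female)
    score += 2 if stroke_tia_te else 0
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    return score

# An 80-year-old woman with hypertension: 2 (age) + 1 (HTN) + 1 (sex) = 4.
print(cha2ds2_vasc(chf=False, htn=True, age=80, diabetes=False,
                   stroke_tia_te=False, vascular_disease=False, female=True))
```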

Dilemma:

It is also evident in clinical practice that the elderly are at increased risk for falls. Studies demonstrate that approximately one-third of community-dwelling individuals over the age of 65 fall every year. (10) In addition, those who fall once are at increased risk of additional falls, and approximately 10% of falls result in serious injury. Accordingly, studies have shown that physicians are hesitant to prescribe antithrombotic therapy to elderly patients with atrial fibrillation who are perceived to be at increased risk of falling, for fear of precipitating an intracranial hemorrhage. One review of a hospital’s electronic medical records revealed that the most common reason cited for not prescribing warfarin to a patient with AF was bleeding risk from falls. (11) A survey of residents, fellows, and attending physicians likewise highlighted fall risk as the most common reason for not anticoagulating a hypothetical nursing home resident with atrial fibrillation. (12)

Is this fear justified?

Review of Literature:

A prospective study of over 500 patients discharged on oral anticoagulation for AF showed no significant difference in the risk of major bleeds between those deemed at high risk for falls and those at low risk (8.0 versus 6.8 events per 100 patient-years, p = 0.64). (13) In a retrospective study comparing 1,245 patients at high risk of falls with 18,261 other patients with AF, there was a significant difference in rates of intracranial hemorrhage between the high-risk and low-risk groups (2.8 versus 1.1 events per 100 patient-years, p < 0.005). However, for a composite outcome of out-of-hospital death or hospitalization for stroke, myocardial infarction, or hemorrhage, the hazard ratio with warfarin therapy was 0.75 (95% CI 0.61 to 0.91, p = 0.004). This led the authors to conclude that in patients at high risk for stroke (those with a CHADS2 score of 2 or greater), the overall benefit of warfarin therapy outweighed the risks in their study. (14)

Given the potential discrepancy in the interpretation of these studies, a decision analysis was conducted to help guide antithrombotic therapy for AF in elderly patients at risk for falls. The authors used a Markov decision model to calculate that an individual taking warfarin would need to fall about 295 times in one year before the risks of warfarin outweighed its benefits. (15) Of course, a major concern about the validity of this result is the potential imprecision in the values of the input variables used to calculate this “number needed to fall” (e.g., the incidence of falls per year and the risk of ICH per fall). The authors concluded that a patient’s tendency to fall should not play a significant role in the decision to prescribe anticoagulant therapy in this patient population.
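The logic behind the “number needed to fall” can be sketched as a simple break-even calculation: the expected annual benefit of warfarin must be offset by the expected harm accumulated per fall. The inputs below are illustrative placeholders, not the values used in the published Markov model; with the paper’s actual parameters, the break-even point was about 295 falls per year. (15)

```python
# Break-even falls per year, in the spirit of the decision analysis. (15)
# All numbers below are illustrative assumptions for exposition only.
strokes_prevented_per_year = 0.03   # assumed absolute risk reduction on warfarin
utility_loss_per_stroke = 0.6       # assumed severity weight for a major stroke
excess_ich_risk_per_fall = 0.0001   # assumed added chance of serious bleed per fall
utility_loss_per_ich = 0.8          # assumed severity weight for an ICH

annual_benefit = strokes_prevented_per_year * utility_loss_per_stroke
harm_per_fall = excess_ich_risk_per_fall * utility_loss_per_ich

# Falls per year at which expected harm equals expected benefit.
breakeven_falls_per_year = annual_benefit / harm_per_fall
print(round(breakeven_falls_per_year))  # 225 with these made-up inputs
```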

The same authors followed up their original study with a subsequent appraisal of the literature to determine whether the presence of additional clinical risk factors for bleeding affected the risk of hemorrhage in elderly patients anticoagulated for AF. (16) Their literature search identified three such risk factors for intracranial hemorrhage: hypertension, participation in activities that increase the risk of head trauma, and prior stroke (which is associated with increased rates of intracranial bleeding on anticoagulation). They concluded that these, too, should not change the decision to anticoagulate patients with AF and a high risk of stroke. (16)

A subsequent case-control study analyzed the level of anticoagulation, age, and stroke risk in AF patients on warfarin. The authors showed that the risk of ICH is increased in patients over the age of 85 compared with those aged 70-74 (OR 2.5, 95% CI 1.3 to 4.7). (17) They thus advocated strict monitoring of INR values and avoidance of an INR > 3.5, which they found to be associated with increased risk of ICH.

A more recent review analyzed the above studies, among others, and concluded that physicians are still guided by their own concerns about a patient’s risk of hemorrhage rather than by the risk of ischemic/embolic stroke, and that warfarin is underused in the treatment of AF in the elderly as a result of these concerns. (18)

Summary and Conclusions:

The decision to anticoagulate an elderly person with AF at high risk for stroke is a common problem faced by physicians. The benefit of warfarin in reducing the risk of stroke in patients with AF is indisputable, as is its superior efficacy compared with aspirin. Some studies have shown that rates of ICH may be comparable between patients treated with aspirin and those on well-managed warfarin regimens. (19,20) Importantly, this review has focused on the use of warfarin in patients with AF; as data on the newer anticoagulants continue to accrue, risk-benefit analyses of these agents should be conducted as well.

Because stroke risk increases with age, the elderly stand to benefit the most from warfarin therapy. Therefore, most experts would recommend an individualized risk-benefit analysis for each patient, with careful attention to the risk of ischemic stroke in this vulnerable population, and in general would advocate warfarin therapy in those at high risk of stroke (i.e., a CHADS2 or CHA2DS2-VASc score of 2 or greater) despite their fall risk. This is especially true for patients who remain adherent to their regimen and have well-controlled INRs. Importantly, the patient should be included in a conversation regarding the risks, benefits, and lifestyle changes that go along with chronic anticoagulation therapy.

To help allay physician concern and potentially decrease the risk of ICH in an elderly patient, more frequent INR checks should be obtained to ensure that the INR remains at the goal of 2 to 3. In addition, minimizing a patient’s fall risk through environmental changes, medication management, and treatment of any underlying diseases that contribute to falls should be emphasized. (21)

Dr. Sunny N. Shah is a 3rd year Resident at NYU Langone Medical Center

Peer reviewed by Rob Donnino, MD, Cardiology Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons

References

1. Magnani JW, Rienstra M, Lin H, Sinner MF, Lubitz SA, McManus DD, et al. Atrial fibrillation: current knowledge and future directions in epidemiology and genomics. Circulation. 2011;124(18):1982-93.  http://www.ncbi.nlm.nih.gov/pubmed/22042927

2. Warfarin versus aspirin for prevention of thromboembolism in atrial fibrillation: Stroke Prevention in Atrial Fibrillation II Study. Lancet. 1994;343(8899):687-91.  http://www.ncbi.nlm.nih.gov/pubmed/7907677

3. The effect of low-dose warfarin on the risk of stroke in patients with nonrheumatic atrial fibrillation. The Boston Area Anticoagulation Trial for Atrial Fibrillation Investigators. N Engl J Med. 1990;323(22):1505-11.

4. Petersen P, Boysen G, Godtfredsen J, Andersen ED, Andersen B. Placebo-controlled, randomised trial of warfarin and aspirin for prevention of thromboembolic complications in chronic atrial fibrillation. The Copenhagen AFASAK study. Lancet. 1989;1(8631):175-9.

5. Connolly SJ, Laupacis A, Gent M, Roberts RS, Cairns JA, Joyner C. Canadian Atrial Fibrillation Anticoagulation (CAFA) Study. J Am Coll Cardiol. 1991;18(2):349-55.

6. Ezekowitz MD, Bridgers SL, James KE, Carliner NH, Colling CL, Gornick CC, et al. Warfarin in the prevention of stroke associated with nonrheumatic atrial fibrillation. Veterans Affairs Stroke Prevention in Nonrheumatic Atrial Fibrillation Investigators. N Engl J Med. 1992;327(20):1406-12.

7. Hart RG, Pearce LA, Aguilar MI. Meta-analysis: antithrombotic therapy to prevent stroke in patients who have nonvalvular atrial fibrillation. Ann Intern Med. 2007;146(12):857-67.  http://www.ncbi.nlm.nih.gov/pubmed/17577005

8. Wolf PA, Abbott RD, Kannel WB. Atrial fibrillation as an independent risk factor for stroke: the Framingham Study. Stroke. 1991;22(8):983-8.  http://www.ncbi.nlm.nih.gov/pubmed/1866765

9. Fuster V, Ryden LE, Cannom DS, Crijns HJ, Curtis AB, Ellenbogen KA, et al. ACC/AHA/ESC 2006 guidelines for the management of patients with atrial fibrillation–executive summary: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the European Society of Cardiology Committee for Practice Guidelines (Writing Committee to Revise the 2001 Guidelines for the Management of Patients With Atrial Fibrillation). J Am Coll Cardiol. 2006;48(4):854-906.

10. Tinetti ME, Speechley M, Ginter SF. Risk factors for falls among elderly persons living in the community. N Engl J Med. 1988;319(26):1701-7.  http://www.ncbi.nlm.nih.gov/pubmed/3205267

11. Rosenman MB, Baker L, Jing Y, Makenbaeva D, Meissner B, Simon TA, et al. Why is warfarin underused for stroke prevention in atrial fibrillation? A detailed review of electronic medical records. Curr Med Res Opin. 2012;28(9):1407-14.  http://www.ncbi.nlm.nih.gov/pubmed/22746356

12. Dharmarajan TS, Varma S, Akkaladevi S, Lebelt AS, Norkus EP. To anticoagulate or not to anticoagulate? A common dilemma for the provider: physicians’ opinion poll based on a case study of an older long-term care facility resident with dementia and atrial fibrillation. J Am Med Dir Assoc. 2006;7(1):23-8.

13. Donze J, Clair C, Hug B, Rodondi N, Waeber G, Cornuz J, et al. Risk of falls and major bleeds in patients on oral anticoagulation therapy. Am J Med. 2012;125(8):773-8.

14. Gage BF, Birman-Deych E, Kerzner R, Radford MJ, Nilasena DS, Rich MW. Incidence of intracranial hemorrhage in patients with atrial fibrillation who are prone to fall. Am J Med. 2005;118(6):612-7. http://www.ncbi.nlm.nih.gov/pubmed/15922692

15. Man-Son-Hing M, Nichol G, Lau A, Laupacis A. Choosing antithrombotic therapy for elderly patients with atrial fibrillation who are at risk for falls. Arch Intern Med. 1999;159(7):677-85.

16. Man-Son-Hing M, Laupacis A. Anticoagulant-related bleeding in older persons with atrial fibrillation: physicians’ fears often unfounded. Arch Intern Med. 2003;163(13):1580-6.

17. Fang MC, Chang Y, Hylek EM, Rosand J, Greenberg SM, Go AS, et al. Advanced age, anticoagulation intensity, and risk for intracranial hemorrhage among patients taking warfarin for atrial fibrillation. Ann Intern Med. 2004;141(10):745-52.  http://www.ncbi.nlm.nih.gov/pubmed/15545674

18. Sellers MB, Newby LK. Atrial fibrillation, anticoagulation, fall risk, and outcomes in elderly patients. Am Heart J. 2011;161(2):241-6.  http://www.ncbi.nlm.nih.gov/pubmed/21315204

19. Mant J, Hobbs FD, Fletcher K, Roalfe A, Fitzmaurice D, Lip GY, et al. Warfarin versus aspirin for stroke prevention in an elderly community population with atrial fibrillation (the Birmingham Atrial Fibrillation Treatment of the Aged Study, BAFTA): a randomised controlled trial. Lancet. 2007;370(9586):493-503.

20. Secondary prevention in non-rheumatic atrial fibrillation after transient ischaemic attack or minor stroke. EAFT (European Atrial Fibrillation Trial) Study Group. Lancet. 1993;342(8882):1255-62.  http://www.ncbi.nlm.nih.gov/pubmed/7901582

21. Garwood CL, Corbett TL. Use of anticoagulation in elderly patients with atrial fibrillation who are at risk for falls. Ann Pharmacother. 2008;42(4):523-32.  http://www.ncbi.nlm.nih.gov/pubmed/18334606

From the Archives: Decoding the APOL1 Kidney

May 4, 2017

Please enjoy this post from the archives dated September 25, 2013

By Areeba Sadiq

Faculty Peer Reviewed

African American patients have a higher risk of developing end-stage renal disease (ESRD) than their Caucasian counterparts [1]. Among those over the age of 70, that risk is 3 times higher; between the ages of 60 and 69, it is 8 times higher; and between 30 and 39, African American patients are an astounding 11 times more likely to develop ESRD [1]. Why are African Americans more likely to develop ESRD? What is different about the African American kidney?

In end-stage renal disease (ESRD), the kidneys are unable to perform their daily task of filtering wastes and excreting excess fluid, and the patient requires either dialysis or transplantation. This failure is often preceded by chronic kidney disease (CKD), defined as an estimated glomerular filtration rate <60 mL/min/1.73 m2 or a urinary albumin-to-creatinine ratio of at least 30 mg/g for at least 3 months [1,2]. Hypertension undoubtedly worsens renal disease through an insidious hyalinization of the afferent arterioles, known as nephrosclerosis, that eventually leads to glomerular damage and proteinuria [3]. Not only do African Americans have a higher risk of developing ESRD from CKD, but they also progress toward ESRD at a greater rate than their Caucasian counterparts [1,4]: for every new case of ESRD per 100 CKD cases in a Caucasian source population, there are 5 new cases of ESRD per 100 CKD cases in an African American source population [4].
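To make the CKD definition at the start of this paragraph concrete, here is a small sketch using the 4-variable MDRD equation that was in wide use when this post was written (it includes the race coefficient of that era); this is an illustration, not clinical software:

```python
def egfr_mdrd(scr_mg_dl: float, age_years: int, female: bool, black: bool) -> float:
    """IDMS-traceable 4-variable MDRD estimate, in mL/min/1.73 m^2."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def meets_ckd_definition(egfr: float, acr_mg_g: float) -> bool:
    """CKD if eGFR < 60 mL/min/1.73 m^2 or urine ACR >= 30 mg/g
    (the >= 3-month chronicity requirement is not modeled here)."""
    return egfr < 60 or acr_mg_g >= 30

# Illustrative example: serum creatinine 1.8 mg/dL in a 55-year-old black man.
egfr = egfr_mdrd(1.8, 55, female=False, black=True)
print(round(egfr, 1), meets_ckd_definition(egfr, acr_mg_g=10.0))
```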

Some explanations have included differences in socioeconomic status and systolic blood pressure between African Americans and Caucasians with ESRD. A prospective study looking at men ages 35 to 57 showed that both higher blood pressure and lower income were associated with higher incidence of ESRD for both white and African American men [5]. When adjusting for age, more African American men developed hypertensive ESRD, with a risk ratio of 5.16 (95% CI 3.64-7.31). However, when adjusting for both age and systolic blood pressure, this risk ratio reduced to 3.84 (95% CI 2.68-5.48). Likewise, when adjusting for both age and median income, the risk ratio reduced further to 2.83 (95% CI 1.80-4.45).

The finding that controlling for systolic blood pressure yielded a more accurate representation of relative risk suggested that an intervention like intensive pharmacologic blood pressure control in African Americans could slow the progression of CKD to ESRD. The African American Study of Kidney Disease and Hypertension (AASK) was a randomized trial of blacks ages 18 to 70 with hypertensive CKD that sought to evaluate whether intensive blood pressure control (mean arterial pressure target of 92 mmHg or less) could reduce progression to specific outcomes: doubling of serum creatinine, ESRD, or death [6]. Surprisingly, when compared with standard control (mean arterial pressure target of 102-107 mmHg), intensive blood pressure control did not significantly reduce progression to those outcomes (hazard ratio 0.91; 95% CI 0.77 to 1.08; P=0.27). Of note, intensive blood pressure control did appear to benefit those with baseline proteinuria, defined as a protein-to-creatinine ratio greater than 0.22 [6].

Therefore, neither socioeconomic status nor higher systolic blood pressure alone could explain the extent of the differences observed. External influences and environment are only a part of the picture. Perhaps there is something intrinsically different about the African American kidney.

So begins the story of the apolipoprotein L1 (APOL1) gene on the western banks of the African continent, a major stop along the transatlantic slave trade at its 18th-century peak. APOL1, located on chromosome 22q, encodes a minor component of high-density lipoprotein (HDL) [7]. More importantly, its secreted product, apolipoprotein L1 (ApoL1), lyses Trypanosoma brucei brucei, a flagellate protozoan transmitted by tsetse fly bites and closely related to the agents of African trypanosomiasis (“sleeping sickness”), thereby providing people in endemic regions with resistance to infection [7]. T brucei rhodesiense, on the other hand, evolved to evade lysis by ApoL1. However, two allelic variants of APOL1, G1 and G2, confer a heterozygote advantage that, in essence, reverses this evolution, making T brucei rhodesiense susceptible to lysis once again. This protection against sleeping sickness was conserved and perpetuated; however, it came at a cost. African Americans today, who inherited the G1 and G2 alleles from their ancestors in Yoruba and western Africa, show an increased risk of kidney disease [7]. A genome-wide analysis demonstrated that carrying 2 risk alleles, compared with none, conferred an odds ratio of 7.3 (95% CI 5.6-9.5) for developing renal diseases including focal segmental glomerulosclerosis, HIV-associated nephropathy, and “hypertension-associated” ESRD [8].

Interestingly, Lipkowitz and colleagues returned to the original AASK trial and examined 675 of its African American participants with hypertensive CKD. They found that the presence of both APOL1 risk alleles was associated with “hypertensive” kidney disease when compared with controls, with an odds ratio of 2.57 (95% CI 1.85 to 3.55) [9]. Severe cases with proteinuria had even higher odds ratios. There is now strong evidence that the APOL1 risk alleles are significantly associated with the progression of CKD. What was thought to be “hypertensive kidney disease” turned out, in fact, to be APOL1-associated kidney disease: hypertension was a manifestation rather than a cause of the disease.

Aside from its trypanolytic abilities, the function of the APOL1 product in the kidney is unknown. Its role will be the key to understanding why African Americans are more prone to CKD progression. The applications are numerous. The AASK trial showed that African Americans with mild-to-moderate renal insufficiency benefited more from use of ramipril, an ACE inhibitor, than metoprolol, a beta blocker, or amlodipine, a calcium channel blocker [6,10]. Ramipril showed a 44% risk reduction in progression to ESRD or death and a 20% decrease in proteinuria, whereas amlodipine increased proteinuria by 58% [6,10]. Is there a relationship between the presence of risk alleles and efficacy of certain drugs at reducing progression to ESRD? This possibility has not yet been studied. Additionally, the presence of APOL1 risk alleles is significantly associated with shorter allograft survival of donor kidneys [11]. Could screening for APOL1 risk alleles become a new marker for donor organ longevity?

The presence of APOL1 risk alleles does not guarantee that one will develop kidney disease. Therefore, environment, socioeconomic status, high systolic blood pressure, and all the extrinsic factors that alone did not explain rapid progression of CKD might together, with this new genetic component, lead to formulation of a better understanding of this special phenotype that disproportionately affects the African American population. The implications are vast: we are in the process of defining a subset of chronic kidney disease that has the potential to be identified through new diagnostic means and treated with a different clinical approach with regard to time course and drug therapy.

Commentary by David S. Goldfarb MD, FASN

One would have to consider the discovery of APOL1 one of the most important mechanistic discoveries in nephrology in recent years. Barry Freedman’s insight that there was a gene for “kidney failure,” and that kidney failure was a distinct phenotype uniting diabetic nephropathy, HIV-associated nephropathy, lupus, and others, was a huge breakthrough. He faced considerable naysaying and skepticism over the years and has now been vindicated in a major way. The recognition that APOL1 variants became widespread in the African population because of their postulated protective effect against trypanosomiasis, much the way sickle cell variants protected against malaria, makes the story even more incredible.

The other historical perspective is the remembrance that I was trained to think that “hypertensive nephropathy” was vastly overdiagnosed; that training was completely correct. Most patients with “hypertensive nephropathy” have the diagnosis made only after some other cause of chronic kidney disease has been missed. Since hypertension is frequently due to CKD, it is incorrectly assigned as the cause of the reduced GFR rather than recognized as a secondary manifestation of CKD. Perhaps hypertension is responsible for some chronic nephropathy in white populations, in whom APOL1 does not account for CKD. But when ESRD statistics cite hypertension as the cause of 25-30% of American ESRD, they are clearly overstating the case.

Areeba Sadiq is a 2nd year medical student at NYU School of Medicine

Peer reviewed by David Goldfarb, MD, Clinical Chief, Nephrology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. United States Renal Data System. Annual data report: atlas of chronic kidney disease and end-stage renal disease in the United States. http://www.usrds.org/2012/view/v1_01.aspx.  Published 2012. Accessed January 23, 2013.

2. Levey AS, Coresh J, Balk E, et al. National Kidney Foundation practice guidelines for chronic kidney disease: evaluation, classification, and stratification. Ann Intern Med. 2003;139(2):137-147.  http://www.ncbi.nlm.nih.gov/pubmed/12859163

3. Bidani AK, Griffin KA. Pathophysiology of hypertensive renal damage: implications for therapy. Hypertension. 2004;44(5):595-601.  http://hyper.ahajournals.org/content/44/5/595.full

4. Hsu C, Lin F, Vittinghoff E, Shlipak M. Racial differences in the progression from chronic renal insufficiency to end-stage renal disease in the United States. J Am Soc Nephrol. 2003; 14(11):2902–2907.

5. Klag MJ, Whelton PK, Randall BL, Neaton JD, Brancati FL, Stamler J. End-stage renal disease in African-American and white men. 16-year MRFIT findings. JAMA. 1997;277(16):1293-1298.  http://www.ncbi.nlm.nih.gov/pubmed/9109467

6. Appel LJ, Wright JT Jr, Greene T, et al. Intensive blood-pressure control in hypertensive chronic kidney disease. N Engl J Med. 2010;363(10):918-929.  http://www.nejm.org/doi/full/10.1056/NEJMoa0910975

7. Kopp JB, Nelson GW, Sampath K, et al. APOL1 genetic variants in focal segmental glomerulosclerosis and HIV-associated nephropathy. J Am Soc Nephrol. 2011;22:2129–2137.  http://www.ncbi.nlm.nih.gov/pubmed/21997394

8. Genovese G, Friedman DJ, Ross MD, et al. Association of trypanolytic ApoL1 variants with kidney disease in African Americans. Science. 2010;329(5993):841–845.  http://www.ncbi.nlm.nih.gov/pubmed/20647424

9. Lipkowitz MS, Freedman BI, Langefeld CD, et al. Apolipoprotein L1 gene variants associate with hypertension-attributed nephropathy and the rate of kidney function decline in African Americans. Kidney Int. 2013;83(1):114-120.  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3484228/

10. Sica DA. The African American Study of kidney disease and hypertension (AASK) trial: what more have we learned? J Clin Hypertens (Greenwich). 2003;5(2):159–167. http://www.ncbi.nlm.nih.gov/pubmed/12671332

11. Reeves-Daniel AM, DePalma JA, Bleyer AJ, et al. The APOL1 gene and allograft survival after kidney transplantation. Am J Transplant. 2011;11(5):1025–1030.  http://www.ncbi.nlm.nih.gov/pubmed/21486385

From the Archives: Did Abraham Lincoln Have Marfan Syndrome?

April 13, 2017

Please enjoy this post from the archives dated April 19, 2013

By Anna Krigel

Faculty Peer Reviewed
The iconic image of Abraham Lincoln is ubiquitous in our lives, from his small face on the penny to his large figure looming over the National Mall in Washington, D.C. Lincoln fascinates historians because of his significant role in American history when our nation was bitterly divided, but he intrigues physicians because of his remarkable stature. A reporter once described the 16th president as a “tall, lank, lean man considerably over six feet in height with stooping shoulders, long pendulous arms terminating in hands of extraordinary dimensions, which, however, were far exceeded in proportion by his feet” [1]. The president’s strikingly tall and lanky build, his long, thin face, and especially his enormous hands and feet first sparked the notion that Lincoln might have had Marfan syndrome. Geneticists and historians have debated this idea since it was first proposed in the early 1960s [3-5].

The French pediatrician Antoine-Bernard Marfan first described Marfan syndrome at the turn of the 20th century, 30 years after Lincoln’s assassination, in a young girl with long digits and several other skeletal abnormalities. The incidence of Marfan syndrome is estimated to be 2-3 per 10,000 people, and it is passed in an autosomal dominant fashion in families or caused by de novo mutations. These mutations affect the gene encoding the extracellular matrix protein fibrillin-1 and damage the connective tissue. The disorder manifests in multiple body systems, most prominently the skeletal, ocular, and cardiovascular systems. Patients may have overgrowth of the long bones, long fingers, loose joints, dislocation of the ocular lens, early myopia, and thickening of the heart valves leading to mitral valve prolapse and variable degrees of valve regurgitation. The most life-threatening manifestations of the disorder are aortic aneurysm and dissection, but improved recognition and treatment of these outcomes have made life expectancy in Marfan syndrome nearly normal [7].

In 1962, Dr. A. M. Gordon, a Cincinnati physician, was the first to suggest that Lincoln had Marfan syndrome, based on the president’s physical appearance and the similarly tall and lanky appearance of his mother [2,3]. Two years later, a California cardiologist named Harold Schwartz published an article describing a 7-year-old patient with Marfan syndrome whose ancestry he traced back to Lincoln’s great-great grandfather, Mordecai Lincoln II [1,4]. A debate about the president and Marfan syndrome ensued in 1964 in the form of letters to the editor of JAMA. Gordon and Schwartz supported the diagnosis based on Lincoln’s skeletal structure but argued over whether he inherited the mutation from his mother or from his father. In the same debate, Dr. J. Willard Montgomery denied that Lincoln had Marfan syndrome at all, citing the president’s strength and athletic prowess [3-5].

Schwartz contributed to the debate again in 1972 with an anecdote about a photograph of the president. Lincoln reportedly observed that the foot of his crossed leg appeared blurry in the photograph, and a newspaperman named Noah Brooks suggested that the throbbing of the arteries might have caused the motion in the president’s leg. Lincoln tested the idea by crossing his legs and, upon watching his crossed foot, exclaimed, “That’s it! That’s it! Now that’s very curious, isn’t it?” Schwartz argued that the blurriness of the foot was due to pulsations of the large arteries associated with aortic insufficiency, a defect found in Marfan syndrome [6].
Despite these suggestive features, Lincoln was not known to be loose-jointed, he was never known to have a heart murmur, there was no mention of aortic abnormalities at his autopsy, and he was not known to have the ocular abnormalities associated with Marfan syndrome [8,9]. For these reasons, many scientists have called the diagnosis into question [9].

In a recent article, Dr. John Sotos, a cardiologist with an interest in the medical history of America’s presidents, proposes a new theory about the president’s genetics in the context of more recently recognized marfanoid syndromes, such as those caused by mutations in the transforming growth factor-beta receptor. Another condition that produces a marfanoid habitus is multiple endocrine neoplasia type 2B (MEN2B), a cancer syndrome caused by mutations in the RET proto-oncogene and characterized by mucosal neuromas, medullary thyroid cancer, pheochromocytoma, and marfanoid body habitus. Sotos uses information about the appearance of Lincoln’s mother, Nancy Hanks Lincoln, to propose that Nancy and Abraham both suffered from the same marfanoid disorder, and that this disorder may have been MEN2B. According to an Indiana minister who knew several of Lincoln’s cousins, Nancy was “quite tall…bony, angular, lean…She had long arms, large head, with the forehead exceedingly broad…with chest sunken.” Nancy and Abraham shared many of the facial features common to the marfanoid facies, including a thin face and prominent chin. The two may also have had skeletal muscle hypotonia, which could account for their melancholic expressions; muscular hypotonia, which is distinguished from weakness, is a prominent feature of MEN2B. Nancy died at age 34 of what was described as a “wasting death,” which may be an indication of a cancer syndrome. Sotos concludes that these elements of Nancy’s history support a diagnosis of MEN2B, but the diagnosis can never be proven without testing her DNA [9].

The testing of Lincoln’s DNA was suggested and disputed in the 1990s, after scientists identified the gene for Marfan syndrome. Several objects containing Lincoln’s DNA from the night of his assassination could, in principle, be tested, including the bloody shirt cuffs of a young surgeon on the scene, the pistol ball that lodged behind his right eye, locks of hair, and even small fragments of the president’s skull. A committee of geneticists, forensic scientists, and lawyers convened in 1991 to decide whether to test Lincoln’s DNA for a mutation in fibrillin-1. The most important ethical question the committee encountered was whether such testing would violate Lincoln’s privacy. The most compelling reason it found to proceed was the betterment of the community of disabled individuals, especially those with Marfan syndrome, who would be encouraged to learn that one of the most significant figures in American history lived with a genetic disorder. No member of the committee knew what Lincoln would have wanted, but they felt confident that he would have supported the testing of his DNA if it were helpful to others. Ultimately, the committee decided against testing Lincoln’s DNA for Marfan syndrome, not because it would violate his privacy, but because the growing number of mutations found in Marfan families made the testing too technically difficult [1]. Without DNA testing, we may never know whether Lincoln carried the mutation in his genes.
From the extensive research of historians and geneticists, it now seems less likely that the president had Marfan syndrome and more probable that he had some other marfanoid syndrome, possibly MEN2B. More importantly, we can confidently surmise that Lincoln did have a genetic disorder, passed to him in an autosomal dominant fashion from his mother. According to Sotos, “Nancy and Abraham had an almost perfect concordance for a large number of unusual craniofacial and marfanoid skeletal features…there can be little doubt that Nancy had the same marfanoid disorder–whatever it was–as her son” [9]. The discovery of Lincoln’s likely genetic disorder is particularly significant to those with marfanoid syndromes. One patient with Marfan syndrome said, “The fact that Lincoln may have had Marfan syndrome shows those of us that we too can contribute something of value to society…It’s time that all people, especially medical ethicists, realize that having the Marfan syndrome is not shameful, it’s just darned inconvenient” [10]. The community of patients with genetic disorders can now be assured that this well-respected figure, whose iconic face is carved into Mt. Rushmore, almost certainly had a genetic mutation, and it did not hinder his many achievements.

By Anna Krigel, 4th year medical student at NYU School of Medicine

Peer reviewed by Ann Garment, MD, Department of Medicine (GIM Div.), NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Reilly PR. Abraham Lincoln’s DNA and Other Adventures in Genetics. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press; 2000. http://books.google.com/books/about/Abraham_Lincoln_s_DNA_and_Other_Adventur.html?id=1mvLjIJUV_IC

2. Gordon AM. Abraham Lincoln–a medical appraisal. J Ky Med Assoc. 1962;60:249-253.

3. Gordon AM. Lincoln-Marfan debate. JAMA. 1964;189(2):164. http://jama.jamanetwork.com/article.aspx?articleid=1163795

4. Schwartz H. Lincoln-Marfan debate. JAMA. 1964;189(2):164.

5. Montgomery JW. Lincoln-Marfan debate. JAMA. 1964;189(2):164-165.

6. Schwartz H. Abraham Lincoln and aortic insufficiency. The declining health of the President. Calif Med. 1972;116(5):82-84.

7. Judge DP, Dietz HC. Marfan’s syndrome. Lancet. 2005;366(9501):1965-1976. http://www.ncbi.nlm.nih.gov/pubmed/16325700

8. Kroen C. Abraham Lincoln and the ‘Lincoln sign.’ Cleve Clin J Med. 2007;74(2):108-110.

9. Sotos JG. Abraham Lincoln’s marfanoid mother: the earliest known case of multiple endocrine neoplasia type 2B? Clin Dysmorphol. 2012;21(3):131-136. http://www.ncbi.nlm.nih.gov/pubmed/22504423

10. McKusick VA. The defect in Marfan syndrome. Nature. 1991;352(6333):279-281.

From The Archives – In Search of a Competitive Advantage: A Primer for the Clinician Treating the Anabolic Steroid User

January 26, 2017

Please enjoy this post from the archives dated April 17, 2013

By David G. Rosenthal and Robert Gianotti, MD

Faculty Peer Reviewed

Case: A 33-year-old man comes to your clinic complaining of worsening acne over the last 6 months. You note a significant increase in both BMI and bicep circumference. After several minutes of denial, he reveals that he has been using both injectable and oral anabolic steroids. He receives these drugs from a local supplier and via the Internet. He confides that his libido has dramatically increased and he feels increasingly pressured at work, describing several recent altercations. He admits that these symptoms are a small price to pay for the amazing performance gains he has seen at the gym. He plans to compete in a local deadlifting tournament at the end of the month. He asks you if he is at increased risk for any health problems and whether short-term use is associated with any long-term consequences. You quickly realize that you have no idea what literature exists on the health consequences of anabolic steroids. Fortunately, you have set the homepage on your web browser to Clinical Correlations. Together, you read…

The recreational use of anabolic steroids has drawn increasing international attention over the last decade due to their use and abuse by athletes and bodybuilders. Athletes including Arnold Schwarzenegger, cyclist Lance Armstrong, baseball slugger Mark McGwire, and Olympic gold medal sprinter Marion Jones have all come under scrutiny for using steroids to gain a competitive advantage and shatter records. In fact, the 1990s are remembered in Major League Baseball as the “Steroids Era.” Critics argue that these substances contradict the nature of competition and are dangerous given the abundance of reported side effects. Accordingly, the vast majority of sporting associations have banned the use of anabolic steroids, and their possession without a prescription is illegal in the United States, punishable by up to one year in prison. Nevertheless, the performance-enhancing, aesthetic, and financial benefits of anabolic steroids have led to rampant abuse by both professional and high school athletes, with an astonishing 3.9% of students having tried anabolic steroids at least once during high school 1, 2.

Anabolic steroids are synthetic derivatives of testosterone, the primary male sex hormone. Androgenic effects of testosterone include maturation of secondary sex characteristics in both males and females, development of typical hair patterns, and prostate enlargement, while its anabolic effects include strength gains and bone maturation via regulation of protein metabolism 3. Administration of exogenous testosterone causes upregulation of the androgen receptor in skeletal muscle, resulting in increased muscle fiber size and number 4. Anabolic steroids can be absorbed directly through the skin, injected, or taken orally. Synthetic oral steroids, including methyltestosterone and fluoxymesterone, are 17-alpha-alkylated, which renders them resistant to first-pass metabolism by the liver and may contribute to increased hepatotoxicity 5.

Much of the public opinion about anabolic steroids has been shaped by individual testimonies and well-publicized user narratives. While thousands of articles have been published in scientific journals describing both the desired and adverse effects of anabolic steroid abuse, a number of these studies have drawn questionable conclusions due to flawed methodologies, inadequate sample sizes, study biases, and, most importantly, the inability to replicate the actual drug dosages used by many athletes. The regimens of many steroid users often involve doses twenty-fold higher than those previously examined in the literature 6. Hence, the precise effects of the supraphysiologic doses of steroids that are commonly abused may never be known.

Strength, endurance, and reduced recovery time are all attributes that the competitive athlete strives to obtain. Historically, institutions and even governments have dabbled in performance enhancement for competitive athletes. It has been well documented that Communist-era East Germany sought to build superior athletes to compete in the Olympic Games and flex the nation’s muscles on the world stage. Documents describing the effects of anabolic steroids, including oral Turinabol, on Olympic athletes in East Germany from 1968-1972 showed remarkable improvements in strength sports: discus throws increased by 11-20 meters; shot put distance improved by 4.5-5 meters; hammer throw increased by 6-10 meters; and javelin throw increased by 8-15 meters 7. The strength gains among East German female athletes were most notable, as were the side effects, including hirsutism, amenorrhea, severe acne, and voice deepening. In fact, when a rival coach commented on the voice changes of the competitors, the East German coach responded, “We came here to swim, not sing” 8. Following the implementation of “off-season” steroid screening by the International Olympic Committee and other competitive organizations in 1989, track and field sports saw a dramatic reduction in performance. Notably, the longest javelin throw by a female in the 1996 Olympics was 35 feet shorter than the world record of 1988.

The gains seen with anabolic steroid use extend beyond the Olympic athlete to recreational bodybuilders and gym-rats. In a small placebo-controlled study from the Netherlands, a ten-week course of injectable nandrolone in a cohort of recreational bodybuilders increased lean body mass by an average of 2-5 kg, with no accompanying increase in fat mass or fluid retention 9. These effects persisted for more than 6 weeks after the cessation of nandrolone. Surprisingly, performance enhancement can be seen with anabolic steroids even in the absence of exercise. One study of healthy young men between the ages of 18 and 35, whose endogenous androgen production was suppressed with a long-acting GnRH agonist, showed that supraphysiologic doses of testosterone enanthate administered for 20 weeks caused a dose-dependent 15% increase in muscle size and a 20% increase in muscle strength without any exercise 10. This study came as a logical follow-up to a smaller study published in the New England Journal of Medicine in 1996 that showed impressive performance gains compared to placebo among both exercising and sedentary subgroups. At 10 weeks, the testosterone-plus-exercise group was able to bench press a mean of 10 kg more than both the testosterone-alone and exercise-alone subgroups 11.

The performance gains from steroids have also been shown to extend into the seventh and eighth decades of life. A 2003 study in men aged 65-80 showed significant gains compared to placebo in both lean body mass and single-repetition chest press after receiving either 50 mg or 100 mg of the orally bioavailable steroid oxymetholone. The men in the 100-mg group improved their chest-press strength by 13.9% +/- 8.1% (p<0.03) relative to placebo and gained 4.2 +/- 2.4 kg (p<0.001) of lean body mass 12. Many athletes also report that anabolic steroids increase endurance and decrease recovery time after workouts. This is supported in the literature, where indirect measures of fatigue, such as increased serum lactate and elevated heart rate, were delayed after the injection of nandrolone decanoate, with a notable improvement in recovery time 4.

We now know from a small but significant pool of data that the performance gains from anabolic steroids are real and can be seen not only in elite athletes but in casual users as well. The existing data regarding the side effects of anabolic steroids are varied and rely heavily on self-reported outcomes and dosing regimens that are often variable and combine multiple unique drugs.

One method of obtaining data regarding the adverse effects of anabolic steroid abuse is by employing questionnaires. While this method is inherently biased, it may be the only way to obtain data from subjects using very high doses that would be considered unsafe or unethical in higher quality studies. Regardless of the method of data collection, it has been well established that up to 40% of male and 90% of female steroid users self-report adverse side effects including aggression, depression, increased sexual drive, fluid retention, hypertension, hair loss, and gynecomastia 4. Other reported side effects include increased levels of the hormone erythropoietin, leading to an increased red blood cell count; vocal cord enlargement, leading to voice deepening; and increased risk of sleep apnea.

Exogenous administration of steroids can have immediate and profound effects on the reproductive system, largely mediated through disruption of the hypothalamic-pituitary-gonadal axis. Within 24 hours of use, steroids cause a dramatic decrease in follicle-stimulating hormone and luteinizing hormone, which can result in azoospermia in males and menstrual irregularities in females within weeks, and infertility within months 13, 14. Supraphysiologic testosterone concentrations result in virilization of females, characterized by hirsutism, clitoromegaly, amenorrhea, and voice deepening 15. When steroids are abused for longer periods of time, men can suffer from hypogonadotropic hypogonadism, manifested by testicular atrophy, as well as gynecomastia due to peripheral conversion of the exogenous testosterone to estrogen 15. Some athletes try to increase their sperm count by using human chorionic gonadotropin or clomiphene, both commonly used as female fertility drugs, but the efficacy of these hormones is debated; moreover, they do not reduce gynecomastia 4. Drugs such as finasteride (Propecia), routinely used to treat male-pattern baldness and benign prostatic hyperplasia, are also commonly taken; by blocking the conversion of testosterone to dihydrotestosterone, finasteride can modestly increase testosterone levels. Although there have been reports of prostatic hypertrophy in steroid users, there is no known association with the development of prostate adenocarcinoma 16, 17.

Adverse cardiovascular outcomes in steroid abusers, including cardiomyopathy, arrhythmia, stroke, and sudden cardiac death, have been reported 18. However, causation has often been inappropriately attributed solely to anabolic steroid use, and the data can be misleading due to confounding variables and study biases 4. The structural, functional, and chemical changes associated with steroid abuse are crucial to consider because many of the reported effects are independent risk factors for cardiovascular disease.

A study published in Circulation in 2010 evaluated left ventricular function in a cohort of weightlifters (n=12) with self-reported anabolic steroid use compared with age-matched weightlifting controls (n=7). After adjusting for body surface area and exercise, the investigators found a significant reduction in left ventricular systolic function (EF 50.6% vs 59.1%, p=0.003) 19, and the association remained statistically significant even after controlling for prior drug use, including alcohol and cocaine. Interestingly, there appeared to be no relationship between cumulative anabolic steroid use and ventricular dysfunction, although the authors note limitations due to the small sample size and the bias of self-reported data.

Other studies investigating cardiovascular outcomes of anabolic steroids suggest a transient increase in both systolic and diastolic blood pressure in steroid users, although these values return to baseline within weeks of cessation 20. In addition, long-term use of anabolic steroids can lead to increased platelet aggregation, possibly contributing to increased risk for myocardial infarction and cerebrovascular events 18.

Anabolic steroids cause a variable increase in LDL and up to a 40-70% decrease in HDL; because the rise in LDL offsets the fall in HDL, total plasma cholesterol may appear unchanged, a misleading finding that masks a markedly atherogenic lipid profile 21. Fortunately, these effects are reversible within 3 months of cessation of the agent 22. The 17-alpha-alkylated steroids can cause a 40% reduction in apolipoprotein A-1, a major component of HDL, while injectable testosterone has been shown to produce a more modest 8% reduction 23. Although these effects are reversible with cessation, they underscore the importance of screening anabolic steroid users for lipid abnormalities.
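
A simple back-of-the-envelope illustration, using hypothetical round numbers rather than values from any of the cited studies, shows why total cholesterol alone can mask these shifts:

Before steroids: total cholesterol = LDL + HDL + VLDL = 100 + 50 + 20 = 170 mg/dL
After steroids: LDL rises 25% and HDL falls 50%, so total = 125 + 25 + 20 = 170 mg/dL

The total is unchanged, yet the LDL:HDL ratio has deteriorated from 2:1 to 5:1, which is why a full lipid panel, not total cholesterol alone, is needed to detect steroid-induced dyslipidemia.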

Steroid use has been linked with a number of hepatic diseases. The use of oral steroids is associated with a transient increase in transaminase levels, although some data suggest that this may be due to muscle damage from bodybuilding rather than from liver damage 24. The link between 17-alpha-alkylated steroids and hepatomas, peliosis hepatis (a rare vascular phenomenon resulting in multiple blood filled cavities within the liver), and hepatocellular carcinoma has been suggested in case studies, but no causal relationship has been established 25.

Possibly the most publicized adverse effect of steroid use is psychological, popularly coined “roid rage.” In one study using self-reported data, 23% of steroid users acknowledged major mood symptoms, including depression, mania, and psychosis 26. Most studies, however, report only subtle psychiatric alterations in the majority of patients, with few patients experiencing significant mood disorders 27. Nevertheless, a 2006 cohort study from Greece found a dose-dependent association between steroid use and psychopathology that was driven by significant increases in hostility, aggression, and paranoia (P<0.001) 28. While this topic needs further research, it does lend credence to the theory that “roid rage” exists and that its effects are exacerbated by higher doses of steroids.

Conclusion:

The former baseball all-star Jose Canseco once claimed that “steroids, used correctly, will not only make you stronger and sexier, they will also make you healthier” 29. Although current research suggests that steroid abuse is not independently associated with increased mortality 16, and many of the adverse effects are rare and reversible with cessation of use, there is a dearth of knowledge about the effects of the regimens actually used, and the long-term side effects of these drugs are largely unknown.

Based on the paucity of quality data and the frightening implications of metabolic derangements, heart failure, and infertility, your patient leaves convinced that he has made a poor decision in choosing to use anabolic steroids. He pledges to quit immediately and defer competing in the deadlifting tournament until next year, after a “washout” period. He is eager to disseminate his newfound knowledge at the local gym, but not before he makes a stop at GNC to load up on creatine supplements and whey protein.

David G. Rosenthal is a 4th year medical student at NYU Langone Medical Center, and Robert Gianotti, MD, is Associate Editor, Clinical Correlations

Peer reviewed by Loren Greene, MD, Clinical Associate Professor, Department of Medicine (endocrine division) and Obstetrics and Gynecology

Image Courtesy of Wikimedia Commons

Bibliography:

1. Eaton D, Kann L, Kinchen S, et al. Youth risk behavior surveillance – United States, 2009. MMWR. Surveillance summaries. 2010;59(5):1-142.  http://www.cdc.gov/mmwr/preview/mmwrhtml/ss5905a1.htm

2. Handelsman DJ, Gupta L. Prevalence and risk factors for anabolic-androgenic steroid abuse in Australian secondary school students. Int J Androl. 1997;20:159-164.

3. Kochakian CD. History, chemistry and pharmacodynamics of anabolic-androgenic steroids. Wiener medizinische Wochenschrift. 1993;143(14-15):359-363.

4. Hartgens F, Kuipers H. Effects of androgenic-anabolic steroids in athletes. Sports medicine. 2004;34(8):513-554.  http://www.ncbi.nlm.nih.gov/pubmed/15248788

5. Stimac D, Milić S, Dintinjana R, Kovac D, Ristić S. Androgenic/anabolic steroid-induced toxic hepatitis. Journal of clinical gastroenterology. 2002;35(4):350-352.

6. Wilson JD. Androgen abuse by athletes. Endocrine reviews. 1988;9(2):181-199.  http://edrv.endojournals.org/content/9/2/181.abstract

7. Franke WW, Berendonk B. Hormonal doping and androgenization of athletes: a secret program of the German Democratic Republic government. Clinical Chemistry. 1997;43(7):1262-1279.  http://www.ncbi.nlm.nih.gov/pubmed/9216474

8. Janofsky M. Coaches Concede That Steroids Fueled East Germany’s Success in Swimming. New York Times. December 3, 1991.

9. Hartgens F, Van Marken Lichtenbelt WD, Ebbing S, Vollaard N, Rietjens G, Kuipers H. Body composition and anthropometry in bodybuilders: regional changes due to nandrolone decanoate administration. International journal of sports medicine. 2001;22(3):235-241.

10. Bhasin S, Woodhouse L, Casaburi R, et al. Testosterone dose-response relationships in healthy young men. American journal of physiology: endocrinology and metabolism. 2001;281(6):E1172-E1181.

11. Bhasin S, Storer TW, Berman N, et al. The effects of supraphysiologic doses of testosterone on muscle size and strength in normal men. The New England journal of medicine. 1996;335(1):1-7.

12. Schroeder ET, Terk M, Sattler F. Androgen therapy improves muscle mass and strength but not muscle quality: results from two studies. American journal of physiology: endocrinology and metabolism. 2003;285(1):E16-E24.  http://ajpendo.physiology.org/content/285/1/E16.long

13. Bijlsma JW, Duursma SA, Thijssen JH, Huber O. Influence of nandrolondecanoate on the pituitary-gonadal axis in males. Acta endocrinologica. 1982;101(1):108-112.

14. Torres Calleja J, González-Unzaga M, DeCelis Carrillo R, Calzada-Sánchez L, Pedrón N. Effect of androgenic anabolic steroids on sperm quality and serum hormone levels in adult male bodybuilders. Life sciences. 2001;68(15):1769-1774.

15. Martikainen H, Alén M, Rahkila P, Vihko R. Testicular responsiveness to human chorionic gonadotrophin during transient hypogonadotrophic hypogonadism induced by androgenic/anabolic steroids in power athletes. The Journal of steroid biochemistry. 1986;25(1):109-112.

16. Fernández-Balsells MM, Murad M, Lane M, et al. Clinical review 1: Adverse effects of testosterone therapy in adult men: a systematic review and meta-analysis. The Journal of clinical endocrinology and metabolism. 2010;95(6):2560-2575.

17. Bain J. The many faces of testosterone. Clin Interv Aging. 2007;2(4):567-576.

18. Vanberg P, Atar D. Androgenic anabolic steroid abuse and the cardiovascular system. Handbook of experimental pharmacology. 2010;(195):411-457.

19. Baggish A, Weiner R, Kanayama G, et al. Long-term anabolic-androgenic steroid use is associated with left ventricular dysfunction. Circulation. Heart failure. 2010;3(4):472-476.

20. Kuipers H, Wijnen JA, Hartgens F, Willems SM. Influence of anabolic steroids on body composition, blood pressure, lipid profile and liver functions in body builders. International journal of sports medicine. 1991;12(4):413-418.

21. Glazer G. Atherogenic effects of anabolic steroids on serum lipid levels. A literature review. Archives of internal medicine. 1991;151(10):1925-1933.

22. Hartgens F, Rietjens G, Keizer HA, Kuipers H, Wolffenbuttel BHR. Effects of androgenic-anabolic steroids on apolipoproteins and lipoprotein (a). British journal of sports medicine. 2004;38(3):253-259.

23. Thompson PD, Cullinane EM, Sady SP, et al. Contrasting effects of testosterone and stanozolol on serum lipoprotein levels. JAMA (Chicago, Ill.). 1989;261(8):1165-1168.

24. Dickerman RD, Pertusi RM, Zachariah NY, Dufour DR, McConathy WJ. Anabolic steroid-induced hepatotoxicity: is it overstated? Clinical journal of sport medicine. 1999;9(1):34-39.

25. Overly WL, Dankoff JA, Wang BK, Singh UD. Androgens and hepatocellular carcinoma in an athlete. Annals of Internal Medicine. 1984;100(1):158-159.

26. Pope HG, Katz DL. Psychiatric and medical effects of anabolic-androgenic steroid use. A controlled study of 160 athletes. Archives of general psychiatry. 1994;51(5):375-382.

27. Pope HG, Kouri EM, Hudson JI. Effects of supraphysiologic doses of testosterone on mood and aggression in normal men: a randomized controlled trial. Archives of general psychiatry. 2000;57(2):133-140.

28. Pagonis T, Angelopoulos N, Koukoulis G, Hadjichristodoulou C. Psychiatric side effects induced by supraphysiological doses of combinations of anabolic steroids correlate to the severity of abuse. European psychiatry. 2006;21(8):551-562.

29. Canseco J. Juiced: Wild Times, Rampant ‘Roids, Smash Hits, and How Baseball Got Big. Philadelphia, PA: Reed Elsevier Inc.; 2005.

30. Young NR, Baker HW, Liu G, Seeman E. Body composition and muscle strength in healthy men receiving testosterone enanthate for contraception. J Clin Endocrinol Metab. 1993;77:1028-1032.

From The Archives: The Effect of Bariatric Surgery on Incretin Hormones and Glucose Homeostasis

October 13, 2016
Please enjoy this post from the archives dated April 4, 2013

By Michael Crist

Faculty Peer Reviewed

Until recently, little thought was given to the important role played by the duodenum, jejunum, and ileum in glucose homeostasis. The involvement of the gut in glucose regulation is mediated by the enteroinsular axis, which refers to the neural and hormonal signaling pathways that connect the gastrointestinal (GI) tract with pancreatic beta cells. These pathways are largely responsible for the increase in insulin that occurs during the postprandial period. In 1964, McIntyre and colleagues first reported the phenomenon of oral glucose administration eliciting a greater insulin response than a similar amount of glucose infused intravenously [1]. This observation, later named the incretin effect, reflects the action of certain gut hormones within the enteroinsular axis that promote insulin secretion [2]. Although many hormones are believed to contribute, the two that play the most significant role in nutrient-stimulated insulin secretion are glucagon-like peptide 1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP) [3,4]. GLP-1 is synthesized by L-cells found predominantly in the ileum and colon [5], and GIP is secreted from K-cells found predominantly in the duodenum [6]. Both GLP-1 and GIP are secreted in response to nutrients within the gut and are powerful insulin secretagogues, accounting for roughly 50% of postprandial insulin secretion [7]. Both have been shown to promote pancreatic beta cell proliferation and survival [7]. GLP-1 has also been shown to inhibit glucagon secretion and gastric emptying while promoting satiety and weight loss [7].

The incretin effect progressively diminishes with the onset of type 2 diabetes in a process that contributes to disordered glucose metabolism. GLP-1 secretion is significantly lower in type 2 diabetics than in non-diabetic individuals, and GIP loses its insulinotropic properties [8]. Malabsorptive bariatric surgery operations, which alter GI tract anatomy, have been shown to affect incretin hormone profiles and glucose homeostasis [9,10]. Many patients show a postoperative return to normal plasma glucose, plasma insulin, and glycosylated hemoglobin levels and discontinue the use of diabetes-related medication [10,11]. The dramatic resolution of diabetes and the return to euglycemia often occur within one week of surgery, before a significant amount of weight loss occurs [12,13]. In 2001 Pories and Albrecht reported long-term glycemic control in 91% of patients 14 years after they underwent malabsorptive bariatric surgery [12]. Furthermore, the improvement in insulin sensitivity postoperatively has been shown to prevent the progression from impaired glucose tolerance to diabetes [12].

The exact mechanism through which malabsorptive bariatric surgery improves glucose homeostasis is unclear. Much of the evidence in support of bariatric surgery as treatment for diabetes comes from studies that have focused on Roux-en-Y gastric bypass and biliopancreatic diversion (which results in enteral nutrition passing directly from the stomach to the ileum) [9,10]. Both of these procedures surgically alter the GI tract such that nutrient chyme bypasses the duodenum and the proximal jejunum. Many initially hypothesized that enhanced nutrient delivery to the distal intestine promotes a physiological signal that ameliorates glucose metabolism. Enhanced GLP-1 secretion as a result of expedited nutrient delivery to the L-cell-rich ileum has been proposed as a mechanism that contributes to this process [14,15]. Alternatively, the exclusion of nutrient flow through the duodenum and proximal jejunum may interrupt a signaling pathway that confers insulin resistance [11,12].

Rubino and colleagues tested both theories in a non-obese rat model of type 2 diabetes by comparing glucose tolerance among 3 surgery groups and one non-operated control. Of the 3 surgery groups, one underwent duodenal-jejunal bypass (DJB), which excluded nutrient passage from the proximal foregut, resulting in early nutrient delivery to the distal gut. Another group underwent gastrojejunostomy (in which a surgical anastomosis was created between the stomach and jejunum while preserving the normal connection between the stomach and duodenum), allowing both the normal passage of nutrients through the foregut and enhanced nutrient delivery to the hindgut. In effect, both the DJB and gastrojejunostomy promoted nutrient delivery to the ileum, whereas only the DJB procedure excluded the duodenum from nutrient passage. The third group was a sham-operated control. The DJB group showed better glucose tolerance than all other study groups even though there were no differences in food intake, body weight, or nutrient absorption [16]. Furthermore, when the DJB group underwent a second operation to restore nutrient passage through the foregut, glucose tolerance deteriorated; when the gastrojejunostomy group underwent a second operation to exclude nutrient passage from the foregut, glucose tolerance improved [16]. These findings suggest that exclusion of the duodenum and proximal jejunum is a necessary component of surgical interventions aimed at improving glucose tolerance.

Bariatric surgery holds great potential in the treatment of type 2 diabetes mellitus (T2DM) and will likely play an increasingly important role in diabetes management. Improved glucose regulation following malabsorptive bariatric procedures is likely multifactorial, with alterations in gut microflora and the beneficial effects of weight loss contributing over time. Changes in gut hormone secretion profiles, however, appear to play an important role in the initial improvements reported in glucose homeostasis.

FIGURE 1. Interventions. A, Duodenal-jejunal bypass (DJB). This operation does not impose any restriction to the flow of food through the gastrointestinal tract. The proximal small intestine is excluded from the transit of nutrients, which are rapidly delivered more distally in the small bowel. Food exits the stomach and enters the small bowel at 10 cm from the ligament of Treitz, and digestive continuity is reestablished approximately 25% of the way down the jejunum. B, Gastrojejunostomy (GJ). This operation consists of a simple anastomosis between the distal stomach and the first quarter of the jejunum. The site of the jejunum that is anastomosed to the stomach is chosen at the same distance as in DJB (10 cm from the ligament of Treitz). Hence, the DJB and GJ share the feature of enabling early delivery of nutrients to the same level of small bowel. In contrast to DJB, the GJ does not involve exclusion of duodenal passage, and nutrient stimulation of the duodenum is maintained. C, Ileal bypass (ILB). This operation reduces intestinal fat absorption by preventing nutrients from passing through the distal ileum, where most lipids are absorbed.

Michael Crist is a 4th year medical student at NYU School of Medicine

Peer reviewed by Natalie Levy, MD, Department of Medicine (GIM Div.) NYU Langone Medical Center


From The Archives: Have a Cow? How Recent Studies on Red Meat Consumption Apply to Clinical Practice

October 6, 2016
Please enjoy this post from the archives, dated April 12, 2013

By Tyler R. McClintock

Faculty Peer Reviewed

“Red Meat Kills.” “Red Meat a Ticket to Early Grave.” “A Hot Dog a Day Raises Risk of Dying.” Such were the headlines circulating in the popular press last year when the Archives of Internal Medicine released details of an upcoming article out of Frank Hu’s research group at the Harvard School of Public Health [1-3]. Analyzing long-term prospective data from two large cohort studies, researchers found that individuals who ate a serving of unprocessed red meat each day had a 13% higher risk of mortality during the study period. The numbers were even grimmer for processed meats: a one-serving-per-day increase in such foods as bacon or hot dogs was associated with a 20% increase in mortality risk. Hu and colleagues ultimately concluded that 9.3% of the observed deaths in men and 7.6% of the deaths in women could have been avoided had participants consumed less than 0.5 daily servings (42 g) of red meat [4].
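As a back-of-the-envelope illustration of where such “avoidable deaths” figures come from (a simplified sketch only, not the authors’ exact method), population attributable fractions are classically estimated with Levin’s formula, where p is the prevalence of the exposure and RR its relative risk:

PAF = [p × (RR – 1)] / [1 + p × (RR – 1)]

With purely hypothetical values of p = 0.5 and RR = 1.2 (the processed-meat estimate above), this gives PAF ≈ 0.09, the same order of magnitude as the reported 9.3%.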

While this study received a great deal of media buzz, it is merely the latest in a long line of studies over the past decade that have tried to better understand how red meat consumption may impact the development of chronic disease. Indeed, our own research group recently set out to answer that same question, although through a different approach: focusing on dietary patterns rather than specific diet elements. Compared to the “single nutrient” or “single food” approach, this analytic method more fully accounts for biochemical interactions between nutrients, as well as interrelationships between dietary components that make it difficult to distinguish individual food or nutrient effects. We followed over 11,000 individuals in Bangladesh for nearly 7 years, identifying distinct dietary patterns as well as the associations between these patterns and risk of adverse cardiovascular outcomes. In short, we found that adherence to an animal-protein dietary pattern was associated with an increased risk of death from cardiovascular disease overall, especially heart disease. In fact, after stratifying adherence to the animal-protein pattern into 4 levels, the most adherent group had twice the risk of heart disease mortality compared with the least adherent. While striking, these results inevitably raise the question of what role red meat in particular played in the increased mortality, as it was only one component of the more unhealthy diet [5].

The contrasting analytical approaches in these two studies highlight the difficulty in fully understanding how red meat may affect cardiovascular health and mortality. It is believed that adverse outcomes from red meat intake are mediated mainly through the effects of high saturated fat on blood low-density lipoprotein and other cholesterol levels, although high sodium content in processed red meat may also play a role by elevating blood pressure and impairing vascular compliance. Additionally, nitrate preservatives, which are used in processed meats, have been shown in experimental models to reduce insulin secretion, impair glucose tolerance, and promote atherosclerosis [6].

Although multiple studies have shown an association between red meat and cardiovascular disease [7-10], the magnitude of risk remains debatable. Of two recent meta-analyses, for example, one found equivocal evidence for the influence of meat on cardiovascular disease [11], while the other showed consumption of processed red meat, but not unprocessed red meat, to be associated with risk of coronary heart disease [6]. Much of this uncertainty is likely due to inconsistencies in study design, as well as in how each study defines meat intake and meat types (distinguishing what constitutes “red,” “processed,” or “lean”). Taking all of this into consideration, the best current evidence still seems to indicate that red meat consumption at very high levels conveys increased risk of cardiovascular disease, with processed meats likely increasing that risk further. This is similar to what has been observed with respect to type 2 diabetes and colon cancer, as red meat (particularly processed meat) has been linked to a higher risk of both [12-15].

A more complete understanding of healthy eating and advisable intake of red meat is of vital importance. Although cardiovascular disease remains the world’s leading cause of death, it has been posited that over 90% of cases may be preventable simply by modifying diet and lifestyle [16-18]. A recent literature review summarized foods that are protective against cardiovascular disease: vegetables, nuts, and monounsaturated fats, as well as Mediterranean, prudent, and high-quality diets [11]. Conversely, as discussed above, current evidence indicates most convincingly that high intake of processed red meats, particularly as part of a Western diet, carries significant risk for increased mortality and adverse cardiovascular outcomes. Many questions, though, remain unanswered: namely, to what extent unprocessed red meat can be grouped with its processed counterpart in terms of health risks, and what risk reduction may be possible by substituting lean red meats for either processed or unprocessed meat (a question not yet addressed in any large prospective study) [19].

Without a full understanding of red meat’s health effects, clinicians must settle for the best available evidence when counseling patients in need of dietary guidance. The 2010 US Dietary Guidelines for Americans advise moderation of red meat intake, mainly because of the expected effect of its saturated fat and cholesterol on blood cholesterol [20]. However, with unprocessed and processed red meats having similar levels of saturated fat yet distinctly different clinical outcomes, current dietary recommendations on meat consumption appear to rest almost solely on the “avoidance of fat” postulate. The resultant recommendations, neither comprehensive nor specific, are justifiably limited by our current level of understanding. Until the health effects of preservatives in processed meats, and the potential risk reduction from substituting lean meats for standard red meat, are elucidated, it is nearly impossible to make more nuanced or quantitative recommendations.

So how does all of this impact the day-to-day practice of a clinician, particularly one in primary care? There will likely never come a day when it is realistic to counsel or expect every patient to avoid red meat completely. In light of recent evidence, though, it is certainly justifiable to recommend moderation, particularly with respect to processed types. Until further research establishes hard-and-fast guidelines, qualitative guidance will remain the best evidence-based advice that physicians can offer. In other words, if a patient’s going to have a cow (or lamb or pork, for that matter), emphasize moderation and recommend that it not be processed.

Tyler R. McClintock is a 4th year medical student and M.D./M.S. candidate in the Department of Environmental Medicine at New York University School of Medicine. Under the direction of Dr. Yu Chen, his research focuses on how environmental and dietary factors are related to the risk of chronic diseases.

Peer reviewed by Michelle McMacken, MD, Department of Medicine (GIM Div.), NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Wanjek C. Red meat a ticket to early grave, Harvard says. Yahoo! Daily News. March 12, 2012. http://article.wn.com/view/2012/03/12/Red_Meat_a_Ticket_to_Early_Grave_Harvard_Says/#/related_news. Accessed May 23, 2012.

2. Dale R. Red meat ‘kills.’ The Sun. March 13, 2012.

3. Ostrow N. A hot dog a day raises risk of dying, Harvard study finds. Bloomberg Businessweek. March 12, 2012. http://www.businessweek.com/news/2012-03-12/a-hot-dog-a-day-raises-risk-of-dying-harvard-study-finds.  Accessed March 23, 2012.

4. Pan A, Sun Q, Bernstein AM, et al. Red meat consumption and mortality: results from 2 prospective cohort studies. Arch Intern Med. 2012;172(7):555-563.

5. Chen Y, McClintock TR, Segers S, et al. Prospective investigation of major dietary patterns and risk of cardiovascular mortality in Bangladesh. Int J Cardiol. 2012 May 3. [Epub ahead of print]

6. Micha R, Wallace SK, Mozaffarian D. Red and processed meat consumption and risk of incident coronary heart disease, stroke, and diabetes mellitus: a systematic review and meta-analysis. Circulation. 2010;121(21):2271-2283.

7. Fraser GE. Associations between diet and cancer, ischemic heart disease, and all-cause mortality in non-Hispanic white California Seventh-day Adventists. Am J Clin Nutr. 1999;70(3 Suppl):532S-538S.

8. Sinha R, Cross AJ, Graubard BI, Leitzmann MF, Schatzkin A. Meat intake and mortality: a prospective study of over half a million people. Arch Intern Med. 2009;169(6):562-571.  http://www.ncbi.nlm.nih.gov/pubmed/19307518

9. Kelemen LE, Kushi LH, Jacobs DR Jr, Cerhan JR. Associations of dietary protein with disease and mortality in a prospective study of postmenopausal women. Am J Epidemiol. 2005;161(3):239-249.

10. Kontogianni MD, Panagiotakos DB, Pitsavos C, Chrysohoou C, Stefanidis C. Relationship between meat intake and the development of acute coronary syndromes: the CARDIO2000 case-control study. Eur J Clin Nutr. 2008;62(2):171-177.

11. Mente A, deKoning L, Shannon HS, Anand SS. A systematic review of the evidence supporting a causal link between dietary factors and coronary heart disease. Arch Intern Med. 2009;169(7):659-669.  http://www.ncbi.nlm.nih.gov/pubmed/19364995

12. McAfee AJ, McSorley EM, Cuskelly GJ, et al. Red meat consumption: an overview of the risks and benefits. Meat Sci. 2010;84(1):1-13.  http://www.ncbi.nlm.nih.gov/pubmed/20374748

13. Fung TT, Schulze M, Manson JE, Willett WC, Hu FB. Dietary patterns, meat intake, and the risk of type 2 diabetes in women. Arch Intern Med. 2004;164(20):2235-2240.

14. Pan A, Sun Q, Bernstein AM, et al. Red meat consumption and risk of type 2 diabetes: 3 cohorts of US adults and an updated meta-analysis. Am J Clin Nutr. 2011;94(4):1088-1096.

15. Larsson SC, Wolk A. Meat consumption and risk of colorectal cancer: a meta-analysis of prospective studies. Int J Cancer. 2006;119(11):2657-2664.

16. Lopez AD, Mathers CD. Measuring the global burden of disease and epidemiological transitions: 2002-2030. Ann Trop Med Parasitol. 2006;100(5-6):481-499.

17. Yusuf S, Reddy S, Ounpuu S, Anand S. Global burden of cardiovascular diseases: part I: general considerations, the epidemiologic transition, risk factors, and impact of urbanization. Circulation. 2001;104(22):2746-2753.

18. Ornish D. Dean Ornish on the world’s killer diet. TED Talk. Monterey, CA. February, 2006.

19. Roussell MA, Hill AM, Gaugler TL, et al. Beef in an Optimal Lean Diet study: effects on lipids, lipoproteins, and apolipoproteins. Am J Clin Nutr. 2012;95(1):9-16.  http://www.unboundmedicine.com/washingtonmanual/ub/citation/22170364/Beef_in_an_Optimal_Lean_Diet_study:_effects_on_lipids_lipoproteins_and_apolipoproteins_

20. U.S. Department of Agriculture and U.S. Department of Health and Human Services. Dietary Guidelines for Americans, 2010. 7th Edition. Washington, DC: U.S. Government Printing Office; December 2010.  http://health.gov/dietaryguidelines/dga2010/dietaryguidelines2010.pdf

From The Archives – White Coat Hypertension: Are Doctors Bad for Your Blood Pressure?

September 29, 2016

Please enjoy this post from the archives, dated March 20, 2013

By Lauren Foster

Faculty Peer Reviewed

Hypertension is a pervasive chronic disease affecting approximately 65 million adults in the United States, and a significant cause of morbidity and mortality [1]. Antihypertensives are widely prescribed due to their effectiveness in lowering blood pressure, thereby reducing the risk of cardiovascular events. However, the phenomenon of the “white coat effect” may be a complicating factor in the diagnosis and management of hypertensive patients. It is well established that a considerable number of people experience an elevation of their blood pressure in the office setting, and particularly when measured by a physician. The cause of this white coat hypertension, as well as its implications in the prognosis and treatment of hypertension, is still controversial.

The concept of white coat hypertension has existed for many years; some of the first reports of blood pressure differing between a resting value and one taken by the physician were published by Alam and Smirk in the 1930s [2]. Studies since then have continued to demonstrate the elevating effect of a physician’s office on blood pressure, with an estimated 20% prevalence of white coat hypertension in the general population [3]. The definition of white coat hypertension used in research continues to vary, however, producing prevalence estimates ranging from 14.7% to 59.6% [3]. Most studies characterize white coat hypertension as an office blood pressure greater than 140/90 mmHg with ambulatory blood pressures less than 135/85 mmHg [3]. The regular use of home blood pressure monitors and 24-hour ambulatory blood pressure monitoring (ABPM) has further demonstrated this discrepancy in clinical practice as well as in research.
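To make these competing thresholds concrete, the two-by-two logic they imply can be sketched in a few lines of Python (an illustration only: the function name and cutoffs simply restate the definitions above, and real classification schemes also weigh nocturnal and home readings):

def classify_bp(office_sys, office_dia, amb_sys, amb_dia):
    # Office readings above 140/90 mmHg count as elevated.
    office_high = office_sys > 140 or office_dia > 90
    # Daytime ambulatory readings of 135/85 mmHg or higher count as elevated.
    ambulatory_high = amb_sys >= 135 or amb_dia >= 85
    if office_high and ambulatory_high:
        return "sustained hypertension"
    if office_high:
        return "white coat hypertension"
    if ambulatory_high:
        return "masked hypertension"  # the converse phenomenon
    return "normotension"

# Example: 150/95 mmHg in the office but 128/80 mmHg on daytime ABPM
print(classify_bp(150, 95, 128, 80))  # -> white coat hypertension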

White coat hypertension is hypothesized to result from anxiety and subsequent sympathetic nervous system activation. Studies examining the presence of white coat hypertension among individuals with anxious traits have not found evidence of this association; rather, it appears to be associated with a state of anxiety unique to the presence of a physician [5]. In a study by Gerin, Ogedegbe, and colleagues, ABPM measurements of patients’ blood pressure in a separate laboratory facility were compared with ABPM measurements in the waiting room of a physician’s office and a manual blood pressure taken by a physician in the examining room. Their results demonstrated a significant elevation of blood pressure on the day of the physician’s office visit, with a larger increase in previously diagnosed hypertensive patients, and no difference in blood pressure between the waiting room and the examining room [2]. This supports the notion that white coat hypertension is the result of a classically conditioned response to a physician’s office. That it occurred more often in patients with previously established hypertension may reflect an initial anxiety reaction when patients learn they have hypertension, which is further conditioned by subsequent office visits to check their blood pressure control [2].

The effect of isolated white coat hypertension on cardiovascular risk has been controversial. One study examining the target organ damage of hypertension, in terms of left ventricular mass and carotid-femoral pulse wave velocity, found a positive correlation with daytime blood pressure values but not with elevated office blood pressures alone [6]. A recent meta-analysis likewise showed that cardiovascular risk is not significantly different between white coat hypertension and normotension [7]. However, another study by Gustavsen and colleagues, evaluating the rate of cardiovascular deaths and nonfatal events over a 10-year follow-up period, found that patients with white coat hypertension and essential hypertension had similar event rates, while normotensive patients had significantly lower rates [8]. In contrast, a different study determined that the unadjusted rate of all-cause mortality in patients with white coat hypertension (4.4 deaths per 1,000 patient-years of follow-up) was lower than in patients with sustained hypertension (10.2 deaths per 1,000 patient-years of follow-up), and that this difference remained significant after adjusting for age, sex, smoking, and use of antihypertensive medication [9]. The effect of isolated white coat hypertension on cardiovascular risk still requires further investigation to determine whether treating it with antihypertensives is necessary.
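For readers less familiar with these units, such event rates are simple person-time calculations:

rate per 1,000 patient-years = (number of deaths / total patient-years of follow-up) × 1,000

As a purely hypothetical example, 22 deaths accrued over 5,000 patient-years works out to (22 / 5,000) × 1,000 = 4.4 deaths per 1,000 patient-years, the rate reported for the white coat group.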

As hypertension is routinely diagnosed from blood pressure measurements obtained by a physician in an office setting, it is likely that a significant portion of white coat hypertension is treated with antihypertensives. Gustavsen and colleagues noted that 60.3% of patients with white coat hypertension were treated with antihypertensives at some point during the 10-year follow-up [8]. In the Treatment of Hypertension Based on Home or Office Blood Pressure (THOP) trial, antihypertensive treatment was adjusted based on either self-measured home blood pressure values or conventional office measurements. At the end of the 6-month period, less intensive drug treatment was used in the home blood pressure group than in the office group, and more home blood pressure patients could permanently stop antihypertensive drug treatment (25.6% vs. 11.3%). However, those treated based on home blood pressure measurements had slightly higher blood pressures at the end of the trial than those treated based on office measurements, which could potentially increase cardiovascular risk [10]. Determining whether a patient has sustained hypertension or white coat hypertension with normotensive ambulatory blood pressure, using home devices or ABPM, may help identify those who do in fact require antihypertensive medications.

White coat hypertension may also play a role in cases of resistant hypertension. ABPM may be necessary to differentiate true drug-resistant hypertension from hypertension that is well controlled outside of the physician’s office, in order to prevent overtreatment. One study found that when patients documented to have uncontrolled hypertension had their blood pressure monitored for 24 hours, only 69% were actually uncontrolled [11]. Studies have also looked for other ways to distinguish true resistant hypertension from white-coat resistant hypertension, finding that patients with true resistant hypertension have higher intake of salt and alcohol as well as higher renin values [12].

In clinical practice, white coat hypertension is likely a common confounding factor in the diagnosis and treatment of hypertension. Patients often insist that their blood pressure is much lower at home than at their office visit, and the anxiety of an appointment solely for a blood pressure check is likely a contributing factor. Shifts away from physician measurement of blood pressure or substitution with automatic blood pressure devices may help to counteract this phenomenon. Home blood pressure monitoring devices can be a useful tool in discerning whether a patient’s blood pressure is properly controlled on a current treatment regimen or if additional therapy is needed. Avoiding overtreatment of hypertension may also lower health care costs, although the cardiovascular risks of white coat hypertension must be further elucidated so that the importance of treating white coat hypertension can be determined. White coat hypertension is a real and ubiquitous phenomenon, and must be considered by physicians for all patients with elevated blood pressures.

Commentary by Dr. Stephen Kayode Williams

Attending Physician, Bellevue Primary Care Hypertension Clinic

Are doctors bad for your blood pressure? Yes! This is a timely discussion as we eagerly await updated national guidelines for the management of hypertension. How will JNC 8 address this issue that comes up at every visit to our primary care clinics? The latest US hypertension guidelines were published in 2003 [13]. The more recent 2011 UK guidelines are remarkable in stating that, in order to confirm a new diagnosis of hypertension, ambulatory blood pressure monitoring (or alternatively home blood pressure monitoring) should demonstrate daytime blood pressures greater than or equal to 135/85 mmHg [14]. An exhaustive cost-effectiveness analysis performed for these guidelines concluded that, despite the expenses incurred with ambulatory blood pressure monitoring, there are vast cost savings in preventing an erroneous diagnosis of hypertension based on office blood pressure readings alone. In this country, ambulatory blood pressure monitoring is not widely available in primary care. Stay tuned to see how the upcoming hypertension guidelines address these clinical correlations.

Lauren Foster is a 4th year medical student at NYU School of Medicine

Peer reviewed by Stephen Kayode Williams, MD, MS, Bellevue Primary Care Hypertension Clinic

Image courtesy of Wikimedia Commons

References

1. Fields LE, Burt VL, Cutler JA, Hughes J, Roccella EJ, Sorlie P. The burden of adult hypertension in the United States 1999 to 2000: a rising tide. Hypertension. 2004;44(4):398-404.  http://www.ncbi.nlm.nih.gov/pubmed/15326093

2. Gerin W, Ogedegbe G, Schwartz JE, et al. Assessment of the white-coat effect. J Hypertens. 2006;24(1):67-74.

3. Pickering TG. White coat hypertension. Curr Opin Nephrol Hypertens. 1996;5(2):192-198.  http://circ.ahajournals.org/content/98/18/1834.full

4. Verdecchia P, Schillaci G, Boldrini F, Zampi I, Porcellati C. Variability between current definitions of ‘normal’ ambulatory blood pressure. Implications in the assessment of white coat hypertension. Hypertension. 1992;20(4):555-562.

5. Ogedegbe G, Pickering TG, Clemow L, et al. The misdiagnosis of hypertension: the role of patient anxiety. Arch Intern Med. 2008;168(22):2459-2465. http://archinte.jamanetwork.com/article.aspx?articleid=773457

6. Silveira A, Mesquita A, Maldonado J, Silva JA, Polonia J. White coat effect in treated and untreated patients with high office blood pressure. Relationship with pulse wave velocity and left ventricular mass index. Rev Port Cardiol. 2002;21(5):517-530.

7. Pierdomenico SD, Cuccurullo F. Prognostic value of white-coat and masked hypertension diagnosed by ambulatory monitoring in initially untreated subjects: an updated meta-analysis. Am J Hypertens. 2011;24(1):52-58.  http://ajh.oxfordjournals.org/content/24/1/52.abstract

8. Gustavsen PH, Høegholm A, Bang LE, Kristensen KS. White coat hypertension is a cardiovascular risk factor: a 10-year follow-up study. J Hum Hypertens. 2003;17(12):811-817.

9. Dawes MG, Bartlett G, Coats AJ, Juszczak E. Comparing the effects of white coat hypertension and sustained hypertension on mortality in a UK primary care setting. Ann Fam Med. 2008;6(5):390-396.  http://www.annfammed.org/content/6/5/390.full.pdf

10. Den Hond E, Staessen JA, Celis H, et al. Treatment of Hypertension Based on Home or Office Blood Pressure (THOP) Trial Investigators. Antihypertensive treatment based on home or office blood pressure–the THOP trial. Blood Press Monit. 2004;9(6):311-314.

11. Godwin M, Delva D, Seguin R, et al. Relationship between blood pressure measurements recorded on patients’ charts in family physicians’ offices and subsequent 24 hour ambulatory blood pressure monitoring. BMC Cardiovasc Disord. 2004;4:2.  http://www.biomedcentral.com/1471-2261/4/2/

12. Veglio F, Rabbia F, Riva P, et al. Ambulatory blood pressure monitoring and clinical characteristics of the true and white-coat resistant hypertension. Clin Exp Hypertens. 2001;23(3):203-211.

13. Chobanian AV, Bakris GL, Black HR, et al. The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure: the JNC 7 report. JAMA. 2003;289:2560-2572.  http://www.ncbi.nlm.nih.gov/pubmed/12748199

14. Krause T, Lovibond K, Caulfield M, McCormack T, Williams B. Management of hypertension: summary of NICE guidance. BMJ. 2011;343:d4891.  http://www.bmj.com/content/343/bmj.d4891?tab=responses

From The Archives: Reflections on Hurricane Sandy

September 22, 2016
Please enjoy this post from the archives, dated January 11, 2013

By Jessica Taff, MD

As the 3 major teaching hospitals that make up NYU Medical Center begin to come back online, we thought it was the right time to share some of our reflections on Hurricane Sandy. It’s been a long, strange journey for the faculty, housestaff, students, and most of all our patients. It’s time now, though, for us to come back home: to return with a renewed sense of purpose and a new appreciation for our institution.

As the East River lapped over the FDR and poured into our hospitals, high winds slammed against the buildings with a vengeance, and the lights flickered off, text messages from fellow residents poured in: “Backup generators failed.” “Evacuating NYU.” “Three hours of power left at Bellevue.” The Manhattan VA was the only behemoth on our “superblock” of three institutions to fully evacuate before the storm. At NYU Langone Hospital, nurses, physicians, and staff, later with the help of the NY Fire Department, carried patients down dozens of flights of stairs to awaiting ambulances that took them to neighboring city hospitals for continued medical care.

Amidst the foreboding darkness, hundreds of Bellevue employees came together in the little-used stairwell, passing tubs of gasoline up thirteen flights of stairs to keep power pumping through the veins of the only generators spared from the river’s crest. Teams were stationed near patients on ventilators and medication drips in case all power failed and immediate manual bagging was needed. And while these stories are indeed impressive and heartwarming, it is the precedent they set and the trend that followed that will forever be ingrained in my memory of this monstrous hurricane.

In the days just after the storm and the hurried evacuation of NYU, Bellevue Hospital trudged along. First, the main sources of power went down. Then the running water stopped. Radiology was unavailable, and only a single lab test could be run. No phones or electronic medical records were accessible. All the streamlining and integration of technology that we had strived so hard to achieve had abandoned us. In a public hospital known for treating among the sickest and most underserved patients, adversity is no stranger. The natural tempo of Bellevue Hospital is fast and furious; staff must move swiftly because the patients’ conditions change quickly. It took the furies of Sandy to press pause on Bellevue’s constantly intense and relentless undertone and to unearth her true soul.

When the lights went out on our superblock, something bigger appeared: a community. More than ever, it was clear that we were all on the same team, with an unwavering commitment to patient care. We were stripped of our job descriptions and workplace frustrations, and came together with the common goal of keeping our home intact. People left their still-powered houses and nervous families to spend several straight days in the hospital. Offices, call rooms, and closets became scattered with cots and inflatable mattresses. Nurses, PCAs, maintenance workers, and other staff braved the elements and formed transportation pools to get to and from the hospital while the subways and buses were nonfunctional. Despite there being no phones in the hospital, there was no shouting, no yelling. Instead, there was teamwork and communication. People volunteered to trek up ten flights of stairs to fetch someone’s medication without even being asked. A patient’s husband carried several pizzas and liter bottles of soda up seventeen flights to thank the staff for taking care of his wife. A fellow resident held a flashlight for me as I drew labs from a critical patient amidst the darkness.

Conditions rapidly became dire and, at times, pungent smells of bodily stasis wafted through the halls. Yet patient care never suffered. As the National Guard crowded the floors and patients were lifted onto boards and fully evacuated at marathon pace, there was a sense of sadness. A 276-year-old institution, arguably the heart of the city’s public health system, would for the first time be without its patients. In the days that followed, Facebook messages equating Bellevue to a home became commonplace. Although we have been scattered throughout the five boroughs, it’s safe to say that the community has remained intact. There was immediate kinship between anyone with a link to our three hospitals. And while we are very thankful that our neighboring medical institutions have welcomed us with open arms, we are grateful to now be able to start to come home. The rebuilding process is well underway, and the staff and patients are all ready to restart.

Dr. Jessica Taff is a 2nd year resident at NYU Langone Medical Center