
Too Much of a Good Thing: The Evidence Behind the Need for a Bisphosphonate Holiday

May 9, 2013

By Jenna Piccininni

Faculty Peer Reviewed

Bisphosphonates are relatively new medications, approved to treat osteoporosis in the US only since 1995 [1]. In addition, large placebo-controlled trials have, at most, 10 years of follow-up data. Thus, there are still questions regarding the long-term use of these agents. There are a few well-established side effects of bisphosphonates, including rare osteonecrosis of the jaw and more common esophageal irritation. However, several more recent case reports suggest a correlation between prolonged bisphosphonate use and atypical femoral fracture [2]. This raises the question of whether too much antiresorptive activity can lead to increased fracture risk and whether a medication holiday is therefore appropriate.

The pharmacology and mechanism of bisphosphonates are integral to understanding the biological underpinnings of this potential association and the implications of discontinuing the medication. Once absorbed into the bloodstream, bisphosphonates undergo no systemic metabolism and bind directly to active bone [1]. The binding sites on bone are so numerous that saturation is physiologically unrealistic. In the setting of osteoclast activity, an acidic microenvironment is created that causes the bisphosphonates to be released and taken up by these cells, leading to loss of osteoclast function and ultimately apoptosis. This decrease in bone resorption leads to increased bone density but also suppresses the repair of microdamage that occurs with normal daily activities. One study in beagle dogs examined how bisphosphonates, at six times the clinical dose, affected bone structure [3]. In control dogs, resorption spaces were more numerous and more likely to be associated with areas of microdamage cracks, whereas treated dogs had a higher proportion of cracks that were not associated with resorption spaces. These findings suggest that bisphosphonates impair targeted repair of microdamage, the proposed mechanism by which long-term use is thought to lead to increased fracture risk.

Based on these case reports, controlled studies have looked for a significant association between prolonged bisphosphonate use and risk of atypical fracture. In 2010 a NEJM study used data from the FIT, FLEX, and HORIZON-PFT randomized controlled trials to assess this association and found no increased risk [4]. However, in 51,287 patient-years only 12 atypical fractures were identified, resulting in wide confidence intervals and low power. Despite the lack of evidence for definitive causation, in 2010 the FDA issued a label change for all bisphosphonates requiring that the risk of atypical subtrochanteric femur fractures be included [5]. Then in 2011 a paper in JAMA reported the results of a population-based case-control study showing that women using bisphosphonates for more than five years had 2.74 times the odds of being hospitalized for a subtrochanteric or femoral shaft fracture compared with women with only sporadic use (<100 days) [6]. Following 5 years of use, the risk of fracture was 0.13% in the first year and 0.22% within two years. Sixty-four percent of the fractures in long-term users were attributable to bisphosphonate use, but because the total incidence of these fractures is so low, eliminating exposure would lead to only an 11% decrease in the total rate of fracture. While this study shows a significant association between prolonged bisphosphonate use and atypical fracture, the evidence should be interpreted keeping in mind that confounders may be present in this observational study. Larger controlled studies must be done to confirm these findings. If this association is real, limiting bisphosphonate exposure through medication holidays could be beneficial.
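
As a rough check on these figures (my own back-of-the-envelope arithmetic, not part of the published analysis), the reported 64% attributable fraction follows from the odds ratio via the standard formula for the attributable fraction among the exposed, assuming the odds ratio approximates the relative risk for this rare outcome:

\[ \mathrm{AF_{exposed}} = \frac{\mathrm{OR} - 1}{\mathrm{OR}} = \frac{2.74 - 1}{2.74} \approx 0.64 \]

Applied to the reported two-year risk of 0.22% in long-term users, this implies an absolute excess risk on the order of 0.64 × 0.22% ≈ 0.14%, or roughly one excess atypical fracture per 700 long-term users over two years, a magnitude consistent with the authors' point that eliminating exposure would reduce the total fracture rate only modestly.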

The idea of a bisphosphonate holiday is particularly appealing because the unsaturable accumulation of the molecules in bone allows for continued effectiveness even after discontinuation of the medication. The Fracture Intervention Trial Long-term Extension (FLEX) study randomized women who had previously received three to five years of alendronate in the FIT trial to 5 more years of alendronate 10 mg/day, alendronate 5 mg/day, or placebo [7]. While bone mineral density (BMD) at the hip decreased 2-3% in the placebo group, this was less BMD loss than expected in women never treated with bisphosphonates, and BMD remained above the baseline levels at the start of the FIT trial. In addition, there was no difference in the rate of non-vertebral fractures between any of the groups, suggesting that the loss of hip BMD in the placebo group was not clinically relevant. However, the women in the alendronate arms had a 2.9% absolute risk reduction and a 55% relative risk reduction for clinical vertebral fractures compared with placebo, although the rate of morphometric fracture was equal. This benefit was most appreciable in women with past vertebral fractures and very low BMD. Another study in NEJM similarly followed women for 10 years assigned to alendronate 10 mg/day, alendronate 5 mg/day, or alendronate 5 mg/day for 5 years followed by 5 years of placebo [8]. It likewise found that changes in BMD in the placebo group varied by site, but that the risk of morphometric vertebral fractures did not differ between the three groups. These data suggest that after five years of bisphosphonate treatment, a five-year holiday is detrimental only to women at high risk for fracture, with very low BMD and previous fractures.
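
To put the FLEX vertebral-fracture benefit in perspective (a simple calculation from the reported risk reductions, not a figure reported by the trial itself), the absolute risk reduction implies a number needed to treat of

\[ \mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.029} \approx 34 \]

That is, roughly 34 women would need to continue alendronate for 5 additional years to prevent one clinical vertebral fracture. The 55% relative risk reduction further implies a placebo-group clinical vertebral fracture risk of about 0.029/0.55 ≈ 5%, which is consistent with the observation that the benefit of continuing therapy is concentrated in women whose baseline risk is at least this high.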

Although no official recommendations have been made regarding bisphosphonate holidays, some authors have proposed their own guidelines based on the patient’s risk of fracture. Watts and Diab suggest an indefinite holiday for patients at mild risk after 5 years of treatment, a two- to three-year holiday for moderate-risk patients after 5-10 years of treatment, and a 1-2 year “holiday” for high-risk patients after 10 years of treatment, during which they receive an alternative medication such as a SERM [1]. During these holidays, bisphosphonates should be restarted if BMD decreases significantly or if fracture occurs. I believe that the aforementioned studies support this type of algorithm. Discontinuing bisphosphonates after several years of treatment does not appear to have adverse clinical consequences in most low-risk patients, most likely because of the drugs’ storage in bone and residual effects. Data regarding the risk of atypical fracture with long-term treatment are less clear, but given this potentially debilitating side effect, risk-averse physicians are justified in giving their patients a medication holiday based on the clinical scenario. More controlled studies must be done examining the clinical outcomes of specific holiday durations, the optimal time to begin a holiday, and the indications for ending one. With that information, more solid evidence-based guidelines can be constructed, patient safety can be improved, and the use of these relatively new medications can be optimized.

Jenna Piccininni is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Michael Tanner, Associate Editor, Clinical Correlations

Image courtesy of Wikimedia Commons

References

1. Watts N, Diab D. Long-term use of bisphosphonates in osteoporosis. Journal of Clinical Endocrinology and Metabolism. 2010;95(4):1555-1565.  http://jcem.endojournals.org/content/95/4/1555.full

2. Lenart B, Lorich D, and Lane J. Atypical fractures of the femoral diaphysis in postmenopausal women taking alendronate. New England Journal of Medicine. 2008;358:1304-1306.

3. Li J, Mashiba T, Burr D. Bisphosphonate treatment suppresses not only stochastic remodeling but also the targeted repair of microdamage. Calcified Tissue International. 2001;69:281-286.  http://www.ncbi.nlm.nih.gov/pubmed/11768198

4. Black D, Kelly M, Genant H, et al. Bisphosphonates and fractures of the subtrochanteric or diaphyseal femur. The New England Journal of Medicine. 2010;362(19):1761-1771.

5. Bisphosphonates (osteoporosis drugs): Label change: atypical fractures update. Food and Drug Administration Website. http://www.fda.gov/Safety/MedWatch/SafetyInformation/SafetyAlertsforHumanMedicalProducts/ucm229244.htm.  Accessed July 25, 2012.

6. Park-Wyllie L, Mamdani M, Juurlink D, et al. Bisphosphonate use and the risk of subtrochanteric or femoral shaft fractures in older women. Journal of the American Medical Association. 2011;305(8):783-789.

7. Black D, Schwartz A, Ensrud K, et al. Effects of continuing or stopping alendronate after 5 years of treatment: the Fracture Intervention Trial Long-term Extension (FLEX): a randomized trial. Journal of the American Medical Association. 2006;296(24):2927-2938.  http://www.ncbi.nlm.nih.gov/pubmed/17190893

8. Bone H, Hosking D, Devogelaer J, et al. Ten years’ experience with alendronate for osteoporosis in postmenopausal women. The New England Journal of Medicine. 2004;350:1189-1199.  http://www.fosalan.co.il/secure/resources/publications/10years-English.pdf

Is Personalized Medicine Really the Cure? Looking Through the Lens of Breast Cancer

May 3, 2013

By Jessica Billig

Faculty Peer Reviewed 

Although millions of dollars are spent on cancer research every year, progress toward a cure is less than ideal. Last year the New York Times published a piece about the burgeoning improvements on the genomic front that could lead to a new approach to cancer treatment: “The promise is that low-cost gene sequencing will lead to a new era of personalized medicine, yielding new approaches for treating cancers and other serious diseases” [1]. Through genomic technology, physicians will be able to tailor chemotherapeutic regimens and treatments to each patient’s specific cancer. While this sounds like the holy grail of cancer treatment, it is not as easy as it seems.

The notion of personalized medicine sprang from the Human Genome Project, which sequenced the human genome and estimated that it contains 30,000-40,000 protein-coding genes [2]. Each type of cancer has a unique genome with recurrent mutations and coding regions that can be exploited as possible drug targets.

An excellent example of the application of genomics in cancer medicine is breast cancer. Gene-expression profiles have been generated that identify different biomarkers for each subtype of breast cancer (estrogen receptor/progesterone receptor/HER2/neu receptor). These biomarkers have shaped the way we currently treat breast cancer. Every subtype of breast cancer produces unique proteins and relies upon discrete growth factors. It is through these protein differences that targeted therapy is made possible [3]. For example, the discovery of the HER2 receptor and the development of its associated monoclonal antibody, trastuzumab, changed the battlefield of breast cancer. Before the advent of trastuzumab, HER2 positivity was a poor prognostic marker, with an increased rate of relapse after surgery. With the addition of trastuzumab to standard chemotherapy, women with metastatic HER2-positive breast cancer who had not received prior chemotherapy had an increase in median time to disease progression from 4.6 months to 7.4 months [4]. More recently, the application of HER2-targeted adjuvant therapy in early-stage disease has changed the prognosis of HER2-positive breast cancer forever, preventing recurrences and significantly prolonging survival in patients whose cancer does relapse.

The Cancer Genome Project, along with other research endeavors, began sequencing multiple tumors from each type of cancer. Through sequencing, researchers can look at which mutations “drive” the cancer to grow, invade, and metastasize. By targeting these driver mutations, therapies will be able to cut to the core of the cancer, destroying the tumor’s foundation [5]. With genotyping becoming more affordable, we will be able to sequence each patient’s tumor to determine which oncogene or tumor suppressor is fueling his or her specific cancer. In a perfect world, this may be the answer, but the molecular biology of cancer is not so straightforward. A 2012 article in the New England Journal of Medicine describes the heterogeneous landscape of the molecular biology of a cancer cell. Intratumor heterogeneity is a major problem, with each tumor undergoing its own evolutionary process within the patient. Every tumor is made up of millions of different cells that accumulate mutations at various loci and at different rates. By sequencing just one part of the tumor, physicians may miss the essential part of the tumor that is driving growth or allowing for metastases. Phylogenetic reconstruction of the numerous parts of a patient’s tumor reveals marked branched evolutionary growth. This tumor heterogeneity may halt the prospects of personalized medicine by opening a Pandora’s box of mutations [6]. Through disparate mutations the tumor may harbor many different varieties of biomarkers, thus making it more challenging to find a single perfect drug target for each cancer.

Chemotherapy is a mainstay for shrinking a tumor before surgery or killing residual cancer cells that continue to grow and divide after surgery. Although the side effects of chemotherapy have been reduced by newer and better anti-nausea regimens and the use of growth factors to prevent low blood counts and infections, chemotherapy treatment remains difficult and unpleasant for most. The hope is that chemotherapy will be replaced by more specific targeted therapies with fewer side effects. However, more basic research and clinical trials will be needed to define appropriate targets and to combine therapies so that we may address the important issue of tumor heterogeneity. Personalized medicine is still the goal, but until more work is done at the bench and at the bedside, it may fall short of its promise.

Jessica Billig is a 4th year medical student at NYU School of Medicine

Peer reviewed by Ruth Oratz, MD, Department of Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References

1. Markoff J. Cost of gene sequencing falls, raising hopes for medical advances. New York Times. March 10, 2012. http://www.nytimes.com/2012/03/08/technology/cost-of-gene-sequencing-falls-raising-hopes-for-medical-advances.html. Accessed March 21, 2012.

2. Lander ES, Linton LM, Birren B, et al. Initial sequencing and analysis of the human genome. Nature. 2001;409(6822):860-921.

3. Sotiriou C, Pusztai L. Gene-expression signatures in breast cancer. N Engl J Med. 2009;360(8):790-800.  http://www.ncbi.nlm.nih.gov/pubmed/19228622

4. Hudis CA. Trastuzumab—mechanism of action and use in clinical practice. N Engl J Med. 2007;357(1):39-51.

5. Stratton MR, Campbell PJ, Futreal PA. The cancer genome. Nature 2009; 458(7239):719-724.  http://www.ncbi.nlm.nih.gov/pubmed/19360079

6. Gerlinger M, Rowan AH, Horswell S, et al. Intratumor heterogeneity and branched evolution revealed by multiregion sequencing. N Engl J Med 2012; 366(10):883-892.

Preserving Residual Renal Function

May 1, 2013

By Jerome Lowenstein,  MD

Faculty Peer Reviewed

Two questions that often arise concerning the administration of radiocontrast in patients with advanced renal disease receiving hemodialysis or peritoneal dialysis reveal what appear to be widespread and important misconceptions.

The first misconception is that in end-stage renal disease, glomerular filtration is absent or minimal and the removal of wastes (“uremic toxins”) is accomplished only by peritoneal dialysis or hemodialysis. Most patients who reach the advanced stages of renal disease requiring hemodialysis or peritoneal dialysis are not oliguric, and a significant proportion (approximately 50%) continue to excrete urine, often as much as 200 mL/day, despite adequate dialysis treatment. These patients are said to have “residual renal function,” which is typically quantitated by applying a variant of urea or creatinine kinetics. Urea and creatinine, small molecules normally removed by glomerular filtration and readily removed by hemodialysis or peritoneal dialysis, are taken as surrogates for unmeasured “uremic toxins.” These measures suggest that the contribution of “residual renal function” to waste removal is small or inconsequential.

While dialysis does effectively control uremic symptoms (nausea, vomiting, encephalopathy, and pruritus) by the removal of small, dialyzable substances, several lines of evidence suggest that the “uremic toxins” responsible for the progressive renal scarring and the severe cardiovascular disease that account for early mortality in end-stage renal disease are not those normally filtered, but rather protein-bound molecules removed by proximal tubular secretion (1). Marine species dating back 120 million years are aglomerular. These species, and many aglomerular fish of more recent vintage (23 million years), excrete their metabolic wastes (“uremic toxins”) by active tubular secretion. Glomeruli evolved when life moved into fresh water, as a mechanism for maintaining fluid balance, but as evolutionary biologists point out, physiologic mechanisms (e.g., tubular secretion) are not “discarded” by environmental change. Klaus Beyenbach (2) described the repeated emergence of tubular secretion as the mechanism responsible for the excretion of salts and toxic wastes in many species that have evolved more recently. The tubular secretion of small solutes is followed by the osmotic transfer of fluid from the renal interstitium into the tubule.

This is precisely what was described by Jared Grantham (3), who observed tubular secretion and the generation of “urine” in isolated rabbit renal tubules bathed in a medium to which arylamines such as para-aminohippurate, or uremic plasma, were added. Grantham suggested that the tubular secretion of organic solutes might serve a role in chronic renal failure (3).

The composition and the source of the urine produced by patients with marked reduction in glomerular filtration rate are not known. Several lines of evidence suggest that “residual renal function” might represent urine produced not by glomerular filtration, but rather by tubular secretion. Further, studies in remnant kidney models in the rat strongly suggest that a major “uremic toxin” might be indoxyl sulfate, a protein-bound aryl amine actively transported by organic anion transporters (OATs) in the proximal renal tubule (1). Preliminary studies in uremic patients support an important role for indoxyl sulfate as a uremic toxin (4). It does not seem far-fetched to speculate that the urine which is produced by patients with end-stage renal disease represents, wholly or in part, the product of tubular secretion of one or more metabolites, and may be the human counterpart of the urine formed by healthy marine species with aglomerular kidneys or glomeruli that filter little or not at all (5).

If these speculations prove to be valid, it would dictate that “residual renal function” in patients with ESRD be carefully preserved. Although virtually all patients with ESRD ultimately become anuric, measures to slow the loss of function in “aglomerular kidneys” should be sought. Such measures include the avoidance of hypotension, the choice of peritoneal dialysis as the initial mode of renal replacement, and the avoidance of nephrotoxic drugs, notably radiocontrast agents. Since the OAT transporters are competitively inhibited by a wide range of drugs (7,8), e.g., probenecid and penicillin, the list of drugs to be avoided should not be limited to those we think of as “nephrotoxic.”

The second misconception is evident in the widespread practice of performing dialysis as soon as possible following the administration of radiocontrast. The belief that contrast-induced renal failure occurs over a period of hours or days derives from the manner in which renal failure is diagnosed, i.e., by elevation of the serum creatinine or another “glomerular” marker. Creatinine may accumulate slowly in acute renal failure, requiring one or two days to show a significant change from baseline. However, it has been known for many years that patients who develop contrast-induced nephropathy have persistent cortical opacification, readily appreciated immediately after contrast administration and still evident 24 hours later (6). Contrast administered for visualization of the kidneys usually transits through the cortex in 5-10 minutes. A “delayed nephrogram” visualized 24 or more hours following contrast administration is strong evidence that acute renal failure occurs within minutes of contrast administration. There is no evidence that dialysis plays any role in the prevention of contrast-induced renal failure, and yet, as any renal fellow will attest, the request for early dialysis following contrast administration is still widespread!

Dr. Jerome Lowenstein, Department of Medicine (Nephrology), NYU Langone Medical Center

Peer Reviewed by David Goldfarb, MD Nephrology Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons

References:

1. Enomoto A, Takeda M, Tojo A, et al. Role of organic anion transporters in the tubular transport of indoxyl sulfate and the induction of its nephrotoxicity. J Am Soc Nephrol. 2002;13:1711-1720. https://www.ncbi.nlm.nih.gov/m/pubmed/12089366/?i=6&from=/14675047/related

2. Beyenbach K. Kidneys sans glomeruli. Am J Physiol Renal Physiol. 2004;286(5):F811-F827. http://ajprenal.physiology.org/content/286/5/F811.full

3. Grantham JJ, Wallace DP. Return of the secretory kidney. Am J Physiol Renal Physiol. 2002;282:F1-F9. http://www.ncbi.nlm.nih.gov/pubmed/11739106

4. Jhawar S, Zavadil J, Lowenstein J. The molecular basis of the renal and vascular consequences of uremia (abstract). American Society of Nephrology, 2012.

5. Lowenstein J. The anglerfish and uremic toxins. FASEB J. 2011;25:1782-1785. http://www.ncbi.nlm.nih.gov/pubmed/21622695

6. Love L, Johnson MS, Bresler ME, et al. The persistent computed tomography nephrogram: its significance in the diagnosis of contrast-associated nephrotoxicity. Br J Radiol. 1994;67:951-957.

7. Sekine T, Cha SH, Endou H. The multispecific organic anion transporter (OAT1) family. Pflugers Arch - Eur J Physiol. 2000;440:337-350.

8. Smith HW. The Kidney: Structure and Function in Health and Disease. Oxford University Press; 1951. http://books.google.com/books/about/The_Kidney.html?id=jBuvAAAACAAJ


In Search of a Competitive Advantage: A Primer for the Clinician Treating the Anabolic Steroid User

April 17, 2013

By David G. Rosenthal and Robert Gianotti, MD

Faculty Peer Reviewed

Case: A 33-year-old man comes to your clinic complaining of worsening acne over the last 6 months. You note a significant increase in both BMI and bicep circumference. After several minutes of denial, he reveals that he has been using both injectable and oral anabolic steroids. He receives these drugs from a local supplier and via the Internet. He confides that his libido has dramatically increased and he feels increasingly pressured at work, describing several recent altercations. He admits that these symptoms are a small price to pay for the amazing performance gains he has seen at the gym. He plans to compete in a local deadlifting tournament at the end of the month. He asks you if he is at increased risk for any health problems and whether short-term use is associated with any long-term consequences. You quickly realize that you have no idea what literature exists on the health consequences of anabolic steroids. Fortunately, you have set the homepage on your web browser to Clinical Correlations. Together, you read…

The recreational use of anabolic steroids has drawn increasing international attention over the last decade due to their use and abuse by athletes and bodybuilders. Athletes including Arnold Schwarzenegger, cyclist Lance Armstrong, baseball slugger Mark McGwire, and Olympic gold medal sprinter Marion Jones have all come under scrutiny for using steroids to gain a competitive advantage and shatter records. In fact, the 1990s are notoriously known in Major League Baseball as the “Steroids Era.” Critics argue that the use of these substances contradicts the nature of competition and is dangerous given the abundance of reported side effects. Accordingly, the vast majority of sporting associations have banned the use of anabolic steroids, and their possession without a prescription is illegal in the United States, punishable by up to one year in prison. Nevertheless, the performance-enhancing, aesthetic, and financial benefits of anabolic steroids have led to rampant abuse by both professional and high school athletes, with an astonishing 3.9% of students having tried anabolic steroids at least once during high school [1,2].

Anabolic steroids are synthetic derivatives of testosterone, the primary male sex hormone. Androgenic effects of testosterone include maturation of secondary sex characteristics in both males and females, development of typical hair patterns, and prostate enlargement, while its anabolic effects include strength gains and bone maturation via regulation of protein metabolism [3]. Administration of exogenous testosterone causes upregulation of the androgen receptor in skeletal muscle, resulting in increased muscle fiber size and number [4]. Anabolic steroids can be absorbed directly through the skin, injected, or taken orally. Synthetic oral steroids, including methyltestosterone and fluoxymesterone, are 17-alpha-alkylated, which prevents first-pass metabolism by the liver and may contribute to increased hepatotoxicity [5].

Much of the public opinion about anabolic steroids has been shaped by individual testimonies and well-publicized user narratives. While thousands of articles have been published in scientific journals describing both the desired and adverse effects of anabolic steroid abuse, a number of these studies have drawn questionable conclusions due to flawed methodologies, inadequate sample sizes, study biases, and, most importantly, the inability to replicate the actual drug dosages used by many athletes. The regimens of many steroid users often involve doses twenty-fold higher than those previously examined in the literature [6]. Hence, the precise effects of the supraphysiologic doses of steroids that are commonly abused may never be known.

Strength, endurance, and reduced recovery time are all attributes that the competitive athlete strives to obtain. Historically, institutions and even governments have dabbled in performance enhancement for competitive athletes. It has been well documented that Communist-era East Germany sought to build superior athletes to compete in the Olympic Games and flex their muscles on the world stage. Documents recording the effects of anabolic steroids, including oral Turinabol, on East German Olympic athletes from 1968-1972 showed remarkable improvements in strength sports: discus throws increased by 11-20 meters, shot put distance improved by 4.5-5 meters, hammer throw increased by 6-10 meters, and javelin throw increased by 8-15 meters [7]. The strength gains among East German female athletes were most notable, as were the side effects, including hirsutism, amenorrhea, severe acne, and voice deepening. In fact, when a rival coach commented on the voice changes of the competitors, the East German coach responded, “We came here to swim, not sing” [8]. Following the implementation of “off-season” steroid screening by the International Olympic Committee and other competitive organizations in 1989, track and field sports saw a dramatic reduction in performance. Notably, the longest javelin throw by a female in the 1996 Olympics was 35 feet shorter than the world record of 1988.

The gains seen with anabolic steroid use extend beyond the Olympic athlete to recreational bodybuilders and gym rats. In a small placebo-controlled study from the Netherlands, a ten-week course of injectable nandrolone in a cohort of recreational bodybuilders increased lean body mass by an average of 2-5 kg, with no accompanying increase in fat mass or fluid retention [9]. These effects persisted for more than 6 weeks after the cessation of nandrolone. Surprisingly, performance enhancement can be seen with anabolic steroids even in the absence of exercise. One study of healthy young men between the ages of 18 and 35, whose endogenous androgen production was suppressed with a GnRH agonist, showed that supraphysiologic doses of testosterone enanthate administered for 20 weeks caused dose-dependent increases of 15% in muscle size and 20% in muscle strength without any exercise [10]. This study came as a logical follow-up to a smaller study published in the New England Journal of Medicine in 1996 that showed impressive performance gains compared with placebo among both exercising and sedentary subgroups. At 10 weeks, the testosterone-plus-exercise group was able to bench press a mean of 10 kg more than both the testosterone-alone and exercise-alone subgroups [11].

The performance gains from steroids have also been shown to extend into the later decades of life. A 2003 study in men aged 65-80 showed significant gains compared with placebo in both lean body mass and single-repetition chest press after 50 mg or 100 mg of the orally bioavailable steroid oxymetholone. The men in the 100-mg group improved their chest press by 13.9% +/- 8.1% (p<0.03) relative to placebo and had a 4.2 +/- 2.4 kg (p<0.001) increase in lean body mass [12]. Many athletes also report that anabolic steroids increase endurance and decrease recovery time after workouts. This has been supported in the literature, where indirect measures of fatigue, such as increased serum lactate and elevated heart rate, were delayed after the injection of nandrolone decanoate, with a notable improvement in recovery time [4].

We now know from a small but significant pool of data that the performance gains from anabolic steroids are real and can be seen not only in elite athletes but in casual users as well. The existing data regarding the side effects of anabolic steroids are varied and rely heavily on self-reported outcomes and on dosing regimens that are often variable and combine multiple drugs.

One method of obtaining data regarding the adverse effects of anabolic steroid abuse is by employing questionnaires. While this method is inherently biased, it may be the only way to obtain data from subjects using very high doses that are considered unsafe or unethical for higher-quality studies. Regardless of the method of data collection, it has been well established that up to 40% of male and 90% of female steroid users self-report adverse side effects including aggression, depression, increased sexual drive, fluid retention, hypertension, hair loss, and gynecomastia [4]. Other reported side effects include increased levels of the hormone erythropoietin, leading to an increased red blood cell count; vocal cord enlargement, leading to voice deepening; and an increased risk of sleep apnea.

Exogenous administration of steroids can have immediate and profound effects on the reproductive system, largely mediated through disruption of the hypothalamic-pituitary-gonadal axis. Within 24 hours of use, steroids cause a dramatic decrease in follicle-stimulating hormone and luteinizing hormone, which can result in azoospermia in males and menstrual irregularities in females within weeks, and infertility within months [13,14]. Supraphysiologic testosterone concentrations result in virilization of females, characterized by hirsutism, clitoromegaly, amenorrhea, and voice deepening [15]. When steroids are abused for longer periods of time, men can suffer from hypogonadotropic hypogonadism, manifested by testicular atrophy, as well as gynecomastia due to peripheral conversion of the exogenous testosterone to estrogen [15]. Some athletes try to increase their sperm count by using human chorionic gonadotropin or clomiphene, both commonly used female fertility drugs, but the efficacy of these hormones is debated; moreover, they do not reduce gynecomastia [4]. Commonly, drugs such as Propecia, routinely used to treat male-pattern baldness and benign prostatic hypertrophy, are used to increase testosterone levels. Although there have been reports of prostatic hypertrophy in steroid users, there is no known association with the development of prostate adenocarcinoma [16,17].

Adverse cardiovascular outcomes in steroid abusers have been reported, including cardiomyopathy, arrhythmia, stroke, and sudden cardiac death [18]. However, causation has often been inappropriately attributed solely to anabolic steroid use, and the data can be misleading due to confounding variables and study biases [4]. The structural, functional, and chemical changes associated with steroid abuse are crucial to consider because many of the reported effects are independent risk factors for cardiovascular disease.

A 2010 study published in Circulation: Heart Failure evaluated left ventricular function in a cohort of weightlifters (n=12) with self-reported anabolic steroid use compared with age-matched weightlifting controls (n=7). After adjusting for body surface area and exercise, the investigators found a significant reduction in left ventricular systolic function (ejection fraction 50.6% vs 59.1%, p=0.003) [19], and the association remained statistically significant even after controlling for prior drug use, including alcohol and cocaine. Interestingly, there appeared to be no relationship between cumulative anabolic steroid use and ventricular dysfunction, although the authors note limitations due to the small sample size and the bias of self-reported data.

Other studies investigating cardiovascular outcomes of anabolic steroids suggest a transient increase in both systolic and diastolic blood pressure in steroid users, although these values return to baseline within weeks of cessation [20]. In addition, long-term use of anabolic steroids can lead to increased platelet aggregation, possibly contributing to increased risk for myocardial infarction and cerebrovascular events [18].

Anabolic steroids cause a variable increase in LDL and up to a 40-70% decrease in HDL, often resulting in the misleading finding that steroids do not affect total plasma cholesterol [21]. Fortunately, these effects are reversible within 3 months of cessation of the agent [22]. The 17-alpha-alkylated steroids can cause a 40% reduction in apolipoprotein A-1, a major component of HDL, while injectable testosterone has been shown to cause a more tempered 8% reduction [23]. Although these effects are reversible with cessation, they underscore the importance of screening anabolic steroid users for lipid abnormalities.

Steroid use has been linked with a number of hepatic diseases. The use of oral steroids is associated with a transient increase in transaminase levels, although some data suggest that this may be due to muscle damage from bodybuilding rather than to liver damage [24]. A link between 17-alpha-alkylated steroids and hepatomas, peliosis hepatis (a rare vascular phenomenon resulting in multiple blood-filled cavities within the liver), and hepatocellular carcinoma has been suggested in case studies, but no causal relationship has been established [25].

Possibly the most publicized adverse effect of steroid use is psychological, popularly coined “roid rage.” In one study using self-reported data, 23% of steroid users acknowledged major mood symptoms, including depression, mania, and psychosis [26]. Most studies, however, report only subtle psychiatric alterations in the majority of patients, with few patients experiencing significant mood disorders [27]. Nevertheless, a 2006 cohort study from Greece found a dose-dependent association between steroid use and psychopathology that was driven by significant increases in hostility, aggression, and paranoia (P<0.001) [28]. While this topic needs further research, it does lend credence to the theory that “roid rage” exists and that its effects are exacerbated by higher doses of steroids.

Conclusion:

The former baseball all-star Jose Canseco once claimed that “steroids, used correctly, will not only make you stronger and sexier, they will also make you healthier” [29]. Although current research suggests that steroid abuse is not independently associated with increased mortality [16], and many of the adverse effects are rare and reversible with cessation of use, there is a dearth of knowledge about the effects of the regimens actually used, and the long-term side effects of these drugs are largely unknown.

Based on the paucity of quality data and the frightening implications of metabolic derangements, heart failure, and infertility, your patient leaves convinced that he has made a poor decision in choosing to use anabolic steroids. He pledges to quit immediately and to defer competing in the deadlifting tournament until next year, after a “washout” period. He is eager to disseminate his newfound knowledge at the local gym, but not before he makes a stop at GNC to load up on creatine supplements and whey protein.

David G. Rosenthal is a 4th year medical student at NYU Langone Medical Center and Robert Gianotti, MD is Associate Editor, Clinical Correlations

Peer reviewed by Loren Greene , MD, Clinical Associate Professor, Department of Medicine (endocrine division) and Obstetrics and Gynecology

Image Courtesy of Wikimedia Commons

Bibliography:

1. Eaton D, Kann L, Kinchen S, et al. Youth risk behavior surveillance – United States, 2009. MMWR. Surveillance summaries. 2010;59(5):1-142.  http://www.cdc.gov/mmwr/preview/mmwrhtml/ss5905a1.htm

2. Handelsman DJ, Gupta L. Prevalence and risk factors for anabolic-androgenic steroid abuse in Australian secondary school students. Int J Androl. 1997;20:159-164.

3. Kochakian CD. History, chemistry and pharmacodynamics of anabolic-androgenic steroids. Wiener medizinische Wochenschrift. 1993;143(14-15):359-363.

4. Hartgens F, Kuipers H. Effects of androgenic-anabolic steroids in athletes. Sports medicine. 2004;34(8):513-554.  http://www.ncbi.nlm.nih.gov/pubmed/15248788

5. Stimac D, Milić S, Dintinjana R, Kovac D, Ristić S. Androgenic/anabolic steroid-induced toxic hepatitis. Journal of Clinical Gastroenterology. 2002;35(4):350-352.

6. Wilson JD. Androgen abuse by athletes. Endocrine reviews. 1988;9(2):181-199.  http://edrv.endojournals.org/content/9/2/181.abstract

7. Franke WW, Berendonk B. Hormonal doping and androgenization of athletes: a secret program of the German Democratic Republic government. Clinical Chemistry. 1997;43(7):1262-1279.  http://www.ncbi.nlm.nih.gov/pubmed/9216474

8. Janofsky M. Coaches concede that steroids fueled East Germany’s success in swimming. New York Times. December 3, 1991.

9. Hartgens F, Van Marken Lichtenbelt WD, Ebbing S, Vollaard N, Rietjens G, Kuipers H. Body composition and anthropometry in bodybuilders: regional changes due to nandrolone decanoate administration. International journal of sports medicine. 2001;22(3):235-241.

10. Bhasin S, Woodhouse L, Casaburi R, et al. Testosterone dose-response relationships in healthy young men. American journal of physiology: endocrinology and metabolism. 2001;281(6):E1172-E1181.

11. Bhasin S, Storer TW, Berman N, et al. The effects of supraphysiologic doses of testosterone on muscle size and strength in normal men. The New England journal of medicine. 1996;335(1):1-7.

12. Schroeder ET, Terk M, Sattler F. Androgen therapy improves muscle mass and strength but not muscle quality: results from two studies. American journal of physiology: endocrinology and metabolism. 2003;285(1):E16-E24.  http://ajpendo.physiology.org/content/285/1/E16.long

13. Bijlsma JW, Duursma SA, Thijssen JH, Huber O. Influence of nandrolondecanoate on the pituitary-gonadal axis in males. Acta endocrinologica. 1982;101(1):108-112.

14. Torres Calleja J, González-Unzaga M, DeCelis Carrillo R, Calzada-Sánchez L, Pedrón N. Effect of androgenic anabolic steroids on sperm quality and serum hormone levels in adult male bodybuilders. Life Sciences. 2001;68(15):1769-1774.

15. Martikainen H, Alén M, Rahkila P, Vihko R. Testicular responsiveness to human chorionic gonadotrophin during transient hypogonadotrophic hypogonadism induced by androgenic/anabolic steroids in power athletes. The Journal of Steroid Biochemistry. 1986;25(1):109-112.

16. Fernández-Balsells MM, Murad M, Lane M, et al. Clinical review 1: Adverse effects of testosterone therapy in adult men: a systematic review and meta-analysis. The Journal of Clinical Endocrinology and Metabolism. 2010;95(6):2560-2575.

17. Bain J. The many faces of testosterone. Clin Interv Aging. 2007;2(4):567-576.

18. Vanberg P, Atar D. Androgenic anabolic steroid abuse and the cardiovascular system. Handbook of Experimental Pharmacology. 2010;(195):411-457.

19. Baggish A, Weiner R, Kanayama G, et al. Long-term anabolic-androgenic steroid use is associated with left ventricular dysfunction. Circulation. Heart failure. 2010;3(4):472-476.

20. Kuipers H, Wijnen JA, Hartgens F, Willems SM. Influence of anabolic steroids on body composition, blood pressure, lipid profile and liver functions in body builders. International journal of sports medicine. 1991;12(4):413-418.

21. Glazer G. Atherogenic effects of anabolic steroids on serum lipid levels. A literature review. Archives of internal medicine. 1991;151(10):1925-1933.

22. Hartgens F, Rietjens G, Keizer HA, Kuipers H, Wolffenbuttel BHR. Effects of androgenic-anabolic steroids on apolipoproteins and lipoprotein (a). British journal of sports medicine. 2004;38(3):253-259.

23. Thompson PD, Cullinane EM, Sady SP, et al. Contrasting effects of testosterone and stanozolol on serum lipoprotein levels. JAMA (Chicago, Ill.). 1989;261(8):1165-1168.

24. Dickerman RD, Pertusi RM, Zachariah NY, Dufour DR, McConathy WJ. Anabolic steroid-induced hepatotoxicity: is it overstated? Clinical journal of sport medicine. 1999;9(1):34-39.

25. Overly WL, Dankoff JA, Wang BK, Singh UD. Androgens and hepatocellular carcinoma in an athlete. Annals of Internal Medicine. 1984;100(1):158-159.

26. Pope HG, Katz DL. Psychiatric and medical effects of anabolic-androgenic steroid use. A controlled study of 160 athletes. Archives of general psychiatry. 1994;51(5):375-382.

27. Pope HG, Kouri EM, Hudson JI. Effects of supraphysiologic doses of testosterone on mood and aggression in normal men: a randomized controlled trial. Archives of general psychiatry. 2000;57(2):133-140.

28. Pagonis T, Angelopoulos N, Koukoulis G, Hadjichristodoulou C. Psychiatric side effects induced by supraphysiological doses of combinations of anabolic steroids correlate to the severity of abuse. European psychiatry. 2006;21(8):551-562.

29. Canseco J. Juiced: Wild Times, Rampant ‘Roids, Smash Hits, and How Baseball Got Big. Philadelphia, PA: Reed Elsevier Inc.; 2005.

30. Young NR, Baker HW, Liu G, Seeman E. Body composition and muscle strength in healthy men receiving testosterone enanthate for contraception. J Clin Endocrinol Metab. 1993;77:1028-1032.

Have a Cow? How Recent Studies on Red Meat Consumption Apply to Clinical Practice

April 12, 2013

By Tyler R. McClintock

Faculty Peer Reviewed

“Red Meat Kills.” “Red Meat a Ticket to Early Grave.” “A Hot Dog a Day Raises Risk of Dying.” Such were the headlines circulating in the popular press last year when the Archives of Internal Medicine released details of an upcoming article from Frank Hu’s research group at the Harvard School of Public Health [1-3]. Analyzing long-term prospective data from two large cohort studies, researchers found that individuals who ate a serving of unprocessed red meat each day had a 13% higher risk of mortality during the study period. The numbers were even more grim for processed meats, as a one-serving-per-day increase in such foods as bacon or hot dogs was associated with a 20% increase in mortality risk. Hu and colleagues ultimately concluded that 9.3% of the observed deaths in men and 7.6% of the deaths in women could have been avoided had participants consumed less than 0.5 daily servings (42 g) of red meat [4].
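
As a point of reference for how such “avoidable deaths” estimates are derived (this is the generic epidemiologic formula; the authors’ exact modeling approach may differ), the population attributable fraction combines the relative risk of an exposure with its prevalence:

\[ \mathrm{PAF} = \frac{p_e\,(\mathrm{RR}-1)}{1 + p_e\,(\mathrm{RR}-1)} \]

where \(p_e\) is the proportion of the cohort exceeding the reference intake and RR is the associated relative risk. With relative risks as modest as 1.13-1.20 per daily serving, attributable fractions in the 8-9% range imply that a substantial share of the cohort consumed well above the 0.5-serving threshold.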

While this study received a great deal of media buzz, it is merely the latest in a long line of studies over the past decade that have tried to better understand how red meat consumption may impact the development of chronic disease. Indeed, our own research group recently set out to answer that same question, although through a different approach: focusing on dietary patterns rather than on specific dietary elements. Compared with the “single nutrient” or “single food” approach, this analytic method more fully accounts for biochemical interactions between nutrients, as well as interrelationships between dietary components that make it difficult to distinguish individual food or nutrient effects. We followed over 11,000 individuals in Bangladesh for nearly 7 years, identifying distinct dietary patterns as well as the associations between these patterns and the risk of adverse cardiovascular outcomes. In short, we found that adherence to an animal-protein diet increased the risk of death from overall cardiovascular disease, especially heart disease. In fact, when adherence to the animal-protein diet was stratified into 4 levels, the most adherent group had twice the risk of heart disease mortality compared with the least adherent. While striking, these results inevitably raise the question of what role red meat in particular played in the increased mortality, as it was only one component of the unhealthier diet [5].

The contrasting analytical approaches in these two studies highlight the difficulty in fully understanding how red meat may affect cardiovascular health and mortality. It is believed that adverse outcomes from red meat intake are mediated mainly through the effects of high saturated fat on blood low-density lipoprotein and other cholesterol levels, although high sodium content in processed red meat may also play a role by elevating blood pressure and impairing vascular compliance. Additionally, nitrate preservatives, which are used in processed meats, have been shown in experimental models to reduce insulin secretion, impair glucose tolerance, and promote atherosclerosis [6].

Although multiple studies have shown an association between red meat and cardiovascular disease [7-10], the magnitude of risk is debatable. In a recent set of meta-analyses, for example, one found equivocal evidence for the influence of meat on cardiovascular disease [11], while another showed consumption of processed red meat, but not unprocessed red meat, to be associated with risk of coronary heart disease [6]. Much of this cloudiness is likely due to inconsistencies across studies in design, as well as in how each defines meat intake and meat types (distinguishing what constitutes “red,” “processed,” or “lean”). Taking all of this into consideration, the best current evidence still seems to indicate that red meat consumption at very high levels conveys increased risk of cardiovascular disease, with processed meats likely increasing that risk further. This is similar to what has been observed with respect to type 2 diabetes and colon cancer, as red meat (particularly processed meat) has been linked to a higher risk of both [12-15].

A more complete understanding of healthy eating and of the advisable intake of red meat is truly of vital importance. Although cardiovascular disease remains the world’s leading cause of death, it has been posited that over 90% of cases may be preventable simply by modifying diet and lifestyle [16-18]. A recent literature review summarized foods that are protective against cardiovascular disease: vegetables, nuts, and monounsaturated fats, as well as Mediterranean, prudent, and high-quality diets [11]. Conversely, as discussed above, current evidence indicates most convincingly that high intake of processed red meat, particularly as part of a Western diet, carries significant risk for increased mortality and adverse cardiovascular outcomes. Many questions, though, remain unanswered: namely, to what extent unprocessed red meat can be grouped with its processed counterpart in terms of health risks, and what risk reduction may be possible by substituting lean red meat for either processed or unprocessed meat (which has not yet been addressed in any large prospective study) [19].

Without a full understanding of red meat’s health effects, clinicians must settle for the best available evidence to counsel their patients in need of dietary guidance. The 2010 US Dietary Guidelines for Americans advise moderation of red meat intake, mainly because of the expected effect of its saturated fat and cholesterol on blood cholesterol [20]. However, with unprocessed and processed red meats having similar levels of saturated fat yet distinctly different clinical outcomes, current dietary recommendations on meat consumption appear to rest almost solely on the “avoidance of fat” postulate. The resultant dietary recommendations, neither comprehensive nor specific, are justifiably limited by our current level of understanding. Without elucidating the health effects of preservatives in processed meats or the potential risk reduction from substituting lean meats for standard red meat, it is nearly impossible to make more nuanced or quantitative recommendations.

So how does all of this affect the day-to-day practice of a clinician, particularly one in primary care? There will likely never come a day when it is realistic to counsel or expect every patient to avoid red meat completely. In light of recent evidence, though, it is certainly justifiable to recommend moderation, particularly with respect to processed types. Until further research is able to establish hard-and-fast guidelines, qualitative guidance will remain the best evidence-based advice that physicians can hand down. In other words, if a patient is going to have a cow (or lamb or pork, for that matter), emphasize moderation and recommend that it not be processed.

Tyler R. McClintock is an M.D./M.S. candidate in the Department of Environmental Medicine at New York University School of Medicine. Under the direction of Dr. Yu Chen, his research focuses on how environmental and dietary factors are related to the risk of chronic diseases.

Tyler R. McClintock is a 4th year medical student at NYU School of Medicine

Peer reviewed by Michelle McMacken, MD, Dept. of Medicine (GIM Div.) NYU Langone Medical center

Image courtesy of Wikimedia Commons

References

1. Wanjek C. Red meat a ticket to early grave, Harvard says. Yahoo! Daily News. March 12, 2012. http://article.wn.com/view/2012/03/12/Red_Meat_a_Ticket_to_Early_Grave_Harvard_Says/#/related_news. Accessed May 23, 2012.

2. Dale R. Red meat ‘kills.’ The Sun. March 13, 2012.

3. Ostrow N. A hot dog a day raises risk of dying, Harvard study finds. Bloomberg Businessweek. March 12, 2012. http://www.businessweek.com/news/2012-03-12/a-hot-dog-a-day-raises-risk-of-dying-harvard-study-finds.  Accessed March 23, 2012.

4. Pan A, Sun Q, Bernstein AM, et al. Red meat consumption and mortality: results from 2 prospective cohort studies. Arch Intern Med. 2012;172(7):555-63.

5. Chen Y, McClintock TR, Segers S, et al. Prospective investigation of major dietary patterns and risk of cardiovascular mortality in Bangladesh. Int J Cardiol. 2012 May 3. [Epub ahead of print]

6. Micha R, Wallace SK, Mozaffarian D. Red and processed meat consumption and risk of incident coronary heart disease, stroke, and diabetes mellitus: a systematic review and meta-analysis. Circulation. 2010;121(21):2271-2283.

7. Fraser GE. Associations between diet and cancer, ischemic heart disease, and all-cause mortality in non-Hispanic white California Seventh-day Adventists. Am J Clin Nutr. 1999;70(3 Suppl):532S-538S.

8. Sinha R, Cross AJ, Graubard BI, Leitzmann MF, Schatzkin A. Meat intake and mortality: a prospective study of over half a million people. Arch Intern Med, 2009;169(6):562-571.  http://www.ncbi.nlm.nih.gov/pubmed/19307518

9. Kelemen LE, Kushi LH, Jacobs DR Jr, Cerhan JR. Associations of dietary protein with disease and mortality in a prospective study of postmenopausal women. Am J Epidemiol. 2005;161(3):239-249.

10. Kontogianni MD, Panagiotakos DB, Pitsavos C, Chrysohoou C, Stefanidis C. Relationship between meat intake and the development of acute coronary syndromes: the CARDIO2000 case-control study. Eur J Clin Nutr. 2008;62(2):171-177.

11. Mente A, deKoning L, Shannon HS, Anand SS. A systematic review of the evidence supporting a causal link between dietary factors and coronary heart disease. Arch Intern Med. 2009;169(7):659-669.  http://www.ncbi.nlm.nih.gov/pubmed/19364995

12. McAfee AJ, McSorley EM, Cuskelly GJ, et al. Red meat consumption: an overview of the risks and benefits. Meat Sci. 2010;84(1):1-13.  http://www.ncbi.nlm.nih.gov/pubmed/20374748

13. Fung TT, Schulze M, Manson JE, Willett WC, Hu FB. Dietary patterns, meat intake, and the risk of type 2 diabetes in women. Arch Intern Med. 2004;164(20):2235-2240.

14. Pan A, Sun Q, Bernstein AM, et al. Red meat consumption and risk of type 2 diabetes: 3 cohorts of US adults and an updated meta-analysis. Am J Clin Nutr. 2011;94(4):1088-1096.

15. Larsson SC, Wolk A. Meat consumption and risk of colorectal cancer: a meta-analysis of prospective studies. Int J Cancer. 2006;119(11):2657-2664.

16. Lopez AD, Mathers CD. Measuring the global burden of disease and epidemiological transitions: 2002-2030. Ann Trop Med Parasitol. 2006;100(5-6):481-499.

17. Yusuf S, Reddy S, Ounpuu S, Anand S. Global burden of cardiovascular diseases: part I: general considerations, the epidemiologic transition, risk factors, and impact of urbanization. Circulation. 2001;104(22):2746-2753.

18. Ornish D. Dean Ornish on the world’s killer diet. TED Talk. Monterey, CA. February, 2006.

19. Roussell MA, Hill AM, Gaugler TL, et al. Beef in an Optimal Lean Diet study: effects on lipids, lipoproteins, and apolipoproteins. Am J Clin Nutr. 2012;95(1):9-16.  http://www.unboundmedicine.com/washingtonmanual/ub/citation/22170364/Beef_in_an_Optimal_Lean_Diet_study:_effects_on_lipids_lipoproteins_and_apolipoproteins_

20. U.S. Department of Agriculture and U.S. Department of Health and Human Services. Dietary Guidelines for Americans, 2010. 7th ed. Washington, DC: U.S. Government Printing Office; December 2010. http://health.gov/dietaryguidelines/dga2010/dietaryguidelines2010.pdf

The Effect of Bariatric Surgery on Incretin Hormones and Glucose Homeostasis

April 4, 2013

By Michael Crist

Faculty Peer Reviewed

Until recently, little thought was given to the important role played by the duodenum, jejunum, and ileum in glucose homeostasis. The involvement of the gut in glucose regulation is mediated by the enteroinsular axis, which refers to the neural and hormonal signaling pathways that connect the gastrointestinal (GI) tract with pancreatic beta cells. These pathways are largely responsible for the increase in insulin that occurs during the postprandial period. In 1964 McIntyre and colleagues first reported that oral glucose administration elicits a greater insulin response than a similar amount of glucose infused intravenously [1]. This observation, later named the incretin effect, reflects the action of certain gut hormones within the enteroinsular axis that promote insulin secretion [2]. Although many hormones are believed to contribute, the two that play the most significant role in nutrient-stimulated insulin secretion are glucagon-like peptide 1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP) [3,4]. GLP-1 is synthesized by L-cells found predominantly in the ileum and colon [5], and GIP is secreted from K-cells found predominantly in the duodenum [6]. Both GLP-1 and GIP are secreted in response to nutrients within the gut and are powerful insulin secretagogues, accounting for roughly 50% of postprandial insulin secretion [7]. Both have been shown to promote pancreatic beta cell proliferation and survival [7]. GLP-1 has also been shown to inhibit glucagon secretion and gastric emptying while promoting satiety and weight loss [7].
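
For context, the incretin effect is conventionally quantified (this formula is standard in the physiology literature, though not spelled out in the articles cited here) as the proportion of the insulin response to oral glucose that cannot be reproduced by an isoglycemic intravenous glucose infusion:

\[ \text{Incretin effect (\%)} = 100 \times \frac{AUC_{insulin}^{oral} - AUC_{insulin}^{IV}}{AUC_{insulin}^{oral}} \]

In healthy subjects this value is typically on the order of 50-70%, consistent with the roughly 50% contribution of GLP-1 and GIP to postprandial insulin secretion noted above, and it falls substantially in type 2 diabetes.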

The incretin effect progressively diminishes with the onset of type 2 diabetes, a process that contributes to disordered glucose metabolism. GLP-1 secretion is significantly lower in type 2 diabetics than in non-diabetic individuals, and GIP loses its insulinotropic properties [8]. Malabsorptive bariatric operations, which alter GI tract anatomy, have been shown to affect incretin hormone profiles and glucose homeostasis [9,10]. Many patients show a postoperative return to normal plasma glucose, plasma insulin, and glycosylated hemoglobin levels and discontinue the use of diabetes-related medications [10,11]. The dramatic resolution of diabetes and the return to euglycemia often occur within one week of surgery, before a significant amount of weight loss has occurred [12,13]. In 2001 Pories and Albrecht reported long-term glycemic control in 91% of patients 14 years after they underwent malabsorptive bariatric surgery [12]. Furthermore, the postoperative improvement in insulin sensitivity has been shown to prevent the progression from impaired glucose tolerance to diabetes [12].

The exact mechanism through which malabsorptive bariatric surgery improves glucose homeostasis is unclear. Much of the evidence in support of bariatric surgery as a treatment for diabetes comes from studies of Roux-en-Y gastric bypass and biliopancreatic diversion (which results in enteral nutrition passing directly from the stomach to the ileum) [9,10]. Both of these procedures surgically alter the GI tract such that nutrient chyme bypasses the duodenum and the proximal jejunum. Many initially hypothesized that enhanced nutrient delivery to the distal intestine generates a physiological signal that improves glucose metabolism. Enhanced GLP-1 secretion as a result of expedited nutrient delivery to the L-cell-rich ileum has been proposed as a mechanism that contributes to this process [14,15]. Alternatively, the exclusion of nutrient flow through the duodenum and proximal jejunum may interrupt a signaling pathway that confers insulin resistance [11,12].

Rubino and colleagues tested both theories in a non-obese rat model of type 2 diabetes by comparing glucose tolerance among 3 surgery groups and one non-operated control. Of the 3 surgery groups, one underwent duodenal-jejunal bypass (DJB), which excluded the proximal foregut from nutrient passage and resulted in early nutrient delivery to the distal gut. Another group underwent gastrojejunostomy, in which a surgical anastomosis was created between the stomach and jejunum while preserving the normal connection between the stomach and duodenum. This allowed both the normal passage of nutrients through the foregut and enhanced nutrient delivery to the hindgut. In effect, both the DJB and the gastrojejunostomy promoted nutrient delivery to the ileum, whereas only the DJB procedure excluded the duodenum from nutrient passage. The third group was a sham-operated control. The DJB group showed better glucose tolerance than all other study groups even though there were no differences in food intake, body weight, or nutrient absorption [16]. Furthermore, when the DJB group underwent a second operation to restore nutrient passage through the foregut, glucose tolerance deteriorated. When the gastrojejunostomy group underwent a second operation to prevent nutrient passage through the foregut, glucose tolerance improved [16]. These findings suggest that exclusion of the duodenum and proximal jejunum is a necessary component of surgical interventions aimed at improving glucose tolerance.

Bariatric surgery holds great potential in the treatment of type 2 diabetes mellitus (T2DM) and will likely play an increasingly important role in diabetes management. Improved glucose regulation following malabsorptive bariatric procedures is likely multifactorial, with alterations in gut microflora and the beneficial effects of weight loss contributing over time. Changes in gut hormone secretion profiles, however, appear to play an important role in the initial improvements in glucose homeostasis.

FIGURE 1. Interventions. A, Duodenal-jejunal bypass (DJB). This operation does not impose any restriction to the flow of food through the gastrointestinal tract. The proximal small intestine is excluded from the transit of nutrients, which are rapidly delivered more distally in the small bowel. Food exits the stomach and enters the small bowel at 10 cm from the ligament of Treitz, and digestive continuity is reestablished approximately 25% of the way down the jejunum. B, Gastrojejunostomy (GJ). This operation consists of a simple anastomosis between the distal stomach and the first quarter of the jejunum. The site of the jejunum that is anastomosed to the stomach is chosen at the same distance as in DJB (10 cm from the ligament of Treitz). Hence, the DJB and GJ share the feature of enabling early delivery of nutrients to the same level of small bowel. In contrast to DJB, the GJ does not involve exclusion of duodenal passage, and nutrient stimulation of the duodenum is maintained. C, Ileal bypass (ILB). This operation reduces intestinal fat absorption by preventing nutrients from passing through the distal ileum, where most lipids are absorbed.

Michael Crist is a 4th year medical student at NYU School of Medicine

Peer reviewed by Natalie Levy, MD, Department of Medicine (GIM Div.) NYU Langone Medical Center

References

1. McIntyre N, Holdsworth CD, Turner DS. New interpretation of oral glucose tolerance. Lancet. 1964;2(7349):20-21.

2. Creutzfeldt W. The incretin concept today. Diabetologia. 1979;16(2):75-85.  http://www.ncbi.nlm.nih.gov/pubmed/32119

3. Fetner R, McGinty J, Russell C, Pi-Sunyer FX, Laferrère B. Incretins, diabetes, and bariatric surgery: a review. Surg Obes Relat Dis. 2005;1(6):589-597.

4. Drucker DJ. Enhancing incretin action for the treatment of type 2 diabetes. Diabetes Care. 2003;26(10):2929-2940. http://care.diabetesjournals.org/content/26/10/2929

5. Drucker DJ. Incretin-based therapies: A clinical need filled by unique metabolic effects. Diabetes Educ. 2006;32 Suppl 2:65S-71S.

6. Vilsbøll T, Holst JJ. Incretins, insulin secretion and type 2 diabetes mellitus. Diabetologia. 2004;47(3):357-366.

7. Fetner R, McGinty J, Russell C, Pi-Sunyer FX, Laferrère B. Incretins, diabetes, and bariatric surgery: a review. Surg Obes Rel Dis. 2005;1(6):589-597.

8. Nauck M, Stöckmann F, Ebert R, Creutzfeldt W. Reduced incretin effect in type 2 (non-insulin-dependent) diabetes. Diabetologia. 1986;29(1):46-52. http://www.ncbi.nlm.nih.gov/pubmed/3514343

9. Rosa G, Mingrone G, Manco M, et al. Molecular mechanisms of diabetes reversibility after bariatric surgery. Int J Obes (Lond). 2007;31(9):1429-1436.  http://www.ncbi.nlm.nih.gov/pubmed/17515913

10. Schauer PR, Burguera B, Ikramuddin S, et al. Effect of laparoscopic Roux-en Y gastric bypass on type 2 diabetes mellitus. Ann Surg. 2003;238(4):467-84.

11. Rubino F, Gagner M, Gentileschi P, et al. The early effect of the Roux-en Y gastric bypass on hormones involved in body weight regulation and glucose metabolism. Ann Surg. 2004;240(2):236-242.

12. Pories WJ, Albrecht RJ. Etiology of type II diabetes mellitus: role of the foregut. World J Surg. 2001;25(4):527-531.  http://www.ncbi.nlm.nih.gov/pubmed/11344408

13. Guidone C, Manco M, Valera-Mora E, et al. Mechanisms of recovery from type 2 diabetes after malabsorptive bariatric surgery. Diabetes. 2006;55(7):2025-2031.

14. Patriti A, Aisa MC, Annetti C, et al. How the hindgut can cure type 2 diabetes. Ileal transposition improves glucose metabolism and beta-cell function in Goto-Kakizaki rats through an enhanced proglucagon gene expression and L-cell number. Surgery. 2007;142(1):74-85.  https://www.ncbi.nlm.nih.gov/m/pubmed/17630003/?i=2&from=/16259883/related

15. Patriti A, Facchiano E, Sanna A, Gulla N, Donini A. The enteroinsular axis and the recovery from type 2 diabetes after bariatric surgery. Obes Surg. 2004;14(6):840-848.

16. Rubino F, Forgione A, Cummings DE, et al. The mechanism of diabetes control after gastrointestinal bypass surgery reveals a role of the proximal small intestine in the pathophysiology of type 2 diabetes. Ann Surg. 2006;244(5):741-749.

White Coat Hypertension: Are Doctors Bad for Your Blood Pressure?

March 20, 2013

By Lauren Foster

Faculty Peer Reviewed

Hypertension is a pervasive chronic disease affecting approximately 65 million adults in the United States and is a significant cause of morbidity and mortality [1]. Antihypertensives are widely prescribed due to their effectiveness in lowering blood pressure, thereby reducing the risk of cardiovascular events. However, the phenomenon of the “white coat effect” may be a complicating factor in the diagnosis and management of hypertensive patients. It is well established that a considerable number of people experience an elevation of their blood pressure in the office setting, particularly when it is measured by a physician. The cause of this white coat hypertension, as well as its implications for the prognosis and treatment of hypertension, remains controversial.

The concept of white coat hypertension has existed for many years; some of the first reports of blood pressure differing between a resting value and one taken by the physician were published by Alam and Smirk in the 1930s [2]. Studies since then have continued to demonstrate the elevating effect of a physician’s office on blood pressure, with an estimated 20% prevalence of white coat hypertension in the general population [3]. The definition of white coat hypertension used in research continues to vary, however, producing a range of reported prevalences from 14.7% to 59.6% [3]. Most studies characterize white coat hypertension as an office blood pressure greater than 140/90 mmHg with ambulatory blood pressures less than 135/85 mmHg [3]. The regular use of home blood pressure monitors and 24-hour ambulatory blood pressure monitoring (ABPM) has further demonstrated this discrepancy in clinical practice as well as in research.
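To make these cutoffs concrete, here is a minimal sketch that classifies a pair of office and out-of-office readings. The function name, the inclusion of a “masked hypertension” category, and the example values are illustrative assumptions; exact thresholds and operators vary between studies and guidelines.

    # Classify blood pressure using the approximate cutoffs cited above:
    # office >= 140/90 mmHg, ambulatory/home daytime average >= 135/85 mmHg.
    # Exact operators and thresholds differ between studies and guidelines.

    def classify_bp(office_sys, office_dia, amb_sys, amb_dia):
        office_high = office_sys >= 140 or office_dia >= 90
        ambulatory_high = amb_sys >= 135 or amb_dia >= 85
        if office_high and not ambulatory_high:
            return "white coat hypertension"
        if office_high and ambulatory_high:
            return "sustained hypertension"
        if ambulatory_high:
            return "masked hypertension"
        return "normotension"

    print(classify_bp(152, 94, 128, 80))  # -> white coat hypertension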

White coat hypertension is hypothesized to result from anxiety and subsequent sympathetic nervous system activation. Studies examining the presence of white coat hypertension among individuals with anxious traits have not found evidence of this association; rather, it appears to be associated with a state of anxiety unique to the presence of a physician [5]. In a study by Gerin, Ogedegbe, and colleagues, ABPM measurements of patients’ blood pressure in a separate laboratory facility were compared with ABPM measurements in the waiting room of a physician’s office and a manual blood pressure taken by a physician in the examining room. Their results demonstrated a significant elevation of blood pressure on the day of the physician’s office visit, with a larger increase in previously diagnosed hypertensive patients, and no difference in blood pressure between the waiting room and the examining room [2]. This provides evidence for the notion that white coat hypertension is the result of a classically conditioned response to the physician’s office. That this occurred more often in patients with previously established hypertension may be due to an initial anxiety reaction when patients learn they have hypertension, which is further conditioned by subsequent office visits to check their blood pressure control [2].

The effect of isolated white coat hypertension on cardiovascular risk has been controversial. One study examining the target organ damage of hypertension in terms of left ventricular mass and carotid-femoral pulse wave velocity found a positive correlation with daytime blood pressure values, but not in those with elevated office blood pressures alone [6]. A recent meta-analysis likewise showed that cardiovascular risk is not significantly different between white coat hypertension and normotension [7]. However, another study by Gustavsen and colleagues evaluating the rate of cardiovascular deaths and nonfatal events over a 10-year follow-up period found that patients with white coat hypertension and essential hypertension had similar event rates, while normotensive patients had significantly lower rates [8]. In contrast, a different study determined that the unadjusted rate of all-cause mortality in patients with white coat hypertension (4.4 deaths per 1,000 patient-years of follow-up) was lower than that in patients with sustained hypertension (10.2 deaths per 1,000 patient-years of follow-up), and that this difference remained significant after adjusting for age, sex, smoking, and use of antihypertensive medication [9]. The effect of isolated white coat hypertension on cardiovascular risk still needs further investigation to determine whether treatment with antihypertensives is warranted.

As hypertension is routinely diagnosed by blood pressure measurements obtained by a physician in an office setting, it is likely that a significant portion of white coat hypertension is treated with antihypertensives. Gustavsen and colleagues noted that 60.3% of patients with white coat hypertension were treated with antihypertensives at some point during the 10-year follow-up [8]. In the Treatment of Hypertension Based on Home or Office Blood Pressure (THOP) trial, antihypertensive treatment was adjusted based on either self-measured home blood pressure values or conventional office measurements. At the end of the 6-month period, less intensive drug treatment was used in the home blood pressure group than in the office group, and more home blood pressure patients were able to permanently stop antihypertensive drug treatment (25.6% vs 11.3%). However, those treated based on home blood pressure measurements had slightly higher blood pressures at the end of the trial than those treated based on office measurements, which could potentially increase cardiovascular risk [10]. Evaluating whether a patient has sustained hypertension or white coat hypertension with normotensive ambulatory blood pressure, using home devices or ABPM, may help to identify those who do in fact require antihypertensive medications.

White coat hypertension may also play a role in cases of resistant hypertension. ABPM may be necessary to differentiate true drug-resistant hypertension from hypertension that is well controlled outside of the physician’s office in order to prevent overtreatment. One study found that when patients documented to have uncontrolled hypertension had their blood pressure monitored for 24 hours, only 69% were actually uncontrolled [11]. Studies have also looked for other ways to differentiate true resistant hypertension from white-coat resistant hypertension and have found that patients with true resistant hypertension have higher salt and alcohol intake as well as higher renin values [12].

In clinical practice, white coat hypertension is likely a common confounding factor in the diagnosis and treatment of hypertension. Patients often insist that their blood pressure is much lower at home than at their office visit, and the anxiety of an appointment solely for a blood pressure check is likely a contributing factor. Shifts away from physician measurement of blood pressure or substitution with automatic blood pressure devices may help to counteract this phenomenon. Home blood pressure monitoring devices can be a useful tool in discerning whether a patient’s blood pressure is properly controlled on a current treatment regimen or if additional therapy is needed. Avoiding overtreatment of hypertension may also lower health care costs, although the cardiovascular risks of white coat hypertension must be further elucidated so that the importance of treating white coat hypertension can be determined. White coat hypertension is a real and ubiquitous phenomenon, and must be considered by physicians for all patients with elevated blood pressures.

Commentary by Dr. Stephen Kayode Williams

Attending Physician, Bellevue Primary Care Hypertension Clinic

Are doctors bad for your blood pressure? Yes! This is a timely discussion as we eagerly await updated national guidelines for the management of hypertension. How will JNC 8 address this issue that comes up at every visit to our primary care clinics? The latest US hypertension guidelines were published in 2003 [13]. The more recent 2011 UK guidelines are remarkable in stating that, in order to confirm a new diagnosis of hypertension, ambulatory blood pressure monitoring (or alternatively home blood pressure monitoring) should demonstrate daytime blood pressures greater than or equal to 135/85 mmHg [14]. An exhaustive cost-effectiveness analysis performed for these guidelines concluded that, despite the expenses incurred with ambulatory blood pressure monitoring, there are vast cost savings from preventing an erroneous diagnosis of hypertension based on office blood pressure readings alone. In this country, ambulatory blood pressure monitoring is not widely available in primary care. Stay tuned to see how the upcoming hypertension guidelines address these clinical correlations.

Lauren Foster is a 4th year medical student at NYU School of Medicine

Peer reviewed by Stephen Kayode Williams, MD, MS, Bellevue Primary Care Hypertension Clinic

Image courtesy of Wikimedia Commons

References:

1. Fields LE, Burt VL, Cutler JA, Hughes J, Roccella EJ, Sorlie P. The burden of adult hypertension in the United States 1999 to 2000: a rising tide. Hypertension. 2004;44(4):398-404.  http://www.ncbi.nlm.nih.gov/pubmed/15326093

2. Gerin W, Ogedegbe G, Schwartz JE, et al. Assessment of the white-coat effect. J Hypertens. 2006;24(1):67-74.

3. Pickering TG. White coat hypertension. Curr Opin Nephrol Hypertens. 1996;5(2):192-198.  http://circ.ahajournals.org/content/98/18/1834.full

4. Verdecchia P, Schillaci G, Boldrini F, Zampi I, Porcellati C. Variability between current definitions of ‘normal’ ambulatory blood pressure. Implications in the assessment of white coat hypertension. Hypertension. 1992;20(4):555-562.

5. Ogedegbe G, Pickering TG, Clemow L, et al. The misdiagnosis of hypertension: the role of patient anxiety. Arch Intern Med. 2008;168(22):2459-2465. http://archinte.jamanetwork.com/article.aspx?articleid=773457

6. Silveira A, Mesquita A, Maldonado J, Silva JA, Polonia J. White coat effect in treated and untreated patients with high office blood pressure. Relationship with pulse wave velocity and left ventricular mass index. Rev Port Cardiol. 2002;21(5):517-530.

7. Pierdomenico SD, Cuccurullo F. Prognostic value of white-coat and masked hypertension diagnosed by ambulatory monitoring in initially untreated subjects: an updated meta analysis. Am J Hypertens. 2011;24(1):52-58.  http://ajh.oxfordjournals.org/content/24/1/52.abstract

8. Gustavsen PH, Høegholm A, Bang LE, Kristensen KS. White coat hypertension is a cardiovascular risk factor: a 10-year follow-up study. J Hum Hypertens. 2003;17(12):811-817.

9. Dawes MG, Bartlett G, Coats AJ, Juszczak E. Comparing the effects of white coat hypertension and sustained hypertension on mortality in a UK primary care setting. Ann Fam Med. 2008;6(5):390-396.  http://www.annfammed.org/content/6/5/390.full.pdf

10. Den Hond E, Staessen JA, Celis H, et al. Treatment of Hypertension Based on Home or Office Blood Pressure (THOP) Trial Investigators. Antihypertensive treatment based on home or office blood pressure–the THOP trial. Blood Press Monit. 2004;9(6):311-314.

11. Godwin M, Delva D, Seguin R, et al. Relationship between blood pressure measurements recorded on patients’ charts in family physicians’ offices and subsequent 24 hour ambulatory blood pressure monitoring. BMC Cardiovasc Disord. 2004;4:2.  http://www.biomedcentral.com/1471-2261/4/2/

12. Veglio F, Rabbia F, Riva P, et al. Ambulatory blood pressure monitoring and clinical characteristics of the true and white-coat resistant hypertension. Clin Exp Hypertens. 2001;23(3):203-211.

13. Chobanian AV, Bakris GL, Black HR, et al. The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure: the JNC 7 report. JAMA. 2003;289:2560-2572.  http://www.ncbi.nlm.nih.gov/pubmed/12748199

14. Krause T, Lovibond K, Caulfield M, McCormack T, Williams B. Management of hypertension: summary of NICE guidance. BMJ. 2011;343:d4891.  http://www.bmj.com/content/343/bmj.d4891?tab=responses

Anal cancer screening – A case for screening anal paps

January 24, 2013

By Nelson Sanchez, MD

Faculty Peer Reviewed

Case:

A 56-year-old homosexual male presents to your clinic to ask whether or not he should have an anal Pap smear. The patient is HIV positive, has been on HAART for five years, and has no history of opportunistic infections. He denies any anal pain, bleeding, or masses.

While efforts to improve knowledge about colorectal cancer in various communities continue to grow, awareness of anal cancer remains limited and misconceptions persist. Over the past couple of years there has been more discussion about anal cancer, in large part because it was the seemingly unlikely cause of death of the actress Farrah Fawcett. The danger with the way in which this issue has come to light is that it might be dismissed as a curious anomaly that is unlikely to affect anyone and for which no screening modality exists for early detection.

A good screening test should meet certain criteria for its recommended use: early diagnosis of a common disease, a treatable condition, a high sensitivity and specificity, and ease of use. For example, cervical cancer has no symptoms early on, but a cervical pap smear can detect dysplastic cells or early stage cancer. Early detection can lead to complete cure, and the cervical pap smear is both reliable and easy to use.

Anal cancer remains an uncommon cancer with a slowly rising national incidence rate. The current age-adjusted incidence rate for anorectal cancer is 1.6 per 100,000 men and women per year, and it is estimated that approximately 2,000 men and 3,260 women were diagnosed with cancer of the anus, anal canal, and anorectum in 2010 [1]. The median age at diagnosis for cancer of the anus is 60 years. Among men, blacks have the highest incidence rate (1.9 per 100,000), and among women, whites have the highest (2.0 per 100,000). The age-adjusted death rate is 0.2 per 100,000 per year. The overall 5-year mortality rate of 35.1% from 2001-2007 was on par with or better than that of more well-recognized malignancies with poor outcomes such as acute myeloid leukemia, multiple myeloma, gastric cancer, and ovarian cancer [2].

Cancer of the anus develops in the canal’s transition zone at the pectinate (dentate) line [3]. Anal cancer is preceded by the development of anal squamous intraepithelial lesions (ASILs) [4]. Human papilloma virus (HPV) infection is responsible for 90% of ASILs [5]. Other risk factors for ASILs include multiple sexual partners, tobacco use, and immunosuppression. ASILs are further classified into low-grade anal intraepithelial lesions (LSILs) and high-grade anal intraepithelial lesions (HSILs). LSILs resolve spontaneously in the majority of cases, while HSILs are a more likely precursor of invasive tumor [3].

The anal Pap smear is an underutilized screening tool for anal cancer, with a sensitivity of 50-75% for detection of ASIL and a specificity of 50% [6,7]. Like a cervical Pap smear, the anal Pap test evaluates the morphology of epithelial cells from the respective region. Unlike a cervical Pap smear, no speculum is required; only a small brush (measuring 3 millimeters) is inserted into the anal canal. Although the test is easily performed, there are numerous barriers to its widespread use. One obstacle is the limited number of physicians who are aware of the test and are trained to perform it. Primary care doctors, gastroenterologists, gynecologists, and general surgeons could all potentially perform the anal Pap smear. However, there is no training requirement for anal Pap screening in residency programs. Discomfort with discussing sexual behaviors and testing may also be a significant barrier that dissuades clinicians from raising the test with patients. In addition, insurance coverage for anal Pap smears is very limited.

Another issue that arises from anal cancer screening is the question of treatment. Unlike cervical dysplasia and localized cervical cancer, for which large excisions or complete resections have high cure rates (eg, a 90.9% 5-year survival rate for localized cervical cancer) [8], anal dysplasia and cancer do not have comparably successful treatments. Current treatment modalities for anal dysplasia include topical agents, immune modulation, cryotherapy, laser therapy, and surgery. These treatments often do not lead to a cure and are associated with recurrence rates as high as 50-85% [9]. Anal cancer is currently treated with chemoradiation and surgery, depending on oncologic staging, with an overall 5-year mortality rate of 35.1% [1].

Additionally, the cost-effectiveness of anal cancer screening is questionable. In the United States, a cost-effective screening program is generally defined as one with a cost of under $30,000-50,000 per year of life saved (PYLS) [9]. Taking cervical cancer as an example, Pap smears every three years for HIV-negative women cost approximately $11,800 PYLS [10]. Yearly cervical Pap smears in HIV-positive women cost approximately $13,000 PYLS [11]. Estimates for Pap screening in anal cancer showed that annual testing in HIV-positive men amounted to $11,000 PYLS, a cost similar to that of cervical cancer screening [12]. Testing every three years in HIV-negative men was estimated to cost about $7,800 PYLS [13].

However, in the United Kingdom, a cost-effectiveness analysis of anal Pap screening did not yield promising results [14]. The UK study examined the cost-effectiveness of screening high-risk HIV-positive men who have sex with men (MSM). The researchers concluded that screening this high-risk group would not generate health improvements at a reasonable cost. The estimated economic burden of screening HIV-positive MSM was calculated at £66,000 ($102,227) per quality-adjusted life-year (QALY) gained, well above accepted cost-effectiveness thresholds. The authors suggest that the main difference between this cost-effectiveness model and the US study is that the UK model combines HIV-negative, undiagnosed HIV-positive, and diagnosed HIV-positive MSM. This explanation does not completely account for the large discrepancy in cost estimates.
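To make the comparison concrete, the arithmetic sketch below lines the published estimates up against the commonly cited US threshold. The dollar figures are taken directly from the studies quoted above; the script itself (variable names, the choice of the $50,000 upper bound as the cutoff) is purely illustrative.

    US_THRESHOLD_USD = 50_000  # upper end of the $30,000-50,000 per-life-year range cited above

    # Published cost-effectiveness estimates quoted in the text (US dollars)
    estimates_usd = {
        "Cervical Pap every 3 years, HIV-negative women (per YLS)": 11_800,
        "Cervical Pap yearly, HIV-positive women (per YLS)": 13_000,
        "Anal Pap yearly, HIV-positive men (per YLS)": 11_000,
        "Anal Pap every 3 years, HIV-negative men (per YLS)": 7_800,
        "UK model, anal Pap in HIV-positive MSM (per QALY)": 102_227,  # GBP 66,000 converted
    }

    for strategy, cost in estimates_usd.items():
        verdict = "within" if cost <= US_THRESHOLD_USD else "above"
        print(f"{strategy}: ${cost:,} -- {verdict} the threshold")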

Currently there are no national recommendations for anal Pap screening. The test is best suited for specific high-risk populations: those with multiple sexual partners, a history of sexually transmitted disease, or HIV infection or other chronically immunosuppressed states. Among men who have sex with men, the incidence rate of anal cancer in HIV-positive patients is 69/100,000 person-years [15]. Rates of anal cancer among HIV-positive patients have risen during the HAART era because patients are living longer with HIV, allowing time for anal dysplasia to progress to cancer [16]. Screening these patients may yield significant health benefits.

Based on our patient’s HIV status, his increased risk for the development of anal cancer, ease of use of the screening test, and the potential for life-saving treatment if cancer is diagnosed, the anal pap should be recommended. The patient should be advised that any abnormal pap findings will result in anoscopy and biopsy. If the baseline screening is negative, annual surveillance screening should be discussed with a physician to review the risks and benefits of testing. Testing is also recommended if pain, bleeding or palpable masses develop in the anorectal region.

More research is needed to clarify the controversies surrounding anal cancer screening. Large population-based randomized controlled trials (RCTs) are needed to further examine the survival benefit and cost-effectiveness of screening for and treating anal cancer in high-risk populations. Currently, there is a lack of RCTs to conclusively support or refute the use of anal Pap smears, and it remains unknown when such data will become available. In addition, clinician training and insurance policy modifications are needed for more widespread application of this screening modality.

Commentary by Michelle Cespedes MD Assistant Professor Department of Medicine (Infectious Disease and Immunology)

This commentary on the benefit of anal Pap screening in HIV-infected populations is timely and will familiarize health care providers with the benefits of this simple but underutilized tool, which can improve the health outcomes of our patients. Investigators from the North American AIDS Cohort Collaboration on Research and Design recently analyzed findings from 13 US and Canadian studies. Recent data suggest that 3% of all HIV-infected adults (including non-gay HIV-infected men, HIV-infected women, and HIV-infected men who have sex with men [MSM]) will develop anal cancer by age 60. HIV-infected MSM are 80 times more likely to develop anal cancer than HIV-negative men. HIV-infected non-gay men are 27 times more likely to develop anal cancer than HIV-negative men.

This study suggests that anal cancer screening for HIV-infected patients is likely to be cost-effective. The current New York State AIDS Institute guidelines now recommend targeted anal Pap testing for HIV-infected MSM, individuals with a history of anogenital warts, and women with a history of abnormal cervical or vulvar histology.

Silverberg MJ, Lau B, Justice AC, et al. Risk of anal cancer in HIV-infected and HIV-uninfected individuals in North America. Clin Infect Dis. 2012 Apr; 54(7):1026-34.

Dr. Nelson Sanchez is a former resident at NYU Langone Medical Center and a current Instructor, Clinical Medicine at Memorial Sloan Kettering Hospital

Peer reviewed by Dr. Francois, Assistant Professor of Medicine (Gastroenterology), NYU Langone Medical Center

Image Courtesy of Wikimedia Commons

References:

1. National Cancer Institute’s Surveillance Epidemiology and End Results (http://seer.cancer.gov/statfacts/html/anus.html)

2. National Cancer Institute’s Surveillance Epidemiology and End Results (http://seer.cancer.gov/statfacts/)

3. Calore EE, et al. Prevalence of anal cytological abnormalities in women with positive cervical cytology. Diagn Cytopathol 2011;39(5):323-7

4. Oon SF, et al. Perianal condylomas, anal squamous intraepithelial neoplasms and screening: a review of the literature. J Med Screen 2010;17(1):44-9.  http://jms.rsmjournals.com/content/17/1/44.full

5. Hakim AA, et al. Indications and efficacy of the human papillomavirus vaccine.  Curr Treat Options Oncol 2007;8(6):393-401

6. Arain S, et al. The Anal Pap Smear: Cytomorphology of squamous intraepithelial lesions. Cytojournal 2005;2(1):4  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC551597/

7. Ferraris A, et al. Anal pap smear in high-risk patients: a poor screening tool.  South Med J 2008;101(11):1185-6

8. National Cancer Institute’s Surveillance Epidemiology and End Results  http://seer.cancer.gov/statfacts/html/cervix.html

9. Matthews WC. Screening for anal dysplasia associated with human papillomavirus. Top HIV Med 2003;11(2):45-9

10. Mandelblatt JS, et al. Benefits and costs of using HPV testing to screen for cervical cancer. JAMA 2002;287(18):2372-81

11. Goldie SJ, et al. The costs, clinical benefits, and cost-effectiveness of screening for cervical cancer in HIV-infected women. Ann Int Med 1999;130(2):97-107

12. Goldie SJ, et al. The clinical effectiveness and cost-effectiveness of screening for anal squamous intraepithelial lesions in homosexual and bisexual HIV-positive men. JAMA 1999;281(19):1822-9

13. Goldie SJ, et al. Cost-effectiveness of screening for anal squamous intraepithelial lesions and anal cancer in human immunodeficiency virus-negative homosexual and bisexual men. Am J Med 2000;108(8):634-41

14. Czoski-Murray C, et al. Cost-effectiveness of screening high-risk HIV-positive men who have sex with men (MSM) and HIV-positive women for anal cancer. Health Technol Assess 2010;14(53):1-101  http://www.unboundmedicine.com/evidence/ub/citation/21083999/Cost_effectiveness_of_screening_high_risk_HIV_positive_men_who_have_sex_with_men__MSM__and_HIV_positive_women_for_anal_cancer_

15. D’Souza G, et al. Incidence and epidemiology of anal cancer in the multicenter AIDS cohort study. J Acquir Immune Defic Syndr 2008;48(4):491-9

16. Reed AC, et al. Gay and bisexual men’s willingness to receive anal Papanicolaou testing. Am J Public Health 2010;100(6):1123-9  http://www.unboundmedicine.com/evidence/ub/citation/20395576/Gay_and_bisexual_men’s_willingness_to_receive_anal_Papanicolaou_testing_

Promising New Hepatitis C Medications Raise Hopes, Questions

January 17, 2013

By Carl M. Gay, MD

Faculty Peer Reviewed

A healthy 61-year-old man with a history of chronic genotype 1b hepatitis C virus infection of unknown duration arrives for his semiannual appointment in the Hepatology Clinic. The patient has previously been offered treatment with pegylated interferon and ribavirin, which he has declined on the basis of potential side effects and poor reported efficacy. He states that he has read that new treatment options for hepatitis C have recently become available…

Hepatitis C virus (HCV), first isolated in 1989, is a positive-stranded, enveloped RNA virus of the Flaviviridae family.[1] A recent survey estimates that more than 4 million Americans have antibodies to HCV, including 3.2 million with a detectable HCV viral load, indicating chronic infection.[2] The majority of individuals with chronic HCV infection in the US are infected with genotype 1,[3] which has proven difficult to treat. Risk factors for the acquisition of HCV include intravenous drug use, high-risk sexual behavior, and blood transfusion prior to the advent of HCV screening in 1992, although many infected individuals have no identifiable risk factors for transmission.[4] While the incidence of acute HCV infection has decreased since its peak in the 1980s, estimates suggest that the prevalence of chronic HCV infection will not peak until 2015 [5] and that as few as 25-30% of chronic HCV cases are diagnosed, owing to the asymptomatic nature of HCV infection in its early stages.[6]

HCV is the most common cause of chronic liver disease, cirrhosis, and hepatocellular carcinoma in the US.[7] While many patients with chronic HCV infection will never experience any serious complications, data show that roughly one-third of patients with untreated chronic HCV will progress to cirrhosis within 20 years.[7,8] Identifying this third of chronic HCV patients has proven difficult, however, because of poor correlation between quantification of HCV viral load and clinical outcomes.[8] Because of this, treatment may be initiated at any point in the natural history of HCV infection, although liver fibrosis is the best indication to initiate antiviral therapy to prevent progression. Thus, patients with chronic HCV infection should be regularly surveyed for abnormalities in liver biomarkers and with ultrasound to determine whether a liver biopsy to assess for fibrosis is indicated.[9]

Treatment of chronic HCV infection with interferon-alpha predates the identification of HCV itself,[10] but sustained responses were below 10% following a 6-month course.[11] Improvements to this treatment regimen came in 2 forms: the addition of the antiviral compound ribavirin, which more than doubled the sustained response rate,[12] and the covalent modification of interferon with polyethylene glycol (peg), which vastly improved the half-life of the molecule.[13] However, pegylated interferon + ribavirin “double therapy” yielded a sustained virologic response, defined as undetectable HCV viral load 24 weeks following cessation of therapy, in only ~40% of patients infected with HCV genotype 1.[14]

While the difficulty in generating models to study HCV infection has made analyses difficult, it is widely believed that exogenous interferon reduces HCV indirectly, by activating cell-surface interferon receptors and inducing JAK/STAT signaling. This subsequently alters transcription and translation of genes associated with inflammation and protein degradation to induce an “antiviral” state.[15,16] Several mechanisms have been proposed for the synergistic effect of ribavirin when paired with interferon therapy. Ribavirin is a guanosine analog and, as such, may be phosphorylated and subsequently incorporated into nascent RNA chains, causing early termination.[16,17] Other proposed mechanisms include ribavirin-dependent induction of catastrophic viral mutations, depletion of GTP required for RNA synthesis, and synergistic influence on the induction of interferon-dependent genes.[17,18] As a result of these indirect mechanisms of double therapy, there are considerable side effects associated with treatment, including cytopenias, fatigue, depression, pruritus, and anorexia.[19] Furthermore, for genotype 1, this therapy is continued for 48 weeks and includes weekly subcutaneous injections of interferon.[19] The poor efficacy and side effect profile paired with the length and mode of treatment administration underscore the need for direct therapies.

Recent advances in both in vitro and in vivo models of HCV infection have identified numerous candidates for drug targeting within the HCV proteome.[20] The function of the nonstructural 3 (NS3) serine proteases is twofold. They are responsible for both the cleavage of the HCV polyprotein and for inhibition of innate immune signaling within hepatocytes via cleavage and inactivation of interferon-beta promoter stimulator 1.[20] These NS3 molecules have been specifically targeted by 2 recently FDA-approved medications, telaprevir (Incivek, Vertex Pharmaceuticals, Boston, MA) and boceprevir (Victrelis, Merck, Whitehouse Station, NJ). Several recent clinical trials have highlighted the significant improvement in the efficacy of HCV treatment with “triple therapy” including an NS3 protease inhibitor in conjunction with peg-interferon and ribavirin.

The PROVE 1 randomized, controlled clinical trial found that while double therapy for 48 weeks achieved a sustained virologic response in only 41% of all patients with previously untreated genotype 1 chronic HCV, those who underwent triple therapy with telaprevir for the initial 12 weeks, followed by 36 weeks of double therapy, achieved a significantly greater sustained virologic response rate of 67%.[21] The PROVE 2 trial subsequently showed that a similar sustained virologic response of 69% (vs 46% for 48 weeks of double therapy alone) can be obtained in genotype 1-infected patients with only 12 additional weeks of double therapy following 12 weeks of triple therapy with telaprevir.[22] The ADVANCE trial has shown that the treatment can be further simplified by response-guided therapy, in which an extended rapid virologic response (ie, undetectable HCV viral load between 4 and 12 weeks of triple therapy with telaprevir) can be used as an indication for cessation of therapy after only 12 additional weeks of pegylated interferon-ribavirin double therapy.[23] These patients had sustained virologic responses (75% vs 44% for 48 weeks of double therapy alone) similar to those in the PROVE 1 and PROVE 2 trials, which included 36 weeks of double therapy following telaprevir.[23] The SPRINT-1 trial found similarly promising results for boceprevir, with the caveat that a 4-week lead-in of double therapy is required prior to 44 weeks of triple therapy with boceprevir in order to achieve the best results of 75% (vs 38% for 48 weeks of double therapy alone).[24] The SPRINT-2 trial additionally showed that response-guided therapy, similar to that utilized in the ADVANCE trial for telaprevir, can be used with boceprevir, again with a 4-week lead-in of double therapy alone.[25] While these results all apply to patients with chronic HCV genotype 1 infection who were previously untreated, like the patient in the initial case, additional randomized, controlled clinical trials have shown promising results for triple therapy in patients who have previously failed double therapy.[26,27]

Thus, the current treatment guidelines for a patient like the one in the case above would be telaprevir 750 mg by mouth 3 times daily with peg-interferon and ribavirin for 12 weeks, with viral load assays between 4 and 12 weeks of treatment.[28] Depending on the virologic response at these time points, double therapy would follow for either an additional 12 or 36 weeks.[28] A 4-week lead-in of double therapy followed by either 24 or 44 weeks of boceprevir-based triple therapy, depending on virologic response between 8 and 24 weeks of therapy, would be an FDA-approved, albeit more complicated, alternative.[28]
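As a rough illustration of the response-guided schedule just described, the sketch below encodes the telaprevir arm’s logic in Python. The function name, the boolean inputs, and the simplification to two branches are assumptions made for illustration, not a reproduction of the label or guideline text.

    # A minimal sketch of telaprevir-based response-guided therapy as described
    # above: 12 weeks of triple therapy for all patients, then either 12 or 36
    # additional weeks of peg-interferon/ribavirin depending on whether HCV RNA
    # is undetectable at both week 4 and week 12 (an extended rapid virologic
    # response). Illustrative only; real treatment decisions involve more criteria.

    def additional_double_therapy_weeks(undetectable_week4: bool,
                                        undetectable_week12: bool) -> int:
        extended_rvr = undetectable_week4 and undetectable_week12
        return 12 if extended_rvr else 36

    for wk4, wk12 in [(True, True), (True, False), (False, False)]:
        total = 12 + additional_double_therapy_weeks(wk4, wk12)
        print(f"Undetectable at week 4: {wk4}, week 12: {wk12} -> {total} weeks of total therapy")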

While the prospect of a curative 24-week regimen for genotype 1 HCV infection is certainly exciting, even the most generous predictions of sustained virologic response suggest that roughly 20% of patients will fail to respond to the new regimens. Furthermore, there are considerable side effects associated with telaprevir and boceprevir, including anemia and rash, and triple therapy combines the potential side effects of 3 agents. Initial data have, however, highlighted the fact that relapse is common in patients who receive NS3 protease inhibitors without double therapy.[22] Thus, the precautions associated with double therapy and the indications for its initiation continue to be pertinent with the addition of these new agents. For a patient like the one in this case, without any clinical signs of decompensated cirrhosis, the decision of whether to treat his HCV infection remains challenging.

Carl Gay, MD is a former medical student at NYU School of Medicine

Peer reviewed by Natalie Levy, MD, Department of Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Choo QL, Kuo G, Weiner AJ, Overby LR, Bradley DW, Houghton M. Isolation of a cDNA clone derived from a blood-borne non-A, non-B viral hepatitis genome. Science. 1989;244(4902):359-362.  http://www.ncbi.nlm.nih.gov/pubmed/11983439

2. Armstrong GL, Wasley A, Simard EP, McQuillan GM, Kuhnert WL, Alter MJ. The prevalence of hepatitis C virus infection in the United States, 1999 through 2002. Ann Intern Med. 2006;144(10):705-714.

3. Alter MJ, Kurszon-Moran D, Nainan OV, et al. The prevalence of hepatitis C virus infection in the United States, 1988 through 1994. N Engl J Med. 1999;341(8):556-562. http://archive.is/MVA0

4. Wang CC, Krantz E, Klarquist J, et al. Acute hepatitis C in a contemporary US cohort: modes of acquisition and factors influencing viral clearance. J Infect Dis. 2007;196(10):1474-1482.

5. Armstrong GL, Alter MJ, McQuillan GM, Margolis HS. The past incidence of hepatitis C virus infection: implications for the future burden of chronic liver disease in the United States. Hepatology. 2000;31(3):777-782.

6. Management of hepatitis C. NIH Consens Statement. 1997;15(3):1-41. http://consensus.nih.gov/1997/1997HepatitisC105html.htm

7. Afdhal NH. The natural history of hepatitis C. Semin Liver Dis. 2004; 24(Suppl 2):S3-S8.

8. Poynard T, Bedossa P, Opolon P. Natural history of liver fibrosis progression in patients with chronic hepatitis C. The OBSVIRC, METAVIR, CLINIVIR, and DOSVIRC groups. Lancet. 1997;349(9055):825-832.

9. Manning DS, Afdhal NH. Diagnosis and quantitation of fibrosis. Gastroenterology. 2008;134(6):1670-1681.

10. Hoofnagle JH, Mullen KD, Jones DB, et al. Treatment of chronic non-A, non-B hepatitis with recombinant human alpha interferon. A preliminary report. N Engl J Med. 1986;315(25):1575-1578.

11. Di Bisceglie AM, Hoofnagle JH. Optimal therapy of hepatitis C. Hepatology. 2002;36(5 Suppl 1): S121-S127.

12. McHutchison JG, Poynard T. Combination therapy with interferon plus ribavirin for the initial treatment of chronic hepatitis C. Semin Liver Dis. 1999;19(Suppl 1):S57-S65.

13. Zeuzem S, Feinman SV, Rasenack J, et al. Peginterferon alfa-2a in patients with chronic hepatitis C. N Engl J Med. 2000;343(23):1666-1672.

14. Fried MW, Shiffman ML, Reddy KR, et al. Peginterferon alfa-2a plus ribavirin for chronic hepatitis C virus infection. N Engl J Med. 2002;347(13):975-982.  http://www.nejm.org/doi/full/10.1056/NEJMoa020047

15. Zhu H, Zhao H, Collins CD, et al. Gene expression associated with interferon alfa antiviral activity in an HCV replicon cell line. Hepatology. 2003;37(5):1180-1188.  http://www.ncbi.nlm.nih.gov/pubmed/12717400

16. de Veer MJ, Holko M, Frevel M, et al. Functional classification of interferon-stimulated genes identified using microarrays. J Leukoc Biol. 2001;69(6):912-920.  http://www.jleukbio.org/content/69/6/912.full.pdf

17. Maag D, Castro C, Hong Z, Cameron CE. Hepatitis C virus RNA-dependent RNA polymerase (NS5B) as a mediator of the antiviral activity of ribavirin. J Biol Chem. 2001;276(49):46094-46098.

18. Feld JJ, Hoofnagle JH. Mechanism of action of interferon and ribavirin in treatment of hepatitis C. Nature. 2005;436(7053):967-972.

19. Ward RP, Kugelmas M. Using pegylated interferon and ribavirin to treat patients with chronic hepatitis C. Am Fam Physician. 2005;72(4):655-662.  http://www.aafp.org/afp/2005/0815/p655.html

20. Boonstra A, van der Laan LJ, Vanwolleghem T, Janssen HL. Experimental models for hepatitis C viral infection. Hepatology. 2009;50(5):1646-1655.

21. McHutchison JG, Everson GT, Gordon SC, et al. Telaprevir with peginterferon and ribavirin for chronic HCV genotype 1 infection. N Engl J Med. 2009;360(18):1827-1838.

22. Hezode C, Forestier N, Dusheiko G, et al. Telaprevir and peginterferon with or without ribavirin for chronic HCV infection. N Engl J Med. 2009;360(18):1839-1850.

23. Jacobson IM, McHutchison JG, Dusheiko G, et al. Telaprevir for previously untreated chronic hepatitis C virus infection. N Engl J Med. 2011;364(25):2405-2416.

24. Kwo PY, Lawitz EJ, McCone J, et al. Efficacy of boceprevir, an NS3 protease inhibitor, in combination with peginterferon alfa-2b and ribavirin in treatment-naïve patients with genotype 1 hepatitis C infection (SPRINT-1): an open-label, randomized, multicentre phase 2 trial. Lancet. 2010;376(9742):705-716.

25. Poordad F, McCone J Jr, Bacon BR, et al. Boceprevir for untreated chronic HCV genotype 1 infection. N Engl J Med. 2011;364(13):1195-1206.  http://www.ncbi.nlm.nih.gov/pubmed/21449783

26. Zeuzem S, Andreone P, Pol S, et al. Telaprevir for retreatment of HCV infection. N Engl J Med. 2011;364(25):2417-2428.

27. Bacon BR, Gordon SC, Lawitz E, et al. Boceprevir for previously treated chronic HCV genotype 1 infection. N Engl J Med. 2011;364(13):1207-1217.

28. Rosen HR. Clinical practice. Chronic hepatitis C infection. N Engl J Med. 2011;364(25):2429-2438.  http://www.nejm.org/doi/full/10.1056/NEJMcp1006613

Mystery Quiz-The Answer

January 10, 2013

Elizabeth Mulaikal MD, Vivian Hayashi MD, Robert Smith MD

The answer to the mystery quiz is pulmonary Mycobacterium kansasii infection. The patient’s clinical presentation of fevers and night sweats suggested an infectious process or B symptoms due to lymphoma. The initial chest radiograph (image 1) demonstrated a left hilar mass, which was noted to be larger on a subsequent chest radiograph (images 2 and 4) 1 month later. This increase in size over a short duration again suggested an infectious etiology. Importantly, and a key to the case, the CT images demonstrated unilateral hilar lymphadenopathy with regions of central low attenuation and enhancing rims following intravenous contrast administration (image 5). The low-density areas may represent caseous or possibly liquefactive necrosis. Additionally, there is a mixed reticulonodular and airspace infiltrate lateral to the enlarged left hilum (image 6). These radiographic findings are strongly suggestive of mycobacterial disease associated with HIV infection, especially in patients with severely reduced CD4 cell counts. In New York City, M tuberculosis would be most likely, followed by M kansasii, which may be indistinguishable from M tuberculosis on imaging. M avium complex is less likely to cause an inflammatory reaction of this degree in patients with AIDS; pneumonitis and lymphadenopathy are relatively uncommon in this setting. The presence of M avium complex is more often a marker of severe immunocompromise in HIV patients. Culture is required for definitive diagnosis, but nucleic acid testing can suggest the diagnosis before culture results are available. Other differential diagnostic considerations include pulmonary Kaposi’s sarcoma with intrathoracic lymphadenopathy, which is unlikely without cutaneous disease, and lymphoma, which typically lacks the low-density attenuation seen here. Without a history of residence in an endemic area, histoplasmosis is also unlikely.

M kansasii is a common cause of non-tuberculous mycobacterial lung disease in HIV-positive patients. Tap water is thought to be the most likely environmental source of exposure, and person-to-person transmission does not occur. Affected individuals are typically severely immunosuppressed, with CD4 counts less than 50/cmm. The clinical and radiographic features closely resemble those of M tuberculosis. Classically, patients present with fevers, night sweats, weight loss, productive cough, and dyspnea. Treatment generally includes isoniazid, rifampin, and ethambutol, but trimethoprim-sulfamethoxazole, macrolides, and fluoroquinolones have also proven to be efficacious.

Our patient was initially placed in respiratory isolation for possible M tuberculosis infection. Sputum samples showed AFB on smear, but PCR testing for M tuberculosis and M avium complex was negative. Sputum culture ultimately grew M kansasii. The patient was placed on isoniazid, rifabutin, ethambutol, and moxifloxacin for an intended duration of 18 months.

Mystery Quiz

December 21, 2012

Elizabeth Mulaikal MD, Vivian Hayashi MD, Robert Smith MD

The patient is a 55-year-old African American male with a 60-pack-year history of tobacco use and AIDS, who presented with 1 month of intermittent fevers and weight loss. His most recent CD4 count and viral load were 2/cmm and 50,623 copies/mL, respectively. Prior opportunistic infections included pneumocystis pneumonia and thrush. He was previously homeless but currently resides in single-room-occupancy housing. Upon presentation he complained of occasional night sweats, but no shortness of breath, cough, sputum production, or hemoptysis. Vital signs were notable for a fever of 101.5°F, but the patient was otherwise normotensive with a room air saturation of 98%. On physical examination he appeared cachectic and the lung fields were clear. Labs showed a WBC count of 3.6k/cmm with 47% neutrophils and 13% bands, and an LDH of 364 U/L.


What is the most likely diagnosis?


Kidney Stones and Climate Change

October 10, 2012

By Jeffrey Shyu, MD

Faculty Peer Reviewed

Climate change has been linked to a variety of adverse effects on human health, effects that are expected to worsen in the coming decades [1]. For example, a heat wave in August 2003 resulted in nearly 15,000 deaths in France, and the anticipated increase in average world temperatures is expected to lead to longer and more frequent heat waves that will disproportionately affect our more vulnerable populations. Infectious disease outbreaks, particularly vector-borne diseases such as malaria, are expected to rise with global warming [2]. Wild swings in weather patterns, including large-scale flooding and droughts, will become more likely. Smog and particulate air pollution, both of which lead to pulmonary disease and increased mortality, are expected to worsen [3,4]. Agricultural production may be greatly affected in certain regions of the world, raising the possibility of widespread famine [5].

Some of these effects are obvious, while others may surprise the reader. NYU nephrologist David Goldfarb has spoken of a possible link between climate change and the rising incidence and prevalence of yet another disorder—kidney stones. Kidney stones, also known as nephrolithiasis, are linked to a number of factors including diet, infection, and hereditary conditions like cystinuria. However, the link between climate change and nephrolithiasis has also been studied, and a literature review turns up a number of articles that touch on this question.

An intriguing report by Brikowski et al. in the Proceedings of the National Academy of Sciences (PNAS) [6] describes a “kidney stone belt” along the southern United States. Stone disease may be as much as 50 percent more prevalent in southern states when compared to the northwest, and according to the authors, much of this difference is attributed to differences in mean annual temperature (MAT). The authors predict that with global warming, by 2050 we will see a climate-related increase of approximately 2 million lifetime cases of kidney stones in the country, primarily in the south.

What is the mechanism? Kidney stones can form in response to metabolic and environmental factors, including low urine output from decreased fluid intake or from increased insensible fluid loss, mainly from sweating. When urine becomes supersaturated, stone-forming salts concentrate and precipitate. The thought is that higher temperatures make people more prone to low urine output, and therefore more prone to stone formation.
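A back-of-the-envelope calculation illustrates the volume effect: for a fixed daily solute load, urinary concentration rises as urine volume falls. The numbers below are hypothetical and chosen only to show the inverse relationship.

    # Hypothetical 24-hour urinary calcium excretion (mg) held constant while
    # urine volume falls, for example from sweating in hot weather.
    daily_calcium_mg = 200.0

    for urine_volume_l in (2.5, 1.5, 1.0):
        concentration_mg_per_l = daily_calcium_mg / urine_volume_l
        print(f"Urine volume {urine_volume_l} L/day -> calcium {concentration_mg_per_l:.0f} mg/L")

    # Halving urine volume doubles the concentration of stone-forming salts,
    # pushing the urine closer to (or past) the supersaturation point.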

However, the link between temperature and stone formation is not entirely clear. Some data suggest that the relationship behaves in a nonlinear fashion; that is, stone formation peaks at a certain MAT and then plateaus at higher temperatures (perhaps because people are less active at these higher temperatures). Other data suggest that the correlation is more linear. For their study, the authors employed both linear and nonlinear models using large survey datasets.
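For readers who want a concrete sense of what fitting a "linear" versus a "nonlinear (plateau)" model might look like, here is a small sketch. The data points, the particular saturating functional form, and the starting guesses are all invented for illustration; they are not the survey data or the models used in the PNAS study.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical prevalence (%) of stone disease at different mean annual
    # temperatures (deg C); invented numbers that rise and then level off.
    mat_c = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 18.0, 20.0, 22.0])
    prevalence_pct = np.array([6.0, 7.0, 8.2, 9.5, 10.4, 11.0, 11.3, 11.4])

    def linear_model(t, a, b):
        # prevalence rises steadily with temperature
        return a + b * t

    def plateau_model(t, p_max, k):
        # one possible saturating form: prevalence approaches p_max at high MAT
        return p_max * t / (t + k)

    linear_params, _ = curve_fit(linear_model, mat_c, prevalence_pct)
    plateau_params, _ = curve_fit(plateau_model, mat_c, prevalence_pct, p0=[20.0, 20.0])

    print("linear fit (intercept, slope):", linear_params)
    print("plateau fit (p_max, k):", plateau_params)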

In both models, the authors of the PNAS article find that the “kidney stone belt” will grow, with the percentage of people living in high-risk zones increasing from 40 percent to 56 percent by 2050. They also estimate that treating this increase in kidney stones will cost the United States an additional $1 billion annually, not accounting for inflation.

Fakheri and Goldfarb [7] reanalyzed some of the data used in the PNAS study. They took the original data used in Brikowski’s nonlinear model and graphed the prevalence of kidney stones against MAT, broken down by gender. They were able to show a greater association between kidney stones and increased temperature among both men and women, though the effect in men was more marked. The authors speculate that this may be because men are more likely to have occupations that expose them to higher ambient temperatures. However, it has also been suggested that perhaps women are better able to keep up with fluid losses than men, via an unclear mechanism [8].

However, a cross-sectional survey of American soldiers returning from service in the Middle East, many of whom were likely exposed to high temperatures, actually demonstrated a lower rate of nephrolithiasis (1%) compared to the general population (2-3%) [9]. The military emphasizes forced hydration to prevent heat injury, and this may explain the lower rate. The study did find higher rates of stone disease in people with a previous history of kidney stones and in those with a family history of the disease.

Clearly, more evidence is needed. A causal mechanism linking temperature to kidney stone formation makes intuitive sense: higher temperatures make people more prone to dehydration and low urine output, encouraging stone formation. However, stone formation is a multifactorial process, and diet plays a large role as well. Diet and obesity have also been shown to be strongly linked to stone formation [10]. The southern United States, which tends to have more kidney stones, also has a higher overall rate of obesity compared to the north. This is one obvious potential confounder.

Another suggested mechanism is that increased sun exposure (specifically UV light) leads to increased production of 1,25-dihydroxyvitamin D, which in turn increases the absorption of dietary calcium and possibly the excretion of calcium by the kidneys [11]. This is certainly another potential confounder; however, the mechanism is largely unproven, and one would expect people to try to avoid the sun during periods of higher temperature.

In addition to obtaining data that control for these confounders, data showing a tighter link between temperature and stone formation would be helpful. Instead of using mean annual temperature, a study correlating the incidence of stone formation with month of the year, or with average monthly temperature, would provide more convincing evidence for an association. However, such a study may be complicated by the fact that stones may take months to develop before they become clinically significant. They may start to form on hot summer days, but patients may not present with symptoms until months later. Clearly, more work is needed to establish a link between climate change and this common and very painful disease.

Dr. Jeffrey Shyu recently completed his preliminary year internal medicine residency at NYU Langone Medical Center

Peer Reviewed by David Goldfarb, Nephrology, NYU Langone Medical Center

Image courtesy of Wikimedia Commons

References:

1. Costello A, Abbas M, Allen A, Ball S. Managing the health effects of climate change. Lancet. 2009 May 16: 373(9676): 1693-1733. http://linkinghub.elsevier.com/retrieve/pii/S0140-6736(09)60935-1

2. Tanser FC, Sharp B, Le Sueur D. Potential effect of climate change on malaria transmission in Africa. Lancet. 2003 Nov 29. 362(9398): 1792-1798. (http://linkinghub.elsevier.com/retrieve/pii/S0140-6736(03)14898-2)

3. Levy JJ, Chemerynski SM, Sarnat JA. Ozone exposure and mortality: an empiric bayes metaregression analysis. Epidemiology. 2005 Jul; 16(4): 458-468.

4. Pope CA, Burnett RT, Thun MJ, Calle EE, Krewski D, Ito K, Thurston GD. Lung cancer, cardiopulmonary mortality, and long-term exposure to fine particulate air pollution. JAMA. 2002 Mar 6; 287(9): 1132-1141. (http://jama.ama-assn.org/content/287/9/1132.long)

5. Parry ML, Rosenzweig C, Iglesias A, Livermore M, Fischer G. Effects of climate change on global food production under SRES emissions and socio-economic scenarios. Global Environmental Change. 2004 14: 53-67. (http://www.preventionweb.net/files/1090_foodproduction.pdf)

6. Brikowski TH, Lotan Y, Pearle MS. Climate-related increase in the prevalence of urolithiasis in the United States. Proc Natl Acad Sci USA. 2008 Jul 15; 105(28): 9841-6 (http://www.pnas.org/content/105/28/9841.long)

7. Fakheri RJ, Goldfarb DS. Association of nephrolithiasis prevalence rates with ambient temperature in the United States: a re-analysis. Kidney Int. 2009 Oct; 76(7): 798. (http://www.nature.com/ki/journal/v76/n7/full/ki2009274a.html)

8. Parks JH, Barsky R, Coe FL. Gender differences in seasonal variation of urine stone risk factors. J. Urology. 2003 170: 384-388. (http://www.jurology.com/article/S0022-5347(05)63332-0/abstract)

9. Pugliese JM, Baker KC. Epidemiology of nephrolithiasis in personnel returning from Operation Iraqi Freedom. Urology. 2009 Jul; 74(1): 56-60. (http://www.goldjournal.net/article/S0090-4295(09)00127-7/abstract)

10. Taylor EN, Stampfer MJ, Curhan GC. Obesity, weight gain, and the risk of kidney stones. JAMA. 2005 Jan 26; 293(4): 455-62. (http://jama.ama-assn.org/content/293/4/455.long)

11. Goldfarb DS. Unpublished correspondence.