Class Act

Lies My Patients Told Me: “I Take My Medications Every Day.”

January 15, 2016

By Rebecca Sussman

Peer Reviewed

Reviewing medical evidence has become such a habit that sometimes it feels almost impossible to think independently. I’ve always been a top-down thinker; I go with my gut instinct, and then look for the evidence to support my assessment.

The problem is that very often it feels like what patients need most is not the precision of a particular etiology or the selection of a medication that is perfectly and precisely tailored to their condition and comorbidities; what they need is education about what it means to maintain their health, and practical strategies for how to do so. And my long hours delving into the literature for evidence on how best to do that have been less helpful than I’d hoped.

Says the patient, “I take my medications every day.” The literature on this subject is, in fact, robust enough for me to mistrust those words when uttered by the majority of patients. For example, toxicological monitoring (which is pretty sensitive, if not practical in the day-to-day clinic setting) reveals that a whopping 50-60% of patients with resistant hypertension are non-adherent with their medications [1]. A cohort study done in Quebec and published in 2014 found that 31.3% of 37,506 first-time prescriptions were not even filled, much less taken [2].

I blame myself for the patient’s fib. If my patients feel that they cannot be honest with me about missed doses, that’s a reflection of my own failure to develop rapport. One of the best pieces of advice I have received in medical school is to start with, “I know that when I need to take medications it’s very hard for me to remember every dose. How many pills do you think you’ve missed this week?” I’ve put it to good use, but where is the literature on how to ask questions that will yield honest answers? It may be there, but I’ve yet to find it, outside of maybe the most psychodynamically oriented mental health journals.

There is, however, a growing body of research into validated tools to assist in screening for medication adherence. My personal favorite is the brand-spanking-new Measure of Drug Self-Management, abbreviated as MeDS—cute, right [3]? But it’s not the catchy name that draws me to this newly-developed and validated 12-item questionnaire; rather, it’s that the authors specifically sought to develop an inexpensive tool that is considerate of the wide range of patient behaviors (and barriers) and that applies to a diverse range of patients with variable literacy levels [3]:

  1. Did you forget to take your (insert drug name) at any time last week?
  2. In the past month have you stopped taking (insert drug name) for any reason without telling your doctor?
  3. I often forget to take my medicine.
  4. I am organized about when and how I take my medicines.
  5. I have a hard time paying for my medicines.
  6. The print instructions of my prescription bottles are confusing.
  7. Having to take medicines worries me.
  8. I often have a hard time remembering if I have already taken my medicine.
  9. I do not take my medicines when I am feeling sad or upset.
  10. My medicines disrupt my life.
  11. When my medicine causes minor side effects, I stop taking it.
  12. The idea of taking medications for the rest of my life makes me very uncomfortable.
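
The published paper describes its own validated scoring, which I won’t reproduce here; purely as an illustration of how such an instrument might be tallied, here is a naive sketch that simply counts “at-risk” answers (the reverse-coding of item 4 and the True/False encoding are my own assumptions, not the MeDS scheme):

```python
# Hypothetical illustration only: the validated MeDS scoring is described in
# Bailey et al. [3]; this sketch just counts "at-risk" answers to the 12 items.
# Assume each answer is recorded as True when the patient answers yes/agrees.
# Item 4 ("I am organized...") is treated as reverse-coded: agreeing is good.

def meds_at_risk_count(answers):
    """answers: dict mapping item number (1-12) to the patient's response
    (True = answers yes / agrees). Returns a naive count of at-risk answers."""
    reverse_coded = {4}  # agreeing with item 4 suggests good self-management
    count = 0
    for item, agrees in answers.items():
        if item in reverse_coded:
            count += 0 if agrees else 1
        else:
            count += 1 if agrees else 0
    return count

# Example: a patient who forgets doses (items 1 and 3) but is otherwise organized
answers = {i: False for i in range(1, 13)}
answers[1] = True   # forgot a dose last week
answers[3] = True   # "I often forget to take my medicine"
answers[4] = True   # "I am organized..." (reverse-coded, so not at risk)
print(meds_at_risk_count(answers))  # -> 2
```

A higher count would simply flag more self-reported barriers; any clinical cutoff would have to come from the validation literature, not from a toy tally like this.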

The tool is practical, efficient, and patient-centered–all of the things I strive to be. However, as it has only been publicized within the past month, I’d like to see some more validity testing before investing in the MeDS scale myself. Part of the reason for my skepticism is a lack of evidence-based interventions that physicians can recommend to help patients improve their medication adherence. A Cochrane review published in 2014 analyzed a total of 182 randomized controlled trials aimed at enhancing medication adherence for a wide range of patient populations and medical conditions, and concluded that the methods used for researching such interventions were insufficiently advanced [4]. In essence, no conclusions can be drawn from existing research because we are not yet adept enough at performing this research in the first place.

At what point, then, do I give up on finding the evidence to back up my instincts? Do I withhold my suggestions from patients until I know that we’re on the right track? Do I start doing the research myself? How could my methods possibly be more advanced than those of the Cochrane review? It brings me back to how I framed the issue for a patient who was frequently skipping breakfast: “You should try to get more of your calories in earlier in the day. There’s research to support that. I’ve heard the explanation that when you eat in the morning, that’s fuel for your body and you burn those calories during your daily activities. But the calories that you eat before bed go right into storage, because you’re just going to bed and not doing anything active. That’s just anecdotal, though—I’m not sure what science says about that. It kind of makes sense, though, right? I’m just throwing it out there to help you feel motivated and empowered to change the way you eat, so that you don’t follow in your dad’s footsteps of having a heart attack at age 50.”

So I printed out the Mayo Clinic page on the Mediterranean diet and sent the patient home to do some reading about plant-based diets. I’m still, however, flooded with a sense of inadequacy and powerlessness when it comes to educating myself on how to foster the trust of my patients and educate them appropriately.

Rebecca Sussman is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Michael Tanner, Associate Professor of Medicine, Executive Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


  1. Pandey A, Raza F, Velasco A, et al. Comparison of Morisky Medication Adherence Scale with therapeutic drug monitoring in apparent treatment-resistant hypertension. J Am Soc Hypertens. 2015;9(6):420-426.
  2. Tamblyn R, Eguale T, Huang A, Winslade N, Doran P. The incidence and determinants of primary nonadherence with prescribed medication in primary care. Ann Intern Med. 2014;160(7):441-450.
  3. Bailey SC, Annis IA, Reuland DS, Locklear AD, Sleath BL, Wolf MS. Development and evaluation of the Measure of Drug Self-Management. Patient Prefer Adherence. 2015;9:1101-1108.
  4. Nieuwlaat R, Wilczynski N, Navarro T, et al. Interventions for enhancing medication adherence. Cochrane Database Syst Rev. 2014;(11):CD000011. DOI: 10.1002/14651858.CD000011.pub4.

Is It Time to Reconsider Who Should Get Metformin?

December 11, 2015

By Lauren Strazzulla

Current FDA guidelines for the use of metformin stipulate that it not be prescribed to those with an elevated creatinine (at or above 1.5 mg/dL for men and 1.4 mg/dL for women). It is also contraindicated in patients with heart failure requiring pharmacologic treatment, and people over age 80, unless their creatinine demonstrates that renal function is not reduced. These guidelines are in place to prevent lactic acidosis, an understandably feared complication of metformin. However, metformin is, by consensus, the initial drug of choice in type 2 diabetes and may prevent or delay the disease in people with pre-diabetes. Metformin is used successfully with less restriction throughout Europe, where it is considered acceptable to prescribe as long as the patient’s glomerular filtration rate (GFR) exceeds 30 mL/minute [1].

Biguanides such as metformin act by improving insulin sensitivity and by suppressing inappropriate gluconeogenesis in the liver. They inhibit the mitochondrial respiratory chain, which shifts energy production from aerobic to anaerobic metabolism, generating lactic acid as a byproduct [2]. Much of the concern for lactic acidosis (LA) arose from the legacy of metformin’s predecessor phenformin, which was removed from the market in 1978 due to a high incidence of LA. But the pharmacokinetics of metformin differ markedly from those of phenformin, which has a longer half-life and causes LA at a lower blood level than metformin [3,4].

The actual risk for lactic acidosis may be lower than widely believed. In fact, some studies have demonstrated that the vast majority of patients who get LA have serious underlying conditions, with the most common being infection, acute liver or kidney injury, and cardiovascular collapse [5,6,7]. A study by Lalau and colleagues found that survival in patients with LA correlates with the severity of the associated condition and not the degree of metformin accumulation. Metformin levels did not carry diagnostic or prognostic significance in patients with LA, and in some cases higher levels were associated with reduced mortality [8]. These data call into question how significant a role metformin truly plays in potentiating lactic acidosis.

There has been speculation that patients with type 2 diabetes may already have a baseline risk for LA that is separate from the risk conferred by metformin use. Brown and colleagues (1998) showed that the rate of LA among patients with type 2 diabetes using metformin was indistinguishable from that of non-users, which implies that the pathogenesis of LA may be more closely related to the disease itself [9]. Other studies have shown that the overall incidence of LA among metformin users is about 1 per 23,000-30,000 person-years, compared with 1 per 18,000-21,000 person-years among diabetic patients on other agents [10,11]. Thus, metformin may not be as dangerous as previously thought.
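
Those person-year figures are easier to compare as events per 100,000 person-years. A quick back-of-envelope conversion (using midpoints of the quoted ranges, which is my own simplification):

```python
# Back-of-envelope comparison of the lactic acidosis rates quoted above.
# Using the midpoints of the quoted ranges is my own simplification.

metformin_py = (23_000 + 30_000) / 2      # 1 case per ~26,500 person-years
other_agents_py = (18_000 + 21_000) / 2   # 1 case per ~19,500 person-years

# Express both as expected cases per 100,000 person-years
per_100k_metformin = 100_000 / metformin_py
per_100k_other = 100_000 / other_agents_py

print(round(per_100k_metformin, 1))  # -> 3.8
print(round(per_100k_other, 1))      # -> 5.1
```

On these figures the metformin group actually has the slightly lower rate, roughly 4 versus 5 cases per 100,000 person-years, which is the point the cited studies make.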

Moreover, metformin has numerous health benefits that reduce both the progression to diabetes and the disease burden. Metformin can also be combined with every other oral antidiabetic agent, as well as with insulin [12]. The Diabetes Prevention Program study, which followed 3,234 people randomized to metformin, placebo, or intensive lifestyle intervention for an average of 3.2 years, followed by a 7-8 year open-label extension, showed that the drug produced significant weight loss and delayed or prevented diabetes. There were no cases of LA during nearly 18,000 patient-years of follow-up [13]. A retrospective analysis of over 19,000 patients enrolled in the REACH registry found that metformin was associated with a 24% reduction in all-cause mortality after only 2 years of use [14].

Yet metformin is contraindicated in groups of patients for whom it has a proven benefit. For example, metformin is contraindicated in heart failure because of a presumed increase in LA risk, yet a meta-analysis performed in 2007 showed metformin to be the only antidiabetic drug not associated with harm in patients with both diabetes and heart failure; it also reduced mortality in these patients [15]. Many cardiac catheterization lab protocols require withholding metformin 48 hours before and after the procedure, but there is concern that hyperglycemia from temporary cessation of metformin could be harmful during high-risk cardiac interventions [16,17]. Khurana and colleagues point out that metformin is not nephrotoxic and has no known interaction with iodinated contrast [12]. Similarly, among patients with moderate renal failure, metformin is associated with a reduction in mortality, though the drug is contraindicated in these patients according to current guidelines [14]. Overall, evidence suggests that the benefits of metformin likely outweigh the risks in patients with heart failure and moderate renal failure–at least in those younger than 80 [14].

Metformin is a medication that helps mitigate the consequences of diabetes. Current FDA contraindications do not reflect the evidence suggesting that adverse events from metformin are uncommon, even among at-risk groups. The 2015 guidelines by the American Diabetes Association and the European Association for the Study of Diabetes maintain that the current cutoffs for renal safety are overly restrictive and recognize that many practitioners use metformin even when GFR falls to less than 60 mL/min [18]. In fact, other studies have suggested that metformin remains within the therapeutic range and lactate levels are not significantly affected as long as estimated GFR is greater than 30 mL/minute [5]. Therefore, it is time to re-evaluate metformin prescribing practices, given that this medication can safely improve the outlook for many patients who may not currently be eligible for the drug.


Commentary by Michael Tanner, MD Executive Editor, Clinical Correlations
Dimethyl biguanide (metformin) was first synthesized from Galega officinalis (French lilac) in the 1920s. Jean Sterne, the French physician who developed it in the 1950s, coined its first trade name, “Glucophage” (glucose eater). It was added to the British National Formulary in 1958. Metformin was not approved in the United States until 1994, largely due to guilt by association with the truly dangerous biguanides phenformin and buformin. In 1998, the United Kingdom Prospective Diabetes Study (UKPDS 34) found that metformin monotherapy in overweight diabetics reduced all-cause mortality by 36% at 10.7 years compared with diet alone, and was associated with better patient outcomes than the insulin supply-side drugs–glyburide, chlorpropamide, and insulin itself [19]. The UKPDS was largely responsible for the American Diabetes Association’s eventual recommendation that metformin, barring contraindications, should be the first-line pharmacological agent in most cases of type 2 diabetes.

Citizen petitions were submitted in 2012 and 2013 to relax the FDA’s draconian metformin rules, which are based, inexplicably, on creatinine level rather than GFR. The FDA needs to relax the no-metformin cutoff to a GFR of <30 mL/minute, so that the nearly one million diabetic patients for whom metformin is unnecessarily contraindicated can benefit.

Lauren Strazzulla is a third year medical student at NYU Langone School of Medicine

Michael Tanner, MD is an Associate Professor of Medicine and Executive Editor, Clinical Correlations


  1. Nathan DM, Buse JB, Davidson MB, et al. Medical management of hyperglycemia in type 2 diabetes: a consensus algorithm for the initiation and adjustment of therapy: a consensus statement of the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care. 2009;32(1):193-203.
  2. Cho YM, Kieffer TJ. New aspects of an old drug: metformin as a glucagon-like peptide 1 (GLP-1) enhancer and sensitiser. Diabetologia. 2011;54(2):219-222.
  3. Sirtori CR, Franceschini G, Galli-Kienle M, et al. Disposition of metformin (N,N-dimethylbiguanide) in man. Clin Pharmacol Ther. 1978;24(6):683-693.
  4. Pernicova I, Korbonits M. Metformin—mode of action and clinical implications for diabetes and cancer. Nat Rev Endocrinol. 2014;10(3):143-156.
  5. Inzucchi SE, Lipska KJ, Mayo H, Bailey CJ, McGuire DK. Metformin in patients with type 2 diabetes and kidney disease: a systematic review. JAMA. 2014;312(24):2668-2675.
  6. Misbin RI, Green L, Stadel BV, Gueriguian JL, Gubbi A, Fleming GA. Lactic acidosis in patients with diabetes treated with metformin. N Engl J Med. 1998;338:265-266.
  7. Wilholm BE, Myrhed M. Metformin-associated lactic acidosis in Sweden 1977-1991. Eur J Clin Pharmacol. 1993;44:589-591.
  8. Lalau JD, Lacroix C, Compagnon P, et al. Role of metformin accumulation in metformin-associated lactic acidosis. Diabetes Care. 1995;18(6):779-784.
  9. Brown JB, Pedula K, Barzilay J, Herson MK, Latare P. Lactic acidosis rates in type 2 diabetes. Diabetes Care. 1998;21(10):1659-1663.
  10. Salpeter SR, Greyber E, Pasternak GA, Salpeter EE. Risk of fatal and nonfatal lactic acidosis with metformin use in type 2 diabetes mellitus. Cochrane Database Syst Rev. 2010;(4):CD002967.
  11. Bodmer M, Meier C, Krähenbühl S, Jick SS, Meier CR. Metformin, sulfonylureas, or other antidiabetes drugs and the risk of lactic acidosis or hypoglycemia: a nested case-control analysis. Diabetes Care. 2008;31(11):2086-2091.
  12. Khurana R, Malik IS. Metformin: safety in cardiac patients. Postgrad Med J. 2010;86:371-373.
  13. Diabetes Prevention Program Research Group. Long-term safety, tolerability, and weight loss associated with metformin in the Diabetes Prevention Program Outcomes Study. Diabetes Care. 2012;35(4):731-737.
  14. Roussel R, Travert F, Pasquet B, et al; Reduction of Atherothrombosis for Continued Health (REACH) Registry Investigators. Metformin use and mortality among patients with diabetes and atherothrombosis. Arch Intern Med. 2010;170(21):1892-1899.
  15. Eurich DT, McAlister FA, Blackburn DF, et al. Benefits and harms of antidiabetic agents in patients with diabetes and heart failure: systematic review. BMJ. 2007;335(7618):497.
  16. Willfort-Ehringer A, Ahmadi R, Gessl A, et al. Neointimal proliferation within carotid stents is more pronounced in diabetic patients with initial poor glycaemic state. Diabetologia. 2004;47(3):400-406.
  17. Timmer JR, Ottervanger JP, de Boer MJ, et al. Hyperglycemia is an important predictor of impaired coronary flow before reperfusion therapy in ST-segment elevation myocardial infarction. J Am Coll Cardiol. 2005;45(7):999-1002.
  18. Inzucchi SE, Bergenstal RM, Buse JB, et al. Management of hyperglycemia in type 2 diabetes, 2015: a patient-centered approach: update to a position statement of the American Diabetes Association and the European Association for the Study of Diabetes. Diabetes Care. 2015;38(1):140-149.
  19. Effect of intensive blood-glucose control with metformin on complications in overweight patients with type 2 diabetes (UKPDS 34). UK Prospective Diabetes Study (UKPDS) Group. Lancet. 1998;352(9131):854-865.


Are We Overusing Proton Pump Inhibitors?

November 13, 2015

By Shimwoo Lee
Peer Reviewed
Case: A 31-year-old man with poorly controlled type 2 diabetes was hospitalized for community-acquired pneumonia. His home medications included esomeprazole. When asked why he was receiving this medication, the patient said it was first started during his prior hospitalization for “ulcer prevention” eight months ago and that he had continued to take it since. He denied any history of upper gastrointestinal symptoms. Esomeprazole was tapered off during this admission. When being discharged after successful treatment of his pneumonia, he was told he no longer needed to take esomeprazole.

Proton pump inhibitors (PPIs) are among the most widely used medications in the US. Last year, esomeprazole was ranked as one of the top three best-selling drugs in the nation, with 17.8 million prescriptions [1]. PPIs are the most potent inhibitors of gastric secretion and are used to treat common upper gastrointestinal disorders, such as gastroesophageal reflux disease (GERD) and peptic ulcer disease. The effectiveness of PPIs and their perceived low toxicity profile have led to their popularity and even inappropriate overutilization in the medical setting, as exemplified by the patient case above. However, PPI use can have potentially serious medical consequences, including an increased risk of infections, malabsorption, and adverse drug-drug interactions.

Physicians use empiric PPI therapy to diagnose GERD, one of the most common gastrointestinal diseases. If symptoms improve with empiric therapy, PPIs are then continued, often indefinitely, though it may be possible to step down to acid suppression with H2 blockers such as ranitidine. PPIs work by irreversibly inhibiting the parietal cell H+/K+ ATPase, the pump that actively secretes protons into the gastric lumen in exchange for potassium ions. Because PPIs take several days to achieve maximal suppression of acid output, short-term use of a PPI does not provide optimal acid inhibition [2]. Upon discontinuation of the drug, patients can experience rebound acid hypersecretion due to hypergastrinemia, leading to worsening of GERD symptoms. For these reasons, many physicians simply keep patients on daily PPIs indefinitely. Currently, there are no evidence-based guidelines for discontinuing PPIs.

Prolonged PPI use can have serious infectious risks. Reduced acid production due to PPIs compromises the sterility of the gastric lumen, thus making it easier for pathogens to colonize the upper gastrointestinal tract and subsequently alter the colonic microbiome [3]. The best-documented enteric infection linked to PPI use is Clostridium difficile, which is the leading cause of gastroenteritis-associated death in the US [4]. In 2012, a meta-analysis of 42 studies linked PPI use with a significantly increased risk of both incident and recurrent C difficile infection (odds ratio 1.7) [5]. Through a similar mechanism of decreased gastric sterility, PPIs predispose patients to other bacterial gastroenteritides, as well as to both community-acquired and nosocomial pneumonia (as, perhaps, in our patient above). A meta-analysis of 31 studies in 2011 found that patients taking PPIs were at increased risk for pneumonia (odds ratio 1.27) [6].
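
Odds ratios like the 1.7 above are easy to misread as risk ratios. To see what an OR means in absolute terms, convert a baseline risk to odds, apply the OR, and convert back. The 5% baseline risk below is an arbitrary illustrative number, not a figure from the cited studies:

```python
# Convert an odds ratio to an absolute risk, given an assumed baseline risk.
# The 5% baseline is an arbitrary illustration, not from the cited studies.

def risk_with_odds_ratio(baseline_risk, odds_ratio):
    """Apply an odds ratio to a baseline risk; return the resulting risk."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)

baseline = 0.05                                   # assumed 5% baseline risk
risk_on_ppi = risk_with_odds_ratio(baseline, 1.7) # OR 1.7 from the meta-analysis [5]
print(round(risk_on_ppi, 3))  # -> 0.082
```

Under this assumed baseline, an OR of 1.7 corresponds to the risk rising from 5% to about 8%; the rarer the outcome, the closer the OR approximates the risk ratio.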

PPI use also has been implicated in gut malabsorption. In 2011, the FDA issued a safety warning regarding the risk of hypomagnesemia in patients who have been on PPIs for more than a year [7]. PPIs promote the loss of magnesium, which is essential for nucleic acid synthesis, by disrupting the transport molecules in the gut that actively absorb it [8]. Hypomagnesemia is associated with a host of conditions, including hypertension and type 2 diabetes. Furthermore, PPI-induced hypochlorhydria can reduce calcium absorption and thus decrease bone density. The Nurses’ Health Study in 2012 demonstrated that the risk of hip fracture was 36 percent higher among postmenopausal women who regularly used PPIs for at least two years than among nonusers [9].

Another reason for caution when prescribing PPIs is their potential, though uncommon, to cause deleterious drug interactions. PPIs are metabolized via hepatic cytochrome P450 enzymes, with CYP2C19 the predominant isoenzyme [10]. PPIs can interfere with many other drugs sharing the same hepatic metabolic pathway, especially in individuals with CYP2C19-inactivating polymorphisms. For instance, patients on warfarin can have a 10 percent decrease in prothrombin time with concomitant use of omeprazole, and the same PPI can increase the half-life of diazepam by 130 percent [11]. Furthermore, after studies linked omeprazole use to decreased activation of clopidogrel, the FDA issued an alert in 2009 regarding this potential drug interaction and its possible cardiovascular consequences; however, the clinical significance of the interaction remains controversial [12].

Given the possible dangers of PPIs, the widespread practice of keeping patients on prolonged PPI therapy is concerning. While we currently lack an evidence-based approach for discontinuing PPIs, the general guideline is that patients with GERD or dyspepsia deserve consideration for a PPI taper after being asymptomatic for three to six months. To prevent rebound acid secretion when attempting to stop PPIs, it may be necessary to temporarily overlap use with an H2 blocker, which does not cause acid rebound when stopped. Unfortunately, physicians frequently over-prescribe PPIs in the first place and fail to follow up with their patients with a goal of stopping unnecessary therapy [13]. A 2010 study conducted in a Veterans Administration ambulatory care center showed that, of 946 patients receiving PPI therapy, 36 percent had no documented appropriate indication for initiating such therapy, and 42 percent lacked re-evaluation of their upper-GI symptoms, thus precluding any potential for step-down therapy [14].

Overutilization of PPIs occurs in the inpatient setting as well. In the intensive care unit (ICU), PPIs are indicated specifically for stress ulcer prophylaxis in select patients at high risk of GI bleeding, including those with coagulopathies, traumatic brain injury, or severe burns, or those on long-term mechanical ventilation [15]. However, no such indications exist in non-ICU settings. Yet a study of 1769 non-ICU patients found that 22 percent received PPIs for stress ulcer prophylaxis, and over half of these patients were subsequently discharged home on PPIs inappropriately [16]. The majority of physicians who prescribe PPIs in non-ICU settings appear to do so out of fear of upper GI bleeding and associated legal repercussions [17]. However, such hospital practice not only incurs unnecessary costs but also can lead to serious harm, since PPIs can further add to the already high rates of hospital-acquired C difficile infections and pneumonia.

Even if physicians were to stop over-prescribing PPIs, this would not eliminate the problem of PPI overuse. Since the FDA approval of over-the-counter omeprazole (Prilosec OTC) in 2003, more individuals have access to PPIs. Advertised as “on-demand” relief for people with frequent heartburn, Prilosec OTC carries a label warning against use for more than 14 days. What is troubling about this message is that it may promote chronic on-and-off usage, which is not optimal, given that PPIs take several days to reach maximal effect and can cause rebound acid reflux when stopped abruptly. Hence, over-the-counter PPIs may provide only suboptimal relief of symptoms while exposing patients to adverse effects all the same.

We need more judicious use of PPIs in the face of their ever-rising popularity. Their widespread use certainly attests to their effectiveness, but more care must be taken to minimize their overuse. Physicians have a central role as stewards of proper PPI use, including Prilosec OTC, by educating patients about the adverse effects of PPIs and keeping close track of both their prescription and over-the-counter medication lists. It is crucial for physicians to check proper indications for PPIs before prescribing them and to regularly reassess patients’ symptoms for possible step-down therapy. Just as important is their role in counseling patients on lifestyle changes that can improve reflux symptoms–avoiding acidic foods, quitting smoking, and losing weight–to decrease or even eliminate the need for PPIs.

Shimwoo Lee is a 3rd year medical student at NYU School of Medicine

Peer Reviewed by Michael Poles, MD Associate Professor of Medicine, Division of Gastroenterology

1. Brooks M. Top 100 most prescribed, top selling drugs. Medscape Medical News. Published August 1, 2014. Accessed May 15, 2015.
2. Wolfe MM, Sachs G. Acid suppression: optimizing therapy for gastroduodenal ulcer healing, gastroesophageal reflux disease, and stress-related erosive syndrome. Gastroenterology. 2000;118(2 Suppl 1):S9-S31.
3. DuPont HL. Acute infectious diarrhea in immunocompetent adults. N Engl J Med. 2014;370(16):1532-1540.
4. Hall AJ, Curns AT, McDonald LC, Parashar UD, Lopman BA. The roles of Clostridium difficile and norovirus among gastroenteritis-associated deaths in the United States, 1999-2007. Clin Infect Dis. 2012;55(2):216-223.
5. Kwok CS, Arthur AK, Anibueze CI, Singh S, Cavallazzi R, Loke YK. Risk of Clostridium difficile infection with acid suppressing drugs and antibiotics: meta-analysis. Am J Gastroenterol. 2012;107(7):1011-1019.
6. Eom CS, Jeon CY, Lim JW, Cho EG, Park SM, Lee KS. Use of acid-suppressive drugs and risk of pneumonia: a systematic review and meta-analysis. CMAJ. 2011;183(3):310-319.
7. U.S. Food and Drug Administration. FDA Drug Safety Communication: Low magnesium levels can be associated with long-term use of Proton Pump Inhibitor drugs (PPIs). Published March 2, 2011. Accessed
8. Perazella MA. Proton pump inhibitors and hypomagnesemia: a rare but serious complication. Kidney Int. 2013;83(4):553-556.
9. Khalili H, Huang ES, Jacobson BC, Camargo CA, Jr, Feskanich D, Chan AT. Use of proton pump inhibitors and risk of hip fracture in relation to dietary and lifestyle factors: a prospective cohort study. BMJ. 2012;344:e372.
10. Klotz U, Schwab M, Treiber G. CYP2C19 polymorphism and proton pump inhibitors. Basic Clin Pharmacol Toxicol. 2004;95(1):2-8.
11. Wolfe MM. Overview and comparison of the proton pump inhibitors for the treatment of acid-related disorders. UpToDate. Updated July 22, 2014. Accessed May 15, 2015.
12. U.S. Food and Drug Administration. Information for Healthcare Professionals: Update to the labeling of Clopidogrel Bisulfate (marketed as Plavix) to alert healthcare professionals about a drug interaction with omeprazole (marketed as Prilosec and Prilosec OTC). Published November 17, 2009. Accessed May 15, 2015.
13. Heidelbaugh JJ, Kim AH, Chang R, Walker PC. Overutilization of proton-pump inhibitors: what the clinician needs to know. Therap Adv Gastroenterol. 2012;5(4):219-232.
14. Heidelbaugh JJ, Goldberg KL, Inadomi JM. Magnitude and economic effect of overuse of antisecretory therapy in the ambulatory care setting. Am J Manag Care. 2010;16(9):e228-234.
15. American Society of Health-System Pharmacists. ASHP therapeutic guidelines on stress ulcer prophylaxis. Am J Health Syst Pharm. 1999;56(4):347-379.
16. Heidelbaugh JJ, Inadomi JM. Magnitude and economic impact of inappropriate use of stress ulcer prophylaxis in non-ICU hospitalized patients. Am J Gastroenterol. 2006;101(10):2200-2205.
17. Hussain S, Stefan M, Visintainer P, Rothberg M. Why do physicians prescribe stress ulcer prophylaxis to general medicine patients? South Med J. 2010;103(11):1103-1110.


Supporting Evidence

October 30, 2015

By Amy Ou

During a weekend off at my parents’ home, the subject of the chronic cough I had been nursing for the entire winter came up. My mother, noticeably more concerned about it than I was, asked: “Did you get a flu shot? Did you get your cough after you got your flu shot? You know this happened when you were little, right? I just don’t know about those flu shots, I think they have some bad side effects. Your dad and I were adamant about not getting flu shots this year.”

A nervous mixture of emotions churned in my stomach. Could it be? Could my highly-educated scientist/engineer parents be against vaccines? “We were talking to the retired couple next-door about it too. They agreed. They didn’t get it either.” Agck!

Anti-vaccine is quite the dirty word in the medical community. When the unvaccinated baby with high fevers rolled into the emergency room on my pediatrics rotation, the air in the ER was heavy with judgment. “This. This is what happens when you don’t vaccinate your kids.” Could it be that I myself was living in a community of anti-vaxxers? One that included my own parents? How could they, as intelligent as they are, not understand the overwhelming proof?

Hierarchies of Evidence

As I rambled to my parents about the mountain of evidence on the safety of vaccines and the role of the FDA in ensuring that trials of safety and efficacy are properly conducted before mass vaccine campaigns are launched, I could tell that I was not making even a dent in their opinion. When I was a kid, my string of healthy winters was ruined by one particularly devilish cold. Unfortunately, it happened to occur the same year I got a flu shot for the first time. My parents’ brains kicked into gear and decided the two were related.

In 1979, the Canadian Task Force on Periodic Health Examination came up with the idea of ranking evidence by quality, with the pinnacle of the pyramid being evidence obtained from “at least one properly randomized controlled trial,” followed by “well-designed cohort or case-control analytic studies” [1]. At the bottom of the barrel was “opinions of respected authorities” [1]. The idea of hierarchies of evidence was adapted and popularized by many subsequent scientists.

The hierarchy seems intuitive after learning about it, but medical school was actually the first place I had heard of it. Concepts such as the placebo effect or association-versus-causation were ingrained in my head, but someone had to teach them to me first. It seemed obvious to me that the evidence from systematic reviews of double-blinded, randomized controlled trials of the influenza vaccine overruled my mother’s observation of one subject, but it’s not like my mother had sat through a whole course in clinical epidemiology like I had.

Arbiters of Evidence

Back in the emergency room, the nurse threaded a Foley catheter into the screaming baby. His mother was on the verge of tears, mortified at what was happening to her precious son. Did she wish this on her child when she made the decision to withhold vaccinations? Not likely.

It turns out the anti-vaccine movement was not born of the malicious intent of some layperson to harm whole generations of children. Vaccines have been controversial since 1796, when Edward Jenner first inoculated an eight-year-old boy with pus from a milkmaid’s cowpox lesion in the hope of preventing smallpox [2]. Historically, concern about the use of vaccines has been voiced by doctors and citizens alike, sometimes for good reason. For example, in the spring of 1955 a particular lot of polio vaccine actually contained active wild-type polio virus, causing 200 children to contract the disease [2].

The most recent surge in the anti-vaccine movement came from an article about an association between autism and the measles-mumps-rubella (MMR) vaccine written by a doctor. Yes, a retracted, but very Google-able and citable article. Andrew Wakefield, the author of the article, was eventually found to have falsified data. He has been repeatedly discredited, but to this day he maintains a substantial following of people who don’t believe in mainstream medicine [3]. And why not be skeptical? Germ theory was once out of the mainstream and it turned out to be correct [4].

We ask our patients to place their trust in us, to serve as arbiters of this vast quantity of evidence. But why? Simply because we went to medical school?

What We Want To Report

In searching for evidence, randomized controlled trials (RCTs) are often considered the gold standard for evaluating healthcare interventions. The CONSORT Group, a self-described “international and eclectic group, comprising trialists, methodologists and medical journal editors,” publishes a statement, updated every few years, with guidelines for reporting RCTs [5]. The group recognizes that RCTs are only as good as their design, and provides a checklist of features that make for a high-quality trial. Without these features, RCT findings may be no better than those of observational studies.

In a review of all RCTs indexed in PubMed in December 2006, surprisingly few studies followed those guidelines [6]. Out of 616 studies identified, 47% didn’t define the primary outcome, 66% failed to report the method of random sequence generation, and 75% didn’t report how allocation was concealed (and of the 25% that did report it, 50% used envelopes, an extremely fallible form of concealment). Forty-four percent of the papers they looked at were published in journals that endorse CONSORT. And this was an improvement over the results from December 2000.

What We Want To Believe

If RCTs are the gold standard, systematic reviews and meta-analyses are the platinum standard. They too have a guideline group, PRISMA (formerly known as QUOROM), which publishes statements on what to include in systematic reviews and meta-analyses [7]. Its members, too, have found inconsistencies in reporting: one-third of systematic reviews did not reveal their funding sources, one-third did not even report which electronic databases were searched, and only 23% assessed for publication bias, an insidious problem in scientific research [8].

A 1991 paper in the Lancet demonstrates this problem of publication bias. The authors retrospectively surveyed research projects approved by the Central Oxford Research Ethics Committee between 1984 and 1987 [9] and identified 285 studies whose data had been analyzed by the time of the investigation. Of those, 73% had been published or presented. Among the published or presented studies, 68% had statistically significant results, compared with only 29% of those neither published nor presented; put another way, a study with statistically significant results had 4.54 times the odds of being published or presented. There is a whole sector of evidence that meta-analyses are not picking up.
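To make the arithmetic concrete, here is a minimal sketch of how an odds ratio is computed from two proportions. Note that the crude value from the rounded percentages quoted above comes out near 5.2 rather than exactly 4.54; the paper’s figure presumably derives from its exact counts rather than rounded percentages.

```python
def odds(p):
    """Convert a proportion (0-1) to odds."""
    return p / (1 - p)

def odds_ratio(p_group1, p_group2):
    """Ratio of the odds of an outcome in two groups."""
    return odds(p_group1) / odds(p_group2)

# 68% of published/presented studies had significant results,
# vs 29% of those neither published nor presented.
crude = odds_ratio(0.68, 0.29)
print(round(crude, 1))  # 5.2 (crude, from the rounded percentages)
```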

One test that attempts to evaluate the existence of publication bias in meta-analyses is the funnel plot, which graphs trial sample size against effect size, with the expected result being that smaller studies will be scattered widely at the bottom of the graph with a narrowing spread as the studies get larger [10]. Without bias, the graph is expected to be symmetrical. A review of reviews from the Cochrane Database of Systematic Reviews demonstrated that, of reviews that included enough trials to be analyzed by the funnel method, a whopping 48% suggested the presence of publication bias [11].
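The funnel’s logic can be shown with a toy simulation (purely illustrative, not drawn from any of the reviews discussed): unbiased small trials scatter widely around the true effect while large trials cluster tightly, so plotting effect size against sample size yields a symmetric funnel, and selective publication carves a one-sided gap out of the wide bottom.

```python
import random
import statistics

random.seed(42)
TRUE_EFFECT = 0.3

def simulate_trials(n_trials, sample_size):
    """Unbiased effect estimates; the standard error shrinks as 1/sqrt(n)."""
    se = 1 / sample_size ** 0.5
    return [random.gauss(TRUE_EFFECT, se) for _ in range(n_trials)]

small = simulate_trials(500, 25)    # SE = 0.20: the wide base of the funnel
large = simulate_trials(500, 400)   # SE = 0.05: the narrow tip

# Both groups center on the true effect, but the small trials spread far
# wider; publication bias would show up as asymmetry in that wide scatter.
print(statistics.stdev(small) > statistics.stdev(large))  # True
```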

We place a lot of trust in our evidence, but how do we know we are not just selectively choosing evidence to support what we want to believe?


My understanding of evidence-based medicine in shambles, I return to my parents’ refusal of the flu vaccine, and the ER mother’s refusal of all childhood vaccines. My parents, having made it to April, escaped the illness we tried to protect them from. The ER baby’s mother was perhaps not so lucky, although I never found out the child’s final diagnosis. They both took a risk based on their own judgment and their own internal weighing of the odds of morbidity. Were they presented with all the evidence? Does the evidence even mean anything?

No experiment is capable of producing absolute truth. Statistical testing is a game of chance that’s subject to interpretation. When we tell patients to get vaccinated, we are not dispensing fact so much as probability. When they refuse, they are simply weighing their own impression of the odds.

I’m certainly not throwing my evidence out the window, but I’m carrying with me a big block of salt. As far as my parents go, a large 2014 Cochrane review (with only a brief line on publication bias) pooled 116 data sets comparing the influenza vaccine with placebo or no intervention [12]. As it turns out, in healthy adults, 71 people would need to be vaccinated to prevent one case of influenza, with no effect on working days lost or hospitalization. A separate 2010 Cochrane review of the influenza vaccine in the elderly failed to reach any clear conclusions [13]. Was my cough from the vaccine? Probably not. Were my parents and their neighbors taking significant risks by refusing the vaccine? Maybe not as much as I thought they were.
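A number needed to treat like the one quoted here is simply the reciprocal of the absolute risk reduction. A quick sketch (the two influenza rates below are illustrative assumptions chosen to reproduce an NNT near 71, not figures lifted from the review):

```python
def nnt(risk_untreated, risk_treated):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (risk_untreated - risk_treated)

# Hypothetical example: if influenza strikes ~2.3% of unvaccinated adults
# vs ~0.9% of vaccinated adults over a season, the absolute reduction is
# 1.4 percentage points, so about 71 vaccinations prevent one case.
print(round(nnt(0.023, 0.009)))  # 71
```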

As for the ER baby, if the mother’s choice to not vaccinate her baby came from the general fear that the MMR vaccine causes autism, a 2012 Cochrane review (with no mention of publication bias) shows no such evidence [14]. It does, however, point out that certain strains of the vaccine (strains still in use in some countries) have been shown to have an association with aseptic meningitis, and that there is an increased risk of febrile seizures in children following general receipt of the vaccine. Does the very real risk of hospitalization or death from measles, mumps, and rubella outweigh the minute risk of the side effects? I strongly believe so. And is there a civic duty to protect ourselves and our fellow citizens from these contagious diseases? Definitely. But are parents within reason to question our insistence to vaccinate? Maybe more so than we think.

To convince my parents, or anybody else’s parents, I can’t just shove my beliefs down their throats. I have to consider my evidence more critically and, more importantly, give some credence to the beliefs my patients hold, in order to facilitate learning on both ends.

Amy Ou is a medical student at NYU Langone School of Medicine Class of 2017

Peer Reviewed by Michael Tanner, MD, Executive Editor, Clinical Correlations


  1. The periodic health examination. Canadian Task Force on the Periodic Health Examination. Can Med Assoc J. 1979;121(9):1193-1254.
  2. Stern AM, Markel H. The history of vaccines and immunization: familiar patterns, new challenges. Health Aff (Millwood). 2005;24(3):611-621.
  3. Deer B. How the case against the MMR vaccine was fixed. BMJ. 2011;342:c5347.
  4. Britt LD. The death of an American President and the birth of an organization: the American Surgical Association and its legacy. Ann Surg. 2013;258(3):377-384.
  5. Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332.
  6. Hopewell S, Dutton S, Yu LM, Chan AW, Altman DG. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ. 2010;340:c723.
  7. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264-269,W64.
  8. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.
  9. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet. 1991;337(8746):867-872.
  10. Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315(7109):629-634.
  11. Sutton AJ, Duval SJ, Tweedie RL, Abrams KR, Jones DR. Empirical assessment of effect of publication bias on meta-analyses. BMJ. 2000;320(7249):1574-1577.
  12. Jefferson T, Di Pietrantonj C, Rivetti A, Bawazeer GA, Al-Ansary LA, Ferroni E. Vaccines for preventing influenza in healthy adults. Cochrane Database Syst Rev. 2014;3:CD001269.
  13. Jefferson T, Di Pietrantonj C, Al-Ansary LA, Ferroni E, Thorning S, Thomas RE. Vaccines for preventing influenza in the elderly. Cochrane Database Syst Rev. 2010(2):CD004876.
  14. Demicheli V, Rivetti A, Debalini MG, Di Pietrantonj C. Vaccines for measles, mumps and rubella in children. Cochrane Database Syst Rev. 2012;2:CD004407.


There’s an App for That: Fitness Apps and Behavior Change Theory

September 18, 2015

By Alyson Kaplan

Peer Reviewed

According to recent reports by the CDC, more than one-third (78.6 million) of American adults are obese. Approximately 17% (12.7 million) of children and adolescents ages 2-19 also meet criteria for obesity [1]. Obesity-related health conditions, including diabetes, heart disease, certain types of cancer, and stroke are among the leading causes of preventable death. Yet, obesity is not the sole contributor to these diseases. Other health risk behaviors, including smoking, alcohol abuse, and lack of physical activity all interact to produce poor outcomes.

Software developers have begun to take advantage of people’s obsession with technological advances to develop applications targeting health improvement. These apps exploit society’s move towards the “quantified self,” or the ability to self-monitor, self-sense, and self-track aspects of our daily lives [2]. They also make use of “gamification,” the concept of applying game mechanics and game design techniques, such as competition and reward systems, to engage and motivate people to achieve their goals [3]. Questions remain, however. How effective are these programs in actually changing behavior and do they make use of what we already know about behavior change? Let us review two of the more prevalent health behavior theories that have been successfully implemented for behavior change to determine what fitness apps already do well and what can be improved.

The health belief model, first developed in the 1950s by social psychologists Hochbaum, Rosenstock, and Kegels, is based on the understanding that a person will take a health-related action if that person (a) feels that a negative health condition can be avoided, (b) has a positive expectation that by taking a recommended action he or she will avoid a negative health condition, and (c) believes that he or she can successfully take a recommended health action. In this model, a person weighs threats against net benefits, taking into account perceived susceptibility, severity, and barriers [4].

The Health Belief Model



The transtheoretical model was developed by Prochaska and DiClemente in the late 1970s and has been used successfully in smoking cessation. It suggests that individuals move through several stages of change. The first stage, precontemplation, includes people who do not yet recognize that they need to change their behavior. During the contemplation stage, people start thinking about changing. In the preparation stage, people start taking small steps toward initiating their behavior change. During action, people have recently changed their behavior and plan to continue this change. In maintenance, people intend to maintain their behavior change. Finally, in termination, people have no desire to return to their unhealthy behaviors. It is important to keep in mind that the model also includes a stage known as relapse, in which people return to their old unhealthy behaviors, and this can occur at any point in the process [5].

The Transtheoretical Model



Adapting these behavior change models to fitness applications could prove very effective. Let’s take MyFitnessPal as an example. Upon downloading this app for the first time, users are immediately asked for their goal: lose weight, maintain weight, or gain weight. The app then asks for physical activity level, height, weight, and contact information. Immediately after these steps, users are told their maximum allowed calorie count per day and are asked to begin tracking their meals and activity. According to the transtheoretical model, for people in the precontemplation or contemplation stages this first five-minute encounter with the app can seem overwhelming and abrasive. Negative feelings about behavior change have been shown to deter people from engaging [6]. If the app instead first assessed a user’s readiness to change, it could then tailor goals to specific stages. Someone in the contemplation stage, for example, may benefit from easy-to-understand information regarding weight loss and its benefits on several organ systems. Another user in relapse may benefit from encouragement from members of his or her social network rather than constant app reminders to fulfill a certain daily calorie requirement.

According to the health belief model, behavior change is an individualized process in which the patient’s self-efficacy is of utmost importance. MyFitnessPal may be successful in providing assessment and feedback to motivate change, as users are reminded of goals they set and are rewarded when they reach these goals. It is less effective, however, in providing individually tailored assistance or guidelines specific to a person’s unique risk factors for disease. If the initial profile assessment were to include the factors in the American Heart Association atherosclerotic cardiovascular disease (ASCVD) risk calculator, such as systolic blood pressure, HDL cholesterol, and smoking status, recommendations could be made that are specific to an individual’s risk factors [7].

Taken together, it seems that, while fitness apps are promising in some respects, there is room for improvement to better align with the principles of behavior change theory. One study, conducted by Brunstein and colleagues in 2012, examined a multitude of different behavior change apps and found that they would benefit from giving tailored, personalized advice that is integrated into a treatment plan for a particular person [8]. Another study, conducted by West and colleagues in 2012, looked specifically at diet apps and found that most apps were theory-deficient and provided just general information or assistance [9].

There are newer wearable devices that automate the input process. These devices, such as FitBit or JawboneUp, systematically upload physical activity (number of steps) and sleep (hours slept and deep versus light sleep) onto a mobile device. This automaticity makes behavior tracking more convenient for the user. A 12-month prospective quasi-experimental single cohort study conducted by HITLAB and Boehringer Ingelheim assessed the impact of wearable devices in improving physical activity, sleep, body-mass index, and self-reported health in 565 healthy adult volunteers. In this study, every age group of participants increased physical activity, with the older population (50-67) demonstrating the greatest overall increase in the number of steps from baseline [10]. The convenience and ease of use of the wearable fitness trackers can successfully improve users’ self-efficacy.

One of the most promising future aspects of fitness applications is their use in the larger healthcare system. Programs such as HealthKit are being developed that will allow healthcare providers access to patients’ fitness app information [11]. This will enable physicians to make recommendations or adjustments to a treatment plan based on the patient’s unique progress. With apps that can measure heart rate, as well as apps soon to be developed that can detect blood pressure or even serious arrhythmias, doctors can also incorporate individualized physiological information to help patients set more realistic goals. Of course, the success of these larger-scale projects depends on the ability of the apps to change behavior successfully. An opportunity exists for software developers to partner with health behavior change experts to improve user experience and success. As the legendary Rosie the Riveter poster taught us, albeit for a very different cause, “We can do it!”

Alyson Kaplan is a 3rd year medical student at NYU School of Medicine

Reviewed by Michael Tanner, MD, Associate Editor, Clinical Correlations


  1. Ogden CL, Carroll MD, Kit BK, Flegal KM. Prevalence of childhood and adult obesity in the United States, 2011-2012. JAMA. 2014;311(8):806-814.
  2. Swan M. Emerging patient-driven health care models: an examination of health social networks, consumer personalized medicine and quantified self-tracking. Int J Environ Res Public Health. 2009;6(2):492-525.
  3. Zichermann G, Cunningham C. Gamification by design: Implementing game mechanics in web and mobile apps. Sebastopol, CA: O’Reilly Media; 2011.
  4. Rosenstock IM, Strecher VJ, Becker MH. Social learning theory and the health belief model. Health Educ Q. 1988;15(2):175-183.
  5. Prochaska JO, Velicer WF. The transtheoretical model of health behavior change. Am J Health Promot. 1997;12(1):38-48.
  6. Ajzen I, Fishbein M. Attitude-behavior relations: A theoretical analysis and review of empirical research. Psychol Bull. 1977;84(5):888-918.
  7. American College of Cardiology/American Heart Association 2013 cardiovascular risk calculator.
  8. Brunstein A, Brunstein J, Mansar SL. Integrating health theories in health and fitness applications for sustained behavior change: Current state of the art. Creative Educ. 2012;3:1-5.
  9. West JH, Hall PC, Arredondo V, et al. Health behavior theories in diet apps. J Consum Health Internet. 2012;17(1).
  10. Pugliese L, Crowley O, Britton B, et al. Wearable fitness tracker intervention increases physical activity in Baby Boomers. 142nd Annual Meeting of the American Public Health Association; 2014. Accessed October 20, 2014.
  11. Joh JW. What do doctors think of Apple’s HealthKit? Forbes.  Published June 9, 2014. Accessed October 10, 2014.

The Great Marijuana Debate – Effects on Psychosis and Cognition

August 13, 2015

By Kristina Cieslak, MD

Peer Reviewed 

The heavily debated gradual decriminalization and legalization of marijuana will likely result in easier access for all ages. An informed debate has been stymied, however, by a lack of prospective data examining the various long-term effects of marijuana use on the brain, particularly among adolescents who use it heavily. This year, the National Institute on Drug Abuse (NIDA) initiated the “National Longitudinal Study of the Neurodevelopmental Consequences of Substance Use.” This study will follow a large cohort of children from age 10 onward and will examine the effects of exposure to nicotine, marijuana, alcohol, and other drugs on the developing brain. Though likely to provide a wealth of information, these data will not be available for many years.

The link between marijuana use and acute psychiatric symptoms has been known for years. Although transient psychosis and paranoia have been reported, the contribution of marijuana use to the development and exacerbation of chronic psychotic disorders remains under investigation. Several studies have shown that exposure to cannabis increases the likelihood of developing an overt psychotic state among individuals already at high risk for a psychotic disorder [1-2]. Additionally, Henquet et al demonstrated a negative impact of tetrahydrocannabinol (THC) on cognition and psychosis, conditional on an individual’s psychotic liability. There was also the suggestion of potential gene-environment interactions [3]. In a review examining 5 of the largest population-based studies, Arseneault et al found associations between cannabis use, particularly early and heavy use, and later schizophrenia outcomes, with an overall 2-fold increased risk of developing schizophrenia or schizophreniform disorder [2, 4-8].

Although a strong association has been observed, there is a lack of evidence demonstrating that marijuana use is necessary or sufficient grounds for the development of psychotic illness. Additionally, the vast majority of individuals who use marijuana do not develop a psychotic disorder. A recent study by Proal et al posited that the increased familial risk for schizophrenia was the driving force behind incident schizophrenia among those who use marijuana, not the use of marijuana itself [9]. This raises the question of whether marijuana use directly increases the risk of psychosis or if a genetic predisposition to schizophrenia escalates the likelihood of using marijuana. Last year Power et al demonstrated that at least part of the association between marijuana use and schizophrenia is indeed due to a shared underlying genetic etiology [10]. Future studies may be able to elucidate specific gene-environment interactions and identify which of the numerous compounds in marijuana is associated with schizophrenia and affects brain structure and development.

Factors shown to influence marijuana’s effects on psychosis include the age and plasticity of the brain, underlying vulnerability to mental illness, and concurrent use of other drugs. A large body of evidence already demonstrates the acute and non-acute effects of marijuana on learning, memory, attention, concentration, and abstract reasoning, though the underlying mechanisms and potential reversibility require further elucidation [11]. Harvey et al reported a significant relationship between the frequency of cannabis use in adolescents aged 13-18 years and a decline in cognitive function [12]. Similarly, Meier et al found that persistent cannabis use in adolescence was associated with broad neuropsychological decline and lower IQ later in life, an association not confounded by additional drug use, socioeconomic status, education, or personality differences [13-14]. Furthermore, evidence of gross morphological brain changes among individuals with chronic, heavy cannabis use, including smaller hippocampal and amygdala volumes, was recently reported by Lorenzetti et al [15].

The current data highlight concern for the potential detrimental effects of heavy marijuana use on the developing brain and the increased risk for, and exacerbation of, psychiatric disorders, particularly schizophrenia. Identifying those individuals with the propensity to develop cannabis dependence or addiction, particularly in adolescence, remains a challenge. Large gaps in our knowledge undoubtedly persist; however, it may be prudent to focus our efforts on keeping marijuana away from the brains of vulnerable youth. Doing so may prevent future neurological and psychiatric morbidities.

Dr. Kristina Cieslak is a 1st year resident at NYU Langone Medical Center

Peer reviewed by Ishmeal Bradley, MD, Section Editor, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


  1. Verdoux H, Gindre C, Sorbara F, Tournier M, Swendsen J. Effects of cannabis and psychosis vulnerability in daily life: an experience sampling test study. Psychol Med. 2003;33(1):23-32.
  2. Henquet C, Krabbendam L, Spauwen J, Kaplan C, Lieb R, Wittchen HU, et al. Prospective cohort study of cannabis use, predisposition for psychosis, and psychotic symptoms in young people. BMJ. 2005;330(7481):11.
  3. Henquet C, Rosa A, Krabbendam L, Papiol S, Fananas L, Drukker M, et al. An experimental study of catechol-O-methyltransferase Val158Met moderation of delta-9-tetrahydrocannabinol-induced effects on psychosis and cognition. Neuropsychopharmacology. 2006;31(12):2748-2757.
  4. Arseneault L, Cannon M, Poulton R, Murray R, Caspi A, Moffitt TE. Cannabis use in adolescence and risk for adult psychosis: longitudinal prospective study. BMJ. 2002;325(7374):1212-1213.
  5. Andreasson S, Allebeck P, Engstrom A, Rydberg U. Cannabis and schizophrenia. A longitudinal study of Swedish conscripts. Lancet. 1987;2(8574):1483-1486.
  6. Zammit S, Allebeck P, Andreasson S, Lundberg I, Lewis G. Self reported cannabis use as a risk factor for schizophrenia in Swedish conscripts of 1969: historical cohort study. BMJ. 2002;325(7374):1199.
  7. van Os J, Bak M, Hanssen M, Bijl RV, de Graaf R, Verdoux H. Cannabis use and psychosis: a longitudinal population-based study. Am J Epidemiol. 2002;156(4):319-327.
  8. Fergusson DM, Horwood LJ, Swain-Campbell NR. Cannabis dependence and psychotic symptoms in young people. Psychol Med. 2003;33(1):15-21.
  9. Proal AC, Fleming J, Galvez-Buccollini JA, DeLisi LE. A controlled family study of cannabis users with and without psychosis. Schizophr Res. 2014;152(1):283-288.
  10. Power RA, Verweij KJH, Zuhair M, Montgomery GW, Henders AK, Heath AC, et al. Genetic predisposition to schizophrenia associated with increased use of cannabis. Mol Psychiatry. 2014;19(11):1201-1204.
  11. Crane NA, Schuster RM, Fusar-Poli P, Gonzalez R. Effects of cannabis on neurocognitive functioning: recent advances, neurodevelopmental influences, and sex differences. Neuropsychol Rev. 2013;23(2):117-137.
  12. Harvey MA, Sellman JD, Porter RJ, Frampton CM. The relationship between non-acute adolescent cannabis use and cognition. Drug Alcohol Rev. 2007;26(3):309-319.
  13. Meier MH, Caspi A, Ambler A, Harrington H, Houts R, Keefe RSE, et al. Persistent cannabis users show neuropsychological decline from childhood to midlife. Proc Natl Acad Sci U S A. 2012;109(40):E2657-E2664.
  14. Moffitt TE, Meier MH, Caspi A, Poulton R. Reply to Rogeberg and Daly: No evidence that socioeconomic status or personality differences confound the association between cannabis use and IQ decline. Proc Natl Acad Sci U S A. 2013;110(11):E980-E982.
  15. Lorenzetti V, Solowij N, Whittle S, Fornito A, Lubman DI, Pantelis C, et al. Gross morphological brain changes with chronic, heavy cannabis use. Br J Psychiatry. 2014.



Chronicles of a Second Year Medical Student

August 6, 2015

By Matthew Siow

Peer Reviewed 

Day 1 of the medicine rotation: complete. I was on long call today, which meant three things. One, the hours during which I had to pretend I knew something were longer. Two, I saw a lot of things I had never seen before, from more common things like COPD exacerbations and acute pancreatitis to more obscure things like erythrodermic psoriasis and multiple brain abscesses. And three, it’s 8 PM and I am absolutely exhausted.

As I lie down and start to fall asleep, the words of my peers who went through the rotation before me suddenly come to mind: “Practice questions. You must do them. Every. Single. Day.”

Fine, voices. You win. After all, I did pay around $400 just so I could do these practice questions.

I open up the question bank and look at the database. 1,397 questions. Piece of cake. I’ll do a couple questions tonight just to get my feet wet. You know what they say: a journey of 1,397 questions begins with a single step.

Question 1

A 65-year-old male comes to your office to establish care. He tells you he has spent the last 40 years working as a shark tank tester in Botswana and once came within 100 feet of touching a shark. He believes this close encounter has led him to develop an allergy to shark fin soup. He takes no medications but admits taking “some kind of supplement” he found online for bulking up “so I can stay sexy for my wife.” On exam, he is a middle-aged male appearing younger than his stated age, demonstrating verbal tangentiality and an inability to sit still. His vital signs are stable. One hour later, when you ask for the fifth time why he came to clinic today, he tells you he was recently diagnosed with renal cell carcinoma. What classic triad is typically found in patients with renal cell carcinoma? 

  A. Varicocele, flank pain, cough
  B. Hematuria, flank pain, palpable mass
  C. Weight loss, hematuria, flank pain
  D. Hematuria, fever, rash secondary to shark fin soup allergy
  E. Weight loss, flank pain, palpable mass
  F. Fever, weight loss, rash secondary to shark fin soup allergy
  G. Hematuria, weight loss, palpable mass
  H. Fever, hematuria, palpable mass
  I. Pruritus, edema, confusion
  J. Varicocele, pityriasis rosea, pyoderma gangrenosum
  K. There is really no classic triad for renal cell carcinoma

Wow. Three things. First, there is nothing actually relevant in that question stem. Second, what on earth did I just read? Third, ELEVEN answer choices? Is that even legal? Seriously, who comes up with this stuff?

Okay, focus. Like the awesome medical student I am, I remember reading in Harrison’s Principles of Internal Medicine about the classic triad of renal cell carcinoma: hematuria, flank pain, and a palpable mass in the flank or abdomen [1]. It has to be B.

Your answer: B. Hematuria, flank pain, palpable mass

Correct answer: K. There is really no classic triad for renal cell carcinoma

WHAT?! There’s no way!

Explanation: Renal cell carcinoma is the most common type of kidney cancer in adults, comprising roughly 3% of adult malignancies and 90-95% of kidney neoplasms [2]. You may have heard of the so-called “classic triad” of findings for renal cell carcinoma, consisting of hematuria, flank pain, and palpable mass (choice B). However, this “classic triad” is only present in 9% of patients [3]. And considering Merriam-Webster’s definition of “classic” as “standard or recognized especially because of great frequency or consistency of occurrence,”[4] we decided that 9% did not meet these criteria. Nonetheless, the most common symptoms at presentation are hematuria, abdominal mass, pain, and weight loss. In addition, scrotal varicoceles (more commonly left-sided) are found in as many as 11% of men with renal cell carcinoma [5].

I am actually speechless. Tricky, tricky question bank. Definitely a good sign that I have 1,396 more of these to go through. Moving on…

Question 2

A 19-year-old female born in Kiribati on Leap Day presents to your office in the middle of January complaining of “sniffles.” When asked how long she has experienced sniffles, she replies, “I can’t remember,” but she thinks it is related to the fact that she works at Build-A-Bear Workshop and stuffs animals with fuzz that she sometimes confuses with used facial tissues. She reports the sniffles are accompanied by diarrhea that comes mostly after eating the leftover Chinese food she finds in the back of her refrigerator a couple of months after ordering take-out. She denies sexual activity or illicit drug use, most likely because she is a teenager. She reports drinking 1-2 alcoholic beverages on weekends. She does not report any family history of confusing stuffed animal fuzz with facial tissues, but she mentioned that her father also experiences diarrhea after eating old Chinese food. 

Upon further questioning, the patient reports right lower quadrant abdominal pain, fevers, anorexia, nausea, and multiple episodes of vomiting. On physical exam, the patient is febrile to 101°F and has tenderness localized to McBurney’s point. Labs are notable for a white blood cell count of 14,500 cells/µL. What is the most appropriate next step in management? 

  1. Order abdominal ultrasound
  2. Order a complete respiratory viral panel to further evaluate the patient’s sniffles
  3. Emergent surgery
  4. Obtain further history, since abdominal pain could be secondary to surreptitious overconsumption of stuffed animal fuzz
  5. Discharge the patient before noon (DBN) 

“Classic” question bank, throwing me another curveball. I close the database without answering the question. Like I said earlier, a journey of 1,397 questions begins with a single step. Sure, I took a wrong step, but cut me some slack. A step is a step.

If I learned anything from tonight, it is that there is a lot of medicine left for me to learn. But as I reflect on the rest of my day, a few moments come to mind. I helped a patient admitted for a COPD exacerbation regain her ability to breathe. I had a long conversation with a patient about reducing her alcohol intake in order to prevent recurrent bouts of pancreatitis. I helped a patient with erythrodermic psoriasis fight off Staph bacteremia, resulting in the least amount of pain he has experienced in 14 months. And I helped a patient with multiple brain abscesses walk again. So, the journey ahead is long, and for now, I would be lucky if I said I knew even “9%” of what I will end up knowing. Despite that, I still made a difference in at least four people’s lives today. And the best part is: it’s only going to get better from here.

Matthew Siow is a 2nd-year medical student at NYU School of Medicine

Peer reviewed by Michael Tanner, MD, executive editor, Clinical Correlations

Image courtesy of Wikimedia Commons


  1. Kasper DL, Harrison TR. Harrison’s Principles of Internal Medicine. 16th ed. New York, NY: McGraw-Hill, Medical Pub. Division. 2005.
  2. American Cancer Society. Cancer facts & figures 2014. Accessed May 18, 2015.
  3. Skinner DG, Colvin RB, Vermillion CD, Pfister RC, Leadbetter WF. Diagnosis and management of renal cell carcinoma. A clinical and pathologic study of 309 cases. Cancer. 1971;28(5):1165-1177.
  4. Merriam-Webster Dictionaries Online. Accessed May 18, 2015.
  5. Pinals RS, Krane SM. Medical aspects of renal carcinoma. Postgrad Med J. 1962;38:507-519.

Morbidity & Mortality for James A. Garfield – A Book Review of “Destiny of the Republic: A Tale of Madness, Medicine, and the Murder of a President” by Candice Millard

July 31, 2015

By David Kudlowitz, MD

Peer Reviewed 

Last December, an unremitting sore throat led President Barack Obama to see an ENT. When the fiberoptic exam revealed soft tissue swelling in his throat, his physicians ordered a CAT scan. After a 28-minute visit to Walter Reed Hospital and a normal imaging study, he was diagnosed with acid reflux. It is likely that the president’s doctors were acting out of an abundance of caution. Unfortunately, President Obama is not the first American president to receive superfluous medical care. For Obama, an unnecessary CAT scan was likely harmless, but for one of his predecessors, James A. Garfield, medical overreach turned out to be deadly. 

Unlike those of more popular and longer-tenured presidents, the details of Garfield’s life are unfamiliar to many. As reviewed in Candice Millard’s book, “Destiny of the Republic: A Tale of Madness, Medicine, and the Murder of a President,” Garfield was born in squalor in rural Ohio and by age 26 had worked his way up from janitor to college president. He was a Union general in the Civil War, winning the key Battle of Middle Creek, which helped keep Kentucky in the Union. He was a longtime congressman who was later elected to the Senate. At the 1880 Republican National Convention, he was nominated for president, mostly against his own wishes. He served in office for only 200 days (the second-shortest presidential term, behind William Henry Harrison’s 31 days).

On July 2, 1881, the president rushed to catch a train at the Sixth Street Station in Washington, D.C. There, his assassin, Charles J. Guiteau, shot him in the right side of his back. Guiteau was an unbalanced individual who believed that God had commanded him to assassinate the president, and in his delusional reality, that this action would allow him to become ambassador to France. Guiteau was eventually tried, convicted, and hanged for murder. During the trial, however, part of Guiteau’s defense was that the true murderers of President Garfield were actually his doctors. Oddly enough, as Millard carefully analyzes, there may be some truth behind his statement.

Back at the train station, Garfield’s condition was bleak. The bullet had entered his back four inches to the right of his spinal column, damaging a lumbar vertebra and several ribs before lodging behind the pancreas. It did not damage any vital organs, nor did it hit the spinal cord. Some have argued that he likely would have survived if simply left alone. Unfortunately, at the time of the shooting, there was no way to determine the location or trajectory of the bullet.

On the floor of the train station, no fewer than 10 doctors invasively examined the president; the most infamous was the aptly named Dr. Doctor Willard Bliss (his given name was Doctor). Robert Todd Lincoln, Secretary of War and son of Abraham Lincoln, called on Dr. Bliss after the shooting. Lincoln had always been impressed with the care Dr. Bliss had provided to his father after the assassination at Ford’s Theater.

Unfortunately, Dr. Bliss’s medical practices were out of date and at times fraudulent. He was once thrown in prison for accepting bribes. Additionally, he forayed into medical quackery with his miracle substance cundurango, which he marketed as “The wonderful remedy for cancer, syphilis, scrofula, ulcers, salt rheum and other chronic blood diseases.” “Destiny of the Republic” scrutinizes Bliss’s rejection of antisepsis. Even though most European physicians had adopted Joseph Lister’s theories and methods of antisepsis, American physicians remained dogmatic in their attitudes against cleanliness. Bliss surrounded himself with physicians who also did not buy into Lister’s hypotheses. One of these physicians was Frank Hamilton, a surgeon at Bellevue Medical College, who preferred using warm water on surgical wounds to prevent infection.

After several physicians had performed manual exams of the president’s wound with unwashed hands, Bliss placed two unsterilized probes inside the bullet wound in an attempt to determine its trajectory. Convinced that the bullet was near the liver (based on his blind, unsterile exam), Bliss spent much of the next 80 days of medical care attempting to locate the bullet. However, he was hampered by his own arrogance. While using a modified metal detector developed by Alexander Graham Bell, a tool that served as a unique imaging modality before the x-ray became widely available, he allowed the inventor of the telephone to investigate only the president’s right side, near the liver. Had he allowed examination of the president’s left side, he likely would have found the bullet.

At the White House, Bliss and his medical staff performed several procedures to lance abscesses and place drainage tubes (while not using clean instruments). As the President’s condition worsened over the next 11 weeks, Bliss refused to admit that the president was septic and dying, saying several weeks into his medical care “Not the minutest symptom of pyemia has appeared thus far in the President’s Case. The wound is healthier and healing rapidly.” Meanwhile, “septic acne,” or pus-filled abscesses, formed all over Garfield’s body, including his arms, back, and even his parotid gland (which burst and drained into the President’s middle ear). Bliss and his men continued to place drainage tubes and lance abscesses. Later, at autopsy, a long sinus tract with multiple abscesses was found going toward the liver (with no bullet nearby).

Bliss was more concerned with the president’s extreme 80-pound weight loss (from 210 to 130 pounds) than with his overwhelming sepsis. To counteract the cachexia, Bliss prescribed rectal feeding, reminiscent of CIA torture practices. The rectal feeds, administered every 4 hours for 8 days, included beef bouillon (predigested with hydrochloric acid), warmed milk, egg yolks, and opium. He also used varying mixtures of whiskey and charcoal in his feeds.

While James Garfield’s presidency and assassination are now only footnotes of American history, his plight was certainly a tragedy that potentially could have been avoided. Candice Millard’s book not only tackles the topic with attention to medical detail but is also readable and enjoyable. She suggests that had Bliss and his counterparts bought into Joseph Lister’s methods (as Europeans had already done and American physicians would do in about 15 years), the president’s life might have been saved. Additionally, she concludes that Bliss’s obstinacy in refusing to recognize Garfield’s septic shock likely prolonged the president’s suffering. Today, we should use this story as a lesson: sometimes less medical intervention is better, even for a president. Every invasive test has potential consequences, and we must always weigh its risks and benefits. It is imperative that we stay up to date with current research and medical practices and not become set in our traditional protocols. The medical community in the 1880s realized this lesson as well. A medical critic at the time said, “If Garfield had been a ‘tough,’ and had received his wound in a Bowery dive, he would have been brought to Bellevue Hospital … without any fuss or feathers, and would have gotten well.”

Dr. David Kudlowitz is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Neil A. Shapiro, MD, Editor-In-Chief, Clinical Correlations

Image courtesy of Wikimedia Commons


Millard C. Destiny of the Republic: A Tale of Madness, Medicine, and the Murder of a President. New York, NY: Doubleday, a division of Random House; 2011.

The Role of Fish Oil in Arrhythmia Prevention

July 29, 2015

By Steven Bolger

Peer Reviewed

Omega-3 fatty acids were first identified as potential agents to prevent and treat cardiovascular disease through several epidemiologic studies of the Greenlandic Inuit in the 1970s, which suggested that high consumption of fish oil was associated with a decreased risk of cardiovascular disease [1,2]. Fish oil contains two omega-3 fatty acids, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), that have been shown to be beneficial in treating hypertriglyceridemia and in the secondary prevention of cardiac events [3-5].

The GISSI-Prevenzione trial, published in 1999, was one of the first multicenter, randomized controlled trials to explore the effect of supplementation with omega-3 fatty acids on patients with recent myocardial infarctions [5]. The trial included 11,324 patients with recent myocardial infarctions. They were randomized to receive daily supplementation with either a capsule containing EPA and DHA in a 1-to-2 ratio or a placebo capsule for 3.5 years, with death from any cause, non-fatal myocardial infarction, and stroke as the composite primary endpoint. The trial demonstrated that supplementation with omega-3 fatty acids resulted in a significant reduction in the primary endpoint, with a relative risk reduction of 10% compared to placebo. The results of this trial suggested that a reduction in sudden cardiac death could be responsible for the decrease in mortality, sparking investigation of the potential anti-arrhythmic properties of omega-3 fatty acids.

Omega-3 fatty acids have been shown to increase the threshold of depolarization of cardiac muscle required for action potential generation in animal models, resulting in a decrease in arrhythmias. A 1994 study using a canine model showed that infusion of a fish oil emulsion resulted in a significantly decreased incidence of ventricular fibrillation compared to a control infusion in response to exercise-induced ischemia [6]. Further studies in rat cardiomyocytes revealed that the mechanism responsible for the reduction in arrhythmias is inhibition of voltage-dependent sodium and L-type calcium channels [7-9]. By shifting the cell membrane potential to a more negative value, omega-3 fatty acids increase the threshold required to generate an action potential, preventing the initiation of arrhythmias.

Several randomized controlled trials have failed to demonstrate that omega-3 fatty acid supplementation results in a reduction in ventricular arrhythmias in patients with implantable cardioverter-defibrillators. A 2005 trial of 200 patients with implantable cardioverter-defibrillators and recent episodes of sustained ventricular tachycardia or ventricular fibrillation showed no reduction in the risk of arrhythmias with fish oil supplementation [10]. The results of this trial furthermore suggested a possible pro-arrhythmic effect of omega-3 fatty acids. A 2006 trial similarly failed to show a reduction in ventricular tachycardia, ventricular fibrillation, or all-cause mortality in 546 patients with implantable cardioverter-defibrillators who received supplementation with omega-3 fatty acids [11].

A 2005 randomized controlled trial of 402 patients with implantable cardioverter-defibrillators, however, demonstrated a trend towards benefit in patients receiving supplementation with omega-3 fatty acids [12]. The primary endpoint selected for the trial was time to first episode of ventricular tachycardia, ventricular fibrillation, or death from any cause. Though the results did not show a significant reduction in the primary endpoint, patients who received omega-3 fatty acid supplementation showed a trend towards a prolonged time to the first episode of these arrhythmias or death from any cause, with a risk reduction of 28% and p-value of 0.057. Furthermore, the risk reduction was significant when probable episodes of ventricular tachycardia and ventricular fibrillation were included in the analysis, with a risk reduction of 31%.

With conflicting results from several trials, a 2008 systematic review of 12 randomized controlled trials was performed to synthesize clinical data on the effects of fish oil on mortality and arrhythmia prevention [13]. The primary outcomes were defined as the arrhythmic end points of appropriate implantable cardioverter-defibrillator intervention and sudden cardiac death. The results of the meta-analysis showed that fish oil supplementation did not have a significant effect on arrhythmias and all-cause mortality. The review did demonstrate a significant reduction in deaths from cardiac causes, consistent with previous studies, including the GISSI-Prevenzione trial.

Fish Oil For Atrial Fibrillation Prevention

In addition to trials investigating ventricular arrhythmias in patients with implantable cardioverter-defibrillators, there have been several observational studies exploring the effect of fish oil on the incidence of atrial fibrillation, which have yielded conflicting results. The Danish Diet, Cancer, and Health Study, a prospective cohort study, found that consumption of omega-3 fatty acids from fish was not associated with a reduction in the risk of atrial fibrillation or flutter [14]. The cohort for this study included 47,949 individuals living in Denmark with a mean age of 56 years. The Rotterdam Study found that consumption of EPA and DHA was similarly not associated with a reduction in the risk of developing atrial fibrillation [15]. The cohort for this study included 5184 patients with a mean age of 67.4 years who lived in the Netherlands. A 12-year prospective, observational study by Mozaffarian and colleagues of 4815 patients over the age of 65, however, found that consumption of fish was associated with a 31% reduction in the risk of atrial fibrillation [16].

The mixed results between these studies may reflect differences in the baseline characteristics of the cohorts of the three studies. The Mozaffarian study placed an age restriction on the cohort of the study, resulting in a mean age of 72.8 years, compared to 56 years for the Danish Diet, Cancer, and Health Study and 67.4 years for the Rotterdam Study. The risk of atrial fibrillation increases with age; thus, the reduction in risk of atrial fibrillation in response to omega-3 fatty acid supplementation may only be appreciable in elderly populations at highest risk [17-18]. The assessment of dietary intake of omega-3 fatty acids also differed between the studies depending on the method of information collection. The Rotterdam study, for example, obtained information via a questionnaire and follow-up interview with a dietician, while the Mozaffarian study employed only a questionnaire.

The 2012 OPERA trial was the first randomized controlled trial to assess the effect of omega-3 fatty acid supplementation on atrial fibrillation [19]. The OPERA trial randomized 1516 patients with a mean age of 64 years who were scheduled for cardiac surgery to receive either a daily fish oil capsule or placebo for 3-5 days before the surgery and for 10 postoperative days or until discharge, whichever came first. The results of the trial showed that perioperative supplementation with fish oil did not reduce the risk of postoperative atrial fibrillation compared to the placebo.

Overall, the results of studies exploring the potential anti-arrhythmic effects of omega-3 fatty acids in reducing the risk of atrial fibrillation have been conflicting. A 2010 meta-analysis of 10 randomized controlled trials examining the role of omega-3 fatty acids in preventing atrial fibrillation found no evidence of significant effects of omega-3 fatty acids on atrial fibrillation prevention [20].

In conclusion, although omega-3 fatty acid supplementation has been shown to provide several potential cardiovascular benefits, trials have failed to consistently show that omega-3 fatty acids have significant anti-arrhythmic effects. The reasons for the inconsistent results are unknown but may be related to patient selection, type of fish oil preparation, fish oil dose, or other factors. Meta-analyses of randomized controlled trials have not shown a reduction in either ventricular arrhythmias or atrial fibrillation. Additional studies are necessary to further characterize the role of fish oil in preventing arrhythmias.

Steven Bolger is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Robert Donnino, MD, Cardiology Editor, Clinical Correlations,  NYU Langone Medical Center

 Image courtesy of Wikimedia Commons


  1. Bang HO, Dyerberg J, Hjørne N. The composition of food consumed by Greenland Eskimos. Acta Med Scand. 1976;200(1-2):69-73.
  2. Dyerberg J, Bang HO, Stoffersen E, Moncada S, Vane JR. Eicosapentaenoic acid and prevention of thrombosis and atherosclerosis? Lancet. 1978;2(8081):117-119.
  3. Balk EM, Lichtenstein AH, Chung M, Kupelnick B, Chew P, Lau J. Effects of omega-3 fatty acids on serum markers of cardiovascular disease risk: a systematic review. Atherosclerosis. 2006;189(1):19-30.
  4. Harris WS. n-3 fatty acids and serum lipoproteins: human studies. Am J Clin Nutr. 1997;65(5 Suppl):1645S-1654S.
  5. GISSI-Prevenzione Investigators (Gruppo Italiano per lo Studio della Sopravvivenza nell’Infarto miocardico). Dietary supplementation with n-3 polyunsaturated fatty acids and vitamin E after myocardial infarction: results of the GISSI-Prevenzione trial. Lancet. 1999;354(9177):447-455.
  6. Billman GE, Hallaq H, Leaf A. Prevention of ischemia-induced ventricular fibrillation by omega 3 fatty acids. Proc Natl Acad Sci U S A. 1994;91(10):4427-4430.
  7. Kang JX, Xiao YF, Leaf A. Free, long-chain, polyunsaturated fatty acids reduce membrane electrical excitability in neonatal rat cardiac myocytes. Proc Natl Acad Sci U S A. 1995;92(9):3997-4001.
  8. Xiao YF, Kang JX, Morgan JP, Leaf A. Blocking effects of polyunsaturated fatty acids on Na+ channels of neonatal rat ventricular myocytes. Proc Natl Acad Sci U S A. 1995;92(24):11000-11004.
  9. Xiao YF, Gomez AM, Morgan JP, Lederer WJ, Leaf A. Suppression of voltage-gated L-type Ca2+ currents by polyunsaturated fatty acids in adult and neonatal rat ventricular myocytes. Proc Natl Acad Sci U S A. 1997;94(8):4182-4187.
  10. Raitt MH, Connor WE, Morris C, et al. Fish oil supplementation and risk of ventricular tachycardia and ventricular fibrillation in patients with implantable defibrillators: a randomized controlled trial. JAMA. 2005;293(23):2884-2891.
  11. Brouwer IA, Zock PL, Camm AJ, et al; SOFA Study Group. Effect of fish oil on ventricular tachyarrhythmia and death in patients with implantable cardioverter defibrillators: the Study on Omega-3 Fatty Acids and Ventricular Arrhythmia (SOFA) randomized trial. JAMA. 2006;295(22):2613-2619.
  12. Leaf A, Albert CM, Josephson M, et al. Prevention of fatal arrhythmias in high-risk subjects by fish oil n-3 fatty acid intake. Circulation. 2005;112(18):2762-2768.
  13. León H, Shibata MC, Sivakumaran S, Dorgan M, Chatterley T, Tsuyuki RT. Effect of fish oil on arrhythmias and mortality: systematic review. BMJ. 2008;337:a2931.
  14. Frost L, Vestergaard P. n-3 Fatty acids consumed from fish and risk of atrial fibrillation or flutter: the Danish Diet, Cancer, and Health Study. Am J Clin Nutr. 2005;81(1):50-54.
  15. Brouwer IA, Heeringa J, Geleijnse JM, Zock PL, Witteman JC. Intake of very long-chain n-3 fatty acids from fish and incidence of atrial fibrillation. The Rotterdam Study. Am Heart J. 2006;151(4):857-862.
  16. Mozaffarian D, Psaty BM, Rimm EB, et al. Fish intake and risk of incident atrial fibrillation. Circulation. 2004;110(4):368-373.
  17. Psaty BM, Manolio TA, Kuller LH, et al. Incidence of and risk factors for atrial fibrillation in older adults. Circulation. 1997;96(7):2455-2461.
  18. Furberg CD, Psaty BM, Manolio TA, Gardin JM, Smith VE, Rautaharju PM. Prevalence of atrial fibrillation in elderly subjects (the Cardiovascular Health Study). Am J Cardiol. 1994;74(3):236-241.
  19. Mozaffarian D, Marchioli R, Macchia A, et al; OPERA Investigators. Fish oil and postoperative atrial fibrillation: the Omega-3 Fatty Acids for Prevention of Post-operative Atrial Fibrillation (OPERA) randomized trial. JAMA. 2012;308(19):2001-2011.
  20. Liu T, Korantzopoulos P, Shehata M, Li G, Wang X, Kaul S. Prevention of atrial fibrillation with omega-3 fatty acids: a meta-analysis of randomised clinical trials. Heart. 2011;97(13):1034-1040.

UV Nail Lamps and Cancer: A Correlation?

July 24, 2015

By Jennifer Ng, MD

Peer Reviewed 

Beauty and suffering are often thought to be intertwined.  It is hard to have your cake and eat it too.  In the quest for beauty, women (and men) have subjected themselves to toxic and potentially deadly practices, such as applying lead-based cosmetics to whiten their faces historically [1], or more recently, going to tanning beds and/or lying out in the sun for prolonged periods to get a “healthy glow.”  As we have become increasingly health-conscious and vigilant, more and more beauty products and practices have come under scrutiny for their possible toxic effects.  Most recently, a cousin of the tanning bed, the popular ultraviolet (UV) nail lamp, has become a topic of much controversy [2].

At first glance, the UV nail lamp seems like a miracle worker.  It serves many purposes in the nail salon: quickly drying UV-cured acrylic nails and traditional nail polish, activating special topcoats that help protect the nail, and curing gel nails, which are more durable than regular nail polish [3].  However, like the tanning bed, it produces predominantly UV-A radiation, which is known to cause oxidative stress and free radical formation, leading to DNA damage [4].

The controversy over the potential carcinogenic effects of the UV nail lamp started with two case reports published in 2009 [3].  These two reports described the development of non-melanoma skin cancers on the hands of two white women, who both had indoor occupations, little to moderate recreational UV exposures and no personal/family history of skin cancers.  The major commonality shared by the two women was frequent visits to the nail salon – one with a 15-year history of twice monthly UV nail light exposure and the other with a history of eight episodes of UV nail light exposure within one year.

These two case reports prompted a lot of research into the amount of potential UV exposure from the UV nail lamp.  The authors of the 2009 case reports argued that based on the amount of power produced by most nail lamps (4-54W), as compared to that of tanning beds (1200W or more), and the amount of body surface area exposed (2% in nail lamps vs. 100% in tanning beds), the amount of radiation per meter squared was actually comparable between UV nail lamps and tanning beds [3].  Another study found that the amount of UV exposure per typical nail session (lasting less than 10 minutes) totaled 15-22.5 joules per meter squared, comparable to the day-long recommended limit for outdoor workers and recreationalists (30 joules per meter squared over 8 hours) set by the International Commission on Non-Ionizing Radiation Protection [4].
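The dose comparison above reduces to simple arithmetic. Here is a minimal sketch (the per-session range and the daily limit are the figures quoted in the text; nothing else is taken from the underlying studies):

```python
# Back-of-the-envelope check of the dose comparison cited above.
# The per-session range (15-22.5 J/m^2) and the 30 J/m^2 daily
# occupational limit are the figures quoted in the text.

DAILY_LIMIT_J_PER_M2 = 30.0  # ICNIRP 8-hour recommended limit


def fraction_of_daily_limit(session_dose_j_per_m2: float) -> float:
    """Express one nail-lamp session's UV dose as a fraction of the daily limit."""
    return session_dose_j_per_m2 / DAILY_LIMIT_J_PER_M2


low_session, high_session = 15.0, 22.5  # J/m^2 per session, from the text
print(f"{fraction_of_daily_limit(low_session):.0%}-"
      f"{fraction_of_daily_limit(high_session):.0%} of the daily limit")
```

Run as written, this prints "50%-75% of the daily limit," which is why a sub-10-minute session was considered comparable to a full day's recommended outdoor exposure.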

However, other researchers have argued that the risk of skin cancer from UV lamps is minimal.  In fact, UV light (especially narrowband UV-B) is frequently used in the treatment of common skin conditions such as psoriasis, vitiligo and atopic dermatitis [5]. One study compared a UV nail lamp session’s UV dose to the UV dose of a single course of narrowband UV-B (NBUVB) phototherapy and found that more than 250 years of weekly UV nail sessions would be required to equal the exposure from one course of narrowband UV-B therapy [6].  Since the risk of developing skin cancer from one course of NBUVB treatment was thought to be low [7], the authors of the study concluded that the risk of skin carcinogenesis from UV exposure from nail lamps must be low as well.

Another study attempted to quantify the actual risk of squamous cell carcinoma (SCC) from UV nail lamps [8]. Based on complex calculations that took into account subjects’ ages and the UV doses to which they were exposed, the study authors derived an SCC risk model using data from six different studies on the incidence of SCC in regions around the world, including Norway and the USA. They used this model to compare the risk of skin cancer from day-to-day sun exposure with that from UV nail lamps, and from these calculated risks they determined the number of women who would need to be exposed to UV nail lamps in order for one woman to develop SCC on the dorsum of her hands, also known as the number needed to harm (NNH). Depending on a woman’s age and her years of UV nail lamp use, anywhere from tens of thousands to hundreds of thousands of women would need to be exposed in order for one woman to develop SCC on the dorsum of her hands.
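The study’s risk model itself is far more involved, but the number-needed-to-harm step it ends with is one division: NNH = 1 / (absolute risk increase). A minimal sketch, in which both risk values are hypothetical placeholders rather than figures from the paper:

```python
# NNH = 1 / absolute risk increase. The two risk values below are
# hypothetical placeholders for illustration only; the actual study
# derived its risks from an age- and dose-dependent SCC model.

def number_needed_to_harm(risk_exposed: float, risk_unexposed: float) -> float:
    """Number of people who must be exposed for one additional case to occur."""
    return 1.0 / (risk_exposed - risk_unexposed)


baseline_risk = 0.001  # hypothetical lifetime SCC risk without lamp use
exposed_risk = 0.002   # hypothetical lifetime SCC risk with regular lamp use

print(round(number_needed_to_harm(exposed_risk, baseline_risk)))
```

With these placeholder risks the NNH is 1,000; the smaller the absolute risk increase, the larger the NNH, which is how the study arrives at figures in the tens to hundreds of thousands.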

Why is there such variability in the results from different studies?  One group recently hypothesized that the range of UV lamps available for commercial use might be the explanation [9].  UV lamps differ in the number of bulbs, the power/wattage of each bulb, and the brand of the light source, all of which may lead to differing amounts of UV-A radiation produced.  This study measured the UV-A energy delivered during an average manicure visit by seventeen different UV lamps used in 16 nail salons, finding a range of 0 to 8 joules per centimeter squared, with a median of 5.1.  The threshold value for DNA damage in UV-A-irradiated skin cells is 60 joules per centimeter squared.  Therefore, the number of visits needed for a customer to reach this threshold can range from 8 to 208, with a median of 11.8.  The take-home point is that, depending on the UV nail lamp, the number of visits needed to confer DNA damage (i.e., the potential for carcinogenesis) to skin cells can vary greatly.
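The visit counts above come from dividing the DNA-damage threshold by a lamp’s per-visit dose. A short sketch, using only the figures quoted in the text:

```python
# Visits needed before cumulative UV-A dose reaches the DNA-damage
# threshold. The 60 J/cm^2 threshold and the per-visit doses are the
# figures quoted in the text.

DNA_DAMAGE_THRESHOLD_J_PER_CM2 = 60.0


def visits_to_threshold(dose_per_visit_j_per_cm2: float) -> float:
    """Salon visits until cumulative dose reaches the damage threshold."""
    return DNA_DAMAGE_THRESHOLD_J_PER_CM2 / dose_per_visit_j_per_cm2


median_dose = 5.1  # J/cm^2, median across the 17 lamps measured
print(round(visits_to_threshold(median_dose), 1))  # ~11.8 visits
```

For the median lamp this gives 11.8 visits, matching the figure above; the weakest and strongest lamps in the sample produce the wide 8-to-208-visit range.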

While different researchers present conflicting evidence regarding the degree of skin cancer risk posed by UV nail lamps, they do mostly agree on recommending the use of either full-spectrum sunscreen or UV-A-blocking gloves to limit the exposure to only the nails [4, 8, 9].  Interestingly enough, the nail plate itself is very resistant to UV penetration, completely blocking the transmission of UV-B light and almost completely blocking UV-A light [10].  Thus, even while the debate is ongoing, sunblock and/or UV-A-blocking gloves may be a way for your patients to have their cake and eat it too.

Dr. Jennifer Ng completed her internal medicine residency at NYU Langone Medical Center

Peer Reviewed by Jo-Ann Latkowski, MD, Dermatology, NYU Langone Medical Center

Image courtesy of “Skin Cancer: Are nail salon UV lamps a skin cancer risk?” 1 May 2014.


  1. Mapes D. Suffering for beauty has ancient roots: From lead eyeliner to mercury makeup, killer cosmetics over the decades. Accessed June 1, 2014.
  2. Park A. 24 visits to the nail salon could trigger skin cancer. April 30, 2014. Accessed June 1, 2014.
  3. MacFarlane DF, et al. Occurrence of nonmelanoma skin cancers on the hands after UV nail light exposure. Arch Dermatol. 2009;145(4):447-449. doi:10.1001/archdermatol.2008.622.
  4. Curtis J, et al. Acrylic nail curing UV lamps: high-intensity exposure warrants further research of skin cancer risk. J Am Acad Dermatol. 2013;69(6):1069-1070.
  5. Tanew A, et al. Narrowband UV-B phototherapy vs photochemotherapy in the treatment of chronic plaque-type psoriasis: a paired comparison study. Arch Dermatol. 1999;135(5):519.
  6. Markova A, et al. Risk of skin cancer associated with the use of UV nail lamp. J Invest Dermatol. 2013;133:1097-1099. doi:10.1038/jid.2012.440.
  7. Diffey BL, Farr PM. The challenge of follow up in narrowband ultraviolet B phototherapy. Br J Dermatol. 2007;157:344-349.
  8. Diffey BL. The risk of squamous cell carcinoma in women from exposure to UVA lamps used in cosmetic nail treatment. Br J Dermatol. 2012;167(5):1175-1178. doi:10.1111/j.1365-2133.2012.11107.x.
  9. Shipp LR, et al. Further investigation into the risk of skin cancer associated with the use of UV nail lamps. JAMA Dermatol. Epub April 30, 2014. doi:10.1001/jamadermatol.2013.8740.
  10. Stern DK, et al. UV-A and UV-B penetration of normal human cadaveric fingernail plate. Arch Dermatol. 2011;147(4):439-441. doi:10.1001/archdermatol.2010.375.


A Primer on CRP and Cardiovascular Risk

July 22, 2015

By Cindy Fei, MD

Peer Reviewed

A 63-year-old woman with hypertension presents to your clinic for routine follow-up. She came across an online article regarding C-reactive protein and its purported link to heart disease, and she asks you whether she should be tested for it. She is an otherwise asymptomatic non-smoker without a family history of heart disease. Her only medication is hydrochlorothiazide. Her blood pressure measured in the office is 128/81 mmHg, her low-density lipoprotein is 110 mg/dL, and her high-density lipoprotein is 54 mg/dL. What do you tell her?

What is CRP?

C-reactive protein (CRP) is an acute-phase reactant produced by the liver in response to the inflammatory cytokines interleukin-6 and interferon. CRP primarily mediates the inflammatory response by binding to complement and damaged cell membranes, but it has also been noted to bind to low-density lipoprotein (LDL) [1]. Common stimuli of high CRP levels (conventionally defined as >3 mg/L) include infection, cancer, and surgery. CRP also increases to intermediate levels (1-3 mg/L) with age, obesity, smoking, gum disease, and related co-morbidities such as chronic lung disease, diabetes, and hypertension [2]. Interestingly, repeated CRP measurements in the same person exhibit stability over time comparable to that of blood pressure and cholesterol measurements [3]. While early assays only detected CRP levels greater than 3 mg/L, later studies capitalized on the development of improved high-sensitivity CRP (hs-CRP) assays, which detect levels as low as 0.1 mg/L.

In healthy adults, studies show a positive correlation between elevated CRP levels and the development of coronary heart disease, independent of other risk factors. A meta-analysis of 54 observational studies characterized this relationship as a log-linear association when adjusted for age and sex [4]. A 2009 meta-analysis of 11 good-quality studies, all of which adjusted for Framingham risk factors, calculated a relative risk of 1.58 (confidence interval 1.37-1.83) for the development of coronary artery disease in the high versus low serum CRP groups; the corresponding risk ratio for the intermediate versus low serum CRP groups was 1.22 (confidence interval 1.11-1.33) [5]. This relationship persists in individuals with known cardiovascular disease, with higher CRP values portending a worse prognosis. For instance, in a cohort of subjects with stable coronary artery disease distributed fairly evenly across low, intermediate, and high serum CRP categories, the intermediate CRP group showed a statistically significant increase in the risk of cardiovascular death, myocardial infarction, or stroke compared with the low CRP group (adjusted hazard ratio 1.39), and the adjusted hazard ratio rose to 1.52 for the high versus low CRP comparison [6].

Does CRP play a pathologic role in atherosclerosis?

Multiple studies demonstrate an association between elevated CRP and increased risk of heart disease, regardless of prior cardiovascular disease diagnosis. However, it is unclear if a causal mechanism governs this association. Do high CRP levels drive atherosclerosis, or are they simply a marker of disease? Atherosclerotic plaques stain positive for CRP, but the evidence for causality is less clear [1]. Proposed avenues for CRP-induced plaque build-up include monocyte adhesion and recruitment into the vessel walls, macrophage activation, and smooth muscle cell proliferation. Moreover, binding to LDL facilitates LDL oxidation and uptake by macrophages. CRP also interferes with endothelial nitric oxide synthase function and prostacyclin synthesis, leading to decreased vasodilation [7].

In addition, CRP’s classification as an acute-phase reactant and its subsequent association with inflammatory conditions offer numerous confounding variables. On one hand, lower CRP levels after statin therapy are associated with a lower risk of recurrent myocardial infarction or coronary fatalities, regardless of post-statin LDL levels [8]. Post hoc analyses of the PROVE-IT trial demonstrated that lower CRP was significantly and independently associated with slower progression of atherosclerosis as measured by intravascular ultrasound over 18 months [9,10]. This suggests a direct link between CRP and cardiovascular risk independent of LDL levels.

On the other hand, scenarios that attempt to directly influence or change CRP levels do not necessarily maintain this link. For example, murine models of atherosclerosis do not reliably show increased plaque build-up in transgenic mice designed to produce human CRP [7]. One mendelian randomization study from 2008 assessed whether naturally-occurring polymorphisms in the CRP gene, and the resulting variations in serum CRP levels, could predict cardiovascular outcomes. Genetic variation was responsible for up to a 64% change in CRP level, but this did not translate into a statistically significant increase in the odds ratio for ischemic heart disease. In contrast, different apolipoprotein E genotypes accounted for up to a 14% change in cholesterol level, with a statistically significant increased odds ratio of 1.29 for the development of ischemic heart disease [11]. A later mendelian randomization study also found no statistically significant relationship between genetically-raised CRP levels and the development of heart disease [12].

How to Use CRP in Clinical Practice

To date, the main randomized clinical trial examining CRP and cardiovascular risk is the JUPITER trial, published in 2008. This trial evaluated rosuvastatin 20 mg daily for primary prevention in healthy adults with both LDL <130 mg/dL and hs-CRP >2 mg/L. The trial was stopped early at the first interim analysis because the statin’s benefit was clear. After a median of 1.9 years of follow-up, a statistically significant reduction in the primary outcome (a composite of heart attack, stroke, unstable angina, revascularization, or cardiovascular death) was found for the statin group as compared to placebo (hazard ratio 0.56, 95% confidence interval 0.46 to 0.69) [13]. This suggested a role for CRP in selecting additional patients who would benefit from statins. Although the trial only included patients with higher levels of hs-CRP, a post hoc analysis demonstrated a consistent association between higher baseline hs-CRP and increased frequency of the primary outcome [14]. Of note, the trial was criticized on the grounds of conflict of interest, as the principal investigator co-owns the patent for the hs-CRP blood test used in the study [15].

In 2003, the Centers for Disease Control and Prevention and the American Heart Association recommended against universal screening for cardiovascular risk with CRP. The document identified intermediate-risk patients as the population for which it is reasonable to measure hs-CRP twice, 2 weeks apart, for further risk stratification [16]. In healthy asymptomatic adults with an intermediate Framingham risk of 5-20%, the addition of CRP appropriately reclassified only 4.3% of subjects into the high-risk category, and only 3.6% into the low-risk category [17]. According to one model developed prior to the updated statin therapy guidelines, testing the CRP of 440 intermediate-risk patients without a coronary heart disease equivalent is needed in order to reclassify 23 individuals as high-risk. If those 23 subjects initiated statin therapy, then 1 cardiovascular event (myocardial infarction, stroke, or fatal coronary heart disease) would be averted. In effect, the number needed to “test” of 440 would avert 1 cardiovascular event over 10 years, assuming appropriate statin interventions based on the 2002 Adult Treatment Panel III guidelines [18]. However, studies that have compared the accuracy of CRP versus coronary artery calcium score and carotid intima-media thickness in reclassifying intermediate-risk patients found that coronary artery calcium score and carotid intima-media thickness both outperformed CRP [17].
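The screening model above reduces to simple arithmetic, sketched here for clarity (the variable names are illustrative; the figures are those reported in the text):

```python
# Screening model described above [18]: testing 440 intermediate-risk
# patients reclassifies 23 as high-risk; treating those 23 with a
# statin averts 1 cardiovascular event over 10 years.
patients_tested = 440
reclassified_high_risk = 23
events_averted = 1

number_needed_to_test = patients_tested / events_averted          # 440 tests per event averted
reclassification_rate = reclassified_high_risk / patients_tested  # roughly 5% reclassified
nnt_among_reclassified = reclassified_high_risk / events_averted  # 23 treated per event averted
```

Framed this way, the yield of CRP testing is modest: hundreds of tests, and a small minority of reclassifications, stand behind each event averted.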

More recent guidelines still fail to offer compelling indications for CRP utilization. In fact, the 2009 US Preventive Services Task Force stated that there was insufficient evidence for the use of hs-CRP in cardiovascular risk assessment [19]. Two simultaneously released guidelines in November 2013 from the American College of Cardiology/American Heart Association (ACC/AHA), on the topics of cholesterol and of cardiovascular risk assessment, discuss a possible role for hs-CRP in patients who do not fall into the outlined four major statin benefit groups or who have unclear risk even after quantitative risk assessment. The recommendation to consider hs-CRP under these select circumstances is based on expert opinion only, and does not distinguish between CRP and other novel risk factors such as coronary artery calcium score and ankle-brachial index [20,21]. The new guidelines also suggest hs-CRP >2 mg/L as the threshold for upgrading a patient’s level of cardiovascular risk.


In summary, existing evidence tentatively suggests that CRP is an independent risk factor for heart disease; however, in the absence of data examining universal CRP screening, hard clinical outcomes, mortality, or cost effectiveness, the current recommendations are to use CRP sparingly under select circumstances. In the clinic, CRP may be used as a tool for further risk stratification of intermediate-risk patients in order to select candidates who may benefit the most from additional interventions and therapies.

With regard to the clinical vignette, this patient does not fall into one of the four major statin benefit groups outlined in the newly released 2013 ACC/AHA guidelines. Her calculated 10-year risk of atherosclerotic cardiovascular disease is 6%, which does not reach the 7.5% threshold for starting a statin. According to the 2013 ACC/AHA Guideline on the Assessment of Cardiovascular Risk, expert opinion holds that hs-CRP may have a role in determining whether to begin statin therapy. If her measured hs-CRP were greater than 2 mg/L, one might consider upgrading her risk level and adding a statin for primary prevention, with the knowledge that this recommendation is based on very limited data.
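The decision path applied to the vignette can be sketched as follows. This is an illustration of the guideline logic described above only (the function name and threshold encoding are assumptions), not clinical advice or a validated risk calculator:

```python
def consider_statin(ten_year_risk, hs_crp_mg_per_l=None):
    """Sketch of the 2013 ACC/AHA logic described above (illustrative
    only, not clinical advice): a 10-year ASCVD risk >= 7.5% meets the
    threshold for statin therapy; below it, an hs-CRP > 2 mg/L may
    upgrade the patient's risk level per expert opinion."""
    if ten_year_risk >= 0.075:
        return True
    if hs_crp_mg_per_l is not None and hs_crp_mg_per_l > 2.0:
        return True  # risk "upgraded" on limited, expert-opinion evidence
    return False
```

For the vignette’s patient (10-year risk 6%), the answer hinges entirely on the optional hs-CRP value, which is exactly why the decision to order the test is the substantive one.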

Commentary By Robert Donnino, MD  Assistant Professor of Medicine (Cardiology)

The use of hs-CRP for cardiovascular risk stratification remains highly controversial. Analysis of existing data suggests that CRP is, at best, a weak independent risk factor for clinical cardiovascular events. Because the JUPITER trial (Ridker et al., reference 13 above) did not include patients with CRP <2, it cannot be concluded that a CRP level >2 conferred any increased risk, nor that it identified patients who derived additional benefit from statin therapy. This has led many to question whether patients with CRP <2 would have received similar benefits from statin therapy had they been included in the trial.

As mentioned in this overview of CRP, data published from the MESA cohort showed that CRP was not a very effective tool for reclassifying intermediate-risk patients into higher or lower risk groups, reclassifying a total of only 8% of patients (Yeboah et al., reference 17 above). For comparison, coronary calcium score in that same cohort reclassified 66% of patients into higher or lower risk groups. Other studies have shown even lower reclassification ability for CRP. Thus, although supported by current guidelines and followed by some practitioners, I believe the data do not support the use of CRP as a risk stratification tool, and that much more powerful stratification tools are available (i.e., coronary calcium score). For a more in-depth analysis of CRP for cardiovascular risk, I would recommend the excellent review by Yousuf and colleagues (reference 7 above). Until we have more clarifying data, the role of CRP in clinical practice will remain controversial.

Dr. Cindy Fei is an internist at NYU Langone Medical Center

Peer review by Robert Donnino, MD, Assistant Professor of Medicine (Cardiology), NYU Langone Medical Center

Image courtesy of Wikimedia Commons


  1. Scirica BM, Morrow DA. Is C-reactive protein an innocent bystander or proatherogenic culprit? The verdict is still out. Circulation 2006;113(17): 2128-2134.
  2. Windgassen EB, Funtowicz L, Lunsford TN, Harris LA, Mulvagh SL. C-reactive protein and high-sensitivity C-reactive protein: an update for clinicians. Postgrad Med 2011;123(1): 114-119.
  3. Danesh J, Wheeler JG, Hirschfield GM, et al. C-reactive protein and other circulating markers of inflammation in the prediction of coronary heart disease. NEJM 2004;350(14): 1387-1397.
  4. Emerging Risk Factors Collaboration, Kaptoge S, Di Angelantonio E, et al. C-reactive protein concentration and risk of coronary heart disease, stroke, and mortality: an individual participant meta-analysis. Lancet 2010;375(9709): 132-140.
  5. Buckley DI, Fu R, Freeman M, Rogers K, Helfand M. C-reactive protein as a risk factor for coronary heart disease: a systematic review and meta-analyses for the U.S. Preventive Services Task Force. Ann Intern Med 2009;151(7): 483-495.
  6. Sabatine MS, Morrow DA, Jablonski KA, et al. Prognostic significance of the Centers for Disease Control/American Heart Association high-sensitivity C-reactive protein cut points for cardiovascular and other outcomes in patients with stable coronary artery disease. Circulation 2007;115(12): 1528-1536.
  7. Yousuf O, Mohanty BD, Martin SS, et al. High-sensitivity C-reactive protein and cardiovascular disease: a resolute belief or an elusive link? J Am Coll Cardiol 2013;62(5): 397-408.
  8. Ridker PM, Cannon CP, Morrow D, et al. C-reactive protein levels and outcomes after statin therapy. NEJM 2005;352(1): 20-28.
  9. Cannon CP, Braunwald E, McCabe CH, et al. Intensive versus moderate lipid lowering with statins after acute coronary syndromes. NEJM 2004;350(15): 1495-1504.
  10. Nissen SE, Tuzcu EM, Schoenhagen P, et al. Statin therapy, LDL cholesterol, C-reactive protein, and coronary artery disease. NEJM 2005;352(1): 29-38.
  11. Zacho J, Tybjaerg-Hansen A, Jensen JS, Grande P, Sillesen H, Nordestgaard BG. Genetically elevated C-reactive protein and ischemic vascular disease. NEJM 2008;359(18): 1897-1908.
  12. C Reactive Protein Coronary Heart Disease Genetics Collaboration (CCGC), Wensley F, Gao P, et al. Association between C reactive protein and coronary heart disease: mendelian randomisation analysis based on individual participant data. BMJ 2011;342:d548.
  13. Ridker PM, Danielson E, Fonseca FA, et al. Rosuvastatin to prevent vascular events in men and women with elevated C-reactive protein. NEJM 2008;359(21): 2195-2207.
  14. Ridker PM, MacFadyen J, Libby P, Glynn RJ. Relation of baseline high-sensitivity C-reactive protein level to cardiovascular outcomes with rosuvastatin in the Justification for Use of statins in Prevention: an Intervention Trial Evaluating Rosuvastatin (JUPITER). Am J Cardiol 2010;106(2): 204-209.
  15. de Lorgeril M, Salen P, Abramson J, et al. Cholesterol lowering, cardiovascular diseases, and the rosuvastatin-JUPITER controversy: a critical reappraisal. Arch Intern Med 2010;170(12): 1032-1036.
  16. Pearson TA, Mensah GA, Alexander RW, et al. Markers of inflammation and cardiovascular disease: application to clinical and public health practice: A statement for healthcare professionals from the Centers for Disease Control and Prevention and the American Heart Association. Circulation 2003;107(3): 499-511.
  17. Yeboah J, McClelland RL, Polonsky TS, et al. Comparison of novel risk markers for improvement in cardiovascular risk assessment in intermediate-risk individuals. JAMA 2012;308(8): 788-795.
  18. Emerging Risk Factors Collaboration, Kaptoge S, Di Angelantonio E, et al. C-reactive protein, fibrinogen, and cardiovascular disease prediction. NEJM 2012;367(14): 1310-1320.
  19. US Preventive Services Task Force. Using Nontraditional Risk Factors In Coronary Heart Disease Risk Assessment. Oct 2009. Accessed Nov 2013.
  20. Stone NJ, Robinson J, Lichtenstein AH, et al. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation
  21. Goff DC Jr, Lloyd-Jones DM, Bennett G, et al. 2013 ACC/AHA guideline on the assessment of cardiovascular risk: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation

Neurologic Complications In Infective Endocarditis: To Anticoagulate Or Not To Anticoagulate

July 10, 2015

By Shannon Chiu, MD

Peer Reviewed

The annual incidence of infective endocarditis (IE) is estimated at 3 to 9 cases per 100,000 persons in developed countries [1-2]. Neurologic complications are the most severe and frequent extracardiac complications of IE, affecting 15-20% of patients [3-4]. They consist of 1) ischemic infarction secondary to septic emboli from the valvular vegetation, which can eventually undergo hemorrhagic transformation; 2) focal vasculitis/cerebritis from septic emboli obstructing the vascular lumen, which can then develop into brain abscess or meningoencephalitis; and 3) mycotic aneurysm secondary to inflammation from septic emboli penetrating the vessel wall [5]. Among these complications, stroke is the most common and is the presenting feature in 50-75% of patients [6]. An ongoing debate among physicians concerns the appropriateness of anticoagulation in patients with IE, and how to balance the risk of thromboembolism against that of hemorrhagic transformation of stroke.

Specific risk factors have been associated with increased risk of symptomatic embolic events. Embolic risk is especially high within the first 2 weeks after diagnosis, decreasing in frequency after initiation of antibiotics [7]. Size, location and mobility of vegetations are key predictors; in fact, surgery may be indicated for prevention of embolism with involvement of anterior mitral leaflet, vegetation size >10mm, or increasing size despite appropriate antibiotics [5,8]. Additional risk factors for embolism in IE include advanced age and S. aureus infection. Importantly, S. aureus prosthetic valve endocarditis is known to be associated with higher overall mortality and severe neurologic complications such as hemorrhagic stroke [3,9-10]. Mechanisms for intracranial hemorrhage (ICH) in patients with IE include hemorrhagic transformation (HT) of ischemic infarct, rupture of mycotic aneurysms, or erosion of septic arteritic vessels [11].

Currently, evidence regarding anticoagulants stems primarily from observational studies. One of the arguments against anticoagulation in IE is the fear of early ICH and HT of ischemic stroke. In Tornos et al.’s retrospective observational series of 56 patients with native and prosthetic valve S. aureus IE, mortality was higher in prosthetic valve IE than in native valve IE (p=0.02; odds ratio [OR], 4.23; 95% confidence interval [CI], 1.15-16.25) [12]. The authors inferred that part of this difference stemmed from the deleterious effect of anticoagulation leading to lethal neurologic damage, as 90% of patients with prosthetic valve IE due to S. aureus were receiving oral anticoagulant treatment on admission (whereas no patient with native valve IE due to S. aureus was receiving such treatment). Meanwhile, in Heiro et al.’s retrospective study, a sub-analysis of 32 patients with S. aureus IE showed that 57% of patients receiving anticoagulant therapy died within 3 months of admission vs. 20% of those not receiving anticoagulant therapy, though the difference was not statistically significant (p=0.1) [9]. Garcia-Cabrera et al. conducted a retrospective analysis of 1,345 cases of left-sided IE and likewise found that hemorrhagic complications were significantly associated with anticoagulant therapy, which was used primarily in patients with mechanical valves (hazard ratio [HR] 2.71, 95% CI 1.54-4.76, p=0.001) [13]. On this basis, these authors recommended stopping anticoagulants as soon as a diagnosis of IE is suspected, at least until past the septic phase of the disease. Despite these reported associations of poor outcome in S. aureus IE and detrimental effects of anticoagulant therapy, these results arose from nonrandomized retrospective studies without matched cohorts. Moreover, Tornos et al.’s study was primarily designed to compare native valve with prosthetic valve IE patients, and the sample size of those receiving anticoagulation was small (19 out of 56) [12]. Similarly, Heiro et al.’s study was of limited statistical power, as only 2 of the 4 patients with lethal S. aureus IE actually died of hemorrhagic conditions while taking anticoagulant therapy.

On the opposing end, more recent prospective studies show no significant association between anticoagulation and increased risk of hemorrhagic complications, suggesting that the risk of ICH due to anticoagulation after IE-related stroke is overestimated. Rasmussen et al. conducted a prospective cohort study of 175 S. aureus IE patients, of whom 70 (40%, 95% CI 33-47%) experienced major cerebral events during the course of the disease [14]. Stroke was the most common complication (34%, 95% CI 27-41%), but the incidence of cerebral hemorrhage was low (3%, 95% CI 0.5-6%). None of the patients who experienced cerebral hemorrhage were receiving anticoagulant treatment. In fact, Rasmussen et al. found that patients receiving anticoagulation were less likely to have experienced a major cerebral event at the time of admission compared to those not receiving such treatment (15% vs. 37%, p=0.009). The indication for anticoagulation for the majority of patients in this study was a prosthetic heart valve. Anticoagulation at the time of admission was associated with a significant reduction in the number of major cerebral events in patients with native valve IE (0 vs. 39%, p=0.008); however, this was not evident in those with prosthetic valve IE. The in-hospital mortality rate was 23% (95% CI 17-29%), with no significant difference between patients with or without anticoagulant therapy.

An added complication to the picture is the decision for cardiac surgery in patients with IE who suffer a neurologic event. Except for clinically severe ICH, neurologic complications are not a contraindication to surgical treatment [5]. The decision to perform cardiopulmonary bypass remains controversial, as the surgery can cause or aggravate cerebral damage in several ways, such as ICH related to heparinization during the procedure and possible hemodynamic worsening of the ischemic infarction (e.g., additional embolism, hypoperfusion) [5,15]. The timing of surgery is also hotly debated, and the evidence supporting surgical intervention is of limited quality, based primarily on observational studies. When needed, however, cardiac surgery can be performed promptly after a silent cerebral embolism or transient ischemic attack, but it must be postponed for at least 1 month following ICH [8].

Despite the controversy over anticoagulant therapy, recommendations regarding antiplatelet therapy are more clear-cut: antiplatelets are not recommended for patients with IE. In a double-blind, placebo-controlled trial comparing aspirin 325 mg with placebo for 4 weeks in 115 IE patients, there was no significant decrease in the incidence of embolic events (OR 1.62, 95% CI 0.68-3.86) [16]. In fact, there was a trend toward more bleeding in the aspirin group (OR 1.92, 95% CI 0.76-4.86), and aspirin had no effect on vegetation size. While observational studies report conflicting findings regarding chronic antiplatelet treatment started before IE, in terms of the risks of death and embolic events, the current available evidence suggests that antiplatelet therapy is not indicated in IE [17-19]. Patients on antiplatelet therapy for other indications may continue taking it in the absence of major bleeding.

So where does this leave us? According to the most recent European Society of Cardiology guidelines, there is no indication to start anticoagulation in patients with IE [8]. For those already receiving anticoagulant therapy in whom IE is complicated by ischemic (non-hemorrhagic) stroke, the oral anticoagulant should be replaced by unfractionated heparin for 2 weeks. For those with an ICH complication, all anticoagulation should be stopped, except in those with prosthetic valve IE, in which case the recommendation is to reinitiate unfractionated heparin “as soon as possible” (no specific time-frame is given in the guidelines). Critically, the European Society of Cardiology guidelines acknowledge the low level of evidence supporting these recommendations.
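The guideline branches above can be laid out as a small decision sketch. This is illustrative only (the function, its inputs, and the fallback case are assumptions layered on the recommendations summarized here), emphatically not clinical advice:

```python
def ie_anticoagulation_plan(on_anticoagulant, complication=None, prosthetic_valve=False):
    """Sketch of the ESC recommendations summarized above (illustrative
    only, not clinical advice). `complication` may be None,
    "ischemic_stroke", or "ich" (intracranial hemorrhage)."""
    if not on_anticoagulant:
        return "do not start anticoagulation"
    if complication == "ischemic_stroke":
        return "switch oral anticoagulant to unfractionated heparin for 2 weeks"
    if complication == "ich":
        if prosthetic_valve:
            return "stop anticoagulation; reinitiate unfractionated heparin as soon as possible"
        return "stop all anticoagulation"
    # Assumption: with no cerebral complication, the guideline makes no change
    return "no change recommended by the guideline"
```

Even written out this plainly, the key branch ("as soon as possible" for prosthetic valve IE after ICH) carries no specified time-frame, which is exactly where the guideline’s low level of evidence shows.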

Anticoagulation is undoubtedly a double-edged sword. Whenever cerebrovascular complications of IE are suspected, there should be low threshold to perform diagnostic brain imaging to rule out cerebral hemorrhage, which would definitively justify discontinuation of anticoagulation and likely postpone planned cardiac surgery. Repeat echocardiography and neuroimaging play an important role in management of IE patients. At this time, the lack of robust information on anticoagulant therapy in IE stresses the need for more large randomized controlled trials.

Dr. Shannon Chiu is a 2nd year resident at NYU Langone Medical Center

Peer Reviewed by Albert Jung, MD, Internal Medicine, NYU Langone Medical Center

Image courtesy of Wikimedia Commons



  1. Correa de Sa DD, Tleyjeh IM, Anavekar NS, et al.; Epidemiological trends of infective endocarditis: a population-based study in Olmsted County, Minnesota. Mayo Clin Proc. 2010;85:422-426. 
  2. Duval X, Delahaye F, Alla F, et al.; Temporal trends in infective endocarditis in the context of prophylaxis guideline modifications: three successive population-based surveys. J Am Coll Cardiol. 2012;59:1968-1976.
  3. Thuny F, Avierinos JF, Tribouilloy C, et al.; Impact of cerebrovascular complications on mortality and neurologic outcome during infective endocarditis: a prospective multicentre study. Eur Heart J. 2007;28:1155-1161. 
  4. Sonneville R, Mirabel M, Hajage D, et al.; Neurologic complications and outcomes of infective endocarditis in critically ill patients: the ENDOcardite en REAnimation prospective multicenter study. Crit Care Med. 2011;39:1474-1481.
  5. Ferro JM, Fonesca C. Infective endocarditis. Handb Clin Neurol, Neurologic Aspects of Systemic Disease Part I. 2014;119:75-91.
  6. Sila C. Anticoagulation should not be used in most patients with stroke with infective endocarditis. Stroke. 2011;42:1797-1798.
  7. Snygg-Martin U, Gustafsson L, Rosengren L, et al.; Cerebrovascular complications in patients with left-sided infective endocarditis are common: a prospective study using magnetic resonance imaging and neurochemical brain damage markers. Clin Infect Dis. 2008;47:23-30.
  8. Habib G, Hoen B, Tornos P, et al.; The task force on the prevention, diagnosis, and treatment of infective endocarditis of the European Society of Cardiology (ESC). Eur Heart J. 2009;30:2369-2413.
  9. Heiro M, Nikoskelainen J, Engblom E, et al.; Neurologic manifestations of infective endocarditis: A 17-Year Experience in a Teaching Hospital in Finland. Arch Intern Med. 2000;160:2781-2787.
  10. Di Salvo G, Habib G, Pergola V, et al.; Echocardiography predicts embolic events in infective endocarditis. J Am Coll Cardiol. 2001;37:1069-1076.
  11. Molina CA, Selim MH. Anticoagulation in patients with stroke with infective endocarditis: the sword of Damocles. Stroke. 2011;42:1799-1800.
  12. Tornos P, Almirante B, Mirabet S, et al.; Infective endocarditis due to Staphylococcus aureus: deleterious effect of anticoagulant therapy. Arch Intern Med. 1999;159:473-475.
  13. Garcia-Cabrera E, Fernandez-Hidalgo N, Almirante B, et al.; Neurological complications of infective endocarditis: risk factors, outcome, and impact of cardiac surgery: a multicenter observational study. Circulation. 2013;127(23):2272-84.
  14. Rasmussen RV, Snygg-Martin U, Olaison L, et al.; Major cerebral events in Staphylococcus aureus infective endocarditis: is anticoagulant therapy safe? Cardiology. 2009;114:284-291.
  15. Goldstein LB, Husseini NE. Neurology and cardiology: points of contact. Rev Esp Cardiol. 2011;64(4):319-27.
  16. Chan KL, Dumesnil JG, Cujec B, et al.; A randomized trial of aspirin on the risk of embolic events in patients with infective endocarditis. J Am Coll Cardiol. 2003;42:775-780.
  17. Anavekar NS, Tleyjeh IM, Anavekar NS, et al.; Impact of prior antiplatelet therapy on risk of embolism in infective endocarditis. Clin Infect Dis. 2007;44:1180-1186.
  18. Pepin J, Tremblay V, Bechard D, et al.; Chronic antiplatelet therapy and mortality among patients with infective endocarditis. Clin Microbiol Infect. 2009;15:193-199.
  19. Chan KL, Tam J, Dumesnil JG, et al.; Effect of long-term aspirin use on embolic events in infective endocarditis. Clin Infect Dis. 2008;46:37-41.