From the Archives: Morgellons: Real Disease or Delusion Turned Internet Meme?

August 12, 2016

Please enjoy this post from the archives, dated October 3, 2012

By Robert Mazgaj

Faculty Peer Reviewed

Morgellons disease is an “unexplained dermopathy” characterized by fibers emerging from skin lesions and associated with various cutaneous sensations.[1] Inspired by a curious medical condition reported by a 17th-century English physician, Morgellons was actually named in 2002 by Mary Leitao, a layperson, to describe the mysterious set of symptoms reportedly suffered by her then 2-year-old son.[2,3] Leitao then launched the not-for-profit Morgellons Research Foundation (MRF) along with a (no longer active) website.[3] The MRF successfully petitioned members of Congress as well as the public to convince the Centers for Disease Control and Prevention (CDC) to perform an epidemiologic study of Morgellons disease. In January 2012, the CDC published the findings from this investigation in the peer-reviewed online journal PLoS ONE.[1]

The CDC study enrolled members of Kaiser Permanente of Northern California (KPNC), a managed care consortium of about 3.2 million members.[1] For the purposes of the study, the CDC defined a case-patient as any patient who received care at KPNC between July 1, 2006, and June 30, 2008, and reported fibers or similar forms of solid material such as “threads, specks, dots, fuzzballs and granules,” plus at least one of the following:

1. A skin lesion such as a rash, wound, ulcer or nodule.

2. A disturbing skin symptom such as pruritus, feeling that something is crawling on top of or under the skin, or stinging, biting, or a pins and needles sensation.

The CDC identified a total of 115 case-patients, yielding a prevalence of 3.65 per 100,000 enrollees. These case-patients were mostly female (77%) and white (77%), with a median age of approximately 55 years. More than half of all case-patients reported additional symptoms, including fatigue of at least 6 months’ duration and musculoskeletal complaints. More than half also rated their general health status as fair or poor on a web-based survey. The case-patients reported using a variety of over-the-counter, prescription, and alternative therapies to relieve their dermatologic complaints, but no treatment was reported to be regularly effective.

Of the 115 identified case-patients, 41 received comprehensive evaluations, including clinical examinations by both internists and dermatologists; histopathologic, immunohistochemical, molecular, and chemical analysis of skin biopsies; molecular and spectral analysis of collected fibers and other material; neurocognitive and neuropsychiatric testing; extensive blood tests; chest radiographs; urinalysis and culture; and drug testing of collected hair samples. These clinical evaluations yielded the following results:

1. Skin lesions were most consistent with “arthropod bites or chronic excoriations.”

2. No parasites or mycobacteria were found in skin biopsies.

3. Collected fibers and other materials were mostly of textile origin.

4. Fifty-nine percent of case-patients had cognitive deficits.

5. Fifty percent tested positive for drugs, including amphetamines, barbiturates, benzodiazepines, opiates, cannabinoids, and propoxyphene.

6. All chest radiographs were normal.

Based on these results, the authors of the study concluded that, although Morgellons is associated with a significant reduction in quality of life, no causative medical condition or infectious agent was found in the case-patients. They likened this “unexplained dermopathy” to delusional infestation, a well-characterized psychiatric disorder responsive to antipsychotics.

One of the most intriguing facts gleaned from the study was that more than 75% of case-patients reported onset of their symptoms after 2002, the year in which Mary Leitao launched the MRF and its website.[1] This finding raises the question of whether the Internet helped spread a delusion to individuals with pre-existing psychiatric morbidities. In fact, even before the CDC study’s results were released, several articles suggesting this very possibility were published. Although these suspicions may never be proven, they raise the provocative issue of the considerable influence of Internet memes on beliefs in modern society.[2] The term meme, coined by the British evolutionary biologist Richard Dawkins in his 1976 book The Selfish Gene,[3] refers to an idea or concept that is essentially the cultural analogue of a gene. That is, a meme can be spread from generation to generation, change due to imperfect copying, and be selected for or against within a given culture. Examples of memes include musical pieces, religious beliefs, and one-liners from movies.

A study published in Psychosomatics offered an explanation of how a meme such as Morgellons disease came to be rapidly accepted by a relatively large online community.[4] First, simply being able to attach a specific label to one’s own perceptual abnormalities provides significant, albeit temporary, relief of anxiety. Second, interacting with others supposedly suffering from the same ailment breaks one’s social isolation and provides a sense of legitimization and comfort.[4] In the marketplace of ideas that is the Internet, this confers on the Morgellons meme a significant advantage over the competing meme of delusional parasitosis, a much more stigmatizing label. Thus the psychological appeal of an idea, and not necessarily its validity, can determine its success as an Internet meme. Finally, it becomes apparent that the conventional definition of a delusion as a fixed, false belief not held by one’s culture may be challenged by the rise of the Internet as an unprecedented platform for the exchange and acceptance of memes, and quite possibly of delusions as well.[5]

By Robert Mazgaj, 2nd year medical student at NYU School of Medicine

Peer reviewed by Mitchell Charap, MD, Medicine (GIM Div), NYU Langone Medical Center

Image courtesy of Wikimedia Commons


[1] Pearson ML, Selby JV, Katz KA, et al. Clinical, epidemiologic, histopathologic and molecular features of an unexplained dermopathy. PLoS ONE. 2012;7(1):e29908. Published January 25, 2012. Accessed January 27, 2012.

[2] Lustig A, Mackay S, Strauss J. Morgellons disease as Internet meme. Psychosomatics. 2009;50(1):90.

[3] Dawkins R. The Selfish Gene. Oxford, England: Oxford University Press; 1976.

[4] Freudenreich O, Kontos N, Tranulis C, Cather C. Morgellons disease, or antipsychotic-responsive delusional parasitosis, in an HIV patient: beliefs in the age of the Internet. Psychosomatics. 2010;51(6):453-457.

[5] Vila-Rodriguez F, Macewan BG. Delusional parasitosis facilitated by web-based dissemination. Am J Psychiatry. 2008;165(12):1612.

From The Archives – Fractional Excretion of Sodium (FENa): Diagnostic Godsend or Gimmick?

July 22, 2016

Please enjoy this post from the archives, dated September 5, 2012

By Jon-Emile S Kenny, MD

Faculty Peer Reviewed

A 62-year-old man with a history of hypertension, diastolic dysfunction, and chronic kidney disease is admitted 4 days after beginning outpatient treatment of community-acquired pneumonia with cefpodoxime and azithromycin; he had been intermittently vomiting for two days, but proudly states that he has been keeping all of his home medications down, including hydrochlorothiazide. The morning after his admission, he was noted to have a serum creatinine of 3.4 mg/dL (from a baseline of 1.7 mg/dL). His lung fields were clear to auscultation, and he had no jugular venous distention. His urinalysis revealed trace WBC casts. My attending tells me to ‘get some urine lytes and figure out if this is pre-renal or a renal problem…do a FENa…if it’s less than 1%, start fluid resuscitating him.’ I nod my head and send the intern for the patient’s urine unquestioningly.

After rounds, I wait at the computer for the results and ponder this ‘fractional excretion of sodium.’

What is a normal FENa? In plain-speak, the FENa, or fractional excretion of sodium, is the fraction of all the sodium filtered at the glomerulus that is excreted in the urine. It seems the kidneys are exceptionally sodium avid, as we are taught throughout medical school that a FENa above 2% is equivalent to vast tubular death and dysfunction. But even in such a renal apocalypse, 98% of sodium is still being reabsorbed, which is rather phenomenal. I wonder what my FENa is, and to figure it out, I do some quick mathematics.

If my GFR is normal (125 cc/min, or 180 L/day) and my serum sodium is 140 mEq/L, then I am filtering 25,200 mEq of sodium through my kidneys per day (140 mEq/L x 180 L/day). I am in sodium balance, as I am neither sodium depleted nor overloaded (I hope); therefore, the 150 mEq (3.5 grams) of sodium per day that I ingest on a typical American diet is being excreted. Hence my ‘resting FENa’ is 150 mEq / 25,200 mEq, or 0.6%! Am I pre-renal, I wonder? Do I need a bolus of saline? I looked to the literature.
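The quick mathematics above can be written out as a few lines of code (a back-of-the-envelope check, not a clinical tool), using the same assumed values: a GFR of 125 cc/min, a serum sodium of 140 mEq/L, and a typical dietary intake of 150 mEq/day at steady state.

```python
# 'Resting FENa' for a healthy adult in sodium balance:
# sodium excreted per day divided by sodium filtered per day.

GFR_L_PER_DAY = 125 / 1000 * 60 * 24   # 125 cc/min -> 180 L/day
SERUM_NA = 140                         # mEq/L
DAILY_NA_INTAKE = 150                  # mEq/day; excreted = ingested at steady state

filtered_na = SERUM_NA * GFR_L_PER_DAY               # mEq filtered per day
resting_fena = DAILY_NA_INTAKE / filtered_na * 100   # percent

print(f"Filtered sodium: {filtered_na:.0f} mEq/day")  # 25200 mEq/day
print(f"Resting FENa: {resting_fena:.1f}%")           # ~0.6%
```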

How were FENa cut-offs derived? In two classic studies [1, 2], the FENa was elevated to 3.5% and 7%, respectively, in patients with acute, oliguric kidney injury secondary to acute tubular necrosis. By contrast, these studies revealed fractional sodium excretions of 0.4% in patients with acute, oliguric kidney injury secondary to volume depletion. Notably, the patients in these studies had relatively normal baseline creatinine (< 1.4 mg/dL was an inclusion criterion) [1], and oliguria was defined as < 400 mL of urine per day in the pre-renal azotemia group [1]. Because the FENa is dependent on functioning nephron mass, and the patients in these studies used to define FENa had impaired glomerular filtration (i.e., oliguria), a relatively high FENa threshold (1%, compared to a ‘normal’ resting FENa) is used to define pre-renal azotemia. I was interested to note that these parameters only help to distinguish pre-renal kidney injury from acute tubular necrosis, and not other renal insults such as acute interstitial nephritis. Further, the patients in the study did not have chronic kidney disease (a baseline creatinine > 1.6 mg/dL was an exclusion criterion) [1], as my patient does, and the urinary and serum electrolytes were collected simultaneously, unlike the collections from my patient.

The urinary electrolytes appear in the computer, and I use the serum electrolyte panel drawn from the patient earlier in the morning to make the calculations. My patient’s FENa is 1.9%. I guess I should hold off on the fluids. Or could he still be volume depleted?
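For reference, the bedside calculation uses the standard formula FENa = (urine Na x serum creatinine) / (serum Na x urine creatinine) x 100. The patient’s actual urine values are not reported in the vignette, so the numbers below are hypothetical, chosen only to reproduce the stated FENa of 1.9%.

```python
def fena(urine_na, serum_na, urine_cr, serum_cr):
    """Fractional excretion of sodium, in percent:
    FENa = (U_Na x P_Cr) / (P_Na x U_Cr) x 100
    """
    return (urine_na * serum_cr) / (serum_na * urine_cr) * 100

# Hypothetical labs (urine Na 40 mEq/L, urine Cr 51 mg/dL) alongside the
# vignette's serum values (Na 140 mEq/L, Cr 3.4 mg/dL):
print(round(fena(urine_na=40, serum_na=140, urine_cr=51, serum_cr=3.4), 1))
```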

Do patients with chronic renal insufficiency have a higher FENa? I decide to use the same reasoning that I used for myself on my patient. I trend his serum electrolyte panel for the last six months and see that his baseline creatinine is 1.7 mg/dL. I plug this into the MDRD equation and find that his baseline glomerular filtration rate can be estimated at 32 cc/min/1.73 m2, or 46.1 L/day. I assume that he eats a typical American diet of 150 mEq of sodium per day and that he is in sodium balance; his ‘resting’ FENa is therefore 150 mEq / (46.1 L/day x 140 mEq/L), or 2.3% [3].
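The same steady-state arithmetic, applied to the patient’s reduced nephron mass (an estimated clearance of 32 cc/min), shows how the ‘normal’ resting FENa shifts upward in chronic kidney disease:

```python
# 'Resting FENa' for the patient with CKD, assuming the same 150 mEq/day
# dietary intake and sodium balance as in the healthy-adult calculation.

clearance_L_per_day = 32 / 1000 * 60 * 24   # 32 cc/min -> ~46.1 L/day
SERUM_NA = 140                              # mEq/L
DAILY_NA_INTAKE = 150                       # mEq/day

resting_fena = DAILY_NA_INTAKE / (SERUM_NA * clearance_L_per_day) * 100
print(f"{resting_fena:.1f}%")  # ~2.3%: above the classic 1% cutoff at baseline
```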

Are there other patient populations with higher FENa quotients? I continue my reading and find that there are other clinical scenarios, in addition to chronic renal insufficiency, in which a relatively high FENa can be seen with volume depletion—vomiting and diuretic therapy [4, 5]; both of which my patient has!

When volume contraction is the result of gastric loss of hydrogen chloride (HCl), a metabolic alkalosis ensues. The kidneys compensate by excreting sodium bicarbonate; consequently, distal delivery of sodium is increased and the FENa quotient rises as well. One mechanism by which the kidneys increase bicarbonate excretion is the chloride-bicarbonate exchanger in the distal collecting duct [6]. In clinical scenarios where loss of chloride is prominent (e.g. loss of gastric contents), a low fractional excretion of chloride may be a more accurate diagnostic tool [7], as more chloride is reabsorbed distally in exchange for bicarbonate.

Inhibition of the Na-K-2Cl cotransporter in the thick ascending limb of the loop of Henle by loop diuretics, or of the Na-Cl cotransporter in the distal convoluted tubule by thiazide diuretics, will increase distal sodium delivery and raise the FENa quotient [4].

Is there another test to be used in the setting of diuretics? In 2002, Carvounis and colleagues attempted to circumvent problems with distal sodium delivery and diuretics by utilizing the fractional excretion of urea (FeUrea) as a tool to distinguish pre-renal azotemia from acute tubular necrosis [8]. They found that patients with pre-renal etiologies on diuretics had a high FENa but a low FeUrea (less than 35%); by contrast, those with ATN had an average FeUrea of 55 to 63%. These results, however, have not been consistent. Pepin et al. found, in contradistinction, that FeUrea is a rather poor tool for the detection of pre-renal azotemia compared to the FENa in patients administered diuretics (sensitivity 79% and specificity 33%) [9]. Biological reasons for the difference between these studies may lie in changes in urea reabsorption with age and in response to cytokines [10]. The patients in the Pepin study were significantly older and more likely to have had sepsis. Unfortunately, this is the typical patient population encountered in the hospital. Importantly, both studies had similar exclusion criteria, including: renal transplant, malignant hypertension, contrast examination less than 48 hours before the onset of AKI, rhabdomyolysis, obstructive uropathy, adrenal insufficiency, and acute glomerulonephritis. Neither study provided information on the timing of urine collection in relation to diuretic administration. This may be important, as diuretics have variable natriuretic pharmacokinetics, and this phenomenon may be age-dependent [11].

What about time course? It is important for clinicians to realize that a fractional excretion is only a snapshot of kidney function. The pathophysiology of pre-renal azotemia and acute tubular necrosis often lies on a spectrum. Indeed, early obstructive uropathy, acute interstitial nephritis, and glomerulonephritis have been found to produce low FENa quotients, and this may be due to preserved early tubule function and sodium avidity.

Can a patient have a low FENa and not be in a volume-depleted state? Given the aforementioned caveats to FENa, I wondered about the converse of my patient. That is, can one be euvolemic or even hypervolemic and have a low FENa? In diseases like cirrhosis and congestive heart failure, neurohumoral responses are activated. Significantly, the renin-angiotensin system is inappropriately active, leading to sodium avidity throughout the nephron, especially in the proximal tubule under the influence of angiotensin II. This will lower the FENa even if the patient has aminoglycoside-induced or other causes of ATN [4]. Interestingly, acute kidney injury from rhabdomyolysis or contrast nephropathy typically produces a low FENa as well. The reasoning may involve myoglobin’s ability to scavenge nitric oxide, resulting in profound renal vasoconstriction [12].

How do I put this together? The clinician must appreciate that a FENa of 1% can be used as a cutoff to identify salt-avid kidneys only in patients with preserved baseline GFR. As above, patients with chronic kidney disease will have a progressively higher FENa as a function of decreasing nephron mass. Once the clinician determines a FENa cutoff, it is imperative to realize that a low value can only be interpreted as indicating ‘sodium-avid’ kidneys. This implies that the renin-angiotensin-aldosterone axis is intact and acting upon responsive tubular cells, which occurs in many clinical states in addition to volume depletion (e.g. cirrhosis, CHF, early nephritic syndrome, early cast nephropathy, and early obstructive nephropathy). By contrast, a relatively high FENa implies increased distal sodium delivery, which may occur when the aforementioned hormonal axis is ablated, the tubular cells are damaged (e.g. ATN), or in the event of natriuresis (e.g. diuretics, ketonuria, bicarbonaturia). The clinical situations in which pre-renal states have a high FENa and ATN is accompanied by a low FENa are numerous, and the sensitivities and specificities of the test are not well established in these settings.

I returned to my patient’s bedside and performed a thorough history and physical examination. I measured his supine vital signs, and then stood him up. After one minute, I measured his vital signs again. His heart rate increased from 63 to 99; his systolic blood pressure dropped by 20 mmHg. He had dry axillae and his mucous membranes were parched [13]. His lung examination was clear. He looked up at me through sunken eyes and said “Doc…I’m thirsty.” I walked down the hall to the nursing station and asked the nurse to give him 500 cc of normal saline. There’s no substitute for good clinical skills, I thought.

Dr. Jon-Emile S. Kenny, Internal Medicine, NYU Langone Medical Center

Peer reviewed by David Goldfarb, MD, Nephrology Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


(1) Miller et al. Urinary diagnostic indices in acute renal failure: a prospective study. Ann Intern Med. 1978;89:47-50.

(2) Espinel CH, Gregory AW. Differential diagnosis of acute renal failure. Clin Nephrol. 1980;13:73-77.

(3) Nguyen et al. Misapplications of commonly used kidney equations: renal physiology in practice. Clin J Am Soc Nephrol. 2009;4:528-534.

(4) Steiner RW. Interpreting the fractional excretion of sodium. Am J Med. 1984;77:699.

(5) Harrington JT, Cohen JJ. Measurement of urinary electrolytes: indications and limitations. N Engl J Med. 1975;293:1241-1243.

(6) Luke et al. New roles for chloride in renal physiology and pathophysiology. Trans Am Clin Climatol Assoc. 1991;102:84-95.

(7) Ziyadeh FN, Badr KF. Fractional excretion of chloride in prerenal azotemia. Arch Intern Med. 1985;145:1929.

(8) Carvounis et al. Significance of the fractional excretion of urea in the differential diagnosis of acute renal failure. Kidney Int. 2002;62:2223-2229.

(9) Pépin et al. Diagnostic performance of fractional excretion of urea and fractional excretion of sodium in the evaluations of patients with acute kidney injury with or without diuretic treatment. Am J Kidney Dis. 2007;50:566.

(10) Schönermarck et al. Diagnostic performance of fractional excretion of urea and sodium in acute kidney injury. Am J Kidney Dis. 2008;51:870.

(11) Musso et al. Fractional excretion of K, Na and Cl following furosemide infusion in healthy, young and very old people. Int Urol Nephrol. 2009.

(12) Bosch et al. Rhabdomyolysis and acute kidney injury. N Engl J Med. 2009;361(1):62-72.

(13) McGee et al. Is this patient hypovolemic? JAMA. 1999;281(11):1022.

Does Stress Cause Stress Ulcers? The Etiology and Pathophysiology of Stress Ulcers

July 14, 2016

Please enjoy this post from the archives, dated August 22, 2012

Sara-Megumi Naylor, MD

Faculty Peer Reviewed

When Warren and Marshall were awarded the Nobel Prize in Physiology or Medicine in 2005 for their work on Helicobacter pylori and peptic ulcer disease [1], a long-standing controversy concerning the major cause of peptic ulcers was settled. They are not due to the reasons—spicy food, excessive coffee consumption, poor sleep, a stressful lifestyle—that we have heard from relatives and perhaps believed over the years. It is now well accepted that the leading causes of peptic ulcers are infection with the H. pylori bacterium and the use of non-steroidal anti-inflammatory drugs (NSAIDs) [2]. But what causes the stress ulcers that we see in patients who have experienced prolonged stays in the intensive care unit (ICU)? Why are they called “stress ulcers”? And why do we place at-risk individuals on prophylactic acid-inhibiting therapies?

The “stress” of stress ulcers specifically refers to the physiologic stress of serious illness [3]. Stress ulcers lie on the continuum of stress-related mucosal disease (SRMD), a term used to describe the range of changes seen in the gastrointestinal (GI) mucosa of individuals who are critically ill [2]. On one end of the spectrum are patients with stress-related injury, which consists of diffuse, superficial, small erosions that do not extend into the submucosa and thus do not reach submucosal blood vessels or lead to hemodynamically significant bleeding. On the other end of the spectrum are patients with true stress ulcers, which are discrete, deeper lesions that can extend into the submucosa, reaching significant blood vessels that, when compromised, can result in hemodynamically significant GI bleeding [2]. Other terms commonly used for conditions related to, or synonymous with, SRMD include stress erosions, stress gastritis, hemorrhagic gastritis, and erosive gastritis [4].

The mucosa of the GI tract consists of a single layer of cells approximately 0.1 mm thick [5]. This seemingly frail layer protects our body from the external environment and is maintained by multiple defense mechanisms. These include: (1) pre-epithelial factors like the mucus-bicarbonate-phospholipid barrier; (2) epithelial factors, such as the hydrophobic layer of simple columnar epithelial cells interconnected by tight junctions, and the continuous, well-coordinated cell renewal that maintains the structural integrity of the mucosa; and (3) post-epithelial factors, such as the continuous blood flow that delivers oxygen and nutrients while simultaneously removing toxic substances, an endothelial barrier that produces nitric oxide and prostacyclin to oppose the damaging effects of vasoconstrictors and inflammation in the microcirculation, and sensory innervation that helps regulate mucosal blood flow and gastric motility [6]. These critical mechanisms exist under normal conditions to balance out injurious factors and prevent the development of ulcers and their complications.

Contrary to popular belief, as reflected in the widespread practice of stress ulcer prophylaxis, significant bleeding is rare in patients who are under severe “physiologic stress.” Within 24 hours of admission to the ICU, 75-100% of critically ill patients demonstrate evidence of SRMD [3]. However, only 25% of these patients will have clinically apparent bleeding [5], and only about 1-4% will have clinically significant bleeding [7], which is defined as overt bleeding (i.e., hematemesis, gross blood or “coffee-ground” material in the nasogastric aspirate, hematochezia, or melena) complicated by one of the following within 24 hours after the onset of bleeding: (1) a spontaneous decrease of more than 20 mm Hg in systolic blood pressure, (2) an increase of more than 20 beats per minute in heart rate, or a decrease of more than 10 mm Hg in systolic blood pressure measured sitting up, or (3) a decrease of more than 2 g/dL in hemoglobin level and the need for subsequent blood transfusion, after which the hemoglobin level does not increase by a value defined as the number of units of blood transfused minus 2 g/dL [4, 7]. Those at highest risk for clinically significant bleeding are patients on mechanical ventilation for greater than 48 hours and those with a coagulopathy, defined as an INR >1.5 or a platelet count <50,000/uL [4, 7]. Providing stress ulcer prophylaxis to patients at highest risk for bleeding is evidence-based and of utmost importance. Guidelines for starting acid-inhibiting therapies in the ICU focus on these groups of patients.
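The definition of clinically significant bleeding above can be sketched as a simple predicate. The function and its parameter names are illustrative rather than drawn from any real system, and the transfusion-response nuance in criterion (3) (hemoglobin failing to rise appropriately after transfusion) is simplified.

```python
# Sketch of the "clinically significant bleeding" definition: overt bleeding
# plus at least one hemodynamic or transfusion criterion within 24 hours.

def clinically_significant_bleed(overt_bleeding: bool,
                                 spontaneous_sbp_drop_mmhg: float,
                                 hr_increase_bpm: float,
                                 orthostatic_sbp_drop_mmhg: float,
                                 hgb_drop_g_dl: float,
                                 transfused: bool) -> bool:
    if not overt_bleeding:                       # overt bleeding is a prerequisite
        return False
    return (spontaneous_sbp_drop_mmhg > 20       # criterion (1)
            or hr_increase_bpm > 20              # criterion (2), heart rate
            or orthostatic_sbp_drop_mmhg > 10    # criterion (2), orthostatic SBP
            or (hgb_drop_g_dl > 2 and transfused))  # criterion (3), simplified
```

For example, overt bleeding with a spontaneous 25 mm Hg drop in systolic pressure would qualify, while the same hemodynamic changes without overt bleeding would not.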

How do stress ulcers form in these critically ill patients? While the association between severe physiologic stress and GI ulceration is well established, the mechanism by which stress ulcers form is multifactorial and still incompletely understood [4]. Given the numerous recent publications addressing the overuse of stress ulcer prophylaxis both inside and outside of the ICU [4, 8, 9, 10], it is crucial to understand the pathogenesis of stress ulcers, including the role of gastric acid, against which our prophylactic measures are directed.

The surprising truth about all ulcers (including stress ulcers) is that acid, even excessive acid, is not the primary cause of ulcers [2]. Instead, acid is only one of multiple pathogenic factors involved, and contributes to the persistence and worsening of already-formed ulcers [6]. The major cause of stress ulcers appears to be splanchnic hypoperfusion in critically ill patients [4, 5, 11]. Common stress-related responses are seen in patients who are seriously ill, including those with respiratory failure requiring mechanical ventilation, and/or those with coagulopathy, acute renal insufficiency, acute hepatic failure, sepsis, hypotension, and severe head or spinal cord injury [4]. These responses include sympathetic nervous system activation, increased catecholamine release, vasoconstriction, and secretion of pro-inflammatory cytokines [12]. These effects are initially beneficial because they shift blood away from the GI tract and toward critical organs such as the brain [11]. However, when they persist, they cause damage by breaking down gastric mucosal defenses, leading to injury and ulceration.

Decreased blood flow results in poor oxygen delivery and ischemic damage, which have structural and chemical consequences. Loss of the epithelial cell layer that separates the contents of the gastric lumen from the body’s interior milieu leads to increased permeability and back-diffusion of protons and pepsin, which further damages the mucosa [3, 4, 11]. Hypoperfusion also triggers the production of oxygen radicals and decreases the synthesis of gastro-protective prostaglandins, which can lead to an aggressive inflammatory response [3, 6]. Interestingly, reperfusion injury also plays a significant role in the pathogenesis of stress ulcers [4]. Once blood flow is restored to ischemic tissue, the sudden hyperemia brings in an influx of inflammatory cells and cytokines that result in even more cell death. Both ischemia and reperfusion cause and worsen gastric dysmotility through the effects of cytokines on the enteric nervous system [3, 13]. Poor motility aggravates the situation because the persistence of acidic material and other irritants increases the risk of ulcer formation and persistence [3].

As explained, the pathogenesis of stress ulcer formation is complex. While acid plays a role, acid alone does not cause stress ulcers. Physiologic stress leading to splanchnic hypoperfusion, ischemic and reperfusion damage, and a cascade of inflammatory responses are the key causes. In this sense, stress does cause stress ulcers, though the stress responsible is of significant severity. Understanding the pathogenesis of stress ulcers is critical to appreciating what occurs at the level of the GI mucosa in critically ill patients. Therefore, the next time you start stress ulcer prophylaxis on a patient, keep in mind all of the factors at play; in addition to providing acid suppression, typically with a proton pump inhibitor, consider also focusing your energy on improving the hemodynamics, gastric motility, and overall medical status of your patients to help prevent splanchnic hypoperfusion.

Dr. Sara-Megumi Naylor is a recent graduate of NYU School of Medicine

Peer reviewed by Michael Poles, MD, Section Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


1 The Nobel Prize in Physiology or Medicine 2005.

2 Spirt MJ. Stress-related mucosal disease: risk factors and prophylactic therapy. Clin Ther. 2004 Feb;26(2):197-213.

3 Fennerty MB. Pathophysiology of the upper gastrointestinal tract in the critically ill patient: rationale for the therapeutic benefits of acid suppression. Crit Care Med. 2002 Jun;30(6 Suppl):S351-5.

4 Quenot JP, Thiery N, Barbar S. When should stress ulcer prophylaxis be used in the ICU? Curr Opin Crit Care. 2009 Apr;15(2):139-43.

5 Marino P, Sutin K. The ICU Book. 3rd ed.

6 Laine L, Takeuchi K, Tarnawski A. Gastric mucosal defense and cytoprotection: bench to bedside. Gastroenterology. 2008 Jul;135(1):41-60. Epub 2008 Jun 10.

7 Cook DJ, Fuller HD, Guyatt GH, Marshall JC, Leasa D, Hall R, Winton TL, Rutledge F, Todd TJ, Roy P, et al. Risk factors for gastrointestinal bleeding in critically ill patients. Canadian Critical Care Trials Group. N Engl J Med. 1994 Feb 10;330(6):377-81.

8 Heidelbaugh JJ, Inadomi JM. Magnitude and economic impact of inappropriate use of stress ulcer prophylaxis in non-ICU hospitalized patients. Am J Gastroenterol. 2006 Oct;101(10):2200-5. Epub 2006 Sep 4.

9 Grube RR, May DB. Stress ulcer prophylaxis in hospitalized patients not in intensive care units. Am J Health Syst Pharm. 2007 Jul 1;64(13):1396-400.

10 Fohl AL, Regal RE. Proton pump inhibitor-associated pneumonia: Not a breath of fresh air after all? World J Gastrointest Pharmacol Ther. 2011 Jun 6;2(3):17-26.

11 Sesler JM. Stress-related mucosal disease in the intensive care unit: an update on prophylaxis. AACN Adv Crit Care. 2007 Apr-Jun;18(2):119-26; quiz 127-8.

12 Martindale RG. Contemporary strategies for the prevention of stress-related mucosal bleeding. Am J Health Syst Pharm. 2005 May 15;62(10 Suppl 2):S11-7.

13 Ritz MA, Fraser R, Tam W, Dent J. Impacts and patterns of disturbed gastrointestinal function in critically ill patients. Am J Gastroenterol. 2000 Nov;95(11):3044-52.

Primecuts – This Week In The Journals

July 11, 2016

By Amar Parikh, MD

Peer Reviewed

Just days after the United States celebrated its 240th birthday, the nation was devastated by the tragic deaths of two young black men and five Dallas police officers amidst the country’s ongoing struggle over race relations. Alton Sterling was shot to death in Baton Rouge, Louisiana during an encounter with two police officers, while Philando Castile was killed in Falcon Heights, Minnesota during a routine stop for a broken taillight. The grisly footage of both their deaths was widely shared on social media, spreading rapidly across the nation and sparking outrage over police brutality. During subsequent protests in Dallas, snipers killed five police officers, making it the deadliest assault on US law enforcement since the 9/11 terrorist attacks. These horrific events continue to expose deep-seated issues with racism, gun control, and justice in our country, and have ignited a heated debate on the state of race relations in America.

As we mourn those who lost their lives and contemplate the future of our embattled nation, we turn now to the medical journals to draw optimism from the most recent advances in the literature this week.

Is Ticagrelor More Effective Than Aspirin in the Prevention of Recurrent Stroke or Transient Ischemic Attack?

In patients who suffer an ischemic stroke or transient ischemic attack, the risk of recurrent ischemic events in the first 90 days afterwards is notably high, at approximately 10-15%. Patients are frequently placed on antiplatelet therapy such as aspirin for secondary stroke prevention; however, its efficacy is limited, and it is associated with an increased risk of hemorrhage. Ticagrelor (trade name Brilinta) is a potent P2Y12 receptor inhibitor similar in mechanism of action to clopidogrel; however, it has the advantage of not being limited by variable metabolic activation, a key drawback of clopidogrel. In the SOCRATES trial (Acute Stroke or Transient Ischemic Attack Treated with Aspirin or Ticagrelor and Patient Outcomes), investigators compared ticagrelor with aspirin for the prevention of recurrent cerebrovascular events over a period of 90 days after presentation with acute cerebral ischemia [1].

SOCRATES was a multicenter, randomized, double-blind trial in which 13,199 patients were assigned, within 24 hours of symptom onset, to ticagrelor or aspirin with appropriate loading and maintenance doses of each for 90 days. The primary end point was time from randomization to first occurrence of stroke (ischemic or hemorrhagic), myocardial infarction, or death. Primary end-point events occurred in 442 of the 6589 patients (6.7%) in the ticagrelor group and in 497 of the 6610 patients (7.5%) in the aspirin group (hazard ratio 0.89; 95% CI 0.78 to 1.01; P = 0.07). The authors concluded that ticagrelor did not reduce the recurrence of major cardiovascular events in the first 90 days compared to aspirin therapy. With regard to safety, similar rates of major bleeding occurred in the ticagrelor group (31 patients, 0.5%) and in the aspirin group (38 patients, 0.6%), demonstrating that the more potent P2Y12 inhibitor did not lead to an increased risk of bleeding. Interestingly, however, about a third of the enrolled patients were already taking aspirin at the time of initial presentation and subsequent randomization to the ticagrelor group, and were therefore briefly on short-term dual antiplatelet therapy. Subgroup analysis showed that rates of ischemic stroke were lower in these patients, though this difference was not statistically significant. It does, however, suggest a possible benefit of combination therapy with aspirin and ticagrelor that may warrant further study. Regardless, it is evident that the optimal antiplatelet agent for secondary stroke prevention remains an unsettled question.
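As a quick check on the arithmetic above, the crude event rates and their ratio can be recomputed from the reported counts. The published hazard ratio comes from a time-to-event analysis, so the simple risk ratio below is only a rough approximation, not the trial's HR:

```python
# Recompute the crude event rates reported in SOCRATES.
ticagrelor_events, ticagrelor_n = 442, 6589
aspirin_events, aspirin_n = 497, 6610

rate_ticagrelor = ticagrelor_events / ticagrelor_n  # ~6.7%
rate_aspirin = aspirin_events / aspirin_n           # ~7.5%

# Crude risk ratio; close to (but not the same as) the Cox-model HR of 0.89.
risk_ratio = rate_ticagrelor / rate_aspirin

print(f"ticagrelor {rate_ticagrelor:.1%}, aspirin {rate_aspirin:.1%}, "
      f"risk ratio {risk_ratio:.2f}")
```

That the crude ratio and the reported hazard ratio nearly coincide here simply reflects the short follow-up and similar censoring in the two arms.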

How Do Palliative Care Interventions in the Patient with Chronic Critical Illness Impact the Psychological Health of Family Surrogates?

Patients with chronic critical illness are a uniquely challenging population, defined as those suffering from an acute illness requiring prolonged mechanical ventilation who neither recover nor die within days to weeks [2]. Surrogate decision makers for these patients grapple with highly challenging choices about whether to continue life-prolonging treatments given their uncertain outcomes. Faced with the burden of making such difficult decisions, surrogate decision makers and family members often experience symptoms of anxiety, depression, and post-traumatic stress disorder in the months following a family member's stay in the intensive care unit. Given the importance of communication regarding prognosis and goals of care between health care providers and family members of these tenuous patients, investigators sought to determine whether family meetings led by palliative care specialists reduced symptoms of anxiety and depression in family members compared with routine support by primary ICU teams.

In a fascinating and innovative study published in JAMA, a multicenter randomized clinical trial enrolled 256 patients with 365 surrogate decision makers, randomized to intervention and control groups. The intervention consisted of two structured conversations delivered by palliative care-trained consultants that focused on providing information regarding prognosis, offering emotional support, and discussing the patient's values and preferences. The surrogates in the control group received "usual care," which included an informational brochure and family meetings conducted by primary ICU teams. The primary outcome was a measure of surrogates' symptoms of depression and anxiety 3 months after the patient's hospitalization, based on the Hospital Anxiety and Depression Scale (HADS). The results of the study were surprising: the investigators observed no difference between groups in surrogates' symptoms of depression and anxiety at 3-month follow-up, as the mean HADS score in the intervention group was 12.2 compared to 11.4 in the control group (between-group difference 0.8; 95% CI, -0.9 to 2.6; P = 0.34). Perhaps most unexpectedly, the intervention was found to have actually increased surrogates' posttraumatic stress symptoms at 3-month follow-up.

The authors offer several explanations for these results. They theorize that the direct and honest focus on poor prognosis may have been emotionally traumatic for surrogates and possibly increased their risk of developing PTSD symptoms. They also point out that the intervention sessions may have been limited as palliative care teams led them exclusively without direct participation by ICU clinicians, potentially creating discordance in communication as compared to a more integrated intervention. Another consideration is that the ‘dose’ of the intervention, on average 1.4 intervention sessions per participant, was too small to affect the surrogates’ mental health or their decision-making regarding life-prolonging therapies. This is a landmark study, as it is the first multicenter randomized trial examining palliative care-led support interventions for surrogate decision makers of patients with chronic critical illness, and while it does not support routine palliative-care led discussions of goals of care for all families of these patients, it does show with striking clarity that developing support interventions for surrogates of ICU patients is a vital area of further research.

Is Dexmedetomidine the Best Therapy for Agitation After Haloperidol Fails?

Dexmedetomidine (trade name Precedex) is very commonly used as a sedative for intubated patients with hyperactive delirium in the ICU setting: it has impressive sedative, analgesic, and anxiolytic effects with a reduced risk of respiratory depression, and is easily titratable with minimal drug interactions. However, in nonintubated ICU patients suffering from agitation, its use is less established. Haloperidol is the most commonly used agent to manage agitation, but guidelines are less clear about the drug of choice when haloperidol is contraindicated or fails at high doses.

In a recent issue of Critical Care Medicine, investigators published the first known study examining the efficacy and safety of dexmedetomidine for agitated delirium refractory to haloperidol in nonintubated patients [3]. The investigators intended to design the study as a randomized, double-blind, controlled trial comparing haloperidol, dexmedetomidine, and placebo; however, the Hospital Committee on Bioethics and Human Research did not authorize this approach on ethical grounds, as dexmedetomidine is only approved for cases in which haloperidol has already failed. As a result, the investigators switched to a nonrandomized, controlled design in which dexmedetomidine was used only after failure of haloperidol. A total of 132 patients met inclusion criteria (nonintubated patients with a RASS score between +1 and +4 points and prominent scores on several ICU delirium scoring systems, including the CAM-ICU and the Intensive Care Delirium Screening Checklist [ICDSC]), and all patients initially received IV haloperidol boluses. Eighty-six patients responded to haloperidol with a subsequent RASS of 0 to -3, while 46 patients were classified as non-responders and went on to receive infusions of dexmedetomidine. The primary endpoint was time to satisfactory sedation: dexmedetomidine achieved 33.4% more time in adequate sedation than haloperidol. Importantly, owing to the added analgesic benefit of dexmedetomidine, patients received six times less morphine than those treated with haloperidol, reducing the potential exacerbation of delirium by additional opioids. The primary safety endpoint was oversedation; no patients treated with dexmedetomidine suffered from this, in contrast to the haloperidol group, in which sedation had to be suspended in 10 patients and managed with noninvasive ventilation.
The incidence of other side effects, such as bradycardia associated with dexmedetomidine and QTc prolongation with haloperidol, was low and did not reach statistically significant differences between the two groups.

Finally, although dexmedetomidine is 17 times more expensive than haloperidol, a cost-benefit analysis showed that patients treated with dexmedetomidine had a markedly reduced length of stay in the ICU as compared to haloperidol, resulting in mean savings of $4,370 per patient. While the data supporting more widespread use of dexmedetomidine is very encouraging from an efficacy, safety, and monetary perspective, the authors were unable to perform a randomized, controlled, double-blinded trial due to ethical constraints, which would have been the gold standard. A larger, appropriately designed, and ethically acceptable trial evaluating the use of dexmedetomidine in agitated, nonintubated patients is merited to further explore these very promising initial results.


Supporting pioneering claims made by our very own Dr. Martin Blaser in his recent book Missing Microbes, a study published in Gastroenterology found that the administration of antibiotics to children before the age of 2 increased their risk of childhood obesity [4, 5]. This adds to the growing body of evidence demonstrating that early, frequent administration of antibiotics during childhood development increases the risk of obesity and autoimmune disease through their impact on the human microbiome.

A recent study in JAMA Internal Medicine compared patterns of end-of-life care and family-rated quality of care for patients dying with different critical illnesses. Perhaps not unexpectedly, family-reported quality of end-of-life care was significantly better for patients with cancer and those with dementia than for patients with end-stage renal disease, cardiopulmonary failure, or frailty [6]. This emphasizes the need for clinicians to better recognize the importance of appropriate end-of-life care for patients with all serious illnesses.

In a study published in the Journal of the American College of Cardiology called EARLY-BAMI, investigators examined the impact of intravenous beta-blockers before primary percutaneous coronary intervention (PPCI) on infarct size and clinical outcomes in patients presenting with STEMI [7]. Early IV metoprolol before PPCI was not associated with a reduction in infarct size, though it did decrease the incidence of malignant arrhythmias.

Dr. Amar Parikh is a 3rd year internal medicine resident at NYU Langone Medical Center. 

Peer reviewed by Dr. Kerrilynn Carney, chief resident of internal medicine at NYU Langone Medical Center 

Image courtesy of 


  1. Johnston SC, Amarenco P, Albers GW, et al. Ticagrelor versus Aspirin in Acute Stroke or Transient Ischemic Attack. N Engl J Med. 2016;375:35-43. doi:10.1056/NEJMoa1603060.
  2. Carson SS, Cox CE, Wallenstein S, et al. Effect of Palliative Care–Led Meetings for Families of Patients With Chronic Critical Illness: A Randomized Clinical Trial. JAMA. 2016;316(1):51-62. doi:10.1001/jama.2016.8474.
  3. Carrasco G, Baeza N, Cabré L, et al. Dexmedetomidine for the Treatment of Hyperactive Delirium Refractory to Haloperidol in Nonintubated ICU Patients: A Nonrandomized Controlled Trial. Crit Care Med. 2016;44(7):1295-1306.
  4. Blaser MJ. "Modern Plagues." In: Missing Microbes. New York: Picador; 2014.
  5. Scott FI, Horton DB, Mamtani R, et al. Administration of Antibiotics Before Age 2 Years Increases Risk for Childhood Obesity. Gastroenterology. 2016;151(1):120-129.
  6. Wachterman MW, Pilver C, Smith D, Ersek M, Lipsitz SR, Keating NL. Quality of End-of-Life Care Provided to Patients With Different Serious Illnesses. JAMA Intern Med. Published online June 26, 2016. doi:10.1001/jamainternmed.2016.1200.
  7. Roolvink V, Ibáñez B, Ottervanger JP, et al. Early intravenous beta-blockers in patients with ST-segment elevation myocardial infarction before primary percutaneous coronary intervention. J Am Coll Cardiol. 2016;67:2705-2715.

From the Archives: Myth vs. Reality: The July Effect

July 7, 2016

Please enjoy this post from the archives dated August 12, 2012

By Mark Adelman, MD

Faculty Peer Reviewed

Another July 1st has come and gone, marking the yearly transition in US graduate medical education of interns to junior residents, junior residents to senior residents, and senior residents to fellows. With this annual mid-summer mass influx of nearly 37,000 interns and other trainees [1] taking on new clinical responsibilities, learning to use different electronic medical record systems and navigating the other idiosyncrasies of unfamiliar institutions, one cannot help but wonder what implications this may have on patient safety.

The notion that nationwide morbidity and mortality increase in July as thousands of interns, residents and fellows adjust to their new roles is typically referred to as the “July effect” in both the lay press[2] and medical literature;[3,4] our British colleagues often refer to their analogous transition every August as the decidedly more ominous “killing season.”[5]

But what does the available evidence suggest regarding this supposed yearly trend in adverse outcomes? Should we advise loved ones to avoid teaching hospital ERs, wards and ORs every July? Unfortunately, one cannot draw firm conclusions from the published literature, but some recent studies may be cause for concern.

There is much disagreement among the medical and surgical specialties and even within each field. For example, a retrospective review of 4325 appendicitis cases at two California teaching hospitals[6] found no significant difference in post-op wound infection rates in July/August vs. all other months (4.8% vs. 4.3%, p=0.6), nor was there a significant difference in need for post-op abscess drainage (1.2% vs. 1.5%, p=0.6) or length of hospitalization (2.5 +/- 2.8 days vs. 2.5 +/- 2.2 days, p=1.0). In contrast, a retrospective review of a nationwide sample of 2920 patients hospitalized for surgical intervention of spinal metastases noted increased mortality in July vs. August-June (OR 1.81; 95% CI, 1.13-2.91; P = .01) and increased intra-operative complication rates (OR, 2.11; 95% CI, 1.41-3.17; P < .001), but no increase in post-operative complications (OR, 1.08; 95% CI, 0.81-1.45; P = .60).[7]

Turning to studies of patients with medical diagnoses, a single-institution retrospective review of patients admitted with the common diagnoses of either acute coronary syndrome (764 patients) or decompensated heart failure (609 patients) also failed to find evidence of a July effect.[8] In this study, researchers looked at in-hospital mortality and peri-procedural complications (for those patients who underwent PCI or CABG) in July-September vs. October-June and found no significant difference in mortality or complication rates (1% vs. 1.4%, p=0.71 and 2.1% vs. 2.8%, p=0.60, respectively). The investigators were also able to track use of aspirin, beta-blockers, statins and ACE/ARBs at time of discharge, as these are standard quality metrics for these two cardiac conditions; again, no significant difference was found comparing July-September to October-June prescriptions for any of these guideline-recommended medications.

In a rather unique examination of over 62 million computerized US death certificates from 1979-2006, Phillips and Barker developed a least-squares regression equation to compare observed to expected inpatient deaths on a monthly basis over this 28-year period.[9] Looking specifically for "medication error" (i.e., a preventable adverse drug event) as the primary cause of death, they found that July was the only month of the year (both on a yearly basis and in aggregate over the entire study period) in which the ratio of observed to expected deaths exceeded 1.00. These findings held only for medication errors, not for other causes of death. That this spike in mortality is due to the presence of new residents is further suggested by their comparison of US counties with and without teaching hospitals; the elevated ratio of observed to expected medication error deaths in July was present in counties with teaching hospitals but not in those without teaching hospitals, and those counties with the greater concentration of teaching hospitals had a greater July spike.

With such conflicting evidence in published reports on this question, where is one to turn for guidance? Annals of Internal Medicine published a systematic review by Young and colleagues last year that synthesized the findings of 39 studies published since 1989.[10] Studies were determined to be of higher or lower quality by such factors as statistical adjustments for confounders such as patient demographics/case mix, seasonal/year-to-year variations and presence/absence of a concurrent control group. Studies were further stratified by outcomes examined, including mortality, morbidity and medical errors, and efficiency (e.g., length of stay, OR time). Perhaps the most interesting finding was that the higher quality studies were more likely to detect a “July effect” on morbidity and mortality outcomes than the lower quality studies. For example, 45% of the higher quality studies noted an association between housestaff turnover and mortality but only 6% of the lower quality studies did. Reported effect sizes ranged from a relative risk increase of 4.3% to 12.0% or an adjusted odds ratio of 1.08 to 1.34. The authors did caution that study heterogeneity does not permit firm conclusions regarding the degree of risk posed by trainee changeover or which features of residency programs are particularly problematic and thus should be the target of future interventions.

Clearly, the question of whether the “July effect” exists is a complicated one that is difficult to answer through observational studies. If it were otherwise, the medical literature would not be full of studies with widely divergent conclusions. The systematic review by Young’s group, which appears to be the only such recent review on the topic, can hopefully provide some clarity. In my opinion, the greatest contribution by Young et al. was their finding that higher quality studies were more likely to detect a “July effect” on mortality and efficiency. There may be many studies that attempt to address this question, but not all of their conclusions are equally valid. The next logical step is to examine in a more focused way the potential underlying causes of such an effect. Is it the lack of clinical experience/technical ability among a large group of new trainees? Is it lack of familiarity with clinical information systems or institutional protocols and practices? Or is it perhaps poor communication and teamwork among new coworkers? Targeted interventions could include enhanced supervision of new housestaff by attendings, limiting the overall volume of clinical workload for new trainees, avoiding overnight responsibilities, simulation-based team training or even staggering of start dates for new housestaff.[11] While it may be difficult to conclude from the currently available evidence which of these changes would be the highest yield, I believe that the impact of the “July effect” should not be discounted and additional steps must be taken to maximize patient safety during this annual transitional period.

Dr. Mark Adelman is a second year resident at NYU Langone Medical Center

Peer reviewed by Patrick Cocks, MD, Program Director, NYU Internal Medicine Residency, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Accreditation Council for Graduate Medical Education. Data Resource Book Academic Year 2010-2011. Available at: Accessed 7/9/12

2. O’Connor A. Really? The Claim: Hospital Mortality Rates Rise in July. The New York Times. July 4, 2011. Available at: Accessed 7/9/12.

3. Kravitz RL, Feldman MD, Alexander GC. From the editors’ desk: The July effect [editorial]. J Gen Intern Med. 2011;26(7):677.

4. McDonald RJ, Cloft HJ, Kallmes DF. Impact of admission month and hospital teaching status on outcomes in subarachnoid hemorrhage: evidence against the July effect. J Neurosurg. 2012;116(1):157-63.

5. Hough A. New junior doctor rules ‘will stop NHS killing season.’ The Telegraph. June 23, 2012. Available at: Accessed 7/9/12.

6. Yaghoubian A, de Virgilio C, Chiu V, Lee SL. “July effect” and appendicitis. J Surg Educ. 2010;67(3):157-60.

7. Dasenbrock HH, Clarke MJ, Thompson RE, Gokaslan ZL, Bydon A. The impact of July hospital admission on outcome after surgery for spinal metastases at academic medical centers in the United States, 2005 to 2008. Cancer. 2012;118(5):1429-38.

8. Garcia S, Canoniero M, Young L. The effect of July admission in the process of care of patients with acute cardiovascular conditions. South Med J. 2009;102(6):602-7.

9. Phillips DP, Barker GE. A July spike in fatal medication errors: a possible effect of new medical residents. J Gen Intern Med. 2010;25(8):774-9.

10. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July effect”: impact of the academic year-end changeover on patient outcomes: a systematic review. Ann Intern Med. 2011;155(5):309-15.

11. Barach P, Philibert I. The July effect: fertile ground for systems improvement. Ann Intern Med. 2011;155(5):331-2.

From The Archives – Medicine’s Favorite Default Diagnosis: Non-compliance

June 23, 2016
Please enjoy this post from the archives, dated August 2, 2012

By Robert Keller

Faculty Peer Reviewed

In a small examination room on the Ambulatory Care floor of a large hospital in Brooklyn, I greet Ms. S, a 53-year-old Jamaican woman, as she walks through the door and plops herself down in the chair across from me. Having spent 20 minutes perusing her chart, I know that she suffers from morbid obesity, uncontrolled hypertension (blood pressure 165/95), and terrible diabetes (A1c 13.8%). I have already concluded that her worsening condition over the past 5 years, despite the extensive medical interventions attempted, can be explained by a simple yet dismissive diagnosis of patient “non-compliance.” I am poised to unleash my spiel on the gravity of her condition and the necessity for change, but before I have a chance to start, she begins speaking in her strong Jamaican accent.

“Doctor, you will be so proud of me. Since the last time I was seen here in the clinic 6 months ago, I have made great changes in my life. I eat only healthy foods—just salads and fruits. I’ve stopped drinking soda and instead take in lots of water every day. I even joined a gym and now walk for 30 minutes on the treadmill each and every morning.”

To say the least, I am caught off guard. My assumptions behind her worsening obesity, diabetes, and hypertension seem threatened, so I ask, “But Ms. S, have you been taking your medications as prescribed?”

“Oh yes, of course, Doctor. I never miss a dose,” she responds.

Her statement leaves me perplexed. How could the conditions of this woman, who over the past 6 months has reportedly been compliant with her medications and adherent to physician-recommended life modifications, continue to be so poorly controlled in so many parameters? The first thought to cross my mind is that maybe she is not telling the truth. She may not drink soda anymore, but loads her coffee with sugar every morning. She may go to the gym, but “30 minutes on the treadmill” includes the 25 minutes of travel to and from the gym each day. And her medications—she probably doesn’t want to disappoint me by admitting that she misses a dose here and there. I cannot escape the conclusion that she is not following doctor’s orders.

After presenting the case to my attending, we agreed that the full story probably wasn’t being told and there was likely some factor of non-compliance at stake. Ms. S was sent home that day with encouragement to continue maintaining her supposed new healthy habits and, in an attempt to provide some control of her chronic conditions, was started on a new regimen of increased dosages and additional medications.

In retrospect, my preconceived notions and hasty conclusions were both stubborn and naïve. Who was I to make assumptions about the patient’s compliance without first hearing her story and, worse yet, disregard her subsequent testaments as not being the “full story”? Perhaps this was the natural course of her disease and our interventions were just inadequate—but why was this option never considered? The case left me with a reality that needed to be faced: patient “non-compliance” is too often a default diagnosis physicians overuse to conceal and ignore a more complicated underlying issue.

In health care, compliance has been described as "the extent to which the patient's actual history of drug administration corresponds to the prescribed regimen."[1] While this term has long been ingrained in everyday medical discourse, physicians and scholars have recently questioned its political correctness and its ability to reflect the value of partnership in a doctor-patient relationship. In the past, it was generally accepted that healthcare professionals employ a model of paternalism, where the provider specifies a therapy and the conditions of its use while the patient follows these orders as directed.[2] The term "compliance," defined in Merriam-Webster's as "a disposition to yield to others,"[3] fits quite nicely in the context of such a relationship. However, as the social contracts between doctors and patients have evolved, doctors now assume roles more akin to educators, advisers, and enablers who desire partnership rather than dictatorship.[4] To accommodate this transition, the term "adherence" seems a better fit than "compliance," as it carries a more supportive and collaborative connotation. Recent literature strongly advocates the use of "non-adherence" over "non-compliance" in the language of health professionals, but many have yet to catch on; in doing so, they may actually be hindering the process of change toward forming more synergistic alliances with their patients.[2]

Terminology aside, the issue of patient non-adherence is currently plaguing the health care system. It is estimated that half of the 3.2 billion prescriptions dispensed in the US annually are not taken as prescribed, even though there is substantial evidence that medical therapy improves quality of life and reduces mortality in people with chronic diseases.[5,6] Not only does medication non-adherence lead to poor clinical outcomes, it also stresses the healthcare system, with estimated costs reaching $177 billion annually.[7] In fact, a 2003 report produced by the World Health Organization argued that improving adherence to existing treatments would provide more health benefits across the globe than creating new medical therapies.[8]

The scope of the issue is no longer a mystery, but what remains ill-defined are the underlying causes of non-adherence. Barriers to adherence that have been well described in the literature include low health literacy, poor bidirectional doctor-patient communication, failure to negotiate an agreement on a medication plan, cost prohibition, non-response to the prescribed intervention, and unpreventable reasons such as serious mental illness and side effects.[6] When a doctor senses the urge to diagnose "non-adherence," these barriers ought to be contemplated first. Perhaps the physician will recognize his own shortcomings in explaining why a certain medication is being prescribed—because after all, who is really willing to inject themselves 4 times a day and painfully prick their fingers just because "this helps with your sugar"? Donovan and colleagues argue that the key to eliminating these barriers is the nourishment of cooperative doctor-patient relationships, with doctors recognizing patients' autonomy, needs, and constraints.[9] Conversely, the patients' obligation is to convey their needs, their expectations, and how they reach their decisions about treatments.[9] While this advice may be of benefit in the individual office, there remains an international need for public health initiatives geared toward non-adherence.

Doctors are trained to be experts at recognizing signs and symptoms that guide them to important diagnoses. Unfortunately, many have yet to see non-adherence as a symptom of underlying pathology rather than a diagnosis itself. Physicians owe it to patients like Ms. S to make the effort to uncover the true reasons behind their non-adherence.

Robert Keller is a 3rd year medical student at NYU School of Medicine

Faculty peer reviewed by Sabrina Felson, MD, Medicine (GIM Division), NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Urquhart J. Patient non-compliance with drug regimens: measurement, clinical correlates, economic impact. Eur Heart J. 1996:17 Suppl A:8-15.

2. Holm S. What is wrong with compliance? J Med Ethics. 1993:19(2):108-110.

3. Merriam-Webster’s Collegiate Dictionary. 11th ed. Springfield, MA: Merriam-Webster Inc; 2003.

4. Tilson H. Adherence or compliance? Changes in terminology. Ann Pharmacother. 2004:38(1):161-162.

5. Osterberg L, Blaschke T. Adherence to medication. N Engl J Med 2005:353(5):487-497.

6. Bosworth HB, Granger BB, Mendys P, et al. Medication adherence: a call for action. Am Heart J. 2011:162(3):412-424.

7. National Council on Patient Information and Education. Enhancing prescription medication adherence: a national action plan. Published August 2007. Accessed November 18, 2011.

8. World Health Organization. Adherence to long-term therapies: evidence for action.  Published 2003. Accessed November 18, 2011.

9. Donovan JL, Blake DR. Patient non-compliance: deviance or reasoned decision-making? Soc Sci Med. 1992:34(5):507-513.

From The Archives: Is a VBG just as good as an ABG?

May 6, 2016
Please enjoy this post from the archives, dated July 13, 2012

By Sunnie Kim, MD

Faculty Peer Reviewed

A rapid response is called overhead. As white-coated residents rush to the patient’s bedside, the medical consult starts to shout out orders to organize the chaos. “What’s the one-liner?” “Whose patient is this?” And of course, “Who’s drawing the labs?” Usually, at this point, the intern proceeds to collect the butterfly needle, assorted colored tubes, and the arterial blood gas (ABG) syringe. If lucky, there’s a strong pulse. The intern pauses, directs the needle, and hopes for that pulsatile jet of bright red blood to come through the clear tubing. If successful, a sigh of relief. If not, a wave of defeat and more butterfly needles are scattered across the bed as multiple residents attempt to get the elusive arterial blood.

Obtaining the ABG is considered almost a rite of passage for the medicine intern. However in ill patients with thready pulses, it can be difficult to obtain. Also, getting the ABG is not without its complications. Significant pain, hematoma, aneurysm formation, thrombosis or embolization, and needlestick injuries are all risks [1]. Given these risks, the question is whether we are subjecting our patients to undue pain and potential complications when a venous blood gas (VBG) would suffice.

An ABG provides important data including the pH, arterial oxygen tension (PaO2), carbon dioxide tension (PaCO2), arterial oxyhemoglobin saturation (SaO2), lactate, and electrolytes. In certain instances, an ABG is indispensable: in order to calculate an A-a gradient, the ABG is necessary. To assess whether that HIV patient with PCP needs steroids, for example, we use the A-a gradient to help guide our treatment plan [2]. But are there other patient populations in which the VBG is just as good as the ABG?

Brandenburg and Dire investigated whether the venous pH could replace the arterial pH in the initial evaluation of patients with diabetic ketoacidosis (DKA) [3]. Sixty-one emergency room patients were initially enrolled based on a fingerstick glucose greater than 250 mg/dL, a urine dipstick positive for ketones, and a clinical suspicion for DKA prior to lab tests. An ABG and VBG were subsequently drawn as temporally close to each other as possible. Forty-four episodes of DKA were identified after acidosis was established by an arterial pH less than 7.35. Among these cases, the mean difference between arterial and venous pH values was 0.02 (range 0.0 to 0.11) with a Pearson correlation coefficient (r) of 0.9689. The Pearson correlation (r) measures the strength of the linear association between two variables: ranging between -1 and +1, an r closer to -1 represents a strong negative association, while an r closer to +1 shows a strong positive correlation. The study concluded that venous blood gas measurements accurately demonstrated the degree of acidosis in patients with DKA.
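For readers less familiar with the statistic, here is a minimal Python sketch of the Pearson correlation just described; the pH pairs below are illustrative values, not data from the Brandenburg and Dire study:

```python
from statistics import mean

# Illustrative paired arterial and venous pH values (not study data).
arterial_ph = [7.10, 7.18, 7.25, 7.30, 7.34]
venous_ph = [7.08, 7.16, 7.24, 7.28, 7.33]

def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance scaled by both SDs."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(arterial_ph, venous_ph)
print(f"r = {r:.3f}")  # near +1: venous pH closely tracks arterial pH
```

A value near +1, as in the study's r of 0.9689, indicates that the two measurements rise and fall together almost perfectly, though correlation alone does not prove they agree in absolute value.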

However, the accuracy of the VBG becomes more controversial in patients with exacerbations of chronic pulmonary disease. In 1991, Elborn and colleagues followed 48 patients during the recovery phase of their chronic pulmonary disease exacerbations (12 with pulmonary fibrosis and 36 with chronic obstructive lung disease) to see whether there were significant differences between the venous and arterial partial pressures of carbon dioxide [4]. Simultaneous blood samples were taken from the radial artery and from the dorsal hand vein and analyzed immediately. There was no significant difference between the two CO2 tensions (PaCO2 41 +/- 9.5 mmHg vs PvCO2 42 +/- 10.6 mmHg; r=0.84, p<0.001). However, perhaps a more meaningful study would compare arterial and venous PCO2 values during the initial stage of exacerbation, when PCO2 levels may drive more significant management decisions.

A more recent study in 2002 investigated whether venous pCO2 and pH could be used to screen for significant hypercarbia in emergency department patients with acute respiratory disease [5]. The aim was to evaluate the agreement between venous and arterial pH and pCO2 and to determine whether there is a cut-off level of venous pH or pCO2 that could accurately screen for significant hypercarbia. The study used limits of agreement (LOA) and a Bland-Altman plot to assess the agreement between the ABG and VBG; these are considered more comprehensive statistical methods than Pearson's correlation for assessing agreement between two tests [6]. The LOA state that approximately 95% of the differences between the venous and arterial values are expected to fall within 1.96 standard deviations of the mean difference. The Bland-Altman plot takes this further, graphing the difference for each pair, the mean difference, and the LOA on the vertical axis against the average of the two measurements on the horizontal axis. The plot demonstrates not only the degree of agreement but also whether agreement varies with the magnitude of the measurement, thereby assessing bias as well.
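
The bias and limits of agreement themselves are simple to compute. A minimal sketch with made-up paired pCO2 values (not the study's data):

```python
def limits_of_agreement(arterial, venous):
    """Mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diffs = [v - a for a, v in zip(arterial, venous)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the differences
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired pCO2 values (mmHg)
arterial_pco2 = [38, 44, 51, 47, 55]
venous_pco2 = [42, 50, 58, 51, 63]
bias, lower, upper = limits_of_agreement(arterial_pco2, venous_pco2)
```

The interval (lower, upper) is the range within which roughly 95% of venous-minus-arterial differences are expected to fall; the wider it is, the less interchangeable the two tests.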

The investigators included 196 patients presenting with respiratory illness and potential ventilatory compromise who were deemed by a physician to require ABG analysis. COPD, pneumonia, cardiac failure, asthma, and suspected pulmonary embolism were the top five diagnoses in the study population. An ABG and a VBG were then taken with "minimal delay." Fifty-six patients (29%) had significant hypercarbia, defined as a PaCO2 greater than or equal to 50 mmHg.

On average, the venous pH was 0.034 lower than the arterial pH, with LOA between -0.10 and +0.04. The venous pCO2 was 5.8 mmHg higher than the arterial pCO2, with LOA between -8.8 and 20.5, which the study investigators considered unacceptably wide. However, the authors reported that a venous pCO2 cutoff of 44 mmHg detected hypercarbia with a sensitivity of 100% and a specificity of 57%, making it an effective screening test. Although I agree that LOA between -8.8 and 20.5 appear too broad, the study (as is the case with the remaining literature) does not address whether using a venous pCO2 with such wide LOA makes a difference in patient outcomes. Intuitively, however, as clinicians we would be hard-pressed to depend solely on the VBG for management decisions knowing that this wide range exists.
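
The screening logic described here (flag any venous pCO2 at or above a cutoff, confirm with an ABG) reduces to a simple confusion-matrix calculation. A sketch with invented pairs, using the study's definitions of a 44 mmHg venous cutoff and arterial pCO2 >= 50 mmHg as true hypercarbia:

```python
def screening_performance(pairs, venous_cutoff=44.0, hypercarbia=50.0):
    """Sensitivity and specificity of a venous pCO2 cutoff for detecting
    arterial hypercarbia. `pairs` holds (venous_pco2, arterial_pco2) tuples."""
    tp = fp = tn = fn = 0
    for venous, arterial in pairs:
        screen_positive = venous >= venous_cutoff
        truly_hypercarbic = arterial >= hypercarbia
        if screen_positive and truly_hypercarbic:
            tp += 1
        elif screen_positive:
            fp += 1
        elif truly_hypercarbic:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical (venous, arterial) pCO2 pairs in mmHg
pairs = [(60, 55), (45, 40), (40, 35), (52, 51), (46, 43)]
sens, spec = screening_performance(pairs)
```

A screening cutoff is chosen for sensitivity (miss no true hypercarbia) at the expense of specificity, which is exactly the trade-off in the 100%/57% figures above.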

Given the discrepancy between arterial and venous values, other investigators have attempted to derive equations that accurately predict ABG values from a VBG result. Rudkin and colleagues studied 385 trauma patients requiring ABGs to see whether venous pH and calculated base excess (BE) approximated the arterial values [7]. A priori, 15 physicians blinded to the study hypothesis set a consensus threshold of <0.05 pH units and a difference in BE of <2 as the greatest acceptable difference between an ABG and a VBG. The authors acknowledge that these thresholds are purely clinical and are not based on validated outcome measures. The predictive equations obtained from linear regression models for pH and BE were considered inadequate: only 72% of subjects fell within the predefined acceptable range for pH, with unacceptably wide LOA of -0.11 to +0.10 pH units, and 80% of subjects fell within acceptable parameters for BE, again with unacceptably wide LOA of -3.9 to 4.4 BE units. In addition, the predictive equations fit the validation data better for healthier patients (higher pH and normal or positive BE).

Another population of interest is mechanically ventilated patients. In 2005, Malinoski and colleagues studied 30 consecutive intubated trauma patients to see whether central venous pH, pCO2, and BE were comparable to the corresponding arterial values [1]. Whenever an ABG was drawn to assist in ventilator management, a VBG was also obtained. The indications for intubation varied: neurologic (60%), respiratory (24%), and hypotension (16%). Comparing the paired values yielded the following: pH, r=0.92, p<0.001, 95% LOA -0.09 to 0.03; pCO2, r=0.88, p<0.001, 95% LOA -2.2 to 10.9; and BE, r=0.96, p<0.001, 95% LOA -2.2 to 1.8. The authors also concluded that these LOA represented clinically significant ranges that could affect important management decisions in a critically ill population.

On the other hand, arterial lactate, a prognostic marker in the critically ill [9], has been shown to be equivalent to its venous counterpart. In 2000, Lavery and colleagues compared arterial and venous lactate in 375 patients and found no significant difference between the two [10]. The correlation was 0.94 (95% CI 0.94 to 0.96, p=0.0001). They also showed that the effect of a tourniquet was negligible, although the duration of tourniquet use was not reported.

ABGs are considered the gold standard for assessing acid-base status, ventilation, and oxygenation. However, obtaining an ABG is not without risks, namely patient discomfort, hematoma, and thrombosis. If we can obtain the same information with a VBG, which carries fewer risks, we should opt for the VBG instead. As mentioned, the ABG is indispensable when measuring the A-a gradient, especially when it drives important management decisions (e.g., the use of steroids in PCP). Based on this review, the populations most likely to benefit from an ABG over a VBG are patients with severe hypercarbia and those on mechanical ventilation: the margin of error between arterial and venous samples is largest in these groups, though whether these differences are clinically significant remains debatable. Until there is definitive evidence to support forgoing the ABG in these populations, most clinicians will likely err on the side of obtaining the ABG to gain as much information as possible.

Dr. Sunnie Kim is a 3rd year resident at NYU Langone Medical Center

Peer reviewed by Harald Sauthoff, Department of Medicine, Pulmonary, NYU Langone Medical Center.

Image courtesy of Wikimedia Commons


1. Malinoski DJ, Todd SR, et al. Correlation of central venous and arterial blood gas measurements in mechanically ventilated trauma patients. Arch Surg 2005;140:1122-5.

2. Briel M, Bucher HC, Boscacci R, Furrer H. Adjunctive corticosteroids for Pneumocystis jiroveci pneumonia in patients with HIV infection. Cochrane Database Syst Rev. 2006 Jul 19;3.

3. Brandenburg MA, Dire DJ. Comparison of arterial and venous blood gas values in the initial emergency department evaluation of patients with diabetic ketoacidosis. Ann Emerg Med 1998;31(4):459-65.

4. Elborn JS, Finch MB, et al. Non-arterial assessment of blood gas status in patients with chronic pulmonary disease. Ulster Med J 1991;60(2): 164-7.

5. Kelly AM, Kyle E, McAlpine R. Venous pCO2 and pH can be used to screen for significant hypercarbia in emergency patients with acute respiratory disease. J Emerg Med. 2002;22:15-19.

6. Hopkins WG. Measures of reliability in sports medicine and science. Sports Med. 2000;30(1):1-15.

7. Rudkin SE. Prospective correlation of arterial vs venous blood gas measurements in trauma patients. Am J Emerg Med. 2011 Dec 12. PMID: 22169587.

8. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1(8476):307-310.

9. Mikkelsen ME, Miltiades AN, Gaieski DF, et al. Serum lactate is associated with mortality in severe sepsis independent of organ failure and shock. Crit Care Med. 2009;37:1670-1677.

10. Lavery RF, Livingston DH, Tortella BJ, Sambol JT, Slomovitz BM, Siegel JH. The utility of venous lactate to triage injured patients in the trauma center. J Am Coll Surg. 2000;190:656-664.

From The Archives: Should Patients With Nephrotic Syndrome Receive Anticoagulation?

March 31, 2016
Please enjoy this post from the Archives dated May 9, 2012
By Jennifer Mulliken

Faculty Peer Reviewed

Case 1:

A 30-year-old African-American male with a history of bilateral pulmonary emboli presents with a 1-week history of bilateral lower extremity edema. Blood pressure is 138/83, cholesterol 385, LDL 250, albumin 2.9. Urinalysis shows 3+ protein. Twenty-four hour urinary protein is 7.2 grams.

Case 2:

A 47-year-old Hispanic male with a history of mild hypertension and venous insufficiency presents with a 3-month history of bilateral lower extremity edema. BP is 146/95, cholesterol 241, LDL 165, albumin 1.9. Urinalysis shows 3+ protein. Twenty-four hour urinary protein is 4.6 grams.

What is the evidence to support prophylactic anticoagulation in patients with nephrotic syndrome?

Nephrotic syndrome classically presents with heavy proteinuria (>3.5 g per day), hypoalbuminemia, edema, and hyperlipidemia. Damage to the glomerular basement membrane results in the loss of either charge or size selectivity, which then results in the leakage of glomerular proteins such as albumin, clotting factors, and immunoglobulins.[1] The heavy proteinuria seen in affected patients leads to a series of clinically important sequelae, including sodium retention, hyperlipidemia, greater susceptibility to infection, and a higher risk of both venous thromboembolism (VTE) and arterial thromboembolism (ATE).

The predisposition to a hypercoagulable state in nephrotic syndrome results from the urinary loss of clotting inhibitors such as antithrombin III and plasminogen. Hypercoagulability, in turn, can lead to a variety of complications including pulmonary embolism, renal vein thrombosis, and recurrent miscarriages. The incidence of both venous and arterial thrombosis is much higher in patients with nephrotic syndrome than in the general population. Mahmoodi and colleagues' retrospective study of 298 patients with nephrotic syndrome followed for 10 years found absolute risks of venous and arterial thromboembolism of 1.02% and 1.48% per year, respectively, approximately 8 times higher than in the general population matched for age and sex.[2] The authors also found that the risk of venous and arterial thrombosis is highest in the first 6 months after diagnosis.[2-4] The risk of thrombosis varies with the underlying cause of nephrotic syndrome: it is highest in membranous nephropathy, followed by membranoproliferative glomerulonephritis and minimal change disease.[5,6] While minimal change disease frequently responds to treatment with corticosteroids, membranous nephropathy is more difficult to treat and therefore more likely to lead to thrombosis.

Renal vein thrombosis typically presents with flank pain, gross hematuria, and loss of renal function, while pulmonary embolus usually presents with dyspnea, pleuritic chest pain, and tachypnea. A surprising number of patients with nephrotic syndrome who experience thromboembolic events present without any symptoms at all. Only one-tenth of patients with renal vein thrombosis and one-third of patients with pulmonary emboli are symptomatic.[5] With regard to renal vein thrombosis, Chugh and colleagues noted that because the development of venous occlusion is often slow and incomplete in patients with nephrotic syndrome, the clinical features of thrombosis are not easily distinguished from the primary renal disease.[6]

The overall risk of ATE in patients with nephrotic syndrome is related to both the glomerular filtration rate and the classic risk factors for atherosclerosis.[2] Increased risk of VTE, in contrast, is generally associated with high rates of proteinuria, low serum albumin levels, high fibrinogen levels, low antithrombin III levels, and hypovolemia.[7] Unfortunately, these are relatively unreliable predictors of patient outcomes. That potentially fatal thrombotic events can be clinically silent in nephrotic patients has important implications for treatment. Given that the incidence of thromboembolic complications in these patients is higher than in the general population, anticoagulant prophylaxis must be considered.

The likelihood of benefit from prophylactic anticoagulation depends on both the probability of future thrombotic events, as discussed above, and the risks associated with anticoagulation, such as intracranial hemorrhage and gastrointestinal bleeding. Unfortunately, data on prophylactic therapy in nephrotic syndrome are seriously limited, and there are no firm recommendations for or against anticoagulation. The decision to treat prophylactically must therefore be individualized, based on risk factors and prior history.

In circumstances where a patient has demonstrated a previous tendency towards hypercoagulability, such as previous pulmonary emboli in case 1 above, prophylactic anticoagulation would likely be warranted. In this case the benefit of anticoagulation outweighs the risk, particularly in the first 6 months after diagnosis when the risk of thromboembolism is highest. Patients with massive amounts of proteinuria and very low albumin are at especially high risk of VTE, and in these cases prophylactic anticoagulation should probably be given.

The patient in case 2 has milder disease and no history of thrombosis. Many physicians favor a more conservative approach to prophylaxis in this setting.* The risks of chronic anticoagulation frequently outweigh the benefits. In addition, while nephrotic syndrome predisposes to a hypercoagulable state, the risk of thrombosis is not necessarily lowered by anticoagulation. For example, the loss of antithrombin III in the urine contributes to the risk of hypercoagulability, but it also decreases the effectiveness of heparin. While this patient’s venous insufficiency increases the likelihood of deep vein thrombosis, chronic anticoagulation in a patient with no history of hypercoagulability seems unnecessary. That being said, no studies have addressed this issue and therefore practice will vary considerably.

The lack of evidence to support or refute prophylactic anticoagulation in patients with nephrotic syndrome means that the decision to treat must be made on a case-by-case basis. Special consideration should be given to patients with a demonstrated history of hypercoagulability and to those with known hypercoagulable states. In addition, the literature suggests that patients should be monitored closely for evidence of thrombosis in the first 6 months after diagnosis. In all cases the risk of hemorrhage should be weighed carefully against the potential benefits of anticoagulation.

* Many thanks to Drs. Jerome Lowenstein, Gregory Mints, Manish Ponda, and Joseph Weisstuch for their input on this topic.

Commentary by Dr. Jerome Lowenstein

Jennifer Mulliken makes a reasonable argument for individualizing the decision to anticoagulate, as there are no appropriate trials comparing conservative management with anticoagulation. Unfortunately, individualization is not very easy when there are no good markers of high risk and there is no good evidence of the effect of anticoagulation in renal vein thrombosis.

Jennifer Mulliken is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Jerome Lowenstein, MD, Department of Medicine (Nephrology), NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Orth SR, Ritz E. The nephrotic syndrome. N Engl J Med. 1998;338(17):1202-1211.

2. Mahmoodi BK, ten Kate MK, Waanders F, et al. High absolute risks and predictors of venous and arterial thromboembolic events in patients with nephrotic syndrome: results from a large retrospective cohort study. Circulation. 2008;117(2):224-230.

3. Anderson FA Jr, Wheeler HB, Goldberg RJ, et al. A population-based perspective of the hospital incidence and case-fatality rates of deep vein thrombosis and pulmonary embolism. The Worcester DVT Study. Arch Intern Med. 1991;151(5):933-938.

4. Thom TJ, Kannel WB, Silbershatz H, D’Agostino RB. Incidence, prevalence and mortality of cardiovascular diseases in the United States. In: Alexander RW, Schlant RC, Fuster V, eds. Hurst’s The Heart. 9th ed. New York, NY: McGraw-Hill; 1998: 3.

5. Llach F, Papper S, Massry SG. The clinical spectrum of renal vein thrombosis: acute and chronic. Am J Med. 1980;69(6):819-827.

6. Chugh KS, Malik N, Uberoi HS, et al. Renal vein thrombosis in nephrotic syndrome – a prospective study and review. Postgrad Med J. 1981;57(671):566-570.

7. Robert A, Olmer M, Sampol J, Gugliotta JE, Casanova P. Clinical correlation between hypercoagulability and thrombo-embolic phenomena. Kidney Int. 1987;31(3):830-835.

The Placebo Effect: Can Understanding Its Role Improve Patient Care?

February 25, 2016

Please enjoy this post from the archives dated March 4, 2012

By Brian D. Clark

Faculty Peer Reviewed

The ability to critically assess the validity of a clinical trial is one of many important skills that a physician strives to develop. This skill helps guide clinical decision-making, and there are a number of things that we are trained to look for to help determine the validity of any given study. Right at the top of the list of factors that go into this appraisal is that of study design, with the randomized, placebo-controlled trial serving as the gold standard for testing new treatments. Drugs that are considered therapeutic failures are said to have “performed no better than placebo.”

As described in a recent article in The New Yorker, research into the basis of the placebo effect and its potential role in therapy is becoming more mainstream.[1] This year, the Program in Placebo Studies and the Therapeutic Encounter was created at Harvard, bringing together research into the placebo effect and its potential application in patient care. While the “realness” of the placebo effect has long been appreciated in shaping subjective responses such as pain, recent studies suggest that placebos may be useful in other conditions such as irritable bowel syndrome (IBS) and emphasize the role of the patient-doctor interaction in shaping patient expectations.

An appreciation of the power of the placebo, and of the need to control for it in scientific studies, followed the work of Colonel Henry Beecher, who observed how expectation and emotion shaped wounded World War II soldiers' perception of pain. His widely cited 1955 article, "The Powerful Placebo," concluded that the placebo effect is an important factor in almost any medical intervention.[2]

Following the discovery of endorphins, early evidence for a biological mechanism accounting for the placebo effect in analgesia came in a study published in 1978. Levine and colleagues wanted to determine if endorphins were responsible for patients reporting reduced pain after receiving a placebo. In the experiment, patients recovering from dental surgery were initially given morphine, the opioid antagonist naloxone, or a placebo and asked to rate their pain. The investigators focused on those who had initially received the placebo and then divided these patients into two groups based upon their response to placebo. Naloxone was then administered to the two groups, and the patients who had previously responded to the placebo experienced a significant increase in pain following administration of naloxone. Furthermore, naloxone had no effect in the patients who did not initially respond to the placebo.[3] Therefore, at least in the case of pain, placebos and patient expectation appear to have a very real effect on the body’s production of endogenous opiates in certain individuals.

To address the question of whether placebos could be useful as treatments outside the realm of clinical trials, a series of meta-analyses by Hróbjartsson and Gotzsche examined studies that randomized patients to either placebo or no treatment at all. After analyzing the results of over 100 such trials, the authors concluded that, in general, placebos had no significant effect on objective outcomes but possible small benefits for subjective outcomes and the treatment of pain.[4,5] As the authors pointed out, these meta-analyses do not justify the therapeutic use of placebos outside the setting of clinical trials.

One argument against the routine use of placebos (such as sugar pills for improvement of subjective symptoms) is that, by its very nature, a placebo is thought to require some element of concealment or deception. Previous work has shown a beneficial effect of placebos in patients with IBS.[6] To test whether concealment was necessary for this effect, Kaptchuk and colleagues randomized patients with IBS to either an "open-label placebo" treatment group or no treatment at all. The patients in the open-label placebo group were told that "the placebo pills were made of an inert substance, like sugar pills, that had been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes." Patients given the open-label placebo had significantly reduced symptom severity compared to the no-treatment group.[7] Thus, placebo-like treatment may be helpful in certain conditions such as IBS. This study suggests that the therapeutic effect is due to the patients' perception that the potential treatment, even if inert, has been shown to be helpful, and it highlights the possibility that the patient-doctor relationship itself may be beneficial if the patient believes the doctor is trying to help him or her.

Commentary by Antonella Surbone, MD, PhD, FACP

Clinical Correlations Ethics Editor

This piece by Brian D. Clark on the placebo effect follows one that we posted in our Ethics section on February 18, 2009.[8] The present piece adds a fresh perspective to a subject always relevant to clinical care. For further consideration and debate, I wish to point all readers to the most interesting study by Kaptchuk and colleagues [7] that Brian reports in his conclusion. In my opinion and clinical experience, a good patient-doctor relationship, based on reciprocal honesty and trust, always has an inherent therapeutic value; this is why it is so important to establish and maintain. I have added three references that readers may find interesting.[9-11]

Brian Clark is a 4th year medical student at NYU School of Medicine

Peer reviewed by Dr. Antonella Surbone, MD, Ethics Editor, Clinical Correlations

Image courtesy of Wikimedia Commons


1. Specter M. The power of nothing. New Yorker. December 12, 2011:30-36.

2. Beecher HK. The powerful placebo. JAMA. 1955;159(17):1602-1606.

3. Levine JD, Gordon NC, Fields HL. The mechanism of placebo analgesia. Lancet.  1978;2(8091):654-657.

4. Hróbjartsson A, Gotzsche PC. Is the placebo powerless? An analysis of clinical trials comparing placebo with no treatment. N Engl J Med. 2001;344(21):1594-1602.

5. Hróbjartsson A, Gotzsche PC. Placebo interventions for all clinical conditions.  Cochrane Database Syst Rev. 2010(1):CD003974.

6. Kaptchuk TJ, Kelley JM, Conboy LA, et al. Components of placebo effect: randomised controlled trial in patients with irritable bowel syndrome. BMJ. 2008;336(7651):999-1003.

7. Kaptchuk TJ, Friedlander E, Kelley JM, et al. Placebos without deception: a randomized controlled trial in irritable bowel syndrome. PloS One. 2010;5(12):e15591.

8. Surbone A. Is prescribing placebos an ethical practice? February 18th 2009.

9. Daugherty CK, Ratain MJ, Emanuel EJ, Farrell AT, Schilsky RL. Ethical, scientific, and regulatory perspectives regarding the use of placebos in cancer clinical trials. J Clin Oncol. 2008;26(8):1371-1378.

10. Harris G. Half of doctors routinely prescribe placebos. New York Times. October 24th, 2008.

11. Berkowitz K, Sutton T, et al. Clinical use of placebo: an ethics analysis. National ethics teleconference. July 28, 2004.

From The Archives: Nothing QT (Cute) about it: rethinking the use of the QT interval to evaluate risk of drug induced arrhythmias

February 4, 2016

Please enjoy this post from the archives dated April 27, 2012

By Aneesh Bapat, MD

Faculty Peer Reviewed

Perhaps it’s the French name, the curvaceous appearance on the electrocardiogram (EKG), or its elusive and mysterious nature, but torsades de pointes (TdP), a polymorphic ventricular arrhythmia, is certainly the sexiest of all ventricular arrhythmias. Very few physicians and scientists can explain its origin in an early afterdepolarization (EAD), and fewer still can explain its “twisting of the points” morphology on EKG. Despite its rare occurrence (only 761 cases were reported to the WHO Drug Monitoring Center between 1983 and 1999 [1]), every medical student is taught that it is an arrhythmia caused by prolongation of the QT interval. The more savvy medical student will implicate the corrected QT interval (QTc) and recite a fancy formula with a square root sign to determine this value. Suffice it to say, the mystique of torsades de pointes and the negative attitude towards QT prolongation have been closely intertwined over the years. While the most common culprits, such as cisapride, macrolides, terfenadine, and haloperidol [1], have been maligned for this reason, many more drugs never make it to market because they prolong the QT interval [2]. (A comprehensive list of QT-prolonging drugs with associated arrhythmia risk can be found at <>.) Although our conceptions of a long QT interval have been inculcated repeatedly, there is growing evidence that QT interval prolongation may not be sufficient to predict the risk of drug-induced TdP, and that other, more sensitive and specific markers should be used.
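
That square-root formula is Bazett's correction, QTc = QT / sqrt(RR), with QT in milliseconds and the RR interval in seconds. A minimal sketch (Bazett's is only one of several published corrections, and it is known to over-correct at fast heart rates):

```python
def qtc_bazett(qt_ms, heart_rate_bpm):
    """Bazett-corrected QT interval: QTc = QT / sqrt(RR), RR in seconds."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / (rr_s ** 0.5)

# A QT of 400 ms at a heart rate of 75 bpm (RR = 0.8 s)
print(round(qtc_bazett(400, 75), 1))  # -> 447.2
```

At a heart rate of 60 bpm the RR interval is 1 second, so QTc equals the measured QT; the correction only matters away from that rate.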

The QT interval, the time between the start of the QRS complex and the end of the T wave on EKG, marks the time required for ventricular depolarization and repolarization. On a cellular basis, it is closely related to the duration of the cardiac myocyte action potential (AP). In pathological conditions, myocyte depolarization can even occur during phase 2 or 3 of the action potential, producing an EAD. These EADs can give rise to ectopic beats in the tissue and produce arrhythmias such as TdP via the R-on-T phenomenon. It is generally thought that EADs are caused by problems with myocyte repolarization (which would prolong the QT interval), and thus QT prolongation has been linked to EAD-mediated arrhythmias such as TdP. Drugs such as haloperidol have been shown to block the potassium channels responsible for AP repolarization and, as a result, prolong the QT interval. However, it has become more apparent in recent years that prolonged repolarization (or QT interval) is neither sufficient nor necessary to produce the EADs (or ectopic beats) that cause arrhythmias [3-6].

The EAD is an arrhythmogenic entity implicated in a variety of abnormal rhythms, including ventricular tachycardias such as TdP, ventricular fibrillation, and atrial arrhythmias. In the normal cardiac action potential, Na+ channel opening produces the inward current necessary for the initial upstroke, L-type calcium channels produce a plateau caused by the influx of calcium, and K+ channels produce an outward current to ensure full repolarization. The simplistic explanation for EADs has always been that they occur when inward currents (Na+ and Ca2+) exceed outward currents (K+). This explains why EADs can occur as a result of K+ current inhibition, as seen with antiarrhythmics such as sotalol or in the presence of hypokalemia. The advantage of this viewpoint is just that: it is simple. However, it is far from comprehensive. In fact, there are cases where potassium current inhibition does not cause EADs, and others where potassium currents are augmented and EADs do occur [3,4,7]. The key to the genesis of EADs is not the duration or magnitude of the various currents that make up the action potential, but rather the timing of channel openings [8]. The classic counterexample to the simplistic idea of EAD genesis is the antiarrhythmic drug amiodarone, which acts via potassium channel blockade and causes QT prolongation but does not produce EADs or increase TdP risk [7,9]. Since the occurrence of EADs is not solely determined by the duration of the action potential, it follows that the risk of TdP is not solely determined by the QT interval.

Although much basic science and clinical research has called into question the validity of using QT prolongation to determine TdP risk, the message has been lost in translation to the bedside. In the clinic or hospital, too much weight is put on the baseline QT interval when deciding whether a drug can be used. A recent clinical study showed that the degree of QT prolongation does not correlate with the baseline QT interval [10]. Another study proposed the Tp-e (the time from the peak to the end of the T wave) or the Tp-e/QT ratio as indicators of arrhythmia risk, regardless of whether they occur in the presence of a long, short, or unchanged QT [11]. Yet another study proposed three EKG criteria (T-wave flattening, reverse use dependence of the QT interval, and instability of the T wave) to determine whether a drug is arrhythmogenic, citing better sensitivity for arrhythmia risk and earlier onset of changes compared with QT prolongation. This set of criteria even holds up in the paradoxical situation of prolonged QT with decreased arrhythmia risk, as with amiodarone [7]. When the day comes that QT prolongation is deemed unsatisfactory, better alternatives exist.

The study of arrhythmias has advanced significantly over the years, but clinical practice has unfortunately lagged behind. The major shortcoming of arrhythmia treatment in the clinic has been tunnel vision. For example, in the landmark CAST trial, Na+ channel blockade was used to suppress post-MI premature ventricular complexes, but the study had to be terminated because of increased mortality, partially a result of arrhythmias [3,12]. The lesson from that trial should have been that a multifaceted problem involving a variety of players cannot be eliminated by targeting only one of them. Unfortunately, a similar approach has been taken in using QT prolongation as a marker of TdP risk. The factors that influence arrhythmogenesis are far too numerous to focus on only one, and a new, more comprehensive approach should be considered.

Aneesh Bapat is a 4th year medical student at NYU Langone Medical Center

Peer reviewed by Neil Bernstein, MD, Departments of Medicine (Cardio Div) and Pediatrics, NYU Langone Medical Center

Image courtesy of Wikimedia Commons


1. Darpo B. Spectrum of drugs prolonging QT interval and the incidence of torsades de pointes. Eur Heart J Suppl. 2001;3:K70-K80.

2. Kannankeril P, Roden DM, Darbar D. Drug-induced long QT syndrome. Pharmacol Rev. 2010;62(4):760-781.

3. Weiss JN, Garfinkel A, Karagueuzian HS, Chen P-S, Qu Z. Early afterdepolarizations and cardiac arrhythmias. Heart Rhythm. 2010;7(12):1891-1899.

4. Ding C. Predicting the degree of drug-induced QT prolongation and the risk for torsades de pointes. Heart Rhythm. 2011.

5. Couderc J-P, Lopes CM. Short and long QT syndromes: does QT length really matter? J Electrocardiol. 2010;43(5):396-399.

6. Hondeghem LM. QT prolongation is an unreliable predictor of ventricular arrhythmia. Heart Rhythm. 2008;5(8):1210-1212.

7. Shah RR, Hondeghem LM. Refining detection of drug-induced proarrhythmia: QT interval and TRIaD. Heart Rhythm. 2005;2(7):758-772.

8. Tran D, Sato D, Yochelis A, et al. Bifurcation and chaos in a model of cardiac early afterdepolarizations. Phys Rev Lett. 2009;102(25):1-4.

9. van Opstal JM, Schoenmakers M, Verduyn SC, et al. Chronic amiodarone evokes no torsade de pointes arrhythmias despite QT lengthening in an animal model of acquired long-QT syndrome. Circulation. 2001;104(22):2722-2727.

10. Kannankeril PJ, Norris KJ, Carter S, Roden DM. Factors affecting the degree of QT prolongation with drug challenge in a large cohort of normal volunteers. Heart Rhythm. 2011.

11. Gupta P, Patel C, Patel H, et al. T(p-e)/QT ratio as an index of arrhythmogenesis. J Electrocardiol. 2008;41(6):567-574.

12. Cardiac Arrhythmia Suppression Trial (CAST) Investigators. Preliminary report: effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction. N Engl J Med. 1989;321(6):406-412.

From The Archives: Medical Etymology: The Origins of Our Language

January 21, 2016

Please enjoy this post from the archives dated March 28, 2012

By Robert Gianotti, MD, Todd Cutler, MD, and Patrick Cocks, MD

Welcome. We are proud to present the first installment of a new section dedicated to exploring the roots of common medical terminology. We hope this will give you a chance to incorporate a historical perspective into your daily practice and to reflect on the rich and often unexpected stories lying at the heart of our profession. This is our ode to the days of the giants…

It was the winter of 1933, and something was amiss on the farm of Ed Carlson. A dedicated cattle farmer, Mr. Carlson now faced a plague that threatened to destroy his livelihood. His cows were dying from a mysterious hemorrhagic illness that was not unique to his farm, but had been appearing in isolated clusters for years across the Great Plains. Not a man to idle his time away and pray for a higher power to save his beloved steer, he took it upon himself to find the answer. On the advice of his local veterinarian, he drove nearly 200 miles through a blizzard and ended up at the door of the University of Wisconsin Agricultural Experiment Station. As good fortune has a tendency to show itself at exactly the moment it is needed, he made the acquaintance of Dr. Karl Link and his faithful student Eugene Schoeffel. These two well-respected agricultural minds were hard at work on a vexing problem: how to make sweet clover, a commonly used feed staple, less bitter so that cows would happily eat it. They and others had known for some time that the ingredient responsible for the acrid flavor so unpalatable to the bovine sort was the benzopyrone named coumarin.

Farmer Carlson did not come empty-handed. In fact, he brought with him all the necessary pieces of the puzzle: the corpse of one of his prize heifers that had succumbed to spontaneous hemorrhage, a milk can of unclotted blood, and a bushel of rancid sweet clover that had been the only source of nourishment for his unlucky herd. Steadfast in their determination to solve the mystery of the “sweet clover disease,” a band of chemists led by Link set forth on a six-year journey that culminated in the isolation of dicumarol, the formaldehyde-crosslinked by-product of a reaction catalyzed by Aspergillus species, from the spoiled sweet clover. From there, a long lineage of coumarins with potent anticoagulant potential was born. Several years later, inspired while convalescing from tuberculosis, Dr. Link proposed a new tool in the long fight against rodent invasion. Soon after, derivative number 42, supported by the Wisconsin Alumni Research Foundation, was being used to deter furry invaders worldwide. Warfarin, named as a hybrid of the aforementioned W.A.R.F. and its coumarIN base, was now on its way to becoming a foundation of modern pharmacology, feared by mice, beloved by physicians.

Segments of this piece were inspired by the writings of Dr. Karl Link.

Link, KP. The Discovery of Dicumarol and Its Sequels. Circulation 1959, 19:97-107.

From The Archives: Does the BCG Vaccine Really Work?

October 1, 2015

Please enjoy this post from the archives dated March 14, 2012

By Mitchell Kim

Faculty Peer Reviewed

Mycobacterium tuberculosis, an acid-fast bacillus, is the causative agent of tuberculosis (TB), an infection that causes significant morbidity and mortality worldwide. A highly contagious infection, TB is spread by aerosolized pulmonary droplet nuclei containing the infective organism. Most infections manifest as pulmonary disease, but TB is also known to cause meningitis, vertebral osteomyelitis, and other systemic diseases through hematogenous dissemination.[1] In 2009, there were an estimated 9.4 million incident and 14 million prevalent cases of TB worldwide, with the vast majority of cases occurring in developing countries of Asia and Africa. Approximately 1.7 million patients died of TB in 2009.[2]

TB has afflicted human civilization throughout known history, and may have killed more people than any other microbial agent. Robert Koch first identified the bacillus in 1882, for which he was awarded the Nobel Prize in 1905. In 1921, Albert Calmette and Camille Guérin developed a live TB vaccine known as the bacille Calmette-Guérin (BCG) from an attenuated strain of Mycobacterium bovis.[3]

As the only TB vaccine, BCG has been in use since 1921[4] and is now the most widely used vaccine worldwide,[5] with more than 3 billion total doses given. BCG was initially administered as a live oral vaccine. This route of administration was abandoned in 1930 following the Lübeck (Germany) disaster, in which 27% of 249 infants receiving the vaccine developed TB and died; it was later discovered that the Lübeck vaccine had been contaminated with virulent M tuberculosis. Studies conducted in the 1930s subsequently established that the intradermal route was safe for mass vaccination.[6] The World Health Organization currently recommends BCG vaccination for newborns in high-burden countries, although the protection against TB is thought to dissipate within 10-20 years.[7] The BCG vaccine is not used in the US, where TB control emphasizes treatment of latently infected individuals.[3]

Although the vaccine is widely used, its efficacy in preventing pulmonary TB is uncertain, with studies showing a 0-80% protective benefit. A meta-analysis performed in 1994 showed that the BCG vaccine reduces the risk of pulmonary TB by 50% on average, with greater reductions in the risk of disseminated TB and TB meningitis (78% and 64%, respectively).[8] It is currently accepted that the BCG vaccine provides protection against TB meningitis and disseminated TB in children, as well as against leprosy in endemic areas such as Brazil, India, and Africa.[9]
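The "reduces the risk by 50%" figure corresponds to a relative risk (RR) of about 0.5 in the vaccinated group, since vaccine efficacy is conventionally computed as 1 − RR. A minimal sketch of that arithmetic, using hypothetical cohort counts rather than the meta-analysis data:

```python
# Illustrative sketch: vaccine efficacy as 1 - relative risk (RR),
# with hypothetical counts (not data from the 1994 meta-analysis).

def vaccine_efficacy(cases_vacc, n_vacc, cases_unvacc, n_unvacc):
    """Efficacy = 1 - (risk in vaccinated / risk in unvaccinated)."""
    risk_v = cases_vacc / n_vacc        # attack rate, vaccinated
    risk_u = cases_unvacc / n_unvacc    # attack rate, unvaccinated
    return 1 - (risk_v / risk_u)

# Hypothetical cohorts: 25/10,000 TB cases among vaccinated vs 50/10,000 unvaccinated
eff = vaccine_efficacy(25, 10_000, 50, 10_000)
print(f"{eff:.0%}")  # 50%
```

The same formula with RR = 0.22 or RR = 0.36 would reproduce the 78% and 64% figures quoted for disseminated TB and TB meningitis.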

There are several possible explanations for the variation in BCG vaccine efficacy across studies. Based on the observation that BCG vaccine trials showed greater efficacy at higher latitudes than at lower latitudes (P<0.00001), it has been hypothesized that exposure to certain endemic mycobacteria, thought to be more common at lower latitudes, provides natural immunity to indigenous populations, so that BCG vaccination adds little to this natural protection. The higher prevalence of skin reactivity to PPD-B (Mycobacterium avium-intracellulare antigen) at lower latitudes supports this theory. However, no conclusive link has been found between endemic Mycobacterium exposure and protection against TB. In addition, TB infection rates are highest at lower latitudes, where natural immunity should be the greatest;[5] this may indicate that other factors are at play. Another reason why the observed efficacy of BCG vaccines varies so widely is that they are produced at different sites around the world, with inconsistent quality control.[4] Also, the vaccine’s efficacy depends on the viability of the BCG organisms, which can be markedly altered by storage conditions.[10]

BCG is considered a safe vaccine,[4] with the main side effect being a localized reaction at the injection site with erythema and tenderness, followed by ulceration and scarring; this occurs almost invariably after correct intradermal administration. Overall, the rate of any adverse reaction has been reported to be between 0.1% and 19%,[11] and serious adverse reactions such as osteitis, osteomyelitis, and disseminated BCG infection are rare,[7] estimated to occur less than once per 1 million doses given.[11] Disseminated BCG infection is a serious complication seen almost exclusively in vaccinated patients with underlying immunodeficiency, such as HIV infection or severe combined immunodeficiency. This complication carries a high mortality rate of 80-83%, and the incidence of fatal cases is estimated at 0.19-1.56 per 1 million vaccinations given.[7]

Immunization with BCG increases the likelihood of a positive purified protein derivative (PPD) tuberculin skin test. This can complicate the interpretation of a PPD test and may lead to unnecessary preventive treatment in people who do not truly have latent TB infection. However, it has been shown that a person’s age at the time of BCG vaccination, as well as the years since vaccination, affects the risk of PPD positivity. Therefore, the US Preventive Services Task Force recommends PPD screening of high-risk patients and advises that induration >10 mm after PPD administration should not be attributed to the BCG vaccine. If a patient has prior exposure to the BCG vaccine, the CDC recommends detecting TB exposure with the QuantiFERON-TB Gold test (QFT-G, Cellestis Limited, Carnegie, Victoria, Australia), an interferon-gamma release assay, instead of the PPD. This test is specific for M tuberculosis proteins and does not cross-react with BCG. The major drawback of the QFT-G test is that it is roughly 3 times more expensive than the PPD test.[12]
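The test-selection and interpretation rules in this paragraph can be summarized as a small decision sketch. This is a teaching simplification of the recommendations as stated above, not official CDC or USPSTF guidance text:

```python
# Illustrative decision sketch of the screening logic described above;
# a simplification for teaching, not official CDC/USPSTF guidance.

def choose_latent_tb_test(prior_bcg: bool) -> str:
    """CDC suggests an interferon-gamma release assay (QFT-G) after BCG,
    since it does not cross-react with the vaccine strain."""
    return "QFT-G (interferon-gamma release assay)" if prior_bcg else "PPD skin test"

def interpret_ppd(induration_mm: float) -> str:
    """Per the USPSTF advice cited above, induration >10 mm should not be
    attributed to prior BCG vaccination."""
    if induration_mm > 10:
        return "positive: do not attribute to BCG; evaluate for latent TB"
    return "below the >10 mm high-risk threshold"

print(choose_latent_tb_test(prior_bcg=True))
print(interpret_ppd(12))
```

Real-world PPD interpretation uses additional risk-stratified cutoffs (5 mm and 15 mm) not modeled here.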

In summary, the BCG vaccine has been in use for 90 years to reduce the prevalence of TB infection. It is the most widely used vaccine worldwide, with 100 million doses administered every year.[7] Although the vaccine is compulsory in 64 countries and recommended in another 118, its use is uncommon in the US, where treatment of latent infection is the major form of TB control. The vaccine limits multiplication and systemic dissemination of TB[13] and decreases the morbidity and mortality of TB infection, but it has no effect on transmission[7] and no role in the secondary prevention of TB.[13] Its efficacy in preventing pulmonary TB is highly variable, but it is thought to be efficacious in preventing TB meningitis, disseminated TB, and leprosy. To address the BCG vaccine’s shortcomings in preventing pulmonary TB, substantial progress is being made in the field of TB vaccine development: in 2010, 11 vaccine candidates were being evaluated in clinical trials, 2 of them for efficacy.[9] Future developments may improve on the foundations built by the BCG vaccine in reducing the worldwide health burden of this ancient disease.

Mitchell Kim is a 3rd year medical student at NYU School of Medicine

Peer reviewed by Robert Holzman, MD, Professor Emeritus of Medicine and Environmental Medicine; Departments of Medicine (Infectious Disease and Immunology) and Environmental Medicine

Image courtesy of Wikimedia Commons


1. Raviglione MC, O’Brien RJ. Tuberculosis. In: Fauci AS, Braunwald E, Kasper DL, Hauser SL, Longo DL, Jameson JL, Loscalzo J, eds. Harrison’s Principles of Internal Medicine. 17th ed. New York, NY: McGraw-Hill; 2008: 1006-1020.

2. World Health Organization. Global Tuberculosis Control: WHO Report 2010. Accessed September 11, 2011.

3. Daniel TM. The history of tuberculosis. Respir Med. 2006;100(11):1862-1870.

4. World Health Organization. Initiative for Vaccine Research: BCG–the current vaccine for tuberculosis. Published 2011. Accessed September 11, 2011.

5. Fine PE. Variation in protection by BCG: implications of and for heterologous immunity. Lancet. 1995;346(8986):1339-1345.

6. Andersen P, Doherty TM. The success and failure of BCG–implications for a novel tuberculosis vaccine. Nat Rev Microbiol. 2005;3(8):656-662.

7. Rezai MS, Khotaei G, Mamishi S, Kheirkhah M, Parvaneh N. Disseminated Bacillus Calmette-Guérin infection after BCG vaccination. J Trop Pediatr. 2008; 54(6): 413-416.

8. Colditz GA, Brewer TF, Berkey CS, et al. Efficacy of BCG vaccine in the prevention of tuberculosis: Meta-analysis of the published literature. JAMA. 1994;271(9):698-702.

9. McShane H. Tuberculosis vaccines: beyond bacille Calmette-Guérin. Philos Trans R Soc Lond B Biol Sci. 2011;366(1579):2782-2789.

10. World Health Organization. Temperature sensitivity of vaccines. Published August 2006. Accessed October 30, 2011.

11. Turnbull FM, McIntyre PB, Achat HM, et al. National study of adverse reactions after vaccination with bacille Calmette-Guérin. Clin Infect Dis. 2002;34(4):447-453.

12. Rowland K, Guthmann R, Jamieson B. Clinical inquiries. How should we manage a patient with a positive PPD and prior BCG vaccination? J Fam Pract. 2006;55(8):718-720.

13. Thayyil-Sudhan S, Kumar A, Singh M, Paul VK, Deorari AK. Safety and effectiveness of BCG vaccination in preterm babies. Arch Dis Child Fetal Neonatal Ed. 1999;81(1):F64-F66.