Myth vs. Reality: The July Effect

August 8, 2012


By Mark Adelman, MD

Faculty Peer Reviewed

Another July 1st has come and gone, marking the yearly transition in US graduate medical education of interns to junior residents, junior residents to senior residents, and senior residents to fellows. With this annual mid-summer mass influx of nearly 37,000 interns and other trainees[1] taking on new clinical responsibilities, learning to use different electronic medical record systems, and navigating the other idiosyncrasies of unfamiliar institutions, one cannot help but wonder what implications this may have for patient safety.

The notion that nationwide morbidity and mortality increase in July as thousands of interns, residents and fellows adjust to their new roles is typically referred to as the “July effect” in both the lay press[2] and medical literature;[3,4] our British colleagues often refer to their analogous transition every August as the decidedly more ominous “killing season.”[5]

But what does the available evidence suggest regarding this supposed yearly trend in adverse outcomes? Should we advise loved ones to avoid teaching hospital ERs, wards and ORs every July? Unfortunately, one cannot draw firm conclusions from the published literature, but some recent studies may be cause for concern.

There is much disagreement among the medical and surgical specialties and even within individual fields. For example, a retrospective review of 4325 appendicitis cases at two California teaching hospitals[6] found no significant difference in post-op wound infection rates in July/August vs. all other months (4.8% vs. 4.3%, p=0.6), nor was there a significant difference in the need for post-op abscess drainage (1.2% vs. 1.5%, p=0.6) or length of hospitalization (2.5 +/- 2.8 days vs. 2.5 +/- 2.2 days, p=1.0). In contrast, a retrospective review of a nationwide sample of 2920 patients hospitalized for surgical intervention of spinal metastases noted increased mortality in July vs. August-June (OR 1.81; 95% CI 1.13-2.91; p=0.01) and an increased intra-operative complication rate (OR 2.11; 95% CI 1.41-3.17; p<0.001), but no increase in post-operative complications (OR 1.08; 95% CI 0.81-1.45; p=0.60).[7]
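To make summary statistics of this kind concrete, the sketch below shows how an odds ratio and its Wald 95% confidence interval are typically derived from a 2x2 table of outcomes. The counts here are invented for illustration and are not the data from either study.

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts (July deaths/survivors vs. rest-of-year deaths/survivors):
print(odds_ratio_with_ci(20, 230, 95, 1975))  # ~ (1.81, 1.10, 2.98)
```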

Turning to studies of patients with medical diagnoses, a single-institution retrospective review of patients admitted with either acute coronary syndrome (764 patients) or decompensated heart failure (609 patients) also failed to find evidence of a July effect.[8] The researchers compared in-hospital mortality and peri-procedural complications (for patients who underwent PCI or CABG) in July-September vs. October-June and found no significant difference in mortality or complication rates (1.0% vs. 1.4%, p=0.71, and 2.1% vs. 2.8%, p=0.60, respectively). The investigators were also able to track the use of aspirin, beta-blockers, statins, and ACE inhibitors/ARBs at the time of discharge, as these are standard quality metrics for these two cardiac conditions; again, no significant difference was found between July-September and October-June in prescriptions for any of these guideline-recommended medications.
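For rate comparisons like these, a significance test on a 2x2 table is the usual first step. Below is a minimal sketch using Fisher's exact test on invented counts chosen to mimic a roughly 1.0% vs. 1.4% mortality split; these are not the study's actual data, and the authors' exact methods may have differed.

```python
from scipy.stats import fisher_exact

# Hypothetical counts, NOT the study's raw data:
# rows = July-September vs. October-June; columns = died, survived
table = [[4, 396],    # ~1.0% in-hospital mortality
         [14, 986]]   # ~1.4% in-hospital mortality
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2f}")  # p well above 0.05
```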

In a rather unique examination of over 62 million computerized US death certificates from 1979 to 2006, Phillips and Barker developed a least-squares regression equation to compare observed with expected inpatient deaths on a monthly basis over this 28-year period.[9] Looking specifically at “medication error” (i.e., a preventable adverse drug event) as the primary cause of death, they found that July was the only month of the year (both on a yearly basis and in aggregate over the entire study period) in which the ratio of observed to expected deaths exceeded 1.00. This finding held only for medication errors, not for other causes of death. That the spike in mortality is due to the presence of new residents is further suggested by their comparison of US counties with and without teaching hospitals: the elevated ratio of observed to expected medication-error deaths in July was present in counties with teaching hospitals but not in those without, and counties with a greater concentration of teaching hospitals showed a greater July spike.
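The general approach — comparing each month's observed death count to the count predicted by a trend fitted across all months — can be sketched as follows. The data here are synthetic, with a 5% July excess injected deliberately so the ratio method has something to find; Phillips and Barker's actual regression model was more elaborate than this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly inpatient death counts over 10 years (NOT the certificate
# data): a slow secular trend plus noise, with an artificial 5% July excess.
t = np.arange(120)
deaths = 1000.0 - 0.5 * t + rng.normal(0, 10, t.size)
deaths[t % 12 == 6] *= 1.05  # index 6 == July if the series starts in January

# A least-squares trend over all months supplies the "expected" count; the
# observed/expected ratio flags months that sit above the trend line.
slope, intercept = np.polyfit(t, deaths, 1)
expected = slope * t + intercept
oe_ratio = deaths / expected

# Averaging the ratio by calendar month, only July should exceed 1.00 here.
for month in range(12):
    print(month + 1, round(oe_ratio[t % 12 == month].mean(), 3))
```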

With such conflicting evidence in published reports, where is one to turn for guidance? In 2011, Annals of Internal Medicine published a systematic review by Young and colleagues that synthesized the findings of 39 studies published since 1989.[10] Studies were graded as higher or lower quality based on factors such as statistical adjustment for confounders (e.g., patient demographics and case mix), accounting for seasonal and year-to-year variation, and the presence or absence of a concurrent control group. Studies were further stratified by the outcomes examined: mortality, morbidity and medical errors, and efficiency (e.g., length of stay, OR time). Perhaps the most interesting finding was that the higher-quality studies were more likely to detect a “July effect” on morbidity and mortality outcomes than the lower-quality studies; for example, 45% of the higher-quality studies noted an association between housestaff turnover and mortality, but only 6% of the lower-quality studies did. Reported effect sizes ranged from a relative risk increase of 4.3% to 12.0% to an adjusted odds ratio of 1.08 to 1.34. The authors did caution that study heterogeneity does not permit firm conclusions about the degree of risk posed by trainee changeover, or about which features of residency programs are particularly problematic and should therefore be the target of future interventions.
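An “adjusted” odds ratio of the kind the review aggregates comes from a regression model that controls for confounders such as case mix. The sketch below fits a logistic regression on synthetic admissions with a single confounder (illness severity); every variable name and coefficient is invented for illustration and does not correspond to any of the reviewed studies.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic admissions: a July indicator plus one confounder (severity),
# with sicker patients deliberately over-represented in July.
n = 20000
july = rng.integers(0, 2, n)
severity = rng.normal(0, 1, n) + 0.3 * july
log_odds = -3.0 + 0.15 * july + 1.0 * severity
died = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(float)

# Logistic regression; exponentiating the July coefficient yields the
# adjusted odds ratio (it should recover roughly exp(0.15) ~ 1.16).
X = sm.add_constant(np.column_stack([july, severity]))
result = sm.Logit(died, X).fit(disp=0)
print("adjusted OR for July admission:", round(np.exp(result.params[1]), 2))
```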

Clearly, the question of whether the “July effect” exists is a complicated one that is difficult to answer through observational studies; were it otherwise, the medical literature would not be full of studies with widely divergent conclusions. The systematic review by Young’s group, which appears to be the only recent review on the topic, can hopefully provide some clarity. In my opinion, the greatest contribution by Young et al. was their finding that higher-quality studies were more likely to detect a “July effect” on mortality and efficiency. Many studies attempt to address this question, but not all of their conclusions are equally valid. The next logical step is to examine in a more focused way the potential underlying causes of such an effect. Is it the lack of clinical experience and technical ability among a large group of new trainees? Is it unfamiliarity with clinical information systems or institutional protocols and practices? Or is it perhaps poor communication and teamwork among new coworkers? Targeted interventions could include enhanced supervision of new housestaff by attendings, limits on the overall clinical workload of new trainees, avoidance of overnight responsibilities, simulation-based team training, or even staggered start dates for new housestaff.[11] While it may be difficult to conclude from the currently available evidence which of these changes would be the highest yield, I believe that the impact of the “July effect” should not be discounted and that additional steps must be taken to maximize patient safety during this annual transitional period.

Dr. Mark Adelman is a second-year resident at NYU Langone Medical Center.

Peer reviewed by Patrick Cocks, MD, Program Director, NYU Internal Medicine Residency, NYU Langone Medical Center


References

1. Accreditation Council for Graduate Medical Education. Data Resource Book, Academic Year 2010-2011. Available at: http://www.acgme.org/acWebsite/dataBook/2010-2011_ACGME_Data_Resource_Book.pdf. Accessed July 9, 2012.

2. O’Connor A. Really? The Claim: Hospital Mortality Rates Rise in July. The New York Times. July 4, 2011. Available at: http://well.blogs.nytimes.com/2011/07/04/really-the-claim-hospital-mortality-rates-rise-in-july. Accessed July 9, 2012.

3. Kravitz RL, Feldman MD, Alexander GC. From the editors’ desk: The July effect [editorial]. J Gen Intern Med. 2011;26(7):677. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3138586/

4. McDonald RJ, Cloft HJ, Kallmes DF. Impact of admission month and hospital teaching status on outcomes in subarachnoid hemorrhage: evidence against the July effect. J Neurosurg. 2012;116(1):157-63. http://thejns.org/doi/full/10.3171/2011.8.JNS11324

5. Hough A. New junior doctor rules ‘will stop NHS killing season.’ The Telegraph. June 23, 2012. Available at: http://www.telegraph.co.uk/health/healthnews/9350659/New-junior-doctor-rules-will-stop-NHS-killing-season.html. Accessed July 9, 2012.

6. Yaghoubian A, de Virgilio C, Chiu V, Lee SL. “July effect” and appendicitis. J Surg Educ. 2010;67(3):157-60. http://www.sciencedirect.com/science/article/pii/S1931720410000711

7. Dasenbrock HH, Clarke MJ, Thompson RE, Gokaslan ZL, Bydon A. The impact of July hospital admission on outcome after surgery for spinal metastases at academic medical centers in the United States, 2005 to 2008. Cancer. 2012;118(5):1429-38. http://onlinelibrary.wiley.com/doi/10.1002/cncr.26347/abstract

8. Garcia S, Canoniero M, Young L. The effect of July admission in the process of care of patients with acute cardiovascular conditions. South Med J. 2009;102(6):602-7. http://journals.lww.com/smajournalonline/pages/articleviewer.aspx?year=2009&issue=06000&article=00013&type=abstract

9. Phillips DP, Barker GE. A July spike in fatal medication errors: a possible effect of new medical residents. J Gen Intern Med. 2010;25(8):774-9. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2896592

10. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July effect”: impact of the academic year-end changeover on patient outcomes: a systematic review. Ann Intern Med. 2011;155(5):309-15. http://annals.org/article.aspx?volume=155&page=309

11. Barach P, Philibert I. The July effect: fertile ground for systems improvement. Ann Intern Med. 2011;155(5):331-2. http://annals.org/article.aspx?volume=155&page=331