By Morgan Simons, MD
With medicine advancing at such a rapid pace, it is crucial for physicians to keep up with the medical literature. This can quickly become an overwhelming endeavor given the sheer quantity and breadth of literature released on a daily basis. Primecuts helps you stay current by taking a shallow dive into recently released articles that should be on your radar. Our goal is for you to slow down and take a few small sips from the medical literature firehose.
Effects of once-weekly subcutaneous semaglutide on kidney function and safety in patients with type 2 diabetes: a post-hoc analysis of the SUSTAIN 1–7 randomised controlled trials (https://www.thelancet.com/journals/landia/article/PIIS2213-8587(20)30313-2/fulltext)
It is well known that poor glycemic control in type 2 diabetes mellitus can result in microvascular complications, including chronic kidney disease. The authors of this study sought to determine the effects of semaglutide on kidney function using a post-hoc analysis of seven randomized controlled trials, SUSTAIN 1–7. These initial studies explored the effects of semaglutide versus various comparators on HbA1c, body weight, and cardiovascular outcomes.
The original studies, SUSTAIN 1–7, included varying numbers of participants (388 to 3,297) and had different durations (30 to 104 weeks). In all trials, semaglutide was dosed subcutaneously once weekly to achieve a maintenance dose of 0.5 mg or 1.0 mg. Across trials, all participants were at least 18 years old and had a diagnosis of type 2 diabetes with an HbA1c of at least 7.0%. SUSTAIN 1, 4, and 5 excluded patients with an HbA1c greater than 10%; SUSTAIN 2, 3, and 7 excluded patients with an HbA1c greater than 10.5%; and SUSTAIN 6 had no upper limit. Exclusion criteria based upon kidney function also varied across trials: patients with an eGFR < 60 mL/min per 1.73 m² were excluded in SUSTAIN 2, 3, and 7; patients with an eGFR < 30 mL/min per 1.73 m² were excluded in SUSTAIN 1, 4, and 5; and patients on hemodialysis or peritoneal dialysis were excluded in SUSTAIN 6.
The main outcomes of the post-hoc analysis were eGFR, urine albumin-to-creatinine ratio (UACR), and the adverse kidney events reported in the original trials. Because of differences in trial design, data from each of the studies were pooled and analyzed with different statistical methods (ANCOVA, MMRM, etc.) depending upon which of these outcomes was being examined. For the eGFR outcome, investigators found that in SUSTAIN 1–5 and 7 there was an initial decrease in eGFR from baseline to week 12 with a plateau by week 30; overall, semaglutide decreased eGFR more than placebo in these trials. In SUSTAIN 6, there was also an initial decline in eGFR in the semaglutide group from baseline to week 16, but over the course of this 104-week trial there was no statistically significant difference in eGFR between the semaglutide and placebo arms. The UACR outcome was examined in SUSTAIN 1–6, and there was a statistically significant decrease in UACR in the semaglutide groups compared with the placebo groups. Across trials, there was no difference in adverse kidney events between semaglutide and placebo.
The primary limitations of this study are the heterogeneity of the data, the differences in trial design, and the fact that kidney function and its surrogates were not primary outcomes in all trials. The authors pooled data from multiple trials despite differences in data collection and trial design, and of the seven RCTs studied, only one, SUSTAIN 6, included kidney function as a primary outcome; the others collected data related to kidney function but were not designed with this outcome in mind. Ultimately, this study provides evidence that semaglutide negatively affects kidney function in the short term after initiation of therapy, but that its long-term effect on kidney function is not significantly different from that of placebo. When prescribing semaglutide, physicians should be aware of these effects as well as the positive effects on HbA1c, weight loss, and cardiovascular outcomes previously described in the SUSTAIN trials.
Risk for Serious Infection With Low-Dose Glucocorticoids in Patients With Rheumatoid Arthritis (https://www.acpjournals.org/doi/10.7326/M20-1594)
Both high- and low-dose glucocorticoid therapy are common in the treatment of rheumatic disease. Although the risks of high-dose, long-term glucocorticoid therapy, such as increased rates of infection, have been well studied, there is a dearth of information about the risks of lower-dose therapy. In this study, investigators used Medicare claims data and Optum's deidentified Clinformatics Data Mart database to examine the risk of hospitalized infection in a cohort of patients with rheumatoid arthritis receiving both disease-modifying antirheumatic drugs (DMARDs) and low-dose glucocorticoids.
To be included in the study cohort, patients needed to be 18 years of age or older, have at least two ICD-9 codes for rheumatoid arthritis documented at least seven days apart, and be receiving stable DMARD therapy. An extensive list of DMARDs and the amount of time required for each to qualify as stable is available in the study supplement. Ultimately, 172,041 Medicare patients with 247,297 observations of stable DMARD use and 44,118 Optum patients with 58,279 observations met the inclusion criteria. The qualifying cohort was further divided based upon glucocorticoid exposure: none, 5 mg or less per day, greater than 5 to 10 mg per day, or greater than 10 mg per day. All glucocorticoid doses were converted to prednisone-equivalent milligrams to standardize this classification. The primary outcome of interest was hospitalized infection after six months of stable DMARD therapy, defined by an ICD-9 code for infection during a hospital encounter. The Optum and Medicare groups were analyzed separately using cause-specific proportional hazards models. Analyses of cohorts in both databases demonstrated a dose-dependent relationship between glucocorticoids and hospitalized infection. Additionally, even low doses of glucocorticoids were associated with increased infection risk, with a cumulative one-year incidence of 14.4% (95% CI, 13.8% to 15.1%) compared with 11.0% (95% CI, 10.6% to 11.5%) in those not receiving glucocorticoids.
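The dose standardization step described above can be sketched in a few lines. The conversion factors below are standard clinical prednisone equivalencies rather than values taken from the paper, and the function names and category labels are invented for illustration:

```python
# Sketch of glucocorticoid dose standardization: convert a daily dose of a
# common glucocorticoid to prednisone-equivalent milligrams, then bucket it
# into exposure categories like those used in the study.
# Equivalencies are standard clinical values (mg equivalent to 5 mg prednisone).
EQUIV_TO_5MG_PREDNISONE = {
    "prednisone": 5.0,
    "prednisolone": 5.0,
    "methylprednisolone": 4.0,
    "hydrocortisone": 20.0,
    "dexamethasone": 0.75,
}

def prednisone_equivalent(drug: str, daily_dose_mg: float) -> float:
    """Convert a daily glucocorticoid dose to prednisone-equivalent mg."""
    return daily_dose_mg * 5.0 / EQUIV_TO_5MG_PREDNISONE[drug]

def exposure_category(pred_equiv_mg: float) -> str:
    """Bucket a prednisone-equivalent daily dose into the study's groups."""
    if pred_equiv_mg == 0:
        return "none"
    if pred_equiv_mg <= 5:
        return "<=5 mg/day"
    if pred_equiv_mg <= 10:
        return ">5-10 mg/day"
    return ">10 mg/day"

print(prednisone_equivalent("hydrocortisone", 20))   # 5.0
print(exposure_category(prednisone_equivalent("methylprednisolone", 4)))  # <=5 mg/day
```

This kind of conversion is what allows heterogeneous prescriptions to be compared on a single dose scale.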
As with any cohort study, the major limitation of this research is the possibility of confounding variables influencing the observed associations. An additional limitation specific to this study is the way in which glucocorticoid doses were identified: because the investigators chose to use pharmacy prescription-fill data, they could be over- or underestimating the actual daily glucocorticoid dose taken by patients. The study adds to the body of evidence that high-dose glucocorticoids are associated with infection while also shining light on the infection risk of daily low-dose glucocorticoid use.
Risk stratification of patients admitted to hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: development and validation of the 4C Mortality Score (https://www.bmj.com/content/370/bmj.m3339)
One of the significant challenges during the SARS-CoV-2 pandemic has been predicting the severity of infection in hospitalized patients. Many investigators have worked to create tools to stratify risk in these patients and to help institutions triage limited hospital resources. In this study, researchers developed the 4C Mortality Score to predict inpatient mortality in patients with COVID-19.
Patients included in the study were 18 years of age or older and had been hospitalized with COVID-19 in one of 260 hospitals across England, Scotland, and Wales. The derivation dataset included 35,463 patients who met these criteria and were hospitalized between February 6, 2020 and May 20, 2020, while the validation dataset included 22,361 patients seen between May 21, 2020 and June 29, 2020. Initially, 41 candidate predictor variables were identified, but after applying machine learning methods, 8 important predictors were retained to create the final 4C Mortality Score. The authors trialed multiple model classes to predict their primary outcome of mortality. Ultimately, the 4C Mortality Score was created using a penalized logistic regression model, which was evaluated against a gradient boosted decision tree model (XGBoost). The authors reported model performance using the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, positive predictive value, and negative predictive value. AUROC, the primary metric, is effectively a measure of a model's ability to discriminate between positive and negative cases; a model with an AUROC of 0.5 discriminates between cases no better than a coin toss, or random chance. Across all of these metrics, the authors report that the 4C Mortality Score's ability to predict mortality in COVID-19 approaches that of the more complicated XGBoost model in both the derivation and validation cohorts. The 4C Mortality Score has an AUROC of 0.77 (95% CI, 0.76 to 0.77) and outperformed 15 previously reported mortality prediction scores.
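For readers less familiar with the metric, AUROC can be understood as the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case (ties counted as half). A minimal sketch, using made-up scores and labels rather than any 4C data:

```python
# Pairwise (Mann-Whitney) estimate of the area under the ROC curve:
# the fraction of (positive, negative) pairs the score ranks correctly.
from itertools import product

def auroc(scores, labels):
    """AUROC for risk scores against binary labels (1 = event, 0 = no event)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# A model that scores everyone identically discriminates like a coin toss:
print(auroc([1, 1, 1, 1], [1, 0, 1, 0]))  # 0.5
# A model that ranks every death above every survivor discriminates perfectly:
print(auroc([9, 2, 8, 1], [1, 0, 1, 0]))  # 1.0
```

A score of 0.77, as reported for the 4C tool, therefore means a patient who died was given a higher predicted risk than a surviving patient about 77% of the time.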
The authors note two important limitations of their study: their inability to compare the 4C Mortality Score with other mortality prediction tools such as APACHE II, and the presence of patients in their cohort who had not completed their hospital courses. The comparison with more generalized mortality scores was not possible because of differences in data collection that led to incomparable predictor variables. Additionally, a small but meaningful percentage of the dataset (3.3%) had "incomplete episodes," meaning their hospital courses had not finished by the date of data collection; it is therefore unclear whether these patients survived their hospitalizations, and they could be misclassified within the derivation cohort. Overall, the 4C Mortality Score has the potential to provide frontline physicians with an easy and effective method of predicting in-hospital mortality in COVID-19 patients.
Opportunistic screening versus usual care for detection of atrial fibrillation in primary care: cluster randomised controlled trial (https://www.bmj.com/content/370/bmj.m3208)
The prevalence of atrial fibrillation increases with age, and the arrhythmia is associated with adverse events such as embolic stroke. At present, there are no guidelines for screening for atrial fibrillation in asymptomatic patients, and such cases are typically identified incidentally. In this cluster randomized controlled trial conducted in the Netherlands, investigators evaluated a proposed screening method versus standard of care for identifying atrial fibrillation.
Investigators assigned participating primary care practices throughout the Netherlands to either the usual care arm or the intention-to-screen arm; allocation was not blinded and was based on a region's prevalence of atrial fibrillation. Each participating practice contributed 200 patients who were older than 65 years of age and had no prior history of atrial fibrillation. In total, 19,189 patients from 47 intention-to-screen practices and 49 usual care practices were eligible, and 17,976 patients completed the study.
The intention-to-screen arm used three first-line screening methods: palpation of the pulse, a blood pressure cuff with an atrial fibrillation sensor, and a single-lead EKG. If a patient tested positive on any of these index tests, they proceeded to 12-lead EKG screening; an additional 10% of patients who tested negative on all three index tests were also allocated to 12-lead EKG. Thereafter, those determined not to be in atrial fibrillation on 12-lead EKG were invited to wear a Holter monitor for two weeks. In the usual care arm, physicians provided the standard of care in the Netherlands, which consists of assessing heart rhythm when checking blood pressure or when patients are symptomatic. The study was conducted for one year, and the primary outcome was diagnosis of atrial fibrillation within that time frame. In the analysis, a logistic mixed effects model with an intercept representing general practice was used to account for clustering within practices, and sensitivity and survival analyses were used to assess time to atrial fibrillation diagnosis between the two arms. For the primary outcome, the hazard ratio was 1.06 (0.84 to 1.35), and the authors concluded that there was no significant difference in the diagnosis of atrial fibrillation between the two groups.
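As a rough illustration only, the screening cascade described above can be sketched as follows; all function and parameter names are hypothetical, and the logic is a simplification of the trial protocol:

```python
# Simplified sketch of the intention-to-screen cascade: three first-line
# index tests, a confirmatory 12-lead EKG for any positive result (plus a
# 10% sample of triple negatives), then an offer of two weeks of Holter
# monitoring if the 12-lead EKG is not diagnostic.
import random

def screen_patient(pulse_irregular: bool, bp_cuff_afib_flag: bool,
                   single_lead_ekg_afib: bool, twelve_lead_shows_afib: bool,
                   rng: random.Random) -> str:
    any_index_positive = (pulse_irregular or bp_cuff_afib_flag
                          or single_lead_ekg_afib)
    # Triple-negative patients still had a 10% chance of confirmatory EKG.
    needs_12_lead = any_index_positive or rng.random() < 0.10
    if not needs_12_lead:
        return "screening complete: no atrial fibrillation detected"
    if twelve_lead_shows_afib:
        return "atrial fibrillation diagnosed on 12-lead EKG"
    return "offer two-week Holter monitoring"

rng = random.Random(0)
print(screen_patient(True, False, False, True, rng))
# → atrial fibrillation diagnosed on 12-lead EKG
```

Laying the protocol out this way highlights where attrition occurred in practice: at the index tests (practices that did not screen everyone) and at the Holter step (participants who declined monitoring).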
The primary limitations of the study included the increased burden of screening on clinical staff and a possible selection bias among screened individuals. Because of the additional work of incorporating the three index tests required in the intention-to-screen arm, several primary care practices did not screen all eligible patients. Additionally, the researchers noted a possible selection bias toward young, "worried-well" individuals partaking in screening. Finally, many participants chose not to undergo two weeks of Holter monitoring following initially positive screening tests. Overall, the study demonstrates that although research into screening methods for atrial fibrillation is ongoing at both local clinical practices and large corporations such as Apple, at this time screening is no better than the standard of care at detecting atrial fibrillation.
Non-alcoholic fatty liver disease and risk of incident diabetes mellitus: an updated meta-analysis of 501 022 adult individuals (https://gut.bmj.com/content/early/2020/09/16/gutjnl-2020-322572)
This meta-analysis of 501,022 participants from 33 studies examined the association between non-alcoholic fatty liver disease (NAFLD) severity and the development of diabetes. Those with severe NAFLD were more likely to develop diabetes (n=9 studies; random-effects HR 2.69, 95% CI 2.08 to 3.49; I²=69%), and overall the risk of developing diabetes corresponded to the severity of NAFLD. In all cases of NAFLD, physicians should consider the patient's long-term risk of developing diabetes.
Norepinephrine Dysregulates the Immune Response and Compromises Host Defense during Sepsis (https://www.atsjournals.org/doi/full/10.1164/rccm.202002-0339OC)
Investigators performed in vitro, murine, and human studies to determine the effects of norepinephrine on the immune system. Following challenges with LPS (an immunostimulatory agent), norepinephrine, and vasopressin in various combinations, the results suggest that norepinephrine promotes an anti-inflammatory cytokine response. Based on the results of this and other studies, continued exploration of the potential anti-inflammatory effects of norepinephrine on the human immune system should be undertaken.
Laxative Use Does Not Preclude Diagnosis or Reduce Disease Severity in Clostridioides difficile Infection (https://academic.oup.com/cid/article/71/6/1472/5581227)
By studying 209 patients with confirmed C. difficile infection, 65 of whom had recently taken laxatives, investigators demonstrated that current IDSA guidelines regarding C. difficile testing within 48 hours of laxative use may be incorrect and may lead to cases of delayed diagnosis. In cases of high clinical suspicion for C. difficile infection, the study suggests that physicians should not forgo testing in patients with recent laxative use.
Dr. Morgan Simons is a first-year resident at NYU Langone Health
Peer reviewed by David Kudlowitz, MD, Associate Editor, Clinical Correlations
Image courtesy of Wikimedia Commons