
Myths and Realities: Colon Cleansing: Healthful or just a load of @$%!

July 16, 2009

Chau Che MD

Faculty Peer Reviewed

You'll have increased energy, radiant skin, reduced joint pain, improved asthma symptoms, and best of all…you will lose weight. These are some of the purported benefits of removing "toxins" (otherwise known as undigested material) from the colon through cleansing. As with fashion, music, and art, what's old has a way of becoming trendy again…especially when celebrities such as Beyoncé talk about it on the Oprah Winfrey show. Colon cleansing has become popular again, but it is far from a new idea.

The concept of disease-causing toxins arising from the bowel, known as intestinal autointoxication, dates back to the Egyptians of the 16th century BC, who practiced colon cleansing. Physicians at that time believed that feces, absorbed into the circulation, were responsible for producing fever and pus (7,8). The Greeks also believed that incomplete digestion of food resulted in residue ascending to the head and causing disease (1,2,8). The practice of colon cleansing can be traced throughout history, although its popularity has waxed and waned, and many well-respected physicians of the 1800s and 1900s advocated colonic cleansing to maintain good health. The concept regained prominence in 1884, when Charles Bouchard coined the term intestinal autointoxication, claiming that "[Man] is continually on the threshold of disease. Every moment of his life he runs the risk of being overpowered by poisons generated within his system" (1). Inherent in his theory was that the intestinal tract was responsible for these poisons.

Élie Metchnikoff, a Russian-born scientist who won a Nobel Prize and is considered the Father of Probiotic Research, expanded further on autointoxication, describing the colon as "the reservoir of waste of the digestive processes, [that] stagnates long enough to putrefy. The products of putrefaction are harmful. When fecal matter is allowed to remain in the intestine…certain products are absorbed by the organism and produce poisoning" (2). Acceptance of this "disease entity" resulted in hospital admissions and reached its apex in the early 20th century, when a prominent surgeon, Sir William Arbuthnot Lane, began performing colectomies to "cure intestinal autointoxication." The practice of colonic cleansing in its various forms fell out of favor by the 1930s, when books such as "Nostrums and Quackery," published by the American Medical Association Press, challenged the health benefit claims [Cramp AJ, ed. Nostrums and Quackery, vols. 1 and 2. Chicago: American Medical Association Press, 1911 and 1921].

In today's society, the popularity of colonic cleansing has been driven largely by interest in leading a healthy lifestyle. Numerous products on the market today, including Colonix, Oxypowder, Almighty Cleanse, and Dual Action Cleanse, as well as high colonics, have been promoted in the media as effective methods of expelling impurities from the body by cleansing the colonic lumen. These products are popular, but is there any truth behind their claimed benefits?

Despite evidence to the contrary, the idea that colon cleansing is therapeutic because it removes impurities lives on. These claims, however, largely ignore normal physiology. The digestive tract has inherent properties that keep the colon clean. After food ingestion, the small intestine absorbs nutrients and contracts to move indigestible residue into the colon. The colon continues the digestive process by absorbing and secreting electrolytes and fluids while its resident bacteria ferment the undigested food products (4). These are just some of the mechanisms by which the digestive tract ensures that the colon does not become encrusted with fecal matter, as some products claim. The black, rubbery material passed during a colon cleanse is actually a byproduct of the supplements and fiber in the colon cleansing product, not the old fecal material that consumers are led to believe it is (5).

Colon cleansing is at times a necessary medical procedure, for example in preparation for colonoscopy. Incorrect use and overuse of colonic cleansing, however, may cause harm by disrupting the colonic environment. The colon contains indigenous probiotic bacteria that play important roles, including digestion of nutrients, inhibition of pathogenic enteric bacteria, and maintenance of the colonic epithelial barrier (6). These bacteria also induce intestinal production of anti-inflammatory cytokines and reduce production of proinflammatory cytokines (3,6). Of note, the rising incidence of inflammatory bowel disease in the United States has been linked to the loss of resident bacteria and thus of their various functions (6). Colon cleansing products may disrupt the natural flora of the colon by removing both pathogenic and indigenous bacteria, because the detoxification process does not differentiate between the two. Disruption of the resident bacteria may actually lead to increased inflammation or infection (6). Cleansing methods such as "high colonics," which introduce large volumes of fluid into the large intestine, have been associated with perforation, electrolyte imbalances, and transmission of pathogens from improperly sterilized equipment.

In 1935 the physician Sir Arthur Hurst stated that "No organ in the body is so misunderstood, so slandered and maltreated as the colon" (2). This statement still resonates today, as preoccupation with intestinal regularity persists among both the public and health practitioners. Conventional physicians and researchers were at the forefront of the intestinal autointoxication movement in the 19th and early 20th centuries; today it is largely unorthodox practitioners who champion the theory. Numerous experiments have shown that in healthy individuals putative toxins are generally not absorbed from the colon, and those that are absorbed are inactivated by the liver (2,8). Although no double-blind trials exist to support or contest the claims of colonic cleansing, it is best to allow the colon to continue doing its job instead of offering it unsolicited help from colon cleansing products.

Reviewed by Michael Poles MD and Fritz Francois MD, NYU Division of Gastroenterology

1. Brown, J.H., Sonnenberg, A. The Wax and Wane of Intestinal Autointoxication and Visceroptosis-Historical Trends of Real Versus Apparent New Digestive Diseases. The American Journal of Gastroenterology, 2002; 97(11): 2695-2699.
2. Chen, T.S.N., Chen, P.S.Y. Intestinal Autointoxication: A Medical Leitmotif. Journal of Clinical Gastroenterology, 1989; 11(4): 434-41.
3. Chichlowski, M, Hale, L.P. Bacterial-mucosal interactions in inflammatory bowel disease-an alliance gone bad. American Journal of Physiology: Gastrointestinal and Liver Physiology, 2008; 295: G1139-1149.
4. Hasler, W. Harrison’s Principles of Internal Medicine. Ed. Kasper, D.L., et al. New York: McGraw-Hill, 2005. 1725.
5. Horne, S. Colon Cleansing: A Popular, but Misunderstood Natural Therapy. Journal of Herbal Pharmacotherapy, 2006; 6(2): 93-99.
6. Ismail, A.S., Hooper, L.V. Epithelial Cells and Their Neighbors. IV. Bacterial contributions to intestinal epithelial barrier integrity. American Journal of Physiology: Gastrointestinal and Liver Physiology, 2005; 289: G779-G784.
7. Muller-Lissner, S.A., Kamm, M.A., Scarpignato, C., Wald, A. Myths and Misconceptions About Chronic Constipation. American Journal of Gastroenterology, 2005; 100: 232-242.
8. Sullivan-Fowler, M. Doubtful Theories, Drastic Therapies: Autointoxication and Faddism in the Late Nineteenth and Early Twentieth Centuries. The Journal of the History of Medicine and Allied Sciences, 1995; 50: 364-390.

Myths and Realities: Is Breakfast the Most Important Meal of the Day?

June 18, 2009

Chau Che MD

Faculty Peer Reviewed

In an age when two thirds of adults are overweight or obese and obesity rates in children continue to rise, could an intervention as simple as eating breakfast daily help combat the problem? Skipping breakfast has become increasingly common among adults and adolescents in the United States, with the proportion of adults and children who skip breakfast rising from fourteen to twenty-five percent between 1965 and 1991 (1,3). Skipping breakfast may also be detrimental in other ways. To assess the role breakfast plays, one must examine its relationship to body mass index (BMI), cognition, and fatigue.

Despite widespread belief in the importance of breakfast, commonly cited reasons for skipping it include lack of time for preparation and consumption as well as fear of weight gain. It has been shown that breakfast eaters tend to have a lower BMI than breakfast skippers (1,3). While much of this evidence is cross-sectional and it is difficult to infer a causal relationship between skipping breakfast and weight gain, theories exist regarding the pathophysiology of weight gain in breakfast skippers. Eating breakfast is associated with increased eating frequency, which in turn is associated with a lower BMI (1-3), a relationship believed to result from increased diet-induced thermogenesis and a higher metabolic rate (1). Studies show that obese persons are prone to skip breakfast and to consume much of their caloric intake at night, whereas normal-weight persons' caloric intake is more evenly distributed throughout the day (1,2). Late-night eating promotes glycogen storage, and unless that glycogen is burned as fuel, the excess energy is stored as fat, causing weight gain (2). Breakfast skipping has also been linked to poorer eating habits, including larger meal portions, impulsive snacking, a higher intake of dietary fat, and minimal fruit consumption (1,3). Although no causal relationship between skipping breakfast and weight gain has been established, these theories suggest an association.

The Third National Health and Nutrition Examination Survey (NHANES III) further investigated the inverse relationship between BMI and breakfast consumption by examining breakfast type, total daily energy intake, and BMI. This large, population-based study of 16,452 adults reiterated that skipping breakfast is not an effective weight-loss technique. Importantly, the study also noted that breakfast choice plays a role: individuals who ate cereal (hot or cold) or quick breads for breakfast, as opposed to meat and eggs, had a significantly lower BMI (1). Cross-sectional data from a large prospective study, the Seasonal Variation of Blood Cholesterol Study (SEASONS), also evaluated the relationship between obesity and eating patterns. Subjects who regularly skipped breakfast had 4.5 times the risk of obesity of those who ate breakfast regularly (95% CI 1.57-12.90) (2).

Independent of other factors, a Japanese study published in Nutrition showed that skipping breakfast was associated with fatigue in medical students. The Chalder Fatigue Scale, which has been shown to be reliable and valid in previous studies, was used to assess the severity of fatigue. Skipping breakfast every day, compared with eating breakfast every day, was associated with a higher prevalence of fatigue, through an unknown mechanism (OR 5.62, 95% CI 1.28-24.76, P < 0.022). Fatigue, in turn, was associated with poor academic performance, decreased school attendance, displeasure in school, and poor lecture learning (7). Limitations of the study included its small sample size and its inability to establish a cause-and-effect relationship.

The beneficial effects of consuming breakfast are also apparent in the adolescent population. Adolescents who eat breakfast have a lower BMI than their breakfast-skipping counterparts (3). The effect of breakfast consumption extends into the classroom as well: academic performance, problem-solving skills, school attendance, and mood are all affected by breakfast consumption (6). A randomized intervention study of 569 students aged 11 to 13 years reported that breakfast consumed thirty minutes before testing improved short-term cognitive functions such as immediate recall, acquisition, and recognition, as measured by the Rey Auditory-Verbal Learning Test, in which various trials involve recognizing, learning, and remembering lists of common nouns (6,8). Although the mechanism is not clear, the effects may be due to changes in glucose and insulin in those who skip breakfast, as well as changes in other hormones and neurotransmitters in the brain involved in cognitive function (5,6). Similar results have been demonstrated in experimental studies in adults, in which an increased supply of glucose and nutrients to the central nervous system before or after learning improved cognitive function and memory (6,8).

A step in the right direction is to start the morning by heeding the "myth" that breakfast is the most important meal of the day; in this case, the evidence suggests it holds up. With the benefits of a lower body mass index, reduced fatigue, and improved cognition, we should all eat our Wheaties to start the day!

Reviewed by Michelle McMacken MD, Assistant Professor of Medicine, NYU Division of General Internal Medicine

Dr. Che is a first-year internal medicine resident at NYU Medical Center.

[1] Cho, S., Dietrich, M., Brown, C, et al. The Effect of Breakfast Type on Total Daily Energy Intake and Body Mass Index: Results from the Third National Health and Nutrition Examination Survey (NHANES III). Journal of the American College of Nutrition, 2003; 22(4): 296-302.

[2] Ma, Y., Bertone, E., Staneck, EJ., et al. Association between Eating Patterns and Obesity in a Free-living US Adult Population. American Journal of Epidemiology, 2003; 158(1): 85-92.

[2] Mota, J., Fidalgo, F., Silva R., et al. Relationships Between Physical Activity, Obesity and Meal Frequency in Adolescents. Annals of Human Biology, 2008; 35(1): 1-10.

[3] Nicklas, TA., Baranowski, T., Cullen, KW., et al. Eating Patterns, Dietary Quality and Obesity. Journal of the American College of Nutrition, 2001; 20(6), 599-608.

[4] Ogden CL, Carroll MD, McDowell MA, Flegal KM. Obesity among adults in the United States- no change since 2003-2004. NCHS data brief no 1. Hyattsville, MD: National Center for Health Statistics. 2007.

[5] Pollitt, E., Lewis, NL., Garzat, C., et al. Fasting and Cognitive Function. Journal of Psychiatric Research, 1982; 17(2): 169-174

[6] Rampersaud, GC., Pereira, MA., Girard, BL. Et al. Breakfast Habits, Nutritional Status, Body Weight, and Academic Performance in Children and Adolescents. Journal of the American Dietetic Association, 2005; 105: 743-760.

[7] Tanaka, M., Mizuno, K., Fukuda, S., et al. Relationships Between Dietary Habits and the Prevalence of Fatigue in Medical Students. Nutrition, 2008; 24: 985-989.

[8] Vaisman N, Voet H, Akivis A, Vakil E. Effect of breakfast timing on the cognitive functions of elementary school students. Archives of Pediatrics & Adolescent Medicine, 1996; 150: 1089-1092.

Myths and Realities: Do Power Lines Cause Cancer?

May 20, 2009


Aditya Mattoo MD

Faculty Peer Reviewed

Prompted by personal experience, I thought I would explore the alleged causative role of power lines in hematologic malignancies for the next installment of Myths and Realities. In recent years, two close family friends living in separate locations, but both in homes adjacent to lots with electrical transformers, were diagnosed with multiple myeloma and non-Hodgkin's lymphoma. Naturally, the coincidence did not go unnoticed, so I decided to put power lines on trial and look through the literature to see whether a correlation exists or whether it is all just media-hyped paranoia.

Coincidentally, this year marks the 30th anniversary of the beginning of this saga. In March of 1979, Wertheimer et al published an article suggesting a higher incidence of hematologic malignancies in children who lived near high-current power lines in the Denver area.(1) This initial study implied a possible carcinogenic role for the electromagnetic fields (EMFs) generated by power lines, although the authors clearly stated in the article that there is "no independent evidence or theoretical understanding which seems to support this possibility."

That didn't stop the mainstream media frenzy that ensued. The general public was put on alert to this potential silent killer, largely as a result of the work of Paul Brodeur. As a writer for The New Yorker, he brought national attention to the debate in a series of articles and books (one sensationally titled "Currents of Death") published in the late 1980s and early 1990s. His "exposés" also alleged a government and industry conspiracy to cover up the relationship between EMFs and cancer.

A quick physics refresher might be in order at this point. Harking back to our premedical days, recall that the movement of a charged particle (i.e., a current) generates both an electric and a magnetic field. The electric field arises from the difference in electrical potential between two points in a circuit (i.e., voltage), while the flow of the current generates a magnetic field; together, these are referred to as EMFs. The electromagnetic spectrum can be divided into non-ionizing radiation (low-frequency waves, radio waves, microwaves, infrared, visible light, and near-ultraviolet light) and ionizing radiation (medium/far ultraviolet, x-rays, and gamma rays) according to its ability to break molecular bonds. Ionizing radiation has mutagenic potential because it can break the bonds of DNA. The EMFs generated by a typical 60-Hz residential electric current, however, fall in the extremely low frequency (non-ionizing) region of the spectrum. These fields are on the order of 0.01 to 0.05 µT, weaker than those associated with radio waves. To put this in perspective, an editorial on the debate published in the New England Journal of Medicine noted that the earth's static magnetic field is on the order of 50 µT and the field generated by using an electric razor is approximately 60 µT, both roughly 1000 times greater than the EMF generated by the current delivered to your home.(2)
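For readers who like to see the arithmetic, the comparison above can be reproduced with a few lines of Python. This is only an illustrative sketch using the approximate field strengths quoted in this paragraph, not new measurements.

# Rough comparison of the magnetic field strengths cited above (approximate values, in microtesla).
residential_emf_uT = 0.05   # upper end of the 0.01-0.05 µT range for 60-Hz household current
earth_field_uT = 50.0       # Earth's static magnetic field, per the NEJM editorial cited (2)
razor_field_uT = 60.0       # field generated while using an electric razor

print(f"Earth's static field vs residential EMF: ~{earth_field_uT / residential_emf_uT:.0f}x")
print(f"Electric razor field vs residential EMF: ~{razor_field_uT / residential_emf_uT:.0f}x")
# Both ratios come out on the order of 1000, which is the editorial's point.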

Although the Wertheimer study spurred the scientific community to publish hundreds of subsequent studies attempting to link EMFs to a variety of cancer types in different age groups, the majority have focused on hematologic malignancies in children. I am pleased to report that the literature has failed to demonstrate any consistent, statistically significant evidence of a relationship. Most studies have been plagued by design limitations, including small sample sizes, unblinded investigators, and the use of wiring codes as surrogates for EMF exposure. Detractors of the original study criticize Wertheimer's assumption that a home's wiring code could serve as a surrogate for EMF exposure and note that EMF levels were never directly measured.(3)

I will briefly mention two of the better-designed studies. One of the largest to date, conducted by the Children's Cancer Group, failed to demonstrate any statistically significant relationship between acute lymphoblastic leukemia and either directly measured EMFs or wiring codes in the homes of 638 affected children as compared with controls.(4) The second was a meta-analysis that pooled the data of nine separate studies. Among the 3203 children with leukemia, those exposed to directly measured EMFs of less than 0.4 µT showed no significant increase in risk compared with controls.(5) The one caveat of this meta-analysis is that the small fraction of pooled participants (44 children, less than one percent) exposed to EMF levels greater than 0.4 µT had a relative risk of 2 for developing leukemia. Again, the small sample size has critics questioning the validity of this positive finding.

Overall, there is little evidence that living near power lines increases the risk of cancer, particularly childhood leukemia. Media-fueled paranoia can be a powerful force, as people to this day still believe an association may exist (myself included, prior to researching this post). It is also important to note that even though we are increasingly exposed to EMFs as technology infiltrates every aspect of our lives, the overall incidence of leukemia has been slowly declining for several decades. Therefore, as judge and jury, I find the defendant, Power Lines, not guilty. Court is adjourned until our next installment of Myths and Realities.

Dr. Mattoo is a third-year resident in internal medicine at NYU Medical Center.

Peer Reviewed by Theresa Ryan MD, Assistant Professor of Medicine, NYU Division of Medical Oncology

References:

1. Wertheimer N et al. Electrical wiring configurations and childhood cancer. American Journal of Epidemiology 109:273-284, 1979.
2. Campion EW. Power lines, cancer, and fear. New England Journal of Medicine 337:44-46, 1997.
3. Farley JW, 2003. Power lines and cancer. Nothing to Fear [online]. Quackwatch. Available from http://www.quackwatch.org/01QuackeryRelatedTopics/emf.html
4. Linet MS, et al. Residential exposure to magnetic fields and acute lymphoblastic leukemia in children. The New England Journal of Medicine 1997; 337(1): 1-7.
5. Ahlbom A, Day N, Feychting M, et al. A pooled analysis of magnetic fields and childhood leukaemia. British Journal of Cancer 2000; 83(5): 692-698.

Myths and Realities: Does the Weather Really Affect Arthritis?

March 19, 2009

Welcome to the first installment of Myths and Realities! With each post we hope to tackle some of the longstanding myths often perpetuated by patients and physicians alike. Through literature reviews we will attempt to validate or debunk these beliefs in an evidence-based manner. We hope you enjoy (and learn a little bit)!

Commentary by Aditya Mattoo MD PGY-3

Faculty Peer Reviewed

For our first post, I wanted to address the age-old belief that changes in the weather can affect arthritis pain. This topic has been debated since the time of Hippocrates, who wrote about the effects of hot and cold winds on people's health. Even Osler, in 1892, suggested that arthritis sufferers of wealth vacation in the South to avoid the cold, damp weather of northern winters.(1) Most physicians have had at least one patient claim that his or her arthritis acts up with changing weather conditions. In fact, surveys have demonstrated that upwards of 90% of patients believe that weather plays a role in their arthritic pain. Some arthritis sufferers assert that they can predict the weather, as storm systems are often heralded by flares of joint pain. This belief has become so commonplace that weather reporting services have even developed arthritis indices to accompany the daily forecast.

Some of the theories supporting an association between weather and arthritis relate pressure and temperature to direct effects on joint biomechanics. Increased atmospheric pressure can raise intraarticular pressure, causing pain, as evidenced by the compression-related joint pains of deep-sea divers.(2) Similarly, lower temperatures can affect the compliance of articular structures and the viscosity of synovial fluid, making joints stiffer. Another proposed mechanism is that temperature and pressure are known to sensitize nociceptors in the densely innervated periarticular structures.(3)

Enough background…let's get down to business. So what do the data show? The older literature was largely equivocal, limited by flawed study designs attempting to measure a subjective complaint; small sample sizes and single-center studies didn't help either. I am sad to report that in more recent years the jury is still out. In 2003, Wilder et al prospectively followed 154 patients with osteoarthritis (OA) in the same geographic area who self-reported pain at five body sites.(4) The researchers used temperature, barometric pressure, and precipitation data from the US National Oceanic and Atmospheric Administration (NOAA) to look for a relationship between these weather variables and the severity of pain complaints. The only statistically significant finding was in a subgroup of women, in whom pain complaints worsened on days with rising barometric pressure. All other subgroups failed to demonstrate statistically significant associations. Although this was one of the largest studies of its time, it was criticized for its limited geographic scope.

Prompted by the limitations of earlier studies, including the one described above, McAlindon et al published a multi-site study that followed 200 patients with knee OA at different times of the year.(5) Using a well-validated pain questionnaire (the WOMAC) and NOAA data, the study prospectively compared knee pain with atmospheric conditions across the continental US. This time, positive correlations were found: ambient temperature and change in barometric pressure were the weather variables with statistically significant associations, with both lower ambient temperature and rising barometric pressure independently associated with increased pain complaints.

As we have learned, it cannot be stated definitively whether weather is a contributing factor in arthritic pain. Even though the literature is filled with contradictory studies, one of the largest multi-site studies conducted to date has recently demonstrated an association between the two. To quote Robert Ripley, "believe it or not."

1 Osler W. The Principles and Practice of Medicine: Designed for the Use of Practitioners and Students of Medicine (1892). Birmingham: Classics of Medicine Library, 1978.

2 Compression Pains. In: US Navy Diving Manual. Revision 4 ed. Naval Sea Systems Command, 3-45, 1999.

3 Verges J et al. Weather Conditions can Influence Rheumatic Diseases. Proceedings of the Western Pharmacology Society, 47:134-136, 2004.

4 Wilder FV et al. Osteoarthritis Pain and Weather. Rheumatology, 43:955-958, 2003.

5 McAlindon T et al. Changes in Barometric Pressure and Ambient Temperature Influence Osteoarthritic Pain. American Journal of Medicine, 120:429-434, 2007.

Faculty Peer Reviewed By: Svetlana Krasnokutsky, MD, Division of Rheumatology