Brucellosis
Brucellosis is a highly contagious zoonosis caused by ingestion of unpasteurized milk or undercooked meat from infected animals, or close contact with their secretions. It is also known as undulant fever, Malta fever, and Mediterranean fever.
The bacteria causing this disease, Brucella, are small, Gram-negative, nonmotile, nonspore-forming, rod-shaped (coccobacilli) bacteria. They function as facultative intracellular parasites, causing chronic disease, which usually persists for life. Four species infect humans: B. abortus, B. canis, B. melitensis, and B. suis. B. abortus is less virulent than B. melitensis and is primarily a disease of cattle. B. canis affects dogs. B. melitensis is the most virulent and invasive species; it usually infects goats and occasionally sheep. B. suis is of intermediate virulence and chiefly infects pigs. Symptoms include profuse sweating and joint and muscle pain. Brucellosis has been recognized in animals and humans since the early 20th century.
Signs and symptoms
The symptoms are like those associated with many other febrile diseases, but with emphasis on muscular pain and night sweats. The duration of the disease can vary from a few weeks to many months or even years.
In the first stage of the disease, bacteremia occurs and leads to the classic triad of undulant fevers, sweating (often with a characteristic foul, moldy smell sometimes likened to wet hay), and migratory arthralgia and myalgia (joint and muscle pain). Blood tests characteristically reveal a low number of white blood cells and red blood cells, show some elevation of liver enzymes such as aspartate aminotransferase and alanine aminotransferase, and demonstrate positive Rose Bengal and Huddleson reactions. Gastrointestinal symptoms occur in 70% of cases and include nausea, vomiting, decreased appetite, unintentional weight loss, abdominal pain, constipation, diarrhea, an enlarged liver, liver inflammation, liver abscess, and an enlarged spleen.
This complex is, at least in Portugal, Israel, Syria, and Jordan, known as Malta fever. During episodes of Malta fever, melitococcemia (presence of brucellae in the blood) can usually be demonstrated by means of blood culture in tryptose medium or Albini medium. If untreated, the disease can give rise to focal complications or become chronic. The focal complications of brucellosis occur most often in bones and joints, and osteomyelitis or spondylodiscitis of the lumbar spine accompanied by sacroiliitis is very characteristic of this disease. Orchitis is also common in men.
The consequences of Brucella infection are highly variable and may include arthritis, spondylitis, thrombocytopenia, meningitis, uveitis, optic neuritis, endocarditis, and various neurological disorders collectively known as neurobrucellosis.
Cause
Brucellosis in humans is usually associated with the consumption of unpasteurized milk and soft cheeses made from the milk of infected animals—primarily goats infected with B. melitensis—and with occupational exposure of laboratory workers, veterinarians, and slaughterhouse workers. Some vaccines used in livestock, most notably B. abortus strain 19, also cause disease in humans if accidentally injected. Brucellosis induces inconstant fevers, miscarriage, sweating, weakness, anemia, headaches, depression, and muscular and bodily pain. The other strains, B. suis and B. canis, cause infection in pigs and dogs, respectively.
Overall, findings support that brucellosis poses an occupational risk to goat farmers; specific areas of concern include weak awareness of disease transmission to humans and a lack of knowledge of specific safe farm practices such as quarantine.
Diagnosis
The diagnosis of brucellosis relies on:
Demonstration of the agent: blood cultures in tryptose broth or bone marrow cultures. The growth of brucellae is extremely slow (they can take up to two months to grow), and culture poses a risk to laboratory personnel due to the high infectivity of brucellae.
Demonstration of antibodies against the agent, either with the classic Huddleson, Wright, and/or Rose Bengal reactions, or with ELISA or the 2-mercaptoethanol assay for IgM antibodies associated with chronic disease
Histologic evidence of granulomatous hepatitis on hepatic biopsy
Radiologic alterations in infected vertebrae: the Pedro Pons sign (preferential erosion of the anterosuperior corner of lumbar vertebrae) and marked osteophytosis are suggestive of brucellic spondylitis.
Definite diagnosis of brucellosis requires the isolation of the organism from the blood, body fluids, or tissues, but serological methods may be the only tests available in many settings. Positive blood culture yield ranges between 40 and 70% and is less commonly positive for B. abortus than for B. melitensis or B. suis. Specific antibodies against bacterial lipopolysaccharide and other antigens can be detected by the standard agglutination test (SAT), Rose Bengal, 2-mercaptoethanol (2-ME), antihuman globulin (Coombs), and indirect enzyme-linked immunosorbent assay (ELISA). SAT is the most commonly used serology in endemic areas. An agglutination titre greater than 1:160 is considered significant in nonendemic areas and greater than 1:320 in endemic areas.
Due to the similarity of the O polysaccharide of Brucella to that of various other Gram-negative bacteria (e.g. Francisella tularensis, Escherichia coli, Salmonella urbana, Yersinia enterocolitica, Vibrio cholerae, and Stenotrophomonas maltophilia), cross-reactions of class M immunoglobulins may occur. The inability to diagnose B. canis by SAT due to lack of cross-reaction is another drawback. False-negative SAT may be caused by the presence of blocking antibodies (the prozone phenomenon) in the α2-globulin (IgA) and in the α-globulin (IgG) fractions.
Dipstick assays are new and promising, based on the binding of Brucella IgM antibodies, and are simple, accurate, and rapid. ELISA typically uses cytoplasmic proteins as antigens. It measures IgM, IgG, and IgA with better sensitivity and specificity than the SAT in most recent comparative studies. The commercial Brucellacapt test, a single-step immunocapture assay for the detection of total anti-Brucella antibodies, is an increasingly used adjunctive test when resources permit. PCR is fast and should be specific. Many varieties of PCR have been developed (e.g. nested PCR, real-time PCR, and PCR-ELISA) and found to have superior specificity and sensitivity in detecting both primary infection and relapse after treatment. Unfortunately, these are not standardized for routine use, and some centres have reported persistent PCR positivity after clinically successful treatment, fuelling the controversy about the existence of prolonged chronic brucellosis.
Other laboratory findings include a normal peripheral white cell count and occasional leucopenia with relative lymphocytosis. Serum biochemical profiles are commonly normal.
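As a minimal illustration of the SAT cut-offs quoted above (significant above 1:160 in nonendemic areas, above 1:320 in endemic areas), the sketch below encodes the rule in Python; the function name and interface are illustrative assumptions, not part of any laboratory standard.

```python
def sat_titre_significant(titre_denominator: int, endemic_area: bool) -> bool:
    """Apply the agglutination-titre cut-offs described above.

    A titre written 1:320 is passed as titre_denominator=320.
    Rule: significant above 1:160 in nonendemic areas,
    above 1:320 in endemic areas (illustrative encoding).
    """
    threshold = 320 if endemic_area else 160
    return titre_denominator > threshold

# A titre of 1:320 exceeds the nonendemic cut-off but not the endemic one.
assert sat_titre_significant(320, endemic_area=False)
assert not sat_titre_significant(320, endemic_area=True)
```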
Prevention
Surveillance using serological tests, as well as tests on milk such as the milk ring test, can be used for screening and plays an important role in campaigns to eliminate the disease. Individual animal testing, both for trade and for disease-control purposes, is also practiced. In endemic areas, vaccination is often used to reduce the incidence of infection. An animal vaccine is available that uses modified live bacteria. The World Organisation for Animal Health Manual of Diagnostic Tests and Vaccines for Terrestrial Animals provides detailed guidance on the production of vaccines. As the disease comes closer to being eliminated, a test-and-eradication program is required to eliminate it completely.
The main way of preventing brucellosis is by using fastidious hygiene in producing raw milk products, or by pasteurizing all milk that is to be ingested by human beings, either in its unaltered form or as a derivative such as cheese.
Treatment
Antibiotics such as tetracyclines, rifampicin, and the aminoglycosides streptomycin and gentamicin are effective against Brucella bacteria. However, the use of more than one antibiotic is needed for several weeks, because the bacteria incubate within cells.
The gold standard treatment for adults is daily intramuscular injections of streptomycin 1 g for 14 days and oral doxycycline 100 mg twice daily for 45 days (concurrently). Gentamicin 5 mg/kg by intramuscular injection once daily for 7 days is an acceptable substitute when streptomycin is not available or is contraindicated. Another widely used regimen is doxycycline plus rifampicin twice daily for at least 6 weeks. This regimen has the advantage of oral administration. A triple therapy of doxycycline with rifampicin and co-trimoxazole has been used successfully to treat neurobrucellosis. The doxycycline plus streptomycin regimen (for 2 to 3 weeks) is more effective than the doxycycline plus rifampicin regimen (for 6 weeks).
Doxycycline is able to cross the blood–brain barrier, but requires the addition of two other drugs to prevent relapse. Ciprofloxacin and co-trimoxazole therapy is associated with an unacceptably high rate of relapse. In brucellic endocarditis, surgery is required for an optimal outcome. Even with optimal antibrucellic therapy, relapses still occur in 5 to 10% of patients with Malta fever.
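Because the gentamicin alternative is dosed by body weight (5 mg/kg once daily for 7 days), the arithmetic can be made explicit. The one-function sketch below is illustrative only, with an assumed name and signature; it is not a prescribing tool.

```python
def gentamicin_daily_dose_mg(weight_kg: float, rate_mg_per_kg: float = 5.0) -> float:
    """Daily intramuscular gentamicin dose at the 5 mg/kg rate quoted above."""
    return weight_kg * rate_mg_per_kg

# Example: a 70 kg adult would receive 350 mg once daily for 7 days.
print(gentamicin_daily_dose_mg(70))  # 350.0
```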
Prognosis
The mortality of the disease in 1909, as recorded in the British Army and Navy stationed in Malta, was 2%. The most frequent cause of death was endocarditis. Recent advances in antibiotics and surgery have been successful in preventing death due to endocarditis. Prevention of human brucellosis can be achieved by eradication of the disease in animals by vaccination and other veterinary control methods such as testing herds/flocks and slaughtering animals when infection is present. Currently, no effective vaccine is available for humans. Boiling milk before consumption, or before using it to produce other dairy products, is protective against transmission via ingestion. Changing traditional food habits of eating raw meat, liver, or bone marrow is necessary, but difficult to implement. Patients who have had brucellosis should probably be excluded indefinitely from donating blood or organs. Exposure of diagnostic laboratory personnel to Brucella organisms remains a problem in both endemic settings and when brucellosis is unknowingly imported by a patient. After appropriate risk assessment, staff with significant exposure should be offered postexposure prophylaxis and followed up serologically for 6 months.
Epidemiology
Argentina
According to a study published in 2002, an estimated 10–13% of farm animals are infected with Brucella species. Annual losses from the disease were calculated at around $60 million. Since 1932, government agencies have undertaken efforts to contain the disease. Currently, all cattle of ages 3–8 months must receive the Brucella abortus strain 19 vaccine.
Australia
Australia is free of cattle brucellosis, although it occurred in the past. Brucellosis of sheep or goats has never been reported. Brucellosis of pigs does occur. Feral pigs are the typical source of human infections.
Canada
On 19 September 1985, the Canadian government declared its cattle population brucellosis-free. Brucellosis ring testing of milk and cream, and testing of cattle to be slaughtered ended on 1 April 1999. Monitoring continues through testing at auction markets, through standard disease-reporting procedures, and through testing of cattle being qualified for export to countries other than the United States.
China
An outbreak infecting humans took place in Lanzhou in 2020 after the Lanzhou Biopharmaceutical Plant, which was involved in vaccine production, accidentally released the bacteria into the atmosphere in its exhaust air due to the use of expired disinfectant. The outbreak affected over 6,000 people.
Europe
Malta
Until the early 20th century, the disease was endemic in Malta to the point of it being referred to as "Maltese fever". Since 2005, due to a strict regimen of certification of milk animals and widespread use of pasteurization, the illness has been eradicated from Malta.
Republic of Ireland
Ireland was declared free of brucellosis on 1 July 2009. The disease had troubled the country's farmers and veterinarians for several decades. The Irish government submitted an application to the European Commission, which verified that Ireland had eliminated the disease. Brendan Smith, Ireland's then Minister for Agriculture, Food and the Marine, said the elimination of brucellosis was "a landmark in the history of disease eradication in Ireland". Ireland's Department of Agriculture, Food and the Marine intends to scale back its brucellosis eradication programme now that eradication has been confirmed.
UK
Mainland Britain has been free of brucellosis since 1979, although there have been episodic re-introductions since. The last outbreak of brucellosis in Great Britain was in cattle in Cornwall in 2004. Northern Ireland was declared officially brucellosis-free in 2015.
New Zealand
Brucellosis in New Zealand is limited to sheep (B. ovis). The country is free of all other species of Brucella.
United States
Dairy herds in the U.S. are tested at least once a year to be certified brucellosis-free with the Brucella milk ring test. Cows confirmed to be infected are often killed. In the United States, veterinarians are required to vaccinate all young stock, to further reduce the chance of zoonotic transmission. This vaccination is usually referred to as a "calfhood" vaccination. Most cattle receive a tattoo in one of their ears, serving as proof of their vaccination status. This tattoo also includes the last digit of the year they were born.
The first state–federal cooperative efforts towards eradication of brucellosis caused by B. abortus in the U.S. began in 1934.
Brucellosis was originally imported to North America with non-native domestic cattle (Bos taurus), which transmitted the disease to wild bison (Bison bison) and elk (Cervus canadensis). No records exist of brucellosis in ungulates native to America until the early 19th century.
History
Brucellosis first came to the attention of British medical officers in the 1850s in Malta during the Crimean War, and was referred to as Malta Fever. Jeffery Allen Marston (1831–1911) described his own case of the disease in 1861. The causal relationship between organism and disease was first established in 1887 by David Bruce. Bruce considered the agent spherical and classified it as a coccus.
In 1897, Danish veterinarian Bernhard Bang isolated a bacillus as the agent of heightened spontaneous abortion in cows, and the name "Bang's disease" was assigned to this condition. Bang considered the organism rod-shaped and classified it as a bacillus. At the time, no one knew that this bacillus had anything to do with the causative agent of Malta fever.
Maltese scientist and archaeologist Themistocles Zammit identified unpasteurized goat milk as the major etiologic factor of undulant fever in June 1905.
In the late 1910s, American bacteriologist Alice C. Evans was studying the Bang bacillus and gradually realized that it was virtually indistinguishable from the Bruce coccus. The organisms sat on the morphologic borderline between short rods and oblong cocci, which dissolved the erstwhile bacillus/coccus distinction: the "two" pathogens were not a coccus and a bacillus but one coccobacillus. The Bang bacillus was already known to be enzootic in American dairy cattle, which showed itself in the regularity with which herds experienced contagious abortion. Having found that the bacteria were nearly, and perhaps completely, identical, Evans then wondered why Malta fever was not widely diagnosed or reported in the United States. She began to wonder whether many cases of vaguely defined febrile illnesses were in fact caused by the drinking of raw (unpasteurized) milk. During the 1920s, this hypothesis was vindicated. Such illnesses ranged from undiagnosed and untreated gastrointestinal upset to misdiagnosed febrile and painful versions, some even fatal. This advance in bacteriological science sparked extensive changes in the American dairy industry to improve food safety. The changes included making pasteurization standard and greatly tightening the standards of cleanliness in milkhouses on dairy farms. The expense prompted delay and skepticism in the industry, but the new hygiene rules eventually became the norm. Although these measures have sometimes struck people as overdone in the decades since, being unhygienic at milking time or in the milkhouse, or drinking raw milk, is not a safe alternative.
In the decades after Evans's work, this genus, which received the name Brucella in honor of Bruce, was found to contain several species with varying virulence. The name "brucellosis" gradually replaced the 19th-century names Mediterranean fever and Malta fever.
Neurobrucellosis, neurological involvement in brucellosis, was first described in 1879. In the late 19th century, its symptoms were described in more detail by M. Louis Hughes, a Surgeon-Captain of the Royal Army Medical Corps stationed in Malta, who isolated Brucella organisms from a patient with meningoencephalitis. In 1989, neurologists in Saudi Arabia made significant contributions to the medical literature involving neurobrucellosis.
Biological warfare
Brucella species were weaponized by several advanced countries by the mid-20th century. In 1954, B. suis became the first agent weaponized by the United States at its Pine Bluff Arsenal near Pine Bluff, Arkansas. Brucella species survive well in aerosols and resist drying. Brucella and all other remaining biological weapons in the U.S. arsenal were destroyed in 1971–72, when the American offensive biological warfare program was discontinued by order of President Richard Nixon.
The experimental American bacteriological warfare program focused on three agents of the Brucella group:
Porcine brucellosis (agent US)
Bovine brucellosis (agent AA)
Caprine brucellosis (agent AM)
Agent US was in advanced development by the end of World War II. When the United States Air Force (USAF) wanted a biological warfare capability, the Chemical Corps offered Agent US in the M114 bomblet, based on the four-pound bursting bomblet developed for spreading anthrax during World War II. Though the capability was developed, operational testing indicated the weapon was less than desirable, and the USAF designated it as an interim capability until it could eventually be replaced by a more effective biological weapon.
The main drawback of using the M114 with Agent US was that it acted mainly as an incapacitating agent, whereas the USAF administration wanted weapons that were deadly. The stability of M114 in storage was too low to allow for storing it at forward air bases, and the logistical requirements to neutralize a target were far higher than originally planned. Ultimately, this would have required too much logistical support to be practical in the field.
Agents US and AA had a median infective dose of 500 organisms per person, and for Agent AM it was 300 organisms per person. The incubation time was believed to be about 2 weeks, with a duration of infection of several months. The lethality estimate, based on epidemiological information, was 1 to 2 per cent. Agent AM was believed to cause a somewhat more virulent disease, with a fatality rate of 3 per cent expected.
Other animals
Species infecting domestic livestock are B. abortus (cattle, bison, and elk), B. canis (dogs), B. melitensis (goats and sheep), B. ovis (sheep), and B. suis (caribou and pigs). Brucella species have also been isolated from several marine mammal species (cetaceans and pinnipeds).
Cattle
B. abortus is the principal cause of brucellosis in cattle. The bacteria are shed from an infected animal at or around the time of calving or abortion. Once exposed, the likelihood of an animal becoming infected is variable, depending on age, pregnancy status, and other intrinsic factors of the animal, as well as the number of bacteria to which the animal was exposed. The most common clinical signs of cattle infected with B. abortus are a high incidence of abortions, arthritic joints, and retained placenta.
The two main causes of spontaneous abortion in animals are erythritol, which can promote infections in the fetus and placenta, and the lack of anti-Brucella activity in the amniotic fluid. Males can also harbor the bacteria in their reproductive tracts, namely the seminal vesicles, ampullae, testicles, and epididymides.
Dogs
The causative agent of brucellosis in dogs, B. canis, is transmitted to other dogs through breeding and contact with aborted fetuses. Brucellosis can occur in humans who come in contact with infected aborted tissue or semen. The bacteria in dogs normally infect the genitals and lymphatic system, but can also spread to the eyes, kidneys, and intervertebral discs. Brucellosis in the intervertebral disc is one possible cause of discospondylitis. Symptoms of brucellosis in dogs include abortion in female dogs and scrotal inflammation and orchitis in males. Fever is uncommon. Infection of the eye can cause uveitis, and infection of the intervertebral disc can cause pain or weakness. Blood testing of the dogs prior to breeding can prevent the spread of this disease. It is treated with antibiotics, as with humans, but it is difficult to cure.
Aquatic wildlife
Brucellosis in cetaceans is caused by the bacterium B. ceti. First discovered in the aborted fetus of a bottlenose dolphin, B. ceti is similar in structure to Brucella in land animals. B. ceti is commonly detected in two suborders of cetaceans, the Mysticeti and Odontoceti. The Mysticeti include four families of baleen whales, filter-feeders, and the Odontoceti include two families of toothed cetaceans ranging from dolphins to sperm whales. B. ceti is believed to transfer from animal to animal through sexual intercourse, maternal feeding, aborted fetuses, placental tissues, from mother to fetus, or through fish reservoirs. Brucellosis is a reproductive disease, so it has an extremely negative impact on the population dynamics of a species. This becomes a greater issue when the already low population numbers of cetaceans are taken into consideration. B. ceti has been identified in four of the 14 cetacean families, but antibodies have been detected in seven of the families. This indicates that B. ceti is common amongst cetacean families and populations. Only a small percentage of exposed individuals become ill or die. However, particular species apparently are more likely to become infected by B. ceti. The harbor porpoise, striped dolphin, white-sided dolphin, bottlenose dolphin, and common dolphin have the highest frequency of infection amongst odontocetes. In the mysticete families, the northern minke whale is by far the most infected species. Dolphins and porpoises are more likely to be infected than cetaceans such as whales. With regard to sex and age biases, infections do not seem to be influenced by the age or sex of an individual. Although fatal to cetaceans, B. ceti has a low infection rate for humans.
Terrestrial wildlife
The disease in its various strains can infect multiple wildlife species, including elk (Cervus canadensis), bison (Bison bison), African buffalo (Syncerus caffer), European wild boar (Sus scrofa), caribou (Rangifer tarandus), moose (Alces alces), and marine mammals (see the section on aquatic wildlife above). While some regions use vaccines to prevent the spread of brucellosis between infected and uninfected wildlife populations, no suitable brucellosis vaccine for terrestrial wildlife has been developed. This gap in medicinal knowledge creates more pressure for management practices that reduce the spread of the disease.
Wild bison and elk in the greater Yellowstone area are the last remaining reservoir of B. abortus in the US. The recent transmission of brucellosis from elk back to cattle in Idaho and Wyoming illustrates how the area, as the last remaining reservoir in the United States, may adversely affect the livestock industry. Eliminating brucellosis from this area is a challenge, as many viewpoints exist on how to manage diseased wildlife. However, the Wyoming Game and Fish Department has recently begun to protect scavengers (particularly coyotes and red foxes) on elk feedgrounds, because they act as sustainable, no-cost biological control agents by removing infected elk fetuses quickly.
The National Elk Refuge in Jackson, Wyoming, asserts that the intensity of the winter feeding program affects the spread of brucellosis more than the population size of elk and bison. Since concentrating animals around food plots accelerates the spread of the disease, management strategies to reduce herd density and increase dispersion could limit its spread.
Effects on hunters
Hunters may be at additional risk for exposure to brucellosis due to increased contact with susceptible wildlife, including predators that may have fed on infected prey. Hunting dogs can also be at risk of infection. Exposure can occur through contact with open wounds or by directly inhaling the bacteria while cleaning game. In some cases, consumption of undercooked game can result in exposure to the disease. Hunters can limit exposure while cleaning game through the use of precautionary barriers, including gloves and masks, and by washing tools rigorously after use. By ensuring that game is cooked thoroughly, hunters can protect themselves and others from ingesting the disease. Hunters should refer to local game officials and health departments to determine the risk of brucellosis exposure in their immediate area and to learn more about actions to reduce or avoid exposure.
See also
Brucella suis
References
Further reading
Fact sheet on Brucellosis from World Organisation for Animal Health
Brucella genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID
Prevention about Brucellosis from Centers for Disease Control
Capasso L (August 2002). "Bacteria in two-millennia-old cheese, and related epizoonoses in Roman populations". The Journal of Infection. 45 (2): 122–7. doi:10.1053/jinf.2002.0996. PMID 12217720. – re high rate of brucellosis in humans in ancient Pompeii
Brucellosis, factsheet from European Centre for Disease Prevention and Control
External links
Burning mouth syndrome
Burning mouth syndrome (BMS) is a burning, tingling or scalding sensation in the mouth, lasting for at least four to six months, with no known underlying dental or medical cause. No related signs of disease are found in the mouth. People with burning mouth syndrome may also have subjective xerostomia (a dry mouth sensation for which no cause, such as reduced salivary flow, can be found), paraesthesia (altered sensation, such as tingling in the mouth), or an altered sense of taste or smell.
A burning sensation in the mouth can be a symptom of another disease when local or systemic factors are found to be implicated; this is not considered to be burning mouth syndrome, which is a syndrome of medically unexplained symptoms. The International Association for the Study of Pain defines burning mouth syndrome as "a distinctive nosological entity characterized by unremitting oral burning or similar pain in the absence of detectable mucosal changes" and "burning pain in the tongue or other oral mucous membranes", and the International Headache Society defines it as "an intra-oral burning sensation for which no medical or dental cause can be found". To ensure the correct diagnosis of burning mouth syndrome, Research Diagnostic Criteria (RDC/BMS) have been developed.
Insufficient evidence leaves it unclear if effective treatments exist.
Signs and symptoms
By definition, BMS has no signs. Sometimes affected persons will attribute the symptoms to sores in the mouth, but these are in fact normal anatomic structures (e.g. lingual papillae, varices). Symptoms of BMS are variable, but the typical clinical picture can be described according to the SOCRATES pain assessment method. If clinical signs are visible, then another explanation for the burning sensation may be present. Erythema (redness) and edema (swelling) of papillae on the tip of the tongue may be a sign that the tongue is being habitually pressed against the teeth. The number and size of filiform papillae may be reduced. If the tongue is very red and smooth, then a local or systemic cause is likely (e.g. erythematous candidiasis, anemia).
Causes
Theories
In about 50% of cases of burning mouth sensation no identifiable cause is apparent; these cases are termed (primary) BMS. Several theories of what causes BMS have been proposed, and these are supported by varying degrees of evidence, but none is proven.
As most people with BMS are postmenopausal women, one theory of the cause of BMS is estrogen or progesterone deficiency, but a strong statistical correlation has not been demonstrated. Another theory is that BMS is related to autoimmunity, as abnormal antinuclear antibody and rheumatoid factor can be found in the serum of more than 50% of persons with BMS, but these levels may also be seen in elderly people who do not have any of the symptoms of this condition. Whilst salivary flow rates are normal and there are no clinical signs of a dry mouth to explain a complaint of dry mouth, levels of salivary proteins and phosphate may be elevated and salivary pH or buffering capacity may be reduced.
Depression and anxiety are strongly associated with BMS. It is not known if depression is a cause or result of BMS, as depression may develop in any setting of constant unrelieved irritation, pain, and sleep disturbance. It is estimated that about 20% of BMS cases involve psychogenic factors, and some consider BMS a psychosomatic illness, caused by cancerophobia, concern about sexually transmitted infections, or hypochondriasis.
Chronic low-grade trauma due to parafunctional habits (e.g. rubbing the tongue against the teeth or pressing it against the palate) may be involved. BMS is more common in persons with Parkinson's disease, so it has been suggested that it is a disorder of reduced pain threshold and increased sensitivity. Often people with BMS have unusually raised taste sensitivity, termed hypergeusia ("super tasters"). Dysgeusia (usually a bitter or metallic taste) is present in about 60% of people with BMS, a factor which led to the concept of a defect in sensory peripheral neural mechanisms. Changes in the oral environment, such as changes in the composition of saliva, may induce neuropathy or interruption of nerve transduction. The onset of BMS is often spontaneous, although it may be gradual. There is sometimes a correlation with a major life event or stressful period in life. In women, the onset of BMS is most likely three to twelve years following menopause.
Other causes of an oral burning sensation
Several local and systemic factors can give a burning sensation in the mouth without any clinical signs, and therefore may be misdiagnosed as BMS. Some sources state that where there is an identifiable cause for a burning sensation, this can be termed "secondary BMS" to distinguish it from primary BMS. However, the accepted definitions of BMS hold that there are no identifiable causes for BMS, and where there are identifiable causes, the term BMS should not be used.
Some causes of a burning mouth sensation may be accompanied by clinical signs in the mouth or elsewhere on the body. For example, burning mouth pain may be a symptom of allergic contact stomatitis. This is a contact sensitivity (type IV hypersensitivity reaction) in the oral tissues to common substances such as sodium lauryl sulfate, cinnamaldehyde or dental materials. However, allergic contact stomatitis is accompanied by visible lesions and gives a positive response with patch testing. Acute (short-term) exposure to the allergen (the substance triggering the allergic response) causes non-specific inflammation and possibly mucosal ulceration. Chronic (long-term) exposure to the allergen may appear as chronic inflammatory, lichenoid (lesions resembling oral lichen planus), or plasma cell gingivitis, which may be accompanied by glossitis and cheilitis. Apart from BMS itself, a full list of causes of an oral burning sensation is given below:
Deficiency of iron, folic acid or various B vitamins (glossitis e.g. due to anemia), or zinc
Neuropathy, e.g. following damage to the chorda tympani nerve.
Hypothyroidism.
Medications ("scalded mouth syndrome", unrelated to BMS) - protease inhibitors and angiotensin-converting-enzyme inhibitors (e.g. captopril).
Type 2 diabetes
True xerostomia, caused by hyposalivation, e.g. Sjögren's syndrome
Parafunctional activity, e.g. nocturnal bruxism or a tongue thrusting habit.
Restriction of the tongue by poorly constructed dentures.
Geographic tongue.
Oral candidiasis.
Herpetic infection (herpes simplex virus).
Fissured tongue.
Lichen planus.
Allergies and contact sensitivities to foods, metals, and other substances.
Hiatal hernia.
Human immunodeficiency virus.
Multiple myeloma
Diagnosis
BMS is a diagnosis of exclusion, i.e. all other explanations for the symptoms are ruled out before the diagnosis is made. There are no clinically useful investigations that would help to support a diagnosis of BMS (by definition all tests would have normal results), but blood tests and/or urinalysis may be useful to rule out anemia, deficiency states, hypothyroidism and diabetes. Investigation of a dry mouth symptom may involve sialometry, which objectively determines if there is any reduction of the salivary flow rate (hyposalivation). Oral candidiasis can be tested for with the use of swabs, smears, an oral rinse or saliva samples. It has been suggested that allergy testing (e.g., patch testing) is inappropriate in the absence of a clear history and clinical signs in people with a burning sensation in the mouth. The diagnosis of a person with a burning symptom may also involve psychologic screening, e.g. with depression questionnaires.
The second edition of the International Classification of Headache Disorders lists diagnostic criteria for "Glossodynia and Sore Mouth":
A. Pain in the mouth present daily and persisting for most of the day,
B. Oral mucosa is of normal appearance,
C. Local and systemic diseases have been excluded.
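All three criteria must hold simultaneously for the diagnosis. As a minimal sketch of that logic (the function and parameter names are illustrative assumptions, not part of the classification itself):

```python
def meets_ichd2_criteria(daily_pain_most_of_day: bool,
                         mucosa_normal: bool,
                         other_diseases_excluded: bool) -> bool:
    """Criteria A, B, and C above must all be satisfied (illustrative encoding)."""
    return daily_pain_most_of_day and mucosa_normal and other_diseases_excluded
```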
Classification
A burning sensation in the mouth may be primary (i.e. burning mouth syndrome) or secondary to systemic or local factors. Other sources refer to a "secondary BMS" with a similar definition, i.e. a burning sensation which is caused by local or systemic factors, or "where oral burning is explained by a clinical abnormality". However, this contradicts the accepted definition of BMS, which specifies that no cause can be identified; "secondary BMS" could therefore be considered a misnomer. BMS is an example of dysesthesia, or a distortion of sensation.
Some consider BMS to be a variant of atypical facial pain. More recently, BMS has been described as one of the four recognizable symptom complexes of chronic facial pain, along with atypical facial pain, temporomandibular joint dysfunction and atypical odontalgia. BMS has been subdivided into three general types, with type two being the most common and type three being the least common. Types one and two have unremitting symptoms, whereas type three may show remitting symptoms.
Type 1 - Symptoms not present upon waking, and then increase throughout the day
Type 2 - Symptoms upon waking and through the day
Type 3 - No regular pattern of symptoms (a minimal encoding of this three-type scheme is sketched below)
Sometimes those terms specific to the tongue (e.g. glossodynia) are reserved for when the burning sensation is located only on the tongue.
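The three-type scheme above can be written as a small decision rule; the function and argument names in this sketch are illustrative assumptions rather than established nomenclature.

```python
def bms_type(symptoms_on_waking: bool, regular_daily_pattern: bool) -> int:
    """Classify BMS per the three-type scheme described above (illustrative).

    Type 1: no symptoms on waking, increasing through the day.
    Type 2: symptoms on waking and through the day.
    Type 3: no regular pattern of symptoms.
    """
    if not regular_daily_pattern:
        return 3
    return 2 if symptoms_on_waking else 1

assert bms_type(symptoms_on_waking=False, regular_daily_pattern=True) == 1
assert bms_type(symptoms_on_waking=True, regular_daily_pattern=True) == 2
assert bms_type(symptoms_on_waking=False, regular_daily_pattern=False) == 3
```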
Treatment
If a cause can be identified for a burning sensation in the mouth, then treatment of this underlying factor is recommended. If symptoms persist despite treatment, a diagnosis of BMS is confirmed. BMS has been traditionally treated by reassurance and with antidepressants, anxiolytics or anticonvulsants. A 2016 Cochrane review of treatment for burning mouth syndrome concluded that strong evidence of an effective treatment was not available; however, a systematic review in 2018 found that the use of antidepressants and alpha-lipoic acid gave promising results.
Other treatments which have been used include atypical antipsychotics, histamine receptor antagonists, and dopamine agonists. Supplementation with vitamin complexes and cognitive behavioral therapy may be helpful in the management of burning mouth syndrome.
Prognosis
BMS is benign (importantly, it is not a symptom of oral cancer), but as a cause of chronic pain which is poorly controlled, it can diminish quality of life, and may become a fixation which cannot be ignored, thus interfering with work and other daily activities. Two thirds of people with BMS have a spontaneous partial recovery six to seven years after the initial onset, but in others the condition is permanent. Recovery is often preceded by a change in the character of the symptom from constant to intermittent. No clinical factors predicting recovery have been noted.
If there is an identifiable cause for the burning sensation, psychologic dysfunctions such as anxiety and depression often disappear if the symptom is successfully treated.
Epidemiology
BMS is fairly uncommon worldwide, affecting up to five individuals per 100,000 general population. People with BMS are more likely to be middle aged or elderly, and females are three to seven times more likely to have BMS than males. Some report a female-to-male ratio of as much as 33 to 1. BMS is reported in about 10–40% of women seeking medical treatment for menopausal symptoms, and BMS occurs in about 14% of postmenopausal women. Males and younger individuals of both sexes are sometimes affected.
Asian and Native American people have a considerably higher risk of BMS.
Notable cases
Sheila Chandra, a singer of Indian heritage, retired due to this condition.
References
Scala A; Checchi L; Montevecchi M; Marini I; Giamberardino MA (2003). "Update on burning mouth syndrome: overview and patient management". Crit Rev Oral Biol Med. 14 (4): 275–91. doi:10.1177/154411130301400405. PMID 12907696.
External links
Bursitis
Bursitis is the inflammation of one or more bursae (fluid-filled sacs) of synovial fluid in the body. Bursae are lined with a synovial membrane that secretes a lubricating synovial fluid. There are more than 150 bursae in the human body. The bursae rest at the points where internal functionaries, such as muscles and tendons, slide across bone. Healthy bursae create a smooth, almost frictionless functional gliding surface, making normal movement painless. When bursitis occurs, however, movement relying on the inflamed bursa becomes difficult and painful. Moreover, movement of tendons and muscles over the inflamed bursa aggravates its inflammation, perpetuating the problem. Nearby muscles can also stiffen.
Signs and symptoms
Bursitis commonly affects superficial bursae. These include the subacromial, prepatellar, retrocalcaneal, and pes anserinus bursae of the shoulder, knee, heel and shin, etc. (see below). Symptoms vary from localized warmth and erythema to joint pain and stiffness, to stinging pain that surrounds the joint around the inflamed bursa. In this condition, the pain usually is worse during and after activity, and the bursa and the surrounding joint then become stiff the next morning.
Bursitis can possibly also cause a snapping, grinding or popping sound – known as snapping scapula syndrome – when it occurs in the shoulder joint. This is not necessarily painful.
Cause
There can be several concurrent causes. Trauma, autoimmune disorders, infection and iatrogenic (medicine-related) factors can all cause bursitis. Bursitis is commonly caused by repetitive movement and excessive pressure. Shoulders, elbows and knees are the most commonly affected. Inflammation of the bursae may also be caused by other inflammatory conditions such as rheumatoid arthritis, scleroderma, systemic lupus erythematosus, and gout. Immune deficiencies, including HIV and diabetes, can also cause bursitis. Infrequently, scoliosis can cause bursitis of the shoulders; however, shoulder bursitis is more commonly caused by overuse of the shoulder joint and related muscles.
Traumatic injury is another cause of bursitis. The inflammation causes irritation because the swollen bursa no longer fits in the original small space between the bone and the muscle or tendon that moves over it. When the bone increases pressure upon the bursa, bursitis results. Sometimes the cause is unknown. Bursitis can also be associated with various other chronic systemic diseases.
Diagnosis
Types
The most common examples of this condition:
Prepatellar bursitis, "housemaid's knee"
Infrapatellar bursitis, "clergyman's knee"
Trochanteric bursitis, giving pain over lateral aspect of hip
Olecranon bursitis, "student's elbow", characterised by pain and swelling in the elbow
Subacromial bursitis, giving shoulder pain, is the most common form of bursitis.
Achilles bursitis
Retrocalcaneal bursitis
Ischial bursitis, "weaver's bottom"
Iliopsoas bursitis
Anserine bursitis
Treatment
It is important to differentiate between infected and non-infected bursitis. People may have surrounding cellulitis and systemic symptoms, including fever. The bursa should be aspirated to rule out an infectious process.
Bursae that are not infected can be treated symptomatically with rest, ice, elevation, physiotherapy, anti-inflammatory drugs and pain medication. Since bursitis is caused by increased friction from the adjacent structures, a compression bandage is not suggested, because compression would create more friction around the joint. Chronic bursitis can be amenable to bursectomy and aspiration. Bursae that are infected require further investigation and antibiotic therapy. Steroid therapy may also be considered. In cases where all conservative treatment fails, surgical therapy may be necessary. In a bursectomy, the bursa is cut out either endoscopically or with open surgery. The bursa grows back in place after a couple of weeks, but without any inflammatory component.
See also
Calcific bursitis
Snapping scapula syndrome
References
External links
Bursitis treatment from NHS Direct
Questions and Answers about Bursitis and Tendinitis – US National Institute of Arthritis and Musculoskeletal and Skin Diseases
Clostridioides difficile infection
Clostridioides difficile infection (CDI or C-diff), also known as Clostridium difficile infection, is a symptomatic infection due to the spore-forming bacterium Clostridioides difficile. Symptoms include watery diarrhea, fever, nausea, and abdominal pain. It makes up about 20% of cases of antibiotic-associated diarrhea. Antibiotics can contribute to detrimental changes in gut microbiota; specifically, they decrease short-chain fatty acid absorption, which results in osmotic, or watery, diarrhea. Complications may include pseudomembranous colitis, toxic megacolon, perforation of the colon, and sepsis.
Clostridioides difficile infection is spread by bacterial spores found within feces. Surfaces may become contaminated with the spores, with further spread occurring via the hands of healthcare workers. Risk factors for infection include antibiotic or proton pump inhibitor use, hospitalization, other health problems, and older age. Diagnosis is by stool culture or testing for the bacteria's DNA or toxins. If a person tests positive but has no symptoms, the condition is known as C. difficile colonization rather than an infection.
Prevention efforts include terminal room cleaning in hospitals, limiting antibiotic use, and handwashing campaigns in hospitals. Alcohol-based hand sanitizer does not appear effective. Discontinuation of antibiotics may result in resolution of symptoms within three days in about 20% of those infected. The antibiotics metronidazole, vancomycin, or fidaxomicin will cure the infection. Retesting after treatment, as long as the symptoms have resolved, is not recommended, as a person may often remain colonized. Recurrences have been reported in up to 25% of people. Some tentative evidence indicates fecal microbiota transplantation and probiotics may decrease the risk of recurrence.
C. difficile infections occur in all areas of the world. About 453,000 cases occurred in the United States in 2011, resulting in 29,000 deaths. Global rates of disease increased between 2001 and 2016. C. difficile infections occur more often in women than men. The bacterium was discovered in 1935 and found to be disease-causing in 1978. In the United States, healthcare-associated infections increase the cost of care by US$1.5 billion each year. Although C. difficile is a common healthcare-associated infection, at most 30% of infections are transmitted within hospitals. The majority of infections are acquired outside of hospitals, where medications and a recent history of diarrheal illnesses (e.g. laxative abuse or food poisoning due to salmonellosis) are thought to drive the risk of colonization.
Signs and symptoms
Signs and symptoms of CDI range from mild diarrhea to severe life-threatening inflammation of the colon.
In adults, a clinical prediction rule found the best signs to be significant diarrhea ("new onset of more than three partially formed or watery stools per 24-hour period"), recent antibiotic exposure, abdominal pain, fever (up to 40.5 °C or 105 °F), and a distinctive foul odor to the stool resembling horse manure. In a hospital population, prior antibiotic treatment plus diarrhea or abdominal pain had a sensitivity of 86% and a specificity of 45%. In this study with a prevalence of positive cytotoxin assays of 14%, the positive predictive value was 18% and the negative predictive value was 94%.
In children, the most prevalent symptom of a CDI is watery diarrhea with at least three bowel movements a day for two or more days, which may be accompanied by fever, loss of appetite, nausea, and/or abdominal pain. Those with a severe infection also may develop serious inflammation of the colon and have little or no diarrhea.
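The predictive values quoted above follow from sensitivity, specificity, and prevalence via Bayes' rule, and the relationship is easy to verify. The sketch below is a generic calculation, not code from the study; with the rounded figures given (86% sensitivity, 45% specificity, 14% prevalence) it yields roughly 20% and 95%, close to the reported 18% and 94%, the small gap presumably reflecting rounding in the published numbers.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values via Bayes' rule."""
    true_pos = sensitivity * prevalence               # diseased, test positive
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy, test positive
    true_neg = specificity * (1 - prevalence)         # healthy, test negative
    false_neg = (1 - sensitivity) * prevalence        # diseased, test negative
    return true_pos / (true_pos + false_pos), true_neg / (true_neg + false_neg)

ppv, npv = predictive_values(sensitivity=0.86, specificity=0.45, prevalence=0.14)
print(f"PPV ~ {ppv:.0%}, NPV ~ {npv:.0%}")  # PPV ~ 20%, NPV ~ 95% with rounded inputs
```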
Cause
Infection with C. difficile bacteria is responsible for C. difficile diarrhea.
C. difficile
Clostridia are anaerobic motile bacteria, ubiquitous in nature, and especially prevalent in soil. Under the microscope, they appear as long, irregular (often drumstick- or spindle-shaped) cells with a bulge at their terminal ends. Under Gram staining, C. difficile cells are Gram-positive and show optimum growth on blood agar at human body temperatures in the absence of oxygen. When stressed, the bacteria produce spores that are able to tolerate extreme conditions that the active bacteria cannot.
C. difficile may colonize the human colon without symptoms; approximately 2–5% of the adult population are carriers, although this varies considerably with demographics. The risk of colonization has been linked to a history of unrelated diarrheal illnesses (e.g. laxative abuse and food poisoning due to salmonellosis or Vibrio cholerae infection).
Pathogenic C. difficile strains produce multiple toxins. The most well-characterized are enterotoxin (Clostridium difficile toxin A) and cytotoxin (Clostridium difficile toxin B), both of which may produce diarrhea and inflammation in infected people, although their relative contributions have been debated. Toxins A and B are glucosyltransferases that target and inactivate the Rho family of GTPases. Toxin B (cytotoxin) induces actin depolymerization by a mechanism correlated with a decrease in the ADP-ribosylation of the low molecular mass GTP-binding Rho proteins. Another toxin, binary toxin, has also been described, but its role in disease is not fully understood.
Antibiotic treatment of CDIs may be difficult, due both to antibiotic resistance and to physiological factors of the bacteria (spore formation, protective effects of the pseudomembrane). The emergence of a new and highly toxic strain of C. difficile that is resistant to fluoroquinolone antibiotics such as ciprofloxacin and levofloxacin, said to be causing geographically dispersed outbreaks in North America, was reported in 2005. The U.S. Centers for Disease Control and Prevention in Atlanta warned of the emergence of an epidemic strain with increased virulence, antibiotic resistance, or both.
C. difficile is transmitted from person to person by the fecal-oral route. The organism forms heat-resistant spores that are not killed by alcohol-based hand cleansers or routine surface cleaning. Thus, these spores survive in clinical environments for long periods, and the bacteria may be cultured from almost any surface. Once spores are ingested, their acid resistance allows them to pass through the stomach unscathed. Upon exposure to bile acids, they germinate and multiply into vegetative cells in the colon. People without a history of gastrointestinal disturbances due to antibiotic use or diarrheal illness are less likely to become colonized by C. difficile.
In 2005, molecular analysis led to the identification of the C. difficile strain type characterized as group BI by restriction endonuclease analysis, as North American pulse-field-type NAP1 by pulsed-field gel electrophoresis, and as ribotype 027; the differing terminology reflects the predominant techniques used for epidemiological typing. This strain is referred to as C. difficile BI/NAP1/027.
Risk factors
Antibiotics
C. difficile colitis is associated most strongly with the use of these antibiotics: fluoroquinolones, cephalosporins, and clindamycin.
Some research suggests the routine use of antibiotics in the raising of livestock is contributing to outbreaks of bacterial infections such as C. difficile.
Healthcare environment
People are most often infected in hospitals, nursing homes, or other medical institutions, although infection outside medical settings is increasing. Individuals can develop the infection if they touch objects or surfaces that are contaminated with feces and then touch their mouth or mucous membranes. Healthcare workers could possibly spread the bacteria or contaminate surfaces through hand contact. The rate of C. difficile acquisition is estimated to be 13% in those with hospital stays of up to two weeks, and 50% with stays longer than four weeks.
Long-term hospitalization or residence in a nursing home within the previous year are independent risk factors for increased colonization.
Acid suppression medication
Increasing rates of community-acquired CDI are associated with the use of medication to suppress gastric acid production: H2-receptor antagonists increased the risk 1.5-fold, and proton pump inhibitors increased it 1.7-fold with once-daily use and 2.4-fold with more-than-once-daily use.
Diarrheal illnesses
People with a recent history of diarrheal illness are at increased risk of becoming colonized by C. difficile when exposed to spores, including laxative abuse and gastrointestinal pathogens. Disturbances that increase intestinal motility are thought to transiently elevate the concentration of available dietary sugars, allowing C. difficile to proliferate and gain a foothold in the gut. Although not all colonization events lead to disease, asymptomatic carriers remain colonized for years at a time. During this time, the abundance of C. difficile varies considerably day-to-day, causing periods of increased shedding that could substantially contribute to community-acquired infection rates.
Other
As a result of suppression of healthy bacteria, via a loss of their bacterial food source, prolonged use of an elemental diet increases the risk of developing C. difficile infection. A low serum albumin level is a risk factor for the development of C. difficile infection and, when infected, for severe disease. The protective effects of serum albumin may be related to the capability of this protein to bind C. difficile toxin A and toxin B, thus impairing their entry into enterocytes.
Pathophysiology
The use of systemic antibiotics, including broad-spectrum penicillins/cephalosporins, fluoroquinolones, and clindamycin, causes the normal microbiota of the bowel to be altered. In particular, when the antibiotic kills off other competing bacteria in the intestine, any bacteria remaining will have less competition for space and nutrients. The net effect is to permit more extensive growth than normal of certain bacteria. C. difficile is one such type of bacterium. In addition to proliferating in the bowel, C. difficile also produces toxins. Without either toxin A or toxin B, C. difficile may colonize the gut, but is unlikely to cause pseudomembranous colitis. The colitis associated with severe infection is part of an inflammatory reaction, with the "pseudomembrane" formed by a viscous collection of inflammatory cells, fibrin, and necrotic cells.
Diagnosis
Prior to the advent of tests to detect C. difficile toxins, the diagnosis most often was made by colonoscopy or sigmoidoscopy. The appearance of "pseudomembranes" on the mucosa of the colon or rectum is highly suggestive, but not diagnostic, of the condition. The pseudomembranes are composed of an exudate made of inflammatory debris and white blood cells. Although colonoscopy and sigmoidoscopy are still employed, stool testing for the presence of C. difficile toxins is now frequently the first-line diagnostic approach. Usually, only two toxins are tested for—toxin A and toxin B—but the organism produces several others. This test is not 100% accurate, with a considerable false-negative rate even with repeat testing.
Classification
CDI may be classified as non-severe, severe, or fulminant, depending on creatinine and white blood cell count parameters.
Cytotoxicity assay
C. difficile toxins have a cytopathic effect in cell culture, and neutralization of any effect observed with specific antisera is the practical gold standard for studies investigating new CDI diagnostic techniques. Toxigenic culture, in which organisms are cultured on selective media and tested for toxin production, remains the gold standard and is the most sensitive and specific test, although it is slow and labor-intensive.
Toxin ELISA
Assessment of the A and B toxins by enzyme-linked immunosorbent assay (ELISA) for toxin A or B (or both) has a sensitivity of 63–99% and a specificity of 93–100%.
Previously, experts recommended sending as many as three stool samples to rule out disease if initial tests are negative, but evidence suggests repeated testing during the same episode of diarrhea is of limited value and should be discouraged. C. difficile toxin should clear from the stool of somebody previously infected if treatment is effective. Many hospitals only test for the prevalent toxin A. Strains that express only the B toxin are now present in many hospitals, however, so testing for both toxins should occur. Not testing for both may contribute to a delay in obtaining laboratory results, which is often the cause of prolonged illness and poor outcomes.
Other stool tests
Stool leukocyte measurements and stool lactoferrin levels also have been proposed as diagnostic tests, but may have limited diagnostic accuracy.
PCR
Testing of stool samples by real-time polymerase chain reaction is able to detect C. difficile about 93% of the time, and a positive result is incorrect about 3% of the time. This is more accurate than cytotoxigenic culture or the cell cytotoxicity assay. Another benefit is that the result can be achieved within three hours. Drawbacks include a higher cost and the fact that the test only looks for the gene for the toxin and not the toxin itself. The latter means that if the test is used without confirmation, overdiagnosis may occur. Repeat testing may be misleading, and testing specimens more than once every seven days in people without new symptoms is highly unlikely to yield useful information.
Prevention
Self-containment by housing people in private rooms is important to prevent the spread of C. difficile. Contact precautions are an important part of preventing the spread of C. difficile. C. difficile does not often occur in people who are not taking antibiotics, so limiting the use of antibiotics decreases the risk.
Antibiotics
The most effective method for preventing CDI is proper antibiotic prescribing. In the hospital setting, where CDI is most common, most people who develop CDI are exposed to antibiotics. Although proper antibiotic prescribing is highly recommended, about 50% of antibiotic use is considered inappropriate. This is consistent whether in the hospital, clinic, community, or academic setting. Limiting antibiotics, or limiting unnecessary antibiotic prescriptions in general, has been demonstrated to be the intervention most strongly associated with decreased CDI, in both outbreak and nonoutbreak settings. Further, reactions to medication may be severe: CDI infections were the most common contributor to adverse drug events seen in U.S. hospitals in 2011. In some regions of the UK, reduced use of fluoroquinolone antibiotics seems to lead to reduced rates of CDI.
Probiotics
Some evidence indicates probiotics may be useful to prevent infection and recurrence. Treatment with Saccharomyces boulardii in those who are not immunocompromised with C. difficile also may be useful. Initially, in 2010, the Infectious Diseases Society of America recommended against their use due to the risk of complications. Subsequent reviews, however, did not find an increase in adverse effects with treatment, and overall treatment appears safe and moderately effective in preventing Clostridium difficile-associated diarrhea. One study found a "protective effect" of probiotics, specifically a 51% reduction in the risk of antibiotic-associated diarrhea (AAD) in 3,631 outpatients, although the types of infections in the subjects were not specified. Yogurt, tablets, and dietary supplements are a few examples of the probiotic formats available.
Infection control
Rigorous infection protocols are required to minimize this risk of transmission. Infection control measures, such as wearing gloves and using noncritical medical devices for a single person with CDI, are effective at prevention by limiting the spread of C. difficile in the hospital setting. In addition, washing with soap and water will wash away the spores from contaminated hands, but alcohol-based hand rubs are ineffective. These precautions should remain in place among those in hospital for at least 2 days after the diarrhea has stopped. Bleach wipes containing 0.55% sodium hypochlorite have been shown to kill the spores and prevent transmission. Installing lidded toilets and closing the lid prior to flushing also reduces the risk of contamination. Those who have CDIs should be in rooms with other people with CDIs or by themselves when in hospital. Common hospital disinfectants are ineffective against C. difficile spores, and may promote spore formation, but various oxidants (e.g., 1% sodium hypochlorite solution) rapidly destroy spores. Hydrogen peroxide vapor (HPV) systems used to sterilize a room after treatment is completed have been shown to reduce infection rates and to reduce risk of infection to others; in studies, HPV use reduced the incidence of CDI by 42–53%. Ultraviolet cleaning devices, and housekeeping staff especially dedicated to disinfecting the rooms of people with C. difficile after discharge, may be effective.
Treatment
Carrying C. difficile without symptoms is common. Treatment in those without symptoms is controversial. In general, mild cases do not require specific treatment. Oral rehydration therapy is useful in treating dehydration associated with the diarrhea.
Medications
Several different antibiotics are used for C. difficile, with the available agents being more or less equally effective. Vancomycin or fidaxomicin by mouth are typically recommended for mild, moderate, and severe infections. They are also the first-line treatment for pregnant women, especially since metronidazole may cause birth defects. Vancomycin is typically taken as 125 mg four times a day by mouth for 10 days; it may also be given rectally if the person develops an ileus. Fidaxomicin is taken at 200 mg twice daily for 10 days. Fidaxomicin is tolerated as well as vancomycin, and may have a lower risk of recurrence. It has been found to be as effective as vancomycin in those with mild to moderate disease, and it may be better than vancomycin in those with severe disease. Fidaxomicin may be used in those who have recurrent infections and have not responded to other antibiotics. Metronidazole (500 mg three times daily for 10 days) by mouth is recommended as an alternative treatment only when the affected person is allergic to first-line treatments, is unable to tolerate them, or has financial difficulties preventing access to them. In fulminant disease, vancomycin by mouth and intravenous metronidazole are commonly used together. Medications used to slow or stop diarrhea, such as loperamide, may only be used after initiating the treatment. Cholestyramine, an ion-exchange resin, is effective in binding both toxin A and B, slowing bowel motility, and helping prevent dehydration; it is recommended with vancomycin. A last-resort treatment in those who are immunosuppressed is intravenous immunoglobulin. Monoclonal antibodies against C. difficile toxins A and B, such as bezlotoxumab, are approved to prevent recurrence of C. difficile infection.
Probiotics
Evidence to support the use of probiotics in the treatment of active disease is insufficient. Researchers have recently begun taking a mechanistic approach to fecal-derived products. It is known that certain microbes with 7α-dehydroxylase activity can metabolize primary bile acids into secondary bile acids, which inhibit C. difficile. Thus, incorporating such microbes into therapeutic products such as probiotics may be protective, although more pre-clinical investigations are needed.
Fecal microbiota transplantation
Fecal microbiota transplant, also known as a stool transplant, is roughly 85% to 90% effective in those for whom antibiotics have not worked. It involves infusion of the microbiota acquired from the feces of a healthy donor to reverse the bacterial imbalance responsible for the recurring nature of the infection. The procedure replenishes the normal colonic microbiota that had been wiped out by antibiotics, and re-establishes resistance to colonization by Clostridioides difficile. Side effects, at least initially, are few. Some evidence suggests fecal transplant can be delivered in the form of a pill. Such pills are available in the United States, but were not FDA-approved as of 2015.
Surgery
In those with severe C. difficile colitis, colectomy may improve the outcomes. Specific criteria may be used to determine who will benefit most from surgery.
Recurrent infection
Recurrent CDI occurs in 20 to 30% of patients, with increasing rates of recurrence with each subsequent episode. In clinical settings, it is virtually impossible to distinguish a recurrence caused by a relapse of CDI with the same strain of C. difficile from a reinfection with a new strain.
Several treatment options exist for recurrent C. difficile infection. For the first recurrence, the 2017 IDSA guidelines recommend oral vancomycin at a dose of 125 mg four times daily for 10 days if metronidazole was used for the initial episode. If oral vancomycin was used for the initial episode, the options are a prolonged oral vancomycin pulse dose of 125 mg four times daily for 10–14 days followed by a taper (twice daily for one week, then every two to three days for 2–8 weeks), or fidaxomicin 200 mg twice daily for 10 days. For a second recurrence, the IDSA recommends options including the aforementioned oral vancomycin pulse dose followed by the prolonged taper; oral vancomycin 125 mg four times daily for 10 days followed by rifaximin 400 mg three times daily for 20 days; fidaxomicin 200 mg twice daily for 10 days; or a fecal microbiota transplant. For patients whose C. difficile infections fail to resolve with traditional antibiotic regimens, fecal microbiota transplantation has an average cure rate of over 90%. In a review of 317 patients, it was shown to lead to resolution in 92% of the persistent and recurrent disease cases. Restoration of gut flora is thus central to combating recurrent CDI. With effective antibiotic therapy, C. difficile can be reduced and natural colonization resistance can develop over time as the natural microbial community recovers; reinfection or recurrence may occur before this process is complete. Fecal microbiota transplant may expedite this recovery by directly replacing the missing microbial community members. However, human-derived fecal matter is difficult to standardize and has multiple potential risks, including the transfer of infectious material and the long-term consequences of inoculating the gut with foreign fecal material. As a result, further research is necessary to establish the long-term outcomes of FMT.
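The pulse-and-taper regimen described above is essentially a small dosing schedule. The sketch below lays it out programmatically; the specific durations chosen from within the quoted ranges (a 14-day pulse and 4 weeks of every-2-3-day dosing) are illustrative assumptions.

```python
def vancomycin_pulse_taper(pulse_days: int = 14, taper_weeks: int = 4):
    """Sketch of the oral vancomycin pulse-and-taper regimen quoted above.

    All doses are 125 mg; pulse_days must fall in the quoted 10-14 day
    range and taper_weeks in the quoted 2-8 week range.
    """
    assert 10 <= pulse_days <= 14 and 2 <= taper_weeks <= 8
    return [
        (f"days 1-{pulse_days}", "125 mg four times daily"),
        (f"days {pulse_days + 1}-{pulse_days + 7}", "125 mg twice daily"),
        (f"following {taper_weeks} weeks", "125 mg every 2-3 days"),
    ]


for phase, dose in vancomycin_pulse_taper():
    print(f"{phase}: {dose}")
```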
Prognosis
After a first treatment with metronidazole or vancomycin, C. difficile recurs in about 20% of people. This increases to 40% and 60% with subsequent recurrences.
Epidemiology
C. difficile diarrhea is estimated to occur in eight of 100,000 people each year. Among those who are admitted to hospital, it occurs in between four and eight people per 1,000. In 2011, it resulted in about half a million infections and 29,000 deaths in the United States. Due in part to the emergence of a fluoroquinolone-resistant strain, C. difficile-related deaths increased 400% between 2000 and 2007 in the United States. According to the CDC, "C. difficile has become the most common microbial cause of healthcare-associated infections in U.S. hospitals and costs up to $4.8 billion each year in excess health care costs for acute care facilities alone."
History
Ivan C. Hall and Elizabeth O'Toole first named the bacterium Bacillus difficilis in 1935, choosing its specific epithet because it was resistant to early attempts at isolation and grew very slowly in culture. André Romain Prévot subsequently transferred it to the genus Clostridium, which made its binomen Clostridium difficile. Its combination was later changed to Clostridioides difficile after it was transferred to the new genus Clostridioides. Pseudomembranous colitis was first described as a complication of C. difficile infection in 1978, when a toxin was isolated from people with pseudomembranous colitis and Koch's postulates were met.
Notable outbreaks
On 4 June 2003, two outbreaks of a highly virulent strain of this bacterium were reported in Montreal, Quebec, and Calgary, Alberta. Sources put the death count at as low as 36 and as high as 89, with around 1,400 cases in 2003 and within the first few months of 2004. CDIs continued to be a problem in the Quebec healthcare system in late 2004. As of March 2005, it had spread into the Toronto area, hospitalizing 10 people; one died while the others were discharged.
A similar outbreak took place at Stoke Mandeville Hospital in the United Kingdom between 2003 and 2005. The local epidemiology of C. difficile may offer clues on how its spread may relate to the time a patient spends in hospital and/or a rehabilitation center. It also reflects the ability of institutions to detect increased rates, and their capacity to respond with more aggressive hand-washing campaigns, quarantine methods, and the availability of yogurt containing live cultures to patients at risk for infection.
Both the Canadian and English outbreaks possibly were related to the seemingly more virulent strain NAP1/027 of the bacterium. Known as the Quebec strain, it has been implicated in an epidemic at two Dutch hospitals (Harderwijk and Amersfoort, both 2005). One theory explaining the increased virulence of 027 is that it is a hyperproducer of both toxins A and B, and that certain antibiotics may stimulate the bacteria to hyperproduce.
On 1 October 2006, C. difficile was said to have killed at least 49 people at hospitals in Leicester, England, over eight months, according to a National Health Service investigation. Another 29 similar cases were investigated by coroners. A UK Department of Health memo leaked shortly afterward revealed significant concern in government about the bacterium, described as being "endemic throughout the health service".
On 27 October 2006, nine deaths were attributed to the bacterium in Quebec.
On 18 November 2006, the bacterium was reported to have been responsible for 12 deaths in Quebec. This 12th reported death came only two days after Saint-Hyacinthe's Hôpital Honoré-Mercier announced the outbreak was under control. Thirty-one people were diagnosed with CDIs. Cleaning crews took measures in an attempt to clear the outbreak.
C. difficile was mentioned on 6,480 death certificates in the UK in 2006.
On 27 February 2007, a new outbreak was identified at Trillium Health Centre in Mississauga, Ontario, where 14 people were diagnosed with CDIs. The bacteria were of the same strain as the one in Quebec. Officials have not been able to determine whether C. difficile was responsible for the deaths of four people over the prior two months.
Between February and June 2007, three people at Loughlinstown Hospital in Dublin, Ireland, were found by the coroner to have died as a result of C. difficile infection. In an inquest, the Coroner's Court found the hospital had no designated infection control team or consultant microbiologist on staff.
Between June 2007 and August 2008, the Antrim Area, Braid Valley, and Mid-Ulster hospitals of the Northern Health and Social Care Trust in Northern Ireland were the subject of an inquiry into patient deaths. During the inquiry, expert reviewers concluded that C. difficile was implicated in 31 of these deaths, as the underlying cause in 15 and as a contributory cause in 16. During that time, the review also noted 375 instances of CDIs in those being treated at the hospitals.
In October 2007, Maidstone and Tunbridge Wells NHS Trust was heavily criticized by the Healthcare Commission regarding its handling of a major outbreak of C. difficile in its hospitals in Kent from April 2004 to September 2006. In its report, the Commission estimated approximately 90 people "definitely or probably" died as a result of the infection.
In November 2007, the 027 strain spread into several hospitals in southern Finland, with 10 deaths among 115 infected people reported by 14 December 2007.
In November 2009, four deaths at Our Lady of Lourdes Hospital in Ireland had possible links to CDI. A further 12 people tested positive for infection, and another 20 showed signs of infection.
From February 2009 to February 2010, 199 people at Herlev hospital in Denmark were suspected of being infected with the 027 strain. In the first half of 2009, 29 died in hospitals in Copenhagen after they were infected with the bacterium.
In May 2010, a total of 138 people at four different hospitals in Denmark were infected with the 027 strain, in addition to isolated occurrences at other hospitals.
In May 2010, 14 fatalities were related to the bacterium in the Australian state of Victoria. Two years later, the same strain of the bacterium was detected in New Zealand.
On 28 May 2011, an outbreak in Ontario was reported, with 26 fatalities as of 24 July 2011.
In 2012/2013, a total of 27 people at one hospital in the south of Sweden (Ystad) were infected, with 10 deaths; five of those died of the strain 017.
Etymology and pronunciation
The genus name is from the Greek klōstēr (κλωστήρ), "spindle", and the specific name is from Latin difficile, neuter singular form of difficilis "difficult, obstinate", chosen in reference to fastidiousness upon culturing.
The pronunciations of the current and former genus assignments, Clostridioides and Clostridium, differ. Both genera still have species assigned to them, but this species is now classified in the former. Via the norms of binomial nomenclature, it is understood that the former binomial name of this species is now an alias.
Regarding the specific name, the traditional English pronunciation is the norm, reflecting how medical English usually pronounces naturalized New Latin words (which in turn largely reflects traditional English pronunciation of Latin), although a restored pronunciation closer to the Latin is also sometimes used (the classical Latin pronunciation is reconstructed as [kloːsˈtrɪdɪ.ũː dɪfˈfɪkɪlɛ]). The specific name is also commonly pronounced as though it were French, which from a prescriptive viewpoint is a "mispronunciation" but from a linguistically descriptive viewpoint cannot be described as erroneous, because it is so widely used among health care professionals. It can be described as "the non-preferred variant" from the viewpoint of sticking most regularly to New Latin in binomial nomenclature, which is also a valid viewpoint, although New Latin specific names contain such a wide array of extra-Latin roots (including surnames and jocular references) that extra-Latin pronunciation is involved anyway (as seen, for example, with Ba humbugi, Spongiforma squarepantsii, and hundreds of others).
Research
As of 2019, vaccine candidates providing immunity against C. difficile toxin A and C. difficile toxin B have advanced the most in clinical research, but do not prevent bacterial colonization. A vaccine candidate by Pfizer is in a phase 3 clinical trial that is estimated to be completed in September 2021 and a vaccine candidate by GlaxoSmithKline is in a phase 1 clinical trial that is estimated to be completed in July 2021.
CDA-1 and CDB-1 (also known as MDX-066/MDX-1388 and MBL-CDA1/MBL-CDB1) is an investigational, monoclonal antibody combination co-developed by Medarex and Massachusetts Biologic Laboratories (MBL) to target and neutralize C. difficile toxins A and B, for the treatment of CDI. Merck & Co., Inc. gained worldwide rights to develop and commercialize CDA-1 and CDB-1 through an exclusive license agreement signed in April 2009. It is intended as an add-on therapy to one of the existing antibiotics to treat CDI.
Nitazoxanide is a synthetic nitrothiazolyl-salicylamide derivative indicated as an antiprotozoal agent (FDA-approved for the treatment of infectious diarrhea caused by Cryptosporidium parvum and Giardia lamblia) and is also being studied for C. difficile infection in comparison with vancomycin.
Rifaximin is a clinical-stage semisynthetic, rifamycin-based, nonsystemic antibiotic for CDI. It is FDA-approved for the treatment of infectious diarrhea and is being developed by Salix Pharmaceuticals.
Other drugs for the treatment of CDI are under development and include rifalazil, tigecycline, ramoplanin, ridinilazole, and SQ641.
Research has studied whether the appendix has any importance in C. difficile infection. The appendix is thought to have a function of housing good gut flora. In a study conducted in 2011, it was shown that when C. difficile bacteria were introduced into the gut, the appendix housed cells that increased the antibody response of the body. The B cells of the appendix migrate, mature, and increase production of toxin A-specific IgA and IgG antibodies, leading to an increased probability of good gut flora surviving against the C. difficile bacteria.
Taking non-toxic types of C. difficile after an infection has promising results with respect to preventing future infections.
Treatment with bacteriophages directed against specific toxin-producing strains of C. difficile is also being tested.
A study in 2017 linked severe disease to trehalose in the diet.
Other animals
Colitis-X (in horses)
References
External links
Pseudomembranous colitis at Curlie
Updated guidance on the management and treatment of Clostridium difficile infection
Cachexia | Cachexia is a complex syndrome associated with an underlying illness, causing ongoing muscle loss that is not entirely reversed with nutritional supplementation. A range of diseases can cause cachexia, most commonly cancer, congestive heart failure, chronic obstructive pulmonary disease, chronic kidney disease, and AIDS. Systemic inflammation from these conditions can cause detrimental changes to metabolism and body composition. In contrast to weight loss from inadequate caloric intake, cachexia causes mostly muscle loss instead of fat loss. Diagnosis of cachexia can be difficult due to the lack of well-established diagnostic criteria. Cachexia can improve with treatment of the underlying illness, but other treatment approaches have limited benefit. Cachexia is associated with increased mortality and poor quality of life.
The term is from Greek κακός kakos, "bad", and ἕξις hexis, "condition".
Causes
Cachexia can be caused by diverse medical conditions, but is most often associated with end-stage cancer, known as cancer cachexia. About 50% of all cancer patients develop cachexia. Those with upper gastrointestinal and pancreatic cancers have the highest frequency of developing cachexia. Prevalence rises in more advanced stages and is estimated to affect 80% of terminal cancer patients. Congestive heart failure, AIDS, chronic obstructive pulmonary disease, and chronic kidney disease are other conditions that often cause cachexia. Cachexia can also be the result of advanced stages of cystic fibrosis, multiple sclerosis, motor neuron disease, Parkinson's disease, dementia, tuberculosis, multiple system atrophy, mercury poisoning, Crohn's disease, trypanosomiasis, rheumatoid arthritis, and celiac disease, as well as other systemic diseases.
Mechanism
The exact mechanism by which these diseases cause cachexia is poorly understood, and is likely multifactorial, with multiple disease pathways involved. Inflammatory cytokines appear to play a central role, including tumor necrosis factor (TNF) (also nicknamed cachexin or cachectin), interferon gamma, and interleukin 6. TNF has been shown to have a direct catabolic effect on skeletal muscle and adipose tissue through the ubiquitin proteasome pathway. This mechanism involves the formation of reactive oxygen species leading to upregulation of the transcription factor NF-κB. NF-κB is a known regulator of the genes that encode cytokines and cytokine receptors. The increased production of cytokines induces proteolysis and breakdown of myofibrillar proteins. Systemic inflammation also causes reduced protein synthesis through inhibition of the Akt/mTOR pathway. Although many different tissues and cell types may be responsible for the increase in circulating cytokines, evidence indicates tumors themselves are an important source of factors that may promote cachexia in cancer. Tumor-derived molecules such as lipid mobilizing factor, proteolysis-inducing factor, and mitochondrial uncoupling proteins may induce protein degradation and contribute to cachexia. Uncontrolled inflammation in cachexia can lead to an elevated resting metabolic rate, further increasing the demands for protein and energy sources. There is also evidence of alteration in feeding control loops in cachexia. High levels of leptin, a hormone secreted by adipocytes, block the release of neuropeptide Y, the most potent feeding-stimulatory peptide in the hypothalamic orexigenic network, leading to decreased energy intake despite the high metabolic demand for nutrients.
Diagnosis
Diagnostic guidelines and criteria have only recently been proposed, despite the prevalence of cachexia. Although criteria vary, the primary features of cachexia include progressive depletion of muscle and fat mass, reduced food intake, abnormal metabolism of carbohydrate, protein, and fat, reduced quality of life, and increased physical impairment. Historically, body weight changes were used as the primary metrics of cachexia, including low body mass index and involuntary weight loss of more than 10%. Using weight alone is limited by the presence of edema, tumor mass, and the high prevalence of obesity in the general population. Weight-based criteria do not take into account changes in body composition, especially loss of lean body mass.
In an attempt to include a broader evaluation of the burden of cachexia, diagnostic criteria using assessments of laboratory metrics and symptoms in addition to weight have been proposed. The criteria include weight loss of at least 5% in 12 months, or low body mass index (less than 22 kg/m2), with at least three of the following features: decreased muscle strength, fatigue, anorexia, low fat-free mass index, or abnormal biochemistry (increased inflammatory markers, anemia, low serum albumin). In cancer patients, cachexia is diagnosed from unintended weight loss of more than 5%. For cancer patients with a body mass index of less than 20 kg/m2, cachexia is diagnosed after unintended weight loss of more than 2%. Additionally, it can be diagnosed through sarcopenia, or loss of skeletal muscle mass. Laboratory markers used in evaluation of people with cachexia include albumin, prealbumin, C-reactive protein, and hemoglobin. However, laboratory metrics and cut-off values are not standardized across different diagnostic criteria. Acute phase reactants (IL-6, IL-1b, tumor necrosis factor-a, IL-8, interferon-g) are sometimes measured but correlate poorly with outcomes. There are no biomarkers to identify people with cancer who may develop cachexia. In the effort to better classify cachexia severity, several scoring systems have been proposed, including the Cachexia Staging Score (CSS) and the Cachexia SCOre (CASCO). The CSS takes into account weight loss, subjective reporting of muscle function, performance status, appetite loss, and laboratory changes to categorize patients into non-cachexia, pre-cachexia, cachexia, and refractory cachexia. CASCO is another validated score that includes evaluation of body weight loss and composition, inflammation, metabolic disturbances, immunosuppression, physical performance, anorexia, and quality of life. Evaluation of changes in body composition is limited by the difficulty of measuring muscle mass and health in a non-invasive and cost-effective way. Imaging with quantification of muscle mass has been investigated, including bioelectrical impedance analysis, computed tomography, dual-energy X-ray absorptiometry (DEXA), and magnetic resonance imaging, but these are not widely used.
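The weight-and-features criteria above lend themselves to a simple rule-based check. The sketch below encodes them directly; it is an illustration of the consensus criteria as quoted here, not a clinical tool.

```python
def meets_cachexia_criteria(weight_loss_pct_12mo: float, bmi: float,
                            features: set[str]) -> bool:
    """Evaluate the consensus cachexia criteria quoted above.

    `features` is any subset of: 'decreased_strength', 'fatigue',
    'anorexia', 'low_ffmi', 'abnormal_biochemistry'.
    """
    anchor = weight_loss_pct_12mo >= 5.0 or bmi < 22.0
    return anchor and len(features) >= 3


# 6% weight loss plus three supporting features -> criteria met.
print(meets_cachexia_criteria(6.0, 24.0, {"fatigue", "anorexia", "low_ffmi"}))  # True
# Anchor criterion absent -> criteria not met regardless of features.
print(meets_cachexia_criteria(3.0, 23.0, {"fatigue", "anorexia", "low_ffmi"}))  # False
```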
Definition
Identification, treatment, and research of cachexia have historically been limited by the lack of a widely accepted definition of cachexia. In 2011, an international consensus group adopted a definition of cachexia as "a multifactorial syndrome defined by an ongoing loss of skeletal muscle mass (with or without loss of fat mass) that can be partially but not entirely reversed by conventional nutritional support." Cachexia differs from weight loss due to malnutrition from malabsorption, anorexia nervosa, or anorexia due to major depressive disorder. Weight loss from inadequate caloric intake generally causes fat loss before muscle loss, whereas cachexia causes predominantly muscle wasting. Cachexia is also distinct from sarcopenia, or age-related muscle loss, although they often co-exist.
Treatment
The management of cachexia depends on the underlying cause, the general prognosis, and the needs of the person affected. The most effective approach to cachexia is treating the underlying disease process. An example is the reduction in cachexia from AIDS by highly active antiretroviral therapy. However, this is often not possible, or may be inadequate to reverse the cachexia syndrome in other diseases. Approaches to mitigate muscle loss include exercise, nutritional therapies, and medications.
Exercise
Therapy that includes regular physical exercise can be recommended for the treatment of cachexia due to the positive effects of exercise on skeletal muscle but current evidence remains uncertain as to its effectiveness, acceptability and safety for cancer patients. Individuals with cachexia generally report low levels of physical activity and few engage in an exercise routine, owing to low motivation to exercise and a belief that exercising may worsen their symptoms or cause harm.
Medications
Appetite stimulant medications are used in cachexia to increase food intake, but they are not effective in stopping muscle wasting and may have detrimental side effects. Appetite stimulants include glucocorticoids, cannabinoids, and progestins such as megestrol acetate. Anti-emetics such as 5-HT3 antagonists are also commonly used in cancer cachexia if nausea is a prominent symptom. Anabolic-androgenic steroids like oxandrolone may be beneficial in cachexia, but their use is recommended for a maximum of two weeks, since a longer duration of treatment increases side effects. Whilst preliminary studies have suggested thalidomide may be useful, a Cochrane review found no evidence to make an informed decision about the use of this drug in cancer patients with cachexia.
Nutrition
The increased metabolic rate and appetite suppression common in cachexia can compound muscle loss. Studies using a calorie-dense protein supplementation have suggested at least weight stabilization can be achieved, although improvements in lean body mass have not been observed in these studies.
Supplements
Administration of exogenous amino acids has been investigated to serve as a protein-sparing metabolic fuel by providing substrates for both muscle metabolism and gluconeogenesis. The branched-chain amino acids leucine and valine may have potential in inhibiting overexpression of protein breakdown pathways. The amino acid glutamine has been used as a component of oral supplementation to reverse cachexia in people with advanced cancer or HIV/AIDS. β-hydroxy β-methylbutyrate (HMB) is a metabolite of leucine that acts as a signaling molecule to stimulate protein synthesis. Studies showed positive results for chronic pulmonary disease, hip fracture, and AIDS-related and cancer-related cachexia. However, many of these clinical studies used HMB as a component of combination treatment with glutamine, arginine, leucine, higher dietary protein and/or vitamins, which limits the assessment of the efficacy of HMB alone.
Epidemiology
Accurate epidemiological data on the prevalence of cachexia are lacking due to changing diagnostic criteria and under-identification of people with the disorder. Cachexia from any disease is estimated to affect more than 5 million people in the United States. The prevalence of cachexia is growing, estimated at 1% of the population. The prevalence is lower in Asia, but due to the larger population, this represents a similar burden. Cachexia is also a significant problem in South America and Africa. The most frequent causes of cachexia in the United States by population prevalence are: 1) chronic obstructive pulmonary disease (COPD), 2) heart failure, 3) cancer cachexia, 4) chronic kidney disease. The prevalence of cachexia ranges from 15 to 60% among people with cancer, increasing to an estimated 80% in terminal cancer. This wide range is attributed to differences in cachexia definition, variability in cancer populations, and timing of diagnosis. Although the prevalence of cachexia among people with COPD or heart failure is lower (estimated 5% to 20%), the large number of people with these conditions dramatically increases the total cachexia burden. Cachexia contributes to significant loss of function and healthcare utilization. Estimates using the National Inpatient Sample in the United States suggest that cachexia accounted for 177,640 hospital stays in 2016. Cachexia is considered the immediate cause of death of many people with cancer, estimated at between 22 and 40%.
History
The word "cachexia" is derived from the Greek words "Kakos" (bad) and "hexis" (condition). English ophthalmologist John Zachariah Laurence was the first to use the phrase "cancerous cachexia", doing so in 1858. He applied the phrase to the chronic wasting associated with malignancy. It was not until 2011 that the term "cancer-associated cachexia" was given a formal definition, with a publication by Kenneth Fearon. Fearon defined it as "a multifactorial syndrome characterized by ongoing loss of skeletal muscle (with or without loss of fat mass) that cannot be fully reversed by conventional nutritional support and leads to progressive functional impairment".
Research
Several medications are under investigation or have been previously trialed for use in cachexia but are currently not in widespread clinical use:
Thalidomide
Cytokine antagonists
Cannabinoids
Omega-3 fatty acids, including eicosapentaenoic acid (EPA)
Non-steroidal anti-inflammatory drugs
Prokinetics
Ghrelin and ghrelin receptor agonist
Anabolic catabolic transforming agents such as MT-102
Selective androgen receptor modulators
Cyproheptadine
Hydrazine
Medical marijuana has been allowed for the treatment of cachexia in some US states, such as Illinois, Maryland, Delaware, Nevada, Michigan, Washington, Oregon, California, Colorado, New Mexico, Arizona, Vermont, New Jersey, Rhode Island, Maine, New York, Hawaii, and Connecticut.
Multimodal therapy
Despite the extensive investigation into single therapeutic targets for cachexia, the most effective treatments use multi-targeted therapies. In Europe, a combination of non-drug approaches, including physical training, nutritional counseling, and psychotherapeutic intervention, is used in the belief that this approach may be more effective than monotherapy. Administration of anti-inflammatory drugs showed efficacy and safety in the treatment of people with advanced cancer cachexia.
See also
Sarcopenia
Muscle atrophy
Marasmus
Cancer
Progressive disease
Journal of Cachexia, Sarcopenia and Muscle
References
External links
Calcium channel blocker toxicity | Calcium channel blocker toxicity is the taking of too much of the medications known as calcium channel blockers (CCBs), either by accident or on purpose. This often causes a slow heart rate and low blood pressure, which can progress to the heart stopping altogether. Some CCBs can also cause a fast heart rate as a result of the low blood pressure. Other symptoms may include nausea, vomiting, sleepiness, and shortness of breath. Symptoms usually occur in the first six hours, but with some forms of the medication may not start until 24 hours after ingestion. A number of treatments may be useful. These include efforts to reduce absorption of the drug, including activated charcoal taken by mouth if given shortly after the ingestion, or whole bowel irrigation if an extended release formula was taken. Efforts to bring about vomiting are not recommended. Medications to treat the toxic effects include intravenous fluids, calcium gluconate, glucagon, high dose insulin, vasopressors, and lipid emulsion. Extracorporeal membrane oxygenation may also be an option. More than ten thousand cases of calcium channel blocker toxicity were reported in the United States in 2010. Along with beta blockers and digoxin, calcium channel blockers have one of the highest rates of death in overdose. These medications first became available in the 1970s and 1980s. They are one of the few types of medication in which one pill can result in the death of a child.
Signs and symptoms
Most people who have taken too much of a calcium channel blocker, especially diltiazem, get a slow heart rate and low blood pressure (vasodilatory shock). This can progress to the heart stopping altogether. CCBs of the dihydropyridine group, as well as flunarizine, predominantly cause reflex tachycardia as a reaction to the low blood pressure. Other potential symptoms include nausea and vomiting, a decreased level of consciousness, and breathing difficulties. Symptoms usually begin within 6 hours of taking the medication by mouth. With extended release formulations, symptoms may not occur for up to a day. Seizures are rare in adults but occur more often in children. Hypocalcaemia may also occur.
Cause
Calcium channel blockers, also known as calcium channel antagonists, are widely used for a number of health conditions, and thus are commonly present in many people's homes. In young children, one pill may cause serious health problems and potentially death. The calcium channel blocker that caused the greatest number of deaths in 2010 in the United States was verapamil. This agent is believed to cause more heart problems than many of the others.
Diagnosis
A blood or urine test to diagnose overdose is not generally available. CCB overdose may cause high blood sugar levels, and this is often a sign of how severe the problem will become.
Electrocardiogram
CCB toxicity can cause a number of electrocardiogram abnormalities, with a slow sinus rhythm being the most common. Others include QT prolongation, bundle branch block, first-degree atrioventricular block, and even sinus tachycardia.
Differential
It may not be possible to tell the difference between beta blocker toxicity and calcium channel blocker overdose based on signs and symptoms.
Management
The medical management of CCB toxicity may be difficult. It may not improve with the usual treatments used for low blood pressure and a slow heart rate. Those who have no symptoms or signs six hours after taking an immediate release formulation, or 24 hours after taking an extended release formulation, need no further medical treatment.
Detoxification
Activated charcoal is recommended if it can be given within an hour or two of taking the calcium channel blockers. In those who have taken an extended release formulation of a CCB but are otherwise doing fine, whole bowel irrigation with polyethylene glycol may be useful. Causing vomiting by the use of medications such as ipecac is not recommended.
Insulin
High doses of intravenous insulin with glucose may be useful and are a first-line treatment in overdoses. As this treatment may cause a drop in blood sugar and blood potassium levels, these should be monitored closely.
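As a rough illustration of weight-based dosing in high-dose insulin euglycemia therapy, the sketch below computes starting doses. The 1 unit/kg bolus and 1 unit/kg/h starting infusion are one commonly described protocol and are an assumption here, not figures from this article; actual protocols titrate the infusion with continuous glucose and potassium monitoring and concurrent dextrose.

```python
def hiet_starting_doses(weight_kg: float,
                        bolus_u_per_kg: float = 1.0,
                        infusion_u_per_kg_hr: float = 1.0) -> dict:
    """Starting doses for high-dose insulin euglycemia therapy (HIET).

    Defaults reflect one commonly described protocol (assumed, not
    sourced from this article): 1 unit/kg bolus, then 1 unit/kg/h.
    """
    return {
        "bolus_units": weight_kg * bolus_u_per_kg,
        "infusion_units_per_hour": weight_kg * infusion_u_per_kg_hr,
    }


print(hiet_starting_doses(70.0))
# {'bolus_units': 70.0, 'infusion_units_per_hour': 70.0}
```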
Other
Intravenous calcium gluconate or calcium chloride is considered a specific antidote. Slow heart rate can be treated with atropine and sympathomimetics. Low blood pressure is treated with vasopressors such as adrenaline. There is tentative clinical evidence and good theoretical evidence of the benefit of lipid emulsion in severe overdoses of CCBs. Methylene blue may also be used for those with low blood pressure that does not respond to other treatments.
Epidemiology
More than 10,000 cases of potential calcium channel blocker toxicity occurred in the United States in 2010. When death occurs in medicine overdose, heart medications are the cause more than 10% of the time. The three most common types of heart medications that result in this outcome are calcium channel blockers, beta blockers, and digoxin.
References
External links
St-Onge, Maude; Anseeuw, Kurt; Cantrell, Frank Lee; Gilchrist, Ian C.; Hantson, Philippe; Bailey, Benoit; Lavergne, Valéry; Gosselin, Sophie; Kerns, William; Laliberté, Martin; Lavonas, Eric J.; Juurlink, David N.; Muscedere, John; Yang, Chen-Chang; Sinuff, Tasnim; Rieder, Michael; Mégarbane, Bruno (October 2016). "Experts Consensus Recommendations for the Management of Calcium Channel Blocker Poisoning in Adults". Critical Care Medicine. 45 (3): e306–e315. doi:10.1097/CCM.0000000000002087. PMC 5312725. PMID 27749343.
Campylobacteriosis | Campylobacteriosis is an infection by the Campylobacter bacterium, most commonly C. jejuni. It is among the most common bacterial infections of humans, often a foodborne illness. It produces an inflammatory, sometimes bloody, diarrhea or dysentery syndrome, mostly including cramps, fever and pain.
Symptoms and signs
The prodromal symptoms are fever, headache, and myalgia, which can be severe, lasting as long as 24 hours. After 1–5 days, typically, these are followed by diarrhea (as many as 10 watery, frequently bloody, bowel movements per day) or dysentery, cramps, abdominal pain, and fever as high as 40 °C (104 °F). In most people, the illness lasts for 2–10 days. It is classified as invasive/inflammatory diarrhea, also described as bloody diarrhea or dysentery. Other diseases show similar symptoms. For instance, abdominal pain and tenderness may be very localized, mimicking acute appendicitis. Furthermore, Helicobacter pylori is closely related to Campylobacter and causes peptic ulcer disease.
Complications
Complications include toxic megacolon, dehydration, and sepsis. Such complications generally occur in young children (under 1 year of age) and immunocompromised people. A chronic course of the disease is possible; this disease process is likely to develop without a distinct acute phase. Chronic campylobacteriosis features a long period of sub-febrile temperature and asthenia; eye damage, arthritis, and endocarditis may develop if the infection is untreated. Occasional deaths occur in young, previously healthy individuals because of blood volume depletion (due to dehydration), and in people who are elderly or immunocompromised. Some individuals (1–2 in 100,000 cases) develop Guillain–Barré syndrome, in which the nerves that join the spinal cord and brain to the rest of the body are damaged, sometimes permanently. This occurs only with infection by C. jejuni and C. upsaliensis.
Other factors
In patients with HIV, infections may be more frequent, may cause prolonged bouts of dirty brown diarrhea, and may be more commonly associated with bacteremia and antibiotic resistance. In participants of unprotected anal intercourse, campylobacteriosis is more localized to the distal end of the colon and may be termed a proctocolitis. The severity and persistence of infection in patients with AIDS and hypogammaglobulinemia indicates that both cell-mediated and humoral immunity are important in preventing and terminating infection.
Cause
Campylobacteriosis is caused by Campylobacter bacteria (curved or spiral, motile, non-spore-forming, Gram-negative rods). The disease is usually caused by C. jejuni, a spiral and comma-shaped bacterium normally found in cattle, swine, and birds, where it is nonpathogenic, but the illness can also be caused by C. coli (also found in cattle, swine, and birds), C. upsaliensis (found in cats and dogs), and C. lari (present in seabirds in particular). One effect of campylobacteriosis is tissue injury in the gut. The sites of tissue injury include the jejunum, the ileum, and the colon. C. jejuni appears to achieve this by invading and destroying epithelial cells. C. jejuni can also cause a latent autoimmune effect on the nerves of the legs, usually seen several weeks after a surgical procedure of the abdomen. The effect is known as an acute idiopathic demyelinating polyneuropathy (AIDP), i.e. Guillain–Barré syndrome, in which one sees symptoms of ascending paralysis, dysaesthesias usually below the waist, and, in the later stages, respiratory failure. Some strains of C. jejuni produce a cholera-like enterotoxin, which is important in the watery diarrhea observed in infections. The organism produces diffuse, bloody, edematous, and exudative enteritis. In a small number of cases, the infection may be associated with hemolytic uremic syndrome and thrombotic thrombocytopenic purpura through a poorly understood mechanism.
Transmission
The common routes of transmission for the disease-causing bacteria are fecal-oral, person-to-person sexual contact, ingestion of contaminated food (generally unpasteurized (raw) milk and undercooked or poorly handled poultry), and waterborne (i.e., through contaminated drinking water). Contact with contaminated poultry, livestock, or household pets, especially puppies, can also cause disease. Animals farmed for meat are the main source of campylobacteriosis. A study published in PLoS Genetics (26 September 2008) by researchers from Lancashire, England, and Chicago, Illinois, found that 97 percent of campylobacteriosis cases sampled in Lancashire were caused by bacteria typically found in chicken and livestock. In 57 percent of cases, the bacteria could be traced to chicken, and in 35 percent to cattle. Wild animal and environmental sources were accountable for just three percent of disease. The infectious dose is 1000–10,000 bacteria (although as few as ten to five hundred bacteria can be enough to infect humans). Campylobacter species are sensitive to hydrochloric acid in the stomach, and acid reduction treatment can reduce the amount of inoculum needed to cause disease. Exposure to bacteria is often more common during travelling, and therefore campylobacteriosis is a common form of traveler's diarrhea.
Diagnosis
Campylobacter organisms can be detected by performing a Gram stain of a stool sample with high specificity and a sensitivity of ~60%, but are most often diagnosed by stool culture. Fecal leukocytes should be present and indicate the diarrhea to be inflammatory in nature. Methods currently being developed to detect the presence of campylobacter organisms include antigen testing via an EIA or PCR.
Prevention
Pasteurization of milk and chlorination of drinking water destroys the organisms.
Treatment with antibiotics can reduce fecal excretion.
Infected health care workers should not provide direct patient care.
Separate cutting boards should be used for foods of animal origin and other foods. After preparing raw food of animal origin, all cutting boards and countertops should be carefully cleaned with soap and hot water.
Contact with pet saliva and feces should be avoided. The World Health Organization recommends the following:
Food should be properly cooked and hot when served.
Consume only pasteurized or boiled milk and milk products, never raw milk products.
Make sure that ice is from safe water.
If you are not sure of the safety of drinking water, boil it, or disinfect it with chemical disinfectant.
Wash hands thoroughly and frequently with soap, especially after using the toilet and after contact with pets and farm animals.
Wash fruits and vegetables thoroughly, especially if they are to be eaten raw. Peel fruits and vegetables whenever possible.
Food handlers, professionals and at home, should observe hygienic rules during food preparation.
Professional food handlers should immediately report to their employer any fever, diarrhea, vomiting or visible infected skin lesions.
Treatment
The infection is usually self-limiting, and in most cases, symptomatic treatment by liquid and electrolyte replacement is enough in human infections.
Antibiotics
Antibiotic treatment has only a marginal effect on the duration of symptoms, and its use is not recommended except in high-risk patients with clinical complications. Erythromycin can be used in children, and tetracycline in adults. Some studies show, however, that erythromycin rapidly eliminates Campylobacter from the stool without affecting the duration of illness. Nevertheless, children with dysentery due to C. jejuni benefit from early treatment with erythromycin. Treatment with antibiotics, therefore, depends on the severity of symptoms. Quinolones are effective if the organism is sensitive, but high rates of quinolone use in livestock mean that quinolones are now largely ineffective. Antimotility agents, such as loperamide, can lead to prolonged illness or intestinal perforation in any invasive diarrhea, and should be avoided. Trimethoprim/sulfamethoxazole and ampicillin are ineffective against Campylobacter.
In animals
In the past, poultry infections were often treated by mass administration of enrofloxacin and sarafloxacin for single instances of infection. The FDA banned this practice, as it promoted the development of fluoroquinolone-resistant populations.
A major broad-spectrum fluoroquinolone used in humans is ciprofloxacin. The currently growing resistance of Campylobacter to fluoroquinolones and macrolides is of major concern.
Prognosis
Campylobacteriosis is usually self-limited without any mortality (assuming proper hydration is maintained). However, there are several possible complications.
Epidemiology
Campylobacter is one of the most common causes of human bacterial gastroenteritis. For instance, an estimated 2 million cases of Campylobacter enteritis occur annually in the U.S., accounting for 5–7% of cases of gastroenteritis. Furthermore, in the United Kingdom during 2000, Campylobacter jejuni was involved in 77.3% of all cases of laboratory-confirmed foodborne illness. About 15 of every 100,000 people are diagnosed with campylobacteriosis every year, and with many cases going unreported, up to 0.5% of the general population may unknowingly harbor Campylobacter in their gut. Unfortunately, the true incidence of campylobacteriosis is unknown in most countries, especially developing countries. The reasons include underreporting, difficulties with diagnosis, and differences in reporting systems between countries. A large animal reservoir is present as well, with up to 100% of poultry, including chickens, turkeys, and waterfowl, having asymptomatic infections in their intestinal tracts. Infected chicken feces may contain up to 10^9 bacteria per 25 grams, and due to the animals' close proximity, the bacteria are rapidly spread to other chickens. This vastly exceeds the infectious dose of 1000–10,000 bacteria for humans. In January 2013, the UK's Food Standards Agency warned that two-thirds of all raw chicken bought from UK shops was contaminated with Campylobacter, affecting an estimated half a million people annually and killing approximately 100.
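A back-of-envelope calculation makes the contrast between contamination levels and infectious dose concrete, using only the figures quoted above:

```python
# Figures quoted above: up to 1e9 bacteria per 25 g of infected chicken
# feces, versus an infectious dose of 1,000-10,000 bacteria.
bacteria_per_gram = 1e9 / 25  # up to ~4e7 bacteria per gram

for dose in (1_000, 10_000):
    grams = dose / bacteria_per_gram
    print(f"{dose:>6} bacteria ~ {grams * 1000:.3f} mg of heavily contaminated feces")
# A quarter of a milligram or less of material can carry an infectious dose.
```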
Outbreak
In August–September 2016, 5,200 people fell ill with campylobacteriosis in Hastings, New Zealand, after the local water supply in Havelock North tested positive for the pathogen Campylobacter jejuni. Four deaths were suspected to be due to the outbreak. It is suspected that after heavy rain fell on 5–6 August, water contamination from flooding caused the outbreak, although this is the subject of a government inquiry. It is the largest outbreak of waterborne disease ever to occur in New Zealand. All schools in Havelock North closed for two weeks, with the Hastings District Council issuing an urgent notice to boil water for at least one minute before consumption. This notice was lifted on 3 September, with the outbreak officially under control. According to the Centers for Disease Control and Prevention, a multistate outbreak of human Campylobacter infection was reported beginning 11 September 2017. In all, 55 cases were reported from 12 states (Florida, Kansas, Maryland, Missouri, New Hampshire, New York, Ohio, Pennsylvania, Tennessee, Utah, Wisconsin, and Wyoming). Epidemiological and laboratory evidence indicated that puppies sold through Petland stores were a likely source of this outbreak. Of the 55 cases reported, 50 were either employees of Petland, had recently purchased a puppy at Petland, or had visited there before illness began; the other five were exposed to puppies from various sources. Campylobacter can spread through contact with dog feces. It usually does not spread from one person to another. However, activities such as changing an infected person's diapers or sexual contact with an infected person can lead to infection. Regardless of where they are from, any puppies and dogs may carry Campylobacter germs.
References
External links
Campylobacter jejuni genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID | 106 |
Fungemia | Fungemia is the presence of fungi or yeasts in the blood. The most common type, also known as candidemia or systemic candidiasis, is caused by Candida species; candidemia is also among the most common bloodstream infections of any kind. Infections by other fungi, including Saccharomyces, Aspergillus, and Cryptococcus, are also called fungemia. It is most commonly seen in immunosuppressed or immunocompromised patients with severe neutropenia, cancer patients, or patients with intravenous catheters. It has been suggested that otherwise immunocompetent patients taking infliximab may be at higher risk for fungemia.
Diagnosis is difficult, as routine blood cultures have poor sensitivity.
Signs and symptoms
Symptoms can range from mild to severe, and are often described as extreme flu-like symptoms. Many symptoms may be associated with fungemia, including pain, acute confusion, chronic fatigue, and infections. Skin infections can include persistent or non-healing wounds and lesions, sweating, itching, and unusual discharge or drainage.
Risk factors
Pathogens
The most commonly known pathogen is Candida albicans, causing roughly 70% of fungemias, followed by Candida glabrata with 10%, Aspergillus with 1%, and Saccharomyces as the fourth most common. However, the frequency of infection by C. glabrata, Saccharomyces boulardii, Candida tropicalis, C. krusei, and C. parapsilosis is increasing, perhaps because significant use of fluconazole is common or due to increased antibiotic use. Candida auris is an emerging multidrug-resistant (MDR) yeast that can cause invasive infections and is associated with high mortality. It was first described in 2009 after being isolated from external ear discharge of a patient in Japan. Since the 2009 report, C. auris infections, specifically fungemia, have been reported from South Korea, India, South Africa, and Kuwait. Although published reports are not available, C. auris has also been identified in Colombia, Venezuela, Pakistan, and the United Kingdom. In a single reported instance, Psilocybe cubensis was reported to have been cultured from a case of fungemia in which an individual self-injected an underprocessed decoction of fungal matter. The patient, who had been suffering from mild depression, attempted to self-medicate with the mushrooms but was frustrated by the lag time between eating the mushrooms and experiencing the psychedelic effects. In an attempt to bypass this, the patient boiled and filtered the mushrooms into a mushroom tea, which was then administered by injection. The patient had multiple organ failure, but this was successfully reversed and the infection treated with antifungal drugs. Two other examples of fungemia as a result of injecting fungal matter in this way have been described in the medical literature, both dating to 1985.
Diagnosis
The gold standard for the diagnosis of invasive candidiasis and candidemia is a positive culture. Blood cultures should be obtained in all patients with suspected candidemia.
Treatment
Neutropenic and non-neutropenic candidemia are treated differently. An intravenous echinocandin such as anidulafungin, caspofungin, or micafungin is recommended as first-line therapy for fungemia, specifically candidemia. Oral or intravenous fluconazole is an acceptable alternative. A lipid formulation of amphotericin B is a reasonable alternative if there is limited antifungal availability, antifungal resistance, or antifungal intolerance.
See also
Bacteremia
Candidiasis
Fungicide
Mycosis
References
External links
Capillariasis | Capillariasis is a disease caused by nematodes in the genus Capillaria. The two principal forms of the disease are:
Intestinal capillariasis, caused by Capillaria philippinensis
Hepatic capillariasis, caused by Capillaria hepatica
References
Cardiac stress test | A cardiac stress test (also referred to as a cardiac diagnostic test, cardiopulmonary exercise test, or abbreviated CPX test) is a cardiological test that measures the heart's ability to respond to external stress in a controlled clinical environment. The stress response is induced by exercise or by intravenous pharmacological stimulation.
Cardiac stress tests compare the coronary circulation while the patient is at rest with the same patient's circulation during maximum cardiac exertion, showing any abnormal blood flow to the myocardium (heart muscle tissue). The results can be interpreted as a reflection of the general physical condition of the test patient. This test can be used to diagnose coronary artery disease (also known as ischemic heart disease) and assess patient prognosis after a myocardial infarction (heart attack).
Exercise-induced stressors are most commonly either exercise on a treadmill or pedalling a stationary exercise bicycle ergometer. The level of stress is progressively increased by raising the difficulty (steepness of the slope on a treadmill or resistance on an ergometer) and speed. People who cannot use their legs may exercise with a bicycle-like crank that they turn with their arms, or may be given a medication to induce cardiac stress. Once the stress test is completed, the patient generally is advised to not suddenly stop activity but to slowly decrease the intensity of the exercise over the course of several minutes.
The test administrator or attending physician examines the symptoms and blood pressure response. To measure the hearts response to the stress the patient may be connected to an electrocardiogram (ECG); in this case the test is most commonly called a cardiac stress test but is known by other names, such as exercise testing, stress testing treadmills, exercise tolerance test, stress test or stress test ECG. Alternatively a stress test may use an echocardiogram for ultrasonic imaging of the heart (in which case the test is called an echocardiography stress test or stress echo), or a gamma camera to image radioisotopes injected into the bloodstream (called a nuclear stress test).
Stress echocardiography
A stress test may be accompanied by echocardiography. The echocardiography is performed both before and after the exercise so that structural differences can be compared.
A resting echocardiogram is obtained prior to stress. The images obtained are similar to the ones obtained during a full surface echocardiogram, commonly referred to as transthoracic echocardiogram. The patient is subjected to stress in the form of exercise or chemically (usually dobutamine). After the target heart rate is achieved, stress echocardiogram images are obtained. The two echocardiogram images are then compared to assess for any abnormalities in wall motion of the heart. This is used to detect obstructive coronary artery disease.
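The "target heart rate" mentioned here is commonly derived from an age-predicted maximum. The convention sketched below (maximum heart rate estimated as 220 minus age, with 85% of that as the stress-test target) is a widely used rule of thumb assumed for illustration; it is not stated in this article, and protocols vary.

```python
def target_heart_rate(age_years: int, fraction: float = 0.85) -> int:
    """Stress-test target HR from the common 220-minus-age rule of thumb
    (an assumed convention, not sourced from this article)."""
    age_predicted_max = 220 - age_years
    return round(age_predicted_max * fraction)


print(target_heart_rate(55))  # 140 bpm target for a 55-year-old
```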
Cardiopulmonary exercise test
When breathing gases are also measured (e.g. oxygen uptake, VO2), the test is often referred to as a cardiopulmonary exercise test (CPET).
Common indications for a cardiopulmonary exercise test include:
Evaluation of dyspnea.
Work up before heart transplantation.
Prognosis and risk assessment of heart failure patients.
The test is also common in sport science for measuring athletes' VO2 max.
Nuclear stress test
The best known example of a nuclear stress test is myocardial perfusion imaging. Typically, a radiotracer (Tc-99m sestamibi, Myoview, or thallous chloride 201) is injected during the test. After a suitable waiting period to ensure proper distribution of the radiotracer, scans are acquired with a gamma camera to capture images of the blood flow. Scans acquired before and after exercise are examined to assess the state of the coronary arteries of the patient. By showing the relative amounts of radioisotope within the heart muscle, nuclear stress tests more accurately identify regional areas of reduced blood flow.

Stress and potential cardiac damage from exercise during the test is a problem in patients with ECG abnormalities at rest or in patients with severe motor disability. Pharmacological stimulation from vasodilators such as dipyridamole or adenosine, or positive chronotropic agents such as dobutamine, can be used instead. Testing personnel can include a cardiac radiologist, a nuclear medicine physician, a nuclear medicine technologist, a cardiology technologist, a cardiologist, and/or a nurse. The typical dose of radiation received during this procedure ranges from 9.4 to 40.7 millisieverts.
Function
The American Heart Association recommends ECG treadmill testing as the first choice for patients at medium risk of coronary heart disease according to risk factors of smoking, family history of coronary artery stenosis, hypertension, diabetes, and high cholesterol. In 2013, in its "Exercise Standards for Testing and Training", the AHA indicated that high-frequency QRS analysis during the ECG treadmill test has useful test performance for detection of coronary heart disease.
Perfusion stress test (with 99mTc labelled sestamibi) is appropriate for select patients, especially those with an abnormal resting electrocardiogram.
Intracoronary ultrasound or angiogram can provide more information at the risk of complications associated with cardiac catheterization.
Diagnostic value
The common approach to stress testing, as indicated by the American College of Cardiology and the American Heart Association, is as follows:
Treadmill test: sensitivity 73-90%, specificity 50-74% (modified Bruce protocol)
Nuclear test: sensitivity 81%, specificity 85-95%. (Sensitivity is the percentage of people with the condition who are correctly identified by the test as having the condition; specificity is the percentage of people without the condition who are correctly identified by the test as not having it.)
Interpretation of the stress test result requires integration of the patient's pretest likelihood of disease with the test's sensitivity and specificity. This approach, first described by Diamond and Forrester in the 1970s, yields an estimate of the patient's post-test likelihood of disease.
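As an illustrative worked example of this calculation (the numbers here are hypothetical, chosen only to show the arithmetic, and are not guidance for any particular patient): suppose a patient has a pretest probability of 30% and undergoes a test with sensitivity 81% and specificity 90%. Expressed in likelihood-ratio form,

\[ \mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}} = \frac{0.81}{1 - 0.90} = 8.1 \]

\[ \text{pretest odds} = \frac{0.30}{1 - 0.30} \approx 0.43, \qquad \text{post-test odds} = 0.43 \times 8.1 \approx 3.5 \]

\[ \text{post-test probability} = \frac{3.5}{1 + 3.5} \approx 0.78 \]

Under these assumptions, a positive result raises the estimated likelihood of disease from 30% to roughly 78%; a negative result would lower it analogously through the negative likelihood ratio, (1 − sensitivity)/specificity.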
The value of stress tests has always been recognized as limited in assessing heart disease such as atherosclerosis, a condition which mainly produces wall thickening and enlargement of the arteries. This is because the stress test compares the patient's coronary flow status before and after exercise and is suited to detecting specific areas of ischemia and lumen narrowing, not generalized arterial thickening. According to American Heart Association data, about 65% of men and 47% of women present with a heart attack or sudden cardiac arrest as their first symptom of cardiovascular disease. Stress tests, carried out shortly before these events, are not relevant to the prediction of infarction in the majority of individuals tested. Over the past two decades, better methods have been developed to identify atherosclerotic disease before it becomes symptomatic. These detection methods include anatomical and physiological methods.
Examples of anatomical methods:
CT coronary calcium score
Coronary CT angiography
Intima-media thickness (IMT)
Intravascular ultrasound (IVUS)

Examples of physiological methods:
Lipoprotein analysis
HbA1c
Hs-CRP
Homocysteine

The anatomic methods directly measure some aspects of the actual process of atherosclerosis itself and therefore offer the possibility of early diagnosis, but are often more expensive and may be invasive (in the case of IVUS, for example). The physiological methods are often less expensive and safer, but are not able to quantify the current status of the disease or directly track progression.
Contraindications and termination conditions
Stress cardiac imaging is not recommended for asymptomatic, low-risk patients as part of their routine care. Some estimates show that such screening accounts for 45% of cardiac stress imaging, and evidence does not show that this results in better outcomes for patients. Unless high-risk markers are present, such as diabetes in patients aged over 40, peripheral arterial disease, or a yearly risk of coronary heart disease greater than 2 percent, most health societies do not recommend the test as a routine procedure.

Absolute contraindications to a cardiac stress test include:
Acute myocardial infarction within 48 hours
Unstable angina not yet stabilized with medical therapy
Uncontrolled cardiac arrhythmia, which may have significant hemodynamic responses (e.g. ventricular tachycardia)
Severe symptomatic aortic stenosis, aortic dissection, pulmonary embolism, and pericarditis
Multivessel coronary artery diseases that have a high risk of producing an acute myocardial infarction
Decompensated or inadequately controlled congestive heart failure
Uncontrolled hypertension (blood pressure > 200/110 mm Hg)
Severe pulmonary hypertension
Acute aortic dissection
Acutely ill for any reason
Indications for termination:
A cardiac stress test should be terminated before completion under the following circumstances. Absolute indications for termination include:
Systolic blood pressure decreases by more than 10 mmHg with increase in work rate, or drops below baseline in the same position, with other evidence of ischemia.
Increase in nervous system symptoms: Dizziness, ataxia or near syncope
Moderate to severe anginal pain (above 3 on standard 4-point scale)
Signs of poor perfusion, e.g. cyanosis or pallor
Request of the test subject
Technical difficulties (e.g. difficulties in measuring blood pressure or ECG)
ST-segment elevation of more than 1 mm in aVR, V1, or non-Q-wave leads
Sustained ventricular tachycardia
Relative indications for termination include:
Systolic blood pressure decreases by more than 10 mmHg with increase in work rate, or drops below baseline in the same position, without other evidence of ischemia.
ST or QRS segment changes, e.g. more than 2 mm horizontal or downsloping ST segment depression in non-Q wave leads, or marked axis shift
Arrhythmias other than sustained ventricular tachycardia, e.g. multifocal or triplet premature ventricular contractions, heart block, supraventricular tachycardia, or bradyarrhythmias
Intraventricular conduction delay or bundle branch block that cannot be distinguished from ventricular tachycardia
Increasing chest pain
Fatigue, shortness of breath, wheezing, claudication or leg cramps
Hypertensive response (systolic blood pressure > 250 mmHg or diastolic blood pressure > 115 mmHg)
Adverse effects
Side effects from cardiac stress testing may include:
Palpitations, chest pain, myocardial infarction, shortness of breath, headache, nausea, or fatigue.
Adenosine and dipyridamole can cause mild hypotension.
As the tracers used for this test are carcinogenic, frequent use of these tests carries a small risk of cancer.
Pharmacological agents
Pharmacologic stress testing relies on coronary steal. Vasodilators are used to dilate coronary vessels, which causes increased blood velocity and flow rate in normal vessels and less of a response in stenotic vessels. This difference in response leads to a steal of flow, and perfusion defects appear in cardiac nuclear scans or as ST-segment changes. The choice of pharmacologic stress agents used in the test depends on factors such as potential drug interactions with other treatments and concomitant diseases.
Pharmacologic agents such as adenosine, Lexiscan (regadenoson), or dipyridamole are generally used when a patient cannot achieve an adequate work level with treadmill exercise, or has poorly controlled hypertension or left bundle branch block. However, an exercise stress test may provide more information about exercise tolerance than a pharmacologic stress test.

Commonly used agents include:
Vasodilators acting as adenosine receptor agonists, such as adenosine itself, and dipyridamole (brand name "Persantine"), which acts indirectly at the receptor.
Regadenoson (brand name "Lexiscan"), which acts specifically at the adenosine A2A receptor, thus affecting the heart more than the lung.
Dobutamine. The effects of beta-agonists such as dobutamine can be reversed by administering beta-blockers such as propranolol.
Lexiscan (regadenoson) or dobutamine is often used in patients with severe reactive airway disease (asthma or COPD), as adenosine and dipyridamole can cause acute exacerbation of these conditions. If the patient's asthma is treated with an inhaler, it should be used as a pre-treatment prior to the injection of the pharmacologic stress agent. In addition, if the patient is actively wheezing, the physician should weigh the benefits against the risks of performing a stress test, especially outside of a hospital setting. Caffeine is usually withheld for 24 hours prior to an adenosine stress test, as it is a competitive antagonist of the A2A adenosine receptor and can attenuate the vasodilatory effects of adenosine. Aminophylline may be used to attenuate severe and/or persistent adverse reactions to adenosine and regadenoson.
Limitations
The stress test does not detect:
Atheroma
Vulnerable plaques
The test has relatively high rates of false positives and false negatives compared with other clinical tests. Females in particular have a higher rate of false positives, which is theorized to be because, on average, they have smaller hearts.
Results
Increased spatial resolution allows a more sensitive detection of ischemia.
Stress testing, even if made in time, is not able to guarantee the prevention of symptoms, fainting, or death. Stress testing, although more effective than a resting ECG at detecting heart function, is only able to detect certain cardiac properties.
Since 1980, the detection of high-grade coronary artery stenosis by cardiac stress testing has been key to identifying people at risk of heart attacks. From 1960 to 1990, despite the success of stress testing in identifying many who were at high risk of heart attack, the inability of this test to correctly identify many others was discussed in medical circles but remained unexplained.
High degrees of coronary artery stenosis, which are detected by stress testing methods, are often, though not always, responsible for recurrent symptoms of angina.
Unstable atheroma produces "vulnerable plaques" hidden within the walls of coronary arteries, which go undetected by this test.
Limitation in blood flow to the left ventricle can lead to recurrent angina pectoris.
See also
Cardiac steal syndrome
Duke Treadmill Score
Harvard step test
Metabolic equivalent
Robert A. Bruce
Wasserman 9-Panel Plot
References
External links
Preparing for the exercise stress test
"A Simple Exercise Tolerance Test for Circulatory Efficiency with Standard Tables for Normal Individuals," American Journal of the Medical Sciences
"Optimal Medical Therapy with or without PCI for Stable Coronary Disease," New England Journal of Medicine
Stress test information from the American Heart Association
Nuclear stress test information at NIH MedLine
Systemic primary carnitine deficiency | Systemic primary carnitine deficiency (SPCD) is an inborn error of fatty acid transport caused by a defect in the transporter responsible for moving carnitine across the plasma membrane. Carnitine is an amino acid derivative important for fatty acid metabolism. When carnitine cannot be transported into tissues, fatty acid oxidation is impaired, leading to a variety of symptoms such as chronic muscle weakness, cardiomyopathy, hypoglycemia, and liver dysfunction. The specific transporter involved with SPCD is OCTN2, coded for by the SLC22A5 gene located on chromosome 5. SPCD is inherited in an autosomal recessive manner, with mutated alleles coming from both parents.
Acute episodes due to SPCD are often preceded by metabolic stress such as extended fasting, infections or vomiting. Cardiomyopathy can develop in the absence of an acute episode, and can result in death. SPCD leads to increased carnitine excretion in the urine and low levels in plasma. In most locations with expanded newborn screening, SPCD can be identified and treated shortly after birth. Treatment with high doses of carnitine supplementation is effective, but needs to be rigorously maintained for life.
Signs and symptoms
The presentation of patients with SPCD can be highly variable, from asymptomatic to lethal cardiac manifestations. Early cases were reported with liver dysfunction, muscular findings (weakness and underdevelopment), hypoketotic hypoglycemia, cardiomegaly, cardiomyopathy, and marked carnitine deficiency in plasma and tissues, combined with increased excretion in urine. Patients who present clinically with SPCD fall into two categories: a metabolic presentation with hypoglycemia and a cardiac presentation characterized by cardiomyopathy. Muscle weakness can be found with either presentation.

In countries with expanded newborn screening, SPCD can be identified shortly after birth. Affected infants show low levels of free carnitine and all other acylcarnitine species by tandem mass spectrometry. Not all infants with low free carnitine are affected with SPCD. Some may have carnitine deficiency secondary to another metabolic condition or due to maternal carnitine deficiency. Proper follow-up of newborn screening results for low free carnitine includes studies of the mother to determine whether her carnitine deficiency is due to SPCD or secondary to a metabolic disease or diet. Maternal cases of SPCD have been identified at a higher than expected rate, often in women who are asymptomatic. Some mothers have also been identified through newborn screening with cardiomyopathy that had not been previously diagnosed. The identification and treatment of these asymptomatic individuals is still developing, as it is not clear whether they require the same levels of intervention as patients identified with SPCD early in life based on clinical presentation.
Genetics
SPCD is an autosomal recessive condition, meaning a mutated allele must be inherited from each parent for an individual to be affected. The gene responsible for the OCTN2 carnitine transporter is SLC22A5, located at 5q31.1-32. SLC22A5 is regulated by peroxisome proliferator-activated receptor alpha. The transporter, OCTN2, is located in the apical membrane of the renal tubular cells, where it plays a role in tubular reabsorption.

The defective OCTN2 is unable to recapture carnitine prior to its excretion in urine, leading to the characteristic biochemical findings of massively increased urine carnitine levels and significantly decreased plasma carnitine levels. Decreased levels of plasma carnitine inhibit fatty acid oxidation during times of excessive energy demand. Carnitine is needed to transport long-chain fatty acids into the mitochondria, where they can be broken down to produce acetyl-CoA. Individuals with SPCD cannot produce ketone bodies for energy due to the interruption of fatty acid oxidation. Although SPCD is an autosomal recessive condition, heterozygotes have been shown to be at an increased risk for developing benign cardiomyopathy compared to wild-type individuals.
Diagnosis
The first suspicion of SPCD in a patient with a non-specific presentation is an extremely low plasma carnitine level. When combined with an increased concentration of carnitine in urine, the suspicion of SPCD can often be confirmed by either molecular testing or functional studies assessing the uptake of carnitine in cultured fibroblasts.
Treatment
Identification of patients presymptomatically via newborn screening has allowed early intervention and treatment. Treatment for SPCD involves high-dose carnitine supplementation, which must be continued for life. Individuals who are identified and treated at birth have very good outcomes, including the prevention of cardiomyopathy. Mothers who are identified after a positive newborn screen but are otherwise asymptomatic are typically offered carnitine supplementation as well. The long-term outcomes for asymptomatic adults with SPCD are not known, but the discovery of mothers with undiagnosed cardiomyopathy and SPCD has raised the possibility that identification and treatment may prevent adult-onset manifestations.
Incidence
Worldwide, SPCD is most common in the Faroe Islands, where at least one out of every 1,000 inhabitants has the disorder, according to the Faroese Ministry of Health. Scientists believe that around 10% of the Faroese population are carriers of variants which cause SPCD. These people are not ill, but may have a lower amount of carnitine in their blood than non-carriers. The first Faroese patient was diagnosed with SPCD in 1995, and since then several young people and children in the Faroe Islands have died of cardiac arrest because of SPCD.

The addition of SPCD to newborn screening panels has offered insight into the incidence of the disorder around the world. In Taiwan, the incidence of SPCD in newborns was estimated to be approximately 1:67,000, while maternal cases were identified at a higher frequency of approximately 1:33,000. The increased incidence of SPCD in mothers compared to newborns is not completely understood. Estimates of SPCD in Japan have shown a similar incidence of 1:40,000.
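Because SPCD is autosomal recessive, reported incidence figures can be related to carrier frequency with a simple Hardy–Weinberg calculation (an illustrative approximation that ignores founder effects, multiple variants, and incomplete ascertainment). Taking the Japanese incidence of about 1 in 40,000,

\[ q = \sqrt{\tfrac{1}{40\,000}} = 0.005, \qquad 2pq = 2 \times 0.995 \times 0.005 \approx 0.01 \]

so roughly 1% of that population would be expected to carry one mutated allele. The same arithmetic applied to the Faroese incidence of about 1 in 1,000 gives q ≈ 0.032 and a predicted carrier frequency near 6%; the reported figure of about 10% exceeds this simple estimate, which may reflect the islands' founder history and incomplete ascertainment of cases.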
History
Carnitine deficiency has been extensively studied, although most commonly as a secondary finding to other metabolic conditions. The first case of SPCD was reported in the 1980s, in a child with fasting hypoketotic hypoglycemia that resolved after treatment with carnitine supplementation. Later cases were reported with cardiomyopathy and muscle weakness. Newborn screening expanded the potential phenotypes associated with SPCD, to include otherwise asymptomatic adults.
References
External links
GeneReviews/NCBI/NIH/UW entry on Systemic Primary Carnitine Deficiency
Precocious puberty | In medicine, precocious puberty is puberty occurring at an unusually early age. In most cases, the process is normal in every aspect except the unusually early age and simply represents a variation of normal development. In a minority of children with precocious puberty, the early development is triggered by a disease such as a tumor or injury of the brain. Even when there is no disease, unusually early puberty can have adverse effects on social behavior and psychological development, can reduce adult height potential, and may shift some lifelong health risks. Central precocious puberty can be treated by suppressing the pituitary hormones that induce sex steroid production. The opposite condition is delayed puberty.

The term is used with several slightly different meanings that are usually apparent from the context. In its broadest sense, and often simplified as early puberty, "precocious puberty" sometimes refers to any physical sex hormone effect, due to any cause, occurring earlier than the usual age, especially when it is being considered as a medical problem. Stricter definitions of "precocity" may refer only to central puberty starting before a statistically specified age based on percentile in the population (e.g., 2.5 standard deviations below the population mean), on expert recommendations of ages at which there is more than a negligible chance of discovering an abnormal cause, or based on opinion as to the age at which early puberty may have adverse effects. A common definition for medical purposes is onset before 8 years in girls or 9 years in boys.
Causes
Early pubic hair, breast, or genital development may result from natural early maturation or from several other conditions.
Central
If the cause can be traced to the hypothalamus or pituitary, the cause is considered central. Other names for this type are complete or true precocious puberty.

Causes of central precocious puberty can include:
hypothalamic hamartoma produces pulsatile gonadotropin-releasing hormone (GnRH)
Langerhans cell histiocytosis
McCune–Albright syndrome

Central precocious puberty can also be caused by brain tumors, infection (most commonly tuberculous meningitis, especially in developing countries), trauma, hydrocephalus, and Angelman syndrome. Precocious puberty is associated with advancement in bone age, which leads to early fusion of the epiphyses, resulting in reduced final height and short stature.

Adrenocortical oncocytomas are rare and mostly benign, nonfunctioning tumors; only three cases of functioning adrenocortical oncocytoma had been reported as of 2013. Children with adrenocortical oncocytomas present with "premature pubarche, clitoromegaly, and increased serum dehydroepiandrosterone sulfate and testosterone", some of the presentations associated with precocious puberty.

Precocious puberty in girls begins before the age of 8. The youngest mother on record is Lina Medina, who gave birth at the age of 5 years, 7 months and 17 days or, according to another report, at 6 years and 5 months.

"Central precocious puberty (CPP) was reported in some patients with suprasellar arachnoid cysts (SAC), and SCFE (slipped capital femoral epiphysis) occurs in patients with CPP because of rapid growth and changes of growth hormone secretion."

If no cause can be identified, the condition is considered idiopathic or constitutional.
Peripheral
Secondary sexual development induced by sex steroids from other abnormal sources is referred to as peripheral precocious puberty or precocious pseudopuberty. It typically presents as a severe form of disease in children. Symptoms usually occur as sequelae of adrenal hyperplasia (due to 21-hydroxylase deficiency or, less commonly, 11-beta-hydroxylase deficiency) and include, but are not limited to, hypertension or hypotension, electrolyte abnormalities, ambiguous genitalia, and signs of virilization in females. Blood tests typically reveal high levels of androgens with low levels of cortisol.
Causes can include:
Endogenous sources
Gonadal tumors (such as arrhenoblastoma)
Adrenal tumors
Germ cell tumor
Congenital adrenal hyperplasia
McCune–Albright syndrome
Silver–Russell syndrome
Familial male-limited precocious puberty (testotoxicosis)
Exogenous hormones
Environmental exogenous hormones
As treatment for another condition
Isosexual and heterosexual
Generally, patients with precocious puberty develop phenotypically appropriate secondary sexual characteristics. This is called isosexual precocity.

In some cases, a patient may develop characteristics of the opposite sex. For example, a male may develop breasts and other feminine characteristics, while a female may develop a deepened voice and facial hair. This is called heterosexual or contrasexual precocity. It is very rare in comparison to isosexual precocity and is usually the result of unusual circumstances. As an example, children with a very rare genetic condition called aromatase excess syndrome – in which exceptionally high circulating levels of estrogen are present – usually develop precocious puberty. Males and females are hyper-feminized by the syndrome. The "opposite" case would be the hyper-masculinisation of both male and female patients with congenital adrenal hyperplasia (CAH) due to 21-hydroxylase deficiency, in which there is an excess of androgens. Thus, in the aromatase excess syndrome the precocious puberty is isosexual in females and heterosexual in males, whilst in CAH it is isosexual in males and heterosexual in females.
Research
Although the causes of early puberty are still somewhat unclear, girls who have a high-fat diet and are not physically active or are obese are more likely to mature physically earlier. "Obese girls, defined as at least 10 kilograms (22 pounds) overweight, had an 80 percent chance of developing breasts before their ninth birthday and starting menstruation before age 12 – the western average for menstruation is about 12.7 years." In addition to diet and exercise habits, exposure to chemicals that mimic estrogen (known as xenoestrogens) is another possible cause of early puberty in girls. Bisphenol A, a xenoestrogen found in hard plastics, has been shown to affect sexual development. "Factors other than obesity, however, perhaps genetic and/or environmental ones, are needed to explain the higher prevalence of early puberty in black versus white girls." While more girls are increasingly entering puberty at younger ages, new research indicates that some boys are actually starting later (delayed puberty). "Increasing rates of obese and overweight children in the United States may be contributing to a later onset of puberty in boys, say researchers at the University of Michigan Health System."

High levels of beta-hCG in serum and cerebrospinal fluid observed in a 9-year-old boy suggested a pineal gland tumor, called a chorionic gonadotropin secreting pineal tumor. Radiotherapy and chemotherapy reduced the tumor, and beta-hCG levels normalized.

In a study using neonatal melatonin in rats, results suggested that elevated melatonin could be responsible for some cases of early puberty.

Familial cases of idiopathic central precocious puberty (ICPP) have been reported, leading researchers to believe there are specific genetic modulators of ICPP. Mutations in genes such as LIN28, and LEP and LEPR, which encode leptin and the leptin receptor, have been associated with precocious puberty. The association between LIN28 and puberty timing was validated experimentally in vivo, when it was found that mice with ectopic over-expression of LIN28 show an extended period of pre-pubertal growth and a significant delay in puberty onset.

Mutations in the kisspeptin gene (KISS1) and its receptor, KISS1R (also known as GPR54), which are involved in GnRH secretion and puberty onset, are also thought to be a cause of ICPP. However, this is still a controversial area of research, and some investigators have found no association of mutations in the LIN28 and KISS1/KISS1R genes with the common causes underlying ICPP.

The gene MKRN3, a maternally imprinted gene, was first cloned by Jong et al. in 1999. MKRN3 was originally named zinc finger protein 127. It is located on the long arm of human chromosome 15 in the Prader–Willi syndrome critical region 2, and has since been identified as a cause of premature sexual development, or CPP. The identification of mutations in MKRN3 leading to sporadic cases of CPP has been a significant contribution to better understanding the mechanism of puberty. MKRN3 appears to act as a "brake" on the central hypothalamic–pituitary axis; thus, loss-of-function mutations of the protein allow early activation of the GnRH pathway and cause phenotypic CPP. Patients with an MKRN3 mutation all display the classic signs of CPP, including early breast and testes development, increased bone aging, and elevated hormone levels of GnRH and LH.
Diagnosis
Studies indicate that breast development in girls and the appearance of pubic hair in both girls and boys are starting earlier than in previous generations. As a result, "early puberty" in children as young as 9 and 10 is no longer considered abnormal, particularly in girls. Although it is not considered abnormal, it may be upsetting to parents and can be harmful to children who mature physically at a time when they are immature mentally.

No age reliably separates normal from abnormal processes in children, but the following age thresholds for evaluation are thought to minimize the risk of missing a significant medical problem:
Breast development in boys before appearance of pubic hair or testicular enlargement
Pubic hair or genital enlargement (gonadarche) in boys with onset before 9 years
Pubic hair (pubarche) before 8 or breast development (thelarche) in girls with onset before 7 years
Menstruation (menarche) in girls before 10 years

Medical evaluation is sometimes necessary to distinguish the few children with serious conditions from the majority who have entered puberty early but are still medically normal (the thresholds above are sketched as a simple rule in the example after the list below). Early sexual development warrants evaluation because it may:
induce early bone maturation and reduce eventual adult height
indicate the presence of a tumour or other serious problem
cause the child, particularly a girl, to become an object of adult sexual interest.
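The age thresholds listed above can be read as a simple screening rule. The following minimal sketch in Python is illustrative only, not clinical guidance; the function name and finding labels are hypothetical:

def warrants_evaluation(sex, age_years, findings):
    """Return True if pubertal findings fall below the commonly cited
    age thresholds for medical evaluation (illustrative only)."""
    findings = set(findings)
    if sex == "male":
        # Breast development preceding pubic hair or testicular enlargement
        if "breast_development" in findings and not findings & {"pubic_hair", "testicular_enlargement"}:
            return True
        # Pubic hair or genital enlargement with onset before 9 years
        if age_years < 9 and findings & {"pubic_hair", "genital_enlargement"}:
            return True
    elif sex == "female":
        if age_years < 8 and "pubic_hair" in findings:  # pubarche before 8
            return True
        if age_years < 7 and "breast_development" in findings:  # thelarche before 7
            return True
        if age_years < 10 and "menarche" in findings:  # menarche before 10
            return True
    return False

# Example: pubic hair in a 7-year-old girl meets the evaluation threshold
print(warrants_evaluation("female", 7, ["pubic_hair"]))  # True

Such a rule only flags whether an evaluation threshold is met; as the surrounding text notes, no single age cleanly separates normal from abnormal development.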
Treatment
One possible treatment is with anastrozole. GnRH agonists, including histrelin, triptorelin, or leuprorelin, are other possible treatments. Non-continuous use of GnRH agonists stimulates the pituitary gland to release follicle stimulating hormone (FSH) and luteinizing hormone (LH).
Prognosis
Early puberty is posited to put girls at higher risk of sexual abuse; however, a causal relationship is, as yet, inconclusive. Early puberty also puts girls at a higher risk for teasing or bullying, mental health disorders, and short stature as adults. Girls as young as 8 are increasingly starting to menstruate, develop breasts, and grow pubic and underarm hair; these "biological milestones" typically occurred only at 13 or older in the past. African-American girls are especially prone to early puberty.

Though boys face fewer problems from early puberty than girls do, early puberty is not always positive for boys. Early sexual maturation in boys can be accompanied by increased aggressiveness due to the surge of pubertal hormones. Because they appear older than their peers, pubescent boys may face increased social pressure to conform to adult norms; society may view them as more emotionally advanced, although their cognitive and social development may lag behind their physical development. Studies have shown that early-maturing boys are more likely to be sexually active and are more likely to participate in risky behaviors.
History
Pubertas praecox is the Latin term used by physicians from the 1790s onward. Various hypotheses and inferences on pubertal (menstrual, procreative) timing are attested since ancient times, which, well into early modernity, were explained on the basis of temperamental, humoral and Jungian "complexional" causes, or general or local "plethora" (blood excess). Endocrinological (hormonal) theories and discoveries are a twentieth-century development.
See also
List of youngest birth mothers
List of youngest birth fathers
Premature menopause
Premature ovarian failure
References
== External links ==
Cervical effacement | Cervical effacement or cervical ripening refers to a thinning of the cervix.
Background
Cervical effacement is a component of the Bishop score and can be expressed as a percentage.

Prior to effacement, the cervix is like a long bottleneck, usually about four centimeters in length. Throughout pregnancy, the cervix is tightly closed and protected by a plug of mucus. When the cervix effaces, the mucus plug is loosened and passes out of the vagina. The mucus may be tinged with blood, and the passage of the mucus plug is called bloody show (or simply "show"). As effacement takes place, the cervix shortens, pulling up into the uterus and becoming part of the lower uterine wall. Effacement may be measured in percentages, from zero percent (not effaced at all) to 100 percent, which indicates a paper-thin cervix.
Results from a systematic review of the literature found no differences in cesarean delivery or neonatal outcomes between inpatient and outpatient cervical ripening in women with low-risk pregnancies.

Effacement is accompanied by cervical dilation.
== References ==
Cervicitis | Cervicitis is inflammation of the uterine cervix. Cervicitis in women has many features in common with urethritis in men and many cases are caused by sexually transmitted infections. Non-infectious causes of cervicitis can include intrauterine devices, contraceptive diaphragms, and allergic reactions to spermicides or latex condoms.
Cervicitis affects over half of all women during their adult life. It may ascend and cause endometritis and pelvic inflammatory disease (PID). Cervicitis may be acute or chronic.
Symptoms and signs
Cervicitis may have no symptoms. If symptoms do manifest, they may include:
Abnormal vaginal bleeding after intercourse between periods
Unusual gray, white, or yellow vaginal discharge
Painful sexual intercourse
Pain in the vagina
Pressure or heaviness in the pelvis
Frequent, painful urination
Causes
Cervicitis can be caused by any of a number of infections, of which the most common are chlamydia and gonorrhea, with chlamydia accounting for approximately 40% of cases. Other causes include Trichomonas vaginalis, herpes simplex virus, and Mycoplasma genitalium.

While sexually transmitted infections (STIs) are the most common cause of cervicitis, there are other potential causes as well. These include vaginitis caused by bacterial vaginosis or Trichomonas vaginalis; a device inserted into the pelvic area (i.e. a cervical cap, IUD, pessary, etc.); an allergy to spermicides or latex in condoms; or exposure to a chemical, for example while douching. Inflammation can also be idiopathic, where no specific cause is found. While IUDs do not cause cervicitis, active cervicitis is a contraindication to placing an IUD. If a person with an IUD develops cervicitis, it usually does not need to be removed, if the person wants to continue using it.

There are also certain behaviors that can place individuals at a higher risk for contracting cervicitis. High-risk sexual behavior, a history of STIs, many sexual partners, sex at an early age, and sexual partners who engage in high-risk sexual behavior or have had an STI can increase the likelihood of contracting cervicitis.
Diagnosis
To diagnose cervicitis, a clinician will perform a pelvic exam. This exam includes a speculum exam with visual inspection of the cervix for abnormal discharge, which is usually purulent or bleeding from the cervix with little provocation. Swabs can be used to collect a sample of this discharge for inspection under a microscope and/or lab testing for gonorrhea, chlamydia, and Trichomonas vaginalis. A bimanual exam in which the clinician palpates the cervix to see if there is any associated pain should be done to assess for pelvic inflammatory disease.
Prevention
The risk of contracting cervicitis from STIs can be reduced by using condoms during every sexual encounter. Condoms are effective against the spread of STIs like chlamydia and gonorrhea that cause cervicitis. Also, being in a long-term monogamous relationship with an uninfected partner can lower the risk of an STI.Ensuring that foreign objects like tampons are properly placed in the vagina and following instructions how long to leave it inside, how often to change it, and/or how often to clean it can reduce the risk of cervicitis. In addition, avoiding potential irritants like douches and deodorant tampons can prevent cervicitis.
Treatment
Non-infectious causes of cervicitis are primarily treated by eliminating or limiting exposure to the irritant. Antibiotics, usually azithromycin or doxycycline, or antiviral medications are used to treat infectious causes. Women at increased risk of sexually transmitted infections (i.e., less than 25 years of age and a new sexual partner, a sexual partner with other partners, or a sexual partner with a known sexually transmitted infection) should be treated presumptively for chlamydia and possibly gonorrhea, particularly if follow-up care cannot be ensured or diagnostic testing is not possible. For lower-risk women, deferring treatment until test results are available is an option.

To reduce the risk of reinfection, women should abstain from sexual intercourse for seven days after treatment is started. Also, sexual partners (within the last sixty days) of anyone with infectious cervicitis should be referred for evaluation or treated through expedited partner therapy (EPT). EPT is the process by which a clinician treats the sexual partner of a patient diagnosed with a sexually transmitted infection without first meeting or examining the partner. Sexual partners should also avoid sexual intercourse until they and their partners are adequately treated.

Untreated cervicitis is also associated with an increased susceptibility to HIV infection. Women with infectious cervicitis should be tested for other sexually transmitted infections, including HIV and syphilis.

Cervicitis should be followed up. Women with a specific diagnosis of chlamydia, gonorrhea, or trichomonas should see a clinician three months after treatment for repeat testing, because they are at higher risk of reinfection regardless of whether their sex partners were treated. Treatment in pregnant women is the same as for those who are not pregnant.
References
== External links ==
Chancroid | Chancroid ( SHANG-kroyd) is a bacterial sexually transmitted infection characterized by painful sores on the genitalia. Chancroid is known to spread from one individual to another solely through sexual contact, although there have been reports of accidental infection through another route, via the hand. While uncommon in the western world, it is the most common cause of genital ulceration worldwide.
Signs and symptoms
Signs and symptoms are local only; no systemic manifestations are present.
The ulcer characteristically:
Ranges in size dramatically from 3 to 50 mm (1/8 inch to 2 inches) across
Is painful
Has sharply defined, undermined borders
Has irregular or ragged borders, described as saucer-shaped.
Has a base that is covered with a gray or yellowish-gray material
Has a base that bleeds easily if traumatized or scraped
Painful swollen lymph nodes occur in 30–60% of patients.
Dysuria (pain with urination) and dyspareunia (pain with intercourse) in females

About half of infected men have only a single ulcer. Women frequently have four or more ulcers, with fewer symptoms. The ulcers are typically confined to the genital region.
The initial ulcer may be mistaken as a "hard" chancre, the typical sore of primary syphilis, as opposed to the "soft chancre" of chancroid.

Approximately one-third of infected individuals will develop enlargements of the inguinal lymph nodes, the nodes located in the fold between the leg and the lower abdomen. Half of those who develop swelling of the inguinal lymph nodes will progress to a point where the nodes rupture through the skin, producing draining abscesses. The swollen lymph nodes and abscesses are often referred to as buboes.
Complications
Extensive lymph node inflammation may develop.
Large inguinal abscesses may develop and rupture to form a draining sinus or giant ulcer.
Superinfection by Fusarium and Bacteroides species may occur; these infections require debridement and may result in disfiguring scars.
Phimosis can develop in long-standing lesions through scarring and thickening of the foreskin, which may subsequently require circumcision.
Sites for chancroid lesions
Males
Internal and external surface of prepuce.
Coronal sulcus
Frenulum
Shaft of penis
Prepucial orifice
Urethral meatus
Glans penis
Perineum area
Females
The labia majora are the most common site. "Kissing ulcers" may develop. These are ulcers that occur on opposing surfaces of the labia.
Labia minora
Fourchette
Vestibule
Clitoris
Perineal area
Inner thighs
Causes
Chancroid is a bacterial infection caused by the fastidious Gram-negative streptobacillus Haemophilus ducreyi. This pathogen is highly infectious. It is a disease found primarily in developing countries, most prevalent in low socioeconomic groups and associated with commercial sex workers.

Chancroid caused by H. ducreyi has infrequently been associated with cases of genital ulcer disease in the US, but has been isolated in up to 10% of genital ulcers diagnosed at STD clinics in Memphis and Chicago.

Infection levels are very low in the Western world, typically around one case per two million of the population (Canada, France, Australia, UK, and US). Most individuals diagnosed with chancroid have visited countries or areas where the disease is known to occur frequently, although outbreaks have been observed in association with crack cocaine use and prostitution.

Chancroid is a risk factor for contracting HIV, due to their ecological association or shared risk of exposure, and biologically facilitated transmission of one infection by the other. Approximately 10% of people with chancroid will have a co-infection with syphilis and/or HIV.
Pathogenesis
H. ducreyi enters the skin through microabrasions incurred during sexual intercourse. The incubation period of H. ducreyi infection is 10 to 14 days, after which the disease progresses. A local tissue reaction leads to development of an erythematous papule, which progresses to a pustule in 4–7 days. It then undergoes central necrosis to ulcerate.
Diagnosis
Variants
Several clinical variants have been described.
Laboratory findings
From bubo pus or ulcer secretions, H. ducreyi can be identified using special culture media; however, sensitivity is less than 80%. PCR-based identification of the organisms is available, but none in the United States is FDA-cleared. Simple, rapid, sensitive, and inexpensive antigen detection methods for H. ducreyi identification are also popular. Serologic detection of H. ducreyi uses outer membrane protein and lipooligosaccharide. Most of the time, the diagnosis is presumptive, based on symptomatology, which in this case includes multiple painful genital ulcers.
Differential diagnosis
Despite many distinguishing features, the clinical spectrums of following diseases may overlap with chancroid:
Primary syphilis
Genital herpes

A practical clinical approach to this STI as genital ulcer disease is to rule out the top differential diagnoses of syphilis and herpes, and to consider empirical treatment for chancroid, as testing is not commonly done for the latter.
Comparison with syphilis
There are many differences and similarities between the conditions syphilitic chancre and chancroid:
Similarities
Both originate as pustules at the site of inoculation, and progress to ulcerated lesions
Both lesions are typically 1–2 cm in diameter
Both lesions are caused by sexually transmissible organisms
Both lesions typically appear on the genitals of infected individuals
Both lesions can be present at multiple sites and with multiple lesions
Differences
Chancre is a lesion typical of infection with the bacterium that causes syphilis, Treponema pallidum
Chancroid is a lesion typical of infection with the bacterium Haemophilus ducreyi
Chancres are typically painless, whereas chancroid lesions are typically painful
Chancres are typically non-exudative, whereas chancroid lesions typically have a grey or yellow purulent exudate
Chancres have a hard (indurated) edge, whereas chancroid lesions have a soft edge
Chancres heal spontaneously within three to six weeks, even in the absence of treatment
Chancres can occur in the pharynx as well as on the genitals
Prevention
Chancroid spreads in populations with high sexual activity, such as prostitutes. Use of condoms, prophylaxis with azithromycin, syndromic management of genital ulcers, and treatment of patients with reactive syphilis serology are among the strategies successfully tried in Thailand. Also, treatment of sexual partners is advocated, whether or not they develop symptoms, as long as there was unprotected sexual intercourse with the patient within 10 days of symptom onset.
Treatment
For the initial stages of the lesion, cleaning with a soapy solution is recommended, and a sitz bath may be beneficial. Fluctuant nodules may require aspiration. Treatment may include more than one prescribed medication.
Antibiotics
Macrolides are often used to treat chancroid. The CDC recommendation is either a single oral dose (1 gram) of azithromycin, a single IM dose (250 mg) of ceftriaxone, oral erythromycin (500 mg) three times a day for seven days, or oral ciprofloxacin (500 mg) twice a day for three days. Due to a paucity of reliable empirical evidence, it is not clear whether macrolides are actually more effective and/or better tolerated than other antibiotics when treating chancroid. Data are limited, but there have been reports of ciprofloxacin and erythromycin resistance.

Aminoglycosides such as gentamicin, streptomycin, and kanamycin have been used to successfully treat chancroid; however, aminoglycoside-resistant strains of H. ducreyi have been observed in both laboratory and clinical settings. Treatment with aminoglycosides should be considered only as a supplement to a primary treatment.

Pregnant and lactating women, or those below 18 years of age regardless of gender, should not use ciprofloxacin as treatment for chancroid. Treatment failure is possible with HIV co-infection, and extended therapy is sometimes required.
Prognosis
Prognosis is excellent with proper treatment. Treating the sexual contacts of the affected individual helps break the cycle of infection.
Follow-up
Within 3–7 days after commencing treatment, patients should be re-examined to determine whether the treatment was successful. Within 3 days, symptoms of ulcers should improve. Healing time of the ulcer depends mainly on size and can take more than two weeks for larger ulcers. In uncircumcised men, healing is slower if the ulcer is under the foreskin. Sometimes, needle aspiration or incision and drainage are necessary.
Epidemiology
Although the prevalence of chancroid has decreased in the United States and worldwide, sporadic outbreaks can still occur in regions of the Caribbean and Africa. Like other sexually transmitted diseases, having chancroid increases the risk of transmitting and acquiring HIV.
History
Chancroid has been known to humans since the time of the ancient Greeks, and several important events mark the historical timeline of the disease.
References
== External links ==
Cholangiocarcinoma | Cholangiocarcinoma, also known as bile duct cancer, is a type of cancer that forms in the bile ducts. Symptoms of cholangiocarcinoma may include abdominal pain, yellowish skin, weight loss, generalized itching, and fever. Light colored stool or dark urine may also occur. Other biliary tract cancers include gallbladder cancer and cancer of the ampulla of Vater.

Risk factors for cholangiocarcinoma include primary sclerosing cholangitis (an inflammatory disease of the bile ducts), ulcerative colitis, cirrhosis, hepatitis C, hepatitis B, infection with certain liver flukes, and some congenital liver malformations. However, most people have no identifiable risk factors. The diagnosis is suspected based on a combination of blood tests, medical imaging, endoscopy, and sometimes surgical exploration. The disease is confirmed by examination of cells from the tumor under a microscope. It is typically an adenocarcinoma (a cancer that forms glands or secretes mucin).

Cholangiocarcinoma is typically incurable at diagnosis, which is why early detection is ideal. In these cases, palliative treatments may include surgical resection, chemotherapy, radiation therapy, and stenting procedures. In about a third of cases involving the common bile duct, and less commonly with other locations, the tumor can be completely removed by surgery, offering a chance of a cure. Even when surgical removal is successful, chemotherapy and radiation therapy are generally recommended. In certain cases, surgery may include a liver transplantation. Even when surgery is successful, the 5-year survival is typically less than 50%.

Cholangiocarcinoma is rare in the Western world, with estimates of it occurring in 0.5–2 people per 100,000 per year. Rates are higher in Southeast Asia, where liver flukes are common. Rates in parts of Thailand are 60 per 100,000 per year. It typically occurs in people in their 70s; however, in those with primary sclerosing cholangitis it often occurs in the 40s. Rates of cholangiocarcinoma within the liver in the Western world have increased.
Signs and symptoms
The most common physical indications of cholangiocarcinoma are abnormal liver function tests, jaundice (yellowing of the eyes and skin occurring when bile ducts are blocked by tumor), abdominal pain (30–50%), generalized itching (66%), weight loss (30–50%), fever (up to 20%), and changes in the color of stool or urine. To some extent, the symptoms depend upon the location of the tumor: people with cholangiocarcinoma in the extrahepatic bile ducts (outside the liver) are more likely to have jaundice, while those with tumors of the bile ducts within the liver more often have pain without jaundice.

Blood tests of liver function in people with cholangiocarcinoma often reveal a so-called "obstructive picture", with elevated bilirubin, alkaline phosphatase, and gamma glutamyl transferase levels, and relatively normal transaminase levels. Such laboratory findings suggest obstruction of the bile ducts, rather than inflammation or infection of the liver parenchyma, as the primary cause of the jaundice.
Risk factors
Although most people present without any known risk factors evident, a number of risk factors for the development of cholangiocarcinoma have been described. In the Western world, the most common of these is primary sclerosing cholangitis (PSC), an inflammatory disease of the bile ducts which is closely associated with ulcerative colitis (UC). Epidemiologic studies have suggested that the lifetime risk of developing cholangiocarcinoma for a person with PSC is on the order of 10–15%, although autopsy series have found rates as high as 30% in this population. For inflammatory bowel disease patients with altered DNA repair functions, the progression from PSC to cholangiocarcinoma may be a consequence of DNA damage resulting from biliary inflammation and bile acids.

Certain parasitic liver diseases may be risk factors as well. Colonization with the liver flukes Opisthorchis viverrini (found in Thailand, Laos PDR, and Vietnam) or Clonorchis sinensis (found in China, Taiwan, eastern Russia, Korea, and Vietnam) has been associated with the development of cholangiocarcinoma. Control programs (Integrated Opisthorchiasis Control Program) aimed at discouraging the consumption of raw and undercooked food have been successful at reducing the incidence of cholangiocarcinoma in some countries. People with chronic liver disease, whether in the form of viral hepatitis (e.g. hepatitis B or hepatitis C), alcoholic liver disease, or cirrhosis of the liver due to other causes, are at significantly increased risk of cholangiocarcinoma. HIV infection was also identified in one study as a potential risk factor for cholangiocarcinoma, although it was unclear whether HIV itself or other correlated and confounding factors (e.g. hepatitis C infection) were responsible for the association. Infection with the bacteria Helicobacter bilis and Helicobacter hepaticus can cause biliary cancer.

Congenital liver abnormalities, such as Caroli disease (a specific type of five recognized choledochal cysts), have been associated with an approximately 15% lifetime risk of developing cholangiocarcinoma. The rare inherited disorders Lynch syndrome II and biliary papillomatosis have also been found to be associated with cholangiocarcinoma. The presence of gallstones (cholelithiasis) is not clearly associated with cholangiocarcinoma. However, intrahepatic stones (called hepatolithiasis), which are rare in the West but common in parts of Asia, have been strongly associated with cholangiocarcinoma. Exposure to Thorotrast, a form of thorium dioxide which was used as a radiologic contrast medium, has been linked to the development of cholangiocarcinoma as late as 30–40 years after exposure; Thorotrast was banned in the United States in the 1950s due to its carcinogenicity.
Pathophysiology
Cholangiocarcinoma can affect any area of the bile ducts, either within or outside the liver. Tumors occurring in the bile ducts within the liver are referred to as intrahepatic, those occurring in the ducts outside the liver are extrahepatic, and tumors occurring at the site where the bile ducts exit the liver may be referred to as perihilar. A cholangiocarcinoma occurring at the junction where the left and right hepatic ducts meet to form the common hepatic duct may be referred to eponymously as a Klatskin tumor.

Although cholangiocarcinoma is known to have the histological and molecular features of an adenocarcinoma of epithelial cells lining the biliary tract, the actual cell of origin is unknown. Recent evidence has suggested that the initial transformed cell that generates the primary tumor may arise from a pluripotent hepatic stem cell. Cholangiocarcinoma is thought to develop through a series of stages – from early hyperplasia and metaplasia, through dysplasia, to the development of frank carcinoma – in a process similar to that seen in the development of colon cancer. Chronic inflammation and obstruction of the bile ducts, and the resulting impaired bile flow, are thought to play a role in this progression.

Histologically, cholangiocarcinomas may vary from undifferentiated to well-differentiated. They are often surrounded by a brisk fibrotic or desmoplastic tissue response; in the presence of extensive fibrosis, it can be difficult to distinguish well-differentiated cholangiocarcinoma from normal reactive epithelium. There is no entirely specific immunohistochemical stain that can distinguish malignant from benign biliary ductal tissue, although staining for cytokeratins, carcinoembryonic antigen, and mucins may aid in diagnosis. Most tumors (>90%) are adenocarcinomas.
Diagnosis
Blood tests
There are no specific blood tests that can diagnose cholangiocarcinoma by themselves. Serum levels of carcinoembryonic antigen (CEA) and CA19-9 are often elevated, but are not sensitive or specific enough to be used as a general screening tool. However, they may be useful in conjunction with imaging methods in supporting a suspected diagnosis of cholangiocarcinoma.
Abdominal imaging
Ultrasound of the liver and biliary tree is often used as the initial imaging modality in people with suspected obstructive jaundice. Ultrasound can identify obstruction and ductal dilatation and, in some cases, may be sufficient to diagnose cholangiocarcinoma. Computed tomography (CT) scanning may also play an important role in the diagnosis of cholangiocarcinoma.
Imaging of the biliary tree
While abdominal imaging can be useful in the diagnosis of cholangiocarcinoma, direct imaging of the bile ducts is often necessary. Endoscopic retrograde cholangiopancreatography (ERCP), an endoscopic procedure performed by a gastroenterologist or specially trained surgeon, has been widely used for this purpose. Although ERCP is an invasive procedure with attendant risks, its advantages include the ability to obtain biopsies and to place stents or perform other interventions to relieve biliary obstruction. Endoscopic ultrasound can also be performed at the time of ERCP and may increase the accuracy of the biopsy and yield information on lymph node invasion and operability. As an alternative to ERCP, percutaneous transhepatic cholangiography (PTC) may be utilized. Magnetic resonance cholangiopancreatography (MRCP) is a non-invasive alternative to ERCP. Some authors have suggested that MRCP should supplant ERCP in the diagnosis of biliary cancers, as it may more accurately define the tumor and avoids the risks of ERCP.
Surgery
Surgical exploration may be necessary to obtain a suitable biopsy and to accurately stage a person with cholangiocarcinoma. Laparoscopy can be used for staging purposes and may avoid the need for a more invasive surgical procedure, such as laparotomy, in some people.
Pathology
Histologically, cholangiocarcinomas are classically well to moderately differentiated adenocarcinomas. Immunohistochemistry is useful in the diagnosis and may be used to help differentiate a cholangiocarcinoma from hepatocellular carcinoma and metastasis of other gastrointestinal tumors. Cytological scrapings are often nondiagnostic, as these tumors typically have a desmoplastic stroma and, therefore, do not release diagnostic tumor cells with scrapings.
Staging
Although there are at least three staging systems for cholangiocarcinoma (e.g. those of Bismuth, Blumgart, and the American Joint Committee on Cancer), none have been shown to be useful in predicting survival. The most important staging issue is whether the tumor can be surgically removed, or whether it is too advanced for surgical treatment to be successful. Often, this determination can only be made at the time of surgery.

General guidelines for operability include:
Absence of lymph node or liver metastases
Absence of involvement of the portal vein
Absence of direct invasion of adjacent organs
Absence of widespread metastatic disease
Treatment
Cholangiocarcinoma is considered to be an incurable and rapidly lethal disease unless all the tumors can be fully resected (cut out surgically). Since the operability of the tumor can only be assessed during surgery in most cases, a majority of people undergo exploratory surgery unless there is already a clear indication that the tumor is inoperable. However, the Mayo Clinic has reported significant success treating early bile duct cancer with liver transplantation using a protocolized approach and strict selection criteria.

Adjuvant therapy followed by liver transplantation may have a role in treatment of certain unresectable cases. Locoregional therapies including transarterial chemoembolization (TACE), transarterial radioembolization (TARE), and ablation therapies have a role in intrahepatic variants of cholangiocarcinoma to provide palliation or potential cure in people who are not surgical candidates.
Adjuvant chemotherapy and radiation therapy
If the tumor can be removed surgically, people may receive adjuvant chemotherapy or radiation therapy after the operation to improve the chances of cure. If the tissue margins are negative (i.e. the tumor has been totally excised), adjuvant therapy is of uncertain benefit. Both positive and negative results have been reported with adjuvant radiation therapy in this setting, and no prospective randomized controlled trials have been conducted as of March 2007. Adjuvant chemotherapy appears to be ineffective in people with completely resected tumors. The role of combined chemoradiotherapy in this setting is unclear. However, if the tumor tissue margins are positive, indicating that the tumor was not completely removed via surgery, then adjuvant therapy with radiation and possibly chemotherapy is generally recommended based on the available data.
Treatment of advanced disease
The majority of cases of cholangiocarcinoma present as inoperable (unresectable) disease, in which case people are generally treated with palliative chemotherapy, with or without radiotherapy. Chemotherapy has been shown in a randomized controlled trial to improve quality of life and extend survival in people with inoperable cholangiocarcinoma. There is no single chemotherapy regimen that is universally used, and enrollment in clinical trials is often recommended when possible. Chemotherapy agents used to treat cholangiocarcinoma include 5-fluorouracil with leucovorin, gemcitabine as a single agent, or gemcitabine plus cisplatin, irinotecan, or capecitabine. A small pilot study suggested possible benefit from the tyrosine kinase inhibitor erlotinib in people with advanced cholangiocarcinoma.
Radiation therapy appears to prolong survival in people with resected extrahepatic cholangiocarcinoma, and the few reports of its use in unresectable cholangiocarcinoma appear to show improved survival, but numbers are small.

Infigratinib (Truseltiq) is a tyrosine kinase inhibitor of fibroblast growth factor receptor (FGFR) that was approved for medical use in the United States in May 2021. It is indicated for the treatment of people with previously treated locally advanced or metastatic cholangiocarcinoma harboring an FGFR2 fusion or rearrangement.

Pemigatinib (Pemazyre) is a kinase inhibitor of fibroblast growth factor receptor 2 (FGFR2) that was approved for medical use in the United States in April 2020. It is indicated for the treatment of adults with previously treated, unresectable locally advanced or metastatic cholangiocarcinoma with a fibroblast growth factor receptor 2 (FGFR2) fusion or other rearrangement as detected by an FDA-approved test.
Ivosidenib (Tibsovo) is a small molecule inhibitor of isocitrate dehydrogenase 1. The FDA approved ivosidenib in August 2021 for adults with previously treated, locally advanced or metastatic cholangiocarcinoma with an isocitrate dehydrogenase-1 (IDH1) mutation as detected by an FDA-approved test.

Durvalumab (Imfinzi) is an immune checkpoint inhibitor that blocks the PD-L1 protein on the surface of immune cells, thereby allowing the immune system to recognize and attack tumor cells. In Phase III clinical trials, durvalumab, in combination with standard-of-care chemotherapy, demonstrated a statistically significant and clinically meaningful improvement in overall survival and progression-free survival versus chemotherapy alone as a first-line treatment for patients with advanced biliary tract cancer.
Prognosis
Surgical resection offers the only potential chance of cure in cholangiocarcinoma. For non-resectable cases, the five-year survival rate is less than 5% overall, and effectively 0% when the disease is inoperable because of distal lymph node metastases. Overall mean duration of survival is less than 6 months in people with metastatic disease.

For surgical cases, the odds of cure vary depending on the tumor location and on whether the tumor can be completely, or only partially, removed. Distal cholangiocarcinomas (those arising from the common bile duct) are generally treated surgically with a Whipple procedure; long-term survival rates range from 15 to 25%, although one series reported a five-year survival of 54% for people with no involvement of the lymph nodes. Intrahepatic cholangiocarcinomas (those arising from the bile ducts within the liver) are usually treated with partial hepatectomy. Various series have reported survival estimates after surgery ranging from 22 to 66%; the outcome may depend on involvement of lymph nodes and completeness of the surgery. Perihilar cholangiocarcinomas (those occurring near where the bile ducts exit the liver) are least likely to be operable. When surgery is possible, they are generally treated with an aggressive approach often including removal of the gallbladder and potentially part of the liver. In patients with operable perihilar tumors, reported 5-year survival rates range from 20 to 50%.

The prognosis may be worse for people with primary sclerosing cholangitis who develop cholangiocarcinoma, likely because the cancer is not detected until it is advanced. Some evidence suggests that outcomes may be improving with more aggressive surgical approaches and adjuvant therapy.
Epidemiology
Cholangiocarcinoma is a relatively rare form of cancer; each year, approximately 2,000 to 3,000 new cases are diagnosed in the United States, translating into an annual incidence of 1–2 cases per 100,000 people. Autopsy series have reported a prevalence of 0.01% to 0.46%. There is a higher prevalence of cholangiocarcinoma in Asia, which has been attributed to endemic chronic parasitic infestation. The incidence of cholangiocarcinoma increases with age, and the disease is slightly more common in men than in women (possibly due to the higher rate of primary sclerosing cholangitis, a major risk factor, in men). The prevalence of cholangiocarcinoma in people with primary sclerosing cholangitis may be as high as 30%, based on autopsy studies.

Multiple studies have documented a steady increase in the incidence of intrahepatic cholangiocarcinoma; increases have been seen in North America, Europe, Asia, and Australia. The reasons for the increasing occurrence of cholangiocarcinoma are unclear; improved diagnostic methods may be partially responsible, but the prevalence of potential risk factors for cholangiocarcinoma, such as HIV infection, has also been increasing during this time frame.
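As a rough consistency check on the figures above, the incidence can be recomputed from the case counts. The short Python sketch below assumes a round US population of 300 million; that figure, and the variable names, are illustrative assumptions rather than values from the article.

```python
# Back-of-the-envelope check of the incidence figures quoted above.
# US_POPULATION is an assumed round number, not a sourced figure.
US_POPULATION = 300_000_000

for new_cases_per_year in (2_000, 3_000):
    per_100k = new_cases_per_year / US_POPULATION * 100_000
    print(f"{new_cases_per_year:,} cases/year -> {per_100k:.2f} per 100,000")

# 2,000-3,000 cases/year works out to roughly 0.7-1.0 per 100,000,
# i.e. the lower end of the quoted 1-2 per 100,000 range.
```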
References
External links
American Cancer Society Detailed Guide to Bile Duct Cancer.
Patient information on extrahepatic bile duct tumors, from the National Cancer Institute.
Cancer.Net: Bile Duct Cancer
Cholangiocarcinoma Foundation
World Cholangiocarcinoma Day
Cholera

Cholera is an infection of the small intestine by some strains of the bacterium Vibrio cholerae. Symptoms may range from none, to mild, to severe. The classic symptom is large amounts of watery diarrhea that lasts a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. Dehydration can cause the skin to turn bluish. Symptoms start two hours to five days after exposure.

Cholera is caused by a number of types of Vibrio cholerae, with some types producing more severe disease than others. It is spread mostly by unsafe water and unsafe food that has been contaminated with human feces containing the bacteria. Undercooked shellfish is a common source. Humans are the only known host for the bacteria. Risk factors for the disease include poor sanitation, not enough clean drinking water, and poverty. Cholera can be diagnosed by a stool test. A rapid dipstick test is available but is not as accurate.

Prevention methods against cholera include improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months. They have the added benefit of protecting against another type of diarrhea caused by E. coli. By 2017 the US Food and Drug Administration (FDA) had approved a single-dose, live, oral cholera vaccine called Vaxchora for adults aged 18–64 who are travelling to an area of active cholera transmission. It offers limited protection to young children. People who survive an episode of cholera have long-lasting immunity for at least 3 years (the period tested).

The primary treatment for affected individuals is oral rehydration salts (ORS), the replacement of fluids and electrolytes by using slightly sweet and salty solutions. Rice-based solutions are preferred. Zinc supplementation is useful in children. In severe cases, intravenous fluids, such as Ringer's lactate, may be required, and antibiotics may be beneficial. Testing to see which antibiotic the cholera is susceptible to can help guide the choice.

Cholera continues to affect an estimated 3–5 million people worldwide and causes 28,800–130,000 deaths a year. The most recent of seven cholera pandemics and associated outbreaks, since the early 19th century, started about 1961. As of 2010, it is rare in high-income countries. Children are mostly affected. Cholera occurs as both outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and Southeast Asia. The risk of death among those affected is usually less than 5%, given improved treatment, but may be as high as 50% without such access to treatment. Descriptions of cholera are found as early as the 5th century BC in Sanskrit. In Europe, cholera was a term initially used to describe any kind of gastroenteritis, and was not used for this disease until the early 19th century. The study of cholera in England by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology because of his insights about transmission via contaminated water.
Signs and symptoms
The primary symptoms of cholera are profuse diarrhea and vomiting of clear fluid. These symptoms usually start suddenly, half a day to five days after ingestion of the bacteria. The diarrhea is frequently described as "rice water" in nature and may have a fishy odor. An untreated person with cholera may produce 10 to 20 litres (3 to 5 US gal) of diarrhea a day. Severe cholera, without treatment, kills about half of affected individuals. If the severe diarrhea is not treated, it can result in life-threatening dehydration and electrolyte imbalances. Estimates of the ratio of asymptomatic to symptomatic infections have ranged from 3 to 100. Cholera has been nicknamed the "blue death" because a person's skin may turn bluish-gray from extreme loss of fluids.

Fever is rare and should raise suspicion for secondary infection. Patients can be lethargic and might have sunken eyes, dry mouth, cold clammy skin, or wrinkled hands and feet. Kussmaul breathing, a deep and labored breathing pattern, can occur because of acidosis from stool bicarbonate losses and lactic acidosis associated with poor perfusion. Blood pressure drops due to dehydration, the peripheral pulse is rapid and thready, and urine output decreases with time. Muscle cramping and weakness, altered consciousness, seizures, or even coma due to electrolyte imbalances are common, especially in children.
Cause
Transmission
Cholera bacteria have been found in shellfish and plankton.

Transmission is usually through the fecal-oral route of contaminated food or water caused by poor sanitation. Most cholera cases in developed countries are a result of transmission by food, while in developing countries it is more often water. Food transmission can occur when people harvest seafood such as oysters in waters infected with sewage, as Vibrio cholerae accumulates in planktonic crustaceans and the oysters eat the zooplankton.

People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as "rice-water", contaminates water used by others. A single diarrheal event can cause a one-million-fold increase in the numbers of V. cholerae in the environment. The source of the contamination is typically other people with cholera when their untreated diarrheal discharge is allowed to get into waterways, groundwater or drinking water supplies. Drinking any contaminated water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person.

V. cholerae also exists outside the human body in natural water sources, either by itself or through interacting with phytoplankton, zooplankton, or biotic and abiotic detritus. Drinking such water can also result in the disease, even without prior contamination through fecal matter. However, selective pressures in the aquatic environment may reduce the virulence of V. cholerae. Specifically, animal models indicate that the transcriptional profile of the pathogen changes as it prepares to enter an aquatic environment. This transcriptional change results in a loss of ability of V. cholerae to be cultured on standard media, a phenotype referred to as viable but non-culturable (VBNC) or, more conservatively, active but non-culturable (ABNC). One study indicates that the culturability of V. cholerae drops 90% within 24 hours of entering the water, and furthermore that this loss in culturability is associated with a loss in virulence.

Both toxic and non-toxic strains exist. Non-toxic strains can acquire toxicity through a temperate bacteriophage.
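The quoted 90% loss of culturability within 24 hours can be turned into a simple quantitative model. The Python sketch below assumes first-order (exponential) decay, which is an assumption for illustration; the study itself does not specify the kinetics.

```python
import math

# Assumed first-order decay model for the observation quoted above:
# culturable fraction f(t) = exp(-k * t), with f(24 h) = 0.10.
k_per_hour = -math.log(0.10) / 24.0           # ~0.096 per hour
half_life_hours = math.log(2.0) / k_per_hour  # ~7.2 hours

print(f"decay constant ~{k_per_hour:.3f}/h, half-life ~{half_life_hours:.1f} h")
for t in (6, 12, 24, 48):
    print(f"t = {t:2d} h: culturable fraction ~{math.exp(-k_per_hour * t):.3f}")
```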
Susceptibility
About 100 million bacteria must typically be ingested to cause cholera in a normal healthy adult. This dose, however, is less in those with lowered gastric acidity (for instance those using proton pump inhibitors). Children are also more susceptible, with two- to four-year-olds having the highest rates of infection. Individuals' susceptibility to cholera is also affected by their blood type, with those with type O blood being the most susceptible. Persons with lowered immunity, such as persons with AIDS or malnourished children, are more likely to suffer a severe case if they become infected. Any individual, even a healthy adult in middle age, can undergo a severe case, and each person's case should be measured by the loss of fluids, preferably in consultation with a professional health care provider.

The cystic fibrosis genetic mutation known as delta-F508 in humans has been said to maintain a selective heterozygous advantage: heterozygous carriers of the mutation (who are not affected by cystic fibrosis) are more resistant to V. cholerae infections. In this model, the genetic deficiency in the cystic fibrosis transmembrane conductance regulator channel proteins interferes with bacteria binding to the intestinal epithelium, thus reducing the effects of an infection.
Mechanism
When consumed, most bacteria do not survive the acidic conditions of the human stomach. The few surviving bacteria conserve their energy and stored nutrients during the passage through the stomach by shutting down protein production. When the surviving bacteria exit the stomach and reach the small intestine, they must propel themselves through the thick mucus that lines the small intestine to reach the intestinal walls where they can attach and thrive.

Once the cholera bacteria reach the intestinal wall, they no longer need the flagella to move. The bacteria stop producing the protein flagellin to conserve energy and nutrients by changing the mix of proteins that they express in response to the changed chemical surroundings. On reaching the intestinal wall, V. cholerae start producing the toxic proteins that give the infected person watery diarrhea. This carries the multiplying new generations of V. cholerae bacteria out into the drinking water of the next host if proper sanitation measures are not in place.

The cholera toxin (CTX or CT) is an oligomeric complex made up of six protein subunits: a single copy of the A subunit (part A), and five copies of the B subunit (part B), connected by a disulfide bond. The five B subunits form a five-membered ring that binds to GM1 gangliosides on the surface of the intestinal epithelium cells. The A1 portion of the A subunit is an enzyme that ADP-ribosylates G proteins, while the A2 chain fits into the central pore of the B subunit ring. Upon binding, the complex is taken into the cell via receptor-mediated endocytosis. Once inside the cell, the disulfide bond is reduced, and the A1 subunit is freed to bind with a human partner protein called ADP-ribosylation factor 6 (Arf6). Binding exposes its active site, allowing it to permanently ribosylate the Gs alpha subunit of the heterotrimeric G protein. This results in constitutive cAMP production, which in turn leads to the secretion of water, sodium, potassium, and bicarbonate into the lumen of the small intestine and rapid dehydration. The gene encoding the cholera toxin was introduced into V. cholerae by horizontal gene transfer. Virulent strains of V. cholerae carry a variant of a temperate bacteriophage called CTXφ.
Microbiologists have studied the genetic mechanisms by which the V. cholerae bacteria turn off the production of some proteins and turn on the production of other proteins as they respond to the series of chemical environments they encounter, passing through the stomach, through the mucous layer of the small intestine, and on to the intestinal wall. Of particular interest have been the genetic mechanisms by which cholera bacteria turn on the protein production of the toxins that interact with host cell mechanisms to pump chloride ions into the small intestine, creating an ionic pressure which prevents sodium ions from entering the cell. The chloride and sodium ions create a salt-water environment in the small intestines, which through osmosis can pull up to six liters of water per day through the intestinal cells, creating the massive amounts of diarrhea. The host can become rapidly dehydrated unless treated properly.

By inserting separate, successive sections of V. cholerae DNA into the DNA of other bacteria, such as E. coli that would not naturally produce the protein toxins, researchers have investigated the mechanisms by which V. cholerae responds to the changing chemical environments of the stomach, mucous layers, and intestinal wall. Researchers have discovered a complex cascade of regulatory proteins controls expression of V. cholerae virulence determinants. In responding to the chemical environment at the intestinal wall, the V. cholerae bacteria produce the TcpP/TcpH proteins, which, together with the ToxR/ToxS proteins, activate the expression of the ToxT regulatory protein. ToxT then directly activates expression of virulence genes that produce the toxins, causing diarrhea in the infected person and allowing the bacteria to colonize the intestine. Current research aims at discovering "the signal that makes the cholera bacteria stop swimming and start to colonize (that is, adhere to the cells of) the small intestine."
Genetic structure
Amplified fragment length polymorphism fingerprinting of the pandemic isolates of V. cholerae has revealed variation in the genetic structure. Two clusters have been identified: Cluster I and Cluster II. For the most part, Cluster I consists of strains from the 1960s and 1970s, while Cluster II largely contains strains from the 1980s and 1990s, based on the change in the clone structure. This grouping of strains is best seen in the strains from the African continent.
Antibiotic resistance
In many areas of the world, antibiotic resistance is increasing within cholera bacteria. In Bangladesh, for example, most cases are resistant to tetracycline, trimethoprim-sulfamethoxazole, and erythromycin. Rapid diagnostic assay methods are available for the identification of multi-drug resistant cases. New generation antimicrobials have been discovered which are effective against cholera bacteria in in vitro studies.
Diagnosis
A rapid dipstick test is available to determine the presence of V. cholerae. In those samples that test positive, further testing should be done to determine antibiotic resistance. In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment via hydration and over-the-counter hydration solutions can be started without or before confirmation by laboratory analysis, especially where cholera is a common problem.

Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is V. cholerae O1. If V. cholerae serogroup O1 is not isolated, the laboratory should test for V. cholerae O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory.

Infection with V. cholerae O139 should be reported and handled in the same manner as that caused by V. cholerae O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States.
Prevention
The World Health Organization (WHO) recommends focusing on prevention, preparedness, and response to combat the spread of cholera. They also stress the importance of an effective surveillance system. Governments can play a role in all of these areas.
Water, sanitation and hygiene
Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, due to their nearly universal advanced water treatment and sanitation practices, cholera is rare. For example, the last major outbreak of cholera in the United States occurred in 1910–1911. Cholera is mainly a risk in developing countries in those areas where access to WASH (water, sanitation and hygiene) infrastructure is still inadequate.
Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic. There are several points along the cholera transmission path at which its spread may be halted:
Sterilization: Proper disposal and treatment of all materials that may have come into contact with the feces of other people with cholera (e.g., clothing, bedding, etc.) are essential. These should be sanitized by washing in hot water, using chlorine bleach if possible. Hands that touch cholera patients or their clothing, bedding, etc., should be thoroughly cleaned and disinfected with chlorinated water or other effective antimicrobial agents.
Sewage and fecal sludge management: In cholera-affected areas, sewage and fecal sludge need to be treated and managed carefully in order to stop the spread of this disease via human excreta. Provision of sanitation and hygiene is an important preventative measure. Open defecation, release of untreated sewage, or dumping of fecal sludge from pit latrines or septic tanks into the environment need to be prevented. In many cholera affected zones, there is a low degree of sewage treatment. Therefore, the implementation of dry toilets that do not contribute to water pollution, as they do not flush with water, may be an interesting alternative to flush toilets.
Sources: Warnings about possible cholera contamination should be posted around contaminated water sources with directions on how to decontaminate the water (boiling, chlorination etc.) for possible use.
Water purification: All water used for drinking, washing, or cooking should be sterilized by either boiling, chlorination, ozone water treatment, ultraviolet light sterilization (e.g., by solar water disinfection), or antimicrobial filtration in any area where cholera may be present. Chlorination and boiling are often the least expensive and most effective means of halting transmission. Cloth filters or sari filtration, though very basic, have significantly reduced the occurrence of cholera when used in poor villages in Bangladesh that rely on untreated surface water. Better antimicrobial filters, like those present in advanced individual water treatment hiking kits, are most effective. Public health education and adherence to appropriate sanitation practices are of primary importance to help prevent and control transmission of cholera and other diseases.

Handwashing with soap or ash after using a toilet and before handling food or eating is also recommended for cholera prevention by WHO Africa.
Surveillance
Surveillance and prompt reporting allow for containing cholera epidemics rapidly. Cholera exists as a seasonal disease in many endemic countries, occurring annually mostly during rainy seasons. Surveillance systems can provide early alerts to outbreaks, enabling a coordinated response, and assist in the preparation of preparedness plans. Efficient surveillance systems can also improve the risk assessment for potential cholera outbreaks. Understanding the seasonality and location of outbreaks provides guidance for improving cholera control activities for the most vulnerable. For prevention to be effective, it is important that cases be reported to national health authorities.
Vaccination
Spanish physician Jaume Ferran i Clua developed a cholera inoculation in 1885, the first to immunize humans against a bacterial disease. However, his vaccine and inoculation were rather controversial and were rejected by his peers and several investigation commissions. Russian-Jewish bacteriologist Waldemar Haffkine successfully developed the first human cholera vaccine in July 1892. He conducted a massive inoculation program in British India.

Persons who survive an episode of cholera have long-lasting immunity for at least 3 years (the period tested). A number of safe and effective oral vaccines for cholera are available. The World Health Organization (WHO) has three prequalified oral cholera vaccines (OCVs): Dukoral, Shanchol, and Euvichol. Dukoral, an orally administered, inactivated whole-cell vaccine, has an overall efficacy of about 52% during the first year after being given and 62% in the second year, with minimal side effects. It is available in over 60 countries. However, it is not currently recommended by the Centers for Disease Control and Prevention (CDC) for most people traveling from the United States to endemic countries. The vaccine that the US Food and Drug Administration (FDA) recommends, Vaxchora, is an oral attenuated live vaccine that is effective for adults aged 18–64 as a single dose.

One injectable vaccine was found to be effective for two to three years. The protective efficacy was 28% lower in children less than five years old. However, as of 2010, it has limited availability. Work is under way to investigate the role of mass vaccination. The WHO recommends immunization of high-risk groups, such as children and people with HIV, in countries where this disease is endemic. If people are immunized broadly, herd immunity results, with a decrease in the amount of contamination in the environment.

WHO recommends that oral cholera vaccination be considered in areas where the disease is endemic (with seasonal peaks), as part of the response to outbreaks, or in a humanitarian crisis during which the risk of cholera is high. Oral cholera vaccine (OCV) has been recognized as an adjunct tool for prevention and control of cholera. The WHO has prequalified three oral cholera vaccines: Dukoral (SBL Vaccines), which contains a non-toxic B-subunit of cholera toxin and provides protection against V. cholerae O1, and two vaccines developed using the same transfer of technology, Shanchol (Shantha Biotec) and Euvichol (EuBiologics Co.), which are bivalent killed whole-cell vaccines against both O1 and O139. Oral cholera vaccination could be deployed in a diverse range of situations, from cholera-endemic areas to locations of humanitarian crises, but no clear consensus exists.
Sari filtration
Developed for use in Bangladesh, the "sari filter" is a simple and cost-effective appropriate technology method for reducing the contamination of drinking water. Used sari cloth is preferable but other types of used cloth can be used with some effect, though the effectiveness will vary significantly. Used cloth is more effective than new cloth, as the repeated washing reduces the space between the fibers. Water collected in this way has a greatly reduced pathogen count—though it will not necessarily be perfectly safe, it is an improvement for poor people with limited options. In Bangladesh this practice was found to decrease rates of cholera by nearly half. It involves folding a sari four to eight times. Between uses the cloth should be rinsed in clean water and dried in the sun to kill any bacteria on it. A nylon cloth appears to work as well but is not as affordable.
Treatment
Continued eating speeds the recovery of normal intestinal function. The WHO recommends this generally for cases of diarrhea no matter what the underlying cause. A CDC training manual specifically for cholera states: "Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently."
Fluids
The most common error in caring for patients with cholera is to underestimate the speed and volume of fluids required. In most cases, cholera can be successfully treated with oral rehydration therapy (ORT), which is highly effective, safe, and simple to administer. Rice-based solutions are preferred to glucose-based ones due to greater efficiency. In severe cases with significant dehydration, intravenous rehydration may be necessary. Ringer's lactate is the preferred solution, often with added potassium. Large volumes and continued replacement until diarrhea has subsided may be needed. Ten percent of a person's body weight in fluid may need to be given in the first two to four hours. This method was first tried on a mass scale during the Bangladesh Liberation War, and was found to have much success. Despite widespread beliefs, fruit juices and commercial fizzy drinks like cola are not ideal for rehydration of people with serious infections of the intestines, and their excessive sugar content may even harm water uptake.

If commercially produced oral rehydration solutions are too expensive or difficult to obtain, solutions can be made. One such recipe calls for 1 liter of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar, and added mashed banana for potassium and to improve taste.
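The rehydration arithmetic described above is easy to make explicit. The Python sketch below is illustrative only and not medical guidance; the 60 kg example weight and the function names are assumptions, and the recipe quantities are simply those quoted in the preceding paragraph.

```python
# Minimal sketch of the rehydration arithmetic quoted above.
# Not medical guidance; the example body weight is an assumption.

def initial_fluid_liters(body_weight_kg: float) -> float:
    """Ten percent of body weight in fluid (1 kg of water is about 1 L),
    the figure quoted above for the first two to four hours."""
    return 0.10 * body_weight_kg

def home_ors(water_liters: float) -> dict:
    """Scale the home recipe quoted above: per 1 L of boiled water,
    1/2 teaspoon of salt and 6 teaspoons of sugar (plus mashed banana)."""
    return {
        "boiled_water_L": water_liters,
        "salt_tsp": 0.5 * water_liters,
        "sugar_tsp": 6.0 * water_liters,
    }

weight_kg = 60.0  # assumed example adult
print(f"~{initial_fluid_liters(weight_kg):.0f} L of fluid in the first 2-4 hours")
print(home_ors(1.0))  # ingredient quantities for one liter of solution
```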
Electrolytes
As there is frequently an initial acidosis, the potassium level may be normal even though large losses have occurred. As the dehydration is corrected, potassium levels may decrease rapidly, and thus need to be replaced. This may be done by consuming foods high in potassium, like bananas or coconut water.
Antibiotics
Antibiotic treatments for one to three days shorten the course of the disease and reduce the severity of the symptoms. Use of antibiotics also reduces fluid requirements. People will recover without them, however, if sufficient hydration is maintained. The WHO only recommends antibiotics in those with severe dehydration.

Doxycycline is typically used first line, although some strains of V. cholerae have shown resistance. Testing for resistance during an outbreak can help determine appropriate future choices. Other antibiotics proven to be effective include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, and furazolidone. Fluoroquinolones, such as ciprofloxacin, also may be used, but resistance has been reported.

Antibiotics improve outcomes in those who are both severely and not severely dehydrated. Azithromycin and tetracycline may work better than doxycycline or ciprofloxacin.
Zinc supplementation
In Bangladesh, zinc supplementation reduced the duration and severity of diarrhea in children with cholera when given with antibiotics and rehydration therapy as needed. It reduced the length of disease by eight hours and the amount of diarrheal stool by 10%. Supplementation also appears to be effective in both treating and preventing infectious diarrhea due to other causes among children in the developing world.
Prognosis
If people with cholera are treated quickly and properly, the mortality rate is less than 1%; however, with untreated cholera, the mortality rate rises to 50–60%.

For certain genetic strains of cholera, such as the one present during the 2010 epidemic in Haiti and the 2004 outbreak in India, death can occur within two hours of becoming ill.
Epidemiology
Cholera affects an estimated 2.8 million people worldwide, and causes approximately 95,000 deaths a year (uncertainty range: 21,000–143,000) as of 2015. This occurs mainly in the developing world.

In the early 1980s, annual deaths are believed to have still been higher than three million. It is difficult to calculate exact numbers of cases, as many go unreported due to concerns that an outbreak may have a negative impact on the tourism of a country. As of 2004, cholera remained both epidemic and endemic in many areas of the world.

Recent major outbreaks are the 2010s Haiti cholera outbreak and the 2016–2021 Yemen cholera outbreak. In October 2016, an outbreak of cholera began in war-ravaged Yemen. WHO called it "the worst cholera outbreak in the world". In 2019, 93% of the reported 923,037 cholera cases were from Yemen (with 1,911 deaths reported). Between September 2019 and September 2020, a global total of over 450,000 cases and over 900 deaths was reported; however, the accuracy of these numbers suffers from over-reporting from countries that report suspected cases (and not laboratory-confirmed cases), as well as under-reporting from countries that do not report official cases (such as Bangladesh, India and the Philippines).

Although much is known about the mechanisms behind the spread of cholera, researchers still do not have a full understanding of what makes cholera outbreaks happen in some places and not others. Lack of treatment of human feces and lack of treatment of drinking water greatly facilitate its spread. Bodies of water have been found to serve as a reservoir of infection, and seafood shipped long distances can spread the disease.
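Dividing the estimated deaths by the estimated cases gives the implied global case-fatality proportion. The Python sketch below uses only the figures quoted in the paragraph above; the low/central/high labels are presentational.

```python
# Implied case-fatality proportion from the estimates quoted above:
# ~2.8 million cases/year and 95,000 deaths (range 21,000-143,000).
estimated_cases = 2_800_000

for label, deaths in (("low", 21_000), ("central", 95_000), ("high", 143_000)):
    cfr_percent = deaths / estimated_cases * 100
    print(f"{label:>7}: {deaths:>7,} deaths -> ~{cfr_percent:.1f}% case fatality")

# Roughly 0.8-5.1%, consistent with the "usually less than 5%" risk of
# death quoted earlier for settings with access to treatment.
```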
Cholera had disappeared from the Americas for most of the 20th century, but it reappeared toward the end of that century, beginning with a severe outbreak in Peru. Following the end of the 2010s Haiti cholera outbreak, there have not been any cholera cases in the Americas since February 2019. As of August 2021 the disease is endemic in Africa and some areas of eastern and western Asia (Bangladesh, India and Yemen). Cholera is not endemic in Europe; all reported cases had a travel history to endemic areas.
History of outbreaks
The word cholera is from Greek: χολέρα kholera from χολή kholē "bile". Cholera likely has its origins in the Indian subcontinent, as evidenced by its prevalence in the region for centuries.

References to cholera appear in the European literature as early as 1642, from the Dutch physician Jakob de Bondt's description in his De Medicina Indorum. (The "Indorum" of the title refers to the East Indies. He also gave first European descriptions of other diseases.) But at the time, the word "cholera" was historically used by European physicians to refer to any gastrointestinal upset resulting in yellow diarrhea. De Bondt thus used a common word already in regular use to describe the new disease. This was a frequent practice of the time. It was not until the 1830s that the name for severe yellow diarrhea changed in English from "cholera" to "cholera morbus" to differentiate it from what was then known as "Asiatic cholera", or that associated with origins in India and the East.
Early outbreaks in the Indian subcontinent are believed to have been the result of crowded, poor living conditions, as well as the presence of pools of still water, both of which provide ideal conditions for cholera to thrive. The disease first spread by travelers along trade routes (land and sea) to Russia in 1817, later to the rest of Europe, and from Europe to North America and the rest of the world (hence the name "Asiatic cholera"). Seven cholera pandemics have occurred since the early 19th century; the first one did not reach the Americas. The seventh pandemic originated in Indonesia in 1961.

The first cholera pandemic occurred in the Bengal region of India, near Calcutta, lasting from 1817 through 1824. The disease dispersed from India to Southeast Asia, the Middle East, Europe, and Eastern Africa. The movement of British Army and Navy ships and personnel is believed to have contributed to the range of the pandemic, since the ships carried people with the disease to the shores of the Indian Ocean, from Africa to Indonesia, and north to China and Japan.

The second pandemic lasted from 1826 to 1837 and particularly affected North America and Europe. Advancements in transportation and global trade, and increased human migration, including soldiers, meant that more people were carrying the disease more widely. The third pandemic erupted in 1846, persisted until 1860, extended to North Africa, and reached North and South America. It was introduced to North America at Quebec, Canada, via Irish immigrants from the Great Famine. In this pandemic, Brazil was affected for the first time. The fourth pandemic lasted from 1863 to 1875, spreading from India to Naples and Spain, and reaching the United States at New Orleans, Louisiana, in 1873. It spread throughout the Mississippi River system on the continent.
The fifth pandemic was from 1881 to 1896. It started in India and spread to Europe, Asia, and South America. The sixth pandemic ran from 1899 to 1923. These epidemics had a lower number of fatalities because physicians and researchers had a greater understanding of the cholera bacteria. Egypt, the Arabian peninsula, Persia, India, and the Philippines were hit hardest during these epidemics. Other areas, such as Germany in 1892 (primarily the city of Hamburg, where more than 8,600 people died) and Naples from 1910 to 1911, also suffered severe outbreaks.
The seventh pandemic originated in 1961 in Indonesia and is marked by the emergence of a new strain, nicknamed El Tor, which still persists (as of 2018) in developing countries. This pandemic had initially subsided about 1975 and was thought to have ended, but, as noted, it has persisted. There has been a rise in cases since the 1990s.
Cholera became widespread in the 19th century. Since then it has killed tens of millions of people. In Russia alone, between 1847 and 1851, more than one million people died from the disease. It killed 150,000 Americans during the second pandemic. Between 1900 and 1920, perhaps eight million people died of cholera in India. Cholera officially became the first reportable disease in the United States due to the significant effects it had on health. In England in 1854, John Snow was the first to identify the importance of contaminated water as its source of transmission. Cholera is now no longer considered a pressing health threat in Europe and North America due to filtering and chlorination of water supplies, but it still strongly affects populations in developing countries.
In the past, vessels flew a yellow quarantine flag if any crew members or passengers had cholera. No one aboard a vessel flying a yellow flag would be allowed ashore for an extended period, typically 30 to 40 days.

Historically, many different claimed remedies have existed in folklore. Many of the older remedies were based on the miasma theory, that the disease was transmitted by bad air. Some believed that abdominal chilling made one more susceptible, and flannel and cholera belts were included in army kits. In the 1854–1855 outbreak in Naples, homeopathic camphor was used according to Hahnemann. T. J. Ritter's Mother's Remedies book lists tomato syrup as a home remedy from northern America. Elecampane was recommended in the United Kingdom, according to William Thomas Fernie. The first effective human vaccine was developed in 1885, and the first effective antibiotic was developed in 1948.
Cholera cases are much less frequent in developed countries where governments have helped to establish water sanitation practices and effective medical treatments. In the 19th century the United States, for example, had a severe cholera problem similar to those in some developing countries. It had three large cholera outbreaks in the 1800s, which can be attributed to Vibrio cholerae's spread through interior waterways such as the Erie Canal and the extensive Mississippi River valley system, as well as the major ports along the Eastern Seaboard and their cities upriver. The island of Manhattan in New York City touches the Atlantic Ocean, where cholera collected from river waters and ship discharges just off the coast. At this time, New York City did not have as effective a sanitation system as it developed in the later 20th century, so cholera spread through the city's water supply.

Cholera morbus is a historical term that was used to refer to gastroenteritis rather than specifically to what is now defined as the disease of cholera.
Research
One of the major contributions to fighting cholera was made by the physician and pioneer medical scientist John Snow (1813–1858), who in 1854 found a link between cholera and contaminated drinking water. Dr. Snow proposed a microbial origin for epidemic cholera in 1849. In his major "state of the art" review of 1855, he proposed a substantially complete and correct model for the cause of the disease. In two pioneering epidemiological field studies, he was able to demonstrate human sewage contamination was the most probable disease vector in two major epidemics in London in 1854. His model was not immediately accepted, but it was increasingly seen as plausible as medical microbiology developed over the next 30 years or so. For his work on cholera, John Snow is often regarded as the "Father of Epidemiology".

The bacterium was isolated in 1854 by Italian anatomist Filippo Pacini, but its exact nature and his results were not widely known. In the same year, the Catalan Joaquim Balcells i Pascual discovered the bacterium. In 1856, António Augusto da Costa Simões and José Ferreira de Macedo Pinto, two Portuguese researchers, are believed to have done the same.

Between the mid-1850s and the 1900s, cities in developed nations made massive investment in clean water supply and well-separated sewage treatment infrastructures. This eliminated the threat of cholera epidemics from the major developed cities in the world. In 1883, Robert Koch identified V. cholerae with a microscope as the bacillus causing the disease.

Hemendra Nath Chatterjee, a Bengali scientist, was the first to formulate and demonstrate the effectiveness of oral rehydration salt (ORS) to treat diarrhea. In his 1953 paper, published in The Lancet, he states that promethazine can stop vomiting during cholera and then oral rehydration is possible. The formulation of the fluid replacement solution was 4 g of sodium chloride, 25 g of glucose and 1000 ml of water.
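For context, the approximate osmolarity of that 1953 formulation can be worked out from standard molar masses. The Python sketch below assumes ideal, complete dissociation of sodium chloride into two osmotically active particles, which is a simplification.

```python
# Approximate osmolarity of the 1953 formulation quoted above:
# 4 g NaCl and 25 g glucose per 1000 ml of water.
MOLAR_MASS_G_PER_MOL = {"NaCl": 58.44, "glucose": 180.16}

def mosm_per_liter(grams: float, solute: str, particles: int) -> float:
    """mOsm/L from `grams` of solute per liter, assuming it yields
    `particles` osmotically active species per formula unit."""
    return grams / MOLAR_MASS_G_PER_MOL[solute] * particles * 1000.0

nacl = mosm_per_liter(4.0, "NaCl", particles=2)         # ~137 mOsm/L
glucose = mosm_per_liter(25.0, "glucose", particles=1)  # ~139 mOsm/L
print(f"total ~{nacl + glucose:.0f} mOsm/L")  # ~276 mOsm/L, near plasma osmolarity
```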
Indian medical scientist Sambhu Nath De discovered the cholera toxin and the animal model of cholera, and successfully demonstrated the method of transmission of the cholera pathogen Vibrio cholerae.

Robert Allan Phillips, working at US Naval Medical Research Unit Two in Southeast Asia, evaluated the pathophysiology of the disease using modern laboratory chemistry techniques. He developed a protocol for rehydration. His research led the Lasker Foundation to award him its prize in 1967.

More recently, in 2002, Alam et al. studied stool samples from patients at the International Centre for Diarrhoeal Disease in Dhaka, Bangladesh. From the various experiments they conducted, the researchers found a correlation between the passage of V. cholerae through the human digestive system and an increased infectivity state. Furthermore, the researchers found that the bacterium creates a hyperinfectious state in which genes that control biosynthesis of amino acids, iron uptake systems, and formation of periplasmic nitrate reductase complexes are induced just before defecation. These induced characteristics allow the cholera vibrios to survive in the "rice water" stools, an environment of limited oxygen and iron, of patients with a cholera infection.
Global strategy
In 2017, the WHO launched the "Ending Cholera: a global roadmap to 2030" strategy, which aims to reduce cholera deaths by 90% by 2030. The strategy was developed by the Global Task Force on Cholera Control (GTFCC), which develops country-specific plans and monitors progress. The approach to achieve this goal combines surveillance, water sanitation, rehydration treatment and oral vaccines. Specifically, the control strategy focuses on three approaches: i) early detection and response to contain outbreaks, ii) stopping cholera transmission through improved sanitation and vaccines in hotspots, and iii) a global framework for cholera control through the GTFCC.

The WHO and the GTFCC do not consider global cholera eradication a viable goal. Even though humans are the only host of cholera, the bacterium can persist in the environment without a human host. While global eradication is not possible, elimination of human-to-human transmission may be possible. Local elimination is possible, and has been underway most recently during the 2010s Haiti cholera outbreak. Haiti aims to achieve certification of elimination by 2022.

The GTFCC targets 47 countries, 13 of which have established vaccination campaigns.
Society and culture
Health policy
In many developing countries, cholera still reaches its victims through contaminated water sources, and countries without proper sanitation techniques have greater incidence of the disease. Governments can play a role in this. In 2008, for example, the Zimbabwean cholera outbreak was due partly to the government's role, according to a report from the James Baker Institute. The Haitian government's inability to provide safe drinking water after the 2010 earthquake led to an increase in cholera cases as well.

Similarly, South Africa's cholera outbreak was exacerbated by the government's policy of privatizing water programs. The wealthy elite of the country were able to afford safe water while others had to use water from cholera-infected rivers.

According to Rita R. Colwell of the James Baker Institute, if cholera does begin to spread, government preparedness is crucial. A government's ability to contain the disease before it extends to other areas can prevent a high death toll and the development of an epidemic or even pandemic. Effective disease surveillance can ensure that cholera outbreaks are recognized as soon as possible and dealt with appropriately. Oftentimes, this will allow public health programs to determine and control the cause of the cases, whether it is unsanitary water or seafood that has accumulated large numbers of Vibrio cholerae. Having an effective surveillance program contributes to a government's ability to prevent cholera from spreading. In the year 2000 in the state of Kerala in India, the Kottayam district was determined to be "cholera-affected"; this pronouncement led to task forces that concentrated on educating citizens with 13,670 information sessions about human health. These task forces promoted the boiling of water to obtain safe water, and provided chlorine and oral rehydration salts. Ultimately, this helped to control the spread of the disease to other areas and minimize deaths. On the other hand, researchers have shown that most of the citizens infected during the 1991 cholera outbreak in Bangladesh lived in rural areas, and were not recognized by the government's surveillance program. This inhibited physicians' abilities to detect cholera cases early.

According to Colwell, the quality and inclusiveness of a country's health care system affects the control of cholera, as it did in the Zimbabwean cholera outbreak. While sanitation practices are important, when governments respond quickly and have readily available vaccines, the country will have a lower cholera death toll. Affordability of vaccines can be a problem; if governments do not provide vaccinations, only the wealthy may be able to afford them and there will be a greater toll on the country's poor. The speed with which government leaders respond to cholera outbreaks is important.

Besides contributing to an effective or declining public health care system and water sanitation treatments, government can have indirect effects on cholera control and the effectiveness of a response to cholera. A country's government can impact its ability to prevent disease and control its spread. A speedy government response backed by a fully functioning health care system and financial resources can prevent cholera's spread. This limits cholera's ability to cause death, or at the very least a decline in education, as children are kept out of school to minimize the risk of infection.
Notable cases
Tchaikovsky's death has traditionally been attributed to cholera, most probably contracted through drinking contaminated water several days earlier. Tchaikovsky's mother died of cholera, and his father became sick with cholera at this time but made a full recovery. Some scholars, however, including English musicologist and Tchaikovsky authority David Brown and biographer Anthony Holden, have theorized that his death was a suicide.
2010s Haiti cholera outbreak. Ten months after the 2010 earthquake, an outbreak swept over Haiti, traced to a United Nations base of peacekeepers from Nepal. This marks the worst cholera outbreak in recent history, as well as the best documented cholera outbreak in modern public health.
Adam Mickiewicz, Polish poet and novelist, is thought to have died of cholera in Istanbul in 1855.
Sadi Carnot, physicist, a pioneer of thermodynamics (d. 1832)
Charles X, King of France (d. 1836)
James K. Polk, eleventh president of the United States (d. 1849)
Carl von Clausewitz, Prussian soldier and German military theorist (d. 1831)
Elliot Bovill, Chief Justice of the Straits Settlements (1893)
Nikola Tesla, Serbian-American inventor, engineer and futurist known for his contributions to the design of the modern alternating current (AC) electricity supply system, contracted cholera in 1873 at the age of 17. He was bedridden for nine months, and near death multiple times, but survived and fully recovered.
In popular culture
Unlike tuberculosis ("consumption"), which in literature and the arts was often romanticized as a disease of denizens of the demimonde or those with an artistic temperament, cholera is a disease that almost entirely affects the lower classes living in filth and poverty. This, and the unpleasant course of the disease – which includes voluminous "rice-water" diarrhea, the hemorrhaging of liquids from the mouth, and violent muscle contractions that continue even after death – has discouraged the romanticization of the disease, or even its factual presentation, in popular culture.
The 1889 novel Mastro-don Gesualdo by Giovanni Verga presents the course of a cholera epidemic across the island of Sicily, but does not show the suffering of those affected.
In Thomas Mann's novella Death in Venice, first published in 1912 as Der Tod in Venedig, Mann "presented the disease as emblematic of the final bestial degradation of the sexually transgressive author Gustav von Aschenbach." Contrary to the actual facts of how violently cholera kills, Mann has his protagonist die peacefully on a beach in a deck chair. Luchino Visconti's 1971 film version also hid from the audience the actual course of the disease. Mann's novella was also made into an opera by Benjamin Britten in 1973, his last one, and into a ballet by John Neumeier for his Hamburg Ballet company in December 2003.
In Gabriel García Márquez's 1985 novel Love in the Time of Cholera, cholera is "a looming background presence rather than a central figure requiring vile description." The novel was adapted in 2007 for the film of the same name directed by Mike Newell.
Country examples
Zambia
In Zambia, widespread cholera outbreaks have occurred since 1977, most commonly in the capital city of Lusaka. In 2017, an outbreak of cholera was declared in Zambia after laboratory confirmation of Vibrio cholerae O1, biotype El Tor, serotype Ogawa, from stool samples from two patients with acute watery diarrhea. There was a rapid increase in the number of cases, from several hundred in early December 2017 to approximately 2,000 by early January 2018. With intensification of the rains, new cases increased on a daily basis, reaching a peak in the first week of January 2018, with over 700 cases reported.

In collaboration with partners, the Zambia Ministry of Health (MoH) launched a multifaceted public health response that included increased chlorination of the Lusaka municipal water supply, provision of emergency water supplies, water quality monitoring and testing, enhanced surveillance, epidemiologic investigations, a cholera vaccination campaign, aggressive case management and health care worker training, and laboratory testing of clinical samples.

The Zambian Ministry of Health had also implemented a reactive one-dose oral cholera vaccine (OCV) campaign in April 2016 in three Lusaka compounds, followed by a pre-emptive second round in December.
India
The city of Kolkata, India, in the state of West Bengal in the Ganges delta, has been described as the "homeland of cholera", with regular outbreaks and pronounced seasonality. In India, where the disease is endemic, cholera outbreaks occur every year between dry seasons and rainy seasons. India is also characterized by high population density, unsafe drinking water, open drains, and poor sanitation, which provide an optimal niche for the survival, sustenance and transmission of Vibrio cholerae.
Democratic Republic of Congo
In Goma in the Democratic Republic of Congo, cholera has left an enduring mark on human and medical history. Cholera pandemics in the 19th and 20th centuries led to the growth of epidemiology as a science, and in recent years cholera has continued to press advances in the concepts of disease ecology, basic membrane biology, and transmembrane signaling, and in the use of scientific information and treatment design.
Notes
References
Further reading
Arnold, David (1986). "Cholera and Colonialism in British India". Past & Present. 113 (113): 118–151. doi:10.1093/past/113.1.118. JSTOR 650982. PMID 11617906.
Azizi, MH; Azizi, F (January 2010). "History of Cholera Outbreaks in Iran during the 19th and 20th Centuries". Middle East Journal of Digestive Diseases. 2 (1): 51–55. PMC 4154910. PMID 25197514.
Bilson, Geoffrey. A Darkened House: Cholera in Nineteenth-Century Canada (U of Toronto Press, 1980).
Cooper, Donald B. (1986). "The New Black Death: Cholera in Brazil, 1855-1856". Social Science History. 10 (4): 467–488. doi:10.2307/1171027. JSTOR 1171027. PMID 11618140.
Echenberg, Myron (2011). Africa in the Time of Cholera: A History of Pandemics from 1817 to the Present. ISBN 978-0-521-18820-3.
Evans, Richard J. (1988). "Epidemics and Revolutions: Cholera in Nineteenth-Century Europe". Past & Present. 120 (120): 123–146. doi:10.1093/past/120.1.123. JSTOR 650924. PMID 11617908.
Evans, Richard J. (2005). Death in Hamburg: Society and Politics in the Cholera Years. ISBN 978-0-14-303636-4.
Gilbert, Pamela K. Cholera and Nation: Doctoring the Social Body in Victorian England (SUNY Press, 2008).
Hamlin, Christopher (2009). Cholera: The Biography. Oxford University Press.
Huber, Valeska (November 2020). "Pandemics and the politics of difference: rewriting the history of internationalism through nineteenth-century cholera". Journal of Global History. 15 (3): 394–407. doi:10.1017/S1740022820000236. S2CID 228940685.
Huber, Valeska (June 2006). "The Unification of the Globe by Disease? The International Sanitary Conferences on Cholera, 1851–1894". The Historical Journal. 49 (2): 453–476. doi:10.1017/S0018246X06005280. S2CID 162994263.
Jenson, Deborah; Szabo, Victoria (November 2011). "Cholera in Haiti and Other Caribbean Regions, 19th Century". Emerging Infectious Diseases. 17 (11): 2130–2135. doi:10.3201/eid1711.110958. PMC 3310590. PMID 22099117.
Kotar, S. L.; Gessler, J. E. (2014). Cholera: A Worldwide History. ISBN 978-0-7864-7242-0.
Kudlick, Catherine Jean (1996). Cholera in Post-Revolutionary Paris: A Cultural History. Berkeley: University of California Press.
Legros, Dominique (15 October 2018). "Global Cholera Epidemiology: Opportunities to Reduce the Burden of Cholera by 2030". The Journal of Infectious Diseases. 218 (suppl_3): S137–S140. doi:10.1093/infdis/jiy486. PMC 6207143. PMID 30184102.
Mukharji, Projit Bihari (2012). "The Cholera Cloud in the Nineteenth-Century British World: History of an Object-Without-an-Essence". Bulletin of the History of Medicine. 86 (3): 303–332. doi:10.1353/bhm.2012.0050. JSTOR 26305866. PMID 23241908. S2CID 207267413. INIST:26721136 Project MUSE 492086.
Rosenberg, Charles E. (1987). The Cholera Years: The United States in 1832, 1849, and 1866. University of Chicago Press. ISBN 978-0-226-72677-9.
Roth, Mitchel (1997). "Cholera, Community, and Public Health in Gold Rush Sacramento and San Francisco". Pacific Historical Review. 66 (4): 527–551. doi:10.2307/3642236. JSTOR 3642236.
Snowden, Frank M. Naples in the Time of Cholera, 1884–1911 (Cambridge UP, 1995).
Vinten-Johansen, Peter, ed. Investigating Cholera in Broad Street: A History in Documents (Broadview Press, 2020). Regarding the 1850s in England.
Vinten-Johansen, Peter, et al. Cholera, chloroform, and the science of medicine: a life of John Snow (2003).
External links
Prevention and control of cholera outbreaks: WHO policy and recommendations
Cholera—World Health Organization
Cholera – Vibrio cholerae infection—Centers for Disease Control and Prevention
"Cholera" . Encyclopædia Britannica. Vol. 6 (11th ed.). 1911. pp. 262–267. | 116 |
Invasive hydatidiform mole
Invasive hydatidiform mole is a type of neoplasia that grows into the muscular wall of the uterus. It is formed after conception (fertilization of an egg by a sperm). It may spread to other parts of the body, such as the vagina, vulva, and lung.
See also
Hydatidiform mole
References
External links
Chorioadenoma destruens entry in the public domain NCI Dictionary of Cancer Terms. This article incorporates public domain material from the U.S. National Cancer Institute document: "Dictionary of Cancer Terms".
Choriocarcinoma
Choriocarcinoma is a malignant, trophoblastic cancer, usually of the placenta. It is characterized by early hematogenous spread to the lungs. It belongs to the malignant end of the spectrum in gestational trophoblastic disease (GTD). It is also classified as a germ cell tumor and may arise in the testis or ovary.
Signs and symptoms
increased quantitative chorionic gonadotropin (the "pregnancy hormone") levels
vaginal bleeding
shortness of breath
hemoptysis (coughing up blood)
chest pain
chest X-ray shows multiple infiltrates of various shapes in both lungs
presents in males as a testicular cancer, sometimes with skin hyperpigmentation (from excess chorionic gonadotropin cross-reacting with the alpha-MSH receptor), gynecomastia, and weight loss (from excess chorionic gonadotropin cross-reacting with the LH, FSH, and TSH receptors)
can present with decreased thyroid-stimulating hormone (TSH) due to hyperthyroidism.
Cause
Choriocarcinoma of the placenta during pregnancy is preceded by:
hydatidiform mole (50% of cases)
spontaneous abortion (20% of cases)
ectopic pregnancy (2% of cases)
normal term pregnancy (20–30% of cases)
hyperemesis gravidarum
Rarely, choriocarcinoma occurs in primary locations other than the placenta; very rarely, it occurs in the testicles. Although trophoblastic components are common in mixed germ cell tumors, pure choriocarcinoma of the adult testis is rare. Pure choriocarcinoma of the testis represents the most aggressive pathologic variant of germ cell tumors in adults, characteristically with early hematogenous and lymphatic metastatic spread. Because of early spread and inherent resistance to anticancer drugs, patients have a poor prognosis. Elements of choriocarcinoma in a mixed testicular tumor have no prognostic importance.
Choriocarcinomas can also occur in the ovaries and other organs.
Pathology
The characteristic feature is the identification of intimately related syncytiotrophoblasts and cytotrophoblasts without formation of definite placental-type villi. Since choriocarcinomas include syncytiotrophoblasts (beta-hCG-producing cells), they cause elevated blood levels of beta-human chorionic gonadotropin.
Syncytiotrophoblasts are large multi-nucleated cells with eosinophilic cytoplasm. They often surround the cytotrophoblasts, reminiscent of their normal anatomical relationship in chorionic villi. Cytotrophoblasts are polyhedral, mononuclear cells with hyperchromatic nuclei and a clear or pale cytoplasm. Extensive hemorrhage is a common finding.
Treatment
Since gestational choriocarcinoma (which arises from a hydatidiform mole) contains paternal DNA (and thus paternal antigens), it is exquisitely sensitive to chemotherapy. The cure rates, even for metastatic gestational choriocarcinoma, are more than 90% when chemotherapy is used for invasive mole and choriocarcinoma.
As of 2019, treatment with either single-agent methotrexate or actinomycin-D is recommended for low-risk disease, while intense combination regimens including EMACO (etoposide, methotrexate, actinomycin D, cyclophosphamide and vincristine (Oncovin)) are recommended for intermediate- or high-risk disease.
Hysterectomy (surgical removal of the uterus) can also be offered to patients >40 years of age or those for whom sterilisation is not an obstacle. It may be required for those with severe infection and uncontrolled bleeding.
Choriocarcinoma arising in the testicle is rare, malignant and highly resistant to chemotherapy. The same is true of choriocarcinoma arising in the ovary. Testicular choriocarcinoma has the worst prognosis of all germ-cell cancers.
References
External links
00976 at CHORUS
Claudication
Claudication is a medical term usually referring to impairment in walking, or pain, discomfort, numbness, or tiredness in the legs that occurs during walking or standing and is relieved by rest. The perceived level of pain from claudication can be mild to extremely severe. Claudication is most common in the calves but it can also affect the feet, thighs, hips, buttocks, or arms. The word claudication comes from the Latin claudicare meaning to limp.
Claudication that appears after a short amount of walking may sometimes be described by US medical professionals by the number of typical city street blocks that the patient can walk before the onset of claudication. Thus, "one-block claudication" appears after walking one block, "two-block claudication" appears after walking two blocks, etc. The exact length of a block varies by locality, but it is on the order of 100 metres.
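A minimal Python sketch of this convention follows, assuming the stated order-of-magnitude figure of roughly 100 metres per block; the constant and function names are illustrative, and actual block lengths vary by city.

METRES_PER_BLOCK = 100  # order-of-magnitude convention stated above

def claudication_distance_m(blocks):
    # Approximate walking distance implied by an "n-block claudication".
    return blocks * METRES_PER_BLOCK

print(claudication_distance_m(1))  # ~100 m: "one-block claudication"
print(claudication_distance_m(2))  # ~200 m: "two-block claudication"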
Types
Intermittent vascular
Intermittent vascular (or arterial) claudication (Latin: claudicatio intermittens) most often refers to cramping pains in the buttock or leg muscles, especially the calves. It is caused by poor circulation of the blood to the affected area, called peripheral arterial disease. The poor blood flow is often a result of atherosclerotic blockages more proximal to the affected area; individuals with intermittent claudication may have diabetes — often undiagnosed. Another cause, or exacerbating factor, is excessive sitting (several hours), especially in the absence of reasonable breaks, along with a general lack of walking or other exercise that stimulates the legs.
Spinal or neurogenic
Spinal or neurogenic claudication is not due to lack of blood supply, but rather is caused by nerve root compression or stenosis of the spinal canal, usually from a degenerative spine, most often at the "L4-L5" or "L5-S1" level. This may result from many factors, including a bulging disc, a herniated disc or fragments from previously herniated discs (post-operative), scar tissue from previous surgeries, or osteophytes (bone spurs that jut out from the edge of a vertebra into the foramen, the opening through which the nerve root passes). In most cases neurogenic claudication is bilateral, i.e. symmetrical.
Jaw
Jaw claudication is pain in the jaw or ear while chewing. This is caused by insufficiency of the arteries supplying the jaw muscles, associated with giant cell arteritis.
Diagnosis
Differential diagnosis
Vascular (or arterial) claudication typically occurs after activity or ambulation for a distance, with resultant vascular insufficiency (lack of blood flow) in which the muscular demand for oxygen outweighs the supply. The typical symptom is lower extremity cramping. Resting from activity, even in a standing position, may help relieve the symptoms. Spinal or neurogenic claudication may be differentiated from arterial claudication based on activity and position. In neurogenic claudication, positional changes lead to increased stenosis (narrowing) of the spinal canal and compression of nerve roots, with resultant lower extremity symptoms. Standing and extension of the spine narrow the spinal canal diameter. Sitting and flexion of the spine increase the spinal canal diameter. A person with neurogenic claudication will have worsening leg cramping when standing erect or standing and walking. Symptoms may be relieved by sitting down (flexing the spine) or even by walking while leaning over a shopping cart (flexing the spine).
The ability to ride a stationary bike for a prolonged period of time differentiates neurogenic claudication from vascular claudication. Weakness is also a prominent feature of spinal claudication that is not usually present in intermittent claudication.
Treatment
Blocking agents of the alpha-1/alpha-2 adrenoceptors are typically used to treat the effects of the vasoconstriction associated with vascular claudication. Cilostazol (trade name: Pletal) is FDA approved for intermittent claudication. It is contraindicated in patients with heart failure, and improvement of symptoms may not be evident for two to three weeks.
Neurogenic claudication can be treated surgically with spinal decompression.
Prognosis
The prognosis for patients with peripheral vascular disease due to atherosclerosis is poor; patients with intermittent claudication due to atherosclerosis are at increased risk of death from cardiovascular disease (e.g. heart attack), because the same disease that affects the legs is often present in the arteries of the heart.
The prognosis for neurogenic claudication is good if the cause can be addressed surgically.
References
External links
Colic
Colic or cholic is a form of pain that starts and stops abruptly. It occurs due to muscular contractions of a hollow tube (small and large intestine, gall bladder, ureter, etc.) in an attempt to relieve an obstruction by forcing content out. It may be accompanied by sweating and vomiting. Types include:
Baby colic, a condition, usually in infants, characterized by incessant crying
Biliary colic, blockage by a gallstone of the common bile duct or cystic duct
Devon colic or painter's colic, a condition caused by lead poisoning
Horse colic, a potentially fatal condition experienced by horses, caused by intestinal displacement or blockage
Renal colic, a pain in the flank, characteristic of kidney stones
The term is from Greek κολικός kolikos, "relative to the colon".
References
External links
Common cold
The common cold, also known simply as a cold, is a viral infectious disease of the upper respiratory tract that primarily affects the respiratory mucosa of the nose, throat, sinuses, and larynx. Signs and symptoms may appear less than two days after exposure to the virus. These may include coughing, sore throat, runny nose, sneezing, headache, and fever. People usually recover in seven to ten days, but some symptoms may last up to three weeks. Occasionally, those with other health problems may develop pneumonia.
Well over 200 virus strains are implicated in causing the common cold, with rhinoviruses, coronaviruses, adenoviruses and enteroviruses being the most common. They spread through the air during close contact with infected people or indirectly through contact with objects in the environment, followed by transfer to the mouth or nose. Risk factors include going to child care facilities, not sleeping well, and psychological stress. The symptoms are mostly due to the body's immune response to the infection rather than to tissue destruction by the viruses themselves. The symptoms of influenza are similar to those of a cold, although usually more severe and less likely to include a runny nose.
There is no vaccine for the common cold. The primary methods of prevention are hand washing; not touching the eyes, nose or mouth with unwashed hands; and staying away from sick people. Some evidence supports the use of face masks. There is also no cure, but the symptoms can be treated. Zinc may reduce the duration and severity of symptoms if started shortly after the onset of symptoms. Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen may help with pain. Antibiotics, however, should not be used, as all colds are caused by viruses, and there is no good evidence that cough medicines are effective.
The common cold is the most frequent infectious disease in humans. Under normal circumstances, the average adult gets two to three colds a year, while the average child may get six to eight. Infections occur more commonly during the winter. These infections have existed throughout human history.
Signs and symptoms
The typical symptoms of a cold include cough, runny nose, sneezing, nasal congestion, and a sore throat, sometimes accompanied by muscle ache, fatigue, headache, and loss of appetite. A sore throat is present in about 40% of cases, a cough in about 50%, and muscle ache likewise in about 50%. In adults, a fever is generally not present, but it is common in infants and young children. The cough is usually mild compared to that accompanying influenza. While a cough and a fever indicate a higher likelihood of influenza in adults, a great deal of similarity exists between these two conditions. A number of the viruses that cause the common cold may also result in asymptomatic infections.
The color of the mucus or nasal secretion may vary from clear to yellow to green and does not indicate the class of agent causing the infection.
Progression
A cold usually begins with fatigue, a feeling of being chilled, sneezing, and a headache, followed in a couple of days by a runny nose and cough. Symptoms may begin within sixteen hours of exposure and typically peak two to four days after onset. They usually resolve in seven to ten days, but some can last for up to three weeks. The average duration of cough is eighteen days and in some cases people develop a post-viral cough which can linger after the infection is gone. In children, the cough lasts for more than ten days in 35–40% of cases and continues for more than 25 days in 10%.
Causes
Viruses
The common cold is an infection of the upper respiratory tract which can be caused by many different viruses. The most commonly implicated is a rhinovirus (30–80%), a type of picornavirus with 99 known serotypes. Other commonly implicated viruses include human coronaviruses (≈ 15%), influenza viruses (10–15%), adenoviruses (5%), human respiratory syncytial virus (RSV), enteroviruses other than rhinoviruses, human parainfluenza viruses, and human metapneumovirus. Frequently more than one virus is present. In total, more than 200 viral types are associated with colds.
Transmission
The common cold virus is typically transmitted via airborne droplets (aerosols), direct contact with infected nasal secretions, or fomites (contaminated objects). Which of these routes is of primary importance has not been determined. The viruses may survive for prolonged periods in the environment (over 18 hours for rhinoviruses) and can be picked up by people's hands and subsequently carried to their eyes or nose, where infection occurs. Transmission from animals is considered highly unlikely; an outbreak documented at a British scientific base on Adelaide Island after seventeen weeks of isolation was thought to have been caused by transmission from a contaminated object or an asymptomatic human carrier, rather than from the husky dogs which were also present at the base.
Transmission is common in daycare and at school due to the proximity of many children with little immunity and frequently poor hygiene. These infections are then brought home to other members of the family. There is no evidence that recirculated air during commercial flight is a method of transmission. People sitting in close proximity appear to be at greater risk of infection.
Rhinovirus-caused colds are most infectious during the first three days of symptoms; they are much less infectious afterwards.
Weather
A common misconception is that one can "catch a cold" simply through prolonged exposure to cold weather. Although it is now known that colds are viral infections, the prevalence of many such viruses is indeed seasonal, occurring more frequently during cold weather. The reason for the seasonality has not been conclusively determined. Possible explanations may include cold temperature-induced changes in the respiratory system, decreased immune response, and low humidity causing an increase in viral transmission rates, perhaps due to dry air allowing small viral droplets to disperse farther and stay in the air longer.
The apparent seasonality may also be due to social factors, such as people spending more time indoors, near infected people, and specifically children at school. Although normal exposure to cold does not increase one's risk of infection, severe exposure leading to significant reduction of body temperature (hypothermia) may put one at greater risk for the common cold; although controversial, the majority of evidence suggests that it may increase susceptibility to infection.
Other
Herd immunity, generated from previous exposure to cold viruses, plays an important role in limiting viral spread, as seen with younger populations that have greater rates of respiratory infections. Poor immune function is a risk factor for disease. Insufficient sleep and malnutrition have been associated with a greater risk of developing infection following rhinovirus exposure; this is believed to be due to their effects on immune function. Breast feeding decreases the risk of acute otitis media and lower respiratory tract infections among other diseases, and it is recommended that breast feeding be continued when an infant has a cold. In the developed world breast feeding may not be protective against the common cold in and of itself.
Pathophysiology
The symptoms of the common cold are believed to be primarily related to the immune response to the virus. The mechanism of this immune response is virus specific. For example, the rhinovirus is typically acquired by direct contact; it binds to humans via ICAM-1 receptors and the CDHR3 receptor through unknown mechanisms to trigger the release of inflammatory mediators. These inflammatory mediators then produce the symptoms. It does not generally cause damage to the nasal epithelium. The respiratory syncytial virus (RSV), on the other hand, is contracted by direct contact and airborne droplets. It then replicates in the nose and throat before frequently spreading to the lower respiratory tract. RSV does cause epithelium damage. Human parainfluenza virus typically results in inflammation of the nose, throat, and bronchi. In young children when it affects the trachea it may produce the symptoms of croup due to the small size of their airways.
Diagnosis
The distinction between viral upper respiratory tract infections is loosely based on the location of symptoms, with the common cold affecting primarily the nose (rhinitis), throat (pharyngitis), and lungs (bronchitis). There can be significant overlap, and more than one area can be affected. Self-diagnosis is frequent. Isolation of the viral agent involved is rarely performed, and it is generally not possible to identify the virus type through symptoms.
Prevention
The only useful ways to reduce the spread of cold viruses are physical measures such as correct hand washing technique and face masks; in the healthcare environment, gowns and disposable gloves are also used. Isolation or quarantine is not used, as the disease is so widespread and symptoms are non-specific. There is no vaccine to protect against the common cold. Vaccination has proven difficult, as there are many viruses involved and they mutate rapidly. Creation of a broadly effective vaccine is, therefore, highly improbable.
Regular hand washing appears to be effective in reducing the transmission of cold viruses, especially among children. Whether the addition of antivirals or antibacterials to normal hand washing provides greater benefit is unknown. Wearing face masks when around people who are infected may be beneficial; however, there is insufficient evidence for maintaining a greater social distance.
It is unclear if zinc supplements affect the likelihood of contracting a cold. Routine vitamin C supplements do not reduce the risk or severity of the common cold, though they may reduce its duration. Gargling with water was found useful in one small trial.
Management
Treatments of the common cold primarily involve medications and other therapies for symptomatic relief. Getting plenty of rest, drinking fluids to maintain hydration, and gargling with warm salt water are reasonable conservative measures. Much of the benefit from symptomatic treatment is, however, attributed to the placebo effect. As of 2010, no medications or herbal remedies had been conclusively demonstrated to shorten the duration of infection.
Symptomatic
Treatments that may help with symptoms include simple pain medication and medications for fevers such as ibuprofen and acetaminophen (paracetamol). It is, however, not clear if acetaminophen helps with symptoms. It is not known if over-the-counter cough medications are effective for treating an acute cough. Cough medicines are not recommended for use in children due to a lack of evidence supporting effectiveness and the potential for harm. In 2009, Canada restricted the use of over-the-counter cough and cold medication in children six years and under due to concerns regarding risks and unproven benefits. The misuse of dextromethorphan (an over-the-counter cough medicine) has led to its ban in a number of countries. Intranasal corticosteroids have not been found to be useful.
In adults, short-term use of nasal decongestants may have a small benefit. Antihistamines may improve symptoms in the first day or two; however, there is no longer-term benefit and they have adverse effects such as drowsiness. Other decongestants such as pseudoephedrine appear effective in adults. Combined oral analgesics, antihistaminics and decongestants are generally effective for older children and adults. Ipratropium nasal spray may reduce the symptoms of a runny nose but has little effect on stuffiness. Ipratropium may also help with cough in adults. The safety and effectiveness of nasal decongestant use in children is unclear.
Due to a lack of studies, it is not known whether increased fluid intake improves symptoms or shortens respiratory illness. As of 2017, heated and humidified air, such as via RhinoTherm, is of unclear benefit. One study has found chest vapor rub to provide some relief of nocturnal cough, congestion, and sleep difficulty.
Some advise avoiding physical exercise if there are symptoms such as fever, widespread muscle aches or fatigue. It is regarded as safe to perform moderate exercise if the symptoms are confined to the head, including runny nose, nasal congestion, sneezing, or a minor sore throat. There is a popular belief that having a hot drink can help with cold symptoms, but evidence to support this is very limited.
Antibiotics and antivirals
Antibiotics have no effect against viral infections, including the common cold. Due to their side effects, antibiotics cause overall harm but are still frequently prescribed. Some of the reasons that antibiotics are so commonly prescribed include people's expectations for them, physicians' desire to help, and the difficulty in excluding complications that may be amenable to antibiotics. There are no effective antiviral drugs for the common cold, even though some preliminary research has shown benefits.
Zinc
Zinc supplements may shorten the duration of colds by up to 33% and reduce the severity of symptoms if supplementation begins within 24 hours of the onset of symptoms. Some zinc remedies directly applied to the inside of the nose have led to the loss of the sense of smell. A 2017 review did not recommend the use of zinc for the common cold for various reasons, whereas reviews from 2017 and 2018 recommended its use but also advocated further research on the topic.
Alternative medicine
While there are many alternative medicines and Chinese herbal medicines supposed to treat the common cold, there is insufficient scientific evidence to support their use. As of 2015, there is weak evidence to support nasal irrigation with saline. There is no firm evidence that Echinacea products or garlic provide any meaningful benefit in treating or preventing colds.
Vitamins C and D
Vitamin C supplementation does not affect the incidence of the common cold, but may reduce its duration. There is no conclusive evidence that vitamin D supplementation is efficacious in the prevention or treatment of respiratory tract infections.
Prognosis
The common cold is generally mild and self-limiting with most symptoms generally improving in a week. In children, half of cases go away in 10 days and 90% in 15 days. Severe complications, if they occur, are usually in the very old, the very young, or those who are immunosuppressed. Secondary bacterial infections may occur resulting in sinusitis, pharyngitis, or an ear infection. It is estimated that sinusitis occurs in 8% and ear infection in 30% of cases.
Epidemiology
The common cold is the most common human disease and affects people all over the globe. Adults typically have two to three infections annually, and children may have six to ten colds a year (and up to twelve colds a year for school children). Rates of symptomatic infections increase in the elderly due to declining immunity.
Native Americans and Inuit are more likely to be infected with colds and develop complications such as otitis media than Caucasians. This may be explained as much by issues such as poverty and overcrowding as by ethnicity.
History
While the cause of the common cold was identified in the 1950s, the disease appears to have been with humanity since its early history. Its symptoms and treatment are described in the Egyptian Ebers papyrus, the oldest existing medical text, written before the 16th century BCE. The name "cold" came into use in the 16th century, due to the similarity between its symptoms and those of exposure to cold weather.
In the United Kingdom, the Common Cold Unit (CCU) was set up by the Medical Research Council in 1946, and it was where the rhinovirus was discovered in 1956. In the 1970s, the CCU demonstrated that treatment with interferon during the incubation phase of rhinovirus infection protects somewhat against the disease, but no practical treatment could be developed. The unit was closed in 1989, two years after it completed research on zinc gluconate lozenges in the prevention and treatment of rhinovirus colds, the only successful treatment in the history of the unit.
Research directions
Antivirals have been tested for effectiveness in the common cold; as of 2009, none had been both found effective and licensed for use. There are ongoing trials of the antiviral drug pleconaril, which shows promise against picornaviruses, as well as trials of BTA-798. The oral form of pleconaril had safety issues, and an aerosol form is being studied. Double-stranded RNA activated caspase oligomerizer (DRACO), a broad-spectrum antiviral therapy, has shown preliminary effectiveness in treating rhinovirus, as well as other infectious viruses.
The genomes of all known human rhinovirus strains have been sequenced.
Societal impact
The economic impact of the common cold is not well understood in much of the world. In the United States, the common cold leads to 75–100 million physician visits annually at a conservative cost estimate of $7.7 billion per year. Americans spend $2.9 billion on over-the-counter drugs and another $400 million on prescription medicines for symptom relief. More than one-third of people who saw a doctor received an antibiotic prescription, which has implications for antibiotic resistance. An estimated 22–189 million school days are missed annually due to a cold. As a result, parents missed 126 million workdays to stay home to care for their children. When added to the 150 million workdays missed by employees who have a cold, the total economic impact of cold-related work loss exceeds $20 billion per year. This accounts for 40% of time lost from work in the United States.
References
Notes
Bibliography
Eccles R, Weber O, eds. (2009). Common Cold (Illustrated ed.). Springer Science & Business Media. ISBN 978-3-7643-9912-2.
External links
Common cold at Curlie
Genital wart
Genital warts are a sexually transmitted infection caused by certain types of human papillomavirus (HPV). They are generally pink in color and project out from the surface of the skin. Usually they cause few symptoms, but can occasionally be painful. Typically they appear one to eight months following exposure. Warts are the most easily recognized symptom of genital HPV infection.
HPV types 6 and 11 are responsible for causing the majority of genital warts, whereas HPV types 16, 18, 31, 33, and 35 are also occasionally found. It is spread through direct skin-to-skin contact, usually during oral, genital, or anal sex with an infected partner. Diagnosis is generally based on symptoms and can be confirmed by biopsy. The types of HPV that cause cancer are not the same as those that cause warts.
Some HPV vaccines can prevent genital warts, as may condoms. Treatment options include creams such as podophyllin, imiquimod, and trichloroacetic acid. Cryotherapy or surgery may also be an option. After treatment, warts often resolve within six months. Without treatment, in up to a third of cases they resolve on their own.
About 1% of people in the United States have genital warts. Many people, however, are infected and do not have symptoms. Without vaccination, nearly all sexually active people will get some type of HPV at one point in their lives. The disease has been known at least since the time of Hippocrates in 300 BC.
Signs and symptoms
They may be found anywhere in the anal or genital area, and are frequently found on external surfaces of the body, including the penile shaft, scrotum, or labia majora of the vagina. They can also occur on internal surfaces like the opening to the urethra, inside the vagina, on the cervix, or in the anus.
They can be as small as 1–5 mm in diameter, but can also grow or spread into large masses in the genital or anal area. In some cases they look like small stalks. They may be hard ("keratinized") or soft. Their color can be variable, and sometimes they may bleed.
In most cases, there are no symptoms of HPV infection other than the warts themselves. Sometimes warts may cause itching, redness, or discomfort, especially when they occur around the anus. Although they are usually without other physical symptoms, an outbreak of genital warts may cause psychological distress, such as anxiety, in some people.
Causes
Transmission
HPV is most commonly transmitted through penetrative sex. While HPV can also be transmitted via non-penetrative sexual activity, it is less transmissible than via penetrative sex. There is conflicting evidence about the effect of condoms on transmission of low-risk HPV. Some studies have suggested that they are effective at reducing transmission. Other studies suggest that condoms are not effective at preventing transmission of the low-risk HPV variants that cause genital warts. The effect of condoms on HPV transmission may also be sex-dependent; there is some evidence that condoms are more effective at preventing infection of males than of females.
The types of HPV that cause warts are highly transmissible. Roughly three out of four unaffected partners of patients with warts develop them within eight months. Other studies of partner concordance suggest that the presence of visible warts may be an indicator of increased infectivity; HPV concordance rates are higher in couples where one partner has visible warts.
Latency and recurrence
Although 90% of HPV infections are cleared by the body within two years of infection, it is possible for infected cells to undergo a latency (quiet) period, with the first occurrence or a recurrence of symptoms happening months or years later. Latent HPV, even with no outward symptoms, is still transmissible to a sexual partner. If an individual has unprotected sex with an infected partner, there is a 70% chance that he or she will also become infected.
In individuals with a history of previous HPV infection, the appearance of new warts may be either from a new exposure to HPV, or from a recurrence of the previous infection. As many as one-third of people with warts will experience a recurrence.
Children
Anal or genital warts may be transmitted during birth. The presence of wart-like lesions on the genitals of young children has been suggested as an indicator of sexual abuse. However, genital warts can sometimes result from autoinoculation by warts elsewhere on the body, such as from the hands. It has also been reported from sharing of swimsuits, underwear, or bath towels, and from non-sexual touching during routine care such as diapering. Genital warts in children are less likely to be caused by HPV subtypes 6 and 11 than adults, and more likely to be caused by HPV types that cause warts elsewhere on the body ("cutaneous types"). Surveys of pediatricians who are child abuse specialists suggest that in children younger than 4 years old, there is no consensus on whether the appearance of new anal or genital warts, by itself, can be considered an indicator of sexual abuse.
Diagnosis
The diagnosis of genital warts is most often made visually, but may require confirmation by biopsy in some cases. Smaller warts may occasionally be confused with molluscum contagiosum.
Genital warts, histopathologically, characteristically rise above the skin surface due to enlargement of the dermal papillae, have parakeratosis and the characteristic nuclear changes typical of HPV infections (nuclear enlargement with perinuclear clearing).
DNA tests are available for diagnosis of high-risk HPV infections. Because genital warts are caused by low-risk HPV types, DNA tests cannot be used for diagnosis of genital warts or other low-risk HPV infections.
Some practitioners use an acetic acid solution to identify smaller warts ("subclinical lesions"), but this practice is controversial. Because a diagnosis made with acetic acid will not meaningfully affect the course of the disease, and cannot be verified by a more specific test, a 2007 UK guideline advises against its use.
Prevention
Gardasil (sold by Merck & Co.) is a vaccine that protects against human papillomavirus types 6, 11, 16 and 18. Types 6 and 11 cause genital warts, while 16 and 18 cause cervical cancer. The vaccine is preventive, not therapeutic, and must be given before exposure to the virus type to be effective, ideally before the beginning of sexual activity. The vaccine is approved by the US Food and Drug Administration for use in both males and females as early as 9 years of age.
In the UK, Gardasil replaced Cervarix in September 2012 for reasons unrelated to safety. Cervarix had been used routinely in young females from its introduction in 2008, but was only effective against the high-risk HPV types 16 and 18, neither of which typically causes warts.
Management
There is no cure for HPV. Existing treatments are focused on the removal of visible warts, but these may also regress on their own without any therapy. There is no evidence to suggest that removing visible warts reduces transmission of the underlying HPV infection. As many as 80% of people with HPV will clear the infection within 18 months.
A healthcare practitioner may offer one of several ways to treat warts, depending on their number, sizes, locations, or other factors. All treatments can potentially cause depigmentation, itching, pain, or scarring.
Treatments can be classified as either physically ablative or topical agents. Physically ablative therapies are considered more effective at initial wart removal, but like all therapies have significant recurrence rates.
Many therapies, including folk remedies, have been suggested for treating genital warts, some of which have little evidence to suggest they are effective or safe. Those listed here are ones mentioned in national or international practice guidelines as having some basis in evidence for their use.
Physical ablation
Physically ablative methods are more likely to be effective on keratinized warts. They are also most appropriate for patients with fewer numbers of relatively smaller warts.
Simple excision, such as with scissors under local anesthesia, is highly effective.
Liquid nitrogen cryosurgery is usually performed in an office visit, at weekly intervals. It is effective, inexpensive, safe for pregnancy, and does not usually cause scarring.
Electrocauterization (sometimes called "loop electrical excision procedure" or LEEP) is a procedure with a long history of use and is considered effective.
Laser ablation has less evidence to suggest its use. It may be less effective than other ablative methods. It is extremely expensive, and often used as a last resort.
Formal surgical procedures, performed by a specialist under general anesthesia or spinal anesthesia may be necessary for larger or more extensive warts, intra-anal warts, or warts in children. It carries a greater risk of scarring than other methods.
Topical agents
A 0.15–0.5% podophyllotoxin (also called podofilox) solution in a gel or cream. It can be applied by the patient to the affected area and is not washed off. It is the purified and standardized active ingredient of podophyllin (see below). Podofilox is safer and more effective than podophyllin. Skin erosion and pain are more commonly reported than with imiquimod and sinecatechins. Its use is cycled (2 times per day for 3 days then 4–7 days off); one review states that it should only be used for four cycles.
Imiquimod is a topical immune response cream, applied to the affected area. It causes less local irritation than podofilox but may cause fungal infections (11% in package insert) and flu-like symptoms (less than 5% disclosed in package insert).
Sinecatechins is an ointment of catechins (55% epigallocatechin gallate) extracted from green tea and other components. Mode of action is undetermined. It appears to have higher clearance rates than podophyllotoxin and imiquimod and causes less local irritation, but clearance takes longer than with imiquimod.
Trichloroacetic acid (TCA) is less effective than cryosurgery, and is not recommended for use in the vagina, cervix, or urinary meatus.
Interferon can be used; it is effective, but it is also expensive and its effect is inconsistent.
Discontinued
A 5% 5-fluorouracil (5-FU) cream was used, but it is no longer considered an acceptable treatment due to the side effects.
Podophyllin, podofilox and isotretinoin should not be used during pregnancy, as they could cause birth defects in the fetus.
Epidemiology
Genital HPV infections have an estimated prevalence in the US of 10–20% and clinical manifestations in 1% of the sexually active adult population. US incidence of HPV infection increased between 1975 and 2006. About 80% of those infected are between the ages of 17 and 33. Although treatments can remove warts, they do not remove the HPV, so warts can recur after treatment (about 50–73% of the time). Warts can also spontaneously regress (with or without treatment).
Traditional theories postulated that the virus remained in the body for a lifetime. However, studies using sensitive DNA techniques have shown that, through immunological response, the virus can either be cleared or suppressed to levels below what polymerase chain reaction (PCR) tests can measure. One study testing genital skin for subclinical HPV using PCR found a prevalence of 10%.
Etymology
A condyloma acuminatum is a single genital wart, and condylomata acuminata are multiple genital warts. The word roots mean pointed wart (from Greek κόνδυλος knuckle, Greek -ωμα -oma disease, and Latin acuminatum pointed). Although similarly named, it is not the same as condyloma latum, which is a complication of secondary syphilis.
References
External links
Human Papilloma Virus at Curlie
Convex
Convex or convexity may refer to:
Science and technology
Convex lens, in optics
Mathematics
Convex set, containing the whole line segment that joins points
Convex polygon, a polygon which encloses a convex set of points
Convex polytope, a polytope with a convex set of points
Convex metric space, a generalization of the convexity notion in abstract metric spaces
Convex function, when the line segment between any two points on the graph of the function lies above or on the graph
Convex conjugate, of a function
Convexity (algebraic geometry), a restrictive technical condition for algebraic varieties originally introduced to analyze Kontsevich moduli spaces
Economics and finance
Convexity (finance), second derivatives in financial modeling generally
Convexity in economics
Bond convexity, a measure of the sensitivity of the duration of a bond to changes in interest rates
Convex preferences, an individuals ordering of various outcomes
Other uses
Convex Computer, a former company that produced supercomputers
See also
List of convexity topics
Non-convexity (economics), violations of the convexity assumptions of elementary economics
Obtuse angle
All pages with titles beginning with Convex
Croup
Croup, also known as laryngotracheobronchitis, is a type of respiratory infection that is usually caused by a virus. The infection leads to swelling inside the trachea, which interferes with normal breathing and produces the classic symptoms of "barking/brassy" cough, inspiratory stridor and a hoarse voice. Fever and runny nose may also be present. These symptoms may be mild, moderate, or severe. Often it starts or is worse at night and normally lasts one to two days.
Croup can be caused by a number of viruses including parainfluenza and influenza virus. Rarely is it due to a bacterial infection. Croup is typically diagnosed based on signs and symptoms after potentially more severe causes, such as epiglottitis or an airway foreign body, have been ruled out. Further investigations, such as blood tests, X-rays and cultures, are usually not needed.
Many cases of croup are preventable by immunization for influenza and diphtheria. Most cases of croup are mild and the child can be treated at home with supportive care. Croup is usually treated with a single dose of steroids by mouth. In more severe cases inhaled epinephrine may also be used. Hospitalization is required in one to five percent of cases.
Croup is a relatively common condition that affects about 15% of children at some point. It most commonly occurs between six months and five years of age but may rarely be seen in children as old as fifteen. It is slightly more common in males than females. It occurs most often in autumn. Before vaccination, croup was frequently caused by diphtheria and was often fatal. This cause is now very rare in the Western world due to the success of the diphtheria vaccine.
Signs and symptoms
Croup is characterized by a "barking" cough, stridor, hoarseness, and difficult breathing which usually worsens at night. The "barking" cough is often described as resembling the call of a sea lion. The stridor is worsened by agitation or crying, and if it can be heard at rest, it may indicate critical narrowing of the airways. As croup worsens, stridor may decrease considerably.
Other symptoms include fever, coryza (symptoms typical of the common cold), and indrawing of the chest wall, known as Hoover's sign. Drooling or a very sick appearance can indicate other medical conditions, such as epiglottitis or tracheitis.
Causes
Croup is usually deemed to be due to a viral infection. Others use the term more broadly, to include acute laryngotracheitis (laryngitis and tracheitis together), spasmodic croup, laryngeal diphtheria, bacterial tracheitis, laryngotracheobronchitis, and laryngotracheobronchopneumonitis. The first two conditions involve a viral infection and are generally milder with respect to symptomatology; the last four are due to bacterial infection and are usually of greater severity.
Viral
Viral croup or acute laryngotracheitis is most commonly caused by parainfluenza virus (a member of the paramyxovirus family), primarily types 1 and 2, in 75% of cases. Other viral causes include influenza A and B, measles, adenovirus and respiratory syncytial virus (RSV). Spasmodic croup is caused by the same group of viruses as acute laryngotracheitis, but lacks the usual signs of infection (such as fever, sore throat, and increased white blood cell count). Treatment, and response to treatment, are also similar.
Bacteria and cocci
Croup caused by a bacterial infection is rare. Bacterial croup may be divided into laryngeal diphtheria, bacterial tracheitis, laryngotracheobronchitis, and laryngotracheobronchopneumonitis. Laryngeal diphtheria is due to Corynebacterium diphtheriae, while bacterial tracheitis, laryngotracheobronchitis, and laryngotracheobronchopneumonitis are usually due to a primary viral infection with secondary bacterial growth. The most common cocci implicated are Staphylococcus aureus and Streptococcus pneumoniae, while the most common bacteria are Haemophilus influenzae and Moraxella catarrhalis.
Pathophysiology
The viral infection that causes croup leads to swelling of the larynx, trachea, and large bronchi due to infiltration of white blood cells (especially histiocytes, lymphocytes, plasma cells, and neutrophils). Swelling produces airway obstruction which, when significant, leads to dramatically increased work of breathing and the characteristic turbulent, noisy airflow known as stridor.
Diagnosis
Croup is typically diagnosed based on signs and symptoms. The first step is to exclude other obstructive conditions of the upper airway, especially epiglottitis, an airway foreign body, subglottic stenosis, angioedema, retropharyngeal abscess, and bacterial tracheitis.
A frontal X-ray of the neck is not routinely performed, but if it is done, it may show a characteristic narrowing of the trachea, called the steeple sign, because the subglottic stenosis resembles a steeple in shape. The steeple sign is suggestive of the diagnosis, but is absent in half of cases.
Other investigations (such as blood tests and viral culture) are discouraged, as they may cause unnecessary agitation and thus worsen the stress on the compromised airway. While viral cultures, obtained via nasopharyngeal aspiration, can be used to confirm the exact cause, these are usually restricted to research settings. Bacterial infection should be considered if a person does not improve with standard treatment, at which point further investigations may be indicated.
Severity
The most commonly used system for classifying the severity of croup is the Westley score. It is primarily used for research purposes rather than in clinical practice. It is the sum of points assigned for five factors: level of consciousness, cyanosis, stridor, air entry, and retractions. The points given for each factor vary with the severity of the finding, and the final score ranges from 0 to 17; a sketch of the computation follows the severity bands listed below.
A total score of ≤ 2 indicates mild croup. The characteristic barking cough and hoarseness may be present, but there is no stridor at rest.
A total score of 3–5 is classified as moderate croup. It presents with easily heard stridor, but with few other signs.
A total score of 6–11 is severe croup. It also presents with obvious stridor, but also features marked chest wall indrawing.
A total score of ≥ 12 indicates impending respiratory failure. The barking cough and stridor may no longer be prominent at this stage.
85% of children presenting to the emergency department have mild disease; severe croup is rare (<1%).
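A minimal Python sketch of the computation follows. The per-factor point values are an assumption based on the commonly published form of the score (the source table is not reproduced in this article), chosen so that the five factors sum to the stated maximum of 17; the function and key names are illustrative.

# Assumed point values for the five Westley factors (not taken from this article).
WESTLEY_POINTS = {
    "consciousness": {"normal": 0, "disoriented": 5},
    "cyanosis": {"none": 0, "with_agitation": 4, "at_rest": 5},
    "stridor": {"none": 0, "with_agitation": 1, "at_rest": 2},
    "air_entry": {"normal": 0, "decreased": 1, "markedly_decreased": 2},
    "retractions": {"none": 0, "mild": 1, "moderate": 2, "severe": 3},
}

def westley_score(findings):
    # Sum the points for the five factors; the maximum possible total is 17.
    return sum(WESTLEY_POINTS[factor][finding] for factor, finding in findings.items())

def severity(score):
    # Map the total onto the severity bands described above.
    if score <= 2:
        return "mild"
    if score <= 5:
        return "moderate"
    if score <= 11:
        return "severe"
    return "impending respiratory failure"

# Example: easily heard stridor at rest (2 points) plus mild retractions (1 point).
findings = {"consciousness": "normal", "cyanosis": "none",
            "stridor": "at_rest", "air_entry": "normal", "retractions": "mild"}
print(severity(westley_score(findings)))  # 2 + 1 = 3, i.e. "moderate"

Because the severity bands are contiguous, a single pass of threshold checks classifies any total from 0 to 17.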
Prevention
Croup is contagious during the first few days of the infection. Basic hygiene, including hand washing, can prevent transmission. No vaccine has been developed to prevent croup; however, many cases of croup have been prevented by immunization for influenza and diphtheria. At one time, croup referred to a diphtherial disease, but with vaccination, diphtheria is now rare in the developed world.
Treatment
Most children with croup have mild symptoms and supportive care at home is effective. For children with moderate to severe croup, treatment with corticosteroids and nebulized epinephrine may be suggested. Steroids are given routinely, with epinephrine used in severe cases. Children with oxygen saturation less than 92% should receive oxygen, and those with severe croup may be hospitalized for observation. In very rare severe cases of croup that result in respiratory failure, emergency intubation and ventilation may be required. With treatment, less than 0.2% of children require endotracheal intubation. Since croup is usually a viral disease, antibiotics are not used unless secondary bacterial infection is suspected. The use of cough medicines, which usually contain dextromethorphan or guaifenesin, are also discouraged.
Supportive care
Supportive care for children with croup includes resting and keeping the child hydrated. It is suggested that mild infections be treated at home. Croup is contagious, so washing hands is important. Children with croup should generally be kept as calm as possible. Over-the-counter medications for pain and fever may be helpful to keep the child comfortable. There is some evidence that cool or warm mist may be helpful; however, the effectiveness of this approach is not clear. If the child is showing signs of distress while breathing (inspiratory stridor, working hard to breathe, blue or blue-ish coloured lips, or a decrease in the level of alertness), immediate medical evaluation by a doctor is required.
Steroids
Corticosteroids, such as dexamethasone and budesonide, have been shown to improve outcomes in children with all severities of croup; however, the benefits may be delayed. Significant relief may be obtained as early as two hours after administration. While effective when given by injection or by inhalation, giving the medication by mouth is preferred. A single dose is usually all that is required, and it is generally considered to be quite safe. Dexamethasone doses of 0.15, 0.3 and 0.6 mg/kg all appear to be equally effective.
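The arithmetic behind weight-based dosing is a single multiplication: dose (mg) = dose rate (mg/kg) × body weight (kg). The Python sketch below is for illustration only, not clinical guidance; the function name and the example weight are hypothetical.

def dexamethasone_dose_mg(weight_kg, mg_per_kg=0.15):
    # dose (mg) = rate (mg/kg) * body weight (kg); 0.15, 0.3 and 0.6 mg/kg
    # are the dose rates described above as appearing equally effective.
    return round(mg_per_kg * weight_kg, 2)

print(dexamethasone_dose_mg(12.0))       # 1.8 mg for a hypothetical 12 kg child
print(dexamethasone_dose_mg(12.0, 0.6))  # 7.2 mg at the highest studied rate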
Epinephrine
Moderate to severe croup (for example, in the case of severe stridor) may be improved temporarily with nebulized epinephrine. While epinephrine typically produces a reduction in croup severity within 10–30 minutes, the benefits are short-lived and last for only about 2 hours. If the condition remains improved for 2–4 hours after treatment and no other complications arise, the child is typically discharged from the hospital. Epinephrine treatment is associated with potential adverse effects (usually related to the dose of epinephrine) including tachycardia, arrhythmias, and hypertension.
Oxygen
More severe cases of croup may require treatment with oxygen. If oxygen is needed, "blow-by" administration (holding an oxygen source near the childs face) is recommended, as it causes less agitation than use of a mask.
Other
While other treatments for croup have been studied, none has sufficient evidence to support its use. There is tentative evidence that breathing heliox (a mixture of helium and oxygen) to decrease the work of breathing is useful in those with severe disease; however, its effectiveness is uncertain and its potential adverse effects are not well known. In cases of possible secondary bacterial infection, the antibiotics vancomycin and cefotaxime are recommended. In severe cases associated with influenza A or B infections, antiviral neuraminidase inhibitors may be administered.
Prognosis
Viral croup is usually a self-limiting disease, with half of cases resolving in a day and 80% of cases in two days. It can very rarely result in death from respiratory failure and/or cardiac arrest. Symptoms usually improve within two days, but may last for up to seven days. Other uncommon complications include bacterial tracheitis, pneumonia, and pulmonary edema.
Epidemiology
Croup affects about 15% of children, and usually presents between the ages of 6 months and 5–6 years. It accounts for about 5% of hospital admissions in this population. In rare cases, it may occur in children as young as 3 months and as old as 15 years. Males are affected 50% more frequently than are females, and there is an increased prevalence in autumn.
History
The word croup comes from the Early Modern English verb croup, meaning "to cry hoarsely." The noun describing the disease originated in southeastern Scotland and became widespread after Edinburgh physician Francis Home published the 1765 treatise An Inquiry into the Nature, Cause, and Cure of the Croup.
Diphtheritic croup has been known since the time of Homer's ancient Greece, and it was not until 1826 that viral croup was differentiated from croup due to diphtheria by Bretonneau. Viral croup was then called "faux-croup" by the French and often called "false croup" in English, as "croup" or "true croup" then most often referred to the disease caused by the diphtheria bacterium. False croup has also been known as pseudo croup or spasmodic croup. Croup due to diphtheria has become nearly unknown in affluent countries in modern times due to the advent of effective immunization.
One famous fatality of croup was Napoleon's designated heir, Napoléon Charles Bonaparte. His death in 1807 left Napoleon without an heir and contributed to his decision to divorce his wife, the Empress Josephine de Beauharnais.
References
External links
"Croup". MedlinePlus. U.S. National Library of Medicine. | 124 |
Cryopyrin-associated periodic syndrome
Cryopyrin-associated periodic syndrome (CAPS) is a group of rare, heterogeneous autoinflammatory diseases characterized by interleukin 1β-mediated systemic inflammation and clinical symptoms involving the skin, joints, central nervous system, and eyes. It encompasses a spectrum of three clinically overlapping autoinflammatory syndromes, including familial cold autoinflammatory syndrome (FCAS, formerly termed familial cold-induced urticaria), the Muckle–Wells syndrome (MWS), and neonatal-onset multisystem inflammatory disease (NOMID, also called chronic infantile neurologic cutaneous and articular syndrome or CINCA), that were originally thought to be distinct entities but in fact share a single genetic mutation and pathogenic pathway, as well as keratoendotheliitis fugax hereditaria, in which the autoinflammatory symptoms affect only the anterior segment of the eye.
Signs and symptoms
The syndromes within CAPS overlap clinically, and patients may have features of more than one disorder. In a retrospective cohort of 136 CAPS patients with systemic involvement from 16 countries, the most prevalent clinical features were fever (84% of cases, often with concurrent constitutional symptoms such as fatigue, malaise, mood disorders or failure to thrive), skin rash (either urticarial or maculopapular rash; 97% of cases), especially after cold exposure, and musculoskeletal involvement (myalgia, arthralgia, and/or arthritis, or less commonly joint contracture, patellar overgrowth, bone deformity, bone erosion and/or osteolytic lesion; 86% of cases). Less common features included ophthalmological involvement (conjunctivitis and/or uveitis, or less commonly optic nerve atrophy, cataract, glaucoma or impaired vision; 71% of cases), neurosensory hearing loss (42% of cases), neurological involvement (morning headache, papilloedema, and/or meningitis, or less commonly seizure, hydrocephalus or mental retardation; 40% of cases), and AA amyloidosis (4% of cases).
In keratoendotheliitis fugax hereditaria, systemic symptoms are not reported, whereas the patients experience periodic transient inflammation of the corneal endothelium and stroma, leading to short-term blurring of vision and, after repeated attacks, to central corneal stromal opacities in some patients.
Age of onset is typically in infancy or early childhood. In 57% of cases, CAPS had a chronic phenotype with symptoms present almost daily, whereas the remaining 43% of patients experienced only acute episodes. Up to 56% of patients reported a family history of CAPS. Previous studies confirm these symptoms, although the exact reported rates vary.
Pathogenesis
Cryopyrin-associated periodic syndromes are associated with a gain-of-function missense mutation in exon 3 of NLRP3, the gene encoding cryopyrin, a major component of the interleukin 1 inflammasome. In keratoendotheliitis fugax hereditaria, the mutation occurs in exon 1. Intracellular formation of the interleukin 1 inflammasome leads to the activation of the potent pro-inflammatory cytokines interleukin 1β and interleukin-18 through a cascade involving caspase 1. The IL-1 inflammasome may also be released from activated macrophages, amplifying the cytokine production cascade. The mutation in NLRP3 leads to aberrant formation of this inflammasome and subsequent unregulated production of interleukin 1β. Up to 170 heterogeneous mutations in NLRP3 have been identified. Some reports suggest rare mutations are more frequently associated with a severe phenotype, and some mutations are associated with distinct phenotypes, probably reflecting the differential impact of the mutation on the activity of the inflammasome in the context of individual genetic background. Inheritance of these disorders is autosomal dominant with variable penetrance.
Diagnosis
Because CAPS is extremely rare and has a broad clinical presentation, it is difficult to diagnose, and a significant delay exists between symptom onset and definitive diagnosis. There are currently no clinical or diagnostic criteria for CAPS based solely on clinical presentation. Instead, diagnosis is made by genetic testing for NLRP3 mutations. Acute phase reactants and white blood cell count are usually persistently elevated, but this is nonspecific for CAPS.
Treatment
Since interleukin 1β plays a central role in the pathogenesis of the disease, therapy typically targets this cytokine in the form of monoclonal antibodies (such as canakinumab), binding proteins/traps (such as rilonacept), or interleukin 1 receptor antagonists (such as anakinra). These therapies are generally effective in alleviating symptoms and substantially reducing levels of inflammatory indices. Case reports suggest that thalidomide and the anti-IL-6 receptor antibody tocilizumab may also be effective.
References
Kubota T, Koike R. Cryopyrin-associated periodic syndromes: background and therapeutics. Mod Rheumatol. 2010 Jun;20(3):213-21
Autoinflammatory Alliance CAPS Guidebook
== External links == | 125 |
Cushing's syndrome | Cushing's syndrome is a collection of signs and symptoms due to prolonged exposure to glucocorticoids such as cortisol. Signs and symptoms may include high blood pressure, abdominal obesity but with thin arms and legs, reddish stretch marks, a round red face, a fat lump between the shoulders, weak muscles, weak bones, acne, and fragile skin that heals poorly. Women may have more hair and irregular menstruation. Occasionally there may be changes in mood, headaches, and a chronic feeling of tiredness. Cushing's syndrome is caused by either excessive cortisol-like medication, such as prednisone, or a tumor that either produces or results in the production of excessive cortisol by the adrenal glands. Cases due to a pituitary adenoma are known as Cushing's disease, which is the second most common cause of Cushing's syndrome after medication. A number of other tumors, often referred to as ectopic due to their placement outside the pituitary, may also cause Cushing's. Some of these are associated with inherited disorders such as multiple endocrine neoplasia type 1 and Carney complex. Diagnosis requires a number of steps. The first step is to check the medications a person takes. The second step is to measure levels of cortisol in the urine, saliva or in the blood after taking dexamethasone. If this test is abnormal, the cortisol may be measured late at night. If the cortisol remains high, a blood test for ACTH may be done. Most cases can be treated and cured. If due to medications, these can often be slowly decreased if still required or slowly stopped. If caused by a tumor, it may be treated by a combination of surgery, chemotherapy, and/or radiation. If the pituitary was affected, other medications may be required to replace its lost function. With treatment, life expectancy is usually normal. Some, in whom surgery is unable to remove the entire tumor, have an increased risk of death. About two to three people per million are affected each year. It most commonly affects people who are 20 to 50 years of age. Women are affected three times more often than men. A mild degree of overproduction of cortisol without obvious symptoms, however, is more common. Cushing's syndrome was first described by American neurosurgeon Harvey Cushing in 1932. Cushing's syndrome may also occur in other animals including cats, dogs, and horses.
Signs and symptoms
Symptoms include rapid weight gain, particularly of the trunk and face with sparing of the limbs (central obesity). Common signs include the growth of fat pads along the collarbone, on the back of the neck ("buffalo hump" or lipodystrophy), and on the face ("moon face"). Other symptoms include excess sweating, dilation of capillaries, thinning of the skin (which causes easy bruising and dryness, particularly the hands) and mucous membranes, purple or red striae (the weight gain in Cushing's syndrome stretches the skin, which is thin and weakened, causing it to hemorrhage) on the trunk, buttocks, arms, legs, or breasts, proximal muscle weakness (hips, shoulders), and hirsutism (facial male-pattern hair growth), baldness and/or extremely dry and brittle hair. In rare cases, Cushing's can cause hypocalcemia. The excess cortisol may also affect other endocrine systems and cause, for example, insomnia, inhibited aromatase, reduced libido, impotence in men, and amenorrhoea, oligomenorrhea and infertility in women due to elevations in androgens. Studies have also shown that the resultant amenorrhea is due to hypercortisolism, which feeds back onto the hypothalamus resulting in decreased levels of GnRH release. Many of the features of Cushing's are those seen in metabolic syndrome, including insulin resistance, hypertension, obesity, and elevated blood levels of triglycerides. Cognitive conditions, including memory and attention dysfunctions, as well as depression, are commonly associated with elevated cortisol, and may be early indicators of exogenous or endogenous Cushing's. Depression and anxiety disorders are also common. Other striking and distressing skin changes that may appear in Cushing's syndrome include facial acne, susceptibility to superficial fungus (dermatophyte and malassezia) infections, and the characteristic purplish, atrophic striae on the abdomen.: 500 Other signs include increased urination (and accompanying increased thirst), persistent high blood pressure (due to cortisol's enhancement of epinephrine's vasoconstrictive effect) and insulin resistance (especially common with ACTH production outside the pituitary), leading to high blood sugar, which can lead to diabetes mellitus. Insulin resistance is accompanied by skin changes such as acanthosis nigricans in the axilla and around the neck, as well as skin tags in the axilla. Untreated Cushing's syndrome can lead to heart disease and increased mortality. Cortisol can also exhibit mineralocorticoid activity in high concentrations, worsening the hypertension and leading to hypokalemia (common in ectopic ACTH secretion) and hypernatremia (increased Na+ ion concentration in plasma). Furthermore, excessive cortisol may lead to gastrointestinal disturbances, opportunistic infections, and impaired wound healing related to cortisol's suppression of the immune and inflammatory responses. Osteoporosis is also an issue in Cushing's syndrome since osteoblast activity is inhibited. Additionally, Cushing's syndrome may cause sore and aching joints, particularly in the hip, shoulders, and lower back. Brain changes such as cerebral atrophy may occur. This atrophy is associated with areas of high glucocorticoid receptor concentrations such as the hippocampus and correlates highly with psychopathological personality changes.
Rapid weight gain
Moodiness, irritability, or depression
Muscle and bone weakness
Memory and attention dysfunction
Osteoporosis
Diabetes mellitus
Hypertension
Immune suppression
Sleep disturbances
Menstrual disorders such as amenorrhea in women
Infertility in women
Impotence in men
Hirsutism
Baldness
Hypercholesterolemia
Hyperpigmentation
Cushing's syndrome due to excess ACTH may also result in hyperpigmentation. This is due to melanocyte-stimulating hormone production as a byproduct of ACTH synthesis from pro-opiomelanocortin (POMC). Alternatively, it is proposed that the high levels of ACTH, β-lipotropin, and γ-lipotropin, which contain weak MSH function, can act on the melanocortin 1 receptor. A variant of Cushing's disease can be caused by ectopic, i.e. extra-pituitary, ACTH production from, for example, a small-cell lung cancer. When Cushing's syndrome is caused by an increase of cortisol at the level of the adrenal glands (via an adenoma or hyperplasia), negative feedback ultimately reduces ACTH production in the pituitary. In these cases, ACTH levels remain low and no hyperpigmentation develops.
Causes
Cushing's syndrome may result from any cause of increased glucocorticoid levels, whether due to medication or internal processes. Some sources, however, do not consider the glucocorticoid medication-induced condition as "Cushing's syndrome" proper, instead using the term "Cushingoid" to describe the medication's side effects which mimic the endogenous condition. Cushing's disease is a specific type of Cushing's syndrome caused by a pituitary tumor leading to excessive production of ACTH (adrenocorticotropic hormone). Excessive ACTH stimulates the adrenal cortex to produce high levels of cortisol, producing the disease state. While all Cushing's disease gives Cushing's syndrome, not all Cushing's syndrome is due to Cushing's disease. Several possible causes of Cushing's syndrome are known.
Exogenous
The most common cause of Cushing's syndrome is the use of prescribed glucocorticoids to treat other diseases (iatrogenic Cushing's syndrome). Glucocorticoids are used in treatment of a variety of disorders, including asthma and rheumatoid arthritis, and also used for immunosuppression after organ transplants. Administration of synthetic ACTH is also possible, but ACTH is less often prescribed due to cost and lesser utility. Rarely, Cushing's syndrome can also be due to the use of medroxyprogesterone acetate. In exogenous Cushing's, the adrenal glands may often gradually atrophy due to lack of stimulation by ACTH, the production of which is suppressed by glucocorticoid medication. Abruptly stopping the medication can thus result in acute and potentially life-threatening adrenal insufficiency and the dose must hence be slowly and carefully tapered off to allow internal cortisol production to pick up. In some cases, patients never recover sufficient levels of internal production and must continue taking glucocorticoids at physiological doses for life. Cushing's syndrome in childhood is especially rare and usually results from use of glucocorticoid medication.
Endogenous
Endogenous Cushing's syndrome results from some derangement of the body's own system of cortisol secretion. Normally, ACTH is released from the pituitary gland when necessary to stimulate the release of cortisol from the adrenal glands.
In pituitary Cushing's, a benign pituitary adenoma secretes ACTH. This is also known as Cushing's disease and is responsible for 70% of endogenous Cushing's syndrome.
In adrenal Cushing's, excess cortisol is produced by adrenal gland tumors, hyperplastic adrenal glands, or adrenal glands with nodular adrenal hyperplasia.
Tumors outside the normal pituitary-adrenal system can produce ACTH (occasionally with CRH) that affects the adrenal glands. This etiology is called ectopic or paraneoplastic Cushing's syndrome and is seen in diseases such as small cell lung cancer.
Finally, rare cases of CRH-secreting tumors (without ACTH secretion) have been reported; these stimulate pituitary ACTH production.
Pseudo-Cushing's syndrome
Elevated levels of total cortisol can also be due to estrogen found in oral contraceptive pills that contain a mixture of estrogen and progesterone, leading to pseudo-Cushing's syndrome. Estrogen can cause an increase of cortisol-binding globulin and thereby cause the total cortisol level to be elevated. However, the total free cortisol, which is the active hormone in the body, as measured by a 24-hour urine collection for urinary free cortisol, is normal.
Pathophysiology
The hypothalamus is in the brain and the pituitary gland sits just below it. The paraventricular nucleus (PVN) of the hypothalamus releases corticotropin-releasing hormone (CRH), which stimulates the pituitary gland to release adrenocorticotropin (ACTH). ACTH travels via the blood to the adrenal gland, where it stimulates the release of cortisol. Cortisol is secreted by the cortex of the adrenal gland from a region called the zona fasciculata in response to ACTH. Elevated levels of cortisol exert negative feedback on CRH in the hypothalamus, which decreases the amount of ACTH released from the anterior pituitary gland. Strictly, Cushing's syndrome refers to excess cortisol of any etiology (as syndrome means a group of symptoms). One of the causes of Cushing's syndrome is a cortisol-secreting adenoma in the cortex of the adrenal gland (primary hypercortisolism/hypercorticism). The adenoma causes cortisol levels in the blood to be very high, and negative feedback on the pituitary from the high cortisol levels causes ACTH levels to be very low. Cushing's disease refers only to hypercortisolism secondary to excess production of ACTH from a corticotroph pituitary adenoma (secondary hypercortisolism/hypercorticism) or due to excess production of hypothalamic CRH (corticotropin-releasing hormone) (tertiary hypercortisolism/hypercorticism). This causes the blood ACTH levels to be elevated along with cortisol from the adrenal gland. The ACTH levels remain high because the tumor is unresponsive to negative feedback from high cortisol levels. When Cushing's syndrome is due to extra ACTH it is known as ectopic Cushing's syndrome. This may be seen in a paraneoplastic syndrome.
Diagnosis
Cushing's syndrome can be ascertained via a variety of tests, which include the following:
24-hour urine free cortisol
Dexamethasone suppression test
Saliva cortisol level
When Cushing's syndrome is suspected, either a dexamethasone suppression test (administration of dexamethasone and frequent determination of cortisol and ACTH levels) or a 24-hour urinary measurement for cortisol offers equal detection rates. Dexamethasone is a glucocorticoid and simulates the effects of cortisol, including negative feedback on the pituitary gland. When dexamethasone is administered and a blood sample is tested, cortisol levels >50 nmol/L (1.81 μg/dL) would be indicative of Cushing's syndrome because an ectopic source of cortisol or ACTH (such as adrenal adenoma) exists which is not inhibited by the dexamethasone. A novel approach, recently cleared by the US FDA, is sampling cortisol in saliva over 24 hours, which may be equally sensitive, as late-night levels of salivary cortisol are high in cushingoid patients. Other pituitary hormone levels may need to be ascertained. Performing a physical examination to determine any visual field defect may be necessary if a pituitary lesion is suspected, which may compress the optic chiasm, causing typical bitemporal hemianopia. When any of these tests is positive, CT scanning of the adrenal gland and MRI of the pituitary gland are performed to detect the presence of any adrenal or pituitary adenomas or incidentalomas (the incidental discovery of harmless lesions). Scintigraphy of the adrenal gland with iodocholesterol scan is occasionally necessary. Occasionally, determining the ACTH levels in various veins in the body by venous catheterization, working towards the pituitary (petrosal sinus sampling), is necessary. In many cases, the tumors causing Cushing's disease are less than 2 mm in size and difficult to detect using MRI or CT imaging. In one study of 261 patients with confirmed pituitary Cushing's disease, only 48% of pituitary lesions were identified using MRI prior to surgery. Plasma CRH levels are inadequate at diagnosis (with the possible exception of tumors secreting CRH) because of peripheral dilution and binding to CRHBP.
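The cutoff above is quoted both in nmol/L and in μg/dL. As a quick arithmetic check, the two figures agree once cortisol's molar mass (about 362.46 g/mol for C21H30O5) is taken into account. The short Python sketch below performs the conversion; the function name and structure are illustrative only, not part of any clinical library.

# Sketch: converting the dexamethasone suppression test cutoff between units.
# Assumes cortisol's molar mass of ~362.46 g/mol (C21H30O5).
CORTISOL_MOLAR_MASS = 362.46  # g/mol

def nmol_per_l_to_ug_per_dl(nmol_per_l: float) -> float:
    """Convert a cortisol concentration from nmol/L to ug/dL."""
    grams_per_litre = nmol_per_l * 1e-9 * CORTISOL_MOLAR_MASS  # nmol/L -> g/L
    ug_per_litre = grams_per_litre * 1e6                       # g -> ug
    return ug_per_litre / 10                                   # per L -> per dL

print(round(nmol_per_l_to_ug_per_dl(50), 2))  # 1.81, matching the quoted cutoff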
Treatment
Most cases of Cushingoid symptoms are caused by corticosteroid medications, such as those used for asthma, arthritis, eczema and other inflammatory conditions. Consequently, most patients are effectively treated by carefully tapering off (and eventually stopping) the medication that causes the symptoms. If an adrenal adenoma is identified, it may be removed by surgery. An ACTH-secreting corticotrophic pituitary adenoma should be removed after diagnosis. Regardless of the adenoma's location, most patients require steroid replacement postoperatively at least in the interim, as long-term suppression of pituitary ACTH and normal adrenal tissue does not recover immediately. Clearly, if both adrenals are removed, replacement with hydrocortisone or prednisolone is imperative. In those patients not suited for or unwilling to undergo surgery, several drugs have been found to inhibit cortisol synthesis (e.g. ketoconazole, metyrapone) but they are of limited efficacy. Mifepristone is a powerful glucocorticoid type II receptor antagonist and, since it does not interfere with normal cortisol homeostasis type I receptor transmission, may be especially useful for treating the cognitive effects of Cushing's syndrome. However, the medication faces considerable controversy due to its use as an abortifacient. In February 2012, the FDA approved mifepristone to control high blood sugar levels (hyperglycemia) in adult patients who are not candidates for surgery, or who did not respond to prior surgery, with the warning that mifepristone should never be used by pregnant women, although pregnancy is extremely rare during the course of Cushing's syndrome. In March 2020, Isturisa (osilodrostat) oral tablets, an 11-beta-hydroxylase enzyme inhibitor, were approved by the FDA for treating those patients who cannot undergo pituitary surgery or for patients who underwent surgery but continue to have the disease. Removal of the adrenals in the absence of a known tumor is occasionally performed to eliminate the production of excess cortisol. In some occasions, this removes negative feedback from a previously occult pituitary adenoma, which starts growing rapidly and produces extreme levels of ACTH, leading to hyperpigmentation. This clinical situation is known as Nelson's syndrome.
Epidemiology
Cushing's syndrome caused by treatment with corticosteroids is the most common form. Cushing's disease is rare; a Danish study found an incidence of less than one case per million people per year. However, asymptomatic microadenomas (less than 10 mm in size) of the pituitary are found in about one in six individuals. People with Cushing's syndrome have increased morbidity and mortality as compared to the general population. The most common cause of mortality in Cushing's syndrome is cardiovascular events. People with Cushing's syndrome have nearly 4 times increased cardiovascular mortality as compared to the general population. About 0.9 to 1% of those with Cushing's syndrome have a tendency to develop venous thrombosis. Other factors such as surgery and obesity also increase the chance of developing thrombosis.
Other animals
For more information on the form in horses, see pituitary pars intermedia dysfunction.
See also
Addison's disease
Adrenal insufficiency (hypocortisolism)
Corticosteroid-induced lipodystrophy
References
External links
"Cushings Syndrome". MedlinePlus. U.S. National Library of Medicine. | 126 |
Cyanide poisoning | Cyanide poisoning is poisoning that results from exposure to any of a number of forms of cyanide. Early symptoms include headache, dizziness, fast heart rate, shortness of breath, and vomiting. This phase may then be followed by seizures, slow heart rate, low blood pressure, loss of consciousness, and cardiac arrest. Onset of symptoms usually occurs within a few minutes. Some survivors have long-term neurological problems. Toxic cyanide-containing compounds include hydrogen cyanide gas and a number of cyanide salts. Poisoning is relatively common following breathing in smoke from a house fire. Other potential routes of exposure include workplaces involved in metal polishing, certain insecticides, the medication sodium nitroprusside, and certain seeds such as those of apples and apricots. Liquid forms of cyanide can be absorbed through the skin. Cyanide ions interfere with cellular respiration, resulting in the body's tissues being unable to use oxygen. Diagnosis is often difficult. It may be suspected in a person following a house fire who has a decreased level of consciousness, low blood pressure, or high lactic acid. Blood levels of cyanide can be measured but take time. Levels of 0.5–1 mg/L are mild, 1–2 mg/L are moderate, 2–3 mg/L are severe, and greater than 3 mg/L generally result in death. If exposure is suspected, the person should be removed from the source of exposure and decontaminated. Treatment involves supportive care and giving the person 100% oxygen. Hydroxocobalamin (vitamin B12a) appears to be useful as an antidote and is generally first-line. Sodium thiosulphate may also be given. Historically, cyanide has been used for mass suicide and by the Nazis for genocide.
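The blood concentration bands quoted above lend themselves to a simple lookup. The following Python sketch is purely illustrative and not a clinical tool; the function name and the handling of band boundaries are my own choices. It maps a measured blood cyanide level in mg/L to the severity labels used in this article.

# Illustrative sketch only, not clinical guidance: mapping the blood cyanide
# bands quoted above (in mg/L) to severity labels. Each band is treated as
# inclusive of its lower bound.
def cyanide_severity(mg_per_l: float) -> str:
    if mg_per_l < 0.5:
        return "below the mild range"
    if mg_per_l < 1.0:
        return "mild"
    if mg_per_l < 2.0:
        return "moderate"
    if mg_per_l <= 3.0:
        return "severe"
    return "potentially lethal"

print(cyanide_severity(1.4))  # moderate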
Signs and symptoms
Acute exposure
If hydrogen cyanide is inhaled it can cause a coma with seizures, apnea, and cardiac arrest, with death following in a matter of seconds. At lower doses, loss of consciousness may be preceded by general weakness, dizziness, headaches, vertigo, confusion, and perceived difficulty in breathing. At the first stages of unconsciousness, breathing is often sufficient or even rapid, although the state of the person progresses towards a deep coma, sometimes accompanied by pulmonary edema, and finally cardiac arrest. A cherry red skin color that darkens may be present as the result of increased venous hemoglobin oxygen saturation. Despite the similar name, cyanide does not directly cause cyanosis. A fatal dose for humans can be as low as 1.5 mg/kg body weight. Other sources claim a lethal dose is 1–3 mg per kg body weight for vertebrates.
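Because the quoted minimum fatal dose is expressed per kilogram of body weight, scaling it to an individual is a one-line calculation. A minimal sketch, with the 70 kg mass chosen arbitrarily for illustration:

# Worked arithmetic only, not a toxicology reference: scaling the quoted
# minimum fatal dose (1.5 mg/kg) to body mass.
def min_fatal_dose_mg(body_mass_kg: float, dose_mg_per_kg: float = 1.5) -> float:
    return body_mass_kg * dose_mg_per_kg

print(min_fatal_dose_mg(70))  # 105.0 mg for a 70 kg adult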
Chronic exposure
Exposure to lower levels of cyanide over a long period (e.g., after use of improperly processed cassava roots, which are a primary food source in tropical Africa) results in increased blood cyanide levels, which can result in weakness and a variety of symptoms, including permanent paralysis, nervous lesions, hypothyroidism, and miscarriages. Other effects include mild liver and kidney damage.
Causes
Cyanide poisoning can result from the ingestion of cyanide salts; imbibing pure liquid prussic acid; skin absorption of prussic acid; intravenous infusion of nitroprusside for hypertensive crisis; or the inhalation of hydrogen cyanide gas. The last typically occurs through one of three mechanisms:
The gas is directly released from canisters (e.g. as part of a pesticide, insecticide, or Zyklon B).
It is generated on site by reacting potassium cyanide or sodium cyanide with sulfuric acid (e.g. in a modern American gas chamber).
Fumes arise during a building fire or any similar scenario involving the burning of polyurethane, vinyl or other polymer products that required nitriles in their production.
As potential contributing factors, cyanide is present in:
Tobacco smoke.
Many seeds or kernels such as those of almonds, apricots, apples, oranges, and flaxseed.
Foods including cassava (also known as tapioca, yuca or manioc) and bamboo shoots.
As a potential harm-reduction factor, Vitamin B12, in the form of hydroxocobalamin (also spelled hydroxycobalamin), might reduce the negative effects of chronic exposure, whereas a deficiency might worsen negative health effects following exposure to cyanide.
Mechanism
Cyanide is a potent cytochrome c oxidase (COX, a.k.a. Complex IV) inhibitor. As such, cyanide poisoning is a form of histotoxic hypoxia, because it interferes with oxidative phosphorylation.: 1475 Specifically, cyanide binds to the heme a3-CuB binuclear center of COX (and thus is a non-competitive inhibitor of it). This prevents electrons passing through COX from being transferred to O2, which not only blocks the mitochondrial electron transport chain but also interferes with the pumping of a proton out of the mitochondrial matrix which would otherwise occur at this stage. Therefore, cyanide interferes not only with aerobic respiration but also with the ATP synthesis pathway it facilitates, owing to the close relationship between those two processes.: 705 One antidote for cyanide poisoning, nitrite (i.e. via amyl nitrite), works by converting ferrohemoglobin to ferrihemoglobin, which can then compete with COX for free cyanide (as the cyanide will bind to the iron in its heme groups instead). Ferrihemoglobin cannot carry oxygen, but the amount of ferrihemoglobin that can be formed without impairing oxygen transport is much greater than the amount of COX in the body.: 1475 Cyanide is a broad-spectrum poison because the reaction it inhibits is essential to aerobic metabolism; COX is found in many forms of life. However, susceptibility to cyanide is far from uniform across affected species; for instance, plants have an alternative electron transfer pathway available that passes electrons directly from ubiquinone to O2, which confers cyanide resistance by bypassing COX.: 704
Diagnosis
Lactate is produced by anaerobic glycolysis when oxygen concentration becomes too low for the normal aerobic respiration pathway. Cyanide poisoning inhibits aerobic respiration and therefore increases anaerobic glycolysis, which causes a rise of lactate in the plasma. A lactate concentration above 10 mmol per liter is an indicator of cyanide poisoning, as defined by the presence of a blood cyanide concentration above 40 µmol per liter. Lactate levels greater than 6 mmol/L after reported or strongly suspected pure cyanide poisoning, such as cyanide-containing smoke exposure, suggest significant cyanide exposure. Methods of detection include colorimetric assays such as the Prussian blue test, the pyridine-barbiturate assay, also known as the "Conway diffusion method", and the taurine fluorescence-HPLC, but like all colorimetric assays these are prone to false positives. Lipid peroxidation resulting in "TBARS", an artifact of heart attack, produces dialdehydes that cross-react with the pyridine-barbiturate assay. Meanwhile, the taurine-fluorescence-HPLC assay used for cyanide detection is identical to the assay used to detect glutathione in spinal fluid.
Cyanide and thiocyanate assays have been run with mass spectrometry (LC/MS/MS), which are considered specific tests. Since cyanide has a short half-life, the main metabolite, thiocyanate, is typically measured to determine exposure. Other methods of detection include the identification of plasma lactate.
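The 40 µmol per liter threshold above and the severity bands given earlier in this article (in mg/L) can be reconciled with a unit conversion, assuming the cyanide ion's molar mass of about 26.02 g/mol (12.01 for carbon plus 14.01 for nitrogen). A small sketch:

# Sketch: relating the 40 umol/L blood cyanide threshold to the mg/L scale
# used earlier in this article. Assumes a CN- molar mass of ~26.02 g/mol.
CN_MOLAR_MASS = 26.02  # g/mol

def umol_per_l_to_mg_per_l(umol_per_l: float) -> float:
    return umol_per_l * 1e-6 * CN_MOLAR_MASS * 1e3  # umol -> mol -> g -> mg

print(round(umol_per_l_to_mg_per_l(40), 2))  # 1.04 mg/L, i.e. the moderate band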
Treatment
Decontamination
Decontamination of people exposed to hydrogen cyanide gas only requires removal of the outer clothing and the washing of their hair. Those exposed to liquids or powders generally require full decontamination.
Antidote
The International Programme on Chemical Safety issued a survey (IPCS/CEC Evaluation of Antidotes Series) that lists the following antidotal agents and their effects: oxygen, sodium thiosulfate, amyl nitrite, sodium nitrite, 4-dimethylaminophenol, hydroxocobalamin, and dicobalt edetate (Kelocyanor), as well as several others. Other commonly-recommended antidotes are solutions A and B (a solution of ferrous sulfate in aqueous citric acid, and aqueous sodium carbonate, respectively) and amyl nitrite.
The United States standard cyanide antidote kit first uses a small inhaled dose of amyl nitrite, followed by intravenous sodium nitrite, followed by intravenous sodium thiosulfate. Hydroxocobalamin was approved for use in the US in late 2006 and is available in Cyanokit antidote kits. Sulfanegen TEA, which could be delivered to the body through an intra-muscular (IM) injection, detoxifies cyanide and converts the cyanide into thiocyanate, a less toxic substance. Alternative methods of treating cyanide intoxication are used in other countries.
The United Kingdom's Health and Safety Executive (HSE) has recommended against the use of solutions A and B because of their limited shelf life, potential to cause iron poisoning, and limited applicability (effective only in cases of cyanide ingestion, whereas the main modes of poisoning are inhalation and skin contact). The HSE has also questioned the usefulness of amyl nitrite due to storage/availability problems, risk of abuse, and lack of evidence of significant benefits. It also states that the availability of Kelocyanor at the workplace may mislead doctors into treating a patient for cyanide poisoning when this is an erroneous diagnosis. The HSE no longer recommends a particular cyanide antidote.
History
Fires
The República Cromañón nightclub fire broke out in Buenos Aires, Argentina on 30 December 2004, killing 194 people and leaving at least 1,492 injured. Most of the victims died from inhaling poisonous gases and carbon monoxide. After the fire, the technical institution INTI found that the level of toxicity due to the materials and volume of the building was 225 ppm of cyanide in the air. A lethal dose for rats is between 150 ppm and 220 ppm, meaning the air in the building was highly toxic.
On 5 December 2009, a fire in the night club Lame Horse (Khromaya Loshad) in the Russian city of Perm took the lives of 156 people. Fatalities consisted of 111 people at the site and 45 later in hospitals. One of the main causes of death was poisoning from cyanide and other toxic gases released by the burning of plastic and polyurethane foam used in the construction of club interiors. By the number of deaths, it was the deadliest fire in post-Soviet Russia. On 27 January 2013, a fire at the Kiss nightclub in the city of Santa Maria, in the south of Brazil, caused the poisoning of hundreds of young people by cyanide released by the combustion of soundproofing foam made with polyurethane. By March 2013, 245 fatalities were confirmed.
Gas chambers
In early 1942, Zyklon B, which contains hydrogen cyanide, emerged as the preferred killing tool of Nazi Germany for use in extermination camps during the Holocaust. The chemical was used to murder roughly one million people in gas chambers installed in extermination camps at Auschwitz-Birkenau, Majdanek, and elsewhere. Most of the people who were murdered were Jews, and by far the majority of these murders took place at Auschwitz. Zyklon B was supplied to concentration camps at Mauthausen, Dachau, and Buchenwald by the distributor Heli, and to Auschwitz and Majdanek by Testa. Camps also occasionally bought Zyklon B directly from the manufacturers. Of the 729 tonnes of Zyklon B sold in Germany in 1942–44, 56 tonnes (about eight percent of domestic sales) were sold to concentration camps. Auschwitz received 23.8 tonnes, of which six tonnes were used for fumigation. The remainder was used in the gas chambers or lost to spoilage (the product had a stated shelf life of only three months). Testa conducted fumigations for the Wehrmacht and supplied them with Zyklon B. They also offered courses to the SS in the safe handling and use of the material for fumigation purposes. In April 1941, the German agriculture and interior ministries designated the SS as an authorized applier of the chemical, and thus they were able to use it without any further training or governmental oversight. Hydrogen cyanide gas has been used for judicial execution in some states of the United States, where cyanide was generated by reaction between potassium cyanide (or sodium cyanide) dropped into a compartment containing sulfuric acid, directly below the chair in the gas chamber.
Suicide
Cyanide salts are sometimes used as fast-acting suicide devices. Because stomach acid converts cyanide salts to hydrogen cyanide, they act more rapidly when stomach acidity is high.
On 26 January 1904, company promoter and swindler Whitaker Wright committed suicide by ingesting cyanide in a court anteroom immediately after being convicted of fraud.
In February 1937, the Uruguayan short story writer Horacio Quiroga committed suicide by drinking cyanide in a hospital at Buenos Aires.
In 1937, polymer chemist Wallace Carothers committed suicide by cyanide.
In the 1943 Operation Gunnerside to destroy the Vemork Heavy Water Plant in World War II (an attempt to stop or slow German atomic bomb progress), the commandos were given cyanide tablets (cyanide enclosed in rubber) kept in the mouth and were instructed to bite into them in case of German capture. The tablets ensured death within three minutes.
Cyanide, in the form of pure liquid prussic acid (a historical name for hydrogen cyanide), was the favored suicide agent of Nazi Germany. Erwin Rommel (1944), Adolf Hitler's wife Eva Braun (1945), and Nazi leaders Heinrich Himmler (1945), possibly Martin Bormann (1945), and Hermann Göring (1946) all committed suicide by ingesting it.
It is speculated that, in 1954, Alan Turing used an apple that had been injected with a solution of cyanide to commit suicide after being convicted of having a homosexual relationship, which was illegal at the time in the United Kingdom, and forced to undergo hormonal castration to avoid prison. An inquest determined that Turing's death from cyanide poisoning was a suicide, although this has been disputed.
Members of the Sri Lankan LTTE (Liberation Tigers of Tamil Eelam, whose insurgency lasted from 1983 to 2009), used to wear cyanide vials around their necks with the intention of committing suicide if captured by the government forces.
On 22 June 1977, in Moscow, Aleksandr Dmitrievich Ogorodnik, a Soviet diplomat accused of spying on behalf of the Colombian Intelligence Agency and the US Central Intelligence Agency, was arrested. During the interrogations, Ogorodnik offered to write a full confession and asked for his pen. Inside the pen cap was a cleverly hidden cyanide pill, which, when bitten, caused Ogorodnik to die before he hit the floor, according to the Soviets.
On 18 November 1978, a total of 909 individuals died in Jonestown, Guyana, many from apparent cyanide poisoning, in an event termed "revolutionary suicide" by Jones and some members on an audio tape of the event and in prior discussions. The poisonings in Jonestown followed the murder of five others by Temple members at Port Kaituma, including United States Congressman Leo Ryan, an act that Jones ordered. Four other Temple members committed murder-suicide in Georgetown at Jones's command.
On 6 June 1985, serial killer Leonard Lake died in custody after having ingested cyanide pills he had sewn into his clothes.
On 28 June 2012, Wall Street trader Michael Marin ingested a cyanide pill seconds after a guilty verdict was read in his arson trial in Phoenix, AZ; he died minutes after.
On 22 June 2015, John B. McLemore, a horologist and the central figure of the podcast S-Town, died after ingesting cyanide.
On 29 November 2017, Slobodan Praljak died from drinking potassium cyanide, after being convicted of war crimes by the International Criminal Tribunal for the former Yugoslavia.
Mining and industrial
In 1993, an illegal spill resulted in the death of seven people in Avellaneda, Argentina. In their memory, the National Environmental Consciousness Day (Día Nacional de la Conciencia Ambiental) was established.
In 2000, a spill at Baia Mare, Romania, resulted in the worst environmental disaster in Europe since Chernobyl.
In 2000, Allen Elias, CEO of Evergreen Resources was convicted of knowing endangerment for his role in the cyanide poisoning of employee Scott Dominguez. This was one of the first successful criminal prosecutions of a corporate executive by the Environmental Protection Agency.
Murder
John Tawell, a murderer who in 1845 became the first person to be arrested as the result of telecommunications technology.
Grigori Rasputin (1916; attempted, later killed by gunshot)
The Goebbels children (1945)
Stepan Bandera (1959)
Jonestown, Guyana, was the site of a large mass murder–suicide, in which over 900 members of the Peoples Temple drank potassium cyanide–laced Flavor Aid in 1978.
Chicago Tylenol murders (1982)
Timothy Marc O'Bryan (1966–1974) died on October 31, 1974, by ingesting potassium cyanide placed into a giant Pixy Stix. His father, Ronald Clark O'Bryan, was convicted of Tim's murder plus four counts of attempted murder. O'Bryan put potassium cyanide into five giant Pixy Stix that he gave to his son and daughter along with three other children. Only Timothy ate the poisoned candy and died.
Bruce Nickell (5 June 1986), murdered by his wife, who poisoned a bottle of Excedrin.
Richard Kuklinski (1935–2006)
Janet Overton (1942–1988). Her husband, Richard Overton, was convicted of poisoning her, but Janet's symptoms did not match those of classic cyanide poisoning, the timeline was inconsistent with cyanide poisoning, and the amount found was just a trace. The diagnostic method used was prone to false positives. Richard Overton died in prison in 2009.
Urooj Khan (1966–2012) won the lottery and was found dead a few days later. A blood diagnostic reported a lethal level of cyanide in his blood, but the body did not display any classic symptoms of cyanide poisoning, and no link to cyanide could be found in Urooj's social circle. The diagnostic method used was the Conway diffusion method, prone to false positives with artifacts of heart attack and kidney failure.
Autumn Marie Klein (20 April 2013), a prominent 41-year-old neuroscientist and physician, died from cyanide poisoning. Klein's husband, Robert J. Ferrante, also a prominent neuroscientist who used cyanide in his research, was convicted of murder and sentenced to life in prison for her death. Robert Ferrante is appealing his conviction.
Mirna Salihin died in hospital on 6 January 2016, after drinking a Vietnamese iced coffee at a cafe in a shopping mall in Jakarta. Police reports claim that cyanide poisoning was the most likely cause of her death.
Jolly Thomas of Kozhikode, Kerala, India, was arrested in 2019 for the murder of six family members. The murders took place over a 14-year period, and each victim ate a meal prepared by the killer. The murders were allegedly motivated by a desire to control the family finances and property.
Mei Xiang Li of Brooklyn, NY, collapsed and died in April 2017, with cyanide later reported to be in her blood. However, Mei never exhibited symptoms of cyanide poisoning and no link to cyanide could be found in her life.
Warfare or terrorism
In 1988, between 3,200 and 5,000 people died in the Halabja massacre owing to unknown chemical nerve agents. Hydrogen cyanide gas was strongly suspected.
In 1995, a device was discovered in a restroom in the Kayabacho Tokyo subway station, consisting of bags of sodium cyanide and sulfuric acid with a remote controlled motor to rupture them in what was believed to be an attempt by the Aum Shinrikyo cult to produce toxic amounts of hydrogen cyanide gas.
In 2003, Al Qaeda reportedly planned to release cyanide gas into the New York City Subway system. The attack was supposedly aborted because there would not be enough casualties.
Research
Cobinamide is the final compound in the biosynthesis of cobalamin. It has greater affinity for cyanide than cobalamin itself, which suggests that it could be a better option for emergency treatment.
See also
Anaerobic glycolysis
Lactic acidosis
List of poisonings
Konzo
References
Explanatory notes
Citations
Sources
Longerich, Peter (2010). Holocaust: The Nazi Persecution and Murder of the Jews. Oxford; New York: Oxford University Press. ISBN 978-0-19-280436-5.
Hayes, Peter (2004). From Cooperation to Complicity: Degussa in the Third Reich. Cambridge; New York; Melbourne: Cambridge University Press. ISBN 978-0-521-78227-2.
Piper, Franciszek (1994). "Gas Chambers and Crematoria". In Gutman, Yisrael; Berenbaum, Michael (eds.). Anatomy of the Auschwitz Death Camp. Bloomington, Indiana: Indiana University Press. pp. 157–182. ISBN 978-0-253-32684-3. | 127 |
Cycloplegia | Cycloplegia is paralysis of the ciliary muscle of the eye, resulting in a loss of accommodation. Because of the paralysis of the ciliary muscle, the curvature of the lens can no longer be adjusted to focus on nearby objects. This results in similar problems as those caused by presbyopia, in which the lens has lost elasticity and can also no longer focus on close-by objects. Cycloplegia with accompanying mydriasis (dilation of pupil) is usually due to topical application of muscarinic antagonists such as atropine and cyclopentolate.
Belladonna alkaloids are used for testing the error of refraction and for examination of the eye.
Management
Cycloplegic drugs are generally muscarinic receptor blockers. These include atropine, cyclopentolate, homatropine, scopolamine and tropicamide. They are indicated for use in cycloplegic refraction (to paralyze the ciliary muscle in order to determine the true refractive error of the eye) and the treatment of uveitis. All cycloplegics are also mydriatic (pupil dilating) agents and are used as such during eye examination to better visualize the retina.
When cycloplegic drugs are used as a mydriatic to dilate the pupil, the pupil in the normal eye regains its function when the drugs are metabolized or carried away. Some cycloplegic drugs can cause dilation of the pupil for several days. The ones specifically used by ophthalmologists or optometrists wear off in hours, but when the patient leaves the office, strong sunglasses are provided for comfort.
See also
References
External links
Kels, Barry D.; Grzybowski, Andrzej; Grant-Kels, Jane M. (March 2015). "Human ocular anatomy". Clinics in Dermatology. 33 (2): 140–146. doi:10.1016/j.clindermatol.2014.10.006. PMID 25704934.
van der Hoeve, J.; Flieringa, H. J. (1 March 1924). "Accommodation". British Journal of Ophthalmology. 8 (3): 97–106. doi:10.1136/bjo.8.3.97. PMC 512904. PMID 18168370. | 128 |
Cystic fibrosis | Cystic fibrosis (CF) is a rare genetic disorder that affects mostly the lungs, but also the pancreas, liver, kidneys, and intestine. Long-term issues include difficulty breathing and coughing up mucus as a result of frequent lung infections. Other signs and symptoms may include sinus infections, poor growth, fatty stool, clubbing of the fingers and toes, and infertility in most males. Different people may have different degrees of symptoms. Cystic fibrosis is inherited in an autosomal recessive manner. It is caused by the presence of mutations in both copies of the gene for the cystic fibrosis transmembrane conductance regulator (CFTR) protein. Those with a single working copy are carriers and otherwise mostly healthy. CFTR is involved in the production of sweat, digestive fluids, and mucus. When the CFTR is not functional, secretions which are usually thin instead become thick. The condition is diagnosed by a sweat test and genetic testing. Screening of infants at birth takes place in some areas of the world. There is no known cure for cystic fibrosis. Lung infections are treated with antibiotics which may be given intravenously, inhaled, or by mouth. Sometimes, the antibiotic azithromycin is used long term. Inhaled hypertonic saline and salbutamol may also be useful. Lung transplantation may be an option if lung function continues to worsen. Pancreatic enzyme replacement and fat-soluble vitamin supplementation are important, especially in the young. Airway clearance techniques such as chest physiotherapy have some short-term benefit, but long-term effects are unclear. The average life expectancy is between 42 and 50 years in the developed world. Lung problems are responsible for death in 80% of people with cystic fibrosis. CF is most common among people of Northern European ancestry and affects about one out of every 3,000 newborns. About one in 25 people is a carrier. It is least common in Africans and Asians. It was first recognized as a specific disease by Dorothy Andersen in 1938, with descriptions that fit the condition occurring at least as far back as 1595. The name "cystic fibrosis" refers to the characteristic fibrosis and cysts that form within the pancreas.
Signs and symptoms
Cystic fibrosis typically manifests early in life. Newborns and infants with cystic fibrosis tend to have frequent, large, greasy stools (a result of malabsorption) and are underweight for their age. 15–20% of newborns have their small intestine blocked by meconium, often requiring surgery to correct. Newborns occasionally have neonatal jaundice due to blockage of the bile ducts. Children with cystic fibrosis lose excessive salt in their sweat, and parents often notice salt crystallizing on the skin, or a salty taste when they kiss their child. The primary cause of morbidity and death in people with cystic fibrosis is progressive lung disease, which eventually leads to respiratory failure. This typically begins as a prolonged respiratory infection that continues until treated with antibiotics. Chronic infection of the respiratory tract is nearly universal in people with cystic fibrosis, with Pseudomonas aeruginosa, fungi, and mycobacteria all increasingly common over time. Inflammation of the upper airway results in frequent runny nose and nasal obstruction. Nasal polyps are common, particularly in children and teenagers. As the disease progresses, people tend to have shortness of breath, and a chronic cough that produces sputum. Breathing problems make it increasingly challenging to exercise, and prolonged illness causes those affected to be underweight for their age. In late adolescence or adulthood, people begin to develop severe signs of lung disease: wheezing, digital clubbing, cyanosis, coughing up blood, pulmonary heart disease, and collapsed lung (atelectasis or pneumothorax). In rare cases, cystic fibrosis can manifest itself as a coagulation disorder. Vitamin K is normally absorbed from breast milk, formula, and later, solid foods. This absorption is impaired in some CF patients. Young children are especially sensitive to vitamin K malabsorptive disorders because only a very small amount of vitamin K crosses the placenta, leaving the child with very low reserves and limited ability to absorb vitamin K from dietary sources after birth. Because clotting factors II, VII, IX, and X are vitamin K–dependent, low levels of vitamin K can result in coagulation problems. Consequently, when a child presents with unexplained bruising, a coagulation evaluation may be warranted to determine whether an underlying disease is present.
Lungs and sinuses
Lung disease results from clogging of the airways due to mucus build-up, decreased mucociliary clearance, and resulting inflammation. In later stages, changes in the architecture of the lung, such as pathology in the major airways (bronchiectasis), further exacerbate difficulties in breathing. Other signs include high blood pressure in the lung (pulmonary hypertension), heart failure, difficulties getting enough oxygen to the body (hypoxia), and respiratory failure requiring support with breathing masks, such as bilevel positive airway pressure machines or ventilators. Staphylococcus aureus, Haemophilus influenzae, and Pseudomonas aeruginosa are the three most common organisms causing lung infections in CF patients.: 1254 In addition, opportunistic infection due to Burkholderia cepacia complex can occur, especially through transmission from patient to patient. In addition to typical bacterial infections, people with CF more commonly develop other types of lung diseases. Among these is allergic bronchopulmonary aspergillosis, in which the body's response to the common fungus Aspergillus fumigatus causes worsening of breathing problems. Another is infection with Mycobacterium avium complex, a group of bacteria related to tuberculosis, which can cause lung damage and do not respond to common antibiotics. Mucus in the paranasal sinuses is equally thick and may also cause blockage of the sinus passages, leading to infection. This may cause facial pain, fever, nasal drainage, and headaches. Individuals with CF may develop overgrowth of the nasal tissue (nasal polyps) due to inflammation from chronic sinus infections. Recurrent sinonasal polyps can occur in 10% to 25% of CF patients.: 1254 These polyps can block the nasal passages and increase breathing difficulties. Cardiorespiratory complications are the most common causes of death (about 80%) in patients at most CF centers in the United States.: 1254
Gastrointestinal
In addition, protrusion of internal rectal membranes (rectal prolapse) is more common, occurring in as many as 10% of children with CF, and it is caused by increased fecal volume, malnutrition, and increased intra-abdominal pressure due to coughing. The thick mucus seen in the lungs has a counterpart in thickened secretions from the pancreas, an organ responsible for providing digestive juices that help break down food. These secretions block the exocrine movement of the digestive enzymes into the duodenum and result in irreversible damage to the pancreas, often with painful inflammation (pancreatitis). The pancreatic ducts are totally plugged in more advanced cases, usually seen in older children or adolescents. This causes atrophy of the exocrine glands and progressive fibrosis. Individuals with CF also have difficulties absorbing the fat-soluble vitamins A, D, E, and K. In addition to the pancreas problems, people with CF experience more heartburn, intestinal blockage by intussusception, and constipation. Older individuals with CF may develop distal intestinal obstruction syndrome, which occurs when feces becomes thick with mucus (inspissated) and can cause bloating, pain, and incomplete or complete bowel obstruction. Exocrine pancreatic insufficiency occurs in the majority (85% to 90%) of patients with CF.: 1253 It is mainly associated with "severe" CFTR mutations, where both alleles are completely nonfunctional (e.g. ΔF508/ΔF508).: 1253 It occurs in 10% to 15% of patients with one "severe" and one "mild" CFTR mutation where little CFTR activity still occurs, or where two "mild" CFTR mutations exist.: 1253 In these milder cases, sufficient pancreatic exocrine function is still present so that enzyme supplementation is not required.: 1253 Usually, no other GI complications occur in pancreas-sufficient phenotypes, and in general, such individuals usually have excellent growth and development.: 1254 Despite this, idiopathic chronic pancreatitis can occur in a subset of pancreas-sufficient individuals with CF, and is associated with recurrent abdominal pain and life-threatening complications. Thickened secretions also may cause liver problems in patients with CF. Bile secreted by the liver to aid in digestion may block the bile ducts, leading to liver damage. Impaired digestion or absorption of lipids can result in steatorrhea. Over time, this can lead to scarring and nodularity (cirrhosis). The liver fails to rid the blood of toxins and does not make important proteins, such as those responsible for blood clotting. Liver disease is the third-most common cause of death associated with CF. Around 5–7% of people experience liver damage severe enough to cause symptoms: typically gallstones causing biliary colic.
Endocrine
The pancreas contains the islets of Langerhans, which are responsible for making insulin, a hormone that helps regulate blood glucose. Damage to the pancreas can lead to loss of the islet cells, leading to a type of diabetes unique to those with the disease. This cystic fibrosis-related diabetes shares characteristics of type 1 and type 2 diabetes, and is one of the principal nonpulmonary complications of CF. Vitamin D is involved in calcium and phosphate regulation. Poor uptake of vitamin D from the diet because of malabsorption can lead to the bone disease osteoporosis in which weakened bones are more susceptible to fractures.
Infertility
Infertility affects both men and women. At least 97% of men with cystic fibrosis are infertile, but not sterile, and can have children with assisted reproductive techniques. The main cause of infertility in men with CF is congenital absence of the vas deferens (which normally connects the testes to the ejaculatory ducts of the penis), but infertility may also result from other mechanisms, such as the absence of sperm, abnormally shaped sperm, or few sperm with poor motility. Many men found to have congenital absence of the vas deferens during evaluation for infertility have a mild, previously undiagnosed form of CF. Around 20% of women with CF have fertility difficulties due to thickened cervical mucus or malnutrition. In severe cases, malnutrition disrupts ovulation and causes a lack of menstruation.
Causes
CF is caused by a mutation in the gene cystic fibrosis transmembrane conductance regulator (CFTR). The most common mutation, ΔF508, is a deletion (Δ signifying deletion) of three nucleotides that results in a loss of the amino acid phenylalanine (F) at the 508th position on the protein. This mutation accounts for two-thirds (66–70%) of CF cases worldwide and 90% of cases in the United States; however, over 1500 other mutations can produce CF. Although most people have two working copies (alleles) of the CFTR gene, only one is needed to prevent cystic fibrosis. CF develops when neither allele can produce a functional CFTR protein. Thus, CF is considered an autosomal recessive disease. The CFTR gene, found at the q31.2 locus of chromosome 7, is 230,000 base pairs long, and creates a protein that is 1,480 amino acids long. More specifically, the location is between base pair 117,120,016 and 117,308,718 on the long arm of chromosome 7, region 3, band 1, subband 2, represented as 7q31.2. Structurally, the CFTR is a type of gene known as an ABC gene. The product of this gene (the CFTR protein) is a chloride ion channel important in creating sweat, digestive juices, and mucus. This protein possesses two ATP-hydrolyzing domains, which allows the protein to use energy in the form of ATP. It also contains two domains comprising six alpha helices apiece, which allow the protein to cross the cell membrane. A regulatory binding site on the protein allows activation by phosphorylation, mainly by cAMP-dependent protein kinase. The carboxyl terminal of the protein is anchored to the cytoskeleton by a PDZ domain interaction. The majority of CFTR in the lung passages is produced by rare ion-transporting cells that regulate mucus properties. In addition, the evidence is increasing that genetic modifiers besides CFTR modulate the frequency and severity of the disease. One example is mannan-binding lectin, which is involved in innate immunity by facilitating phagocytosis of microorganisms. Polymorphisms in one or both mannan-binding lectin alleles that result in lower circulating levels of the protein are associated with a threefold higher risk of end-stage lung disease, as well as an increased burden of chronic bacterial infections.
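Because ΔF508 deletes exactly three nucleotides, it removes a single amino acid while leaving the downstream reading frame intact, which is why the remainder of the protein sequence is unaffected. The toy Python sketch below illustrates the principle; the sequence is invented for illustration (it is not the real CFTR sequence), and the codon table is truncated to the four codons used.

# Toy illustration of an in-frame three-nucleotide deletion: removing one
# codon deletes a single amino acid (here phenylalanine, F) while preserving
# the reading frame. Invented sequence, not the real CFTR gene.
CODONS = {"ATT": "I", "TTT": "F", "GGT": "G", "GTT": "V"}

def translate(dna: str) -> str:
    return "".join(CODONS[dna[i:i + 3]] for i in range(0, len(dna), 3))

normal = "ATTTTTGGTGTT"            # codons: ATT TTT GGT GTT
mutant = normal[:3] + normal[6:]   # delete the TTT codon (3 nucleotides)

print(translate(normal))  # IFGV
print(translate(mutant))  # IGV  (one phenylalanine lost, frame preserved)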
Carriers
Up to one in 25 individuals of Northern European ancestry is considered a genetic carrier. The disease appears only when two of these carriers have children, as each pregnancy between them has a 25% chance of producing a child with the disease. Although only about one of every 3,000 newborns of the affected ancestry has CF, more than 900 mutations of the gene that causes CF are known. Current tests look for the most common mutations. The mutations screened by the test vary according to a person's ethnic group or by the occurrence of CF already in the family. More than 10 million Americans, including one in 25 white Americans, are carriers of one mutation of the CF gene. CF is present in other races, though not as frequently as in white individuals. About one in 46 Hispanic Americans, one in 65 African Americans, and one in 90 Asian Americans carry a mutation of the CF gene.
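The carrier frequency and incidence figures above can be cross-checked with basic probability: an affected child requires two carrier parents (each carrying with probability about 1/25 in this population) and a 1-in-4 inheritance outcome. A back-of-envelope sketch, assuming random mating:

# Back-of-envelope check, not an epidemiological model: with a carrier
# frequency of 1 in 25, the expected incidence is (1/25) * (1/25) * (1/4),
# the same order as the quoted figure of about 1 in 3,000 newborns.
carrier_freq = 1 / 25
incidence = carrier_freq * carrier_freq * 0.25  # both parents carriers x 1/4

print(f"about 1 in {round(1 / incidence)}")  # about 1 in 2500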
Pathophysiology
Several mutations in the CFTR gene can occur, and different mutations cause different defects in the CFTR protein, sometimes causing a milder or more severe disease. These protein defects are also targets for drugs which can sometimes restore their function. The ΔF508-CFTR gene mutation, which occurs in >90% of patients in the U.S., creates a protein that does not fold normally and is not appropriately transported to the cell membrane, resulting in its degradation. Other mutations result in proteins that are too short (truncated) because production is ended prematurely. Other mutations produce proteins that do not use energy (in the form of ATP) normally, do not allow chloride, iodide, and thiocyanate to cross the membrane appropriately, and degrade at a faster rate than normal. Mutations may also lead to fewer copies of the CFTR protein being produced. The protein created by this gene is anchored to the outer membrane of cells in the sweat glands, lungs, pancreas, and all other remaining exocrine glands in the body.
The protein spans this membrane and acts as a channel connecting the inner part of the cell (cytoplasm) to the surrounding fluid. This channel is primarily responsible for controlling the movement of halide anions from inside to outside of the cell; however, in the sweat ducts, it facilitates the movement of chloride from the sweat duct into the cytoplasm. When the CFTR protein does not resorb ions in sweat ducts, chloride and thiocyanate released from sweat glands are trapped inside the ducts and pumped to the skin.
Additionally, hypothiocyanite (OSCN) cannot be produced by the immune defense system. Because chloride is negatively charged, this modifies the electrical potential inside and outside the cell that normally causes cations to cross into the cell. Sodium is the most common cation in the extracellular space. The excess chloride within sweat ducts prevents sodium resorption by epithelial sodium channels, and the combination of sodium and chloride creates the salt that is lost in high amounts in the sweat of individuals with CF. This lost salt forms the basis for the sweat test.

Most of the damage in CF is due to blockage of the narrow passages of affected organs with thickened secretions. These blockages lead to remodeling and infection in the lung, damage by accumulated digestive enzymes in the pancreas, blockage of the intestines by thick feces, and so on. Several theories have been posited on how the defects in the protein and cellular function cause the clinical effects. The most current theory suggests that defective ion transport leads to dehydration in the airway epithelia, thickening mucus. In airway epithelial cells, the cilia exist between the cell's apical surface and mucus in a layer known as airway surface liquid (ASL). The flow of ions from the cell into this layer is determined by ion channels such as CFTR. CFTR not only allows chloride ions to be drawn from the cell into the ASL, but it also regulates another channel called ENaC, which allows sodium ions to leave the ASL and enter the respiratory epithelium. CFTR normally inhibits this channel, but if the CFTR is defective, then sodium flows freely from the ASL into the cell.

As water follows sodium, the depth of the ASL is depleted and the cilia are left in the mucous layer. As cilia cannot effectively move in a thick, viscous environment, mucociliary clearance is deficient and a buildup of mucus occurs, clogging small airways. The accumulation of more viscous, nutrient-rich mucus in the lungs allows bacteria to hide from the body's immune system, causing repeated respiratory infections. The presence of the same CFTR proteins in the pancreatic duct and sweat glands in the skin also causes symptoms in these systems.
Chronic infections
The lungs of individuals with cystic fibrosis are colonized and infected by bacteria from an early age. These bacteria, which often spread among individuals with CF, thrive in the altered mucus, which collects in the small airways of the lungs. This mucus leads to the formation of bacterial microenvironments known as biofilms that are difficult for immune cells and antibiotics to penetrate. Viscous secretions and persistent respiratory infections repeatedly damage the lung by gradually remodeling the airways, which makes infection even more difficult to eradicate. The natural history of CF lung infections and airway remodeling is poorly understood, largely due to the immense spatial and temporal heterogeneity both within and between the microbiomes of CF patients.

Over time, both the types of bacteria and their individual characteristics change in individuals with CF. In the initial stage, common bacteria such as S. aureus and H. influenzae colonize and infect the lungs. Eventually, Pseudomonas aeruginosa (and sometimes Burkholderia cepacia) dominates. By 18 years of age, 80% of patients with classic CF harbor P. aeruginosa, and 3.5% harbor B. cepacia. Once within the lungs, these bacteria adapt to the environment and develop resistance to commonly used antibiotics. Pseudomonas can develop special characteristics that allow the formation of large colonies, known as "mucoid" Pseudomonas, which are rarely seen in people who do not have CF. Scientific evidence suggests the interleukin 17 pathway plays a key role in resistance and modulation of the inflammatory response during P. aeruginosa infection in CF. In particular, interleukin 17-mediated immunity plays a double-edged role during chronic airway infection; on one side, it contributes to the control of P. aeruginosa burden, while on the other, it propagates exacerbated pulmonary neutrophilia and tissue remodeling.

Infection can spread by passing between different individuals with CF. In the past, people with CF often participated in summer "CF camps" and other recreational gatherings. Hospitals grouped patients with CF into common areas, and routine equipment (such as nebulizers) was not sterilized between individual patients. This led to transmission of more dangerous strains of bacteria among groups of patients. As a result, individuals with CF are now routinely isolated from one another in the healthcare setting, and healthcare providers are encouraged to wear gowns and gloves when examining patients with CF to limit the spread of virulent bacterial strains.

CF patients may also have their airways chronically colonized by filamentous fungi (such as Aspergillus fumigatus, Scedosporium apiospermum, Aspergillus terreus) and/or yeasts (such as Candida albicans); other filamentous fungi less commonly isolated include Aspergillus flavus and Aspergillus nidulans (occurring transiently in CF respiratory secretions) and Exophiala dermatitidis and Scedosporium prolificans (chronic airway colonizers); some filamentous fungi, such as Penicillium emersonii and Acrophialophora fusispora, are encountered almost exclusively in the context of CF. The defective mucociliary clearance characterizing CF is associated with local immunological disorders. In addition, prolonged therapy with antibiotics and the use of corticosteroid treatments may also facilitate fungal growth.
Although the clinical relevance of fungal airway colonization is still a matter of debate, filamentous fungi may contribute to the local inflammatory response and therefore to the progressive deterioration of lung function, as often happens with allergic bronchopulmonary aspergillosis, the most common fungal disease in the context of CF, involving a Th2-driven immune response to Aspergillus species.
Diagnosis
In many localities, all newborns are screened for cystic fibrosis within the first few days of life, typically by a blood test for high levels of immunoreactive trypsinogen. Newborns with positive tests, or those who are otherwise suspected of having cystic fibrosis based on symptoms or family history, then undergo a sweat test. An electric current is used to drive pilocarpine into the skin, stimulating sweating. The sweat is collected and analyzed for salt levels. Unusually high levels of chloride in the sweat suggest CFTR is dysfunctional; the person is then diagnosed with cystic fibrosis. Genetic testing is also available to identify the CFTR mutations typically associated with cystic fibrosis. Many laboratories can test for the 30–96 most common CFTR mutations, which can identify over 90% of people with cystic fibrosis.

People with CF have less thiocyanate and hypothiocyanite in their saliva and mucus (Banfi et al.). In the case of milder forms of CF, transepithelial potential difference measurements can be helpful. CF can also be diagnosed by identification of mutations in the CFTR gene. In many cases, a parent makes the diagnosis because the infant tastes salty. Immunoreactive trypsinogen levels can be increased in individuals who have a single mutated copy of the CFTR gene (carriers) or, in rare instances, in individuals with two normal copies of the CFTR gene. Due to these false positives, CF screening in newborns can be controversial. By 2010, every US state had instituted newborn screening programs, and as of 2016, 21 European countries had programs in at least some regions.
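As a rough illustration of how a sweat-test result is read, the sketch below bins a sweat chloride concentration into interpretation ranges. The cutoffs of 30 and 60 mmol/L are assumptions taken from commonly cited guidelines rather than from this text, and real diagnosis combines the sweat test with genetics, symptoms, and family history.

```python
def interpret_sweat_chloride(mmol_per_l: float) -> str:
    """Bin a sweat chloride concentration into commonly cited ranges.
    The 30 and 60 mmol/L cutoffs are assumed guideline values, used
    here only for illustration."""
    if mmol_per_l >= 60:
        return "consistent with cystic fibrosis"
    if mmol_per_l >= 30:
        return "intermediate - further testing (e.g. CFTR genetics) needed"
    return "cystic fibrosis unlikely"

for value in (15, 45, 95):
    print(f"{value} mmol/L: {interpret_sweat_chloride(value)}")
```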
Prenatal
Women who are pregnant or couples planning a pregnancy can have themselves tested for CFTR gene mutations to determine the risk that their child will be born with CF. Testing is typically performed first on one or both parents and, if the risk of CF is high, testing on the fetus is performed. The American College of Obstetricians and Gynecologists recommends all people thinking of becoming pregnant be tested to see if they are a carrier.

Because development of CF in the fetus requires each parent to pass on a mutated copy of the CFTR gene and because CF testing is expensive, testing is often performed initially on one parent. If testing shows that parent is a CFTR gene mutation carrier, the other parent is tested to calculate the risk that their children will have CF. CF can result from more than a thousand different mutations. As of 2016, typically only the most common mutations are tested for, such as ΔF508. Most commercially available tests look for 32 or fewer different mutations. If a family has a known uncommon mutation, specific screening for that mutation can be performed. Because not all known mutations are found on current tests, a negative screen does not guarantee that a child will not have CF.

During pregnancy, testing can be performed on the placenta (chorionic villus sampling) or the fluid around the fetus (amniocentesis). However, chorionic villus sampling has a risk of fetal death of one in 100 and amniocentesis of one in 200; a recent study has indicated this may be much lower, about one in 1,600.

Economically, for carrier couples of cystic fibrosis, when comparing preimplantation genetic diagnosis (PGD) with natural conception (NC) followed by prenatal testing and abortion of affected pregnancies, PGD provides net economic benefits up to a maternal age of around 40 years, after which NC, prenatal testing, and abortion have higher economic benefit.
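The point that a negative screen does not guarantee an unaffected child can be quantified with Bayes' rule: a negative mutation panel lowers, but does not eliminate, a person's carrier probability. The numbers below (prior carrier risk of 1 in 25 and a 90% panel detection rate) are illustrative assumptions, not figures from this text.

```python
def residual_carrier_risk(prior: float, detection_rate: float) -> float:
    """Posterior P(carrier | negative screen) via Bayes' rule.
    A carrier tests negative with probability (1 - detection_rate);
    a non-carrier is assumed to always test negative."""
    p_neg_given_carrier = 1 - detection_rate
    p_negative = prior * p_neg_given_carrier + (1 - prior)
    return prior * p_neg_given_carrier / p_negative

risk = residual_carrier_risk(prior=1 / 25, detection_rate=0.90)
print(f"Residual carrier risk after a negative screen: 1 in {1 / risk:.0f}")
# About 1 in 241 with these assumed inputs - much lower than 1 in 25,
# but not zero, which is why a negative screen is not a guarantee.
```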
Management
While no cure for CF is known, several treatment methods are used. The management of CF has improved significantly over the past 70 years. While infants born with it 70 years ago would have been unlikely to live beyond their first year, infants today are likely to live well into adulthood. Recent advances in the treatment of cystic fibrosis have meant that individuals with cystic fibrosis can live a fuller life, less encumbered by their condition. The cornerstones of management are the proactive treatment of airway infection and encouragement of good nutrition and an active lifestyle. Pulmonary rehabilitation as a management of CF continues throughout a person's life and is aimed at maximizing organ function, and therefore quality of life. Occupational therapists use energy conservation techniques (ECT) in the rehabilitation process for patients with cystic fibrosis. Examples of energy conservation techniques are ergonomic principles, pursed-lip breathing, and diaphragmatic breathing. Patients with CF tend to have fatigue and dyspnoea due to chronic pulmonary infections, so reducing the amount of energy spent during activities can help patients feel better and gain more independence. At best, current treatments delay the decline in organ function. Because of the wide variation in disease symptoms, treatment typically occurs at specialist multidisciplinary centers and is tailored to the individual. Targets for therapy are the lungs, gastrointestinal tract (including pancreatic enzyme supplements), the reproductive organs (including assisted reproductive technology), and psychological support.

The most consistent aspect of therapy in CF is limiting and treating the lung damage caused by thick mucus and infection, with the goal of maintaining quality of life. Intravenous, inhaled, and oral antibiotics are used to treat chronic and acute infections. Mechanical devices and inhalation medications are used to alter and clear the thickened mucus. These therapies, while effective, can be extremely time-consuming. Oxygen therapy at home is recommended in those with significantly low oxygen levels. Many people with CF use probiotics, which are thought to be able to correct intestinal dysbiosis and inflammation, but the clinical trial evidence regarding the effectiveness of probiotics for reducing pulmonary exacerbations in people with CF is uncertain.
Antibiotics
Many people with CF are on one or more antibiotics at all times, even when healthy, to prophylactically suppress infection. Antibiotics are absolutely necessary whenever pneumonia is suspected or a noticeable decline in lung function is seen, and are usually chosen based on the results of a sputum analysis and the person's past response. This prolonged therapy often necessitates hospitalization and insertion of a more permanent IV such as a peripherally inserted central catheter or Port-a-Cath. Inhaled therapy with antibiotics such as tobramycin, colistin, and aztreonam is often given for months at a time to improve lung function by impeding the growth of colonized bacteria. Inhaled antibiotic therapy helps lung function by fighting infection, but also has significant drawbacks such as development of antibiotic resistance, tinnitus, and changes in the voice. Inhaled levofloxacin may be used to treat Pseudomonas aeruginosa in people with cystic fibrosis who are infected. Early management of Pseudomonas aeruginosa infection is easier and more effective; nebulised antibiotics, with or without oral antibiotics, may sustain its eradication for up to two years. When choosing antibiotics to treat lung infections caused by Pseudomonas aeruginosa in people with cystic fibrosis, it is still unclear whether the choice of antibiotics should be based on the results of testing antibiotics separately (one at a time) or in combination with each other.

Antibiotics by mouth such as ciprofloxacin or azithromycin are given to help prevent infection or to control ongoing infection. The aminoglycoside antibiotics (e.g. tobramycin) used can cause hearing loss, damage to the balance system in the inner ear, or kidney failure with long-term use. To prevent these side effects, the amount of antibiotics in the blood is routinely measured and adjusted accordingly.

All these factors related to antibiotic use, the chronicity of the disease, and the emergence of resistant bacteria demand more exploration of different strategies such as antibiotic adjuvant therapy. Currently, no reliable clinical trial evidence shows the effectiveness of antibiotics for pulmonary exacerbations in people with cystic fibrosis and Burkholderia cepacia complex or for the use of antibiotics to treat nontuberculous mycobacteria in people with CF.
Other medication
Aerosolized medications that help loosen secretions include dornase alfa and hypertonic saline. Dornase alfa is a recombinant human deoxyribonuclease, which breaks down DNA in the sputum, thus decreasing its viscosity. Dornase alfa improves lung function and probably decreases the risk of exacerbations, but there is insufficient evidence to know whether it is more or less effective than other similar or hyperosmolar therapies. Denufosol, an investigational drug, opens an alternative chloride channel, helping to liquefy mucus. Whether inhaled corticosteroids are useful is unclear, but stopping inhaled corticosteroid therapy is safe. There is weak evidence that corticosteroid treatment may cause harm by interfering with growth. Pneumococcal vaccination had not been studied as of 2014. As of 2014, there is no clear evidence from randomized controlled trials that the influenza vaccine is beneficial for people with cystic fibrosis.

Ivacaftor is a medication taken by mouth for the treatment of CF due to a number of specific mutations responsive to ivacaftor-induced CFTR protein enhancement. It improves lung function by about 10%; however, as of 2014 it is expensive. The first year it was on the market, the list price was over $300,000 per year in the United States. In July 2015, the U.S. Food and Drug Administration approved lumacaftor/ivacaftor. In 2018, the FDA approved the combination ivacaftor/tezacaftor; the manufacturer announced a list price of $292,000 per year. Tezacaftor helps move the CFTR protein to the correct position on the cell surface and is designed to treat people with the F508del mutation.

In 2019, the combination drug elexacaftor/ivacaftor/tezacaftor, marketed as Trikafta in the United States, was approved for CF patients over the age of 12. In 2021, this was extended to include patients over the age of 6. In Europe this drug was approved in 2020 and marketed as Kaftrio. It is used in those that have an F508del mutation, which occurs in about 90% of patients with cystic fibrosis. According to the Cystic Fibrosis Foundation, "this medicine represents the single greatest therapeutic advancement in the history of CF, offering a treatment for the underlying cause of the disease that could eventually bring modulator therapy to 90 percent of people with CF." In a clinical trial, participants who were administered the combination drug experienced a subsequent 63% decrease in pulmonary exacerbations and a 41.8 mmol/L decrease in sweat chloride concentration. By mitigating a range of symptoms associated with cystic fibrosis, the combination drug also significantly improved quality-of-life metrics among patients with the disease. The combination drug is also known to interact with CYP3A inducers, such as the carbamazepine used in the treatment of bipolar disorder, causing elexacaftor/ivacaftor/tezacaftor to circulate in the body at decreased concentrations; as such, concomitant use is not recommended. The list price in the US is $311,000 per year; however, insurance may cover much of the cost of the drug.

Ursodeoxycholic acid, a bile salt, has been used; however, there are insufficient data to show whether it is effective.
Nutrient supplementation
It is uncertain whether vitamin A or beta-carotene supplementation has any effect on the eye and skin problems caused by vitamin A deficiency. There is no strong evidence that people with cystic fibrosis can prevent osteoporosis by increasing their intake of vitamin D. For people with vitamin E deficiency and cystic fibrosis, there is evidence that vitamin E supplementation may improve vitamin E levels, although it is still uncertain what effect supplementation has on vitamin E‐specific deficiency disorders or on lung function. Robust evidence regarding the effects of vitamin K supplementation in people with cystic fibrosis is lacking as of 2020. Various studies have examined the effects of omega-3 fatty acid supplementation for people with cystic fibrosis, but the evidence is uncertain as to whether it has any benefits or adverse effects.
Procedures
Several mechanical techniques are used to dislodge sputum and encourage its expectoration. One technique good for short-term airway clearance is chest physiotherapy, where a respiratory therapist percusses an individual's chest by hand several times a day to loosen up secretions. This "percussive effect" can also be administered through specific devices that use chest wall oscillation or an intrapulmonary percussive ventilator. Other methods, such as biphasic cuirass ventilation and the associated clearance mode available in such devices, integrate a cough assistance phase as well as a vibration phase for dislodging secretions. These are portable and adapted for home use.

Another technique is positive expiratory pressure physiotherapy, which consists of providing a back pressure to the airways during expiration. This effect is provided by devices that consist of a mask or a mouthpiece in which a resistance is applied only on the expiration phase. The operating principles of this technique seem to be the increase of gas pressure behind mucus through collateral ventilation, along with a temporary increase in functional residual capacity preventing the early collapse of small airways during exhalation.

As lung disease worsens, mechanical breathing support may become necessary. Individuals with CF may need to wear special masks at night to help push air into their lungs. These machines, known as bilevel positive airway pressure (BiPAP) ventilators, help prevent low blood oxygen levels during sleep. Non-invasive ventilators may be used during physical therapy to improve sputum clearance. It is not known whether this type of therapy has an impact on pulmonary exacerbations or disease progression, nor what role non-invasive ventilation therapy has in improving exercise capacity in people with cystic fibrosis. However, the authors noted that "non‐invasive ventilation may be a useful adjunct to other airway clearance techniques, particularly in people with cystic fibrosis who have difficulty expectorating sputum." During severe illness, a tube may be placed in the throat (a procedure known as a tracheostomy) to enable breathing supported by a ventilator.

For children, preliminary studies show massage therapy may help people and their families' quality of life. Some lung infections require surgical removal of the infected part of the lung. If this is necessary many times, lung function is severely reduced. The most effective treatment options for people with CF who have spontaneous or recurrent pneumothoraces are not clear.
Transplantation
Lung transplantation may become necessary for individuals with CF as lung function and exercise tolerance decline. Although single lung transplantation is possible in other diseases, individuals with CF must have both lungs replaced because the remaining lung might contain bacteria that could infect the transplanted lung. A pancreatic or liver transplant may be performed at the same time to alleviate liver disease and/or diabetes. Lung transplantation is considered when lung function declines to the point where assistance from mechanical devices is required or someone's survival is threatened. According to Merck Manual, "bilateral lung transplantation for severe lung disease is becoming more routine and more successful with experience and improved techniques. Among adults with CF, median survival posttransplant is about 9 years."
Other aspects
Newborns with intestinal obstruction typically require surgery, whereas adults with distal intestinal obstruction syndrome typically do not. Treatment of pancreatic insufficiency by replacement of missing digestive enzymes allows the duodenum to properly absorb nutrients and vitamins that would otherwise be lost in the feces. However, the best dosage and form of pancreatic enzyme replacement is unclear, as are the risks and long-term effectiveness of this treatment.

So far, no large-scale research involving the incidence of atherosclerosis and coronary heart disease in adults with cystic fibrosis has been conducted. This is likely because the vast majority of people with cystic fibrosis do not live long enough to develop clinically significant atherosclerosis or coronary heart disease.

Diabetes is the most common nonpulmonary complication of CF. It mixes features of type 1 and type 2 diabetes and is recognized as a distinct entity, cystic fibrosis-related diabetes. While oral antidiabetic drugs are sometimes used, the recommended treatment is the use of insulin injections or an insulin pump, and, unlike in type 1 and 2 diabetes, dietary restrictions are not recommended. While Stenotrophomonas maltophilia is relatively common in people with cystic fibrosis, the evidence about the effectiveness of antibiotics for S. maltophilia is uncertain.

Bisphosphonates taken by mouth or intravenously can be used to improve bone mineral density in people with cystic fibrosis. When taking bisphosphonates intravenously, adverse effects such as pain and flu-like symptoms can be an issue. The adverse effects of bisphosphonates taken by mouth on the gastrointestinal tract are not known.

Poor growth may be avoided by insertion of a feeding tube for increasing food energy through supplemental feeds or by administration of injected growth hormone.

Sinus infections are treated by prolonged courses of antibiotics. The development of nasal polyps or other chronic changes within the nasal passages may severely limit airflow through the nose, and over time reduce the person's sense of smell. Sinus surgery is often used to alleviate nasal obstruction and to limit further infections. Nasal steroids such as fluticasone propionate are used to decrease nasal inflammation.

Female infertility may be overcome by assisted reproduction technology, particularly embryo transfer techniques. Male infertility caused by absence of the vas deferens may be overcome with testicular sperm extraction, collecting sperm cells directly from the testicles. If the collected sample contains too few sperm cells for a likely spontaneous fertilization, intracytoplasmic sperm injection can be performed. Third-party reproduction is also a possibility for women with CF. Whether taking antioxidants affects outcomes is unclear.

Physical exercise is usually part of outpatient care for people with cystic fibrosis. Aerobic exercise seems to be beneficial for aerobic exercise capacity, lung function, and health-related quality of life; however, the quality of the evidence is poor.

Due to the use of aminoglycoside antibiotics, ototoxicity is common. Symptoms may include "tinnitus, hearing loss, hyperacusis, aural fullness, dizziness, and vertigo".
Gastrointestinal
Problems with the gastrointestinal system, including constipation and obstruction of the gastrointestinal tract such as distal intestinal obstruction syndrome, are frequent complications for people with cystic fibrosis. Treatment of gastrointestinal problems is required in order to prevent a complete obstruction, reduce other CF symptoms, and improve quality of life. While stool softeners, laxatives, and prokinetics (GI-focused treatments) are often suggested, there is no clear consensus from experts as to which approach is best and comes with the least risks. Mucolytics or systemic treatments aimed at dysfunctional CFTR are also sometimes suggested to improve symptoms.
Prognosis
The prognosis for cystic fibrosis has improved due to earlier diagnosis through screening and better treatment and access to health care. In 1959, the median age of survival of children with CF in the United States was six months.
In 2010, survival was estimated to be 37 years for women and 40 for men. In Canada, median survival increased from 24 years in 1982 to 47.7 in 2007. In the United States, those born with CF in 2016 have a predicted life expectancy of 47.7 years when cared for in specialty clinics.

In the US, of those with CF who were more than 18 years old as of 2009, 92% had graduated from high school, 67% had at least some college education, 15% were disabled, 9% were unemployed, 56% were single, and 39% were married or living with a partner.
Quality of life
Chronic illnesses can be difficult to manage. CF is a chronic illness that affects the "digestive and respiratory tracts resulting in generalized malnutrition and chronic respiratory infections". The thick secretions clog the airways in the lungs, which often causes inflammation and severe lung infections. When lung function is compromised, it affects the quality of life of someone with CF and their ability to complete such tasks as everyday chores.

According to Schmitz and Goldbeck (2006), CF significantly increases emotional stress on both the individual and the family, "and the necessary time-consuming daily treatment routine may have further negative effects on quality of life". However, Havermans and colleagues (2006) have established that young outpatients with CF who have participated in the Cystic Fibrosis Questionnaire-Revised "rated some quality of life domains higher than did their parents". Consequently, outpatients with CF have a more positive outlook for themselves. As Merck Manual notes, "with appropriate support, most patients can make an age-appropriate adjustment at home and school. Despite myriad problems, the educational, occupational, and marital successes of patients are impressive."

Furthermore, there are many ways to enhance the quality of life in CF patients. Exercise is promoted to increase lung function. Integrating an exercise regimen into the CF patient's daily routine can significantly improve quality of life. No definitive cure for CF is known, but diverse medications are used, such as mucolytics, bronchodilators, steroids, and antibiotics, that have the purpose of loosening mucus, expanding airways, decreasing inflammation, and fighting lung infections, respectively.
Epidemiology
Cystic fibrosis is the most common life-limiting autosomal recessive disease among people of European heritage. In the United States, about 30,000 individuals have CF; most are diagnosed by six months of age. In Canada, about 4,000 people have CF. Around 1 in 25 people of European descent, and one in 30 white Americans, is a carrier of a CF mutation. Although CF is less common in these groups, roughly one in 46 Hispanics, one in 65 Africans, and one in 90 Asians carries at least one abnormal CFTR gene. Ireland has the world's highest prevalence of CF, at one in 1,353.

Although technically a rare disease, CF is ranked as one of the most widespread life-shortening genetic diseases. It is most common among nations in the Western world. An exception is Finland, where only one in 80 people carries a CF mutation. The World Health Organization states, "In the European Union, one in 2000–3000 newborns is found to be affected by CF". In the United States, one in 3,500 children is born with CF. In 1997, about one in 3,300 white children in the United States was born with CF. In contrast, only one in 15,000 African American children had it, and in Asian Americans, the rate was even lower at one in 32,000.

Cystic fibrosis is diagnosed equally in males and females. For reasons that remain unclear, data have shown that males tend to have a longer life expectancy than females, though recent studies suggest this gender gap may no longer exist, perhaps due to improvements in health care facilities. A recent study from Ireland identified a link between the female hormone estrogen and worse outcomes in CF.

The distribution of CF alleles varies among populations. The frequency of ΔF508 carriers has been estimated at one in 200 in northern Sweden, one in 143 in Lithuanians, and one in 38 in Denmark. No ΔF508 carriers were found among 171 Finns and 151 Saami people. ΔF508 does occur in Finland, but it is a minority allele there. CF is known to occur in only 20 families (pedigrees) in Finland.
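Applying the same Hardy-Weinberg arithmetic as above to the ΔF508 carrier frequencies just listed shows how strongly allele frequency drives expected incidence. These figures count only ΔF508 carriers, so they understate total CF incidence, and random mating is assumed throughout.

```python
# ΔF508 carrier frequencies quoted in the text above
delta_f508_carrier_freq = {
    "northern Sweden": 1 / 200,
    "Lithuania": 1 / 143,
    "Denmark": 1 / 38,
}

for region, freq in delta_f508_carrier_freq.items():
    # Expected ΔF508-homozygote births: (carrier frequency)^2 * 1/4
    incidence = freq ** 2 * 0.25
    print(f"{region}: about 1 in {1 / incidence:,.0f} births")
# Denmark's roughly 5x higher carrier frequency than northern Sweden
# implies a roughly 28x higher expected ΔF508-homozygote incidence.
```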
Evolution
The ΔF508 mutation is estimated to be up to 52,000 years old. Numerous hypotheses have been advanced as to why such a lethal mutation has persisted and spread in the human population. Other common autosomal recessive diseases such as sickle-cell anemia have been found to protect carriers from other diseases, an evolutionary trade-off known as heterozygote advantage. Resistance to each of the following has been proposed as a possible source of heterozygote advantage:
Cholera: With the discovery that cholera toxin requires normal host CFTR proteins to function properly, it was hypothesized that carriers of mutant CFTR genes benefited from resistance to cholera and other causes of diarrhea. Further studies have not confirmed this hypothesis.
Typhoid: Normal CFTR proteins are also essential for the entry of Salmonella Typhi into cells, suggesting that carriers of mutant CFTR genes might be resistant to typhoid fever. No in vivo study has yet confirmed this. In both cases, the low level of cystic fibrosis outside of Europe, in places where both cholera and typhoid fever are endemic, is not immediately explicable.
Diarrhea: The prevalence of CF in Europe might be connected with the development of cattle domestication. In this hypothesis, carriers of a single mutant CFTR had some protection from diarrhea caused by lactose intolerance, before the mutations that created lactose tolerance appeared.
Tuberculosis: Another possible explanation is that carriers of the gene could have some resistance to tuberculosis. This hypothesis is based on the thesis that CFTR gene mutation carriers have insufficient activity of one of their enzymes, arylsulphatase, which is necessary for Mycobacterium tuberculosis virulence. Because M. tuberculosis uses its host's resources to establish its virulence, the lack of this enzyme would impair its pathogenicity, so being a carrier of a CFTR mutation could provide resistance against tuberculosis.
History
CF is thought to have appeared about 3,000 BC because of migration of peoples, gene mutations, and new conditions in nourishment. Although the entire clinical spectrum of CF was not recognized until the 1930s, certain aspects of CF were identified much earlier. Indeed, literature from Germany and Switzerland in the 18th century warned "Wehe dem Kind, das beim Kuß auf die Stirn salzig schmeckt, es ist verhext und muss bald sterben" ("Woe to the child who tastes salty from a kiss on the brow, for he is cursed and soon must die"), recognizing the association between the salt loss in CF and illness.

In the 19th century, Carl von Rokitansky described a case of fetal death with meconium peritonitis, a complication of meconium ileus associated with CF. Meconium ileus was first described in 1905 by Karl Landsteiner. In 1936, Guido Fanconi described a connection between celiac disease, cystic fibrosis of the pancreas, and bronchiectasis.

In 1938, Dorothy Hansine Andersen published an article, "Cystic Fibrosis of the Pancreas and Its Relation to Celiac Disease: a Clinical and Pathological Study", in the American Journal of Diseases of Children. She was the first to describe the characteristic cystic fibrosis of the pancreas and to correlate it with the lung and intestinal disease prominent in CF. She also first hypothesized that CF was a recessive disease and first used pancreatic enzyme replacement to treat affected children. In 1952, Paul di Sant'Agnese discovered abnormalities in sweat electrolytes; a sweat test was developed and improved over the next decade.

The first linkage between CF and another marker (paraoxonase) was found in 1985 by Hans Eiberg, indicating that only one locus exists for CF. In 1988, the first mutation for CF, ΔF508, was discovered by Francis Collins, Lap-Chee Tsui, and John R. Riordan on the seventh chromosome. Subsequent research has found over 1,000 different mutations that cause CF.

Because mutations in the CFTR gene are typically small, classical genetics techniques had been unable to accurately pinpoint the mutated gene. Using protein markers, gene-linkage studies were able to map the mutation to chromosome 7. Chromosome walking and chromosome jumping techniques were then used to identify and sequence the gene. In 1989, Lap-Chee Tsui led a team of researchers at the Hospital for Sick Children in Toronto that discovered the gene responsible for CF. CF represents a classic example of how a human genetic disorder was elucidated strictly by the process of forward genetics.
Research
People with CF may be listed in a disease registry that allows researchers and doctors to track health results and identify candidates for clinical trials.
Gene therapy
Gene therapy has been explored as a potential cure for CF. Results from clinical trials have shown limited success as of 2016, and using gene therapy as routine therapy is not suggested. A small study published in 2015 found a small benefit.

The focus of much CF gene therapy research is on trying to place a normal copy of the CFTR gene into affected cells. Transferring the normal CFTR gene into the affected epithelial cells would result in the production of functional CFTR protein in all target cells, without adverse reactions or an inflammation response. To prevent the lung manifestations of CF, only 5–10% of the normal amount of CFTR gene expression is needed. Multiple approaches have been tested for gene transfer, such as liposomes and viral vectors, in animal models and clinical trials. However, both methods were found to be relatively inefficient treatment options, mainly because very few cells take up the vector and express the gene, so the treatment has little effect. Additionally, problems have been noted in cDNA recombination, such that the gene introduced by the treatment is rendered unusable. Functional repair of CFTR in culture has been achieved by CRISPR/Cas9 in intestinal stem cell organoids of cystic fibrosis patients.
Phage therapy
Phage therapy is being studied for multidrug resistant bacteria in people with CF.
Gene modulators
A number of small molecules that aim at compensating for various mutations of the CFTR gene are under development. CFTR modulator therapies have been used in place of other types of genetic therapies. These therapies focus on the expression of a genetic mutation instead of the mutated gene itself. Modulators are split into two classes: potentiators and correctors. Potentiators act on the CFTR ion channels that are embedded in the cell membrane, and these types of drugs help open up the channel to allow transmembrane flow. Correctors are meant to assist in the transport of nascent proteins (proteins newly formed by ribosomes, before being folded into their final shape) to the cell surface to be incorporated into the cell membrane.

Other approaches target the translation stage of gene expression. One approach has been to try to develop medications that get the ribosome to overcome the stop codon and produce a full-length CFTR protein. About 10% of CF results from a premature stop codon in the DNA, leading to early termination of protein synthesis and truncated proteins. These drugs target nonsense mutations such as G542X, in which the amino acid glycine at position 542 is replaced by a stop codon. Aminoglycoside antibiotics interfere with protein synthesis and error-correction. In some cases, they can cause the cell to overcome a premature stop codon by inserting a random amino acid, thereby allowing expression of a full-length protein. Future research on these modulators is focused on the cellular targets that can be affected by a change in a gene's expression. Otherwise, genetic therapy will be used as a treatment when modulator therapies do not work, given that 10% of people with cystic fibrosis are not affected by these drugs.

Elexacaftor/ivacaftor/tezacaftor was approved in the United States in 2019 for cystic fibrosis. This combination of previously developed medicines is able to treat up to 90% of people with cystic fibrosis. This medication restores some effectiveness of the CFTR protein so that it can work as an ion channel on the cell's surface.
Ecological therapy
It has previously been shown that inter-species interactions are an important contributor to the pathology of CF lung infections. Examples include the production of antibiotic degrading enzymes such as β-lactamases and the production of metabolic by-products such as short-chain fatty acids (SCFAs) by anaerobic species, which can enhance the pathogenicity of traditional pathogens such as Pseudomonas aeruginosa. Due to this, it has been suggested that the direct alteration of CF microbial community composition and metabolic function would provide an alternative to traditional antibiotic therapies.
Society and culture
Sick: The Life and Death of Bob Flanagan, Supermasochist, a 1997 documentary film
65 Redroses, a 2009 documentary film
Breathing for a Living, a memoir by Laura Rothenberg
Every Breath I Take: Surviving and Thriving with Cystic Fibrosis, a book by Claire Wineland
Five Feet Apart, a 2019 romantic drama film starring Cole Sprouse and Haley Lu Richardson
Orla Tinsley: Warrior, a 2018 documentary film about CF campaigner Orla Tinsley
The performance art of Martin O'Brien
Continent Chasers, a blog by a traveller and CF patient documenting travel with CF, continentchasers.com
External links
Search GeneCards for genes involved in cystic fibrosis
Cystic Fibrosis Mutation Database
"Cystic Fibrosis". MedlinePlus. U.S. National Library of Medicine. | 129 |
Cysticercosis | Cysticercosis is a tissue infection caused by the young form of the pork tapeworm. People may have few or no symptoms for years. In some cases, particularly in Asia, solid lumps of between one and two centimetres may develop under the skin. After months or years these lumps can become painful and swollen and then resolve. A specific form called neurocysticercosis, which affects the brain, can cause neurological symptoms. In developing countries this is one of the most common causes of seizures.

Cysticercosis is usually acquired by eating food or drinking water contaminated by tapeworm eggs from human feces. Among foods, egg-contaminated vegetables are a major source. The tapeworm eggs are present in the feces of a person infected with the adult worms, a condition known as taeniasis. Taeniasis, in the strict sense, is a different disease and is due to eating cysts in poorly cooked pork. People who live with someone with the pork tapeworm have a greater risk of getting cysticercosis. The diagnosis can be made by aspiration of a cyst. Taking pictures of the brain with computed tomography (CT) or magnetic resonance imaging (MRI) is most useful for the diagnosis of disease in the brain. An increased number of a type of white blood cell, called eosinophils, in the cerebrospinal fluid and blood is also an indicator.

Infection can be effectively prevented by personal hygiene and sanitation: this includes cooking pork well, proper toilets and sanitary practices, and improved access to clean water. Treating those with taeniasis is important to prevent spread. Treating the disease when it does not involve the nervous system may not be required. Treatment of those with neurocysticercosis may be with the medications praziquantel or albendazole. These may be required for long periods. Steroids, for anti-inflammation during treatment, and anti-seizure medications may also be required. Surgery is sometimes done to remove the cysts.

The pork tapeworm is particularly common in Asia, Sub-Saharan Africa, and Latin America. In some areas it is believed that up to 25% of people are affected. In the developed world it is very uncommon. Worldwide in 2015 it caused about 400 deaths. Cysticercosis also affects pigs and cows but rarely causes symptoms as most are slaughtered before symptoms arise. The disease has occurred in humans throughout history. It is one of the neglected tropical diseases.
Signs and symptoms
Muscles
Cysticerci can develop in any voluntary muscle. Invasion of muscle can cause inflammation of the muscle, with fever, eosinophilia, and increased size, which begins with muscle swelling and later progresses to atrophy and scarring. In most cases, it is asymptomatic, since the cysticerci die and become calcified.
Nervous system
The term neurocysticercosis is generally accepted to refer to cysts in the parenchyma of the brain. It presents with seizures and, less commonly, headaches. Cysticerci in brain parenchyma are usually 5–20 mm in diameter. In the subarachnoid space and fissures, lesions may be as large as 6 cm in diameter and lobulated. They may be numerous and life-threatening.

Cysts located within the ventricles of the brain can block the outflow of cerebrospinal fluid and present with symptoms of increased intracranial pressure. Racemose neurocysticercosis refers to cysts in the subarachnoid space. These can occasionally grow into large lobulated masses causing pressure on surrounding structures. Spinal cord neurocysticercosis most commonly presents with symptoms such as back pain and radiculopathy.
Eyes
In some cases, cysticerci may be found in the eyeball, extraocular muscles, and under the conjunctiva (subconjunctiva). Depending on the location, they may cause visual difficulties that fluctuate with eye position, retinal edema, hemorrhage, decreased vision, or even visual loss.
Skin
Subcutaneous cysts take the form of firm, mobile nodules, occurring mainly on the trunk and extremities. Subcutaneous nodules are sometimes painful.
Cause
The cause of human cysticercosis is the egg form of Taenia solium (often abbreviated as T. solium and also called pork tapeworm), which is transmitted through the fecal-oral route. The eggs enter the intestine, where they develop into larvae. The larvae enter the bloodstream and invade host tissues, where they further develop into larvae called cysticerci. The cysticercus larva completes development in about two months. It is semitransparent, opalescent white, and elongate oval in shape, and may reach a length of 0.6 to 1.8 cm.
Diagnosis
The traditional method of demonstrating either tapeworm eggs or proglottids in stool samples diagnoses only taeniasis, carriage of the tapeworm stage of the life cycle. Only a small minority of patients with cysticercosis will harbor a tapeworm, rendering stool studies ineffective for diagnosis. Ophthalmic cysticercosis can be diagnosed by visualizing the parasite in the eye by fundoscopy.

In cases of human cysticercosis, diagnosis is a sensitive problem and requires biopsy of the infected tissue or sophisticated instruments. Taenia solium eggs and proglottids found in feces, ELISA, or polyacrylamide gel electrophoresis diagnose only taeniasis and not cysticercosis. Radiological tests, such as X-rays, CT scans which demonstrate "ring-enhancing brain lesions", and MRIs, can also be used to detect the disease. X-rays are used to identify calcified larvae in the subcutaneous and muscle tissues, and CT scans and MRIs are used to find lesions in the brain.
Serological
Antibodies to cysticerci can be demonstrated in serum by enzyme-linked immunoelectrotransfer blot (EITB) assay and in CSF by ELISA. An immunoblot assay using lentil lectin (agglutinin from Lens culinaris) is highly sensitive and specific. However, individuals with intracranial lesions and calcifications may be seronegative. In the CDC's immunoblot assay, cysticercosis-specific antibodies can react with structural glycoprotein antigens from the larval cysts of Taenia solium. However, this is mainly a research tool, not widely available in clinical practice and nearly unobtainable in resource-limited settings.
Neurocysticercosis
The diagnosis of neurocysticercosis is mainly clinical, based on a compatible presentation of symptoms and findings of imaging studies.
Imaging
Neuroimaging with CT or MRI is the most useful method of diagnosis. CT scans show both calcified and uncalcified cysts, and distinguish active from inactive cysts. Cystic lesions can show ring enhancement and focal enhancing lesions. Some cystic lesions, especially those in the ventricles and subarachnoid space, may not be visible on CT scans, since the cyst fluid is isodense with cerebrospinal fluid (CSF). Thus, diagnosis of extraparenchymal cysts usually relies on signs like hydrocephalus or enhanced basilar meninges. In such cases, a CT scan with intraventricular contrast or MRI can be used. MRI is more sensitive in the detection of intraventricular cysts.
CSF
CSF findings include pleocytosis, elevated protein levels, and depressed glucose levels, but these may not always be present.
Prevention
Cysticercosis is considered a “tools-ready” disease according to the WHO. The International Task Force for Disease Eradication reported in 1992 that cysticercosis is potentially eradicable. Eradication is feasible because there are no animal reservoirs besides humans and pigs; the only source of Taenia solium infection for pigs is humans, the definitive host. Theoretically, breaking the life cycle seems straightforward by applying intervention strategies at various stages of the life cycle. For example:
Massive chemotherapy of infected individuals, improving sanitation, and educating people are all major ways to break the cycle in which eggs from human feces are transmitted to other humans and/or pigs.
Cooking or freezing pork and inspecting meat are effective means of breaking the life cycle.
Managing pigs by treating or vaccinating them is another possible intervention.
Separating pigs from human faeces by confining them in enclosed piggeries. In Western European countries after World War II, the pig industry developed rapidly and most pigs were housed. This was the main reason pig cysticercosis was largely eliminated from the region. This, of course, is not a quick answer to the problem in developing countries.
Pigs
The intervention strategies to eradicate cysticercosis include surveillance of pigs in foci of transmission and massive chemotherapy treatment of humans. In reality, control of T. solium by a single intervention, for instance by treating only the human population, will not work because existing infected pigs can still carry on the cycle. The proposed strategy for eradication is multilateral intervention, treating both the human and porcine populations. It is feasible because treating pigs with oxfendazole has been shown to be effective, and once treated, pigs are protected from further infections for at least three months.
Limitations
Even with the concurrent treatment of humans and pigs, complete elimination is hard to achieve. In one study conducted in 12 villages in Peru, both humans and pigs were treated with praziquantel and oxfendazole, with coverage of more than 75% in humans and 90% in pigs. The results show a decrease in prevalence and incidence in the intervention area; however, the effect did not eliminate T. solium. Possible reasons include incomplete coverage and re-infection. Even though T. solium could be eliminated through mass treatment of the human and porcine populations, such elimination is not sustainable. Moreover, tapeworm carriers, both human and porcine, tend to spread the disease from endemic to non-endemic areas, resulting in periodic outbreaks of cysticercosis or outbreaks in new areas.
Vaccines
Given that pigs are part of the life cycle, vaccination of pigs is another feasible intervention to eliminate cysticercosis. Research studies have focused on vaccines against cestode parasites, since many immune cell types are found to be capable of destroying cysticerci. Many vaccine candidates are extracted from antigens of different cestodes, such as Taenia solium, T. crassiceps, T. saginata, and T. ovis, and target oncospheres and/or cysticerci. In 1983, Molinari et al. reported the first vaccine candidate against porcine cysticercosis, using antigen from Cysticercus cellulosae drawn from naturally infected pigs. Recently, vaccines extracted from genetically engineered 45W-4B antigens have been successfully tested in pigs under experimental conditions. This type of vaccine can protect against cysticercosis from both the Chinese and Mexican types of T. solium. However, it has not been tested in endemic field conditions, which is important because realistic conditions in the field differ greatly from experimental conditions, and this can result in a great difference in the chances of infection and immune reaction.

Even though vaccines have been successfully generated, the feasibility of their production and use in rural free-ranging pigs remains a challenge. If a vaccine is to be injected, the burden of work and the cost of vaccine administration to pigs will remain high and unrealistic. Pig owners' incentive to use vaccines will decrease if administration takes time, requiring the injection of every single pig in their livestock. A hypothetical oral vaccine is proposed to be more effective in this case, as it could be easily delivered to the pigs in food.
S3PVAC vaccine
The vaccine constituted by three synthetically produced peptides (S3Pvac) has proven its efficacy under natural conditions of transmission. So far, the S3Pvac vaccine can be considered the best vaccine candidate for use in endemic areas such as Mexico (20). S3Pvac consists of three protective peptides: KETc12, KETc1, and GK1, whose sequences belong to native antigens that are present in the different developmental stages of T. solium and other cestode parasites.

Non-infected pigs from rural villages in Mexico were vaccinated with S3Pvac, and the vaccine reduced the number of cysticerci by 98% and the prevalence by 50%. The diagnostic method involved necropsy and tongue inspection of pigs. The natural challenge conditions used in the study proved the efficacy of the S3Pvac vaccine in transmission control of T. solium in Mexico. The S3Pvac vaccine is owned by the National Autonomous University of Mexico, and the method for high-scale production of the vaccine has already been developed. Validation of the vaccine in agreement with the Secretary of Animal Health in Mexico is currently in the process of completion. It is also hoped that the vaccine will be well accepted by pig owners, because they lose income if their pigs are infected with cysticercosis. Vaccination of pigs against cysticercosis, if successful, can potentially have a great impact on transmission control, since there is no chance of re-infection once pigs receive vaccination.
Other
Cysticercosis can also be prevented by routine inspection of meat and condemnation of measly meat by the local government, and by avoiding partially cooked meat products. However, in areas where food is scarce, cyst-infected meat might be considered wasted, since pork can provide high-quality protein. At times, infected pigs are consumed within the locality or sold at low prices to traffickers who take the uninspected pigs to urban areas for sale.
Management
Neurocysticercosis
Asymptomatic cysts, such as those discovered incidentally on neuroimaging done for another reason, may never lead to symptomatic disease and in many cases do not require therapy. Calcified cysts have already died and involuted. Seizures can still occur in individuals with only calcified cysts.

Neurocysticercosis may present as hydrocephalus and acute-onset seizures, so the immediate therapy is emergent reduction of intracranial pressure and anticonvulsant medications. Once the seizures have been brought under control, antihelminthic treatments may be undertaken. The decision to treat with antiparasitic therapy is complex and based on the stage and number of cysts present, their location, and the person's specific symptoms.

Adult Taenia solium is easily treated with niclosamide, which is most commonly used in taeniasis. However, cysticercosis is a complex disease and requires careful medication. Praziquantel (PZQ) is the drug of choice and is widely used in neurocysticercosis. Albendazole appears to be more effective and a safe drug for neurocysticercosis. In complicated situations, a combination of praziquantel, albendazole, and a steroid (such as a corticosteroid to reduce the inflammation) is recommended. In the brain, the cysts can usually be found on the surface. Most cases of brain cysts are found by accident, during diagnosis for other ailments. Surgical removal is the only option for complete removal of cysts, even when treated successfully with medications.

Antiparasitic treatment should be given in combination with corticosteroids and anticonvulsants to reduce inflammation surrounding the cysts and lower the risk of seizures. When corticosteroids are given in combination with praziquantel, cimetidine is also given, as corticosteroids decrease the action of praziquantel by enhancing its first-pass metabolism. Albendazole is generally preferable over praziquantel due to its lower cost and fewer drug interactions.

Surgical intervention is much more likely to be needed in cases of intraventricular, racemose, or spinal neurocysticercosis. Treatments include direct excision of ventricular cysts, shunting procedures, and removal of cysts via endoscopy.
Eyes
In eye disease, surgical removal is necessary for cysts within the eye itself, as treating intraocular lesions with anthelmintics will elicit an inflammatory reaction causing irreversible damage to structural components. Cysts outside the globe can be treated with anthelmintics and steroids. Treatment recommendations for subcutaneous cysticercosis include surgery, praziquantel, and albendazole.
Skin
In general, subcutaneous disease does not need specific therapy. Painful or bothersome cysts can be surgically removed.
Epidemiology
Regions
Taenia solium is found worldwide, but is more common where pork is part of the diet. Cysticercosis is most prevalent where humans live in close contact with pigs. Therefore, high prevalences are reported in Mexico, Latin America, West Africa, Russia, India, Pakistan, North-East China, and Southeast Asia. In Europe it is most widespread among Slavic peoples. However, reviews of the epidemiological data in Western and Eastern Europe show there are still considerable gaps in the understanding of the disease in these regions as well.

The frequency has decreased in developed countries owing to stricter meat inspection, better hygiene, and better sanitation of facilities.
Infection estimates
In Latin America, an estimated 75 million persons live in endemic areas and 400,000 people have symptomatic disease. Some studies suggest that the prevalence of cysticercosis in Mexico is between 3.1 and 3.9 percent. Other studies have found the seroprevalence in areas of Guatemala, Bolivia, and Peru as high as 20 percent in humans, and 37 percent in pigs. In Ethiopia, Kenya and the Democratic Republic of Congo around 10% of the population is infected, in Madagascar 16%. The distribution of cysticercosis coincides with the distribution of T. solium. Cysticercosis is the most common cause of symptomatic epilepsy worldwide. Prevalence rates in the United States have shown that immigrants from Mexico, Central and South America, and Southeast Asia account for most of the domestic cases of cysticercosis. In 1990 and 1991, four unrelated members of an Orthodox Jewish community in New York City developed recurrent seizures and brain lesions, which were found to have been caused by T. solium. Researchers who interviewed the families suspect the infection was acquired from domestic workers who were carriers of the tapeworm.
Deaths
Worldwide, as of 2010, cysticercosis caused about 1,200 deaths, up from 700 in 1990; other estimates from 2010 suggested it contributed to at least 50,000 deaths annually. In the US during 1990–2002, 221 cysticercosis deaths were identified. Mortality rates were highest for Latinos and men. The mean age at death was 40.5 years (range 2–88). Most patients, 84.6%, were foreign-born, and 62% had emigrated from Mexico. The 33 US-born persons who died of cysticercosis represented 15% of all cysticercosis-related deaths. The cysticercosis mortality rate was highest in California, which accounted for 60% of all cysticercosis deaths.
History
The earliest references to tapeworms were found in the works of ancient Egyptians dating back to almost 2000 BC. The description of measled pork in the History of Animals written by Aristotle (384–322 BC) showed that the infection of pork with tapeworm was known to ancient Greeks at that time. It was also known to Jewish and later to early Muslim physicians, and has been proposed as one of the reasons for pork being forbidden by Jewish and Islamic dietary laws. Recent examination of the evolutionary histories of hosts and parasites and DNA evidence show that over 10,000 years ago, ancestors of modern humans in Africa became exposed to tapeworm when they scavenged for food or preyed on antelopes and bovids, and later passed the infection on to domestic animals such as pigs. Cysticercosis was described by Johannes Udalric Rumler in 1555; however, the connection between tapeworms and cysticercosis had not been recognized at that time. Around 1850, Friedrich Küchenmeister fed pork containing cysticerci of T. solium to humans awaiting execution in a prison, and after they had been executed, he recovered the developing and adult tapeworms in their intestines. By the middle of the 19th century, it was established that cysticercosis was caused by the ingestion of the eggs of T. solium.
See also
Coenurosis
Coenurosis in humans
Echinococcosis
Trichinosis
Cysticercus
References
External links
"Taenia solium". NCBI Taxonomy Browser. 6204. | 130 |
Cystinosis | Cystinosis is a lysosomal storage disease characterized by the abnormal accumulation of cystine, the oxidized dimer of the amino acid cysteine. It is a rare genetic disorder that follows an autosomal recessive inheritance pattern and results from the accumulation of free cystine in lysosomes, eventually leading to intracellular crystal formation throughout the body. Cystinosis is the most common cause of Fanconi syndrome in the pediatric age group. Fanconi syndrome occurs when the function of cells in renal tubules is impaired, leading to abnormal amounts of carbohydrates and amino acids in the urine, excessive urination, and low blood levels of potassium and phosphates.
Cystinosis was the first documented genetic disease belonging to the group of lysosomal storage disease disorders. Cystinosis is caused by mutations in the CTNS gene that codes for cystinosin, the lysosomal membrane-specific transporter for cystine. Intracellular metabolism of cystine, as with all amino acids, requires its transport across the cell membrane. After degradation of endocytosed protein to cystine within lysosomes, it is normally transported to the cytosol. But if there is a defect in the carrier protein, cystine accumulates in lysosomes. As cystine is highly insoluble, when its concentration in tissue lysosomes increases, its solubility is immediately exceeded and crystalline precipitates form in almost all organs and tissues. However, the progression of the disease is not related to the presence of crystals in target tissues. Although tissue damage might depend on cystine accumulation, the mechanisms of tissue damage are not fully understood. Increased intracellular cystine profoundly disturbs cellular oxidative metabolism and glutathione status, leading to altered mitochondrial energy metabolism, autophagy, and apoptosis. Cystinosis is usually treated with cysteamine, which is prescribed to decrease intralysosomal cystine accumulation. However, the discovery of new pathogenic mechanisms and the development of an animal model of the disease may open possibilities for the development of new treatment modalities to improve long-term prognosis.
Symptoms
There are three distinct types of cystinosis each with slightly different symptoms: nephropathic cystinosis, intermediate cystinosis, and non-nephropathic or ocular cystinosis. Infants affected by nephropathic cystinosis initially exhibit poor growth and particular kidney problems (sometimes called renal Fanconi syndrome). The kidney problems lead to the loss of important minerals, salts, fluids, and other nutrients. The loss of nutrients not only impairs growth, but may result in soft, bowed bones (hypophosphatemic rickets), especially in the legs. The nutrient imbalances in the body lead to increased urination, thirst, dehydration, and abnormally acidic blood (acidosis).
By about age two, cystine crystals may also be present in the cornea. The buildup of these crystals in the eye causes an increased sensitivity to light (photophobia). Without treatment, children with cystinosis are likely to experience complete kidney failure by about age ten. With treatment this may be delayed into the patient's teens or 20s. Other signs and symptoms that may occur in patients include muscle deterioration, blindness, inability to swallow, impaired sweating, decreased hair and skin pigmentation, diabetes, and thyroid and nervous system problems.
The signs and symptoms of intermediate cystinosis are the same as nephropathic cystinosis, but they occur at a later age. Intermediate cystinosis typically begins to affect individuals around age twelve to fifteen. Malfunctioning kidneys and corneal crystals are the main initial features of this disorder. If intermediate cystinosis is left untreated, complete kidney failure will occur, but usually not until the late teens to mid twenties.
People with non-nephropathic or ocular cystinosis do not usually experience growth impairment or kidney malfunction. The only symptom is photophobia due to cystine crystals in the cornea.
Crystal morphology and identification
Cystine crystals are hexagonal in shape and colorless. They are not often found in alkaline urine due to their high solubility there. The colorless crystals can be difficult to distinguish from uric acid crystals, which are also hexagonal. Under polarized light examination, the crystals are birefringent and show interference colors.
Genetics
Cystinosis occurs due to a mutation in the gene CTNS, located on chromosome 17, which codes for cystinosin, the lysosomal cystine transporter. Symptoms are first seen at about 3 to 18 months of age with profound polyuria (excessive urination), followed by poor growth, photophobia, and ultimately kidney failure by age 6 years in the nephropathic form.
All forms of cystinosis (nephropathic, juvenile and ocular) are autosomal recessive, which means that the trait is located on an autosomal chromosome, and only an individual who inherits two copies of the gene – one from each parent – will have the disorder. When both parents are carriers of an autosomal recessive trait, each child has a 25% risk of having the disorder.
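The 25% figure follows from enumerating the four equally likely allele combinations a child can inherit from two carrier parents. A minimal sketch in Python (the allele labels are illustrative, not standard genetic nomenclature):

```python
from itertools import product

def offspring_genotype_probs(parent1, parent2):
    """Enumerate the four equally likely allele combinations for one child.

    Each parent is a two-allele genotype, e.g. ("C", "c") for a carrier,
    where "c" marks the recessive disease allele.
    """
    combos = list(product(parent1, parent2))
    probs = {}
    for child in combos:
        key = "".join(sorted(child))  # "Cc" and "cC" are the same genotype
        probs[key] = probs.get(key, 0) + 1 / len(combos)
    return probs

# Two carrier parents ("Cc" x "Cc"), as in autosomal recessive cystinosis:
print(offspring_genotype_probs(("C", "c"), ("C", "c")))
# -> {'CC': 0.25, 'Cc': 0.5, 'cc': 0.25}; 'cc' (affected) occurs 25% of the time
```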
Cystinosis affects approximately 1 in 100,000 to 200,000 newborns, and there are only around 2,000 known individuals with cystinosis in the world. The incidence is higher in the province of Brittany, France, where the disorder affects 1 in 26,000 individuals.
Diagnosis
Cystinosis is a rare genetic disorder that causes an accumulation of the amino acid cystine within cells, forming crystals that can build up and damage the cells. These crystals negatively affect many systems in the body, especially the kidneys and eyes. The accumulation is caused by abnormal transport of cystine from lysosomes, resulting in a massive intra-lysosomal cystine accumulation in tissues. Via an as yet unknown mechanism, lysosomal cystine appears to amplify and alter apoptosis in such a way that cells die inappropriately, leading to loss of renal epithelial cells. This results in renal Fanconi syndrome, and similar loss in other tissues can account for the short stature, retinopathy, and other features of the disease.
Definitive diagnosis and treatment monitoring are most often performed through measurement of white blood cell cystine level using tandem mass spectrometry.
Types
Online Mendelian Inheritance in Man (OMIM): 219800 – Infantile nephropathic
Online Mendelian Inheritance in Man (OMIM): 219900 – Adolescent nephropathic
Online Mendelian Inheritance in Man (OMIM): 219750 – Adult nonnephropathic
Treatment
Cystinosis is normally treated with cysteamine, which is available in capsules and in eye drops. People with cystinosis are also often given sodium citrate to treat the blood acidosis, as well as potassium and phosphorus supplements, among others. If the kidneys become significantly impaired or fail, then treatment must begin to ensure continued survival, up to and including renal transplantation.
See also
Hartnup disease
Cystinuria
CTNS
References
External links
Cystinosis at NLM Genetics Home Reference
GeneReviews/NCBI/NIH/UW entry on Cystinosis | 131 |
Cystinuria | Cystinuria is an inherited autosomal recessive disease characterized by high concentrations of the amino acid cystine in the urine, leading to the formation of cystine stones in the kidneys, ureters, and bladder. It is a type of aminoaciduria. "Cystine", not "cysteine", is implicated in this disease; the former is a dimer of the latter.
Presentation
Cystinuria is a cause of recurrent kidney stones. It is a disease involving the defective transepithelial transport of cystine and dibasic amino acids in the kidney and intestine, and is one of many causes of kidney stones. If not treated properly, the disease can cause serious damage to the kidneys and surrounding organs, and in some rare cases death. The stones may be identified by a positive nitroprusside cyanide test. The crystals are usually hexagonal, translucent, and white. Upon removal, the stones may be pink or yellow in color, but later turn greenish due to exposure to air. Cystinuria is usually asymptomatic when no stone is formed. However, once a stone is formed, signs and symptoms can occur:
Nausea
Flank pain
Hematuria
Urinary tract infections
Rarely, acute or chronic kidney disease
People with cystinuria pass stones monthly, weekly, or daily, and need ongoing care. Cystinurics have an increased risk for chronic kidney disease, and since kidney damage or poor function is often present in cystinurics, nonsteroidal anti-inflammatory drugs (NSAIDs) and over-the-counter (OTC) medications should be used with caution.
Cystine stones are often difficult to detect using plain x-rays. Computed tomography or ultrasound may be used instead for imaging. The urine in cystinuria can have a smell of rotten eggs due to the increased cystine.
Genetics
Cystinuria is an autosomal recessive disease, which means that the defective gene responsible for the disease is located on an autosome, and two copies of the defective gene (one inherited from each parent) are required in order to be born with the disease. The parents of an individual with an autosomal recessive disease both carry one copy of the defective gene, but usually do not experience any signs or symptoms of the disease. Although signs and symptoms are rare, there are some directly and indirectly associated with cystinuria. These signs and symptoms consist of: (1) hematuria, blood in the urine; (2) flank pain, pain in the side arising from the kidney; (3) renal colic, intense, cramping pain due to stones in the urinary tract; (4) obstructive uropathy, urinary tract disease due to obstruction; and (5) urinary tract infections.
Cause
Cystinuria is caused by mutations in the SLC3A1 and SLC7A9 genes. These defects prevent proper reabsorption of basic, or positively charged, amino acids: cystine, lysine, ornithine, and arginine. Under normal circumstances, this protein allows certain amino acids, including cystine, to be reabsorbed into the blood from the filtered fluid that will become urine. Mutations in either of these genes disrupt the ability of this transporter protein to reabsorb these amino acids, allowing them to become concentrated in the urine. As the level of cystine in the urine increases, it forms cystine crystals, resulting in kidney stones. Cystine forms hexagonal crystals that can be seen on microscopic analysis of the urine. The other amino acids that are not reabsorbed do not create crystals in urine. The overall prevalence of cystinuria is approximately 1 in 7,000 neonates (from 1 in 2,500 neonates in Libyan Jews to 1 in 100,000 among Swedes).
Pathophysiology
Cystinuria is characterized by the inadequate reabsorption of cystine in the proximal convoluted tubules after the filtering of the amino acids by the kidneys' glomeruli, thus resulting in an excessive concentration of this amino acid in the urine. Cystine may precipitate out of the urine, if the urine is neutral or acidic, and form crystals or stones in the kidneys, ureters, or bladder. It is one of several inborn errors of metabolism included in Garrod's tetrad. The disease is attributed to deficiency in the transport and metabolism of amino acids.
Diagnosis
Blood: Routine hemogram along with blood sugar, urea, and creatinine.
Urine: for cystine crystals and casts. The most specific test is the cyanide–nitroprusside test.
Ultrasound/CT scan to reveal if a stone is present.
Genetic analysis to determine which mutation associated with the disease may be present. Currently genotyping is not available in the United States, but might be available in Spain, Italy, the UK, Germany, and Russia (by private companies in Germany and Russia). Regular X-rays often fail to show cystine stones; however, they can be visualized in the diagnostic procedure called intravenous pyelography (IVP). Stones may show up on X-ray with a fuzzy gray appearance. They are radiopaque due to their sulfur content, though more difficult to visualize than calcium oxalate stones.
Treatment
Initial treatment is with adequate hydration, alkalization of the urine with citrate supplementation or acetazolamide, and dietary modification to reduce salt and protein intake (especially methionine). If this fails, patients are usually started on chelation therapy with an agent such as penicillamine; tiopronin is another agent.
Once renal stones have formed, however, the first-line treatment is endoscopic laser lithotripsy. ESWL (extracorporeal shock wave lithotripsy) is often not effective because of the hardness of the stones, which do not fragment easily. Conventional open-abdominal surgery is rarely used, but has proven to be an effective treatment modality for patients with more advanced disease. Adequate hydration is the foremost aim of treatment to prevent cystine stones. The goal is to increase the urine volume, because this reduces the concentration of cystine in the urine, which prevents cystine from precipitating and forming stones. People with cystine stones should consume 5 to 7 liters of fluid a day. The rationale behind alkalizing the urine is that cystine tends to stay in solution in alkaline urine and so causes no harm. In order to alkalize the urine, sodium bicarbonate has been used. One must be careful in alkalizing the urine, however, because this could promote other forms of stones while preventing cystine stones. Penicillamine is a drug that acts to form a complex with cystine that is 50 times more soluble than cystine itself. Percutaneous nephrolithotripsy (PNL) is performed via a port created by puncturing the kidney through the skin and enlarging the access port to 1 cm in diameter. Most of the time, cystine stones are too dense to be broken up by shock waves (ESWL), so PNL is needed. Videos of stone removal by percutaneous nephrolithotomy are available on various websites. In February 2017, an article was published in Nature Medicine entitled "Alpha lipoic acid treatment prevents cystine urolithiasis in a mouse model of cystinuria", suggesting that a high dose of the readily available antioxidant alpha-lipoic acid, at 2,700 mg per 67 kg body weight daily, reduced the incidence of stones. The effects were dose-dependent. The results are unprecedented for cystinuria. A clinical trial is underway based on this mouse model.
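The hydration rationale can be made concrete: urine cystine concentration equals daily excretion divided by urine volume, so the smallest volume that keeps the urine under-saturated is excretion divided by the solubility limit. A minimal sketch, using assumed illustrative values for daily excretion (~1,000 mg) and solubility (~250 mg/L near neutral pH), neither of which is taken from this article:

```python
def min_urine_volume_l(cystine_excretion_mg_per_day, solubility_mg_per_l):
    """Smallest daily urine volume keeping cystine below its solubility limit.

    Concentration = excretion / volume, so we need
    volume >= excretion / solubility for the urine to stay under-saturated.
    """
    return cystine_excretion_mg_per_day / solubility_mg_per_l

# Illustrative (assumed) values: ~1,000 mg cystine excreted per day and a
# solubility limit of ~250 mg/L near neutral pH:
print(min_urine_volume_l(1000, 250))  # -> 4.0 litres/day, before any safety margin
```

The 5 to 7 liters recommended above then reads as this minimum plus a safety margin for variation in excretion and urine pH.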
Occurrence in animals
This disease is known to occur in at least four mammalian species: humans, domestic canines, domestic ferrets, and a wild canid, the maned wolf of South America.
Cystine uroliths have been demonstrated, usually in male dogs, from approximately 70 breeds including the Australian cattle dog, Australian shepherd, Basenji, Basset, Bullmastiff, Chihuahua, Scottish deerhound, Scottish terrier, Staffordshire terrier, Welsh corgi, and both male and female Newfoundland dogs.
See also
References
== External links == | 132 |
Isosporiasis | Isosporiasis, also known as cystoisosporiasis, is a human intestinal disease caused by the parasite Cystoisospora belli (previously known as Isospora belli). It is found worldwide, especially in tropical and subtropical areas. Infection often occurs in immuno-compromised individuals, notably AIDS patients, and outbreaks have been reported in institutionalized groups in the United States. The first documented case was in 1915. It is usually spread indirectly, normally through contaminated food or water (CDC.gov).
Signs and symptoms
Infection causes acute, non-bloody diarrhea with crampy abdominal pain, which can last for weeks and result in malabsorption and weight loss. In immunodepressed patients, and in infants and children, the diarrhea can be severe. Eosinophilia may be present (unlike in other protozoan infections).
Cause
The coccidian parasite Cystoisospora belli infects the epithelial cells of the small intestine, and is the least common of the three intestinal coccidia that infect humans (Toxoplasma, Cryptosporidium, and Cystoisospora).
Transmission
People become infected by swallowing the mature parasite; this normally occurs through the ingestion of contaminated food or water. The infected host then produces an immature form of the parasite in their feces, and when the parasite matures, it is capable of infecting its next host, via food or water containing the parasite.
Life cycle
At time of excretion, the immature oocyst contains usually one sporoblast (more rarely two). In further maturation after excretion, the sporoblast divides in two, so the oocyst now contains two sporoblasts. The sporoblasts secrete a cyst wall, thus becoming sporocysts; and the sporocysts divide twice to produce four sporozoites each. Infection occurs by ingestion of sporocyst-containing oocysts: the sporocysts excyst in the small intestine and release their sporozoites, which invade the epithelial cells and initiate schizogony. Upon rupture of the schizonts, the merozoites are released, invade new epithelial cells, and continue the cycle of asexual multiplication. Trophozoites develop into schizonts which contain multiple merozoites. After a minimum of one week, the sexual stage begins with the development of male and female gametocytes. Fertilization results in the development of oocysts that are excreted in the stool. Cystoisospora belli infects both humans and animals.
Diagnosis
Microscopic demonstration of the large typically shaped oocysts is the basis for diagnosis. Because the oocysts may be passed in small amounts and intermittently, repeated stool examinations and concentration procedures are recommended. If stool examinations are negative, examination of duodenal specimens by biopsy or string test (Enterotest) may be needed. The oocysts can be visualized on wet mounts by microscopy with bright-field, differential interference contrast (DIC), and epifluorescence. They can also be stained by modified acid-fast stain. Typical laboratory analyses include:
Microscopy
Morphologic comparison with other intestinal parasites
Bench aids for Cystoisospora
Prevention
Avoiding food or water that may be contaminated with stool can help prevent infection with Cystoisospora (isosporiasis). Good hand-washing and personal-hygiene practices should be used as well. One should wash their hands with soap and warm water after using the toilet, changing diapers, and before handling food (CDC.gov). It is also important to teach children the importance of hand-washing and how to wash their hands properly.
Treatment
The treatment of choice is trimethoprim-sulfamethoxazole (Bactrim).
Epidemiology
While isosporiasis occurs throughout the world, it is more common in tropical and subtropical areas. Cystoisospora infections are more common in individuals with compromised immune systems, such as HIV or leukemia.
See also
List of parasites (human)
References
== External links == | 133 |
Gastroparesis | Gastroparesis (gastro- from Ancient Greek γαστήρ - gaster, "stomach"; and -paresis, πάρεσις - "partial paralysis"), also called delayed gastric emptying, is a medical disorder consisting of weak muscular contractions (peristalsis) of the stomach, resulting in food and liquid remaining in the stomach for a prolonged period of time. Stomach contents thus exit more slowly into the duodenum of the digestive tract. This can result in irregular absorption of nutrients, inadequate nutrition, and poor glycemic control. Symptoms include nausea, vomiting, abdominal pain, feeling full soon after beginning to eat (early satiety), abdominal bloating, and heartburn. The most common known mechanism is autonomic neuropathy of the nerve which innervates the stomach: the vagus nerve. Uncontrolled diabetes mellitus is a major cause of this nerve damage; other causes include post-infectious damage and trauma to the vagus nerve.
Diagnosis is via one or more of the following: barium swallow X-ray, barium beefsteak meal, radioisotope gastric-emptying scan, gastric manometry, and esophagogastroduodenoscopy (EGD). Complications include malnutrition, fatigue, weight loss, vitamin deficiencies, intestinal obstruction due to bezoars, and small intestine bacterial overgrowth.
Treatment includes dietary modifications, medications to stimulate gastric emptying, medications to reduce vomiting, and surgical approaches.
Signs and symptoms
The most common symptoms of gastroparesis are the following:
Chronic nausea
Vomiting (especially of undigested food)
Abdominal pain
A feeling of fullness after eating just a few bites
Other symptoms include the following:
Abdominal bloating
Body aches (myalgia)
Erratic blood glucose levels
Acid reflux (GERD)
Heartburn
Lack of appetite
Morning nausea
Muscle weakness
Night sweats
Palpitations
Spasms of the stomach wall
Constipation or infrequent bowel movements
Weight loss, malnutrition
Difficulty swallowing
Vomiting may not occur in all cases, as those affected may adjust their diets to include only small amounts of food.
Causes
Transient gastroparesis may arise in acute illness of any kind, as a consequence of certain cancer treatments or other drugs which affect digestive action, or due to abnormal eating patterns. The symptoms are almost identical to those of low stomach acid, therefore most doctors will usually recommend trying out supplemental hydrochloric acid before moving on to the invasive procedures required to confirm a damaged nerve. Patients with cancer may develop gastroparesis because of chemotherapy-induced neuropathy, immunosuppression followed by viral infections involving the GI tract, procedures such as celiac blocks, paraneoplastic neuropathy or myopathy, or after an allogeneic bone marrow transplant via graft-versus-host disease. Slow gastric emptying caused by certain opioid medications, antidepressants, allergy medications, and medications for high blood pressure produces symptoms similar to those of gastroparesis, and in patients who already have gastroparesis, these drugs can make the condition worse. More than 50% of all gastroparesis cases are idiopathic in nature, with unknown causes. It is, however, frequently caused by autonomic neuropathy. This may occur in people with type 1 or type 2 diabetes, in about 30–50% of long-standing diabetics. In fact, diabetes mellitus has been named as the most common cause of gastroparesis, as high levels of blood glucose may effect chemical changes in the nerves. The vagus nerve becomes damaged by years of high blood glucose or insufficient transport of glucose into cells, resulting in gastroparesis. Adrenal and thyroid gland problems could also be a cause. Gastroparesis has also been associated with connective tissue diseases such as scleroderma and Ehlers–Danlos syndrome, and neurological conditions such as Parkinson's disease and multiple system atrophy. It may occur as part of a mitochondrial disease. Opioids and anticholinergic medications can cause medication-induced gastroparesis. Chronic gastroparesis can be caused by other types of damage to the vagus nerve, such as abdominal surgery. Heavy cigarette smoking is also a plausible cause, since smoking causes damage to the stomach lining. Idiopathic gastroparesis (gastroparesis with no known cause) accounts for a third of all chronic cases; it is thought that many of these cases are due to an autoimmune response triggered by an acute viral infection. Gastroenteritis, mononucleosis, and other ailments have been anecdotally linked to the onset of the condition, but no systematic study has proven a link. People with gastroparesis are disproportionately female. One possible explanation for this finding is that women have an inherently slower stomach emptying time than men. A hormonal link has been suggested, as gastroparesis symptoms tend to worsen the week before menstruation, when progesterone levels are highest. Neither theory has been proven definitively.
Mechanism
On the molecular level, it is thought that gastroparesis can be caused by the loss of neuronal nitric oxide expression, since the cells in the GI tract secrete nitric oxide. This important signaling molecule has various responsibilities in the GI tract and in muscles throughout the body. When nitric oxide levels are low, the smooth muscle and other organs may not be able to function properly. Other important components of the stomach are the interstitial cells of Cajal (ICC), which act as a pacemaker since they transduce signals from motor neurons to produce an electrical rhythm in the smooth muscle cells. Lower nitric oxide levels also correlate with loss of ICC cells, which can ultimately lead to the loss of function in the smooth muscle in the stomach, as well as in other areas of the gastrointestinal tract. The pathogenesis of symptoms in diabetic gastroparesis includes:
Loss of gastric neurons containing nitric oxide synthase (NOS) is responsible for defective accommodation reflex, which leads to early satiety and postprandial fullness.
Impaired electromechanical activity in the myenteric plexus is responsible for delayed gastric emptying, resulting in nausea and vomiting.
Sensory neuropathy in the gastric wall may be responsible for epigastric pain.
Abnormal pacemaker activity (tachybradyarrhythmia) may generate a noxious signal transmitted to the CNS to evoke nausea and vomiting.
Diagnosis
Gastroparesis can be diagnosed with tests such as barium swallow X-rays, manometry, and gastric emptying scans. For the X-ray, the patient drinks a liquid containing barium after fasting, which will show up in the X-ray, and the physician is able to see if there is still food in the stomach as well. This can be an easy way to identify whether the patient has delayed emptying of the stomach. The clinical definition for gastroparesis is based solely on the emptying time of the stomach (and not on other symptoms), and severity of symptoms does not necessarily correlate with the severity of gastroparesis. Therefore, some patients may have marked gastroparesis with few, if any, serious complications. In other cases, or if the X-ray is inconclusive, the physician may have the patient eat a meal of toast, water, and eggs containing a radioactive isotope, so they can watch as it is digested and see how slowly the digestive tract is moving. This can be helpful for diagnosing patients who are able to digest liquids but not solid foods.
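As a rough illustration of how such an emptying study is read, the sketch below flags a scan as delayed using commonly cited solid-meal scintigraphy cut-offs (more than about 60% of the meal retained at 2 hours, or more than 10% at 4 hours). These thresholds are assumptions for illustration, not values given in this article:

```python
def delayed_gastric_emptying(retention_2h_pct, retention_4h_pct):
    """Flag a solid-meal gastric emptying scan as delayed.

    Uses commonly cited scintigraphy cut-offs (assumed here for
    illustration): >60% of the meal retained at 2 hours or >10% at 4 hours.
    """
    return retention_2h_pct > 60 or retention_4h_pct > 10

# A patient retaining 55% at 2 h but still 15% at 4 h would be flagged:
print(delayed_gastric_emptying(retention_2h_pct=55, retention_4h_pct=15))  # True
```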
Complications
Complications of gastroparesis include:
Fluctuations in blood glucose due to unpredictable digestion times, caused by changes in the rate and amount of food passing into the small bowel. This makes diabetes worse, but does not cause diabetes. Lack of control of blood sugar levels will in turn make the gastroparesis worse.
General malnutrition due to the symptoms of the disease (which frequently include vomiting and reduced appetite) as well as the dietary changes necessary to manage it. This is especially true for vitamin deficiencies such as scurvy, because of an inability to tolerate fresh fruits.
Severe fatigue and weight loss due to calorie deficit
Intestinal obstruction due to the formation of bezoars (solid masses of undigested food). This can cause nausea and vomiting, which can in turn be life-threatening if the bezoars prevent food from passing into the small intestine.
Small intestine bacterial overgrowth is commonly found in patients with gastroparesis.
Bacterial infection due to overgrowth in undigested food
A decrease in quality of life, since it can make keeping up with work and other responsibilities more difficult.
Treatment
Treatment includes dietary modifications, medications to stimulate gastric emptying, medications to reduce vomiting, and surgical approaches. Dietary treatment involves low-fiber diets and, in some cases, restrictions on fat or solids. Eating smaller meals, spaced two to three hours apart, has proved helpful. Avoiding foods, such as rice or beef, that cause the individual problems such as abdominal pain or constipation will help avoid symptoms. Metoclopramide, a dopamine D2 receptor antagonist, increases contractility and resting tone within the GI tract to improve gastric emptying. In addition, dopamine antagonist action in the central nervous system prevents nausea and vomiting. Similarly, the dopamine receptor antagonist domperidone is used to treat gastroparesis. Erythromycin is known to improve emptying of the stomach, but its effects are temporary due to tachyphylaxis and wane after a few weeks of consistent use. Sildenafil citrate, which increases blood flow to the genital area in men, is being used by some practitioners to stimulate the gastrointestinal tract in cases of diabetic gastroparesis. The antidepressant mirtazapine has proven effective in the treatment of gastroparesis unresponsive to conventional treatment. This is due to its antiemetic and appetite-stimulant properties. Mirtazapine acts on the same serotonin receptor (5-HT3) as does the popular antiemetic ondansetron. Camicinal is a motilin agonist for the treatment of gastroparesis.
In specific cases where treatment of chronic nausea and vomiting proves resistant to drugs, implantable gastric stimulation may be utilized. A medical device is implanted that applies neurostimulation to the muscles of the lower stomach to reduce the symptoms. This is only done in refractory cases that have failed all medical management (usually at least two years of treatment). Medically refractory gastroparesis may also be treated with a pyloromyotomy, which widens the gastric outlet by cutting the circular pylorus muscle. This can be done laparoscopically or endoscopically. Vertical sleeve gastrectomy, a procedure in which a part or all of the affected portion of the stomach is removed, has been shown to have some success in the treatment of gastroparesis in obese patients, even curing it in some instances. Further studies have been recommended due to the limited sample size of previous studies. In cases of postinfectious gastroparesis, patients have symptoms and go undiagnosed for an average of 3 weeks to 6 months before their illness is identified correctly and treatment begins.
Prognosis
Post-infectious
Cases of post-infectious gastroparesis are self‐limiting, with recovery within 12 months of initial symptoms, although some cases last well over 2 years. In children, the duration tends to be shorter and the disease course milder than in adolescent and adults.
Diabetic gastropathy
Diabetic gastropathy is usually slowly progressive, and can become severe and lethal.
Prevalence
Post-infectious gastroparesis, which constitutes the majority of idiopathic gastroparesis cases, affects up to 4% of the American population. Women in their 20s and 30s seem to be susceptible. One study of 146 American gastroparesis patients found the mean age of patients was 34 years, with 82% of those affected being women, while another study found the patients were young or middle-aged and up to 90% were women. There has only been one true epidemiological study of idiopathic gastroparesis, which was completed by the Rochester Epidemiology Project. They looked at patients from 1996 to 2006 who were seeking medical attention, instead of a random population sample, and found that the prevalence of delayed gastric emptying was fourfold higher in women. It is difficult for medical professionals and researchers to collect enough data and provide accurate numbers, since studying gastroparesis requires specialized laboratories and equipment.
References
Further reading
Overview from NIDDK National Institute of Diabetes, Digestive, and Kidney Diseases at NIH
Camilleri M, Parkman HP, Shafi MA, Abell TL, Gerson L (January 2013). "Clinical guideline: management of gastroparesis". The American Journal of Gastroenterology. 108 (1): 18–37, quiz 38. doi:10.1038/ajg.2012.373. PMC 3722580. PMID 23147521.
Parkman HP, Fass R, Foxx-Orenstein AE (June 2010). "Treatment of patients with diabetic gastroparesis". Gastroenterology & Hepatology. 6 (6): 1–16. PMC 2920593. PMID 20733935.
== External links == | 134 |
Delayed puberty | Delayed puberty is when a person lacks or has incomplete development of specific sexual characteristics past the usual age of onset of puberty. The person may have no physical or hormonal signs that puberty has begun. In the United States, girls are considered to have delayed puberty if they lack breast development by age 13 or have not started menstruating by age 15. Boys are considered to have delayed puberty if they lack enlargement of the testicles by age 14. Delayed puberty affects about 2% of adolescents. Most commonly, puberty may be delayed for several years and still occur normally, in which case it is considered constitutional delay of growth and puberty, a common variation of healthy physical development. Delay of puberty may also occur due to various causes such as malnutrition, various systemic diseases, or defects of the reproductive system (hypogonadism) or the body's responsiveness to sex hormones. Initial workup for delayed puberty not due to a chronic condition involves measuring serum FSH, LH, and testosterone/estradiol, as well as bone age radiography. If it becomes clear that there is a permanent defect of the reproductive system, treatment usually involves replacement of the appropriate hormones (testosterone/dihydrotestosterone for boys, estradiol and progesterone for girls).
Timing and definitions
Puberty is considered delayed when a child has not begun puberty by the age at which about 95% of children from similar backgrounds (two standard deviations above the mean age of onset) have begun. In North American girls, puberty is considered delayed when breast development has not begun by age 13, when menstruation has not started by age 15, and when there is no increased growth rate. Furthermore, slowed progression through the Tanner scale or lack of menarche within 3 years of breast development may also be considered delayed puberty. In the United States, the age of onset of puberty in girls depends heavily on their racial background: delayed puberty means the lack of breast development by age 12.8 years for White girls, and by age 12.4 years for Black girls. The lack of menstruation by age 15 in any ethnic background is considered delayed. In North American boys, puberty is considered delayed when the testes remain less than 2.5 cm in diameter or less than 4 mL in volume by the age of 14. Delayed puberty is more common in males. Although absence of pubic and/or axillary hair is common in children with delayed puberty, the presence of sexual hair is due to adrenal sex hormone secretion unrelated to the sex hormones produced by the ovaries or testes. The age of onset of puberty is dependent on genetics, general health, socioeconomic status, and environmental exposures. Children residing closer to the equator, at lower altitudes, in cities and other urban areas generally begin the process of puberty earlier than their counterparts. Mildly obese to morbidly obese children are also more likely to begin puberty earlier than children of normal weight. Variation in genes related to obesity, such as FTO or NEGR1, has been associated with earlier onset of puberty. Children whose parents started puberty at an earlier age were also more likely to experience it themselves, especially in females, where the age at onset of menstruation correlated well between mothers and daughters and between sisters.
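The two-standard-deviation convention can be expressed numerically. In the sketch below, the population mean and standard deviation are assumed, illustrative values chosen so that the result matches the age-13 cut-off for breast development quoted above; real norms vary by population:

```python
def delayed_puberty_cutoff(mean_onset_years, sd_years, n_sd=2):
    """Age beyond which onset is later than roughly 95%+ of a normal population."""
    return mean_onset_years + n_sd * sd_years

# Illustrative (assumed) population values: breast development starting at a
# mean of 10 years with SD 1.5 years reproduces the age-13 cut-off quoted above.
print(delayed_puberty_cutoff(10.0, 1.5))  # -> 13.0
```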
Causes
Pubertal delay can be separated into four categories from most to least common:
Constitutional and physiologic delay
Children who are healthy but have a slower rate of physical development than average have a constitutional delay, with a subsequent delay in puberty. It is the most common cause of delayed puberty in girls (30%) and even more so in boys (65%). It is commonly inherited, with as much as 80% of the variation in the age of onset of puberty due to genetic factors. These children have a history of shorter stature than their age-matched peers throughout childhood, but their height is appropriate for bone age, meaning that they have delayed skeletal maturation with potential for future growth. It is often difficult to establish if it is a true constitutional delay of growth and puberty or if there is an underlying pathology, because lab tests are not always discriminatory. In the absence of any other symptoms, short stature, delayed growth in height and weight, and/or delayed puberty may be the only clinical manifestations of certain chronic diseases, including coeliac disease.
Malnutrition or chronic disease
When underweight or sickly children present with pubertal delay, it is warranted to search for illnesses that cause a temporary and reversible delay in puberty. Chronic conditions such as sickle cell disease and thalassemia, cystic fibrosis, HIV/AIDS, hypothyroidism, chronic kidney disease, and chronic gastroenteric disorders (such as coeliac disease and inflammatory bowel disease) cause a delayed activation of the hypothalamic region of the brain to send signals to start puberty. Childhood cancer survivors can also present with delayed puberty secondary to their cancer treatments, especially males. The type of treatment, the amount of exposure/dosage of drugs, and the age during treatment determine the degree to which the gonads are affected, with younger patients at a lower risk of negative reproductive effects. Excessive physical exercise and physical stress, especially in athletes, can also delay pubertal onset. Eating disorders such as bulimia nervosa and anorexia nervosa can also impair puberty due to undernutrition. Carbohydrate-restricted diets for weight loss have also been shown to decrease the stimulation of insulin, which in turn fails to stimulate the kisspeptin neurons vital to the release of puberty-starting hormones. This suggests that carbohydrate-restricted children and children with diabetes mellitus type 1 can have delayed puberty.
Primary failure of the ovaries or testes (hypergonadotropic hypogonadism)
Primary failure of the ovaries or testes (gonads) will cause delayed puberty due to the lack of hormonal response by the final receptors of the HPG axis. In this scenario, the brain sends a lot of hormonal signals (high gonadotropin), but the gonads are unable to respond to said signals causing hypergonadotropic hypogonadism. Hypergonadotropic hypogonadism can be caused by congenital defects or acquired defects.
Congenital disorders
Congenital diseases include untreated cryptorchidism, in which the testicles fail to descend from the abdomen. Other congenital disorders are genetic in nature. In males, there can be deformities in the seminiferous tubules as in Klinefelter syndrome (the most common cause in males), defects in the production of testicular steroids, receptor mutations preventing testicular hormones from working, chromosomal abnormalities such as Noonan syndrome, or problems with the cells making up the testes. Females can also have chromosomal abnormalities such as Turner syndrome (the most common cause in girls), XX gonadal dysgenesis, and XY gonadal dysgenesis, problems in the ovarian hormone synthesis pathway such as aromatase deficiency, or congenital anatomical deformities such as Müllerian agenesis.
Acquired disorders
Acquired diseases include mumps orchitis, Coxsackievirus B infection, irradiation, chemotherapy, or trauma; all problems causing the gonads to fail.
Genetic or acquired defect of the hormonal pathway of puberty (hypogonadotropic hypogonadism)
The hypothalamic–pituitary–gonadal axis can also be affected at the level of the brain. If the brain does not send its hormonal signals to the gonads (low gonadotropins), the gonads are never activated in the first place, resulting in hypogonadotropic hypogonadism. The HPG axis can be altered at two places: at the hypothalamic or at the pituitary level. CNS disorders such as childhood brain tumors (e.g. craniopharyngioma, prolactinoma, germinoma, glioma) can disrupt the communication between the hypothalamus and the pituitary. Pituitary tumors, especially prolactinomas, can increase the level of dopamine, causing an inhibiting effect on the HPG axis. Hypothalamic disorders include Prader-Willi syndrome and Kallmann syndrome, but the most common cause of hypogonadotropic hypogonadism is a functional deficiency in the hormone regulator produced by the hypothalamus, gonadotropin-releasing hormone (GnRH).
Diagnosis
Pediatric endocrinologists are the physicians with the most training and experience in evaluating delayed puberty. A complete medical history, review of systems, growth pattern, and physical examination, as well as laboratory testing and imaging, will reveal most of the systemic diseases and conditions capable of arresting development or delaying puberty, as well as providing clues to some of the recognizable syndromes affecting the reproductive system. Timely medical assessment is a necessity, since as many as half of girls with delayed puberty have an underlying pathology.
History and physical
Constitutional and physiologic delay
Children with constitutional delay are reported to be shorter than their peers, lacking a growth spurt, and having an overall smaller build. Their growth began to slow years before the expected growth spurt secondary to puberty, which helps differentiate a constitutional delay from an HPG-axis-related disorder. A complete family history with the ages at which the parents hit the pubertal milestones can also provide a reference point for the expected age of puberty. Growth measurement parameters in children with suspected constitutional delay include height, weight, the rate of growth, and the calculated mid-parental height, which represents the expected adult height for the child.
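One widely used convention for the calculated mid-parental height averages the parents' heights and adjusts by about 6.5 cm for the child's sex. The sketch below assumes that convention; exact adjustments vary slightly between references:

```python
def mid_parental_height_cm(father_cm, mother_cm, sex):
    """Expected adult height from parental heights (a widely used formula;
    conventions vary slightly between references)."""
    midpoint = (father_cm + mother_cm) / 2
    return midpoint + 6.5 if sex == "male" else midpoint - 6.5

print(mid_parental_height_cm(178, 164, "male"))    # -> 177.5 cm
print(mid_parental_height_cm(178, 164, "female"))  # -> 164.5 cm
```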
Malnutrition or chronic disease
Diet and physical activity habits, as well as history of previous serious illnesses and medication history can provide clues as to the cause of delayed puberty. Delayed growth and puberty can be the first signs of severe chronic illnesses such as metabolic disorders including inflammatory bowel disease and hypothyroidism. Symptoms such as fatigue, pain, and abnormal stooling pattern are suggestive of an underlying chronic condition. Low BMI can lead a physician to diagnose an eating disorder, undernutrition, child abuse, or chronic gastrointestinal disorders.
Primary failure of the ovaries or testes
A eunuchoid body shape, in which the arm span exceeds the height by more than 5 cm, suggests a delay in growth plate closure secondary to hypogonadism. Turner syndrome has unique diagnostic features, including a webbed neck, short stature, shield chest, and low hairline. Klinefelter syndrome presents with tall stature as well as small, firm testes.
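The arm-span rule quoted above is easy to express directly; the helper below is a hypothetical convenience function for illustration, not a validated clinical tool:

```python
def eunuchoid_proportions(arm_span_cm, height_cm, threshold_cm=5):
    """Apply the rule of thumb quoted above: an arm span exceeding height
    by more than ~5 cm suggests delayed growth-plate closure."""
    return (arm_span_cm - height_cm) > threshold_cm

print(eunuchoid_proportions(arm_span_cm=183, height_cm=175))  # True (8 cm excess)
```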
Genetic or acquired defect of the hormonal pathway of puberty
A lacking sense of smell (anosmia) along with delayed puberty is a strong clinical indication for Kallmann syndrome. Deficiencies in GnRH, the signalling hormone produced by the hypothalamus, can be accompanied by congenital malformations, including cleft lip and scoliosis. The presence of neurological symptoms, including headaches and visual disturbances, suggests a brain disorder such as a brain tumor causing hypopituitarism. Neurological symptoms in addition to lactation are signs of high prolactin levels, and could indicate either a drug side effect or a prolactinoma.
Imaging
Since bone maturation is a good indicator of overall physical maturation, an x-ray of the left hand and wrist to assess bone age usually reveals whether the child has reached a stage of physical maturation at which puberty should be occurring. X-ray displaying a bone age <11 years in girls or <13 years in boys (despite a higher chronological age) is most often consistent with constitutional delay of puberty. An MRI of the brain should be considered if neurological symptoms are present in addition to delayed puberty, two findings suspicious for pituitary or hypothalamic tumors. An MRI can also confirm the diagnosis of Kallmann syndrome due to the absence or abnormal development of the olfactory tract. However, in the absence of clear neurological symptoms, an MRI may not be the most cost-effective option. A pelvic ultrasound can detect anatomical abnormalities including undescended testes and müllerian agenesis.
Laboratory evaluation
The first step in evaluating children with delayed puberty involves differentiating between the different causes of delayed puberty. Constitutional delay can be evaluated with a thorough history, physical, and bone age. Malnutrition and chronic diseases can be diagnosed through history and disease-specific testing. Screening studies include a complete blood count, an erythrocyte sedimentation rate, and thyroid studies. Hypogonadism can be differentiated between hyper- and hypo-gonadotropic hypogonadism by measuring serum follicle-stimulating hormone (FSH) and luteinizing hormone (LH) (gonadotropins, to measure pituitary output), and estradiol in girls (to measure gonadal output). By the age of 10–12, children with failure of the ovaries or testes will have high LH and FSH, because the brain is attempting to jump-start puberty, but the gonads are not responsive to these signals. Stimulating the body by administering an artificial version of gonadotropin-releasing hormone (GnRH, the hypothalamic hormone) can differentiate between constitutional delay of puberty and a GnRH deficiency in boys, although no studies have been done in girls to prove this. It is often sufficient to simply measure the baseline gonadotrophin levels to differentiate between the two. In girls with hypogonadotropic hypogonadism, a serum prolactin level is measured to identify whether they have the pituitary tumor prolactinoma. High levels of prolactin would warrant further testing with MRI imaging, except if drugs inducing the production of prolactin can be identified. If the child has any neurological symptoms, it is highly recommended that the physician obtain a head MRI to detect possible brain lesions. In girls with hypergonadotropic hypogonadism, a karyotype can identify chromosomal abnormalities, the most common of which is Turner syndrome. In boys, a karyotype is indicated if the child may have a congenital gonadal defect such as Klinefelter syndrome. In children with a normal karyotype, defects in the synthesis of the adrenal steroid sex hormones can be identified by measuring 17-hydroxylase, an important enzyme involved in the production of sex hormones.
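The LH/FSH logic described here can be summarized as a simple triage rule. The cut-off in the sketch below is an assumed, illustrative value; real gonadotropin reference ranges depend on the assay, the child's age, and pubertal stage:

```python
def classify_hypogonadism(lh_iu_l, fsh_iu_l, high_cutoff_iu_l=10.0):
    """Rough triage of delayed puberty by gonadotropin levels.

    High LH and FSH suggest the gonads are not responding
    (hypergonadotropic); low or normal LH/FSH suggest the brain is not
    signalling (hypogonadotropic hypogonadism or constitutional delay).
    The 10 IU/L cut-off is an assumed, illustrative value only.
    """
    if lh_iu_l > high_cutoff_iu_l and fsh_iu_l > high_cutoff_iu_l:
        return "hypergonadotropic hypogonadism (primary gonadal failure)"
    return "hypogonadotropic hypogonadism or constitutional delay"

print(classify_hypogonadism(lh_iu_l=25.0, fsh_iu_l=40.0))
# -> "hypergonadotropic hypogonadism (primary gonadal failure)"
```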
Management
The goals of short-term hormone therapy are to induce the beginning of sexual development and induce a growth spurt, but should be limited to children with severe distress or anxiety secondary to their delayed puberty. Bone age must be monitored frequently to prevent precocious closure of the bone plates, thereby stunting growth.
Constitutional and physiologic delay
If a child is healthy with a constitutional delay of growth and puberty, reassurance and prediction based on the bone age can be provided. No other intervention is usually necessary, but repeat evaluation by measuring serum testosterone or estrogen is recommended. Furthermore, the diagnosis of hypogonadism can be excluded once the adolescent has started puberty by age 16–18. Boys aged >14 years whose growth is severely stunted or who are experiencing severe distress secondary to their lack of puberty can be started on testosterone to increase their height. Testosterone treatment can also be used to stimulate sexual development, but if not carefully administered it can close the bone plates prematurely, stopping growth altogether. Another therapeutic option is the use of aromatase inhibitors to inhibit the conversion of androgens to estrogens, as estrogens are responsible for stopping bone growth plate development and thus growth. However, due to side effects, therapy with testosterone alone is most often used. Overall, neither growth hormone nor aromatase inhibitors are recommended in constitutional delay to increase growth. Girls can be started on estrogen with the same goals as their male counterparts. Overall, studies have shown no significant difference in final adult height between adolescents treated with sex steroids and those who were only observed with no treatment.
Malnutrition or chronic disease
If the delay is due to systemic disease or malnutrition, the therapeutic intervention is likely to focus directly on those conditions. In patients with coeliac disease, an early diagnosis and the establishment of a gluten-free diet prevents long-term complications and allows restoration of normal maturation. Thyroid hormone therapy will be necessary in the case of hypothyroidism.
Primary failure of the ovaries or testes (hypergonadotropic hypogonadism)
Whereas children with constitutional delay will have normal levels of sex hormones post-puberty, gonadotropin deficiency or hypogonadism may require lifelong sex steroid replacement. In girls with primary ovarian failure, estrogen should be started when puberty is supposed to start. Progestins are usually added after there is acceptable breast development, about 12 to 24 months after starting estrogen, as starting treatment with progestin too early can negatively affect breast growth. After acceptable breast growth, administering estrogen and progestin in a cyclical manner can help establish regular menses once puberty is started. The goal is to complete sexual maturation over 2 to 3 years. Once sexual maturation has been achieved, a trial period with no hormonal therapy can determine whether or not the child will require life-long treatment. Girls with congenital GnRH deficiency require enough sex hormone supplementation to maintain body levels in the expected pubertal levels necessary to induce ovulation, especially when fertility is a concern. Males with primary failure of the testes will be on lifelong testosterone. Pulsatile GnRH, weekly multi-LH, or hCG and FSH can be used to induce fertility in adulthood for both males and females.
Genetic or acquired defect of the hormonal pathway of puberty (hypogonadotropic hypogonadism)
Boys aged >12 years with hypogonadotropic hypogonadism are most often treated with short-term testosterone, while males with testicular failure will be on life-long testosterone. The choice of formulation (topical vs. injection) depends on the child's and family's preference, as well as on how well they tolerate side effects. Although testosterone therapy alone will result in the start of puberty, to increase fertility potential they may need pulsatile GnRH or hCG with rFSH. hCG can be used by itself in boys with spontaneous onset of puberty from non-permanent forms of hypogonadotropic hypogonadism, and rFSH can be added in cases of low sperm count after 6 to 12 months of treatment. If puberty has not started after 1 year of treatment, then permanent hypogonadotropic hypogonadism should be considered. Girls with hypogonadotropic hypogonadism are started on the same sex steroid therapy as their counterparts with a constitutional delay; however, doses are gradually increased to reach full adult replacement levels. The dosage of estrogen is titrated based on the woman's ability to have withdrawal bleeds and to maintain appropriate bone density. Induction of fertility must also be done through pulsatile GnRH.
Others
Growth hormone is another option that has been described; however, it should only be used in proven growth hormone deficiency or conditions such as idiopathic short stature. Children with a constitutional delay have not been shown to benefit from growth hormone therapy. Although serum growth hormone levels are low in constitutional delay of puberty, they increase after treatment with sex hormones, and in those cases growth hormone is not suggested to accelerate growth. Subnormal vitamin A intake is one of the etiological factors in delayed pubertal maturation. Supplementation of both vitamin A and iron in normal constitutionally delayed children with subnormal vitamin A intake is as efficacious as hormonal therapy in the induction of growth and puberty. More therapies are being developed to target the more discrete modulators of the HPG axis, including kisspeptin and neurokinin B. In cases of severe delayed puberty secondary to hypogonadism, evaluation by a psychologist or psychiatrist, as well as counseling and a supportive environment, are an important supplemental therapy for the child. The transition from pediatric to adult care is also vital, as many children are lost during the transition of care.
Outlook
Constitutional delay of growth and puberty is a variation of normal development with no long-term health consequences; however, it can have lasting psychological effects. Adolescent boys with delayed puberty have a higher level of anxiety and depression relative to their peers. Children with delayed puberty also display decreased academic performance during their adolescent education, but changes in academic achievement in adulthood have not been determined. There is conflicting evidence as to whether or not children with constitutional growth and pubertal delay reach their full height potential. The conventional teaching is that these children catch up on their growth during the pubertal growth spurt and are simply shorter before their delayed puberty starts. However, some studies show that these children fall short of their target height by about 4 to 11 cm. Factors that could affect final height include familial short stature and pre-pubertal growth development. Pubertal delay can also affect bone mass and the subsequent development of osteoporosis. Men with delayed puberty often have low to normal bone mineral density unaffected by androgen therapy. Women are more likely to have lower bone mineral density, and thus an increased risk of fractures, as early as even before the onset of puberty. Furthermore, delayed puberty is correlated with a higher risk of cardiovascular and metabolic disorders in women only, but also appears to be protective against breast and endometrial cancer in women and testicular cancer in men.
See also
Developmental milestones
Endocrinology
Puberty
Constitutional growth delay
Hypogonadism
Kallmann syndrome
Turner syndrome
Klinefelter syndrome
References
== External links ==
Dermatitis herpetiformis | Dermatitis herpetiformis (DH) is a chronic autoimmune blistering skin condition, characterised by intensely itchy blisters filled with a watery fluid. DH is a cutaneous manifestation of coeliac disease, although the exact causal mechanism is not known. DH is neither related to nor caused by herpes virus; the name means that it is a skin inflammation having an appearance (Latin: -formis) similar to herpes.
The age of onset is usually about 15–40, but DH also may affect children and the elderly. Men are slightly more affected than women. Estimates of DH prevalence vary from 1 in 400 to 1 in 10,000. It is most common in patients of northern European and northern Indian ancestry, and is associated with the human leukocyte antigen (HLA) haplotype HLA-DQ2 or HLA-DQ8 along with coeliac disease and gluten sensitivity. Dermatitis herpetiformis was first described by Louis Adolphus Duhring in 1884. A connection between DH and coeliac disease was recognized in 1967.
Signs and symptoms
Dermatitis herpetiformis is characterized by intensely itchy, chronic papulovesicular eruptions, usually distributed symmetrically on extensor surfaces (buttocks, back of neck, scalp, elbows, knees, back, hairline, groin, or face). The blisters vary in size from very small up to 1 cm across. The condition is extremely itchy, and the desire to scratch may be overwhelming. This sometimes causes the affected person to scratch the blisters off before they are examined by a physician. Intense itching or burning sensations are sometimes felt before the blisters appear in a particular area. The signs and symptoms of DH typically appear around 30 to 40 years of age, although all ages may be affected. Although the first signs and symptoms of dermatitis herpetiformis are intense itching and burning, the first visible signs are the small papules or vesicles that usually look like red bumps or blisters. The rash rarely occurs on other mucous membranes, excepting the mouth or lips. The symptoms range in severity from mild to serious, but they are likely to disappear if gluten ingestion is avoided and appropriate treatment is administered.
Dermatitis herpetiformis symptoms are chronic, and they tend to come and go, mostly in short periods of time in response to the amount of gluten ingested. Sometimes, these symptoms may be accompanied by symptoms of coeliac disease, which typically include abdominal pain, bloating or loose stool, weight loss, and fatigue. However, individuals with DH often have no gastrointestinal symptoms even if they have associated intestinal damage. The rash caused by dermatitis herpetiformis forms and disappears in three stages. In the first stage, the patient may notice a slight discoloration of the skin at the site where the lesions appear. In the next stage, the skin lesions transform into obvious vesicles and papules that are likely to occur in groups. Healing of the lesions is the last stage of the development of the symptoms, usually characterized by a change in the skin color. This may result in areas of the skin turning darker or lighter than the color of the skin on the rest of the body. Because of the intense itching, patients usually scratch, which may lead to the formation of crusts.
Pathophysiology
In terms of pathology, the first signs of the condition may be observed within the dermis. The changes that may take place at this level may include edema, vascular dilatation, and cellular infiltration. It is common for lymphocytes and eosinophils to be seen. The bullae found in the skin affected by dermatitis herpetiformis are subepidermal and have rounded lateral borders.
When looked at under the microscope, the skin affected by dermatitis herpetiformis presents a collection of neutrophils. They have an increased prevalence in the areas where the dermis is closest to the epidermis.
Direct immunofluorescence (IMF) studies of uninvolved skin show IgA in the dermal papillae and patchy granular IgA along the basement membrane. The jejunal mucosa may show partial villous atrophy, but the changes tend to be milder than in coeliac disease. Immunological studies revealed findings that are similar to those of coeliac disease in terms of autoantigens. The main autoantigen of dermatitis herpetiformis is epidermal transglutaminase (eTG), a cytosolic enzyme involved in cell envelope formation during keratinocyte differentiation. Various research studies have pointed out different potential factors that may play a larger or smaller role in the development of dermatitis herpetiformis. The fact that eTG has been found in precipitates of skin-bound IgA from skin affected by this condition has been used to conclude that dermatitis herpetiformis may be caused by a deposition of both IgA and eTG within the dermis. It is estimated that these deposits may resorb after ten years of following a gluten-free diet. Moreover, it is suggested that this condition is closely linked to genetics. This theory is based on the argument that individuals with a family history of gluten sensitivity who still consume foods containing gluten are more likely to develop the condition as a result of the formation of antibodies to gluten. These antibodies cross-react with eTG, and IgA/eTG complexes deposit within the papillary dermis to cause the lesions of dermatitis herpetiformis. These IgA deposits may disappear after long-term (up to ten years) avoidance of dietary gluten. Gliadin proteins in gluten are absorbed by the gut and enter the lamina propria, where they are deamidated by tissue transglutaminase (tTG). tTG modifies gliadin into a more immunogenic peptide. Classical dendritic cells (cDCs) endocytose the immunogenic peptide, and if their pattern recognition receptors (PRRs) are stimulated by pathogen-associated molecular patterns (PAMPs) or danger-associated molecular patterns (DAMPs), the danger signal will influence them to secrete IL-8 (CXCL8) in the lamina propria, recruiting neutrophils. Neutrophil recruitment results in a very rapid onset of inflammation. Therefore, co-infection with microbes that carry PAMPs may be necessary for the initial onset of symptoms in gluten sensitivity, but would not be necessary for successive encounters with gluten due to the production of memory B and memory T cells (discussed below).
Dermatitis herpetiformis may be characterised based on inflammation in the skin and gut. Inflammation in the gut is similar to, and linked to, coeliac disease. tTG is treated as an autoantigen, especially in people with certain HLA-DQ2 and HLA-DQ8 alleles and other gene variants that cause atopy. tTG is up-regulated after gluten absorption. cDCs endocytose tTG-modified gliadin complexes or modified gliadin alone, but they only present gliadin to CD4+ T cells on pMHC-II complexes. These T cells become activated and polarised into type I helper T (Th1) cells. Th1 cells reactive towards gliadin have been discovered, but none against tTG. A naive B cell sequesters tTG-modified gliadin complexes from the surface of cDCs in the lymph nodes (LNs) before they become endocytosed by the cDCs. The B cell receptor (membrane-bound antibody; BCR) is specific to the tTG portion of the complex. The B cell endocytoses the complex and presents the modified gliadin to the activated Th1 cell's T cell receptor (TCR) via pMHC-II in a process known as epitope spreading. Thus, the B cell presents the foreign peptide (modified gliadin) but produces antibodies specific for the self-antigen (tTG). Once the B cell becomes activated, it differentiates into plasma cells that secrete autoantibodies against tTG, which may be cross-reactive with epidermal transglutaminase (eTG). Class A antibodies (IgA) deposit in the gut. Some may bind to the CD89 (FcαRI) receptor on macrophages (M1) via their Fc region (constant region). This will trigger endocytosis of the tTG-IgA complex, resulting in the activation of macrophages. Macrophages secrete more IL-8, propagating the neutrophil-mediated inflammatory response.
The purportedly cross-reactive autoantibodies may migrate to the skin in dermatitis herpetiformis. IgA deposits may form if the antibodies cross-react with epidermal transglutaminase (eTG). Some patients have eTG-specific antibodies instead of tTG-specific cross-reactive antibodies, and the relationship between dermatitis herpetiformis and coeliac disease in these patients is not fully understood. Macrophages may be stimulated to secrete IL-8 by the same process as is seen in the gut, causing neutrophils to accumulate at sites of high eTG concentrations in the dermal papillae of the skin. Neutrophils produce pus in the dermal papillae, generating characteristic blisters. IL-31 accumulation at the blisters may intensify itching sensations. Memory B and T cells may become activated in the absence of PAMPs and DAMPs during successive encounters with tTG-modified gliadin complexes or modified gliadin alone, respectively. Symptoms of dermatitis herpetiformis often resolve if patients avoid dietary gluten.
Diagnosis
Dermatitis herpetiformis often is misdiagnosed, being confused with drug eruptions, contact dermatitis, dishydrotic eczema (dyshidrosis), and even scabies. Other diagnoses in the differential diagnosis include bug bites and other blistering conditions such as bullous pemphigoid, linear IgA bullous dermatosis, and bullous systemic lupus erythematosus.
The diagnosis may be confirmed by a simple blood test for IgA antibodies against tissue transglutaminase (which cross-react with epidermal transglutaminase), and by a skin biopsy in which the pattern of IgA deposits in the dermal papillae, revealed by direct immunofluorescence, distinguishes it from linear IgA bullous dermatosis and other forms of dermatitis. Additionally, a concomitant diagnosis of coeliac disease can be made without the need for a small-intestinal biopsy if an individual has biopsy-confirmed dermatitis herpetiformis as well as supporting serologic studies (elevated levels of IgA tissue transglutaminase antibodies, IgA epidermal transglutaminase antibodies, or IgA endomysial antibodies). These tests should be performed before the patient starts a gluten-free diet, otherwise they might produce false negatives.
As with ordinary coeliac disease, IgA against transglutaminase disappears (often within months) when patients eliminate gluten from their diet. Thus, for both groups of patients, it may be necessary to restart gluten for several weeks before testing can be done reliably. In 2010, Cutis reported an eruption labelled gluten-sensitive dermatitis, which is clinically indistinguishable from dermatitis herpetiformis but lacks the IgA connection, similar to gastrointestinal symptoms mimicking coeliac disease without the diagnostic immunological markers.
Treatment
First-line therapy
A strict gluten-free diet must be followed, and usually, this treatment will be a lifelong requirement. Avoidance of gluten will reduce any associated intestinal damage and the risk of other complications. It can be very difficult to maintain a strict gluten-free diet, however, as contamination with gluten is common in many supposedly gluten-free foods and restaurants.
Dapsone is an effective initial treatment in most people and is the initial drug of choice to alleviate the rash and itching. Itching is typically reduced within 2–3 days; however, dapsone treatment has no effect on any intestinal damage that might be present. After some time on a gluten-free diet, the dosage of dapsone usually may be reduced or even stopped, although this may take many years. Dapsone is an antibacterial, and its role in the treatment of DH, which is not caused by bacteria, is poorly understood. It may cause adverse effects, especially hemolytic anemia, so regular blood monitoring is required.
Alternative treatment options
For individuals with DH unable to tolerate dapsone for any reason, alternative treatment options may include the following:
colchicine
lymecycline
nicotinamide
tetracycline
sulfamethoxypyridazine
sulfapyridine
Combination therapy with nicotinamide and tetracyclines has been shown to be effective and well tolerated in some individuals who cannot tolerate dapsone or live in places where dapsone is not readily available. While the mechanism of action of tetracyclines and nicotinamide in DH is unknown, it is speculated to be due to their immune-modulating effects. Topical steroid medications are also sometimes used in combination with dapsone and a gluten-free diet to alleviate the itchiness associated with the rash.
Prognosis
Dermatitis herpetiformis generally responds well to medication and a strict gluten-free diet. It is an autoimmune disease, however, and thus individuals with DH are more likely to develop other autoimmune conditions such as thyroid disease, insulin-dependent diabetes, lupus erythematosus, Sjögren's syndrome, sarcoidosis, vitiligo, and alopecia areata. There has been an association of non-Hodgkin lymphoma in individuals who have dermatitis herpetiformis, although this risk decreases to less than the population risk with a strict gluten-free diet. Dermatitis herpetiformis does not usually cause complications on its own, without being associated with another condition. Complications from this condition, however, arise from the autoimmune character of the disease, as an overreacting immune system is a sign that something is not working well and might cause problems in other parts of the body that do not necessarily involve the digestive system. Gluten intolerance and the body's reaction to it make potential complications a greater concern. This means that the complications that may arise from dermatitis herpetiformis are the same as those resulting from coeliac disease, which include osteoporosis, certain kinds of gut cancer, and an increased risk of other autoimmune diseases such as thyroid disease.
The risks of developing complications from dermatitis herpetiformis decrease significantly if the affected individuals follow a gluten-free diet.
Epidemiology
Global estimates of the prevalence of dermatitis herpetiformis range from 1 in 400 to 1 in 10,000 people. Individuals of Northern European descent are most likely to be affected, and estimates of the rates of DH in British and Finnish populations are 30 in 100,000 and 75 in 100,000 people, respectively. The annual incidence rate of DH in these populations ranges from 0.8 to 2.7 per 100,000. People of all ages may be affected, although the mean age at diagnosis varies between 30 and 40 years of age. There is a slight male predominance in DH for unknown reasons, and it is associated with coeliac disease and the haplotypes HLA-DQ2 and, less commonly, HLA-DQ8.
Notable cases
It has been suggested that French revolutionary Jean-Paul Marat had DH. Marat was known to have a painful skin disease, from which he could only achieve relief by immersing himself in a bathtub filled with an herbal mixture; it was in this tub that he was famously assassinated, as portrayed in The Death of Marat. A researcher suggested in 1979 that the mysterious skin disease was DH based on these symptoms and this regimen of self-treatment.
See also
Pemphigus herpetiformis
Dyshidrosis
References
Further reading
Kárpáti S (2012). "Dermatitis herpetiformis". Clinics in Dermatology (Review). 30 (1): 56–9. doi:10.1016/j.clindermatol.2011.03.010. PMID 22137227. S2CID 206774423.
External links
Pictures: DermNet NZ
Pictures: The Gastrolab Image Library
DermNet immune/dermatitis-herpetiformis
Journal of Investigative Dermatology
Dermatofibrosarcoma protuberans | Dermatofibrosarcoma protuberans (DFSP) is a rare, locally aggressive malignant cutaneous soft-tissue sarcoma. DFSP develops in the connective tissue cells in the middle layer of the skin (dermis). Estimates of the overall occurrence of DFSP in the United States are 0.8 to 4.5 cases per million persons per year. In the United States, DFSP accounts for between 1 and 6 percent of all soft tissue sarcomas and 18 percent of all cutaneous soft tissue sarcomas. In the Surveillance, Epidemiology and End Results (SEER) tumor registry from 1992 through 2004, DFSP was the second most common cutaneous sarcoma, after Kaposi sarcoma.
Presentation
Dermatofibrosarcoma protuberans begins as a minor firm area of skin, most commonly about 1 to 5 cm in diameter. It can resemble a bruise, birthmark, or pimple. It is a slow-growing tumor and is usually found on the torso but can occur anywhere on the body. About 90% of DFSPs are low-grade sarcomas. About 10% are mixed, containing a high-grade sarcomatous component (DFSP-FS); therefore, they are considered to be intermediate-grade sarcomas. DFSPs rarely metastasize (fewer than 5% of cases), but they can recur locally. DFSPs most often arise in patients who are in their thirties, but this may be due to diagnostic delay.
Location
Commonly located on the chest and shoulders, the following is the site distribution of DFSP as observed in the Surveillance, Epidemiology, and End Results (SEER) database between 2000 and 2010.
Trunk / Torso – 42%
Lower extremity – 21%
Upper extremity – 21%
Head and neck – 13%
Genitals – 1%
Variants
In 2020, the World Health Organization classified the fibrosarcomatous DFSP (DFSP-FS) variant (also termed dermatofibrosarcoma protuberans, fibrosarcomatous) as a specific form of the intermediate (rarely metastasizing) fibroblastic and myofibroblastic tumors, and the other variants of this disorder as specific forms of the intermediate (locally aggressive) fibroblastic and myofibroblastic tumors.
Bednar tumors
The Bednar tumor, or pigmented DFSP, is distinguished by the dispersal of melanin-rich dendritic cells in the skin. It represents 1–5 percent of all DFSP and occurs in people with melanin-rich skin. Bednar tumors are characterized by a dermal spindle cell proliferation like DFSP, but are distinguished by the additional presence of melanocytic dendritic cells. The tumor occurs at the same rate as DFSP on fairer skin and should be considered to have the same chances of metastasis.
Myxoid DFSP
Myxoid DFSP has areas of myxoid degeneration in the stroma.
Giant cell fibroblastoma
Giant cell fibroblastoma contains giant cells and is also known as juvenile DFSP. Giant cell fibroblastomas are skin and soft tissue tumors that usually arise in childhood. They are sometimes seen in association with dermatofibrosarcoma protuberans (DFSP; hybrid lesions) or may transform into or recur as DFSP.
Atrophic DFSP
Atrophic DFSP resembles other benign lesions such as morphea, idiopathic atrophoderma, atrophic scar, anetoderma, or lipoatrophy. It behaves like classic DFSP. It commonly favours young to middle-aged adults. It has a slow infiltrative growth pattern and a high rate of local recurrence if not completely excised.
Sclerosing DFSP
Sclerosing DFSP is a variant in which the cellularity is low and the tumor consists of uniform bundles of collagen interspersed with more typical DFSP cells.
The granular cell variant is a rare type in which spindle cells are mingled with richly granular cells, the granules being lysosomal, with prominent nucleoli.
Fibrosarcomatous DFSP (DFSP-FS)
Fibrosarcomatous DFSP (DFSP-FS) is a rare variant of DFSP involving greater aggression, high rates of local recurrence, and higher metastatic potential. DFSP-FS tumors are considered to be intermediate-grade sarcomas, although they rarely metastasize (fewer than 5 percent of cases).
Pathophysiology
More than 90% of DFSP tumors have the chromosomal translocation t(17;22). The translocation fuses the collagen gene (COL1A1) with the platelet-derived growth factor (PDGF) gene. The fibroblast, the cell of origin of this tumor, expresses the fusion gene as though it coded for collagen. However, the resulting fusion protein is processed into mature platelet-derived growth factor, a potent growth factor. Fibroblasts contain the receptor for this growth factor. Thus the cell "thinks" it is producing a structural protein, but it actually creates a self-stimulatory growth signal. The cell divides rapidly and a tumor forms.
The tissue is often positive for CD34.
Diagnosis
DFSP is a malignant tumor diagnosed with a biopsy, in which a portion of the tumor is removed for examination. In order to ensure that enough tissue is removed to make an accurate diagnosis, the initial biopsy of a suspected DFSP is usually done with a core needle or a surgical incision. Clinical palpation is not entirely reliable for ascertaining the depth of DFSP infiltration. Magnetic resonance imaging (MRI) is more sensitive in assessing the depth of invasion of some types of DFSP, particularly large or recurring tumors, though it is less accurate for identifying infiltration in head and neck tumors.
Diagnostic delay and misdiagnosis
Due to its rarity, its initial presentation as a flat plaque (skin hardening), and its slow-growing nature, DFSP may go months to years without a protuberance (bump). The dissonance between the name of the neoplasm and its clinical presentation may cause a majority of patients to experience a diagnostic delay. A 2019 study of 214 patients found that the time from first noticing a symptom to diagnosis ranged from less than a year to 42 years (median, four years). Currently, a majority of patients (53%) receive a misdiagnosis from health care providers. The most frequent prebiopsy clinical suspicions included cyst (101 [47.2%]), lipoma (30 [14.0%]), and scar (17 [7.9%]). It has been suggested that an alternative term for DFSP should be "dermatofibrosarcoma, often protuberant".
Pregnancy
It is suggested that DFSPs may enlarge more rapidly during pregnancy. Immunohistochemical stains for CD34, S-100 protein, factor XIIIa, and estrogen and progesterone receptors were performed on biopsy specimens. The tumors showed the expression of the progesterone receptor. As with many other stromal neoplasms, DFSPs appear to express low levels of hormone receptors, which may be one factor that accounts for their accelerated growth during pregnancy.
Treatment
Treatment is primarily surgical, with chemotherapy and radiation therapy used if clear resection margins are not acquired.
Surgical treatment
The type of surgical treatment chosen is dependent on the location of the DFSP occurrence and possible size.
Mohs surgery
Mohs micrographic surgery (MMS) has a high cure rate and lowers the recurrence rate of DFSP if negative resection margins are achieved.
Wide local excision
Wide local excision (WLE) was the gold standard for treating DFSP but is currently under reevaluation. Presently in the United States, WLE may be suggested after recurrence following MMS. Larger resection margins are suggested for WLE than for MMS. The recurrence rate with WLE is about 8.5%, with a lower recurrence rate associated with wider excision.
Resection margin
A characteristic feature of DFSP is its capacity to invade surrounding tissues, to a considerable distance from the central focus of the tumor, in a "tentacle-like" fashion. This fact, coupled with diagnostic delay, may lead to inadequate initial resection. Inadequate initial treatment results in larger, deeper recurrent lesions, but these can be managed by appropriate wide excision.
Radiation therapy
Although DFSP is a radioresponsive tumor, radiation therapy (RT) is not used as the first choice of treatment. Conservative resection through MMS or WLE is attempted first. If clear margins are not achieved, RT or chemotherapy is recommended.
Chemotherapy
DFSP was previously regarded as nonresponsive to standard chemotherapy. In 2006, the US FDA approved imatinib mesylate for the treatment of DFSP. As is true for medicinal drugs whose names end in "-ib", imatinib is a small-molecule pathway inhibitor; imatinib inhibits tyrosine kinase. It may be able to induce tumor regression in patients with recurrent DFSP, unresectable DFSP, or metastatic DFSP. There is clinical evidence that imatinib, which inhibits PDGF receptors, may be effective for tumors positive for the t(17;22) translocation. It is suggested that imatinib may be a treatment for challenging, locally advanced disease and for the rare metastatic cases. It was approved for use in adult patients with unresectable, recurrent and/or metastatic DFSP.
Metastatic disease
Distant hematogenous metastases are extremely rare. Metastases to regional lymph nodes are rarer and are most likely in patients who have had multiple local recurrences after inadequate surgical resection. Repeatedly recurring tumors have an increased risk for transformation into a more malignant form (DFSP-FS). The lungs are most frequently affected, but metastases to the brain, bone, and other soft tissues are reported.
Studies
DFSP is not extensively studied due to its rarity and low mortality. The majority of studies are small case studies or meta-analyses.
The most extensive research study to date is "Perspectives of Patients With Dermatofibrosarcoma Protuberans on Diagnostic Delays, Surgical Outcomes, and Nonprotuberance". The lead researcher, Jerad Gardner, gave a TED talk on the topic in February 2020.
History
R. W. Taylor first identified DFSP as a keloid sarcoma in 1890. Later, in 1924, J. Darier and Ferrand identified it as a progressive, recurrent dermatofibroma. In 1925, E. Hoffmann coined the term dermatofibrosarcoma protuberans. The Bednar tumor was first described by Bednar in 1957.
ICD Coding
ICD-O: 8832/3 - dermatofibrosarcoma protuberans, NOS
ICD-O: 8833/3 - pigmented dermatofibrosarcoma protuberans
ICD-O: 8834/1 - giant cell fibroblastoma
Fibrosarcomatous dermatofibrosarcoma protuberans: no distinct coding identified
Additional images
See also
List of cutaneous conditions
References
External links
Dermatofibrosarcoma protuberans in NIH Genetic and Rare Diseases Information Center
Skin condition | A skin condition, also known as a cutaneous condition, is any medical condition that affects the integumentary system—the organ system that encloses the body and includes skin, nails, and related muscle and glands. The major function of this system is as a barrier against the external environment. Conditions of the human integumentary system constitute a broad spectrum of diseases, also known as dermatoses, as well as many nonpathologic states (like, in certain circumstances, melanonychia and racquet nails). While only a small number of skin diseases account for most visits to the physician, thousands of skin conditions have been described. Classification of these conditions often presents many nosological challenges, since underlying causes and pathogenetics are often not known. Therefore, most current textbooks present a classification based on location (for example, conditions of the mucous membrane), morphology (chronic blistering conditions), cause (skin conditions resulting from physical factors), and so on. Clinically, the diagnosis of any particular skin condition begins by gathering pertinent information about the presenting skin lesion(s), including: location (arms, head, legs); symptoms (pruritus, pain); duration (acute or chronic); arrangement (solitary, generalized, annular, linear); morphology (macules, papules, vesicles); and color (red). Some diagnoses may also require a skin biopsy, which yields histologic information that can be correlated with the clinical presentation and any laboratory data. The introduction of cutaneous ultrasound has allowed the detection of cutaneous tumors, inflammatory processes, and skin diseases.
Layer of skin involved
The skin weighs an average of 4 kg (8.8 lb), covers an area of 2 m2 (22 sq ft), and is made of three distinct layers: the epidermis, dermis, and subcutaneous tissue. The two main types of human skin are glabrous skin, the nonhairy skin on the palms and soles (also referred to as the "palmoplantar" surfaces), and hair-bearing skin. Within the latter type, hairs in structures called pilosebaceous units have a hair follicle, sebaceous gland, and associated arrector pili muscle. In the embryo, the epidermis, hair, and glands are from the ectoderm, which is chemically influenced by the underlying mesoderm that forms the dermis and subcutaneous tissues.
Epidermis
The epidermis is the most superficial layer of skin, a squamous epithelium with several strata: the stratum corneum, stratum lucidum, stratum granulosum, stratum spinosum, and stratum basale. Nourishment is provided to these layers via diffusion from the dermis, since the epidermis is without direct blood supply. The epidermis contains four cell types: keratinocytes, melanocytes, Langerhans cells, and Merkel cells. Of these, keratinocytes are the major component, constituting roughly 95% of the epidermis. This stratified squamous epithelium is maintained by cell division within the stratum basale, in which differentiating cells slowly displace outwards through the stratum spinosum to the stratum corneum, where cells are continually shed from the surface. In normal skin, the rate of production equals the rate of loss; about two weeks are needed for a cell to migrate from the basal cell layer to the top of the granular cell layer, and an additional two weeks to cross the stratum corneum.
Dermis
The dermis is the layer of skin between the epidermis and subcutaneous tissue, and comprises two sections, the papillary dermis and the reticular dermis. The superficial papillary dermis interdigitates with the overlying rete ridges of the epidermis, between which the two layers interact through the basement membrane zone. Structural components of the dermis are collagen, elastic fibers, and ground substance also called extra fibrillar matrix. Within these components are the pilosebaceous units, arrector pili muscles, and the eccrine and apocrine glands. The dermis contains two vascular networks that run parallel to the skin surface—one superficial and one deep plexus—which are connected by vertical communicating vessels. The function of blood vessels within the dermis is fourfold: to supply nutrition, to regulate temperature, to modulate inflammation, and to participate in wound healing.
Subcutaneous tissue
The subcutaneous tissue is a layer of fat between the dermis and underlying fascia. This tissue may be further divided into two components, the actual fatty layer, or panniculus adiposus, and a deeper vestigial layer of muscle, the panniculus carnosus. The main cellular component of this tissue is the adipocyte, or fat cell. The structure of this tissue is composed of septal (i.e. linear strands) and lobular compartments, which differ in microscopic appearance. Functionally, the subcutaneous fat insulates the body, absorbs trauma, and serves as a reserve energy source.
Diseases of the skin
Diseases of the skin include skin infections and skin neoplasms (including skin cancer).
History
In 1572, Geronimo Mercuriali of Forlì, Italy, completed De morbis cutaneis (On the diseases of the skin). It is considered the first scientific work dedicated to dermatology.
Diagnoses
The physical examination of the skin and its appendages, as well as the mucous membranes, forms the cornerstone of an accurate diagnosis of cutaneous conditions. Most of these conditions present with cutaneous surface changes termed "lesions," which have more or less distinct characteristics. Often proper examination will lead the physician to obtain appropriate historical information and/or laboratory tests that are able to confirm the diagnosis. Upon examination, the important clinical observations are the (1) morphology, (2) configuration, and (3) distribution of the lesion(s). With regard to morphology, the initial lesion that characterizes a condition is known as the "primary lesion", and identification of such a lesion is the most important aspect of the cutaneous examination. Over time, these primary lesions may continue to develop or be modified by regression or trauma, producing "secondary lesions". However, the lack of standardization of basic dermatologic terminology has been one of the principal barriers to successful communication among physicians in describing cutaneous findings. Nevertheless, there are some commonly accepted terms used to describe the macroscopic morphology, configuration, and distribution of skin lesions, which are listed below.
Lesions
Primary lesions
Macule: A macule is a change in surface color, without elevation or depression, so nonpalpable, well or ill-defined, variously sized, but generally considered less than either 5 or 10 mm in diameter at the widest point.
Patch: A patch is a large macule equal to or greater than either 5 or 10 mm across, depending on one's definition of a macule. Patches may have some subtle surface change, such as a fine scale or wrinkling, but although the consistency of the surface is changed, the lesion itself is not palpable.
Papule: A papule is a circumscribed, solid elevation of skin, varying in size from less than either 5 or 10 mm in diameter at the widest point.
Plaque: A plaque has been described as a broad papule, or confluence of papules equal to or greater than 10 mm, or alternatively as an elevated, plateau-like lesion that is greater in its diameter than in its depth.
Nodule: A nodule is morphologically similar to a papule in that it is also a palpable spherical lesion less than 10 mm in diameter. However, it is differentiated by being centered deeper in the dermis or subcutis.
Tumor: Similar to a nodule, but it is larger than 10 mm in diameter.
Vesicle: A vesicle is a small blister, a circumscribed, epidermal elevation generally considered less than either 5 or 10 mm in diameter at the widest point.
Bulla: A bulla is a large blister, a rounded or irregularly shaped blister equal to or greater than either 5 or 10 mm, depending on one's definition of a vesicle.
Pustule: A pustule is a small elevation of the skin usually consisting of necrotic inflammatory cells.
Cyst: A cyst is an epithelial-lined cavity.
Wheal: A wheal is a rounded or flat-topped, pale red papule or plaque that is characteristically evanescent, disappearing within 24 to 48 hours. The temporary raised skin on the site of a properly delivered intradermal (ID) injection is also called a welt, with the ID injection process itself frequently referred to as simply "raising a wheal" in medical texts.
Welts: Welts occur as a result of blunt force being applied to the body with elongated objects without sharp edges.
Telangiectasia: A telangiectasia represents an enlargement of superficial blood vessels to the point of being visible.
Burrow: A burrow appears as a slightly elevated, grayish, tortuous line in the skin, and is caused by burrowing organisms.
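Several of the primary-lesion definitions above hinge on just a few axes: whether the lesion is palpable, whether it is fluid-filled, and whether its diameter crosses the 5–10 mm cutoff. The sketch below encodes a simplified subset of those rules as a mnemonic, using 10 mm as the cutoff; the function name and output strings are invented for illustration, and this is not a clinical classifier.

```python
def classify_primary_lesion(diameter_mm, palpable, fluid_filled=False, cutoff_mm=10):
    """Simplified mnemonic for a few of the primary-lesion terms defined above."""
    small = diameter_mm < cutoff_mm
    if fluid_filled:
        return "vesicle" if small else "bulla"
    if not palpable:
        return "macule" if small else "patch"
    return "papule" if small else "plaque, nodule, or tumor (depends on depth and height)"

print(classify_primary_lesion(3, palpable=False))                    # macule
print(classify_primary_lesion(12, palpable=True))                    # plaque, nodule, or tumor
print(classify_primary_lesion(4, palpable=True, fluid_filled=True))  # vesicle
```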
Secondary lesions
Scale: Dry or greasy laminated masses of keratin, they represent thickened stratum corneum.
Crust: Dried sebum usually mixed with epithelial and sometimes bacterial debris
Lichenification: Epidermal thickening characterized by visible and palpable thickening of the skin with accentuated skin markings
Erosion: An erosion is a discontinuity of the skin exhibiting incomplete loss of the epidermis, a lesion that is moist, circumscribed, and usually depressed.
Excoriation: A punctate or linear abrasion produced by mechanical means (often scratching), usually involving only the epidermis, but commonly reaching the papillary dermis
Ulcer: An ulcer is a discontinuity of the skin exhibiting complete loss of the epidermis and often portions of the dermis.
Fissure: A fissure is a lesion in the skin that is usually narrow but deep.
Induration: Dermal thickening causing the cutaneous surface to feel thicker and firmer.
Atrophy: A loss of skin, which can be epidermal, dermal, or subcutaneous. With epidermal atrophy, the skin appears thin, translucent, and wrinkled. Dermal or subcutaneous atrophy is represented by depression of the skin.
Maceration: softening and turning white of the skin due to being consistently wet.
Umbilication: Formation of a depression at the top of a papule, vesicle, or pustule.
Phyma: A tubercle on any external part of the body, such as in phymatous rosacea
Configuration
"Configuration" refers to how lesions are locally grouped ("organized"), which contrasts with how they are distributed (see next section).
Distribution
"Distribution" refers to how lesions are localized. They may be confined to a single area (a patch) or may exist in several places. Some distributions correlate with the means by which a given area becomes affected. For example, contact dermatitis correlates with locations where allergen has elicited an allergic immune response. Varicella zoster virus is known to recur (after its initial presentation as chicken pox) as herpes zoster ("shingles"). Chicken pox appears nearly everywhere on the body, but herpes zoster tends to follow one or two dermatomes; for example, the eruptions may appear along the bra line, on either or both sides of the patient.
Other related terms
Histopathology
See also
Wound, an injury which damages the epidermis.
References
== External links ==
Dexamethasone suppression test | The dexamethasone suppression test (DST) is used to assess adrenal gland function by measuring how cortisol levels change in response to oral doses or an injection of dexamethasone. It is typically used to diagnose Cushing's syndrome.
The DST was historically used for diagnosing depression, but by 1988 it was considered to be "at best, severely limited in its clinical ability" for this purpose.
Physiology
Dexamethasone is an exogenous steroid that provides negative feedback to the pituitary gland, suppressing the secretion of adrenocorticotropic hormone (ACTH). Specifically, dexamethasone binds to glucocorticoid receptors in the anterior pituitary gland, which lies outside the blood–brain barrier, suppressing ACTH release.
Test procedures
There are several types of DST procedures:
Overnight DST - An oral dose of dexamethasone is given between 11 pm and midnight, and the cortisol level is measured at 8–9 am the next morning
Two-day DST - This involves giving an oral dose of dexamethasone at six-hourly intervals for 2 days, with the cortisol level measured 6 hours after the final dose was given
Intravenous DST
Dexamethasone-CRT test
Interpretation
Low-dose and high-dose variations of the test exist. The test is given at low (usually 1–2 mg) and high (8 mg) doses of dexamethasone, and the levels of cortisol are measured to obtain the results. A low dose of dexamethasone suppresses cortisol in individuals with no pathology in endogenous cortisol production. A high dose of dexamethasone exerts negative feedback on pituitary neoplastic ACTH-producing cells (Cushing's disease), but not on ectopic ACTH-producing cells or adrenal adenoma (Cushing's syndrome).
Dose
A normal result is a decrease in cortisol levels upon administration of low-dose dexamethasone. Results indicative of Cushing's disease involve no change in cortisol on low-dose dexamethasone, but inhibition of cortisol on high-dose dexamethasone. If the cortisol levels are unchanged by low- and high-dose dexamethasone, then other causes of Cushing's syndrome must be considered with further work-up necessary.
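The suppression pattern described above amounts to a small decision tree. The sketch below is a minimal, hypothetical illustration of that logic only; the function name, the relative 50% suppression heuristic, and the example values are assumptions for demonstration, since real protocols use laboratory-specific absolute cortisol cutoffs and also consider ACTH levels.

```python
def interpret_dst(baseline_cortisol, cortisol_after_low_dose, cortisol_after_high_dose):
    """Illustrative classification of dexamethasone suppression test results.

    "Suppressed" here means the post-dose cortisol fell below half of
    baseline -- a stand-in heuristic; clinical practice uses absolute,
    assay-specific thresholds.
    """
    def suppressed(level):
        return level < 0.5 * baseline_cortisol

    if suppressed(cortisol_after_low_dose):
        return "Normal: cortisol suppressed by low-dose dexamethasone"
    if suppressed(cortisol_after_high_dose):
        return "Suggests Cushing's disease (pituitary ACTH source)"
    return "No suppression: consider ectopic ACTH production or adrenal adenoma"

# Example: no suppression at low dose, suppression at high dose
print(interpret_dst(20.0, 18.0, 6.0))  # -> Suggests Cushing's disease ...
```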
After the high-dose dexamethasone, it may be possible to make further interpretations.
(A table summarizing the cortisol and ACTH patterns after high-dose dexamethasone appeared here in the original; ACTH is measured prior to dosing of dexamethasone.)
Equivocal results should be followed by a corticotropin-releasing hormone stimulation test, with inferior petrosal sinus sampling.
References
Type 1 diabetes | Type 1 diabetes (T1D), formerly known as juvenile diabetes, is an autoimmune disease that originates when cells that make insulin (beta cells) are destroyed by the immune system. Insulin is a hormone required for the cells to use blood sugar for energy, and it helps regulate glucose levels in the bloodstream. Before treatment, this results in high blood sugar levels in the body. The common symptoms of this elevated blood sugar are frequent urination, increased thirst, increased hunger, weight loss, and other serious complications. Additional symptoms may include blurry vision, tiredness, and slow wound healing. Symptoms typically develop over a short period of time, often a matter of weeks. The cause of type 1 diabetes is unknown, but it is believed to involve a combination of genetic and environmental factors. The underlying mechanism involves an autoimmune destruction of the insulin-producing beta cells in the pancreas. Diabetes is diagnosed by testing the level of sugar or glycated hemoglobin (HbA1C) in the blood. Type 1 diabetes can be distinguished from type 2 by testing for the presence of autoantibodies. There is no known way to prevent type 1 diabetes. Treatment with insulin is required for survival. Insulin therapy is usually given by injection just under the skin but can also be delivered by an insulin pump. A diabetic diet and exercise are important parts of management. If left untreated, diabetes can cause many complications. Complications of relatively rapid onset include diabetic ketoacidosis and nonketotic hyperosmolar coma. Long-term complications include heart disease, stroke, kidney failure, foot ulcers, and damage to the eyes. Furthermore, since insulin lowers blood sugar levels, complications may arise from low blood sugar if more insulin is taken than necessary. Type 1 diabetes makes up an estimated 5–10% of all diabetes cases. The number of people affected globally is unknown, although it is estimated that about 80,000 children develop the disease each year. Within the United States the number of people affected is estimated at one to three million. Rates of disease vary widely, with approximately one new case per 100,000 per year in East Asia and Latin America and around 30 new cases per 100,000 per year in Scandinavia and Kuwait. It typically begins in children and young adults.
Signs and symptoms
Type 1 diabetes begins suddenly, typically in childhood or adolescence. The major sign of type 1 diabetes is very high blood sugar, which typically manifests in children as a few days to weeks of polyuria (increased urination), polydipsia (increased thirst), and weight loss. Children may also experience increased appetite, blurred vision, bedwetting, recurrent skin infections, candidiasis of the perineum, irritability, and performance issues at school. Adults with type 1 diabetes tend to have more varied symptoms that come on over months rather than days to weeks. Prolonged lack of insulin can also result in diabetic ketoacidosis, characterized by persistent fatigue, dry or flushed skin, abdominal pain, nausea or vomiting, confusion, trouble breathing, and a fruity breath odor. Blood and urine tests reveal unusually high glucose and ketones in the blood and urine. Untreated ketoacidosis can rapidly progress to loss of consciousness, coma, and death. The percentage of children whose type 1 diabetes begins with an episode of diabetic ketoacidosis varies widely by geography, as low as 15% in parts of Europe and North America, and as high as 80% in the developing world.
Cause
Type 1 diabetes is caused by the destruction of β-cells – the only cells in the body that produce insulin – and the consequent progressive insulin deficiency. Without insulin, the body is unable to respond effectively to increases in blood sugar, and diabetics have persistent hyperglycemia. In 70–90% of cases, β-cells are destroyed by someone's own immune system, for reasons that are not entirely clear. The best-studied components of this autoimmune response are β-cell-targeted antibodies that begin to develop in the months or years before symptoms arise. Typically someone will first develop antibodies against insulin or the protein GAD65, followed eventually by antibodies against the proteins IA-2, IA-2β, and/or ZNT8. People with more of these antibodies, and who develop them earlier in life, are at higher risk for developing symptomatic type 1 diabetes. The trigger for the development of these antibodies remains unclear. A number of explanatory theories have been put forward, and the cause may involve genetic susceptibility, a diabetogenic trigger, and/or exposure to an antigen. The remaining 10–30% of type 1 diabetics have β-cell destruction but no sign of autoimmunity; this is called idiopathic type 1 diabetes and its cause remains unclear.
Environmental
Various environmental risks have been studied in an attempt to understand what triggers β-cell autoimmunity. Many aspects of environment and life history are associated with slight increases in type 1 diabetes risk; however, the connection between each risk and diabetes often remains unclear. Type 1 diabetes risk is slightly higher for children whose mothers are obese or older than 35, or for children born by caesarean section. Similarly, a child's weight gain in the first year of life, total weight, and BMI are associated with slightly increased type 1 diabetes risk. Some dietary habits have also been associated with type 1 diabetes risk, namely consumption of cow's milk and dietary sugar intake. Animal studies and some large human studies have found small associations between type 1 diabetes risk and intake of gluten or dietary fiber; however, other large human studies have found no such association. Many potential environmental triggers have been investigated in large human studies and found to be unassociated with type 1 diabetes risk, including duration of breastfeeding, time of introduction of cow's milk into the diet, vitamin D consumption, blood levels of active vitamin D, and maternal intake of omega-3 fatty acids. A longstanding hypothesis for an environmental trigger is that some viral infection early in life contributes to type 1 diabetes development. Much of this work has focused on enteroviruses, with some studies finding slight associations with type 1 diabetes, and others finding none. Large human studies have searched for, but not yet found, an association between type 1 diabetes and various other viral infections, including infections of the mother during pregnancy. Conversely, some have postulated that reduced exposure to pathogens in the developed world increases the risk of autoimmune diseases, often called the hygiene hypothesis. Various studies of hygiene-related factors – including household crowding, daycare attendance, population density, childhood vaccinations, antihelminth medication, and antibiotic usage during early life or pregnancy – show no association with type 1 diabetes.
Genetics
Type 1 diabetes is partially caused by genetics, and family members of type 1 diabetics have a higher risk of developing the disease themselves. In the general population, the risk of developing type 1 diabetes is around 1 in 250. For someone whose parent has type 1 diabetes, the risk rises to 1–9%. If a sibling has type 1 diabetes, the risk is 6–7%. If someone's identical twin has type 1 diabetes, they have a 30–70% risk of developing it themselves. About half of the disease's heritability is due to variations in three HLA class II genes involved in antigen presentation: HLA-DRB1, HLA-DQA1, and HLA-DQB1. The variation patterns associated with increased risk of type 1 diabetes are called HLA-DR3 and HLA-DR4-HLA-DQ8, and are common in people of European descent. A pattern associated with reduced risk of type 1 diabetes is called HLA-DR15-HLA-DQ6. Large genome-wide association studies have identified dozens of other genes associated with type 1 diabetes risk, mostly genes involved in the immune system.
Chemicals and drugs
Some medicines can reduce insulin production or damage β cells, resulting in disease that resembles type 1 diabetes. The antiviral drug didanosine triggers pancreas inflammation in 5 to 10% of those who take it, sometimes causing lasting β-cell damage. Similarly, up to 5% of those who take the anti-protozoal drug pentamidine experience β-cell destruction and diabetes. Several other drugs cause diabetes by reversibly reducing insulin secretion, namely statins (which may also damage β cells), the post-transplant immunosuppressants cyclosporin A and tacrolimus, the leukemia drug L-asparaginase, and the antibiotic gatifloxacin. Pyrinuron (Vacor), a rodenticide introduced in the United States in 1976, selectively destroys pancreatic beta cells, resulting in type 1 diabetes after accidental poisoning. Pyrinuron was withdrawn from the U.S. market in 1979.
Diagnosis
Diabetes is typically diagnosed by a blood test showing unusually high blood sugar. The World Health Organization defines diabetes as blood sugar levels at or above 7.0 mmol/L (126 mg/dL) after fasting for at least eight hours, or a glucose level at or above 11.1 mmol/L (200 mg/dL) two hours after an oral glucose tolerance test. The American Diabetes Association additionally recommends a diagnosis of diabetes for anyone with symptoms of hyperglycemia and blood sugar at any time at or above 11.1 mmol/L, or glycated hemoglobin (hemoglobin A1C) levels at or above 48 mmol/mol. Once a diagnosis of diabetes is established, type 1 diabetes is distinguished from other types by a blood test for the presence of autoantibodies that target various components of the beta cell. The most commonly available tests detect antibodies against glutamic acid decarboxylase, the beta cell cytoplasm, or insulin, each of which is targeted by antibodies in around 80% of type 1 diabetics. Some healthcare providers also have access to tests for antibodies targeting the beta cell proteins IA-2 and ZnT8; these antibodies are present in around 58% and 80% of type 1 diabetics, respectively. Some also test for C-peptide, a byproduct of insulin synthesis. Very low C-peptide levels are suggestive of type 1 diabetes.
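Since the WHO and ADA cutoffs above are purely numeric, they can be restated as a short sketch. This is an illustration of the quoted thresholds only, not a diagnostic tool; the function and parameter names are invented for the example, and in practice an abnormal result is usually confirmed by repeat testing.

```python
def meets_glycemic_criteria(fasting_mmol_l=None, ogtt_2h_mmol_l=None, hba1c_mmol_mol=None):
    """Check the diabetes thresholds quoted in the text (illustrative only).

    fasting_mmol_l:  plasma glucose after fasting at least eight hours
    ogtt_2h_mmol_l:  glucose two hours after an oral glucose tolerance test
    hba1c_mmol_mol:  glycated hemoglobin (48 mmol/mol is about 6.5%)
    """
    return any([
        fasting_mmol_l is not None and fasting_mmol_l >= 7.0,   # 126 mg/dL
        ogtt_2h_mmol_l is not None and ogtt_2h_mmol_l >= 11.1,  # 200 mg/dL
        hba1c_mmol_mol is not None and hba1c_mmol_mol >= 48,
    ])

print(meets_glycemic_criteria(fasting_mmol_l=7.4))  # True
```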
Management
The mainstay of type 1 diabetes treatment is the regular injection of insulin to manage hyperglycemia. Injections of insulin – via subcutaneous injection using either a syringe or an insulin pump – are necessary multiple times per day, adjusting dosages to account for food intake, blood glucose levels, and physical activity. The goal of treatment is to maintain blood sugar in a normal range – 80–130 mg/dL before a meal; <180 mg/dL after – as often as possible. To achieve this, people with diabetes often monitor their blood glucose levels at home. Around 83% of type 1 diabetics monitor their blood glucose by capillary blood testing – pricking the finger to draw a drop of blood, and determining blood glucose with a glucose meter. The American Diabetes Association recommends testing blood glucose around 6–10 times per day: before each meal, before exercise, at bedtime, occasionally after a meal, and any time someone feels the symptoms of hypoglycemia. Around 17% of people with type 1 diabetes use a continuous glucose monitor, a device with a sensor under the skin that constantly measures glucose levels and communicates those levels to an external device. Continuous glucose monitoring is associated with better blood sugar control than capillary blood testing alone; however, it tends to be substantially more expensive. Healthcare providers can also monitor someone's hemoglobin A1C levels, which reflect the average blood sugar over the last three months. The American Diabetes Association recommends a goal of keeping hemoglobin A1C levels under 7% for most adults and 7.5% for children. The goal of insulin therapy is to mimic normal pancreatic insulin secretion: low levels of insulin constantly present to support basic metabolism, plus the two-phase secretion of additional insulin in response to high blood sugar – an initial spike in secreted insulin, then an extended phase with continued insulin secretion. This is accomplished by combining different insulin preparations that act with differing speeds and durations. The standard of care for type 1 diabetes is a bolus of rapid-acting insulin 10–15 minutes before each meal or snack, and as needed to correct hyperglycemia. In addition, constant low levels of insulin are achieved with one or two daily doses of long-acting insulin, or by steady infusion of low insulin levels by an insulin pump. The exact dose of insulin appropriate for each injection depends on the content of the meal or snack and the individual person's sensitivity to insulin, and is therefore typically calculated by the individual with diabetes or a family member by hand or with an assistive device (calculator, chart, mobile app, etc.). People unable to manage these intensive insulin regimens are sometimes prescribed alternate plans relying on mixtures of rapid- or short-acting and intermediate-acting insulin, which are administered at fixed times along with meals of pre-planned times and carbohydrate composition. The only non-insulin medication approved by the U.S. Food and Drug Administration for treating type 1 diabetes is the amylin analog pramlintide, which replaces the beta-cell hormone amylin. Addition of pramlintide to mealtime insulin injections reduces the boost in blood sugar after a meal, improving blood sugar control. Occasionally, metformin, GLP-1 receptor agonists, dipeptidyl peptidase-4 inhibitors, or SGLT2 inhibitors are prescribed off-label to people with type 1 diabetes, although fewer than 5% of type 1 diabetics use these drugs.
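The mealtime dose calculation mentioned above typically combines a food dose (carbohydrates divided by an insulin-to-carbohydrate ratio) with a correction dose (glucose above target divided by a sensitivity, or "correction", factor). The sketch below shows the arithmetic only; all parameter values are placeholder assumptions, since real ratios and targets are individualized by a person's care team, and this is not dosing advice.

```python
def estimate_mealtime_bolus(carbs_g, current_bg_mg_dl,
                            target_bg_mg_dl=110,   # placeholder target
                            carb_ratio=10,         # grams covered per unit (placeholder)
                            correction_factor=50): # mg/dL drop per unit (placeholder)
    """Illustrative bolus arithmetic: food dose plus correction dose."""
    food_dose = carbs_g / carb_ratio
    correction_dose = max(0.0, (current_bg_mg_dl - target_bg_mg_dl) / correction_factor)
    return round(food_dose + correction_dose, 1)

# Example: 60 g of carbohydrate at a reading of 180 mg/dL
# food dose 6.0 units + correction 1.4 units = 7.4 units
print(estimate_mealtime_bolus(60, 180))
```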
Lifestyle
Besides insulin, the major way type 1 diabetics control their blood sugar is by learning how various foods impact their blood sugar levels. This is primarily done by tracking their intake of carbohydrates – the type of food with the greatest impact on blood sugar. In general, people with type 1 diabetes are advised to follow an individualized eating plan rather than a pre-decided one. There are camps for children to teach them how and when to use or monitor their insulin without parental help. As psychological stress may have a negative effect on diabetes, a number of measures have been recommended, including exercising, taking up a new hobby, or joining a charity, among others. Regular exercise is important for maintaining general health, though the effect of exercise on blood sugar can be challenging to predict. Exogenous insulin can drive down blood sugar, leaving those with diabetes at risk of hypoglycemia during and immediately after exercise, then again seven to eleven hours after exercise (called the "lag effect"). Conversely, high-intensity exercise can result in a shortage of insulin, and consequent hyperglycemia. The risk of hypoglycemia can be managed by beginning exercise when blood sugar is relatively high (above 100 mg/dL), ingesting carbohydrates during or shortly after exercise, and reducing the amount of injected insulin within two hours of the planned exercise. Similarly, the risk of exercise-induced hyperglycemia can be managed by avoiding exercise when insulin levels are very low, when blood sugar is extremely high (above 350 mg/dL), or when one feels unwell.
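The glucose thresholds in the previous paragraph can be read as a simple pre-exercise checklist. The following sketch encodes only those two cutoffs; it deliberately ignores the other factors the text mentions (insulin timing, feeling unwell, the delayed "lag effect"), and the function name and messages are invented for illustration.

```python
def pre_exercise_guidance(bg_mg_dl):
    """Illustrative pre-exercise check using the thresholds quoted above."""
    if bg_mg_dl > 350:
        return "Glucose very high: avoid exercise until it is lower"
    if bg_mg_dl < 100:
        return "Glucose relatively low: take carbohydrate before starting"
    return "Within the quoted range: proceed, monitoring during and after"

print(pre_exercise_guidance(95))   # -> take carbohydrate first
print(pre_exercise_guidance(400))  # -> avoid exercise
```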
Transplant
In some cases, people can receive transplants of the pancreas or isolated islet cells to restore insulin production and alleviate diabetic symptoms. Transplantation of the whole pancreas is rare, due in part to the few available donor organs and to the need for lifelong immunosuppressive therapy to prevent transplant rejection. The American Diabetes Association recommends pancreas transplant only in people who also require a kidney transplant, or who struggle to perform regular insulin therapy and experience repeated severe side effects of poor blood sugar control. Most pancreas transplants are done simultaneously with a kidney transplant, with both organs from the same donor. The transplanted pancreas continues to function for at least five years in around three quarters of recipients, allowing them to stop taking insulin. Transplantations of islets alone have become increasingly common. Pancreatic islets are isolated from a donor pancreas, then injected into the recipient's portal vein, from which they implant onto the recipient's liver. In nearly half of recipients, the islet transplant continues to work well enough that they still don't need exogenous insulin five years after transplantation. If a transplant fails, recipients can receive subsequent injections of islets from additional donors into the portal vein. Like whole pancreas transplantation, islet transplantation requires lifelong immunosuppression and depends on the limited supply of donor organs; it is therefore similarly limited to people with severe, poorly controlled diabetes and those who have had or are scheduled for a kidney transplant.
Pathogenesis
Type 1 diabetes is a result of the destruction of pancreatic beta cells, although what triggers that destruction remains unclear. People with type 1 diabetes tend to have more CD8+ T-cells and B-cells that specifically target islet antigens than those without type 1 diabetes, suggesting a role for the adaptive immune system in beta cell destruction. Type 1 diabetics also tend to have reduced regulatory T cell function, which may exacerbate autoimmunity. Destruction of beta cells results in inflammation of the islets of Langerhans, called insulitis. These inflamed islets tend to contain CD8+ T-cells and – to a lesser extent – CD4+ T cells. Abnormalities in the pancreas or the beta cells themselves may also contribute to beta-cell destruction. The pancreases of people with type 1 diabetes tend to be smaller and lighter, and have abnormal blood vessels, nerve innervation, and extracellular matrix organization. In addition, beta cells from people with type 1 diabetes sometimes overexpress HLA class I molecules (responsible for signaling to the immune system), and have increased endoplasmic reticulum stress and issues with synthesizing and folding new proteins, any of which could contribute to their demise. The mechanism by which the beta cells actually die likely involves both necroptosis and apoptosis, induced or exacerbated by CD8+ T-cells and macrophages. Necroptosis can be triggered by activated T cells – which secrete toxic granzymes and perforin – or indirectly as a result of reduced blood flow or the generation of reactive oxygen species. As some beta cells die, they may release cellular components that amplify the immune response, exacerbating inflammation and cell death. Pancreases from people with type 1 diabetes also show signs of beta cell apoptosis, linked to activation of the Janus kinase and TYK2 pathways. Partial ablation of beta-cell function is enough to cause diabetes; at diagnosis, people with type 1 diabetes often still have detectable beta-cell function. Once insulin therapy is started, many people experience a resurgence in beta-cell function and can go some time with little-to-no insulin treatment – called the "honeymoon phase". This eventually fades as beta cells continue to be destroyed and insulin treatment is required again. Beta-cell destruction is not always complete, as 30–80% of type 1 diabetics produce small amounts of insulin years or decades after diagnosis.
Alpha cell dysfunction
Onset of autoimmune diabetes is accompanied by impaired ability to regulate the hormone glucagon, which acts in antagonism with insulin to regulate blood sugar and metabolism. Progressive beta cell destruction leads to dysfunction in the neighboring alpha cells which secrete glucagon, exacerbating excursions away from euglycemia in both directions; overproduction of glucagon after meals causes sharper hyperglycemia, and failure to stimulate glucagon upon hypoglycemia prevents a glucagon-mediated rescue of glucose levels.
Hyperglucagonemia
Onset of type 1 diabetes is followed by an increase in glucagon secretion after meals. Increases of up to 37% have been measured during the first year after diagnosis, while C-peptide levels (indicative of islet-derived insulin) decline by up to 45%. Insulin production continues to fall as the immune system destroys beta cells, and islet-derived insulin continues to be replaced by therapeutic exogenous insulin. Simultaneously, there is measurable alpha cell hypertrophy and hyperplasia in the early stage of the disease, leading to expanded alpha cell mass. This, together with failing beta cell insulin secretion, begins to account for rising glucagon levels that contribute to hyperglycemia. Some researchers believe glucagon dysregulation to be the primary cause of early-stage hyperglycemia. Leading hypotheses for the cause of postprandial hyperglucagonemia suggest that exogenous insulin therapy is inadequate to replace the lost intraislet signalling to alpha cells previously mediated by beta cell-derived pulsatile insulin secretion. Under this working hypothesis, intensive insulin therapy attempts to mimic natural insulin secretion profiles with exogenous insulin infusion.
Hypoglycemic glucagon impairment
Glucagon secretion is normally increased upon falling glucose levels, but the normal glucagon response to hypoglycemia is blunted in type 1 diabetics. Beta cell glucose sensing, and the suppression of insulin secretion it normally triggers, is absent, so administered insulin produces islet hyperinsulinemia that inhibits glucagon release.

Autonomic inputs to alpha cells are much more important for glucagon stimulation in the moderate to severe ranges of hypoglycemia, yet the autonomic response is blunted in a number of ways. Recurrent hypoglycemia leads to metabolic adjustments in the glucose-sensing areas of the brain, shifting the threshold for counter-regulatory activation of the sympathetic nervous system to lower glucose concentrations. This is known as hypoglycemic unawareness. Subsequent hypoglycemia is met with impaired sending of counter-regulatory signals to the islets and adrenal cortex. This accounts for the lack of glucagon stimulation and epinephrine release that would normally stimulate and enhance glucose release and production from the liver, rescuing the diabetic from severe hypoglycemia, coma, and death. Numerous hypotheses have been proposed for the cellular mechanism of hypoglycemic unawareness, but a consensus has yet to be reached.
In addition, autoimmune diabetes is characterized by a loss of islet-specific sympathetic innervation. This loss constitutes an 80–90% reduction of islet sympathetic nerve endings, happens early in the progression of the disease, and persists through the life of the patient. It is linked to the autoimmune aspect of type 1 diabetes and does not occur in type 2 diabetes. Early in the autoimmune event, axon pruning is activated in islet sympathetic nerves. Increased brain-derived neurotrophic factor (BDNF) and reactive oxygen species resulting from insulitis and beta cell death stimulate the p75 neurotrophin receptor (p75NTR), which acts to prune off axons. Axons are normally protected from pruning by activation of tropomyosin receptor kinase A (Trk A) receptors by nerve growth factor (NGF), which in islets is primarily produced by beta cells. Progressive autoimmune beta cell destruction therefore causes both the activation of pruning factors and the loss of protective factors to the islet sympathetic nerves. This unique form of neuropathy is a hallmark of type 1 diabetes and plays a part in the loss of glucagon rescue from severe hypoglycemia.
Complications
The most pressing complications of type 1 diabetes are the ever-present risks of poor blood sugar control: severe hypoglycemia and diabetic ketoacidosis. Hypoglycemia – typically blood sugar below 70 mg/dL – triggers the release of epinephrine, and can cause people to feel shaky, anxious, or irritable. People with hypoglycemia may also experience hunger, nausea, sweats, chills, dizziness, and a fast heartbeat. Some feel lightheaded, sleepy, or weak. Severe hypoglycemia can develop rapidly, causing confusion, coordination problems, loss of consciousness, and seizure. On average, people with type 1 diabetes experience a hypoglycemia event that requires the assistance of another person 16–20 times per 100 person-years, and an event leading to unconsciousness or seizure 2–8 times per 100 person-years. The American Diabetes Association recommends treating hypoglycemia by the "15-15 rule": eat 15 grams of carbohydrates, then wait 15 minutes before checking blood sugar; repeat until blood sugar is at least 70 mg/dL (a minimal sketch of this loop appears below). Severe hypoglycemia that impairs someone's ability to eat is typically treated with injectable glucagon, which triggers glucose release from the liver into the bloodstream. People with repeated bouts of hypoglycemia can develop hypoglycemia unawareness, where the blood sugar threshold at which they experience symptoms of hypoglycemia decreases, increasing their risk of severe hypoglycemic events. Rates of severe hypoglycemia have generally declined due to the advent of rapid-acting and long-acting insulin products in the 1990s and early 2000s; however, acute hypoglycemia still causes 4–10% of type 1 diabetes-related deaths.

The other persistent risk is diabetic ketoacidosis – a state where lack of insulin results in cells burning fat rather than sugar, producing toxic ketones as a byproduct. Ketoacidosis symptoms can develop rapidly, with frequent urination, excessive thirst, nausea, vomiting, and severe abdominal pain all common. More severe ketoacidosis can result in labored breathing and loss of consciousness due to cerebral edema. People with type 1 diabetes experience diabetic ketoacidosis 1–5 times per 100 person-years, the majority of which result in hospitalization. 13–19% of type 1 diabetes-related deaths are caused by ketoacidosis, making it the leading cause of death in people with type 1 diabetes under 58 years of age.
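Viewed procedurally, the 15-15 rule above is a simple feedback loop. The sketch below is a minimal illustration only – not medical software – and check_blood_sugar is a hypothetical stand-in for a glucose meter reading in mg/dL.

```python
import time

def fifteen_fifteen_rule(check_blood_sugar, wait_seconds: int = 15 * 60) -> None:
    """Illustrative sketch of the ADA 15-15 rule described above: eat 15 g of
    carbohydrate, wait 15 minutes, re-check, and repeat until blood sugar is
    at least 70 mg/dL."""
    while check_blood_sugar() < 70:      # hypoglycemia threshold in mg/dL
        print("Eat 15 grams of fast-acting carbohydrate.")
        time.sleep(wait_seconds)         # the 15-minute wait before re-checking

# Example with canned readings standing in for a real meter (no real waiting):
readings = iter([58, 66, 74])
fifteen_fifteen_rule(lambda: next(readings), wait_seconds=0)
```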
Long-term complications
In addition to the acute complications of diabetes, long-term hyperglycemia results in damage to the small blood vessels throughout the body. This damage tends to manifest particularly in the eyes, nerves, and kidneys, causing diabetic retinopathy, diabetic neuropathy, and diabetic nephropathy, respectively. In the eyes, prolonged high blood sugar causes the blood vessels in the retina to become fragile.

People with type 1 diabetes also have an increased risk of cardiovascular disease, which is estimated to shorten the life of the average type 1 diabetic by 8–13 years. Cardiovascular disease and neuropathy may have an autoimmune basis as well. Women with type 1 DM have a 40% higher risk of death compared to men with type 1 DM.

About 12 percent of people with type 1 diabetes have clinical depression. About 6 percent of people with type 1 diabetes also have celiac disease, but in most cases there are no digestive symptoms, or the symptoms are mistakenly attributed to poor control of diabetes, gastroparesis, or diabetic neuropathy. In most cases, celiac disease is diagnosed after the onset of type 1 diabetes. The association of celiac disease with type 1 diabetes increases the risk of complications, such as retinopathy, and of mortality. This association can be explained by shared genetic factors, and by inflammation or nutritional deficiencies caused by untreated celiac disease, even if type 1 diabetes is diagnosed first.
Urinary tract infection
People with diabetes show an increased rate of urinary tract infection. The reason is that bladder dysfunction is more common in people with diabetes than in people without diabetes, owing to diabetic neuropathy. When present, neuropathy can decrease bladder sensation, which in turn can increase residual urine, a risk factor for urinary tract infections.
Sexual dysfunction
Sexual dysfunction in people with diabetes is often a result of physical factors such as nerve damage and poor circulation, and of psychological factors such as stress and/or depression caused by the demands of the disease. The most common sexual issues in males with diabetes are problems with erections and ejaculation: "With diabetes, blood vessels supplying the penis's erectile tissue can get hard and narrow, preventing the adequate blood supply needed for a firm erection. The nerve damage caused by poor blood glucose control can also cause ejaculate to go into the bladder instead of through the penis during ejaculation, called retrograde ejaculation. When this happens, semen leaves the body in the urine." Another cause of erectile dysfunction is reactive oxygen species created as a result of the disease; antioxidants can be used to help combat this. Sexual problems are common in women who have diabetes, including reduced sensation in the genitals, dryness, difficulty or inability to orgasm, pain during sex, and decreased libido. Diabetes sometimes decreases estrogen levels in females, which can affect vaginal lubrication. Less is known about the correlation between diabetes and sexual dysfunction in females than in males.

Oral contraceptive pills can cause blood sugar imbalances in women who have diabetes. Dosage changes can help address that, at the risk of side effects and complications.

Women with type 1 diabetes show a higher-than-normal rate of polycystic ovarian syndrome (PCOS). The reason may be that the ovaries are exposed to high insulin concentrations, since women with type 1 diabetes can have frequent hyperglycemia.
Autoimmune disorders
People with type 1 diabetes are at an increased risk of developing several autoimmune disorders, particularly thyroid problems – around 20% of people with type 1 diabetes have hypothyroidism or hyperthyroidism, typically caused by Hashimoto thyroiditis or Graves' disease, respectively. Celiac disease affects 2–8% of people with type 1 diabetes and is more common in those who were younger at diabetes diagnosis and in white people. Type 1 diabetics are also at increased risk of rheumatoid arthritis, lupus, autoimmune gastritis, pernicious anemia, vitiligo, and Addison's disease. Conversely, complex autoimmune syndromes caused by mutations in the immunity-related genes AIRE (causing autoimmune polyglandular syndrome), FoxP3 (causing IPEX syndrome), or STAT3 include type 1 diabetes among their effects.
Epidemiology
Type 1 diabetes makes up an estimated 10–15% of all diabetes cases, or 11–22 million cases worldwide. Symptoms can begin at any age, but onset is most common in children, with diagnoses slightly more common in 5-to-7-year-olds and much more common around the age of puberty. In contrast to most autoimmune diseases, type 1 diabetes is slightly more common in males than in females. In 2006, type 1 diabetes affected 440,000 children under 14 years of age and was the primary cause of diabetes in those less than 15 years of age.

Rates vary widely by country and region. Incidence is highest in Scandinavia, at 30–60 new cases per 100,000 children per year; intermediate in the U.S. and Southern Europe, at 10–20 cases per 100,000 per year; and lowest in China, much of Asia, and South America, at 1–3 cases per 100,000 per year.

In the United States, type 1 and 2 diabetes affected about 208,000 youths under the age of 20 in 2015. Over 18,000 youths are diagnosed with type 1 diabetes every year. Every year about 234,051 Americans die due to diabetes (type 1 or 2) or diabetes-related complications, with 69,071 having it as the primary cause of death.

In Australia, about one million people have been diagnosed with diabetes, and of this figure 130,000 people have been diagnosed with type 1 diabetes. Australia has the sixth-highest incidence in the world among children under 14 years of age. Between 2000 and 2013, 31,895 new cases were recorded, with 2,323 in 2013 – a rate of 10–13 cases per 100,000 people each year. Aboriginal and Torres Strait Islander people are less affected.

Since the 1950s, the incidence of type 1 diabetes has been gradually increasing across the world by an average of 3–4% per year. The increase is more pronounced in countries that began with a lower incidence of type 1 diabetes.
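As a rough consistency check of the Australian figures quoted above (assuming a total population of about 23 million in 2013, a figure not stated in the text), the per-100,000 rate follows from simple division:

$$\text{incidence} \;=\; \frac{\text{new cases}}{\text{population}} \times 100{,}000 \;\approx\; \frac{2{,}323}{23{,}000{,}000} \times 100{,}000 \;\approx\; 10 \text{ per } 100{,}000 \text{ per year,}$$

consistent with the quoted 10–13 cases per 100,000 people each year.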
History
The connection between diabetes and pancreatic damage was first described by Martin Schmidt, who in a 1902 paper noted inflammation around the pancreatic islets of a child who had died of diabetes. The connection between this inflammation and diabetes onset was further developed through the 1920s by Shields Warren, and the term "insulitis" was coined by Hanns von Meyenburg in 1940 to describe the phenomenon.

Type 1 diabetes was described as an autoimmune disease in the 1970s, based on observations that autoantibodies against islets were discovered in diabetics with other autoimmune deficiencies. It was also shown in the 1980s that immunosuppressive therapies could slow disease progression, further supporting the idea that type 1 diabetes is an autoimmune disorder. The name juvenile diabetes was used earlier because the disease is often first diagnosed in childhood.
Society and culture
Type 1 and 2 diabetes were estimated to cause $10.5 billion in annual medical costs ($875 per month per diabetic) and an additional $4.4 billion in indirect costs ($366 per month per person with diabetes) in the U.S. In the United States, $245 billion every year is attributed to diabetes. Individuals diagnosed with diabetes have 2.3 times the health care costs of individuals who do not have diabetes. One in ten health care dollars is spent on individuals with type 1 and 2 diabetes.
Research
Funding for research into type 1 diabetes originates from government, industry (e.g., pharmaceutical companies), and charitable organizations. Government funding in the United States is distributed via the National Institutes of Health, and in the UK via the National Institute for Health Research or the Medical Research Council. The Juvenile Diabetes Research Foundation (JDRF), founded by parents of children with type 1 diabetes, is the world's largest provider of charity-based funding for type 1 diabetes research. Other charities include the American Diabetes Association, Diabetes UK, the Diabetes Research and Wellness Foundation, Diabetes Australia, and the Canadian Diabetes Association.
A number of approaches have been explored to understand causes and provide treatments for type 1 diabetes.
Prevention
Type 1 diabetes is not currently preventable. Several trials have attempted dietary interventions with the hope of reducing the autoimmunity that leads to type 1 diabetes. Trials that withheld cow's milk or gave infants formula free of bovine insulin decreased the development of β-cell-targeted antibodies, but did not prevent the development of type 1 diabetes. Similarly, trials that gave high-risk individuals injected insulin, oral insulin, or nicotinamide did not prevent diabetes development.

Other research has focused on treating high-risk individuals with immunosuppressive agents to prevent beta cell destruction. Large trials of cyclosporine treatment suggested that cyclosporine could improve insulin secretion in those recently diagnosed with type 1 diabetes; however, people who stopped taking cyclosporine rapidly stopped making insulin, and cyclosporine's kidney toxicity and increased risk of cancer prevented people from using it long-term. Several other immunosuppressive agents – prednisone, azathioprine, anti-thymocyte globulin, mycophenolate, and antibodies against CD20 and IL2 receptor α – have been the subject of research, but none has provided lasting protection from development of type 1 diabetes. Antibodies against CD3 have been shown to delay the development of type 1 diabetes in those at high risk; however, they have not been widely adopted due to concerns over the duration of their effect and activation of Epstein-Barr virus infections in those undergoing treatment.

Vitamin D supplementation may help prevent type 1 diabetes. This is believed to be due to vitamin D receptors affecting the β-cells involved in promoting pancreatic homeostasis.

Vaccines are being investigated to treat or prevent type 1 diabetes by inducing immune tolerance to insulin or pancreatic beta cells. While Phase II clinical trials of a vaccine containing alum and recombinant GAD65, an autoantigen involved in type 1 diabetes, were promising, as of 2014 Phase III had failed. As of 2014, other approaches, such as a DNA vaccine encoding proinsulin and a peptide fragment of insulin, were in early clinical development.
Organ replacement
Pluripotent stem cells can be used to generate beta cells, but previously these cells did not function as well as normal beta cells. In 2014, more mature beta cells were produced which released insulin in response to blood sugar when transplanted into mice. Before these techniques can be used in humans, more evidence of safety and effectiveness is needed.

There has also been substantial effort to develop a fully automated insulin delivery system or "artificial pancreas" that could sense glucose levels and inject appropriate insulin without conscious input from the user. Current "hybrid closed-loop systems" use a continuous glucose monitor to sense blood sugar levels and a subcutaneous insulin pump to deliver insulin; however, due to the delay between insulin injection and its action, current systems require the user to initiate insulin before taking meals (a toy sketch of such a control cycle appears below). Several improvements to these systems are currently undergoing clinical trials in humans, including a dual-hormone system that injects glucagon in addition to insulin, and an implantable device that injects insulin intraperitoneally, where it can be absorbed more quickly.
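To make the closed-loop idea concrete, the toy sketch below shows one sense-compute-deliver cycle. It is an illustration under stated assumptions only – the target, basal rate, gain, and the read_cgm/pump_deliver callables are all hypothetical, and real systems use clinically validated, safety-checked dosing algorithms.

```python
TARGET_MGDL = 110        # example glucose target (assumption)
BASAL_U_PER_HR = 1.0     # example basal insulin rate (assumption)
K_P = 0.01               # illustrative proportional gain (assumption)

def control_cycle(read_cgm, pump_deliver) -> None:
    """One iteration of a naive proportional control loop: read the sensor,
    compute an insulin dose from the error above target, and deliver it.
    Real hybrid closed-loop systems are far more sophisticated and still
    require user-initiated meal boluses, as noted above."""
    glucose = read_cgm()                       # mg/dL from the CGM
    error = max(0.0, glucose - TARGET_MGDL)    # only correct high glucose
    dose = BASAL_U_PER_HR / 12 + K_P * error   # 5-minute basal slice + correction
    pump_deliver(dose)

# Example with stand-ins for the sensor and pump:
control_cycle(lambda: 180, lambda units: print(f"deliver {units:.2f} U"))
```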
Disease models
Various animal models of disease are used to understand the pathogenesis and etiology of type 1 diabetes. Currently available models of T1D can be divided into spontaneously autoimmune, chemically induced, virus-induced, and genetically induced.

The nonobese diabetic (NOD) mouse is the most widely studied model of type 1 diabetes. It is an inbred strain that spontaneously develops type 1 diabetes in 30–100% of female mice, depending on housing conditions. Diabetes in NOD mice is caused by several genes, primarily MHC genes involved in antigen presentation. Like diabetic humans, NOD mice develop islet autoantibodies and inflammation in the islet, followed by reduced insulin production and hyperglycemia. Some features of human diabetes are exaggerated in NOD mice: the mice have more severe islet inflammation than humans and have a much more pronounced sex bias, with females developing diabetes far more frequently than males. In NOD mice, the onset of insulitis occurs at 3–4 weeks of age. The islets of Langerhans are infiltrated by CD4+ and CD8+ T lymphocytes, NK cells, B lymphocytes, dendritic cells, macrophages, and neutrophils, similar to the disease process in humans. In addition to sex, breeding conditions, gut microbiome composition, and diet also influence the onset of T1D.

The BioBreeding Diabetes-Prone (BB) rat is another widely used spontaneous experimental model for T1D. The onset of diabetes occurs in up to 90% of individuals (regardless of sex) at 8–16 weeks of age. During insulitis, the pancreatic islets are infiltrated by T lymphocytes, B lymphocytes, macrophages, and NK cells; the difference from the human course of insulitis is that CD4+ T lymphocytes are markedly reduced and CD8+ T lymphocytes are almost absent. This lymphopenia is the major drawback of the model. The disease is characterized by hyperglycemia, hypoinsulinemia, weight loss, ketonuria, and the need for insulin therapy for survival. BB rats are used to study the genetic aspects of T1D and are also used for interventional studies and diabetic nephropathy studies.

LEW-1AR1/-iddm rats are derived from congenic Lewis rats and represent a rarer spontaneous model for T1D. These rats develop diabetes at about 8–9 weeks of age, with no sex differences, unlike NOD mice. In LEW rats, diabetes presents with hyperglycemia, glycosuria, ketonuria, and polyuria. The advantage of the model is the progression of the prediabetic phase, which is very similar to the human disease, with infiltration of the islets by immune cells about a week before hyperglycemia is observed. This model is suitable for intervention studies or for the search for predictive biomarkers. It is also possible to observe individual phases of pancreatic infiltration by immune cells. A further advantage of congenic LEW rats is good viability after the manifestation of T1D (compared to NOD mice and BB rats).
Chemically induced
The chemical compounds alloxan and streptozotocin (STZ) are commonly used to induce diabetes and destroy β-cells in mouse/rat animal models. In both cases, the compound is a cytotoxic analog of glucose that passes through the GLUT2 transporter and accumulates in β-cells, causing their destruction. The chemically induced destruction of β-cells leads to decreased insulin production, hyperglycemia, and weight loss in the experimental animal. Animal models prepared in this way are suitable for research into blood sugar-lowering drugs and therapies (e.g., for testing new insulin preparations) and for testing transplantation therapies. Their main advantage is low cost; the disadvantage is the cytotoxicity of the chemical compounds.

Genetically induced

The most commonly used genetically induced T1D model is the so-called AKITA mouse (originally a C57BL/6NSlc mouse). The development of diabetes in AKITA mice is caused by a spontaneous point mutation in the Ins2 gene, which is responsible for the correct composition of insulin in the endoplasmic reticulum. Decreased insulin production is then associated with hyperglycemia, polydipsia, and polyuria. Severe diabetes develops within 3–4 weeks, and AKITA mice survive no longer than 12 weeks without treatment intervention. Unlike in the spontaneous models, the early stages of the disease are not accompanied by insulitis. AKITA mice are used to test drugs targeting endoplasmic reticulum stress reduction, to test islet transplants, and to study diabetes-related complications such as nephropathy, sympathetic autonomic neuropathy, and vascular disease.
Virally induced
Viral infections play a role in the development of a number of autoimmune diseases, including human type 1 diabetes. However, the mechanisms by which viruses are involved in the induction of type 1 DM are not fully understood. Virus-induced models are used to study the etiology and pathogenesis of the disease, in particular the mechanisms by which environmental factors contribute to or protect against the occurrence of type 1 DM. Among the most commonly used are Coxsackie virus, lymphocytic choriomeningitis virus, encephalomyocarditis virus, and Kilham rat virus. Examples of virus-induced models include NOD mice infected with coxsackievirus B4, which developed type 1 DM within two weeks.
References
Works cited
External links
National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) – Diabetes in America Textbook (PDFs)
IDF Diabetes Atlas
Type 1 Diabetes Archived 30 October 2009 at the Wayback Machine at the American Diabetes Association
ADA's Standards of Medical Care in Diabetes 2019
Type 2 diabetes

Type 2 diabetes, formerly known as adult-onset diabetes, is a form of diabetes mellitus that is characterized by high blood sugar, insulin resistance, and relative lack of insulin. Common symptoms include increased thirst, frequent urination, and unexplained weight loss. Symptoms may also include increased hunger, feeling tired, and sores that do not heal. Often symptoms come on slowly. Long-term complications from high blood sugar include heart disease, strokes, diabetic retinopathy (which can result in blindness), kidney failure, and poor blood flow in the limbs, which may lead to amputations. The sudden onset of hyperosmolar hyperglycemic state may occur; however, ketoacidosis is uncommon.

Type 2 diabetes primarily occurs as a result of obesity and lack of exercise. Some people are genetically more at risk than others.

Type 2 diabetes makes up about 90% of cases of diabetes, with the other 10% due primarily to type 1 diabetes and gestational diabetes. In type 1 diabetes, there is a lower total level of insulin to control blood glucose, due to an autoimmune-induced loss of insulin-producing beta cells in the pancreas. Diagnosis of diabetes is by blood tests such as fasting plasma glucose, an oral glucose tolerance test, or glycated hemoglobin (A1C).

Type 2 diabetes is largely preventable by maintaining a normal weight, exercising regularly, and eating a healthy diet (high in fruits and vegetables and low in sugar and saturated fats). Treatment involves exercise and dietary changes. If blood sugar levels are not adequately lowered, the medication metformin is typically recommended. Many people may eventually also require insulin injections. In those on insulin, routinely checking blood sugar levels is advised; however, this may not be needed in those taking pills. Bariatric surgery often improves diabetes in those who are obese.

Rates of type 2 diabetes have increased markedly since 1960 in parallel with obesity. As of 2015 there were approximately 392 million people diagnosed with the disease, compared to around 30 million in 1985. Typically it begins in middle or older age, although rates of type 2 diabetes are increasing in young people. Type 2 diabetes is associated with a ten-year-shorter life expectancy. Diabetes was one of the first diseases ever described, dating back to an Egyptian manuscript from c. 1500 BCE. The importance of insulin in the disease was determined in the 1920s.
Signs and symptoms
The classic symptoms of diabetes are frequent urination (polyuria), increased thirst (polydipsia), increased hunger (polyphagia), and weight loss. Other symptoms that are commonly present at diagnosis include a history of blurred vision, itchiness, peripheral neuropathy, recurrent vaginal infections, and fatigue. Other symptoms may include loss of taste. Many people, however, have no symptoms during the first few years and are diagnosed on routine testing. A small number of people with type 2 diabetes can develop a hyperosmolar hyperglycemic state (a condition of very high blood sugar associated with a decreased level of consciousness and low blood pressure).
Complications
Type 2 diabetes is typically a chronic disease associated with a ten-year-shorter life expectancy. This is partly due to a number of complications with which it is associated, including two to four times the risk of cardiovascular disease (including ischemic heart disease and stroke), a 20-fold increase in lower limb amputations, and increased rates of hospitalization. In the developed world, and increasingly elsewhere, type 2 diabetes is the largest cause of nontraumatic blindness and kidney failure. It has also been associated with an increased risk of cognitive dysfunction and dementia through disease processes such as Alzheimer's disease and vascular dementia. Other complications include hyperpigmentation of the skin (acanthosis nigricans), sexual dysfunction, and frequent infections. There is also an association between type 2 diabetes and mild hearing loss.
Causes
The development of type 2 diabetes is caused by a combination of lifestyle and genetic factors. While some of these factors are under personal control, such as diet and obesity, other factors are not, such as increasing age, female sex, and genetics. Obesity is more common in women than men in many parts of Africa. The nutritional status of a mother during fetal development may also play a role, with one proposed mechanism being that of DNA methylation. The intestinal bacteria Prevotella copri and Bacteroides vulgatus have been connected with type 2 diabetes.
Lifestyle
Lifestyle factors are important to the development of type 2 diabetes, including obesity and being overweight (defined by a body mass index of greater than 25), lack of physical activity, poor diet, psychological stress, and urbanization. Excess body fat is associated with 30% of cases in those of Chinese and Japanese descent, 60–80% of cases in those of European and African descent, and 100% of cases in Pima Indians and Pacific Islanders. Among those who are not obese, a high waist–hip ratio is often present. Smoking appears to increase the risk of type 2 diabetes. A lack of sleep has also been linked to type 2 diabetes. Laboratory studies have linked short-term sleep deprivation to changes in glucose metabolism, nervous system activity, or hormonal factors that may lead to diabetes.

Dietary factors also influence the risk of developing type 2 diabetes. Consumption of sugar-sweetened drinks in excess is associated with an increased risk. The types of fats in the diet are important, with saturated fats and trans fatty acids increasing the risk, and polyunsaturated and monounsaturated fats decreasing the risk. Eating a lot of white rice appears to play a role in increasing risk. A lack of exercise is believed to cause 7% of cases. Persistent organic pollutants may also play a role.
Genetics
Most cases of diabetes involve many genes, with each being a small contributor to an increased probability of becoming a type 2 diabetic. The proportion of diabetes that is inherited is estimated at 72%. More than 36 genes and 80 single nucleotide polymorphisms (SNPs) have been found that contribute to the risk of type 2 diabetes. All of these genes together still account for only 10% of the total heritable component of the disease. The TCF7L2 allele, for example, increases the risk of developing diabetes by 1.5 times and confers the greatest risk among the common genetic variants. Most of the genes linked to diabetes are involved in pancreatic beta cell functions.

There are a number of rare cases of diabetes that arise due to an abnormality in a single gene (known as monogenic forms of diabetes or "other specific types of diabetes"). These include maturity onset diabetes of the young (MODY), Donohue syndrome, and Rabson–Mendenhall syndrome, among others. Maturity onset diabetes of the young constitutes 1–5% of all cases of diabetes in young people.
Medical conditions
A number of medications and other health problems can predispose to diabetes. Some of the medications include glucocorticoids, thiazides, beta blockers, atypical antipsychotics, and statins. Those who have previously had gestational diabetes are at a higher risk of developing type 2 diabetes. Other health problems that are associated include acromegaly, Cushing's syndrome, hyperthyroidism, pheochromocytoma, and certain cancers such as glucagonomas. Individuals with cancer may be at a higher risk of mortality if they also have diabetes. Testosterone deficiency is also associated with type 2 diabetes. Eating disorders may also interact with type 2 diabetes, with bulimia nervosa increasing the risk and anorexia nervosa decreasing it.
Pathophysiology
Type 2 diabetes is due to insufficient insulin production from beta cells in the setting of insulin resistance. Insulin resistance, which is the inability of cells to respond adequately to normal levels of insulin, occurs primarily within the muscles, liver, and fat tissue. In the liver, insulin normally suppresses glucose release. However, in the setting of insulin resistance, the liver inappropriately releases glucose into the blood. The proportion of insulin resistance versus beta cell dysfunction differs among individuals, with some having primarily insulin resistance and only a minor defect in insulin secretion and others with slight insulin resistance and primarily a lack of insulin secretion.

Other potentially important mechanisms associated with type 2 diabetes and insulin resistance include: increased breakdown of lipids within fat cells, resistance to and lack of incretin, high glucagon levels in the blood, increased retention of salt and water by the kidneys, and inappropriate regulation of metabolism by the central nervous system. However, not all people with insulin resistance develop diabetes, since an impairment of insulin secretion by pancreatic beta cells is also required.

In the early stages of insulin resistance, the mass of beta cells expands, increasing the output of insulin to compensate for the insulin insensitivity. But when type 2 diabetes has become manifest, a type 2 diabetic will have lost about half of their beta cells. Fatty acids in the beta cells activate FOXO1, resulting in apoptosis of the beta cells.

The causes of the aging-related insulin resistance seen in obesity and in type 2 diabetes are uncertain. Effects of intracellular lipid metabolism and ATP production in liver and muscle cells may contribute to insulin resistance. New evidence also points to a role of a brain region called the hypothalamus in the development of insulin resistance. For one thing, a gene called Dusp8 is linked with an increased risk for diabetes. This gene codes for a protein that regulates neuronal signaling in the hypothalamus. Also, infusions into the hypothalamus of a hormone called leptin normalize blood glucose and diminish insulin resistance in diabetic animals. Activation of hypothalamic cells by leptin has an important role in maintaining normal levels of blood glucose. Thus, both the endocrine cells of the pancreas and cells in the hypothalamus may have a role in the etiology of type 2 diabetes.
Hypothalamic cells regulate blood glucose via projections to the autonomic nervous system. Autonomic innervation of liver and muscle cells stimulates an increased uptake of glucose. In diabetic humans, the control of blood glucose by the autonomic nervous system is abnormal. Leptin-sensitive, glucose-regulating neurons become resistant to leptin during aging or during exposure to a high-fat diet. These leptin-resistant neurons fail to restrain food intake, obesity, and blood glucose. The reasons for this lowered responsiveness to leptin are uncertain and are part of the puzzle of the causes of type 2 diabetes.

Blood glucose levels can also be normalized in diabetic rodents by a single intrahypothalamic infusion of fibroblast growth factor 1 (FGF1), an effect that persists for months even in severely diabetic animals. This remarkable cure of diabetes is accomplished by a stimulation of accessory brain cells called astrocytes. Hypothalamic astrocytes that produce fatty acid binding protein 7 (FABP7) are targets of FGF1; these cells are also in close contact with leptin-sensitive neurons, influence their function, and regulate leptin sensitivity. An abnormal function of FABP7+ astrocytes thus may contribute to the resistance to leptin and insulin that appears during aging and during exposure to high-fat diets.
During aging, FABP7+ astrocytes develop cytoplasmic granules derived from degenerating mitochondria. This mitochondrial degeneration is partly due to the oxidative stress of the heightened amounts of fatty acids that are taken up by these cells and oxidized within mitochondria. A pathological degeneration of mitochondria in these cells may compromise their normal functions and contribute to abnormalities in the control of blood glucose by the hypothalamus.
Diagnosis
The World Health Organization definition of diabetes (both type 1 and type 2) is for a single raised glucose reading with symptoms, otherwise raised values on two occasions, of either:
fasting plasma glucose ≥ 7.0 mmol/l (126 mg/dl), or
with a glucose tolerance test, two hours after the oral dose, a plasma glucose ≥ 11.1 mmol/l (200 mg/dl)

A random blood sugar of greater than 11.1 mmol/l (200 mg/dl) in association with typical symptoms, or a glycated hemoglobin (HbA1c) of ≥ 48 mmol/mol (≥ 6.5 DCCT %), is another method of diagnosing diabetes. In 2009, an International Expert Committee that included representatives of the American Diabetes Association (ADA), the International Diabetes Federation (IDF), and the European Association for the Study of Diabetes (EASD) recommended that a threshold of ≥ 48 mmol/mol (≥ 6.5 DCCT %) should be used to diagnose diabetes. This recommendation was adopted by the American Diabetes Association in 2010. Positive tests should be repeated unless the person presents with typical symptoms and blood sugars >11.1 mmol/l (>200 mg/dl).
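Restating the thresholds above programmatically may make them easier to compare; the sketch below is for illustration only, not clinical software. The Reading structure and function name are hypothetical, and the mg/dl equivalents in the comments follow from the conversion 1 mmol/l of glucose ≈ 18 mg/dl.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    fasting_mmol_l: Optional[float] = None   # fasting plasma glucose
    ogtt_2h_mmol_l: Optional[float] = None   # 2-hour oral glucose tolerance test
    hba1c_mmol_mol: Optional[float] = None   # glycated hemoglobin (IFCC units)

def meets_diabetes_threshold(r: Reading) -> bool:
    """True if any single value meets the diagnostic thresholds quoted above."""
    if r.fasting_mmol_l is not None and r.fasting_mmol_l >= 7.0:    # 126 mg/dl
        return True
    if r.ogtt_2h_mmol_l is not None and r.ogtt_2h_mmol_l >= 11.1:   # 200 mg/dl
        return True
    if r.hba1c_mmol_mol is not None and r.hba1c_mmol_mol >= 48:     # 6.5 DCCT %
        return True
    return False

# Per the WHO definition above, one raised value must be accompanied by
# symptoms, or confirmed by a second raised value on another occasion.
print(meets_diabetes_threshold(Reading(fasting_mmol_l=7.4)))  # True
```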
The threshold for diagnosis of diabetes is based on the relationship between the results of glucose tolerance tests, fasting glucose, or HbA1c and complications such as retinal problems. A fasting or random blood sugar is preferred over the glucose tolerance test, as they are more convenient for people. HbA1c has the advantages that fasting is not required and the results are more stable, but has the disadvantage that the test is more costly than measurement of blood glucose. It is estimated that 20% of people with diabetes in the United States do not realize that they have the disease.

Type 2 diabetes is characterized by high blood glucose in the context of insulin resistance and relative insulin deficiency. This is in contrast to type 1 diabetes, in which there is an absolute insulin deficiency due to destruction of islet cells in the pancreas, and to gestational diabetes, which is a new onset of high blood sugar associated with pregnancy. Type 1 and type 2 diabetes can typically be distinguished based on the presenting circumstances. If the diagnosis is in doubt, antibody testing may be useful to confirm type 1 diabetes, and C-peptide levels may be useful to confirm type 2 diabetes, with C-peptide levels normal or high in type 2 diabetes but low in type 1 diabetes.
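The two HbA1c units used above (IFCC mmol/mol and DCCT/NGSP percent) are related by the published NGSP–IFCC master equation, with coefficients as commonly cited:

$$\mathrm{HbA1c_{NGSP}}(\%) \;\approx\; 0.0915 \times \mathrm{HbA1c_{IFCC}}(\mathrm{mmol/mol}) + 2.15,$$

so the 48 mmol/mol threshold corresponds to about 0.0915 × 48 + 2.15 ≈ 6.5%, matching the equivalence quoted above.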
Screening
No major organization recommends universal screening for diabetes, as there is no evidence that such a program improves outcomes. Screening is recommended by the United States Preventive Services Task Force (USPSTF) in adults without symptoms whose blood pressure is greater than 135/80 mmHg. For those whose blood pressure is lower, the evidence is insufficient to recommend for or against screening. There is no evidence that it changes the risk of death in this group of people. The USPSTF also recommends screening among those who are overweight and between the ages of 40 and 70.

The World Health Organization recommends testing those groups at high risk, and as of 2014 the USPSTF was considering a similar recommendation. High-risk groups in the United States include: those over 45 years old; those with a first-degree relative with diabetes; some ethnic groups, including Hispanics, African-Americans, and Native Americans; those with a history of gestational diabetes; polycystic ovary syndrome; excess weight; and conditions associated with metabolic syndrome. The American Diabetes Association recommends screening those who have a BMI over 25 (in people of Asian descent, screening is recommended for a BMI over 23).

A Cochrane systematic review looking at the effects of screening on all-cause and diabetes-related mortality did not show any benefit in these outcomes for either screening or not screening. This was based on only one included study, so no conclusions can be made on the benefits of screening, or lack thereof. The same review did not assess the effects on other outcomes such as adverse effects, incidence of type 2 diabetes, HbA1c, or socioeconomic effects.

In the UK, NICE guidelines suggest taking action to prevent diabetes for people with a body mass index (BMI) of 30. For people of Black African, African-Caribbean, South Asian, and Chinese descent, the recommendation to start prevention begins at a BMI of 27.5. A study based on a large sample of people in England suggests even lower BMIs for certain ethnic groups for the start of prevention, for example 24 in South Asian and 21 in Bangladeshi populations.
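The guideline cut-offs quoted above amount to a small lookup. The sketch below merely restates the ADA and NICE figures for illustration – the function names are hypothetical, and the real guidelines weigh many additional factors:

```python
def ada_screening_bmi_cutoff(asian_descent: bool) -> float:
    # ADA: screen at BMI over 25, or over 23 for people of Asian descent.
    return 23.0 if asian_descent else 25.0

def nice_prevention_bmi_cutoff(higher_risk_ethnicity: bool) -> float:
    # NICE: act to prevent diabetes at BMI 30, or 27.5 for people of Black
    # African, African-Caribbean, South Asian, and Chinese descent.
    return 27.5 if higher_risk_ethnicity else 30.0

print(ada_screening_bmi_cutoff(asian_descent=True))             # 23.0
print(nice_prevention_bmi_cutoff(higher_risk_ethnicity=False))  # 30.0
```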
Prevention
Onset of type 2 diabetes can be delayed or prevented through proper nutrition and regular exercise. Intensive lifestyle measures may reduce the risk by over half. The benefit of exercise occurs regardless of the person's initial weight or subsequent weight loss. High levels of physical activity reduce the risk of diabetes by about 28%. Evidence for the benefit of dietary changes alone, however, is limited, with some evidence for a diet high in green leafy vegetables and some for limiting the intake of sugary drinks. There is an association between higher intake of sugar-sweetened fruit juice and diabetes, but no evidence of an association with 100% fruit juice. A 2019 review found evidence of benefit from dietary fiber.

In those with impaired glucose tolerance, diet and exercise, either alone or in combination with metformin or acarbose, may decrease the risk of developing diabetes. Lifestyle interventions are more effective than metformin. A 2017 review found that, long term, lifestyle changes decreased the risk by 28%, while medication does not reduce risk after withdrawal. While low vitamin D levels are associated with an increased risk of diabetes, correcting the levels by supplementing vitamin D3 does not improve that risk.
Management
Management of type 2 diabetes focuses on lifestyle interventions, lowering other cardiovascular risk factors, and maintaining blood glucose levels in the normal range. Self-monitoring of blood glucose for people with newly diagnosed type 2 diabetes may be used in combination with education, although the benefit of self-monitoring in those not using multi-dose insulin is questionable. In those who do not want to measure blood levels, measuring urine levels may be done. Managing other cardiovascular risk factors, such as hypertension, high cholesterol, and microalbuminuria, improves a person's life expectancy. Decreasing the systolic blood pressure to less than 140 mmHg is associated with a lower risk of death and better outcomes. Intensive blood pressure management (less than 130/80 mmHg), as opposed to standard blood pressure management (less than 140–160 mmHg systolic to 85–100 mmHg diastolic), results in a slight decrease in stroke risk but no effect on overall risk of death.

Intensive blood sugar lowering (HbA1c < 6%), as opposed to standard blood sugar lowering (HbA1c of 7–7.9%), does not appear to change mortality. The goal of treatment is typically an HbA1c of 7 to 8% or a fasting glucose of less than 7.2 mmol/L (130 mg/dl); however, these goals may be changed after professional clinical consultation, taking into account particular risks of hypoglycemia and life expectancy. Hypoglycemia is associated with adverse outcomes in older people with type 2 diabetes. Despite guidelines recommending that intensive blood sugar control be based on balancing immediate harms with long-term benefits, many people – for example, people with a life expectancy of less than nine years who will not benefit – are over-treated.

It is recommended that all people with type 2 diabetes get regular eye examinations. There is weak evidence suggesting that treating gum disease by scaling and root planing may result in a small short-term improvement in blood sugar levels for people with diabetes. There is no evidence to suggest that this improvement in blood sugar levels is maintained longer than four months. There is also not enough evidence to determine if medications to treat gum disease are effective at lowering blood sugar levels.
Lifestyle
Exercise
A proper diet and regular exercise are foundations of diabetic care, with one review indicating that a greater amount of exercise improved outcomes. Regular exercise may improve blood sugar control, decrease body fat content, and decrease blood lipid levels.
Diet
A diabetic diet that includes calorie restriction to promote weight loss is generally recommended. Other recommendations include emphasizing intake of fruits, vegetables, reduced saturated fat, and low-fat dairy products, with macronutrient intake tailored to the individual to distribute calories and carbohydrates throughout the day. Several diets may be effective, such as the Dietary Approaches to Stop Hypertension (DASH) diet, the Mediterranean diet, a low-fat diet, or monitored carbohydrate diets such as a low-carbohydrate diet. Viscous fiber supplements may be useful in those with diabetes.

Vegetarian diets in general have been related to lower diabetes risk, but do not offer advantages compared with diets that allow moderate amounts of animal products. There is not enough evidence to suggest that cinnamon improves blood sugar levels in people with type 2 diabetes. A 2021 review showed that consumption of tree nuts (walnuts, almonds, and hazelnuts) reduced fasting blood glucose in diabetic people.

Culturally appropriate education may help people with type 2 diabetes control their blood sugar levels for up to 24 months. There is not enough evidence to determine if lifestyle interventions affect mortality in those who already have type 2 diabetes. As of 2015, there is insufficient data to recommend nonnutritive sweeteners, which may help reduce caloric intake.
Medications
Blood sugar control
There are several classes of anti-diabetic medications available. Metformin is generally recommended as a first-line treatment, as there is some evidence that it decreases mortality; however, this conclusion is questioned. Metformin should not be used in those with severe kidney or liver problems.

A second oral agent of another class, or insulin, may be added if metformin is not sufficient after three months. Other classes of medications include sulfonylureas, thiazolidinediones, dipeptidyl peptidase-4 inhibitors, SGLT2 inhibitors, and glucagon-like peptide-1 analogs. As of 2015 there was no significant difference between these agents. A 2018 review found that SGLT2 inhibitors and GLP-1 agonists, but not DPP-4 inhibitors, were associated with lower mortality than placebo or no treatment.

Rosiglitazone, a thiazolidinedione, has not been found to improve long-term outcomes even though it improves blood sugar levels. Additionally, it is associated with increased rates of heart disease and death.

The effects of pioglitazone have been compared in a Cochrane systematic review to those of other blood sugar-lowering medicines, including metformin, acarbose, and repaglinide, and showed no benefit in reducing the chance of developing type 2 diabetes in people at risk. It did, however, show a reduction in the risk of developing type 2 diabetes when compared to a placebo or to no treatment. These results should be interpreted with the caveat that most of the data in the studies included in this review were of low or very low certainty.
Injections of insulin may either be added to oral medication or used alone. Most people do not initially need insulin. When it is used, a long-acting formulation is typically added at night, with oral medications being continued. Doses are then increased until blood sugar levels are well controlled. When nightly insulin is insufficient, twice-daily insulin may achieve better control. The long-acting insulins glargine and detemir are equally safe and effective, and do not appear much better than neutral protamine Hagedorn (NPH) insulin, but as they are significantly more expensive, they were not cost-effective as of 2010. In those who are pregnant, insulin is generally the treatment of choice.
Blood pressure lowering
Many international guidelines recommend blood pressure treatment targets that are lower than 140/90 mmHg for people with diabetes. However, there is only limited evidence regarding what the lower targets should be. A 2016 systematic review found potential harm in treating to targets lower than 140 mmHg, and a subsequent review in 2019 found no evidence of additional benefit from blood pressure lowering to between 130 and 140 mmHg, although there was an increased risk of adverse events.

2015 American Diabetes Association recommendations are that people with diabetes and albuminuria should receive an inhibitor of the renin-angiotensin system to reduce the risks of progression to end-stage renal disease, cardiovascular events, and death. There is some evidence that angiotensin-converting enzyme inhibitors (ACEIs) are superior to other inhibitors of the renin-angiotensin system, such as angiotensin receptor blockers (ARBs) or aliskiren, in preventing cardiovascular disease, although a more recent review found similar effects of ACEIs and ARBs on major cardiovascular and renal outcomes. There is no evidence that combining ACEIs and ARBs provides additional benefits.
Other
The use of aspirin to prevent cardiovascular disease in diabetes is controversial. Aspirin is recommended in people at high risk of cardiovascular disease; however, routine use of aspirin has not been found to improve outcomes in uncomplicated diabetes. 2015 American Diabetes Association recommendations for aspirin use (based on expert consensus or clinical experience) are that low-dose aspirin use is reasonable in adults with diabetes who are at intermediate risk of cardiovascular disease (10-year cardiovascular disease risk 5–10%).

Vitamin D supplementation in people with type 2 diabetes may improve markers of insulin resistance and HbA1c.

Sharing electronic health records with people who have type 2 diabetes helps them to reduce their blood sugar levels. It is a way of helping people understand their own health condition and involving them actively in its management.
Surgery
Weight loss surgery in those who are obese is an effective measure to treat diabetes. Many are able to maintain normal blood sugar levels with little or no medication following surgery, and long-term mortality is decreased. There is, however, a short-term mortality risk of less than 1% from the surgery. The body mass index cutoffs for when surgery is appropriate are not yet clear. It is recommended that this option be considered in those who are unable to get both their weight and their blood sugar under control.
Epidemiology
The International Diabetes Federation estimates that nearly 537 million people lived with diabetes worldwide in 2021, 90–95% of whom have type 2 diabetes. Diabetes is common both in the developed and the developing world. It remains uncommon, however, in the least developed countries.

Women seem to be at a greater risk, as do certain ethnic groups such as South Asians, Pacific Islanders, Latinos, and Native Americans. This may be due to enhanced sensitivity to a Western lifestyle in certain ethnic groups. Traditionally considered a disease of adults, type 2 diabetes is increasingly diagnosed in children in parallel with rising obesity rates. Type 2 diabetes is now diagnosed as frequently as type 1 diabetes in teenagers in the United States.

Rates of diabetes in 1985 were estimated at 30 million, increasing to 135 million in 1995 and 217 million in 2005. This increase is believed to be primarily due to the global population aging, a decrease in exercise, and increasing rates of obesity. The five countries with the greatest number of people with diabetes as of 2000 were India with 31.7 million, China with 20.8 million, the United States with 17.7 million, Indonesia with 8.4 million, and Japan with 6.8 million. It is recognized as a global epidemic by the World Health Organization.
History
Diabetes is one of the first diseases ever described, with an Egyptian manuscript from c. 1500 BCE mentioning "too great emptying of the urine." The first described cases are believed to be of type 1 diabetes. Indian physicians around the same time identified the disease and classified it as madhumeha, or honey urine, noting that the urine would attract ants. The term "diabetes", or "to pass through", was first used in 230 BCE by the Greek Apollonius Memphites. The disease was rare during the time of the Roman Empire, with Galen commenting that he had seen only two cases during his career.

Type 1 and type 2 diabetes were identified as separate conditions for the first time by the Indian physicians Sushruta and Charaka in 400–500 AD, with type 1 associated with youth and type 2 with being overweight. Effective treatment was not developed until the early part of the 20th century, when the Canadians Frederick Banting and Charles Best discovered insulin in 1921 and 1922. This was followed by the development of the long-acting NPH insulin in the 1940s.

In 1916, Elliot Joslin proposed that in people with diabetes, periods of fasting are helpful. Subsequent research has supported this, and weight loss is a first-line treatment in type 2 diabetes.
Research
Researchers have developed the Diabetes Severity Score (DISSCO), a tool that may be better than the standard blood test at identifying whether a person's condition is declining. It uses a computer algorithm to analyse data from anonymised electronic patient records and produces a score based on 34 indicators.
References
Works cited
Kahn CR, Ferris HA, O'Neill BT (2020). "Pathophysiology of Type 1 Diabetes Mellitus". Williams Textbook of Endocrinology (14th ed.). Elsevier. pp. 1349–1370.
International Diabetes Federation (2021). IDF Diabetes Atlas (PDF) (10 ed.). ISBN 9782930229980. Retrieved 18 March 2022.
External links
IDF Diabetes Atlas 2015
National Diabetes Information Clearinghouse Archived 2010-02-21 at the Wayback Machine
Centers for Disease Control (Endocrine pathology)
ADA's Standards of Medical Care in Diabetes 2019
Gastroparesis

Gastroparesis (gastro- from Ancient Greek γαστήρ – gaster, "stomach"; and -paresis, πάρεσις – "partial paralysis"), also called delayed gastric emptying, is a medical disorder consisting of weak muscular contractions (peristalsis) of the stomach, resulting in food and liquid remaining in the stomach for a prolonged period of time. Stomach contents thus exit more slowly into the duodenum of the digestive tract. This can result in irregular absorption of nutrients, inadequate nutrition, and poor glycemic control.

Symptoms include nausea, vomiting, abdominal pain, feeling full soon after beginning to eat (early satiety), abdominal bloating, and heartburn. The most common known mechanism is autonomic neuropathy of the nerve that innervates the stomach: the vagus nerve. Uncontrolled diabetes mellitus is a major cause of this nerve damage; other causes include post-infectious injury and trauma to the vagus nerve.
Diagnosis is via one or more of the following: barium swallow X-ray, barium beefsteak meal, radioisotope gastric-emptying scan, gastric manometry, and esophagogastroduodenoscopy (EGD). Complications include malnutrition, fatigue, weight loss, vitamin deficiencies, intestinal obstruction due to bezoars, and small intestine bacterial overgrowth.
Treatment includes dietary modifications, medications to stimulate gastric emptying, medications to reduce vomiting, and surgical approaches.
Signs and symptoms
The most common symptoms of gastroparesis are the following:
Chronic nausea
Vomiting (especially of undigested food)
Abdominal pain
A feeling of fullness after eating just a few bites
Other symptoms include the following:
Abdominal bloating
Body aches (myalgia)
Erratic blood glucose levels
Acid reflux (GERD)
Heartburn
Lack of appetite
Morning nausea
Muscle weakness
Night sweats
Palpitations
Spasms of the stomach wall
Constipation or infrequent bowel movements
Weight loss, malnutrition
Difficulty swallowing
Vomiting may not occur in all cases, as those affected may adjust their diets to include only small amounts of food.
Causes
Transient gastroparesis may arise in acute illness of any kind, as a consequence of certain cancer treatments or other drugs which affect digestive action, or due to abnormal eating patterns. The symptoms are almost identical to those of low stomach acid; therefore, most doctors will usually recommend trying out supplemental hydrochloric acid before moving on to the invasive procedures required to confirm a damaged nerve. Patients with cancer may develop gastroparesis because of chemotherapy-induced neuropathy, immunosuppression followed by viral infections involving the GI tract, procedures such as celiac blocks, paraneoplastic neuropathy or myopathy, or after an allogeneic bone marrow transplant via graft-versus-host disease.

Gastroparesis produces symptoms similar to those of slow gastric emptying caused by certain opioid medications, antidepressants, and allergy medications, along with high blood pressure. For patients who already have gastroparesis, these can make the condition worse.

More than 50% of all gastroparesis cases are idiopathic in nature, with unknown causes. It is, however, frequently caused by autonomic neuropathy. This may occur in people with type 1 or type 2 diabetes, in about 30–50% of long-standing diabetics. In fact, diabetes mellitus has been named as the most common cause of gastroparesis, as high levels of blood glucose can cause chemical changes in the nerves. The vagus nerve becomes damaged by years of high blood glucose or insufficient transport of glucose into cells, resulting in gastroparesis. Adrenal and thyroid gland problems could also be a cause.

Gastroparesis has also been associated with connective tissue diseases such as scleroderma and Ehlers–Danlos syndrome, and with neurological conditions such as Parkinson's disease and multiple system atrophy. It may occur as part of a mitochondrial disease. Opioids and anticholinergic medications can cause medication-induced gastroparesis. Chronic gastroparesis can be caused by other types of damage to the vagus nerve, such as abdominal surgery. Heavy cigarette smoking is also a plausible cause, since smoking causes damage to the stomach lining. Idiopathic gastroparesis (gastroparesis with no known cause) accounts for a third of all chronic cases; it is thought that many of these cases are due to an autoimmune response triggered by an acute viral infection. Gastroenteritis, mononucleosis, and other ailments have been anecdotally linked to the onset of the condition, but no systematic study has proven a link.

People with gastroparesis are disproportionately female. One possible explanation for this finding is that women have an inherently slower stomach emptying time than men. A hormonal link has been suggested, as gastroparesis symptoms tend to worsen the week before menstruation, when progesterone levels are highest. Neither theory has been proven definitively.
Mechanism
On the molecular level, it is thought that gastroparesis can be caused by the loss of neuronal nitric oxide expression, since the cells in the GI tract secrete nitric oxide. This important signaling molecule has various responsibilities in the GI tract and in muscles throughout the body. When nitric oxide levels are low, the smooth muscle and other organs may not be able to function properly. Other important components of the stomach are the interstitial cells of Cajal (ICC), which act as a pacemaker: they transduce signals from motor neurons to produce an electrical rhythm in the smooth muscle cells. Lower nitric oxide levels also correlate with loss of ICCs, which can ultimately lead to loss of function in the smooth muscle of the stomach, as well as in other areas of the gastrointestinal tract.

The pathogenesis of symptoms in diabetic gastroparesis includes:
Loss of gastric neurons containing nitric oxide synthase (NOS) is responsible for a defective accommodation reflex, which leads to early satiety and postprandial fullness.
Impaired electromechanical activity in the myenteric plexus is responsible for delayed gastric emptying, resulting in nausea and vomiting.
Sensory neuropathy in the gastric wall may be responsible for epigastric pain.
Abnormal pacemaker activity (tachybradyarrhythmia) may generate a noxious signal transmitted to the CNS to evoke nausea and vomiting.
Diagnosis
Gastroparesis can be diagnosed with tests such as barium swallow X-rays, manometry, and gastric emptying scans. For the X-ray, the patient drinks a barium-containing liquid after fasting; the barium shows up on the X-ray, allowing the physician to see whether food is still in the stomach. This can be an easy way to identify delayed emptying of the stomach. The clinical definition of gastroparesis is based solely on the emptying time of the stomach (and not on other symptoms), and severity of symptoms does not necessarily correlate with the severity of gastroparesis. Therefore, some patients may have marked gastroparesis with few, if any, serious complications.

In other cases, or if the X-ray is inconclusive, the physician may have the patient eat a meal of toast, water, and eggs containing a radioactive isotope, so that the meal's passage can be watched and the speed of the digestive tract measured. This can be helpful for diagnosing patients who are able to digest liquids but not solid foods.
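The radioisotope scan is typically read as the percentage of the radiolabeled meal still in the stomach at fixed time points. The sketch below illustrates that calculation; note that the cutoffs used (retention above 60% at 2 hours or above 10% at 4 hours suggesting delayed emptying) are commonly cited protocol values and not figures given in this article, and the function names are purely illustrative.

```python
def retention_percent(stomach_counts_now: float, stomach_counts_t0: float) -> float:
    """Percentage of the radiolabeled meal still in the stomach.

    Inputs are assumed to be decay-corrected gamma-camera counts over the
    stomach region of interest, at the current time point and at time zero.
    """
    return 100.0 * stomach_counts_now / stomach_counts_t0


def is_emptying_delayed(retention_2h: float, retention_4h: float) -> bool:
    """Flag delayed gastric emptying using commonly cited cutoffs
    (>60% retention at 2 h or >10% at 4 h). These thresholds are an
    assumption, not values stated in this article."""
    return retention_2h > 60.0 or retention_4h > 10.0


# Example: 75% of the meal remains after 2 hours, 25% after 4 hours.
print(is_emptying_delayed(75.0, 25.0))  # True -> consistent with delayed emptying
```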
Complications
Complications of gastroparesis include:
Fluctuations in blood glucose due to unpredictable digestion times, caused by changes in the rate and amount of food passing into the small bowel. This makes diabetes worse, but does not cause diabetes. Poor control of blood sugar levels, in turn, makes the gastroparesis worse.
General malnutrition due to the symptoms of the disease (which frequently include vomiting and reduced appetite) as well as the dietary changes necessary to manage it. This is especially true for vitamin deficiency diseases such as scurvy, owing to an inability to tolerate fresh fruits.
Severe fatigue and weight loss due to calorie deficit
Intestinal obstruction due to the formation of bezoars (solid masses of undigested food). This can cause nausea and vomiting, and can be life-threatening if the bezoar prevents food from passing into the small intestine.
Small intestine bacterial overgrowth is commonly found in patients with gastroparesis.
Bacterial infection due to overgrowth in undigested food
A decrease in quality of life, since it can make keeping up with work and other responsibilities more difficult.
Treatment
Treatment includes dietary modifications, medications to stimulate gastric emptying, medications to reduce vomiting, and surgical approaches. Dietary treatment involves low-fiber diets and, in some cases, restrictions on fat or solids. Eating smaller meals, spaced two to three hours apart, has proved helpful. Avoiding foods, such as rice or beef, that cause the individual problems such as abdominal pain or constipation will help avoid symptoms.

Metoclopramide, a dopamine D2 receptor antagonist, increases contractility and resting tone within the GI tract to improve gastric emptying. In addition, its dopamine antagonist action in the central nervous system prevents nausea and vomiting. Similarly, the dopamine receptor antagonist domperidone is used to treat gastroparesis. Erythromycin is known to improve emptying of the stomach, but its effects are temporary due to tachyphylaxis and wane after a few weeks of consistent use. Sildenafil citrate, which increases blood flow to the genital area in men, is being used by some practitioners to stimulate the gastrointestinal tract in cases of diabetic gastroparesis. The antidepressant mirtazapine has proven effective in the treatment of gastroparesis unresponsive to conventional treatment, owing to its antiemetic and appetite-stimulant properties; mirtazapine acts on the same serotonin receptor (5-HT3) as the popular antiemetic ondansetron. Camicinal is a motilin agonist for the treatment of gastroparesis.
In specific cases where treatment of chronic nausea and vomiting proves resistant to drugs, implantable gastric stimulation may be utilized. A medical device is implanted that applies neurostimulation to the muscles of the lower stomach to reduce the symptoms. This is only done in refractory cases that have failed all medical management (usually at least two years of treatment). Medically refractory gastroparesis may also be treated with a pyloromyotomy, which widens the gastric outlet by cutting the circular pylorus muscle. This can be done laparoscopically or endoscopically. Vertical sleeve gastrectomy, a procedure in which part or all of the affected portion of the stomach is removed, has shown some success in the treatment of gastroparesis in obese patients, even curing it in some instances. Further studies have been recommended due to the limited sample size of previous studies.

In cases of postinfectious gastroparesis, patients have symptoms and go undiagnosed for an average of 3 weeks to 6 months before their illness is identified correctly and treatment begins.
Prognosis
Post-infectious
Cases of post-infectious gastroparesis are self-limiting, with recovery within 12 months of initial symptoms, although some cases last well over 2 years. In children, the duration tends to be shorter and the disease course milder than in adolescents and adults.
Diabetic gastropathy
Diabetic gastropathy is usually slowly progressive, and can become severe and lethal.
Prevalence
Post-infectious gastroparesis, which constitutes the majority of idiopathic gastroparesis cases, affects up to 4% of the American population. Women in their 20s and 30s seem to be particularly susceptible. One study of 146 American gastroparesis patients found a mean patient age of 34 years, with 82% of those affected being women, while another study found the patients were young or middle-aged and up to 90% were women.

There has been only one true epidemiological study of idiopathic gastroparesis, completed by the Rochester Epidemiology Project. It looked at patients from 1996 to 2006 who were seeking medical attention, rather than a random population sample, and found that the prevalence of delayed gastric emptying was fourfold higher in women. It is difficult for medical professionals and researchers to collect enough data and provide accurate numbers, since studying gastroparesis requires specialized laboratories and equipment.
References
Further reading
Overview from the NIDDK (National Institute of Diabetes and Digestive and Kidney Diseases) at NIH
Camilleri M, Parkman HP, Shafi MA, Abell TL, Gerson L (January 2013). "Clinical guideline: management of gastroparesis". The American Journal of Gastroenterology. 108 (1): 18–37, quiz 38. doi:10.1038/ajg.2012.373. PMC 3722580. PMID 23147521.
Parkman HP, Fass R, Foxx-Orenstein AE (June 2010). "Treatment of patients with diabetic gastroparesis". Gastroenterology & Hepatology. 6 (6): 1–16. PMC 2920593. PMID 20733935.
== External links ==
Irritant diaper dermatitis | Irritant diaper dermatitis (IDD, also called diaper/nappy rash) is a generic term applied to skin rashes in the diaper/nappy area that are caused by various skin disorders and/or irritants.
Generic irritant diaper/nappy dermatitis is characterized by joined patches of erythema and scaling mainly seen on the convex surfaces, with the skin folds spared.
Diaper/nappy dermatitis with secondary bacterial or fungal involvement tends to spread to concave surfaces (i.e. skin folds), as well as convex surfaces, and often exhibits a central red, beefy erythema with satellite pustules around the border.
It is usually considered a form of irritant contact dermatitis. The word "diaper" is in the name not because the diaper/nappy itself causes the rash, but rather because the rash is associated with diaper use, being caused by the materials trapped by the diaper (usually feces). Allergic contact dermatitis has also been suggested, but there is little evidence for this cause. In adults with incontinence (fecal, urinary, or both), the rash is sometimes called incontinence-associated dermatitis (IAD).

The term diaper candidiasis is used when a fungal origin is identified. The distinction is critical because the treatment (antifungals) is completely different.
Causes
Irritant diaper dermatitis develops when skin is exposed to prolonged wetness, increased skin pH caused by the combination, and subsequent reactions, of urine and feces, and the resulting breakdown of the stratum corneum, the outermost layer of the skin. This may be due to diarrhea, frequent stools, tight diapers, overexposure to ammonia, or allergic reactions. In adults, the stratum corneum is composed of 25 to 30 layers of flattened dead keratinocytes, which are continuously shed and replaced from below. These dead cells are interlaid with lipids secreted by the stratum granulosum just underneath, which help to make this layer of the skin a waterproof barrier. The stratum corneum's function is to reduce water loss, repel water, protect deeper layers of the skin from injury, and repel microbial invasion of the skin. In infants, this layer of the skin is much thinner and more easily disrupted.
Urine
Although wetness alone has the effect of macerating the skin, softening the stratum corneum, and greatly increasing susceptibility to friction injury, urine has an additional impact on skin integrity because of its effect on skin pH. While studies show that ammonia alone is only a mild skin irritant, when urea breaks down in the presence of fecal urease it increases pH because ammonia is released, which in turn promotes the activity of fecal enzymes such as protease and lipase. These fecal enzymes increase the skin's hydration and permeability to bile salts, which also act as skin irritants.
There is no detectable difference in rates of diaper rash between wearers of conventional disposable diapers and wearers of reusable cloth diapers. "Babies wearing superabsorbent disposable diapers with a central gelling material have fewer episodes of diaper dermatitis compared with their counterparts wearing cloth diapers. However, keep in mind that superabsorbent diapers contain dyes that were suspected to cause allergic contact dermatitis (ACD)." Whether cloth or disposable, diapers should be changed frequently to prevent diaper rash, even if they don't feel wet. To reduce the incidence of diaper rash, disposable diapers have been engineered to pull moisture away from the baby's skin using a synthetic, non-biodegradable gel. Today, cloth diapers use newly available superabsorbent microfiber cloth placed in a pocket with a layer of light, permeable material that contacts the skin. This design serves to pull moisture away from the skin into the microfiber cloth. This technology is used in most major pocket cloth diaper brands today.
Diet
The interaction between fecal enzyme activity and IDD explains the observation that infant diet and diaper rash are linked, because fecal enzymes are in turn affected by diet. Breast-fed babies, for example, have a lower incidence of diaper rash, possibly because their stools have higher pH and lower enzymatic activity. Diaper rash is also most likely to be diagnosed in infants 8–12 months old, perhaps in response to an increase in eating solid foods and dietary changes around that age that affect fecal composition. Any time an infant's diet undergoes a significant change (i.e., from breast milk to formula or from milk to solids), there appears to be an increased likelihood of diaper rash.

The link between feces and IDD is also apparent in the observation that infants are more susceptible to developing diaper rash after treatment with antibiotics, which affect the intestinal microflora. Also, there is an increased incidence of diaper rash in infants who have had diarrhea in the previous 48 hours, which may be because fecal enzymes such as lipase and protease are more active in feces that have passed rapidly through the gastrointestinal tract.
Secondary infections
The significance of secondary infection in IDD remains controversial. There seems to be no link between presence or absence of IDD and microbial counts. Although apparently healthy infants sometimes culture positive for Candida and other organisms without exhibiting any symptoms, there does seem to be a positive correlation between the severity of the diaper rash noted and the likelihood of secondary involvement. A wide variety of infections has been reported, including Staphylococcus aureus, Streptococcus pyogenes, Proteus mirabilis, enterococci and Pseudomonas aeruginosa, but it appears that Candida is the most common opportunistic invader in diaper areas.
Diagnosis
The diagnosis of IDD is made clinically, by observing the limitation of an erythematous eruption to the convex surfaces of the genital area and buttocks. If the diaper dermatitis persists for more than 3 days, it may be colonized with Candida albicans, giving it the beefy red, sharply marginated appearance of diaper candidiasis.
Differential diagnosis
Other rashes that occur in the diaper area include seborrheic dermatitis and atopic dermatitis. Both seborrheic and atopic dermatitis require individualized treatment; they are not the subject of this article.
Seborrheic dermatitis, typified by oily, thick yellowish scales, is most commonly seen on the scalp (cradle cap) but can also appear in the inguinal folds.
Atopic dermatitis, or eczema, is associated with allergic reaction, often hereditary. This class of rashes may appear anywhere on the body and is characterized by intense itchiness.
Treatments
Possible treatments include minimizing diaper use, barrier creams, mild topical cortisones, and antifungal agents. A variety of other inflammatory and infectious processes can occur in the diaper area, and an awareness of these secondary types of diaper dermatitis aids in the accurate diagnosis and treatment of patients.

Overall, there is sparse evidence of sufficient quality to be certain of the effectiveness of the various treatments. Washcloths with cleansing, moisturising and protective properties may be better than soap and water, and skin cleansers may also be better than soap and water, but the certainty of evidence with regard to other treatments is very low.
Diaper changing
The most effective treatment, although not the most practical one, is to discontinue use of diapers, allowing the affected skin to air out. Another option is simply to increase the frequency of diaper changing. Thorough drying of the skin before diapering is a good preventive measure because it is the excess moisture, either from urine and feces or from sweating, that sets the conditions for a diaper rash to occur.
Diaper type
Some sources claim that diaper rash is more common with cloth diapers. Others claim the material of the diaper is relevant insofar as it can wick and keep moisture away from the baby's skin and prevent secondary Candida infection. However, there may not be enough data from good-quality, randomized controlled trials to support or refute disposable diaper use thus far. Furthermore, the effect of non-biodegradable diapers on the environment is a concerning matter for public policy.
Creams, ointments
Another approach is to block moisture from reaching the skin, and commonly recommended remedies using this approach include oil-based protectants or barrier creams, various over-the-counter "diaper creams", petroleum jelly, dimethicone, and other oils. Such sealants sometimes accomplish the opposite if the skin is not thoroughly dry; in that case, they seal the moisture inside the skin rather than keeping it out.
Zinc oxide-based ointments such as Pinxav can be quite effective, especially in prevention, because they have both a drying and an astringent effect on the skin, being mildly antiseptic without causing irritation.

A 2005 meta-analysis found no evidence to support the use of topical vitamin A to treat the condition.
Dangers of using powders
Various moisture-absorbing powders, such as talcum or starch, reduce moisture but may introduce other complications. Airborne powders of any sort can irritate lung tissue, and powders made from starchy plants (corn, arrowroot) provide food for fungi and are not recommended by the American Academy of Dermatology.
Antifungals
In persistent or especially bad rashes, an antifungal cream often has to be used. In cases where the rash is more of an irritation, a mild topical corticosteroid preparation, e.g. hydrocortisone cream, is used. As it is often difficult to tell a fungal infection apart from a mere skin irritation, many physicians prefer a corticosteroid-and-antifungal combination cream such as hydrocortisone/miconazole.
References
== External links ==
Digoxin toxicity | Digoxin toxicity, also known as digoxin poisoning, is a type of poisoning that occurs in people who take too much of the medication digoxin or eat plants such as foxglove that contain a similar substance. Symptoms are typically vague. They may include vomiting, loss of appetite, confusion, blurred vision, changes in color perception, and decreased energy. Potential complications include an irregular heartbeat, which can be either too fast or too slow.

Toxicity may occur over a short period of time following an overdose, or gradually during long-term treatment. Risk factors include low potassium, low magnesium, and high calcium. Digoxin is a medication used for heart failure or atrial fibrillation. An electrocardiogram is a routine part of diagnosis. Blood levels are only useful more than six hours following the last dose.

Activated charcoal may be used if it can be given within two hours of the person taking the medication. Atropine may be used if the heart rate is slow, while magnesium sulfate may be used in those with premature ventricular contractions. Treatment of severe toxicity is with digoxin-specific antibody fragments. Their use is recommended in those who have a serious dysrhythmia, are in cardiac arrest, or have a potassium of greater than 5 mmol/L. Low blood potassium or magnesium should also be corrected. Toxicity may recur within a few days after treatment.

In Australia in 2012 there were about 140 documented cases. This is a decrease by half since 1994, as a result of decreased usage of digoxin. In the United States, 2500 cases were reported in 2011, which resulted in 27 deaths. The condition was first described in 1785 by William Withering.
Signs and symptoms
Digoxin toxicity is often divided into acute and chronic toxicity. In both forms, cardiac effects are of the greatest concern. With an acute ingestion, symptoms such as nausea, vertigo, and vomiting are prominent. In chronic toxicity, on the other hand, nonspecific symptoms such as fatigue, malaise, and visual disturbances predominate.

The classic features of digoxin toxicity are nausea, vomiting, abdominal pain, headache, dizziness, confusion, delirium, and vision disturbance (blurred or yellow vision). It is also associated with cardiac disturbances including irregular heartbeat, ventricular tachycardia, ventricular fibrillation, sinoatrial block, and AV block.
Diagnosis
In individuals with suspected digoxin toxicity, a serum digoxin concentration, serum potassium concentration, creatinine, BUN, and serial electrocardiograms are obtained.
ECG
In digoxin toxicity, the finding of frequent premature ventricular beats (PVCs) is the most common and the earliest dysrhythmia. Sinus bradycardia is also very common. In addition, depressed conduction is a predominant feature of digoxin toxicity. Other ECG changes that suggest digoxin toxicity include bigeminal and trigeminal rhythms, ventricular bigeminy, and bidirectional ventricular tachycardia.
Blood test
The therapeutic level of digoxin is typically 0.5–2 ng/mL. Since digoxin has a narrow therapeutic index, overdose can easily occur. A serum digoxin concentration of 0.5–0.9 ng/mL among those with heart failure is associated with reduced heart failure deaths and hospitalizations; it is therefore recommended that the digoxin concentration be maintained in approximately this range when the drug is used in heart failure patients.
A high level of the electrolyte potassium (K+) in the blood (hyperkalemia) is characteristic of digoxin toxicity. The risk of toxicity increases in individuals with kidney impairment, and toxicity is most often seen in the elderly and in those with chronic kidney disease or end-stage kidney disease.
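The numeric anchors above can be combined into a simple triage sketch: the 0.5–2 ng/mL therapeutic range, the 0.5–0.9 ng/mL heart-failure target, the rule that levels are only interpretable more than six hours after the last dose, and the 5 mmol/L potassium threshold at which antibody fragments are recommended. This is a minimal illustration of those published cutoffs, not a clinical tool, and the function name is illustrative.

```python
def assess_digoxin(level_ng_ml: float, potassium_mmol_l: float,
                   hours_since_last_dose: float) -> str:
    """Rough triage of a serum digoxin level using cutoffs stated in the text."""
    if hours_since_last_dose <= 6:
        return "level not interpretable: drawn within 6 h of the last dose"
    if potassium_mmol_l > 5.0:
        return "potassium > 5 mmol/L: digoxin immune fab recommended"
    if level_ng_ml > 2.0:
        return "above the usual therapeutic range: assess for toxicity"
    if level_ng_ml < 0.5:
        return "below the usual therapeutic range"
    if level_ng_ml <= 0.9:
        return "within the heart-failure target range (0.5-0.9 ng/mL)"
    return "within the usual therapeutic range (0.5-2 ng/mL)"


print(assess_digoxin(2.6, 5.4, 8.0))  # the potassium criterion takes priority here
```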
Treatment
The primary treatment of digoxin toxicity is digoxin immune fab, an antidote made up of anti-digoxin immunoglobulin fragments. It has been shown to be highly effective in treating life-threatening signs of digoxin toxicity such as hyperkalemia, hemodynamic instability, and arrhythmias. The Fab dose can be determined by two different methods: the first is based on the amount of digoxin ingested, whereas the second is based on the serum digoxin concentration and the weight of the person.

Other treatments that may be used to treat life-threatening arrhythmias until Fab is acquired are magnesium, phenytoin, and lidocaine. Magnesium suppresses digoxin-induced ventricular arrhythmias, while phenytoin and lidocaine suppress digoxin-induced ventricular automaticity and delayed afterdepolarizations without depressing AV conduction. In the case of an abnormally slow heart rate (bradyarrhythmia), atropine, catecholamines (isoprenaline or salbutamol), and/or temporary cardiac pacing can be used.
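The article does not give the formulas behind the two Fab dosing methods; the sketch below uses the commonly cited versions, which rest on two assumed parameters: each Fab vial binds roughly 0.5 mg of digoxin, and oral bioavailability is roughly 80%. Treat this as an illustration of the two methods rather than product labeling.

```python
import math

VIAL_BINDING_MG = 0.5       # assumed: digoxin bound per Fab vial (mg)
ORAL_BIOAVAILABILITY = 0.8  # assumed fraction of an ingested dose absorbed
VOLUME_FACTOR = 100.0       # assumed constant in the concentration-based formula


def fab_vials_from_ingested(ingested_mg: float) -> int:
    """Method 1: dose from the amount of digoxin ingested."""
    body_load_mg = ingested_mg * ORAL_BIOAVAILABILITY
    return math.ceil(body_load_mg / VIAL_BINDING_MG)


def fab_vials_from_level(serum_ng_ml: float, weight_kg: float) -> int:
    """Method 2: dose from the serum digoxin concentration and body weight."""
    return math.ceil(serum_ng_ml * weight_kg / VOLUME_FACTOR)


print(fab_vials_from_ingested(2.5))     # 2.5 mg ingested -> 4 vials
print(fab_vials_from_level(8.0, 70.0))  # 8 ng/mL in a 70 kg adult -> 6 vials
```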
References
== External links ==
Discoid lupus erythematosus | Discoid lupus erythematosus is the most common type of chronic cutaneous lupus (CCLE), an autoimmune skin condition on the lupus erythematosus spectrum of illnesses. It presents with red, painful, inflamed, coin-shaped patches of skin with a scaly and crusty appearance, most often on the scalp, cheeks, and ears. Hair loss may occur if the lesions are on the scalp. The lesions can then develop severe scarring, and the centre areas may appear lighter in color with a rim darker than the normal skin. These lesions can last for years without treatment.

Patients with systemic lupus erythematosus develop discoid lupus lesions with some frequency. However, patients who present initially with discoid lupus infrequently develop systemic lupus. Discoid lupus can be divided into localized, generalized, and childhood discoid lupus.

The lesions are diagnosed by biopsy. Patients are first treated with sunscreen and topical steroids. If this does not work, an oral medication—most likely hydroxychloroquine or a related medication—can be tried.
Signs and symptoms
Morphology of lesions
Discoid lupus erythematosus (DLE) skin lesions first present as dull or purplish red, disc-shaped flat or raised and firm areas of skin. These lesions then develop increasing amounts of white, adherent scale. Finally, the lesions develop extensive scarring and/or atrophy, as well as pigment changes. They may also have overlying dried fluid, known as crust. On darker skin, the lesions often lose skin pigmentation in the center and develop increased, dark skin pigmentation around the rim. On lighter skin, the lesions often develop a gray color or have very little color change. More rarely, the lesions may be bright red and look like hives.
Location of lesions
The skin lesions are most often in sun-exposed areas localized above the neck, with favored sites being the scalp, bridge of the nose, upper cheeks, lower lip, ears, and hands. 24% of patients also have lesions in the mouth (most often the palate), nose, eye, or vulva, which are all mucosal parts of the body. More rarely, patients may have lesions on the arms and trunk in addition to the head and neck.
Special characteristics of some lesions
Scalp lesions
When discoid lupus is on the scalp, it starts as a red flat or raised area of skin that then loses hair and develops extensive scarring. The lesions often lose skin pigment and become white with areas of increased skin pigment, with or without areas of redness, and have a sunken appearance. They can have a smooth surface or have visible, dilated hair follicles on the surface.
Lip lesions
When discoid lupus is on the lip, it often has a grey or red colour with a thickened top layer of skin (known as hyperkeratosis), areas where the top layer has worn away (known as erosion), and a surrounding rim of redness.
Other symptoms
Patients may state that their lesions are itchy, tender, or asymptomatic. In addition to their skin lesions, they may also have swelling and redness around their eyes, as well as blepharitis.
Complications
Darker-skinned patients are often left with severe scarring and skin color changes even after the lesions get better. In addition, these patients have an increased, though still small, risk for aggressive skin squamous cell carcinoma.
Causes
Sun exposure triggers lesions in people with discoid lupus erythematous (DLE). Evidence does not clearly demonstrate a genetic component to DLE; however, genetics may predispose certain people to disease.
Mechanism
Most experts consider DLE an autoimmune disease, since pathologists see antibodies when they biopsy the lesions and look at the tissue under the microscope. However, scientists do not understand the connection between these antibodies and the lesions seen in discoid lupus.

Possibly, UV light damages skin cells, which then release material from their nuclei. This material diffuses to the dermoepidermal junction, where it binds to circulating antibodies, thereby leading to a series of inflammatory reactions by the immune system.

Alternatively, dysfunctional T cells may lead to the disease.
Diagnosis
When a patient initially presents with discoid lupus, the doctor should ensure that the patient does not have systemic lupus erythematosus. The doctor will order tests to check for anti-nuclear antibodies in the patient's serum, low white blood cell levels, and protein and/or blood in the urine.

In order to help with diagnosis, the doctor may peel off the top layer of scale from a patient's lesions in order to look at its underside. If the patient does indeed have discoid lupus, the doctor may see tiny spines of keratin that look like carpet tacks, called langue au chat.

Diagnosis is confirmed through biopsy. Typical biopsy findings include deposits of IgG and IgM antibodies at the dermoepidermal junction on direct immunofluorescence. This finding is 90% sensitive; however, false positives can occur with biopsies of facial lesions. In addition, pathologists often see groups of white blood cells, particularly T helper cells, around the follicles and blood vessels in the dermis. The epidermis appears thin and has effaced rete ridges as well as excess amounts of keratin clogging the openings of the follicles. The basal layer of the epidermis sometimes appears to have holes in it, since some of the cells in this layer have broken apart. The remains of skin cells that have died through a process called apoptosis are visible in the upper layer of the dermis and the basal layer of the epidermis.

The differential diagnosis includes actinic keratoses, seborrheic dermatitis, lupus vulgaris, sarcoidosis, drug rash, Bowen's disease, lichen planus, tertiary syphilis, polymorphous light eruption, lymphocytic infiltration, psoriasis, and systemic lupus erythematosus.
Classification
Discoid lupus can be broadly classified into localized discoid lupus and generalized discoid lupus, based on the location of the lesions. Patients who develop discoid lupus in childhood also have their own sub-type of disease.

Hypertrophic lupus and lupus profundus are two special types of discoid lupus distinguished by their characteristic morphological findings.

Finally, many patients with systemic lupus also develop discoid lupus lesions.
Localized
Most people with discoid lupus only have lesions above the neck and therefore have localized discoid lupus erythematosus.
Generalized
Rarely, patients may have lesions above and below the neck; these patients have generalized discoid lupus erythematosus. In addition to lesions in the typical above-the-neck locations, patients with generalized discoid lupus often have lesions on the thorax and the arms. These patients are often bald, with abnormal skin pigment on their scalp, and have severe scarring of the face and arms. Patients with generalized discoid lupus often have abnormal lab tests, such as an elevated ESR or a low white blood cell count. They also often have auto-antibodies, such as ANA or anti-ssDNA antibody.
Childhood
When patients develop discoid lupus in childhood, it differs from typical discoid lupus in several ways. Boys and girls are equally affected, and these patients later develop SLE more often. These patients also typically do not have any abnormal sensitivity to the sun.
Special types of discoid lupus lesions
Hypertrophic lupus
Some experts consider hypertrophic lupus erythematosus—which consists of lesions covered by a very thick, keratin-filled scale—an unusual subset of discoid lupus. Others consider it a distinct entity.
Lupus profundus
If a patient has discoid lupus lesions on top of lupus panniculitis, they have lupus profundus. These patients have firm, nontender nodules with defined borders underneath their discoid lupus lesions.
Systemic lupus erythematosus with discoid lupus lesions
In general, patients with discoid lupus who have only skin disease and no systemic symptoms have a genetically distinct disease from patients with SLE. However, 25% of patients with SLE get discoid lupus lesions at some point as part of their disease.
Treatment
Treatment for discoid lupus erythematosus includes smoking cessation and a sunscreen that protects against both UVA and UVB light, as well as very strong topical steroids or steroids injected into the lesions. Other topical treatments, tacrolimus or pimecrolimus, can also be used. If this does not help the patient, his or her physician can prescribe an antimalarial medication such as oral hydroxychloroquine or chloroquine. Other oral medications used to treat discoid lupus include retinoids (isotretinoin or acitretin), dapsone, thalidomide (teratogenic; side effects include peripheral neuropathy), azathioprine, methotrexate, or gold. The topical steroid fluocinonide is more effective than hydrocortisone in the treatment of discoid lupus erythematosus. For oral treatment, hydroxychloroquine and acitretin are equally effective; however, acitretin is associated with more adverse effects.

Pulsed dye laser is also an effective treatment for patients with localized discoid lupus. For patients with scalp disease, hair transplantation can help with their hair loss.
Prognosis
Discoid lupus erythematosus is a chronic condition, and lesions will last for several years without treatment. 50% of patients will eventually get better on their own. If a patient does not have any signs of systemic lupus erythematosus, such as generalized hair loss, ulcers in the mouth or nose, Raynauds phenomenon, arthritis, or fever at the time that they develop discoid lupus, they will most likely only have discoid lupus and will never develop systemic lupus erythematosus.
Epidemiology
Discoid lupus has an unknown incidence, although it is two to three times more common than systemic lupus erythematosus. The disease tends to affect young adults, and women are affected more than men in a 2:1 ratio.
Society and culture
The musician Seal has this skin condition. Singer Michael Jackson was reportedly diagnosed with discoid lupus in 1984; the condition might have damaged his nasal cartilage and led to some of his cosmetic surgery.
In animals
Dogs and horses can also get discoid lupus.
See also
List of cutaneous conditions associated with increased risk of nonmelanoma skin cancer
List of people with lupus
References
== External links ==
Dry eye syndrome | Dry eye syndrome (DES), also known as keratoconjunctivitis sicca (KCS), is the condition of having dry eyes. Other associated symptoms include irritation, redness, discharge, and easily fatigued eyes. Blurred vision may also occur. Symptoms range from mild and occasional to severe and continuous. Scarring of the cornea may occur in untreated cases.

Dry eye occurs when either the eye does not produce enough tears or when the tears evaporate too quickly. This can result from contact lens use, meibomian gland dysfunction, pregnancy, Sjögren syndrome, vitamin A deficiency, omega-3 fatty acid deficiency, LASIK surgery, and certain medications such as antihistamines, some blood pressure medication, hormone replacement therapy, and antidepressants. Chronic conjunctivitis, such as from tobacco smoke exposure or infection, may also lead to the condition. Diagnosis is mostly based on the symptoms, though a number of other tests may be used.

Treatment depends on the underlying cause. Artificial tears are usually the first line of treatment. Wrap-around glasses that fit close to the face may decrease tear evaporation. Stopping or changing certain medications may help. The medication ciclosporin or steroid eye drops may be used in some cases. Another option is lacrimal plugs, which prevent tears from draining from the surface of the eye. Dry eye syndrome occasionally makes wearing contact lenses impossible.

Dry eye syndrome is a common eye disease. It affects 5–34% of people to some degree, depending on the population looked at. Among older people it affects up to 70%. In China it affects about 17% of people. The phrase "keratoconjunctivitis sicca" means "dryness of the cornea and conjunctiva" in Latin.
Signs and symptoms
Typical symptoms of dry eye syndrome are dryness, burning, and a sandy-gritty eye irritation that gets worse as the day goes on. Symptoms may also be described as itchy, stinging, or tired eyes. Other symptoms are pain, redness, a pulling sensation, and pressure behind the eye. There may be a feeling that something, such as a speck of dirt, is in the eye. The resultant damage to the eye's surface increases discomfort and sensitivity to bright light. Both eyes usually are affected.

There may also be a stringy discharge from the eyes. Although it may seem contradictory, dry eye can cause the eyes to water due to irritation. One may experience excessive tearing, as if something had gotten into the eye. These reflex tears will not necessarily make the eyes feel better, since they are the watery tears produced in response to injury, irritation, or emotion, which lack the lubricating qualities necessary to prevent dry eye.

Because blinking coats the eye with tears, symptoms are worsened by activities in which the rate of blinking is reduced due to prolonged use of the eyes. These activities include prolonged reading, computer usage (computer vision syndrome), driving, or watching television. Symptoms increase in windy, dusty, or smoky (including cigarette smoke) areas, in dry environments, at high altitudes (including airplanes), on days with low humidity, and in areas where an air conditioner (especially in a car), fan, heater, or even a hair dryer is being used. Symptoms reduce during cool, rainy, or foggy weather and in humid places, such as in the shower.

Most people who have dry eyes experience mild irritation with no long-term effects. However, if the condition is left untreated or becomes severe, it can produce complications that can cause eye damage, resulting in impaired vision or (rarely) the loss of vision.

Symptom assessment is a key component of dry eye diagnosis – to the extent that many believe dry eye syndrome to be a symptom-based disease. Several questionnaires have been developed to determine a score that would allow for a diagnosis. The McMonnies & Ho dry eye questionnaire is often used in clinical studies of dry eyes.
Causes
Any abnormality of any one of the three layers of tears produces an unstable tear film, resulting in symptoms of dry eyes.
Increased evaporation
The most common cause of dry eye is increased evaporation of the tear film, typically as a result of meibomian gland dysfunction (MGD). The meibomian glands are two sets of oil glands that line the upper and lower eyelids and secrete the oily outer layer of the tear film—the lipid layer. These glands often become clogged due to inflammation caused by blepharitis and/or rosacea, preventing an even distribution of oil. The result is an unstable lipid layer that leads to increased evaporation of the tear film. In severe cases of MGD, the meibomian glands can atrophy and cease producing oil entirely.
Low humidity
Low humidity may cause dry eye syndrome.
Decreased tear production
Keratoconjunctivitis sicca can be caused by inadequate tear production from lacrimal hyposecretion. The aqueous tear layer is affected, resulting in aqueous tear deficiency (ATD). The lacrimal gland does not produce sufficient tears to keep the entire conjunctiva and cornea covered by a complete layer. This usually occurs in people who are otherwise healthy. Increased age is associated with decreased tearing. This is the most common type found in postmenopausal women.

In many cases, aqueous deficient dry eye may have no apparent cause (idiopathic). Other causes include congenital alacrima, xerophthalmia, lacrimal gland ablation, and sensory denervation. In rare cases, it may be a symptom of collagen vascular diseases, including relapsing polychondritis, rheumatoid arthritis, granulomatosis with polyangiitis, and systemic lupus erythematosus. Sjögren syndrome and other autoimmune diseases are associated with aqueous tear deficiency. Drugs such as isotretinoin, sedatives, diuretics, tricyclic antidepressants, antihypertensives, oral contraceptives, antihistamines, nasal decongestants, beta-blockers, phenothiazines, atropine, and pain relieving opiates such as morphine can cause or worsen this condition. Infiltration of the lacrimal glands by sarcoidosis or tumors, or postradiation fibrosis of the lacrimal glands, can also cause this condition.

Recent attention has been paid to the composition of tears in normal or dry eye individuals. Only a small fraction of the estimated 1543 proteins in tears are differentially deficient or upregulated in dry eye, one of which is lacritin. Topical lacritin promotes tearing in rabbit preclinical studies. Also, topical treatment of the eyes of dry eye mice (the Aire knockout mouse model of dry eye) restored tearing, and suppressed both corneal staining and the size of inflammatory foci in lacrimal glands.
Additional causes
Aging is one of the most common causes of dry eyes, because tear production decreases with age. Several classes of medications (both prescription and OTC) have been hypothesized as a major cause of dry eye, especially in the elderly. In particular, anticholinergic medications that also cause dry mouth are believed to promote dry eye. Dry eye may also be caused by thermal or chemical burns, or (in epidemic cases) by adenoviruses. A number of studies have found that diabetics are at increased risk for the disease.

About half of all people who wear contact lenses complain of dry eyes. There are two potential connections between contact lens usage and dry eye. Traditionally, it was believed that soft contact lenses, which float on the tear film that covers the cornea, absorb the tears in the eyes. The connection between a loss in nerve sensitivity and tear production is also the subject of current research.

Dry eye also occurs or becomes worse after LASIK and other refractive surgeries, in which the corneal nerves that stimulate tear secretion are cut during the creation of a corneal flap. Dry eye caused by these procedures usually resolves after several months, but it can be permanent. Persons who are thinking about refractive surgery should consider this.

An eye injury or other problem with the eyes or eyelids, such as bulging eyes or a drooping eyelid, can cause keratoconjunctivitis sicca. Disorders of the eyelid can impair the complex blinking motion required to spread tears. Abnormalities of the mucin tear layer caused by vitamin A deficiency, trachoma, diphtheric keratoconjunctivitis, mucocutaneous disorders, and certain topical medications are also causes of keratoconjunctivitis sicca.

Persons with keratoconjunctivitis sicca have elevated levels of tear nerve growth factor (NGF). It is possible that this NGF on the eye's surface plays an important role in the ocular surface inflammation associated with dry eyes.
Pathophysiology
Having dry eyes for a while can lead to tiny abrasions on the surface of the eyes. In advanced cases, the epithelium undergoes pathologic changes, namely squamous metaplasia and loss of goblet cells. Some severe cases result in thickening of the corneal surface, corneal erosion, punctate keratopathy, epithelial defects, corneal ulceration (sterile and infected), corneal neovascularization, corneal scarring, corneal thinning, and even corneal perforation.

Another contributing factor may be lacritin monomer deficiency. Lacritin monomer, the active form of lacritin, is selectively decreased in aqueous deficient dry eye, Sjögren syndrome dry eye, contact lens-related dry eye, and blepharitis.
Diagnosis
Some tests allow patients to be classified into one of two categories, "aqueous-deficient" or "hyperevaporative". Diagnostic guidelines were published in 2007 by the Dry Eye Workshop. A slit lamp examination can be performed to diagnose dry eyes and to document any damage to the eye; in this test, the practitioner examines the eyelid margin.

A Schirmer test can measure the amount of moisture bathing the eye. This test is useful for determining the severity of the condition. A five-minute Schirmer test, with or without anesthesia, uses a Whatman #41 filter paper 5 mm wide by 35 mm long. For this test, wetting under 5 mm, with or without anesthesia, is considered diagnostic for dry eyes.

If the results of the Schirmer test are abnormal, a Schirmer II test can be performed to measure reflex secretion. In this test, the nasal mucosa is irritated with a cotton-tipped applicator, after which tear production is measured with a Whatman #41 filter paper. For this test, wetting under 15 mm after five minutes is considered abnormal.

A tear breakup time (TBUT) test measures the time it takes for tears to break up in the eye. The tear breakup time can be determined after placing a drop of fluorescein in the cul-de-sac.

A tear protein analysis test measures the lysozyme contained within tears. In tears, lysozyme accounts for approximately 20 to 40 percent of total protein content. A lactoferrin analysis test provides good correlation with other tests.

The presence of the recently described molecule Ap4A, naturally occurring in tears, is abnormally high in different states of ocular dryness. This molecule can be quantified biochemically simply by taking a tear sample with a plain Schirmer test. Utilizing this technique it is possible to determine the concentration of Ap4A in the tears of patients and thereby diagnose objectively whether the samples are indicative of dry eye.

The tear osmolarity test has been proposed as a test for dry eye disease. Tear osmolarity may be a more sensitive method of diagnosing and grading the severity of dry eye compared to corneal and conjunctival staining, tear break-up time, the Schirmer test, and meibomian gland grading. Others have recently questioned the utility of tear osmolarity in monitoring dry eye treatment.
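As a compact restatement of the two Schirmer cutoffs described above, the sketch below encodes them directly: under 5 mm of wetting in five minutes on the basic test is considered diagnostic, and under 15 mm after nasal stimulation on the Schirmer II test is considered abnormal. Function names are illustrative, and this is a summary of the stated thresholds rather than a diagnostic tool.

```python
def schirmer_i_dry_eye(wetting_mm: float) -> bool:
    """Five-minute Schirmer test (with or without anesthesia):
    wetting under 5 mm is considered diagnostic for dry eyes."""
    return wetting_mm < 5.0


def schirmer_ii_abnormal(wetting_mm: float) -> bool:
    """Schirmer II test of reflex secretion after nasal stimulation:
    wetting under 15 mm after five minutes is considered abnormal."""
    return wetting_mm < 15.0


print(schirmer_i_dry_eye(3.0))     # True  -> diagnostic for dry eye
print(schirmer_ii_abnormal(20.0))  # False -> reflex secretion intact
```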
Prevention
Avoiding refractive surgery (LASIK and PRK), limiting contact lens use, limiting computer screen use, and avoiding aggravating environmental conditions can decrease symptoms. Complications can be prevented by use of wetting and lubricating drops and ointments.
Treatment
A variety of approaches can be taken to treatment. These can be summarised as: avoidance of exacerbating factors, tear stimulation and supplementation, increasing tear retention, and eyelid cleansing and treatment of eye inflammation.

Dry eyes can be exacerbated by smoky environments, dust and air conditioning, and by our natural tendency to reduce our blink rate when concentrating. Purposefully blinking, especially during computer use, and resting tired eyes are basic steps that can be taken to minimise discomfort. Rubbing one's eyes can irritate them further, so should be avoided. Conditions such as blepharitis can often co-exist, and paying particular attention to cleaning the eyelids morning and night with mild soaps and warm compresses can improve both conditions.
Environmental control
Dry, drafty environments and those with smoke and dust should be avoided. This includes avoiding hair dryers, heaters, air conditioners or fans, especially when these devices are directed toward the eyes. Wearing glasses or directing gaze downward, for example, by lowering computer screens can be helpful to protect the eyes when aggravating environmental factors cannot be avoided. Using a humidifier, especially in the winter, can help by adding moisture to the dry indoor air.
Rehydration
For mild and moderate cases, supplemental lubrication is the most important part of treatment. Application of artificial tears every few hours can provide temporary relief. Additional research is necessary to determine whether certain artificial tear formulations are superior to others in treating dry eye.
Autologous serum eye drops
A 2017 Cochrane review found mixed results when comparing autologous serum eye drops to artificial tears or saline. Evidence from the examined trials showed that autologous serum eye drops may have a small short-term benefit when compared to artificial tears, but there is no evidence of improvement after 2 weeks.
Additional options
Lubricating tear ointments can be used during the day, but they generally are used at bedtime due to poor vision after application. They contain white petrolatum, mineral oil, and similar lubricants, and serve as a lubricant and an emollient. Application requires pulling down the lower eyelid and applying a small amount (0.25 in) inside. Depending on the severity of the condition, the ointment may be applied from every hour to just at bedtime. These ointments should never be used with contact lenses. Specially designed glasses that form a moisture chamber around the eye may be used to create additional humidity.
Medication
Inflammation occurring in response to tear film hypertonicity can be suppressed by mild topical steroids or by topical immunosuppressants such as ciclosporin (Restasis). Elevated levels of tear NGF can be decreased with 0.1% prednisolone.

Diquafosol, an agonist of the P2Y2 purinergic receptor, is approved in Japan for managing dry eye disease by promoting secretion of fluid and mucin from cells in the conjunctiva, rather than by directly stimulating the lacrimal glands.

Lifitegrast was approved by the US FDA for the treatment of the condition in 2016. Varenicline (Tyrvaya by Oyster Point Pharma) was approved by the US FDA for the treatment of dry eye disease in October 2021.
Ciclosporin
Topical ciclosporin (topical ciclosporin A, tCSA) 0.05% ophthalmic emulsion is an immunosuppressant. The drug decreases surface inflammation. In a trial involving 1200 people, Restasis increased tear production in 15% of people, compared to 5% with placebo.

It should not be used while wearing contact lenses, during eye infections, or in people with a history of herpes virus infections. Side effects include burning sensation (common), redness, discharge, watery eyes, eye pain, foreign body sensation, itching, stinging, and blurred vision. Long-term use of ciclosporin at high doses is associated with an increased risk of cancer. Cheaper generic alternatives are available in some countries.
Conserving tears
There are methods that allow both natural and artificial tears to stay longer. In each eye, there are two puncta – little openings that drain tears into the tear ducts. There are methods to partially or completely close the tear ducts. This blocks the flow of tears into the nose, and thus more tears are available to the eyes. Drainage into either one or both puncta in each eye can be blocked.
Punctal plugs are inserted into the puncta to block tear drainage. It is not clear if punctal plugs are effective at reducing dry eye syndrome symptoms. Punctal plugs are thought to be "relatively safe"; however, their use may result in epiphora (watery eyes) and, more rarely, serious infection and swelling of the tear sac where the tears drain. They are reserved for people with moderate or severe dry eye when other medical treatment has not been adequate. If punctal plugs are effective, thermal or electric cauterization of the puncta can be performed. In thermal cauterization, a local anesthetic is used, and then a hot wire is applied. This shrinks the drainage area tissues and causes scarring, which closes the tear duct.
Other
Heating systems that try to unblock the oil glands in the eye have some preliminary evidence of benefit. Fish oil (omega-3) supplements are not effective in relieving symptoms.
Surgery
In severe cases of dry eyes, tarsorrhaphy may be performed where the eyelids are partially sewn together. This reduces the palpebral fissure (eyelid separation), ideally leading to a reduction in tear evaporation.
Prognosis
Keratoconjunctivitis sicca usually is a chronic problem. Its prognosis shows considerable variance, depending upon the severity of the condition. Most people have mild-to-moderate cases and can be treated symptomatically with lubricants, which provide adequate relief of symptoms.

When dry eye symptoms are severe, they can interfere with quality of life. People sometimes feel their vision blurs with use, or experience irritation so severe that they have trouble keeping their eyes open, or they may not be able to work or drive.
Epidemiology
Keratoconjunctivitis sicca is relatively common within the United States, especially so in older patients. Specifically, the persons most likely to be affected by dry eyes are those aged 40 or older. 10–20% of adults experience keratoconjunctivitis sicca. Approximately 1 to 4 million adults (age 65–84) in the USA are affected.

While persons with autoimmune diseases have a high likelihood of having dry eyes, most persons with dry eyes do not have an autoimmune disease. Instances of Sjögren syndrome and the keratoconjunctivitis sicca associated with it are present much more commonly in women, with a ratio of 9:1. In addition, milder forms of keratoconjunctivitis sicca also are more common in women. This is partly because hormonal changes, such as those that occur in pregnancy, menstruation, and menopause, can decrease tear production.

In areas of the world where malnutrition is common, vitamin A deficiency is a common cause. This is rare in the United States. Racial predilections do not exist for this disease.
Synonyms
Other names for dry eye include dry eye syndrome, keratoconjunctivitis sicca (KCS), dysfunctional tear syndrome, lacrimal keratoconjunctivitis, evaporative tear deficiency, aqueous tear deficiency, and LASIK-induced neurotrophic epitheliopathy (LNE).
Other animals
Among other animals, dry eye can occur in dogs, cats, and horses.
Dogs
Keratoconjunctivitis sicca is common in dogs. Most cases are caused by a genetic predisposition, but chronic conjunctivitis, canine distemper, and drugs such as sulfasalazine and trimethoprim-sulfonamide also cause the disease. Symptoms include eye redness, a yellow or greenish discharge, corneal ulceration, pigmented cornea, and blood vessels on the cornea. Diagnosis is made by measuring tear production with a Schirmer tear test: less than 15 mm of wetting by tears produced in a minute is abnormal.

Tear replacers are a mainstay of treatment, preferably containing methylcellulose or carboxymethyl cellulose. Ciclosporin stimulates tear production and acts as a suppressant on the immune-mediated processes that cause the disease. Topical antibiotics and corticosteroids are sometimes used to treat secondary infections and inflammation. A surgery known as parotid duct transposition is used in some extreme cases where medical treatment has not helped: it redirects the duct from the parotid salivary gland to the eye, so that saliva replaces the tears. Dogs with cherry eye should have the condition corrected to help prevent this disease.

Breeds with a higher risk of dry eye compared to other breeds include:
American Cocker Spaniel
Bloodhound
Boston Terrier
English Bulldog
Cavalier King Charles Spaniel
Lhasa Apso
Miniature Schnauzer
Pekingese
Pug
Samoyed
Shih Tzu
West Highland White Terrier
Cats
Keratoconjunctivitis sicca is uncommon in cats. Most cases seem to be caused by chronic conjunctivitis, especially secondary to feline herpesvirus. Diagnosis, symptoms, and treatment are similar to those for dogs.
See also
Conjunctivochalasis
References
Further reading
External links
Facts About the Cornea and Corneal Disease The National Eye Institute (NEI).
Dry Eye Syndrome on NHS Choices
Am.J.Managed Care - Dry Eye Disease: Pathophysiology, Classification, and Diagnosis
Dry Eye Syndrome on eMedicine
Nasolacrimal and Lacrimal Apparatus, The Merck Veterinary Manual
Duchenne muscular dystrophy | Duchenne muscular dystrophy (DMD) is a severe type of muscular dystrophy that primarily affects boys. Muscle weakness usually begins around the age of four and worsens quickly. Muscle loss typically occurs first in the thighs and pelvis, followed by the arms. This can result in trouble standing up. Most are unable to walk by the age of 12. Affected muscles may look larger due to increased fat content. Scoliosis is also common. Some may have intellectual disability. Females with a single copy of the defective gene may show mild symptoms.

The disorder is X-linked recessive. About two thirds of cases are inherited from a person's mother, while one third of cases are due to a new mutation. It is caused by a mutation in the gene for the protein dystrophin, which is important for maintaining the muscle fiber's cell membrane. Genetic testing can often make the diagnosis at birth. Those affected also have a high level of creatine kinase in their blood.

Although there is no known cure, physical therapy, braces, and corrective surgery may help with some symptoms. Assisted ventilation may be required in those with weakness of the breathing muscles. Medications used include steroids to slow muscle degeneration, anticonvulsants to control seizures and some muscle activity, and immunosuppressants to delay damage to dying muscle cells. Gene therapy, as a treatment, is in the early stages of study in humans. A small initial study using gene therapy has given some children improved muscle strength, but long-term effects are unknown as of 2020.

DMD affects about one in 3,500 to 6,000 males at birth. It is the most common type of muscular dystrophy. Average life expectancy is 26 years; however, with excellent care, some may live into their 30s or 40s. The disease is much rarer in girls, occurring in approximately one in 50,000,000 live female births.
Signs and symptoms
DMD causes progressive muscle weakness due to muscle fiber disarray, death, and replacement with connective tissue or fat. The voluntary muscles are affected first, especially those of the hips, pelvic area, thighs, and calves. The weakness eventually progresses to the shoulders and neck, followed by the arms, respiratory muscles, and other areas. Fatigue is common.

Signs usually appear before age five, and may even be observed from the moment a boy takes his first steps. There is general difficulty with motor skills, which can result in an awkward manner of walking, stepping, or running. Affected boys tend to walk on their toes, in part due to shortening of the Achilles tendon and because toe-walking compensates for knee extensor weakness. Falls can be frequent. It becomes harder and harder for the boy to walk; his ability to walk usually deteriorates completely before age 13. Most men affected with DMD become essentially "paralyzed from the neck down" by the age of 21. Cardiomyopathy, particularly dilated cardiomyopathy, is common, seen in half of 18-year-olds. The development of congestive heart failure or arrhythmia (irregular heartbeat) is only occasional. In late stages of the disease, respiratory impairment and swallowing impairment can occur, which can result in pneumonia.
A classic sign of DMD is trouble getting up from a lying or sitting position, as manifested by a positive Gowers' sign. When a child tries to arise from lying on his stomach, he compensates for pelvic muscle weakness through use of the upper extremities: first by rising to stand on his arms and knees, and then "walking" his hands up his legs to stand upright. Another characteristic sign of DMD is pseudohypertrophy (enlarging) of the muscles of the tongue, calves, buttocks, and shoulders (around age 4 or 5). The muscle tissue is eventually replaced by fat and connective tissue, hence the term pseudohypertrophy. Muscle fiber deformities and muscle contractures of the Achilles tendon and hamstrings can occur, which impair functionality because the muscle fibers shorten and fibrose in connective tissue. Skeletal deformities can occur, such as lumbar hyperlordosis, scoliosis, anterior pelvic tilt, and chest deformities. Lumbar hyperlordosis is thought to be a compensatory mechanism in response to gluteal and quadriceps muscle weakness, all of which cause altered posture and gait (e.g., restricted hip extension).
Non-musculoskeletal manifestations of DMD also occur. There is a higher risk of neurobehavioral disorders (e.g., ADHD), learning disorders (e.g., dyslexia), and non-progressive weaknesses in specific cognitive skills (in particular short-term verbal memory), which are believed to be the result of absent or dysfunctional dystrophin in the brain.
Cause
DMD is caused by a mutation of the dystrophin gene, located on the short arm of the X chromosome (locus Xp21), which codes for the protein dystrophin. Mutations can either be inherited or occur spontaneously during germline transmission, leading to a large reduction in, or absence of, dystrophin, a protein that provides structural integrity to muscle cells. Dystrophin is responsible for connecting the actin cytoskeleton of each muscle fiber to the underlying basal lamina (extracellular matrix) through a protein complex containing many subunits. The absence of dystrophin permits excess calcium to penetrate the sarcolemma (the cell membrane). Alterations in calcium and signalling pathways cause water to enter the mitochondria, which then burst.
In skeletal muscle dystrophy, mitochondrial dysfunction gives rise to an amplification of stress-induced cytosolic calcium signals and of stress-induced reactive-oxygen-species production. In a complex cascading process that involves several pathways and is not clearly understood, increased oxidative stress within the cell damages the sarcolemma and eventually results in the death of the cell. Muscle fibers undergo necrosis and are ultimately replaced with adipose and connective tissue.
DMD is inherited in an X-linked recessive pattern. Females typically are carriers of the genetic trait while males are affected. A female carrier may be unaware she carries a mutation until she has an affected son. The son of a carrier mother has a 50% chance of inheriting the defective gene from his mother. The daughter of a carrier mother has a 50% chance of being a carrier and a 50% chance of having two normal copies of the gene. In all cases, an unaffected father either passes a normal Y to his son or a normal X to his daughter. Female carriers of an X-linked recessive condition, such as DMD, can show symptoms depending on their pattern of X-inactivation.
DMD is extremely rare in females (about 1 in 50,000,000 female births). It can occur in females with an affected father and a carrier mother, in those who are missing an X chromosome, or in those who have an inactivated X chromosome (the most common of the rare reasons). The daughter of a carrier mother and an affected father will be affected or a carrier with equal probability, as she will always inherit the affected X chromosome from her father and has a 50% chance of also inheriting the affected X chromosome from her mother.
Disruption of the blood-brain barrier has also been noted as a feature in the development of DMD.
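The probabilities in the preceding paragraph follow from simple Mendelian enumeration over the parents' sex chromosomes. The short Python sketch below is purely illustrative (the allele labels and the structure of the program are not from any DMD-specific tool); it enumerates the four equally likely outcomes of a carrier-mother, unaffected-father cross:

from itertools import product
from collections import Counter

# X-linked recessive cross: carrier mother (Xx) and unaffected father (XY).
# "X" denotes a normal allele, "x" an allele carrying the dystrophin mutation.
MOTHER = ["X", "x"]   # one of her two X chromosomes, passed at random
FATHER = ["X", "Y"]   # his X produces a daughter, his Y produces a son

counts = Counter()
for m, f in product(MOTHER, FATHER):   # four equally likely combinations
    if f == "Y":
        counts[("son", "affected" if m == "x" else "unaffected")] += 1
    else:
        counts[("daughter", "carrier" if m == "x" else "non-carrier")] += 1

for sex in ("son", "daughter"):
    total = sum(n for (s, _), n in counts.items() if s == sex)
    for (s, status), n in counts.items():
        if s == sex:
            print(f"{sex}: {status} with probability {n / total:.0%}")
# Each outcome prints at 50%, matching the probabilities stated above.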
Diagnosis
Genetic counseling is advised for people with a family history of the disorder. DMD can be detected with about 95% accuracy by genetic studies performed during pregnancy. Creatine kinase (CPK-MM) levels in the bloodstream are extremely high. Electromyography (EMG) shows that weakness is caused by destruction of muscle tissue rather than by damage to nerves.
DNA test
The muscle-specific isoform of the dystrophin gene is composed of 79 exons, and DNA testing (blood test) and analysis can usually identify the specific type of mutation of the exon or exons that are affected. DNA testing confirms the diagnosis in most cases.
Muscle biopsy
If DNA testing fails to find the mutation, a muscle biopsy test may be performed. A small sample of muscle tissue is extracted using a biopsy needle. The key tests performed on the biopsy sample for DMD are immunohistochemistry, immunocytochemistry, and immunoblotting for dystrophin, and should be interpreted by an experienced neuromuscular pathologist. These tests provide information on the presence or absence of the protein. Absence of the protein is a positive test for DMD. Where dystrophin is present, the tests indicate the amount and molecular size of dystrophin, helping to distinguish DMD from milder dystrophinopathy phenotypes. Over the past several years, DNA tests have been developed that detect more of the many mutations that cause the condition, and muscle biopsy is not required as often to confirm the presence of DMD.
Prenatal tests
A prenatal test can be considered when the mother is a known or suspected carrier. Prenatal tests can tell whether the unborn child has one of the most common mutations. Many mutations are responsible for DMD, and some have not been identified, so genetic testing may be falsely negative if the suspected mutation in the mother has not been identified.
Prior to invasive testing, determination of the fetal sex is important; males are most often affected by this X-linked disease, while female DMD is extremely rare. This can be achieved by ultrasound scan at 16 weeks or, more recently, by cell-free fetal DNA (cffDNA) testing. Chorionic villus sampling (CVS) can be done at 11–14 weeks and has a 1% risk of miscarriage. Amniocentesis can be done after 15 weeks and has a 0.5% risk of miscarriage. Non-invasive prenatal testing can be done around 10–12 weeks. Another option in the case of unclear genetic test results is fetal muscle biopsy.
Treatment
No cure for DMD is known, and an ongoing medical need has been recognized by regulatory authorities. Gene therapy has shown some success. Treatment is generally aimed at controlling symptoms to maximize the quality of life, which can be measured using specific questionnaires, and includes:
Corticosteroids such as prednisolone and deflazacort lead to short-term improvements in muscle strength and function up to 2 years. Corticosteroids have also been reported to help prolong walking, though the evidence for this is not robust.
Randomised controlled trials have shown that β2 agonists increase muscle strength but do not modify disease progression. Follow-up time for most RCTs on β2 agonists is only around 12 months, so results cannot be extrapolated beyond that time frame.
Mild, nonjarring physical activity such as swimming is encouraged. Inactivity (such as bed rest) can worsen the muscle disease.
Physical therapy is helpful to maintain muscle strength, flexibility, and function.
Orthopedic appliances (such as braces and wheelchairs) may improve mobility and the ability for self-care. Form-fitting removable leg braces that hold the ankle in place during sleep can defer the onset of contractures.
Appropriate respiratory support as the disease progresses is important.
Cardiac problems may require a pacemaker.
The medication eteplirsen, a Morpholino antisense oligo, has been approved in the United States for the treatment of mutations amenable to dystrophin exon 51 skipping. The US approval has been controversial, as eteplirsen failed to establish a clinical benefit; it has been refused approval by the European Medicines Agency.
The medication ataluren (Translarna) is approved for use in the European Union.
The antisense oligonucleotide golodirsen (Vyondys 53) was approved for medical use in the United States in 2019 for the treatment of cases that can benefit from skipping exon 53 of the dystrophin transcript.
The Morpholino antisense oligonucleotide viltolarsen (Viltepso) was approved for medical use in the United States in August 2020, for the treatment of Duchenne muscular dystrophy (DMD) in people who have a confirmed mutation of the DMD gene that is amenable to exon 53 skipping. It is the second approved targeted treatment for people with this type of mutation in the United States. Approximately 8% of people with DMD have a mutation that is amenable to exon 53 skipping.
Casimersen was approved for medical use in the United States in February 2021, and it is the first FDA-approved targeted treatment for people who have a confirmed mutation of the DMD gene that is amenable to exon 45 skipping.
Comprehensive multidisciplinary care guidelines for DMD have been developed by the Centers for Disease Control and Prevention and were published in two parts in The Lancet Neurology in 2010. An update was published in 2018.
Physical therapy
Physical therapists are concerned with enabling patients to reach their maximum physical potential. Their aim is to:
minimize the development of contractures and deformity by developing a programme of stretches and exercises where appropriate
anticipate and minimize other secondary complications of a physical nature by recommending bracing and durable medical equipment
monitor respiratory function and advise on techniques to assist with breathing exercises and methods of clearing secretions
Respiration assistance
Modern "volume ventilators/respirators", which deliver an adjustable volume (amount) of air to the person with each breath, are valuable in the treatment of people with muscular dystrophy-related respiratory problems. The ventilator may require an invasive endotracheal or tracheotomy tube through which air is directly delivered, but for some people, noninvasive delivery through a face mask or mouthpiece is sufficient. Positive airway pressure machines, particularly bilevel ones, are sometimes used in this latter way. The respiratory equipment may easily fit on a ventilator tray on the bottom or back of a power wheelchair with an external battery for portability.
Ventilator treatment may start in the mid- to late teens, when the respiratory muscles can begin to collapse. If the vital capacity has dropped below 40% of normal, a volume ventilator/respirator may be used during sleeping hours, a time when the person is most likely to be underventilating (hypoventilating). Hypoventilation during sleep is determined by a thorough history of sleep disorder with an oximetry study and a capillary blood gas (see pulmonary function testing).
A cough assist device can help with excess mucus in the lungs by hyperinflating the lungs with positive air pressure, then applying negative pressure to draw the mucus up. If the vital capacity continues to decline to less than 30% of normal, a volume ventilator/respirator may also be needed during the day for more assistance. The person gradually increases the amount of time using the ventilator/respirator during the day as needed. However, there are also people with the disease in their 20s who have no need for a ventilator.
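The vital-capacity thresholds mentioned above (roughly 40% of normal for nocturnal support, 30% for added daytime support) can be restated as a simple decision rule. The sketch below only summarizes the percentages given in the text; the function name and return strings are illustrative assumptions, and it is in no way clinical guidance:

def ventilation_support(vital_capacity_pct: float) -> str:
    """Restate the thresholds from the text, keyed to vital capacity
    as a percentage of the predicted normal value."""
    if vital_capacity_pct < 30:
        # below ~30% of normal: daytime ventilator use may also be needed
        return "nocturnal plus daytime ventilator support"
    if vital_capacity_pct < 40:
        # below ~40% of normal: support during sleep, when hypoventilation is likeliest
        return "nocturnal (sleep-time) ventilator support"
    return "no ventilator support indicated by these thresholds alone"

for vc in (55, 35, 25):
    print(f"{vc}% of normal -> {ventilation_support(vc)}")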
Future developments
There is no cure for any of the muscular dystrophies. Several drugs designed to address the root cause are under development, including gene therapy (microdystrophin) and antisense drugs (ataluren, eteplirsen, etc.). Other medications used include corticosteroids (deflazacort), calcium channel blockers (diltiazem) to slow skeletal and cardiac muscle degeneration, anticonvulsants to control seizures and some muscle activity, and immunosuppressants (vamorolone) to delay damage to dying muscle cells. Physical therapy, braces, and corrective surgery may help with some symptoms, while assisted ventilation may be required in those with weakness of the breathing muscles. Outcomes depend on the specific type of disorder.
A paper published by Stanford University researchers on 10 March 2022 suggested that patients with muscular dystrophies could benefit from new therapies targeting the specific pathways contributing directly to muscle disorders. Three recent advances are likely to enhance the landscape of treatments for muscular dystrophies such as DMD. First, induced pluripotent stem cells (iPSCs) allow researchers to design effective treatment strategies. Second, artificial intelligence (AI) can help identify therapeutic targets. Third, a high volume of multi-omics data gathered from diverse sources through disease models can provide valuable information about converging and diverging pathways.
Prognosis
Duchenne muscular dystrophy is a rare progressive disease which eventually affects all voluntary muscles and involves the heart and breathing muscles in later stages. Life expectancy is estimated to be around 25–26, but this varies. With excellent medical care, affected men often live into their 30s. David Hatch of Paris, Maine, may be the oldest person in the world with the disease; as of 2021, he was 58.
The most common direct cause of death in people with DMD is respiratory failure. Complications from treatment, such as mechanical ventilation and tracheotomy procedures, are also a concern. The next leading cause of death is cardiac-related conditions such as heart failure brought on by dilated cardiomyopathy. With respiratory assistance, the median survival age can reach up to 40. In rare cases, people with DMD have been seen to survive into their forties or early fifties, with proper positioning in wheelchairs and beds, the use of ventilator support (via tracheostomy or mouthpiece), airway clearance, and heart medications. Early planning of the required supports for later-life care has been associated with greater longevity for people with DMD.
Curiously, in the mdx mouse model of Duchenne muscular dystrophy, the lack of dystrophin is associated with increased calcium levels and skeletal muscle myonecrosis. The intrinsic laryngeal muscles (ILMs), however, are protected and do not undergo myonecrosis. ILMs have a calcium regulation system profile suggestive of a better ability to handle calcium changes in comparison to other muscles, and this may provide a mechanistic insight into their unique pathophysiological properties. The ILMs may facilitate the development of novel strategies for the prevention and treatment of muscle wasting in a variety of clinical scenarios. In addition, patients with Duchenne muscular dystrophy have elevated plasma lipoprotein levels, implying a primary state of dyslipidemia.
Epidemiology
DMD is the most common type of muscular dystrophy; it affects about one in 5,000 males at birth, with some estimates of incidence as high as one in 3,600 male infants. In the US, a 2010 study found a higher proportion of people with DMD aged 5 to 54 among Hispanics than among non-Hispanic Whites and non-Hispanic Blacks.
History
The disease was first described by the Neapolitan physicians Giovanni Semmola in 1834 and Gaetano Conte in 1836. However, DMD is named after the French neurologist Guillaume-Benjamin-Amand Duchenne (1806–1875), who in the 1861 edition of his book Paraplégie hypertrophique de l'enfance de cause cérébrale described and detailed the case of a boy who had this condition. A year later, he presented photos of his patient in his Album de photographies pathologiques. In 1868, he gave an account of 13 other affected children. Duchenne was the first to do a biopsy to obtain tissue from a living patient for microscopic examination.
Notable cases
Alfredo ("Dino", "Alfredino") Ferrari (born January 1932 in Modena), the son of Enzo Ferrari, designed the 1.5 L DOHC V6 engine for the model F2 at the end of 1955. But Dino never saw the engine produced: he died on 30 June 1956 in Modena at the age of 24, before his namesakes, the Dino and Fiat Dino, were made.
Rapper Darius Weems had the disease and used his notoriety to raise awareness and funds for treatment. He died at the age of 27; his brother also had the disease and died at the age of 19. The film Darius Goes West documents Weems's journey of growth and acceptance of having the disease.
Jonathan Evison's novel The Revised Fundamentals of Caregiving, published in 2012, depicted a young man affected by the disease. In 2016, Netflix released The Fundamentals of Caring, a film based on the novel.
Research
Current research includes exon skipping, stem cell replacement therapy, analog up-regulation, gene replacement, and supportive care to slow disease progression. Efforts are ongoing to find medications that either restore the ability to make dystrophin or increase production of utrophin. Other efforts include trying to block the entry of calcium ions into muscle cells.
Exon-skipping
Antisense oligonucleotides (oligos), structural analogs of DNA, are the basis of a potential treatment for 10% of people with DMD. The compounds allow faulty parts of the dystrophin gene to be skipped when it is transcribed to RNA for protein production, permitting a still-truncated but more functional version of the protein to be produced. It is also known as nonsense suppression therapy.Two kinds of antisense oligos, 2-O-methyl phosphorothioate oligos (like drisapersen) and Morpholino oligos (like eteplirsen), have tentative evidence of benefit and are being studied. Eteplirsen is targeted to skip exon 51. "As an example, skipping exon 51 restores the reading frame of ~ 15% of all the boys with deletions. It has been suggested that by having 10 AONs to skip 10 different exons it would be possible to deal with more than 70% of all DMD boys with deletions." This represents about 1.5% of cases.
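The reading-frame arithmetic behind exon skipping can be shown with a toy transcript. In the sketch below the sequences, and hence the lengths, are invented solely for illustration (real dystrophin exons are far longer); the point is only that a deletion whose length is not a multiple of three shifts the reading frame, and additionally skipping a suitable neighboring exon can restore it:

# Toy reading-frame check; exon sequences are invented for illustration only.
exons = {
    49: "ATGGCT",       # 6 nt  (multiple of 3)
    50: "GCTCGAT",      # 7 nt  (not a multiple of 3)
    51: "CGATTGCAGTT",  # 11 nt (not a multiple of 3)
    52: "GCAGCT",       # 6 nt
}

def in_frame(exon_ids):
    """A spliced transcript stays in frame when its length is a multiple of 3."""
    return sum(len(exons[i]) for i in exon_ids) % 3 == 0

print(in_frame([49, 50, 51, 52]))  # True: intact transcript (30 nt)
print(in_frame([49, 51, 52]))      # False: deleting exon 50 shifts the frame (23 nt)
print(in_frame([49, 52]))          # True: also skipping exon 51 restores the frame (12 nt)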
People with Becker muscular dystrophy, which is milder than DMD, have a form of dystrophin which is functional even though it is shorter than normal dystrophin. In 1990, England et al. noticed that a patient with mild Becker muscular dystrophy was lacking 46% of his coding region for dystrophin. This functional, yet truncated, form of dystrophin gave rise to the notion that shorter dystrophin can still be therapeutically beneficial. Concurrently, Kole et al. had modified splicing by targeting pre-mRNA with antisense oligonucleotides (AONs). Kole demonstrated success using splice-targeted AONs to correct missplicing in cells removed from beta-thalassemia patients. Wilton's group then tested exon skipping for muscular dystrophy.
Gene therapy
Researchers are working on a gene editing method to correct a mutation that leads to Duchenne muscular dystrophy (DMD). Researchers have used a technique called CRISPR/Cas9-mediated genome editing, which can precisely remove a mutation in the dystrophin gene in DNA, allowing the body's DNA repair mechanisms to replace it with a normal copy of the gene. The benefit of this over other gene therapy techniques is that it can permanently correct the "defect" in a gene rather than just transiently adding a "functional" one. Genome editing through the CRISPR/Cas9 system is not currently feasible in humans. However, it may be possible, through advancements in technology, to use this technique to develop therapies for DMD in the future. In 2007, researchers did the world's first clinical (viral-mediated) gene therapy trial for Duchenne MD.
Biostrophin is a delivery vector for gene therapy in the treatment of Duchenne muscular dystrophy and Becker muscular dystrophy.
References
Further reading
Birnkrant DJ, Bushby K, Bann CM, Apkon SD, Blackwell A, Brumbaugh D, et al. (March 2018). "Diagnosis and management of Duchenne muscular dystrophy, part 1: diagnosis, and neuromuscular, rehabilitation, endocrine, and gastrointestinal and nutritional management". Lancet Neurol. 17 (3): 251–267. doi:10.1016/S1474-4422(18)30024-3. PMC 5869704. PMID 29395989.
Birnkrant DJ, Bushby K, Bann CM, Alman BA, Apkon SD, Blackwell A, et al. (April 2018). "Diagnosis and management of Duchenne muscular dystrophy, part 2: respiratory, cardiac, bone health, and orthopaedic management". Lancet Neurol. 17 (4): 347–361. doi:10.1016/S1474-4422(18)30025-5. PMC 5889091. PMID 29395990.
External links
Muscular Dystrophies at Curlie
CDC's National Center on Birth Defects and Developmental Disabilities (previously listed below as "Duchenne/Becker Muscular Dystrophy, NCBDDD") at CDC
Genes and Disease Page at NCBI
Dupuytren's contracture
Dupuytren's contracture (also called Dupuytren's disease, Morbus Dupuytren, Viking disease, palmar fibromatosis and Celtic hand) is a condition in which one or more fingers become progressively bent in a flexed position. It is named after Guillaume Dupuytren, who first described the underlying mechanism of action, followed by the first successful operation in 1831 and publication of the results in The Lancet in 1834. It usually begins as small, hard nodules just under the skin of the palm, then worsens over time until the fingers can no longer be fully straightened. While typically not painful, some aching or itching may be present. The ring finger, followed by the little and middle fingers, is most commonly affected. It can affect one or both hands. The condition can interfere with activities such as preparing food, writing, putting a hand in a tight pocket, putting on gloves, or shaking hands.
The cause is unknown but might have a genetic component. Risk factors include family history, alcoholism, smoking, thyroid problems, liver disease, diabetes, previous hand trauma, and epilepsy. The underlying mechanism involves the formation of abnormal connective tissue within the palmar fascia. Diagnosis is usually based on a physical exam. Blood tests or imaging studies are not usually necessary.
Initial treatment is typically with a cortisone shot into the affected area, occupational therapy, and physical therapy. Among those who worsen, clostridial collagenase injections or surgery may be tried. While radiation therapy is used to treat this condition, the evidence for this use is scarce. The Royal College of Radiologists (RCR) Faculty of Clinical Oncology concluded that radiotherapy is effective in early-stage disease which has progressed within the last 6 to 12 months. The condition may recur despite treatment. If it does return after treatment, it can be treated again with further improvement. It is easier to treat when the amount of finger bending is milder.
It was once believed that Dupuytren's most often occurs in white males over the age of 50 and is rare among Asians and Africans. It was sometimes erroneously called "Viking disease", since it was often recorded among those of Nordic descent. In Norway, about 30% of men over 60 years old have the condition, while in the United States about 5% of people are affected at some point in time. In the United Kingdom, about 20% of people over 65 have some form of the disease. More recent and wider studies show the highest prevalence in Africa (17 per cent) and Asia (15 per cent).
Signs and symptoms
Typically, Dupuytren's contracture first presents as a thickening or nodule in the palm, which initially can be with or without pain. Later in the disease process, which can be years later, there is painless increasing loss of range of motion of the affected finger(s). The earliest sign of a contracture is a triangular "puckering" of the skin of the palm as it passes over the flexor tendon just before the flexor crease of the finger, at the metacarpophalangeal (MCP) joint.
Generally, the cords or contractures are painless, but, rarely, tenosynovitis can occur and produce pain. The most common finger to be affected is the ring finger; the thumb and index finger are much less often affected. The disease begins in the palm and moves towards the fingers, with the metacarpophalangeal (MCP) joints affected before the proximal interphalangeal (PIP) joints. The MCP joints at the base of the fingers respond much better to treatment and are usually able to fully extend after treatment. Due to anatomic differences in the ligaments and extensor tendons at the PIP joints, these joints may retain some residual flexion. Proper patient education is necessary to set realistic treatment expectations.
In Dupuytren's contracture, the palmar fascia within the hand becomes abnormally thick, which can cause the fingers to curl and can impair finger function. The main function of the palmar fascia is to increase grip strength; thus, over time, Dupuytren's contracture decreases a person's ability to hold objects and use the hand in many different activities. Dupuytren's contracture can also be experienced as embarrassing in social situations and can affect quality of life. People may report pain, aching, and itching with the contractions. Normally, the palmar fascia consists of collagen type I, but in Dupuytren patients, the collagen changes to collagen type III, which is significantly thicker than collagen type I.
Related conditions
People with severe involvement often show lumps on the back of their finger joints (called "Garrod's pads", "knuckle pads", or "dorsal Dupuytren nodules") and lumps in the arch of the feet (plantar fibromatosis or Ledderhose disease). In severe cases, the area where the palm meets the wrist may develop lumps. The condition Peyronie's disease is thought to be related to Dupuytren's contracture.
Risk factors
Dupuytren's contracture is a non-specific condition, but primarily affects:
Non-modifiable
People of Scandinavian or Northern European ancestry; it has been called the "Viking disease", though it is also widespread in some Mediterranean countries, e.g., Spain and Bosnia. Dupuytren's is unusual among groups such as Chinese and Africans.
Men rather than women; men are more likely to develop the condition (80%)
People over the age of 50 (5% to 15% of men in that group in the US); the likelihood of getting Dupuytren's disease increases with age
People with a family history (60% to 70% of those affected have a genetic predisposition to Dupuytren's contracture)
Modifiable
Smokers, especially those who smoke 25 cigarettes or more a day
Thinner people, i.e., those with a lower-than-average body mass index.
Manual workers
Alcoholics
Other conditions
People with a higher-than-average fasting blood glucose level
People with previous hand injury
People with Ledderhose disease (plantar fibromatosis)
People with epilepsy (possibly due to anti-convulsive medication)
People with diabetes mellitus
People with HIV
Previous myocardial infarction
In one study, those with stage 2 of the disease were found to have a slightly increased risk of mortality, especially from cancer.
Diagnosis
Types
According to the American Dupuytren's specialist Dr. Charles Eaton, there may be three types of Dupuytren's disease:
Type 1: A very aggressive form of the disease found in only 3% of people with Dupuytren's, which can affect men under 50 with a family history of Dupuytren's. It is often associated with other symptoms such as knuckle pads and Ledderhose disease. This type is sometimes known as Dupuytren's diathesis.
Type 2: The more typical form of Dupuytren's disease, usually found in the palm only, which generally begins above the age of 50. According to Eaton, this type may be made more severe by other factors such as diabetes or heavy manual labor.
Type 3: A mild form of Dupuytren's which is common among diabetics or which may also be caused by certain medications, such as the anti-convulsants taken by people with epilepsy. This type does not lead to full contracture of the fingers, and is probably not inherited.
Treatment
Treatment is indicated when the so-called table-top test is positive. With this test, the person places their hand on a table. If the hand lies completely flat on the table, the test is considered negative. If the hand cannot be placed completely flat on the table, leaving a space between the table and a part of the hand as big as the diameter of a ballpoint pen, the test is considered positive and surgery or other treatment may be indicated. Additionally, finger joints may become fixed and rigid. There are several types of treatment, with some hands needing repeated treatment; a simple sketch of the table-top rule follows after the list of treatment categories below.
The main categories listed by the International Dupuytren Society, in order of stage of disease, are radiation therapy, needle aponeurotomy (NA), collagenase injection, and hand surgery. As of 2016 the evidence on the efficacy of radiation therapy was considered inadequate in quantity and quality, and difficult to interpret because of uncertainty about the natural history of Dupuytren's disease.
Needle aponeurotomy is most effective for Stages I and II, covering 6–90 degrees of deformation of the finger. However, it is also used at other stages.
Collagenase injection is likewise most effective for Stages I and II. However, it is also used at other stages.
Hand surgery is effective at stage I to stage IV.
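The table-top test described at the start of this section amounts to a simple yes/no rule. A minimal Python sketch, assuming a nominal ballpoint-pen diameter of about 10 mm (the exact figure, the function name, and the measurement in millimetres are illustrative assumptions, not part of any formal protocol):

def table_top_test_positive(palm_to_table_gap_mm: float,
                            pen_diameter_mm: float = 10.0) -> bool:
    """Positive when the hand cannot lie flat: the residual gap between
    table and hand is at least about the diameter of a ballpoint pen."""
    return palm_to_table_gap_mm >= pen_diameter_mm

print(table_top_test_positive(0.0))   # False: hand lies flat, test negative
print(table_top_test_positive(12.0))  # True: treatment may be indicated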
Surgery
On 12 June 1831, Dupuytren performed a surgical procedure on a person with contracture of the 4th and 5th digits who had been previously told by other surgeons that the only remedy was cutting the flexor tendons. He described the condition and the operation in The Lancet in 1834 after presenting it in 1833, and posthumously in 1836 in a French publication by Hôtel-Dieu de Paris. The procedure he described was a minimally invasive needle procedure.
Because of high recurrence rates, new surgical techniques were introduced, such as fasciectomy and then dermofasciectomy. Most of the diseased tissue is removed with these procedures, and recurrence rates are low. For some individuals, the partial insertion of "K-wires" into either the DIP or PIP joint of the affected digit for a period of at least 21 days to fuse the joint is the only way to halt the disease's progress. After removal of the wires, the joint is fixed into flexion, which is considered preferable to fusion at extension.
In extreme cases, amputation of fingers may be needed for severe or recurrent disease, or after surgical complications.
Limited fasciectomy
Limited/selective fasciectomy removes the pathological tissue, and is a common approach. Low-quality evidence suggests that fasciectomy may be more effective for people with advanced Dupuytren's contractures.
During the procedure, the person is under regional or general anesthesia. A surgical tourniquet prevents blood flow to the limb. The skin is often opened with a zig-zag incision, but straight incisions with or without Z-plasty are also described and may reduce damage to neurovascular bundles. All diseased cords and fascia are excised. The excision has to be very precise to spare the neurovascular bundles. Because not all the diseased tissue is visible macroscopically, complete excision is uncertain.
A 20-year review of surgical complications associated with fasciectomy showed that major complications occurred in 15.7% of cases, including digital nerve injury (3.4%), digital artery injury (2%), infection (2.4%), hematoma (2.1%), and complex regional pain syndrome (5.5%), in addition to minor complications including painful flare reactions in 9.9% of cases and wound healing complications in 22.9% of cases.
After the tissue is removed, the incision is closed. In the case of a shortage of skin, the transverse part of the zig-zag incision is left open. Stitches are removed 10 days after surgery.
After surgery, the hand is wrapped in a light compressive bandage for one week. Flexion and extension of the fingers can start as soon as the anaesthesia has resolved. It is common to experience tingling within the first week after surgery. Hand therapy is often recommended. Approximately 6 weeks after surgery, the patient is able to use the hand completely.
The average recurrence rate is 39% after a fasciectomy, after a median interval of about 4 years.
Wide-awake fasciectomy
Limited/selective fasciectomy under local anesthesia (LA) with epinephrine but no tourniquet is possible. In 2005, Denkler described the technique.
Dermofasciectomy
Dermofasciectomy is a surgical procedure that may be used when:
The skin is clinically involved (pits, tethering, deficiency, etc.)
The risk of recurrence is high and the skin appears uninvolved (subclinical skin involvement occurs in ~50% of cases)
Recurrent disease. Similar to a limited fasciectomy, the dermofasciectomy removes diseased cords, fascia, and the overlying skin.
Typically, the excised skin is replaced with a skin graft, usually full thickness, consisting of the epidermis and the entire dermis. In most cases the graft is taken from the antecubital fossa (the crease of skin at the elbow joint) or the inner side of the upper arm. This site is chosen because its skin color best matches the palm's skin color. The skin on the inner side of the upper arm is thin and has enough skin to supply a full-thickness graft. The donor site can be closed with a direct suture.
The graft is sutured to the skin surrounding the wound. For one week the hand is protected with a dressing. The hand and arm are elevated with a sling. The dressing is then removed and careful mobilization can be started, gradually increasing in intensity. After this procedure the risk of recurrence is minimised, but Dupuytren's can recur in the skin graft and complications from surgery may occur.
Segmental fasciectomy with/without cellulose
Segmental fasciectomy involves excising part(s) of the contracted cord so that it disappears or no longer contracts the finger. It is less invasive than the limited fasciectomy, because not all the diseased tissue is excised and the skin incisions are smaller.
The person is placed under regional anesthesia and a surgical tourniquet is used. The skin is opened with small curved incisions over the diseased tissue. If necessary, incisions are made in the fingers. Pieces of cord and fascia of approximately one centimeter are excised. The cords are placed under maximum tension while they are cut. A scalpel is used to separate the tissues. The surgeon keeps removing small parts until the finger can fully extend. The person is encouraged to start moving his or her hand the day after surgery. They wear an extension splint for two to three weeks, except during physical therapy.
The same procedure is used in the segmental fasciectomy with cellulose implant. After the excision and careful hemostasis, the cellulose implant is placed in a single layer between the remaining parts of the cord.
After surgery, people wear a light pressure dressing for four days, followed by an extension splint. The splint is worn continuously during nighttime for eight weeks. During the first weeks after surgery, the splint may also be worn during daytime.
Less invasive treatments
Studies have been conducted on percutaneous release, extensive percutaneous aponeurotomy with lipografting, and collagenase. These treatments show promise.
Percutaneous needle fasciotomy
Needle aponeurotomy is a minimally-invasive technique where the cords are weakened through the insertion and manipulation of a small needle. The cord is sectioned at as many levels as possible in the palm and fingers, depending on the location and extent of the disease, using a 25-gauge needle mounted on a 10 ml syringe. Once weakened, the offending cords can be snapped by putting tension on the finger(s) and pulling the finger(s) straight. After the treatment, a small dressing is applied for 24 hours, after which people are able to use their hands normally. No splints or physiotherapy are given.
The advantage of needle aponeurotomy is the minimal intervention without incision (done in the office under local anesthesia) and the very rapid return to normal activities without need for rehabilitation, but the nodules may resume growing. A study reported that postoperative gain is greater at the MCP-joint level than at the level of the IP-joint and found a reoperation rate of 24%; complications are scarce. Needle aponeurotomy may be performed on fingers that are severely bent (stage IV), and not just in early stages. A 2003 study showed an 85% recurrence rate after 5 years.
A comprehensive review of the results of needle aponeurotomy in 1,013 fingers was performed by Gary M. Pess, MD, Rebecca Pess, DPT, and Rachel Pess, PsyD, and published in the Journal of Hand Surgery in April 2012. Minimal follow-up was 3 years. Metacarpophalangeal joint (MP) contractures were corrected at an average of 99% and proximal interphalangeal joint (PIP) contractures at an average of 89% immediately post procedure. At final follow-up, 72% of the correction was maintained for MP joints and 31% for PIP joints. The difference between the final corrections for MP versus PIP joints was statistically significant. When a comparison was performed between people aged 55 years and older versus under 55 years, there was a statistically significant difference at both MP and PIP joints, with greater correction maintained in the older group.
Gender differences were not statistically significant. Needle aponeurotomy provided successful correction to 5° or less contracture immediately post procedure in 98% (791) of MP joints and 67% (350) of PIP joints. There was recurrence of 20° or less over the original post-procedure corrected level in 80% (646) of MP joints and 35% (183) of PIP joints. Complications were rare except for skin tears, which occurred in 3.4% (34) of digits. This study showed that NA is a safe procedure that can be performed in an outpatient setting. The complication rate was low, but recurrences were frequent in younger people and for PIP contractures.
Extensive percutaneous aponeurotomy and lipografting
A technique introduced in 2011 is extensive percutaneous aponeurotomy with lipografting. This procedure also uses a needle to cut the cords. The difference from percutaneous needle fasciotomy is that the cord is cut at many places. The cord is also separated from the skin to make room for the lipograft, which is taken from the abdomen or ipsilateral flank. This technique shortens the recovery time, and the fat graft results in supple skin.
Before the aponeurotomy, liposuction is done on the abdomen and ipsilateral flank to collect the lipograft. The treatment can be performed under regional or general anesthesia. The digits are placed under maximal extension tension using a firm lead hand retractor. The surgeon makes multiple palmar puncture wounds with small nicks. The tension on the cords is crucial, because tight constricting bands are most susceptible to being cut and torn by the small nicks, whereas the relatively loose neurovascular structures are spared. After the cord is completely cut and separated from the skin, the lipograft is injected under the skin. A total of about 5 to 10 ml is injected per ray.
After the treatment, the person wears an extension splint for 5 to 7 days. Thereafter the person returns to normal activities and is advised to use a night splint for up to 20 weeks.
Collagenase
Clostridial collagenase injections have been found to be more effective than placebo. The cords are weakened through the injection of small amounts of the enzyme collagenase, which breaks peptide bonds in collagen.
The treatment with collagenase is different for the MCP joint and the PIP joint. In a MCP joint contracture, the needle must be placed at the point of maximum bowstringing of the palpable cord. The needle is placed vertically on the bowstring, and the collagenase is distributed across three injection points. For the PIP joint, the needle must be placed not more than 4 mm distal to the palmar digital crease, at 2–3 mm depth. The injection for the PIP joint consists of one injection of 0.58 mg CCH in 0.20 ml, with the needle placed horizontal to the cord, also using a 3-point distribution. After the injection, the person's hand is wrapped in a bulky gauze dressing and must be elevated for the rest of the day. After 24 hours, the person returns for passive digital extension to rupture the cord; moderate pressure for 10–20 seconds ruptures the cord. After the treatment with collagenase, the person should use a night splint and perform digital flexion/extension exercises several times per day for 4 months.
In February 2010, the US Food and Drug Administration (FDA) approved injectable collagenase extracted from Clostridium histolyticum for the treatment of Dupuytren's contracture in adults with a palpable Dupuytren's cord. (Three years later, it was approved as well for the treatment of the sometimes related Peyronie's disease.) In 2011 its use for the treatment of Dupuytren's contracture was also approved by the European Medicines Agency, and it received similar approval in Australia in 2013. However, the Swedish manufacturer abruptly withdrew distribution of this drug in Europe and the UK in March 2020 for commercial reasons; it is now promoted primarily as a dermatological treatment for cellulite. Collagenase is no longer available on the National Health Service except as part of a small clinical trial.
Radiation therapy
Radiation therapy has been used mostly for early-stage disease, but is unproven; it has only been studied in early disease. Evidence to support its use as of 2017 was scarce, and efforts to gather evidence are complicated by a poor understanding of how the condition develops over time. The Royal College of Radiologists concluded that radiotherapy is effective in early-stage disease which has progressed within the last 6 to 12 months.
Alternative medicine
Several alternative therapies, such as vitamin E treatment, have been studied, though without control groups. Most doctors do not value these treatments. None of these treatments stops or cures the condition permanently. A 1949 study of vitamin E therapy found that "In twelve of the thirteen patients there was no evidence whatever of any alteration. ... The treatment has been abandoned."
Laser treatment (using red and infrared at low power) was informally discussed in 2013 at an International Dupuytren Society forum, as of which time little or no formal evaluation of the techniques had been completed.
Postoperative care
Postoperative care involves hand therapy and splinting. Hand therapy is prescribed to optimize post-surgical function and to prevent joint stiffness. The extent of hand therapy depends on the patient and the corrective procedure.
Besides hand therapy, many surgeons advise the use of static or dynamic splints after surgery to maintain finger mobility. The splint is used to provide prolonged stretch to the healing tissues and prevent flexion contractures. Although splinting is a widely used post-operative intervention, evidence of its effectiveness is limited, leading to variation in splinting approaches. Most surgeons use clinical experience to decide whether to splint. Cited advantages include maintenance of finger extension and prevention of new flexion contractures. Cited disadvantages include joint stiffness, prolonged pain, discomfort, subsequently reduced function, and edema.
A third approach emphasizes early self-exercise and stretching.
Prognosis
Dupuytren's disease has a high recurrence rate, especially when a person has so-called Dupuytren's diathesis. The term diathesis relates to certain features of Dupuytren's disease and indicates an aggressive course of disease.
The presence of all new Dupuytren's diathesis factors increases the risk of recurrent Dupuytren's disease by 71%, compared with a baseline risk of 23% in people lacking the factors. In another study, the prognostic value of diathesis was evaluated, and it was concluded that the presence of diathesis can predict recurrence and extension. A scoring system was made to evaluate the risk of recurrence and extension, based on the following values: bilateral hand involvement, little-finger surgery, early onset of disease, plantar fibrosis, knuckle pads, and radial side involvement.
Minimally invasive therapies may be followed by higher recurrence rates. Recurrence lacks a consensus definition; furthermore, different standards and measurements follow from the various definitions.
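As an illustration only, the diathesis factors named above can be read as a simple checklist. The Python sketch below counts the factors with equal weight; the equal weighting and the function itself are assumptions made for illustration, not the published scoring system:

# Hypothetical checklist over the diathesis factors listed in the text.
DIATHESIS_FACTORS = (
    "bilateral hand involvement",
    "little-finger surgery",
    "early onset of disease",
    "plantar fibrosis",
    "knuckle pads",
    "radial side involvement",
)

def diathesis_score(findings):
    """Count how many of the listed factors are present (equal weights assumed)."""
    return sum(1 for factor in DIATHESIS_FACTORS if factor in findings)

patient = {"knuckle pads", "plantar fibrosis", "early onset of disease"}
print(f"{diathesis_score(patient)} of {len(DIATHESIS_FACTORS)} factors present")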
Notable cases
References
Eating disorder
An eating disorder is a mental disorder defined by abnormal eating behaviors that negatively affect a person's physical or mental health. Only one eating disorder can be diagnosed at a given time. Types of eating disorders include binge eating disorder, where the patient eats a large amount in a short period of time; anorexia nervosa, where the person has an intense fear of gaining weight and restricts food or overexercises to manage this fear; bulimia nervosa, where individuals eat a large quantity (binging) then try to rid themselves of the food (purging); pica, where the patient eats non-food items; rumination syndrome, where the patient regurgitates undigested or minimally digested food; avoidant/restrictive food intake disorder (ARFID), where people have a reduced or selective food intake due to some psychological reasons (see below); and a group of other specified feeding or eating disorders. Anxiety disorders, depression, and substance abuse are common among people with eating disorders. These disorders do not include obesity. People often experience comorbidity between an eating disorder and OCD; it is estimated that 20–60% of patients with an eating disorder have a history of OCD.
The causes of eating disorders are not clear, although both biological and environmental factors appear to play a role. Cultural idealization of thinness is believed to contribute to some eating disorders. Individuals who have experienced sexual abuse are also more likely to develop eating disorders. Some disorders such as pica and rumination disorder occur more often in people with intellectual disabilities.
Treatment can be effective for many eating disorders. Treatment varies by disorder and may involve counseling, dietary advice, reducing excessive exercise, and the reduction of efforts to eliminate food. Medications may be used to help with some of the associated symptoms. Hospitalization may be needed in more serious cases. About 70% of people with anorexia and 50% of people with bulimia recover within five years. Only 10% of people with eating disorders receive treatment, and of those, approximately 80% do not receive the proper care. Many are sent home weeks earlier than the recommended stay and are not provided with the necessary treatment. Recovery from binge eating disorder is less clear and estimated at 20% to 60%. Both anorexia and bulimia increase the risk of death. When people experience comorbidity between an eating disorder and OCD, certain aspects of treatment can be negatively impacted: OCD can make it harder to recover from obsession over weight and shape, body dissatisfaction, and body checking. This is in part because eating-disorder cognitions serve a similar purpose to OCD obsessions and compulsions (e.g., safety behaviors as temporary relief from anxiety). Research shows OCD does not have an impact on the BMI of patients during treatment.
Estimates of the prevalence of eating disorders vary widely, reflecting differences in gender, age, and culture as well as methods used for diagnosis and measurement.
In the developed world, anorexia affects about 0.4% and bulimia about 1.3% of young women in a given year. Binge eating disorder affects about 1.6% of women and 0.8% of men in a given year. According to one analysis, the percentage of women who will have anorexia at some point in their lives may be up to 4%, or up to 2% for bulimia and binge eating disorders. Rates of eating disorders appear to be lower in less developed countries. Anorexia and bulimia occur nearly ten times more often in females than males. The typical onset of eating disorders is in late childhood to early adulthood. Rates of other eating disorders are not clear.
Classification
ICD and DSM diagnoses
These eating disorders are specified as mental disorders in standard medical manuals, including the ICD-10 and the DSM-5.
Anorexia nervosa (AN) is the restriction of energy intake relative to requirements, leading to significantly low body weight in the context of age, sex, developmental trajectory, and physical health. It is accompanied by an intense fear of gaining weight or becoming fat, as well as a disturbance in the way one experiences and appraises their body weight or shape. There are two subtypes of AN: the restricting type, and the binge-eating/purging type. The restricting type describes presentations in which weight loss is attained through dieting, fasting, and/or excessive exercise, with an absence of binge/purge behaviors. The binge-eating/purging type describes presentations in which the individual with the condition has engaged in recurrent episodes of binge-eating and purging behavior, such as self-induced vomiting and misuse of laxatives and diuretics. The severity of AN is determined by BMI, with BMIs below 15 noted as the most extreme cases of the disorder. Pubertal and post-pubertal females with anorexia often experience amenorrhea, or the loss of menstrual periods, due to the extreme weight loss these individuals face. Although amenorrhea was a required criterion for a diagnosis of anorexia in the DSM-IV, it was dropped in the DSM-5 due to its exclusive nature, as males, post-menopausal women, or individuals who do not menstruate for other reasons would fail to meet this criterion. Females with bulimia may also experience amenorrhea, although the cause is not clear.
Bulimia nervosa (BN) is characterized by recurrent binge eating followed by compensatory behaviors such as purging (self-induced vomiting, eating to the point of vomiting, excessive use of laxatives/diuretics, or excessive exercise). Fasting may also be used as a method of purging following a binge. However, unlike anorexia nervosa, body weight is maintained at or above a minimally normal level. Severity of BN is determined by the number of episodes of inappropriate compensatory behaviors per week.
Binge eating disorder (BED) is characterized by recurrent episodes of binge eating without use of inappropriate compensatory behaviors that are present in BN and AN binge-eating/purging subtype. Binge eating episodes are associated with eating much more rapidly than normal, eating until feeling uncomfortably full, eating large amounts of food when not feeling physically hungry, eating alone because of feeling embarrassed by how much one is eating, and/or feeling disgusted with oneself, depressed or very guilty after eating. For a BED diagnosis to be given, marked distress regarding binge eating must be present, and the binge eating must occur an average of once a week for 3 months. Severity of BED is determined by the number of binge eating episodes per week.
Pica is the persistent eating of nonnutritive, nonfood substances in a way that is not developmentally appropriate or culturally supported. Although substances consumed vary with age and availability, paper, soap, hair, chalk, paint, and clay are among the most commonly consumed in those with a pica diagnosis. There are multiple causes for the onset of pica, including iron-deficiency anemia, malnutrition, and pregnancy, and pica often occurs in tandem with other mental health disorders associated with impaired function, such as intellectual disability, autism spectrum disorder, and schizophrenia. In order for a diagnosis of pica to be warranted, behaviors must last for at least one month.
Rumination disorder encompasses the repeated regurgitation of food, which may be re-chewed, re-swallowed, or spit out. For this diagnosis to be warranted, behaviors must persist for at least one month, and regurgitation of food cannot be attributed to another medical condition. Additionally, rumination disorder is distinct from AN, BN, BED, and ARFID, and thus cannot occur during the course of one of these illnesses.
Avoidant/restrictive food intake disorder (ARFID) is a feeding or eating disturbance, such as a lack of interest in eating food, avoidance based on sensory characteristics of food, or concern about aversive consequences of eating, that prevents one from meeting nutritional energy needs. It is frequently associated with weight loss, nutritional deficiency, or failure to meet growth trajectories. Notably, ARFID is distinguishable from AN and BN in that there is no evidence of a disturbance in the way in which ones body weight or shape is experienced. The disorder is not better explained by lack of available food, cultural practices, a concurrent medical condition, or another mental disorder.
Other Specified Feeding or Eating Disorder (OSFED) is an eating or feeding disorder that does not meet full DSM-5 criteria for AN, BN, or BED. Examples of otherwise-specified eating disorders include individuals with atypical anorexia nervosa, who meet all criteria for AN except being underweight despite substantial weight loss; atypical bulimia nervosa, who meet all criteria for BN except that bulimic behaviors are less frequent or have not been ongoing for long enough; purging disorder; and night eating syndrome.
Unspecified Feeding or Eating Disorder (USFED) describes feeding or eating disturbances that cause marked distress and impairment in important areas of functioning but that do not meet the full criteria for any of the other diagnoses. The specific reason the presentation does not meet criteria for a specified disorder is not given. For example, a USFED diagnosis may be given when there is insufficient information to make a more specific diagnosis, such as in an emergency room setting.
Other
Compulsive overeating, which may include habitual "grazing" of food or episodes of binge eating without feelings of guilt.
Diabulimia, which is characterized by the deliberate manipulation of insulin levels by diabetics in an effort to control their weight.
Drunkorexia, which is commonly characterized by purposely restricting food intake in order to reserve food calories for alcoholic calories, exercising excessively in order to burn calories from drinking, and over-drinking alcohol in order to purge previously consumed food.
Food maintenance, which is characterized by a set of aberrant eating behaviors of children in foster care.
Night eating syndrome, which is characterized by nocturnal hyperphagia (consumption of 25% or more of the total daily calories after the evening meal) with nocturnal ingestions, insomnia, loss of morning appetite and depression.
Nocturnal sleep-related eating disorder, which is a parasomnia characterized by eating, habitually out-of-control, while in a state of NREM sleep, with no memory of this the next morning.
Gourmand syndrome, a rare condition occurring after damage to the frontal lobe. Individuals develop an obsessive focus on fine foods.
Orthorexia nervosa, a term used by Steven Bratman to describe an obsession with a "pure" diet, in which a person develops an obsession with avoiding unhealthy foods to the point where it interferes with the person's life.
Klüver-Bucy syndrome, caused by bilateral lesions of the medial temporal lobe, includes compulsive eating, hypersexuality, hyperorality, visual agnosia, and docility.
Prader-Willi syndrome, a genetic disorder associated with insatiable appetite and morbid obesity.
Pregorexia, which is characterized by extreme dieting and over-exercising in order to control pregnancy weight gain. Prenatal undernutrition is associated with low birth weight, coronary heart disease, type 2 diabetes, stroke, hypertension, cardiovascular disease risk, and depression.
Muscle dysmorphia is characterized by appearance preoccupation that ones own body is too small, too skinny, insufficiently muscular, or insufficiently lean. Muscle dysmorphia affects mostly males.
Purging disorder. Recurrent purging behavior to influence weight or shape in the absence of binge eating. It is more properly considered a disorder of elimination rather than an eating disorder.
Symptoms and long-term effects
Symptoms and complications vary according to the nature and severity of the eating disorder:
Associated physical symptoms of eating disorders include weakness, fatigue, sensitivity to cold, reduced beard growth in men, reduction in waking erections, reduced libido, weight loss, and growth failure.
Frequent vomiting, which may cause acid reflux or entry of acidic gastric material into the laryngoesophageal tract, can lead to unexplained hoarseness. As such, individuals who induce vomiting as part of their eating disorder, such as those with anorexia nervosa, binge eating-purging type, or those with purging-type bulimia nervosa, are at risk for acid reflux.
Polycystic ovary syndrome (PCOS) is the most common endocrine disorder to affect women. Though often associated with obesity, it can occur in normal-weight individuals. PCOS has been associated with binge eating and bulimic behavior.
Other possible manifestations are dry lips, burning tongue, parotid gland swelling, and temporomandibular disorders.
Psychopathology
The psychopathology of eating disorders centers around body image disturbance, such as concerns with weight and shape; self-worth that is overly dependent on weight and shape; fear of gaining weight even when underweight; denial of how severe the symptoms are; and a distortion in the way the body is experienced.

The main psychopathological features of anorexia were outlined in 1982 as problems in body perception, emotion processing, and interpersonal relationships. Women with eating disorders have greater body dissatisfaction. This impairment of body perception involves vision, proprioception, interoception, and tactile perception. There is an alteration in the integration of signals, in which body parts are experienced as dissociated from the body as a whole. Bruch once theorized that difficult early relationships were related to the cause of anorexia and that primary caregivers can contribute to the onset of the illness.

A prominent feature of bulimia is dissatisfaction with body shape. However, dissatisfaction with body shape is not of diagnostic significance, as it is sometimes present in individuals with no eating disorder. This highly labile feature can fluctuate depending on changes in shape and weight, the degree of control over eating, and mood. In contrast, a necessary diagnostic feature for anorexia nervosa and bulimia nervosa is overvalued ideas about shape and weight that are relatively stable and partially related to the patient's low self-esteem.
Pro-ana subculture
Pro-ana refers to the promotion of behaviors related to the eating disorder anorexia nervosa. Several websites promote eating disorders, and can provide a means for individuals to communicate in order to maintain eating disorders. Members of these websites typically feel that their eating disorder is the only aspect of a chaotic life that they can control. These websites are often interactive and have discussion boards where individuals can share strategies, ideas, and experiences, such as diet and exercise plans that achieve extremely low weights. A study comparing the personal web-blogs that were pro-eating disorder with those focused on recovery found that the pro-eating disorder blogs contained language reflecting lower cognitive processing, used a more closed-minded writing style, contained less emotional expression and fewer social references, and focused more on eating-related contents than did the recovery blogs.
Causes
The causes of eating disorders are not yet clearly established.

Many people with eating disorders also have body image disturbance and a comorbid body dysmorphic disorder, leading them to an altered perception of their body. Studies have found that a high proportion of individuals diagnosed with body dysmorphic disorder also had some type of eating disorder, with 15% of individuals having either anorexia nervosa or bulimia nervosa. This link between body dysmorphic disorder and anorexia stems from the fact that both BDD and anorexia nervosa are characterized by a preoccupation with physical appearance and a distortion of body image.

There are also many other possibilities, such as environmental, social, and interpersonal issues, that could promote and sustain these illnesses. The media are often blamed for the rise in the incidence of eating disorders because media images of the idealized slim physiques of models and celebrities motivate or even pressure people to attempt to achieve slimness themselves. The media are accused of distorting reality, in the sense that people portrayed in the media are either naturally thin and thus unrepresentative of normality, or unnaturally thin as a result of excessive pressure to force their bodies to look like the ideal image. While past findings have described eating disorders as primarily psychological, environmental, and sociocultural, further studies have uncovered evidence of a genetic component.
Genetics
Numerous studies show a genetic predisposition toward eating disorders. Twin studies have found some genetic variance when considering the different criteria of both anorexia nervosa and bulimia nervosa as endophenotypes contributing to the disorders as a whole. A genetic link has been found on chromosome 1 in multiple family members of an individual with anorexia nervosa. An individual who is a first-degree relative of someone who has had or currently has an eating disorder is seven to twelve times more likely to have an eating disorder themselves. Twin studies also show that at least a portion of the vulnerability to developing eating disorders can be inherited, and there is evidence of a genetic locus conferring susceptibility to anorexia nervosa. About 50% of eating disorder cases are attributable to genetics; other cases are due to external causes or developmental problems. There are also other neurobiological factors, tied to emotional reactivity and impulsivity, that could lead to bingeing and purging behaviors.

Epigenetic mechanisms are means by which environmental effects alter gene expression via methods such as DNA methylation; these are independent of and do not alter the underlying DNA sequence. They are heritable, but may also arise throughout the lifespan, and are potentially reversible. Dysregulation of dopaminergic neurotransmission due to epigenetic mechanisms has been implicated in various eating disorders. Other candidate genes for epigenetic studies in eating disorders include leptin, pro-opiomelanocortin (POMC), and brain-derived neurotrophic factor (BDNF).

A genetic correlation has been found between anorexia nervosa and OCD, suggesting a shared etiology. First- and second-degree relatives of probands with OCD have a greater chance of developing anorexia nervosa as genetic relatedness increases.
Psychological
Eating disorders are classified as Axis I disorders in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) published by the American Psychiatric Association. There are various other psychological issues that may factor into eating disorders; some fulfill the criteria for a separate Axis I diagnosis or a personality disorder, which is coded on Axis II and thus considered comorbid to the diagnosed eating disorder. Axis II disorders are subtyped into three "clusters": A, B, and C. The causality between personality disorders and eating disorders has yet to be fully established. Some people have a previous disorder which may increase their vulnerability to developing an eating disorder, while some develop them afterwards. The severity and type of eating disorder symptoms have been shown to affect comorbidity. There has been controversy over various editions of the DSM diagnostic criteria, including the latest edition, DSM-5, published in May 2013.
Cognitive attentional bias
Attentional bias may have an effect on eating disorders. Attentional bias is the preferential attention toward certain types of information in the environment while simultaneously ignoring others. Individuals with eating disorders can be thought to have schemas, knowledge structures, which are dysfunctional in that they may bias judgment, thought, and behavior in a manner that is self-destructive or maladaptive. They may have developed a disordered schema which focuses on body size and eating. Thus, this information is given the highest level of importance and overvalued relative to other cognitive structures. Researchers have found that people who have eating disorders tend to pay more attention to stimuli related to food. For people struggling to recover from an eating disorder or addiction, this tendency to attend to certain signals while discounting others can make recovery that much more difficult.

Studies have used the Stroop task to assess the probable effect of attentional bias on eating disorders, for example by separating food and eating words from body shape and weight words. Such studies have found that anorexic subjects were slower to color-name food-related words than control subjects. Other studies have noted that individuals with eating disorders have significant attentional biases associated with eating and weight stimuli.
Personality traits
There are various childhood personality traits associated with the development of eating disorders, such as perfectionism and neuroticism; these traits are found to link eating disorders and OCD. During adolescence these traits may become intensified due to a variety of physiological and cultural influences, such as the hormonal changes associated with puberty, stress related to the approaching demands of maturity, and sociocultural influences and perceived expectations, especially in areas that concern body image. Eating disorders have been associated with a fragile sense of self and with disordered mentalization. Many personality traits have a genetic component and are highly heritable. Maladaptive levels of certain traits may be acquired as a result of anoxic or traumatic brain injury, neurodegenerative diseases such as Parkinson's disease, neurotoxicity such as lead exposure, bacterial infection such as Lyme disease, or parasitic infection such as Toxoplasma gondii, as well as hormonal influences. While studies are still ongoing using imaging techniques such as fMRI, these traits have been shown to originate in various regions of the brain such as the amygdala and the prefrontal cortex. Disorders in the prefrontal cortex and the executive functioning system have been shown to affect eating behavior.
Celiac disease
People with gastrointestinal disorders may be at greater risk of developing disordered eating practices than the general population, principally restrictive eating disturbances. An association of anorexia nervosa with celiac disease has been found. The role that gastrointestinal symptoms play in the development of eating disorders seems rather complex. Some authors report that unresolved symptoms prior to the diagnosis of gastrointestinal disease may create a food aversion in these persons, causing alterations to their eating patterns. Other authors report that greater symptoms throughout their diagnosis led to greater risk. It has been documented that some people with celiac disease, irritable bowel syndrome, or inflammatory bowel disease who are not conscious of the importance of strictly following their diet choose to consume their trigger foods to promote weight loss. On the other hand, individuals with good dietary management may develop anxiety, food aversion, and eating disorders because of concerns around cross-contamination of their foods. Some authors suggest that medical professionals should evaluate the presence of unrecognized celiac disease in all people with an eating disorder, especially if they present any gastrointestinal symptom (such as decreased appetite, abdominal pain, bloating, distension, vomiting, diarrhea, or constipation), weight loss, or growth failure; and also routinely ask celiac patients about weight or body shape concerns, dieting, or vomiting for weight control, to evaluate the possible presence of eating disorders, especially in women.
Environmental influences
Child maltreatment
Child abuse, which encompasses physical, psychological, and sexual abuse as well as neglect, has been shown to approximately triple the risk of an eating disorder. Sexual abuse appears to roughly double the risk of bulimia; however, the association is less clear for anorexia. The risk of developing an eating disorder increases if the individual grew up in an invalidating environment where displays of emotion were often punished. Abuse in childhood can also produce intolerably difficult emotions that cannot be expressed in a healthy manner; eating disorders then serve as an escape coping mechanism, a means to control and avoid overwhelming negative emotions and feelings. Those who report physical or sexual maltreatment as a child are at an increased risk of developing an eating disorder.
Social isolation
Social isolation has been shown to have a deleterious effect on an individual's physical and emotional well-being. Those who are socially isolated have a higher mortality rate in general compared to individuals who have established social relationships. This effect on mortality is markedly increased in those with pre-existing medical or psychiatric conditions, and has been especially noted in cases of coronary heart disease. "The magnitude of risk associated with social isolation is comparable with that of cigarette smoking and other major biomedical and psychosocial risk factors." (Brummett et al.)
Social isolation can be inherently stressful, depressing, and anxiety-provoking. In an attempt to ameliorate these distressful feelings, an individual may engage in emotional eating, in which food serves as a source of comfort. The loneliness of social isolation and its inherent stressors have thus been implicated as triggering factors in binge eating as well.

Waller, Kennerley and Ohanian (2007) argued that both bingeing–vomiting and restriction are emotion-suppression strategies, but that they are utilized at different times. For example, restriction is used to pre-empt any emotion activation, while bingeing–vomiting is used after an emotion has been activated.
Parental influence
Parental influence has been shown to be an intrinsic component in the development of children's eating behaviors. This influence is manifested and shaped by a variety of diverse factors, such as familial genetic predisposition, dietary choices as dictated by cultural or ethnic preferences, the parents' own body shape and eating patterns, the degree of involvement in and expectations of their children's eating behavior, and the interpersonal relationship of parent and child. This is in addition to the general psychosocial climate of the home and the presence or absence of a nurturing, stable environment. It has been shown that maladaptive parental behavior has an important role in the development of eating disorders. As to the more subtle aspects of parental influence, it has been shown that eating patterns are established in early childhood and that children should be allowed to decide when their appetite is satisfied as early as the age of two. A direct link has been shown between obesity and parental pressure to eat more.

Coercive tactics in regard to diet have not been proven to be efficacious in controlling a child's eating behavior. Affection and attention have been shown to affect the degree of a child's finickiness and their acceptance of a more varied diet.

Adams and Crane (1980) have shown that parents are influenced by stereotypes that shape their perception of their child's body. The conveyance of these negative stereotypes also affects the child's own body image and satisfaction. Hilde Bruch, a pioneer in the field of studying eating disorders, asserts that anorexia nervosa often occurs in girls who are high achievers, obedient, and always trying to please their parents. Their parents tend to be over-controlling and fail to encourage the expression of emotions, inhibiting daughters from accepting their own feelings and desires. Adolescent females in these overbearing families lack the ability to be independent from their families, yet realize the need to be, which often results in rebellion. Controlling their food intake may make them feel better, as it provides them with a sense of control.
Peer pressure
In various studies, such as one conducted by the McKnight Investigators, peer pressure was shown to be a significant contributor to body image concerns and attitudes toward eating among subjects in their teens and early twenties.

Eleanor Mackey and co-author Annette M. La Greca of the University of Miami studied 236 teen girls from public high schools in southeast Florida. "Teen girls' concerns about their own weight, about how they appear to others and their perceptions that their peers want them to be thin are significantly related to weight-control behavior", says psychologist Eleanor Mackey of the Children's National Medical Center in Washington and lead author of the study. "Those are really important."

According to one study, 40% of 9- and 10-year-old girls are already trying to lose weight. Such dieting is reported to be influenced by peer behavior, with many of those individuals on a diet reporting that their friends were also dieting. The number of friends dieting and the number of friends who pressured them to diet also played a significant role in their own choices.

Elite athletes have a significantly higher rate of eating disorders. Female athletes in sports such as gymnastics, ballet, and diving are found to be at the highest risk among all athletes. Women are more likely than men to acquire an eating disorder between the ages of 13 and 25, and 0–15% of those with bulimia and anorexia are men.

Other psychological problems that can contribute to an eating disorder such as anorexia nervosa are depression and low self-esteem. Depression is a state of mind in which emotions are unstable, causing a person's eating habits to change due to sadness and a lack of interest in doing anything. According to PSYCOM, "Studies show that a high percentage of people with an eating disorder will experience depression." Depression can be difficult to escape, and its effects on eating are particularly pronounced in teenagers. Teenagers are strong candidates for anorexia because during the teenage years many things start changing, and they begin to think in certain ways. According to a Life Works article about eating disorders, "People of any age can be affected by pressure from their peers, the media and even their families but it is worse when you're a teenager at school." Teenagers can develop eating disorders such as anorexia due to peer pressure, which can lead to depression. Many teens start down this path by feeling pressure to look a certain way, or pressure not to be different, leading them to eat less and eventually to anorexia, which can do great harm to the body.
Cultural pressure
Western perspective
There is a cultural emphasis on thinness which is especially pervasive in Western society. A child's perception of external pressure to achieve the ideal body represented by the media predicts the child's body image dissatisfaction, body dysmorphic disorder, and an eating disorder. "The cultural pressure on men and women to be perfect is an important predisposing factor for the development of eating disorders." Further, when women of all races base their evaluation of themselves on what is considered the culturally ideal body, the incidence of eating disorders increases.

Socioeconomic status (SES) has been viewed as a risk factor for eating disorders, on the presumption that possessing more resources allows an individual to actively choose to diet and reduce body weight. Some studies have also shown a relationship between increasing body dissatisfaction and increasing SES. However, once high socioeconomic status has been achieved, this relationship weakens and, in some cases, no longer exists.

The media play a major role in the way in which people view themselves. Countless magazine ads and commercials depict thin celebrities such as Lindsay Lohan, Nicole Richie, Victoria Beckham, and Mary-Kate Olsen, who appear to gain nothing but attention from their looks. Society has taught people that being accepted by others is necessary at all costs. This has led to the belief that in order to fit in one must look a certain way. Televised beauty competitions such as the Miss America Competition contribute to the idea of what it means to be beautiful because competitors are evaluated largely on their appearance.

In addition to socioeconomic status, the world of sports is a cultural risk factor. Athletics and eating disorders tend to go hand in hand, especially in sports where weight is a competitive factor. Gymnastics, horseback riding, wrestling, bodybuilding, and dancing are just a few categories of weight-dependent sports. Eating disorders among individuals who participate in competitive activities, especially women, often lead to physical and biological changes related to their weight that can mimic prepubescent stages. Often, as women's bodies change, they lose their competitive edge, which leads them to take extreme measures to maintain their younger body shape. Men often struggle with binge eating followed by excessive exercise, focusing on building muscle rather than losing fat; this goal of gaining muscle is just as much an eating disorder as obsessing over thinness. The following statistics, taken from Susan Nolen-Hoeksema's book (Ab)normal Psychology, show the estimated percentage of athletes who struggle with eating disorders, by category of sport.
Aesthetic sports (dance, figure skating, gymnastics) – 35%
Weight dependent sports (judo, wrestling) – 29%
Endurance sports (cycling, swimming, running) – 20%
Technical sports (golf, high jumping) – 14%
Ball game sports (volleyball, soccer) – 12%

Although most of these athletes develop eating disorders to keep their competitive edge, others use exercise as a way to maintain their weight and figure; this is just as serious as regulating food intake for competition. Even though there is mixed evidence on the point at which athletes are challenged with eating disorders, studies show that, regardless of competition level, all athletes are at higher risk of developing eating disorders than non-athletes, especially those who participate in sports where thinness is a factor.

Pressure from society is also seen within the homosexual community. Gay men are at greater risk of eating disorder symptoms than heterosexual men. Within gay culture, muscularity confers advantages in both social and sexual desirability, and also power. These pressures, and the idea that another homosexual male may desire a mate who is thinner or more muscular, can possibly lead to eating disorders. The higher the eating disorder symptom score reported, the greater the concern about how others perceive them and the more frequent and excessive the exercise sessions. High levels of body dissatisfaction are also linked to external motivation to work out and to older age; however, the thin and muscular body ideal is more prevalent among younger homosexual males than older ones.

Most of the cross-cultural studies use definitions from the DSM-IV-TR, which has been criticized as reflecting a Western cultural bias. Thus, assessments and questionnaires may not be constructed to detect some of the cultural differences associated with different disorders. Also, when looking at individuals in areas potentially influenced by Western culture, few studies have attempted to measure how much an individual has adopted the mainstream culture or retained the traditional cultural values of the area. Lastly, the majority of the cross-cultural studies on eating disorders and body image disturbances were conducted in Western nations rather than in the countries or regions being examined.

While there are many influences on how an individual processes their body image, the media play a major role. Along with the media, parental influence, peer influence, and self-efficacy beliefs also play a large role in an individual's view of themselves. The way the media present images can have a lasting effect on an individual's perception of their body image. Eating disorders are a worldwide issue, and while women are more likely to be affected, eating disorders affect both genders (Schwitzer 2012). Because the media influence eating disorders, whether images are presented in a positive or negative light, they have a responsibility to exercise caution when promoting images that project an ideal that many turn to eating disorders to attain.

To try to address unhealthy body image in the fashion world, in 2015, France passed a law requiring models to be declared healthy by a doctor in order to participate in fashion shows; it also requires retouched images to be marked as such in magazines.

There is a relationship between "thin ideal" social media content and body dissatisfaction and eating disorders among young adult women, especially in the Western hemisphere. New research points to an "internalization" of distorted images online, as well as negative comparisons, among young adult women.
Most studies have been based in the U.S., the U.K., and Australia, places where the thin ideal is strong among women, as is the striving for the "perfect" body.

In addition to mere media exposure, there is an online "pro-eating disorder" community. Through personal blogs and Twitter, this community promotes eating disorders as a "lifestyle" and continuously posts pictures of emaciated bodies and tips on how to stay thin. The hashtag "#proana" (pro-anorexia) is a product of this community, as are images promoting weight loss, tagged with the term "thinspiration". According to social comparison theory, young women have a tendency to compare their appearance to that of others, which can result in a negative view of their own bodies and alteration of eating behaviors, which in turn can develop into disordered eating behaviors.

When body parts are isolated and displayed in the media as objects to be looked at, it is called objectification, and women are affected most by this phenomenon. Objectification increases self-objectification, whereby women judge their own body parts as a means of praise and pleasure for others. There is a significant link between self-objectification, body dissatisfaction, and disordered eating, as the beauty ideal is altered through social media.

Although eating disorders are typically underdiagnosed in people of color, they still experience eating disorders in great numbers. It is thought that the stress faced by people of color in the United States from being multiply marginalized may contribute to their rates of eating disorders. For these women, eating disorders may be a response to environmental stressors such as racism, abuse, and poverty.
African perspective
In many African communities, thinness is generally not seen as an ideal body type, and most pressure to attain a slim figure may stem from influence or exposure to Western culture and ideology. Traditional African cultural ideals are reflected in the practice of some health professionals; in Ghana, pharmacists sell appetite stimulants to women who desire, as Ghanaians put it, to "grow fat". Girls are told that if they wish to find a partner and bear children, they must gain weight. On the contrary, there are certain taboos surrounding a slim body image, specifically in West Africa, where lack of body fat is linked to poverty and HIV/AIDS.

However, the emergence of Western and European influence, specifically with the introduction of fashion and modelling shows and competitions, is changing certain views on body acceptance, and the prevalence of eating disorders has consequently increased. This acculturation is also related to how South Africa is concurrently undergoing rapid, intense urbanization. Such modern development is leading to cultural changes, and professionals predict that rates of eating disorders in this region will increase with urbanization, specifically with changes in identity, body image, and cultural issues. Further, exposure to Western values through private Caucasian schools or caretakers is another possible factor related to acculturation which may be associated with the onset of eating disorders.

Other factors cited as related to the increasing prevalence of eating disorders in African communities include sexual conflicts, such as psychosexual guilt, first sexual intercourse, and pregnancy. Traumatic events related to both family (i.e., parental separation) and eating-related issues are also cited as possible contributors, as are religious fasting, particularly around times of stress, and feelings of self-control.
Asian perspective
The West plays a role in Asia's economic development via foreign investment, advanced technologies entering financial markets, and the arrival of American and European companies in Asia, especially through the outsourcing of manufacturing operations. This exposure to Western culture, especially the media, imparts Western body ideals to Asian society, a process termed Westernization. In part, Westernization fosters eating disorders among Asian populations. However, there are also country-specific influences on the occurrence of eating disorders in Asia.
China
In China, as in other Asian countries, Westernization, migration from rural to urban areas, the after-effects of sociocultural events, and disruptions of social and emotional support are implicated in the emergence of eating disorders. In particular, risk factors for eating disorders include higher socioeconomic status, preference for a thin body ideal, history of child abuse, high anxiety levels, hostile parental relationships, jealousy towards media idols, and above-average scores on the body dissatisfaction and interoceptive awareness sections of the Eating Disorder Inventory. As in the West, researchers have identified the media as a primary source of pressure relating to physical appearance, which may even predict body change behaviors in males and females.
Fiji
While colonized by the British in 1874, Fiji kept a large degree of linguistic and cultural diversity which characterized the ethnic Fijian population. Though gaining independence in 1970, Fiji rejected Western, capitalist values which challenged its mutual trusts, bonds, kinships, and identity as a nation. Similar to studies conducted on Polynesian groups, ethnic Fijian traditional aesthetic ideals reflected a preference for a robust body shape; thus the prevailing pressure to be slim, thought to be associated with diet and disordered eating in many Western societies, was absent in traditional Fiji. Additionally, traditional Fijian values would encourage a robust appetite and a widespread vigilance for, and social response to, weight loss. Individual efforts to reshape the body by dieting or exercise were thus traditionally discouraged.

However, studies conducted in 1995 and 1998 both demonstrated a link between the introduction of television in the country and the emergence of eating disorders in young adolescent ethnic Fijian girls. The quantitative data collected in these studies showed a significant increase in the prevalence of two key indicators of disordered eating: self-induced vomiting and high Eating Attitudes Test-26 scores. These results were recorded following prolonged television exposure in the community and an associated increase in the percentage of households owning television sets. Additionally, qualitative data linked changing attitudes about dieting, weight loss, and aesthetic ideals in the peer environment to Western media images. The impact of television was especially profound given the longstanding social and cultural traditions that had previously rejected the notions of dieting, purging, and body dissatisfaction in Fiji. Additional studies in 2011 found that social network media exposure, independent of direct media and other cultural exposures, was also associated with eating pathology.
Hong Kong
From the early to mid-1990s, a variant form of anorexia nervosa was identified in Hong Kong. This variant did not share features of anorexia in the West, notably "fat-phobia" and distorted body image. Patients attributed their restrictive food intake to somatic complaints, such as epigastric bloating, abdominal or stomach pain, or a lack of hunger or appetite. Compared to Western patients, individuals with this variant demonstrated bulimic symptoms less frequently and tended to have a lower premorbid body mass index. This form challenges the assumption that a "fear of fatness or weight gain" is the defining characteristic of individuals with anorexia nervosa.
India
In the past, the available evidence did not suggest that unhealthy weight loss methods and disordered eating behaviors were common in India, as indicated by stagnant rates of clinically diagnosed eating disorders. However, it appears that rates of eating disorders in urban areas of India are increasing, based on surveys of psychiatrists who were asked whether they perceived eating disorders to be a "serious clinical issue" in India. 23.5% of respondents believed that rates of eating disorders were rising in Bangalore, 26.5% claimed that rates were stagnant, and 42%, the largest percentage, expressed uncertainty. It has been suggested that urbanization and socioeconomic status are associated with increased risk of body weight dissatisfaction. However, due to the physical size of and diversity within India, trends may vary throughout the country.
American perspective
Black and African American
Historically, identifying as African American has been considered a protective factor for body dissatisfaction. Those identifying as African American have been found to have a greater acceptance of larger body image ideals and less internalization of the thin ideal, and African American women have reported the lowest levels of body dissatisfaction among the five major racial/ethnic groups in the US.

However, recent research contradicts these findings, indicating that African American women may exhibit levels of body dissatisfaction comparable to other racial/ethnic minority groups. Even if those who identify as African American do not internalize the thin ideal as strongly as other racial and ethnic groups, they may hold other appearance ideals that promote body shape concerns. Similarly, recent research shows that African Americans exhibit rates of disordered eating that are similar to or even higher than those of their white counterparts.
American Indian and Alaska Native
American Indian and Alaska Native women are more likely than white women to both experience a fear of losing control over their eating and to abuse laxatives and diuretics for weight control purposes. They have comparable rates of binge eating and other disordered weight control behaviors in comparison to other racial groups.
Latinos
Disproportionately high rates of disordered eating and body dissatisfaction have been found in Hispanics in comparison to other racial and ethnic groups. Studies have found significantly more laxative use in those identifying as Hispanic in comparison to non-Hispanic white counterparts. Specifically, those identifying as Hispanic may be at heightened risk of engaging in binge eating and bingeing/purging behaviors.
Food insecurity
Food insecurity is defined as inadequate access to sufficient food, both in terms of quantity and quality, in direct contrast to food security, which is conceptualized as having access to sufficient, safe, and nutritious food to meet dietary needs and preferences. Notably, levels of food security exist on a continuum from reliable access to food to disrupted access to food.
Multiple studies have found food insecurity to be associated with eating pathology. A study conducted on individuals visiting a food bank in Texas found higher food insecurity to be correlated with higher levels of binge eating, overall eating disorder pathology, dietary restraint, compensatory behaviors and weight self-stigma. Findings of a replication study with a larger, more diverse sample mirrored these results, and a study looking at the relationship between food insecurity and bulimia nervosa similarly found greater food insecurity to be associated with elevated levels of eating pathology.
Trauma
One study has found that binge-eating disorder may stem from trauma, with some female patients engaging in disordered eating to numb pain experienced through sexual trauma.
Heterosexism
Some eating disorder patients have suggested that enforced heterosexuality and heterosexism led them to engage in their condition in order to align with norms associated with their gender identity. Families may restrict women's food intake to keep them thin, thereby increasing their ability to attain a male romantic partner.
Mechanisms
Biochemical: Eating behavior is a complex process controlled by the neuroendocrine system, of which the hypothalamic–pituitary–adrenal axis (HPA axis) is a major component. Dysregulation of the HPA axis has been associated with eating disorders, such as irregularities in the manufacture, amount, or transmission of certain neurotransmitters, hormones, or neuropeptides and amino acids such as homocysteine, elevated levels of which are found in AN and BN as well as depression.
Serotonin: a neurotransmitter involved in depression that also has an inhibitory effect on eating behavior.
Norepinephrine is both a neurotransmitter and a hormone; abnormalities in either capacity may affect eating behavior.
Dopamine: in addition to being a precursor of norepinephrine and epinephrine, dopamine is a neurotransmitter which regulates the rewarding property of food.
Neuropeptide Y, also known as NPY, is a hormone that encourages eating and decreases metabolic rate. Blood levels of NPY are elevated in patients with anorexia nervosa, and studies have shown that injection of this hormone into the brains of rats with restricted food intake increases their time spent running on a wheel. Normally the hormone stimulates eating in healthy patients, but under conditions of starvation it increases their activity rate, probably to increase the chance of finding food. The increased levels of NPY in the blood of patients with eating disorders may in some ways explain the instances of extreme over-exercising found in most anorexia nervosa patients.
Leptin and ghrelin: leptin is a hormone produced primarily by the fat cells in the body; it has an inhibitory effect on appetite by inducing a feeling of satiety. Ghrelin is an appetite inducing hormone produced in the stomach and the upper portion of the small intestine. Circulating levels of both hormones are an important factor in weight control. While often associated with obesity, both hormones and their respective effects have been implicated in the pathophysiology of anorexia nervosa and bulimia nervosa. Leptin can also be used to distinguish between constitutional thinness found in a healthy person with a low BMI and an individual with anorexia nervosa.
Gut bacteria and immune system: studies have shown that a majority of patients with anorexia and bulimia nervosa have elevated levels of autoantibodies that affect hormones and neuropeptides that regulate appetite control and the stress response. There may be a direct correlation between autoantibody levels and associated psychological traits. A later study revealed that autoantibodies reactive with alpha-MSH are, in fact, generated against ClpB, a protein produced by certain gut bacteria, e.g. Escherichia coli. The ClpB protein was identified as a conformational antigen mimetic of alpha-MSH. In patients with eating disorders, plasma levels of anti-ClpB IgG and IgM correlated with patients' psychological traits.
Infection: PANDAS is an abbreviation for the controversial Pediatric Autoimmune Neuropsychiatric Disorders Associated with Streptococcal Infections hypothesis. Children with PANDAS are postulated to "have obsessive-compulsive disorder (OCD) and/or tic disorders such as Tourette syndrome, and in whom symptoms worsen following infections such as strep throat". (NIMH) PANDAS and the broader PANS are hypothesized to be a precipitating factor in the development of anorexia nervosa in some cases, (PANDAS AN).
Lesions: studies have shown that lesions to the right frontal lobe or temporal lobe can cause the pathological symptoms of an eating disorder.
Tumors: tumors in various regions of the brain have been implicated in the development of abnormal eating patterns.
Brain calcification: a study highlights a case in which prior calcification of the right thalamus may have contributed to the development of anorexia nervosa.
Somatosensory homunculus: the representation of the body located in the somatosensory cortex, first described by the renowned neurosurgeon Wilder Penfield. The illustration was originally termed "Penfield's homunculus", homunculus meaning little man. "In normal development this representation should adapt as the body goes through its pubertal growth spurt. However, in AN it is hypothesized that there is a lack of plasticity in this area, which may result in impairments of sensory processing and distortion of body image." (Bryan Lask; also proposed by V. S. Ramachandran)
Obstetric complications: studies have shown that maternal smoking and obstetric and perinatal complications such as maternal anemia, very preterm birth (less than 32 weeks), being born small for gestational age, neonatal cardiac problems, preeclampsia, placental infarction, and sustaining a cephalohematoma at birth increase the risk of developing either anorexia nervosa or bulimia nervosa. Some of this developmental risk, as in the cases of placental infarction, maternal anemia, and cardiac problems, may involve intrauterine hypoxia; umbilical cord occlusion or cord prolapse may cause ischemia, resulting in cerebral injury. The prefrontal cortex in the fetus and neonate is highly susceptible to damage from oxygen deprivation, which has been shown to contribute to executive dysfunction and ADHD, and may affect personality traits associated with both eating disorders and comorbid disorders, such as impulsivity, mental rigidity, and obsessionality. The problem of perinatal brain injury, in terms of the costs to society and to the affected individuals and their families, is extraordinary. (Yafeng Dong, PhD)
Symptom of starvation: evidence suggests that symptoms of eating disorders may actually be symptoms of starvation itself, not of a mental disorder. In a study involving thirty-six healthy young men who were subjected to semi-starvation, the men soon began displaying symptoms commonly found in patients with eating disorders. The healthy men ate approximately half of what they had become accustomed to eating and soon began developing symptoms and thought patterns (preoccupation with food and eating, ritualistic eating, impaired cognitive ability, and other physiological changes such as decreased body temperature) that are characteristic of anorexia nervosa. The men in the study also developed hoarding and obsessive collecting behaviors, even though they had no use for the items, revealing a possible connection between eating disorders and obsessive-compulsive disorder.
Diagnosis
According to Pritts and Susman, "The medical history is the most powerful tool for diagnosing eating disorders". There are many medical disorders that mimic eating disorders and comorbid psychiatric disorders. Early detection and intervention can ensure better recovery and can greatly improve the quality of life of these patients. In the past 30 years eating disorders have become increasingly conspicuous, and it is uncertain whether the changes in presentation reflect a true increase. Anorexia nervosa and bulimia nervosa are the most clearly defined subgroups of a wider range of eating disorders. Many patients present with subthreshold expressions of the two main diagnoses; others present with different patterns and symptoms.

As eating disorders, especially anorexia nervosa, are thought of as being associated with young, white females, diagnosis of eating disorders in other races happens more rarely. In one study, when clinicians were presented with identical case studies demonstrating disordered eating symptoms in Black, Hispanic, and white women, 44% noted the white woman's behavior as problematic; 41% identified the Hispanic woman's behavior as problematic; and only 17% of the clinicians noted the Black woman's behavior as problematic (Gordon, Brattole, Wingate, & Joiner, 2006).
Medical
The diagnostic workup typically includes a complete medical and psychosocial history and follows a rational and formulaic approach to the diagnosis. Neuroimaging using fMRI, MRI, PET, and SPECT scans has been used to detect cases in which a lesion, tumor, or other organic condition has been either the sole causative or a contributory factor in an eating disorder. "Right frontal intracerebral lesions with their close relationship to the limbic system could be causative for eating disorders, we therefore recommend performing a cranial MRI in all patients with suspected eating disorders" (Trummer M et al. 2002); "intracranial pathology should also be considered however certain is the diagnosis of early-onset anorexia nervosa. Second, neuroimaging plays an important part in diagnosing early-onset anorexia nervosa, both from a clinical and a research perspective" (O'Brien et al. 2001).
Psychological
After organic causes have been ruled out and an initial diagnosis of an eating disorder has been made by a medical professional, a trained mental health professional aids in the assessment and treatment of the underlying psychological components of the eating disorder and any comorbid psychological conditions. The clinician conducts a clinical interview and may employ various psychometric tests. Some are general in nature, while others were devised specifically for use in the assessment of eating disorders. Among the general tests that may be used are the Hamilton Depression Rating Scale and the Beck Depression Inventory. Longitudinal research has shown that the likelihood that a young adult female will develop bulimia increases with current psychological pressure, and that as the person ages and matures, her emotional problems change or are resolved and the symptoms then decline.

Several types of scales are currently used: (a) self-report questionnaires, such as the EDI-3, BSQ, TFEQ, MAC, BULIT-R, QEWP-R, EDE-Q, EAT, and NEQ; (b) semi-structured interviews, such as the SCID-I and EDE; and (c) unstructured clinical interviews or observer-based rating scales, such as the Morgan–Russell scale. The majority of these scales were described and used in adult populations. Of all the scales evaluated and analyzed, only three are described for child populations: the EAT-26 (children above 16 years), EDI-3 (children above 13 years), and ANSOCQ (children above 13 years). It is essential to develop specific scales for people under 18 years of age, given the increasing incidence of eating disorders among children and the need for early detection and appropriate intervention. Moreover, accurate scales and telemedicine testing and diagnosis tools were of high importance during the COVID-19 pandemic (Leti, Garner et al., 2020).
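To make the self-report scoring concrete, the sketch below tallies an EAT-26-style screening total. The item weights (3/2/1/0 from "always" down to "sometimes"/"rarely"/"never") and the referral cutoff of 20 are stated here as assumptions for illustration; the published manual, which also reverse-scores one item, is authoritative.

```python
# Illustrative scoring of an EAT-26-style screening questionnaire.
# Assumed weights and cutoff; not the official scoring manual.
WEIGHTS = {"always": 3, "usually": 2, "often": 1,
           "sometimes": 0, "rarely": 0, "never": 0}
CUTOFF = 20  # assumed threshold for referral to further assessment

def screening_total(responses: list[str]) -> int:
    """Sum the weighted responses across all 26 items."""
    if len(responses) != 26:
        raise ValueError("expected 26 item responses")
    return sum(WEIGHTS[r] for r in responses)

# Hypothetical response set: 5 "always", 3 "usually", 18 "never".
responses = ["always"] * 5 + ["usually"] * 3 + ["never"] * 18
score = screening_total(responses)  # 5*3 + 3*2 = 21
print(score, "meets cutoff" if score >= CUTOFF else "below cutoff")
```

A score at or above the assumed cutoff would prompt a fuller clinical interview rather than constitute a diagnosis by itself.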
Differential diagnoses
There are multiple medical conditions which may be misdiagnosed as a primary psychiatric disorder, complicating or delaying treatment. These may have a synergistic effect on conditions which mimic an eating disorder or on a properly diagnosed eating disorder.
Lyme disease is known as the "great imitator", as it may present as a variety of psychiatric or neurological disorders including anorexia nervosa.
Gastrointestinal diseases, such as celiac disease, Crohn's disease, peptic ulcer, eosinophilic esophagitis, or non-celiac gluten sensitivity, among others. Celiac disease is also known as the "great imitator", because it may involve several organs and cause an extensive variety of non-gastrointestinal symptoms, such as psychiatric and neurological disorders, including anorexia nervosa.
Addison's disease is a disorder of the adrenal cortex which results in decreased hormonal production. Addison's disease, even in subclinical form, may mimic many of the symptoms of anorexia nervosa.
Gastric adenocarcinoma is one of the most common forms of cancer in the world. Complications due to this condition have been misdiagnosed as an eating disorder.
Hypothyroidism, hyperthyroidism, hypoparathyroidism and hyperparathyroidism may mimic some of the symptoms of, can occur concurrently with, be masked by or exacerbate an eating disorder.
Toxoplasma seropositivity: even in the absence of symptomatic toxoplasmosis, Toxoplasma gondii exposure has been linked to changes in human behavior and to psychiatric disorders, including those comorbid with eating disorders, such as depression. In reported case studies, the response to antidepressant treatment improved only after adequate treatment for toxoplasma.
Neurosyphilis: it is estimated that there may be up to one million cases of untreated syphilis in the US alone. "The disease can present with psychiatric symptoms alone, psychiatric symptoms that can mimic any other psychiatric illness." Many of the manifestations may appear atypical. Up to 1.3% of short-term psychiatric admissions may be attributable to neurosyphilis, with a much higher rate in the general psychiatric population. (Ritchie M, Perdigao J)
Dysautonomia: a wide variety of autonomic nervous system (ANS) disorders may cause a wide variety of psychiatric symptoms, including anxiety, panic attacks, and depression. Dysautonomia usually involves failure of the sympathetic or parasympathetic components of the ANS, but may also involve excessive ANS activity. Dysautonomia can occur in conditions such as diabetes and alcoholism.

Psychological disorders which may be confused with an eating disorder, or be comorbid with one, include:
Emetophobia is an anxiety disorder characterized by an intense fear of vomiting. A person so impacted may develop rigorous standards of food hygiene, such as not touching food with their hands. They may become socially withdrawn to avoid situations which in their perception may make them vomit. Many who have emetophobia are diagnosed with anorexia or self-starvation. In severe cases of emetophobia they may drastically reduce their food intake.
Phagophobia is an anxiety disorder characterized by a fear of eating; it is usually initiated by an adverse experience while eating, such as choking or vomiting. Persons with this disorder may present with complaints of pain while swallowing.
Body dysmorphic disorder (BDD) is listed as an obsessive-compulsive disorder that affects up to 2% of the population. BDD is characterized by excessive rumination over an actual or perceived physical flaw. BDD has been diagnosed equally among men and women. While BDD has been misdiagnosed as anorexia nervosa, it also occurs comorbidly in 39% of eating disorder cases. BDD is a chronic and debilitating condition which may lead to social isolation, major depression, and suicidal ideation and attempts. Neuroimaging studies measuring the response to facial recognition have shown activity predominantly in the left hemisphere, in the left lateral prefrontal cortex, lateral temporal lobe, and left parietal lobe, indicating a hemispheric imbalance in information processing. There is a reported case of the development of BDD in a 21-year-old male following an inflammatory brain process; neuroimaging showed the presence of new atrophy in the frontotemporal region.
Prevention
Prevention aims to promote healthy development before the occurrence of eating disorders. It also aims at the early identification of an eating disorder before it is too late to treat. Children as young as ages 5–7 are aware of cultural messages regarding body image and dieting. Prevention involves bringing these issues to light. The following topics can be discussed with young children (as well as teens and young adults).
Emotional Bites: a simple way to discuss emotional eating is to ask children why they might eat when they are not hungry. Talk about more effective ways to cope with emotions, emphasizing the value of sharing feelings with a trusted adult.
Say No to Teasing: another concept is to emphasize that it is wrong to say hurtful things about other people's body sizes.
Body Talk: emphasize the importance of listening to one's body. That is, eating when you are hungry (not starving) and stopping when you are satisfied (not stuffed). Children intuitively grasp these concepts.
Fitness Comes in All Sizes: educate children about the genetics of body size and the normal changes occurring in the body. Discuss their fears and hopes about growing bigger. Focus on fitness and a balanced diet.

The Internet and modern technologies provide new opportunities for prevention. Online programs have the potential to increase the use of prevention programs. The development and practice of prevention programs via online sources makes it possible to reach a wide range of people at minimal cost, and such an approach can also make prevention programs sustainable.
Treatment
Treatment varies according to type and severity of eating disorder, and often more than one treatment option is utilized.
Various forms of cognitive behavioral therapy have been developed for eating disorders and found to be useful. If a person is experiencing comorbidity between an eating disorder and OCD, exposure and response prevention, coupled with weight restoration and serotonin reuptake inhibitors, has proven most effective. Other forms of psychotherapy can also be useful.

Family doctors play an important role in early treatment of people with eating disorders by encouraging those who are reluctant to see a psychiatrist. Treatment can take place in a variety of different settings, such as community programs, hospitals, day programs, and groups. The American Psychiatric Association (APA) recommends a team approach to the treatment of eating disorders. The members of the team are usually a psychiatrist, a therapist, and a registered dietitian, but other clinicians may be included.

That said, some treatment methods are:
Cognitive behavioral therapy (CBT), which postulates that an individual's feelings and behaviors are caused by their own thoughts rather than by external stimuli such as other people, situations, or events; the idea is to change how a person thinks and reacts to a situation, even if the situation itself does not change. See cognitive behavioral treatment of eating disorders.
Acceptance and commitment therapy: a type of CBT
Cognitive behavioral therapy enhanced (CBT-E): the most widespread cognitive behavioral psychotherapy designed specifically for eating disorders
Cognitive remediation therapy (CRT), a set of cognitive drills or compensatory interventions designed to enhance cognitive functioning.
Exposure and response prevention: a type of CBT involving gradual exposure to anxiety-provoking situations in a safe environment, to learn how to cope with the discomfort
The Maudsley anorexia nervosa treatment for adults (MANTRA), which focuses on addressing rigid information processing styles, emotional avoidance, pro-anorectic beliefs, and difficulties with interpersonal relationships. These four targets of treatment are proposed to be core maintenance factors within the Cognitive-Interpersonal Maintenance Model of anorexia nervosa.
Dialectical behavior therapy
Family therapy including "conjoint family therapy" (CFT), "separated family therapy" (SFT) and Maudsley Family Therapy.
Behavioral therapy: focuses on gaining control and changing unwanted behaviors.
Interpersonal psychotherapy (IPT)
Cognitive Emotional Behaviour Therapy (CEBT)
Art therapy
Nutrition counseling and Medical nutrition therapy
Self-help and guided self-help have been shown to be helpful in AN, BN, and BED; this includes support groups and self-help groups such as Eating Disorders Anonymous and Overeaters Anonymous. Meaningful relationships are often a path to recovery; having a partner, friend, or someone else close may lead a person away from problematic eating, according to professor Cynthia M. Bulik.
Psychoanalytic psychotherapy
Inpatient care

There are few studies on the cost-effectiveness of the various treatments. Treatment can be expensive; due to limitations in health care coverage, people hospitalized with anorexia nervosa may be discharged while still underweight, resulting in relapse and rehospitalization. Research has found that comorbidity between an eating disorder (e.g., anorexia nervosa, bulimia nervosa, or binge eating) and OCD does not affect the length of time patients spend in treatment, but can negatively impact treatment outcomes.

For children with anorexia, the only well-established treatment is family treatment-behavior. For other eating disorders in children, however, there are no well-established treatments, though family treatment-behavior has been used in treating bulimia.

A 2019 Cochrane review examined studies comparing the effectiveness of inpatient versus outpatient models of care for eating disorders. Four trials including 511 participants were studied, but the review was unable to draw any definitive conclusions as to the superiority of one model over another.
Barriers to treatment
A variety of barriers to eating disorder treatment have been identified, typically grouped into individual and systemic barriers. Individual barriers include shame, fear of stigma, cultural perceptions, minimizing the seriousness of the problem, unfamiliarity with mental health services, and a lack of trust in mental health professionals. Systemic barriers include language differences, financial limitations, lack of insurance coverage, inaccessible health care facilities, time conflicts, long waits, lack of transportation, and lack of child care. These barriers may be particularly exacerbated for those who identify outside of the skinny, white, affluent girl stereotype that dominates the field of eating disorders, such that those who do not identify with this stereotype are much less likely to seek treatment. Conditions during the COVID-19 pandemic may increase the difficulties experienced by those with eating disorders, and the risk that otherwise healthy individuals may develop eating disorders. The pandemic has been a stressful life event for everyone, increasing anxiety and isolation, disrupting normal routines, creating economic strain and food insecurity, and making it more difficult and stressful to obtain needed resources, including food and medical treatment.
The COVID-19 pandemic in England exposed a dramatic rise in demand for eating disorder services which the English NHS struggled to meet. The National Institute for Health and Care Excellence and NHS England both advised that services should not impose thresholds using body mass index or duration of illness to determine whether treatment for eating disorders should be offered, but there were continuing reports that these recommendations were not followed. In terms of access to treatment, therapy sessions have generally switched from in-person to video calls. This may actually help people who previously had difficulty finding a therapist with experience in treating eating disorders, for example, those who live in rural areas.
Studies suggest that virtual (telehealth) CBT can be as effective as face-to-face CBT for bulimia and other mental illnesses. To help patients cope with conditions during the pandemic, therapists may have to particularly emphasize strategies to create structure where little is present, build interpersonal connections, and identify and avoid triggers.
Medication
Orlistat is used in obesity treatment. Olanzapine seems to promote weight gain and to ameliorate obsessional behaviors concerning weight gain. Zinc supplements have been shown to be helpful, and cortisol is also being investigated. Two pharmaceuticals, Prozac and Vyvanse, have been approved by the FDA to treat bulimia nervosa and binge-eating disorder, respectively. Olanzapine has also been used off-label to treat anorexia nervosa. Studies are also underway to explore psychedelic and psychedelic-adjacent medicines such as MDMA, psilocybin and ketamine for anorexia nervosa and binge-eating disorder.
Outcomes
For anorexia nervosa, bulimia nervosa, and binge eating disorder, there is general agreement that full recovery rates range from 50% to 85%, with larger proportions of people experiencing at least partial remission. The illness can be a lifelong struggle or can be overcome within months.
Miscarriages: Pregnant women with binge eating disorder have been shown to have a greater chance of miscarriage than pregnant women with other eating disorders. In one study, among a group of pregnant women being evaluated, 46.7% of pregnancies in women diagnosed with BED ended in miscarriage, compared with 23.0% in the control group. In the same study, 21.4% of women diagnosed with bulimia nervosa had pregnancies that ended in miscarriage, compared with only 17.7% of controls.
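Expressed as rough relative risks (an illustrative back-calculation from the percentages above, ignoring confidence intervals and confounders), the miscarriage figures give
\[ \mathrm{RR}_{\mathrm{BED}} = \frac{46.7}{23.0} \approx 2.0, \qquad \mathrm{RR}_{\mathrm{BN}} = \frac{21.4}{17.7} \approx 1.2, \]
that is, miscarriage was about twice as frequent in the BED group as in its controls, but only modestly elevated in the BN group.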
Relapse: An individual who is in remission from BN or EDNOS (eating disorder not otherwise specified) is at high risk of falling back into the habit of self-harm. Factors such as job-related stress, pressures from society, and other stressful events can push a person back toward what they feel will ease the pain. One study tracked a group of people diagnosed with either BN or EDNOS for 60 months and then recorded whether each person had relapsed. A person previously diagnosed with EDNOS had a 41% chance of relapsing, while a person with BN had a 47% chance.
Attachment insecurity: People who show signs of attachment anxiety will likely have trouble communicating their emotional status and trouble seeking effective social support. Signs include not showing recognition of their caregiver or not indicating when they are feeling pain. In a clinical sample, more severe eating disorder symptoms at the pretreatment stage of a patient's recovery correspond directly to higher attachment anxiety. The more this symptom increases, the more difficult it is to achieve reduction of the eating disorder prior to treatment.
Impaired decision making: Studies have found mixed results on the relationship between eating disorders and decision making. Researchers have repeatedly found that patients with anorexia were less capable of thinking about the long-term consequences of their decisions when completing the Iowa Gambling Task, a test designed to measure a person's decision-making capabilities. Consequently, they were at higher risk of making hasty, harmful choices. Anorexia symptoms include an increased chance of developing osteoporosis. Thinning of the hair, as well as dry hair and skin, is also very common. If the patient receives no treatment, the muscles of the heart begin to change, causing an abnormally slow heart rate along with low blood pressure; heart failure then becomes a major concern. Muscles throughout the body begin to lose their strength, causing the individual to feel faint, drowsy, and weak. Along with these symptoms, the body begins to grow a layer of hair called lanugo, a response to the lack of heat and insulation due to the low percentage of body fat. Bulimia symptoms include heart problems such as an irregular heartbeat, which can lead to heart failure and death. These occur because of the electrolyte imbalance that results from the constant binge and purge process. The probability of a gastric rupture, a sudden rupture of the stomach lining that can be fatal, also increases. The acids contained in vomit can cause a rupture of the esophagus as well as tooth decay. As a result of laxative abuse, irregular bowel movements may occur along with constipation. Sores along the lining of the stomach called peptic ulcers begin to appear, and the chance of developing pancreatitis increases. Binge eating symptoms include high blood pressure, which can cause heart disease if untreated. Many patients experience an increase in cholesterol levels. The chance of being diagnosed with gallbladder disease, which affects the digestive tract, increases.
Risk of death
Eating disorders result in about 7,000 deaths a year as of 2010, making them the mental illnesses with the highest mortality rate. Anorexia carries a risk of death that is increased about five-fold, with 20% of these deaths the result of suicide. Rates of death in bulimia and other disorders are similar, at about a two-fold increase. The mortality rate for those with anorexia is 5.4 per 1000 individuals per year, of which roughly 1.3 deaths per 1000 per year are due to suicide. A person who is or had been in an inpatient setting had a rate of 4.6 deaths per 1000. Among individuals with bulimia, about 2 per 1000 die per year, and among those with EDNOS about 3.3 per 1000 die per year.
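As a worked illustration of what such a rate implies (a back-of-the-envelope calculation assuming the rate stays constant and ignoring competing risks), an annual mortality rate of 5.4 per 1000 compounds over ten years to
\[ 1 - (1 - 0.0054)^{10} \approx 0.053, \]
or roughly a 5.3% cumulative risk of death over a decade.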
Epidemiology
In the developed world, binge eating disorder affects about 1.6% of women and 0.8% of men in a given year. Anorexia affects about 0.4% and bulimia affects about 1.3% of young women in a given year. Up to 4% of women have anorexia, 2% have bulimia, and 2% have binge eating disorder at some point in time. Anorexia and bulimia occur nearly ten times more often in females than males. Typically, they begin in late childhood or early adulthood. Rates of other eating disorders are not clear. Rates of eating disorders appear to be lower in less developed countries. In the United States, twenty million women and ten million men have an eating disorder at least once in their lifetime.
Anorexia
Rates of anorexia in the general population among women aged 11 to 65 range from 0 to 2.2%, and around 0.3% among men. The incidence of female cases in general medicine or specialized consultation in town is low, ranging from 4.2 to 8.3 per 100,000 individuals per year. The incidence of AN ranges from 109 to 270 per 100,000 individuals per year. Mortality varies according to the population considered. AN has one of the highest mortality rates among mental illnesses. The rates observed are 6.2 to 10.6 times greater than those observed in the general population, for follow-up periods ranging from 10 to 13 years. Standardized mortality ratios for anorexia vary from 1.36 to 20.
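For context, the standardized mortality ratio (SMR) compares deaths observed in a patient cohort with the number expected from general-population rates (the standard epidemiological definition, stated here for readers of the figures above and below):
\[ \mathrm{SMR} = \frac{O}{E}, \]
where O is the number of observed deaths and E the number expected; an SMR of 20 therefore means twenty times as many deaths as would be expected in a comparable general population.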
Bulimia
Bulimia affects females nine times more often than males. Approximately one to three percent of women develop bulimia in their lifetime. About 2% to 3% of women in the United States are currently affected. New cases occur in about 12 per 100,000 population per year. The standardized mortality ratio for bulimia is 1 to 3.
Binge eating disorder
Reported rates vary from 1.3 to 30% among subjects seeking weight-loss treatment. Based on surveys, BED appears to affect about 1–2% of people at some point in their life, with 0.1–1% of people affected in a given year. BED is more common among females than males. There have been no published studies investigating the effects of BED on mortality, although it is comorbid with disorders that are known to increase mortality risks.
Economics
The number of cost-effectiveness studies regarding eating disorders has been increasing since 2017.
In 2011 United States dollars, annual healthcare costs were $1,869 greater among individuals with eating disorders than in the general population. The added presence of mental health comorbidities was associated with a higher, but not statistically significant, cost difference of $1,993.
In 2013 Canadian dollars, the total hospital cost per admission for treatment of anorexia nervosa was $51,349 and the total societal cost was $54,932 based on an average length of stay of 37.9 days. For every unit increase in body mass index, there was also a 15.7% decrease in hospital cost.
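A minimal sketch of how the reported 15.7% figure could be applied, assuming, purely for illustration, that the decrease compounds multiplicatively per BMI unit (the study may have modeled it differently, e.g., linearly) and reusing the $51,349 admission cost quoted above:

    # Illustrative only: projects hospital cost per admission as BMI rises,
    # assuming the 15.7% decrease per BMI unit applies multiplicatively.
    def projected_cost(base_cost: float, bmi_units_gained: float,
                       drop_per_unit: float = 0.157) -> float:
        """Estimated cost after a given increase in body mass index."""
        return base_cost * (1.0 - drop_per_unit) ** bmi_units_gained

    # A patient admitted at $51,349 whose BMI rises by 2 units:
    print(round(projected_cost(51349, 2)))  # ~36491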
For Ontario, Canada patients who received specialized inpatient care for an eating disorder both out of country and in province, annual total healthcare costs were about $11 million before 2007 and $6.5 million in the years afterwards. For those treated out of country alone, costs were about $5 million before 2007 and $2 million in the years afterwards.
Evolutionary perspective
In recent years, evolutionary psychiatry as an emerging scientific discipline has been studying mental disorders from an evolutionary perspective. Whether eating disorders have evolutionary functions or are new, modern "lifestyle" problems is still debated.
See also
Eating disorder not otherwise specified
Weight phobia
References
External links
Eating disorder at Curlie | 149 |
Ectasia | Ectasia, also called ectasis, is dilation or distention of a tubular structure, either normal or pathophysiologic, but usually the latter (except in atelectasis, where absence of ectasis is the problem).
Specific conditions
Bronchiectasis, chronic dilatation of the bronchi
Duct ectasia of breast, a dilated milk duct. Duct ectasia syndrome is a synonym for nonpuerperal (unrelated to pregnancy and breastfeeding) mastitis.
Dural ectasia, dilation of the dural sac surrounding the spinal cord, usually in the very low back.
Pyelectasis, dilation of a part of the kidney, most frequently seen in prenatal ultrasounds. It usually resolves on its own.
Rete tubular ectasia, dilation of tubular structures in the testicles. It is usually found in older men.
Acral arteriolar ectasia
Corneal ectasia (secondary keratoconus), a bulging of the cornea.
Vascular ectasias
Most broadly, any abnormal dilatation of a blood vessel, including aneurysms
Annuloaortic ectasia, dilation of the aorta. It can be associated with Marfan syndrome.
Dolichoectasias, weakening of arteries, usually caused by high blood pressure.
Intracranial dolichoectasias, dilation of arteries inside the head.
Gastric antral vascular ectasia, dilation of small blood vessels in the last part of the stomach.
Telangiectasias are small dilated blood vessels found anywhere on the body, but commonly seen on the face around the nose, cheeks, and chin.
Venous ectasia, dilation of veins or venules, such as:
Chronic venous insufficiency, often in the leg
Jugular vein ectasia, in the jugular veins returning blood from the head
See also
All pages with titles beginning with Ectasia
All pages with titles containing Ectasia
References | 150
Ehrlichiosis | Ehrlichiosis is a tick-borne
bacterial infection, caused by bacteria of the family Anaplasmataceae, genera Ehrlichia and Anaplasma. These obligate intracellular bacteria infect and kill white blood cells.
The average reported annual incidence is on the order of 2.3 cases per million people.
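For readers unfamiliar with how such figures are derived, incidence per million is a simple normalization (a standard definition; the 50-million population below is chosen purely for illustration):
\[ I = \frac{\text{new cases in a year}}{\text{population at risk}} \times 10^{6}, \]
so an incidence of 2.3 per million corresponds to about 115 new cases per year in a population of 50 million (2.3 × 50).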
Types
Six (see note below) species have been shown to cause human infection:
Anaplasma phagocytophilum causes human granulocytic anaplasmosis. A. phagocytophilum is endemic to New England and the north-central and Pacific regions of the United States.
Ehrlichia ewingii causes human ewingii ehrlichiosis. E. ewingii primarily infects deer and dogs (see Ehrlichiosis (canine)). E. ewingii is most common in the south-central and southeastern states.
Ehrlichia chaffeensis causes human monocytic ehrlichiosis. E. chaffeensis is most common in the south-central and southeastern states.
Ehrlichia canis
Neorickettsia sennetsu
Ehrlichia muris eauclairensis
The latter three infections are not well studied. Ehrlichia muris eauclairensis was only recently discovered and has low reporting numbers, both because it is relatively new and because its symptoms are very similar to those caused by other Ehrlichia bacteria.
In 2008, human infection by a Panola Mountain (in Georgia, USA) Ehrlichia species was reported. On August 3, 2011, infection by a yet-unnamed bacterium in the genus Ehrlichia was reported, carried by deer ticks and causing flu-like symptoms in at least 25 people in Minnesota and Wisconsin. Until then, human ehrlichiosis was thought to be very rare or absent in both states. The new species, which is genetically very similar to an Ehrlichia species found in Eastern Europe and Japan called E. muris, was identified at a Mayo Clinic Health System hospital in Eau Claire. Ehrlichia species are transported between cells through the host-cell filopodia during the initial stages of infection, whereas in the final stages of infection the pathogen ruptures the host cell membrane.
Signs and symptoms
Specific symptoms include fever, chills, severe headaches, muscle aches, nausea, vomiting, diarrhea, loss of appetite, confusion, and a splotchy or pinpoint rash. More severe symptoms include brain or nervous system damage, respiratory failure, uncontrollable bleeding, organ failure, and death. Ehrlichiosis can also blunt the immune system by suppressing production of TNF-alpha, which may lead to opportunistic infections such as candidiasis. Most of the signs and symptoms of ehrlichiosis can likely be ascribed to the immune dysregulation that it causes. A "toxic shock-like" syndrome is seen in some severe cases of ehrlichiosis. Some cases can present with purpura, and in one such case the organisms were present in such overwhelming numbers that in 1991, Dr. Aileen Marty of the AFIP was able to demonstrate the bacteria in human tissues using standard stains, and later proved that the organisms were indeed Ehrlichia using immunoperoxidase stains. Experiments in mouse models further support this hypothesis, as mice lacking TNF-alpha I/II receptors are resistant to liver injury caused by Ehrlichia infection. About 3% of human monocytic ehrlichiosis cases result in death; however, these deaths occur "most commonly in immunosuppressed individuals who develop respiratory distress syndrome, hepatitis, or opportunistic nosocomial infections."
Prevention
No human vaccine is available for ehrlichiosis. Tick control is the main preventive measure against the disease. However, in late 2012, a breakthrough in the prevention of canine monocytic ehrlichiosis was announced when a vaccine was accidentally discovered by Prof. Shimon Harrus, Dean of the Hebrew University of Jerusalem's Koret School of Veterinary Medicine. Measures of tick bite prevention include staying out of the tall grassy areas where ticks tend to live, treating clothes and gear that a tick could jump onto, using EPA-approved insect repellent, performing tick checks on all humans, animals, and gear that potentially came into contact with ticks, and showering soon after being in an area where ticks might be.
Treatment
Doxycycline and minocycline are the medications of choice. For people allergic to antibiotics of the tetracycline class, rifampin is an alternative. Early clinical experience suggested that chloramphenicol may also be effective, but in vitro susceptibility testing revealed resistance.
Epidemiology
Ehrlichiosis is a nationally notifiable disease in the United States. Cases have been reported in every month of the year, but most are reported during April–September, the peak months for tick activity in the United States. The majority of cases occur in the United States, with the most affected areas including "the southeastern and south-central United States, from the East Coast extending westward to Texas." Since cases were first reported to the CDC in 2000, reports have increased: 200 cases were reported in 2000 and 2,093 in 2019. Fortunately, the "proportion of ehrlichiosis patients that died as a result of infection" has gone down since 2000. From 2008 to 2012, the average yearly incidence of ehrlichiosis was 3.2 cases per million persons, more than twice the estimated incidence for 2000–2007. The incidence rate increases with age and is highest among those aged 60–69 years. Children less than 10 years old and adults aged 70 years and older have the highest case-fatality rates. A documented higher risk of death exists among persons who are immunosuppressed.
See also
Ehrlichia Wisconsin HM543746
References
External links
Aayushi Pratap: Dog ticks may get more of a taste for human blood as the climate changes. On: ScienceNews. November 30, 2020. | 151 |
Endometriosis | Endometriosis is a disease of the female reproductive system in which cells similar to those in the endometrium, the layer of tissue that normally covers the inside of the uterus, grow outside the uterus. Most often this is on the ovaries, fallopian tubes, and tissue around the uterus and ovaries; in rare cases it may also occur in other parts of the body. Some symptoms include pelvic pain, heavy periods, pain with bowel movements, and infertility. Nearly half of those affected have chronic pelvic pain, while in 70% pain occurs during menstruation. Pain during sexual intercourse is also common. Infertility occurs in up to half of affected individuals. About 25% of individuals have no symptoms and 85% of those seen with infertility in a tertiary center have no pain. Endometriosis can have both social and psychological effects. The cause is not entirely clear. Risk factors include having a family history of the condition. The areas of endometriosis bleed each month (menstrual period), resulting in inflammation and scarring. The growths due to endometriosis are not cancer. Diagnosis is usually based on symptoms in combination with medical imaging; however, biopsy is the surest method of diagnosis. Other causes of similar symptoms include pelvic inflammatory disease, irritable bowel syndrome, interstitial cystitis, and fibromyalgia. Endometriosis is commonly misdiagnosed and people often report being incorrectly told their symptoms are trivial or normal. People with endometriosis see an average of seven physicians before receiving a correct diagnosis, with an average delay of 6.7 years between the onset of symptoms and surgically obtained biopsies, the gold standard for diagnosing the condition. This average delay places endometriosis at the extreme end of diagnostic inefficiency. Tentative evidence suggests that the use of combined oral contraceptives reduces the risk of endometriosis. Exercise and avoiding large amounts of alcohol may also be preventive. There is no cure for endometriosis, but a number of treatments may improve symptoms. This may include pain medication, hormonal treatments or surgery. The recommended pain medication is usually a non-steroidal anti-inflammatory drug (NSAID), such as naproxen. Taking the active component of the birth control pill continuously or using an intrauterine device with progestogen may also be useful. Gonadotropin-releasing hormone agonist (GnRH agonist) may improve the ability of those who are infertile to get pregnant. Surgical removal of endometriosis may be used to treat those whose symptoms are not manageable with other treatments. One estimate is that 10.8 million people are affected globally as of 2015. Other sources estimate 6 to 10% of the general female population and 2 to 11% of asymptomatic women are affected. In addition, 11% of women in a general population have undiagnosed endometriosis that can be seen on magnetic resonance imaging (MRI). Endometriosis is most common in those in their thirties and forties; however, it can begin in girls as early as eight years old. It results in few deaths, with unadjusted and age-standardized death rates of 0.1 and 0.0 per 100,000. Endometriosis was first determined to be a separate condition in the 1920s. Before that time, endometriosis and adenomyosis were considered together. It is unclear who first described the disease.
Signs and symptoms
Pain and infertility are common symptoms, although 20–25% of women are asymptomatic. The presence of pain symptoms is associated with the type of endometrial lesion, as 50% of women with typical lesions, 10% of women with cystic ovarian lesions, and 5% of women with deep endometriosis do not have pain.
Pelvic pain
A major symptom of endometriosis is recurring pelvic pain. The pain can range from mild to severe cramping or stabbing pain that occurs on both sides of the pelvis, in the lower back and rectal area, and even down the legs. The amount of pain a person feels correlates weakly with the extent or stage (1 through 4) of endometriosis, with some individuals having little or no pain despite having extensive endometriosis or endometriosis with scarring, while others may have severe pain even though they have only a few small areas of endometriosis. The most severe pain is typically associated with menstruation. Pain can also start a week before a menstrual period, during and even a week after a menstrual period, or it can be constant. The pain can be debilitating and result in emotional stress. Symptoms of endometriosis-related pain may include:
dysmenorrhea (64%) – painful, sometimes disabling cramps during the menstrual period, which may worsen over time (progressive pain); also lower back pain linked to the pelvis
chronic pelvic pain – typically accompanied by lower back pain or abdominal pain
dyspareunia – painful sexual intercourse
dysuria – urinary urgency, frequency, and sometimes painful voiding
mittelschmerz – pain associated with ovulation
bodily movement pain – present during exercise, standing, or walking
Compared with patients with superficial endometriosis, those with deep disease appear to be more likely to report shooting rectal pain and a sense of their insides being pulled down. Individual pain areas and pain intensity appear to be unrelated to the surgical diagnosis, and the area of pain unrelated to the area of endometriosis. There are multiple causes of pain. Endometriosis lesions react to hormonal stimulation and may "bleed" at the time of menstruation. The blood accumulates locally if it is not cleared shortly by the immune, circulatory, and lymphatic systems. This may further lead to swelling, which triggers inflammation with the activation of cytokines, which results in pain. Another source of pain is the organ dislocation that arises from adhesions binding internal organs to each other. The ovaries, the uterus, the oviducts, the peritoneum, and the bladder can be bound together. Pain triggered in this way can last throughout the menstrual cycle, not just during menstrual periods. Also, endometriotic lesions can develop their own nerve supply, thereby creating a direct and two-way interaction between lesions and the central nervous system, potentially producing a variety of individual differences in pain that can, in some cases, become independent of the disease itself. Nerve fibres and blood vessels are thought to grow into endometriosis lesions by a process known as neuroangiogenesis.
Infertility
About a third of women with infertility have endometriosis. Among those with endometriosis, about 40% are infertile. The pathogenesis of infertility is dependent on the stage of disease: in early stage disease, it is hypothesised that this is secondary to an inflammatory response that impairs various aspects of conception, whereas in later stage disease distorted pelvic anatomy and adhesions contribute to impaired fertilisation.
Other
Other symptoms include diarrhea or constipation, chronic fatigue, nausea and vomiting, migraines, low-grade fevers, heavy (44%) and/or irregular periods (60%), and hypoglycemia. There is an association between endometriosis and certain types of cancers, notably some types of ovarian cancer, non-Hodgkin's lymphoma, and brain cancer. Endometriosis is unrelated to endometrial cancer. Rarely, endometriosis can cause endometrium-like tissue to be found in other parts of the body. Thoracic endometriosis occurs when endometrium-like tissue implants in the lungs or pleura. Manifestations include coughing up blood, a collapsed lung, or bleeding into the pleural space. Stress may be a cause or a consequence of endometriosis.
Complications
Physical health
Complications of endometriosis include internal scarring, adhesions, pelvic cysts, chocolate cysts of the ovaries, ruptured cysts, and bowel and ureter obstruction resulting from pelvic adhesions. Endometriosis-associated infertility can be related to scar formation and anatomical distortions due to the endometriosis. Ovarian endometriosis may complicate pregnancy by decidualization, abscess and/or rupture. Thoracic endometriosis can be associated with recurrent thoracic endometriosis syndrome at the time of menstrual periods, including catamenial pneumothorax in 73% of women, catamenial hemothorax in 14%, catamenial hemoptysis in 7%, and pulmonary nodules in 6%. A 20-year study of 12,000 women with endometriosis found that individuals under 40 who are diagnosed with endometriosis are 3 times more likely to have heart problems than their healthy peers. Endometriosis may increase the chance of ovarian, breast, and thyroid cancers by about 1% or less compared with women without the condition. It results in few deaths, with unadjusted and age-standardized death rates of 0.1 and 0.0 per 100,000. Sciatic endometriosis, also called catamenial or cyclical sciatica, is sciatica caused by endometriosis; its incidence is unknown. Diagnosis is usually made by MRI or CT-myelography.
Mental health
"Endometriosis is associated with an elevated risk of developing depression and anxiety disorders". Studies suggest this is partially due to the pelvic pain experienced by endometriosis patients. "It has been demonstrated that pelvic pain has significant negative effects on womens mental health and quality of life; in particular, women who suffer from pelvic pain report high levels of anxiety and depression, loss of working ability, limitations in social activities and a poor quality of life"
Risk factors
Genetics
Endometriosis is a heritable condition that is influenced by both genetic and environmental factors. Children or siblings of people with endometriosis are at higher risk of developing endometriosis themselves; low progesterone levels may be genetic, and may contribute to a hormone imbalance. There is an approximately six-fold increased incidence in individuals with an affected first-degree relative. It has been proposed that endometriosis results from a series of multiple hits within target genes, in a mechanism similar to the development of cancer. In this case, the initial mutation may be either somatic or heritable. Individual genomic changes (found by genotyping, including genome-wide association studies) that have been associated with endometriosis include 9 loci robustly replicated in various meta-analyses, reaching genome-wide significance:
There are many findings of altered gene expression and epigenetics, but both of these can also be a secondary result of, for example, environmental factors and altered metabolism. Examples of altered gene expression include that of miRNAs.
Environmental toxins
Some factors associated with endometriosis include:
prolonged exposure to estrogen; for example, in late menopause or early menarche
obstruction of menstrual outflow; for example, in Müllerian anomalies
Several studies have investigated the potential link between exposure to dioxins and endometriosis, but the evidence is equivocal and potential mechanisms are poorly understood. A 2004 review of studies of dioxin and endometriosis concluded that "the human data supporting the dioxin-endometriosis association are scanty and conflicting", and a 2009 follow-up review also found that there was "insufficient evidence" in support of a link between dioxin exposure and developing endometriosis. A 2008 review concluded that more work was needed, stating that "although preliminary work suggests a potential involvement of exposure to dioxins in the pathogenesis of endometriosis, much work remains to clearly define cause and effect and to understand the potential mechanism of toxicity".
Pathophysiology
While the exact cause of endometriosis remains unknown, many theories have been presented to better understand and explain its development. These concepts do not necessarily exclude each other. The pathophysiology of endometriosis is likely to be multifactorial and to involve an interplay between several factors.
Formation
The main theories for the formation of the ectopic endometrium-like tissue include retrograde menstruation, Müllerianosis, coelomic metaplasia, vascular dissemination of stem cells, and surgical transplantation, some of which were postulated as early as 1870. Each is further described below.
Retrograde menstruation theory
The theory of retrograde menstruation (also called the implantation theory or transplantation theory) is the most commonly accepted theory for the dissemination and transformation of ectopic endometrium into endometriosis. It suggests that during a woman's menstrual flow, some of the endometrial debris flows backward through the Fallopian tubes and into the peritoneal cavity, attaching itself to the peritoneal surface (the lining of the abdominal cavity), where it can invade the tissue or transform into endometriosis. It is not clear at what stage the transformation of endometrium, or of any cell of origin such as stem cells or coelomic cells (see those theories below), into endometriosis begins. Support for the theory comes from retrospective epidemiological studies showing an association between retrograde menstruation and endometrial implants attached to the peritoneal cavity, which can develop into endometriotic lesions, and from the fact that animals such as rodents, whose endometrium is not shed during the estrous cycle, do not naturally develop endometriosis, in contrast to non-human primates with a natural menstrual cycle, such as rhesus monkeys and baboons. Retrograde menstruation alone is not able to explain all instances of endometriosis, and additional factors such as genetics, immunology, stem cell migration, and coelomic metaplasia (see "Other theories" on this page) are needed to account for disseminated disease and why many individuals with retrograde menstruation are not diagnosed with endometriosis. In addition, endometriosis has shown up in people who have never experienced menstruation, including cisgender men, fetuses, and prepubescent girls. Further theoretical additions are needed to complement the retrograde menstruation theory to explain why cases of endometriosis show up in the brain and lungs. This theory has numerous other associated issues. Researchers are investigating the possibility that the immune system may not be able to cope with the cyclic onslaught of retrograde menstrual fluid. In this context there is interest in studying the relationship of endometriosis to autoimmune disease, allergic reactions, and the impact of toxic materials. It is still unclear what, if any, causal relationship exists between toxic materials or autoimmune disease and endometriosis. There are immune system changes in people with endometriosis, such as an increase of macrophage-derived secretion products, but it is unknown if these contribute to the disorder or are reactions to it. Endometriotic lesions differ from endometrium in their biochemistry, hormonal response, immunology, and inflammatory response. This is likely because the cells that give rise to endometriosis are a side population of cells. Similarly, there are changes in, for example, the mesothelium of the peritoneum in people with endometriosis, such as loss of tight junctions, but it is unknown if these are causes or effects of the disorder. In rare cases where an imperforate hymen does not resolve itself prior to the first menstrual cycle and goes undetected, blood and endometrium are trapped within the uterus until such time as the problem is resolved by surgical incision. Many health care practitioners never encounter this defect, and due to the flu-like symptoms it is often misdiagnosed or overlooked until multiple menstrual cycles have passed.
By the time a correct diagnosis has been made, endometrium and other fluids have filled the uterus and Fallopian tubes with results similar to retrograde menstruation, resulting in endometriosis. The initial stage of endometriosis may vary based on the time elapsed between onset and surgical procedure. The theory of retrograde menstruation as a cause of endometriosis was first proposed by John A. Sampson.
Other theories
Stem cells: Endometriosis may arise from stem cells from bone marrow and potentially other sources. In particular, this theory explains endometriosis found in areas remote from the pelvis such as the brain or lungs. Stem cells may be from local cells such as the peritoneum (see coelomic metaplasia below) or cells disseminated in the blood stream (see vascular dissemination below) such as those from the bone marrow.
Vascular dissemination: Vascular dissemination is a 1927 theory that has been revived with new studies of bone-marrow stem cells involved in pathogenesis.
Environment: Environmental toxins (e.g., dioxin, nickel) may cause endometriosis. Toxins such as dioxins and dioxin-like compounds tend to bioaccumulate within the human body. Further research is needed but "it is plausible that inflammatory-like processes, caused by dioxin-like environmental chemicals, can alter normal endometrial and immune cell physiology allowing persistence and development of endometrial tissue within the peritoneal cavity, normally cleared by immune system cells".
Müllerianosis: A theory supported by foetal autopsy holds that cells with the potential to become endometrial, laid down along the migration path of the female reproductive (Müllerian) tract as it migrates downward at 8–10 weeks of embryonic life, could become dislocated from the migrating uterus and act like seeds or stem cells.
Coelomic metaplasia: Coelomic cells which are the common ancestor of endometrial and peritoneal cells may undergo metaplasia (transformation) from one type of cell to the other, perhaps triggered by inflammation.
Vasculogenesis: Up to 37% of the microvascular endothelium of ectopic endometrial tissue originates from endothelial progenitor cells, which result in de novo formation of microvessels by the process of vasculogenesis rather than the conventional process of angiogenesis.
Neural growth: An increased expression of new nerve fibres is found in endometriosis but does not fully explain the formation of ectopic endometriotic tissue and is not definitely correlated with the amount of perceived pain.
Autoimmune: Graves' disease is an autoimmune disease characterized by hyperthyroidism, goiter, ophthalmopathy, and dermopathy. People with endometriosis have higher rates of Graves' disease. One potential link between Graves' disease and endometriosis is autoimmunity.
Oxidative stress: Influx of iron is associated with the local destruction of the peritoneal mesothelium, leading to the adhesion of ectopic endometriotic cells. Peritoneal iron overload has been suggested to be caused by the destruction of erythrocytes, which contain the iron-binding protein hemoglobin, or by a deficiency in the peritoneal iron metabolism system. Oxidative stress activity and reactive oxygen species (such as superoxide anions and peroxide levels) are reported to be higher than normal in people with endometriosis. Oxidative stress and the presence of excess ROS can damage tissue and induce rapid cellular division. Mechanistically, there are several cellular pathways by which oxidative stress may lead to or may induce proliferation of endometriotic lesions, including the mitogen-activated protein (MAP) kinase pathway and the extracellular signal-regulated kinase (ERK) pathway. Activation of both the MAP and ERK pathways leads to increased levels of c-Fos and c-Jun, which are proto-oncogenes that are associated with high-grade lesions.
Localization
Most often, endometriosis is found on the:
ovaries
fallopian tubes
tissues that hold the uterus in place (ligaments)
outer surface of the uterusLess common pelvic sites are:
vagina
cervix
vulva
bowel
bladder
rectum
Endometriosis may spread to the cervix and vagina or to sites of a surgical abdominal incision, known as "scar endometriosis". Rectovaginal or bowel endometriosis affects approximately 5–12% of those with endometriosis, and can cause severe pain with bowel movements. Deep infiltrating endometriosis (DIE) has been defined as the presence of endometrial glands and stroma infiltrating more than 5 mm into the subperitoneal tissue. The prevalence of DIE is estimated to be 1 to 2%. Deep endometriosis typically presents as a single nodule in the vesicouterine fold or in the lower 20 cm of the bowel, and is often associated with severe pain.
Extrapelvic endometriosis
Rarely, endometriosis appears in extrapelvic parts of the body, such as the lungs, brain, and skin. "Scar endometriosis" can occur in surgical abdominal incisions. Risk factors for scar endometriosis include previous abdominal surgeries, such as a hysterotomy or cesarean section, as well as ectopic pregnancies, salpingostomy, puerperal sterilization, laparoscopy, amniocentesis, appendectomy, episiotomy, vaginal hysterectomy, and hernia repair. Endometriosis may also present with skin lesions in cutaneous endometriosis. Less commonly, lesions can be found on the diaphragm or lungs. Diaphragmatic endometriosis is rare, almost always on the right hemidiaphragm, and may cause cyclic pain in the right scapula (shoulder) or cervical area (neck) during a menstrual period. Pulmonary endometriosis can be associated with thoracic endometriosis syndrome, which can include catamenial (occurring during menstruation) pneumothorax, seen in 73% of women with the syndrome, catamenial hemothorax in 14%, catamenial hemoptysis in 7%, and pulmonary nodules in 6%.
Diagnosis
A health history and a physical examination can lead the health care practitioner to suspect endometriosis. There is a clear benefit to undergoing transvaginal ultrasound (TVUS) as a first step in testing for endometriosis. For many patients, there are significant delays in diagnosis. Studies show an average delay of 11.7 years in the United States. Patients in the UK have an average delay of 8 years, and in Norway of 6.7 years. A third of women had consulted their GP six or more times before being diagnosed. The most common sites of endometriosis are the ovaries, followed by the Douglas pouch, the posterior leaves of the broad ligaments, and the sacrouterine ligaments. As for deep infiltrating endometriosis, TVUS, TRUS and MRI are the techniques of choice for non-invasive diagnosis, with high sensitivity and specificity.
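To make "high sensitivity and specificity" concrete: the predictive value of any such test also depends on how common the disease is in the tested population. The sketch below applies the standard Bayes-rule definitions with made-up numbers; the 90% sensitivity, 95% specificity, and 10% prevalence are illustrative assumptions, not figures from the studies cited here.

    # Positive predictive value from sensitivity, specificity, and prevalence
    # (Bayes' rule). All numbers below are illustrative, not from this article.
    def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
        true_pos = sensitivity * prevalence
        false_pos = (1.0 - specificity) * (1.0 - prevalence)
        return true_pos / (true_pos + false_pos)

    print(round(ppv(0.90, 0.95, 0.10), 2))  # 0.67: a positive result is ~67% reliable here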
Laparoscopy
Laparoscopy, a surgical procedure in which a camera is used to look inside the abdominal cavity, is the only way to accurately diagnose the extent and severity of pelvic/abdominal endometriosis. Laparoscopy is not an applicable test for extrapelvic sites such as the umbilicus, hernia sacs, abdominal wall, lung, or kidneys. Reviews in 2019 and 2020 concluded that 1) with advances in imaging, endometriosis diagnosis should no longer be considered synonymous with immediate laparoscopy for diagnosis, and 2) endometriosis should be classified as a syndrome that requires confirmation of visible lesions seen at laparoscopy in addition to characteristic symptoms. Laparoscopy permits lesion visualization unless the lesion is visible externally (e.g., an endometriotic nodule in the vagina) or is extra-abdominal. If the growths (lesions) are not visible, a biopsy must be taken to determine the diagnosis. Surgery for diagnosis also allows for surgical treatment of endometriosis at the same time.
During a laparoscopic procedure lesions can appear dark blue, powder-burn black, red, white, yellow, brown or non-pigmented. Lesions vary in size. Some within the pelvic walls may not be visible, as normal-appearing peritoneum of infertile women reveals endometriosis on biopsy in 6–13% of cases. Early endometriosis typically occurs on the surfaces of organs in the pelvic and intra-abdominal areas. Health care providers may call areas of endometriosis by different names, such as implants, lesions, or nodules. Larger lesions may be seen within the ovaries as endometriomas or "chocolate cysts", "chocolate" because they contain a thick brownish fluid, mostly old blood. Frequently during diagnostic laparoscopy, no lesions are found in individuals with chronic pelvic pain, a symptom common to other disorders including adenomyosis, pelvic adhesions, pelvic inflammatory disease, congenital anomalies of the reproductive tract, and ovarian or tubal masses.
Ultrasound
Vaginal ultrasound has clinical value in the diagnosis of endometrioma and before operating for deep endometriosis. This applies to the identification of the spread of disease in individuals with well-established clinical suspicion of endometriosis. Vaginal ultrasound is inexpensive, easily accessible, has no contraindications and requires no preparation. Healthcare professionals conducting ultrasound examinations need to be experienced. By extending the ultrasound assessment into the posterior and anterior pelvic compartments, the sonographer is able to evaluate structural mobility and look for deep infiltrating endometriotic nodules, noting the size, location and distance from the anus if applicable. An improvement in sonographic detection of deep infiltrating endometriosis will not only reduce the number of diagnostic laparoscopies but also guide management and enhance quality of life.
Magnetic resonance imaging
Use of MRI is another method to detect lesions in a non-invasive manner. MRI is not widely used due to its cost and limited availability; however, it can detect the most common form of endometriosis (endometrioma) with sufficient accuracy.
For better image quality, it is recommended that the patient receive an antispasmodic agent (hyoscine butylbromide, for example), drink a large glass of water if the bladder is empty, undergo MRI scanning in the supine position, and have an abdominal strap applied. Phased-array coils are also recommended.
Sequences
T1-weighted (T1W) sequences with and without fat suppression are recommended for endometriomas, while sagittal, axial and oblique 2D T2-weighted (T2W) sequences are recommended for deep infiltrating endometriosis.
Staging
Surgically, endometriosis can be staged I–IV by the revised classification of the American Society for Reproductive Medicine from 1997. The process is a complex point system that assesses lesions and adhesions in the pelvic organs, but it is important to note that staging assesses physical disease only, not the level of pain or infertility. A person with Stage I endometriosis may have little disease and severe pain, while a person with Stage IV endometriosis may have severe disease and no pain, or vice versa (a schematic mapping from point totals to stages is sketched after the stage descriptions below). In principle the various stages show these findings:
Stage I (Minimal)
Findings restricted to only superficial lesions and possibly a few filmy adhesions.Stage II (Mild)
In addition, some deep lesions are present in the cul-de-sac.Stage III (Moderate)
As above, plus the presence of endometriomas on the ovary and more adhesions.Stage IV (Severe)
As above, plus large endometriomas, extensive adhesions. Implants and adhesions may be found beyond the uterus. Large ovarian cysts are common.
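As a rough illustration of the scoring step referenced above, the sketch below maps an rASRM point total to a stage. The cutoffs used (1–5, 6–15, 16–40, >40) are the commonly cited rASRM thresholds, supplied here as an assumption since the article does not list them, and the detailed lesion-and-adhesion worksheet that produces the point total is not reproduced.

    # Illustrative sketch: map a revised ASRM (rASRM) point total to a stage.
    # Cutoffs are the commonly cited thresholds (an assumption, not stated in
    # this article); the total itself comes from a worksheet scoring lesion
    # size, depth, and adhesions, which is not reproduced here.
    def rasrm_stage(total_points: int) -> str:
        if total_points < 1:
            raise ValueError("staging assumes at least one documented lesion")
        if total_points <= 5:
            return "Stage I (minimal)"
        if total_points <= 15:
            return "Stage II (mild)"
        if total_points <= 40:
            return "Stage III (moderate)"
        return "Stage IV (severe)"

    print(rasrm_stage(4))   # Stage I (minimal)
    print(rasrm_stage(27))  # Stage III (moderate)

Note that the mapping takes only the physical point total; as the text stresses, pain and infertility do not enter the staging at all.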
Markers
An area of research is the search for endometriosis markers. In 2010, essentially all proposed biomarkers for endometriosis were of unclear medical use, although some appear promising. The one biomarker that has been in use over the last 20 years is CA-125. A 2016 review found that this biomarker was present in those with symptoms of endometriosis and, once ovarian cancer has been ruled out, a positive CA-125 may confirm the diagnosis. Its performance in ruling out endometriosis is low. CA-125 levels appear to fall during endometriosis treatment, but have not shown a correlation with disease response. Another review in 2011 identified several putative biomarkers upon biopsy, including findings of small sensory nerve fibers or a defectively expressed β3 integrin subunit. It has been postulated that a future diagnostic tool for endometriosis will consist of a panel of several specific and sensitive biomarkers, including both substance concentrations and genetic predisposition. A 2016 review of endometrial biomarkers for diagnosing endometriosis was unable to draw conclusions due to the low quality of the evidence. MicroRNAs have the potential to be used in diagnostic and therapeutic decisions.
Histopathology
For a histopathological diagnosis, at least two of the following three criteria should be present:
Endometrial type stroma
Endometrial epithelium with glands
Evidence of chronic hemorrhage, mainly hemosiderin deposits
Immunohistochemistry has been found to be useful in diagnosing endometriosis, as stromal cells have a peculiar surface antigen, CD10, allowing the pathologist to go straight to a staining area and thereby confirm the presence of stromal cells; sometimes glandular tissue that was missed on routine H&E staining is identified this way.
Pain quantification
The most common pain scale for quantification of endometriosis-related pain is the visual analogue scale (VAS); VAS and numerical rating scale (NRS) were the best adapted pain scales for pain measurement in endometriosis. For research purposes, and for more detailed pain measurement in clinical practice, VAS or NRS for each type of typical pain related to endometriosis (dysmenorrhea, deep dyspareunia and non-menstrual chronic pelvic pain), combined with the clinical global impression (CGI) and a quality of life scale, are used.
Prevention
Limited evidence indicates that the use of combined oral contraceptives is associated with a reduced risk of endometriosis, as is regular exercise and the avoidance of alcohol and caffeine.
Management
While there is no cure for endometriosis, there are two types of interventions: treatment of pain and treatment of endometriosis-associated infertility. In many cases, menopause (natural or surgical) will abate the process. In the reproductive years, endometriosis is merely managed: the goal is to provide pain relief, to restrict progression of the process, and to restore or preserve fertility where needed. In younger individuals, some surgical treatment attempts to remove endometriotic tissue and preserve the ovaries without damaging normal tissue. Pharmacotherapy for pain management can be initiated based on the presence of symptoms and on examination and ultrasound findings that rule out other potential causes. In general, the diagnosis of endometriosis is confirmed during surgery, at which time ablative steps can be taken. Further steps depend on circumstances: someone without infertility can manage symptoms with pain medication and hormonal medication that suppresses the natural cycle, while an infertile individual may be treated expectantly after surgery, with fertility medication, or with IVF.
A 2020 Cochrane systematic review found that for all types of endometriosis, "it is uncertain whether laparoscopic surgery improves overall pain compared to diagnostic laparoscopy".
Surgery
Surgery, if done, should generally be performed laparoscopically (through keyhole surgery) rather than open. Treatment consists of the ablation or excision of the endometriosis, electrocoagulation, lysis of adhesions, resection of endometriomas, and restoration of normal pelvic anatomy as much as is possible. When laparoscopic surgery is used, small instruments are inserted through the incisions to remove the endometriosis tissue and adhesions. Because the incisions are very small, there will only be small scars on the skin after the procedure, and most individuals recover from surgery quickly and have a reduced risk of adhesions. Many endometriosis specialists believe that excision is the ideal surgical method to treat endometriosis. As for deep endometriosis, segmental resection or shaving of nodules is effective but is associated with a substantial rate of complications, of which about 4.6% are major. Historically, a hysterectomy (removal of the uterus) was thought to be a cure for endometriosis in individuals who do not wish to conceive. Removal of the uterus may be beneficial as part of the treatment if the uterus itself is affected by adenomyosis. However, this should only be done in combination with removal of the endometriosis by excision; if endometriosis is not also removed at the time of hysterectomy, pain may persist. Presacral neurectomy, in which the nerves to the uterus are cut, may be performed. However, this technique is not usually used due to the high incidence of associated complications, including presacral hematoma and irreversible problems with urination and constipation.
Recurrence
The underlying process that causes endometriosis may not cease after surgical or medical intervention. One study has shown that dysmenorrhea recurs at a rate of 30 percent within a year following laparoscopic surgery. Recurrent lesions tend to appear in the same location if the lesions were not completely removed during surgery. Laser ablation has been shown to result in higher and earlier recurrence rates than endometrioma cystectomy, and recurrence after repeat laparoscopy is similar to that after the first surgery. Endometriosis can come back after hysterectomy and bilateral salpingo-oophorectomy, with a recurrence rate of about 10%. Endometriosis recurrence following conservative surgery is estimated at 21.5% at 2 years and 40–50% at 5 years. The recurrence rate for DIE after surgery is less than 1%.
Risks and safety of pelvic surgery
The risk of developing complications following surgery depends on the type of lesion that has undergone surgery.
55% to 100% of individuals develop adhesions following pelvic surgery, which can result in infertility, chronic abdominal and pelvic pain, and difficult reoperative surgery. Trehan's temporary ovarian suspension, a technique in which the ovaries are suspended for a week after surgery, may be used to reduce the incidence of adhesions after endometriosis surgery.
Removal of cysts on the ovary without removing the ovary is a safe procedure.
Hormonal medications
Hormonal birth control therapy: Birth control pills reduce menstrual pain and the recurrence rate of endometrioma following conservative surgery for endometriosis. A 2018 Cochrane systematic review found that there is insufficient evidence to make a judgement on the effectiveness of the combined oral contraceptive pill compared with placebo or other medical treatments for managing pain associated with endometriosis, partly because of a lack of included studies for data analysis (only two for COCP vs placebo).
Progestogens: Progesterone counteracts estrogen and inhibits the growth of the endometrium. Danazol (Danocrine) and gestrinone (Dimetrose, Nemestran) are suppressive steroids with some androgenic activity. Both agents inhibit the growth of endometriosis but their use has declined, due in part to virilizing side effects such as excessive hair growth and voice changes. There is tentative evidence based on cohort studies that dienogest and norethisterone acetate (NETA) may help patients with DIE in terms of pain. There is tentative evidence based on a prospective study that vaginal danazol reduces pain in those affected by DIE.
Gonadotropin-releasing hormone (GnRH) modulators: These drugs include GnRH agonists such as leuprorelin (Lupron) and GnRH antagonists such as elagolix (Orilissa) and are thought to work by decreasing estrogen levels. A 2010 Cochrane review found that GnRH modulators were more effective for pain relief in endometriosis than no treatment or placebo, but were not more effective than danazol or intrauterine progestogen, and had more side effects than danazol. A 2018 Swedish systematic review found that GnRH modulators had similar pain-relieving effects to gestagen, but also decreased bone density.
Aromatase inhibitors are medications that block the formation of estrogen and have become of interest to researchers treating endometriosis. Examples of aromatase inhibitors include anastrozole and letrozole. Evidence for aromatase inhibitors comes from numerous controlled studies showing benefit in terms of pain control and quality of life when used in combination with gestagens or oral contraceptives, with fewer side effects when combined with oral contraceptives such as norethisterone acetate. Despite multiple benefits, there are several considerations before using AIs for endometriosis, as it is common for them to induce functional cysts as an adverse effect. Moreover, dosages, treatment length, appropriate add-back therapies and mode of administration are still being investigated.
Progesterone receptor modulators like mifepristone and gestrinone have the potential (based on only one RCT each) to be used as a treatment to manage pain caused by endometriosis.
Other medicines
Melatonin: there is tentative evidence for its use (at a dose of 10 mg) in reducing pain related to endometriosis.
Opioids: Morphine sulphate tablets and other opioid painkillers work by mimicking the action of naturally occurring pain-reducing chemicals called "endorphins". There are different long acting and short acting medications that can be used alone or in combination to provide appropriate pain control.
Chinese herbal medicine was reported to have comparable benefits to gestrinone and danazol in patients who had had laparoscopic surgery, though the review notes that the two trials were small and of "poor methodological quality" and results should be "interpreted cautiously" as better quality research is needed.
Serrapeptase, a digestive enzyme found in the intestines of silkworms, is widely used in Japan and Europe as an anti-inflammatory treatment. More research is needed, but serrapeptase may be used by endometriosis patients to reduce inflammation.
Angiogenesis inhibitors lack clinical evidence of efficacy in endometriosis therapy. Under experimental in vitro and in vivo conditions, compounds that have been shown to exert inhibitory effects on endometriotic lesions include growth factor inhibitors, endogenous angiogenesis inhibitors, fumagillin analogues, statins, cyclo-oxygenase-2 inhibitors, phytochemical compounds, immunomodulators, dopamine agonists, peroxisome proliferator-activated receptor agonists, progestins, danazol and gonadotropin-releasing hormone agonists. However, many of these agents are associated with undesirable side effects and more research is necessary. An ideal therapy would diminish inflammation and underlying symptoms without being contraceptive.
Pentoxifylline, an immunomodulating agent, has been theorized to improve pain as well as improve pregnancy rates in individuals with endometriosis. There is not enough evidence to support the effectiveness or safety of either of these uses. Current American Congress of Obstetricians and Gynecologists (ACOG) guidelines do not include immunomodulators, such as pentoxifylline, in standard treatment protocols.
NSAIDs are anti-inflammatory medications commonly used for endometriosis patients despite unproven efficacy and unintended adverse effects.
Neuromodulators like gabapentin did not prove to be superior to placebo in managing pain caused by endometriosis. The overall effectiveness of manual physical therapy for treating endometriosis has not yet been established.
Comparison of interventions
A 2021 meta-analysis found that GnRH analogues and combined hormonal contraceptives were the best treatments for reducing dyspareunia and menstrual and non-menstrual pelvic pain. A 2018 Swedish systematic review found a large number of studies but a general lack of scientific evidence for most treatments. There was only one study of sufficient quality and relevance comparing the effect of surgery and non-surgery. Cohort studies indicate that surgery is effective in decreasing pain. Most complications occurred in cases of low intestinal anastomosis, risk of fistula occurred in cases of combined abdominal or vaginal surgery, and urinary tract problems were common in intestinal surgery. The evidence regarding surgical intervention was found to be insufficient.

The advantages of physical therapy techniques are decreased cost, absence of major side effects, no interference with fertility, and near-universal improvement in sexual function. The disadvantages are that there are no large or long-term studies of its use for treating pain or infertility related to endometriosis.
Treatment of infertility
Surgery is more effective than medicinal intervention for addressing infertility associated with endometriosis. Surgery attempts to remove endometrium-like tissue and preserve the ovaries without damaging normal tissue. Hormonal suppression therapy after surgery may reduce endometriosis recurrence and improve the chances of pregnancy. In-vitro fertilization (IVF) procedures are effective in improving fertility in many individuals with endometriosis. During fertility treatment, ultralong pretreatment with a GnRH agonist gives a higher chance of pregnancy for individuals with endometriosis than short pretreatment.
Research
Preliminary research in mouse models showed that monoclonal antibodies, as well as inhibitors of the MyD88 downstream signaling pathway, can reduce lesion volume. Based on these findings, clinical trials are underway on a monoclonal antibody directed against IL-33 and on anakinra, an IL-1 receptor antagonist. Promising preclinical outcomes have also prompted clinical trials of cannabinoid extracts, dichloroacetic acid, and curcuma capsules.
Epidemiology
Determining how many people have endometriosis is challenging because definitive diagnosis requires surgical visualization through laparoscopic surgery. Criteria that are commonly used to establish a diagnosis include pelvic pain, infertility, surgical assessment, and in some cases, magnetic resonance imaging. An ultrasound can identify large clumps of tissue as potential endometriosis lesions and ovarian cysts, but it is not effective for all patients, especially in cases with smaller, superficial lesions. Prevalence studies suggest that endometriosis affects approximately 11% of women in the general population. Endometriosis is most common in those in their thirties and forties; however, it can begin as early as 8 years old. Endometriosis is estimated to affect over 190 million women in their reproductive years.

It chiefly affects people from premenarche to postmenopause, regardless of race or ethnicity or whether or not they have had children. It is primarily a disease of the reproductive years. Incidences of endometriosis have occurred in postmenopausal individuals, and in less common cases, individuals may have had endometriosis symptoms before they even reach menarche.

The rate of recurrence of endometriosis is estimated to be 40–50% for adults over a 5-year period. The rate of recurrence has been shown to increase with time from surgery and is not associated with the stage of the disease, initial site, surgical method used, or post-surgical treatment.
History
Endometriosis was first discovered microscopically by Karl von Rokitansky in 1860, although the earliest antecedents may have stemmed from concepts published almost 4,000 years ago. The Hippocratic Corpus outlines symptoms similar to endometriosis, including uterine ulcers, adhesions, and infertility. Historically, women with these symptoms were treated with leeches, straitjackets, bloodletting, chemical douches, genital mutilation, pregnancy (as a form of treatment), hanging upside down, surgical intervention, and even killing due to suspicion of demonic possession. Hippocratic doctors recognized and treated chronic pelvic pain as a true organic disorder 2,500 years ago, but during the Middle Ages, there was a shift into believing that women with pelvic pain were mad, immoral, imagining the pain, or simply misbehaving. The symptoms of inexplicable chronic pelvic pain were often attributed to imagined madness, female weakness, promiscuity, or hysteria. The historical diagnosis of hysteria, which was thought to be a psychological disease, may have indeed been endometriosis. The idea that chronic pelvic pain was related to mental illness influenced modern attitudes regarding individuals with endometriosis, leading to delays in correct diagnosis and indifference to the patient's true pain throughout the 20th and into the 21st century.

Hippocratic doctors believed that delaying childbearing could trigger diseases of the uterus, which caused endometriosis-like symptoms. Women with dysmenorrhea were encouraged to marry and have children at a young age. The fact that Hippocratics were recommending changes in marriage practices due to an endometriosis-like illness implies that this disease was likely common, with rates higher than the 5–15% prevalence that is often cited today. If indeed this disorder was so common historically, this may point away from modern theories that suggest links between endometriosis and dioxins, PCBs, and chemicals.

The early treatment of endometriosis was surgical and included oophorectomy (removal of the ovaries) and hysterectomy (removal of the uterus). In the 1940s, the only available hormonal therapies for endometriosis were high-dose testosterone and high-dose estrogen therapy. High-dose estrogen therapy with diethylstilbestrol for endometriosis was first reported by Karnaky in 1948 and was the main pharmacological treatment for the condition in the early 1950s. Pseudopregnancy (high-dose estrogen–progestogen therapy) for endometriosis was first described by Kistner in the late 1950s. Pseudopregnancy as well as progestogen monotherapy dominated the treatment of endometriosis in the 1960s and 1970s. These agents, although efficacious, were associated with intolerable side effects. Danazol was first described for endometriosis in 1971 and became the main therapy in the 1970s and 1980s. In the 1980s GnRH agonists gained prominence for the treatment of endometriosis, and by the 1990s they had become the most widely used therapy. Oral GnRH antagonists such as elagolix were introduced for the treatment of endometriosis in 2018.
Society and culture
Public figures
A number of public figures have spoken about their experience with endometriosis, including:
Halsey
Emma Bunton
Whoopi Goldberg
Mel Greig
Abby Finkenauer
Julianne Hough
Bridget Hustwaite
Padma Lakshmi
Dolly Parton
Daisy Ridley
Emma Roberts
Kirsten Storms
Chrissy Teigen
Emma Watkins
Danielle Collins
Emma Barnett
Jennifer Boyce, bass player of Ball Park Music
Economic burden
The economic burden of endometriosis is widespread and multifaceted. Endometriosis is a chronic disease that has direct and indirect costs which include loss of work days, direct costs of treatment, symptom management, and treatment of other associated conditions such as depression or chronic pain. One factor which seems to be associated with especially high costs is the delay between onset of symptoms and diagnosis.
Costs vary greatly between countries. Two factors that contribute to the economic burden are healthcare costs and losses in productivity. A Swedish study of 400 endometriosis patients found that "absence from work was reported by 32% of the women, while 36% reported reduced time at work because of endometriosis". An additional cross-sectional study of Puerto Rican women "found that endometriosis-related and coexisting symptoms disrupted all aspects of women's daily lives, including physical limitations that affected doing household chores and paid employment. The majority of women (85%) experienced a decrease in the quality of their work; 20% reported being unable to work because of pain, and over two-thirds of the sample continued to work despite their pain."
Medical culture
There are a number of barriers that those affected face in receiving diagnosis and treatment for endometriosis. Some of these include outdated standards for laparoscopic evaluation, stigma about discussing menstruation and sex, lack of understanding of the disease, primary care physicians' lack of knowledge, and assumptions about typical menstrual pain. On average, those later diagnosed with endometriosis waited 2.3 years after the onset of symptoms before seeking treatment, and nearly three-quarters of women receive a misdiagnosis before endometriosis is identified. Self-help groups say practitioners delay making the diagnosis, often because they do not consider it a possibility. There is a typical delay of 7–12 years from symptom onset in affected individuals to professional diagnosis. There is a general lack of knowledge about endometriosis among primary care physicians: half of general health care providers surveyed in a 2013 study were unable to name three symptoms of endometriosis. Health care providers are also likely to dismiss described symptoms as normal menstruation. Younger patients may also feel uncomfortable discussing symptoms with a physician.
Race and ethnicity
Race and ethnicity may play a role in how endometriosis affects one's life. Endometriosis is less thoroughly studied among Black people, and the research that has been done is outdated. Black people with endometriosis may face barriers in receiving care due to misconceptions about how Black people feel pain. Since pain is the primary symptom of endometriosis, this makes it more likely for doctors to dismiss pain symptoms when their patient is Black. An inaccurate diagnosis is also more likely since Black women are at a higher risk for other related conditions such as uterine fibroids.

Cultural differences among ethnic groups also contribute to attitudes toward and treatment of endometriosis, especially in Hispanic or Latino communities. A study done in Puerto Rico in 2020 found that health care and interactions with friends and family related to discussing endometriosis were affected by stigma. The most common finding was a referral to those expressing pain related to endometriosis as "changuería" or "changas", terms used in Puerto Rico to describe pointless whining and complaining, often directed at children.
References
External links
Endometriosis at Curlie
Endometriosis fact sheet from womenshealth.gov
Endometriosis fact sheet from the World Health Organization
Endometritis | Endometritis is inflammation of the inner lining of the uterus (endometrium). Symptoms may include fever, lower abdominal pain, and abnormal vaginal bleeding or discharge. It is the most common cause of infection after childbirth. It is also part of the spectrum of diseases that make up pelvic inflammatory disease.

Endometritis is divided into acute and chronic forms. The acute form is usually from an infection that passes through the cervix as a result of an abortion, during menstruation, following childbirth, or as a result of douching or placement of an IUD. Risk factors for endometritis following delivery include Caesarean section and prolonged rupture of membranes. Chronic endometritis is more common after menopause. The diagnosis may be confirmed by endometrial biopsy. Ultrasound may be useful to verify that there is no retained tissue within the uterus.

Treatment is usually with antibiotics. Recommendations for treatment of endometritis following delivery include clindamycin with gentamicin. Testing for and treating gonorrhea and chlamydia in those at risk is also recommended. Chronic disease may be treated with doxycycline. Outcomes with treatment are generally good.

Rates of endometritis are about 2% following vaginal delivery, 10% following scheduled C-section, and 30% with rupture of membranes before C-section if preventive antibiotics are not used. The term "endomyometritis" may be used when inflammation of the endometrium and the myometrium is present. The condition is also relatively common in other animals such as cows.
Symptoms
Symptoms may include fever, lower abdominal pain, and abnormal vaginal bleeding or discharge.
Types
Acute endometritis
Acute endometritis is characterized by infection. The organisms most often isolated are associated with compromised abortions, delivery, medical instrumentation, and retention of placental fragments. There is not enough evidence for the use of prophylactic antibiotics to prevent endometritis after manual removal of the placenta in vaginal birth. Histologically, neutrophilic infiltration of the endometrial tissue is present during acute endometritis. The clinical presentation is typically high fever and purulent vaginal discharge. Menstruation after acute endometritis is excessive, and uncomplicated cases can resolve after 2 weeks of clindamycin and gentamicin IV antibiotic treatment.
In certain populations, it has been associated with Mycoplasma genitalium and pelvic inflammatory disease.
Chronic endometritis
Chronic endometritis is characterized by the presence of plasma cells in the stroma. Lymphocytes, eosinophils, and even lymphoid follicles may be seen, but in the absence of plasma cells are not enough to warrant a histologic diagnosis. It may be seen in up to 10% of all endometrial biopsies performed for irregular bleeding. The most common organisms are Chlamydia trachomatis (chlamydia), Neisseria gonorrhoeae (gonorrhea), Streptococcus agalactiae (Group B Streptococcus), Mycoplasma hominis, tuberculosis, and various viruses. Most of these agents are capable of causing chronic pelvic inflammatory disease (PID). Patients with chronic endometritis may have an underlying cancer of the cervix or endometrium (although an infectious cause is more common). Antibiotic therapy is curative in most cases (depending on the underlying cause), with fairly rapid alleviation of symptoms after only 2 to 3 days. Women with chronic endometritis are also at a higher risk of pregnancy loss, and treatment improves future pregnancy outcomes.

Chronic granulomatous endometritis is usually caused by tuberculosis. The granulomas are small, sparse, and without caseation. The granulomas take up to 2 weeks to develop, and since the endometrium is shed every 4 weeks, the granulomas are poorly formed.
In human medicine, pyometra (also a veterinary condition of significance) is regarded as a form of chronic endometritis seen in elderly women, causing stenosis of the cervical os and accumulation of discharge and infection. The typical symptom of chronic endometritis is blood-stained discharge, whereas in pyometra the patient complains of lower abdominal pain.
Pyometra
Pyometra describes an accumulation of pus in the uterine cavity. In order for pyometra to develop, there must be both an infection and blockage of the cervix. Signs and symptoms include lower abdominal (suprapubic) pain, rigors, fever, and the discharge of pus on introduction of a sound into the uterus. Pyometra is treated with antibiotics, according to culture and sensitivity.
See also
Maternal death
Puerperal fever
References
External links
Endophthalmitis | Endophthalmitis is inflammation of the interior cavity of the eye, usually caused by infection. It is a possible complication of all intraocular surgeries, particularly cataract surgery, and can result in loss of vision or loss of the eye itself. Infection can be caused by bacteria or fungi, and is classified as exogenous (infection introduced by direct inoculation as in surgery or penetrating trauma), or endogenous (organisms carried by blood vessels to the eye from another site of infection). Other non-infectious causes include toxins, allergic reactions, and retained intraocular foreign bodies. Intravitreal injections are a rare cause, with an incidence rate usually less than 0.05%.
Signs and symptoms
There is usually a history of recent eye surgery or penetrating trauma to the eye. Symptoms include severe pain, vision loss, and intense redness of the conjunctiva. Hypopyon can be present and should be looked for on examination by a slit lamp. It can first present with the black dot sign (Martin-Farina sign), where patients may report a small area of loss of vision that resembles a black dot or fly.
An eye exam should be considered in systemic candidiasis, as up to 3% of cases of candidal blood infections lead to endophthalmitis.
Complications
Panophthalmitis — Progression to involve all the coats of the eye.
Corneal ulcer
Orbital cellulitis
Impairment of vision
Complete loss of vision
Loss of eye architecture
Enucleation
Cause
Bacteria: N. meningitidis, Staphylococcus aureus, S. epidermidis, S. pneumoniae, other streptococcal spp., Cutibacterium acnes, Pseudomonas aeruginosa, other gram negative organisms.
Viruses: Herpes simplex virus.
Fungi: Candida spp., Fusarium
Parasites: Toxoplasma gondii, Toxocara.

A recent systematic review found that the most common source of infectious transmission following cataract surgery was attributed to a contaminated intraocular solution (i.e., irrigation solution, viscoelastic, or diluted antibiotic), although there is a large diversity of exogenous microorganisms that can travel via various routes, including the operating room environment, phacoemulsification machine, surgical instruments, topical anesthetics, intraocular lens, autoclave solution, and cotton wool swabs.

Late-onset endophthalmitis is mostly caused by Cutibacterium acnes.

Causative organisms are not present in all cases. Endophthalmitis can emerge by entirely sterile means, e.g. an allergic reaction to a drug administered intravitreally.
Diagnosis
Diagnosis may involve:
Microbiology testing.
PCR.
Distinguishing toxic anterior segment syndrome (TASS) from infectious endophthalmitis.
Prevention
A Cochrane Review sought to evaluate the effects of perioperative antibiotic prophylaxis for endophthalmitis following cataract surgery. The review showed high-certainty evidence that antibiotic injections in the eye with cefuroxime at the end of surgery lower the chance of endophthalmitis. The review also showed moderate evidence that antibiotic eye drops (levofloxacin or chloramphenicol) combined with antibiotic injections (cefuroxime or penicillin) probably lower the chance of endophthalmitis compared with injections or eye drops alone. Separate studies from the research showed that a periocular injection of penicillin with chloramphenicol-sulphadimidine eye drops, and an intracameral cefuroxime injection with topical levofloxacin, reduced the risk of developing endophthalmitis following cataract surgery.
In the case of intravitreal injections, however, antibiotics are not effective. Studies have demonstrated no difference between rates of infection with and without antibiotics when intravitreal injections are performed. The only consistent method of antibioprophylaxis in this instance is a solution of povidone-iodine applied pre-injection.
Treatment
The patient needs urgent examination by an ophthalmologist, preferably a vitreoretinal specialist, who will usually decide on urgent intervention with intravitreal injection of potent antibiotics. Injections of vancomycin (to kill Gram-positive bacteria) and ceftazidime (to kill Gram-negative bacteria) are routine. Even though antibiotics can have negative impacts on the retina in high concentrations, the facts that visual acuity worsens in 65% of endophthalmitis patients and that prognosis gets poorer the longer an infection goes untreated make immediate intervention necessary. Endophthalmitis patients may also require urgent surgery (pars plana vitrectomy), and evisceration may be necessary to remove a severe and intractable infection which could result in a blind and painful eye.
Steroids may be injected intravitreally if the cause is allergic.
In patients with acute endophthalmitis, combined steroid and antibiotic treatment has been found to improve visual outcomes compared with antibiotic treatment alone, but any effect on the resolution of acute endophthalmitis is unknown.
References
External links
Endophthalmitis at eMedicine
Fungal Endophthalmitis at eMedicine
Tracheal intubation | Tracheal intubation, usually simply referred to as intubation, is the placement of a flexible plastic tube into the trachea (windpipe) to maintain an open airway or to serve as a conduit through which to administer certain drugs. It is frequently performed in critically injured, ill, or anesthetized patients to facilitate ventilation of the lungs, including mechanical ventilation, and to prevent the possibility of asphyxiation or airway obstruction.
The most widely used route is orotracheal, in which an endotracheal tube is passed through the mouth and vocal apparatus into the trachea. In a nasotracheal procedure, an endotracheal tube is passed through the nose and vocal apparatus into the trachea. Other methods of intubation involve surgery and include the cricothyrotomy (used almost exclusively in emergency circumstances) and the tracheotomy, used primarily in situations where a prolonged need for airway support is anticipated.
Because it is an invasive and uncomfortable medical procedure, intubation is usually performed after administration of general anesthesia and a neuromuscular-blocking drug. It can, however, be performed in the awake patient with local or topical anesthesia or in an emergency without any anesthesia at all. Intubation is normally facilitated by using a conventional laryngoscope, flexible fiberoptic bronchoscope, or video laryngoscope to identify the vocal cords and pass the tube between them into the trachea instead of into the esophagus. Other devices and techniques may be used alternatively.
After the trachea has been intubated, a balloon cuff is typically inflated just above the far end of the tube to help secure it in place, to prevent leakage of respiratory gases, and to protect the tracheobronchial tree from receiving undesirable material such as stomach acid. The tube is then secured to the face or neck and connected to a T-piece, anesthesia breathing circuit, bag valve mask device, or a mechanical ventilator. Once there is no longer a need for ventilatory assistance or protection of the airway, the tracheal tube is removed; this is referred to as extubation of the trachea (or decannulation, in the case of a surgical airway such as a cricothyrotomy or a tracheotomy).
For centuries, tracheotomy was considered the only reliable method for intubation of the trachea. However, because only a minority of patients survived the operation, physicians undertook tracheotomy only as a last resort, on patients who were nearly dead. It was not until the late 19th century that advances in understanding of anatomy and physiology, as well as an appreciation of the germ theory of disease, had improved the outcome of this operation to the point that it could be considered an acceptable treatment option. Also at that time, advances in endoscopic instrumentation had improved to such a degree that direct laryngoscopy had become a viable means to secure the airway by the non-surgical orotracheal route. By the mid-20th century, the tracheotomy as well as endoscopy and non-surgical tracheal intubation had evolved from rarely employed procedures to becoming essential components of the practices of anesthesiology, critical care medicine, emergency medicine, and laryngology.
Tracheal intubation can be associated with complications such as broken teeth or lacerations of the tissues of the upper airway. It can also be associated with potentially fatal complications such as pulmonary aspiration of stomach contents which can result in a severe and sometimes fatal chemical aspiration pneumonitis, or unrecognized intubation of the esophagus which can lead to potentially fatal anoxia. Because of this, the potential for difficulty or complications due to the presence of unusual airway anatomy or other uncontrolled variables is carefully evaluated before undertaking tracheal intubation. Alternative strategies for securing the airway must always be readily available.
Indications
Tracheal intubation is indicated in a variety of situations when illness or a medical procedure prevents a person from maintaining a clear airway, breathing, and oxygenating the blood. In these circumstances, oxygen supplementation using a simple face mask is inadequate.
Depressed level of consciousness
Perhaps the most common indication for tracheal intubation is for the placement of a conduit through which nitrous oxide or volatile anesthetics may be administered. General anesthetic agents, opioids, and neuromuscular-blocking drugs may diminish or even abolish the respiratory drive. Although it is not the only means to maintain a patent airway during general anesthesia, intubation of the trachea provides the most reliable means of oxygenation and ventilation and the greatest degree of protection against regurgitation and pulmonary aspiration.

Damage to the brain (such as from a massive stroke, non-penetrating head injury, intoxication or poisoning) may result in a depressed level of consciousness. When this becomes severe enough to reach the point of stupor or coma (defined as a score on the Glasgow Coma Scale of less than 8), dynamic collapse of the extrinsic muscles of the airway can obstruct the airway, impeding the free flow of air into the lungs. Furthermore, protective airway reflexes such as coughing and swallowing may be diminished or absent. Tracheal intubation is often required to restore patency (the relative absence of blockage) of the airway and protect the tracheobronchial tree from pulmonary aspiration of gastric contents.
Hypoxemia
Intubation may be necessary for a patient with decreased oxygen content and oxygen saturation of the blood caused when their breathing is inadequate (hypoventilation), suspended (apnea), or when the lungs are unable to sufficiently transfer gasses to the blood. Such patients, who may be awake and alert, are typically critically ill with a multisystem disease or multiple severe injuries. Examples of such conditions include cervical spine injury, multiple rib fractures, severe pneumonia, acute respiratory distress syndrome (ARDS), or near-drowning. Specifically, intubation is considered if the arterial partial pressure of oxygen (PaO2) is less than 60 millimeters of mercury (mm Hg) while breathing an inspired O2 concentration (FIO2) of 50% or greater. In patients with elevated arterial carbon dioxide, an arterial partial pressure of CO2 (PaCO2) greater than 45 mm Hg in the setting of acidemia would prompt intubation, especially if a series of measurements demonstrate a worsening respiratory acidosis. Regardless of the laboratory values, these guidelines are always interpreted in the clinical context.
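The blood-gas thresholds above can be expressed as a simple predicate. The following Python sketch is purely illustrative: the cutoffs (PaO2 < 60 mm Hg at FIO2 ≥ 50%, PaCO2 > 45 mm Hg with acidemia) come from the text, while the function name and the pH value of 7.35 used here to define acidemia are assumptions of this example, and such values are never applied outside the full clinical context.

def meets_blood_gas_criteria(pao2_mmhg, fio2, paco2_mmhg, ph):
    """Return True if either blood-gas criterion described above is met.

    Illustrative only; real decisions weigh trends and clinical context.
    """
    # PaO2 below 60 mm Hg despite an inspired O2 fraction of 50% or more
    refractory_hypoxemia = pao2_mmhg < 60 and fio2 >= 0.50
    # PaCO2 above 45 mm Hg in the setting of acidemia (pH < 7.35 assumed here)
    hypercapnic_acidemia = paco2_mmhg > 45 and ph < 7.35
    return refractory_hypoxemia or hypercapnic_acidemia

print(meets_blood_gas_criteria(55, 0.6, 40, 7.42))   # True: refractory hypoxemia
print(meets_blood_gas_criteria(80, 0.3, 50, 7.28))   # True: hypercapnic acidemia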
Airway obstruction
Actual or impending airway obstruction is a common indication for intubation of the trachea. Life-threatening airway obstruction may occur when a foreign body becomes lodged in the airway; this is especially common in infants and toddlers. Severe blunt or penetrating injury to the face or neck may be accompanied by swelling and an expanding hematoma, or injury to the larynx, trachea or bronchi. Airway obstruction is also common in people who have suffered smoke inhalation or burns within or near the airway or epiglottitis. Sustained generalized seizure activity and angioedema are other common causes of life-threatening airway obstruction which may require tracheal intubation to secure the airway.
Manipulation of the airway
Diagnostic or therapeutic manipulation of the airway (such as bronchoscopy, laser therapy or stenting of the bronchi) may intermittently interfere with the ability to breathe; intubation may be necessary in such situations.
Newborns
Syndromes such as respiratory distress syndrome, congenital heart disease, pneumothorax, and shock may lead to breathing problems in newborn infants that require endotracheal intubation and mechanically assisted breathing (mechanical ventilation). Newborn infants may also require endotracheal intubation during surgery while under general anaesthesia.
Equipment
Laryngoscopes
The vast majority of tracheal intubations involve the use of a viewing instrument of one type or another. The modern conventional laryngoscope consists of a handle containing batteries that power a light and a set of interchangeable blades, which are either straight or curved. This device is designed to allow the laryngoscopist to directly view the larynx. Due to the widespread availability of such devices, the technique of blind intubation of the trachea is rarely practiced today, although it may still be useful in certain emergency situations, such as natural or man-made disasters. In the prehospital emergency setting, digital intubation may be necessitated if the patient is in a position that makes direct laryngoscopy impossible. For example, digital intubation may be used by a paramedic if the patient is entrapped in an inverted position in a vehicle after a motor vehicle collision with a prolonged extrication time.
The decision to use a straight or curved laryngoscope blade depends partly on the specific anatomical features of the airway, and partly on the personal experience and preference of the laryngoscopist. The Macintosh blade is the most widely used curved laryngoscope blade, while the Miller blade is the most popular style of straight blade. Both Miller and Macintosh laryngoscope blades are available in sizes 0 (infant) through 4 (large adult). There are many other styles of straight and curved blades, with accessories such as mirrors for enlarging the field of view and even ports for the administration of oxygen. These specialty blades are primarily designed for use by anesthetists and otolaryngologists, most commonly in the operating room.

Fiberoptic laryngoscopes have become increasingly available since the 1990s. In contrast to the conventional laryngoscope, these devices allow the laryngoscopist to indirectly view the larynx. This provides a significant advantage in situations where the operator needs to see around an acute bend in order to visualize the glottis, and deal with otherwise difficult intubations. Video laryngoscopes are specialized fiberoptic laryngoscopes that use a digital video camera sensor to allow the operator to view the glottis and larynx on a video monitor. Other "noninvasive" devices which can be employed to assist in tracheal intubation are the laryngeal mask airway (used as a conduit for endotracheal tube placement) and the Airtraq.
Stylets
An intubating stylet is a malleable metal wire designed to be inserted into the endotracheal tube to make the tube conform better to the upper airway anatomy of the specific individual. This aid is commonly used with a difficult laryngoscopy. Just as with laryngoscope blades, there are several types of available stylets, such as the Verathon Stylet, which is specifically designed to follow the 60° blade angle of the GlideScope video laryngoscope.

The Eschmann tracheal tube introducer (also referred to as a "gum elastic bougie") is a specialized type of stylet used to facilitate difficult intubation. This flexible device is 60 cm (24 in) in length, 15 French (5 mm diameter), with a small "hockey-stick" angle at the far end. Unlike a traditional intubating stylet, the Eschmann tracheal tube introducer is typically inserted directly into the trachea and then used as a guide over which the endotracheal tube can be passed (in a manner analogous to the Seldinger technique). As the Eschmann tracheal tube introducer is considerably less rigid than a conventional stylet, this technique is considered to be a relatively atraumatic means of tracheal intubation.

The tracheal tube exchanger is a hollow catheter, 56 to 81 cm (22.0 to 31.9 in) in length, that can be used for removal and replacement of tracheal tubes without the need for laryngoscopy. The Cook Airway Exchange Catheter (CAEC) is another example of this type of catheter; this device has a central lumen (hollow channel) through which oxygen can be administered. Airway exchange catheters are long hollow catheters which often have connectors for jet ventilation, manual ventilation, or oxygen insufflation. It is also possible to connect the catheter to a capnograph to perform respiratory monitoring.
The lighted stylet is a device that employs the principle of transillumination to facilitate blind orotracheal intubation (an intubation technique in which the laryngoscopist does not view the glottis).
Tracheal tubes
A tracheal tube is a catheter that is inserted into the trachea for the primary purpose of establishing and maintaining a patent (open and unobstructed) airway. Tracheal tubes are frequently used for airway management in the settings of general anesthesia, critical care, mechanical ventilation, and emergency medicine. Many different types of tracheal tubes are available, suited for different specific applications. An endotracheal tube is a specific type of tracheal tube that is nearly always inserted through the mouth (orotracheal) or nose (nasotracheal). It is a breathing conduit designed to be placed into the airway of critically injured, ill or anesthetized patients in order to perform mechanical positive pressure ventilation of the lungs and to prevent the possibility of aspiration or airway obstruction. The endotracheal tube has a fitting designed to be connected to a source of pressurized gas such as oxygen. At the other end is an orifice through which such gases are directed into the lungs and may also include a balloon (referred to as a cuff). The tip of the endotracheal tube is positioned above the carina (before the trachea divides to each lung) and sealed within the trachea so that the lungs can be ventilated equally. A tracheostomy tube is another type of tracheal tube; this 2–3-inch-long (51–76 mm) curved metal or plastic tube is inserted into a tracheostomy stoma or a cricothyrotomy incision.

Tracheal tubes can be used to ensure the adequate exchange of oxygen and carbon dioxide, to deliver oxygen in higher concentrations than found in air, or to administer other gases such as helium, nitric oxide, nitrous oxide, xenon, or certain volatile anesthetic agents such as desflurane, isoflurane, or sevoflurane. They may also be used as a route for administration of certain medications such as bronchodilators, inhaled corticosteroids, and drugs used in treating cardiac arrest such as atropine, epinephrine, lidocaine and vasopressin.

Originally made from latex rubber, most modern endotracheal tubes today are constructed of polyvinyl chloride. Tubes constructed of silicone rubber, wire-reinforced silicone rubber or stainless steel are also available for special applications. For human use, tubes range in size from 2 to 10.5 mm (0.1 to 0.4 in) in internal diameter. The size is chosen based on the patient's body size, with the smaller sizes being used for infants and children. Most endotracheal tubes have an inflatable cuff to seal the tracheobronchial tree against leakage of respiratory gases and pulmonary aspiration of gastric contents, blood, secretions, and other fluids. Uncuffed tubes are also available, though their use is limited mostly to children (in small children, the cricoid cartilage is the narrowest portion of the airway and usually provides an adequate seal for mechanical ventilation).

In addition to cuffed or uncuffed, preformed endotracheal tubes are also available. The oral and nasal RAE tubes (named after the inventors Ring, Adair and Elwyn) are the most widely used of the preformed tubes.

There are a number of different types of double-lumen endo-bronchial tubes that have endobronchial as well as endotracheal channels (Carlens, White and Robertshaw tubes). These tubes are typically coaxial, with two separate channels and two separate openings. They incorporate an endotracheal lumen which terminates in the trachea and an endobronchial lumen, the distal tip of which is positioned 1–2 cm into the right or left mainstem bronchus.
There is also the Univent tube, which has a single tracheal lumen and an integrated endobronchial blocker. These tubes enable one to ventilate both lungs, or either lung independently. Single-lung ventilation (allowing the lung on the operative side to collapse) can be useful during thoracic surgery, as it can facilitate the surgeon's view and access to other relevant structures within the thoracic cavity.

The "armored" endotracheal tubes are cuffed, wire-reinforced silicone rubber tubes. They are much more flexible than polyvinyl chloride tubes, yet they are difficult to compress or kink. This can make them useful for situations in which the trachea is anticipated to remain intubated for a prolonged duration, or if the neck is to remain flexed during surgery. Most armored tubes have a Magill curve, but preformed armored RAE tubes are also available. Another type of endotracheal tube has four small openings just above the inflatable cuff, which can be used for suction of the trachea or administration of intratracheal medications if necessary. Other tubes (such as the Bivona Fome-Cuf tube) are designed specifically for use in laser surgery in and around the airway.
Methods to confirm tube placement
No single method for confirming tracheal tube placement has been shown to be 100% reliable. Accordingly, the use of multiple methods for confirmation of correct tube placement is now widely considered to be the standard of care. Such methods include direct visualization as the tip of the tube passes through the glottis, or indirect visualization of the tracheal tube within the trachea using a device such as a bronchoscope. With a properly positioned tracheal tube, equal bilateral breath sounds will be heard upon listening to the chest with a stethoscope, and no sound upon listening to the area over the stomach. Equal bilateral rise and fall of the chest wall will be evident with ventilatory excursions. A small amount of water vapor will also be evident within the lumen of the tube with each exhalation, and there will be no gastric contents in the tracheal tube at any time.

Ideally, at least one of the methods utilized for confirming tracheal tube placement will be a measuring instrument. Waveform capnography has emerged as the gold standard for the confirmation of tube placement within the trachea. Other methods relying on instruments include the use of a colorimetric end-tidal carbon dioxide detector, a self-inflating esophageal bulb, or an esophageal detection device. The distal tip of a properly positioned tracheal tube will be located in the mid-trachea, roughly 2 cm (1 in) above the bifurcation of the carina; this can be confirmed by chest x-ray. If it is inserted too far into the trachea (beyond the carina), the tip of the tracheal tube is likely to be within the right main bronchus—a situation often referred to as a "right mainstem intubation". In this situation, the left lung may be unable to participate in ventilation, which can lead to decreased oxygen content due to ventilation/perfusion mismatch.
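One way to picture the "multiple methods" standard is as a checklist that requires several independent signs, at least one of them instrument-based. The Python sketch below is purely illustrative; the field names are hypothetical, and the requirement that waveform capnography be among the positive findings reflects its gold-standard status as described above.

from dataclasses import dataclass

@dataclass
class PlacementSigns:
    tube_seen_passing_glottis: bool       # direct or indirect visualization
    sustained_etco2_waveform: bool        # waveform capnography (instrument-based)
    equal_bilateral_breath_sounds: bool   # auscultation of both lung fields
    no_sounds_over_stomach: bool          # epigastric auscultation
    symmetric_chest_rise: bool            # equal bilateral chest excursion

def placement_confirmed(signs, minimum_positive=3):
    """Require capnography plus corroborating signs (illustrative only)."""
    positives = [signs.tube_seen_passing_glottis,
                 signs.sustained_etco2_waveform,
                 signs.equal_bilateral_breath_sounds,
                 signs.no_sounds_over_stomach,
                 signs.symmetric_chest_rise]
    return signs.sustained_etco2_waveform and sum(positives) >= minimum_positive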
Special situations
Emergencies
Tracheal intubation in the emergency setting can be difficult with the fiberoptic bronchoscope due to blood, vomit, or secretions in the airway and poor patient cooperation. Because of this, patients with massive facial injury, complete upper airway obstruction, severely diminished ventilation, or profuse upper airway bleeding are poor candidates for fiberoptic intubation. Fiberoptic intubation under general anesthesia typically requires two skilled individuals. Success rates of only 83–87% have been reported using fiberoptic techniques in the emergency department, with significant nasal bleeding occurring in up to 22% of patients. These drawbacks limit the use of fiberoptic bronchoscopy somewhat in urgent and emergency situations.

Personnel experienced in direct laryngoscopy are not always immediately available in certain settings that require emergency tracheal intubation. For this reason, specialized devices have been designed to act as bridges to a definitive airway. Such devices include the laryngeal mask airway, cuffed oropharyngeal airway and the esophageal-tracheal combitube (Combitube). Other devices such as rigid stylets, the lightwand (a blind technique) and indirect fiberoptic rigid stylets, such as the Bullard scope, Upsher scope and the WuScope, can also be used as alternatives to direct laryngoscopy. Each of these devices has its own unique set of benefits and drawbacks, and none of them is effective under all circumstances.
Rapid-sequence induction and intubation
Rapid sequence induction and intubation (RSI) is a particular method of induction of general anesthesia, commonly employed in emergency operations and other situations where patients are assumed to have a full stomach. The objective of RSI is to minimize the possibility of regurgitation and pulmonary aspiration of gastric contents during the induction of general anesthesia and subsequent tracheal intubation. RSI traditionally involves preoxygenating the lungs with a tightly fitting oxygen mask, followed by the sequential administration of an intravenous sleep-inducing agent and a rapidly acting neuromuscular-blocking drug, such as rocuronium, succinylcholine, or cisatracurium besilate, before intubation of the trachea.

One important difference between RSI and routine tracheal intubation is that the practitioner does not manually assist the ventilation of the lungs after the onset of general anesthesia and cessation of breathing, until the trachea has been intubated and the cuff has been inflated. Another key feature of RSI is the application of manual cricoid pressure to the cricoid cartilage, often referred to as the "Sellick maneuver", prior to instrumentation of the airway and intubation of the trachea.

Named for British anesthetist Brian Arthur Sellick (1918–1996) who first described the procedure in 1961, the goal of cricoid pressure is to minimize the possibility of regurgitation and pulmonary aspiration of gastric contents. Cricoid pressure has been widely used during RSI for nearly fifty years, despite a lack of compelling evidence to support this practice. The initial article by Sellick was based on a small sample size at a time when high tidal volumes, head-down positioning and barbiturate anesthesia were the rule. Beginning around 2000, a significant body of evidence has accumulated which questions the effectiveness of cricoid pressure. The application of cricoid pressure may in fact displace the esophagus laterally instead of compressing it as described by Sellick. Cricoid pressure may also compress the glottis, which can obstruct the view of the laryngoscopist and actually cause a delay in securing the airway.

Cricoid pressure is often confused with the "BURP" (Backwards Upwards Rightwards Pressure) maneuver. While both of these involve digital pressure to the anterior aspect (front) of the laryngeal apparatus, the purpose of the latter is to improve the view of the glottis during laryngoscopy and tracheal intubation, rather than to prevent regurgitation. Both cricoid pressure and the BURP maneuver have the potential to worsen laryngoscopy.

RSI may also be used in prehospital emergency situations when a patient is conscious but respiratory failure is imminent (such as in extreme trauma). This procedure is commonly performed by flight paramedics. Flight paramedics often use RSI to intubate before transport because intubation in a moving fixed-wing or rotary-wing aircraft is extremely difficult to perform due to environmental factors. The patient will be paralyzed and intubated on the ground before transport by aircraft.
Cricothyrotomy
A cricothyrotomy is an incision made through the skin and cricothyroid membrane to establish a patent airway during certain life-threatening situations, such as airway obstruction by a foreign body, angioedema, or massive facial trauma. A cricothyrotomy is nearly always performed as a last resort in cases where orotracheal and nasotracheal intubation are impossible or contraindicated. Cricothyrotomy is easier and quicker to perform than tracheotomy, does not require manipulation of the cervical spine, and is associated with fewer complications.

The easiest method to perform this technique is the needle cricothyrotomy (also referred to as a percutaneous dilational cricothyrotomy), in which a large-bore (12–14 gauge) intravenous catheter is used to puncture the cricothyroid membrane. Oxygen can then be administered through this catheter via jet insufflation. However, while needle cricothyrotomy may be life-saving in extreme circumstances, this technique is only intended to be a temporizing measure until a definitive airway can be established. While needle cricothyrotomy can provide adequate oxygenation, the small diameter of the cricothyrotomy catheter is insufficient for elimination of carbon dioxide (ventilation). After one hour of apneic oxygenation through a needle cricothyrotomy, one can expect a PaCO2 of greater than 250 mm Hg and an arterial pH of less than 6.72, despite an oxygen saturation of 98% or greater. A more definitive airway can be established by performing a surgical cricothyrotomy, in which a 5 to 6 mm (0.20 to 0.24 in) endotracheal tube or tracheostomy tube can be inserted through a larger incision.

Several manufacturers market prepackaged cricothyrotomy kits, which enable one to use either a wire-guided percutaneous dilational (Seldinger) technique, or the classic surgical technique to insert a polyvinylchloride catheter through the cricothyroid membrane. The kits may be stocked in hospital emergency departments and operating suites, as well as ambulances and other selected pre-hospital settings.
Tracheotomy
Tracheotomy consists of making an incision on the front of the neck and opening a direct airway through an incision in the trachea. The resulting opening can serve independently as an airway or as a site for a tracheostomy tube to be inserted; this tube allows a person to breathe without the use of the nose or mouth. The opening may be made by a scalpel or a needle (referred to as surgical and percutaneous techniques respectively), and both techniques are widely used in current practice. In order to limit the risk of damage to the recurrent laryngeal nerves (the nerves that control the voice box), the tracheotomy is performed as high in the trachea as possible. If only one of these nerves is damaged, the patient's voice may be impaired (dysphonia); if both of the nerves are damaged, the patient will be unable to speak (aphonia). In the acute setting, indications for tracheotomy are similar to those for cricothyrotomy. In the chronic setting, indications for tracheotomy include the need for long-term mechanical ventilation and removal of tracheal secretions (e.g., comatose patients, or extensive surgery involving the head and neck).
Children
There are significant differences in airway anatomy and respiratory physiology between children and adults, and these are taken into careful consideration before performing tracheal intubation of any pediatric patient. The differences, which are quite significant in infants, gradually disappear as the human body approaches a mature age and body mass index.

For infants and young children, orotracheal intubation is easier than the nasotracheal route. Nasotracheal intubation carries a risk of dislodgement of adenoids and nasal bleeding. Despite the greater difficulty, the nasotracheal route is preferable to orotracheal intubation in children undergoing intensive care and requiring prolonged intubation because this route allows a more secure fixation of the tube. As with adults, there are a number of devices specially designed for assistance with difficult tracheal intubation in children. Confirmation of proper position of the tracheal tube is accomplished as with adult patients.

Because the airway of a child is narrow, a small amount of glottic or tracheal swelling can produce critical obstruction. Inserting a tube that is too large relative to the diameter of the trachea can cause swelling. Conversely, inserting a tube that is too small can result in inability to achieve effective positive pressure ventilation due to retrograde escape of gas through the glottis and out the mouth and nose (often referred to as a "leak" around the tube). An excessive leak can usually be corrected by inserting a larger tube or a cuffed tube.

The tip of a correctly positioned tracheal tube will be in the mid-trachea, between the collarbones on an anteroposterior chest radiograph. The correct diameter of the tube is that which results in a small leak at a pressure of about 25 cm (10 in) of water. The appropriate inner diameter for the endotracheal tube is estimated to be roughly the same diameter as the child's little finger. The appropriate length for the endotracheal tube can be estimated by doubling the distance from the corner of the child's mouth to the ear canal. For premature infants, 2.5 mm (0.1 in) internal diameter is an appropriate size for the tracheal tube. For infants of normal gestational age, 3 mm (0.12 in) internal diameter is an appropriate size. For normally nourished children 1 year of age and older, two formulae are used to estimate the appropriate diameter and depth for tracheal intubation: the internal diameter of the tube in mm is (patient's age in years + 16) / 4, while the appropriate depth of insertion in cm is 12 + (patient's age in years / 2); see the sketch below.
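The two age-based formulae can be made concrete in a short sketch. This is illustrative only: the function names are invented for the example, and, per the text, the formulae apply to normally nourished children one year of age and older, with the result always checked against the leak test described above.

def tube_internal_diameter_mm(age_years):
    """Estimated internal diameter in mm: (age in years + 16) / 4."""
    return (age_years + 16) / 4

def insertion_depth_cm(age_years):
    """Estimated depth of insertion in cm: 12 + (age in years / 2)."""
    return 12 + age_years / 2

print(tube_internal_diameter_mm(4))  # 5.0 mm for a 4-year-old
print(insertion_depth_cm(4))         # 14.0 cm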
Newborn infants
Endotracheal suctioning is often used during intubation in newborn infants to reduce the risk of a blocked tube due to secretions, to reduce the risk of a collapsed lung, and to reduce pain. Suctioning is sometimes used at specifically scheduled intervals, "as needed", or less frequently. Further research is necessary to determine the most effective suctioning schedule or frequency of suctioning in intubated infants.

In newborns, free-flow oxygen used to be recommended during intubation; however, as there is no evidence of benefit, the 2011 NRP guidelines no longer recommend it.
Predicting difficulty
Tracheal intubation is not a simple procedure and the consequences of failure are grave. Therefore, the patient is carefully evaluated for potential difficulty or complications beforehand. This involves taking the medical history of the patient and performing a physical examination, the results of which can be scored against one of several classification systems. The proposed surgical procedure (e.g., surgery involving the head and neck, or bariatric surgery) may lead one to anticipate difficulties with intubation. Many individuals have unusual airway anatomy, such as those who have limited movement of their neck or jaw, or those who have tumors, deep swelling due to injury or to allergy, developmental abnormalities of the jaw, or excess fatty tissue of the face and neck. Using conventional laryngoscopic techniques, intubation of the trachea can be difficult or even impossible in such patients. This is why all persons performing tracheal intubation must be familiar with alternative techniques of securing the airway. Use of the flexible fiberoptic bronchoscope and similar devices has become among the preferred techniques in the management of such cases. However, these devices require a different skill set than that employed for conventional laryngoscopy and are expensive to purchase, maintain and repair.

When taking the patient's medical history, the subject is questioned about any significant signs or symptoms, such as difficulty in speaking or difficulty in breathing. These may suggest obstructing lesions in various locations within the upper airway, larynx, or tracheobronchial tree. A history of previous surgery (e.g., previous cervical fusion), injury, radiation therapy, or tumors involving the head, neck and upper chest can also provide clues to a potentially difficult intubation. Previous experiences with tracheal intubation, especially difficult intubation, intubation for prolonged duration (e.g., intensive care unit) or prior tracheotomy are also noted.

A detailed physical examination of the airway is important, particularly:
the range of motion of the cervical spine: the subject should be able to tilt the head back and then forward so that the chin touches the chest.
the range of motion of the jaw (the temporomandibular joint): three of the subject's fingers should be able to fit between the upper and lower incisors.
the size and shape of the upper jaw and lower jaw, looking especially for problems such as maxillary hypoplasia (an underdeveloped upper jaw), micrognathia (an abnormally small jaw), or retrognathia (misalignment of the upper and lower jaw).
the thyromental distance: three of the subject's fingers should be able to fit between the Adam's apple and the chin.
the size and shape of the tongue and palate relative to the size of the mouth.
the teeth, especially noting the presence of prominent maxillary incisors, any loose or damaged teeth, or crowns.

Many classification systems have been developed in an effort to predict difficulty of tracheal intubation, including the Cormack–Lehane classification system, the Intubation Difficulty Scale (IDS), and the Mallampati score. The Mallampati score is drawn from the observation that the size of the base of the tongue influences the difficulty of intubation. It is determined by looking at the anatomy of the mouth, and in particular the visibility of the base of the palatine uvula, faucial pillars and the soft palate. Although such medical scoring systems may aid in the evaluation of patients, no single score or combination of scores can be trusted to specifically detect all and only those patients who are difficult to intubate. Furthermore, one study of experienced anesthesiologists found that they did not score the same patients consistently over time on the widely used Cormack–Lehane classification system, and that only 25% could correctly define all four of its grades. Under certain emergency circumstances (e.g., severe head trauma or suspected cervical spine injury), it may be impossible to fully utilize the physical examination and the various classification systems to predict the difficulty of tracheal intubation; a recent Cochrane systematic review examined the sensitivity and specificity of the various bedside tests commonly used for predicting difficulty in airway management. In such cases, alternative techniques of securing the airway must be readily available.
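As an illustration of one of the scoring systems mentioned above, the modified Mallampati grades can be written out as a simple lookup. The class descriptions below follow the standard grading; encoding them as a Python dictionary is merely a convenience of this sketch, and, as noted, no score reliably identifies every difficult airway.

MALLAMPATI_CLASSES = {
    1: "Soft palate, fauces, uvula, and faucial pillars visible",
    2: "Soft palate, fauces, and uvula visible",
    3: "Soft palate and base of the uvula visible",
    4: "Only the hard palate visible",
}

def suggests_difficult_laryngoscopy(mallampati_class):
    """Classes III and IV are conventionally associated with more difficult
    laryngoscopy, though predictive value is limited (illustrative only)."""
    return mallampati_class >= 3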
Complications
Tracheal intubation is generally considered the best method for airway management under a wide variety of circumstances, as it provides the most reliable means of oxygenation and ventilation and the greatest degree of protection against regurgitation and pulmonary aspiration. However, tracheal intubation requires a great deal of clinical experience to master, and serious complications may result even when properly performed.

Four anatomic features must be present for orotracheal intubation to be straightforward: adequate mouth opening (full range of motion of the temporomandibular joint), sufficient pharyngeal space (determined by examining the back of the mouth), sufficient submandibular space (distance between the thyroid cartilage and the chin, the space into which the tongue must be displaced in order for the laryngoscopist to view the glottis), and adequate extension of the cervical spine at the atlanto-occipital joint. If any of these variables is in any way compromised, intubation should be expected to be difficult.

Minor complications are common after laryngoscopy and insertion of an orotracheal tube. These are typically of short duration, such as sore throat, lacerations of the lips or gums or other structures within the upper airway, chipped, fractured or dislodged teeth, and nasal injury. Other complications which are common but potentially more serious include accelerated or irregular heartbeat, high blood pressure, elevated intracranial and intraocular pressure, and bronchospasm.

More serious complications include laryngospasm, perforation of the trachea or esophagus, pulmonary aspiration of gastric contents or other foreign bodies, fracture or dislocation of the cervical spine, temporomandibular joint or arytenoid cartilages, decreased oxygen content, elevated arterial carbon dioxide, and vocal cord weakness. In addition to these complications, tracheal intubation via the nasal route carries a risk of dislodgement of adenoids and potentially severe nasal bleeding. Newer technologies such as flexible fiberoptic laryngoscopy have fared better in reducing the incidence of some of these complications, though the most frequent cause of intubation trauma remains a lack of skill on the part of the laryngoscopist.

Complications may also be severe and long-lasting or permanent, such as vocal cord damage, esophageal perforation and retropharyngeal abscess, bronchial intubation, or nerve injury. They may even be immediately life-threatening, such as laryngospasm and negative pressure pulmonary edema (fluid in the lungs), aspiration, unrecognized esophageal intubation, or accidental disconnection or dislodgement of the tracheal tube. Potentially fatal complications more often associated with prolonged intubation or tracheotomy include abnormal communication between the trachea and nearby structures such as the innominate artery (tracheoinnominate fistula) or esophagus (tracheoesophageal fistula). Other significant complications include airway obstruction due to loss of tracheal rigidity, ventilator-associated pneumonia and narrowing of the glottis or trachea. The cuff pressure is monitored carefully in order to avoid complications from over-inflation, many of which can be traced to excessive cuff pressure restricting the blood supply to the tracheal mucosa.
A 2000 Spanish study of bedside percutaneous tracheotomy reported overall complication rates of 10–15% and procedural mortality of 0%, comparable to those of other series reported in the literature from the Netherlands and the United States.

Inability to secure the airway, with subsequent failure of oxygenation and ventilation, is a life-threatening complication which, if not immediately corrected, leads to decreased oxygen content, brain damage, cardiovascular collapse, and death. When performed improperly, the associated complications (e.g., unrecognized esophageal intubation) may be rapidly fatal. Without adequate training and experience, the incidence of such complications is high: among paramedics in several United States urban communities, for example, the incidence of unrecognized esophageal or hypopharyngeal intubation has been reported as 6% to 25%. The widely known case of Andrew Davis Hughes, from Emerald Isle, NC, was one in which the patient was improperly intubated and, due to the lack of oxygen, sustained severe brain damage and died. Where basic emergency medical technicians are permitted to intubate, which is not common, reported success rates are as low as 51%. In one study, nearly half of patients with misplaced tracheal tubes died in the emergency room. Because of this, recent editions of the American Heart Association's Guidelines for Cardiopulmonary Resuscitation have de-emphasized the role of tracheal intubation in favor of other airway management techniques such as bag-valve-mask ventilation, the laryngeal mask airway, and the Combitube. However, recent higher-quality studies have shown no survival or neurological benefit of endotracheal intubation over supraglottic airway devices (laryngeal mask or Combitube).

One complication, unintentional and unrecognized intubation of the esophagus, is both common (as frequent as 25% in the hands of inexperienced personnel) and likely to result in a deleterious or even fatal outcome. In such cases, oxygen is inadvertently administered to the stomach, from where it cannot be taken up by the circulatory system, instead of the lungs. If this situation is not immediately identified and corrected, death will ensue from cerebral and cardiac anoxia.
Of 4,460 claims in the American Society of Anesthesiologists (ASA) Closed Claims Project database, 266 (approximately 6%) were for airway injury. Of these 266 cases, 87% of the injuries were temporary, 5% were permanent or disabling, and 8% resulted in death. Difficult intubation, age older than 60 years, and female gender were associated with claims for perforation of the esophagus or pharynx. Early signs of perforation were present in only 51% of perforation claims, whereas late sequelae occurred in 65%.

During the SARS and COVID-19 pandemics, tracheal intubation has been used with a ventilator in severe cases where the patient struggles to breathe. Performing the procedure carries a risk of the caregiver becoming infected.
Alternatives
Although it offers the greatest degree of protection against regurgitation and pulmonary aspiration, tracheal intubation is not the only means to maintain a patent airway. Alternative techniques for airway management and delivery of oxygen, volatile anesthetics, or other breathing gases include the laryngeal mask airway, i-gel, cuffed oropharyngeal airway, continuous positive airway pressure (CPAP mask), nasal BiPAP mask, simple face mask, and nasal cannula.

General anesthesia is often administered without tracheal intubation in selected cases where the procedure is brief in duration, or where the depth of anesthesia is not sufficient to cause significant compromise in ventilatory function. Even for longer-duration or more invasive procedures, a general anesthetic may be administered without intubating the trachea, provided that patients are carefully selected and the risk-benefit ratio is favorable (i.e., the risks associated with an unprotected airway are believed to be less than the risks of intubating the trachea).

Airway management can be classified into closed or open techniques depending on the system of ventilation used. Tracheal intubation is a typical example of a closed technique, as ventilation occurs using a closed circuit. Several open techniques exist, such as spontaneous ventilation, apnoeic ventilation, or jet ventilation. Each has its own specific advantages and disadvantages which determine when it should be used.
Spontaneous ventilation has traditionally been performed with an inhalational agent (i.e., gas induction or inhalational induction using halothane or sevoflurane); however, it can also be performed using intravenous anaesthesia (e.g., propofol, ketamine, or dexmedetomidine). SponTaneous Respiration using IntraVEnous anaesthesia and High-flow nasal oxygen (STRIVE Hi) is an open airway technique that uses an upwards titration of propofol to maintain ventilation at deep levels of anaesthesia. It has been used in airway surgery as an alternative to tracheal intubation.
History
Tracheotomy

The earliest known depiction of a tracheotomy is found on two Egyptian tablets dating back to around 3600 BC. The 110-page Ebers Papyrus, an Egyptian medical papyrus which dates to roughly 1550 BC, also makes reference to the tracheotomy. Tracheotomy was described in the Rigveda, a Sanskrit text of ayurvedic medicine written around 2000 BC in ancient India. The Sushruta Samhita from around 400 BC is another text from the Indian subcontinent on ayurvedic medicine and surgery that mentions tracheotomy. Asclepiades of Bithynia (c. 124–40 BC) is often credited as being the first physician to perform a non-emergency tracheotomy. Galen of Pergamon (AD 129–199) clarified the anatomy of the trachea and was the first to demonstrate that the larynx generates the voice. In one of his experiments, Galen used bellows to inflate the lungs of a dead animal. Ibn Sīnā (980–1037) described the use of tracheal intubation to facilitate breathing in 1025 in his 14-volume medical encyclopedia, The Canon of Medicine. In the 12th-century medical textbook Al-Taisir, Ibn Zuhr (1092–1162), also known as Avenzoar, of Al-Andalus provided a correct description of the tracheotomy operation.

The first detailed descriptions of tracheal intubation and subsequent artificial respiration of animals were from Andreas Vesalius (1514–1564) of Brussels. In his landmark book published in 1543, De humani corporis fabrica, he described an experiment in which he passed a reed into the trachea of a dying animal whose thorax had been opened and maintained ventilation by blowing into the reed intermittently. Antonio Musa Brassavola (1490–1554) of Ferrara successfully treated a patient with peritonsillar abscess by tracheotomy. Brassavola published his account in 1546; this operation has been identified as the first recorded successful tracheotomy, despite the many previous references to this operation. Towards the end of the 16th century, Hieronymus Fabricius (1533–1619) described a useful technique for tracheotomy in his writings, although he had never actually performed the operation himself. In 1620 the French surgeon Nicholas Habicot (1550–1624) published a report of four successful tracheotomies. In 1714, anatomist Georg Detharding (1671–1747) of the University of Rostock performed a tracheotomy on a drowning victim.

Despite the many recorded instances of its use since antiquity, it was not until the early 19th century that the tracheotomy finally began to be recognized as a legitimate means of treating severe airway obstruction. In 1852, French physician Armand Trousseau (1801–1867) presented a series of 169 tracheotomies to the Académie Impériale de Médecine; 158 of these were performed for the treatment of croup, and 11 were performed for "chronic maladies of the larynx". Between 1830 and 1855, more than 350 tracheotomies were performed in Paris, most of them at the Hôpital des Enfants Malades, a public hospital, with an overall survival rate of only 20–25%. This compares with a survival rate of 58% among the 24 patients in Trousseau's private practice, who fared better due to greater postoperative care.

In 1871, the German surgeon Friedrich Trendelenburg (1844–1924) published a paper describing the first successful elective human tracheotomy to be performed for the purpose of administration of general anesthesia. In 1888, Sir Morell Mackenzie (1837–1892) published a book discussing the indications for tracheotomy.
In the early 20th century, tracheotomy became a life-saving treatment for patients affected with paralytic poliomyelitis who required mechanical ventilation. In 1909, Philadelphia laryngologist Chevalier Jackson (1865–1958) described a technique for tracheotomy that is used to this day.
Laryngoscopy and non-surgical techniques
In 1854, a Spanish singing teacher named Manuel García (1805–1906) became the first person to view the functioning glottis in a living human. In 1858, French pediatrician Eugène Bouchut (1818–1891) developed a new technique for non-surgical orotracheal intubation to bypass laryngeal obstruction resulting from a diphtheria-related pseudomembrane. In 1880, Scottish surgeon William Macewen (1848–1924) reported on his use of orotracheal intubation as an alternative to tracheotomy to allow a patient with glottic edema to breathe, as well as in the setting of general anesthesia with chloroform. In 1895, Alfred Kirstein (1863–1922) of Berlin first described direct visualization of the vocal cords, using an esophagoscope he had modified for this purpose; he called this device an autoscope.

In 1913, Chevalier Jackson was the first to report a high rate of success for the use of direct laryngoscopy as a means to intubate the trachea. Jackson introduced a new laryngoscope blade that incorporated a component the operator could slide out to allow room for passage of an endotracheal tube or bronchoscope. Also in 1913, New York surgeon Henry H. Janeway (1873–1921) published results he had achieved using a laryngoscope he had recently developed. Another pioneer in this field was Sir Ivan Whiteside Magill (1888–1986), who developed the technique of awake blind nasotracheal intubation, the Magill forceps, the Magill laryngoscope blade, and several apparatuses for the administration of volatile anesthetic agents. The Magill curve of an endotracheal tube is also named for Magill. Sir Robert Macintosh (1897–1989) introduced a curved laryngoscope blade in 1943; the Macintosh blade remains to this day the most widely used laryngoscope blade for orotracheal intubation.

Between 1945 and 1952, optical engineers built upon the earlier work of Rudolph Schindler (1888–1968), developing the first gastrocamera. In 1964, optical fiber technology was applied to one of these early gastrocameras to produce the first flexible fiberoptic endoscope. Initially used in upper GI endoscopy, this device was first used for laryngoscopy and tracheal intubation by Peter Murphy, an English anesthetist, in 1967. The concept of using a stylet for replacing or exchanging orotracheal tubes was introduced by Finucane and Kupshik in 1978, using a central venous catheter.

By the mid-1980s, the flexible fiberoptic bronchoscope had become an indispensable instrument within the pulmonology and anesthesia communities. The digital revolution of the 21st century has brought newer technology to the art and science of tracheal intubation. Several manufacturers have developed video laryngoscopes which employ digital technology such as the CMOS active pixel sensor (CMOS APS) to generate a view of the glottis so that the trachea may be intubated.
See also
Intratracheal instillation
Notes
References
External links
Video of endotracheal intubation using C-MAC D-blade and bougie used as introducer.
Videos of direct laryngoscopy recorded with the Airway Cam (TM) imaging system
Examples of some devices for facilitation of tracheal intubation
Free image rich resource explaining various types of endotracheal tubes
Tracheal intubation live case 2022
Epididymitis

Epididymitis is a medical condition characterized by inflammation of the epididymis, a curved structure at the back of the testicle. Onset of pain is typically over a day or two. The pain may improve with raising the testicle. Other symptoms may include swelling of the testicle, burning with urination, or frequent urination. Inflammation of the testicle is commonly also present.

In those who are young and sexually active, gonorrhea and chlamydia are frequently the underlying cause. In older males and men who practice insertive anal sex, enteric bacteria are a common cause. Diagnosis is typically based on symptoms. Conditions that may result in similar symptoms include testicular torsion, inguinal hernia, and testicular cancer. Ultrasound can be useful if the diagnosis is unclear.

Treatment may include pain medications, NSAIDs, and elevation. Recommended antibiotics in those who are young and sexually active are ceftriaxone and doxycycline. Among those who are older, ofloxacin may be used. Complications include infertility and chronic pain. People aged 15 to 35 are most commonly affected, with about 600,000 people within this age group affected per year in the United States.
Signs and symptoms
Those aged 15 to 35 are most commonly affected. The acute form usually develops over the course of several days, with pain and swelling frequently in only one testis, which will hang low in the scrotum. There will often be a recent history of dysuria or urethral discharge. Fever is also a common symptom. In the chronic version, the patient may have painful point tenderness, and palpation may or may not reveal an irregular or indurated epididymis. A scrotal ultrasound may reveal problems with the epididymis, but such an ultrasound may also show nothing unusual. The majority of patients who present with chronic epididymitis have had symptoms for over five years.: p.311
Complications
If untreated, the major complications of acute epididymitis are abscess formation and testicular infarction. Chronic epididymitis can lead to permanent damage or even destruction of the epididymis and testicle (resulting in infertility and/or hypogonadism), and infection may spread to any other organ or system of the body. Chronic pain is also an associated complication of untreated chronic epididymitis.
Causes
Though urinary tract infections in men are rare, bacterial infection is the most common cause of acute epididymitis. The bacteria in the urethra back-track through the urinary and reproductive structures to the epididymis. In rare circumstances, the infection reaches the epididymis via the bloodstream.

In sexually active men, Chlamydia trachomatis is responsible for two-thirds of acute cases, followed by Neisseria gonorrhoeae and E. coli (or other bacteria that cause urinary tract infection). Particularly among men over age 35 in whom the cause is E. coli, epididymitis is commonly due to urinary tract obstruction. Less common microbes include Ureaplasma, Mycobacterium, and cytomegalovirus, or Cryptococcus in patients with HIV infection. E. coli is more common in boys before puberty, the elderly, and men who have sex with men. In the majority of cases in which bacteria are the cause, only one side of the scrotum or the other is the locus of pain.

Non-infectious causes are also possible. Reflux of sterile urine (urine without bacteria) through the ejaculatory ducts may cause inflammation with obstruction. In children, it may be a response following an infection with enterovirus, adenovirus, or Mycoplasma pneumoniae. Rare non-infectious causes of chronic epididymitis include sarcoidosis (more prevalent in black men) and Behçet's disease.: p.311 Any form of epididymitis can be caused by genito-urinary surgery, including prostatectomy and urinary catheterization. Congestive epididymitis is a long-term complication of vasectomy. Chemical epididymitis may also result from drugs such as amiodarone.
Diagnosis
Diagnosis is typically based on symptoms. Conditions that may result in similar symptoms include testicular torsion, inguinal hernia, and testicular cancer. Ultrasound can be useful if the diagnosis is unclear.

Epididymitis usually has a gradual onset. Typical findings are redness, warmth, and swelling of the scrotum, with tenderness behind the testicle, away from the middle (this is the normal position of the epididymis relative to the testicle). The cremasteric reflex (elevation of the testicle in response to stroking the upper inner thigh) remains normal. This is a useful sign to distinguish it from testicular torsion. If there is pain relieved by elevation of the testicle, this is called Prehn's sign, which is, however, non-specific and is not useful for diagnosis.
Before the advent of sophisticated medical imaging techniques, surgical exploration was the standard of care. Today, Doppler ultrasound is a common test: it can demonstrate areas of blood flow and can distinguish clearly between epididymitis and torsion. However, as torsion and other sources of testicular pain can often be determined by palpation alone, some studies have suggested that the only real benefit of an ultrasound is to assure the person that they do not have testicular cancer.: p.237 Nuclear testicular blood flow testing is rarely used.

Additional tests may be necessary to identify underlying causes. In younger children, a urinary tract anomaly is frequently found. In sexually active men, tests for sexually transmitted diseases may be done. These may include microscopy and culture of a first-void urine sample, Gram stain and culture of fluid or a swab from the urethra, nucleic acid amplification tests (to amplify and detect microbial DNA or other nucleic acids), or tests for syphilis and HIV.
Classification
Epididymitis can be classified as acute, subacute, and chronic, depending on the duration of symptoms.
Chronic epididymitis
Chronic epididymitis is epididymitis that is present for more than 3 months. Chronic epididymitis is characterized by inflammation even when there is no infection present. Tests are needed to distinguish chronic epididymitis from a range of other disorders that can cause constant scrotal pain including testicular cancer (though this is often painless), enlarged scrotal veins (varicocele), calcifications, and a possible cyst within the epididymis. Some research has found that as much as 80% of visits to a urologist for scrotal pain are for chronic epididymitis.: p.311 As a further complication, the nerves in the scrotal area are closely connected to those of the abdomen, sometimes causing abdominal pain similar to a hernia (see referred pain).
Chronic epididymitis is most commonly associated with lower back pain, and the onset of pain often co-occurs with activity that stresses the low back (i.e., heavy lifting, long periods of car driving, poor posture while sitting, or any other activity that interferes with the normal curve of the lumbar lordosis region).: p.237
Treatment
In both the acute and chronic forms, antibiotics are used if an infection is suspected. The treatment of choice is often azithromycin and cefixime, to cover both N. gonorrhoeae and chlamydia. Fluoroquinolones are no longer recommended due to widespread resistance of N. gonorrhoeae to this class. Doxycycline may be used as an alternative to azithromycin. In chronic epididymitis, a four- to six-week course of antibiotics may be prescribed to ensure the complete eradication of any possible bacterial cause, especially the various chlamydiae.
For cases caused by enteric organisms (such as E. coli), ofloxacin or levofloxacin are recommended.

In children, fluoroquinolones and doxycycline are best avoided. Since bacteria that cause urinary tract infections are often the cause of epididymitis in children, co-trimoxazole or suitable beta-lactam antibiotics (for example, cephalexin) can be used.

Household remedies such as elevation of the scrotum and cold compresses applied regularly to the scrotum may relieve the pain in acute cases. Painkillers or anti-inflammatory drugs are often used for treatment of both chronic and acute forms. Hospitalisation is indicated for severe cases, and check-ups can ensure the infection has cleared up. Surgical removal of the epididymis is rarely necessary, causes sterility, and only gives relief from pain in approximately 50% of cases. However, in acute suppurating epididymitis (acute epididymitis with a discharge of pus), an epididymotomy may be recommended; in refractory cases, a full epididymectomy may be required. In cases with unrelenting testicular pain, removal of the entire testicle (orchiectomy) may also be warranted.
It is generally believed that most cases of chronic epididymitis will eventually "burn out" of the patient's system if left untreated, though this might take years or even decades. However, some prostate-related medications have proven effective in treating chronic epididymitis, including doxazosin.
Epidemiology
Epididymitis accounts for about 1 in 144 visits for medical care (0.69 percent) in men 18 to 50 years old, or 600,000 cases per year in males between 18 and 35, in the United States.

It occurs primarily in those 16 to 30 years of age and 51 to 70 years of age. As of 2008, there appears to be an increase in incidence in the United States that parallels an increase in reported cases of chlamydia and gonorrhea.
References
Further reading
Galejs LE (February 1999). "Diagnosis and treatment of the acute scrotum". Am Fam Physician. 59 (4): 817–24. PMID 10068706.
Nickel JC (2003). "Chronic epididymitis: a practical approach to understanding and managing a difficult urologic enigma". Rev Urol. 5 (4): 209–15. PMC 1553215. PMID 16985840.
External links
Epididymitis at Curlie
Esophagitis

Esophagitis, also spelled oesophagitis, is a disease characterized by inflammation of the esophagus. The esophagus is a tube composed of a mucosal lining and longitudinal and circular smooth muscle fibers. It connects the pharynx to the stomach; swallowed food and liquids normally pass through it.

Esophagitis can be asymptomatic, or it can cause epigastric and/or substernal burning pain, especially when lying down or straining, and can make swallowing difficult (dysphagia). The most common cause of esophagitis is the reverse flow of acid from the stomach into the lower esophagus: gastroesophageal reflux disease (GERD).
Signs and symptoms
The symptoms of esophagitis include:
Heartburn – a burning sensation in the lower mid-chest
Nausea
Dysphagia – swallowing is painful, with difficulty passing or inability to pass food through the esophagus
Vomiting (emesis)
Abdominal pain
Cough
Complications
If the disease remains untreated, it can cause scarring and discomfort in the esophagus. If the irritation is not allowed to heal, esophagitis can result in esophageal ulcers. Esophagitis can develop into Barrett's esophagus and can increase the risk of esophageal cancer.
Causes
Esophagitis itself cannot be spread, although the infections underlying infectious esophagitis can be transmitted to others. Esophagitis can develop due to many causes. GERD is the most common cause of esophagitis because of the backflow of acid from the stomach, which can irritate the lining of the esophagus.
Other causes include:
Medicines – Can cause esophageal damage that can lead to esophageal ulcers
Nonsteroidal anti-inflammatory drugs (NSAIDs) – aspirin, naproxen sodium, and ibuprofen, which are known to irritate the GI tract.
Antibiotics – doxycycline and tetracycline
Quinidine
Bisphosphonates – used to treat osteoporosis
Steroids
Potassium chloride
Chemical injury by alkaline or acid solutions
Physical injury resulting from nasogastric tubes.
Alcohol use disorder – Can wear down the lining of the esophagus.
Crohn's disease – a type of IBD and an autoimmune disease that can cause esophagitis if it attacks the esophagus.
Stress – Can cause higher levels of acid reflux
Radiation therapy – Can affect the immune system.
Allergies (food, inhalants) – Allergies can stimulate eosinophilic esophagitis.
Infection – People with immunodeficiencies have a higher chance of developing esophagitis.
Vitamins and supplements (iron, vitamin C, and potassium) – Supplements and minerals can be hard on the GI tract.
Vomiting – Acid can irritate the esophagus.
Hernias – With a hiatal hernia, part of the stomach pokes through the diaphragm muscle, which can keep stomach acid and food from draining quickly.
Surgery
Eosinophilic esophagitis, a more chronic condition with a theorized autoimmune component
Mechanism
The esophagus is a muscular tube made of both voluntary and involuntary muscles. It is responsible for peristalsis of food. It is about 8 inches long and passes through the diaphragm before entering the stomach. The esophagus is made up of three layers: from the inside out, they are the mucosa, the submucosa, and the muscularis externa. The mucosa, the innermost layer and lining of the esophagus, is composed of stratified squamous epithelium, lamina propria, and muscularis mucosae. At the end of the esophagus is the lower esophageal sphincter, which normally prevents stomach acid from entering the esophagus.
If the sphincter is not sufficiently tight, it may allow acid to enter the esophagus, causing inflammation of one or more layers. Esophagitis may also occur if an infection is present, which may be due to bacteria, viruses, or fungi, or if diseases that affect the immune system are present.

Irritation can be caused by GERD, vomiting, surgery, medications, hernias, and radiation injury. Inflammation can cause the esophagus to narrow, which makes swallowing food difficult and may result in food bolus impaction.
Diagnosis
Esophagitis can be diagnosed by upper endoscopy, biopsy, upper GI series (or barium swallow), and laboratory tests.

An upper endoscopy is a procedure to look at the esophagus by using an endoscope. While looking at the esophagus, the doctor is able to take a small biopsy. The biopsy can be used to confirm inflammation of the esophagus.
An upper GI series uses a barium contrast, fluoroscopy, and an X-ray. During a barium X-ray, a solution with barium, or a pill, is taken before getting an X-ray. The barium makes the organs more visible and can detect whether there is any narrowing, inflammation, or other abnormality that could be causing the disease. The upper GI series can be used to find the cause of GI symptoms; if only the throat and esophagus are examined, the study is called an esophagram.

Laboratory tests can be done on biopsies removed from the esophagus and can help determine the cause of the esophagitis. Laboratory tests can help diagnose a fungal, viral, or bacterial infection. Scanning for white blood cells can help diagnose eosinophilic esophagitis.
Some lifestyle indicators for this disease include stress, unhealthy eating, smoking, drinking, family history, allergies, and immunodeficiency.
Types
Reflux esophagitis
Although it is usually assumed that inflammation from acid reflux is caused by the irritant action of hydrochloric acid on the mucosa, one study suggests that the pathogenesis of reflux esophagitis may be cytokine-mediated.
Infectious esophagitis
Infectious esophagitis occurs due to a viral, fungal, parasitic, or bacterial infection, and is more likely to occur in people who have an immunodeficiency. Types include:
Fungal
Candida (Esophageal candidiasis)
Viral
Herpes simplex (Herpes esophagitis)
Cytomegalovirus
Drug-induced esophagitis
Damage to the esophagus due to medications. If the esophagus is not coated or if the medicine is not taken with enough liquid, it can damage the tissues.
Eosinophilic esophagitis
Eosinophilic esophagitis is caused by a high concentration of eosinophils in the esophagus. The presence of eosinophils in the esophagus may be due to an allergen and is often correlated with GERD. The direction of cause and effect between inflammation and acid reflux is poorly established, with recent studies (in 2016) hinting that reflux does not cause inflammation. This esophagitis can be triggered by allergies to food or to inhaled allergens. This type is still poorly understood.
Lymphocytic esophagitis
Lymphocytic esophagitis is a rare and poorly understood entity associated with an increased number of lymphocytes in the lining of the esophagus. It was first described in 2006. Disease associations may include Crohn's disease, gastroesophageal reflux disease, and coeliac disease. It causes similar changes on endoscopy as eosinophilic esophagitis, including esophageal rings, a narrow-lumen esophagus, and linear furrows.
Caustic esophagitis
Caustic esophagitis is tissue damage of chemical origin. It occasionally occurs through occupational exposure (via breathing of fumes that mix into the saliva, which is then swallowed) or through pica. It occurred in some teenagers during the fad of intentionally eating Tide pods.
By severity
The severity of reflux esophagitis is commonly classified into four grades according to the Los Angeles Classification:

Grade A – One or more mucosal breaks no longer than 5 mm, none of which extends between the tops of two mucosal folds
Grade B – One or more mucosal breaks more than 5 mm long, none of which extends between the tops of two mucosal folds
Grade C – Mucosal breaks that extend between the tops of two or more mucosal folds, but involve less than 75% of the esophageal circumference
Grade D – Mucosal breaks involving at least 75% of the esophageal circumference
Prevention
Since there can be many causes underlying esophagitis, identifying the cause is important for prevention. To prevent reflux esophagitis, avoid acidic foods, caffeine, eating before going to bed, alcohol, fatty meals, and smoking. To prevent drug-induced esophagitis, drink plenty of liquids when taking medicines, take an alternative drug where possible, and do not take medicines while lying down, shortly before sleeping, or in large numbers at one time. Esophagitis is more prevalent in adults, but can affect anyone.
Treatment
Lifestyle changes
Losing weight, stopping smoking and alcohol use, lowering stress, avoiding sleeping or lying down after eating, raising the head of the bed, taking medicines correctly, avoiding certain medications, and avoiding foods that cause the reflux that might be causing the esophagitis.
Medications
Antacids
To treat reflux esophagitis, over-the-counter antacids, medications that reduce acid production (H-2 receptor blockers), and proton pump inhibitors are recommended to help block acid production and to let the esophagus heal. Some prescription medications to treat reflux esophagitis include higher-dose H-2 receptor blockers, proton pump inhibitors, and prokinetics, which help with the emptying of the stomach. However, prokinetics are no longer licensed for GERD because their evidence of efficacy is poor; following a safety review, licensed use of domperidone and metoclopramide is now restricted to short-term use in nausea and vomiting only.
For subtypes
To treat eosinophilic esophagitis, avoiding any allergens that may be stimulating the eosinophils is recommended, including the removal of food allergens from the diet. As for medications, proton pump inhibitors and steroids can be prescribed. Steroids that are used to treat asthma can be swallowed to treat eosinophilic esophagitis due to nonfood allergens.
For infectious esophagitis, medicine is prescribed based on what type of infection is causing the esophagitis. These medicines are prescribed to treat bacterial, fungal, viral, and/or parasitic infections.
Procedures
An endoscopy can be used to remove damaged tissue fragments.
Surgery can be done to remove the damaged part of the esophagus.
For reflux esophagitis, a fundoplication can be done to strengthen the lower esophageal sphincter and keep stomach contents from flowing back into the esophagus.
For esophageal stricture, a gastroenterologist can perform a dilation of the esophagus.

As of 2020, evidence for magnetic sphincter augmentation is poor.
Prognosis
The prognosis for a person with esophagitis depends on the underlying causes and conditions. If a patient has a more serious underlying cause, such as a digestive system or immune system issue, it may be more difficult to treat. Normally, the prognosis is good when no serious illnesses are present; if more than one cause is present, the prognosis may be only fair.
Terminology
The term is from Greek οἰσοφάγος "gullet" and -itis "inflammation".
References
External links
Erysipelas

Erysipelas is a relatively common bacterial infection of the superficial layer of the skin (upper dermis), extending to the superficial lymphatic vessels within the skin. It is characterized by a raised, well-defined, tender, bright red rash, typically on the face or legs, but it can occur anywhere on the skin. It is a form of cellulitis and is potentially serious.

Erysipelas is usually caused by the bacterium Streptococcus pyogenes, also known as group A β-hemolytic streptococci, entering through a break in the skin such as a scratch or an insect bite. It is more superficial than cellulitis and is typically more raised and demarcated. The term is from Greek ἐρυσίπελας (erysípelas), meaning "red skin".

In animals, erysipelas is a disease caused by infection with the bacterium Erysipelothrix rhusiopathiae. The disease caused in animals is called diamond skin disease, which occurs especially in pigs; heart valves and skin are affected. Erysipelothrix rhusiopathiae can also infect humans, but in that case the infection is known as erysipeloid.
Signs and symptoms
Symptoms often occur suddenly. Affected individuals may develop a fever, shivering, chills, fatigue, headaches, vomiting, and general illness within 48 hours of the initial infection. The red plaque enlarges rapidly and has a sharply demarcated, raised edge. It may appear swollen; feel firm, warm, and tender to the touch; and may have a consistency similar to orange peel. Pain may be extreme.

More severe infections can result in vesicles (pox- or insect bite-like marks), blisters, and petechiae (small purple or red spots), with possible skin necrosis (tissue death). Lymph nodes may be swollen, and lymphedema may occur. Occasionally, a red streak extending to the lymph node can be seen.

The infection may occur on any part of the skin, including the face, arms, fingers, legs, and toes; it tends to favour the extremities. The umbilical stump and sites of lymphoedema are also commonly affected. Fat tissue and facial areas, typically around the eyes, ears, and cheeks, are most susceptible to infection. Repeated infection of the extremities can lead to chronic swelling (lymphoedema).
Cause
Most cases of erysipelas are due to Streptococcus pyogenes, also known as group A β-hemolytic streptococci; less commonly to group C or G streptococci; and rarely to Staphylococcus aureus. Newborns may contract erysipelas due to Streptococcus agalactiae, also known as group B streptococcus or GBS.

The infecting bacteria can enter the skin through minor trauma; human, insect, or animal bites; surgical incisions; ulcers; burns; and abrasions. There may be underlying eczema or athlete's foot (tinea pedis), and the infection can originate from streptococci in the subject's own nasal passages or ear.

The rash is due to an exotoxin, not the Streptococcus bacteria themselves, and is found in areas where no bacteria are present; e.g., the infection may be in the nasopharynx, but the rash is usually found on the epidermis and superficial lymphatics.
Diagnosis
Erysipelas is usually diagnosed by the clinician looking at the characteristic well-demarcated rash following a history of injury, or by recognition of one of the risk factors.

Tests, if performed, may show a high white cell count, a raised CRP, or a positive blood culture identifying the organism.

Erysipelas must be differentiated from herpes zoster, angioedema, contact dermatitis, erythema chronicum migrans of early Lyme disease, gout, septic arthritis, septic bursitis, vasculitis, allergic reaction to an insect bite, acute drug reaction, deep venous thrombosis, and diffuse inflammatory carcinoma of the breast.
Differentiating from cellulitis
Erysipelas can be distinguished from cellulitis by two particular features: its raised advancing edge and its sharp borders. The redness in cellulitis is not raised, and its border is relatively indistinct. The bright redness of erysipelas has been described as a third differentiating feature.

Erysipelas does not affect subcutaneous tissue, and it does not release pus, only serum or serous fluid. Subcutaneous edema may lead the physician to misdiagnose it as cellulitis.
Treatment
Depending on the severity, treatment involves either oral or intravenous antibiotics, using penicillins, clindamycin, or erythromycin. While illness symptoms resolve in a day or two, the skin may take weeks to return to normal.
The US Food and Drug Administration has approved four antibiotics, omadacycline (Nuzyra), oritavancin (Orbactiv), dalbavancin (Dalvance), and tedizolid (Sivextro), for the treatment of acute bacterial skin and skin structure infections.
Because of the risk of reinfection, prophylactic antibiotics are sometimes used after resolution of the initial condition.
Prognosis
The disease prognosis includes:
Spread of infection to other areas of the body can occur through the bloodstream (bacteremia), including septic arthritis. Glomerulonephritis can follow an episode of streptococcal erysipelas or other skin infection, but not rheumatic fever.
Recurrence of infection: Erysipelas can recur in 18–30% of cases even after antibiotic treatment. A chronic state of recurrent erysipelas infections can occur with several predisposing factors, including alcoholism, diabetes, and athlete's foot. Another predisposing factor is chronic cutaneous edema, which can in turn be caused by venous insufficiency or heart failure.
Lymphatic damage
Necrotizing fasciitis, commonly known as "flesh-eating" bacterial infection, is a potentially deadly exacerbation of the infection if it spreads to deeper tissue.
Epidemiology
There are currently no validated recent data on the worldwide incidence of erysipelas. From 2004 to 2005, UK hospitals reported 69,576 cases of cellulitis and 516 cases of erysipelas. One book stated that several studies have placed the prevalence rate between one and 250 in every 10,000 people. The development of antibiotics, as well as increased sanitation standards, has contributed to the decreased rate of incidence. Erysipelas caused systemic illness in up to 40% of cases reported by UK hospitals, and 29% of people had recurrent episodes within three years. Anyone can be infected, although incidence rates are higher in infants and the elderly. Several studies have also reported a higher incidence rate in women. Four out of five cases occur on the legs, although historically the face was a more frequent site.

Risk factors for developing the disease include:
Arteriovenous fistula
Chronic skin conditions such as psoriasis, athletes foot, and eczema
Excising the saphenous vein
Immune deficiency or compromise, such as
Diabetes
Alcoholism
Obesity
Human immunodeficiency virus (HIV)
In newborns, exposure of the umbilical cord and vaccination site injury
Issues in lymph or blood circulation
Leg ulcers
Lymphatic edema
Lymphatic obstruction
Lymphoedema
Nasopharyngeal infection
Nephrotic syndrome
Pregnancy
Previous episode(s) of erysipelas
Toe web intertrigo
Traumatic wounds
Venous insufficiency or disease
Preventative measures
Individuals can take preventive steps to reduce their chance of contracting the disease. Properly cleaning and covering wounds is important for people with an open wound. Effectively treating athlete's foot or eczema, if they were the cause of the initial infection, will decrease the chance of the infection occurring again. People with diabetes should pay attention to maintaining good foot hygiene. It is also important to follow up with doctors to make sure the disease has not come back or spread. About one-third of people who have had erysipelas will be infected again within three years. Rigorous antibiotic treatment may be needed in the case of recurrent bacterial skin infections.
Notable cases
History
It was historically known as St. Anthony's fire.
Citations
External links
Erysipeloid

In humans, Erysipelothrix rhusiopathiae infections most commonly present in a mild cutaneous form known as erysipeloid or fish poisoning. E. rhusiopathiae can cause an indolent cellulitis, most commonly in individuals who handle fish and raw meat. It typically gains entry through abrasions on the hand. Bacteremia and endocarditis are uncommon but serious sequelae. Erysipelothrix rhusiopathiae also causes swine erysipelas, which is common in domestic pigs and can be transmitted to humans who work with swine. Due to the rarity of reported human cases, E. rhusiopathiae infections are frequently misidentified at presentation.
Diagnosis
Violaceous swelling with severe pain but without pus (which differentiates it from pus-forming streptococcal and staphylococcal erysipelas)
Erysipeloid of Rosenbach
Erysipeloid of Rosenbach is a cutaneous condition most frequently characterized by a purplish, marginated swelling on the hands.: 264 The eponym Rosenbach's disease refers to the milder type of the condition and is named after Friedrich Julius Rosenbach. Early work on the condition in US fishermen was carried out by Klauder and colleagues.
Treatment
The treatment of choice is a single dose of benzathine benzylpenicillin given by intramuscular injection, or a five-day to one-week course of either oral penicillin or intramuscular procaine benzylpenicillin. Erythromycin or doxycycline may be given instead to people who are allergic to penicillin. E. rhusiopathiae is intrinsically resistant to vancomycin.
See also
Erysipeloid of Rosenbach
References
External links
Esotropia

Esotropia is a form of strabismus in which one or both eyes turn inward. The condition can be constantly present or occur intermittently, and can give the affected individual a "cross-eyed" appearance. It is the opposite of exotropia and usually involves more severe axis deviation than esophoria. Esotropia is sometimes erroneously called "lazy eye", which actually describes the condition of amblyopia: a reduction in vision of one or both eyes that is not the result of any pathology of the eye and cannot be resolved by the use of corrective lenses. Amblyopia can, however, arise as a result of esotropia occurring in childhood: in order to relieve symptoms of diplopia (double vision), the child's brain will ignore or "suppress" the image from the esotropic eye, which, when allowed to continue untreated, will lead to the development of amblyopia. Treatment options for esotropia include glasses to correct refractive errors (see accommodative esotropia below), the use of prisms and/or orthoptic exercises, and/or eye muscle surgery. The term is from the Greek eso, meaning "inward", and trope, meaning "a turning".
Types
Concomitant esotropia
Concomitant esotropia – that is, an inward squint that does not vary with the direction of gaze – mostly sets in before 12 months of age (this constitutes 40% of all strabismus cases) or at the age of three or four. Most patients with "early-onset" concomitant esotropia are emmetropic, whereas most of the "later-onset" patients are hyperopic. It is the most frequent type of natural strabismus not only in humans, but also in monkeys.

Concomitant esotropia can itself be subdivided into esotropias that are either constant or intermittent.
Constant esotropia
A constant esotropia, as the name implies, is present all the time.
Intermittent esotropia
Intermittent esotropias, again as the name implies, are not always present. In very rare cases, they may occur only in repeated cycles of one day on, one day off (cyclic esotropia). However, the vast majority of intermittent esotropias are accommodative in origin.

A patient can have a constant esotropia for reading, but an intermittent esotropia for distance (though rarely vice versa).
Accommodative esotropia
Accommodative esotropia (also called refractive esotropia) is an inward turning of the eyes due to efforts of accommodation. It is often seen in patients with moderate amounts of hyperopia. The person with hyperopia, in an attempt to "accommodate" or focus the eyes, converges the eyes as well, as convergence is associated with activation of the accommodation reflex. The over-convergence associated with the extra accommodation required to overcome a hyperopic refractive error can precipitate a loss of binocular control and lead to the development of esotropia.

The chances of an esotropia developing in a hyperopic child will depend to some degree on the amount of hyperopia present. Where the degree of error is small, the child will typically be able to maintain control because the amount of over-accommodation required to produce clear vision is also small. Where the degree of hyperopia is large, the child may not be able to produce clear vision no matter how much extra accommodation is exerted, and thus no incentive exists for the over-accommodation and convergence that can give rise to the onset of esotropia. However, where the degree of error is small enough to allow the child to generate clear vision by over-accommodation, but large enough to disrupt their binocular control, esotropia will result.
Only about 20% of children with hyperopia greater than +3.5 diopters develop strabismus.

Where the esotropia is solely a consequence of uncorrected hyperopic refractive error, providing the child with the correct glasses and ensuring that these are worn all the time is often enough to control the deviation. In such cases, known as fully accommodative esotropias, the esotropia will only be seen when the child removes their glasses. Many adults with childhood esotropias of this type make use of contact lenses to control their squint; some undergo refractive surgery for this purpose.
A second type of accommodative esotropia also exists, known as convergence excess esotropia. In this condition the child exerts excessive accommodative convergence relative to their accommodation. Thus, in such cases, even when all underlying hyperopic refractive errors have been corrected, the child will continue to squint when looking at very small objects or reading small print. Even though they are exerting a normal amount of accommodative or focusing effort, the amount of convergence associated with this effort is excessive, thus giving rise to esotropia. In such cases an additional hyperopic correction is often prescribed in the form of bifocal lenses, to reduce the degree of accommodation, and hence convergence, being exerted. Many children will gradually learn to control their esotropias, sometimes with the help of orthoptic exercises. However, others will eventually require extra-ocular muscle surgery to resolve their problems.
Congenital esotropia
Congenital esotropia, or infantile esotropia, is a specific sub-type of primary concomitant esotropia. It is a constant esotropia of large and consistent size with onset between birth and six months of age. It is not associated with hyperopia, so the exertion of accommodative effort will not significantly affect the angle of deviation. It is, however, associated with other ocular dysfunctions including oblique muscle over-actions, Dissociated Vertical Deviation (DVD), Manifest Latent Nystagmus, and defective abduction, which develops as a consequence of the tendency of those with infantile esotropia to cross fixate. Cross fixation involves the use of the right eye to look to the left and the left eye to look to the right; a visual pattern that will be natural for the person with the large angle esotropia whose eye is already deviated towards the opposing side.
The origin of the condition is unknown, and its early onset means that the affected individuals potential for developing binocular vision is limited. The appropriate treatment approach remains a matter of some debate. Some ophthalmologists favour an early surgical approach as offering the best prospect of binocularity whilst others remain unconvinced that the prospects of achieving this result are good enough to justify the increased complexity and risk associated with operating on those under the age of one year.
Incomitant esotropia
Incomitant esotropias are conditions in which the esotropia varies in size with the direction of gaze. They can occur in both childhood and adulthood, and arise as a result of neurological, mechanical, or myogenic problems. These problems may directly affect the extra-ocular muscles themselves, and may also result from conditions affecting the nerve or blood supply to these muscles or the bony orbital structures surrounding them. Examples of conditions giving rise to an esotropia include a VIth cranial nerve (abducens) palsy, Duane's syndrome, or orbital injury.
Diagnosis
Classification
Right, left or alternating
Someone with esotropia will squint with either the right or the left eye, but never with both eyes simultaneously. In a left esotropia, the left eye squints, and in a right esotropia the right eye squints. In an alternating esotropia, the patient is able to alternate fixation between the right and left eye, so that at one moment the right eye fixates and the left eye turns inward, and at the next the left eye fixates and the right turns inward. This alternation between the left and right eye is mostly spontaneous, but may be voluntary in some cases. Where a patient tends to consistently fixate with one eye and squint with the other, the eye that squints is likely to develop some amblyopia. Someone whose squint alternates is very unlikely to develop amblyopia because both eyes will receive equal visual stimulation. It is possible to encourage alternation through the use of occlusion or patching of the dominant or fixating eye to promote the use of the other. Esotropia is a highly prevalent congenital condition.
Concomitant versus incomitant
Esotropias can be concomitant, where the size of the deviation does not vary with the direction of gaze, or incomitant, where the direction of gaze does affect the size, or indeed presence, of the esotropia. The majority of esotropias are concomitant and begin early in childhood, typically between the ages of 2 and 4 years. Incomitant esotropias occur in both childhood and adulthood as a result of neurological, mechanical, or myogenic problems affecting the muscles controlling eye movements.
Primary, secondary or consecutive
Concomitant esotropias can arise as an initial problem, in which case they are designated as "primary," as a consequence of loss or impairment of vision, in which case they are designated as "secondary," or following overcorrection of an initial exotropia in which case they are described as being "consecutive". The vast majority of esotropias are primary.
Treatment
The prognosis for each patient with esotropia will depend upon the origin and classification of their condition. However, in general, management will take the following course:
Identify and treat any underlying systemic condition.
Prescribe any glasses required and allow the patient time to settle into them.
Use occlusion to treat any amblyopia present and encourage alternation.
Where appropriate, orthoptic exercises (sometimes referred to as Vision Therapy) can be used to attempt to restore binocularity.
Where appropriate, prismatic correction can be used, either temporarily or permanently, to relieve symptoms of double vision.
In specific cases, and primarily in adult patients, botulinum toxin can be used either as a permanent therapeutic approach or as a temporary measure to prevent contracture of muscles prior to surgery.
Where necessary, extra-ocular muscle surgery (strabismus surgery), in which the surgeon physically repositions the muscles that are turning the eye inward, can be undertaken to improve cosmesis and, on occasion, restore binocularity.
Etymology
The term "esotropia" is ultimately derived from the Ancient Greek ἔσω ésō, meaning “within”, and τρόπος trópos, meaning “a turn”.
References
External links
"Squint / Strabismus". Parallel Vision Problems. British and Irish Orthoptic Society.
"Esotropia". EyeWiki. American Academy of Ophthalmology. | 160 |
Ethylene glycol poisoning

Ethylene glycol poisoning is poisoning caused by drinking ethylene glycol. Early symptoms include intoxication, vomiting, and abdominal pain. Later symptoms may include a decreased level of consciousness, headache, and seizures. Long-term outcomes may include kidney failure and brain damage. Toxicity and death may occur after drinking even a small amount, as ethylene glycol is more toxic than other diols.
Ethylene glycol is a colorless, odorless, sweet liquid, commonly found in antifreeze. It may be drunk accidentally or intentionally in a suicide attempt. When broken down by the body it results in glycolic acid and oxalic acid, which cause most of the toxicity. The diagnosis may be suspected when calcium oxalate crystals are seen in the urine or when acidosis or an increased osmol gap is present in the blood. Diagnosis may be confirmed by measuring ethylene glycol levels in the blood; however, many hospitals do not have the ability to perform this test.

Early treatment increases the chance of a good outcome. Treatment consists of stabilizing the person, followed by the use of an antidote. The preferred antidote is fomepizole, with ethanol used if this is not available. Hemodialysis may also be used in those with organ damage or a high degree of acidosis. Other treatments may include sodium bicarbonate, thiamine, and magnesium.

More than 5,000 cases of poisoning occur in the United States each year. Those affected are often adults and male. Deaths from ethylene glycol have been reported as early as 1930. An outbreak of deaths in 1937, due to a medication formulated with the similar compound diethylene glycol, resulted in the Food, Drug, and Cosmetic Act of 1938 in the United States, which mandated evidence of safety before new medications could be sold. Antifreeze products sometimes have a substance added to make them bitter in order to discourage drinking by children or animals, but this has not been found to be effective.
Signs and symptoms
Signs of ethylene glycol poisoning depend upon the time after ingestion. Symptoms usually follow a three-step progression, although poisoned individuals will not always develop each stage.
Stage 1 (30 minutes to 12 hours) consists of neurological and gastrointestinal symptoms and looks similar to alcohol poisoning. Poisoned individuals may appear to be intoxicated, dizzy, lacking coordination of muscle movements, drooling, depressed, and have slurred speech, seizures, abnormal eye movements, headaches, and confusion. Irritation to the stomach may cause nausea and vomiting. Also seen are excessive thirst and urination. Over time, the body metabolizes ethylene glycol into other toxins.
Stage 2 (12 to 36 hours) is the period in which the signs of "alcohol" poisoning appear to resolve while severe internal damage is still occurring. An elevated heart rate, hyperventilation or increased breathing effort, and dehydration may start to develop, along with high blood pressure and metabolic acidosis. These symptoms are a result of accumulation of organic acids formed by the metabolism of ethylene glycol. Additionally, low calcium concentrations in the blood, overactive muscle reflexes, muscle spasms, QT interval prolongation, and congestive heart failure may occur. If untreated, death most commonly occurs during this period.
Stage 3 (24 to 72 hours) is marked by kidney failure resulting from ethylene glycol poisoning. In cats, this stage occurs 12–24 hours after consuming antifreeze; in dogs, 36–72 hours after. During this stage, severe kidney failure develops secondary to calcium oxalate crystals forming in the kidneys. Severe lethargy, coma, depression, vomiting, seizures, drooling, and inappetence may be seen. Other symptoms include acute tubular necrosis, red blood cells in the urine, excess proteins in the urine, lower back pain, decreased or absent production of urine, elevated blood concentration of potassium, and acute kidney failure. If kidney failure occurs, it is typically reversible, although weeks or months of supportive care, including hemodialysis, may be required before kidney function returns.
Sources
The most common source of ethylene glycol is automotive antifreeze or radiator coolant, where concentrations are high. Other sources of ethylene glycol include windshield deicing agents, brake fluid, motor oil, developing solutions for hobby photographers, wood stains, solvents, and paints. Some people put antifreeze into their cabin's toilet to prevent it from freezing during the winter, resulting in toxicities when animals drink from the toilet. Small amounts of ethylene glycol may be contained in holiday ornaments such as snow globes.

The most significant environmental source of ethylene glycol is aircraft de-icing and anti-icing operations, where it is released onto land and eventually into waterways near airports experiencing cold winter climates. It is also used in manufacturing polyester products. In 2006, approximately 1,540 kilotonnes of ethylene glycol were manufactured in Canada by three companies in Alberta, with most of the production destined for export.
Pathophysiology
The three main systems affected by ethylene glycol poisoning are the central nervous system, metabolic processes, and the kidneys. The central nervous system is affected early in the course of poisoning as the result of a direct action of ethylene glycol. Similar to ethanol, it causes intoxication, followed by drowsiness or coma. Seizures may occur due to a direct effect. The toxic mechanism of ethylene glycol poisoning is mainly due to its metabolites. Initially it is metabolized by alcohol dehydrogenase to glycolaldehyde, which is then oxidized to glycolic acid by aldehyde dehydrogenase. The increase in metabolites may cause encephalopathy or cerebral edema. The metabolic effects occur 12 to 36 hours post-ingestion, causing primarily metabolic acidosis, which is due mainly to accumulated glycolic acid. Additionally, as a side effect of the first two steps of metabolism, an increase in the blood concentration of lactic acid occurs, contributing to lactic acidosis. The formation of acid metabolites also causes inhibition of other metabolic pathways, such as oxidative phosphorylation. The kidney toxicity of ethylene glycol occurs 24 to 72 hours post-ingestion and is caused by a direct cytotoxic effect of glycolic acid. The glycolic acid is then metabolized to glyoxylic acid and finally to oxalic acid. Oxalic acid binds with calcium to form calcium oxalate crystals, which may deposit in and damage many areas of the body including the brain, heart, kidneys, and lungs. The most significant effect is the accumulation of calcium oxalate crystals in the kidneys, which causes kidney damage leading to oliguric or anuric acute kidney failure. The rate-limiting step in this cascade is the conversion of glycolic acid to glyoxylic acid. Accumulation of glycolic acid in the body is mainly responsible for toxicity.
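The accumulation of glycolic acid follows from the kinetics sketched above: the step that drains the glycolic acid pool is rate-limiting, so the intermediate builds up faster than it is cleared. The toy simulation below illustrates this with simple first-order kinetics; the rate constants are arbitrary illustrative values (not pharmacokinetic data), and the only relation taken from the text is that the second step is much slower than the first.

```python
# Toy model of the cascade: ethylene glycol -> glycolic acid -> oxalate.
# k1, k2 are arbitrary first-order rate constants (per hour); k2 << k1
# encodes the rate-limiting conversion of glycolic to glyoxylic acid.

def simulate(hours=48.0, dt=0.01, k1=0.3, k2=0.03):
    eg, glycolic, oxalate = 1.0, 0.0, 0.0  # normalized amounts
    t, peak_t, peak_gly = 0.0, 0.0, 0.0
    while t < hours:
        d_eg = -k1 * eg * dt                    # parent compound cleared by ADH
        d_gly = (k1 * eg - k2 * glycolic) * dt  # fast inflow, slow outflow
        d_ox = k2 * glycolic * dt               # downstream oxalate
        eg, glycolic, oxalate = eg + d_eg, glycolic + d_gly, oxalate + d_ox
        if glycolic > peak_gly:
            peak_t, peak_gly = t, glycolic
        t += dt
    return peak_t, peak_gly

peak_t, peak_gly = simulate()
print(f"glycolic acid peaks at ~{peak_t:.0f} h, holding {peak_gly:.0%} of the dose")
```

With these placeholder constants, most of the ingested dose sits in the glycolic acid pool within hours, which is the qualitative behavior the text describes.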
Toxicity
Ethylene glycol has been shown to be toxic to humans and is also toxic to domestic pets such as cats and dogs. A toxic dose requiring medical treatment varies but is considered more than 0.1 mL per kg body weight (mL/kg) of the pure substance. That is roughly 16 mL of 50% ethylene glycol for an 80 kg adult and 4 mL for a 20 kg child. Poison control centers often use more than a lick or taste in a child, or more than a mouthful in an adult, as a dose requiring hospital assessment. The orally lethal dose in humans has been reported as approximately 1.4 mL/kg of pure ethylene glycol. That is approximately 224 mL (7.6 oz.) of 50% ethylene glycol for an 80 kg adult and 56 mL (2 oz.) for a 20 kg child. Although survival with medical treatment has occurred with much higher doses, death has occurred with 30 mL of the concentrate in an adult. In the EU classification of dangerous substances it is harmful (Xn), while more toxic substances are classified as toxic (T) or very toxic (T+). The U.S. Environmental Protection Agency generally puts substances that are lethal at more than 30 g to adults in Toxicity Class III. Ethylene glycol has a low vapor pressure; it does not evaporate readily at normal temperatures, and therefore high concentrations in air or intoxication are unlikely to occur following inhalational exposure. There may be a slight risk of poisoning where mists or fogs are generated, although this rarely leads to poisoning, as ethylene glycol causes irritation and coughing when breathed in, alerting victims to its presence. Ethylene glycol is not well absorbed through skin, meaning poisoning following dermal exposure is also uncommon.
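The thresholds above are straightforward unit arithmetic, and the quoted volumes can be reproduced directly. A minimal sketch, assuming only the 0.1 mL/kg treatment threshold and the reported 1.4 mL/kg oral lethal dose given in the text:

```python
# Converts a dose of pure ethylene glycol (mL/kg) into the volume of a
# diluted solution (e.g. 50% antifreeze) that contains that dose.

def volume_of_solution(dose_ml_per_kg, weight_kg, concentration=0.5):
    pure_ml = dose_ml_per_kg * weight_kg
    return pure_ml / concentration

for weight_kg in (80, 20):  # the adult and child examples from the text
    threshold = volume_of_solution(0.1, weight_kg)
    lethal = volume_of_solution(1.4, weight_kg)
    print(f"{weight_kg} kg: treatment threshold ~{threshold:.0f} mL, "
          f"reported lethal dose ~{lethal:.0f} mL of 50% solution")
```

Running this reproduces the 16 mL/224 mL (adult) and 4 mL/56 mL (child) figures quoted above.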
Diagnosis
As many of the clinical signs and symptoms of ethylene glycol poisoning are nonspecific and occur in many poisonings, the diagnosis is often difficult. It is most reliably diagnosed by measuring the blood ethylene glycol concentration. Ethylene glycol in biological fluids can be determined by gas chromatography. Many hospital laboratories do not have the ability to perform this blood test, and in its absence the diagnosis must be made based on the presentation of the person. In this situation, a helpful test to diagnose poisoning is the measurement of the osmolal gap. The person's serum osmolality is measured by freezing point depression and then compared with the predicted osmolality based on the person's measured sodium, glucose, blood urea nitrogen, and any ethanol that may have been ingested. The presence of a large osmolal gap supports a diagnosis of ethylene glycol poisoning. However, a normal osmolal gap does not rule out ethylene glycol exposure, because of wide individual variability. The increased osmolal gap is caused by the ethylene glycol itself. As the metabolism of ethylene glycol progresses, the blood ethylene glycol concentration and the osmolal gap decrease, making this test less useful. Additionally, the presence of other alcohols such as ethanol, isopropanol, or methanol, or conditions such as alcoholic or diabetic ketoacidosis, lactic acidosis, or kidney failure, may also produce an elevated osmolal gap, leading to a false diagnosis. Other laboratory abnormalities may suggest poisoning, especially the presence of a metabolic acidosis, particularly if it is characterized by a large anion gap. Large anion gap acidosis is usually present during the initial stage of poisoning. However, acidosis has a large number of differential diagnoses, including poisoning from methanol, salicylates, iron, isoniazid, paracetamol, or theophylline, or conditions such as uremia or diabetic and alcoholic ketoacidosis. The diagnosis of ethylene glycol poisoning should be considered in any person with a severe acidosis. Urine microscopy can reveal needle- or envelope-shaped calcium oxalate crystals in the urine, which can suggest poisoning, although these crystals may not be present until the late stages of poisoning. Finally, many commercial radiator antifreeze products have fluorescein added to enable radiator leaks to be detected using a Wood's lamp. Following ingestion of antifreeze products containing ethylene glycol and fluorescein, a Wood's lamp may reveal fluorescence of a person's mouth area, clothing, vomitus, or urine, which can help to diagnose poisoning.
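The osmolal gap calculation described above can be made concrete. The sketch below uses one commonly used formula for calculated osmolality (sodium in mmol/L; glucose, blood urea nitrogen, and ethanol in mg/dL); the exact coefficients, particularly the ethanol divisor, vary between sources, and the example values are hypothetical.

```python
# Osmolal gap = measured osmolality - calculated osmolality.
# Calculated osmolality here uses the common 2*Na + glucose/18 + BUN/2.8
# form, with an ethanol correction term (the divisor varies by source).

def calculated_osmolality(na_mmol_l, glucose_mg_dl, bun_mg_dl, ethanol_mg_dl=0.0):
    return (2 * na_mmol_l
            + glucose_mg_dl / 18
            + bun_mg_dl / 2.8
            + ethanol_mg_dl / 3.7)

def osmolal_gap(measured, na, glucose, bun, ethanol=0.0):
    return measured - calculated_osmolality(na, glucose, bun, ethanol)

# Hypothetical labs: a measured osmolality of 320 mOsm/kg with these
# values leaves a gap of ~30 mOsm/kg -- markedly elevated, consistent
# with (though not specific for) an unmeasured osmole such as a glycol.
print(f"gap = {osmolal_gap(320, na=140, glucose=90, bun=14):.1f} mOsm/kg")
```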
Prevention
Antifreeze products for automotive use containing propylene glycol in place of ethylene glycol are available and are generally considered safer to use, as propylene glycol has an unpleasant taste in contrast to the perceived "sweet" taste of toxic ethylene glycol-based coolants, and produces only lactic acid in an animal's body, as muscles do when exercised. When using antifreeze products containing ethylene glycol, recommended safety measures include:
Cleaning up any spill immediately and thoroughly. Spills may be cleaned by sprinkling cat litter, sand, or other absorbent material directly on the spill. Once the spill is fully absorbed, the material may be scooped into a plastic bag (while wearing protective gloves), sealed, and disposed of. The spill area may be scrubbed with a stiff brush and warm, soapy water. The soapy water should not be drained into a storm drain.
Checking vehicles regularly for leaks.
Storing antifreeze in clearly marked original sealed containers, in areas that are inaccessible to pets or small children.
Keeping pets and small children away from the area when draining the car radiator.
Disposing of used antifreeze only by taking to a service station.
If antifreeze is placed in toilets, ensuring the lid is down and the door closed.
Treatment
Stabilization and decontamination
The most important initial treatment for ethylene glycol poisoning is stabilizing the person. As ethylene glycol is rapidly absorbed, gastric decontamination is unlikely to be of benefit unless it is performed within 60 minutes of ingestion. Traditionally, gastric lavage or nasogastric aspiration of gastric contents were the most common methods employed in ethylene glycol poisoning. The usefulness of gastric lavage has, however, been questioned, and it is now no longer used routinely in poisoning situations. Ipecac-induced vomiting is not recommended. As activated charcoal does not adsorb glycols, it is not recommended, as it will not be effective at preventing absorption. It is only used in the presence of a toxic dose of another poison or drug. People with significant poisoning often present in a critical condition. In this situation, stabilization of the person, including airway management with intubation, should be performed in preference to gastrointestinal decontamination. People presenting with metabolic acidosis or seizures require treatment with sodium bicarbonate and anticonvulsants such as benzodiazepines, respectively. Sodium bicarbonate should be used cautiously, as it can worsen hypocalcemia by increasing the plasma protein binding of calcium. If hypocalcemia occurs, it can be treated with calcium replacement, although calcium supplementation can increase the precipitation of calcium oxalate crystals, leading to tissue damage. Intubation and respiratory support may be required in severely intoxicated people; people with hypotension require treatment with intravenous fluids and possibly vasopressors.
Antidotes
Following decontamination and the institution of supportive measures, the next priority is inhibition of further ethylene glycol metabolism using antidotes. The antidotes for ethylene glycol poisoning are ethanol and fomepizole. This antidotal treatment forms the mainstay of management of ethylene glycol poisoning. The toxicity of ethylene glycol comes from its metabolism to glycolic acid and oxalic acid. The goal of pharmacotherapy is to prevent the formation of these metabolites. Ethanol acts by competing with ethylene glycol for alcohol dehydrogenase (ADH), the first enzyme in the degradation pathway. Because ethanol has nearly 100 times greater affinity for ADH, it blocks the breakdown of ethylene glycol into glycolaldehyde, thus preventing further degradation to oxalic acid and the associated nephrotoxic effects. The unreacted ethylene glycol remains in the body and is eventually excreted in the urine; however, supportive therapy for the CNS depression and metabolic acidosis will be required until the ethylene glycol concentrations fall below toxic limits. Pharmaceutical-grade ethanol is usually given intravenously as a 5 or 10% solution in 5% dextrose, but it is also sometimes given orally in the form of a strong spirit such as whisky, vodka, or gin. Fomepizole is a potent inhibitor of alcohol dehydrogenase; similar to ethanol, it acts to block the formation of the toxic metabolites. Fomepizole has been shown to be highly effective as an antidote for ethylene glycol poisoning. It is the only antidote approved by the U.S. Food and Drug Administration for the treatment of ethylene glycol poisoning. Both antidotes have advantages and disadvantages. Ethanol is readily available in most hospitals, is inexpensive, and can be administered orally as well as intravenously. Its adverse effects include intoxication, hypoglycemia in children, and possible liver toxicity. People receiving ethanol therapy also require frequent blood ethanol concentration measurements and dosage adjustments to maintain a therapeutic ethanol concentration, and therefore must be monitored in an intensive care unit. By contrast, the adverse effects of fomepizole are minimal, and the approved dosing regimen maintains therapeutic concentrations without the need to monitor blood concentrations of the drug. The disadvantage of fomepizole is its expense: at US$1,000 per gram, an average course used in an adult poisoning costs approximately $3,500 to $4,000. Despite the cost, fomepizole is gradually replacing ethanol as the antidote of choice in ethylene glycol poisoning. Adjunct agents including thiamine and pyridoxine are often given, because they may help prevent the formation of oxalic acid. The use of these agents is based on theoretical observations, and there is limited evidence to support their use in treatment; they may be of particular benefit in people who could be deficient in these vitamins, such as those who are malnourished or alcoholic.
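The competitive mechanism described for ethanol can be illustrated with standard Michaelis–Menten enzyme kinetics. In the sketch below, the roughly 100-fold affinity difference from the text is encoded as Ki = Km/100; Vmax, Km, and the concentrations are arbitrary normalized values, not pharmacological data.

```python
# Competitive inhibition of alcohol dehydrogenase (ADH):
#   v = Vmax * [S] / (Km * (1 + [I]/Ki) + [S])
# with ethylene glycol as substrate S and ethanol as inhibitor I.

def adh_rate(substrate, inhibitor, km=1.0, ki=0.01, vmax=1.0):
    return vmax * substrate / (km * (1 + inhibitor / ki) + substrate)

s = 1.0  # normalized ethylene glycol concentration
for ethanol in (0.0, 0.1, 1.0):
    print(f"ethanol = {ethanol:<4}: relative ADH rate on ethylene glycol "
          f"= {adh_rate(s, ethanol):.3f}")
```

Even a modest inhibitor concentration collapses the rate at which ADH turns over ethylene glycol, which is the point of the antidote.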
Hemodialysis
In addition to antidotes, an important treatment for poisoning is the use of hemodialysis. Hemodialysis is used to enhance the removal of unmetabolized ethylene glycol, as well as its metabolites, from the body. It has been shown to be highly effective in the removal of ethylene glycol and its metabolites from the blood. Hemodialysis also has the added benefit of correcting other metabolic derangements or supporting deteriorating kidney function. Hemodialysis is usually indicated in people with severe metabolic acidosis (blood pH less than 7.3), kidney failure, severe electrolyte imbalance, or if the person's condition is deteriorating despite treatment. Often both antidotal treatment and hemodialysis are used together in the treatment of poisoning. Because hemodialysis will also remove the antidotes from the blood, doses of antidotes need to be increased to compensate. If hemodialysis is not available, peritoneal dialysis also removes ethylene glycol, although less efficiently.
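The indications listed above amount to a simple decision rule. The function below is purely an illustration of that rule, not clinical guidance; it encodes only the four criteria named in the text.

```python
# Encodes the hemodialysis indications stated above: severe metabolic
# acidosis (blood pH < 7.3), kidney failure, severe electrolyte
# imbalance, or deterioration despite treatment.

def hemodialysis_indicated(blood_ph, kidney_failure,
                           severe_electrolyte_imbalance,
                           deteriorating_despite_treatment):
    return (blood_ph < 7.3
            or kidney_failure
            or severe_electrolyte_imbalance
            or deteriorating_despite_treatment)

# Severe acidosis alone is enough to meet the rule:
print(hemodialysis_indicated(7.25, False, False, False))  # True
```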
Prognosis
Treatment for antifreeze poisoning needs to be started as soon after ingestion as possible to be effective; the earlier treatment is started, the greater the chance of survival. Cats must be treated within 3 hours of ingesting antifreeze for treatment to be effective, while dogs must be treated within 8–12 hours of ingestion. Once kidney failure develops, the prognosis is poor. Generally, if the person is treated and survives, a full recovery is expected. People who present early to medical facilities and receive prompt medical treatment typically have a favorable outcome. By contrast, people presenting late with signs and symptoms of coma, hyperkalemia, seizures, or severe acidosis have a poor prognosis. People who develop severe central nervous system manifestations or stroke and survive may have long-term neurologic dysfunction; in some cases they may recover, although convalescence may be prolonged. The most significant long-term complication is related to the kidneys. Cases of permanent kidney damage, often requiring chronic dialysis or kidney transplantation, have been reported after severe poisoning.
Epidemiology
Ethylene glycol poisoning is a relatively common occurrence worldwide. Human poisoning often occurs in isolated cases, but may also occur in epidemics. Many cases of poisoning are the result of using ethylene glycol as a cheap substitute for alcohol or of intentional ingestion in suicide attempts. Less commonly, it has been used as a means of homicide. Children or animals may be exposed by accidental ingestion; children and animals often consume large amounts because ethylene glycol has a sweet taste. In the United States, there were 5,816 cases reported to poison centers in 2002. Additionally, ethylene glycol was the most common chemical responsible for deaths reported by US poison centers in 2003. In Australia, there were 17 cases reported to the Victorian poison center and 30 cases reported to the New South Wales poison center in 2007. However, these numbers may underestimate the actual figures, because not all cases attributable to ethylene glycol are reported to poison control centers. Most deaths from ethylene glycol are intentional suicides; deaths in children due to unintentional ingestion are extremely rare. In an effort to prevent poisoning, a bittering agent called denatonium benzoate, known by the trade name Bitrex, is often added to ethylene glycol preparations as an aversive agent to prevent accidental or intentional ingestion. The bittering agent is thought to stop ingestion because part of the human defense against ingestion of harmful substances is rejection of bitter-tasting substances. In the United States, eight states (Oregon, California, New Mexico, Virginia, Arizona, Maine, Tennessee, Washington) have made the addition of bittering agents to antifreeze compulsory. Three follow-up studies targeting limited populations or suicidal persons to assess the efficacy of bittering agents in preventing toxicity or death have, however, shown limited benefit of bittering ethylene glycol preparations in these two populations. Specifically, Mullins found that bittering of antifreeze did not reduce reported cases of poisoning of preschoolers in the US state of Oregon. Similarly, White found that adding bittering agents did not decrease the frequency or severity of antifreeze poisonings in children under the age of 5. Additionally, another study by White found that suicidal persons are not deterred by the bittered taste of antifreeze in their attempts to kill themselves. These studies did not focus on poisoning of domestic pets or livestock, for example, or inadvertent exposure to bittered antifreeze among a large population (of non-preschool-age children).
Poisoning of a raccoon was diagnosed in 2002 in Prince Edward Island, Canada. An online veterinary manual provides information on lethal doses of ethylene glycol for chickens, cattle, cats, and dogs, adding that younger animals may be more susceptible.
History
Ethylene glycol was once thought innocuous; in 1931 it was suggested as suitable for use as a vehicle or solvent for injectable pharmaceutical preparations. Numerous cases of poisoning have been reported since then, and it has been shown to be toxic to humans.
Environmental effects
Ethylene glycol involved in aircraft de-icing and anti-icing operations is released onto land and eventually into waterways. A report prepared for the World Health Organization in 2000 stated that laboratory tests exposing aquatic organisms to stream water receiving runoff from airports have shown toxic effects and death (p. 12). Field studies in the vicinity of an airport have reported toxic signs consistent with ethylene glycol poisoning, fish kills, and reduced biodiversity, although those effects could not definitively be ascribed to ethylene glycol (p. 12). The biodegradation of glycols also increases the risk to organisms, as oxygen levels become depleted in surface waters (p. 13). Another study found the toxicity to aquatic and other organisms was relatively low, but the oxygen-depletion effect of biodegradation was more serious (p. 245). Further, "Anaerobic biodegradation may also release relatively toxic byproducts such as acetaldehyde, ethanol, acetate, and methane (p. 245)." In Canada, Environment Canada reports that "in recent years, management practices at Canada's major airports have improved with the installation of new ethylene glycol application and mitigation facilities or improvements to existing ones." Since 1994, federal airports must comply with the Glycol Guidelines of the Canadian Environmental Protection Act, monitoring and reporting on concentrations of glycols in surface water. Detailed mitigation plans include storage and handling issues (p. 27), spill response procedures, and measures taken to reduce volumes of fluid (p. 28). Considering factors such as the "seasonal nature of releases, ambient temperatures, metabolic rates and duration of exposure", Environment Canada stated in 2014 that "it is proposed that ethylene glycol is not entering the environment in a quantity or concentration or under conditions that have or may have an immediate or long-term harmful effect on the environment or its biological diversity". In the U.S., airports are required to obtain stormwater discharge permits and ensure that wastes from deicing operations are properly collected and treated. Large new airports may be required to collect 60 percent of aircraft deicing fluid after deicing. Airports that discharge the collected aircraft deicing fluid directly to waters of the U.S. must also meet numeric discharge requirements for chemical oxygen demand. A report in 2000 stated that ethylene glycol was becoming less popular for aircraft deicing in the U.S., due to its reporting requirements and adverse environmental impacts (p. 213), and noted a shift to the use of propylene glycol (p. I-3).
Other animals
Once kidney failure has developed in dogs and cats, the outcome is poor. The treatment is generally the same as in humans, although vodka or rectified spirits may be substituted for pharmaceutical-grade ethanol in IV injections.
See also
Methylmalonic acidemia – an autosomal recessive metabolic disorder that mimics the effects of ethylene glycol poisoning.
1985 diethylene glycol wine scandal
Elixir sulfanilamide, a banned medicine that caused mass poisoning because it used diethylene glycol as a solvent
Methanol poisoning
References
External links
"Antifreeze Poisoning in Dogs & Cats (Ethylene Glycol Poisoning)" – Pet Poison Helpline
"Antifreeze Poisoning" – Washington State University, College of Veterinary Medicine information sheet
"Overview of Ethylene Glycol Toxicity" – Merck Veterinary Manual information. | 161 |
Ewing sarcoma | Ewing sarcoma is a type of cancer that forms in bone or soft tissue. Symptoms may include swelling and pain at the site of the tumor, fever, and a bone fracture. The most common areas where it begins are the legs, pelvis, and chest wall. In about 25% of cases, the cancer has already spread to other parts of the body at the time of diagnosis. Complications may include a pleural effusion or paraplegia. It is a type of small round cell sarcoma. The cause of Ewing sarcoma is unknown. Most cases appear to occur randomly. Sometimes there has been a germline mutation. The underlying mechanism often involves a genetic change known as a reciprocal translocation. Diagnosis is based on biopsy of the tumor. Treatment often includes chemotherapy, radiation therapy, surgery, and stem cell transplant. Targeted therapy and immunotherapy are being studied. Five-year survival is about 70%, although a number of factors affect this estimate. James Ewing established in 1920 that the tumor is a distinct type of cancer. It affects about one in a million people per year in the United States. Ewing sarcoma occurs most often in teenagers and young adults and represents 2% of childhood cancers. Caucasians are affected more often than African Americans or Asians. Males are affected more often than females.
Signs and symptoms
Ewing sarcoma is more common in males (1.6 male:1 female) and usually presents in childhood or early adulthood, with a peak between 10 and 20 years of age. It can occur anywhere in the body, but most commonly in the pelvis and proximal long tubular bones, especially around the growth plates. The diaphysis of the femur is the most common site, followed by the tibia and the humerus. Thirty percent of cases are overtly metastatic at presentation, while 10–15% of people present with a pathologic fracture at the time of diagnosis. People usually experience extreme bone pain. Rarely, it can develop in the vagina. Signs and symptoms include intermittent fevers, anemia, leukocytosis, increased sedimentation rate, and other symptoms of inflammatory systemic illness. According to the Bone Cancer Research Trust (BCRT), the most common symptoms are localized pain, swelling, and sporadic bone pain of variable intensity. The swelling is most likely to be visible if the sarcoma is located on a bone near the surface of the body, but when it occurs deeper in the body, such as in the pelvis, it may not be visible.
Genetics
Genetic exchange between chromosomes can cause cells to become cancerous. Most cases of Ewing sarcoma (about 85%) are the result of a defining genetic event: a reciprocal translocation between chromosomes 11 and 22, t(11;22), which fuses the Ewing Sarcoma Breakpoint Region 1 (EWSR1) gene of chromosome 22 (which encodes the EWS protein) to the Friend Leukemia Virus Integration 1 (FLI1) gene of chromosome 11 (which encodes the Friend Leukemia Integration 1 transcription factor (FLI1), a member of the ETS transcription factor family). The resultant chromosomal translocation causes the EWS trans-activation domain (which is usually silent in the wild type) to become very active, leading to the translation of a new EWS-FLI1 fusion protein. EWS proteins are involved in meiosis, B-lymphocyte maturation, hematopoietic stem cell renewal, DNA repair, and cell senescence. ETS transcription factors are involved in cell differentiation and cell cycle control. The EWS-FLI1 fusion protein has phase-transition properties, allowing it to transition into liquid-like, phase-separated compartments consisting of membrane-less organelles. This phase-transition property allows the fusion protein to access and activate microsatellite regions of the genome that would otherwise be inaccessible. The fusion protein can convert usually silent chromatin regions into fully active enhancers, leading to oncogenesis of the cells. The EWS-FLI1 fusion protein also causes variable expression of the genome via epigenetic mechanisms. It does this by recruiting enzymes that affect DNA methylation and histone acetylation, and by direct inhibition of non-coding microRNA. EWS-FLI1 promotes histone acetylation, which leads to uncoiling of DNA (which is usually tightly wound around histones); this chromatin relaxation makes the DNA more accessible to transcription factors, enhancing the expression of the associated genes. DNA methylation leads to gene silencing, as it prevents transcription factor binding. EWS-FLI1 reduces DNA methylation (which occurs mostly in areas corresponding to transcription enhancers), leading to increased gene expression. The EWS-FLI1 fusion protein also inhibits certain microRNAs of cells (such as miRNA-145). MiRNA-145 normally activates RNA-induced silencing complexes (RISCs) to inhibit or degrade mRNA involved in cell pluripotency. Thus, EWS-FLI1 inhibition of miRNA-145 leads to increased pluripotency, decreased differentiation of cells, and increased oncogenesis. A genome-wide association study (GWAS) identified three susceptibility loci located on chromosomes 1, 10, and 15. A continuative study discovered that the Ewing sarcoma susceptibility gene EGR2, which is located within the chromosome 10 susceptibility locus, is regulated by the EWSR1-FLI1 fusion oncogene via a GGAA microsatellite. EWS/FLI functions as the master regulator. Other translocations occur at t(21;22) and t(7;22). Ewing sarcoma cells are positive for CD99 and MIC2, and negative for CD45.
Diagnosis
The definitive diagnosis is based on histomorphologic findings, immunohistochemistry and molecular pathology.
Ewing sarcoma is a small-blue-round-cell tumor that typically has a clear cytoplasm on H&E staining, due to glycogen. The presence of the glycogen can be demonstrated with positive PAS staining and negative PAS diastase staining. The characteristic immunostain is CD99, which diffusely marks the cell membrane. However, as CD99 is not specific for Ewing sarcoma, several auxiliary immunohistochemical markers can be employed to support the histological diagnosis. Morphologic and immunohistochemical findings are corroborated with an associated chromosomal translocation, of which several occur. The most common translocation, present in about 90% of Ewing sarcoma cases, is t(11;22)(q24;q12), which generates an aberrant transcription factor through fusion of the EWSR1 gene with the FLI1 gene. The pathologic differential diagnosis is the grouping of small-blue-round-cell tumors, which includes lymphoma, alveolar rhabdomyosarcoma, and desmoplastic small round cell tumor, among others.
Medical imaging
On conventional radiographs, typical findings of Ewing sarcoma consist of multiple confluent lytic bone lesions that have a "moth eaten" pattern due to permeative destruction of bone. There will also be a displaced periosteum as the new sub-periosteal layer of bone begins to grow on top of the tumor. This raised or displaced periosteum is consistent with the classic radiographic finding of the Codman triangle. The proliferative reaction of bone can also create delicate laminations constituting the periosteal layers and giving the radiographic appearance of an onion peel. Plain films add valuable information in the initial evaluation or screening. The wide zone of transition (e.g. permeative) is the most useful plain film characteristic in differentiation of benign versus aggressive or malignant lytic lesions.
Magnetic resonance imaging (MRI) should be routinely used in the work-up of malignant tumors. It will show the full bony and soft tissue extent and relate the tumor to other nearby anatomic structures (e.g. vessels). Gadolinium contrast is not necessary, as it does not give additional information over noncontrast studies, though some current researchers argue that dynamic, contrast-enhanced MRI may help determine the amount of necrosis within the tumor, and thus help in determining response to treatment prior to surgery. Computed axial tomography (CT) can also be used to define the extraosseous extent of the tumor, especially in the skull, spine, ribs, and pelvis. Both CT and MRI can be used to follow response to radiation and/or chemotherapy. Bone scintigraphy can also be used to follow tumor response to therapy. In the group of malignant small round cell tumors that includes Ewing sarcoma, bone lymphoma, and small cell osteosarcoma, the cortex may appear almost normal radiographically, while permeative growth occurs throughout the Haversian channels. These tumors may be accompanied by a large soft-tissue mass while almost no bone destruction is visible. The radiographs frequently do not show any signs of cortical destruction.
Radiographically, Ewing sarcoma presents as "moth-eaten" destructive radiolucencies of the medulla and erosion of the cortex with expansion.
Differential diagnosis
Other entities with similar clinical presentations include osteomyelitis, osteosarcoma (especially telangiectatic osteosarcoma), and eosinophilic granuloma. Soft-tissue neoplasms such as pleomorphic undifferentiated sarcoma (malignant fibrous histiocytoma) that erode into adjacent bone may also have a similar appearance. Accumulating evidence suggests that EWSR1-NFATc2 positive sarcomas, which were previously considered to possibly belong to the Ewing family of tumors, differ from Ewing sarcoma in their genetics, transcriptomes, and epigenetic and epidemiological profiles, indicating that they might represent a distinct tumor entity.
Treatment
Almost all people receive multidrug chemotherapy (most often vincristine, doxorubicin, cyclophosphamide, ifosfamide, and etoposide), as well as local disease control with surgery and/or radiation. An aggressive approach is necessary because almost all people with apparently localized disease at the time of diagnosis actually have asymptomatic metastatic disease. The surgical resection may involve limb salvage or amputation. Complete excision at the time of biopsy may be performed if malignancy is confirmed at the time it is examined. Treatment lengths vary depending on location and stage of the disease at diagnosis. Radical chemotherapy may be as short as six treatments at three-week cycles, but most people undergo chemotherapy for 6–12 months and radiation therapy for 5–8 weeks. Radiotherapy has been used for localized disease. The tumor has a unique property of being highly sensitive to radiation, sometimes acknowledged by the phrase "melting like snow", but the main drawback is that it recurs dramatically after some time. Antisense oligodeoxynucleotides have been proposed as a possible treatment, by down-regulating the expression of the oncogenic fusion protein associated with the development of Ewing sarcoma resulting from the EWS-ETS gene translocation. In addition, the synthetic retinoid derivative fenretinide (4-hydroxy(phenyl)retinamide) has been reported to induce high levels of cell death in Ewing sarcoma cell lines in vitro and to delay growth of xenografts in in vivo mouse models. In most pediatric cancers, including sarcoma, proton beam radiation (also known as proton therapy) delivers an equally effective dose to the tumor with less damage to the surrounding normal tissue compared to photon radiation.
Prognosis
Staging attempts to distinguish people with localized disease from those with metastatic disease. The most common sites of metastasis are the lungs, bone, and bone marrow; less common sites include the lymph nodes, liver, and brain. The presence of metastatic disease is the most important prognostic factor in Ewing sarcoma: the five-year survival rate is only 30% when metastasis is present at the time of diagnosis, compared to 70% when no metastasis is present. Another important prognostic factor is the location of the primary tumor; proximal tumors (located in the pelvis and sacrum) carry a worse prognosis than more distal tumors. Other factors associated with a poor prognosis include a large primary neoplasm, older age at diagnosis (older than 18 years), and increased lactate dehydrogenase (LDH) levels. Five-year survival for localized disease is greater than 70% after therapy. Prior to the use of multi-drug chemotherapy, long-term survival was less than 10%. The development of multi-disciplinary therapy with chemotherapy, irradiation, and surgery has increased current long-term survival rates in most clinical centers to greater than 50%, although some sources state it is 25–30%. Retrospective research showed that two chemokine receptors, CXCR4 and CXCR7, can be used as molecular prognostic factors. People who express low levels of both chemokine receptors have the highest odds of long-term survival, with >90% survival at five years post-diagnosis, versus <30% survival at five years for patients with very high expression levels of both receptors. A recent study also suggested a role for SOX2 as an independent prognostic biomarker that can be used to identify patients at high risk for tumor relapse.
Epidemiology
Ewing sarcomas represent 16% of primary bone sarcomas. In the United States, they are most common in the second decade of life, with a rate of 0.3 cases per million in children under 3 years of age, and as high as 4.6 cases per million in adolescents aged 15–19 years. Nearly 80% of patients are younger than 20 years of age. It is uncommon in patients younger than 5 years and older than 30 years. Internationally, the annual incidence rate averages less than 2 cases per million children. In the United Kingdom, an average of six children per year are diagnosed, mainly males in the early stages of puberty. Due to the prevalence of diagnosis during the teenage years, a link may exist between the onset of puberty and the early stages of this disease, although no research confirms this hypothesis. A cluster of three unrelated teenagers in Wake Forest, North Carolina, was diagnosed with Ewing sarcoma. All three children were diagnosed in 2011, and all had attended the same temporary classroom together while their school underwent renovation. A fourth teenager living nearby was diagnosed in 2009. The odds of such a cluster occurring by chance are considered significant. Ewing sarcoma occurs about 10- to 20-fold more commonly in people of European descent than in people of African descent. Ewing sarcoma is the second most common bone cancer in children and adolescents, with poor prognosis and outcome in ~70% of initial diagnoses and 10–15% of relapses.
References
Further reading
== External links ==
Exocrine pancreatic insufficiency | Exocrine pancreatic insufficiency (EPI) is the inability to properly digest food due to a lack of digestive enzymes made by the pancreas. EPI is found in humans afflicted with cystic fibrosis and Shwachman–Diamond syndrome, and is common in dogs. EPI is caused by a progressive loss of the pancreatic cells that make digestive enzymes; the loss of these enzymes leads to maldigestion and malabsorption of nutrients that normal digestive processes would otherwise provide.
Chronic pancreatitis is the most common cause of EPI in humans and cats. In dogs, the most common cause is pancreatic acinar atrophy, arising as a result of genetic conditions, a blocked pancreatic duct, or prior infection.
The exocrine pancreas is the portion of the organ containing clusters of secretory cells (acini) and their associated ducts, which together produce bicarbonate anion, a mild alkali, as well as an array of digestive enzymes, and empty by way of the interlobular and main pancreatic ducts into the duodenum (upper small intestine). The hormones cholecystokinin and secretin, secreted by the duodenum in response to distension and the presence of food, in turn stimulate the production of digestive enzymes by the exocrine pancreas. The alkalization of the duodenum neutralizes the acidic chyme produced by the stomach that is passing into it; the digestive enzymes catalyze the breakdown of complex foodstuffs into smaller molecules for absorption and integration into metabolic pathways. The enzymes include proteases (trypsinogen and chymotrypsinogen), hydrolytic enzymes that cleave lipids (the lipases phospholipase A2 and lysophospholipase, and cholesterol esterase), and amylase to digest starches. EPI results from progressive failure in the exocrine function of the pancreas to provide its digestive enzymes, often in response to a genetic condition or other disease state, resulting in the inability of the affected animal or person to properly digest food.
Signs and symptoms
Loss of pancreatic enzymes leads to maldigestion and malabsorption, which may in turn lead to:
anemia (Vitamin B12, iron, folate deficiency)
bleeding disorders (Vitamin K malabsorption)
edema (hypoalbuminemia)
fatigue
flatulence and abdominal distention (bacterial fermentation of unabsorbed food)
hypocalcemia
metabolic bone disease (Vitamin D deficiency)
neurologic manifestations
steatorrhea
weight loss
Causes
In humans, the most common causes of EPI are chronic pancreatitis and cystic fibrosis. The former is a longstanding inflammation of the pancreas that alters the organ's normal structure and function and can arise as a result of malnutrition, heredity, or (in the Western world especially) behaviour (alcohol use, smoking); the latter is a recessive hereditary disease, most common in Europeans and Ashkenazi Jews, in which the molecular culprit is an altered, CFTR-encoded chloride channel. According to WebMD, "Crohn's disease and celiac disease can also lead to EPI in some people". In children, another common cause is Shwachman-Bodian-Diamond syndrome, a rare autosomal recessive genetic disorder resulting from mutation in the SBDS gene.
Diagnosis
The three main tests used in considering a diagnosis of EPI are the fecal elastase test, the fecal fat test, and a direct pancreatic function test. The last is a less commonly used test that assesses exocrine function in the pancreas by inserting a tube into the small intestine to collect pancreatic secretions.
Treatment
EPI is often treated with pancreatic enzyme replacement products (PERPs) such as pancrelipase, which are used to break down fats (via a lipase), proteins (via a protease), and carbohydrates (via amylase) into units that can be digested. Pancrelipase is typically porcine-derived and requires large doses.
Other animals
Causes and pathogenesis
Chronic pancreatitis is the most common cause of EPI in cats. In dogs, where the condition has been deemed common, the usual cause is pancreatic acinar atrophy, arising as a result of genetic conditions, a blocked pancreatic duct, or prior infection. In dogs, EPI is most common in young German Shepherds and, in Finland, Rough Collies, and is inherited. In German Shepherds, the method of inheritance is through an autosomal recessive gene. In these two breeds, at least, the cause appears to be immune-mediated as a sequela to lymphocytic pancreatitis. German Shepherds make up about two-thirds of cases seen with EPI. Other breeds reported to be predisposed to EPI include terrier breeds, Cavalier King Charles Spaniels, Chow Chows, and Picardy Shepherds.
Symptoms
In animals, signs of EPI are not present until 85 to 90 percent of the pancreas is unable to secrete its enzymes. In dogs, symptoms include weight loss, poor hair coat, flatulence, increased appetite, coprophagia, and diarrhea. Feces are often yellow-gray in color with an oily texture. Many concurrent diseases mimic EPI, and severe pancreatitis, if allowed to continue unabated, can itself lead to EPI.
Diagnosis and treatment
The most reliable test for EPI in dogs and cats is serum trypsin-like immunoreactivity (TLI); a low value indicates EPI. Fecal elastase levels may also be used for diagnosis in dogs. In dogs, the best treatment is to supplement the animal's food with dried pancreatic extracts. Commercial preparations are available, but chopped bovine pancreas from the butcher can also be used (pork pancreas should not be used because of the rare transmission of pseudorabies). Symptoms usually improve within a few days, but lifelong treatment is required to manage the condition. A rare side effect of the use of dried pancreatic extracts is oral ulceration and bleeding. Because of malabsorption, serum levels of cyanocobalamin (vitamin B12) and tocopherol (vitamin E) may be low. These may be supplemented, although since cyanocobalamin contains the toxic chemical cyanide, dogs that have serious cobalamin issues should instead be treated with hydroxocobalamin or methylcobalamin. Cyanocobalamin deficiency is very common in cats with EPI because about 99 percent of intrinsic factor (which is required for cyanocobalamin absorption from the intestine) is secreted by the pancreas; in dogs, this figure is about 90 percent, and only about 50 percent of dogs have this deficiency. Cats may also suffer from vitamin K deficiencies. If there is bacterial overgrowth in the intestine, antibiotics should be used, especially if treatment is not working.
In dogs failing to gain weight or continuing to show symptoms, modifying the diet to make it low-fiber and highly digestible may help. Despite the previous belief that low-fat diets are beneficial in dogs with EPI, more recent studies have shown that a high-fat diet may increase absorption of nutrients and better manage the disease. However, it has been shown that different dogs respond to different dietary modifications, so the best diet must be determined on a case-by-case basis. Volvulus (mesenteric torsion) is a rare sequela of EPI in dogs.
References
== External links ==
Exposure keratopathy | Exposure keratopathy (also known as exposure keratitis) is a medical condition affecting the cornea. It can lead to corneal ulceration and permanent loss of vision due to corneal opacity.
Normally, the corneal surface is kept moist by blinking, and during sleep it is covered by the lids. Increased corneal exposure to the air due to incomplete or inadequate eyelid closure causes increased evaporation of tears from the corneal surface. Increased tear evaporation causes instability of the tear film and dryness of the corneal surface, which leads to corneal epithelial damage. Both the tear film and the corneal epithelium play significant roles in the cornea's protective mechanism. The dryness and epithelial damage allow microorganisms to penetrate the cornea, and thus keratitis occurs.
Signs and symptoms
Symptoms are similar to those of dry eye. Patients may complain of redness, irritation, ocular discomfort, burning, and foreign body sensation. Punctate epithelial defects, epithelial breakdown, and stromal melting may be seen on corneal examination. Corneal ulceration may develop due to bacterial invasion.
Complications
The main complication of exposure keratopathy is permanent vision loss due to corneal opacification. Stromal melting may occasionally lead to corneal perforation.
Causes
Exposure keratopathy may occur due to mechanical eyelid abnormalities or neuroparalytic corneal anesthesia. It may also occur secondary to ocular surgeries such as blepharoplasty and ptosis surgery.
Lagophthalmos
Lagophthalmos, the inability to close the eyelids completely, is the main cause of exposure keratopathy. A common cause of lagophthalmos is facial nerve (CN VII) palsy. Facial nerve function may be affected in several conditions, such as cerebrovascular accident, head trauma, brain tumors, and Bell's palsy. Physiological inability to close the eyelids during sleep (nocturnal lagophthalmos) may also cause exposure keratopathy.
Mechanical causes
Chemical or thermal burns to the eyelids or conjunctiva, ocular cicatricial pemphigoid, or symblepharon may cause incomplete or inadequate eyelid closure.
Exophthalmos
Exophthalmos is the unilateral or bilateral bulging of the eye anteriorly out of the orbit, causing increased exposure of the cornea. It may be seen in many conditions, such as Graves' ophthalmopathy, orbital cellulitis, and orbital pseudotumor.
Surgical
A weak Bell's phenomenon may result in exposure keratopathy after ptosis surgery. Postoperative lagophthalmos following blepharoplasty is another common cause of secondary exposure keratopathy.
Diagnosis
Fluorescein staining may be used to detect epithelial defects, corneal infection, or perforation of the cornea. Tear break-up time and ocular protection index assessment can be done to reveal dry eye. Exophthalmometry can be used to measure the degree of exophthalmos.
Prevention
If increased corneal exposure is detected, several preventive measures can be taken to prevent keratitis. Artificial tear drops and eye ointments may be used to keep the eyes moist. Since frequent use of eye drops with preservatives can promote inflammation, it is better to choose preservative-free artificial tear drops and lubricating eye drops. A bandage silicone hydrogel or scleral contact lens may be used to protect the cornea, although the risk of infection is higher with bandage contact lens use. Moisture goggles may also be used to protect the cornea. Temporary or permanent tarsorrhaphy may be indicated to treat lagophthalmos. Gold weights can be inserted into the upper eyelid to treat facial nerve palsy.
Treatment
The underlying cause of the exposure should be treated first. For example, in proptosis due to thyroid eye disease, regulation of thyroid hormone levels may be advised. Symblepharon can be treated surgically. If necessary, proptosis may be managed by orbital decompression. Eyelid taping during sleep may alleviate mild cases of exposure keratopathy. If a corneal ulcer is detected, it may be treated medically with antibiotics. If corneal perforation has occurred, immediate treatment measures should be taken to restore the integrity of the perforated cornea. Tissue adhesive glues, covering with a conjunctival flap, a bandage soft contact lens, or therapeutic keratoplasty may be indicated to treat a perforated corneal ulcer.
See also
Keratitis
Dry eye syndrome
Lagophthalmos
== References ==
Failure to thrive | Failure to thrive (FTT), also known as weight faltering or faltering growth, indicates insufficient weight gain or absence of appropriate physical growth in children. FTT is usually defined in terms of weight, and can be evaluated either by a low weight for the child's age or by a low rate of increase in weight. The term "failure to thrive" has been used in different ways, as there is no objective standard or universally accepted definition for when to diagnose FTT. One definition describes FTT as a fall of one or more weight centile spaces on a World Health Organization (WHO) growth chart depending on birth weight, or a weight below the 2nd percentile for age irrespective of birth weight. Another definition of FTT is a weight for age that is consistently below the 5th percentile, or a weight for age that falls by at least two major percentile lines on a growth chart. While weight loss after birth is normal and most babies return to their birth weight by three weeks of age, clinical assessment for FTT is recommended for babies who lose more than 10% of their birth weight or do not return to their birth weight after three weeks. Failure to thrive is not a specific disease, but a sign of inadequate nutrition. In veterinary medicine, FTT is also referred to as ill-thrift.
Signs and symptoms
Failure to thrive is most commonly diagnosed before two years of age, when growth rates are highest, though FTT can present among children and adolescents of any age. Caretakers may express concern about poor weight gain or smaller size compared to peers of a similar age. Physicians often identify failure to thrive during routine office visits, when a child's growth parameters, such as height and weight, are not increasing appropriately on growth curves. Other signs and symptoms may vary widely depending on the etiology of FTT. It is also important to differentiate stunting from wasting, as they can indicate different causes of FTT. "Wasting" refers to a weight more than 2 standard deviations below the median weight-for-height, whereas "stunting" refers to a height more than 2 standard deviations below the median height-for-age. The characteristic pattern seen in children with inadequate nutritional intake is an initial deceleration in weight gain, followed several weeks to months later by a deceleration in stature, and finally a deceleration in head circumference. Inadequate caloric intake could be caused by lack of access to food, or caretakers may notice picky eating habits, low appetite, or food refusal. FTT caused by malnutrition can also yield physical findings that indicate potential vitamin and mineral deficiencies, such as scaling skin, spoon-shaped nails, cheilosis, or neuropathy. Lack of food intake by a child could also be due to psychosocial factors related to the child or family. It is vital to screen patients and their caretakers for psychiatric conditions such as depression or anxiety, as well as to screen children for signs and symptoms of child abuse, neglect, or emotional deprivation. Children who have FTT caused by a genetic or medical problem may have growth patterns that differ from those of children with FTT due to inadequate food intake. A decrease in length with a proportional drop in weight can be related to long-standing nutritional factors as well as genetic or endocrine causes. Head circumference, too, can be an indicator of the etiology of FTT: if head circumference is affected initially, in addition to weight or length, factors other than inadequate intake are more likely causes, including intrauterine infection, teratogens, and some congenital syndromes. Children who have a medical condition causing FTT may have additional signs and symptoms specific to their condition. Fetal alcohol syndrome (FAS) has been associated with FTT and can present with characteristic findings including microcephaly, short palpebral fissures, a smooth philtrum, and a thin vermillion border. Disorders that cause difficulty absorbing or digesting nutrients, such as Crohn's disease, cystic fibrosis, or celiac disease, can present with abdominal symptoms, which can include abdominal pain, abdominal distention, hyperactive bowel sounds, bloody stools, or diarrhea.
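The 2-standard-deviation cutoffs for wasting and stunting are, in effect, z-score thresholds. The sketch below shows the simplified form z = (x - median) / SD; real growth standards such as the WHO charts use more elaborate LMS-based z-scores, and the reference values here are placeholders, not chart data.

```python
# Simplified anthropometric z-scores and the < -2 SD cutoffs for
# wasting (weight-for-height) and stunting (height-for-age).

def z_score(value, reference_median, reference_sd):
    return (value - reference_median) / reference_sd

def classify(weight_for_height_z, height_for_age_z):
    flags = []
    if weight_for_height_z < -2:
        flags.append("wasting")
    if height_for_age_z < -2:
        flags.append("stunting")
    return flags or ["within 2 SD of the reference medians"]

# Hypothetical child: weight-for-height 2.5 SD below the reference
# median, height-for-age within normal limits.
print(classify(weight_for_height_z=-2.5, height_for_age_z=-1.0))  # ['wasting']
```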
Cause
Traditionally, causes of FTT have been divided into endogenous and exogenous causes. These causes can also be largely grouped into three categories: inadequate caloric intake, malabsorption/caloric retention defect, and increased metabolic demands.
Endogenous (or "organic")
Endogenous causes are due to physical or mental issues affecting the child, and include various inborn errors of metabolism. Problems with the gastrointestinal system, such as excessive gas and acid reflux, are painful conditions that may make the child unwilling to take in sufficient nutrition. Cystic fibrosis, diarrhea, liver disease, anemia or iron deficiency, Crohn's disease, and coeliac disease make it more difficult for the body to absorb nutrition. Other causes include physical deformities, such as cleft palate and tongue tie, that impede food intake. Additionally, allergies such as milk allergies can cause endogenous FTT. FAS has also been associated with failure to thrive. Finally, medical conditions including parasitic infections, urinary tract infections, other fever-inducing infections, asthma, hyperthyroidism, and congenital heart disease may raise the energy needs of the body, making it more difficult to take in sufficient calories to meet the higher caloric demands and leading to FTT.
Exogenous (or "nonorganic")
Exogenous causes are due to caregiver actions, whether unintentional or intentional. Examples include a physical inability to produce enough breastmilk, inappropriate feeding schedules or technique, and mistakes made in formula preparation. In developing countries, conflict settings, and protracted emergencies, exogenous FTT may more commonly be caused by chronic food insecurity, lack of nutritional awareness, and other factors beyond the caregiver's control. As many as 90% of failure to thrive cases are non-organic.
Mixed
Both endogenous and exogenous factors may co-exist. For instance, a child who is not getting sufficient nutrition for endogenous reasons may act content, so that caregivers do not offer feedings of sufficient frequency or volume. Likewise, a child with severe acid reflux who appears to be in pain while eating may make a caregiver hesitant to offer sufficient feedings.
Inadequate caloric intake
Inadequate caloric intake indicates that an insufficient amount of food and nutrition is entering the body, whether due to lack of food, anatomical differences causing difficulty eating, or psychosocial reasons for decreased food intake.
Malabsorption/caloric retention defect
Malabsorption and caloric retention defects cause the body to be unable to absorb and use nutrients from food, despite an adequate amount of food physically entering the body.
Increased metabolic demand
Increased metabolic demand suggests a state of increased energy needs and caloric expenditure. This state causes greater difficulty taking in enough nutrition to meet the bodys energy needs and allow for normal growth.
Epidemiology
Failure to thrive is a common presenting problem in the pediatric population in both resource-abundant and resource-poor countries. While epidemiology may vary by region, inadequate caloric intake remains the most common cause of FTT in both developed and developing countries, and poverty is the greatest risk factor for FTT worldwide.
Resource-abundant regions
Failure to thrive is prevalent in developed countries, with literature from Western studies demonstrating a prevalence of about 8% among pediatric patients. Presentations of FTT comprise about 5–10% of children seen as outpatients by primary care physicians and 3–5% of hospital admissions for children. Failure to thrive is more prevalent in children of lower socioeconomic status in both rural and urban areas. FTT is also associated with lower parental education levels. Additionally, retrospective studies done in the United States suggest that males are slightly more likely than females to be admitted to the hospital for failure to thrive.
Low-resourced regions
Failure to thrive is more common in developing countries and is mostly driven by malnutrition due to poverty. As an example of the high prevalence of FTT due to malnutrition, in India about 40% of the population suffers from mild to moderate malnutrition and about 25% of pediatric hospitalizations are due to malnutrition. Malnutrition is a global problem of great scale. Worldwide, problems with receiving adequate nutrition contribute to about 45% of all deaths in children younger than 5 years old. In 2020, global estimates of malnutrition indicated that 149 million children under 5 were stunted and 45 million were wasted. In 2014, approximately 462 million adults were estimated to be underweight. It is important to note that these reports likely underestimate the true scope of the global burden. Malnutrition can also be classified into acute malnutrition and chronic malnutrition. Acute malnutrition indicates inadequate or insufficient nutrient intake resulting in severe systemic degeneration. Globally, approximately 32.7 million children under 5 years of age show visible and clinical signs of acute malnutrition. Severe wasting is seen in 14.3 million children within this age group. These disorders are primarily localized to resource-limited regions. In comparison, chronic malnutrition is a condition that develops over time and results in growth inadequacy with subsequent developmental, physical, and cognitive delays. Around 144 million children worldwide are chronically malnourished.
Diagnosis
The diagnosis of FTT relies on plotting the child's height and weight on a validated growth chart, such as the World Health Organization (WHO) growth charts for children younger than two years old or the U.S. Centers for Disease Control and Prevention (CDC) growth charts for patients between the ages of two and twenty years old. While there is no universally accepted definition for failure to thrive, the following are examples of diagnostic criteria for FTT:
Weight under the 5th percentile among children of the same sex and corrected age;
Weight for length below the 5th percentile among children of the same sex and age;
Length for age below the 5th percentile;
Body mass index for age under the 5th percentile;
Weight for age or weight for length dropping by at least two major percentiles (95th, 90th, 75th, 50th, 25th, 10th, and 5th) on a growth chart;
Weight below 75% of the median weight for age;
Weight below 75% of median weight for length; or
Weight velocity less than the 5th percentile (a schematic check of two of these thresholds is sketched at the end of this section).

After diagnosis, the underlying cause of FTT must be evaluated by a medical provider through a multifaceted process. This process begins with evaluating the patient's medical history. The medical provider will ask about complications during pregnancy and birth, health during early infancy, previous or current medical conditions of the child, and developmental milestones that have been reached or not reached by the child. The child's feeding and diet history, including overall caloric intake and eating habits, is also assessed to help identify potential causes of FTT. Additionally, medical providers will inquire about any medical conditions that other members of the family may have, as well as assess the psychological and social circumstances of the child and family.

Next, a complete physical examination may be done, with special attention paid to identifying possible organic sources of FTT. This could include looking for dysmorphic features (differences in physical features that may indicate an underlying medical disorder), abnormal breathing sounds, and signs of specific vitamin and mineral deficiencies. The physical exam may also reveal signs of possible child neglect or abuse.

Based on the information gained from the history and physical examination, a workup can then be conducted, in which possible sources of FTT can be further probed through blood work, x-rays, or other tests. Laboratory workup should be done in response to specific history and physical examination findings, and medical providers should take care not to order unnecessary tests, especially given estimates that laboratory investigations are diagnostically useful in only about 1.4% of children with failure to thrive. Initial bloodwork may include a complete blood count (CBC) with differential to see if there are abnormalities in the number of blood cells, a complete metabolic panel to look for electrolyte derangements, a thyroid function test to assess thyroid hormone activity, and a urinalysis to test for infections or diseases related to the kidneys or urinary tract. If indicated, anti-TTG IgA antibodies can be used to assess for celiac disease, and a sweat chloride test can be used to screen for cystic fibrosis. If no cause is discovered, a stool examination could be indicated, which would give information about the function of gastrointestinal organs. C-reactive protein and erythrocyte sedimentation rate (ESR) can also be used to look for signs of inflammation, which may indicate an infection or inflammatory disorder.
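To make the threshold-based criteria above concrete, here is a minimal Python sketch of how two of them (the two-major-percentile drop and the 75%-of-median weight rule) might be checked. It is illustrative only: the percentile and median values are assumed to come from a validated WHO/CDC growth-chart lookup, which is not implemented here, and the function names are hypothetical.

```python
# Illustrative only: screening measurements against two of the FTT
# criteria listed above. Percentile and median values are assumed to
# come from a validated WHO/CDC growth-chart lookup (not implemented).

MAJOR_PERCENTILES = [5, 10, 25, 50, 75, 90, 95]

def crossed_two_major_percentiles(percentile_history: list[float]) -> bool:
    """True if weight-for-age has fallen across at least two major
    percentile lines between the first and latest measurements."""
    def lines_at_or_below(p: float) -> int:
        return sum(1 for line in MAJOR_PERCENTILES if p >= line)
    first, latest = percentile_history[0], percentile_history[-1]
    return lines_at_or_below(first) - lines_at_or_below(latest) >= 2

def below_75pct_of_median(weight_kg: float, median_weight_kg: float) -> bool:
    """True if weight is below 75% of the median weight for age."""
    return weight_kg < 0.75 * median_weight_kg

# Example: a child tracking the 50th percentile who drops below the 10th
print(crossed_two_major_percentiles([50, 45, 25, 9]))               # True
print(below_75pct_of_median(weight_kg=7.0, median_weight_kg=10.2))  # True
```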
Treatment
Infants and children who have had unpleasant eating experiences (e.g. acid reflux or food intolerance) may be reluctant to eat their meals. Additionally, force-feeding an infant or child can discourage proper self-feeding practices and in turn cause undue stress on both the child and their parents. Psychosocial interventions can be targeted at encouraging the child to feed themselves during meals. Making mealtimes a positive, enjoyable experience through the use of positive reinforcement may also improve eating habits in children who present with FTT. If behavioral issues persist and are affecting nutritional habits in children with FTT, it is recommended that the child see a psychologist. If an underlying condition, such as inflammatory bowel disease, is identified as the cause of the child's failure to thrive, then treatment is directed towards that condition. Special care should be taken to avoid refeeding syndrome when initiating feeds in a malnourished patient. Refeeding syndrome is caused by a shift in fluid and electrolytes in a malnourished person as they receive artificial refeeding. It is potentially fatal and can occur with either enteral or parenteral nutrition. The most serious and common electrolyte abnormality is hypophosphatemia, although sodium abnormalities are common as well. Refeeding can also cause changes in glucose, protein, and fat metabolism. Incidence of refeeding syndrome is high, with one prospective cohort study showing that 34% of ICU patients experienced hypophosphatemia soon after feeding was restarted.
Low resourced settings
Community-based management of malnutrition (CMAM) has been shown to be effective in many low-resourced regions in the past two decades. This method includes providing children with ready-to-use therapeutic food (RUTF) and then following up on their health at home or at local health centers. RUTF is readily consumed, shelf-stable food that provides all the nutrients required for recovery. It comes in different formulations, is generally a soft, semisolid paste, and can be sourced locally, commercially, or from agencies like UNICEF. In terms of efficacy, clinical experience and systematic reviews have shown higher recovery rates using CMAM than previous methods, such as milk-based formulas. While this is an efficient outpatient method to address FTT, children with underlying pathologies would require further inpatient workup.

RUTF should be treated as prescribed medication for the child experiencing FTT, and thus should not be shared with others in the family. The recommended feeding protocol is 5-6 servings a day for about 6–8 months, at which time many children will fully recover. Children should have a follow-up every week or two to assess weight and mid-upper arm circumference. Follow-ups can be spaced out if there is progress without complications, but if the child is not improving, further evaluation for underlying issues is recommended. After treatment has ended, the child's caretakers should be counseled on how to continue feeding the child and how to look for signs of relapse.

Prevention is an effective strategy to address failure to thrive in resource-limited regions. Recognition of at-risk populations is an important first step in approaching prevention. Infections such as HIV and tuberculosis, as well as conditions causing diarrhea, can be causative factors in failure to thrive, so addressing these conditions can greatly improve outcomes. Targeted supplementation strategies such as ready-to-eat foods or legume supplementation are valuable tools for preempting failure to thrive.
Prognosis
Children with failure to thrive are at an increased risk for long-term growth, cognitive, and behavioral complications. Studies have shown that children with failure to thrive during infancy were shorter and of lower weight at school age than their peers. Failure to thrive may also result in children not achieving their growth potential, as estimated by mid-parental height. Longitudinal studies have also demonstrated lower IQs (3–5 points) and poorer arithmetic performance in children with a history of failure to thrive, compared to peers who received adequate nutrition as infants and toddlers. Early intervention and restoration of adequate nutrition have been shown to reduce the likelihood of long-term sequelae; however, studies have shown that failure to thrive may cause persistent behavioral problems despite appropriate treatment.
History
The term FTT was first introduced in the early 20th century to describe poor growth in orphaned children, but it became associated with negative implications (such as maternal deprivation) that often incorrectly explained the underlying issues. Throughout the 20th century, FTT was expanded to include many different issues related to poor growth, which made it broadly applicable but non-specific. The current conceptualization of FTT acknowledges the complexity of faltering growth in children and has shed many of the negative stereotypes that plagued previous definitions.
See also
Developmental disorders
Hospitalism
Malnutrition
Neonatal isoerythrolysis
Refeeding syndrome
SIDS
Small for gestational age
Stunted growth
References
External links
Familial Mediterranean fever

Familial Mediterranean fever (FMF) is a hereditary inflammatory disorder. FMF is an autoinflammatory disease caused by mutations in the Mediterranean fever (MEFV) gene, which encodes a 781-amino-acid protein called pyrin. While all ethnic groups are susceptible to FMF, it usually occurs in people of Mediterranean origin, including Sephardic Jews, Mizrahi Jews, Ashkenazi Jews, Assyrians, Armenians, Azerbaijanis, Levantines, Kurds, Greeks, Turks and Italians.

The disorder has been given various names, including familial paroxysmal polyserositis, periodic peritonitis, recurrent polyserositis, benign paroxysmal peritonitis, periodic disease or periodic fever, Reimann periodic disease or Reimann syndrome, Siegal-Cattan-Mamou disease, and Wolff periodic disease. Note that "periodic fever" can also refer to any of the periodic fever syndromes.
Signs and symptoms
Attacks
There are seven types of attacks. Ninety percent of all patients have their first attack before they are 18 years old. All develop over 2–4 hours and last anywhere from 6 hours to 5 days. Most attacks involve fever.
Abdominal attacks, featuring abdominal pain, affect the whole abdomen with all signs of peritonitis (inflammation of abdominal lining), and acute abdominal pain like appendicitis. They occur in 95% of all patients and may lead to unnecessary laparotomy. Incomplete attacks, with local tenderness and normal blood tests, have been reported.
Joint attacks mainly occur in large joints, especially in the legs. Usually, only one joint is affected. 75% of all FMF patients experience joint attacks.
Chest attacks include pleuritis (inflammation of the pleura) and pericarditis (inflammation of the pericardium). Pleuritis occurs in 40% of patients and makes it difficult to breathe or lie flat, but pericarditis is rare.
Scrotal attacks due to inflammation of the tunica vaginalis are somewhat rare but may be mistaken for testicular torsion.
Myalgia (rare in isolation)
Erysipeloid rashes (a skin reaction on the legs that can mimic cellulitis, rare in isolation)
Complications
AA-amyloidosis with kidney failure is a complication and may develop without overt crises. AA amyloid protein is produced in very large quantities during attacks, and at a low rate between them, and accumulates mainly in the kidney, as well as the heart, spleen, gastrointestinal tract, and thyroid.There appears to be an increase in the risk for developing particular vasculitis-related diseases (e.g. Henoch–Schönlein purpura), spondylarthropathy, prolonged arthritis of certain joints and protracted myalgia.
Genetics
The MEFV gene is located on the short arm of chromosome 16 (16p13). Many different mutations of the MEFV gene can cause the disorder. Having one mutation is unlikely to cause the condition. Having two mutations (either the same mutation inherited from both parents, or two different mutations, one from each parent) is the threshold for a genetic diagnosis of FMF. However, most individuals who meet the genetic criteria for FMF remain asymptomatic or undiagnosed. Whether this is due to modifier genes or environmental factors remains to be established.
Pathophysiology
Virtually all cases are due to a mutation in the Mediterranean fever (MEFV) gene on chromosome 16, which codes for a protein called pyrin or marenostrin. Various mutations of this gene lead to FMF, although some mutations cause a more severe picture than others. Mutations occur mainly in exons 2, 3, 5 and 10.

The function of pyrin has not been completely elucidated, but in short, it is a protein that binds to the adaptor ASC and the pro form of the enzyme caspase-1 to generate multiprotein complexes called inflammasomes in response to certain infections. In healthy individuals, pyrin-mediated inflammasome assembly, which leads to the caspase-1-dependent processing and secretion of pro-inflammatory cytokines such as interleukin-18 (IL-18) and IL-1β, is a response to enterotoxins from certain bacteria. Gain-of-function mutations in the MEFV gene render pyrin hyperactive, and subsequently the formation of inflammasomes becomes more frequent.

Understanding of the pathophysiology of familial Mediterranean fever has recently advanced significantly. At the basal state, pyrin is kept inactive by a chaperone protein (belonging to the 14-3-3 family) linked to pyrin through phosphorylated serine residues. Dephosphorylation of pyrin is an essential prerequisite for activation of the pyrin inflammasome. Inactivation of RhoA GTPases (by bacterial toxins, for example) leads to the inactivation of the PKN1/PKN2 kinases and dephosphorylation of pyrin. In healthy subjects, the dephosphorylation step alone does not activate the pyrin inflammasome. In contrast, in FMF patients, dephosphorylation of the serines is sufficient to trigger activation of the pyrin inflammasome. This suggests that there is a two-level regulation and that the second regulatory mechanism (independent of (de)phosphorylation) is deficient in FMF patients. This deficient mechanism is probably located at the level of the B30.2 domain (encoded by exon 10), where most of the pathogenic mutations associated with FMF are located. It is probably the interaction of this domain with the cytoskeleton (microtubules) that is failing, as suggested by the efficacy of colchicine.

It is not conclusively known what exactly sets off the attacks, and why overproduction of IL-1 would lead to particular symptoms in particular organs (e.g. joints or the peritoneal cavity).
Diagnosis
The diagnosis is made clinically on the basis of the history of typical attacks, especially in patients from the ethnic groups in which FMF is more highly prevalent. An acute phase response is present during attacks, with high C-reactive protein levels, an elevated white blood cell count and other markers of inflammation. In patients with a long history of attacks, monitoring kidney function is important for predicting chronic kidney failure.

A genetic test is also available to detect mutations in the MEFV gene. Sequencing of exons 2, 3, 5, and 10 of this gene detects an estimated 97% of all known mutations.

A specific test for FMF is the metaraminol provocative test (MPT), whereby a single 10 mg infusion of metaraminol is administered to the patient. A positive diagnosis is made if the patient presents with a typical, albeit milder, FMF attack within 48 hours. As the MPT is more specific than sensitive, it does not identify all cases of FMF, although a positive MPT can be very useful.
Treatment
Attacks are self-limiting and require analgesia and NSAIDs (such as diclofenac). Colchicine, a drug otherwise mainly used in gout, decreases attack frequency in FMF patients. The exact way in which colchicine suppresses attacks is unclear. While this agent is not without side effects (such as abdominal pain and muscle pains), it may markedly improve quality of life in patients. The dosage is typically 1–2 mg a day. Development of amyloidosis is delayed with colchicine treatment. Interferon is being studied as a therapeutic modality. Some advise discontinuation of colchicine before and during pregnancy, but the data are inconsistent, and others feel it is safe to take colchicine during pregnancy.

Approximately 5–10% of FMF cases are resistant to colchicine therapy alone. In these cases, adding anakinra to the daily colchicine regimen has been successful. Canakinumab, an anti-interleukin-1-beta monoclonal antibody, has likewise been shown to be effective in controlling and preventing flare-ups in patients with colchicine-resistant FMF and in two additional autoinflammatory recurrent fever syndromes: mevalonate kinase deficiency (hyper-immunoglobulin D syndrome, or HIDS) and tumor necrosis factor receptor-associated periodic syndrome (TRAPS).
Epidemiology
FMF affects groups of people originating from around the Mediterranean Sea (hence its name). It is most prominently present in the Armenians, Sephardic Jews, Ashkenazi Jews, Mizrahi Jews, Cypriots, Kurds, Turks and Levantines.
History
A New York City allergist, Sheppard Siegal, first described the attacks of peritonitis in 1945; he termed this "benign paroxysmal peritonitis", as the disease course was essentially benign. Dr Hobart Reimann, working at the American University of Beirut, described a more complete picture, which he termed "periodic disease". French physicians Henry Mamou and Roger Cattan described the complete disease with renal complications in 1952.
See also
List of cutaneous conditions
Urticarial syndromes
References
External links
Proteopedia 2wl1 information about the MEFV gene.
GeneReview/NIH/UW entry on Familial Mediterranean Fever
Familial Mediterranean Fever (FMF) - US National Institute of Arthritis and Musculoskeletal and Skin Diseases
Fasciolosis

Fasciolosis is a parasitic worm infection caused by the common liver fluke Fasciola hepatica as well as by Fasciola gigantica. The disease is a plant-borne trematode zoonosis and is classified as a neglected tropical disease (NTD). It affects humans, but its main hosts are ruminants such as cattle and sheep. The disease progresses through four distinct phases: an initial incubation phase of between a few days and three months with little or no symptoms; an invasive or acute phase, which may manifest with fever, malaise, abdominal pain, gastrointestinal symptoms, urticaria, anemia, jaundice, and respiratory symptoms; a latent phase with fewer symptoms; and ultimately a chronic or obstructive phase months to years later. In the chronic state the disease causes inflammation of the bile ducts and gall bladder and may cause gallstones as well as fibrosis. While chronic inflammation is connected to increased cancer rates, it is unclear whether fasciolosis is associated with increased cancer risk.

Up to half of those infected display no symptoms, and diagnosis is difficult because the worm eggs are often missed in fecal examination. The methods of detection are fecal examination, parasite-specific antibody detection, and radiological diagnosis, as well as laparotomy. In case of a suspected outbreak it may be useful to take a dietary history, which is also useful for excluding differential diagnoses. Fecal examination is generally not helpful because the worm eggs can seldom be detected in the chronic phase of the infection; eggs first appear in the feces between 9 and 11 weeks post-infection. The cause of this is unknown, and it is also difficult to distinguish between the different species of Fasciola, as well as to distinguish them from echinostomes and Fasciolopsis. Most immunodiagnostic tests detect infection with very high sensitivity, and as antibody concentration drops after treatment, immunodiagnosis is a very good method. Clinically it is not possible to differentiate fasciolosis from other liver and bile diseases. Radiological methods can detect lesions in both acute and chronic infection, while laparotomy will detect lesions and also occasionally eggs and live worms.

Because of the size of the parasite (adult F. hepatica measure 20–30 × 13 mm, or 0.79–1.18 × 0.51 inches; adult F. gigantica 25–75 × 12 mm, or 0.98–2.95 × 0.47 inches), fasciolosis is a serious concern. The severity of symptoms depends on the worm burden and the stage of the infection. The death rate is significant in both cattle (67.55%) and goats (24.61%), but generally low among humans. Treatment with triclabendazole has been highly effective against the adult worms as well as various developing stages. Praziquantel is not effective, and older drugs such as bithionol are moderately effective but also cause more side effects. Secondary bacterial infection causing cholangitis has also been a concern and can be treated with antibiotics, and toxaemia may be treated with prednisolone.

Humans are infected by eating water-grown plants, primarily wild-grown watercress in Europe or morning glory in Asia. Infection may also occur by drinking contaminated water containing floating young fasciola or when using utensils washed with contaminated water. Cultivated plants do not spread the disease to the same extent. Human infection is rare, even where the infection rate is high among animals.
Especially high rates of human infection have been found in Bolivia, Peru and Egypt, and this may be due to consumption of certain foods. No vaccine is available to protect people against Fasciola infection. Preventative measures primarily consist of treating and immunizing the livestock that are required to host the worm's life cycle. Veterinary vaccines are in development, and their use is being considered by a number of countries on account of the risk to human health and economic losses resulting from livestock infection. Other methods include using molluscicides to decrease the number of snails that act as vectors, but this is not practical. Educational methods to decrease consumption of wild watercress and other water plants have been shown to work in areas with a high disease burden.

Fascioliasis occurs in Europe, Africa and the Americas, as well as Oceania. Worldwide losses in animal productivity due to fasciolosis have recently been conservatively estimated at over US$3.2 billion per annum. Fasciolosis is now recognized as an emerging human disease: the World Health Organization (WHO) has estimated that 2.4 million people are infected with Fasciola, and a further 180 million are at risk of infection.
Signs and symptoms
Humans
The course of fasciolosis in humans has four main phases:

Incubation phase: from the ingestion of metacercariae to the appearance of the first symptoms; time period: a few days to 3 months; depends on the number of ingested metacercariae and the immune status of the host
Invasive or acute phase: fluke migration up to the bile ducts. This phase is a result of mechanical destruction of the hepatic tissue and the peritoneum by migrating juvenile flukes, causing localized and/or generalized toxic and allergic reactions. The major symptoms of this phase are:
Fever: usually the first symptom of the disease; 40–42 °C (104–108 °F)
Abdominal pain
Gastrointestinal disturbances: loss of appetite, flatulence, nausea, diarrhea
Urticaria
Respiratory symptoms (very rare): cough, dyspnoea, chest pain, hemoptysis
Hepatomegaly and splenomegaly
Ascites
Anaemia
Jaundice
Latent phase: This phase can last for months or years. The proportion of asymptomatic subjects in this phase is unknown. They are often discovered during family screening after a patient is diagnosed.
Chronic or obstructive phase: This phase may develop months or years after initial infection. Adult flukes in the bile ducts cause inflammation and hyperplasia of the epithelium. The resulting cholangitis and cholecystitis, combined with the large body of the flukes, are sufficient to cause mechanical obstruction of the biliary duct. In this phase, biliary colic, epigastric pain, fatty food intolerance, nausea, jaundice, pruritus, and right upper-quadrant abdominal tenderness are clinical manifestations indistinguishable from cholangitis, cholecystitis and cholelithiasis of other origins. Hepatic enlargement may be associated with an enlarged spleen or ascites. In case of obstruction, the gall bladder is usually enlarged and edematous, with thickening of the wall (Kabaalioglu A, Ceken K, Alimoglu E, Saba R, Cubuk M, Arslan G, Apaydin A. Hepatobiliary fascioliasis: sonographic and CT findings in 87 patients during the initial phase and long-term follow-up. AJR 2007; 189:824–828). Fibrous adhesions of the gall bladder to adjacent organs are common. Lithiasis of the bile duct or gall bladder is frequent, and the stones are usually small and multiple.
Other animals
Veterinary clinical signs of fasciolosis are always closely associated with infectious dose (amount of ingested metacercariae). In sheep, the most common definitive host, the clinical presentation is divided into four types:
Acute Type I Fasciolosis: infectious dose is more than 5000 ingested metacercariae. Sheep suddenly die without any previous clinical signs. Ascites, abdominal haemorrhage, icterus, pallor of membranes, and weakness may be observed in sheep.
Acute Type II Fasciolosis: infectious dose is 1000-5,000 ingested metacercariae. As above, sheep die but briefly show pallor, loss of condition and ascites.
Subacute Fasciolosis: infectious dose is 800-1000 ingested metacercariae. Sheep are lethargic, anemic and may die. Weight loss is the dominant feature.
Chronic Fasciolosis: infectious dose is 200-800 ingested metacercariae. Asymptomatic, or gradual development of bottle jaw and ascites (ventral edema), emaciation, and weight loss.

In blood, anemia, hypoalbuminemia, and eosinophilia may be observed in all types of fasciolosis. Elevation of liver enzyme activities, such as glutamate dehydrogenase (GLDH), gamma-glutamyl transferase (GGT), and lactate dehydrogenase (LDH), is detected in subacute or chronic fasciolosis from 12 to 15 weeks after ingestion of metacercariae. The economic effect of fasciolosis in sheep consists of sudden deaths of animals as well as reduced weight gain and wool production. In goats and cattle, the clinical manifestation is similar to that in sheep. However, acquired resistance to F. hepatica infection is well known in adult cattle. Calves are susceptible to disease, but in excess of 1000 metacercariae are usually required to cause clinical fasciolosis. In this case the disease is similar to that in sheep and is characterized by weight loss, anemia, hypoalbuminemia and (after infection with 10,000 metacercariae) death. The importance of cattle fasciolosis consists of economic losses caused by condemnation of livers at slaughter and production losses, especially due to reduced weight gain.

In sheep and sometimes cattle, the damaged liver tissue may become infected by the Clostridium bacterium C. novyi type B. The bacteria release toxins into the bloodstream, resulting in what is known as black disease. There is no cure and death follows quickly. As C. novyi is common in the environment, black disease is found wherever populations of liver flukes and sheep overlap.
Cause
Fasciolosis is caused by two digenetic trematodes, F. hepatica and F. gigantica. Adult flukes of both species are localized in the bile ducts of the liver or gallbladder. F. hepatica measures 2 to 3 cm and has a cosmopolitan distribution. F. gigantica measures 4 to 10 cm in length, and the distribution of the species is limited to the tropics; it has been recorded in Africa, the Middle East, Eastern Europe and south and eastern Asia. In domestic livestock in Japan, diploid (2n = 20), triploid (3n = 30) and chimeric flukes (2n/3n) have been described, many of which reproduce parthenogenetically. As a result of this unclear classification, flukes in Japan are normally referred to as Fasciola spp. Recent reports based on analysis of mitochondrial genes have shown that Japanese Fasciola spp. are more closely related to F. gigantica than to F. hepatica. In India, a species called F. jacksoni was described in elephants.
Transmission
Human F. hepatica infection is determined by the presence of the intermediate snail hosts, domestic herbivorous animals, climatic conditions and human dietary habits. Sheep, goats and cattle are considered the predominant animal reservoirs. While other animals can be infected, they are usually not very important for human disease transmission. On the other hand, some authors have observed that donkeys and pigs contribute to disease transmission in Bolivia. Among wild animals, it has been demonstrated that the peridomestic rat (Rattus rattus) may play an important role in the spread as well as in the transmission of the parasite in Corsica. In France, the nutria (Myocastor coypus) was confirmed as a wild reservoir host of F. hepatica. Humans are infected by ingestion of aquatic plants that carry the infectious metacercariae. Several species of aquatic vegetables are known as vehicles of human infection. In Europe, Nasturtium officinale (common watercress), Nasturtium sylvestre, Rorippa amphibia (wild watercress), Taraxacum dens leonis (dandelion leaves), Valerianella olitoria (lamb's lettuce), and Mentha viridis (spearmint) have been reported as sources of human infection. In the Northern Bolivian Altiplano, some authors suggested that several aquatic plants such as bero-bero (watercress), algas (algae), kjosco and tortora could act as a source of infection for humans. Because F. hepatica cercariae also encyst on the water surface, humans can be infected by drinking fresh untreated water containing cercariae. In addition, an experimental study suggested that humans consuming raw liver dishes from fresh livers infected with juvenile flukes could become infected.
Intermediate hosts
Intermediate hosts of F. hepatica are freshwater snails from the family Lymnaeidae. Snails from the family Planorbidae act as intermediate hosts of F. hepatica only very occasionally.
Mechanism
The development of infection in the definitive host is divided into two phases: the parenchymal (migratory) phase and the biliary phase. The parenchymal phase begins when excysted juvenile flukes penetrate the intestinal wall. After penetrating the intestine, flukes migrate within the abdominal cavity and penetrate the liver or other organs. F. hepatica has a strong predilection for the tissues of the liver. Occasionally, ectopic locations of flukes such as the lungs, diaphragm, intestinal wall, kidneys, and subcutaneous tissue can occur. During the migration of flukes, tissues are mechanically destroyed and inflammation appears around the migratory tracks of the flukes. The second (biliary) phase begins when parasites enter the biliary ducts of the liver. In the biliary ducts, flukes mature, feed on blood, and produce eggs. Hypertrophy of the biliary ducts associated with obstruction of the lumen occurs as a result of tissue damage.
Resistance to infection
Mechanisms of resistance have been studied by several authors in different animal species. These studies may help to better understand the immune response to F. hepatica in the host and are necessary for the development of a vaccine against the parasite. It has been established that cattle acquire resistance to challenge infection with F. hepatica and F. gigantica when they have been sensitized with a primary patent or drug-abbreviated infection. Resistance to fasciolosis was also documented in rats. On the other hand, sheep and goats are not resistant to re-infection with F. hepatica. However, there is evidence that two sheep breeds, in particular Indonesian thin tail sheep and Red Maasai sheep, are resistant to F. gigantica.
Diagnosis
Most immunodiagnostic tests will detect infection and have a sensitivity above 90% during all stages of the disease. In addition, antibody concentration drops quickly after treatment, and no antibodies are present one year after treatment, which makes immunodiagnosis a very good method. In humans, diagnosis of fasciolosis is usually achieved parasitologically by finding the fluke eggs in stool, and immunologically by ELISA and Western blot. Coprological examination of stool alone is generally not adequate because infected humans have important clinical presentations long before eggs are found in the stool.

Moreover, in many human infections, the fluke eggs are often not found in the faeces, even after multiple faecal examinations. Furthermore, eggs of F. hepatica, F. gigantica and Fasciolopsis buski are morphologically indistinguishable. Therefore, immunological methods such as ELISA and enzyme-linked immunoelectrotransfer blot, also called Western blot, are the most important methods in the diagnosis of F. hepatica infection. These immunological tests are based on detection of species-specific antibodies in sera. The antigenic preparations used have been primarily derived from extracts of excretory/secretory products of adult worms, or from partially purified fractions. Recently, purified native and recombinant antigens have been used, e.g. recombinant F. hepatica cathepsin L-like protease.

Methods based on antigen detection (circulating in serum or in faeces) are less frequent. In addition, biochemical and haematological examinations of human sera support the exact diagnosis (eosinophilia, elevation of liver enzymes). Ultrasonography and x-ray of the abdominal cavity, biopsy of the liver, and gallbladder punctate can also be used (Kabaalioglu A, Apaydin A, Sindel T, Lüleci E. US-guided gallbladder aspiration: a new diagnostic method for biliary fascioliasis. Eur Radiol 1999; 9:880–882). False fasciolosis (pseudofasciolosis) refers to the presence of eggs in the stool resulting not from an actual infection but from recent ingestion of infected livers containing eggs. This situation (with its potential for misdiagnosis) can be avoided by having the patient follow a liver-free diet for several days before a repeat stool examination.

In animals, intravital diagnosis is based predominantly on faecal examinations and immunological methods. However, clinical signs, biochemical and haematological profiles, season, climatic conditions, the epidemiological situation, and examinations of snails must be considered. As in humans, faecal examinations are not reliable, since the fluke eggs only become detectable in faeces 8–12 weeks post-infection. Despite that, faecal examination is still the only diagnostic tool used in some countries. While coprological diagnosis of fasciolosis is possible from 8 to 12 weeks post-infection (WPI), F. hepatica-specific antibodies can be recognized using ELISA or Western blot 2–4 weeks post-infection. Therefore, these methods provide early detection of the infection.
Prevention
In some areas, special control programs are in place or have been planned. The types of control measures depend on the setting (such as epidemiologic, ecologic, and cultural factors). Strict control of the growth and sale of watercress and other edible water plants is important. Individuals can protect themselves by not eating raw watercress and other water plants, especially from endemic grazing areas. Travelers to areas with poor sanitation should avoid food and water that might be contaminated. Vegetables grown in fields that might have been irrigated with polluted water should be thoroughly cooked, as should viscera from potentially infected animals.
Treatment
Humans
Several drugs are effective for fascioliasis, both in humans and in domestic animals. The drug of choice in the treatment of fasciolosis is triclabendazole, a member of the benzimidazole family of anthelmintics. The drug works by preventing the polymerization of the molecule tubulin into the cytoskeletal structures, microtubules. Resistance of F. hepatica to triclabendazole was recorded in Australia in 1995 and in Ireland in 1998.

Praziquantel treatment is ineffective. There are case reports of nitazoxanide being successfully used in human fasciolosis treatment in Mexico, and there are also reports of bithionol being used successfully. Nitazoxanide has been found effective in trials, but is currently not recommended.
Domestic animals
Only clorsulon and albendazole are approved for use in the treatment of domestic animals in the United States, but the flukicides used worldwide also include triclabendazole, netobimin, closantel, rafoxanide, nitroxynil, and oxyclozanide; however, this list of available drugs has some drawbacks. Closantel, nitroxynil, and oxyclozanide are not effective against young liver flukes and should only be used to treat subacute and chronic infections. Triclabendazole is effective at killing flukes of any age, but only those that cause acute infections; flukes that have remained in the body for long periods of time are becoming resistant to this drug. The timing of treatment is critical for success and is determined by environmental factors and analysis of the expected distribution and prevalence of the disease. For example, in European countries with large sheep populations, computerized systems predict when fascioliasis is most likely to make the biggest impact on sheep populations and how many sheep will most likely be affected. The predictions depend on estimating when environmental conditions most conducive to parasite multiplication will occur, such as the amount of rainfall, evapotranspiration, and the ratio of wet to dry days in a particular month (a sketch of such a forecast index follows below). If heavy infections are expected to occur, treatment for sheep should begin in September/October, then again in January/February, and finally in April/May; the number of hatching fluke eggs is minimal during these times because they require a warm, wet environment, making treatment more effective.
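Forecasting systems of the kind described above trace back to the Ollerenshaw wetness index, which combines exactly these variables (rain days, rainfall, and potential transpiration) into a monthly score, often cited as Mt = n(R − P + 5), summed over the transmission season. The Python sketch below assumes that formula; the numeric risk cutoffs are illustrative placeholders, not published decision values, and the function names are hypothetical.

```python
# A minimal sketch of a fasciolosis risk forecast in the spirit of the
# Ollerenshaw wetness index. The monthly formula Mt = n * (R - P + 5)
# (n = rain days, R = rainfall, P = potential transpiration) follows the
# classical description; the cutoffs below are illustrative assumptions.

def monthly_mt(rain_days: int, rainfall: float, transpiration: float) -> float:
    return rain_days * (rainfall - transpiration + 5.0)

def seasonal_risk(months: list[tuple[int, float, float]]) -> str:
    """months: (rain_days, rainfall, transpiration) tuples for each month
    of the transmission season (May-October in the original UK system)."""
    total = sum(monthly_mt(*m) for m in months)
    if total > 450:        # assumed cutoff for a "high risk" season
        return "high"
    if total > 300:        # assumed cutoff for "moderate"
        return "moderate"
    return "low"

# Example: a wet six-month season
season = [(12, 3.5, 2.0), (15, 4.0, 2.5), (18, 5.1, 2.2),
          (14, 3.0, 2.8), (16, 4.4, 1.9), (20, 6.0, 1.0)]
print(seasonal_risk(season))  # -> "high"
```

In a real system the seasonal total would feed the treatment calendar described above, e.g. triggering the September/October, January/February, and April/May treatment rounds when a high-risk season is forecast.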
Epidemiology
Human and animal fasciolosis occurs worldwide. While animal fasciolosis is distributed in countries with high cattle and sheep production, human fasciolosis occurs, with the exception of Western Europe, mainly in developing countries. Fasciolosis occurs only in areas where suitable conditions for intermediate hosts exist.

Studies carried out in recent years have shown human fasciolosis to be an important public health problem. Human fasciolosis has been reported from countries in Europe, America, Asia, Africa and Oceania. The incidence of human cases has been increasing in 51 countries of the five continents. A global analysis shows that the expected correlation between animal and human fasciolosis appears only at a basic level. High prevalences in humans are not necessarily found in areas where fasciolosis is a great veterinary problem. For instance, in South America, hyperendemic and mesoendemic areas are found in Bolivia and Peru, where the veterinary problem is less important, while in countries such as Uruguay, Argentina and Chile, human fasciolosis is only sporadic or hypoendemic.
Europe
In Europe, human fasciolosis occurs mainly in France, Spain, Portugal, and the former USSR. France is considered an important human endemic area: a total of 5863 cases of human fasciolosis were recorded from nine French hospitals from 1970 to 1982. Concerning the former Soviet Union, almost all reported cases were from the Tajik Republic. Several papers have reported human fasciolosis in Turkey, and serological surveys have recently been performed in some parts of the country: the prevalence of the disease was serologically found to be 3.01% in Antalya Province and between 0.9 and 6.1% in Isparta Province, in the Mediterranean region of Turkey. In other European countries, fasciolosis is sporadic, and the occurrence of the disease is usually associated with travel to endemic areas.
Americas
In North America, the disease is very sporadic. In Mexico, 53 cases have been reported. In Central America, fasciolosis is a human health problem in the Caribbean islands, especially in zones of Puerto Rico and Cuba. Pinar del Río Province and Villa Clara Province are Cuban regions where fasciolosis was hyperendemic. In South America, human fasciolosis is a serious problem in Bolivia, Peru, and Ecuador. These Andean countries are considered to be the area with the highest prevalence of human fasciolosis in the world. Well-known human hyperendemic areas are localized predominantly in the high plain called the altiplano. In the Northern Bolivian Altiplano, prevalences detected in some communities were up to 72% in coprological surveys and 100% in serological surveys. In Peru, F. hepatica in humans occurs throughout the country. The highest prevalences were reported in Arequipa, the Mantaro Valley, the Cajamarca Valley, and the Puno Region. In other South American countries such as Argentina, Uruguay, Brazil, Venezuela and Colombia, human fasciolosis appears to be sporadic, despite the high prevalence of fasciolosis in cattle.
Africa
In Africa, human cases of fasciolosis, except in northern parts, have not been frequently reported. The highest prevalence was recorded in Egypt where the disease is distributed in communities living in the Nile Delta.
Asia
In Asia, most human cases have been reported in Iran, especially in Gīlān Province on the Caspian Sea; more than 10,000 human cases have been detected there. In eastern Asia, human fasciolosis appears to be sporadic. A few cases have been documented in Japan, the Koreas, Vietnam, and Thailand.
Australia and Oceania
In Australia, human fasciolosis is very rare (only 12 cases documented). In New Zealand, F. hepatica has never been detected in humans.
Other animals
A number of drugs have been used to control fasciolosis in animals. The drugs differ in their efficacy, mode of action, price, and viability. Fasciolicides (drugs against Fasciola spp.) fall into five main chemical groups:

Halogenated phenols: bithionol (Bitin), hexachlorophene (Bilevon), nitroxynil (Trodax)
Salicylanilides: closantel (Flukiver, Supaverm), rafoxanide (Flukanide, Ranizole)
Benzimidazoles: triclabendazole (Fasinex), albendazole (Vermitan, Valbazen), mebendazol (Telmin), luxabendazole (Fluxacur)
Sulphonamides: clorsulon (Ivomec Plus)
Phenoxyalkanes: diamphenetide (Coriban)

Triclabendazole (Fasinex) is considered the most common drug due to its high efficacy against adult as well as juvenile flukes. Triclabendazole is used in the control of fasciolosis of livestock in many countries. Nevertheless, long-term veterinary use of triclabendazole has caused the appearance of resistance in F. hepatica. In animals, triclabendazole resistance was first described in Australia, later in Ireland and Scotland, and more recently in the Netherlands. Considering this, scientists have started to work on the development of a new drug. Recently, a new fasciolicide was successfully tested in naturally and experimentally infected cattle in Mexico. This new drug is called Compound Alpha and is chemically very similar to triclabendazole. Countries where fasciolosis in livestock has been repeatedly reported include:
Europe: UK, Ireland, France, Portugal, Spain, Switzerland, Italy, Netherlands, Germany, Poland
Asia: Turkey, Russia, Thailand, Iraq, Iran, China, Vietnam, India, Nepal, Japan, Korea, Philippines
Africa: Kenya, Zimbabwe, Nigeria, Egypt, Gambia, Morocco
Australia and Oceania: Australia, New Zealand
Americas: United States, Mexico, Cuba, Peru, Chile, Uruguay, Argentina, Jamaica, Brazil

On September 8, 2007, veterinary officials in South Cotabato, Philippines, said that laboratory tests on samples from cows, carabaos, and horses in the province's 10 towns and lone city showed an infection level of 89.5%, a sudden increase in positive cases among large livestock attributed to erratic weather conditions in the area. Officials said the animals had to be treated promptly to prevent complications with surra and hemorrhagic septicemia; surra had already affected all barangays of Surallah town.
See also
Fasciolopsiasis
Clonorchiasis
Fh8 - an antigen secreted by Fasciola hepatica during infection of the liver
References
External links
Fasciolosis Overview Archived 2008-02-04 at the Wayback Machine at CDC
Immunodiagnosis of fasciolosis in Bolivian Altiplano
Fasciolosis Archived 2009-03-10 at the Wayback Machine
Pictures of adult flukes
Pictures of F. hepatica eggs
Food allergy

A food allergy is an abnormal immune response to food. The symptoms of the allergic reaction may range from mild to severe. They may include itchiness, swelling of the tongue, vomiting, diarrhea, hives, trouble breathing, or low blood pressure. This typically occurs within minutes to several hours of exposure. When the symptoms are severe, it is known as anaphylaxis. A food intolerance and food poisoning are separate conditions, not due to an immune response.

Common foods involved include cow's milk, peanuts, eggs, shellfish, fish, tree nuts, soy, wheat, sesame, rice, and fruit. The common allergies vary depending on the country. Risk factors include a family history of allergies, vitamin D deficiency, obesity, and high levels of cleanliness. Allergies occur when immunoglobulin E (IgE), part of the body's immune system, binds to food molecules. A protein in the food is usually the problem. This triggers the release of inflammatory chemicals such as histamine. Diagnosis is usually based on a medical history, elimination diet, skin prick test, blood tests for food-specific IgE antibodies, or oral food challenge.

Early exposure to potential allergens may be protective. Management primarily involves avoiding the food in question and having a plan if exposure occurs. This plan may include giving adrenaline (epinephrine) and wearing medical alert jewelry. The benefit of allergen immunotherapy for food allergies is unclear, and thus it was not recommended as of 2015. Some types of food allergies among children resolve with age, including those to milk, eggs, and soy, while others, such as to nuts and shellfish, typically do not.

In the developed world, about 4% to 8% of people have at least one food allergy. Food allergies are more common in children than adults and appear to be increasing in frequency. Male children appear to be more commonly affected than females. Some allergies more commonly develop early in life, while others typically develop in later life. In developed countries, many people who believe they have food allergies do not actually have them.
Signs and symptoms
Food allergy symptoms occur within minutes to hours after exposure and may include:
Rash
Hives
Itching of mouth, lips, tongue, throat, eyes, skin, or other areas
Swelling (angioedema) of lips, tongue, eyelids, or the whole face
Difficulty swallowing
Runny or congested nose
Hoarse voice
Wheezing and/or shortness of breath
Diarrhea, abdominal pain, and/or stomach cramps
Lightheadedness
Fainting
Nausea
Vomiting

In some cases, however, the onset of symptoms may be delayed for hours. Symptoms can vary, and the amount of food needed to trigger a reaction also varies.

Serious danger arises when the respiratory tract or blood circulation is affected. The former can be indicated by wheezing and cyanosis, while poor blood circulation leads to a weak pulse, pale skin and fainting.

A severe allergic reaction, with symptoms affecting the respiratory tract and blood circulation, is called anaphylaxis. When symptoms are related to a drop in blood pressure, the person is said to be in anaphylactic shock. Anaphylaxis occurs when IgE antibodies are involved, and areas of the body that are not in direct contact with the food become affected and show symptoms. Those with asthma or an allergy to peanuts, tree nuts, or seafood are at greater risk for anaphylaxis.
Causes
Although sensitivity levels vary by country, the most common food allergies are allergies to milk, eggs, peanuts, tree nuts, fish, shellfish, soy, and wheat. These are often referred to as "the big eight". Allergies to seeds—especially sesame—seem to be increasing in many countries. Sesame will join "the big eight" as a priority allergen in the United States by 2023. An example of an allergy more common to a particular region is that to rice in East Asia, where it forms a large part of the diet.

One of the most common food allergies is a sensitivity to peanuts, a member of the bean family. Peanut allergies may be severe, but children with peanut allergies sometimes outgrow them. Tree nuts, including almonds, brazil nuts, cashews, coconuts, hazelnuts, macadamia nuts, pecans, pistachios, pine nuts, and walnuts, are also common allergens. Affected individuals may be sensitive to one particular tree nut or to many different ones. Peanuts and seeds, including sesame seeds and poppy seeds, can be processed to extract oils, but trace amounts of protein may be present and elicit an allergic reaction.

Egg allergies affect about one in 50 children but are frequently outgrown by children when they reach age five. Typically, the sensitivity is to proteins in the white, rather than the yolk.

Milk from cows, goats, or sheep is another common food allergen, and many affected people are also unable to tolerate dairy products such as cheese. A small portion of children with a milk allergy, roughly 10%, have a reaction to beef because it contains small amounts of protein that are also present in cow's milk.

Seafood is one of the most common sources of food allergens; people may be allergic to proteins found in fish or to different proteins found in shellfish (crustaceans and mollusks).

Other foods containing allergenic proteins include soy and wheat and, less frequently, fruits, vegetables, maize, spices, synthetic and natural colors, and chemical additives.

Balsam of Peru, which is in various foods, is among the "top five" allergens most commonly causing patch test reactions in people referred to dermatology clinics.
Other than oral ingestion
Sensitization can occur through the gastrointestinal tract, the respiratory tract and possibly the skin. Damage to the skin in conditions such as eczema has been proposed as a risk factor for sensitization.

While the most obvious route for an allergic exposure is oral ingestion, some reactions are possible through external exposure. Peanut allergies are much more common in adults who had oozing and crusted skin rashes as infants. Airborne particles in a farm- or factory-scale peanut shelling or crushing environment, or from cooking, can produce respiratory effects in exposed allergic individuals. For seafood allergy, an industry review estimated that 28.5 million people worldwide were engaged in some aspect of the seafood industry: fishing, aquaculture, processing and industrial cooking. Exposure to allergenic fish proteins includes inhalation of wet aerosols from fresh fish handling, inhalation of dry aerosols from fishmeal processing, and dermal contact through skin breaks and cuts. Respiratory allergy is an occupational disease that develops in food service workers who work with baked goods, known as "baker's asthma". Previous studies detected 40 allergens from wheat; some cross-reacted with rye proteins and a few cross-reacted with grass pollens.

Influenza vaccines are created by injecting a live virus into fertilized chicken eggs. The viruses are harvested, killed and purified, but a residual amount of egg white protein remains. There are options to receive recombinant flu vaccine grown in mammalian cell cultures instead of in eggs.
Atopy
Food allergies develop more easily in people with the atopic syndrome, a very common combination of diseases: allergic rhinitis and conjunctivitis, eczema, and asthma. The syndrome has a strong inherited component; a family history of allergic diseases can be indicative of the atopic syndrome.
Cross-reactivity
Some children who are allergic to cow's milk protein also show cross-sensitivity to soy-based products. Some infant formulas have their milk and soy proteins hydrolyzed, so when taken by infants, their immune systems do not recognize the allergen and they can safely consume the product. Hypoallergenic infant formulas can be based on proteins partially predigested to a less antigenic form. Other formulas, based on free amino acids, are the least antigenic and provide complete nutritional support in severe forms of milk allergy.

Crustaceans (shrimp, crab, lobster, etc.) and molluscs (mussel, oyster, scallop, squid, octopus, snail, etc.) are different invertebrate classes, but the allergenic protein tropomyosin is present in both and is responsible for cross-reactivity.

People with latex allergy often also develop allergies to bananas, kiwifruit, avocados, and some other foods.
Pathophysiology
Conditions caused by food allergies are classified into three groups according to the mechanism of the allergic response:
IgE-mediated (classic) – the most common type, occurs shortly after eating and may involve anaphylaxis.
Non-IgE mediated – characterized by an immune response not involving immunoglobulin E; may occur some hours after eating, complicating diagnosis
IgE and/or non-IgE-mediated – a hybrid of the above two types

Allergic reactions are hyperactive responses of the immune system to generally innocuous substances. When immune cells encounter the allergenic protein, IgE antibodies are produced; this is similar to the immune system's reaction to foreign pathogens. The IgE antibodies identify the allergenic proteins as harmful and initiate the allergic reaction. The proteins involved are those that resist breakdown because of their strong internal bonds. IgE antibodies bind to a receptor on the surface of the protein, creating a tag, just as a virus or parasite becomes tagged. Why some proteins do not denature, and subsequently trigger allergic reactions and hypersensitivity, while others do, is not entirely clear.

Hypersensitivities are categorized according to the parts of the immune system that are attacked and the amount of time it takes for the response to occur. The four types of hypersensitivity reaction are: type 1, immediate IgE-mediated; type 2, cytotoxic; type 3, immune complex-mediated; and type 4, delayed cell-mediated. The pathophysiology of allergic responses can be divided into two phases. The first is an acute response that occurs immediately after exposure to an allergen. This phase can either subside or progress into a "late-phase reaction", which can substantially prolong the symptoms of a response and result in tissue damage.

Many food allergies are caused by hypersensitivities to particular proteins in different foods. Proteins have unique properties that allow them to become allergens, such as stabilizing forces in their tertiary and quaternary structures which prevent degradation during digestion. Many theoretically allergenic proteins cannot survive the destructive environment of the digestive tract and thus do not trigger hypersensitive reactions.
Acute response
In the early stages of allergy, a type I hypersensitivity reaction against an allergen, encountered for the first time, causes a response in a type of immune cell called a TH2 lymphocyte, which belongs to a subset of T cells that produce a cytokine called interleukin-4 (IL-4). These TH2 cells interact with other lymphocytes called B cells, whose role is the production of antibodies. Coupled with signals provided by IL-4, this interaction stimulates the B cell to begin production of a large amount of a particular type of antibody known as IgE. Secreted IgE circulates in the blood and binds to an IgE-specific receptor (a kind of Fc receptor called FcεRI) on the surface of other kinds of immune cells called mast cells and basophils, which are both involved in the acute inflammatory response. The IgE-coated cells, at this stage, are sensitized to the allergen.

If later exposure to the same allergen occurs, the allergen can bind to the IgE molecules held on the surface of the mast cells or basophils. Cross-linking of the IgE and Fc receptors occurs when more than one IgE-receptor complex interacts with the same allergenic molecule and activates the sensitized cell. Activated mast cells and basophils undergo a process called degranulation, during which they release histamine and other inflammatory chemical mediators (cytokines, interleukins, leukotrienes, and prostaglandins) from their granules into the surrounding tissue causing several systemic effects, such as vasodilation, mucous secretion, nerve stimulation, and smooth-muscle contraction. This results in rhinorrhea, itchiness, dyspnea, and anaphylaxis. Depending on the individual, the allergen, and the mode of introduction, the symptoms can be system-wide (classical anaphylaxis), or localized to particular body systems.
Late-phase response
After the chemical mediators of the acute response subside, late-phase responses can often occur due to the migration of other leukocytes such as neutrophils, lymphocytes, eosinophils, and macrophages to the initial site. The reaction is usually seen 2–24 hours after the original reaction. Cytokines from mast cells may also play a role in the persistence of long-term effects.
Diagnosis
Diagnosis is usually based on a medical history, elimination diet, skin prick test, blood tests for food-specific IgE antibodies, or oral food challenge.
For skin-prick tests, a tiny board with protruding needles is used. The allergens are placed either on the board or directly on the skin. The board is then placed on the skin to puncture it and allow the allergens to enter the body. If a hive appears, the person is considered positive for the allergy. This test only works for IgE antibodies; allergic reactions caused by other antibodies cannot be detected through skin-prick tests.

Skin-prick testing is easy to do and results are available in minutes. Different allergists may use different devices for testing. Some use a "bifurcated needle", which looks like a fork with two prongs. Others use a "multitest", which may look like a small board with several pins sticking out of it. In these tests, a tiny amount of the suspected allergen is put onto the skin or into a testing device, and the device is placed on the skin to prick, or break through, the top layer of skin. This puts a small amount of the allergen under the skin. A hive will form at any spot where the person is allergic. This test generally yields a positive or negative result. It is good for quickly learning if a person is allergic to a particular food or not, because it detects IgE. Skin tests cannot predict if a reaction would occur or what kind of reaction might occur if a person ingests that particular allergen. They can, however, confirm an allergy in light of a patient's history of reactions to a particular food. Non-IgE-mediated allergies cannot be detected by this method.
Patch testing is used to determine if a specific substance causes allergic inflammation of the skin. It tests for delayed food reactions.
Blood testing is another way to test for allergies; however, it shares the same limitation: it detects only IgE-mediated allergies and does not work for every possible allergen. Radioallergosorbent testing (RAST) is used to detect IgE antibodies present against a certain allergen. The score taken from the RAST is compared with predictive values taken from a specific type of RAST. If the score is higher than the predictive value, there is a high probability that the allergy is present. One advantage of this test is that it can test many allergens at one time.

A CAP-RAST has greater specificity than RAST; it can show the amount of IgE present to each allergen. Researchers have been able to determine "predictive values" for certain foods, which can be compared with the RAST results. If a person's RAST score is higher than the predictive value for that food, there is over a 95% chance that the person will have an allergic reaction (limited to rash and anaphylaxis reactions) if they ingest that food. Currently, predictive values are available for milk, egg, peanut, fish, soy, and wheat. Blood tests allow for hundreds of allergens to be screened from a single sample, and cover food allergies as well as inhalants. However, non-IgE-mediated allergies cannot be detected by this method. Other widely promoted tests, such as the antigen leukocyte cellular antibody test and the food allergy profile, are considered unproven methods, the use of which is not advised.
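The comparison of a RAST score against a food-specific predictive value is a simple threshold rule. A minimal Python sketch of that rule, using entirely hypothetical cut-off values (real cut-offs depend on the assay, the patient's age, and the study population), might look like this:

PREDICTIVE_VALUES_KU_L = {
    # Hypothetical illustrative thresholds in kU/L; not clinical values.
    "milk": 15.0,
    "egg": 7.0,
    "peanut": 14.0,
    "fish": 20.0,
    "soy": 30.0,
    "wheat": 26.0,
}

def likely_reactive(food: str, rast_score: float) -> bool:
    """Return True when the score exceeds the food's predictive value,
    corresponding to the >95% reaction probability described above."""
    threshold = PREDICTIVE_VALUES_KU_L.get(food)
    if threshold is None:
        raise ValueError(f"No predictive value established for {food!r}")
    return rast_score > threshold

print(likely_reactive("peanut", 18.2))  # True: 18.2 kU/L exceeds 14.0

This only sketches the decision rule; in practice the result is interpreted alongside the patient's history rather than used in isolation.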
Food challenges can detect allergies beyond those mediated by IgE. The allergen is given to the person in the form of a pill, so the person can ingest the allergen directly. The person is watched for signs and symptoms. The problem with food challenges is that they must be performed in the hospital under careful watch, due to the possibility of anaphylaxis.

Food challenges, especially double-blind, placebo-controlled food challenges, are the gold standard for diagnosis of food allergies, including most non-IgE-mediated reactions, but are rarely done. Blind food challenges involve packaging the suspected allergen into a capsule, giving it to the patient, and observing the patient for signs or symptoms of an allergic reaction.

The recommended method for diagnosing food allergy is to be assessed by an allergist. The allergist will review the patient's history and the symptoms or reactions that have been noted after food ingestion. If the allergist feels the symptoms or reactions are consistent with food allergy, they will perform allergy tests. Additional diagnostic tools for evaluation of eosinophilic or non-IgE-mediated reactions include endoscopy, colonoscopy, and biopsy.
Differential diagnosis
Important differential diagnoses are:
Lactose intolerance generally develops later in life, but can present in young patients in severe cases. It is not an immune reaction and is due to an enzyme deficiency (lactase). It is more common in many non-Western people.
Celiac disease. While it is caused by a permanent intolerance to gluten (present in wheat, rye, barley, and oats), it is not an allergy nor simply an intolerance, but a chronic, multiple-organ autoimmune disorder primarily affecting the small intestine.
Irritable bowel syndrome
C1 esterase inhibitor deficiency (hereditary angioedema), a rare disease, generally causes attacks of angioedema, but can present solely with abdominal pain and occasional diarrhea, and thus may be confused with allergy-triggered angioedema.
Prevention
Breastfeeding for more than four months may prevent atopic dermatitis, cow's milk allergy, and wheezing in early childhood. Early exposure to potential allergens may be protective. Specifically, early exposure to eggs and peanuts reduces the risk of allergies to these. Guidelines suggest introducing peanuts as early as 4–6 months and include precautionary measures for high-risk infants. The former guidelines, advising delaying the introduction of peanuts, are now thought to have contributed to the increase in peanut allergy seen recently.

To avoid an allergic reaction, a strict diet can be followed. It is difficult to determine the amount of allergenic food required to elicit a reaction, so complete avoidance should be attempted. In some cases, hypersensitive reactions can be triggered by exposures to allergens through skin contact, inhalation, kissing, participation in sports, blood transfusions, cosmetics, and alcohol.
Inhalation exposure
Allergic reactions to airborne particles or vapors of known food allergens have been reported as an occupational consequence of people working in the food industry, but can also take place in home situations, restaurants, or confined spaces such as airplanes. According to two reviews, respiratory symptoms are common, but in some cases there has been progression to anaphylaxis. The most frequently reported cases of reactions by inhalation of allergenic foods were due to peanut, seafood, legumes, tree nut, and cow's milk. Steam rising from the cooking of lentils, green beans, chickpeas, and fish has been well documented as triggering reactions, including anaphylactic reactions. One review mentioned case-study examples of allergic responses to inhalation of other foods, including examples in which oral consumption of the food is tolerated.
Treatment
The mainstay of treatment for food allergy is total avoidance of the foods identified as allergens. An allergen can enter the body by consuming a portion of food containing the allergen, and can also be ingested by touching any surfaces that may have come into contact with the allergen, then touching the eyes or nose. For people who are extremely sensitive, avoidance includes avoiding touching or inhaling problematic food. Total avoidance is complicated because the declaration of the presence of trace amounts of allergens in foods is not mandatory (see regulation of labelling).
If the food is accidentally ingested and a systemic reaction (anaphylaxis) occurs, then epinephrine should be used. A second dose of epinephrine may be required for severe reactions. The person should then be transported to the emergency room, where additional treatment can be given. Other treatments include antihistamines and steroids.
Epinephrine
Epinephrine (adrenaline) is the first-line treatment for severe allergic reactions (anaphylaxis). If administered in a timely manner, epinephrine can reverse the effects of anaphylaxis. Epinephrine relieves airway swelling and obstruction, and improves blood circulation; blood vessels are tightened and heart rate is increased, improving circulation to body organs. Epinephrine is available by prescription in an autoinjector.
Antihistamines
Antihistamines can alleviate some of the milder symptoms of an allergic reaction, but do not treat all symptoms of anaphylaxis. Antihistamines block the action of histamine, which causes blood vessels to dilate and become leaky to plasma proteins. Histamine also causes itchiness by acting on sensory nerve terminals. The most common antihistamine given for food allergies is diphenhydramine.
Steroids
Glucocorticoid steroids are used to calm down the immune system cells that are attacked by the chemicals released during an allergic reaction. This treatment in the form of a nasal spray should not be used to treat anaphylaxis, for it only relieves symptoms in the area in which the steroid is in contact. Another reason steroids should not be used is the delay in reducing inflammation. Steroids can also be taken orally or through injection, by which every part of the body can be reached and treated, but a long time is usually needed for these to take effect.
Epidemiology
The most common food allergens account for about 90% of all allergic reactions; in adults they include crustacean shellfish, peanuts, tree nuts, fish, and egg. In children, they include milk, eggs, peanuts, and tree nuts. Six to 8% of children under the age of three have food allergies and nearly 4% of adults have food allergies.

For reasons not entirely understood, the diagnosis of food allergies has apparently become more common in Western nations recently. One possible explanation is the "old friends" hypothesis, which suggests that non-disease-causing organisms, such as helminths, could protect against allergy. Therefore, reduced exposure to these organisms, particularly in developed countries, could have contributed towards the increase.

In the United States, food allergy affects as many as 5% of infants less than three years of age and 3% to 4% of adults. A similar prevalence is found in Canada.

About 75% of children who have allergies to milk protein are able to tolerate baked-in milk products, e.g., muffins, cookies, and cake, as well as hydrolyzed formulas.

About 50% of children with allergies to milk, egg, soy, peanuts, tree nuts, and wheat will outgrow their allergy by the age of 6. Those who are still allergic by the age of 12 or so have less than an 8% chance of outgrowing the allergy. Peanut and tree nut allergies are less likely to be outgrown, although evidence shows that about 20% of those with peanut allergies and 9% of those with tree nut allergies will outgrow them.

In Japan, allergy to buckwheat flour, used for soba noodles, is more common than allergy to peanuts, tree nuts, or foods made from soy beans.
United States
In the United States, an estimated 12 million people have food allergies. Food allergy affects as many as 5% of infants less than three years of age and 3% to 4% of adults. The prevalence of food allergies is rising. Food allergies cause roughly 30,000 emergency room visits and 150 deaths per year.
Regulation
Whether rates of food allergy are increasing or not, food allergy awareness has definitely increased, with impacts on the quality of life for children, their parents, and their caregivers. In the United States, the Food Allergen Labeling and Consumer Protection Act of 2004 causes people to be reminded of allergy problems every time they handle a food package, and restaurants have added allergen warnings to menus. The Culinary Institute of America, a premier school for chef training, has courses in allergen-free cooking and a separate teaching kitchen. School systems have protocols about what foods can be brought into the school. Despite all these precautions, people with serious allergies are aware that accidental exposure can easily occur at other people's houses, at school, or in restaurants.
Regulation of labelling
In response to the risk that certain foods pose to those with food allergies, some countries have instituted labeling laws that require food products to clearly inform consumers if their products contain priority allergens or byproducts of major allergens among the ingredients intentionally added to foods.
The priority allergens vary by country.
There are no labeling laws mandating declaration of the presence of trace amounts in the final product as a consequence of cross-contamination, except in Brazil.
Ingredients intentionally added
In the United States, the Food Allergen Labeling and Consumer Protection Act of 2004 (FALCPA) requires companies to disclose on the label whether a packaged food product contains any of these eight major food allergens, added intentionally: cow's milk, peanuts, eggs, shellfish, fish, tree nuts, soy, and wheat. This eight-ingredient list originated in 1999 from the World Health Organization's Codex Alimentarius Commission. To meet FALCPA labeling requirements, if an ingredient is derived from one of the required-label allergens, then it must either have its "food sourced name" in parentheses, for example, "Casein (milk)," or, as an alternative, there must be a statement separate but adjacent to the ingredients list: "Contains milk" (and any other of the allergens with mandatory labeling). The European Union requires listing for those eight major allergens plus molluscs, celery, mustard, lupin, sesame, and sulfites.

In 2018, the US FDA issued a request for information on the labeling of sesame to help protect people who have sesame allergies. A decision was reached in November 2020 that food manufacturers voluntarily declare that, when powdered sesame seeds are used as a previously unspecified spice or flavor, the label be changed to "spice (sesame)" or "flavor (sesame)." Congress and the President passed a law in April 2021, the "FASTER Act", stipulating that sesame labeling be mandatory, effective January 1, 2023, making it the ninth food ingredient with required labeling.

FALCPA applies to packaged foods regulated by the FDA, which does not include poultry, most meats, certain egg products, and most alcoholic beverages. However, some meat, poultry, and egg processed products may contain allergenic ingredients. These products are regulated by the Food Safety and Inspection Service (FSIS), which requires that any ingredient be declared in the labeling only by its common or usual name. Neither the identification of the source of a specific ingredient in a parenthetical statement nor the use of statements to alert for the presence of specific ingredients, like "Contains: milk", is mandatory according to FSIS. FALCPA also does not apply to food prepared in restaurants. The EU Food Information for Consumers Regulation 1169/2011 requires food businesses to provide allergy information on food sold unpackaged, for example, in catering outlets, deli counters, bakeries, and sandwich bars.

In the United States, there is no federal mandate to address the presence of allergens in drug products. FALCPA applies neither to medicines nor to cosmetics.
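FALCPA's two permitted disclosure formats amount to a simple either/or check over the label text. A rough Python sketch, assuming plain-string labels (the function name and example strings are invented for illustration; this is not an official compliance tool):

import re

def discloses_allergen(ingredients: list[str], contains_statement: str,
                       allergen: str) -> bool:
    """True if the allergen is disclosed in either FALCPA format:
    a parenthetical food-source name after an ingredient, e.g.
    "Casein (milk)", or a separate "Contains ..." statement."""
    parenthetical = any(
        re.search(rf"\({re.escape(allergen)}\)", item, re.IGNORECASE)
        for item in ingredients
    )
    in_contains = allergen.lower() in contains_statement.lower()
    return parenthetical or in_contains

print(discloses_allergen(["enriched flour", "casein (milk)"], "", "milk"))    # True
print(discloses_allergen(["whey", "sugar"], "Contains milk", "milk"))         # True

A real compliance check would be far more involved (synonyms, derived ingredients, placement rules); this only illustrates the either/or structure of the rule.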
Trace amounts as a result of cross-contamination
The value of allergen labeling other than for intentional ingredients is controversial. This concerns labeling for ingredients present unintentionally as a consequence of cross-contact or cross-contamination at any point along the food chain (during raw material transportation, storage or handling, due to shared equipment for processing and packaging, etc.). Experts in this field propose that if allergen labeling is to be useful to consumers, and to healthcare professionals who advise and treat those consumers, ideally there should be agreement on which foods require labeling, threshold quantities below which labeling may be of no purpose, and validation of allergen detection methods to test and potentially recall foods that were deliberately or inadvertently contaminated.

Labeling regulations have been modified to provide for mandatory labeling of ingredients plus voluntary labeling, termed precautionary allergen labeling (PAL), also known as "may contain" statements, for possible, inadvertent, trace-amount cross-contamination during production. PAL labeling can be confusing to consumers, especially as there can be many variations on the wording of the warning. PAL is optional in the United States. As of 2014, PAL is regulated only in Switzerland, Japan, Argentina, and South Africa. Argentina decided to prohibit precautionary allergen labeling in 2010 and instead puts the onus on the manufacturer to control the manufacturing process and label only those allergenic ingredients known to be in the products. South Africa does not permit the use of PAL, except when manufacturers demonstrate the potential presence of allergen due to cross-contamination through a documented risk assessment and despite adherence to Good Manufacturing Practice. In Australia and New Zealand there is a recommendation that PAL be replaced by guidance from VITAL 2.0 (Voluntary Incidental Trace Allergen Labelling). A review identified "the eliciting dose for an allergic reaction in 1% of the population" as ED01. This threshold reference dose for foods (such as cow's milk, egg, peanut, and other proteins) will provide food manufacturers with guidance for developing precautionary labeling and give consumers a better idea of what might be accidentally in a food product beyond "may contain." VITAL 2.0 was developed by the Allergen Bureau, a food industry-sponsored, non-governmental organization. The European Union has initiated a process to create labeling regulations for unintentional contamination, but is not expected to publish such before 2024.

In Brazil, since April 2016, the declaration of the possibility of cross-contamination is mandatory when the product does not intentionally add any allergenic food or its derivatives, but the Good Manufacturing Practices and allergen control measures adopted are not sufficient to prevent the presence of accidental trace amounts. These allergens include wheat, rye, barley, oats and their hybrids, crustaceans, eggs, fish, peanuts, soybean, milk of all species of mammalians, almonds, hazelnuts, cashew nuts, Brazil nuts, macadamia nuts, walnuts, pecan nuts, pistachios, pine nuts, and chestnuts.
Genetically modified food
Although there is a scientific consensus that available food derived from GM crops poses no greater risk to human health than conventional food, and a 2016 U.S. National Academy of Sciences report concluded that there is no relationship between consumption of GM foods and the increase in prevalence of food allergies, there are concerns that genetically modified foods, also described as foods sourced from genetically modified organisms (GMOs), could be responsible for allergic reactions, and that the widespread acceptance of GMO foods may be responsible for what is a real or perceived increase in the percentage of people with allergies.

One concern is that genetic engineering could make an allergy-provoking food more allergenic, meaning that smaller portions would suffice to set off a reaction. Of the food currently in widespread GMO use, only soybeans are identified as a common allergen. However, for the soybean proteins known to trigger allergic reactions, there is more variation from strain to strain than between those and the GMO varieties. Another concern is that genes transferred from one species to another could introduce an allergen into a food not thought of as particularly allergenic. Research on an attempt to enhance the quality of soybean protein by adding genes from Brazil nuts was terminated when human volunteers known to have tree nut allergy reacted to the modified soybeans.

Prior to a new GMO food receiving government approval, certain criteria need to be met. These include: Is the donor species known to be allergenic? Does the amino acid sequence of the transferred proteins resemble the sequence of known allergenic proteins? Are the transferred proteins resistant to digestion, a trait shared by many allergenic proteins? Genes approved for animal use can be restricted from human consumption due to the potential for allergic reactions. StarLink brand corn, approved in 1998 for animal feed only, was detected in the human food supply in 2000, leading first to a voluntary and then to an FDA-mandated recall. There are requirements in some countries and recommendations in others that all foods containing GMO ingredients be so labeled, and that there be a post-launch monitoring system to report adverse effects (much as exists in some countries for drug and dietary supplement reporting).
Restaurants
In the US, the FDA Food Code states that the person in charge in restaurants should have knowledge about major food allergens, cross-contact, and symptoms of food allergy reactions. Restaurant staff, including wait staff and kitchen staff, may not be adequately informed about allergenic ingredients or the risk of cross-contact when kitchen utensils used to prepare food may have been in previous contact with an allergenic food. The problem may be compounded when customers have a hard time describing their food allergies or when wait staff have a hard time understanding those with food allergies when taking an order.
Diagnosing issues
There exists both over-reporting and under-reporting of the prevalence of food allergies. Self-diagnosed perceptions of food allergy are greater than the rates of true food allergy because people confuse non-allergic intolerance with allergy, and also attribute non-allergy symptoms to an allergic response. Conversely, healthcare professionals treating allergic reactions on an out-patient or even hospitalized basis may not report all cases. Recent increases in reported cases may reflect a real change in incidence or an increased awareness on the part of healthcare professionals.
Social impact
Food fear has a significant impact on quality of life. For children with allergies, their quality of life is also affected by the actions of their peers. An increased occurrence of bullying has been observed, which can include threats or deliberate acts of forcing allergic children to contact foods that they must avoid or intentional contamination of allergen-free food.
Portrayal in media
Media portrayals of food allergy in television and film are often inaccurate, used for comedic effect, or underplay the potential severity of an allergic reaction. These tropes misinform the public and also contribute to how entertainment media will continue to wrongly portray food allergies in the future. Types of tropes: 1) Characters have food allergies, providing a weakness that can be used to sabotage them. In the movie Parasite, a housekeeper is displaced by taking advantage of her peach allergy. In the animated film Peter Rabbit, the farm owner is attacked by being pelted with blackberries, causing an anaphylactic reaction requiring emergency treatment with epinephrine. After many public protests, Sony Pictures and the Peter Rabbit director apologized for making light of food allergies. 2) Food allergy is used for comedic effect, such as in the movie Hitch and, on television, Kelso's egg allergy in That '70s Show. 3) Food allergies may be incorporated into characters who are also portrayed as annoying, weak, and oversensitive, which can be taken as implying that their allergies are either not real or not potentially severe. In season 1, episode 16 of The Big Bang Theory, Howard Wolowitz deliberately consumes a peanut-containing food bar (and has a serious reaction) just to delay Leonard from returning to his apartment, where a surprise birthday party is being arranged. 4) Any of these portrayals may underplay the potential severity of food allergy, with some showing that Benadryl treatment is sufficient. Viewing of humorous portrayals of food allergies has been shown to have a negative effect on related health-policy support due to low perceived seriousness.
Research
A number of desensitization techniques are being studied. Areas of research include anti-IgE antibody (omalizumab), specific oral tolerance induction (SOTI, also known as OIT for oral immunotherapy), and sublingual immunotherapy (SLIT). The benefits of allergen immunotherapy for food allergies are unclear, and thus it is not recommended as of 2015.

There is research on the effects of increasing intake of polyunsaturated fatty acids (PUFAs) during pregnancy, lactation, via infant formula, and in early childhood on the subsequent risk of developing food allergies during infancy and childhood. From two reviews, maternal intake of omega-3, long-chain fatty acids during pregnancy appeared to reduce the risks of medically diagnosed IgE-mediated allergy, eczema, and food allergy per parental reporting in the first 12 months of life, but the effects were not all sustained past 12 months. The reviews characterized the literature's evidence as inconsistent and limited. Results when breastfeeding mothers were consuming a diet high in PUFAs were inconclusive. For infants, supplementing their diet with oils high in PUFAs did not affect the risks of food allergies, eczema, or asthma, either as infants or into childhood.

There is research on probiotics, prebiotics, and the combination of the two (synbiotics) as a means of treating or preventing infant and child allergies. From reviews, there appears to be a treatment benefit for eczema, but not for asthma, wheezing, or rhinoconjunctivitis. The evidence was not consistent for preventing food allergies, and this approach cannot yet be recommended.

The Food Standards Agency, in the United Kingdom, is in charge of funding research into food allergies and intolerance. Since its founding in 1994 it has funded over 45 studies. In 2005 Europe created EuroPrevall, a multi-country project dedicated to research involving allergies.
See also
List of allergens (food and non-food)
References
External links
Food Allergy, Merck Manual
"Food Allergies and Intolerances Resource List for Consumers" (PDF). Food and Nutrition Information Center, National Agricultural Library. December 2010. – a collection of resources on the topic of food allergies and intolerances
"Food Allergy". MedlinePlus. U.S. National Library of Medicine. | 168 |
Endoscopic foreign body retrieval
Endoscopic foreign body retrieval refers to the removal of ingested objects from the esophagus, stomach and duodenum by endoscopic techniques. It does not involve surgery, but rather encompasses a variety of techniques employed through the gastroscope for grasping foreign bodies, manipulating them, and removing them while protecting the esophagus and trachea. It is of particular importance with children, people with mental illness, and prison inmates, as these groups have a high rate of foreign body ingestion.
Commonly swallowed objects include coins, buttons, batteries, and small bones (such as fish bones), but can include more complex objects, such as eyeglasses, spoons, and toothbrushes.
Indications and contraindications
Some patients at risk for foreign body ingestion may not be able to give an accurate medical history of ingestion, either due to age or mental illness. It is important that physicians treating these patients recognize the symptoms of esophageal foreign body impaction requiring urgent intervention. Most frequently, these include drooling and the inability to swallow saliva, neck tenderness, regurgitation of food, stridor, and shortness of breath if there is compression of the trachea.

There are several situations in which endoscopic techniques are not indicated, such as for small blunt objects less than 2.5 cm which have already passed into the stomach (as these usually do not obstruct anywhere else), when there is perforation of the esophagus or mediastinitis (inflammation of structures around the esophagus), and for narcotic-containing bags or condoms that have been ingested, because of the risk of overdose if they are ruptured.

Foreign bodies should be removed from the esophagus within 24 hours of ingestion because of a high risk of complications.
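Taken together, these indications and contraindications form a simple triage rule. The following Python sketch illustrates that logic only, with simplified, assumed inputs; it is not clinical guidance:

def endoscopic_retrieval_indicated(in_esophagus: bool, is_blunt: bool,
                                   size_cm: float, perforation: bool,
                                   narcotic_package: bool) -> bool:
    """Rough sketch of the triage described above; inputs and
    thresholds are simplified for illustration."""
    if perforation or narcotic_package:
        # Contraindicated: perforation/mediastinitis needs other management;
        # narcotic packages risk rupture and overdose.
        return False
    if in_esophagus:
        # Esophageal foreign bodies: remove within 24 hours.
        return True
    # Small blunt objects already in the stomach usually pass unaided.
    return not (is_blunt and size_cm < 2.5)

print(endoscopic_retrieval_indicated(False, True, 1.0, False, False))  # False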
Non-invasive testing
Prior to undertaking endoscopy, attempts should be made to locate the foreign body with x-rays or other non-invasive techniques. For radio-opaque objects, x-rays of the neck, chest, and abdomen can be used to locate the foreign body and assist endoscopy. Alternative approaches, including the use of metal detectors, have also been described.

X-rays are also useful for identifying the type of foreign body ingested and complications of foreign body ingestion, including mediastinitis and perforation of the esophagus.
Endoscopy
Endoscopic retrieval involves the use of a gastroscope or an optic fiber charge-coupled device camera. This instrument is shaped as a long tube, which is inserted through the mouth into the esophagus and stomach to identify the foreign body or bodies. This procedure is typically performed under conscious sedation. Many techniques have been described to remove foreign bodies from the stomach and esophagus. Usually the esophagus is protected with an overtube (a plastic tube of varying length), through which the gastroscope and retrieved objects are passed.

Once the foreign body has been identified with the gastroscope, various devices can be passed through the gastroscope to grasp or manipulate it. Devices used include forceps, which come in varying shapes, sizes, and grips; snares (oval loops that can be retracted from outside the gastroscope to lasso objects); Roth baskets (mesh nets that can be closed to trap small objects); and magnets placed at the end of the scope or at the end of orogastric tubes. Some techniques have been described that use Foley catheters to trap objects, or use two snares to orient foreign bodies.
Alternative methods
In veterinary medicine, or when no endoscope is available, the Hartmann alligator forceps is often used to extract foreign bodies economically without an operation.
See also
Bezoar
Schatzki ring
References
External links
Esophageal Coin MedPix Topic
Labial fusion
Labial fusion is a medical condition of the female genital anatomy where the labia minora become fused together. It is generally a pediatric condition.
Presentation
Labial fusion is rarely present at birth, but rather acquired later in infancy, since it is caused by insufficient estrogen exposure and newborns have been exposed to maternal estrogen in utero. It typically presents in infants at least 3 months old. Most presentations are asymptomatic and are discovered by a parent or during routine medical examination. In other cases, patients may present with associated symptoms of dysuria, urinary frequency, refusal to urinate, or post-void dribbling. Some patients present with vaginal discharge due to pooling of urine in the vulval vestibule or vagina.
Complications
Labial fusion can lead to urinary tract infection, vulvar vestibulitis and inflammation caused by chronic urine exposure. In severe cases, labial adhesions can cause complete obstruction of the urethra, leading to anuria and urinary retention.
Pathophysiology
The primary contributing factor to labial fusion is low estrogen levels. A vulva with low estrogen exposure, such as that of a preadolescent, has a delicate epithelial lining and is therefore vulnerable to irritation. Conditions causing irritation, such as infection, inflammation, and trauma, cause the edges of the labia minora to fuse together. The fusion typically begins at the posterior frenulum of the labia minora and continues anteriorly.

Most labial adhesions resolve spontaneously before puberty as estrogen levels increase and the vaginal epithelium becomes cornified.
Diagnosis
The condition can be diagnosed based on inspection of the vulva. In patients with labial fusion, a flat plane of tissue with a dense central line of tissue is usually seen when the labia majora are retracted, while an anterior opening is usually present below the clitoris.
Treatment
Treatment is not usually necessary in asymptomatic cases, since most fusions will separate naturally over time, but may be required when symptoms are present. The standard method of treatment for labial fusion is the application of topical estrogen cream onto the areas of adhesion, which is effective in 90% of patients. In severe cases where the labia minora are entirely fused, causing urinary outflow obstruction or vaginal obstruction, the labia should be separated surgically. Recurrence after treatment is common but is thought to be prevented by good hygiene practices. One study has shown that betamethasone may be more effective than estrogen cream in preventing recurrence, with fewer side effects.
Epidemiology
Labial fusion is not uncommon in infants and young girls. It is most common in infants between the ages of 13 and 23 months, and has an incidence of 3.3% in this age group. It is estimated that labial fusion occurs in 1.8% of all prepubertal girls. It is rare in adult women, particularly in reproductive age, but is occasionally found in postpartum and postmenopausal women.
References
External links
Labial adhesions at Medscape
Gastric erosion
Gastric erosion occurs when the mucous membrane lining the stomach becomes inflamed. Specifically, the term "erosion" in this context means damage that is limited to the mucosa, which consists of three distinct layers: the epithelium (in a healthy stomach, non-ciliated simple columnar epithelium), the basement membrane, and the lamina propria. An erosion is different from an ulcer. An "ulcer" is an area of damage to the gastrointestinal wall (in this case the gastric wall) that extends deeper through the wall than an erosion (an ulcer can extend anywhere from beyond the lamina propria to right through the wall, potentially causing a perforation). See gastrointestinal wall.
Some drugs, taken as tablets, can irritate this mucous membrane, especially drugs taken for arthritis and muscular disorders, steroids, and aspirin. A gastric erosion may also occur because of emotional stress, or as a side effect of burns or stomach injuries. See acute gastritis.
Symptoms
There is essentially one symptom of gastric erosion: bleeding from the area of the stomach lesion. Bowel movements may contain blood. Vomit may be bloody as well, but a gastric erosion may not cause vomiting. Blood may appear black because it is partially digested. Loss of blood may cause one to develop anemia.
Risks
Anemia and other problems related to blood loss may occur. Sometimes a person with a gastric erosion will experience severe bleeding all at once; red (bloody) vomiting and/or black bowel movements may occur.
Gestational trophoblastic disease
Gestational trophoblastic disease (GTD) is a term used for a group of pregnancy-related tumours. These tumours are rare, and they appear when cells in the womb start to proliferate uncontrollably. The cells that form gestational trophoblastic tumours are called trophoblasts and come from tissue that grows to form the placenta during pregnancy.
There are several different types of GTD. A hydatidiform mole, also known as a molar pregnancy, is the most common and is usually benign. Sometimes it may develop into an invasive mole, or, more rarely, into a choriocarcinoma. A choriocarcinoma is likely to spread quickly, but is very sensitive to chemotherapy, and has a very good prognosis. Trophoblasts are of particular interest to cell biologists because, like cancer, they can invade tissue (the uterus), but unlike cancer, they usually "know" when to stop.

GTD can simulate pregnancy, because the uterus may contain fetal tissue, albeit abnormal. This tissue may grow at the same rate as a normal pregnancy, and produces chorionic gonadotropin, a hormone which is measured to monitor fetal well-being.

While GTD overwhelmingly affects women of child-bearing age, it may rarely occur in postmenopausal women.
Types
GTD is the common name for five closely related tumours (one benign tumour, and four malignant tumours):
The benign tumour
Hydatidiform mole
Here, first a fertilised egg implants into the uterus, but some cells around the fetus (the chorionic villi) do not develop properly. The pregnancy is not viable, and the normal pregnancy process turns into a benign tumour. There are two subtypes of hydatidiform mole: complete hydatidiform mole and partial hydatidiform mole.
The four malignant tumours
Invasive mole
Choriocarcinoma
Placental site trophoblastic tumour
Epithelioid trophoblastic tumour

All five closely related tumours develop in the placenta. All five tumours arise from trophoblast cells that form the outer layer of the blastocyst in the early development of the fetus. In a normal pregnancy, trophoblasts aid the implantation of the fertilised egg into the uterine wall. But in GTD, they develop into tumour cells.
Cause
Two main risk factors increase the likelihood for the development of GTD: 1) The woman being under 20 years of age, or over 35 years of age, and 2) previous GTD.
Although molar pregnancies affect women of all ages, women under 16 and over 45 years of age have an increased risk of developing a molar pregnancy. Being from Asia/of Asian ethnicity is an important risk factor.

Hydatidiform moles are abnormal conceptions with excessive placental development. Conception takes place, but placental tissue grows very fast, rather than supporting the growth of a fetus.

Complete hydatidiform moles have no fetal tissue and no maternal DNA, as a result of a maternal ovum with no functional DNA. Most commonly, a single spermatozoon duplicates and fertilises an empty ovum. Less commonly, two separate spermatozoa fertilise an empty ovum (dispermic fertilisation).
Partial hydatidiform moles have a fetus or fetal cells. They are triploid in origin, containing one set of maternal haploid genes and two sets of paternal haploid genes. They almost always occur following dispermic fertilisation of a normal ovum. Malignant forms of GTD are very rare. About 50% of malignant forms of GTD develop from a hydatidiform mole.
Diagnosis
Cases of GTD can be diagnosed through routine tests given during pregnancy, such as blood tests and ultrasound, or through tests done after miscarriage or abortion. Vaginal bleeding, enlarged uterus, pelvic pain or discomfort, and excessive vomiting (hyperemesis) are the most common symptoms of GTD. But GTD also leads to elevated serum hCG (human chorionic gonadotropin). Since pregnancy is by far the most common cause of elevated serum hCG, clinicians generally first suspect a pregnancy with a complication. However, in GTD, the beta subunit of hCG (beta hCG) is also always elevated. Therefore, if GTD is clinically suspected, serum beta hCG is also measured.

The initial clinical diagnosis of GTD should be confirmed histologically, which can be done after the evacuation of pregnancy (see Treatment below) in women with hydatidiform mole. However, malignant GTD is highly vascular. If malignant GTD is suspected clinically, biopsy is contraindicated, because biopsy may cause life-threatening haemorrhage.
Women with persistent abnormal vaginal bleeding after any pregnancy, and women developing acute respiratory or neurological symptoms after any pregnancy, should also undergo hCG testing, because these may be signs of a hitherto undiagnosed GTD.
There might be some signs and symptoms of hyperthyroidism, as well as an increase in the levels of thyroid hormones, in some patients. The proposed mechanism is that hCG binds to TSH receptors and acts as a weak TSH agonist.
Differential diagnosis
These are not GTD, and they are not tumours:

Exaggerated placental site
Placental site nodule

Both are composed of intermediate trophoblast, but their morphological features and clinical presentation can differ significantly.
Exaggerated placental site is a benign, non-cancerous lesion with an increased number of implantation-site intermediate trophoblastic cells that infiltrate the endometrium and the underlying myometrium. An exaggerated placental site may occur with normal pregnancy, or after an abortion. No specific treatment or follow up is necessary.
Placental site nodules are lesions of chorionic type intermediate trophoblast, usually small. 40 to 50% of placental site nodules are found in the cervix. They almost always are incidental findings after a surgical procedure. No specific treatment or follow up is necessary.
Treatment
Treatment is always necessary.

The treatment for hydatidiform mole consists of the evacuation of pregnancy. Evacuation will lead to the relief of symptoms, and also prevent later complications. Suction curettage is the preferred method of evacuation. Hysterectomy is an alternative if no further pregnancies are wished for by the female patient. Hydatidiform mole has also successfully been treated with systemic (intravenous) methotrexate.

The treatment for invasive mole or choriocarcinoma generally is the same. Both are usually treated with chemotherapy. Methotrexate and dactinomycin are among the chemotherapy drugs used in GTD. In women with low-risk gestational trophoblastic neoplasia, a review has found that actinomycin D is probably more effective as a treatment and more likely to achieve a cure in the first instance than methotrexate. Only a few women with GTD have poor-prognosis metastatic gestational trophoblastic disease. Their treatment usually includes chemotherapy. Radiotherapy can also be given to places where the cancer has spread, e.g. the brain.

Women who undergo chemotherapy are advised not to conceive for one year after completion of treatment. These women also are likely to have an earlier menopause. It has been estimated by the Royal College of Obstetricians and Gynaecologists that the age at menopause for women who receive single-agent chemotherapy is advanced by one year, and by three years for women who receive multi-agent chemotherapy.
Follow up
Follow up is necessary in all women with gestational trophoblastic disease, because of the possibility of persistent disease, or because of the risk of developing malignant uterine invasion or malignant metastatic disease even after treatment in some women with certain risk factors.

The use of a reliable contraception method is very important during the entire follow up period, as patients are strongly advised against pregnancy at that time. If a reliable contraception method is not used during the follow-up, it could be initially unclear to clinicians whether a rising hCG level is caused by the patient becoming pregnant again, or by the continued presence of GTD.
In women who have a malignant form of GTD, hCG concentrations stay the same (plateau) or they rise. Persistent elevation of serum hCG levels after a non molar pregnancy (i.e., normal pregnancy [term pregnancy], or preterm pregnancy, or ectopic pregnancy [pregnancy taking place in the wrong place, usually in the fallopian tube], or abortion) always indicate persistent GTD (very frequently due to choriocarcinoma or placental site trophoblastic tumour), but this is not common, because treatment mostly is successful.
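Monitoring therefore reduces to classifying the trend of serial hCG measurements. A minimal Python sketch of that idea, where the 10% band used to call a plateau is an assumption chosen for illustration (clinical criteria define plateau and rise over specific measurement intervals):

def hcg_trend(levels: list[float], tolerance: float = 0.10) -> str:
    """Classify serial hCG values as rising, falling, or plateaued.
    The tolerance band is illustrative, not a clinical criterion."""
    if len(levels) < 2:
        raise ValueError("need at least two measurements")
    first, last = levels[0], levels[-1]
    if last > first * (1 + tolerance):
        return "rising"    # suggests persistent GTD
    if last < first * (1 - tolerance):
        return "falling"   # expected response to treatment
    return "plateau"       # also suggests persistent GTD

print(hcg_trend([1200.0, 1180.0, 1210.0]))  # "plateau"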
In rare cases, a previous GTD may be reactivated after a subsequent pregnancy, even after several years. Therefore, the hCG tests should be performed also after any subsequent pregnancy in all women who had had a previous GTD (6 and 10 weeks after the end of any subsequent pregnancy).
Prognosis
Women with a hydatidiform mole have an excellent prognosis. Women with a malignant form of GTD usually have a very good prognosis.

Choriocarcinoma, for example, is an uncommon, yet almost always curable cancer. Although choriocarcinoma is a highly malignant tumour and a life-threatening disease, it is very sensitive to chemotherapy. Virtually all women with non-metastatic disease are cured and retain their fertility; the prognosis is also very good for those with metastatic (spreading) cancer, in the early stages, but fertility may be lost. Hysterectomy (surgical removal of the uterus) can also be offered to patients over 40 years of age or those for whom sterilisation is not an obstacle. Only a few women with GTD have a poor prognosis, e.g. some forms of stage IV GTN. The FIGO staging system is used. The risk can be estimated by scoring systems such as the Modified WHO Prognostic Scoring System, wherein scores between 1 and 4 from various parameters are summed together.
In this scoring system, women with a score of 7 or greater are considered at high risk.
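As a worked illustration of the arithmetic, here is a small Python sketch that sums per-parameter scores and applies the cut-off of 7 given above; the parameter names and point values in the example are placeholders, since the full published table is not reproduced here:

def who_risk_group(parameter_scores: dict[str, int]) -> str:
    """Sum Modified WHO Prognostic Scoring System points; a total of
    7 or greater is classed as high risk (cut-off from the text)."""
    total = sum(parameter_scores.values())
    return "high risk" if total >= 7 else "low risk"

# Placeholder parameter names and points, for illustration only.
example = {"age": 0, "antecedent_pregnancy": 1,
           "interval_months": 2, "pretreatment_hCG": 2}
print(who_risk_group(example))  # "low risk" (total = 5)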
It is very important for malignant forms of GTD to be discovered in time. In Western countries, women with molar pregnancies are followed carefully; for instance, in the UK, all women who have had a molar pregnancy are registered at the National Trophoblastic Screening Centre. There are efforts in this direction in developing countries too, and improvements in the early detection of choriocarcinoma have significantly reduced the mortality rate there as well.
Becoming pregnant again
Most women with GTD can become pregnant again and can have children again. The risk of a further molar pregnancy is low. More than 98% of women who become pregnant following a molar pregnancy will not have a further hydatidiform mole or be at increased risk of complications.
In the past, it was seen as important not to get pregnant straight away after a GTD. Specialists recommended a waiting period of six months after the hCG levels become normal. Recently, this standpoint has been questioned. New medical data suggest that a significantly shorter waiting period after the hCG levels become normal is reasonable for approximately 97% of the patients with hydatidiform mole.
Risk of a repeat GTD
The risk of a repeat GTD is approximately 1 in 100, compared with approximately 1 in 1000 risk in the general population. Especially women whose hCG levels remain significantly elevated are at risk of developing a repeat GTD.
Persistent trophoblastic disease
The term «persistent trophoblastic disease» (PTD) is used when, after treatment of a molar pregnancy, some molar tissue is left behind and again starts growing into a tumour. Although PTD can spread within the body like a malignant cancer, the overall cure rate is nearly 100%.

In the vast majority of patients, treatment of PTD consists of chemotherapy. Only about 10% of patients with PTD can be treated successfully with a second curettage.
GTD coexisting with a normal fetus, also called "twin pregnancy"
In some very rare cases, a GTD can coexist with a normal fetus. This is called a "twin pregnancy". These cases should be managed only by experienced clinics, after extensive consultation with the patient. Because successful term delivery might be possible, the pregnancy should be allowed to proceed if the mother wishes, following appropriate counselling. The probability of achieving a healthy baby is approximately 40%, but there is a risk of complications, e.g. pulmonary embolism and pre-eclampsia. Compared with women who simply had a GTD in the past, there is no increased risk of developing persistent GTD after such a twin pregnancy.

In a few cases, a GTD has coexisted with a normal pregnancy, but this was discovered only incidentally after a normal birth.
Epidemiology
Overall, GTD is a rare disease. Nevertheless, the incidence of GTD varies greatly between different parts of the world. The reported incidence of hydatidiform mole ranges from 23 to 1299 cases per 100,000 pregnancies. The incidence of the malignant forms of GTD is much lower, only about 10% of the incidence of hydatidiform mole. The reported incidence of GTD from Europe and North America is significantly lower than the reported incidence from Asia and South America. One proposed reason for this great geographical variation is dietary differences between the different parts of the world (e.g., carotene deficiency).

However, the incidence of rare diseases (such as GTD) is difficult to measure, because epidemiologic data on rare diseases is limited. Not all cases will be reported, and some cases will not be recognised. In addition, in GTD, this is especially difficult, because one would need to know all gestational events in the total population. Yet, it seems very likely that the estimated number of births that occur at home or outside of a hospital has been inflated in some reports.
Terminology
Gestational trophoblastic disease (GTD) may also be called gestational trophoblastic tumour (GTT). Hydatidiform mole (one type of GTD) may also be called molar pregnancy.

Persistent disease; persistent GTD: If there is any evidence of persistence of GTD, usually defined as persistent elevation of beta hCG (see «Diagnosis» above), the condition may also be referred to as gestational trophoblastic neoplasia (GTN).
See also
Trophoblastic neoplasms
References
External links
Giardiasis
Giardiasis is a parasitic disease caused by Giardia duodenalis (also known as G. lamblia and G. intestinalis). Infected individuals who experience symptoms (about 10% have no symptoms) may have diarrhea, abdominal pain, and weight loss. Less common symptoms include vomiting and blood in the stool. Symptoms usually begin 1 to 3 weeks after exposure and, without treatment, may last two to six weeks or longer.

Giardiasis usually spreads when Giardia duodenalis cysts within feces contaminate food or water that is later consumed orally. The disease can also spread between people and through other animals. Cysts may survive for nearly three months in cold water. Giardiasis is diagnosed via stool tests.

Prevention may be improved through proper hygiene practices. Asymptomatic cases often do not need treatment. When symptoms are present, treatment is typically provided with either tinidazole or metronidazole. Infection may cause a person to become lactose intolerant, so it is recommended to temporarily avoid lactose following an infection. Resistance to treatment may occur in some patients.

Giardiasis occurs worldwide. It is one of the most common parasitic human diseases. Infection rates are as high as 7% in the developed world and 30% in the developing world. In 2013, there were approximately 280 million people worldwide with symptomatic cases of giardiasis. The World Health Organization classifies giardiasis as a neglected disease. It is popularly known as beaver fever in North America.
Signs and symptoms
Symptoms vary from none to severe diarrhea with poor absorption of nutrients. The cause of this wide range in severity of symptoms is not fully known, but the intestinal flora of the infected host may play a role. Diarrhea is less likely to occur in people from developing countries.

Symptoms typically develop 9–15 days after exposure, but may occur as early as one day. The most common and prominent symptom is chronic diarrhea, which can occur for weeks or months if untreated. Diarrhea is often greasy and foul-smelling, with a tendency to float. This characteristic diarrhea is often accompanied by a number of other symptoms, including gas, abdominal cramps, and nausea or vomiting. Some people also experience symptoms outside of the gastrointestinal tract, such as itchy skin, hives, and swelling of the eyes and joints, although these are less common. Fever occurs in only about 15% of people, in spite of the nickname "beaver fever".

Prolonged disease is often characterized by diarrhea, along with malabsorption of nutrients in the intestine. This malabsorption results in fatty stools, substantial weight loss, and fatigue. Additionally, those with giardiasis often have difficulty absorbing lactose, vitamin A, folate, and vitamin B12. In children, prolonged giardiasis can cause failure to thrive and may impair mental development. Symptomatic infections are well recognized as causing lactose intolerance, which, while usually temporary, may become permanent.
Cause
Giardiasis is caused by the protozoan Giardia duodenalis. The infection occurs in many animals, including beavers, other rodents, cows, and sheep. Animals are believed to play a role in keeping infections present in an environment.

G. duodenalis has been sub-classified into eight genetic assemblages (designated A–H). Genotyping of G. duodenalis isolated from various hosts has shown that assemblages A and B infect the largest range of host species, and appear to be the main, and possibly only, G. duodenalis assemblages that infect humans.
Risk factors
According to the United States Centers for Disease Control and Prevention (CDC), people at greatest risk of infection are:
People in childcare settings
People who are in close contact with someone who has the disease
Travelers within areas that have poor sanitation
People who have contact with feces during sexual activity
Backpackers or campers who drink untreated water from springs, lakes, or rivers
Swimmers who swallow water from swimming pools, hot tubs, interactive fountains, or untreated recreational water from springs, lakes, or rivers
People who get their household water from a shallow well
People with weakened immune systems
People who have contact with infected animals or animal environments contaminated with feces

Factors that increase infection risk for people from developed countries include changing diapers, consuming raw food, owning a dog, and travelling in the developing world. However, 75% of infections in the United Kingdom are acquired in the UK, not through travel elsewhere. In the United States, giardiasis occurs more often in summer, which is believed to be due to a greater amount of time spent on outdoor activities and traveling in the wilderness.
Transmission
Giardiasis is transmitted via the fecal-oral route with the ingestion of cysts. Primary routes are personal contact and contaminated water and food. The cysts can stay infectious for up to three months in cold water.

Many people with Giardia infections have no or few symptoms. They may, however, still spread the disease.
Pathophysiology
The life cycle of Giardia consists of a cyst form and a trophozoite form. The cyst form is infectious and, once it has found a host, transforms into the trophozoite form. This trophozoite attaches to the intestinal wall and replicates within the gut. As trophozoites continue along the gastrointestinal tract, they convert back to their cyst form, which is then excreted with feces. Ingestion of only a few of these cysts is needed to generate infection in another host.

Infection with Giardia results in decreased expression of brush border enzymes, morphological changes to the microvillus, increased intestinal permeability, and programmed cell death of small intestinal epithelial cells. Both trophozoites and cysts are contained within the gastrointestinal tract and do not invade beyond it.

The attachment of trophozoites causes villous flattening and inhibition of enzymes that break down disaccharide sugars in the intestines. Ultimately, the community of microorganisms that lives in the intestine may overgrow and may be the cause of further symptoms, though this idea has not been fully investigated. The alteration of the villi leads to an inability to absorb nutrients and water from the intestine, resulting in diarrhea, one of the predominant symptoms. In the case of asymptomatic giardiasis, there can be malabsorption with or without histological changes to the small intestine. The degree to which malabsorption occurs in symptomatic and asymptomatic cases is highly varied.

The species Giardia intestinalis uses enzymes that break down proteins to attack the villi of the brush border and appears to increase crypt cell proliferation and crypt length of crypt cells existing on the sides of the villi. On an immunological level, activated host T lymphocytes attack endothelial cells that have been injured in order to remove the cell. This occurs after the disruption of proteins that connect brush border endothelial cells to one another. The result is increased intestinal permeability.

There appears to be a further increase in programmed enterocyte cell death by Giardia intestinalis, which further damages the intestinal barrier and increases permeability. There is significant upregulation of the programmed cell death cascade by the parasite, and, furthermore, substantial downregulation of the anti-apoptotic protein Bcl-2 and upregulation of the proapoptotic protein Bax. These connections suggest a role of caspase-dependent apoptosis in the pathogenesis of giardiasis.

Giardia protects its own growth by reducing the formation of the gas nitric oxide by consuming all local arginine, which is the amino acid necessary to make nitric oxide. Arginine starvation is known to be a cause of programmed cell death, and local removal is a strong apoptotic agent.
Host defense
Host defense against Giardia consists of natural barriers, production of nitric oxide, and activation of the innate and adaptive immune systems.
Natural barriers
Natural barriers defend against parasites entering the host's body. Natural barriers consist of mucus layers, bile salt, proteases, and lipases. Additionally, peristalsis and the renewal of enterocytes provide further protection against parasites.
Nitric oxide production
Nitric oxide does not kill the parasite, but it inhibits the growth of trophozoites as well as excystation and encystation.
Innate immune system
Lectin pathway of complement
The lectin pathway of complement is activated by mannose-binding lectin (MBL) which binds to N-acetylglucosamine. N-acetylglucosamine is a ligand for MBL and is present on the surface of Giardia.
The classical pathway of complement
The classical pathway of complement is activated by antibodies specific against Giardia.
Adaptive immune system
Antibodies
Antibodies inhibit parasite replication and also induce parasite death via the classical pathway of complement.

Infection with Giardia typically results in a strong antibody response against the parasite. While IgG is made in significant amounts, IgA is believed to be more important in parasite control. IgA is the most abundant isotype in intestinal secretions, and it is also the dominant isotype in mother's milk. Antibodies in mother's milk protect children against giardiasis (passive immunization).
T cells
The major component of the adaptive immune response is the T cell response. Giardia is an extracellular pathogen; therefore, CD4+ helper T cells are primarily responsible for this protective effect. One role of helper T cells is to promote antibody production and isotype switching. Other roles include producing cytokines (IL-4, IL-9) that help recruit other effector cells of the immune response.
Diagnosis
According to the CDC, detection of antigens on the surface of organisms in stool specimens is the current test of choice for diagnosis of giardiasis and provides increased sensitivity over more common microscopy techniques.
A trichrome stain of preserved stool is another method used to detect Giardia.
Microscopic examination of the stool can be performed for diagnosis. This method is not preferred, however, due to inconsistent shedding of trophozoites and cysts in infected hosts. Multiple samples over a period of time, typically one week, must be examined.
The Entero-Test uses a gelatin capsule with an attached thread. One end is attached to the inner aspect of the host's cheek, and the capsule is swallowed. Later, the thread is withdrawn and shaken in saline to release trophozoites, which can be detected with a microscope. The sensitivity of this test is low, however, and it is not routinely used for diagnosis.
Immunologic enzyme-linked immunosorbent assay (ELISA) testing may also be used for diagnosis; these tests are capable of a 90% detection rate or more. Although hydrogen breath tests indicate poorer rates of carbohydrate absorption in those asymptomatically infected, such tests are not diagnostic of infection. Serological tests are not helpful in diagnosis.
Prevention
The CDC recommends hand-washing and avoiding potentially contaminated food and untreated water. Boiling water contaminated with Giardia effectively kills infectious cysts. Chemical disinfectants or filters may also be used; iodine-based disinfectants are preferred over chlorination, as the latter is ineffective at destroying cysts.
Although the evidence linking the drinking of water in the North American wilderness to giardiasis has been questioned, a number of studies raise concern. Most, if not all, CDC-verified backcountry giardiasis outbreaks have been attributed to water. Surveillance data for 2013 and 2014 report six outbreaks (96 cases) of waterborne giardiasis contracted from rivers, streams, or springs, and less than 1% of reported giardiasis cases are associated with outbreaks.
Person-to-person transmission accounts for the majority of Giardia infections and is usually associated with poor hygiene and sanitation. Giardia is often found on the surface of the ground, in the soil, in undercooked foods, in water, and on hands that have not been properly cleaned after handling infected feces. Waterborne transmission is associated with the ingestion of contaminated water; in the U.S., outbreaks typically occur in small water systems using inadequately treated surface water. Venereal transmission happens through fecal-oral contamination. Additionally, diaper changing and inadequate handwashing are risk factors for transmission from infected children. Lastly, food-borne epidemics of Giardia have developed through the contamination of food by infected food handlers.
Vaccine
There are no vaccines for humans yet; however, several vaccine candidates are in development, targeting recombinant proteins, DNA vaccines, variant-specific surface proteins (VSP), cyst wall proteins (CWP), giardins, and enzymes. At present, one commercially available vaccine exists – GiardiaVax, made from G. lamblia whole trophozoite lysate. It is a vaccine for veterinary use only, in dogs and cats, and should promote production of specific antibodies.
Treatment
Treatment is not always necessary, as the infection usually resolves on its own. However, if the illness is acute or symptoms persist and medications are needed, a nitroimidazole medication is used, such as metronidazole, tinidazole, secnidazole, or ornidazole.
The World Health Organization and the Infectious Diseases Society of America recommend metronidazole as first-line therapy. The US CDC lists metronidazole, tinidazole, and nitazoxanide as effective first-line therapies; of these three, only nitazoxanide and tinidazole are approved for the treatment of giardiasis by the US FDA. A meta-analysis by the Cochrane Collaboration found that, compared to the standard of metronidazole, albendazole had equivalent efficacy with fewer side effects, such as gastrointestinal or neurologic issues; other meta-analyses have reached similar conclusions. Both medications need a five- to ten-day course; albendazole is taken once a day, while metronidazole needs to be taken three times a day. The evidence comparing metronidazole to other alternatives such as mebendazole, tinidazole, or nitazoxanide was felt to be of very low quality. While tinidazole has side effects and efficacy similar to those of metronidazole, it is administered as a single dose.
Resistance has been seen clinically to both nitroimidazoles and albendazole, but not to nitazoxanide, though nitazoxanide resistance has been induced in research laboratories. The exact mechanism of resistance to all of these medications is not well understood. In the case of nitroimidazole-resistant strains of Giardia, other drugs are available that have shown efficacy in treatment, including quinacrine, nitazoxanide, bacitracin zinc, furazolidone, and paromomycin. Mepacrine may also be used for refractory cases. Probiotics, when given in combination with the standard treatment, have been shown to assist with clearance of Giardia.
During pregnancy, paromomycin is the preferred treatment drug because of its poor intestinal absorption, resulting in less exposure to the fetus. Alternatively, metronidazole can be used after the first trimester, as there has been wide experience in its use for trichomonas in pregnancy.
Prognosis
In people with a properly functioning immune system, infection may resolve without medication. A small portion, however, develop a chronic infection; people with an impaired immune system are at higher risk of chronic infection. Medication is an effective cure for nearly all people, although there is growing drug resistance.
Children with chronic giardiasis are at risk for failure to thrive, as well as more long-lasting sequelae such as growth stunting. Up to half of infected people develop a temporary lactose intolerance, leading to symptoms that may mimic a chronic infection. Some people experience post-infectious irritable bowel syndrome after the infection has cleared. Giardiasis has also been implicated in the development of food allergies, thought to be due to its effect on intestinal permeability.
Epidemiology
In some developing countries, Giardia is present in 30% of the population. In the United States, it is estimated to be present in 3–7% of the population.
The number of reported cases in the United States in 2018 was 15,584. All states that classify giardiasis as a notifiable disease had cases of giardiasis; the states of Illinois, Kentucky, Mississippi, North Carolina, Oklahoma, Tennessee, Texas, and Vermont did not notify the Centers for Disease Control and Prevention regarding cases in 2018. The states with the highest number of cases in 2018 were California, New York, Florida, and Wisconsin. There are seasonal trends associated with giardiasis: July, August, and September are the months with the highest incidence in the United States.
In the ECDC's (European Centre for Disease Prevention and Control) annual epidemiological report containing 2014 data, 17,278 confirmed giardiasis cases were reported by 23 of the 31 countries that are members of the EU/EEA. Germany reported the highest number, at 4,011 cases; the UK followed with 3,628 confirmed cases. Together, these account for 44% of total reported cases.
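That combined share can be checked directly from the counts above:
\[
\frac{4{,}011 + 3{,}628}{17{,}278} = \frac{7{,}639}{17{,}278} \approx 0.442 \approx 44\%.
\]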
Research
Some intestinal parasitic infections may play a role in irritable bowel syndrome and other long-term sequelae such as chronic fatigue syndrome. The mechanism of transformation from cyst to trophozoite has not been characterized, but understanding it may be helpful in developing drug targets for treatment-resistant Giardia. The interaction between Giardia and host immunity, internal flora, and other pathogens is not well understood. The main congress on giardiasis is the International Giardia and Cryptosporidium Conference (IGCC); a summary of results presented at the most recent edition (2019, in Rouen, France) is available.
Other animals
In both dogs and cats, giardiasis usually responds to metronidazole and fenbendazole. Metronidazole in pregnant cats can cause developmental malformations. Many cats dislike the taste of fenbendazole. Giardiasis has been shown to decrease weight in livestock.
References
External links
Giardiasis Fact Sheet
Gingivitis | Gingivitis is a non-destructive disease that causes inflammation of the gums. The most common form of gingivitis, and the most common form of periodontal disease overall, is a response to bacterial biofilms (also called plaque) attached to tooth surfaces, termed plaque-induced gingivitis; most forms of gingivitis are plaque-induced. While some cases of gingivitis never progress to periodontitis, periodontitis is always preceded by gingivitis. Gingivitis is reversible with good oral hygiene; however, without treatment, it can progress to periodontitis, in which the inflammation of the gums results in tissue destruction and bone resorption around the teeth. Periodontitis can ultimately lead to tooth loss.
Signs and symptoms
The symptoms of gingivitis are somewhat non-specific and manifest in the gum tissue as the classic signs of inflammation:
Swollen gums
Bright red or purple gums
Gums that are tender or painful to the touch
Bleeding gums or bleeding after brushing and/or flossing
Bad breath (halitosis)
Additionally, the stippling that normally exists in the gum tissue of some individuals will often disappear, and the gums may appear shiny when the gum tissue becomes swollen and stretched over the inflamed underlying connective tissue. The accumulation may also emit an unpleasant odor. When the gingiva are swollen, the epithelial lining of the gingival crevice becomes ulcerated and the gums bleed more easily, with even gentle brushing and especially when flossing.
Complications
Recurrence of gingivitis
Periodontitis
Infection or abscess of the gingiva or the jaw bones
Trench mouth (bacterial infection and ulceration of the gums)
Swollen lymph nodes
Associated with premature birth and low birth weight
Alzheimer's disease and dementia
A 2018 study found evidence that gingivitis bacteria may be linked to Alzheimer's disease. Scientists agree that more research is needed to prove a cause-and-effect link. "Studies have also found that the bacteria P. gingivalis – which are responsible for many forms of gum disease – can migrate from the mouth to the brain in mice. And on entry to the brain, P. gingivalis can reproduce all of the characteristic features of Alzheimer's disease."
Cause
The cause of plaque-induced gingivitis is bacterial plaque, which acts to initiate the body's host response. This, in turn, can lead to destruction of the gingival tissues, which may progress to destruction of the periodontal attachment apparatus. The plaque accumulates in the small gaps between teeth, in the gingival grooves, and in areas known as plaque traps: locations that serve to accumulate and maintain plaque. Examples of plaque traps include bulky and overhanging restorative margins, clasps of removable partial dentures, and calculus (tartar) that forms on teeth. Although these accumulations may be tiny, the bacteria in them produce chemicals, such as degradative enzymes, and toxins, such as lipopolysaccharide (LPS, otherwise known as endotoxin) or lipoteichoic acid (LTA), that promote an inflammatory response in the gum tissue. This inflammation can cause an enlargement of the gingiva and subsequent pocket formation.
Early plaque in health consists of a relatively simple bacterial community dominated by Gram-positive cocci and rods. As plaque matures and gingivitis develops, the communities become increasingly complex, with higher proportions of Gram-negative rods, fusiforms, filaments, spirilla, and spirochetes. Later experimental gingivitis studies, using culture, provided more information regarding the specific bacterial species present in plaque. Taxa associated with gingivitis included Fusobacterium nucleatum subspecies polymorphum, Lachnospiraceae [G-2] species HOT100, Lautropia species HOTA94, and Prevotella oulorum (a species of Prevotella bacterium), whilst Rothia dentocariosa was associated with periodontal health. Further study of these taxa is warranted and may lead to new therapeutic approaches to prevent periodontal disease.
Risk factors
Risk factors associated with gingivitis include the following:
age
osteoporosis
low dental care utilization
poor oral hygiene
overly aggressive oral hygiene such as brushing with stiff bristles
mouth breathing during sleep
Orthodontic braces
medications and conditions that dry the mouth
cigarette smoking
genetic factors
stress
mental health issues such as depression
pre-existing conditions such as diabetes
Diagnosis
Gingivitis is a category of periodontal disease in which there is no loss of bone but inflammation and bleeding are present.
Each tooth is divided into four gingival units (mesial, distal, buccal, and lingual) and given a score from 0–3 based on the gingival index. The four scores are then averaged to give each tooth a single score.
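As an illustration of this scoring arithmetic, here is a minimal sketch (the function name and example scores are hypothetical; real scores come from a clinical exam):
```python
def tooth_gingival_index(mesial: int, distal: int, buccal: int, lingual: int) -> float:
    """Average the four gingival-unit scores (each 0-3) into one score per tooth."""
    scores = [mesial, distal, buccal, lingual]
    if any(not 0 <= s <= 3 for s in scores):
        raise ValueError("each gingival unit is scored from 0 to 3")
    return sum(scores) / len(scores)

# A tooth scored 2 (mesial), 1 (distal), 1 (buccal), 0 (lingual) averages to 1.0.
print(tooth_gingival_index(2, 1, 1, 0))
```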
The diagnosis of the periodontal disease gingivitis is done by a dentist. The diagnosis is based on clinical assessment data acquired during a comprehensive periodontal exam. Either a registered dental hygienist or a dentist may perform the comprehensive periodontal exam but the data interpretation and diagnosis are done by the dentist. The comprehensive periodontal exam consists of a visual exam, a series of radiographs, probing of the gingiva, determining the extent of current or past damage to the periodontium and a comprehensive review of the medical and dental histories.
Current research shows that activity levels of the following enzymes in saliva samples are associated with periodontal destruction: aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma glutamyl transferase (GGT), alkaline phosphatase (ALP), and acid phosphatase (ACP). Therefore, these enzyme biomarkers may be used to aid in the diagnosis and treatment of gingivitis and periodontitis.
A dental hygienist or dentist will check for the symptoms of gingivitis, and may also examine the amount of plaque in the oral cavity. A dental hygienist or dentist will also look for signs of periodontitis using X-rays or periodontal probing as well as other methods.
If gingivitis is not responsive to treatment, referral to a periodontist (a specialist in diseases of the gingiva and bone around teeth and dental implants) for further treatment may be necessary.
Classification
1999 Classification
As defined by the 1999 World Workshop in Clinical Periodontics, there are two primary categories of gingival diseases, each with numerous subgroups:
Dental plaque-induced gingival diseases.
Gingivitis associated with plaque only
Gingival diseases modified by systemic factors
Gingival diseases modified by medications
Gingival diseases modified by malnutrition
Non-plaque-induced gingival lesions
Gingival diseases of specific bacterial origin
Gingival diseases of viral origin
Gingival diseases of fungal origin
Gingival diseases of genetic origin
Gingival manifestations of systemic conditions
Traumatic lesions
Foreign body reactions
Not otherwise specified
2017 Classification
As defined by the 2017 World Workshop, periodontal health and gingival diseases/conditions have been categorised as follows:
Periodontal health and gingival health
Clinical gingival health on an intact periodontium
Clinical gingival health on a reduced periodontium
Stable periodontitis patient
Non-periodontitis patient
Gingivitis – dental biofilm-induced
Associated with dental biofilm alone
Mediated by systemic or local risk factors
Drug-influenced gingival enlargement
Gingival diseases – non-dental biofilm induced
Genetic/ developmental disorders
Specific infections
Inflammatory and immune conditions
Reactive processes
Neoplasms
Endocrine, nutritional & metabolic diseases
Traumatic lesions
Gingival pigmentation
Prevention
Gingivitis can be prevented through regular oral hygiene that includes daily brushing and flossing. Hydrogen peroxide, saline, alcohol, or chlorhexidine mouthwashes may also be employed. A 2004 clinical study highlighted the beneficial effect of hydrogen peroxide on gingivitis. The use of oscillation-type brushes might reduce the risk of gingivitis compared to manual brushing.
Rigorous plaque control programs along with periodontal scaling and curettage have also proved helpful, although according to the American Dental Association, periodontal scaling and root planing are considered a treatment for periodontal disease, not a preventive treatment. In a 1997 review of effectiveness data, the U.S. Food and Drug Administration (FDA) found clear evidence that toothpaste containing triclosan was effective in preventing gingivitis. In 2017 the FDA banned triclosan in many consumer products but allowed it to remain in toothpaste because of its effectiveness against gingivitis. In 2019, Colgate, under pressure from health advocates, removed triclosan from the last toothpaste on the market containing it, Colgate Total.
Treatment
The focus of treatment is to remove plaque. Therapy is aimed at the reduction of oral bacteria and may take the form of regular periodic visits to a dental professional together with adequate oral hygiene home care. Thus, several of the methods used in the prevention of gingivitis can also be used for the treatment of manifest gingivitis, such as scaling, root planing, curettage, mouth washes containing chlorhexidine or hydrogen peroxide, and flossing. Interdental brushes also help remove any causative agents.
Powered toothbrushes work better than manual toothbrushes in reducing the disease. The active ingredients that "reduce plaque and demonstrate effective reduction of gingival inflammation over a period of time" are triclosan, chlorhexidine digluconate, and a combination of thymol, menthol, eucalyptol, and methyl salicylate. These ingredients are found in toothpastes and mouthwashes. Hydrogen peroxide was long considered a suitable over-the-counter agent to treat gingivitis, and there is evidence of a positive effect on controlling gingivitis in short-term use. One study indicates that a fluoridated hydrogen peroxide-based mouth rinse can remove tooth stains and reduce gingivitis.
Based on limited evidence, mouthwashes with essential oils may also be useful, as they contain ingredients with anti-inflammatory properties, such as thymol, menthol, and eucalyptol.
The bacteria that cause gingivitis can be controlled by using an oral irrigator daily with a mouthwash containing an antibiotic; either amoxicillin, cephalexin, or minocycline in 500 grams of a non-alcoholic fluoride mouthwash is an effective mixture.
Overall, intensive oral hygiene care has been shown to improve gingival health in individuals with well-controlled type 2 diabetes, and periodontal destruction is also slowed by such extensive oral care. Intensive oral hygiene care (oral health education plus supra-gingival scaling) without any periodontal therapy improves gingival health and may prevent progression of gingivitis in well-controlled diabetes.
See also
Pericoronitis
"Full width gingivitis" of orofacial granulomatosis
Desquamative gingivitis
References
External links
Glioblastoma | Glioblastoma, previously known as glioblastoma multiforme (GBM), is one of the most aggressive types of cancer that begin within the brain. Initially, signs and symptoms of glioblastoma are nonspecific. They may include headaches, personality changes, nausea, and symptoms similar to those of a stroke. Symptoms often worsen rapidly and may progress to unconsciousness.
The cause of most cases of glioblastoma is not known. Uncommon risk factors include genetic disorders, such as neurofibromatosis and Li–Fraumeni syndrome, and previous radiation therapy. Glioblastomas represent 15% of all brain tumors. They can either start from normal brain cells or develop from an existing low-grade astrocytoma. The diagnosis typically is made by a combination of a CT scan, MRI scan, and tissue biopsy.
There is no known method of preventing the cancer. Treatment usually involves surgery, after which chemotherapy and radiation therapy are used. The medication temozolomide is frequently used as part of chemotherapy. High-dose steroids may be used to help reduce swelling and decrease symptoms. Surgical removal (decompression) of the tumor is linked to increased survival, but only by some months.
Despite maximum treatment, the cancer almost always recurs. The typical duration of survival following diagnosis is 10–13 months, with fewer than 5–10% of people surviving longer than five years. Without treatment, survival is typically three months. It is the most common cancer that begins within the brain and the second-most common brain tumor, after meningioma. About 3 in 100,000 people develop the disease per year. The average age at diagnosis is 64, and the disease occurs more commonly in males than females.
Signs and symptoms
Common symptoms include seizures, headaches, nausea and vomiting, memory loss, changes to personality, mood or concentration, and localized neurological problems. The kind of symptoms produced depends more on the location of the tumor than on its pathological properties. The tumor can start producing symptoms quickly, but occasionally is an asymptomatic condition until it reaches an enormous size.
Risk factors
The cause of most cases is unclear. About 5% develop from another type of brain tumor known as a low-grade astrocytoma.
Genetics
Uncommon risk factors include genetic disorders such as neurofibromatosis, Li–Fraumeni syndrome, tuberous sclerosis, or Turcot syndrome. Previous radiation therapy is also a risk. For unknown reasons, it occurs more commonly in males.
Environmental
Other associations include exposure to smoking, pesticides, and working in petroleum refining or rubber manufacturing. Glioblastoma has also been associated with the viruses SV40, HHV-6, and cytomegalovirus.
Other
Research has been done to see if consumption of cured meat is a risk factor. No risk had been confirmed as of 2013. Similarly, exposure to radiation during medical imaging, formaldehyde, and residential electromagnetic fields, such as from cell phones and electrical wiring within homes, have been studied as risk factors. As of 2015, they had not been shown to cause GBM.
Pathogenesis
The cellular origin of glioblastoma is unknown. Because of the similarities in immunostaining of glial cells and glioblastoma, gliomas such as glioblastoma have long been assumed to originate from glial-type cells. More recent studies suggest that astrocytes, oligodendrocyte progenitor cells, and neural stem cells could all serve as the cell of origin.
Glioblastomas are characterized by the presence of small areas of necrotizing tissue surrounded by anaplastic cells. This characteristic, as well as the presence of hyperplastic blood vessels, differentiates the tumor from grade 3 astrocytomas, which do not have these features.
GBMs usually form in the cerebral white matter, grow quickly, and can become very large before producing symptoms. Fewer than 10% form more slowly, following degeneration of a low-grade astrocytoma or anaplastic astrocytoma; these are called secondary GBMs and are more common in younger patients (mean age 45 versus 62 years). The tumor may extend into the meninges or ventricular wall, leading to high protein content in the cerebrospinal fluid (CSF) (> 100 mg/dl), as well as an occasional pleocytosis of 10 to 100 cells, mostly lymphocytes. Malignant cells carried in the CSF may spread (rarely) to the spinal cord or cause meningeal gliomatosis; however, metastasis of GBM beyond the central nervous system is extremely unusual. About 50% of GBMs occupy more than one lobe of a hemisphere or are bilateral. Tumors of this type usually arise from the cerebrum and may exhibit the classic infiltration across the corpus callosum, producing a butterfly (bilateral) glioma.
Glioblastoma classification
Brain tumor classification has traditionally been based on histopathology at the macroscopic level, assessed in hematoxylin-eosin sections. The World Health Organization published the first standard classification in 1979 and has revised it periodically since. The 2007 WHO Classification of Tumors of the Central Nervous System was the last classification based mainly on microscopy features. The 2016 WHO Classification of Tumors of the Central Nervous System represented a paradigm shift: some tumors were defined by their genetic composition as well as their cell morphology.
The grading of gliomas changed significantly, and glioblastoma is now mainly classified according to the status of isocitrate dehydrogenase (IDH) mutation: IDH-wildtype or IDH-mutant.
Molecular alterations
Four subtypes of glioblastoma have been identified based on gene expression:
Classical: Around 97% of tumors in this subtype carry extra copies of the epidermal growth factor receptor (EGFR) gene, and most have higher than normal expression of EGFR, whereas the gene TP53 (p53), which is often mutated in glioblastoma, is rarely mutated in this subtype. Loss of heterozygosity in chromosome 10 is also frequently seen in the classical subtype alongside chromosome 7 amplification.
The proneural subtype often has high rates of alterations in TP53 (p53), and in PDGFRA, the gene encoding a-type platelet-derived growth factor receptor, and in IDH1, the gene encoding isocitrate dehydrogenase-1.
The mesenchymal subtype is characterized by high rates of mutations or other alterations in NF1, the gene encoding neurofibromin 1 and fewer alterations in the EGFR gene and less expression of EGFR than other types.
The neural subtype was typified by the expression of neuron markers such as NEFL, GABRA1, SYT1, and SLC12A5, while often presenting as normal cells upon pathological assessment.
Many other genetic alterations have been described in glioblastoma, and the majority of them cluster in two pathways, RB and PI3K/AKT; glioblastomas have alterations in 68–78% and 88% of these pathways, respectively.
Another important alteration is methylation of MGMT, which encodes a "suicide" DNA repair enzyme. Methylation impairs DNA transcription and expression of the MGMT gene. Since the MGMT enzyme can repair only one DNA alkylation, due to its suicide repair mechanism, reserve capacity is low, and methylation of the MGMT gene promoter greatly affects DNA-repair capacity. MGMT methylation is associated with an improved response to treatment with DNA-damaging chemotherapeutics, such as temozolomide.
Cancer stem cells
Glioblastoma cells with properties similar to progenitor cells (glioblastoma cancer stem cells) have been found in glioblastomas. Their presence, coupled with the tumor's diffuse nature, makes complete surgical removal difficult, and is therefore believed to be a possible cause of resistance to conventional treatments and of the high recurrence rate. Glioblastoma cancer stem cells share some resemblance to neural progenitor cells, both expressing the surface receptor CD133. CD44 can also be used as a cancer stem cell marker in a subset of glioblastoma tumour cells. Glioblastoma cancer stem cells appear to exhibit enhanced resistance to radiotherapy and chemotherapy, mediated, at least in part, by up-regulation of the DNA damage response.
Metabolism
The IDH1 gene encodes the enzyme isocitrate dehydrogenase 1 and is mutated in a minority of primary GBM (5%) but in most secondary GBM (>80%). By producing very high concentrations of the oncometabolite D-2-hydroxyglutarate and dysregulating the function of the wild-type IDH1 enzyme, the mutation induces profound changes to the metabolism of IDH1-mutated glioblastoma, compared with IDH1 wild-type glioblastoma or healthy astrocytes. Among other effects, it increases the glioblastoma cells' dependence on glutamine or glutamate as an energy source. IDH1-mutated glioblastomas are thought to have a very high demand for glutamate and to use this amino acid and neurotransmitter as a chemotactic signal. Since healthy astrocytes excrete glutamate, IDH1-mutated glioblastoma cells do not favor dense tumor structures, but instead migrate, invade, and disperse into healthy parts of the brain where glutamate concentrations are higher. This may explain the invasive behavior of these IDH1-mutated glioblastomas.
Ion channels
Furthermore, GBM exhibits numerous alterations in genes that encode for ion channels, including upregulation of gBK potassium channels and ClC-3 chloride channels. By upregulating these ion channels, glioblastoma tumor cells are hypothesized to facilitate increased ion movement over the cell membrane, thereby increasing H2O movement through osmosis, which aids glioblastoma cells in changing cellular volume very rapidly. This is helpful in their extremely aggressive invasive behavior because quick adaptations in cellular volume can facilitate movement through the sinuous extracellular matrix of the brain.
MicroRNA
As of 2012, RNA interference, usually microRNA, was under investigation in tissue culture, pathology specimens, and preclinical animal models of glioblastoma. Additionally, experimental observations suggest that microRNA-451 is a key regulator of LKB1/AMPK signaling in cultured glioma cells and that miRNA clustering controls epigenetic pathways in the disease.
Tumor vasculature
GBM is characterized by abnormal vessels that present disrupted morphology and functionality. The high permeability and poor perfusion of the vasculature result in a disorganized blood flow within the tumor and can lead to increased hypoxia, which in turn facilitates cancer progression by promoting processes such as immunosuppression.
Diagnosis
When viewed with MRI, glioblastomas often appear as ring-enhancing lesions. The appearance is not specific, however, as other lesions such as abscess, metastasis, tumefactive multiple sclerosis, and other entities may look similar. Definitive diagnosis of a suspected GBM on CT or MRI requires a stereotactic biopsy or a craniotomy with tumor resection and pathologic confirmation. Because the tumor grade is based upon the most malignant portion of the tumor, biopsy or subtotal tumor resection can result in undergrading of the lesion. Imaging of tumor blood flow using perfusion MRI and measuring tumor metabolite concentration with MR spectroscopy may add diagnostic value to standard MRI in select cases, by showing increased relative cerebral blood volume and an increased choline peak, respectively, but pathology remains the gold standard for diagnosis and molecular characterization.
Distinguishing primary glioblastoma from secondary glioblastoma is important. These tumors occur spontaneously (de novo) or progress from a lower-grade glioma, respectively. Primary glioblastomas have a worse prognosis and different tumor biology, and may respond differently to therapy, which makes this a critical evaluation for determining patient prognosis and treatment. Over 80% of secondary glioblastomas carry a mutation in IDH1, whereas this mutation is rare in primary glioblastoma (5–10%). Thus, IDH1 mutations are a useful tool to distinguish primary and secondary glioblastomas: histopathologically they are very similar, and the distinction without molecular biomarkers is unreliable.
Prevention
There are no known methods of preventing glioblastoma. As with most gliomas, and unlike some other forms of cancer, the disease arises without previous warning, and there are no known ways to prevent it.
Treatment
Treating glioblastoma is difficult due to several complicating factors:
The tumor cells are resistant to conventional therapies.
The brain is susceptible to damage from conventional therapy.
The brain has a limited capacity to repair itself.
Many drugs cannot cross the blood–brain barrier to act on the tumor.
Treatment of primary brain tumors consists of palliative (symptomatic) care and therapies intended to improve survival.
Symptomatic therapy
Supportive treatment focuses on relieving symptoms and improving the patient's neurologic function. The primary supportive agents are anticonvulsants and corticosteroids.
Historically, around 90% of patients with glioblastoma underwent anticonvulsant treatment, although only an estimated 40% of patients required it. Recently, it has been recommended that neurosurgeons not administer anticonvulsants prophylactically, but wait until a seizure occurs before prescribing the medication. Those receiving phenytoin concurrently with radiation may have serious skin reactions, such as erythema multiforme and Stevens–Johnson syndrome.
Corticosteroids, usually dexamethasone, can reduce peritumoral edema (through rearrangement of the blood–brain barrier), diminishing mass effect and lowering intracranial pressure, with a decrease in headache or drowsiness.
Surgery
Surgery is the first stage of treatment of glioblastoma. An average GBM tumor contains 10¹¹ cells, which is on average reduced to 10⁹ cells after surgery (a reduction of 99%). Benefits of surgery include resection for a pathological diagnosis, alleviation of symptoms related to mass effect, and potentially removing disease before secondary resistance to radiotherapy and chemotherapy occurs.
The greater the extent of tumor removal, the better. In retrospective analyses, removal of 98% or more of the tumor has been associated with a significantly longer healthier time than if less than 98% of the tumor is removed. The chances of near-complete initial removal of the tumor may be increased if the surgery is guided by a fluorescent dye known as 5-aminolevulinic acid. GBM cells are widely infiltrative through the brain at diagnosis, and despite a "total resection" of all obvious tumor, most people with GBM later develop recurrent tumors either near the original site or at more distant locations within the brain. Other modalities, typically radiation and chemotherapy, are used after surgery in an effort to suppress and slow recurrent disease.
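The quoted 99% figure follows directly from those two cell counts:
\[
\frac{10^{11} - 10^{9}}{10^{11}} = 1 - 10^{-2} = 0.99 = 99\%.
\]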
Radiotherapy
Subsequent to surgery, radiotherapy becomes the mainstay of treatment for people with glioblastoma, typically performed along with temozolomide. A pivotal clinical trial carried out in the early 1970s showed that, among 303 GBM patients randomized to radiation or nonradiation therapy, those who received radiation had a median survival more than double that of those who did not. Subsequent clinical research has attempted to build on the backbone of surgery followed by radiation. On average, radiotherapy after surgery can reduce the tumor size to 10⁷ cells. Whole-brain radiotherapy offers no improvement compared with the more precise and targeted three-dimensional conformal radiotherapy. A total radiation dose of 60–65 Gy has been found to be optimal for treatment.
GBM tumors are well known to contain zones of tissue exhibiting hypoxia, which are highly resistant to radiotherapy. Various approaches to chemotherapy radiosensitizers have been pursued, with limited success as of 2016. As of 2010, newer research approaches included preclinical and clinical investigations into the use of an oxygen diffusion-enhancing compound, such as trans sodium crocetinate, as a radiosensitizer, and as of 2015 a clinical trial was underway. Boron neutron capture therapy has been tested as an alternative treatment for glioblastoma, but is not in common use.
Chemotherapy
Most studies show no benefit from the addition of chemotherapy. However, a large clinical trial of 575 participants randomized to standard radiation versus radiation plus temozolomide chemotherapy showed that the group receiving temozolomide survived a median of 14.6 months, as opposed to 12.1 months for the group receiving radiation alone. This treatment regimen is now standard for most cases of glioblastoma where the person is not enrolled in a clinical trial. Temozolomide seems to work by sensitizing the tumor cells to radiation and appears more effective for tumors with MGMT promoter methylation. High doses of temozolomide in high-grade gliomas yield low toxicity, but the results are comparable to those of standard doses.
Antiangiogenic therapy with medications such as bevacizumab controls symptoms but does not appear to affect overall survival in those with glioblastoma; the overall benefit of anti-angiogenic therapies as of 2019 is unclear. In elderly people with newly diagnosed glioblastoma who are reasonably fit, concurrent and adjuvant chemoradiotherapy gives the best overall survival, but is associated with a greater risk of haematological adverse events than radiotherapy alone.
Other procedures
Alternating electric field therapy is an FDA-approved therapy for newly diagnosed and recurrent glioblastoma. In 2015, initial results from a phase-III randomized clinical trial of alternating electric field therapy plus temozolomide in newly diagnosed glioblastoma reported a three-month improvement in progression-free survival and a five-month improvement in overall survival compared to temozolomide therapy alone, representing the first large trial in a decade to show a survival improvement in this setting. Despite these results, the efficacy of this approach remains controversial among medical experts. However, increasing understanding of the mechanistic basis through which alternating electric field therapy exerts anti-cancer effects, together with results from ongoing phase-III clinical trials in extracranial cancers, may help facilitate increased clinical acceptance for glioblastoma in the future.
A Tel Aviv University study showed that pharmacological and molecular inhibition of the P-selectin protein leads to reduced tumor growth and increased survival in mouse models of glioblastoma. These results could open the way to therapies with drugs that inhibit this protein, such as crizanlizumab.
Prognosis
The most common length of survival following diagnosis is 10 to 13 months, with fewer than 1 to 3% of people surviving longer than five years. In the United States between 2012 and 2016, five-year survival was 6.8%. Without treatment, survival is typically 3 months. Complete cures are extremely rare, but have been reported.
Increasing age (> 60 years) carries a worse prognostic risk. Death is usually due to widespread tumor infiltration with cerebral edema and increased intracranial pressure.
A good initial Karnofsky performance score (KPS) and MGMT methylation are associated with longer survival. A DNA test can be conducted on glioblastomas to determine whether or not the promoter of the MGMT gene is methylated. Patients with a methylated MGMT promoter have longer survival than those with an unmethylated MGMT promoter, due in part to increased sensitivity to temozolomide. Another positive prognostic marker is mutation of the IDH1 gene, which can be tested by DNA-based methods or by immunohistochemistry using an antibody against the most common mutation, IDH1-R132H.
More prognostic power can be obtained by combining the mutational status of IDH1 and the methylation status of MGMT into a two-gene predictor. Patients with both IDH1 mutations and MGMT methylation have the longest survival, patients with either an IDH1 mutation or MGMT methylation have an intermediate survival, and patients without either genetic event have the shortest survival.
Long-term benefits have also been associated with patients who receive surgery, radiotherapy, and temozolomide chemotherapy. However, much remains unknown about why some patients survive longer with glioblastoma. Age under 50 is linked to longer survival in GBM, as are 98%+ resection, use of temozolomide chemotherapy, and better KPSs. A recent study confirms that younger age is associated with a much better prognosis, with a small fraction of patients under 40 years of age achieving a population-based cure. Cure is thought to occur when a person's risk of death returns to that of the normal population, and in GBM this is thought to occur after 10 years.
UCLA Neuro-oncology publishes real-time survival data for patients with this diagnosis. According to a 2003 study, GBM prognosis can be divided into three subgroups dependent on KPS, the age of the patient, and treatment.
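The two-gene predictor amounts to a simple decision rule. A minimal sketch follows (hypothetical function name; the labels paraphrase the survival ordering above and are not a validated clinical tool):
```python
def two_gene_prognostic_group(idh1_mutant: bool, mgmt_methylated: bool) -> str:
    """Map IDH1 mutation and MGMT promoter methylation status to the
    three survival groups described in the text."""
    favorable = int(idh1_mutant) + int(mgmt_methylated)
    if favorable == 2:
        return "longest survival"       # both markers present
    if favorable == 1:
        return "intermediate survival"  # exactly one marker present
    return "shortest survival"          # neither genetic event

print(two_gene_prognostic_group(idh1_mutant=True, mgmt_methylated=False))
# -> intermediate survival
```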
Epidemiology
About three per 100,000 people develop the disease per year, although regional frequency may be much higher; the frequency in England doubled between 1995 and 2015. Glioblastoma is the second-most common central nervous system cancer after meningioma, and it occurs more commonly in males than females. Although the average age at diagnosis is 64, in 2014 the broad category of brain cancers was second only to leukemia among people in the United States under 20 years of age.
History
The term glioblastoma multiforme was introduced in 1926 by Percival Bailey and Harvey Cushing, based on the idea that the tumor originates from primitive precursors of glial cells (glioblasts), and the highly variable appearance due to the presence of necrosis, hemorrhage, and cysts (multiform).
Research
Gene therapy
Gene therapy has been explored as a method to treat glioblastoma, and while animal models and early-phase clinical trials have been successful, as of 2017 all gene-therapy drugs tested in phase-III clinical trials for glioblastoma had failed. Scientists have developed the core–shell nanostructured LPLNP-PPT system (long persistent luminescence nanoparticles; PPT refers to polyetherimide, PEG, and trans-activator of transcription) for effective gene delivery and tracking, with positive results. It delivers TRAIL, the human tumor necrosis factor-related apoptosis-inducing ligand, encoded to induce apoptosis of cancer cells, more specifically glioblastomas. Although this study was still in clinical trials in 2017, it has shown diagnostic and therapeutic functionality and has drawn great interest for clinical applications in stem-cell-based therapy.
Oncolytic virotherapy
Oncolytic virotherapy is an emerging novel treatment under investigation at both preclinical and clinical stages. Several viruses, including herpes simplex virus, adenovirus, poliovirus, and reovirus, are currently being tested in phase I and II clinical trials for glioblastoma therapy and have been shown to improve overall survival.
Intranasal drug delivery
Direct nose-to-brain drug delivery is being explored as a means to achieve higher, and hopefully more effective, drug concentrations in the brain. A clinical phase-I/II study with glioblastoma patients in Brazil investigated the natural compound perillyl alcohol for intranasal delivery as an aerosol. The results were encouraging and, as of 2016, a similar trial had been initiated in the United States.
Cannabinoids
The efficacy of cannabinoids (cannabis derivatives) is known in oncology: capsules of tetrahydrocannabinol (THC) or the synthetic analogue nabilone are used, on the one hand, to combat chemotherapy-induced nausea and vomiting, and on the other to stimulate appetite and lessen anguish or actual pain.
Their ability to inhibit growth and angiogenesis in malignant gliomas in mouse models has been demonstrated.
The results of a pilot study on the use of THC in end-stage patients with recurrent glioblastoma appeared worthy of further study.
A potential avenue for future research rests on the discovery that cannabinoids are able to attack the neoplastic stem cells of glioblastoma in mouse models, with the result, on the one hand, of inducing their differentiation into more mature, possibly more "treatable" cells, and on the other of inhibiting tumorigenesis.
See also
Adegramotide
List of people with brain tumors
References
External links
Information about Glioblastoma Multiforme (GBM) from the American Brain Tumor Association
AFIP Course Syllabus – Astrocytoma WHO Grading Lecture Handout
Gold | Gold is a chemical element with the symbol Au (from Latin: aurum) and atomic number 79. This makes it one of the higher atomic number elements that occur naturally. It is a bright, slightly orange-yellow, dense, soft, malleable, and ductile metal in a pure form. Chemically, gold is a transition metal and a group 11 element. It is one of the least reactive chemical elements and is solid under standard conditions. Gold often occurs in free elemental (native) form, as nuggets or grains, in rocks, veins, and alluvial deposits. It occurs in a solid solution series with the native element silver (as electrum), naturally alloyed with other metals like copper and palladium, and mineral inclusions such as within pyrite. Less commonly, it occurs in minerals as gold compounds, often with tellurium (gold tellurides).
Gold is resistant to most acids, though it does dissolve in aqua regia (a mixture of nitric acid and hydrochloric acid), forming a soluble tetrachloroaurate anion. Gold is insoluble in nitric acid alone, which dissolves silver and base metals, a property long used to refine gold and confirm the presence of gold in metallic substances, giving rise to the term acid test. Gold dissolves in alkaline solutions of cyanide, which are used in mining and electroplating. Gold also dissolves in mercury, forming amalgam alloys, and as the gold acts simply as a solute, this is not a chemical reaction.
A relatively rare element, gold is a precious metal that has been used for coinage, jewelry, and other arts throughout recorded history. In the past, a gold standard was often implemented as a monetary policy. Gold coins ceased to be minted as a circulating currency in the 1930s, and the world gold standard was abandoned for a fiat currency system after the Nixon shock measures of 1971.
In 2020, the world's largest gold producer was China, followed by Russia and Australia. A total of around 201,296 tonnes of gold exists above ground, as of 2020; this is equal to a cube with each side measuring roughly 21.7 meters (71 ft). The world consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry. Gold's high malleability, ductility, resistance to corrosion and most other chemical reactions, and conductivity of electricity have led to its continued use in corrosion-resistant electrical connectors in all types of computerized devices (its chief industrial use). Gold is also used in infrared shielding, production of colored glass, gold leafing, and tooth restoration. Certain gold salts are still used as anti-inflammatories in medicine.
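The cube figure can be checked using gold's density of 19.3 g/cm³ (given under Characteristics below), treating the above-ground stock as one solid block:
\[
V = \frac{2.013\times10^{11}\ \mathrm{g}}{19.3\ \mathrm{g/cm^3}} \approx 1.04\times10^{10}\ \mathrm{cm^3} = 1.04\times10^{4}\ \mathrm{m^3},
\qquad
\sqrt[3]{1.04\times10^{4}}\ \mathrm{m} \approx 21.8\ \mathrm{m},
\]
in agreement with the roughly 21.7 m side quoted above (the small difference is rounding).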
Characteristics
Gold is the most malleable of all metals. It can be drawn into a wire of single-atom width, and then stretched considerably before it breaks. Such nanowires distort via formation, reorientation and migration of dislocations and crystal twins without noticeable hardening. A single gram of gold can be beaten into a sheet of 1 square metre (11 sq ft), and an avoirdupois ounce into 300 square feet (28 m2). Gold leaf can be beaten thin enough to become semi-transparent. The transmitted light appears greenish-blue, because gold strongly reflects yellow and red. Such semi-transparent sheets also strongly reflect infrared light, making them useful as infrared (radiant heat) shields in visors of heat-resistant suits, and in sun-visors for spacesuits. Gold is a good conductor of heat and electricity.
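The thickness implied by beating one gram into one square metre can be worked out from the density of 19.3 g/cm³ quoted in the following paragraph:
\[
t = \frac{1\ \mathrm{g} / 19.3\ \mathrm{g\,cm^{-3}}}{10^{4}\ \mathrm{cm^{2}}} \approx 5.2\times10^{-6}\ \mathrm{cm} \approx 52\ \mathrm{nm},
\]
only a couple of hundred atomic layers, which is why such leaf can become semi-transparent.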
Gold has a density of 19.3 g/cm3, almost identical to that of tungsten at 19.25 g/cm3; as such, tungsten has been used in counterfeiting of gold bars, such as by plating a tungsten bar with gold, or taking an existing gold bar, drilling holes, and replacing the removed gold with tungsten rods. By comparison, the density of lead is 11.34 g/cm3, and that of the densest element, osmium, is 22.588±0.015 g/cm3.
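Comparing the two densities quoted above shows why weighing alone cannot expose a tungsten-filled bar:
\[
\frac{19.30 - 19.25}{19.30} \approx 0.26\%,
\]
a mismatch far smaller than the tolerances of ordinary weighing and volume measurement of a bar.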
Color
Whereas most metals are gray or silvery white, gold is slightly reddish-yellow. This color is determined by the frequency of plasma oscillations among the metal's valence electrons, in the ultraviolet range for most metals but in the visible range for gold due to relativistic effects affecting the orbitals around gold atoms. Similar effects impart a golden hue to metallic caesium.
Common colored gold alloys include the distinctive eighteen-karat rose gold created by the addition of copper. Alloys containing palladium or nickel are also important in commercial jewelry, as these produce white gold alloys. Fourteen-karat gold-copper alloy is nearly identical in color to certain bronze alloys, and both may be used to produce police and other badges. Fourteen- and eighteen-karat gold alloys with silver alone appear greenish-yellow and are referred to as green gold. Blue gold can be made by alloying with iron, and purple gold can be made by alloying with aluminium. Less commonly, the addition of manganese, indium, and other elements can produce more unusual colors of gold for various applications.
Colloidal gold, used by electron microscopists, is red if the particles are small; larger particles of colloidal gold are blue.
Isotopes
Gold has only one stable isotope, 197Au, which is also its only naturally occurring isotope, so gold is both a mononuclidic and monoisotopic element. Thirty-six radioisotopes have been synthesized, ranging in atomic mass from 169 to 205. The most stable of these is 195Au, with a half-life of 186.1 days. The least stable is 171Au, which decays by proton emission with a half-life of 30 µs. Most of gold's radioisotopes with atomic masses below 197 decay by some combination of proton emission, α decay, and β+ decay. The exceptions are 195Au, which decays by electron capture, and 196Au, which decays most often by electron capture (93%) with a minor β− decay path (7%). All of gold's radioisotopes with atomic masses above 197 decay by β− decay.
At least 32 nuclear isomers have also been characterized, ranging in atomic mass from 170 to 200. Within that range, only 178Au, 180Au, 181Au, 182Au, and 188Au do not have isomers. Gold's most stable isomer is 198m2Au, with a half-life of 2.27 days; its least stable isomer is 177m2Au, with a half-life of only 7 ns. 184m1Au has three decay paths: β+ decay, isomeric transition, and alpha decay. No other isomer or isotope of gold has three decay paths.
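As a worked example of these half-life figures, the fraction of a 195Au sample remaining after a time t follows the usual decay law:
\[
N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}},
\qquad
N(365\ \mathrm{d}) = N_0 \left(\tfrac{1}{2}\right)^{365/186.1} \approx 0.26\,N_0,
\]
so roughly a quarter of the most stable radioisotope survives after one year.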
Synthesis
The possible production of gold from a more common element, such as lead, has long been a subject of human inquiry, and the ancient and medieval discipline of alchemy often focused on it; however, the transmutation of the chemical elements did not become possible until the understanding of nuclear physics in the 20th century. The first synthesis of gold was conducted by Japanese physicist Hantaro Nagaoka, who synthesized gold from mercury in 1924 by neutron bombardment. An American team, working without knowledge of Nagaokas prior study, conducted the same experiment in 1941, achieving the same result and showing that the isotopes of gold produced by it were all radioactive. In 1980, Glenn Seaborg transmuted several thousand atoms of bismuth into gold at the Lawrence Berkeley Laboratory. Gold can be manufactured in a nuclear reactor, but doing so is highly impractical and would cost far more than the value of the gold that is produced.
Chemistry
Although gold is the most noble of the noble metals, it still forms many diverse compounds. The oxidation state of gold in its compounds ranges from −1 to +5, but Au(I) and Au(III) dominate its chemistry. Au(I), referred to as the aurous ion, is the most common oxidation state with soft ligands such as thioethers, thiolates, and organophosphines. Au(I) compounds are typically linear. A good example is [Au(CN)2]−, which is the soluble form of gold encountered in mining. The binary gold halides, such as AuCl, form zigzag polymeric chains, again featuring linear coordination at Au. Most drugs based on gold are Au(I) derivatives.
Au(III) (referred to as auric) is a common oxidation state, illustrated by gold(III) chloride, Au2Cl6. The gold atom centers in Au(III) complexes, like other d8 compounds, are typically square planar, with chemical bonds that have both covalent and ionic character. Gold(I,III) chloride is also known, an example of a mixed-valence complex.
Gold does not react with oxygen at any temperature and, up to 100 °C, is resistant to attack from ozone.
\[ \mathrm{Au} + \mathrm{O_2} \nrightarrow \]
\[ \mathrm{Au} + \mathrm{O_3} \nrightarrow \quad (t < 100\ ^\circ\mathrm{C}) \]
Some free halogens react with gold. Gold is strongly attacked by fluorine at dull-red heat to form gold(III) fluoride AuF3. Powdered gold reacts with chlorine at 180 °C to form gold(III) chloride AuCl3. Gold reacts with bromine at 140 °C to form gold(III) bromide AuBr3, but reacts only very slowly with iodine to form gold(I) iodide AuI.
\[ \ce{2Au + 3F2 ->[t] 2AuF3} \]
\[ \ce{2Au + 3Cl2 ->[t] 2AuCl3} \]
\[ \ce{2Au + 2Br2 ->[t] AuBr3 + AuBr} \]
\[ \ce{2Au + I2 ->[t] 2AuI} \]
Gold does not react with sulfur directly, but gold(III) sulfide can be made by passing hydrogen sulfide through a dilute solution of gold(III) chloride or chlorauric acid.
Gold readily dissolves in mercury at room temperature to form an amalgam, and forms alloys with many other metals at higher temperatures. These alloys can be produced to modify the hardness and other metallurgical properties, to control melting point, or to create exotic colors.
Gold is unaffected by most acids. It does not react with hydrofluoric, hydrochloric, hydrobromic, hydriodic, sulfuric, or nitric acid. It does react with selenic acid, and is dissolved by aqua regia, a 1:3 mixture of nitric acid and hydrochloric acid. Nitric acid oxidizes the metal to +3 ions, but only in minute amounts, typically undetectable in the pure acid because of the chemical equilibrium of the reaction. However, the ions are removed from the equilibrium by hydrochloric acid, forming [AuCl4]− ions, or chloroauric acid, thereby enabling further oxidation:
\[ \ce{2Au + 6H2SeO4 ->[200\ ^\circ C] Au2(SeO4)3 + 3H2SeO3 + 3H2O} \]
\[ \ce{Au + 4HCl + HNO3 -> H[AuCl4] + NO\uparrow + 2H2O} \]
Gold is similarly unaffected by most bases. It does not react with aqueous, solid, or molten sodium or potassium hydroxide. It does, however, react with sodium or potassium cyanide under alkaline conditions when oxygen is present, forming soluble complexes.
Common oxidation states of gold include +1 (gold(I) or aurous compounds) and +3 (gold(III) or auric compounds). Gold ions in solution are readily reduced and precipitated as metal by adding any other metal as the reducing agent. The added metal is oxidized and dissolves, allowing the gold to be displaced from solution and recovered as a solid precipitate.
Rare oxidation states
Less common oxidation states of gold include −1, +2, and +5.
The −1 oxidation state occurs in aurides, compounds containing the Au− anion. Caesium auride (CsAu), for example, crystallizes in the caesium chloride motif; rubidium, potassium, and tetramethylammonium aurides are also known. Gold has the highest electron affinity of any metal, at 222.8 kJ/mol, making Au− a stable species, analogous to the halides.
Gold also has a −1 oxidation state in covalent complexes with the group 4 transition metals, such as in titanium tetraauride and the analogous zirconium and hafnium compounds. These chemicals are expected to form gold-bridged dimers in a manner similar to titanium(IV) hydride.
Gold(II) compounds are usually diamagnetic with Au–Au bonds, such as [Au(CH2)2P(C6H5)2]2Cl2. The evaporation of a solution of Au(OH)3 in concentrated H2SO4 produces red crystals of gold(II) sulfate, Au2(SO4)2. Originally thought to be a mixed-valence compound, it has been shown to contain Au2⁴⁺ cations, analogous to the better-known mercury(I) ion, Hg2²⁺. A gold(II) complex, the tetraxenonogold(II) cation, which contains xenon as a ligand, occurs in [AuXe4](Sb2F11)2.
Gold pentafluoride, along with its derivative anion, AuF6⁻, and its difluorine complex, gold heptafluoride, is the sole example of gold(V), the highest verified oxidation state.
Some gold compounds exhibit aurophilic bonding, which describes the tendency of gold ions to interact at distances too long for a conventional Au–Au bond but shorter than van der Waals bonding. The interaction is estimated to be comparable in strength to a hydrogen bond.
Well-defined cluster compounds are numerous. In some cases, gold has a fractional oxidation state. A representative example is the octahedral species {Au[P(C6H5)3]}6^2+.
Origin
Gold production in the universe
Gold is thought to have been produced in supernova nucleosynthesis, and from the collision of neutron stars, and to have been present in the dust from which the Solar System formed.
Traditionally, gold in the universe is thought to have formed by the r-process (rapid neutron capture) in supernova nucleosynthesis, but more recently it has been suggested that gold and other elements heavier than iron may also be produced in quantity by the r-process in the collision of neutron stars. In both cases, satellite spectrometers at first only indirectly detected the resulting gold. However, in August 2017, the spectroscopic signatures of heavy elements, including gold, were observed by electromagnetic observatories in the GW170817 neutron star merger event, after gravitational wave detectors confirmed the event as a neutron star merger. Current astrophysical models suggest that this single neutron star merger event generated between 3 and 13 Earth masses of gold. This amount, along with estimations of the rate of occurrence of these neutron star merger events, suggests that such mergers may produce enough gold to account for most of the abundance of this element in the universe.
Asteroid origin theories
Because the Earth was molten when it was formed, almost all of the gold present in the early Earth probably sank into the planetary core. Therefore, most of the gold that is in the Earth's crust and mantle is, in one model, thought to have been delivered to Earth later by asteroid impacts during the Late Heavy Bombardment, about 4 billion years ago.
Gold which is reachable by humans has, in one case, been associated with a particular asteroid impact. The asteroid that formed the Vredefort impact structure 2.020 billion years ago is often credited with seeding the Witwatersrand basin in South Africa with the richest gold deposits on Earth. However, this scenario is now questioned. The gold-bearing Witwatersrand rocks were laid down between 700 and 950 million years before the Vredefort impact. These gold-bearing rocks had furthermore been covered by a thick layer of Ventersdorp lavas and the Transvaal Supergroup of rocks before the meteor struck, and thus the gold did not actually arrive in the asteroid/meteorite. What the Vredefort impact achieved, however, was to distort the Witwatersrand basin in such a way that the gold-bearing rocks were brought to the present erosion surface in Johannesburg, on the Witwatersrand, just inside the rim of the original 300 km (190 mi) diameter crater caused by the meteor strike. The discovery of the deposit in 1886 launched the Witwatersrand Gold Rush. Some 22% of all the gold that is ascertained to exist today on Earth has been extracted from these Witwatersrand rocks.
Mantle return theories
Notwithstanding the impact above, much of the rest of the gold on Earth is thought to have been incorporated into the planet since its very beginning, as planetesimals formed the planet's mantle, early in Earth's creation. In 2017, an international group of scientists established that gold "came to the Earth's surface from the deepest regions of our planet", the mantle, evidenced by their findings at Deseado Massif in the Argentinian Patagonia.
Occurrence
On Earth, gold is found in ores in rock formed from the Precambrian time onward. It most often occurs as a native metal, typically in a metal solid solution with silver (i.e. as a gold/silver alloy). Such alloys usually have a silver content of 8–10%. Electrum is elemental gold with more than 20% silver. Electrum's color runs from golden-silvery to silvery, dependent upon the silver content. The more silver, the lower the specific gravity.
Native gold occurs as very small to microscopic particles embedded in rock, often together with quartz or sulfide minerals such as "fool's gold", which is pyrite. These are called lode deposits. The metal in a native state is also found in the form of free flakes, grains or larger nuggets that have been eroded from rocks and end up in alluvial deposits called placer deposits. Such free gold is always richer at the exposed surface of gold-bearing veins, owing to the oxidation of accompanying minerals followed by weathering; and by washing of the dust into streams and rivers, where it collects and can be welded by water action to form nuggets.
Gold sometimes occurs combined with tellurium as the minerals calaverite, krennerite, nagyagite, petzite and sylvanite (see telluride minerals), and as the rare bismuthide maldonite (Au2Bi) and antimonide aurostibite (AuSb2). Gold also occurs in rare alloys with copper, lead, and mercury: the minerals auricupride (Cu3Au), novodneprite (AuPb3) and weishanite ((Au,Ag)3Hg2).
Recent research suggests that microbes can sometimes play an important role in forming gold deposits, transporting and precipitating gold to form grains and nuggets that collect in alluvial deposits.
Another recent study has claimed water in faults vaporizes during an earthquake, depositing gold. When an earthquake strikes, it moves along a fault. Water often lubricates faults, filling in fractures and jogs. About 10 kilometres (6.2 mi) below the surface, under very high temperatures and pressures, the water carries high concentrations of carbon dioxide, silica, and gold. During an earthquake, the fault jog suddenly opens wider. The water inside the void instantly vaporizes, flashing to steam and forcing silica, which forms the mineral quartz, and gold out of the fluids and onto nearby surfaces.
Seawater
The world's oceans contain gold. Measured concentrations of gold in the Atlantic and Northeast Pacific are 50–150 femtomol/L or 10–30 parts per quadrillion (about 10–30 g/km3). In general, gold concentrations for south Atlantic and central Pacific samples are the same (~50 femtomol/L) but less certain. Mediterranean deep waters contain slightly higher concentrations of gold (100–150 femtomol/L), attributed to wind-blown dust and/or rivers. At 10 parts per quadrillion the Earth's oceans would hold 15,000 tonnes of gold. These figures are three orders of magnitude less than reported in the literature prior to 1988, indicating contamination problems with the earlier data.
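To see how these figures fit together, a minimal back-of-the-envelope sketch in Python follows; the ocean volume and seawater mass per cubic kilometre are assumed reference values, not figures from this article:

```python
# Rough consistency check of the seawater gold figures quoted above.
OCEAN_VOLUME_KM3 = 1.33e9        # assumed reference value for total ocean volume
SEAWATER_KG_PER_KM3 = 1.03e12    # assumed: ~1.03 t per m^3 of seawater

mass_fraction = 10e-15           # 10 parts per quadrillion, as quoted above
gold_g_per_km3 = mass_fraction * SEAWATER_KG_PER_KM3 * 1000.0   # kg -> g
total_gold_tonnes = gold_g_per_km3 * OCEAN_VOLUME_KM3 / 1e6     # g -> tonnes

print(f"{gold_g_per_km3:.1f} g of gold per km^3")         # ~10.3 g/km^3
print(f"~{total_gold_tonnes:,.0f} tonnes in the oceans")  # ~14,000 t, near the quoted 15,000 t
```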
A number of people have claimed to be able to economically recover gold from sea water, but they were either mistaken or acted in an intentional deception. Prescott Jernegan ran a gold-from-seawater swindle in the United States in the 1890s, as did an English fraudster in the early 1900s. Fritz Haber did research on the extraction of gold from sea water in an effort to help pay Germany's reparations following World War I. Based on the published values of 2 to 64 ppb of gold in seawater, a commercially successful extraction seemed possible. After analysis of 4,000 water samples yielding an average of 0.004 ppb, it became clear that extraction would not be possible, and he ended the project.
History
The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period, c. 40,000 BC.
The oldest gold artifacts in the world are from Bulgaria and date back to the 5th millennium BC (4,600 BC to 4,200 BC), such as those found in the Varna Necropolis near Lake Varna and the Black Sea coast, thought to be the earliest "well-dated" finding of gold artifacts in history. Several prehistoric Bulgarian finds are considered no less old – the golden treasures of Hotnitsa, Durankulak, artifacts from the Kurgan settlement of Yunatsite near Pazardzhik, the golden treasure Sakar, as well as beads and gold jewelry found in the Kurgan settlement of Provadia – Solnitsata ("salt pit"). However, the Varna gold is most often called the oldest since this treasure is the largest and most diverse.
Gold artifacts probably made their first appearance in Ancient Egypt at the very beginning of the pre-dynastic period, at the end of the fifth millennium BC and the start of the fourth, and smelting was developed during the course of the 4th millennium; gold artifacts appear in the archeology of Lower Mesopotamia during the early 4th millennium. As of 1990, gold artifacts found at the Wadi Qana cave cemetery of the 4th millennium BC in the West Bank were the earliest from the Levant. Gold artifacts such as the golden hats and the Nebra disk appeared in Central Europe from the 2nd millennium BC Bronze Age.
The oldest known map of a gold mine was drawn in the 19th Dynasty of Ancient Egypt (1320–1200 BC), whereas the first written reference to gold was recorded in the 12th Dynasty around 1900 BC. Egyptian hieroglyphs from as early as 2600 BC describe gold, which King Tushratta of the Mitanni claimed was "more plentiful than dirt" in Egypt. Egypt and especially Nubia had the resources to make them major gold-producing areas for much of history. One of the earliest known maps, known as the Turin Papyrus Map, shows the plan of a gold mine in Nubia together with indications of the local geology. The primitive working methods are described by both Strabo and Diodorus Siculus, and included fire-setting. Large mines were also present across the Red Sea in what is now Saudi Arabia.
Gold is mentioned in the Amarna letters numbered 19 and 26 from around the 14th century BC.
Gold is mentioned frequently in the Old Testament, starting with Genesis 2:11 (at Havilah), the story of the golden calf, and many parts of the temple including the Menorah and the golden altar. In the New Testament, it is included with the gifts of the magi in the first chapters of Matthew. The Book of Revelation 21:21 describes the city of New Jerusalem as having streets "made of pure gold, clear as crystal". Exploitation of gold in the south-east corner of the Black Sea is said to date from the time of Midas, and this gold was important in the establishment of what is probably the world's earliest coinage in Lydia around 610 BC. The legend of the golden fleece, dating from the eighth century BC, may refer to the use of fleeces to trap gold dust from placer deposits in the ancient world. From the 6th or 5th century BC, the state of Chu circulated the Ying Yuan, one kind of square gold coin.
In Roman metallurgy, new methods for extracting gold on a large scale were developed by introducing hydraulic mining methods, especially in Hispania from 25 BC onwards and in Dacia from 106 AD onwards. One of their largest mines was at Las Medulas in León, where seven long aqueducts enabled them to sluice most of a large alluvial deposit. The mines at Roşia Montană in Transylvania were also very large, and until very recently, still mined by opencast methods. They also exploited smaller deposits in Britain, such as placer and hard-rock deposits at Dolaucothi. The various methods they used are well described by Pliny the Elder in his encyclopedia Naturalis Historia written towards the end of the first century AD.
During the hajj to Mecca in 1324 of Mansa Musa (ruler of the Mali Empire from 1312 to 1337), he passed through Cairo in July 1324, reportedly accompanied by a camel train that included thousands of people and nearly a hundred camels. He gave away so much gold that it depressed its price in Egypt for over a decade, causing high inflation. A contemporary Arab historian remarked:
Gold was at a high price in Egypt until they came in that year. The mithqal did not go below 25 dirhams and was generally above, but from that time its value fell and it cheapened in price and has remained cheap till now. The mithqal does not exceed 22 dirhams or less. This has been the state of affairs for about twelve years until this day by reason of the large amount of gold which they brought into Egypt and spent there [...].
The European exploration of the Americas was fueled in no small part by reports of the gold ornaments displayed in great profusion by Native American peoples, especially in Mesoamerica, Peru, Ecuador and Colombia. The Aztecs regarded gold as the product of the gods, calling it literally "god excrement" (teocuitlatl in Nahuatl), and after Moctezuma II was killed, most of this gold was shipped to Spain. However, for the indigenous peoples of North America gold was considered useless, and they saw much greater value in other minerals which were directly related to their utility, such as obsidian, flint, and slate. El Dorado is applied to a legendary story in which precious stones were found in fabulous abundance along with gold coins. The concept of El Dorado underwent several transformations, and eventually accounts of the previous myth were also combined with those of a legendary lost city. El Dorado was the term used by the Spanish Empire to describe a mythical tribal chief (zipa) of the Muisca native people in Colombia, who, as an initiation rite, covered himself with gold dust and submerged in Lake Guatavita. The legends surrounding El Dorado changed over time, as it went from being a man, to a city, to a kingdom, and then finally to an empire.
Beginning in the early modern period, European exploration and colonization of West Africa was driven in large part by reports of gold deposits in the region, which was eventually referred to by Europeans as the "Gold Coast". From the late 15th to early 19th centuries, European trade in the region was primarily focused on gold, along with ivory and slaves. The gold trade in West Africa was dominated by the Ashanti Empire, who initially traded with the Portuguese before branching out and trading with British, French, Spanish and Danish merchants. British desires to secure control of West African gold deposits played a role in the Anglo-Ashanti wars of the late 19th century, which saw the Ashanti Empire annexed by Britain.
Gold has played a role in Western culture, as a cause of desire and of corruption, as told in children's fables such as Rumpelstiltskin—where Rumpelstiltskin turns hay into gold for the peasant's daughter in return for her child when she becomes a princess—and the stealing of the hen that lays golden eggs in Jack and the Beanstalk.
The top prize at the Olympic Games and many other sports competitions is the gold medal.
75% of the presently accounted for gold has been extracted since 1910, two-thirds since 1950.
One main goal of the alchemists was to produce gold from other substances, such as lead, presumably by interaction with a mythical substance called the philosopher's stone. Trying to produce gold led the alchemists to systematically find out what can be done with substances, and this laid the foundation for today's chemistry, which can produce gold (albeit uneconomically) by using nuclear transmutation. Their symbol for gold was the circle with a point at its center (☉), which was also the astrological symbol and the ancient Chinese character for the Sun.
The Dome of the Rock is covered with an ultra-thin layer of gold. The Sikh Golden Temple, the Harmandir Sahib, is a building covered with gold. Similarly the Wat Phra Kaew emerald Buddhist temple (wat) in Thailand has ornamental gold-leafed statues and roofs. Some European kings' and queens' crowns were made of gold, and gold has been used for the bridal crown since antiquity. An ancient Talmudic text circa 100 AD describes Rachel, wife of Rabbi Akiva, receiving a "Jerusalem of Gold" (diadem). A Greek burial crown made of gold was found in a grave circa 370 BC.
Etymology
"Gold" is cognate with similar words in many Germanic languages, deriving via Proto-Germanic *gulþą from Proto-Indo-European *ǵʰelh₃- ("to shine, to gleam; to be yellow or green").The symbol Au is from the Latin: aurum, the Latin word for "gold". The Proto-Indo-European ancestor of aurum was *h₂é-h₂us-o-, meaning "glow". This word is derived from the same root (Proto-Indo-European *h₂u̯es- "to dawn") as *h₂éu̯sōs, the ancestor of the Latin word Aurora, "dawn". This etymological relationship is presumably behind the frequent claim in scientific publications that aurum meant "shining dawn".
Culture
In popular culture gold is a high standard of excellence, often used in awards. Great achievements are frequently rewarded with gold, in the form of gold medals, gold trophies and other decorations. Winners of athletic events and other graded competitions are usually awarded a gold medal. Many awards such as the Nobel Prize are made from gold as well. Other award statues and prizes are depicted in gold or are gold plated (such as the Academy Awards, the Golden Globe Awards, the Emmy Awards, the Palme d'Or, and the British Academy Film Awards).
Aristotle in his ethics used gold symbolism when referring to what is now known as the golden mean. Similarly, gold is associated with perfect or divine principles, such as in the case of the golden ratio and the golden rule. Gold is further associated with the wisdom of aging and fruition. The fiftieth wedding anniversary is golden. A person's most valued or most successful latter years are sometimes considered "golden years". The height of a civilization is referred to as a golden age.
Religion
In some forms of Christianity and Judaism, gold has been associated both with the sacred and with evil. In the Book of Exodus, the Golden Calf is a symbol of idolatry, while in the Book of Genesis, Abraham was said to be rich in gold and silver, and Moses was instructed to cover the Mercy Seat of the Ark of the Covenant with pure gold. In Byzantine iconography the halos of Christ, the Virgin Mary and the saints are often golden.
In Islam, gold (along with silk) is often cited as being forbidden for men to wear. Abu Bakr al-Jazaeri, quoting a hadith, said that "[t]he wearing of silk and gold are forbidden on the males of my nation, and they are lawful to their women". This, however, has not been enforced consistently throughout history, e.g. in the Ottoman Empire. Further, small gold accents on clothing, such as in embroidery, may be permitted.
In ancient Greek religion and mythology, Theia was seen as the goddess of gold, silver and other gems.
According to Christopher Columbus, those who had something of gold were in possession of something of great value on Earth and a substance to even help souls to paradise.
Wedding rings are typically made of gold. It is long lasting and unaffected by the passage of time, and may aid in the ring symbolism of eternal vows before God and the perfection the marriage signifies. In Orthodox Christian wedding ceremonies, the wedded couple is adorned with a golden crown (though some opt for wreaths, instead) during the ceremony, an amalgamation of symbolic rites.
On 24 August 2020, Israeli archaeologists discovered a trove of early Islamic gold coins near the central city of Yavne. Analysis of the extremely rare collection of 425 gold coins indicated that they were from the late 9th century. Dating back around 1,100 years, the gold coins were from the Abbasid Caliphate.
Production
According to the United States Geological Survey in 2016, about 5,726,000,000 troy ounces (178,100 t) of gold has been accounted for, of which 85% remains in active use.
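The troy-ounce figure and the tonnage above are the same stock expressed in two units; a quick conversion (1 troy ounce = 31.1034768 g, the standard definition) confirms they agree:

```python
TROY_OUNCE_G = 31.1034768        # grams per troy ounce (standard definition)

total_ozt = 5_726_000_000        # troy ounces, as quoted above
total_tonnes = total_ozt * TROY_OUNCE_G / 1e6   # grams -> tonnes
print(f"{total_tonnes:,.0f} t")  # ~178,099 t, matching the quoted 178,100 t
```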
Mining and prospecting
Since the 1880s, South Africa has been the source of a large proportion of the world's gold supply, and about 22% of the gold presently accounted for is from South Africa. Production in 1970 accounted for 79% of the world supply, about 1,480 tonnes. In 2007 China (with 276 tonnes) overtook South Africa as the world's largest gold producer, the first time since 1905 that South Africa had not been the largest.
In 2020, China was the world's leading gold-mining country, followed in order by Russia, Australia, the United States, Canada, and Ghana.
In South America, the controversial Pascua Lama project aims at exploitation of rich fields in the high mountains of the Atacama Desert, at the border between Chile and Argentina.
It has been estimated that up to one-quarter of the yearly global gold production originates from artisanal or small-scale mining.
The city of Johannesburg in South Africa was founded as a result of the Witwatersrand Gold Rush, which resulted in the discovery of some of the largest natural gold deposits in recorded history. The gold fields are confined to the northern and north-western edges of the Witwatersrand basin, which is a 5–7 km (3.1–4.3 mi) thick layer of Archean rocks located, in most places, deep under the Free State, Gauteng and surrounding provinces. These Witwatersrand rocks are exposed at the surface on the Witwatersrand, in and around Johannesburg, but also in isolated patches to the south-east and south-west of Johannesburg, as well as in an arc around the Vredefort Dome which lies close to the center of the Witwatersrand basin. From these surface exposures the basin dips extensively, requiring some of the mining to occur at depths of nearly 4,000 m (13,000 ft), making them, especially the Savuka and TauTona mines to the south-west of Johannesburg, the deepest mines on earth. The gold is found only in six areas where Archean rivers from the north and north-west formed extensive pebbly braided river deltas before draining into the "Witwatersrand sea", where the rest of the Witwatersrand sediments were deposited.
The Second Boer War of 1899–1902 between the British Empire and the Afrikaner Boers was at least partly over the rights of miners and possession of the gold wealth in South Africa.
During the 19th century, gold rushes occurred whenever large gold deposits were discovered. The first documented discovery of gold in the United States was at the Reed Gold Mine near Georgeville, North Carolina in 1803. The first major gold strike in the United States occurred in a small north Georgia town called Dahlonega. Further gold rushes occurred in California, Colorado, the Black Hills, Otago in New Zealand, a number of locations across Australia, Witwatersrand in South Africa, and the Klondike in Canada.
The Grasberg mine, located in Papua, Indonesia, is the largest gold mine in the world.
Extraction and refining
Gold extraction is most economical in large, easily mined deposits. Ore grades as little as 0.5 parts per million (ppm) can be economical. Typical ore grades in open-pit mines are 1–5 ppm; ore grades in underground or hard rock mines are usually at least 3 ppm. Because ore grades of 30 ppm are usually needed before gold is visible to the naked eye, in most gold mines the gold is invisible.
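Because a grade of 1 ppm by mass is exactly 1 g of gold per tonne (10^6 g) of ore, the quoted grades translate directly into the amount of ore that must be processed per troy ounce of gold; a minimal sketch:

```python
TROY_OUNCE_G = 31.1034768  # grams per troy ounce

# 1 ppm by mass = 1 g of gold per tonne of ore.
for grade_ppm in (0.5, 1.0, 3.0, 5.0):
    tonnes_of_ore_per_ozt = TROY_OUNCE_G / grade_ppm
    print(f"{grade_ppm:>3} ppm -> ~{tonnes_of_ore_per_ozt:.0f} t of ore per troy ounce")
# At 1 ppm about 31 t of ore yield one troy ounce, consistent with the
# "up to thirty tons of used ore" per ounce figure in the Pollution section below.
```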
The average gold mining and extraction costs were about $317 per troy ounce in 2007, but these can vary widely depending on mining type and ore quality; global mine production amounted to 2,471.1 tonnes.
After initial production, gold is often subsequently refined industrially by the Wohlwill process, which is based on electrolysis, or by the Miller process, that is, chlorination in the melt. The Wohlwill process results in higher purity, but is more complex and is only applied in small-scale installations. Other methods of assaying and purifying smaller amounts of gold include parting and inquartation, as well as cupellation, or refining methods based on the dissolution of gold in aqua regia.
As of 2020, the amount of carbon dioxide (CO2) produced in mining a kilogram of gold is 16 tonnes, while recycling a kilogram of gold produces 53 kilograms of CO2 equivalent. Approximately 30 percent of the global gold supply is recycled and not mined, as of 2020. Corporations are starting to adopt gold recycling, including jewelry companies such as Generation Collection and computer companies including Dell.
Consumption
The consumption of gold produced in the world is about 50% in jewelry, 40% in investments, and 10% in industry.
According to the World Gold Council, China was the world's largest single consumer of gold in 2013, overtaking India.
Pollution
Gold production is associated with contributions to hazardous pollution.
Low-grade gold ore may contain less than one ppm gold metal; such ore is ground and mixed with sodium cyanide to dissolve the gold. Cyanide is a highly poisonous chemical, which can kill living creatures when they are exposed to it in minute quantities. Many cyanide spills from gold mines have occurred in both developed and developing countries, killing aquatic life in long stretches of affected rivers. Environmentalists consider these events major environmental disasters. Up to thirty tons of used ore can be dumped as waste for producing one troy ounce of gold. Gold ore dumps are the source of many heavy elements such as cadmium, lead, zinc, copper, arsenic, selenium and mercury. When sulfide-bearing minerals in these ore dumps are exposed to air and water, the sulfide transforms into sulfuric acid, which in turn dissolves these heavy metals, facilitating their passage into surface water and ground water. This process is called acid mine drainage. These gold ore dumps are long-term, highly hazardous wastes second only to nuclear waste dumps.
It was once common to use mercury to recover gold from ore, but today the use of mercury is largely limited to small-scale individual miners. Minute quantities of mercury compounds can reach water bodies, causing heavy metal contamination. Mercury can then enter into the human food chain in the form of methylmercury. Mercury poisoning in humans causes incurable brain function damage and severe retardation.
Gold extraction is also a highly energy-intensive industry; extracting ore from deep mines and grinding the large quantity of ore for further chemical extraction requires nearly 25 kWh of electricity per gram of gold produced.
Monetary use
Gold has been widely used throughout the world as money, for efficient indirect exchange (versus barter), and to store wealth in hoards. For exchange purposes, mints produce standardized gold bullion coins, bars and other units of fixed weight and purity.
The first known coins containing gold were struck in Lydia, Asia Minor, around 600 BC. The talent coin of gold in use during the periods of Grecian history both before and during the time of the life of Homer weighed between 8.42 and 8.75 grams. From an earlier preference for using silver, European economies re-established the minting of gold as coinage during the thirteenth and fourteenth centuries.
Bills (that mature into gold coin) and gold certificates (convertible into gold coin at the issuing bank) added to the circulating stock of gold standard money in most 19th-century industrial economies.
In preparation for World War I the warring nations moved to fractional gold standards, inflating their currencies to finance the war effort.
Post-war, the victorious countries, most notably Britain, gradually restored gold-convertibility, but international flows of gold via bills of exchange remained embargoed; international shipments were made exclusively for bilateral trades or to pay war reparations.
After World War II gold was replaced by a system of nominally convertible currencies related by fixed exchange rates following the Bretton Woods system. Gold standards and the direct convertibility of currencies to gold have been abandoned by world governments, led in 1971 by the United States' refusal to redeem its dollars in gold. Fiat currency now fills most monetary roles. Switzerland was the last country to tie its currency to gold; this was ended by a referendum in 1999.
Central banks continue to keep a portion of their liquid reserves as gold in some form, and metals exchanges such as the London Bullion Market Association still clear transactions denominated in gold, including future delivery contracts. Today, gold mining output is declining. With the sharp growth of economies in the 20th century, and increasing foreign exchange, the world's gold reserves and their trading market have become a small fraction of all markets, and fixed exchange rates of currencies to gold have been replaced by floating prices for gold and gold futures contracts. Though the gold stock grows by only 1% or 2% per year, very little metal is irretrievably consumed. Inventory above ground would satisfy many decades of industrial and even artisan uses at current prices.
The gold proportion (fineness) of alloys is measured by karat (k). Pure gold (commercially termed fine gold) is designated as 24 karat, abbreviated 24k. English gold coins intended for circulation from 1526 into the 1930s were typically a standard 22k alloy called crown gold, for hardness (American gold coins for circulation after 1837 contain an alloy of 0.900 fine gold, or 21.6k).
Although the prices of some platinum group metals can be much higher, gold has long been considered the most desirable of precious metals, and its value has been used as the standard for many currencies. Gold has been used as a symbol for purity, value, royalty, and particularly roles that combine these properties. Gold as a sign of wealth and prestige was ridiculed by Thomas More in his treatise Utopia. On that imaginary island, gold is so abundant that it is used to make chains for slaves, tableware, and lavatory seats. When ambassadors from other countries arrive, dressed in ostentatious gold jewels and badges, the Utopians mistake them for menial servants, paying homage instead to the most modestly dressed of their party.
The ISO 4217 currency code of gold is XAU. Many holders of gold store it in the form of bullion coins or bars as a hedge against inflation or other economic disruptions, though its efficacy as such has been questioned; historically, it has not proven itself reliable as a hedging instrument. Modern bullion coins for investment or collector purposes do not require good mechanical wear properties; they are typically fine gold at 24k, although the American Gold Eagle and the British gold sovereign continue to be minted in 22k (0.92) metal in historical tradition, and the South African Krugerrand, first released in 1967, is also 22k (0.92).
The special-issue Canadian Gold Maple Leaf coin contains the highest purity gold of any bullion coin, at 99.999% or 0.99999, while the popular-issue Canadian Gold Maple Leaf coin has a purity of 99.99%. In 2006, the United States Mint began producing the American Buffalo gold bullion coin with a purity of 99.99%. The Australian Gold Kangaroos were first coined in 1986 as the Australian Gold Nugget but changed the reverse design in 1989. Other modern coins include the Austrian Vienna Philharmonic bullion coin and the Chinese Gold Panda.
Price
As of September 2017, gold is valued at around $42 per gram ($1,300 per troy ounce).
Like other precious metals, gold is measured by troy weight and by grams. The proportion of gold in the alloy is measured by karat (k), with 24 karat (24k) being pure gold (100%), and lower karat numbers proportionally less (18k = 75%). The purity of a gold bar or coin can also be expressed as a decimal figure ranging from 0 to 1, known as the millesimal fineness, such as 0.995 being nearly pure.
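Since fineness is simply the karat rating divided by 24, the conversion is easy to express; a minimal sketch:

```python
def karat_to_fineness(karat: float) -> float:
    """Convert a karat rating to a 0-1 fineness fraction (24k = pure gold)."""
    return karat / 24.0

for k in (24, 22, 18, 14, 10):
    f = karat_to_fineness(k)
    print(f"{k}k = {f:.3f} fine ({f:.1%} gold)")
# 24k = 1.000, 22k = 0.917 (crown gold), 18k = 0.750, 14k = 0.583, 10k = 0.417
```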
The price of gold is determined through trading in the gold and derivatives markets, but a procedure known as the Gold Fixing in London, originating in September 1919, provides a daily benchmark price to the industry. The afternoon fixing was introduced in 1968 to provide a price when US markets are open.
History
Historically, gold coinage was widely used as currency; when paper money was introduced, it typically was a receipt redeemable for gold coin or bullion. In a monetary system known as the gold standard, a certain weight of gold was given the name of a unit of currency. For a long period, the United States government set the value of the US dollar so that one troy ounce was equal to $20.67 ($0.665 per gram), but in 1934 the dollar was devalued to $35.00 per troy ounce ($1.13/g). By 1961, it was becoming hard to maintain this price, and a pool of US and European banks agreed to manipulate the market to prevent further currency devaluation against increased gold demand.
On 17 March 1968, economic circumstances caused the collapse of the gold pool, and a two-tiered pricing scheme was established whereby gold was still used to settle international accounts at the old $35.00 per troy ounce ($1.13/g) but the price of gold on the private market was allowed to fluctuate; this two-tiered pricing system was abandoned in 1975 when the price of gold was left to find its free-market level. Central banks still hold historical gold reserves as a store of value although the level has generally been declining. The largest gold depository in the world is that of the U.S. Federal Reserve Bank in New York, which holds about 3% of the gold known to exist and accounted for today, as does the similarly laden U.S. Bullion Depository at Fort Knox.
In 2005 the World Gold Council estimated total global gold supply to be 3,859 tonnes and demand to be 3,754 tonnes, giving a surplus of 105 tonnes.
After the Nixon shock of 15 August 1971, the price began to greatly increase, and between 1968 and 2000 the price of gold ranged widely, from a high of $850 per troy ounce ($27.33/g) on 21 January 1980, to a low of $252.90 per troy ounce ($8.13/g) on 21 June 1999 (London Gold Fixing). Prices increased rapidly from 2001, but the 1980 high was not exceeded until 3 January 2008, when a new maximum of $865.35 per troy ounce was set. Another record price was set on 17 March 2008, at $1023.50 per troy ounce ($32.91/g).
In late 2009, gold markets experienced renewed upward momentum due to increased demand and a weakening US dollar. On 2 December 2009, gold reached a new high, closing at $1,217.23. Gold further rallied, hitting new highs in May 2010 after the European Union debt crisis prompted further purchase of gold as a safe asset. On 1 March 2011, gold hit a new all-time high of $1432.57, based on investor concerns regarding ongoing unrest in North Africa as well as in the Middle East.
From April 2001 to August 2011, spot gold prices more than quintupled in value against the US dollar, hitting a new all-time high of $1,913.50 on 23 August 2011, prompting speculation that the long secular bear market had ended and a bull market had returned. However, the price then began a slow decline towards $1200 per troy ounce in late 2014 and 2015.
In August 2020, the gold price picked up to US$2060 per ounce after cumulative growth of 59% from August 2018 to October 2020, a period during which it outpaced the Nasdaq total return of 54%.
Gold futures are traded on the COMEX exchange. These contracts are priced in USD per troy ounce (1 troy ounce = 31.1034768 grams).
Medicinal uses
Medicinal applications of gold and its complexes have a long history dating back thousands of years. Several gold complexes have been applied to treat rheumatoid arthritis, the most frequently used being aurothiomalate, aurothioglucose, and auranofin. Both gold(I) and gold(III) compounds have been investigated as possible anti-cancer drugs. For gold(III) complexes, reduction to gold(0/I) under physiological conditions has to be considered. Stable complexes can be generated using different types of bi-, tri-, and tetradentate ligand systems, and their efficacy has been demonstrated in vitro and in vivo.
Other applications
Jewelry
Because of the softness of pure (24k) gold, it is usually alloyed with base metals for use in jewelry, altering its hardness and ductility, melting point, color and other properties. Alloys with lower karat rating, typically 22k, 18k, 14k or 10k, contain higher percentages of copper or other base metals or silver or palladium in the alloy. Nickel is toxic, and its release from nickel white gold is controlled by legislation in Europe. Palladium-gold alloys are more expensive than those using nickel. High-karat white gold alloys are more resistant to corrosion than are either pure silver or sterling silver. The Japanese craft of Mokume-gane exploits the color contrasts between laminated colored gold alloys to produce decorative wood-grain effects.
By 2014, the gold jewelry industry was escalating despite a dip in gold prices. Demand in the first quarter of 2014 pushed turnover to $23.7 billion according to a World Gold Council report.
Gold solder is used for joining the components of gold jewelry by high-temperature hard soldering or brazing. If the work is to be of hallmarking quality, the gold solder alloy must match the fineness (purity) of the work, and alloy formulas are manufactured to color-match yellow and white gold. Gold solder is usually made in at least three melting-point ranges referred to as Easy, Medium and Hard. By using the hard, high-melting point solder first, followed by solders with progressively lower melting points, goldsmiths can assemble complex items with several separate soldered joints. Gold can also be made into thread and used in embroidery.
Electronics
Only 10% of the world consumption of new gold produced goes to industry, but by far the most important industrial use for new gold is in the fabrication of corrosion-free electrical connectors in computers and other electrical devices. For example, according to the World Gold Council, a typical cell phone may contain 50 mg of gold, worth about 50 cents. But since nearly one billion cell phones are produced each year, a gold value of 50 cents in each phone adds up to $500 million in gold from just this application.
Though gold is attacked by free chlorine, its good conductivity and general resistance to oxidation and corrosion in other environments (including resistance to non-chlorinated acids) has led to its widespread industrial use in the electronic era as a thin-layer coating on electrical connectors, thereby ensuring good connection. For example, gold is used in the connectors of the more expensive electronics cables, such as audio, video and USB cables. The benefit of using gold over other connector metals such as tin in these applications has been debated; gold connectors are often criticized by audio-visual experts as unnecessary for most consumers and seen as simply a marketing ploy. However, the use of gold in other applications, such as electronic sliding contacts in highly humid or corrosive atmospheres and contacts with a very high failure cost (certain computers, communications equipment, spacecraft, jet aircraft engines), remains very common.
Besides sliding electrical contacts, gold is also used in electrical contacts because of its resistance to corrosion, electrical conductivity, ductility and lack of toxicity. Switch contacts are generally subjected to more intense corrosion stress than are sliding contacts. Fine gold wires are used to connect semiconductor devices to their packages through a process known as wire bonding.
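The cell-phone arithmetic above works out as follows; the per-phone figures are the ones quoted from the World Gold Council, and everything else is straightforward unit conversion:

```python
phones_per_year = 1_000_000_000   # "nearly one billion", as quoted above
gold_per_phone_mg = 50.0          # per-phone gold content quoted above
value_per_phone_usd = 0.50        # "about 50 cents", as quoted above

gold_per_year_tonnes = phones_per_year * gold_per_phone_mg / 1e9  # mg -> tonnes
value_per_year_usd = phones_per_year * value_per_phone_usd

print(f"~{gold_per_year_tonnes:.0f} t of gold per year")     # ~50 t
print(f"~${value_per_year_usd / 1e6:.0f} million per year")  # the quoted $500 million
```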
The concentration of free electrons in gold metal is 5.91×10²² cm⁻³. Gold is highly conductive to electricity, and has been used for electrical wiring in some high-energy applications (only silver and copper are more conductive per volume, but gold has the advantage of corrosion resistance). For example, gold electrical wires were used during some of the Manhattan Project's atomic experiments, but large high-current silver wires were used in the calutron isotope separator magnets in the project.
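The quoted electron concentration follows from gold's density and molar mass under the standard assumption of one conduction electron per atom; a minimal check (the density and molar mass are common reference values, not figures from this article):

```python
AVOGADRO = 6.02214076e23    # atoms per mole
DENSITY_G_CM3 = 19.3        # density of gold (assumed reference value)
MOLAR_MASS_G_MOL = 196.967  # molar mass of gold (assumed reference value)

# Assume one conduction electron per gold atom.
n = DENSITY_G_CM3 / MOLAR_MASS_G_MOL * AVOGADRO
print(f"{n:.2e} free electrons per cm^3")  # ~5.90e22, matching the quoted value
```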
It is estimated that 16% of the world's presently accounted-for gold and 22% of the world's silver is contained in electronic technology in Japan.
Medicine
Metallic gold and gold compounds have long been used for medicinal purposes. Gold, usually as the metal, is perhaps the most anciently administered medicine (apparently by shamanic practitioners) and was known to Dioscorides. In medieval times, gold was often seen as beneficial for the health, in the belief that something so rare and beautiful could not be anything but healthy. Even some modern esotericists and forms of alternative medicine assign metallic gold a healing power.
In the 19th century gold had a reputation as an anxiolytic, a therapy for nervous disorders. Depression, epilepsy, migraine, and glandular problems such as amenorrhea and impotence were treated, and most notably alcoholism (Keeley, 1897). The apparent paradox of the actual toxicology of the substance suggests the possibility of serious gaps in the understanding of the action of gold in physiology. Only salts and radioisotopes of gold are of pharmacological value, since elemental (metallic) gold is inert to all chemicals it encounters inside the body (e.g., ingested gold cannot be attacked by stomach acid). Some gold salts do have anti-inflammatory properties and at present two are still used as pharmaceuticals in the treatment of arthritis and other similar conditions in the US (sodium aurothiomalate and auranofin). These drugs have been explored as a means to help to reduce the pain and swelling of rheumatoid arthritis, and also (historically) against tuberculosis and some parasites.
Gold alloys are used in restorative dentistry, especially in tooth restorations, such as crowns and permanent bridges. Gold alloys' slight malleability facilitates the creation of a superior molar mating surface with other teeth and produces results that are generally more satisfactory than those produced by the creation of porcelain crowns. The use of gold crowns in more prominent teeth such as incisors is favored in some cultures and discouraged in others.
Colloidal gold preparations (suspensions of gold nanoparticles) in water are intensely red-colored, and can be made with tightly controlled particle sizes up to a few tens of nanometers across by reduction of gold chloride with citrate or ascorbate ions. Colloidal gold is used in research applications in medicine, biology and materials science. The technique of immunogold labeling exploits the ability of the gold particles to adsorb protein molecules onto their surfaces. Colloidal gold particles coated with specific antibodies can be used as probes for the presence and position of antigens on the surfaces of cells. In ultrathin sections of tissues viewed by electron microscopy, the immunogold labels appear as extremely dense round spots at the position of the antigen.
Gold, or alloys of gold and palladium, are applied as conductive coating to biological specimens and other non-conducting materials such as plastics and glass to be viewed in a scanning electron microscope. The coating, which is usually applied by sputtering with an argon plasma, has a triple role in this application. Gold's very high electrical conductivity drains electrical charge to earth, and its very high density provides stopping power for electrons in the electron beam, helping to limit the depth to which the electron beam penetrates the specimen. This improves definition of the position and topography of the specimen surface and increases the spatial resolution of the image. Gold also produces a high output of secondary electrons when irradiated by an electron beam, and these low-energy electrons are the most commonly used signal source in the scanning electron microscope.
The isotope gold-198 (half-life 2.7 days) is used in nuclear medicine, in some cancer treatments and for treating other diseases.
Cuisine
Gold can be used in food and has the E number 175. In 2016, the European Food Safety Authority published an opinion on the re-evaluation of gold as a food additive. Concerns included the possible presence of minute amounts of gold nanoparticles in the food additive, and that gold nanoparticles have been shown to be genotoxic in mammalian cells in vitro.
Gold leaf, flake or dust is used on and in some gourmet foods, notably sweets and drinks, as a decorative ingredient. Gold flake was used by the nobility in medieval Europe as a decoration in food and drinks, in the form of leaf, flakes or dust, either to demonstrate the host's wealth or in the belief that something that valuable and rare must be beneficial for one's health.
Danziger Goldwasser (German: Gold water of Danzig) or Goldwasser (English: Goldwater) is a traditional German herbal liqueur produced in what is today Gdańsk, Poland, and Schwabach, Germany, and contains flakes of gold leaf. There are also some expensive (c. $1000) cocktails which contain flakes of gold leaf. However, since metallic gold is inert to all body chemistry, it has no taste, it provides no nutrition, and it leaves the body unaltered.
Vark is a foil composed of a pure metal that is sometimes gold, and is used for garnishing sweets in South Asian cuisine.
Miscellanea
Gold produces a deep, intense red color when used as a coloring agent in cranberry glass.
In photography, gold toners are used to shift the color of silver bromide black-and-white prints towards brown or blue tones, or to increase their stability. Used on sepia-toned prints, gold toners produce red tones. Kodak published formulas for several types of gold toners, which use gold as the chloride.
Gold is a good reflector of electromagnetic radiation such as infrared and visible light, as well as radio waves. It is used for the protective coatings on many artificial satellites, in infrared protective faceplates in thermal-protection suits and astronauts' helmets, and in electronic warfare planes such as the EA-6B Prowler.
Gold is used as the reflective layer on some high-end CDs.
Automobiles may use gold for heat shielding. McLaren uses gold foil in the engine compartment of its F1 model.
Gold can be manufactured so thin that it appears semi-transparent. It is used in some aircraft cockpit windows for de-icing or anti-icing by passing electricity through it. The heat produced by the resistance of the gold is enough to prevent ice from forming.
Gold is attacked by and dissolves in alkaline solutions of potassium or sodium cyanide, to form the salt gold cyanide—a technique that has been used in extracting metallic gold from ores in the cyanide process. Gold cyanide is the electrolyte used in commercial electroplating of gold onto base metals and electroforming.
Gold chloride (chloroauric acid) solutions are used to make colloidal gold by reduction with citrate or ascorbate ions. Gold chloride and gold oxide are used to make cranberry or red-colored glass, which, like colloidal gold suspensions, contains evenly sized spherical gold nanoparticles.
Gold, when dispersed in nanoparticles, can act as a heterogeneous catalyst of chemical reactions.
Toxicity
Pure metallic (elemental) gold is non-toxic and non-irritating when ingested and is sometimes used as a food decoration in the form of gold leaf. Metallic gold is also a component of the alcoholic drinks Goldschläger, Gold Strike, and Goldwasser. Metallic gold is approved as a food additive in the EU (E175 in the Codex Alimentarius). Although the gold ion is toxic, the acceptance of metallic gold as a food additive is due to its relative chemical inertness, and resistance to being corroded or transformed into soluble salts (gold compounds) by any known chemical process which would be encountered in the human body.
Soluble compounds (gold salts) such as gold chloride are toxic to the liver and kidneys. Common cyanide salts of gold such as potassium gold cyanide, used in gold electroplating, are toxic by virtue of both their cyanide and gold content. There are rare cases of lethal gold poisoning from potassium gold cyanide. Gold toxicity can be ameliorated with chelation therapy with an agent such as dimercaprol.
Gold metal was voted Allergen of the Year in 2001 by the American Contact Dermatitis Society; gold contact allergies affect mostly women. Despite this, gold is a relatively non-potent contact allergen, in comparison with metals like nickel.
A sample of the fungus Aspergillus niger was found growing from gold mining solution, and was found to contain cyano metal complexes of gold, silver, copper, iron and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides.
See also
References
Further reading
Bachmann, H. G. The lure of gold : an artistic and cultural history (2006) online
Bernstein, Peter L. The Power of Gold: The History of an Obsession (2000) online
Brands, H.W. The Age of Gold: The California Gold Rush and the New American Dream (2003) excerpt
Buranelli, Vincent. Gold : an illustrated history (1979) online wide-ranging popular history
Cassel, Gustav. "The restoration of the gold standard." Economica 9 (1923): 171-185. online
Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939 (Oxford UP, 1992).
Ferguson, Niall. The Ascent of Money - Financial History of the World (2009) online
Hart, Matthew. Gold: The Race for the World's Most Seductive Metal. New York: Simon & Schuster, 2013. ISBN 9781451650020
Johnson, Harry G. "The gold rush of 1968 in retrospect and prospect." American Economic Review 59.2 (1969): 344-348. online
Kwarteng, Kwasi. War and Gold: A Five-Hundred-Year History of Empires, Adventures, and Debt (2014) online
Vilar, Pierre. A History of Gold and Money, 1450 to 1920 (1960). online
Vilches, Elvira. New World Gold: Cultural Anxiety and Monetary Disorder in Early Modern Spain (2010).
External links
"Gold" . Encyclopædia Britannica. Vol. 11 (11th ed.). 1911.
Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Gold www.rsc.org
Gold at The Periodic Table of Videos (University of Nottingham)
Getting Gold 1898 book, www.lateralscience.co.uk
Technical Document on Extraction and Mining of Gold at the Wayback Machine (archived 7 March 2008), www.epa.gov
Gold element information - rsc.org | 176 |
Granuloma annulare | Granuloma annulare (GA) is a common, sometimes chronic skin condition which presents as reddish bumps on the skin arranged in a circle or ring. It can initially occur at any age, though two-thirds of patients are under 30 years old, and it is seen most often in children and young adults. Females are two times as likely to have it as males.
Signs and symptoms
Aside from the visible rash, granuloma annulare is usually asymptomatic. Sometimes the rash may burn or itch. People with GA usually notice a ring of small, firm bumps (papules) over the backs of the forearms, hands or feet, often centered on joints or knuckles. The bumps are caused by the clustering of T cells below the skin. These papules start as very small, pimple-like bumps, which spread over time from that size to dime, quarter, half-dollar size and beyond. Occasionally, multiple rings may join into one. Rarely, GA may appear as a firm nodule under the skin of the arms or legs. It also occurs on the sides and circumferentially at the waist and, without therapy, can continue to be present for many years. Outbreaks continue to develop at the edges of the aging rings.
Causes
The condition is usually seen in otherwise healthy people. Occasionally, it may be associated with diabetes or thyroid disease. It has also been associated with auto-immune diseases such as systemic lupus erythematosus, rheumatoid arthritis, Lyme disease and Addison's disease. At this time, no conclusive connection has been made between patients.
Pathology
Granuloma annulare microscopically consists of dermal epithelioid histiocytes around a central zone of mucin—a so-called palisaded granuloma.
Pathogenesis
Granuloma annulare is an idiopathic condition, though many catalysts have been proposed. Among these are skin trauma, UV exposure, vaccinations, tuberculin skin testing, and Borrelia and viral infections.
The mechanisms proposed at a molecular level vary even more. In 1977, Dahl et al. proposed that since the lesions of GA often display a thickening of, occlusion of, or other trauma to blood vessels, blood vessels may be responsible for GA. From their study of 58 patients, they found that immunoglobulin M (IgM), complement, and fibrinogen were in the blood vessels of GA areas, suggesting that GA may share similarities with an immune-mediated, type 3 reaction, or that chronic immune vasculitis may be involved in the pathogenesis. Another study found evidence suggesting blood vessel involvement, with masses of intercellular fibrin and thickened basal lamina found around capillaries.
Umbert et al. (1976) proposed an alternative pathogenesis: cell-mediated immunity. Their data suggest that lymphokines, such as macrophage-inhibiting factor (MIF), lead to sequestration of macrophages and histiocytes in the dermis. Then, upon lysosomal enzyme release by these sequestered cells, connective tissue damage ensues, which results in GA. Later, these authors found data suggesting that activation of macrophages and fibroblasts is involved in the pathogenesis of GA, and that fibrin and the rare IgM and C3 deposition around vessels were more likely a delayed-type hypersensitivity with resulting tissue and vessel changes rather than an immune-complex mediated disease. Further data have been collected supporting this finding.
Diagnosis
Types
Granuloma annulare may be divided into the following types:
Localized granuloma annulare
Generalized granuloma annulare
Patch-type granuloma annulare
Subcutaneous granuloma annulare
Perforating granuloma annulare
Treatment
Because granuloma annulare is usually asymptomatic and self-limiting, with a course of about two years, initial treatment is generally topical steroids or calcineurin inhibitors; if unimproved with topical treatments, it may be treated with intradermal injections of steroids. If local treatment fails, it may be treated with systemic corticosteroids. Treatment success varies widely, with most patients finding only brief success with the above-mentioned treatments. Most lesions of granuloma annulare disappear in pre-pubertal patients with no treatment within two years, while older patients (50+) have rings for upwards of 20 years. The appearance of new rings years later is not uncommon.
History
The disease was first described in 1895 by Thomas Colcott Fox as a "ringed eruption of the fingers", and it was named granuloma annulare by Henry Radcliffe Crocker in 1902.
See also
Granuloma
Necrobiosis lipoidica
References
External links
DermNet dermal-infiltrative/granuloma-annulare | 177 |
Helicobacter pylori | Helicobacter pylori, previously known as Campylobacter pylori, is a gram-negative, microaerophilic, spiral (helical) bacterium usually found in the stomach. Its helical shape (from which the genus name, Helicobacter, derives) is thought to have evolved in order to penetrate the mucoid lining of the stomach and thereby establish infection. The bacterium was first identified in 1982 by the Australian doctors Barry Marshall and Robin Warren. H. pylori has been associated with cancer of the mucosa-associated lymphoid tissue in the stomach, esophagus, colon, rectum, or tissues around the eye (termed extranodal marginal zone B-cell lymphoma of the cited organ), and of lymphoid tissue in the stomach (termed diffuse large B-cell lymphoma).
H. pylori infection usually has no symptoms but sometimes causes gastritis (stomach inflammation) or ulcers of the stomach or first part of the small intestine. The infection is also associated with the development of certain cancers. Many investigators have suggested that H. pylori causes or prevents a wide range of other diseases, but many of these relationships remain controversial.
Some studies suggest that H. pylori plays an important role in the natural stomach ecology, e.g. by influencing the type of bacteria that colonize the gastrointestinal tract. Other studies suggest that non-pathogenic strains of H. pylori may beneficially normalize stomach acid secretion, and regulate appetite.
In 2015, it was estimated that over 50% of the world's population had H. pylori in their upper gastrointestinal tracts, with this infection (or colonization) being more common in developing countries. In recent decades, however, the prevalence of H. pylori colonization of the gastrointestinal tract has declined in many countries.
Signs and symptoms
Up to 90% of people infected with H. pylori never experience symptoms or complications. However, individuals infected with H. pylori have a 10% to 20% lifetime risk of developing peptic ulcers. Acute infection may appear as an acute gastritis with abdominal pain (stomach ache) or nausea. Where this develops into chronic gastritis, the symptoms, if present, are often those of non-ulcer dyspepsia: stomach pains, nausea, bloating, belching, and sometimes vomiting. Pain typically occurs when the stomach is empty, between meals, and in the early morning hours, but it can also occur at other times. Less common ulcer symptoms include nausea, vomiting, and loss of appetite.
Bleeding in the stomach can also occur, as evidenced by the passage of black stools; prolonged bleeding may cause anemia leading to weakness and fatigue. If bleeding is heavy, hematemesis, hematochezia, or melena may occur. Inflammation of the pyloric antrum, which connects the stomach to the duodenum, is more likely to lead to duodenal ulcers, while inflammation of the corpus (i.e. body of the stomach) is more likely to lead to gastric ulcers. Individuals infected with H. pylori may also develop colorectal or gastric polyps, i.e. non-cancerous growths of tissue projecting from the mucous membranes of these organs. Usually, these polyps are asymptomatic, but gastric polyps may be the cause of dyspepsia, heartburn, bleeding from the upper gastrointestinal tract, and, rarely, gastric outlet obstruction, while colorectal polyps may be the cause of rectal bleeding, anemia, constipation, diarrhea, weight loss, and abdominal pain.
Individuals with chronic H. pylori infection have an increased risk of acquiring a cancer that is directly related to this infection. These cancers are stomach adenocarcinoma, less commonly diffuse large B-cell lymphoma of the stomach, or extranodal marginal zone B-cell lymphomas of the stomach, or, more rarely, of the colon, rectum, esophagus, or ocular adnexa (i.e. orbit, conjunctiva, and/or eyelids). The signs, symptoms, pathophysiology, and diagnoses of these cancers are given in the linked articles.
Microbiology
Morphology
Helicobacter pylori is a helix-shaped (classified as a curved rod, not spirochaete) Gram-negative bacterium about 3 μm long with a diameter of about 0.5 μm. H. pylori can be demonstrated in tissue by Gram stain, Giemsa stain, haematoxylin–eosin stain, Warthin–Starry silver stain, acridine orange stain, and phase-contrast microscopy. It is capable of forming biofilms and can convert from spiral to a possibly viable but nonculturable coccoid form. Helicobacter pylori has four to six flagella at the same location; all gastric and enterohepatic Helicobacter species are highly motile owing to flagella. The characteristic sheathed flagellar filaments of Helicobacter are composed of two copolymerized flagellins, FlaA and FlaB.
Physiology
Helicobacter pylori is microaerophilic – that is, it requires oxygen, but at lower concentration than in the atmosphere. It contains a hydrogenase that can produce energy by oxidizing molecular hydrogen (H2) made by intestinal bacteria. It produces oxidase, catalase, and urease.
H. pylori possesses five major outer membrane protein families. The largest family includes known and putative adhesins. The other four families are porins, iron transporters, flagellum-associated proteins, and proteins of unknown function. Like other typical Gram-negative bacteria, the outer membrane of H. pylori consists of phospholipids and lipopolysaccharide (LPS). The O antigen of LPS may be fucosylated and mimic Lewis blood group antigens found on the gastric epithelium. The outer membrane also contains cholesterol glucosides, which are present in few other bacteria.
Genome
Helicobacter pylori consists of a large diversity of strains, and hundreds of genomes have been completely sequenced. The genome of the strain "26695" consists of about 1.7 million base pairs, with some 1,576 genes. The pan-genome, that is, the combined gene set of 30 sequenced strains, encodes 2,239 protein families (orthologous groups, OGs). Among them, 1,248 OGs are conserved in all 30 strains and represent the universal core. The remaining 991 OGs correspond to the accessory genome, in which 277 OGs are unique (i.e., OGs present in only one strain).
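The core/accessory split above follows mechanically from a gene presence–absence table. The following minimal Python sketch illustrates the classification; the gene and strain labels (apart from 26695) are placeholders invented for the example, not data from the sequencing studies.

def classify_pan_genome(presence):
    """presence: dict mapping an orthologous group (OG) to the set of strains carrying it."""
    n_strains = len(set.union(*presence.values()))
    core = {og for og, strains in presence.items() if len(strains) == n_strains}
    accessory = set(presence) - core          # everything not found in all strains
    unique = {og for og in accessory if len(presence[og]) == 1}
    return core, accessory, unique

# Toy table with 3 strains; the analysis cited above used 30 strains and
# found 2,239 OGs in total: 1,248 core and 991 accessory (277 unique).
presence = {
    "ureA": {"26695", "strainB", "strainC"},  # in all strains -> core
    "cagA": {"26695", "strainB"},             # accessory
    "hypX": {"strainC"},                      # accessory and strain-unique
}
core, accessory, unique = classify_pan_genome(presence)
print(len(core), len(accessory), len(unique))  # -> 1 2 1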
Transcriptome
In 2010, Sharma et al. presented a comprehensive analysis of transcription at single-nucleotide resolution by differential RNA-seq that confirmed the known acid induction of major virulence loci, such as the urease (ure) operon or the cag pathogenicity island (see below). More importantly, this study identified a total of 1,907 transcriptional start sites, 337 primary operons, 126 additional suboperons, and 66 monocistrons. Until 2010, only about 55 transcriptional start sites (TSSs) were known in this species. Notably, 27% of the primary TSSs are also antisense TSSs, indicating that – similar to E. coli – antisense transcription occurs across the entire H. pylori genome. At least one antisense TSS is associated with about 46% of all open reading frames, including many housekeeping genes. About half of the 5′ UTRs are 20–40 nucleotides (nt) in length and support the AAGGag motif located about 6 nt (median distance) upstream of start codons as the consensus Shine–Dalgarno sequence in H. pylori.
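As a concrete illustration of the last point, the sketch below scans 5′ UTR sequences for the AAGGAG Shine–Dalgarno-like motif and reports how far upstream of the start codon it sits. The sequences are invented for the example; a real analysis would use UTRs delimited by the mapped TSSs.

import re

def sd_distance(utr, motif="AAGGAG"):
    """Distance in nt between the end of the motif and the start codon
    (the 5' UTR ends where the start codon begins); None if absent."""
    last = None
    for m in re.finditer(motif, utr):
        last = m  # keep the match closest to the start codon
    return len(utr) - last.end() if last else None

print(sd_distance("TTAAGGAGTTTAAT"))  # -> 6, matching the median quoted above
print(sd_distance("GCGCGCGCGCGCGC"))  # -> None (no motif present)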
Genes involved in virulence and pathogenesis
Study of the H. pylori genome is centered on attempts to understand pathogenesis, the ability of this organism to cause disease. About 29% of the loci have a colonization defect when mutated. Two of the sequenced strains have a roughly 40 kb-long cag pathogenicity island (a common gene sequence believed responsible for pathogenesis) that contains over 40 genes. This pathogenicity island is usually absent from H. pylori strains isolated from humans who are carriers of H. pylori but remain asymptomatic. The cagA gene codes for one of the major H. pylori virulence proteins. Bacterial strains with the cagA gene are associated with an ability to cause ulcers. The cagA gene codes for a relatively long (1186-amino acid) protein. The cag pathogenicity island (PAI) has about 30 genes, part of which code for a complex type IV secretion system. The low GC-content of the cag PAI relative to the rest of the Helicobacter genome suggests the island was acquired by horizontal transfer from another bacterial species. The serine protease HtrA also plays a major role in the pathogenesis of H. pylori. The HtrA protein enables the bacterium to transmigrate across the host epithelium and is also needed for the translocation of CagA. The vacA (Q48245) gene codes for another major H. pylori virulence protein. There are four main subtypes of vacA: s1/m1, s1/m2, s2/m1, and s2/m2. The s1/m1 and s1/m2 subtypes are known to cause increased risk of gastric cancer. This has been linked to the ability of toxigenic VacA to promote the generation of intracellular reservoirs of H. pylori via disruption of the calcium channel TRPML1.
Proteome
The proteins of H. pylori have been systematically analyzed by multiple studies. As a consequence, more than 70% of its proteome has been detected by mass spectrometry and other biochemical methods. About 50% of the proteome has been quantified; that is, the number of copies of each protein present in a typical cell is known. Furthermore, the interactome of H. pylori has been systematically studied, and more than 3,000 protein–protein interactions have been identified. The latter provide information on how proteins interact with each other, e.g. in stable protein complexes or in more dynamic, transient interactions. This in turn helps researchers infer the function of uncharacterized proteins, e.g. an uncharacterized protein that interacts with several proteins of the ribosome is likely also involved in ribosome function. Nevertheless, about a third of all ~1,500 proteins in H. pylori remain uncharacterized, and their function is largely unknown.
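Taken together, the percentages above imply rough absolute counts; the arithmetic below simply restates the figures quoted in this paragraph (the totals are approximate by construction).

# Rough counts implied by the proteome coverage figures quoted above.
total = 1500                    # "~1,500 proteins"
detected = 0.70 * total         # ">70% detected"            -> >1,050
quantified = 0.50 * total       # "~50% quantified"          -> ~750
uncharacterized = total / 3     # "about a third unknown"    -> ~500
print(round(detected), round(quantified), round(uncharacterized))  # 1050 750 500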
Pathophysiology
Adaptation to the stomach
To avoid the acidic environment of the interior of the stomach (lumen), H. pylori uses its flagella to burrow into the mucus lining of the stomach to reach the epithelial cells underneath, where the environment is less acidic. H. pylori is able to sense the pH gradient in the mucus and move towards the less acidic region (chemotaxis). This also keeps the bacteria from being swept into the lumen along with their mucus environment, which is constantly moving from its site of creation at the epithelium to its dissolution at the lumen interface.
H. pylori is found in the mucus, on the inner surface of the epithelium, and occasionally inside the epithelial cells themselves. It adheres to the epithelial cells by producing adhesins, which bind to lipids and carbohydrates in the epithelial cell membrane. One such adhesin, BabA, binds to the Lewis b antigen displayed on the surface of stomach epithelial cells. H. pylori adherence via BabA is acid-sensitive and can be fully reversed by decreased pH. It has been proposed that BabA's acid responsiveness enables adherence while also allowing an effective escape from an unfavorable environment at a pH that is harmful to the organism. Another such adhesin, SabA, binds to increased levels of sialyl-Lewis X (sLeX) antigen expressed on the gastric mucosa. In addition to using chemotaxis to avoid areas of low pH, H. pylori also neutralizes the acid in its environment by producing large amounts of urease, which breaks down the urea present in the stomach to carbon dioxide and ammonia. These react with the strong acids in the environment to produce a neutralized area around H. pylori. Urease knockout mutants are incapable of colonization. In fact, urease expression is not only required for establishing initial colonization but also for maintaining chronic infection.
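Written out as chemistry (a standard textbook rendering of the statement above, not taken verbatim from the source), the urease reaction hydrolyzes urea, and the resulting ammonia then takes up protons from the gastric acid:

\[ \mathrm{CO(NH_2)_2 + H_2O \xrightarrow{\text{urease}} CO_2 + 2\,NH_3}, \qquad \mathrm{NH_3 + H^+ \rightarrow NH_4^+} \]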
Adaptation of H. pylori to high acidity of stomach
As mentioned above, H. pylori produces large amounts of urease to generate ammonia as one of its adaptations to overcome stomach acidity. Helicobacter pylori arginase, a binuclear Mn(II) metalloenzyme of the ureohydrolase family that is crucial for the pathogenesis of the bacterium in the human stomach, catalyzes the conversion of L-arginine to L-ornithine and urea; ornithine is further converted into polyamines, which are essential for various critical metabolic processes. This provides acid resistance and is thus important for colonization of the bacterium in the gastric epithelium. Arginase of H. pylori also plays a role in evasion of the host immune system, mainly by a proposed mechanism in which arginase competes with host-inducible nitric oxide (NO) synthase for the common substrate L-arginine, and thus reduces the synthesis of NO, an important component of innate immunity and an effective antimicrobial agent able to kill invading pathogens directly. Alterations in the availability of L-arginine and its metabolism into polyamines contribute significantly to the dysregulation of the host immune response to H. pylori infection.
Inflammation, gastritis and ulcer
Helicobacter pylori harms the stomach and duodenal linings by several mechanisms. The ammonia produced to regulate pH is toxic to epithelial cells, as are biochemicals produced by H. pylori such as proteases, vacuolating cytotoxin A (VacA) (which damages epithelial cells, disrupts tight junctions, and causes apoptosis), and certain phospholipases. The cytotoxin-associated gene product CagA can also cause inflammation and is potentially a carcinogen. Colonization of the stomach by H. pylori can result in chronic gastritis, an inflammation of the stomach lining, at the site of infection. Helicobacter cysteine-rich proteins (Hcp), particularly HcpA (hp0211), are known to trigger an immune response, causing inflammation. H. pylori has been shown to increase the levels of COX2 in H. pylori-positive gastritis.
Chronic gastritis is likely to underlie H. pylori-related diseases. Ulcers in the stomach and duodenum result when the consequences of inflammation allow stomach acid and the digestive enzyme pepsin to overwhelm the mechanisms that protect the stomach and duodenal mucous membranes. The location of colonization of H. pylori, which affects the location of the ulcer, depends on the acidity of the stomach.
In people producing large amounts of acid, H. pylori colonizes near the pyloric antrum (exit to the duodenum) to avoid the acid-secreting parietal cells at the fundus (near the entrance to the stomach). In people producing normal or reduced amounts of acid, H. pylori can also colonize the rest of the stomach.
The inflammatory response caused by bacteria colonizing near the pyloric antrum induces G cells in the antrum to secrete the hormone gastrin, which travels through the bloodstream to parietal cells in the fundus. Gastrin stimulates the parietal cells to secrete more acid into the stomach lumen, and over time increases the number of parietal cells, as well. The increased acid load damages the duodenum, which may eventually result in ulcers forming in the duodenum.
When H. pylori colonizes other areas of the stomach, the inflammatory response can result in atrophy of the stomach lining and eventually ulcers in the stomach. This also may increase the risk of stomach cancer.
Cag pathogenicity island
The pathogenicity of H. pylori may be increased by genes of the cag pathogenicity island; about 50–70% of H. pylori strains in Western countries carry it. Western people infected with strains carrying the cag PAI have a stronger inflammatory response in the stomach and are at a greater risk of developing peptic ulcers or stomach cancer than those infected with strains lacking the island. Following attachment of H. pylori to stomach epithelial cells, the type IV secretion system expressed by the cag PAI "injects" the inflammation-inducing agent peptidoglycan, from the bacterium's own cell wall, into the epithelial cells. The injected peptidoglycan is recognized by the cytoplasmic pattern recognition receptor (immune sensor) Nod1, which then stimulates expression of cytokines that promote inflammation. The type IV secretion apparatus also injects the cag PAI-encoded protein CagA into the stomach's epithelial cells, where it disrupts the cytoskeleton, adherence to adjacent cells, intracellular signaling, cell polarity, and other cellular activities. Once inside the cell, the CagA protein is phosphorylated on tyrosine residues by a host cell membrane-associated tyrosine kinase (TK). CagA then allosterically activates protein tyrosine phosphatase/protooncogene Shp2. Pathogenic strains of H. pylori have been shown to activate the epidermal growth factor receptor (EGFR), a membrane protein with a TK domain. Activation of the EGFR by H. pylori is associated with altered signal transduction and gene expression in host epithelial cells that may contribute to pathogenesis. A C-terminal region of the CagA protein (amino acids 873–1002) has also been suggested to be able to regulate host cell gene transcription, independent of protein tyrosine phosphorylation. A great deal of diversity exists between strains of H. pylori, and the strain that infects a person may help predict the outcome.
Cancer
Two related mechanisms by which H. pylori could promote cancer are under investigation. One mechanism involves the enhanced production of free radicals near H. pylori and an increased rate of host cell mutation. The other proposed mechanism has been called a "perigenetic pathway", and involves enhancement of the transformed host cell phenotype by means of alterations in cell proteins, such as adhesion proteins. H. pylori has been proposed to induce inflammation and locally high levels of TNF-α and/or interleukin 6 (IL-6). According to the proposed perigenetic mechanism, inflammation-associated signaling molecules, such as TNF-α, can alter gastric epithelial cell adhesion and lead to the dispersion and migration of mutated epithelial cells without the need for additional mutations in tumor suppressor genes, such as genes that code for cell adhesion proteins. The strain of H. pylori a person is exposed to may influence the risk of developing gastric cancer. Strains of H. pylori that produce high levels of two proteins, vacuolating toxin A (VacA) and the cytotoxin-associated gene A (CagA), appear to cause greater tissue damage than those that produce lower levels or that lack those genes completely. These proteins are directly toxic to cells lining the stomach and signal strongly to the immune system that an invasion is under way. As a result of the bacterial presence, neutrophils and macrophages set up residence in the tissue to fight the bacterial assault. H. pylori is a major source of worldwide cancer mortality. Although the data vary between countries, overall about 1% to 3% of people infected with Helicobacter pylori develop gastric cancer in their lifetime, compared to 0.13% of individuals who have had no H. pylori infection. H. pylori infection is very prevalent. As evaluated in 2002, it is present in the gastric tissues of 74% of middle-aged adults in developing countries and 58% in developed countries. Since 1% to 3% of infected individuals are likely to develop gastric cancer, H. pylori-induced gastric cancer is the third highest cause of worldwide cancer mortality as of 2018. Infection by H. pylori causes no symptoms in about 80% of those infected. About 75% of individuals infected with H. pylori develop gastritis. Thus, the usual consequence of H. pylori infection is chronic asymptomatic gastritis. Because of the usual lack of symptoms, when gastric cancer is finally diagnosed it is often fairly advanced. More than half of gastric cancer patients have lymph node metastasis when they are initially diagnosed. The gastritis caused by H. pylori is accompanied by inflammation, characterized by infiltration of neutrophils and macrophages to the gastric epithelium, which favors the accumulation of pro-inflammatory cytokines and reactive oxygen species/reactive nitrogen species (ROS/RNS). The substantial presence of ROS/RNS causes DNA damage including 8-oxo-2′-deoxyguanosine (8-OHdG). If the infecting H. pylori carry the cytotoxic cagA gene (present in about 60% of Western isolates and a higher percentage of Asian isolates), they can increase the level of 8-OHdG in gastric cells by 8-fold, while if the H. pylori do not carry the cagA gene, the increase in 8-OHdG is about 4-fold. In addition to the oxidative DNA damage 8-OHdG, H. pylori infection causes other characteristic DNA damage, including DNA double-strand breaks. H. pylori also causes many epigenetic alterations linked to cancer development. These epigenetic alterations are due to H. pylori-induced methylation of CpG sites in promoters of genes and H. pylori-induced altered expression of multiple microRNAs. As reviewed by Santos and Ribeiro, H. pylori infection is associated with epigenetically reduced efficiency of the DNA repair machinery, which favors the accumulation of mutations and genomic instability as well as gastric carcinogenesis. In particular, Raza et al. showed that expression of two DNA repair proteins, ERCC1 and PMS2, was severely reduced once H. pylori infection had progressed to cause dyspepsia. Dyspepsia occurs in about 20% of infected individuals. In addition, as reviewed by Raza et al., human gastric infection with H. pylori causes epigenetically reduced protein expression of the DNA repair proteins MLH1, MGMT, and MRE11. Reduced DNA repair in the presence of increased DNA damage increases carcinogenic mutations and is likely a significant cause of H. pylori carcinogenesis.
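To put the lifetime-risk figures quoted above in perspective, the implied relative risk follows from a back-of-the-envelope calculation (simple arithmetic on the 1–3% and 0.13% values, nothing more):

# Implied relative risk, using the lifetime-risk figures quoted above.
uninfected = 0.0013            # 0.13% lifetime gastric cancer risk
for infected in (0.01, 0.03):  # 1% to 3% risk with H. pylori infection
    print(f"{infected:.0%} vs 0.13% -> ~{infected / uninfected:.0f}x relative risk")
# -> roughly 8x at 1% and 23x at 3%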
Survival of Helicobacter pylori
The pathogenesis of H. pylori depends on its ability to survive in the harsh gastric environment characterized by acidity, peristalsis, and attack by phagocytes accompanied by release of reactive oxygen species. In particular, H. pylori elicits an oxidative stress response during host colonization. This oxidative stress response induces potentially lethal and mutagenic oxidative DNA adducts in the H. pylori genome. Vulnerability to oxidative stress and oxidative DNA damage occurs commonly in many studied bacterial pathogens, including Neisseria gonorrhoeae, Haemophilus influenzae, Streptococcus pneumoniae, S. mutans, and H. pylori. For each of these pathogens, surviving the DNA damage induced by oxidative stress appears supported by transformation-mediated recombinational repair. Thus, transformation and recombinational repair appear to contribute to successful infection.
Transformation (the transfer of DNA from one bacterial cell to another through the intervening medium) appears to be part of an adaptation for DNA repair. H. pylori is naturally competent for transformation. While many organisms are competent only under certain environmental conditions, such as starvation, H. pylori is competent throughout logarithmic growth. All organisms encode genetic programs for response to stressful conditions, including those that cause DNA damage. In H. pylori, homologous recombination is required for repairing DNA double-strand breaks (DSBs). The AddAB helicase-nuclease complex resects DSBs and loads RecA onto single-strand DNA (ssDNA), which then mediates strand exchange, leading to homologous recombination and repair. The requirement of RecA plus AddAB for efficient gastric colonization suggests that, in the stomach, H. pylori is either exposed to double-strand DNA damage that must be repaired or requires some other recombination-mediated event. In particular, natural transformation is increased by DNA damage in H. pylori, and a connection exists between the DNA damage response and DNA uptake in H. pylori, suggesting natural competence contributes to persistence of H. pylori in its human host and explains the retention of competence in most clinical isolates.
RuvC protein is essential to the process of recombinational repair, since it resolves intermediates in this process termed Holliday junctions. H. pylori mutants that are defective in RuvC have increased sensitivity to DNA-damaging agents and to oxidative stress, exhibit reduced survival within macrophages, and are unable to establish successful infection in a mouse model. Similarly, RecN protein plays an important role in DSB repair in H. pylori. An H. pylori recN mutant displays an attenuated ability to colonize mouse stomachs, highlighting the importance of recombinational DNA repair in survival of H. pylori within its host.
Diagnosis
Colonization with H. pylori is not a disease in itself, but a condition associated with a number of disorders of the upper gastrointestinal tract. Testing is recommended if peptic ulcer disease or low-grade gastric MALT lymphoma (MALToma) is present, after endoscopic resection of early gastric cancer, for first-degree relatives with gastric cancer, and in certain cases of dyspepsia. Several methods of testing exist, including invasive and noninvasive testing methods.
Noninvasive tests for H. pylori infection may be suitable and include blood antibody tests, stool antigen tests, or the carbon urea breath test (in which the patient drinks 14C- or 13C-labelled urea, which the bacterium metabolizes, producing labelled carbon dioxide that can be detected in the breath). It is not known for certain which non-invasive test is most accurate for diagnosing H. pylori infection, but indirect comparison suggests the urea breath test has higher accuracy than the others. An endoscopic biopsy is an invasive means to test for H. pylori infection. Low-level infections can be missed by biopsy, so multiple samples are recommended. The most accurate method for detecting H. pylori infection is with a histological examination from two sites after endoscopic biopsy, combined with either a rapid urease test or microbial culture.
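For the 13C version of the breath test, results are commonly reported as a delta-over-baseline (DOB) value: the rise in the 13C/12C ratio of exhaled CO2 after the labelled urea is ingested. The sketch below shows that calculation; the 4.0 per-mil cutoff is an assumed, protocol-dependent threshold, and the spectrometer readings are invented for the example.

# Illustrative delta-over-baseline (DOB) readout for a 13C urea breath
# test. Cutoffs vary by protocol; 4.0 per mil is assumed here.
CUTOFF = 4.0  # per mil (an assumption, not a universal standard)

def dob(post, baseline):
    """DOB in per mil: rise in breath 13CO2 after labelled urea."""
    return post - baseline

baseline, post = -24.1, -12.3   # hypothetical delta-13C readings
print(dob(post, baseline))      # -> 11.8
print("positive" if dob(post, baseline) >= CUTOFF else "negative")  # -> positive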
Transmission
Helicobacter pylori is contagious, although the exact route of transmission is not known.
Person-to-person transmission by either the oral–oral (kissing, mouth feeding) or fecal–oral route is most likely. Consistent with these transmission routes, the bacteria have been isolated from feces, saliva, and dental plaque of some infected people. Findings suggest H. pylori is more easily transmitted by gastric mucus than saliva. Transmission occurs mainly within families in developed nations, whereas in developing countries the infection can also be acquired from the wider community. H. pylori may also be transmitted orally by means of fecal matter through the ingestion of waste-tainted water, so a hygienic environment could help decrease the risk of H. pylori infection.
Prevention
Due to H. pylori's role as a major cause of certain diseases (particularly cancers) and its consistently increasing antibiotic resistance, there is a clear need for new therapeutic strategies to prevent or remove the bacterium from colonizing humans. Much work has been done on developing viable vaccines aimed at providing an alternative strategy to control H. pylori infection and related diseases. Researchers are studying different adjuvants, antigens, and routes of immunization to ascertain the most appropriate system of immune protection; however, most of the research only recently moved from animal to human trials. An economic evaluation of the use of a potential H. pylori vaccine in babies found its introduction could, at least in the Netherlands, prove cost-effective for the prevention of peptic ulcer and stomach adenocarcinoma. A similar approach has also been studied for the United States. Notwithstanding this proof-of-concept (i.e. vaccination protects children from acquisition of infection with H. pylori), as of late 2019 there have been no advanced vaccine candidates and only one vaccine in a Phase I clinical trial. Furthermore, development of a vaccine against H. pylori has not been a current priority of major pharmaceutical companies. Many investigations have attempted to prevent the development of Helicobacter pylori-related diseases by eradicating the bacterium during the early stages of its infestation using antibiotic-based drug regimens. Studies find that such treatments, when effectively eradicating H. pylori from the stomach, reduce the inflammation and some of the histopathological abnormalities associated with the infestation. However, studies disagree on the ability of these treatments to alleviate the more serious histopathological abnormalities in H. pylori infections, e.g. gastric atrophy and metaplasia, both of which are precursors to gastric adenocarcinoma. There is similar disagreement on the ability of antibiotic-based regimens to prevent gastric adenocarcinoma. A meta-analysis (i.e. a statistical analysis that combines the results of multiple randomized controlled trials) published in 2014 found that these regimens did not appear to prevent development of this adenocarcinoma. However, two subsequent prospective cohort studies conducted on high-risk individuals in China and Taiwan found that eradication of the bacterium produced a significant decrease in the number of individuals developing the disease. These results agreed with a retrospective cohort study done in Japan and published in 2016, as well as a meta-analysis, also published in 2016, of 24 studies conducted on individuals with varying levels of risk for developing the disease. These more recent studies suggest that the eradication of H. pylori infection reduces the incidence of H. pylori-related gastric adenocarcinoma in individuals at all levels of baseline risk. Further studies will be required to clarify this issue. In any event, studies agree that antibiotic-based regimens effectively reduce the occurrence of metachronous H. pylori-associated gastric adenocarcinoma. (Metachronous cancers are cancers that recur 6 months or later after resection of the original cancer.) It is suggested that antibiotic-based drug regimens be used after resecting H. pylori-associated gastric adenocarcinoma in order to reduce its metachronous recurrence.
Treatment
Gastritis
Superficial gastritis, either acute or chronic, is the most common manifestation of H. pylori infection. The signs and symptoms of this gastritis have been found to remit spontaneously in many individuals without resorting to Helicobacter pylori eradication protocols. The H. pylori bacterial infection persists after remission in these cases. Various antibiotic plus proton pump inhibitor drug regimens are used to eradicate the bacterium and thereby successfully treat the disorder; triple-drug therapy consisting of clarithromycin, amoxicillin, and a proton-pump inhibitor given for 14–21 days is often considered first-line treatment.
Peptic ulcers
Once H. pylori is detected in a person with a peptic ulcer, the normal procedure is to eradicate it and allow the ulcer to heal. The standard first-line therapy is a one-week "triple therapy" consisting of proton-pump inhibitors such as omeprazole and the antibiotics clarithromycin and amoxicillin. (The actions of proton pump inhibitors against H. pylori may reflect their direct bacteriostatic effect due to inhibition of the bacterium's P-type ATPase and/or urease.) Variations of the triple therapy have been developed over the years, such as using a different proton pump inhibitor, as with pantoprazole or rabeprazole, or replacing amoxicillin with metronidazole for people who are allergic to penicillin. In areas with higher rates of clarithromycin resistance, other options are recommended. Such therapy has revolutionized the treatment of peptic ulcers and has made a cure for the disease possible. Previously, the only option was symptom control using antacids, H2-antagonists, or proton pump inhibitors alone.
Antibiotic-resistant disease
An increasing number of infected individuals are found to harbor antibiotic-resistant bacteria. This results in initial treatment failure and requires additional rounds of antibiotic therapy or alternative strategies, such as a quadruple therapy, which adds a bismuth colloid such as bismuth subsalicylate. In patients with any previous macrolide exposure or who are allergic to penicillin, a quadruple therapy consisting of a proton pump inhibitor, bismuth, tetracycline, and a nitroimidazole for 10–14 days is a recommended first-line treatment option. For the treatment of clarithromycin-resistant strains of H. pylori, the use of levofloxacin as part of the therapy has been suggested. Ingesting lactic acid bacteria exerts a suppressive effect on H. pylori infection in both animals and humans, and supplementing with Lactobacillus- and Bifidobacterium-containing yogurt improved the rates of eradication of H. pylori in humans. Symbiotic butyrate-producing bacteria which are normally present in the intestine are sometimes used as probiotics to help suppress H. pylori infections as an adjunct to antibiotic therapy. Butyrate itself is an antimicrobial which destroys the cell envelope of H. pylori by inducing regulatory T cell expression (specifically, FOXP3) and synthesis of an antimicrobial peptide called LL-37, which arises through its action as a histone deacetylase inhibitor. The substance sulforaphane, which occurs in broccoli and cauliflower, has been proposed as a treatment. Periodontal therapy or scaling and root planing has also been suggested as an additional treatment.
Cancers
Extranodal marginal zone B-cell lymphomas
Extranodal marginal zone B-cell lymphomas (also termed MALT lymphomas) are generally indolent malignancies. Recommended treatment of H. pylori-positive extranodal marginal zone B-cell lymphoma of the stomach, when localized (i.e. Ann Arbor stage I and II), employs one of the antibiotic-proton pump inhibitor regimens listed in the H. pylori eradication protocols. If the initial regimen fails to eradicate the pathogen, patients are treated with an alternate protocol. Eradication of the pathogen is successful in 70–95% of cases. Some 50–80% of patients who experience eradication of the pathogen develop a remission and long-term clinical control of their lymphoma within 3–28 months. Radiation therapy to the stomach and surrounding (i.e. peri-gastric) lymph nodes has also been used to successfully treat these localized cases. Patients with non-localized (i.e. systemic Ann Arbor stage III and IV) disease who are free of symptoms have been treated with watchful waiting or, if symptomatic, with the immunotherapy drug rituximab (given for 4 weeks) combined with the chemotherapy drug chlorambucil for 6–12 months; these patients attain a 58% progression-free survival rate at 5 years. Frail stage III/IV patients have been successfully treated with rituximab or the chemotherapy drug cyclophosphamide alone. Only rare cases of H. pylori-positive extranodal marginal zone B-cell lymphoma of the colon have been successfully treated with an antibiotic-proton pump inhibitor regimen; the currently recommended treatments for this disease are surgical resection, endoscopic resection, radiation, chemotherapy, or, more recently, rituximab. In the few reported cases of H. pylori-positive extranodal marginal zone B-cell lymphoma of the esophagus, localized disease has been successfully treated with antibiotic-proton pump inhibitor regimens; however, advanced disease appears less responsive or unresponsive to these regimens but partially responsive to rituximab. Antibiotic-proton pump inhibitor eradication therapy and localized radiation therapy have been used successfully to treat H. pylori-positive extranodal marginal zone B-cell lymphomas of the rectum; however, radiation therapy has given slightly better results and has therefore been suggested as the preferred treatment for this disease. The treatment of localized H. pylori-positive extranodal marginal zone B-cell lymphoma of the ocular adnexa with antibiotic/proton pump inhibitor regimens has achieved 2-year and 5-year failure-free survival rates of 67% and 55%, respectively, and a 5-year progression-free rate of 61%. However, the generally recognized treatment of choice for patients with systemic involvement uses various chemotherapy drugs, often combined with rituximab.
Diffuse large B-cell lymphoma
Diffuse large B-cell lymphoma is a far more aggressive cancer than extranodal marginal zone B-cell lymphoma. Cases of this malignancy that are H. pylori-positive may be derived from the latter lymphoma and are less aggressive as well as more susceptible to treatment than H. pylori-negative cases. Several recent studies strongly suggest that localized, early-stage H. pylori-positive diffuse large B-cell lymphoma, when limited to the stomach, can be successfully treated with antibiotic-proton pump inhibitor regimens. However, these studies also agree that, given the aggressiveness of diffuse large B-cell lymphoma, patients treated with one of these H. pylori eradication regimens need to be carefully followed. If found unresponsive to or clinically worsening on these regimens, these patients should be switched to more conventional therapy such as chemotherapy (e.g. CHOP or a CHOP-like regimen), immunotherapy (e.g. rituximab), surgery, and/or local radiotherapy. H. pylori-positive diffuse large B-cell lymphoma has been successfully treated with one or a combination of these methods.
Stomach adenocarcinoma
Helicobacter pylori is linked to the majority of gastric adenocarcinoma cases, particularly those located outside of the stomach's cardia (i.e. the esophagus-stomach junction). The treatment for this cancer is highly aggressive, with even localized disease being treated sequentially with chemotherapy and radiotherapy before surgical resection. Since this cancer, once developed, is independent of H. pylori infection, antibiotic-proton pump inhibitor regimens are not used in its treatment.
Prognosis
Helicobacter pylori colonizes the stomach and induces chronic gastritis, a long-lasting inflammation of the stomach. The bacterium persists in the stomach for decades in most people. Most individuals infected by H. pylori never experience clinical symptoms, despite having chronic gastritis. About 10–20% of those colonized by H. pylori ultimately develop gastric and duodenal ulcers. H. pylori infection is also associated with a 1–2% lifetime risk of stomach cancer and a less than 1% risk of gastric MALT lymphoma. In the absence of treatment, H. pylori infection – once established in its gastric niche – is widely believed to persist for life. In the elderly, however, infection likely can disappear as the stomach's mucosa becomes increasingly atrophic and inhospitable to colonization. The proportion of acute infections that persist is not known, but several studies that followed the natural history in populations have reported apparent spontaneous elimination. It is possible for H. pylori to re-establish in a person after eradication. This recurrence can be caused by the original strain (recrudescence) or by a different strain (reinfection). According to a 2017 meta-analysis by Hu et al., the global per-person annual rates of recurrence, reinfection, and recrudescence are 4.3%, 3.1%, and 2.2%, respectively. It is unclear what the main risk factors are. Mounting evidence suggests H. pylori has an important role in protection from some diseases. The incidence of acid reflux disease, Barrett's esophagus, and esophageal cancer has been rising dramatically at the same time as H. pylori's presence decreases. In 1996, Martin J. Blaser advanced the hypothesis that H. pylori has a beneficial effect by regulating the acidity of the stomach contents. The hypothesis is not universally accepted, as several randomized controlled trials failed to demonstrate worsening of acid reflux disease symptoms following eradication of H. pylori. Nevertheless, Blaser has reasserted his view that H. pylori is a member of the normal flora of the stomach. He postulates that the changes in gastric physiology caused by the loss of H. pylori account for the recent increase in incidence of several diseases, including type 2 diabetes, obesity, and asthma. His group has recently shown that H. pylori colonization is associated with a lower incidence of childhood asthma.
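If the roughly 4.3% annual recurrence rate quoted above were constant (a simplifying assumption; real rates vary by region and over time), the cumulative chance of at least one recurrence compounds as sketched below.

# Cumulative probability of at least one recurrence, assuming a
# constant 4.3% annual per-person rate (a simplification of the
# meta-analysis figure quoted above).
annual_rate = 0.043
for years in (1, 5, 10):
    cumulative = 1 - (1 - annual_rate) ** years
    print(f"{years:2d} years: {cumulative:.1%}")
# -> 4.3%, ~19.7%, ~35.6%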
Epidemiology
At least half the world's population is infected by the bacterium, making it the most widespread infection in the world. Actual infection rates vary from nation to nation; the developing world has much higher infection rates than the developed one (notably Western Europe, North America, Australasia), where rates are estimated to be around 25%. The age when someone acquires this bacterium seems to influence the pathologic outcome of the infection. People infected at an early age are likely to develop more intense inflammation that may be followed by atrophic gastritis with a higher subsequent risk of gastric ulcer, gastric cancer, or both. Acquisition at an older age brings different gastric changes more likely to lead to duodenal ulcer. Infections are usually acquired in early childhood in all countries. However, the infection rate of children in developing nations is higher than in industrialized nations, probably due to poor sanitary conditions, perhaps combined with lower antibiotic usage for unrelated pathologies. In developed nations, it is currently uncommon to find infected children, but the percentage of infected people increases with age, with about 50% infected for those over the age of 60 compared with around 10% between 18 and 30 years. The higher prevalence among the elderly reflects higher infection rates in the past when the individuals were children rather than more recent infection at a later age of the individual. In the United States, prevalence appears higher in African-American and Hispanic populations, most likely due to socioeconomic factors. The lower rate of infection in the West is largely attributed to higher hygiene standards and widespread use of antibiotics. Despite high rates of infection in certain areas of the world, the overall frequency of H. pylori infection is declining. However, antibiotic resistance is appearing in H. pylori; many metronidazole- and clarithromycin-resistant strains are found in most parts of the world.
History
Helicobacter pylori migrated out of Africa along with its human host circa 60,000 years ago. Recent research states that genetic diversity in H. pylori, like that of its host, decreases with geographic distance from East Africa. Using the genetic diversity data, researchers have created simulations that indicate the bacteria seem to have spread from East Africa around 58,000 years ago. Their results indicate modern humans were already infected by H. pylori before their migrations out of Africa, and it has remained associated with human hosts since that time. H. pylori was first discovered in the stomachs of patients with gastritis and ulcers in 1982 by Drs. Barry Marshall and Robin Warren of Perth, Western Australia. At the time, the conventional thinking was that no bacterium could live in the acid environment of the human stomach. In recognition of their discovery, Marshall and Warren were awarded the 2005 Nobel Prize in Physiology or Medicine. Before the research of Marshall and Warren, German scientists found spiral-shaped bacteria in the lining of the human stomach in 1875, but they were unable to culture them, and the results were eventually forgotten. The Italian researcher Giulio Bizzozero described similarly shaped bacteria living in the acidic environment of the stomach of dogs in 1893. Professor Walery Jaworski of the Jagiellonian University in Kraków investigated sediments of gastric washings obtained by lavage from humans in 1899. Among some rod-like bacteria, he also found bacteria with a characteristic spiral shape, which he called Vibrio rugula. He was the first to suggest a possible role of this organism in the pathogenesis of gastric diseases. His work was included in the Handbook of Gastric Diseases, but it had little impact, as it was written in Polish. Several small studies conducted in the early 20th century demonstrated the presence of curved rods in the stomachs of many people with peptic ulcers and stomach cancers. Interest in the bacteria waned, however, when an American study published in 1954 failed to observe the bacteria in 1,180 stomach biopsies. Interest in understanding the role of bacteria in stomach diseases was rekindled in the 1970s, with the visualization of bacteria in the stomachs of people with gastric ulcers. The bacteria had also been observed in 1979 by Robin Warren, who researched them further with Barry Marshall from 1981. After unsuccessful attempts at culturing the bacteria from the stomach, they finally succeeded in visualizing colonies in 1982, when they unintentionally left their Petri dishes incubating for five days over the Easter weekend. In their original paper, Warren and Marshall contended that most stomach ulcers and gastritis were caused by bacterial infection and not by stress or spicy food, as had been assumed before. Some skepticism was expressed initially, but within a few years multiple research groups had verified the association of H. pylori with gastritis and, to a lesser extent, ulcers. To demonstrate that H. pylori caused gastritis and was not merely a bystander, Marshall drank a beaker of H. pylori culture. He became ill with nausea and vomiting several days later. An endoscopy 10 days after inoculation revealed signs of gastritis and the presence of H. pylori. These results suggested H. pylori was the causative agent. Marshall and Warren went on to demonstrate that antibiotics are effective in the treatment of many cases of gastritis.
In 1994, the National Institutes of Health stated most recurrent duodenal and gastric ulcers were caused by H. pylori, and recommended antibiotics be included in the treatment regimen. The bacterium was initially named Campylobacter pyloridis, then renamed C. pylori in 1987 (pylori being the genitive of pylorus, the circular opening leading from the stomach into the duodenum, from the Ancient Greek word πυλωρός, which means gatekeeper). When 16S ribosomal RNA gene sequencing and other research showed in 1989 that the bacterium did not belong in the genus Campylobacter, it was placed in its own genus, Helicobacter, from the Ancient Greek έλιξ (hělix), "spiral" or "coil". In October 1987, a group of experts met in Copenhagen to found the European Helicobacter Study Group (EHSG), an international multidisciplinary research group and the only institution focused on H. pylori. The Group is involved with the Annual International Workshop on Helicobacter and Related Bacteria, the Maastricht Consensus Reports (European Consensus on the management of H. pylori), and other educational and research projects, including two international long-term projects:
European Registry on H. pylori Management (Hp-EuReg) – a database systematically registering the routine clinical practice of European gastroenterologists.
Optimal H. pylori management in primary care (OptiCare) – a long-term educational project aiming to disseminate the evidence-based recommendations of the Maastricht IV Consensus to primary care physicians in Europe, funded by an educational grant from United European Gastroenterology.
Research
Results from in vitro studies suggest that fatty acids, mainly polyunsaturated fatty acids, have a bactericidal effect against H. pylori, but their in vivo effects have not been proven.
See also
List of oncogenic bacteria
Infectious causes of cancer
Explanatory footnotes
References
External links
"Information on tests for H. pylori". National Institutes of Health. U.S. Department of Health and Human Services. Archived from the original on 13 June 2013.
"European Helicobacter Study Group (EHSG)".
"Type strain of Helicobacter pylori at BacDive". Bacterial Diversity Metadatabase.
"Helicobacter pylori". Genome. KEGG. Japan. 26695. | 178 |
Hemorrhoid | Hemorrhoids (or haemorrhoids), also known as piles, are vascular structures in the anal canal. In their normal state, they are cushions that help with stool control. They become a disease when swollen or inflamed; the unqualified term "hemorrhoid" is often used to refer to the disease. The signs and symptoms of hemorrhoids depend on the type present. Internal hemorrhoids often result in painless, bright red rectal bleeding when defecating. External hemorrhoids often result in pain and swelling in the area of the anus. If bleeding occurs, it is usually darker. Symptoms frequently get better after a few days. A skin tag may remain after the healing of an external hemorrhoid. While the exact cause of hemorrhoids remains unknown, a number of factors that increase pressure in the abdomen are believed to be involved. This may include constipation, diarrhea, and sitting on the toilet for long periods. Hemorrhoids are also more common during pregnancy. Diagnosis is made by looking at the area. Many people incorrectly refer to any symptom occurring around the anal area as "hemorrhoids", so serious causes of the symptoms should be ruled out. Colonoscopy or sigmoidoscopy is reasonable to confirm the diagnosis and rule out more serious causes. Often, no specific treatment is needed. Initial measures consist of increasing fiber intake, drinking fluids to maintain hydration, NSAIDs to help with pain, and rest. Medicated creams may be applied to the area, but their effectiveness is poorly supported by evidence. A number of minor procedures may be performed if symptoms are severe or do not improve with conservative management. Surgery is reserved for those who fail to improve following these measures. Approximately 50% to 66% of people have problems with hemorrhoids at some point in their lives. Males and females are both affected with about equal frequency. Hemorrhoids affect people most often between 45 and 65 years of age, and they are more common among the wealthy. Outcomes are usually good. The first known mention of the disease is from a 1700 BCE Egyptian papyrus.
Signs and symptoms
In about 40% of people with pathological hemorrhoids, there are no significant symptoms. Internal and external hemorrhoids may present differently; however, many people may have a combination of the two. Bleeding enough to cause anemia is rare, and life-threatening bleeding is even more uncommon. Many people feel embarrassed when facing the problem and often seek medical care only when the case is advanced.
External
If not thrombosed, external hemorrhoids may cause few problems. However, when thrombosed, hemorrhoids may be very painful. Nevertheless, this pain typically resolves in two to three days. The swelling may, however, take a few weeks to disappear. A skin tag may remain after healing. If hemorrhoids are large and cause issues with hygiene, they may produce irritation of the surrounding skin, and thus itchiness around the anus. Lidocaine is a local anesthetic that blocks sodium channels, preventing the transmission of nerve signals before they reach the central nervous system. As a result, the patient does not feel pain. The drug also has anti-inflammatory properties and is used in treating hemorrhoid symptoms. Lidocaine is not recommended during pregnancy or for those with a local allergy.
Internal
Internal hemorrhoids usually present with painless, bright red rectal bleeding during or following a bowel movement. The blood typically covers the stool (a condition known as hematochezia), is on the toilet paper, or drips into the toilet bowl. The stool itself is usually normally coloured. Other symptoms may include mucous discharge, a perianal mass if they prolapse through the anus, itchiness, and fecal incontinence. Internal hemorrhoids are usually painful only if they become thrombosed or necrotic.
Causes
The exact cause of symptomatic hemorrhoids is unknown. A number of factors are believed to play a role, including irregular bowel habits (constipation or diarrhea), lack of exercise, nutritional factors (low-fiber diets), increased intra-abdominal pressure (prolonged straining, ascites, an intra-abdominal mass, or pregnancy), genetics, an absence of valves within the hemorrhoidal veins, and aging. Other factors believed to increase risk include obesity, prolonged sitting, a chronic cough, and pelvic floor dysfunction. Squatting while defecating may also increase the risk of severe hemorrhoids. Evidence for these associations, however, is poor. During pregnancy, pressure from the fetus on the abdomen and hormonal changes cause the hemorrhoidal vessels to enlarge. The birth of the baby also leads to increased intra-abdominal pressures. Pregnant women rarely need surgical treatment, as symptoms usually resolve after delivery.
Pathophysiology
Hemorrhoid cushions are a part of normal human anatomy and become a pathological disease only when they experience abnormal changes. There are three main cushions present in the normal anal canal. These are located classically at left lateral, right anterior, and right posterior positions. They are composed of neither arteries nor veins, but blood vessels called sinusoids, connective tissue, and smooth muscle. Unlike veins, sinusoids do not have muscle tissue in their walls. This set of blood vessels is known as the hemorrhoidal plexus. Hemorrhoid cushions are important for continence. They contribute to 15–20% of anal closure pressure at rest and protect the internal and external anal sphincter muscles during the passage of stool. When a person bears down, the intra-abdominal pressure grows, and hemorrhoid cushions increase in size, helping maintain anal closure. Hemorrhoid symptoms are believed to result when these vascular structures slide downwards or when venous pressure is excessively increased. Increased internal and external anal sphincter pressure may also be involved in hemorrhoid symptoms. Two types of hemorrhoids occur: internal hemorrhoids arise from the superior hemorrhoidal plexus and external hemorrhoids from the inferior hemorrhoidal plexus. The pectinate line divides the two regions.
Diagnosis
Hemorrhoids are typically diagnosed by physical examination. A visual examination of the anus and surrounding area may diagnose external or prolapsed hemorrhoids. A rectal exam may be performed to detect possible rectal tumors, polyps, an enlarged prostate, or abscesses. This examination may not be possible without appropriate sedation because of pain, although most internal hemorrhoids are not associated with pain. Visual confirmation of internal hemorrhoids may require anoscopy, insertion of a hollow tube device with a light attached at one end. The two types of hemorrhoids are external and internal. These are differentiated by their position with respect to the pectinate line. Some persons may concurrently have symptomatic versions of both. If pain is present, the condition is more likely to be an anal fissure or external hemorrhoid rather than an internal hemorrhoid.
Internal
Internal hemorrhoids originate above the pectinate line. They are covered by columnar epithelium, which lacks pain receptors. They were classified in 1985 into four grades based on the degree of prolapse, as listed below (a minimal lookup sketch mapping each grade to typical management follows the list):
Grade I: No prolapse, just prominent blood vessels
Grade II: Prolapse upon bearing down, but spontaneous reduction
Grade III: Prolapse upon bearing down requiring manual reduction
Grade IV: Prolapse that cannot be manually reduced
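The grading above maps roughly onto the treatment escalation described in the Management section later in this article (banding as first-line for grades I–III, surgery for severe disease). The sketch below encodes that mapping; it is an illustration of the classification, not clinical guidance.

# Illustrative lookup from internal-hemorrhoid grade to the typical
# options mentioned in the Management section below. Not medical advice.
TYPICAL_OPTIONS = {
    1: "conservative measures; rubber band ligation if persistent",
    2: "rubber band ligation (first-line for grades I-III)",
    3: "rubber band ligation; surgery if conservative management fails",
    4: "usually surgical, e.g. excisional hemorrhoidectomy",
}

def options_for(grade: int) -> str:
    return TYPICAL_OPTIONS.get(grade, "unrecognized grade")

print(options_for(2))  # -> rubber band ligation (first-line for grades I-III)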
External
External hemorrhoids occur below the dentate (or pectinate) line. They are covered proximally by anoderm and distally by skin, both of which are sensitive to pain and temperature.
Differential
Many anorectal problems, including fissures, fistulae, abscesses, colorectal cancer, rectal varices, and itching have similar symptoms and may be incorrectly referred to as hemorrhoids. Rectal bleeding may also occur owing to colorectal cancer, colitis including inflammatory bowel disease, diverticular disease, and angiodysplasia. If anemia is present, other potential causes should be considered. Other conditions that produce an anal mass include skin tags, anal warts, rectal prolapse, polyps, and enlarged anal papillae. Anorectal varices due to portal hypertension (increased blood pressure in the portal venous system) may present similarly to hemorrhoids but are a different condition. Portal hypertension does not increase the risk of hemorrhoids.
Prevention
A number of preventative measures are recommended, including avoiding straining while attempting to defecate, avoiding constipation and diarrhea (either by eating a high-fiber diet and drinking plenty of fluid or by taking fiber supplements), and getting sufficient exercise. Spending less time attempting to defecate, avoiding reading while on the toilet, losing weight (for overweight persons), and avoiding heavy lifting are also recommended.
Management
Conservative
Conservative treatment typically consists of foods rich in dietary fiber, intake of oral fluids to maintain hydration, nonsteroidal anti-inflammatory drugs, sitz baths, and rest. Increased fiber intake has been shown to improve outcomes and may be achieved by dietary alterations or the consumption of fiber supplements. Evidence for benefits from sitz baths during any point in treatment, however, is lacking. If they are used, they should be limited to 15 minutes at a time. Decreasing time spent on the toilet and not straining is also recommended. While many topical agents and suppositories are available for the treatment of hemorrhoids, little evidence supports their use. As such, they are not recommended by the American Society of Colon and Rectal Surgeons. Steroid-containing agents should not be used for more than 14 days, as they may cause thinning of the skin. Most agents include a combination of active ingredients. These may include a barrier cream such as petroleum jelly or zinc oxide, an analgesic agent such as lidocaine, and a vasoconstrictor such as epinephrine. Some contain Balsam of Peru, to which certain people may be allergic. Flavonoids are of questionable benefit, with potential side effects. Symptoms usually resolve following pregnancy; thus active treatment is often delayed until after delivery. Evidence does not support the use of traditional Chinese herbal treatment. Several professional organizations weakly recommend the use of phlebotonics in the treatment of the symptoms of hemorrhoids of grades I to II, although as of 2013 these drugs were not approved in the United States or Germany, and in Spain they were restricted to the treatment of chronic venous diseases.
Procedures
A number of office-based procedures may be performed. While generally safe, rare serious side effects such as perianal sepsis may occur.
Rubber band ligation is typically recommended as the first-line treatment in those with grade I to III disease. It is a procedure in which elastic bands are applied onto an internal hemorrhoid at least 1 cm above the pectinate line to cut off its blood supply. Within 5–7 days, the withered hemorrhoid falls off. If the band is placed too close to the pectinate line, intense pain results immediately afterwards. The cure rate has been found to be about 87%, with a complication rate of up to 3%.
Sclerotherapy involves the injection of a sclerosing agent, such as phenol, into the hemorrhoid. This causes the vein walls to collapse and the hemorrhoids to shrivel up. The success rate four years after treatment is about 70%.
A number of cauterization methods have been shown to be effective for hemorrhoids, but are usually used only when other methods fail. This procedure can be done using electrocautery, infrared radiation, laser surgery, or cryosurgery. Infrared cauterization may be an option for grade I or II disease. In those with grade III or IV disease, recurrence rates are high.
Surgery
A number of surgical techniques may be used if conservative management and simple procedures fail. All surgical treatments are associated with some degree of complications, including bleeding, infection, anal strictures, and urinary retention, due to the close proximity of the rectum to the nerves that supply the bladder. Also, a small risk of fecal incontinence occurs, particularly of liquid, with rates reported between 0% and 28%. Mucosal ectropion is another condition which may occur after hemorrhoidectomy (often together with anal stenosis). This is where the anal mucosa becomes everted from the anus, similar to a very mild form of rectal prolapse.
Excisional hemorrhoidectomy is a surgical excision of the hemorrhoid used primarily in severe cases. It is associated with significant postoperative pain and usually requires two to four weeks for recovery. However, the long-term benefit is greater in those with grade III hemorrhoids as compared to rubber band ligation. It is the recommended treatment in those with a thrombosed external hemorrhoid if carried out within 24–72 hours. Evidence to support this is weak, however. Glyceryl trinitrate ointment after the procedure helps both with pain and with healing.
Doppler-guided transanal hemorrhoidal dearterialization is a minimally invasive treatment using an ultrasound Doppler to accurately locate the arterial blood inflow. These arteries are then "tied off" and the prolapsed tissue is sutured back to its normal position. It has a slightly higher recurrence rate but fewer complications compared to a hemorrhoidectomy.
Stapled hemorrhoidectomy, also known as stapled hemorrhoidopexy, involves the removal of much of the abnormally enlarged hemorrhoidal tissue, followed by a repositioning of the remaining hemorrhoidal tissue back to its normal anatomical position. It is generally less painful and is associated with faster healing compared to complete removal of hemorrhoids. However, the chance of symptomatic hemorrhoids returning is greater than for conventional hemorrhoidectomy, so it is typically recommended only for grade II or III disease.
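As a purely illustrative consolidation of the grade-based statements above, the following Python sketch maps each internal-hemorrhoid grade to the procedures the text associates with it. It summarizes the prose only and is not clinical guidance; the grade IV entry is an assumption inferred from the description of excisional hemorrhoidectomy as the option for severe disease.

```python
# Illustrative summary of the prose above; not clinical guidance.
OPTIONS_BY_GRADE = {
    "I":   ["rubber band ligation (first line)", "infrared cauterization"],
    "II":  ["rubber band ligation (first line)", "infrared cauterization",
            "stapled hemorrhoidopexy"],
    "III": ["rubber band ligation (first line)", "excisional hemorrhoidectomy",
            "stapled hemorrhoidopexy"],
    "IV":  ["excisional hemorrhoidectomy"],  # assumption: "severe cases"
}

for grade, options in OPTIONS_BY_GRADE.items():
    print(f"Grade {grade}: {', '.join(options)}")
```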
Epidemiology
It is difficult to determine how common hemorrhoids are, as many people with the condition do not see a healthcare provider. However, symptomatic hemorrhoids are thought to affect at least 50% of the US population at some time during their lives, and around 5% of the population is affected at any given time. Both sexes experience about the same incidence of the condition, with rates peaking between the ages of 45 and 65. They are more common in Caucasians and those of higher socioeconomic status.
Long-term outcomes are generally good, though some people may have recurrent symptomatic episodes. Only a small proportion of people end up needing surgery.
History
The first known mention of this disease is from a 1700 BCE Egyptian papyrus, which advises: "... Thou shouldest give a recipe, an ointment of great protection; acacia leaves, ground, triturated and cooked together. Smear a strip of fine linen there-with and place in the anus, that he recovers immediately." In 460 BCE, the Hippocratic corpus discusses a treatment similar to modern rubber band ligation: "And hemorrhoids in like manner you may treat by transfixing them with a needle and tying them with very thick and woolen thread, for application, and do not foment until they drop off, and always leave one behind; and when the patient recovers, let him be put on a course of Hellebore." Hemorrhoids may have been described in the Bible, with earlier English translations using the now-obsolete spelling "emerods".
Celsus (25 BCE – 14 CE) described ligation and excision procedures and discussed the possible complications. Galen advocated severing the connection of the arteries to veins, claiming it reduced both pain and the spread of gangrene. The Susruta Samhita (4th–5th century BCE) is similar to the words of Hippocrates, but emphasizes wound cleanliness. In the 13th and 14th centuries, European surgeons such as Lanfranc of Milan, Guy de Chauliac, Henri de Mondeville, and John of Arderne made great progress in the development of surgical techniques.
In medieval times, hemorrhoids were also known as Saint Fiacre's curse, after a sixth-century saint who developed them after tilling the soil. The first use of the word "hemorrhoid" in English occurs in 1398, derived from the Old French "emorroides", from Latin hæmorrhoida, in turn from the Greek αἱμορροΐς (haimorrhois), "liable to discharge blood", from αἷμα (haima), "blood", and ῥόος (rhoos), "stream, flow, current", itself from ῥέω (rheo), "to flow, to stream".
Notable cases
Hall-of-Fame baseball player George Brett was removed from a game in the 1980 World Series due to hemorrhoid pain. After undergoing minor surgery, Brett returned to play in the next game, quipping, "My problems are all behind me". Brett underwent further hemorrhoid surgery the following spring. Conservative political commentator Glenn Beck underwent surgery for hemorrhoids, subsequently describing his unpleasant experience in a widely viewed 2008 YouTube video. Former U.S. President Jimmy Carter had surgery for hemorrhoids in 1984. Cricketers Matthew Hayden and Viv Richards have also had the condition.
References
External links
Hemorrhoid at Curlie
Davis, BR; Lee-Kong, SA; Migaly, J; Feingold, DL; Steele, SR (March 2018). "The American Society of Colon and Rectal Surgeons Clinical Practice Guidelines for the Management of Hemorrhoids". Diseases of the Colon and Rectum. 61 (3): 284–292. doi:10.1097/DCR.0000000000001030. PMID 29420423. S2CID 4198610.
Heparin-induced thrombocytopenia | Heparin-induced thrombocytopenia (HIT) is the development of thrombocytopenia (a low platelet count) due to the administration of various forms of heparin, an anticoagulant. HIT predisposes to thrombosis (the abnormal formation of blood clots inside a blood vessel) because platelets release microparticles that activate thrombin, promoting clot formation. When thrombosis is identified, the condition is called heparin-induced thrombocytopenia and thrombosis (HITT). HIT is caused by the formation of abnormal antibodies that activate platelets. If someone receiving heparin develops new or worsening thrombosis, or if the platelet count falls, HIT can be confirmed with specific blood tests.
Treatment of HIT requires stopping heparin and switching to an alternative anticoagulant that protects against thrombosis without reducing the platelet count any further. Several alternatives are available for this purpose; the main ones are danaparoid, fondaparinux, argatroban, and bivalirudin.
While heparin was introduced for clinical use in the 1930s, HIT was not reported until the 1960s.
Signs and symptoms
Heparin may be used for both the prevention and the treatment of thrombosis. It exists in two main forms: an "unfractionated" form that can be injected under the skin (subcutaneously) or through an intravenous infusion, and a "low molecular weight" form that is generally given subcutaneously. Commonly used low molecular weight heparins are enoxaparin, dalteparin, nadroparin, and tinzaparin.
In HIT, the platelet count in the blood falls below the normal range, a condition called thrombocytopenia. However, it is generally not low enough to lead to an increased risk of bleeding, and most people with HIT therefore do not experience any symptoms. Typically, the platelet count falls 5–14 days after heparin is first given; if someone has received heparin in the previous three months, the fall in platelet count may occur sooner, sometimes within a day.
The most common symptom of HIT is enlargement or extension of a previously diagnosed blood clot, or the development of a new blood clot elsewhere in the body. This may take the form of clots in either arteries or veins, causing arterial or venous thrombosis, respectively. Examples of arterial thrombosis are stroke, myocardial infarction ("heart attack"), and acute leg ischemia. Venous thrombosis may occur in the leg or arm in the form of deep vein thrombosis (DVT) and in the lung in the form of a pulmonary embolism (PE); the latter usually originates in the leg but migrates to the lung.
In those receiving heparin through an intravenous infusion, a complex of symptoms ("systemic reaction") may occur when the infusion is started. These include fever, chills, high blood pressure, a fast heart rate, shortness of breath, and chest pain. This happens in about a quarter of people with HIT. Others may develop a skin rash consisting of red spots.
Mechanism
The administration of heparin can cause the development of HIT antibodies, suggesting heparin may act as a hapten and thus be targeted by the immune system. In HIT, the immune system forms antibodies against heparin when it is bound to a protein called platelet factor 4 (PF4). These antibodies are usually of the IgG class, and their development usually takes about 5 days. However, those who have been exposed to heparin in the last few months may still have circulating IgG, as IgG-type antibodies generally continue to be produced even when their precipitant has been removed. This is similar to immunity against certain microorganisms, with the difference that the HIT antibody does not persist more than three months. HIT antibodies have been found in individuals with thrombocytopenia and thrombosis who had no prior exposure to heparin, but the majority are found in people who are receiving heparin.
The IgG antibodies form a complex with heparin and PF4 in the bloodstream. The tail of the antibody then binds to the FcγIIa receptor, a protein on the surface of the platelet. This results in platelet activation and the formation of platelet microparticles, which initiate the formation of blood clots; the platelet count falls as a result, leading to thrombocytopenia. In addition, the reticuloendothelial system (mostly the spleen) removes the antibody-coated platelets, further contributing to the thrombocytopenia.
Formation of PF4-heparin antibodies is common in people receiving heparin, but only a proportion of these develop thrombocytopenia or thrombosis. This has been referred to as an "iceberg phenomenon".
Diagnosis
HIT may be suspected if blood tests show a falling platelet count in someone receiving heparin, even if the heparin has already been discontinued. Professional guidelines recommend that people receiving heparin have a complete blood count (which includes a platelet count) on a regular basis while receiving heparin.
However, not all people with a falling platelet count while receiving heparin turn out to have HIT. The timing and severity of the thrombocytopenia, the occurrence of new thrombosis, and the presence of alternative explanations all determine the likelihood that HIT is present. A commonly used score to predict the likelihood of HIT is the "4 Ts" score, introduced in 2003. A score of 0–8 points is generated; if the score is 0–3, HIT is unlikely, a score of 4–5 indicates intermediate probability, and a score of 6–8 makes it highly likely. Those with a high score may need to be treated with an alternative drug while more sensitive and specific tests for HIT are performed, whereas those with a low score can safely continue receiving heparin, as the likelihood that they have HIT is extremely low. In an analysis of the reliability of the 4Ts score, a low score had a negative predictive value of 0.998, while an intermediate score had a positive predictive value of 0.14 and a high score a positive predictive value of 0.64; intermediate and high scores therefore warrant further investigation.
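As a rough illustration of the banding just described, here is a minimal Python sketch that sums four hypothetical 4Ts subscores (each 0–2) and returns the pretest-probability band. Only the thresholds quoted above are modeled; the clinical criteria for assigning each subscore are defined in the original publication and are not reproduced here.

```python
# Illustrative sketch only, not clinical software. The 4Ts score sums four
# category subscores (each 0-2 points); the banding follows the thresholds
# given in the text above.

def four_ts_probability(thrombocytopenia: int, timing: int,
                        thrombosis: int, other_causes: int) -> str:
    """Return the pretest probability band for HIT from the 4Ts score."""
    for subscore in (thrombocytopenia, timing, thrombosis, other_causes):
        if not 0 <= subscore <= 2:
            raise ValueError("each 4Ts category is scored 0-2")
    total = thrombocytopenia + timing + thrombosis + other_causes  # 0-8
    if total <= 3:
        return "low (HIT unlikely)"
    if total <= 5:
        return "intermediate"
    return "high"

# Example: hypothetical subscores 2 + 2 + 1 + 1 = 6 fall in the high band.
print(four_ts_probability(2, 2, 1, 1))  # -> "high"
```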
The first screening test in someone suspected of having HIT is aimed at detecting antibodies against heparin-PF4 complexes, typically with an enzyme-linked immunosorbent assay (ELISA). This ELISA test, however, detects all circulating antibodies that bind heparin-PF4 complexes, and may also falsely identify antibodies that do not cause HIT. Therefore, those with a positive ELISA are tested further with a functional assay. This test uses platelets and serum from the patient; the platelets are washed and mixed with serum and heparin. The sample is then tested for the release of serotonin, a marker of platelet activation. If this serotonin release assay (SRA) shows high serotonin release, the diagnosis of HIT is confirmed. The SRA test is difficult to perform and is usually only done in regional laboratories.
If someone has been diagnosed with HIT, some recommend routine Doppler sonography of the leg veins to identify deep vein thromboses, as these are very common in HIT.
Treatment
Given that HIT predisposes strongly to new episodes of thrombosis, simply discontinuing the heparin administration is insufficient. Generally, an alternative anticoagulant is needed to suppress the thrombotic tendency while the generation of antibodies stops and the platelet count recovers. To make matters more complicated, the other most commonly used anticoagulant, warfarin, should not be used in HIT until the platelet count is at least 150 × 10⁹/L, because there is a very high risk of warfarin necrosis in people with HIT who have low platelet counts. Warfarin necrosis is the development of skin gangrene in those receiving warfarin or a similar vitamin K inhibitor. If the patient was receiving warfarin at the time HIT is diagnosed, the activity of warfarin is reversed with vitamin K. Transfusing platelets is discouraged, as there is a theoretical concern that this may worsen the risk of thrombosis; the platelet count is rarely low enough to be the principal cause of significant hemorrhage.
Various nonheparin agents are used as alternatives to heparin therapy to provide anticoagulation in those with strongly suspected or proven HIT: danaparoid, fondaparinux, bivalirudin, and argatroban. Not all agents are available in all countries, and not all are approved for this specific use. For instance, argatroban was only recently licensed in the United Kingdom, and danaparoid is not available in the United States. Fondaparinux, a factor Xa inhibitor, is commonly used off-label for HIT treatment in the United States.
According to a systematic review, people with HIT treated with lepirudin showed a relative risk reduction for clinical outcomes (death, amputation, etc.) of 0.52 and 0.42 when compared with patient controls, while people treated with argatroban showed relative risk reductions of 0.20 and 0.18 for the same outcomes. Lepirudin production stopped on May 31, 2012.
Epidemiology
Up to 8% of patients receiving heparin are at risk of developing HIT antibodies, but only 1–5% of those on heparin will progress to HIT with thrombocytopenia, and subsequently one-third of these may develop arterial or venous thrombosis. After vascular surgery, 34% of patients receiving heparin developed HIT antibodies without clinical symptoms. The exact number of cases of HIT in the general population is unknown. What is known is that women receiving heparin after a recent surgical procedure, particularly cardiothoracic surgery, have a higher risk, while the risk is very low in women just before and after giving birth. Some studies have shown that HIT is less common in those receiving low molecular weight heparin.
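To make the quoted ranges concrete, here is a short, purely illustrative calculation for a hypothetical cohort of 10,000 heparin-treated patients; the percentages come from the text above, and the cohort size is arbitrary.

```python
# Illustrative arithmetic only, using the ranges quoted above.
cohort = 10_000                                 # hypothetical patients
hit_cases = (0.01 * cohort, 0.05 * cohort)      # 1-5% progress to HIT
thrombosis = tuple(n / 3 for n in hit_cases)    # ~one-third of HIT cases
print(f"HIT cases: {hit_cases[0]:.0f}-{hit_cases[1]:.0f}")
print(f"with thrombosis: {thrombosis[0]:.0f}-{thrombosis[1]:.0f}")
# -> HIT cases: 100-500; with thrombosis: 33-167
```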
History
While heparin was introduced for clinical use in the late 1930s, new thrombosis in people treated with heparin was not described until 1957, when vascular surgeons reported the association. The fact that this phenomenon occurred together with thrombocytopenia was reported in 1969; prior to this time, platelet counts were not routinely performed. A 1973 report established HIT as a diagnosis, as well as suggesting that its features were the result of an immune process.
Initially, various theories existed about the exact cause of the low platelets in HIT. Gradually, evidence accumulated on the exact underlying mechanism. In 1984–1986, John G. Kelton and colleagues at McMaster University Medical School developed the laboratory tests that could be used to confirm or exclude heparin-induced thrombocytopenia.
Treatment was initially limited to aspirin and warfarin, but the 1990s saw the introduction of a number of agents that could provide anticoagulation without a risk of recurrent HIT. Older terminology distinguished between two forms of heparin-induced thrombocytopenia: type 1 (a mild, non-immune-mediated and self-limiting fall in platelet count) and type 2, the form described above. Currently, the term HIT is used without a modifier to describe the immune-mediated severe form.
In 2021, a condition resembling HIT but without heparin exposure was described to explain unusual post-vaccination embolic and thrombotic events after the Oxford–AstraZeneca COVID-19 vaccine. It is a rare adverse event (1:1 million to 1:100,000) resulting from COVID-19 vaccines, particularly adenoviral vector vaccines, and is also known as thrombosis with thrombocytopenia syndrome (TTS).
References
External links
Hepatic porphyria | Hepatic porphyria is a form of porphyria in which toxic porphyrin molecules build up in the liver. Hepatic porphyrias can result from a number of different enzyme deficiencies. Examples include (in order of the synthesis pathway):
Acute intermittent porphyria
Porphyria cutanea tarda and Hepatoerythropoietic porphyria
Hereditary coproporphyria
Variegate porphyria
See also
Erythropoietic porphyria
Givosiran
References
External links
Porphyrias, Hepatic at the US National Library of Medicine Medical Subject Headings (MeSH)
www.drugs-porphyria.com
www.porphyria-europe.com
Hereditary hemorrhagic telangiectasia | Hereditary hemorrhagic telangiectasia (HHT), also known as Osler–Weber–Rendu disease and Osler–Weber–Rendu syndrome, is a rare autosomal dominant genetic disorder that leads to abnormal blood vessel formation in the skin, mucous membranes, and often in organs such as the lungs, liver, and brain.
It may lead to nosebleeds, acute and chronic digestive tract bleeding, and various problems due to the involvement of other organs. Treatment focuses on reducing bleeding from blood vessel lesions, and sometimes surgery or other targeted interventions to remove arteriovenous malformations in organs. Chronic bleeding often requires iron supplements and sometimes blood transfusions. HHT is transmitted in an autosomal dominant fashion and occurs in one in 5,000–8,000 people in North America.
The disease carries the names of Sir William Osler, Henri Jules Louis Marie Rendu, and Frederick Parkes Weber, who described it in the late 19th and early 20th centuries.
Signs and symptoms
Telangiectasias
Telangiectasias (small vascular malformations) may occur in the skin and mucosal linings of the nose and gastrointestinal tract. The most common problem is nosebleeds (epistaxis), which happen frequently from childhood and affect about 90–95% of people with HHT. Lesions on the skin and in the mouth bleed less often but may be considered cosmetically displeasing; they affect about 80%. The skin lesions characteristically occur on the lips, the nose, and the fingers, and on the skin of the face in sun-exposed areas. They appear suddenly, with the number increasing over time.
About 20% are affected by symptomatic digestive tract lesions, although a higher percentage have lesions that do not cause symptoms. These lesions may bleed intermittently, which is rarely significant enough to be noticed (in the form of bloody vomiting or black stool), but can eventually lead to depletion of iron in the body, resulting in iron-deficiency anemia.
Arteriovenous malformation
Arteriovenous malformations (AVMs, larger vascular malformations) occur in larger organs, predominantly the lungs (pulmonary AVMs) (50%), liver (30–70%), and the brain (cerebral AVMs, 10%), with a very small proportion (<1%) of AVMs in the spinal cord.
Vascular malformations in the lungs may cause a number of problems. The lungs normally "filter out" bacteria and blood clots from the bloodstream; AVMs bypass the capillary network of the lungs and allow these to migrate to the brain, where bacteria may cause a brain abscess and blood clots may lead to stroke. HHT is the most common cause of lung AVMs: of all people found to have lung AVMs, 70–80% are due to HHT. Bleeding from lung AVMs is relatively unusual, but may cause hemoptysis (coughing up blood) or hemothorax (blood accumulating in the chest cavity). Large vascular malformations in the lung allow oxygen-depleted blood from the right ventricle to bypass the alveoli, meaning that this blood does not have an opportunity to absorb fresh oxygen. This may lead to breathlessness. Large AVMs may lead to platypnea, difficulty in breathing that is more marked when sitting up compared to lying down; this probably reflects changes in blood flow associated with positioning. Very large AVMs cause a marked inability to absorb oxygen, which may be noted by cyanosis (bluish discoloration of the lips and skin), clubbing of the fingernails (often encountered in chronically low oxygen levels), and a humming noise over the affected part of the lung detectable by stethoscope.
The symptoms produced by AVMs in the liver depend on the type of abnormal connection that they form between blood vessels. If the connection is between arteries and veins, a large amount of blood bypasses the body's organs, for which the heart compensates by increasing the cardiac output. Eventually congestive cardiac failure develops ("high-output cardiac failure"), with breathlessness and leg swelling among other problems. If the AVM creates a connection between the portal vein and the blood vessels of the liver, the result may be portal hypertension (increased portal vein pressure), in which collateral blood vessels form in the esophagus (esophageal varices), which may bleed violently; furthermore, the increased pressure may give rise to fluid accumulation in the abdominal cavity (ascites). If the flow in the AVM is in the other direction, portal venous blood flows directly into the veins rather than running through the liver; this may lead to hepatic encephalopathy (confusion due to portal waste products irritating the brain). Rarely, the bile ducts are deprived of blood, leading to severe cholangitis (inflammation of the bile ducts). Liver AVMs are detectable in over 70% of people with HHT, but only 10% experience problems as a result.
In the brain, AVMs occasionally exert pressure, leading to headaches. They may also increase the risk of seizures, as would any abnormal tissue in the brain. Finally, hemorrhage from an AVM may lead to intracerebral hemorrhage (bleeding into the brain), which causes any of the symptoms of stroke, such as weakness in part of the body or difficulty speaking. If the bleeding occurs into the subarachnoid space (subarachnoid hemorrhage), there is usually a severe, sudden headache and a decreased level of consciousness, and often weakness in part of the body.
Other problems
A very small proportion (those affected by SMAD4 (MADH4) mutations, see below) have multiple benign polyps in the large intestine, which may bleed or transform into colorectal cancer. A similarly small proportion experiences pulmonary hypertension, a state in which the pressure in the lung arteries is increased, exerting pressure on the right side of the heart and causing peripheral edema (swelling of the legs), fainting and attacks of chest pain. It has been observed that the risk of thrombosis (particularly venous thrombosis, in the form of deep vein thrombosis or pulmonary embolism) may be increased. There is a suspicion that those with HHT may have a mild immunodeficiency and are therefore at a slightly increased risk from infections.
Genetics
HHT is a genetic disorder with an autosomal dominant inheritance pattern. Those with HHT symptoms who have no relatives with the disease may have a new mutation. Homozygosity appears to be fatal in utero.
Five genetic types of HHT are recognized. Of these, three have been linked to particular genes, while the remaining two have so far been associated only with a particular locus. More than 80% of all cases of HHT are due to mutations in either ENG or ACVRL1. A total of over 600 different mutations are known. There is likely to be a predominance of either type in particular populations, but the data are conflicting. MADH4 mutations, which cause colonic polyposis in addition to HHT, comprise about 2% of disease-causing mutations. Apart from MADH4, it is not clear whether mutations in ENG and ACVRL1 lead to particular symptoms, although some reports suggest that ENG mutations are more likely to cause lung problems while ACVRL1 mutations may cause more liver problems, and pulmonary hypertension may be a particular problem in people with ACVRL1 mutations. People with exactly the same mutations may differ in the nature and severity of their symptoms, suggesting that additional genes or other risk factors determine the rate at which lesions develop; these have not yet been identified.
Pathophysiology
Telangiectasias and arteriovenous malformations in HHT are thought to arise because of changes in angiogenesis, the development of blood vessels from existing ones. The development of a new blood vessel requires the activation and migration of various types of cells, chiefly endothelium, smooth muscle, and pericytes. The exact mechanism by which the HHT mutations influence this process is not yet clear, and it is likely that they disrupt a balance between pro- and antiangiogenic signals in blood vessels. The wall of telangiectasias is unusually friable, which explains the tendency of these lesions to bleed.
All genes known so far to be linked to HHT code for proteins in the TGF-β signaling pathway. This is a group of proteins that participates in signal transduction of hormones of the transforming growth factor beta superfamily (the transforming growth factor beta, bone morphogenetic protein, and growth differentiation factor classes), specifically BMP9/GDF2 and BMP10. The hormones do not enter the cell but link to receptors on the cell membrane; these then activate other proteins, eventually influencing cellular behavior in a number of ways, such as cellular survival, proliferation (increasing in number), and differentiation (becoming more specialized). For the hormone signal to be adequately transduced, a combination of proteins is needed: two each of two types of serine/threonine-specific kinase membrane receptors, and endoglin. When bound to the hormone, the type II receptor proteins phosphorylate (transfer phosphate onto) the type I receptor proteins (of which Alk-1 is one), which in turn phosphorylate a complex of SMAD proteins (chiefly SMAD1, SMAD5, and SMAD8). These bind to SMAD4 and migrate to the cell nucleus, where they act as transcription factors and participate in the transcription of particular genes. In addition to the SMAD pathway, the membrane receptors also act on the MAPK pathway, which has additional actions on the behavior of cells. Both Alk-1 and endoglin are expressed predominantly in endothelium, perhaps explaining why HHT-causing mutations in these proteins lead predominantly to blood vessel problems. Both ENG and ACVRL1 mutations lead predominantly to underproduction of the related proteins, rather than misfunctioning of the proteins.
Diagnosis
Diagnostic tests may be conducted for various reasons. Firstly, some tests are needed to confirm or refute the diagnosis. Secondly, some are needed to identify any potential complications.
Telangiectasias
The skin and oral cavity telangiectasias are visually identifiable on physical examination, and similarly the lesions in the nose may be seen on endoscopy of the nasopharynx or on laryngoscopy. The severity of nosebleeds may be quantified objectively using a grid-like questionnaire in which the number of nosebleed episodes and their duration are recorded.
Digestive tract telangiectasias may be identified on esophagogastroduodenoscopy (endoscopy of the esophagus, stomach, and first part of the small intestine). This procedure will typically only be undertaken if there is anemia that is more marked than expected from the severity of nosebleeds, or if there is evidence of severe bleeding (vomiting blood, black stools). If the number of lesions seen on endoscopy is unexpectedly low, the remainder of the small intestine may be examined with capsule endoscopy, in which the patient swallows a capsule-shaped device containing a miniature camera that transmits images of the digestive tract to a portable digital recorder.
Arteriovenous malformations
Identification of AVMs requires detailed medical imaging of the organs most commonly affected by these lesions. Not all AVMs cause symptoms or are at risk of doing so, and hence there is a degree of variation between specialists as to whether such investigations should be performed, and by which modality; often, decisions on this issue are reached together with the patient.
Lung AVMs may be suspected because of the abnormal appearance of the lungs on a chest X-ray, or hypoxia (low oxygen levels) on pulse oximetry or arterial blood gas determination. Bubble contrast echocardiography (bubble echo) may be used as a screening tool to identify abnormal connections between the lung arteries and veins. It involves the injection of agitated saline into a vein, followed by ultrasound-based imaging of the heart. Normally, the lungs remove small air bubbles from the circulation, so they are only seen in the right atrium and the right ventricle. If an AVM is present, bubbles appear in the left atrium and left ventricle, usually 3–10 cardiac cycles after appearing on the right side; this is slower than in heart defects, in which there are direct connections between the right and left sides of the heart. A larger number of bubbles is more likely to indicate the presence of an AVM. Bubble echo is not a perfect screening tool, as it can miss smaller AVMs and does not identify their site. Often, contrast-enhanced computed tomography (CT angiography) is used to identify lung lesions; this modality has a sensitivity of over 90%. It may be possible to omit contrast administration on modern CT scanners. Echocardiography is also used if there is a suspicion of pulmonary hypertension or high-output cardiac failure due to large liver lesions, sometimes followed by cardiac catheterization to measure the pressures inside the various chambers of the heart.
Liver AVMs may be suspected because of abnormal liver function tests in the blood, because the symptoms of heart failure develop, or because of jaundice or other symptoms of liver dysfunction. The most reliable initial screening test is Doppler ultrasonography of the liver; this has a very high sensitivity for identifying vascular lesions in the liver. If necessary, contrast-enhanced CT may be used to further characterize AVMs. It is extremely common to find incidental nodules on liver scans, most commonly due to focal nodular hyperplasia (FNH), as these are a hundred times more common in HHT than in the general population. FNH is regarded as harmless. Generally, tumor markers and additional imaging modalities are used to differentiate between FNH and malignant tumors of the liver. Liver biopsy is discouraged in people with HHT, as the risk of hemorrhage from liver AVMs may be significant. Liver scans may be useful if someone is suspected of having HHT but does not meet the criteria (see below), unless liver lesions can be demonstrated.
Brain AVMs may be detected on computed tomography angiography (CTA or CT angio) or magnetic resonance angiography (MRA); CTA is better at showing the vessels themselves, and MRA provides more detail about the relationship between an AVM and surrounding brain tissue. In general, MRI is recommended. Various types of vascular malformations may be encountered: AVMs, micro-AVMs, telangiectasias, and arteriovenous fistulas. If surgery, embolization, or other treatment is contemplated (see below), cerebral angiography may be required to get sufficient detail of the vessels. This procedure carries a small risk of stroke (0.5%) and is therefore limited to specific circumstances. Recent professional guidelines recommend that all children with suspected or definite HHT undergo a brain MRI early in life to identify AVMs that can cause major complications. Others suggest that screening for cerebral AVMs is probably unnecessary in those who are not experiencing any neurological symptoms, because most lesions discovered on screening scans would not require treatment, creating undesirable conundrums.
Genetic testing
Genetic tests are available for the ENG, ACVRL1, and MADH4 mutations. Testing is not always needed for diagnosis, because the symptoms are sufficient to distinguish the disease from other diagnoses. There are situations in which testing can be particularly useful. Firstly, children and young adults with a parent with definite HHT may have limited symptoms, yet be at risk from some of the complications mentioned above; if the mutation is known in the affected parent, absence of this mutation in the child would prevent the need for screening tests. Furthermore, genetic testing may confirm the diagnosis in those with limited symptoms who otherwise would have been labeled "possible HHT" (see below).
Genetic diagnosis in HHT is difficult, as mutations occur in numerous different locations in the linked genes, without particular mutations being highly frequent (as opposed to, for instance, the ΔF508 mutation in cystic fibrosis). Sequence analysis of the involved genes is therefore the most useful approach (sensitivity 75%), followed by additional testing to detect large deletions and duplications (an additional 10%). Not all mutations in these genes have been linked with disease.
Mutations in the MADH4 gene are usually associated with juvenile polyposis, and detection of such a mutation would indicate a need to screen the patient and affected relatives for polyps and tumors of the large intestine.
Criteria
The diagnosis can be made based on the presence of four criteria, known as the "Curaçao criteria". If three or four are met, a patient has "definite HHT", while two gives "possible HHT" (a short illustrative sketch of this classification appears below):
Spontaneous recurrent epistaxis
Multiple telangiectasias in typical locations (see above)
Proven visceral AVM (lung, liver, brain, spine)
First-degree family member with HHT
Despite the designation "possible", someone with a visceral AVM and a family history but no nosebleeds or telangiectasias is still extremely likely to have HHT, because these AVMs are very uncommon in the general population. At the same time, the same cannot be said of nosebleeds and sparse telangiectasias, both of which occur in people without HHT in the absence of AVMs. Someone's diagnostic status may change over the course of life, as young children may not yet exhibit all the symptoms; at age 16, thirteen percent are still indeterminate, while at age 60 the vast majority (99%) have a definite diagnostic classification. The children of established HHT patients may therefore be labeled as "possible HHT", as 50% may turn out to have HHT in the course of their life.
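A minimal illustrative sketch of the Curaçao classification in Python, assuming each criterion is assessed as a simple yes/no. The "definite" and "possible" bands follow the text above; labeling fewer than two criteria as "unlikely" is the conventional reading, although the text does not name that band explicitly.

```python
# Illustrative sketch of the Curaçao criteria; not clinical software.

def curacao_classification(epistaxis: bool, telangiectasias: bool,
                           visceral_avm: bool, family_history: bool) -> str:
    count = sum([epistaxis, telangiectasias, visceral_avm, family_history])
    if count >= 3:
        return "definite HHT"
    if count == 2:
        return "possible HHT"
    return "HHT unlikely"  # assumption: band not named in the text

# Example: visceral AVM plus an affected first-degree relative -> "possible",
# even though (as noted above) that combination is strongly suggestive.
print(curacao_classification(False, False, True, True))
```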
Treatment
Treatment of HHT is symptomatic (it deals with the symptoms rather than the disease itself), as there is no therapy that stops the development of telangiectasias and AVMs directly. Furthermore, some treatments are applied to prevent the development of common complications. Chronic nosebleeds and digestive tract bleeding can both lead to anemia; if the bleeding itself cannot be completely stopped, the anemia requires treatment with iron supplements. Those who cannot tolerate iron tablets or solutions may require administration of intravenous iron, and blood transfusion if the anemia is causing severe symptoms that warrant rapid improvement of the blood count.
Most treatments used in HHT have been described in adults, and the experience in treating children is more limited. Women with HHT who get pregnant are at an increased risk of complications and are observed closely, although the absolute risk is still low (1%).
Nosebleeds
An acute nosebleed may be managed with a variety of measures, such as packing of the nasal cavity with absorbent swabs or gels. Removal of the packs after the bleeding may lead to reopening of the fragile vessels, so lubricated or atraumatic packing is recommended. Some patients may wish to learn packing themselves to deal with nosebleeds without having to resort to medical help.
Frequent nosebleeds can be prevented in part by keeping the nostrils moist and by applying saline solution, estrogen-containing creams, or tranexamic acid; these have few side effects and may have a small degree of benefit. A number of additional modalities have been used to prevent recurrent bleeding if simple measures are unsuccessful. Medical therapies include oral tranexamic acid and estrogen; the evidence for these is relatively limited, and estrogen is poorly tolerated by men and possibly carries risks of cancer and heart disease in women past the menopause. Nasal coagulation and cauterization may reduce the bleeding from telangiectasias and are recommended before surgery is considered. However, it is highly recommended to use the least heat and time possible, to prevent septal perforations and excessive trauma to nasal mucosa that is already susceptible to bleeding. Sclerotherapy is another option to manage the bleeding. This process involves injecting a small amount of an aerated irritant (a detergent such as sodium tetradecyl sulfate) directly into the telangiectasias. The detergent causes the vessel to collapse and harden, leaving scar tissue behind. This is the same procedure used to treat varicose veins and similar disorders.
It may be possible to embolize vascular lesions through interventional radiology; this requires passing a catheter through a large artery and locating the maxillary artery under X-ray guidance, followed by the injection into the vessel of particles that occlude the blood vessels. The benefit from the procedure tends to be short-lived, and it may be most appropriate in episodes of severe bleeding.
To minimize the recurrence and severity of epistaxis more effectively, other options may be used in conjunction with the therapies listed above. Intravenously administered anti-VEGF substances such as bevacizumab (brand name Avastin), pazopanib, and thalidomide or its derivatives interfere with the production of new blood vessels that are weak and therefore prone to bleeding. Because of the past experience of prescribing thalidomide to pregnant women to alleviate symptoms of nausea, and the terrible birth defects that followed, thalidomide is a last-resort therapy. Additionally, thalidomide can cause neuropathy. Though this can be mitigated by adjusting dosages and prescribing derivatives such as lenalidomide and pomalidomide, many doctors prefer alternative VEGF inhibitors. Bevacizumab has been shown to significantly reduce the severity of epistaxis without side effects.
If other interventions have failed, several operations have been reported to provide benefit. One is septal dermoplasty, or Saunders' procedure, in which skin is transplanted into the nostrils; the other is Young's procedure, in which the nostrils are sealed off completely.
Skin and digestive tract
The skin lesions of HHT can be disfiguring and may respond to treatment with long-pulsed Nd:YAG laser. Skin lesions in the fingertips may sometimes bleed and cause pain; skin grafting is occasionally needed to treat this problem.
With regard to digestive tract lesions, mild bleeding and the mild resultant anemia are treated with iron supplementation, and no specific treatment is administered. There are limited data on hormone treatment and tranexamic acid to reduce bleeding and anemia. Severe anemia or episodes of severe bleeding are treated with endoscopic argon plasma coagulation (APC) or laser treatment of any lesions identified; this may reduce the need for supportive treatment. The expected benefits are not such that repeated attempts at treating lesions are advocated. Sudden, very severe bleeding is unusual—if encountered, alternative causes (such as a peptic ulcer) need to be considered—but embolization may be used in such instances.
Lung AVMs
Lung lesions, once identified, are usually treated to prevent episodes of bleeding and, more importantly, embolism to the brain. This is particularly done in lesions with a feeding blood vessel of 3 mm or larger, as these are the most likely to cause long-term complications unless treated. The most effective current therapy is embolization with detachable metal coils or plugs. The procedure involves puncture of a large vein (usually under a general anesthetic), followed by advancing a catheter through the right ventricle and into the pulmonary artery, after which radiocontrast is injected to visualize the AVMs (pulmonary angiography). Once the lesion has been identified, coils are deployed that obstruct the blood flow and allow the lesion to regress. In experienced hands, the procedure tends to be very effective and to have limited side effects, but lesions may recur and further attempts may be required. CTA scans are repeated to monitor for recurrence. Surgical excision has now essentially been abandoned due to the success of embolotherapy.
Those with either definite pulmonary AVMs or an abnormal contrast echocardiogram with no clearly visible lesions are deemed to be at risk of brain emboli. They are therefore counselled to avoid scuba diving, during which small air bubbles may form in the bloodstream that may migrate to the brain and cause stroke. Similarly, antimicrobial prophylaxis is advised during procedures in which bacteria may enter the bloodstream, such as dental work, as is avoidance of air bubbles during intravenous therapy.
Liver AVMs
Given that liver AVMs generally cause high-output cardiac failure, the emphasis is on treating this with diuretics to reduce the circulating blood volume, restriction of salt and fluid intake, and antiarrhythmic agents in case of irregular heart beat. This may be sufficient to treat the symptoms of swelling and breathlessness. If this treatment is not effective or leads to side effects or complications, the only remaining option is liver transplantation. This is reserved for those with severe symptoms, as it carries a mortality of about 10%, but leads to good results if successful. The exact point at which liver transplantation should be offered is not yet completely established. Embolization treatment has been attempted, but leads to severe complications in a proportion of patients and is discouraged.
Other liver-related complications (portal hypertension, esophageal varices, ascites, hepatic encephalopathy) are treated with the same modalities as used in cirrhosis, although the use of transjugular intrahepatic portosystemic shunt treatment is discouraged due to the lack of documented benefit.
Brain AVMs
The decision to treat brain arteriovenous malformations depends on the symptoms they cause (such as seizures or headaches). The bleeding risk is predicted by previous episodes of hemorrhage, and by whether the AVM appears deep-seated or to have deep venous drainage on the CTA or MRA scan. Size of the AVM and the presence of aneurysms appear to matter less. In HHT, some lesions (high-flow arteriovenous fistulae) tend to cause more problems, and treatment is warranted. Other AVMs may regress over time without intervention. Various modalities are available, depending on the location of the AVM and its size: surgery, radiation-based treatment, and embolization. Sometimes, multiple modalities are used on the same lesion.
Surgery (by craniotomy, open brain surgery) may be offered based on the risks of treatment as determined by the Spetzler–Martin scale (grades I–V); this score is higher in larger lesions that are close to important brain structures and have deep venous drainage. High-grade lesions (IV and V) carry an unacceptably high risk, and surgery is not typically offered in those cases. Radiosurgery (using targeted radiation therapy, such as with a gamma knife) may be used if the lesion is small but close to vital structures. Finally, embolization may be used on small lesions that have only a single feeding vessel.
Experimental treatments
Several anti-angiogenesis drugs approved for other conditions, such as cancer, have been investigated in small clinical trials. The anti-VEGF antibody bevacizumab, for instance, has been used off-label in several studies. In a large clinical trial, bevacizumab infusion was associated with a decrease in cardiac output and reduced duration and number of episodes of epistaxis in treated HHT patients. Thalidomide, another anti-angiogenesis drug, was also reported to have beneficial effects in HHT patients. Thalidomide treatment was found to induce vessel maturation in an experimental mouse model of HHT and to reduce the severity and frequency of nosebleeds in the majority of a small group of HHT patients. The blood hemoglobin levels of these treated patients rose as a result of reduced hemorrhage and enhanced blood vessel stabilization.
Epidemiology
Population studies from numerous areas of the world have shown that HHT occurs at roughly the same rate in almost all populations: somewhere around 1 in 5,000. In some areas, it is much more common; for instance, in the French region of Haut Jura the rate is 1:2351, twice as common as in other populations. This has been attributed to a founder effect, in which a population descending from a small number of ancestors has a high rate of a particular genetic trait because one of those ancestors harbored the trait. In Haut Jura, this has been shown to be the result of a particular ACVRL1 mutation (named c.1112dupG or c.1112_1113insG). The highest rate of HHT, 1:1331, is reported in Bonaire and Curaçao, two Caribbean islands belonging to the Netherlands Antilles.
Most people with HHT have a normal lifespan. The skin lesions and nosebleeds tend to develop during childhood. AVMs are probably present from birth, but don't necessarily cause any symptoms. Frequent nosebleeds are the most common symptom and can significantly affect quality of life.
History
Several 19th-century English physicians, starting with Henry Gawen Sutton (1836–1891) and followed by Benjamin Guy Babington (1794–1866) and John Wickham Legg (1843–1921), described the most common features of HHT, particularly the recurrent nosebleeds and the hereditary nature of the disease. The French physician Henri Jules Louis Marie Rendu (1844–1902) observed the skin and mucosal lesions and distinguished the condition from hemophilia. The Canadian-born Sir William Osler (1849–1919), then at Johns Hopkins Hospital and later at Oxford University, made further contributions with a 1901 report in which he described characteristic lesions in the digestive tract. The English physician Frederick Parkes Weber (1863–1962) reported further on the condition in 1907 with a series of cases. The term "hereditary hemorrhagic telangiectasia" was first used by the American physician Frederic M. Hanes (1883–1946) in a 1909 article on the condition.
The diagnosis of HHT remained a clinical one until the genetic defects that cause HHT were identified by a research group at Duke University Medical Center in 1994 and 1996. In 2000, the international scientific advisory committee of cureHHT (formerly called the HHT Foundation International) published the now widely used Curaçao criteria. In 2006, a group of international experts met in Canada and formulated an evidence-based guideline, sponsored by cureHHT; this guideline was updated in 2020.
References
Orotic aciduria | Orotic aciduria (also known as hereditary orotic aciduria) is a disease caused by an enzyme deficiency resulting in a decreased ability to synthesize pyrimidines. It was the first described enzyme deficiency of the de novo pyrimidine synthesis pathway.
Orotic aciduria is characterized by excessive excretion of orotic acid in urine because of the inability to convert orotic acid to UMP. It causes megaloblastic anemia and may be associated with mental and physical developmental delays.
Signs and symptoms
Patients typically present with excessive orotic acid in the urine, failure to thrive, developmental delay, and megaloblastic anemia which cannot be cured by administration of vitamin B12 or folic acid.
Cause and genetics
This autosomal recessive disorder is caused by a deficiency in the enzyme UMPS, a bifunctional protein that includes the enzyme activities of OPRT and ODC. In one study of three patients, UMPS activity ranged from 2% to 7% of normal levels.
Two types of orotic aciduria have been reported. Type I has a severe deficiency of both activities of UMP synthase. In type II orotic aciduria, the ODC activity is deficient while OPRT activity is elevated. As of 1988, only one case of type II orotic aciduria had ever been reported.
Orotic aciduria is associated with megaloblastic anemia due to decreased pyrimidine synthesis, which leads to decreased nucleotide-lipid cofactors needed for erythrocyte membrane synthesis in the bone marrow.
Diagnosis
Elevated urinary orotic acid levels can also arise secondary to blockage of the urea cycle, particularly in ornithine transcarbamylase deficiency (OTC deficiency). This can be distinguished from hereditary orotic aciduria by assessing blood ammonia levels and blood urea nitrogen (BUN). In OTC deficiency, hyperammonemia and decreased BUN are seen because the urea cycle is not functioning properly, but megaloblastic anemia will not occur because pyrimidine synthesis is not affected. In orotic aciduria, the urea cycle is not affected.
Orotic aciduria can be diagnosed through genetic sequencing of the UMPS gene.
Treatment
Treatment is administration of uridine monophosphate (UMP) or uridine triacetate (which is converted to UMP). These medications will bypass the missing enzyme and provide the body with a source of pyrimidines.
References
External links
Herpes labialis | Herpes labialis, commonly known as cold sores or fever blisters, is a type of infection by the herpes simplex virus that primarily affects the lip. Symptoms typically include a burning pain followed by small blisters or sores. The first attack may also be accompanied by fever, sore throat, and enlarged lymph nodes. The rash usually heals within ten days, but the virus remains dormant in the trigeminal ganglion. The virus may periodically reactivate to create another outbreak of sores in the mouth or on the lip.
The cause is usually herpes simplex virus type 1 (HSV-1) and occasionally herpes simplex virus type 2 (HSV-2). The infection is typically spread between people by direct non-sexual contact. Attacks can be triggered by sunlight, fever, psychological stress, or a menstrual period. Direct contact with the genitals can result in genital herpes. Diagnosis is usually based on symptoms but can be confirmed with specific testing.
Prevention includes avoiding kissing or using the personal items of a person who is infected. A zinc oxide, anesthetic, or antiviral cream appears to decrease the duration of symptoms by a small amount. Antiviral medications may also decrease the frequency of outbreaks.
About 2.5 per 1,000 people are affected by outbreaks in any given year. After one episode, about 33% of people develop subsequent episodes. Onset often occurs in those less than 20 years old, and 80% develop antibodies for the virus by this age. In those with recurrent outbreaks, these typically happen less than three times a year. The frequency of outbreaks generally decreases over time.
Signs and symptoms
Herpes infections usually show no symptoms; when symptoms do appear, they typically resolve within two weeks. The main symptom of oral infection is inflammation of the mucosa of the cheek and gums—known as acute herpetic gingivostomatitis—which occurs within 5–10 days of infection. Other symptoms may also develop, including headache, nausea, dizziness, painful ulcers (sometimes confused with canker sores), fever, and sore throat.
Primary HSV infection in adolescents frequently manifests as severe pharyngitis with lesions developing on the cheek and gums. Some individuals develop difficulty in swallowing (dysphagia) and swollen lymph nodes (lymphadenopathy). Primary HSV infection in adults often results in pharyngitis similar to that observed in glandular fever (infectious mononucleosis), but gingivostomatitis is less likely.
Recurrent oral infection is more common with HSV-1 infections than with HSV-2. Symptoms typically progress in a series of eight stages (summarized in a simple lookup table after the list):
Latent (weeks to months incident-free): The remission period. After initial infection, the viruses move to sensory nerve ganglia (trigeminal ganglion), where they reside as lifelong, latent viruses. Asymptomatic shedding of contagious virus particles can occur during this stage.
Prodromal (day 0–1): Symptoms often precede a recurrence. Symptoms typically begin with tingling (itching) and reddening of the skin around the infected site. This stage can last from a few days to a few hours preceding the physical manifestation of an infection and is the best time to start treatment.
Inflammation (day 1): Virus begins reproducing and infecting cells at the end of the nerve. The healthy cells react to the invasion with swelling and redness displayed as symptoms of infection.
Pre-sore (day 2–3): This stage is defined by the appearance of tiny, hard, inflamed papules and vesicles that may itch and are painfully sensitive to touch. In time, these fluid-filled blisters form a cluster on the lip (labial) tissue, the area between the lip and skin (vermilion border), and can occur on the nose, chin, and cheeks.
Open lesion (day 4): This is the most painful and contagious of the stages. All the tiny vesicles break open and merge to create one big, open, weeping ulcer. Fluids are slowly discharged from blood vessels and inflamed tissue. This watery discharge is teeming with active viral particles and is highly contagious. Depending on the severity, one may develop a fever and swollen lymph glands under the jaw.
Crusting (day 5–8): A honey/golden crust starts to form from the syrupy exudate. This yellowish or brown crust or scab is made not of active virus but of blood serum containing useful proteins such as immunoglobulins. It appears as the healing process begins. The sore is still painful at this stage, but the constant cracking of the scab as one moves or stretches the lips, as in smiling or eating, is even more painful. Virus-filled fluid will still ooze out of the sore through any cracks.
Healing (day 9–14): New skin begins to form underneath the scab as the virus retreats into latency. A series of scabs will form over the sore (called Meier Complex), each one smaller than the last. During this phase irritation, itching, and some pain are common.
Post-scab (12–14 days): A reddish area may linger at the site of viral infection as the destroyed cells are regenerated. Virus shedding can still occur during this stage.
The recurrent infection is thus often called herpes simplex labialis. Rare reinfections occur inside the mouth (intraoral HSV stomatitis), affecting the gums, alveolar ridge, hard palate, and the back of the tongue, possibly accompanied by herpes labialis.
A lesion caused by herpes simplex can occur in the corner of the mouth and be mistaken for angular cheilitis of another cause; this is sometimes termed "angular herpes simplex". A cold sore at the corner of the mouth behaves similarly to one elsewhere on the lips. Rather than utilizing antifungal creams, angular herpes simplex is treated in the same way as a cold sore, with topical antiviral drugs.
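For reference, the eight stages and their approximate timings can be collected into a simple lookup table. This Python sketch merely restates the list above; the day ranges are typical rather than fixed.

```python
# Illustrative data structure only: the eight recurrence stages and their
# approximate timings, as listed in the text above.
HERPES_LABIALIS_STAGES = [
    ("Latent",       "weeks to months incident-free"),
    ("Prodromal",    "day 0-1"),
    ("Inflammation", "day 1"),
    ("Pre-sore",     "day 2-3"),
    ("Open lesion",  "day 4"),
    ("Crusting",     "day 5-8"),
    ("Healing",      "day 9-14"),
    ("Post-scab",    "day 12-14"),
]

for stage, timing in HERPES_LABIALIS_STAGES:
    print(f"{stage}: {timing}")
```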
Causes
Herpes labialis infection occurs when the herpes simplex virus comes into contact with oral mucosal tissue or abraded skin of the mouth. Infection by the type 1 strain of herpes simplex virus (HSV-1) is most common; however, cases of oral infection by the type 2 strain are increasing.
Oral HSV-2 shedding is rare, and "usually noted in the context of first episode genital herpes." In general, both types can cause oral or genital herpes.
Cold sores are the result of the virus reactivating in the body. Once HSV-1 has entered the body, it never leaves. The virus moves from the mouth to remain latent in the central nervous system. In approximately one-third of people, the virus can "wake up", or reactivate, to cause disease. When reactivation occurs, the virus travels down the nerves to the skin, where it may cause blisters (cold sores) around the lips or mouth area. (The nose can also be affected in herpes zoster, a condition caused by a different virus.)
Cold sore outbreaks may be influenced by stress, menstruation, sunlight, sunburn, fever, dehydration, or local skin trauma. Surgical procedures such as dental or neural surgery, lip tattooing, or dermabrasion are also common triggers. HSV-1 can in rare cases be transmitted to newborn babies by family members or hospital staff who have cold sores; this can cause a severe disease called neonatal herpes simplex.
The colloquial term for this condition, "cold sore", comes from the fact that herpes labialis is often triggered by fever, for example during an upper respiratory tract infection (i.e., a cold).
People can transfer the virus from their cold sores to other areas of the body, such as the eye, skin, or fingers; this is called autoinoculation. Eye infection, in the form of conjunctivitis or keratitis, can happen when the eyes are rubbed after touching the lesion. Finger infection (herpetic whitlow) can occur when a child with cold sores or primary HSV-1 infection sucks their fingers.
Blood tests for herpes may differentiate between type 1 and type 2. When a person is not experiencing any symptoms, a blood test alone does not reveal the site of infection. Genital herpes infections occurred with almost equal frequency as type 1 or 2 in younger adults when samples were taken from genital lesions. Herpes in the mouth is more likely to be caused by type 1, but (see above) can also be type 2. The only way to know for certain whether a positive blood test for herpes is due to infection of the mouth, genitals, or elsewhere is to sample from lesions; this is not possible if the affected individual is asymptomatic. The body's immune system typically fights the virus.
Prevention
Primary infection
The likelihood of infection can be reduced by avoiding touching an area with active infection, avoiding contact sports, washing hands frequently, and using antiviral or antibacterial mouth rinses. During active infection (outbreaks with oral lesions), avoid oral-to-oral kissing and oral-genital sex without protection. HSV-1 can be transmitted to uninfected partners through oral sex, resulting in genital lesions. Healthcare workers working with patients who have active lesions are advised to use gloves, eye protection, and mouth protection during physical, mucosal, and bronchoscopic procedures and examinations.
Recurrent infection
In some cases, sun exposure can lead to HSV-1 reactivation, so use of zinc-based sunscreen or topical and oral therapeutics such as acyclovir and valacyclovir may prove helpful. Other triggers for recurrent herpetic infection include fever, the common cold, fatigue, emotional stress, trauma, sideropenia, oral cancer therapy, immunosuppression, chemotherapy, oral and facial surgery, menstruation, epidural morphine, and gastrointestinal upset. Surgical procedures such as nerve root decompression, facial dermabrasion, and ablative laser resurfacing can increase the risk of reactivation by 50–70%.
Treatment
Although there is no cure or vaccine for the virus, the human body's immune system and specific antibodies typically fight the virus. Treatment options include no treatment, topical creams (indifferent, antiviral, and anaesthetic), and oral antiviral medications. Indifferent topical creams include zinc oxide and glycerin cream, which can cause itching and burning sensations as side effects, and docosanol. Docosanol, a saturated fatty alcohol, was approved by the United States Food and Drug Administration for herpes labialis in adults with properly functioning immune systems. It is comparable in effectiveness to prescription topical antiviral agents. Due to docosanol's mechanism of action, there is little risk of drug resistance. Antiviral creams include acyclovir and penciclovir, which can speed healing by as much as 10%. Oral antivirals include acyclovir, valaciclovir, and famciclovir. Famciclovir or valacyclovir, taken in pill form, can be effective using a single-day, high-dose application and is more cost-effective and convenient than the traditional treatment of lower doses for 5–7 days. Anaesthetic creams include lidocaine and prilocaine, which have shown a reduction in the duration of subjective symptoms and eruptions. Treatment recommendations vary with the severity of the symptoms and the chronicity of the infection. Treatment with oral antivirals such as acyclovir in children within 72 hours of illness onset has been shown to shorten the duration of fever, odynophagia, and lesions, and to reduce viral shedding. For patients with mild to moderate symptoms, a local anaesthetic such as lidocaine for pain, without an antiviral, may be sufficient. However, those with occasional severe recurrences of lesions may use oral antivirals. Patients with severe cases, such as those with frequent recurrences of lesions, disfiguring lesions, or serious systemic complications, may need chronic suppressive therapy on top of the antiviral therapies. Mouth rinses combining ethanol and essential oils are recommended as a therapeutic method against herpes by the German Society of Hospital Hygiene. Further research into the virucidal effects of essential oils is ongoing.
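The tiered recommendations in the preceding paragraph can be summarized as a small triage sketch. The category strings and the function below are hypothetical illustrations, not a clinical decision tool:

```python
# Illustrative triage of the treatment tiers described above; categories
# and labels paraphrase the text and are assumptions, not clinical guidance.
def suggest_treatment(severity: str, frequent_disfiguring_or_systemic: bool = False) -> str:
    if frequent_disfiguring_or_systemic:
        # severe cases: chronic suppressive therapy on top of antivirals
        return "chronic suppressive therapy + oral antivirals"
    if severity in ("mild", "moderate"):
        # local anaesthetic for pain may be sufficient, without an antiviral
        return "topical anaesthetic (e.g. lidocaine)"
    if severity == "occasional severe recurrence":
        return "oral antivirals (e.g. acyclovir, valacyclovir, famciclovir)"
    return "no treatment or topical creams"

print(suggest_treatment("mild"))
print(suggest_treatment("occasional severe recurrence"))
```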
Epidemiology
Herpes labialis is common throughout the world. A large survey of young adults on six continents reported that 33% of males and 28% of females had herpes labialis on two or more occasions during the year before the study. The lifetime prevalence in the United States of America is estimated at 20–45% of the adult population. Lifetime prevalence in France was reported by one study as 32% in males and 42% in females. In Germany, the prevalence was reported at 32% in people aged between 35 and 44 years, and 20% in those aged 65–74. In Jordan, another study reported a lifetime prevalence of 26%.
Research
Research has gone into vaccines and drugs for both prevention and treatment of herpes infections.
Terminology
The term labia means "lips". Herpes labialis does not refer to the labia of the genitals, though the origin of the word is the same. When the viral infection affects both face and mouth, the broader term orofacial herpes is used, whereas herpetic stomatitis describes infection of the mouth specifically; stomatitis is derived from the Greek word stoma, which means "mouth".
References
External links
Heterotopic ossification | Heterotopic ossification (HO) is the process by which bone tissue forms outside of the skeleton in muscles and soft tissue.
Symptoms
In traumatic heterotopic ossification (traumatic myositis ossificans), the patient may complain of a warm, tender, firm swelling in a muscle and decreased range of motion in the joint served by the muscle involved. There is often a history of a blow or other trauma to the area a few weeks to a few months earlier. Patients with traumatic neurological injuries, severe neurologic disorders or severe burns who develop heterotopic ossification experience limitation of motion in the areas affected.
Causes
Heterotopic ossification of varying severity can be caused by surgery or trauma to the hips and legs. About one in three patients who have total hip arthroplasty (joint replacement) or a severe fracture of the long bones of the lower leg will develop heterotopic ossification, but it is uncommonly symptomatic. Between 50% and 90% of patients who developed heterotopic ossification following a previous hip arthroplasty will develop additional heterotopic ossification. Heterotopic ossification often develops in patients with traumatic brain or spinal cord injuries, other severe neurologic disorders, or severe burns, most commonly around the hips. The mechanism is unknown. This may account for the clinical impression that traumatic brain injuries cause accelerated fracture healing. There are also rare genetic disorders causing heterotopic ossification, such as fibrodysplasia ossificans progressiva (FOP), a condition in which injured bodily tissues are replaced by heterotopic bone. Characteristically presenting in the big toe at birth, it causes the formation of heterotopic bone throughout the body over the course of the sufferer's life, causing chronic pain and eventually leading to the immobilisation and fusion of most of the skeleton by abnormal growths of bone. Another rare genetic disorder causing heterotopic ossification is progressive osseous heteroplasia (POH), a condition characterized by cutaneous or subcutaneous ossification.
Diagnosis
During the early stage, an x-ray will not be helpful because there is no calcium in the matrix. (In an acute episode which is not treated, it will be 3–4 weeks after onset before the x-ray is positive.) Early laboratory tests are not very helpful. Alkaline phosphatase will be elevated at some point, but initially may be only slightly elevated, rising later to a high value for a short time. Unless weekly tests are done, this peak value may not be detected. The test is not useful in patients who have recently had fractures or spine fusion, as these will also cause elevations. The only definitive diagnostic test in the early acute stage is a bone scan, which will show heterotopic ossification 7–10 days earlier than an x-ray. The three-phase bone scan may be the most sensitive method of detecting early heterotopic bone formation. However, an abnormality detected in the early phase may not progress to the formation of heterotopic bone. Another finding, often misinterpreted as early heterotopic bone formation, is an increased (early) uptake around the knees or the ankles in a patient with a very recent spinal cord injury. It is not clear exactly what this means, because these patients do not develop heterotopic bone formation. It has been hypothesized that this may be related to the autonomic nervous system and its control over circulation. When the initial presentation is swelling and increased temperature in a leg, the differential diagnosis includes thrombophlebitis. It may be necessary to do both a bone scan and a venogram to differentiate between heterotopic ossification and thrombophlebitis, and it is even possible that both could be present simultaneously. In heterotopic ossification, the swelling tends to be more proximal and localized, with little or no foot/ankle edema, whereas in thrombophlebitis the swelling is usually more uniform throughout the leg.
Treatment
There is no clear form of treatment. Originally, bisphosphonates were expected to be of value after hip surgery, but there has been no convincing evidence of benefit, despite having been used prophylactically. Depending on the growth's location, orientation, and severity, surgical removal may be possible.
Radiation therapy
Prophylactic radiation therapy for the prevention of heterotopic ossification has been employed since the 1970s. A variety of doses and techniques have been used. Generally, radiation therapy should be delivered as close as practical to the time of surgery. A dose of 7-8 Gray in a single fraction within 24–48 hours of surgery has been used successfully. Treatment volumes include the peri-articular region, and can be used for hip, knee, elbow, shoulder, jaw or in patients after spinal cord trauma.
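A minimal sketch of the protocol parameters just described, assuming "within 24–48 hours" means delivery no later than 48 hours after surgery; the function name and that interpretation are assumptions for illustration, not part of any treatment-planning software:

```python
from datetime import datetime, timedelta

# Sketch of the single-fraction prophylaxis parameters described above:
# 7-8 Gray, delivered within 24-48 hours of surgery. We read the window
# as "no later than 48 hours post-surgery" (an assumption; the source
# phrasing is ambiguous). Hypothetical helper, not clinical software.
def within_protocol(surgery: datetime, rt_delivery: datetime, dose_gy: float) -> bool:
    elapsed = rt_delivery - surgery
    return 7.0 <= dose_gy <= 8.0 and timedelta(0) <= elapsed <= timedelta(hours=48)

surgery = datetime(2024, 3, 1, 9, 0)
print(within_protocol(surgery, surgery + timedelta(hours=36), 7.5))  # True
print(within_protocol(surgery, surgery + timedelta(hours=72), 7.5))  # False
```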
Single-dose radiation therapy is well tolerated and is cost-effective, without an increase in bleeding, infection, or wound-healing disturbances.
Other possible treatments
Certain anti-inflammatory agents, such as indomethacin, ibuprofen, and aspirin, have shown some effect in preventing recurrence of heterotopic ossification after total hip replacement.
Conservative treatments such as passive range of motion exercises or other mobilization techniques provided by physical therapists or occupational therapists may also assist in preventing HO. A review article looked at 114 adult patients retrospectively and suggested that the lower incidence of HO in patients with a very severe TBI may have been due to early intensive physical and occupational therapy in conjunction with pharmacological treatment. Another review article also recommended physiotherapy as an adjunct to pharmacological and medical treatments because passive range of motion exercises may maintain range at the joint and prevent secondary soft tissue contractures, which are often associated with joint immobility.
See also
Intramembranous ossification
Myositis ossificans
Fibrodysplasia ossificans progressiva
Progressive osseous heteroplasia
References
External links
pmr/112 at eMedicine
radio/336 at eMedicine
Hirsutism | Hirsutism is excessive body hair on parts of the body where hair is normally absent or minimal. The word is from the early 17th century: from Latin hirsutus, meaning "hairy". It may refer to a "male" pattern of hair growth that may be a sign of a more serious medical condition, especially if it develops well after puberty. Cultural stigma against hirsutism can cause much psychological distress and social difficulty. Discrimination based on facial hirsutism often leads to the avoidance of social situations and to symptoms of anxiety and depression. Hirsutism is usually the result of an underlying endocrine imbalance, which may be adrenal, ovarian, or central. It can be caused by increased levels of androgen hormones. The amount and location of the hair are measured by a Ferriman-Gallwey score. It is different from hypertrichosis, which is excessive hair growth anywhere on the body. Treatments may include birth control pills that contain estrogen and progestin, antiandrogens, or insulin sensitizers. Hirsutism affects 5–15% of women across all ethnic backgrounds. Depending on the definition and the underlying data, approximately 40% of women have some degree of facial hair.
Causes
The causes of hirsutism can be divided into endocrine imbalances and non-endocrine etiologies. It is important to first determine the distribution of body hair growth. If hair growth follows a male distribution, it could indicate the presence of increased androgens or hyperandrogenism. However, other hormones not related to androgens can also lead to hirsutism. A detailed history is taken by a provider in search of possible causes of hyperandrogenism or other non-endocrine-related causes. If the distribution of hair growth occurs throughout the body, this is referred to as hypertrichosis, not hirsutism.
Endocrine causes
Ovarian cysts such as in polycystic ovary syndrome (PCOS), the most common cause in women.
Adrenal gland tumors, adrenocortical adenomas, and adrenocortical carcinoma, as well as adrenal hyperplasia due to pituitary adenomas (as in Cushing's disease).
Inborn errors of steroid metabolism such as in congenital adrenal hyperplasia, most commonly caused by 21-hydroxylase deficiency.
Acromegaly and gigantism (growth hormone and IGF-1 excess), usually due to pituitary tumors.
Causes of hirsutism not related to hyperandrogenism include:
Familial: Family history of hirsutism with normal androgen levels.
Drug-induced: medications used before the onset of hirsutism. The recommendation is to stop the medication and replace it with another.
Minoxidil
Testosterone, danazol, progestins, anabolic steroids, valproic acid, methyldopa
Pregnancy or post-menopause: moderate hirsutism due to prolactin secretion and hyperandrogenism due to decreased estrogen production, respectively.
Idiopathic: When no other cause can be attributed to an individual's hirsutism, the cause is considered idiopathic by exclusion. In these cases, menstrual cycles and androgen levels are normal.
Diagnosis
Hirsutism is a clinical diagnosis of excessive androgenic, terminal hair growth. A complete physical evaluation should be done prior to initiating more extensive studies; the examiner should differentiate between widespread body hair increase and male-pattern virilization. One method of evaluating hirsutism is the Ferriman-Gallwey score, which grades the amount and location of hair growth. The Ferriman-Gallwey score has various cutoffs due to variable expressivity of hair growth based on ethnic background. Diagnosis of patients with even mild hirsutism should include assessment of ovulation and ovarian ultrasound, due to the high prevalence of polycystic ovary syndrome (PCOS), as well as 17α-hydroxyprogesterone (because of the possibility of finding nonclassic 21-hydroxylase deficiency). People with hirsutism may present with an elevated serum dehydroepiandrosterone sulfate (DHEA-S) level; however, additional imaging is required to discriminate between malignant and benign etiologies of adrenal hyperandrogenism. Levels greater than 700 μg/dL are indicative of adrenal gland dysfunction, particularly congenital adrenal hyperplasia due to 21-hydroxylase deficiency. However, PCOS and idiopathic hirsutism make up 90% of cases.
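The Ferriman-Gallwey grading and the DHEA-S threshold above lend themselves to a short worked sketch. The nine-site, 0–4-per-site scoring (total 0–36) is the standard modified form of the score; the total-score cutoff of 8 used here is only an example, since, as noted, cutoffs vary by ethnic background, and all function names are hypothetical:

```python
# Sketch of the (modified) Ferriman-Gallwey score: nine androgen-sensitive
# body areas are each rated 0 (no terminal hair) to 4, giving a total of
# 0-36. The cutoff varies with ethnic background, as noted in the text;
# 8 is used here only as an example.
FG_AREAS = ["upper lip", "chin", "chest", "upper back", "lower back",
            "upper abdomen", "lower abdomen", "upper arms", "thighs"]

def ferriman_gallwey(ratings: dict[str, int], cutoff: int = 8) -> tuple[int, bool]:
    assert all(0 <= ratings.get(area, 0) <= 4 for area in FG_AREAS)
    total = sum(ratings.get(area, 0) for area in FG_AREAS)
    return total, total >= cutoff

# DHEA-S screen from the text: levels above 700 ug/dL point toward adrenal
# dysfunction and warrant imaging to exclude a malignant etiology.
def dheas_flags_adrenal(dheas_ug_dl: float) -> bool:
    return dheas_ug_dl > 700.0

score, hirsute = ferriman_gallwey({"upper lip": 2, "chin": 3, "lower abdomen": 3, "thighs": 1})
print(score, hirsute)  # 9 True
```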
Treatment
Treatment of hirsutism is indicated when hair growth causes patient distress. The two main approaches to treatment are pharmacologic therapies targeting androgen production or action, and direct hair removal methods, including electrolysis and photoepilation. These may be used independently or in combination.
Pharmacologic therapies
Common medications consist of antiandrogens, insulin sensitizers, and oral contraceptive pills. All three types of therapy have demonstrated efficacy on their own; however, insulin sensitizers are shown to be less effective than antiandrogens and oral contraceptive pills. The therapies may be combined, as directed by a physician, in line with the patient's medical goals. Antiandrogens are drugs that block the effects of androgens like testosterone and dihydrotestosterone (DHT) in the body. They are the most effective pharmacologic treatment for patient-important hirsutism; however, they have teratogenic potential and are therefore not recommended in people who are pregnant or desire pregnancy. Current data do not favor any one type of oral contraceptive over another. List of medications:
Spironolactone: An antimineralocorticoid with additional antiandrogenic activity at high dosages
Cyproterone acetate: A dual antiandrogen and progestogen. In addition to single form, it is also available in some formulations of combined oral contraceptives at a low dosage (see below). It has a risk of liver damage.
Flutamide: A pure antiandrogen. It has been found to possess equivalent or greater effectiveness than spironolactone, cyproterone acetate, and finasteride in the treatment of hirsutism. However, it has a high risk of liver damage and hence is no longer recommended as a first- or second-line treatment, even though some studies have reported it as safe and effective.
Bicalutamide: A pure antiandrogen. It is effective similarly to flutamide but is much safer as well as better-tolerated.
Finasteride and dutasteride: 5α-Reductase inhibitors. They inhibit the production of the potent androgen DHT. A meta-analysis showed inconsistent results of finasteride in the treatment of hirsutism.
GnRH analogues: Suppress androgen production by the gonads and reduce androgen concentrations to castrate levels.
Birth control pills that consist of an estrogen, usually ethinylestradiol, and a progestin are supported by the evidence. They are functional antiandrogens. In addition, certain birth control pills contain a progestin that also has antiandrogenic activity. Examples include birth control pills containing cyproterone acetate, chlormadinone acetate, drospirenone, and dienogest.
Metformin: Insulin sensitizer. Antihyperglycemic drug used for diabetes mellitus and treatment of hirsutism associated with insulin resistance (e.g. polycystic ovary syndrome). Metformin appears ineffective in the treatment of hirsutism, although the evidence was of low quality.
Eflornithine: Blocks putrescine that is necessary for the growth of hair follicles
Other methods
Epilation
Waxing
Shaving
Laser hair removal
Electrology
Lifestyle change, including reducing excessive weight and addressing insulin resistance, may be beneficial. Insulin resistance can cause excessive testosterone levels in women, resulting in hirsutism.
See also
Ferriman-Gallwey score
Petrus Gonsalvus
Androgenic hair
Pubic hair
Hypertrichosis
Hair removal
Laser hair removal
Bearded lady
Trichophilia
Polycystic ovary syndrome (PCOS)
Social model of disability
References
External links
Why the Bearded Lady Was Never a Laughing Matter: Hirsutism
The Bearded Lady
Hookworm infection | Hookworm infection is an infection by a type of intestinal parasite known as a hookworm. Initially, itching and a rash may occur at the site of infection. Those only affected by a few worms may show no symptoms. Those infected by many worms may experience abdominal pain, diarrhea, weight loss, and tiredness. The mental and physical development of children may be affected. Anemia may result.Two common hookworm infections in humans are ancylostomiasis and necatoriasis, caused by the species Ancylostoma duodenale and Necator americanus respectively. Hookworm eggs are deposited in the stools of infected people. If these end up in the environment, they can hatch into larvae (immature worms), which can then penetrate the skin. One type can also be spread through contaminated food. Risk factors include walking barefoot in warm climates, where sanitation is poor. Diagnosis is by examination of a stool sample with a microscope.The disease can be prevented on an individual level by not walking barefoot in areas where the disease is common. At a population level, decreasing outdoor defecation, not using raw feces as fertilizer, and mass deworming is effective. Treatment is typically with the medications albendazole or mebendazole for one to three days. Iron supplements may be needed in those with anemia.Hookworms infected about 428 million people in 2015. Heavy infections can occur in both children and adults, but are less common in adults. They are rarely fatal. Hookworm infection is a soil-transmitted helminthiasis and classified as a neglected tropical disease.
Signs and symptoms
No symptoms or signs are specific for hookworm infection, but infection gives rise to a combination of intestinal inflammation and progressive iron-deficiency anemia and protein deficiency. Coughing, chest pain, wheezing, and fever sometimes result from severe infection. Epigastric pain, indigestion, nausea, vomiting, constipation, and diarrhea can occur early or in later stages as well, although gastrointestinal symptoms tend to improve with time. Signs of advanced severe infection are those of anemia and protein deficiency, including emaciation, cardiac failure, and abdominal distension with ascites. Larval invasion of the skin (mostly in the Americas) can produce a skin disease called cutaneous larva migrans, also known as creeping eruption. The hosts of these worms are not human, and the larvae can only penetrate the upper five layers of the skin, where they give rise to intense, local itching, usually on the foot or lower leg, known as ground itch. This infection is due to larvae from the A. braziliense hookworm. The larvae migrate in tortuous tunnels between the stratum basale and stratum corneum of the skin, causing serpiginous vesicular lesions. With advancing movement of the larvae, the rear portions of the lesions become dry and crusty. The lesions are typically intensely itchy.
Incubation period
The incubation period can vary from a few weeks to many months, and is largely dependent on the number of hookworm parasites an individual is infected with.
Cause
Hookworm infections in humans include ancylostomiasis and necatoriasis. Ancylostomiasis is caused by Ancylostoma duodenale, which is the more common type found in the Middle East, North Africa, India, and (formerly) southern Europe. Necatoriasis is caused by Necator americanus, the more common type in the Americas, sub-Saharan Africa, Southeast Asia, China, and Indonesia. Other animals such as birds, dogs, and cats may also be affected. A. tubaeforme infects cats, A. caninum infects dogs, and A. braziliense and Uncinaria stenocephala infect both cats and dogs. Some of these infections can be transmitted to humans.
Morphology
A. duodenale worms are grayish white or pinkish, with the head slightly bent in relation to the rest of the body. This bend forms a definitive hook shape at the anterior end, for which hookworms are named. They possess well-developed mouths with two pairs of teeth. While males measure approximately one centimeter by 0.5 millimeter, the females are often longer and stouter. Additionally, males can be distinguished from females based on the presence of a prominent posterior copulatory bursa. N. americanus is very similar in morphology to A. duodenale. N. americanus is generally smaller than A. duodenale, with males usually 5 to 9 mm long and females about 1 cm long. Whereas A. duodenale possesses two pairs of teeth, N. americanus possesses a pair of cutting plates in the buccal capsule. Additionally, the hook shape is much more defined in Necator than in Ancylostoma.
Life cycle
The hookworm thrives in warm soil where temperatures are over 18 °C (64 °F). They exist primarily in sandy or loamy soil and cannot live in clay or muck. Rainfall averages must be more than 1,000 mm (39 in) a year for them to survive. Only if these conditions exist can the eggs hatch. Infective larvae of N. americanus can survive at higher temperatures, whereas those of A. duodenale are better adapted to cooler climates. Generally, they live for only a few weeks at most under natural conditions, and die almost immediately on exposure to direct sunlight or desiccation. Infection of the host is by the larvae, not the eggs. While A. duodenale can be ingested, the usual method of infection is through the skin; this is commonly caused by walking barefoot through areas contaminated with fecal matter. The larvae are able to penetrate the skin of the foot, and once inside the body, they migrate through the vascular system to the lungs, and from there up the trachea, and are swallowed. They then pass down the esophagus and enter the digestive system, finishing their journey in the intestine, where the larvae mature into adult worms. Once in the host gut, Necator tends to cause a prolonged infection, generally 1 to 5 years (many worms die within a year or two of infection), though some adult worms have been recorded to live for 15 years or more. Ancylostoma adults are short-lived, surviving on average for only about 6 months. However, the infection can be prolonged because dormant larvae can be "recruited" sequentially from tissue "stores" (see Pathophysiology, below) over many years, to replace expired adult worms. This can give rise to seasonal fluctuations in infection prevalence and intensity (apart from normal seasonal variations in transmission).
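The environmental constraints above amount to a simple suitability rule, restated in the sketch below; the function is hypothetical and simply encodes the text's thresholds:

```python
# Rough habitat-suitability rule for hookworm transmission, restating the
# thresholds in the text: soil temperature above 18 C, sandy or loamy soil
# (not clay or muck), and annual rainfall above 1,000 mm.
def soil_supports_hookworm(temp_c: float, soil_type: str, rainfall_mm_yr: float) -> bool:
    suitable_soils = {"sandy", "loamy"}
    return temp_c > 18.0 and soil_type in suitable_soils and rainfall_mm_yr > 1000.0

print(soil_supports_hookworm(24.0, "sandy", 1400.0))  # True
print(soil_supports_hookworm(24.0, "clay", 1400.0))   # False
```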
They mate inside the host, females laying up to 30,000 eggs per day and some 18 to 54 million eggs during their lifetimes, which pass out in feces. Because 5 to 7 weeks are needed for adult worms to mature, mate, and produce eggs, in the early stages of very heavy infection, acute symptoms might occur without any eggs being detected in the patient's feces. This can make diagnosis very difficult. N. americanus and A. duodenale eggs can be found in warm, moist soil where they eventually hatch into first-stage larvae, or L1. L1, the feeding noninfective rhabditoform stage, will feed on soil microbes and eventually molt into second-stage larvae, L2, which is also in the rhabditoform stage. It will feed for about 7 days and then molt into the third-stage larvae, or L3. This is the filariform stage of the parasite, that is, the nonfeeding infective form of the larvae. The L3 larvae are extremely motile and seek higher ground to increase their chances of penetrating the skin of a human host. The L3 larvae can survive up to 2 weeks without finding a host. While N. americanus larvae only infect through penetration of skin, A. duodenale can infect both through penetration and orally. After the L3 larvae have successfully entered the host, they travel through the subcutaneous venules and lymphatic vessels of the human host. Eventually, the L3 larvae enter the lungs through the pulmonary capillaries and break out into the alveoli. They then travel up the trachea to be coughed up and swallowed by the host. After being swallowed, the L3 larvae are then found in the small intestine, where they molt into the L4, or adult worm stage. The entire process from skin penetration to adult development takes about 5–9 weeks. The female adult worms release eggs (N. americanus about 9,000–10,000 eggs/day and A. duodenale 25,000–30,000 eggs/day), which are passed in the feces of the human host. These eggs hatch in the environment within several days and the cycle starts anew.
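The fecundity figures above are internally consistent, which a quick back-of-the-envelope check confirms: at roughly 30,000 eggs per day, a lifetime total of 18–54 million eggs implies an egg-laying span of about 600–1,800 days, in line with the 1–5 year Necator lifespan given earlier.

```python
# Back-of-the-envelope check of the fecundity figures in the text:
# ~30,000 eggs/day against a lifetime total of 18-54 million eggs.
EGGS_PER_DAY = 30_000
for lifetime_eggs in (18_000_000, 54_000_000):
    days = lifetime_eggs / EGGS_PER_DAY
    print(f"{lifetime_eggs:,} eggs -> {days:.0f} days ({days / 365:.1f} years)")
# 18,000,000 eggs -> 600 days (1.6 years)
# 54,000,000 eggs -> 1800 days (4.9 years)
```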
Pathophysiology
Hookworm infection is generally considered to be asymptomatic, but as Norman Stoll described in 1962, it is an extremely dangerous infection because its damage is "silent and insidious." An individual may experience general symptoms soon after infection. Ground itch, which is an allergic reaction at the site of parasitic penetration and entry, is common in patients infected with N. americanus. Additionally, cough and pneumonitis may result as the larvae begin to break into the alveoli and travel up the trachea. Then, once the larvae reach the small intestine of the host and begin to mature, the infected individual will experience diarrhea and other gastrointestinal discomfort. However, the "silent and insidious" symptoms referred to by Stoll are related to chronic, heavy-intensity hookworm infections. Major morbidity associated with hookworm infection is caused by intestinal blood loss, iron-deficiency anemia, and protein malnutrition. These result mainly from adult hookworms in the small intestine ingesting blood, rupturing erythrocytes, and degrading hemoglobin in the host. This long-term blood loss can manifest itself physically through facial and peripheral edema; eosinophilia and pica/geophagy caused by iron-deficiency anemia are also experienced by some hookworm-infected patients. Recently, more attention has been given to other important outcomes of hookworm infection that play a large role in public health. It is now widely accepted that children who have chronic hookworm infection can experience growth retardation as well as intellectual and cognitive impairments. Additionally, recent research has focused on the potential for adverse maternal-fetal outcomes when the mother is infected with hookworm during pregnancy. The disease was linked to nematode worms (Ankylostoma duodenalis) from one-third to half an inch long in the intestine chiefly through the labours of Theodor Bilharz and Griesinger in Egypt (1854). The symptoms can be linked to inflammation in the gut stimulated by feeding hookworms, such as nausea, abdominal pain and intermittent diarrhea, and to progressive anemia in prolonged disease: capricious appetite, pica/geophagy (or dirt-eating), obstinate constipation followed by diarrhea, palpitations, thready pulse, coldness of the skin, pallor of the mucous membranes, fatigue and weakness, shortness of breath and, in cases running a fatal course, dysentery, hemorrhages and edema. The worms suck blood and damage the mucosa. However, the blood loss in the stools is not visibly apparent.
Blood tests in early infection often show a rise in numbers of eosinophils, a type of white blood cell that is preferentially stimulated by worm infections in tissues (large numbers of eosinophils are also present in the local inflammatory response). Falling blood hemoglobin levels will be seen in cases of prolonged infection with anemia.
In contrast to most intestinal helminthiases, where the heaviest parasitic loads tend to occur in children, hookworm prevalence and intensity can be higher among adult males. The explanation for this is that hookworm infection tends to be occupational, so that coworkers and other close groups maintain a high prevalence of infection among themselves by contaminating their work environment. However, in most endemic areas, adult women are the most severely affected by anemia, mainly because they have much higher physiological needs for iron (menstruation, repeated pregnancy). An interesting consequence of this in the case of Ancylostoma duodenale infection is translactational transmission of infection: the skin-invasive larvae of this species do not all immediately pass through the lungs and on into the gut, but spread around the body via the circulation, to become dormant inside muscle fibers. In a pregnant woman, after childbirth some or all of these larvae are stimulated to re-enter the circulation (presumably by sudden hormonal changes), and then to pass into the mammary glands, so that the newborn baby can receive a large dose of infective larvae through its mother's milk. This accounts for otherwise inexplicable cases of very heavy, even fatal, hookworm infections in children a month or so of age, in places such as China, India and northern Australia.
An identical phenomenon is much more commonly seen with Ancylostoma caninum infections in dogs, where the newborn pups can even die of hemorrhaging from their intestines caused by massive numbers of feeding hookworms. This also reflects the close evolutionary link between the human and canine parasites, which probably have a common ancestor dating back to when humans and dogs first started living closely together.
The filariform larva is the infective stage of the parasite: infection occurs when larvae in the soil penetrate the skin, or when they are ingested through contaminated food and water.
Diagnosis
Diagnosis depends on finding characteristic worm eggs on microscopic examination of the stools, although this is not possible in early infection. The eggs are oval or elliptical, measuring 60 by 40 µm, colorless, not bile stained, and with a thin transparent hyaline shell membrane. When released by the worm in the intestine, the egg contains an unsegmented ovum. During its passage down the intestine, the ovum develops, and thus the eggs passed in feces have a segmented ovum, usually with 4 to 8 blastomeres.
As the eggs of both Ancylostoma and Necator (and most other hookworm species) are indistinguishable, to identify the genus they must be cultured in the lab to allow larvae to hatch out. If the fecal sample is left for a day or more under tropical conditions, the larvae will have hatched out, so eggs might no longer be evident. In such a case, it is essential to distinguish hookworms from Strongyloides larvae, as infection with the latter has more serious implications and requires different management. The larvae of the two hookworm species can also be distinguished microscopically, although this would not be done routinely, but rather for research purposes. Adult worms are rarely seen (except via endoscopy, surgery or autopsy), but if found, would allow definitive identification of the species. Classification can be performed based on the length of the buccal cavity, the space between the oral opening and the esophagus: hookworm rhabditoform larvae have long buccal cavities, whereas Strongyloides rhabditoform larvae have short buccal cavities. Recent research has focused on the development of DNA-based tools for diagnosis of infection, specific identification of hookworm, and analysis of genetic variability within hookworm populations. Because hookworm eggs are often indistinguishable from other parasitic eggs, PCR assays could serve as a molecular approach for accurate diagnosis of hookworm in the feces.
Prevention
The infective larvae develop and survive in an environment of damp dirt, particularly sandy and loamy soil. They cannot survive in clay or muck. The main lines of precaution are those dictated by good hygiene behaviors:
Do not defecate in the open, but rather in toilets.
Do not use untreated human excreta or raw sewage as fertilizer in agriculture.
Do not walk barefoot in known infected areas.
Deworm pet dogs and cats. Canine and feline hookworms rarely develop to adulthood in humans. Ancylostoma caninum, the common dog hookworm, occasionally develops into an adult to cause eosinophilic enteritis in people, but its invasive larvae can cause an itchy rash called cutaneous larva migrans. Moxidectin is available in the United States as an (imidacloprid + moxidectin) topical solution for dogs and cats. It utilizes moxidectin for control and prevention of roundworms, hookworms, heartworms, and whipworms.
Children
Most of these public health concerns have focused on children who are infected with hookworm. This focus on children is largely due to the large body of evidence that has demonstrated strong associations between hookworm infection and impaired learning, increased absences from school, and decreased future economic productivity. In 2001, the 54th World Health Assembly passed a resolution demanding that member states attain a minimum target of regular deworming of at least 75% of all at-risk school children by the year 2010. A 2008 World Health Organization publication reported on these efforts to treat at-risk school children. Some of the key statistics were as follows: 1) only 9 out of 130 endemic countries were able to reach the 75% target goal; and 2) fewer than 77 million school-aged children (of the total 878 million at risk) were reached, meaning that only about 8.8% of at-risk children were being treated for hookworm infection.
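The coverage arithmetic behind these figures is easy to reproduce: 77 million of 878 million is about 8.8%, far short of the 75% World Health Assembly target. The variable names below are illustrative:

```python
# Coverage arithmetic from the 2008 WHO figures quoted above.
treated = 77_000_000   # "fewer than 77 million" school-aged children reached
at_risk = 878_000_000  # total school-aged children at risk
target = 0.75          # 2001 World Health Assembly target: 75% by 2010
coverage = treated / at_risk
print(f"coverage = {coverage:.2%}, shortfall vs target = {target - coverage:.2%}")
# coverage = 8.77%, shortfall vs target = 66.23%
```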
School-based mass deworming
School-based mass deworming programs have been the most popular strategy to address the issue of hookworm infection in children. School-based programs are extremely cost-effective, as schools already have an available, extensive, and sustained infrastructure with a skilled workforce that has a close relationship with the community. With little training from a local health system, teachers can easily administer the drugs, which often cost less than US$0.50 per child per year. Recently, many people have begun to question whether school-based programs are necessarily the most effective approach. An important concern with school-based programs is that they often do not reach children who do not attend school, thus ignoring a large number of at-risk children. A 2008 study by Massa et al. continued the debate regarding school-based programs. They examined the effects of community-directed treatments versus school-based treatments in the Tanga Region of Tanzania. A major conclusion was that the mean infection intensity of hookworm was significantly lower in the villages employing the community-directed treatment approach than in those using the school-based approach. The community-directed treatment model used in this specific study allowed villagers to take control of the child's treatment by having villagers select their own community drug distributors to administer the antihelminthic drugs. Additionally, villagers organized and implemented their own methods for distributing the drugs to all children. The positive results associated with this new model highlight the need for large-scale community involvement in deworming campaigns.
Public health education
Many mass deworming programs also combine their efforts with public health education. These health education programs often stress important preventive techniques such as washing your hands before eating and staying away from water or areas contaminated by human feces. These programs may also stress that shoes should be worn; however, wearing shoes carries its own health risks and may not be effective. Shoe-wearing patterns in towns and villages across the globe are determined by cultural beliefs and the levels of education within that society. The wearing of shoes will prevent the entry of hookworm infections from the surrounding soils into tender skin regions, such as the areas between the toes.
Sanitation
Historical examples, such as the hookworm campaigns in Mississippi and Florida from 1943 to 1947, have shown that the primary cause of hookworm infection is poor sanitation, which can be solved by building and maintaining toilets. But while these may seem like simple tasks, they raise important public health challenges. Most infected populations are from poverty-stricken areas with very poor sanitation. Thus, it is most likely that at-risk children do not have access to clean water to wash their hands and live in environments with no proper sanitation infrastructure. Health education, therefore, must address preventive measures in ways that are both feasible and sustainable in the context of resource-limited settings.
Integrated approaches
Evaluation of numerous public health interventions has generally shown that improvement in each individual component ordinarily attributed to poverty (for example, sanitation, health education, and underlying nutrition status) often has minimal impact on transmission. For example, one study found that the introduction of latrines into a resource-limited community reduced the prevalence of hookworm infection by only four percent. However, another study in Salvador, Brazil, found that improved drainage and sewerage had a significant impact on the prevalence of hookworm infection but no impact at all on the intensity of hookworm infection. This seems to suggest that environmental control alone has a limited but incomplete effect on the transmission of hookworms. It is imperative, therefore, that more research be performed to understand the efficacy and sustainability of integrated programs that combine numerous preventive methods, including education, sanitation, and treatment.
Treatment
Anthelmintic drugs
The most common treatments for hookworm are benzimidazoles (BZAs), specifically albendazole and mebendazole. BZAs kill adult worms by binding to the nematode's β-tubulin and subsequently inhibiting microtubule polymerization within the parasite. In certain circumstances, levamisole and pyrantel pamoate may be used. A 2008 review found that the efficacy of single-dose treatments for hookworm infections was as follows: 72% for albendazole, 15% for mebendazole, and 31% for pyrantel pamoate. This substantiates prior claims that albendazole is much more effective than mebendazole for hookworm infections. Also of note is that the World Health Organization does recommend anthelmintic treatment in pregnant women after the first trimester. It is also recommended that if the patient also has anemia, ferrous sulfate (200 mg) be administered three times daily at the same time as anthelmintic treatment; this should be continued until hemoglobin values return to normal, which could take up to 3 months. Hookworm infection can be treated with local cryotherapy when the hookworm is still in the skin. Albendazole is effective both in the intestinal stage and during the stage when the parasite is still migrating under the skin. In case of anemia, iron supplementation can relieve symptoms of iron-deficiency anemia. However, as red blood cell levels are restored, shortages of other essentials such as folic acid or vitamin B12 may develop, so these might also be supplemented.
During the 1910s, common treatments for hookworm included thymol, 2-naphthol, chloroform, gasoline, and eucalyptus oil. By the 1940s, the treatment of choice used tetrachloroethylene, given as 3 to 4 cc in the fasting state, followed by 30 to 45 g of sodium sulfate. Tetrachloroethylene was reported to have a cure rate of 80 percent for Necator infections, but 25 percent in Ancylostoma infections, and often produced mild intoxication in the patient.
Reinfection and drug resistance
Other important issues related to the treatment of hookworm are reinfection and drug resistance. It has been shown that reinfection after treatment can be extremely high. Some studies even show that 80% of pretreatment hookworm infection rates can be seen in treated communities within 30–36 months. While reinfection may occur, it is still recommended that regular treatments be conducted, as they will minimize the occurrence of chronic outcomes. There are also increasing concerns about the issue of drug resistance. Drug resistance has appeared in front-line anthelmintics used for livestock nematodes. Generally, human nematodes are less likely to develop resistance due to longer reproducing times, less frequent treatment, and more targeted treatment. Nonetheless, the global community must be careful to maintain the effectiveness of current anthelmintics, as no new anthelmintic drugs are in late-stage development.
Epidemiology
It is estimated that between 576 and 740 million individuals are infected with hookworm. Of these infected individuals, about 80 million are severely affected. The major cause of hookworm infection is N. americanus, which is found in the Americas, sub-Saharan Africa, and Asia. A. duodenale is found in more scattered focal environments, namely Europe and the Mediterranean. Most infected individuals are concentrated in sub-Saharan Africa and East Asia/the Pacific Islands, with each region having estimates of 198 million and 149 million infected individuals, respectively. Other affected regions include South Asia (59 million), Latin America and the Caribbean (50 million), and the Middle East/North Africa (10 million). A majority of these infected individuals live in poverty-stricken areas with poor sanitation. Hookworm infection is most concentrated among the world's poorest, who live on less than $2 a day. While hookworm infection may not directly lead to mortality, its effects on morbidity demand immediate attention. When considering disability-adjusted life years (DALYs), neglected tropical diseases, including hookworm infection, rank among diarrheal diseases, ischemic heart disease, malaria, and tuberculosis as one of the most important health problems of the developing world.
It has been estimated that as many as 22.1 million DALYs have been lost due to hookworm infection. Recently, there has been increasing interest in addressing the public health concerns associated with hookworm infection. For example, the Bill & Melinda Gates Foundation recently donated US$34 million to fight neglected tropical diseases, including hookworm infection. Former US President Clinton also announced a mega-commitment at the Clinton Global Initiative (CGI) 2008 Annual Meeting to deworm 10 million children. Many of the numbers regarding the prevalence of hookworm infection are estimates, as there is no international surveillance mechanism currently in place to determine prevalence and global distribution. Some prevalence rates have been measured through survey data in endemic regions around the world. The following are some of the most recent findings on prevalence rates in regions endemic with hookworm.
Darjeeling, Hooghly District, West Bengal, India (Pal et al. 2007)
43% infection rate of predominantly N. americanus, although with some A. duodenale infection
Both hookworm infection load and degree of anemia in the mild range
Xiulongkan Village, Hainan Province, China (Gandhi et al. 2001)
60% infection rate of predominantly N. americanus
Important trends noted were that prevalence increased with age (plateauing at about 41 years) and that women had higher prevalence rates than men
Hòa Bình, Northwest Vietnam (Verle et al. 2003)
52% of a total of 526 tested households infected
Could not identify species, but previous studies in North Vietnam reported N. americanus in more than 95% of hookworm larvae
Minas Gerais, Brazil (Fleming et al. 2006)
63% infection rate of predominantly N. americanus
KwaZulu-Natal, South Africa (Mabaso et al. 2004)
Inland areas had a prevalence rate of 9% of N. americanus
Coastal plain areas had a prevalence rate of 63% of N. americanus
Lowndes County, Alabama, United States
35% infection rate of predominantly N. americanus
There have also been technological developments that may facilitate more accurate mapping of hookworm prevalence. Some researchers have begun to use geographical information systems (GIS) and remote sensing (RS) to examine helminth ecology and epidemiology. Brooker et al. utilized this technology to create helminth distribution maps of sub-Saharan Africa. By relating satellite-derived environmental data with prevalence data from school-based surveys, they were able to create detailed prevalence maps. The study focused on a wide range of helminths, but interesting conclusions about hookworm specifically were found. As compared to other helminths, hookworm is able to survive in much hotter conditions and was highly prevalent throughout the upper end of the thermal range. Improved molecular diagnostic tools are another technological advancement that could help improve existing prevalence statistics. Recent research has focused on the development of a DNA-based tool that can be used for diagnosis of infection, specific identification of hookworm, and analysis of genetic variability in hookworm populations. Again, this can serve as a major tool for different public health measures against hookworm infection. Most research regarding diagnostic tools is now focused on the creation of a rapid and cost-effective assay for the specific diagnosis of hookworm infection. Many are hopeful that its development can be achieved within the next five years.
History
Discovery
The symptoms now attributed to hookworm appear in papyrus papers of ancient Egypt (c. 1500 BC), described as a derangement characterized by anemia. Avicenna, a Persian physician of the eleventh century, discovered the worm in several of his patients and related it to their disease. In later times, the condition was noticeably prevalent in the mining industry in England, France, Germany, Belgium, North Queensland, and elsewhere. Italian physician Angelo Dubini was the modern-day discoverer of the worm in 1838, after an autopsy of a peasant woman. Dubini published details in 1843 and identified the species as A. duodenale. Working in the Egyptian medical system in 1852, German physician Theodor Bilharz, drawing upon the work of colleague Wilhelm Griesinger, found these worms during autopsies and went a step further in linking them to local endemic occurrences of chlorosis, which would probably be called iron-deficiency anemia today.
A breakthrough came 25 years later following a diarrhea and anemia epidemic that took place among Italian workmen employed on the Gotthard Rail Tunnel. In an 1880 paper, physicians Camillo Bozzolo, Edoardo Perroncito, and Luigi Pagliani correctly hypothesized that hookworm was linked to the fact that workers had to defecate inside the 15 km tunnel, and that many wore worn-out shoes. The work environment often contained standing water, sometimes knee-deep, and the larvae were capable of surviving several weeks in the water, allowing them to infect many of the workers. In 1897, it was established that the skin was the principal avenue of infection and the biological life cycle of the hookworm was clarified.
Eradication programmes
In 1899, American zoologist Charles Wardell Stiles identified progressive pernicious anemia seen in the southern United States as being caused by the hookworm A. duodenale. Testing in the 1900s revealed very heavy infestations in school-age children. In Puerto Rico, Dr. Bailey K. Ashford, a US Army physician, organized and conducted a parasite treatment campaign, which cured approximately 300,000 people (one-third of the Puerto Rican population) and reduced the death rate from this anemia by 90 percent during the years 1903–04.
On October 26, 1909, the Rockefeller Sanitary Commission for the Eradication of Hookworm Disease was organized as a result of a gift of US$1 million from John D. Rockefeller, Sr. The five-year program was a remarkable success and a great contribution to the United States public health, instilling public education, medication, field work and modern government health departments in eleven southern states.
The hookworm exhibit was a prominent part of the 1910 Mississippi state fair.
The commission found that an average of 40% of school-aged children were infected with hookworm. Areas with higher levels of hookworm infection prior to the eradication program experienced greater increases in school enrollment, attendance, and literacy after the intervention. Econometric studies have shown that this effect cannot be explained by a variety of alternative factors, including differential trends across areas, changing crop prices, shifts in certain educational and health policies, and the effect of malaria eradication. No significant contemporaneous results were found for adults, who should have benefited less from the intervention owing to their substantially lower (prior) infection rates. The program nearly eradicated hookworm and would flourish afterward with new funding as the Rockefeller Foundation International Health Division. The RF's hookworm campaign in Mexico showed how science and politics play a role in developing health policies. It brought together government officials, health officials, public health workers, Rockefeller officials and the community. This campaign was launched to eradicate hookworm in Mexico. Although the campaign did not focus on long-term treatments, it did set the terms of the relationship between Mexico and the Rockefeller Foundation. The scientific knowledge behind this campaign helped shape public health policy, improved public health, and built a strong relationship between the US and Mexico. In the 1920s, hookworm eradication reached the Caribbean and Latin America, where great mortality had been reported among people in the West Indies towards the end of the 18th century, as well as through descriptions sent from Brazil and various other tropical and sub-tropical regions.
Treatments
Treatment in the early 20th century relied on the use of Epsom salt to reduce protective mucus, followed by thymol to kill the worms. By the 1940s, tetrachloroethylene was the leading method. It was not until later in the mid-20th century when new organic drug compounds were developed.
Research
Anemia in pregnancy
It is estimated that a third of all pregnant women in developing countries are infected with hookworm, 56% of all pregnant women in developing countries experience anemia, and 20% of all maternal deaths are either directly or indirectly related to anemia. Numbers like these have led to an increased interest in the topic of hookworm-related anemia during pregnancy. With the understanding that chronic hookworm infection can often lead to anemia, many people are now questioning whether the treatment of hookworm could effect change in severe anemia rates and thus also in maternal and child health. Most evidence suggests that the contribution of hookworm to maternal anemia merits that all women of child-bearing age living in endemic areas be subject to periodic anthelmintic treatment. The World Health Organization even recommends that infected pregnant women be treated after their first trimester. Regardless of these suggestions, only Madagascar, Nepal and Sri Lanka have added deworming to their antenatal care programs. This lack of deworming of pregnant women is explained by the fact that most individuals still fear that anthelmintic treatment will result in adverse birth outcomes. However, a 2006 study by Gyorkos et al. found that when comparing a group of pregnant women treated with mebendazole with a control placebo group, both showed rather similar rates of adverse birth outcomes. The treated group demonstrated 5.6% adverse birth outcomes, while the control group had 6.25% adverse birth outcomes. Furthermore, Larocque et al. illustrated that treatment for hookworm infection actually led to positive health results in the infant. This study concluded that treatment with mebendazole plus iron supplements during antenatal care significantly reduced the proportion of very low birth weight infants when compared to a placebo control group. Studies so far have validated recommendations to treat infected pregnant women for hookworm infection during pregnancy.
A review found that a single dose of antihelminthics (anti-worm drugs) given in the second trimester of pregnancy "may reduce maternal anaemia and worm prevalence when used in settings with high prevalence of maternal helminthiasis". The intensity of hookworm infection as well as the species of hookworm have yet to be studied as they relate to hookworm-related anemia during pregnancy. Additionally, more research must be done in different regions of the world to see if trends noted in completed studies persist.
Malaria co-infection
Co-infection with hookworm and Plasmodium falciparum is common in Africa. Although exact numbers are unknown, preliminary analyses estimate that as many as a quarter of African schoolchildren (17.8–32.1 million children aged 5–14 years) may be coincidentally at risk of both P. falciparum and hookworm. While original hypotheses stated that co-infection with multiple parasites would impair the host's immune response to a single parasite and increase susceptibility to clinical disease, studies have yielded contrasting results. For example, one study in Senegal showed that the risk of clinical malaria infection was increased in helminth-infected children in comparison to helminth-free children, while other studies have failed to reproduce such results, and even among laboratory mouse experiments the effect of helminths on malaria is variable. Some hypotheses and studies suggest that helminth infections may protect against cerebral malaria due to possible modulation of pro-inflammatory and anti-inflammatory cytokine responses. Furthermore, the mechanisms underlying this supposed increased susceptibility to disease are unknown. For example, helminth infections cause a potent and highly polarized immune response characterized by increased T-helper cell type 2 (Th2) cytokine and immunoglobulin E (IgE) production. However, the effect of such responses on the human immune response is unknown. Additionally, both malaria and helminth infection can cause anemia, but the effect of co-infection and possible enhancement of anemia is poorly understood.
Hygiene hypothesis and hookworm as therapy
The hygiene hypothesis states that infants and children who lack exposure to infectious agents are more susceptible to allergic diseases via modulation of immune system development. The theory was first proposed by David P. Strachan who noted that hay fever and eczema were less common in children who belonged to large families. Since then, studies have noted the effect of gastrointestinal worms on the development of allergies in the developing world. For example, a study in Gambia found that eradication of worms in some villages led to increased skin reactions to allergies among children.
Vaccines
While annual or semi-annual mass anthelmintic administration is a critical aspect of any public health intervention, many have begun to recognize how unsustainable it is, owing to factors such as poverty, high rates of re-infection, and diminished efficacy of drugs with repeated use. Current research has therefore focused on developing a vaccine that could be integrated into existing control programs. The goal of vaccine development is not necessarily a vaccine with sterilizing immunity or complete protection against infection. A vaccine that reduces the likelihood of vaccinated individuals developing severe infections, and thus the associated loss of blood and nutrients, could still have a significant impact on the high burden of this disease throughout the world.
Current research focuses on targeting two stages in the development of the worm: the larval stage and the adult stage. Research on larval antigens has focused on proteins that are members of the pathogenesis-related protein superfamily, the Ancylostoma Secreted Proteins. Although they were first described in Ancylostoma, these proteins have also been successfully isolated from the secreted product of N. americanus. N. americanus ASP-2 (Na-ASP-2) is currently the leading larval-stage hookworm vaccine candidate. A randomized, double-blind, placebo-controlled study has already been performed; 36 healthy adults without a history of hookworm infection were given three intramuscular injections of three different concentrations of Na-ASP-2 and observed for six months after the final vaccination. The vaccine induced significant anti-Na-ASP-2 IgG and cellular immune responses. In addition, it was safe and produced no debilitating side effects. The vaccine is now in a phase one trial; healthy adult volunteers with documented evidence of previous infection in Brazil are being given the same dose concentration on the same schedule used in the initial study. If this study is successful, the next step would be to conduct a phase two trial to assess the rate and intensity of hookworm infection among vaccinated persons. Because the Na-ASP-2 vaccine only targets the larval stage, it is critical that all subjects enrolled in the study be treated with anthelmintic drugs to eliminate adult worms prior to vaccination.
Adult hookworm antigens have also been identified as potential vaccine candidates. When adult worms attach to the intestinal mucosa of the human host, erythrocytes are ruptured in the worm's digestive tract, causing the release of free hemoglobin, which is subsequently degraded by a proteolytic cascade. Several of the proteins responsible for this proteolytic cascade are also essential for the worm's nutrition and survival. Therefore, a vaccine that could induce antibodies against these antigens could interfere with the hookworm's digestive pathway and impair the worm's survival. Three such proteins have been identified: the aspartic protease-hemoglobinase APR-1, the cysteine protease-hemoglobinase CP-2, and a glutathione S-transferase. Vaccination with APR-1 and CP-2 led to reduced host blood loss and fecal egg counts in dogs; with APR-1, vaccination even led to reduced worm burden. Work is now directed at developing at least one of these antigens as a recombinant protein for testing in clinical trials.
Terminology
The term "hookworm" is sometimes used to refer to hookworm infection. A hookworm is a type of parasitic worm (helminth).
See also
List of parasites (human)
References
External links
CDC Department of Parasitic Diseases images of the hookworm life cycle
Centers for Disease Control and Prevention
Dog hookworm (Ancylostoma caninum) at MetaPathogen: facts, life cycle, references
Human hookworms (Ancylostoma duodenale and Necator americanus) at MetaPathogen: facts, life cycle, references | 187 |
Molar pregnancy | A molar pregnancy, also known as a hydatidiform mole, is an abnormal form of pregnancy in which a non-viable fertilized egg implants in the uterus. A molar pregnancy is a type of gestational trophoblastic disease that used to be known as a hydatidiform mole. A molar pregnancy grows into a mass in the uterus that has swollen chorionic villi that grow in clusters resembling grapes. A molar pregnancy can develop when a fertilized egg does not contain an original maternal nucleus. The products of conception may or may not contain fetal tissue. Molar pregnancies are categorized as partial moles or complete moles, with the word mole being used to denote simply a clump of growing tissue, or a growth.
A complete mole is caused by a single sperm (90% of the time) or two sperm (10% of the time) combining with an egg which has lost its DNA. In the first case, the sperm then reduplicates, forming a "complete" 46-chromosome set. The genotype is typically 46,XX (diploid) due to the subsequent mitosis of the fertilizing sperm, but can also be 46,XY (diploid); 46,YY (diploid) is not observed. In contrast, a partial mole occurs when a normal egg is fertilized by one or two sperm, with reduplication of the paternal contribution, yielding the genotypes 69,XXY (triploid) or 92,XXXY (tetraploid).

Complete moles have a 2–4% risk of developing into choriocarcinoma in Western countries and 10–15% in Eastern countries, and a 15% risk of becoming an invasive mole. Incomplete moles can become invasive (<5% risk) but are not associated with choriocarcinoma. Complete hydatidiform moles account for 50% of all cases of choriocarcinoma.
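The ploidy arithmetic described above can be summarized mechanically: each haploid gamete contributes 23 chromosomes, and the mole type follows from how many haploid sets end up in the conceptus. A minimal Python sketch, counting chromosome sets only and ignoring which sex chromosomes are contributed:

HAPLOID = 23  # chromosomes per gamete

MECHANISMS = {
    "empty egg + 1 sperm, then duplication (complete mole)": 2 * HAPLOID,
    "empty egg + 2 sperm (complete mole)": 2 * HAPLOID,
    "normal egg + 1 sperm, then duplication (partial mole)": 3 * HAPLOID,
    "normal egg + 2 sperm (partial mole)": 3 * HAPLOID,
}

for mechanism, n in MECHANISMS.items():
    print(f"{mechanism}: {n} chromosomes")  # 46 for complete, 69 for partial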
Molar pregnancies are a relatively rare complication of pregnancy, making up 1 in 1,000 pregnancies in the US, with much higher rates in Asia (e.g. up to 1 in 100 pregnancies in Indonesia).
Signs and symptoms
Molar pregnancies usually present with painless vaginal bleeding in the fourth to fifth months of pregnancy. The uterus may be larger than expected, or the ovaries may be enlarged. There may also be more vomiting than would be expected (hyperemesis). Sometimes there is an increase in blood pressure along with protein in the urine. Blood tests will show very high levels of human chorionic gonadotropin (hCG).
Cause
The cause of this condition is not completely understood. Potential risk factors may include defects in the egg, abnormalities within the uterus, or nutritional deficiencies. Women under 20 or over 40 years of age have a higher risk. Other risk factors include diets low in protein, folic acid, and carotene. The diploid set of sperm-only DNA means that all chromosomes have sperm-patterned methylation suppression of genes. This leads to overgrowth of the syncytiotrophoblast whereas dual egg-patterned methylation leads to a devotion of resources to the embryo, with an underdeveloped syncytiotrophoblast. This is considered to be the result of evolutionary competition, with male genes driving for high investment into the fetus versus female genes driving for resource restriction to maximise the number of children.
Pathophysiology
A hydatidiform mole is a pregnancy/conceptus in which the placenta contains grapelike vesicles (small sacs) that are usually visible to the naked eye. The vesicles arise by distention of the chorionic villi by fluid. When inspected under the microscope, hyperplasia of the trophoblastic tissue is noted. If left untreated, a hydatidiform mole will almost always end as a spontaneous abortion (miscarriage).
Based on morphology, hydatidiform moles can be divided into two types: in complete moles, all the chorionic villi are vesicular, and no sign of embryonic or fetal development is present. In partial moles some villi are vesicular, whereas others appear more normal, and embryonic/fetal development may be seen but the fetus is always malformed and is never viable.
In rare cases a hydatidiform mole co-exists in the uterus with a normal, viable fetus. These cases are due to twinning. The uterus contains the products of two conceptions: one with an abnormal placenta and no viable fetus (the mole), and one with a normal placenta and a viable fetus. Under careful surveillance it is often possible for the woman to give birth to the normal child and to be cured of the mole.
Parental origin
In most complete moles, all nuclear genes are inherited from the father only (androgenesis). In approximately 80% of these androgenetic moles, the most probable mechanism is that an empty egg is fertilized by a single sperm, followed by a duplication of all chromosomes/genes (a process called endoreduplication). In approximately 20% of complete moles, the most probable mechanism is that an empty egg is fertilized by two sperm. In both cases, the moles are diploid (i.e. there are two copies of every chromosome). In all these cases, the mitochondrial genes are inherited from the mother, as usual.
Most partial moles are triploid (three chromosome sets). The nucleus contains one maternal set of genes and two paternal sets. The mechanism is usually the reduplication of the paternal haploid set from a single sperm, but may also be the consequence of dispermic (two sperm) fertilization of the egg.

In rare cases, hydatidiform moles are tetraploid (four chromosome sets) or have other chromosome abnormalities.
A small percentage of hydatidiform moles have biparental diploid genomes, as in normal living persons; they have two sets of chromosomes, one inherited from each biological parent. Some of these moles occur in women who carry mutations in the gene NLRP7, predisposing them towards molar pregnancy. These rare variants of hydatidiform mole may be complete or partial.
Diagnosis
The diagnosis is strongly suggested by ultrasound (sonogram), but definitive diagnosis requires histopathological examination. On ultrasound, the mole resembles a bunch of grapes ("cluster of grapes" or "honeycombed uterus" or "snow-storm"). There is increased trophoblast proliferation and enlarging of the chorionic villi, and angiogenesis in the trophoblasts is impaired.

Sometimes symptoms of hyperthyroidism are seen, due to the extremely high levels of hCG, which can mimic the effects of thyroid-stimulating hormone.
Treatment
Hydatidiform moles should be treated by evacuating the uterus by uterine suction or by surgical curettage as soon as possible after diagnosis, in order to avoid the risks of choriocarcinoma. Patients are followed up until their serum human chorionic gonadotrophin (hCG) level has fallen to an undetectable level. Invasive or metastatic moles (cancer) may require chemotherapy and often respond well to methotrexate. As they contain paternal antigens, the response to treatment is nearly 100%. Patients are advised not to conceive for half a year after hCG levels have normalized. The chances of having another molar pregnancy are approximately 1%.
Management is more complicated when the mole occurs together with one or more normal fetuses.
In some women, the growth can develop into gestational trophoblastic neoplasia. For women who have complete hydatidiform mole and are at high risk of this progression, evidence suggests giving prophylactic chemotherapy (known as P-chem) may reduce the risk of this happening. However P-chem may also increase toxic side effects, so more research is needed to explore its effects.
Anesthesia
Uterine curettage is generally done under anesthesia, preferably spinal anesthesia in hemodynamically stable patients. The advantages of spinal anesthesia over general anesthesia include ease of technique, favorable effects on the pulmonary system, safety in patients with hyperthyroidism, and non-tocolytic pharmacological properties. Additionally, by maintaining the patient's consciousness, complications such as uterine perforation, cardiopulmonary distress, and thyroid storm can be diagnosed at an earlier stage than when the patient is sedated or under general anesthesia.
Prognosis
More than 80% of hydatidiform moles are benign. The outcome after treatment is usually excellent. Close follow-up is essential to ensure that treatment has been successful. Highly effective means of contraception are recommended to avoid pregnancy for at least 6 to 12 months. Women who have had a prior partial or complete mole have a slightly increased risk of a second hydatidiform mole in a subsequent pregnancy, meaning a future pregnancy will require an earlier ultrasound scan.

In 10 to 15% of cases, hydatidiform moles may develop into invasive moles. This condition is named persistent trophoblastic disease (PTD). The moles may intrude so far into the uterine wall that hemorrhage or other complications develop. It is for this reason that a post-operative full abdominal and chest X-ray will often be requested.
In 2 to 3% of cases, hydatidiform moles may develop into choriocarcinoma, which is a malignant, rapidly growing, and metastatic (spreading) form of cancer. Despite these factors which normally indicate a poor prognosis, the rate of cure after treatment with chemotherapy is high.
Over 90% of women with malignant, non-spreading cancer are able to survive and retain their ability to conceive and bear children. In those with metastatic (spreading) cancer, remission remains at 75 to 85%, although their childbearing ability is usually lost.
Epidemiology
Hydatidiform moles are a rare complication of pregnancy, occurring once in every 1,000 pregnancies in the US, with much higher rates in Asia (e.g. up to one in 100 pregnancies in Indonesia).
Etymology
The etymology is derived from hydatis (Greek "a drop of water"), referring to the watery contents of the cysts, and mole (from Latin mola = millstone/false conception). The term, however, comes from the similar appearance of the cyst to a hydatid cyst in echinococcosis.
References
External links
Humpath #3186 (Pathology images)
Clinically reviewed molar pregnancy and choriocarcinoma information for patients from Cancer Research UK
MyMolarPregnancy.com: Resource for those diagnosed with molar pregnancy. Links, personal stories, and support groups. | 188 |
Hyperaldosteronism | Hyperaldosteronism is a medical condition wherein too much aldosterone is produced by the adrenal glands, which can lead to lowered levels of potassium in the blood (hypokalemia) and increased hydrogen ion excretion (alkalosis).
The most common cause of mineralocorticoid excess is primary hyperaldosteronism, reflecting excess production of aldosterone by the adrenal zona glomerulosa. Bilateral micronodular hyperplasia is more common than unilateral adrenal adenoma.
Signs and symptoms
It can be asymptomatic, but these symptoms may be present:
Fatigue
Headache
High blood pressure
Hypokalemia
Hypernatraemia
Hypomagnesemia
Intermittent or temporary paralysis
Muscle spasms
Muscle weakness
Numbness
Polyuria
Polydipsia
Tingling
Metabolic alkalosis
Nocturia
Blurry Vision
Dizziness/Vertigo
Cause
The causes of primary hyperaldosteronism are adrenal hyperplasia and adrenal adenoma (Conn's syndrome).
These cause hyperplasia of aldosterone-producing cells of the adrenal cortex resulting in primary hyperaldosteronism.
The causes of secondary hyperaldosteronism are accessory renal veins, fibromuscular dysplasia, reninoma, renal tubular acidosis, nutcracker syndrome, ectopic tumors, massive ascites, left ventricular failure, and cor pulmonale.
These act either by decreasing circulating fluid volume or by decreasing cardiac output, with a resulting increase in renin release leading to secondary hyperaldosteronism. Secondary hyperaldosteronism can also be caused by proximal renal tubular acidosis.
Secondary hyperaldosteronism can also be a symptom of the genetic conditions Bartter's syndrome and Gitelman's syndrome.
Diagnosis
On blood testing, the aldosterone-to-renin ratio is abnormally increased in primary hyperaldosteronism, and decreased or normal, but with high renin, in secondary hyperaldosteronism.
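A minimal sketch of that screening arithmetic in Python. The cutoff below (ARR > 30, with aldosterone in ng/dL and plasma renin activity in ng/mL/h) is one commonly quoted convention and is given here as an assumption rather than a universal standard; laboratories use different units and thresholds.

def aldosterone_renin_ratio(aldo_ng_dl, pra_ng_ml_h):
    # Aldosterone-to-renin ratio from plasma measurements.
    return aldo_ng_dl / pra_ng_ml_h

def interpret(aldo_ng_dl, pra_ng_ml_h, cutoff=30.0):
    arr = aldosterone_renin_ratio(aldo_ng_dl, pra_ng_ml_h)
    if arr > cutoff:
        return f"ARR {arr:.1f}: pattern suggests primary hyperaldosteronism"
    return f"ARR {arr:.1f}: not suggestive of primary disease; high renin points to secondary causes"

print(interpret(25.0, 0.3))  # suppressed renin, ARR ~83: primary pattern
print(interpret(30.0, 4.0))  # high renin, ARR 7.5: secondary pattern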
Types
In endocrinology, the terms primary and secondary are used to describe the abnormality (e.g., elevated aldosterone) in relation to the defect, i.e., the tumor's location. They also refer to causes that are genetic (primary) or due to another condition or influence (secondary).
Primary
Primary aldosteronism (hyporeninemic hyperaldosteronism) was previously thought to be most commonly caused by an adrenal adenoma, termed Conn's syndrome. However, recent studies have shown that bilateral idiopathic adrenal hyperplasia is the cause in up to 70% of cases. Differentiating between the two is important, as this determines treatment. Also, see congenital adrenal hyperplasia.
Adrenal carcinoma is an extremely rare cause of primary hyperaldosteronism. Two familial forms have been identified: type I (dexamethasone suppressible) and type II, which has been linked to the 7p22 gene.

Features
Hypertension
Hypokalemia (e.g., may cause muscle weakness)
Alkalosis

Investigations
High serum aldosterone
Low serum renin
High-resolution CT abdomen

Management
Adrenal adenoma: surgery
Bilateral adrenocortical hyperplasia: aldosterone antagonist, e.g., spironolactone
Secondary
Secondary hyperaldosteronism (also hyperreninism, or hyperreninemic hyperaldosteronism) is due to overactivity of the renin–angiotensin–aldosterone system (RAAS).

Secondary refers to an abnormality that indirectly results in pathology through a predictable physiologic pathway, i.e., a renin-producing tumor leads to increased aldosterone, as the body's aldosterone production is normally regulated by renin levels. One cause is a juxtaglomerular cell tumor. Another is renal artery stenosis, in which the reduced blood supply across the juxtaglomerular apparatus stimulates the production of renin. Likewise, fibromuscular dysplasia may cause stenosis of the renal artery, and therefore secondary hyperaldosteronism. Other causes can come from the tubules: low reabsorption of sodium (as seen in Bartter and Gitelman syndromes) will lead to hypovolemia/hypotension, which will activate the RAAS.

Secondary hyperaldosteronism can also be caused by excessive ingestion of licorice or other members of the Glycyrrhiza genus of plants that contain the triterpenoid saponin glycoside known as glycyrrhizin. Licorice and closely related plants are perennial shrubs, the roots of which are used in medicine, as well as in making candies and cooking other desserts, because of their sweet taste. Through inhibition of 11-beta-hydroxysteroid dehydrogenase type 2 (11-beta-HSD2), glycyrrhizin allows cortisol to activate mineralocorticoid receptors in the kidney. This severely potentiates mineralocorticoid receptor-mediated renal sodium reabsorption, due to the much higher circulating concentrations of cortisol compared to aldosterone. This, in turn, expands the extracellular volume, increases total peripheral resistance, and increases arterial blood pressure. The condition is termed pseudohyperaldosteronism.

Secondary hyperaldosteronism can also be caused by genetic mutations in the kidneys which cause sodium and potassium wasting, as in Bartter syndrome and Gitelman syndrome.
Treatment
Treatment includes removing the causative agent (such as licorice), a high-potassium, low-sodium diet (for primary) and high-sodium diet (for secondary), spironolactone and eplerenone (potassium-sparing diuretics that act as aldosterone antagonists), and surgery, depending on the cause. Secondary hyperaldosteronism may also be treated with COX-2 inhibitors, which cause water, sodium, and potassium retention and raise blood pressure. Bartter and Gitelman syndromes tend to cause low blood pressure in a significant share of patients, and treatment with blood pressure medications tends to lower the blood pressure even more.
Other animals
Cats can be affected by hyperaldosteronism. The most common signs in cats are muscle weakness and loss of eyesight, although only one of these signs may be present. Muscle weakness is due to low potassium concentrations in the blood, and signs of muscle weakness, such as being unable to jump, may be intermittent. High blood pressure causes either detachment of the retina, or blood inside the eye, which leads to loss of vision. Hyperaldosteronism caused by a tumor is treated by surgical removal of the affected adrenal gland.
See also
Hypoaldosteronism
Glucocorticoid remediable aldosteronism
References
External links
Primary Hyperaldosteronism Nursing Management | 189 |
Hyperkeratosis | Hyperkeratosis is thickening of the stratum corneum (the outermost layer of the epidermis, or skin), often associated with the presence of an abnormal quantity of keratin, and also usually accompanied by an increase in the granular layer. As the corneum layer normally varies greatly in thickness in different sites, some experience is needed to assess minor degrees of hyperkeratosis.
It can be caused by vitamin A deficiency or chronic exposure to arsenic.
Hyperkeratosis can also be caused by B-Raf inhibitor drugs such as vemurafenib and dabrafenib.

It can be treated with urea-containing creams, which dissolve the intercellular matrix of the cells of the stratum corneum, promoting desquamation of scaly skin and eventually resulting in softening of hyperkeratotic areas.
Types
Follicular
Follicular hyperkeratosis, also known as keratosis pilaris (KP), is a skin condition characterized by excessive development of keratin in hair follicles, resulting in rough, cone-shaped, elevated papules. The openings are often closed with a white plug of encrusted sebum. When called phrynoderma the condition is associated with nutritional deficiency or malnourishment.
This condition has been shown in several small-scale studies to respond well to supplementation with vitamins and fats rich in essential fatty acids. Deficiencies of vitamin E, vitamin A, and B-complex vitamins have been implicated in causing the condition.
By other specific site
Plantar hyperkeratosis is hyperkeratosis of the sole of the foot. Surgical removal of the dead skin is recommended to provide symptomatic relief.
Hyperkeratosis of the nipple and areola is an uncommon benign, asymptomatic, acquired condition of unknown pathogenesis.
Hereditary
Epidermolytic hyperkeratosis (also known as "Bullous congenital ichthyosiform erythroderma," "Bullous ichthyosiform erythroderma," or "bullous congenital ichthyosiform erythroderma of Brocq") is a rare skin disease in the ichthyosis family affecting around 1 in 250,000 people. It involves the clumping of keratin filaments.
Multiple minute digitate hyperkeratosis, a rare cutaneous condition, with about half of cases being familial
Focal acral hyperkeratosis (also known as "acrokeratoelastoidosis lichenoides") is a late-onset keratoderma, inherited as an autosomal dominant condition, characterized by oval or polygonal crateriform papules developing along the border of the hands, feet, and wrists.
Keratosis pilaris appears similar to gooseflesh, is usually asymptomatic and may be treated by moisturizing the skin.
Other
Hyperkeratosis lenticularis perstans (also known as "Flegel's disease") is a cutaneous condition characterized by rough, yellow-brown keratotic, flat-topped papules.
In mucous membranes
The term hyperkeratosis is often used in connection with lesions of the mucous membranes, such as leukoplakia. Because of the differences between mucous membranes and the skin (e.g. keratinizing mucosa does not have a stratum lucidum, and non-keratinizing mucosa does not have this layer or, normally, a stratum corneum or a stratum granulosum), specialized texts sometimes give slightly different definitions of hyperkeratosis in the context of mucosae. Examples are "an excessive formation of keratin (e.g., as seen in leukoplakia)" and "an increase in the thickness of the keratin layer of the epithelium, or the presence of such a layer in a site where none would normally be expected."
Etymology and pronunciation
The word hyperkeratosis is based on the Ancient Greek morphemes hyper- + kerato- + -osis, meaning the condition of too much keratin.
Hyperkeratosis in dogs
Nasodigital hyperkeratosis in dogs may be idiopathic, secondary to an underlying disease, or due to congenital abnormalities in the normal anatomy of the nose and paw pads.
In the case of congenital anatomical abnormalities, contact between the affected area and rubbing surfaces is impaired. The same applies to the paw pads: in animals with an anatomical abnormality, part of the pad is not in contact with rubbing surfaces, and excessive keratin deposition results. The idiopathic form of nasodigital hyperkeratosis in dogs develops from unknown causes and is more common in older animals (the senile form). Among dog breeds, Labrador Retrievers, Golden Retrievers, Cocker Spaniels, Irish Terriers, and Dogues de Bordeaux are the most prone to hyperkeratosis.
Therapy
Since the deposition of excess keratin cannot be stopped, therapy is aimed at softening and removing it. For moderate to severe cases, the affected areas should be hydrated (moisturised) with warm water or compresses for 5-10 minutes. Softening preparations are then applied once a day until the excess keratin is removed.
In dogs with severe hyperkeratosis and a significant excess of keratin, it is removed with scissors or a blade. After proper instructions, pet owners are able to perform this procedure at home and it may be the only method of correction.
See also
Calluses
Keratin disease
List of skin diseases
Skin disease
Skin lesion
Epidermal hyperplasia
References
External links | 190
Hypoestrogenism | Hypoestrogenism, or estrogen deficiency, refers to a lower than normal level of estrogen. It is an umbrella term used to describe estrogen deficiency in various conditions. Estrogen deficiency is also associated with an increased risk of cardiovascular disease, and has been linked to diseases like urinary tract infections and osteoporosis.
In women, low levels of estrogen may cause symptoms such as hot flashes, sleep disturbances, decreased bone health, and changes in the genitourinary system. Hypoestrogenism is most commonly found in women who are postmenopausal, have primary ovarian insufficiency (POI), or are presenting with amenorrhea (absence of menstrual periods). The effects of hypoestrogenism are primarily genitourinary, including thinning of the vaginal tissue layers and an increase in vaginal pH. With normal levels of estrogen, the environment of the vagina is protected against inflammation, infections, and sexually transmitted infections. Hypoestrogenism can also occur in men, for instance due to hypogonadism.
There are both hormonal and non-hormonal treatments to prevent the negative effects of low estrogen levels and improve quality of life.
Signs and symptoms
Vasomotor
Presentations of low estrogen levels include hot flashes: sudden, intense feelings of heat predominantly in the upper body that cause the skin to redden as if blushing. They are believed to occur due to the narrowing of the thermoneutral zone in the hypothalamus, making the body more sensitive to changes in body temperature. Night-time disturbances are also common symptoms associated with hypoestrogenism. People may experience difficulty falling asleep, waking up several times a night, and early awakening, with variability between racial and ethnic groups.
Genitourinary
Other classic symptoms include both physical and chemical changes of the vulva, vagina, and lower urinary tract. Genitals go through atrophic changes such as losing elasticity, losing vaginal rugae, and increasing of vaginal pH, which can lead to changes in the vaginal flora and increase the risk of tissue fragility and fissure. Other genital signs include dryness or lack of lubrication, burning, irritation, discomfort or pain, as well as impaired function. Low levels of estrogen can lead to limited genital arousal and cause dyspareunia, or painful sexual intercourse because of changes in the four layers of the vaginal wall. People with low estrogen will also experience higher urgency to urinate and dysuria, or painful urination. Hypoestrogenism is also considered one of the major risk factors for developing uncomplicated urinary tract infection in postmenopausal women who do not take hormone replacement therapy.
Bone health
Estrogen contributes to bone health in several ways; low estrogen levels increase bone resorption via osteoclasts and osteocytes, cells that help with bone remodeling, making bones more likely to deteriorate and increase risk of fracture. The decline in estrogen levels can ultimately lead to more serious illnesses, such as scoliosis or type I osteoporosis, a disease that thins and weakens bones, resulting in low bone density and fractures. Estrogen deficiency plays an important role in osteoporosis development for both genders, and it is more pronounced for women and at younger (menopausal) ages by five to ten years compared with men. Females are also at higher risk for osteopenia and osteoporosis.
Causes
A variety of conditions can lead to hypoestrogenism; menopause is the most common. Primary ovarian insufficiency (premature menopause) due to varying causes, such as radiation therapy, chemotherapy, or spontaneous onset, can also lead to low estrogen and infertility.

Hypogonadism (a condition in which the gonads – testes in men and ovaries in women – have diminished activity) can decrease estrogen. In primary hypogonadism, elevated serum gonadotropins are detected on at least two occasions several weeks apart, indicating gonadal failure. In secondary hypogonadism (where the cause is hypothalamic or pituitary dysfunction), serum levels of gonadotropins may be low.

Other causes include certain medications, gonadotropin insensitivity, inborn errors of steroid metabolism (for example, aromatase deficiency, 17α-hydroxylase deficiency, 17,20-lyase deficiency, 3β-hydroxysteroid dehydrogenase deficiency, and cholesterol side-chain cleavage enzyme or steroidogenic acute regulatory protein deficiency), and functional amenorrhea.
Risks
Low endogenous estrogen levels can elevate the risk of cardiovascular disease in women who reach early menopause. Estrogen is needed to relax arteries via endothelium-derived nitric oxide, benefiting heart health by decreasing adverse atherogenic effects. Women with POI may have an increased risk of cardiovascular disease due to low estrogen production.
Pathophysiology
Estrogen deficiency has both vaginal and urologic effects; the female genitalia and lower urinary tract share common estrogen receptor function because of their embryological development. Estrogen is a vasoactive hormone (one that affects blood pressure) which stimulates blood flow and increases vaginal secretions and lubrication. Activated estrogen receptors also stimulate tissue proliferation in the vaginal walls, which contributes to the formation of rugae. These rugae aid in sexual stimulation by becoming lubricated, distended, and expanded.

Genitourinary effects of low estrogen include thinning of the vaginal epithelium, loss of vaginal barrier function, decrease of vaginal folding, decrease of the elasticity of the tissues, and decrease of the secretory activity of the Bartholin glands, which leads to traumatization of the vaginal mucosa and painful sensations. This thinning of the vaginal epithelial layers can increase the risk of developing inflammation and infection, such as urinary tract infection.

The vagina is largely dominated by bacteria from the genus Lactobacillus, which typically comprise more than 70% of the vaginal bacteria in women. These lactobacilli process glycogen and its breakdown products, which results in a maintained low vaginal pH. Estrogen levels are closely linked to lactobacilli abundance and vaginal pH, as higher levels of estrogen promote thickening of the vaginal epithelium and intracellular production of glycogen. This large presence of lactobacilli and the subsequent low pH levels are hypothesized to benefit women by protecting against sexually transmitted pathogens and opportunistic infections, and therefore reducing disease risk.
Diagnosis
Hypoestrogenism is typically found in menopause and aids in diagnosis of other conditions such as POI and functional amenorrhea. Estrogen levels can be tested through several laboratory tests: vaginal maturation index, progestogen challenge test, and vaginal swabs for small parabasal cells.
Menopause
Menopause is usually diagnosed through symptoms of vaginal atrophy, pelvic exams, and taking a comprehensive medical history consisting of last menstruation cycle. There is no definitive testing available for determining menopause as the symptom complex is the primary indicator and because the lower levels of estradiol are harder to accurately detect after menopause. However, there can be laboratory tests done to differentiate between menopause and other diagnoses.
Functional hypothalamic amenorrhea
Functional hypothalamic amenorrhea (FHA) is diagnosed based on findings of amenorrhea lasting three months or more and low serum levels of gonadotropins and estradiol. Since common causes of FHA include exercising too much, eating too little, or being under too much stress, diagnosis of FHA includes assessing for any changes in exercise, weight, and stress. In addition, evaluation of amenorrhea includes a history and physical examination, biochemical testing, imaging, and measuring estrogen level. Examination of menstrual problems and clinical tests measuring hormones such as serum prolactin, thyroid-stimulating hormone, and follicle-stimulating hormone (FSH) can help rule out other potential causes of amenorrhea. These potential conditions include hyperprolactinemia, POI, and polycystic ovary syndrome.
Primary ovarian insufficiency
Primary ovarian insufficiency, also known as premature ovarian failure, can develop in women before the age of forty as a consequence of hypergonadotropic hypogonadism. POI can present as amenorrhea and has similar symptoms to menopause, but measuring FSH levels is used for diagnosis.
Treatment
Hormone replacement therapy (HRT) can be used to treat hypoestrogenism and menopause-related symptoms, and low estrogen levels in both premenopausal and postmenopausal women. Low-dose estrogen medications are approved by the U.S. Food and Drug Administration (FDA) for treatment of menopause-related symptoms. HRT can be used with or without a progestogen to improve symptoms such as hot flashes, sweating, trouble sleeping, vaginal dryness, and discomfort. The FDA recommends that HRT be avoided in women with a history or risk of breast cancer, undiagnosed genital bleeding, untreated high blood pressure, unexplained blood clots, or liver disease.

HRT for the vasomotor symptoms of hypoestrogenism includes different forms of estrogen, such as conjugated equine estrogens, 17β-estradiol, transdermal estradiol, ethinyl estradiol, and the estradiol ring. In addition, common progestogens are used to protect the endometrium, the inner layer of the uterus. These medications include medroxyprogesterone acetate, progesterone, norethisterone acetate, and drospirenone.

Non-pharmacological treatment of hot flashes includes using portable fans to lower the room temperature, wearing layered clothing, and avoiding tobacco, spicy food, alcohol, and caffeine. There is a lack of evidence to support other treatments such as acupuncture, yoga, and exercise to reduce symptoms.
In men
Estrogens are also important in male physiology. Hypoestrogenism can occur in men due to hypogonadism. Very rare causes include aromatase deficiency and estrogen insensitivity syndrome. Medications can also be a cause of hypoestrogenism in men. Hypoestrogenism in men can lead to osteoporosis, among other symptoms. Estrogens may also be positively involved in sexual desire in men.
See also
Estrogen insensitivity syndrome
Aromatase excess syndrome
References
External links | 191
Hypophosphatasia | Hypophosphatasia (also called deficiency of alkaline phosphatase, phosphoethanolaminuria, or Rathbun's syndrome; sometimes abbreviated HPP) is a rare, and sometimes fatal, inherited metabolic bone disease. Clinical symptoms are heterogeneous, ranging from the rapidly fatal perinatal variant, with profound skeletal hypomineralization and respiratory compromise or vitamin B6-dependent seizures, to a milder, progressive osteomalacia later in life. Tissue non-specific alkaline phosphatase (TNSALP) deficiency in osteoblasts and chondrocytes impairs bone mineralization, leading to rickets or osteomalacia. The pathognomonic finding is subnormal serum activity of the TNSALP enzyme, which is caused by one of 388 genetic mutations identified to date in the gene encoding TNSALP. Genetic inheritance is autosomal recessive for the perinatal and infantile forms but either autosomal recessive or autosomal dominant in the milder forms.
The prevalence of hypophosphatasia is not known; one study estimated the live birth incidence of severe forms to be 1:100,000, and some studies report a higher prevalence of milder disease.
Symptoms and signs
There is a remarkable variety of symptoms that depends, largely, on the age of the patient at initial presentation, ranging from death in utero to relatively mild bone problems with or without dentition symptoms in adult life, although neurological and extra-skeletal symptoms are also reported. The stages of this disease are generally included in the following categories: perinatal, infantile, childhood, adult, benign prenatal, and odontohypophosphatasia. Although several clinical sub-types of the disease have been characterized, based on the age at which skeletal lesions are discovered, the disease is best understood as a single continuous spectrum of severity.

As the presentation of adult disease is highly variable, incorrect or missed diagnosis may occur. In one study, 19% of patients diagnosed with fibromyalgia had laboratory findings suggestive of possible hypophosphatasia. One case report details a 35-year-old woman with low serum ALP and mild pains but no history of rickets, fractures, or dental problems. Subsequent evaluation showed osteopenia and renal microcalcifications and an elevation of PEA. The genetic mutations found in this case were previously reported in perinatal, infantile, and childhood hypophosphatasia, but not adult hypophosphatasia.
Perinatal hypophosphatasia
Perinatal hypophosphatasia is the most lethal form. Profound hypomineralization results in caput membranaceum (a soft calvarium), deformed or shortened limbs during gestation and at birth, and rapid death due to respiratory failure. Stillbirth is not uncommon and long-term survival is rare. Neonates who manage to survive suffer increasing respiratory compromise due to softening of the bones (osteomalacia) and underdeveloped (hypoplastic) lungs. Ultimately, this leads to respiratory failure. Epilepsy (seizures) can occur and can prove lethal. Regions of developing, unmineralized bone (osteoid) may expand and encroach on the marrow space, resulting in myelophthisic anemia.

In radiographic examinations, perinatal hypophosphatasia can be distinguished from even the most severe forms of osteogenesis imperfecta and congenital dwarfism. Some stillborn skeletons show almost no mineralization; others have marked undermineralization and severe osteomalacia. Occasionally, there can be a complete absence of ossification in one or more vertebrae. In the skull, individual bones may calcify only at their centers. Another unusual radiographic feature is bony spurs that protrude laterally from the shafts of the ulnae and fibulae. Despite the considerable patient-to-patient variability and the diversity of radiographic findings, the X-ray can be considered diagnostic.
Infantile hypophosphatasia
Infantile hypophosphatasia presents in the first 6 months of life, with the onset of poor feeding and inadequate weight gain. Clinical manifestations of rickets often appear at this time. Although cranial sutures appear to be wide, this reflects hypomineralization of the skull, and there is often "functional" craniosynostosis. If the patient survives infancy, these sutures can permanently fuse. Defects in the chest, such as flail chest resulting from rib fractures, lead to respiratory compromise and pneumonia. Elevated calcium in the blood (hypercalcemia) and urine (hypercalciuria) are also common, and may explain the renal problems and recurrent vomiting seen in this disease.

Radiographic features in infants are generally less severe than those seen in perinatal hypophosphatasia. In the long bones, there is an abrupt change from a normal appearance in the shaft (diaphysis) to uncalcified regions near the ends (metaphysis), which suggests the occurrence of an abrupt metabolic change. In addition, serial radiography studies suggest that defects in skeletal mineralization (i.e. rickets) persist and become more generalized. Mortality is estimated to be 50% in the first year of life.
Childhood hypophosphatasia
Hypophosphatasia in childhood has variable clinical expression. As a result of defects in the development of the dental cementum, the deciduous teeth (baby teeth) are often lost before the age of 5. Frequently, the incisors are lost first; occasionally all of the teeth are lost prematurely. Dental radiographs can show the enlarged pulp chambers and root canals that are characteristic of rickets.

Patients may experience delayed walking, a characteristic waddling gait, stiffness and pain, and muscle weakness (especially in the thighs) consistent with nonprogressive myopathy. Typically, radiographs show defects in calcification and characteristic bony defects near the ends of major long bones. Growth retardation, frequent fractures, and low bone density (osteopenia) are common. In severely-affected infants and young children, cranial bones can fuse prematurely, despite the appearance of open fontanels on radiographic studies. The illusion of open fontanels results from hypomineralization of large areas of the calvarium. Premature bony fusion of the cranial sutures may elevate intracranial pressure.
Adult hypophosphatasia
Adult hypophosphatasia can be associated with rickets, premature loss of deciduous teeth, or early loss of adult dentition followed by relatively good health. Osteomalacia results in painful feet due to poor healing of metatarsal stress fractures. Discomfort in the thighs or hips due to femoral pseudofractures can be distinguished from other types of osteomalacia by their location in the lateral cortices of the femora. The symptoms of this disease usually begin during middle age and can include bone pain and hypomineralization.

Some patients suffer from calcium pyrophosphate dihydrate crystal depositions with occasional attacks of arthritis (pseudogout), which appears to be the result of elevated endogenous inorganic pyrophosphate (PPi) levels. These patients may also suffer articular cartilage degeneration and pyrophosphate arthropathy. Radiographs reveal pseudofractures in the lateral cortices of the proximal femora and stress fractures, and patients may experience osteopenia, chondrocalcinosis, features of pyrophosphate arthropathy, and calcific periarthritis.
Odontohypophosphatasia
Odontohypophosphatasia is present when dental disease is the only clinical abnormality, and radiographic and/or histologic studies reveal no evidence of rickets or osteomalacia. Although hereditary leukocyte abnormalities and other disorders usually account for this condition, odontohypophosphatasia may explain some “early-onset periodontitis” cases.
Causes
Hypophosphatasia is associated with a molecular defect in the gene encoding tissue non-specific alkaline phosphatase (TNSALP). TNSALP is an enzyme that is tethered to the outer surface of osteoblasts and chondrocytes. TNSALP hydrolyzes several substances, including mineralization-inhibiting inorganic pyrophosphate (PPi) and pyridoxal 5'-phosphate (PLP), a major form of vitamin B6. A relationship describing physiologic regulation of mineralization has been termed the Stenciling Principle of mineralization, whereby enzyme-substrate pairs imprint mineralization patterns locally into the extracellular matrix (most notably described for bone) by degrading mineralization inhibitors (e.g. the TNAP/TNSALP/ALPL enzyme degrading the pyrophosphate inhibition of mineralization, and the PHEX enzyme degrading the osteopontin inhibition of mineralization). The Stenciling Principle is particularly relevant to the osteomalacia and odontomalacia observed in hypophosphatasia (HPP) and X-linked hypophosphatemia (XLH).
When TNSALP enzymatic activity is low, inorganic pyrophosphate (PPi) accumulates outside of cells in the extracellular matrix of bones and teeth and inhibits formation of hydroxyapatite mineral, the main hardening component of bone, causing rickets in infants and children and osteomalacia (soft bones) and odontomalacia (soft teeth) in children and adults. PLP is the principal form of vitamin B6 and must be dephosphorylated by TNSALP before it can cross the cell membrane. Vitamin B6 deficiency in the brain impairs synthesis of neurotransmitters, which can cause seizures. In some cases, a build-up of calcium pyrophosphate dihydrate (CPPD) crystals in the joint can cause pseudogout.
Genetics
Perinatal and infantile hypophosphatasia are inherited as autosomal recessive traits with homozygosity or compound heterozygosity for two defective TNSALP alleles. The mode of inheritance for the childhood, adult, and odonto forms of hypophosphatasia can be either autosomal dominant or recessive. Autosomal transmission accounts for the fact that the disease affects males and females with equal frequency. Genetic counseling is complicated by the disease's variable inheritance pattern and by incomplete penetrance of the trait.

Hypophosphatasia is a rare disease that has been reported worldwide and appears to affect individuals of all ethnicities. The prevalence of severe hypophosphatasia is estimated to be 1:100,000 in a population of largely Anglo-Saxon origin. The frequency of mild hypophosphatasia is more challenging to assess because the symptoms may escape notice or be misdiagnosed. The highest incidence of hypophosphatasia has been reported in the Mennonite population in Manitoba, Canada, where one in every 25 individuals is considered a carrier and one in every 2,500 newborns exhibits severe disease. Hypophosphatasia is considered particularly rare in people of African ancestry in the U.S.
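The Manitoba figures can be checked for internal consistency against Hardy–Weinberg proportions: an affected-newborn frequency of 1 in 2,500 (q²) implies a carrier frequency (2pq) close to the reported 1 in 25. A short Python sketch of that arithmetic:

from math import sqrt

q = sqrt(1 / 2500)      # mutant allele frequency: 0.02
p = 1 - q               # normal allele frequency
carrier_freq = 2 * p * q
print(f"carriers: about 1 in {1 / carrier_freq:.0f}")  # about 1 in 26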
Diagnosis
Dental findings
Hypophosphatasia is often discovered because of an early loss of deciduous (baby or primary) teeth with the root intact. Researchers have recently documented a positive correlation between dental abnormalities and clinical phenotype. Poor dentition is also noted in adults.
Laboratory testing
The symptom that best characterizes hypophosphatasia is low serum activity of the alkaline phosphatase enzyme (ALP). In general, lower levels of enzyme activity correlate with more severe symptoms. The decrease in ALP activity leads to an increase in pyridoxal 5'-phosphate (PLP), the major form of vitamin B6, in the blood, which correlates with disease severity, although tissue levels of vitamin B6 may be unremarkable. Urinary inorganic pyrophosphate (PPi) levels are elevated in most hypophosphatasia patients and, although it remains only a research technique, this increase has been reported to accurately detect carriers of the disease. In addition, most patients have an increased level of urinary phosphoethanolamine (PEA), although some may not. PLP screening is preferred over PEA due to cost and sensitivity.

Tests for serum tissue-non-specific ALP (sometimes referred to as TNSALP) levels are part of the standard comprehensive metabolic panel (CMP) used in routine exams, although bone-specific ALP testing may be indicative of disease severity.
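A hypothetical helper illustrating how the laboratory pattern described above (low serum ALP together with elevated PLP and/or PEA) might be flagged. The function and its threshold argument are invented for illustration; real ALP reference ranges are age-, sex-, and assay-specific and must come from the testing laboratory.

def suggests_hypophosphatasia(alp_u_l, alp_lower_limit_u_l, plp_elevated, pea_elevated):
    # Low age-adjusted ALP plus a raised substrate marker is the suggestive pattern.
    return alp_u_l < alp_lower_limit_u_l and (plp_elevated or pea_elevated)

# Invented example: ALP of 25 U/L against a placeholder lower limit of 40 U/L.
print(suggests_hypophosphatasia(25, 40, plp_elevated=True, pea_elevated=False))  # True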
Radiography
Despite patient-to-patient variability and the diversity of radiographic findings, the X-ray is diagnostic in infantile hypophosphatasia. Skeletal defects are found in nearly all patients and include hypomineralization, rachitic changes, incomplete vertebral ossification and, occasionally, lateral bony spurs on the ulnae and fibulae.

In newborns, X-rays readily distinguish hypophosphatasia from osteogenesis imperfecta and congenital dwarfism. Some stillborn skeletons show almost no mineralization; others have marked undermineralization and severe rachitic changes. Occasionally there can be peculiar complete or partial absence of ossification in one or more vertebrae. In the skull, individual membranous bones may calcify only at their centers, making it appear that areas of the unossified calvarium have cranial sutures that are widely separated when, in fact, they are functionally closed. Small protrusions (or "tongues") of radiolucency often extend from the metaphyses into the bone shaft.
In infants, radiographic features of hypophosphatasia are striking, though generally less severe than those found in perinatal hypophosphatasia. In some newly diagnosed patients, there is an abrupt transition from relatively normal-appearing diaphyses to uncalcified metaphases, suggesting an abrupt metabolic change has occurred. Serial radiography studies can reveal the persistence of impaired skeletal mineralization (i.e. rickets), instances of sclerosis, and gradual generalized demineralization.
In adults, X-rays may reveal bilateral femoral pseudofractures in the lateral subtrochanteric diaphysis. These pseudofractures may remain for years, but they may not heal until they break completely or the patient receives intramedullary fixation. These patients may also experience recurrent metatarsal fractures. DXA may show abnormal bone mineral density which may correlate with disease severity, although bone mineral density in HPP patients may not be systemically reduced.
Genetic analysis
All clinical sub-types of hypophosphatasia have been traced to genetic mutations in the gene encoding TNSALP, which is localized on chromosome 1p36.1-34 in humans (ALPL; OMIM #171760). Approximately 388 distinct mutations, catalogued in the Tissue Nonspecific Alkaline Phosphatase Gene Mutations Database, have been described in the TNSALP gene. About 80% of the mutations are missense mutations. The number and diversity of mutations results in highly variable phenotypic expression, and there appears to be a correlation between genotype and phenotype in hypophosphatasia. Mutation analysis is possible and available in three laboratories.
Treatment
As of October 2015, asfotase alfa (Strensiq) has been approved by the FDA for the treatment of hypophosphatasia.
Some evidence exists to support the use of teriparatide in adult HPP.

Current management consists of palliating symptoms, maintaining calcium balance, and applying physical, occupational, dental, and orthopedic interventions as necessary.
Hypercalcemia in infants may require restriction of dietary calcium or administration of calciuretics. This should be done carefully so as not to increase the skeletal demineralization that results from the disease itself. Vitamin D sterols and mineral supplements, traditionally used for rickets or osteomalacia, should not be used unless there is a deficiency, as blood levels of calcium ions (Ca2+), inorganic phosphate (Pi) and vitamin D metabolites usually are not reduced.
Craniosynostosis, the premature closure of skull sutures, may cause intracranial hypertension and may require neurosurgical intervention to avoid brain damage in infants.
Bony deformities and fractures are complicated by the lack of mineralization and impaired skeletal growth in these patients. Fractures and corrective osteotomies (bone cutting) can heal, but healing may be delayed and require prolonged casting or stabilization with orthopedic hardware. A load-sharing intramedullary nail or rod is the best surgical treatment for complete fractures, symptomatic pseudofractures, and progressive asymptomatic pseudofractures in adult hypophosphatasia patients.
Dental problems: Children particularly benefit from skilled dental care, as early tooth loss can cause malnutrition and inhibit speech development. Dentures may ultimately be needed. Dentists should carefully monitor patients’ dental hygiene and use prophylactic programs to avoid deteriorating health and periodontal disease.
Physical impairments and pain: Rickets and bone weakness associated with hypophosphatasia can restrict or eliminate ambulation, impair functional endurance, and diminish the ability to perform activities of daily living. Nonsteroidal anti-inflammatory drugs may improve pain-associated physical impairment and can help improve walking distance.
Bisphosphonate (a pyrophosphate synthetic analog) in one infant had no discernible effect on the skeleton, and the infant’s disease progressed until death at 14 months of age.
Bone marrow cell transplantation in two severely affected infants produced radiographic and clinical improvement, although the mechanism of efficacy is not fully understood and significant morbidity persisted.
Enzyme replacement therapy with normal, or ALP-rich serum from patients with Paget’s bone disease, was not beneficial.
Phase 2 clinical trials of bone targeted enzyme-replacement therapy for the treatment of hypophosphatasia in infants and juveniles have been completed, and a phase 2 study in adults is ongoing.
Pyridoxine (vitamin B6) may be used as adjunctive therapy in some cases, which may be referred to as pyridoxine-responsive seizures.
History
The condition was first described in 1936, but was fully named and documented by a Canadian pediatrician, John Campbell Rathbun (1915–1972), while examining and treating a baby boy with very low levels of alkaline phosphatase in 1948. The genetic basis of the disease was mapped out only some 40 years later. Hypophosphatasia is sometimes called Rathbun's syndrome after its principal documenter.
See also
Alkaline phosphatase
Choline
References
Further reading
External links
Online Mendelian Inheritance in Man (OMIM): Adult Hypophosphatasia - 146300 | 192 |
Hypoprothrombinemia | Hypoprothrombinemia is a rare blood disorder in which a deficiency in immunoreactive prothrombin (Factor II), produced in the liver, results in an impaired blood clotting reaction, leading to an increased physiological risk for spontaneous bleeding. This condition can be observed in the gastrointestinal system, cranial vault, and superficial integumentary system, and affects both males and females. Prothrombin is a critical protein involved in hemostasis and exhibits procoagulant activity. The condition is characterized as a congenital coagulation disorder with autosomal recessive inheritance, affecting about 1 per 2,000,000 people worldwide, but it can also be acquired.
Signs and symptoms
Various symptoms may be present, typically associated with the specific site at which they appear. Hypoprothrombinemia is characterized by poor clotting function of prothrombin. Some symptoms are severe, while others are mild, meaning that blood clotting is slower than normal. Areas that may be affected include the muscles, joints, and brain, although these sites are less common.

The most common symptoms include:
Easy bruising
Oral mucosal bleeding - Bleeding of the membrane mucus lining inside of the mouth.
Soft tissue bleeding.
Hemarthrosis - Bleeding in joint spaces.
Epistaxis - Acute hemorrhages from areas of the nasal cavity, nostrils, or nasopharynx.
Women with this deficiency experience menorrhagia: prolonged, abnormally heavy menstrual bleeding. This is typically a symptom of the disorder when severe blood loss occurs.

Other reported symptoms related to the condition include:
Prolonged periods of bleeding due to surgery, injury, or post birth.
Melena - Associated with acute gastrointestinal bleeding, dark black, tarry feces.
Hematochezia - Lower gastrointestinal bleeding: passage of fresh, bright red blood through the anus, secreted in or with stools. If associated with upper gastrointestinal bleeding, it is suggestive of a more life-threatening issue.

Type I: Severe hemorrhages are indicators of a more severe prothrombin deficiency, and account for muscle hematomas, intracranial bleeding, postoperative bleeding, and umbilical cord hemorrhage, which may also occur depending on the severity.
Type II: Symptoms are usually more capricious, but can include a variety of the symptoms described previously. Less severe cases of the disorder typically do not involve spontaneous bleeding.
Causes
Hypoprothrombinemia can be the result of a genetic defect, may be acquired as the result of another disease process, or may be an adverse effect of medication. For example, 5-10% of patients with systemic lupus erythematosus exhibit acquired hypoprothrombinemia due to the presence of autoantibodies which bind to prothrombin and remove it from the bloodstream (lupus anticoagulant-hypoprothrombinemia syndrome). The most common viral pathogen that is involved is Adenovirus, with a prevalence of 50% in postviral cases.
Inheritance
Hypoprothrombinemia is an autosomal recessive condition: a child must inherit a defective copy of the gene from each parent to be affected. When both parents are unaffected carriers, each child has a 25% chance of being affected; when both parents are themselves affected, all of their offspring inherit the condition (a probability sketch follows below). An individual who inherits only one mutant copy of the gene is a carrier and will not show any symptoms. The disease affects men and women equally and, overall, is a very uncommon inherited or acquired disorder.
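To make the inheritance arithmetic concrete, here is a minimal Python sketch (an illustration added here, not part of the cited literature; the function names are assumptions). It computes the Mendelian genotype probabilities for a child given how many mutant copies of the gene each parent carries, reproducing the carrier and affected figures above.

```python
# Minimal sketch (illustrative, simple Mendelian model; ignores de novo
# mutations). Genotype probabilities for one child, given the number of
# mutant copies (0, 1, or 2) each parent carries.

def offspring_probabilities(p1_mutant_copies, p2_mutant_copies):
    """Return (P_affected, P_carrier, P_unaffected) for one child."""
    def mutant_gamete_prob(copies):
        # Probability that a parent transmits a mutant allele.
        return {0: 0.0, 1: 0.5, 2: 1.0}[copies]

    a = mutant_gamete_prob(p1_mutant_copies)
    b = mutant_gamete_prob(p2_mutant_copies)
    affected = a * b                      # two mutant copies inherited
    carrier = a * (1 - b) + (1 - a) * b   # exactly one mutant copy
    unaffected = (1 - a) * (1 - b)        # no mutant copies
    return affected, carrier, unaffected

# Two unaffected carriers: 25% affected, 50% carrier, 25% unaffected.
print(offspring_probabilities(1, 1))  # (0.25, 0.5, 0.25)
# Two affected parents: every child inherits the condition.
print(offspring_probabilities(2, 2))  # (1.0, 0.0, 0.0)
```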
Non-inheritance and other factors
There are two types of prothrombin deficiency, depending on the mutation:
Type I (true deficiency) involves a missense or nonsense mutation that decreases prothrombin production. It is associated with bleeding from birth; plasma levels of prothrombin are typically less than 10% of normal.
Type II, known as dysprothrombinemia, involves a missense mutation at specific factor Xa cleavage sites and serine protease regions of prothrombin. It produces a dysfunctional protein with decreased activity and usually normal or low-normal antigen levels. Although vitamin K deficiency is seldom a contributor to inherited prothrombin deficiencies, lack of vitamin K decreases the synthesis of prothrombin in liver cells.
Acquired underlying causes of this condition include severe liver disease, warfarin overdose, platelet disorders, and disseminated intravascular coagulation (DIC).
It may also be a rare adverse effect of ceftriaxone.
Mechanism
Hypoprothrombinemia presents as either inherited or acquired and involves decreased synthesis of prothrombin. In the inherited form it is an autosomal recessive disorder, meaning that both parents must be carriers of the defective gene for the disorder to be present in a child. Prothrombin is a glycoprotein that occurs in blood plasma and functions as a precursor to the enzyme thrombin, which converts fibrinogen into fibrin and thereby fortifies clots. This clotting process is known as coagulation.
The mechanism specific to prothrombin (factor II) involves the proteolytic cleavage (the breakdown of the protein into smaller polypeptides or amino acids) of this coagulation factor to form thrombin in the coagulation cascade, leading to the stemming of blood loss. A mutation in factor II therefore leads to hypoprothrombinemia. The gene involved lies on chromosome 11. The disease is also associated with the liver, since the glycoprotein is produced there.
Acquired cases result from an isolated factor II deficiency. Specific cases include:
Vitamin K deficiency: In the liver, vitamin K plays an important role in the synthesis of coagulation factor II. The body's capacity to store vitamin K is typically very low, and the vitamin K-dependent coagulation factors have a very short half-life, sometimes leading to a deficiency when vitamin K is depleted. The liver synthesizes inactive precursor proteins in the absence of vitamin K (as in liver disease). Vitamin K deficiency leads to impaired clotting of the blood and, in some cases, causes internal bleeding without an associated injury.
Disseminated intravascular coagulation (DIC): Involves abnormal, excessive generation of thrombin and fibrin within the blood. It relates to hypoprothrombinemia through the increased platelet aggregation and coagulation factor consumption involved in the process.
Anticoagulants: warfarin overdose: Warfarin is used to prevent blood clots; however, like most drugs, it has side effects, and it increases the risk of excessive bleeding by disrupting the hepatic synthesis of coagulation factors II, VII, IX, and X. Vitamin K is an antagonist of warfarin and reverses its activity, making warfarin less effective at inhibiting clotting. Conversely, warfarin intake has been shown to interfere with vitamin K metabolism.
Diagnosis
Diagnosis of inherited hypoprothrombinemia relies heavily on a patient's medical history, family history of bleeding issues, and lab exams performed by a hematologist. A physical examination by a general physician should also be performed to determine whether the condition is congenital or acquired, and to rule out other possible conditions with similar symptoms. For acquired forms, information must be taken regarding current diseases and any medications taken by the patient. Lab tests performed to determine the diagnosis:
Factor assays: To observe the performance of specific factors (II) to identify missing/poorly performing factors. These lab tests are typically performed first in order to determine the status of the factor.
Prothrombin blood test: Determines if patient has deficient or low levels of Factor II.
Vitamin K1 test: Performed to evaluate bleeding of unknown causes, nosebleeds, and identified bruising. To accomplish this, a band is wrapped around the patient's arm, 4 inches above the superficial vein site in the elbow pit. The vein is penetrated with the needle and the amount of blood required for testing is obtained. Decreased vitamin K levels are suggestive of hypoprothrombinemia. However, this exam is rarely used, as a prothrombin blood test is performed beforehand.
Treatment
Treatment is almost always aimed at controlling hemorrhages, treating underlying causes, and taking preventive steps before performing invasive surgeries.
Hypoprothrombinemia can be treated with periodic infusions of purified prothrombin complexes. These are typically used as treatment methods for severe bleeding cases in order to boost clotting ability and increase levels of vitamin K-dependent coagulation factors.
A known treatment for hypoprothrombinemia is menadoxime.
Menatetrenone was also listed as an antihemorrhagic vitamin.
4-Amino-2-methyl-1-naphthol (Vitamin K5) is another treatment for hypoprothrombinemia.
Vitamin K forms are administered orally or intravenously.
Other concentrates include Proplex T, Konyne 80, and Bebulin VH.
Fresh frozen plasma (FFP) infusion is a method used for continuous bleeding episodes, repeated every 3–5 weeks for maintenance.
Used to treat various conditions related to low blood clotting factors.
Administered by intravenous injection, typically at 15–20 ml/kg per dose (see the dose sketch after this list).
Can be used to treat acute bleeding.
Sometimes, underlying causes cannot be controlled or determined, so management of symptoms and bleeding conditions should be the priority in treatment. Invasive options, such as surgery or clotting factor infusions, are required if previous methods do not suffice. Surgery is to be avoided where possible, as it causes significant bleeding in patients with hypoprothrombinemia.
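As a rough illustration of the dosing arithmetic quoted in the list above (15–20 ml/kg per dose), here is a minimal Python sketch. The function name and the example weight are illustrative assumptions; this is not clinical guidance.

```python
# Minimal sketch (illustrative, not clinical guidance): applies the
# 15-20 ml/kg FFP figure quoted above to a given body weight.

def ffp_dose_range_ml(weight_kg, low_ml_per_kg=15.0, high_ml_per_kg=20.0):
    """Return the (minimum, maximum) FFP volume in ml for one dose."""
    return weight_kg * low_ml_per_kg, weight_kg * high_ml_per_kg

# Example: a hypothetical 70 kg adult.
print(ffp_dose_range_ml(70))  # (1050.0, 1400.0)
```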
Prognosis
Prognosis varies and depends on the severity of the condition and how early treatment is started.
With proper treatment and care, most people go on to live a normal and healthy life.
With more severe cases, a hematologist will need to be seen throughout the patient's life in order to deal with bleeding and continued risks.
References
== External links == | 193 |
Hair loss | Hair loss, also known as alopecia or baldness, refers to a loss of hair from part of the head or body. Typically at least the head is involved. The severity of hair loss can vary from a small area to the entire body. Inflammation or scarring is not usually present. Hair loss in some people causes psychological distress. Common types include male- or female-pattern hair loss, alopecia areata, and a thinning of hair known as telogen effluvium. The cause of male-pattern hair loss is a combination of genetics and male hormones; the cause of female pattern hair loss is unclear; the cause of alopecia areata is autoimmune; and the cause of telogen effluvium is typically a physically or psychologically stressful event. Telogen effluvium is very common following pregnancy. Less common causes of hair loss without inflammation or scarring include the pulling out of hair, certain medications including chemotherapy, HIV/AIDS, hypothyroidism, and malnutrition including iron deficiency. Causes of hair loss that occurs with scarring or inflammation include fungal infection, lupus erythematosus, radiation therapy, and sarcoidosis. Diagnosis of hair loss is partly based on the areas affected. Treatment of pattern hair loss may simply involve accepting the condition, which can also include shaving one's head. Interventions that can be tried include the medications minoxidil (or finasteride) and hair transplant surgery. Alopecia areata may be treated by steroid injections in the affected area, but these need to be frequently repeated to be effective. Hair loss is a common problem. Pattern hair loss by age 50 affects about half of men and a quarter of women. About 2% of people develop alopecia areata at some point in time.
Terminology
Baldness is the partial or complete lack of hair growth, and part of the wider topic of "hair thinning". The degree and pattern of baldness varies, but its most common cause is androgenic hair loss, alopecia androgenetica, or alopecia seborrheica, with the last term primarily used in Europe.
Hypotrichosis
Hypotrichosis is a condition of abnormal hair patterns, predominantly loss or reduction. It occurs, most frequently, by the growth of vellus hair in areas of the body that normally produce terminal hair. Typically, the individual's hair growth is normal after birth, but shortly thereafter the hair is shed and replaced with sparse, abnormal hair growth. The new hair is typically fine, short and brittle, and may lack pigmentation. Baldness may be present by the time the subject is 25 years old.
Signs and symptoms
Symptoms of hair loss include hair loss in patches, usually in circular patterns, dandruff, skin lesions, and scarring. Alopecia areata (mild to medium level) usually shows in unusual hair loss areas, e.g., the eyebrows, the back of the head, or above the ears, areas that male pattern baldness usually does not affect. In male-pattern hair loss, loss and thinning begin at the temples and the crown, and hair either thins out or falls out. Female-pattern hair loss occurs at the frontal and parietal regions.
People have between 100,000 and 150,000 hairs on their head. The number of strands normally lost in a day varies but on average is 100. In order to maintain a normal volume, hair must be replaced at the same rate at which it is lost. The first signs of hair thinning that people will often notice are more hairs than usual left in the hairbrush after brushing or in the basin after shampooing. Styling can also reveal areas of thinning, such as a wider parting or a thinning crown.
Skin conditions
A substantially blemished face, back, and limbs could point to cystic acne. This most severe form of acne arises from the same hormonal imbalances that cause hair loss and is associated with dihydrotestosterone production.
Psychological
The psychology of hair thinning is a complex issue. Hair is considered an essential part of overall identity: especially for women, for whom it often represents femininity and attractiveness. Men typically associate a full head of hair with youth and vigor. People experiencing hair thinning often find themselves in a situation where their physical appearance is at odds with their own self-image and commonly worry that they appear older than they are or less attractive to others. Psychological problems due to baldness, if present, are typically most severe at the onset of symptoms. Hair loss induced by cancer chemotherapy has been reported to cause changes in self-concept and body image. Body image does not return to the previous state after regrowth of hair for a majority of patients. In such cases, patients have difficulties expressing their feelings (alexithymia) and may be more prone to avoiding family conflicts. Family therapy can help families to cope with these psychological problems if they arise.
Causes
Although not completely understood, hair loss can have many causes:
Pattern hair loss
Male pattern hair loss is believed to be due to a combination of genetics and the male hormone dihydrotestosterone. The cause in female pattern hair loss remains unclear.
Infection
Dissecting cellulitis of the scalp
Fungal infections (such as tinea capitis)
Folliculitis from various causes
Demodex folliculitis, caused by Demodex folliculorum, a microscopic mite that feeds on the sebum produced by the sebaceous glands, denies hair essential nutrients and can cause thinning. Demodex folliculorum is not present on every scalp and is more likely to live in an excessively oily scalp environment.
Secondary syphilis
Drugs
Temporary or permanent hair loss can be caused by several medications, including those for blood pressure problems, diabetes, heart disease and cholesterol. Any that affect the body's hormone balance can have a pronounced effect: these include the contraceptive pill, hormone replacement therapy, steroids and acne medications.
Some treatments used to cure mycotic infections can cause massive hair loss.
Medications (side effects from drugs, including chemotherapy, anabolic steroids, and birth control pills)
Trauma
Traction alopecia is most commonly found in people with ponytails or cornrows who pull on their hair with excessive force. In addition, rigorous brushing, heat styling, and rough scalp massage can damage the cuticle, the hard outer casing of the hair. This causes individual strands to become weak and break off, reducing overall hair volume.
Frictional alopecia is hair loss caused by rubbing of the hair or follicles, most infamously around the ankles of men from socks; even if socks are no longer worn, the hair often will not grow back.
Trichotillomania is the loss of hair caused by compulsive pulling and bending of the hairs. Onset of this disorder tends to begin around the onset of puberty and usually continues through adulthood. Due to the constant extraction of the hair roots, permanent hair loss can occur.
Traumas such as childbirth, major surgery, poisoning, and severe stress may cause a hair loss condition known as telogen effluvium, in which a large number of hairs enter the resting phase at the same time, causing shedding and subsequent thinning. The condition also presents as a side effect of chemotherapy – while targeting dividing cancer cells, this treatment also affects the hair's growth phase, with the result that almost 90% of hairs fall out soon after chemotherapy starts.
Radiation to the scalp, as when radiotherapy is applied to the head for the treatment of certain cancers there, can cause baldness of the irradiated areas.
Pregnancy
Hair loss often follows childbirth in the postpartum period without causing baldness. In this situation, the hair is actually thicker during pregnancy owing to increased circulating oestrogens. Approximately three months after giving birth (typically between 2 and 5 months), oestrogen levels drop and hair loss occurs, often particularly noticeably around the hairline and temple area. Hair typically grows back normally and treatment is not indicated. A similar situation occurs in women taking the fertility-stimulating drug clomiphene.
Other causes
Autoimmune disease. Alopecia areata is an autoimmune disorder also known as "spot baldness" that can result in hair loss ranging from just one location (Alopecia areata monolocularis) to every hair on the entire body (Alopecia areata universalis). Although thought to be caused by hair follicles becoming dormant, what triggers alopecia areata is not known. In most cases the condition corrects itself, but it can also spread to the entire scalp (alopecia totalis) or to the entire body (alopecia universalis).
Skin diseases and cancer. Localized or diffuse hair loss may also occur in cicatricial alopecia (lupus erythematosus, lichen plano pilaris, folliculitis decalvans, central centrifugal cicatricial alopecia, postmenopausal frontal fibrosing alopecia, etc.). Tumours and skin outgrowths also induce localized baldness (sebaceous nevus, basal cell carcinoma, squamous cell carcinoma).
Hypothyroidism (an under-active thyroid) and the side effects of its related medications can cause hair loss, typically frontal, which is particularly associated with thinning of the outer third of the eyebrows (also seen with syphilis). Hyperthyroidism (an over-active thyroid) can also cause hair loss, which is parietal rather than frontal.
Sebaceous cysts. Temporary loss of hair can occur in areas where sebaceous cysts are present for considerable duration (normally one to several weeks).
Congenital triangular alopecia – It is a triangular, or oval in some cases, shaped patch of hair loss in the temple area of the scalp that occurs mostly in young children. The affected area mainly contains vellus hair follicles or no hair follicles at all, but it does not expand. Its causes are unknown, and although it is a permanent condition, it does not have any other effect on the affected individuals.
Hair growth conditions. Gradual thinning of hair with age is a natural condition known as involutional alopecia. This is caused by an increasing number of hair follicles switching from the growth, or anagen, phase into a resting phase, or telogen phase, so that remaining hairs become shorter and fewer in number. An unhealthy scalp environment can play a significant role in hair thinning by contributing to miniaturization or causing damage.
Obesity. Obesity-induced stress, such as that induced by a high-fat diet (HFD), targets hair follicle stem cells (HFSCs) to accelerate hair thinning in mice. It is likely that similar molecular mechanisms play a role in human hair loss. Other causes of hair loss include:
Alopecia mucinosa
Biotinidase deficiency
Chronic inflammation
Diabetes
Pseudopelade of Brocq
Telogen effluvium
Tufted folliculitis
Genetics
Genetic forms of localized autosomal recessive hypotrichosis include:
Pathophysiology
Hair follicle growth occurs in cycles. Each cycle consists of a long growing phase (anagen), a short transitional phase (catagen) and a short resting phase (telogen). At the end of the resting phase, the hair falls out (exogen) and a new hair starts growing in the follicle, beginning the cycle again.
Normally, about 40 (0–78 in men) hairs reach the end of their resting phase each day and fall out. When more than 100 hairs fall out per day, clinical hair loss (telogen effluvium) may occur. A disruption of the growing phase causes abnormal loss of anagen hairs (anagen effluvium).
Diagnosis
Because they are not usually associated with an increased loss rate, male-pattern and female-pattern hair loss do not generally require testing. If hair loss occurs in a young man with no family history, drug use could be the cause.
The pull test helps to evaluate diffuse scalp hair loss. Gentle traction is exerted on a group of hairs (about 40–60) on three different areas of the scalp. The number of extracted hairs is counted and examined under a microscope. Normally, fewer than three hairs per area should come out with each pull. If more than ten hairs are obtained, the pull test is considered positive.
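The interpretation rule above can be written out as a small Python sketch (an illustration added here, not a clinical tool; the "equivocal" label for in-between results is an assumption, not from the source).

```python
# Minimal sketch (illustrative): interpreting the pull test using the
# criteria quoted above - fewer than three hairs per area is normal,
# more than ten extracted is positive.

def pull_test_result(extracted_per_area):
    """extracted_per_area: hairs extracted from each of the three areas."""
    if sum(extracted_per_area) > 10:
        return "positive"
    if all(n < 3 for n in extracted_per_area):
        return "normal"
    return "equivocal"

print(pull_test_result([1, 2, 0]))  # normal
print(pull_test_result([5, 4, 6]))  # positive
```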
The pluck test is conducted by pulling hair out "by the roots". The root of the plucked hair is examined under a microscope to determine the phase of growth, and is used to diagnose a defect of telogen, anagen, or systemic disease. Telogen hairs have tiny bulbs without sheaths at their roots. Telogen effluvium shows an increased percentage of telogen-phase hairs upon examination. Anagen hairs have sheaths attached to their roots. Anagen effluvium shows a decrease in telogen-phase hairs and an increased number of broken hairs.
Scalp biopsy is used when the diagnosis is unsure; a biopsy allows for differing between scarring and nonscarring forms. Hair samples are taken from areas of inflammation, usually around the border of the bald patch.
Daily hair counts are normally done when the pull test is negative. The hairs from the first morning combing or from washing are collected in a clear plastic bag for 14 days, and the number of strands is recorded each day. A hair count of more than 100 per day is considered abnormal, except after shampooing, when counts of up to 250 can be normal.
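The daily-count thresholds just quoted lend themselves to a one-line check; the minimal Python sketch below is an added illustration (function and parameter names are assumptions).

```python
# Minimal sketch (illustrative): classifying one day's hair count against
# the thresholds quoted above (>100/day abnormal, up to 250 normal on a
# shampoo day).

def daily_count_abnormal(hairs_lost, shampooed_today=False):
    limit = 250 if shampooed_today else 100
    return hairs_lost > limit

for lost, washed in [(85, False), (120, False), (240, True)]:
    label = "abnormal" if daily_count_abnormal(lost, washed) else "normal"
    print(lost, label)
# 85 normal, 120 abnormal, 240 normal (shampoo day)
```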
Trichoscopy is a noninvasive method of examining hair and scalp. The test may be performed with the use of a handheld dermoscope or a video dermoscope. It allows differential diagnosis of hair loss in most cases. There are two types of identification tests for female pattern baldness: the Ludwig Scale and the Savin Scale. Both track the progress of diffused thinning, which typically begins on the crown of the head behind the hairline, and becomes gradually more pronounced. For male pattern baldness, the Hamilton–Norwood scale tracks the progress of a receding hairline and/or a thinning crown, through to a horseshoe-shaped ring of hair around the head and on to total baldness. In almost all cases of thinning, and especially in cases of severe hair loss, it is recommended to seek advice from a doctor or dermatologist. Many types of thinning have an underlying genetic or health-related cause, which a qualified professional will be able to diagnose.
Management
Hiding hair loss
Head
One method of hiding hair loss is the comb over, which involves restyling the remaining hair to cover the balding area. It is usually a temporary solution, useful only while the area of hair loss is small. As the hair loss increases, a comb over becomes less effective.
Another method is to wear a hat or a hairpiece such as a wig or toupee. The wig is a layer of artificial or natural hair made to resemble a typical hair style. In most cases the hair is artificial. Wigs vary widely in quality and cost. In the United States, the best wigs – those that look like real hair – cost up to tens of thousands of dollars. Organizations also collect individuals' donations of their own natural hair to be made into wigs for young cancer patients who have lost their hair due to chemotherapy or other cancer treatment, as well as for people with other types of hair loss.
Eyebrows
Though not as common as the loss of hair on the head, chemotherapy, hormone imbalance, forms of hair loss, and other factors can also cause loss of hair in the eyebrows. Loss of growth in the outer one third of the eyebrow is often associated with hypothyroidism. Artificial eyebrows are available to replace missing eyebrows or to cover patchy eyebrows. Eyebrow embroidery is another option which involves the use of a blade to add pigment to the eyebrows. This gives a natural 3D look for those who are worried about an artificial look and it lasts for two years. Micropigmentation (permanent makeup tattooing) is also available for those who want the look to be permanent.
Medications
Treatments for the various forms of hair loss have limited success. Three medications have evidence to support their use in male pattern hair loss: minoxidil, finasteride, and dutasteride. They typically work better to prevent further hair loss than to regrow lost hair. On June 13, 2022, the U.S. Food and Drug Administration (FDA) approved Olumiant (baricitinib) for adults with severe alopecia areata. It is the first FDA-approved drug for systemic treatment, or treatment for any area of the body.
Minoxidil (Rogaine) is a nonprescription medication approved for male pattern baldness and alopecia areata. In a liquid or foam, it is rubbed into the scalp twice a day. Some people have an allergic reaction to the propylene glycol in the minoxidil solution, so a minoxidil foam was developed without propylene glycol. Not all users will regrow hair. Minoxidil is also prescribed in oral tablet form to encourage hair regrowth, although it is not FDA-approved to treat hair loss. The longer the hair has stopped growing, the less likely minoxidil will regrow it. Minoxidil is not effective for other causes of hair loss. Hair regrowth can take 1 to 6 months to begin. Treatment must be continued indefinitely; if it is stopped, hair loss resumes, and any hair regrown, or kept from being lost, while minoxidil was used will be lost. The most frequent side effects are mild scalp irritation, allergic contact dermatitis, and unwanted hair in other parts of the body.
Finasteride (Propecia) is used in male-pattern hair loss in a pill form, taken 1 milligram per day. It is not indicated for women and is not recommended in pregnant women (as it is known to cause birth defects in fetuses). Treatment is effective within 6 weeks of starting. Finasteride causes an increase in hair retention, the weight of hair, and some increase in regrowth. Side effects in about 2% of males include decreased sex drive, erectile dysfunction, and ejaculatory dysfunction. Treatment should be continued as long as positive results occur. Once treatment is stopped, hair loss resumes.
Corticosteroid injections into the scalp can be used to treat alopecia areata. This type of treatment is repeated on a monthly basis. For extensive hair loss due to alopecia areata, oral pills may be used. Results may take up to a month to be seen.
Immunosuppressants applied to the scalp have been shown to temporarily reverse alopecia areata, though the side effects of some of these drugs make such therapy questionable.
There is some tentative evidence that anthralin may be useful for treating alopecia areata.
Hormonal modulators (oral contraceptives or antiandrogens such as spironolactone and flutamide) can be used for female-pattern hair loss associated with hyperandrogenemia.
Surgery
Hair transplantation is usually carried out under local anaesthetic. A surgeon will move healthy hair from the back and sides of the head to areas of thinning. The procedure can take between four and eight hours, and additional sessions can be carried out to make hair even thicker. Transplanted hair falls out within a few weeks, but regrows permanently within months. Hair transplantation takes tiny plugs of skin, each of which contains a few hairs, and implants the plugs into bald sections. The plugs are generally taken from the back or sides of the scalp. Several transplant sessions may be necessary.
Surgical options, such as follicle transplants, scalp flaps, and hair loss reduction, are available. These procedures are generally chosen by those who are self-conscious about their hair loss, but they are expensive and painful, with a risk of infection and scarring. Once surgery has occurred, six to eight months are needed before the quality of new hair can be assessed.
Scalp reduction is the process of decreasing the area of bald skin on the head. In time, the skin on the head becomes flexible and stretched enough that some of it can be surgically removed. After the hairless scalp is removed, the space is closed with hair-covered scalp. Scalp reduction is generally done in combination with hair transplantation to provide a natural-looking hairline, especially for those with extensive hair loss.
Hairline lowering can sometimes be used to lower a high hairline secondary to hair loss, although there may be a visible scar after further hair loss.
Wigs are an alternative to medical and surgical treatment; some patients wear a wig or hairpiece. They can be used permanently or temporarily to cover the hair loss. High-quality, natural-looking wigs and hairpieces are available.
Chemotherapy
Hypothermia caps may be used to prevent hair loss during some kinds of chemotherapy, specifically, when taxanes or anthracyclines are administered. It is not recommended to be used when cancer is present in the skin of the scalp or for lymphoma or leukemia. There are generally only minor side effects from scalp cooling given during chemotherapy.
Embracing baldness
Instead of attempting to conceal their hair loss, some people embrace it by either doing nothing about it or sporting a shaved head. The general public became more accepting of men with shaved heads in the early 1950s, when Russian-American actor Yul Brynner began sporting the look; the resulting phenomenon inspired many of his male fans to shave their heads. Male celebrities then continued to bring mainstream popularity to shaved heads, including athletes such as Michael Jordan and Zinedine Zidane and actors such as Dwayne Johnson, Ben Kingsley, and Jason Statham. Baldness in females, however, is still viewed as less "normal" in various parts of the world.
Alternative medicine
Dietary supplements are not typically recommended. There is only one small trial of saw palmetto which shows tentative benefit in those with mild to moderate androgenetic alopecia. There is no evidence for biotin. Evidence for most other alternative medicine remedies is also insufficient. There was no good evidence for ginkgo, aloe vera, ginseng, bergamot, hibiscus, or sophora as of 2011. Many people use unproven treatments to treat hair loss. Egg oil, in Indian, Japanese, Unani (Roghan Baiza Murgh) and Chinese traditional medicine, was traditionally used as a treatment for hair loss.
Research
Research is looking into connections between hair loss and other health issues. While there has been speculation about a connection between early-onset male pattern hair loss and heart disease, a review of articles from 1954 to 1999 found no conclusive connection between baldness and coronary artery disease. The dermatologists who conducted the review suggested further study was needed. Environmental factors are under review. A 2007 study indicated that smoking may be a factor associated with age-related hair loss among Asian men. The study controlled for age and family history, and found statistically significant positive associations between moderate or severe male pattern hair loss and smoking status. Vertex baldness is associated with an increased risk of coronary heart disease (CHD) and the relationship depends upon the severity of baldness, while frontal baldness is not. Thus, vertex baldness might be a marker of CHD and is more closely associated with atherosclerosis than frontal baldness.
Hair follicle aging
A key aspect of hair loss with age is the aging of the hair follicle. Ordinarily, hair follicle renewal is maintained by the stem cells associated with each follicle. Aging of the hair follicle appears to be primed by a sustained cellular response to the DNA damage that accumulates in renewing stem cells during aging. This damage response involves the proteolysis of type XVII collagen by neutrophil elastase in response to DNA damage in hair follicle stem cells. Proteolysis of collagen leads to elimination of the damaged cells and, consequently, to terminal hair follicle miniaturization.
Hedgehog signaling
In June 2022 the University of California, Irvine announced that researchers have discovered that hedgehog signaling in murine fibroblasts induces new hair growth and hair multiplication while hedgehog activation increases fibroblast heterogeneity and drives new cell states. A new signaling molecule called SCUBE3 potently stimulates hair growth and may offer a therapeutic treatment for androgenetic alopecia.
Etymology
The term alopecia () is from the Classical Greek ἀλώπηξ, alōpēx, meaning "fox". The origin of this usage is because this animal sheds its coat twice a year, or because in ancient Greece foxes often lost hair because of mange.
See also
Alopecia in animals
Lichen planopilaris
List of conditions caused by problems with junctional proteins
Locks of Love – charity that provides hair prosthetics to alopecia patients
Psychogenic alopecia
References
External links
Hair loss at Curlie | 194 |
Ichthyosis vulgaris | Ichthyosis vulgaris (also known as "autosomal dominant ichthyosis" and "Ichthyosis simplex") is a skin disorder causing dry, scaly skin. It is the most common form of ichthyosis, affecting around 1 in 250 people. For this reason it is known as common ichthyosis. It is usually an autosomal dominant inherited disease (often associated with filaggrin), although a rare non-heritable version called acquired ichthyosis exists.
Presentation
The symptoms of the inherited form of ichthyosis vulgaris are not usually present at birth but generally develop between three months and five years of age. The symptoms will often improve with age, although they may grow more severe again in old age. The condition is not life-threatening; the impact on the patient, if it is a mild case, is generally restricted to mild itching and the social impact of having skin with an unusual appearance. People with mild cases have symptoms that include scaly patches on the shins, fine white scales on the forearms and upper arms, and rough palms. People with the mildest cases have no symptoms other than faint, tell-tale "mosaic lines" between the Achilles tendons and the calf muscles.
Severe cases, although rare, do exist. Severe cases entail the buildup of scales everywhere, with areas of the body that have a concentration of sweat glands being least affected. Areas where the skin rubs together, such as the armpits, the groin, and the "folded" areas of the elbows and knees, are also less affected. Various topical treatments are available to "exfoliate" the scales. These include lotions that contain alpha-hydroxy acids.
Associated conditions
Many people with severe ichthyosis have problems sweating due to the buildup of scales on the skin. This may lead to problems such as "prickly itch", which results from the afflicted skin being unable to sweat due to the buildup of scales, or problems associated with overheating. The majority of people with ichthyosis vulgaris can sweat at least a little. Paradoxically, this means most would be more comfortable living in a hot and humid climate. Sweating helps to shed scales, which improves the appearance of the skin and prevents "prickly itch". The dry skin will crack on digits or extremities and create bloody cuts, and the skin is painful when inflamed and/or tight. For children and adolescents, psychological concerns may include inconsistent self-image, mood fluctuation due to cyclical outbreaks, a tendency to addiction, the possibility of social withdrawal when the skin is noticeably infected, and preoccupation with appearance. Strong air conditioning and excessive consumption of alcohol can also increase the buildup of scales.
Over 50% of people with ichthyosis vulgaris have some type of atopic disease such as allergies, eczema, or asthma. Another common condition associated with ichthyosis vulgaris is keratosis pilaris (small bumps mainly appearing on the back of the upper arms).
Genetics
Ichthyosis vulgaris is one of the most common genetic disorders caused by a single gene. The disorder is believed to be caused by mutations to the gene encoding profilaggrin (a protein which is converted to filaggrin, which plays a vital role in the structure of the skin). Around 10% of the population have some detrimental mutations to the profilaggrin gene, which is also linked to atopic dermatitis (another skin disorder that is often present with ichthyosis vulgaris). The exact mutation is known for only some cases of ichthyosis vulgaris. It is generally considered to be an autosomal dominant condition, i.e., a single genetic mutation causes the disease and an affected person has a 50% chance of passing the condition on to their child. There is some research indicating it may be semi-dominant. This means that a single mutation would cause a mild case of ichthyosis vulgaris and mutations to both copies of the gene would produce a more severe case.
Diagnosis
See also
Harlequin-type ichthyosis
List of cutaneous conditions
List of cutaneous conditions caused by mutations in keratins
References
External links
DermAtlas 28
Photographs from Ichthyosis Information | 195 |
Immune thrombocytopenic purpura | Immune thrombocytopenic purpura (ITP), also known as idiopathic thrombocytopenic purpura or immune thrombocytopenia, is a type of thrombocytopenic purpura defined as an isolated low platelet count with a normal bone marrow in the absence of other causes of low platelets. It causes a characteristic red or purple bruise-like rash and an increased tendency to bleed. Two distinct clinical syndromes manifest as an acute condition in children and a chronic condition in adults. The acute form often follows an infection and spontaneously resolves within two months. Chronic immune thrombocytopenia persists longer than six months with a specific cause being unknown.
ITP is an autoimmune disease with antibodies detectable against several platelet surface structures.
ITP is diagnosed by identifying a low platelet count on a complete blood count (a common blood test). However, since the diagnosis depends on the exclusion of other causes of a low platelet count, additional investigations (such as a bone marrow biopsy) may be necessary in some cases.
In mild cases, only careful observation may be required, but very low counts or significant bleeding may prompt treatment with corticosteroids, intravenous immunoglobulin, anti-D immunoglobulin, or immunosuppressive medications. Refractory ITP (not responsive to conventional treatment, or constantly relapsing after splenectomy) requires treatment to reduce the risk of clinically significant bleeding. Platelet transfusions may be used in severe cases with very low platelet counts in people who are bleeding. Sometimes the body may compensate by making abnormally large platelets.
Signs and symptoms
Signs include the spontaneous formation of bruises (purpura) and petechiae (tiny bruises), especially on the extremities, bleeding from the nostrils and/or gums, and menorrhagia (excessive menstrual bleeding), any of which may occur if the platelet count is below 20,000 per μl. A very low count (<10,000 per μl) may result in the spontaneous formation of hematomas (blood masses) in the mouth or on other mucous membranes. Bleeding time from minor lacerations or abrasions is usually prolonged. Serious and possibly fatal complications due to extremely low counts (<5,000 per μl) include subarachnoid or intracerebral hemorrhage (bleeding inside the skull or brain), lower gastrointestinal bleeding or other internal bleeding. An ITP patient with an extremely low count is vulnerable to internal bleeding caused by blunt abdominal trauma, as might be experienced in a motor vehicle crash. These complications are not likely when the platelet count is above 20,000 per μl.
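The count thresholds in this section can be summarized as a simple lookup; the following minimal Python sketch is an added illustration (the tier wording is an assumption; the per-microlitre cut-offs are the ones quoted above).

```python
# Minimal sketch (illustrative): maps a platelet count to the bleeding
# picture described in this section.

def itp_bleeding_risk(platelets_per_ul):
    if platelets_per_ul < 5_000:
        return "extremely low: risk of intracranial or other internal bleeding"
    if platelets_per_ul < 10_000:
        return "very low: spontaneous hematomas on mucous membranes"
    if platelets_per_ul < 20_000:
        return "low: purpura, petechiae, epistaxis, menorrhagia possible"
    return "serious complications unlikely above 20,000 per microlitre"

for count in (3_000, 8_000, 15_000, 60_000):
    print(count, "->", itp_bleeding_risk(count))
```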
Pathogenesis
In approximately 60 percent of cases, antibodies against platelets can be detected. Most often these antibodies are against platelet membrane glycoproteins IIb-IIIa or Ib-IX, and are of the immunoglobulin G (IgG) type. The Harrington–Hollingsworth experiment established the immune pathogenesis of ITP. The coating of platelets with IgG renders them susceptible to opsonization and phagocytosis by splenic macrophages, as well as by Kupffer cells in the liver. The IgG autoantibodies are also thought to damage megakaryocytes, the precursor cells to platelets, although this is believed to contribute only slightly to the decrease in platelet numbers. Recent research indicates that impaired production of the glycoprotein hormone thrombopoietin, the stimulant for platelet production, may be a contributing factor to the reduction in circulating platelets. This observation has led to the development of a class of ITP-targeted medications referred to as thrombopoietin receptor agonists. The stimulus for auto-antibody production in ITP is probably abnormal T cell activity. Preliminary findings suggest that these T cells can be influenced by medications that target B cells, such as rituximab.
Diagnosis
The diagnosis of ITP is a process of exclusion. First, it has to be determined that there are no blood abnormalities other than a low platelet count, and no physical signs other than bleeding. Then, secondary causes (5–10 percent of suspected ITP cases) should be excluded. Such secondary causes include leukemia, medications (e.g., quinine, heparin), lupus erythematosus, cirrhosis, HIV, hepatitis C, congenital causes, antiphospholipid syndrome, von Willebrand factor deficiency, onyalai and others. All patients with presumed ITP should be tested for HIV and hepatitis C virus, as platelet counts may be corrected by treating the underlying disease. In approximately 2.7 to 5 percent of cases, autoimmune hemolytic anemia and ITP coexist, a condition referred to as Evans syndrome. Despite the destruction of platelets by splenic macrophages, the spleen is normally not enlarged. In fact, an enlarged spleen should lead to a search for other possible causes for the thrombocytopenia. Bleeding time is usually prolonged in ITP patients. However, the use of bleeding time in diagnosis is discouraged by the American Society of Hematology practice guidelines and a normal bleeding time does not exclude a platelet disorder. Bone marrow examination may be performed on patients over the age of 60 and those who do not respond to treatment, or when the diagnosis is in doubt. On examination of the marrow, an increase in the production of megakaryocytes may be observed and may help in establishing a diagnosis of ITP. An analysis for anti-platelet antibodies is a matter of the clinician's preference, as there is disagreement on whether the 80 percent specificity of this test is sufficient to be clinically useful.
Treatment
With rare exceptions, there is usually no need to treat based on platelet counts. Many older recommendations suggested a certain platelet count threshold (usually somewhere below 20,000 per μl) as an indication for hospitalization or treatment. Current guidelines recommend treatment only in cases of significant bleeding.
Treatment recommendations sometimes differ for adult and pediatric ITP.
Steroids
Initial treatment usually consists of the administration of corticosteroids, a group of medications that suppress the immune system. The dose and mode of administration is determined by platelet count and whether there is active bleeding: in urgent situations, infusions of dexamethasone or methylprednisolone may be used, while oral prednisone or prednisolone may suffice in less severe cases. Once the platelet count has improved, the dose of steroid is gradually reduced while the possibility of relapse is monitored; 60–90 percent will experience a relapse during dose reduction or cessation. Long-term steroids are avoided if possible because of potential side-effects, which include osteoporosis, diabetes and cataracts.
Anti-D
Another option, suitable for Rh-positive patients with functional spleens is intravenous administration of Rho(D) immune globulin [Human; Anti-D]. The mechanism of action of anti-D is not fully understood. However, following administration, anti-D-coated red blood cell complexes saturate Fcγ receptor sites on macrophages, resulting in preferential destruction of red blood cells (RBCs), therefore sparing antibody-coated platelets. There are two anti-D products indicated for use in patients with ITP: WinRho SDF and Rhophylac. The most common adverse reactions are headache (15%), nausea/vomiting (12%), chills (<2%), and fever (1%).
Steroid-sparing agents
There is increasing use of immunosuppressants such as mycophenolate mofetil and azathioprine because of their effectiveness. In chronic refractory cases, where immune pathogenesis has been confirmed, the off-label use of the vinca alkaloid and chemotherapy agent vincristine may be attempted. However, vincristine has significant side effects and its use in treating ITP must be approached with caution, especially in children.
Intravenous immunoglobulin
Intravenous immunoglobulin (IVIg) may be infused in some cases in order to decrease the rate at which macrophages consume antibody-tagged platelets. However, while sometimes effective, it is costly and produces improvement that generally lasts less than a month. Nevertheless, in the case of an ITP patient already scheduled for surgery who has a dangerously low platelet count and has experienced a poor response to other treatments, IVIg can rapidly increase platelet counts, and can also help reduce the risk of major bleeding by transiently increasing platelet counts.
Thrombopoietin receptor agonists
Thrombopoietin receptor agonists are pharmaceutical agents that stimulate platelet production in the bone marrow. In this, they differ from the previously discussed agents that act by attempting to curtail platelet destruction. Two such products are currently available:
Romiplostim (trade name Nplate) is a thrombopoiesis stimulating Fc-peptide fusion protein (peptibody) that is administered by subcutaneous injection. Designated an orphan drug in 2003 under United States law, clinical trials demonstrated romiplostim to be effective in treating chronic ITP, especially in relapsed post-splenectomy patients. Romiplostim was approved by the United States Food and Drug Administration (FDA) for long-term treatment of adult chronic ITP on August 22, 2008.
Eltrombopag (trade name Promacta in the US, Revolade in the EU) is an orally-administered agent with an effect similar to that of romiplostim. It too has been demonstrated to increase platelet counts and decrease bleeding in a dose-dependent manner. Developed by GlaxoSmithKline and also designated an orphan drug by the FDA, Promacta was approved by the FDA on November 20, 2008. Thrombopoietin receptor agonists have exhibited the greatest success so far in treating patients with refractory ITP. Side effects of thrombopoietin receptor agonists include headache, joint or muscle pain, dizziness, nausea or vomiting, and an increased risk of blood clots.
Surgery
Splenectomy (removal of the spleen) may be considered in patients who are either unresponsive to steroid treatment, have frequent relapses, or cannot be tapered off steroids after a few months. Platelets which have been bound by antibodies are taken up by macrophages in the spleen (which have Fc receptors), and so removal of the spleen reduces platelet destruction. The procedure is potentially risky in ITP cases due to the increased possibility of significant bleeding during surgery. Durable remission following splenectomy is achieved in 60–80 percent of ITP cases. Even though there is a consensus regarding the short-term efficacy of splenectomy, findings on its long-term efficacy and side-effects are controversial. After splenectomy, 11.6–75 percent of ITP cases relapsed, and 8.7–40 percent of ITP cases had no response to splenectomy. The use of splenectomy to treat ITP has diminished since the development of steroid therapy and other pharmaceutical remedies.
Platelet transfusion
Platelet transfusion alone is normally not recommended except in an emergency and is usually unsuccessful in producing a long-term platelet count increase. This is because the underlying autoimmune mechanism that is destroying the patient's platelets will also destroy donor platelets, and so platelet transfusions are not considered a long-term treatment option.
H. pylori eradication
In adults, particularly those living in areas with a high prevalence of Helicobacter pylori (which normally inhabits the stomach wall and has been associated with peptic ulcers), identification and treatment of this infection has been shown to improve platelet counts in a third of patients. In a fifth, the platelet count normalized completely; this response rate is similar to that found in treatment with rituximab, which is more expensive and less safe. In children, this approach is not supported by evidence, except in high prevalence areas. Urea breath testing and stool antigen testing perform better than serology-based tests; moreover, serology may be false-positive after treatment with IVIG.
Other agents
Dapsone (also called diphenylsulfone, DDS, or avlosulfon) is an anti-infective sulfone medication. Dapsone may also be helpful in treating lupus, rheumatoid arthritis, and as a second-line treatment for ITP. The mechanism by which dapsone assists in ITP is unclear but an increased platelet count is seen in 40–60 percent of recipients.
The off-label use of rituximab, a chimeric monoclonal antibody against the B cell surface antigen CD20, may sometimes be an effective alternative to splenectomy. However, significant side-effects can occur, and randomized controlled trials are inconclusive.
Prognosis
In general, patients with acute ITP only rarely have life-threatening bleeding. Most patients ultimately have stable, albeit lower, platelet counts that are still hemostatic. Unlike pediatric patients, who can often be cured, most adults run a chronic course even after splenectomy.
Epidemiology
A normal platelet count is considered to be in the range of 150,000–450,000 per microlitre (μl) of blood for most healthy individuals. Hence one may be considered thrombocytopenic below that range, although the threshold for a diagnosis of ITP is not tied to any specific number.
The incidence of ITP is estimated at 50–100 new cases per million per year, with children accounting for half of that number. At least 70 percent of childhood cases will end up in remission within six months, even without treatment. Moreover, a third of the remaining chronic cases will usually remit during follow-up observation, and another third will end up with only mild thrombocytopenia (defined as a platelet count above 50,000). A number of immune-related genes and polymorphisms have been identified as influencing predisposition to ITP, with the FCGR3a-V158 allele and KIRDS2/DL2 increasing susceptibility and KIR2DS5 shown to be protective.
ITP is usually chronic in adults and the probability of durable remission is 20–40 percent. The male to female ratio in the adult group varies from 1:1.2 to 1.7 in most age ranges (childhood cases are roughly equal for both sexes) and the median age of adults at diagnosis is 56–60. The ratio between male and female adult cases tends to widen with age. In the United States, the adult chronic population is thought to be approximately 60,000—with women outnumbering men approximately 2 to 1, which has resulted in ITP being designated an orphan disease.
The mortality rate due to chronic ITP varies but tends to be higher relative to the general population for any age range. In a study conducted in Great Britain, it was noted that ITP causes an approximately 60 percent higher rate of mortality compared to sex- and age-matched subjects without ITP. This increased risk of death with ITP is largely concentrated in the middle-aged and elderly. Ninety-six percent of reported ITP-related deaths were individuals 45 years or older. No significant difference was noted in the rate of survival between males and females.
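The quoted incidence translates directly into expected case numbers for a given population; the minimal Python sketch below is an added illustration (the example population is hypothetical).

```python
# Minimal sketch (illustrative): expected annual new ITP cases for a
# population, using the quoted incidence of 50-100 new cases per million
# per year; roughly half of these are children.

def expected_new_cases(population, low=50, high=100):
    """Return the (low, high) estimate of new cases per year."""
    per_million = population / 1_000_000
    return per_million * low, per_million * high

# Example: a hypothetical population of 60 million.
print(expected_new_cases(60_000_000))  # (3000.0, 6000.0)
```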
Pregnancy
Anti-platelet autoantibodies in a pregnant woman with ITP will attack the patient's own platelets and will also cross the placenta and react against fetal platelets. Therefore, ITP is a significant cause of fetal and neonatal immune thrombocytopenia. Approximately 10% of newborns affected by ITP will have platelet counts <50,000/uL and 1% to 2% will have a risk of intracerebral hemorrhage comparable to infants with neonatal alloimmune thrombocytopenia (NAIT). No lab test can reliably predict if neonatal thrombocytopenia will occur. The risk of neonatal thrombocytopenia is increased with:
Mothers with a history of splenectomy for ITP
Mothers who had a previous infant affected with ITP
Gestational (maternal) platelet count less than 100,000/uL
It is recommended that pregnant women with thrombocytopenia or a previous diagnosis of ITP should be tested for serum antiplatelet antibodies. A woman with symptomatic thrombocytopenia and an identifiable antiplatelet antibody should be started on therapy for their ITP, which may include steroids or IVIG. Fetal blood analysis to determine the platelet count is not generally performed, as ITP-induced thrombocytopenia in the fetus is generally less severe than NAIT. Platelet transfusions may be performed in newborns, depending on the degree of thrombocytopenia. It is recommended that neonates be followed with serial platelet counts for the first few days after birth.
History
After initial reports by the Portuguese physician Amato Lusitano in 1556 and Lazarus de la Rivière (physician to the King of France) in 1658, it was the German physician and poet Paul Gottlieb Werlhof who in 1735 wrote the most complete initial report of the purpura of ITP. Platelets were unknown at the time. The name "Werlhof's disease" was used more widely before the current descriptive name became more popular. Platelets were described in the early 19th century, and in the 1880s several investigators linked the purpura with abnormalities in the platelet count. The first report of a successful therapy for ITP was in 1916, when a young Polish medical student, Paul Kaznelson, described a female patient's response to a splenectomy. Splenectomy remained a first-line remedy until the introduction of steroid therapy in the 1950s.
References
== External links == | 196 |
Impetigo | Impetigo is a bacterial infection that involves the superficial skin. The most common presentation is yellowish crusts on the face, arms, or legs. Less commonly there may be large blisters which affect the groin or armpits. The lesions may be painful or itchy. Fever is uncommon. It is typically due to either Staphylococcus aureus or Streptococcus pyogenes. Risk factors include attending day care, crowding, poor nutrition, diabetes mellitus, contact sports, and breaks in the skin such as from mosquito bites, eczema, scabies, or herpes. With contact it can spread around or between people. Diagnosis is typically based on the symptoms and appearance. Prevention is by hand washing, avoiding people who are infected, and cleaning injuries. Treatment is typically with antibiotic creams such as mupirocin or fusidic acid. Antibiotics by mouth, such as cefalexin, may be used if large areas are affected. Antibiotic-resistant forms have been found. Impetigo affected about 140 million people (2% of the world population) in 2010. It can occur at any age, but is most common in young children. In some places the condition is also known as "school sores". Without treatment people typically get better within three weeks. Recurring infections can occur due to colonization of the nose by the bacteria. Complications may include cellulitis or poststreptococcal glomerulonephritis. The name is from the Latin impetere meaning "attack".
Signs and symptoms
Contagious impetigo
This most common form of impetigo, also called nonbullous impetigo, most often begins as a red sore near the nose or mouth which soon breaks, leaking pus or fluid, and forms a honey-colored scab, followed by a red mark which often heals without leaving a scar. Sores are not painful, but they may be itchy. Lymph nodes in the affected area may be swollen, but fever is rare. Touching or scratching the sores may easily spread the infection to other parts of the body. Skin ulcers with redness and scarring also may result from scratching or abrading the skin.
Bullous impetigo
Bullous impetigo, mainly seen in children younger than 2 years, involves painless, fluid-filled blisters, mostly on the arms, legs, and trunk, surrounded by red and itchy (but not sore) skin. The blisters may be large or small. After they break, they form yellow scabs.
Ecthyma
Ecthyma, the nonbullous form of impetigo, produces painful fluid- or pus-filled sores with redness of skin, usually on the arms and legs, which become ulcers that penetrate deeper into the dermis. After they break open, they form hard, thick, gray-yellow scabs, which sometimes leave scars. Ecthyma may be accompanied by swollen lymph nodes in the affected area.
Causes
Impetigo is primarily caused by Staphylococcus aureus, and sometimes by Streptococcus pyogenes. Both bullous and nonbullous are primarily caused by S. aureus, with Streptococcus also commonly being involved in the nonbullous form.
Predisposing factors
Impetigo is more likely to infect children ages 2–5, especially those that attend school or day care. 70% of cases are the nonbullous form and 30% are the bullous form. Other factors can increase the risk of contracting impetigo such as diabetes mellitus, dermatitis, immunodeficiency disorders, and other irritable skin disorders. Impetigo occurs more frequently among people who live in warm climates.
Transmission
The infection is spread by direct contact with lesions or with nasal carriers. The incubation period is 1–3 days after exposure to Streptococcus and 4–10 days for Staphylococcus. Dried streptococci in the air are not infectious to intact skin. Scratching may spread the lesions.
Diagnosis
Impetigo is usually diagnosed based on its appearance. It generally appears as honey-colored scabs formed from dried serum and is often found on the arms, legs, or face. If a visual diagnosis is unclear a culture may be done to test for resistant bacteria.
Differential diagnosis
Other conditions that can result in symptoms similar to the common form include contact dermatitis, herpes simplex virus, discoid lupus, and scabies.
Other conditions that can result in symptoms similar to the blistering form include other bullous skin diseases, burns, and necrotizing fasciitis.
Prevention
To prevent the spread of impetigo the skin and any open wounds should be kept clean and covered. Care should be taken to keep fluids from an infected person away from the skin of a non-infected person. Washing hands, linens, and affected areas will lower the likelihood of contact with infected fluids. Scratching can spread the sores; keeping nails short will reduce the chances of spreading. Infected people should avoid contact with others and eliminate sharing of clothing or linens. Children with impetigo can return to school 24 hours after starting antibiotic therapy as long as their draining lesions are covered.
Treatment
Antibiotics, either as a cream or by mouth, are usually prescribed. Mild cases may be treated with mupirocin ointments. In 95% of cases, a single 7-day antibiotic course results in resolution in children. Topical antiseptics have been argued to be inferior to topical antibiotics and therefore should not be used as a replacement for them. However, the National Institute for Health and Care Excellence (NICE) as of February 2020 recommends a hydrogen peroxide 1% cream antiseptic rather than topical antibiotics for localised non-bullous impetigo in otherwise well individuals. This recommendation is part of an effort to reduce the overuse of antimicrobials that may contribute to the development of resistant organisms such as MRSA.
More severe cases require oral antibiotics, such as dicloxacillin, flucloxacillin, or erythromycin. Alternatively, amoxicillin combined with clavulanate potassium, first-generation cephalosporins, and many others may also be used as antibiotic treatment. Alternatives for people who are seriously allergic to penicillin or have infections with methicillin-resistant Staphylococcus aureus include doxycycline, clindamycin, and trimethoprim-sulphamethoxazole, although doxycycline should not be used in children under the age of eight due to the risk of drug-induced tooth discolouration. When streptococci alone are the cause, penicillin is the drug of choice.
When the condition presents with ulcers, valacyclovir, an antiviral, may be given in case a viral infection is causing the ulcer.
Alternative medicine
There is not enough evidence to recommend alternative medicine such as tea tree oil or honey.
Prognosis
Without treatment, individuals with impetigo typically get better within three weeks. Complications may include cellulitis or poststreptococcal glomerulonephritis. Rheumatic fever does not appear to be related.
Epidemiology
Globally, impetigo affects more than 162 million children in low- to middle-income countries. Rates are highest in countries with limited resources, and the condition is especially prevalent in Oceania. The tropical climate and high population density in lower socioeconomic regions contribute to these high rates. In the United Kingdom, about 2.8% of children under the age of 4 contract impetigo each year; this decreases to 1.6% for children up to 15 years old. As age increases, the rate of impetigo declines, but all ages remain susceptible.
History
Impetigo was originally described and differentiated by William Tilbury Fox around 1864. The word impetigo is the generic Latin word for skin eruption, and it stems from the verb impetere to attack (as in impetus). Before the discovery of antibiotics, the disease was treated with an application of the antiseptic gentian violet, which was an effective treatment.
References
External links
Impetigo at Curlie
Impetigo and Ecthyma at Merck Manual of Diagnosis and Therapy Professional Edition
Botulism | Botulism is a rare and potentially fatal illness caused by a toxin produced by the bacterium Clostridium botulinum. The disease begins with weakness, blurred vision, feeling tired, and trouble speaking. This may then be followed by weakness of the arms, chest muscles, and legs. Vomiting, swelling of the abdomen, and diarrhea may also occur. The disease does not usually affect consciousness or cause a fever.
Botulism can be spread in several ways. The bacterial spores which cause it are common in both soil and water. They produce the botulinum toxin when exposed to low oxygen levels and certain temperatures. Foodborne botulism happens when food containing the toxin is eaten. Infant botulism happens when the bacteria develop in the intestines and release the toxin. This typically only occurs in children less than six months old, as protective mechanisms develop after that time. Wound botulism is found most often among those who inject street drugs. In this situation, spores enter a wound, and in the absence of oxygen, release the toxin. It is not passed directly between people. The diagnosis is confirmed by finding the toxin or bacteria in the person in question.
Prevention is primarily by proper food preparation. The toxin, though not the spores, is destroyed by heating food to more than 85 °C (185 °F) for longer than 5 minutes. Honey can contain the organism, and for this reason should not be fed to children under 12 months. Treatment is with an antitoxin. In those who lose their ability to breathe on their own, mechanical ventilation may be necessary for months. Antibiotics may be used for wound botulism. Death occurs in 5 to 10% of people. Botulism also affects many other animals. The word is from Latin botulus, meaning sausage.
Signs and symptoms
The muscle weakness of botulism characteristically starts in the muscles supplied by the cranial nerves—a group of twelve nerves that control eye movements, the facial muscles, and the muscles controlling chewing and swallowing. Double vision, drooping of both eyelids, loss of facial expression, and swallowing problems may therefore occur. In addition to affecting the voluntary muscles, it can also cause disruptions in the autonomic nervous system. This is experienced as a dry mouth and throat (due to decreased production of saliva), postural hypotension (decreased blood pressure on standing, with resultant lightheadedness and risk of blackouts), and eventually constipation (due to decreased forward movement of intestinal contents). Some of the toxins (B and E) also precipitate nausea, vomiting, and difficulty with talking. The weakness then spreads to the arms (starting in the shoulders and proceeding to the forearms) and legs (again from the thighs down to the feet).
Severe botulism leads to reduced movement of the muscles of respiration, and hence problems with gas exchange. This may be experienced as dyspnea (difficulty breathing), but when severe can lead to respiratory failure, due to the buildup of unexhaled carbon dioxide and its resultant depressant effect on the brain. This may lead to respiratory compromise and death if untreated.
Clinicians frequently think of the symptoms of botulism in terms of a classic triad: bulbar palsy and descending paralysis, lack of fever, and clear senses and mental status ("clear sensorium").
Infant botulism
Infant botulism (also referred to as floppy baby syndrome) was first recognized in 1976, and is the most common form of botulism in the United States. Infants are susceptible to infant botulism in the first year of life, with more than 90% of cases occurring in infants younger than six months. Infant botulism results from the ingestion of C. botulinum spores and subsequent colonization of the small intestine. The infant gut may be colonized when the composition of the intestinal microflora (normal flora) is insufficient to competitively inhibit the growth of C. botulinum and when levels of bile acids (which normally inhibit clostridial growth) are lower than later in life.
The growth of the spores releases botulinum toxin, which is then absorbed into the bloodstream and carried throughout the body, causing paralysis by blocking the release of acetylcholine at the neuromuscular junction. Typical symptoms of infant botulism include constipation, lethargy, weakness, difficulty feeding, and an altered cry, often progressing to a complete descending flaccid paralysis. Although constipation is usually the first symptom of infant botulism, it is commonly overlooked.
Honey is a known dietary reservoir of C. botulinum spores and has been linked to infant botulism. For this reason, honey is not recommended for infants less than one year of age. Most cases of infant botulism, however, are thought to be caused by acquiring the spores from the natural environment. Clostridium botulinum is a ubiquitous soil-dwelling bacterium. Many infants with botulism have been found to live near a construction site or an area of soil disturbance.
Infant botulism has been reported in 49 of 50 US states (all save for Rhode Island), and cases have been recognized in 26 countries on five continents.
Complications
Infant botulism typically has no long-term side effects.
Botulism can result in death due to respiratory failure. However, in the past 50 years, the proportion of patients with botulism who die has fallen from about 50% to 7% due to improved supportive care. A patient with severe botulism may require mechanical ventilation (breathing support through a ventilator) as well as intensive medical and nursing care, sometimes for several months. The person may require rehabilitation therapy after leaving the hospital.
Cause
Clostridium botulinum is an anaerobic, Gram-positive, spore-forming rod. Botulinum toxin is one of the most powerful known toxins: about one microgram is lethal to humans when inhaled. It acts by blocking nerve function (neuromuscular blockade) through inhibition of the release of the excitatory neurotransmitter acetylcholine from the presynaptic membrane of neuromuscular junctions in the somatic nervous system. This causes paralysis. Advanced botulism can cause respiratory failure by paralysing the muscles of the chest; this can progress to respiratory arrest. Furthermore, acetylcholine release from the presynaptic membranes of muscarinic nerve synapses is blocked. This can lead to a variety of autonomic signs and symptoms described above.
In all cases, illness is caused by the botulinum toxin produced by the bacterium C. botulinum in anaerobic conditions and not by the bacterium itself. The pattern of damage occurs because the toxin affects nerves that fire (depolarize) at a higher frequency first.
Mechanisms of entry into the human body for botulinum toxin are described below.
Colonization of the gut
The most common form in Western countries is infant botulism. This occurs in infants who are colonized with the bacterium in the small intestine during the early stages of their lives. The bacterium then produces the toxin, which is absorbed into the bloodstream. The consumption of honey during the first year of life has been identified as a risk factor for infant botulism; it is a factor in a fifth of all cases. The adult form of infant botulism is termed adult intestinal toxemia, and is exceedingly rare.
Food
Toxin that is produced by the bacterium in containers of food that have been improperly preserved is the most common cause of food-borne botulism. Fish that has been pickled without the salinity or acidity of brine that contains acetic acid and high sodium levels, as well as smoked fish stored at too high a temperature, presents a risk, as does improperly canned food.
Food-borne botulism results from contaminated food in which C. botulinum spores have been allowed to germinate in low-oxygen conditions. This typically occurs in improperly prepared home-canned food substances and fermented dishes without adequate salt or acidity. Given that multiple people often consume food from the same source, it is common for more than a single person to be affected simultaneously. Symptoms usually appear 12–36 hours after eating, but can also appear within 6 hours to 10 days.
Wound
Wound botulism results from the contamination of a wound with the bacteria, which then secrete the toxin into the bloodstream. This has become more common in intravenous drug users since the 1990s, especially people using black tar heroin and those injecting heroin into the skin rather than the veins. Wound botulism accounts for 29% of cases.
Inhalation
Isolated cases of botulism have been described after inhalation by laboratory workers.
Injection
Symptoms of botulism may occur away from the injection site of botulinum toxin. This may include loss of strength, blurred vision, change of voice, or trouble breathing which can result in death. Onset can be hours to weeks after an injection. This generally only occurs with inappropriate strengths of botulinum toxin for cosmetic use or due to the larger doses used to treat movement disorders. Following a 2008 review the FDA added these concerns as a boxed warning.
Mechanism
The toxin is the protein botulinum toxin produced under anaerobic conditions (where there is no oxygen) by the bacterium Clostridium botulinum.
Clostridium botulinum is a large anaerobic Gram-positive bacillus that forms subterminal endospores.
There are eight serological varieties of the bacterium denoted by the letters A to H. The toxin from all of these acts in the same way and produces similar symptoms: the motor nerve endings are prevented from releasing acetylcholine, causing flaccid paralysis and symptoms of blurred vision, ptosis, nausea, vomiting, diarrhea or constipation, cramps, and respiratory difficulty.
Botulinum toxin is broken into eight neurotoxins (labeled as types A, B, C [C1, C2], D, E, F, and G), which are antigenically and serologically distinct but structurally similar. Human botulism is caused mainly by types A, B, E, and (rarely) F. Types C and D cause toxicity only in other animals.
In October 2013, scientists released news of the discovery of type H, the first new botulism neurotoxin found in forty years. However, further studies showed type H to be a chimeric toxin composed of parts of types F and A (FA).
Some types produce a characteristic putrefactive smell and digest meat (types A and some of B and F); these are said to be proteolytic. Type E and some types of B, C, D, and F are nonproteolytic and can go undetected because there is no strong odor associated with them.
When the bacteria are under stress, they develop spores, which are inert. Their natural habitats are in the soil, in the silt that comprises the bottom sediment of streams, lakes, and coastal waters and ocean, while some types are natural inhabitants of the intestinal tracts of mammals (e.g., horses, cattle, humans), and are present in their excreta. The spores can survive in their inert form for many years.
Toxin is produced by the bacteria when environmental conditions are favourable for the spores to replicate and grow, but the gene that encodes for the toxin protein is actually carried by a virus or phage that infects the bacteria. Little is known about the natural factors that control phage infection and replication within the bacteria.
The spores require warm temperatures, a protein source, an anaerobic environment, and moisture in order to become active and produce toxin. In the wild, decomposing vegetation and invertebrates combined with warm temperatures can provide ideal conditions for the botulism bacteria to activate and produce toxin that may affect feeding birds and other animals. Spores are not killed by boiling, but botulism is uncommon because special, rarely obtained conditions are necessary for botulinum toxin production from C. botulinum spores, including an anaerobic, low-salt, low-acid, low-sugar environment at ambient temperatures.
Botulinum toxin inhibits the release within the nervous system of acetylcholine, a neurotransmitter responsible for communication between motor neurons and muscle cells. All forms of botulism lead to paralysis that typically starts with the muscles of the face and then spreads towards the limbs. In severe forms, botulism leads to paralysis of the breathing muscles and causes respiratory failure. In light of this life-threatening complication, all suspected cases of botulism are treated as medical emergencies, and public health officials are usually involved to identify the source and take steps to prevent further cases from occurring.
Botulinum toxin types A, C, and E cleave SNAP-25, ultimately leading to paralysis.
Diagnosis
For botulism in babies, diagnosis should be based on signs and symptoms. Confirmation of the diagnosis is made by testing a stool or enema specimen with the mouse bioassay.
In people whose history and physical examination suggest botulism, these clues are often not enough to allow a diagnosis. Other diseases such as Guillain–Barré syndrome, stroke, and myasthenia gravis can appear similar to botulism, and special tests may be needed to exclude these other conditions. These tests may include a brain scan, cerebrospinal fluid examination, nerve conduction test (electromyography, or EMG), and an edrophonium chloride (Tensilon) test for myasthenia gravis. A definite diagnosis can be made if botulinum toxin is identified in the food, stomach or intestinal contents, vomit or feces. The toxin is occasionally found in the blood in peracute cases. Botulinum toxin can be detected by a variety of techniques, including enzyme-linked immunosorbent assays (ELISAs), electrochemiluminescent (ECL) tests and mouse inoculation or feeding trials. The toxins can be typed with neutralization tests in mice. In toxicoinfectious botulism, the organism can be cultured from tissues. On egg yolk medium, toxin-producing colonies usually display surface iridescence that extends beyond the colony.
Prevention
Although the vegetative form of the bacteria is destroyed by boiling, the spore itself is not killed by the temperatures reached with normal sea-level-pressure boiling, leaving it free to grow and again produce the toxin when conditions are right.
A recommended prevention measure for infant botulism is to avoid giving honey to infants less than 12 months of age, as botulinum spores are often present. In older children and adults, the normal intestinal bacteria suppress development of C. botulinum.
While commercially canned goods are required to undergo a "botulinum cook" in a pressure cooker at 121 °C (250 °F) for 3 minutes, and thus rarely cause botulism, there have been notable exceptions, including the 1978 Alaskan salmon outbreak and the 2007 Castleberry's Food Company outbreak. Foodborne botulism is nonetheless the rarest form, accounting for only around 15% of cases in the US, and has more frequently resulted from home-canned foods with low acid content, such as carrot juice, asparagus, green beans, beets, and corn. However, outbreaks of botulism have resulted from more unusual sources. In July 2002, fourteen Alaskans ate muktuk (whale meat) from a beached whale, and eight of them developed symptoms of botulism, two of them requiring mechanical ventilation.
Other, much rarer sources of infection (about every decade in the US) include garlic or herbs stored covered in oil without acidification, chili peppers, improperly handled baked potatoes wrapped in aluminum foil, tomatoes, and home-canned or fermented fish.
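The prevention guidance above distinguishes three heat targets that are easy to conflate: vegetative bacteria (killed by ordinary boiling), preformed toxin (inactivated above 85 °C held for more than 5 minutes), and spores (destroyed only by a pressure-canner "botulinum cook" at 121 °C for 3 minutes). The short Python sketch below simply encodes those three thresholds from the text for illustration; the function name and structure are hypothetical, and it is not food-safety guidance.

def heat_step_effect(temp_c: float, minutes: float) -> dict:
    # Which botulism hazards a heat step addresses, per the figures in this article.
    return {
        "vegetative_cells_killed": temp_c >= 100,            # ordinary boiling
        "toxin_inactivated": temp_c > 85 and minutes > 5,    # heat-labile toxin
        "spores_destroyed": temp_c >= 121 and minutes >= 3,  # pressure-canner "botulinum cook"
    }

print(heat_step_effect(100, 10))  # boiling 10 min: kills cells and inactivates toxin, but spores survive
print(heat_step_effect(121, 3))   # commercial "botulinum cook": also destroys spores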
When canning or preserving food at home, attention should be paid to hygiene, pressure, temperature, refrigeration, and storage. When making home preserves, only acidic fruit such as apples, pears, stone fruits, and berries should be used. Tropical fruit and tomatoes are low in acidity and must have some acidity added before they are canned.
Low-acid foods have pH values higher than 4.6. They include red meats, seafood, poultry, milk, and all fresh vegetables except for most tomatoes. Most mixtures of low-acid and acid foods also have pH values above 4.6 unless their recipes include enough lemon juice, citric acid, or vinegar to make them acidic. Acid foods have a pH of 4.6 or lower. They include fruits, pickles, sauerkraut, jams, jellies, marmalades, and fruit butters.
Although tomatoes usually are considered an acid food, some are now known to have pH values slightly above 4.6. Figs also have pH values slightly above 4.6. Therefore, if they are to be canned as acid foods, these products must be acidified to a pH of 4.6 or lower with lemon juice or citric acid. Properly acidified tomatoes and figs are acid foods and can be safely processed in a boiling-water canner.
Oils infused with fresh garlic or herbs should be acidified and refrigerated. Potatoes which have been baked while wrapped in aluminum foil should be kept hot until served or refrigerated. Because the botulism toxin is destroyed by high temperatures, home-canned foods are best boiled for 10 minutes before eating. Metal cans containing food in which bacteria are growing may bulge outwards due to gas production from bacterial growth, or the food inside may be foamy or have a bad odor; cans with any of these signs should be discarded.
Any container of food that has been heat-treated and assumed to be airtight but shows signs of not being so, e.g., metal cans with pinprick holes from rust or mechanical damage, should be discarded. Contamination of a canned food solely with C. botulinum may not cause any visual defects to the container, such as bulging. Only assurance of sufficient thermal processing during production, and absence of a route for subsequent contamination, should be used as indicators of food safety.
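The pH 4.6 cutoff described above amounts to a simple decision rule for home canning. The sketch below, in the same illustrative spirit as the previous one, encodes that rule; the example pH values are typical figures chosen for demonstration, not measurements from this article.

def canning_category(ph: float) -> str:
    # Per the rule above: pH of 4.6 or lower is an acid food; higher is low-acid.
    if ph <= 4.6:
        return "acid food: boiling-water canner is acceptable"
    return "low-acid food: acidify to pH 4.6 or lower, or use a pressure canner"

for food, ph in [("pickles", 3.5), ("some tomatoes", 4.7), ("green beans", 5.6)]:
    print(f"{food} (pH {ph}): {canning_category(ph)}")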
The addition of nitrites and nitrates to processed meats such as ham, bacon, and sausages reduces growth and toxin production of C. botulinum.
Vaccine
Vaccines are under development, but they have disadvantages, and in some cases there are concerns that they may revert to dangerous native activity. As of 2017 work to develop a better vaccine was being carried out, but the US FDA had not approved any vaccine against botulism.
Treatment
Botulism is generally treated with botulism antitoxin and supportive care.
Supportive care for botulism includes monitoring of respiratory function. Respiratory failure due to paralysis may require mechanical ventilation for 2 to 8 weeks, plus intensive medical and nursing care. After this time, paralysis generally improves as new neuromuscular connections are formed.
In some abdominal cases, physicians may try to remove contaminated food still in the digestive tract by inducing vomiting or using enemas. Wounds should be treated, usually surgically, to remove the source of the toxin-producing bacteria.
Antitoxin
Botulinum antitoxin consists of antibodies that neutralize botulinum toxin in the circulatory system by passive immunization. This prevents additional toxin from binding to the neuromuscular junction, but does not reverse any already inflicted paralysis.
In adults, a trivalent antitoxin containing antibodies raised against botulinum toxin types A, B, and E is used most commonly; however, a heptavalent botulism antitoxin has also been developed and was approved by the U.S. FDA in 2013. In infants, horse-derived antitoxin is sometimes avoided for fear of infants developing serum sickness or lasting hypersensitivity to horse-derived proteins. To avoid this, a human-derived antitoxin was developed and approved by the U.S. FDA in 2003 for the treatment of infant botulism. This human-derived antitoxin has been shown to be both safe and effective for the treatment of infant botulism. However, the danger of equine-derived antitoxin to infants has not been clearly established, and one study showed the equine-derived antitoxin to be both safe and effective for the treatment of infant botulism.
Trivalent (A, B, E) botulinum antitoxin is derived from equine sources utilizing whole antibodies (Fab and Fc portions). In the United States, this antitoxin is available from the local health department via the CDC. The second antitoxin, heptavalent (A, B, C, D, E, F, G) botulinum antitoxin, is derived from "despeciated" equine IgG antibodies which have had the Fc portion cleaved off, leaving the F(ab')2 portions. This less immunogenic antitoxin is effective against all known strains of botulism where not contraindicated.
Prognosis
The paralysis caused by botulism can persist for 2 to 8 weeks, during which supportive care and ventilation may be necessary to keep the person alive. Botulism can be fatal in 5% to 10% of people who are affected. However, if left untreated, botulism is fatal in 40% to 50% of cases.
Infant botulism typically has no long-term side effects but can be complicated by treatment-associated adverse events. The case fatality rate is less than 2% for hospitalized babies.
Epidemiology
Globally, botulism is fairly rare, with approximately 1,000 identified cases yearly.
United States
In the United States an average of 145 cases are reported each year. Of these, roughly 65% are infant botulism, 20% are wound botulism, and 15% are foodborne. Infant botulism is predominantly sporadic and not associated with epidemics, but great geographic variability exists. From 1974 to 1996, for example, 47% of all infant botulism cases reported in the U.S. occurred in California.
Between 1990 and 2000, the Centers for Disease Control and Prevention reported 263 individual foodborne cases from 160 botulism events in the United States, with a case-fatality rate of 4%. Thirty-nine percent (103 cases and 58 events) occurred in Alaska, all of which were attributable to traditional Alaska aboriginal foods. In the other 49 states, home-canned food was implicated in 70 events (~69%), with canned asparagus being the most frequent cause. Two restaurant-associated outbreaks affected 25 people. The median number of cases per year was 23 (range 17–43), and the median number of events per year was 14 (range 9–24). The highest incidence rates occurred in Alaska, Idaho, Washington, and Oregon. All other states had an incidence rate of 1 case per ten million people or less.
The number of cases of foodborne and infant botulism has changed little in recent years, but wound botulism has increased because of the use of black tar heroin, especially in California.
All data regarding botulism antitoxin releases and laboratory confirmation of cases in the US are recorded annually by the Centers for Disease Control and Prevention and published on their website.
On 2 July 1971, the U.S. Food and Drug Administration (FDA) released a public warning after learning that a New York man had died and his wife had become seriously ill due to botulism after eating a can of Bon Vivant vichyssoise soup.
Between 31 March and 6 April 1977, 59 individuals developed type B botulism. All who fell ill had eaten at the same Mexican restaurant in Pontiac, Michigan, and had consumed a hot sauce made with improperly home-canned jalapeño peppers, either by adding it to their food, or by eating nachos that had been prepared with the hot sauce. The full clinical spectrum (mild symptomatology with neurologic findings through life-threatening ventilatory paralysis) of type B botulism was documented.
In April 1994, the largest outbreak of botulism in the United States since 1978 occurred in El Paso, Texas. Thirty people were affected; 4 required mechanical ventilation. All ate food from a Greek restaurant. The attack rate among people who ate a potato-based dip was 86% (19/22) compared with 6% (11/176) among people who did not eat the dip (relative risk [RR] = 13.8; 95% confidence interval [CI], 7.6–25.1). The attack rate among people who ate an eggplant-based dip was 67% (6/9) compared with 13% (24/189) among people who did not (RR = 5.2; 95% CI, 2.9–9.5). Botulinum toxin type A was detected in patients and in both dips. Toxin formation resulted from holding aluminum foil-wrapped baked potatoes at room temperature, apparently for several days, before they were used in the dips. Food handlers should be informed of the potential hazards caused by holding foil-wrapped potatoes at ambient temperatures after cooking.
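The relative risks quoted above follow directly from the attack rates: the rate among people exposed to a food divided by the rate among those not exposed. A minimal Python check using only the counts reported in this paragraph (the helper function name is illustrative):

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    # Attack rate in the exposed group divided by attack rate in the unexposed group.
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

print(relative_risk(19, 22, 11, 176))  # potato dip: about 13.8
print(relative_risk(6, 9, 24, 189))    # eggplant dip: 5.25, reported as 5.2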
In 2002, fourteen Alaskans ate muktuk (whale blubber) from a beached whale, resulting in eight of them developing botulism, with two of the affected requiring mechanical ventilation.
Beginning in late June 2007, 8 people contracted botulism poisoning by eating canned food products produced by Castleberry's Food Company in its Augusta, Georgia plant. It was later identified that the Castleberry's plant had serious production problems on a specific line of retorts that had under-processed the cans of food. These issues included broken cooking alarms, leaking water valves, and inaccurate temperature devices, all the result of poor management of the company. All of the victims were hospitalized and placed on mechanical ventilation. The Castleberry's Food Company outbreak was the first instance of botulism in commercial canned foods in the United States in over 30 years.
One person died, 21 cases were confirmed, and 10 more were suspected in Lancaster, Ohio when a botulism outbreak occurred after a church potluck in April 2015. The suspected source was a salad made from home-canned potatoes.
A botulism outbreak occurred in Northern California in May 2017 after 10 people consumed nacho cheese dip served at a gas station in Sacramento County. One man died as a result of the outbreak.
United Kingdom
The largest recorded outbreak of foodborne botulism in the United Kingdom occurred in June 1989. A total of 27 patients were affected; one patient died. Twenty-five of the patients had eaten one brand of hazelnut yogurt in the week before the onset of symptoms. Control measures included the cessation of all yogurt production by the implicated producer, the withdrawal of the firm's yogurts from sale, the recall of cans of the hazelnut conserve, and advice to the general public to avoid the consumption of all hazelnut yogurts.
China
From 1958 to 1983 there were 986 outbreaks of botulism in China involving 4,377 people with 548 deaths.
Qapqal disease
After the Chinese Communist Revolution in 1949, a mysterious plague (named Qapqal disease) was noticed to be affecting several Sibe villages in Qapqal Xibe Autonomous County. It was endemic with distinctive epidemic patterns, yet the underlying cause remained unknown for a long period of time. It caused a number of deaths and forced some people to leave the area.
In 1958, a team of experts was sent to the area by the Ministry of Health to investigate the cases. The epidemic survey conducted proved that the disease was primarily type A botulism, with several cases of type B. The team also discovered that the source of the botulinum was local fermented grain and beans, as well as a raw meat food called mi song hu hu. They promoted the improvement of fermentation techniques among local residents, and thus eliminated the disease.
Canada
From 1985 to 2015 there were outbreaks causing 91 confirmed cases of foodborne botulism in Canada, 85% of which were in Inuit communities, especially Nunavik, as well as First Nations of the coast of British Columbia, following consumption of traditionally prepared marine mammal and fish products.
Ukraine
In 2017, there were 70 cases of botulism with 8 deaths in Ukraine. The previous year there were 115 cases with 12 deaths. Most cases were the result of dried fish, a common local drinking snack.
Vietnam
In 2020, several cases of botulism were reported in Vietnam. All of them were related to a product containing contaminated vegetarian pâté. Some patients were put on life support.
Other susceptible species
Botulism can occur in many vertebrates and invertebrates. Botulism has been reported in such species as rats, mice, chickens, frogs, toads, goldfish, aplysia, squid, crayfish, drosophila, and leeches.
Death from botulism is common in waterfowl; an estimated 10,000 to 100,000 birds die of botulism annually. The disease is commonly called "limberneck". In some large outbreaks, a million or more birds may die. Ducks appear to be affected most often. An enzootic form of duck botulism in the western USA and Canada is known as "western duck sickness". Botulism also affects commercially raised poultry. In chickens, the mortality rate varies from a few birds to 40% of the flock.
Botulism seems to be relatively uncommon in domestic mammals; however, in some parts of the world, epidemics with up to 65% mortality are seen in cattle. The prognosis is poor in large animals that are recumbent.
In cattle, the symptoms may include drooling, restlessness, uncoordination, urine retention, dysphagia, and sternal recumbency. Laterally recumbent animals are usually very close to death. In sheep, the symptoms may include drooling, a serous nasal discharge, stiffness, and incoordination. Abdominal respiration may be observed and the tail may switch on the side. As the disease progresses, the limbs may become paralyzed and death may occur.
Phosphorus-deficient cattle, especially in southern Africa, are inclined to ingest bones and carrion containing clostridial toxins and consequently develop lame sickness or lamsiekte.
The clinical signs in horses are similar to those in cattle. The muscle paralysis is progressive; it usually begins at the hindquarters and gradually moves to the front limbs, neck, and head. Death generally occurs 24 to 72 hours after initial symptoms and results from respiratory paralysis. Some foals are found dead without other clinical signs.
Clostridium botulinum type C toxin has been incriminated as the cause of grass sickness, a condition in horses which occurs in rainy and hot summers in Northern Europe. The main symptom is pharynx paralysis.
Domestic dogs may develop systemic toxemia after consuming C. botulinum type C exotoxin or spores within bird carcasses or other infected meat, but are generally resistant to the more severe effects of Clostridium botulinum type C. Symptoms include flaccid muscle paralysis, which can lead to death due to cardiac and respiratory arrest.
Pigs are relatively resistant to botulism. Reported symptoms include anorexia, refusal to drink, vomiting, pupillary dilation, and muscle paralysis.
In poultry and wild birds, flaccid paralysis is usually seen in the legs, wings, neck, and eyelids. Broiler chickens with the toxicoinfectious form may also have diarrhea with excess urates.
See also
List of foodborne illness outbreaks
References
Further reading
Rao AK, Sobel J, Chatham-Stephens K, Luquez C (May 2021). "Clinical Guidelines for Diagnosis and Treatment of Botulism, 2021" (PDF). MMWR Recomm Rep. 70 (2): 1–30. doi:10.15585/mmwr.rr7002a1. PMC 8112830. PMID 33956777.
External links
Botulism in the United States, 1889–1996. Handbook for Epidemiologists, Clinicians and Laboratory Technicians. Centers for Disease Control and Prevention. National Center for Infectious Diseases, Division of Bacterial and Mycotic Diseases 1998.
NHS choices
CDC Botulism: Control Measures Overview for Clinicians
University of California, Santa Cruz Environmental toxicology – Botulism Archived 9 May 2013 at the Wayback Machine
CDC Botulism FAQ
FDA Clostridium botulinum Bad Bug Book
USGS Avian Botulism Archived 20 October 2018 at the Wayback Machine
Infantile hemangioma | An infantile hemangioma (IH), sometimes called a strawberry mark due to its appearance, is a type of benign vascular tumor or anomaly that affects babies. Other names include capillary hemangioma, strawberry hemangioma, strawberry birthmark, and strawberry nevus; it was formerly known as a cavernous hemangioma. They appear as a red or blue raised lesion on the skin. Typically, they begin during the first four weeks of life, grow until about five months of life, and then shrink in size and disappear over the next few years. Often skin changes remain after they shrink. Complications may include pain, bleeding, ulcer formation, disfigurement, or heart failure. It is the most common tumor of the orbit and periorbital areas in childhood. It may occur in the skin, subcutaneous tissues, and mucous membranes of the oral cavity and lips, as well as in extracutaneous locations including the liver and gastrointestinal tract.
The underlying reason for their occurrence is not clear. In about 10% of cases they appear to run in families. A few cases are associated with other abnormalities such as PHACE syndrome. Diagnosis is generally based on the symptoms and appearance. Occasionally, medical imaging can assist in the diagnosis.
In most cases no treatment is needed, other than close observation. An IH may grow rapidly before stopping and slowly fading. Some are gone by the age of 2, about 60% by 5 years, and 90–95% by 9 years. While this birthmark may be alarming in appearance, physicians generally counsel that it be left to disappear on its own, unless it is in the way of vision or blocking the nostrils. Certain cases, however, may result in problems, and the use of medication such as propranolol or steroids is recommended. Occasionally, surgery or laser treatment may be used.
It is one of the most common benign tumors in babies, occurring in about 5–10% of all births. They occur more frequently in females, whites, preemies, and low birth weight babies. They can occur anywhere on the body, though 83% occur on the head or neck area. The word "hemangioma" comes from the Greek haima (αἷμα) meaning "blood"; angeion (ἀγγεῖον) meaning "vessel"; and -oma (-ωμα) meaning "tumor".
Signs and symptoms
Infantile hemangiomas typically develop in the first few weeks or months of life. They are more common in Caucasians, in premature children whose birth weight is less than 3 pounds (1.4 kg), in females, and in twin births. Early lesions may resemble a red scratch or patch, a white patch, or a bruise. The majority occur on the head and neck, but they can occur almost anywhere. The appearance and color of the IH depend on its location and depth within the level of the skin.
Superficial IHs are situated higher in the skin and have a bright red, erythematous to reddish-purple appearance. Superficial lesions can be flat and telangiectatic, composed of a macule or patch of small, varied branching, capillary blood vessels. They can also be raised and elevated from the skin, forming papules and confluent bright-red plaques like raised islands. Infantile hemangiomas have historically been referred to as "strawberry marks" or "strawberry hemangiomas", as raised superficial hemangiomas can look like the side of a strawberry without seeds, and this remains a common lay term.
Superficial IHs in certain locations, such as the posterior scalp, neck folds, and groin/perianal areas, are at potential risk of ulceration. Ulcerated hemangiomas can present as black crusted papules or plaques, or painful erosions or ulcers. Ulcerations are prone to secondary bacterial infections, which can present with yellow crusting, drainage, pain, or odor. Ulcerations are also at risk for bleeding, particularly deep lesions or in areas of friction. Multiple superficial hemangiomas, more than five, can be associated with extracutaneous hemangiomas, the most common being a liver (hepatic) hemangioma, and these infants warrant ultrasound examination.
Deep IHs present as poorly defined, bluish macules that can proliferate into papules, nodules, or larger tumors. Proliferating lesions are often compressible, but fairly firm. Many deep hemangiomas may have a few superficial capillaries visible over the primary deep component, or surrounding venous prominence. Deep hemangiomas have a tendency to develop a little later than superficial hemangiomas, and may have longer and later proliferative phases as well. Deep hemangiomas rarely ulcerate, but can cause issues depending on their location, size, and growth. Deep hemangiomas near sensitive structures can cause compression of softer surrounding structures during the proliferative phase, such as the external ear canal and the eyelid. Mixed hemangiomas are simply a combination of superficial and deep hemangiomas, and may not be evident for several months. Patients may have any combination of superficial, deep, or mixed IHs.
IHs are often classified as focal/localized, segmental, or indeterminate. Focal IHs appear localized to a specific location and appear to arise from a solitary spot. Segmental hemangiomas are larger, and appear to encompass a region of the body. Larger or segmental hemangiomas that span a large area can sometimes have underlying anomalies that may require investigation, especially when located on the face, sacrum, or pelvis.
Unless ulceration occurs, an IH does not tend to bleed and is not painful. Discomfort may arise if it is bulky and blocks a vital orifice.
Complications
The great majority of IHs are not associated with complications. IHs may break down on the surface, called ulceration, which can be painful and problematic. If the ulceration is deep, significant bleeding and infection may occur on rare occasions. If a hemangioma develops in the larynx, breathing can be compromised. If located near the eye, a growing hemangioma may cause an occlusion or deviation of the eye that can lead to amblyopia. Very rarely, extremely large hemangiomas can cause high-output heart failure due to the amount of blood that must be pumped to excess blood vessels. Lesions adjacent to bone may also cause erosion of the bone.
The most frequent complaints about IHs stem from psychosocial complications. The condition can affect a person's appearance and provoke attention and malicious reactions from others. Particular problems occur if the lip or nose is involved, as distortions can be difficult to treat surgically. The potential for psychological injury develops from school age onward. Considering treatment before school begins is, therefore, important if adequate spontaneous improvement has not occurred. Large IHs can leave visible skin changes secondary to severe stretching that results in altered surface texture.
Large segmental hemangiomas of the head and neck can be associated with a disorder called PHACE syndrome. Large segmental hemangiomas over the lumbar spine can be associated with dysraphism, renal, and urogenital problems in association with a disorder called LUMBAR syndrome. Multiple cutaneous hemangiomas in infants may be an indicator for liver hemangiomas. Screening for liver involvement is often recommended in infants with five or more skin hemangiomas.
Causes
The cause of hemangioma is currently unknown, but several studies have suggested the importance of estrogen signaling in proliferation. Localized soft-tissue hypoxia coupled with increased circulating estrogen after birth may be the stimulus. Researchers have also hypothesized that maternal placental cells embolize to the fetal dermis during gestation, resulting in hemangiomagenesis. However, another group of researchers conducted genetic analyses of single-nucleotide polymorphisms in hemangioma tissue compared to the mother's DNA that contradicted this hypothesis. Other studies have revealed the role of increased angiogenesis and vasculogenesis in the etiology of hemangiomas.
Diagnosis
The majority of IHs can be diagnosed by history and physical examination. In rare cases, imaging (ultrasound with Doppler, magnetic resonance imaging), and/or cytology or histopathology are needed to confirm the diagnosis. IHs are usually absent at birth, or a small area of pallor, telangiectasias, or duskiness may be seen. A fully formed mass at birth usually indicates a diagnosis other than IH. Superficial hemangiomas in the upper dermis have a bright-red strawberry color, whereas those in the deep dermis and subcutis, deep hemangiomas, may appear blue and be firm or rubbery on palpation. Mixed hemangiomas can have both features. A minimally proliferative IH is an uncommon type that presents with fine macular telangiectasias with an occasional bright-red, papular, proliferative component. Minimally proliferative IHs are more common in the lower body.
A precise history of the growth characteristics of the IH can be very helpful in making the diagnosis. In the first 4 to 8 weeks of life, IHs grow rapidly, with primarily volumetric rather than radial growth. This is usually followed by a period of slower growth that can last 6–9 months, with 80% of the growth completed by 3 months. Finally, IHs involute over a period of years. The exceptions to these growth characteristics include minimally proliferative IHs, which do not substantially proliferate, and large, deep IHs in which noticeable growth starts later and lasts longer.
If the diagnosis is not clear based on physical examination and growth history (most often in deep hemangiomas with little cutaneous involvement), then either imaging or histopathology can help confirm the diagnosis. On Doppler ultrasound, an IH in the proliferative phase appears as a high-flow, soft-tissue mass usually without direct arteriovenous shunting. On MRI, IHs show a well-circumscribed lesion with intermediate and increased signal intensity on T1- and T2-weighted sequences, respectively, and strong enhancement after gadolinium injections, with fast-flow vessels. Tissue for diagnosis can be obtained via fine-needle aspiration, skin biopsy, or excisional biopsy. Under the microscope, IHs are unencapsulated aggregates of closely packed, thin-walled capillaries, usually with endothelial lining. Blood-filled vessels are separated by scant connective tissue. Their lumina may be thrombosed and organized. Hemosiderin pigment deposition due to vessel rupture may be observed. The GLUT-1 histochemical marker can be helpful in distinguishing IHs from other items on the differential diagnosis, such as vascular malformations.
Liver
Infantile hemangiomas account for 16% of all liver hemangiomas. They are usually less than 1 to 2 cm in diameter. They may show a "flash-filling" phenomenon, in which there is rapid enhancement of the contrast material in the lesion instead of the slow, centripetal, nodular filling of typical hemangiomas. On CT and MRI, they show rapid filling during the arterial phase, with contrast retention in the venous and delayed phases.
Treatment
Most IHs disappear without treatment, leaving minimal to no visible marks. This may take many years, however, and a proportion of lesions may require some form of therapy. Multidisciplinary clinical practice guidelines for the management of infantile hemangiomas were recently published. Indications for treatment include functional impairment (i.e. visual or feeding compromise), bleeding, potentially life-threatening complications (airway, cardiac, or hepatic disease), and risk of long-term or permanent disfigurement. Large IHs can leave visible skin changes secondary to significant stretching of the skin or alteration of surface texture. When they interfere with vision, breathing, or threaten significant disfigurement (most notably facial lesions, and in particular, nose and lips), they are usually treated. Medical therapies are most effective when used during the period of most significant hemangioma growth, which corresponds to the first 5 months of life. Ulcerated hemangiomas, a subset of lesions requiring therapy, are usually treated by addressing wound care, pain, and hemangioma growth.
Medication
Treatment options for IHs include medical therapies (systemic, intralesional, and topical), surgery, and laser therapy. Prior to 2008, the mainstay of therapy for problematic hemangiomas was oral corticosteroids, which are effective and remain an option for patients in whom beta-blocker therapy is contraindicated or poorly tolerated. Following the serendipitous observation that propranolol, a nonselective beta blocker, is well tolerated and effective for treatment of hemangiomas, the agent was studied in a large, randomized, controlled trial and was approved by the U.S. Food and Drug Administration for this indication in 2014. Oral propranolol is more effective than placebo, observation without intervention, or oral corticosteroids. Propranolol has subsequently become the first-line systemic medical therapy for treatment of these lesions.
Since that time, topical timolol maleate in addition to oral propranolol has become a common therapy for infantile hemangiomas. According to a 2018 Cochrane review, both of these therapies have demonstrated beneficial effects in terms of clearance of hemangiomas without an increase in harms. In addition, no difference was detected between these two agents in their ability to reduce hemangioma size; however, whether a difference in safety exists is not clear. All of these results were based on moderate- to low-quality evidence, thus further randomized, controlled trials with large populations of children are needed to further evaluate these therapies. This review concluded that, for now, no evidence challenges oral propranolol as the standard systemic therapy for treatment of these lesions.
Other systemic therapies which may be effective for IH treatment include vincristine, interferon, and other agents with antiangiogenic properties. Vincristine, which requires central venous access for administration, is traditionally used as a chemotherapy agent, but has been demonstrated to have efficacy against hemangiomas and other childhood vascular tumors, such as kaposiform hemangioendothelioma and tufted angioma. Interferon-alpha 2a and 2b, given by subcutaneous injection, have shown efficacy against hemangiomas, but may result in spastic diplegia in up to 20% of treated children. These agents are rarely used now in the era of beta-blocker therapy.
Intralesional corticosteroid (usually triamcinolone) injection has been used for small, localized hemangiomas, where it has been demonstrated to be relatively safe and effective. Injection of upper eyelid hemangiomas is controversial, given the reported risk of retinal embolization, possibly related to high injection pressures. Topical timolol maleate, a nonselective beta blocker available in a gel-forming solution approved for the treatment of glaucoma, has been increasingly recognized as a safe and effective off-label alternative for treatment of small hemangiomas. It is generally applied two to three times daily.
Surgery
Surgical excision of hemangiomas is rarely indicated, and is limited to lesions that fail medical therapy (or when it is contraindicated), that are located in an area amenable to resection, and in which resection would eventually be necessary anyway, with a similar scar expected regardless of the timing of surgery. Surgery may also be useful for removal of residual fibrofatty tissue (following hemangioma involution) and reconstruction of damaged structures.
Laser
Laser therapy, most often the pulsed dye laser (PDL), plays a limited role in hemangioma management. PDL is most often used for treatment of ulcerated hemangiomas, often in conjunction with topical therapies and wound care, and may speed healing and diminish pain. Laser therapy may also be useful for early superficial IHs (although rapidly proliferating lesions may be more prone to ulceration following PDL treatment), and for the treatment of cutaneous telangiectasias which persist following involution.
Prognosis
In the involution phase, an IH finally begins to diminish in size. While IHs were previously thought to improve by about 10% each year, newer evidence suggests that maximal improvement and involution is typically reached by 3.5 years of age. Most IHs resolve by age 10, but in some patients, the hemangioma does not completely resolve. Residual redness may be noted and can be improved with laser therapy, most commonly PDL. Ablative fractional resurfacing may be considered for textural skin changes. Hemangiomas, especially those that have gotten very large during the growth phase, may leave behind stretched skin or fibrofatty tissue that may be disfiguring or require future surgical correction. Areas of prior ulceration may leave behind permanent scarring.
Additional long-term sequelae stem from the identification of extracutaneous manifestations in association with the IH. For example, a patient with a large facial hemangioma who is found to meet criteria for PHACE syndrome will require potentially ongoing neurologic, cardiac, and/or ophthalmologic monitoring. In cases of IHs that compromise vital structures, symptoms may improve with involution of the hemangioma. For example, respiratory distress would improve with involution of a space-occupying IH involving the airway, and high-output heart failure may lessen with involution of a hepatic hemangioma; ultimately, treatment may be tapered or discontinued. In other cases, such as an untreated eyelid hemangioma, resultant amblyopia does not improve with involution of the cutaneous lesion. For these reasons, infants with infantile hemangiomas should be evaluated by an appropriate clinician during the early proliferative phase so that risk monitoring and treatment can be individualized and outcomes can be optimized.
Terminology
The terminology used to define, describe, and categorize vascular tumors and malformations has changed over time. The term hemangioma was originally used to describe any vascular tumor-like structure, whether it was present at or around birth or appeared later in life. In 1982, Mulliken and Glowacki proposed a new classification system for vascular anomalies which has been widely accepted and adopted by the International Society for the Study of Vascular Anomalies. This classification system was recently updated in 2015. The classification of vascular anomalies is now based upon cellular features, natural history, and clinical behavior of the lesion. Vascular anomalies are divided into vascular tumors/neoplasms which include infantile hemangiomas, and vascular malformations that include entities with enlarged or abnormal vessels such as capillary malformations (port wine stains), venous malformations, and lymphatic malformations. In 2000, GLUT-1, a specific immunohistochemical marker, was found to be positive in IHs and negative in other vascular tumors or malformations. This marker has revolutionized the ability to distinguish between infantile hemangioma and other vascular anomalies.
See also
Hemangioma
List of cutaneous conditions
References
External links
Infantile Hemangiomas: About Strawberry Baby Birthmarks
ISSVA Classification of Vascular Anomalies
Hemangioma Investigator Group
Vascular Birthmarks Foundation | 199 |