<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Paracetamol poisoning and can you explain it in depth? [/INST] Paracetamol poisoning, also known as acetaminophen poisoning, is caused by excessive use of the medication paracetamol (acetaminophen). Most people have few or non-specific symptoms in the first 24 hours following overdose. These include feeling tired, abdominal pain, or nausea. This is typically followed by a couple of days without any symptoms, after which yellowish skin, blood clotting problems, and confusion occur as a result of liver failure. Additional complications may include kidney failure, pancreatitis, low blood sugar, and lactic acidosis. If death does not occur, people tend to recover fully over a couple of weeks. Without treatment, death from toxicity occurs 4 to 18 days later. Paracetamol poisoning can occur accidentally or as an attempt to die by suicide. Risk factors for toxicity include alcoholism, malnutrition, and the taking of certain other hepatotoxic medications. Liver damage results not from paracetamol itself, but from one of its metabolites, N-acetyl-p-benzoquinone imine (NAPQI). NAPQI decreases the liver's glutathione and directly damages cells in the liver. Diagnosis is based on the blood level of paracetamol at specific times after the medication was taken. These values are often plotted on the Rumack-Matthew nomogram to determine the level of concern. Treatment may include activated charcoal if the person seeks medical help soon after the overdose. Attempting to force the person to vomit is not recommended. If there is a potential for toxicity, the antidote acetylcysteine is recommended. The medication is generally given for at least 24 hours. Psychiatric care may be required following recovery. A liver transplant may be required if damage to the liver becomes severe. The need for transplant is often based on low blood pH, high blood lactate, poor blood clotting, or significant hepatic encephalopathy. With early treatment, liver failure is rare. Death occurs in about 0.1% of cases. Paracetamol poisoning was first described in the 1960s. Rates of poisoning vary significantly between regions of the world. In the United States more than 100,000 cases occur a year. In the United Kingdom it is the medication responsible for the greatest number of overdoses. Young children are most commonly affected. In the United States and the United Kingdom, paracetamol is the most common cause of acute liver failure.
Signs and symptoms
The signs and symptoms of paracetamol toxicity occur in three phases. The first phase begins within hours of overdose, and consists of nausea, vomiting, a pale appearance, and sweating. However, patients often have no specific symptoms or only mild symptoms in the first 24 hours of poisoning. Rarely, after massive overdoses, patients may develop symptoms of metabolic acidosis and coma early in the course of poisoning. The second phase occurs between 24 hours and 72 hours following overdose and consists of signs of increasing liver damage. In general, damage occurs in liver cells as they metabolize the paracetamol. The individual may experience right upper quadrant abdominal pain. The increasing liver damage also changes biochemical markers of liver function; the international normalized ratio (INR) and the liver transaminases ALT and AST rise to abnormal levels. Acute kidney failure may also occur during this phase, typically caused by either hepatorenal syndrome or multiple organ dysfunction syndrome. In some cases, acute kidney failure may be the primary clinical manifestation of toxicity. In these cases, it has been suggested that the toxic metabolite is produced more in the kidneys than in the liver. The third phase follows at 3 to 5 days, and is marked by complications of massive liver necrosis leading to fulminant liver failure with complications of coagulation defects, low blood sugar, kidney failure, hepatic encephalopathy, brain swelling, sepsis, multiple organ failure, and death. If the third phase is survived, the liver necrosis runs its course, and liver and kidney function typically return to normal in a few weeks. The severity of paracetamol toxicity varies depending on the dose and whether appropriate treatment is received.
Cause
The toxic dose of paracetamol is highly variable. In general the recommended maximum daily dose for healthy adults is 4 grams. Higher doses lead to increasing risk of toxicity. In adults, single doses above 10 grams or 200 mg/kg of bodyweight, whichever is lower, have a reasonable likelihood of causing toxicity. Toxicity can also occur when multiple smaller doses within 24 hours exceed these levels. Following a dose of 1 gram of paracetamol four times a day for two weeks, patients can expect an increase in alanine transaminase to typically about three times the normal value. It is unlikely that this dose would lead to liver failure. Studies have shown significant hepatotoxicity is uncommon in patients who have taken greater than normal doses over 3 to 4 days. In adults, a dose of 6 grams a day over the preceding 48 hours could potentially lead to toxicity, while in children acute doses above 200 mg/kg could potentially cause toxicity. Acute paracetamol overdose in children rarely causes illness or death, and it is very uncommon for children to have levels that require treatment, with chronic larger-than-normal doses being the major cause of toxicity in children. Intentional overdosing (self-poisoning, with suicidal intent) is frequently implicated in paracetamol toxicity. In a 2006 review, paracetamol was the most frequently ingested compound in intentional overdosing. In rare individuals, paracetamol toxicity can result from normal use. This may be due to individual ("idiosyncratic") differences in the expression and activity of certain enzymes in one of the metabolic pathways that handle paracetamol (see paracetamol's metabolism).
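The "whichever is lower" rule for adult single-dose toxicity can be made concrete with a short sketch. The following Python snippet is illustrative only (the function name is invented, and this is not clinical software); it simply encodes the 10 g / 200 mg/kg thresholds quoted above.

```python
def adult_single_dose_toxicity_threshold_mg(weight_kg: float) -> float:
    """Lower of the two adult single-dose toxicity thresholds quoted above:
    a fixed 10 g cap, or 200 mg per kg of body weight."""
    return min(10_000.0, 200.0 * weight_kg)

# For a 40 kg adult the weight-based limit governs (8 g);
# for an 80 kg adult the fixed 10 g cap governs (200 mg/kg would be 16 g).
print(adult_single_dose_toxicity_threshold_mg(40))  # 8000.0
print(adult_single_dose_toxicity_threshold_mg(80))  # 10000.0
```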
Risk factors
A number of factors can potentially increase the risk of developing paracetamol toxicity. Chronic excessive alcohol consumption can induce CYP2E1, thus increasing the potential toxicity of paracetamol. In one study of patients with liver injury, 64% reported alcohol intakes of greater than 80 grams a day, while 35% took 60 grams a day or less. Whether chronic alcoholism should be considered a risk factor has been debated by some clinical toxicologists. For chronic alcohol users, acute alcohol ingestion at the time of a paracetamol overdose may have a protective effect. For non-chronic alcohol users, acute alcohol consumption had no protective effect.
Fasting is a risk factor, possibly because of depletion of liver glutathione reserves. The concomitant use of the CYP2E1 inducer isoniazid increases the risk of hepatotoxicity, though whether 2E1 induction is related to the hepatotoxicity in this case is unclear. Concomitant use of other drugs that induce CYP enzymes, such as antiepileptics including carbamazepine, phenytoin, and barbiturates, have also been reported as risk factors.
Pathophysiology
When taken in normal therapeutic doses, paracetamol has been shown to be safe. Following a therapeutic dose, it is mostly converted to nontoxic metabolites via Phase II metabolism by conjugation with sulfate and glucuronide, with a small portion being oxidized via the cytochrome P450 enzyme system. Cytochromes P450 2E1 and 3A4 convert approximately 5% of paracetamol to a highly reactive intermediary metabolite, N-acetyl-p-benzoquinone imine (NAPQI). Under normal conditions, NAPQI is detoxified by conjugation with glutathione to form cysteine and mercapturic acid conjugates. In cases of paracetamol overdose, the sulfate and glucuronide pathways become saturated, and more paracetamol is shunted to the cytochrome P450 system to produce NAPQI. As a result, hepatocellular supplies of glutathione become depleted, as the demand for glutathione is higher than its regeneration. NAPQI therefore remains in its toxic form in the liver and reacts with cellular membrane molecules, resulting in widespread hepatocyte damage and death, leading to acute liver necrosis. In animal studies, the liver's stores of glutathione must be depleted to less than about 30% of normal levels before liver toxicity occurs.
Diagnosis
A person's history of taking paracetamol is somewhat accurate for the diagnosis. The most effective way to diagnose poisoning is by obtaining a blood paracetamol level. A drug nomogram developed in 1975, called the Rumack-Matthew nomogram, estimates the risk of toxicity based on the serum concentration of paracetamol at a given number of hours after ingestion. To determine the risk of potential hepatotoxicity, the paracetamol level is traced along the nomogram. Use of a timed serum paracetamol level plotted on the nomogram appears to be the best marker indicating the potential for liver injury. A paracetamol level drawn in the first four hours after ingestion may underestimate the amount in the system because paracetamol may still be in the process of being absorbed from the gastrointestinal tract. Therefore, a serum level taken before 4 hours is not recommended. Clinical or biochemical evidence of liver toxicity may develop in one to four days, although, in severe cases, it may be evident in 12 hours. Right-upper-quadrant tenderness may be present and can aid in diagnosis. Laboratory studies may show evidence of liver necrosis with elevated AST, ALT, bilirubin, and prolonged coagulation times, particularly an elevated prothrombin time. After paracetamol overdose, when AST and ALT exceed 1000 IU/L, paracetamol-induced hepatotoxicity can be diagnosed. In some cases, the AST and ALT levels can exceed 10,000 IU/L.
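The nomogram itself is essentially a line on a semi-log plot of concentration against time. The sketch below assumes the commonly used US "150-line" (150 mg/L at 4 hours, falling by half every 4 hours out to 24 hours); those specific numbers are not stated in this article and are an assumption of the example, which is for illustration only, not clinical use.

```python
def treatment_line_mg_per_l(hours_post_ingestion: float) -> float:
    """Assumed US '150-line': 150 mg/L at 4 h, halving every 4 h thereafter.
    Valid only 4-24 h after a single acute ingestion."""
    if not 4 <= hours_post_ingestion <= 24:
        raise ValueError("nomogram applies only 4-24 h after a single acute ingestion")
    return 150.0 * 2.0 ** (-(hours_post_ingestion - 4.0) / 4.0)

def above_treatment_line(level_mg_per_l: float, hours: float) -> bool:
    return level_mg_per_l >= treatment_line_mg_per_l(hours)

# 80 mg/L at 8 h is above the line (75 mg/L there), so treatment would be indicated;
# the same level at 6 h falls below the line (~106 mg/L).
print(above_treatment_line(80, 8))  # True
print(above_treatment_line(80, 6))  # False
```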
Detection in body fluids
Paracetamol may be quantified in blood, plasma, or urine as a diagnostic tool in clinical poisoning situations or to aid in the medicolegal investigation of suspicious deaths. The concentration in serum after a typical dose of paracetamol usually peaks below 30 mg/L, which equals 200 μmol/L. Levels of 30–300 mg/L (200–2000 μmol/L) are often observed in overdose patients. Postmortem blood levels have ranged from 50 to 400 mg/L in persons dying due to acute overdosage. Automated colorimetric techniques, gas chromatography and liquid chromatography are currently in use for the laboratory analysis of the drug in physiological specimens.
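The paired mg/L and μmol/L figures above are related by paracetamol's molar mass (about 151.16 g/mol for C8H9NO2). A quick arithmetic check of the quoted equivalences:

```python
PARACETAMOL_MOLAR_MASS_G_PER_MOL = 151.16  # C8H9NO2

def mg_per_l_to_umol_per_l(mg_per_l: float) -> float:
    # mg/L divided by g/mol gives mmol/L; multiply by 1000 for umol/L
    return mg_per_l / PARACETAMOL_MOLAR_MASS_G_PER_MOL * 1000.0

print(round(mg_per_l_to_umol_per_l(30)))   # ~198, i.e. the quoted ~200 umol/L
print(round(mg_per_l_to_umol_per_l(300)))  # ~1985, i.e. the quoted ~2000 umol/L
```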
Prevention
Limitation of availability
Limiting the availability of paracetamol tablets has been attempted in some countries. In the UK, sales of over-the-counter paracetamol are restricted to packs of 32 x 500 mg tablets in pharmacies, and 16 x 500 mg tablets in non-pharmacy outlets. Pharmacists may provide up to 100 tablets for those with chronic conditions at the pharmacist's discretion. In Ireland, the limits are 24 and 12 tablets, respectively. Subsequent study suggests that the reduced availability in large numbers had a significant effect in reducing poisoning deaths from paracetamol overdose. One suggested method of prevention is to make paracetamol a prescription-only medicine, or to remove it entirely from the market. However, overdose is a relatively minor problem; for example, 0.08% of the UK population (over 50,000 people) present with paracetamol overdose each year. In contrast, paracetamol is a safe and effective medication that is taken without complications by millions of people. In addition, alternative pain relief medications such as aspirin are more toxic in overdose, whereas non-steroidal anti-inflammatory drugs are associated with more adverse effects following normal use.
Combination with other agents
One strategy for reducing harm done by acetaminophen overdoses is selling paracetamol pre-combined in tablets either with an emetic or an antidote. Paradote was a tablet sold in the UK which combined 500 mg paracetamol with 100 mg methionine, an amino acid formerly used in the treatment of paracetamol overdose.
There have been no studies so far on the effectiveness of paracetamol when given in combination with its most commonly used antidote, acetylcysteine. Calcitriol, the active metabolite of vitamin D3, appears to be a catalyst for glutathione production. Calcitriol was found to increase glutathione levels in rat astrocyte primary cultures on average by 42%, increasing glutathione protein concentrations from 29 nmol/mg to 41 nmol/mg, 24 and 48 hours after administration; it continued to have an influence on glutathione levels 96 hours after administration. It has been proposed that co-administration of calcitriol, via injection, may improve treatment outcomes.
Paracetamol replacements
Paracetamol ester prodrug with L-pyroglutamic acid (PCA), a biosynthetic precursor of glutathione, has been synthesized to reduce paracetamol hepatotoxicity and improve bioavailability. The toxicological studies of different paracetamol esters show that L-5-oxo-pyrrolidine-2-paracetamol carboxylate reduces toxicity after administration of an overdose of paracetamol to mice. The liver glutathione values induced in mice by intraperitoneal injection of the ester are superimposable on the GSH levels recorded in the untreated control group. The group of mice treated with an equivalent dose of paracetamol showed a significant decrease in glutathione of 35% (p<0.01 vs untreated control group). The oral LD50 was found to be greater than 2000 mg/kg, whereas the intraperitoneal LD50 was 1900 mg/kg. These results, taken together with the good hydrolysis and bioavailability data, show that this ester is a potential candidate as a prodrug of paracetamol.
Treatment
Gastric decontamination
In adults, the initial treatment for paracetamol overdose is gastrointestinal decontamination. Paracetamol absorption from the gastrointestinal tract is complete within two hours under normal circumstances, so decontamination is most helpful if performed within this timeframe. Gastric lavage, better known as stomach pumping, may be considered if the amount ingested is potentially life-threatening and the procedure can be performed within 60 minutes of ingestion. Activated charcoal is the most common gastrointestinal decontamination procedure as it adsorbs paracetamol, reducing its gastrointestinal absorption. Administering activated charcoal also poses less risk of aspiration than gastric lavage. It appears that the most benefit from activated charcoal is gained if it is given within 30 minutes to two hours of ingestion. Administering activated charcoal later than 2 hours can be considered in patients who may have delayed gastric emptying due to co-ingested drugs or following ingestion of sustained- or delayed-release paracetamol preparations. Activated charcoal should also be administered if co-ingested drugs warrant decontamination. There was reluctance to give activated charcoal in paracetamol overdose, because of the concern that it may also adsorb the oral antidote acetylcysteine. Studies have shown that 39% less acetylcysteine is absorbed into the body when they are administered together. There are conflicting recommendations regarding whether to change the dosing of oral acetylcysteine after the administration of activated charcoal, and even whether the dosing of acetylcysteine needs to be altered at all. Intravenous acetylcysteine has no interaction with activated charcoal.
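The timing windows above can be summarized as simple decision logic. The sketch below is illustrative pseudocode rather than clinical guidance; the function and its inputs are invented for the example, and the thresholds are exactly those quoted in this section.

```python
def decontamination_options(hours_since_ingestion: float,
                            life_threatening_amount: bool,
                            delayed_absorption_suspected: bool) -> list:
    """Return decontamination measures whose quoted time windows still apply."""
    options = []
    if life_threatening_amount and hours_since_ingestion <= 1.0:
        options.append("gastric lavage (within 60 minutes of ingestion)")
    if hours_since_ingestion <= 2.0:
        options.append("activated charcoal (most benefit within 30 min to 2 h)")
    elif delayed_absorption_suspected:
        # co-ingestants slowing gastric emptying, or sustained/delayed-release forms
        options.append("activated charcoal may still be considered after 2 h")
    return options

print(decontamination_options(0.75, True, False))
# ['gastric lavage (within 60 minutes of ingestion)',
#  'activated charcoal (most benefit within 30 min to 2 h)']
```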
Inducing vomiting with syrup of ipecac has no role in paracetamol overdose because the vomiting it induces delays the effective administration of activated charcoal and oral acetylcysteine. Liver injury is extremely rare after acute accidental ingestion in children under 6 years of age. Children with accidental exposures do not require gastrointestinal decontamination with either gastric lavage, activated charcoal, or syrup of ipecac.
Acetylcysteine
Acetylcysteine, also called N-acetylcysteine or NAC, works to reduce paracetamol toxicity by replenishing body stores of the antioxidant glutathione. Glutathione reacts with the toxic NAPQI metabolite so that it does not damage cells and can be safely excreted. NAC was usually given following a treatment nomogram (one for patients with risk factors, and one for those without) but the use of the nomogram is no longer recommended as the evidence base to support the use of risk factors was poor and inconsistent and many of the risk factors are imprecise and difficult to determine with sufficient certainty in clinical practice. Cysteamine and methionine have also been used to prevent hepatotoxicity, although studies show that both are associated with more adverse effects than acetylcysteine. Additionally, acetylcysteine has been shown to be a more effective antidote, particularly in patients presenting greater than 8 hours post-ingestion and for those who present with liver failure symptoms. If the person presents less than eight hours after paracetamol overdose, then acetylcysteine significantly reduces the risk of serious hepatotoxicity and guarantees survival. If acetylcysteine is started more than 8 hours after ingestion, there is a sharp decline in its effectiveness because the cascade of toxic events in the liver has already begun, and the risk of acute liver necrosis and death increases dramatically. Although acetylcysteine is most effective if given early, it still has beneficial effects if given as late as 48 hours after ingestion. If the person presents more than eight hours after the paracetamol overdose, then activated charcoal is not useful, and acetylcysteine is started immediately. In earlier presentations, charcoal can be given when the patient arrives and acetylcysteine is initiated while waiting for the paracetamol level results to return from the laboratory. In United States practice, intravenous (IV) and oral administration are considered to be equally effective and safe if given within 8 hours of ingestion. However, IV is the only recommended route in Australasian and British practice. Oral acetylcysteine is given as a 140 mg/kg loading dose followed by 70 mg/kg every four hours for 17 more doses, and if the patient vomits within 1 hour of a dose, the dose must be repeated. Oral acetylcysteine may be poorly tolerated due to its unpleasant taste, odor, and its tendency to cause nausea and vomiting. If repeated doses of charcoal are indicated because of another ingested drug, then subsequent doses of charcoal and acetylcysteine should be staggered. Intravenous acetylcysteine is given as a continuous infusion over 20 hours for a total dose of 300 mg/kg. Recommended administration involves infusion of a 150 mg/kg loading dose over 15 to 60 minutes, followed by a 50 mg/kg infusion over four hours; the last 100 mg/kg are infused over the remaining 16 hours of the protocol. Intravenous acetylcysteine has the advantage of shortening hospital stay, increasing both doctor and patient convenience, and allowing administration of activated charcoal to reduce absorption of both the paracetamol and any co-ingested drugs without concerns about interference with oral acetylcysteine. Intravenous dosing varies with weight, specifically in children.
For patients less than 20 kg, the loading dose is 150 mg/kg in 3 mL/kg diluent, administered over 60 minutes; the second dose is 50 mg/kg in 7 mL/kg diluent over 4 hours; and the third and final dose is 100 mg/kg in 14 mL/kg diluent over 16 hours. The most common adverse effect of acetylcysteine treatment is an anaphylactoid reaction, usually manifested by rash, wheeze, or mild hypotension. Adverse reactions are more common in people treated with IV acetylcysteine, occurring in up to 20% of patients. Anaphylactoid reactions are more likely to occur with the first infusion (the loading dose). Rarely, severe life-threatening reactions may occur in predisposed individuals, such as patients with asthma or atopic dermatitis, and may be characterized by respiratory distress, facial swelling, and even death. If an anaphylactoid reaction occurs, the acetylcysteine is temporarily halted or slowed, and antihistamines and other supportive care are administered. For example, a nebulised beta-agonist like salbutamol may be indicated in the event of significant bronchospasm (or prophylactically in patients with a history of bronchospasm secondary to acetylcysteine). It is also important to closely monitor fluids and electrolytes.
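The dose arithmetic in the oral and intravenous protocols described above is easy to get wrong at the bedside, which is one reason it is usually computed rather than done mentally. The sketch below simply encodes the numbers quoted in this section (illustration only, not a dosing tool; verify against local protocols).

```python
def oral_nac_total_mg(weight_kg: float) -> float:
    """140 mg/kg loading dose plus 70 mg/kg every 4 h for 17 further doses."""
    return 140.0 * weight_kg + 17 * 70.0 * weight_kg

def iv_nac_infusions_mg(weight_kg: float) -> list:
    """Three-stage IV protocol quoted above, totalling 300 mg/kg over ~20 h."""
    return [
        ("loading dose over 15-60 min", 150.0 * weight_kg),
        ("infusion over 4 h", 50.0 * weight_kg),
        ("infusion over 16 h", 100.0 * weight_kg),
    ]

# For a 70 kg adult: the full oral course totals 93,100 mg,
# while the IV protocol totals 21,000 mg (300 mg/kg).
print(oral_nac_total_mg(70))                             # 93100.0
print(sum(dose for _, dose in iv_nac_infusions_mg(70)))  # 21000.0
```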
Liver transplant
In people who develop acute liver failure or who are otherwise expected to die from liver failure, the mainstay of management is liver transplantation. Liver transplants are performed in specialist centers. The most commonly used criteria for liver transplant were developed by physicians at King's College Hospital in London. Patients are recommended for transplant if they have an arterial blood pH less than 7.3 after fluid resuscitation, or if a patient has Grade III or IV encephalopathy, a prothrombin time greater than 100 seconds, and a serum creatinine greater than 300 μmol/L in a 24-hour period. Other forms of liver support have been used, including partial liver transplants. These techniques have the advantage of supporting the patient while their own liver regenerates. Once liver function returns, immunosuppressive drugs are commenced, and the patient must take immunosuppressive medication for the rest of their life.
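As summarized above, the King's College criteria reduce to one acid-base criterion or a conjunction of three others. A minimal boolean sketch follows (illustrative only; real transplant decisions involve far more clinical context than these four inputs):

```python
def meets_kings_college_criteria(arterial_ph_after_resuscitation: float,
                                 encephalopathy_grade: int,
                                 prothrombin_time_s: float,
                                 creatinine_umol_per_l: float) -> bool:
    """King's College criteria for paracetamol-induced liver failure, as
    summarized above: pH < 7.3 after fluid resuscitation, OR all three of
    grade III/IV encephalopathy, PT > 100 s, and creatinine > 300 umol/L."""
    if arterial_ph_after_resuscitation < 7.3:
        return True
    return (encephalopathy_grade >= 3
            and prothrombin_time_s > 100
            and creatinine_umol_per_l > 300)

print(meets_kings_college_criteria(7.25, 1, 40, 90))    # True (acidosis alone)
print(meets_kings_college_criteria(7.36, 3, 120, 350))  # True (all three others)
print(meets_kings_college_criteria(7.36, 2, 120, 350))  # False
```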
Prognosis
The mortality rate from paracetamol overdose increases two days after the ingestion, reaches a maximum on day four, and then gradually decreases. Acidosis is the most important single indicator of probable mortality and the need for transplantation. A mortality rate of 95% without transplant was reported in patients who had a documented pH less than 7.30. Other indicators of poor prognosis include chronic kidney disease (stage 3 or worse), hepatic encephalopathy, a markedly elevated prothrombin time, or an elevated blood lactic acid level (lactic acidosis). One study has shown that a factor V level less than 10% of normal indicated a poor prognosis (91% mortality), whereas a ratio of factor VIII to factor V of less than 30 indicated a good prognosis (100% survival). Patients with a poor prognosis are usually identified for likely liver transplantation. Patients who do not die are expected to fully recover and have a normal life expectancy and quality of life.
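The factor V / factor VIII observations quoted above amount to two simple cutoffs from a single study. A sketch of that logic (illustration only; these are independent study observations, not a validated combined score):

```python
def factor_based_prognosis(factor_v_pct: float, factor_viii_pct: float) -> str:
    """Prognostic cutoffs from the single study quoted above.
    Inputs are percentages of normal factor activity."""
    if factor_v_pct < 10:
        return "poor prognosis (91% mortality reported)"
    if (factor_viii_pct / factor_v_pct) < 30:
        return "good prognosis (100% survival reported)"
    return "indeterminate by these criteria"

print(factor_based_prognosis(8, 150))   # poor prognosis (91% mortality reported)
print(factor_based_prognosis(40, 600))  # ratio 15 -> good prognosis (100% survival reported)
```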
Epidemiology
Many over-the-counter and prescription-only medications contain paracetamol. Because of its wide availability paired with its comparatively high toxicity in overdose (compared to ibuprofen and aspirin), there is a much higher potential for overdose </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Acromegaly and can you explain it in depth? [/INST] Acromegaly is a disorder that results from excess growth hormone (GH) after the growth plates have closed. The initial symptom is typically enlargement of the hands and feet. There may also be an enlargement of the forehead, jaw, and nose. Other symptoms may include joint pain, thicker skin, deepening of the voice, headaches, and problems with vision. Complications of the disease may include type 2 diabetes, sleep apnea, and high blood pressure. Acromegaly is usually caused by the pituitary gland producing excess growth hormone. In more than 95% of cases the excess production is due to a benign tumor, known as a pituitary adenoma. The condition is not inherited from a person's parents. Acromegaly is rarely due to a tumor in another part of the body. Diagnosis is by measuring growth hormone after a person has consumed a glucose solution, or by measuring insulin-like growth factor I in the blood. After diagnosis, medical imaging of the pituitary is carried out to determine if an adenoma is present. If excess growth hormone is produced during childhood, the result is the condition gigantism rather than acromegaly, and it is characterized by excessive height. Treatment options include surgery to remove the tumor, medications, and radiation therapy. Surgery is usually the preferred treatment; the smaller the tumor, the more likely surgery will be curative. If surgery is contraindicated or not curative, somatostatin analogues or GH receptor antagonists may be used. Radiation therapy may be used if neither surgery nor medications are completely effective. Without treatment, life expectancy is reduced by 10 years; with treatment, life expectancy is not reduced. Acromegaly affects about 3 per 50,000 people. It is most commonly diagnosed in middle age. Males and females are affected with equal frequency. It was first described in the medical literature by Nicolas Saucerotte in 1772. The term is from the Greek ἄκρον (akron) meaning "extremity", and μέγα (mega) meaning "large".
Signs and symptoms
Features that may result from a high level of GH or expanding tumor include:
Headaches – often severe and prolonged
Soft tissue swelling visibly resulting in enlargement of the hands, feet, nose, lips, and ears, and a general thickening of the skin
Soft tissue swelling of internal organs, notably the heart, with attendant weakening of its muscularity, and the kidneys; also the vocal cords, resulting in a characteristic thick, deep voice and slowing of speech
Generalized expansion of the skull at the fontanelle
Pronounced brow protrusion, often with ocular distension (frontal bossing)
Pronounced lower jaw protrusion (prognathism) with attendant macroglossia (enlargement of the tongue) and teeth spacing
Hypertrichosis, hyperpigmentation and hyperhidrosis may occur in these people.
Skin tags
Carpal tunnel syndrome
Complications
Problems with bones and joints, including osteoarthritis, nerve compression syndrome due to bony overgrowth, and carpal tunnel syndrome
Hypertension
Diabetes mellitus
Cardiomyopathy, potentially leading to heart failure
Colorectal cancer
Sleep apnea
Thyroid nodules and thyroid cancer
Hypogonadism
Compression of the optic chiasm by the growth of pituitary adenoma leading to visual problems
Causes
Pituitary adenoma
About 98% of cases of acromegaly are due to the overproduction of growth hormone by a benign tumor of the pituitary gland called an adenoma. These tumors produce excessive growth hormone and compress surrounding brain tissues as they grow larger. In some cases, they may compress the optic nerves. Expansion of the tumor may cause headaches and visual disturbances. In addition, compression of the surrounding normal pituitary tissue can alter production of other hormones, leading to changes in menstruation and breast discharge in women and impotence in men because of reduced testosterone production. A marked variation in rates of GH production and in the aggressiveness of the tumor occurs. Some adenomas grow slowly and symptoms of GH excess are often not noticed for many years. Other adenomas grow rapidly and invade surrounding brain areas or the sinuses, which are located near the pituitary. In general, younger people tend to have more aggressive tumors. Most pituitary tumors arise spontaneously and are not genetically inherited. Many pituitary tumors arise from a genetic alteration in a single pituitary cell that leads to increased cell division and tumor formation. This genetic change, or mutation, is not present at birth but is acquired during life. The mutation occurs in a gene that regulates the transmission of chemical signals within pituitary cells; it permanently switches on the signal that tells the cell to divide and secrete growth hormone. The events within the cell that cause disordered pituitary cell growth and GH oversecretion currently are the subject of intensive research. Pituitary adenomas and diffuse somatomammotroph hyperplasia may result from somatic mutations activating GNAS, which may be acquired or associated with McCune-Albright syndrome.
Other tumors
In a few people, acromegaly is caused not by pituitary tumors, but by tumors of the pancreas, lungs, and adrenal glands. These tumors also lead to an excess of GH, either because they produce GH themselves or, more frequently, because they produce GHRH (growth hormone-releasing hormone), the hormone that stimulates the pituitary to make GH. In these people, the excess GHRH can be measured in the blood and establishes that the cause of the acromegaly is not due to a pituitary defect. When these nonpituitary tumors are surgically removed, GH levels fall and the symptoms of acromegaly improve. In people with GHRH-producing, non-pituitary tumors, the pituitary still may be enlarged and may be mistaken for a tumor. Therefore, it is important that physicians carefully analyze all "pituitary tumors" removed from people with acromegaly so as to not overlook the possibility that a tumor elsewhere in the body is causing the disorder.
Diagnosis
If acromegaly is suspected, medical laboratory investigations followed by medical imaging if the lab tests are positive confirms or rules out the presence of this condition.
IGF-1 provides the most sensitive lab test for the diagnosis of acromegaly, and a GH suppression test following an oral glucose load, which is a very specific lab test, will confirm the diagnosis following a positive screening test for IGF-1. A single value of the GH is not useful in view of its pulsatility (levels in the blood vary greatly even in healthy individuals). GH levels taken 2 hours after a 75- or 100-gram glucose tolerance test are helpful in the diagnosis: GH levels are suppressed below 1 μg/L in normal people, and levels higher than this cutoff are confirmatory of acromegaly. Other pituitary hormones must be assessed to address the secretory effects of the tumor, as well as the mass effect of the tumor on the normal pituitary gland. They include thyroid stimulating hormone (TSH), gonadotropic hormones (FSH, LH), adrenocorticotropic hormone, and prolactin. An MRI of the brain focusing on the sella turcica after gadolinium administration allows for clear delineation of the pituitary and the hypothalamus and the location of the tumor. A number of other overgrowth syndromes can result in similar problems.
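The two-step biochemical logic described above (a sensitive IGF-1 screen confirmed by failure of GH to suppress below 1 μg/L after an oral glucose load) can be expressed as a minimal sketch. The age- and sex-adjusted IGF-1 upper limit is supplied by the caller here, since reference ranges vary; illustration only, not a diagnostic tool.

```python
def acromegaly_biochemistry_positive(igf1: float,
                                     igf1_upper_limit: float,
                                     gh_2h_post_glucose: float) -> bool:
    """Sensitive IGF-1 screen, confirmed by the specific GH suppression test."""
    screen_positive = igf1 > igf1_upper_limit   # against age/sex-adjusted upper limit
    gh_not_suppressed = gh_2h_post_glucose >= 1.0  # ug/L; normal suppresses below 1
    return screen_positive and gh_not_suppressed

print(acromegaly_biochemistry_positive(620, 280, 3.4))  # True
print(acromegaly_biochemistry_positive(620, 280, 0.3))  # False (GH suppressed normally)
```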
Differential diagnosis
Pseudoacromegaly is a condition with the usual acromegaloid features, but without an increase in growth hormone and IGF-1. It is frequently associated with insulin resistance. Cases have been reported due to minoxidil at an unusually high dose. It can also be caused by a selective post-receptor defect of insulin signalling, leading to the impairment of metabolic, but preservation of mitogenic, signalling.
Treatment
The goals of treatment are to reduce GH production to normal levels thereby reversing or ameliorating the signs and symptoms of acromegaly, to relieve the pressure that the growing pituitary tumor exerts on the surrounding brain areas, and to preserve normal pituitary function. Currently, treatment options include surgical removal of the tumor, drug therapy, and radiation therapy of the pituitary.
Medications
Somatostatin analogues
The primary current medical treatment of acromegaly is to use somatostatin analogues – octreotide (Sandostatin) or lanreotide (Somatuline).
These somatostatin analogues are synthetic forms of a brain hormone, somatostatin, which stops GH production. The long-acting forms of these drugs must be injected every 2 to 4 weeks for effective treatment. Most people with acromegaly respond to this medication. In many people with acromegaly, GH levels fall within one hour and headaches improve within minutes after the injection. Octreotide and lanreotide are effective for long-term treatment. Octreotide and lanreotide have also been used successfully to treat people with acromegaly caused by non-pituitary tumors. Somatostatin analogues are also sometimes used to shrink large tumors before surgery. Because octreotide inhibits gastrointestinal and pancreatic function, long-term use causes digestive problems such as loose stools, nausea, and gas in one third of people. In addition, approximately 25 percent of people with acromegaly develop gallstones, which are usually asymptomatic. In some cases, octreotide treatment can cause diabetes, because somatostatin and its analogues can inhibit the release of insulin. With an aggressive adenoma that cannot be operated on, there may be resistance to octreotide, in which case a second-generation SSA, pasireotide, may be used for tumor control. However, insulin and glucose levels should be carefully monitored, as pasireotide has been associated with hyperglycemia by reducing insulin secretion.
Dopamine agonists
For those who are unresponsive to somatostatin analogues, or for whom they are otherwise contraindicated, it is possible to treat using one of the dopamine agonists, bromocriptine or cabergoline. As tablets rather than injections, they cost considerably less. These drugs can also be used as an adjunct to somatostatin analogue therapy. They are most effective in those whose pituitary tumours cosecrete prolactin. Side effects of these dopamine agonists include gastrointestinal upset, nausea, vomiting, light-headedness when standing, and nasal congestion. These side effects can be reduced or eliminated if medication is started at a very low dose at bedtime, taken with food, and gradually increased to the full therapeutic dose. Bromocriptine lowers GH and IGF-1 levels and reduces tumor size in fewer than half of people with acromegaly. Some people report improvement in their symptoms although their GH and IGF-1 levels still are elevated.
Growth hormone receptor antagonists
The latest development in the medical treatment of acromegaly is the use of growth hormone receptor antagonists. The only available member of this family is pegvisomant (Somavert). By blocking the action of the endogenous growth hormone molecules, this compound is able to control the disease activity of acromegaly in virtually everyone with acromegaly. Pegvisomant has to be administered subcutaneously by daily injections. Combinations of long-acting somatostatin analogues and weekly injections of pegvisomant seem to be as effective as daily injections of pegvisomant.
Surgery
Surgical removal of the pituitary tumor is usually effective in lowering growth hormone levels. Two surgical procedures are available for use. The first is endonasal transsphenoidal surgery, which involves the surgeon reaching the pituitary through an incision in the nasal cavity wall. The wall is reached by passing through the nostrils with microsurgical instruments. The second method is transsphenoidal surgery during which an incision is made into the gum beneath the upper lip. Further incisions are made to cut through the septum to reach the nasal cavity, where the pituitary is located. Endonasal transsphenoidal surgery is a less invasive procedure with a shorter recovery time than the older method of transsphenoidal surgery, and the likelihood of removing the entire tumor is greater with reduced side effects. Consequently, endonasal transsphenoidal surgery is the more common surgical choice. These procedures normally relieve the pressure on the surrounding brain regions and lead to a lowering of GH levels. Surgery is most successful in people with blood GH levels below 40 ng/ml before the operation and with pituitary tumors no larger than 10 mm in diameter. Success depends on the skill and experience of the surgeon. The success rate also depends on what level of GH is defined as a cure. The best measure of surgical success is the normalization of GH and IGF-1 levels. Ideally, GH should be less than 2 ng/ml after an oral glucose load. A review of GH levels in 1,360 people worldwide immediately after surgery revealed that 60% had random GH levels below 5 ng/ml. Complications of surgery may include cerebrospinal fluid leaks, meningitis, or damage to the surrounding normal pituitary tissue, requiring lifelong pituitary hormone replacement. Even when surgery is successful and hormone levels return to normal, people must be carefully monitored for years for possible recurrence. More commonly, hormone levels may improve, but not return completely to normal. These people may then require additional treatment, usually with medications.
Radiation therapy
Radiation therapy has been used both as a primary treatment and combined with surgery or drugs. It is usually reserved for people who have tumor remaining after surgery. These people often also receive medication to lower GH levels. Radiation therapy is given in divided doses over four to six weeks. This treatment lowers GH levels by about 50 percent over 2 to 5 years. People monitored for more than 5 years show significant further improvement. Radiation therapy causes a gradual loss of production of other pituitary hormones with time. Loss of vision and brain injury, which have been reported, are very rare complications of radiation treatments.
Selection of treatment
The initial treatment chosen should be individualized depending on the person's characteristics, such as age and tumor size. If the tumor has not yet invaded surrounding brain tissues, removal of the pituitary adenoma by an experienced neurosurgeon is usually the first choice. After surgery, a person must be monitored long-term for increasing GH levels. If surgery does not normalize hormone levels or a relapse occurs, a doctor will usually begin additional drug therapy. The current first choice is generally octreotide or lanreotide; however, bromocriptine and cabergoline are both cheaper and easier to administer. With all of these medications, long-term therapy is necessary, because their withdrawal can lead to rising GH levels and tumor re-expansion. Radiation therapy is generally used for people whose tumors are not completely removed by surgery, for people who are not good candidates for surgery because of other health problems, and for people who do not respond adequately to surgery and medication.
Prognosis
Life expectancy of people with acromegaly is dependent on how early the disease is detected. Life expectancy after the successful treatment of early disease is equal to that of the general population. Acromegaly can often go on for years before diagnosis, resulting in poorer outcome, and it is suggested that the better the growth hormone is controlled, the better the outcome. Upon successful surgical treatment, headaches and visual symptoms tend to resolve. One exception is sleep apnea, which is present in around 70% of cases, but does not tend to resolve with successful treatment of growth hormone level. While hypertension is a complication of 40% of cases, it typically responds well to regular regimens of blood pressure medication. Diabetes that occurs with acromegaly is treated with the typical medications, but successful lowering of growth hormone levels often alleviates symptoms of diabetes. Hypogonadism without gonad destruction is reversible with treatment. Acromegaly is associated with a slightly elevated risk of cancer.
Notable people
Famous people with acromegaly include:
Daniel Cajanus (1704–1749), "The Finnish Goliath". His natural-size portrait and femur are still extant. His height was estimated to have been 247 cm (8 ft 1 in).
Mary Ann Bevan (1874–1933), an English woman, who after developing acromegaly, toured the sideshow circuit as "the ugliest woman in the world".
André the Giant (born André Roussimoff, 1946–1993), French professional wrestler and actor
Salvatore Baccaro (1932–1984), Italian character actor. Active in B-movies, comedies, and horrors because of his peculiar features and spontaneous sympathy
Paul Benedict (1938–2008), American actor. Best known for portraying Harry Bentley, the Jeffersons' English next-door neighbour
Big Show (born Paul Wight; 1972), American professional wrestler and actor, had his pituitary tumor removed in 1991.
Eddie Carmel, born Oded Ha-Carmeili (1936–1972), Israeli-born entertainer with gigantism and acromegaly, popularly known as "The Jewish Giant"
Ted Cassidy (1932–1979), American actor. Best known for portraying Lurch in the TV sitcom The Addams Family
Rondo Hatton (1894–1946), American journalist and actor. A Hollywood favorite in B-movie horror films of the 1930s and 1940s. Hatton's disfigurement, due to acromegaly, developed over time, beginning during his service in World War I.
Sultan Kösen, the world's tallest man
Maximinus Thrax, Roman emperor (c. 173, reigned 235–238). Descriptions, as well as depictions, indicate acromegaly, though remains of his body are yet to be found.
The Great Khali (born Dalip Singh Rana), Indian professional wrestler, is best known for his tenure with WWE. He had his pituitary tumor removed in 2012 at age 39.
Primo Carnera (1906–1967), nicknamed the Ambling Alp, was an Italian professional boxer and wrestler who reigned as the boxing World Heavyweight Champion from 29 June 1933 to 14 June 1934.
Maurice Tillet (1903–1954), Russian-born French professional wrestler, is better known by his ring name, the French Angel.
Richard Kiel (1939–2014), actor, "Jaws" from two James Bond movies and Mr. Larson in Happy Gilmore
Pío Pico, the last Mexican Governor of California (1801–1894), manifested acromegaly without gigantism between at least 1847 and 1858. Some time after 1858, signs of the growth hormone-producing tumor disappeared along with all the secondary effects the tumor had caused in him. He looked normal in his 90s. His remarkable recovery is likely an example of spontaneous selective pituitary tumor apoplexy.
Earl Nightingale (March 12, 1921 – March 25, 1989) was an American radio speaker and author, dealing mostly with the subjects of human character development, motivation, and meaningful existence. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Actinic keratosis and can you explain it in depth? [/INST] Actinic keratosis (AK), sometimes called solar keratosis or senile keratosis, is a pre-cancerous area of thick, scaly, or crusty skin. Actinic keratosis is a disorder (-osis) of epidermal keratinocytes that is induced by ultraviolet (UV) light exposure (actin-). These growths are more common in fair-skinned people and those who are frequently in the sun. They are believed to form when skin gets damaged by UV radiation from the sun or indoor tanning beds, usually over the course of decades. Given their pre-cancerous nature, if left untreated, they may turn into a type of skin cancer called squamous cell carcinoma. Untreated lesions have up to a 20% risk of progression to squamous cell carcinoma, so treatment by a dermatologist is recommended.
Actinic keratoses characteristically appear as thick, scaly, or crusty areas that often feel dry or rough. Size commonly ranges between 2 and 6 millimeters, but they can grow to be several centimeters in diameter. Notably, AKs are often felt before they are seen, and the texture is sometimes compared to sandpaper. They may be dark, light, tan, pink, red, a combination of all these, or have the same color as the surrounding skin.
Given the causal relationship between sun exposure and AK growth, they often appear on a background of sun-damaged skin and in areas that are commonly sun-exposed, such as the face, ears, neck, scalp, chest, backs of hands, forearms, or lips. Because sun exposure is rarely limited to a small area, most people who have an AK have more than one. If clinical examination findings are not typical of AK and the possibility of in situ or invasive squamous cell carcinoma (SCC) cannot be excluded based on clinical examination alone, a biopsy or excision can be considered for definitive diagnosis by histologic examination of the lesional tissue. Multiple treatment options for AK are available. Photodynamic therapy (PDT) is one option for the treatment of numerous AK lesions in a region of the skin, termed field cancerization. It involves the application of a photosensitizer to the skin followed by illumination with a strong light source. Topical creams, such as 5-fluorouracil or imiquimod, may require daily application to affected skin areas over a typical time course of weeks.
Cryotherapy is frequently used for few and well-defined lesions, but undesired skin lightening, or hypopigmentation, may occur at the treatment site. By following up with a dermatologist, AKs can be treated before they progress to skin cancer. If cancer does develop from an AK lesion, it can be caught early with close monitoring, at a time when treatment is likely to have a high cure rate.
Signs and symptoms
Actinic keratoses (AKs) most commonly present as a white, scaly plaque of variable thickness with surrounding redness; they are most notable for having a sandpaper-like texture when felt with a gloved hand. Skin near the lesion often shows evidence of solar damage characterized by notable pigmentary alterations, being yellow or pale in color with areas of hyperpigmentation; deep wrinkles, coarse texture, purpura and ecchymoses, dry skin, and scattered telangiectasias are also characteristic. Photoaging leads to an accumulation of oncogenic changes, resulting in a proliferation of mutated keratinocytes that can manifest as AKs or other neoplastic growths. With years of sun damage, it is possible to develop multiple AKs in a single area on the skin. This condition is termed field cancerization.
The lesions are usually asymptomatic, but can be tender, itch, bleed, or produce a stinging or burning sensation. AKs are typically graded in accordance with their clinical presentation: Grade I (easily visible, slightly palpable), Grade II (easily visible, palpable), and Grade III (frankly visible and hyperkeratotic).
Variants
Actinic keratoses can have various clinical presentations, often characterized as follows:
Classic (or common): Classic AKs present as white, scaly macules, papules or plaques of various thickness, often with surrounding erythema. They are usually 2–6 mm in diameter but can sometimes reach several centimeters in diameter.
Hypertrophic (or hyperkeratotic): Hypertrophic AKs (HAKs) appears as a thicker scale or rough papule or plaque, often adherent to an erythematous base. Classic AKs can progress to become HAKs, and HAKs themselves can be difficult to distinguish from malignant lesions.
Atrophic: Atrophic AKs lack an overlying scale, and therefore appear as a nonpalpable change in color (or macule). They are often smooth and red and are less than 10 mm in diameter.
AK with cutaneous horn: A cutaneous horn is a keratinic projection with its height at least one-half of its diameter, often conical in shape. They can be seen in the setting of actinic keratosis as a progression of an HAK, but are also present in other skin conditions. 38–40% of cutaneous horns represent AKs.
Pigmented AK: Pigmented AKs are rare variants that often present as macules or plaques that are tan to brown in color. They can be difficult to distinguish from a solar lentigo or lentigo maligna.
Actinic cheilitis: When an AK forms on the lip, it is called actinic cheilitis. This usually presents as a rough, scaly patch on the lip, often accompanied by the sensation of dry mouth and symptomatic splitting of the lips.
Bowenoid AK: Usually presents as a solitary, erythematous, scaly patch or plaque with well-defined borders. Bowenoid AKs are differentiated from Bowen's disease by degree of epithelial involvement as seen on histology. The presence of ulceration, nodularity, or bleeding should raise concern for malignancy. Specifically, clinical findings suggesting an increased risk of progression to squamous cell carcinoma can be recognized as "IDRBEU": I (induration/inflammation), D (diameter > 1 cm), R (rapid enlargement), B (bleeding), E (erythema), and U (ulceration). AKs are usually diagnosed clinically, but because they are difficult to clinically differentiate from squamous cell carcinoma, any concerning features warrant biopsy for diagnostic confirmation.
Causes
The most important cause of AK formation is solar radiation, through a variety of mechanisms. Mutation of the p53 tumor suppressor gene, induced by UV radiation, has been identified as a crucial step in AK formation. This tumor suppressor gene, located on chromosome 17p13.1, allows for cell cycle arrest when DNA or RNA is damaged. Dysregulation of the p53 pathway can thus result in unchecked replication of dysplastic keratinocytes, thereby serving as a source of neoplastic growth and the development of AK, as well as possible progression from AK to skin cancer. Other molecular markers that have been associated with the development of AK include the expression of p16ink4, p14, the CD95 ligand, TNF-related apoptosis-inducing ligand (TRAIL) and TRAIL receptors, and loss of heterozygosity. Evidence also suggests that the human papillomavirus (HPV) plays a role in the development of AKs. The HPV virus has been detected in AKs, with measurable HPV viral loads (one HPV-DNA copy per less than 50 cells) measured in 40% of AKs. Similar to UV radiation, higher levels of HPV found in AKs reflect enhanced viral DNA replication. This is suspected to be related to the abnormal keratinocyte proliferation and differentiation in AKs, which facilitate an environment for HPV replication. This in turn may further stimulate the abnormal proliferation that contributes to the development of AKs and carcinogenesis.
Ultraviolet radiation
It is thought that ultraviolet (UV) radiation induces mutations in the keratinocytes of the epidermis, promoting the survival and proliferation of these atypical cells. Both UV-A and UV-B radiation have been implicated as causes of AKs. UV-A radiation (wavelength 320–400 nm) reaches more deeply into the skin and can lead to the generation of reactive oxygen species, which in turn can damage cell membranes, signaling proteins, and nucleic acids. UV-B radiation (wavelength 290–320 nm) causes thymidine dimer formation in DNA and RNA, leading to significant cellular mutations. In particular, mutations in the p53 tumor suppressor gene have been found in 30–50% of AK lesion skin samples. UV radiation has also been shown to cause elevated inflammatory markers such as arachidonic acid, as well as other molecules associated with inflammation. Eventually, over time these changes lead to the formation of AKs. Several predictors for increased AK risk from UV radiation have been identified:
Extent of sun exposure: Cumulative sun exposure leads to an increased risk for development of AKs. In one U.S. study, AKs were found in 55% of fair-skinned men with high cumulative sun exposure, and in only 19% of fair-skinned men with low cumulative sun exposure in an age-matched cohort (the corresponding percentages for women in this same study were 37% and 12%, respectively). Furthermore, the use of sunscreen (SPF 17 or higher) has been found to significantly reduce the development of AK lesions, and also promotes the regression of existing lesions.
History of sunburn: Studies show that even a single episode of painful sunburn as a child can increase an individual's risk of developing AK as an adult. Six or more painful sunburns over the course of a lifetime was found to be significantly associated with the likelihood of developing AK.
Skin pigmentation
Melanin is a pigment in the epidermis that functions to protect keratinocytes from the damage caused by UV radiation; it is found in higher concentrations in the epidermis of darker-skinned individuals, affording them protection against the development of AKs.
Fair-skinned individuals have a significantly increased risk of developing AKs when compared to olive-skinned individuals (odds ratios of 14.1 and 6.5, respectively), and AKs are uncommon in dark-skinned people of African descent. Other phenotypic features seen in fair-skinned individuals that are associated with an increased propensity to develop AKs include:
Freckling
Light hair and eye color
Propensity to sunburn
Inability to tan
Other risk factors
Immunosuppression: People with a compromised immune system from medical conditions (such as AIDS) or immunosuppressive therapy (such as chronic immunosuppression after organ transplantation, or chemotherapy for cancer) are at increased risk for developing AKs. They may develop AK at an earlier age or have an increased number of AK lesions compared to immunocompetent people.
Human papillomavirus (HPV): The role of HPV in the development of AK remains unclear, but evidence suggests that infection with the betapapillomavirus type of HPV may be associated with an increased likelihood of AK.
Genodermatoses: Certain genetic disorders interfere with DNA repair after sun exposure, thereby putting these individuals at higher risk for the development of AKs. Examples of such genetic disorders include xeroderma pigmentosum and Bloom syndrome.
Balding: AKs are commonly found on the scalps of balding men. The degree of baldness seems to be a risk factor for lesion development, as men with severe baldness were found to be seven times more likely to have 10 or more AKs when compared to men with minimal or no baldness. This observation can be explained by an absence of hair causing a larger proportion of scalp to be exposed to UV radiation if other sun protection measures are not taken.
Diagnosis
Physicians usually diagnose actinic keratosis by doing a thorough physical examination, through a combination of visual observation and touch. However, a biopsy may be necessary when the keratosis is large in diameter, thick, or bleeding, in order to make sure that the lesion is not a skin cancer. Actinic keratosis may progress to invasive squamous cell carcinoma (SCC) but both diseases can present similarly upon physical exam and can be difficult to distinguish clinically. Histological examination of the lesion from a biopsy or excision may be necessary to definitively distinguish AK from in situ or invasive SCC. In addition to SCCs, AKs can be mistaken for other cutaneous lesions including seborrheic keratoses, basal cell carcinoma, lichenoid keratosis, porokeratosis, viral warts, erosive pustular dermatosis of the scalp, pemphigus foliaceus, inflammatory dermatoses like psoriasis, or melanoma.
Biopsy
A lesion biopsy is performed if the diagnosis remains uncertain after a clinical physical exam, or if there is suspicion that the AK might have progressed to squamous cell carcinoma. The most common tissue sampling techniques include shave or punch biopsy. When only a portion of the lesion can be removed due to its size or location, the biopsy should sample tissue from the thickest area of the lesion, as SCCs are most likely to be detected in that area.
If a shave biopsy is performed, it should extend through to the level of the dermis in order to provide sufficient tissue for diagnosis; ideally, it would extend to the mid-reticular dermis. Punch biopsy usually extends to the subcutaneous fat when the entire length of the punch blade is utilized.
Histopathology
On histologic examination, actinic keratoses usually show a collection of atypical keratinocytes with hyperpigmented or pleomorphic nuclei, extending to the basal layer of the epidermis. A "flag sign" is often described, referring to alternating areas of orthokeratosis and parakeratosis. Epidermal thickening and surrounding areas of sun-damaged skin are often seen. The normal ordered maturation of the keratinocytes is disordered to varying degrees: there may be widening of the intercellular spaces, cytologic atypia such as abnormally large nuclei, and a mild chronic inflammatory infiltrate. Specific findings depend on the clinical variant and particular lesion characteristics. The seven major histopathologic variants are all characterized by atypical keratinocytic proliferation beginning in the basal layer and confined to the epidermis; they include:
Hypertrophic: Notable for marked hyperkeratosis, often with evident parakeratosis. Keratinocytes in the stratum malpighii may show a loss of polarity, pleomorphism, and anaplasia. Some irregular downward proliferation into the uppermost dermis may be observed, but does not represent frank invasion.
Atrophic: With slight hyperkeratosis and overall atrophic changes to the epidermis; the basal layer shows cells with large, hyperchromatic nuclei in close proximity to each other. These cells have been observed to proliferate into the dermis as buds and duct-like structures.
Lichenoid: Demonstrate a band-like lymphocytic infiltrate in the papillary dermis, directly beneath the dermal-epidermal junction.
Acantholytic: Intercellular clefts or lacunae in the lowermost epidermal layer that result from anaplastic changes; these produce dyskeratotic cells with disrupted intercellular bridges.
Bowenoid: This term is controversial and usually refers to full-thickness atypia, microscopically indistinguishable from Bowen's disease. However, most dermatologists and pathologists use it in reference to tissue samples notable for small foci of atypia involving the full thickness of the epidermis, in the background of a lesion that is otherwise consistent with an AK.
Epidermolytic: With granular degeneration.
Pigmented: Show pigmentation in the basal layer of the epidermis, similar to a solar lentigo.
Dermoscopy
Dermoscopy is a noninvasive technique utilizing a handheld magnifying device coupled with a transilluminating light source. It is often used in the evaluation of cutaneous lesions but lacks the definitive diagnostic ability of biopsy-based tissue diagnosis; histopathologic examination remains the gold standard.
Polarized contact dermoscopy of AKs occasionally reveals a "rosette sign," described as four white points arranged in a clover pattern, often localized to within a follicular opening. It is hypothesized that the "rosette sign" corresponds histologically to the changes of orthokeratosis and parakeratosis known as the "flag sign."
Non-pigmented AKs: linear or wavy vascular patterning, or a "strawberry pattern," described as unfocused vessels between hair follicles, with white-haloed follicular openings.
Pigmented AKs: gray to brown dots or globules surrounding follicular openings, and annular-granular rhomboidal structures; often difficult to differentiate from lentigo maligna.
Prevention
Ultraviolet radiation is believed to contribute to the development of actinic keratoses by inducing mutations in epidermal keratinocytes, leading to proliferation of atypical cells. Therefore, preventive measures for AKs are targeted at limiting exposure to solar radiation, including:
Limiting extent of sun exposure
Avoiding sun exposure during the midday hours between 10:00 AM and 2:00 PM, when UV light is most powerful
Minimizing all time in the sun, since UV exposure occurs even in winter and on cloudy days
Using sun protection
Applying sunscreen with an SPF rating of 30 or greater that also blocks both UVA and UVB light, reapplying at least every 2 hours and after swimming or sweating
Applying sunscreen at least 15 minutes before going outside, as this allows time for the sunscreen to be absorbed appropriately by the skin
Wearing sun protective clothing such as hats, sunglasses, long-sleeved shirts, long skirts, or trousers
Recent research implicating human papillomavirus (HPV) in the development of AKs suggests that HPV prevention might in turn help prevent development of AKs, as UV-induced mutations and oncogenic transformation are likely facilitated in cases of active HPV infection. A key component of HPV prevention is vaccination, and the CDC currently recommends routine vaccination in all children at age 11 or 12.
There is some evidence that, in individuals with a history of non-melanoma skin cancer, a low-fat diet can serve as a preventive measure against future actinic keratoses.
Management
There are a variety of treatment options for AK depending on the patient and the clinical characteristics of the lesion. AKs show a wide range of features, which guide decision-making in choosing treatment. As there are multiple effective treatments, patient preference and lifestyle are also factors that physicians consider when determining the management plan for actinic keratosis. Regular follow-up is advisable after any treatment to make sure no new lesions have developed and that old ones are not progressing. Adding topical treatment after a procedure may improve outcomes.
Medication
Topical medications are often recommended for areas where multiple or ill-defined AKs are present, as the medication can easily be used to treat a relatively large area.
Fluorouracil cream
Topical fluorouracil (5-FU) destroys AKs by inhibiting thymidylate synthase, thereby interrupting DNA and RNA synthesis. This in turn prevents the proliferation of dysplastic cells in AK. Topical 5-FU is the most utilized treatment for AK, and often results in effective removal of the lesion. Overall, about 50% of patients treated with topical 5-FU achieve complete clearance of their AKs, and 5-FU may be up to 90% effective in treating non-hyperkeratotic lesions. While topical 5-FU is a widely used and cost-effective treatment for AKs and is generally well tolerated, its potential side effects include pain, crusting, redness, and local swelling. These adverse effects can be mitigated or minimized by reducing the frequency of application or taking breaks between uses. The most commonly used application regimen consists of applying a layer of topical cream to the lesion twice a day after washing; duration of treatment is typically 2–4 weeks for thinner skin such as the cheeks and up to 8 weeks for the arms; treatment of up to 8 weeks has demonstrated a higher cure rate.
Imiquimod cream
Imiquimod is a topical immune-enhancing agent licensed for the treatment of genital warts. Imiquimod stimulates the immune system through the release and up-regulation of cytokines. Treatment with imiquimod cream applied 2–3 times per week for 12 to 16 weeks was found to result in complete resolution of AKs in 50% of people, compared to 5% of controls. The imiquimod 3.75% cream has been validated in a treatment regimen consisting of daily application to the entire face and scalp for two 2-week treatment cycles, with a complete clearance rate of 36%.
While the clearance rate observed with the imiquimod 3.75% cream was lower than that observed with the 5% cream (36% and 50%, respectively), the 3.75% cream has lower reported rates of adverse reactions: 19% of individuals using it reported adverse reactions including local erythema, scabbing, and flaking at the application site, while nearly a third of individuals using the 5% cream reported the same types of reactions. However, it is ultimately difficult to compare the efficacy of the different strength creams directly, as current study data vary in methodology (e.g., duration and frequency of treatment, and amount of skin surface area covered).
Ingenol mebutate gel
Ingenol mebutate is a newer treatment for AK used in Europe and the United States. It works in two ways: first by disrupting cell membranes and mitochondria, resulting in cell death, and then by inducing antibody-dependent cellular cytotoxicity to eliminate remaining tumor cells. A 3-day treatment course with the 0.015% gel is recommended for the scalp and face, while a 2-day treatment course with the 0.05% gel is recommended for the trunk and extremities. Treatment with the 0.015% gel was found to completely clear 57% of AKs, while the 0.05% gel had a 34% clearance rate. Advantages of ingenol mebutate treatment include the short duration of therapy and a low recurrence rate. Local skin reactions including pain, itching, and redness can be expected during treatment. The drug was derived from the petty spurge, Euphorbia peplus, which has been used as a traditional remedy for keratosis.
Diclofenac sodium gel
Topical diclofenac sodium gel contains a nonsteroidal anti-inflammatory drug thought to work in the treatment of AK through inhibition of the arachidonic acid pathway, thereby limiting the production of prostaglandins, which are thought to be involved in the development of UVB-induced skin cancers. Recommended duration of therapy is 60 to 90 days with twice-daily application. Treatment of facial AK with diclofenac gel led to complete lesion resolution in 40% of cases. Common side effects include dryness, itching, redness, and rash at the site of application.
Retinoids
Topical retinoids have been studied in the treatment of AK with modest results, and the American Academy of Dermatology does not currently recommend them as first-line therapy. Treatment with adapalene gel daily for 4 weeks, and then twice daily thereafter for a total of nine months, led to a significant but modest reduction in the number of AKs compared to placebo; it demonstrated the additional advantage of improving the appearance of photodamaged skin.
Topical tretinoin is ineffective as a treatment for reducing the number of AKs. For secondary prevention of AK, systemic low-dose acitretin was found to be safe, well tolerated, and moderately effective in chemoprophylaxis for skin cancers in kidney transplant patients; according to expert opinion, acitretin is a viable treatment option for organ transplant patients.
Tirbanibulin
Tirbanibulin (Klisyri) was approved for medical use in the United States in December 2020, for the treatment of actinic keratosis on the face or scalp.
Procedures
Cryotherapy
Liquid nitrogen (−195.8 °C) is the most commonly used destructive therapy for the treatment of AK in the United States. It is a well-tolerated office procedure that does not require anesthesia.
Cryotherapy is particularly indicated for cases with fewer than 15 thin, well-demarcated lesions. Caution is encouraged for thicker, more hyperkeratotic lesions, as dysplastic cells may evade treatment; a combination of cryotherapy and field treatment can be considered for these more advanced lesions. Cryotherapy is generally performed using an open-spray technique, wherein the AK is sprayed for several seconds.
The process can be repeated multiple times in one office visit, as tolerated. Cure rates from 67 to 99 percent have been reported, depending on freeze time and lesion characteristics. Disadvantages include discomfort during and after the procedure; blistering, scarring, and redness; hypo- or hyperpigmentation; and destruction of healthy tissue.
Photodynamic therapy
AKs are one of the most common dermatologic lesions for which photodynamic therapy, including topical methyl aminolevulinate (MAL) or 5-aminolevulinic acid (5-ALA), is indicated.
Treatment begins with preparation of the lesion, which includes scraping away scales and crusts using a dermal curette. A thick layer of topical MAL or 5-ALA cream is applied to the lesion and a small area surrounding the lesion, which is then covered with an occlusive dressing and left for a period of time. During this time the photosensitizer accumulates in the target cells within the AK lesion. The dressings are then removed and the lesion is treated with light at a specified wavelength.
Multiple treatment regimens using different photosensitizers, incubation times, light sources, and pretreatment regimens have been studied and suggest that longer incubation times lead to higher rates of lesion clearance. Photodynamic therapy is gaining in popularity. It has been found to have a 14% higher likelihood of achieving complete lesion clearance at 3 months compared to cryotherapy, and seems to result in superior cosmetic outcomes when compared to cryotherapy or 5-FU treatment. Photodynamic therapy can be particularly effective in treating areas with multiple AK lesions.
Surgical techniques
Surgical excision: Excision should be reserved for cases when the AK is a thick, horny papule, or when deeper invasion is suspected and histopathologic diagnosis is necessary. It is a rarely utilized technique for AK treatment.
Shave excision and curettage (sometimes followed by electrodesiccation when deemed appropriate by the physician): This technique is often used for treatment of AKs, particularly for lesions that appear more similar to squamous cell carcinoma or that are unresponsive to other treatments. The surface of the lesion can be scraped away using a scalpel, or the base can be removed with a curette. The tissue can be evaluated histopathologically under the microscope, but specimens acquired using this technique are often not adequate to determine whether a lesion is invasive or intraepidermal.
Dermabrasion: Dermabrasion is useful in the treatment of large areas with multiple AK lesions. The process involves using a hand-held instrument to "sand" the skin, removing the stratum corneum layer of the epidermis. Diamond fraises or wire brushes revolving at high speeds are used. The procedure can be quite painful and requires procedural sedation and anesthetic, necessitating a hospital stay. One-year clearance rates with dermabrasion treatment are as high as 96%, but diminish drastically to 54% at five years.
Laser therapy
Laser therapy using carbon dioxide (CO2) or erbium:yttrium aluminum garnet (Er:YAG) lasers is a treatment approach being utilized with increasing frequency, sometimes in conjunction with computer scanning technology. Laser therapy has not been extensively studied, but current evidence suggests it may be effective in cases involving multiple AKs refractory to medical therapy, or AKs located in cosmetically sensitive locations such as the face. The CO2 laser has been recommended for extensive actinic cheilitis that has not responded to 5-FU.
Chemical peels
A chemical peel is a topically applied agent that wounds the outermost layer of the skin, promoting organized repair, exfoliation, and eventually the development of smooth and rejuvenated skin. Multiple therapies have been studied. A medium-depth peel may effectively treat multiple non-hyperkeratotic AKs. It can be achieved with 35% to 50% trichloroacetic acid (TCA) alone or at 35% in combination with Jessner's solution; 70% glycolic acid (α-hydroxy acid); or solid CO2. When compared to treatment with 5-FU in a once-daily application for a minimum of 3 weeks, chemical peels have demonstrated similar efficacy and increased ease of use with similar morbidity. Chemical peels must be performed in a controlled clinic environment and are recommended only for individuals who are able to comply with follow-up precautions, including avoidance of sun exposure. Furthermore, they should be avoided in individuals with a history of HSV infection or keloids, and in those who are immunosuppressed or who are taking photosensitizing medications.
Prognosis
Untreated AKs follow one of three paths: they can persist as AKs, regress, or progress to invasive skin cancer, as AK lesions are considered to lie on the same disease continuum as squamous cell carcinoma (SCC). AK lesions that regress also have the potential to recur.
Progression: The overall risk of an AK turning into invasive cancer is low. In average-risk individuals, the likelihood of an AK lesion progressing to SCC is less than 1% per year (a rough illustration of how this small annual risk compounds follows this list). Despite this low rate of progression, studies suggest that approximately 60% of SCCs arise from pre-existing AKs, reinforcing the idea that the two lesions are closely related.
Regression: Reported regression rates for single AK lesions have ranged between 15 and 63% after one year.
Recurrence: Recurrence rates after 1 year for single AK lesions that have regressed range between 15 and 53%.
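As a rough arithmetic illustration (not from the source), the following Python sketch shows how a small constant annual progression risk compounds over time; the assumed 1% independent annual rate is taken as the upper bound quoted above, and real per-lesion risk varies and lesions may also regress.

# Hypothetical compounding of a constant 1% annual progression risk.
annual_risk = 0.01  # assumed upper bound; "less than 1% per year" per the text
for years in (1, 5, 10, 20):
    cumulative = 1 - (1 - annual_risk) ** years  # chance of progressing at least once
    print(f"{years:>2} years: {cumulative:.1%}")
# Prints roughly 1.0%, 4.9%, 9.6%, and 18.2%, which is one reason long-term
# follow-up is advised even though the annual risk is small.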
Clinical course
Given the differing clinical outcomes described above, it is difficult to predict the clinical course of any given actinic keratosis. AK lesions may also come and go, in a cycle of appearing on the skin, remaining for months, and then disappearing. Often they reappear in a few weeks or months, particularly after unprotected sun exposure. Left untreated, there is a chance that a lesion will advance to become invasive. Although it is difficult to predict whether an AK will advance to squamous cell carcinoma, it has been noted that squamous cell carcinomas originate in lesions formerly diagnosed as AKs with frequencies reported between 65 and 97%.
Epidemiology
Actinic keratosis is very common, with an estimated 14% of dermatology visits related to AKs. It is seen more often in fair-skinned individuals, and rates vary with geographical location and age. Other factors such as exposure to ultraviolet (UV) radiation, certain phenotypic features, and immunosuppression can also contribute to the development of AKs.
Men are more likely to develop AK than women, and the risk of developing AK lesions increases with age. These findings have been observed in multiple studies, with numbers from one study suggesting that approximately 5% of women ages 20–29 develop AK compared to 68% of women ages 60–69, and 10% of men ages 20–29 develop AK compared to 79% of men ages 60–69.
Geography seems to play a role in the sense that individuals living in locations where they are exposed to more UV radiation throughout their lifetime have a significantly higher risk of developing AK </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Congenital adrenal hyperplasia and can you explain it in depth? [/INST] Congenital adrenal hyperplasia (CAH) is a group of autosomal recessive disorders characterized by impaired cortisol synthesis. It results from the deficiency of one of the five enzymes required for the synthesis of cortisol in the adrenal cortex. Most of these disorders involve excessive or deficient production of hormones such as glucocorticoids, mineralocorticoids, or sex steroids, and can alter development of primary or secondary sex characteristics in some affected infants, children, or adults. It is one of the most common autosomal recessive disorders in humans.
Types
CAH can occur in various forms. The clinical presentation of each form is different and depends to a large extent on the underlying enzyme defect, its precursor retention, and deficient products. Classical forms appear in infancy, and nonclassical forms appear in late childhood. The presentation in patients with classic CAH can be further subdivided into two forms, salt-wasting and simple-virilizing, depending on whether mineralocorticoid deficiency is present or absent, respectively. This subtyping is often not clinically meaningful, though, because all patients lose salt to some degree, and clinical presentations may overlap.
Classic
Salt-wasting
In 75% of cases of severe enzyme deficiency, insufficient aldosterone production can lead to salt wasting, failure to thrive, and potentially fatal hypovolemia and shock. A missed diagnosis of salt-wasting CAH is associated with an increased risk of early neonatal morbidity and death.
Simple-virilizing
The main feature of CAH in newborn females is abnormal development of the external genitalia, with varying degrees of virilization. According to clinical practice guidelines, CAH evaluation should be considered for newborns found to have bilateral nonpalpable gonads. If virilizing CAH is not identified and treated, both boys and girls may undergo rapid postnatal growth and virilization.
Nonclassic
In addition to the salt-wasting and simple-virilizing forms of CAH diagnosed in infancy, a mild or "nonclassic" form exists, which is characterized by varying degrees of postnatal androgen excess, but is sometimes asymptomatic. The nonclassic form may be noticed in late childhood and may lead to accelerated growth, premature sexual maturation, acne, and secondary polycystic ovary syndrome. In adult males, early balding and infertility may suggest the diagnosis. The nonclassic form is characterized by mild subclinical impairment of cortisol synthesis; serum cortisol concentration is usually normal.
Signs and symptoms
The symptoms of CAH vary depending upon the form of CAH and the sex of the patient. Symptoms can include:
Due to inadequate mineralocorticoids:
Vomiting due to salt-wasting, leading to dehydration and death
Due to excess androgens:
In extreme virilization, an elongated clitoris with a phallic-like structure is seen.
Ambiguous genitalia, in some infants, occurs such that initially identifying external genitalia as "male" or "female" is difficult.
Early pubic hair and rapid growth occurs in childhood.
Precocious puberty or failure of puberty to occur (sexual infantilism: absent or delayed puberty)
Excessive facial hair, virilization, and/or menstrual irregularity in adolescence
Infertility due to anovulation
Clitoromegaly (enlarged clitoris) and shallow vagina
Due to insufficient androgens and estrogens:
Undervirilization in XY males can result in apparently female external genitalia.
In females, hypogonadism can cause sexual infantilism or abnormal pubertal development, infertility, and other reproductive system abnormalities.
Genetics
CAH results from mutations of genes for enzymes mediating the biochemical steps of production of mineralocorticoids, glucocorticoids, or sex steroids from cholesterol by the adrenal glands (steroidogenesis).
Each form of CAH is associated with a specific defective gene. The most common type (95% of cases) involves the gene for 21-hydroxylase, which is found on 6p21.3 as part of the HLA complex; it exists as two highly homologous near-copies in series, an active gene (CYP21A2) and an inactive pseudogene (CYP21A1P), and mutant alleles result from recombination between the active gene and the pseudogene (gene conversion). About 5% of cases of CAH are due to defects in the gene encoding 11β-hydroxylase and consequent 11β-hydroxylase deficiency. Other, rarer forms of CAH are caused by mutations in genes including HSD3B2 (3β-hydroxysteroid dehydrogenase 2), CYP17A1 (17α-hydroxylase/17,20-lyase), CYP11A1 (P450scc; cholesterol side-chain cleavage enzyme), STAR (steroidogenic acute regulatory protein; StAR), CYB5A (cytochrome b5), and CYPOR (cytochrome P450 oxidoreductase; POR).
Expressivity
Further variability is introduced by the degree of enzyme inefficiency produced by the specific alleles each patient has. Some alleles result in more severe degrees of enzyme inefficiency. In general, severe degrees of inefficiency produce changes in the fetus and problems in prenatal or perinatal life. Milder degrees of inefficiency are usually associated with excessive or deficient sex hormone effects in childhood or adolescence, while the mildest forms of CAH interfere with ovulation and fertility in adults.
Diagnosis
Clinical evaluation
Female infants with classic CAH have ambiguous genitalia due to exposure to high concentrations of androgens in utero. CAH due to 21-hydroxylase deficiency is the most common cause of ambiguous genitalia in genotypically normal female infants (46,XX). Less severely affected females may present with early pubarche. Young women may present with symptoms of polycystic ovarian syndrome (oligomenorrhea, polycystic ovaries, hirsutism).
Males with classic CAH generally have no signs of CAH at birth. Some may present with hyperpigmentation, due to co-secretion of melanocyte-stimulating hormone, and possible penile enlargement. The age of diagnosis of males with CAH varies and depends on the severity of aldosterone deficiency. Boys with salt-wasting disease present early with symptoms of hyponatremia and hypovolemia. Boys with non-salt-wasting disease present later with signs of virilization.
In rarer forms of CAH, males are undermasculinized and females generally have no signs or symptoms at birth.
Laboratory studies
Genetic analysis can be helpful to confirm a diagnosis of CAH, but it is not necessary if classic clinical and laboratory findings are present.
In classic 21-hydroxylase deficiency, laboratory studies will show:
Hypoglycemia (due to hypocortisolism) - One of cortisol's many functions is to increase blood glucose levels. It does this via a combination of several mechanisms, including (a) stimulation of gluconeogenesis (the creation of new glucose) in the liver, (b) promotion of glycogenolysis (the breakdown of glycogen into glucose), and (c) prevention of glucose leaving the bloodstream via downregulation of GLUT-4 receptors (which normally promote movement of glucose from the bloodstream into adipose and muscle tissues). Therefore, when cortisol is deficient, these processes effectively run in reverse. Although compensatory mechanisms mitigate the impact of hypocortisolism, they are limited in extent and the net effect is still hypoglycemia.
Hyponatremia (due to hypoaldosteronism) - Aldosterone is the end product of the renin-angiotensin-aldosterone system, which regulates blood pressure via blood pressure surveillance in the juxtaglomerular apparatus of the kidney. Aldosterone normally functions to increase sodium retention (which brings water with it) in exchange for potassium; thus, lack of aldosterone causes hyperkalemia and hyponatremia. In fact, this is a distinguishing point from 11β-hydroxylase deficiency, in which one of the accumulating products, 11-deoxycorticosterone, has weak mineralocorticoid activity. In 11β-hydroxylase deficiency, 11-deoxycorticosterone is produced in such excess that it acts to retain sodium at the expense of potassium. For this reason, patients with 11β-hydroxylase deficiency usually do not show salt wasting (although they sometimes do in infancy), and instead have hypertension/water retention and sometimes hypokalemia.
Hyperkalemia (due to hypoaldosteronism)
Elevated 17α-hydroxyprogesterone
Classic 21-hydroxylase deficiency typically causes 17α-hydroxyprogesterone blood levels >242 nmol/L. (For comparison, a full-term infant at three days of age should have <3 nmol/L. Many neonatal screening programs have specific reference ranges by weight and gestational age because high levels may be seen in premature infants without CAH.) Salt-wasting patients tend to have higher 17α-hydroxyprogesterone levels than non-salt-wasting patients. In mild cases, 17α-hydroxyprogesterone may not be elevated in a particular random blood sample, but it will rise during a corticotropin stimulation test.
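To make the threshold logic above concrete, here is a minimal, hypothetical Python sketch (not a clinical tool) that interprets a 17-OHP result against the two values quoted in the text; the function name and the handling of intermediate values are illustrative assumptions.

CLASSIC_CAH_TYPICAL = 242.0  # nmol/L; classic 21-hydroxylase deficiency typically exceeds this
TERM_NEONATE_NORMAL = 3.0    # nmol/L; expected in a full-term infant at three days of age

def interpret_17ohp(level_nmol_l: float) -> str:
    # Compare a measured 17-OHP level against the thresholds quoted above.
    if level_nmol_l > CLASSIC_CAH_TYPICAL:
        return "suggestive of classic 21-hydroxylase deficiency"
    if level_nmol_l < TERM_NEONATE_NORMAL:
        return "within the expected range for a full-term neonate"
    # Intermediate values need weight- and gestational-age-specific reference
    # ranges, and mild cases may require a corticotropin stimulation test.
    return "indeterminate without age- and weight-specific reference ranges"

print(interpret_17ohp(300.0))  # suggestive of classic 21-hydroxylase deficiency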
Classification
Cortisol is an adrenal steroid hormone required for normal endocrine function. Production begins in the second month of fetal life. Poor cortisol production is a hallmark of most forms of CAH. Inefficient cortisol production results in rising levels of ACTH, because cortisol feeds back to inhibit ACTH production, so loss of cortisol results in increased ACTH. This increased ACTH stimulation induces overgrowth (hyperplasia) and overactivity of the steroid-producing cells of the adrenal cortex. The defects causing adrenal hyperplasia are congenital (i.e. present at birth).
Cortisol deficiency in CAH is usually partial, and not the most serious problem for an affected person. Synthesis of cortisol shares steps with synthesis of mineralocorticoids such as aldosterone, androgens such as testosterone, and estrogens such as estradiol. The resulting excessive or deficient production of these three classes of hormones produces the most important problems for people with CAH. Specific enzyme inefficiencies are associated with characteristic patterns of over- or underproduction of mineralocorticoids or sex steroids.
Since the 1960s, most endocrinologists have referred to the forms of CAH by the traditional names in the left column, which generally correspond to the deficient enzyme activity. As exact structures and genes for the enzymes were identified in the 1980s, most of the enzymes were found to be cytochrome P450 oxidases and were renamed to reflect this. In some cases, more than one enzyme was found to participate in a reaction, and in other cases, a single enzyme mediated in more than one reaction. Variation in different tissues and mammalian species also was found.
In all its forms, congenital adrenal hyperplasia due to 21-hydroxylase deficiency accounts for about 95% of diagnosed cases of CAH. Unless another specific enzyme is mentioned, "CAH" in nearly all contexts refers to 21-hydroxylase deficiency. (The terms "salt-wasting CAH" and "simple virilizing CAH" usually refer to subtypes of this condition.) CAH due to deficiencies of enzymes other than 21-hydroxylase presents many of the same management challenges as 21-hydroxylase deficiency, but some forms involve mineralocorticoid excess or sex steroid deficiency.
Screening
Currently, in the United States and over 40 other countries, every child born is screened for 21-hydroxylase CAH at birth. This test detects elevated levels of 17α-hydroxyprogesterone (17-OHP), enabling early detection of CAH. Newborns detected early enough can be placed on medication and live relatively normal lives.
The screening process, however, is characterized by a high false-positive rate. In one study, CAH screening had the lowest positive predictive value (111 true-positive cases among 20,647 abnormal screening results in a 2-year period, or 0.53%, compared with 6.36% for biotinidase deficiency, 1.84% for congenital hypothyroidism, 0.56% for classic galactosemia, and 2.9% for phenylketonuria). According to this estimate, 200 unaffected newborns required clinical and laboratory follow-up for every true case of CAH.
In 2020, Wael AbdAlmageed from USC Information Sciences Institute and Mimi Kim from USC Keck School of Medicine led a joint study in which they used deep learning technology to analyze the facial morphology and features of CAH patients compared to controls. In this cross-sectional study of 102 patients with CAH and 144 control participants, deep learning methods achieved a mean area under the receiver operating characteristic curve of 92% for predicting CAH from facial images. Facial features distinguished patients with CAH from controls, and analyses of facial regions found that the nose and upper face were most contributory. The findings suggest that facial morphologic features, as analyzed by deep neural network techniques, can be used as a phenotypic biomarker to predict CAH.
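The screening arithmetic above can be reproduced directly; this short Python sketch recomputes the positive predictive value from the study's raw counts (the variable names are ours, the figures are from the text).

true_positives = 111       # confirmed CAH cases among abnormal screens
abnormal_screens = 20_647  # all abnormal screening results over two years

ppv = true_positives / abnormal_screens
print(f"PPV = {ppv:.2%}")  # about 0.54%; reported as 0.53% in the study
followups_per_case = (abnormal_screens - true_positives) / true_positives
print(f"~{followups_per_case:.0f} unaffected newborns followed up per true case")  # ~185, roughly the 200 quoted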
Treatment
Since the clinical manifestations of each form of CAH are unique and depend to a large extent on the underlying enzyme defects, their precursor retention, and deficient products, the therapeutic goal of CAH is to replenish insufficient adrenal hormones and suppress excess precursors.
Treatment of all forms of CAH may include any of:
Supplying enough glucocorticoid to reduce hyperplasia and overproduction of androgens or mineralocorticoids
Providing replacement mineralocorticoid and extra salt if the person is deficient
Providing replacement testosterone or estrogens at puberty if the person is deficient
Additional treatments to optimize growth by delaying puberty or delaying bone maturation
If CAH is caused by deficiency of the 21-hydroxylase enzyme, treatment aims to normalize levels of the enzyme's main substrate, 17α-hydroxyprogesterone.
Epidemiology
The incidence varies ethnically. In the United States, congenital adrenal hyperplasia in its classic form is particularly common in Native Americans and Yupik Inuit (incidence 1⁄280). Among American Caucasians, the incidence of the classic form is about 1⁄15,000.
Continued treatment and wellness are enhanced by education and follow-up.
History
Before 20th century
An Italian anatomist, Luigi De Crecchio (1832-1894) provided the earliest known description of a case of probable CAH.
I propose in this narrative that it is sometimes extremely difficult and even impossible to determine sex during life. In one of the anatomical theaters of the hospital..., there arrived toward the end of January a cadaver which in life was the body of a certain Joseph Marzo... The general physiognomy was decidedly male in all respects. There were no feminine curves to the body. There was a heavy beard. There was some delicacy of structure with muscles that were not very well developed... The distribution of pubic hair was typical of the male. Perhaps the lower extremities were somewhat delicate, resembling the female, and were covered with hair... The penis was curved posteriorly and measured 6 cm, or with stretching, 10 cm. The corona was 3 cm long and 8 cm in circumference. There was an ample prepuce. There was a first grade hypospadias... There were two folds of skin coming from the top of the penis and encircling it on either side. These were somewhat loose and resembled labia majora.
De Crecchio then described the internal organs, which included a normal vagina, uterus, fallopian tubes, and ovaries.
It was of the greatest importance to determine the habits, tendencies, passions, and general character of this individual... I was determined to get as complete a story as possible, determined to get at the base of the facts and to avoid undue exaggeration which was rampant in the conversation of many of the people present at the time of the dissection.
He interviewed many people and satisfied himself that Joseph Marzo "conducted himself within the sexual area exclusively as a male", even to the point of contracting the "French disease" on two occasions. The cause of death was another in a series of episodes of vomiting and diarrhea.
This account was translated by Alfred Bongiovanni from De Crecchio ("Sopra un caso di apparenzi virili in una donna". Morgagni 7:154–188, 1865) in 1963 for an article in The New England Journal of Medicine.
20th and 21st centuries
The association of excessive sex steroid effects with diseases of the adrenal cortex has been recognized for over a century. The term "adrenogenital syndrome" was applied to both sex-steroid producing tumors and severe forms of CAH for much of the 20th century, before some of the forms of CAH were understood. The term congenital adrenal hyperplasia, which also dates to the first half of the century, has become preferred to reduce ambiguity and to emphasize the underlying pathophysiology of the disorders.
Much modern understanding and treatment of CAH comes from research conducted at Johns Hopkins Medical School in Baltimore in the middle of the 20th century. Lawson Wilkins, "founder" of pediatric endocrinology, worked out the apparently paradoxical pathophysiology: that hyperplasia and overproduction of adrenal androgens resulted from impaired capacity for making cortisol. He reported use of adrenal cortical extracts to treat children with CAH in 1950. Genital reconstructive surgery was also pioneered at Hopkins. After application of karyotyping to CAH and other intersex disorders in the 1950s, John Money, JL Hampson, and JG Hampson persuaded both the scientific community and the public that sex assignment should not be based on any single biological criterion, and gender identity was largely learned and has no simple relationship with chromosomes or hormones. See Intersex for a fuller history, including recent controversies over reconstructive surgery.
Hydrocortisone, fludrocortisone, and prednisone were available by the late 1950s </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Adrenocortical carcinoma and can you explain it in depth? [/INST] Adrenocortical carcinoma (ACC) is an aggressive cancer originating in the cortex (steroid hormone-producing tissue) of the adrenal gland.
Adrenocortical carcinoma is remarkable for the many hormonal syndromes that can occur in patients with steroid hormone-producing ("functional") tumors, including Cushing's syndrome, Conn syndrome, virilization, and feminization. Adrenocortical carcinoma has often invaded nearby tissues or metastasized to distant organs at the time of diagnosis, and the overall 5-year survival rate is about 50%.
Adrenocortical carcinoma is a rare tumor, with an incidence of one to two per million population annually. It has a bimodal distribution by age, with cases clustering in children under 5 and in adults 30–40 years old. The widely used angiotensin-II-responsive steroid-producing cell line H295R was originally isolated from a tumor diagnosed as adrenocortical carcinoma.
Signs and symptoms
Adrenocortical carcinoma may present differently in children and adults. Most tumors in children are functional, and virilization is by far the most common presenting symptom, followed by Cushing's syndrome and precocious puberty. Among adults presenting with hormonal syndromes, Cushing's syndrome alone is most common, followed by mixed Cushing's syndrome and virilization (glucocorticoid and androgen overproduction). Feminization and Conn syndrome (mineralocorticoid excess) occur in less than 10% of cases. Rarely, pheochromocytoma-like hypersecretion of catecholamines has been reported in adrenocortical cancers. Nonfunctional tumors (about 40%, authorities vary) usually present with abdominal or flank pain, varicocele, or renal vein thrombosis, or they may be asymptomatic and detected incidentally.
All patients with suspected ACC should be carefully evaluated for signs and symptoms of hormonal syndromes. For Cushing's syndrome (glucocorticoid excess), these include weight gain, muscle wasting, purple lines on the abdomen, a fatty "buffalo hump" on the neck, a "moon-like" face, and thinning, fragile skin. Virilism (androgen excess) is most obvious in women, and may produce excess facial and body hair, acne, enlargement of the clitoris, deepening of the voice, coarsening of facial features, and cessation of menstruation. Conn syndrome (mineralocorticoid excess) is marked by high blood pressure, which can result in headache, and by hypokalemia (low serum potassium, which can in turn produce muscle weakness, confusion, and palpitations), low plasma renin activity, and high serum aldosterone. Feminization (estrogen excess) is most readily noted in men, and includes breast enlargement, decreased libido, and impotence.
Pathophysiology
The main etiologic factor of ACC is unknown, although families with Li–Fraumeni syndrome, caused by an inherited inactivating mutation in TP53, have increased risk. Several genes have been shown to be recurrently mutated, including TP53, CTNNB1, MEN1, PRKAR1A, RPL22, and DAXX. The telomerase gene TERT is often amplified, while ZNRF3 and CDKN2A are often homozygously deleted. The genes h19, insulin-like growth factor II (IGF-II), and p57kip2 are important for fetal growth and development. They are located on chromosome 11p. Expression of the h19 gene is markedly reduced in both nonfunctioning and functioning adrenal cortical carcinomas, especially in tumors producing cortisol and aldosterone. There is also a loss of activity of the p57kip2 gene product in virilizing adenomas and adrenal cortical carcinomas. In contrast, IGF-II gene expression has been shown to be high in adrenal cortical carcinomas. Finally, c-myc gene expression is relatively high in neoplasms, and it is often linked to poor prognosis.
Bilateral adrenocortical tumors are less common than unilateral ones. The majority of bilateral tumors can be distinguished according to the size and aspect of the nodules: primary pigmented nodular adrenocortical disease, which can be sporadic or part of Carney complex, and primary bilateral macronodular adrenal hyperplasia.
Metastasis is most commonly to the liver and lung.
Diagnosis
Laboratory findings
Hormonal syndromes should be confirmed with laboratory testing. Laboratory findings in Cushing syndrome include increased serum glucose (blood sugar) and increased urine cortisol. Adrenal virilism is confirmed by the finding of an excess of serum androstenedione and dehydroepiandrosterone. Findings in Conn syndrome include low serum potassium, low plasma renin activity, and high serum aldosterone. Feminization is confirmed with the finding of excess serum estrogen.
Imaging
Radiological studies of the abdomen, such as CT scans and magnetic resonance imaging, are useful for identifying the site of the tumor, differentiating it from other diseases such as adrenocortical adenoma, and determining the extent of invasion of the tumor into surrounding organs and tissues. On CT, the tumor shows a heterogeneous appearance due to necrosis, calcifications, and hemorrhage, with peripheral enhancement after contrast injection. Invasion of adjacent structures such as the kidney, vena cava, liver, and retroperitoneal lymph nodes is also common.
On MRI, it shows low intensity on T1-weighted images and high T2 signal with strong heterogeneous contrast enhancement and slow washout. Hemorrhagic areas may show high T1 signal.
Pathology
Adrenal tumors are often not biopsied prior to surgery, so diagnosis is confirmed on examination of the surgical specimen by a pathologist. Grossly, ACCs are often large, with a tan-yellow cut surface, and areas of hemorrhage and necrosis </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Alcohol withdrawal syndrome and can you explain it in depth? [/INST] Alcohol withdrawal syndrome (AWS) is a set of symptoms that can occur following a reduction in alcohol use after a period of excessive use. Symptoms typically include anxiety, shakiness, sweating, vomiting, fast heart rate, and a mild fever. More severe symptoms may include seizures, hallucinations, and delirium tremens (DTs). Symptoms typically begin around six hours following the last drink, are worst at 24 to 72 hours, and improve by seven days.
Alcohol withdrawal may occur in those who are alcohol dependent. This may occur following a planned or unplanned decrease in alcohol intake. The underlying mechanism involves a decreased responsiveness of GABA receptors in the brain. The withdrawal process is typically followed using the Clinical Institute Withdrawal Assessment for Alcohol scale (CIWA-Ar).
The typical treatment of alcohol withdrawal is with benzodiazepines such as chlordiazepoxide or diazepam. Often the amounts given are based on a person's symptoms. Thiamine is recommended routinely. Electrolyte problems and low blood sugar should also be treated. Early treatment improves outcomes.
In the Western world about 15% of people have problems with alcoholism at some point in time. Alcohol depresses the central nervous system, slowing cerebral messaging and altering the way signals are sent and received. Progressively larger amounts of alcohol are needed to achieve the same physical and emotional results. The drinker eventually must consume alcohol just to avoid the physical cravings and withdrawal symptoms. About half of people with alcoholism will develop withdrawal symptoms upon reducing their use, with four percent developing severe symptoms. Among those with severe symptoms up to 15% die. Symptoms of alcohol withdrawal have been described at least as early as 400 BC by Hippocrates. It is not believed to have become a widespread problem until the 1700s.
Signs and symptoms
Signs and symptoms of alcohol withdrawal occur primarily in the central nervous system. The severity of withdrawal can vary from mild symptoms such as insomnia, trembling, and anxiety to severe and life-threatening symptoms such as alcoholic hallucinosis, delirium tremens, and autonomic instability.
Withdrawal usually begins 6 to 24 hours after the last drink and can last for up to one week. To be classified as alcohol withdrawal syndrome, patients must exhibit at least two of the following symptoms: increased hand tremor, insomnia, nausea or vomiting, transient hallucinations (auditory, visual, or tactile), psychomotor agitation, anxiety, generalized tonic–clonic seizures, and autonomic instability.
The severity of symptoms is dictated by a number of factors, the most important of which are degree of alcohol intake, length of time the individual has been using alcohol, and previous history of alcohol withdrawal. Symptoms are also grouped together and classified:
Alcohol hallucinosis: patients have transient visual, auditory, or tactile hallucinations, but are otherwise clear.
Withdrawal seizures: seizures occur within 48 hours of alcohol cessation and occur either as a single generalized tonic–clonic seizure or as a brief episode of multiple seizures.
Delirium tremens: hyperadrenergic state, disorientation, tremors, diaphoresis, impaired attention/consciousness, and visual and auditory hallucinations.
Progression
Six to 12 hours after ingestion of the last drink, withdrawal symptoms such as shaking, headache, sweating, anxiety, nausea, or vomiting may occur. Twelve to 24 hours after cessation, the condition may progress to major symptoms such as confusion and hallucinations (with awareness of reality), while less severe symptoms may persist and develop, including tremor, agitation, hyperactivity, and insomnia.
At 12 to 48 hours following the last ethanol ingestion, the possibility of generalized tonic–clonic seizures should be anticipated; these occur in 3–5% of cases. Meanwhile, none of the earlier withdrawal symptoms will typically have abated. Seizures carry the risk of major complications and death for the alcoholic.
Although the person's condition usually begins to improve after 48 hours, withdrawal symptoms sometimes continue to increase in severity and advance to the most severe stage of withdrawal, delirium tremens. This occurs in 5 to 20% of patients experiencing detoxification and one third of untreated cases, and is characterized by hallucinations that are indistinguishable from reality, severe confusion, seizures, high blood pressure, and fever that can persist anywhere from 4 to 12 days.
Protracted withdrawal
A protracted alcohol withdrawal syndrome occurs in many alcoholics when withdrawal symptoms continue beyond the acute withdrawal stage, usually at a subacute level of intensity that gradually decreases in severity over time. This syndrome is sometimes referred to as post-acute-withdrawal syndrome. Some withdrawal symptoms can linger for at least a year after discontinuation of alcohol. Symptoms can include a craving for alcohol, inability to feel pleasure from normally pleasurable things (known as anhedonia), clouding of sensorium, disorientation, nausea and vomiting, or headache.
Insomnia is a common protracted withdrawal symptom that persists after the acute withdrawal phase of alcohol and has been found to influence relapse rate. Studies have found that magnesium or trazodone can help treat the persisting withdrawal symptom of insomnia in recovering alcoholics. Insomnia can be difficult to treat in these individuals because many of the traditional sleep aids (e.g., benzodiazepine receptor agonists and barbiturate receptor agonists) work via a GABAA receptor mechanism and are cross-tolerant with alcohol; trazodone, however, is not cross-tolerant with alcohol. The acute phase of the alcohol withdrawal syndrome can occasionally be protracted, and protracted delirium tremens has been reported in the medical literature as a possible but unusual feature of alcohol withdrawal.
Pathophysiology
Chronic use of alcohol leads to changes in brain chemistry, especially in the GABAergic system. Various adaptations occur, such as changes in gene expression and downregulation of GABAA receptors. During acute alcohol withdrawal, further changes occur, such as upregulation of alpha4-containing GABAA receptors and downregulation of alpha1- and alpha3-containing GABAA receptors. Neurochemical changes occurring during alcohol withdrawal can be minimized with the drugs used for acute detoxification. With abstinence from alcohol and cross-tolerant drugs, these changes in neurochemistry may gradually return towards normal. Adaptations to the NMDA system also occur as a result of repeated alcohol intoxication and are involved in the hyper-excitability of the central nervous system during the alcohol withdrawal syndrome. Homocysteine levels, which are elevated during chronic drinking, increase even further during the withdrawal state, and may result in excitotoxicity. Alterations in ECG (in particular an increase in QT interval) and EEG abnormalities (including abnormal quantified EEG) may occur during early withdrawal. Dysfunction of the hypothalamic–pituitary–adrenal axis and increased release of corticotropin-releasing hormone occur during both acute and protracted abstinence from alcohol and contribute to both acute and protracted withdrawal symptoms. Anhedonia/dysphoria symptoms, which can persist as part of a protracted withdrawal, may be due to dopamine underactivity.
Kindling
Kindling is a phenomenon in which repeated alcohol detoxifications lead to increased severity of the withdrawal syndrome. For example, binge drinkers may initially experience no withdrawal symptoms, but with each period of alcohol use followed by cessation, their withdrawal symptoms intensify in severity and may eventually result in full-blown delirium tremens with convulsive seizures. Alcoholics who experience seizures during detoxification are more likely to have had previous episodes of alcohol detoxification than patients who did not have seizures during withdrawal. In addition, people with previous withdrawal syndromes are more likely to have more medically complicated alcohol withdrawal symptoms.
Kindling can cause complications and may increase the risk of relapse, alcohol-related brain damage, and cognitive deficits. Chronic alcohol misuse and kindling via multiple alcohol withdrawals may lead to permanent alterations in the GABAA receptors. The mechanism behind kindling is sensitization of some neuronal systems and desensitization of other neuronal systems, which leads to increasingly gross neurochemical imbalances. This in turn leads to more profound withdrawal symptoms including anxiety, convulsions, and neurotoxicity.
Binge drinking is associated with increased impulsivity, impairments in spatial working memory, and impaired emotional learning. These adverse effects are believed to be due to the neurotoxic effects of repeated withdrawal from alcohol on aberrant neuronal plasticity and cortical damage. Repeated periods of acute intoxication followed by acute detoxification have profound effects on the brain and are associated with an increased risk of seizures as well as cognitive deficits. The effects on the brain are similar to those seen in alcoholics who have detoxified repeatedly but not as severe as in alcoholics who have no history of prior detox. Thus, the acute withdrawal syndrome appears to be the most important factor in causing damage or impairment to brain function. The brain regions most sensitive to harm from binge drinking are the amygdala and prefrontal cortex.
People in adolescence who experience repeated withdrawals from binge drinking show impairments of long-term nonverbal memory. Alcoholics who have had two or more alcohol withdrawals show more frontal lobe cognitive dysfunction than those who have experienced one or no prior withdrawals. Kindling of neurons is the proposed cause of withdrawal-related cognitive damage. Kindling from repeated withdrawals leads to accumulating neuroadaptive changes, and may also be the reason for cognitive damage seen in binge drinkers.
Diagnosis
Many hospitals use the Clinical Institute Withdrawal Assessment for Alcohol (CIWA) protocol in order to assess the level of withdrawal present and therefore the amount of medication needed. When overuse of alcohol is suspected but the drinking history is unclear, testing for elevated values of carbohydrate-deficient transferrin or gamma-glutamyl transferase can help make the diagnosis of alcohol overuse and dependence clearer. The CIWA has also been shortened (to the CIWA-Ar), while retaining its validity and reliability, to help assess patients more efficiently given the life-threatening nature of alcohol withdrawal.
Other conditions that may present similarly include benzodiazepine withdrawal syndrome (a condition also mainly caused by GABAA receptor adaptation).
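As an illustration of how a symptom-scored protocol like the CIWA-Ar drives assessment, here is a minimal Python sketch; the ten item names, their score ranges, and the severity cut-offs are commonly cited values included as assumptions, not a clinical instrument.

# Illustrative CIWA-Ar total-score calculation (assumed item set and cut-offs).
CIWA_AR_ITEMS = [
    "nausea_vomiting", "tremor", "paroxysmal_sweats", "anxiety", "agitation",
    "tactile_disturbances", "auditory_disturbances", "visual_disturbances",
    "headache",     # the nine items above are each scored 0-7
    "orientation",  # scored 0-4, for an assumed maximum total of 67
]

def ciwa_ar_severity(scores: dict) -> str:
    # Sum the per-item scores and bucket the total into commonly cited bands.
    total = sum(scores[item] for item in CIWA_AR_ITEMS)
    if total < 8:
        return f"total {total}: minimal withdrawal"
    if total <= 15:
        return f"total {total}: moderate withdrawal"
    return f"total {total}: severe withdrawal"

example = dict.fromkeys(CIWA_AR_ITEMS, 1)  # hypothetical patient scoring 1 on every item
print(ciwa_ar_severity(example))           # total 10: moderate withdrawal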
Treatment
Benzodiazepines are effective for the management of symptoms as well as the prevention of seizures. Certain vitamins are also an important part of the management of alcohol withdrawal syndrome. In those with severe symptoms, inpatient care is often required. In those with lesser symptoms, treatment at home may be possible with daily visits to a health care provider.
Benzodiazepines
Benzodiazepines are the most commonly used medication for the treatment of alcohol withdrawal and are generally safe and effective in suppressing its symptoms. This class of medication is generally effective in symptom control, but needs to be used carefully. Although benzodiazepines have a long history of successfully treating and preventing withdrawal, there is no consensus on the ideal agent. The most commonly used agents are long-acting benzodiazepines, such as chlordiazepoxide and diazepam. These are believed to be superior to other benzodiazepines for treatment of delirium and allow for longer periods between doses. However, benzodiazepines with intermediate half-lives, like lorazepam, may be safer in people with liver problems. Benzodiazepines showed a protective benefit against alcohol withdrawal symptoms, in particular seizures, compared to other common methods of treatment.
The primary debate between use of long-acting and short-acting benzodiazepines is that of ease of use. Longer-acting drugs, such as diazepam, can be administered less frequently. However, evidence does exist that "symptom-triggered regimens", such as those used when treating with lorazepam, are as safe and effective but have decreased treatment duration and medication quantity used.
Although benzodiazepines are very effective at treating alcohol withdrawal, they should be used carefully. Benzodiazepines should only be used for brief periods in alcoholics who are not already dependent on them, as they share cross-tolerance with alcohol. There is a risk of replacing an alcohol addiction with benzodiazepine dependence or adding another addiction. Furthermore, disrupted GABA benzodiazepine receptor function is part of alcohol dependence, and chronic benzodiazepines may prevent full recovery from alcohol-induced mental effects. The combination of benzodiazepines and alcohol can amplify the adverse psychological effects of each other, causing enhanced depressive effects on mood and increased suicidal actions; the combination is generally contraindicated except for alcohol withdrawal.
Vitamins
Alcoholics are often deficient in various nutrients, which can cause severe complications during alcohol withdrawal, such as the development of Wernicke syndrome. To help prevent Wernicke syndrome, these individuals should be administered a multivitamin preparation with sufficient quantities of thiamine and folic acid. During alcohol withdrawal, the prophylactic administration of thiamine, folic acid, and pyridoxine intravenously is recommended before starting any carbohydrate-containing fluids or food. These vitamins are often combined into a banana bag for intravenous administration.
Anticonvulsants
Very limited evidence indicates that topiramate or pregabalin may be useful in the treatment of alcohol withdrawal syndrome </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Alopecia areata and can you explain it in depth? [/INST] Alopecia areata, also known as spot baldness, is a condition in which hair is lost from some or all areas of the body. Often, it results in a few bald spots on the scalp, each about the size of a coin. Psychological stress and illness are possible factors in bringing on alopecia areata in individuals at risk, but in most cases there is no obvious trigger. People are generally otherwise healthy. In a few cases, all the hair on the scalp is lost (alopecia totalis), or all body hair is lost (alopecia universalis). Hair loss can be permanent or temporary. The condition is distinct from pattern hair loss, which is common among males.
Alopecia areata is believed to be an autoimmune disease resulting from a breach in the immune privilege of the hair follicles. Risk factors include a family history of the condition. Among identical twins, if one is affected, the other has about a 50% chance of also being affected. The underlying mechanism involves failure by the body to recognize its own cells, with subsequent immune-mediated destruction of the hair follicle.
No cure for the condition is known. Efforts may be used to try to speed hair regrowth, such as cortisone injections. Sunscreen, head coverings to protect from cold and sun, and glasses, if the eyelashes are missing, are recommended. In some cases, the hair regrows and the condition does not recur. In others, hair loss and regrowth occurs over years. Among those in whom all body hair is lost, fewer than 10% recover.
About 0.15% of people are affected at any one time, and 2% of people are affected at some point in time. Onset is usually in childhood. Females are affected at higher rates than males.
Signs and symptoms
Typical first symptoms of alopecia areata are small bald patches. The underlying skin is unscarred and looks superficially normal. Although these patches can take many shapes, they are usually round or oval. Alopecia areata most often affects the scalp and beard, but may occur on any part of the body with hair. Different areas of the skin may exhibit hair loss and regrowth at the same time. The disease may also go into remission for a time, or may be permanent. It is common in children. The area of hair loss may tingle or be mildly painful. The hair tends to fall out over a short period of time, with the loss commonly occurring more on one side of the scalp than the other. Exclamation point hairs, narrower along the length of the strand closer to the base, producing a characteristic "exclamation point" appearance, are often present. These hairs are very short (3–4 mm), and can be seen surrounding the bald patches. When healthy hair is pulled out, at most a few should come out, and ripped hair should not be distributed evenly across the tugged portion of the scalp. In cases of alopecia areata, hair tends to pull out more easily along the edge of the patch, where the follicles are already being attacked by the body's immune system, than away from the patch, where they are still healthy. Nails may have pitting or trachyonychia.
Causes
Alopecia areata is thought to be a systemic autoimmune disorder in which the body attacks its own anagen hair follicles and suppresses or stops hair growth. For example, T cell lymphocytes cluster around affected follicles, causing inflammation and subsequent hair loss. Hair follicles in a normal state are thought to be kept secure from the immune system, a phenomenon called immune privilege. A breach in this immune privilege state is considered the cause of alopecia areata. A few cases of babies being born with congenital alopecia areata have been reported. Alopecia areata is not contagious. It occurs more frequently in people who have affected family members, suggesting heredity may be a factor. Strong evidence of genetic association with increased risk for alopecia areata was found by studying families with two or more affected members. This study identified at least four regions in the genome that are likely to contain these genes. In addition, alopecia areata shares genetic risk factors with other autoimmune diseases, including rheumatoid arthritis, type 1 diabetes, and celiac disease. It may be the only manifestation of celiac disease. A defect in endogenous retinoid metabolism is a key part of the pathogenesis of alopecia areata. In 2010, a genome-wide association study was completed that identified 129 single nucleotide polymorphisms that were associated with alopecia areata. The genes that were identified include those involved in
controlling the activation and proliferation of regulatory T cells, cytotoxic T lymphocyte-associated antigen 4, interleukin-2, interleukin-2 receptor A, and Eos (also known as Ikaros family zinc finger 4), as well as the human leukocyte antigen. The study also identified two genes, PRDX5 and STX17, that are expressed in the hair follicle.
Diagnosis
Alopecia areata is usually diagnosed based on clinical features.
Trichoscopy may aid in establishing the diagnosis. In alopecia areata, trichoscopy shows regularly distributed "yellow dots" (hyperkeratotic plugs), small exclamation-mark hairs, and "black dots" (destroyed hairs in the hair follicle opening). Oftentimes, however, discrete areas of hair loss surrounded by exclamation mark hairs are sufficient for a clinical diagnosis of alopecia areata. Sometimes, reddening of the skin, erythema, may also be present in the balding area. A biopsy is rarely needed to make the diagnosis or aid in the management of alopecia areata. Histologic findings may include peribulbar lymphocytic infiltration resembling a "swarm of bees", a shift in the anagen-to-telogen ratio towards telogen, and dilated follicular infundibula. Other helpful findings can include pigment incontinence in the hair bulb and follicular stelae. Occasionally, in inactive alopecia areata, no inflammatory infiltrates are found.
Classification
Commonly, alopecia areata involves hair loss in one or more round spots on the scalp.
Hair may also be lost more diffusely over the whole scalp, in which case the condition is called diffuse alopecia areata.
Alopecia areata monolocularis describes baldness in only one spot. It may occur anywhere on the head.
Alopecia areata multilocularis refers to multiple areas of hair loss.
Ophiasis refers to hair loss in the shape of a wave at the circumference of the head.
The disease may be limited only to the beard, in which case it is called alopecia areata barbae.
If the person loses all the hair on the scalp, the disease is then called alopecia areata totalis.
If all body hair, including pubic hair, is lost, the diagnosis then becomes alopecia areata universalis. Alopecia areata totalis and universalis are rare.
Treatment
The objective assessment of treatment efficacy is very difficult and spontaneous remission is unpredictable, but if the hair loss is patchy, the hair may regrow spontaneously in many cases. None of the existing therapeutic options is curative or preventive. In cases of severe hair loss, limited success has been achieved by using the corticosteroid medications clobetasol or fluocinonide, corticosteroid injections, or cream. Application of corticosteroid creams to the affected skin is less effective and takes longer to produce results. Steroid injections are commonly used in sites where the areas of hair loss on the head are small or especially where eyebrow hair has been lost. Whether they are effective is uncertain. Some other medications that have been used are minoxidil, Elocon (mometasone) ointment (steroid cream), irritants (anthralin or topical coal tar), topical immunotherapy, and ciclosporin, sometimes in different combinations. Topical corticosteroids frequently fail to enter the skin deeply enough to affect the hair bulbs, which are the treatment target, and small lesions typically also regrow spontaneously. Oral corticosteroids may decrease the hair loss, but only for the period during which they are taken, and these medications can cause serious side effects. No one treatment is effective in all cases, and some individuals may show no response to any treatment. Few treatments have been well evaluated. A 2008 meta-analysis of oral and topical corticosteroids, topical ciclosporin, photodynamic therapy, and topical minoxidil showed no benefit for hair growth compared with placebo, especially with regard to long-term benefits. For more severe cases, recent studies have shown promising results with the immunosuppressant methotrexate, used alone or as an adjunct to corticosteroids. However, as relapse of the condition may occur, long-term treatment is recommended. When alopecia areata is associated with celiac disease, treatment with a gluten-free diet allows for complete and permanent regrowth of scalp and other body hair in many people, but in others, remissions and recurrences are seen. This improvement is probably due to the normalization of the immune response as a result of gluten withdrawal from the diet. In June 2022, the U.S. Food and Drug Administration (FDA) authorized baricitinib for the treatment of severe alopecia areata.
Prognosis
In most cases that begin with a small number of patches of hair loss, hair grows back after a few months to a year. In cases with a greater number of patches, hair can either grow back or progress to alopecia areata totalis or, in rare cases, alopecia areata universalis. No loss of body function occurs, and the effects of alopecia areata are psychological (loss of self-image due to hair loss), although these can be severe </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Altitude sickness and can you explain it in depth? [/INST] Altitude sickness, the mildest form being acute mountain sickness (AMS), is the harmful effect of high altitude, caused by rapid exposure to low amounts of oxygen at high elevation. People can respond to high altitude in different ways. Symptoms may include headaches, vomiting, tiredness, confusion, trouble sleeping, and dizziness. Acute mountain sickness can progress to high-altitude pulmonary edema (HAPE) with associated shortness of breath or high-altitude cerebral edema (HACE) with associated confusion. Chronic mountain sickness may occur after long-term exposure to high altitude. Altitude sickness typically occurs only above 2,500 metres (8,000 ft), though some are affected at lower altitudes. Risk factors include a prior episode of altitude sickness, a high degree of activity, and a rapid increase in elevation. Diagnosis is based on symptoms and is supported in those who have more than a minor reduction in activities. It is recommended that at high altitude any symptoms of headache, nausea, shortness of breath, or vomiting be assumed to be altitude sickness. Prevention is by gradually increasing elevation by no more than 300 metres (1,000 ft) per day. Being physically fit does not decrease the risk. Treatment is generally by descending and sufficient fluids. Mild cases may be helped by ibuprofen, acetazolamide, or dexamethasone. Severe cases may benefit from oxygen therapy, and a portable hyperbaric bag may be used if descent is not possible. Treatment efforts, however, have not been well studied. AMS occurs in about 20% of people after rapidly going to 2,500 metres (8,000 ft) and 40% of people going to 3,000 metres (10,000 ft). While AMS and HACE occur equally frequently in males and females, HAPE occurs more often in males. The earliest description of altitude sickness is attributed to a Chinese text from around 30 BCE which describes "Big Headache Mountains", possibly referring to the Karakoram Mountains around Kilik Pass.
Signs and symptoms
People have different susceptibilities to altitude sickness; for some otherwise healthy people, acute altitude sickness can begin to appear at around 2,000 metres (6,600 ft) above sea level, such as at many mountain ski resorts, equivalent to a pressure of 80 kilopascals (0.79 atm). This is the most frequent type of altitude sickness encountered. Symptoms often manifest within ten hours of ascent and generally subside within two days, though they occasionally develop into the more serious conditions. Symptoms include headache, confusion, fatigue, stomach illness, dizziness, and sleep disturbance. Exertion may aggravate the symptoms. Those individuals with the lowest initial partial pressure of end-tidal pCO2 (the lowest concentration of carbon dioxide at the end of the respiratory cycle, a measure of higher alveolar ventilation) and corresponding high oxygen saturation levels tend to have a lower incidence of acute mountain sickness than those with high end-tidal pCO2 and low oxygen saturation levels.
Primary symptoms
Headaches are the primary symptom used to diagnose altitude sickness, although a headache is also a symptom of dehydration. A headache occurring at an altitude above 2,400 metres (7,900 ft) – a pressure of 76 kilopascals (0.75 atm) – combined with any one or more of the following symptoms, may indicate altitude sickness:
Severe symptoms
Symptoms that may indicate life-threatening altitude sickness include:
Pulmonary edema (fluid in the lungs)
Symptoms similar to bronchitis
Persistent dry cough
Fever
Shortness of breath even when resting
Cerebral edema (swelling of the brain)
Headache that does not respond to analgesics
Unsteady gait
Gradual loss of consciousness
Increased nausea and vomiting
Retinal hemorrhage
The most serious symptoms of altitude sickness arise from edema (fluid accumulation in the tissues of the body). At very high altitude, humans can get either high-altitude pulmonary edema (HAPE), or high-altitude cerebral edema (HACE). The physiological cause of altitude-induced edema is not conclusively established. It is currently believed, however, that HACE is caused by local vasodilation of cerebral blood vessels in response to hypoxia, resulting in greater blood flow and, consequently, greater capillary pressures. On the other hand, HAPE may be due to general vasoconstriction in the pulmonary circulation (normally a response to regional ventilation-perfusion mismatches) which, with constant or increased cardiac output, also leads to increases in capillary pressures. For those with HACE, dexamethasone may provide temporary relief from symptoms in order to keep descending under their own power. HAPE can progress rapidly and is often fatal. Symptoms include fatigue, severe dyspnea at rest, and cough that is initially dry but may progress to produce pink, frothy sputum. Descent to lower altitudes alleviates the symptoms of HAPE.
HACE is a life-threatening condition that can lead to coma or death. Symptoms include headache, fatigue, visual impairment, bladder dysfunction, bowel dysfunction, loss of coordination, paralysis on one side of the body, and confusion. Descent to lower altitudes may save those affected by HACE.
Cause
Altitude sickness can first occur at 1,500 metres (4,900 ft), with the effects becoming severe at extreme altitudes (greater than 5,500 metres (18,000 ft)). Only brief trips above 6,000 metres (20,000 ft) are possible and supplemental oxygen is needed to avert sickness.
As altitude increases, the available amount of oxygen to sustain mental and physical alertness decreases with the overall air pressure, though the relative percentage of oxygen in air, at about 21%, remains practically unchanged up to 21,000 metres (69,000 ft). The RMS velocities of diatomic nitrogen and oxygen are very similar and thus no change occurs in the ratio of oxygen to nitrogen until stratospheric heights.
Dehydration due to the higher rate of water vapor lost from the lungs at higher altitudes may contribute to the symptoms of altitude sickness. The rate of ascent, altitude attained, amount of physical activity at high altitude, as well as individual susceptibility, are contributing factors to the onset and severity of high-altitude illness.
Altitude sickness usually occurs following a rapid ascent and can usually be prevented by ascending slowly. In most of these cases, the symptoms are temporary and usually abate as altitude acclimatization occurs. However, in extreme cases, altitude sickness can be fatal.
High altitude illness can be classified according to the altitude: high (1,500–3,500 metres (4,900–11,500 ft)), very high (3,500–5,500 metres (11,500–18,000 ft)) and extreme (above 5,500 metres (18,000 ft)).
High altitude
At high altitude, 1,500 to 3,500 metres (4,900 to 11,500 ft), the onset of physiological effects of diminished inspiratory oxygen pressure (PiO2) includes decreased exercise performance and increased ventilation (lower arterial partial pressure of carbon dioxide: PCO2). While arterial oxygen transport may be only slightly impaired, the arterial oxygen saturation (SaO2) generally stays above 90%. Altitude sickness is common between 2,400 and 4,000 metres (7,900 and 13,100 ft) because of the large number of people who ascend rapidly to these altitudes.
Very high altitude
At very high altitude, 3,500 to 5,500 metres (11,500 to 18,000 ft), maximum SaO2 falls below 90% as the arterial PO2 falls below 60 mmHg. Extreme hypoxemia may occur during exercise, during sleep, and in the presence of high altitude pulmonary edema or other acute lung conditions. Severe altitude illness occurs most commonly in this range.
Extreme altitude
Above 5,500 metres (18,000 ft), marked hypoxemia, hypocapnia, and alkalosis are characteristic of extreme altitudes. Progressive deterioration of physiologic function eventually outstrips acclimatization. As a result, no permanent human habitation occurs above 6,000 metres (20,000 ft). A period of acclimatization is necessary when ascending to extreme altitude; abrupt ascent without supplemental oxygen for other than brief exposures invites severe altitude sickness.
Mechanism
The physiology of altitude sickness centres around the alveolar gas equation; the atmospheric pressure is low, but there is still 20.9% oxygen. Water vapour still occupies the same pressure too; this means that there is less oxygen pressure available in the lungs and blood. Compare these two equations, which give the amount of oxygen in the blood at sea level and at altitude:
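The equations themselves did not survive in this text; the following is a minimal reconstruction from the standard alveolar gas equation, using commonly quoted values that are assumptions rather than this article's own figures (sea-level barometric pressure $P_B = 760$ mmHg, water vapour pressure $P_{H_2O} = 47$ mmHg, respiratory quotient $RQ = 0.8$, arterial $P_aCO_2$ of 40 mmHg at sea level falling to roughly 7.5 mmHg with extreme hyperventilation, and $P_B \approx 253$ mmHg near the summit of Mount Everest):

$$P_AO_2 = F_iO_2 \times (P_B - P_{H_2O}) - \frac{P_aCO_2}{RQ}$$

At sea level: $P_AO_2 = 0.209 \times (760 - 47) - 40/0.8 \approx 99$ mmHg

At extreme altitude: $P_AO_2 = 0.209 \times (253 - 47) - 7.5/0.8 \approx 34$ mmHg

On these assumed figures, the inspired oxygen fraction $F_iO_2$ is identical in both cases; the fall in barometric pressure alone reduces the alveolar oxygen pressure to roughly a third of its sea-level value.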
The hypoxia leads to an increase in minute ventilation (hence both low CO2 and, subsequently, low bicarbonate); Hb increases through haemoconcentration and erythropoiesis. Alkalosis shifts the haemoglobin dissociation curve to the left, and 2,3-BPG increases to counter this. Cardiac output increases through an increase in heart rate. The body's response to high altitude includes the following:
↑ Erythropoietin → ↑ hematocrit and haemoglobin
↑ 2,3-BPG (allows ↑ release of O2 and a right shift on the Hb-O2 disassociation curve)
↑ kidney excretion of bicarbonate (use of acetazolamide can augment for treatment)
Chronic hypoxic pulmonary vasoconstriction (can cause right ventricular hypertrophy)
People with high-altitude sickness generally have a reduced hyperventilatory response, impaired gas exchange, fluid retention, or increased sympathetic drive. There is thought to be an increase in cerebral venous volume because of an increase in cerebral blood flow and hypocapnic cerebral vasoconstriction, causing oedema.
Diagnosis
Altitude sickness is typically self-diagnosed, since the symptoms (nausea, vomiting, and headache) are consistent and can generally be traced to a rapid change in altitude or oxygen levels. However, some symptoms may be confused with dehydration. Some severe cases may require professional diagnosis, which can be assisted with multiple different methods, such as using an MRI or CT scan to check for abnormal buildup of fluids in the lung or brain.
Prevention
Ascending slowly is the best way to avoid altitude sickness. Avoiding strenuous activity such as skiing, hiking, etc. in the first 24 hours at high altitude may reduce the symptoms of AMS. Alcohol and sleeping pills are respiratory depressants, and thus slow down the acclimatization process and should be avoided. Alcohol also tends to cause dehydration and exacerbates AMS. Thus, avoiding alcohol consumption in the first 24–48 hours at a higher altitude is optimal.
Pre-acclimatization
Pre-acclimatization is when the body develops tolerance to low oxygen concentrations before ascending to an altitude. It significantly reduces risk because less time has to be spent at altitude to acclimatize in the traditional way. Additionally, because less time has to be spent on the mountain, less food and supplies have to be taken up. Several commercial systems exist that use altitude tents, so called because they mimic altitude by reducing the percentage of oxygen in the air while keeping air pressure constant to the surroundings. Examples of pre-acclimation measures include remote ischaemic preconditioning, using hypobaric air breathing in order to simulate altitude, and positive end-expiratory pressure.
Altitude acclimatization
Altitude acclimatization is the process of adjusting to decreasing oxygen levels at higher elevations, in order to avoid altitude sickness. Once above approximately 3,000 metres (10,000 ft) – a pressure of 70 kilopascals (0.69 atm) – most climbers and high-altitude trekkers take the "climb-high, sleep-low" approach. For high-altitude climbers, a typical acclimatization regimen might be to stay a few days at a base camp, climb up to a higher camp (slowly), and then return to base camp. A subsequent climb to the higher camp then includes an overnight stay. This process is then repeated a few times, each time extending the time spent at higher altitudes to let the body adjust to the oxygen level there, a process that involves the production of additional red blood cells. Once the climber has acclimatized to a given altitude, the process is repeated with camps placed at progressively higher elevations. The rule of thumb is to ascend no more than 300 m (1,000 ft) per day to sleep. That is, one can climb from 3,000 m (9,800 ft) (70 kPa or 0.69 atm) to 4,500 m (15,000 ft) (58 kPa or 0.57 atm) in one day, but one should then descend back to 3,300 m (10,800 ft) (67.5 kPa or 0.666 atm) to sleep. This process cannot safely be rushed, and this is why climbers need to spend days (or even weeks at times) acclimatizing before attempting to climb a high peak. Simulated altitude equipment such as altitude tents provide hypoxic (reduced oxygen) air, and are designed to allow partial pre-acclimation to high altitude, reducing the total time required on the mountain itself.
Altitude acclimatization is necessary for some people who move rapidly from lower altitudes to higher altitudes.
Medications
The drug acetazolamide (trade name Diamox) may help some people making a rapid ascent to sleeping altitude above 2,700 metres (9,000 ft), and it may also be effective if started early in the course of AMS. Acetazolamide can be taken before symptoms appear as a preventive measure at a dose of 125 mg twice daily. The Everest Base Camp Medical Centre cautions against its routine use as a substitute for a reasonable ascent schedule, except where rapid ascent is forced by flying into high altitude locations or due to terrain considerations. The Centre suggests a dosage of 125 mg twice daily for prophylaxis, starting from 24 hours before ascending until a few days at the highest altitude or on descending; with 250 mg twice daily recommended for treatment of AMS. The Centers for Disease Control and Prevention (CDC) suggest the same dose for prevention of 125 mg acetazolamide every 12 hours. Acetazolamide, a mild diuretic, works by stimulating the kidneys to secrete more bicarbonate in the urine, thereby acidifying the blood. This change in pH stimulates the respiratory center to increase the depth and frequency of respiration, thus speeding the natural acclimatization process. An undesirable side-effect of acetazolamide is a reduction in aerobic endurance performance. Other minor side effects include a tingle-sensation in hands and feet. Although a sulfonamide, acetazolamide is a non-antibiotic and has not been shown to cause life-threatening allergic cross-reactivity in those with a self-reported sulfonamide allergy. A dosage of 1,000 mg/day will produce a 25% decrease in performance, on top of the reduction due to high-altitude exposure. The CDC advises that dexamethasone be reserved for treatment of severe AMS and HACE during descents, and notes that nifedipine may prevent HAPE. There is insufficient evidence to determine the safety of sumatriptan and if it may help prevent altitude sickness. Despite their popularity, antioxidant treatments have not been found to be effective medications for prevention of AMS. Interest in phosphodiesterase inhibitors such as sildenafil has been limited by the possibility that these drugs might worsen the headache of mountain sickness. A promising possible preventive for altitude sickness is myo-inositol trispyrophosphate (ITPP), which increases the amount of oxygen released by hemoglobin.
Prior to the onset of altitude sickness, ibuprofen is a suggested non-steroidal anti-inflammatory and painkiller that can help alleviate both the headache and nausea associated with AMS. It has not been studied for the prevention of cerebral edema (swelling of the brain) associated with extreme symptoms of AMS.
Over-the-counter herbal supplements and traditional medicines
Herbal supplements and traditional medicines are sometimes suggested to prevent high altitude sickness, including ginkgo biloba, R. crenulata, minerals such as iron, antacids, and hormone-based supplements such as medroxyprogesterone and erythropoietin. Medical evidence to support the effectiveness and safety of these approaches is often contradictory or lacking. Indigenous peoples of the Americas, such as the Aymaras of the Altiplano, have for centuries chewed coca leaves to try to alleviate the symptoms of mild altitude sickness. This therapy has not yet been proven effective in a clinical study. In Chinese and Tibetan traditional medicine, an extract of the root tissue of Radix rhodiola is often taken in order to prevent the symptoms of high altitude sickness; however, no clear medical studies have confirmed the effectiveness or safety of this extract.
Oxygen enrichment
In high-altitude conditions, oxygen enrichment can counteract the hypoxia-related effects of altitude sickness. A small amount of supplemental oxygen reduces the equivalent altitude in climate-controlled rooms. At 3,400 metres (11,200 ft) (67 kPa or 0.66 atm), raising the oxygen concentration level by 5% via an oxygen concentrator and an existing ventilation system provides an effective altitude of 3,000 m (10,000 ft) (70 kPa or 0.69 atm), which is more tolerable for those unaccustomed to high altitudes. Oxygen from gas bottles or liquid containers can be applied directly via a nasal cannula or mask. Oxygen concentrators based upon pressure swing adsorption (PSA), vacuum swing adsorption (VSA), or vacuum-pressure swing adsorption (VPSA) can be used to generate the oxygen if electricity is available. Stationary oxygen concentrators typically use PSA technology, which has performance degradations at the lower barometric pressures at high altitudes </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Amblyopia and can you explain it in depth? [/INST] Amblyopia, also called lazy eye, is a disorder of sight in which the brain fails to fully process input from one eye and over time favors the other eye. It results in decreased vision in an eye that typically appears normal in other respects. Amblyopia is the most common cause of decreased vision in a single eye among children and younger adults. The cause of amblyopia can be any condition that interferes with focusing during early childhood. This can occur from poor alignment of the eyes (strabismic), an eye being irregularly shaped such that focusing is difficult, one eye being more nearsighted or farsighted than the other (refractive), or clouding of the lens of an eye (deprivational). After the underlying cause is addressed, vision is not restored right away, as the mechanism also involves the brain. Amblyopia can be difficult to detect, so vision testing is recommended for all children around the ages of four to five. Early detection improves treatment success. Glasses may be all the treatment needed for some children. If this is not sufficient, treatments which encourage or force the child to use the weaker eye are used. This is done by either using a patch or putting atropine in the stronger eye. Without treatment, amblyopia typically persists. Treatment in adulthood is usually much less effective. Amblyopia begins by the age of five. In adults, the disorder is estimated to affect 1–5% of the population. While treatment improves vision, it does not typically restore it to normal in the affected eye. Amblyopia was first described in the 1600s. The condition may make people ineligible to be pilots or police officers. The word amblyopia is from Greek ἀμβλύς amblys, meaning "blunt", and ὤψ ōps, meaning "sight".
Signs and symptoms
Many people with amblyopia, especially those who only have a mild form, are not aware they have the condition until tested at older ages, since the vision in their stronger eye is normal. People with amblyopia typically have poor stereo vision, since it requires both eyes. In the affected eye, they may further have poor pattern recognition, poor visual acuity, and low sensitivity to contrast and motion. Amblyopia is characterized by several functional abnormalities in spatial vision, including reductions in visual acuity, contrast sensitivity function, and vernier acuity, as well as spatial distortion, abnormal spatial interactions, and impaired contour detection. In addition, individuals with amblyopia have binocular abnormalities such as impaired stereoacuity (stereoscopic acuity) and abnormal binocular summation. Also, central vision in amblyopes is more crowded than central vision in normal observers. These deficits are usually specific to the amblyopic eye. Subclinical deficits of the "better" eye have also been demonstrated. People with amblyopia also have problems of binocular vision such as limited stereoscopic depth perception and usually have difficulty seeing the three-dimensional images in hidden stereoscopic displays such as autostereograms. Perception of depth from monocular cues such as size, perspective, and motion parallax remains normal.
Cause
Amblyopia has three main causes:
Strabismic: by strabismus (misaligned eyes)
Refractive: by anisometropia (difference of a certain degree of nearsightedness, farsightedness, or astigmatism), or by significant amount of equal refractive error in both eyes
Deprivational: by deprivation of vision early in life by vision-obstructing disorders such as congenital cataract
Strabismus
Strabismus, sometimes also incorrectly called lazy eye, is a condition in which the eyes are misaligned. Strabismus usually results in normal vision in the preferred sighting (or "fellow") eye (the eye that the person prefers to use), but may cause abnormal vision in the deviating or strabismic eye due to the difference between the images projecting to the brain from the two eyes.
Adult-onset strabismus usually causes double vision (diplopia), since the two eyes are not fixed on the same object.
Children's brains are more neuroplastic, and so can more easily adapt by suppressing images from one of the eyes, eliminating the double vision. This plastic response of the brain interrupts the brain's normal development, resulting in amblyopia. Recent evidence points to a cause of infantile strabismus lying with the input to the visual cortex. Those with strabismic amblyopia tend to show ocular motion deficits when reading, even when they use the nonamblyopic eye. In particular, they tend to make more saccades per line than persons with normal stereo vision, and to have a reduced reading speed, especially when reading a text with small font size. Strabismic amblyopia is treated by clarifying the visual image with glasses, or encouraging use of the amblyopic eye with an eyepatch over the dominant eye or pharmacologic penalization of the better eye. Penalization usually consists of applying atropine drops to temporarily paralyze the accommodation reflex, leading to the blurring of vision in the good eye. It also dilates the pupil. This helps to prevent the bullying and teasing associated with wearing a patch, although sometimes application of the eye drops is challenging. The ocular alignment itself may be treated with surgical or nonsurgical methods, depending on the type and severity of the strabismus.
Refractive
Refractive amblyopia may result from anisometropia (unequal refractive error between the two eyes). Anisometropia exists when there is a difference in the power between the two eyes. The eye which provides the brain with a clearer image typically becomes the dominant eye. The image in the other eye is blurred, which results in abnormal development of one half of the visual system. Refractive amblyopia is usually less severe than strabismic amblyopia and is commonly missed by primary care physicians because of its less dramatic appearance and lack of obvious physical manifestation, such as with strabismus. Given that the refractive correction of anisometropia by means of spectacles typically leads to different image magnification for the two eyes, which may in turn prevent binocular vision, a refractive correction using contact lenses is to be considered. Pediatric refractive surgery is also a treatment option, in particular if conventional approaches have failed due to aniseikonia or lack of compliance or both. Frequently, amblyopia is associated with a combination of anisometropia and strabismus. In some cases, the vision between the eyes can differ to the point where one eye has twice average vision while the other eye is completely blind.
Deprivation and occlusion
Deprivation amblyopia (amblyopia ex anopsia) results when the ocular media become opaque, such as is the case with congenital cataract or corneal haziness. These opacities prevent adequate visual input from reaching the eye and disrupt development. If not treated in a timely fashion, amblyopia may persist even after the cause of the opacity is removed. Sometimes, drooping of the eyelid (ptosis) or some other problem causes the upper eyelid to physically occlude a child's vision, which may cause amblyopia quickly. Occlusion amblyopia may be a complication of a hemangioma that blocks some or all of the eye. Other possible causes of deprivation and occlusion amblyopia include obstruction in the vitreous and aphakia. Deprivation amblyopia accounts for less than 3% of all individuals affected by amblyopia.
Pathophysiology
Amblyopia is a developmental problem in the brain, not any intrinsic, organic neurological problem in the eyeball (although organic problems can lead to amblyopia, which can continue to exist after the organic problem has been resolved by medical intervention).
The part of the brain receiving images from the affected eye is not stimulated properly and does not develop to its full visual potential. This has been confirmed by direct brain examination. David H. Hubel and Torsten Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for their work in showing the extent of the damage to ocular dominance columns produced in kittens by sufficient visual deprivation during the so-called "critical period". The maximum "critical period" in humans is from birth to two years old.
Diagnosis
Amblyopia is diagnosed by identifying low visual acuity in one or both eyes, out of proportion to the structural abnormality of the eye and excluding other visual disorders as causes for the lowered visual acuity. It can be defined as an interocular difference of two lines or more in acuity (e.g., on a Snellen chart) when the eye optics are maximally corrected. In young children, visual acuity is difficult to measure and can be estimated by observing the reactions of the patient when one eye is covered, including observing the patient's ability to follow objects with one eye.
Stereotests like the Lang stereotest are not reliable exclusion tests for amblyopia. A person who passes the Lang stereotest is unlikely to have strabismic amblyopia, but could nonetheless have refractive or deprivational amblyopia. Binocular retinal birefringence scanning may be able to identify, even in very young children, amblyopia that is associated with strabismus, microstrabismus, or reduced fixation accuracy. Diagnosis and treatment of amblyopia as early as possible are necessary to keep the vision loss to a minimum. Screening for amblyopia is recommended in all people between three and five years of age.
Treatment
Treatment of strabismic or anisometropic amblyopia consists of correcting the optical deficit (wearing the necessary spectacle prescription) and often forcing use of the amblyopic eye, by patching the good eye, or instilling topical atropine in the good eye, or both. Atropine appears to result in similar outcomes to patching. If the good eye is overpatched or overpenalized when treating amblyopia, "reverse amblyopia" can result. Eye patching is usually done on a part-time schedule of about 4–6 hours a day. Treatment is continued as long as vision improves. It is not worthwhile continuing to patch for more than 6 months if there is no further improvement. Deprivation amblyopia is treated by removing the opacity as soon as possible followed by patching or penalizing the good eye to encourage the use of the amblyopic eye </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Amoebiasis and can you explain it in depth? [/INST] Amoebiasis, or amoebic dysentery, is an infection of the intestines caused by a parasitic amoeba, Entamoeba histolytica. Amoebiasis can be present with no, mild, or severe symptoms. Symptoms may include lethargy, loss of weight, colonic ulcerations, abdominal pain, diarrhea, or bloody diarrhea. Complications can include inflammation and ulceration of the colon with tissue death or perforation, which may result in peritonitis. Anemia may develop due to prolonged gastric bleeding. Cysts of Entamoeba can survive for up to a month in soil or for up to 45 minutes under fingernails. Invasion of the intestinal lining results in bloody diarrhea. If the parasite reaches the bloodstream it can spread through the body, most frequently ending up in the liver where it can cause amoebic liver abscesses. Liver abscesses can occur without previous diarrhea. Diagnosis is typically made by stool examination using microscopy, but it can be difficult to distinguish E. histolytica from other, harmless Entamoeba species. An increased white blood cell count may be present in severe cases. The most accurate test is finding specific antibodies in the blood, but it may remain positive following treatment. Bacterial colitis can result in similar symptoms. Prevention of amoebiasis is by improved sanitation, including separating food and water from faeces. There is no vaccine. There are two treatment options depending on the location of the infection. Amoebiasis in tissues is treated with either metronidazole, tinidazole, nitazoxanide, dehydroemetine or chloroquine, while luminal infection is treated with diloxanide furoate or iodoquinol. Effective treatment against all stages of the disease may require a combination of medications. Infections without symptoms may be treated with just one antibiotic, and infections with symptoms are treated with two antibiotics. Amoebiasis is present all over the world, though most cases occur in the developing world. About 480 million people are currently infected, with about 40 million new cases per year with significant symptoms. This results in the death of between 40,000 and 100,000 people a year. The first case of amoebiasis was documented in 1875, and in 1891 the disease was described in detail, resulting in the terms amoebic dysentery and amoebic liver abscess. Further evidence from the Philippines in 1913 found that upon swallowing cysts of E. histolytica volunteers developed the disease.
Signs and symptoms
Most infected people, about 90%, are asymptomatic, but this disease has the potential to become serious. It is estimated that about 40,000 to 100,000 people worldwide die annually due to amoebiasis. Infections can sometimes last for years if there is no treatment. Symptoms take from a few days to a few weeks to develop and manifest themselves, but usually it is about two to four weeks. Symptoms can range from mild diarrhea to dysentery with blood, coupled with intense abdominal pains. Invasive infection may also give rise to extra-intestinal complications, including colitis and liver, lung, or brain abscesses. The blood comes from bleeding lesions created by the amoebae invading the lining of the colon. In about 10% of invasive cases the amoebae enter the bloodstream and may travel to other organs in the body. Most commonly this means the liver, as this is where blood from the intestine reaches first, but they can end up almost anywhere in the body. Onset time is highly variable and the average asymptomatic infection persists for over a year. It is theorized that the absence of symptoms or their intensity may vary with such factors as strain of amoeba, immune response of the host, and perhaps associated bacteria and viruses. In asymptomatic infections, the amoeba lives by eating and digesting bacteria and food particles in the gut, a part of the gastrointestinal tract. It does not usually come in contact with the intestine itself due to the protective layer of mucus that lines the gut. Disease occurs when the amoeba comes in contact with the cells lining the intestine. It then secretes the same substances it uses to digest bacteria, which include enzymes that destroy cell membranes and proteins. This process can lead to penetration and digestion of human tissues, resulting first in flask-shaped ulcerations in the intestine. Entamoeba histolytica ingests the destroyed cells by phagocytosis and is often seen with red blood cells (a process known as erythrophagocytosis) inside when viewed in stool samples. Especially in Latin America, a granulomatous mass (known as an amoeboma) may form in the wall of the ascending colon or rectum due to long-lasting immunological cellular response, and is sometimes confused with cancer. The ingestion of one viable cyst may cause an infection. Steroid therapy can occasionally provoke severe amoebic colitis in people with any E. histolytica infection. This bears high mortality: on average more than 50% with severe colitis die.
Cause
Amoebiasis is an infection caused by the amoeba Entamoeba histolytica.
Transmission
Amoebiasis is usually transmitted by the fecal-oral route, but it can also be transmitted indirectly through contact with dirty hands or objects as well as by anal-oral contact. Infection is spread through ingestion of the cyst form of the parasite, a semi-dormant and hardy structure found in feces. Any non-encysted amoebae, or trophozoites, die quickly after leaving the body but may also be present in stool: these are rarely the source of new infections. Since amoebiasis is transmitted through contaminated food and water, it is often endemic in regions of the world with limited modern sanitation systems, including México, Central America, western South America, South Asia, and western and southern Africa. Amoebic dysentery is one form of travelers' diarrhea, although most travelers' diarrhea is bacterial or viral in origin.
Pathogenesis
Amoebiasis results from tissue destruction induced by the E. histolytica parasite.
E. histolytica causes tissue damage by three main events: direct host cell killing, inflammation, and parasite invasion.
Diagnosis
With colonoscopy it is possible to detect small ulcers of 3–5 mm, but diagnosis may be difficult as the mucous membrane between these areas can look either healthy or inflamed.
Trophozoites may be identified at the ulcer edge or within the tissue, using immunohistochemical staining with specific anti-E. histolytica antibodies. Asymptomatic human infections are usually diagnosed by finding cysts shed in the stool. Various flotation or sedimentation procedures have been developed to recover the cysts from fecal matter, and stains help to visualize the isolated cysts for microscopic examination. Since cysts are not shed constantly, a minimum of three stools are examined. In symptomatic infections, the motile form (the trophozoite) is often seen in fresh feces. Serological tests exist, and most infected individuals (with symptoms or not) test positive for the presence of antibodies. The levels of antibody are much higher in individuals with liver abscesses. Serology only becomes positive about two weeks after infection. More recent developments include a kit that detects the presence of amoeba proteins in the feces, and another that detects amoeba DNA in feces. These tests are not in widespread use due to their expense. Microscopy is still by far the most widespread method of diagnosis around the world. However, it is not as sensitive or accurate in diagnosis as the other tests available. It is important to distinguish the E. histolytica cyst from the cysts of nonpathogenic intestinal protozoa such as Entamoeba coli by its appearance. E. histolytica cysts have a maximum of four nuclei, while the commensal Entamoeba coli cyst has up to eight nuclei. Additionally, in E. histolytica, the endosome is centrally located in the nucleus, while it is usually off-center in Entamoeba coli. Finally, chromatoidal bodies in E. histolytica cysts are rounded, while they are jagged in Entamoeba coli. However, other species, Entamoeba dispar and E. moshkovskii, are also commensals and cannot be distinguished from E. histolytica under the microscope. As E. dispar is much more common than E. histolytica in most parts of the world, many incorrect diagnoses of E. histolytica infection take place. The WHO recommends that infections diagnosed by microscopy alone should not be treated if they are asymptomatic and there is no other reason to suspect that the infection is actually E. histolytica. Detection of cysts or trophozoites in stools under the microscope may require examination of several samples over several days to determine if they are present, because cysts are shed intermittently and may not show up in every sample. Typically, the organism can no longer be found in the feces once the disease goes extra-intestinal. Serological tests are useful in detecting infection by E. histolytica if the organism goes extra-intestinal and in excluding the organism from the diagnosis of other disorders. An Ova & Parasite (O&P) test or an E. histolytica fecal antigen assay is the proper assay for intestinal infections. Since antibodies may persist for years after clinical cure, a positive serological result may not necessarily indicate an active infection. A negative serological result, however, can be equally important in excluding suspected tissue invasion by E. histolytica. Stool antigen detection tests have helped to overcome some of the limitations of stool microscopy. Antigen detection tests are easy to use, but they have variable sensitivity and specificity, especially in low-endemic areas. Polymerase chain reaction (PCR) is considered the gold standard for diagnosis but remains underutilized.
Prevention
To help prevent the spread of amoebiasis around the home:
Wash hands thoroughly with soap and hot running water for at least 10 seconds after using the toilet or changing a baby's diaper, and before handling food.
Clean bathrooms and toilets often; pay particular attention to toilet seats and taps.
Avoid sharing towels or face washers.
To help prevent infection:
Avoid raw vegetables when in endemic areas, as they may have been fertilized using human feces.
Boil water or treat with iodine tablets.
Avoid eating street foods, especially in public places where others are sharing sauces in one container.
Good sanitary practice, as well as responsible sewage disposal or treatment, are necessary for the prevention of E. histolytica infection on an endemic level. E. histolytica cysts are usually resistant to chlorination; therefore, sedimentation and filtration of water supplies are necessary to reduce the incidence of infection. E. histolytica cysts may be recovered from contaminated food by methods similar to those used for recovering Giardia lamblia cysts from feces. Filtration is probably the most practical method for recovery from drinking water and liquid foods. E. histolytica cysts must be distinguished from cysts of other parasitic (but nonpathogenic) protozoa and from cysts of free-living protozoa as discussed above. Recovery procedures are not very accurate; cysts are easily lost or damaged beyond recognition, which leads to many false negative results in recovery tests.
Treatment
E. histolytica infections occur in the intestine and (in people with symptoms) in the tissue of the intestine and/or liver. Those with symptoms require treatment with two medications, an amoebicidal tissue-active agent and a luminal cysticidal agent. Individuals that are asymptomatic only need a luminal cysticidal agent.
Prognosis
In the majority of cases, amoebas remain in the gastrointestinal tract of the hosts. Severe ulceration of the gastrointestinal mucosal surfaces occurs in less than 16% of cases. In fewer cases, the parasite invades the soft tissues, most commonly the liver. Only rarely are masses formed (amoebomas) that lead to intestinal obstruction; these may be mistaken for carcinoma of the caecum or an appendicular mass. Other local complications include bloody diarrhea and pericolic and pericaecal abscesses. Complications of hepatic amoebiasis include subdiaphragmatic abscess, perforation through the diaphragm into the pericardium or pleural cavity, perforation into the abdominal cavity (amoebic peritonitis), and perforation of the skin (amoebiasis cutis). Pulmonary amoebiasis can arise from liver lesions by spread through the blood or by perforation of the pleural cavity and lung. It can cause lung abscess, pulmonopleural fistula, empyema, and bronchopleural fistula. The parasite can also reach the brain through blood vessels and cause amoebic brain abscess and amoebic meningoencephalitis. Cutaneous amoebiasis can also occur in skin around sites of colostomy wounds, the perianal region, regions overlying visceral lesions, and at the site of drainage of a liver abscess. Urogenital tract amoebiasis derived from intestinal lesions can cause amoebic vulvovaginitis (May's disease), rectovesical fistula, and rectovaginal fistula. Entamoeba histolytica infection is associated with malnutrition and stunting of growth in children.
Epidemiology
Amoebiasis caused about 55,000 deaths worldwide in 2010, down from 68,000 in 1990.
In older textbooks it is often stated that 10% of the world's population is infected with Entamoeba histolytica. Nevertheless, this means that there are up to 50 million true E. histolytica infections, and approximately seventy thousand die each year, mostly from liver abscesses or other complications. Although usually considered a tropical parasite, the first case reported (in 1875) was actually in St. Petersburg in Russia, near the Arctic Circle. Infection is more common in warmer areas, but this is because of both poorer hygiene and the parasitic cysts surviving longer in warm, moist conditions.
History
Amoebiasis was first described by Fedor A. Lösch in 1875, in northern Russia </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anaplasmosis and can you explain it in depth? [/INST] Anaplasmosis is a tick-borne disease affecting ruminants, dogs, and horses, and is caused by Anaplasma bacteria. Anaplasmosis is an infectious but not contagious disease. Anaplasmosis can be transmitted through mechanical and biological vector processes. Anaplasmosis can also be referred to as "yellow bag" or "yellow fever" because the infected animal can develop a jaundiced look. Other signs of infection include weight loss, diarrhea, paleness of the skin, aggressive behavior, and high fever. Many different tick species can carry the bacteria that cause anaplasmosis. The two major bacterial pathogens are Anaplasma marginale and Anaplasma phagocytophilum. These microorganisms are Gram-negative and infect red blood cells. Once the host is infected with anaplasmosis, the immune system will try to fight off and kill the infected red blood cells, but will also kill healthy red blood cells. The Anaplasma sparouinense species is responsible for a rare zoonosis, Sparouine anaplasmosis, detected only in French Guiana, South America. This disease was first described in a clandestine gold miner working deep in the rainforest. Infection of his red blood cells led to a severe deterioration of his health and required his hospitalization. Molecular typing showed that Anaplasma sparouinense is distinct from all known species and more genetically related to recently described Anaplasma species causing infections in the rainforest wild fauna of Brazil. While there are no current live or inactivated vaccines effective for all strains of A. marginale approved by the USDA for anaplasmosis, there are other means of prevention. Tick and fly control for herds of ruminants can be effective but is also labor intensive. Chemical methods can also be used, including sanitizing surgical equipment after each use. Tetracycline drugs are the most common treatment for anaplasmosis, and can provide the animal with immunity for a period of time. The disease is more common in the South and West parts of the United States, but is no longer considered a major problem since the use of tetracycline drugs.
Transmission
Mechanical and biological vector transmission work in different ways, but both lead to infection of the red blood cells. Mechanical transmission happens in two ways: one is when red blood cells are inoculated with the blood parasite through surgical equipment, including needles, dehorners, ear taggers, castrating knives, and tattoo instruments. The other mechanical transmission mode is through the mouthparts of biting flies that carry an Anaplasma species of blood parasite. Biological vector transmission is through ticks that carry a blood parasite able to cause anaplasmosis. The most common anaplasmosis-causing tick is Ixodes scapularis, also known as the black-legged tick or the deer tick. Ticks that harbor any of several Anaplasma species can transmit this disease through a bite. The blood parasite survives and can multiply in the tick, and can sit dormant for months without being transmitted to an animal. When an animal is bitten by a tick carrying a blood parasite, the parasite can enter the new host and cause infection. Once a host is infected with a species of Anaplasma, the parasite multiplies in the bloodstream and attaches to red blood cells. The immune system will attempt to kill the infected blood cells, but will also kill uninfected red blood cells in the process. The destruction of red blood cells then outpaces the production of new ones, causing the host to become anemic and leading to many other symptoms. Once infected with anaplasmosis, cattle remain carriers of the infectious disease for life, and calves born to carriers will also carry the disease.
Signs and symptoms
Classic signs and symptoms of anaplasmosis will not occur until 3–6 weeks after infection. The most common symptoms of anaplasmosis include fever, decreased numbers of white blood cells and platelets in the bloodstream, and abnormally elevated levels of liver enzymes. The erythema chronicum migrans rash may be seen with anaplasmosis, as it is co-transmitted in 10% of Lyme disease cases.
Anemia may be severe and result in cardiovascular changes such as an increase in heart rate. Blood in the urine may occur due to the lysis of red blood cells. General systemic signs include diarrhea, anorexia, and weight loss. Infected animals may develop a jaundiced look which then turns into paleness around the eyes, muzzle, lips, and teats of the cattle. All cattle are susceptible to infection by Anaplasma marginale, but the severity worsens with age. Older cattle tend to exhibit the most severe clinical symptoms; cattle aged 1–3 years may also show severe symptoms but are able to recover more easily.
Causes
The two major species that cause anaplasmosis in ruminants include Anaplasma marginale and Anaplasma phagocytophilum. Anaplasma marginale is found worldwide and is transmitted by Rhipicephalus ticks. Anaplasma phagocytophilum is also found worldwide, mainly transmitted by Ixodes ticks. Other species that cause anaplasmosis in specific species include:
Cattle:
Anaplasma centrale - found mainly in South America, Africa and the Middle East
Sheep and goats:
Anaplasma ovis - found worldwide. There is a prevalence of 82.9% in sheep and 74.9% in goats. This species is the most prevalent cause of anaplasmosis in sheep and goats, although Anaplasma phagocytophilum can also cause the disease. Anaplasma phagocytophilum has a prevalence of 11.9% in sheep and 15.2% in goats.
Morphology
There are many strains of Anaplasma marginale, all with differing morphology, antigenic properties, protein sequences, and ability to be transmitted by ticks. Major surface proteins (MSPs) have been found to play a major role in infection by Anaplasma marginale. Of the six MSPs found on this species, three do not seem to differ between strains: MSP1a, MSP4, and MSP5. The msp1a gene, which codes for MSP1a, is used as a marker for the identification of Anaplasma marginale because it has been shown to be conserved in the multiplication of rickettsia in cattle and ticks and has been shown to be involved in adhesion to bovine erythrocytes and tick cells. Anaplasma phagocytophilum is a Gram-negative bacterium that does not have lipopolysaccharides or peptidoglycan. The outer membrane does not have a capsule, and is coarse with irregular periplasmic spaces. This species was originally included in the genus Ehrlichia (as Ehrlichia phagocytophila), but is now included in the genus Anaplasma (Anaplasma phagocytophilum).
Prevention
Currently, no live or inactivated vaccines effective against all strains of A. marginale have been approved by the USDA. Some vaccines that rely on erythrocyte-derived antigen sources provide immunity or prevent clinical disease, although they do not prevent cattle from becoming infected with A. marginale. Another means of prevention is testing all ruminants in a herd and removing any individuals that test positive for anaplasmosis, leading to an anaplasmosis-free herd. Vector control measures can also be used. Tick control is widely practiced in some regions, including parts of Africa, but is rarely used in the United States because it is labor-intensive and expensive. In contrast, fly control is effective and can be achieved in many ways: chemical agents, sanitation methods (such as cleaning stalls and pens regularly, managing manure, and protecting feed), and biological control by natural enemies of flies (such as predatory mites and parasitoid wasps). Iatrogenic transmission can be prevented by avoiding the reuse of needles and by sanitizing medical equipment between uses. Antimicrobial prophylaxis with drugs such as tetracycline and imidocarb can also be given to healthy ruminants to decrease the clinical effects of a subsequent infection, although these drugs are more commonly used to treat active infection.
Treatment
The mainstay of treatment is the tetracycline class of drugs (including tetracycline, chlortetracycline, oxytetracycline, rolitetracycline, doxycycline, and minocycline) and imidocarb. An injection of a tetracycline drug can give ruminants immunity to Anaplasma species for at least eight months. Imidocarb has been shown to be highly effective against Anaplasma marginale, but it has been identified as a possible carcinogen and is not approved in the United States or Europe. South Africa, Australia, Israel, and some South American countries have used live vaccines containing infectious Anaplasma centrale to prevent infection with Anaplasma marginale. Live vaccines are prohibited in the United States, where vaccines consisting of killed Anaplasma marginale derived from infected bovine erythrocytes have been produced instead; these can provide some immunity but leave cattle susceptible to other strains of Anaplasma marginale. Supportive therapy such as blood products and fluids may be necessary.
Epidemiology
In the United States, anaplasmosis is notably present in the South and West, where its tick hosts (Ixodes spp.) are found. Antibody prevalence in humans also appears to be increasing in Europe </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anaplastic astrocytoma and can you explain it in depth? [/INST] Anaplastic astrocytoma is a rare WHO grade III type of astrocytoma, which is a type of cancer of the brain. In the United States, the annual incidence rate for anaplastic astrocytoma is 0.44 per 100,000 people.
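To give a sense of scale, the cited rate can be applied to an assumed US population of roughly 330 million (a figure not given above, used purely for illustration):

  0.44 / 100,000 × 330,000,000 ≈ 1,450 new cases per year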
Signs and symptoms
Initial presenting symptoms are most commonly headache, depressed mental status, focal neurological deficits, and/or seizures. The mean interval between onset of symptoms and diagnosis is approximately 1.5–2 years but is highly variable; the tumor's growth rate is intermediate between that of low-grade astrocytomas and glioblastomas. Seizures are less common among patients with anaplastic astrocytomas than among those with low-grade lesions.
Causes
Most high-grade gliomas occur sporadically or without identifiable cause. However, a small proportion (less than 5%) of persons with malignant astrocytoma have a definite or suspected hereditary predisposition, mainly neurofibromatosis type I, Li-Fraumeni syndrome, hereditary nonpolyposis colorectal cancer, and tuberous sclerosis. Anaplastic astrocytomas have also been associated with previous exposure to vinyl chloride and to high doses of radiation therapy to the brain.
Pathology
Anaplastic astrocytomas fall under the category of high-grade gliomas (WHO grade III–IV), which are pathologically undifferentiated gliomas that carry a poor clinical prognosis. Unlike glioblastomas (WHO grade IV), anaplastic astrocytomas lack vascular proliferation and necrosis on pathologic evaluation. Compared to grade II tumors, anaplastic astrocytomas are more cellular, demonstrate more atypia, and show mitoses.
Treatment
The standard initial treatment is to remove as much of the tumor as possible without worsening neurologic deficits </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anorexia nervosa and can you explain it in depth? [/INST] Anorexia nervosa, often referred to simply as anorexia, is an eating disorder characterized by low weight, food restriction, body image disturbance, fear of gaining weight, and an overpowering desire to be thin. Anorexia is a term of Greek origin: an- (ἀν-, prefix denoting negation) and orexis (ὄρεξις, "appetite"), translating literally to "a loss of appetite"; the adjective nervosa indicates the functional and non-organic nature of the disorder. The term anorexia nervosa was coined by Gull in 1873; despite the literal translation, the symptom of hunger is frequently present, and the pathological control of this instinct is a source of satisfaction for the patients.
Individuals with anorexia nervosa commonly see themselves as overweight, although they are in fact underweight. The DSM-5 describes this perceptual symptom as "disturbance in the way in which one's body weight or shape is experienced". In research and clinical settings, this symptom is called "body image disturbance". Individuals with anorexia nervosa also often deny that they have a problem with low weight. They may weigh themselves frequently, eat small amounts, and only eat certain foods. Some exercise excessively, force themselves to vomit (in the "anorexia purging" subtype), use laxatives to lose weight and control body shape, and/or binge eat. Medical complications may include osteoporosis, infertility, and heart damage, among others. Women will often stop having menstrual periods. In extreme cases, patients with anorexia nervosa who continually refuse significant dietary intake and weight restoration interventions, and are declared incompetent to make decisions by a psychiatrist, may be fed by force under restraint via nasogastric tube after asking their parents or proxies to make the decision for them.

The cause of anorexia is currently unknown. There appear to be some genetic components, with identical twins more often affected than fraternal twins. Cultural factors also appear to play a role, with societies that value thinness having higher rates of the disease. Additionally, it occurs more commonly among those involved in activities that value thinness, such as high-level athletics, modeling, and dancing. Anorexia often begins following a major life change or stress-inducing event. The diagnosis requires a significantly low weight, and the severity of disease is based on body mass index (BMI) in adults: mild disease corresponds to a BMI of greater than 17, moderate a BMI of 16 to 17, severe a BMI of 15 to 16, and extreme a BMI of less than 15. In children, a BMI-for-age percentile of less than the 5th percentile is often used.

Treatment of anorexia involves restoring the patient to a healthy weight, treating the underlying psychological problems, and addressing behaviors that promote the problem. While medications do not help with weight gain, they may be used to help with associated anxiety or depression. Different therapy methods may be useful, such as cognitive behavioral therapy or an approach where parents assume responsibility for feeding their child, known as Maudsley family therapy. Sometimes people require admission to a hospital to restore weight. Evidence for benefit from nasogastric tube feeding is unclear; such an intervention may be highly distressing for both anorexia patients and healthcare staff when administered against the patient's will under restraint. Some people with anorexia will have a single episode and recover, while others may have recurring episodes over years. Many complications improve or resolve with the regaining of weight.

Globally, anorexia is estimated to affect 2.9 million people as of 2015. It is estimated to occur in 0.3% to 4.3% of women and 0.2% to 0.3% of men in Western countries at some point in their life. About 0.4% of young women are affected in a given year, and it is estimated to occur ten times more commonly among women than men. Rates in most of the developing world are unclear. Often it begins during the teen years or young adulthood. While anorexia became more commonly diagnosed during the 20th century, it is unclear if this was due to an increase in its frequency or simply better diagnosis.
In 2013, it directly resulted in about 600 deaths globally, up from 400 deaths in 1990. Eating disorders also increase a person's risk of death from a wide range of other causes, including suicide. About 5% of people with anorexia die from complications over a ten-year period, a nearly sixfold increase in risk. The term "anorexia nervosa" was first used in 1873 by William Gull to describe this condition.
In recent years, evolutionary psychiatry, an emerging scientific discipline, has been studying mental disorders from an evolutionary perspective. It is still debated whether eating disorders such as anorexia have evolutionary functions or whether they are problems resulting from a modern lifestyle.
Signs and symptoms
Anorexia nervosa is an eating disorder characterized by attempts to lose weight to the point of starvation. A person with anorexia nervosa may exhibit a number of signs and symptoms, the type and severity of which may vary; some may be present without being readily apparent.

Anorexia nervosa, and the associated malnutrition that results from self-imposed starvation, can cause complications in every major organ system in the body. Hypokalaemia, a drop in the level of potassium in the blood, is a sign of anorexia nervosa. A significant drop in potassium can cause abnormal heart rhythms, constipation, fatigue, muscle damage, and paralysis.

Signs and symptoms may be classified as physical, cognitive, affective, behavioral, and perceptual:
Physical symptoms
A low body mass index for one's age and height
Amenorrhea: cessation of menstruation, which occurs after prolonged weight loss
Dry hair and skin, as well as hair thinning
Fear of even the slightest weight gain; taking all precautionary measures to avoid weight gain or becoming "overweight"
Rapid, continuous weight loss
Lanugo: soft, fine hair growing over the face and body
Bradycardia or tachycardia.
Chronic fatigue
Orange discoloration of the skin, particularly the feet (Carotenosis)
Infertility
Halitosis (from vomiting or starvation-induced ketosis)
Hypotension or orthostatic hypotension
Having severe muscle tension, aches and pains
Insomnia
Abdominal distension
Cognitive symptoms
An obsession with counting calories and monitoring fat contents of food.
Preoccupation with food, recipes, or cooking; may cook elaborate dinners for others, but not eat the food themselves or consume a very small portion.
Admiration of thinner people.
Thoughts of being fat or not thin enough
An altered mental representation of one's body
Impaired theory of mind, exacerbated by lower BMI and depression
Difficulty in abstract thinking and problem solving
Rigid and inflexible thinking
Poor self-esteem
Hypercriticism and perfectionism
Affective symptoms
Depression
Ashamed of oneself or ones body
Anxiety disorders
Rapid mood swings
Emotional dysregulation
Alexithymia
Behavioral symptoms
Food restrictions despite being underweight or at a healthy weight.
Food rituals, such as cutting food into tiny pieces, refusing to eat around others, and hiding or discarding of food.
Purging (only in the anorexia purging subtype) with laxatives, diet pills, ipecac syrup, or diuretics to flush food out of the system after eating, or engaging in self-induced vomiting.
Excessive exercise, including micro-exercising, for example making small persistent movements of fingers or toes.
Self harming or self-loathing.
Solitude: may avoid friends and family and become more withdrawn and secretive.
Perceptual symptoms
Perception of self as overweight, in contradiction to an underweight reality (namely "body image disturbance")
Intolerance to cold and frequent complaints of being cold; body temperature may lower (hypothermia) in an effort to conserve energy due to malnutrition.
Altered body schema (i.e. an implicit representation of the body evoked by acting)
Altered interoception
Interoception
Interoception involves the conscious and unconscious sense of the internal state of the body, and it has an important role in homeostasis and regulation of emotions. Aside from noticeable physiological dysfunction, interoceptive deficits also prompt individuals with anorexia to concentrate on distorted perceptions of multiple elements of their body image. This exists in both people with anorexia and in healthy individuals due to impairment in interoceptive sensitivity and interoceptive awareness.

Aside from weight gain and outer appearance, people with anorexia also report abnormal bodily functions, such as indistinct feelings of fullness. This provides an example of miscommunication between internal signals of the body and the brain. Due to impaired interoceptive sensitivity, powerful cues of fullness may be detected prematurely in highly sensitive individuals, which can result in decreased calorie consumption and generate anxiety surrounding food intake in anorexia patients. People with anorexia also report difficulty identifying and describing their emotional feelings and an inability to distinguish emotions from bodily sensations in general, called alexithymia.

Interoceptive awareness and emotion are deeply intertwined, and abnormalities in one can affect the other. Anorexia patients also exhibit emotional regulation difficulties that ignite emotionally cued eating behaviors, such as restricting food or excessive exercising. Impaired interoceptive sensitivity and interoceptive awareness can lead anorexia patients to adopt distorted interpretations of weight gain that are cued by physical sensations related to digestion (e.g., fullness). Combined, these interoceptive and emotional elements may together trigger maladaptive and negatively reinforced behavioral responses that assist in the maintenance of anorexia. In addition to difficulties with metacognition, people with anorexia also have difficulty with social cognition, including interpreting others' emotions and demonstrating empathy. Abnormal interoceptive awareness and interoceptive sensitivity, shown through all of these examples, have been observed so frequently in anorexia that they have become key characteristics of the illness.
Comorbidity
Other psychological issues may factor into anorexia nervosa. Some people have a previous disorder which may increase their vulnerability to developing an eating disorder, and some develop them afterwards. The presence of psychiatric comorbidity has been shown to affect the severity and type of anorexia nervosa symptoms in both adolescents and adults.

Obsessive-compulsive disorder (OCD) and obsessive-compulsive personality disorder (OCPD) are highly comorbid with AN. OCD is linked with more severe symptomatology and worse prognosis. The causality between personality disorders and eating disorders has yet to be fully established. Other comorbid conditions include depression, alcoholism, borderline and other personality disorders, anxiety disorders, attention deficit hyperactivity disorder, and body dysmorphic disorder (BDD). Depression and anxiety are the most common comorbidities, and depression is associated with a worse outcome. Autism spectrum disorders occur more commonly among people with eating disorders than in the general population. Zucker et al. (2007) proposed that conditions on the autism spectrum make up the cognitive endophenotype underlying anorexia nervosa and appealed for increased interdisciplinary collaboration.
Causes
There is evidence for biological, psychological, developmental, and sociocultural risk factors, but the exact cause of eating disorders is unknown.
Genetic
Anorexia nervosa is highly heritable. Twin studies have shown a heritability rate of 28–58%. First-degree relatives of those with anorexia have roughly 12 times the risk of developing anorexia. Association studies have been performed, studying 128 different polymorphisms related to 43 genes, including genes involved in regulation of eating behavior, motivation and reward mechanics, personality traits, and emotion. Consistent associations have been identified for polymorphisms associated with agouti-related peptide, brain-derived neurotrophic factor, catechol-O-methyl transferase, SK3, and opioid receptor delta-1. Epigenetic modifications, such as DNA methylation, may contribute to the development or maintenance of anorexia nervosa, though clinical research in this area is in its infancy.

A 2019 study found a genetic relationship with mental disorders, such as schizophrenia, obsessive–compulsive disorder, anxiety disorder and depression, and with metabolic functioning, including a negative correlation with fat mass, type 2 diabetes and leptin.

One gene that has been linked to anorexia might be of particular interest. This gene codes for a protein called the estrogen-related receptor alpha (ERRalpha). In some tissues, this gene alters the ability of estrogen and estrogen receptors to interact with DNA and change the function of cells. Since estrogen has potent effects upon appetite and feeding, any genetic abnormality in the estrogen signaling pathway could contribute to the symptoms of anorexia and explain why anorexia typically appears in young women just after the onset of puberty.
Environmental
Obstetric complications: prenatal and perinatal complications may factor into the development of anorexia nervosa, such as preterm birth, maternal anemia, diabetes mellitus, preeclampsia, placental infarction, and neonatal heart abnormalities. Neonatal complications may also have an influence on harm avoidance, one of the personality traits associated with the development of AN.

Neuroendocrine dysregulation: altered signalling of peptides that facilitate communication between the gut, brain and adipose tissue, such as ghrelin, leptin, neuropeptide Y and orexin, may contribute to the pathogenesis of anorexia nervosa by disrupting regulation of hunger and satiety.

Gastrointestinal diseases: people with gastrointestinal disorders may be more at risk of developing disorders of eating practices than the general population, principally restrictive eating disturbances. An association of anorexia nervosa with celiac disease has been found. The role that gastrointestinal symptoms play in the development of eating disorders seems rather complex. Some authors report that unresolved symptoms prior to gastrointestinal disease diagnosis may create a food aversion in these persons, causing alterations to their eating patterns. Other authors report that greater symptoms throughout their diagnosis led to greater risk. It has been documented that some people with celiac disease, irritable bowel syndrome or inflammatory bowel disease who are not conscious about the importance of strictly following their diet choose to consume their trigger foods to promote weight loss. On the other hand, individuals with good dietary management may develop anxiety, food aversion and eating disorders because of concerns around cross-contamination of their foods. Some authors suggest that medical professionals should evaluate the presence of unrecognized celiac disease in all people with eating disorders, especially if they present any gastrointestinal symptom (such as decreased appetite, abdominal pain, bloating, distension, vomiting, diarrhea or constipation), weight loss, or growth failure, and should also routinely ask celiac patients about weight or body shape concerns, dieting or vomiting for weight control, to evaluate the possible presence of eating disorders, especially in women.

Studies have hypothesized that the continuance of disordered eating patterns may be an epiphenomenon of starvation. The results of the Minnesota Starvation Experiment showed that normal controls exhibit many of the behavioral patterns of AN when subjected to starvation. This may be due to the numerous changes in the neuroendocrine system, which result in a self-perpetuating cycle.

Anorexia nervosa is more likely to occur in a person's pubertal years. Some explanatory hypotheses for the rising prevalence of eating disorders in adolescence are "increase of adipose tissue in girls, hormonal changes of puberty, societal expectations of increased independence and autonomy that are particularly difficult for anorexic adolescents to meet; [and] increased influence of the peer group and its values."
Psychological
Early theories of the cause of anorexia linked it to childhood sexual abuse or dysfunctional families; evidence is conflicting, and well-designed research is needed. The fear of food is known as sitiophobia or cibophobia, and is part of the differential diagnosis. Other psychological causes of anorexia include low self-esteem, feeling a lack of control, depression, anxiety, and loneliness. People with anorexia are, in general, highly perfectionistic, and most have obsessive-compulsive personality traits which may facilitate sticking to a restricted diet. It has been suggested that patients with anorexia are rigid in their thought patterns and place a high level of importance upon being thin.

A risk factor for anorexia is trauma. Although prevalence rates vary greatly, between 37% and 100%, there appears to be a link between traumatic events and eating disorder diagnosis. Approximately 72% of individuals with anorexia report experiencing a traumatic event prior to the onset of eating disorder symptoms, with the binge-purge subtype reporting the highest rates. Many traumatic events may be risk factors for the development of anorexia; the first identified traumatic event predicting anorexia was childhood sexual abuse. Other traumatic events, such as physical and emotional abuse, have also been found to be risk factors. Interpersonal trauma, as opposed to non-interpersonal trauma, is the most common type of traumatic event, and can encompass sexual, physical, and emotional abuse. Individuals who experience repeated trauma, like those who experience trauma perpetrated by a caregiver or loved one, have increased symptom severity of anorexia and a greater prevalence of comorbid psychiatric diagnoses.

In individuals with anorexia, the prevalence of those who also qualify for a PTSD diagnosis ranges from 4% to 52% in non-clinical samples and from 10% to 47% in clinical samples. A complicated symptom profile develops when trauma and anorexia meld; the bodily experience of the individual is changed, and intrusive thoughts and sensations may be experienced. Traumatic events can lead to intrusive and obsessive thoughts, and the symptom of anorexia most closely linked to a PTSD diagnosis is increased obsessive thoughts pertaining to food. Similarly, impulsivity is linked to the purge and binge-purge subtypes of anorexia, trauma, and PTSD. Emotional trauma (e.g., invalidation, a chaotic family environment in childhood) may lead to difficulty with emotions, particularly the identification of emotions and of how physical sensations contribute to the emotional response. Trauma and traumatic events can disturb an individual's sense of self and affect their ability to thrive, especially within their bodies.

When trauma is perpetrated on an individual, it can lead to feelings of not being safe within their own body; that their body is for others to use and not theirs alone. Individuals may experience a feeling of disconnection from their body after a traumatic experience, leading to a desire to distance themselves from the body. Trauma overwhelms individuals emotionally, physically, and psychologically. Both physical and sexual abuse can lead to an individual seeing their body as belonging to an "other" and not to the "self". Individuals who feel as though they have no control over their bodies due to trauma may use food as a means of control, because the choice to eat is an unmatched expression of control.
By exerting control over food, individuals can choose when to eat and how much to eat. Individuals, particularly children experiencing abuse, may feel a loss of control over their life, their circumstances, and their own bodies. Sexual abuse in particular, but also physical abuse, can make individuals feel that the body is not a safe place and is an object over which another has control. Starvation, in the case of anorexia, may also reduce the body's appearance as a sexual object, making starvation seem like a solution. Restriction may also be a means by which the pain an individual is experiencing can be communicated.
Sociological
Anorexia nervosa has been increasingly diagnosed since 1950; the increase has been linked to vulnerability to, and internalization of, body ideals. People in professions with particular social pressure to be thin (such as models and dancers) are more likely to develop anorexia, and those with anorexia have much higher contact with cultural sources that promote weight loss. This trend can also be observed among people who partake in certain sports, such as jockeys and wrestlers. There is a higher incidence and prevalence of anorexia nervosa in sports with an emphasis on aesthetics, where low body fat is advantageous, and in sports in which one has to make weight for competition. Family dynamics can also play a role in the cause of anorexia, including negative expressed emotion in overprotective families where blame is frequently experienced among members. Constant pressure from others to be thin, as well as teasing and bullying, can cause low self-esteem and other psychological symptoms.
Media effects
Persistent exposure to media that present body ideals may constitute a risk factor for body dissatisfaction and anorexia nervosa. The cultural ideal for body shape continues to favor slender women and athletic, V-shaped muscular men. A 2002 review found that, of the magazines most popular among people aged 18 to 24 years, those read by men, unlike those read by women, were more likely to feature ads and articles on shape than on diet. Body dissatisfaction and internalization of body ideals are risk factors for anorexia nervosa that threaten the health of both male and female populations.

Another online aspect contributing to higher rates of eating disorders such as anorexia nervosa is the presence of websites and social media communities that stress the importance of attaining body ideals and that extol and promote anorexia nervosa through religious metaphors, lifestyle descriptions, and "thinspiration" or "fitspiration" (inspirational photo galleries and quotes that aim to serve as motivators for attainment of body ideals). Pro-anorexia websites reinforce internalization of body ideals and the importance of their attainment.

The media also portray a false view of what people truly look like. In magazines, movies, and even on billboards, most actors and models are digitally altered in multiple ways. People then strive to look like these "perfect" role models, when in reality the role models are not near perfection themselves.
Mechanisms
Evidence from physiological, pharmacological and neuroimaging studies suggests serotonin (also called 5-HT) may play a role in anorexia. While acutely ill, people with anorexia undergo metabolic changes that may produce a number of biological findings that are not necessarily causative of the anorexic behavior. For example, abnormal hormonal responses to challenges with serotonergic agents have been observed during acute illness, but not recovery. Nevertheless, increased cerebrospinal fluid concentrations of 5-hydroxyindoleacetic acid (a metabolite of serotonin), and changes in anorectic behavior in response to acute tryptophan depletion (tryptophan is a metabolic precursor to serotonin), support a role in anorexia. The activity of the 5-HT2A receptors has been reported to be lower in patients with anorexia in a number of cortical regions, evidenced by lower binding potential of this receptor as measured by PET or SPECT, independent of the state of illness. While these findings may be confounded by comorbid psychiatric disorders, taken as a whole they implicate serotonin in anorexia. These alterations in serotonin have been linked to traits characteristic of anorexia such as obsessiveness, anxiety, and appetite dysregulation.

Neuroimaging studies investigating the functional connectivity between brain regions have observed a number of alterations in networks related to cognitive control, introspection, and sensory function. Alterations in networks related to the dorsal anterior cingulate cortex may be related to excessive cognitive control of eating-related behaviors. Similarly, altered somatosensory integration and introspection may relate to abnormal body image. A review of functional neuroimaging studies reported reduced activations in "bottom-up" limbic regions and increased activations in "top-down" cortical regions, which may play a role in restrictive eating.

Compared to controls, people who have recovered from anorexia show reduced activation in the reward system in response to food, and reduced correlation between self-reported liking of a sugary drink and activity in the striatum and anterior cingulate cortex. Increased binding potential of 11C-radiolabelled raclopride in the striatum, interpreted as reflecting decreased endogenous dopamine due to competitive displacement, has also been observed.

Structural neuroimaging studies have found global reductions in both gray matter and white matter, as well as increased cerebrospinal fluid volumes. Regional decreases in the left hypothalamus, left inferior parietal lobe, right lentiform nucleus and right caudate have also been reported in acutely ill patients. However, these alterations seem to be associated with acute malnutrition and are largely reversible with weight restoration, at least in non-chronic cases in younger people. In contrast, some studies have reported increased orbitofrontal cortex volume in currently ill and in recovered patients, although findings are inconsistent. Reduced white matter integrity in the fornix has also been reported.
Diagnosis
A diagnostic assessment includes the person's current circumstances, biographical history, current symptoms, and family history. The assessment also includes a mental state examination, which is an assessment of the person's current mood and thought content, focusing on views on weight and patterns of eating.
DSM-5
Anorexia nervosa is classified under the Feeding and Eating Disorders in the latest revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). There is no specific BMI cut-off that defines the low weight required for the diagnosis of anorexia nervosa.

The diagnostic criteria for anorexia nervosa (all of which must be met for diagnosis) are:
Restriction of energy intake relative to requirements leading to a low body weight. (Criterion A)
Intense fear of gaining weight or persistent behaviors that interfere with gaining weight. (Criterion B)
Disturbance in the way a person's weight or body shape is experienced, or a lack of recognition of the risks of the low body weight. (Criterion C)

Relative to the previous version of the DSM (DSM-IV-TR), the 2013 revision (DSM-5) reflects changes in the criteria for anorexia nervosa. Most notably, the amenorrhea (absent period) criterion was removed. Amenorrhea was removed for several reasons: it does not apply to males, it is not applicable for females before or after the age of menstruation or those taking birth control pills, and some women who meet the other criteria for AN still report some menstrual activity.
Subtypes
There are two subtypes of AN:
Binge-eating/purging type: patients with anorexia may show binge-eating and purging behavior. It differs from bulimia nervosa in terms of the individual's weight: an individual with binge-eating/purging type anorexia is usually significantly underweight, whereas people with bulimia nervosa can sometimes be of normal weight or overweight.
Restricting type: the individual uses restriction of food intake, fasting, diet pills, or exercise as a means of losing weight; they may exercise excessively to keep off weight or prevent weight gain, and some individuals eat only enough to stay alive. In the restricting type, there are no recurrent episodes of binge-eating or purging.
Levels of severity
Body mass index (BMI) is used by the DSM-5 as an indicator of the level of severity of anorexia nervosa. The DSM-5 states these as follows (a small illustrative sketch follows the list):
Mild: BMI of greater than 17
Moderate: BMI of 16–16.99
Severe: BMI of 15–15.99
Extreme: BMI of less than 15
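Because these cut-offs are simple numeric ranges, they can be expressed as a short classification routine. The following Python sketch is purely illustrative and not part of the DSM-5 itself; the function names are invented here, and the handling of the boundary value (treating a BMI of exactly 17 as mild, per the DSM-5's published ranges) is an assumption:

    def bmi(weight_kg, height_m):
        # Body mass index: weight in kilograms divided by height in metres squared.
        return weight_kg / height_m ** 2

    def an_severity(bmi_value):
        # Map an adult BMI to the DSM-5 severity level for anorexia nervosa.
        # Boundary handling is an assumption: a BMI of exactly 17 is treated as mild.
        if bmi_value >= 17:
            return "mild"
        if bmi_value >= 16:
            return "moderate"
        if bmi_value >= 15:
            return "severe"
        return "extreme"

    # Example: 43 kg at 1.68 m gives a BMI of about 15.2, i.e. "severe".
    print(an_severity(bmi(43, 1.68)))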
Investigations
Medical tests to check for signs of physical deterioration in anorexia nervosa may be performed by a general physician or psychiatrist, including:
Complete blood count (CBC): a test of the white blood cells, red blood cells and platelets used to assess the presence of various disorders such as leukocytosis, leukopenia, thrombocytosis and anemia which may result from malnutrition.
Urinalysis: a variety of tests performed on the urine used in the diagnosis of medical disorders, to test for substance abuse, and as an indicator of overall health
Chem-20: also known as SMA-20, a group of twenty separate chemical tests performed on blood serum. Tests include protein and electrolytes such as potassium, chloride and sodium, as well as tests specific to liver and kidney function.
Glucose tolerance test: the oral glucose tolerance test (OGTT) is used to assess the body's ability to metabolize glucose. It can be useful in detecting various disorders such as diabetes, an insulinoma, Cushing's syndrome, hypoglycemia and polycystic ovary syndrome.
Lipid profile: includes cholesterol (including total cholesterol, HDL and LDL) and triglycerides.
Serum cholinesterase test: a test of liver enzymes (acetylcholinesterase and pseudocholinesterase) useful as a test of liver function and to assess the effects of malnutrition.
Liver function tests: a series of tests used to assess liver function; some of the tests are also used in the assessment of malnutrition, protein deficiency, kidney function, bleeding disorders, and Crohn's disease.
Luteinizing hormone (LH) response to gonadotropin-releasing hormone (GnRH): tests the pituitary gland's response to GnRH, a hormone produced in the hypothalamus. Hypogonadism is often seen in anorexia nervosa cases.
Creatine kinase (CK) test: measures the circulating blood levels of creatine kinase, an enzyme found in the heart (CK-MB), brain (CK-BB) and skeletal muscle (CK-MM).
Blood urea nitrogen (BUN) test: urea nitrogen is the byproduct of protein metabolism, first formed in the liver and then removed from the body by the kidneys. The BUN test is primarily used to test kidney function. A low BUN level may indicate the effects of malnutrition.
BUN-to-creatinine ratio: a BUN-to-creatinine ratio is used to predict various conditions (a brief worked example follows this list). A high BUN/creatinine ratio can occur in severe dehydration, acute kidney failure, congestive heart failure, and intestinal bleeding. A low BUN/creatinine ratio can indicate a low-protein diet, celiac disease, rhabdomyolysis, or cirrhosis of the liver.
Electrocardiogram (EKG or ECG): measures electrical activity of the heart. It can be used to detect various disorders such as hyperkalemia.
Electroencephalogram (EEG): measures the electrical activity of the brain. It can be used to detect abnormalities such as those associated with pituitary tumors.
Thyroid function tests: tests used to assess thyroid functioning by checking levels of thyroid-stimulating hormone (TSH), thyroxine (T4), and triiodothyronine (T3).
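As a brief worked example of the BUN-to-creatinine ratio mentioned above (the laboratory values are hypothetical, chosen only for illustration):

  BUN 28 mg/dL ÷ creatinine 1.0 mg/dL = ratio of 28:1

Using conventional US units, ratios above roughly 20:1 are commonly interpreted as suggesting dehydration or the other high-ratio conditions listed above, while ratios below roughly 10:1 point toward the low-ratio conditions.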
Differential diagnoses
A variety of medical and psychological conditions have been misdiagnosed as anorexia nervosa; in some cases the correct diagnosis was not made for more than ten years.
The distinction among binge-purging anorexia, bulimia nervosa, and other specified feeding or eating disorder (OSFED) is often difficult for non-specialist clinicians. A main factor differentiating binge-purge anorexia from bulimia is the gap in physical weight: patients with bulimia nervosa are ordinarily at a healthy weight or slightly overweight, while patients with binge-purge anorexia are commonly significantly underweight. Moreover, patients with the binge-purging subtype typically do not binge-eat large amounts of food, whereas those with bulimia nervosa tend to binge large amounts. It is not unusual for patients with an eating disorder to "move through" various diagnoses as their behavior and beliefs change over time.
Treatment
There is no conclusive evidence that any particular treatment for anorexia nervosa works better than others.

Treatment for anorexia nervosa tries to address three main areas:
Restoring the person to a healthy weight;
Treating the psychological disorders related to the illness;
Reducing or eliminating behaviors or thoughts that originally led to the disordered eating.

In some clinical settings, a specific body image intervention is performed to reduce body dissatisfaction and body image disturbance. Although restoring the person's weight is the primary task at hand, optimal treatment also includes and monitors behavioral change in the individual. There is some evidence that hospitalization might adversely affect long-term outcome, but it is sometimes necessary. Psychotherapy for individuals with AN is challenging, as they may value being thin and may seek to maintain control and resist change. Initially, developing a desire to change is fundamental. Although there is no evidence that any one treatment works better in adult patients, research indicates that family-based therapy is the primary choice for adolescents with AN.
Therapy
Family-based treatment (FBT) has been shown to be more successful than individual therapy for adolescents with AN. Various forms of family-based treatment have been proven to work in the treatment of adolescent AN, including conjoint family therapy (CFT), in which the parents and child are seen together by the same therapist, and separated family therapy (SFT), in which the parents and child attend therapy separately with different therapists. Proponents of family therapy for adolescents with AN assert that it is important to include parents in the adolescent's treatment.

A four- to five-year follow-up study of the Maudsley family therapy, an evidence-based manualized model, showed full recovery at rates up to 90%. Although this model is recommended by the NIMH, critics claim that it has the potential to create power struggles in an intimate relationship and may disrupt equal partnerships. Cognitive behavioral therapy (CBT) is useful in adolescents and adults with anorexia nervosa. One of the best-known psychotherapies in the field is CBT-E, an enhanced cognitive behavioral therapy focused specifically on eating disorder psychopathology. Acceptance and commitment therapy is a third-wave cognitive-behavioral therapy which has shown promise in the treatment of AN. Cognitive remediation therapy (CRT) is also used in treating anorexia nervosa. Schema-focused therapy (a form of CBT) was developed by Jeffrey Young and is effective in helping patients identify the origins of, and triggers for, disordered eating.
Diet
Diet is the most essential factor to work on in people with anorexia nervosa, and must be tailored to each person's needs. Food variety is important when establishing meal plans, as are foods that are higher in energy density. People must consume adequate calories, starting slowly and increasing at a measured pace. Evidence for a role for zinc supplementation during refeeding is unclear.
Medication
Pharmaceuticals have limited benefit for anorexia itself. There is a lack of good information from which to make recommendations concerning the effectiveness of antidepressants in treating anorexia. Administration of olanzapine has been shown to result in a modest but statistically significant increase in body weight of anorexia nervosa patients.
Admission to hospital
Patients with AN may be deemed to lack insight into the necessity of treatment, and thus may be involuntarily treated without their consent. AN has a high mortality, and patients admitted in a severely ill state to medical units are at particularly high risk. Diagnosis can be challenging, risk assessment may not be performed accurately, consent and the need for compulsion may not be assessed appropriately, refeeding syndrome may be missed or poorly treated, and the behavioural and family problems in AN may be missed or poorly managed. Guidelines published by the Royal College of Psychiatrists recommend that medical and psychiatric experts work together in managing severely ill people with AN.
Refeeding syndrome
The rate of refeeding can be difficult to establish, because the fear of refeeding syndrome (RFS) can lead to underfeeding. It is thought that RFS, with falling phosphate and potassium levels, is more likely to occur when BMI is very low and when medical comorbidities, such as infection or cardiac failure, are present. In those circumstances, it is recommended to start refeeding slowly but to build up rapidly as long as RFS does not occur. Recommendations on energy requirements vary, from 5–10 kcal/kg/day in the most medically compromised patients, who appear to have the highest risk of RFS, up to 1900 kcal/day.
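As a worked example of how the per-kilogram figures translate into starting intakes (the 35 kg body weight here is hypothetical, chosen only for illustration):

  5 kcal/kg/day × 35 kg = 175 kcal/day
  10 kcal/kg/day × 35 kg = 350 kcal/day

On this recommendation, the most medically compromised patients would therefore begin at roughly 175–350 kcal/day and, provided phosphate and potassium remain stable, be built up toward the upper figure of 1900 kcal/day.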
Experience of treatment
Patients involved in treatment sometimes felt that it focused on biological aspects of body weight and eating behaviour change rather than on their perceptions or emotional state. Patients felt that a therapist's trust in them, shown by being treated as a complete person with their own capacities, was significant. Some patients defined recovery from AN in terms of reclaiming a lost identity.

Healthcare workers involved in the treatment of anorexia reported frustration and anger in response to setbacks in treatment and noncompliance, and were afraid of patients dying. Some healthcare workers felt that they did not understand the treatment and that medical doctors were making the decisions. They may feel powerless to improve a patient's situation, and deskilled as a result. Healthcare workers involved in monitoring patients' consumption of food felt watched themselves. Healthcare workers often feel a degree of moral dissonance at not being in control of outcomes, which they may protect against by focusing on individual tasks, avoiding identifying with patients (for example, by making their own eating behavior very different and not sharing personal information with patients), and blaming patients for their distress. Healthcare workers would inflexibly follow process to avoid responsibility. Healthcare workers attempted to reach balance by gradually giving patients back control, avoiding feeling sole responsibility for outcomes, being mindful of their own emotional state, and trying to view eating disorders as external to patients.
Prognosis
AN has the highest mortality rate of any psychological disorder. The mortality rate is 11 to 12 times greater than in the general population, and the suicide risk is 56 times higher. Half of women with AN achieve a full recovery, while an additional 20–30% may partially recover. Not all people with anorexia recover completely: about 20% develop anorexia nervosa as a chronic disorder. If anorexia nervosa is not treated, serious complications such as heart conditions and kidney failure can arise and eventually lead to death. The average number of years from onset to remission of AN is seven for women and three for men. After ten to fifteen years, 70% of people no longer meet the diagnostic criteria, but many continue to have eating-related problems.

Alexithymia (the inability to identify and describe one's own emotions) influences treatment outcome. Recovery is also viewed on a spectrum rather than in black-and-white terms. According to the Morgan-Russell criteria, individuals can have a good, intermediate, or poor outcome. Even when a person is classified as having a "good" outcome, weight only has to be within 15% of average, and normal menstruation must be present in females; the "good" outcome also excludes psychological health. Recovery for people with anorexia nervosa is undeniably positive, but recovery does not mean a return to normal.
Complications
Anorexia nervosa can have serious implications if its duration and severity are significant and if onset occurs before the completion of growth, pubertal maturation, or the attainment of peak bone mass. Complications specific to adolescents and children with anorexia nervosa can include the following:
Growth retardation may occur, as height gain may slow and can stop completely with severe weight loss or chronic malnutrition. In such cases, provided that growth potential is preserved, height increase can resume and reach full potential after normal intake is resumed. Height potential is normally preserved if the duration and severity of illness are not significant, or if the illness is accompanied by delayed bone age (especially prior to a bone age of approximately 15 years), as hypogonadism may partially counteract the effects of undernutrition on height by allowing for a longer duration of growth compared to controls. Appropriate early treatment can preserve height potential, and may even help to increase it in some post-anorexic subjects, due to factors such as long-term reduced estrogen-producing adipose tissue levels compared to premorbid levels. In some cases, especially where onset is before puberty, complications such as stunted growth and pubertal delay are usually reversible.

Anorexia nervosa causes alterations in the female reproductive system; significant weight loss, as well as psychological stress and intense exercise, typically results in a cessation of menstruation in women who are past puberty. In patients with anorexia nervosa, there is a reduction of the secretion of gonadotropin-releasing hormone in the central nervous system, preventing ovulation. Anorexia nervosa can also result in pubertal delay or arrest. Both height gain and pubertal development are dependent on the release of growth hormone and gonadotropins (LH and FSH) from the pituitary gland. Suppression of gonadotropins in people with anorexia nervosa has been documented. Typically, growth hormone (GH) levels are high, but levels of IGF-1, the downstream hormone that should be released in response to GH, are low; this indicates a state of "resistance" to GH due to chronic starvation. IGF-1 is necessary for bone formation, and decreased levels in anorexia nervosa contribute to a loss of bone density and potentially to osteopenia or osteoporosis. Anorexia nervosa can also result in reduction of peak bone mass. Buildup of bone is greatest during adolescence, and if onset of anorexia nervosa occurs during this time and stalls puberty, low bone mass may be permanent.

Hepatic steatosis, or fatty infiltration of the liver, can also occur, and is an indicator of malnutrition in children. Neurological disorders that may occur as complications include seizures and tremors. Wernicke encephalopathy, which results from vitamin B1 deficiency, has been reported in patients who are extremely malnourished; symptoms include confusion, problems with the muscles responsible for eye movements, and abnormalities in walking gait.
The most common gastrointestinal complications of anorexia nervosa are delayed stomach emptying and constipation, but they also include elevated liver function tests, diarrhea, acute pancreatitis, heartburn, difficulty swallowing, and, rarely, superior mesenteric artery syndrome. Delayed stomach emptying, or gastroparesis, often develops following food restriction and weight loss; the most common symptom is bloating with gas and abdominal distension, often occurring after eating. Other symptoms of gastroparesis include early satiety, fullness, nausea, and vomiting. These symptoms may inhibit efforts at eating and recovery, but can be managed by limiting high-fiber foods, using liquid nutritional supplements, or using metoclopramide to increase emptying of food from the stomach. Gastroparesis generally resolves when weight is regained.
Cardiac complications
Anorexia nervosa increases the risk of sudden cardiac death, though the precise cause is unknown. Cardiac complications include structural and functional changes to the heart. Some of these cardiovascular changes are mild and reversible with treatment, while others may be life-threatening. Cardiac complications can include arrhythmias, abnormally slow heart beat, low blood pressure, decreased size of the heart muscle, reduced heart volume, mitral valve prolapse, myocardial fibrosis, and pericardial effusion.

Abnormalities in conduction and repolarization of the heart that can result from anorexia nervosa include QT prolongation, increased QT dispersion, conduction delays, and junctional escape rhythms. Electrolyte abnormalities, particularly hypokalemia and hypomagnesemia, can cause anomalies in the electrical activity of the heart and result in life-threatening arrhythmias. Hypokalemia most commonly results in patients with anorexia when restricting is accompanied by purging (induced vomiting or laxative use). Hypotension (low blood pressure) is common, and symptoms include fatigue and weakness. Orthostatic hypotension, a marked decrease in blood pressure when standing from a supine position, may also occur. Symptoms include lightheadedness upon standing, weakness, and cognitive impairment, and may result in fainting or near-fainting. Orthostasis in anorexia nervosa indicates worsening cardiac function and may indicate a need for hospitalization. Hypotension and orthostasis generally resolve upon recovery to a normal weight. The weight loss in anorexia nervosa also causes atrophy of cardiac muscle, which leads to a decreased ability to pump blood, a reduced ability to sustain exercise, a diminished ability to increase blood pressure in response to exercise, and a subjective feeling of fatigue.

Some individuals may also have a decrease in cardiac contractility. Cardiac complications can be life-threatening, but the heart muscle generally improves with weight gain, and the heart normalizes in size over weeks to months with recovery. Atrophy of the heart muscle is a marker of the severity of the disease; while it is reversible with treatment and refeeding, it is possible that it causes permanent, microscopic changes to the heart muscle that increase the risk of sudden cardiac death. Individuals with anorexia nervosa may experience chest pain or palpitations; these can be a result of mitral valve prolapse, which occurs because the size of the heart muscle decreases while the tissue of the mitral valve remains the same size. Studies have shown rates of mitral valve prolapse of around 20 percent in those with anorexia nervosa, while the rate in the general population is estimated at 2–4 percent. It has been suggested that there is an association between mitral valve prolapse and sudden cardiac death, but it has not been proven to be causative, either in patients with anorexia nervosa or in the general population.
Relapse
Rates of relapse after treatment range from 9% to 52%, with many studies reporting a relapse rate of at least 25%. Relapse occurs in approximately a third of people treated in hospital, and is greatest in the first six to eighteen months after release from an institution.
Epidemiology
Anorexia is estimated to occur in 0.9% to 4.3% of women and 0.2% to 0.3% of men in Western countries at some point in their life. About 0.4% of young females are affected in a given year, and it is estimated to occur three to ten times less commonly in males. Rates in most of the developing world are unclear. Often it begins during the teen years or young adulthood. Medical students are a high-risk group, with an overall estimated prevalence of 10.4% globally.

The lifetime rate of atypical anorexia nervosa, a form of ED-NOS in which the person loses a significant amount of weight and is at risk for serious medical complications despite having a higher body mass index, is much higher, at 5–12%.

While anorexia became more commonly diagnosed during the 20th century, it is unclear if this was due to an increase in its frequency or simply better diagnosis. Most studies show that since at least 1970 the incidence of AN in adult women has been fairly constant, while there is some indication that the incidence may have been increasing for girls aged between 14 and 20.
Underrepresentation
Eating disorders are less reported in preindustrial, non-westernized countries than in Western countries. In Africa, not including South Africa, the only data on eating disorders come from case reports and isolated studies, not from studies investigating prevalence. Western countries experience slightly higher rates of eating disorders than non-Western countries. Theories to explain these lower rates of eating disorders, and the lower rates of reporting and research, include the effects of westernization and culture change.

Men (and women) who might otherwise be diagnosed with anorexia may not meet the DSM-IV criteria for BMI, since they have muscle weight but very little fat. In many cases a subclinical "not otherwise specified" diagnosis is made instead: ED-NOS in the DSM-IV, and other specified feeding or eating disorder or unspecified feeding or eating disorder in the DSM-5. ED-NOS was the most diagnosed eating disorder in 2009, and it was also shown that AN did not differ significantly in eating pathology or general psychopathology from ED-NOS involving restrictive eating.

The elderly population is increasingly experiencing anorexia nervosa, which has been termed the "anorexia of aging". The eating disorder is similar to typical anorexia nervosa but is more often accompanied by the overuse of laxatives in order to purge the individual of ingested food. Most geriatric anorexia patients limit their food intake to dairy or grains, whereas an adolescent with anorexia has a more general limitation. This eating disorder affecting older adults has two types: early onset and late onset. Early onset refers to a recurrence of anorexia in late life in an individual who experienced the disease during their youth. Late onset describes instances where the eating disorder begins for the first time late in life.
The stimulus for anorexia in elderly patients is typically a loss of control over their lives, which can be brought on by many events, including moving into an assisted living facility. This is also a time when many older individuals experience increased conflict with family members, for example over limitations on driving or on personal freedom, which increases the likelihood of an issue with anorexia. There can be physical issues in the elderly that lead to anorexia of aging, including a decline in chewing ability, a decline in taste and smell, and a decrease in appetite. Psychological reasons for the elderly to develop anorexia can include depression and bereavement, and even an indirect attempt at suicide. There are also common comorbid psychiatric conditions in aging anorexics, including major depression, anxiety disorder, obsessive-compulsive disorder, bipolar disorder, schizophrenia, and dementia.

The signs and symptoms that go along with anorexia of aging are similar to those observed in adolescent anorexia, including sudden weight loss, unexplained hair loss or dental problems, and a desire to eat alone.

Several medical conditions can result from anorexia in the elderly. An increased risk of illness and death can be a result of anorexia. There is also a decline in muscle and bone mass as a result of reduced protein intake during anorexia. Other results of anorexia in the aging population are irreparable damage to the kidneys, heart or colon and an imbalance of electrolytes.

Many assessments are available to diagnose anorexia in the aging community, including the Simplified Nutritional Assessment Questionnaire (SNAQ) and the Functional Assessment of Anorexia/Cachexia Therapy (FAACT). Specific to the geriatric populace, the interRAI system identifies detrimental conditions in assisted living facilities and nursing homes. Even a simple screening for nutritional insufficiencies, such as low levels of important vitamins, can help to identify someone who has anorexia of aging.

Anorexia in the elderly should be identified by retirement communities but is often overlooked, especially in patients with dementia. Some studies report that malnutrition is prevalent in nursing homes, with up to 58% of residents suffering from it, which can make identifying anorexia difficult. One of the challenges with assisted living facilities is that they often serve bland, monotonous food, which lessens residents' desire to eat.

The treatment for anorexia of aging is the same as for anorexia in any other age group. Options include outpatient and inpatient facilities, antidepressant medication, and behavioral therapy such as meal observation and discussing eating habits.

Male and female athletes are often overlooked as anorexic. Research emphasizes the importance of taking athletes' diet, weight and symptoms into account when diagnosing anorexia, instead of looking only at weight and BMI. For athletes, ritualized activities such as weigh-ins place emphasis on gaining and losing large amounts of weight, which may promote the development of eating disorders among them. While women tend to use diet pills, an indicator of unhealthy behavior and an eating disorder, men tend to use steroids, which reflects the different beauty ideals for the genders </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anthrax and can you explain it in depth? [/INST] Anthrax is an infection caused by the bacterium Bacillus anthracis. It can occur in four forms: skin, lungs, intestinal, and injection. Symptom onset occurs between one day and more than two months after the infection is contracted. The skin form presents with a small blister with surrounding swelling that often turns into a painless ulcer with a black center. The inhalation form presents with fever, chest pain and shortness of breath. The intestinal form presents with diarrhea (which may contain blood), abdominal pains, nausea and vomiting. The injection form presents with fever and an abscess at the site of drug injection. According to the Centers for Disease Control, the first clinical descriptions of cutaneous anthrax were given by Maret in 1752 and Fournier in 1769. Before that, anthrax had been described only through historical accounts. The Prussian scientist Robert Koch (1843–1910) was the first to identify Bacillus anthracis as the bacterium that causes anthrax.
Anthrax is spread by contact with the bacterium's spores, which often appear in infectious animal products. Contact is by breathing or eating or through an area of broken skin. It does not typically spread directly between people. Those at risk include people who work with animals or animal products, travelers, and military personnel. Diagnosis can be confirmed by finding antibodies or the toxin in the blood or by culture of a sample from the infected site. Anthrax vaccination is recommended for people at high risk of infection. Immunizing animals against anthrax is recommended in areas where previous infections have occurred. A two-month course of antibiotics such as ciprofloxacin, levofloxacin and doxycycline after exposure can also prevent infection. If infection occurs, treatment is with antibiotics and possibly antitoxin. The type and number of antibiotics used depend on the type of infection. Antitoxin is recommended for those with widespread infection. A rare disease, human anthrax is most common in Africa and central and southern Asia. It also occurs more regularly in Southern Europe than elsewhere on the continent and is uncommon in Northern Europe and North America. Globally, at least 2,000 cases occur a year, with about two cases a year in the United States. Skin infections represent more than 95% of cases. Without treatment the risk of death from skin anthrax is 23.7%. For intestinal infection the risk of death is 25 to 75%, while respiratory anthrax has a mortality of 50 to 80%, even with treatment. Until the 20th century anthrax infections killed hundreds of thousands of people and animals each year. Anthrax has been developed as a weapon by a number of countries. In herbivorous animals infection occurs when they eat or breathe in the spores while grazing. Animals may also become infected by killing or eating infected animals.
Etymology
The English name comes from anthrax (ἄνθραξ), the Greek word for coal, possibly having Egyptian etymology, because of the characteristic black skin lesions developed by people with a cutaneous anthrax infection. The central black eschar surrounded by vivid red skin has long been recognised as typical of the disease. The first recorded use of the word "anthrax" in English is in a 1398 translation of Bartholomaeus Anglicus' work De proprietatibus rerum (On the Properties of Things, 1240). Anthrax was historically known by a wide variety of names indicating its symptoms, location and groups considered most vulnerable to infection. They included Siberian plague, Cumberland disease, charbon, splenic fever, malignant edema, woolsorters' disease and la maladie de Bradford.
Signs and symptoms
Skin
Cutaneous anthrax, also known as hide-porter's disease, is when anthrax occurs on the skin. It is the most common form (>90% of anthrax cases) and the least dangerous (low mortality with treatment, 23.7% mortality without). Cutaneous anthrax presents as a boil-like skin lesion that eventually forms an ulcer with a black center (eschar). The black eschar often shows up as a large, painless, necrotic ulcer (beginning as an irritating and itchy skin lesion or blister that is dark and usually concentrated as a black dot, somewhat resembling bread mold) at the site of infection. In general, cutaneous infections form within the site of spore penetration between two and five days after exposure. Unlike bruises or most other lesions, cutaneous anthrax infections normally do not cause pain. Nearby lymph nodes may become infected, reddened, swollen, and painful. A scab soon forms over the lesion and falls off in a few weeks. Complete recovery may take longer.
Cutaneous anthrax is typically caused when B. anthracis spores enter through cuts on the skin. This form is found most commonly when humans handle infected animals and/or animal products.
Injection
In December 2009, an outbreak of anthrax occurred among injecting heroin users in the Glasgow and Stirling areas of Scotland, resulting in 14 deaths. The source of the anthrax is believed to have been dilution of the heroin with bone meal in Afghanistan. Injected anthrax may have symptoms similar to cutaneous anthrax, and may also cause infection deep into the muscle and spread faster.
Lungs
Inhalation anthrax usually develops within a week after exposure, but may take up to two months. During the first few days of illness, most people have fever, chills, and fatigue. These symptoms may be accompanied by cough, shortness of breath, chest pain, and nausea or vomiting, making inhalation anthrax difficult to distinguish from influenza and community-acquired pneumonia. This is often described as the prodromal period. Over the next day or so, shortness of breath, cough, and chest pain become more common, and complaints not involving the chest, such as nausea, vomiting, altered mental status, sweats, and headache, develop in one-third or more of people. Upper respiratory tract symptoms occur in only a quarter of people, and muscle pains are rare. Altered mental status or shortness of breath generally brings people to healthcare and marks the fulminant phase of illness. The infection involves the lymph nodes in the chest first, rather than the lungs themselves, a condition called hemorrhagic mediastinitis, causing bloody fluid to accumulate in the chest cavity and thereby shortness of breath. The second (pneumonia) stage occurs when the infection spreads from the lymph nodes to the lungs. Symptoms of the second stage develop suddenly within hours or days after the first stage. They include high fever, extreme shortness of breath, shock, and rapid death within 48 hours in fatal cases.
Gastrointestinal
Gastrointestinal (GI) infection is most often caused by consuming anthrax-infected meat and is characterized by diarrhea, potentially with blood, abdominal pains, acute inflammation of the intestinal tract, and loss of appetite. Occasional vomiting of blood can occur. Lesions have been found in the intestines and in the mouth and throat. After the bacterium invades the gastrointestinal system, it spreads to the bloodstream and throughout the body, while continuing to make toxins.
Cause
Bacteria
Bacillus anthracis is a rod-shaped, Gram-positive, facultative anaerobic bacterium about 1 by 9 μm in size. It was shown to cause disease by Robert Koch in 1876 when he took a blood sample from an infected cow, isolated the bacteria, and put them into a mouse. The bacterium normally rests in spore form in the soil, and can survive for decades in this state. Herbivores are often infected while grazing, especially when eating rough, irritant, or spiky vegetation; the vegetation has been hypothesized to cause wounds within the GI tract, permitting entry of the bacterial spores into the tissues. Once ingested or placed in an open wound, the bacteria begin multiplying inside the animal or human and typically kill the host within a few days or weeks. The spores germinate at the site of entry into the tissues and then spread by the circulation to the lymphatics, where the bacteria multiply. The production of two powerful exotoxins, lethal toxin and edema toxin, by the bacteria causes death. Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence, and by the dark, nonclotting blood that oozes from the body orifices. Most anthrax bacteria inside the body after death are outcompeted and destroyed by anaerobic bacteria within minutes to hours post mortem. However, anthrax vegetative bacteria that escape the body via oozing blood or through the opening of the carcass may form hardy spores. These vegetative bacteria are not contagious. One spore forms per vegetative bacterium. The triggers for spore formation are not yet known, though oxygen tension and lack of nutrients may play roles. Once formed, these spores are very hard to eradicate. The infection of herbivores (and occasionally humans) by the inhalational route normally begins with inhaled spores being transported through the air passages into the tiny air sacs (alveoli) in the lungs. The spores are then picked up by scavenger cells (macrophages) in the lungs and are transported through small vessels (lymphatics) to the lymph nodes in the central chest cavity (mediastinum). Damage caused by the anthrax spores and bacilli to the central chest cavity can cause chest pain and difficulty breathing. Once in the lymph nodes, the spores germinate into active bacilli that multiply and eventually burst the macrophages, releasing many more bacilli into the bloodstream to be transferred to the entire body. Once in the bloodstream, these bacilli release three proteins named lethal factor, edema factor, and protective antigen. The three are not toxic by themselves, but their combination is highly lethal to humans. Protective antigen combines with these other two factors to form lethal toxin and edema toxin, respectively. These toxins are the primary agents of tissue destruction, bleeding, and death of the host. If antibiotics are administered too late, even if the antibiotics eradicate the bacteria, some hosts still die of toxemia because the toxins produced by the bacilli remain in their systems at lethal dose levels.
Exposure
The spores of anthrax are able to survive in harsh conditions for decades or even centuries. Such spores can be found on all continents, including Antarctica. Disturbed grave sites of infected animals have been known to cause infection after 70 years. Historically, inhalational anthrax was called woolsorters' disease because it was an occupational hazard for people who sorted wool. Today, this form of infection is extremely rare in advanced nations, as almost no infected animals remain. Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual pathway of exposure for humans. Workers who are exposed to dead animals and animal products are at the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on open range where they mix with wild animals still occasionally occurs in the United States and elsewhere. Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores, but most exposure levels are not sufficient to develop anthrax infections. A lethal infection is reported to result from inhalation of about 10,000–20,000 spores, though this dose varies among host species. Little documented evidence is available to verify the exact or average number of spores needed for infection.
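To put the quoted spore counts in context, the short Python sketch below applies a simple exponential dose-response model. This is a minimal illustration, not from the source: the model choice and the per-spore infection probability r are assumptions, chosen only so that the 10,000–20,000 spore range quoted above corresponds to a substantial infection probability.

# Illustrative exponential dose-response model (assumed, not from the text).
import math

def infection_probability(dose_spores, r=5e-5):
    # P(infection) = 1 - exp(-r * dose); r is a hypothetical per-spore risk.
    return 1.0 - math.exp(-r * dose_spores)

for dose in (1_000, 10_000, 20_000):
    print(dose, "spores ->", round(infection_probability(dose), 3))
# Under this assumed r: 1,000 spores give ~0.05, 10,000 ~0.39, 20,000 ~0.63.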
Mode of infection
Anthrax can enter the human body through the intestines (ingestion), lungs (inhalation), or skin (cutaneous) and causes distinct clinical symptoms based on its site of entry. In general, an infected human is quarantined. However, anthrax does not usually spread from an infected human to an uninfected human. If the disease proves fatal, though, the person's body and its mass of anthrax bacilli become a potential source of infection to others, and special precautions should be used to prevent further contamination. Inhalational anthrax, if left untreated until obvious symptoms occur, is usually fatal. Anthrax can be contracted in laboratory accidents or by handling infected animals, their wool, or their hides. It has also been used as a biological warfare agent and by terrorists to infect people intentionally, as exemplified by the 2001 anthrax attacks.
Mechanism
The lethality of the anthrax disease is due to the bacterium's two principal virulence factors: the poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils, and the tripartite protein toxin, called anthrax toxin. The anthrax toxin consists of three components: protective antigen (PA), edema factor (EF), and lethal factor (LF). PA plus LF produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue swelling (edema), respectively.
To enter the cells, the edema and lethal factors use another protein produced by B. anthracis called protective antigen, which binds to two surface receptors on the host cell. A cell protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments forming a heptameric ring-shaped structure named a prepore. Once in this shape, the complex can competitively bind up to three EFs or LFs, forming a resistant complex. Receptor-mediated endocytosis occurs next, providing the newly formed toxic complex access to the interior of the host cell. The acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the cytosol. It is unknown how exactly the complex results in the death of the cell.
Edema factor is a calmodulin-dependent adenylate cyclase. Adenylate cyclase catalyzes the conversion of ATP into cyclic AMP (cAMP) and pyrophosphate. The complexation of adenylate cyclase with calmodulin removes calmodulin from stimulating calcium-triggered signaling, thus inhibiting the immune response; by this process, EF inactivates neutrophils (a type of phagocytic cell) so they cannot phagocytose bacteria. Lethal factor was long presumed to cause macrophages to make TNF-alpha and interleukin-1 beta (IL1B). TNF-alpha is a cytokine whose primary role is to regulate immune cells, as well as to induce inflammation and apoptosis or programmed cell death. Interleukin-1 beta is another cytokine that also regulates inflammation and apoptosis. The overproduction of TNF-alpha and IL1B ultimately leads to septic shock and death. However, recent evidence indicates anthrax also targets endothelial cells that line serous cavities such as the pericardial cavity, pleural cavity, and peritoneal cavity, lymph vessels, and blood vessels, causing vascular leakage of fluid and cells, and ultimately hypovolemic shock and septic shock.
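As a worked equation (added here for clarity; the chemistry is standard and follows directly from the description above), the EF–calmodulin adenylate cyclase reaction is:

$$\mathrm{ATP} \;\longrightarrow\; 3',5'\text{-cyclic AMP} + \mathrm{PP_i}$$

The uncontrolled accumulation of cAMP produced by this reaction is what disrupts cell signaling and fluid balance, producing the characteristic edema.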
Diagnosis
Various techniques may be used for the direct identification of B. anthracis in clinical material. Firstly, specimens may be Gram stained. Bacillus spp. are quite large in size (3 to 4 μm long), they may grow in long chains, and they stain Gram-positive. To confirm the organism is B. anthracis, rapid diagnostic techniques such as polymerase chain reaction-based assays and immunofluorescence microscopy may be used. All Bacillus species grow well on 5% sheep blood agar and other routine culture media. Polymyxin-lysozyme-EDTA-thallous acetate can be used to isolate B. anthracis from contaminated specimens, and bicarbonate agar is used as an identification method to induce capsule formation. Bacillus spp. usually grow within 24 hours of incubation at 35 °C, in ambient air (room temperature) or in 5% CO2. If bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2. B. anthracis colonies are medium-large, gray, flat, and irregular with swirling projections, often referred to as having a "medusa head" appearance, and are not hemolytic on 5% sheep blood agar. The bacteria are not motile, susceptible to penicillin, and produce a wide zone of lecithinase on egg yolk agar. Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect hemagglutination, and enzyme-linked immunosorbent assay to detect antibodies. The best confirmatory precipitation test for anthrax is the Ascoli test.
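The culture characteristics above can be summarized as a presumptive identification checklist. The following Python sketch is illustrative only, encoding the findings quoted in this section; the field names are hypothetical and it is not a validated diagnostic tool.

# Presumptive B. anthracis pattern, taken from the findings described above.
PRESUMPTIVE_PATTERN = {
    "large_gram_positive_rods": True,   # 3-4 um, may grow in long chains
    "motile": False,                    # B. anthracis is not motile
    "hemolytic_on_sheep_blood": False,  # nonhemolytic on 5% sheep blood agar
    "penicillin_susceptible": True,
    "lecithinase_on_egg_yolk": True,    # wide zone of lecithinase
}

def presumptive_b_anthracis(isolate):
    # True only if every finding matches; confirmation still requires
    # e.g. gamma bacteriophage testing or PCR-based assays.
    return all(isolate.get(k) == v for k, v in PRESUMPTIVE_PATTERN.items())

print(presumptive_b_anthracis(dict(PRESUMPTIVE_PATTERN)))  # -> True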
Prevention
Precautions are taken to avoid contact with the skin and any fluids exuded through natural body openings of a deceased body that is suspected of harboring anthrax. The body should be put in strict quarantine. A blood sample is collected and sealed in a container and analyzed in an approved laboratory to ascertain if anthrax is the cause of death. The body should be sealed in an airtight body bag and incinerated to prevent the transmission of anthrax spores. Microscopic visualization of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome methylene blue (McFadyean stain) is fully diagnostic, though the culture of the organism is still the gold standard for diagnosis. Full isolation of the body is important to prevent possible contamination of others. Protective, impermeable clothing and equipment such as rubber gloves, rubber apron, and rubber boots with no perforations are used when handling the body. No skin, especially if it has any wounds or scratches, should be exposed. Disposable personal protective equipment is preferable, but if not available, decontamination can be achieved by autoclaving. Used disposable equipment is burned and/or buried after use. All contaminated bedding or clothing is isolated in double plastic bags and treated as biohazard waste. Respiratory equipment capable of filtering small particles, such as a US National Institute for Occupational Safety and Health- and Mine Safety and Health Administration-approved high-efficiency respirator, is worn.
Vaccines
Vaccines against anthrax for use in livestock and humans have had a prominent place in the history of medicine. The French scientist Louis Pasteur developed the first effective vaccine in 1881. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current FDA-approved US vaccine was formulated in the 1960s. Currently administered human anthrax vaccines include acellular (United States) and live vaccine (Russia) varieties. All currently used anthrax vaccines show considerable local and general reactogenicity (erythema, induration, soreness, fever) and serious adverse reactions occur in about 1% of recipients. The American product, BioThrax, is licensed by the FDA and was formerly administered in a six-dose primary series at 0, 2, 4 weeks and 6, 12, 18 months, with annual boosters to maintain immunity. In 2008, the FDA approved omitting the week-2 dose, resulting in the currently recommended five-dose series. New second-generation vaccines currently being researched include recombinant live vaccines and recombinant subunit vaccines. In the 20th century the use of a modern product (BioThrax) to protect American troops against the use of anthrax in biological warfare was controversial.
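The dosing intervals described above can be made concrete with a short sketch. This is a minimal illustration of the schedule arithmetic only, assuming 30-day months for simplicity; it is not dosing guidance.

from datetime import date, timedelta

def biothrax_schedule(first_dose, pre_2008=False):
    # Six-dose series (0, 2, 4 weeks; 6, 12, 18 months) before 2008;
    # the week-2 dose was omitted thereafter, leaving five doses.
    weeks = [0, 2, 4] if pre_2008 else [0, 4]
    months = [6, 12, 18]
    offsets = [7 * w for w in weeks] + [30 * m for m in months]
    return [first_dose + timedelta(days=d) for d in offsets]

for d in biothrax_schedule(date(2024, 1, 1)):
    print(d.isoformat())  # annual boosters would follow to maintain immunity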
Antibiotics
Preventive antibiotics are recommended in those who have been exposed. Early detection of sources of anthrax infection can allow preventive measures to be taken. In response to the anthrax attacks of October 2001, the United States Postal Service (USPS) installed biodetection systems (BDSs) in their large-scale mail processing facilities. BDS response plans were formulated by the USPS in conjunction with local responders including fire, police, hospitals, and public health. Employees of these facilities have been educated about anthrax, response actions, and prophylactic medication. Because of the time delay inherent in getting final verification that anthrax has been used, prophylactic antibiotic treatment of possibly exposed personnel must be started as soon as possible.
Treatment
Anthrax cannot be spread from person to person, except in the rare case of skin exudates from cutaneous anthrax. However, a person's clothing and body may be contaminated with anthrax spores. Effective decontamination of people can be accomplished by a thorough wash-down with antimicrobial soap and water. Wastewater is treated with bleach or another antimicrobial agent. Effective decontamination of articles can be accomplished by boiling them in water for 30 minutes or longer. Chlorine bleach is ineffective in destroying spores and vegetative cells on surfaces, though formaldehyde is effective. Burning clothing is very effective in destroying spores. After decontamination, there is no need to immunize, treat, or isolate contacts of persons ill with anthrax unless they were also exposed to the same source of infection.
Antibiotics
Early antibiotic treatment of anthrax is essential; delay significantly lessens chances for survival. Treatment for anthrax infection and other bacterial infections includes large doses of intravenous and oral antibiotics, such as fluoroquinolones (ciprofloxacin), doxycycline, erythromycin, vancomycin, or penicillin. FDA-approved agents include ciprofloxacin, doxycycline, and penicillin. In possible cases of pulmonary anthrax, early antibiotic prophylaxis treatment is crucial to prevent possible death. Many attempts have been made to develop new drugs against anthrax, but existing drugs are effective if treatment is started soon enough.
Monoclonal antibodies
In May 2009, Human Genome Sciences submitted a biologic license application (BLA, permission to market) for its new drug, raxibacumab (brand name ABthrax) intended for emergency treatment of inhaled anthrax. On 14 December 2012, the US Food and Drug Administration approved raxibacumab injection to treat inhalational anthrax. Raxibacumab is a monoclonal antibody that neutralizes toxins produced by B. anthracis. In March 2016, FDA approved a second anthrax treatment using a monoclonal antibody which neutralizes the toxins produced by B. anthracis. Obiltoxaximab is approved to treat inhalational anthrax in conjunction with appropriate antibacterial drugs, and for prevention when alternative therapies are not available or appropriate.
Prognosis
Cutaneous anthrax is rarely fatal if treated, because the infection area is limited to the skin, preventing the lethal factor, edema factor, and protective antigen from entering and destroying a vital organ. Without treatment, about 20% of cutaneous infection cases progress to toxemia and death. Before 2001, fatality rates for inhalation anthrax were 90%; since then, they have fallen to 45%. People who progress to the fulminant phase of inhalational anthrax nearly always die, with one case study showing a death rate of 97%. Anthrax meningoencephalitis is also nearly always fatal. GI infections can be treated, but usually result in fatality rates of 25% to 60%, depending upon how soon treatment commences. This form of anthrax is the rarest.
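For quick comparison, the case-fatality figures quoted in this section can be tabulated as below; this is a restatement of the numbers above, stored as fractions purely for display.

# Case-fatality figures restated from the prognosis text above.
FATALITY = {
    "cutaneous, untreated": (0.20, 0.20),
    "inhalational, before 2001": (0.90, 0.90),
    "inhalational, since 2001": (0.45, 0.45),
    "inhalational, fulminant phase": (0.97, 0.97),
    "gastrointestinal, treated": (0.25, 0.60),
}

for form, (lo, hi) in FATALITY.items():
    pct = f"{lo:.0%}" if lo == hi else f"{lo:.0%}-{hi:.0%}"
    print(f"{form}: {pct}")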
Epidemiology
Globally, at least 2,000 cases occur a year.
United States
The last fatal case of natural inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after working with infected wool imported from Pakistan. To minimize the chance of spreading the disease, the body was transported to UCLA in a sealed plastic body bag within a sealed metal container for autopsy. Gastrointestinal anthrax is exceedingly rare in the United States, with only two cases on record. The first case was reported in 1942, according to the Centers for Disease Control and Prevention. During December 2009, the New Hampshire Department of Health and Human Services confirmed a case of gastrointestinal anthrax in an adult female.
In 2007, two cases of cutaneous anthrax were reported in Danbury, Connecticut. The cases involved a maker of traditional African-style drums who was working with a goat hide purchased from a dealer in New York City which had previously been cleared by Customs. While the hide was being scraped, a spider bite led to the spores entering the bloodstream. His son also became infected. The CDC investigated the source of the December 2009 infection and the possibility that it was contracted from an African drum recently used by the woman taking part in a drum circle. The woman apparently inhaled anthrax, in spore form, from the hide of the drum. She became critically ill, but with gastrointestinal anthrax rather than inhaled anthrax, which made her unique in American medical history. The building where the infection took place was cleaned and reopened to the public, and the woman recovered. The New Hampshire state epidemiologist, Jodie Dionne-Odom, stated, "It is a mystery. We really don't know why it happened."
Croatia
In July 2022, dozens of cattle in a nature park in Lonjsko Polje, a flood plain by the Sava river, died of anthrax, and six people were hospitalized with light, skin-related symptoms.
United Kingdom
In November 2008, a drum maker in the United Kingdom who worked with untreated animal skins died from anthrax. In December 2009, an outbreak of anthrax occurred among heroin addicts in the Glasgow and Stirling areas of Scotland, resulting in 14 deaths. The source of the anthrax is believed to have been dilution of the heroin with bone meal in Afghanistan.
History
Discovery
Robert Koch, a German physician and scientist, first identified the bacterium that causes the anthrax disease in 1875 in Wollstein (now Wolsztyn, a town in Poland). His pioneering work in the late 19th century was one of the first demonstrations that diseases could be caused by microbes. In a groundbreaking series of experiments, he uncovered the lifecycle and means of transmission of anthrax. His experiments not only helped create an understanding of anthrax but also helped elucidate the role of microbes in causing illness at a time when debates still took place over spontaneous generation versus cell theory. Koch went on to study the mechanisms of other diseases and won the 1905 Nobel Prize in Physiology or Medicine for his discovery of the bacterium causing tuberculosis. Although Koch arguably made the greatest theoretical contribution to understanding anthrax, other researchers were more concerned with the practical questions of how to prevent the disease. In Britain, where anthrax affected workers in the wool, worsted, hides, and tanning industries, it was viewed with fear. John Henry Bell, a doctor born and based in Bradford, first made the link between the mysterious and deadly "woolsorters' disease" and anthrax, showing in 1878 that they were one and the same. In the early 20th century, Friederich Wilhelm Eurich, the German bacteriologist who settled in Bradford with his family as a child, carried out important research for the local Anthrax Investigation Board. Eurich also made valuable contributions to a Home Office Departmental Committee of Inquiry, established in 1913 to address the continuing problem of industrial anthrax. His work in this capacity, much of it in collaboration with the factory inspector G. Elmhirst Duckering, led directly to the Anthrax Prevention Act (1919).
First vaccination
Anthrax posed a major economic challenge in France and elsewhere during the 19th century. Horses, cattle, and sheep were particularly vulnerable, and national funds were set aside to investigate the production of a vaccine. French scientist Louis Pasteur was charged with the production of a vaccine, following his successful work in developing methods that helped to protect the important wine and silk industries. In May 1881, Pasteur – in collaboration with his assistants Jean-Joseph Henri Toussaint, Émile Roux and others – performed a public experiment at Pouilly-le-Fort to demonstrate his concept of vaccination. He prepared two groups of 25 sheep, one goat, and several cattle. The animals of one group were injected twice with an anthrax vaccine prepared by Pasteur, at an interval of 15 days; the control group was left unvaccinated. Thirty days after the first injection, both groups were injected with a culture of live anthrax bacteria. All the animals in the unvaccinated group died, while all of the animals in the vaccinated group survived. After this apparent triumph, which was widely reported in the local, national, and international press, Pasteur made strenuous efforts to export the vaccine beyond France. He used his celebrity status to establish Pasteur Institutes across Europe and Asia, and his nephew, Adrien Loir, travelled to Australia in 1888 to try to introduce the vaccine to combat anthrax in New South Wales. Ultimately, the vaccine was unsuccessful in the challenging climate of rural Australia, and it was soon superseded by a more robust version developed by local researchers John Gunn and John McGarvie Smith. The human vaccine for anthrax became available in 1954. This was a cell-free vaccine instead of the live-cell Pasteur-style vaccine used for veterinary purposes. An improved cell-free vaccine became available in 1970.
Engineered strains
The Sterne strain of anthrax, named after the Trieste-born immunologist Max Sterne, is an attenuated strain used as a vaccine; it contains only the anthrax toxin virulence plasmid and not the polyglutamic acid capsule-expressing plasmid.
Strain 836, created by the Soviet bioweapons program in the 1980s, was later called by the Los Angeles Times "the most virulent and vicious strain of anthrax known to man".
The virulent Ames strain, which was used in the 2001 anthrax attacks in the United States, has received the most news coverage of any anthrax outbreak. The Ames strain contains two virulence plasmids, which separately encode for a three-protein toxin, called anthrax toxin, and a polyglutamic acid capsule.
Nonetheless, the Vollum strain, developed but never used as a biological weapon during the Second World War, is much more dangerous. The Vollum (also incorrectly referred to as Vellum) strain was isolated in 1935 from a cow in Oxfordshire. This same strain was used during the Gruinard bioweapons trials. A variation of Vollum, known as "Vollum 1B", was used during the 1960s in the US and UK bioweapon programs. Vollum 1B is widely believed to have been isolated from William A. Boyles, a 46-year-old scientist at the US Army Biological Warfare Laboratories at Camp (later Fort) Detrick, Maryland, who died in 1951 after being accidentally infected with the Vollum strain.
US Air Force researchers have developed a vaccine strain to produce an improved anthrax vaccine which requires a minimal number of injections to achieve and maintain long-term immunity. It is designated as the Alls/Gifford (Curlicue) strain.
Society and culture
Site cleanup
Anthrax spores can survive for very long periods of time in the environment after release. Chemical methods for cleaning anthrax-contaminated sites or materials may use oxidizing agents such as peroxides, ethylene oxide, Sandia Foam, chlorine dioxide (used in the Hart Senate Office Building), peracetic acid, ozone gas, hypochlorous acid, sodium persulfate, and liquid bleach products containing sodium hypochlorite. Nonoxidizing agents shown to be effective for anthrax decontamination include methyl bromide, formaldehyde, and metam sodium. These agents destroy bacterial spores. All of the aforementioned anthrax decontamination technologies have been demonstrated to be effective in laboratory tests conducted by the US EPA or others. Decontamination techniques for Bacillus anthracis spores are affected by the material with which the spores are associated, environmental factors such as temperature and humidity, and microbiological factors such as the spore species and strain, and the test methods used. A bleach solution for treating hard surfaces has been approved by the EPA. Chlorine dioxide has emerged as the preferred biocide against anthrax-contaminated sites, having been employed in the treatment of numerous government buildings over the past decade. Its chief drawback is the need for in situ processes to have the reactant on demand.
To speed the process, trace amounts of a nontoxic catalyst composed of iron and tetra-amido macrocyclic ligands are combined with sodium carbonate and bicarbonate and converted into a spray. The spray formula is applied to an infested area and is followed by another spray containing tert-butyl hydroperoxide. Using the catalyst method, complete destruction of all anthrax spores can be achieved in under 30 minutes. A standard catalyst-free spray destroys fewer than half the spores in the same amount of time.
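The comparison above implies a large difference in kill kinetics. As a minimal sketch, assuming first-order kinetics (an assumption, not stated in the text): if the catalyst-free spray destroys fewer than half the spores in 30 minutes, its rate constant is at most ln(2)/30 per minute, and the surviving fraction after t minutes is exp(-kt).

import math

def surviving_fraction(k_per_min, minutes):
    # First-order decay: fraction of spores still viable after `minutes`.
    return math.exp(-k_per_min * minutes)

k_standard = math.log(2) / 30                 # upper bound for the plain spray
print(surviving_fraction(k_standard, 30))     # ~0.50 after 30 minutes
print(surviving_fraction(k_standard, 120))    # ~0.06 even after two hours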
Cleanups at a Senate Office Building, several contaminated postal facilities, and other US government and private office buildings, in a collaborative effort headed by the Environmental Protection Agency, showed decontamination to be possible, but time-consuming and costly. Clearing the Senate Office Building of anthrax spores cost $27 million, according to the Government Accountability Office. Cleaning the Brentwood postal facility in Washington cost $130 million and took 26 months. Since then, newer and less costly methods have been developed. Cleanup of anthrax-contaminated areas on ranches and in the wild is much more problematic. Carcasses may be burned, though often three days are needed to burn a large carcass, and this is not feasible in areas with little wood. Carcasses may also be buried, though the burying of large animals deeply enough to prevent resurfacing of spores requires much manpower and expensive tools. Carcasses have been soaked in formaldehyde to kill spores, though this has environmental contamination issues. Block burning of vegetation in large areas enclosing an anthrax outbreak has been tried; this, while environmentally destructive, causes healthy animals to move away from an area with carcasses in search of fresh grass. Some wildlife workers have experimented with covering fresh anthrax carcasses with shadecloth and heavy objects. This prevents some scavengers from opening the carcasses, thus allowing the putrefactive bacteria within the carcass to kill the vegetative B. anthracis cells and preventing sporulation. This method also has drawbacks, as scavengers such as hyenas are capable of infiltrating almost any exclosure. The experimental site at Gruinard Island is said to have been decontaminated with a mixture of formaldehyde and seawater by the Ministry of Defence. It is not clear whether similar treatments had been applied to US test sites.
Biological warfare
Anthrax spores have been used as a biological warfare weapon. The first modern incident occurred when Nordic rebels, supplied by the German General Staff, used anthrax with unknown results against the Imperial Russian Army in Finland in 1916. Anthrax was first tested as a biological warfare agent by Unit 731 of the Japanese Kwantung Army in Manchuria during the 1930s; some of this testing involved intentional infection of prisoners of war, thousands of whom died. Anthrax, designated at the time as Agent N, was also investigated by the Allies in the 1940s. A long history of practical bioweapons research exists in this area. For example, in 1942, British bioweapons trials severely contaminated Gruinard Island in Scotland with anthrax spores of the Vollum-14578 strain, making it a no-go area until it was decontaminated in 1990. The Gruinard trials involved testing the effectiveness of a submunition of an "N-bomb" – a biological weapon containing dried anthrax spores. Additionally, five million "cattle cakes" (animal feed pellets impregnated with anthrax spores) were prepared and stored at Porton Down for "Operation Vegetarian" – anti-livestock attacks against Germany to be made by the Royal Air Force. The plan was for anthrax-based biological weapons to be dropped on Germany in 1944. However, the edible cattle cakes and the bomb were not used; the cattle cakes were incinerated in late 1945.
Weaponized anthrax was part of the US stockpile prior to 1972, when the United States signed the Biological Weapons Convention. President Nixon ordered the dismantling of US biowarfare programs in 1969 and the destruction of all existing stockpiles of bioweapons. In 1978–79, the Rhodesian government used anthrax against cattle and humans during its campaign against rebels. The Soviet Union created and stored 100 to 200 tons of anthrax spores at Kantubek on Vozrozhdeniya Island; they were abandoned in 1992 and destroyed in 2002. American military and British Army personnel are no longer routinely vaccinated against anthrax prior to active service in places where biological attacks are considered a threat.
Sverdlovsk incident (2 April 1979)
Despite signing the 1972 agreement to end bioweapon production, the government of the Soviet Union had an active bioweapons program that included the production of hundreds of tons of anthrax after this period. On 2 April 1979, some of the over one million people living in Sverdlovsk (now called Ekaterinburg, Russia), about 1,370 kilometres (850 mi) east of Moscow, were exposed to an accidental release of anthrax from a biological weapons complex located near there. At least 94 people were infected, of whom at least 68 died. One victim died four days after the release, 10 over an eight-day period at the peak of the deaths, and the last six weeks later. Extensive cleanup, vaccinations, and medical interventions managed to save about 30 of the victims. Extensive cover-ups and destruction of records by the KGB continued from 1979 until Russian President Boris Yeltsin admitted this anthrax accident in 1992. Jeanne Guillemin reported in 1999 that a combined Russian and United States team investigated the accident in 1992. Nearly all of the night-shift workers of a ceramics plant directly across the street from the biological facility (compound 19) became infected, and most died. Since most were men, some NATO governments suspected the Soviet Union had developed a sex-specific weapon. The government blamed the outbreak on the consumption of anthrax-tainted meat, and ordered the confiscation of all uninspected meat that entered the city. They also ordered all stray dogs to be shot and people not to have contact with sick animals. Also, a voluntary evacuation and anthrax vaccination program was established for people from 18 to 55. To support the cover-up story, Soviet medical and legal journals published articles about an outbreak in livestock that caused GI anthrax in people who had consumed infected meat, and cutaneous anthrax in people who had come into contact with the animals. All medical and public health records were confiscated by the KGB. In addition to the medical problems the outbreak caused, it also prompted Western countries to be more suspicious of a covert Soviet bioweapons program and to increase their surveillance of suspected sites. In 1986, the US government was allowed to investigate the incident, and concluded the exposure was from aerosol anthrax from a military weapons facility. In 1992, President Yeltsin admitted he was "absolutely certain" that "rumors" about the Soviet Union violating the 1972 Bioweapons Treaty were true. The Soviet Union, like the US and UK, had agreed to submit information to the UN about their bioweapons programs, but omitted known facilities and never acknowledged their weapons program.
Anthrax bioterrorism
In theory, anthrax spores can be cultivated with minimal special equipment and a first-year collegiate microbiological education.
To make large amounts of an aerosol form of anthrax suitable for biological warfare requires extensive practical knowledge, training, and highly advanced equipment. Concentrated anthrax spores were used for bioterrorism in the 2001 anthrax attacks in the United States, delivered by mailing postal letters containing the spores. The letters were sent to several news media offices and two Democratic senators: Tom Daschle of South Dakota and Patrick Leahy of Vermont. As a result, 22 people were infected and five died. </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anticholinergic and can you explain it in depth? [/INST] Anticholinergics (anticholinergic agents) are substances that block the action of the neurotransmitter called acetylcholine (ACh) at synapses in the central and peripheral nervous system. These agents inhibit the parasympathetic nervous system by selectively blocking the binding of ACh to its receptor in nerve cells. The nerve fibers of the parasympathetic system are responsible for the involuntary movement of smooth muscles present in the gastrointestinal tract, urinary tract, lungs, sweat glands, and many other parts of the body. In broad terms, anticholinergics are divided into two categories in accordance with their specific targets in the central and peripheral nervous system and at the neuromuscular junction: antimuscarinic agents, and antinicotinic agents (ganglionic blockers, neuromuscular blockers). The term "anticholinergic" is typically used to refer to antimuscarinics, which competitively inhibit the binding of ACh to muscarinic acetylcholine receptors; such agents do not antagonize the binding at nicotinic acetylcholine receptors at the neuromuscular junction, although the term is sometimes used to refer to agents which do so.
Medical uses
Anticholinergic drugs are used to treat a variety of conditions:
Dizziness (including vertigo and motion sickness-related symptoms)
Extrapyramidal symptoms, a potential side-effect of antipsychotic medications
Gastrointestinal disorders (e.g., peptic ulcers, diarrhea, pylorospasm, diverticulitis, ulcerative colitis, nausea, and vomiting)
Genitourinary disorders (e.g., cystitis, urethritis, and prostatitis)
Insomnia, although usually only on a short-term basis
Respiratory disorders (e.g., asthma, chronic bronchitis, and chronic obstructive pulmonary disease [COPD])
Sinus bradycardia due to a hypersensitive vagus nerve
Organophosphate-based nerve agent poisoning, such as VX, sarin, tabun, and soman (atropine is favoured in conjunction with an oxime, usually pralidoxime)
Anticholinergics generally have antisialagogue effects (decreasing saliva production), and most produce some level of sedation, both being advantageous in surgical procedures. Until the beginning of the 20th century anticholinergic drugs were widely used to treat psychiatric disorders.
Physiological effects
Delirium (often with hallucinations and delusions indistinguishable from reality)
Ocular symptoms (from eye drops): mydriasis (pupil dilation) and acute angle-closure glaucoma in those with a shallow anterior chamber
Anhidrosis, dry mouth, dry skin
Fever
Constipation
Tachycardia
Urinary retention
Cutaneous vasodilation
Clinically, the most significant feature is delirium, particularly in the elderly, who are most likely to be affected by the toxidrome.
Side effects
Long-term use may increase the risk of both cognitive and physical decline. It is unclear whether anticholinergics affect the risk of death generally; in older adults, however, they do appear to increase the risk of death. Possible peripheral effects of anticholinergics include many of those listed under physiological effects above, such as dry mouth and dry skin, constipation, tachycardia, and urinary retention.
Possible effects in the central nervous system resemble those associated with delirium, and may include confusion, disorientation, agitation, and hallucinations.
Older patients are at a higher risk of experiencing CNS side effects.
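One way clinicians quantify this risk is with cumulative anticholinergic burden scales, such as the Anticholinergic Cognitive Burden (ACB) scale, which assigns each drug a score (roughly 1 for possible and 2–3 for definite anticholinergic activity) and sums the scores across a patient's medication list. The sketch below illustrates only the summing logic; the drug names and scores are hypothetical placeholders, not clinical data, and the scale itself is not described in the source text.

ILLUSTRATIVE_SCORES = {"drug_a": 1, "drug_b": 2, "drug_c": 3}  # hypothetical

def total_burden(medications):
    # Drugs not on the list contribute 0 in this toy example.
    return sum(ILLUSTRATIVE_SCORES.get(m, 0) for m in medications)

score = total_burden(["drug_a", "drug_c"])
print(score, "- higher totals indicate greater anticholinergic burden")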
Toxicity
An acute anticholinergic syndrome is reversible and subsides once all of the causative agents have been excreted. Reversible acetylcholinesterase inhibitor agents such as physostigmine can be used as an antidote in life-threatening cases. Wider use is discouraged due to the significant side effects related to cholinergic excess including seizures, muscle weakness, bradycardia, bronchoconstriction, lacrimation, salivation, bronchorrhea, vomiting, and diarrhea. Even in documented cases of anticholinergic toxicity, seizures have been reported after the rapid administration of physostigmine </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Antisialagogue and can you explain it in depth? [/INST] An antisialagogue is a drug or substance that decreases the flow rate of saliva, i.e., reduces saliva production. Anticholinergic drugs generally have antisialagogue effects, and this drying of secretions can be advantageous in surgical procedures. </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Aphthous stomatitis and can you explain it in depth? [/INST] Aphthous stomatitis, or recurrent aphthous stomatitis (RAS), is a common condition characterized by the repeated formation of benign and non-contagious mouth ulcers (aphthae) in otherwise healthy individuals. The informal term canker sore is also used, mainly in North America, although it may also refer to other types of mouth ulcers. The cause is not completely understood but involves a T cell-mediated immune response triggered by a variety of factors which may include nutritional deficiencies, local trauma, stress, hormonal influences, allergies, genetic predisposition, certain foods, dehydration, some food additives, or some hygienic chemical additives such as sodium dodecyl sulfate (SDS, also known as sodium lauryl sulphate, SLS), a detergent common in toothpaste.
These ulcers occur periodically and heal completely between attacks. In the majority of cases, the individual ulcers last about 7–10 days, and ulceration episodes occur 3–6 times per year. Most appear on the non-keratinizing epithelial surfaces in the mouth – i.e. anywhere except the attached gingiva, the hard palate and the dorsum of the tongue – although the more severe forms, which are less common, may also involve keratinizing epithelial surfaces. Symptoms range from a minor nuisance to interfering with eating and drinking. The severe forms may be debilitating, even causing weight loss due to malnutrition.
The condition is very common, affecting about 20% of the general population to some degree. The onset is often during childhood or adolescence, and the condition usually lasts for several years before gradually disappearing. There is no cure, and treatments such as corticosteroids aim to manage pain, reduce healing time and reduce the frequency of episodes of ulceration.
Signs and symptoms
Persons with aphthous stomatitis have no detectable systemic symptoms or signs (i.e., outside the mouth). Generally, symptoms may include prodromal sensations such as burning, itching, or stinging, which may precede the appearance of any lesion by some hours; and pain, which is often out of proportion to the extent of the ulceration and is worsened by physical contact, especially with certain foods and drinks (e.g., if they are acidic or abrasive). Pain is worst in the days immediately following the initial formation of the ulcer, and then recedes as healing progresses. If there are lesions on the tongue, speaking and chewing can be uncomfortable, and ulcers on the soft palate, back of the throat, or esophagus can cause painful swallowing. Signs are limited to the lesions themselves.
Ulceration episodes usually occur about 3–6 times per year. However, severe disease is characterized by virtually constant ulceration (new lesions developing before old ones have healed) and may cause debilitating chronic pain and interfere with comfortable eating. In severe cases, this prevents adequate nutrient intake leading to malnutrition and weight loss. Aphthous ulcers typically begin as erythematous macules (reddened, flat area of mucosa) which develop into ulcers that are covered with a yellow-grey fibrinous membrane that can be scraped away. A reddish "halo" surrounds the ulcer. The size, number, location, healing time, and periodicity between episodes of ulcer formation are all dependent upon the subtype of aphthous stomatitis.
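Taken together, the frequency and duration figures above imply a rough annual burden, as this small calculation shows (simple arithmetic on the quoted ranges, nothing more):

episodes_per_year = (3, 6)   # typical episode frequency quoted above
days_per_episode = (7, 10)   # typical ulcer duration quoted earlier

low = episodes_per_year[0] * days_per_episode[0]    # 21 days/year
high = episodes_per_year[1] * days_per_episode[1]   # 60 days/year
print(f"Typical burden: {low}-{high} ulcer-days per year")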
Causes
The cause is not entirely clear, but is thought to be multifactorial. It has been suggested that aphthous stomatitis is not a single entity, but rather a group of conditions with different causes. Multiple research studies have attempted to identify a causative organism, but aphthous stomatitis appears to be non-contagious, non-infectious, and not sexually transmissible. The mucosal destruction is thought to be the result of a T cell (T lymphocyte) mediated immune response which involves the generation of interleukins and tumor necrosis factor alpha (TNF-α). Mast cells and macrophages are also involved, secreting TNF-α along with the T cells. When early aphthous ulcers are biopsied, the histologic appearance shows a dense inflammatory infiltrate, 80% of which is made up of T cells. Persons with aphthous stomatitis also have circulating lymphocytes which react with peptides 91–105 of heat shock protein 65–60, and the ratio of CD4+ T cells to CD8+ T cells in the peripheral blood of individuals with aphthous stomatitis is decreased. Aphthous stomatitis has been associated with other autoimmune diseases, namely systemic lupus erythematosus, Behçet's disease and inflammatory bowel diseases. However, common autoantibodies are not detected in most patients, and the condition tends to resolve spontaneously with advancing age rather than worsen.
Evidence for the T cell-mediated mechanism of mucosal destruction is strong, but the exact triggers for this process are unknown and are thought to be multiple and varied from one person to the next. This suggests that there are a number of possible triggers, each of which is capable of producing the disease in different subgroups. In other words, different subgroups appear to have different causes for the condition. These can be considered in three general groups, namely primary immuno-dysregulation, decrease of the mucosal barrier and states of heightened antigenic sensitivity (see below). Risk factors in aphthous stomatitis are also sometimes considered as either host-related or environmental.
Immunity
At least 40% of people with aphthous stomatitis have a positive family history, suggesting that some people are genetically predisposed to developing oral ulceration. HLA-B12, HLA-B51, HLA-Cw7, HLA-A2, HLA-A11, and HLA-DR2 are examples of human leukocyte antigen types associated with aphthous stomatitis. However, these HLA types are inconsistently associated with the condition, and also vary according to ethnicity. People who have a positive family history of aphthous stomatitis tend to develop a more severe form of the condition, and at an earlier age than is typical. Stress has effects on the immune system, which may explain why some cases directly correlate with stress. It is often stated that in studies of students with the condition, ulceration is exacerbated during examination periods and lessened during periods of vacation. Alternatively, it has been suggested that oral parafunctional activities such as lip or cheek chewing become more pronounced during periods of stress, and hence the mucosa is subjected to more minor trauma. Aphthous-like ulceration also occurs in conditions involving systemic immuno-dysregulation, e.g. cyclic neutropenia and human immunodeficiency virus infection. In cyclic neutropenia, more severe oral ulceration occurs during periods of severe immuno-dysregulation, and resolution of the underlying neutropenia is associated with healing of the ulcers. The relative increase in percentage of CD8+ T cells, caused by a reduction in numbers of CD4+ T cells, may be implicated in RAS-type ulceration in HIV infection.
Mucosal barrier
The thickness of the mucosa may be an important factor in aphthous stomatitis. Usually, ulcers form on the thinner, non-keratinizing mucosal surfaces in the mouth. Factors which decrease the thickness of the mucosa increase the frequency of occurrence, and factors which increase the thickness of the mucosa correlate with decreased ulceration. The nutritional deficiencies associated with aphthous stomatitis (vitamin B12, folate, and iron) can all cause a decrease in the thickness of the oral mucosa (atrophy). Local trauma is also associated with aphthous stomatitis, and it is known that trauma can decrease the mucosal barrier. Trauma could occur during injections of local anesthetic in the mouth, or otherwise during dental treatments, from frictional trauma from a sharp surface in the mouth such as a broken tooth, or from tooth brushing. Hormonal factors are capable of altering the mucosal barrier. In one study, a small group of females with aphthous stomatitis had fewer occurrences of aphthous ulcers during the luteal phase of the menstrual cycle or with use of the contraceptive pill. This phase is associated with a fall in progestogen levels, mucosal proliferation and keratinization. This subgroup often experiences remission during pregnancy. However, other studies report no correlation between aphthous stomatitis and menstrual period, pregnancy or menopause. Aphthous stomatitis is less common in people who smoke, and there is also a correlation between habit duration and severity of the condition. Tobacco use is associated with an increase in keratinization of the oral mucosa. In extreme forms, this may manifest as leukoplakia or stomatitis nicotina (smoker's keratosis). This increased keratinization may mechanically reinforce the mucosa and reduce the tendency of ulcers to form after minor trauma, or present a more substantial barrier to microbes and antigens, but this is unclear. Nicotine is also known to stimulate production of adrenal steroids and reduce production of TNF-α, interleukin-1 and interleukin-6. Smokeless tobacco products also seem to protect against aphthous stomatitis. Cessation of smoking is known to sometimes precede the onset of aphthous stomatitis in people previously unaffected, or exacerbate the condition in those who were already experiencing aphthous ulceration. Despite this correlation, starting smoking again does not usually lessen the condition.
Antigenic sensitivity
Various antigenic triggers have been implicated, including L forms of streptococci, herpes simplex virus, varicella-zoster virus, adenovirus, and cytomegalovirus. Some people with aphthous stomatitis may show herpes virus within the epithelium of the mucosa, but without any productive infection. In some persons, attacks of ulceration occur at the same time as asymptomatic viral shedding and elevated viral titres. In some instances, recurrent mouth ulcers may be a manifestation of an allergic reaction. Possible allergens include certain foods (e.g., chocolate, coffee, strawberries, eggs, nuts, tomatoes, cheese, citrus fruits, benzoates, cinnamaldehyde, and highly acidic foods), toothpastes, and mouthwashes. Where dietary allergens are responsible, mouth ulcers usually develop within about 12–24 hours of exposure. Sodium lauryl sulphate (SLS), a detergent present in some brands of toothpaste and other oral healthcare products, may produce oral ulceration in some individuals. It has been shown that aphthous stomatitis is more common in people using toothpastes containing SLS, and that some reduction in ulceration occurs when an SLS-free toothpaste is used. Some have argued that since SLS is almost ubiquitously used in oral hygiene products, there is unlikely to be a true predisposition for aphthous stomatitis caused by SLS.
Systemic disease
Aphthous-like ulceration may occur in association with several systemic disorders. These ulcers are clinically and histopathologically identical to the lesions of aphthous stomatitis, but this type of oral ulceration is not considered to be true aphthous stomatitis by some sources. Some of these conditions may cause ulceration on other mucosal surfaces in addition to the mouth, such as the conjunctiva or the genital mucous membranes. Resolution of the systemic condition often leads to decreased frequency and severity of the oral ulceration. Behçet's disease is a triad of mouth ulcers, genital ulcers and anterior uveitis. The main feature of Behçet's disease is aphthous-like ulceration, but this is usually more severe than seen in aphthous stomatitis without a systemic cause, and typically resembles major or herpetiform ulceration or both. Aphthous-like ulceration is the first sign of the disease in 25–75% of cases. Behçet's disease is more common in individuals whose ethnic origin is from regions along the Silk Road (between the Mediterranean and the Far East). It tends to be rare in other countries such as the United States and the United Kingdom. MAGIC syndrome is a possible variant of Behçet's disease, and is associated with aphthous-like ulceration. The name stands for "mouth and genital ulcers with inflamed cartilage" (relapsing polychondritis). PFAPA syndrome is a rare condition that tends to occur in children. The name stands for "periodic fever, aphthae, pharyngitis (sore throat) and cervical adenitis" (inflammation of the lymph nodes in the neck). The fevers occur periodically about every 3–5 weeks. The condition appears to improve with tonsillectomy or immunosuppression, suggesting an immunologic cause. In cyclic neutropenia, there is a reduction in the level of circulating neutrophils in the blood that occurs about every 21 days. Opportunistic infections commonly occur and aphthous-like ulceration is worst during this time. Hematinic deficiencies (vitamin B12, folic acid and iron), occurring singly or in combination, and with or without any underlying gastrointestinal disease, may be twice as common in people with RAS. However, iron and vitamin supplements only infrequently improve the ulceration. The relationship to vitamin B12 deficiency has been the subject of many studies. Although these studies found that 0–42% of those with recurrent ulcers have a vitamin B12 deficiency, an association with deficiency is rare. Even in the absence of deficiency, vitamin B12 supplementation may be helpful due to unclear mechanisms. Hematinic deficiencies can cause anemia, which is also associated with aphthous-like ulceration. Gastrointestinal disorders are sometimes associated with aphthous-like stomatitis, e.g. most commonly celiac disease, but also inflammatory bowel disease such as Crohn's disease or ulcerative colitis. The link between gastrointestinal disorders and aphthous stomatitis is probably related to nutritional deficiencies caused by malabsorption. Less than 5% of people with RAS have celiac disease, which usually presents with severe malnutrition, anemia, abdominal pain, diarrhea and glossitis (inflammation of the tongue). Sometimes aphthous-like ulcerations can be the only sign of celiac disease. Despite this association, a gluten-free diet does not usually improve the oral ulceration. Other examples of systemic conditions associated with aphthous-like ulceration include reactive arthritis and recurrent erythema multiforme.
Diagnosis
Diagnosis is mostly based on the clinical appearance and the medical history. The most important diagnostic feature is a history of recurrent, self-healing ulcers at fairly regular intervals. Although there are many causes of oral ulceration, recurrent oral ulceration has relatively few causes, most commonly aphthous stomatitis, but rarely Behçet's disease, erythema multiforme, ulceration associated with gastrointestinal disease, and recurrent intra-oral herpes simplex infection. A systemic cause is more likely in adults who suddenly develop recurrent oral ulceration with no prior history.
Special investigations may be indicated to rule out other causes of oral ulceration. These include blood tests to exclude anemia, deficiencies of iron, folate or vitamin B12, or celiac disease. However, the nutritional deficiencies may be latent and the peripheral blood picture may appear relatively normal. Some suggest that screening for celiac disease should form part of the routine work-up for individuals complaining of recurrent oral ulceration. Many of the systemic diseases cause other symptoms apart from oral ulceration, in contrast to aphthous stomatitis, where there is isolated oral ulceration. Patch testing may be indicated if allergies are suspected (e.g., a strong relationship between certain foods and episodes of ulceration). Several drugs can cause oral ulceration (e.g., nicorandil), and a trial substitution to an alternative drug may highlight a causal relationship.
Tissue biopsy is not usually required, except to rule out other suspected conditions such as oral squamous cell carcinoma. The histopathologic appearance is not pathognomonic (the microscopic appearance is not specific to the condition). Early lesions have a central zone of ulceration covered by a fibrinous membrane. In the connective tissue deep to the ulcer there is increased vascularity and a mixed inflammatory infiltrate composed of lymphocytes, histiocytes and polymorphonuclear leukocytes. The epithelium on the margins of the ulcer shows spongiosis and there are many mononuclear cells in the basal third. There are also lymphocytes and histiocytes in the connective tissue surrounding deeper blood vessels near the ulcer, described histologically as "perivascular cuffing".
Classification
Aphthous stomatitis has been classified as a type of non-infectious stomatitis (inflammation of the mouth). One classification distinguishes "common simple aphthae", accounting for 95% of cases, with 3–6 attacks per year, rapid healing, minimal pain and restriction of ulceration to the mouth; and "complex aphthae", accounting for 5% of cases, where ulcers may be present on the genital mucosa in addition to mouth, healing is slower and pain is more severe. A more common method of classifying aphthous stomatitis is into three variants, distinguished by the size, number and location of the lesions, the healing time of individual ulcers and whether a scar is left after healing (see below).
Minor aphthous ulceration
This is the most common type of aphthous stomatitis, accounting for about 80–85% of all cases. This subtype is termed minor aphthous ulceration (MiAU), or minor recurrent aphthous stomatitis (MiRAS). The lesions themselves may be referred to as minor aphthae or minor aphthous ulcers. These lesions are generally less than 10 mm in diameter (usually about 2–3 mm), and affect non-keratinized mucosal surfaces (i.e. the labial and buccal mucosa, lateral borders of the tongue and the floor of the mouth). Usually several ulcers appear at the same time, but single ulcers are possible. Healing usually takes seven to ten days and leaves no scar. Between episodes of ulceration, there is usually an ulcer-free period of variable length.
Major aphthous ulceration
This subtype makes up about 10% of all cases of aphthous stomatitis. It is termed major aphthous ulceration (MaAU) or major recurrent aphthous stomatitis (MaRAS). Major aphthous ulcers (major aphthae) are similar to minor aphthous ulcers, but are more than 10 mm in diameter and the ulceration is deeper. Because the lesions are larger, healing takes longer (about twenty to thirty days), and may leave scars. Each episode of ulceration usually produces a greater number of ulcers, and the interval between attacks is shorter than in minor aphthous stomatitis. Major aphthous ulceration usually affects non-keratinized mucosal surfaces, but less commonly keratinized mucosa may also be involved, such as the dorsum (top surface) of the tongue or the gingiva (gums). The soft palate or the fauces (back of the throat) may also be involved, the latter being part of the oropharynx rather than the oral cavity. Compared to minor aphthous ulceration, major aphthae tend to have an irregular outline.
Herpetiform ulceration
Herpetiform ulceration (also termed stomatitis herpetiformis or herpes-like ulceration) is a subtype of aphthous stomatitis, so named because the lesions resemble a primary infection with herpes simplex virus (primary herpetic gingivostomatitis). However, herpetiform ulceration is not caused by herpes viruses. As with all types of aphthous stomatitis, it is not contagious. Unlike true herpetic ulcers, herpetiform ulcers are not preceded by vesicles (small, fluid-filled blisters). Herpetiform ulcers are less than 1 mm in diameter and occur in variably sized crops of up to one hundred at a time. Adjacent ulcers may merge to form larger, continuous areas of ulceration. Healing occurs within fifteen days without scarring. The ulceration may affect keratinized mucosal surfaces in addition to non-keratinized. Herpetiform ulceration is often extremely painful, and the lesions recur more frequently than minor or major aphthous ulcers. Recurrence may be so frequent that ulceration is virtually continuous. It generally occurs in a slightly older age group than the other subtypes, and females are affected slightly more frequently than males.
RAS type ulceration
Recurrent oral ulceration associated with systemic conditions is termed "RAS-type ulceration", "RAS-like ulceration", or "aphthous-like ulcers". True aphthous stomatitis, by contrast, occurs in individuals with no associated systemic disease. Persons with certain systemic diseases may be prone to oral ulceration, but this is secondary to the underlying medical condition (see the systemic disease section). This kind of ulceration is considered by some to be separate from true aphthous stomatitis. However, this definition is not strictly applied. For example, many sources refer to oral ulceration caused by anemia and/or nutritional deficiencies as aphthous stomatitis, and some also consider Behçet's disease to be a variant.
Treatment
The vast majority of people with aphthous stomatitis have minor symptoms and do not require any specific therapy. The pain is often tolerable with simple dietary modification during an episode of ulceration, such as avoiding spicy and acidic foods and beverages. Many different topical and systemic medications have been proposed (see table), sometimes showing little or no evidence of usefulness when formally investigated. Some of the results of interventions for RAS may in truth represent a placebo effect. No therapy is curative; treatment aims to relieve pain, promote healing and reduce the frequency of episodes of ulceration.
Medication
The first line of therapy for aphthous stomatitis is topical agents rather than systemic medication, with topical corticosteroids being the mainstay of treatment. Systemic treatment is usually reserved for severe disease due to the risk of adverse side effects associated with many of these agents. A systematic review found no single systemic intervention to be effective. Good oral hygiene is important to prevent secondary infection of the ulcers.
Occasionally, in females where ulceration is correlated to the menstrual cycle or to birth control pills, progestogen or a change in birth control may be beneficial. Use of nicotine replacement therapy for people who have developed oral ulceration after stopping smoking has also been reported. Starting smoking again does not usually lessen the condition. Trauma can be reduced by avoiding rough or sharp foodstuffs and by brushing teeth with care. If sodium lauryl sulfate is suspected to be the cause, avoidance of products containing this chemical may be useful and prevent recurrence in some individuals. Similarly, patch testing may indicate that food allergy is responsible, and the diet can be modified accordingly. If investigations reveal deficiency states, correction of the deficiency may result in resolution of the ulceration. For example, there is some evidence that vitamin B12 supplementation may prevent recurrence in some individuals.
Other
Surgical excision of aphthous ulcers has been described, but it is an ineffective and inappropriate treatment. Silver nitrate has also been used as a chemical cauterant. Apart from the mainstream approaches detailed above, there are numerous treatments of unproven effectiveness, ranging from herbal remedies to other alternative treatments, including Aloe vera, Myrtus communis, Rosa damascena, potassium alum, zinc sulfate, nicotine, polio virus vaccine and prostaglandin E2.
Prognosis
By definition, there is no serious underlying medical condition, and, most importantly, the ulcers do not represent oral cancer, nor are they infectious. However, aphthae are capable of causing significant discomfort. There is a spectrum of severity, with symptoms ranging from a minor nuisance to disabling. In severe cases, pain during eating may limit food intake enough to cause weight loss. Usually, the condition lasts for several years before spontaneously disappearing in later life.
Epidemiology
Aphthous stomatitis affects between 5% and 66% of people, with about 20% of individuals in most populations having the condition to some degree. This makes it the most common disease of the oral mucosa. Aphthous stomatitis occurs worldwide, but is more common in developed countries.
Within nations, it is more common in higher socioeconomic groups. Males and females are affected in an equal ratio, and the peak age of onset is between 10 and 19 years. About 80% of people with aphthous stomatitis first developed the condition before the age of 30. There have been reports of ethnic variation. For example, in the United States, aphthous stomatitis may be three times more common in white-skinned people than black-skinned people.
History, society and culture
"Aphthous affectations" and "aphthous ulcerations" of the mouth are mentioned several times in the treatise "Of the Epidemics" (part of the Hippocratic corpus, in the 4th century BCE), although it seems likely that this was oral ulceration as a manifestation of some infectious disease, since they are described as occurring in epidemic-like patterns, with concurrent symptoms such as fever.
Aphthous stomatitis was once thought to be a form of recurrent herpes simplex virus infection, and some clinicians still refer to the condition as "herpes" despite this cause having been disproven.
The informal term "canker sore" is sometimes used, mainly in North America, either to describe this condition generally, to refer to the individual ulcers of this condition, or to refer to mouth ulcers of any cause unrelated to this condition. The origin of the word "canker" is thought to have been influenced by Latin, Old English, Middle English and Old North French. In Latin, cancer translates to "malignant tumor" or literally "crab" (related to the likening of sectioned tumors to the limbs of a crab). The closely related word in Middle English and Old North French, chancre, now more usually applied to syphilis, is also thought to be involved. Despite this etymology, aphthous stomatitis is not a form of cancer but rather entirely benign.
An aphtha (plural aphthae) is a non-specific term that refers to an ulcer of the mouth. The word is derived from the Greek word aphtha, meaning "eruption" or "ulcer". </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is APL and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Apnea of prematurity and can you explain it in depth? [/INST] Apnea of prematurity is defined as cessation of breathing by a premature infant that lasts for more than 20 seconds and/or is accompanied by hypoxia or bradycardia. Apnea is traditionally classified as either obstructive, central, or mixed. Obstructive apnea may occur when the infant's neck is hyperflexed or, conversely, hyperextended. It may also occur due to low pharyngeal muscle tone or to inflammation of the soft tissues, which can block the flow of air through the pharynx and vocal cords. Central apnea occurs when there is a lack of respiratory effort. This may result from central nervous system immaturity, or from the effects of medications or illness. Many episodes of apnea of prematurity may start as either obstructive or central, but then involve elements of both, becoming mixed in nature.
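As an illustration only, the definition above can be operationalized as a simple classification rule. A minimal sketch in Python, in which the hypoxia and bradycardia cutoffs are hypothetical placeholders, since actual thresholds vary by institution and monitor:

    def is_apnea_of_prematurity(pause_seconds, spo2_percent, heart_rate_bpm,
                                spo2_cutoff=85, bradycardia_cutoff=100):
        # Per the definition above: a pause qualifies if it lasts more than
        # 20 seconds, and/or is accompanied by hypoxia or bradycardia.
        # The cutoff values are illustrative assumptions, not standards.
        hypoxia = spo2_percent < spo2_cutoff
        bradycardia = heart_rate_bpm < bradycardia_cutoff
        return pause_seconds > 20 or hypoxia or bradycardia

    # Example: a 15-second pause with a heart rate of 80 bpm still qualifies
    print(is_apnea_of_prematurity(15, 95, 80))  # True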
Pathophysiology
Ventilatory drive is primarily dependent on response to increased levels of carbon dioxide (CO2) and acid in the blood. A secondary stimulus is hypoxia. Responses to these stimuli are impaired in premature infants due to immaturity of specialized regions of the brain that sense these changes. In addition, premature infants have an exaggerated response to laryngeal stimulation (a normal reflex that closes the airway as a protective measure).
Diagnosis
Apnea of prematurity can be readily distinguished from other forms of infant apnea, such as obstructive apnea, hypoventilation syndromes, breathing regulation issues during feeding, and reflux-associated apnea, with an infant pneumogram or infant apnea/sleep study.
Neonatal apnea has been reported in almost all infants with a gestational age of less than 29 weeks or a birth weight of less than 1000 g.
Treatment
Medications
Methylxanthines (theophylline and caffeine) have been used for almost three decades to treat apnea of prematurity. Despite this prevalent use, there are concerns about long-term negative effects from the use of caffeine. Caffeine and theophylline have similar short-term effects on apnea, but theophylline is associated with higher rates of toxicity.
Respiratory support
Simple tactile stimulation by touching the skin or patting the infant may stop an apneic episode by raising the infant's level of alertness. Increasing the environmental oxygen level by placing the infant in a tent or hood with supplemental oxygen can diminish the frequency of apnea of prematurity (AOP), and may also help the infant maintain adequate oxygenation during short episodes of apnea. Increased oxygen at low levels can also be delivered using a nasal cannula, which additionally may provide some stimulation due to the tactile stimulation of the cannula </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Arsenic poisoning and can you explain it in depth? [/INST] Arsenic poisoning is a medical condition that occurs due to elevated levels of arsenic in the body. If arsenic poisoning occurs over a brief period of time, symptoms may include vomiting, abdominal pain, encephalopathy, and watery diarrhea that contains blood. Long-term exposure can result in thickening of the skin, darker skin, abdominal pain, diarrhea, heart disease, numbness, and cancer.
The most common reason for long-term exposure is contaminated drinking water. Groundwater most often becomes contaminated naturally; however, contamination may also occur from mining or agriculture. It may also be found in the soil and air. Recommended levels in water are less than 10–50 µg/L (10–50 parts per billion). Other routes of exposure include toxic waste sites and traditional medicines. Most cases of poisoning are accidental. Arsenic acts by changing the functioning of around 200 enzymes. Diagnosis is by testing the urine, blood, or hair.
Prevention is by using water that does not contain high levels of arsenic. This may be achieved by the use of special filters or using rainwater. There is no good evidence to support specific treatments for long-term poisoning. For acute poisonings, treating dehydration is important. Dimercaptosuccinic acid or dimercaptopropane sulfonate may be used, while dimercaprol (BAL) is not recommended. Hemodialysis may also be used.
Through drinking water, more than 200 million people globally are exposed to higher-than-safe levels of arsenic. The areas most affected are Bangladesh and West Bengal. Exposure is also more common in people of low income and minorities. Acute poisoning is uncommon. The toxicity of arsenic has been described as far back as 1500 BC in the Ebers papyrus.
Signs and symptoms
Symptoms of arsenic poisoning begin with headaches, confusion, severe diarrhea, and drowsiness. As the poisoning develops, convulsions and changes in fingernail pigmentation called leukonychia striata (Mees' lines, or Aldrich-Mees lines) may occur. When the poisoning becomes acute, symptoms may include diarrhea, vomiting, vomiting blood, blood in the urine, cramping muscles, hair loss, stomach pain, and more convulsions. The organs of the body that are usually affected by arsenic poisoning are the lungs, skin, kidneys, and liver. The final result of arsenic poisoning is coma and death.
Arsenic is related to heart disease (hypertension-related cardiovascular disease), cancer, stroke (cerebrovascular diseases), chronic lower respiratory diseases, and diabetes. Skin effects can include skin cancer in the long term, but often prior to skin cancer are different skin lesions. Other effects may include darkening of skin and thickening of skin.
Chronic exposure to arsenic is related to vitamin A deficiency, which is related to heart disease and night blindness. The acute minimal lethal dose of arsenic in adults is estimated to be 70 to 200 mg, or 1 mg/kg/day.
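To make the dose figures concrete, a minimal sketch converting the weight-based estimate into an absolute dose; the 70 kg body weight is an assumed example value, not from the source:

    def estimated_minimal_lethal_dose_mg(body_weight_kg, dose_mg_per_kg=1.0):
        # Per the estimate above: about 1 mg per kg of body weight per day
        return body_weight_kg * dose_mg_per_kg

    # For a 70 kg adult this gives 70 mg/day, the lower end of the
    # quoted absolute range of roughly 70 to 200 mg
    print(estimated_minimal_lethal_dose_mg(70))  # 70.0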
Cancer
Arsenic increases the risk of cancer. Exposure is related to skin, lung, liver, and kidney cancer, among others.
Its comutagenic effects may be explained by interference with base and nucleotide excision repair, eventually through interaction with zinc finger structures. Dimethylarsinic acid, DMA(V), caused DNA single strand breaks resulting from inhibition of repair enzymes at levels of 5 to 100 mM in human epithelial type II cells.
MMA(III) and DMA(III) were also shown to be directly genotoxic by producing scissions in supercoiled ΦX174 DNA. Increased arsenic exposure is associated with an increased frequency of chromosomal aberrations, micronuclei and sister-chromatid exchanges. An explanation for chromosomal aberrations is the sensitivity of the protein tubulin and the mitotic spindle to arsenic. Histological observations confirm effects on cellular integrity, shape and locomotion.
DMA(III) is able to form reactive oxygen species by reaction with molecular oxygen. Resulting metabolites are the dimethylarsenic radical and the dimethylarsenic peroxyl radical.
Both DMA(III) and DMA(V) were shown to release iron from horse spleen ferritin as well as from human liver ferritin if ascorbic acid was administered simultaneously. Thus, formation of reactive oxygen species can be promoted.
Moreover, arsenic could cause oxidative stress by depleting the cell's antioxidants, especially those containing thiol groups. The accumulation of reactive oxygen species like those cited above, together with hydroxyl radicals, superoxide radicals and hydrogen peroxides, causes aberrant gene expression at low concentrations and lesions of lipids, proteins and DNA at higher concentrations, which eventually lead to cellular death.
In a rat animal model, urine levels of 8-hydroxy-2-deoxyguanosine (as a biomarker of DNA damage by reactive oxygen species) were measured after treatment with DMA(V). In comparison to control levels, they turned out to be significantly increased. This theory is further supported by a cross-sectional study which found elevated mean serum lipid peroxides in As-exposed individuals, which correlated with blood levels of inorganic arsenic and methylated metabolites and inversely correlated with nonprotein sulfhydryl (NPSH) levels in whole blood. Another study found an association of As levels in whole blood with the level of reactive oxidants in plasma and an inverse relationship with plasma antioxidants. A finding of the latter study indicates that methylation might in fact be a detoxification pathway with regard to oxidative stress: the results showed that the lower the As methylation capacity was, the lower the level of plasma antioxidant capacity. As reviewed by Kitchin (2001), the oxidative stress theory provides an explanation for the preferred tumor sites connected with arsenic exposure. Considering that a high partial pressure of oxygen is present in lungs and DMA(III) is excreted in gaseous state via the lungs, this seems to be a plausible mechanism for special vulnerability. The fact that DMA is produced by methylation in the liver, excreted via the kidneys and later on stored in the bladder accounts for the other tumor localizations.
Regarding DNA methylation, some studies suggest interaction of As with methyltransferases, which leads to an inactivation of tumor suppressor genes through hypermethylation; others state that hypomethylation might occur due to a lack of SAM, resulting in aberrant gene activation. An experiment by Zhong et al. (2001) with arsenite-exposed human lung A549, kidney UOK123, UOK109 and UOK121 cells isolated eight different DNA fragments by methylation-sensitive arbitrarily primed polymerase chain reactions. It turned out that six of the fragments were hyper- and two of them were hypomethylated. Higher levels of DNA methyltransferase mRNA and enzyme activity were found.
Kitchin (2001) proposed a model of altered growth factors which lead to cell proliferation and thus to carcinogenesis. From observations, it is known that chronic low-dose arsenic poisoning can lead to increased tolerance to its acute toxicity. MRP1-overexpressing lung tumor GLC4/Sb30 cells poorly accumulate arsenite and arsenate. This is mediated through MRP-1 dependent efflux. The efflux requires glutathione, but no arsenic-glutathione complex formation.
Although many mechanisms have been proposed, no definite model can be given for the mechanisms of chronic arsenic poisoning. The prevailing events of toxicity and carcinogenicity might be quite tissue-specific. Current consensus on the mode of carcinogenesis is that it acts primarily as a tumor promoter. Its co-carcinogenicity has been demonstrated in several models. However, the finding of several studies that chronically arsenic-exposed Andean populations (among the most extremely exposed to UV light) do not develop skin cancer with chronic arsenic exposure is puzzling.
Causes
Organic arsenic is less harmful than inorganic arsenic. Seafood is a common source of the less toxic organic arsenic in the form of arsenobetaine. The arsenic reported in 2012 in fruit juice and rice by Consumer Reports was primarily inorganic arsenic. Because of its high toxicity, arsenic is seldom used in the Western world, although in Asia it is still a popular pesticide. Arsenic is mainly encountered occupationally in the smelting of zinc and copper ores.
Drinking water
Arsenic is naturally found in groundwater and presents serious health threats when high amounts exist. Chronic arsenic poisoning results from drinking contaminated well water over a long period of time. Many aquifers contain high concentrations of arsenic salts. The World Health Organization (WHO) Guidelines for drinking water quality established in 1993 a provisional guideline value of 0.01 mg/L (10 parts per billion) for maximum contaminant levels of arsenic in drinking water. This recommendation was established based on the limit of detection of most laboratories' testing equipment at the time of publication of the WHO water quality guidelines. More recent findings show that consumption of water with levels as low as 0.00017 mg/L (0.17 parts per billion) over long periods of time can lead to arsenicosis.
Based on a 1988 study in China, the US Environmental Protection Agency estimated that lifetime exposure to arsenic in drinking water at concentrations of 0.0017 mg/L (1.7 ppb), 0.00017 mg/L, and 0.000017 mg/L is associated with a lifetime skin cancer risk of 1 in 10,000, 1 in 100,000, and 1 in 1,000,000, respectively. WHO asserts that a water level of 0.01 mg/L (10 ppb) poses a lifetime skin cancer risk of 6 in 10,000 and contends that this level of risk is acceptable.
One of the worst incidents of arsenic poisoning via well water occurred in Bangladesh, which the World Health Organization called the "largest mass poisoning of a population in history", now recognized as a major public health concern. The contamination in the Ganga-Brahmaputra fluvial plains in India and Padma-Meghna fluvial plains in Bangladesh demonstrated adverse impacts on human health.
Mining techniques such as hydraulic fracturing may mobilize arsenic in groundwater and aquifers due to enhanced methane transport and resulting changes in redox conditions, and may inject fluid containing additional arsenic.
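The figures quoted above imply an approximately linear relationship between concentration and lifetime skin cancer risk (1.7 ppb corresponding to a 1-in-10,000 risk, scaling proportionally). A minimal sketch under that linear no-threshold assumption, which is a modeling convention rather than a biological certainty:

    def lifetime_skin_cancer_risk(concentration_ppb):
        # Linear extrapolation from the EPA figures above:
        # 1.7 ppb corresponds to a 1-in-10,000 lifetime risk
        return (concentration_ppb / 1.7) * 1e-4

    # At the 10 ppb WHO guideline this yields about 5.9e-4,
    # consistent with the quoted "6 in 10,000" figure
    print(lifetime_skin_cancer_risk(10))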
Groundwater
In the US, the U.S. Geological Survey estimates that the median groundwater concentration is 1 μg/L or less, although some groundwater aquifers, particularly in the western United States, can contain much higher levels. For example, median levels in Nevada were about 8 μg/L, but levels of naturally occurring arsenic as high as 1000 μg/L have been measured in drinking water in the United States.
Geothermally active zones occur at hotspots where mantle-derived plumes ascend, such as in Hawaii and Yellowstone National Park, US. Arsenic is an incompatible element (it does not fit easily into the lattices of common rock-forming minerals). Concentrations of arsenic are high mainly in geothermal waters that leach continental rocks. Arsenic in hot geothermal fluids was shown to be derived mainly from leaching of host rocks at Yellowstone National Park, in Wyoming, US, rather than from magmas.
In the western US, there are As (arsenic) inputs to groundwater and surface water from geothermal fluids in and near Yellowstone National Park, and in other western mineralized areas. Groundwater associated with volcanics in California contains As at concentrations ranging up to 48,000 μg/L, with As-bearing sulfide minerals as the main source. Geothermal waters on Dominica in the Lesser Antilles also contain concentrations of As >50 μg/L.
In general, because arsenic is an incompatible element, it accumulates in differentiated magmas. Weathering of pegmatite veins in Connecticut, US, was thought to contribute As to groundwater.
In Pennsylvania, As concentrations in water discharging from abandoned anthracite mines ranged from <0.03 to 15 μg/L and from abandoned bituminous mines, from 0.10 to 64 μg/L, with 10% of samples exceeding the United States Environmental Protection Agency MCL of 10 μg/L.
In Wisconsin, As concentrations of water in sandstone and dolomite aquifers were as high as 100 μg/L. Oxidation of pyrite hosted by these formations was the likely source of the As.
In the Piedmont of Pennsylvania and New Jersey, groundwater in Mesozoic age aquifers contains elevated levels of As; domestic well waters from Pennsylvania contained up to 65 μg/L, whereas in New Jersey the highest concentration measured recently was 215 μg/L.
Food
In the United States, Schoof et al. estimated an average adult intake of 3.2 μg/day, with a range of 1–20 μg/day. Estimates for children were similar. Food also contains many organic arsenic compounds. The key organic arsenic compounds that can be routinely found in food (depending on food type) include monomethylarsonic acid (MMAsV), dimethylarsinic acid (DMAsV), arsenobetaine, arsenocholine, arsenosugars, and arsenolipids. DMAsV or MMAsV can be found in various types of fin fish, crabs, and mollusks, but often at very low levels.
Arsenobetaine is the major form of arsenic in marine animals and, by all accounts, it is considered a compound that is nontoxic under conditions of human consumption. Arsenocholine, which is mainly found in shrimp, is chemically similar to arsenobetaine, and is considered to be "essentially nontoxic". Although arsenobetaine is little studied, available information indicates it is not mutagenic, immunotoxic, or embryotoxic.
Arsenosugars and arsenolipids have recently been identified. Exposure to these compounds and their toxicological implications are currently being studied. Arsenosugars are detected mainly in seaweed but are also found to a lesser extent in marine mollusks. Studies addressing arsenosugar toxicity, however, have largely been limited to in vitro studies, which show that arsenosugars are significantly less toxic than both inorganic arsenic and trivalent methylated arsenic metabolites.
It has been found that rice is particularly susceptible to accumulation of arsenic from soil. Rice grown in the United States has an average of 260 ppb of arsenic, according to one study, but U.S. arsenic intake remains far below World Health Organization-recommended limits. China has set a standard for arsenic limits in food (150 ppb), as levels in rice exceed those in water.
Arsenic is a ubiquitous element present in American drinking water. In the United States, levels of arsenic that are above natural levels, but still well below danger levels set in federal safety standards, have been detected in commercially raised chickens. The source of the arsenic appears to be the feed additives roxarsone and nitarsone, which are used to control the parasitic infection coccidiosis as well as to increase weight and skin coloring of the poultry.
High levels of inorganic arsenic were reportedly found in 83 California wines in 2015.
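As a rough illustration of how a concentration in food translates into daily intake, a short sketch; the 0.1 kg/day rice consumption figure is an assumed example value, not from the source, and not all arsenic in rice is inorganic or bioavailable:

    def arsenic_intake_ug_per_day(concentration_ppb, food_kg_per_day):
        # For food, 1 ppb is equivalent to 1 microgram of arsenic
        # per kilogram of food
        return concentration_ppb * food_kg_per_day

    # 260 ppb rice (the U.S. average cited above) at an assumed
    # 0.1 kg of rice per day gives 26 micrograms/day, above the
    # 1-20 microgram/day dietary range estimated by Schoof et al.
    print(arsenic_intake_ug_per_day(260, 0.1))  # 26.0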
Soil
Exposure to arsenic in soil can occur through multiple pathways. Compared with the intake of naturally occurring arsenic from water and the diet, soil arsenic constitutes only a small fraction of intake.
Air
The European Commission (2000) reports that levels of arsenic in air range 0–1 ng/m3 in remote areas, 0.2–1.5 ng/m3 in rural areas, 0.5–3 ng/m3 in urban areas, and up to about 50 ng/m3 in the vicinity of industrial sites. Based on these data, the European Commission (2000) estimated that in relation to food, cigarette smoking, water, and soil, air contributes less than 1% of total arsenic exposure.
Pesticides
The use of lead arsenate pesticides has been effectively eliminated for over 50 years. However, because of the pesticide's environmental persistence, it is estimated that millions of acres of land are still contaminated with lead arsenate residues. This presents a potentially significant public health concern in some areas of the United States (e.g., New Jersey, Washington, and Wisconsin), where large areas of land used historically as orchards have been converted into residential developments.
Some modern uses of arsenic-based pesticides still exist. Chromated copper arsenate has been registered for use in the United States since the 1940s as a wood preservative, protecting wood from insects and microbial agents. In 2003, manufacturers of chromated copper arsenate instituted a voluntary recall of residential uses of wood treated with the chemical. The Environmental Protection Agency's 2008 final report stated that chromated copper arsenate is still approved for use in nonresidential applications, such as in marine facilities (pilings and structures), utility poles, and sand highway structures.
Copper smelting
Exposure studies in the copper smelting industry are much more extensive and have established definitive links between arsenic, a by-product of copper smelting, and lung cancer via inhalation. Dermal and neurological effects were also increased in some of these studies. Although occupational controls became more stringent over time and workers were exposed to lower arsenic concentrations, the arsenic exposures measured in these studies ranged from about 0.05 to 0.3 mg/m3 and are significantly higher than airborne environmental exposures to arsenic (which range from 0 to 0.000003 mg/m3).
Pathophysiology
Arsenic interferes with cellular longevity by allosteric inhibition of an essential metabolic enzyme, the pyruvate dehydrogenase complex, which catalyzes the oxidation of pyruvate to acetyl-CoA by NAD+. With the enzyme inhibited, the energy system of the cell is disrupted, resulting in cellular apoptosis. Biochemically, arsenic prevents the use of thiamine, resulting in a clinical picture resembling thiamine deficiency. Poisoning with arsenic can raise lactate levels and lead to lactic acidosis. Low potassium levels in the cells increase the risk of experiencing a life-threatening heart rhythm problem from arsenic trioxide.
Arsenic in cells clearly stimulates the production of hydrogen peroxide (H2O2). When the H2O2 reacts with certain metals such as iron or manganese, it produces a highly reactive hydroxyl radical. Inorganic arsenic trioxide found in ground water particularly affects voltage-gated potassium channels, disrupting cellular electrolytic function and resulting in neurological disturbances, cardiovascular episodes such as a prolonged QT interval, neutropenia, high blood pressure, central nervous system dysfunction, anemia, and death.
Arsenic exposure plays a key role in the pathogenesis of vascular endothelial dysfunction, as it inactivates endothelial nitric oxide synthase, leading to a reduction in the generation and bioavailability of nitric oxide. In addition, chronic arsenic exposure induces high oxidative stress, which may affect the structure and function of the cardiovascular system. Further, arsenic exposure has been noted to induce atherosclerosis by increasing platelet aggregation and reducing fibrinolysis. Moreover, arsenic exposure may cause arrhythmia by increasing the QT interval and accelerating cellular calcium overload. Chronic exposure to arsenic upregulates the expression of tumor necrosis factor-α, interleukin-1, vascular cell adhesion molecule and vascular endothelial growth factor to induce cardiovascular pathogenesis.
Arsenic has also been shown to induce cardiac hypertrophy by activating certain transcription factors involved in pathologically remodeling the heart. Tissue culture studies have shown that arsenic compounds block both IKr and IKs channels and, at the same time, activate IK-ATP channels. Arsenic compounds also disrupt ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits pyruvate dehydrogenase, and by competing with phosphate it uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration, and ATP synthesis. Hydrogen peroxide production is also increased, which might form reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure, probably from necrotic cell death, not apoptosis. A post mortem reveals brick-red mucosa, due to severe hemorrhage. Although arsenic causes toxicity, it can also play a protective role.
Mechanism
Arsenite inhibits not only the formation of acetyl-CoA but also the enzyme succinic dehydrogenase. Arsenate can replace phosphate in many reactions. It is able to form Glc-6-arsenate in vitro; therefore it has been argued that hexokinase could be inhibited. (Eventually this may be a mechanism leading to muscle weakness in chronic arsenic poisoning.) In the glyceraldehyde 3-phosphate dehydrogenase reaction, arsenate attacks the enzyme-bound thioester. The 1-arseno-3-phosphoglycerate formed is unstable and hydrolyzes spontaneously. Thus, ATP formation in glycolysis is inhibited while bypassing the phosphoglycerate kinase reaction. (Moreover, the formation of 2,3-bisphosphoglycerate in erythrocytes might be affected, followed by a higher oxygen affinity of hemoglobin and subsequently enhanced cyanosis.) As shown by Gresser (1981), submitochondrial particles synthesize adenosine-5-diphosphate-arsenate from ADP and arsenate in the presence of succinate. Thus, by a variety of mechanisms, arsenate leads to an impairment of cell respiration and subsequently diminished ATP formation. This is consistent with observed ATP depletion of exposed cells and histopathological findings of mitochondrial and cell swelling, glycogen depletion in liver cells and fatty change in liver, heart and kidney.
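The arsenolysis step described above can be written out explicitly; the key point is that the arsenate analogue of 1,3-bisphosphoglycerate hydrolyzes spontaneously, so the ATP normally generated at the phosphoglycerate kinase step is never formed:

    \begin{align*}
    \text{glyceraldehyde-3-phosphate} + \mathrm{AsO_4^{3-}} + \mathrm{NAD^+}
      &\rightarrow \text{1-arseno-3-phosphoglycerate} + \mathrm{NADH} + \mathrm{H^+} \\
    \text{1-arseno-3-phosphoglycerate} + \mathrm{H_2O}
      &\rightarrow \text{3-phosphoglycerate} + \mathrm{AsO_4^{3-}} \quad \text{(spontaneous, no ATP formed)}
    \end{align*}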
Experiments demonstrated enhanced arterial thrombosis in a rat animal model, and elevations of serotonin levels, thromboxane A2 and adhesion proteins in platelets, while human platelets showed similar responses. The effect on vascular endothelium may eventually be mediated by the arsenic-induced formation of nitric oxide.
It was demonstrated that +3 As concentrations substantially lower than those required for inhibition of the lysosomal protease cathepsin L in the B cell line TA3 were sufficient to trigger apoptosis in the same cell line; this could be a mechanism mediating immunosuppressive effects.
Kinetics
The two forms of inorganic arsenic, reduced (trivalent As(III)) and oxidized (pentavalent As(V)), can be absorbed and accumulated in tissues and body fluids. In the liver, the metabolism of arsenic involves enzymatic and non-enzymatic methylation; the most frequently excreted metabolite (≥ 90%) in the urine of mammals is dimethylarsinic acid, or cacodylic acid, DMA(V). Dimethylarsinic acid is also known as Agent Blue and was used as a herbicide during the Vietnam War.
In humans, inorganic arsenic is reduced nonenzymatically from pentoxide to trioxide using glutathione, or the reduction is mediated by enzymes. Reduction of arsenic pentoxide to arsenic trioxide increases its toxicity and bioavailability. Methylation occurs through methyltransferase enzymes. S-adenosylmethionine (SAM) may serve as the methyl donor. Various pathways are used, the principal route being dependent on the current environment of the cell. The resulting metabolites are monomethylarsonous acid, MMA(III), and dimethylarsinous acid, DMA(III).
Methylation had been regarded as a detoxification process, but reduction from +5 As to +3 As may be considered a bioactivation instead. Another suggestion is that methylation might be a detoxification if "As[III] intermediates are not permitted to accumulate", because the pentavalent organoarsenics have a lower affinity for thiol groups than inorganic pentavalent arsenics. Gebel (2002) stated that methylation is a detoxification through accelerated excretion. With regard to carcinogenicity, it has been suggested that methylation should be regarded as a toxification.
Arsenic, especially +3 As, binds to single, but with higher affinity to vicinal, sulfhydryl groups, and thus reacts with a variety of proteins and inhibits their activity. It was also proposed that binding of arsenite at nonessential sites might contribute to detoxification. Arsenite inhibits members of the disulfide oxidoreductase family such as glutathione reductase and thioredoxin reductase.
The remaining unbound arsenic (≤ 10%) accumulates in cells, which over time may lead to skin, bladder, kidney, liver, lung, and prostate cancers. Other forms of arsenic toxicity in humans have been observed in blood, bone marrow, cardiac, central nervous system, gastrointestinal, gonadal, kidney, liver, pancreatic, and skin tissues.
Heat shock response
Another aspect is the similarity of arsenic effects to the heat shock response. Short-term arsenic exposure has effects on signal transduction, inducing heat shock proteins with masses of 27, 60, 70, 72, 90, and 110 kDa, as well as metallothionein, ubiquitin, mitogen-activated protein [MAP] kinases, extracellular regulated kinase [ERK], c-jun terminal kinases [JNK] and p38.
Via JNK and p38 it activates c-fos, c-jun and egr-1, which are usually activated by growth factors and cytokines. The effects are largely dependent on the dosing regimen and may even be reversed.
As shown by some experiments reviewed by Del Razo (2001), reactive oxygen species induced by low levels of inorganic arsenic increase the transcription and the activity of the activator protein 1 (AP-1) and the nuclear factor-κB (NF-κB) (possibly enhanced by elevated MAPK levels), which results in c-fos/c-jun activation and over-secretion of pro-inflammatory and growth-promoting cytokines stimulating cell proliferation. Germolec et al. (1996) found increased cytokine expression and cell proliferation in skin biopsies from individuals chronically exposed to arsenic-contaminated drinking water.
Increased AP-1 and NF-κB obviously also result in an up-regulation of mdm2 protein, which decreases p53 protein levels. Thus, taking into account p53's function, a lack of it could cause a faster accumulation of mutations contributing to carcinogenesis. However, high levels of inorganic arsenic inhibit NF-κB activation and cell proliferation. An experiment by Hu et al. (2002) demonstrated increased binding activity of AP-1 and NF-κB after acute (24 h) exposure to +3 sodium arsenite, whereas long-term exposure (10–12 weeks) yielded the opposite result. The authors conclude that the former may be interpreted as a defense response while the latter could lead to carcinogenesis. As the contradicting findings and connected mechanistic hypotheses indicate, there is a difference between the acute and chronic effects of arsenic on signal transduction which is not yet clearly understood.
Oxidative stress
Studies have demonstrated that the oxidative stress generated by arsenic may disrupt the signal transduction pathways of the nuclear transcriptional factors PPARs, AP-1, and NF-κB, as well as the pro-inflammatory cytokines IL-8 and TNF-α. The interference of oxidative stress with signal transduction pathways may affect physiological processes associated with cell growth, metabolic syndrome X, glucose homeostasis, lipid metabolism, obesity, insulin resistance, inflammation, and type 2 diabetes. Recent scientific evidence has elucidated the physiological roles of the PPARs in the ω-hydroxylation of fatty acids and the inhibition of pro-inflammatory transcription factors (NF-κB and AP-1), pro-inflammatory cytokines (IL-1, -6, -8, -12, and TNF-α), cell adhesion molecules (ICAM-1 and VCAM-1), inducible nitric oxide synthase, pro-inflammatory nitric oxide (NO), and anti-apoptotic factors.
Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of type 2 diabetes. The human liver after exposure to therapeutic drugs may exhibit hepatic non-cirrhotic portal hypertension, fibrosis, and cirrhosis. However, the literature provides insufficient scientific evidence to show cause and effect between arsenic and the onset of type 2 diabetes mellitus.
Diagnosis
Arsenic may be measured in blood or urine to monitor excessive environmental or occupational exposure, to confirm a diagnosis of poisoning in hospitalized victims, or to assist in the forensic investigation of a fatal overdose. Some analytical techniques are capable of distinguishing organic from inorganic forms of the element. Organic arsenic compounds tend to be eliminated in the urine in unchanged form, while inorganic forms are largely converted to organic arsenic compounds in the body prior to urinary excretion. The current biological exposure index for U.S. workers of 35 µg/L total urinary arsenic may easily be exceeded by a healthy person eating a seafood meal.
Tests are available to diagnose poisoning by measuring arsenic in blood, urine, hair, and fingernails. The urine test is the most reliable test for arsenic exposure within the last few days. Urine testing needs to be done within 24–48 hours for an accurate analysis of an acute exposure. Tests on hair and fingernails can measure exposure to high levels of arsenic over the past 6–12 months. These tests can determine if one has been exposed to above-average levels of arsenic. They cannot predict, however, whether the arsenic levels in the body will affect health. Chronic arsenic exposure can remain in the body systems for a longer period of time than a shorter-term or more isolated exposure, and can be detected in a longer time frame after the introduction of the arsenic, which is important in trying to determine the source of the exposure.
Hair is a potential bioindicator for arsenic exposure due to its ability to store trace elements from blood. Incorporated elements maintain their position during the growth of hair. Thus, for a temporal estimation of exposure, an assay of hair composition needs to be carried out on a single hair, which is not possible with older techniques requiring homogenization and dissolution of several strands of hair. This type of biomonitoring has been achieved with newer microanalytical techniques such as synchrotron radiation based X-ray fluorescence spectroscopy and microparticle induced X-ray emission. The highly focused, intense beams can probe small spots on biological samples, allowing micro-level analysis along with chemical speciation. In one study, this method was used to follow arsenic levels before, during and after treatment with arsenious oxide in patients with acute promyelocytic leukemia.
Treatment
Chelation
Dimercaprol and dimercaptosuccinic acid are chelating agents that sequester the arsenic away from blood proteins and are used in treating acute arsenic poisoning. The most important side effect is hypertension. Dimercaprol is considerably more toxic than succimer. Dimercaptosuccinic acid monoesters, e.g. MiADMSA, are promising antidotes for arsenic poisoning.
Nutrition
Supplemental potassium decreases the risk of experiencing a life-threatening heart rhythm problem from arsenic trioxide.
History
Beginning in about 3000 BC, arsenic was mined and added to copper in the alloying of bronze, but the adverse health effects of working with arsenic led to it being abandoned when a viable alternative, tin, was discovered.
In addition to its presence as a poison, for centuries arsenic was used medicinally. It has been used for over 2,400 years as a part of traditional Chinese medicine. In the western world, arsenic compounds, such as salvarsan, were used extensively to treat syphilis before penicillin was introduced. It was eventually replaced as a therapeutic agent by sulfa drugs and then by other antibiotics. Arsenic was also an ingredient in many tonics (or "patent medicines").
In addition, during the Elizabethan era, some women used a mixture of vinegar, chalk, and arsenic applied topically to whiten their skin. This use of arsenic was intended to prevent aging and creasing of the skin, but some arsenic was inevitably absorbed into the bloodstream.
During the Victorian era (late 19th century) in the United States, U.S. newspapers advertised "arsenic complexion wafers" that promised to remove facial blemishes such as moles and pimples.
Some pigments, most notably the popular Emerald Green (known also under several other names), were based on arsenic compounds. Overexposure to these pigments was a frequent cause of accidental poisoning of artists and craftsmen.
Arsenic allegedly became a favored method of murder during the Middle Ages and Renaissance, particularly among the ruling classes of Italy. Because the symptoms are similar to those of cholera, which was common at the time, arsenic poisoning often went undetected. By the 19th century, it had acquired the nickname "inheritance powder", perhaps because impatient heirs were known or suspected to use it to ensure or accelerate their inheritances. It was also a common murder technique in the 19th century in domestic violence situations, such as the case of Rebecca Copin, who attempted to poison her husband by "putting arsenic in his coffee".
In post-WW1 Hungary, arsenic extracted by boiling fly paper was used in an estimated 300 murders by the Angel Makers of Nagyrév.
In imperial China, arsenic trioxide and sulfides were used in murder, as well as for capital punishment for members of the royal family or aristocracy. Forensic studies have determined that the Guangxu Emperor (d. 1908) was murdered by arsenic, most likely ordered by the Empress Dowager Cixi or Generalissimo Yuan Shikai. Likewise, in ancient Korea, and particularly in the Joseon Dynasty, arsenic-sulfur compounds have been used as a major ingredient of sayak (사약; 賜藥), which was a poison cocktail used in capital punishment of high-profile political figures and members of the royal family. Due to social and political prominence of the condemned, many of these events were well-documented, often in the Annals of Joseon Dynasty; they are sometimes portrayed in historical television miniseries because of their dramatic nature.
Legislation
In the U.S. in 1975, under the authority of the Safe Drinking Water Act, the U.S. Environmental Protection Agency determined the National Interim Primary Drinking Water Regulation level of arsenic (an inorganic contaminant, IOC) to be 0.05 mg/L (50 parts per billion, ppb).
Over the years, many studies reported dose-dependent associations between arsenic in drinking water and skin cancer. In order to prevent new cases of, and deaths from, cancerous and non-cancerous diseases, the Safe Drinking Water Act directed the Environmental Protection Agency to revise the arsenic standard and specify a maximum contaminant level (MCL). MCLs are set as close to the health goals as possible, considering cost, benefits, and the ability of public water systems to detect and remove contaminants using suitable treatment technologies.
In 2001, the Environmental Protection Agency adopted a lower standard of an MCL of 0.01 mg/L (10 ppb) for arsenic in drinking water, which applies to both community water systems and non-transient non-community water systems.
In some other countries, when developing national drinking water standards based on the guideline values, it is necessary to take account of a variety of geographical, socio-economic, dietary and other conditions affecting potential exposure. These factors lead to national standards that differ appreciably from the guideline values. That is the case in countries such as India and Bangladesh, where the permissible limit of arsenic in the absence of an alternative source of water is 0.05 mg/L.
Challenges to implementation
Arsenic removal technologies are traditional treatment processes which have been tailored to improve the removal of arsenic from drinking water. Although some of the removal processes, such as precipitative processes, adsorption processes, ion exchange processes, and separation (membrane) processes, may be technically feasible, their cost may be prohibitive.
For underdeveloped countries, the challenge is finding the means to fund such technologies. The Environmental Protection Agency, for example, has estimated the total national annualized cost of treatment, monitoring, reporting, record keeping, and administration to enforce the MCL rule to be approximately $181 million </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Ascariasis and can you explain it in depth? [/INST] Ascariasis is a disease caused by the parasitic roundworm Ascaris lumbricoides. Infections have no symptoms in more than 85% of cases, especially if the number of worms is small. Symptoms increase with the number of worms present and may include shortness of breath and fever in the beginning of the disease. These may be followed by symptoms of abdominal swelling, abdominal pain, and diarrhea. Children are most commonly affected, and in this age group the infection may also cause poor weight gain, malnutrition, and learning problems.
Infection occurs by ingestion of food or drink contaminated with Ascaris eggs from feces. The eggs hatch in the intestines, the larvae burrow through the gut wall, and migrate to the lungs via the blood. There they break into the alveoli and pass up the trachea, where they are coughed up and may be swallowed. The larvae then pass through the stomach for a second time into the intestine, where they become adult worms. It is a type of soil-transmitted helminthiasis and part of a group of diseases called helminthiases.
Prevention is by improved sanitation, which includes improving access to toilets and proper disposal of feces. Handwashing with soap appears protective. In areas where more than 20% of the population is affected, treating everyone at regular intervals is recommended. Recurring infections are common. There is no vaccine. Treatments recommended by the World Health Organization are the medications albendazole, mebendazole, levamisole, or pyrantel pamoate. Other effective agents include tribendimidine and nitazoxanide.
About 0.8 to 1.2 billion people globally have ascariasis, with the most heavily affected populations being in sub-Saharan Africa, Latin America, and Asia. This makes ascariasis the most common form of soil-transmitted helminthiasis. As of 2010 it caused about 2,700 deaths a year, down from 3,400 in 1990. Another type of Ascaris infects pigs. Ascariasis is classified as a neglected tropical disease.
Signs and symptoms
In populations where worm infections are widespread, it is common to find that most people are infected by a small number of worms, while a small number of people are heavily infected. This is characteristic of many types of worm infections. Those people who are infected with only a small number of worms usually have no symptoms.
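This aggregated pattern (many people with few worms, a few people with many) is classically described in the parasitology literature as an overdispersed distribution, often modeled as a negative binomial. A minimal simulation with illustrative parameters; the mean burden and dispersion values here are assumptions for demonstration, not measured data:

    import numpy as np

    rng = np.random.default_rng(0)
    mean_burden, k = 3.0, 0.3   # assumed mean worm burden and dispersion
    p = k / (k + mean_burden)   # standard negative binomial parameterization
    burdens = rng.negative_binomial(k, p, size=10_000)

    print((burdens == 0).mean())       # most people harbor few or no worms
    print(np.quantile(burdens, 0.99))  # a small minority carry heavy burdens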
Migrating larvae
As larval stages travel through the body, they may cause visceral damage, peritonitis and inflammation, enlargement of the liver or spleen, and an inflammation of the lungs. Pulmonary manifestations take place during larval migration and may present as Loeffler's syndrome, a transient respiratory illness associated with blood eosinophilia and pulmonary infiltrates with radiographic shadowing.
Intestinal blockage
The worms can occasionally cause intestinal blockage when large numbers become tangled into a bolus, or they may migrate from the small intestine, which may require surgery. More than 796 A. lumbricoides worms weighing up to 550 g (19 oz) were recovered at autopsy from a two-year-old South African girl. The worms had caused torsion and gangrene of the ileum, which was interpreted as the cause of death.
The worms lack teeth. However, they can rarely cause bowel perforations by inducing volvulus and closed-loop obstruction.
Bowel obstruction
Bowel obstruction may occur in up to 0.2 per 1000 per year. A worm may block the ampulla of Vater, or go into the main pancreatic duct, resulting in acute pancreatitis with raised serum levels of amylase and lipase. Occasionally, a worm can travel through the biliary tree and even into the gallbladder, causing acute cholangitis or acute cholecystitis.
Allergies
Ascariasis may result in allergies to shrimp and dustmites due to the shared antigen, tropomyosin; this has not been confirmed in the laboratory.
Malnutrition
The worms in the intestine may cause malabsorption and anorexia, which contribute to malnutrition. The malabsorption may be due to a loss of brush border enzymes, erosion and flattening of the villi, and inflammation of the lamina propria.
Others
Ascaris have an aversion to some general anesthetics and may exit the body, sometimes through the mouth, when an infected individual is put under general anesthesia.
Cause
Transmission
The source of infection is from objects which have been contaminated with fecal matter containing eggs. Ingestion of infective eggs from soil contaminated with human feces, or from contaminated vegetables and water, is the primary route of infection. Infectious eggs may occur on other objects such as hands, money and furniture. Transmission from human to human by direct contact is impossible.
Transmission also comes through municipal recycling of wastewater onto crop fields. This is quite common in emerging industrial economies and poses serious risks for local crop sales and exports of contaminated vegetables. A 1986 outbreak of ascariasis in Italy was traced to irresponsible wastewater recycling used to grow Balkan vegetable exports.
The number of ova (eggs) in sewage, or in crops that were irrigated with raw or partially treated sewage, is a measure of the degree of ascariasis incidence. For example:
A study published in 1992 detected over 100 eggs per litre in municipal wastewater in Riyadh, Saudi Arabia; in Czechoslovakia, counts were as high as 240–1,050 eggs per litre.
In one field study in Marrakech, Morocco, where raw sewage is used to fertilize crop fields, Ascaris eggs were detected at the rate of 0.18 eggs/kg in potatoes, 0.27 eggs/kg in turnip, 4.63 eggs/kg in mint, 0.7 eggs/kg in carrots, and 1.64 eggs/kg in radish. A similar study in the same area showed that 73% of children working on these farms were infected with helminths, particularly Ascaris, probably as a result of exposure to the raw sewage.
Lifecycle
The first appearance of eggs in stools is at 60–70 days after infection. In larval ascariasis, symptoms occur 4–16 days after infection. The final symptoms are gastrointestinal discomfort, colic and vomiting, fever, and observation of live worms in stools. Some patients may have pulmonary symptoms or neurological disorders during migration of the larvae. There are generally few or no symptoms. A bolus of worms may obstruct the intestine; migrating larvae may cause pneumonitis and eosinophilia. Adult worms have a lifespan of 1–2 years, which means that individuals may be infected all their lives as worms die and new worms are acquired. Eggs can survive potentially for 15 years, and a single worm may produce 200,000 eggs a day. Adult worms maintain their position by swimming against the intestinal flow.
Mechanism
Ascaris takes most of its nutrients from the partially digested host food in the intestine. There is some evidence that it can secrete enzyme inhibitors, presumably to protect itself from digestion by the host's enzymes. Children are often more severely affected.
Diagnosis
Most diagnoses are made by identifying the appearance of the worm or eggs in feces. Due to the large quantity of eggs laid, diagnosis can generally be made using only one or two fecal smears.
The diagnosis is usually incidental when the host passes a worm in the stool or vomit. The eggs can be seen in a smear of fresh feces examined on a glass slide under a microscope, and there are various techniques to concentrate them first or increase their visibility, such as the ether sedimentation method or the Kato technique. The eggs have a characteristic shape: they are oval with a thick, mamillated shell (covered with rounded mounds or lumps), measuring 35–50 micrometers in diameter and 40–70 micrometers in length. During pulmonary disease, larvae may be found in fluids aspirated from the lungs. White blood cell counts may demonstrate peripheral eosinophilia; this is common in many parasitic infections and is not specific to ascariasis. On X-ray, a bolus of worms may appear as filling defects 15–35 cm long, sometimes with a whirled appearance.
Prevention
Prevention is by improved access to sanitation, which includes the use of properly functioning and clean toilets by all community members as one important aspect. Handwashing with soap may be protective; however, there is no evidence it affects the severity of the disease. Eliminating the use of untreated human faeces as fertilizer is also important. In areas where more than 20% of the population is affected, treating everyone is recommended. This has a cost of about 2 to 3 cents per person per treatment. This is known as mass drug administration and is often carried out among school-age children. For this purpose, broad-spectrum benzimidazoles such as mebendazole and albendazole are the drugs of choice recommended by WHO.
Treatment
Medications
Medications that are used to kill roundworms are called ascaricides. Those recommended by the World Health Organization for ascariasis are: albendazole, mebendazole, levamisole and pyrantel pamoate. Single doses of albendazole, mebendazole, and ivermectin are effective against ascariasis. They are effective at removing parasites and eggs from the intestines. Other effective agents include tribendimidine and nitazoxanide. Pyrantel pamoate may induce intestinal obstruction in a heavy worm load. Albendazole is contraindicated during pregnancy and in children under two years of age. Thiabendazole may cause migration of the worm into the esophagus, so it is usually combined with piperazine. Piperazine is a flaccid-paralyzing agent that blocks the response of Ascaris muscle to acetylcholine, which immobilizes the worm </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Assisted reproductive technology and can you explain it in depth? [/INST] Assisted reproductive technology (ART) includes medical procedures used primarily to address infertility. It encompasses procedures such as in vitro fertilization (IVF), intracytoplasmic sperm injection (ICSI), cryopreservation of gametes or embryos, and/or the use of fertility medication. When used to address infertility, ART may also be referred to as fertility treatment. ART mainly belongs to the field of reproductive endocrinology and infertility. Some forms of ART may be used by fertile couples for genetic purposes (see preimplantation genetic diagnosis). ART may also be used in surrogacy arrangements, although not all surrogacy arrangements involve ART.
Sterility does not always make ART the first option to consider, as its cause is sometimes a mild disorder that can be resolved with more conventional treatments or with behaviors that promote health and reproductive habits.
Procedures
General
With ART, the process of sexual intercourse is bypassed and fertilization of the oocytes occurs in the laboratory environment (i.e., in vitro fertilization). In the US, the Centers for Disease Control and Prevention (CDC) defines ART to include "all fertility treatments in which both eggs and sperm are handled. In general, ART procedures involve surgically removing eggs from a woman's ovaries, combining them with sperm in the laboratory, and returning them to the woman's body or donating them to another woman." According to CDC, "they do not include treatments in which only sperm are handled (i.e., intrauterine—or artificial—insemination) or procedures in which a woman takes medicine only to stimulate egg production without the intention of having eggs retrieved." In Europe, ART also excludes artificial insemination and includes only procedures where oocytes are handled. The World Health Organization (WHO) defines ART in the same way.
Ovulation induction
Ovulation induction usually refers to stimulating the development of ovarian follicles with fertility medication to reverse anovulation or oligoovulation. These medications are given by injection for 8 to 14 days. A health care provider closely monitors the development of the eggs using transvaginal ultrasound and blood tests to assess follicle growth and estrogen production by the ovaries. When follicles have reached an adequate size and the eggs are mature enough, an injection of the hormone hCG initiates the ovulation process. Egg retrieval should occur from 34 to 36 hours after the hCG injection.
In vitro fertilization
In vitro fertilization is the technique of letting fertilization of the male and female gametes (sperm and egg) occur outside the female body.
Techniques usually used in in vitro fertilization include:
Transvaginal ovum retrieval (OVR) is the process whereby a small needle is inserted through the back of the vagina and guided via ultrasound into the ovarian follicles to collect the fluid that contains the eggs.
Embryo transfer is the step in the process whereby one or several embryos are placed into the uterus of the female with the intent to establish a pregnancy.
Less commonly used techniques in in vitro fertilization are:
Assisted zona hatching (AZH) is performed shortly before the embryo is transferred to the uterus. A small opening is made in the outer layer surrounding the egg in order to help the embryo hatch out and aid in the implantation process of the growing embryo.
Intracytoplasmic sperm injection (ICSI) is beneficial in the case of male factor infertility where sperm counts are very low or failed fertilization occurred with previous IVF attempt(s). The ICSI procedure involves a single sperm carefully injected into the center of an egg using a microneedle. With ICSI, only one sperm per egg is needed; without it, between 50,000 and 100,000 are needed. This method is also sometimes employed when donor sperm is used.
Autologous endometrial coculture is a possible treatment for patients who have failed previous IVF attempts or who have poor embryo quality. The patient's fertilized eggs are placed on top of a layer of cells from the patient's own uterine lining, creating a more natural environment for embryo development.
In zygote intrafallopian transfer (ZIFT), egg cells are removed from the woman's ovaries and fertilized in the laboratory; the resulting zygote is then placed into the fallopian tube.
Cytoplasmic transfer is the technique in which the contents of a fertile egg from a donor are injected into the infertile egg of the patient along with the sperm.
Egg donors are resources for women with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles or advanced maternal age. In the egg donor process, eggs are retrieved from a donor's ovaries, fertilized in the laboratory with the sperm from the recipient's partner, and the resulting healthy embryos are returned to the recipient's uterus.
Sperm donation may provide the source for the sperm used in IVF procedures where the male partner produces no sperm or has an inheritable disease, or where the woman being treated has no male partner.
Preimplantation genetic diagnosis (PGD) involves the use of genetic screening mechanisms such as fluorescent in-situ hybridization (FISH) or comparative genomic hybridization (CGH) to help identify genetically abnormal embryos and improve healthy outcomes.
Embryo splitting can be used for twinning to increase the number of available embryos.
Pre-implantation genetic diagnosis
A pre-implantation genetic diagnosis procedure may be conducted on embryos prior to implantation (as a form of embryo profiling), and sometimes even on oocytes prior to fertilization. PGD is considered in a similar fashion to prenatal diagnosis. PGD is an adjunct to ART procedures, and requires in vitro fertilization to obtain oocytes or embryos for evaluation. Embryos are generally obtained through blastomere or blastocyst biopsy. The latter technique has proved to be less deleterious for the embryo; therefore, it is advisable to perform the biopsy around day 5 or 6 of development. Sex selection is the attempt to control the sex of offspring to achieve a desired sex, for example in the case of X chromosome-linked diseases. It can be accomplished in several ways, both pre- and post-implantation of an embryo, as well as at birth. Pre-implantation techniques include PGD, but also sperm sorting.
Others
Other assisted reproduction techniques include:
Mitochondrial replacement therapy (MRT, sometimes called mitochondrial donation) is the replacement of mitochondria in one or more cells to prevent or ameliorate disease. MRT originated as a special form of IVF in which some or all of the future baby's mitochondrial DNA comes from a third party. This technique is used in cases when mothers carry genes for mitochondrial diseases. The therapy is approved for use in the United Kingdom.
In gamete intrafallopian transfer (GIFT) a mixture of sperm and eggs is placed directly into a woman's fallopian tubes using laparoscopy following a transvaginal ovum retrieval.
Reproductive surgery, treating e.g. fallopian tube obstruction and vas deferens obstruction, or reversing a vasectomy by vasectomy reversal. In surgical sperm retrieval (SSR) the reproductive urologist obtains sperm from the vas deferens, epididymis or directly from the testis in a short outpatient procedure.
By cryopreservation, eggs, sperm and reproductive tissue can be preserved for later IVF.
Risks
The majority of IVF-conceived infants do not have birth defects.
However, some studies have suggested that assisted reproductive technology is associated with an increased risk of birth defects.
Assisted reproductive technology is becoming more available. Early studies suggest that there could be an increased risk for medical complications with both the mother and baby. Some of these include low birth weight, placental insufficiency, chromosomal disorders, preterm deliveries, gestational diabetes, and pre-eclampsia (Aiken and Brockelsby). In the largest U.S. study, which used data from a statewide registry of birth defects,
6.2% of IVF-conceived children had major defects, as compared with 4.4% of naturally conceived children matched for maternal age and other factors (odds ratio, 1.3; 95% confidence interval, 1.00 to 1.67). ART carries with it a risk for heterotopic pregnancy (simultaneous intrauterine and extrauterine pregnancy).
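For readers unfamiliar with the statistic, an odds ratio can be approximated from the two raw percentages above (an illustrative sketch added here, not from the study; the published value of 1.3 is a matched and adjusted estimate, so it differs from this crude calculation):

    p_ivf, p_natural = 0.062, 0.044             # major birth defect rates from the study
    odds_ivf = p_ivf / (1 - p_ivf)              # ~0.066
    odds_natural = p_natural / (1 - p_natural)  # ~0.046
    print(odds_ivf / odds_natural)              # crude (unadjusted) odds ratio ~1.44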
The main risks are:
Genetic disorders
Low birth weight. In IVF and ICSI, a risk factor is the decreased expression of proteins in energy metabolism: ferritin light chain and ATP5A1.
Preterm birth. Low birth weight and preterm birth are strongly associated with many health problems, such as visual impairment and cerebral palsy. Children born after IVF are roughly twice as likely to have cerebral palsy. Sperm donation is an exception, with a birth defect rate almost a fifth of that in the general population. This may be explained by the fact that sperm banks accept only donors with high sperm counts.
Germ cells of the mouse normally have a frequency of spontaneous point mutations that is 5 to 10-fold lower than that in somatic cells from the same individual. This low frequency in the germline leads to embryos that have a low frequency of point mutations in the next generation. No significant differences were observed in the frequency or spectrum of mutations between naturally conceived fetuses and assisted-conception fetuses. This suggests that, with respect to the maintenance of genetic integrity, assisted conception is safe. Current data indicate little or no increased risk for postpartum depression among women who use ART. Usage of assisted reproductive technology, including ovarian stimulation and in vitro fertilization, has been associated with an increased overall risk of childhood cancer in the offspring, which may be caused by the same original disease or condition that caused the infertility or subfertility in the mother or father. That said, a landmark paper by Jacques Balayla et al. determined that infants born after ART have neurodevelopment similar to that of infants born after natural conception. In theory, ART can solve almost all reproductive problems, except for severe pathology or the absence of a uterus (or womb), using specific gamete or embryo donation techniques. However, this does not mean that all women can be treated with assisted reproductive techniques, or that all women who are treated will achieve pregnancy.
Usage
As a result of the 1992 Fertility Clinic Success Rate and Certification Act, the CDC is required to publish the annual ART success rates at U.S. fertility clinics. The number of assisted reproductive technology procedures performed in the U.S. has more than doubled over the last 10 years, with 140,000 procedures in 2006, resulting in 55,000 births. In Australia, 3.1% of births are a result of ART. The most common reasons for discontinuation of fertility treatment have been estimated to be: postponement of treatment (39%), physical and psychological burden (19%), psychological burden (14%), physical burden (6.32%), relational and personal problems (17%), personal reasons (9%), relational problems (9%), treatment rejection (13%) and organizational (12%) and clinic (8%) problems.
By country
United States
Many Americans do not have insurance coverage for fertility investigations and treatments. Many states are starting to mandate coverage, and the rate of use is 278% higher in states with complete coverage. There are some health insurance companies that cover diagnosis of infertility, but frequently, once infertility is diagnosed, will not cover any treatment costs. Approximate treatment/diagnosis costs in the United States, adjusted for inflation, as of 2021 (US$):
Initial workup: hysteroscopy, hysterosalpingogram, blood tests ~$2,800
Sonohysterogram (SHG) ~ $830–$1,400
Clomiphene citrate cycle ~ $280–$690
IVF cycle ~ $13,900–$41,600
Use of a surrogate mother to carry the child – dependent on arrangements
Another way to look at costs is to determine the expected cost of establishing a pregnancy: the cost of one treatment cycle divided by the probability that a cycle establishes a pregnancy. Thus, if a clomiphene treatment has a chance to establish a pregnancy in 8% of cycles and costs $690 per cycle, the expected cost is about $8,600 to establish a pregnancy, compared to an IVF cycle (cycle fecundity 40%) with a corresponding expected cost of about $41,500 ($16,600 ÷ 40%).
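This expected-cost arithmetic can be sketched in a few lines of Python (an editorial illustration, not part of the source; the function name is invented, and the inputs are the approximate 2021 figures quoted above):

    def expected_cost_per_pregnancy(cost_per_cycle, success_rate):
        # For independent cycles with per-cycle success probability p, the
        # expected number of cycles until the first success is 1/p, so the
        # expected total spend is cost_per_cycle / p.
        if not 0 < success_rate <= 1:
            raise ValueError("success_rate must be in (0, 1]")
        return cost_per_cycle / success_rate

    print(expected_cost_per_pregnancy(690, 0.08))     # clomiphene: ~$8,600 per pregnancy
    print(expected_cost_per_pregnancy(16_600, 0.40))  # IVF: ~$41,500 per pregnancy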
For the community as a whole, the cost of IVF is on average repaid roughly sevenfold (700%) through tax revenue from the future employment of the person conceived.
European Union
In Europe, 157,500 children were born using assisted reproductive technology in 2015, according to the European Society of Human Reproduction and Embryology (ESHRE). But there are major differences in legislation across the continent.
A European directive fixes standards concerning the use of human tissue and cells, but all ethical and legal questions on ART remain the prerogative of EU member states.
Across Europe, the legal criteria for availability vary somewhat. In 11 countries all women may benefit; in 8 others only heterosexual couples are concerned; in 7 single women also qualify; and in 2 (Austria and Germany) lesbian couples also qualify.
Spain was the first European country to open ART to all women, in 1977, the year the first sperm bank was opened there. In France, the right to ART has been accorded to all women since 2019. In the last 15 years, legislation has evolved quickly. For example, Portugal made ART available in 2006 with conditions very similar to those in France, before amending the law in 2016 to allow lesbian couples and single women to benefit. Italy clarified its uncertain legal situation in 2004 by adopting Europe's strictest laws: ART is only available to heterosexual couples, married or otherwise, and sperm donation is prohibited.
Today, 21 countries provide partial public funding for ART treatment. The seven others, which do not, are Ireland, Cyprus, Estonia, Latvia, Luxembourg, Malta, and Romania.
Such subsidies are subject to conditions, however. In Belgium, a fixed payment of €1,073 is made for each full cycle of the IVF process. The woman must be aged under 43 and may not carry out more than six cycles of ART. There is also a limit on the number of transferable embryos, which varies according to age and the number of cycles completed.
In France, ART is subsidized in full by national health insurance for women up to age 43, with limits of 4 attempts at IVF and 6 at artificial insemination.
Germany tightened its conditions for public funding in 2004, which caused a sharp drop in the number of ART cycles carried out, from more than 102,000 in 2003 to fewer than 57,000 the following year. Since then the figure has remained stable.
17 countries limit access to ART according to the age of the woman. 10 countries have established an upper age limit, varying from 40 (Finland, Netherlands) to 50 (including Spain, Greece and Estonia).
Since 1994, France has been one of a number of countries (including Germany, Spain, and the UK) which use the somewhat vague notion of "natural age of procreation". In 2017, the steering council of France's Agency of Biomedicine established an age limit of 43 for women using ART.
10 countries have no age limit for ART. These include Austria, Hungary, Italy and Poland.
Most European countries allow donations of gametes by third parties. But the situations vary depending on whether sperm or eggs are concerned. Sperm donations are authorized in 20 EU member states; in 11 of them anonymity is allowed. Egg donations are possible in 17 states, including 8 under anonymous conditions.
On 12 April, the Council of Europe adopted a recommendation which encourages an end to anonymity. In the UK, anonymous sperm donations ended in 2005 and children have access to the identity of the donor when they reach adulthood.
In France, the principle of anonymous donations of sperm or embryos is maintained in the law of bioethics of 2011, but a new bill under discussion may change the situation.
United Kingdom
In the United Kingdom, all patients have the right to preliminary testing, provided free of charge by the National Health Service (NHS). However, treatment is not widely available on the NHS and there can be long waiting lists. Many patients therefore pay for immediate treatment within the NHS or seek help from private clinics.
In 2013, the National Institute for Health and Care Excellence (NICE) published new guidelines about who should have access to IVF treatment on the NHS in England and Wales. The guidelines say women aged between 40 and 42 should be offered one cycle of IVF on the NHS if they have never had IVF treatment before, have no evidence of low ovarian reserve (this is when eggs in the ovary are low in number, or low in quality), and have been informed of the additional implications of IVF and pregnancy at this age. However, if tests show IVF is the only treatment likely to help them get pregnant, women should be referred for IVF straight away.
This policy is often modified by local Clinical Commissioning Groups, in a fairly blatant breach of the NHS Constitution for England which provides that patients have the right to drugs and treatments that have been recommended by NICE for use in the NHS. For example, the Cheshire, Merseyside and West Lancashire Clinical Commissioning Group insists on additional conditions:
The person undergoing treatment must have commenced treatment before her 40th birthday;
The person undergoing treatment must have a BMI of between 19 and 29;
Neither partner must have any living children, from either the current or previous relationships. This includes adopted as well as biological children; and,
Sub-fertility must not be the direct result of a sterilisation procedure in either partner (this does not include conditions where sterilisation occurs as a result of another medical problem). Couples who have undertaken a reversal of their sterilisation procedure are not eligible for treatment.
Canada
Some treatments are covered by OHIP (public health insurance) in Ontario and others are not. Women who have bilaterally blocked fallopian tubes and are under the age of 40 have treatment covered but are still required to pay test fees (around CA$3,000–4,000). Coverage varies in other provinces. Most other patients are required to pay for treatments themselves.
Israel
Israel's national health insurance, which is mandatory for all Israeli citizens, covers nearly all fertility treatments. IVF costs are fully subsidized up to the birth of two children for all Israeli women, including single women and lesbian couples. Embryo transfers for purposes of gestational surrogacy are also covered.
Germany
On 27 January 2009, the Federal Constitutional Court ruled that it is constitutional for the health insurance companies to bear only 50% of the cost of IVF. On 2 March 2012, the Federal Council approved a draft law of some federal states which provides that the federal government contribute a subsidy of 25% of the cost. Thus, the share of costs borne by the couple would drop to just 25%. Since July 2017, assisted reproductive technology has also been allowed for married lesbian couples, as the German parliament allowed same-sex marriages in Germany.
France
In July 2020, the French Parliament extended assisted reproductive technology to lesbian couples and single women.
Cuba
Cuban sources mention that assisted reproduction is completely legal and free in the country.
India
The Government of India has notified the Surrogacy (Regulation) Act 2021 and the Assisted Reproductive Technology (Regulation) Act 2021 to regulate the practice of ART. Prior to that, the field was governed by the National Guidelines for Accreditation, Supervision and Regulation of ART Clinics in India, published by the Ministry for Health and Family Welfare, Government of India, in 2005. Indian law recognises the right of a single adult woman to have children through ART.
Society and culture
Ethics
Some couples may find it difficult to stop treatment despite a very poor prognosis, resulting in futile therapies. This can leave ART providers with a difficult decision of whether to continue or refuse treatment. Some assisted reproductive technologies have the potential to be harmful to both the mother and child, posing a psychological and/or physical health risk, which may impact the ongoing use of these treatments.
In Israel, there is research supporting the use of artwork, including pieces made from recycled lab materials from the IVF process, to help women work through some of these mixed emotions.
Fictional representation
Films and other fiction depicting emotional struggles of assisted reproductive technology have had an upswing in the latter part of the 2000s decade, although the techniques have been available for decades. As ART becomes more utilized, the number of people that can relate to it by personal experience in one way or another is growing. For specific examples, refer to the fiction sections in individual subarticles, e.g. surrogacy, sperm donation and fertility clinic.
In addition, reproduction and pregnancy in speculative fiction have been present for many decades.
Historical facts
On 25 July 1978, Louise Brown was born; this was the first successful birth of a child after IVF treatment. The procedure took place at Dr Kershaw's Cottage Hospital (now Dr Kershaw's Hospice) in Royton, Oldham, England. Patrick Steptoe (gynaecologist) and Robert Edwards (physiologist) worked together to develop the IVF technique </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Autism spectrum and can you explain it in depth? [/INST] The autism spectrum is a range of neurodevelopmental conditions generally characterized by difficulties in social interactions and communication, repetitive behaviors, intense interests, and unusual responses to sensory stimuli. It is commonly referred to as autism or, in the context of a professional diagnosis, autism spectrum disorder (ASD), but the latter term remains controversial among neurodiversity advocates, neurodiversity researchers, and many autistic people due to the use of the word disorder and due to questions about its utility outside of diagnostic contexts. A spectrum disorder is one that can manifest very differently from person to person: any given person with the disorder is likely to show some but not all of the characteristics associated with it, and may show them to very different degrees. Different autistic people might show strikingly different characteristics, and the same person might also present differently at different times. Historically, the autism spectrum was divided into sub-categories, but questions persisted over the validity of these divisions. The most recent editions of the major English-language diagnostic manuals, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR, published in 2022) and the International Classification of Diseases (ICD-11, released in 2021), both list ASD as a single disorder. Given concerns about the appropriateness of the term disorder, many sources prefer to use the word "autism" without any additional words, on the basis that this is the least controversial term among people with different perspectives, or (in the United Kingdom) "autism spectrum condition" (ASC) rather than ASD. While psychiatry traditionally classifies autism as a neurodevelopmental disorder, many autistic people, most autistic advocates and a rapidly increasing number of researchers see autism as part of neurodiversity, the natural diversity in human thinking and experience, with strengths, differences, and weaknesses. On this view, promoted by the autism rights movement, autism is not pathological, but this does not preclude autistic individuals from being disabled and potentially having high support needs due to co-occurring conditions and lack of person-environment fit. This relatively positive and holistic view of autism has led to a certain degree of friction between autistic individuals, advocates, charities, researchers and practitioners. The causes of autism are not well understood, but are likely linked to altered structures of the brain at birth. There is no official cure for autism, so interventions focus on, for example, finding and learning other modes of communication for a non-verbal autistic person, or on applied behavior analysis interventions, which are highly controversial.
Other controversies in autism are scientific, sociological, political, or philosophical, and some have aspects of all four. First, it is controversial and uncertain whether the social-communication difficulties of autistic people are inherent core deficits (see the empathizing-systemizing theory developed by Simon Baron-Cohen), or due to a mismatch in social communication styles, cognition, and experiences resulting in bidirectional misunderstanding between autistic people and non-autistic people (see the double empathy problem theory developed by autistic researcher Damian Milton, and recent, growing evidence that autistic people empathize, communicate and socialize well with other autistic people), or a combination of both factors. A 2018 study showed that autistic people are more prone to object personification, suggesting that autistic empathy may be not only more complex but also more all-encompassing, contrary to the popular belief that autistic people lack empathy.
Moreover, scientists are still trying to determine what causes autism; it is highly heritable and believed to be mainly genetic, but there are many genes involved, and environmental factors may also be relevant. It is unclear why autism commonly co-occurs with ADHD, epilepsy and a range of other conditions. There are ongoing disagreements about what should be included as part of the autism spectrum, whether meaningful sub-types of autism exist, and the significance of autism-associated traits in the wider population. The combination of broader criteria and increased awareness has led to a trend of steadily increasing estimates of autism prevalence, causing a common misconception that there is an autism epidemic and perpetuating the myth that it is caused by vaccines.
Classification
Spectrum model
Autism is a highly variable neurodevelopmental disorder and has long been thought to cover a wide spectrum, ranging from individuals with high support needs (who may be non-speaking, experience developmental delay, and be more likely to present with other co-existing diagnoses including intellectual disability) to individuals with low support needs (who may have more typical speech-language and intellectual skills but atypical social/conversation skills, narrowly focused interests, and wordy, pedantic communication). Because the behavior spectrum is continuous, boundaries between diagnostic categories are somewhat arbitrary.
ICD
The World Health Organizations International Classification of Diseases (11th Revision) ICD-11, regarded as the global standard, was released in June 2018 and came into full effect as of January 2022. It describes ASD as follows:
Autism spectrum disorder is characterised by persistent deficits in the ability to initiate and to sustain reciprocal social interaction and social communication, and by a range of restricted, repetitive, and inflexible patterns of behaviour, interests or activities that are clearly atypical or excessive for the individual's age and sociocultural context. The onset of the disorder occurs during the developmental period, typically in early childhood, but symptoms may not become fully manifest until later, when social demands exceed limited capacities. Deficits are sufficiently severe to cause impairment in personal, family, social, educational, occupational or other important areas of functioning and are usually a pervasive feature of the individual's functioning observable in all settings, although they may vary according to social, educational, or other context. Individuals along the spectrum exhibit a full range of intellectual functioning and language abilities.
ICD-11 was produced by professionals from 55 countries out of the 90 countries involved and is the most widely used reference worldwide. Clinicians use the ICD as a reference for diagnosis and reporting, but researchers, particularly in the US, continue to use the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR from 2022, DSM-5 from 2013, or their predecessors), as some material is not included in the ICD (the ICD is broader in scope, covering general as well as mental health). There remain differences; for example, Rett disorder was included in ASD in the DSM-5 but in the ICD-11 it was excluded and placed in the chapter for Developmental Anomalies. Both the ICD and the DSM have been under revision, and there has been collaborative work towards a convergence of the two since 1980 (when DSM-III was published and ICD-9 was current), including more rigorous biological assessment, in place of historical experience, and a simplification of the system of classification.
DSM
The American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR), released in 2022, is the current version of the DSM. The fifth edition, DSM-5, released in May 2013, was the first to define ASD as a single diagnosis, which is continued in DSM-5-TR. ASD encompasses previous diagnoses which included Asperger disorder, childhood disintegrative disorder, PDD-NOS, and the range of diagnoses which included the word autism. Rather than distinguishing between these diagnoses, the DSM-5 and DSM-5-TR adopt a dimensional approach to diagnosing disorders that fall underneath the autistic spectrum umbrella in one diagnostic category. Within this category, the DSM-5 and DSM-5-TR include a framework that differentiates each individual by dimensions of symptom severity, as well as by associated features (i.e., the presence of other disorders or factors which likely contribute to the symptoms, other neurodevelopmental or mental disorders, intellectual disability, or language impairment). The symptom domains are social communication and restricted, repetitive behaviors, with the option of a separate severity (the negative impact of the symptoms on the individual) being specified for each domain, rather than an overall severity. Prior to the DSM-5, the DSM separated social deficits and communication deficits into two domains. Further, the DSM-5 changed to an onset age in the early developmental period, with a note that symptoms may manifest later when social demands exceed capabilities, rather than the previous, more restricted 3 years of age. These changes continue in the DSM-5-TR.
Features and characteristics
For many autistic individuals, characteristics first appear during infancy or childhood and generally follow a steady course without remission (different developmental timelines are described in more detail below). Autistic people may be severely impaired in some respects but average, or even superior, in others. Clinicians consider assessment for ASD when a patient shows:
regular difficulties in social interaction or communication
restricted or repetitive behaviors (often called "stimming")
resistance to changes or restricted interests
These features are typically assessed with the following, when appropriate:
problems in obtaining or sustaining employment or education
difficulties in initiating or sustaining social relationships
connections with mental health or learning disability services
a history of neurodevelopmental conditions (including learning disabilities and ADHD) or mental health conditions.
There are many signs associated with ASD; the presentation varies widely:
Atypical eating is also common, but it does not need to be present to make a diagnosis. In addition, a small percentage of autistic people can exhibit notable ability, for example in mathematics, music or artistic reproduction, which in exceptional cases is referred to as savant syndrome.
Developmental course
There are two possible developmental courses of ASD. One course of development is more gradual in nature, with symptoms appearing fairly early in life and persisting. A second course of development is characterized by normal or near-normal development before onset of regression or loss of skills, which is known as regressive autism.
Gradual autism development
Most parents report that the onset of autism features appears within the first year of life. This course of development is fairly gradual, in that parents typically report concerns in development over the first two years of life and diagnosis can be made around 3–4 years of age. Overt features gradually begin after the age of six months, become established by age two or three years, and tend to continue through adulthood, although often in more muted form. Some of the early signs of ASDs in this course include decreased attention to faces, failure to respond obviously when the child's name is called, failure to show interest by showing or pointing, and delayed imaginative play.
Regressive autism development
Regressive autism occurs when a child appears to develop typically but then starts to lose speech and social skills and is subsequently diagnosed with ASD. Other terms used to describe regression in children with autism are autism with regression, autistic regression, setback-type autism, and acquired autistic syndrome. Within the regressive autism developmental course, there are two patterns. In the first pattern, developmental losses occur in the first 15 months to 3 years. The second pattern, childhood disintegrative disorder (a diagnosis now included under ASD), is characterized by regression after normal development in the first 3 to 4, or even up to 9, years of life. After the regression, the child follows the standard pattern of autistic neurological development. The term regressive autism refers to the appearance that neurological development has reversed; it is actually only the affected developmental skills, rather than the neurology as a whole, that regresses.
The apparent onset of regressive autism can be surprising and distressing to parents, who often initially suspect severe hearing loss. Attribution of regression to environmental stress factors may result in a delay in diagnosis. There is no standard definition of regression. Some children show a mixture of features, with some early delays and some later losses; and there is evidence of a continuous spectrum of behaviors, rather than, or in addition to, a black-and-white distinction between autism with and without regression. There are several intermediate types of development, which do not neatly fit into either the traditional early onset or the regressive categories, including mixtures of early deficits, failures to progress, subtle diminishment, and obvious losses.
Regression may occur in a variety of domains, including communication, social, cognitive, and self-help skills; however, the most common regression is loss of language. Some children lose social development instead of language; some lose both. Skill loss may be quite rapid, or may be slow and preceded by a lengthy period of no skill progression; the loss may be accompanied by reduced social play or increased irritability. The temporarily acquired skills typically amount to a few words of spoken language, and may include some rudimentary social perception. The prevalence of regression varies depending on the definition used. If regression is defined strictly to require loss of language, it is less common; if defined more broadly, to include cases where language is preserved but social interaction is diminished, it is more common. Though regressive autism is often thought to be less common than the gradual course of autism onset described above, this is an area of ongoing debate; some evidence suggests that a pattern of regressive autism may be more common than previously thought. There are some who believe that regressive autism is simply early-onset autism which was recognized at a later date. Researchers have conducted studies to determine whether regressive autism is a distinct subset of ASD, but the results of these studies have contradicted one another.
Differential outcomes
There continues to be a debate over the differential outcomes based on these two developmental courses. Some studies suggest that regression is associated with poorer outcomes and others report no differences between those with early gradual onset and those who experience a regression period. While there is conflicting evidence surrounding language outcomes in autism, some studies have shown that cognitive and language abilities at age 2½ may help predict language proficiency and production after age 5. Overall, the literature stresses the importance of early intervention in achieving positive longitudinal outcomes.
Social and communication skills
In social contexts, autistic people may respond and behave differently from individuals without ASD. Impairments in social skills present many challenges for autistic individuals. Deficits in social skills may lead to problems with friendships, romantic relationships, daily living, and vocational success. One study that examined the outcomes of autistic adults found that, compared to the general population, autistic people were less likely to be married, but it is unclear whether this outcome was due to deficits in social skills or intellectual impairment, or some other reason. Prior to 2013, deficits in social function and communication were considered two separate symptom domains of autism. The current social communication domain criteria for autism diagnosis require individuals to have deficits across three social skills: social-emotional reciprocity, nonverbal communication, and developing and sustaining relationships. Communication deficits are due to problems with social-emotional skills like joint attention and social reciprocity. A range of social-emotional reciprocity difficulties (an individual's ability to naturally engage in social interactions) may be present. Autistic individuals may lack mutual sharing of interests; for example, many autistic children prefer not to play or interact with others. They may lack awareness or understanding of other people's thoughts or feelings – a child may get too close to peers (entering their personal space) without noticing that this makes them uncomfortable. They may also engage in atypical behaviors to gain attention; for example, a child may push a peer before starting a conversation. Older children and adults with ASD perform worse on tests of face and emotion recognition than non-autistic individuals, although this may be partly due to a lower ability to define a person's own emotions. Autistic people experience deficits in their ability to develop, maintain, and understand relationships, as well as difficulties adjusting behavior to fit social contexts. ASD presents with impairments in pragmatic communication skills, such as difficulty initiating a conversation or failure to consider the interests of the listener to sustain a conversation. The ability to be focused exclusively on one topic in communication is known as monotropism, and can be compared to "tunnel vision". It is common for autistic individuals to communicate strong interest in a specific topic, speaking in lesson-like monologues about their passion instead of enabling reciprocal communication with whomever they are speaking to. What may look like self-involvement or indifference toward others stems from a struggle to recognize or remember that other people have their own personalities, perspectives, and interests. Another difference in pragmatic communication skills is that autistic people may not recognize the need to control the volume of their voice in different social settings – for example, they may speak loudly in libraries or movie theaters. Autistic people display atypical nonverbal behaviors or have difficulties with nonverbal communication. They may make infrequent eye contact – an autistic child may not make eye contact when called by name, or they may avoid making eye contact with an observer. Aversion of gaze can also be seen in anxiety disorders; however, poor eye contact in autistic children is not due to shyness or anxiety; rather, it is overall diminished in quantity.
Autistic individuals may struggle with both production and understanding of facial expressions. They often do not know how to recognize emotions from others' facial expressions, or they may not respond with the appropriate facial expressions. They may have trouble recognizing subtle expressions of emotion and identifying what various emotions mean for the conversation. A defining feature is that autistic people have social impairments and often lack the intuition about others that many people take for granted. Temple Grandin, an autistic woman involved in autism activism, described her inability to understand the social communication of neurotypicals, or people with typical neural development, as leaving her feeling "like an anthropologist on Mars". They may also not pick up on body language or social cues such as eye contact and facial expressions if they provide more information than the person can process at that time. They struggle with understanding the context and subtext of conversational or printed situations, and have trouble forming resulting conclusions about the content. This also results in a lack of social awareness and atypical language expression. How facial expressions differ between those on the autism spectrum and neurotypical individuals is not clear. Further, at least half of autistic children have unusual prosody. Autistic people may also experience difficulties with verbal communication. Differences in communication may be present from the first year of life, and may include delayed onset of babbling, unusual gestures, diminished responsiveness, and vocal patterns that are not synchronized with the caregiver. In the second and third years, autistic children have less frequent and less diverse babbling, consonants, words, and word combinations; their gestures are less often integrated with words. Autistic children are less likely to make requests or share experiences, and are more likely to simply repeat others' words (echolalia). Joint attention seems to be necessary for functional speech, and deficits in joint attention seem to distinguish infants with ASD. For example, they may look at a pointing hand instead of the object to which the hand is pointing, and they consistently fail to point at objects in order to comment on or share an experience. Autistic children may have difficulty with imaginative play and with developing symbols into language. Some autistic linguistic behaviors include repetitive or rigid language, and restricted interests in conversation. For example, a child might repeat words or insist on always talking about the same subject. Echolalia may also be present in autistic individuals, for example by responding to a question by repeating the inquiry instead of answering. Language impairment is also common in autistic children, but is not part of a diagnosis. Many autistic children develop language skills at an uneven pace where they easily acquire some aspects of communication, while never fully developing others, such as in some cases of hyperlexia. In some cases, individuals remain completely nonverbal throughout their lives. The CDC estimated that around 40% of autistic children don't speak at all, although the accompanying levels of literacy and nonverbal communication skills vary.
Restricted and repetitive behaviors
ASD includes a wide variety of characteristics. Some of these include behavioral characteristics which range widely from slow development of social and learning skills to difficulties creating connections with other people. Autistic individuals may experience these challenges with forming connections due to anxiety or depression, which they are more likely to experience, and as a result isolate themselves. Other behavioral characteristics include abnormal responses to sensations (such as sights, sounds, touch, taste and smell) and problems keeping a consistent speech rhythm. The latter problem influences an individual's social skills, leading to potential problems in how they are understood by communication partners. Behavioral characteristics displayed by autistic people typically influence development, language, and social competence. Behavioral characteristics of autistic people can be observed as perceptual disturbances, disturbances of development rate, relating, speech and language, and motility. The second core symptom of autism spectrum is a pattern of restricted and repetitive behaviors, activities, and interests. In order to be diagnosed with ASD under DSM-5 or DSM-5-TR, a person must have at least two of the following behaviors:
Repetitive behaviors – Repetitive behaviors such as rocking, hand flapping, finger flicking, head banging, or repeating phrases or sounds. These behaviors may occur constantly or only when the person gets stressed, anxious or upset.
Resistance to change – A strict adherence to routines such as eating certain foods in a specific order, or taking the same path to school every day. The child may have a meltdown if there is any change or disruption to their routine.
Restricted interests – An excessive interest in a particular activity, topic, or hobby, and devoting all their attention to it. For example, young children might completely focus on things that spin and ignore everything else. Older children might try to learn everything about a single topic, such as the weather or sports, and perseverate or talk about it constantly.
Sensory reactivity – An unusual reaction to certain sensory inputs, such as having a negative reaction to specific sounds or textures, being fascinated by lights or movements, or having an apparent indifference to pain or heat.
Autistic individuals can display many forms of repetitive or restricted behavior, which the Repetitive Behavior Scale-Revised (RBS-R) categorizes as follows.
Stereotyped behaviors: Repetitive movements, such as hand flapping, head rolling, or body rocking.
Compulsive behaviors: Time-consuming behaviors intended to reduce anxiety, that an individual feels compelled to perform repeatedly or according to rigid rules, such as placing objects in a specific order, checking things, or handwashing.
Sameness: Resistance to change; for example, insisting that the furniture not be moved or refusing to be interrupted.
Ritualistic behavior: Unvarying pattern of daily activities, such as an unchanging menu or a dressing ritual. This is closely associated with sameness and an independent validation has suggested combining the two factors.
Restricted interests: Interests or fixations that are abnormal in theme or intensity of focus, such as preoccupation with a single television program, toy, or game.
Self-injury: Behaviors such as eye-poking, skin-picking, hand-biting and head-banging.
Self-injury
Self-injurious behaviors (SIB) are relatively common in autistic people, and can include head-banging, self-cutting, self-biting, and hair-pulling. Some of these behaviors can result in serious injury or death. Following are theories about the cause of self-injurious behavior in children with developmental delay, including autistic individuals:
Frequency and/or continuation of self-injurious behavior can be influenced by environmental factors (e.g. reward in return for halting self-injurious behavior). However, this theory is not applicable to younger children with autism. There is some evidence that the frequency of self-injurious behavior can be reduced by removing or modifying environmental factors that reinforce this behavior.
Higher rates of self-injury are also noted in socially isolated individuals with autism. Studies have shown that socialization skills are related to self-injurious behavior in individuals with autism.
Self-injury could be a response to modulate pain perception when chronic pain or other health problems that cause pain are present.
Abnormal basal ganglia connectivity may predispose to self-injurious behavior.
Other features
Autistic individuals may have symptoms that do not contribute to the official diagnosis, but that can affect the individual or the family. Some individuals with ASD show unusual abilities, ranging from splinter skills (such as the memorization of trivia) to the rare talents of autistic savants. One study describes how some individuals with ASD show superior skills in perception and attention, relative to the general population. Sensory abnormalities are found in over 90% of autistic people, and are considered core features by some. Differences between the previously recognized disorders under the autism spectrum are greater for under-responsivity (for example, walking into things) than for over-responsivity (for example, distress from loud noises) or for sensation seeking (for example, rhythmic movements). An estimated 60–80% of autistic people have motor signs that include poor muscle tone, poor motor planning, and toe walking; deficits in motor coordination are pervasive across ASD and are greater in autism proper. Unusual eating behavior occurs in about three-quarters of children with ASD, to the extent that it was formerly a diagnostic indicator. Selectivity is the most common problem, although eating rituals and food refusal also occur. There is tentative evidence that gender dysphoria occurs more frequently in autistic people (see Autism and LGBT identities). As well as that, a 2021 anonymized online survey of 16–90 year-olds revealed that autistic males are more likely to identify as bisexual, while autistic females are more likely to identify as homosexual. Gastrointestinal problems are one of the most commonly co-occurring medical conditions in autistic people. These are linked to greater social impairment, irritability, language impairments, mood changes, and behavior and sleep problems. Parents of children with ASD have higher levels of stress. Siblings of children with ASD report greater admiration and less conflict with the affected sibling than siblings of unaffected children and were similar to siblings of children with Down syndrome in these aspects of the sibling relationship. However, they reported lower levels of closeness and intimacy than siblings of children with Down syndrome; siblings of individuals with ASD have greater risk of negative well-being and poorer sibling relationships as adults.
Causes
It had long been presumed that there is a common cause at the genetic, cognitive, and neural levels for the social and non-social components of autism's symptoms, described as a triad in the classic autism criteria. However, there is increasing suspicion that autism is instead a complex disorder whose core aspects have distinct causes that often co-occur. While it is unlikely that a single cause for ASD exists, many risk factors identified in the research literature may contribute to ASD development. These risk factors include genetics, prenatal and perinatal factors (meaning factors during pregnancy or very early infancy), neuroanatomical abnormalities, and environmental factors. It is possible to identify general risk factors, but much more difficult to pinpoint specific factors. Given the current state of knowledge, prediction can only be of a global nature and therefore requires the use of general markers.
Biological subgroups
Research into causes has been hampered by the inability to identify biologically meaningful subgroups within the autistic population and by the traditional boundaries between the disciplines of psychiatry, psychology, neurology and pediatrics. Newer technologies such as fMRI and diffusion tensor imaging can help identify biologically relevant phenotypes (observable traits) that can be viewed on brain scans, to help further neurogenetic studies of autism; one example is lowered activity in the fusiform face area of the brain, which is associated with impaired perception of people versus objects. It has been proposed to classify autism using genetics as well as behavior.
Genetics
Autism has a strong genetic basis, although the genetics of autism are complex and it is unclear whether ASD is explained more by rare mutations with major effects, or by rare multi-gene interactions of common genetic variants. Complexity arises due to interactions among multiple genes, the environment, and epigenetic factors, which do not change the DNA sequence but are heritable and influence gene expression. Many genes have been associated with autism through sequencing the genomes of affected individuals and their parents. However, most of the mutations that increase autism risk have not been identified. Typically, autism cannot be traced to a Mendelian (single-gene) mutation or to a single chromosome abnormality, and none of the genetic syndromes associated with ASD have been shown to selectively cause ASD. Numerous candidate genes have been located, with only small effects attributable to any particular gene. Most loci individually explain less than 1% of cases of autism.
As of 2018, it appeared that between 74% and 93% of ASD risk is heritable. After an older child is diagnosed with ASD, 7–20% of subsequent children are likely to be as well. If parents have one autistic child, they have a 2% to 8% chance of having a second child who is also autistic. If the autistic child is an identical twin, the other will be affected 36 to 95 percent of the time. If they are fraternal twins, the other will only be affected up to 31 percent of the time. The large number of autistic individuals with unaffected family members may result from spontaneous structural variation, such as deletions, duplications, or inversions in genetic material during meiosis. Hence, a substantial fraction of autism cases may be traceable to genetic causes that are highly heritable but not inherited: that is, the mutation that causes the autism is not present in the parental genome.
As of 2018, understanding of genetic risk factors had shifted from a focus on a few alleles to an understanding that genetic involvement in ASD is probably diffuse, depending on a large number of variants, some of which are common and have a small effect, and some of which are rare and have a large effect. The most common gene disrupted by large-effect rare variants appeared to be CHD8, but less than 0.5% of autistic people have such a mutation. CHD8 encodes chromodomain helicase DNA binding protein 8, an ATP-dependent chromatin regulator enzyme that is essential during fetal development. The protein contains an Snf2 helicase domain that is responsible for the hydrolysis of ATP to ADP. CHD8 is a DNA helicase that functions as a transcription repressor by remodeling chromatin structure, altering the position of nucleosomes. CHD8 negatively regulates Wnt signaling, which is important in early vertebrate development and morphogenesis. It is believed that CHD8 also recruits the linker histone H1 and causes the repression of β-catenin and p53 target genes. The importance of CHD8 can be observed in studies where CHD8-knockout mice died at embryonic day 5.5 because of widespread p53-induced apoptosis. Some studies have examined the role of CHD8 in autism spectrum disorder (ASD). CHD8 expression significantly increases during human mid-fetal development. Its chromatin remodeling activity and its interaction with transcriptional regulators have been shown to play an important role in ASD aetiology. The developing mammalian brain has conserved CHD8 target regions that are associated with ASD risk genes. The knockdown of CHD8 in human neural stem cells results in dysregulation of ASD risk genes that are targeted by CHD8. Recently, CHD8 has been associated with the regulation of long non-coding RNAs (lncRNAs) and with the initiation of X chromosome inactivation (XCI), via regulation of the Xist long non-coding RNA, the master regulator of XCI, through competitive binding to Xist regulatory regions.
Some ASD is associated with clearly genetic conditions, like fragile X syndrome; however, only around 2% of autistic people have fragile X.
Hypotheses from evolutionary psychiatry suggest that these genes persist because they are linked to human inventiveness, intelligence, or systemising.
Current research suggests that genes that increase susceptibility to ASD are ones that control protein synthesis in neuronal cells in response to cell needs, activity and adhesion of neuronal cells, synapse formation and remodeling, and the balance of excitatory to inhibitory neurotransmission. Therefore, although up to 1,000 different genes are thought to contribute to increased risk of ASD, all of them eventually affect normal neural development and connectivity between different functional areas of the brain in a manner that is characteristic of an ASD brain. Some of these genes are known to modulate production of the GABA neurotransmitter, the main inhibitory neurotransmitter in the nervous system. These GABA-related genes are under-expressed in an ASD brain. On the other hand, genes controlling expression of glial and immune cells in the brain, e.g. astrocytes and microglia respectively, are over-expressed, which correlates with the increased number of glial and immune cells found in postmortem ASD brains. Some genes under investigation in ASD pathophysiology are those that affect the mTOR signaling pathway, which supports cell growth and survival. All these genetic variants contribute to the development of the autistic spectrum; however, it cannot be guaranteed that they are determinants for its development.
Autism may be under-diagnosed in women and girls due to an assumption that it is primarily a male condition, but genetic phenomena such as imprinting and X linkage can raise the frequency and severity of conditions in males, and theories have been put forward for a genetic reason why males are diagnosed more often, such as the imprinted brain hypothesis and the extreme male brain theory.
Early life
Several prenatal and perinatal complications have been reported as possible risk factors for autism. These risk factors include maternal gestational diabetes, maternal and paternal age over 30, bleeding during pregnancy after the first trimester, use of certain prescription medication (e.g. valproate) during pregnancy, and meconium in the amniotic fluid. While research is not conclusive on the relation of these factors to autism, each of these factors has been identified more frequently in children with autism than in their siblings who do not have autism and in other typically developing youth. While it is unclear whether any single factor during the prenatal phase affects the risk of autism, complications during pregnancy may be a risk.
Low vitamin D levels in early development have been hypothesized as a risk factor for autism. Studies are also being done to test whether certain types of regressive autism have an autoimmune basis.
Maternal nutrition and inflammation during preconception and pregnancy influence fetal neurodevelopment. Intrauterine growth restriction is associated with ASD, in both term and preterm infants. Maternal inflammatory and autoimmune diseases may damage fetal tissues, aggravating a genetic problem or damaging the nervous system.
Exposure to air pollution during pregnancy, especially heavy metals and particulates, may increase the risk of autism. Environmental factors that have been claimed without evidence to contribute to or exacerbate autism include certain foods, infectious diseases, solvents, PCBs, phthalates and phenols used in plastic products, pesticides, brominated flame retardants, alcohol, smoking, illicit drugs, vaccines, and prenatal stress. Some, such as the MMR vaccine, have been completely disproven.
Disproven vaccine hypothesis
Parents may first become aware of autistic symptoms in their child around the time of a routine vaccination. This has led to unsupported theories blaming vaccine "overload", a vaccine preservative, or the MMR vaccine for causing autism. In 1998 Andrew Wakefield led a fraudulent, litigation-funded study that suggested that the MMR vaccine may cause autism. This conjecture suggested that autism results from brain damage caused either by the MMR vaccine itself, or by thimerosal, a vaccine preservative. No convincing scientific evidence supports these claims; they are biologically implausible, and further evidence continues to refute them, including the observation that the rate of autism continues to climb despite elimination of thimerosal from routine childhood vaccines. A 2014 meta-analysis examined ten major studies on autism and vaccines involving 1.25 million children worldwide; it concluded that neither the MMR vaccine, which has never contained thimerosal, nor the vaccine components thimerosal or mercury, lead to the development of ASDs. Despite this, misplaced parental concern has led to lower rates of childhood immunizations, outbreaks of previously controlled childhood diseases in some countries, and the preventable deaths of several children.
Etiological hypotheses
Several hypotheses have been presented that try to explain how and why autism develops by integrating known causes (genetic and environmental effects) and findings (neurobiological and somatic). Some are more comprehensive, such as the Pathogenetic Triad, which proposes and operationalizes three core features (an autistic personality, cognitive compensation, neuropathological burden) that interact to cause autism, and the Intense World Theory, which explains autism through a hyper-active neurobiology that leads to increased perception, attention, memory, and emotionality. There are also simpler hypotheses that explain only individual parts of the neurobiology or phenotype of autism, such as mind-blindness (a decreased ability for Theory of Mind), the weak central coherence theory, or the extreme male brain and empathising-systemising theory.
Evolutionary hypotheses
Research exploring the evolutionary benefits of autism and associated genes has suggested that autistic people may have played a "unique role in technological spheres and understanding of natural systems" in the course of human development. It has been suggested that autism may have arisen as "a slight trade off for other traits that are seen as highly advantageous", providing "advantages in tool making and mechanical thinking", with speculation that the condition may "reveal itself to be the result of a balanced polymorphism, like sickle cell anemia, that is advantageous in a certain mixture of genes and disadvantageous in specific combinations".
In 2011, a paper in Evolutionary Psychology proposed that autistic traits, including increased abilities for spatial intelligence, concentration, and memory, could have been naturally selected to enable self-sufficient foraging in a more (although not completely) solitary environment, referred to as the "Solitary Forager Hypothesis". A 2016 paper examines Asperger syndrome as "an alternative pro-social adaptive strategy" which may have developed as a result of the emergence of "collaborative morality" in the context of small-scale hunter-gathering, i.e. where "a positive social reputation for making a contribution to group wellbeing and survival" becomes more important than complex social understanding.
Conversely, some multidisciplinary research suggests that recent human evolution may be a driving force in the rise of a number of medical conditions in recent human populations, including autism. Studies in evolutionary medicine indicate that as biological evolution becomes outpaced by cultural evolution, disorders linked to bodily dysfunction increase in prevalence due to a lack of contact with pathogens and negative environmental conditions that once widely affected ancestral populations. Because natural selection primarily favors reproduction over health and longevity, the lack of this impetus to adapt to certain harmful circumstances creates a tendency for genes in descendant populations to over-express themselves, which may cause a wide array of maladies, ranging from mental illnesses to autoimmune disorders.
Pathophysiology
Autism's symptoms result from maturation-related changes in various systems of the brain. How autism occurs is not well understood. Its mechanism can be divided into two areas: the pathophysiology of brain structures and processes associated with autism, and the neuropsychological linkages between brain structures and behaviors. The behaviors appear to have multiple pathophysiologies.
There is evidence that gut–brain axis abnormalities may be involved. A 2015 review proposed that immune dysregulation, gastrointestinal inflammation, malfunction of the autonomic nervous system, gut flora alterations, and food metabolites may cause brain neuroinflammation and dysfunction. A 2016 review concluded that enteric nervous system abnormalities might play a role in neurological disorders such as autism. Neural connections and the immune system are a pathway that may allow diseases originating in the intestine to spread to the brain.
Several lines of evidence point to synaptic dysfunction as a cause of autism. Some rare mutations may lead to autism by disrupting some synaptic pathways, such as those involved with cell adhesion. All known teratogens (agents that cause birth defects) related to the risk of autism appear to act during the first eight weeks from conception, and though this does not exclude the possibility that autism can be initiated or affected later, there is strong evidence that autism arises very early in development.
In general, neuroanatomical studies support the concept that autism may involve a combination of brain enlargement in some areas and reduction in others. These studies suggest that autism may be caused by abnormal neuronal growth and pruning during the early stages of prenatal and postnatal brain development, leaving some areas of the brain with too many neurons and other areas with too few. Some research has reported an overall brain enlargement in autism, while other work suggests abnormalities in several areas of the brain, including the frontal lobe, the mirror neuron system, the limbic system, the temporal lobe, and the corpus callosum.
In functional neuroimaging studies, when performing theory of mind and facial emotion response tasks, the median person on the autism spectrum exhibits less activation in the primary and secondary somatosensory cortices of the brain than the median member of a properly sampled control population. This finding coincides with reports demonstrating abnormal patterns of cortical thickness and grey matter volume in those regions of autistic people's brains.
Brain connectivity
Brains of autistic individuals have been observed to have abnormal connectivity, and the degree of these abnormalities directly correlates with the severity of autism. The following are some observed abnormal connectivity patterns in autistic individuals:
Decreased connectivity between different specialized regions of the brain (e.g. lower neuron density in the corpus callosum) and relative over-connectivity within specialized regions of the brain by adulthood. Connectivity between different regions of the brain (long-range connectivity) is important for integration and global processing of information and for comparing incoming sensory information with the existing model of the world within the brain. Connections within each specialized region (short-range connections) are important for processing individual details and for modifying the existing model of the world within the brain to more closely reflect incoming sensory information. In infancy, children at high risk for autism who were later diagnosed with autism were observed to have abnormally high long-range connectivity, which then decreased through childhood to eventual long-range under-connectivity by adulthood.
Abnormal preferential processing of information by the left hemisphere of the brain, versus preferential processing of information by the right hemisphere in neurotypical individuals. The left hemisphere is associated with processing information related to details, whereas the right hemisphere is associated with processing information in a more global and integrated sense that is essential for pattern recognition. For example, visual information like face recognition is normally processed by the right hemisphere, which tends to integrate all information from an incoming sensory signal, whereas an ASD brain preferentially processes visual information in the left hemisphere, where information tends to be processed for local details of the face rather than the overall configuration of the face. This left lateralization negatively impacts both facial recognition and spatial skills.
Increased functional connectivity within the left hemisphere, which directly correlates with the severity of autism. This observation also supports preferential processing of details of individual components of sensory information over global processing of sensory information in an ASD brain.
Prominent abnormal connectivity in the frontal and occipital regions. In autistic individuals, low connectivity in the frontal cortex was observed from infancy through adulthood. This is in contrast to long-range connectivity, which is high in infancy and low in adulthood in ASD. Abnormal neural organization is also observed in Broca's area, which is important for speech production.
Neuropathology
Listed below are some characteristic findings in ASD brains at the molecular and cellular levels, regardless of the specific genetic variation or mutation contributing to autism in a particular individual:
Limbic system with smaller neurons that are more densely packed together. Given that the limbic system is the main center of emotions and memory in the human brain, this observation may explain social impairment in ASD.
Fewer and smaller Purkinje neurons in the cerebellum. New research suggests a role for the cerebellum in emotional processing and language.
Increased number of astrocytes and microglia in the cerebral cortex. These cells provide metabolic and functional support to neurons and act as immune cells in the nervous system, respectively.
Increased brain size in early childhood, causing macrocephaly in 15–20% of ASD individuals. Brain size, however, normalizes by mid-childhood. This variation in brain size is not uniform in the ASD brain, with some parts such as the frontal and temporal lobes being larger, some such as the parietal and occipital lobes being normal sized, and some such as the cerebellar vermis, corpus callosum, and basal ganglia being smaller than in neurotypical individuals.
Mutations in cell adhesion molecules that are essential to the formation and maintenance of connections between neurons, in neuroligins found on postsynaptic neurons that bind presynaptic cell adhesion molecules, and in proteins that anchor cell adhesion molecules to neurons have all been found in ASD.
Gut-immune-brain axis
An estimated 46% to 84% of autistic individuals have GI-related problems such as reflux, diarrhea, constipation, inflammatory bowel disease, and food allergies. The makeup of gut bacteria in autistic people has been observed to differ from that of neurotypical individuals, which has raised the question of whether gut bacteria influence ASD development by inducing an inflammatory state. Listed below are some research findings on the influence of gut bacteria and abnormal immune responses on brain development:
Some studies on rodents have shown gut bacteria influencing emotional functions and neurotransmitter balance in the brain, both of which are impacted in ASD.
The immune system is thought to be the intermediary that modulates the influence of gut bacteria on the brain. Some ASD individuals have a dysfunctional immune system with higher numbers of some types of immune cells, biochemical messengers and modulators, and autoimmune antibodies. Increased inflammatory biomarkers correlate with increased severity of ASD symptoms and there is some evidence to support a state of chronic brain inflammation in ASD.
More pronounced inflammatory responses to bacteria were found in ASD individuals with an abnormal gut microbiota. Additionally, immunoglobulin A antibodies that are central to gut immunity were found at elevated levels in ASD populations. Some of these antibodies may attack proteins that support myelination of the brain, a process that is important for robust transmission of neural signals in many nerves.
Activation of the maternal immune system during pregnancy (by gut bacteria, bacterial toxins, an infection, or non-infectious causes) and gut bacteria in the mother that induce increased levels of Th17, a pro-inflammatory immune cell, have been associated with an increased risk of autism. Some maternal IgG antibodies that cross the placenta to provide passive immunity to the fetus can also attack the fetal brain.
It is proposed that inflammation within the brain, promoted by inflammatory responses to a harmful gut microbiome, impacts brain development.
Pro-inflammatory cytokines IFN-γ, IFN-α, TNF-α, IL-6, and IL-17 have been shown to promote autistic behaviors in animal models. Giving anti-IL-6 and anti-IL-17 along with IL-6 and IL-17, respectively, has been shown to negate this effect in the same animal models.
Some gut proteins and microbial products can cross the blood–brain barrier and activate mast cells in the brain. Mast cells release pro-inflammatory factors and histamine which further increase blood–brain barrier permeability and help set up a cycle of chronic inflammation.
Mirror neuron system
The mirror neuron system consists of a network of brain areas that have been associated with empathy processes in humans. In humans, the mirror neuron system has been identified in the inferior frontal gyrus and the inferior parietal lobule and is thought to be activated during imitation or observation of behaviors. The connection between mirror neuron dysfunction and autism is tentative, and it remains to be seen how mirror neurons may be related to many of the important characteristics of autism.
Social brain interconnectivity
A number of discrete brain regions and networks among regions that are involved in dealing with other people have been discussed together under the rubric of the social brain. As of 2012, there is a consensus that the autism spectrum is likely related to problems with interconnectivity among these regions and networks, rather than problems with any specific region or network.
Temporal lobe
Functions of the temporal lobe are related to many of the deficits observed in individuals with ASDs, such as receptive language, social cognition, joint attention, action observation, and empathy. The temporal lobe also contains the superior temporal sulcus and the fusiform face area, which may mediate facial processing. It has been argued that dysfunction in the superior temporal sulcus underlies the social deficits that characterize autism. Compared to typically developing individuals, one study found that individuals with so-called high-functioning autism had reduced activity in the fusiform face area when viewing pictures of faces.
Mitochondria
ASD could be linked to mitochondrial disease, a basic cellular abnormality with the potential to cause disturbances in a wide range of body systems. A 2012 meta-analysis, as well as other population studies, shows that approximately 5% of autistic children meet the criteria for classical mitochondrial dysfunction. It is unclear why this mitochondrial disease occurs, considering that only 23% of children with both ASD and mitochondrial disease present with mitochondrial DNA abnormalities.
Serotonin
Serotonin is a major neurotransmitter in the nervous system and contributes to the formation of new neurons (neurogenesis), the formation of new connections between neurons (synaptogenesis), the remodeling of synapses, and the survival and migration of neurons, processes that are necessary for a developing brain, some of which are also necessary for learning in the adult brain. About 45% of ASD individuals have been found to have increased blood serotonin levels. It has been hypothesized that increased activity of serotonin in the developing brain may facilitate the onset of ASD, with six out of eight studies finding an association between the use of selective serotonin reuptake inhibitors (SSRIs) by the pregnant mother and the development of ASD in the child exposed to SSRIs in the antenatal environment. These studies could not definitively conclude that SSRIs caused the increased risk for ASD, due to the biases found in them, and the authors called for more definitive, better-conducted studies. Confounding by indication has since been shown to be likely. However, it is also hypothesized that SSRIs may help reduce symptoms of ASD and even positively affect brain development in some ASD patients.
Diagnosis
Autism Spectrum Disorder is a clinical diagnosis that is typically made by a physician based on reported and directly observed behavior in the affected individual. According to the updated diagnostic criteria in the DSM-5-TR, in order to receive a diagnosis of Autism Spectrum Disorder, one must present with “persistent deficits in social communication and social interaction” and “restricted, repetitive patterns of behavior, interests, or activities.” These behaviors must begin in early childhood and affect one’s ability to perform everyday tasks. Furthermore, the symptoms must not be fully explainable by intellectual developmental disorder or global developmental delay.
Several factors make Autism Spectrum Disorder difficult to diagnose. First, there are no standardized imaging, molecular, or genetic tests that can be used to diagnose ASD. Additionally, there is considerable variety in how ASD affects individuals. The behavioral manifestations of ASD depend on one’s developmental stage, age of presentation, current support, and individual variability. Lastly, multiple conditions may present similarly to Autism Spectrum Disorder, including intellectual disability, hearing impairment, specific language impairments such as Landau–Kleffner syndrome, ADHD, anxiety disorders, and psychotic disorders. Furthermore, the presence of autism can make it harder to diagnose coexisting psychiatric disorders such as depression.
Ideally the diagnosis of ASD should be given by a team of clinicians (e.g. pediatricians, child psychiatrists, child neurologists) based on information provided by the affected individual, caregivers, and other medical professionals, and on direct observation. Evaluation of a child or adult for Autism Spectrum Disorder typically starts with a pediatrician or primary care physician taking a developmental history and performing a physical exam. If warranted, the physician may refer the individual to an ASD specialist who will observe and assess cognitive, communication, family, and other factors using standardized tools, taking into account any associated medical conditions. A pediatric neuropsychologist is often asked to assess behavior and cognitive skills, both to aid diagnosis and to help recommend educational interventions. Further workup may be performed after someone is diagnosed with ASD. This may include a clinical genetics evaluation, particularly when other symptoms already suggest a genetic cause. Although up to 40% of ASD cases may be linked to genetic causes, it is not currently recommended to perform complete genetic testing on every individual who is diagnosed with ASD. Consensus guidelines for genetic testing in patients with ASD in the US and UK are limited to high-resolution chromosome and fragile X testing. Metabolic and neuroimaging tests are also not routinely performed for the diagnosis of ASD.
The age at which ASD is diagnosed varies. Sometimes ASD can be diagnosed as early as 14 months; however, a reliable diagnosis is usually made at the age of two years. Diagnosis becomes increasingly stable over the first three years of life. For example, a one-year-old who meets diagnostic criteria for ASD is less likely than a three-year-old to continue to do so a few years later. Additionally, the age of diagnosis may depend on the severity of ASD, with more severe forms more likely to be diagnosed at an earlier age. Issues with access to healthcare, such as the cost of appointments or delays in making appointments, often lead to delays in the diagnosis of ASD. In the UK the National Autism Plan for Children recommends at most 30 weeks from first concern to completed diagnosis and assessment, though few cases are handled that quickly in practice. Lack of access to appropriate medical care, broadening diagnostic criteria, and increased awareness surrounding ASD in recent years have resulted in an increased number of individuals receiving a diagnosis of ASD as adults.
Diagnosis of ASD in adults poses unique challenges because it still relies on an accurate developmental history and because autistic adults sometimes learn coping strategies (known as camouflaging) which may make it more difficult to obtain a diagnosis.
The presentation and diagnosis of Autism Spectrum Disorder may vary based on sex and gender identity. Most studies that have investigated the impact of gender on the presentation and diagnosis of Autism Spectrum Disorder have not differentiated between the impact of sex versus gender. There is some evidence that autistic women and girls tend to show less repetitive behavior and may engage in more camouflaging than autistic males. Camouflaging may include making oneself perform normative facial expressions and eye contact. Differences in behavioral presentation and gender stereotypes may make it more challenging to diagnose autism spectrum disorder in a timely manner in females. A notable percentage of autistic females may be misdiagnosed, diagnosed after a considerable delay, or not diagnosed at all.
Considering the unique challenges in diagnosing ASD using behavioral and observational assessment, specific US practice parameters for its assessment were published by the American Academy of Neurology in 2000, the American Academy of Child and Adolescent Psychiatry in 1999, and a consensus panel with representation from various professional societies in 1999. The practice parameters outlined by these societies include an initial screening of children by general practitioners (i.e., "Level 1 screening") and, for children who fail the initial screening, a comprehensive diagnostic assessment by experienced clinicians (i.e., "Level 2 evaluation"). Furthermore, it has been suggested that assessments of children with suspected ASD be conducted within a developmental framework, include multiple informants (e.g., parents and teachers) from diverse contexts (e.g., home and school), and employ a multidisciplinary team of professionals (e.g., clinical psychologists, neuropsychologists, and psychiatrists).
As of 2019, psychologists typically wait until a child shows initial evidence of ASD tendencies, then administer various psychological assessment tools to assess for ASD. Among these measurements, the Autism Diagnostic Interview-Revised (ADI-R) and the Autism Diagnostic Observation Schedule (ADOS) are considered the "gold standards" for assessing autistic children. The ADI-R is a semi-structured parent interview that probes for symptoms of autism by evaluating a child's current behavior and developmental history. The ADOS is a semistructured interactive evaluation of ASD symptoms that is used to measure social and communication abilities by eliciting several opportunities for spontaneous behaviors (e.g., eye contact) in standardized contexts. Various other questionnaires (e.g., the Childhood Autism Rating Scale, the Autism Treatment Evaluation Checklist) and tests of cognitive functioning (e.g., the Peabody Picture Vocabulary Test) are typically included in an ASD assessment battery. The Diagnostic Interview for Social and Communication Disorders (DISCO) may also be used.
Screening
About half of parents of children with ASD notice their child's atypical behaviors by age 18 months, and about four-fifths notice by age 24 months. If a child does not meet any of the following milestones, it "is an absolute indication to proceed with further evaluations. Delay in referral for such testing may delay early diagnosis and treatment and affect the [child's] long-term outcome."
No response to name (or gazing with direct eye contact) by 6 months.
No babbling by 12 months.
No gesturing (pointing, waving, etc.) by 12 months.
No single words by 16 months.
No two-word (spontaneous, not just echolalic) phrases by 24 months.
Loss of any language or social skills, at any age.
The Japanese practice is to screen all children for ASD at 18 and 24 months, using autism-specific formal screening tests. In contrast, in the UK, children whose families or doctors recognize possible signs of autism are screened. It is not known which approach is more effective. The UK National Screening Committee does not recommend universal ASD screening in young children. Its main concerns include higher chances of misdiagnosis at younger ages and a lack of evidence of the effectiveness of early interventions. There is no consensus between professional and expert bodies in the US on screening for autism in children younger than 3 years.
Screening tools include the Modified Checklist for Autism in Toddlers (M-CHAT), the Early Screening of Autistic Traits Questionnaire, and the First Year Inventory; initial data on M-CHAT and its predecessor, the Checklist for Autism in Toddlers (CHAT), on children aged 18–30 months suggest that it is best used in a clinical setting and that it has low sensitivity (many false negatives) but good specificity (few false positives). It may be more accurate to precede these tests with a broadband screener that does not distinguish ASD from other developmental disorders. Screening tools designed for one culture's norms for behaviors like eye contact may be inappropriate for a different culture. Although genetic screening for autism is generally still impractical, it can be considered in some cases, such as children with neurological symptoms and dysmorphic features.
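To make the sensitivity/specificity distinction above concrete, the following minimal sketch computes both measures from a confusion matrix; all counts are hypothetical illustrations, not data from any M-CHAT validation study.
```python
# Illustrative only: hypothetical screening counts, not from any actual study.
true_positives = 38     # screened positive, later diagnosed with ASD
false_negatives = 62    # screened negative despite a later ASD diagnosis
true_negatives = 9_850  # screened negative, no ASD diagnosis
false_positives = 50    # screened positive, no ASD diagnosis

# Sensitivity: the share of children with ASD that the screen catches.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: the share of children without ASD that the screen clears.
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}")  # 0.38 -> many false negatives
print(f"specificity = {specificity:.2f}")  # 0.99 -> few false positives
```
On these assumed numbers the screen misses most affected children (low sensitivity) while rarely flagging unaffected ones (high specificity), which is the pattern the studies above describe.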
Misdiagnosis
There is a significant level of misdiagnosis of autism in neurodevelopmentally typical children; 18–37% of children diagnosed with ASD eventually lose their diagnosis. This high rate of lost diagnoses cannot be accounted for by successful ASD treatment alone. The most common reason parents reported as the cause of a lost ASD diagnosis was new information about the child (73.5%), such as a replacement diagnosis. Other reasons included a diagnosis given so the child could receive ASD treatment (24.2%), ASD treatment success or maturation (21%), and parents disagreeing with the initial diagnosis (1.9%).
Many of the children who were later found not to meet the ASD diagnostic criteria then received a diagnosis of another developmental disorder. Most common was ADHD, but other diagnoses included sensory disorders, anxiety, personality disorder, or learning disability. Neurodevelopmental and psychiatric disorders that are commonly misdiagnosed as ASD include specific language impairment, social communication disorder, anxiety disorder, reactive attachment disorder, cognitive impairment, visual impairment, hearing loss, and normal behavioral variation. Some behavioral variations that resemble autistic traits are repetitive behaviors, sensitivity to change in daily routines, focused interests, and toe-walking. These are considered normal behavioral variations when they do not cause impaired function. Boys are more likely to exhibit repetitive behaviors, especially when excited, tired, bored, or stressed. Some ways of distinguishing typical behavioral variations from autistic behaviors are the ability of the child to suppress these behaviors and the absence of these behaviors during sleep.
Comorbidity
ASDs tend to be highly comorbid with other disorders. Comorbidity may increase with age and may worsen the course of youth with ASDs and make intervention and treatment more difficult. Distinguishing between ASDs and other diagnoses can be challenging because the traits of ASDs often overlap with symptoms of other disorders, and the characteristics of ASDs make traditional diagnostic procedures difficult.
The most common medical condition occurring in individuals with ASDs is seizure disorder or epilepsy, which occurs in 11–39% of autistic individuals. The risk varies with age, cognitive level, and type of language disorder.
Tuberous sclerosis, an autosomal dominant genetic condition in which non-malignant tumors grow in the brain and on other vital organs, is present in 1–4% of individuals with ASDs.
Intellectual disabilities are some of the most common comorbid disorders with ASDs. Recent estimates suggest that 40–69% of autistic individuals have some degree of an intellectual disability, which is more likely to be severe in females. A number of genetic syndromes causing intellectual disability may also be comorbid with ASD, including fragile X, Down, Prader-Willi, Angelman, and Williams syndromes, and SYNGAP1-related intellectual disability.
Learning disabilities are also highly comorbid in individuals with an ASD. Approximately 25–75% of individuals with an ASD also have some degree of a learning disability.
Various anxiety disorders tend to co-occur with ASDs and are common among children with ASD; there are no firm data, but studies have reported overall comorbidity rates ranging from 7% to 84%. Many anxiety disorders have symptoms that are better explained by ASD itself, or are hard to distinguish from ASD's symptoms.
Rates of comorbid depression in individuals with an ASD range from 4% to 58%.
The relationship between ASD and schizophrenia remains a controversial subject under continued investigation, and recent meta-analyses have examined genetic, environmental, infectious, and immune risk factors that may be shared between the two conditions. Oxidative stress, DNA damage and DNA repair have been postulated to play a role in the aetiopathology of both ASD and schizophrenia.
Deficits in ASD are often linked to behavior problems, such as difficulties following directions, being cooperative, and doing things on other people's terms. Symptoms similar to those of attention deficit hyperactivity disorder (ADHD) can be part of an ASD diagnosis.
Sensory processing disorder is also comorbid with ASD, with comorbidity rates of 42–88%.
Starting in adolescence, some people with Asperger syndrome (26% in one sample) fall under the criteria for the similar condition schizoid personality disorder, which is characterized by a lack of interest in social relationships, a tendency towards a solitary or sheltered lifestyle, secretiveness, emotional coldness, detachment and apathy. Asperger syndrome was traditionally called "schizoid disorder of childhood."
Genetic disorders: about 10–15% of autism cases have an identifiable Mendelian (single-gene) condition, chromosome abnormality, or other genetic syndrome.
Several metabolic defects, such as phenylketonuria, are associated with autistic symptoms.
Sleep problems affect about two-thirds of individuals with ASD at some point in childhood. These most commonly include symptoms of insomnia such as difficulty in falling asleep, frequent nocturnal awakenings, and early morning awakenings. Sleep problems are associated with difficult behaviors and family stress, and are often a focus of clinical attention over and above the primary ASD diagnosis.
Management
There is no treatment as such for autism, and many sources advise that this is not an appropriate goal, although treatment of co-occurring conditions remains an important goal. There is no known cure for autism, nor can any known treatment significantly change the underlying brain differences associated with autism, although those who require little-to-no support are more likely to experience a lessening of symptoms over time. Several interventions can help children with autism, and no single treatment is best, with treatment typically tailored to the child's needs. Studies of interventions have methodological problems that prevent definitive conclusions about efficacy; however, the development of evidence-based interventions has advanced.
The main goals of treatment are to lessen associated deficits and family distress, and to increase quality of life and functional independence. In general, higher IQs are correlated with greater responsiveness to treatment and improved treatment outcomes. Behavioral, psychological, educational, and/or skill-building interventions may be used to assist autistic people to learn life skills necessary for living independently, as well as other social, communication, and language skills. Therapy also aims to reduce challenging behaviors and build upon strengths.
Intensive, sustained special education programs and behavior therapy early in life can help children acquire self-care, communication, and job skills. Although evidence-based interventions for autistic children vary in their methods, many adopt a psychoeducational approach to enhancing cognitive, communication, and social skills while minimizing problem behaviors. While medications have not been found to help with core symptoms, they may be used for associated symptoms, such as irritability, inattention, or repetitive behavior patterns.
Non-pharmacological interventions
Intensive, sustained special education or remedial education programs and behavior therapy early in life can help children acquire self-care, social, and job skills. Available approaches include applied behavior analysis, developmental models, structured teaching, speech and language therapy, social skills therapy, and occupational therapy. Among these approaches, interventions either treat autistic features comprehensively or focus treatment on a specific area of deficit. Generally, when educating those with autism, specific tactics may be used to effectively relay information to these individuals. Using as much social interaction as possible is key in targeting the inhibition autistic individuals experience concerning person-to-person contact. Additionally, research has shown that employing semantic groupings, which involves assigning words to typical conceptual categories, can be beneficial in fostering learning.
There has been increasing attention to the development of evidence-based interventions for autistic young children. Two theoretical frameworks outlined for early childhood intervention are applied behavior analysis (ABA) and the developmental social-pragmatic model (DSP). Although ABA therapy has a strong evidence base, particularly in regard to early intensive home-based therapy, ABA's effectiveness may be limited by the diagnostic severity and IQ of the person affected by ASD. The Journal of Clinical Child and Adolescent Psychology has deemed two early childhood interventions "well-established": individual comprehensive ABA, and focused teacher-implemented ABA combined with DSP.
Another evidence-based intervention that has demonstrated efficacy is a parent training model, which teaches parents how to implement various ABA and DSP techniques themselves. Various DSP programs have been developed to explicitly deliver intervention systems through at-home parent implementation.
In October 2015, the American Academy of Pediatrics (AAP) proposed new evidence-based recommendations for early interventions in ASD for children under 3. These recommendations emphasize early involvement with both developmental and behavioral methods, support by and for parents and caregivers, and a focus on both the core and associated symptoms of ASD. However, a Cochrane review found no evidence that early intensive behavioral intervention (EIBI) is effective in reducing behavioral problems associated with autism in most autistic children, though it did help improve IQ and language skills. The Cochrane review acknowledged that this may be due to the low quality of studies currently available on EIBI, and therefore providers should recommend EIBI based on their clinical judgement and the family's preferences. No adverse effects of EIBI treatment were found. Studies on pet therapy have shown positive effects.
Generally speaking, treatment of ASD focuses on behavioral and educational interventions to target its two core symptoms: social communication deficits and restricted, repetitive behaviors. If symptoms continue after behavioral strategies have been implemented, some medications can be recommended to target specific symptoms or co-existing problems such as restricted and repetitive behaviors (RRBs), anxiety, depression, hyperactivity/inattention, and sleep disturbance. Melatonin, for example, can be used for sleep problems. While there are a number of parent-mediated behavioral therapies to target social communication deficits in children with autism, there is uncertainty regarding the efficacy of interventions to treat RRBs.
Education
Educational interventions often used include applied behavior analysis (ABA), developmental models, structured teaching, speech and language therapy, and social skills therapy. Among these approaches, interventions either treat autistic features comprehensively or focus treatment on a specific area of deficit. The quality of research for early intensive behavioral intervention (EIBI)—a treatment procedure incorporating over thirty hours per week of the structured type of ABA that is carried out with very young children—is currently low, and more rigorous research designs with larger sample sizes are needed. Two theoretical frameworks outlined for early childhood intervention include structured and naturalistic ABA interventions, and developmental social pragmatic models (DSP). One interventional strategy utilizes a parent training model, which teaches parents how to implement various ABA and DSP techniques, allowing parents to deliver interventions themselves. Various DSP programs have been developed to explicitly deliver intervention systems through at-home parent implementation. Although parent training models are a recent development, these interventions have demonstrated effectiveness in numerous studies and have been evaluated as a probably efficacious mode of treatment. Early, intensive ABA therapy has demonstrated effectiveness in enhancing communication and adaptive functioning in preschool children; it is also well-established for improving the intellectual performance of that age group.
Similarly, a teacher-implemented intervention that utilizes a more naturalistic form of ABA combined with a developmental social pragmatic approach has been found to be beneficial in improving social-communication skills in young children, although there is less evidence for its treatment of global symptoms. Neuropsychological reports are often poorly communicated to educators, resulting in a gap between what a report recommends and what education is provided. The appropriateness of including children with varying severity of autism spectrum disorders in the general education population is a subject of current debate among educators and researchers.
Pharmacological interventions
Medications may be used to treat ASD symptoms that interfere with integrating a child into home or school when behavioral treatment fails. They may also be used for associated health problems, such as ADHD or anxiety. However, their routine prescription for the core features of ASD is not recommended. More than half of US children diagnosed with ASD are prescribed psychoactive drugs or anticonvulsants, with the most common drug classes being antidepressants, stimulants, and antipsychotics. The atypical antipsychotic drugs risperidone and aripiprazole are FDA-approved for treating associated aggressive and self-injurious behaviors. However, their side effects must be weighed against their potential benefits, and autistic people may respond atypically. Side effects may include weight gain, tiredness, drooling, and aggression. There are some emerging data showing positive effects of aripiprazole and risperidone on restricted and repetitive behaviors (i.e., stimming; e.g., flapping, twisting, complex whole-body movements), but due to the small sample sizes and differing focus of these studies, and concerns about their side effects, antipsychotics are not recommended as a primary treatment of RRBs. SSRI antidepressants, such as fluoxetine and fluvoxamine, have been shown to be effective in reducing repetitive and ritualistic behaviors, while the stimulant medication methylphenidate is beneficial for some children with co-morbid inattentiveness or hyperactivity. There is scant reliable research about the effectiveness or safety of drug treatments for adolescents and adults with ASD. No known medication relieves autism's core symptoms of social and communication impairments.
Alternative medicine
A multitude of alternative therapies have also been researched and implemented, and many have resulted in harm to autistic people. A 2020 systematic review on adults with autism found emerging evidence that mindfulness-based interventions improve mental health, including decreasing stress, anxiety, ruminating thoughts, anger, and aggression.
Although popularly used as an alternative treatment for autistic people, as of 2018 there is no good evidence to recommend a gluten- and casein-free diet as a standard treatment. A 2018 review concluded that it may be a therapeutic option for specific groups of children with autism, such as those with known food intolerances or allergies, or with food intolerance markers. The authors analyzed the prospective trials conducted to date that studied the efficacy of the gluten- and casein-free diet in children with ASD (4 in total). All of them compared a gluten- and casein-free diet versus a normal diet with a control group (2 double-blind randomized controlled trials, 1 double-blind crossover trial, 1 single-blind trial). In two of the studies, whose duration was 12 and 24 months, a significant improvement in ASD symptoms (efficacy rate 50%) was identified. In the other two studies, whose duration was 3 months, no significant effect was observed. The authors concluded that a longer duration of the diet may be necessary to achieve improvement of ASD symptoms. Other problems documented in the trials include transgressions of the diet, small sample sizes, the heterogeneity of the participants, and the possibility of a placebo effect. In the subset of people who have gluten sensitivity, there is limited evidence suggesting that a gluten-free diet may improve some autistic behaviors.
The preference that autistic children have for unconventional foods can lead to a reduction in bone cortical thickness, a risk that is greater in those on casein-free diets as a consequence of low intake of calcium and vitamin D; however, suboptimal bone development in ASD has also been associated with lack of exercise and gastrointestinal disorders. In 2005, botched chelation therapy killed a five-year-old child with autism. Chelation is not recommended for autistic people since the associated risks outweigh any potential benefits. Another alternative medicine practice with no evidence behind it is CEASE therapy, a pseudoscientific mixture of homeopathy, supplements, and vaccine detoxing.
An updated Cochrane review (2022) found evidence that music therapy likely improves social interactions, verbal communication, and non-verbal communication skills. There has been early research looking at hyperbaric treatments in children with autism.
Prevention
While infection with rubella during pregnancy causes fewer than 1% of cases of autism, vaccination against rubella can prevent many of those cases.
Outcomes
There is no known cure for autism. The degree of symptoms can decrease, occasionally to the extent that people lose their diagnosis of ASD; this occurs sometimes after intensive treatment and sometimes not. It is not known how often this outcome happens, with reported rates in unselected samples ranging from 3% to 25%. Although core difficulties tend to persist, symptoms often become less severe with age. Acquiring language before age six, having an IQ above 50, and having a marketable skill all predict better outcomes; independent living is unlikely with severe autism.
Many autistic people face significant obstacles in transitioning to adulthood. Compared to the general population, autistic people are more likely to be unemployed and to have never had a job. About half of people in their 20s with autism are not employed. Some autistic adults are unable to live independently.
Academic performance
The number of students identified and served as eligible for autism services in the United States increased from 5,413 children in 1991–1992 to 370,011 children in the 2010–2011 academic year. The United States Department of Health and Human Services reported that approximately 1 in 68 children are diagnosed with ASD at age 8, although onset is typically between ages 2 and 4.
The increasing number of students diagnosed with ASD in the schools presents significant challenges to teachers, school psychologists, and other school professionals. These challenges include developing consistent practices that best support the social and cognitive development of the increasing number of autistic students. Although there is considerable research addressing assessment, identification, and support services for autistic children, there is a need for further research focused on these topics within the school context. Further research on appropriate support services for students with ASD will provide school psychologists and other education professionals with specific directions for advocacy and service delivery that aim to enhance school outcomes for students with ASD.
Attempts to identify and use best intervention practices for students with autism also pose a challenge, due to overdependence on popular or well-known interventions and curricula. Some evidence suggests that although these interventions work for some students, there remains a lack of specificity as to which type of student, under what environmental conditions (one-on-one, specialized instruction, or general education), and for which targeted deficits they work best. More research is needed to identify what assessment methods are most effective for identifying the level of educational needs for students with ASD. Additionally, children living in higher-resource settings in the United States tend to receive earlier ASD interventions than children in lower-resource settings (e.g. rural areas).
A difficulty for academic performance in students with ASD is the tendency to generalize learning. Learning is different for each student, which is equally true for students with ASD. To assist in learning, accommodations are commonly put into place for students with differing abilities. The existing schema of these students works in different ways and can be adjusted to best support the educational development of each student.
The cost of educating a student with ASD in the US is about $8,600 a year more than the cost of educating an average student, which is about $12,000 (i.e., roughly $20,600 per year in total).
Though much of the focus on early childhood intervention for ASD has centered on high-income countries like the United States, some of the most significant unmet needs for autistic individuals are in low- and middle-income countries. In these contexts, research has been more limited, but there is evidence to suggest that some comprehensive care plans can be successfully delivered by non-specialists in schools and in the community.
Employment
In the United States, about half of people in their 20s with autism are unemployed, and one third of those with graduate degrees may be unemployed. While employers state hiring concerns about productivity and supervision, experienced employers of autistic people give positive reports of above-average memory and detail orientation, as well as a high regard for rules and procedure, in autistic employees. The majority of the economic burden of autism is caused by lost productivity in the job market. From the perspective of the social model of disability, much of this unemployment is caused by a lack of understanding from employers and coworkers. Adding content related to autism to existing diversity training can clarify misconceptions, support employees, and help provide new opportunities for autistic people. As of 2021, the potential for new autism employment initiatives by major employers in the United States continues to grow. The most high-profile autism initiative in the United States, "Autism at Work", grew to 20 of the largest companies in the United States. However, special hiring programs remain largely limited to entry-level technology positions, such as software testing, and exclude those who have talents outside of technology. An alternative approach is systemic neurodiversity inclusion: developing organizational systems with enough flexibility and fairness to include autistic employees improves the work experience of all employees.
Epidemiology
The World Health Organization (WHO) estimates that about 1 in 100 children have autism. The number of people diagnosed has increased considerably since the 1990s, which may be partly due to increased recognition of the condition.
While rates of ASD are consistent across cultures, they vary greatly by gender, with boys diagnosed far more frequently than girls: 1 in 70 boys, but only 1 in 315 girls, at eight years of age. Girls, however, are more likely to have associated cognitive impairment, suggesting that less severe forms of ASD are likely being missed in girls and women. Prevalence differences may be a result of gender differences in the expression of clinical symptoms, with autistic women and girls showing fewer atypical behaviors and, therefore, being less likely to receive an ASD diagnosis.
Using DSM-5 criteria, 92% of the children diagnosed per DSM-IV with one of the disorders now considered part of ASD will still meet the diagnostic criteria of ASD. However, if both the ASD and social (pragmatic) communication disorder categories of DSM-5 are combined, the prevalence of autism is mostly unchanged from the prevalence under the DSM-IV criteria. The best estimate for the prevalence of ASD is 0.7%, or 1 child in 143. Relatively mild forms of autism, such as Asperger's, as well as other developmental disorders, are included in the DSM-5 diagnostic criteria. ASD rates were constant between 2014 and 2016 but were twice as high as in the period between 2011 and 2014 (2.47% vs 1.25%). A Canadian meta-analysis from 2019 confirmed these effects, as the profiles of people diagnosed with autism became less and less different from the profiles of the general population. In the US, the rates of diagnosed ASD have been steadily increasing since 2000, when records began being kept. While it remains unclear whether this trend represents a true rise in incidence, it likely reflects changes in ASD diagnostic criteria, improved detection, and increased public awareness of autism. In 2012, the NHS estimated that the overall prevalence of autism among adults aged 18 years and over in the UK was 1.1%.
A 2016 survey in the United States reported a rate of 25 per 1,000 children for ASD. Rates of autism are poorly understood in many low- and middle-income countries, which affects the accuracy of global ASD prevalence estimates, but it is thought that most autistic individuals live in low- and middle-income countries.
In the UK, from 1998 to 2018, autism diagnoses increased by 787%. This increase is largely attributable to changes in diagnostic practices, referral patterns, availability of services, age at diagnosis, and public awareness (particularly among women), though unidentified environmental risk factors cannot be ruled out. The available evidence does not rule out the possibility that autism's true prevalence has increased; a real increase would suggest directing more attention and funding toward psychosocial and changing environmental factors instead of continuing to focus on genetics. It has been established that vaccination is not a risk factor for autism and is not a cause of any increase in autism prevalence rates, if any change in the rate of autism exists at all.
Males are more likely to be diagnosed with ASD than females. The sex ratio averages 4.3:1 and is greatly modified by cognitive impairment: it may be close to 2:1 with intellectual disability and more than 5.5:1 without.
Several theories about the higher prevalence in males have been investigated, but the cause of the difference is unconfirmed; one theory is that females are underdiagnosed.
The risk of developing autism is greater with older fathers than with older mothers; two potential explanations are the known increase in mutation burden in older sperm, and the hypothesis that men marry later if they carry genetic liability and show some signs of autism. Most professionals believe that race, ethnicity, and socioeconomic background do not affect the occurrence of autism.
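As a purely arithmetic restatement of the sex ratios quoted above (the ratios are from the text; the percentages are derived, not additional data), a ratio of m:1 corresponds to a male share of m/(m+1) of diagnoses:
\[ \frac{4.3}{4.3+1} \approx 81\%, \qquad \frac{2}{2+1} \approx 67\% \text{ (with intellectual disability)}, \qquad \frac{5.5}{5.5+1} \approx 85\% \text{ (without).} \]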
United States
In the United States it is estimated to affect more than 2% of children (about 1.5 million) as of 2016. According to the latest CDC prevalence reports, 1 in 44 children (2.3%) in the United States had a diagnosis of ASD in 2018. Prevalence is estimated at 6 per 1,000 for ASDs as a whole.
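For clarity, the prevalence figures in this section are stated in three different notations ("1 in N", "per 1,000", and percent); as a quick arithmetic check, the quoted values interconvert as follows:
\[ \frac{1}{44} \approx 2.3\%, \qquad \frac{25}{1{,}000} = 2.5\%, \qquad 0.7\% \approx \frac{1}{143}. \]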
History
A few examples of autistic symptoms and treatments were described long before autism was named. The Table Talk of Martin Luther, compiled by his notetaker Mathesius, contains the story of a 12-year-old boy who may have been severely autistic. The earliest well-documented case of autism is that of Hugh Blair of Borgue, as detailed in a 1747 court case in which his brother successfully petitioned to annul Blair's marriage to gain Blair's inheritance.
The Wild Boy of Aveyron, a feral child found in 1798, showed several signs of autism. He was non-verbal during his teenage years, and his case attracted wide public attention at the time. Such cases brought awareness to autism, and more research was conducted on the natural dimensions of human behavior. The medical student Jean Itard treated him with a behavioral program designed to help him form social attachments and to induce speech via imitation.
The New Latin word autismus (English translation autism) was coined by the Swiss psychiatrist Eugen Bleuler in 1910 as he was defining symptoms of schizophrenia. He derived it from the Greek word αὐτός (autós, "self") and used it to mean morbid self-admiration, referring to "autistic withdrawal of the patient to his fantasies, against which any influence from outside becomes an intolerable disturbance". A Soviet child psychiatrist, Grunya Sukhareva, described a similar syndrome in Russian in 1925, and in German in 1926.
Clinical development and diagnoses
Autism as it is known today can be traced back to the late 1930s, when two psychiatrists working separately, Hans Asperger of the Vienna University Hospital and Leo Kanner of the Johns Hopkins Hospital, used the word autism to describe the patients they were studying in their own clinical research. The word autism first took its modern sense in German, when Asperger adopted Bleuler's terminology "autistic psychopaths" in a 1938 lecture in German about child psychology. Asperger was investigating an ASD later known as Asperger syndrome, although it did not become widely recognized as a separate diagnosis until 1981. In English, Kanner first used autism in its modern sense when he introduced the label early infantile autism in a 1943 report of 11 children with striking behavioral similarities. Almost all the characteristics described in Kanner's first paper on the subject, notably "autistic aloneness" and "insistence on sameness", are still regarded as typical of the autistic spectrum of disorders. It is not known whether Kanner derived the term independently of Asperger.
Kanner's reuse of autism led to decades of confused terminology, such as infantile schizophrenia, and child psychiatry's focus on maternal deprivation led to misconceptions of autism as an infant's response to "refrigerator mothers". Starting in the late 1960s, autism was established as a separate syndrome. Kanner had, however, been the first to describe ASD as a neurodevelopmental disorder, calling it infantile autism in 1943, and he therefore rejected the refrigerator mother theory.
The discussion of autism prior to the twentieth century remains controversial. Because researchers were unable to reach a consensus on the condition's varying forms, little research was conducted on the disorder, and discussing the syndrome and its complexity frustrated researchers.
Controversies have surrounded various claims regarding the etiology of ASDs. In the 1950s, the refrigerator mother theory emerged as an explanation for autism. The hypothesis was based on the idea that autistic behaviors stem from the emotional frigidity, lack of warmth, and cold, distant, rejecting demeanor of a child's mother. Parents of children with an ASD experienced blame, guilt, and self-doubt, especially as the theory was embraced by the medical establishment and went largely unchallenged into the mid-1960s. The "refrigerator mother" theory has since been repeatedly refuted in the scientific literature, including by a 2015 systematic review which showed no association between caregiver interaction and language outcomes in ASD. Another controversial claim suggested that watching extensive amounts of television may cause autism. This hypothesis was largely based on research suggesting that the increasing rates of autism in the 1970s and 1980s were linked to the growth of cable television at this time.
Terminology and distinction from schizophrenia
As late as the mid-1970s, there was little evidence of a genetic role in autism; by 2007, however, it was recognized as one of the most heritable psychiatric conditions. Although the rise of parent organizations and the destigmatization of childhood ASD have affected how ASD is viewed, parents continue to feel social stigma in situations where their child's autistic behavior is perceived negatively, and many primary care physicians and medical specialists express beliefs consistent with outdated autism research.
It took until 1980 for the DSM-III to differentiate autism from childhood schizophrenia. In 1987, the DSM-III-R provided a checklist for diagnosing autism. In May 2013, the DSM-5 was released, updating the classification for pervasive developmental disorders. The former grouping of disorders, including PDD-NOS, autism, Asperger syndrome, Rett syndrome, and CDD, was removed and replaced with the general term autism spectrum disorders. The two defining categories are impaired social communication and/or interaction, and restricted and/or repetitive behaviors.
The Internet has helped autistic individuals bypass nonverbal cues and emotional sharing that they find difficult to deal with, and has given them a way to form online communities and work remotely. Societal and cultural aspects of autism have developed: some in the community seek a cure, while others believe that autism is simply another way of being.
Society and culture
An autistic culture has emerged, accompanied by the autistic rights and neurodiversity movements, which argue that autism should be accepted as a difference to be accommodated instead of cured, although a minority of autistic individuals do continue seeking a cure. Worldwide, events related to autism include World Autism Awareness Day, Autism Sunday, Autistic Pride Day, Autreat, and others. Social-science scholars study those with autism in hopes of learning more about "autism as a culture, transcultural comparisons ... and research on social movements." Many autistic individuals have been successful in their fields.
Neurodiversity movement
The neurodiversity movement and the autism rights movement are social movements within the context of disability rights, emphasizing the concept of neurodiversity, which describes the autism spectrum as a result of natural variations in the human brain rather than a disorder to be cured. The autism rights movement advocates for greater acceptance of autistic behaviors; for therapies that focus on coping skills rather than on imitating the behaviors of those without autism; and for the recognition of the autistic community as a minority group. Autism rights or neurodiversity advocates believe that the autism spectrum is genetic and should be accepted as a natural expression of the human genome. These movements are not without criticism; for example, a common argument made against neurodiversity activists is that the majority of them are high-functioning, have Asperger syndrome, or are self-diagnosed, and do not represent the views of low-functioning autistic people.
The concept of neurodiversity is contentious within various autism advocacy and research groups and has led to infighting.
Caregivers
Families who care for an autistic child face added stress from a number of different causes. Parents may struggle to understand the diagnosis and to find appropriate care options. Parents often take a negative view of the diagnosis and may struggle emotionally. More than half of parents over the age of 50 are still living with their child, as about 85% of autistic people have difficulty living independently. Some studies also find decreased earnings among parents who care for autistic children.
Broader autism phenotype
The broader autism phenotype (BAP) describes individuals who may not have ASD but do have autistic traits, such as avoiding eye contact and stimming.
See also
Autism spectrum disorders in the media
List of films about autism
Social narrative
References
Sources
"Neurodevelopmental Disorders". Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR). Washington, DC: American Psychiatric Association. 18 March 2022. ISBN 9780890425770. LCCN 2021051782.
"6A02 Autism spectrum disorder". International Classification of Diseases 11th Revision (ICD-11). World Health Organisation. February 2022 [adopted in 2019]. 6A02. Retrieved 14 May 2022.
Further reading
Gabovitch, Elaine; Dutra, Courtney; Lauer, Emily. (2016). The Healthy People 2020 Roadmap for Massachusetts Children & Youth with ASD/DD: Understanding Needs and Measuring Outcomes (Report). Worcester: UMASS Chan Medical School. Retrieved 30 June 2022.
Matson JL, Dempsey T (2008). "Stereotypy in Adults with Autism Spectrum Disorders: Relationship and Diagnostic Fidelity". Journal of Developmental and Physical Disabilities. 20 (2): 155–165. doi:10.1007/s10882-007-9086-0. S2CID 143874013.
Johnny L Matson; Michael L Matson; Tessa T Rivet (September 2007). "Social-skills treatments for children with autism spectrum disorders: an overview". Behavior Modification. 31 (5): 682–707. doi:10.1177/0145445507301650. ISSN 0145-4455. PMID 17699124. Wikidata Q28240738.
Johnny L Matson; Mary Shoemaker (14 July 2009). "Intellectual disability and its relationship to autism spectrum disorders". Research in Developmental Disabilities. 30 (6): 1107–1114. doi:10.1016/J.RIDD.2009.06.003. ISSN 0891-4222. PMID 19604668. Wikidata Q37552242. eISSN 1873-3379
Pedersen AL, Pettygrove S, Lu Z, Andrews J, Meaney FJ, Kurzius-Spencer M, et al. (August 2017). "DSM Criteria that Best Differentiate Intellectual Disability from Autism Spectrum Disorder" </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Babesiosis and can you explain it in depth? [/INST] Babesiosis or piroplasmosis is a malaria-like parasitic disease caused by infection with a eukaryotic parasite in the order Piroplasmida, typically a Babesia or Theileria, in the phylum Apicomplexa. Human babesiosis transmission via tick bite is most common in the Northeastern and Midwestern United States and parts of Europe, and sporadic throughout the rest of the world. Transmission occurs mainly in warm weather. People can become infected with Babesia parasites through the bite of an infected tick, by receiving a blood transfusion from an infected blood-product donor, or by congenital transmission (from an infected mother to her baby).
The ticks that transmit human babesiosis also carry other pathogens, so the disease often presents alongside other tick-borne illnesses such as Lyme disease. After trypanosomes, Babesia is thought to be the second-most common blood parasite of mammals. Babesia species can have major adverse effects on the health of domestic animals in areas without severe winters. In cattle the disease is known as Texas cattle fever or redwater.
Signs and symptoms
Half of all children and a quarter of previously healthy adults with Babesia infection are asymptomatic. When people do develop symptoms, the most common are fever and hemolytic anemia, symptoms that are similar to those of malaria. People with symptoms usually become ill 1 to 4 weeks after the bite, or 1 to 9 weeks after transfusion of contaminated blood products. A person infected with babesiosis gradually develops malaise and fatigue, followed by a fever. Hemolytic anemia, in which red blood cells are destroyed and removed from the blood, also develops. Chills, sweats, and thrombocytopenia are also common symptoms. Symptoms may last from several days to several months.
Less common symptoms and physical examination findings may also occur in mild-to-moderate babesiosis.
In more severe cases, symptoms similar to malaria occur, with fevers up to 40.5 °C (105 °F), shaking chills, and severe anemia (hemolytic anemia). Organ failure may follow, including adult respiratory distress syndrome. Sepsis in people who have had a splenectomy can occur rapidly, consistent with overwhelming post-splenectomy infection. Severe cases are also more likely to occur in the very young, the very old, and persons with immunodeficiency, such as HIV/AIDS patients.
A reported increase in human babesiosis diagnoses in the 2000s is thought to be caused by more widespread testing and by higher numbers of people with immunodeficiencies coming in contact with ticks, the disease vector. Little is known about the occurrence of Babesia species in malaria-endemic areas, where Babesia can easily be misdiagnosed as Plasmodium. Human patients with repeat babesiosis infection may exhibit premunity.
Cause
Babesia species are in the phylum Apicomplexa, which also contains the protozoan parasites that cause malaria, toxoplasmosis, and cryptosporidiosis. Four clades of Babesia species infect humans. The main species in each clade are:
B. microti (<3 µm)
B. duncani
B. divergens (cattle parasite seen mostly in Europe) and B. venatorum (roe deer parasite, formerly called EU1), most closely related to the large Babesia clade
Large Babesia (>3 µm) mostly infects ungulates, but also includes K01 strain (an isolated case observed in South Korea, see isolated cases)
Pathophysiology
Babesia parasites reproduce in red blood cells, where they can be seen as cross-shaped inclusions (four merozoites budding asexually but remaining attached, forming a structure resembling a "Maltese cross") and cause hemolytic anemia, quite similar to malaria.
Unlike the Plasmodium parasites that cause malaria, Babesia species lack an exoerythrocytic phase, so the liver is usually not affected.
In nonhuman animals, Babesia canis rossi, Babesia bigemina, and Babesia bovis cause particularly severe forms of the disease, including a severe haemolytic anaemia, with a positive erythrocyte-in-saline-agglutination test indicating an immune-mediated component to the haemolysis. Common sequelae include haemoglobinuria ("red-water"), disseminated intravascular coagulation, and "cerebral babesiosis" caused by sludging of erythrocytes in cerebral capillaries.
In bovine species, the organism causes hemolytic anemia, so an infected animal shows pale mucous membranes initially. As the level of bilirubin (a byproduct of red blood cell lysis) continues to increase, the visible mucous membranes become yellow in color (icterus) due to the failure of the liver to metabolize the excess bilirubin. Hemoglobinuria is seen due to excretion of red-blood-cell lysis byproducts via the kidneys. Fever of 40.5 °C (105 °F) develops due to release of inflammatory byproducts.
Diagnosis
Only specialized laboratories can adequately diagnose Babesia infection in humans, so Babesia infections are considered highly under-reported. Babesiosis develops in patients who live in or travel to an endemic area or receive a contaminated blood transfusion within the preceding 9 weeks, so this aspect of the medical history is vital. Babesiosis may be suspected when a person with such an exposure history develops persistent fevers and hemolytic anemia. The definitive diagnostic test is the identification of parasites on a Giemsa-stained thin-film blood smear.
So-called "Maltese cross formations" on the blood film are diagnostic (pathognomonic) of babesiosis, since they are not seen in malaria, the primary differential diagnosis. Careful examination of multiple smears may be necessary, since Babesia may infect less than 1% of circulating red blood cells and thus be easily overlooked.
Serologic testing for antibodies against Babesia (both IgG and IgM) can detect low-level infection in cases with a high clinical suspicion but negative blood-film examinations. Serology is also useful for differentiating babesiosis from malaria in cases where people are at risk for both infections. Since detectable antibody responses require about a week after infection to develop, serologic testing may be falsely negative early in the disease course.
A polymerase chain reaction (PCR) test has been developed for the detection of Babesia from peripheral blood. PCR may be at least as sensitive and specific as blood-film examination in diagnosing babesiosis, though it is also significantly more expensive. Most often, PCR testing is used in conjunction with blood-film examination and possibly serologic testing.
Other laboratory findings include decreased numbers of red blood cells and platelets on complete blood count.
In animals, babesiosis is suspected by observation of clinical signs (hemoglobinuria and anemia) in animals in endemic areas. Diagnosis is confirmed by observation of merozoites on a thin-film blood smear examined at maximum magnification under oil using Romanowsky stains (methylene blue and eosin). This is a routine part of the veterinary examination of dogs and ruminants in regions where babesiosis is endemic.
Babesia canis and B. bigemina are "large Babesia species" that form paired merozoites in the erythrocytes, commonly described as resembling "two pears hanging together" rather than the "Maltese cross" of the "small Babesia species". Their merozoites are around twice the size of small ones.
Cerebral babesiosis is suspected in vivo when neurological signs (often severe) are seen in cattle that are positive for B. bovis on blood smear, but this has yet to be proven scientifically. Pronounced red discoloration of the grey matter post mortem further strengthens suspicion of cerebral babesiosis. Diagnosis is confirmed post mortem by observation of Babesia-infected erythrocytes sludged in the cerebral cortical capillaries in a brain smear.
Treatment
Treatment of asymptomatic carriers should be considered if parasites are still detected after 3 months. In mild-to-moderate babesiosis, the treatment of choice is a combination of atovaquone and azithromycin. This regimen is preferred to clindamycin and quinine because it has fewer side effects. The standard course is 7 to 10 days, but this is extended to at least 6 weeks in people with relapsing disease. Treatment is recommended even in mild cases, to decrease the chance of inadvertently transmitting the infection by donating blood. In severe babesiosis, the combination of clindamycin and quinine is preferred. In life-threatening cases, exchange transfusion is performed, a procedure in which the infected red blood cells are removed and replaced with uninfected ones.
Imizol (imidocarb dipropionate) is a drug used for treatment of babesiosis in dogs.
Extracts of the poisonous, bulbous plant Boophone disticha are used in the folk medicine of South Africa to treat equine babesiosis. B. disticha is a member of the daffodil family Amaryllidaceae and has also been used in preparations employed as arrow poisons, hallucinogens, and in embalming. The plant is rich in alkaloids, some of which display an action similar to that of scopolamine.
Epidemiology
Babesiosis is a vector-borne illness usually transmitted by Ixodes scapularis ticks. B. microti uses the same tick vector as Lyme disease and may occur in conjunction with Lyme. The organism can also be transmitted by blood transfusion. Ticks of domestic animals, especially Rhipicephalus (Boophilus) microplus and R. (B.) decoloratus, transmit several species of Babesia to livestock, causing considerable economic losses to farmers in tropical and subtropical regions.
In the United States, the majority of babesiosis cases are caused by B. microti and occur in the Northeast and northern Midwest from May through October. Areas with especially high rates include eastern Long Island, Fire Island, Nantucket Island, and Martha's Vineyard.
The Centers for Disease Control and Prevention now requires state health departments to report infections using Form OMB No. 0920-0728. In 2014, Rhode Island had an incidence of 16.3 reported infections per 100,000 people.
In Europe, B. divergens is the primary cause of infectious babesiosis and is transmitted by I. ricinus.
Babesiosis has emerged in the Lower Hudson Valley, New York, since 2001.
In Australia, one locally acquired case of B. microti has been reported, which was fatal. A subsequent investigation found no additional evidence of human babesiosis in over 7,000 patient samples, leading the authors to conclude that babesiosis was rare in Australia. A similar disease in cattle, commonly known as tick fever, is spread by Babesia bovis and B. bigemina in the introduced cattle tick Rhipicephalus microplus. This disease is found in eastern and northern Australia.
Isolated cases
Isolated cases of babesiosis have been reported in many parts of the world; their number may be underestimated, given how widely distributed the tick vectors are in temperate latitudes.
History
The disease is named for the genus of the causative organism, which was itself named after the Romanian bacteriologist Victor Babeș. In 1888, Babeș identified the microorganisms in red blood cells as the cause of febrile hemoglobinuria in cattle. In 1893, Theobald Smith and Frederick Kilborne discovered that a tick was the vector for transmission in Texas cattle; the agent was B. bigemina. This was the first demonstration that an arthropod could act as a disease vector to transmit an infectious agent to a vertebrate host.
In 1957, the first human case was documented in a splenectomized Croatian herdsman; the agent was B. divergens. In 1969, the first case was reported in an immunocompetent individual on Nantucket Island; the agent was B. microti, and the vector was the tick I. scapularis. Equine babesiosis (caused by the protozoan Theileria equi) is also known as piroplasmosis (from the Latin pirum, meaning pear, + Greek plasma, a thing formed).
Other animals
Veterinary treatment of babesiosis does not normally use antibiotics. In livestock and other animals, diminazene (Berenil), imidocarb, or trypan blue are the drugs of choice for treatment of B. canis rossi (dogs in Africa), B. bovis, and B. bigemina (cattle in Southern Africa). In acute cases in cattle, blood transfusion may be carried out.
A vaccine is effective against B. canis canis (dogs in the Mediterranean region), but is ineffective against B. c. rossi. B. imitans causes a mild form of the disease that frequently resolves without treatment (dogs in Southeast Asia).
References
External links
Center for Global Health (2019-06-25). "Babesiosis". Parasites and Health, DPDx—Laboratory Identification of Parasites of Public Health Concern. Centers for Disease Control & Prevention. Archived from the original on 2013-03-07. Retrieved 2003-10-07. Public domain source from which the first version of this article was derived.
Krause, Peter J; et al. (27 January 2021). "Clinical Practice Guidelines by the Infectious Diseases Society of America (IDSA): 2020 Guideline on Diagnosis and Management of Babesiosis". Clinical Infectious Diseases. 72 (2): e49–e64 </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Induced coma and can you explain it in depth? [/INST] An induced coma – also known as a medically induced coma (MIC), barbiturate-induced coma, or drug-induced coma – is a temporary coma (a deep state of unconsciousness) brought on by a controlled dose of an anesthetic drug, often a barbiturate such as pentobarbital or thiopental. Barbiturate comas are used to protect the brain during major neurosurgery, as a last line of treatment in certain cases of status epilepticus that have not responded to other treatments, and in refractory intracranial hypertension following traumatic brain injury.
Induced coma usually results in significant systemic adverse effects. The patient is likely to completely lose respiratory drive and require mechanical ventilation; gut motility is reduced; hypotension can complicate efforts to maintain cerebral perfusion pressure and often requires the use of vasopressor drugs. Hypokalemia often results. The completely immobile patient is at increased risk of bed sores as well as infection from catheters.
Induced coma is a feature of the Milwaukee protocol, a controversial method that is promoted as a means of treating rabies infection in people.
Theory
Barbiturates reduce the metabolic rate of brain tissue, as well as the cerebral blood flow </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Barrett's esophagus and can you explain it in depth? [/INST] Barrett's esophagus is a condition in which there is an abnormal (metaplastic) change in the mucosal cells lining the lower portion of the esophagus, from stratified squamous epithelium to simple columnar epithelium with interspersed goblet cells that are normally present only in the small intestine and large intestine. This change is considered to be a premalignant condition because it is associated with a high incidence of further transition to esophageal adenocarcinoma, an often-deadly cancer.
The main cause of Barrett's esophagus is thought to be an adaptation to chronic acid exposure from reflux esophagitis. Barrett's esophagus is diagnosed by endoscopy: observing the characteristic appearance of this condition by direct inspection of the lower esophagus, followed by microscopic examination of tissue from the affected area obtained from biopsy. The cells of Barrett's esophagus are classified into four categories: nondysplastic, low-grade dysplasia, high-grade dysplasia, and frank carcinoma. High-grade dysplasia and early stages of adenocarcinoma may be treated by endoscopic resection or radiofrequency ablation. Later stages of adenocarcinoma may be treated with surgical resection or palliation. Those with nondysplastic or low-grade dysplasia are managed by annual observation with endoscopy, or treatment with radiofrequency ablation. In high-grade dysplasia, the risk of developing cancer may be 10% per patient-year or greater.
The incidence of esophageal adenocarcinoma has increased substantially in the Western world in recent years. The condition is found in 5–15% of patients who seek medical care for heartburn (gastroesophageal reflux disease, or GERD), although a large subgroup of patients with Barrett's esophagus are asymptomatic. The condition is named after surgeon Norman Barrett (1903–1979), even though it was originally described by Philip Rowland Allison in 1946.
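To illustrate what a risk quoted "per patient-year" implies, here is a rough calculation under the simplifying assumption of a constant 10% annual risk (an illustrative sketch, not a figure from the source): the cumulative risk over n years is \( 1 - 0.9^{n} \), so over five years
\[ 1 - 0.9^{5} \approx 0.41, \]
on the order of a 41% cumulative risk, if the annual risk really stayed fixed; in practice the risk varies between patients and over time.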
Signs and symptoms
The cellular change from normal to premalignant that defines Barrett's esophagus does not itself cause any particular symptoms. Barrett's esophagus is, however, associated with these symptoms:
frequent and longstanding heartburn
trouble swallowing (dysphagia)
vomiting blood (hematemesis)
pain under the sternum where the esophagus meets the stomach
pain when swallowing (odynophagia), which can lead to unintentional weight loss
The risk of developing Barrett's esophagus is increased by central obesity (vs. peripheral obesity). The exact mechanism is unclear. The difference in the distribution of fat among men (more central) and women (more peripheral) may explain the increased risk in males.
Pathophysiology
Barrett's esophagus occurs due to chronic inflammation. The principal cause of the chronic inflammation is gastroesophageal reflux disease, GERD (UK: GORD). In this disease, acidic stomach contents, bile, and small-intestinal and pancreatic secretions damage the cells of the lower esophagus. In turn, this confers an advantage on cells more resistant to these noxious stimuli, in particular HOXA13-expressing stem cells, which are characterized by distal (intestinal) characteristics and outcompete the normal squamous cells.
This mechanism also explains the selection of HER2/neu (also called ERBB2)-overexpressing (lineage-addicted) cancer cells during the process of carcinogenesis, and the efficacy of targeted therapy against the Her-2 receptor with trastuzumab (Herceptin) in the treatment of adenocarcinomas at the gastroesophageal junction.
Researchers are unable to predict who with heartburn will develop Barrett's esophagus. While no relationship exists between the severity of heartburn and the development of Barrett's esophagus, a relationship does exist between chronic heartburn and the development of Barrett's esophagus. Sometimes, people with Barrett's esophagus have no heartburn symptoms at all.
Some anecdotal evidence indicates those with the eating disorder bulimia are more likely to develop Barrett's esophagus because bulimia can cause severe acid reflux, and because purging also floods the esophagus with acid. However, a link between bulimia and Barrett's esophagus remains unproven.
During episodes of reflux, bile acids enter the esophagus, and this may be an important factor in carcinogenesis. Individuals with GERD and BE are exposed to high concentrations of deoxycholic acid, which has cytotoxic effects and can cause DNA damage.
Diagnosis
Both macroscopic (from endoscopy) and microscopic positive findings are required to make a diagnosis. Barrett's esophagus is marked by the presence of columnar epithelium in the lower esophagus, replacing the normal squamous cell epithelium, an example of metaplasia. The secretory columnar epithelium may be more able to withstand the erosive action of the gastric secretions; however, this metaplasia confers an increased risk of adenocarcinoma.
Screening
Screening endoscopy is recommended among males over the age of 60 who have reflux symptoms that are longstanding and not controllable with treatment. Among those not expected to live more than 5 years, screening is not recommended.
The Seattle protocol is commonly used in endoscopy to obtain endoscopic biopsies for screening, taken every 1 to 2 cm from the gastroesophageal junction.
Since the COVID-19 pandemic, the NHS in Scotland has used a swallowable sponge (Cytosponge) in hospitals to collect cell samples for diagnosis. Preliminary studies have shown this diagnostic test to be a useful tool for screening people with heartburn symptoms and for improving diagnosis.
Intestinal metaplasia
The presence of goblet cells, called intestinal metaplasia, is necessary to make a diagnosis of Barrett's esophagus. This frequently occurs in the presence of other metaplastic columnar cells, but only the presence of goblet cells is diagnostic. The metaplasia is grossly visible through a gastroscope, but biopsy specimens must be examined under a microscope to determine whether cells are gastric or colonic in nature. Colonic metaplasia is usually identified by finding goblet cells in the epithelium and is necessary for the true diagnosis.
Many histologic mimics of Barrett's esophagus are known (i.e., goblet cells occurring in the transitional epithelium of normal esophageal submucosal gland ducts, and "pseudogoblet cells" in which abundant foveolar [gastric] type mucin simulates the acid mucin of true goblet cells). Assessment of the relationship to submucosal glands and transitional-type epithelium, with examination of multiple levels through the tissue, may allow the pathologist to reliably distinguish between goblet cells of submucosal gland ducts and true Barrett's esophagus (specialized columnar metaplasia). The histochemical stain Alcian blue pH 2.5 is also frequently used to distinguish true intestinal-type mucins from their histologic mimics. Recently, immunohistochemical analysis with antibodies to CDX-2 (specific for mid- and hindgut intestinal derivation) has also been used to identify true intestinal-type metaplastic cells. The protein AGR2 is elevated in Barrett's esophagus and can be used as a biomarker for distinguishing Barrett epithelium from normal esophageal epithelium.
The presence of intestinal metaplasia in Barrett's esophagus represents a marker for the progression of metaplasia towards dysplasia and eventually adenocarcinoma. This factor, combined with two different patterns of immunohistochemical expression of p53, Her2, and p16, points to two different genetic pathways that likely progress to dysplasia in Barrett's esophagus. Intestinal metaplastic cells can also be positive for CK7+/CK20−.
Epithelial dysplasia
After the initial diagnosis of Barrett's esophagus is rendered, affected persons undergo annual surveillance to detect changes that indicate higher risk of progression to cancer: the development of epithelial dysplasia (or "intraepithelial neoplasia").
Among all metaplastic lesions, around 8% were associated with dysplasia. Notably, a recent study demonstrated that dysplastic lesions were located mainly in the posterior wall of the esophagus.
Considerable variability is seen in assessment for dysplasia among pathologists. Recently, gastroenterology and GI pathology societies have recommended that any diagnosis of high-grade dysplasia in Barrett's esophagus be confirmed by at least two fellowship-trained GI pathologists prior to definitive treatment. For more accuracy and reproducibility, it is also recommended to follow international classification systems, such as the "Vienna classification" of gastrointestinal epithelial neoplasia (2000).
Management
Many people with Barrett's esophagus do not have dysplasia. Medical societies recommend that if a patient has Barrett's esophagus, and if the past two endoscopy and biopsy examinations have confirmed the absence of dysplasia, then the patient should not have another endoscopy within three years.
Endoscopic surveillance of people with Barrett's esophagus is often recommended, although little direct evidence supports this practice. Treatment options for high-grade dysplasia include surgical removal of the esophagus (esophagectomy) or endoscopic treatments such as endoscopic mucosal resection or ablation (destruction).
The risk of malignancy is highest in the United States in Caucasian men over fifty years of age with more than five years of symptoms. Current recommendations include routine endoscopy and biopsy (looking for dysplastic changes). Although physicians have in the past taken a watchful-waiting approach, newly published research supports consideration of intervention for Barrett's esophagus. Balloon-based radiofrequency ablation, invented by Ganz, Stern, and Zelickson in 1999, is a newer treatment modality for Barrett's esophagus and dysplasia and has been the subject of numerous published clinical trials. The findings demonstrate that radiofrequency ablation is at least 90% effective in completely clearing Barrett's esophagus and dysplasia, with durability of up to five years and a favorable safety profile.
Anti-reflux surgery has not been proven to prevent esophageal cancer. However, there are indications that proton pump inhibitors are effective in limiting the progression of esophageal cancer. Laser treatment is used in severe dysplasia, while overt malignancy may require surgery, radiation therapy, or systemic chemotherapy. A recent five-year randomized controlled trial has shown that photodynamic therapy using photofrin is statistically more effective in eliminating dysplastic growth areas than sole use of a proton pump inhibitor.
There is presently no reliable way to determine which patients with Barrett's esophagus will go on to develop esophageal cancer, although a recent study found that the detection of three different genetic abnormalities was associated with as much as a 79% chance of developing cancer in six years.
Endoscopic mucosal resection has also been evaluated as a management technique </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Emotional and behavioral disorders and can you explain it in depth? [/INST] Emotional and behavioral disorders (EBD; also known as behavioral and emotional disorders) refer to a disability classification used in educational settings that allows educational institutions to provide special education and related services to students who have displayed poor social and/or academic progress.
The classification is often given to students after a Functional Behavior Analysis has been conducted. These students need individualized behavior supports, such as a Behavior Intervention Plan, to receive a free and appropriate public education. Students with EBD may be eligible for an Individualized Education Plan (IEP) and/or accommodations in the classroom through a 504 Plan.
History
Early history
Before any studies were done on the subject, mental illnesses were often thought to be a form of demonic possession or witchcraft. Since much was unknown, there was little to no distinction between the different types of mental illness and developmental disorders that we refer to today. Most often, they were dealt with by performing an exorcism on the person exhibiting signs of any mental illness. In the early to mid-1800s, asylums were introduced to America and Europe. There, patients were treated cruelly and often referred to as lunatics by the doctors of the time. The main focus of asylums was to shun people with mental illnesses from the public. In 1963, the Community Mental Health Centers Construction Act (Public Law 88-164) was passed by Congress and signed by John F. Kennedy; it provided federal funding to community mental health centers. This legislation changed the way that mental health services were handled and also led to the closure of many large asylums. Many laws assisting more and more people with EBDs soon followed. By 1978, Public Law 94-142 had taken effect, requiring free public education for all disabled children, including those with EBDs. An extension of PL 94-142, PL 99-457, was enacted, providing services to all disabled children aged 3 to 5 by the 1990–91 school year. PL 94-142 has since been renamed the Individuals with Disabilities Education Act (IDEA).
Use and development of the term
Various terms have been used to describe irregular emotional and behavioral disorders. Many terms, such as mental illness and psychopathology, were used to describe adults with such conditions. Mental illness was a label for most people with any type of disorder, and it was common for people with emotional and behavioral disorders to be labeled with a mental illness. However, those terms were avoided when describing children, as they seemed too stigmatizing. In the late 1900s, the term "behaviorally disordered" appeared. Some professionals in the field of special education accepted the term, while others felt it ignored emotional issues. To make the terminology more uniform, the National Mental Health and Special Education Coalition, which consists of over thirty professional and advocacy groups, coined the term "emotional and behavioral disorders" in 1988.
Criteria
According to the Individuals with Disabilities Education Act, an EBD classification is required if one or more of the following characteristics is observed to an excessive degree in a student over a significant period of time:
Learning challenges that cannot be explained by intellectual, sensory, or health factors.
Trouble building or maintaining satisfactory relationships with peers and teachers.
Inappropriate behavior (directed at self or others) or emotions (e.g., expressing a need to harm others or self, low self-worth) under normal conditions.
An overall attitude of unhappiness or depression.
A tendency to develop physical symptoms or fears associated with personal or school issues.
The term "EBD" includes students diagnosed with schizophrenia. However, it does not apply to students who are socially maladjusted unless they also meet the above criteria.
Criticisms
Providing or failing to provide an EBD classification to a student may be controversial, as the IDEA does not clarify which children would be considered "socially maladjusted". Students with a psychiatric diagnosis of conduct disorder are not guaranteed to receive additional educational services under an EBD classification. Students with an EBD classification who meet the diagnostic criteria for various disruptive behavior disorders, including attention-deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), or conduct disorder (CD), are not automatically eligible to receive an IEP or 504 Plan. Students considered "socially maladjusted" but ineligible for an EBD classification (i.e., students diagnosed with conduct disorder) often receive better educational services in special education classrooms or alternative schools with high structure, clear rules, and consistent consequences.
Student characteristics
Students with EBD are a diverse population with a wide range of intellectual and academic abilities. Males, African-Americans, and economically disadvantaged students are over-represented in the EBD population, and students with EBD are more likely to live in single-parent homes, foster homes, or other non-traditional living situations. These students also tend to have low rates of positive social interactions with peers in educational contexts. Students with EBD are often categorized as "internalizers" (e.g., those who have poor self-esteem or are diagnosed with an anxiety disorder or mood disorder) or "externalizers" (e.g., those who disrupt classroom instruction or are diagnosed with disruptive behavior disorders such as oppositional defiant disorder and conduct disorder). Male students may be over-represented in the EBD population because they appear more likely to exhibit disruptive externalizing behavior that interferes with classroom instruction. Females may be more likely to exhibit internalizing behavior that does not interfere with classroom instruction, though it is unclear to what extent this perception is due to social expectations of differences in male and female behavior. In any case, both internalizing and externalizing behavior can and do occur in either sex. Students with EBD are also at increased risk for learning disabilities, school dropout, substance abuse, and juvenile delinquency.
Internalizing and externalizing behavior
A person with EBD with "internalizing" behavior may have poor self-esteem, have depression, experience loss of interest in social, academic, and other life activities, and may exhibit non-suicidal self-injury or substance abuse. Students with internalizing behavior may also have a diagnosis of separation anxiety or another anxiety disorder, post-traumatic stress disorder (PTSD), specific or social phobia, obsessive–compulsive disorder (OCD), panic disorder, and/or an eating disorder. Teachers are more likely to write referrals for students who are overly disruptive, and screening tools used to detect students with high levels of "internalizing" behavior are not sensitive and are rarely used in practice. Students with EBD with "externalizing" behavior may be aggressive, non-compliant, extroverted, or disruptive.
Students with EBD who show externalizing behavior are often diagnosed with attention deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), conduct disorder, and/or bipolar disorder; however, this population can also include typically developing children who have learned to exhibit externalizing behavior for various reasons (e.g., escape from academic demands or access to attention). These students often have difficulty inhibiting emotional responses resulting from anger, frustration, and disappointment. Students who "externalize" exhibit behaviors such as insulting, provoking, threatening, bullying, cursing, and fighting, along with other forms of aggression. Male students with EBD exhibit externalizing behavior more often than their female counterparts.
Children and adolescents with ADD or ADHD may display different types of externalizing behavior and should receive medication, behavioral treatment, or both for their diagnosis. Adolescents with severe ADHD would likely benefit most from both medication and behavioral treatment. Younger children should go through behavioral treatment before being treated with medication. Another recommended form of treatment for children and adolescents diagnosed with ADHD is counseling from a mental health professional. Treatment can improve the performance of children and adolescents on emotion-recognition tasks, particularly response time, as they typically have no difficulty recognizing human emotions. The degree of treatment required varies depending on the degree of ADD or ADHD the individual has.
Treatment for these types of behaviors should include the parents, as it is evident that their parenting skills affect how their child deals with symptoms, especially at a younger age. Parents who went through a parenting-skills training program reported a decrease in internalizing and externalizing behavior in their children after the program. The program included learning how to give positive attention, how to increase good behavior with small, frequent rewards and specific praise, and how to decrease attention when the child behaved poorly.
Effect on cognition
In recent years, many researchers have been interested in exploring the relationship between emotional disorders and cognition. Evidence has revealed that there is a relationship between the two. Strauman (1989) investigated how emotional disorders shape a person's cognitive structure, that is, the mental processes people utilize to make sense of the world around them. He recruited three groups of individuals: those with social phobias, those with depression, and controls with no emotional disorder diagnosis. He wanted to determine whether these groups had a cognitive structure showing an actual/ideal (AI) discrepancy (referring to an individual not believing that they have achieved their personal desires) or an actual/own/other (AOO) discrepancy (referring to an individual's actions not living up to what their significant other believes they should be). He found that depressed individuals had the highest AI discrepancy and social phobics had the greatest AOO discrepancy, while the controls were lower or in between the two for both discrepancies.
Specific cognitive processes (e.g., attention) may be different in those with emotional disorders. MacLeod, Mathews, and Tata (1986) tested the reaction times of 32 participants, some of whom were diagnosed with generalized anxiety disorder, when presented with threatening words. They found that when threatening words were presented, people with greater anxiety tended to show increased selective attention, meaning that they reacted more quickly to a stimulus in an area where a threatening word had just been presented (32–59 ms faster). In the control group, subjects reacted more slowly when a threatening word preceded the stimulus (16–32 ms slower).
Emotional disorders can also alter the way people regulate their emotions. Joormann and Gotlib (2010) conducted a study with depressed, or previously depressed, individuals to test this. They found that, compared to individuals who have never had a depressive episode, previously and currently depressed individuals tended to use more maladaptive emotion regulation strategies (such as rumination or brooding). They also found that when depressed individuals displayed cognitive inhibition (slowing of response to a variable that had been previously ignored) when asked to describe a negative word (the ignored variable being a positive word), they were less likely to ruminate or brood. When they displayed cognitive inhibition when asked to describe a positive word (the ignored variable being a negative word), they were more likely to reflect.
Services in the United States
There are many types of services available to EBD students, referenced below. One service is one-on-one support from an aide who assists in everyday activities and academics. Foundations offer another service: behavior services as well as counseling support. Some services include classrooms dedicated to educational foundations that work on building the student up. States also offer dedicated schools with multiple resources that help students with EBD excel and transition (back) into local schools.
Texas
The state of Texas has the Texas Behavior Support Initiative (TBSI), authorized by Senate Bill 1196 and Texas Administrative Code §89.1053. It is designed to provide knowledge for the use of constructive behavior interventions and to aid students, including students with disabilities. TBSI meets the legislative requirements for the use of restraint and time-out, and provides the baseline work for behavior strategies and prevention throughout each environment.
New York
The state of New York has Foundations Behavioral Health, an approved out-of-state education and residential provider with the New York State Education Department. Foundations offers academic and behavioral health services to students between the ages of 14 and 21. This program allows students' educational experience to include strategic interventions that aid their social and behavioral functioning. Some of the program's highlights include Functional Behavioral Assessment (FBA), Behavioral Intervention Plan (BIP) & Community Based Instruction (CBI).
California
The state of California has Spectrum Center classrooms in Los Angeles and the San Francisco area which provide emotional disability and behavioral services. They provide academic classrooms for students who are actively working toward grade-level standards and toward earning their high school diploma. The main practice is the use of Positive Behavior Interventions and Supports (PBIS). PBIS instructional practices help students determine their skill level and progress, restore their skills through direct instruction, learn the standards at their grade level, and take part in small-group counseling.
Michigan
The state of Michigan has a Behavioral Education Center (BEC) in Bangor. Its purpose is to aid local school districts with students with EBDs between the ages of 5 and 26, and to help students use appropriate behaviors and skills so they can successfully return to their local school setting </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Benign prostatic hyperplasia and can you explain it in depth? [/INST] Benign prostatic hyperplasia (BPH), also called prostate enlargement, is a noncancerous increase in size of the prostate gland. Symptoms may include frequent urination, trouble starting to urinate, a weak stream, inability to urinate, or loss of bladder control. Complications can include urinary tract infections, bladder stones, and chronic kidney problems.
The cause is unclear. Risk factors include a family history, obesity, type 2 diabetes, not enough exercise, and erectile dysfunction. Medications like pseudoephedrine, anticholinergics, and calcium channel blockers may worsen symptoms. The underlying mechanism involves the prostate pressing on the urethra, thereby making it difficult to pass urine out of the bladder. Diagnosis is typically based on symptoms and examination after ruling out other possible causes.
Treatment options include lifestyle changes, medications, a number of procedures, and surgery. In those with mild symptoms, weight loss, exercise, and decreasing caffeine intake are recommended, although the quality of the evidence for exercise is low. In those with more significant symptoms, medications may include alpha blockers such as terazosin or 5α-reductase inhibitors such as finasteride. Surgical removal of part of the prostate may be carried out in those who do not improve with other measures. Some herbal medicines that have been studied, such as saw palmetto, have not been shown to help. Other herbal medicines somewhat effective at improving urine flow include beta-sitosterol from Hypoxis rooperi (African star grass), pygeum (extracted from the bark of Prunus africana), pumpkin seeds (Cucurbita pepo), and stinging nettle (Urtica dioica) root.
About 105 million men are affected globally. BPH typically begins after the age of 40. Half of males age 50 and over are affected. After the age of 80, about 90% of males are affected. Although prostate-specific antigen levels may be elevated in males with BPH, the condition does not increase the risk of prostate cancer.
Signs and symptoms
BPH is the most common cause of lower urinary tract symptoms (LUTS), which are divided into storage, voiding, and symptoms which occur after urination. Storage symptoms include the need to urinate frequently, waking at night to urinate, urgency (compelling need to void that cannot be deferred), involuntary urination, including involuntary urination at night, or urge incontinence (urine leak following a strong sudden need to urinate). Voiding symptoms include urinary hesitancy (a delay between trying to urinate and the flow actually beginning), intermittency (not continuous), involuntary interruption of voiding, weak urinary stream, straining to void, a sensation of incomplete emptying, and uncontrollable leaking after the end of urination. These symptoms may be accompanied by bladder pain or pain while urinating, called dysuria.Bladder outlet obstruction (BOO) can be caused by BPH. Symptoms are abdominal pain, a continuous feeling of a full bladder, frequent urination, acute urinary retention (inability to urinate), pain during urination (dysuria), problems starting urination (urinary hesitancy), slow urine flow, starting and stopping (urinary intermittency), and nocturia.BPH can be a progressive disease, especially if left untreated. Incomplete voiding results in residual urine or urinary stasis, which can lead to an increased risk of urinary tract infection.
Causes
Hormones
Most experts consider androgens (testosterone and related hormones) to play a permissive role in the development of BPH. This means that androgens must be present for BPH to occur, but do not necessarily directly cause the condition. This is supported by evidence suggesting that castrated boys do not develop BPH when they age. In an unusual study of 26 eunuchs from the palace of the Qing dynasty still living in Beijing in 1960, the prostate could not be felt in 81% of the studied eunuchs. The average time since castration was 54 years (range, 41–65 years). On the other hand, some studies suggest that administering exogenous testosterone is not associated with a significant increase in the risk of BPH symptoms, so the role of testosterone in prostate cancer and BPH is still unclear. Further randomized controlled trials with more participants are needed to quantify any risk of giving exogenous testosterone.Dihydrotestosterone (DHT), a metabolite of testosterone, is a critical mediator of prostatic growth. DHT is synthesized in the prostate from circulating testosterone by the action of the enzyme 5α-reductase, type 2. DHT can act in an autocrine fashion on the stromal cells or in paracrine fashion by diffusing into nearby epithelial cells. In both of these cell types, DHT binds to nuclear androgen receptors and signals the transcription of growth factors that are mitogenic to the epithelial and stromal cells. DHT is ten times more potent than testosterone because it dissociates from the androgen receptor more slowly. The importance of DHT in causing nodular hyperplasia is supported by clinical observations in which an inhibitor of 5α-reductase such as finasteride is given to men with this condition. Therapy with a 5α-reductase inhibitor markedly reduces the DHT content of the prostate and, in turn, reduces prostate volume and BPH symptoms.Testosterone promotes prostate cell proliferation, but relatively low levels of serum testosterone are found in patients with BPH. One small study has shown that medical castration lowers the serum and prostate hormone levels unevenly, having less effect on testosterone and dihydrotestosterone levels in the prostate.While there is some evidence that estrogen may play a role in the cause of BPH, this effect appears to be mediated mainly through local conversion of androgens to estrogen in the prostate tissue rather than a direct effect of estrogen itself. In canine in vivo studies castration, which significantly reduced androgen levels but left estrogen levels unchanged, caused significant atrophy of the prostate. Studies looking for a correlation between prostatic hyperplasia and serum estrogen levels in humans have generally shown none.In 2008, Gat et al. published evidence that BPH is caused by failure in the spermatic venous drainage system resulting in increased hydrostatic pressure and local testosterone levels elevated more than 100 fold above serum levels. If confirmed, this mechanism explains why serum androgen levels do not seem to correlate with BPH and why giving exogenous testosterone would not make much difference.
Diet
Studies indicate that dietary patterns may affect development of BPH, but further research is needed to clarify any important relationship. Studies from China suggest that greater protein intake may be a factor in development of BPH. Men older than 60 in rural areas had very low rates of clinical BPH, while men living in cities and consuming more animal protein had a higher incidence. On the other hand, a study in Japanese-American men in Hawaii found a strong negative association with alcohol intake, but a weak positive association with beef intake. In a large prospective cohort study in the US (the Health Professionals Follow-up Study), investigators reported modest associations between BPH (men with strong symptoms of BPH or surgically confirmed BPH) and total energy and protein, but not fat intake. There is also epidemiological evidence linking BPH with metabolic syndrome (concurrent obesity, impaired glucose metabolism and diabetes, high triglyceride levels, high levels of low-density cholesterol, and hypertension).
Degeneration
Benign prostatic hyperplasia is an age-related disease. Misrepair-accumulation aging theory suggests that development of benign prostatic hyperplasia is a consequence of fibrosis and weakening of the muscular tissue in the prostate. The muscular tissue is important in the functionality of the prostate, and provides the force for excreting the fluid produced by prostatic glands. However, repeated contractions and dilations of myofibers will unavoidably cause injuries and broken myofibers. Myofibers have a low potential for regeneration; therefore, collagen fibers need to be used to replace the broken myofibers. Such misrepairs make the muscular tissue weak in functioning, and the fluid secreted by glands cannot be excreted completely. Then, the accumulation of fluid in glands increases the resistance of muscular tissue during the movements of contractions and dilations, and more and more myofibers will be broken and replaced by collagen fibers.
Pathophysiology
As men age, the enzymes aromatase and 5-alpha reductase increase in activity. These enzymes are responsible for converting androgen hormones into estrogen and dihydrotestosterone, respectively. This metabolism of androgen hormones leads to a decrease in testosterone but increased levels of DHT and estrogen.
Both the glandular epithelial cells and the stromal cells (including muscular fibers) undergo hyperplasia in BPH. Most sources agree that of the two tissues, stromal hyperplasia predominates, but the exact ratio of the two is unclear. Anatomically, the median and lateral lobes are usually enlarged, due to their highly glandular composition. The anterior lobe has little in the way of glandular tissue and is seldom enlarged. (Carcinoma of the prostate typically occurs in the posterior lobe – hence the ability to discern an irregular outline per rectal examination.) The earliest microscopic signs of BPH usually begin between the ages of 30 and 50 in the periurethral glands (PUG), which lie posterior to the proximal urethra. In BPH, the majority of growth occurs in the transition zone (TZ) of the prostate. In addition to these two classic areas, the peripheral zone (PZ) is also involved to a lesser extent. Prostatic cancer typically occurs in the PZ. However, BPH nodules, usually from the TZ, are often biopsied anyway to rule out cancer in the TZ. BPH can be a progressive growth that in rare instances leads to exceptional enlargement. In some males, the prostate enlargement exceeds 200 to 500 grams. This condition has been defined as giant prostatic hyperplasia (GPH).
Diagnosis
The clinical diagnosis of BPH is based on a history of LUTS (lower urinary tract symptoms), a digital rectal exam, and exclusion of other causes of similar signs and symptoms. The degree of LUTS does not necessarily correspond to the size of the prostate. An enlarged prostate gland on rectal examination that is symmetric and smooth supports a diagnosis of BPH. However, if the prostate gland feels asymmetrical, firm, or nodular, this raises concern for prostate cancer.Validated questionnaires such as the American Urological Association Symptom Index (AUA-SI), the International Prostate Symptom Score (I-PSS), and more recently the UWIN score (urgency, weak stream, incomplete emptying, and nocturia) are useful aids to making the diagnosis of BPH and quantifying the severity of symptoms.
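As a rough illustration of how such questionnaires are scored, the minimal sketch below maps an I-PSS total to a severity category, assuming the commonly cited structure of seven items each scored 0 to 5 (total 0 to 35) and the commonly cited cut-offs of 0–7 mild, 8–19 moderate, and 20–35 severe; it is an illustrative sketch, not a clinical tool.

# Minimal sketch: categorize an I-PSS total using commonly cited cut-offs.
# Assumes 7 questions, each scored 0-5 (total 0-35); thresholds are illustrative.
def ipss_severity(item_scores):
    if len(item_scores) != 7 or not all(0 <= s <= 5 for s in item_scores):
        raise ValueError("expected 7 item scores, each between 0 and 5")
    total = sum(item_scores)
    if total <= 7:
        category = "mild"
    elif total <= 19:
        category = "moderate"
    else:
        category = "severe"
    return total, category

print(ipss_severity([2, 3, 1, 4, 2, 3, 2]))  # (17, 'moderate')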
Laboratory investigations
Urinalysis is typically performed when LUTS are present and BPH is suspected to evaluate for signs of a urinary tract infection, glucose in the urine (suggestive of diabetes), or protein in the urine (suggestive of kidney disease). Bloodwork including kidney function tests and prostate specific antigen (PSA) are often ordered to evaluate for kidney damage and prostate cancer, respectively. However, checking blood PSA levels for prostate cancer screening is controversial and not necessarily indicated in every evaluation for BPH. Benign prostatic hyperplasia and prostate cancer are both capable of increasing blood PSA levels and PSA elevation is unable to differentiate these two conditions well. If PSA levels are checked and are high, then further investigation is warranted. Measures including PSA density, free PSA, rectal examination, and transrectal ultrasonography may be helpful in determining whether a PSA increase is due to BPH or prostate cancer.
Imaging and other investigations
Uroflowmetry is done to measure the rate of urine flow and the total volume of urine voided while the subject is urinating. Abdominal ultrasound examination of the prostate and kidneys is often performed to rule out hydronephrosis and hydroureter. Incidentally, cysts, tumours, and stones may be found on ultrasound. A post-void residual volume of more than 100 ml may indicate significant obstruction. A prostate size of 30 cc or more indicates enlargement of the prostate. Prostatic calcification can be detected through transrectal ultrasound (TRUS). Calcification is due to solidification of prostatic secretions or calcified corpora amylacea (hyaline masses on the prostate gland). Calcification is also found in a variety of other conditions such as prostatitis, chronic pelvic pain syndrome, and prostate cancer. For those with elevated levels of PSA, TRUS-guided biopsy is performed to take a sample of the prostate for investigation. Although MRI is more accurate than TRUS in determining prostate volume, TRUS is less expensive and almost as accurate as MRI. Therefore, TRUS is still preferred for measuring prostate volume.
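For context on the 30 cc threshold above, prostate volume on ultrasound is commonly estimated with the prolate ellipsoid approximation (length × width × height × π/6). The sketch below applies that standard formula with made-up example diameters; treat it as an illustration of the arithmetic, not a clinical calculator.

import math

# Illustrative sketch: prolate ellipsoid approximation of prostate volume
# from three ultrasound diameters (in cm); 1 cm^3 equals 1 cc.
def prostate_volume_cc(length_cm, width_cm, height_cm):
    return length_cm * width_cm * height_cm * math.pi / 6

vol = prostate_volume_cc(4.5, 4.0, 3.8)  # hypothetical measurements
print(f"{vol:.1f} cc, enlarged: {vol >= 30}")  # ~35.8 cc, enlarged: True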
Differential diagnosis
Medical conditions
The differential diagnosis for LUTS is broad and includes various medical conditions, neurologic disorders, and other diseases of the bladder, urethra, and prostate such as bladder cancer, urinary tract infection, urethral stricture, urethral calculi (stones), chronic prostatitis, and prostate cancer. Neurogenic bladder can cause urinary retention and cause symptoms similar to those of BPH. This may occur as a result of uncoordinated contraction of the bladder muscle or impairment in the timing of bladder muscle contraction and urethral sphincter relaxation. Notable causes of neurogenic bladder include disorders of the central nervous system such as Parkinsons disease, multiple sclerosis, and spinal cord injuries as well as disorders of the peripheral nervous system such as diabetes mellitus, vitamin B12 deficiency, and alcohol-induced nerve damage. Individuals affected by heart failure often experience nighttime awakenings to urinate due to redistribution of fluid accumulated in swollen legs.
Medications
Certain medications can increase urination difficulties by increasing bladder outlet resistance due to increased smooth muscle tone at the prostate or bladder neck and so contribute to LUTS. Alpha-adrenergic agonist medications, such as decongestants with pseudoephedrine, can increase bladder outlet resistance. Calcium channel blockers and anticholinergic medications act by a different mechanism, worsening urinary retention by promoting bladder muscle relaxation. Diuretic medications such as loop diuretics (e.g., furosemide) or thiazides (e.g., chlorthalidone) can cause or worsen urinary frequency and nighttime awakenings to urinate.
Management
When treating and managing benign prostatic hyperplasia, the aim is to prevent complications related to the disease and improve or relieve symptoms. Approaches used include lifestyle modifications, medications, and surgery.
Lifestyle
Lifestyle alterations to address the symptoms of BPH include physical activity, decreasing fluid intake before bedtime, moderating the consumption of alcohol and caffeine-containing products and following a timed voiding schedule.
Patients can also attempt to avoid products and medications with anticholinergic properties that may exacerbate urinary retention symptoms of BPH, including antihistamines, decongestants, opioids, and tricyclic antidepressants; however, changes in medications should be done with input from a medical professional.
Physical activity
Physical activity has been recommended as a treatment for urinary tract symptoms. A 2019 Cochrane review of six studies involving 652 men assessed the effects of physical activity alone, physical activity as part of a self-management program, and similar interventions. However, the quality of evidence was very low, and it therefore remains uncertain whether physical activity is helpful in men experiencing urinary symptoms caused by benign prostatic hyperplasia.
Voiding position
Voiding position when urinating may influence urodynamic parameters (urinary flow rate, voiding time, and post-void residual volume). A meta-analysis found no differences between the standing and sitting positions for healthy males, but that, for elderly males with lower urinary tract symptoms, voiding in the sitting position:
decreased the post void residual volume;
increased the maximum urinary flow, comparable with pharmacological intervention; and
decreased the voiding time.
This urodynamic profile is associated with a lower risk of urologic complications, such as cystitis and bladder stones.
Medications
The two main medication classes for BPH management are alpha blockers and 5α-reductase inhibitors.
Alpha blockers
Selective α1-blockers are the most common choice for initial therapy. They include alfuzosin, doxazosin, silodosin, tamsulosin, terazosin, and naftopidil. They have a small to moderate benefit at improving symptoms. Selective alpha-1 blockers are similar in effectiveness but have slightly different side effect profiles. Alpha blockers relax smooth muscle in the prostate and the bladder neck, thus decreasing the blockage of urine flow. Common side effects of alpha blockers include orthostatic hypotension (a head rush or dizzy spell when standing up or stretching), ejaculation changes, erectile dysfunction, headaches, nasal congestion, and weakness. For men with LUTS due to an enlarged prostate, the effects of naftopidil, tamsulosin, and silodosin on urinary symptoms and quality of life may be similar. Naftopidil and tamsulosin may have similar levels of unwanted sexual side effects but fewer unwanted side effects than silodosin. Tamsulosin and silodosin are selective α1 receptor blockers that preferentially bind to the α1A receptor in the prostate rather than the α1B receptor in the blood vessels. Less subtype-selective α1 receptor blockers such as terazosin and doxazosin may lower blood pressure. The older, less selective α1-adrenergic blocker prazosin is not a first-line choice for either high blood pressure or prostatic hyperplasia; it is a choice for patients who present with both problems at the same time. The older, broadly non-selective alpha blocker medications such as phenoxybenzamine are not recommended for control of BPH. Terazosin and doxazosin may also require slow dose adjustments, as they can lower blood pressure and cause syncope (fainting) if the response to the medication is too strong.
5α-reductase inhibitors
The 5α-reductase inhibitors finasteride and dutasteride may also be used in men with BPH. These medications inhibit the 5α-reductase enzyme, which, in turn, inhibits production of DHT, a hormone responsible for enlarging the prostate. Their effects may take longer to appear than those of alpha blockers, but they persist for many years. When used together with alpha blockers, no benefit was reported in short-term trials, but in a longer-term study (3–4 years) there was a greater reduction in BPH progression to acute urinary retention and surgery than with either agent alone, especially in people with more severe symptoms and larger prostates. Other trials have confirmed reductions in symptoms, within 6 months in one trial, an effect that was maintained after withdrawal of the alpha blocker. Side effects include decreased libido and ejaculatory or erectile dysfunction. The 5α-reductase inhibitors are contraindicated in pregnant women because of their teratogenicity due to interference with fetal testosterone metabolism, and as a precaution, pregnant women should not handle crushed or broken tablets.
Phosphodiesterase inhibitors (PDEIs)
A 2018 Cochrane review of studies on men over 60 with moderate to severe lower urinary tract symptoms analyzed the impact of phosphodiesterase inhibitors (PDEIs) in comparison with other drugs. These drugs may improve urinary symptoms slightly and reduce urinary bother, but may also cause more side effects compared with placebo. The evidence in this review found that there is probably no difference between PDEIs and alpha blockers; however, when used in combination they may provide a greater improvement in symptoms (with more side effects). PDEIs also likely improve symptoms when used in combination with 5-alpha reductase inhibitors.
Several phosphodiesterase-5 inhibitors are also effective, but may require multiple doses daily to maintain adequate urine flow. Tadalafil, a phosphodiesterase-5 inhibitor, was considered then rejected by NICE in the UK for the treatment of symptoms associated with BPH. In 2011, the U.S. Food and Drug Administration approved tadalafil to treat the signs and symptoms of benign prostatic hyperplasia, and for the treatment of BPH and erectile dysfunction (ED), when the conditions occur simultaneously.
Others
Antimuscarinics such as tolterodine may also be used, especially in combination with alpha blockers. They act by decreasing acetylcholine effects on the smooth muscle of the bladder, thus helping control symptoms of an overactive bladder.
Self-catheterization
Intermittent urinary catheterization is used to relieve the bladder in people with urinary retention. Self-catheterization is an option in BPH when it is difficult or impossible to completely empty the bladder. Urinary tract infection is the most common complication of intermittent catheterization. Several techniques and types of catheter are available, including sterile (single-use) and clean (multiple use) catheters, but, based on current information, none is superior to others in reducing the incidence of urinary tract infection.
Surgery
If medical treatment is not effective, surgery may be performed. Surgical techniques used include the following:
Transurethral resection of the prostate (TURP): the gold standard. TURP is thought to be the most effective approach for improving urinary symptoms and urinary flow, however, this surgical procedure may be associated with complications in up to 20% of men </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Benzodiazepine overdose and can you explain it in depth? [/INST] Benzodiazepine overdose describes the ingestion of one of the drugs in the benzodiazepine class in quantities greater than are recommended or generally practiced. The most common symptoms of overdose include central nervous system (CNS) depression, impaired balance, ataxia, and slurred speech. Severe symptoms include coma and respiratory depression. Supportive care is the mainstay of treatment of benzodiazepine overdose. There is an antidote, flumazenil, but its use is controversial.Deaths from single-drug benzodiazepine overdoses occur infrequently, particularly after the point of hospital admission. However, combinations of high doses of benzodiazepines with alcohol, barbiturates, opioids or tricyclic antidepressants are particularly dangerous, and may lead to severe complications such as coma or death. In 2013, benzodiazepines were involved in 31% of the estimated 22,767 deaths from prescription drug overdose in the United States. The US Food and Drug Administration (FDA) has subsequently issued a black box warning regarding concurrent use of benzodiazepines and opioids. Benzodiazepines are one of the most highly prescribed classes of drugs, and they are commonly used in self-poisoning. Over 10 years in the United Kingdom, 1512 fatal poisonings have been attributed to benzodiazepines with or without alcohol. Temazepam was shown to be more toxic than the majority of benzodiazepines. An Australian (1995) study found oxazepam less toxic and less sedative, and temazepam more toxic and more sedative, than most benzodiazepines in overdose.
Signs and symptoms
Following an acute overdose of a benzodiazepine the onset of symptoms is typically rapid, with most people developing symptoms within 4 hours. Patients initially present with mild to moderate impairment of central nervous system function. Initial signs and symptoms include intoxication, somnolence, diplopia, impaired balance, impaired motor function, anterograde amnesia, ataxia, and slurred speech. Most patients with pure benzodiazepine overdose will usually exhibit only these mild CNS symptoms. Paradoxical reactions such as anxiety, delirium, combativeness, hallucinations, and aggression can also occur following benzodiazepine overdose. Gastrointestinal symptoms such as nausea and vomiting have also been occasionally reported. Cases of severe overdose have been reported, and symptoms displayed might include prolonged deep coma or deep cyclic coma, apnea, respiratory depression, hypoxemia, hypothermia, hypotension, bradycardia, cardiac arrest, and pulmonary aspiration, with the possibility of death. Severe consequences are rare following overdose of benzodiazepines alone, but the severity of overdose is increased significantly if benzodiazepines are taken in overdose in combination with other medications. Significant toxicity may result following recreational drug misuse in conjunction with other CNS depressants such as opioids or alcohol. The duration of symptoms following overdose is usually between 12 and 36 hours in the majority of cases. The majority of drug-related deaths involve misuse of heroin or other opioids in combination with benzodiazepines or other CNS depressant drugs. In most cases of fatal overdose it is likely that lack of opioid tolerance combined with the depressant effects of benzodiazepines is the cause of death. The symptoms of an overdose such as sleepiness, agitation and ataxia occur much more frequently and severely in children. Hypotonia may also occur in severe cases.
Toxicity
Benzodiazepines have a wide therapeutic index and taken alone in overdose rarely cause severe complications or fatalities. More often than not, a patient who inadvertently takes more than the prescribed dose will simply feel drowsy and fall asleep for a few hours. Benzodiazepines taken in overdose in combination with alcohol, barbiturates, opioids, tricyclic antidepressants, or sedating antipsychotics, anticonvulsants, or antihistamines are particularly dangerous. Additionally, emergency department visits involving benzodiazepines compared to other sedative-hypnotics have much higher odds of hospitalization, patient transfer, or death. In the case of alcohol and barbiturates, not only do they have an additive effect but they also increase the binding affinity of benzodiazepines to the benzodiazepine binding site, which results in a very significant potentiation of the CNS and respiratory depressant effects. In addition, the elderly and those with chronic illnesses are much more vulnerable to lethal overdose with benzodiazepines. Fatal overdoses can occur at relatively low doses in these individuals.
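The "wide therapeutic index" mentioned above can be made concrete: the therapeutic index is usually defined as the ratio of the toxic (or lethal) dose in 50% of subjects to the effective dose in 50% of subjects. The dose values in the sketch below are hypothetical, chosen purely to illustrate the arithmetic.

# Therapeutic index = TD50 / ED50 (toxic dose over effective dose, each for
# 50% of the population). A larger ratio means a wider safety margin.
# The dose values here are hypothetical, for illustration only.
def therapeutic_index(td50, ed50):
    return td50 / ed50

print(therapeutic_index(1000.0, 10.0))  # 100.0 -> wide safety margin
print(therapeutic_index(20.0, 10.0))    # 2.0   -> narrow safety margin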
Comparability
The various benzodiazepines differ in their toxicity, since they produce varying levels of sedation in overdose. A 1993 British study of deaths during the 1980s found flurazepam and temazepam more frequently involved in drug-related deaths, causing more deaths per million prescriptions than other benzodiazepines. Flurazepam, now rarely prescribed in the United Kingdom and Australia, had the highest fatal toxicity index of any benzodiazepine (15.0), followed by temazepam (11.9), versus benzodiazepines overall (5.9), taken with or without alcohol. An Australian (1995) study found oxazepam less toxic and less sedative, and temazepam more toxic and more sedative, than most benzodiazepines in overdose. An Australian study (2004) of overdose admissions between 1987 and 2002 found alprazolam, which happens to be the most prescribed benzodiazepine in Australia and the United States, to be more toxic than diazepam and other benzodiazepines. They also cited a review of the Annual Reports of the American Association of Poison Control Centers National Data Collection System, which showed alprazolam was involved in 34 fatal deliberate self-poisonings over 10 years (1992–2001), compared with 30 fatal deliberate self-poisonings involving diazepam. In a New Zealand study (2003) of 200 deaths, zopiclone, a benzodiazepine receptor agonist, had similar overdose potential to benzodiazepines.
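The fatal toxicity index used in these comparisons is essentially deaths per million prescriptions, so the arithmetic is simple to reproduce; the prescription counts in the sketch below are hypothetical, since the source reports only the resulting indices.

# Fatal toxicity index (FTI): deaths attributed to a drug per million
# prescriptions written for it. Prescription counts here are hypothetical.
def fatal_toxicity_index(deaths, prescriptions):
    return deaths / prescriptions * 1_000_000

# e.g. 30 deaths over 2 million prescriptions would give an FTI of 15.0
print(fatal_toxicity_index(30, 2_000_000))  # 15.0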
Pathophysiology
Benzodiazepines bind to a specific benzodiazepine receptor, thereby enhancing the effect of the neurotransmitter gamma-aminobutyric acid (GABA) and causing CNS depression. In overdose situations this pharmacological effect is extended, leading to more severe CNS depression and potentially coma or cardiac arrest. Benzodiazepine-overdose-related coma may be characterised by an alpha pattern on EEG, with the central somatosensory conduction time (CCT) after median nerve stimulation prolonged and the N20 component dispersed. Brain-stem auditory evoked potentials demonstrate delayed interpeak latencies (IPLs) I-III, III-V and I-V. Toxic overdoses of benzodiazepines therefore cause prolonged CCT and IPLs.
Diagnosis
The diagnosis of benzodiazepine overdose may be difficult, but is usually made based on the clinical presentation of the patient along with a history of overdose. Obtaining a laboratory test for benzodiazepine blood concentrations can be useful in patients presenting with CNS depression or coma of unknown origin. Techniques available to measure blood concentrations include thin layer chromatography, gas liquid chromatography with or without a mass spectrometer, and radioimmunoassay. Blood benzodiazepine concentrations, however, do not appear to be related to any toxicological effect or predictive of clinical outcome. Blood concentrations are, therefore, used mainly to confirm the diagnosis rather than being useful for the clinical management of the patient.
Treatment
Medical observation and supportive care are the mainstay of treatment of benzodiazepine overdose. Although benzodiazepines are absorbed by activated charcoal, gastric decontamination with activated charcoal is not beneficial in pure benzodiazepine overdose as the risk of adverse effects would outweigh any potential benefit from the procedure. It is recommended only if benzodiazepines have been taken in combination with other drugs that may benefit from decontamination. Gastric lavage (stomach pumping) or whole bowel irrigation are also not recommended. Enhancing elimination of the drug with hemodialysis, hemoperfusion, or forced diuresis is unlikely to be beneficial as these procedures have little effect on the clearance of benzodiazepines due to their large volume of distribution and lipid solubility.
Supportive measures
Supportive measures include observation of vital signs, especially Glasgow Coma Scale and airway patency. IV access with fluid administration and maintenance of the airway with intubation and artificial ventilation may be required if respiratory depression or pulmonary aspiration occurs. Supportive measures should be put in place prior to administration of any benzodiazepine antagonist in order to protect the patient from both the withdrawal effects and possible complications arising from the benzodiazepine. A determination of possible deliberate overdose should be considered with appropriate scrutiny, and precautions taken to prevent any attempt by the patient to commit further bodily harm. Hypotension is corrected with fluid replacement, although catecholamines such as norepinephrine or dopamine may be required to increase blood pressure. Bradycardia is treated with atropine or an infusion of norepinephrine to increase coronary blood flow and heart rate.
Flumazenil
Flumazenil (Romazicon) is a competitive benzodiazepine receptor antagonist that can be used as an antidote for benzodiazepine overdose. Its use, however, is controversial as it has numerous contraindications. It is contraindicated in patients who are on long-term benzodiazepines, those who have ingested a substance that lowers the seizure threshold, or in patients who have tachycardia, widened QRS complex on ECG, anticholinergic signs, or a history of seizures. Due to these contraindications and the possibility of it causing severe adverse effects including seizures, adverse cardiac effects, and death, in the majority of cases there is no indication for the use of flumazenil in the management of benzodiazepine overdose as the risks in general outweigh any potential benefit of administration. It also has no role in the management of unknown overdoses. In addition, if full airway protection has been achieved, a good outcome is expected, and therefore flumazenil administration is unlikely to be required.Flumazenil is very effective at reversing the CNS depression associated with benzodiazepines but is less effective at reversing respiratory depression. One study found that only 10% of the patient population presenting with a benzodiazepine overdose are suitable candidates for flumazenil. In this select population who are naive to and overdose solely on a benzodiazepine, it can be considered. Due to its short half life, the duration of action of flumazenil is usually less than 1 hour, and multiple doses may be needed. When flumazenil is indicated the risks can be reduced or avoided by slow dose titration of flumazenil. Due to risks and its many contraindications, flumazenil should be administered only after discussion with a medical toxicologist.
Epidemiology
In a Swedish (2003) study benzodiazepines were implicated in 39% of suicides by drug poisoning in the elderly during 1992–1996. Nitrazepam and flunitrazepam accounted for 90% of benzodiazepine-implicated suicides. In cases where benzodiazepines contributed to death but were not the sole cause, drowning, typically in the bath, was a common method used. Benzodiazepines were the predominant drug class in suicides in this review of Swedish death certificates. In 72% of the cases, benzodiazepines were the only drug consumed. Thus, many of the deaths associated with benzodiazepine overdoses may not be a direct result of the toxic effects, but are instead due either to combination with other drugs or to use of the drug as a tool for killing oneself by a different method, e.g. drowning. In a Swedish retrospective study of deaths in 1987, benzodiazepines were found in 159 of 1587 autopsy cases </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Thiamine deficiency and can you explain it in depth? [/INST] Thiamine deficiency is a medical condition of low levels of thiamine (Vitamin B1). A severe and chronic form is known as beriberi. The two main types in adults are wet beriberi and dry beriberi. Wet beriberi affects the cardiovascular system, resulting in a fast heart rate, shortness of breath, and leg swelling. Dry beriberi affects the nervous system, resulting in numbness of the hands and feet, confusion, trouble moving the legs, and pain. A form with loss of appetite and constipation may also occur. Another type, acute beriberi, found mostly in babies, presents with loss of appetite, vomiting, lactic acidosis, changes in heart rate, and enlargement of the heart.Risk factors include a diet of mostly white rice, alcoholism, dialysis, chronic diarrhea, and taking high doses of diuretics. In rare cases, it may be due to a genetic condition that results in difficulties absorbing thiamine found in food. Wernicke encephalopathy and Korsakoff syndrome are forms of dry beriberi. Diagnosis is based on symptoms, low levels of thiamine in the urine, high blood lactate, and improvement with thiamine supplementation.Treatment is by thiamine supplementation, either by mouth or by injection. With treatment, symptoms generally resolve in a few weeks. The disease may be prevented at the population level through the fortification of food.Thiamine deficiency is rare in the United States. It remains relatively common in sub-Saharan Africa. Outbreaks have been seen in refugee camps. Thiamine deficiency has been described for thousands of years in Asia, and became more common in the late 1800s with the increased processing of rice.
Signs and symptoms
Symptoms of beriberi include weight loss, emotional disturbances, impaired sensory perception, weakness and pain in the limbs, and periods of irregular heart rate. Edema (swelling of bodily tissues) is common. It may increase the amount of lactic acid and pyruvic acid within the blood. In advanced cases, the disease may cause high-output cardiac failure and death.
Symptoms may occur concurrently with those of Wernickes encephalopathy, a primarily neurological thiamine deficiency-related condition.
Beriberi is divided into four categories. The first three are historical and the fourth, gastrointestinal beriberi, was recognized in 2004:
Dry beriberi especially affects the peripheral nervous system.
Wet beriberi especially affects the cardiovascular system and other bodily systems.
Infantile beriberi affects the babies of malnourished mothers.
Gastrointestinal beriberi affects the digestive system and other bodily systems.
Dry beriberi
Dry beriberi causes wasting and partial paralysis resulting from damaged peripheral nerves. It is also referred to as endemic neuritis. It is characterized by:
Difficulty with walking
Tingling or loss of sensation (numbness) in hands and feet
Loss of tendon reflexes
Loss of muscle function or paralysis of the lower legs
Mental confusion/speech difficulties
Pain
Involuntary eye movements (nystagmus)
Vomiting
A selective impairment of the large proprioceptive sensory fibers without motor impairment can occur and present as a prominent sensory ataxia, which is a loss of balance and coordination due to loss of the proprioceptive inputs from the periphery and loss of position sense.
Brain disease
Wernickes encephalopathy (WE), Korsakoff syndrome (also called alcohol amnestic disorder), and Wernicke–Korsakoff syndrome are forms of dry beriberi.Wernickes encephalopathy is the most frequently encountered manifestation of thiamine deficiency in Western society, though it may also occur in patients with impaired nutrition from other causes, such as gastrointestinal disease, those with HIV/AIDS, and with the injudicious administration of parenteral glucose or hyperalimentation without adequate B-vitamin supplementation. This is a striking neuro-psychiatric disorder characterized by paralysis of eye movements, abnormal stance and gait, and markedly deranged mental function.Korsakoff syndrome, in general, is considered to occur with deterioration of brain function in patients initially diagnosed with WE. This is an amnestic-confabulatory syndrome characterized by retrograde and anterograde amnesia, impairment of conceptual functions, and decreased spontaneity and initiative.Alcoholics may have thiamine deficiency because of:
Inadequate nutritional intake: Alcoholics tend to intake less than the recommended amount of thiamine.
Decreased uptake of thiamine from the GI tract: Active transport of thiamine into enterocytes is disturbed during acute alcohol exposure.
Liver thiamine stores are reduced due to hepatic steatosis or fibrosis.
Impaired thiamine utilization: Magnesium, which is required for the binding of thiamine to thiamine-using enzymes within the cell, is also deficient due to chronic alcohol consumption. The inefficient use of any thiamine that does reach the cells will further exacerbate the thiamine deficiency.
Ethanol per se inhibits thiamine transport in the gastrointestinal system and blocks phosphorylation of thiamine to its cofactor form (ThDP).
Following improved nutrition and the removal of alcohol consumption, some impairments linked with thiamine deficiency are reversed, in particular poor brain functionality, although in more severe cases, Wernicke–Korsakoff syndrome leaves permanent damage. (See delirium tremens.)
Wet beriberi
Wet beriberi affects the heart and circulatory system. It is sometimes fatal, as it causes a combination of heart failure and weakening of the capillary walls, which causes the peripheral tissues to become edematous. Wet beriberi is characterized by:
Increased heart rate
Vasodilation leading to decreased systemic vascular resistance, and high-output heart failure
Elevated jugular venous pressure
Dyspnea (shortness of breath) on exertion
Paroxysmal nocturnal dyspnea
Peripheral edema (swelling of lower legs) or generalized edema (swelling throughout the body)
Dilated cardiomyopathy
Gastrointestinal beriberi
Gastrointestinal beriberi causes abdominal pain. It is characterized by:
Abdominal pain
Nausea
Vomiting
Lactic acidosis
Infants
Infantile beriberi usually occurs between two and six months of age in children whose mothers have inadequate thiamine intake. It may present as either wet or dry beriberi.In the acute form, the baby develops dyspnea and cyanosis and soon dies of heart failure. These symptoms may be described in infantile beriberi:
Hoarseness, where the child makes moves to moan, but emits no sound or just faint moans caused by nerve paralysis
Weight loss, becoming thinner and then marasmic as the disease progresses
Vomiting
Diarrhea
Pale skin
Edema
Ill temper
Alterations of the cardiovascular system, especially tachycardia (rapid heart rate)
Convulsions occasionally observed in the terminal stages
Cause
Beriberi is often caused by eating a diet with a very high proportion of calorie-rich polished rice (common in Asia) or cassava root (common in sub-Saharan Africa), with little if any thiamine-containing animal products or vegetables. It may also be caused by factors other than inadequate intake: diseases of or operations on the digestive tract, alcoholism, dialysis, or genetic deficiencies. Those causes mainly affect the central nervous system, and provoke the development of Wernickes encephalopathy.
Wernickes disease is one of the most prevalent neurological or neuropsychiatric diseases. In autopsy series, features of Wernicke lesions are observed in approximately 2% of general cases. Medical record research shows that about 85% had not been diagnosed, although only 19% would be asymptomatic. In children, only 58% were diagnosed. In alcohol abusers, autopsy series showed neurological damage at rates of 12.5% or more. Mortality caused by Wernickes disease reaches 17% of those affected, which corresponds to roughly 3.4 per 1000 in the general population, or about 25 million people alive today. The number of people with Wernickes disease may be even higher, considering that early stages may involve dysfunction prior to the production of lesions observable at necropsy. In addition, uncounted numbers of people can experience fetal damage and subsequent diseases.
Genetics
Genetic diseases of thiamine transport are rare but serious. Thiamine responsive megaloblastic anemia syndrome (TRMA) with diabetes mellitus and sensorineural deafness is an autosomal recessive disorder caused by mutations in the gene SLC19A2, a high affinity thiamine transporter. TRMA patients do not show signs of systemic thiamine deficiency, suggesting redundancy in the thiamine transport system. This has led to the discovery of a second high-affinity thiamine transporter, SLC19A3. Leigh disease (subacute necrotising encephalomyelopathy) is an inherited disorder that affects mostly infants in the first years of life and is invariably fatal. Pathological similarities between Leigh disease and WE led to the hypothesis that the cause was a defect in thiamine metabolism. One of the most consistent findings has been an abnormality of the activation of the pyruvate dehydrogenase complex.Mutations in the SLC19A3 gene have been linked to biotin-thiamine responsive basal ganglia disease, which is treated with pharmacological doses of thiamine and biotin, another B vitamin.
Other disorders in which a putative role for thiamine has been implicated include subacute necrotising encephalomyelopathy, opsoclonus myoclonus syndrome (a paraneoplastic syndrome), and Nigerian seasonal ataxia (or African seasonal ataxia). In addition, several inherited disorders of ThDP-dependent enzymes have been reported, which may respond to thiamine treatment.
Pathophysiology
Thiamine in the human body has a half-life of 18 days and is quickly exhausted, particularly when metabolic demands exceed intake. A derivative of thiamine, thiamine pyrophosphate (TPP), is a cofactor involved in the citric acid cycle, as well as connecting the breakdown of sugars with the citric acid cycle. The citric acid cycle is a central metabolic pathway involved in the regulation of carbohydrate, lipid, and amino acid metabolism, and its disruption due to thiamine deficiency inhibits the production of many molecules including the neurotransmitters glutamic acid and GABA. Additionally, thiamine may also be directly involved in neuromodulation.
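The 18-day half-life implies that, absent intake, body stores fall by half every 18 days. A minimal sketch of that first-order decay arithmetic follows; it is an idealization that ignores ongoing intake and variable metabolic demand.

# Idealized first-order decay: fraction of thiamine stores remaining after
# t days, given an 18-day half-life and zero intake. Illustrative only.
def fraction_remaining(t_days, half_life_days=18.0):
    return 0.5 ** (t_days / half_life_days)

for t in (18, 36, 54):
    print(t, round(fraction_remaining(t), 3))  # 0.5, 0.25, 0.125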
Diagnosis
A positive diagnosis test for thiamine deficiency involves measuring the activity of the enzyme transketolase in erythrocytes (Erythrocyte transketolase activation assay). Alternatively, thiamine and its phosphorylated derivatives can directly be detected in whole blood, tissues, foods, animal feed, and pharmaceutical preparations following the conversion of thiamine to fluorescent thiochrome derivatives (thiochrome assay) and separation by high-performance liquid chromatography (HPLC). Capillary electrophoresis (CE) techniques and in-capillary enzyme reaction methods have emerged as alternative techniques in quantifying and monitoring thiamine levels in samples.
The normal thiamine concentration in EDTA-blood is about 20-100 µg/L.
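The transketolase activation assay is usually reported as a "TPP effect": the percentage rise in enzyme activity when thiamine pyrophosphate is added in vitro, a large rise implying the patient's enzyme was starved of its cofactor. The sketch below computes that percentage; the interpretation cut-offs in the comments (roughly 0–15% normal, 15–25% marginal, over 25% deficient) are commonly cited but laboratory-dependent, so treat them as assumptions.

# TPP effect: percent rise in erythrocyte transketolase activity after adding
# thiamine pyrophosphate in vitro. Cut-offs cited below are commonly used but
# laboratory-dependent; they are assumptions for illustration.
def tpp_effect_percent(basal_activity, stimulated_activity):
    return (stimulated_activity - basal_activity) / basal_activity * 100

effect = tpp_effect_percent(basal_activity=0.80, stimulated_activity=1.08)
print(f"TPP effect: {effect:.0f}%")  # 35% -> above the ~25% deficiency cut-off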
Treatment
Many people with beriberi can be treated with thiamine alone. Given thiamine intravenously (and later orally), rapid and dramatic recovery occurs, generally within 24 hours. Improvement of peripheral neuropathy may require several months of thiamine treatment.
Epidemiology
Beriberi is a recurrent nutritional disease in detention houses, even in this century. In 1999, an outbreak of beriberi occurred in a detention center in Taiwan. High rates of illness and death in overcrowded Haitian jails in 2007 were traced to the traditional practice of washing rice before cooking. In the Ivory Coast, among a group of prisoners under heavy punishment, 64% were affected by beriberi. Before beginning treatment, prisoners exhibited symptoms of dry or wet beriberi with neurological signs (tingling: 41%), cardiovascular signs (dyspnoea: 42%, thoracic pain: 35%), and edemas of the lower limbs (51%). With treatment, the rate of healing was about 97%. Populations under extreme stress may be at higher risk for beriberi. Displaced populations, such as refugees from war, are susceptible to micronutritional deficiency, including beriberi. The severe nutritional deprivation caused by famine can also cause beriberi, although symptoms may be overlooked in clinical assessment or masked by other famine-related problems. An extreme weight-loss diet can, rarely, induce a famine-like state and the accompanying beriberi.
History
Earliest written descriptions of thiamine deficiency are from ancient China in the context of Chinese medicine. One of the earliest is by Ge Hong in his book Zhou hou bei ji fang (Emergency Formulas to Keep up Your Sleeve) written sometime during the third century. Hong called the illness by the name jiao qi, which can be interpreted as "foot qi". He described the symptoms to include swelling, weakness, and numbness of the feet. He also acknowledged that the illness could be deadly, and claimed that it could be cured by eating certain foods, such as fermented soybeans in wine. Better known examples of early descriptions of "foot qi" are by Chao Yuanfang (who lived during 550–630) in his book Zhu bing yuan hou lun (Sources and Symptoms of All Diseases) and by Sun Simiao (581–682) in his book Bei ji qian jin yao fang (Essential Emergency Formulas Worth a Thousand in Gold).In the late 19th century, beriberi was studied by Takaki Kanehiro, a British-trained Japanese medical doctor of the Imperial Japanese Navy. Beriberi was a serious problem in the Japanese navy; sailors fell ill an average of four times a year in the period 1878 to 1881, and 35% were cases of beriberi. In 1883, Takaki learned of a very high incidence of beriberi among cadets on a training mission from Japan to Hawaii, via New Zealand and South America. The voyage lasted more than nine months and resulted in 169 cases of sickness and 25 deaths on a ship of 376 men. With the support of the Japanese Navy, he conducted an experiment in which another ship was deployed on the same route, except that its crew was fed a diet of meat, fish, barley, rice, and beans. At the end of the voyage, this crew had only 14 cases of beriberi and no deaths. This convinced Takaki and the Japanese Navy that diet was the cause. In 1884, Takaki observed that beriberi was common among low-ranking crew who were often provided free rice, thus ate little else, but not among crews of Western navies, nor among Japanese officers who consumed a more varied diet.
In 1897, Christiaan Eijkman, a Dutch physician and pathologist, demonstrated that beriberi is caused by poor diet, and discovered that feeding unpolished rice (instead of the polished variety) to chickens helped to prevent beriberi. The following year, Sir Frederick Hopkins postulated that some foods contained "accessory factors"—in addition to proteins, carbohydrates, fats, and salt—that were necessary for the functions of the human body. In 1901, Gerrit Grijns, a Dutch physician and assistant to Christiaan Eijkman in the Netherlands, correctly interpreted beriberi as a deficiency syndrome, and between 1910 and 1913, Edward Bright Vedder established that an extract of rice bran is a treatment for beriberi. In 1929, Eijkman and Hopkins were awarded the Nobel Prize for Physiology or Medicine for their discoveries.
Etymology
Although according to the Oxford English Dictionary, the term "beriberi" comes from a Sinhalese phrase meaning "weak, weak" or "I cannot, I cannot", the word being duplicated for emphasis, the origin of the phrase is questionable. It has also been suggested to come from Hindi, Arabic, and a few other languages, with many meanings like "weakness", "sailor", and even "sheep". Such suggested origins were listed by Heinrich Botho Scheube, among others. Edward Vedder wrote in his book Beriberi (1913) that "it is impossible to definitely trace the origin of the word beriberi". The word berbere was used in writing at least as early as 1568 by Diogo do Couto, when he described the deficiency in India. Kakke, which is a Japanese synonym for thiamine deficiency, comes from the way "jiao qi" is pronounced in Japanese. "Jiao qi" is an old word used in Chinese medicine to describe beriberi. "Kakke" is supposed to have entered into the Japanese language sometime between the sixth and eighth centuries.
Other animals
Poultry
As most feedstuffs used in poultry diets contain sufficient quantities of vitamins to meet the requirements of this species, deficiencies of this vitamin do not occur with commercial diets. This was, at least, the opinion in the 1960s. Mature chickens show signs three weeks after being fed a deficient diet. In young chicks, signs can appear before two weeks of age. Onset is sudden in young chicks, with anorexia and an unsteady gait. Later on, locomotor signs begin, with an apparent paralysis of the flexor of the toes. The characteristic position is called "stargazing", with the affected animal sitting on its hocks with its head thrown back in a posture called opisthotonos. Response to administration of the vitamin is rather quick, occurring a few hours later.
Ruminants
Polioencephalomalacia (PEM) is the most common thiamine deficiency disorder in young ruminant and nonruminant animals. Symptoms of PEM include a profuse, but transient, diarrhea, listlessness, circling movements, stargazing or opisthotonus (head drawn back over neck), and muscle tremors. The most common cause is high-carbohydrate feeds, leading to the overgrowth of thiaminase-producing bacteria, but dietary ingestion of thiaminase (e.g., in bracken fern), or inhibition of thiamine absorption by high sulfur intake are also possible. Another cause of PEM is Clostridium sporogenes or Bacillus aneurinolyticus infection. These bacteria produce thiaminases that can cause an acute thiamine deficiency in the affected animal.
Snakes
Snakes that consume a diet largely composed of goldfish and feeder minnows are susceptible to developing thiamine deficiency. This is often a problem observed in captivity when keeping garter and ribbon snakes that are fed a goldfish-exclusive diet, as these fish contain thiaminase, an enzyme that breaks down thiamine.
Wild birds and fish
Thiamine deficiency has been identified as the cause of a paralytic disease affecting wild birds in the Baltic Sea area dating back to 1982. In this condition, there is difficulty in keeping the wings folded along the side of the body when resting, loss of the ability to fly and voice, with eventual paralysis of the wings and legs and death. It affects primarily 0.5–1 kg-sized birds such as the European herring gull (Larus argentatus), common starling (Sturnus vulgaris), and common eider (Somateria mollissima). Researchers noted, "Because the investigated species occupy a wide range of ecological niches and positions in the food web, we are open to the possibility that other animal classes may develop thiamine deficiency, as well." In the counties of Blekinge and Skåne (southernmost Sweden), mass deaths of several bird species, especially the European herring gull, have been observed since the early 2000s. More recently, species of other classes seem to be affected. High mortality of salmon (Salmo salar) in the river Mörrumsån has been reported, and mammals such as the Eurasian elk (Alces alces) have died in unusually high numbers. Lack of thiamine is the common denominator where analysis is done. In April 2012, the County Administrative Board of Blekinge found the situation so alarming that they asked the Swedish government to set up a closer investigation.
</s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Beta blocker and can you explain it in depth? [/INST] Beta blockers, also spelled β-blockers, are a class of medications that are predominantly used to manage abnormal heart rhythms and to protect the heart from a second heart attack after a first heart attack (secondary prevention). They are also widely used to treat high blood pressure, although they are no longer the first choice for initial treatment of most patients. Beta blockers are competitive antagonists that block the receptor sites for the endogenous catecholamines epinephrine (adrenaline) and norepinephrine (noradrenaline) on adrenergic beta receptors of the sympathetic nervous system, which mediates the fight-or-flight response. Some block activation of all types of β-adrenergic receptors and others are selective for one of the three known types of beta receptors, designated β1, β2 and β3 receptors. β1-adrenergic receptors are located mainly in the heart and in the kidneys. β2-adrenergic receptors are located mainly in the lungs, gastrointestinal tract, liver, uterus, vascular smooth muscle, and skeletal muscle. β3-adrenergic receptors are located in fat cells. Beta receptors are found on cells of the heart muscles, smooth muscles, airways, arteries, kidneys, and other tissues that are part of the sympathetic nervous system and lead to stress responses, especially when they are stimulated by epinephrine (adrenaline). Beta blockers interfere with the binding of epinephrine and other stress hormones to these receptors, and thereby weaken the effects of stress hormones.
In 1964, James Black synthesized the first clinically significant beta blockers—propranolol and pronethalol; it revolutionized the medical management of angina pectoris and is considered by many to be one of the most important contributions to clinical medicine and pharmacology of the 20th century.For the treatment of primary hypertension, meta-analyses of studies which mostly used atenolol have shown that although beta blockers are more effective than placebo in preventing stroke and total cardiovascular events, they are not as effective as diuretics, medications inhibiting the renin–angiotensin system (e.g., ACE inhibitors), or calcium channel blockers.
Medical uses
Large differences exist in the pharmacology of agents within the class, thus not all beta blockers are used for all indications listed below.
Indications for beta blockers include:
Angina pectoris (contraindicated for Prinzmetals angina)
Atrial fibrillation
Cardiac arrhythmia
Congestive heart failure
Essential tremor
Glaucoma (as eye drops, they decrease intraocular pressure by lowering aqueous humor secretion)
Hypertension, although they are generally not preferred as an initial treatment
Hyperthyroidism
Migraine prophylaxis
Mitral valve prolapse
Myocardial infarction
Phaeochromocytoma, in conjunction with α-blocker
Postural orthostatic tachycardia syndrome
Symptomatic control (tachycardia, tremor) in anxiety and hyperthyroidism
Theophylline overdose
Beta blockers have also been used for:
Acute aortic dissection
Hypertrophic obstructive cardiomyopathy
Long QT syndrome
Marfan syndrome (treatment with propranolol slows progression of aortic dilation and its complications)
Prevention of variceal bleeding in portal hypertension
Possible mitigation of hyperhidrosis
Social and other anxiety disorders
Controversially, for reduction of perioperative mortality in non-cardiac surgery, but the best evidence suggests that they increase mortality when used this way
Congestive heart failure
Although beta blockers were once contraindicated in congestive heart failure, as they have the potential to worsen the condition due to their effect of decreasing cardiac contractility, studies in the late 1990s showed their efficacy at reducing morbidity and mortality.Bisoprolol, carvedilol, and sustained-release metoprolol are specifically indicated as adjuncts to standard ACE inhibitor and diuretic therapy in congestive heart failure, although at doses typically much lower than those indicated for other conditions. Beta blockers are only indicated in cases of compensated, stable congestive heart failure; in cases of acute decompensated heart failure, beta blockers will cause a further decrease in ejection fraction, worsening the patients current symptoms.
Beta blockers are known primarily for their reductive effect on heart rate, although this is not the only mechanism of action of importance in congestive heart failure. Beta blockers, in addition to their sympatholytic β1 activity in the heart, influence the renin–angiotensin system at the kidneys. Beta blockers cause a decrease in renin secretion, which in turn reduces the heart's oxygen demand by lowering the extracellular volume and increasing the oxygen-carrying capacity of the blood. Heart failure characteristically involves increased catecholamine activity on the heart, which is responsible for several deleterious effects, including increased oxygen demand, propagation of inflammatory mediators, and abnormal cardiac tissue remodeling, all of which decrease the efficiency of cardiac contraction and contribute to the low ejection fraction. Beta blockers counter this inappropriately high sympathetic activity, eventually leading to an improved ejection fraction, despite an initial reduction in ejection fraction.
Trials have shown beta blockers reduce the absolute risk of death by 4.5% over a 13-month period. In addition to reducing the risk of mortality, the numbers of hospital visits and hospitalizations were also reduced in the trials. Therapeutic administration of beta blockers for congestive heart failure ought to begin at very low doses (1/8 of target) with a gradual escalation of the dose. The heart of the patient must adjust to decreasing stimulation by catecholamines and find a new equilibrium at a lower adrenergic drive.
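As a worked illustration of the figures above (the 4.5% absolute risk reduction and the 1/8-of-target starting dose come from the text; the target dose and the doubling schedule below are hypothetical assumptions, not dosing guidance), a rough sketch:

```python
# Illustrative arithmetic only; not medical advice.
# Converts the cited absolute risk reduction (ARR) into a number needed to
# treat (NNT), and sketches an up-titration starting at 1/8 of a target dose.

arr = 0.045                      # 4.5% absolute risk reduction over ~13 months (from the text)
nnt = 1 / arr                    # NNT = 1 / ARR
print(f"NNT over ~13 months: about {round(nnt)} patients")  # -> about 22

target_dose_mg = 25.0            # hypothetical target dose (assumption)
dose_mg = target_dose_mg / 8     # start at 1/8 of target, per the text
schedule = []
while dose_mg < target_dose_mg:
    schedule.append(dose_mg)
    dose_mg *= 2                 # doubling at each step is an assumed pattern
schedule.append(target_dose_mg)
print("Escalation steps (mg):", schedule)  # [3.125, 6.25, 12.5, 25.0]
```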
Anxiety
Officially, beta blockers are not approved for anxiolytic use by the U.S. Food and Drug Administration. However, many controlled trials in the past 25 years indicate beta blockers are effective in anxiety disorders, though the mechanism of action is not known. The physiological symptoms of the fight-or-flight response (pounding heart, cold/clammy hands, increased respiration, sweating, etc.) are significantly reduced, thus enabling anxious individuals to concentrate on the task at hand.
Musicians, public speakers, actors, and professional dancers have been known to use beta blockers to avoid performance anxiety, stage fright, and tremor during both auditions and public performances. The application to stage fright was first recognized in The Lancet in 1976, and by 1987, a survey conducted by the International Conference of Symphony Orchestra Musicians, representing the 51 largest orchestras in the United States, revealed 27% of its musicians had used beta blockers and 70% obtained them from friends, not physicians. Beta blockers are inexpensive, said to be relatively safe, and seem to improve musicians' performances on a technical level, although some, such as Barry Green, the author of "The Inner Game of Music", and Don Greene, a former Olympic diving coach who teaches Juilliard students to overcome their stage fright naturally, say the performances may be perceived as "soulless and inauthentic".
Surgery
Low-certainty evidence indicates that the use of beta blockers around the time of cardiac surgery may decrease the risk of heart dysrhythmias and atrial fibrillation. Starting them around the time of other types of surgery, however, may worsen outcomes. For non-cardiac surgery, the use of beta blockers to prevent adverse effects may reduce the risk of atrial fibrillation and myocardial infarction (very low-certainty evidence); however, there is moderate-certainty evidence that this approach may increase the risk of hypotension. Low-certainty evidence suggests that beta blockers used perioperatively in non-cardiac surgeries may increase the risk of bradycardia.
Performance-enhancing use
Because they promote lower heart rates and reduce tremors, beta blockers have been used in professional sports where high accuracy is required, including archery, shooting, golf, and snooker. Beta blockers are banned in some sports by the International Olympic Committee. In the 2008 Summer Olympics, 50-metre pistol silver medalist and 10-metre air pistol bronze medalist Kim Jong-su tested positive for propranolol and was stripped of his medals. For similar reasons, beta blockers have also been used by surgeons. Classical musicians have commonly used beta blockers since the 1970s to reduce stage fright.
Adverse effects
Adverse drug reactions associated with the use of beta blockers include: nausea, diarrhea, bronchospasm, dyspnea, cold extremities, exacerbation of Raynaud's syndrome, bradycardia, hypotension, heart failure, heart block, fatigue, dizziness, alopecia (hair loss), abnormal vision, hallucinations, insomnia, nightmares, sexual dysfunction, erectile dysfunction, and/or alteration of glucose and lipid metabolism. Mixed α1/β-antagonist therapy is also commonly associated with orthostatic hypotension. Carvedilol therapy is commonly associated with edema. Due to their high penetration across the blood–brain barrier, lipophilic beta blockers, such as propranolol and metoprolol, are more likely than other, less lipophilic beta blockers to cause sleep disturbances, such as insomnia, vivid dreams, and nightmares. Adverse effects associated with β2-adrenergic receptor antagonist activity (bronchospasm, peripheral vasoconstriction, alteration of glucose and lipid metabolism) are less common with β1-selective (often termed "cardioselective") agents, but receptor selectivity diminishes at higher doses. Beta blockade, especially of the beta-1 receptor at the macula densa, inhibits renin release, thus decreasing the release of aldosterone. This causes hyponatremia and hyperkalemia.
Hypoglycemia can occur with beta blockade because β2-adrenoceptors normally stimulate glycogen breakdown (glycogenolysis) in the liver and pancreatic release of the hormone glucagon, which work together to increase plasma glucose. Therefore, blocking β2-adrenoceptors lowers plasma glucose. β1-blockers have fewer metabolic side effects in diabetic patients; however, the fast heart rate that serves as a warning sign for insulin-induced low blood sugar may be masked, resulting in hypoglycemia unawareness. This is termed beta blocker-induced hypoglycemia unawareness. Therefore, beta blockers are to be used cautiously in diabetics. A 2007 study revealed diuretics and beta blockers used for hypertension increase a patient's risk of developing diabetes mellitus, while ACE inhibitors and angiotensin II receptor antagonists (angiotensin receptor blockers) actually decrease the risk of diabetes. Clinical guidelines in Great Britain, but not in the United States, call for avoiding diuretics and beta blockers as first-line treatment of hypertension due to the risk of diabetes. Beta blockers must not be used in the treatment of selective alpha-adrenergic agonist overdose. The blockade of only beta receptors increases blood pressure and reduces coronary blood flow, left ventricular function, cardiac output, and tissue perfusion by leaving alpha-adrenergic stimulation unopposed. Beta blockers with lipophilic properties and CNS penetration, such as metoprolol and labetalol, may be useful for treating CNS and cardiovascular toxicity from a methamphetamine overdose. The mixed alpha- and beta blocker labetalol is especially useful for treatment of concomitant tachycardia and hypertension induced by methamphetamine. The phenomenon of "unopposed alpha stimulation" has not been reported with the use of beta blockers for treatment of methamphetamine toxicity. Other appropriate antihypertensive drugs to administer during hypertensive crisis resulting from stimulant overdose are vasodilators such as nitroglycerin, diuretics such as furosemide, and alpha blockers such as phentolamine.
Contraindications
Contraindications for beta blockers include:
Asthma
The 2007 National Heart, Lung, and Blood Institute (NHLBI) asthma guidelines recommend against the use of non-selective beta blockers in asthmatics, while allowing for the use of cardioselective beta blockers. Cardioselective beta blockers (β1 blockers), if absolutely required, can be prescribed at the lowest possible dose to those with mild to moderate respiratory symptoms. β2-agonists can partially mitigate β-blocker-induced bronchospasm; they reverse bronchospasm caused by selective β1-blockers more effectively than they reverse the worsening of asthma and/or COPD caused by nonselective β-blockers.
Diabetes mellitus
Epinephrine provides an early warning of impending hypoglycemia. Beta blockers' inhibition of epinephrine's effects can somewhat exacerbate hypoglycemia by interfering with glycogenolysis, and can mask signs of hypoglycemia such as tachycardia, palpitations, diaphoresis, and tremors. Diligent blood glucose monitoring is necessary for a patient with diabetes mellitus on a beta blocker.
Hyperthyroidism
Abrupt withdrawal can result in a thyroid storm.
Bradycardia or AV block
Unless a pacemaker is present, beta blockers can severely depress conduction in the AV node, resulting in a reduction of heart rate and cardiac output. One should be very cautious with the use of beta blockers in tachycardic patients with Wolff–Parkinson–White syndrome, as it can result in life-threatening arrhythmia in certain patients. By slowing conduction through the AV node, preferential conduction through the accessory pathway is favored. If the patient happens to develop atrial flutter, this could lead to 1:1 conduction with a very fast ventricular rate, or worse, ventricular fibrillation in the case of atrial fibrillation.
Toxicity
Glucagon, used in the treatment of overdose, increases the strength of heart contractions, increases intracellular cAMP, and decreases renal vascular resistance. It is, therefore, useful in patients with beta blocker cardiotoxicity. Cardiac pacing is usually reserved for patients unresponsive to pharmacological therapy.
People experiencing bronchospasm due to the β2 receptor-blocking effects of nonselective beta blockers may be treated with anticholinergic drugs, such as ipratropium, which are safer than beta agonists in patients with cardiovascular disease. Other antidotes for beta blocker poisoning are salbutamol and isoprenaline.
β-receptor antagonism
Stimulation of β1 receptors by epinephrine and norepinephrine induces a positive chronotropic and inotropic effect on the heart and increases cardiac conduction velocity and automaticity. Stimulation of β1 receptors on the kidney causes renin release. Stimulation of β2 receptors induces smooth muscle relaxation, induces tremor in skeletal muscle, and increases glycogenolysis in the liver and skeletal muscle. Stimulation of β3 receptors induces lipolysis. Beta blockers inhibit these normal epinephrine- and norepinephrine-mediated sympathetic actions, but have minimal effect on resting subjects.
That is, they reduce the effect of excitement or physical exertion on heart rate and force of contraction, and also reduce tremor and the breakdown of glycogen. Beta blockers can have a constricting effect on the bronchi of the lungs, possibly worsening or causing asthma symptoms. Since β2 adrenergic receptors can cause vascular smooth muscle dilation, beta blockers may cause some vasoconstriction. However, this effect tends to be small because the activity of β2 receptors is overshadowed by the more dominant vasoconstricting α1 receptors. By far the greatest effect of beta blockers remains in the heart. Newer, third-generation beta blockers can cause vasodilation through blockade of alpha-adrenergic receptors. Accordingly, nonselective beta blockers are expected to have antihypertensive effects. The primary antihypertensive mechanism of beta blockers is unclear, but may involve reduction in cardiac output (due to negative chronotropic and inotropic effects). It may also be due to reduction in renin release from the kidneys, and a central nervous system effect to reduce sympathetic activity (for those beta blockers that do cross the blood–brain barrier, e.g. propranolol).
Antianginal effects result from negative chronotropic and inotropic effects, which decrease cardiac workload and oxygen demand. The negative chronotropic properties of beta blockers allow for lifesaving heart rate control, and beta blockers are readily titrated to optimal rate control in many pathologic states.
The antiarrhythmic effects of beta blockers arise from sympathetic nervous system blockade—resulting in depression of sinus node function and atrioventricular node conduction, and prolonged atrial refractory periods. Sotalol, in particular, has additional antiarrhythmic properties and prolongs action potential duration through potassium channel blockade.
Blockade of the sympathetic nervous system on renin release leads to reduced aldosterone via the renin–angiotensin–aldosterone system, with a resultant decrease in blood pressure due to decreased sodium and water retention.
Intrinsic sympathomimetic activity
Also referred to as intrinsic sympathomimetic effect, this term is used particularly with beta blockers that can show both agonism and antagonism at a given beta receptor, depending on the concentration of the agent (beta blocker) and the concentration of the antagonized agent (usually an endogenous compound, such as norepinephrine). See partial agonist for a more general description.
Some beta blockers (e.g. oxprenolol, pindolol, penbutolol, labetalol and acebutolol) exhibit intrinsic sympathomimetic activity (ISA). These agents are capable of exerting low-level agonist activity at the β-adrenergic receptor while simultaneously acting as a receptor site antagonist. These agents, therefore, may be useful in individuals exhibiting excessive bradycardia with sustained beta blocker therapy.
Agents with ISA should not be used in patients with any kind of angina, as they can aggravate it, or after myocardial infarction. They may also be less effective than other beta blockers in the management of angina and tachyarrhythmia.
α1-receptor antagonism
Some beta blockers (e.g., labetalol and carvedilol) exhibit mixed antagonism of both β- and α1-adrenergic receptors, which provides additional arteriolar vasodilating action.
Examples
Nonselective agents
Nonselective beta blockers display both β1 and β2 antagonism.
Propranolol
Bucindolol (has additional α1-blocking activity)
Carteolol
Carvedilol (has additional α1-blocking activity)
Labetalol (has intrinsic sympathomimetic activity and additional α1-blocking activity)
Nadolol
Oxprenolol (has intrinsic sympathomimetic activity)
Penbutolol (has intrinsic sympathomimetic activity)
Pindolol (has intrinsic sympathomimetic activity)
Sotalol (not considered a "typical beta blocker")
Timolol
β1-selective agents
β1-selective beta blockers are also known as cardioselective beta blockers. Pharmacologically, beta-blockade of the β1 receptors in the heart acts on cAMP signaling. cAMP functions as a second messenger in the cardiac cell by activating protein kinase A, which phosphorylates the L-type calcium channel (LTCC) and the ryanodine receptor to increase intracellular calcium levels and cause contraction. Blockade of the β1 receptor reduces this cAMP-dependent phosphorylation and thereby decreases the inotropic and chronotropic effects </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Biliary atresia and can you explain it in depth? [/INST] Biliary atresia, also known as extrahepatic ductopenia and progressive obliterative cholangiopathy, is a childhood disease of the liver in which one or more bile ducts are abnormally narrow, blocked, or absent. It can be congenital or acquired. It has an incidence of one in 10,000–15,000 live births in the United States, and a prevalence of one in 16,700 in the British Isles. Biliary atresia is most common in East Asia, with a frequency of one in 5,000.
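To compare the incidence figures just quoted on a common scale, they can be normalized to cases per 100,000 live births; a minimal sketch (the "1 in N" ratios are from the text, the rounding is illustrative):

```python
# Normalize "1 in N" incidence figures to cases per 100,000 live births.
incidence_denominators = {
    "United States (low end)": 15_000,   # 1 in 10,000-15,000
    "United States (high end)": 10_000,
    "British Isles": 16_700,             # 1 in 16,700
    "East Asia": 5_000,                  # 1 in 5,000
}

for region, n in incidence_denominators.items():
    rate = 100_000 / n
    print(f"{region}: {rate:.1f} per 100,000 live births")
# East Asia (~20/100,000) is roughly 2-3x the US rate (~6.7-10/100,000).
```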
The cause of biliary atresia in Egyptian infants has been shown to be aflatoxin-induced cholangiopathy acquired prenatally in infants who have glutathione S-transferase M1 deficiency. The biliary atresia phenotype caused by congenital aflatoxicosis in GST M1-deficient neonates is named Kotb disease. Syndromic biliary atresia (e.g. Biliary Atresia Splenic Malformation (BASM)) has been associated with certain genes (e.g. Polycystic Kidney Disease 1 Like 1 - PKD1L1), and some cases of isolated biliary atresia may arise as a result of an autoimmune inflammatory response, possibly due to a viral infection of the liver soon after birth. In animals, plant toxins have been shown to cause biliary atresia. The only effective treatments are operations such as the Kasai procedure and liver transplantation.
Signs and symptoms
Initially, the symptoms of biliary atresia are indistinguishable from those of neonatal jaundice, a usually harmless condition commonly seen in infants. However, infants with biliary atresia develop progressive conjugated jaundice, pale white stools, and dark urine. Some infants fail to thrive as there will be a degree of fat and fat-soluble vitamin malabsorption (e.g. Vitamin K). This may cause a bleeding tendency. Eventually, and usually after 2 months, cirrhosis with portal hypertension will develop. If left untreated, biliary atresia can lead to liver failure. Unlike other forms of jaundice, however, biliary-atresia-related cholestasis mostly does not result in kernicterus, a form of brain damage resulting from liver dysfunction. This is because in biliary atresia, the liver, although diseased, is still able to conjugate bilirubin, and conjugated bilirubin is unable to cross the blood–brain barrier.
Causes
The cause of biliary atresia in most infants is not fully understood, and a number of factors may play a role. Maternal rotavirus infection during pregnancy, with subsequent transmission of the virus to the child resulting in infection of the biliary epithelium and occluding fibrosis, may be particularly important in this respect. Some cases may relate to infection with other viruses (including COVID-19) as well; for instance, infection with the hepatotropic virus reovirus 3 has been proposed, as has congenital cytomegalovirus infection. In addition, autoimmune processes may contribute to pathogenesis in some cases. However, for these alternative causes the experimental evidence remains rather weak.
Genetics
An association between biliary atresia and the ADD3 gene was first detected in Chinese populations through a genome-wide association study, and was confirmed in Thai Asians and Caucasians. A possible association with deletion of the gene GPC1, which encodes glypican-1, a heparan sulfate proteoglycan, has been reported. This gene is located on the long arm of chromosome 2 (2q37) and is involved in the regulation of inflammation and of Hedgehog signaling. Egyptian infants with biliary atresia were found to have a null GSTM1 genotype while all their mothers were heterozygous for GSTM1. Thus these infants may be protected in utero by their maternal detoxification system, yet once born they cannot handle the detoxification of an aflatoxin load.
Toxins
Some cases of biliary atresia may result from exposure to aflatoxin B1, and to a lesser extent aflatoxin B2, during late pregnancy. Intact maternal detoxification protects the baby during intrauterine life, yet after delivery the baby struggles with the aflatoxin in its blood and liver. Moreover, the baby ingests aflatoxin M1 from its mother, as aflatoxin M1 is the detoxification product of aflatoxin B1; it is a milder toxin that causes cholangitis in the baby. There are isolated examples of biliary atresia in animals. For instance, lambs born to sheep grazing on land contaminated with a weed (red crumbweed) developed biliary atresia at certain times. The plants were later found to contain a toxin, now called biliatresone. Studies are ongoing to determine whether there is a link between human cases of biliary atresia and toxins such as biliatresone. There are some indications that a metabolite of certain human gut bacteria may be similar to biliatresone.
Pathophysiology
There are three main types of extra-hepatic biliary atresia:
Type I: Atresia is restricted to the common bile duct.
Type II: Atresia of the common hepatic duct.
Type III: Atresia involves the most proximal part of the bile ducts (>95% of all cases). In approximately 10% of cases, other anomalies may be associated with biliary atresia. The most common of these syndromic forms is BASM and might include heart lesions, polysplenia, situs inversus, absent venae cavae, and a preduodenal portal vein. Progressive cirrhosis is associated with signs and symptoms of portal hypertension, such as esophagogastric varix bleeding, hypersplenism, hepatorenal syndrome, and hepatopulmonary syndrome. In an Egyptian study, abnormally high levels of aflatoxin B1, and to a lesser extent aflatoxin B2, were found in liver tissue and blood of all neonates with biliary atresia. Aflatoxins may cause extensive damage to the hepatocytes, leading to hepatitis, and damage to the bile ducts, causing inflammation, adhesions, and final obstruction of the bile ducts. The affected neonates have a genetic detoxification defect that does not allow them to detoxify these aflatoxins promptly or effectively: the babies have a homozygous deficiency of glutathione S-transferase (GST) M1. The aflatoxin-damaged liver cells and bile duct cells are removed by neutrophil elastase and by involvement of immune system mediators such as CCL-2 or MCP-1, tumor necrosis factor (TNF), interleukin-6 (IL-6), TGF-beta, endothelin (ET), and nitric oxide (NO). Among these, TGF-beta is the most important pro-fibrogenic cytokine seen in progressive cirrhosis. The cascade of immune involvement to remove damaged hepatocytes and cholangiocytes ushers in regeneration. Yet in infants with biliary atresia regeneration is defective and results in cirrhosis, as these infants have disrupted p53 and disrupted GSTPi; p53 and GSTPi are responsible for DNA fidelity during regeneration. Hence, these infants develop accelerated cirrhosis and progress to portal hypertension.
Diagnosis
Diagnosis is made by an assessment of history and physical examination, in conjunction with blood tests, a liver biopsy, and ultrasound imaging, and is prompted by prolonged or persistent jaundice with abnormalities in liver function tests. Ultrasound or other forms of imaging, such as radio-isotope liver scans, can also be used, but final confirmation is usually only reached at the time of exploratory surgery.
Differential diagnoses
The differential diagnoses are extensive and include: Alagille syndrome, alpha-1-antitrypsin deficiency, Byler disease (progressive familial intrahepatic cholestasis), Caroli disease, choledochal cyst, cholestasis, congenital cytomegalovirus disease, congenital herpes simplex virus infection, congenital rubella, congenital syphilis, congenital toxoplasmosis, cystic fibrosis, galactosemia, idiopathic neonatal hepatitis, lipid storage disorders, neonatal hemochromatosis, and total parenteral nutrition-associated cholestasis.
Treatment
Most (>95%) infants with biliary atresia will undergo an operation designed to retain and salvage the native liver, restore bile flow, and reduce the level of jaundice. This is known as the Kasai procedure (after Morio Kasai, the Japanese surgeon who first developed the technique) or hepatoportoenterostomy. Although the procedure is not thought of as curative, it may relieve jaundice and stop liver fibrosis, allowing normal growth and development. Published series from Japan, North America, and the UK show that bilirubin levels will fall to normal values in about 50-55% of infants, allowing 40-50% to retain their own liver to reach the age of 5 and 10 years (and beyond). Liver transplantation is an option for those children whose liver function and symptoms fail to respond to a Kasai operation. Recent large-scale studies by Davenport et al </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Binge eating disorder and can you explain it in depth? [/INST] Binge eating disorder (BED) is an eating disorder characterized by frequent and recurrent binge eating episodes with associated negative psychological and social problems, but without the compensatory behaviors common to bulimia nervosa, OSFED, or the binge-purge subtype of anorexia nervosa.
BED is a recently described condition, introduced to distinguish binge eating similar to that seen in bulimia nervosa but without the characteristic purging. Individuals who are diagnosed with bulimia nervosa and binge eating disorder exhibit similar patterns of compulsive overeating, neurobiological features of dysfunctional cognitive control and food addiction, and biological and environmental risk factors. Some professionals consider BED to be a milder form of bulimia, with the two conditions on the same spectrum. Binge eating is one of the most prevalent eating disorders among adults, though there tends to be less media coverage and research about the disorder in comparison to anorexia nervosa and bulimia nervosa.
Signs and symptoms
Binge eating is the core symptom of BED; however, not everyone who binge eats has BED. An individual may occasionally binge eat without experiencing many of the negative physical, psychological, or social effects of BED. This may be considered disordered eating rather than a clinical disorder. Precisely defining binge eating can be problematic, however binge eating episodes in BED are generally described as having the following potential features:
Eating much faster than normal, perhaps in a short space of time
Eating until feeling uncomfortably full
Eating a large amount when not hungry
Subjective loss of control over how much or what is eaten
Binges may be planned in advance, involving the purchase of special binge foods, and the allocation of specific time for binging, sometimes at night
Eating alone or secretly due to embarrassment over the amount of food consumed
There may be a dazed mental state during the binge
Not being able to remember what was eaten after the binge
Feelings of guilt, shame or disgust following a food binge
Body image disturbance
In contrast to bulimia nervosa, binge eating episodes are not regularly followed by activities intended to compensate for the amount of food consumed, such as self-induced vomiting, laxative or enema misuse, or strenuous exercise. BED is characterized more by overeating than dietary restriction. Those with BED often have poor body image and frequently diet, but are unsuccessful due to the severity of their binge eating. Obesity is common in persons with BED, as are depression, low self-esteem, stress, and boredom. Those with BED are also at risk of non-alcoholic fatty liver disease, menstrual irregularities such as amenorrhea, and gastrointestinal problems such as acid reflux and heartburn.
Causes
As with other eating disorders, binge eating is an "expressive disorder"—a disorder that is an expression of deeper psychological problems. People who have binge eating disorder have been found to have higher weight bias internalization, which includes low self-esteem, unhealthy eating patterns, and general body dissatisfaction. Binge eating disorder commonly develops as a result or side effect of depression, as it is common for people to turn to comfort foods when they are feeling down. There was resistance to giving binge eating disorder the status of a fully fledged eating disorder because many perceived it to be caused by individual choices. Previous research has focused on the relationship between body image and eating disorders, and concludes that disordered eating might be linked to rigid dieting practices. In the majority of cases of anorexia, extreme and inflexible restriction of dietary intake leads at some point to the development of binge eating, weight regain, bulimia nervosa, or a mixed form of eating disorder not otherwise specified. Binge eating may begin when individuals recover from an adoption of rigid eating habits. When under a strict diet that mimics the effects of starvation, the body may be preparing for a new type of behavior pattern, one that consumes a large amount of food in a relatively short period of time. Some studies show that BED aggregates in families and could be genetic; however, very few published studies on its genetics exist. Other research suggests that binge eating disorder can also be caused by environmental factors and the impact of traumatic events. One study showed that women with binge eating disorder experienced more adverse life events in the year prior to the onset of the disorder, and that binge eating disorder was positively associated with how frequently negative events occur. Additionally, the research found that individuals who had binge eating disorder were more likely to have experienced physical abuse, perceived risk of physical abuse, stress, and body criticism. Other risk factors may include childhood obesity, critical comments about weight, low self-esteem, depression, and physical or sexual abuse in childhood. A systematic review concluded that bulimia nervosa and binge eating disorder are more impacted by family separations, losses, and negative parent-child interactions than anorexia nervosa. A few studies have suggested that there could be a genetic component to binge eating disorder, though other studies have shown more ambiguous results. Studies have shown that binge eating tends to run in families, and a twin study by Bulik, Sullivan, and Kendler has shown a "moderate heritability for binge eating" of 41 percent. More research must be done before any firm conclusions can be drawn regarding the heritability of binge eating disorder. Studies have also shown that eating disorders such as anorexia and bulimia reduce coping abilities, which makes it more likely for those suffering to turn to binge eating as a coping strategy. A correlation between dietary restraint and the occurrence of binge eating has been shown in some research. While binge eaters are often believed to be lacking in self-control, the root of such behavior might instead be linked to rigid dieting practices. The relationship between strict dieting and binge eating is characterized by a vicious circle: binge eating is more likely to occur after dieting, and vice versa.
Several forms of dieting include delayed eating (e.g., not eating during the day), restriction of overall calorie intake (e.g., setting a calorie limit of 1,000 calories per day), and avoidance of certain types of food (e.g., "forbidden" foods, such as sugar, carbohydrates, etc.). Strict and extreme dieting differs from ordinary dieting. Some evidence suggests the effectiveness of moderate calorie restriction in decreasing binge eating episodes among overweight individuals with binge eating disorder, at least in the short term. "In the U.S., it is estimated that 3.5% of young women and 30% to 40% of people who seek weight loss treatments can be clinically diagnosed with binge eating disorder."
Diagnosis
International Classification of Diseases
BED was first included in the Diagnostic and Statistical Manual of Mental Disorders (DSM) in 1994, simply as a feature of eating disorders. In 2013 it gained formal recognition as a psychiatric condition in the DSM-5. The 2017 update to the American version of the ICD-10 includes BED under F50.81. ICD-11 may contain a dedicated entry (6B62), defining BED as frequent, recurrent episodes of binge eating (once a week or more over a period of several months) which are not regularly followed by inappropriate compensatory behaviors aimed at preventing weight gain.
Diagnostic and Statistical Manual
Previously considered a topic for further research exploration, binge eating disorder was included in the Diagnostic and Statistical Manual of Mental Disorders in 2013. Until 2013, binge eating disorder was categorized as an Eating Disorder Not Otherwise Specified, an umbrella category for eating disorders that don't fall under the categories for anorexia nervosa or bulimia nervosa. Because it was not a recognized psychiatric disorder in the DSM until 2013, it has been difficult to obtain insurance reimbursement for treatments. The disorder now has its own category under DSM-5, which outlines the signs and symptoms that must be present to classify a person's behavior as binge eating disorder. Studies have confirmed the high predictive value of these criteria for diagnosing BED. According to the World Health Organization's ICD-11 classification of BED, the severity of the disorder can be classified as mild (1-3 episodes/week), moderate (4-7 episodes/week), severe (8-13 episodes/week), and extreme (>14 episodes/week). One study claims that the method for diagnosing BED is for a clinician to conduct a structured interview using the DSM-5 criteria or taking the Eating Disorder Examination. The Structured Clinical Interview takes no more than 75 minutes to complete and has a systematic approach which follows the DSM-5 criteria. The Eating Disorder Examination is a semi-structured interview which identifies the frequency of binges and associated eating disorder features.
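The severity thresholds quoted above translate into a simple decision rule. A minimal sketch follows; note that the quoted ranges leave exactly 14 episodes/week unassigned ("severe (8-13)" vs "extreme (>14)"), so treating 14 as extreme is an assumption on my part:

```python
def bed_severity(weekly_binge_episodes: int) -> str:
    """Classify BED severity from binge-eating episodes per week, using the
    thresholds quoted above (illustrative only, not a diagnostic tool)."""
    if weekly_binge_episodes < 1:
        return "below the frequency criterion"
    if weekly_binge_episodes <= 3:
        return "mild"
    if weekly_binge_episodes <= 7:
        return "moderate"
    if weekly_binge_episodes <= 13:
        return "severe"
    return "extreme"  # assumption: 14+ counted as extreme

# Usage example:
for n in (1, 4, 8, 14):
    print(n, "episodes/week ->", bed_severity(n))
```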
Treatment
Counselling and certain medication, such as lisdexamfetamine and selective serotonin reuptake inhibitor (SSRIs), may help. Some recommend a multidisciplinary approach in the treatment of the disorder.
Counselling
Cognitive behavioral therapy (CBT) treatment has been demonstrated to be a more effective form of treatment for BED than behavioral weight loss programs. 50 percent of BED individuals achieve complete remission from binge eating, and 68-90% will reduce the number of binge eating episodes they have. CBT has also been shown to be an effective method to address self-image issues and psychiatric comorbidities (e.g., depression) associated with the disorder. The goal of CBT is to interrupt binge-eating behaviour, learn to create a normal eating schedule, change the perception around weight and shape, and develop positive attitudes about one's body. Although this treatment is successful in eliminating binge eating episodes, it does not lead to losing any weight. Recent reviews have concluded that psychological interventions such as psychotherapy and behavioral interventions are more effective than pharmacological interventions for the treatment of binge eating disorder. A meta-analysis concluded that psychotherapy based on CBT not only significantly improved binge-eating symptomatology but also significantly reduced a client's BMI at post-treatment and at more than 6 and 12 months after treatment. There are also 12-step programs such as Overeaters Anonymous and Food Addicts in Recovery Anonymous. Behavioral weight loss treatment has been proven to be effective as a means to achieve weight loss amongst patients.
Medication
Lisdexamfetamine is a USFDA-approved drug that is used for the treatment of moderate to severe binge eating disorder in adults. As of 2021, it is the first and only medication formally approved for the treatment of BED. It is thought that lisdexamfetamine treats BED through a combination of effects on appetite and satiety, reward, and cognitive processes, including attention, impulsivity, and behavioral inhibition. Three other classes of medications are also used in the treatment of binge eating disorder: antidepressants, anticonvulsants, and anti-obesity medications. Antidepressant medications of the selective serotonin reuptake inhibitor (SSRI) class have been found to effectively reduce episodes of binge eating and reduce weight. Similarly, anticonvulsant medications such as topiramate and zonisamide may be able to effectively suppress appetite. The long-term effectiveness of medication for binge eating disorder is currently unknown. For BED patients with manic episodes, risperidone is recommended. If BED patients have bipolar depression, lamotrigine is appropriate to use. Trials of antidepressants, anticonvulsants, and anti-obesity medications suggest that these medications are superior to placebo in reducing binge eating. Medications are not considered the treatment of choice because psychotherapeutic approaches, such as CBT, are more effective than medications for binge eating disorder. A meta-analysis concluded that using medications did not reduce binge-eating episodes and BMI post-treatment at 6–12 months, indicating a potential for relapse after withdrawal from the medications. Medications also do not increase the effectiveness of psychotherapy, though some patients may benefit from anticonvulsant and anti-obesity medications, such as phentermine/topiramate, for weight loss. Blocking opioid receptors leads to less food intake. Additionally, bupropion and naltrexone used together may cause weight loss. Combining these alongside psychotherapies like CBT may lead to better outcomes for BED.
Surgery
Bariatric surgery has also been proposed as another approach to treat BED and a recent meta-analysis showed that approximately two-thirds of individuals who seek this type of surgery for weight loss purposes have BED. Bariatric surgery recipients who had BED prior to receiving the surgery tend to have poorer weight-loss outcomes and are more likely to continue to exhibit eating behaviors characteristic of BED.
Lifestyle Interventions
Other treatments for BED include lifestyle interventions like weight training, peer support groups, and investigation of hormonal abnormalities.
Prognosis
Individuals with BED often have a lower overall quality of life and commonly experience social difficulties. Early behavior change is an accurate predictor of remission of symptoms later. Individuals who have BED commonly have other comorbidities such as major depressive disorder, personality disorder, bipolar disorder, substance abuse, body dysmorphic disorder, kleptomania, irritable bowel syndrome, fibromyalgia, or an anxiety disorder. Individuals may also exhibit varying degrees of panic attacks and a history of attempted suicide. While people of a normal weight may overeat occasionally, an ongoing habit of consuming large amounts of food in a short period of time may ultimately lead to weight gain and obesity. The main physical health consequences of this type of eating disorder are brought on by the weight gain resulting from calorie-laden bingeing episodes. Mental and emotional consequences of binge eating disorder include social weight stigma and emotional loss of control. Up to 70% of individuals with BED may also be obese, and therefore obesity-associated morbidities such as high blood pressure, coronary artery disease, type 2 diabetes mellitus, gastrointestinal issues (e.g., gallbladder disease), high cholesterol levels, musculoskeletal problems, and obstructive sleep apnea may also be present.
Epidemiology
General
The prevalence of BED in the general population is approximately 1-3%. BED cases usually first occur between the ages of 12.4 and 24.7, but prevalence rates increase until the age of 40. Binge eating disorder is the most common eating disorder in adults. The limited amount of research that has been done on BED shows that rates of binge eating disorder are fairly comparable among men and women. The lifetime prevalence of binge eating disorder has been observed in studies to be 2.0 percent for men and 3.5 percent for women, higher than that of the commonly recognized eating disorders anorexia nervosa and bulimia nervosa. However, another systematic literature review found the average prevalence to be about 2.3% in women and about 0.3% in men. Lifetime prevalence rates for BED in women can range anywhere from 1.5 to 6 times higher than in men. One literature review found that point prevalence rates for BED vary from 0.1 percent to 24.1 percent depending on the sample. This same review also found that the 12-month prevalence rates vary between 0.1 percent and 8.8 percent. Recent studies found that eating disorders, including anorexia nervosa, bulimia nervosa, and binge-eating disorder, are common among sexual and gender minority populations, including gay, lesbian, bisexual, and transgender people. This could be due to the minority stress and discrimination this population experiences. Due to limited and inconsistent information and research on ethnic and racial differences, prevalence rates are hard to determine for BED. Rates of binge eating disorder have been found to be similar among black women, white women, and white men, while some studies have shown that binge eating disorder is more common among black women than among white women. However, the majority of the research done on BED is focused on White women. One literature review found information citing no difference between BED prevalence among Hispanic, African American, and White women, while other information found that BED prevalence was highest among Hispanics, followed by Black individuals, and finally White people.
Worldwide Prevalences
Eating disorders have usually been considered specific to Western countries. However, the prevalence of eating disorders is increasing in non-Western countries as well. Though the research on binge eating disorder tends to be concentrated in North America, the disorder occurs across cultures </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Blastic plasmacytoid dendritic cell neoplasm and can you explain it in depth? [/INST] Blastic plasmacytoid dendritic cell neoplasm (BPDCN) is a rare hematologic malignancy. It was initially regarded as a form of lymphocyte-derived cutaneous lymphoma and alternatively named CD4+CD56+ hematodermic tumor, blastic NK cell lymphoma, and agranular CD4+ NK cell leukemia. Later, however, the disease was determined to be a malignancy of plasmacytoid dendritic cells rather than lymphocytes and therefore termed blastic plasmacytoid dendritic cell neoplasm. In 2016, the World Health Organization designated BPDCN to be in its own separate category within the myeloid class of neoplasms. It is estimated that BPDCN constitutes 0.44% of all hematological malignancies. Blastic plasmacytoid dendritic cell neoplasm is an aggressive malignancy with features of cutaneous lymphoma (e.g. malignant plasmacytoid dendritic cell infiltrations into the skin to form single or multiple lesions) and/or leukemia (i.e. malignant plasmacytoid dendritic cells in blood and bone marrow). While commonly presenting with these clinical features, BPDCN, particularly in its more advanced stages, may also involve malignant plasmacytoid dendritic cell infiltrations in, and thereby injury to, the liver, spleen, lymph nodes, central nervous system, or other tissues. The neoplasm occurs in individuals of all ages but predominates in the elderly; in children, it afflicts males and females equally, but in adults it is far more common (~75% of cases) in males. Blastic plasmacytoid dendritic cell neoplasm typically responds to chemotherapy regimens used to treat hematological malignancies. All too often, however, the disease rapidly recurs and does so in a more drug-resistant form. Furthermore, the disease may occur in association with the myelodysplastic syndrome or transform to acute myeloid leukemia. Consequently, BPDCN has a very low 5 year survival rate. Current translational research studies on treating BPDCN have therefore focused on non-chemotherapeutic regimens that target the molecular pathways which may promote the disease.
Presentation
Blastic plasmacytoid dendritic cell neoplasm occurs in children, including neonates, but is more common in adults, particularly those between the ages 60–80. BPDCN usually (i.e. 61% to 90% of cases) presents with skin lesions, i.e. nodules, tumors, red or purple papules, bruise-like patches, and/or ulcers that most often occur on the head, face, and upper torso. The lesions are due to diffuse infiltrations of the skin by malignant pDC. In one large study, this presentation was accompanied by swollen lymph nodes, usually in the neck, due to malignant pDC infiltrations (~50% of cases); enlarged liver (~16% of cases) and/or spleen (26% of cases), also due to malignant pDC infiltrations; increased levels of malignant pDC in blood (i.e. >2% of nucleated cells) (~40% of cases), bone marrow (~65% of cases) and cerebrospinal fluid (47% of childhood cases but less often detected in adult cases). More advanced or severe cases may present with extreme organ and/or lymph node enlargements, skin lesions in virtually any site, and clinical evidence of malignant pDC infiltrations in the breasts, eyes, kidneys, lungs, gastrointestinal tract, bone, sinuses, ears, or testes. About 10% of individuals with BPDCN present with a leukemia-like disease, i.e. they exhibit circulating malignant pDC, anemia, thrombocytopenia, and/or leukopenia due to extensive malignant pDC infiltrations in the bone marrow. A leukemic phase of the disease is a common feature of end stage and post-therapy relapsing BPDCN.
Pathophysiology
There are three types of dendritic cells, plasmacytoid dendritic cells (pDC) and two types of conventional dendritic cells (cDC), myeloid cDC1 and myeloid cDC2. pDC circulate in the blood, representing <0.4% of all nucleated blood cells, and are present in various hematological tissues such as lymph nodes and spleen. Their major function is to detect and then initiate immune responses to intracellular pathogens, particularly viruses such as the cold sore-causing Herpes simplex viruses, HIV, and hepatitis viruses, but also bacteria such as the tuberculosis-causing Mycobacterium tuberculosis, fungi such as the aspergillosis-causing Aspergillus fumigatus, and parasites such as the malaria-causing Plasmodium falciparum. Following detection of these intracellular pathogens, pDC initiate immune responses by producing massive amounts of type I and type III interferons as well as by differentiating (i.e. maturing) into conventional dendritic cells that further promote immune responses by, e.g., functioning as antigen-presenting cells. The malignant pDC in BPDCN have the appearance of immature plasmacytoid dendritic cells. They are distinguished from other dendritic, myeloid, lymphoid and NK cell types by exhibiting at least several of the following properties: 1) plasmacytoid morphology; 2) production of large amounts of type I interferons when properly stimulated; 3) ability to differentiate into conventional dendritic cells when properly stimulated; 4) the expression of key marker proteins such as granzyme B, TCF4, interleukin-3 receptor (i.e. CD123), CLEC4C, and Neuropilin; and 5) failure to express certain marker proteins that are commonly expressed by myeloid, lymphoid, and NK cell lineages. Blastic plasmacytoid dendritic cell neoplasm typically arises after the serial acquisition of multiple genetic abnormalities in pDC or their precursor cells. Inactivating mutations (i.e. mutations which cause the gene to make no or a less active product) in the TET2 gene are the most common genetic abnormality in the disease, occurring in 32–67% of all BPDCN cases and often accompanied by mutations in either the NPM1 or SRSF2 gene. Numerous other genetic abnormalities are associated with the disease: 1) mutations in NRAS, ASXL1, and TP53; 2) deletions of the CDKN2A-ARF-CDKN2B locus on the short arm of chromosome 9, the CDKN1B locus on the short arm of chromosome 12, the RB1 locus on the long arm of chromosome 13, or the NR3C1 locus on the long arm of chromosome 5; 3) fusions of KMT2A on the long arm of chromosome 11 with MLLT10 on the short arm of chromosome 10, SUPT3H on the short arm of chromosome 6 with MYC on the long arm of chromosome 8, or KMT2A on the long arm of chromosome 11 with MLLT1 on the short arm of chromosome 19; and 4) duplication or loss of entire chromosomes, particularly chromosomes 9, 13, or 15. Laboratory studies indicate that malignant pDC have a pathologically overactive NF-κB pathway that promotes their survival and production of various cytokines that stimulate their own proliferation. Presumably, these genetic abnormalities lead to the activation of the NF-κB pathway and/or other cellular activation pathways which promote the survival, proliferation, and/or other malignant phenotypic traits in pDC and thereby cause BPDCN.
Diagnosis
BPDCN is suggested by a biopsy of skin lesions which reveals the infiltration of medium-sized blast (i.e. immature) cells into the dermis while sparing the epidermis. These cells exhibit irregular nuclei, fine chromatin, and at least one small nucleolus. Such blast cells may also be observed in the circulation, bone marrow, or other tissues and suggest BPDCN. However, the diagnosis of this disease requires determination that these cells are pDC blast cells rather than AML, T-cell lymphoblastic lymphoma (TCLL), or aggressive NK-cell leukemia (NKL) blast cells. Various studies have offered similar but not identical criteria to make this determination. All studies agree that pDC should have a typical plasmacytoid morphology and express a particular profile of marker proteins as detected by immunoassay and/or flow cytometry. However, the studies disagree on which marker proteins to profile. One study's profile assayed 1) CD4, CD56, CD123 (i.e. interleukin-3 receptor), and TCL1, which are expressed on 80–100% of pDC but uncommon on AML, TCLL, or NKL blasts; 2) CD2AP and CLEC4C, which are unique to pDC; and 3) myeloperoxidase, lysozyme, CD34, CD14, CD11c, and CD163, which are unique to AML, TCLL, or NKL blasts. Two other studies recommended assaying somewhat different sets of marker proteins.
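As an illustration only, the marker-profile logic of that first study can be written as a small decision sketch. The boolean inputs and the helper name are hypothetical; real immunophenotyping is quantitative flow cytometry, and this is not a diagnostic algorithm:

```python
# Sketch of the marker-profile reasoning described above (illustrative only).
PDC_MARKERS = {"CD4", "CD56", "CD123", "TCL1",   # on 80-100% of pDC
               "CD2AP", "CLEC4C"}                # unique to pDC
NON_PDC_MARKERS = {"myeloperoxidase", "lysozyme", "CD34",
                   "CD14", "CD11c", "CD163"}     # unique to AML/TCLL/NKL blasts

def consistent_with_pdc_blasts(positive: set) -> bool:
    """True if the positive markers fit the pDC profile: pDC markers present
    and AML/TCLL/NKL markers absent. Hypothetical helper with boolean inputs."""
    return bool(positive & PDC_MARKERS) and not (positive & NON_PDC_MARKERS)

# Usage with hypothetical immunophenotypes:
print(consistent_with_pdc_blasts({"CD4", "CD56", "CD123", "CLEC4C"}))  # True
print(consistent_with_pdc_blasts({"CD34", "myeloperoxidase"}))         # False
```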
Treatment
There have been no controlled studies to define the optimal treatment for BPDCN. Studies on small numbers of individuals with the disease have found that the standard chemotherapy regimens used for the initial induction treatments of AML, acute lymphoblastic leukemia, and high-grade lymphoma give complete remission rates of 77%, 93%, and 80%, respectively, in childhood BPDCN and 47%, 77%, and 53%, respectively, in adult BPDCN. However, these remissions were short-lived: post-treatment mean times to relapse or death were 12 months for children and 6.8 months for adults. Given these poor remission and survival rates, other treatments have been added to the initial treatment regimens. Studies have shown that the addition of intrathecally administered drugs (administered directly into the spinal canal) as prophylaxis prolongs the period of CNS-free disease and increases overall survival. Hematopoietic stem cell transplantation following initial chemotherapy-induced remission also prolongs these remissions and, it is suggested, offers potential for curing the disease. (A graft-versus-leukemia effect may have contributed to the benefits seen after transplantation.) Studies have not yet determined whether allogeneic (i.e. taken from others) or autologous (i.e. taken from self) stem cells achieve better results, although one retrospective study in Japan found that autologous stem cells gave significantly better overall and progression-free survival rates. A phase I clinical research study to test the safety and efficacy of a combination chemotherapy regimen consisting of methotrexate, L-asparaginase, idarubicin, and dexamethasone followed by allogeneic or autologous bone marrow transplantation in 26 participants newly diagnosed with BPDCN is planned but not yet in its recruiting phase. While few studies have reported on the treatment of BPDCN that has recurred following initial therapy, donor lymphocyte infusions coupled with alternative chemotherapy treatments have induced second complete or partial remissions in a few patients.
Tagraxofusp-erzs
Tagraxofusp-erzs (trade name Elzonris; formerly SL-401 and DT388-IL3) was approved in the United States in December 2018 for the treatment of BPDCN. Tagraxofusp-erzs is a fusion protein consisting of interleukin 3 (i.e. IL-3) fused to diphtheria toxin. The fusion protein readily kills cultured pDC by binding to their IL-3 receptors, thereby gaining entrance to the cells, and then blocking these cells' protein synthesis (due to diphtheria toxin-mediated inhibition of eukaryotic elongation factor 2).
Prognosis
Due to the high rates of recurrence following initial therapy and the short overall survival times of individuals with BPDCN, prognosis of the disease is poor. However, further study of treatment regimens that include intrathecal chemotherapy and hematological stem cell transplantation in initial treatment regimens (see previous section) and newer non-chemotherapeutic drug treatments (see next section) may improve this situation.
Research
UCART123
UCART123 are chimeric T cell receptor-bearing cells, i.e </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Osteomyelitis and can you explain it in depth? [/INST] Osteomyelitis (OM) is an infection of bone. Symptoms may include pain in a specific bone with overlying redness, fever, and weakness. The long bones of the arms and legs are most commonly involved in children, e.g. the femur and humerus, while the feet, spine, and hips are most commonly involved in adults. The cause is usually a bacterial infection, but rarely it can be a fungal infection. It may occur by spread from the blood or from surrounding tissue. Risks for developing osteomyelitis include diabetes, intravenous drug use, prior removal of the spleen, and trauma to the area. Diagnosis is typically suspected based on symptoms and basic laboratory tests such as C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR), because plain radiographs are unremarkable in the first few days following acute infection. Diagnosis is further confirmed by blood tests, medical imaging, or bone biopsy. Treatment of bacterial osteomyelitis often involves both antimicrobials and surgery. In people with poor blood flow, amputation may be required. Treatment of the relatively rare fungal osteomyelitis, such as mycetoma infections, entails the use of antifungal medications. In contrast to bacterial osteomyelitis, amputation or large bony resections are more common in neglected fungal osteomyelitis, namely mycetoma, where infections of the foot account for the majority of cases. Treatment outcomes of bacterial osteomyelitis are generally good when the condition has only been present a short time. About 2.4 per 100,000 people are affected each year. The young and old are more commonly affected. Males are more commonly affected than females. The condition was described at least as early as the 300s BC by Hippocrates. Prior to the availability of antibiotics, the risk of death was significant.
Signs and symptoms
Symptoms may include pain in a specific bone with overlying redness, fever, weakness, and inability to walk, especially in children with acute bacterial osteomyelitis. Onset may be sudden or gradual. Enlarged lymph nodes may be present.
In fungal osteomyelitis, there is usually a history of walking bare-footed, especially in rural and farming areas. Contrary to the mode of infection in bacterial osteomyelitis, which is primarily blood-borne, fungal osteomyelitis starts as a skin infection, then invades deeper tissues until it reaches bone.
Cause
In children, the metaphyses (the growth regions near the ends of long bones) are usually affected. In adults, the vertebrae and the pelvis are most commonly affected. Acute osteomyelitis almost invariably occurs in children who are otherwise healthy, because of the rich blood supply to the growing bones. When adults are affected, it may be because of compromised host resistance due to debilitation, intravenous drug abuse, infectious root-canaled teeth, or other disease or drugs (e.g., immunosuppressive therapy). Osteomyelitis is a secondary complication in 1–3% of patients with pulmonary tuberculosis. In this case, the bacteria, in general, spread to the bone through the circulatory system, first infecting the synovium (due to its higher oxygen concentration) before spreading to the adjacent bone. In tubercular osteomyelitis, the long bones and vertebrae are the ones that tend to be affected. Staphylococcus aureus is the organism most commonly isolated from all forms of osteomyelitis. Bloodstream-sourced osteomyelitis is seen most frequently in children, and nearly 90% of cases are caused by Staphylococcus aureus. In infants, S. aureus, Group B streptococci (most common) and Escherichia coli are commonly isolated; in children from one to 16 years of age, S. aureus, Streptococcus pyogenes, and Haemophilus influenzae are common. In some subpopulations, including intravenous drug users and splenectomized patients, Gram-negative bacteria, including enteric bacteria, are significant pathogens. The most common form of the disease in adults is caused by injury exposing the bone to local infection. Staphylococcus aureus is the most common organism seen in osteomyelitis seeded from areas of contiguous infection, but anaerobes and Gram-negative organisms, including Pseudomonas aeruginosa, E. coli, and Serratia marcescens, are also common. Mixed infections are the rule rather than the exception. Systemic mycotic infections may also cause osteomyelitis. The two most common are Blastomyces dermatitidis and Coccidioides immitis. In osteomyelitis involving the vertebral bodies, about half the cases are due to S. aureus, and the other half are due to tuberculosis (spread hematogenously from the lungs). Tubercular osteomyelitis of the spine was so common before the initiation of effective antitubercular therapy that it acquired a special name, Pott's disease. The Burkholderia cepacia complex has been implicated in vertebral osteomyelitis in intravenous drug users.
Pathogenesis
In general, microorganisms may infect bone through one or more of three basic methods:
Via the bloodstream (haematogeneously) – the most common method
From nearby areas of infection (as in cellulitis), or
Penetrating trauma, including iatrogenic causes such as joint replacements or internal fixation of fractures (leading to a fracture-related infection), or secondary periapical periodontitis in teeth. The area usually affected when the infection is contracted through the bloodstream is the metaphysis of the bone. Once the bone is infected, leukocytes enter the infected area, and, in their attempt to engulf the infectious organisms, release enzymes that lyse the bone. Pus spreads into the bone's blood vessels, impairing their flow, and areas of devitalized infected bone, known as sequestra, form the basis of a chronic infection. Often, the body will try to create new bone around the area of necrosis; the resulting new bone is often called an involucrum. On histologic examination, these areas of necrotic bone are the basis for distinguishing between acute osteomyelitis and chronic osteomyelitis. Osteomyelitis is an infective process that encompasses all of the bone (osseous) components, including the bone marrow. When it is chronic, it can lead to bone sclerosis and deformity. Chronic osteomyelitis may be due to the presence of intracellular bacteria. Once intracellular, the bacteria are able to spread to adjacent bone cells, and at this point they may be resistant to certain antibiotics. These combined factors may explain the chronicity of the disease and the difficulty of eradicating it, resulting in significant costs and disability, and potentially leading to amputation. The presence of intracellular bacteria in chronic osteomyelitis is likely an unrecognized contributing factor in its persistence. In infants, the infection can spread to a joint and cause arthritis. In children, large subperiosteal abscesses can form because the periosteum is loosely attached to the surface of the bone. Because of the particulars of their blood supply, the tibia, femur, humerus, vertebrae, maxilla and the mandibular bodies are especially susceptible to osteomyelitis. Abscesses of any bone, however, may be precipitated by trauma to the affected area. Many infections are caused by Staphylococcus aureus, a member of the normal flora found on the skin and mucous membranes. In patients with sickle cell disease, the most common causative agent is Salmonella, with a relative incidence more than twice that of S. aureus.
Diagnosis
The diagnosis of osteomyelitis is complex and relies on a combination of clinical suspicion and indirect laboratory markers such as a high white blood cell count and fever, although confirmation of clinical and laboratory suspicion with imaging is usually necessary. Radiographs and CT are the initial methods of diagnosis, but are not sensitive and only moderately specific; they can show the cortical destruction of advanced osteomyelitis but can miss nascent or indolent disease. Confirmation is most often by MRI. The presence of edema, diagnosed as increased signal on T2 sequences, is sensitive but not specific, as edema can occur in reaction to adjacent cellulitis. Confirmation of bone marrow and cortical destruction by viewing the T1 sequences significantly increases specificity. The administration of intravenous gadolinium-based contrast enhances specificity further. In certain situations, such as severe Charcot arthropathy, diagnosis with MRI is still difficult. Similarly, MRI is limited in distinguishing avascular necrosis from osteomyelitis in sickle cell anemia. Nuclear medicine scans can be a helpful adjunct to MRI in patients who have metallic hardware that limits or prevents effective magnetic resonance imaging. Generally, a triple-phase technetium-99-based scan will show increased uptake on all three phases. Gallium scans are 100% sensitive for osteomyelitis but not specific, and may be helpful in patients with metallic prostheses. Combined white blood cell imaging with marrow studies has 90% accuracy in diagnosing osteomyelitis. Diagnosis of osteomyelitis is often based on radiologic results showing a lytic center with a ring of sclerosis. Culture of material taken from a bone biopsy is needed to identify the specific pathogen; alternative sampling methods such as needle puncture or surface swabs are easier to perform, but cannot be trusted to produce reliable results. Factors that may commonly complicate osteomyelitis are fractures of the bone, amyloidosis, endocarditis, or sepsis.
Classification
The definition of OM is broad, and encompasses a wide variety of conditions. Traditionally, the length of time the infection has been present and whether there is suppuration (pus formation) or osteosclerosis (pathological increased density of bone) are used to arbitrarily classify OM. Chronic OM is often defined as OM that has been present for more than one month. In reality, there are no distinct subtypes; instead, there is a spectrum of pathologic features that reflects a balance between the type and severity of the cause of the inflammation, the immune system and local and systemic predisposing factors.
Suppurative osteomyelitis
Acute suppurative osteomyelitis
Chronic suppurative osteomyelitis
Primary (no preceding phase)
Secondary (follows an acute phase)
Non-suppurative osteomyelitis
Diffuse sclerosing
Focal sclerosing (condensing osteitis)
Proliferative periostitis (periostitis ossificans, Garré's sclerosing osteomyelitis)
Osteoradionecrosis
OM can also be typed according to the area of the skeleton in which it is present. For example, osteomyelitis of the jaws is different in several respects from osteomyelitis present in a long bone. Vertebral osteomyelitis is another possible presentation.
Treatment
Osteomyelitis often requires prolonged antibiotic therapy for weeks or months. A PICC line or central venous catheter can be placed for long-term intravenous medication administration. Some studies of children with acute osteomyelitis report that oral antibiotics may be justified due to PICC-related complications. Severe cases may require surgical debridement, or even amputation. Oral and intravenous antibiotics appear to be similarly effective. Due to insufficient evidence, it is unclear what the best antibiotic treatment is for osteomyelitis in people with sickle cell disease, as of 2019. Initial first-line antibiotic choice is determined by the patient's history and regional differences in common infective organisms. A treatment lasting 42 days is practiced in a number of facilities. Local and sustained availability of drugs has proven to be more effective in achieving prophylactic and therapeutic outcomes. Open surgery is needed for chronic osteomyelitis, whereby the involucrum is opened and the sequestrum is removed, or sometimes saucerization can be done. Hyperbaric oxygen therapy has been shown to be a useful adjunct to the treatment of refractory osteomyelitis. Before the widespread availability and use of antibiotics, blow fly larvae were sometimes deliberately introduced to the wounds to feed on the infected material, effectively scouring them clean. There is tentative evidence that bioactive glass may also be useful in long bone infections </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Botulism and can you explain it in depth? [/INST] Botulism is a rare and potentially fatal illness caused by a toxin produced by the bacterium Clostridium botulinum. The disease begins with weakness, blurred vision, feeling tired, and trouble speaking. This may then be followed by weakness of the arms, chest muscles, and legs. Vomiting, swelling of the abdomen, and diarrhea may also occur. The disease does not usually affect consciousness or cause a fever.
Botulism can be spread in several ways. The bacterial spores which cause it are common in both soil and water. They produce the botulinum toxin when exposed to low oxygen levels and certain temperatures. Foodborne botulism happens when food containing the toxin is eaten. Infant botulism happens when the bacteria develop in the intestines and release the toxin. This typically only occurs in children less than six months old, as protective mechanisms develop after that time. Wound botulism is found most often among those who inject street drugs. In this situation, spores enter a wound, and in the absence of oxygen, release the toxin. It is not passed directly between people. The diagnosis is confirmed by finding the toxin or bacteria in the person in question.
Prevention is primarily by proper food preparation. The bacteria, though not the spores, are destroyed by heating food to more than 85 °C (185 °F) for longer than 5 minutes. Honey can contain the organism, and for this reason, honey should not be fed to children under 12 months. Treatment is with an antitoxin. In those who lose their ability to breathe on their own, mechanical ventilation may be necessary for months. Antibiotics may be used for wound botulism. Death occurs in 5 to 10% of people. Botulism also affects many other animals. The word is from Latin botulus, meaning sausage.
Signs and symptoms
The muscle weakness of botulism characteristically starts in the muscles supplied by the cranial nerves—a group of twelve nerves that control eye movements, the facial muscles and the muscles controlling chewing and swallowing. Double vision, drooping of both eyelids, loss of facial expression and swallowing problems may therefore occur. In addition to affecting the voluntary muscles, it can also cause disruptions in the autonomic nervous system. This is experienced as a dry mouth and throat (due to decreased production of saliva), postural hypotension (decreased blood pressure on standing, with resultant lightheadedness and risk of blackouts), and eventually constipation (due to decreased forward movement of intestinal contents). Some of the toxins (B and E) also precipitate nausea, vomiting, and difficulty with talking. The weakness then spreads to the arms (starting in the shoulders and proceeding to the forearms) and legs (again from the thighs down to the feet). Severe botulism leads to reduced movement of the muscles of respiration, and hence problems with gas exchange. This may be experienced as dyspnea (difficulty breathing), but when severe can lead to respiratory failure due to the buildup of unexhaled carbon dioxide and its resultant depressant effect on the brain; without treatment, this can progress to death. Clinicians frequently think of the symptoms of botulism in terms of a classic triad: bulbar palsy and descending paralysis, lack of fever, and clear senses and mental status ("clear sensorium").
Infant botulism
Infant botulism (also referred to as floppy baby syndrome) was first recognized in 1976, and is the most common form of botulism in the United States. Infants are susceptible to infant botulism in the first year of life, with more than 90% of cases occurring in infants younger than six months. Infant botulism results from the ingestion of the C. botulinum spores, and subsequent colonization of the small intestine. The infant gut may be colonized when the composition of the intestinal microflora (normal flora) is insufficient to competitively inhibit the growth of C. botulinum and levels of bile acids (which normally inhibit clostridial growth) are lower than later in life. The growth of the spores releases botulinum toxin, which is then absorbed into the bloodstream and taken throughout the body, causing paralysis by blocking the release of acetylcholine at the neuromuscular junction. Typical symptoms of infant botulism include constipation, lethargy, weakness, difficulty feeding, and an altered cry, often progressing to a complete descending flaccid paralysis. Although constipation is usually the first symptom of infant botulism, it is commonly overlooked. Honey is a known dietary reservoir of C. botulinum spores and has been linked to infant botulism. For this reason, honey is not recommended for infants less than one year of age. Most cases of infant botulism, however, are thought to be caused by acquiring the spores from the natural environment. Clostridium botulinum is a ubiquitous soil-dwelling bacterium. Many infant botulism patients have been demonstrated to live near a construction site or an area of soil disturbance. Infant botulism has been reported in 49 of 50 US states (all save for Rhode Island), and cases have been recognized in 26 countries on five continents.
Complications
Infant botulism typically has no long-term side effects.
Botulism can result in death due to respiratory failure. However, in the past 50 years, the proportion of patients with botulism who die has fallen from about 50% to 7% due to improved supportive care. A patient with severe botulism may require mechanical ventilation (breathing support through a ventilator) as well as intensive medical and nursing care, sometimes for several months. The person may require rehabilitation therapy after leaving the hospital.
Cause
Clostridium botulinum is an anaerobic, Gram-positive, spore-forming rod. Botulinum toxin is one of the most powerful known toxins: about one microgram is lethal to humans when inhaled. It acts by blocking nerve function (neuromuscular blockade) through inhibition of the release of the excitatory neurotransmitter acetylcholine from the presynaptic membrane of neuromuscular junctions in the somatic nervous system. This causes paralysis. Advanced botulism can cause respiratory failure by paralysing the muscles of the chest; this can progress to respiratory arrest. Furthermore, acetylcholine release from the presynaptic membranes of muscarinic nerve synapses is blocked. This can lead to a variety of autonomic signs and symptoms described above.
In all cases, illness is caused by the botulinum toxin produced by the bacterium C. botulinum in anaerobic conditions and not by the bacterium itself. The pattern of damage occurs because the toxin affects nerves that fire (depolarize) at a higher frequency first. Mechanisms of entry into the human body for botulinum toxin are described below.
Colonization of the gut
The most common form in Western countries is infant botulism. This occurs in infants who are colonized with the bacterium in the small intestine during the early stages of their lives. The bacterium then produces the toxin, which is absorbed into the bloodstream. The consumption of honey during the first year of life has been identified as a risk factor for infant botulism; it is a factor in a fifth of all cases. The adult form of infant botulism is termed adult intestinal toxemia, and is exceedingly rare.
Food
Toxin that is produced by the bacterium in containers of food that have been improperly preserved is the most common cause of food-borne botulism. Fish that has been pickled without the salinity or acidity of brine that contains acetic acid and high sodium levels, as well as smoked fish stored at too high a temperature, presents a risk, as does improperly canned food.
Food-borne botulism results from contaminated food in which C. botulinum spores have been allowed to germinate in low-oxygen conditions. This typically occurs in improperly prepared home-canned food substances and fermented dishes without adequate salt or acidity. Given that multiple people often consume food from the same source, it is common for more than a single person to be affected simultaneously. Symptoms usually appear 12–36 hours after eating, but can also appear within 6 hours to 10 days.
Wound
Wound botulism results from the contamination of a wound with the bacteria, which then secrete the toxin into the bloodstream. This has become more common in intravenous drug users since the 1990s, especially people using black tar heroin and those injecting heroin into the skin rather than the veins. Wound botulism accounts for 29% of cases.
Inhalation
Isolated cases of botulism have been described after inhalation by laboratory workers.
Injection
Symptoms of botulism may occur away from the injection site of botulinum toxin. This may include loss of strength, blurred vision, change of voice, or trouble breathing, which can result in death. Onset can be hours to weeks after an injection. This generally only occurs with inappropriate strengths of botulinum toxin for cosmetic use or due to the larger doses used to treat movement disorders. Following a 2008 review, the FDA added these concerns as a boxed warning.
Mechanism
The toxin is the protein botulinum toxin produced under anaerobic conditions (where there is no oxygen) by the bacterium Clostridium botulinum.
Clostridium botulinum is a large anaerobic Gram-positive bacillus that forms subterminal endospores. There are eight serological varieties of the bacterium, denoted by the letters A to H. The toxin from all of these acts in the same way and produces similar symptoms: the motor nerve endings are prevented from releasing acetylcholine, causing flaccid paralysis and symptoms of blurred vision, ptosis, nausea, vomiting, diarrhea or constipation, cramps, and respiratory difficulty.
Botulinum toxin is broken into eight neurotoxins (labeled as types A, B, C [C1, C2], D, E, F, and G), which are antigenically and serologically distinct but structurally similar. Human botulism is caused mainly by types A, B, E, and (rarely) F. Types C and D cause toxicity only in other animals. In October 2013, scientists released news of the discovery of type H, the first new botulism neurotoxin found in forty years. However, further studies showed type H to be a chimeric toxin composed of parts of types F and A (FA). Some types produce a characteristic putrefactive smell and digest meat (types A and some of B and F); these are said to be proteolytic; type E and some types of B, C, D and F are nonproteolytic and can go undetected because there is no strong odor associated with them. When the bacteria are under stress, they develop spores, which are inert. Their natural habitats are in the soil, in the silt that comprises the bottom sediment of streams, lakes, and coastal waters and ocean, while some types are natural inhabitants of the intestinal tracts of mammals (e.g., horses, cattle, humans), and are present in their excreta. The spores can survive in their inert form for many years. Toxin is produced by the bacteria when environmental conditions are favourable for the spores to replicate and grow, but the gene that encodes for the toxin protein is actually carried by a virus or phage that infects the bacteria. Little is known about the natural factors that control phage infection and replication within the bacteria. The spores require warm temperatures, a protein source, an anaerobic environment, and moisture in order to become active and produce toxin. In the wild, decomposing vegetation and invertebrates combined with warm temperatures can provide ideal conditions for the botulism bacteria to activate and produce toxin that may affect feeding birds and other animals. Spores are not killed by boiling, but botulism is uncommon because special, rarely obtained conditions are necessary for botulinum toxin production from C. botulinum spores, including an anaerobic, low-salt, low-acid, low-sugar environment at ambient temperatures. Botulinum inhibits the release within the nervous system of acetylcholine, a neurotransmitter responsible for communication between motor neurons and muscle cells. All forms of botulism lead to paralysis that typically starts with the muscles of the face and then spreads towards the limbs. In severe forms, botulism leads to paralysis of the breathing muscles and causes respiratory failure. In light of this life-threatening complication, all suspected cases of botulism are treated as medical emergencies, and public health officials are usually involved to identify the source and take steps to prevent further cases from occurring. Botulinum toxins A, C, and E cleave SNAP-25, ultimately leading to paralysis.
Diagnosis
For botulism in babies, diagnosis should be made on signs and symptoms. Confirmation of the diagnosis is made by testing of a stool or enema specimen with the mouse bioassay.
In people whose history and physical examination suggest botulism, these clues are often not enough to allow a diagnosis. Other diseases such as Guillain–Barré syndrome, stroke, and myasthenia gravis can appear similar to botulism, and special tests may be needed to exclude these other conditions. These tests may include a brain scan, cerebrospinal fluid examination, nerve conduction test (electromyography, or EMG), and an edrophonium chloride (Tensilon) test for myasthenia gravis. A definite diagnosis can be made if botulinum toxin is identified in the food, stomach or intestinal contents, vomit or feces. The toxin is occasionally found in the blood in peracute cases. Botulinum toxin can be detected by a variety of techniques, including enzyme-linked immunosorbent assays (ELISAs), electrochemiluminescent (ECL) tests and mouse inoculation or feeding trials. The toxins can be typed with neutralization tests in mice. In toxicoinfectious botulism, the organism can be cultured from tissues. On egg yolk medium, toxin-producing colonies usually display surface iridescence that extends beyond the colony.
Prevention
Although the vegetative form of the bacteria is destroyed by boiling, the spore itself is not killed by the temperatures reached with normal sea-level-pressure boiling, leaving it free to grow and again produce the toxin when conditions are right. A recommended prevention measure for infant botulism is to avoid giving honey to infants less than 12 months of age, as botulinum spores are often present. In older children and adults the normal intestinal bacteria suppress development of C. botulinum. While commercially canned goods are required to undergo a "botulinum cook" in a pressure cooker at 121 °C (250 °F) for 3 minutes, and thus rarely cause botulism, there have been notable exceptions, two being the 1978 Alaskan salmon outbreak and the 2007 Castleberry's Food Company outbreak. Foodborne botulism is the rarest form, though, accounting for only around 15% of cases (US), and has more frequently resulted from home-canned foods with low acid content, such as carrot juice, asparagus, green beans, beets, and corn. However, outbreaks of botulism have resulted from more unusual sources. In July 2002, fourteen Alaskans ate muktuk (whale meat) from a beached whale, and eight of them developed symptoms of botulism, two of them requiring mechanical ventilation. Other, much rarer sources of infection (about every decade in the US) include garlic or herbs stored covered in oil without acidification, chili peppers, improperly handled baked potatoes wrapped in aluminum foil, tomatoes, and home-canned or fermented fish.
When canning or preserving food at home, attention should be paid to hygiene, pressure, temperature, refrigeration and storage. When making home preserves, only acidic fruit such as apples, pears, stone fruits and berries should be used. Tropical fruit and tomatoes are low in acidity and must have some acidity added before they are canned. Low-acid foods have pH values higher than 4.6. They include red meats, seafood, poultry, milk, and all fresh vegetables except for most tomatoes. Most mixtures of low-acid and acid foods also have pH values above 4.6 unless their recipes include enough lemon juice, citric acid, or vinegar to make them acidic. Acid foods have a pH of 4.6 or lower. They include fruits, pickles, sauerkraut, jams, jellies, marmalades, and fruit butters. Although tomatoes usually are considered an acid food, some are now known to have pH values slightly above 4.6. Figs also have pH values slightly above 4.6. Therefore, if they are to be canned as acid foods, these products must be acidified to a pH of 4.6 or lower with lemon juice or citric acid. Properly acidified tomatoes and figs are acid foods and can be safely processed in a boiling-water canner. Oils infused with fresh garlic or herbs should be acidified and refrigerated. Potatoes which have been baked while wrapped in aluminum foil should be kept hot until served or refrigerated. Because the botulism toxin is destroyed by high temperatures, home-canned foods are best boiled for 10 minutes before eating. Metal cans containing food in which bacteria are growing may bulge outwards due to gas production from bacterial growth, or the food inside may be foamy or have a bad odor; cans with any of these signs should be discarded. Any container of food which has been heat-treated and then assumed to be airtight which shows signs of not being so, e.g., metal cans with pinprick holes from rust or mechanical damage, should be discarded. Contamination of a canned food solely with C. botulinum may not cause any visual defects to the container, such as bulging. Only assurance of sufficient thermal processing during production, and absence of a route for subsequent contamination, should be used as indicators of food safety.
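The pH rules above amount to a simple decision procedure. Below is a minimal, illustrative Python sketch of that logic (not a food-safety tool); the 4.6 threshold and the example foods come from the text, while the function and variable names are hypothetical:

```python
# Illustrative only: encodes the pH 4.6 boundary described above.
# Not a substitute for tested canning guidance.
ACID_CUTOFF = 4.6  # C. botulinum growth is inhibited at or below this pH

def canning_guidance(food: str, ph: float) -> str:
    """Suggest a processing route based on measured pH (hypothetical helper)."""
    if ph <= ACID_CUTOFF:
        return f"{food} (pH {ph}): acid food; boiling-water canning is typical."
    return (f"{food} (pH {ph}): low-acid food; acidify to <= {ACID_CUTOFF} "
            "or pressure-can (the 'botulinum cook' at 121 °C).")

for item, ph in [("pickles", 3.7), ("figs", 4.9), ("green beans", 5.6)]:
    print(canning_guidance(item, ph))
```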
The addition of nitrites and nitrates to processed meats such as ham, bacon, and sausages reduces growth and toxin production of C. botulinum.
Vaccine
Vaccines are under development, but they have disadvantages, and in some cases there are concerns that they may revert to dangerous native activity. As of 2017 work to develop a better vaccine was being carried out, but the US FDA had not approved any vaccine against botulism.
Treatment
Botulism is generally treated with botulism antitoxin and supportive care. Supportive care for botulism includes monitoring of respiratory function. Respiratory failure due to paralysis may require mechanical ventilation for 2 to 8 weeks, plus intensive medical and nursing care. After this time, paralysis generally improves as new neuromuscular connections are formed. In some foodborne cases, physicians may try to remove contaminated food still in the digestive tract by inducing vomiting or using enemas. Wounds should be treated, usually surgically, to remove the source of the toxin-producing bacteria.
Antitoxin
Botulinum antitoxin consists of antibodies that neutralize botulinum toxin in the circulatory system by passive immunization. This prevents additional toxin from binding to the neuromuscular junction, but does not reverse any already inflicted paralysis. In adults, a trivalent antitoxin containing antibodies raised against botulinum toxin types A, B, and E is used most commonly; however, a heptavalent botulism antitoxin has also been developed and was approved by the U.S. FDA in 2013. In infants, horse-derived antitoxin is sometimes avoided for fear of infants developing serum sickness or lasting hypersensitivity to horse-derived proteins. To avoid this, a human-derived antitoxin was developed and approved by the U.S. FDA in 2003 for the treatment of infant botulism. This human-derived antitoxin has been shown to be both safe and effective for the treatment of infant botulism. However, the danger of equine-derived antitoxin to infants has not been clearly established, and one study showed the equine-derived antitoxin to be both safe and effective for the treatment of infant botulism. Trivalent (A, B, E) botulinum antitoxin is derived from equine sources utilizing whole antibodies (Fab and Fc portions). In the United States, this antitoxin is available from the local health department via the CDC. The second antitoxin, heptavalent (A, B, C, D, E, F, G) botulinum antitoxin, is derived from "despeciated" equine IgG antibodies which have had the Fc portion cleaved off, leaving the F(ab')2 portions. This less immunogenic antitoxin is effective against all known strains of botulism where not contraindicated.
Prognosis
The paralysis caused by botulism can persist for 2 to 8 weeks, during which supportive care and ventilation may be necessary to keep the person alive. Botulism can be fatal in 5% to 10% of people who are affected. However, if left untreated, botulism is fatal in 40% to 50% of cases. Infant botulism typically has no long-term side effects but can be complicated by treatment-associated adverse events. The case fatality rate is less than 2% for hospitalized babies.
Epidemiology
Globally, botulism is fairly rare, with approximately 1,000 identified cases yearly.
United States
In the United States an average of 145 cases are reported each year. Of these, roughly 65% are infant botulism, 20% are wound botulism, and 15% are foodborne. Infant botulism is predominantly sporadic and not associated with epidemics, but great geographic variability exists. From 1974 to 1996, for example, 47% of all infant botulism cases reported in the U.S. occurred in California. Between 1990 and 2000, the Centers for Disease Control and Prevention reported 263 individual foodborne cases from 160 botulism events in the United States, with a case-fatality rate of 4%. Thirty-nine percent (103 cases and 58 events) occurred in Alaska, all of which were attributable to traditional Alaska aboriginal foods. In the lower 49 states, home-canned food was implicated in 70 events (~69%), with canned asparagus being the most frequent cause. Two restaurant-associated outbreaks affected 25 people. The median number of cases per year was 23 (range 17–43), and the median number of events per year was 14 (range 9–24). The highest incidence rates occurred in Alaska, Idaho, Washington, and Oregon. All other states had an incidence rate of 1 case per ten million people or less. The number of cases of foodborne and infant botulism has changed little in recent years, but wound botulism has increased because of the use of black tar heroin, especially in California. All data regarding botulism antitoxin releases and laboratory confirmation of cases in the US are recorded annually by the Centers for Disease Control and Prevention and published on their website.
On 2 July 1971, the U.S. Food and Drug Administration (FDA) released a public warning after learning that a New York man had died and his wife had become seriously ill due to botulism after eating a can of Bon Vivant vichyssoise soup.
Between 31 March and 6 April 1977, 59 individuals developed type B botulism. All who fell ill had eaten at the same Mexican restaurant in Pontiac, Michigan, and had consumed a hot sauce made with improperly home-canned jalapeño peppers, either by adding it to their food, or by eating nachos that had been prepared with the hot sauce. The full clinical spectrum (mild symptomatology with neurologic findings through life-threatening ventilatory paralysis) of type B botulism was documented.
In April 1994, the largest outbreak of botulism in the United States since 1978 occurred in El Paso, Texas. Thirty people were affected; 4 required mechanical ventilation. All ate food from a Greek restaurant. The attack rate among people who ate a potato-based dip was 86% (19/22) compared with 6% (11/176) among people who did not eat the dip (relative risk [RR] = 13.8; 95% confidence interval [CI], 7.6–25.1). The attack rate among people who ate an eggplant-based dip was 67% (6/9) compared with 13% (24/189) among people who did not (RR = 5.2; 95% CI, 2.9–9.5). Botulism toxin type A was detected in patients and in both dips. Toxin formation resulted from holding aluminum foil-wrapped baked potatoes at room temperature, apparently for several days, before they were used in the dips. Food handlers should be informed of the potential hazards caused by holding foil-wrapped potatoes at ambient temperatures after cooking.
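The attack-rate and relative-risk figures quoted for the potato-based dip can be reproduced with a few lines of arithmetic. A minimal sketch, assuming the standard log(RR) method for the 95% confidence interval (variable names are ours, not from the original report):

```python
import math

def relative_risk(a, n_exposed, c, n_unexposed, z=1.96):
    """RR and 95% CI from counts of ill among exposed/unexposed groups."""
    rr = (a / n_exposed) / (c / n_unexposed)
    se = math.sqrt(1/a - 1/n_exposed + 1/c - 1/n_unexposed)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(19, 22, 11, 176)          # potato-based dip data
print(f"attack rates: {19/22:.0%} vs {11/176:.0%}")  # 86% vs 6%
print(f"RR = {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")   # 13.8 (7.6-25.1)
```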
In 2002, fourteen Alaskans ate muktuk (whale blubber) from a beached whale, resulting in eight of them developing botulism, with two of the affected requiring mechanical ventilation.
Beginning in late June 2007, 8 people contracted botulism poisoning by eating canned food products produced by Castleberry's Food Company in its Augusta, Georgia plant. It was later identified that the Castleberry's plant had serious production problems on a specific line of retorts that had under-processed the cans of food. These issues included broken cooking alarms, leaking water valves and inaccurate temperature devices, all the result of poor management of the company. All of the victims were hospitalized and placed on mechanical ventilation. The Castleberry's Food Company outbreak was the first instance of botulism in commercial canned foods in the United States in over 30 years.
A botulism outbreak after a church potluck in Lancaster, Ohio in April 2015 left one person dead, with 21 cases confirmed and 10 more suspected. The suspected source was a salad made from home-canned potatoes.
A botulism outbreak occurred in Northern California in May 2017 after 10 people consumed nacho cheese dip served at a gas station in Sacramento County. One man died as a result of the outbreak.
United Kingdom
The largest recorded outbreak of foodborne botulism in the United Kingdom occurred in June 1989. A total of 27 patients were affected; one patient died. Twenty-five of the patients had eaten one brand of hazelnut yogurt in the week before the onset of symptoms. Control measures included the cessation of all yogurt production by the implicated producer, the withdrawal of the firm's yogurts from sale, the recall of cans of the hazelnut conserve, and advice to the general public to avoid the consumption of all hazelnut yogurts.
China
From 1958 to 1983 there were 986 outbreaks of botulism in China involving 4,377 people with 548 deaths.
Qapqal disease
After the Chinese Communist Revolution in 1949, a mysterious plague (named Qapqal disease) was noticed to be affecting several Sibe villages in Qapqal Xibe Autonomous County. It was endemic with distinctive epidemic patterns, yet the underlying cause remained unknown for a long period of time. It caused a number of deaths and forced some people to leave the place. In 1958, a team of experts was sent to the area by the Ministry of Health to investigate the cases. The epidemic survey conducted proved that the disease was primarily type A botulism, with several cases of type B. The team also discovered that the source of the botulinum was local fermented grain and beans, as well as a raw meat food called mi song hu hu. They promoted the improvement of fermentation techniques among local residents, and thus eliminated the disease.
Canada
From 1985 to 2015 there were outbreaks causing 91 confirmed cases of foodborne botulism in Canada, 85% of which were in Inuit communities, especially Nunavik, as well as First Nations of the coast of British Columbia, following consumption of traditionally prepared marine mammal and fish products.
Ukraine
In 2017, there were 70 cases of botulism with 8 deaths in Ukraine. The previous year there were 115 cases with 12 deaths. Most cases were the result of dried fish, a common local drinking snack.
Vietnam
In 2020, several cases of botulism were reported in Vietnam. All of them were related to a product containing contaminated vegetarian pâté. Some patients were put on life support.
Other susceptible species
Botulism can occur in many vertebrates and invertebrates. Botulism has been reported in such species as rats, mice, chickens, frogs, toads, goldfish, Aplysia, squid, crayfish, Drosophila and leeches. Death from botulism is common in waterfowl; an estimated 10,000 to 100,000 birds die of botulism annually. The disease is commonly called "limberneck". In some large outbreaks, a million or more birds may die. Ducks appear to be affected most often. An enzootic form of duck botulism in the Western USA and Canada is known as "western duck sickness". Botulism also affects commercially raised poultry. In chickens, the mortality rate varies from a few birds to 40% of the flock.
Botulism seems to be relatively uncommon in domestic mammals; however, in some parts of the world, epidemics with up to 65% mortality are seen in cattle. The prognosis is poor in large animals that are recumbent.
In cattle, the symptoms may include drooling, restlessness, uncoordination, urine retention, dysphagia, and sternal recumbency. Laterally recumbent animals are usually very close to death. In sheep, the symptoms may include drooling, a serous nasal discharge, stiffness, and incoordination. Abdominal respiration may be observed and the tail may switch on the side. As the disease progresses, the limbs may become paralyzed and death may occur.
Phosphorus-deficient cattle, especially in southern Africa, are inclined to ingest bones and carrion containing clostridial toxins and consequently develop lame sickness or lamsiekte.
The clinical signs in horses are similar to cattle. The muscle paralysis is progressive; it usually begins at the hindquarters and gradually moves to the front limbs, neck, and head. Death generally occurs 24 to 72 hours after initial symptoms and results from respiratory paralysis. Some foals are found dead without other clinical signs.
Clostridium botulinum type C toxin has been incriminated as the cause of grass sickness, a condition in horses which occurs in rainy and hot summers in Northern Europe. The main symptom is pharynx paralysis. Domestic dogs may develop systemic toxemia after consuming C. botulinum type C exotoxin or spores within bird carcasses or other infected meat, but are generally resistant to the more severe effects of Clostridium botulinum type C. Symptoms include flaccid muscle paralysis, which can lead to death due to cardiac and respiratory arrest. Pigs are relatively resistant to botulism. Reported symptoms include anorexia, refusal to drink, vomiting, pupillary dilation, and muscle paralysis. In poultry and wild birds, flaccid paralysis is usually seen in the legs, wings, neck and eyelids </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Colonoscopy and can you explain it in depth? [/INST] Colonoscopy or coloscopy is the endoscopic examination of the large bowel and the distal part of the small bowel with a CCD camera or a fiber optic camera on a flexible tube passed through the anus. It can provide a visual diagnosis (e.g., ulceration, polyps) and grants the opportunity for biopsy or removal of suspected colorectal cancer lesions.
Colonoscopy can remove polyps smaller than one millimeter. Once polyps are removed, they can be studied with the aid of a microscope to determine if they are precancerous or not. It can take up to 15 years for a polyp to turn cancerous. Colonoscopy is similar to sigmoidoscopy—the difference being related to which parts of the colon each can examine. A colonoscopy allows an examination of the entire colon (1,200–1,500 mm in length). A sigmoidoscopy allows an examination of the distal portion (about 600 mm) of the colon, which may be sufficient because benefits to cancer survival of colonoscopy have been limited to the detection of lesions in the distal portion of the colon. A sigmoidoscopy is often used as a screening procedure for a full colonoscopy, often done in conjunction with a fecal occult blood test (FOBT); about 5% of these screened patients are referred to colonoscopy. Virtual colonoscopy, which uses 2D and 3D imagery reconstructed from computed tomography (CT) scans or from nuclear magnetic resonance (MR) scans, is also possible, as a totally non-invasive medical test. Virtual colonoscopy does not allow therapeutic maneuvers such as polyp and tumour removal or biopsy, nor visualization of lesions smaller than five millimeters; if a growth or polyp is detected using CT colonography, it would require removal during a standard colonoscopy. Surgeons have used the term pouchoscopy to refer to a colonoscopy of the ileo-anal pouch.
Medical uses
Conditions that call for colonoscopies include gastrointestinal hemorrhage, unexplained changes in bowel habit and suspicion of malignancy. Colonoscopies are often used to diagnose colon polyps and colon cancer, but are also frequently used to diagnose inflammatory bowel disease. In older patients (sometimes even younger ones) an unexplained drop in hematocrit (one sign of anemia) is an indication that calls for a colonoscopy, usually along with an esophagogastroduodenoscopy (EGD), even if no obvious blood has been seen in the stool (feces). Fecal occult blood is a quick test which can be done to test for microscopic traces of blood in the stool. A positive test is almost always an indication to do a colonoscopy. In most cases the positive result is just due to hemorrhoids; however, it can also be due to diverticulosis, inflammatory bowel disease (Crohn's disease, ulcerative colitis), colon cancer, or polyps. Colonic polypectomy has become a routine part of colonoscopy, allowing quick and simple removal of polyps during the procedure, without invasive surgery.
Colon cancer screening
Colonoscopy is one of the colorectal cancer screening tests available to people in the US who are 45 years of age and older. The other screening tests include flexible sigmoidoscopy, double-contrast barium enema, computed tomographic (CT) colonography (virtual colonoscopy), guaiac-based fecal occult blood test (gFOBT), fecal immunochemical test (FIT), and the multitarget stool DNA screening test (Cologuard). Subsequent rescreenings are then scheduled based on the initial results found, with a five- or ten-year recall being common for colonoscopies that produce normal results. People with a family history of colon cancer are often first screened during their teenage years.
Among people who have had an initial colonoscopy that found no polyps, the risk of developing colorectal cancer within five years is extremely low. Therefore, there is no need for those people to have another colonoscopy sooner than five years after the first screening. Some medical societies in the US recommend a screening colonoscopy every 10 years beginning at age 50 for adults without increased risk for colorectal cancer. Research shows that the risk of cancer is low for 10 years if a high-quality colonoscopy does not detect cancer, so tests for this purpose are indicated every ten years. Colonoscopy screening prevents approximately two-thirds of deaths due to colorectal cancers on the left side of the colon, and is not associated with a significant reduction in deaths from right-sided disease. Colonoscopy reduces cancer rates by detecting some colon polyps and cancers on the left side of the colon early enough that they may be treated, and a smaller number on the right side; many of these left-sided growths would have been detected by a sigmoidoscopy procedure. Since polyps often take 10 to 15 years to transform into cancer in someone at average risk of colorectal cancer, guidelines recommend waiting 10 years after a normal screening colonoscopy before the next colonoscopy. (This interval does not apply to people at high risk of colorectal cancer, or to those who experience symptoms of colorectal cancer.) Although widely touted in the US as the "gold standard" of colon cancer screening, colonoscopy has never been studied as a screening tool. Most of the potential benefits of colonoscopy have been extrapolated from randomized trials of sigmoidoscopy. The CONFIRM trial, a randomized trial of colonoscopy vs. FIT, is currently ongoing.
Recommendations
The American Cancer Society recommends, beginning at age 45, both men and women follow one of these testing schedules for screening to find colon polyps and/or cancer:
Flexible sigmoidoscopy every 5 years, or
Colonoscopy every 10 years, or
Double-contrast barium enema every 5 years, or
CT colonography (virtual colonoscopy) every 5 years
Yearly guaiac-based fecal occult blood test (gFOBT)
Yearly fecal immunochemical test (FIT)
Stool DNA test (sDNA) every 3 years
Medicare coverage
In the United States, Medicare insurance covers the following colorectal-cancer screening tests (a minimal lookup sketch in code follows the list):
Colonoscopy: average risk — every 10 years beginning at age 50, high risk — every 2 years with no age restriction
Flexible sigmoidoscopy — every 4 years beginning at age 50
Double-contrast barium enema: average risk — every 4 years beginning at age 50, high risk — every 2 years
(CT) colonography: not covered by Medicare
gFOBT: average risk — every year beginning at age 50
FIT: average risk — every year beginning at age 50
Cologuard: average risk — every 3 years beginning at age 50
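As a compact way to see these rules together, here is a hypothetical lookup table encoding the average-risk intervals listed above; the structure and helper function are illustrative, not an official encoding of Medicare policy:

```python
# Average-risk Medicare screening intervals from the list above (illustrative).
MEDICARE_SCREENING = {
    "colonoscopy":                  {"interval_years": 10, "start_age": 50},
    "flexible sigmoidoscopy":       {"interval_years": 4,  "start_age": 50},
    "double-contrast barium enema": {"interval_years": 4,  "start_age": 50},
    "gFOBT":                        {"interval_years": 1,  "start_age": 50},
    "FIT":                          {"interval_years": 1,  "start_age": 50},
    "Cologuard":                    {"interval_years": 3,  "start_age": 50},
}

def next_due_age(test: str, age_at_last_test: int) -> int:
    """Age at which an average-risk patient is next due (hypothetical helper)."""
    rule = MEDICARE_SCREENING[test]
    return max(rule["start_age"], age_at_last_test + rule["interval_years"])

print(next_due_age("colonoscopy", 52))  # -> 62
```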
Risks
About 1 in 200 people who undergo a colonoscopy experience a serious complication. Perforation of the colon occurs in about 1 in 2000 procedures, bleeding in 2.6 per 1000, and death in 3 per 100,000, with an overall risk of serious complications of 0.35%. In some low-risk populations, screening by colonoscopy in the absence of symptoms does not outweigh the risks of the procedure. For example, the odds of developing colorectal cancer between the ages of 20 and 40 in the absence of specific risk factors are about 1 in 1,250 (0.08%). The rate of complications varies with the practitioner and institution performing the procedure, and other variables.
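Because the figures above are quoted on different denominators (1 in 200, 1 in 2000, per 1000, per 100,000), putting them on one scale makes them easier to compare. A quick arithmetic sketch; the per-10,000 presentation is our choice, not the source's:

```python
# Complication rates quoted above, rescaled to a common denominator.
rates = {
    "any serious complication (1 in 200)": 1 / 200,
    "perforation (1 in 2000)":             1 / 2000,
    "bleeding (2.6 per 1000)":             2.6 / 1000,
    "death (3 per 100,000)":               3 / 100_000,
}
for name, p in rates.items():
    print(f"{name}: {p * 10_000:.1f} per 10,000 procedures")
```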
Perforation
The most serious complication generally is gastrointestinal perforation, which is life-threatening and in most cases requires immediate major surgery for repair. Fewer than 20% of cases may be successfully managed with a conservative (non-surgical) approach. A 2003 analysis of the relative risks of sigmoidoscopy and colonoscopy brought to attention that the risk of perforation after colonoscopy is approximately double that after sigmoidoscopy (consistent with the fact that colonoscopy examines a longer section of the colon), a difference that appeared to be decreasing.
Bleeding
Bleeding complications may be treated immediately during the procedure by cauterization using the instrument. Delayed bleeding may also occur at the site of polyp removal up to a week after the procedure, and a repeat procedure can then be performed to treat the bleeding site. Even more rarely, splenic rupture can occur after colonoscopy because of adhesions between the colon and the spleen.
Anaesthesia
As with any procedure involving anaesthesia, other complications would include cardiopulmonary complications such as a temporary drop in blood pressure and oxygen saturation, usually the result of overmedication, and are easily reversed. Anesthesia can also increase the risk of developing blood clots and lead to pulmonary embolism or deep venous thrombosis (DVT). In rare cases, more serious cardiopulmonary events such as a heart attack, stroke, or even death may occur; these are extremely rare except in critically ill patients with multiple risk factors. In rare cases, coma associated with anesthesia may occur.
Bowel preparation
Dehydration caused by the laxatives that are usually administered during the bowel preparation for colonoscopy may also occur. Therefore, patients must drink large amounts of fluids during the day of colonoscopy preparation to prevent dehydration. Loss of electrolytes or dehydration is a potential risk that can even prove deadly. In rare cases, severe dehydration can lead to kidney damage or dysfunction in the form of phosphate nephropathy.
Other
Virtual colonoscopies carry risks that are associated with radiation exposure. Colonoscopy preparation and the colonoscopy procedure can cause inflammation of the bowels and diarrhea or bowel obstruction. During colonoscopies where a polyp is removed (a polypectomy), the risk of complications has been higher, although still low at about 2.3 percent. One of the most serious complications that may arise after colonoscopy is the postpolypectomy syndrome. This syndrome occurs due to potential burns to the bowel wall when the polyp is removed, and may cause fever and abdominal pain. It is a rare complication, treated with intravenous fluids and antibiotics.
Bowel infections are a potential colonoscopy risk, although rare. The colon is not a sterile environment; many bacteria that normally live in the colon ensure the well-functioning of the bowel, and the risk of infections is minimal. Infections can occur during biopsies when too much tissue is removed and bacteria intrude into areas where they do not belong, or in cases when the lining of the colon is perforated and the bacteria get into the abdominal cavity. Infection may also be transmitted between patients if the colonoscope is not cleaned and sterilized properly between tests.
Minor colonoscopy risks may include nausea, vomiting or allergies to the sedatives that are used. If medication is given intravenously, the vein may become irritated. Most localized irritations to the vein leave a tender lump lasting a number of days but going away eventually. The incidence of these complications is less than 1%.
On rare occasions, intracolonic explosion may occur. A meticulous bowel preparation is the key to preventing this complication. Signs of complications include severe abdominal pain, fevers and chills, or rectal bleeding (more than half a cup, or 100 ml).
Procedure
Preparation
The colon must be free of solid matter for the test to be performed properly. For one to three days, the patient is required to follow a low fiber or clear-liquid-only diet. Examples of clear fluids are apple juice, chicken and/or beef broth or bouillon, lemon-lime soda, lemonade, sports drink, and water. It is important that the patient remains hydrated. Sports drinks contain electrolytes which are depleted during the purging of the bowel. Drinks containing fiber such as prune and orange juice should not be consumed, nor should liquids dyed red, purple, orange, or sometimes brown; however, cola is allowed. In most cases, tea or coffee taken without milk are allowed. The day before the colonoscopy, the patient is either given a laxative preparation (such as bisacodyl, phospho soda, sodium picosulfate, or sodium phosphate and/or magnesium citrate) and large quantities of fluid, or whole bowel irrigation is performed using a solution of polyethylene glycol and electrolytes. The procedure may involve both a pill-form laxative and a bowel irrigation preparation with the polyethylene glycol powder dissolved into any clear liquid, such as a sports drink that contains electrolytes.
A typical procedure regimen then would be as follows: in the morning of the day before the procedure, a 238 g bottle of polyethylene glycol powder should be poured into 1.9 litres (64 oz) of the chosen clear liquid, which then should be mixed and refrigerated. Two bisacodyl 5 mg tablets are taken at 3 pm; at 5 pm, the patient starts drinking the mixture (approx. 8 oz (0.24 litres) every 15–30 min until finished); at 8 pm, the patient takes two more bisacodyl 5 mg tablets; drinking and hydrating with clear permitted fluids then continues into the evening until bedtime. The procedure may be scheduled early in the day so the patient need not go without food, and with only limited fluids, until later. The goal of the preparation is to clear the colon of solid matter, and the patient may be advised to spend the day at home with ready access to toilet facilities. The patient may also want to have at hand moist towelettes or a bidet for cleaning the anus. A soothing salve such as petroleum jelly applied after cleaning the anus will reduce discomfort.
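The example regimen above is essentially a timed checklist, which can be written as structured data. A sketch follows, with times and doses copied from the text; the "morning" mixing time of 9 am is our assumption, schedules vary by clinic, and this is illustrative only, not medical guidance:

```python
from datetime import time

# Timed prep checklist from the example regimen above (illustrative only).
PREP_SCHEDULE = [
    (time(9, 0),  "Mix 238 g polyethylene glycol powder into 1.9 L of clear liquid; refrigerate."),
    (time(15, 0), "Take two bisacodyl 5 mg tablets."),
    (time(17, 0), "Start drinking the mixture: ~8 oz every 15-30 minutes until finished."),
    (time(20, 0), "Take two more bisacodyl 5 mg tablets."),
    (time(22, 0), "Continue clear permitted fluids until bedtime."),
]

for t, step in PREP_SCHEDULE:
    print(f"{t:%H:%M}  {step}")
```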
The patient may be asked not to take aspirin or similar products such as salicylate, ibuprofen, etc. for up to ten days before the procedure to avoid the risk of bleeding if a polypectomy is performed during the procedure. A blood test may be performed before the procedure.
Investigation
During the procedure the patient is often given sedation intravenously, employing agents such as fentanyl or midazolam. Although meperidine (Demerol) may be used as an alternative to fentanyl, the concern of seizures has relegated this agent to second choice for sedation behind the combination of fentanyl and midazolam. The average person will receive a combination of these two drugs, usually between 25 and 100 µg IV fentanyl and 1–4 mg IV midazolam. Sedation practices vary between practitioners and nations; in some clinics in Norway, sedation is rarely administered. Some endoscopists are experimenting with, or routinely use, alternative or additional methods such as nitrous oxide and propofol, which have advantages and disadvantages relating to recovery time (particularly the duration of amnesia after the procedure is complete), patient experience, and the degree of supervision needed for safe administration. This sedation is called "twilight anesthesia". For some patients it is not fully effective, so they are indeed awake for the procedure and can watch the inside of their colon on the color monitor. Substituting propofol for midazolam, which gives the patient quicker recovery, is gaining wider use, but requires closer monitoring of respiration.
A meta-analysis found that playing music improves patients' tolerance of the procedure. The first step is usually a digital rectal examination, to examine the tone of the sphincter and to determine if preparation has been adequate. The endoscope is then passed through the anus up the rectum, the colon (sigmoid, descending, transverse and ascending colon, the cecum), and ultimately the terminal ileum. The endoscope has a movable tip and multiple channels for instrumentation, air, suction and light. The bowel is occasionally insufflated with air to maximize visibility (a procedure that gives the patient the false sensation of needing to take a bowel movement). Biopsies are frequently taken for histology. Additionally, in a procedure known as chromoendoscopy, a contrast dye (such as indigo carmine) may be sprayed through the endoscope onto the bowel wall to help visualise any abnormalities in the mucosal morphology. A Cochrane review updated in 2016 found strong evidence that chromoscopy enhances the detection of cancerous tumours in the colon and rectum. In most experienced hands, the endoscope is advanced to the junction where the colon and small bowel join (cecum) in under 10 minutes in 95% of cases. Due to tight turns and redundancy in areas of the colon that are not "fixed", loops may form in which advancement of the endoscope creates a "bowing" effect that causes the tip to actually retract. These loops often result in discomfort due to stretching of the colon and its associated mesentery. Manoeuvres to "reduce" or remove the loop include pulling the endoscope backwards while twisting it. Alternatively, body position changes and abdominal support from external hand pressure can often "straighten" the endoscope to allow the scope to move forward. In a minority of patients, looping is often cited as a cause for an incomplete examination. Usage of alternative instruments leading to completion of the examination has been investigated, including use of a pediatric colonoscope, push enteroscope and upper GI endoscope variants. For screening purposes, a closer visual inspection is then often performed upon withdrawal of the endoscope over the course of 20 to 25 minutes. Lawsuits over missed cancerous lesions have recently prompted some institutions to better document withdrawal time, as rapid withdrawal times may be a source of potential medical legal liability. This is often a real concern in clinical settings where high caseloads could provide a financial incentive to complete colonoscopies as quickly as possible.
Suspicious lesions may be cauterized, treated with laser light or cut with an electric wire for purposes of biopsy or complete removal (polypectomy). Medication can be injected, e.g. to control bleeding lesions. The procedure typically takes 20–30 minutes, depending on the indication and findings; with multiple polypectomies or biopsies, procedure times may be longer. As mentioned above, anatomic considerations may also affect procedure times.
After the procedure, some recovery time is usually allowed to let the sedative wear off. Outpatient recovery time can take an estimated 30–60 minutes. Most facilities require that patients have a person with them to help them home afterwards (depending on the sedation method used).
One common after-effect from the procedure is a bout of flatulence and minor wind pain caused by air insufflation into the colon during the procedure.
An advantage of colonoscopy over X-ray imaging or other less invasive tests is the ability to perform therapeutic interventions during the test. A polyp is a growth of excess tissue that can develop into cancer. If a polyp is found, for example, it can be removed by one of several techniques. A snare device can be placed around a polyp for removal. Even if the polyp is flat on the surface, it can often be removed, in stages if necessary.
Pain management
The pain associated with the procedure is not caused by the insertion of the scope but rather by the inflation of the colon in order to do the inspection. The scope itself is essentially a long, flexible tube about a centimeter in diameter — that is, as big around as the little finger, which is less than the diameter of an average stool.
The colon is wrinkled and corrugated, somewhat like an accordion or a clothes-dryer exhaust tube, which gives it the large surface area needed for water absorption. In order to inspect this surface thoroughly, the physician blows it up like a balloon, using air from a compressor or carbon dioxide from a gas bottle, in order to get the creases out (CO2 is absorbed into the bloodstream through the mucosal lining of the colon much faster than air and is then exhaled through the lungs, which is associated with less post-procedural pain). The stomach, intestines, and colon have a so-called "second brain" wrapped around them, which autonomously runs the chemical factory of digestion. It uses complex hormone signals and nerve signals to communicate with the brain and the rest of the body. Normally, a colon's job is to digest food and regulate the intestinal flora. The harmful bacteria in rancid food, for example, create gas.
The colon has distension sensors that can tell when there is unexpected gas pushing the colon walls out—thus the "second brain" tells the person that he or she is having intestinal difficulties by way of the sensation of nausea. Doctors typically recommend either total anesthesia or a partial twilight sedative to either preclude or lessen the patient's awareness of pain or discomfort, or just the unusual sensations of the procedure. Once the colon has been inflated, the doctor inspects it with the scope as it is slowly pulled backward. If any polyps are found, they are then cut out for later biopsy.
Some doctors prefer to work with totally anesthetized patients inasmuch as the lack of any perceived pain or discomfort allows for a leisurely examination. Twilight sedation is, however, inherently safer than general anesthesia; it also allows the patients to follow simple commands and even to watch the procedure on a closed-circuit monitor. Tens of millions of adults annually need to have colonoscopies, and yet many don't because of concerns about the procedure.

Colonoscopy can be carried out without any sedation and without problems with pain, as is practised in several institutions in many countries with the patient's agreement. This allows the patient to shift body position to help the doctor carry out the procedure, and significantly reduces recovery time and side-effects. There is some discomfort when the colon is distended with air, but this is not usually particularly painful, and it passes relatively quickly. Unsedated patients can be released from the hospital on their own without any feelings of nausea, able to continue with normal activities, and without the need for an escort as recommended after sedation.
Ultrasound
Duodenography and colonography are performed like a standard abdominal examination using B-mode and color flow Doppler ultrasonography, with a low-frequency transducer (for example, 2.5 MHz) and a high-frequency transducer (for example, a 7.5 MHz probe). Detailed examination of the duodenal walls and folds, and of the colonic walls and haustra, is performed using the 7.5 MHz probe. Deeply located abdominal structures are examined using the 2.5 MHz probe. All ultrasound examinations are performed after overnight fasting (for at least 16 hours) using a standard scanning procedure. Subjects are examined with and without water contrast. Water contrast imaging is performed by having adult subjects drink at least one liter of water prior to examination. Patients are examined in the supine, left posterior oblique, and left lateral decubitus positions using the intercostal and subcostal approaches. The liver, gall bladder, spleen, pancreas, duodenum, colon, and kidneys are routinely evaluated in all patients.
With the patient lying supine, examination of the duodenum with high-frequency ultrasound duodenography is performed with the 7.5 MHz probe placed in the right upper abdomen and then centrally in the epigastrium; for high-frequency ultrasound colonography, the ascending colon is examined starting at a point usually midway along an imaginary line running from the iliac crest to the umbilicus, proceeding cephalad through the right mid-abdomen; for the descending colon, the examination begins from the left upper abdomen, proceeding caudally, traversing the left mid-abdomen and left lower abdomen, and terminating at the sigmoid colon in the lower pelvic region. Color flow Doppler sonography is used to examine the localization of lesions in relation to vessels. All measurements of diameter and wall thickness are performed with built-in software. Measurements are taken between peristaltic waves.
Economics
Researchers have found that older patients with three or more significant health problems, like dementia or heart failure, had high rates of repeat colonoscopies without medical indications. These patients are less likely to live long enough to develop colon cancer. Gordon states, "At about $1,000 per procedure, there's clearly an economic incentive".

The Hemoccult II FOBT (combined with follow-up colonoscopy if indicated by the test) is over 5 times as cost-effective as other screening strategies, but is only about 85% as sensitive. Because of this relatively low sensitivity, US guidelines advocate the over 5 times more expensive procedures instead, as even the relatively small increase in lives saved is seen as worth the 5-fold increase in cost, given US living standards.
History
In the 1960s, Dr. Niwa and Dr. Yamagata at Tokyo University developed the device. After 1968, Dr. William Wolff and Dr. Hiromi Shinya pioneered the development of the colonoscope. Their invention, in 1969 in Japan, was an advance over the barium enema and the flexible sigmoidoscope because it allowed for the visualization and removal of polyps from the entire large intestine. Wolff and Shinya advocated for their invention and published much of the early evidence needed to overcome skepticism about the device's safety and efficacy.
The development and marketing of colonoscopes with CCD imaging has been led by Fujifilm, Olympus and Hoya in Japan. In 1982, Dr. Lawrence Kaplan of Aspen Medical Group in St. Paul, MN reported a series of 100 consecutive colonoscopies and upper endoscopies performed in a free-standing clinic miles from the nearest hospital, demonstrating the safety and cost-effectiveness of these outpatient procedures. (Personal communication to the Joint Commission on Ambulatory Care, May 1983)
Etymology
The terms colonoscopy and coloscopy are derived from the ancient Greek noun κόλον, the same as English colon, and the verb σκοπεῖν, "to look (in)to, examine". The term colonoscopy is, however, ill-constructed, as this form supposes that the first part of the compound consists of a possible root κολων- or κολον-, with the connecting vowel -o-, instead of the root κόλ- of κόλον. A compound such as κολωνοειδής, "like a hill" (with the additional -on-), is derived from the ancient Greek word κολώνη or κολωνός, "hill". Similarly, colonoscopy (with the additional -on-) can literally be translated as "examination of the hill", instead of "examination of the colon".
In English, multiple words exist that are derived from κόλον, such as colectomy, colocentesis, colopathy, and colostomy among many others, that actually lack the incorrect additional -on-. A few compound words such as colonopathy have doublets with -on- inserted.
See also
References
Further reading
Gupta S, Lieberman D, Anderson JC, Burke CA, Dominitz JA, Kaltenbach T, et al. (March 2020). "Recommendations for Follow-Up After Colonoscopy and Polypectomy: A Consensus Update by the US Multi-Society Task Force on Colorectal Cancer". Gastroenterology. 158 (4): 1131–1153.e5. doi:10.1053/j.gastro.2019.10.026. PMC 7672705. PMID 32044092.
</s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Bronchopulmonary dysplasia and can you explain it in depth? [/INST] Bronchopulmonary dysplasia (BPD; part of the spectrum of chronic lung disease of infancy) is a chronic lung disease in which premature infants, usually those who were treated with supplemental oxygen, require long-term oxygen. The alveoli that are present tend not to be mature enough to function normally. It is more common in infants with low birth weight (LBW) and those who receive prolonged mechanical ventilation to treat respiratory distress syndrome (RDS). It results in significant morbidity and mortality. The definition of BPD has continued to evolve primarily due to changes in the population, such as more survivors at earlier gestational ages, and improved neonatal management including surfactant, antenatal glucocorticoid therapy, and less aggressive mechanical ventilation.

Currently, the description of BPD includes the grading of its severity into mild, moderate and severe. This correlates with the infant's maturity, growth and overall severity of illness. The new system offers a better description of underlying pulmonary disease and its severity.
Presentation
Complications
Feeding problems are common in infants with bronchopulmonary dysplasia, often due to prolonged intubation. Such infants often display oral-tactile hypersensitivity (also known as oral aversion).
Physical findings:
hypoxemia;
hypercapnia;
crackles, wheezing, & decreased breath sounds;
increased bronchial secretions;
hyperinflation;
frequent lower respiratory infections;
delayed growth & development;
cor pulmonale;
CXR shows hyperinflation, a low diaphragm, atelectasis, and cystic changes.
Cause
Prolonged high oxygen delivery in premature infants causes necrotizing bronchiolitis and alveolar septal injury, with inflammation and scarring. This results in hypoxemia. Today, with the advent of surfactant therapy, high-frequency ventilation, and oxygen supplementation, infants with BPD experience much milder injury without necrotizing bronchiolitis or alveolar septal fibrosis. Instead, there are usually uniformly dilated acini with thin alveolar septa and little or no interstitial fibrosis. It develops most commonly in the first 4 weeks after birth.
Diagnosis
Earlier criteria
The classic diagnosis of BPD may be assigned at 28 days of life if the following criteria are met:
Positive pressure ventilation during the first 2 weeks of life for a minimum of 3 days.
Clinical signs of abnormal respiratory function.
Requirements for supplemental oxygen for longer than 28 days of age to maintain PaO2 above 50 mm Hg.
Chest radiograph with diffuse abnormal findings characteristic of BPD.
Newer criteria
The 2006 National Institutes of Health (US) criteria for BPD (for neonates treated with more than 21% oxygen for at least 28 days) are as follows; a schematic sketch of the grading logic is given after the list:
Mild: Breathing room air at 36 weeks post-menstrual age or discharge (whichever comes first) for babies born before 32 weeks gestation, or breathing room air by 56 days postnatal age or discharge (whichever comes first) for babies born after 32 weeks gestation.
Moderate: Need for <30% oxygen at 36 weeks post-menstrual age or discharge (whichever comes first) for babies born before 32 weeks gestation, or need for <30% oxygen at 56 days postnatal age or discharge (whichever comes first) for babies born after 32 weeks gestation.
Severe: Need for >30% oxygen, with or without positive pressure ventilation or continuous positive pressure, at 36 weeks post-menstrual age or discharge (whichever comes first) for babies born before 32 weeks gestation, or need for >30% oxygen, with or without positive pressure ventilation or continuous positive pressure, at 56 days postnatal age or discharge (whichever comes first) for babies born after 32 weeks gestation.
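As a purely illustrative aside, the grading above amounts to a small decision rule. The sketch below is a hypothetical Python rendering of it, assuming the fraction of inspired oxygen (FiO2) has already been assessed at the correct time point (36 weeks post-menstrual age or 56 days postnatal age/discharge, depending on gestational age at birth); it is not a clinical tool and glosses over edge cases such as a requirement of exactly 30% oxygen.

```python
def grade_bpd(fio2):
    """Schematic severity grading per the 2006 criteria quoted above.

    fio2: fraction of inspired oxygen at the assessment point
    (0.21 corresponds to room air). Illustrative sketch only.
    """
    if fio2 > 0.30:
        return "severe"    # >30% oxygen, with or without PPV/CPAP
    if fio2 > 0.21:
        return "moderate"  # needs supplemental oxygen, but <30%
    return "mild"          # breathing room air

# e.g. grade_bpd(0.21) -> "mild"; grade_bpd(0.25) -> "moderate"
```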
Management
Infants with bronchopulmonary dysplasia are often treated with diuretics that decrease fluid in the alveoli where gas exchange occurs and bronchodilators that relax the airway muscles to facilitate breathing. Viral immunization is important for these children who have a higher risk of infections in the respiratory tract.
Corticosteroid treatment
There is evidence that steroids (systemic corticosteroid treatment) given to babies less than 7 days old can prevent bronchopulmonary dysplasia. This treatment increases the risk of neurodevelopmental sequelae (cerebral palsy) and gastrointestinal perforation.

For babies 7 days old and older, "late systemic postnatal corticosteroid treatment" may reduce the risk of death and of bronchopulmonary dysplasia. There is some evidence that this treatment does not increase the risk of cerebral palsy; however, long-term studies considering neurodevelopmental outcomes are needed to further understand the risk of this treatment option. Late systemic postnatal corticosteroid treatment is therefore only recommended for babies 7 days old or older who cannot be taken off a ventilator </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Brucellosis and can you explain it in depth? [/INST] Brucellosis is a highly contagious zoonosis caused by ingestion of unpasteurized milk or undercooked meat from infected animals, or close contact with their secretions. It is also known as undulant fever, Malta fever, and Mediterranean fever.The bacteria causing this disease, Brucella, are small, Gram-negative, nonmotile, nonspore-forming, rod-shaped (coccobacilli) bacteria. They function as facultative intracellular parasites, causing chronic disease, which usually persists for life. Four species infect humans: B. abortus, B. canis, B. melitensis, and B. suis. B. abortus is less virulent than B. melitensis and is primarily a disease of cattle. B. canis affects dogs. B. melitensis is the most virulent and invasive species; it usually infects goats and occasionally sheep. B. suis is of intermediate virulence and chiefly infects pigs. Symptoms include profuse sweating and joint and muscle pain. Brucellosis has been recognized in animals and humans since the early 20th century.
Signs and symptoms
The symptoms are like those associated with many other febrile diseases, but with emphasis on muscular pain and night sweats. The duration of the disease can vary from a few weeks to many months or even years.
In the first stage of the disease, bacteremia occurs and leads to the classic triad of undulant fevers, sweating (often with a characteristic foul, moldy smell sometimes likened to wet hay), and migratory arthralgia and myalgia (joint and muscle pain). Blood tests characteristically reveal a low number of white blood cells and red blood cells, show some elevation of liver enzymes such as aspartate aminotransferase and alanine aminotransferase, and demonstrate positive rose Bengal and Huddleson reactions. Gastrointestinal symptoms occur in 70% of cases and include nausea, vomiting, decreased appetite, unintentional weight loss, abdominal pain, constipation, diarrhea, an enlarged liver, liver inflammation, liver abscess, and an enlarged spleen.

This complex is, at least in Portugal, Israel, Syria, and Jordan, known as Malta fever. During episodes of Malta fever, melitococcemia (presence of brucellae in the blood) can usually be demonstrated by means of blood culture in tryptose medium or Albini medium. If untreated, the disease can give rise to focal infections or become chronic. Focal brucellosis usually involves the bones and joints, and osteomyelitis or spondylodiscitis of the lumbar spine accompanied by sacroiliitis is very characteristic of this disease. Orchitis is also common in men.
The consequences of Brucella infection are highly variable and may include arthritis, spondylitis, thrombocytopenia, meningitis, uveitis, optic neuritis, endocarditis, and various neurological disorders collectively known as neurobrucellosis.
Cause
Brucellosis in humans is usually associated with the consumption of unpasteurized milk and soft cheeses made from the milk of infected animals (primarily goats infected with B. melitensis), and with occupational exposure of laboratory workers, veterinarians, and slaughterhouse workers. Some vaccines used in livestock, most notably B. abortus strain 19, also cause disease in humans if accidentally injected. Brucellosis induces inconstant fevers, miscarriage, sweating, weakness, anemia, headaches, depression, and muscular and bodily pain. The other strains, B. suis and B. canis, cause infection in pigs and dogs, respectively.

Overall, findings support that brucellosis poses an occupational risk to goat farmers, with specific areas of concern including weak awareness of disease transmission to humans and a lack of knowledge of specific safe farm practices such as quarantine.
Diagnosis
The diagnosis of brucellosis relies on:
Demonstration of the agent: blood cultures in tryptose broth and bone marrow cultures. The growth of brucellae is extremely slow (they can take up to two months to grow) and the culture poses a risk to laboratory personnel due to the high infectivity of brucellae.
Demonstration of antibodies against the agent, either with the classic Huddleson, Wright, and/or rose Bengal reactions, or with ELISA or the 2-mercaptoethanol assay for IgM antibodies associated with chronic disease
Histologic evidence of granulomatous hepatitis on hepatic biopsy
Radiologic alterations in infected vertebrae: the Pedro Pons sign (preferential erosion of the anterosuperior corner of lumbar vertebrae) and marked osteophytosis are suspicious of brucellic spondylitis.

Definite diagnosis of brucellosis requires the isolation of the organism from the blood, body fluids, or tissues, but serological methods may be the only tests available in many settings. Positive blood culture yield ranges between 40 and 70% and is less commonly positive for B. abortus than B. melitensis or B. suis. Identification of specific antibodies against bacterial lipopolysaccharide and other antigens can be detected by the standard agglutination test (SAT), rose Bengal, 2-mercaptoethanol (2-ME), antihuman globulin (Coombs) and indirect enzyme-linked immunosorbent assay (ELISA). SAT is the most commonly used serology in endemic areas. An agglutination titre greater than 1:160 is considered significant in nonendemic areas and greater than 1:320 in endemic areas.

Due to the similarity of the O polysaccharide of Brucella to that of various other Gram-negative bacteria (e.g. Francisella tularensis, Escherichia coli, Salmonella urbana, Yersinia enterocolitica, Vibrio cholerae, and Stenotrophomonas maltophilia), the appearance of cross-reactions of class M immunoglobulins may occur. The inability to diagnose B. canis by SAT due to lack of cross-reaction is another drawback. False-negative SAT may be caused by the presence of blocking antibodies (the prozone phenomenon) in the α2-globulin (IgA) and in the α-globulin (IgG) fractions.

Dipstick assays are new and promising, based on the binding of Brucella IgM antibodies, and are simple, accurate, and rapid. ELISA typically uses cytoplasmic proteins as antigens. It measures IgM, IgG, and IgA with better sensitivity and specificity than the SAT in most recent comparative studies. The commercial Brucellacapt test, a single-step immunocapture assay for the detection of total anti-Brucella antibodies, is an increasingly used adjunctive test when resources permit. PCR is fast and should be specific. Many varieties of PCR have been developed (e.g. nested PCR, realtime PCR, and PCR-ELISA) and found to have superior specificity and sensitivity in detecting both primary infection and relapse after treatment. Unfortunately, these are not standardized for routine use, and some centres have reported persistent PCR positivity after clinically successful treatment, fuelling the controversy about the existence of prolonged chronic brucellosis.

Other laboratory findings include normal peripheral white cell count, and occasional leucopenia with relative lymphocytosis. The serum biochemical profiles are commonly normal.
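To make the quoted SAT cut-offs concrete (a titre above 1:160 significant in nonendemic areas, above 1:320 in endemic areas), here is a minimal, hypothetical helper that parses a titre string and applies the threshold. The function name and string format are assumptions for illustration only; real interpretation depends on the full clinical picture.

```python
def sat_titre_significant(titre, endemic_area):
    """Apply the SAT significance cut-offs quoted above (sketch only).

    titre: reciprocal dilution written as e.g. "1:320".
    endemic_area: True if the patient is from an endemic area.
    """
    reciprocal = int(titre.split(":")[1])
    cutoff = 320 if endemic_area else 160
    return reciprocal > cutoff

# sat_titre_significant("1:320", endemic_area=False) -> True
# sat_titre_significant("1:320", endemic_area=True)  -> False
```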
Prevention
Surveillance using serological tests, as well as tests on milk such as the milk ring test, can be used for screening and play an important role in campaigns to eliminate the disease. Also, individual animal testing both for trade and for disease-control purposes is practiced. In endemic areas, vaccination is often used to reduce the incidence of infection. An animal vaccine is available that uses modified live bacteria. The World Organisation for Animal Health Manual of Diagnostic Test and Vaccines for Terrestrial Animals provides detailed guidance on the production of vaccines. As the disease is closer to being eliminated, a test and eradication program is required to eliminate it.The main way of preventing brucellosis is by using fastidious hygiene in producing raw milk products, or by pasteurizing all milk that is to be ingested by human beings, either in its unaltered form or as a derivative, such as cheese.
Treatment
Antibiotics such as tetracyclines, rifampicin, and the aminoglycosides streptomycin and gentamicin are effective against Brucella bacteria. However, the use of more than one antibiotic is needed for several weeks, because the bacteria incubate within cells.

The gold standard treatment for adults is daily intramuscular injections of streptomycin 1 g for 14 days and oral doxycycline 100 mg twice daily for 45 days (concurrently). Gentamicin 5 mg/kg by intramuscular injection once daily for 7 days is an acceptable substitute when streptomycin is not available or contraindicated. Another widely used regimen is doxycycline plus rifampicin twice daily for at least 6 weeks. This regimen has the advantage of oral administration. A triple therapy of doxycycline with rifampicin and co-trimoxazole has been used successfully to treat neurobrucellosis. The doxycycline plus streptomycin regimen (for 2 to 3 weeks) is more effective than the doxycycline plus rifampicin regimen (for 6 weeks).

Doxycycline is able to cross the blood–brain barrier, but requires the addition of two other drugs to prevent relapse. Ciprofloxacin and co-trimoxazole therapy is associated with an unacceptably high rate of relapse. In brucellic endocarditis, surgery is required for an optimal outcome. Even with optimal antibrucellic therapy, relapses still occur in 5 to 10% of patients with Malta fever.
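As a simple illustration of the weight-based arithmetic in the gentamicin regimen above (5 mg/kg once daily for 7 days), the hypothetical sketch below computes a daily and cumulative dose; actual dosing is individualized and monitored, so this is illustrative only and not prescribing guidance.

```python
def gentamicin_course_mg(weight_kg, mg_per_kg=5.0, days=7):
    """Return (daily_dose_mg, total_course_mg) for the quoted regimen."""
    daily_dose = weight_kg * mg_per_kg
    return daily_dose, daily_dose * days

# e.g. a 70 kg adult: 350 mg per day, 2450 mg over the 7-day course.
```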
Prognosis
The mortality of the disease in 1909, as recorded in the British Army and Navy stationed in Malta, was 2%. The most frequent cause of death was endocarditis. Recent advances in antibiotics and surgery have been successful in preventing death due to endocarditis. Prevention of human brucellosis can be achieved by eradication of the disease in animals by vaccination and other veterinary control methods such as testing herds/flocks and slaughtering animals when infection is present. Currently, no effective vaccine is available for humans. Boiling milk before consumption, or before using it to produce other dairy products, is protective against transmission via ingestion. Changing traditional food habits of eating raw meat, liver, or bone marrow is necessary, but difficult to implement. Patients who have had brucellosis should probably be excluded indefinitely from donating blood or organs. Exposure of diagnostic laboratory personnel to Brucella organisms remains a problem in both endemic settings and when brucellosis is unknowingly imported by a patient. After appropriate risk assessment, staff with significant exposure should be offered postexposure prophylaxis and followed up serologically for 6 months.
Epidemiology
Argentina
According to a study published in 2002, an estimated 10–13% of farm animals are infected with Brucella species. Annual losses from the disease were calculated at around $60 million. Since 1932, government agencies have undertaken efforts to contain the disease. Currently, all cattle of ages 3–8 months must receive the Brucella abortus strain 19 vaccine.
Australia
Australia is free of cattle brucellosis, although it occurred in the past. Brucellosis of sheep or goats has never been reported. Brucellosis of pigs does occur. Feral pigs are the typical source of human infections.
Canada
On 19 September 1985, the Canadian government declared its cattle population brucellosis-free. Brucellosis ring testing of milk and cream, and testing of cattle to be slaughtered ended on 1 April 1999. Monitoring continues through testing at auction markets, through standard disease-reporting procedures, and through testing of cattle being qualified for export to countries other than the United States.
China
An outbreak infecting humans took place in Lanzhou in 2020 after the Lanzhou Biopharmaceutical Plant, which was involved in vaccine production, accidentally released the bacteria into the atmosphere in its exhaust air owing to the use of expired disinfectant. The outbreak affected over 6,000 people.
Europe
Malta
Until the early 20th century, the disease was endemic in Malta to the point of it being referred to as "Maltese fever". Since 2005, due to a strict regimen of certification of milk animals and widespread use of pasteurization, the illness has been eradicated from Malta.
Republic of Ireland
Ireland was declared free of brucellosis on 1 July 2009. The disease had troubled the country's farmers and veterinarians for several decades. The Irish government submitted an application to the European Commission, which verified that Ireland had been liberated. Brendan Smith, Ireland's then Minister for Agriculture, Food and the Marine, said the elimination of brucellosis was "a landmark in the history of disease eradication in Ireland". Ireland's Department of Agriculture, Food and the Marine intends to reduce its brucellosis eradication programme now that eradication has been confirmed.
UK
Mainland Britain has been free of brucellosis since 1979, although there have been episodic re-introductions since. The last outbreak of brucellosis in Great Britain was in cattle in Cornwall in 2004. Northern Ireland was declared officially brucellosis-free in 2015.
New Zealand
Brucellosis in New Zealand is limited to sheep (B. ovis). The country is free of all other species of Brucella.
United States
Dairy herds in the U.S. are tested at least once a year to be certified brucellosis-free with the Brucella milk ring test. Cows confirmed to be infected are often killed. In the United States, veterinarians are required to vaccinate all young stock, to further reduce the chance of zoonotic transmission. This vaccination is usually referred to as a "calfhood" vaccination. Most cattle receive a tattoo in one of their ears, serving as proof of their vaccination status. This tattoo also includes the last digit of the year they were born.

The first state–federal cooperative efforts towards eradication of brucellosis caused by B. abortus in the U.S. began in 1934.

Brucellosis was originally imported to North America with non-native domestic cattle (Bos taurus), which transmitted the disease to wild bison (Bison bison) and elk (Cervus canadensis). No records exist of brucellosis in ungulates native to America until the early 19th century.
History
Brucellosis first came to the attention of British medical officers in the 1850s in Malta during the Crimean War, and was referred to as Malta Fever. Jeffery Allen Marston (1831–1911) described his own case of the disease in 1861. The causal relationship between organism and disease was first established in 1887 by David Bruce. Bruce considered the agent spherical and classified it as a coccus.
In 1897, Danish veterinarian Bernhard Bang isolated a bacillus as the agent of heightened spontaneous abortion in cows, and the name "Bang's disease" was assigned to this condition. Bang considered the organism rod-shaped and classified it as a bacillus. So at the time, no one knew that this bacillus had anything to do with the causative agent of Malta fever.

Maltese scientist and archaeologist Themistocles Zammit identified unpasteurized goat milk as the major etiologic factor of undulant fever in June 1905.

In the late 1910s, American bacteriologist Alice C. Evans was studying the Bang bacillus and gradually realized that it was virtually indistinguishable from the Bruce coccus. The short-rod versus oblong-round morphologic borderline explained the leveling of the erstwhile bacillus/coccus distinction (that is, these "two" pathogens were not a coccus versus a bacillus but rather one coccobacillus). The Bang bacillus was already known to be enzootic in American dairy cattle, which showed itself in the regularity with which herds experienced contagious abortion. Having made the discovery that the bacteria were certainly nearly identical and perhaps totally so, Evans then wondered why Malta fever was not widely diagnosed or reported in the United States. She began to wonder whether many cases of vaguely defined febrile illnesses were in fact caused by the drinking of raw (unpasteurized) milk. During the 1920s, this hypothesis was vindicated. Such illnesses ranged from undiagnosed and untreated gastrointestinal upset to misdiagnosed febrile and painful versions, some even fatal. This advance in bacteriological science sparked extensive changes in the American dairy industry to improve food safety. The changes included making pasteurization standard and greatly tightening the standards of cleanliness in milkhouses on dairy farms. The expense prompted delay and skepticism in the industry, but the new hygiene rules eventually became the norm. Although these measures have sometimes struck people as overdone in the decades since, being unhygienic at milking time or in the milkhouse, or drinking raw milk, is not a safe alternative.

In the decades after Evans's work, this genus, which received the name Brucella in honor of Bruce, was found to contain several species with varying virulence. The name "brucellosis" gradually replaced the 19th-century names Mediterranean fever and Malta fever.

Neurobrucellosis, neurological involvement in brucellosis, was first described in 1879. In the late 19th century, its symptoms were described in more detail by M. Louis Hughes, a Surgeon-Captain of the Royal Army Medical Corps stationed in Malta, who isolated Brucella organisms from a patient with meningo-encephalitis. In 1989, neurologists in Saudi Arabia made significant contributions to the medical literature involving neurobrucellosis.
Biological warfare
Brucella species were weaponized by several advanced countries by the mid-20th century. In 1954, B. suis became the first agent weaponized by the United States at its Pine Bluff Arsenal near Pine Bluff, Arkansas. Brucella species survive well in aerosols and resist drying. Brucella and all other remaining biological weapons in the U.S. arsenal were destroyed in 1971–72 when the American offensive biological warfare program was discontinued by order of President Richard Nixon.

The experimental American bacteriological warfare program focused on three agents of the Brucella group:
Porcine brucellosis (agent US)
Bovine brucellosis (agent AA)
Caprine brucellosis (agent AM)

Agent US was in advanced development by the end of World War II. When the United States Air Force (USAF) wanted a biological warfare capability, the Chemical Corps offered Agent US in the M114 bomblet, based on the four-pound bursting bomblet developed for spreading anthrax during World War II. Though the capability was developed, operational testing indicated the weapon was less than desirable, and the USAF designated it as an interim capability until it could eventually be replaced by a more effective biological weapon.

The main drawback of using the M114 with Agent US was that it acted mainly as an incapacitating agent, whereas the USAF administration wanted weapons that were deadly. The stability of M114 in storage was too low to allow for storing it at forward air bases, and the logistical requirements to neutralize a target were far higher than originally planned. Ultimately, this would have required too much logistical support to be practical in the field.

Agents US and AA had a median infective dose of 500 organisms/person, and for Agent AM it was 300 organisms/person. The incubation time was believed to be about 2 weeks, with a duration of infection of several months. The lethality estimate was, based on epidemiological information, 1 to 2 per cent. Agent AM was believed to be a somewhat more virulent disease, with a fatality rate of 3 per cent being expected.
Other animals
Species infecting domestic livestock are B. abortus (cattle, bison, and elk), B. canis (dogs), B. melitensis (goats and sheep), B. ovis (sheep), and B. suis (caribou and pigs). Brucella species have also been isolated from several marine mammal species (cetaceans and pinnipeds).
Cattle
B. abortus is the principal cause of brucellosis in cattle. The bacteria are shed from an infected animal at or around the time of calving or abortion. Once exposed, the likelihood of an animal becoming infected is variable, depending on age, pregnancy status, and other intrinsic factors of the animal, as well as the number of bacteria to which the animal was exposed. The most common clinical signs of cattle infected with B. abortus are high incidences of abortions, arthritic joints, and retained placenta.

The two main causes for spontaneous abortion in animals are erythritol, which can promote infections in the fetus and placenta, and the lack of anti-Brucella activity in the amniotic fluid. Males can also harbor the bacteria in their reproductive tracts, namely the seminal vesicles, ampullae, testicles, and epididymes.
Dogs
The causative agent of brucellosis in dogs, B. canis, is transmitted to other dogs through breeding and contact with aborted fetuses. Brucellosis can occur in humans who come in contact with infected aborted tissue or semen. The bacteria in dogs normally infect the genitals and lymphatic system, but can also spread to the eyes, kidneys, and intervertebral discs. Brucellosis in the intervertebral disc is one possible cause of discospondylitis. Symptoms of brucellosis in dogs include abortion in female dogs and scrotal inflammation and orchitis in males. Fever is uncommon. Infection of the eye can cause uveitis, and infection of the intervertebral disc can cause pain or weakness. Blood testing of the dogs prior to breeding can prevent the spread of this disease. It is treated with antibiotics, as with humans, but it is difficult to cure.
Aquatic wildlife
Brucellosis in cetaceans is caused by the bacterium B. ceti. First discovered in the aborted fetus of a bottlenose dolphin, the structure of B. ceti is similar to Brucella in land animals. B. ceti is commonly detected in two suborders of cetaceans, the Mysticeti and Odontoceti. The Mysticeti include four families of baleen whales, filter-feeders, and the Odontoceti include two families of toothed cetaceans ranging from dolphins to sperm whales. B. ceti is believed to transfer from animal to animal through sexual intercourse, maternal feeding, aborted fetuses, placental tissues, from mother to fetus, or through fish reservoirs. Brucellosis is a reproductive disease, so it has a severely negative impact on the population dynamics of a species. This becomes a greater issue when the already low population numbers of cetaceans are taken into consideration. B. ceti has been identified in four of the 14 cetacean families, but the antibodies have been detected in seven of the families. This indicates that B. ceti is common amongst cetacean families and populations. Only a small percentage of exposed individuals become ill or die. However, particular species apparently are more likely to become infected by B. ceti. The harbor porpoise, striped dolphin, white-sided dolphin, bottlenose dolphin, and common dolphin have the highest frequency of infection amongst odontocetes. In the mysticete families, the northern minke whale is by far the most infected species. Dolphins and porpoises are more likely to be infected than cetaceans such as whales. With regard to sex and age biases, the infections do not seem influenced by the age or sex of an individual. Although fatal to cetaceans, B. ceti has a low infection rate for humans.
Terrestrial wildlife
The disease in its various strains can infect multiple wildlife species, including elk (Cervus canadensis), bison (Bison bison), African buffalo (Syncerus caffer), European wild boar (Sus scrofa), caribou (Rangifer tarandus), moose (Alces alces), and marine mammals (see the section on aquatic wildlife above). While some regions use vaccines to prevent the spread of brucellosis between infected and uninfected wildlife populations, no suitable brucellosis vaccine for terrestrial wildlife has been developed. This gap in medicinal knowledge creates more pressure for management practices that reduce spread of the disease.

Wild bison and elk in the greater Yellowstone area are the last remaining reservoir of B. abortus in the US. The recent transmission of brucellosis from elk back to cattle in Idaho and Wyoming illustrates how the area, as the last remaining reservoir in the United States, may adversely affect the livestock industry. Eliminating brucellosis from this area is a challenge, as many viewpoints exist on how to manage diseased wildlife. However, the Wyoming Game and Fish Department has recently begun to protect scavengers (particularly coyotes and red foxes) on elk feedgrounds, because they act as sustainable, no-cost biological control agents by removing infected elk fetuses quickly.

The National Elk Refuge in Jackson, Wyoming asserts that the intensity of the winter feeding program affects the spread of brucellosis more than the population size of elk and bison. Since concentrating animals around food plots accelerates spread of the disease, management strategies to reduce herd density and increase dispersion could limit its spread.
Effects on hunters
Hunters may be at additional risk for exposure to brucellosis due to increased contact with susceptible wildlife, including predators that may have fed on infected prey. Hunting dogs can also be at risk of infection. Exposure can occur through contact with open wounds or by directly inhaling the bacteria while cleaning game </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Burning mouth syndrome and can you explain it in depth? [/INST] Burning mouth syndrome (BMS) is a burning, tingling or scalding sensation in the mouth, lasting for at least four to six months, with no underlying known dental or medical cause. No related signs of disease are found in the mouth. People with burning mouth syndrome may also have subjective xerostomia (a dry mouth sensation for which no cause, such as reduced salivary flow, can be found), paraesthesia (altered sensation such as tingling in the mouth), or an altered sense of taste or smell.

A burning sensation in the mouth can be a symptom of another disease when local or systemic factors are found to be implicated; this is not considered to be burning mouth syndrome, which is a syndrome of medically unexplained symptoms. The International Association for the Study of Pain defines burning mouth syndrome as "a distinctive nosological entity characterized by unremitting oral burning or similar pain in the absence of detectable mucosal changes" and "burning pain in the tongue or other oral mucous membranes", and the International Headache Society defines it as "an intra-oral burning sensation for which no medical or dental cause can be found". To ensure the correct diagnosis of burning mouth syndrome, Research Diagnostic Criteria (RDC/BMS) have been developed.

Insufficient evidence leaves it unclear if effective treatments exist.
Signs and symptoms
By definition, BMS has no signs. Sometimes affected persons will attribute the symptoms to sores in the mouth, but these are in fact normal anatomic structures (e.g. lingual papillae, varices). Symptoms of BMS are variable, but the typical clinical picture is given below, considered according to the SOCRATES pain assessment method. If clinical signs are visible, then another explanation for the burning sensation may be present. Erythema (redness) and edema (swelling) of papillae on the tip of the tongue may be a sign that the tongue is being habitually pressed against the teeth. The number and size of filiform papillae may be reduced. If the tongue is very red and smooth, then there is likely a local or systemic cause (e.g. erythematous candidiasis, anemia).
Causes
Theories
In about 50% of cases of burning mouth sensation no identifiable cause is apparent; these cases are termed (primary) BMS. Several theories of what causes BMS have been proposed, and these are supported by varying degrees of evidence, but none is proven.
As most people with BMS are postmenopausal women, one theory of the cause of BMS is estrogen or progesterone deficit, but a strong statistical correlation has not been demonstrated. Another theory is that BMS is related to autoimmunity, as abnormal antinuclear antibody and rheumatoid factor can be found in the serum of more than 50% of persons with BMS, but these levels may also be seen in elderly people who do not have any of the symptoms of this condition. Whilst salivary flow rates are normal and there are no clinical signs of a dry mouth to explain a complaint of dry mouth, levels of salivary proteins and phosphate may be elevated and salivary pH or buffering capacity may be reduced.

Depression and anxiety are strongly associated with BMS. It is not known if depression is a cause or result of BMS, as depression may develop in any setting of constant unrelieved irritation, pain, and sleep disturbance. It is estimated that about 20% of BMS cases involve psychogenic factors, and some consider BMS a psychosomatic illness, caused by cancerophobia, concern about sexually transmitted infections, or hypochondriasis.

Chronic low-grade trauma due to parafunctional habits (e.g. rubbing the tongue against the teeth or pressing it against the palate) may be involved. BMS is more common in persons with Parkinson's disease, so it has been suggested that it is a disorder of reduced pain threshold and increased sensitivity. Often people with BMS have unusually raised taste sensitivity, termed hypergeusia ("super tasters"). Dysgeusia (usually a bitter or metallic taste) is present in about 60% of people with BMS, a factor which led to the concept of a defect in sensory peripheral neural mechanisms. Changes in the oral environment, such as changes in the composition of saliva, may induce neuropathy or interruption of nerve transduction. The onset of BMS is often spontaneous, although it may be gradual. There is sometimes a correlation with a major life event or stressful period in life. In women, the onset of BMS is most likely three to twelve years following menopause.
Other causes of an oral burning sensation
Several local and systemic factors can give a burning sensation in the mouth without any clinical signs, and therefore may be misdiagnosed as BMS. Some sources state that where there is an identifiable cause for a burning sensation, this can be termed "secondary BMS" to distinguish it from primary BMS. However, the accepted definitions of BMS hold that there are no identifiable causes for BMS, and where there are identifiable causes, the term BMS should not be used.

Some causes of a burning mouth sensation may be accompanied by clinical signs in the mouth or elsewhere on the body. For example, burning mouth pain may be a symptom of allergic contact stomatitis. This is a contact sensitivity (type IV hypersensitivity reaction) in the oral tissues to common substances such as sodium lauryl sulfate, cinnamaldehyde or dental materials. However, allergic contact stomatitis is accompanied by visible lesions and gives a positive response on patch testing. Acute (short-term) exposure to the allergen (the substance triggering the allergic response) causes non-specific inflammation and possibly mucosal ulceration. Chronic (long-term) exposure to the allergen may appear as chronic inflammatory, lichenoid (lesions resembling oral lichen planus), or plasma cell gingivitis, which may be accompanied by glossitis and cheilitis. Apart from BMS itself, a full list of causes of an oral burning sensation is given below:
Deficiency of iron, folic acid or various B vitamins (glossitis e.g. due to anemia), or zinc
Neuropathy, e.g. following damage to the chorda tympani nerve.
Hypothyroidism.
Medications ("scalded mouth syndrome", unrelated to BMS) - protease inhibitors and angiotensin-converting-enzyme inhibitors (e.g. captopril).
Type 2 diabetes
True xerostomia, caused by hyposalivation e.g. Sjögrens syndrome
Parafunctional activity, e.g. nocturnal bruxism or a tongue thrusting habit.
Restriction of the tongue by poorly constructed dentures.
Geographic tongue.
Oral candidiasis.
Herpetic infection (herpes simplex virus).
Fissured tongue.
Lichen planus.
Allergies and contact sensitivities to foods, metals, and other substances.
Hiatal hernia.
Human immunodeficiency virus.
Multiple myeloma
Diagnosis
BMS is a diagnosis of exclusion, i.e. all other explanations for the symptoms are ruled out before the diagnosis is made. There are no clinically useful investigations that would help to support a diagnosis of BMS (by definition all tests would have normal results), but blood tests and/or urinalysis may be useful to rule out anemia, deficiency states, hypothyroidism and diabetes. Investigation of a dry mouth symptom may involve sialometry, which objectively determines if there is any reduction of the salivary flow rate (hyposalivation). Oral candidiasis can be tested for with the use of swabs, smears, an oral rinse or saliva samples. It has been suggested that allergy testing (e.g., patch test) is inappropriate in the absence of a clear history and clinical signs in people with a burning sensation in the mouth. The diagnosis of people with a burning symptom may also involve psychologic screening, e.g. with depression questionnaires.

The second edition of the International Classification of Headache Disorders lists diagnostic criteria for "Glossodynia and Sore Mouth":
A. Pain in the mouth present daily and persisting for most of the day,
B. Oral mucosa is of normal appearance,
C. Local and systemic diseases have been excluded.
Classification
A burning sensation in the mouth may be primary (i.e. burning mouth syndrome) or secondary to systemic or local factors. Other sources refer to a "secondary BMS" with a similar definition, i.e. a burning sensation which is caused by local or systemic factors, or "where oral burning is explained by a clinical abnormality". However, this contradicts the accepted definition of BMS, which specifies that no cause can be identified. "Secondary BMS" could therefore be considered a misnomer. BMS is an example of dysesthesia, or a distortion of sensation.

Some consider BMS to be a variant of atypical facial pain. More recently, BMS has been described as one of the 4 recognizable symptom complexes of chronic facial pain, along with atypical facial pain, temporomandibular joint dysfunction and atypical odontalgia. BMS has been subdivided into three general types, with type two being the most common and type three being the least common. Types one and two have unremitting symptoms, whereas type three may show remitting symptoms.
Type 1 - Symptoms not present upon waking, and then increase throughout the day
Type 2 - Symptoms upon waking and through the day
Type 3 - No regular pattern of symptoms

Sometimes those terms specific to the tongue (e.g. glossodynia) are reserved for when the burning sensation is located only on the tongue.
Treatment
If a cause can be identified for a burning sensation in the mouth, then treatment of this underlying factor is recommended. If symptoms persist despite treatment, a diagnosis of BMS is confirmed. BMS has been traditionally treated by reassurance and with antidepressants, anxiolytics or anticonvulsants.

A 2016 Cochrane review of treatment for burning mouth syndrome concluded that strong evidence of an effective treatment was not available; however, a systematic review in 2018 found that the use of antidepressants and alpha-lipoic acid gave promising results.

Other treatments which have been used include atypical antipsychotics, histamine receptor antagonists, and dopamine agonists. Supplementation with vitamin complexes and cognitive behavioral therapy may be helpful in the management of burning mouth syndrome.
Prognosis
BMS is benign (importantly, it is not a symptom of oral cancer), but as a cause of chronic pain which is poorly controlled, it can diminish quality of life and may become a fixation that cannot be ignored, interfering with work and other daily activities. Two thirds of people with BMS have a spontaneous partial recovery six to seven years after the initial onset, but in others the condition is permanent. Recovery is often preceded by a change in the character of the symptom from constant to intermittent </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Bursitis and can you explain it in depth? [/INST] Bursitis is the inflammation of one or more bursae (fluid-filled sacs of synovial fluid) in the body. They are lined with a synovial membrane that secretes a lubricating synovial fluid. There are more than 150 bursae in the human body. The bursae rest at the points where internal functionaries, such as muscles and tendons, slide across bone. Healthy bursae create a smooth, almost frictionless functional gliding surface making normal movement painless. When bursitis occurs, however, movement relying on the inflamed bursa becomes difficult and painful. Moreover, movement of tendons and muscles over the inflamed bursa aggravates its inflammation, perpetuating the problem. Muscles may also stiffen.
Signs and symptoms
Bursitis commonly affects superficial bursae. These include the subacromial, prepatellar, retrocalcaneal, and pes anserinus bursae of the shoulder, knee, heel and shin, etc. Symptoms vary from localized warmth and erythema to joint pain and stiffness, to stinging pain that surrounds the joint around the inflamed bursa. In this condition, the pain usually is worse during and after activity, and then the bursa and the surrounding joint become stiff the next morning.

Bursitis could possibly also cause a snapping, grinding or popping sound – known as snapping scapula syndrome – when it occurs in the shoulder joint. This is not necessarily painful.
Cause
There can be several concurrent causes. Trauma, auto-immune disorders, infection and iatrogenic (medicine-related) factors can all cause bursitis. Bursitis is commonly caused by repetitive movement and excessive pressure. Shoulders, elbows and knees are the most commonly affected. Inflammation of the bursae may also be caused by other inflammatory conditions such as rheumatoid arthritis, scleroderma, systemic lupus erythematosus, and gout. Immune deficiencies, including HIV and diabetes, can also cause bursitis. Infrequently, scoliosis can cause bursitis of the shoulders; however, shoulder bursitis is more commonly caused by overuse of the shoulder joint and related muscles.

Traumatic injury is another cause of bursitis. The inflammation causes irritation because the swollen bursa no longer fits in the original small area between the bone and the muscle or tendon that moves over it. When the bone increases pressure upon the bursa, bursitis results. Sometimes the cause is unknown </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Clostridioides difficile infection and can you explain it in depth? [/INST] Clostridioides difficile infection (CDI or C-diff), also known as Clostridium difficile infection, is a symptomatic infection due to the spore-forming bacterium Clostridioides difficile. Symptoms include watery diarrhea, fever, nausea, and abdominal pain. It makes up about 20% of cases of antibiotic-associated diarrhea. Antibiotics can contribute to detrimental changes in gut microbiota; specifically, they decrease short-chain fatty acid absorption, which results in osmotic, or watery, diarrhea. Complications may include pseudomembranous colitis, toxic megacolon, perforation of the colon, and sepsis.

Clostridioides difficile infection is spread by bacterial spores found within feces. Surfaces may become contaminated with the spores, with further spread occurring via the hands of healthcare workers. Risk factors for infection include antibiotic or proton pump inhibitor use, hospitalization, other health problems, and older age. Diagnosis is by stool culture or testing for the bacteria's DNA or toxins. If a person tests positive but has no symptoms, the condition is known as C. difficile colonization rather than an infection.

Prevention efforts include terminal room cleaning in hospitals, limiting antibiotic use, and handwashing campaigns in hospitals. Alcohol-based hand sanitizer does not appear effective. Discontinuation of antibiotics may result in resolution of symptoms within three days in about 20% of those infected. The antibiotics metronidazole, vancomycin, or fidaxomicin will cure the infection. Retesting after treatment, as long as the symptoms have resolved, is not recommended, as a person may often remain colonized. Recurrences have been reported in up to 25% of people. Some tentative evidence indicates fecal microbiota transplantation and probiotics may decrease the risk of recurrence.

C. difficile infections occur in all areas of the world. About 453,000 cases occurred in the United States in 2011, resulting in 29,000 deaths. Global rates of disease increased between 2001 and 2016. C. difficile infections occur more often in women than men. The bacterium was discovered in 1935 and found to be disease-causing in 1978. In the United States, healthcare-associated infections increase the cost of care by US$1.5 billion each year. Although C. difficile is a common healthcare-associated infection, at most 30% of infections are transmitted within hospitals. The majority of infections are acquired outside of hospitals, where medications and a recent history of diarrheal illnesses (e.g. laxative abuse or food poisoning due to salmonellosis) are thought to drive the risk of colonization.
Signs and symptoms
Signs and symptoms of CDI range from mild diarrhea to severe life-threatening inflammation of the colon.

In adults, a clinical prediction rule found the best signs to be significant diarrhea ("new onset of more than three partially formed or watery stools per 24-hour period"), recent antibiotic exposure, abdominal pain, fever (up to 40.5 °C or 105 °F), and a distinctive foul odor to the stool resembling horse manure. In a hospital population, prior antibiotic treatment plus diarrhea or abdominal pain had a sensitivity of 86% and a specificity of 45%. In this study, with a prevalence of positive cytotoxin assays of 14%, the positive predictive value was 18% and the negative predictive value was 94%.

In children, the most prevalent symptom of a CDI is watery diarrhea with at least three bowel movements a day for two or more days, which may be accompanied by fever, loss of appetite, nausea, and/or abdominal pain. Those with a severe infection also may develop serious inflammation of the colon and have little or no diarrhea.
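The predictive values quoted above follow from Bayes' rule applied to the stated sensitivity, specificity, and prevalence. The sketch below reproduces the arithmetic; the small differences from the quoted 18% and 94% are consistent with rounding of the published inputs.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute PPV and NPV from test characteristics via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

ppv, npv = predictive_values(0.86, 0.45, 0.14)
# ppv ~= 0.20 and npv ~= 0.95, close to the quoted 18% and 94%.
```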
Cause
Infection with C. difficile bacteria is responsible for C. difficile diarrhea.
C. difficile
Clostridia are anaerobic motile bacteria, ubiquitous in nature, and especially prevalent in soil. Under the microscope, they appear as long, irregular (often drumstick- or spindle-shaped) cells with a bulge at their terminal ends. Under Gram staining, C. difficile cells are Gram-positive and show optimum growth on blood agar at human body temperatures in the absence of oxygen. When stressed, the bacteria produce spores that are able to tolerate extreme conditions that the active bacteria cannot tolerate.

C. difficile may colonize the human colon without symptoms; approximately 2–5% of the adult population are carriers, although it varies considerably with demographics. The risk of colonization has been linked to a history of unrelated diarrheal illnesses (e.g. laxative abuse and food poisoning due to salmonellosis or Vibrio cholerae infection).

Pathogenic C. difficile strains produce multiple toxins. The most well-characterized are enterotoxin (Clostridium difficile toxin A) and cytotoxin (Clostridium difficile toxin B), both of which may produce diarrhea and inflammation in infected people, although their relative contributions have been debated. Toxins A and B are glucosyltransferases that target and inactivate the Rho family of GTPases. Toxin B (cytotoxin) induces actin depolymerization by a mechanism correlated with a decrease in the ADP-ribosylation of the low molecular mass GTP-binding Rho proteins. Another toxin, binary toxin, also has been described, but its role in disease is not fully understood.

Antibiotic treatment of CDIs may be difficult, due both to antibiotic resistance and physiological factors of the bacteria (spore formation, protective effects of the pseudomembrane). The emergence of a new and highly toxic strain of C. difficile that is resistant to fluoroquinolone antibiotics such as ciprofloxacin and levofloxacin, said to be causing geographically dispersed outbreaks in North America, was reported in 2005. The U.S. Centers for Disease Control and Prevention in Atlanta warned of the emergence of an epidemic strain with increased virulence, antibiotic resistance, or both.

C. difficile is transmitted from person to person by the fecal-oral route. The organism forms heat-resistant spores that are not killed by alcohol-based hand cleansers or routine surface cleaning. Thus, these spores survive in clinical environments for long periods. Because of this, the bacteria may be cultured from almost any surface. Once spores are ingested, their acid-resistance allows them to pass through the stomach unscathed. Upon exposure to bile acids, they germinate and multiply into vegetative cells in the colon. People without a history of gastrointestinal disturbances due to antibiotic use or diarrheal illness are less likely to become colonized by C. difficile.

In 2005, molecular analysis led to the identification of the C. difficile strain type characterized as group BI by restriction endonuclease analysis, as North American pulse-field-type NAP1 by pulsed-field gel electrophoresis, and as ribotype 027; the differing terminology reflects the predominant techniques used for epidemiological typing. This strain is referred to as C. difficile BI/NAP1/027.
Risk factors
Antibiotics
C. difficile colitis is associated most strongly with the use of these antibiotics: fluoroquinolones, cephalosporins, and clindamycin. Some research suggests the routine use of antibiotics in the raising of livestock is contributing to outbreaks of bacterial infections such as C. difficile.
Healthcare environment
People are most often infected in hospitals, nursing homes, or other medical institutions, although infection outside medical settings is increasing. Individuals can develop the infection if they touch objects or surfaces that are contaminated with feces and then touch their mouth or mucous membranes. Healthcare workers could possibly spread the bacteria or contaminate surfaces through hand contact. The rate of C. difficile acquisition is estimated to be 13% in those with hospital stays of up to two weeks, and 50% with stays longer than four weeks. Long-term hospitalization or residence in a nursing home within the previous year are independent risk factors for increased colonization.
Acid suppression medication
Increasing rates of community-acquired CDI are associated with the use of medication to suppress gastric acid production: H2-receptor antagonists increased the risk 1.5-fold, and proton pump inhibitors 1.7-fold with once-daily use and 2.4-fold with more-than-once-daily use.
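To see what such relative risks mean in absolute terms, a baseline rate is needed. The sketch below assumes a hypothetical community-acquired CDI baseline (the 30 per 100,000 figure is an assumption for illustration, not from the source).

    # Hypothetical baseline: annual community-acquired CDI risk (assumed figure).
    baseline = 30 / 100_000
    for exposure, rr in [("no acid suppression", 1.0),
                         ("H2-receptor antagonist", 1.5),
                         ("PPI, once daily", 1.7),
                         ("PPI, more than once daily", 2.4)]:
        print(f"{exposure:>26}: {baseline * rr * 100_000:.0f} per 100,000 per year")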
Diarrheal illnesses
People with a recent history of diarrheal illness (such as from laxative abuse or gastrointestinal pathogens) are at increased risk of becoming colonized by C. difficile when exposed to spores. Disturbances that increase intestinal motility are thought to transiently elevate the concentration of available dietary sugars, allowing C. difficile to proliferate and gain a foothold in the gut. Although not all colonization events lead to disease, asymptomatic carriers remain colonized for years at a time. During this time, the abundance of C. difficile varies considerably from day to day, causing periods of increased shedding that could substantially contribute to community-acquired infection rates.
Other
As a result of suppression of healthy bacteria via a loss of their bacterial food source, prolonged use of an elemental diet increases the risk of developing C. difficile infection. Low serum albumin levels are a risk factor for the development of C. difficile infection and, when infected, for severe disease. The protective effects of serum albumin may be related to the capability of this protein to bind C. difficile toxin A and toxin B, thus impairing their entry into enterocytes.
Pathophysiology
The use of systemic antibiotics, including broad-spectrum penicillins/cephalosporins, fluoroquinolones, and clindamycin, causes the normal microbiota of the bowel to be altered. In particular, when the antibiotic kills off other competing bacteria in the intestine, any bacteria remaining will have less competition for space and nutrients. The net effect is to permit more extensive growth than normal of certain bacteria. C. difficile is one such type of bacterium. In addition to proliferating in the bowel, C. difficile also produces toxins. Without either toxin A or toxin B, C. difficile may colonize the gut, but is unlikely to cause pseudomembranous colitis. The colitis associated with severe infection is part of an inflammatory reaction, with the "pseudomembrane" formed by a viscous collection of inflammatory cells, fibrin, and necrotic cells.
Diagnosis
Prior to the advent of tests to detect C. difficile toxins, the diagnosis was most often made by colonoscopy or sigmoidoscopy. The appearance of "pseudomembranes" on the mucosa of the colon or rectum is highly suggestive, but not diagnostic, of the condition. The pseudomembranes are composed of an exudate made of inflammatory debris and white blood cells. Although colonoscopy and sigmoidoscopy are still employed, stool testing for the presence of C. difficile toxins is now frequently the first-line diagnostic approach. Usually, only two toxins are tested for (toxin A and toxin B), but the organism produces several others. This test is not 100% accurate, with a considerable false-negative rate even with repeat testing.
Classification
CDI may be classified into non-severe, severe, and fulminant CDI, depending on creatinine and white blood cell count parameters.
Cytotoxicity assay
C. difficile toxins have a cytopathic effect in cell culture, and neutralization of any effect observed with specific antisera is the practical gold standard for studies investigating new CDI diagnostic techniques. Toxigenic culture, in which organisms are cultured on selective media and tested for toxin production, likewise serves as a gold standard and is the most sensitive and specific test, although it is slow and labor-intensive.
Toxin ELISA
Assessment of the A and B toxins by enzyme-linked immunosorbent assay (ELISA) for toxin A or B (or both) has a sensitivity of 63–99% and a specificity of 93–100%. Previously, experts recommended sending as many as three stool samples to rule out disease if initial tests were negative, but evidence suggests repeated testing during the same episode of diarrhea is of limited value and should be discouraged. C. difficile toxin should clear from the stool of somebody previously infected if treatment is effective. Many hospitals test only for the prevalent toxin A. Strains that express only the B toxin are now present in many hospitals, however, so testing for both toxins should occur. Not testing for both may contribute to a delay in obtaining laboratory results, which is often the cause of prolonged illness and poor outcomes.
Other stool tests
Stool leukocyte measurements and stool lactoferrin levels also have been proposed as diagnostic tests, but may have limited diagnostic accuracy.
PCR
Testing of stool samples by real-time polymerase chain reaction is able to detect C. difficile about 93% of the time and, when positive, is incorrectly positive about 3% of the time. This is more accurate than cytotoxigenic culture or the cell cytotoxicity assay. Another benefit is that the result can be achieved within three hours. Drawbacks include a higher cost and the fact that the test looks only for the gene for the toxin and not the toxin itself. The latter means that if the test is used without confirmation, overdiagnosis may occur. Repeat testing may be misleading, and testing specimens more than once every seven days in people without new symptoms is highly unlikely to yield useful information.
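One way to see the overdiagnosis point is to compute the positive predictive value at different pretest probabilities. The sketch below reads the quoted figures as sensitivity = 93% and a 3% false-positive rate (one possible interpretation of the source) and shows how PPV degrades when C. difficile is uncommon in the tested population.

    sens, fpr = 0.93, 0.03  # assumed reading of the quoted figures
    for prev in (0.01, 0.05, 0.20):
        ppv = sens * prev / (sens * prev + fpr * (1 - prev))
        print(f"pretest probability {prev:.0%}: PPV = {ppv:.0%}")
    # prints roughly 24%, 62%, and 89%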
Prevention
Housing people in private rooms (self-containment) is important to prevent the spread of C. difficile, and contact precautions are an important part of preventing its spread. C. difficile does not often occur in people who are not taking antibiotics, so limiting antibiotic use decreases the risk.
Antibiotics
The most effective method for preventing CDI is proper antibiotic prescribing. In the hospital setting, where CDI is most common, most people who develop CDI are exposed to antibiotics. Although proper antibiotic prescribing is highly recommended, about 50% of antibiotic use is considered inappropriate, whether in the hospital, clinic, community, or academic setting. Limiting antibiotic use, and unnecessary prescriptions in general, is the measure most strongly associated with decreased CDI in both outbreak and non-outbreak settings. Further, reactions to medication may be severe: CDI infections were the most common contributor to adverse drug events seen in U.S. hospitals in 2011. In some regions of the UK, reduced use of fluoroquinolone antibiotics seems to have led to reduced rates of CDI.
Probiotics
Some evidence indicates probiotics may be useful to prevent infection and recurrence. Treatment with Saccharomyces boulardii in those with C. difficile who are not immunocompromised also may be useful. Initially, in 2010, the Infectious Diseases Society of America recommended against their use due to the risk of complications. Subsequent reviews, however, did not find an increase in adverse effects with treatment, and overall treatment appears safe and moderately effective in preventing Clostridium difficile-associated diarrhea. One study found an apparent "protective effect" of probiotics, specifically a 51% reduction in the risk of antibiotic-associated diarrhea (AAD) in 3,631 outpatients, although the types of infections in the subjects were not specified. Yogurt, tablets, and dietary supplements are a few examples of the probiotics available to people.
Infection control
Rigorous infection protocols are required to minimize the risk of transmission. Infection control measures, such as wearing gloves and dedicating noncritical medical devices to a single person with CDI, are effective at prevention by limiting the spread of C. difficile in the hospital setting. In addition, washing with soap and water will wash away the spores from contaminated hands, but alcohol-based hand rubs are ineffective. These precautions should remain in place among those in hospital for at least 2 days after the diarrhea has stopped. Bleach wipes containing 0.55% sodium hypochlorite have been shown to kill the spores and prevent transmission. Installing lidded toilets and closing the lid prior to flushing also reduces the risk of contamination. Those who have CDIs should be in rooms with other people with CDIs or by themselves when in hospital. Common hospital disinfectants are ineffective against C. difficile spores and may promote spore formation, but various oxidants (e.g., a 1% sodium hypochlorite solution) rapidly destroy spores. Hydrogen peroxide vapor (HPV) systems used to sterilize a room after treatment is completed have been shown to reduce infection rates and the risk of infection to others, with reported reductions in the incidence of CDI of 42% to 53%. Ultraviolet cleaning devices, and housekeeping staff especially dedicated to disinfecting the rooms of people with C. difficile after discharge, may be effective.
Treatment
Carrying C. difficile without symptoms is common. Treatment in those without symptoms is controversial. In general, mild cases do not require specific treatment. Oral rehydration therapy is useful in treating dehydration associated with the diarrhea.
Medications
Several different antibiotics are used for C. difficile, with the available agents being more or less equally effective. Vancomycin or fidaxomicin by mouth are typically recommended for mild, moderate, and severe infections. They are also the first-line treatment for pregnant women, especially since metronidazole may cause birth defects. A typical regimen is vancomycin 125 mg by mouth four times a day for 10 days; fidaxomicin is taken at 200 mg twice daily for 10 days. Vancomycin may also be given rectally if the person develops an ileus. Fidaxomicin is tolerated as well as vancomycin and may have a lower risk of recurrence. It has been found to be as effective as vancomycin in those with mild to moderate disease, may be better than vancomycin in those with severe disease, and may be used in those who have recurrent infections that have not responded to other antibiotics. Metronidazole (500 mg three times daily for 10 days) by mouth is recommended as an alternative treatment only when the affected person is allergic to first-line treatments, is unable to tolerate them, or has financial difficulties preventing access to them. In fulminant disease, vancomycin by mouth and intravenous metronidazole are commonly used together. Medications used to slow or stop diarrhea, such as loperamide, may only be used after initiating treatment. Cholestyramine, an ion-exchange resin, is effective in binding both toxin A and toxin B, slowing bowel motility, and helping prevent dehydration; it is recommended with vancomycin. A last-resort treatment in those who are immunosuppressed is intravenous immunoglobulin. Monoclonal antibodies against C. difficile toxins A and B, such as bezlotoxumab, are approved to prevent recurrence of C. difficile infection.
Probiotics
Evidence to support the use of probiotics in the treatment of active disease is insufficient. Researchers have recently begun taking a mechanistic approach to fecal-derived products. It is known that certain microbes with 7α-dehydroxylase activity can metabolize primary to secondary bile acids, which inhibit C. difficile. Thus, incorporating such microbes into therapeutic products such as probiotics may be protective, although more pre-clinical investigation is needed.
Fecal microbiota transplantation
Fecal microbiota transplant, also known as a stool transplant, is roughly 85% to 90% effective in those for whom antibiotics have not worked. It involves infusion of the microbiota acquired from the feces of a healthy donor to reverse the bacterial imbalance responsible for the recurring nature of the infection. The procedure replenishes the normal colonic microbiota that had been wiped out by antibiotics, and re-establishes resistance to colonization by Clostridioides difficile. Side effects, at least initially, are few. Some evidence suggests fecal transplants can be delivered in the form of a pill. They are available in the United States, but were not FDA-approved as of 2015.
Surgery
In those with severe C. difficile colitis, colectomy may improve the outcomes. Specific criteria may be used to determine who will benefit most from surgery.
Recurrent infection
Recurrent CDI occurs in 20 to 30% of patients, with rates of recurrence increasing with each subsequent episode. In clinical settings, it is virtually impossible to distinguish a relapse of CDI caused by the same strain of C. difficile from reinfection with a new strain.
Several treatment options exist for recurrent C. difficile infection. For the first episode of recurrent infection, the 2017 IDSA guidelines recommend oral vancomycin at a dose of 125 mg four times daily for 10 days if metronidazole was used for the initial episode. If oral vancomycin was used for the initial episode, the options are a prolonged oral vancomycin pulse of 125 mg four times daily for 10–14 days followed by a taper (twice daily for one week, then every two to three days for 2–8 weeks), or fidaxomicin 200 mg twice daily for 10 days. For a second recurrent episode, the IDSA recommends options including the aforementioned vancomycin pulse followed by the prolonged taper; oral vancomycin 125 mg four times daily for 10 days followed by rifaximin 400 mg three times daily for 20 days; fidaxomicin 200 mg twice daily for 10 days; or a fecal microbiota transplant. For patients whose C. difficile infections fail to resolve with traditional antibiotic regimens, fecal microbiota transplants boast an average cure rate of over 90%; in a review of 317 patients, they led to resolution in 92% of persistent and recurrent disease cases. Restoration of the gut flora is paramount in the struggle against recurrent CDI. With effective antibiotic therapy, C. difficile can be reduced and natural colonization resistance can develop over time as the natural microbial community recovers; reinfection or recurrence may occur before this process is complete. Fecal microbiota transplant may expedite this recovery by directly replacing the missing microbial community members. However, human-derived fecal matter is difficult to standardize and carries multiple potential risks, including the transfer of infectious material and the long-term consequences of inoculating the gut with foreign fecal material. As a result, further research is necessary on the long-term outcomes of FMT.
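The pulse-and-taper regimen described above has a simple stepped structure. The sketch below enumerates one reading of it, purely to make the schedule's shape explicit (phase boundaries are an assumption where the guideline gives ranges; this is an illustration, not dosing advice).

    # One reading of the pulse-and-taper schedule; phase lengths given as
    # ranges in the text are kept as ranges here.
    schedule = [
        ("pulse: days 1 to 10-14", "125 mg four times daily"),
        ("taper: next 1 week",     "125 mg twice daily"),
        ("taper: next 2-8 weeks",  "125 mg every 2-3 days"),
    ]
    for phase, dose in schedule:
        print(f"{phase:<24} {dose}")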
Prognosis
After a first treatment with metronidazole or vancomycin, C. difficile recurs in about 20% of people. This increases to 40% and 60% with subsequent recurrences.
Epidemiology
C. difficile diarrhea is estimated to occur in eight per 100,000 people each year. Among those who are admitted to hospital, it occurs in between four and eight people per 1,000. In 2011, it resulted in about half a million infections and 29,000 deaths in the United States. Due in part to the emergence of a fluoroquinolone-resistant strain, C. difficile-related deaths increased 400% between 2000 and 2007 in the United States. According to the CDC, "C. difficile has become the most common microbial cause of healthcare-associated infections in U.S. hospitals and costs up to $4.8 billion each year in excess health care costs for acute care facilities alone."
History
Ivan C. Hall and Elizabeth O'Toole first named the bacterium Bacillus difficilis in 1935, choosing its specific epithet because it was resistant to early attempts at isolation and grew very slowly in culture. André Romain Prévot subsequently transferred it to the genus Clostridium, which made its binomen Clostridium difficile. Its combination was later changed to Clostridioides difficile after being transferred to the new genus Clostridioides. Pseudomembranous colitis was first described as a complication of C. difficile infection in 1978, when a toxin was isolated from people with pseudomembranous colitis and Koch's postulates were met.
Notable outbreaks
On 4 June 2003, two outbreaks of a highly virulent strain of this bacterium were reported in Montreal, Quebec, and Calgary, Alberta. Sources put the death count at between 36 and 89, with around 1,400 cases in 2003 and the first few months of 2004. CDIs continued to be a problem in the Quebec healthcare system in late 2004. As of March 2005, it had spread into the Toronto area, hospitalizing 10 people; one died while the others were discharged.
A similar outbreak took place at Stoke Mandeville Hospital in the United Kingdom between 2003 and 2005. The local epidemiology of C. difficile may offer clues on how its spread relates to the time a patient spends in hospital and/or a rehabilitation center. It also reflects the ability of institutions to detect increased rates, and their capacity to respond with more aggressive hand-washing campaigns, quarantine methods, and the availability of yogurt containing live cultures to patients at risk for infection.
Both the Canadian and English outbreaks possibly were related to the seemingly more virulent strain NAP1/027 of the bacterium. Known as the Quebec strain, it has been implicated in an epidemic at two Dutch hospitals (Harderwijk and Amersfoort, both 2005). A theory explaining the increased virulence of 027 is that it is a hyperproducer of both toxins A and B, and that certain antibiotics may stimulate the bacteria to hyperproduce.
On 1 October 2006, C. difficile was said to have killed at least 49 people at hospitals in Leicester, England, over eight months, according to a National Health Service investigation. Another 29 similar cases were investigated by coroners. A UK Department of Health memo leaked shortly afterward revealed significant concern in government about the bacterium, described as being "endemic throughout the health service".
On 27 October 2006, nine deaths were attributed to the bacterium in Quebec.
On 18 November 2006, the bacterium was reported to have been responsible for 12 deaths in Quebec. This 12th reported death came only two days after the Honoré Mercier hospital in St. Hyacinthe announced the outbreak was under control. Thirty-one people were diagnosed with CDIs, and cleaning crews took measures in an attempt to clear the outbreak.
C. difficile was mentioned on 6,480 death certificates in 2006 in the UK.
On 27 February 2007, a new outbreak was identified at Trillium Health Centre in Mississauga, Ontario, where 14 people were diagnosed with CDIs. The bacteria were of the same strain as the one in Quebec. Officials have not been able to determine whether C. difficile was responsible for the deaths of four people over the prior two months.
Between February and June 2007, three people at Loughlinstown Hospital in Dublin, Ireland, were found by the coroner to have died as a result of C. difficile infection. In an inquest, the Coroner's Court found the hospital had no designated infection control team or consultant microbiologist on staff.
Between June 2007 and August 2008, the Northern Health and Social Care Trust in Northern Ireland (Antrim Area, Braid Valley, and Mid-Ulster hospitals) was the subject of an inquiry. During the inquiry, expert reviewers concluded that C. difficile was implicated in 31 of the deaths examined, as the underlying cause in 15 and as a contributory cause in 16. During that time, the review also noted 375 instances of CDI in those being treated at the hospitals.
In October 2007, Maidstone and Tunbridge Wells NHS Trust was heavily criticized by the Healthcare Commission regarding its handling of a major outbreak of C. difficile in its hospitals in Kent from April 2004 to September 2006. In its report, the Commission estimated approximately 90 people "definitely or probably" died as a result of the infection.
In November 2007, the 027 strain spread into several hospitals in southern Finland, with 10 deaths out of 115 infected people reported as of 14 December 2007.
In November 2009, four deaths at Our Lady of Lourdes Hospital in Ireland had possible links to CDI. A further 12 people tested positive for infection, and another 20 showed signs of infection.
From February 2009 to February 2010, 199 people at Herlev hospital in Denmark were suspected of being infected with the 027 strain. In the first half of 2009, 29 died in hospitals in Copenhagen after they were infected with the bacterium.
In May 2010, a total of 138 people at four different hospitals in Denmark were infected with the 027 strain, in addition to some isolated occurrences at other hospitals.
In May 2010, 14 fatalities were related to the bacterium in the Australian state of Victoria. Two years later, the same strain of the bacterium was detected in New Zealand.
On 28 May 2011, an outbreak in Ontario was reported, with 26 fatalities as of 24 July 2011.
In 2012/2013, a total of 27 people at one hospital in southern Sweden (Ystad) were infected, with 10 deaths; five of the deaths were caused by strain 017.
Etymology and pronunciation
The genus name is from the Greek klōstēr (κλωστήρ), "spindle", and the specific name is from Latin difficile, neuter singular form of difficilis "difficult, obstinate", chosen in reference to fastidiousness upon culturing.
Regarding the current and former genus assignments, both genera still have species assigned to them, but this species is now classified in the former (Clostridioides). Via the norms of binomial nomenclature, it is understood that the former binomial name of this species is now an alias.
Regarding the specific name, the traditional anglicized pronunciation is the norm, reflecting how medical English usually pronounces naturalized New Latin words (which in turn largely reflects traditional English pronunciation of Latin), although a restored pronunciation closer to the Latin is also sometimes used (the classical Latin pronunciation is reconstructed as [kloːsˈtrɪdɪ.ũː dɪfˈfɪkɪlɛ]). The specific name is also commonly pronounced as though it were French, which from a prescriptive viewpoint is a "mispronunciation" but from a linguistically descriptive viewpoint cannot be described as erroneous because it is so widely used among health care professionals; it can be described as "the non-preferred variant" from the viewpoint of sticking most regularly to New Latin in binomial nomenclature, which is also a valid viewpoint, although New Latin specific names contain such a wide array of extra-Latin roots (including surnames and jocular references) that extra-Latin pronunciation is involved anyway (as seen, for example, with Ba humbugi, Spongiforma squarepantsii, and hundreds of others).
Research
As of 2019, vaccine candidates providing immunity against C. difficile toxin A and C. difficile toxin B have advanced the most in clinical research, but do not prevent bacterial colonization. A vaccine candidate by Pfizer is in a phase 3 clinical trial that is estimated to be completed in September 2021 and a vaccine candidate by GlaxoSmithKline is in a phase 1 clinical trial that is estimated to be completed in July 2021.
CDA-1 and CDB-1 (also known as MDX-066/MDX-1388 and MBL-CDA1/MBL-CDB1) are an investigational monoclonal antibody combination co-developed by Medarex and Massachusetts Biologic Laboratories (MBL) to target and neutralize C. difficile toxins A and B, for the treatment of CDI. Merck & Co., Inc. gained worldwide rights to develop and commercialize CDA-1 and CDB-1 through an exclusive license agreement signed in April 2009. It is intended as an add-on therapy to one of the existing antibiotics to treat CDI.
Nitazoxanide is a synthetic nitrothiazolyl-salicylamide derivative indicated as an antiprotozoal agent (FDA-approved for the treatment of infectious diarrhea caused by Cryptosporidium parvum and Giardia lamblia) and is also currently being studied in C </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cachexia and can you explain it in depth? [/INST] Cachexia is a complex syndrome associated with an underlying illness, causing ongoing muscle loss that is not entirely reversed with nutritional supplementation. A range of diseases can cause cachexia, most commonly cancer, congestive heart failure, chronic obstructive pulmonary disease, chronic kidney disease, and AIDS. Systemic inflammation from these conditions can cause detrimental changes to metabolism and body composition. In contrast to weight loss from inadequate caloric intake, cachexia causes mostly muscle loss instead of fat loss. Diagnosis of cachexia can be difficult due to the lack of well-established diagnostic criteria. Cachexia can improve with treatment of the underlying illness but other treatment approaches have limited benefit. Cachexia is associated with increased mortality and poor quality of life.
The term is from Greek κακός kakos, "bad", and ἕξις hexis, "condition".
Causes
Cachexia can be caused by diverse medical conditions, but is most often associated with end-stage cancer, known as cancer cachexia. About 50% of all cancer patients develop cachexia. Those with upper gastrointestinal and pancreatic cancers have the highest frequency of developing cachexia. Its prevalence rises in more advanced stages and is estimated to affect 80% of terminal cancer patients. Congestive heart failure, AIDS, chronic obstructive pulmonary disease, and chronic kidney disease are other conditions that often cause cachexia. Cachexia can also result from advanced stages of cystic fibrosis, multiple sclerosis, motor neuron disease, Parkinson's disease, dementia, tuberculosis, multiple system atrophy, mercury poisoning, Crohn's disease, trypanosomiasis, rheumatoid arthritis, and celiac disease, as well as other systemic diseases.
Mechanism
The exact mechanism by which these diseases cause cachexia is poorly understood, and is likely multifactorial with multiple disease pathways involved. Inflammatory cytokines appear to play a central role, including tumor necrosis factor (TNF) (which is also nicknamed cachexin or cachectin), interferon gamma, and interleukin 6. TNF has been shown to have a direct catabolic effect on skeletal muscle and adipose tissue through the ubiquitin proteasome pathway. This mechanism involves the formation of reactive oxygen species leading to upregulation of the transcription factor NF-κB. NF-κB is a known regulator of the genes that encode cytokines and cytokine receptors. The increased production of cytokines induces proteolysis and breakdown of myofibrillar proteins. Systemic inflammation also causes reduced protein synthesis through inhibition of the Akt/mTOR pathway. Although many different tissues and cell types may be responsible for the increase in circulating cytokines, evidence indicates tumors themselves are an important source of factors that may promote cachexia in cancer. Tumor-derived molecules such as lipid mobilizing factor, proteolysis-inducing factor, and mitochondrial uncoupling proteins may induce protein degradation and contribute to cachexia. Uncontrolled inflammation in cachexia can lead to an elevated resting metabolic rate, further increasing the demands for protein and energy sources. There is also evidence of alteration in feeding control loops in cachexia. High levels of leptin, a hormone secreted by adipocytes, block the release of neuropeptide Y, which is the most potent feeding-stimulatory peptide in the hypothalamic orexigenic network, leading to decreased energy intake despite the high metabolic demand for nutrients.
Diagnosis
Diagnostic guidelines and criteria have only recently been proposed, despite the prevalence of cachexia. Across the varying criteria, the primary features of cachexia include progressive depletion of muscle and fat mass, reduced food intake, abnormal metabolism of carbohydrate, protein, and fat, reduced quality of life, and increased physical impairment. Historically, body weight changes were used as the primary metrics of cachexia, including low body mass index and involuntary weight loss of more than 10%. Using weight alone is limited by the presence of edema, tumor mass, and the high prevalence of obesity in the general population. Weight-based criteria do not take into account changes in body composition, especially loss of lean body mass.
In the attempt to include a broader evaluation of the burden of cachexia, diagnostic criteria using assessments of laboratory metrics and symptoms in addition to weight have been proposed. The criteria include weight loss of at least 5% in 12 months, or low body mass index (less than 22 kg/m²), with at least three of the following features: decreased muscle strength, fatigue, anorexia, low fat-free mass index, or abnormal biochemistry (increased inflammatory markers, anemia, low serum albumin). In cancer patients, cachexia is diagnosed from unintended weight loss of more than 5%; for cancer patients with a body mass index of less than 20 kg/m², cachexia is diagnosed after unintended weight loss of more than 2%. Additionally, it can be diagnosed through sarcopenia, or loss of skeletal muscle mass. Laboratory markers used in the evaluation of people with cachexia include albumin, prealbumin, C-reactive protein, and hemoglobin. However, laboratory metrics and cut-off values are not standardized across different diagnostic criteria. Acute-phase reactants (IL-6, IL-1b, tumor necrosis factor-a, IL-8, interferon-g) are sometimes measured but correlate poorly with outcomes. There are no biomarkers to identify people with cancer who may develop cachexia. In the effort to better classify cachexia severity, several scoring systems have been proposed, including the Cachexia Staging Score (CSS) and the CAchexia SCOre (CASCO). The CSS takes into account weight loss, subjective reporting of muscle function, performance status, appetite loss, and laboratory changes to categorize patients into non-cachexia, pre-cachexia, cachexia, and refractory cachexia. CASCO is another validated score that includes evaluation of body weight loss and composition, inflammation, metabolic disturbances, immunosuppression, physical performance, anorexia, and quality of life. Evaluation of changes in body composition is limited by the difficulty of measuring muscle mass and health in a non-invasive and cost-effective way. Imaging with quantification of muscle mass has been investigated, including bioelectrical impedance analysis, computed tomography, dual-energy X-ray absorptiometry (DEXA), and magnetic resonance imaging, but these are not widely used.
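The proposed criteria above amount to a simple decision rule. The sketch below transcribes the thresholds from the text into Python, reading the rule as (weight loss or low BMI) plus at least three supporting features; the field names are invented for illustration, and this is not a validated clinical tool.

    FEATURES = {"decreased muscle strength", "fatigue", "anorexia",
                "low fat-free mass index", "abnormal biochemistry"}

    def meets_cachexia_criteria(weight_loss_pct_12mo, bmi, features):
        # Anchor criterion: >=5% weight loss in 12 months, or BMI < 22 kg/m^2.
        anchor = weight_loss_pct_12mo >= 5 or bmi < 22
        # Plus at least three of the five supporting features.
        return anchor and len(FEATURES & set(features)) >= 3

    print(meets_cachexia_criteria(6, 24, {"fatigue", "anorexia",
                                          "decreased muscle strength"}))  # True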
Definition
Identification, treatment, and research of cachexia have historically been limited by the lack of a widely accepted definition of cachexia. In 2011, an international consensus group adopted a definition of cachexia as "a multifactorial syndrome defined by an ongoing loss of skeletal muscle mass (with or without loss of fat mass) that can be partially but not entirely reversed by conventional nutritional support."Cachexia differs from weight loss due to malnutrition from malabsorption, anorexia nervosa, or anorexia due to major depressive disorder. Weight loss from inadequate caloric intake generally causes fat loss before muscle loss, whereas cachexia causes predominantly muscle wasting. Cachexia is also distinct from sarcopenia, or age-related muscle loss, although they often co-exist.
Treatment
The management of cachexia depends on the underlying cause, the general prognosis, and the needs of the person affected. The most effective approach to cachexia is treating the underlying disease process. An example is the reduction in cachexia from AIDS by highly active antiretroviral therapy. However, this is often not possible, or may be inadequate to reverse the cachexia syndrome in other diseases. Approaches to mitigate muscle loss include exercise, nutritional therapies, and medications.
Exercise
Therapy that includes regular physical exercise can be recommended for the treatment of cachexia due to the positive effects of exercise on skeletal muscle but current evidence remains uncertain as to its effectiveness, acceptability and safety for cancer patients. Individuals with cachexia generally report low levels of physical activity and few engage in an exercise routine, owing to low motivation to exercise and a belief that exercising may worsen their symptoms or cause harm.
Medications
Appetite stimulant medications are used to treat cachexia to increase food intake, but are not effective in stopping muscle wasting and may have detrimental side effects. Appetite stimulants include glucocorticoids, cannabinoids, or progestins such as megestrol acetate. Anti-emetics such as 5-HT3 antagonists are also commonly used in cancer cachexia if nausea is a prominent symptom. Anabolic-androgenic steroids like oxandrolone may be beneficial in cachexia but their use is recommended for a maximum of two weeks since a longer duration of treatment increases side effects. Whilst preliminary studies have suggested thalidomide may be useful, a Cochrane review found no evidence to make an informed decision about the use of this drug in cancer patients with cachexia.
Nutrition
The increased metabolic rate and appetite suppression common in cachexia can compound muscle loss. Studies using a calorie-dense protein supplementation have suggested at least weight stabilization can be achieved, although improvements in lean body mass have not been observed in these studies.
Supplements
Administration of exogenous amino acids has been investigated to serve as a protein-sparing metabolic fuel by providing substrates for both muscle metabolism and gluconeogenesis. The branched-chain amino acids leucine and valine may have potential in inhibiting overexpression of protein breakdown pathways. The amino acid glutamine has been used as a component of oral supplementation to reverse cachexia in people with advanced cancer or HIV/AIDS. β-hydroxy β-methylbutyrate (HMB) is a metabolite of leucine that acts as a signaling molecule to stimulate protein synthesis. Studies have shown positive results in chronic pulmonary disease, hip fracture, and AIDS-related and cancer-related cachexia. However, many of these clinical studies used HMB as a component of combination treatment with glutamine, arginine, leucine, higher dietary protein and/or vitamins, which limits the assessment of the efficacy of HMB alone.
Epidemiology
Accurate epidemiological data on the prevalence of cachexia is lacking due to changing diagnostic criteria and under-identification of people with the disorder. Cachexia from any disease is estimated to affect more than 5 million people in the United States. The prevalence of cachexia is growing, estimated at 1% of the population. The prevalence is lower in Asia but, due to the larger population, represents a similar burden. Cachexia is also a significant problem in South America and Africa. The most frequent causes of cachexia in the United States by population prevalence are: 1) chronic obstructive pulmonary disease (COPD), 2) heart failure, 3) cancer cachexia, 4) chronic kidney disease. The prevalence of cachexia ranges from 15 to 60% among people with cancer, increasing to an estimated 80% in terminal cancer </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Calcium channel blocker toxicity and can you explain it in depth? [/INST] Calcium channel blocker toxicity is the taking of too much of the medications known as calcium channel blockers (CCBs), either by accident or on purpose. This often causes a slow heart rate and low blood pressure, which can progress to the heart stopping altogether. Some CCBs can also cause a fast heart rate as a result of the low blood pressure. Other symptoms may include nausea, vomiting, sleepiness, and shortness of breath. Symptoms usually occur in the first six hours, but with some forms of the medication may not start until 24 hours after ingestion. There are a number of treatments that may be useful. These include efforts to reduce absorption of the drug, including activated charcoal taken by mouth if given shortly after the ingestion, or whole bowel irrigation if an extended-release formula was taken. Efforts to bring about vomiting are not recommended. Medications to treat the toxic effects include intravenous fluids, calcium gluconate, glucagon, high-dose insulin, vasopressors, and lipid emulsion. Extracorporeal membrane oxygenation may also be an option. More than ten thousand cases of calcium channel blocker toxicity were reported in the United States in 2010. Along with beta blockers and digoxin, calcium channel blockers have one of the highest rates of death in overdose. These medications first became available in the 1970s and 1980s. They are one of the few types of medication in which one pill can result in the death of a child.
Signs and symptoms
Most people who have taken too much of a calcium channel blocker, especially diltiazem, get a slow heart rate and low blood pressure (vasodilatory shock). This can progress to the heart stopping altogether. CCBs of the dihydropyridine group, as well as flunarizine, predominantly cause reflex tachycardia as a reaction to the low blood pressure. Other potential symptoms include nausea and vomiting, a decreased level of consciousness, and breathing difficulties. Symptoms usually begin within 6 hours of taking the medication by mouth. With extended-release formulations, symptoms may not occur for up to a day. Seizures are rare in adults but occur more often in children. Hypocalcaemia may also occur.
Cause
Calcium channel blockers, also known as calcium channel antagonists, are widely used for a number of health conditions, and thus are commonly present in many people's homes. In young children, one pill may cause serious health problems and potentially death. The calcium channel blocker that caused the greatest number of deaths in 2010 in the United States was verapamil. This agent is believed to cause more heart problems than many of the others.
Diagnosis
A blood or urine test to diagnose overdose is not generally available. CCB overdose may cause high blood sugar levels, and this is often a sign of how severe the problem will become.
Electrocardiogram
CCB toxicity can cause a number of electrocardiogram abnormalities, with a slow sinus rhythm being the most common. Others include QT prolongation, bundle branch block, first-degree atrioventricular block, and even sinus tachycardia.
Differential
It may not be possible to tell the difference between beta blocker toxicity and calcium channel blocker overdose based on signs and symptoms.
Management
The medical management of CCB toxicity may be difficult, and may not improve with the usual treatments used for low blood pressure and a slow heart rate. Those who have no symptoms or signs six hours after taking an immediate-release formulation, or 24 hours after taking an extended-release formulation, need no further medical treatment.
Detoxification
Activated charcoal is recommended if it can be given within an hour or two of taking the calcium channel blockers. In those who have taken an extended release formulation of a CCB but are otherwise doing fine, whole bowel irrigation with polyethylene glycol may be useful. Causing vomiting by the use of medications such as ipecac is not recommended.
Insulin
High doses of intravenous insulin with glucose may be useful and are a first line treatment in overdoses. As this treatment may cause a drop in blood sugar and blood potassium levels, these should be monitored closely.
Other
Intravenous calcium gluconate or calcium chloride are considered specific antidotes. A slow heart rate can be treated with atropine and sympathomimetics </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Campylobacteriosis and can you explain it in depth? [/INST] Campylobacteriosis is an infection by the Campylobacter bacterium, most commonly C. jejuni. It is among the most common bacterial infections of humans, often a foodborne illness. It produces an inflammatory, sometimes bloody, diarrhea or dysentery syndrome, mostly including cramps, fever and pain.
Symptoms and signs
The prodromal symptoms are fever, headache, and myalgia, which can be severe, lasting as long as 24 hours. After 1–5 days, typically, these are followed by diarrhea (as many as 10 watery, frequently bloody, bowel movements per day) or dysentery, cramps, abdominal pain, and fever as high as 40 °C (104 °F). In most people, the illness lasts for 2–10 days. It is classified as invasive/inflammatory diarrhea, also described as bloody diarrhea or dysentery. There are other diseases showing similar symptoms. For instance, abdominal pain and tenderness may be very localized, mimicking acute appendicitis. Furthermore, Helicobacter pylori is closely related to Campylobacter and causes peptic ulcer disease.
Complications
Complications include toxic megacolon, dehydration, and sepsis. Such complications generally occur in young children (< 1 year of age) and immunocompromised people. A chronic course of the disease is possible; this disease process is likely to develop without a distinct acute phase. Chronic campylobacteriosis features a long period of sub-febrile temperature and asthenia; eye damage, arthritis, and endocarditis may develop if the infection is untreated. Occasional deaths occur in young, previously healthy individuals because of blood volume depletion (due to dehydration), and in people who are elderly or immunocompromised. Some individuals (1–2 in 100,000 cases) develop Guillain–Barré syndrome, in which the nerves that join the spinal cord and brain to the rest of the body are damaged, sometimes permanently. This occurs only with infection by C. jejuni and C. upsaliensis.
Other factors
In patients with HIV, infections may be more frequent, may cause prolonged bouts of dirty brown diarrhea, and may be more commonly associated with bacteremia and antibiotic resistance. In people who engage in unprotected anal intercourse, campylobacteriosis is more localized to the distal end of the colon and may be termed a proctocolitis. The severity and persistence of infection in patients with AIDS and hypogammaglobulinemia indicate that both cell-mediated and humoral immunity are important in preventing and terminating infection.
Cause
Campylobacteriosis is caused by Campylobacter bacteria (curved or spiral, motile, non–spore-forming, Gram-negative rods). The disease is usually caused by C. jejuni, a spiral and comma-shaped bacterium normally found in cattle, swine, and birds, where it is nonpathogenic, but the illness can also be caused by C. coli (also found in cattle, swine, and birds), C. upsaliensis (found in cats and dogs), and C. lari (present in seabirds in particular). One effect of campylobacteriosis is tissue injury in the gut. The sites of tissue injury include the jejunum, the ileum, and the colon. C. jejuni appears to achieve this by invading and destroying epithelial cells. C. jejuni can also cause a latent autoimmune effect on the nerves of the legs, which is usually seen several weeks after a surgical procedure of the abdomen. The effect is known as an acute idiopathic demyelinating polyneuropathy (AIDP), i.e. Guillain–Barré syndrome, in which one sees symptoms of ascending paralysis, dysaesthesias usually below the waist, and, in the later stages, respiratory failure. Some strains of C. jejuni produce a cholera-like enterotoxin, which is important in the watery diarrhea observed in infections. The organism produces diffuse, bloody, edematous, and exudative enteritis. In a small number of cases, the infection may be associated with hemolytic uremic syndrome and thrombotic thrombocytopenic purpura through a poorly understood mechanism.
Transmission
The common routes of transmission for the disease-causing bacteria are fecal–oral, person-to-person sexual contact, ingestion of contaminated food (generally unpasteurized (raw) milk and undercooked or poorly handled poultry), and waterborne (i.e., through contaminated drinking water). Contact with contaminated poultry, livestock, or household pets, especially puppies, can also cause disease. Animals farmed for meat are the main source of campylobacteriosis. A study published in PLoS Genetics (26 September 2008) by researchers from Lancashire, England, and Chicago, Illinois, found that 97 percent of campylobacteriosis cases sampled in Lancashire were caused by bacteria typically found in chicken and livestock. In 57 percent of cases, the bacteria could be traced to chicken, and in 35 percent to cattle. Wild animal and environmental sources were accountable for just three percent of disease. The infectious dose is 1000–10,000 bacteria (although ten to five hundred bacteria can be enough to infect humans). Campylobacter species are sensitive to hydrochloric acid in the stomach, and acid-reduction treatment can reduce the amount of inoculum needed to cause disease. Exposure to bacteria is often more common during travelling, and campylobacteriosis is therefore a common form of traveler's diarrhea.
Diagnosis
Campylobacter organisms can be detected by performing a Gram stain of a stool sample with high specificity and a sensitivity of ~60%, but are most often diagnosed by stool culture. Fecal leukocytes should be present and indicate the diarrhea to be inflammatory in nature. Methods currently being developed to detect the presence of campylobacter organisms include antigen testing via an EIA or PCR.
Prevention
Pasteurization of milk and chlorination of drinking water destroys the organisms.
Treatment with antibiotics can reduce fecal excretion.
Infected health care workers should not provide direct patient care.
Separate cutting boards should be used for foods of animal origin and other foods. After preparing raw food of animal origin, all cutting boards and countertops should be carefully cleaned with soap and hot water.
Contact with pet saliva and feces should be avoided.
The World Health Organization recommends the following:
Food should be properly cooked and hot when served.
Consume only pasteurized or boiled milk and milk products, never raw milk products.
Make sure that ice is from safe water.
If you are not sure of the safety of drinking water, boil it, or disinfect it with chemical disinfectant.
Wash hands thoroughly and frequently with soap, especially after using the toilet and after contact with pets and farm animals.
Wash fruits and vegetables thoroughly, especially if they are to be eaten raw. Peel fruits and vegetables whenever possible.
Food handlers, professionals and at home, should observe hygienic rules during food preparation.
Professional food handlers should immediately report to their employer any fever, diarrhea, vomiting or visible infected skin lesions.
Treatment
The infection is usually self-limiting, and in most cases, symptomatic treatment by liquid and electrolyte replacement is enough in human infections.
Antibiotics
Antibiotic treatment has only a marginal effect on the duration of symptoms, and its use is not recommended except in high-risk patients with clinical complications. Erythromycin can be used in children, and tetracycline in adults. Some studies show, however, that erythromycin rapidly eliminates Campylobacter from the stool without affecting the duration of illness. Nevertheless, children with dysentery due to C. jejuni benefit from early treatment with erythromycin. Treatment with antibiotics, therefore, depends on the severity of symptoms. Quinolones are effective if the organism is sensitive, but high rates of quinolone use in livestock mean that quinolones are now largely ineffective. Antimotility agents, such as loperamide, can lead to prolonged illness or intestinal perforation in any invasive diarrhea, and should be avoided. Trimethoprim/sulfamethoxazole and ampicillin are ineffective against Campylobacter.
In animals
In the past, poultry infections were often treated by mass administration of enrofloxacin and sarafloxacin for single instances of infection. The FDA banned this practice, as it promoted the development of fluoroquinolone-resistant populations.
A major broad-spectrum fluoroquinolone used in humans is ciprofloxacin. The currently growing resistance of Campylobacter to fluoroquinolones and macrolides is of major concern.
Prognosis
Campylobacteriosis is usually self-limited without any mortality (assuming proper hydration is maintained). However, there are several possible complications.
Epidemiology
Campylobacter is one of the most common causes of human bacterial gastroenteritis. For instance, an estimated 2 million cases of Campylobacter enteritis occur annually in the U.S., accounting for 5–7% of cases of gastroenteritis. Furthermore, in the United Kingdom during 2000, Campylobacter jejuni was involved in 77.3% of all cases of laboratory-confirmed foodborne illness. About 15 of every 100,000 people are diagnosed with campylobacteriosis every year, and with many cases going unreported, up to 0.5% of the general population may unknowingly harbor Campylobacter in their gut. Unfortunately, the true incidence of campylobacteriosis is unknown in most countries, especially developing countries. The reasons include, among others, underreporting, difficulties with diagnosis, and differences in reporting systems in different countries. A large animal reservoir is present as well, with up to 100% of poultry, including chickens, turkeys, and waterfowl, having asymptomatic infections in their intestinal tracts. Infected chicken feces may contain up to 10⁹ bacteria per 25 grams, and due to the animals' close proximity, the bacteria are rapidly spread to other chickens. This vastly exceeds the infectious dose of 1000–10,000 bacteria for humans (see the arithmetic sketch below). In January 2013, the UK's Food Standards Agency warned that two-thirds of all raw chicken bought from UK shops was contaminated with campylobacter, affecting an estimated half a million people annually and killing approximately 100.
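The contrast between shedding density and infectious dose is worth making explicit; the back-of-envelope arithmetic below uses only the figures quoted above.

    bacteria_per_25_g = 1e9              # up to 10^9 bacteria per 25 g of feces
    dose_low, dose_high = 1_000, 10_000  # quoted human infectious dose range
    per_gram = bacteria_per_25_g / 25
    print(f"{per_gram:.0e} bacteria per gram")                      # 4e+07
    print(f"{per_gram / dose_high:,.0f} to {per_gram / dose_low:,.0f} "
          "infectious doses per gram")                              # 4,000 to 40,000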
Outbreak
In August–September 2016, 5,200 people fell ill with campylobacteriosis in Hastings, New Zealand after the local water supply in Havelock North tested positive for the pathogen Campylobacter jejuni. Four deaths were suspected to be due to the outbreak. It is suspected that after heavy rain fell on 5–6 August, water contamination from flooding caused the outbreak, although this is the subject of a government Inquiry </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Fungemia and can you explain it in depth? [/INST] Fungemia is the presence of fungi or yeasts in the blood. The most common type, also known as candidemia, candidaemia, or systemic candidiasis, is caused by Candida species; candidemia is also among the most common bloodstream infections of any kind. Infections by other fungi, including Saccharomyces, Aspergillus and Cryptococcus, are also called fungemia. It is most commonly seen in immunosuppressed or immunocompromised patients with severe neutropenia, cancer patients, or in patients with intravenous catheters. It has been suggested that otherwise immunocompetent patients taking infliximab may be at a higher risk for fungemia.
Diagnosis is difficult, as routine blood cultures have poor sensitivity.
Signs and symptoms
Symptoms can range from mild to extreme—often described as extreme flu-like symptoms. Many symptoms may be associated with fungemia, including pain, acute confusion, chronic fatigue, and infections. Skin infections can include persistent or non-healing wounds and lesions, sweating, itching, and unusual discharge or drainage.
Risk factors
Pathogens
The most commonly known pathogen is Candida albicans, causing roughly 70% of fungemias, followed by Candida glabrata with 10%, Aspergillus with 1%, and Saccharomyces as the fourth most common. However, the frequency of infection by C. glabrata, Saccharomyces boulardii, Candida tropicalis, C. krusei, and C. parapsilosis is increasing, perhaps because significant use of fluconazole is common or due to an increase in antibiotic use. Candida auris is an emerging multidrug-resistant (MDR) yeast that can cause invasive infections and is associated with high mortality. It was first described in 2009 after being isolated from external ear discharge of a patient in Japan. Since the 2009 report, C </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Capillariasis and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cardiac stress test and can you explain it in depth? [/INST] A cardiac stress test (also referred to as a cardiac diagnostic test, cardiopulmonary exercise test, or abbreviated CPX test) is a cardiological test that measures the heart's ability to respond to external stress in a controlled clinical environment. The stress response is induced by exercise or by intravenous pharmacological stimulation.
Cardiac stress tests compare the coronary circulation while the patient is at rest with the same patient's circulation during maximum cardiac exertion, showing any abnormal blood flow to the myocardium (heart muscle tissue). The results can be interpreted as a reflection on the general physical condition of the test patient. This test can be used to diagnose coronary artery disease (also known as ischemic heart disease) and assess patient prognosis after a myocardial infarction (heart attack).
Exercise-induced stressors are most commonly either exercise on a treadmill or pedalling a stationary exercise bicycle ergometer. The level of stress is progressively increased by raising the difficulty (steepness of the slope on a treadmill or resistance on an ergometer) and speed. People who cannot use their legs may exercise with a bicycle-like crank that they turn with their arms, or may be given a medication to induce cardiac stress. Once the stress test is completed, the patient generally is advised to not suddenly stop activity but to slowly decrease the intensity of the exercise over the course of several minutes.
The test administrator or attending physician examines the symptoms and blood pressure response. To measure the heart's response to the stress, the patient may be connected to an electrocardiogram (ECG); in this case the test is most commonly called a cardiac stress test but is known by other names, such as exercise testing, stress testing treadmills, exercise tolerance test, stress test, or stress test ECG. Alternatively, a stress test may use an echocardiogram for ultrasonic imaging of the heart (in which case the test is called an echocardiography stress test or stress echo), or a gamma camera to image radioisotopes injected into the bloodstream (called a nuclear stress test).
Stress echocardiography
A stress test may be accompanied by echocardiography. The echocardiography is performed both before and after the exercise so that structural differences can be compared.
A resting echocardiogram is obtained prior to stress. The images obtained are similar to the ones obtained during a full surface echocardiogram, commonly referred to as transthoracic echocardiogram. The patient is subjected to stress in the form of exercise or chemically (usually dobutamine). After the target heart rate is achieved, stress echocardiogram images are obtained. The two echocardiogram images are then compared to assess for any abnormalities in wall motion of the heart. This is used to detect obstructive coronary artery disease.
Cardiopulmonary exercise test
When breathing gases are also measured (e.g. O2 uptake, VO2), the test is referred to as a cardiopulmonary exercise test (CPET).
Common indications for a cardiopulmonary exercise test are:
Evaluation of dyspnea.
Work up before heart transplantation.
Prognosis and risk assessment of heart failure patients.
The test is also common in sport science for measuring athletes' VO2 max.
Nuclear stress test
The best known example of a nuclear stress test is myocardial perfusion imaging. Typically, a radiotracer (Tc-99 sestamibi, Myoview or thallous chloride 201) may be injected during the test. After a suitable waiting period to ensure proper distribution of the radiotracer, scans are acquired with a gamma camera to capture images of the blood flow. Scans acquired before and after exercise are examined to assess the state of the coronary arteries of the patient.
By showing the relative amounts of radioisotope within the heart muscle, nuclear stress tests more accurately identify regional areas of reduced blood flow.
Stress and potential cardiac damage from exercise during the test is a problem in patients with ECG abnormalities at rest or in patients with severe motor disability. Pharmacological stimulation from vasodilators such as dipyridamole or adenosine, or positive chronotropic agents such as dobutamine, can be used. Testing personnel can include a cardiac radiologist, a nuclear medicine physician, a nuclear medicine technologist, a cardiology technologist, a cardiologist, and/or a nurse.
The typical dose of radiation received during this procedure can range from 9.4 millisieverts to 40.7 millisieverts.
Function
The American Heart Association recommends ECG treadmill testing as the first choice for patients with medium risk of coronary heart disease according to risk factors of smoking, family history of coronary artery stenosis, hypertension, diabetes and high cholesterol. In 2013, in its "Exercise Standards for Testing and Training", the AHA indicated that high-frequency QRS analysis during ECG treadmill testing has useful test performance for detection of coronary heart disease.
Perfusion stress test (with 99mTc labelled sestamibi) is appropriate for select patients, especially those with an abnormal resting electrocardiogram.
Intracoronary ultrasound or angiogram can provide more information at the risk of complications associated with cardiac catheterization.
Diagnostic value
The common approach for stress testing, per the American College of Cardiology and the American Heart Association, indicates the following:
Treadmill test: sensitivity 73-90%, specificity 50-74% (modified Bruce protocol)
Nuclear test: sensitivity 81%, specificity 85-95%.
(Sensitivity is the percentage of people with the condition who are correctly identified by the test as having the condition; specificity is the percentage of people without the condition who are correctly identified by the test as not having the condition.)
To arrive at the patient's post-test likelihood of disease, interpretation of the stress test result requires integration of the patient's pretest likelihood with the test's sensitivity and specificity. This approach, first described by Diamond and Forrester in the 1970s, yields an estimate of the post-test likelihood of disease.
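To make that integration concrete, here is a minimal Python sketch of the Bayesian calculation; it is not from the source, and the 50% pretest likelihood and mid-range treadmill figures (sensitivity 0.80, specificity 0.60) are illustrative assumptions only.

```python
def post_test_probability(pretest: float, sensitivity: float,
                          specificity: float, positive_result: bool) -> float:
    """Combine a pretest likelihood with test characteristics via Bayes' theorem."""
    if positive_result:
        true_pos = sensitivity * pretest               # P(positive and diseased)
        false_pos = (1 - specificity) * (1 - pretest)  # P(positive and healthy)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * pretest            # P(negative and diseased)
    true_neg = specificity * (1 - pretest)             # P(negative and healthy)
    return false_neg / (false_neg + true_neg)

# Intermediate (50%) pretest likelihood with assumed treadmill-test figures:
print(post_test_probability(0.50, 0.80, 0.60, positive_result=True))   # ~0.67
print(post_test_probability(0.50, 0.80, 0.60, positive_result=False))  # ~0.25
```

Under these assumed numbers, a positive treadmill result raises the likelihood of disease only from 50% to about 67%, which is why the result must be read together with the pretest likelihood rather than in isolation.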
The value of stress tests has always been recognized as limited in assessing heart disease such as atherosclerosis, a condition which mainly produces wall thickening and enlargement of the arteries. This is because the stress test compares the patient's coronary flow status before and after exercise and is suited to detecting specific areas of ischemia and lumen narrowing, not generalized arterial thickening.
According to American Heart Association data, about 65% of men and 47% of women present with a heart attack or sudden cardiac arrest as their first symptom of cardiovascular disease. Stress tests, carried out shortly before these events, are not relevant to the prediction of infarction in the majority of individuals tested. Over the past two decades, better methods have been developed to identify atherosclerotic disease before it becomes symptomatic. These detection methods include anatomical and physiological methods.
Examples of anatomical methods:
CT coronary calcium score
Coronary CT angiography
Intima-media thickness (IMT)
Intravascular ultrasound (IVUS)
Examples of physiological methods:
Lipoprotein analysis
HbA1c
Hs-CRP
Homocysteine
The anatomic methods directly measure some aspects of the actual process of atherosclerosis itself and therefore offer the possibility of early diagnosis, but they are often more expensive and may be invasive (in the case of IVUS, for example). The physiological methods are often less expensive and safer but are not able to quantify the current status of the disease or directly track progression.
Contraindications and termination conditions
Stress cardiac imaging is not recommended for asymptomatic, low-risk patients as part of their routine care. Some estimates show that such screening accounts for 45% of cardiac stress imaging, and evidence does not show that this results in better outcomes for patients. Unless high-risk markers are present, such as diabetes in patients aged over 40, peripheral arterial disease, or a risk of coronary heart disease greater than 2 percent yearly, most health societies do not recommend the test as a routine procedure.
Absolute contraindications to cardiac stress testing include:
Acute myocardial infarction within 48 hours
Unstable angina not yet stabilized with medical therapy
Uncontrolled cardiac arrhythmia, which may have significant hemodynamic responses (e.g. ventricular tachycardia)
Severe symptomatic aortic stenosis, aortic dissection, pulmonary embolism, and pericarditis
Multivessel coronary artery disease with a high risk of producing an acute myocardial infarction
Decompensated or inadequately controlled congestive heart failure
Uncontrolled hypertension (blood pressure > 200/110 mmHg)
Severe pulmonary hypertension
Acute aortic dissection
Acutely ill for any reason
Indications for termination:
A cardiac stress test should be terminated before completion under the following circumstances; a minimal code sketch encoding a subset of these criteria follows the lists below.
Absolute indications for termination include:
Systolic blood pressure decreases by more than 10 mmHg with increase in work rate, or drops below baseline in the same position, with other evidence of ischemia.
Increase in nervous system symptoms: Dizziness, ataxia or near syncope
Moderate to severe anginal pain (above 3 on standard 4-point scale)
Signs of poor perfusion, e.g. cyanosis or pallor
Request of the test subject
Technical difficulties (e.g. difficulties in measuring blood pressure or ECG)
ST-segment elevation of more than 1 mm in aVR, V1, or non-Q-wave leads
Sustained ventricular tachycardia
Relative indications for termination include:
Systolic blood pressure decreases by more than 10 mmHg with increase in work rate, or drops below baseline in the same position, without other evidence of ischemia.
ST or QRS segment changes, e.g. more than 2 mm horizontal or downsloping ST segment depression in non-Q wave leads, or marked axis shift
Arrhythmias other than sustained ventricular tachycardia, e.g. premature ventricular contractions (multifocal or in triplets), heart block, supraventricular tachycardia, or bradyarrhythmias
Intraventricular conduction delay or bundle branch block that cannot be distinguished from ventricular tachycardia
Increasing chest pain
Fatigue, shortness of breath, wheezing, claudication or leg cramps
Hypertensive response (systolic blood pressure > 250 mmHg or diastolic blood pressure > 115 mmHg)
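As a purely illustrative exercise, the sketch below encodes a subset of the termination criteria above in Python; the data structure and field names are inventions of this sketch, and the thresholds simply restate the lists (this is not a clinical tool).

```python
from dataclasses import dataclass

@dataclass
class StressTestObservation:
    systolic_drop_mmhg: float    # fall in systolic BP despite increasing work rate
    ischemia_evidence: bool      # other ECG/clinical evidence of ischemia
    angina_grade: float          # standard 4-point angina scale
    sustained_vt: bool           # sustained ventricular tachycardia
    st_elevation_over_1mm: bool  # ST elevation > 1 mm in aVR, V1, or non-Q-wave leads
    subject_requests_stop: bool
    systolic_bp: float
    diastolic_bp: float

def absolute_indication(obs: StressTestObservation) -> bool:
    """Subset of the absolute indications for termination listed above."""
    return (
        (obs.systolic_drop_mmhg > 10 and obs.ischemia_evidence)
        or obs.angina_grade > 3
        or obs.sustained_vt
        or obs.st_elevation_over_1mm
        or obs.subject_requests_stop
    )

def relative_indication(obs: StressTestObservation) -> bool:
    """Subset of the relative indications for termination listed above."""
    return (
        (obs.systolic_drop_mmhg > 10 and not obs.ischemia_evidence)
        or obs.systolic_bp > 250
        or obs.diastolic_bp > 115
    )
```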
Adverse effects
Side effects from cardiac stress testing may include:
Palpitations, chest pain, myocardial infarction, shortness of breath, headache, nausea or fatigue.
Adenosine and dipyridamole can cause mild hypotension.
As the tracers used for this test are carcinogenic, frequent use of these tests carries a small risk of cancer.
Pharmacological agents
Pharmacologic stress testing relies on coronary steal. Vasodilators are used to dilate coronary vessels, which causes increased blood velocity and flow rate in normal vessels and less of a response in stenotic vessels </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Systemic primary carnitine deficiency and can you explain it in depth? [/INST] Systemic primary carnitine deficiency (SPCD) is an inborn error of fatty acid transport caused by a defect in the transporter responsible for moving carnitine across the plasma membrane. Carnitine is an amino acid derivative that is important for fatty acid metabolism. When carnitine cannot be transported into tissues, fatty acid oxidation is impaired, leading to a variety of symptoms such as chronic muscle weakness, cardiomyopathy, hypoglycemia and liver dysfunction. The specific transporter involved with SPCD is OCTN2, coded for by the SLC22A5 gene located on chromosome 5. SPCD is inherited in an autosomal recessive manner, with mutated alleles coming from both parents.
Acute episodes due to SPCD are often preceded by metabolic stress such as extended fasting, infections or vomiting. Cardiomyopathy can develop in the absence of an acute episode, and can result in death. SPCD leads to increased carnitine excretion in the urine and low levels in plasma. In most locations with expanded newborn screening, SPCD can be identified and treated shortly after birth. Treatment with high doses of carnitine supplementation is effective, but needs to be rigorously maintained for life.
Signs and symptoms
The presentation of a patient with SPCD can be incredibly varied, from asymptomatic to lethal cardiac manifestations. Early cases were reported with liver dysfunction, muscular findings (weakness and underdevelopment), hypoketotic hypoglycemia, cardiomegaly, cardiomyopathy and marked carnitine deficiency in plasma and tissues, combined with increased excretion in urine. Patients who present clinically with SPCD fall into two categories: a metabolic presentation with hypoglycemia and a cardiac presentation characterized by cardiomyopathy. Muscle weakness can be found with either presentation.
In countries with expanded newborn screening, SPCD can be identified shortly after birth. Affected infants show low levels of free carnitine and all other acylcarnitine species by tandem mass spectrometry. Not all infants with low free carnitine are affected with SPCD. Some may have carnitine deficiency secondary to another metabolic condition or due to maternal carnitine deficiency. Proper follow-up of newborn screening results for low free carnitine includes studies of the mother to determine whether her carnitine deficiency is due to SPCD or secondary to a metabolic disease or diet. Maternal cases of SPCD have been identified at a higher than expected rate, often in women who are asymptomatic. Some mothers have also been identified through newborn screening with cardiomyopathy that had not been previously diagnosed. The identification and treatment of these asymptomatic individuals is still developing, as it is not clear whether they require the same levels of intervention as patients identified with SPCD early in life based on clinical presentation.
Genetics
SPCD is an autosomal recessive condition, meaning a mutated allele must be inherited from each parent for an individual to be affected. The gene responsible for the OCTN2 carnitine transporter is SLC22A5, located at 5q31.1-32. SLC22A5 is regulated by peroxisome proliferator-activated receptor alpha. The transporter, OCTN2, is located in the apical membrane of the renal tubular cells, where it plays a role in tubular reabsorption.
The defective OCTN2 is unable to recapture carnitine prior to its excretion in urine, leading to the characteristic biochemical findings of massively increased urine carnitine levels and significantly decreased plasma carnitine levels. Decreased levels of plasma carnitine inhibit fatty acid oxidation during times of excessive energy demand. Carnitine is needed to transport long chain fatty acids into the mitochondria, where they can be broken down to produce acetyl-CoA. Individuals with SPCD cannot produce ketone bodies for energy due to the interruption of fatty acid oxidation. Although SPCD is an autosomal recessive condition, heterozygotes have been shown to be at an increased risk for developing benign cardiomyopathy compared to wild type individuals.
Diagnosis
The first suspicion of SPCD in a patient with a non-specific presentation is an extremely low plasma carnitine level. When combined with an increased concentration of carnitine in urine, the suspicion of SPCD can often be confirmed by either molecular testing or functional studies assessing the uptake of carnitine in cultured fibroblasts.
Treatment
Identification of patients presymptomatically via newborn screening has allowed early intervention and treatment. Treatment for SPCD involves high dose carnitine supplementation, which must be continued for life. Individuals who are identified and treated at birth have very good outcomes, including the prevention of cardiomyopathy. Mothers who are identified after a positive newborn screen but are otherwise asymptomatic are typically offered carnitine supplementation as well </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Precocious puberty and can you explain it in depth? [/INST] In medicine, precocious puberty is puberty occurring at an unusually early age. In most cases, the process is normal in every aspect except the unusually early age and simply represents a variation of normal development. In a minority of children with precocious puberty, the early development is triggered by a disease such as a tumor or injury of the brain. Even when there is no disease, unusually early puberty can have adverse effects on social behavior and psychological development, can reduce adult height potential, and may shift some lifelong health risks. Central precocious puberty can be treated by suppressing the pituitary hormones that induce sex steroid production. The opposite condition is delayed puberty.
The term is used with several slightly different meanings that are usually apparent from the context. In its broadest sense, and often simplified as early puberty, "precocious puberty" sometimes refers to any physical sex hormone effect, due to any cause, occurring earlier than the usual age, especially when it is being considered as a medical problem. Stricter definitions of "precocity" may refer only to central puberty starting before a statistically specified age based on percentile in the population (e.g., 2.5 standard deviations below the population mean), on expert recommendations of ages at which there is more than a negligible chance of discovering an abnormal cause, or based on opinion as to the age at which early puberty may have adverse effects. A common definition for medical purposes is onset before 8 years in girls or 9 years in boys.
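As a side note on the statistical definition above, the following sketch shows what a "2.5 standard deviations below the mean" cutoff means in percentile terms, under the illustrative assumption (not a claim of the source) that onset age is approximately normally distributed.

```python
from scipy.stats import norm

# Under a normal-distribution assumption, an onset age 2.5 standard deviations
# below the population mean sits at roughly the 0.6th percentile, i.e. only
# about 6 in 1,000 children would begin puberty earlier than the cutoff.
print(norm.cdf(-2.5))  # ~0.0062
```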
Causes
Early pubic hair, breast, or genital development may result from natural early maturation or from several other conditions.
Central
If the cause can be traced to the hypothalamus or pituitary, the cause is considered central. Other names for this type are complete or true precocious puberty.
Causes of central precocious puberty can include:
hypothalamic hamartoma produces pulsatile gonadotropin-releasing hormone (GnRH)
Langerhans cell histiocytosis
McCune–Albright syndrome
Central precocious puberty can also be caused by brain tumors, infection (most commonly tuberculous meningitis, especially in developing countries), trauma, hydrocephalus, and Angelman syndrome. Precocious puberty is associated with advancement in bone age, which leads to early fusion of the epiphyses, thus resulting in reduced final height and short stature.
Adrenocortical oncocytomas are rare tumors that are mostly benign and nonfunctioning. Only three cases of functioning adrenocortical oncocytoma had been reported up until 2013. Children with adrenocortical oncocytomas will present with "premature pubarche, clitoromegaly, and increased serum dehydroepiandrosterone sulfate and testosterone", which are some of the presentations associated with precocious puberty.
Precocious puberty in girls begins before the age of 8. The youngest mother on record is Lina Medina, who gave birth at the age of either 5 years, 7 months and 17 days, or 6 years 5 months as mentioned in another report.
"Central precocious puberty (CPP) was reported in some patients with suprasellar arachnoid cysts (SAC), and SCFE (slipped capital femoral epiphysis) occurs in patients with CPP because of rapid growth and changes of growth hormone secretion."
If no cause can be identified, the condition is considered idiopathic or constitutional.
Peripheral
Secondary sexual development induced by sex steroids from other abnormal sources is referred to as peripheral precocious puberty or precocious pseudopuberty. It typically presents as a severe form of disease in children. Symptoms are usually sequelae of adrenal hyperplasia (because of 21-hydroxylase deficiency or 11-beta hydroxylase deficiency, the former being more common), which include but are not limited to hypertension, hypotension, electrolyte abnormalities, ambiguous genitalia in females, and signs of virilization in females. Blood tests will typically reveal high levels of androgens with low levels of cortisol.
Causes can include:
Endogenous sources
Gonadal tumors (such as arrhenoblastoma)
Adrenal tumors
Germ cell tumor
Congenital adrenal hyperplasia
McCune–Albright syndrome
Silver–Russell syndrome
Familial male-limited precocious puberty (testotoxicosis)
Exogenous hormones
Environmental exogenous hormones
As treatment for another condition
Isosexual and heterosexual
Generally, patients with precocious puberty develop phenotypically appropriate secondary sexual characteristics. This is called isosexual precocity.
In some cases, a patient may develop characteristics of the opposite sex. For example, a male may develop breasts and other feminine characteristics, while a female may develop a deepened voice and facial hair. This is called heterosexual or contrasexual precocity. It is very rare in comparison to isosexual precocity and is usually the result of unusual circumstances. As an example, children with a very rare genetic condition called aromatase excess syndrome – in which exceptionally high circulating levels of estrogen are present – usually develop precocious puberty. Males and females are hyper-feminized by the syndrome. The "opposite" case would be the hyper-masculinisation of both male and female patients with congenital adrenal hyperplasia (CAH) due to 21-hydroxylase deficiency, in which there is an excess of androgens. Thus, in the aromatase excess syndrome the precocious puberty is isosexual in females and heterosexual in males, whilst in CAH it is isosexual in males and heterosexual in females.
Research
Although the causes of early puberty are still somewhat unclear, girls who have a high-fat diet and are not physically active or are obese are more likely to physically mature earlier. "Obese girls, defined as at least 10 kilograms (22 pounds) overweight, had an 80 percent chance of developing breasts before their ninth birthday and starting menstruation before age 12 – the western average for menstruation is about 12.7 years." In addition to diet and exercise habits, exposure to chemicals that mimic estrogen (known as xenoestrogens) is another possible cause of early puberty in girls. Bisphenol A, a xenoestrogen found in hard plastics, has been shown to affect sexual development. "Factors other than obesity, however, perhaps genetic and/or environmental ones, are needed to explain the higher prevalence of early puberty in black versus white girls." While more girls are increasingly entering puberty at younger ages, new research indicates that some boys are actually starting later (delayed puberty). "Increasing rates of obese and overweight children in the United States may be contributing to a later onset of puberty in boys, say researchers at the University of Michigan Health System."
High levels of beta-hCG in serum and cerebrospinal fluid observed in a 9-year-old boy suggest a pineal gland tumor. The tumor is called a chorionic gonadotropin secreting pineal tumor. Radiotherapy and chemotherapy reduced the tumor, and beta-hCG levels normalized.
In a study using neonatal melatonin on rats, results suggest that elevated melatonin could be responsible for some cases of early puberty.
Familial cases of idiopathic central precocious puberty (ICPP) have been reported, leading researchers to believe there are specific genetic modulators of ICPP. Mutations in genes such as LIN28, and LEP and LEPR, which encode leptin and the leptin receptor, have been associated with precocious puberty. The association between LIN28 and puberty timing was validated experimentally in vivo, when it was found that mice with ectopic over-expression of LIN28 show an extended period of pre-pubertal growth and a significant delay in puberty onset.
Mutations in the kisspeptin gene (KISS1) and its receptor, KISS1R (also known as GPR54), involved in GnRH secretion and puberty onset, are also thought to be a cause of ICPP. However, this is still a controversial area of research, and some investigators found no association of mutations in the LIN28 and KISS1/KISS1R genes to be the common cause underlying ICPP.
The gene MKRN3, which is a maternally imprinted gene, was first cloned by Jong et al. in 1999. MKRN3 was originally named Zinc finger protein 127. It is located on human chromosome 15, on the long arm in the Prader-Willi syndrome critical region 2, and has since been identified as a cause of premature sexual development or CPP. The identification of mutations in MKRN3 leading to sporadic cases of CPP has been a significant contribution to better understanding the mechanism of puberty. MKRN3 appears to act as a "brake" on the central hypothalamic-pituitary axis. Thus, loss-of-function mutations of the protein allow early activation of the GnRH pathway and cause phenotypic CPP. Patients with a MKRN3 mutation all display the classic signs of CPP, including early breast and testes development, increased bone aging and elevated hormone levels of GnRH and LH.
Diagnosis
Studies indicate that breast development in girls and the appearance of pubic hair in both girls and boys are starting earlier than in previous generations. As a result, "early puberty" in children as young as 9 and 10 is no longer considered abnormal, particularly with girls. Although it is not considered abnormal, it may be upsetting to parents and can be harmful to children who mature physically at a time when they are immature mentally.
No age reliably separates normal from abnormal processes in children, but the following age thresholds for evaluation are thought to minimize the risk of missing a significant medical problem:
Breast development in boys before appearance of pubic hair or testicular enlargement
Pubic hair or genital enlargement (gonadarche) in boys with onset before 9 years
Pubic hair (pubarche) before 8 or breast development (thelarche) in girls with onset before 7 years
Menstruation (menarche) in girls before 10 years
Medical evaluation is sometimes necessary to distinguish the few children with serious conditions from the majority who have entered puberty early but are still medically normal. Early sexual development warrants evaluation because it may:
induce early bone maturation and reduce eventual adult height
indicate the presence of a tumour or other serious problem
cause the child, particularly a girl, to become an object of adult sexual interest.
Treatment
One possible treatment is with anastrozole </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cervical effacement and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cervicitis and can you explain it in depth? [/INST] Cervicitis is inflammation of the uterine cervix. Cervicitis in women has many features in common with urethritis in men and many cases are caused by sexually transmitted infections. Non-infectious causes of cervicitis can include intrauterine devices, contraceptive diaphragms, and allergic reactions to spermicides or latex condoms.
Cervicitis affects over half of all women during their adult life.
Cervicitis may ascend and cause endometritis and pelvic inflammatory disease (PID). Cervicitis may be acute or chronic.
Symptoms and signs
Cervicitis may have no symptoms. If symptoms do manifest, they may include:
Abnormal vaginal bleeding after intercourse or between periods
Unusual gray, white, or yellow vaginal discharge
Painful sexual intercourse
Pain in the vagina
Pressure or heaviness in the pelvis
Frequent, painful urination
Causes
Cervicitis can be caused by any of a number of infections, of which the most common are chlamydia and gonorrhea, with chlamydia accounting for approximately 40% of cases. Other causes include Trichomonas vaginalis, herpes simplex virus, and Mycoplasma genitalium.
While sexually transmitted infections (STIs) are the most common cause of cervicitis, there are other potential causes as well. These include vaginitis caused by bacterial vaginosis or Trichomonas vaginalis; a device inserted into the pelvic area (i.e. a cervical cap, IUD, pessary, etc.); an allergy to spermicides or latex in condoms; or exposure to a chemical, for example while douching. Inflammation can also be idiopathic, where no specific cause is found. While IUDs do not cause cervicitis, active cervicitis is a contraindication to placing an IUD. If a person with an IUD develops cervicitis, it usually does not need to be removed, if the person wants to continue using it.
There are also certain behaviors that can place individuals at a higher risk for contracting cervicitis. High-risk sexual behavior, a history of STIs, many sexual partners, sex at an early age, and sexual partners who engage in high-risk sexual behavior or have had an STI can increase the likelihood of contracting cervicitis.
Diagnosis
To diagnose cervicitis, a clinician will perform a pelvic exam. This exam includes a speculum exam with visual inspection of the cervix for abnormal discharge, which is usually purulent, or for bleeding from the cervix with little provocation. Swabs can be used to collect a sample of this discharge for inspection under a microscope and/or lab testing for gonorrhea, chlamydia, and Trichomonas vaginalis. A bimanual exam, in which the clinician palpates the cervix to see if there is any associated pain, should be done to assess for pelvic inflammatory disease.
Prevention
The risk of contracting cervicitis from STIs can be reduced by using condoms during every sexual encounter. Condoms are effective against the spread of STIs like chlamydia and gonorrhea that cause cervicitis. Also, being in a long-term monogamous relationship with an uninfected partner can lower the risk of an STI.Ensuring that foreign objects like tampons are properly placed in the vagina and following instructions how long to leave it inside, how often to change it, and/or how often to clean it can reduce the risk of cervicitis </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Chancroid and can you explain it in depth? [/INST] Chancroid (SHANG-kroyd) is a bacterial sexually transmitted infection characterized by painful sores on the genitalia. Chancroid is known to spread from one individual to another solely through sexual contact. However, there have been reports of accidental infection through another route, namely via the hands. While uncommon in the western world, it is the most common cause of genital ulceration worldwide.
Signs and symptoms
Manifestations are only local; no systemic manifestations are present.
The ulcer characteristically:
Ranges in size dramatically from 3 to 50 mm (1/8 inch to 2 inches) across
Is painful
Has sharply defined, undermined borders
Has irregular or ragged borders, described as saucer-shaped.
Has a base that is covered with a gray or yellowish-gray material
Has a base that bleeds easily if traumatized or scraped
Painful swollen lymph nodes occur in 30–60% of patients.
Dysuria (pain with urination) and dyspareunia (pain with intercourse) in females
About half of infected men have only a single ulcer. Women frequently have four or more ulcers, with fewer symptoms. The ulcers are typically confined to the genital region most of the time.
The initial ulcer may be mistaken for a "hard" chancre, the typical sore of primary syphilis, as opposed to the "soft chancre" of chancroid.
Approximately one-third of infected individuals will develop enlargements of the inguinal lymph nodes, the nodes located in the fold between the leg and the lower abdomen.
Half of those who develop swelling of the inguinal lymph nodes will progress to a point where the nodes rupture through the skin, producing draining abscesses. The swollen lymph nodes and abscesses are often referred to as buboes.
Complications
Extensive lymph node inflammation may develop.
Large inguinal abscesses may develop and rupture to form a draining sinus or giant ulcer.
Superinfection by Fusarium and Bacteroides may occur; these lesions may later require debridement and can result in disfiguring scars.
Phimosis can develop in long-standing lesions through scarring and thickening of the foreskin, which may subsequently require circumcision.
Sites for chancroid lesions
Males
Internal and external surfaces of the prepuce
Coronal sulcus
Frenulum
Shaft of penis
Preputial orifice
Urethral meatus
Glans penis
Perineum area
Females
The labia majora are the most common site. "Kissing ulcers" may develop; these are ulcers that occur on opposing surfaces of the labia.
Labia minora
Fourchette
Vestibule
Clitoris
Perineal area
Inner thighs
Causes
Chancroid is a bacterial infection caused by the fastidious Gram-negative streptobacillus Haemophilus ducreyi. This pathogen is highly infectious. It is a disease found primarily in developing countries, most prevalent in low socioeconomic groups and associated with commercial sex workers.
Chancroid, caused by H. ducreyi, has infrequently been associated with cases of genital ulcer disease in the US, but has been isolated in up to 10% of genital ulcers diagnosed at STD clinics in Memphis and Chicago.
Infection levels are very low in the Western world, typically around one case per two million of the population (Canada, France, Australia, UK and US). Most individuals diagnosed with chancroid have visited countries or areas where the disease is known to occur frequently, although outbreaks have been observed in association with crack cocaine use and prostitution.
Chancroid is a risk factor for contracting HIV, due to the two infections' ecological association or shared risk of exposure, and the biologically facilitated transmission of one infection by the other. Approximately 10% of people with chancroid will have a co-infection with syphilis and/or HIV.
Pathogenesis
H. ducreyi enters the skin through microabrasions incurred during sexual intercourse. The incubation period of H. ducreyi infection is 10 to 14 days, after which the disease progresses. A local tissue reaction leads to the development of an erythematous papule, which progresses to a pustule in 4–7 days. It then undergoes central necrosis to ulcerate.
Diagnosis
Variants
Some of the clinical variants are as follows.
Laboratory findings
From bubo pus or ulcer secretions, H. ducreyi can be identified using special culture media; however, sensitivity is less than 80%. PCR-based identification of the organisms is available, but none in the United States are FDA-cleared. Simple, rapid, sensitive and inexpensive antigen detection methods for H. ducreyi identification are also popular. Serologic detection of H. ducreyi uses outer membrane protein and lipooligosaccharide. Most of the time, the diagnosis is based on a presumptive approach using the symptomatology, which in this case includes multiple painful genital ulcers.
Differential diagnosis
Despite many distinguishing features, the clinical spectrums of following diseases may overlap with chancroid:
Primary syphilis
Genital herpes
The practical clinical approach to this STI presenting as genital ulcer disease is to rule out the top differential diagnoses of syphilis and herpes and to consider empirical treatment for chancroid, as testing is not commonly done for the latter.
Comparison with syphilis
There are many differences and similarities between the conditions syphilitic chancre and chancroid:
Similarities
Both originate as pustules at the site of inoculation, and progress to ulcerated lesions
Both lesions are typically 1–2 cm in diameter
Both lesions are caused by sexually transmissible organisms
Both lesions typically appear on the genitals of infected individuals
Both lesions can be present at multiple sites and with multiple lesions
Differences
Chancre is a lesion typical of infection with the bacterium that causes syphilis, Treponema pallidum
Chancroid is a lesion typical of infection with the bacterium Haemophilus ducreyi
Chancres are typically painless, whereas chancroids are typically painful
Chancres are typically non-exudative, whereas chancroids typically have a grey or yellow purulent exudate
Chancres have a hard (indurated) edge, whereas chancroids have a soft edge
Chancres heal spontaneously within three to six weeks, even in the absence of treatment
Chancres can occur in the pharynx as well as on the genitals
Prevention
Chancroid spreads in populations with high sexual activity, such as prostitutes. Use of condoms, prophylaxis with azithromycin, syndromic management of genital ulcers, and treating patients with reactive syphilis serology are some of the strategies successfully tried in Thailand. Also, treatment of sexual partners is advocated, whether or not they develop symptoms, as long as there was unprotected sexual intercourse with the patient within 10 days of the patient developing symptoms.
Treatment
For the initial stages of the lesion, cleaning with a soapy solution is recommended, and a sitz bath may be beneficial. Fluctuant nodules may require aspiration. Treatment may include more than one prescribed medication.
Antibiotics
Macrolides are often used to treat chancroid. The CDC recommendation is either a single oral dose (1 gram) of azithromycin, a single IM dose (250 mg) of ceftriaxone, oral erythromycin (500 mg) three times a day for seven days, or oral ciprofloxacin (500 mg) twice a day for three days. Due to a paucity of reliable empirical evidence, it is not clear whether macrolides are actually more effective and/or better tolerated than other antibiotics when treating chancroid </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cholangiocarcinoma and can you explain it in depth? [/INST] Cholangiocarcinoma, also known as bile duct cancer, is a type of cancer that forms in the bile ducts. Symptoms of cholangiocarcinoma may include abdominal pain, yellowish skin, weight loss, generalized itching, and fever. Light colored stool or dark urine may also occur. Other biliary tract cancers include gallbladder cancer and cancer of the ampulla of Vater.
Risk factors for cholangiocarcinoma include primary sclerosing cholangitis (an inflammatory disease of the bile ducts), ulcerative colitis, cirrhosis, hepatitis C, hepatitis B, infection with certain liver flukes, and some congenital liver malformations. However, most people have no identifiable risk factors. The diagnosis is suspected based on a combination of blood tests, medical imaging, endoscopy, and sometimes surgical exploration. The disease is confirmed by examination of cells from the tumor under a microscope. It is typically an adenocarcinoma (a cancer that forms glands or secretes mucin).
Cholangiocarcinoma is typically incurable at diagnosis, which is why early detection is ideal. In these cases palliative treatments may include surgical resection, chemotherapy, radiation therapy, and stenting procedures. In about a third of cases involving the common bile duct, and less commonly with other locations, the tumor can be completely removed by surgery, offering a chance of a cure. Even when surgical removal is successful, chemotherapy and radiation therapy are generally recommended. In certain cases surgery may include a liver transplantation. Even when surgery is successful, the 5-year survival is typically less than 50%.
Cholangiocarcinoma is rare in the Western world, with estimates of it occurring in 0.5–2 people per 100,000 per year. Rates are higher in Southeast Asia where liver flukes are common. Rates in parts of Thailand are 60 per 100,000 per year. It typically occurs in people in their 70s; however, in those with primary sclerosing cholangitis it often occurs in the 40s. Rates of cholangiocarcinoma within the liver in the Western world have increased.
Signs and symptoms
The most common physical indications of cholangiocarcinoma are abnormal liver function tests, jaundice (yellowing of the eyes and skin occurring when bile ducts are blocked by tumor), abdominal pain (30–50%), generalized itching (66%), weight loss (30–50%), fever (up to 20%), and changes in the color of stool or urine. To some extent, the symptoms depend upon the location of the tumor: people with cholangiocarcinoma in the extrahepatic bile ducts (outside the liver) are more likely to have jaundice, while those with tumors of the bile ducts within the liver more often have pain without jaundice.
Blood tests of liver function in people with cholangiocarcinoma often reveal a so-called "obstructive picture", with elevated bilirubin, alkaline phosphatase, and gamma glutamyl transferase levels, and relatively normal transaminase levels. Such laboratory findings suggest obstruction of the bile ducts, rather than inflammation or infection of the liver parenchyma, as the primary cause of the jaundice.
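One widely used way to formalize this "obstructive picture", which the source does not mention by name, is the R factor, comparing how far ALT and alkaline phosphatase (ALP) are elevated relative to their upper limits of normal (ULN). The sketch below is illustrative only; the sample values and the ULNs of 40 U/L (ALT) and 120 U/L (ALP) are assumptions.

```python
def r_factor(alt: float, alt_uln: float, alp: float, alp_uln: float) -> str:
    """Classify a liver-injury pattern by the conventional R-factor thresholds."""
    r = (alt / alt_uln) / (alp / alp_uln)
    if r >= 5:
        return f"hepatocellular pattern (R = {r:.2f})"
    if r <= 2:
        return f"cholestatic/obstructive pattern (R = {r:.2f})"
    return f"mixed pattern (R = {r:.2f})"

# Hypothetical labs consistent with the picture described above:
# near-normal ALT alongside a markedly elevated ALP.
print(r_factor(alt=45, alt_uln=40, alp=480, alp_uln=120))
# -> cholestatic/obstructive pattern (R = 0.28)
```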
Risk factors
Although most people present without any known risk factors evident, a number of risk factors for the development of cholangiocarcinoma have been described. In the Western world, the most common of these is primary sclerosing cholangitis (PSC), an inflammatory disease of the bile ducts which is closely associated with ulcerative colitis (UC). Epidemiologic studies have suggested that the lifetime risk of developing cholangiocarcinoma for a person with PSC is on the order of 10–15%, although autopsy series have found rates as high as 30% in this population. For inflammatory bowel disease patients with altered DNA repair functions, the progression from PSC to cholangiocarcinoma may be a consequence of DNA damage resulting from biliary inflammation and bile acids.
Certain parasitic liver diseases may be risk factors as well. Colonization with the liver flukes Opisthorchis viverrini (found in Thailand, Laos PDR, and Vietnam) or Clonorchis sinensis (found in China, Taiwan, eastern Russia, Korea, and Vietnam) has been associated with the development of cholangiocarcinoma. Control programs (such as the Integrated Opisthorchiasis Control Program) aimed at discouraging the consumption of raw and undercooked food have been successful at reducing the incidence of cholangiocarcinoma in some countries. People with chronic liver disease, whether in the form of viral hepatitis (e.g. hepatitis B or hepatitis C), alcoholic liver disease, or cirrhosis of the liver due to other causes, are at significantly increased risk of cholangiocarcinoma. HIV infection was also identified in one study as a potential risk factor for cholangiocarcinoma, although it was unclear whether HIV itself or other correlated and confounding factors (e.g. hepatitis C infection) were responsible for the association.
Infection with the bacterial species Helicobacter bilis and Helicobacter hepaticus can cause biliary cancer.
Congenital liver abnormalities, such as Caroli disease (a specific type of five recognized choledochal cysts), have been associated with an approximately 15% lifetime risk of developing cholangiocarcinoma. The rare inherited disorders Lynch syndrome II and biliary papillomatosis have also been found to be associated with cholangiocarcinoma. The presence of gallstones (cholelithiasis) is not clearly associated with cholangiocarcinoma. However, intrahepatic stones (called hepatolithiasis), which are rare in the West but common in parts of Asia, have been strongly associated with cholangiocarcinoma. Exposure to Thorotrast, a form of thorium dioxide which was used as a radiologic contrast medium, has been linked to the development of cholangiocarcinoma as late as 30–40 years after exposure; Thorotrast was banned in the United States in the 1950s due to its carcinogenicity.
Pathophysiology
Cholangiocarcinoma can affect any area of the bile ducts, either within or outside the liver. Tumors occurring in the bile ducts within the liver are referred to as intrahepatic, those occurring in the ducts outside the liver are extrahepatic, and tumors occurring at the site where the bile ducts exit the liver may be referred to as perihilar. A cholangiocarcinoma occurring at the junction where the left and right hepatic ducts meet to form the common hepatic duct may be referred to eponymously as a Klatskin tumor.
Although cholangiocarcinoma is known to have the histological and molecular features of an adenocarcinoma of epithelial cells lining the biliary tract, the actual cell of origin is unknown. Recent evidence has suggested that the initial transformed cell that generates the primary tumor may arise from a pluripotent hepatic stem cell. Cholangiocarcinoma is thought to develop through a series of stages – from early hyperplasia and metaplasia, through dysplasia, to the development of frank carcinoma – in a process similar to that seen in the development of colon cancer. Chronic inflammation and obstruction of the bile ducts, and the resulting impaired bile flow, are thought to play a role in this progression.
Histologically, cholangiocarcinomas may vary from undifferentiated to well-differentiated. They are often surrounded by a brisk fibrotic or desmoplastic tissue response; in the presence of extensive fibrosis, it can be difficult to distinguish well-differentiated cholangiocarcinoma from normal reactive epithelium. There is no entirely specific immunohistochemical stain that can distinguish malignant from benign biliary ductal tissue, although staining for cytokeratins, carcinoembryonic antigen, and mucins may aid in diagnosis. Most tumors (>90%) are adenocarcinomas.
Diagnosis
Blood tests
There are no specific blood tests that can diagnose cholangiocarcinoma by themselves. Serum levels of carcinoembryonic antigen (CEA) and CA19-9 are often elevated, but are not sensitive or specific enough to be used as a general screening tool. However, they may be useful in conjunction with imaging methods in supporting a suspected diagnosis of cholangiocarcinoma.
Abdominal imaging
Ultrasound of the liver and biliary tree is often used as the initial imaging modality in people with suspected obstructive jaundice. Ultrasound can identify obstruction and ductal dilatation and, in some cases, may be sufficient to diagnose cholangiocarcinoma. Computed tomography (CT) scanning may also play an important role in the diagnosis of cholangiocarcinoma.
Imaging of the biliary tree
While abdominal imaging can be useful in the diagnosis of cholangiocarcinoma, direct imaging of the bile ducts is often necessary. Endoscopic retrograde cholangiopancreatography (ERCP), an endoscopic procedure performed by a gastroenterologist or specially trained surgeon, has been widely used for this purpose. Although ERCP is an invasive procedure with attendant risks, its advantages include the ability to obtain biopsies and to place stents or perform other interventions to relieve biliary obstruction. Endoscopic ultrasound can also be performed at the time of ERCP and may increase the accuracy of the biopsy and yield information on lymph node invasion and operability. As an alternative to ERCP, percutaneous transhepatic cholangiography (PTC) may be utilized. Magnetic resonance cholangiopancreatography (MRCP) is a non-invasive alternative to ERCP. Some authors have suggested that MRCP should supplant ERCP in the diagnosis of biliary cancers, as it may more accurately define the tumor and avoids the risks of ERCP.
Surgery
Surgical exploration may be necessary to obtain a suitable biopsy and to accurately stage a person with cholangiocarcinoma. Laparoscopy can be used for staging purposes and may avoid the need for a more invasive surgical procedure, such as laparotomy, in some people.
Pathology
Histologically, cholangiocarcinomas are classically well to moderately differentiated adenocarcinomas. Immunohistochemistry is useful in the diagnosis and may be used to help differentiate a cholangiocarcinoma from hepatocellular carcinoma and metastasis of other gastrointestinal tumors. Cytological scrapings are often nondiagnostic, as these tumors typically have a desmoplastic stroma and, therefore, do not release diagnostic tumor cells with scrapings.
Staging
Although there are at least three staging systems for cholangiocarcinoma (e.g. those of Bismuth, Blumgart, and the American Joint Committee on Cancer), none have been shown to be useful in predicting survival. The most important staging issue is whether the tumor can be surgically removed, or whether it is too advanced for surgical treatment to be successful. Often, this determination can only be made at the time of surgery.
General guidelines for operability include the following (a minimal checklist sketch appears after the list):
Absence of lymph node or liver metastases
Absence of involvement of the portal vein
Absence of direct invasion of adjacent organs
Absence of widespread metastatic disease
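A purely illustrative encoding of that checklist; the function and parameter names are inventions of this sketch, and, as the text notes, the real determination is often only possible at the time of surgery.

```python
def meets_operability_guidelines(lymph_or_liver_mets: bool,
                                 portal_vein_involved: bool,
                                 invades_adjacent_organs: bool,
                                 widespread_mets: bool) -> bool:
    """Operability per the guidelines above requires the absence of all four findings."""
    return not (lymph_or_liver_mets or portal_vein_involved
                or invades_adjacent_organs or widespread_mets)

# Example: locally confined disease with no vascular involvement
print(meets_operability_guidelines(False, False, False, False))  # True
```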
Treatment
Cholangiocarcinoma is considered to be an incurable and rapidly lethal disease unless all the tumors can be fully resected (cut out surgically). Since the operability of the tumor can only be assessed during surgery in most cases, a majority of people undergo exploratory surgery unless there is already a clear indication that the tumor is inoperable. However, the Mayo Clinic has reported significant success treating early bile duct cancer with liver transplantation using a protocolized approach and strict selection criteria.
Adjuvant therapy followed by liver transplantation may have a role in treatment of certain unresectable cases. Locoregional therapies including transarterial chemoembolization (TACE), transarterial radioembolization (TARE) and ablation therapies have a role in intrahepatic variants of cholangiocarcinoma to provide palliation or potential cure in people who are not surgical candidates.
Adjuvant chemotherapy and radiation therapy
If the tumor can be removed surgically, people may receive adjuvant chemotherapy or radiation therapy after the operation to improve the chances of cure. If the tissue margins are negative (i.e. the tumor has been totally excised), adjuvant therapy is of uncertain benefit. Both positive and negative results have been reported with adjuvant radiation therapy in this setting, and no prospective randomized controlled trials have been conducted as of March 2007. Adjuvant chemotherapy appears to be ineffective in people with completely resected tumors. The role of combined chemoradiotherapy in this setting is unclear. However, if the tumor tissue margins are positive, indicating that the tumor was not completely removed via surgery, then adjuvant therapy with radiation and possibly chemotherapy is generally recommended based on the available data.
Treatment of advanced disease
The majority of cases of cholangiocarcinoma present as inoperable (unresectable) disease in which case people are generally treated with palliative chemotherapy, with or without radiotherapy. Chemotherapy has been shown in a randomized controlled trial to improve quality of life and extend survival in people with inoperable cholangiocarcinoma. There is no single chemotherapy regimen which is universally used, and enrollment in clinical trials is often recommended when possible. Chemotherapy agents used to treat cholangiocarcinoma include 5-fluorouracil with leucovorin, gemcitabine as a single agent, or gemcitabine plus cisplatin, irinotecan, or capecitabine. A small pilot study suggested possible benefit from the tyrosine kinase inhibitor erlotinib in people with advanced cholangiocarcinoma.
Radiation therapy appears to prolong survival in people with resected extrahepatic cholangiocarcinoma, and the few reports of its use in unresectable cholangiocarcinoma appear to show improved survival, but numbers are small.
Infigratinib (Truseltiq) is a tyrosine kinase inhibitor of fibroblast growth factor receptor (FGFR) that was approved for medical use in the United States in May 2021. It is indicated for the treatment of people with previously treated locally advanced or metastatic cholangiocarcinoma harboring an FGFR2 fusion or rearrangement.
Pemigatinib (Pemazyre) is a kinase inhibitor of fibroblast growth factor receptor 2 (FGFR2) that was approved for medical use in the United States in April 2020. It is indicated for the treatment of adults with previously treated, unresectable locally advanced or metastatic cholangiocarcinoma with a fibroblast growth factor receptor 2 (FGFR2) fusion or other rearrangement as detected by an FDA-approved test.
Ivosidenib (Tibsovo) is a small molecule inhibitor of isocitrate dehydrogenase 1. The FDA approved ivosidenib in August 2021 for adults with previously treated, locally advanced or metastatic cholangiocarcinoma with an isocitrate dehydrogenase-1 (IDH1) mutation as detected by an FDA-approved test.
Durvalumab (Imfinzi) is an immune checkpoint inhibitor that blocks the PD-L1 protein on the surface of immune cells, thereby allowing the immune system to recognize and attack tumor cells. In Phase III clinical trials, durvalumab, in combination with standard-of-care chemotherapy, demonstrated a statistically significant and clinically meaningful improvement in overall survival and progression-free survival versus chemotherapy alone as a 1st-line treatment for patients with advanced biliary tract cancer.
Prognosis
Surgical resection offers the only potential chance of cure in cholangiocarcinoma. For non-resectable cases, the five-year survival rate is 0% where the disease is inoperable because distal lymph nodes show metastases, and less than 5% in general. Overall mean duration of survival is less than 6 months in people with metastatic disease.
For surgical cases, the odds of cure vary depending on the tumor location and whether the tumor can be completely, or only partially, removed. Distal cholangiocarcinomas (those arising from the common bile duct) are generally treated surgically with a Whipple procedure; long-term survival rates range from 15 to 25%, although one series reported a five-year survival of 54% for people with no involvement of the lymph nodes. Intrahepatic cholangiocarcinomas (those arising from the bile ducts within the liver) are usually treated with partial hepatectomy </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cholera and can you explain it in depth? [/INST] Cholera is an infection of the small intestine by some strains of the bacterium Vibrio cholerae. Symptoms may range from none, to mild, to severe. The classic symptom is large amounts of watery diarrhea that lasts a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. Dehydration can cause the skin to turn bluish. Symptoms start two hours to five days after exposure.
Cholera is caused by a number of types of Vibrio cholerae, with some types producing more severe disease than others. It is spread mostly by unsafe water and unsafe food that has been contaminated with human feces containing the bacteria. Undercooked shellfish is a common source. Humans are the only known host for the bacteria. Risk factors for the disease include poor sanitation, not enough clean drinking water, and poverty. Cholera can be diagnosed by a stool test. A rapid dipstick test is available but is not as accurate.
Prevention methods against cholera include improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months. They have the added benefit of protecting against another type of diarrhea caused by E. coli. By 2017 the US Food and Drug Administration (FDA) had approved a single-dose, live, oral cholera vaccine called Vaxchora for adults aged 18–64 who are travelling to an area of active cholera transmission. It offers limited protection to young children. People who survive an episode of cholera have long-lasting immunity for at least 3 years (the period tested).
The primary treatment for affected individuals is oral rehydration salts (ORS), the replacement of fluids and electrolytes by using slightly sweet and salty solutions. Rice-based solutions are preferred. Zinc supplementation is useful in children. In severe cases, intravenous fluids, such as Ringer's lactate, may be required, and antibiotics may be beneficial. Testing to see which antibiotic the cholera is susceptible to can help guide the choice.
Cholera continues to affect an estimated 3–5 million people worldwide and causes 28,800–130,000 deaths a year. The most recent of seven cholera pandemics and associated outbreaks, since the early 19th century, started about 1961. As of 2010, it is rare in high income countries. Children are mostly affected. Cholera occurs as both outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and Southeast Asia. The risk of death among those affected is usually less than 5%, given improved treatment, but may be as high as 50% without such access to treatment. Descriptions of cholera are found as early as the 5th century BC in Sanskrit. In Europe, cholera was a term initially used to describe any kind of gastroenteritis, and was not used for this disease until the early 19th century. The study of cholera in England by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology because of his insights about transmission via contaminated water.
Signs and symptoms
The primary symptoms of cholera are profuse diarrhea and vomiting of clear fluid. These symptoms usually start suddenly, half a day to five days after ingestion of the bacteria. The diarrhea is frequently described as "rice water" in nature and may have a fishy odor. An untreated person with cholera may produce 10 to 20 litres (3 to 5 US gal) of diarrhea a day. Severe cholera, without treatment, kills about half of affected individuals.
If the severe diarrhea is not treated, it can result in life-threatening dehydration and electrolyte imbalances. Estimates of the ratio of asymptomatic to symptomatic infections have ranged from 3 to 100. Cholera has been nicknamed the "blue death" because a person's skin may turn bluish-gray from extreme loss of fluids.
Fever is rare and should raise suspicion for secondary infection. Patients can be lethargic and might have sunken eyes, dry mouth, cold clammy skin, or wrinkled hands and feet. Kussmaul breathing, a deep and labored breathing pattern, can occur because of acidosis from stool bicarbonate losses and lactic acidosis associated with poor perfusion. Blood pressure drops due to dehydration, peripheral pulse is rapid and thready, and urine output decreases with time. Muscle cramping and weakness, altered consciousness, seizures, or even coma due to electrolyte imbalances are common, especially in children.
Cause
Transmission
Cholera bacteria have been found in shellfish and plankton. Transmission is usually through the fecal-oral route of contaminated food or water caused by poor sanitation. Most cholera cases in developed countries are a result of transmission by food, while in developing countries it is more often water. Food transmission can occur when people harvest seafood such as oysters in waters infected with sewage, as Vibrio cholerae accumulates in planktonic crustaceans and the oysters eat the zooplankton. People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as "rice-water", contaminates water used by others. A single diarrheal event can cause a one-million-fold increase in the number of V. cholerae in the environment. The source of the contamination is typically other people with cholera when their untreated diarrheal discharge is allowed to get into waterways, groundwater, or drinking water supplies. Drinking any contaminated water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person. V. cholerae also exists outside the human body in natural water sources, either by itself or through interacting with phytoplankton, zooplankton, or biotic and abiotic detritus. Drinking such water can also result in the disease, even without prior contamination through fecal matter. However, selective pressures in the aquatic environment may reduce the virulence of V. cholerae. Specifically, animal models indicate that the transcriptional profile of the pathogen changes as it prepares to enter an aquatic environment. This transcriptional change results in a loss of the ability of V. cholerae to be cultured on standard media, a phenotype referred to as viable but non-culturable (VBNC) or, more conservatively, active but non-culturable (ABNC). One study indicates that the culturability of V. cholerae drops 90% within 24 hours of entering the water, and furthermore that this loss in culturability is associated with a loss in virulence. Both toxic and non-toxic strains exist. Non-toxic strains can acquire toxicity through a temperate bacteriophage.
Susceptibility
About 100 million bacteria must typically be ingested to cause cholera in a normal healthy adult. This dose, however, is lower in those with reduced gastric acidity (for instance, those using proton pump inhibitors). Children are also more susceptible, with two- to four-year-olds having the highest rates of infection. An individual's susceptibility to cholera is also affected by their blood type, with those with type O blood being the most susceptible. Persons with lowered immunity, such as persons with AIDS or malnourished children, are more likely to suffer a severe case if they become infected. Any individual, even a healthy adult in middle age, can develop a severe case, and the severity of each case should be assessed by the degree of fluid loss, preferably in consultation with a professional health care provider. The cystic fibrosis genetic mutation known as delta-F508 in humans has been said to maintain a selective heterozygous advantage: heterozygous carriers of the mutation (who are not affected by cystic fibrosis) are more resistant to V. cholerae infections. In this model, the genetic deficiency in the cystic fibrosis transmembrane conductance regulator channel proteins interferes with bacteria binding to the intestinal epithelium, thus reducing the effects of an infection.
Mechanism
When consumed, most bacteria do not survive the acidic conditions of the human stomach. The few surviving bacteria conserve their energy and stored nutrients during the passage through the stomach by shutting down protein production. When the surviving bacteria exit the stomach and reach the small intestine, they must propel themselves through the thick mucus that lines the small intestine to reach the intestinal walls where they can attach and thrive. Once the cholera bacteria reach the intestinal wall, they no longer need the flagella to move. The bacteria stop producing the protein flagellin to conserve energy and nutrients by changing the mix of proteins that they express in response to the changed chemical surroundings. On reaching the intestinal wall, V. cholerae start producing the toxic proteins that give the infected person watery diarrhea. This carries the multiplying new generations of V. cholerae bacteria out into the drinking water of the next host if proper sanitation measures are not in place. The cholera toxin (CTX or CT) is an oligomeric complex made up of six protein subunits: a single copy of the A subunit (part A) and five copies of the B subunit (part B), connected by a disulfide bond. The five B subunits form a five-membered ring that binds to GM1 gangliosides on the surface of the intestinal epithelium cells. The A1 portion of the A subunit is an enzyme that ADP-ribosylates G proteins, while the A2 chain fits into the central pore of the B subunit ring. Upon binding, the complex is taken into the cell via receptor-mediated endocytosis. Once inside the cell, the disulfide bond is reduced, and the A1 subunit is freed to bind with a human partner protein called ADP-ribosylation factor 6 (Arf6). Binding exposes its active site, allowing it to permanently ADP-ribosylate the Gs alpha subunit of the heterotrimeric G protein. This results in constitutive cAMP production, which in turn leads to the secretion of water, sodium, potassium, and bicarbonate into the lumen of the small intestine and rapid dehydration. The gene encoding the cholera toxin was introduced into V. cholerae by horizontal gene transfer. Virulent strains of V. cholerae carry a variant of a temperate bacteriophage called CTXφ.
Microbiologists have studied the genetic mechanisms by which the V. cholerae bacteria turn off the production of some proteins and turn on the production of other proteins as they respond to the series of chemical environments they encounter, passing through the stomach, through the mucous layer of the small intestine, and on to the intestinal wall. Of particular interest have been the genetic mechanisms by which cholera bacteria turn on the production of the toxins that interact with host cell mechanisms to pump chloride ions into the small intestine, creating an ionic pressure which prevents sodium ions from entering the cell. The chloride and sodium ions create a salt-water environment in the small intestine, which through osmosis can pull up to six liters of water per day through the intestinal cells, producing the massive amounts of diarrhea. The host can become rapidly dehydrated unless treated properly. By inserting separate, successive sections of V. cholerae DNA into the DNA of other bacteria, such as E. coli, which would not naturally produce the protein toxins, researchers have investigated the mechanisms by which V. cholerae responds to the changing chemical environments of the stomach, mucous layers, and intestinal wall. Researchers have discovered that a complex cascade of regulatory proteins controls expression of V. cholerae virulence determinants. In responding to the chemical environment at the intestinal wall, the V. cholerae bacteria produce the TcpP/TcpH proteins, which, together with the ToxR/ToxS proteins, activate the expression of the ToxT regulatory protein. ToxT then directly activates the expression of the virulence genes that produce the toxins, causing diarrhea in the infected person and allowing the bacteria to colonize the intestine. Current research aims at discovering "the signal that makes the cholera bacteria stop swimming and start to colonize (that is, adhere to the cells of) the small intestine."
Genetic structure
Amplified fragment length polymorphism fingerprinting of pandemic isolates of V. cholerae has revealed variation in their genetic structure. Two clusters have been identified, Cluster I and Cluster II, based on changes in clone structure. For the most part, Cluster I consists of strains from the 1960s and 1970s, while Cluster II largely contains strains from the 1980s and 1990s. This grouping is most evident among strains from the African continent.
Antibiotic resistance
In many areas of the world, antibiotic resistance is increasing within cholera bacteria. In Bangladesh, for example, most cases are resistant to tetracycline, trimethoprim-sulfamethoxazole, and erythromycin. Rapid diagnostic assay methods are available for the identification of multi-drug resistant cases. New generation antimicrobials have been discovered which are effective against cholera bacteria in in vitro studies.
Diagnosis
A rapid dipstick test is available to determine the presence of V. cholerae. In those samples that test positive, further testing should be done to determine antibiotic resistance. In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment via hydration and over-the-counter hydration solutions can be started before or without laboratory confirmation, especially where cholera is a common problem. Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is V. cholerae O1. If V. cholerae serogroup O1 is not isolated, the laboratory should test for V. cholerae O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory. Infection with V. cholerae O139 should be reported and handled in the same manner as that caused by V. cholerae O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States.
Prevention
The World Health Organization (WHO) recommends focusing on prevention, preparedness, and response to combat the spread of cholera. It also stresses the importance of an effective surveillance system. Governments can play a role in all of these areas.
Water, sanitation and hygiene
Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, due to their nearly universal advanced water treatment and sanitation practices, cholera is rare. For example, the last major outbreak of cholera in the United States occurred in 1910–1911. Cholera is mainly a risk in developing countries in those areas where access to WASH (water, sanitation and hygiene) infrastructure is still inadequate.
Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic. There are several points along the cholera transmission path at which its spread may be halted:
Sterilization: Proper disposal and treatment of all materials that may have come into contact with the feces of other people with cholera (e.g., clothing, bedding, etc.) are essential. These should be sanitized by washing in hot water, using chlorine bleach if possible. Hands that touch cholera patients or their clothing, bedding, etc., should be thoroughly cleaned and disinfected with chlorinated water or other effective antimicrobial agents.
Sewage and fecal sludge management: In cholera-affected areas, sewage and fecal sludge need to be treated and managed carefully in order to stop the spread of this disease via human excreta. Provision of sanitation and hygiene is an important preventative measure. Open defecation, release of untreated sewage, or dumping of fecal sludge from pit latrines or septic tanks into the environment need to be prevented. In many cholera-affected zones, there is a low degree of sewage treatment. Therefore, the implementation of dry toilets, which do not contribute to water pollution because they do not flush with water, may be an interesting alternative to flush toilets.
Sources: Warnings about possible cholera contamination should be posted around contaminated water sources, with directions on how to decontaminate the water (boiling, chlorination, etc.) for possible use.
Water purification: All water used for drinking, washing, or cooking should be sterilized by either boiling, chlorination, ozone water treatment, ultraviolet light sterilization (e.g., by solar water disinfection), or antimicrobial filtration in any area where cholera may be present. Chlorination and boiling are often the least expensive and most effective means of halting transmission. Cloth filters or sari filtration, though very basic, have significantly reduced the occurrence of cholera when used in poor villages in Bangladesh that rely on untreated surface water. Better antimicrobial filters, like those present in advanced individual water treatment hiking kits, are most effective. Public health education and adherence to appropriate sanitation practices are of primary importance to help prevent and control transmission of cholera and other diseases. Handwashing with soap or ash after using a toilet and before handling food or eating is also recommended for cholera prevention by WHO Africa.
Surveillance
Surveillance and prompt reporting allow cholera epidemics to be contained rapidly. Cholera exists as a seasonal disease in many endemic countries, occurring annually mostly during rainy seasons. Surveillance systems can provide early alerts to outbreaks, leading to a coordinated response, and can assist in the preparation of preparedness plans. Efficient surveillance systems can also improve the risk assessment for potential cholera outbreaks. Understanding the seasonality and location of outbreaks provides guidance for improving cholera control activities for the most vulnerable. For prevention to be effective, it is important that cases be reported to national health authorities.
Vaccination
Spanish physician Jaume Ferran i Clua developed a cholera inoculation in 1885, the first to immunize humans against a bacterial disease. However, his vaccine and inoculation were rather controversial and were rejected by his peers and several investigation commissions. Russian-Jewish bacteriologist Waldemar Haffkine successfully developed the first human cholera vaccine in July 1892, and conducted a massive inoculation program in British India. Persons who survive an episode of cholera have long-lasting immunity for at least 3 years (the period tested). A number of safe and effective oral vaccines for cholera are available. The World Health Organization (WHO) has three prequalified oral cholera vaccines (OCVs): Dukoral, ShanChol, and Euvichol. Dukoral, an orally administered, inactivated whole-cell vaccine, has an overall efficacy of about 52% during the first year after being given and 62% in the second year, with minimal side effects. It is available in over 60 countries. However, it is not currently recommended by the Centers for Disease Control and Prevention (CDC) for most people traveling from the United States to endemic countries. The vaccine that the US Food and Drug Administration (FDA) recommends, Vaxchora, is an oral attenuated live vaccine that is effective as a single dose for adults aged 18–64. One injectable vaccine was found to be effective for two to three years. The protective efficacy was 28% lower in children less than five years old. However, as of 2010, it has limited availability. Work is under way to investigate the role of mass vaccination. The WHO recommends immunization of high-risk groups, such as children and people with HIV, in countries where this disease is endemic. If people are immunized broadly, herd immunity results, with a decrease in the amount of contamination in the environment. WHO recommends that oral cholera vaccination be considered in areas where the disease is endemic (with seasonal peaks), as part of the response to outbreaks, or in a humanitarian crisis during which the risk of cholera is high. The Oral Cholera Vaccine (OCV) has been recognized as an adjunct tool for prevention and control of cholera. The WHO has prequalified three cholera vaccines: Dukoral (SBL Vaccines), which contains a non-toxic B subunit of cholera toxin and provides protection against V. cholerae O1, and two vaccines developed using the same transfer of technology, ShanChol (Shantha Biotec) and Euvichol (EuBiologics Co.), which are bivalent killed oral vaccines covering both O1 and O139. Oral cholera vaccination could be deployed in a diverse range of situations, from cholera-endemic areas to locations of humanitarian crises, but no clear consensus exists.
Sari filtration
Developed for use in Bangladesh, the "sari filter" is a simple and cost-effective appropriate technology method for reducing the contamination of drinking water. It involves folding a sari four to eight times and filtering water through it. Used sari cloth is preferable, as repeated washing reduces the space between the fibers; other types of used cloth can be used with some effect, though the effectiveness varies significantly. Water collected in this way has a greatly reduced pathogen count: though it will not necessarily be perfectly safe, it is an improvement for poor people with limited options. In Bangladesh this practice was found to decrease rates of cholera by nearly half. Between uses the cloth should be rinsed in clean water and dried in the sun to kill any bacteria on it. A nylon cloth appears to work as well but is not as affordable.
Treatment
Continued eating speeds the recovery of normal intestinal function. The WHO recommends this generally for cases of diarrhea no matter what the underlying cause. A CDC training manual specifically for cholera states: "Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently."
Fluids
The most common error in caring for patients with cholera is to underestimate the speed and volume of fluids required. In most cases, cholera can be successfully treated with oral rehydration therapy (ORT), which is highly effective, safe, and simple to administer. Rice-based solutions are preferred to glucose-based ones due to greater efficacy. In severe cases with significant dehydration, intravenous rehydration may be necessary. Ringer's lactate is the preferred solution, often with added potassium. Large volumes and continued replacement until diarrhea has subsided may be needed. Ten percent of a person's body weight in fluid may need to be given in the first two to four hours. This method was first tried on a mass scale during the Bangladesh Liberation War and was found to be highly successful. Despite widespread beliefs, fruit juices and commercial fizzy drinks like cola are not ideal for rehydration of people with serious infections of the intestines, and their excessive sugar content may even harm water uptake. If commercially produced oral rehydration solutions are too expensive or difficult to obtain, solutions can be made at home. One such recipe calls for 1 liter of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar, and mashed banana added for potassium and to improve taste.
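The arithmetic behind these figures is simple enough to sketch in code. The following Python snippet is a minimal illustration of the two rules of thumb quoted above (the ten-percent-of-body-weight volume target and the homemade recipe proportions); the function names are invented for this example, and nothing here is clinical guidance:

def initial_fluid_volume_liters(body_weight_kg):
    # Ten percent of body weight in fluid over the first two to four hours,
    # per the figure quoted above (1 kg of body weight ~ 1 L of fluid).
    return 0.10 * body_weight_kg

def homemade_ors(liters_of_boiled_water):
    # Scales the recipe quoted above: per 1 liter of boiled water,
    # 1/2 teaspoon of salt and 6 teaspoons of sugar (the mashed banana
    # for potassium is part of the recipe but is not computed here).
    return {
        "boiled_water_l": liters_of_boiled_water,
        "salt_tsp": 0.5 * liters_of_boiled_water,
        "sugar_tsp": 6 * liters_of_boiled_water,
    }

# A 60 kg adult may need roughly 6 liters in the first two to four hours.
print(initial_fluid_volume_liters(60))  # 6.0
print(homemade_ors(6.0))  # {'boiled_water_l': 6.0, 'salt_tsp': 3.0, 'sugar_tsp': 36.0}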
Electrolytes
As there is frequently an initial acidosis, the potassium level may be normal even though large losses have occurred. As the dehydration is corrected, potassium levels may decrease rapidly and thus need to be replaced. This may be done by consuming foods high in potassium, like bananas or coconut water.
Antibiotics
Antibiotic treatments for one to three days shorten the course of the disease and reduce the severity of the symptoms. Use of antibiotics also reduces fluid requirements. People will recover without them, however, if sufficient hydration is maintained. The WHO only recommends antibiotics in those with severe dehydration. Doxycycline is typically used first line, although some strains of V. cholerae have shown resistance. Testing for resistance during an outbreak can help determine appropriate future choices. Other antibiotics proven to be effective include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, and furazolidone. Fluoroquinolones, such as ciprofloxacin, also may be used, but resistance has been reported. Antibiotics improve outcomes in both severely and non-severely dehydrated patients. Azithromycin and tetracycline may work better than doxycycline or ciprofloxacin.
Zinc supplementation
In Bangladesh, zinc supplementation reduced the duration and severity of diarrhea in children with cholera when given with antibiotics and rehydration therapy as needed. It reduced the length of disease by eight hours and the amount of diarrheal stool by 10%. Supplementation also appears to be effective in both treating and preventing infectious diarrhea due to other causes among children in the developing world.
Prognosis
If people with cholera are treated quickly and properly, the mortality rate is less than 1%; with untreated cholera, however, the mortality rate rises to 50–60%. For certain genetic strains of cholera, such as the one present during the 2010 epidemic in Haiti and the 2004 outbreak in India, death can occur within two hours of becoming ill.
Epidemiology
Cholera affects an estimated 2.8 million people worldwide and causes approximately 95,000 deaths a year (uncertainty range: 21,000–143,000) as of 2015. This occurs mainly in the developing world. In the early 1980s, death rates are believed to have still been higher than three million a year. It is difficult to calculate exact numbers of cases, as many go unreported due to concerns that an outbreak may have a negative impact on the tourism of a country. As of 2004, cholera remained both epidemic and endemic in many areas of the world. Recent major outbreaks are the 2010s Haiti cholera outbreak and the 2016–2021 Yemen cholera outbreak. In October 2016, an outbreak of cholera began in war-ravaged Yemen; the WHO called it "the worst cholera outbreak in the world". In 2019, 93% of the 923,037 reported cholera cases were from Yemen (with 1,911 deaths reported). Between September 2019 and September 2020, a global total of over 450,000 cases and over 900 deaths was reported; however, the accuracy of these numbers suffers from over-reporting by countries that report suspected cases (and not laboratory-confirmed cases), as well as under-reporting by countries that do not report official cases (such as Bangladesh, India, and the Philippines). Although much is known about the mechanisms behind the spread of cholera, researchers still do not have a full understanding of what makes cholera outbreaks happen in some places and not others. Lack of treatment of human feces and lack of treatment of drinking water greatly facilitate its spread. Bodies of water have been found to serve as a reservoir of infection, and seafood shipped long distances can spread the disease.
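As a rough consistency check, the case-fatality proportion implied by the point estimates quoted above can be computed directly. This is a back-of-the-envelope sketch using only the figures in this section, not an epidemiological analysis:

cases_per_year = 2_800_000   # estimated cases worldwide, 2015 figure above
deaths_per_year = 95_000     # point estimate; uncertainty range 21,000-143,000

cfr = deaths_per_year / cases_per_year
print(f"Implied case fatality: {cfr:.1%}")  # ~3.4%

# This sits below the "usually less than 5%" figure quoted earlier for
# treated populations, though under-counting of mild cases would bias
# such a ratio upward.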
Cholera had disappeared from the Americas for most of the 20th century, but it reappeared toward the end of that century, beginning with a severe outbreak in Peru. Following the end of the 2010s Haiti cholera outbreak, there have not been any cholera cases in the Americas since February 2019. As of August 2021 the disease is endemic in Africa and some areas of eastern and western Asia (Bangladesh, India and Yemen). Cholera is not endemic in Europe; all reported cases had a travel history to endemic areas.
History of outbreaks
The word cholera is from Greek: χολέρα kholera, from χολή kholē, "bile". Cholera likely has its origins in the Indian subcontinent, as evidenced by its prevalence in the region for centuries. References to cholera appear in the European literature as early as 1642, from the Dutch physician Jakob de Bondt's description in his De Medicina Indorum. (The "Indorum" of the title refers to the East Indies. He also gave the first European descriptions of other diseases.) But at the time, the word "cholera" was historically used by European physicians to refer to any gastrointestinal upset resulting in yellow diarrhea. De Bondt thus used a common word already in regular use to describe the new disease. This was a frequent practice of the time. It was not until the 1830s that the name for severe yellow diarrhea changed in English from "cholera" to "cholera morbus" to differentiate it from what was then known as "Asiatic cholera", the disease associated with origins in India and the East.
Early outbreaks in the Indian subcontinent are believed to have been the result of crowded, poor living conditions, as well as the presence of pools of still water, both of which provide ideal conditions for cholera to thrive. The disease first spread by travelers along trade routes (land and sea) to Russia in 1817, later to the rest of Europe, and from Europe to North America and the rest of the world (hence the name "Asiatic cholera"). Seven cholera pandemics have occurred since the early 19th century; the first one did not reach the Americas. The seventh pandemic originated in Indonesia in 1961. The first cholera pandemic occurred in the Bengal region of India, near Calcutta, lasting from 1817 through 1824. The disease dispersed from India to Southeast Asia, the Middle East, Europe, and Eastern Africa. The movement of British Army and Navy ships and personnel is believed to have contributed to the range of the pandemic, since the ships carried people with the disease to the shores of the Indian Ocean, from Africa to Indonesia, and north to China and Japan. The second pandemic lasted from 1826 to 1837 and particularly affected North America and Europe. Advancements in transportation and global trade, and increased human migration, including soldiers, meant that more people were carrying the disease more widely. The third pandemic erupted in 1846, persisted until 1860, extended to North Africa, and reached North and South America. It was introduced to North America at Quebec, Canada, via Irish immigrants fleeing the Great Famine. In this pandemic, Brazil was affected for the first time. The fourth pandemic lasted from 1863 to 1875, spreading from India to Naples and Spain, and reaching the United States at New Orleans, Louisiana, in 1873. It spread throughout the Mississippi River system on the continent.
The fifth pandemic was from 1881 to 1896. It started in India and spread to Europe, Asia, and South America. The sixth pandemic ran from 1899 to 1923. These epidemics had a lower number of fatalities because physicians and researchers had a greater understanding of the cholera bacterium. Egypt, the Arabian peninsula, Persia, India, and the Philippines were hit hardest during these epidemics. Other areas, such as Germany in 1892 (primarily the city of Hamburg, where more than 8,600 people died) and Naples from 1910 to 1911, also suffered severe outbreaks.
The seventh pandemic originated in 1961 in Indonesia and is marked by the emergence of a new strain, nicknamed El Tor, which still persists (as of 2018) in developing countries. This pandemic had initially subsided about 1975 and was thought to have ended, but, as noted, it has persisted. There was a rise in cases in the 1990s, and cases have continued since.
Cholera became widespread in the 19th century. Since then it has killed tens of millions of people. In Russia alone, between 1847 and 1851, more than one million people died from the disease. It killed 150,000 Americans during the second pandemic. Between 1900 and 1920, perhaps eight million people died of cholera in India. Cholera officially became the first reportable disease in the United States due to the significant effects it had on health. In England in 1854, John Snow was the first to identify the importance of contaminated water as its source of transmission. Cholera is now no longer considered a pressing health threat in Europe and North America due to filtering and chlorination of water supplies, but it still strongly affects populations in developing countries.
In the past, vessels flew a yellow quarantine flag if any crew members or passengers had cholera. No one aboard a vessel flying a yellow flag would be allowed ashore for an extended period, typically 30 to 40 days. Historically, many different claimed remedies have existed in folklore. Many of the older remedies were based on the miasma theory, that the disease was transmitted by bad air. Some believed that abdominal chilling made one more susceptible, and flannel and cholera belts were included in army kits. In the 1854–1855 outbreak in Naples, homeopathic camphor was used according to Hahnemann. T. J. Ritter's Mother's Remedies book lists tomato syrup as a home remedy from North America. Elecampane was recommended in the United Kingdom, according to William Thomas Fernie. The first effective human vaccine was developed in 1885, and the first effective antibiotic was developed in 1948.
Cholera cases are much less frequent in developed countries, where governments have helped to establish water sanitation practices and effective medical treatments. In the 19th century the United States, for example, had a severe cholera problem similar to those in some developing countries. It had three large cholera outbreaks in the 1800s, which can be attributed to the spread of Vibrio cholerae through interior waterways such as the Erie Canal and the extensive Mississippi River valley system, as well as the major ports along the Eastern Seaboard and their cities upriver. The island of Manhattan in New York City touches the Atlantic Ocean, where cholera collected from river waters and ship discharges just off the coast. At this time, New York City did not have as effective a sanitation system as it developed in the later 20th century, so cholera spread through the city's water supply. Cholera morbus is a historical term that was used to refer to gastroenteritis rather than specifically to what is now defined as the disease of cholera.
Research
One of the major contributions to fighting cholera was made by the physician and pioneer medical scientist John Snow (1813–1858), who in 1854 found a link between cholera and contaminated drinking water. Dr. Snow proposed a microbial origin for epidemic cholera in 1849. In his major "state of the art" review of 1855, he proposed a substantially complete and correct model for the cause of the disease. In two pioneering epidemiological field studies, he was able to demonstrate that human sewage contamination was the most probable disease vector in two major epidemics in London in 1854. His model was not immediately accepted, but it was increasingly seen as plausible as medical microbiology developed over the next 30 years or so. For his work on cholera, John Snow is often regarded as the "Father of Epidemiology". The bacterium was isolated in 1854 by Italian anatomist Filippo Pacini, but its exact nature and his results were not widely known. In the same year, the Catalan Joaquim Balcells i Pascual discovered the bacterium, and in 1856 António Augusto da Costa Simões and José Ferreira de Macedo Pinto, two Portuguese researchers, are believed to have done the same. Between the mid-1850s and the 1900s, cities in developed nations made massive investments in clean water supply and well-separated sewage treatment infrastructure. This eliminated the threat of cholera epidemics from the major developed cities in the world. In 1883, Robert Koch identified V. cholerae with a microscope as the bacillus causing the disease. Hemendra Nath Chatterjee, a Bengali scientist, was the first to formulate and demonstrate the effectiveness of oral rehydration salts (ORS) for treating diarrhea. In his 1953 paper, published in The Lancet, he states that promethazine can stop vomiting during cholera and that oral rehydration is then possible. The formulation of the fluid replacement solution was 4 g of sodium chloride, 25 g of glucose, and 1000 ml of water.
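Chatterjee's 1953 formulation can be sanity-checked with a short osmolarity calculation. The sketch below assumes complete dissociation of sodium chloride into two osmotically active particles and uses standard molar masses; it is an illustration added here, not a computation from the original paper:

# 4 g NaCl + 25 g glucose in 1000 ml of water, per the formulation above.
MOLAR_MASS_NACL = 58.44      # g/mol, standard value
MOLAR_MASS_GLUCOSE = 180.16  # g/mol, standard value

nacl_mol_per_l = 4 / MOLAR_MASS_NACL           # ~0.068 mol/L
glucose_mol_per_l = 25 / MOLAR_MASS_GLUCOSE    # ~0.139 mol/L

# NaCl contributes two particles (Na+ and Cl-) per formula unit.
osmolarity_mosm = (2 * nacl_mol_per_l + glucose_mol_per_l) * 1000
print(round(osmolarity_mosm))  # ~276 mOsm/L, roughly comparable to the
# ~285-295 mOsm/L of normal plasma, which helps explain its tolerability.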
Indian medical scientist Sambhu Nath De discovered the cholera toxin and the animal model of cholera, and successfully demonstrated the method of transmission of the cholera pathogen Vibrio cholerae. Robert Allan Phillips, working at US Naval Medical Research Unit Two in Southeast Asia, evaluated the pathophysiology of the disease using modern laboratory chemistry techniques and developed a protocol for rehydration. His research led the Lasker Foundation to award him its prize in 1967. More recently, in 2002, Alam et al. studied stool samples from patients at the International Centre for Diarrhoeal Disease in Dhaka, Bangladesh. From the various experiments they conducted, the researchers found a correlation between the passage of V. cholerae through the human digestive system and an increased infectivity state. Furthermore, the researchers found that the bacterium enters a hyperinfectious state in which genes that control biosynthesis of amino acids, iron uptake systems, and formation of periplasmic nitrate reductase complexes are induced just before defecation. These induced characteristics allow the cholera vibrios to survive in the "rice water" stools, an environment of limited oxygen and iron, of patients with a cholera infection.
Global Strategy
In 2017, the WHO launched the "Ending Cholera: a global roadmap to 2030" strategy, which aims to reduce cholera deaths by 90% by 2030. The strategy was developed by the Global Task Force on Cholera Control (GTFCC), which develops country-specific plans and monitors progress. The approach to achieving this goal combines surveillance, water sanitation, rehydration treatment, and oral vaccines. Specifically, the control strategy focuses on three approaches: i) early detection of and response to outbreaks in order to contain them, ii) stopping cholera transmission through improved sanitation and vaccination in hotspots, and iii) a global framework for cholera control through the GTFCC. The WHO and the GTFCC do not consider global cholera eradication a viable goal: even though humans are the only host of cholera, the bacterium can persist in the environment without a human host. While global eradication is not possible, elimination of human-to-human transmission may be. Local elimination is possible and has been pursued most recently during the 2010s Haiti cholera outbreak; Haiti aims to achieve certification of elimination by 2022. The GTFCC targets 47 countries, 13 of which have established vaccination campaigns.
Society and culture
Health policy
In many developing countries, cholera still reaches its victims through contaminated water sources, and countries without proper sanitation techniques have a greater incidence of the disease. Governments can play a role in this. In 2008, for example, the Zimbabwean cholera outbreak was due partly to the government's role, according to a report from the James Baker Institute. The Haitian government's inability to provide safe drinking water after the 2010 earthquake led to an increase in cholera cases as well. Similarly, South Africa's cholera outbreak was exacerbated by the government's policy of privatizing water programs: the wealthy elite of the country were able to afford safe water while others had to use water from cholera-infected rivers. According to Rita R. Colwell of the James Baker Institute, if cholera does begin to spread, government preparedness is crucial. A government's ability to contain the disease before it extends to other areas can prevent a high death toll and the development of an epidemic or even pandemic. Effective disease surveillance can ensure that cholera outbreaks are recognized as soon as possible and dealt with appropriately. Oftentimes, this will allow public health programs to determine and control the cause of the cases, whether it is unsanitary water or seafood that has accumulated large numbers of Vibrio cholerae. Having an effective surveillance program contributes to a government's ability to prevent cholera from spreading. In the year 2000 in the state of Kerala in India, the Kottayam district was determined to be "cholera-affected"; this pronouncement led to task forces that concentrated on educating citizens through 13,670 information sessions about human health. These task forces promoted the boiling of water to obtain safe water, and provided chlorine and oral rehydration salts. Ultimately, this helped to control the spread of the disease to other areas and minimize deaths. On the other hand, researchers have shown that most of the citizens infected during the 1991 cholera outbreak in Bangladesh lived in rural areas and were not recognized by the government's surveillance program. This inhibited physicians' ability to detect cholera cases early. According to Colwell, the quality and inclusiveness of a country's health care system affects the control of cholera, as it did in the Zimbabwean cholera outbreak. While sanitation practices are important, when governments respond quickly and have readily available vaccines, the country will have a lower cholera death toll. Affordability of vaccines can be a problem; if governments do not provide vaccinations, only the wealthy may be able to afford them, and there will be a greater toll on the country's poor. The speed with which government leaders respond to cholera outbreaks is important. Besides contributing to an effective or declining public health care system and water sanitation treatments, government can have indirect effects on cholera control and the effectiveness of a response to cholera. A country's government can impact its ability to prevent disease and control its spread. A speedy government response backed by a fully functioning health care system and financial resources can prevent cholera's spread, limiting both the death toll and, at the very least, the decline in education that occurs as children are kept out of school to minimize the risk of infection.
Notable cases
Tchaikovsky's death has traditionally been attributed to cholera, most probably contracted through drinking contaminated water several days earlier. Tchaikovsky's mother died of cholera, and his father became sick with cholera at this time but made a full recovery. Some scholars, however, including English musicologist and Tchaikovsky authority David Brown and biographer Anthony Holden, have theorized that his death was a suicide.
2010s Haiti cholera outbreak. Ten months after the 2010 earthquake, an outbreak swept over Haiti, traced to a United Nations base of peacekeepers from Nepal. It was the worst cholera outbreak in recent history, as well as the best-documented cholera outbreak in modern public health.
Adam Mickiewicz, Polish poet and novelist, is thought to have died of cholera in Istanbul in 1855.
Sadi Carnot, physicist, a pioneer of thermodynamics (d. 1832)
Charles X, King of France (d. 1836)
James K. Polk, eleventh president of the United States (d. 1849)
Carl von Clausewitz, Prussian soldier and German military theorist (d. 1831)
Elliot Bovill, Chief Justice of the Straits Settlements (1893)
Nikola Tesla, Serbian-American inventor, engineer and futurist known for his contributions to the design of the modern alternating current (AC) electricity supply system, contracted cholera in 1873 at the age of 17. He was bedridden for nine months, and near death multiple times, but survived and fully recovered.
In popular culture
Unlike tuberculosis ("consumption"), which in literature and the arts was often romanticized as a disease of denizens of the demimonde or those with an artistic temperament, cholera is a disease that almost entirely affects the lower classes living in filth and poverty. This, and the unpleasant course of the disease, which includes voluminous "rice-water" diarrhea, the hemorrhaging of liquids from the mouth, and violent muscle contractions that continue even after death, have discouraged the disease from being romanticized, or even presented factually, in popular culture.
The 1889 novel Mastro-don Gesualdo by Giovanni Verga presents the course of a cholera epidemic across the island of Sicily, but does not show the suffering of those affected.
In Thomas Mann's novella Death in Venice, first published in 1912 as Der Tod in Venedig, Mann "presented the disease as emblematic of the final bestial degradation of the sexually transgressive author Gustav von Aschenbach." Contrary to the actual facts of how violently cholera kills, Mann has his protagonist die peacefully on a beach in a deck chair. Luchino Visconti's 1971 film version also hid from the audience the actual course of the disease. Mann's novella was also made into an opera by Benjamin Britten in 1973, his last one, and into a ballet by John Neumeier for his Hamburg Ballet company in December 2003.
In Gabriel García Márquez's 1985 novel Love in the Time of Cholera, cholera is "a looming background presence rather than a central figure requiring vile description." The novel was adapted in 2007 for the film of the same name, directed by Mike Newell.
Country examples
Zambia
In Zambia, widespread cholera outbreaks have occurred since 1977, most commonly in the capital city of Lusaka. In 2017, an outbreak of cholera was declared in Zambia after laboratory confirmation of Vibrio cholerae O1, biotype El Tor, serotype Ogawa, in stool samples from two patients with acute watery diarrhea. There was a rapid increase in the number of cases, from several hundred in early December 2017 to approximately 2,000 by early January 2018. With intensification of the rains, new cases increased on a daily basis, reaching a peak in the first week of January 2018 with over 700 cases reported. In collaboration with partners, the Zambia Ministry of Health (MoH) launched a multifaceted public health response that included increased chlorination of the Lusaka municipal water supply, provision of emergency water supplies, water quality monitoring and testing, enhanced surveillance, epidemiologic investigations, a cholera vaccination campaign, aggressive case management and health care worker training, and laboratory testing of clinical samples. The Zambian Ministry of Health had also implemented a reactive one-dose oral cholera vaccine (OCV) campaign in April 2016 in three Lusaka compounds, followed by a pre-emptive second round in December.
India
The city of Kolkata, India, in the state of West Bengal in the Ganges delta, has been described as the "homeland of cholera", with regular outbreaks and pronounced seasonality. In India, where the disease is endemic, cholera outbreaks occur every year between the dry and rainy seasons. The country is also characterized by high population density, unsafe drinking water, open drains, and poor sanitation, which provide an optimal niche for the survival, sustenance, and transmission of Vibrio cholerae.
Democratic Republic of Congo
In Goma in the Democratic Republic of Congo, cholera has left an enduring mark on human and medical history. Cholera pandemics in the 19th and 20th centuries led to the growth of epidemiology as a science, and in recent years the disease has continued to drive advances in the concepts of disease ecology, basic membrane biology, and transmembrane signaling, and in the use of scientific information and treatment design.
</s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Invasive hydatidiform mole and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Choriocarcinoma and can you explain it in depth? [/INST] Choriocarcinoma is a malignant, trophoblastic cancer, usually of the placenta. It is characterized by early hematogenous spread to the lungs. It belongs to the malignant end of the spectrum in gestational trophoblastic disease (GTD). It is also classified as a germ cell tumor and may arise in the testis or ovary.
Signs and symptoms
increased quantitative chorionic gonadotropin (the "pregnancy hormone") levels
vaginal bleeding
shortness of breath
hemoptysis (coughing up blood)
chest pain
chest X-ray shows multiple infiltrates of various shapes in both lungs
presents in males as a testicular cancer, sometimes with skin hyperpigmentation (from excess chorionic gonadotropin cross-reacting with the alpha-MSH receptor), gynecomastia, and weight loss (from excess chorionic gonadotropin cross-reacting with the LH, FSH, and TSH receptors)
can present with decreased thyroid-stimulating hormone (TSH) due to hyperthyroidism.
Cause
Choriocarcinoma of the placenta during pregnancy is preceded by:
hydatidiform mole (50% of cases)
spontaneous abortion (20% of cases)
ectopic pregnancy (2% of cases)
normal term pregnancy (20–30% of cases)
hyperemesis gravidarum
Rarely, choriocarcinoma occurs in primary locations other than the placenta; very rarely, it occurs in the testicles. Although trophoblastic components are common components of mixed germ cell tumors, pure choriocarcinoma of the adult testis is rare. Pure choriocarcinoma of the testis represents the most aggressive pathologic variant of germ cell tumors in adults, characteristically with early hematogenous and lymphatic metastatic spread </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Claudication and can you explain it in depth? [/INST] Claudication is a medical term usually referring to impairment in walking, or pain, discomfort, numbness, or tiredness in the legs that occurs during walking or standing and is relieved by rest. The perceived level of pain from claudication can be mild to extremely severe. Claudication is most common in the calves but it can also affect the feet, thighs, hips, buttocks, or arms. The word claudication comes from the Latin claudicare meaning to limp.
Claudication that appears after a short amount of walking may sometimes be described by US medical professionals by the number of typical city street blocks that the patient can walk before the onset of claudication. Thus, "one-block claudication" appears after walking one block, "two-block claudication" appears after walking two blocks, and so on. The exact length of a block varies by locality, but it is on the order of 100 metres.
Types
Intermittent vascular
Intermittent vascular (or arterial) claudication (Latin: claudicatio intermittens) most often refers to cramping pains in the buttock or leg muscles, especially the calves. It is caused by poor circulation of the blood to the affected area, called peripheral arterial disease. The poor blood flow is often a result of atherosclerotic blockages more proximal to the affected area; individuals with intermittent claudication may have diabetes — often undiagnosed. Another cause, or exacerbating factor, is excessive sitting (several hours), especially in the absence of reasonable breaks, along with a general lack of walking or other exercise that stimulates the legs.
Spinal or neurogenic
Spinal or neurogenic claudication is not due to lack of blood supply, but rather it is caused by nerve root compression or stenosis of the spinal canal, usually from a degenerative spine, most often at the "L4-L5" or "L5-S1" level. This may result from many factors, including bulging disc, herniated disc or fragments from previously herniated discs (post-operative), scar tissue from previous surgeries, osteophytes (bone spurs that jut out from the edge of a vertebra into the foramen, the opening through which the nerve root passes). In most cases neurogenic claudication is bilateral, i.e. symmetrical.
Jaw
Jaw claudication is pain in the jaw or ear while chewing. This is caused by insufficiency of the arteries supplying the jaw muscles, associated with giant cell arteritis.
Diagnosis
Differential diagnosis
Vascular (or arterial) claudication typically occurs after activity or ambulation for a distance, with resultant vascular insufficiency (lack of blood flow) in which the muscles' demand for oxygen outweighs the supply. Symptoms include lower extremity cramping. Resting from activity, even in a standing position, may help relieve the symptoms </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Colic and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Common cold and can you explain it in depth? [/INST] The common cold, also known simply as a cold, is a viral infectious disease of the upper respiratory tract that primarily affects the respiratory mucosa of the nose, throat, sinuses, and larynx. Signs and symptoms may appear less than two days after exposure to the virus. These may include coughing, sore throat, runny nose, sneezing, headache, and fever. People usually recover in seven to ten days, but some symptoms may last up to three weeks. Occasionally, those with other health problems may develop pneumonia. Well over 200 virus strains are implicated in causing the common cold, with rhinoviruses, coronaviruses, adenoviruses and enteroviruses being the most common. They spread through the air during close contact with infected people or indirectly through contact with objects in the environment, followed by transfer to the mouth or nose. Risk factors include going to child care facilities, not sleeping well, and psychological stress. The symptoms are mostly due to the body's immune response to the infection rather than to tissue destruction by the viruses themselves. The symptoms of influenza are similar to those of a cold, although usually more severe and less likely to include a runny nose. There is no vaccine for the common cold. The primary methods of prevention are hand washing; not touching the eyes, nose or mouth with unwashed hands; and staying away from sick people. Some evidence supports the use of face masks. There is also no cure, but the symptoms can be treated. Zinc may reduce the duration and severity of symptoms if started shortly after the onset of symptoms. Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen may help with pain. Antibiotics, however, should not be used, as all colds are caused by viruses, and there is no good evidence that cough medicines are effective. The common cold is the most frequent infectious disease in humans. Under normal circumstances, the average adult gets two to three colds a year, while the average child may get six to eight. Infections occur more commonly during the winter. These infections have existed throughout human history.
Signs and symptoms
The typical symptoms of a cold include cough, runny nose, sneezing, nasal congestion, and a sore throat, sometimes accompanied by muscle ache, fatigue, headache, and loss of appetite. A sore throat is present in about 40% of cases, a cough in about 50%, and muscle ache likewise in about 50%. In adults, a fever is generally not present, but it is common in infants and young children. The cough is usually mild compared to that accompanying influenza. While a cough and a fever indicate a higher likelihood of influenza in adults, a great deal of similarity exists between these two conditions. A number of the viruses that cause the common cold may also result in asymptomatic infections. The color of the mucus or nasal secretion may vary from clear to yellow to green and does not indicate the class of agent causing the infection.
Progression
A cold usually begins with fatigue, a feeling of being chilled, sneezing, and a headache, followed in a couple of days by a runny nose and cough. Symptoms may begin within sixteen hours of exposure and typically peak two to four days after onset. They usually resolve in seven to ten days, but some can last for up to three weeks. The average duration of cough is eighteen days and in some cases people develop a post-viral cough which can linger after the infection is gone. In children, the cough lasts for more than ten days in 35–40% of cases and continues for more than 25 days in 10%.
Causes
Viruses
The common cold is an infection of the upper respiratory tract which can be caused by many different viruses. The most commonly implicated is a rhinovirus (30–80%), a type of picornavirus with 99 known serotypes. Other commonly implicated viruses include human coronaviruses (≈ 15%), influenza viruses (10–15%), adenoviruses (5%), human respiratory syncytial virus (RSV), enteroviruses other than rhinoviruses, human parainfluenza viruses, and human metapneumovirus. Frequently more than one virus is present. In total, more than 200 viral types are associated with colds.
Transmission
The common cold virus is typically transmitted via airborne droplets (aerosols), direct contact with infected nasal secretions, or fomites (contaminated objects). Which of these routes is of primary importance has not been determined. The viruses may survive for prolonged periods in the environment (over 18 hours for rhinoviruses) and can be picked up by people's hands and subsequently carried to their eyes or nose, where infection occurs. Transmission from animals is considered highly unlikely; an outbreak documented at a British scientific base on Adelaide Island after seventeen weeks of isolation was thought to have been caused by transmission from a contaminated object or an asymptomatic human carrier, rather than from the husky dogs which were also present at the base. Transmission is common in daycare and at school due to the proximity of many children with little immunity and frequently poor hygiene. These infections are then brought home to other members of the family. There is no evidence that recirculated air during commercial flight is a method of transmission. People sitting in close proximity appear to be at greater risk of infection. Rhinovirus-caused colds are most infectious during the first three days of symptoms; they are much less infectious afterwards.
Weather
A common misconception is that one can "catch a cold" simply through prolonged exposure to cold weather. Although it is now known that colds are viral infections, the prevalence of many such viruses is indeed seasonal, occurring more frequently during cold weather. The reason for the seasonality has not been conclusively determined. Possible explanations may include cold temperature-induced changes in the respiratory system, decreased immune response, and low humidity causing an increase in viral transmission rates, perhaps due to dry air allowing small viral droplets to disperse farther and stay in the air longer. The apparent seasonality may also be due to social factors, such as people spending more time indoors, near infected people, and specifically children at school. Although normal exposure to cold does not increase one's risk of infection, severe exposure leading to significant reduction of body temperature (hypothermia) may put one at a greater risk for the common cold; although controversial, the majority of evidence suggests that it may increase susceptibility to infection.
Other
Herd immunity, generated from previous exposure to cold viruses, plays an important role in limiting viral spread, as seen with younger populations that have greater rates of respiratory infections. Poor immune function is a risk factor for disease. Insufficient sleep and malnutrition have been associated with a greater risk of developing infection following rhinovirus exposure; this is believed to be due to their effects on immune function. Breast feeding decreases the risk of acute otitis media and lower respiratory tract infections among other diseases, and it is recommended that breast feeding be continued when an infant has a cold. In the developed world breast feeding may not be protective against the common cold in and of itself.
Pathophysiology
The symptoms of the common cold are believed to be primarily related to the immune response to the virus. The mechanism of this immune response is virus specific. For example, the rhinovirus is typically acquired by direct contact; it binds to humans via ICAM-1 receptors and the CDHR3 receptor through unknown mechanisms to trigger the release of inflammatory mediators. These inflammatory mediators then produce the symptoms. It does not generally cause damage to the nasal epithelium. The respiratory syncytial virus (RSV), on the other hand, is contracted by direct contact and airborne droplets. It then replicates in the nose and throat before frequently spreading to the lower respiratory tract. RSV does cause epithelium damage. Human parainfluenza virus typically results in inflammation of the nose, throat, and bronchi. In young children when it affects the trachea it may produce the symptoms of croup due to the small size of their airways.
Diagnosis
The distinction between viral upper respiratory tract infections is loosely based on the location of symptoms, with the common cold affecting primarily the nose (rhinitis), throat (pharyngitis), and lungs (bronchitis). There can be significant overlap, and more than one area can be affected. Self-diagnosis is frequent. Isolation of the viral agent involved is rarely performed, and it is generally not possible to identify the virus type through symptoms.
Prevention
The only useful ways to reduce the spread of cold viruses are physical measures such as using correct hand washing technique and face masks; in the healthcare environment, gowns and disposable gloves are also used. Isolation or quarantine is not used as the disease is so widespread and symptoms are non-specific. There is no vaccine to protect against the common cold. Vaccination has proven difficult as there are many viruses involved and they mutate rapidly. Creation of a broadly effective vaccine is, therefore, highly improbable. Regular hand washing appears to be effective in reducing the transmission of cold viruses, especially among children. Whether the addition of antivirals or antibacterials to normal hand washing provides greater benefit is unknown. Wearing face masks when around people who are infected may be beneficial; however, there is insufficient evidence for maintaining a greater social distance. It is unclear if zinc supplements affect the likelihood of contracting a cold. Routine vitamin C supplements do not reduce the risk or severity of the common cold, though they may reduce its duration. Gargling with water was found useful in one small trial.
Management
Treatments of the common cold primarily involve medications and other therapies for symptomatic relief. Getting plenty of rest, drinking fluids to maintain hydration, and gargling with warm salt water are reasonable conservative measures. Much of the benefit from symptomatic treatment is, however, attributed to the placebo effect. As of 2010, no medications or herbal remedies had been conclusively demonstrated to shorten the duration of infection.
Symptomatic
Treatments that may help with symptoms include simple pain medication and medications for fevers such as ibuprofen and acetaminophen (paracetamol). It is not clear, however, whether acetaminophen helps with symptoms. It is not known if over the counter cough medications are effective for treating an acute cough. Cough medicines are not recommended for use in children due to a lack of evidence supporting effectiveness and the potential for harm. In 2009, Canada restricted the use of over-the-counter cough and cold medication in children six years and under due to concerns regarding risks and unproven benefits. The misuse of dextromethorphan (an over-the-counter cough medicine) has led to its ban in a number of countries. Intranasal corticosteroids have not been found to be useful. In adults, short-term use of nasal decongestants may have a small benefit. Antihistamines may improve symptoms in the first day or two; however, there is no longer-term benefit and they have adverse effects such as drowsiness. Other decongestants such as pseudoephedrine appear effective in adults. Combined oral analgesics, antihistaminics and decongestants are generally effective for older children and adults. Ipratropium nasal spray may reduce the symptoms of a runny nose but has little effect on stuffiness. Ipratropium may also help with cough in adults. The safety and effectiveness of nasal decongestant use in children is unclear. Due to lack of studies, it is not known whether increased fluid intake improves symptoms or shortens respiratory illness. As of 2017, heated and humidified air, such as via RhinoTherm, is of unclear benefit. One study has found chest vapor rub to provide some relief of nocturnal cough, congestion, and sleep difficulty. Some advise avoiding physical exercise if there are symptoms such as fever, widespread muscle aches or fatigue. It is regarded as safe to perform moderate exercise if the symptoms are confined to the head, including runny nose, nasal congestion, sneezing, or a minor sore throat. There is a popular belief that having a hot drink can help with cold symptoms, but evidence to support this is very limited.
Antibiotics and antivirals
Antibiotics have no effect against viral infections, including the common cold. Due to their side effects, antibiotics cause overall harm but are still frequently prescribed. Some of the reasons that antibiotics are so commonly prescribed include people's expectations for them, physicians' desire to help, and the difficulty in excluding complications that may be amenable to antibiotics. There are no effective antiviral drugs for the common cold even though some preliminary research has shown benefits.
Zinc
Zinc supplements may shorten the duration of colds by up to 33% and reduce the severity of symptoms if supplementation begins within 24 hours of the onset of symptoms. Some zinc remedies directly applied to the inside of the nose have led to the loss of the sense of smell. A 2017 review did not recommend the use of zinc for the common cold for various reasons, whereas reviews from 2017 and 2018 both recommended its use while advocating further research on the topic.
Alternative medicine
While there are many alternative medicines and Chinese herbal medicines supposed to treat the common cold, there is insufficient scientific evidence to support their use. As of 2015, there is weak evidence to support nasal irrigation with saline. There is no firm evidence that Echinacea products or garlic provide any meaningful benefit in treating or preventing colds.
Vitamins C and D
Vitamin C supplementation does not affect the incidence of the common cold, but may reduce its duration. There is no conclusive evidence that vitamin D supplementation is efficacious in the prevention or treatment of respiratory tract infections.
Prognosis
The common cold is generally mild and self-limiting with most symptoms generally improving in a week. In children, half of cases go away in 10 days and 90% in 15 days. Severe complications, if they occur, are usually in the very old, the very young, or those who are immunosuppressed. Secondary bacterial infections may occur resulting in sinusitis, pharyngitis, or an ear infection. It is estimated that sinusitis occurs in 8% and ear infection in 30% of cases.
Epidemiology
The common cold is the most common human disease and affects people all over the globe. Adults typically have two to three infections annually, and children may have six to ten colds a year (and up to twelve colds a year for school children). Rates of symptomatic infections increase in the elderly due to declining immunity. Native Americans and Inuit are more likely to be infected with colds and develop complications such as otitis media than Caucasians. This may be explained as much by issues such as poverty and overcrowding as by ethnicity.
History
While the cause of the common cold was identified in the 1950s, the disease appears to have been with humanity since its early history. Its symptoms and treatment are described in the Egyptian Ebers papyrus, the oldest existing medical text, written before the 16th century BCE. The name "cold" came into use in the 16th century, due to the similarity between its symptoms and those of exposure to cold weather. In the United Kingdom, the Common Cold Unit (CCU) was set up by the Medical Research Council in 1946 and it was where the rhinovirus was discovered in 1956. In the 1970s, the CCU demonstrated that treatment with interferon during the incubation phase of rhinovirus infection protects somewhat against the disease, but no practical treatment could be developed. The unit was closed in 1989, two years after it completed research of zinc gluconate lozenges in the prevention and treatment of rhinovirus colds, the only successful treatment in the history of the unit.
Research directions
Antivirals have been tested for effectiveness in the common cold; as of 2009, none had been both found effective and licensed for use. There are ongoing trials of the anti-viral drug pleconaril which shows promise against picornaviruses as well as trials of BTA-798. The oral form of pleconaril had safety issues and an aerosol form is being studied. Double-stranded RNA activated caspase oligomerizer (DRACO), a broad-spectrum antiviral therapy, has shown preliminary effectiveness in treating rhinovirus, as well as other infectious viruses. The genomes of all known human rhinovirus strains have been sequenced.
Societal impact
The economic impact of the common cold is not well understood in much of the world. In the United States, the common cold leads to 75–100 million physician visits annually at a conservative cost estimate of $7.7 billion per year </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Genital wart and can you explain it in depth? [/INST] Genital warts are a sexually transmitted infection caused by certain types of human papillomavirus (HPV). They are generally pink in color and project out from the surface of the skin. Usually they cause few symptoms, but can occasionally be painful. Typically they appear one to eight months following exposure. Warts are the most easily recognized symptom of genital HPV infection. HPV types 6 and 11 are responsible for causing the majority of genital warts, whereas HPV types 16, 18, 31, 33, and 35 are also occasionally found. It is spread through direct skin-to-skin contact, usually during oral, genital, or anal sex with an infected partner. Diagnosis is generally based on symptoms and can be confirmed by biopsy. The types of HPV that cause cancer are not the same as those that cause warts. Some HPV vaccines can prevent genital warts, as may condoms. Treatment options include creams such as podophyllin, imiquimod, and trichloroacetic acid. Cryotherapy or surgery may also be an option. After treatment warts often resolve within six months. Without treatment, in up to a third of cases they resolve on their own. About 1% of people in the United States have genital warts. Many people, however, are infected and do not have symptoms. Without vaccination nearly all sexually active people will get some type of HPV at one point in their lives. The disease has been known at least since the time of Hippocrates in 300 BC.
Signs and symptoms
They may be found anywhere in the anal or genital area, and are frequently found on external surfaces of the body, including the penile shaft, scrotum, or labia majora of the vagina. They can also occur on internal surfaces like the opening to the urethra, inside the vagina, on the cervix, or in the anus. They can be as small as 1–5 mm in diameter, but can also grow or spread into large masses in the genital or anal area. In some cases they look like small stalks. They may be hard ("keratinized") or soft. Their color can be variable, and sometimes they may bleed. In most cases, there are no symptoms of HPV infection other than the warts themselves. Sometimes warts may cause itching, redness, or discomfort, especially when they occur around the anus. Although they are usually without other physical symptoms, an outbreak of genital warts may cause psychological distress, such as anxiety, in some people.
Causes
Transmission
HPV is most commonly transmitted through penetrative sex. While HPV can also be transmitted via non-penetrative sexual activity, it is less transmissible than via penetrative sex. There is conflicting evidence about the effect of condoms on transmission of low-risk HPV. Some studies have suggested that they are effective at reducing transmission. Other studies suggest that condoms are not effective at preventing transmission of the low-risk HPV variants that cause genital warts. The effect of condoms on HPV transmission may also be sex-dependent; there is some evidence that condoms are more effective at preventing infection of males than of females. The types of HPV that cause warts are highly transmissible. Roughly three out of four unaffected partners of patients with warts develop them within eight months. Other studies of partner concordance suggest that the presence of visible warts may be an indicator of increased infectivity; HPV concordance rates are higher in couples where one partner has visible warts.
Latency and recurrence
Although 90% of HPV infections are cleared by the body within two years of infection, it is possible for infected cells to undergo a latency (quiet) period, with the first occurrence or a recurrence of symptoms happening months or years later. Latent HPV, even with no outward symptoms, is still transmissible to a sexual partner. If an individual has unprotected sex with an infected partner, there is a 70% chance that he or she will also become infected. In individuals with a history of previous HPV infection, the appearance of new warts may be either from a new exposure to HPV, or from a recurrence of the previous infection. As many as one-third of people with warts will experience a recurrence.
Children
Anal or genital warts may be transmitted during birth. The presence of wart-like lesions on the genitals of young children has been suggested as an indicator of sexual abuse. However, genital warts can sometimes result from autoinoculation by warts elsewhere on the body, such as from the hands. It has also been reported from sharing of swimsuits, underwear, or bath towels, and from non-sexual touching during routine care such as diapering. Genital warts in children are less likely to be caused by HPV subtypes 6 and 11 than adults, and more likely to be caused by HPV types that cause warts elsewhere on the body ("cutaneous types"). Surveys of pediatricians who are child abuse specialists suggest that in children younger than 4 years old, there is no consensus on whether the appearance of new anal or genital warts, by itself, can be considered an indicator of sexual abuse.
Diagnosis
The diagnosis of genital warts is most often made visually, but may require confirmation by biopsy in some cases. Smaller warts may occasionally be confused with molluscum contagiosum.
Genital warts, histopathologically, characteristically rise above the skin surface due to enlargement of the dermal papillae, have parakeratosis and the characteristic nuclear changes typical of HPV infections (nuclear enlargement with perinuclear clearing).
DNA tests are available for diagnosis of high-risk HPV infections. Because genital warts are caused by low-risk HPV types, DNA tests cannot be used for diagnosis of genital warts or other low-risk HPV infections. Some practitioners use an acetic acid solution to identify smaller warts ("subclinical lesions"), but this practice is controversial. Because a diagnosis made with acetic acid will not meaningfully affect the course of the disease, and cannot be verified by a more specific test, a 2007 UK guideline advises against its use.
Prevention
Gardasil (sold by Merck & Co.) is a vaccine that protects against human papillomavirus types 6, 11, 16 and 18. Types 6 and 11 cause genital warts, while 16 and 18 cause cervical cancer. The vaccine is preventive, not therapeutic, and must be given before exposure to the virus type to be effective, ideally before the beginning of sexual activity. The vaccine is approved by the US Food and Drug Administration for use in both males and females as early as 9 years of age. In the UK, Gardasil replaced Cervarix in September 2012 for reasons unrelated to safety. Cervarix had been used routinely in young females from its introduction in 2008, but was only effective against the high-risk HPV types 16 and 18, neither of which typically causes warts.
Management
There is no cure for HPV. Existing treatments are focused on the removal of visible warts, but these may also regress on their own without any therapy. There is no evidence to suggest that removing visible warts reduces transmission of the underlying HPV infection. As many as 80% of people with HPV will clear the infection within 18 months. A healthcare practitioner may offer one of several ways to treat warts, depending on their number, sizes, locations, or other factors. All treatments can potentially cause depigmentation, itching, pain, or scarring. Treatments can be classified as either physically ablative or topical agents. Physically ablative therapies are considered more effective at initial wart removal, but like all therapies have significant recurrence rates. Many therapies, including folk remedies, have been suggested for treating genital warts, some of which have little evidence to suggest they are effective or safe. Those listed here are ones mentioned in national or international practice guidelines as having some basis in evidence for their use.
Physical ablation
Physically ablative methods are more likely to be effective on keratinized warts. They are also most appropriate for patients with fewer numbers of relatively smaller warts.
Simple excision, such as with scissors under local anesthesia, is highly effective.
Liquid nitrogen cryosurgery is usually performed in an office visit, at weekly intervals. It is effective, inexpensive, safe for pregnancy, and does not usually cause scarring.
Electrocauterization (sometimes called "loop electrical excision procedure" or LEEP) is a procedure with a long history of use and is considered effective.
Laser ablation has less evidence to suggest its use. It may be less effective than other ablative methods. It is extremely expensive, and often used as a last resort.
Formal surgical procedures, performed by a specialist under general anesthesia or spinal anesthesia, may be necessary for larger or more extensive warts, intra-anal warts, or warts in children. They carry a greater risk of scarring than other methods.
Topical agents
A 0.15–0.5% podophyllotoxin (also called podofilox) solution in a gel or cream. It can be applied by the patient to the affected area and is not washed off. It is the purified and standardized active ingredient of podophyllin (see below). Podofilox is safer and more effective than podophyllin. Skin erosion and pain are more commonly reported than with imiquimod and sinecatechins. Its use is cycled (2 times per day for 3 days then 4–7 days off); one review states that it should only be used for four cycles.
Imiquimod is a topical immune response cream, applied to the affected area. It causes less local irritation than podofilox but may cause fungal infections (11% in package insert) and flu-like symptoms (less than 5% disclosed in package insert).
Sinecatechins is an ointment of catechins (55% epigallocatechin gallate) extracted from green tea and other components </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Paracetamol poisoning and can you explain it in depth? [/INST] Paracetamol poisoning, also known as acetaminophen poisoning, is caused by excessive use of the medication paracetamol (acetaminophen). Most people have few or non-specific symptoms in the first 24 hours following overdose. These include feeling tired, abdominal pain, or nausea. This is typically followed by a couple of days without any symptoms, after which yellowish skin, blood clotting problems, and confusion occurs as a result of liver failure. Additional complications may include kidney failure, pancreatitis, low blood sugar, and lactic acidosis. If death does not occur, people tend to recover fully over a couple of weeks. Without treatment, death from toxicity occurs 4 to 18 days later.Paracetamol poisoning can occur accidentally or as an attempt to die by suicide. Risk factors for toxicity include alcoholism, malnutrition, and the taking of certain other hepatotoxic medications. Liver damage results not from paracetamol itself, but from one of its metabolites, N-acetyl-p-benzoquinone imine (NAPQI). NAPQI decreases the livers glutathione and directly damages cells in the liver. Diagnosis is based on the blood level of paracetamol at specific times after the medication was taken. These values are often plotted on the Rumack-Matthew nomogram to determine level of concern.Treatment may include activated charcoal if the person seeks medical help soon after the overdose. Attempting to force the person to vomit is not recommended. If there is a potential for toxicity, the antidote acetylcysteine is recommended. The medication is generally given for at least 24 hours. Psychiatric care may be required following recovery. A liver transplant may be required if damage to the liver becomes severe. The need for transplant is often based on low blood pH, high blood lactate, poor blood clotting, or significant hepatic encephalopathy. With early treatment liver failure is rare. Death occurs in about 0.1% of cases.Paracetamol poisoning was first described in the 1960s. Rates of poisoning vary significantly between regions of the world. In the United States more than 100,000 cases occur a year. In the United Kingdom it is the medication responsible for the greatest number of overdoses. Young children are most commonly affected. In the United States and the United Kingdom, paracetamol is the most common cause of acute liver failure.
Signs and symptoms
The signs and symptoms of paracetamol toxicity occur in three phases. The first phase begins within hours of overdose, and consists of nausea, vomiting, a pale appearance, and sweating. However, patients often have no specific symptoms or only mild symptoms in the first 24 hours of poisoning. Rarely, after massive overdoses, patients may develop symptoms of metabolic acidosis and coma early in the course of poisoning.The second phase occurs between 24 hours and 72 hours following overdose and consists of signs of increasing liver damage. In general, damage occurs in liver cells as they metabolize the paracetamol. The individual may experience right upper quadrant abdominal pain. The increasing liver damage also changes biochemical markers of liver function; International normalized ratio (INR) and the liver transaminases ALT and AST rise to abnormal levels. Acute kidney failure may also occur during this phase, typically caused by either hepatorenal syndrome or multiple organ dysfunction syndrome. In some cases, acute kidney failure may be the primary clinical manifestation of toxicity. In these cases, it has been suggested that the toxic metabolite is produced more in the kidneys than in the liver.The third phase follows at 3 to 5 days, and is marked by complications of massive liver necrosis leading to fulminant liver failure with complications of coagulation defects, low blood sugar, kidney failure, hepatic encephalopathy, brain swelling, sepsis, multiple organ failure, and death. If the third phase is survived, the liver necrosis runs its course, and liver and kidney function typically return to normal in a few weeks. The severity of paracetamol toxicity varies depending on the dose and whether appropriate treatment is received.
Cause
The toxic dose of paracetamol is highly variable. In general the recommended maximum daily dose for healthy adults is 4 grams. Higher doses lead to increasing risk of toxicity. In adults, single doses above 10 grams or 200 mg/kg of bodyweight, whichever is lower, have a reasonable likelihood of causing toxicity. Toxicity can also occur when multiple smaller doses within 24 hours exceed these levels. Following a dose of 1 gram of paracetamol four times a day for two weeks, patients can expect an increase in alanine transaminase in their liver to typically about three times the normal value. It is unlikely that this dose would lead to liver failure. Studies have shown significant hepatotoxicity is uncommon in patients who have taken greater than normal doses over 3 to 4 days. In adults, a dose of 6 grams a day over the preceding 48 hours could potentially lead to toxicity, while in children acute doses above 200 mg/kg could potentially cause toxicity. Acute paracetamol overdose in children rarely causes illness or death, and it is very uncommon for children to have levels that require treatment, with chronic larger-than-normal doses being the major cause of toxicity in children. Intentional overdosing (self-poisoning, with suicidal intent) is frequently implicated in paracetamol toxicity. In a 2006 review, paracetamol was the most frequently ingested compound in intentional overdosing. In rare individuals, paracetamol toxicity can result from normal use. This may be due to individual ("idiosyncratic") differences in the expression and activity of certain enzymes in one of the metabolic pathways that handle paracetamol (see paracetamol's metabolism).
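The "whichever is lower" rule above is plain arithmetic, and a short sketch makes its behavior explicit. This is a minimal illustration only, not clinical guidance; the function name and example weights are invented here, while the 10 g and 200 mg/kg figures come from the paragraph above.

```python
def single_dose_toxicity_threshold_g(weight_kg: float) -> float:
    """Return the lower of 10 g or 200 mg/kg, the adult single-dose
    figures quoted above as having a reasonable likelihood of toxicity."""
    return min(10.0, 0.200 * weight_kg)  # 200 mg/kg expressed in grams

# Below 50 kg the per-weight limit dominates; above it, the 10 g cap does.
print(single_dose_toxicity_threshold_g(40.0))  # 8.0 g
print(single_dose_toxicity_threshold_g(80.0))  # 10.0 g
```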
Risk factors
A number of factors can potentially increase the risk of developing paracetamol toxicity. Chronic excessive alcohol consumption can induce CYP2E1, thus increasing the potential toxicity of paracetamol. In one study of patients with liver injury, 64% reported alcohol intakes of greater than 80 grams a day, while 35% took 60 grams a day or less. Whether chronic alcoholism should be considered a risk factor has been debated by some clinical toxicologists. For chronic alcohol users, acute alcohol ingestion at the time of a paracetamol overdose may have a protective effect. For non-chronic alcohol users, acute alcohol consumption had no protective effect.
Fasting is a risk factor, possibly because of depletion of liver glutathione reserves. The concomitant use of the CYP2E1 inducer isoniazid increases the risk of hepatotoxicity, though whether 2E1 induction is related to the hepatotoxicity in this case is unclear. Concomitant use of other drugs that induce CYP enzymes, such as antiepileptics including carbamazepine, phenytoin, and barbiturates, have also been reported as risk factors.
Pathophysiology
When taken in normal therapeutic doses, paracetamol has been shown to be safe. Following a therapeutic dose, it is mostly converted to nontoxic metabolites via Phase II metabolism by conjugation with sulfate and glucuronide, with a small portion being oxidized via the cytochrome P450 enzyme system. Cytochromes P450 2E1 and 3A4 convert approximately 5% of paracetamol to a highly reactive intermediary metabolite, N-acetyl-p-benzoquinone imine (NAPQI). Under normal conditions, NAPQI is detoxified by conjugation with glutathione to form cysteine and mercapturic acid conjugates. In cases of paracetamol overdose, the sulfate and glucuronide pathways become saturated, and more paracetamol is shunted to the cytochrome P450 system to produce NAPQI. As a result, hepatocellular supplies of glutathione become depleted, as the demand for glutathione is higher than its regeneration. NAPQI therefore remains in its toxic form in the liver and reacts with cellular membrane molecules, resulting in widespread hepatocyte damage and death, leading to acute liver necrosis. In animal studies, the liver's stores of glutathione must be depleted to less than 70% of normal levels before liver toxicity occurs.
Diagnosis
A person's history of taking paracetamol is somewhat accurate for the diagnosis. The most effective way to diagnose poisoning is by obtaining a blood paracetamol level. A drug nomogram developed in 1975, called the Rumack-Matthew nomogram, estimates the risk of toxicity based on the serum concentration of paracetamol at a given number of hours after ingestion. To determine the risk of potential hepatotoxicity, the paracetamol level is traced along the nomogram. Use of a timed serum paracetamol level plotted on the nomogram appears to be the best marker indicating the potential for liver injury. A paracetamol level drawn in the first four hours after ingestion may underestimate the amount in the system because paracetamol may still be in the process of being absorbed from the gastrointestinal tract. Therefore, a serum level taken before 4 hours is not recommended. Clinical or biochemical evidence of liver toxicity may develop in one to four days, although, in severe cases, it may be evident in 12 hours. Right-upper-quadrant tenderness may be present and can aid in diagnosis. Laboratory studies may show evidence of liver necrosis with elevated AST, ALT, bilirubin, and prolonged coagulation times, particularly an elevated prothrombin time. After paracetamol overdose, when AST and ALT exceed 1000 IU/L, paracetamol-induced hepatotoxicity can be diagnosed. In some cases, the AST and ALT levels can exceed 10,000 IU/L.
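The nomogram itself is a published chart, but its treatment line is commonly parameterized as 150 mg/L at 4 hours post-ingestion, halving every 4 hours thereafter. Assuming that parameterization (the constants and function names below are this sketch's assumptions, and actual management follows the published nomogram and local guidelines), the level-versus-time comparison looks like this:

```python
def treatment_line_mg_per_L(hours_post_ingestion: float) -> float:
    """Approximate Rumack-Matthew treatment line: 150 mg/L at 4 h,
    declining with a 4-hour half-life; interpretable only 4-24 h."""
    if not (4.0 <= hours_post_ingestion <= 24.0):
        raise ValueError("nomogram applies only 4-24 hours post-ingestion")
    return 150.0 * 2.0 ** (-(hours_post_ingestion - 4.0) / 4.0)

def above_line(level_mg_per_L: float, hours_post_ingestion: float) -> bool:
    """True if the timed level plots on or above the treatment line."""
    return level_mg_per_L >= treatment_line_mg_per_L(hours_post_ingestion)

print(treatment_line_mg_per_L(8.0))  # 75.0 mg/L (one half-life past 4 h)
print(above_line(90.0, 8.0))         # True -> level of concern
```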
Detection in body fluids
Paracetamol may be quantified in blood, plasma, or urine as a diagnostic tool in clinical poisoning situations or to aid in the medicolegal investigation of suspicious deaths. The concentration in serum after a typical dose of paracetamol usually peaks below 30 mg/L, which equals 200 μmol/L. Levels of 30–300 mg/L (200–2000 μmol/L) are often observed in overdose patients. Postmortem blood levels have ranged from 50 to 400 mg/L in persons dying due to acute overdosage. Automated colorimetric techniques, gas chromatography and liquid chromatography are currently in use for the laboratory analysis of the drug in physiological specimens.
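The mg/L-to-μmol/L equivalence quoted above is a straightforward molar-mass conversion; paracetamol (C8H9NO2) has a molar mass of about 151.16 g/mol. A minimal sketch (the constant and function name are this example's own):

```python
PARACETAMOL_MOLAR_MASS_G_PER_MOL = 151.16  # acetaminophen, C8H9NO2

def mg_per_L_to_umol_per_L(mg_per_L: float) -> float:
    """Convert a mass concentration (mg/L) to a molar one (umol/L)."""
    return mg_per_L / PARACETAMOL_MOLAR_MASS_G_PER_MOL * 1000.0

print(round(mg_per_L_to_umol_per_L(30.0)))   # ~198, i.e. the ~200 umol/L cited above
print(round(mg_per_L_to_umol_per_L(300.0)))  # ~1985, upper end of the overdose range
```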
Prevention
Limitation of availability
Limiting the availability of paracetamol tablets has been attempted in some countries. In the UK, sales of over-the-counter paracetamol are restricted to packs of 32 × 500 mg tablets in pharmacies, and 16 × 500 mg tablets in non-pharmacy outlets. Pharmacists may provide up to 100 tablets for those with chronic conditions at the pharmacist's discretion. In Ireland, the limits are 24 and 12 tablets, respectively. Subsequent study suggests that the reduced availability in large numbers had a significant effect in reducing poisoning deaths from paracetamol overdose. One suggested method of prevention is to make paracetamol a prescription-only medicine, or to remove it entirely from the market. However, overdose is a relatively minor problem; for example, 0.08% of the UK population (over 50 thousand people) present with paracetamol overdose each year. In contrast, paracetamol is a safe and effective medication that is taken without complications by millions of people. In addition, alternative pain relief medications such as aspirin are more toxic in overdose, whereas non-steroidal anti-inflammatory drugs are associated with more adverse effects following normal use.
Combination with other agents
One strategy for reducing harm done by acetaminophen overdoses is selling paracetamol pre-combined in tablets either with an emetic or an antidote. Paradote was a tablet sold in the UK which combined 500 mg paracetamol with 100 mg methionine, an amino acid formerly used in the treatment of paracetamol overdose.
There have been no studies so far on the effectiveness of paracetamol when given in combination with its most commonly used antidote, acetylcysteine. Calcitriol, the active metabolite of vitamin D3, appears to be a catalyst for glutathione production. Calcitriol was found to increase glutathione levels in rat astrocyte primary cultures on average by 42%, increasing glutathione protein concentrations from 29 nmol/mg to 41 nmol/mg, 24 and 48 hours after administration; it continued to have an influence on glutathione levels 96 hours after administration. It has been proposed that co-administration of calcitriol, via injection, may improve treatment outcomes.
Paracetamol replacements
Paracetamol ester prodrug with L-pyroglutamic acid (PCA), a biosynthetic precursor of glutathione, has been synthesized to reduce paracetamol hepatotoxicity and improve bioavailability. The toxicological studies of different paracetamol esters show that L-5-oxo-pyrrolidine-2-paracetamol carboxylate reduces toxicity after administration of an overdose of paracetamol to mice. The liver glutathione values in mice given an intraperitoneal injection of the ester are superimposable on the GSH levels recorded in the untreated control group of mice. The group of mice treated with an equivalent dose of paracetamol showed a significant 35% decrease in glutathione (p<0.01 vs the untreated control group). The oral LD50 was found to be greater than 2000 mg/kg, whereas the intraperitoneal LD50 was 1900 mg/kg. These results, taken together with the good hydrolysis and bioavailability data, show that this ester is a potential candidate as a prodrug of paracetamol.
Treatment
Gastric decontamination
In adults, the initial treatment for paracetamol overdose is gastrointestinal decontamination. Paracetamol absorption from the gastrointestinal tract is complete within two hours under normal circumstances, so decontamination is most helpful if performed within this timeframe. Gastric lavage, better known as stomach pumping, may be considered if the amount ingested is potentially life-threatening and the procedure can be performed within 60 minutes of ingestion. Activated charcoal is the most common gastrointestinal decontamination procedure as it adsorbs paracetamol, reducing its gastrointestinal absorption. Administering activated charcoal also poses less risk of aspiration than gastric lavage. It appears that the most benefit from activated charcoal is gained if it is given within 30 minutes to two hours of ingestion. Administering activated charcoal later than 2 hours can be considered in patients that may have delayed gastric emptying due to co-ingested drugs or following ingestion of sustained- or delayed-release paracetamol preparations. Activated charcoal should also be administered if co-ingested drugs warrant decontamination. There was reluctance to give activated charcoal in paracetamol overdose, because of the concern that it may also adsorb the oral antidote acetylcysteine. Studies have shown that 39% less acetylcysteine is absorbed into the body when they are administered together. There are conflicting recommendations regarding whether to change the dosing of oral acetylcysteine after the administration of activated charcoal, and even whether the dosing of acetylcysteine needs to be altered at all. Intravenous acetylcysteine has no interaction with activated charcoal.
Inducing vomiting with syrup of ipecac has no role in paracetamol overdose because the vomiting it induces delays the effective administration of activated charcoal and oral acetylcysteine. Liver injury is extremely rare after acute accidental ingestion in children under 6 years of age. Children with accidental exposures do not require gastrointestinal decontamination with either gastric lavage, activated charcoal, or syrup of ipecac.
Acetylcysteine
Acetylcysteine, also called N-acetylcysteine or NAC, works to reduce paracetamol toxicity by replenishing body stores of the antioxidant glutathione. Glutathione reacts with the toxic NAPQI metabolite so that it does not damage cells and can be safely excreted. NAC was usually given following a treatment nomogram (one for patients with risk factors, and one for those without) but the use of the nomogram is no longer recommended as the evidence base to support the use of risk factors was poor and inconsistent and many of the risk factors are imprecise and difficult to determine with sufficient certainty in clinical practice. Cysteamine and methionine have also been used to prevent hepatotoxicity, although studies show that both are associated with more adverse effects than acetylcysteine. Additionally, acetylcysteine has been shown to be a more effective antidote, particularly in patients presenting greater than 8 hours post-ingestion and for those who present with liver failure symptoms. If the person presents less than eight hours after paracetamol overdose, then acetylcysteine significantly reduces the risk of serious hepatotoxicity and guarantees survival. If acetylcysteine is started more than 8 hours after ingestion, there is a sharp decline in its effectiveness because the cascade of toxic events in the liver has already begun, and the risk of acute liver necrosis and death increases dramatically. Although acetylcysteine is most effective if given early, it still has beneficial effects if given as late as 48 hours after ingestion. If the person presents more than eight hours after the paracetamol overdose, then activated charcoal is not useful, and acetylcysteine is started immediately. In earlier presentations, charcoal can be given when the patient arrives and acetylcysteine is initiated while waiting for the paracetamol level results to return from the laboratory. In United States practice, intravenous (IV) and oral administration are considered to be equally effective and safe if given within 8 hours of ingestion. However, IV is the only recommended route in Australasian and British practice. Oral acetylcysteine is given as a 140 mg/kg loading dose followed by 70 mg/kg every four hours for 17 more doses, and if the patient vomits within 1 hour of a dose, the dose must be repeated. Oral acetylcysteine may be poorly tolerated due to its unpleasant taste, odor, and its tendency to cause nausea and vomiting. If repeated doses of charcoal are indicated because of another ingested drug, then subsequent doses of charcoal and acetylcysteine should be staggered. Intravenous acetylcysteine is given as a continuous infusion over 20 hours for a total dose of 300 mg/kg. Recommended administration involves infusion of a 150 mg/kg loading dose over 15 to 60 minutes, followed by a 50 mg/kg infusion over four hours; the last 100 mg/kg are infused over the remaining 16 hours of the protocol. Intravenous acetylcysteine has the advantage of shortening hospital stay, increasing both doctor and patient convenience, and allowing administration of activated charcoal to reduce absorption of both the paracetamol and any co-ingested drugs without concerns about interference with oral acetylcysteine. Intravenous dosing varies with weight, specifically in children. 
For patients less than 20 kg, the loading dose is 150 mg/kg in 3 mL/kg diluent, administered over 60 minutes; the second dose is 50 mg/kg in 7 mL/kg diluent over 4 hours; and the third and final dose is 100 mg/kg in 14 mL/kg diluent over 16 hours. The most common adverse effect of acetylcysteine treatment is an anaphylactoid reaction, usually manifested by rash, wheeze, or mild hypotension. Adverse reactions are more common in people treated with IV acetylcysteine, occurring in up to 20% of patients. Anaphylactoid reactions are more likely to occur with the first infusion (the loading dose). Rarely, severe life-threatening reactions may occur in predisposed individuals, such as patients with asthma or atopic dermatitis, and may be characterized by respiratory distress, facial swelling, and even death. If an anaphylactoid reaction occurs, the acetylcysteine is temporarily halted or slowed and antihistamines and other supportive care are administered. For example, a nebulised beta-agonist like salbutamol may be indicated in the event of significant bronchospasm (or prophylactically in patients with a history of bronchospasm secondary to acetylcysteine). It is also important to closely monitor fluids and electrolytes.
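Because every dose in the regimens above is weight-proportional, they reduce to a short calculation. The sketch below is an illustration of that arithmetic only, not a dosing reference; the function names are invented, and all constants are taken from the regimens described in the two preceding paragraphs.

```python
def oral_nac_doses_mg(weight_kg: float) -> list[float]:
    """Oral regimen described above: 140 mg/kg load, then 70 mg/kg
    every 4 hours for 17 further doses (18 doses in total)."""
    return [140.0 * weight_kg] + [70.0 * weight_kg] * 17

def iv_nac_bags_mg(weight_kg: float) -> dict[str, float]:
    """Three-bag IV regimen described above: 150/50/100 mg/kg,
    i.e. 300 mg/kg in total over roughly 20 hours."""
    return {
        "bag 1 (over 15-60 min)": 150.0 * weight_kg,
        "bag 2 (over 4 h)": 50.0 * weight_kg,
        "bag 3 (over 16 h)": 100.0 * weight_kg,
    }

print(iv_nac_bags_mg(70.0))  # 10500 / 3500 / 7000 mg for a 70 kg adult
```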
Liver transplant
In people who develop acute liver failure or who are otherwise expected to die from liver failure, the mainstay of management is liver transplantation. Liver transplants are performed in specialist centers. The most commonly used criteria for liver transplant were developed by physicians at King's College Hospital in London. Patients are recommended for transplant if they have an arterial blood pH less than 7.3 after fluid resuscitation, or if they have Grade III or IV encephalopathy, a prothrombin time greater than 100 seconds, and a serum creatinine greater than 300 μmol/L in a 24-hour period. Other forms of liver support have been used, including partial liver transplants. These techniques have the advantage of supporting the patient while their own liver regenerates. Once liver function returns, immunosuppressive drugs are commenced, and the person must take immunosuppressive medication for the rest of their life.
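The referral rule above is a simple disjunction: acidosis alone, or the full triad of severe encephalopathy, coagulopathy, and kidney injury. A minimal sketch, assuming exactly the thresholds quoted above (the function and parameter names are this example's own):

```python
def meets_transplant_criteria(arterial_ph: float,
                              encephalopathy_grade: int,
                              prothrombin_time_s: float,
                              creatinine_umol_per_L: float) -> bool:
    """King's College criteria for paracetamol-induced liver failure,
    as summarized in the paragraph above."""
    if arterial_ph < 7.30:  # measured after fluid resuscitation
        return True
    return (encephalopathy_grade >= 3           # Grade III or IV
            and prothrombin_time_s > 100.0
            and creatinine_umol_per_L > 300.0)

print(meets_transplant_criteria(7.25, 1, 40.0, 90.0))    # True (acidosis alone)
print(meets_transplant_criteria(7.36, 3, 120.0, 350.0))  # True (full triad)
```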
Prognosis
The mortality rate from paracetamol overdose increases two days after the ingestion, reaches a maximum on day four, and then gradually decreases. Acidosis is the most important single indicator of probable mortality and the need for transplantation. A mortality rate of 95% without transplant was reported in patients who had a documented pH less than 7.30. Other indicators of poor prognosis include chronic kidney disease (stage 3 or worse), hepatic encephalopathy, a markedly elevated prothrombin time, or an elevated blood lactic acid level (lactic acidosis). One study has shown that a factor V level less than 10% of normal indicated a poor prognosis (91% mortality), whereas a ratio of factor VIII to factor V of less than 30 indicated a good prognosis (100% survival). Patients with a poor prognosis are usually identified for likely liver transplantation. Patients that do not die are expected to fully recover and have a normal life expectancy and quality of life.
Epidemiology
Many over-the-counter and prescription-only medications contain paracetamol. Because of its wide availability paired with its comparatively high toxicity in overdose (compared to ibuprofen and aspirin), there is a much higher potential for overdose </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Acromegaly and can you explain it in depth? [/INST] Acromegaly is a disorder that results from excess growth hormone (GH) after the growth plates have closed. The initial symptom is typically enlargement of the hands and feet. There may also be an enlargement of the forehead, jaw, and nose. Other symptoms may include joint pain, thicker skin, deepening of the voice, headaches, and problems with vision. Complications of the disease may include type 2 diabetes, sleep apnea, and high blood pressure. Acromegaly is usually caused by the pituitary gland producing excess growth hormone. In more than 95% of cases the excess production is due to a benign tumor, known as a pituitary adenoma. The condition is not inherited from a person's parents. Acromegaly is rarely due to a tumor in another part of the body. Diagnosis is by measuring growth hormone after a person has consumed a glucose solution, or by measuring insulin-like growth factor I in the blood. After diagnosis, medical imaging of the pituitary is carried out to determine if an adenoma is present. If excess growth hormone is produced during childhood, the result is the condition gigantism rather than acromegaly, and it is characterized by excessive height. Treatment options include surgery to remove the tumor, medications, and radiation therapy. Surgery is usually the preferred treatment; the smaller the tumor, the more likely surgery will be curative. If surgery is contraindicated or not curative, somatostatin analogues or GH receptor antagonists may be used. Radiation therapy may be used if neither surgery nor medications are completely effective. Without treatment, life expectancy is reduced by 10 years; with treatment, life expectancy is not reduced. Acromegaly affects about 3 per 50,000 people. It is most commonly diagnosed in middle age. Males and females are affected with equal frequency. It was first described in the medical literature by Nicolas Saucerotte in 1772. The term is from the Greek ἄκρον (akron) meaning "extremity", and μέγα (mega) meaning "large".
Signs and symptoms
Features that may result from a high level of GH or expanding tumor include:
Headaches – often severe and prolonged
Soft tissue swelling visibly resulting in enlargement of the hands, feet, nose, lips, and ears, and a general thickening of the skin
Soft tissue swelling of internal organs, notably the heart, with attendant weakening of its muscularity, and the kidneys; also the vocal cords, resulting in a characteristic thick, deep voice and slowing of speech
Generalized expansion of the skull at the fontanelle
Pronounced brow protrusion, often with ocular distension (frontal bossing)
Pronounced lower jaw protrusion (prognathism) with attendant macroglossia (enlargement of the tongue) and teeth spacing
Hypertrichosis, hyperpigmentation and hyperhidrosis may occur in these people.
Skin tags
Carpal tunnel syndrome
Complications
Problems with bones and joints, including osteoarthritis, nerve compression syndrome due to bony overgrowth, and carpal tunnel syndrome
Hypertension
Diabetes mellitus
Cardiomyopathy, potentially leading to heart failure
Colorectal cancer
Sleep apnea
Thyroid nodules and thyroid cancer
Hypogonadism
Compression of the optic chiasm by the growth of pituitary adenoma leading to visual problems
Causes
Pituitary adenoma
About 98% of cases of acromegaly are due to the overproduction of growth hormone by a benign tumor of the pituitary gland called an adenoma. These tumors produce excessive growth hormone and compress surrounding brain tissues as they grow larger. In some cases, they may compress the optic nerves. Expansion of the tumor may cause headaches and visual disturbances. In addition, compression of the surrounding normal pituitary tissue can alter production of other hormones, leading to changes in menstruation and breast discharge in women and impotence in men because of reduced testosterone production. A marked variation in rates of GH production and the aggressiveness of the tumor occurs. Some adenomas grow slowly and symptoms of GH excess are often not noticed for many years. Other adenomas grow rapidly and invade surrounding brain areas or the sinuses, which are located near the pituitary. In general, younger people tend to have more aggressive tumors. Most pituitary tumors arise spontaneously and are not genetically inherited. Many pituitary tumors arise from a genetic alteration in a single pituitary cell that leads to increased cell division and tumor formation. This genetic change, or mutation, is not present at birth but is acquired during life. The mutation occurs in a gene that regulates the transmission of chemical signals within pituitary cells; it permanently switches on the signal that tells the cell to divide and secrete growth hormone. The events within the cell that cause disordered pituitary cell growth and GH oversecretion currently are the subject of intensive research. Pituitary adenomas and diffuse somatomammotroph hyperplasia may result from somatic mutations activating GNAS, which may be acquired or associated with McCune-Albright syndrome.
Other tumors
In a few people, acromegaly is caused not by pituitary tumors, but by tumors of the pancreas, lungs, and adrenal glands. These tumors also lead to an excess of GH, either because they produce GH themselves or, more frequently, because they produce GHRH (growth hormone-releasing hormone), the hormone that stimulates the pituitary to make GH. In these people, the excess GHRH can be measured in the blood and establishes that the cause of the acromegaly is not due to a pituitary defect. When these nonpituitary tumors are surgically removed, GH levels fall and the symptoms of acromegaly improve. In people with GHRH-producing, non-pituitary tumors, the pituitary still may be enlarged and may be mistaken for a tumor. Therefore, it is important that physicians carefully analyze all "pituitary tumors" removed from people with acromegaly so as to not overlook the possibility that a tumor elsewhere in the body is causing the disorder.
Diagnosis
If acromegaly is suspected, medical laboratory investigations, followed by medical imaging if the lab tests are positive, confirm or rule out the presence of this condition.
IGF-1 provides the most sensitive lab test for the diagnosis of acromegaly, and a GH suppression test following an oral glucose load, which is a very specific lab test, will confirm the diagnosis following a positive screening test for IGF-1. A single GH value is not useful in view of its pulsatility (levels in the blood vary greatly even in healthy individuals). GH levels taken 2 hours after a 75- or 100-gram glucose tolerance test are helpful in the diagnosis: GH levels are suppressed below 1 μg/L in normal people, and levels higher than this cutoff are confirmatory of acromegaly. Other pituitary hormones must be assessed to address the secretory effects of the tumor, as well as the mass effect of the tumor on the normal pituitary gland. They include thyroid stimulating hormone (TSH), gonadotropic hormones (FSH, LH), adrenocorticotropic hormone, and prolactin. An MRI of the brain focusing on the sella turcica after gadolinium administration allows for clear delineation of the pituitary and the hypothalamus and the location of the tumor. A number of other overgrowth syndromes can result in similar problems.
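The two-step logic described above (a sensitive IGF-1 screen, then a specific GH suppression test) can be written out as a small decision sketch. This is an illustration only: the 1 μg/L cutoff comes from the paragraph above, while the function name, return strings, and the boolean IGF-1 input (judged against a lab's age- and sex-adjusted reference range) are this example's assumptions.

```python
def acromegaly_workup(igf1_elevated_for_age: bool,
                      gh_nadir_after_glucose_ug_per_L: float) -> str:
    """Screen with IGF-1, then confirm with GH suppression after a
    75-100 g oral glucose load (normal suppression: nadir < 1 ug/L)."""
    if not igf1_elevated_for_age:
        return "acromegaly unlikely; no suppression test needed"
    if gh_nadir_after_glucose_ug_per_L >= 1.0:
        return "failed suppression: diagnosis confirmed; proceed to pituitary MRI"
    return "normal suppression: acromegaly not confirmed"

print(acromegaly_workup(True, 2.4))  # failed suppression -> MRI
```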
Differential diagnosis
Pseudoacromegaly is a condition with the usual acromegaloid features, but without an increase in growth hormone and IGF-1. It is frequently associated with insulin resistance. Cases have been reported due to minoxidil at an unusually high dose. It can also be caused by a selective post receptor defect of insulin signalling, leading to the impairment of metabolic, but preservation of mitogenic, signalling.
Treatment
The goals of treatment are to reduce GH production to normal levels thereby reversing or ameliorating the signs and symptoms of acromegaly, to relieve the pressure that the growing pituitary tumor exerts on the surrounding brain areas, and to preserve normal pituitary function. Currently, treatment options include surgical removal of the tumor, drug therapy, and radiation therapy of the pituitary.
Medications
Somatostatin analogues
The primary current medical treatment of acromegaly is to use somatostatin analogues – octreotide (Sandostatin) or lanreotide (Somatuline).
These somatostatin analogues are synthetic forms of a brain hormone, somatostatin, which stops GH production. The long-acting forms of these drugs must be injected every 2 to 4 weeks for effective treatment. Most people with acromegaly respond to this medication. In many people with acromegaly, GH levels fall within one hour and headaches improve within minutes after the injection. Octreotide and lanreotide are effective for long-term treatment. Octreotide and lanreotide have also been used successfully to treat people with acromegaly caused by non-pituitary tumors. Somatostatin analogues are also sometimes used to shrink large tumors before surgery. Because octreotide inhibits gastrointestinal and pancreatic function, long-term use causes digestive problems such as loose stools, nausea, and gas in one third of people. In addition, approximately 25 percent of people with acromegaly develop gallstones, which are usually asymptomatic. In some cases, octreotide treatment can cause diabetes because somatostatin and its analogues can inhibit the release of insulin. With an aggressive adenoma that is not able to be operated on, there may be a resistance to octreotide, in which case a second-generation SSA, pasireotide, may be used for tumor control. However, insulin and glucose levels should be carefully monitored, as pasireotide has been associated with hyperglycemia by reducing insulin secretion.
Dopamine agonists
For those who are unresponsive to somatostatin analogues, or for whom they are otherwise contraindicated, it is possible to treat using one of the dopamine agonists, bromocriptine or cabergoline. As tablets rather than injections, they cost considerably less. These drugs can also be used as an adjunct to somatostatin analogue therapy. They are most effective in those whose pituitary tumours cosecrete prolactin. Side effects of these dopamine agonists include gastrointestinal upset, nausea, vomiting, light-headedness when standing, and nasal congestion. These side effects can be reduced or eliminated if medication is started at a very low dose at bedtime, taken with food, and gradually increased to the full therapeutic dose. Bromocriptine lowers GH and IGF-1 levels and reduces tumor size in fewer than half of people with acromegaly. Some people report improvement in their symptoms although their GH and IGF-1 levels still are elevated.
Growth hormone receptor antagonists
The latest development in the medical treatment of acromegaly is the use of growth hormone receptor antagonists. The only available member of this family is pegvisomant (Somavert). By blocking the action of the endogenous growth hormone molecules, this compound is able to control the disease activity of acromegaly in virtually everyone with acromegaly. Pegvisomant has to be administered subcutaneously by daily injections. Combinations of long-acting somatostatin analogues and weekly injections of pegvisomant seem to be equally effective as daily injections of pegvisomant.
Surgery
Surgical removal of the pituitary tumor is usually effective in lowering growth hormone levels. Two surgical procedures are available for use. The first is endonasal transsphenoidal surgery, which involves the surgeon reaching the pituitary through an incision in the nasal cavity wall. The wall is reached by passing through the nostrils with microsurgical instruments. The second method is transsphenoidal surgery during which an incision is made into the gum beneath the upper lip. Further incisions are made to cut through the septum to reach the nasal cavity, where the pituitary is located. Endonasal transsphenoidal surgery is a less invasive procedure with a shorter recovery time than the older method of transsphenoidal surgery, and the likelihood of removing the entire tumor is greater with reduced side effects. Consequently, endonasal transsphenoidal surgery is the more common surgical choice. These procedures normally relieve the pressure on the surrounding brain regions and lead to a lowering of GH levels. Surgery is most successful in people with blood GH levels below 40 ng/ml before the operation and with pituitary tumors no larger than 10 mm in diameter. Success depends on the skill and experience of the surgeon. The success rate also depends on what level of GH is defined as a cure. The best measure of surgical success is the normalization of GH and IGF-1 levels. Ideally, GH should be less than 2 ng/ml after an oral glucose load. A review of GH levels in 1,360 people worldwide immediately after surgery revealed that 60% had random GH levels below 5 ng/ml. Complications of surgery may include cerebrospinal fluid leaks, meningitis, or damage to the surrounding normal pituitary tissue, requiring lifelong pituitary hormone replacement. Even when surgery is successful and hormone levels return to normal, people must be carefully monitored for years for possible recurrence. More commonly, hormone levels may improve, but not return completely to normal. These people may then require additional treatment, usually with medications.
Radiation therapy
Radiation therapy has been used both as a primary treatment and combined with surgery or drugs. It is usually reserved for people who have tumor remaining after surgery. These people often also receive medication to lower GH levels. Radiation therapy is given in divided doses over four to six weeks. This treatment lowers GH levels by about 50 percent over 2 to 5 years. People monitored for more than 5 years show significant further improvement. Radiation therapy causes a gradual loss of production of other pituitary hormones with time. Loss of vision and brain injury, which have been reported, are very rare complications of radiation treatments.
Selection of treatment
The initial treatment chosen should be individualized depending on the person's characteristics, such as age and tumor size. If the tumor has not yet invaded surrounding brain tissues, removal of the pituitary adenoma by an experienced neurosurgeon is usually the first choice. After surgery, a person must be monitored long-term for increasing GH levels. If surgery does not normalize hormone levels or a relapse occurs, a doctor will usually begin additional drug therapy. The current first choice is generally octreotide or lanreotide; however, bromocriptine and cabergoline are both cheaper and easier to administer. With all of these medications, long-term therapy is necessary, because their withdrawal can lead to rising GH levels and tumor re-expansion. Radiation therapy is generally used for people whose tumors are not completely removed by surgery, for people who are not good candidates for surgery because of other health problems, and for people who do not respond adequately to surgery and medication.
Prognosis
Life expectancy of people with acromegaly is dependent on how early the disease is detected. Life expectancy after the successful treatment of early disease is equal to that of the general population. Acromegaly can often go on for years before diagnosis, resulting in poorer outcome, and it is suggested that the better the growth hormone is controlled, the better the outcome. Upon successful surgical treatment, headaches and visual symptoms tend to resolve. One exception is sleep apnea, which is present in around 70% of cases, but does not tend to resolve with successful treatment of growth hormone level. While hypertension is a complication of 40% of cases, it typically responds well to regular regimens of blood pressure medication. Diabetes that occurs with acromegaly is treated with the typical medications, but successful lowering of growth hormone levels often alleviates symptoms of diabetes. Hypogonadism without gonad destruction is reversible with treatment. Acromegaly is associated with a slightly elevated risk of cancer.
Notable people
Famous people with acromegaly include:
Daniel Cajanus (1704–1749), "The Finnish Goliath". His natural-size portrait and femur are still extant. His height was estimated to have been 247 cm (8 ft 1 in).
Mary Ann Bevan (1874–1933), an English woman, who after developing acromegaly, toured the sideshow circuit as "the ugliest woman in the world".
André the Giant (born André Roussimoff, 1946–1993), French professional wrestler and actor
Salvatore Baccaro (1932–1984), Italian character actor. Active in B-movies, comedies, and horror films because of his distinctive features and natural charm
Paul Benedict (1938–2008), American actor. Best known for portraying Harry Bentley, The Jeffersons' English next-door neighbour
Big Show (born Paul Wight; 1972), American professional wrestler and actor, had his pituitary tumor removed in 1991.
Eddie Carmel, born Oded Ha-Carmeili (1936–1972), Israeli-born entertainer with gigantism and acromegaly, popularly known as "The Jewish Giant"
Ted Cassidy (1932–1979), American actor. Best known for portraying Lurch in the TV sitcom The Addams Family
Rondo Hatton (1894–1946), American journalist and actor. A Hollywood favorite in B-movie horror films of the 1930s and 1940s. Hatton's disfigurement, due to acromegaly, developed over time, beginning during his service in World War I.
Sultan Kösen, the world's tallest man
Maximinus Thrax, Roman emperor (c. 173, reigned 235–238). Descriptions, as well as depictions, indicate acromegaly, though remains of his body are yet to be found.
The Great Khali (born Dalip Singh Rana), Indian professional wrestler, is best known for his tenure with WWE. He had his pituitary tumor removed in 2012 at age 39.
Primo Carnera (1906–1967), nicknamed the Ambling Alp, was an Italian professional boxer and wrestler who reigned as the boxing World Heavyweight Champion from 29 June 1933 to 14 June 1934.
Maurice Tillet (1903–1954), Russian-born French professional wrestler, is better known by his ring name, the French Angel.
Richard Kiel (1939–2014), American actor, best known as "Jaws" in two James Bond films and as Mr. Larson in Happy Gilmore
Pío Pico, the last Mexican Governor of California (1801–1894), manifested acromegaly without gigantism between at least 1847 and 1858. Some time after 1858, signs of the growth hormone-producing tumor disappeared along with all the secondary effects the tumor had caused in him. He looked normal in his 90s. His remarkable recovery is likely an example of spontaneous selective pituitary tumor apoplexy.
Earl Nightingale (March 12, 1921 – March 25, 1989) was an American radio speaker and author, dealing mostly with the subjects of human character development, motivation, and meaningful existence </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Actinic keratosis and can you explain it in depth? [/INST] Actinic keratosis (AK), sometimes called solar keratosis or senile keratosis, is a pre-cancerous area of thick, scaly, or crusty skin. Actinic keratosis is a disorder (-osis) of epidermal keratinocytes that is induced by ultraviolet (UV) light exposure (actin-). These growths are more common in fair-skinned people and those who are frequently in the sun. They are believed to form when skin gets damaged by UV radiation from the sun or indoor tanning beds, usually over the course of decades. Given their pre-cancerous nature, if left untreated, they may turn into a type of skin cancer called squamous cell carcinoma. Untreated lesions have up to a 20% risk of progression to squamous cell carcinoma, so treatment by a dermatologist is recommended.
Actinic keratoses characteristically appear as thick, scaly, or crusty areas that often feel dry or rough. Size commonly ranges between 2 and 6 millimeters, but they can grow to be several centimeters in diameter. Notably, AKs are often felt before they are seen, and the texture is sometimes compared to sandpaper. They may be dark, light, tan, pink, red, a combination of all these, or have the same color as the surrounding skin.
Given the causal relationship between sun exposure and AK growth, they often appear on a background of sun-damaged skin and in areas that are commonly sun-exposed, such as the face, ears, neck, scalp, chest, backs of hands, forearms, or lips. Because sun exposure is rarely limited to a small area, most people who have an AK have more than one. If clinical examination findings are not typical of AK and the possibility of in situ or invasive squamous cell carcinoma (SCC) cannot be excluded based on clinical examination alone, a biopsy or excision can be considered for definitive diagnosis by histologic examination of the lesional tissue. Multiple treatment options for AK are available. Photodynamic therapy (PDT) is one option for the treatment of numerous AK lesions in a region of the skin, termed field cancerization. It involves the application of a photosensitizer to the skin followed by illumination with a strong light source. Topical creams, such as 5-fluorouracil or imiquimod, may require daily application to affected skin areas over a typical time course of weeks.
Cryotherapy is frequently used for few and well-defined lesions, but undesired skin lightening, or hypopigmentation, may occur at the treatment site. By following up with a dermatologist, AKs can be treated before they progress to skin cancer. If cancer does develop from an AK lesion, it can be caught early with close monitoring, at a time when treatment is likely to have a high cure rate.
Signs and symptoms
Actinic keratoses (AKs) most commonly present as a white, scaly plaque of variable thickness with surrounding redness; they are most notable for having a sandpaper-like texture when felt with a gloved hand. Skin near the lesion often shows evidence of solar damage characterized by notable pigmentary alterations, being yellow or pale in color with areas of hyperpigmentation; deep wrinkles, coarse texture, purpura and ecchymoses, dry skin, and scattered telangiectasias are also characteristic. Photoaging leads to an accumulation of oncogenic changes, resulting in a proliferation of mutated keratinocytes that can manifest as AKs or other neoplastic growths. With years of sun damage, it is possible to develop multiple AKs in a single area on the skin. This condition is termed field cancerization.
The lesions are usually asymptomatic, but can be tender, itch, bleed, or produce a stinging or burning sensation. AKs are typically graded in accordance with their clinical presentation: Grade I (easily visible, slightly palpable), Grade II (easily visible, palpable), and Grade III (frankly visible and hyperkeratotic).
Variants
Actinic keratoses can have various clinical presentations, often characterized as follows:
Classic (or common): Classic AKs present as white, scaly macules, papules, or plaques of various thickness, often with surrounding erythema. They are usually 2–6 mm in diameter but can sometimes reach several centimeters in diameter.
Hypertrophic (or hyperkeratotic): Hypertrophic AKs (HAKs) appears as a thicker scale or rough papule or plaque, often adherent to an erythematous base. Classic AKs can progress to become HAKs, and HAKs themselves can be difficult to distinguish from malignant lesions.
Atrophic: Atrophic AKs lack an overlying scale, and therefore appear as a nonpalpable change in color (or macule). They are often smooth and red and are less than 10 mm in diameter.
AK with cutaneous horn: A cutaneous horn is a keratotic projection with its height at least one-half of its diameter, often conical in shape. They can be seen in the setting of actinic keratosis as a progression of an HAK, but are also present in other skin conditions. 38–40% of cutaneous horns represent AKs.
Pigmented AK: Pigmented AKs are rare variants that often present as macules or plaques that are tan to brown in color. They can be difficult to distinguish from a solar lentigo or lentigo maligna.
Actinic cheilitis: When an AK forms on the lip, it is called actinic cheilitis. This usually presents as a rough, scaly patch on the lip, often accompanied by the sensation of dry mouth and symptomatic splitting of the lips.
Bowenoid AK: Usually presents as a solitary, erythematous, scaly patch or plaque with well-defined borders. Bowenoid AKs are differentiated from Bowen's disease by degree of epithelial involvement as seen on histology. The presence of ulceration, nodularity, or bleeding should raise concern for malignancy. Specifically, clinical findings suggesting an increased risk of progression to squamous cell carcinoma can be recognized as "IDRBEU": I (induration/inflammation), D (diameter > 1 cm), R (rapid enlargement), B (bleeding), E (erythema), and U (ulceration). AKs are usually diagnosed clinically, but because they are difficult to clinically differentiate from squamous cell carcinoma, any concerning features warrant biopsy for diagnostic confirmation.
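Because the "IDRBEU" features amount to a simple checklist, they lend themselves to a rule-based flag. The sketch below is a hypothetical encoding of the mnemonic exactly as stated above; the function and feature names are illustrative, and it is not a validated clinical tool. Any concerning feature warrants biopsy regardless.

# Hypothetical encoding of the IDRBEU checklist described above.
# Feature names are illustrative; this is not clinical guidance.
IDRBEU_FEATURES = {
    "induration_or_inflammation",   # I
    "diameter_over_1_cm",           # D
    "rapid_enlargement",            # R
    "bleeding",                     # B
    "erythema",                     # E
    "ulceration",                   # U
}

def warrants_biopsy(observed_features):
    """Return True if any IDRBEU feature suggesting progression to
    squamous cell carcinoma is present among the observed features."""
    return bool(set(observed_features) & IDRBEU_FEATURES)

print(warrants_biopsy({"bleeding", "dry_scale"}))  # True
print(warrants_biopsy({"dry_scale"}))              # False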
Causes
The most important cause of AK formation is solar radiation, through a variety of mechanisms. Mutation of the p53 tumor suppressor gene, induced by UV radiation, has been identified as a crucial step in AK formation. This tumor suppressor gene, located on chromosome 17p13.1, allows for cell cycle arrest when DNA is damaged. Dysregulation of the p53 pathway can thus result in unchecked replication of dysplastic keratinocytes, thereby serving as a source of neoplastic growth and the development of AK, as well as possible progression from AK to skin cancer. Other molecular markers that have been associated with the development of AK include the expression of p16ink4, p14, the CD95 ligand, TNF-related apoptosis-inducing ligand (TRAIL) and TRAIL receptors, and loss of heterozygosity. Evidence also suggests that the human papillomavirus (HPV) plays a role in the development of AKs. The HPV virus has been detected in AKs, with measurable HPV viral loads (one HPV-DNA copy per less than 50 cells) measured in 40% of AKs. Similar to UV radiation, higher levels of HPV found in AKs reflect enhanced viral DNA replication. This is suspected to be related to the abnormal keratinocyte proliferation and differentiation in AKs, which facilitate an environment for HPV replication. This in turn may further stimulate the abnormal proliferation that contributes to the development of AKs and carcinogenesis.
Ultraviolet radiation
It is thought that ultraviolet (UV) radiation induces mutations in the keratinocytes of the epidermis, promoting the survival and proliferation of these atypical cells. Both UV-A and UV-B radiation have been implicated as causes of AKs. UV-A radiation (wavelength 320–400 nm) reaches more deeply into the skin and can lead to the generation of reactive oxygen species, which in turn can damage cell membranes, signaling proteins, and nucleic acids. UV-B radiation (wavelength 290–320 nm) causes thymidine dimer formation in DNA and RNA, leading to significant cellular mutations. In particular, mutations in the p53 tumor suppressor gene have been found in 30–50% of AK lesion skin samples. UV radiation has also been shown to cause elevated inflammatory markers such as arachidonic acid, as well as other molecules associated with inflammation. Eventually, over time these changes lead to the formation of AKs. Several predictors for increased AK risk from UV radiation have been identified:
Extent of sun exposure: Cumulative sun exposure leads to an increased risk for development of AKs. In one U.S. study, AKs were found in 55% of fair-skinned men with high cumulative sun exposure, and in only 19% of fair-skinned men with low cumulative sun exposure in an age-matched cohort (the percentages for women in the same study were 37% and 12%, respectively). Furthermore, the use of sunscreen (SPF 17 or higher) has been found to significantly reduce the development of AK lesions, and also promotes the regression of existing lesions.
History of sunburn: Studies show that even a single episode of painful sunburn as a child can increase an individual's risk of developing AK as an adult. Six or more painful sunburns over the course of a lifetime were found to be significantly associated with the likelihood of developing AK.
Skin pigmentation
Melanin is a pigment in the epidermis that functions to protect keratinocytes from the damage caused by UV radiation; it is found in higher concentrations in the epidermis of darker-skinned individuals, affording them protection against the development of AKs.
Fair-skinned individuals have a significantly increased risk of developing AKs when compared to olive-skinned individuals (odds ratios of 14.1 and 6.5, respectively), and AKs are uncommon in dark-skinned people of African descent. Other phenotypic features seen in fair-skinned individuals that are associated with an increased propensity to develop AKs include:
Freckling
Light hair and eye color
Propensity to sunburn
Inability to tan
Other risk factors
Immunosuppression: People with a compromised immune system from medical conditions (such as AIDS) or immunosuppressive therapy (such as chronic immunosuppression after organ transplantation, or chemotherapy for cancer) are at increased risk for developing AKs. They may develop AK at an earlier age or have an increased number of AK lesions compared to immunocompetent people.
Human papillomavirus (HPV): The role of HPV in the development of AK remains unclear, but evidence suggests that infection with the betapapillomavirus type of HPV may be associated with an increased likelihood of AK.
Genodermatoses: Certain genetic disorders interfere with DNA repair after sun exposure, thereby putting these individuals at higher risk for the development of AKs. Examples of such genetic disorders include xeroderma pigmentosum and Bloom syndrome.
Balding: AKs are commonly found on the scalps of balding men. The degree of baldness seems to be a risk factor for lesion development, as men with severe baldness were found to be seven times more likely to have 10 or more AKs when compared to men with minimal or no baldness. This observation can be explained by an absence of hair causing a larger proportion of scalp to be exposed to UV radiation if other sun protection measures are not taken.
Diagnosis
Physicians usually diagnose actinic keratosis by doing a thorough physical examination, through a combination of visual observation and touch. However, a biopsy may be necessary when the keratosis is large in diameter, thick, or bleeding, in order to make sure that the lesion is not a skin cancer. Actinic keratosis may progress to invasive squamous cell carcinoma (SCC) but both diseases can present similarly upon physical exam and can be difficult to distinguish clinically. Histological examination of the lesion from a biopsy or excision may be necessary to definitively distinguish AK from in situ or invasive SCC. In addition to SCCs, AKs can be mistaken for other cutaneous lesions including seborrheic keratoses, basal cell carcinoma, lichenoid keratosis, porokeratosis, viral warts, erosive pustular dermatosis of the scalp, pemphigus foliaceus, inflammatory dermatoses like psoriasis, or melanoma.
Biopsy
A lesion biopsy is performed if the diagnosis remains uncertain after a clinical physical exam, or if there is suspicion that the AK might have progressed to squamous cell carcinoma. The most common tissue sampling techniques include shave or punch biopsy. When only a portion of the lesion can be removed due to its size or location, the biopsy should sample tissue from the thickest area of the lesion, as SCCs are most likely to be detected in that area.
If a shave biopsy is performed, it should extend through to the level of the dermis in order to provide sufficient tissue for diagnosis; ideally, it would extend to the mid-reticular dermis. Punch biopsy usually extends to the subcutaneous fat when the entire length of the punch blade is utilized.
Histopathology
On histologic examination, actinic keratoses usually show a collection of atypical keratinocytes with hyperpigmented or pleomorphic nuclei, extending to the basal layer of the epidermis. A "flag sign" is often described, referring to alternating areas of orthokeratosis and parakeratosis. Epidermal thickening and surrounding areas of sun-damaged skin are often seen. The normal ordered maturation of the keratinocytes is disordered to varying degrees: there may be widening of the intracellular spaces, cytologic atypia such as abnormally large nuclei, and a mild chronic inflammatory infiltrate. Specific findings depend on the clinical variant and particular lesion characteristics. The seven major histopathologic variants are all characterized by atypical keratinocytic proliferation beginning in the basal layer and confined to the epidermis; they include:
Hypertrophic: Notable for marked hyperkeratosis, often with evident parakeratosis. Keratinocytes in the stratum malpighii may show a loss of polarity, pleomorphism, and anaplasia. Some irregular downward proliferation into the uppermost dermis may be observed, but does not represent frank invasion.
Atrophic: With slight hyperkeratosis and overall atrophic changes to the epidermis; the basal layer shows cells with large, hyperchromatic nuclei in close proximity to each other. These cells have been observed to proliferate into the dermis as buds and duct-like structures.
Lichenoid: Demonstrate a band-like lymphocytic infiltrate in the papillary dermis, directly beneath the dermal-epidermal junction.
Acantholytic: Intercellular clefts or lacunae in the lowermost epidermal layer that result from anaplastic changes; these produce dyskeratotic cells with disrupted intercellular bridges.
Bowenoid: This term is controversial and usually refers to full-thickness atypia, microscopically indistinguishable from Bowen's disease. However, most dermatologists and pathologists will use it in reference to tissue samples that are notable for small foci of atypia that involve the full thickness of the epidermis, in the background of a lesion that is otherwise consistent with an AK.
Epidermolytic: With granular degeneration.
Pigmented: Show pigmentation in the basal layer of the epidermis, similar to a solar lentigo.
Dermoscopy
Dermoscopy is a noninvasive technique utilizing a handheld magnifying device coupled with a transilluminating light. It is often used in the evaluation of cutaneous lesions but lacks the definitive diagnostic ability of biopsy-based tissue diagnosis. Histopathologic exam remains the gold standard.
Polarized contact dermoscopy of AKs occasionally reveals a "rosette sign," described as four white points arranged in a clover pattern, often localized to within a follicular opening. It is hypothesized that the "rosette sign" corresponds histologically to the changes of orthokeratosis and parakeratosis known as the "flag sign."
Non-pigmented AKs: linear or wavy vascular patterning, or a "strawberry pattern," described as unfocused vessels between hair follicles, with white-haloed follicular openings.
Pigmented AKs: gray to brown dots or globules surrounding follicular openings, and annular-granular rhomboidal structures; often difficult to differentiate from lentigo maligna.
Prevention
Ultraviolet radiation is believed to contribute to the development of actinic keratoses by inducing mutations in epidermal keratinocytes, leading to proliferation of atypical cells. Therefore, preventive measures for AKs are targeted at limiting exposure to solar radiation, including:
Limiting extent of sun exposure
Avoid sun exposure during noontime hours between 10:00 AM and 2:00 PM when UV light is most powerful
Minimize all time in the sun, since UV exposure occurs even in the winter and on cloudy days
Using sun protection
Applying sunscreens with SPF ratings 30 or greater that also block both UVA and UVB light, at least every 2 hours and after swimming or sweating
Applying sunscreen at least 15 minutes before going outside, as this allows time for the sunscreen to be absorbed appropriately by the skin
Wearing sun protective clothing such as hats, sunglasses, long-sleeved shirts, long skirts, or trousers
Recent research implicating human papillomavirus (HPV) in the development of AKs suggests that HPV prevention might in turn help prevent development of AKs, as UV-induced mutations and oncogenic transformation are likely facilitated in cases of active HPV infection. A key component of HPV prevention includes vaccination, and the CDC currently recommends routine vaccination in all children at age 11 or 12. There is some data that in individuals with a history of non-melanoma skin cancer, a low-fat diet can serve as a preventative measure against future actinic keratoses.
Management
There are a variety of treatment options for AK depending on the patient and the clinical characteristics of the lesion. AKs show a wide range of features, which guide decision-making in choosing treatment. As there are multiple effective treatments, patient preference and lifestyle are also factors that physicians consider when determining the management plan for actinic keratosis. Regular follow-up is advisable after any treatment to make sure no new lesions have developed and that old ones are not progressing. Adding topical treatment after a procedure may improve outcomes.
Medication
Topical medications are often recommended for areas where multiple or ill-defined AKs are present, as the medication can easily be used to treat a relatively large area.
Fluorouracil cream
Topical fluorouracil (5-FU) destroys AKs by inhibiting thymidylate synthase (blocking the methylation of deoxyuridylic acid to thymidylic acid), thereby interrupting DNA and RNA synthesis. This in turn prevents the proliferation of dysplastic cells in AK. Topical 5-FU is the most utilized treatment for AK, and often results in effective removal of the lesion. Overall, there is a 50% efficacy rate resulting in 100% clearance of AKs treated with topical 5-FU. 5-FU may be up to 90% effective in treating non-hyperkeratotic lesions. While topical 5-FU is a widely used and cost-effective treatment for AKs and is generally well tolerated, its potential side-effects can include pain, crusting, redness, and local swelling. These adverse effects can be mitigated or minimized by reducing the frequency of application or taking breaks between uses. The most commonly used application regimen consists of applying a layer of topical cream to the lesion twice a day after washing; duration of treatment is typically 2–4 weeks for thinner skin such as the cheeks and up to 8 weeks for the arms; treatment of up to 8 weeks has demonstrated a higher cure rate.
Imiquimod cream
Imiquimod is a topical immune-enhancing agent licensed for the treatment of genital warts. Imiquimod stimulates the immune system through the release and up-regulation of cytokines. Treatment with imiquimod cream applied 2–3 times per week for 12 to 16 weeks was found to result in complete resolution of AKs in 50% of people, compared to 5% of controls. The imiquimod 3.75% cream has been validated in a treatment regimen consisting of daily application to the entire face and scalp for two 2-week treatment cycles, with a complete clearance rate of 36%. While the clearance rate observed with the imiquimod 3.75% cream was lower than that observed with the 5% cream (36 and 50 percent, respectively), there are lower reported rates of adverse reactions with the 3.75% cream: 19% of individuals using imiquimod 3.75% cream reported adverse reactions including local erythema, scabbing, and flaking at the application site, while nearly a third of individuals using the 5% cream reported the same types of reactions with imiquimod treatment. However, it is ultimately difficult to compare the efficacy of the different strength creams directly, as current study data varies in methodology (e.g. duration and frequency of treatment, and amount of skin surface area covered).
Ingenol mebutate gel
Ingenol mebutate is a newer treatment for AK used in Europe and the United States. It works in two ways, first by disrupting cell membranes and mitochondria, resulting in cell death, and then by inducing antibody-dependent cellular cytotoxicity to eliminate remaining tumor cells. A 3-day treatment course with the 0.015% gel is recommended for the scalp and face, while a 2-day treatment course with the 0.05% gel is recommended for the trunk and extremities. Treatment with the 0.015% gel was found to completely clear 57% of AK, while the 0.05% gel had a 34% clearance rate. Advantages of ingenol mebutate treatment include the short duration of therapy and a low recurrence rate. Local skin reactions including pain, itching and redness can be expected during treatment with ingenol mebutate. This treatment was derived from the petty spurge, Euphorbia peplus, which has been used as a traditional remedy for keratosis.
Diclofenac sodium gel
Topical diclofenac sodium gel is a nonsteroidal anti-inflammatory drug that is thought to work in the treatment of AK through its inhibition of the arachidonic acid pathway, thereby limiting the production of prostaglandins which are thought to be involved in the development of UVB-induced skin cancers. Recommended duration of therapy is 60 to 90 days with twice daily application. Treatment of facial AK with diclofenac gel led to complete lesion resolution in 40% of cases. Common side effects include dryness, itching, redness, and rash at the site of application.
Retinoids
Topical retinoids have been studied in the treatment of AK with modest results, and the American Academy of Dermatology does not currently recommend this as first-line therapy. Treatment with adapalene gel daily for 4 weeks, and then twice daily thereafter for a total of nine months, led to a significant but modest reduction in the number of AKs compared to placebo; it demonstrated the additional advantage of improving the appearance of photodamaged skin. Topical tretinoin is ineffective as treatment for reducing the number of AKs. For secondary prevention of AK, systemic, low-dose acitretin was found to be safe, well tolerated and moderately effective in chemoprophylaxis for skin cancers in kidney transplant patients. Acitretin is a viable treatment option for organ transplant patients according to expert opinion.
Tirbanibulin
Tirbanibulin (Klisyri) was approved for medical use in the United States in December 2020, for the treatment of actinic keratosis on the face or scalp.
Procedures
Cryotherapy
Liquid nitrogen (−195.8 °C) is the most commonly used destructive therapy for the treatment of AK in the United States. It is a well-tolerated office procedure that does not require anesthesia. Cryotherapy is particularly indicated for cases where there are fewer than 15 thin, well-demarcated lesions. Caution is encouraged for thicker, more hyperkeratotic lesions, as dysplastic cells may evade treatment. Treatment with both cryotherapy and field treatment can be considered for these more advanced lesions. Cryotherapy is generally performed using an open-spray technique, wherein the AK is sprayed for several seconds. The process can be repeated multiple times in one office visit, as tolerated. Cure rates from 67 to 99 percent have been reported, depending on freeze time and lesion characteristics. Disadvantages include discomfort during and after the procedure; blistering, scarring and redness; hypo- or hyperpigmentation; and destruction of healthy tissue.
Photodynamic therapy
AKs are one of the most common dermatologic lesions for which photodynamic therapy, including topical methyl aminolevulinate (MAL) or 5-aminolevulinic acid (5-ALA), is indicated. Treatment begins with preparation of the lesion, which includes scraping away scales and crusts using a dermal curette. A thick layer of topical MAL or 5-ALA cream is applied to the lesion and a small area surrounding the lesion, which is then covered with an occlusive dressing and left for a period of time. During this time the photosensitizer accumulates in the target cells within the AK lesion. The dressings are then removed and the lesion is treated with light at a specified wavelength.
Multiple treatment regimens using different photosensitizers, incubation times, light sources, and pretreatment regimens have been studied and suggest that longer incubation times lead to higher rates of lesion clearance. Photodynamic therapy is gaining in popularity. It has been found to have a 14% higher likelihood of achieving complete lesion clearance at 3 months compared to cryotherapy, and seems to result in superior cosmetic outcomes when compared to cryotherapy or 5-FU treatment. Photodynamic therapy can be particularly effective in treating areas with multiple AK lesions.
Surgical techniques
Surgical excision: Excision should be reserved for cases when the AK is a thick, horny papule, or when deeper invasion is suspected and histopathologic diagnosis is necessary. It is a rarely utilized technique for AK treatment.
Shave excision and curettage (sometimes followed by electrodesiccation when deemed appropriate by the physician): This technique is often used for treatment of AKs, and particularly for lesions appearing more similar to squamous cell carcinoma, or those that are unresponsive to other treatments. The surface of the lesion can be scraped away using a scalpel, or the base can be removed with a curette. Tissue can be evaluated histopathologically under the microscope, but specimens acquired using this technique are not often adequate to determine whether a lesion is invasive or intraepidermal.
Dermabrasion: Dermabrasion is useful in the treatment of large areas with multiple AK lesions. The process involves using a hand-held instrument to "sand" the skin, removing the stratum corneum layer of the epidermis. Diamond fraises or wire brushes revolving at high speeds are used. The procedure can be quite painful and requires procedural sedation and anesthetic, necessitating a hospital stay. One-year clearance rates with dermabrasion treatment are as high as 96%, but diminish drastically to 54% at five years.
Laser therapy
Laser therapy using carbon dioxide (CO2) or erbium:yttrium aluminum garnet (Er:YAG) lasers is a treatment approach being utilized with increased frequency, and sometimes in conjunction with computer scanning technology. Laser therapy has not been extensively studied, but current evidence suggests it may be effective in cases involving multiple AKs refractive to medical therapy, or AKs located in cosmetically sensitive locations such as the face. The CO2 laser has been recommended for extensive actinic cheilitis that has not responded to 5-FU.
Chemical peels
A chemical peel is a topically applied agent that wounds the outermost layer of the skin, promoting organized repair, exfoliation, and eventually the development of smooth and rejuvenated skin. Multiple therapies have been studied. A medium-depth peel may effectively treat multiple non-hyperkeratotic AKs. It can be achieved with 35% to 50% trichloroacetic acid (TCA) alone or at 35% in combination with Jessner's solution in a once-daily application for a minimum of 3 weeks; 70% glycolic acid (α-hydroxy acid); or solid CO2. When compared to treatment with 5-FU, chemical peels have demonstrated similar efficacy and increased ease of use with similar morbidity. Chemical peels must be performed in a controlled clinic environment and are recommended only for individuals who are able to comply with follow-up precautions, including avoidance of sun exposure. Furthermore, they should be avoided in individuals with a history of HSV infection or keloids, and in those who are immunosuppressed or who are taking photosensitizing medications.
Prognosis
Untreated AKs follow one of three paths: they can either persist as AKs, regress, or progress to invasive skin cancer, as AK lesions are considered to be on the same continuum with squamous cell carcinoma (SCC). AK lesions that regress also have the potential to recur.
Progression: The overall risk of an AK turning into invasive cancer is low. In average-risk individuals, likelihood of an AK lesion progressing to SCC is less than 1% per year. Despite this low rate of progression, studies suggest that a full 60% of SCCs arise from pre-existing AKs, reinforcing the idea that these lesions are closely related.
Regression: Reported regression rates for single AK lesions have ranged between 15 and 63% after one year.
Recurrence: Recurrence rates after 1 year for single AK lesions that have regressed range between 15 and 53%.
Clinical course
Given the aforementioned differing clinical outcomes, it is difficult to predict the clinical course of any given actinic keratosis. AK lesions may also come and go, in a cycle of appearing on the skin, remaining for months, and then disappearing. Often they will reappear in a few weeks or months, particularly after unprotected sun exposure. Left untreated, there is a chance that the lesion will advance to become invasive. Although it is difficult to predict whether an AK will advance to become squamous cell carcinoma, it has been noted that squamous cell carcinomas originate in lesions formerly diagnosed as AKs with frequencies reported between 65 and 97%.
Epidemiology
Actinic keratosis is very common, with an estimated 14% of dermatology visits related to AKs. It is seen more often in fair-skinned individuals, and rates vary with geographical location and age. Other factors such as exposure to ultraviolet (UV) radiation, certain phenotypic features, and immunosuppression can also contribute to the development of AKs.
Men are more likely to develop AK than women, and the risk of developing AK lesions increases with age. These findings have been observed in multiple studies, with numbers from one study suggesting that approximately 5% of women ages 20–29 develop AK compared to 68% of women ages 60–69, and 10% of men ages 20–29 develop AK compared to 79% of men ages 60–69. Geography seems to play a role in the sense that individuals living in locations where they are exposed to more UV radiation throughout their lifetime have a significantly higher risk of developing AK </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Congenital adrenal hyperplasia and can you explain it in depth? [/INST] Congenital adrenal hyperplasia (CAH) is a group of autosomal recessive disorders characterized by impaired cortisol synthesis. It results from the deficiency of one of the five enzymes required for the synthesis of cortisol in the adrenal cortex. Most of these disorders involve excessive or deficient production of hormones such as glucocorticoids, mineralocorticoids, or sex steroids, and can alter development of primary or secondary sex characteristics in some affected infants, children, or adults. It is one of the most common autosomal recessive disorders in humans.
Types
CAH can occur in various forms. The clinical presentation of each form is different and depends to a large extent on the underlying enzyme defect, its precursor retention, and deficient products. Classical forms appear in infancy, and nonclassical forms appear in late childhood. The presentation in patients with classic CAH can be further subdivided into two forms, salt-wasting and simple-virilizing, depending on whether mineralocorticoid deficiency is present or absent, respectively. This subtyping is often not clinically meaningful, though, because all patients lose salt to some degree, and clinical presentations may overlap.
Classic
Salt-wasting
In 75% of cases of severe enzyme deficiency, insufficient aldosterone production can lead to salt wasting, failure to thrive, and potentially fatal hypovolemia and shock. A missed diagnosis of salt-wasting CAH is associated with an increased risk of early neonatal morbidity and death.
Simple-virilizing
The main feature of CAH in newborn females is the abnormal development of the external genitalia, which shows varying degrees of virilization. According to clinical practice guidelines, CAH evaluation should be considered for newborns found to have bilateral nonpalpable gonads. If virilizing CAH cannot be identified and treated, both boys and girls may undergo rapid postnatal growth and virilization.
Nonclassic
In addition to the salt-wasting and simple-virilizing forms of CAH diagnosed in infancy, a mild or "nonclassic" form exists, which is characterized by varying degrees of postnatal androgen excess, but is sometimes asymptomatic. The nonclassic form may be noticed in late childhood and may lead to accelerated growth, premature sexual maturation, acne, and secondary polycystic ovary syndrome. In adult males, early balding and infertility may suggest the diagnosis. The nonclassic form is characterized by mild subclinical impairment of cortisol synthesis; serum cortisol concentration is usually normal.
Signs and symptoms
The symptoms of CAH vary depending upon the form of CAH and the sex of the patient. Symptoms can include:
Due to inadequate mineralocorticoids:
Vomiting due to salt-wasting, leading to dehydration and death
Due to excess androgens:
In extreme virilization, an elongated clitoris with a phallic-like structure is seen.
Ambiguous genitalia, in some infants, occurs such that initially identifying external genitalia as "male" or "female" is difficult.
Early pubic hair and rapid growth occurs in childhood.
Precocious puberty or failure of puberty to occur (sexual infantilism: absent or delayed puberty)
Excessive facial hair, virilization, and/or menstrual irregularity in adolescence
Infertility due to anovulation
Clitoromegaly (enlarged clitoris) and a shallow vagina
Due to insufficient androgens and estrogens:
Undervirilization in XY males can result in apparently female external genitalia.
In females, hypogonadism can cause sexual infantilism or abnormal pubertal development, infertility, and other reproductive system abnormalities.
Genetics
CAH results from mutations of genes for enzymes mediating the biochemical steps of production of mineralocorticoids, glucocorticoids, or sex steroids from cholesterol by the adrenal glands (steroidogenesis). Each form of CAH is associated with a specific defective gene. The most common type (95% of cases) involves the gene for 21-hydroxylase, which is found on 6p21.3 as part of the HLA complex; the 21-hydroxylase locus consists of two highly homologous near-copies in series, an active gene (CYP21A2) and an inactive pseudogene (CYP21A1P). Mutant alleles result from recombination between the active gene and the pseudogene (gene conversion). About 5% of cases of CAH are due to defects in the gene encoding 11β-hydroxylase and consequent 11β-hydroxylase deficiency. Other, more rare forms of CAH are caused by mutations in genes, including HSD3B2 (3β-hydroxysteroid dehydrogenase 2), CYP17A1 (17α-hydroxylase/17,20-lyase), CYP11A1 (P450scc; cholesterol side-chain cleavage enzyme), STAR (steroidogenic acute regulatory protein; StAR), CYB5A (cytochrome b5), and CYPOR (cytochrome P450 oxidoreductase; POR).
Expressivity
Further variability is introduced by the degree of enzyme inefficiency produced by the specific alleles each patient has. Some alleles result in more severe degrees of enzyme inefficiency. In general, severe degrees of inefficiency produce changes in the fetus and problems in prenatal or perinatal life. Milder degrees of inefficiency are usually associated with excessive or deficient sex hormone effects in childhood or adolescence, while the mildest forms of CAH interfere with ovulation and fertility in adults.
Diagnosis
Clinical evaluation
Female infants with classic CAH have ambiguous genitalia due to exposure to high concentrations of androgens in utero. CAH due to 21-hydroxylase deficiency is the most common cause of ambiguous genitalia in genotypically normal female infants (46XX). Less severely affected females may present with early pubarche. Young women may present with symptoms of polycystic ovarian syndrome (oligomenorrhea, polycystic ovaries, hirsutism). Males with classic CAH generally have no signs of CAH at birth. Some may present with hyperpigmentation, due to co-secretion with melanocyte-stimulating hormone, and possible penile enlargement. Age of diagnosis of males with CAH varies and depends on the severity of aldosterone deficiency. Boys with salt-wasting disease present early with symptoms of hyponatremia and hypovolemia. Boys with non-salt-wasting disease present later with signs of virilization. In rarer forms of CAH, males are undermasculinized and females generally have no signs or symptoms at birth.
Laboratory studies
Genetic analysis can be helpful to confirm a diagnosis of CAH, but it is not necessary if classic clinical and laboratory findings are present.
In classic 21-hydroxylase deficiency, laboratory studies will show:
Hypoglycemia (due to hypocortisolism) - One of cortisol's many functions is to increase blood glucose levels. This occurs via a combination of several mechanisms, including (a) the stimulation of gluconeogenesis (i.e. the creation of new glucose) in the liver, (b) the promotion of glycogenolysis (i.e. the breakdown of glycogen into glucose), and (c) the prevention of glucose leaving the bloodstream via the downregulation of GLUT-4 receptors (which normally promote movement of glucose from the bloodstream into adipose and muscle tissues). Therefore, when cortisol is deficient, these processes (effectively) occur in the reverse direction. Although there are compensatory mechanisms that mitigate the impact of hypocortisolism, they are limited in their extent and the net effect is still hypoglycemia.
Hyponatremia (due to hypoaldosteronism) - Aldosterone is the end product of the renin-angiotensin-aldosterone system, which regulates blood pressure via blood pressure sensing at the juxtaglomerular apparatus of the kidney. Aldosterone normally functions to increase sodium retention (which brings water as well) in exchange for potassium. Thus, lack of aldosterone causes hyperkalemia and hyponatremia. In fact, this is a distinguishing point from 11-hydroxylase deficiency, in which one of the increased products is 11-deoxycorticosterone, which has weak mineralocorticoid activity. In 11-hydroxylase deficiency, 11-deoxycorticosterone is produced in such excess that it acts to retain sodium at the expense of potassium. It is for this reason that patients with 11-hydroxylase deficiency do not show salt wasting (although sometimes they do in infancy), and instead have hypertension/water retention and sometimes hypokalemia.
Hyperkalemia (due to hypoaldosteronism)
Elevated 17α-hydroxyprogesterone
Classic 21-hydroxylase deficiency typically causes 17α-hydroxyprogesterone blood levels >242 nmol/L. (For comparison, a full-term infant at three days of age should have <3 nmol/L. Many neonatal screening programs have specific reference ranges by weight and gestational age because high levels may be seen in premature infants without CAH.) Salt-wasting patients tend to have higher 17α-hydroxyprogesterone levels than non-salt-wasting patients. In mild cases, 17α-hydroxyprogesterone may not be elevated in a particular random blood sample, but it will rise during a corticotropin stimulation test.
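As a sketch of how the 17α-hydroxyprogesterone figures above might be compared, the following uses only the two example values given in the text (>242 nmol/L as typical of classic deficiency, <3 nmol/L as the expected level for a full-term three-day-old). Treating everything in between as "indeterminate" is an assumption made here for illustration; real screening programs use weight- and gestational-age-specific cutoffs, so none of this is clinical guidance.

def interpret_17ohp(level_nmol_l):
    """Illustrative interpretation of a 17-OHP level in a full-term
    neonate, using only the example figures quoted in the text."""
    if level_nmol_l > 242:
        return "consistent with classic 21-hydroxylase deficiency"
    if level_nmol_l < 3:
        return "within the expected range for a full-term 3-day-old"
    # Assumed bucket: intermediate values need program-specific cutoffs.
    return ("indeterminate: consider repeat testing or a "
            "corticotropin (ACTH) stimulation test")

for value in (1.5, 50.0, 300.0):
    print(f"{value:>6.1f} nmol/L -> {interpret_17ohp(value)}")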
Classification
Cortisol is an adrenal steroid hormone required for normal endocrine function. Production begins in the second month of fetal life. Poor cortisol production is a hallmark of most forms of CAH. Inefficient cortisol production results in rising levels of ACTH, because cortisol feeds back to inhibit ACTH production, so loss of cortisol results in increased ACTH. This increased ACTH stimulation induces overgrowth (hyperplasia) and overactivity of the steroid-producing cells of the adrenal cortex. The defects causing adrenal hyperplasia are congenital (i.e. present at birth).
Cortisol deficiency in CAH is usually partial, and not the most serious problem for an affected person. Synthesis of cortisol shares steps with synthesis of mineralocorticoids such as aldosterone, androgens such as testosterone, and estrogens such as estradiol. The resulting excessive or deficient production of these three classes of hormones produce the most important problems for people with CAH. Specific enzyme inefficiencies are associated with characteristic patterns of over- or underproduction of mineralocorticoids or sex steroids.
Since the 1960s, most endocrinologists have referred to the forms of CAH by the traditional names in the left column, which generally correspond to the deficient enzyme activity. As exact structures and genes for the enzymes were identified in the 1980s, most of the enzymes were found to be cytochrome P450 oxidases and were renamed to reflect this. In some cases, more than one enzyme was found to participate in a reaction, and in other cases, a single enzyme mediated in more than one reaction. Variation in different tissues and mammalian species also was found.
In all its forms, congenital adrenal hyperplasia due to 21-hydroxylase deficiency accounts for about 95% of diagnosed cases of CAH. Unless another specific enzyme is mentioned, "CAH" in nearly all contexts refers to 21-hydroxylase deficiency. (The terms "salt-wasting CAH" and "simple virilizing CAH" usually refer to subtypes of this condition.) CAH due to deficiencies of enzymes other than 21-hydroxylase presents many of the same management challenges as 21-hydroxylase deficiency, but some forms involve mineralocorticoid excess or sex steroid deficiency.
Screening
Currently, in the United States and over 40 other countries, every child born is screened for 21-hydroxylase CAH at birth. This test detects elevated levels of 17α-hydroxyprogesterone (17-OHP). Detecting high levels of 17-OHP enables early detection of CAH. Newborns detected early enough can be placed on medication and live relatively normal lives. The screening process, however, is characterized by a high false-positive rate. In one study, CAH screening had the lowest positive predictive value (111 true-positive cases among 20,647 abnormal screening results in a 2-year period, or 0.53%, compared with 6.36% for biotinidase deficiency, 1.84% for congenital hypothyroidism, 0.56% for classic galactosemia, and 2.9% for phenylketonuria). According to this estimate, 200 unaffected newborns required clinical and laboratory follow-up for every true case of CAH. In 2020, Wael AbdAlmageed from USC Information Sciences Institute and Mimi Kim from USC Keck School of Medicine led a joint study in which they used deep learning technology to analyze the facial morphology and features of CAH patients compared to controls. In this cross-sectional study of 102 patients with CAH and 144 control participants, deep learning methods achieved a mean area under the receiver operating characteristic curve of 92% for predicting CAH from facial images. Facial features distinguished patients with CAH from controls, and analyses of facial regions found that the nose and upper face were most contributory. The findings suggest that facial morphologic features, as analyzed by deep neural network techniques, can be used as a phenotypic biomarker to predict CAH.
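The positive predictive value (PPV) quoted for CAH screening can be verified directly from the raw counts in the study: PPV = true positives / all positive screens. The short calculation below reproduces the ~0.53% figure (exact division gives 0.54%; the difference is rounding) and the roughly 200 follow-ups per true case cited above. Only the two numbers from the text are used.

true_positives = 111        # confirmed CAH cases over the 2-year period
positive_screens = 20_647   # all abnormal screening results

ppv = true_positives / positive_screens
false_positives = positive_screens - true_positives
followups_per_case = false_positives / true_positives

print(f"PPV: {ppv:.2%}")    # ~0.54%, matching the ~0.53% cited
print(f"Unaffected newborns followed up per true case: "
      f"{followups_per_case:.0f}")  # ~185, i.e. roughly 200 as stated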
Treatment
Since the clinical manifestations of each form of CAH are unique and depend to a large extent on the underlying enzyme defects, their precursor retention, and deficient products, the therapeutic goal in CAH is to replenish insufficient adrenal hormones and suppress the excess precursors. Treatment of all forms of CAH may include any of:
Supplying enough glucocorticoid to reduce hyperplasia and overproduction of androgens or mineralocorticoids
Providing replacement mineralocorticoid and extra salt if the person is deficient
Providing replacement testosterone or estrogens at puberty if the person is deficient
Additional treatments to optimize growth by delaying puberty or delaying bone maturation
If CAH is caused by a deficiency of the 21-hydroxylase enzyme, then treatment aims to normalize levels of the enzyme's main substrate, 17α-hydroxyprogesterone.
Epidemiology
The incidence varies ethnically. In the United States, congenital adrenal hyperplasia in its classic form is particularly common in Native Americans and Yupik Inuit (incidence 1⁄280). Among American Caucasians, the incidence of the classic form is about 1⁄15,000. Continued treatment and wellness are enhanced by education and follow-up.
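To put the two incidence figures side by side, a quick calculation converts them to expected classic CAH cases per 100,000 births; the input fractions are exactly those quoted above, and the grouping labels are taken from the text.

incidences = {
    "Native Americans / Yupik Inuit": 1 / 280,
    "American Caucasians": 1 / 15_000,
}

for group, incidence in incidences.items():
    per_100k = incidence * 100_000
    print(f"{group}: ~{per_100k:.0f} cases per 100,000 births")

# Relative frequency between the two groups (~54-fold difference):
print(f"Ratio: ~{(1 / 280) / (1 / 15_000):.0f}x higher")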
History
Before 20th century
The Italian anatomist Luigi De Crecchio (1832–1894) provided the earliest known description of a case of probable CAH.
I propose in this narrative that it is sometimes extremely difficult and even impossible to determine sex during life. In one of the anatomical theaters of the hospital..., there arrived toward the end of January a cadaver which in life was the body of a certain Joseph Marzo... The general physiognomy was decidedly male in all respects. There were no feminine curves to the body. There was a heavy beard. There was some delicacy of structure with muscles that were not very well developed... The distribution of pubic hair was typical of the male. Perhaps the lower extremities were somewhat delicate, resembling the female, and were covered with hair... The penis was curved posteriorly and measured 6 cm, or with stretching, 10 cm. The corona was 3 cm long and 8 cm in circumference. There was an ample prepuce. There was a first grade hypospadias... There were two folds of skin coming from the top of the penis and encircling it on either side. These were somewhat loose and resembled labia majora.
De Crecchio then described the internal organs, which included a normal vagina, uterus, fallopian tubes, and ovaries.
It was of the greatest importance to determine the habits, tendencies, passions, and general character of this individual... I was determined to get as complete a story as possible, determined to get at the base of the facts and to avoid undue exaggeration which was rampant in the conversation of many of the people present at the time of the dissection.
He interviewed many people and satisfied himself that Joseph Marzo "conducted himself within the sexual area exclusively as a male", even to the point of contracting the "French disease" on two occasions. The cause of death was another in a series of episodes of vomiting and diarrhea. This account was translated by Alfred Bongiovanni from De Crecchio ("Sopra un caso di apparenzi virili in una donna". Morgagni 7:154–188, 1865) in 1963 for an article in The New England Journal of Medicine.
20th and 21st centuries
The association of excessive sex steroid effects with diseases of the adrenal cortex has been recognized for over a century. The term "adrenogenital syndrome" was applied to both sex-steroid producing tumors and severe forms of CAH for much of the 20th century, before some of the forms of CAH were understood. Congenital adrenal hyperplasia, which also dates to the first half of the century, has become the preferred term to reduce ambiguity and to emphasize the underlying pathophysiology of the disorders.
Much modern understanding and treatment of CAH comes from research conducted at Johns Hopkins Medical School in Baltimore in the middle of the 20th century. Lawson Wilkins, "founder" of pediatric endocrinology, worked out the apparently paradoxical pathophysiology: that hyperplasia and overproduction of adrenal androgens resulted from impaired capacity for making cortisol. He reported use of adrenal cortical extracts to treat children with CAH in 1950. Genital reconstructive surgery was also pioneered at Hopkins. After application of karyotyping to CAH and other intersex disorders in the 1950s, John Money, JL Hampson, and JG Hampson persuaded both the scientific community and the public that sex assignment should not be based on any single biological criterion, and gender identity was largely learned and has no simple relationship with chromosomes or hormones. See Intersex for a fuller history, including recent controversies over reconstructive surgery.
Hydrocortisone, fludrocortisone, and prednisone were available by the late 1950s </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Adrenocortical carcinoma and can you explain it in depth? [/INST] Adrenocortical carcinoma (ACC) is an aggressive cancer originating in the cortex (steroid hormone-producing tissue) of the adrenal gland.
Adrenocortical carcinoma is remarkable for the many hormonal syndromes that can occur in patients with steroid hormone-producing ("functional") tumors, including Cushing's syndrome, Conn syndrome, virilization, and feminization. Adrenocortical carcinoma has often invaded nearby tissues or metastasized to distant organs at the time of diagnosis, and the overall 5-year survival rate is about 50%. Adrenocortical carcinoma is a rare tumor, with an incidence of one to two per million population annually. It has a bimodal distribution by age, with cases clustering in children under 5 and in adults 30–40 years old. The widely used angiotensin-II-responsive steroid-producing cell line H295R was originally isolated from a tumor diagnosed as adrenocortical carcinoma.
Signs and symptoms
Adrenocortical carcinoma may present differently in children and adults. Most tumors in children are functional, and virilization is by far the most common presenting symptom, followed by Cushing's syndrome and precocious puberty. Among adults presenting with hormonal syndromes, Cushing's syndrome alone is most common, followed by mixed Cushing's syndrome and virilization (glucocorticoid and androgen overproduction). Feminization and Conn syndrome (mineralocorticoid excess) occur in less than 10% of cases. Rarely, pheochromocytoma-like hypersecretion of catecholamines has been reported in adrenocortical cancers. Nonfunctional tumors (about 40%, authorities vary) usually present with abdominal or flank pain, varicocele, and renal vein thrombosis, or they may be asymptomatic and detected incidentally. All patients with suspected ACC should be carefully evaluated for signs and symptoms of hormonal syndromes. For Cushing's syndrome (glucocorticoid excess), these include weight gain, muscle wasting, purple lines on the abdomen, a fatty "buffalo hump" on the neck, a "moon-like" face, and thinning, fragile skin. Virilism (androgen excess) is most obvious in women, and may produce excess facial and body hair, acne, enlargement of the clitoris, deepening of the voice, coarsening of facial features, and cessation of menstruation. Conn syndrome (mineralocorticoid excess) is marked by high blood pressure, which can result in headache and hypokalemia (low serum potassium, which can in turn produce muscle weakness, confusion, and palpitations), low plasma renin activity, and high serum aldosterone. Feminization (estrogen excess) is most readily noted in men, and includes breast enlargement, decreased libido, and impotence.
Pathophysiology
The main etiologic factor of ACC is unknown, although families with Li–Fraumeni syndrome, caused by an inherited inactivating mutation in TP53, have increased risk. Several genes have been shown to be recurrently mutated, including TP53, CTNNB1, MEN1, PRKAR1A, RPL22, and DAXX. The telomerase gene TERT is often amplified, while ZNRF3 and CDKN2A are often homozygously deleted. The genes h19, insulin-like growth factor II (IGF-II), and p57kip2 are important for fetal growth and development. They are located on chromosome 11p. Expression of the h19 gene is markedly reduced in both nonfunctioning and functioning adrenal cortical carcinomas, especially in tumors producing cortisol and aldosterone. There is also a loss of activity of the p57kip2 gene product in virilizing adenomas and adrenal cortical carcinomas. In contrast, IGF-II gene expression has been shown to be high in adrenal cortical carcinomas. Finally, c-myc gene expression is relatively high in neoplasms, and it is often linked to poor prognosis. Bilateral adrenocortical tumors are less common than unilateral. The majority of bilateral tumors can be distinguished according to the size and appearance of the nodules: primary pigmented nodular adrenocortical disease, which can be sporadic or part of Carney complex, and primary bilateral macronodular adrenal hyperplasia. Metastasis is most commonly to the liver and lung.
Diagnosis
Laboratory findings
Hormonal syndromes should be confirmed with laboratory testing. Laboratory findings in Cushing syndrome include increased serum glucose (blood sugar) and increased urine cortisol. Adrenal virilism is confirmed by the finding of an excess of serum androstenedione and dehydroepiandrosterone. Findings in Conn syndrome include low serum potassium, low plasma renin activity, and high serum aldosterone. Feminization is confirmed with the finding of excess serum estrogen.
Imaging
Radiological studies of the abdomen, such as CT scans and magnetic resonance imaging, are useful for identifying the site of the tumor, differentiating it from other diseases, such as adrenocortical adenoma, and determining the extent of invasion of the tumor into surrounding organs and tissues. On CT, ACC typically shows a heterogeneous appearance due to necrosis, calcifications, and haemorrhage. After contrast injection, it shows peripheral enhancement. Invasion of adjacent structures, such as the kidney, vena cava, liver, and retroperitoneal lymph nodes, is also common. On MRI, ACC shows low intensity on T1-weighted images and high T2 signal with strong heterogeneous contrast enhancement and slow washout. Haemorrhagic areas may show high T1 signal.
Pathology
Adrenal tumors are often not biopsied prior to surgery, so diagnosis is confirmed on examination of the surgical specimen by a pathologist. Grossly, ACCs are often large, with a tan-yellow cut surface, and areas of hemorrhage and necrosis </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Alcohol withdrawal syndrome and can you explain it in depth? [/INST] Alcohol withdrawal syndrome (AWS) is a set of symptoms that can occur following a reduction in alcohol use after a period of excessive use. Symptoms typically include anxiety, shakiness, sweating, vomiting, fast heart rate, and a mild fever. More severe symptoms may include seizures, hallucinations, and delirium tremens (DTs). Symptoms typically begin around six hours following the last drink, are worst at 24 to 72 hours, and improve by seven days. Alcohol withdrawal may occur in those who are alcohol dependent. This may occur following a planned or unplanned decrease in alcohol intake. The underlying mechanism involves a decreased responsiveness of GABA receptors in the brain. The withdrawal process is typically followed using the Clinical Institute Withdrawal Assessment for Alcohol scale (CIWA-Ar). The typical treatment of alcohol withdrawal is with benzodiazepines such as chlordiazepoxide or diazepam. Often the amounts given are based on a person's symptoms. Thiamine is recommended routinely. Electrolyte problems and low blood sugar should also be treated. Early treatment improves outcomes. In the Western world, about 15% of people have problems with alcoholism at some point in time. Alcohol depresses the central nervous system, slowing cerebral messaging and altering the way signals are sent and received. Progressively larger amounts of alcohol are needed to achieve the same physical and emotional results. The drinker eventually must consume alcohol just to avoid the physical cravings and withdrawal symptoms. About half of people with alcoholism will develop withdrawal symptoms upon reducing their use, with four percent developing severe symptoms. Among those with severe symptoms, up to 15% die. Symptoms of alcohol withdrawal have been described at least as early as 400 BC by Hippocrates. It is not believed to have become a widespread problem until the 1700s.
Signs and symptoms
Signs and symptoms of alcohol withdrawal occur primarily in the central nervous system. The severity of withdrawal can vary from mild symptoms such as insomnia, trembling, and anxiety to severe and life-threatening symptoms such as alcoholic hallucinosis, delirium tremens, and autonomic instability. Withdrawal usually begins 6 to 24 hours after the last drink. It can last for up to one week. To be classified as alcohol withdrawal syndrome, patients must exhibit at least two of the following symptoms: increased hand tremor, insomnia, nausea or vomiting, transient hallucinations (auditory, visual or tactile), psychomotor agitation, anxiety, generalized tonic–clonic seizures, and autonomic instability (a minimal sketch of this two-symptom check appears after the list below). The severity of symptoms is dictated by a number of factors, the most important of which are the degree of alcohol intake, the length of time the individual has been using alcohol, and previous history of alcohol withdrawal. Symptoms are also grouped together and classified:
Alcohol hallucinosis: patients have transient visual, auditory, or tactile hallucinations, but are otherwise clear.
Withdrawal seizures: seizures occur within 48 hours of alcohol cessation and occur either as a single generalized tonic–clonic seizure or as a brief episode of multiple seizures.
Delirium tremens: hyperadrenergic state, disorientation, tremors, diaphoresis, impaired attention/consciousness, and visual and auditory hallucinations.
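The "at least two of the following symptoms" rule above is simple enough to express directly. Below is a minimal, illustrative Python sketch of that counting rule (the symptom names mirror the list above; this is a toy demonstration, not a clinical tool):

def meets_aws_symptom_count(symptoms):
    # Recognized withdrawal symptoms, mirroring the criteria listed above.
    recognized = {
        "increased hand tremor", "insomnia", "nausea or vomiting",
        "transient hallucinations", "psychomotor agitation", "anxiety",
        "generalized tonic-clonic seizures", "autonomic instability",
    }
    # Classification as alcohol withdrawal syndrome requires at least two.
    return len(recognized & set(symptoms)) >= 2

print(meets_aws_symptom_count({"anxiety", "insomnia"}))  # True
print(meets_aws_symptom_count({"insomnia"}))             # False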
Progression
Six to 12 hours after the ingestion of the last drink, withdrawal symptoms such as shaking, headache, sweating, anxiety, nausea or vomiting may occur. Twelve to 24 hours after cessation, the condition may progress to such major symptoms as confusion and hallucinations (with awareness of reality), while less severe symptoms may persist and develop, including tremor, agitation, hyperactivity and insomnia. At 12 to 48 hours following the last ethanol ingestion, the possibility of generalized tonic–clonic seizures should be anticipated, occurring in 3–5% of cases. Meanwhile, none of the earlier withdrawal symptoms will typically have abated. Seizures carry the risk of major complications and death for the alcoholic. Although the person's condition usually begins to improve after 48 hours, withdrawal symptoms sometimes continue to increase in severity and advance to the most severe stage of withdrawal, delirium tremens. This occurs in 5 to 20% of patients experiencing detoxification and in one third of untreated cases, and is characterized by hallucinations that are indistinguishable from reality, severe confusion, seizures, high blood pressure, and fever, which can persist anywhere from 4 to 12 days.
Protracted withdrawal
A protracted alcohol withdrawal syndrome occurs in many alcoholics when withdrawal symptoms continue beyond the acute withdrawal stage, but usually at a subacute level of intensity and gradually decreasing in severity over time. This syndrome is sometimes referred to as the post-acute-withdrawal syndrome. Some withdrawal symptoms can linger for at least a year after discontinuation of alcohol. Symptoms can include a craving for alcohol, inability to feel pleasure from normally pleasurable things (known as anhedonia), clouding of sensorium, disorientation, nausea and vomiting, or headache. Insomnia is a common protracted withdrawal symptom that persists after the acute withdrawal phase of alcohol. Insomnia has also been found to influence relapse rate. Studies have found that magnesium or trazodone can help treat the persisting withdrawal symptom of insomnia in recovering alcoholics. Insomnia can be difficult to treat in these individuals because many of the traditional sleep aids (e.g., benzodiazepine receptor agonists and barbiturate receptor agonists) work via a GABAA receptor mechanism and are cross-tolerant with alcohol. However, trazodone is not cross-tolerant with alcohol. The acute phase of the alcohol withdrawal syndrome can occasionally be protracted. Protracted delirium tremens has been reported in the medical literature as a possible but unusual feature of alcohol withdrawal.
Pathophysiology
Chronic use of alcohol leads to changes in brain chemistry, especially in the GABAergic system. Various adaptations occur, such as changes in gene expression and downregulation of GABAA receptors. During acute alcohol withdrawal, changes also occur, such as upregulation of alpha4-containing GABAA receptors and downregulation of alpha1- and alpha3-containing GABAA receptors. Neurochemical changes occurring during alcohol withdrawal can be minimized with drugs which are used for acute detoxification. With abstinence from alcohol and cross-tolerant drugs, these changes in neurochemistry may gradually return towards normal. Adaptations to the NMDA system also occur as a result of repeated alcohol intoxication and are involved in the hyper-excitability of the central nervous system during the alcohol withdrawal syndrome. Homocysteine levels, which are elevated during chronic drinking, increase even further during the withdrawal state, and may result in excitotoxicity. Alterations in ECG (in particular an increase in QT interval) and EEG abnormalities (including abnormal quantified EEG) may occur during early withdrawal. Dysfunction of the hypothalamic–pituitary–adrenal axis and increased release of corticotropin-releasing hormone occur during both acute and protracted abstinence from alcohol and contribute to both acute and protracted withdrawal symptoms. Anhedonia/dysphoria symptoms, which can persist as part of a protracted withdrawal, may be due to dopamine underactivity.
Kindling
Kindling is a phenomenon in which repeated alcohol detoxifications lead to an increased severity of the withdrawal syndrome. For example, binge drinkers may initially experience no withdrawal symptoms, but with each period of alcohol use followed by cessation, their withdrawal symptoms intensify in severity and may eventually result in full-blown delirium tremens with convulsive seizures. Alcoholics who experience seizures during detoxification are more likely to have had previous episodes of alcohol detoxification than patients who did not have seizures during withdrawal. In addition, people with previous withdrawal syndromes are more likely to have more medically complicated alcohol withdrawal symptoms. Kindling can cause complications and may increase the risk of relapse, alcohol-related brain damage and cognitive deficits. Chronic alcohol misuse and kindling via multiple alcohol withdrawals may lead to permanent alterations in the GABAA receptors. The mechanism behind kindling is sensitization of some neuronal systems and desensitization of other neuronal systems, which leads to increasingly gross neurochemical imbalances. This in turn leads to more profound withdrawal symptoms including anxiety, convulsions and neurotoxicity. Binge drinking is associated with increased impulsivity, impairments in spatial working memory and impaired emotional learning. These adverse effects are believed to be due to the neurotoxic effects of repeated withdrawal from alcohol on aberrant neuronal plasticity and cortical damage. Repeated periods of acute intoxication followed by acute detoxification have profound effects on the brain and are associated with an increased risk of seizures as well as cognitive deficits. The effects on the brain are similar to those seen in alcoholics who have detoxified repeatedly but not as severe as in alcoholics who have no history of prior detox. Thus, the acute withdrawal syndrome appears to be the most important factor in causing damage or impairment to brain function. The brain regions most sensitive to harm from binge drinking are the amygdala and prefrontal cortex. People in adolescence who experience repeated withdrawals from binge drinking show impairments of long-term nonverbal memory. Alcoholics who have had two or more alcohol withdrawals show more frontal lobe cognitive dysfunction than those who have experienced one or no prior withdrawals. Kindling of neurons is the proposed cause of withdrawal-related cognitive damage. Kindling from repeated withdrawals leads to accumulating neuroadaptive changes. Kindling may also be the reason for cognitive damage seen in binge drinkers.
Diagnosis
Many hospitals use the Clinical Institute Withdrawal Assessment for Alcohol (CIWA) protocol in order to assess the level of withdrawal present and therefore the amount of medication needed. When overuse of alcohol is suspected but the drinking history is unclear, testing for elevated values of carbohydrate-deficient transferrin or gamma-glutamyl transferase can help make the diagnosis of alcohol overuse and dependence more clear. The CIWA has also been shortened (now called the CIWA-Ar), while retaining its validity and reliability, to help assess patients more efficiently due to the life-threatening nature of alcohol withdrawal. Other conditions that may present similarly include benzodiazepine withdrawal syndrome (a condition also mainly caused by GABAA receptor adaptation).
Treatment
Benzodiazepines are effective for the management of symptoms as well as the prevention of seizures. Certain vitamins are also an important part of the management of alcohol withdrawal syndrome. In those with severe symptoms, inpatient care is often required. In those with lesser symptoms, treatment at home may be possible, with daily visits from a health care provider.
Benzodiazepines
Benzodiazepines are the most commonly used medication for the treatment of alcohol withdrawal and are generally safe and effective in suppressing symptoms of alcohol withdrawal. This class of medication is generally effective in symptom control, but needs to be used carefully. Although benzodiazepines have a long history of successfully treating and preventing withdrawal, there is no consensus on the ideal one to use. The most commonly used agents are long-acting benzodiazepines, such as chlordiazepoxide and diazepam. These are believed to be superior to other benzodiazepines for treatment of delirium and allow for longer periods between doses. However, benzodiazepines with intermediate half-lives, like lorazepam, may be safer in people with liver problems. Benzodiazepines showed a protective benefit against alcohol withdrawal symptoms, in particular seizure, compared to other common methods of treatment. The primary debate between use of long-acting and short-acting benzodiazepines is that of ease of use. Longer-acting drugs, such as diazepam, can be administered less frequently. However, evidence does exist that "symptom-triggered regimens", such as those used when treating with lorazepam, are as safe and effective, but have decreased treatment duration and medication quantity used. Although benzodiazepines are very effective at treating alcohol withdrawal, they should be used carefully. Benzodiazepines should only be used for brief periods in alcoholics who are not already dependent on them, as they share cross tolerance with alcohol. There is a risk of replacing an alcohol addiction with benzodiazepine dependence or adding another addiction. Furthermore, disrupted GABA benzodiazepine receptor function is part of alcohol dependence, and chronic benzodiazepines may prevent full recovery from alcohol-induced mental effects. The combination of benzodiazepines and alcohol can amplify the adverse psychological effects of each other, causing enhanced depressive effects on mood and increased suicidal actions, and is generally contraindicated except for alcohol withdrawal.
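A symptom-triggered regimen is, at its core, a reassessment loop: score the patient (for example with the CIWA-Ar), dose only if the score exceeds a threshold, then wait and rescore. The Python sketch below illustrates only that control flow; the threshold of 8 and the one-hour interval are assumed illustrative values rather than figures from this text, and real protocols specify the drug, dose, and monitoring in far more detail:

def symptom_triggered_step(ciwa_ar_score, threshold=8):
    # Dose only while the withdrawal score is at or above the (assumed,
    # illustrative) threshold; otherwise hold and reassess later.
    if ciwa_ar_score >= threshold:
        return "give dose, rescore in 1 hour"
    return "hold, rescore in 1 hour"

for score in (18, 11, 6):  # a hypothetical course of falling scores
    print(score, "->", symptom_triggered_step(score))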
Vitamins
Alcoholics are often deficient in various nutrients, which can cause severe complications during alcohol withdrawal, such as the development of Wernicke syndrome. To help prevent Wernicke syndrome, these individuals should be administered a multivitamin preparation with sufficient quantities of thiamine and folic acid. During alcohol withdrawal, the prophylactic administration of thiamine, folic acid, and pyridoxine intravenously is recommended before starting any carbohydrate-containing fluids or food. These vitamins are often combined into a banana bag for intravenous administration.
Anticonvulsants
Very limited evidence indicates that topiramate or pregabalin may be useful in the treatment of alcohol withdrawal syndrome </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Alopecia areata and can you explain it in depth? [/INST] Alopecia areata, also known as spot baldness, is a condition in which hair is lost from some or all areas of the body. Often, it results in a few bald spots on the scalp, each about the size of a coin. Psychological stress and illness are possible factors in bringing on alopecia areata in individuals at risk, but in most cases there is no obvious trigger. People are generally otherwise healthy. In a few cases, all the hair on the scalp is lost (alopecia totalis), or all body hair is lost (alopecia universalis). Hair loss can be permanent or temporary. It is distinct from pattern hair loss, which is common among males.
Alopecia areata is believed to be an autoimmune disease resulting from a breach in the immune privilege of the hair follicles. Risk factors include a family history of the condition. Among identical twins, if one is affected, the other has about a 50% chance of also being affected. The underlying mechanism involves failure by the body to recognize its own cells, with subsequent immune-mediated destruction of the hair follicle. No cure for the condition is known. Efforts may be used to try to speed hair regrowth, such as cortisone injections. Sunscreen, head coverings to protect from cold and sun, and glasses, if the eyelashes are missing, are recommended. In some cases, the hair regrows, and the condition does not recur. In others, hair loss and regrowth occurs over years. Among those in whom all body hair is lost, fewer than 10% recover. About 0.15% of people are affected at any one time, and 2% of people are affected at some point in time. Onset is usually in childhood. Females are affected at higher rates than males.
Signs and symptoms
Typical first symptoms of alopecia areata are small bald patches. The underlying skin is unscarred and looks superficially normal. Although these patches can take many shapes, they are usually round or oval. Alopecia areata most often affects the scalp and beard, but may occur on any part of the body with hair. Different areas of the skin may exhibit hair loss and regrowth at the same time. The disease may also go into remission for a time, or may be permanent. It is common in children. The area of hair loss may tingle or be mildly painful. The hair tends to fall out over a short period of time, with the loss commonly occurring more on one side of the scalp than the other. Exclamation point hairs, narrower along the length of the strand closer to the base, producing a characteristic "exclamation point" appearance, are often present. These hairs are very short (3–4 mm), and can be seen surrounding the bald patches. When healthy hair is pulled out, at most a few should come out, and ripped hair should not be distributed evenly across the tugged portion of the scalp. In cases of alopecia areata, hair tends to pull out more easily along the edge of the patch, where the follicles are already being attacked by the body's immune system, than away from the patch, where they are still healthy. Nails may have pitting or trachyonychia.
Causes
Alopecia areata is thought to be a systemic autoimmune disorder in which the body attacks its own anagen hair follicles and suppresses or stops hair growth. For example, T cell lymphocytes cluster around affected follicles, causing inflammation and subsequent hair loss. Hair follicles in a normal state are thought to be kept secure from the immune system, a phenomenon called immune privilege. A breach in this immune privilege state is considered the cause of alopecia areata. A few cases of babies being born with congenital alopecia areata have been reported. Alopecia areata is not contagious. It occurs more frequently in people who have affected family members, suggesting heredity may be a factor. Strong evidence of genetic association with increased risk for alopecia areata was found by studying families with two or more affected members. This study identified at least four regions in the genome that are likely to contain these genes. In addition, alopecia areata shares genetic risk factors with other autoimmune diseases, including rheumatoid arthritis, type 1 diabetes, and celiac disease. It may be the only manifestation of celiac disease. A defect in endogenous retinoid metabolism is thought to be a key part of the pathogenesis of alopecia areata. In 2010, a genome-wide association study was completed that identified 129 single nucleotide polymorphisms that were associated with alopecia areata. The genes that were identified include those involved in
controlling the activation and proliferation of regulatory T cells, cytotoxic T lymphocyte-associated antigen 4, interleukin-2, interleukin-2 receptor A, and Eos (also known as Ikaros family zinc finger 4), as well as the human leukocyte antigen. The study also identified two genes, PRDX5 and STX17, that are expressed in the hair follicle.
Diagnosis
Alopecia areata is usually diagnosed based on clinical features.
Trichoscopy may aid in establishing the diagnosis. In alopecia areata, trichoscopy shows regularly distributed "yellow dots" (hyperkeratotic plugs), small exclamation-mark hairs, and "black dots" (destroyed hairs in the hair follicle opening). Oftentimes, however, discrete areas of hair loss surrounded by exclamation mark hairs are sufficient for clinical diagnosis of alopecia areata. Sometimes reddening of the skin (erythema) may also be present in the balding area. A biopsy is rarely needed to make the diagnosis or aid in the management of alopecia areata. Histologic findings may include peribulbar lymphocytic infiltration resembling a "swarm of bees", a shift in the anagen-to-telogen ratio towards telogen, and dilated follicular infundibula. Other helpful findings can include pigment incontinence in the hair bulb and follicular stelae. Occasionally, in inactive alopecia areata, no inflammatory infiltrates are found.
Classification
Commonly, alopecia areata involves hair loss in one or more round spots on the scalp.
Hair may also be lost more diffusely over the whole scalp, in which case the condition is called diffuse alopecia areata.
Alopecia areata monolocularis describes baldness in only one spot. It may occur anywhere on the head.
Alopecia areata multilocularis refers to multiple areas of hair loss.
Ophiasis refers to hair loss in the shape of a wave at the circumference of the head.
The disease may be limited only to the beard, in which case it is called alopecia areata barbae.
If the person loses all the hair on the scalp, the disease is then called alopecia areata totalis.
If all body hair, including pubic hair, is lost, the diagnosis then becomes alopecia areata universalis. Alopecia areata totalis and universalis are rare.
Treatment
The objective assessment of treatment efficacy is very difficult and spontaneous remission is unpredictable, but if the affected area is patched, the hair may regrow spontaneously in many cases. None of the existing therapeutic options are curative or preventive. In cases of severe hair loss, limited success has been achieved by using the corticosteroid medications clobetasol or fluocinonide, corticosteroid injections, or cream. Application of corticosteroid creams to the affected skin is less effective and takes longer to produce results. Steroid injections are commonly used in sites where the areas of hair loss on the head are small or especially where eyebrow hair has been lost. Whether they are effective is uncertain. Some other medications that have been used are minoxidil, Elocon (mometasone) ointment (steroid cream), irritants (anthralin or topical coal tar), and topical immunotherapy ciclosporin, sometimes in different combinations. Topical corticosteroids frequently fail to enter the skin deeply enough to affect the hair bulbs, which are the treatment target, and small lesions typically also regrow spontaneously. Oral corticosteroids may decrease the hair loss, but only for the period during which they are taken, and these medications can cause serious side effects. No one treatment is effective in all cases, and some individuals may show no response to any treatment. Few treatments have been well evaluated. A 2008 meta-analysis of oral and topical corticosteroids, topical ciclosporin, photodynamic therapy, and topical minoxidil showed no benefit for hair growth compared with placebo, especially with regard to long-term benefits. For more severe cases, recent studies have shown promising results with the individual use of the immunosuppressant methotrexate or its adjunct use with corticosteroids. However, as relapse of the condition may occur, long-term treatment is recommended. When alopecia areata is associated with celiac disease, treatment with a gluten-free diet allows for complete and permanent regrowth of scalp and other body hair in many people, but in others, remissions and recurrences are seen. This improvement is probably due to the normalization of the immune response as a result of gluten withdrawal from the diet. In June 2022, the U.S. Food and Drug Administration (FDA) authorized baricitinib for the treatment of severe alopecia areata.
Prognosis
In most cases that begin with a small number of patches of hair loss, hair grows back after a few months to a year. In cases with a greater number of patches, hair can either grow back or progress to alopecia areata totalis or, in rare cases, alopecia areata universalis. No loss of body function occurs, and the effects of alopecia areata are psychological (loss of self-image due to hair loss), although these can be severe </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Altitude sickness and can you explain it in depth? [/INST] Altitude sickness, the mildest form being acute mountain sickness (AMS), is the harmful effect of high altitude, caused by rapid exposure to low amounts of oxygen at high elevation. People can respond to high altitude in different ways. Symptoms may include headaches, vomiting, tiredness, confusion, trouble sleeping, and dizziness. Acute mountain sickness can progress to high-altitude pulmonary edema (HAPE) with associated shortness of breath or high-altitude cerebral edema (HACE) with associated confusion. Chronic mountain sickness may occur after long-term exposure to high altitude. Altitude sickness typically occurs only above 2,500 metres (8,000 ft), though some are affected at lower altitudes. Risk factors include a prior episode of altitude sickness, a high degree of activity, and a rapid increase in elevation. Diagnosis is based on symptoms and is supported in those who have more than a minor reduction in activities. It is recommended that at high altitude any symptoms of headache, nausea, shortness of breath, or vomiting be assumed to be altitude sickness. Prevention is by gradually increasing elevation by no more than 300 metres (1,000 ft) per day. Being physically fit does not decrease the risk. Treatment is generally by descending and sufficient fluids. Mild cases may be helped by ibuprofen, acetazolamide, or dexamethasone. Severe cases may benefit from oxygen therapy, and a portable hyperbaric bag may be used if descent is not possible. Treatment efforts, however, have not been well studied. AMS occurs in about 20% of people after rapidly going to 2,500 metres (8,000 ft) and 40% of people going to 3,000 metres (10,000 ft). While AMS and HACE occur equally frequently in males and females, HAPE occurs more often in males. The earliest description of altitude sickness is attributed to a Chinese text from around 30 BCE which describes "Big Headache Mountains", possibly referring to the Karakoram Mountains around Kilik Pass.
Signs and symptoms
People have different susceptibilities to altitude sickness; for some otherwise healthy people, acute altitude sickness can begin to appear at around 2,000 metres (6,600 ft) above sea level, such as at many mountain ski resorts, equivalent to a pressure of 80 kilopascals (0.79 atm). This is the most frequent type of altitude sickness encountered. Symptoms often manifest within ten hours of ascent and generally subside within two days, though they occasionally develop into the more serious conditions. Symptoms include headache, confusion, fatigue, stomach illness, dizziness, and sleep disturbance. Exertion may aggravate the symptoms. Those individuals with the lowest initial partial pressure of end-tidal pCO2 (the lowest concentration of carbon dioxide at the end of the respiratory cycle, a measure of a higher alveolar ventilation) and corresponding high oxygen saturation levels tend to have a lower incidence of acute mountain sickness than those with high end-tidal pCO2 and low oxygen saturation levels.
Primary symptoms
Headaches are the primary symptom used to diagnose altitude sickness, although a headache is also a symptom of dehydration. A headache occurring at an altitude above 2,400 metres (7,900 ft) – a pressure of 76 kilopascals (0.75 atm) – combined with any one or more of the following symptoms, may indicate altitude sickness:
Nausea, vomiting, or stomach illness
Fatigue or weakness
Dizziness or light-headedness
Sleep disturbance (insomnia)
Severe symptoms
Symptoms that may indicate life-threatening altitude sickness include:
Pulmonary edema (fluid in the lungs)
Symptoms similar to bronchitis
Persistent dry cough
Fever
Shortness of breath even when resting
Cerebral edema (swelling of the brain)
Headache that does not respond to analgesics
Unsteady gait
Gradual loss of consciousness
Increased nausea and vomiting
Retinal hemorrhage
The most serious symptoms of altitude sickness arise from edema (fluid accumulation in the tissues of the body). At very high altitude, humans can get either high-altitude pulmonary edema (HAPE), or high-altitude cerebral edema (HACE). The physiological cause of altitude-induced edema is not conclusively established. It is currently believed, however, that HACE is caused by local vasodilation of cerebral blood vessels in response to hypoxia, resulting in greater blood flow and, consequently, greater capillary pressures. On the other hand, HAPE may be due to general vasoconstriction in the pulmonary circulation (normally a response to regional ventilation-perfusion mismatches) which, with constant or increased cardiac output, also leads to increases in capillary pressures. For those with HACE, dexamethasone may provide temporary relief from symptoms in order to keep descending under their own power. HAPE can progress rapidly and is often fatal. Symptoms include fatigue, severe dyspnea at rest, and cough that is initially dry but may progress to produce pink, frothy sputum. Descent to lower altitudes alleviates the symptoms of HAPE.
HACE is a life-threatening condition that can lead to coma or death. Symptoms include headache, fatigue, visual impairment, bladder dysfunction, bowel dysfunction, loss of coordination, paralysis on one side of the body, and confusion. Descent to lower altitudes may save those affected by HACE.
Cause
Altitude sickness can first occur at 1,500 metres (4,900 ft), with the effects becoming severe at extreme altitudes (greater than 5,500 metres (18,000 ft)). Only brief trips above 6,000 metres (20,000 ft) are possible and supplemental oxygen is needed to avert sickness.
As altitude increases, the available amount of oxygen to sustain mental and physical alertness decreases with the overall air pressure, though the relative percentage of oxygen in air, at about 21%, remains practically unchanged up to 21,000 metres (69,000 ft). The RMS velocities of diatomic nitrogen and oxygen are very similar and thus no change occurs in the ratio of oxygen to nitrogen until stratospheric heights.
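To make that relationship concrete, the following minimal Python sketch (not from the source) estimates inspired oxygen pressure at altitude. It assumes an exponential barometric model with a scale height of roughly 8,000 m, a constant oxygen fraction of 21%, and 47 mmHg of water vapour at body temperature; these are standard textbook approximations used here purely for illustration:

import math

def inspired_po2_mmhg(altitude_m, fio2=0.21, scale_height_m=8000.0):
    # Barometric pressure falls roughly exponentially with altitude.
    barometric = 760.0 * math.exp(-altitude_m / scale_height_m)
    # Inspired air is saturated with water vapour (~47 mmHg at body temperature).
    return fio2 * (barometric - 47.0)

for h in (0, 2500, 5500, 8848):
    print(f"{h:>5} m: PiO2 ~ {inspired_po2_mmhg(h):5.1f} mmHg")

At sea level this gives roughly 150 mmHg, falling to about half that near 5,500 m, even though the oxygen fraction itself is unchanged.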
Dehydration due to the higher rate of water vapor lost from the lungs at higher altitudes may contribute to the symptoms of altitude sickness. The rate of ascent, altitude attained, amount of physical activity at high altitude, as well as individual susceptibility, are contributing factors to the onset and severity of high-altitude illness.
Altitude sickness usually occurs following a rapid ascent and can usually be prevented by ascending slowly. In most of these cases, the symptoms are temporary and usually abate as altitude acclimatization occurs. However, in extreme cases, altitude sickness can be fatal.
High altitude illness can be classified according to the altitude: high (1,500–3,500 metres (4,900–11,500 ft)), very high (3,500–5,500 metres (11,500–18,000 ft)) and extreme (above 5,500 metres (18,000 ft)).
High altitude
At high altitude, 1,500 to 3,500 metres (4,900 to 11,500 ft), the onset of physiological effects of diminished inspiratory oxygen pressure (PiO2) includes decreased exercise performance and increased ventilation (lower arterial partial pressure of carbon dioxide: PCO2). While arterial oxygen transport may be only slightly impaired, the arterial oxygen saturation (SaO2) generally stays above 90%. Altitude sickness is common between 2,400 and 4,000 metres (7,900 and 13,100 ft) because of the large number of people who ascend rapidly to these altitudes.
Very high altitude
At very high altitude, 3,500 to 5,500 metres (11,500 to 18,000 ft), maximum SaO2 falls below 90% as the arterial PO2 falls below 60 mmHg. Extreme hypoxemia may occur during exercise, during sleep, and in the presence of high altitude pulmonary edema or other acute lung conditions. Severe altitude illness occurs most commonly in this range.
Extreme altitude
Above 5,500 metres (18,000 ft), marked hypoxemia, hypocapnia, and alkalosis are characteristic of extreme altitudes. Progressive deterioration of physiologic function eventually outstrips acclimatization. As a result, no permanent human habitation occurs above 6,000 metres (20,000 ft). A period of acclimatization is necessary when ascending to extreme altitude; abrupt ascent without supplemental oxygen for other than brief exposures invites severe altitude sickness.
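The three bands above map directly onto a simple lookup. A minimal Python sketch of the classification, using only the thresholds given in this section (the boundary handling is an arbitrary illustrative choice):

def classify_altitude(metres):
    # Bands as described above: high 1,500-3,500 m, very high
    # 3,500-5,500 m, and extreme above 5,500 m.
    if metres > 5500:
        return "extreme"
    if metres >= 3500:
        return "very high"
    if metres >= 1500:
        return "high"
    return "below the usual range for altitude illness"

print(classify_altitude(4200))  # very high
print(classify_altitude(6000))  # extreme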
Mechanism
The physiology of altitude sickness centres around the alveolar gas equation; the atmospheric pressure is low, but there is still 20.9% oxygen. Water vapour still occupies the same pressure too—this means that there is less oxygen pressure available in the lungs and blood. Compare the following two applications of the equation, at sea level and at altitude:
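The comparison can be reconstructed from the standard alveolar gas equation (the numbers below are illustrative assumptions: 47 mmHg is the water vapour pressure at body temperature, the respiratory quotient RQ is taken as 0.8, and the barometric pressure and hyperventilation-lowered PaCO2 at about 5,500 m are assumed round values):

\[ P_{A\mathrm{O_2}} = F_i\mathrm{O_2}\,(P_{\mathrm{atm}} - P_{\mathrm{H_2O}}) - \frac{P_a\mathrm{CO_2}}{RQ} \]

\[ \text{Sea level:}\quad P_{A\mathrm{O_2}} \approx 0.21\,(760 - 47) - \frac{40}{0.8} \approx 100\ \text{mmHg} \]

\[ \text{About 5,500 m:}\quad P_{A\mathrm{O_2}} \approx 0.21\,(380 - 47) - \frac{30}{0.8} \approx 32\ \text{mmHg} \]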
The hypoxia leads to an increase in minute ventilation (hence low CO2 and, subsequently, low bicarbonate); Hb increases through haemoconcentration and erythrogenesis. Alkalosis shifts the haemoglobin dissociation curve to the left; 2,3-BPG increases to counter this. Cardiac output increases through an increase in heart rate. The body's response to high altitude includes the following:
↑ Erythropoietin → ↑ hematocrit and haemoglobin
↑ 2,3-BPG (allows ↑ release of O2 and a right shift on the Hb-O2 disassociation curve)
↑ kidney excretion of bicarbonate (use of acetazolamide can augment for treatment)
Chronic hypoxic pulmonary vasoconstriction (can cause right ventricular hypertrophy)
People with high-altitude sickness generally have a reduced ventilatory response to hypoxia, impaired gas exchange, fluid retention or increased sympathetic drive. There is thought to be an increase in cerebral venous volume, because of an increase in cerebral blood flow and hypocapnic cerebral vasoconstriction, causing oedema.
Diagnosis
Altitude sickness is typically self-diagnosed, since its symptoms (nausea, vomiting, headache) are consistent and can generally be traced to a rapid change in altitude or oxygen levels. However, some symptoms may be confused with dehydration. Some severe cases may require professional diagnosis, which can be assisted with multiple different methods, such as using an MRI or CT scan to check for abnormal buildup of fluids in the lung or brain.
Prevention
Ascending slowly is the best way to avoid altitude sickness. Avoiding strenuous activity such as skiing, hiking, etc. in the first 24 hours at high altitude may reduce the symptoms of AMS. Alcohol and sleeping pills are respiratory depressants, and thus slow down the acclimatization process and should be avoided. Alcohol also tends to cause dehydration and exacerbates AMS. Thus, avoiding alcohol consumption in the first 24–48 hours at a higher altitude is optimal.
Pre-acclimatization
Pre-acclimatization is the process by which the body develops tolerance to low oxygen concentrations before ascending to an altitude. It significantly reduces risk because less time has to be spent at altitude to acclimatize in the traditional way. Additionally, because less time has to be spent on the mountain, less food and supplies have to be taken up. Several commercial systems exist that use altitude tents, so called because they mimic altitude by reducing the percentage of oxygen in the air while keeping air pressure constant to the surroundings. Examples of pre-acclimatization measures include remote ischaemic preconditioning, using hypobaric air breathing in order to simulate altitude, and positive end-expiratory pressure.
Altitude acclimatization
Altitude acclimatization is the process of adjusting to decreasing oxygen levels at higher elevations, in order to avoid altitude sickness. Once above approximately 3,000 metres (10,000 ft) – a pressure of 70 kilopascals (0.69 atm) – most climbers and high-altitude trekkers take the "climb-high, sleep-low" approach. For high-altitude climbers, a typical acclimatization regimen might be to stay a few days at a base camp, climb up to a higher camp (slowly), and then return to base camp. A subsequent climb to the higher camp then includes an overnight stay. This process is then repeated a few times, each time extending the time spent at higher altitudes to let the body adjust to the oxygen level there, a process that involves the production of additional red blood cells. Once the climber has acclimatized to a given altitude, the process is repeated with camps placed at progressively higher elevations. The rule of thumb is to ascend no more than 300 m (1,000 ft) per day to sleep. That is, one can climb from 3,000 m (9,800 ft) (70 kPa or 0.69 atm) to 4,500 m (15,000 ft) (58 kPa or 0.57 atm) in one day, but one should then descend back to 3,300 m (10,800 ft) (67.5 kPa or 0.666 atm) to sleep. This process cannot safely be rushed, and this is why climbers need to spend days (or even weeks at times) acclimatizing before attempting to climb a high peak. Simulated altitude equipment such as altitude tents provide hypoxic (reduced oxygen) air, and are designed to allow partial pre-acclimation to high altitude, reducing the total time required on the mountain itself.
Altitude acclimatization is necessary for some people who move rapidly from lower altitudes to higher altitudes.
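The pacing rule described above (gain no more than about 300 m of sleeping altitude per day) turns readily into a schedule. A minimal Python sketch, illustrative only, since real itineraries add rest days and adjust to symptoms:

def sleeping_altitudes(start_m, target_m, max_daily_gain_m=300):
    # Yield each successive sleeping altitude, capped at the target.
    altitude = start_m
    while altitude < target_m:
        altitude = min(altitude + max_daily_gain_m, target_m)
        yield altitude

# Moving the sleeping altitude from 3,000 m to 4,500 m under the rule:
print(list(sleeping_altitudes(3000, 4500)))
# [3300, 3600, 3900, 4200, 4500], i.e. five days of gradual gain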
Medications
The drug acetazolamide (trade name Diamox) may help some people making a rapid ascent to sleeping altitude above 2,700 metres (9,000 ft), and it may also be effective if started early in the course of AMS. Acetazolamide can be taken before symptoms appear as a preventive measure at a dose of 125 mg twice daily. The Everest Base Camp Medical Centre cautions against its routine use as a substitute for a reasonable ascent schedule, except where rapid ascent is forced by flying into high-altitude locations or due to terrain considerations. The Centre suggests a dosage of 125 mg twice daily for prophylaxis, starting from 24 hours before ascending until a few days at the highest altitude or on descending, with 250 mg twice daily recommended for treatment of AMS. The Centers for Disease Control and Prevention (CDC) suggest the same dose for prevention: 125 mg of acetazolamide every 12 hours. Acetazolamide, a mild diuretic, works by stimulating the kidneys to secrete more bicarbonate in the urine, thereby acidifying the blood. This change in pH stimulates the respiratory center to increase the depth and frequency of respiration, thus speeding the natural acclimatization process. An undesirable side effect of acetazolamide is a reduction in aerobic endurance performance. Other minor side effects include a tingling sensation in the hands and feet. Although a sulfonamide, acetazolamide is not an antibiotic and has not been shown to cause life-threatening allergic cross-reactivity in those with a self-reported sulfonamide allergy. A dosage of 1,000 mg/day will produce a 25% decrease in performance, on top of the reduction due to high-altitude exposure. The CDC advises that dexamethasone be reserved for treatment of severe AMS and HACE during descents, and notes that nifedipine may prevent HAPE. There is insufficient evidence to determine the safety of sumatriptan and whether it may help prevent altitude sickness. Despite their popularity, antioxidant treatments have not been found to be effective medications for prevention of AMS. Interest in phosphodiesterase inhibitors such as sildenafil has been limited by the possibility that these drugs might worsen the headache of mountain sickness. A promising possible preventive for altitude sickness is myo-inositol trispyrophosphate (ITPP), which increases the amount of oxygen released by hemoglobin.
Ibuprofen, a non-steroidal anti-inflammatory and painkiller, is suggested prior to the onset of altitude sickness and can help alleviate both the headache and nausea associated with AMS. It has not been studied for the prevention of cerebral edema (swelling of the brain) associated with extreme symptoms of AMS.
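For reference, the acetazolamide figures quoted above can be captured as plain data. The Python snippet below is only an encoding of the numbers already cited in this section, not dosing advice, and the field names are arbitrary illustrative choices:

ACETAZOLAMIDE_REGIMENS = {
    "prophylaxis": {
        "dose_mg": 125,
        "frequency": "twice daily",
        "start": "24 hours before ascent",
    },
    "treatment_of_AMS": {
        "dose_mg": 250,
        "frequency": "twice daily",
    },
}

print(ACETAZOLAMIDE_REGIMENS["prophylaxis"]["dose_mg"])  # 125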
Over-the-counter herbal supplements and traditional medicines
Herbal supplements and traditional medicines are sometimes suggested to prevent high altitude sickness, including ginkgo biloba, R crenulata, minerals such as iron, antacids, and hormone-based supplements such as medroxyprogesterone and erythropoietin. Medical evidence to support the effectiveness and safety of these approaches is often contradictory or lacking. Indigenous peoples of the Americas, such as the Aymaras of the Altiplano, have for centuries chewed coca leaves to try to alleviate the symptoms of mild altitude sickness. This therapy has not yet been proven effective in a clinical study. In Chinese and Tibetan traditional medicine, an extract of the root tissue of Radix rhodiola is often taken in order to prevent the symptoms of high altitude sickness; however, no clear medical studies have confirmed the effectiveness or safety of this extract.
Oxygen enrichment
In high-altitude conditions, oxygen enrichment can counteract the hypoxia-related effects of altitude sickness. A small amount of supplemental oxygen reduces the equivalent altitude in climate-controlled rooms. At 3,400 metres (11,200 ft) (67 kPa or 0.66 atm), raising the oxygen concentration level by 5% via an oxygen concentrator and an existing ventilation system provides an effective altitude of 3,000 m (10,000 ft) (70 kPa or 0.69 atm), which is more tolerable for those unaccustomed to high altitudes. Oxygen from gas bottles or liquid containers can be applied directly via a nasal cannula or mask. Oxygen concentrators based upon pressure swing adsorption (PSA), vacuum swing adsorption (VSA), or vacuum-pressure swing adsorption (VPSA) can be used to generate the oxygen if electricity is available. Stationary oxygen concentrators typically use PSA technology, which has performance degradations at the lower barometric pressures at high altitudes </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Amblyopia and can you explain it in depth? [/INST] Amblyopia, also called lazy eye, is a disorder of sight in which the brain fails to fully process input from one eye and over time favors the other eye. It results in decreased vision in an eye that typically appears normal in other respects. Amblyopia is the most common cause of decreased vision in a single eye among children and younger adults. The cause of amblyopia can be any condition that interferes with focusing during early childhood. This can occur from poor alignment of the eyes (strabismic), an eye being irregularly shaped such that focusing is difficult, one eye being more nearsighted or farsighted than the other (refractive), or clouding of the lens of an eye (deprivational). After the underlying cause is addressed, vision is not restored right away, as the mechanism also involves the brain. Amblyopia can be difficult to detect, so vision testing is recommended for all children around the ages of four to five. Early detection improves treatment success. Glasses may be all the treatment needed for some children. If this is not sufficient, treatments which encourage or force the child to use the weaker eye are used. This is done by either using a patch or putting atropine in the stronger eye. Without treatment, amblyopia typically persists. Treatment in adulthood is usually much less effective. Amblyopia begins by the age of five. In adults, the disorder is estimated to affect 1–5% of the population. While treatment improves vision, it does not typically restore it to normal in the affected eye. Amblyopia was first described in the 1600s. The condition may make people ineligible to be pilots or police officers. The word amblyopia is from Greek ἀμβλύς amblys, meaning "blunt", and ὤψ ōps, meaning "sight".
Signs and symptoms
Many people with amblyopia, especially those who only have a mild form, are not aware they have the condition until tested at older ages, since the vision in their stronger eye is normal. People with amblyopia typically have poor stereo vision, since it requires both eyes. They further may have, on the affected eye, poor pattern recognition, poor visual acuity, and low sensitivity to contrast and motion. Amblyopia is characterized by several functional abnormalities in spatial vision, including reductions in visual acuity, contrast sensitivity function, and vernier acuity, as well as spatial distortion, abnormal spatial interactions, and impaired contour detection. In addition, individuals with amblyopia have binocular abnormalities such as impaired stereoacuity (stereoscopic acuity) and abnormal binocular summation. Also, central vision in amblyopes is more crowded than central vision in normal observers. These deficits are usually specific to the amblyopic eye. Subclinical deficits of the "better" eye have also been demonstrated. People with amblyopia also have problems of binocular vision such as limited stereoscopic depth perception and usually have difficulty seeing the three-dimensional images in hidden stereoscopic displays such as autostereograms. Perception of depth from monocular cues, such as size, perspective, and motion parallax, remains normal.
Cause
Amblyopia has three main causes:
Strabismic: by strabismus (misaligned eyes)
Refractive: by anisometropia (difference of a certain degree of nearsightedness, farsightedness, or astigmatism), or by significant amount of equal refractive error in both eyes
Deprivational: by deprivation of vision early in life by vision-obstructing disorders such as congenital cataract
Strabismus
Strabismus, sometimes also incorrectly called lazy eye, is a condition in which the eyes are misaligned. Strabismus usually results in normal vision in the preferred sighting (or "fellow") eye (the eye that the person prefers to use), but may cause abnormal vision in the deviating or strabismic eye due to the difference between the images projecting to the brain from the two eyes.
Adult-onset strabismus usually causes double vision (diplopia), since the two eyes are not fixed on the same object.
Children's brains are more neuroplastic, so they can more easily adapt by suppressing images from one of the eyes, eliminating the double vision. This plastic response of the brain interrupts the brain's normal development, resulting in the amblyopia. Recent evidence points to a cause of infantile strabismus lying with the input to the visual cortex. Those with strabismic amblyopia tend to show ocular motion deficits when reading, even when they use the nonamblyopic eye. In particular, they tend to make more saccades per line than persons with normal stereo vision, and to have a reduced reading speed, especially when reading a text with small font size. Strabismic amblyopia is treated by clarifying the visual image with glasses, or encouraging use of the amblyopic eye with an eyepatch over the dominant eye or pharmacologic penalization of the better eye. Penalization usually consists of applying atropine drops to temporarily paralyze the accommodation reflex, leading to the blurring of vision in the good eye. It also dilates the pupil. This helps to prevent the bullying and teasing associated with wearing a patch, although sometimes application of the eye drops is challenging. The ocular alignment itself may be treated with surgical or nonsurgical methods, depending on the type and severity of the strabismus.
Refractive
Refractive amblyopia may result from anisometropia (unequal refractive error between the two eyes). Anisometropia exists when there is a difference in the refractive power between the two eyes. The eye which provides the brain with a clearer image typically becomes the dominant eye. The image in the other eye is blurred, which results in abnormal development of one half of the visual system. Refractive amblyopia is usually less severe than strabismic amblyopia and is commonly missed by primary care physicians because of its less dramatic appearance and lack of obvious physical manifestation, such as with strabismus. Given that the refractive correction of anisometropia by means of spectacles typically leads to different image magnification for the two eyes, which may in turn prevent binocular vision, a refractive correction using contact lenses is to be considered. Pediatric refractive surgery is also a treatment option, in particular if conventional approaches have failed due to aniseikonia or lack of compliance or both. Frequently, amblyopia is associated with a combination of anisometropia and strabismus. In some cases, the vision between the eyes can differ to the point where one eye has twice average vision while the other eye is completely blind.
Deprivation and occlusion
Deprivation amblyopia (amblyopia ex anopsia) results when the ocular media become opaque, such as is the case with congenital cataract or corneal haziness. These opacities prevent adequate visual input from reaching the eye, and disrupt development. If not treated in a timely fashion, amblyopia may persist even after the cause of the opacity is removed. Sometimes, drooping of the eyelid (ptosis) or some other problem causes the upper eyelid to physically occlude a childs vision, which may cause amblyopia quickly. Occlusion amblyopia may be a complication of a hemangioma that blocks some or all of the eye. Other possible causes of deprivation and occlusion amblyopia include obstruction in the vitreous and aphakia. Deprivation amblyopia accounts for less than 3% of all individuals affected by amblyopia.
Pathophysiology
Amblyopia is a developmental problem in the brain, not any intrinsic, organic neurological problem in the eyeball (although organic problems can lead to amblyopia, which can continue to exist after the organic problem has been resolved by medical intervention).
The part of the brain receiving images from the affected eye is not stimulated properly and does not develop to its full visual potential. This has been confirmed by direct brain examination. David H. Hubel and Torsten Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for their work in showing the extent of the damage to ocular dominance columns produced in kittens by sufficient visual deprivation during the so-called "critical period". The maximum "critical period" in humans is from birth to two years old.
Diagnosis
Amblyopia is diagnosed by identifying low visual acuity in one or both eyes, out of proportion to the structural abnormality of the eye and excluding other visual disorders as causes for the lowered visual acuity. It can be defined as an interocular difference of two lines or more in acuity (e.g. on a Snellen chart) when the eye optics are maximally corrected. In young children, visual acuity is difficult to measure and can be estimated by observing the reactions of the patient when one eye is covered, including observing the patient's ability to follow objects with one eye.
Stereotests like the Lang stereotest are not reliable exclusion tests for amblyopia. A person who passes the Lang stereotest is unlikely to have strabismic amblyopia, but could nonetheless have refractive or deprivational amblyopia. Binocular retinal birefringence scanning may be able to identify, already in very young children, amblyopia that is associated with strabismus, microstrabismus, or reduced fixation accuracy. Diagnosis and treatment of amblyopia as early as possible is necessary to keep the vision loss to a minimum. Screening for amblyopia is recommended in all people between three and five years of age.
Treatment
Treatment of strabismic or anisometropic amblyopia consists of correcting the optical deficit (wearing the necessary spectacle prescription) and often forcing use of the amblyopic eye by patching the good eye, or instilling topical atropine in the good eye, or both. Atropine appears to result in similar outcomes to patching. If the good eye is overpatched or overpenalized when treating amblyopia, "reverse amblyopia" can result. Eye patching is usually done on a part-time schedule of about 4–6 hours a day. Treatment is continued as long as vision improves. It is not worthwhile continuing to patch for more than 6 months if no improvement continues. Deprivation amblyopia is treated by removing the opacity as soon as possible, followed by patching or penalizing the good eye to encourage the use of the amblyopic eye </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Amoebiasis and can you explain it in depth? [/INST] Amoebiasis, or amoebic dysentery, is an infection of the intestines caused by a parasitic amoeba, Entamoeba histolytica. Amoebiasis can be present with no, mild, or severe symptoms. Symptoms may include lethargy, loss of weight, colonic ulcerations, abdominal pain, diarrhea, or bloody diarrhea. Complications can include inflammation and ulceration of the colon with tissue death or perforation, which may result in peritonitis. Anemia may develop due to prolonged gastric bleeding. Cysts of Entamoeba can survive for up to a month in soil or for up to 45 minutes under fingernails. Invasion of the intestinal lining results in bloody diarrhea. If the parasite reaches the bloodstream, it can spread through the body, most frequently ending up in the liver, where it can cause amoebic liver abscesses. Liver abscesses can occur without previous diarrhea. Diagnosis is typically made by stool examination using microscopy, but it can be difficult to distinguish E. histolytica from other, harmless Entamoeba species. An increased white blood cell count may be present in severe cases. The most accurate test is finding specific antibodies in the blood, but it may remain positive following treatment. Bacterial colitis can result in similar symptoms. Prevention of amoebiasis is by improved sanitation, including separating food and water from faeces. There is no vaccine. There are two treatment options depending on the location of the infection. Amoebiasis in tissues is treated with either metronidazole, tinidazole, nitazoxanide, dehydroemetine or chloroquine, while luminal infection is treated with diloxanide furoate or iodoquinol. Effective treatment against all stages of the disease may require a combination of medications. Infections without symptoms may be treated with just one antibiotic, and infections with symptoms are treated with two antibiotics. Amoebiasis is present all over the world, though most cases occur in the developing world. About 480 million people are currently infected, with about 40 million new cases per year with significant symptoms. This results in the deaths of between 40,000 and 100,000 people a year. The first case of amoebiasis was documented in 1875, and in 1891 the disease was described in detail, resulting in the terms amoebic dysentery and amoebic liver abscess. Further evidence from the Philippines in 1913 found that upon swallowing cysts of E. histolytica volunteers developed the disease.
Signs and symptoms
Most infected people, about 90%, are asymptomatic, but this disease has the potential to become serious. It is estimated that about 40,000 to 100,000 people worldwide die annually due to amoebiasis. Infections can sometimes last for years if there is no treatment. Symptoms take from a few days to a few weeks to develop and manifest themselves, but usually it is about two to four weeks. Symptoms can range from mild diarrhea to dysentery with blood, coupled with intense abdominal pain. Complications of invasive infection might also arise, including colitis and extra-intestinal disease such as liver, lung, or brain abscesses. The blood comes from bleeding lesions created by the amoebae invading the lining of the colon. In about 10% of invasive cases the amoebae enter the bloodstream and may travel to other organs in the body. Most commonly this means the liver, as this is where blood from the intestine reaches first, but they can end up almost anywhere in the body. Onset time is highly variable, and the average asymptomatic infection persists for over a year. It is theorized that the absence of symptoms or their intensity may vary with such factors as strain of amoeba, immune response of the host, and perhaps associated bacteria and viruses. In asymptomatic infections, the amoeba lives by eating and digesting bacteria and food particles in the gut, a part of the gastrointestinal tract. It does not usually come in contact with the intestine itself due to the protective layer of mucus that lines the gut. Disease occurs when the amoeba comes in contact with the cells lining the intestine. It then secretes the same substances it uses to digest bacteria, which include enzymes that destroy cell membranes and proteins. This process can lead to penetration and digestion of human tissues, resulting first in flask-shaped ulcerations in the intestine. Entamoeba histolytica ingests the destroyed cells by phagocytosis and is often seen with red blood cells inside (a process known as erythrophagocytosis) when viewed in stool samples. Especially in Latin America, a granulomatous mass (known as an amoeboma) may form in the wall of the ascending colon or rectum due to a long-lasting immunological cellular response, and is sometimes confused with cancer. The ingestion of a single viable cyst may cause an infection. Steroid therapy can occasionally provoke severe amoebic colitis in people with any E. histolytica infection; this carries a high mortality: on average, more than 50% of those with severe colitis die.
Cause
Amoebiasis is an infection caused by the amoeba Entamoeba histolytica.
Transmission
Amoebiasis is usually transmitted by the fecal-oral route, but it can also be transmitted indirectly through contact with dirty hands or objects, as well as by anal-oral contact. Infection is spread through ingestion of the cyst form of the parasite, a semi-dormant and hardy structure found in feces. Any non-encysted amoebae, or trophozoites, die quickly after leaving the body but may also be present in stool; these are rarely the source of new infections. Since amoebiasis is transmitted through contaminated food and water, it is often endemic in regions of the world with limited modern sanitation systems, including Mexico, Central America, western South America, South Asia, and western and southern Africa. Amoebic dysentery is one form of traveler's diarrhea, although most traveler's diarrhea is bacterial or viral in origin.
Pathogenesis
Amoebiasis results from tissue destruction induced by the E. histolytica parasite.
E. histolytica causes tissue damage by three main events: direct host cell killing, inflammation, and parasite invasion.
Diagnosis
With colonoscopy it is possible to detect small ulcers of 3–5 mm, but diagnosis may be difficult, as the mucous membrane between these areas can look either healthy or inflamed.
Trophozoites may be identified at the ulcer edge or within the tissue, using immunohistochemical staining with specific anti-E. histolytica antibodies. Asymptomatic human infections are usually diagnosed by finding cysts shed in the stool. Various flotation or sedimentation procedures have been developed to recover the cysts from fecal matter, and stains help to visualize the isolated cysts for microscopic examination. Since cysts are not shed constantly, a minimum of three stools should be examined. In symptomatic infections, the motile form (the trophozoite) is often seen in fresh feces. Serological tests exist, and most infected individuals (with symptoms or not) test positive for the presence of antibodies. The levels of antibody are much higher in individuals with liver abscesses. Serology only becomes positive about two weeks after infection. More recent developments include a kit that detects the presence of amoeba proteins in the feces, and another that detects amoeba DNA in feces. These tests are not in widespread use due to their expense. Microscopy is still by far the most widespread method of diagnosis around the world; however, it is not as sensitive or accurate as the other tests available. It is important to distinguish the E. histolytica cyst from the cysts of nonpathogenic intestinal protozoa such as Entamoeba coli by its appearance. E. histolytica cysts have a maximum of four nuclei, while the commensal Entamoeba coli cyst has up to eight nuclei. Additionally, in E. histolytica, the endosome is centrally located in the nucleus, while it is usually off-center in Entamoeba coli. Finally, chromatoidal bodies in E. histolytica cysts are rounded, while they are jagged in Entamoeba coli. However, other species, Entamoeba dispar and E. moshkovskii, are also commensals and cannot be distinguished from E. histolytica under the microscope. As E. dispar is much more common than E. histolytica in most parts of the world, a great deal of incorrect diagnosis of E. histolytica infection takes place. The WHO recommends that infections diagnosed by microscopy alone should not be treated if they are asymptomatic and there is no other reason to suspect that the infection is actually E. histolytica. Detection of cysts or trophozoites in stools under the microscope may require examination of several samples over several days to determine if they are present, because cysts are shed intermittently and may not show up in every sample. Typically, the organism can no longer be found in the feces once the disease goes extra-intestinal. Serological tests are useful in detecting infection by E. histolytica if the organism goes extra-intestinal and in excluding the organism from the diagnosis of other disorders. An Ova & Parasite (O&P) test or an E. histolytica fecal antigen assay is the proper assay for intestinal infections. Since antibodies may persist for years after clinical cure, a positive serological result may not necessarily indicate an active infection. A negative serological result, however, can be equally important in excluding suspected tissue invasion by E. histolytica. Stool antigen detection tests have helped to overcome some of the limitations of stool microscopy. Antigen detection tests are easy to use, but they have variable sensitivity and specificity, especially in low-endemic areas. Polymerase chain reaction (PCR) is considered the gold standard for diagnosis but remains underutilized.
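The microscopy criteria above (nucleus count, endosome position, chromatoidal-body shape) amount to a small decision rule. The following is a minimal sketch in Python illustrating how they combine; the class and function names are hypothetical illustrations, not a validated diagnostic tool, and morphology alone cannot resolve the species complex:

```python
# Minimal sketch of the three microscopy criteria described above for
# separating E. histolytica cysts from Entamoeba coli cysts. The names
# and structure are hypothetical illustrations, not a validated assay.
from dataclasses import dataclass

@dataclass
class CystObservation:
    nuclei: int                  # E. histolytica: max 4; Entamoeba coli: up to 8
    endosome_central: bool       # central in E. histolytica, off-center in E. coli
    chromatoidals_rounded: bool  # rounded in E. histolytica, jagged in E. coli

def classify_cyst(obs: CystObservation) -> str:
    """Apply the morphological criteria from the text, in order."""
    if obs.nuclei > 4:
        return "Entamoeba coli (more than four nuclei)"
    if obs.endosome_central and obs.chromatoidals_rounded:
        # E. dispar and E. moshkovskii are microscopically identical, so
        # morphology can only report the species complex, not E. histolytica.
        return "E. histolytica / E. dispar / E. moshkovskii complex"
    return "indeterminate - examine further samples"

print(classify_cyst(CystObservation(nuclei=4, endosome_central=True,
                                    chromatoidals_rounded=True)))
```

This also makes concrete why the WHO guidance above is cautious: even a "positive" morphological call only narrows the result to a species complex, which is why antigen or PCR testing is needed to confirm true E. histolytica infection.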
Prevention
To help prevent the spread of amoebiasis around the home:
Wash hands thoroughly with soap and hot running water for at least 10 seconds after using the toilet or changing a baby's diaper, and before handling food.
Clean bathrooms and toilets often; pay particular attention to toilet seats and taps.
Avoid sharing towels or face washers.
To help prevent infection:
Avoid raw vegetables when in endemic areas, as they may have been fertilized using human feces.
Boil water or treat with iodine tablets.
Avoid eating street food, especially in public places where others are sharing sauces in one container.
Good sanitary practice, as well as responsible sewage disposal or treatment, is necessary for the prevention of E. histolytica infection on an endemic level. E. histolytica cysts are usually resistant to chlorination; therefore, sedimentation and filtration of water supplies are necessary to reduce the incidence of infection. E. histolytica cysts may be recovered from contaminated food by methods similar to those used for recovering Giardia lamblia cysts from feces. Filtration is probably the most practical method for recovery from drinking water and liquid foods. E. histolytica cysts must be distinguished from cysts of other parasitic (but nonpathogenic) protozoa and from cysts of free-living protozoa, as discussed above. Recovery procedures are not very accurate; cysts are easily lost or damaged beyond recognition, which leads to many false-negative results in recovery tests.
Treatment
E. histolytica infections occur in the intestine and, in people with symptoms, in the tissue of the intestine and/or liver. Those with symptoms require treatment with two medications: an amoebicidal tissue-active agent and a luminal cysticidal agent. Individuals who are asymptomatic only need a luminal cysticidal agent.
Prognosis
In the majority of cases, amoebae remain in the gastrointestinal tract of the host. Severe ulceration of the gastrointestinal mucosal surfaces occurs in less than 16% of cases. In fewer cases, the parasite invades the soft tissues, most commonly the liver. Only rarely are masses (amoebomas) formed that lead to intestinal obstruction; these are sometimes mistaken for carcinoma of the caecum or an appendicular mass. Other local complications include bloody diarrhea and pericolic and pericaecal abscess. Complications of hepatic amoebiasis include subdiaphragmatic abscess, perforation of the diaphragm into the pericardium and pleural cavity, perforation into the abdominal cavity (amoebic peritonitis), and perforation of the skin (amoebiasis cutis). Pulmonary amoebiasis can occur from liver lesions by spread through the blood or by perforation of the pleural cavity and lung. It can cause lung abscess, pulmonopleural fistula, empyema of the lung, and bronchopleural fistula. The parasite can also reach the brain through blood vessels and cause amoebic brain abscess and amoebic meningoencephalitis. Cutaneous amoebiasis can also occur in skin around sites of colostomy wounds, the perianal region, the region overlying a visceral lesion, and at the site of drainage of a liver abscess. Urogenital tract amoebiasis derived from intestinal lesions can cause amoebic vulvovaginitis (May's disease), rectovesical fistula, and rectovaginal fistula. Entamoeba histolytica infection is associated with malnutrition and stunting of growth in children.
Epidemiology
Amoebiasis caused about 55,000 deaths worldwide in 2010, down from 68,000 in 1990.
In older textbooks it is often stated that 10% of the world's population is infected with Entamoeba histolytica; however, many of these infections are now attributed to the morphologically identical but nonpathogenic E. dispar. Even so, there are up to 50 million true E. histolytica infections, and approximately seventy thousand people die each year, mostly from liver abscesses or other complications. Although usually considered a tropical parasite, the first case reported (in 1875) was actually in St. Petersburg in Russia, near the Arctic Circle. Infection is more common in warmer areas, but this is because of both poorer hygiene and the parasite's cysts surviving longer in warm, moist conditions.
History
Amoebiasis was first described by Fedor A. Lösch in 1875, in northern Russia </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anaplasmosis and can you explain it in depth? [/INST] Anaplasmosis is a tick-borne disease affecting ruminants, dogs, and horses, caused by Anaplasma bacteria. Anaplasmosis is an infectious but not contagious disease. It can be transmitted through mechanical and biological vector processes. Anaplasmosis can also be referred to as "yellow bag" or "yellow fever" because the infected animal can develop a jaundiced look. Other signs of infection include weight loss, diarrhea, paleness of the skin, aggressive behavior, and high fever. Many different tick species can carry the bacteria that cause anaplasmosis. The two major bacterial pathogens are Anaplasma marginale and Anaplasma phagocytophilum. These microorganisms are Gram-negative and infect red blood cells. Once the host is infected with anaplasmosis, the immune system will try to fight off and kill the infected red blood cells, but will also kill healthy red blood cells in the process. The Anaplasma sparouinense species is responsible for a rare zoonosis, Sparouine anaplasmosis, detected only in French Guiana, South America. This disease was first described in a clandestine gold miner working deep in the rainforest. Infection of his red blood cells led to a severe deterioration of his health and required his hospitalization. Molecular typing showed that Anaplasma sparouinense is distinct from all known species and most closely related genetically to recently described Anaplasma species causing infections in the rainforest wild fauna of Brazil. While there are currently no live or inactivated vaccines effective against all strains of A. marginale approved by the USDA, there are other means of prevention. Tick and fly control for herds of ruminants can be effective but is also labor-intensive. Chemical methods can also be used, including sanitizing surgical equipment after each use. Tetracycline drugs are the most common treatment for anaplasmosis and can provide the animal with immunity for a period of time. The disease is more common in the southern and western United States, but is no longer considered a major problem since the use of tetracycline drugs.
Transmission
Mechanical and biological vector transmission work in different ways, but both lead to infection of the red blood cells. Mechanical transmission happens in two ways: one is when red blood cells are inoculated with the blood parasite through surgical equipment, including needles, dehorners, ear taggers, castrating knives, and tattoo instruments. The other mechanical transmission mode is through the mouthparts of biting flies that carry an Anaplasma species of blood parasite. Biological vector transmission is through ticks that carry a blood parasite able to cause anaplasmosis. The most common anaplasmosis-causing tick is Ixodes scapularis, also known as the black-legged tick or the deer tick. Ticks that harbor any of many different Anaplasma species can transmit this disease through a bite. The blood parasite survives and can multiply in the tick, and can sit dormant for months without being transmitted to an animal. When an animal is bitten by a tick carrying a blood parasite, the parasite can enter the new host and cause infection. Once infection occurs, the parasite multiplies in the bloodstream and attaches to red blood cells. The immune system attempts to kill the infected blood cells but also kills uninfected red blood cells in the process. The number of red blood cells destroyed becomes larger than the number of new red blood cells being made, causing the host to become anemic and leading to many other symptoms. Once infected with anaplasmosis, cattle remain carriers of the infectious disease for life, and calves born from carriers will also carry the disease.
Signs and symptoms
Classic signs and symptoms of anaplasmosis do not occur until 3–6 weeks after infection. The most common findings include fever, decreased numbers of white blood cells and platelets in the bloodstream, and abnormally elevated levels of liver enzymes. The erythema chronicum migrans rash may be seen with anaplasmosis, as the infection is co-transmitted in 10% of Lyme disease cases.
Anemia may be severe and result in cardiovascular changes such as an increase in heart rate. Blood in the urine may occur due to the lysis of red blood cells. General systemic signs include diarrhea, anorexia, and weight loss. Infected animals may develop a jaundiced look, which then turns into paleness around the eyes, muzzle, lips, and teats of the cattle. All cattle are susceptible to infection by Anaplasma marginale, but the severity worsens with increasing age. Older cattle tend to exhibit the most severe clinical symptoms; cattle aged 1–3 years may also show severe symptoms but are able to recover more easily.
Causes
The two major species that cause anaplasmosis in ruminants are Anaplasma marginale and Anaplasma phagocytophilum. Anaplasma marginale is found worldwide and is transmitted by Rhipicephalus ticks. Anaplasma phagocytophilum is also found worldwide, mainly transmitted by Ixodes ticks. Other species that cause anaplasmosis in particular hosts include:
Cattle:
Anaplasma centrale - found mainly in South America, Africa and the Middle East
Sheep and goats:
Anaplasma ovis - found worldwide. There is a prevalence of 82.9% in sheep and 74.9% in goats. This species is the most common cause of anaplasmosis in sheep and goats, although Anaplasma phagocytophilum can also cause the disease. Anaplasma phagocytophilum has a prevalence of 11.9% in sheep and 15.2% in goats.
Morphology
There are many strains of Anaplasma marginale, all with differing morphology, antigenic properties, protein sequences, and ability to be transmitted by ticks. Major surface proteins (MSPs) have been found to play a major role in infection by Anaplasma marginale. Of the six MSPs found on this species, three do not seem to differ between strains: MSP1a, MSP4, and MSP5. The msp1a gene, which codes for MSP1a, is used as a marker for the identification of Anaplasma marginale, because it has been shown to be conserved during the multiplication of the rickettsia in cattle and ticks and to be involved in adhesion to bovine erythrocytes and tick cells. Anaplasma phagocytophilum is a Gram-negative bacterium that has neither lipopolysaccharides nor peptidoglycan. The outer membrane has no capsule and is coarse, with irregular periplasmic spaces. This species was originally included in the genus Ehrlichia (as Ehrlichia phagocytophila) but is now included in the genus Anaplasma (Anaplasma phagocytophilum).
Prevention
Currently, no live or inactivated vaccines have been approved by the USDA that are effective against all strains of A. marginale. Some vaccines that rely on erythrocyte-derived antigen sources provide immunity or prevent clinical disease, although these do not prevent cattle from being infected with A. marginale. Other means of prevention include testing all ruminants in a herd and eliminating any individuals who test positive for anaplasmosis, leading to an anaplasmosis-free herd. Vector control measures can also be used. Tick control is widely used in some regions, including parts of Africa, but is rarely used in the United States because this prevention method is labor-intensive and expensive. In contrast, the control of flies is effective, and there are many ways to do this: chemical agents, sanitation methods (such as cleaning stalls and pens regularly, managing manure, and protecting feed), as well as biological control by natural enemies of flies (including mites and parasitoids). Ways to prevent iatrogenic transmission include avoiding the re-use of needles and sanitizing medical equipment between uses. Antimicrobial treatment can also be used, although it is more commonly employed in the case of active infection. This includes the drugs tetracycline and imidocarb, given to otherwise healthy ruminants to decrease the clinical effects of an active infection.
Treatment
The most common treatment is the use of tetracycline drugs (including tetracycline, chlortetracycline, oxytetracycline, rolitetracycline, doxycycline, and minocycline) and imidocarb. An injection of tetracycline drugs can give ruminants immunity to Anaplasma species for at least eight months. Imidocarb has been shown to be highly effective against Anaplasma marginale, but it has been identified as a possible carcinogen and is not approved in the United States or Europe. Countries such as South Africa, Australia, Israel, and some in South America have used live vaccines containing infectious Anaplasma centrale to prevent infection with Anaplasma marginale. Live vaccines are prohibited in the United States, where vaccines consisting of nonliving Anaplasma marginale derived from infected bovine erythrocytes have been produced; these can provide some immunity but leave cattle susceptible to other strains of Anaplasma marginale. Supportive therapy such as blood products and fluids may be necessary.
Epidemiology
In the United States, anaplasmosis is notably present in the South and West, where the tick hosts Ixodes spp. are found. Antibodies to Anaplasma are also being detected with seemingly increasing frequency in humans in Europe </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anaplastic astrocytoma and can you explain it in depth? [/INST] Anaplastic astrocytoma is a rare WHO grade III type of astrocytoma, which is a type of cancer of the brain. In the United States, the annual incidence rate for anaplastic astrocytoma is 0.44 per 100,000 people.
Signs and symptoms
Initial presenting symptoms most commonly are headache, depressed mental status, focal neurological deficits, and/or seizures. The mean interval between onset of symptoms and diagnosis is approximately 1.5–2 years but is highly variable; both the growth rate and this interval are intermediate between those of low-grade astrocytomas and glioblastomas. Seizures are less common among patients with anaplastic astrocytomas than among those with low-grade lesions.
Causes
Most high-grade gliomas occur sporadically or without identifiable cause. However, a small proportion (less than 5%) of persons with malignant astrocytoma have a definite or suspected hereditary predisposition, mainly neurofibromatosis type I, Li-Fraumeni syndrome, hereditary nonpolyposis colorectal cancer, or tuberous sclerosis. Anaplastic astrocytomas have also been associated with previous exposure to vinyl chloride and to high doses of radiation therapy to the brain.
Pathology
Anaplastic astrocytomas fall under the category of high-grade gliomas (WHO grade III-IV), which are pathologically undifferentiated gliomas that carry a poor clinical prognosis. Unlike glioblastomas (WHO grade IV), anaplastic astrocytomas lack vascular proliferation and necrosis on pathologic evaluation. Compared to grade II tumors, anaplastic astrocytomas are more cellular, demonstrate more atypia, and show mitoses.
Treatment
The standard initial treatment is to remove as much of the tumor as possible without worsening neurologic deficits </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anorexia nervosa and can you explain it in depth? [/INST] Anorexia nervosa, often referred to simply as anorexia, is an eating disorder characterized by low weight, food restriction, body image disturbance, fear of gaining weight, and an overpowering desire to be thin. Anorexia is a term of Greek origin: an- (ἀν-, prefix denoting negation) and orexis (ὄρεξις, "appetite"), translating literally to "a loss of appetite"; the adjective nervosa indicates the functional and non-organic nature of the disorder. The term anorexia nervosa was coined by Gull in 1873 but, despite the literal translation, the symptom of hunger is frequently present, and the pathological control of this instinct is a source of satisfaction for the patients.
Individuals with anorexia nervosa commonly see themselves as overweight, although they are in fact underweight. The DSM-5 describes this perceptual symptom as "disturbance in the way in which one's body weight or shape is experienced". In research and clinical settings, this symptom is called "body image disturbance". Individuals with anorexia nervosa also often deny that they have a problem with low weight. They may weigh themselves frequently, eat small amounts, and only eat certain foods. Some exercise excessively, force themselves to vomit (in the "anorexia purging" subtype), use laxatives to lose weight and control body shape, and/or binge eat. Medical complications may include osteoporosis, infertility, and heart damage, among others. Women will often stop having menstrual periods. In extreme cases, patients with anorexia nervosa who continually refuse significant dietary intake and weight-restoration interventions, and who are declared incompetent to make decisions by a psychiatrist, may be fed by force under restraint via nasogastric tube, with their parents or proxies asked to make the decision for them. The cause of anorexia is currently unknown. There appear to be some genetic components, with identical twins more often affected than fraternal twins. Cultural factors also appear to play a role, with societies that value thinness having higher rates of the disease. Additionally, it occurs more commonly among those involved in activities that value thinness, such as high-level athletics, modeling, and dancing. Anorexia often begins following a major life change or stress-inducing event. The diagnosis requires a significantly low weight, and the severity of disease is based on body mass index (BMI) in adults, with mild disease corresponding to a BMI greater than 17, moderate to a BMI of 16 to 17, severe to a BMI of 15 to 16, and extreme to a BMI less than 15. In children, a BMI-for-age percentile of less than the 5th percentile is often used. Treatment of anorexia involves restoring the patient to a healthy weight, treating the underlying psychological problems, and addressing behaviors that promote the problem. While medications do not help with weight gain, they may be used to help with associated anxiety or depression. Different therapy methods may be useful, such as cognitive behavioral therapy or an approach where parents assume responsibility for feeding their child, known as Maudsley family therapy. Sometimes people require admission to a hospital to restore weight. Evidence for benefit from nasogastric tube feeding is unclear; such an intervention may be highly distressing for both anorexia patients and healthcare staff when administered against the patient's will under restraint. Some people with anorexia will have a single episode and recover, while others may have recurring episodes over years. Many complications improve or resolve with the regaining of weight. Globally, anorexia is estimated to affect 2.9 million people as of 2015. It is estimated to occur in 0.3% to 4.3% of women and 0.2% to 0.3% of men in Western countries at some point in their life. About 0.4% of young women are affected in a given year, and it is estimated to occur ten times more commonly among women than men. Rates in most of the developing world are unclear. Often it begins during the teen years or young adulthood. While anorexia became more commonly diagnosed during the 20th century, it is unclear if this was due to an increase in its frequency or simply better diagnosis.
In 2013, it directly resulted in about 600 deaths globally, up from 400 deaths in 1990. Eating disorders also increase a person's risk of death from a wide range of other causes, including suicide. About 5% of people with anorexia die from complications over a ten-year period, a nearly six times increased risk. The term "anorexia nervosa" was first used in 1873 by William Gull to describe this condition.
In recent years, evolutionary psychiatry as an emerging scientific discipline has been studying mental disorders from an evolutionary perspective. It is still debated whether eating disorders such as anorexia have evolutionary functions or if they are problems resulting from a modern lifestyle.
Signs and symptoms
Anorexia nervosa is an eating disorder characterized by attempts to lose weight to the point of starvation. A person with anorexia nervosa may exhibit a number of signs and symptoms, the type and severity of which may vary; some may be present but not readily apparent. Anorexia nervosa, and the associated malnutrition that results from self-imposed starvation, can cause complications in every major organ system in the body. Hypokalaemia, a drop in the level of potassium in the blood, is a sign of anorexia nervosa. A significant drop in potassium can cause abnormal heart rhythms, constipation, fatigue, muscle damage, and paralysis. Signs and symptoms may be classified as physical, cognitive, affective, behavioral, and perceptual:
Physical symptoms
A low body mass index for one's age and height
Amenorrhea (cessation of menstruation), a symptom that occurs after prolonged weight loss
Dry hair and skin, as well as hair thinning
Fear of even the slightest weight gain; taking all precautionary measures to avoid weight gain or becoming "overweight"
Rapid, continuous weight loss
Lanugo: soft, fine hair growing over the face and body
Bradycardia or tachycardia.
Chronic fatigue
Orange discoloration of the skin, particularly the feet (Carotenosis)
Infertility
Halitosis (from vomiting or starvation-induced ketosis)
Hypotension or orthostatic hypotension
Having severe muscle tension, aches and pains
Insomnia
Abdominal distension
Cognitive symptoms
An obsession with counting calories and monitoring fat contents of food.
Preoccupation with food, recipes, or cooking; may cook elaborate dinners for others but not eat the food themselves, or consume only a very small portion.
Admiration of thinner people.
Thoughts of being fat or not thin enough
An altered mental representation of one's body
Impaired theory of mind, exacerbated by lower BMI and depression
Difficulty in abstract thinking and problem solving
Rigid and inflexible thinking
Poor self-esteem
Hypercriticism and perfectionism
Affective symptoms
Depression
Ashamed of oneself or ones body
Anxiety disorders
Rapid mood swings
Emotional dysregulation
Alexithymia
Behavioral symptoms
Food restrictions despite being underweight or at a healthy weight.
Food rituals, such as cutting food into tiny pieces, refusing to eat around others, and hiding or discarding food.
Purging (only in the anorexia purging subtype) with laxatives, diet pills, ipecac syrup, or diuretics to flush food out of the system after eating, or engaging in self-induced vomiting.
Excessive exercise, including micro-exercising, for example making small persistent movements of fingers or toes.
Self-harm or self-loathing.
Solitude: may avoid friends and family and become more withdrawn and secretive.
Perceptual symptoms
Perception of self as overweight, in contradiction to an underweight reality (namely "body image disturbance")
Intolerance to cold and frequent complaints of being cold; body temperature may lower (hypothermia) in an effort to conserve energy due to malnutrition.
Altered body schema (i.e. an implicit representation of the body evoked by acting)
Altered interoception
Interoception
Interoception involves the conscious and unconscious sense of the internal state of the body, and it has an important role in homeostasis and the regulation of emotions. Aside from noticeable physiological dysfunction, interoceptive deficits also prompt individuals with anorexia to concentrate on distorted perceptions of multiple elements of their body image. This occurs in both people with anorexia and healthy individuals, due to impairment in interoceptive sensitivity and interoceptive awareness. Aside from weight gain and outer appearance, people with anorexia also report abnormal bodily functions, such as indistinct feelings of fullness. This provides an example of miscommunication between internal signals of the body and the brain. Due to impaired interoceptive sensitivity, powerful cues of fullness may be detected prematurely in highly sensitive individuals, which can result in decreased calorie consumption and generate anxiety surrounding food intake in anorexia patients. People with anorexia also report difficulty identifying and describing their emotional feelings and an inability to distinguish emotions from bodily sensations in general, called alexithymia. Interoceptive awareness and emotion are deeply intertwined and may mutually impact each other when abnormalities arise. Anorexia patients also exhibit emotional regulation difficulties that ignite emotionally cued eating behaviors, such as restricting food or exercising excessively. Impaired interoceptive sensitivity and interoceptive awareness can lead anorexia patients to adopt distorted interpretations of weight gain that are cued by physical sensations related to digestion (e.g., fullness). Combined, these interoceptive and emotional elements may together trigger maladaptive and negatively reinforced behavioral responses that assist in the maintenance of anorexia. In addition to metacognition, people with anorexia also have difficulty with social cognition, including interpreting others' emotions and demonstrating empathy. Abnormal interoceptive awareness and interoceptive sensitivity have been observed so frequently in anorexia that they have become key characteristics of the illness.
Comorbidity
Other psychological issues may factor into anorexia nervosa. Some people have a previous disorder which may increase their vulnerability to developing an eating disorder, and some develop them afterwards. The presence of psychiatric comorbidity has been shown to affect the severity and type of anorexia nervosa symptoms in both adolescents and adults. Obsessive-compulsive disorder (OCD) and obsessive-compulsive personality disorder (OCPD) are highly comorbid with AN. OCD is linked with more severe symptomatology and worse prognosis. The causality between personality disorders and eating disorders has yet to be fully established. Other comorbid conditions include depression, alcoholism, borderline and other personality disorders, anxiety disorders, attention deficit hyperactivity disorder, and body dysmorphic disorder (BDD). Depression and anxiety are the most common comorbidities, and depression is associated with a worse outcome. Autism spectrum disorders occur more commonly among people with eating disorders than in the general population. Zucker et al. (2007) proposed that conditions on the autism spectrum make up the cognitive endophenotype underlying anorexia nervosa and appealed for increased interdisciplinary collaboration.
Causes
There is evidence for biological, psychological, developmental, and sociocultural risk factors, but the exact cause of eating disorders is unknown.
Genetic
Anorexia nervosa is highly heritable. Twin studies have shown a heritability rate of 28–58%. First-degree relatives of those with anorexia have roughly 12 times the risk of developing anorexia. Association studies have been performed, studying 128 different polymorphisms related to 43 genes, including genes involved in regulation of eating behavior, motivation and reward mechanics, personality traits, and emotion. Consistent associations have been identified for polymorphisms associated with agouti-related peptide, brain-derived neurotrophic factor, catechol-O-methyltransferase, SK3, and opioid receptor delta-1. Epigenetic modifications, such as DNA methylation, may contribute to the development or maintenance of anorexia nervosa, though clinical research in this area is in its infancy. A 2019 study found a genetic relationship with mental disorders, such as schizophrenia, obsessive-compulsive disorder, anxiety disorder, and depression, and with metabolic functioning, including a negative correlation with fat mass, type 2 diabetes, and leptin. One gene that has been linked to anorexia might be of particular interest. This gene codes for a protein called the estrogen-related receptor alpha (ERRalpha). In some tissues, this gene alters the ability of estrogen and estrogen receptors to interact with DNA and change the function of cells. Since estrogen has potent effects upon appetite and feeding, any genetic abnormality in the estrogen signaling pathway could contribute to the symptoms of anorexia and explain why anorexia typically appears in young women just after the onset of puberty.
Environmental
Obstetric complications: prenatal and perinatal complications may factor into the development of anorexia nervosa, such as preterm birth, maternal anemia, diabetes mellitus, preeclampsia, placental infarction, and neonatal heart abnormalities. Neonatal complications may also have an influence on harm avoidance, one of the personality traits associated with the development of AN. Neuroendocrine dysregulation: altered signalling of peptides that facilitate communication between the gut, brain, and adipose tissue, such as ghrelin, leptin, neuropeptide Y, and orexin, may contribute to the pathogenesis of anorexia nervosa by disrupting regulation of hunger and satiety. Gastrointestinal diseases: people with gastrointestinal disorders may be more at risk of developing disordered eating practices than the general population, principally restrictive eating disturbances. An association of anorexia nervosa with celiac disease has been found. The role that gastrointestinal symptoms play in the development of eating disorders seems rather complex. Some authors report that unresolved symptoms prior to gastrointestinal disease diagnosis may create a food aversion in these persons, causing alterations to their eating patterns. Other authors report that greater symptoms throughout their diagnosis led to greater risk. It has been documented that some people with celiac disease, irritable bowel syndrome, or inflammatory bowel disease who are not conscious of the importance of strictly following their diet choose to consume their trigger foods to promote weight loss. On the other hand, individuals with good dietary management may develop anxiety, food aversion, and eating disorders because of concerns around cross-contamination of their foods. Some authors suggest that medical professionals should evaluate the presence of unrecognized celiac disease in all people with an eating disorder, especially if they present any gastrointestinal symptom (such as decreased appetite, abdominal pain, bloating, distension, vomiting, diarrhea, or constipation), weight loss, or growth failure, and should also routinely ask celiac patients about weight or body shape concerns, dieting, or vomiting for weight control, to evaluate the possible presence of eating disorders, especially in women. Studies have hypothesized that the continuance of disordered eating patterns may be an epiphenomenon of starvation. The results of the Minnesota Starvation Experiment showed that normal controls exhibit many of the behavioral patterns of AN when subjected to starvation. This may be due to the numerous changes in the neuroendocrine system, which result in a self-perpetuating cycle. Anorexia nervosa is more likely to occur in a person's pubertal years. Some explanatory hypotheses for the rising prevalence of eating disorders in adolescence are "increase of adipose tissue in girls, hormonal changes of puberty, societal expectations of increased independence and autonomy that are particularly difficult for anorexic adolescents to meet; [and] increased influence of the peer group and its values."
Psychological
Early theories of the cause of anorexia linked it to childhood sexual abuse or dysfunctional families; evidence is conflicting, and well-designed research is needed. The fear of food is known as sitiophobia or cibophobia and is part of the differential diagnosis. Other psychological causes of anorexia include low self-esteem, feeling a lack of control, depression, anxiety, and loneliness. People with anorexia are, in general, highly perfectionistic, and most have obsessive-compulsive personality traits, which may facilitate sticking to a restricted diet. It has been suggested that patients with anorexia are rigid in their thought patterns and place a high level of importance upon being thin. A risk factor for anorexia is trauma. Although the prevalence rates vary greatly, between 37% and 100%, there appears to be a link between traumatic events and eating disorder diagnosis. Approximately 72% of individuals with anorexia report experiencing a traumatic event prior to the onset of eating disorder symptoms, with the binge-purge subtype reporting the highest rates. Many traumatic events may be risk factors for the development of anorexia; the first traumatic event identified as predicting anorexia was childhood sexual abuse. However, other traumatic events, such as physical and emotional abuse, have also been found to be risk factors. Interpersonal trauma, as opposed to non-interpersonal trauma, is the most common type of traumatic event, and can encompass sexual, physical, and emotional abuse. Individuals who experience repeated trauma, like those who experience trauma perpetrated by a caregiver or loved one, have increased symptom severity of anorexia and a greater prevalence of comorbid psychiatric diagnoses. In individuals with anorexia, the prevalence of those who also qualify for a PTSD diagnosis ranges from 4% to 52% in non-clinical samples and from 10% to 47% in clinical samples. A complicated symptom profile develops when trauma and anorexia meld; the bodily experience of the individual is changed, and intrusive thoughts and sensations may be experienced. Traumatic events can lead to intrusive and obsessive thoughts, and the symptom of anorexia most closely linked to a PTSD diagnosis is increased obsessive thoughts pertaining to food. Similarly, impulsivity is linked to the purge and binge-purge subtypes of anorexia, trauma, and PTSD. Emotional trauma (e.g., invalidation, a chaotic family environment in childhood) may lead to difficulty with emotions, particularly in identifying how physical sensations contribute to the emotional response. Trauma and traumatic events can disturb an individual's sense of self and affect their ability to thrive, especially within their bodies. When trauma is perpetrated on an individual, it can lead to feelings of not being safe within their own body: that their body is for others to use and not theirs alone. Individuals may experience a feeling of disconnection from their body after a traumatic experience, leading to a desire to distance themselves from the body. Trauma overwhelms individuals emotionally, physically, and psychologically. Both physical and sexual abuse can lead to an individual seeing their body as belonging to an "other" and not to the "self". Individuals who feel as though they have no control over their bodies due to trauma may use food as a means of control, because the choice to eat is an unmatched expression of control.
By exerting control over food, individuals can choose when to eat and how much to eat. Individuals, particularly children experiencing abuse, may feel a loss of control over their life, their circumstances, and their own bodies. Sexual abuse in particular, but also physical abuse, can make individuals feel that the body is not a safe place but an object over which another has control. Starvation, in the case of anorexia, may also reduce the body as a sexual object, making starvation a solution. Restriction may also be a means of communicating the pain an individual is experiencing.
Sociological
Anorexia nervosa has been increasingly diagnosed since 1950; the increase has been linked to vulnerability and internalization of body ideals. People in professions where there is a particular social pressure to be thin (such as models and dancers) are more likely to develop anorexia, and those with anorexia have much higher contact with cultural sources that promote weight loss. This trend can also be observed in people who partake in certain sports, such as jockeys and wrestlers. There is a higher incidence and prevalence of anorexia nervosa in sports with an emphasis on aesthetics, where low body fat is advantageous, and in sports in which one has to make weight for competition. Family dynamics can also play a role in the cause of anorexia, including negative expressed emotion in overprotective families where blame is frequently experienced among their members. When there is constant pressure from people to be thin, teasing and bullying can cause low self-esteem and other psychological symptoms.
Media effects
Persistent exposure to media that present body ideals may constitute a risk factor for body dissatisfaction and anorexia nervosa. The cultural ideal for body shape for men versus women continues to favor slender women and athletic, V-shaped muscular men. A 2002 review found that, of the magazines most popular among people aged 18 to 24 years, those read by men, unlike those read by women, were more likely to feature ads and articles on shape than on diet. Body dissatisfaction and internalization of body ideals are risk factors for anorexia nervosa that threaten the health of both male and female populations. Another online aspect contributing to higher rates of eating disorders such as anorexia nervosa is websites and communities on social media that stress the importance of attaining body ideals and that extol and promote anorexia nervosa through the use of religious metaphors, lifestyle descriptions, "thinspiration", or "fitspiration" (inspirational photo galleries and quotes that aim to serve as motivators for attainment of body ideals). Pro-anorexia websites reinforce internalization of body ideals and the importance of their attainment. The media portray a false view of what people truly look like. In magazines, in movies, and even on billboards, most of the actors and models are digitally altered in multiple ways. People then strive to look like these "perfect" role models, when in reality the models themselves are not near perfection.
Mechanisms
Evidence from physiological, pharmacological, and neuroimaging studies suggests serotonin (also called 5-HT) may play a role in anorexia. While patients are acutely ill, metabolic changes may produce a number of biological findings in people with anorexia that are not necessarily causative of the anorexic behavior. For example, abnormal hormonal responses to challenges with serotonergic agents have been observed during acute illness, but not after recovery. Nevertheless, increased cerebrospinal fluid concentrations of 5-hydroxyindoleacetic acid (a metabolite of serotonin), and changes in anorectic behavior in response to acute tryptophan depletion (tryptophan is a metabolic precursor to serotonin), support a role for serotonin in anorexia. The activity of the 5-HT2A receptors has been reported to be lower in patients with anorexia in a number of cortical regions, evidenced by lower binding potential of this receptor as measured by PET or SPECT, independent of the state of illness. While these findings may be confounded by comorbid psychiatric disorders, taken as a whole they implicate serotonin in anorexia. These alterations in serotonin have been linked to traits characteristic of anorexia such as obsessiveness, anxiety, and appetite dysregulation. Neuroimaging studies investigating the functional connectivity between brain regions have observed a number of alterations in networks related to cognitive control, introspection, and sensory function. Alterations in networks related to the dorsal anterior cingulate cortex may be related to excessive cognitive control of eating-related behaviors. Similarly, altered somatosensory integration and introspection may relate to abnormal body image. A review of functional neuroimaging studies reported reduced activations in "bottom-up" limbic regions and increased activations in "top-down" cortical regions, which may play a role in restrictive eating. Compared to controls, people who have recovered from anorexia show reduced activation in the reward system in response to food, and reduced correlation between self-reported liking of a sugary drink and activity in the striatum and anterior cingulate cortex. Increased binding potential of 11C-radiolabelled raclopride in the striatum, interpreted as reflecting decreased endogenous dopamine due to competitive displacement, has also been observed. Structural neuroimaging studies have found global reductions in both gray matter and white matter, as well as increased cerebrospinal fluid volumes. Regional decreases in the left hypothalamus, left inferior parietal lobe, right lentiform nucleus, and right caudate have also been reported in acutely ill patients. However, these alterations seem to be associated with acute malnutrition and are largely reversible with weight restoration, at least in non-chronic cases in younger people. In contrast, some studies have reported increased orbitofrontal cortex volume in currently ill and in recovered patients, although findings are inconsistent. Reduced white matter integrity in the fornix has also been reported.
Diagnosis
A diagnostic assessment includes the person's current circumstances, biographical history, current symptoms, and family history. The assessment also includes a mental state examination, which is an assessment of the person's current mood and thought content, focusing on views on weight and patterns of eating.
DSM-5
Anorexia nervosa is classified under the Feeding and Eating Disorders in the latest revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). There is no specific BMI cut-off that defines the low weight required for the diagnosis of anorexia nervosa. The diagnostic criteria for anorexia nervosa (all of which must be met for diagnosis) are:
Restriction of energy intake relative to requirements leading to a low body weight. (Criterion A)
Intense fear of gaining weight or persistent behaviors that interfere with gaining weight. (Criterion B)
Disturbance in the way a person's weight or body shape is experienced, or a lack of recognition of the risks of the low body weight. (Criterion C)
Relative to the previous version of the DSM (DSM-IV-TR), the 2013 revision (DSM-5) reflects changes in the criteria for anorexia nervosa. Most notably, the amenorrhea (absent period) criterion was removed. Amenorrhea was removed for several reasons: it does not apply to males, it is not applicable to females before menarche or after menopause or to those taking birth control pills, and some women who meet the other criteria for AN still report some menstrual activity.
Subtypes
There are two subtypes of AN:
Binge-eating/purging type: patients with anorexia may show binge-eating and purging behavior. It differs from bulimia nervosa in terms of the individual's weight. An individual with binge-eating/purging type anorexia is usually significantly underweight. People with bulimia nervosa, on the other hand, can sometimes be normal-weight or overweight.
Restricting type: the individual uses restriction of food intake, fasting, diet pills, or exercise as a means of losing weight; they may exercise excessively to keep off weight or prevent weight gain, and some individuals eat only enough to stay alive. In the restricting type, there are no recurrent episodes of binge-eating or purging.
Levels of severity
Body mass index (BMI) is used by the DSM-5 as an indicator of the level of severity of anorexia nervosa. The DSM-5 states these as follows:
Mild: BMI of 17 or above
Moderate: BMI of 16–16.99
Severe: BMI of 15–15.99
Extreme: BMI of less than 15
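Because these severity bands are plain BMI intervals, the mapping can be shown as simple arithmetic. The following is an illustrative sketch with invented function names, not a clinical instrument; real assessment also weighs clinical symptoms and functional disability, not BMI alone:

```python
# Illustrative sketch of the DSM-5 adult severity bands listed above.
# Not a clinical tool: BMI is only one indicator of severity.
def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def dsm5_severity(bmi_value: float) -> str:
    if bmi_value >= 17:
        return "mild"
    if bmi_value >= 16:
        return "moderate"
    if bmi_value >= 15:
        return "severe"
    return "extreme"

b = bmi(43.0, 1.68)
print(round(b, 1), dsm5_severity(b))  # -> 15.2 severe
```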
Investigations
Medical tests to check for signs of physical deterioration in anorexia nervosa may be performed by a general physician or psychiatrist, including:
Complete blood count (CBC): a test of the white blood cells, red blood cells and platelets used to assess the presence of various disorders such as leukocytosis, leukopenia, thrombocytosis and anemia which may result from malnutrition.
Urinalysis: a variety of tests performed on the urine used in the diagnosis of medical disorders, to test for substance abuse, and as an indicator of overall health
Chem-20: Chem-20, also known as SMA-20, is a group of twenty separate chemical tests performed on blood serum. Tests include protein and electrolytes such as potassium, chloride, and sodium, as well as tests specific to liver and kidney function.
Glucose tolerance test: the oral glucose tolerance test (OGTT) is used to assess the body's ability to metabolize glucose. It can be useful in detecting various disorders such as diabetes, an insulinoma, Cushing's syndrome, hypoglycemia, and polycystic ovary syndrome.
Lipid profile: includes cholesterol (including total cholesterol, HDL and LDL) and triglycerides.
Serum cholinesterase test: a test of liver enzymes (acetylcholinesterase and pseudocholinesterase) useful as a test of liver function and to assess the effects of malnutrition.
Liver function tests: a series of tests used to assess liver function; some of the tests are also used in the assessment of malnutrition, protein deficiency, kidney function, bleeding disorders, and Crohn's disease.
Luteinizing hormone (LH) response to gonadotropin-releasing hormone (GnRH): tests the pituitary gland's response to GnRH, a hormone produced in the hypothalamus. Hypogonadism is often seen in anorexia nervosa cases.
Creatine kinase (CK) test: measures the circulating blood levels of creatine kinase, an enzyme found in the heart (CK-MB), brain (CK-BB), and skeletal muscle (CK-MM).
Blood urea nitrogen (BUN) test: urea nitrogen is the byproduct of protein metabolism, first formed in the liver and then removed from the body by the kidneys. The BUN test is primarily used to test kidney function. A low BUN level may indicate the effects of malnutrition.
BUN-to-creatinine ratio: a BUN-to-creatinine ratio is used to predict various conditions. A high BUN/creatinine ratio can occur in severe dehydration, acute kidney failure, congestive heart failure, and intestinal bleeding. A low BUN/creatinine ratio can indicate a low-protein diet, celiac disease, rhabdomyolysis, or cirrhosis of the liver (a worked sketch of this ratio follows this list).
Electrocardiogram (EKG or ECG): measures electrical activity of the heart. It can be used to detect various disorders such as hyperkalemia.
Electroencephalogram (EEG): measures the electrical activity of the brain. It can be used to detect abnormalities such as those associated with pituitary tumors.
Thyroid function tests: tests used to assess thyroid functioning by checking levels of thyroid-stimulating hormone (TSH), thyroxine (T4), and triiodothyronine (T3).
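As the worked example promised above, the sketch below computes and interprets the BUN-to-creatinine ratio. Conventional US units (mg/dL) are assumed, the 10–20 reference band is an assumed illustrative figure (laboratory cut-offs vary), and the helper names are hypothetical:

```python
# Illustrative sketch of the BUN-to-creatinine ratio described in the
# list above. The 10-20 reference band is an assumption for illustration;
# check local laboratory reference ranges before interpreting.
def bun_creatinine_ratio(bun_mg_dl: float, creatinine_mg_dl: float) -> float:
    return bun_mg_dl / creatinine_mg_dl

def interpret_ratio(ratio: float, low: float = 10.0, high: float = 20.0) -> str:
    if ratio > high:
        return "high: consider dehydration, acute kidney failure, heart failure, or GI bleeding"
    if ratio < low:
        return "low: consider low-protein diet, rhabdomyolysis, or liver disease"
    return "within the assumed reference band"

r = bun_creatinine_ratio(bun_mg_dl=28.0, creatinine_mg_dl=1.0)
print(round(r, 1), "-", interpret_ratio(r))  # -> 28.0 - high: ...
```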
Differential diagnoses
A variety of medical and psychological conditions have been misdiagnosed as anorexia nervosa; in some cases the correct diagnosis was not made for more than ten years.
The distinction between binge purging anorexia, bulimia nervosa and Other Specified Feeding or Eating Disorders (OSFED) is often difficult for non-specialist clinicians. A main factor differentiating binge-purge anorexia from bulimia is the gap in physical weight. Patients with bulimia nervosa are ordinarily at a healthy weight, or slightly overweight. Patients with binge-purge anorexia are commonly underweight. Moreover, patients with the binge-purging subtype may be significantly underweight and typically do not binge-eat large amounts of food. In contrast, those with bulimia nervosa tend to binge large amounts of food. It is not unusual for patients with an eating disorder to "move through" various diagnoses as their behavior and beliefs change over time.
Treatment
There is no conclusive evidence that any particular treatment for anorexia nervosa works better than others. Treatment for anorexia nervosa tries to address three main areas:
Restoring the person to a healthy weight;
Treating the psychological disorders related to the illness;
Reducing or eliminating behaviors or thoughts that originally led to the disordered eating.
In some clinical settings a specific body image intervention is performed to reduce body dissatisfaction and body image disturbance. Although restoring the person's weight is the primary task at hand, optimal treatment also includes and monitors behavioral change in the individual. There is some evidence that hospitalization might adversely affect long-term outcome, but it is sometimes necessary. Psychotherapy for individuals with AN is challenging, as they may value being thin and may seek to maintain control and resist change. Initially, developing a desire to change is fundamental. Although there is no evidence of a superior treatment for adult patients, research indicates that family-based therapy is the primary choice for adolescents with AN.
Therapy
Family-based treatment (FBT) has been shown to be more successful than individual therapy for adolescents with AN. Various forms of family-based treatment have been proven to work in the treatment of adolescent AN, including conjoint family therapy (CFT), in which the parents and child are seen together by the same therapist, and separated family therapy (SFT), in which the parents and child attend therapy separately with different therapists. Proponents of family therapy for adolescents with AN assert that it is important to include parents in the adolescent's treatment.
A four- to five-year follow-up study of the Maudsley family therapy, an evidence-based manualized model, showed full recovery at rates up to 90%. Although this model is recommended by the NIMH, critics claim that it has the potential to create power struggles in an intimate relationship and may disrupt equal partnerships. Cognitive behavioral therapy (CBT) is useful in adolescents and adults with anorexia nervosa. One of the best-known psychotherapies in the field is CBT-E, an enhanced cognitive behavioral therapy focused specifically on eating disorder psychopathology. Acceptance and commitment therapy, a third-wave cognitive-behavioral therapy, has shown promise in the treatment of AN. Cognitive remediation therapy (CRT) is also used in treating anorexia nervosa. Schema-Focused Therapy (a form of CBT) was developed by Dr. Jeffrey Young and is effective in helping patients identify the origins of and triggers for disordered eating.
Diet
Diet is the most essential factor to work on in people with anorexia nervosa and must be tailored to each person's needs. Food variety is important when establishing meal plans, as are foods that are higher in energy density. People must consume adequate calories, starting slowly and increasing at a measured pace. Evidence for a role for zinc supplementation during refeeding is unclear.
Medication
Pharmaceuticals have limited benefit for anorexia itself. There is a lack of good information from which to make recommendations concerning the effectiveness of antidepressants in treating anorexia. Administration of olanzapine has been shown to result in a modest but statistically significant increase in body weight of anorexia nervosa patients.
Admission to hospital
Patients with AN may be deemed to lack insight regarding the necessity of treatment, and thus may be treated involuntarily, without their consent. AN has a high mortality, and patients admitted in a severely ill state to medical units are at particularly high risk. Diagnosis can be challenging, risk assessment may not be performed accurately, consent and the need for compulsion may not be assessed appropriately, refeeding syndrome may be missed or poorly treated, and the behavioural and family problems in AN may be missed or poorly managed. Guidelines published by the Royal College of Psychiatrists recommend that medical and psychiatric experts work together in managing severely ill people with AN.
Refeeding syndrome
The rate of refeeding can be difficult to establish, because fear of refeeding syndrome (RFS) can lead to underfeeding. It is thought that RFS, with falling phosphate and potassium levels, is more likely to occur when BMI is very low and when medical comorbidities, such as infection or cardiac failure, are present. In those circumstances, it is recommended to start refeeding slowly but to build up rapidly as long as RFS does not occur. Recommendations on energy requirements vary, from 5–10 kcal/kg/day in the most medically compromised patients, who appear to have the highest risk of RFS, to 1900 kcal/day.
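To make the per-kilogram recommendation concrete, the sketch below turns it into a starting daily energy target. The 5–10 kcal/kg/day range and the 1900 kcal/day upper figure come from the text above; the example body weight is hypothetical, and none of this is clinical guidance:

```python
def initial_refeeding_target_kcal(weight_kg: float,
                                  kcal_per_kg: float = 5.0) -> float:
    """Starting daily energy target (kcal/day) for a high-RFS-risk patient.

    Uses the 5-10 kcal/kg/day range cited above for the most medically
    compromised patients; 1900 kcal/day is the upper recommendation.
    """
    if not 5.0 <= kcal_per_kg <= 10.0:
        raise ValueError("expected 5-10 kcal/kg/day for high-risk refeeding")
    return min(weight_kg * kcal_per_kg, 1900.0)


# Hypothetical 38 kg patient at the most conservative 5 kcal/kg/day:
print(initial_refeeding_target_kcal(38.0))  # 190.0 kcal/day to start
```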
Experience of treatment
Patients involved in treatment sometimes felt that treatment focused on biological aspects of body weight and eating behaviour change rather than on their perceptions or emotional state. Patients felt that a therapist's trust in them, shown by being treated as a complete person with their own capacities, was significant. Some patients defined recovery from AN in terms of reclaiming a lost identity. Healthcare workers involved in the treatment of anorexia reported frustration and anger at setbacks in treatment and at noncompliance, and were afraid of patients dying. Some healthcare workers felt that they did not understand the treatment and that medical doctors were making the decisions. They may feel powerless to improve a patient's situation and deskilled as a result. Healthcare workers involved in monitoring patients' consumption of food felt watched themselves. Healthcare workers often feel a degree of moral dissonance at not being in control of outcomes, which they may protect against by focusing on individual tasks, avoiding identifying with patients (for example, by making their own eating behavior very different and not sharing personal information with patients), and blaming patients for their distress. Healthcare workers would inflexibly follow process to avoid responsibility. Healthcare workers attempted to reach balance by gradually giving patients back control, avoiding feeling sole responsibility for outcomes, being mindful of their own emotional state, and trying to view eating disorders as external to patients.
Prognosis
AN has the highest mortality rate of any psychological disorder. The mortality rate is 11 to 12 times greater than in the general population, and the suicide risk is 56 times higher. Half of women with AN achieve a full recovery, while an additional 20–30% may partially recover. Not all people with anorexia recover completely: about 20% develop anorexia nervosa as a chronic disorder. If anorexia nervosa is not treated, serious complications such as heart conditions and kidney failure can arise and eventually lead to death. The average number of years from onset to remission of AN is seven for women and three for men. After ten to fifteen years, 70% of people no longer meet the diagnostic criteria, but many still continue to have eating-related problems.
Alexithymia (the inability to identify and describe one's own emotions) influences treatment outcome. Recovery is also viewed on a spectrum rather than as black and white. According to the Morgan-Russell criteria, individuals can have a good, intermediate, or poor outcome. Even when a person is classified as having a "good" outcome, weight only has to be within 15% of average, and normal menstruation must be present in females; the "good" outcome also excludes psychological health. Recovery for people with anorexia nervosa is undeniably positive, but recovery does not mean a return to normal.
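Since the weight component of that "good outcome" category is a purely mechanical 15% band, it can be expressed in a few lines. The sketch below checks only that component; the reference weight in the example is hypothetical, and real classification also requires menstruation status and population-specific norms:

```python
def within_morgan_russell_weight_band(weight_kg: float,
                                      average_weight_kg: float) -> bool:
    """True if weight is within 15% of the average body weight.

    Covers only the weight component of the Morgan-Russell "good
    outcome" category described above; menstruation and psychological
    health are assessed separately.
    """
    return abs(weight_kg - average_weight_kg) / average_weight_kg <= 0.15


# Hypothetical example: 50 kg against an average of 58 kg
# (about 13.8% below average, so inside the 15% band).
print(within_morgan_russell_weight_band(50.0, 58.0))  # True
```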
Complications
Anorexia nervosa can have serious implications if its duration and severity are significant and if onset occurs before the completion of growth, pubertal maturation, or the attainment of peak bone mass. Complications specific to adolescents and children with anorexia nervosa can include the following:
Growth retardation may occur, as height gain may slow and can stop completely with severe weight loss or chronic malnutrition. In such cases, provided that growth potential is preserved, height increase can resume and reach full potential after normal intake is resumed. Height potential is normally preserved if the duration and severity of illness are not significant or if the illness is accompanied by delayed bone age (especially prior to a bone age of approximately 15 years), as hypogonadism may partially counteract the effects of undernutrition on height by allowing for a longer duration of growth compared to controls. Appropriate early treatment can preserve height potential, and may even help to increase it in some post-anorexic subjects, due to factors such as long-term reduced levels of estrogen-producing adipose tissue compared to premorbid levels. In some cases, especially where onset is before puberty, complications such as stunted growth and pubertal delay are usually reversible.
Anorexia nervosa causes alterations in the female reproductive system; significant weight loss, as well as psychological stress and intense exercise, typically results in a cessation of menstruation in women who are past puberty. In patients with anorexia nervosa, there is a reduction of the secretion of gonadotropin-releasing hormone in the central nervous system, preventing ovulation. Anorexia nervosa can also result in pubertal delay or arrest. Both height gain and pubertal development are dependent on the release of growth hormone and gonadotropins (LH and FSH) from the pituitary gland. Suppression of gonadotropins in people with anorexia nervosa has been documented. Typically, growth hormone (GH) levels are high, but levels of IGF-1, the downstream hormone that should be released in response to GH, are low; this indicates a state of "resistance" to GH due to chronic starvation. IGF-1 is necessary for bone formation, and decreased levels in anorexia nervosa contribute to a loss of bone density and potentially to osteopenia or osteoporosis. Anorexia nervosa can also result in reduction of peak bone mass. Buildup of bone is greatest during adolescence, and if onset of anorexia nervosa occurs during this time and stalls puberty, low bone mass may be permanent.
Hepatic steatosis, or fatty infiltration of the liver, can also occur, and is an indicator of malnutrition in children. Neurological disorders that may occur as complications include seizures and tremors. Wernicke encephalopathy, which results from vitamin B1 deficiency, has been reported in patients who are extremely malnourished; symptoms include confusion, problems with the muscles responsible for eye movements, and abnormalities in walking gait.
The most common gastrointestinal complications of anorexia nervosa are delayed stomach emptying and constipation, but they also include elevated liver function tests, diarrhea, acute pancreatitis, heartburn, difficulty swallowing, and, rarely, superior mesenteric artery syndrome. Delayed stomach emptying, or gastroparesis, often develops following food restriction and weight loss; the most common symptom is bloating with gas and abdominal distension, which often occurs after eating. Other symptoms of gastroparesis include early satiety, fullness, nausea, and vomiting. The symptoms may inhibit efforts at eating and recovery, but can be managed by limiting high-fiber foods, using liquid nutritional supplements, or using metoclopramide to increase emptying of food from the stomach. Gastroparesis generally resolves when weight is regained.
Cardiac complications
Anorexia nervosa increases the risk of sudden cardiac death, though the precise cause is unknown. Cardiac complications include structural and functional changes to the heart. Some of these cardiovascular changes are mild and reversible with treatment, while others may be life-threatening. Cardiac complications can include arrhythmias, abnormally slow heart beat, low blood pressure, decreased size of the heart muscle, reduced heart volume, mitral valve prolapse, myocardial fibrosis, and pericardial effusion.
Abnormalities in conduction and repolarization of the heart that can result from anorexia nervosa include QT prolongation, increased QT dispersion, conduction delays, and junctional escape rhythms. Electrolyte abnormalities, particularly hypokalemia and hypomagnesemia, can cause anomalies in the electrical activity of the heart and result in life-threatening arrhythmias. Hypokalemia most commonly occurs in patients with anorexia when restriction is accompanied by purging (induced vomiting or laxative use). Hypotension (low blood pressure) is common, and symptoms include fatigue and weakness. Orthostatic hypotension, a marked decrease in blood pressure when standing from a supine position, may also occur; symptoms include lightheadedness upon standing, weakness, and cognitive impairment, and it may result in fainting or near-fainting. Orthostasis in anorexia nervosa indicates worsening cardiac function and may indicate a need for hospitalization. Hypotension and orthostasis generally resolve upon recovery to a normal weight. The weight loss in anorexia nervosa also causes atrophy of cardiac muscle, which leads to a decreased ability to pump blood, a reduction in the ability to sustain exercise, a diminished ability to increase blood pressure in response to exercise, and a subjective feeling of fatigue.
Some individuals may also have a decrease in cardiac contractility. Cardiac complications can be life-threatening, but the heart muscle generally improves with weight gain, and the heart normalizes in size over weeks to months with recovery. Atrophy of the heart muscle is a marker of the severity of the disease, and while it is reversible with treatment and refeeding, it is possible that it may cause permanent, microscopic changes to the heart muscle that increase the risk of sudden cardiac death. Individuals with anorexia nervosa may experience chest pain or palpitations; these can be a result of mitral valve prolapse, which occurs because the size of the heart muscle decreases while the tissue of the mitral valve remains the same size. Studies have shown rates of mitral valve prolapse of around 20 percent in those with anorexia nervosa, while the rate in the general population is estimated at 2–4 percent. It has been suggested that there is an association between mitral valve prolapse and sudden cardiac death, but it has not been proven to be causative, either in patients with anorexia nervosa or in the general population.
Relapse
Rates of relapse after treatment range from 9% to 52%, with many studies reporting a relapse rate of at least 25%. Relapse occurs in approximately a third of people treated in hospital and is greatest in the first six to eighteen months after release from an institution.
Epidemiology
Anorexia is estimated to occur in 0.9% to 4.3% of women and 0.2% to 0.3% of men in Western countries at some point in their life. About 0.4% of young females are affected in a given year, and it is estimated to occur three to ten times less commonly in males. Rates in most of the developing world are unclear. The disorder often begins during the teen years or young adulthood. Medical students are a high-risk group, with an overall estimated prevalence of 10.4% globally.
The lifetime rate of atypical anorexia nervosa, a form of ED-NOS in which the person loses a significant amount of weight and is at risk for serious medical complications despite having a higher body-mass index, is much higher, at 5–12%.
While anorexia became more commonly diagnosed during the 20th century, it is unclear if this was due to an increase in its frequency or simply better diagnosis. Most studies show that the incidence of AN in adult women has been fairly constant since at least 1970, while there is some indication that the incidence may have been increasing for girls aged between 14 and 20.
Underrepresentation
Eating disorders are less reported in preindustrial, non-westernized countries than in Western countries. In Africa, not including South Africa, the only data presenting information about eating disorders come from case reports and isolated studies, not from studies investigating prevalence. Western countries experience slightly higher rates of eating disorders than non-Western countries. Theories to explain these lower rates of eating disorders, lower reporting, and lower research rates include the effects of westernisation and culture change.
Men (and women) who might otherwise be diagnosed with anorexia may not meet the DSM-IV criteria for BMI, since they have muscle weight but very little fat. In many cases a subclinical "not otherwise specified" diagnosis is made instead: ED-NOS in the DSM-IV, and other specified feeding or eating disorder or unspecified feeding or eating disorder in the DSM-5. ED-NOS was the most diagnosed eating disorder in 2009, and it has been shown that AN did not differ significantly in eating pathology or general psychopathology from ED-NOS involving restrictive eating.
The elderly population is increasingly experiencing anorexia nervosa, which has been termed the "anorexia of aging". The eating disorder is similar to typical anorexia nervosa but is more often accompanied by the overuse of laxatives in order to purge the individual of ingested food. Most geriatric anorexia patients limit their food intake to dairy or grains, whereas adolescent anorexics have more general limitations. This eating disorder of older adults has two types: early onset and late onset. Early onset refers to a recurrence of anorexia in late life in an individual who experienced the disease during their youth; late onset describes instances where the eating disorder begins for the first time late in life.
The stimulus for anorexia in elderly patients is typically a loss of control over their lives, which can be brought on by many events, including moving into an assisted living facility. This is also a time when many older individuals experience a rise in conflict with family members, such as over limitations on driving or on personal freedom, which increases the likelihood of an issue with anorexia. There can be physical issues in the elderly that lead to anorexia of aging, including a decline in chewing ability, a decline in taste and smell, and a decrease in appetite. Psychological reasons for the elderly to develop anorexia can include depression and bereavement, and even an indirect attempt at suicide. Common comorbid psychiatric conditions in aging anorexics include major depression, anxiety disorder, obsessive-compulsive disorder, bipolar disorder, schizophrenia, and dementia. The signs and symptoms that go along with anorexia of aging are similar to those observed in adolescent anorexia, including sudden weight loss, unexplained hair loss or dental problems, and a desire to eat alone.
Several medical conditions can result from anorexia in the elderly. An increased risk of illness and death can be a result of anorexia, as can a decline in muscle and bone mass resulting from a reduction in protein intake. Other results of anorexia in the aging population are irreparable damage to the kidneys, heart, or colon and an imbalance of electrolytes. Many assessments are available to diagnose anorexia in the aging community, including the Simplified Nutritional Assessment Questionnaire (SNAQ) and the Functional Assessment of Anorexia/Cachexia Therapy (FAACT). Specific to the geriatric populace, the interRAI system identifies detrimental conditions in assisted living facilities and nursing homes. Even a simple screening for nutritional insufficiencies, such as low levels of important vitamins, can help to identify someone who has anorexia of aging. Anorexia in the elderly should be identified by retirement communities but is often overlooked, especially in patients with dementia. Some studies report that malnutrition is prevalent in nursing homes, with up to 58% of residents suffering from it, which can make identifying anorexia difficult. One of the challenges with assisted living facilities is that they often serve bland, monotonous food, which lessens residents' desire to eat. The treatment for anorexia of aging is the same as for any other age group; options include outpatient and inpatient facilities, antidepressant medication, and behavioral therapy such as meal observation and discussing eating habits.
Male and female athletes are often overlooked as anorexic. Research emphasizes the importance of taking athletes' diet, weight, and symptoms into account when diagnosing anorexia, instead of looking only at weight and BMI. For athletes, ritualized activities such as weigh-ins place emphasis on gaining and losing large amounts of weight, which may promote the development of eating disorders among them. While women more often use diet pills, an indicator of unhealthy behavior and of an eating disorder, men more often use steroids, reflecting differing beauty ideals for the genders. </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anthrax and can you explain it in depth? [/INST] Anthrax is an infection caused by the bacterium Bacillus anthracis. It can occur in four forms: skin, lungs, intestinal, and injection. Symptom onset occurs between one day and more than two months after the infection is contracted. The skin form presents with a small blister with surrounding swelling that often turns into a painless ulcer with a black center. The inhalation form presents with fever, chest pain, and shortness of breath. The intestinal form presents with diarrhea (which may contain blood), abdominal pains, nausea, and vomiting. The injection form presents with fever and an abscess at the site of drug injection. According to the Centers for Disease Control, the first clinical descriptions of cutaneous anthrax were given by Maret in 1752 and Fournier in 1769. Before that, anthrax had been described only through historical accounts. The Prussian scientist Robert Koch (1843–1910) was the first to identify Bacillus anthracis as the bacterium that causes anthrax.
Anthrax is spread by contact with the bacterium's spores, which often appear in infectious animal products. Contact is by breathing, eating, or through an area of broken skin. It does not typically spread directly between people. Risk factors include people who work with animals or animal products, travelers, and military personnel. Diagnosis can be confirmed by finding antibodies or the toxin in the blood, or by culture of a sample from the infected site.
Anthrax vaccination is recommended for people at high risk of infection. Immunizing animals against anthrax is recommended in areas where previous infections have occurred. A two-month course of antibiotics such as ciprofloxacin, levofloxacin, and doxycycline after exposure can also prevent infection. If infection occurs, treatment is with antibiotics and possibly antitoxin. The type and number of antibiotics used depend on the type of infection. Antitoxin is recommended for those with widespread infection.
A rare disease, human anthrax is most common in Africa and central and southern Asia. It also occurs more regularly in Southern Europe than elsewhere on the continent and is uncommon in Northern Europe and North America. Globally, at least 2,000 cases occur a year, with about two cases a year in the United States. Skin infections represent more than 95% of cases. Without treatment, the risk of death from skin anthrax is 23.7%. For intestinal infection the risk of death is 25 to 75%, while respiratory anthrax has a mortality of 50 to 80%, even with treatment. Until the 20th century, anthrax infections killed hundreds of thousands of people and animals each year. Anthrax has been developed as a weapon by a number of countries. In herbivorous animals, infection occurs when they eat or breathe in the spores while grazing. Animals may also become infected by killing or eating infected animals.
Etymology
The English name comes from anthrax (ἄνθραξ), the Greek word for coal, possibly of Egyptian etymology, because of the characteristic black skin lesions developed by people with a cutaneous anthrax infection. The central black eschar surrounded by vivid red skin has long been recognised as typical of the disease. The first recorded use of the word "anthrax" in English is in a 1398 translation of Bartholomaeus Anglicus' work De proprietatibus rerum (On the Properties of Things, 1240).
Anthrax was historically known by a wide variety of names indicating its symptoms, location, and the groups considered most vulnerable to infection. These included Siberian plague, Cumberland disease, charbon, splenic fever, malignant edema, woolsorters' disease, and la maladie de Bradford.
Signs and symptoms
Skin
Cutaneous anthrax, also known as hide-porter's disease, is anthrax occurring on the skin. It is the most common form (>90% of anthrax cases) and the least dangerous (low mortality with treatment; 23.7% mortality without). Cutaneous anthrax presents as a boil-like skin lesion that eventually forms an ulcer with a black center (eschar). The black eschar often shows up as a large, painless, necrotic ulcer (beginning as an irritating and itchy skin lesion or blister that is dark and usually concentrated as a black dot, somewhat resembling bread mold) at the site of infection. In general, cutaneous infections form within the site of spore penetration between two and five days after exposure. Unlike bruises or most other lesions, cutaneous anthrax infections normally do not cause pain. Nearby lymph nodes may become infected, reddened, swollen, and painful. A scab soon forms over the lesion and falls off in a few weeks. Complete recovery may take longer.
Cutaneous anthrax is typically caused when B. anthracis spores enter through cuts on the skin. This form is found most commonly when humans handle infected animals and/or animal products.
Injection
In December 2009, an outbreak of anthrax occurred among injecting heroin users in the Glasgow and Stirling areas of Scotland, resulting in 14 deaths. The source of the anthrax is believed to have been dilution of the heroin with bone meal in Afghanistan. Injected anthrax may have symptoms similar to cutaneous anthrax, and may also cause infection deep into the muscle and spread faster.
Lungs
Inhalation anthrax usually develops within a week after exposure, but may take up to two months. During the first few days of illness, most people have fever, chills, and fatigue. These symptoms may be accompanied by cough, shortness of breath, chest pain, and nausea or vomiting, making inhalation anthrax difficult to distinguish from influenza and community-acquired pneumonia. This is often described as the prodromal period.
Over the next day or so, shortness of breath, cough, and chest pain become more common, and complaints not involving the chest, such as nausea, vomiting, altered mental status, sweats, and headache, develop in one-third or more of people. Upper respiratory tract symptoms occur in only a quarter of people, and muscle pains are rare. Altered mental status or shortness of breath generally brings people to healthcare and marks the fulminant phase of illness.
The infection first involves the lymph nodes in the chest rather than the lungs themselves, a condition called hemorrhagic mediastinitis, causing bloody fluid to accumulate in the chest cavity and thereby shortness of breath. The second (pneumonia) stage occurs when the infection spreads from the lymph nodes to the lungs. Symptoms of the second stage develop suddenly within hours or days after the first stage. Symptoms include high fever, extreme shortness of breath, shock, and rapid death within 48 hours in fatal cases.
Gastrointestinal
Gastrointestinal (GI) infection is most often caused by consuming anthrax-infected meat and is characterized by diarrhea, potentially with blood, abdominal pains, acute inflammation of the intestinal tract, and loss of appetite. Occasional vomiting of blood can occur. Lesions have been found in the intestines and in the mouth and throat. After the bacterium invades the gastrointestinal system, it spreads to the bloodstream and throughout the body, while continuing to make toxins.
Cause
Bacteria
Bacillus anthracis is a rod-shaped, Gram-positive, facultative anaerobic bacterium about 1 by 9 μm in size. It was shown to cause disease by Robert Koch in 1876, when he took a blood sample from an infected cow, isolated the bacteria, and put them into a mouse. The bacterium normally rests in spore form in the soil, and can survive for decades in this state. Herbivores are often infected while grazing, especially when eating rough, irritant, or spiky vegetation; the vegetation has been hypothesized to cause wounds within the GI tract, permitting entry of the bacterial spores into the tissues. Once ingested or placed in an open wound, the bacteria begin multiplying inside the animal or human and typically kill the host within a few days or weeks. The spores germinate at the site of entry into the tissues and then spread via the circulation to the lymphatics, where the bacteria multiply.
The production by the bacteria of two powerful exotoxins, edema toxin and lethal toxin, causes death. Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence and by the dark, nonclotting blood that oozes from the body orifices. Most anthrax bacteria inside the body after death are outcompeted and destroyed by anaerobic bacteria within minutes to hours post mortem. However, anthrax vegetative bacteria that escape the body via oozing blood or through the opening of the carcass may form hardy spores. These vegetative bacteria are not contagious. One spore forms per vegetative bacterium. The triggers for spore formation are not yet known, though oxygen tension and lack of nutrients may play roles. Once formed, these spores are very hard to eradicate.
The infection of herbivores (and occasionally humans) by the inhalational route normally begins with inhaled spores being transported through the air passages into the tiny air sacs (alveoli) in the lungs. The spores are then picked up by scavenger cells (macrophages) in the lungs and are transported through small vessels (lymphatics) to the lymph nodes in the central chest cavity (mediastinum). Damage caused by the anthrax spores and bacilli to the central chest cavity can cause chest pain and difficulty breathing. Once in the lymph nodes, the spores germinate into active bacilli that multiply and eventually burst the macrophages, releasing many more bacilli into the bloodstream to be transferred to the entire body. Once in the bloodstream, these bacilli release three proteins named lethal factor, edema factor, and protective antigen. The three are not toxic by themselves, but their combination is incredibly lethal to humans. Protective antigen combines with these other two factors to form lethal toxin and edema toxin, respectively. These toxins are the primary agents of tissue destruction, bleeding, and death of the host. If antibiotics are administered too late, even if they eradicate the bacteria, some hosts still die of toxemia, because the toxins produced by the bacilli remain in their systems at lethal dose levels.
Exposure
The spores of anthrax are able to survive in harsh conditions for decades or even centuries. Such spores can be found on all continents, including Antarctica. Disturbed grave sites of infected animals have been known to cause infection after 70 years.
Historically, inhalational anthrax was called woolsorters' disease because it was an occupational hazard for people who sorted wool. Today, this form of infection is extremely rare in advanced nations, as almost no infected animals remain.
Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual pathway of exposure for humans. Workers who are exposed to dead animals and animal products are at the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on open range where they mix with wild animals still occasionally occurs in the United States and elsewhere.
Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores, but most exposure levels are not sufficient to develop anthrax infections. A lethal infection is reported to result from inhalation of about 10,000–20,000 spores, though this dose varies among host species. Little documented evidence is available to verify the exact or average number of spores needed for infection.
Mode of infection
Anthrax can enter the human body through the intestines (ingestion), lungs (inhalation), or skin (cutaneous), and causes distinct clinical symptoms based on its site of entry. In general, an infected human is quarantined, although anthrax does not usually spread from an infected human to an uninfected human. If the disease is fatal, though, the person's body and its mass of anthrax bacilli become a potential source of infection to others, and special precautions should be used to prevent further contamination. Inhalational anthrax, if left untreated until obvious symptoms occur, is usually fatal.
Anthrax can be contracted in laboratory accidents or by handling infected animals, their wool, or their hides. It has also been used as a biological warfare agent and by terrorists to intentionally infect people, as exemplified by the 2001 anthrax attacks.
Mechanism
The lethality of the anthrax disease is due to the bacterium's two principal virulence factors: the poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils, and the tripartite protein toxin, called anthrax toxin. The components of anthrax toxin are protective antigen (PA), edema factor (EF), and lethal factor (LF). PA plus LF produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue swelling (edema), respectively.
To enter the cells, the edema and lethal factors use another protein produced by B. anthracis called protective antigen, which binds to two surface receptors on the host cell. A cell protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments forming a heptameric ring-shaped structure named a prepore. Once in this shape, the complex can competitively bind up to three EFs or LFs, forming a resistant complex. Receptor-mediated endocytosis occurs next, providing the newly formed toxic complex access to the interior of the host cell. The acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the cytosol. It is unknown how exactly the complex results in the death of the cell.
Edema factor is a calmodulin-dependent adenylate cyclase. Adenylate cyclase catalyzes the conversion of ATP into cyclic AMP (cAMP) and pyrophosphate. The complexation of adenylate cyclase with calmodulin removes calmodulin from stimulating calcium-triggered signaling, thus inhibiting the immune response. To be specific, EF inactivates neutrophils (a type of phagocytic cell) by the process just described, so they cannot phagocytose bacteria. Throughout history, lethal factor was presumed to cause macrophages to make TNF-alpha and interleukin-1 beta (IL1B). TNF-alpha is a cytokine whose primary role is to regulate immune cells, as well as to induce inflammation and apoptosis, or programmed cell death. Interleukin-1 beta is another cytokine that also regulates inflammation and apoptosis. The overproduction of TNF-alpha and IL1B ultimately leads to septic shock and death. However, recent evidence indicates anthrax also targets the endothelial cells that line serous cavities such as the pericardial cavity, pleural cavity, and peritoneal cavity, lymph vessels, and blood vessels, causing vascular leakage of fluid and cells, and ultimately hypovolemic shock and septic shock.
Diagnosis
Various techniques may be used for the direct identification of B. anthracis in clinical material. Firstly, specimens may be Gram stained. Bacillus spp. are quite large (3 to 4 μm long), they may grow in long chains, and they stain Gram-positive. To confirm the organism is B. anthracis, rapid diagnostic techniques such as polymerase chain reaction-based assays and immunofluorescence microscopy may be used.
All Bacillus species grow well on 5% sheep blood agar and other routine culture media. Polymyxin-lysozyme-EDTA-thallous acetate can be used to isolate B. anthracis from contaminated specimens, and bicarbonate agar is used as an identification method to induce capsule formation. Bacillus spp. usually grow within 24 hours of incubation at 35 °C, in ambient air (room temperature) or in 5% CO2. If bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2. B. anthracis colonies are medium-large, gray, flat, and irregular, with swirling projections, often referred to as having a "medusa head" appearance, and are not hemolytic on 5% sheep blood agar. The bacteria are not motile, are susceptible to penicillin, and produce a wide zone of lecithinase on egg yolk agar. Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect hemagglutination, and enzyme-linked immunosorbent assay to detect antibodies. The best confirmatory precipitation test for anthrax is the Ascoli test.
Prevention
Precautions are taken to avoid contact with the skin and any fluids exuded through natural body openings of a deceased body suspected of harboring anthrax. The body should be put in strict quarantine. A blood sample is collected, sealed in a container, and analyzed in an approved laboratory to ascertain whether anthrax is the cause of death. The body should be sealed in an airtight body bag and incinerated to prevent the transmission of anthrax spores. Microscopic visualization of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome methylene blue (McFadyean stain) is fully diagnostic, though culture of the organism is still the gold standard for diagnosis. Full isolation of the body is important to prevent possible contamination of others.
Protective, impermeable clothing and equipment such as rubber gloves, rubber aprons, and rubber boots with no perforations are used when handling the body. No skin, especially if it has any wounds or scratches, should be exposed. Disposable personal protective equipment is preferable, but if not available, decontamination can be achieved by autoclaving. Used disposable equipment is burned and/or buried after use. All contaminated bedding or clothing is isolated in double plastic bags and treated as biohazard waste. Respiratory equipment capable of filtering small particles, such as the US National Institute for Occupational Safety and Health- and Mine Safety and Health Administration-approved high-efficiency respirator, is worn.
Vaccines
Vaccines against anthrax for use in livestock and humans have had a prominent place in the history of medicine. The French scientist Louis Pasteur developed the first effective vaccine in 1881. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current FDA-approved US vaccine was formulated in the 1960s.
Currently administered human anthrax vaccines include acellular (United States) and live vaccine (Russia) varieties. All currently used anthrax vaccines show considerable local and general reactogenicity (erythema, induration, soreness, fever), and serious adverse reactions occur in about 1% of recipients. The American product, BioThrax, is licensed by the FDA and was formerly administered in a six-dose primary series at 0, 2, and 4 weeks and 6, 12, and 18 months, with annual boosters to maintain immunity. In 2008, the FDA approved omitting the week-2 dose, resulting in the currently recommended five-dose series. New second-generation vaccines currently being researched include recombinant live vaccines and recombinant subunit vaccines. In the 20th century, the use of a modern product (BioThrax) to protect American troops against the use of anthrax in biological warfare was controversial.
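Since the five-dose series above is defined by fixed offsets from the first dose, it can be illustrated with a small date calculation. This is a sketch only: months are approximated as 30 days, the start date is hypothetical, and actual schedules are set clinically:

```python
from datetime import date, timedelta


def biothrax_five_dose_schedule(first_dose: date) -> list[date]:
    """Approximate dates for the five-dose series described above
    (0 and 4 weeks, then 6, 12, and 18 months after the first dose).

    Months are approximated as 30 days, so later dates are only
    illustrative; annual boosters are not included.
    """
    offsets_days = [0, 4 * 7, 6 * 30, 12 * 30, 18 * 30]
    return [first_dose + timedelta(days=d) for d in offsets_days]


# Hypothetical first dose on 1 January 2024:
for d in biothrax_five_dose_schedule(date(2024, 1, 1)):
    print(d.isoformat())
```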
Antibiotics
Preventive antibiotics are recommended in those who have been exposed. Early detection of sources of anthrax infection can allow preventive measures to be taken. In response to the anthrax attacks of October 2001, the United States Postal Service (USPS) installed biodetection systems (BDSs) in their large-scale mail processing facilities. BDS response plans were formulated by the USPS in conjunction with local responders including fire, police, hospitals, and public health. Employees of these facilities have been educated about anthrax, response actions, and prophylactic medication. Because of the time delay inherent in getting final verification that anthrax has been used, prophylactic antibiotic treatment of possibly exposed personnel must be started as soon as possible.
Treatment
Anthrax cannot be spread from person to person, except in the rare case of skin exudates from cutaneous anthrax. However, a person's clothing and body may be contaminated with anthrax spores. Effective decontamination of people can be accomplished by a thorough wash-down with antimicrobial soap and water. Wastewater is treated with bleach or another antimicrobial agent. Effective decontamination of articles can be accomplished by boiling them in water for 30 minutes or longer. Chlorine bleach is ineffective in destroying spores and vegetative cells on surfaces, though formaldehyde is effective. Burning clothing is very effective in destroying spores. After decontamination, there is no need to immunize, treat, or isolate contacts of persons ill with anthrax unless they were also exposed to the same source of infection.
Antibiotics
Early antibiotic treatment of anthrax is essential; delay significantly lessens chances for survival. Treatment for anthrax infection and other bacterial infections includes large doses of intravenous and oral antibiotics, such as fluoroquinolones (ciprofloxacin), doxycycline, erythromycin, vancomycin, or penicillin. FDA-approved agents include ciprofloxacin, doxycycline, and penicillin. In possible cases of pulmonary anthrax, early antibiotic prophylaxis treatment is crucial to prevent possible death. Many attempts have been made to develop new drugs against anthrax, but existing drugs are effective if treatment is started soon enough.
Monoclonal antibodies
In May 2009, Human Genome Sciences submitted a biologic license application (BLA, permission to market) for its new drug, raxibacumab (brand name ABthrax) intended for emergency treatment of inhaled anthrax. On 14 December 2012, the US Food and Drug Administration approved raxibacumab injection to treat inhalational anthrax. Raxibacumab is a monoclonal antibody that neutralizes toxins produced by B. anthracis. In March 2016, FDA approved a second anthrax treatment using a monoclonal antibody which neutralizes the toxins produced by B. anthracis. Obiltoxaximab is approved to treat inhalational anthrax in conjunction with appropriate antibacterial drugs, and for prevention when alternative therapies are not available or appropriate.
Prognosis
Cutaneous anthrax is rarely fatal if treated, because the infection area is limited to the skin, preventing the lethal factor, edema factor, and protective antigen from entering and destroying a vital organ. Without treatment, about 20% of cutaneous skin infection cases progress to toxemia and death.
Before 2001, fatality rates for inhalation anthrax were 90%; since then, they have fallen to 45%. People who progress to the fulminant phase of inhalational anthrax nearly always die, with one case study showing a death rate of 97%. Anthrax meningoencephalitis is also nearly always fatal.
GI infections can be treated, but usually result in fatality rates of 25% to 60%, depending upon how soon treatment commences. This form of anthrax is the rarest.
Epidemiology
Globally, at least 2,000 cases occur a year.
United States
The last fatal case of natural inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after working with infected wool imported from Pakistan. To minimize the chance of spreading the disease, the body was transported to UCLA in a sealed plastic body bag within a sealed metal container for autopsy.
Gastrointestinal anthrax is exceedingly rare in the United States, with only two cases on record. The first case was reported in 1942, according to the Centers for Disease Control and Prevention. During December 2009, the New Hampshire Department of Health and Human Services confirmed a case of gastrointestinal anthrax in an adult female.
In 2007, two cases of cutaneous anthrax were reported in Danbury, Connecticut. The cases involved a maker of traditional African-style drums who was working with a goat hide purchased from a dealer in New York City that had previously been cleared by Customs. While the hide was being scraped, a spider bite led to the spores entering the bloodstream. His son also became infected.
The CDC investigated the source of the December 2009 infection and the possibility that it was contracted from an African drum recently used by the woman while taking part in a drum circle. The woman apparently inhaled anthrax, in spore form, from the hide of the drum. She became critically ill, but with gastrointestinal anthrax rather than inhaled anthrax, which made her unique in American medical history. The building where the infection took place was cleaned and reopened to the public, and the woman recovered. The New Hampshire state epidemiologist, Jodie Dionne-Odom, stated, "It is a mystery. We really don't know why it happened."
Croatia
In July 2022, dozens of cattle in a nature park in Lonjsko Polje, a flood plain by the Sava river, died of anthrax, and six people were hospitalized with mild, skin-related symptoms.
United Kingdom
In November 2008, a drum maker in the United Kingdom who worked with untreated animal skins died from anthrax. In December 2009, an outbreak of anthrax occurred among heroin addicts in the Glasgow and Stirling areas of Scotland, resulting in 14 deaths. The source of the anthrax is believed to have been dilution of the heroin with bone meal in Afghanistan.
History
Discovery
Robert Koch, a German physician and scientist, first identified the bacterium that causes anthrax in 1875 in Wollstein (now Wolsztyn, a town in Poland). His pioneering work in the late 19th century was one of the first demonstrations that diseases could be caused by microbes. In a groundbreaking series of experiments, he uncovered the lifecycle and means of transmission of anthrax. His experiments not only helped create an understanding of anthrax, but also helped elucidate the role of microbes in causing illness at a time when debates still took place over spontaneous generation versus cell theory. Koch went on to study the mechanisms of other diseases and won the 1905 Nobel Prize in Physiology or Medicine for his discovery of the bacterium causing tuberculosis.
Although Koch arguably made the greatest theoretical contribution to understanding anthrax, other researchers were more concerned with the practical questions of how to prevent the disease. In Britain, where anthrax affected workers in the wool, worsted, hides, and tanning industries, it was viewed with fear. John Henry Bell, a doctor born and based in Bradford, first made the link between the mysterious and deadly "woolsorters' disease" and anthrax, showing in 1878 that they were one and the same. In the early 20th century, Friederich Wilhelm Eurich, the German bacteriologist who settled in Bradford with his family as a child, carried out important research for the local Anthrax Investigation Board. Eurich also made valuable contributions to a Home Office Departmental Committee of Inquiry, established in 1913 to address the continuing problem of industrial anthrax. His work in this capacity, much of it in collaboration with the factory inspector G. Elmhirst Duckering, led directly to the Anthrax Prevention Act (1919).
First vaccination
Anthrax posed a major economic challenge in France and elsewhere during the 19th century. Horses, cattle, and sheep were particularly vulnerable, and national funds were set aside to investigate the production of a vaccine. The French scientist Louis Pasteur was charged with the production of a vaccine, following his successful work in developing methods that helped to protect the important wine and silk industries.
In May 1881, Pasteur, in collaboration with his assistants Jean-Joseph Henri Toussaint, Émile Roux, and others, performed a public experiment at Pouilly-le-Fort to demonstrate his concept of vaccination. He prepared two groups of 25 sheep, one goat, and several cattle. The animals of one group were injected twice with an anthrax vaccine prepared by Pasteur, at an interval of 15 days; the control group was left unvaccinated. Thirty days after the first injection, both groups were injected with a culture of live anthrax bacteria. All the animals in the unvaccinated group died, while all of the animals in the vaccinated group survived.
After this apparent triumph, which was widely reported in the local, national, and international press, Pasteur made strenuous efforts to export the vaccine beyond France. He used his celebrity status to establish Pasteur Institutes across Europe and Asia, and his nephew, Adrien Loir, travelled to Australia in 1888 to try to introduce the vaccine to combat anthrax in New South Wales. Ultimately, the vaccine was unsuccessful in the challenging climate of rural Australia, and it was soon superseded by a more robust version developed by local researchers John Gunn and John McGarvie Smith.
The human vaccine for anthrax became available in 1954. This was a cell-free vaccine, instead of the live-cell Pasteur-style vaccine used for veterinary purposes. An improved cell-free vaccine became available in 1970.
Engineered strains
The Sterne strain of anthrax, named after the Trieste-born immunologist Max Sterne, is an attenuated strain used as a vaccine; it contains only the anthrax toxin virulence plasmid and not the polyglutamic acid capsule-expressing plasmid.
Strain 836, created by the Soviet bioweapons program in the 1980s, was later called by the Los Angeles Times "the most virulent and vicious strain of anthrax known to man".
The virulent Ames strain, which was used in the 2001 anthrax attacks in the United States, has received the most news coverage of any anthrax outbreak. The Ames strain contains two virulence plasmids, which separately encode for a three-protein toxin, called anthrax toxin, and a polyglutamic acid capsule.
Nonetheless, the Vollum strain, developed but never used as a biological weapon during the Second World War, is much more dangerous. The Vollum (also incorrectly referred to as Vellum) strain was isolated in 1935 from a cow in Oxfordshire. This same strain was used during the Gruinard bioweapons trials. A variation of Vollum, known as "Vollum 1B", was used during the 1960s in the US and UK bioweapon programs. Vollum 1B is widely believed to have been isolated from William A. Boyles, a 46-year-old scientist at the US Army Biological Warfare Laboratories at Camp (later Fort) Detrick, Maryland, who died in 1951 after being accidentally infected with the Vollum strain.
US Air Force researchers have developed a vaccine strain to produce an improved anthrax vaccine which requires a minimal number of injections to achieve and maintain long-term immunity. It is designated as the Alls/Gifford (Curlicue) strain.
Society and culture
Site cleanup
Anthrax spores can survive for very long periods of time in the environment after release. Chemical methods for cleaning anthrax-contaminated sites or materials may use oxidizing agents such as peroxides, ethylene oxide, Sandia Foam, chlorine dioxide (used in the Hart Senate Office Building), peracetic acid, ozone gas, hypochlorous acid, sodium persulfate, and liquid bleach products containing sodium hypochlorite. Nonoxidizing agents shown to be effective for anthrax decontamination include methyl bromide, formaldehyde, and metam sodium. These agents destroy bacterial spores. All of the aforementioned anthrax decontamination technologies have been demonstrated to be effective in laboratory tests conducted by the US EPA or others.
Decontamination techniques for Bacillus anthracis spores are affected by the material with which the spores are associated, environmental factors such as temperature and humidity, and microbiological factors such as the spore species, anthracis strain, and test methods used.
A bleach solution for treating hard surfaces has been approved by the EPA. Chlorine dioxide has emerged as the preferred biocide against anthrax-contaminated sites, having been employed in the treatment of numerous government buildings over the past decade. Its chief drawback is the need for in situ processes to have the reactant on demand.
To speed the process, trace amounts of a nontoxic catalyst composed of iron and tetra-amido macrocyclic ligands are combined with sodium carbonate and bicarbonate and converted into a spray. The spray formula is applied to an infested area and is followed by another spray containing tert-butyl hydroperoxide. Using the catalyst method, complete destruction of all anthrax spores can be achieved in under 30 minutes; a standard catalyst-free spray destroys fewer than half the spores in the same amount of time.
Cleanups at a Senate Office Building, several contaminated postal facilities, and other US government and private office buildings, in a collaborative effort headed by the Environmental Protection Agency, showed decontamination to be possible, but time-consuming and costly. Clearing the Senate Office Building of anthrax spores cost $27 million, according to the Government Accountability Office. Cleaning the Brentwood postal facility in Washington cost $130 million and took 26 months. Since then, newer and less costly methods have been developed.
Cleanup of anthrax-contaminated areas on ranches and in the wild is much more problematic. Carcasses may be burned, though often three days are needed to burn a large carcass, and this is not feasible in areas with little wood. Carcasses may also be buried, though burying large animals deeply enough to prevent resurfacing of spores requires much manpower and expensive tools. Carcasses have been soaked in formaldehyde to kill spores, though this has environmental contamination issues. Block burning of vegetation in large areas enclosing an anthrax outbreak has been tried; this, while environmentally destructive, causes healthy animals to move away from an area with carcasses in search of fresh grass. Some wildlife workers have experimented with covering fresh anthrax carcasses with shadecloth and heavy objects. This prevents some scavengers from opening the carcasses, thus allowing the putrefactive bacteria within the carcass to kill the vegetative B. anthracis cells and prevent sporulation. This method also has drawbacks, as scavengers such as hyenas are capable of infiltrating almost any exclosure.
The experimental site at Gruinard Island is said to have been decontaminated with a mixture of formaldehyde and seawater by the Ministry of Defence. It is not clear whether similar treatments have been applied to US test sites.
Biological warfare
Anthrax spores have been used as a biological warfare weapon. Their first modern incidence occurred when Nordic rebels, supplied by the German General Staff, used anthrax with unknown results against the Imperial Russian Army in Finland in 1916. Anthrax was first tested as a biological warfare agent by Unit 731 of the Japanese Kwantung Army in Manchuria during the 1930s; some of this testing involved intentional infection of prisoners of war, thousands of whom died. Anthrax, designated at the time as Agent N, was also investigated by the Allies in the 1940s.
A long history of practical bioweapons research exists in this area. For example, in 1942, British bioweapons trials severely contaminated Gruinard Island in Scotland with anthrax spores of the Vollum-14578 strain, making it a no-go area until it was decontaminated in 1990. The Gruinard trials involved testing the effectiveness of a submunition of an "N-bomb", a biological weapon containing dried anthrax spores. Additionally, five million "cattle cakes" (animal feed pellets impregnated with anthrax spores) were prepared and stored at Porton Down for "Operation Vegetarian", anti-livestock attacks against Germany to be made by the Royal Air Force. The plan was for anthrax-based biological weapons to be dropped on Germany in 1944. However, the edible cattle cakes and the bomb were not used; the cattle cakes were incinerated in late 1945.
Weaponized anthrax was part of the US stockpile prior to 1972, when the United States signed the Biological Weapons Convention. President Nixon ordered the dismantling of US biowarfare programs in 1969 and the destruction of all existing stockpiles of bioweapons. In 1978–79, the Rhodesian government used anthrax against cattle and humans during its campaign against rebels. The Soviet Union created and stored 100 to 200 tons of anthrax spores at Kantubek on Vozrozhdeniya Island; they were abandoned in 1992 and destroyed in 2002.American military and British Army personnel are no longer routinely vaccinated against anthrax prior to active service in places where biological attacks are considered a threat.
Sverdlovsk incident (2 April 1979)
Despite signing the 1972 agreement to end bioweapon production, the government of the Soviet Union had an active bioweapons program that included the production of hundreds of tons of anthrax after this period. On 2 April 1979, some of the over one million people living in Sverdlovsk (now called Ekaterinburg, Russia), about 1,370 kilometres (850 mi) east of Moscow, were exposed to an accidental release of anthrax from a biological weapons complex located near there. At least 94 people were infected, of whom at least 68 died. One victim died four days after the release, ten died over an eight-day period at the peak of the outbreak, and the last died six weeks later. Extensive cleanup, vaccinations, and medical interventions managed to save about 30 of the victims. Extensive cover-ups and destruction of records by the KGB continued from 1979 until Russian President Boris Yeltsin admitted this anthrax accident in 1992. Jeanne Guillemin reported in 1999 that a combined Russian and United States team investigated the accident in 1992. Nearly all of the night-shift workers of a ceramics plant directly across the street from the biological facility (compound 19) became infected, and most died. Since most were men, some NATO governments suspected the Soviet Union had developed a sex-specific weapon. The government blamed the outbreak on the consumption of anthrax-tainted meat and ordered the confiscation of all uninspected meat that entered the city. It also ordered all stray dogs to be shot and prohibited contact with sick animals. In addition, a voluntary evacuation and anthrax vaccination program was established for people aged 18 to 55. To support the cover-up story, Soviet medical and legal journals published articles about an outbreak in livestock that caused gastrointestinal anthrax in people who had consumed infected meat, and cutaneous anthrax in people who had come into contact with the animals. All medical and public health records were confiscated by the KGB. In addition to the medical problems the outbreak caused, it also prompted Western countries to be more suspicious of a covert Soviet bioweapons program and to increase their surveillance of suspected sites. In 1986, the US government was allowed to investigate the incident, and concluded the exposure was from aerosol anthrax released by a military weapons facility. In 1992, President Yeltsin admitted he was "absolutely certain" that "rumors" about the Soviet Union violating the 1972 Bioweapons Treaty were true. The Soviet Union, like the US and UK, had agreed to submit information to the UN about its bioweapons programs, but omitted known facilities and never acknowledged its weapons program.
Anthrax bioterrorism
In theory, anthrax spores can be cultivated with minimal specialized equipment and a first-year college-level education in microbiology.
Making large amounts of an aerosol form of anthrax suitable for biological warfare, however, requires extensive practical knowledge, training, and highly advanced equipment. Concentrated anthrax spores were used for bioterrorism in the 2001 anthrax attacks in the United States, delivered by mailing postal letters containing the spores. The letters were sent to several news media offices and two Democratic senators: Tom Daschle of South Dakota and Patrick Leahy of Vermont. As a result, 22 people were infected and five died </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anticholinergic and can you explain it in depth? [/INST] Anticholinergics (anticholinergic agents) are substances that block the action of the neurotransmitter acetylcholine (ACh) at synapses in the central and peripheral nervous system. These agents inhibit the parasympathetic nervous system by selectively blocking the binding of ACh to its receptor in nerve cells. The nerve fibers of the parasympathetic system are responsible for the involuntary movement of smooth muscles present in the gastrointestinal tract, urinary tract, lungs, sweat glands, and many other parts of the body. In broad terms, anticholinergics are divided into two categories in accordance with their specific targets in the central and peripheral nervous system and at the neuromuscular junction: antimuscarinic agents, and antinicotinic agents (ganglionic blockers, neuromuscular blockers). The term "anticholinergic" is typically used to refer to antimuscarinics, which competitively inhibit the binding of ACh to muscarinic acetylcholine receptors; such agents do not antagonize binding at nicotinic acetylcholine receptors at the neuromuscular junction, although the term is sometimes used to refer to agents which do so.
Medical uses
Anticholinergic drugs are used to treat a variety of conditions:
Dizziness (including vertigo and motion sickness-related symptoms)
Extrapyramidal symptoms, a potential side-effect of antipsychotic medications
Gastrointestinal disorders (e.g., peptic ulcers, diarrhea, pylorospasm, diverticulitis, ulcerative colitis, nausea, and vomiting)
Genitourinary disorders (e.g., cystitis, urethritis, and prostatitis)
Insomnia, although usually only on a short-term basis
Respiratory disorders (e.g., asthma, chronic bronchitis, and chronic obstructive pulmonary disease [COPD])
Sinus bradycardia due to a hypersensitive vagus nerve
Organophosphate-based nerve agent poisoning, such as by VX, sarin, tabun, and soman (atropine is favoured in conjunction with an oxime, usually pralidoxime). Anticholinergics generally have antisialagogue effects (decreasing saliva production), and most produce some level of sedation, both being advantageous in surgical procedures. Until the beginning of the 20th century, anticholinergic drugs were widely used to treat psychiatric disorders.
Physiological effects
Delirium (often with hallucinations and delusions indistinguishable from reality)
Ocular symptoms (from eye drops): mydriasis, pupil dilation, and acute angle-closure glaucoma in those with shallow anterior chamber
Anhidrosis, dry mouth, dry skin
Fever
Constipation
Tachycardia
Urinary retention
Cutaneous vasodilation
Clinically, the most significant feature is delirium, particularly in the elderly, who are most likely to be affected by the toxidrome.
Side effects
Long-term use may increase the risk of both cognitive and physical decline. It is unclear whether anticholinergics affect the risk of death generally; however, in older adults they do appear to increase the risk of death. Possible effects of anticholinergics include the following:
Possible effects in the central nervous system resemble those associated with delirium.
Older patients are at a higher risk of experiencing CNS side effects.
Toxicity
An acute anticholinergic syndrome is reversible and subsides once all of the causative agents have been excreted. Reversible acetylcholinesterase inhibitors such as physostigmine can be used as an antidote in life-threatening cases. Wider use is discouraged due to the significant side effects related to cholinergic excess, including seizures, muscle weakness, bradycardia, bronchoconstriction, lacrimation, salivation, bronchorrhea, vomiting, and diarrhea. Even in documented cases of anticholinergic toxicity, seizures have been reported after the rapid administration of physostigmine </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Antisialagogue and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Aphthous stomatitis and can you explain it in depth? [/INST] Aphthous stomatitis, or recurrent aphthous stomatitis (RAS), is a common condition characterized by the repeated formation of benign and non-contagious mouth ulcers (aphthae) in otherwise healthy individuals. The informal term canker sore is also used, mainly in North America, although it may also refer to other types of mouth ulcers. The cause is not completely understood but involves a T cell-mediated immune response triggered by a variety of factors which may include nutritional deficiencies, local trauma, stress, hormonal influences, allergies, genetic predisposition, certain foods, dehydration, some food additives, or some hygienic chemical additives like SDS (common in toothpaste).
These ulcers occur periodically and heal completely between attacks. In the majority of cases, the individual ulcers last about 7–10 days, and ulceration episodes occur 3–6 times per year. Most appear on the non-keratinizing epithelial surfaces in the mouth – i.e. anywhere except the attached gingiva, the hard palate and the dorsum of the tongue – although the more severe forms, which are less common, may also involve keratinizing epithelial surfaces. Symptoms range from a minor nuisance to interfering with eating and drinking. The severe forms may be debilitating, even causing weight loss due to malnutrition.
The condition is very common, affecting about 20% of the general population to some degree. The onset is often during childhood or adolescence, and the condition usually lasts for several years before gradually disappearing. There is no cure, and treatments such as corticosteroids aim to manage pain, reduce healing time and reduce the frequency of episodes of ulceration.
Signs and symptoms
Persons with aphthous stomatitis have no detectable systemic symptoms or signs (i.e., outside the mouth). Generally, symptoms may include prodromal sensations such as burning, itching, or stinging, which may precede the appearance of any lesion by some hours; and pain, which is often out of proportion to the extent of the ulceration and is worsened by physical contact, especially with certain foods and drinks (e.g., if they are acidic or abrasive). Pain is worst in the days immediately following the initial formation of the ulcer, and then recedes as healing progresses. If there are lesions on the tongue, speaking and chewing can be uncomfortable, and ulcers on the soft palate, back of the throat, or esophagus can cause painful swallowing. Signs are limited to the lesions themselves.
Ulceration episodes usually occur about 3–6 times per year. However, severe disease is characterized by virtually constant ulceration (new lesions developing before old ones have healed) and may cause debilitating chronic pain and interfere with comfortable eating. In severe cases, this prevents adequate nutrient intake, leading to malnutrition and weight loss. Aphthous ulcers typically begin as erythematous macules (reddened, flat areas of mucosa) which develop into ulcers that are covered with a yellow-grey fibrinous membrane that can be scraped away. A reddish "halo" surrounds the ulcer. The size, number, location, healing time, and periodicity between episodes of ulcer formation are all dependent upon the subtype of aphthous stomatitis.
Causes
The cause is not entirely clear, but is thought to be multifactorial. It has been suggested that aphthous stomatitis is not a single entity, but rather a group of conditions with different causes. Multiple research studies have attempted to identify a causative organism, but aphthous stomatitis appears to be non-contagious, non-infectious, and not sexually transmissible. The mucosal destruction is thought to be the result of a T cell (T lymphocyte) mediated immune response which involves the generation of interleukins and tumor necrosis factor alpha (TNF-α). Mast cells and macrophages are also involved, secreting TNF-α along with the T cells. When early aphthous ulcers are biopsied, the histologic appearance shows a dense inflammatory infiltrate, 80% of which is made up of T cells. Persons with aphthous stomatitis also have circulating lymphocytes which react with peptides 91–105 of heat shock protein 65–60, and the ratio of CD4+ T cells to CD8+ T cells in the peripheral blood of individuals with aphthous stomatitis is decreased. Aphthous stomatitis has been associated with other autoimmune diseases, namely systemic lupus erythematosus, Behçet's disease, and inflammatory bowel disease. However, common autoantibodies are not detected in most patients, and the condition tends to resolve spontaneously with advancing age rather than worsen.
Evidence for the T cell-mediated mechanism of mucosal destruction is strong, but the exact triggers for this process are unknown and are thought to be multiple and varied from one person to the next. This suggests that there are a number of possible triggers, each of which is capable of producing the disease in different subgroups. In other words, different subgroups appear to have different causes for the condition. These can be considered in three general groups, namely primary immuno-dysregulation, decrease of the mucosal barrier and states of heightened antigenic sensitivity (see below). Risk factors in aphthous stomatitis are also sometimes considered as either host-related or environmental.
Immunity
At least 40% of people with aphthous stomatitis have a positive family history, suggesting that some people are genetically predisposed to developing oral ulceration. HLA-B12, HLA-B51, HLA-Cw7, HLA-A2, HLA-A11, and HLA-DR2 are examples of human leukocyte antigen types associated with aphthous stomatitis. However, these HLA types are inconsistently associated with the condition, and also vary according to ethnicity. People who have a positive family history of aphthous stomatitis tend to develop a more severe form of the condition, and at an earlier age than is typical. Stress has effects on the immune system, which may explain why some cases directly correlate with stress. It is often stated that in studies of students with the condition, ulceration is exacerbated during examination periods and lessened during periods of vacation. Alternatively, it has been suggested that oral parafunctional activities such as lip or cheek chewing become more pronounced during periods of stress, and hence the mucosa is subjected to more minor trauma. Aphthous-like ulceration also occurs in conditions involving systemic immuno-dysregulation, e.g. cyclic neutropenia and human immunodeficiency virus infection. In cyclic neutropenia, more severe oral ulceration occurs during periods of severe immuno-dysregulation, and resolution of the underlying neutropenia is associated with healing of the ulcers. The relative increase in the percentage of CD8+ T cells, caused by a reduction in numbers of CD4+ T cells, may be implicated in RAS-type ulceration in HIV infection.
Mucosal barrier
The thickness of the mucosa may be an important factor in aphthous stomatitis. Usually, ulcers form on the thinner, non-keratinizing mucosal surfaces in the mouth. Factors which decrease the thickness of the mucosa increase the frequency of occurrence, and factors which increase the thickness of the mucosa correlate with decreased ulceration. The nutritional deficiencies associated with aphthous stomatitis (vitamin B12, folate, and iron) can all cause a decrease in the thickness of the oral mucosa (atrophy). Local trauma is also associated with aphthous stomatitis, and it is known that trauma can decrease the mucosal barrier. Trauma could occur during injections of local anesthetic in the mouth or otherwise during dental treatments, from frictional trauma from a sharp surface in the mouth such as a broken tooth, or from tooth brushing. Hormonal factors are capable of altering the mucosal barrier. In one study, a small group of females with aphthous stomatitis had fewer occurrences of aphthous ulcers during the luteal phase of the menstrual cycle or with use of the contraceptive pill. This phase is associated with a fall in progestogen levels, mucosal proliferation and keratinization. This subgroup often experiences remission during pregnancy. However, other studies report no correlation between aphthous stomatitis and menstrual period, pregnancy or menopause. Aphthous stomatitis is more common in people who smoke, and there is also a correlation between habit duration and severity of the condition. Tobacco use is associated with an increase in keratinization of the oral mucosa. In extreme forms, this may manifest as leukoplakia or stomatitis nicotina (smoker's keratosis). This increased keratinization may mechanically reinforce the mucosa and reduce the tendency of ulcers to form after minor trauma, or present a more substantial barrier to microbes and antigens, but this is unclear. Nicotine is also known to stimulate production of adrenal steroids and reduce production of TNF-α, interleukin-1 and interleukin-6. Smokeless tobacco products also seem to protect against aphthous stomatitis. Cessation of smoking is known to sometimes precede the onset of aphthous stomatitis in people previously unaffected, or to exacerbate the condition in those who were already experiencing aphthous ulceration. Despite this correlation, starting smoking again does not usually lessen the condition.
Antigenic sensitivity
Various antigens have been implicated as triggers, including L-forms of streptococci, herpes simplex virus, varicella-zoster virus, adenovirus, and cytomegalovirus. Some people with aphthous stomatitis may show herpes virus within the epithelium of the mucosa, but without any productive infection. In some persons, attacks of ulceration occur at the same time as asymptomatic viral shedding and elevated viral titres. In some instances, recurrent mouth ulcers may be a manifestation of an allergic reaction. Possible allergens include certain foods (e.g., chocolate, coffee, strawberries, eggs, nuts, tomatoes, cheese, citrus fruits, benzoates, cinnamaldehyde, and highly acidic foods), toothpastes, and mouthwashes. Where dietary allergens are responsible, mouth ulcers usually develop within about 12–24 hours of exposure. Sodium lauryl sulphate (SLS), a detergent present in some brands of toothpaste and other oral healthcare products, may produce oral ulceration in some individuals. It has been shown that aphthous stomatitis is more common in people using toothpastes containing SLS, and that some reduction in ulceration occurs when an SLS-free toothpaste is used. Some have argued that since SLS is almost ubiquitously used in oral hygiene products, there is unlikely to be a true predisposition for aphthous stomatitis caused by SLS.
Systemic disease
Aphthous-like ulceration may occur in association with several systemic disorders (see table). These ulcers are clinically and histopathologically identical to the lesions of aphthous stomatitis, but this type of oral ulceration is not considered to be true aphthous stomatitis by some sources. Some of these conditions may cause ulceration on other mucosal surfaces in addition to the mouth, such as the conjunctiva or the genital mucous membranes. Resolution of the systemic condition often leads to decreased frequency and severity of the oral ulceration. Behçet's disease is a triad of mouth ulcers, genital ulcers and anterior uveitis. The main feature of Behçet's disease is aphthous-like ulceration, but this is usually more severe than that seen in aphthous stomatitis without a systemic cause, and typically resembles major or herpetiform ulceration or both. Aphthous-like ulceration is the first sign of the disease in 25–75% of cases. Behçet's disease is more common in individuals whose ethnic origin is from regions along the Silk Road (between the Mediterranean and the Far East). It tends to be rare in other countries such as the United States and the United Kingdom. MAGIC syndrome is a possible variant of Behçet's disease, and is associated with aphthous-like ulceration. The name stands for "mouth and genital ulcers with inflamed cartilage" (relapsing polychondritis). PFAPA syndrome is a rare condition that tends to occur in children. The name stands for "periodic fever, aphthae, pharyngitis (sore throat) and cervical adenitis" (inflammation of the lymph nodes in the neck). The fevers occur periodically, about every 3–5 weeks. The condition appears to improve with tonsillectomy or immunosuppression, suggesting an immunologic cause. In cyclic neutropenia, there is a reduction in the level of circulating neutrophils in the blood that occurs about every 21 days. Opportunistic infections commonly occur, and aphthous-like ulceration is worst during this time. Hematinic deficiencies (vitamin B12, folic acid and iron), occurring singly or in combination, and with or without any underlying gastrointestinal disease, may be twice as common in people with RAS. However, iron and vitamin supplements only infrequently improve the ulceration. The relationship to vitamin B12 deficiency has been the subject of many studies. Although these studies found that 0–42% of those with recurrent ulcers have a vitamin B12 deficiency, an association with deficiency is rare. Even in the absence of deficiency, vitamin B12 supplementation may be helpful due to unclear mechanisms. Hematinic deficiencies can cause anemia, which is also associated with aphthous-like ulceration. Gastrointestinal disorders are sometimes associated with aphthous-like stomatitis, most commonly celiac disease, but also inflammatory bowel disease such as Crohn's disease or ulcerative colitis. The link between gastrointestinal disorders and aphthous stomatitis is probably related to nutritional deficiencies caused by malabsorption. Less than 5% of people with RAS have celiac disease, which usually presents with severe malnutrition, anemia, abdominal pain, diarrhea and glossitis (inflammation of the tongue). Sometimes aphthous-like ulcerations can be the only sign of celiac disease. Despite this association, a gluten-free diet does not usually improve the oral ulceration. Other examples of systemic conditions associated with aphthous-like ulceration include reactive arthritis and recurrent erythema multiforme.
Diagnosis
Diagnosis is mostly based on the clinical appearance and the medical history. The most important diagnostic feature is a history of recurrent, self-healing ulcers at fairly regular intervals. Although there are many causes of oral ulceration, recurrent oral ulceration has relatively few causes, most commonly aphthous stomatitis, but rarely Behçet's disease, erythema multiforme, ulceration associated with gastrointestinal disease, and recurrent intra-oral herpes simplex infection. A systemic cause is more likely in adults who suddenly develop recurrent oral ulceration with no prior history. Special investigations may be indicated to rule out other causes of oral ulceration. These include blood tests to exclude anemia, deficiencies of iron, folate or vitamin B12, or celiac disease. However, the nutritional deficiencies may be latent and the peripheral blood picture may appear relatively normal. Some suggest that screening for celiac disease should form part of the routine work-up for individuals complaining of recurrent oral ulceration. Many of the systemic diseases cause other symptoms apart from oral ulceration, which is in contrast to aphthous stomatitis, where there is isolated oral ulceration. Patch testing may be indicated if allergies are suspected (e.g. a strong relationship between certain foods and episodes of ulceration). Several drugs can cause oral ulceration (e.g. nicorandil), and a trial substitution to an alternative drug may highlight a causal relationship. Tissue biopsy is not usually required, unless to rule out other suspected conditions such as oral squamous cell carcinoma. The histopathologic appearance is not pathognomonic (the microscopic appearance is not specific to the condition). Early lesions have a central zone of ulceration covered by a fibrinous membrane. In the connective tissue deep to the ulcer there is increased vascularity and a mixed inflammatory infiltrate composed of lymphocytes, histiocytes and polymorphonuclear leukocytes. The epithelium on the margins of the ulcer shows spongiosis and there are many mononuclear cells in the basal third. There are also lymphocytes and histiocytes in the connective tissue surrounding deeper blood vessels near to the ulcer, described histologically as "perivascular cuffing".
Classification
Aphthous stomatitis has been classified as a type of non-infectious stomatitis (inflammation of the mouth). One classification distinguishes "common simple aphthae", accounting for 95% of cases, with 3–6 attacks per year, rapid healing, minimal pain and restriction of ulceration to the mouth; and "complex aphthae", accounting for 5% of cases, where ulcers may be present on the genital mucosa in addition to the mouth, healing is slower and pain is more severe. A more common method of classifying aphthous stomatitis is into three variants, distinguished by the size, number and location of the lesions, the healing time of individual ulcers and whether a scar is left after healing (see below).
Minor aphthous ulceration
This is the most common type of aphthous stomatitis, accounting for about 80–85% of all cases. This subtype is termed minor aphthous ulceration (MiAU), or minor recurrent aphthous stomatitis (MiRAS). The lesions themselves may be referred to as minor aphthae or minor aphthous ulcers. These lesions are generally less than 10 mm in diameter (usually about 2–3 mm), and affect non-keratinized mucosal surfaces (i.e. the labial and buccal mucosa, lateral borders of the tongue and the floor of the mouth). Usually several ulcers appear at the same time, but single ulcers are possible. Healing usually takes seven to ten days and leaves no scar. Between episodes of ulceration, there is usually an ulcer-free period of variable length.
Major aphthous ulceration
This subtype makes up about 10% of all cases of aphthous stomatitis. It is termed major aphthous ulceration (MaAU) or major recurrent aphthous stomatitis (MaRAS). Major aphthous ulcers (major aphthae) are similar to minor aphthous ulcers, but are more than 10 mm in diameter and the ulceration is deeper. Because the lesions are larger, healing takes longer (about twenty to thirty days), and may leave scars. Each episode of ulceration usually produces a greater number of ulcers, and the time between attacks is less than seen in minor aphthous stomatitis. Major aphthous ulceration usually affects non-keratinized mucosal surfaces, but less commonly keratinized mucosa may also be involved, such as the dorsum (top surface) of the tongue or the gingiva (gums). The soft palate or the fauces (back of the throat) may also be involved, the latter being part of the oropharynx rather than the oral cavity. Compared to minor aphthous ulceration, major aphthae tend to have an irregular outline.
Herpetiform ulceration
Herpetiform ulcers (also termed stomatitis herpetiformis, or herpes-like ulcerations) are a subtype of aphthous stomatitis, so named because the lesions resemble a primary infection with herpes simplex virus (primary herpetic gingivostomatitis). However, herpetiform ulceration is not caused by herpes viruses. As with all types of aphthous stomatitis, it is not contagious. Unlike true herpetic ulcers, herpetiform ulcers are not preceded by vesicles (small, fluid-filled blisters). Herpetiform ulcers are less than 1 mm in diameter and occur in variably sized crops of up to one hundred at a time. Adjacent ulcers may merge to form larger, continuous areas of ulceration. Healing occurs within fifteen days without scarring. The ulceration may affect keratinized mucosal surfaces in addition to non-keratinized. Herpetiform ulceration is often extremely painful, and the lesions recur more frequently than minor or major aphthous ulcers. Recurrence may be so frequent that ulceration is virtually continuous. It generally occurs in a slightly older age group than the other subtypes, and females are affected slightly more frequently than males.
RAS type ulceration
Recurrent oral ulceration associated with systemic conditions is termed "RAS-type ulceration", "RAS-like ulceration", or "aphthous-like ulcers". Aphthous stomatitis proper occurs in individuals with no associated systemic disease. Persons with certain systemic diseases may be prone to oral ulceration, but this is secondary to the underlying medical condition (see the systemic disease section). This kind of ulceration is considered by some to be separate from true aphthous stomatitis. However, this definition is not strictly applied. For example, many sources refer to oral ulceration caused by anemia and/or nutritional deficiencies as aphthous stomatitis, and some also consider Behçet's disease to be a variant.
Treatment
The vast majority of people with aphthous stomatitis have minor symptoms and do not require any specific therapy. The pain is often tolerable with simple dietary modification during an episode of ulceration such as avoiding spicy and acidic foods and beverages. Many different topical and systemic medications have been proposed (see table), sometimes showing little or no evidence of usefulness when formally investigated. Some of the results of interventions for RAS may in truth represent a placebo effect. No therapy is curative, with treatment aiming to relieve pain, promote healing and reduce the frequency of episodes of ulceration.
Medication
The first line of therapy for aphthous stomatitis is topical agents rather than systemic medication, with topical corticosteroids being the mainstay treatment. Systemic treatment is usually reserved for severe disease due to the risk of adverse side effects associated with many of these agents. A systematic review found that no single systemic intervention was effective. Good oral hygiene is important to prevent secondary infection of the ulcers. Occasionally, in females where ulceration is correlated to the menstrual cycle or to birth control pills, progestogen or a change in birth control may be beneficial. Use of nicotine replacement therapy for people who have developed oral ulceration after stopping smoking has also been reported. Starting smoking again does not usually lessen the condition. Trauma can be reduced by avoiding rough or sharp foodstuffs and by brushing teeth with care. If sodium lauryl sulfate is suspected to be the cause, avoidance of products containing this chemical may be useful and prevent recurrence in some individuals. Similarly, patch testing may indicate that food allergy is responsible, and the diet can be modified accordingly. If investigations reveal deficiency states, correction of the deficiency may result in resolution of the ulceration. For example, there is some evidence that vitamin B12 supplementation may prevent recurrence in some individuals.
Other
Surgical excision of aphthous ulcers has been described, but it is an ineffective and inappropriate treatment. Silver nitrate has also been used as a chemical cauterant. Apart from the mainstream approaches detailed above, there are numerous treatments of unproven effectiveness, ranging from herbal remedies to otherwise alternative treatments, including Aloe vera, Myrtus communis, Rosa damascena, potassium alum, zinc sulfate, nicotine, polio virus vaccine and prostaglandin E2.
Prognosis
By definition, there is no serious underlying medical condition, and, most importantly, the ulcers do not represent oral cancer, nor are they infectious. However, aphthae are capable of causing significant discomfort. There is a spectrum of severity, with symptoms ranging from a minor nuisance to disabling. In severe cases of aphthous stomatitis, pain during eating may reduce food intake to the point of weight loss. Usually, the condition lasts for several years before spontaneously disappearing in later life.
Epidemiology
Aphthous stomatitis affects between 5% and 66% of people, with about 20% of individuals in most populations having the condition to some degree. This makes it the most common disease of the oral mucosa. Aphthous stomatitis occurs worldwide, but is more common in developed countries. Within nations, it is more common in higher socioeconomic groups. Males and females are affected in an equal ratio, and the peak age of onset is between 10 and 19 years. About 80% of people with aphthous stomatitis first developed the condition before the age of 30. There have been reports of ethnic variation. For example, in the United States, aphthous stomatitis may be three times more common in white-skinned people than black-skinned people.
History, society and culture
"Aphthous affectations" and "aphthous ulcerations" of the mouth are mentioned several times in the treatise "Of the Epidemics" (part of the Hippocratic corpus, in the 4th century BCE), although it seems likely that this was oral ulceration as a manifestation of some infectious disease, since they are described as occurring in epidemic-like patterns, with concurrent symptoms such as fever.
Aphthous stomatitis was once thought to be a form of recurrent herpes simplex virus infection, and some clinicians still refer to the condition as "herpes" despite this cause having been disproven. The informal term "canker sore" is sometimes used, mainly in North America, either to describe this condition generally, to refer to the individual ulcers of this condition, or to refer to mouth ulcers of any cause unrelated to this condition. The origin of the word "canker" is thought to have been influenced by Latin, Old English, Middle English and Old North French. In Latin, cancer translates to "malignant tumor" or literally "crab" (related to the likening of sectioned tumors to the limbs of a crab). The closely related word in Middle English and Old North French, chancre, now more usually applied to syphilis, is also thought to be involved. Despite this etymology, aphthous stomatitis is not a form of cancer but rather entirely benign.
An aphtha (plural aphthae) is a non-specific term that refers to an ulcer of the mouth. The word is derived from the Greek word aphtha, meaning "eruption" or "ulcer" </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is APL and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Apnea of prematurity and can you explain it in depth? [/INST] Apnea of prematurity is defined as cessation of breathing by a premature infant that lasts for more than 20 seconds and/or is accompanied by hypoxia or bradycardia. Apnea is traditionally classified as either obstructive, central, or mixed. Obstructive apnea may occur when the infant's neck is hyperflexed or, conversely, hyperextended. It may also occur due to low pharyngeal muscle tone or to inflammation of the soft tissues, which can block the flow of air through the pharynx and vocal cords. Central apnea occurs when there is a lack of respiratory effort. This may result from central nervous system immaturity, or from the effects of medications or illness. Many episodes of apnea of prematurity may start as either obstructive or central, but then involve elements of both, becoming mixed in nature.
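As a minimal illustration of how the definition above can be expressed as a screening rule, the sketch below (Python) encodes the 20-second cutoff and the "and/or" clause. Only the 20-second threshold comes from the text; the SpO2 and heart-rate cutoffs, and every name in the code, are hypothetical placeholders for illustration, not clinical values.

```python
# Sketch of the definitional rule for apnea of prematurity: a pause
# qualifies if it exceeds 20 seconds, OR if a shorter pause is accompanied
# by hypoxia or bradycardia. The cutoffs for "hypoxia" and "bradycardia"
# below are assumed placeholders, not clinical thresholds.

def is_apnea_of_prematurity(pause_seconds: float,
                            spo2_percent: float,
                            heart_rate_bpm: float,
                            spo2_cutoff: float = 85.0,    # assumed, illustrative
                            brady_cutoff: float = 100.0   # assumed, illustrative
                            ) -> bool:
    long_pause = pause_seconds > 20.0        # definitional threshold from the text
    hypoxia = spo2_percent < spo2_cutoff
    bradycardia = heart_rate_bpm < brady_cutoff
    return long_pause or (pause_seconds > 0 and (hypoxia or bradycardia))

# Example: a 15-second pause with desaturation still qualifies under the
# "and/or" clause, even though it is shorter than 20 seconds.
print(is_apnea_of_prematurity(15.0, spo2_percent=80.0, heart_rate_bpm=130.0))  # True
```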
Pathophysiology
Ventilatory drive is primarily dependent on response to increased levels of carbon dioxide (CO2) and acid in the blood. A secondary stimulus is hypoxia. Responses to these stimuli are impaired in premature infants due to immaturity of specialized regions of the brain that sense these changes. In addition, premature infants have an exaggerated response to laryngeal stimulation (a normal reflex that closes the airway as a protective measure).
Diagnosis
Apnea of prematurity can be readily distinguished from other forms of infant apnea, such as obstructive apnea, hypoventilation syndromes, breathing regulation issues during feeding, and reflux-associated apnea, with an infant pneumogram or an infant apnea/sleep study.
It has been reported that neonatal apnea occurs in almost all infants with a gestational age of less than 29 weeks or a birth weight of less than 1000 g.
Treatment
Medications
Methylxanthines (theophylline and caffeine) have been used for almost three decades to treat apnea of prematurity. Despite this prevalent use, there are concerns about long-term negative effects of caffeine use. Caffeine and theophylline have similar short-term effects on apnea, but theophylline is associated with higher rates of toxicity.
Respiratory support
Simple tactile stimulation by touching the skin or patting the infant may stop an apneic episode by raising the infant's level of alertness. Increasing the environmental oxygen level by placing the infant in a tent or hood with supplemental oxygen can diminish the frequency of apnea of prematurity (AOP), and may also help the infant maintain adequate oxygenation during short episodes of apnea. Increased oxygen at low levels can also be delivered using a nasal cannula, which additionally may provide some stimulation due to the tactile stimulation of the cannula </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Arsenic poisoning and can you explain it in depth? [/INST] Arsenic poisoning is a medical condition that occurs due to elevated levels of arsenic in the body. If arsenic poisoning occurs over a brief period of time, symptoms may include vomiting, abdominal pain, encephalopathy, and watery diarrhea that contains blood. Long-term exposure can result in thickening of the skin, darker skin, abdominal pain, diarrhea, heart disease, numbness, and cancer. The most common reason for long-term exposure is contaminated drinking water. Groundwater most often becomes contaminated naturally; however, contamination may also occur from mining or agriculture. Arsenic may also be found in the soil and air. Recommended levels in water are less than 10–50 µg/L (10–50 parts per billion). Other routes of exposure include toxic waste sites and traditional medicines. Most cases of poisoning are accidental. Arsenic acts by changing the functioning of around 200 enzymes. Diagnosis is by testing the urine, blood, or hair. Prevention is by using water that does not contain high levels of arsenic. This may be achieved by the use of special filters or by using rainwater. There is no good evidence to support specific treatments for long-term poisoning. For acute poisonings, treating dehydration is important. Dimercaptosuccinic acid or dimercaptopropane sulfonate may be used, while dimercaprol (BAL) is not recommended. Hemodialysis may also be used. Through drinking water, more than 200 million people globally are exposed to higher-than-safe levels of arsenic. The areas most affected are Bangladesh and West Bengal. Exposure is also more common in people of low income and minorities. Acute poisoning is uncommon. The toxicity of arsenic has been described as far back as 1500 BC in the Ebers papyrus.
Signs and symptoms
Symptoms of arsenic poisoning begin with headaches, confusion, severe diarrhea, and drowsiness. As the poisoning develops, convulsions and changes in fingernail pigmentation called leukonychia striata (Mees' lines, or Aldrich–Mees lines) may occur. When the poisoning becomes acute, symptoms may include diarrhea, vomiting, vomiting blood, blood in the urine, cramping muscles, hair loss, stomach pain, and more convulsions. The organs of the body that are usually affected by arsenic poisoning are the lungs, skin, kidneys, and liver. The final result of arsenic poisoning is coma and death. Arsenic is related to heart disease (hypertension-related cardiovascular disease), cancer, stroke (cerebrovascular diseases), chronic lower respiratory diseases, and diabetes. Skin effects can include skin cancer in the long term, but often, prior to skin cancer, different skin lesions appear. Other effects may include darkening of skin and thickening of skin. Chronic exposure to arsenic is related to vitamin A deficiency, which is related to heart disease and night blindness. The acute minimal lethal dose of arsenic in adults is estimated to be 70 to 200 mg, or about 1 mg/kg/day.
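The per-kilogram figure makes the quoted absolute range easy to sanity-check. The sketch below is a back-of-the-envelope illustration of that arithmetic only; the function name and example weights are arbitrary, and nothing here is dosing guidance.

```python
# Sanity-check of the quoted figures: an acute minimal lethal dose of
# roughly 1 mg per kg of body weight, set against the 70-200 mg range
# quoted for adults. Illustrative arithmetic only.

def estimated_minimal_lethal_dose_mg(body_weight_kg: float,
                                     mg_per_kg: float = 1.0) -> float:
    """Weight-based estimate using the ~1 mg/kg figure from the text."""
    return mg_per_kg * body_weight_kg

for weight_kg in (60, 70, 90):
    dose = estimated_minimal_lethal_dose_mg(weight_kg)
    print(f"{weight_kg} kg adult: ~{dose:.0f} mg")

# A 70 kg adult lands at ~70 mg, the lower end of the quoted 70-200 mg range.
```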
Cancer
Arsenic increases the risk of cancer. Exposure is related to skin, lung, liver, and kidney cancer, among others. Its comutagenic effects may be explained by interference with base and nucleotide excision repair, eventually through interaction with zinc finger structures. Dimethylarsinic acid, DMA(V), caused DNA single-strand breaks resulting from inhibition of repair enzymes at levels of 5 to 100 mM in human epithelial type II cells. MMA(III) and DMA(III) were also shown to be directly genotoxic by effectuating scissions in supercoiled ΦX174 DNA. Increased arsenic exposure is associated with an increased frequency of chromosomal aberrations, micronuclei and sister-chromatid exchanges. An explanation for chromosomal aberrations is the sensitivity of the protein tubulin and the mitotic spindle to arsenic. Histological observations confirm effects on cellular integrity, shape and locomotion. DMA(III) is able to form reactive oxygen species by reaction with molecular oxygen. Resulting metabolites are the dimethylarsenic radical and the dimethylarsenic peroxyl radical.
Both DMA(III) and DMA(V) were shown to release iron from horse spleen ferritin as well as from human liver ferritin if ascorbic acid was administered simultaneously. Thus, formation of reactive oxygen species can be promoted.
Moreover, arsenic could cause oxidative stress by depleting the cell's antioxidants, especially those containing thiol groups. The accumulation of reactive oxygen species like those cited above, together with hydroxyl radicals, superoxide radicals and hydrogen peroxide, causes aberrant gene expression at low concentrations, and lesions of lipids, proteins and DNA at higher concentrations, which eventually lead to cellular death.
In a rat animal model, urine levels of 8-hydroxy-2′-deoxyguanosine (as a biomarker of DNA damage by reactive oxygen species) were measured after treatment with DMA(V). In comparison to control levels, they turned out to be significantly increased. This theory is further supported by a cross-sectional study which found elevated mean serum lipid peroxides in As-exposed individuals, which correlated with blood levels of inorganic arsenic and methylated metabolites and inversely correlated with nonprotein sulfhydryl (NPSH) levels in whole blood. Another study found an association of As levels in whole blood with the level of reactive oxidants in plasma and an inverse relationship with plasma antioxidants. A finding of the latter study indicates that methylation might in fact be a detoxification pathway with regard to oxidative stress: the results showed that the lower the As methylation capacity, the lower the level of plasma antioxidant capacity. As reviewed by Kitchin (2001), the oxidative stress theory provides an explanation for the preferred tumor sites connected with arsenic exposure. Considering that a high partial pressure of oxygen is present in the lungs and DMA(III) is excreted in gaseous state via the lungs, this seems to be a plausible mechanism for their special vulnerability. The fact that DMA is produced by methylation in the liver, excreted via the kidneys and later on stored in the bladder accounts for the other tumor localizations.
Regarding DNA methylation, some studies suggest interaction of As with methyltransferases, which leads to an inactivation of tumor suppressor genes through hypermethylation; others state that hypomethylation might occur due to a lack of SAM, resulting in aberrant gene activation. An experiment by Zhong et al. (2001) with arsenite-exposed human lung A549, kidney UOK123, UOK109 and UOK121 cells isolated eight different DNA fragments by methylation-sensitive arbitrarily primed polymerase chain reactions. It turned out that six of the fragments were hyper- and two of them were hypomethylated. Higher levels of DNA methyltransferase mRNA and enzyme activity were found. Kitchin (2001) proposed a model of altered growth factors which lead to cell proliferation and thus to carcinogenesis. From observations, it is known that chronic low-dose arsenic poisoning can lead to increased tolerance to its acute toxicity. MRP1-overexpressing lung tumor GLC4/Sb30 cells poorly accumulate arsenite and arsenate. This is mediated through MRP-1-dependent efflux. The efflux requires glutathione, but no arsenic-glutathione complex formation. Although many mechanisms have been proposed, no definite model can be given for the mechanisms of chronic arsenic poisoning. The prevailing events of toxicity and carcinogenicity might be quite tissue-specific. Current consensus on the mode of carcinogenesis is that it acts primarily as a tumor promoter. Its co-carcinogenicity has been demonstrated in several models. However, the finding of several studies that chronically arsenic-exposed Andean populations (among the most extremely exposed to UV light) do not develop skin cancer with chronic arsenic exposure is puzzling.
Causes
Organic arsenic is less harmful than inorganic arsenic. Seafood is a common source of the less toxic organic arsenic in the form of arsenobetaine. The arsenic reported in 2012 in fruit juice and rice by Consumer Reports was primarily inorganic arsenic. Because of its high toxicity, arsenic is seldom used in the Western world, although in Asia it is still a popular pesticide. Arsenic is mainly encountered occupationally in the smelting of zinc and copper ores.
Drinking water
Arsenic is naturally found in groundwater and presents serious health threats when present in high amounts. Chronic arsenic poisoning results from drinking contaminated well water over a long period of time. Many aquifers contain high concentrations of arsenic salts. The World Health Organization (WHO) Guidelines for drinking water quality established in 1993 a provisional guideline value of 0.01 mg/L (10 parts per billion) for maximum contaminant levels of arsenic in drinking water. This recommendation was established based on the limit of detection of most laboratory testing equipment at the time of publication of the WHO water quality guidelines. More recent findings show that consumption of water with levels as low as 0.00017 mg/L (0.17 parts per billion) over long periods of time can lead to arsenicosis. Based on a 1988 study in China, the US Environmental Protection Agency quantified the lifetime risk: exposure through drinking water at concentrations of 0.0017 mg/L (1.7 ppb), 0.00017 mg/L, and 0.000017 mg/L is associated with a lifetime skin cancer risk of 1 in 10,000, 1 in 100,000, and 1 in 1,000,000 respectively. WHO asserts that a water level of 0.01 mg/L (10 ppb) poses a 6 in 10,000 lifetime risk of skin cancer and contends that this level of risk is acceptable. One of the worst incidents of arsenic poisoning via well water occurred in Bangladesh, in what the World Health Organization called the "largest mass poisoning of a population in history", recognized as a major public health concern. The contamination in the Ganga-Brahmaputra fluvial plains in India and Padma-Meghna fluvial plains in Bangladesh demonstrated adverse impacts on human health. Mining techniques such as hydraulic fracturing may mobilize arsenic in groundwater and aquifers due to enhanced methane transport and resulting changes in redox conditions, and may inject fluid containing additional arsenic.
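The three EPA figures above scale linearly with concentration, and the same proportionality reproduces the WHO figure for its 10 ppb guideline. The sketch below is a minimal illustration of that linear extrapolation, using only numbers quoted in the text; the function name is illustrative.

```python
# The quoted risk figures are linear in concentration:
#   1.7 ppb   -> 1 in 10,000 lifetime skin cancer risk
#   0.17 ppb  -> 1 in 100,000
#   0.017 ppb -> 1 in 1,000,000
# so the implied slope is (1e-4 risk) / (1.7 ppb).

RISK_PER_PPB = 1e-4 / 1.7  # lifetime skin cancer risk per ppb, from the text

def lifetime_skin_cancer_risk(concentration_ppb: float) -> float:
    """Linear extrapolation of the quoted dose-risk points."""
    return RISK_PER_PPB * concentration_ppb

# Applying it to the WHO guideline value of 10 ppb:
risk = lifetime_skin_cancer_risk(10.0)
print(f"risk at 10 ppb: {risk:.1e} (~{risk * 10_000:.0f} in 10,000)")
# Prints ~5.9e-4, i.e. roughly the "6 in 10,000" figure WHO attaches
# to its 10 ppb guideline.
```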
Groundwater
In the US, the U.S. Geological Survey estimates that the median groundwater concentration is 1 μg/L or less, although some groundwater aquifers, particularly in the western United States, can contain much higher levels. For example, median levels in Nevada were about 8 μg/L, but levels of naturally occurring arsenic as high as 1000 μg/L have been measured in drinking water in the United States. Geothermally active zones occur at hotspots where mantle-derived plumes ascend, such as in Hawaii and Yellowstone National Park, US. Arsenic is an incompatible element (it does not fit easily into the lattices of common rock-forming minerals). Concentrations of arsenic are high mainly in geothermal waters that leach continental rocks. Arsenic in hot geothermal fluids was shown to be derived mainly from leaching of host rocks at Yellowstone National Park, in Wyoming, US, rather than from magmas. In the western US, there are As (arsenic) inputs to groundwater and surface water from geothermal fluids in and near Yellowstone National Park and in other western mineralized areas. Groundwater associated with volcanics in California contains As at concentrations ranging up to 48,000 μg/L, with As-bearing sulfide minerals as the main source. Geothermal waters on Dominica in the Lesser Antilles also contain As concentrations >50 μg/L. In general, because arsenic is an incompatible element, it accumulates in differentiated magmas. Weathering of pegmatite veins in Connecticut, US, was thought to contribute As to groundwater. In Pennsylvania, As concentrations in water discharging from abandoned anthracite mines ranged from <0.03 to 15 μg/L, and from abandoned bituminous mines, from 0.10 to 64 μg/L, with 10% of samples exceeding the United States Environmental Protection Agency MCL of 10 μg/L. In Wisconsin, As concentrations of water in sandstone and dolomite aquifers were as high as 100 μg/L. Oxidation of pyrite hosted by these formations was the likely source of the As. In the Piedmont of Pennsylvania and New Jersey, groundwater in Mesozoic-age aquifers contains elevated levels of As; domestic well waters from Pennsylvania contained up to 65 μg/L, whereas in New Jersey the highest concentration measured recently was 215 μg/L.
Food
In the United States, Schoof et al. estimated an average adult intake of 3.2 μg/day, with a range of 1–20 μg/day. Estimates for children were similar. Food also contains many organic arsenic compounds. The key organic arsenic compounds that can be routinely found in food (depending on food type) include monomethylarsonic acid (MMAsV), dimethylarsinic acid (DMAsV), arsenobetaine, arsenocholine, arsenosugars, and arsenolipids. DMAsV or MMAsV can be found in various types of fin fish, crabs, and mollusks, but often at very low levels. Arsenobetaine is the major form of arsenic in marine animals and, by all accounts, it is considered a compound that is nontoxic under conditions of human consumption. Arsenocholine, which is mainly found in shrimp, is chemically similar to arsenobetaine and is considered to be "essentially nontoxic". Although arsenobetaine is little studied, available information indicates it is not mutagenic, immunotoxic, or embryotoxic. Arsenosugars and arsenolipids have recently been identified. Exposure to these compounds and their toxicological implications are currently being studied. Arsenosugars are detected mainly in seaweed but are also found to a lesser extent in marine mollusks. Studies addressing arsenosugar toxicity, however, have largely been limited to in vitro studies, which show that arsenosugars are significantly less toxic than both inorganic arsenic and trivalent methylated arsenic metabolites. It has been found that rice is particularly susceptible to accumulation of arsenic from soil. Rice grown in the United States has an average of 260 ppb of arsenic, according to one study, but U.S. arsenic intake remains far below World Health Organization-recommended limits. China has set a standard for arsenic limits in food (150 ppb), as levels in rice exceed those in water. Arsenic is a ubiquitous element present in American drinking water. In the United States, levels of arsenic that are above natural levels, but still well below danger levels set in federal safety standards, have been detected in commercially raised chickens. The source of the arsenic appears to be the feed additives roxarsone and nitarsone, which are used to control the parasitic infection coccidiosis as well as to increase weight and skin coloring of the poultry. High levels of inorganic arsenic were reportedly found in 83 California wines in 2015.
Soil
Exposure to arsenic in soil can occur through multiple pathways. Compared with the intake of naturally occurring arsenic from water and the diet, soil arsenic constitutes only a small fraction of intake.
Air
The European Commission (2000) reports that levels of arsenic in air range from 0–1 ng/m3 in remote areas, 0.2–1.5 ng/m3 in rural areas, and 0.5–3 ng/m3 in urban areas, up to about 50 ng/m3 in the vicinity of industrial sites. Based on these data, the European Commission (2000) estimated that, in relation to food, cigarette smoking, water, and soil, air contributes less than 1% of total arsenic exposure.
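A rough back-of-the-envelope calculation shows why the air contribution is so small. The sketch below is illustrative only: the ventilation rate of about 20 m3 of air per day is an assumption, not from the source, and the dietary figure reuses the ~3.2 μg/day estimate quoted in the Food section; water, soil, and smoking are omitted, so the computed shares overstate the air fraction of total exposure.

```python
# Rough apportionment of inhaled vs dietary arsenic intake, using the air
# concentrations quoted above. ASSUMPTIONS (not from the source): an adult
# inhales ~20 m^3 of air per day; dietary intake is the ~3.2 ug/day
# Schoof et al. estimate quoted earlier. Water, soil, and smoking are
# ignored, so the true air share of total exposure is smaller still.

BREATHING_M3_PER_DAY = 20.0   # assumed typical adult ventilation
DIET_UG_PER_DAY = 3.2         # dietary estimate quoted in the Food section

air_levels_ng_m3 = {"remote": 1.0, "rural": 1.5, "urban": 3.0}

for setting, ng_m3 in air_levels_ng_m3.items():
    inhaled_ug = ng_m3 * BREATHING_M3_PER_DAY / 1000.0   # ng -> ug
    share = inhaled_ug / (inhaled_ug + DIET_UG_PER_DAY)
    print(f"{setting}: ~{inhaled_ug:.2f} ug/day inhaled ({share:.1%} of diet + air)")

# Even at urban levels (~3 ng/m^3), inhalation delivers only ~0.06 ug/day,
# a percent-scale fraction of the dietary figure alone; adding water, soil,
# and smoking to the denominator pushes the air share toward the <1%
# estimate cited above.
```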
Pesticides
The use of lead arsenate pesticides has been effectively eliminated for over 50 years. However, because of the pesticide's environmental persistence, it is estimated that millions of acres of land are still contaminated with lead arsenate residues. This presents a potentially significant public health concern in some areas of the United States (e.g., New Jersey, Washington, and Wisconsin), where large areas of land used historically as orchards have been converted into residential developments. Some modern uses of arsenic-based pesticides still exist. Chromated copper arsenate has been registered for use in the United States since the 1940s as a wood preservative, protecting wood from insects and microbial agents. In 2003, manufacturers of chromated copper arsenate instituted a voluntary recall of residential uses of wood treated with the chemical. The Environmental Protection Agency's 2008 final report stated that chromated copper arsenate is still approved for use in nonresidential applications, such as in marine facilities (pilings and structures), utility poles, and sand highway structures.
Copper smelting
Exposure studies in the copper smelting industry are much more extensive and have established definitive links between arsenic, a by-product of copper smelting, and lung cancer via inhalation. Dermal and neurological effects were also increased in some of these studies. Although occupational controls became more stringent over time and workers were exposed to reduced arsenic concentrations, the arsenic exposures measured in these studies ranged from about 0.05 to 0.3 mg/m3, significantly higher than airborne environmental exposures to arsenic (which range from 0 to 0.000003 mg/m3).
Pathophysiology
Arsenic interferes with cellular longevity by allosteric inhibition of the pyruvate dehydrogenase complex, an essential metabolic enzyme complex that catalyzes the oxidation of pyruvate to acetyl-CoA by NAD+. With the enzyme inhibited, the energy system of the cell is disrupted, resulting in cellular apoptosis. Biochemically, arsenic prevents the use of thiamine, resulting in a clinical picture resembling thiamine deficiency. Poisoning with arsenic can raise lactate levels and lead to lactic acidosis. Low potassium levels in the cells increase the risk of experiencing a life-threatening heart rhythm problem from arsenic trioxide.
Arsenic in cells clearly stimulates the production of hydrogen peroxide (H2O2). When the H2O2 reacts with certain metals such as iron or manganese, it produces a highly reactive hydroxyl radical. Inorganic arsenic trioxide found in ground water particularly affects voltage-gated potassium channels, disrupting cellular electrolytic function and resulting in neurological disturbances, cardiovascular episodes such as a prolonged QT interval, neutropenia, high blood pressure, central nervous system dysfunction, anemia, and death.
Arsenic exposure plays a key role in the pathogenesis of vascular endothelial dysfunction, as it inactivates endothelial nitric oxide synthase, leading to a reduction in the generation and bioavailability of nitric oxide. In addition, chronic arsenic exposure induces high oxidative stress, which may affect the structure and function of the cardiovascular system. Further, arsenic exposure has been noted to induce atherosclerosis by increasing platelet aggregation and reducing fibrinolysis. Moreover, arsenic exposure may cause arrhythmia by increasing the QT interval and accelerating cellular calcium overload. Chronic exposure to arsenic upregulates the expression of tumor necrosis factor-α, interleukin-1, vascular cell adhesion molecule and vascular endothelial growth factor to induce cardiovascular pathogenesis.
Arsenic has also been shown to induce cardiac hypertrophy by activating certain transcription factors involved in pathologically remodeling the heart. Tissue culture studies have shown that arsenic compounds block both IKr and IKs channels and, at the same time, activate IK-ATP channels. Arsenic compounds also disrupt ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits pyruvate dehydrogenase, and by competing with phosphate it uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration, and ATP synthesis. Hydrogen peroxide production is also increased, which may give rise to reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure, probably from necrotic cell death rather than apoptosis. A post mortem examination reveals brick-red mucosa, due to severe hemorrhage. Although arsenic causes toxicity, it can also play a protective role.
Mechanism
Arsenite inhibits not only the formation of acetyl-CoA but also the enzyme succinic dehydrogenase. Arsenate can replace phosphate in many reactions. It is able to form glucose-6-arsenate in vitro; therefore it has been argued that hexokinase could be inhibited. (This may possibly be a mechanism leading to muscle weakness in chronic arsenic poisoning.) In the glyceraldehyde 3-phosphate dehydrogenase reaction, arsenate attacks the enzyme-bound thioester. The 1-arseno-3-phosphoglycerate formed is unstable and hydrolyzes spontaneously. Thus, ATP formation in glycolysis is inhibited because the phosphoglycerate kinase reaction is bypassed. (Moreover, the formation of 2,3-bisphosphoglycerate in erythrocytes might be affected, followed by a higher oxygen affinity of hemoglobin and subsequently enhanced cyanosis.) As shown by Gresser (1981), submitochondrial particles synthesize adenosine-5′-diphosphate-arsenate from ADP and arsenate in the presence of succinate. Thus, by a variety of mechanisms, arsenate leads to an impairment of cell respiration and subsequently diminished ATP formation. This is consistent with observed ATP depletion of exposed cells and histopathological findings of mitochondrial and cell swelling, glycogen depletion in liver cells, and fatty change in liver, heart, and kidney.
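As a concrete illustration of the bypass described above, the following schematic (a summary of standard glycolysis biochemistry, not taken verbatim from this article's sources) contrasts the normal glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and phosphoglycerate kinase (PGK) steps with the arsenate route:

  Normal glycolysis:
    glyceraldehyde 3-phosphate + Pi + NAD+   --GAPDH-->  1,3-bisphosphoglycerate + NADH
    1,3-bisphosphoglycerate + ADP            --PGK---->  3-phosphoglycerate + ATP

  With arsenate substituting for phosphate (arsenolysis):
    glyceraldehyde 3-phosphate + AsO4(3-) + NAD+  --GAPDH-->  1-arseno-3-phosphoglycerate + NADH
    1-arseno-3-phosphoglycerate + H2O             --spontaneous hydrolysis-->  3-phosphoglycerate + arsenate

  The PGK step is bypassed, so no ATP is formed at this step.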
Experiments in a rat model demonstrated enhanced arterial thrombosis and elevations of serotonin levels, thromboxane A2, and adhesion proteins in platelets; human platelets showed similar responses. The effect on the vascular endothelium may ultimately be mediated by the arsenic-induced formation of nitric oxide.
It has been demonstrated that concentrations of trivalent arsenic (As(III)) substantially lower than those required to inhibit the lysosomal protease cathepsin L in the B cell line TA3 were sufficient to trigger apoptosis in the same cell line; this apoptosis could be a mechanism mediating immunosuppressive effects.
Kinetics
The two forms of inorganic arsenic, reduced (trivalent As(III)) and oxidized (pentavalent As(V)), can be absorbed and accumulated in tissues and body fluids. In the liver, the metabolism of arsenic involves enzymatic and non-enzymatic methylation; the most frequently excreted metabolite (≥ 90%) in the urine of mammals is dimethylarsinic acid, or cacodylic acid, DMA(V). Dimethylarsinic acid is also known as Agent Blue and was used as a herbicide during the Vietnam War.
In humans, inorganic arsenic is reduced from the pentavalent to the trivalent state either nonenzymatically, using glutathione, or through enzyme-mediated reactions. Reduction of arsenic pentoxide to arsenic trioxide increases its toxicity and bioavailability. Methylation occurs through methyltransferase enzymes; S-adenosylmethionine (SAM) may serve as the methyl donor. Various pathways are used, the principal route depending on the current environment of the cell. The resulting metabolites are monomethylarsonous acid, MMA(III), and dimethylarsinous acid, DMA(III).
Methylation had been regarded as a detoxification process, but reduction from pentavalent to trivalent arsenic may be considered a bioactivation instead. Another suggestion is that methylation might be a detoxification if "As[III] intermediates are not permitted to accumulate", because the pentavalent organoarsenicals have a lower affinity for thiol groups than inorganic pentavalent arsenicals. Gebel (2002) stated that methylation is a detoxification through accelerated excretion. With regard to carcinogenicity, it has been suggested that methylation should be regarded as a toxification. Arsenic, especially As(III), binds to single sulfhydryl groups, but with higher affinity to vicinal sulfhydryl groups, and thus reacts with a variety of proteins and inhibits their activity. It has also been proposed that binding of arsenite at nonessential sites might contribute to detoxification. Arsenite inhibits members of the disulfide oxidoreductase family such as glutathione reductase and thioredoxin reductase. The remaining unbound arsenic (≤ 10%) accumulates in cells, which over time may lead to skin, bladder, kidney, liver, lung, and prostate cancers. Other forms of arsenic toxicity in humans have been observed in blood, bone marrow, cardiac, central nervous system, gastrointestinal, gonadal, kidney, liver, pancreatic, and skin tissues.
Heat shock response
Another aspect is the similarity of arsenic effects to the heat shock response. Short-term arsenic exposure has effects on signal transduction, inducing heat shock proteins with masses of 27, 60, 70, 72, 90, and 110 kDa, as well as metallothionein, ubiquitin, mitogen-activated protein (MAP) kinases, extracellular signal-regulated kinase (ERK), c-Jun N-terminal kinases (JNK), and p38.
Via JNK and p38 it activates c-fos, c-jun, and egr-1, which are usually activated by growth factors and cytokines. The effects are largely dependent on the dosing regimen and may even be reversed.
As shown by some experiments reviewed by Del Razo (2001), reactive oxygen species induced by low levels of inorganic arsenic increase the transcription and the activity of activator protein 1 (AP-1) and nuclear factor-κB (NF-κB) (possibly enhanced by elevated MAPK levels), which results in c-fos/c-jun activation and over-secretion of pro-inflammatory and growth-promoting cytokines, stimulating cell proliferation. Germolec et al. (1996) found increased cytokine expression and cell proliferation in skin biopsies from individuals chronically exposed to arsenic-contaminated drinking water. Increased AP-1 and NF-κB evidently also result in an up-regulation of the mdm2 protein, which decreases p53 protein levels. Thus, taking into account p53's function, a lack of it could cause a faster accumulation of mutations contributing to carcinogenesis. However, high levels of inorganic arsenic inhibit NF-κB activation and cell proliferation. An experiment by Hu et al. (2002) demonstrated increased binding activity of AP-1 and NF-κB after acute (24 h) exposure to trivalent sodium arsenite, whereas long-term exposure (10–12 weeks) yielded the opposite result. The authors concluded that the former may be interpreted as a defense response while the latter could lead to carcinogenesis. As the contradictory findings and associated mechanistic hypotheses indicate, acute and chronic arsenic exposure differ in their effects on signal transduction in ways that are not yet clearly understood.
Oxidative stress
Studies have demonstrated that the oxidative stress generated by arsenic may disrupt the signal transduction pathways of the nuclear transcription factors PPARs, AP-1, and NF-κB, as well as the pro-inflammatory cytokines IL-8 and TNF-α. The interference of oxidative stress with signal transduction pathways may affect physiological processes associated with cell growth, metabolic syndrome X, glucose homeostasis, lipid metabolism, obesity, insulin resistance, inflammation, and type 2 diabetes. Recent scientific evidence has elucidated the physiological roles of the PPARs in the ω-hydroxylation of fatty acids and the inhibition of pro-inflammatory transcription factors (NF-κB and AP-1), pro-inflammatory cytokines (IL-1, -6, -8, -12, and TNF-α), cell adhesion molecules (ICAM-1 and VCAM-1), inducible nitric oxide synthase, pro-inflammatory nitric oxide (NO), and anti-apoptotic factors. Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of type 2 diabetes. The human liver after exposure to therapeutic drugs may exhibit hepatic non-cirrhotic portal hypertension, fibrosis, and cirrhosis. However, the literature provides insufficient scientific evidence to show cause and effect between arsenic and the onset of type 2 diabetes mellitus.
Diagnosis
Arsenic may be measured in blood or urine to monitor excessive environmental or occupational exposure, to confirm a diagnosis of poisoning in hospitalized victims, or to assist in the forensic investigation of a fatal overdose. Some analytical techniques are capable of distinguishing organic from inorganic forms of the element. Organic arsenic compounds tend to be eliminated in the urine in unchanged form, while inorganic forms are largely converted to organic arsenic compounds in the body prior to urinary excretion. The current biological exposure index for U.S. workers of 35 µg/L total urinary arsenic may easily be exceeded by a healthy person eating a seafood meal. Tests are available to diagnose poisoning by measuring arsenic in blood, urine, hair, and fingernails. The urine test is the most reliable test for arsenic exposure within the last few days. Urine testing needs to be done within 24–48 hours for an accurate analysis of an acute exposure. Tests on hair and fingernails can measure exposure to high levels of arsenic over the past 6–12 months. These tests can determine whether one has been exposed to above-average levels of arsenic. They cannot predict, however, whether the arsenic levels in the body will affect health. Arsenic from chronic exposure remains in the body's systems longer than that from a short-term or isolated exposure, and can therefore be detected over a longer time frame after its introduction, which is important when trying to determine the source of the exposure.
Hair is a potential bioindicator for arsenic exposure due to its ability to store trace elements from blood. Incorporated elements maintain their position during the growth of hair. Thus, for a temporal estimation of exposure, an assay of hair composition needs to be carried out on a single hair, which is not possible with older techniques requiring homogenization and dissolution of several strands of hair. This type of biomonitoring has been achieved with newer microanalytical techniques such as synchrotron radiation based X-ray fluorescence spectroscopy and micro particle-induced X-ray emission. The highly focused, intense beams probe small spots on biological samples, allowing analysis at the micro level along with chemical speciation. In one study, this method was used to follow arsenic levels before, during, and after treatment with arsenious oxide in patients with acute promyelocytic leukemia.
Treatment
Chelation
Dimercaprol and dimercaptosuccinic acid are chelating agents that sequester arsenic away from blood proteins and are used in treating acute arsenic poisoning. The most important side effect is hypertension. Dimercaprol is considerably more toxic than succimer. Dimercaptosuccinic acid monoesters, e.g. MiADMSA, are promising antidotes for arsenic poisoning.
Nutrition
Supplemental potassium decreases the risk of experiencing a life-threatening heart rhythm problem from arsenic trioxide.
History
Beginning in about 3000 BC, arsenic was mined and added to copper in the alloying of bronze, but the adverse health effects of working with arsenic led to it being abandoned when a viable alternative, tin, was discovered. In addition to its use as a poison, for centuries arsenic was used medicinally. It has been used for over 2,400 years as a part of traditional Chinese medicine. In the western world, arsenic compounds, such as salvarsan, were used extensively to treat syphilis before penicillin was introduced. It was eventually replaced as a therapeutic agent by sulfa drugs and then by other antibiotics. Arsenic was also an ingredient in many tonics (or "patent medicines").
In addition, during the Elizabethan era, some women used a mixture of vinegar, chalk, and arsenic applied topically to whiten their skin. This use of arsenic was intended to prevent aging and creasing of the skin, but some arsenic was inevitably absorbed into the bloodstream. During the Victorian era (late 19th century) in the United States, newspapers advertised "arsenic complexion wafers" that promised to remove facial blemishes such as moles and pimples. Some pigments, most notably the popular Emerald Green (known also under several other names), were based on arsenic compounds. Overexposure to these pigments was a frequent cause of accidental poisoning of artists and craftsmen.
Arsenic became a favored murder weapon in the Middle Ages and Renaissance, allegedly particularly among the ruling classes of Italy. Because the symptoms are similar to those of cholera, which was common at the time, arsenic poisoning often went undetected. By the 19th century, it had acquired the nickname "inheritance powder," perhaps because impatient heirs were known or suspected to use it to ensure or accelerate their inheritances. It was also a common murder technique in 19th-century domestic violence situations, such as the case of Rebecca Copin, who attempted to poison her husband by "putting arsenic in his coffee". In post-WW1 Hungary, arsenic extracted by boiling fly paper was used in an estimated 300 murders by the Angel Makers of Nagyrév.
In imperial China, arsenic trioxide and sulfides were used in murder, as well as for capital punishment of members of the royal family or aristocracy. Forensic studies have determined that the Guangxu Emperor (d. 1908) was murdered by arsenic, most likely ordered by the Empress Dowager Cixi or Generalissimo Yuan Shikai. Likewise, in ancient Korea, and particularly in the Joseon Dynasty, arsenic-sulfur compounds were used as a major ingredient of sayak (사약; 賜藥), a poison cocktail used in the capital punishment of high-profile political figures and members of the royal family. Due to the social and political prominence of the condemned, many of these events were well documented, often in the Annals of the Joseon Dynasty; they are sometimes portrayed in historical television miniseries because of their dramatic nature.
Legislation
In the U.S. in 1975, under the authority of the Safe Drinking Water Act, the U.S. Environmental Protection Agency set the National Interim Primary Drinking Water Regulation level for arsenic (an inorganic contaminant, or IOC) at 0.05 mg/L (50 parts per billion, ppb). Over the years, many studies reported dose-dependent effects of arsenic in drinking water on skin cancer. In order to prevent new cases and deaths from cancerous and non-cancerous diseases, the Safe Drinking Water Act directed the Environmental Protection Agency to revise the arsenic level and specify a maximum contaminant level (MCL). MCLs are set as close to the health goals as possible, considering cost, benefits, and the ability of public water systems to detect and remove contaminants using suitable treatment technologies. In 2001, the Environmental Protection Agency adopted a lower standard of MCL 0.01 mg/L (10 ppb) for arsenic in drinking water, which applies to both community water systems and non-transient non-community water systems. In some other countries, when developing national drinking water standards based on the guideline values, it is necessary to take account of a variety of geographical, socio-economic, dietary, and other conditions affecting potential exposure. These factors lead to national standards that differ appreciably from the guideline values. That is the case in countries such as India and Bangladesh, where the permissible limit of arsenic in the absence of an alternative source of water is 0.05 mg/L.
Challenges to implementation
Arsenic removal technologies are traditional treatment processes that have been tailored to improve removal of arsenic from drinking water. Although some of the removal processes, such as precipitative processes, adsorption processes, ion exchange processes, and separation (membrane) processes, may be technically feasible, their cost may be prohibitive. For underdeveloped countries, the challenge is finding the means to fund such technologies. The Environmental Protection Agency, for example, has estimated the total national annualized cost of treatment, monitoring, reporting, record keeping, and administration to enforce the MCL rule to be approximately $181 million. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Ascariasis and can you explain it in depth? [/INST] Ascariasis is a disease caused by the parasitic roundworm Ascaris lumbricoides. Infections have no symptoms in more than 85% of cases, especially if the number of worms is small. Symptoms increase with the number of worms present and may include shortness of breath and fever at the beginning of the disease. These may be followed by symptoms of abdominal swelling, abdominal pain, and diarrhea. Children are most commonly affected, and in this age group the infection may also cause poor weight gain, malnutrition, and learning problems. Infection occurs by ingestion of food or drink contaminated with Ascaris eggs from feces. The eggs hatch in the intestines, the larvae burrow through the gut wall, and migrate to the lungs via the blood. There they break into the alveoli and pass up the trachea, where they are coughed up and may be swallowed. The larvae then pass through the stomach for a second time into the intestine, where they become adult worms. It is a type of soil-transmitted helminthiasis and part of a group of diseases called helminthiases. Prevention is by improved sanitation, which includes improving access to toilets and proper disposal of feces. Handwashing with soap appears protective. In areas where more than 20% of the population is affected, treating everyone at regular intervals is recommended. Recurring infections are common. There is no vaccine. Treatments recommended by the World Health Organization are the medications albendazole, mebendazole, levamisole, or pyrantel pamoate. Other effective agents include tribendimidine and nitazoxanide. About 0.8 to 1.2 billion people globally have ascariasis, with the most heavily affected populations being in sub-Saharan Africa, Latin America, and Asia. This makes ascariasis the most common form of soil-transmitted helminthiasis. As of 2010 it caused about 2,700 deaths a year, down from 3,400 in 1990. Another type of Ascaris infects pigs. Ascariasis is classified as a neglected tropical disease.
Signs and symptoms
In populations where worm infections are widespread, it is common to find that most people are infected by a small number of worms, while a small number of people are heavily infected. This is characteristic of many types of worm infections. Those people who are infected with only a small number of worms usually have no symptoms.
Migrating larvae
As larval stages travel through the body, they may cause visceral damage, peritonitis and inflammation, enlargement of the liver or spleen, and an inflammation of the lungs. Pulmonary manifestations take place during larval migration and may present as Loeffler's syndrome, a transient respiratory illness associated with blood eosinophilia and pulmonary infiltrates with radiographic shadowing.
Intestinal blockage
The worms can occasionally cause intestinal blockage when large numbers get tangled into a bolus, or they may migrate from the small intestine, which may require surgery. More than 796 A. lumbricoides worms weighing up to 550 g (19 oz) were recovered at autopsy from a two-year-old South African girl. The worms had caused torsion and gangrene of the ileum, which was interpreted as the cause of death. The worms lack teeth. However, they can rarely cause bowel perforations by inducing volvulus and closed-loop obstruction.
Bowel obstruction
Bowel obstruction may occur in up to 0.2 per 1,000 people per year. A worm may block the ampulla of Vater, or go into the main pancreatic duct, resulting in acute pancreatitis with raised serum levels of amylase and lipase. Occasionally, a worm can travel through the biliary tree and even into the gallbladder, causing acute cholangitis or acute cholecystitis.
Allergies
Ascariasis may result in allergies to shrimp and dust mites due to the shared antigen, tropomyosin; this has not been confirmed in the laboratory.
Malnutrition
The worms in the intestine may cause malabsorption and anorexia, which contribute to malnutrition. The malabsorption may be due to a loss of brush border enzymes, erosion and flattening of the villi, and inflammation of the lamina propria.
Others
Ascaris have an aversion to some general anesthetics and may exit the body, sometimes through the mouth, when an infected individual is put under general anesthesia.
Cause
Transmission
The source of infection is objects that have been contaminated with fecal matter containing eggs. Ingestion of infective eggs from soil contaminated with human feces, or from contaminated vegetables and water, is the primary route of infection. Infectious eggs may occur on other objects such as hands, money, and furniture. Transmission from human to human by direct contact is impossible. Transmission also comes through municipal recycling of wastewater into crop fields. This is quite common in emerging industrial economies and poses serious risks for local crop sales and exports of contaminated vegetables. A 1986 outbreak of ascariasis in Italy was traced to irresponsible wastewater recycling used to grow Balkan vegetable exports. The number of ova (eggs) in sewage, or in crops that were irrigated with raw or partially treated sewage, is a measure of the degree of ascariasis incidence. For example:
In a study published in 1992, municipal wastewater in Riyadh, Saudi Arabia, was found to contain over 100 eggs per litre, while in Czechoslovakia levels were as high as 240–1,050 eggs per litre.
In one field study in Marrakech, Morocco, where raw sewage is used to fertilize crop fields, Ascaris eggs were detected at the rate of 0.18 eggs/kg in potatoes, 0.27 eggs/kg in turnip, 4.63 eggs/kg in mint, 0.7 eggs/kg in carrots, and 1.64 eggs/kg in radish. A similar study in the same area showed that 73% of children working on these farms were infected with helminths, particularly Ascaris, probably as a result of exposure to the raw sewage.
Lifecycle
The first appearance of eggs in stools occurs 60–70 days after infection. In larval ascariasis, symptoms occur 4–16 days after infection. The final symptoms are gastrointestinal discomfort, colic and vomiting, fever, and observation of live worms in stools. Some patients may have pulmonary symptoms or neurological disorders during migration of the larvae. There are generally few or no symptoms. A bolus of worms may obstruct the intestine; migrating larvae may cause pneumonitis and eosinophilia. Adult worms have a lifespan of 1–2 years, which means that individuals may be infected all their lives as worms die and new worms are acquired. Eggs can survive for potentially 15 years, and a single worm may produce 200,000 eggs a day. The worms maintain their position by swimming against the intestinal flow.
Mechanism
Ascaris takes most of its nutrients from the partially digested host food in the intestine. There is some evidence that it can secrete enzyme inhibitors, presumably to protect itself from digestion by the host's enzymes. Children are often more severely affected.
Diagnosis
Most diagnoses are made by identifying the appearance of the worm or eggs in feces. Due to the large quantity of eggs laid, diagnosis can generally be made using only one or two fecal smears.
The diagnosis is usually incidental, when the host passes a worm in the stool or vomit. The eggs can be seen in a smear of fresh feces examined on a glass slide under a microscope, and there are various techniques to concentrate them first or increase their visibility, such as the ether sedimentation method or the Kato technique. The eggs have a characteristic shape: they are oval with a thick, mamillated shell (covered with rounded mounds or lumps), measuring 35–50 micrometers in diameter and 40–70 micrometers in length. During pulmonary disease, larvae may be found in fluids aspirated from the lungs. White blood cell counts may demonstrate peripheral eosinophilia; this is common in many parasitic infections and is not specific to ascariasis. On X-ray, worms may appear as filling defects 15–35 cm long, sometimes with a whirled appearance (a bolus of worms).
Prevention
Prevention is by improved access to sanitation, which includes the use of properly functioning and clean toilets by all community members as one important aspect. Handwashing with soap may be protective; however, there is no evidence it affects the severity of the disease. Eliminating the use of untreated human faeces as fertilizer is also important. In areas where more than 20% of the population is affected, treating everyone is recommended. This has a cost of about 2 to 3 cents per person per treatment. This is known as mass drug administration and is often carried out among school-age children. For this purpose, broad-spectrum benzimidazoles such as mebendazole and albendazole are the drugs of choice recommended by WHO.
Treatment
Medications
Medications that are used to kill roundworms are called ascaricides. Those recommended by the World Health Organization for ascariasis are albendazole, mebendazole, levamisole, and pyrantel pamoate. Single doses of albendazole, mebendazole, and ivermectin are effective against ascariasis. They are effective at removing parasites and eggs from the intestines. Other effective agents include tribendimidine and nitazoxanide. Pyrantel pamoate may induce intestinal obstruction in a heavy worm load. Albendazole is contraindicated during pregnancy and in children under two years of age. Thiabendazole may cause migration of the worm into the esophagus, so it is usually combined with piperazine. Piperazine is a flaccid paralyzing agent that blocks the response of Ascaris muscle to acetylcholine, which immobilizes the worm. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Assisted reproductive technology and can you explain it in depth? [/INST] Assisted reproductive technology (ART) includes medical procedures used primarily to address infertility. This subject involves procedures such as in vitro fertilization (IVF), intracytoplasmic sperm injection (ICSI), cryopreservation of gametes or embryos, and/or the use of fertility medication. When used to address infertility, ART may also be referred to as fertility treatment. ART mainly belongs to the field of reproductive endocrinology and infertility. Some forms of ART may also be used with fertile couples for genetic purposes (see preimplantation genetic diagnosis). ART may also be used in surrogacy arrangements, although not all surrogacy arrangements involve ART.
Sterility does not always require ART as the first option to consider, since there are occasions when its cause is a mild disorder that can be resolved with more conventional treatments or with behaviors that promote health and good reproductive habits.
Procedures
General
With ART, the process of sexual intercourse is bypassed and fertilization of the oocytes occurs in the laboratory environment (i.e., in vitro fertilization). In the US, the Centers for Disease Control and Prevention (CDC) defines ART to include "all fertility treatments in which both eggs and sperm are handled. In general, ART procedures involve surgically removing eggs from a woman's ovaries, combining them with sperm in the laboratory, and returning them to the woman's body or donating them to another woman." According to the CDC, "they do not include treatments in which only sperm are handled (i.e., intrauterine—or artificial—insemination) or procedures in which a woman takes medicine only to stimulate egg production without the intention of having eggs retrieved." In Europe, ART also excludes artificial insemination and includes only procedures where oocytes are handled. The WHO, or World Health Organization, also defines ART this way.
Ovulation induction
Ovulation induction is usually used in the sense of stimulation of the development of ovarian follicles by fertility medication to reverse anovulation or oligoovulation. These medications are given by injection for 8 to 14 days. A health care provider closely monitors the development of the eggs using transvaginal ultrasound and blood tests to assess follicle growth and estrogen production by the ovaries. When the follicles have reached an adequate size and the eggs are mature enough, an injection of the hormone hCG initiates the ovulation process. Egg retrieval should occur 34 to 36 hours after the hCG injection.
In vitro fertilization
In vitro fertilization is the technique of letting fertilization of the male and female gametes (sperm and egg) occur outside the female body.
Techniques usually used in in vitro fertilization include:
Transvaginal ovum retrieval (OVR) is the process whereby a small needle is inserted through the back of the vagina and guided via ultrasound into the ovarian follicles to collect the fluid that contains the eggs.
Embryo transfer is the step in the process whereby one or several embryos are placed into the uterus of the female with the intent to establish a pregnancy. Less commonly used techniques in in vitro fertilization are:
Assisted zona hatching (AZH) is performed shortly before the embryo is transferred to the uterus. A small opening is made in the outer layer surrounding the egg in order to help the embryo hatch out and aid in the implantation process of the growing embryo.
Intracytoplasmic sperm injection (ICSI) is beneficial in the case of male factor infertility, where sperm counts are very low or failed fertilization occurred with previous IVF attempt(s). The ICSI procedure involves a single sperm carefully injected into the center of an egg using a microneedle. With ICSI, only one sperm per egg is needed; without ICSI, between 50,000 and 100,000 are needed. This method is also sometimes employed when donor sperm is used.
Autologous endometrial coculture is a possible treatment for patients who have failed previous IVF attempts or who have poor embryo quality. The patient's fertilized eggs are placed on top of a layer of cells from the patient's own uterine lining, creating a more natural environment for embryo development.
In zygote intrafallopian transfer (ZIFT), egg cells are removed from the womans ovaries and fertilized in the laboratory; the resulting zygote is then placed into the fallopian tube.
Cytoplasmic transfer is the technique in which the contents of a fertile egg from a donor are injected into the infertile egg of the patient along with the sperm.
Egg donors are resources for women with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles, or advanced maternal age. In the egg donor process, eggs are retrieved from a donor's ovaries, fertilized in the laboratory with sperm from the recipient's partner, and the resulting healthy embryos are returned to the recipient's uterus.
Sperm donation may provide the source for the sperm used in IVF procedures where the male partner produces no sperm or has an inheritable disease, or where the woman being treated has no male partner.
Preimplantation genetic diagnosis (PGD) involves the use of genetic screening mechanisms such as fluorescent in-situ hybridization (FISH) or comparative genomic hybridization (CGH) to help identify genetically abnormal embryos and improve healthy outcomes.
Embryo splitting can be used for twinning to increase the number of available embryos.
Pre-implantation genetic diagnosis
A pre-implantation genetic diagnosis procedure may be conducted on embryos prior to implantation (as a form of embryo profiling), and sometimes even on oocytes prior to fertilization. PGD is considered in a similar fashion to prenatal diagnosis. PGD is an adjunct to ART procedures, and requires in vitro fertilization to obtain oocytes or embryos for evaluation. Embryos are generally obtained through blastomere or blastocyst biopsy. The latter technique has proved to be less deleterious for the embryo; therefore it is advisable to perform the biopsy around day 5 or 6 of development. Sex selection is the attempt to control the sex of offspring to achieve a desired sex, for example in the case of X chromosome-linked diseases. It can be accomplished in several ways, both pre- and post-implantation of an embryo, as well as at birth. Pre-implantation techniques include PGD, but also sperm sorting.
Others
Other assisted reproduction techniques include:
Mitochondrial replacement therapy (MRT, sometimes called mitochondrial donation) is the replacement of mitochondria in one or more cells to prevent or ameliorate disease. MRT originated as a special form of IVF in which some or all of the future babys mitochondrial DNA comes from a third party. This technique is used in cases when mothers carry genes for mitochondrial diseases. The therapy is approved for use in the United Kingdom.
In gamete intrafallopian transfer (GIFT) a mixture of sperm and eggs is placed directly into a womans fallopian tubes using laparoscopy following a transvaginal ovum retrieval.
Reproductive surgery treats conditions such as fallopian tube obstruction and vas deferens obstruction, or reverses a vasectomy (vasectomy reversal). In surgical sperm retrieval (SSR), the reproductive urologist obtains sperm from the vas deferens, epididymis, or directly from the testis in a short outpatient procedure.
By cryopreservation, eggs, sperm and reproductive tissue can be preserved for later IVF.
Risks
The majority of IVF-conceived infants do not have birth defects.
However, some studies have suggested that assisted reproductive technology is associated with an increased risk of birth defects.
Artificial reproductive technology is becoming more available. Early studies suggest that there could be an increased risk for medical complications with both the mother and baby. Some of these include low birth weight, placental insufficiency, chromosomal disorders, preterm deliveries, gestational diabetes, and pre-eclampsia (Aiken and Brockelsby). In the largest U.S. study, which used data from a statewide registry of birth defects, 6.2% of IVF-conceived children had major defects, as compared with 4.4% of naturally conceived children matched for maternal age and other factors (odds ratio, 1.3; 95% confidence interval, 1.00 to 1.67). ART carries with it a risk for heterotopic pregnancy (simultaneous intrauterine and extrauterine pregnancy).
The main risks are:
Genetic disorders
Low birth weight. In IVF and ICSI, a risk factor is the decreased expression of proteins involved in energy metabolism: ferritin light chain and ATP5A1.
Preterm birth. Low birth weight and preterm birth are strongly associated with many health problems, such as visual impairment and cerebral palsy. Children born after IVF are roughly twice as likely to have cerebral palsy. Sperm donation is an exception, with a birth defect rate of almost a fifth of that in the general population. This may be explained by the fact that sperm banks accept only donors with high sperm counts.
Germ cells of the mouse normally have a frequency of spontaneous point mutations that is 5- to 10-fold lower than that in somatic cells from the same individual. This low frequency in the germline leads to embryos that have a low frequency of point mutations in the next generation. No significant differences were observed in the frequency or spectrum of mutations between naturally conceived fetuses and assisted-conception fetuses. This suggests that, with respect to the maintenance of genetic integrity, assisted conception is safe. Current data indicate little or no increased risk for postpartum depression among women who use ART. Usage of assisted reproductive technology, including ovarian stimulation and in vitro fertilization, has been associated with an increased overall risk of childhood cancer in the offspring, which may be caused by the same original disease or condition that caused the infertility or subfertility in the mother or father. That said, in a landmark paper by Jacques Balayla et al. it was determined that infants born after ART have neurodevelopment similar to that of infants born after natural conception. In theory, ART can solve almost all reproductive problems, except for severe pathology or the absence of a uterus (or womb), using specific gamete or embryo donation techniques. However, this does not mean that all women can be treated with assisted reproductive techniques, or that all women who are treated will achieve pregnancy.
Usage
As a result of the 1992 Fertility Clinic Success Rate and Certification Act, the CDC is required to publish the annual ART success rates at U.S. fertility clinics. The number of assisted reproductive technology procedures performed in the U.S. has more than doubled over the last 10 years, with 140,000 procedures in 2006 resulting in 55,000 births. In Australia, 3.1% of births are a result of ART. The most common reasons for discontinuation of fertility treatment have been estimated to be: postponement of treatment (39%), physical and psychological burden (19%), psychological burden (14%), physical burden (6.32%), relational and personal problems (17%), personal reasons (9%), relational problems (9%), treatment rejection (13%) and organizational (12%) and clinic (8%) problems.
By country
United States
Many Americans do not have insurance coverage for fertility investigations and treatments. Many states are starting to mandate coverage, and the rate of use is 278% higher in states with complete coverage. Some health insurance companies cover the diagnosis of infertility but, frequently, once infertility is diagnosed, will not cover any treatment costs. Approximate treatment/diagnosis costs in the United States, adjusted for inflation, as of 2021 (US$):
Initial workup: hysteroscopy, hysterosalpingogram, blood tests ~$2,800
Sonohysterogram (SHG) ~ $830–$1,400
Clomiphene citrate cycle ~ $280–$690
IVF cycle ~ $13,900–$41,600
Use of a surrogate mother to carry the child – dependent on arrangements
Another way to look at costs is to determine the expected cost of establishing a pregnancy. Thus, if a clomiphene treatment has a chance to establish a pregnancy in 8% of cycles and costs $690, the expected cost is about $8,600 to establish a pregnancy ($690 ÷ 8%), compared to an IVF cycle (cycle fecundity 40%) with a corresponding expected cost of about $41,500 ($16,600 ÷ 40%).
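To make the arithmetic explicit, here is a minimal sketch in Python (the function name is ours, and the figures are the illustrative ones from the list above, not authoritative prices): on average 1/fecundity cycles are needed, so the expected cost per established pregnancy is the per-cycle cost divided by the per-cycle success rate.

    def expected_cost_per_pregnancy(cost_per_cycle: float, cycle_fecundity: float) -> float:
        """Expected spend to establish one pregnancy.

        On average 1 / cycle_fecundity cycles are needed, so the expected
        cost is the per-cycle cost divided by the per-cycle success rate.
        """
        if not 0.0 < cycle_fecundity <= 1.0:
            raise ValueError("cycle_fecundity must be in (0, 1]")
        return cost_per_cycle / cycle_fecundity

    # Illustrative figures from the text (inflation-adjusted 2021 US$):
    print(expected_cost_per_pregnancy(690, 0.08))     # clomiphene: ~$8,600 per pregnancy
    print(expected_cost_per_pregnancy(16_600, 0.40))  # IVF: ~$41,500 per pregnancy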
For the community as a whole, the cost of IVF is on average repaid by 700% through taxes from the future employment of the person conceived.
European Union
In Europe, 157,500 children were born using assisted reproductive technology in 2015, according to the European Society of Human Reproduction and Embryology (ESHRE). But there are major differences in legislation across the continent.
A European directive fixes standards concerning the use of human tissue and cells, but all ethical and legal questions on ART remain the prerogative of EU member states.
Across Europe, the legal criteria for availability vary somewhat. In 11 countries all women may benefit; in 8 others only heterosexual couples are concerned; in 7 only single women; and in 2 (Austria and Germany) only lesbian couples.
Spain was the first European country to open ART to all women, in 1977, the year the first sperm bank was opened there. In France, the right to ART has been accorded to all women since 2019. In the last 15 years, legislation has evolved quickly. For example, Portugal made ART available in 2006 with conditions very similar to those in France, before amending the law in 2016 to allow lesbian couples and single women to benefit. Italy clarified its uncertain legal situation in 2004 by adopting Europe's strictest laws: ART is only available to heterosexual couples, married or otherwise, and sperm donation is prohibited.
Today, 21 countries provide partial public funding for ART treatment. The seven others, which do not, are Ireland, Cyprus, Estonia, Latvia, Luxembourg, Malta, and Romania.
Such subsidies are subject to conditions, however. In Belgium, a fixed payment of €1,073 is made for each full cycle of the IVF process. The woman must be aged under 43 and may not carry out more than six cycles of ART. There is also a limit on the number of transferable embryos, which varies according to age and the number of cycles completed.
In France, ART is subsidized in full by national health insurance for women up to age 43, with limits of 4 attempts at IVF and 6 at artificial insemination.
Germany tightened its conditions for public funding in 2004, which caused a sharp drop in the number of ART cycles carried out, from more than 102,000 in 2003 to fewer than 57,000 the following year. Since then the figure has remained stable.
17 countries limit access to ART according to the age of the woman. 10 countries have established an upper age limit, varying from 40 (Finland, Netherlands) to 50 (including Spain, Greece and Estonia).
Since 1994, France has been one of a number of countries (including Germany, Spain, and the UK) that use the somewhat vague notion of the "natural age of procreation". In 2017, the steering council of France's Agency of Biomedicine established an age limit of 43 for women using ART.
10 countries have no age limit for ART. These include Austria, Hungary, Italy and Poland.
Most European countries allow donations of gametes by third parties. But the situations vary depending on whether sperm or eggs are concerned. Sperm donations are authorized in 20 EU member states; in 11 of them anonymity is allowed. Egg donations are possible in 17 states, including 8 under anonymous conditions.
On 12 April, the Council of Europe adopted a recommendation which encourages an end to anonymity. In the UK, anonymous sperm donations ended in 2005 and children have access to the identity of the donor when they reach adulthood.
In France, the principle of anonymous donations of sperm or embryos is maintained in the law of bioethics of 2011, but a new bill under discussion may change the situation.
United Kingdom
In the United Kingdom, all patients have the right to preliminary testing, provided free of charge by the National Health Service (NHS). However, treatment is not widely available on the NHS and there can be long waiting lists. Many patients therefore pay for immediate treatment within the NHS or seek help from private clinics.
In 2013, the National Institute for Health and Care Excellence (NICE) published new guidelines about who should have access to IVF treatment on the NHS in England and Wales. The guidelines say women aged between 40 and 42 should be offered one cycle of IVF on the NHS if they have never had IVF treatment before, have no evidence of low ovarian reserve (this is when eggs in the ovary are low in number, or low in quality), and have been informed of the additional implications of IVF and pregnancy at this age. However, if tests show IVF is the only treatment likely to help them get pregnant, women should be referred for IVF straight away.
This policy is often modified by local Clinical Commissioning Groups, in a fairly blatant breach of the NHS Constitution for England which provides that patients have the right to drugs and treatments that have been recommended by NICE for use in the NHS. For example, the Cheshire, Merseyside and West Lancashire Clinical Commissioning Group insists on additional conditions:
The person undergoing treatment must have commenced treatment before her 40th birthday;
The person undergoing treatment must have a BMI of between 19 and 29;
Neither partner must have any living children, from either the current or previous relationships. This includes adopted as well as biological children; and,
Sub-fertility must not be the direct result of a sterilisation procedure in either partner (this does not include conditions where sterilisation occurs as a result of another medical problem). Couples who have undertaken a reversal of their sterilisation procedure are not eligible for treatment.
Canada
Some treatments are covered by OHIP (public health insurance) in Ontario and others are not. Women with bilaterally blocked fallopian tubes and are under the age of 40 have treatment covered but are still required to pay test fees (around CA$3,000–4,000). Coverage varies in other provinces. Most other patients are required to pay for treatments themselves.
Israel
Israel's national health insurance, which is mandatory for all Israeli citizens, covers nearly all fertility treatments. IVF costs are fully subsidized up to the birth of two children for all Israeli women, including single women and lesbian couples. Embryo transfers for purposes of gestational surrogacy are also covered.
Germany
On 27 January 2009, the Federal Constitutional Court ruled that it is unconstitutional that health insurance companies have to bear only 50% of the cost of IVF. On 2 March 2012, the Federal Council approved a draft law from several federal states providing that the federal government grant a subsidy of 25% of the cost. The share of costs borne by the couple would thus drop to just 25%. Since July 2017, assisted reproductive technology has also been allowed for married lesbian couples, as the German parliament allowed same-sex marriage in Germany.
France
In July 2020, the French Parliament extended access to assisted reproductive technology to lesbian couples and single women.
Cuba
Cuban sources mention that assisted reproduction is completely legal and free in the country.
India
The Government of India has notified the Surrogacy (Regulation) Act 2021 and the Assisted Reproductive Technology (Regulation) Act 2021 to regulate the practice of ART. Prior to that, the field was governed by the National Guidelines for Accreditation, Supervision and Regulation of ART Clinics in India, published by the Ministry for Health and Family Welfare, Government of India, in 2005. Indian law recognises the right of a single woman, who is a major, to have children through ART.
Society and culture
Ethics
Some couples may find it difficult to stop treatment despite a very poor prognosis, resulting in futile therapies. This can confront ART providers with a difficult decision about whether to continue or refuse treatment. Some assisted reproductive technologies have the potential to be harmful to both the mother and child, posing a psychological and/or physical health risk, which may impact the ongoing use of these treatments.
In Israel, there is research supporting the use of art-making, including works created from recycled lab materials from the IVF process, to help women work through some of these mixed emotions.
Fictional representation
Films and other fiction depicting emotional struggles of assisted reproductive technology have had an upswing in the latter part of the 2000s decade, although the techniques have been available for decades. As ART becomes more utilized, the number of people that can relate to it by personal experience in one way or another is growing. For specific examples, refer to the fiction sections in individual subarticles, e.g. surrogacy, sperm donation and fertility clinic.
In addition, reproduction and pregnancy have been themes in speculative fiction for many decades.
Historical facts
On 25 July 1978, Louise Brown was born; this was the first successful birth of a child after IVF treatment. The procedure took place at Dr Kershaw's Cottage Hospital (now Dr Kershaw's Hospice) in Royton, Oldham, England. Patrick Steptoe (gynaecologist) and Robert Edwards (physiologist) worked together to develop the IVF technique. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Autism spectrum and can you explain it in depth? [/INST] The autism spectrum is a range of neurodevelopmental conditions generally characterized by difficulties in social interactions and communication, repetitive behaviors, intense interests, and unusual responses to sensory stimuli. It is commonly referred to as autism or, in the context of a professional diagnosis, autism spectrum disorder (ASD), but the latter term remains controversial among neurodiversity advocates, neurodiversity researchers, and many autistic people due to the use of the word disorder and due to questions about its utility outside of diagnostic contexts. A spectrum disorder is one that can manifest very differently from person to person: any given person with the disorder is likely to show some but not all of the characteristics associated with it, and may show them to very different degrees. Different autistic people might show strikingly different characteristics, and the same person might also present differently at different times. Historically, the autism spectrum was divided into sub-categories, but questions persisted over the validity of these divisions. The most recent editions of the major English-language diagnostic manuals, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR, published in 2022) and the International Classification of Diseases (ICD-11, released in 2021), both list ASD as a single disorder. Given concerns about the appropriateness of the term disorder, many sources prefer to use the word "autism" without any additional words, on the basis that this is the least controversial term among people with different perspectives, or (in the United Kingdom) the term autism spectrum conditions (ASC) rather than ASD. While psychiatry traditionally classifies autism as a neurodevelopmental disorder, many autistic people, most autistic advocates, and a rapidly increasing number of researchers see autism as part of neurodiversity, the natural diversity in human thinking and experience, with strengths, differences, and weaknesses. On this view, promoted by the autism rights movement, autism is not pathological, but this does not preclude autistic individuals from being disabled and potentially having high support needs due to co-occurring conditions and lack of person-environment fit. This relatively positive and holistic view of autism has led to a certain degree of friction between autistic individuals, advocates, charities, researchers, and practitioners. The causes of autism are not well understood, but are likely linked to altered structures of the brain at birth. There is no official cure for autism, so interventions focus on, for example, finding and learning other modes of communication for a non-verbal autistic person, or on applied behavior analysis interventions, which are highly controversial.
Other controversies in autism are scientific, sociological, political, or philosophical, and some have aspects of all four. First, it is controversial and uncertain whether the social-communication difficulties of autistic people are inherent core deficits (see the empathizing-systemizing theory developed by Simon Baron-Cohen), or are due to a mismatch in social communication styles, cognition, and experiences resulting in bidirectional misunderstanding between autistic and non-autistic people (see the double empathy problem theory developed by autistic researcher Damian Milton, and growing evidence that autistic people empathize, communicate, and socialize well with other autistic people), or a combination of both factors. A 2018 study showed that autistic people are more prone to object personification, suggesting that autistic empathy may be not only more complex but also more all-encompassing, contrary to the popular belief that autistic people lack empathy.
Moreover, scientists are still trying to determine what causes autism; it is highly heritable and believed to be mainly genetic, but there are many genes involved, and environmental factors may also be relevant. It is unclear why autism commonly co-occurs with ADHD, epilepsy and a range of other conditions. There are ongoing disagreements about what should be included as part of the autism spectrum, whether meaningful sub-types of autism exist, and the significance of autism-associated traits in the wider population. The combination of broader criteria and increased awareness has led to a trend of steadily increasing estimates of autism prevalence, causing a common misconception that there is an autism epidemic and perpetuating the myth that it is caused by vaccines.
Classification
Spectrum model
Autism is a highly variable neurodevelopmental disorder and has long been thought to cover a wide spectrum, ranging from individuals with high support needs (who may be non-speaking, experience developmental delay, and be more likely to present with other co-existing diagnoses including intellectual disability) to individuals with low support needs (who may have more typical speech-language and intellectual skills but atypical social/conversation skills, narrowly focused interests, and wordy, pedantic communication). Because the behavior spectrum is continuous, boundaries between diagnostic categories are somewhat arbitrary.
ICD
The World Health Organization's International Classification of Diseases, 11th Revision (ICD-11), regarded as the global standard, was released in June 2018 and came into full effect as of January 2022. It describes ASD as follows:
Autism spectrum disorder is characterised by persistent deficits in the ability to initiate and to sustain reciprocal social interaction and social communication, and by a range of restricted, repetitive, and inflexible patterns of behaviour, interests or activities that are clearly atypical or excessive for the individuals age and sociocultural context. The onset of the disorder occurs during the developmental period, typically in early childhood, but symptoms may not become fully manifest until later, when social demands exceed limited capacities. Deficits are sufficiently severe to cause impairment in personal, family, social, educational, occupational or other important areas of functioning and are usually a pervasive feature of the individuals functioning observable in all settings, although they may vary according to social, educational, or other context. Individuals along the spectrum exhibit a full range of intellectual functioning and language abilities.
ICD-11 was produced by professionals from 55 countries out of the 90 countries involved and is the most widely used reference worldwide. Clinicians use the ICD as a reference for diagnosis and reporting, but researchers, particularly in the US, continue to use the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR from 2022, DSM-5 from 2013, or their predecessors), as some material is not included in the ICD (the ICD is broader in scope, covering general as well as mental health). There remain differences; for example, Rett disorder was included in ASD in the DSM-5, but in the ICD-11 it was excluded and placed in the chapter for developmental anomalies. Both the ICD and the DSM have been under revision, and there has been collaborative work towards a convergence of the two since 1980 (when DSM-III was published and ICD-9 was current), including more rigorous biological assessment, in place of historical experience, and a simplification of the system of classification.
DSM
The American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR), released in 2022, is the current version of the DSM. The fifth edition, DSM-5, released in May 2013, was the first to define ASD as a single diagnosis, which is continued in DSM-5-TR. ASD encompasses previous diagnoses, including Asperger disorder, childhood disintegrative disorder, PDD-NOS, and the range of diagnoses which included the word autism. Rather than distinguishing between these diagnoses, the DSM-5 and DSM-5-TR adopt a dimensional approach to diagnosing disorders that fall underneath the autistic spectrum umbrella in one diagnostic category. Within this category, the DSM-5 and the DSM-5-TR include a framework that differentiates each individual by dimensions of symptom severity, as well as by associated features (i.e., the presence of other disorders or factors which likely contribute to the symptoms, other neurodevelopmental or mental disorders, intellectual disability, or language impairment). The symptom domains are social communication and restricted, repetitive behaviors, with the option of a separate severity - the negative impact of the symptoms on the individual - being specified for each domain, rather than an overall severity. Prior to the DSM-5, the DSM separated social deficits and communication deficits into two domains. Further, the DSM-5 changed to an onset age in the early developmental period, with a note that symptoms may manifest later when social demands exceed capabilities, rather than the previous, more restricted 3 years of age. These changes continue in the DSM-5-TR.
Features and characteristics
For many autistic individuals, characteristics first appear during infancy or childhood and generally follow a steady course without remission (different developmental timelines are described in more detail below). Autistic people may be severely impaired in some respects but average, or even superior, in others.

Clinicians consider assessment for ASD when a patient shows:
regular difficulties in social interaction or communication
restricted or repetitive behaviors (often called "stimming")
resistance to changes or restricted interests

These features are typically assessed with the following, when appropriate:
problems in obtaining or sustaining employment or education
difficulties in initiating or sustaining social relationships
connections with mental health or learning disability services
a history of neurodevelopmental conditions (including learning disabilities and ADHD) or mental health conditions.

There are many signs associated with ASD, and the presentation varies widely.
Atypical eating is also common, but it does not need to be present to make a diagnosis.

In addition, a small percentage of autistic people can exhibit notable ability, for example in mathematics, music or artistic reproduction, which in exceptional cases is referred to as savant syndrome.
Developmental course
There are two possible developmental courses of ASD. One course of development is more gradual in nature, with symptoms appearing fairly early in life and persisting. A second course of development is characterized by normal or near-normal development before onset of regression or loss of skills, which is known as regressive autism.
Gradual autism development
Most parents report that the onset of autism features appears within the first year of life. This course of development is fairly gradual, in that parents typically report concerns in development over the first two years of life and diagnosis can be made around 3–4 years of age. Overt features gradually begin after the age of six months, become established by age two or three years, and tend to continue through adulthood, although often in more muted form. Some of the early signs of ASD in this course include decreased attention to faces, failure to respond obviously when their name is called, failure to show interest by showing or pointing, and delayed imaginative play.
Regressive autism development
Regressive autism occurs when a child appears to develop typically but then starts to lose speech and social skills and is subsequently diagnosed with ASD. Other terms used to describe regression in children with autism are autism with regression, autistic regression, setback-type autism, and acquired autistic syndrome.

Within the regressive autism developmental course, there are two patterns. The first pattern is when developmental losses occur between 15 months and 3 years of age. The second pattern, childhood disintegrative disorder (a diagnosis now included under ASD), is characterized by regression after normal development in the first 3 to 4, or even up to 9, years of life.

After the regression, the child follows the standard pattern of autistic neurological development. The term regressive autism refers to the appearance that neurological development has reversed; it is actually only the affected developmental skills, rather than the neurology as a whole, that regress.
The apparent onset of regressive autism can be surprising and distressing to parents, who often initially suspect severe hearing loss. Attribution of regression to environmental stress factors may result in a delay in diagnosis.

There is no standard definition for regression. Some children show a mixture of features, with some early delays and some later losses, and there is evidence of a continuous spectrum of behaviors, rather than, or in addition to, a black-and-white distinction between autism with and without regression. There are several intermediate types of development, which do not neatly fit into either the traditional early-onset or the regressive categories, including mixtures of early deficits, failures to progress, subtle diminishment, and obvious losses.
Regression may occur in a variety of domains, including communication, social, cognitive, and self-help skills; however, the most common regression is loss of language. Some children lose social development instead of language; some lose both. Skill loss may be quite rapid, or may be slow and preceded by a lengthy period of no skill progression; the loss may be accompanied by reduced social play or increased irritability. The temporarily acquired skills typically amount to a few words of spoken language, and may include some rudimentary social perception.

The prevalence of regression varies depending on the definition used. If regression is defined strictly to require loss of language, it is less common; if defined more broadly, to include cases where language is preserved but social interaction is diminished, it is more common. Though regressive autism is often thought to be less common than the gradual course of onset described above, this is an area of ongoing debate; some evidence suggests that a pattern of regressive autism may be more common than previously thought. Some believe that regressive autism is simply early-onset autism that was recognized at a later date. Researchers have conducted studies to determine whether regressive autism is a distinct subset of ASD, but the results of these studies have contradicted one another.
Differential outcomes
There continues to be debate over differential outcomes based on these two developmental courses. Some studies suggest that regression is associated with poorer outcomes, and others report no differences between those with early gradual onset and those who experience a period of regression. While the evidence surrounding language outcomes in autism is conflicting, some studies have shown that cognitive and language abilities at age 2½ may help predict language proficiency and production after age 5. Overall, the literature stresses the importance of early intervention in achieving positive longitudinal outcomes.
Social and communication skills
In social contexts, autistic people may respond and behave differently than individuals without ASD.

Impairments in social skills present many challenges for autistic individuals. Deficits in social skills may lead to problems with friendships, romantic relationships, daily living, and vocational success. One study that examined the outcomes of autistic adults found that, compared to the general population, autistic people were less likely to be married, but it is unclear whether this outcome was due to deficits in social skills, intellectual impairment, or some other reason.

Prior to 2013, deficits in social function and communication were considered two separate symptom domains of autism. The current social communication domain criteria for autism diagnosis require individuals to have deficits across three social skills: social-emotional reciprocity, nonverbal communication, and developing and sustaining relationships. Communication deficits are due to problems with social-emotional skills like joint attention and social reciprocity.

A range of difficulties with social-emotional reciprocity (an individual's ability to naturally engage in social interactions) may be present. Autistic individuals may lack mutual sharing of interests; for example, many autistic children prefer not to play or interact with others. They may lack awareness or understanding of other people's thoughts or feelings – a child may get too close to peers (entering their personal space) without noticing that this makes them uncomfortable. They may also engage in atypical behaviors to gain attention; for example, a child may push a peer to gain attention before starting a conversation.

Older children and adults with ASD perform worse on tests of face and emotion recognition than non-autistic individuals, although this may be partly due to a lower ability to define one's own emotions.

Autistic people experience deficits in their ability to develop, maintain, and understand relationships, as well as difficulties adjusting behavior to fit social contexts. ASD presents with impairments in pragmatic communication skills, such as difficulty initiating a conversation or failure to consider the interests of the listener in order to sustain a conversation. The tendency to be focused exclusively on one topic in communication is known as monotropism, and can be compared to "tunnel vision". It is common for autistic individuals to communicate strong interest in a specific topic, speaking in lesson-like monologues about their passion instead of enabling reciprocal communication with whomever they are speaking to. What may look like self-involvement or indifference toward others stems from a struggle to recognize or remember that other people have their own personalities, perspectives, and interests. Another difference in pragmatic communication skills is that autistic people may not recognize the need to control the volume of their voice in different social settings; for example, they may speak loudly in libraries or movie theaters.

Autistic people display atypical nonverbal behaviors or have difficulties with nonverbal communication. They may make infrequent eye contact – an autistic child may not make eye contact when called by name, or may avoid making eye contact with an observer. Aversion of gaze can also be seen in anxiety disorders; however, poor eye contact in autistic children is not due to shyness or anxiety, but is overall diminished in quantity.
Autistic individuals may struggle with both production and understanding of facial expressions. They often do not know how to recognize emotions from others' facial expressions, or they may not respond with the appropriate facial expressions. They may have trouble recognizing subtle expressions of emotion and identifying what various emotions mean for the conversation. A defining feature is that autistic people have social impairments and often lack the intuition about others that many people take for granted. Temple Grandin, an autistic woman involved in autism activism, described her inability to understand the social communication of neurotypicals, or people with typical neural development, as leaving her feeling "like an anthropologist on Mars". They may also not pick up on body language or social cues such as eye contact and facial expressions if these provide more information than the person can process at that time. They struggle with understanding the context and subtext of conversational or printed situations, and have trouble forming resulting conclusions about the content. This also results in a lack of social awareness and atypical language expression. How facial expressions differ between those on the autism spectrum and neurotypical individuals is not clear. Further, at least half of autistic children have unusual prosody.

Autistic people may also experience difficulties with verbal communication. Differences in communication may be present from the first year of life, and may include delayed onset of babbling, unusual gestures, diminished responsiveness, and vocal patterns that are not synchronized with the caregiver. In the second and third years, autistic children have less frequent and less diverse babbling, consonants, words, and word combinations; their gestures are less often integrated with words. Autistic children are less likely to make requests or share experiences, and are more likely to simply repeat others' words (echolalia). Joint attention seems to be necessary for functional speech, and deficits in joint attention seem to distinguish infants with ASD. For example, they may look at a pointing hand instead of the object to which the hand is pointing, and they consistently fail to point at objects in order to comment on or share an experience. Autistic children may have difficulty with imaginative play and with developing symbols into language. Some autistic linguistic behaviors include repetitive or rigid language and restricted interests in conversation; for example, a child might repeat words or insist on always talking about the same subject. Echolalia may also be present, for example responding to a question by repeating the inquiry instead of answering. Language impairment is also common in autistic children, but is not part of a diagnosis. Many autistic children develop language skills at an uneven pace, easily acquiring some aspects of communication while never fully developing others, as in some cases of hyperlexia. In some cases, individuals remain completely nonverbal throughout their lives. The CDC estimated that around 40% of autistic children do not speak at all, although the accompanying levels of literacy and nonverbal communication skills vary.
Restricted and repetitive behaviors
ASD includes a wide variety of characteristics. Some of these are behavioral characteristics which range widely from slow development of social and learning skills to difficulties creating connections with other people. Autistic individuals may experience these challenges with forming connections due to anxiety or depression, which they are more likely to experience, and as a result isolate themselves.

Other behavioral characteristics include abnormal responses to sensations (such as sights, sounds, touch, taste and smell) and problems keeping a consistent speech rhythm. The latter problem influences an individual's social skills, leading to potential problems in how they are understood by communication partners. Behavioral characteristics displayed by autistic people typically influence development, language, and social competence. Behavioral characteristics of autistic people can be observed as perceptual disturbances, disturbances of development rate, relating, speech and language, and motility.

The second core symptom of the autism spectrum is a pattern of restricted and repetitive behaviors, activities, and interests. In order to be diagnosed with ASD under DSM-5 or DSM-5-TR, a person must have at least two of the following four behaviors (a minimal sketch of this counting rule appears after the lists below):
Repetitive behaviors – Repetitive behaviors such as rocking, hand flapping, finger flicking, head banging, or repeating phrases or sounds. These behaviors may occur constantly or only when the person gets stressed, anxious or upset.
Resistance to change – A strict adherence to routines such as eating certain foods in a specific order, or taking the same path to school every day. The child may have a meltdown if there is any change or disruption to their routine.
Restricted interests – An excessive interest in a particular activity, topic, or hobby, and devoting all their attention to it. For example, young children might completely focus on things that spin and ignore everything else. Older children might try to learn everything about a single topic, such as the weather or sports, and perseverate or talk about it constantly.
Sensory reactivity – An unusual reaction to certain sensory inputs, such as a negative reaction to specific sounds or textures, fascination with lights or movements, or an apparent indifference to pain or heat.

Autistic individuals can display many forms of repetitive or restricted behavior, which the Repetitive Behavior Scale-Revised (RBS-R) categorizes as follows:
Stereotyped behaviors: Repetitive movements, such as hand flapping, head rolling, or body rocking.
Compulsive behaviors: Time-consuming behaviors intended to reduce anxiety, that an individual feels compelled to perform repeatedly or according to rigid rules, such as placing objects in a specific order, checking things, or handwashing.
Sameness: Resistance to change; for example, insisting that the furniture not be moved or refusing to be interrupted.
Ritualistic behavior: Unvarying pattern of daily activities, such as an unchanging menu or a dressing ritual. This is closely associated with sameness and an independent validation has suggested combining the two factors.
Restricted interests: Interests or fixations that are abnormal in theme or intensity of focus, such as preoccupation with a single television program, toy, or game.
Self-injury: Behaviors such as eye-poking, skin-picking, hand-biting and head-banging.
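As referenced above, the DSM-5 criterion for the restricted and repetitive behaviors domain is a simple count: at least two of the four behavior types must be present. The sketch below only illustrates that counting logic; the identifiers are invented, and this is in no way a clinical instrument.

```python
# Illustrative check of the DSM-5 "at least two of four" rule for the
# restricted and repetitive behaviors domain. Identifiers are invented;
# this is a sketch of the counting logic, not a diagnostic tool.
RRB_CRITERIA = (
    "repetitive_behaviors",
    "resistance_to_change",
    "restricted_interests",
    "sensory_reactivity",
)

def meets_rrb_domain(observed: set[str]) -> bool:
    """Return True if at least 2 of the 4 RRB criteria are observed."""
    return sum(criterion in observed for criterion in RRB_CRITERIA) >= 2

print(meets_rrb_domain({"repetitive_behaviors"}))                        # False
print(meets_rrb_domain({"resistance_to_change", "sensory_reactivity"}))  # True
```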
Self-injury
Self-injurious behaviors (SIB) are relatively common in autistic people, and can include head-banging, self-cutting, self-biting, and hair-pulling. Some of these behaviors can result in serious injury or death. The following are theories about the cause of self-injurious behavior in children with developmental delay, including autistic individuals:
Frequency and/or continuation of self-injurious behavior can be influenced by environmental factors (e.g. reward in return for halting self-injurious behavior). However, this theory is not applicable to younger children with autism. There is some evidence that the frequency of self-injurious behavior can be reduced by removing or modifying environmental factors that reinforce this behavior.
Higher rates of self-injury are also noted in socially isolated autistic individuals. Studies have shown that socialization skills are related to self-injurious behavior in autistic individuals.
Self-injury could be a response to modulate pain perception when chronic pain or other health problems that cause pain are present.
Abnormal basal ganglia connectivity may predispose to self-injurious behavior.
Other features
Autistic individuals may have symptoms that do not contribute to the official diagnosis, but that can affect the individual or the family. Some individuals with ASD show unusual abilities, ranging from splinter skills (such as the memorization of trivia) to the rare talents of autistic savants. One study describes how some individuals with ASD show superior skills in perception and attention relative to the general population. Sensory abnormalities are found in over 90% of autistic people, and are considered core features by some. Differences between the previously recognized disorders under the autism spectrum are greater for under-responsivity (for example, walking into things) than for over-responsivity (for example, distress from loud noises) or for sensation seeking (for example, rhythmic movements). An estimated 60–80% of autistic people have motor signs that include poor muscle tone, poor motor planning, and toe walking; deficits in motor coordination are pervasive across ASD and are greater in autism proper. Unusual eating behavior occurs in about three-quarters of children with ASD, to the extent that it was formerly a diagnostic indicator. Selectivity is the most common problem, although eating rituals and food refusal also occur.

There is tentative evidence that gender dysphoria occurs more frequently in autistic people (see Autism and LGBT identities). In addition, a 2021 anonymized online survey of 16–90-year-olds revealed that autistic males are more likely to identify as bisexual, while autistic females are more likely to identify as homosexual.

Gastrointestinal problems are one of the most commonly co-occurring medical conditions in autistic people. These are linked to greater social impairment, irritability, language impairments, mood changes, and behavior and sleep problems.

Parents of children with ASD have higher levels of stress. Siblings of children with ASD report greater admiration of, and less conflict with, the affected sibling than siblings of unaffected children, and were similar to siblings of children with Down syndrome in these aspects of the sibling relationship. However, they reported lower levels of closeness and intimacy than siblings of children with Down syndrome; siblings of individuals with ASD have a greater risk of negative well-being and poorer sibling relationships as adults.
Causes
It had long been presumed that there is a common cause at the genetic, cognitive, and neural levels for the social and non-social components of autism's symptoms, described as a triad in the classic autism criteria. However, there is increasing suspicion that autism is instead a complex disorder whose core aspects have distinct causes that often co-occur. While it is unlikely that a single cause for ASD exists, many risk factors identified in the research literature may contribute to ASD development. These risk factors include genetics, prenatal and perinatal factors (meaning factors during pregnancy or very early infancy), neuroanatomical abnormalities, and environmental factors. It is possible to identify general risk factors, but much more difficult to pinpoint specific ones. Given the current state of knowledge, prediction can only be of a global nature and therefore requires the use of general markers.
Biological subgroups
Research into causes has been hampered by the inability to identify biologically meaningful subgroups within the autistic population, and by the traditional boundaries between the disciplines of psychiatry, psychology, neurology and pediatrics. Newer technologies such as fMRI and diffusion tensor imaging can help identify biologically relevant phenotypes (observable traits) that can be viewed on brain scans, to help further neurogenetic studies of autism; one example is lowered activity in the fusiform face area of the brain, which is associated with impaired perception of people versus objects. It has been proposed to classify autism using genetics as well as behavior.
Genetics
Autism has a strong genetic basis, although the genetics of autism are complex and it is unclear whether ASD is explained more by rare mutations with major effects, or by rare multi-gene interactions of common genetic variants. Complexity arises due to interactions among multiple genes, the environment, and epigenetic factors, which do not change the DNA sequence but are heritable and influence gene expression. Many genes have been associated with autism through sequencing the genomes of affected individuals and their parents. However, most of the mutations that increase autism risk have not been identified. Typically, autism cannot be traced to a Mendelian (single-gene) mutation or to a single chromosome abnormality, and none of the genetic syndromes associated with ASD have been shown to selectively cause ASD. Numerous candidate genes have been located, with only small effects attributable to any particular gene. Most loci individually explain less than 1% of cases of autism.
As of 2018, it appeared that between 74% and 93% of ASD risk is heritable. After an older child is diagnosed with ASD, 7–20% of subsequent children are likely to be affected as well. If parents have one autistic child, they have a 2% to 8% chance of having a second child who is also autistic. If the autistic child is an identical twin, the other will be affected 36 to 95 percent of the time; if they are fraternal twins, the other will be affected only up to 31 percent of the time. The large number of autistic individuals with unaffected family members may result from spontaneous structural variation, such as deletions, duplications or inversions in genetic material during meiosis. Hence, a substantial fraction of autism cases may be traceable to genetic causes that are highly heritable but not inherited: that is, the mutation that causes the autism is not present in the parental genome.

As of 2018, understanding of genetic risk factors had shifted from a focus on a few alleles to an understanding that genetic involvement in ASD is probably diffuse, depending on a large number of variants, some of which are common and have a small effect, and some of which are rare and have a large effect. The most common gene disrupted by large-effect rare variants appeared to be CHD8, but less than 0.5% of autistic people have such a mutation. CHD8 encodes chromodomain helicase DNA binding protein 8, a chromatin regulator enzyme that is essential during fetal development. CHD8 is an ATP-dependent enzyme; the protein contains an Snf2 helicase domain that is responsible for the hydrolysis of ATP to ADP. The encoded DNA helicase functions as a transcription repressor by remodeling chromatin structure, altering the position of nucleosomes. CHD8 negatively regulates Wnt signaling, which is important in early vertebrate development and morphogenesis. CHD8 is also believed to recruit the linker histone H1 and cause the repression of β-catenin and p53 target genes. The importance of CHD8 can be observed in studies in which CHD8-knockout mice died by embryonic day 5.5 because of widespread p53-induced apoptosis. Some studies have examined the role of CHD8 in ASD. CHD8 expression significantly increases during human mid-fetal development, and its chromatin remodeling activity and interaction with transcriptional regulators have been shown to play an important role in ASD aetiology. The developing mammalian brain has conserved CHD8 target regions that are associated with ASD risk genes, and knockdown of CHD8 in human neural stem cells results in dysregulation of ASD risk genes targeted by CHD8. Recently, CHD8 has been associated with the regulation of long non-coding RNAs (lncRNAs) and the regulation of X chromosome inactivation (XCI) initiation, via regulation of Xist, the long non-coding RNA that is the master regulator of XCI, through competitive binding to Xist regulatory regions.

Some ASD is associated with clearly genetic conditions, such as fragile X syndrome; however, only around 2% of autistic people have fragile X.
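One rough way to read the recurrence figures above is as a relative risk: the sibling recurrence rate compared with the baseline population prevalence. The sketch below is a back-of-envelope illustration only; the assumed ~1.5% population prevalence is not stated in this passage.

```python
# Back-of-envelope illustration: sibling recurrence (7-20%, from the
# text above) relative to an ASSUMED baseline population prevalence of
# about 1.5%. The prevalence figure is an assumption for illustration.
sibling_recurrence = (0.07, 0.20)
population_prevalence = 0.015

for rate in sibling_recurrence:
    relative_risk = rate / population_prevalence
    print(f"recurrence {rate:.0%} -> ~{relative_risk:.0f}x the baseline risk")
# Under these assumptions, younger siblings of an autistic child carry
# roughly 5-13 times the baseline risk.
```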
Hypotheses from evolutionary psychiatry suggest that these genes persist because they are linked to human inventiveness, intelligence or systemising.

Current research suggests that genes that increase susceptibility to ASD are ones that control protein synthesis in neuronal cells in response to cell needs, activity and adhesion of neuronal cells, synapse formation and remodeling, and the balance of excitatory to inhibitory neurotransmission. Therefore, although up to 1,000 different genes are thought to contribute to increased ASD risk, all of them eventually affect normal neural development and connectivity between different functional areas of the brain in a similar manner, one characteristic of an ASD brain. Some of these genes are known to modulate production of GABA, the main inhibitory neurotransmitter in the nervous system; these GABA-related genes are under-expressed in an ASD brain. On the other hand, genes controlling expression of glial and immune cells in the brain, e.g. astrocytes and microglia, respectively, are over-expressed, which correlates with the increased number of glial and immune cells found in postmortem ASD brains. Some genes under investigation in ASD pathophysiology are those that affect the mTOR signaling pathway, which supports cell growth and survival.

All of these genetic variants contribute to development of the autism spectrum, but carrying them does not guarantee that autism will develop.

Autism may be under-diagnosed in women and girls due to an assumption that it is primarily a male condition, but genetic phenomena such as imprinting and X linkage can raise the frequency and severity of conditions in males, and theories have been put forward for a genetic reason why males are diagnosed more often, such as the imprinted brain hypothesis and the extreme male brain theory.
Early life
Several prenatal and perinatal complications have been reported as possible risk factors for autism. These risk factors include maternal gestational diabetes, maternal and paternal age over 30, bleeding during pregnancy after the first trimester, use of certain prescription medications (e.g. valproate) during pregnancy, and meconium in the amniotic fluid. While research is not conclusive on the relation of these factors to autism, each of these factors has been identified more frequently in children with autism compared to their unaffected siblings and other typically developing youth. While it is unclear whether any single factor during the prenatal phase affects the risk of autism, complications during pregnancy may be a risk.

Low vitamin D levels in early development have been hypothesized as a risk factor for autism. Studies are also underway to test whether certain types of regressive autism have an autoimmune basis.

Maternal nutrition and inflammation during preconception and pregnancy influence fetal neurodevelopment. Intrauterine growth restriction is associated with ASD in both term and preterm infants. Maternal inflammatory and autoimmune diseases may damage fetal tissues, aggravating a genetic problem or damaging the nervous system.

Exposure to air pollution during pregnancy, especially heavy metals and particulates, may increase the risk of autism. Environmental factors that have been claimed without evidence to contribute to or exacerbate autism include certain foods, infectious diseases, solvents, PCBs, phthalates and phenols used in plastic products, pesticides, brominated flame retardants, alcohol, smoking, illicit drugs, vaccines, and prenatal stress. Some, such as the MMR vaccine, have been completely disproven as a cause.
Disproven vaccine hypothesis
Parents may first become aware of autistic symptoms in their child around the time of a routine vaccination. This has led to unsupported theories blaming vaccine "overload", a vaccine preservative, or the MMR vaccine for causing autism. In 1998, Andrew Wakefield led a fraudulent, litigation-funded study suggesting that the MMR vaccine may cause autism. This conjecture held that autism results from brain damage caused either by the MMR vaccine itself or by thimerosal, a vaccine preservative. These claims are not supported by convincing scientific evidence, are biologically implausible, and are refuted by continuing evidence, including the observation that the rate of autism continues to climb despite the elimination of thimerosal from routine childhood vaccines. A 2014 meta-analysis examined ten major studies on autism and vaccines involving 1.25 million children worldwide; it concluded that neither the MMR vaccine, which has never contained thimerosal, nor the vaccine components thimerosal or mercury lead to the development of ASD. Despite this, misplaced parental concern has led to lower rates of childhood immunizations, outbreaks of previously controlled childhood diseases in some countries, and the preventable deaths of several children.
Etiological hypotheses
Several hypotheses have been presented that try to explain how and why autism develops by integrating known causes (genetic and environmental effects) and findings (neurobiological and somatic). Some are more comprehensive, such as the Pathogenetic Triad, which proposes and operationalizes three core features (an autistic personality, cognitive compensation, neuropathological burden) that interact to cause autism, and the Intense World Theory, which explains autism through a hyperactive neurobiology that leads to increased perception, attention, memory, and emotionality. There are also simpler hypotheses that explain only individual parts of the neurobiology or phenotype of autism, such as mind-blindness (a decreased ability for theory of mind), the weak central coherence theory, or the extreme male brain and empathising-systemising theory.
Evolutionary hypotheses
Research exploring the evolutionary benefits of autism and associated genes has suggested that autistic people may have played a "unique role in technological spheres and understanding of natural systems" in the course of human development. It has been suggested that autism may have arisen as "a slight trade off for other traits that are seen as highly advantageous", providing "advantages in tool making and mechanical thinking", with speculation that the condition may "reveal itself to be the result of a balanced polymorphism, like sickle cell anemia, that is advantageous in a certain mixture of genes and disadvantageous in specific combinations".

In 2011, a paper in Evolutionary Psychology proposed that autistic traits, including increased abilities for spatial intelligence, concentration and memory, could have been naturally selected to enable self-sufficient foraging in a more (although not completely) solitary environment, referred to as the "Solitary Forager Hypothesis". A 2016 paper examines Asperger syndrome as "an alternative pro-social adaptive strategy" which may have developed as a result of the emergence of "collaborative morality" in the context of small-scale hunter-gathering, i.e. where "a positive social reputation for making a contribution to group wellbeing and survival" becomes more important than complex social understanding.

Conversely, some multidisciplinary research suggests that recent human evolution may be a driving force in the rise of a number of medical conditions in recent human populations, including autism. Studies in evolutionary medicine indicate that as biological evolution becomes outpaced by cultural evolution, disorders linked to bodily dysfunction increase in prevalence due to a lack of contact with pathogens and negative environmental conditions that once widely affected ancestral populations. Because natural selection primarily favors reproduction over health and longevity, the lack of this impetus to adapt to certain harmful circumstances creates a tendency for genes in descendant populations to over-express themselves, which may cause a wide array of maladies, ranging from mental illnesses to autoimmune disorders.
Pathophysiology
Autism's symptoms result from maturation-related changes in various systems of the brain. How autism occurs is not well understood. Its mechanism can be divided into two areas: the pathophysiology of brain structures and processes associated with autism, and the neuropsychological linkages between brain structures and behaviors. The behaviors appear to have multiple pathophysiologies.

There is evidence that gut–brain axis abnormalities may be involved. A 2015 review proposed that immune dysregulation, gastrointestinal inflammation, malfunction of the autonomic nervous system, gut flora alterations, and food metabolites may cause brain neuroinflammation and dysfunction. A 2016 review concludes that enteric nervous system abnormalities might play a role in neurological disorders such as autism. Neural connections and the immune system are a pathway that may allow diseases originating in the intestine to spread to the brain.

Several lines of evidence point to synaptic dysfunction as a cause of autism. Some rare mutations may lead to autism by disrupting synaptic pathways, such as those involved with cell adhesion. All known teratogens (agents that cause birth defects) related to the risk of autism appear to act during the first eight weeks from conception, and though this does not exclude the possibility that autism can be initiated or affected later, there is strong evidence that autism arises very early in development.

In general, neuroanatomical studies support the concept that autism may involve a combination of brain enlargement in some areas and reduction in others. These studies suggest that autism may be caused by abnormal neuronal growth and pruning during the early stages of prenatal and postnatal brain development, leaving some areas of the brain with too many neurons and other areas with too few. Some research has reported overall brain enlargement in autism, while other work suggests abnormalities in several areas of the brain, including the frontal lobe, the mirror neuron system, the limbic system, the temporal lobe, and the corpus callosum.

In functional neuroimaging studies, when performing theory of mind and facial emotion response tasks, the median person on the autism spectrum exhibits less activation in the primary and secondary somatosensory cortices of the brain than the median member of a properly sampled control population. This finding coincides with reports demonstrating abnormal patterns of cortical thickness and grey matter volume in those regions of autistic people's brains.
Brain connectivity
The brains of autistic individuals have been observed to have abnormal connectivity, and the degree of these abnormalities directly correlates with the severity of autism. The following are some observed abnormal connectivity patterns in autistic individuals:
Decreased connectivity between different specialized regions of the brain (e.g. lower neuron density in the corpus callosum) and relative over-connectivity within specialized regions of the brain by adulthood. Connectivity between different regions of the brain (long-range connectivity) is important for integration and global processing of information and for comparing incoming sensory information with the existing model of the world within the brain. Connections within each specialized region (short-range connections) are important for processing individual details and modifying the existing model of the world within the brain to more closely reflect incoming sensory information. In infancy, children at high risk for autism who were later diagnosed with autism were observed to have abnormally high long-range connectivity, which then decreased through childhood to eventual long-range under-connectivity by adulthood.
Abnormal preferential processing of information by the left hemisphere of the brain, versus preferential processing of information by the right hemisphere in neurotypical individuals. The left hemisphere is associated with processing information related to details, whereas the right hemisphere is associated with processing information in a more global and integrated sense that is essential for pattern recognition. For example, visual information like face recognition is normally processed by the right hemisphere, which tends to integrate all information from an incoming sensory signal, whereas an ASD brain preferentially processes visual information in the left hemisphere, where information tends to be processed for local details of the face rather than the overall configuration of the face. This left lateralization negatively impacts both facial recognition and spatial skills.
Increased functional connectivity within the left hemisphere which directly correlates with severity of autism. This observation also supports preferential processing of details of individual components of sensory information over global processing of sensory information in an ASD brain.
Prominent abnormal connectivity in the frontal and occipital regions. In autistic individuals, low connectivity in the frontal cortex was observed from infancy through adulthood. This is in contrast to long-range connectivity, which is high in infancy and low in adulthood in ASD. Abnormal neural organization is also observed in Broca's area, which is important for speech production.
Neuropathology
Listed below are some characteristic findings in ASD brains on molecular and cellular levels regardless of the specific genetic variation or mutation contributing to autism in a particular individual:
Limbic system with smaller neurons that are more densely packed together. Given that the limbic system is the main center of emotions and memory in the human brain, this observation may explain social impairment in ASD.
Fewer and smaller Purkinje neurons in the cerebellum. New research suggests a role for the cerebellum in emotional processing and language.
Increased number of astrocytes and microglia in the cerebral cortex. These cells provide metabolic and functional support to neurons and act as immune cells in the nervous system, respectively.
Increased brain size in early childhood, causing macrocephaly in 15–20% of ASD individuals; brain size, however, normalizes by mid-childhood. This variation in brain size is not uniform in the ASD brain, with some parts such as the frontal and temporal lobes being larger, some such as the parietal and occipital lobes being normal sized, and some such as the cerebellar vermis, corpus callosum, and basal ganglia being smaller than in neurotypical individuals.
Cell adhesion molecules that are essential to formation and maintenance of connections between neurons, neuroligins found on postsynaptic neurons that bind presynaptic cell adhesion molecules, and proteins that anchor cell adhesion molecules to neurons are all found to be mutated in ASD.
Gut-immune-brain axis
Between 46% and 84% of autistic individuals have GI-related problems such as reflux, diarrhea, constipation, inflammatory bowel disease, and food allergies. The makeup of the gut bacteria in autistic people has been observed to differ from that of neurotypical individuals, which has raised the question of whether gut bacteria influence ASD development by inducing an inflammatory state. Listed below are some research findings on the influence of gut bacteria and abnormal immune responses on brain development:
Some studies on rodents have shown gut bacteria influencing emotional functions and neurotransmitter balance in the brain, both of which are impacted in ASD.
The immune system is thought to be the intermediary that modulates the influence of gut bacteria on the brain. Some ASD individuals have a dysfunctional immune system with higher numbers of some types of immune cells, biochemical messengers and modulators, and autoimmune antibodies. Increased inflammatory biomarkers correlate with increased severity of ASD symptoms and there is some evidence to support a state of chronic brain inflammation in ASD.
More pronounced inflammatory responses to bacteria were found in ASD individuals with an abnormal gut microbiota. Additionally, immunoglobulin A antibodies, which are central to gut immunity, were found at elevated levels in ASD populations. Some of these antibodies may attack proteins that support myelination of the brain, a process that is important for robust transmission of neural signals in many nerves.
Activation of the maternal immune system during pregnancy (by gut bacteria, bacterial toxins, an infection, or non-infectious causes) and gut bacteria in the mother that induce increased levels of Th17, a pro-inflammatory immune cell, have been associated with an increased risk of autism. Some maternal IgG antibodies that cross the placenta to provide passive immunity to the fetus can also attack the fetal brain.
It is proposed that inflammation within the brain, promoted by inflammatory responses to a harmful gut microbiome, impacts brain development.
The pro-inflammatory cytokines IFN-γ, IFN-α, TNF-α, IL-6 and IL-17 have been shown to promote autistic behaviors in animal models. Giving anti-IL-6 and anti-IL-17 along with IL-6 and IL-17, respectively, has been shown to negate this effect in the same animal models.
Some gut proteins and microbial products can cross the blood–brain barrier and activate mast cells in the brain. Mast cells release pro-inflammatory factors and histamine which further increase blood–brain barrier permeability and help set up a cycle of chronic inflammation.
Mirror neuron system
The mirror neuron system consists of a network of brain areas that have been associated with empathy processes in humans. In humans, the mirror neuron system has been identified in the inferior frontal gyrus and the inferior parietal lobule and is thought to be activated during imitation or observation of behaviors. The connection between mirror neuron dysfunction and autism is tentative, and it remains to be seen how mirror neurons may be related to many of the important characteristics of autism.
Social brain interconnectivity
A number of discrete brain regions and networks among regions that are involved in dealing with other people have been discussed together under the rubric of the social brain. As of 2012, there is a consensus that autism spectrum is likely related to problems with interconnectivity among these regions and networks, rather than problems with any specific region or network.
Temporal lobe
Functions of the temporal lobe are related to many of the deficits observed in individuals with ASD, such as receptive language, social cognition, joint attention, action observation, and empathy. The temporal lobe also contains the superior temporal sulcus and the fusiform face area, which may mediate facial processing. It has been argued that dysfunction in the superior temporal sulcus underlies the social deficits that characterize autism. One study found that, compared to typically developing individuals, individuals with so-called high-functioning autism had reduced activity in the fusiform face area when viewing pictures of faces.
Mitochondria
ASD could be linked to mitochondrial disease, a basic cellular abnormality with the potential to cause disturbances in a wide range of body systems. A 2012 meta-analysis, as well as other population studies, shows that approximately 5% of autistic children meet the criteria for classical mitochondrial dysfunction. It is unclear why this mitochondrial disease occurs, considering that only 23% of children with both ASD and mitochondrial disease present with mitochondrial DNA abnormalities.
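Restated as rough arithmetic (the cohort size is chosen arbitrarily for illustration), these two percentages imply that in most co-occurring cases no mtDNA abnormality is found:

```python
# Rough restatement of the figures above for a hypothetical cohort.
# The cohort size of 1,000 is arbitrary; the 5% and 23% come from the text.
cohort = 1_000
with_mito_dysfunction = cohort * 0.05                  # ~50 children
with_mtdna_abnormality = with_mito_dysfunction * 0.23  # ~12 children

print(f"{with_mito_dysfunction:.0f} meet mitochondrial dysfunction criteria; "
      f"only ~{with_mtdna_abnormality:.0f} of them show mtDNA abnormalities")
```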
Serotonin
Serotonin is a major neurotransmitter in the nervous system and contributes to the formation of new neurons (neurogenesis), the formation of new connections between neurons (synaptogenesis), the remodeling of synapses, and the survival and migration of neurons, processes that are necessary for the developing brain, some of which are also necessary for learning in the adult brain. About 45% of ASD individuals have been found to have increased blood serotonin levels. It has been hypothesized that increased activity of serotonin in the developing brain may facilitate the onset of ASD; an association was found in six out of eight studies between the use of selective serotonin reuptake inhibitors (SSRIs) by the pregnant mother and the development of ASD in the child exposed to SSRIs in the antenatal environment. These studies could not definitively conclude that SSRIs caused the increased risk for ASD, due to the biases found in them, and the authors called for more definitive, better-conducted studies. Confounding by indication has since been shown to be likely. However, it is also hypothesized that SSRIs may help reduce symptoms of ASD and even positively affect brain development in some ASD patients.
Diagnosis
Autism Spectrum Disorder is a clinical diagnosis that is typically made by a physician based on reported and directly observed behavior in the affected individual. According to the updated diagnostic criteria in the DSM-5-TR, in order to receive a diagnosis of Autism Spectrum Disorder, one must present with "persistent deficits in social communication and social interaction" and "restricted, repetitive patterns of behavior, interests, or activities". These behaviors must begin in early childhood and affect one's ability to perform everyday tasks. Furthermore, the symptoms must not be fully explainable by intellectual developmental disorder or global developmental delay.
Several factors make Autism Spectrum Disorder difficult to diagnose. First, there are no standardized imaging, molecular or genetic tests that can be used to diagnose ASD. Additionally, there is considerable variety in how ASD affects individuals; the behavioral manifestations of ASD depend on one's developmental stage, age of presentation, current support, and individual variability. Lastly, multiple conditions may present similarly to Autism Spectrum Disorder, including intellectual disability, hearing impairment, specific language impairments such as Landau–Kleffner syndrome, ADHD, anxiety disorder, and psychotic disorders. Furthermore, the presence of autism can make it harder to diagnose coexisting psychiatric disorders such as depression.

Ideally, the diagnosis of ASD should be given by a team of clinicians (e.g. pediatricians, child psychiatrists, child neurologists) based on information provided by the affected individual, caregivers and other medical professionals, and on direct observation. Evaluation of a child or adult for Autism Spectrum Disorder typically starts with a pediatrician or primary care physician taking a developmental history and performing a physical exam. If warranted, the physician may refer the individual to an ASD specialist, who will observe and assess cognitive, communication, family, and other factors using standardized tools, taking into account any associated medical conditions. A pediatric neuropsychologist is often asked to assess behavior and cognitive skills, both to aid diagnosis and to help recommend educational interventions. Further workup may be performed after someone is diagnosed with ASD. This may include a clinical genetics evaluation, particularly when other symptoms already suggest a genetic cause. Although up to 40% of ASD cases may be linked to genetic causes, it is not currently recommended to perform complete genetic testing on every individual who is diagnosed with ASD. Consensus guidelines for genetic testing in patients with ASD in the US and UK are limited to high-resolution chromosome and fragile X testing. Metabolic and neuroimaging tests are also not routinely performed for diagnosis of ASD.

The age at which ASD is diagnosed varies. Sometimes ASD can be diagnosed as early as 14 months; however, a reliable diagnosis is usually made at the age of two years, and diagnosis becomes increasingly stable over the first three years of life. For example, a one-year-old who meets diagnostic criteria for ASD is less likely than a three-year-old to continue to do so a few years later. Additionally, the age of diagnosis may depend on the severity of ASD, with more severe forms more likely to be diagnosed at an earlier age. Issues with access to healthcare, such as the cost of appointments or delays in making appointments, often lead to delays in the diagnosis of ASD. In the UK, the National Autism Plan for Children recommends at most 30 weeks from first concern to completed diagnosis and assessment, though few cases are handled that quickly in practice. Lack of access to appropriate medical care, broadening diagnostic criteria and increased awareness surrounding ASD in recent years have resulted in an increased number of individuals receiving a diagnosis of ASD as adults.
Diagnosis of ASD in adults poses unique challenges because it still relies on an accurate developmental history and because autistic adults sometimes learn coping strategies (known as camouflaging) which may make it more difficult to obtain a diagnosis.

The presentation and diagnosis of Autism Spectrum Disorder may vary based on sex and gender identity. Most studies that have investigated the impact of gender on the presentation and diagnosis of Autism Spectrum Disorder have not differentiated between the impact of sex versus gender. There is some evidence that autistic women and girls tend to show less repetitive behavior and may engage in more camouflaging than autistic males. Camouflaging may include making oneself perform normative facial expressions and eye contact. Differences in behavioral presentation and gender stereotypes may make it more challenging to diagnose autism spectrum disorder in a timely manner in females. A notable percentage of autistic females may be misdiagnosed, diagnosed after a considerable delay, or not diagnosed at all.

Considering the unique challenges in diagnosing ASD using behavioral and observational assessment, specific US practice parameters for its assessment were published by the American Academy of Neurology in 2000, the American Academy of Child and Adolescent Psychiatry in 1999, and a consensus panel with representation from various professional societies in 1999. The practice parameters outlined by these societies include an initial screening of children by general practitioners (i.e., "Level 1 screening") and, for children who fail the initial screening, a comprehensive diagnostic assessment by experienced clinicians (i.e., "Level 2 evaluation"). Furthermore, it has been suggested that assessments of children with suspected ASD be conducted within a developmental framework, include multiple informants (e.g., parents and teachers) from diverse contexts (e.g., home and school), and employ a multidisciplinary team of professionals (e.g., clinical psychologists, neuropsychologists, and psychiatrists).

As of 2019, psychologists typically wait until a child shows initial evidence of ASD tendencies, then administer various psychological assessment tools to assess for ASD. Among these measures, the Autism Diagnostic Interview-Revised (ADI-R) and the Autism Diagnostic Observation Schedule (ADOS) are considered the "gold standards" for assessing autistic children. The ADI-R is a semi-structured parent interview that probes for symptoms of autism by evaluating a child's current behavior and developmental history. The ADOS is a semistructured interactive evaluation of ASD symptoms that is used to measure social and communication abilities by eliciting several opportunities for spontaneous behaviors (e.g., eye contact) in standardized contexts. Various other questionnaires (e.g., the Childhood Autism Rating Scale, the Autism Treatment Evaluation Checklist) and tests of cognitive functioning (e.g., the Peabody Picture Vocabulary Test) are typically included in an ASD assessment battery. The Diagnostic Interview for Social and Communication Disorders (DISCO) may also be used.
Screening
About half of parents of children with ASD notice their child's atypical behaviors by age 18 months, and about four-fifths notice by age 24 months. If a child does not meet any of the following milestones, it "is an absolute indication to proceed with further evaluations. Delay in referral for such testing may delay early diagnosis and treatment and affect the [child's] long-term outcome."
No response to name (or gazing with direct eye contact) by 6 months.
No babbling by 12 months.
No gesturing (pointing, waving, etc.) by 12 months.
No single words by 16 months.
No two-word (spontaneous, not just echolalic) phrases by 24 months.
Loss of any language or social skills, at any age.

The Japanese practice is to screen all children for ASD at 18 and 24 months, using autism-specific formal screening tests. In contrast, in the UK, children whose families or doctors recognize possible signs of autism are screened; it is not known which approach is more effective. The UK National Screening Committee does not recommend universal ASD screening in young children; its main concerns include higher chances of misdiagnosis at younger ages and a lack of evidence of the effectiveness of early interventions. There is no consensus between professional and expert bodies in the US on screening for autism in children younger than 3 years.

Screening tools include the Modified Checklist for Autism in Toddlers (M-CHAT), the Early Screening of Autistic Traits Questionnaire, and the First Year Inventory; initial data on M-CHAT and its predecessor, the Checklist for Autism in Toddlers (CHAT), in children aged 18–30 months suggest that these tools are best used in a clinical setting and have low sensitivity (many false negatives) but good specificity (few false positives); a worked example of what this trade-off means in practice follows below. It may be more accurate to precede these tests with a broadband screener that does not distinguish ASD from other developmental disorders. Screening tools designed for one culture's norms for behaviors like eye contact may be inappropriate for a different culture. Although genetic screening for autism is generally still impractical, it can be considered in some cases, such as children with neurological symptoms and dysmorphic features.
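To make "low sensitivity but good specificity" concrete, the sketch below works through a screening scenario with invented performance numbers; none of the figures are reported values for CHAT or M-CHAT.

```python
# Hypothetical numbers illustrating "low sensitivity (many false
# negatives) but good specificity (few false positives)" for a
# screener. All three rates below are invented for illustration,
# not reported performance figures for any real instrument.
sensitivity = 0.40   # assumed: 40% of true ASD cases flagged
specificity = 0.98   # assumed: 98% of non-ASD children pass
prevalence  = 0.02   # assumed: 2% of screened toddlers have ASD

n = 10_000
cases     = n * prevalence                 # 200 children with ASD
non_cases = n - cases                      # 9,800 without
true_pos  = cases * sensitivity            # 80 flagged correctly
false_neg = cases - true_pos               # 120 missed (low sensitivity)
false_pos = non_cases * (1 - specificity)  # 196 flagged incorrectly
ppv = true_pos / (true_pos + false_pos)    # chance a positive screen is real

print(f"missed cases: {false_neg:.0f}, false alarms: {false_pos:.0f}, PPV: {ppv:.0%}")
# missed cases: 120, false alarms: 196, PPV: 29%
```

Under these assumed numbers, most affected children are missed and fewer than a third of positive screens are true cases, which is why a positive screen is followed by comprehensive diagnostic assessment rather than treated as a diagnosis.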
Misdiagnosis
There is a significant level of misdiagnosis of autism in neurodevelopmentally typical children; 18–37% of children diagnosed with ASD eventually lose their diagnosis. This high rate of lost diagnoses cannot be accounted for by successful ASD treatment alone. The most common reason parents reported as the cause of a lost ASD diagnosis was new information about the child (73.5%), such as a replacement diagnosis. Other reasons included a diagnosis given so the child could receive ASD treatment (24.2%), ASD treatment success or maturation (21%), and parents disagreeing with the initial diagnosis (1.9%).

Many of the children who were later found not to meet ASD diagnostic criteria then received a diagnosis of another developmental disorder. Most common was ADHD, but other diagnoses included sensory disorders, anxiety, personality disorder, and learning disability. Neurodevelopmental and psychiatric disorders that are commonly misdiagnosed as ASD include specific language impairment, social communication disorder, anxiety disorder, reactive attachment disorder, cognitive impairment, visual impairment, hearing loss and normal behavioral variation. Some behavioral variations that resemble autistic traits are repetitive behaviors, sensitivity to change in daily routines, focused interests, and toe-walking; these are considered normal behavioral variations when they do not cause impaired function. Boys are more likely to exhibit repetitive behaviors, especially when excited, tired, bored, or stressed. Some ways of distinguishing typical behavioral variations from autistic behaviors are the child's ability to suppress these behaviors and the absence of these behaviors during sleep.
Comorbidity
ASDs tend to be highly comorbid with other disorders. Comorbidity may increase with age, may worsen the clinical course for youth with ASDs, and can make intervention and treatment more difficult. Distinguishing between ASDs and other diagnoses can be challenging because the traits of ASDs often overlap with symptoms of other disorders, and the characteristics of ASDs make traditional diagnostic procedures difficult.
The most common medical condition occurring in individuals with ASDs is seizure disorder or epilepsy, which occurs in 11–39% of autistic individuals. The risk varies with age, cognitive level, and type of language disorder.
Tuberous sclerosis, an autosomal dominant genetic condition in which non-malignant tumors grow in the brain and on other vital organs, is present in 1–4% of individuals with ASDs.
Intellectual disabilities are some of the most common comorbid disorders with ASDs. Recent estimates suggest that 40–69% of autistic individuals have some degree of intellectual disability, which is more likely to be severe in females. A number of genetic syndromes causing intellectual disability may also be comorbid with ASD, including fragile X syndrome, Down syndrome, Prader-Willi syndrome, Angelman syndrome, Williams syndrome, and SYNGAP1-related intellectual disability.
Learning disabilities are also highly comorbid in individuals with an ASD. Approximately 25–75% of individuals with an ASD also have some degree of a learning disability.
Various anxiety disorders tend to co-occur with ASDs and are common among children with ASD; there are no firm data, but studies have reported comorbidity rates ranging from 7% to 84%. Many anxiety disorders have symptoms that are better explained by ASD itself, or are hard to distinguish from the symptoms of ASD.
Rates of comorbid depression in individuals with an ASD range from 4–58%.
The relationship between ASD and schizophrenia remains a controversial subject under continued investigation, and recent meta-analyses have examined genetic, environmental, infectious, and immune risk factors that may be shared between the two conditions. Oxidative stress, DNA damage and DNA repair have been postulated to play a role in the aetiopathology of both ASD and schizophrenia.
Deficits associated with ASD are often linked to behavior problems, such as difficulty following directions, being cooperative, and doing things on other people's terms. Symptoms similar to those of attention deficit hyperactivity disorder (ADHD) can be part of an ASD diagnosis.
Sensory processing disorder is also comorbid with ASD, with comorbidity rates of 42–88%.
Starting in adolescence, some people with Asperger syndrome (26% in one sample) fall under the criteria for the similar condition schizoid personality disorder, which is characterized by a lack of interest in social relationships, a tendency towards a solitary or sheltered lifestyle, secretiveness, emotional coldness, detachment and apathy. Asperger syndrome was traditionally called "schizoid disorder of childhood."
Genetic disorders – about 10–15% of autism cases have an identifiable Mendelian (single-gene) condition, chromosome abnormality, or other genetic syndrome.
Several metabolic defects, such as phenylketonuria, are associated with autistic symptoms.
Sleep problems affect about two-thirds of individuals with ASD at some point in childhood. These most commonly include symptoms of insomnia such as difficulty in falling asleep, frequent nocturnal awakenings, and early morning awakenings. Sleep problems are associated with difficult behaviors and family stress, and are often a focus of clinical attention over and above the primary ASD diagnosis.
Management
There is no treatment as such for autism, and many sources advise that this is not an appropriate goal, although treatment of co-occurring conditions remains important. There is no known cure for autism, nor can any known treatment significantly reduce the brain differences associated with it, although those who require little-to-no support are more likely to experience a lessening of symptoms over time. Several interventions can help children with autism, and no single treatment is best; treatment is typically tailored to the child's needs. Studies of interventions have methodological problems that prevent definitive conclusions about efficacy; however, the development of evidence-based interventions has advanced.
The main goals of treatment are to lessen associated deficits and family distress, and to increase quality of life and functional independence. In general, higher IQs are correlated with greater responsiveness to treatment and improved treatment outcomes. Behavioral, psychological, educational, and/or skill-building interventions may be used to help autistic people learn the life skills necessary for living independently, as well as other social, communication, and language skills. Therapy also aims to reduce challenging behaviors and build upon strengths.
Intensive, sustained special education programs and behavior therapy early in life can help children acquire self-care, communication, and job skills. Although evidence-based interventions for autistic children vary in their methods, many adopt a psychoeducational approach to enhancing cognitive, communication, and social skills while minimizing problem behaviors. While medications have not been found to help with core symptoms, they may be used for associated symptoms, such as irritability, inattention, or repetitive behavior patterns.
Non-pharmacological interventions
Intensive, sustained special education or remedial education programs and behavior therapy early in life can help children acquire self-care, social, and job skills. Available approaches include applied behavior analysis, developmental models, structured teaching, speech and language therapy, social skills therapy, and occupational therapy. Among these approaches, interventions either treat autistic features comprehensively, or focus treatment on a specific area of deficit. Generally, when educating those with autism, specific tactics may be used to effectively relay information to these individuals. Using as much social interaction as possible is key to targeting the inhibition autistic individuals experience concerning person-to-person contact. Additionally, research has shown that employing semantic groupings, which involves assigning words to typical conceptual categories, can be beneficial in fostering learning.
There has been increasing attention to the development of evidence-based interventions for autistic young children. Two theoretical frameworks outlined for early childhood intervention are applied behavior analysis (ABA) and the developmental social-pragmatic model (DSP). Although ABA therapy has a strong evidence base, particularly in regard to early intensive home-based therapy, ABA's effectiveness may be limited by the diagnostic severity and IQ of the person affected by ASD. The Journal of Clinical Child and Adolescent Psychology has deemed two early childhood interventions "well-established": individual comprehensive ABA, and focused teacher-implemented ABA combined with DSP.
Another evidence-based intervention that has demonstrated efficacy is the parent training model, which teaches parents how to implement various ABA and DSP techniques themselves. Various DSP programs have been developed to deliver intervention systems explicitly through at-home parent implementation.
In October 2015, the American Academy of Pediatrics (AAP) proposed new evidence-based recommendations for early interventions in ASD for children under 3. These recommendations emphasize early involvement with both developmental and behavioral methods, support by and for parents and caregivers, and a focus on both the core and associated symptoms of ASD. However, a Cochrane review found no evidence that early intensive behavioral intervention (EIBI) is effective in reducing behavioral problems associated with autism in most autistic children, although it did help improve IQ and language skills. The Cochrane review acknowledged that this may be due to the low quality of the studies currently available on EIBI, and providers should therefore recommend EIBI based on their clinical judgement and the family's preferences. No adverse effects of EIBI treatment were found.
Generally speaking, treatment of ASD focuses on behavioral and educational interventions to target its two core symptoms: social communication deficits and restricted, repetitive behaviors. If symptoms continue after behavioral strategies have been implemented, some medications can be recommended to target specific symptoms or co-existing problems such as restricted and repetitive behaviors (RRBs), anxiety, depression, hyperactivity/inattention, and sleep disturbance. Melatonin, for example, can be used for sleep problems. While there are a number of parent-mediated behavioral therapies that target social communication deficits in children with autism, there is uncertainty regarding the efficacy of interventions to treat RRBs.
Education
Educational interventions often used include applied behavior analysis (ABA), developmental models, structured teaching, speech and language therapy, and social skills therapy. Among these approaches, interventions either treat autistic features comprehensively, or focus treatment on a specific area of deficit. The quality of research on early intensive behavioral intervention (EIBI)—a treatment procedure incorporating over thirty hours per week of the structured type of ABA, carried out with very young children—is currently low, and more rigorous research designs with larger sample sizes are needed. Two theoretical frameworks outlined for early childhood intervention are structured and naturalistic ABA interventions and developmental social-pragmatic models (DSP). One interventional strategy uses a parent training model, which teaches parents how to implement various ABA and DSP techniques so that they can deliver interventions themselves. Various DSP programs have been developed to deliver intervention systems explicitly through at-home parent implementation. Although parent training models are a recent development, these interventions have demonstrated effectiveness in numerous studies and have been evaluated as a probably efficacious mode of treatment. Early, intensive ABA therapy has demonstrated effectiveness in enhancing communication and adaptive functioning in preschool children; it is also well-established for improving the intellectual performance of that age group.
Similarly, a teacher-implemented intervention that uses a more naturalistic form of ABA combined with a developmental social-pragmatic approach has been found to be beneficial in improving social-communication skills in young children, although there is less evidence for its treatment of global symptoms. Neuropsychological reports are often poorly communicated to educators, resulting in a gap between what a report recommends and what education is provided. The appropriateness of including children with varying severity of autism spectrum disorders in the general education population is a subject of current debate among educators and researchers.
Pharmacological interventions
Medications may be used to treat ASD symptoms that interfere with integrating a child into home or school when behavioral treatment fails. They may also be used for associated health problems, such as ADHD or anxiety. However, their routine prescription for the core features of ASD is not recommended. More than half of US children diagnosed with ASD are prescribed psychoactive drugs or anticonvulsants, with the most common drug classes being antidepressants, stimulants, and antipsychotics. The atypical antipsychotic drugs risperidone and aripiprazole are FDA-approved for treating associated aggressive and self-injurious behaviors. However, their side effects must be weighed against their potential benefits, and autistic people may respond atypically. Side effects may include weight gain, tiredness, drooling, and aggression. There are some emerging data showing positive effects of aripiprazole and risperidone on restricted and repetitive behaviors (i.e., stimming; e.g., flapping, twisting, complex whole-body movements), but due to the small sample sizes and different foci of these studies, and concerns about side effects, antipsychotics are not recommended as a primary treatment of RRBs. SSRI antidepressants, such as fluoxetine and fluvoxamine, have been shown to be effective in reducing repetitive and ritualistic behaviors, while the stimulant medication methylphenidate is beneficial for some children with comorbid inattentiveness or hyperactivity. There is scant reliable research about the effectiveness or safety of drug treatments for adolescents and adults with ASD. No known medication relieves autism's core symptoms of social and communication impairments.
Alternative medicine
Many alternative therapies have been studied and implemented, and many of them have resulted in harm to autistic people. A 2020 systematic review of adults with autism found emerging evidence that mindfulness-based interventions improve mental health by decreasing stress, anxiety, ruminating thoughts, anger, and aggression.
Although popularly used as an alternative treatment for autistic people, as of 2018 there is no good evidence to recommend a gluten- and casein-free diet as a standard treatment. A 2018 review concluded that it may be a therapeutic option for specific groups of children with autism, such as those with known food intolerances or allergies, or with food-intolerance markers. The authors analyzed the prospective trials conducted to date that studied the efficacy of the gluten- and casein-free diet in children with ASD (4 in total). All of them compared a gluten- and casein-free diet with a normal diet in a control group (2 double-blind randomized controlled trials, 1 double-blind crossover trial, 1 single-blind trial). In two of the studies, whose duration was 12 and 24 months, a significant improvement in ASD symptoms (efficacy rate 50%) was identified. In the other two studies, whose duration was 3 months, no significant effect was observed. The authors concluded that a longer duration of the diet may be necessary to achieve improvement of ASD symptoms. Other problems documented in the trials included transgressions of the diet, small sample sizes, the heterogeneity of the participants, and the possibility of a placebo effect. In the subset of people who have gluten sensitivity, there is limited evidence suggesting that a gluten-free diet may improve some autistic behaviors.
The preference of autistic children for unconventional foods can lead to reduced bone cortical thickness, a risk that is greater in those on casein-free diets as a consequence of low intake of calcium and vitamin D; however, suboptimal bone development in ASD has also been associated with lack of exercise and gastrointestinal disorders. In 2005, botched chelation therapy killed a five-year-old child with autism. Chelation is not recommended for autistic people, since the associated risks outweigh any potential benefits. Another alternative medicine practice with no supporting evidence is CEASE therapy, a pseudoscientific mixture of homeopathy, supplements, and vaccine "detoxing".
An updated Cochrane review (2022) found evidence that music therapy likely improves social interactions, verbal communication, and non-verbal communication skills. There has been early research looking at hyperbaric treatments in children with autism. Studies on pet therapy have shown positive effects.
Prevention
While infection with rubella during pregnancy causes fewer than 1% of cases of autism, vaccination against rubella can prevent many of those cases.
Outcomes
There is no known cure for autism. The degree of symptoms can decrease, occasionally to the extent that people lose their diagnosis of ASD; this occurs sometimes after intensive treatment and sometimes not. It is not known how often this outcome happens, with reported rates in unselected samples ranging from 3% to 25%. Although core difficulties tend to persist, symptoms often become less severe with age. Acquiring language before age six, having an IQ above 50, and having a marketable skill all predict better outcomes; independent living is unlikely with severe autism.
Many autistic people face significant obstacles in transitioning to adulthood. Compared to the general population, autistic people are more likely to be unemployed and to have never had a job. About half of people in their 20s with autism are not employed. Some autistic adults are unable to live independently.
Academic performance
The number of students identified and served as eligible for autism services in the United States increased from 5,413 children in 1991–1992 to 370,011 children in the 2010–2011 academic year. The United States Department of Health and Human Services reported that approximately 1 in 68 children is diagnosed with ASD at age 8, although onset is typically between ages 2 and 4.
The increasing number of students diagnosed with ASD in the schools presents significant challenges to teachers, school psychologists, and other school professionals. These challenges include developing consistent practices that best support the social and cognitive development of the increasing number of autistic students. Although there is considerable research addressing assessment, identification, and support services for autistic children, further research is needed on these topics within the school context. Further research on appropriate support services for students with ASD would give school psychologists and other education professionals specific directions for advocacy and service delivery that aim to enhance school outcomes for students with ASD.
Attempts to identify and use best intervention practices for students with autism also pose a challenge, due to overdependence on popular or well-known interventions and curricula. Some evidence suggests that although these interventions work for some students, there remains a lack of specificity about which type of student they work best for, under what environmental conditions (one-on-one, specialized instruction, or general education), and for which targeted deficits. More research is needed to identify which assessment methods are most effective for identifying the level of educational need of students with ASD. Additionally, children living in higher-resource settings in the United States tend to receive ASD interventions earlier than children in lower-resource settings (e.g., rural areas).
A difficulty for academic performance in students with ASD is the tendency to generalize learning. Learning differs for each student, and this is equally true of students with ASD. To assist in learning, accommodations are commonly put in place for students with differing abilities. These students' existing schemata work in different ways and can be adjusted to best support each student's educational development.
The cost of educating a student with ASD in the US is about $8,600 a year more than the cost of educating an average student, which is about $12,000.
Though much of the focus on early childhood intervention for ASD has centered on high-income countries like the United States, some of the most significant unmet needs for autistic individuals are in low- and middle-income countries. In these contexts, research has been more limited, but there is evidence to suggest that some comprehensive care plans can be successfully delivered by non-specialists in schools and in the community.
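The per-student cost figures above amount to a simple sum. A minimal sketch of the arithmetic follows; the district enrollment number is a hypothetical placeholder, not a figure from the text.

# Back-of-the-envelope estimate from the figures in the text: educating
# a student with ASD costs about $8,600/year more than the roughly
# $12,000/year spent on an average student.
base_cost = 12_000       # average per-student annual cost (USD)
asd_increment = 8_600    # additional annual cost for a student with ASD

per_asd_student = base_cost + asd_increment
print(f"Per ASD student: ${per_asd_student:,}/year")   # $20,600/year

# Hypothetical district serving 150 students eligible for autism services:
students = 150
print(f"Added district cost: ${students * asd_increment:,}/year")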
Employment
In the United States, about half of people in their 20s with autism are unemployed, and one third of those with graduate degrees may be unemployed. While employers state hiring concerns about productivity and supervision, employers with experience of autistic workers give positive reports of above-average memory and detail orientation, as well as a high regard for rules and procedure, in autistic employees. The majority of the economic burden of autism is caused by lost productivity in the job market. From the perspective of the social model of disability, much of this unemployment is caused by a lack of understanding from employers and coworkers. Adding autism-related content to existing diversity training can clarify misconceptions, support employees, and help provide new opportunities for autistic people. As of 2021, the potential for new autism employment initiatives by major employers in the United States continues to grow. The most high-profile autism initiative in the United States, "Autism at Work", grew to 20 of the largest companies in the United States. However, special hiring programs remain largely limited to entry-level technology positions, such as software testing, and exclude those who have talents outside of technology. An alternative approach is systemic neurodiversity inclusion: developing organizational systems with enough flexibility and fairness to include autistic employees improves the work experience of all employees.
Epidemiology
The World Health Organization (WHO) estimates that about 1 in 100 children have autism. The number of people diagnosed has increased considerably since the 1990s, which may be partly due to increased recognition of the condition.
While rates of ASD are consistent across cultures, they vary greatly by gender, with boys diagnosed far more frequently than girls: 1 in 70 boys, but only 1 in 315 girls, at eight years of age. Girls, however, are more likely to have associated cognitive impairment, suggesting that less severe forms of ASD are likely being missed in girls and women. Prevalence differences may be a result of gender differences in the expression of clinical symptoms, with autistic women and girls showing fewer atypical behaviors and, therefore, being less likely to receive an ASD diagnosis.
Using DSM-5 criteria, 92% of the children diagnosed per DSM-IV with one of the disorders now considered part of ASD will still meet the diagnostic criteria of ASD. However, if both the ASD and the social (pragmatic) communication disorder categories of DSM-5 are combined, the prevalence of autism is mostly unchanged from the prevalence per the DSM-IV criteria. The best estimate for the prevalence of ASD is 0.7%, or 1 child in 143. Relatively mild forms of autism, such as Asperger's, as well as other developmental disorders, are included in the DSM-5 diagnostic criteria. ASD rates were constant between 2014 and 2016, but twice the rate of the period between 2011 and 2014 (1.25% vs 2.47%). A Canadian meta-analysis from 2019 confirmed these effects, as the profiles of people diagnosed with autism became less and less different from the profiles of the general population. In the US, rates of diagnosed ASD have been increasing steadily since 2000, when records began being kept. While it remains unclear whether this trend represents a true rise in incidence, it likely reflects changes in ASD diagnostic criteria, improved detection, and increased public awareness of autism. In 2012, the NHS estimated that the overall prevalence of autism among adults aged 18 years and over in the UK was 1.1%. A 2016 survey in the United States reported a rate of 25 per 1,000 children for ASD. Rates of autism are poorly understood in many low- and middle-income countries, which affects the accuracy of global ASD prevalence estimates, but it is thought that most autistic individuals live in low- and middle-income countries.
In the UK, from 1998 to 2018, autism diagnoses increased by 787%. This increase is largely attributable to changes in diagnostic practices, referral patterns, availability of services, age at diagnosis, and public awareness (particularly among women), though unidentified environmental risk factors cannot be ruled out. The available evidence does not rule out the possibility that autism's true prevalence has increased; a real increase would suggest directing more attention and funding toward psychosocial and changing environmental factors instead of continuing to focus on genetics. It has been established that vaccination is not a risk factor for autism and is not a cause of any increase in autism prevalence rates, if any change in the rate of autism exists at all.
Males have a higher likelihood of being diagnosed with ASD than females. The sex ratio averages 4.3:1 and is greatly modified by cognitive impairment: it may be close to 2:1 with intellectual disability and more than 5.5:1 without.
Several theories about the higher prevalence in males have been investigated, but the cause of the difference is unconfirmed; one theory is that females are underdiagnosed. The risk of developing autism is greater with older fathers than with older mothers; two potential explanations are the known increase in mutation burden in older sperm, and the hypothesis that men marry later if they carry genetic liability and show some signs of autism. Most professionals believe that race, ethnicity, and socioeconomic background do not affect the occurrence of autism.
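The sex-specific rates quoted above can be cross-checked against the stated average ratio. A minimal sketch, using only the figures given in the text:

# Cross-check: do the quoted sex-specific prevalences roughly match the
# average male-to-female diagnostic ratio of 4.3:1?
boys_rate = 1 / 70     # 1 in 70 boys at age eight
girls_rate = 1 / 315   # 1 in 315 girls at age eight

implied_ratio = boys_rate / girls_rate
print(f"Implied male:female ratio = {implied_ratio:.1f}:1")  # 4.5:1

# This is close to the stated 4.3:1 average; a small gap is expected,
# since the 4.3:1 figure averages over studies with varying methods.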
United States
In the United States it is estimated to affect more than 2% of children (about 1.5 million) as of 2016. According to the latest CDC prevalence reports, 1 in 44 children (2.3%) in the United States had a diagnosis of ASD in 2018. Prevalence is estimated at 6 per 1,000 for ASDs as a whole.
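The different notations used for prevalence in this section (percentages, "1 in N", and "per 1,000") all convert into one another. A small helper, applied to the figures quoted in the text:

# Convert between the prevalence notations used in this section.
def one_in_n(n):
    """'1 in n' expressed as a percentage."""
    return 100.0 / n

print(f"1 in 44  -> {one_in_n(44):.2f}%")    # ~2.27%, i.e. the 2.3% CDC figure
print(f"1 in 143 -> {one_in_n(143):.2f}%")   # ~0.70%, the 0.7% estimate
print(f"6 per 1,000 -> {6 / 1000:.1%}")      # 0.6%, the ASD-wide estimate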
History
A few examples of autistic symptoms and treatments were described long before autism was named. The Table Talk of Martin Luther, compiled by his notetaker Mathesius, contains the story of a 12-year-old boy who may have been severely autistic. The earliest well-documented case of autism is that of Hugh Blair of Borgue, as detailed in a 1747 court case in which his brother successfully petitioned to annul Blair's marriage to gain Blair's inheritance. The Wild Boy of Aveyron, a feral child found in 1798, showed several signs of autism. He was non-verbal during his teenage years, and his case attracted wide public attention at the time. Such cases brought awareness to autism, and more research was conducted on the natural dimensions of human behavior. The medical student Jean Itard treated him with a behavioral program designed to help him form social attachments and to induce speech via imitation.
The New Latin word autismus (English translation autism) was coined by the Swiss psychiatrist Eugen Bleuler in 1910 as he was defining symptoms of schizophrenia. He derived it from the Greek word autós (αὐτός), meaning "self", and used it to mean morbid self-admiration, referring to "autistic withdrawal of the patient to his fantasies, against which any influence from outside becomes an intolerable disturbance". A Soviet child psychiatrist, Grunya Sukhareva, described a similar syndrome in Russian in 1925, and in German in 1926.
Clinical development and diagnoses
Autism as it is known today can be traced back to the late 1930s, when two separate psychiatrists – Hans Asperger of the Vienna University Hospital and Leo Kanner of the Johns Hopkins Hospital – used the word autism to describe the patients they were studying in their own clinical research. The word autism first took its modern sense in German, when Asperger adopted Bleuler's terminology autistic psychopaths in a 1938 lecture in German about child psychology. Asperger was investigating an ASD that later became known as Asperger syndrome, although it did not become widely recognized as a separate diagnosis until 1981. In English, Kanner first used autism in its modern sense when he introduced the label early infantile autism in a 1943 report of 11 children with striking behavioral similarities. Almost all the characteristics described in Kanner's first paper on the subject, notably "autistic aloneness" and "insistence on sameness", are still regarded as typical of the autistic spectrum of disorders. It is not known whether Kanner derived the term independently of Asperger.
Kanner's reuse of autism led to decades of confused terminology like infantile schizophrenia, and child psychiatry's focus on maternal deprivation led to misconceptions of autism as an infant's response to "refrigerator mothers". Starting in the late 1960s, autism was established as a separate syndrome. Kanner had been the first person to describe ASD as a neurodevelopmental disorder, in 1943, by calling it infantile autism, and he therefore rejected the refrigerator mother theory.
Discussion of autism prior to the twentieth century remains controversial. Because researchers were unable to reach a consensus on the condition's varying forms, little research was conducted on the disorder, and the syndrome's complexity frustrated researchers.
Controversies have surrounded various claims regarding the etiology of ASDs. In the 1950s, the refrigerator mother theory emerged as an explanation for autism. The hypothesis was based on the idea that autistic behaviors stem from the emotional frigidity, lack of warmth, and cold, distant, rejecting demeanor of a child's mother. Parents of children with an ASD experienced blame, guilt, and self-doubt, especially as the theory was embraced by the medical establishment and went largely unchallenged into the mid-1960s. The refrigerator mother theory has since been repeatedly refuted in the scientific literature, including by a 2015 systematic review which showed no association between caregiver interaction and language outcomes in ASD. Another controversial claim suggested that watching extensive amounts of television may cause autism. This hypothesis was largely based on research suggesting that the increasing rates of autism in the 1970s and 1980s were linked to the growth of cable television at this time.
Terminology and distinction from schizophrenia
As late as the mid-1970s there was little evidence of a genetic role in autism; by 2007, however, it was recognized as one of the most heritable psychiatric conditions. Although the rise of parent organizations and the destigmatization of childhood ASD have affected how ASD is viewed, parents continue to feel social stigma in situations where their child's autistic behavior is perceived negatively, and many primary care physicians and medical specialists express beliefs consistent with outdated autism research.
It took until 1980 for the DSM-III to differentiate autism from childhood schizophrenia. In 1987, the DSM-III-R provided a checklist for diagnosing autism. In May 2013, the DSM-5 was released, updating the classification for pervasive developmental disorders. The grouping of disorders, including PDD-NOS, autism, Asperger syndrome, Rett syndrome, and CDD, was removed and replaced with the general term autism spectrum disorders. The two categories that exist are impaired social communication and/or interaction, and restricted and/or repetitive behaviors.
The Internet has helped autistic individuals bypass nonverbal cues and emotional sharing that they find difficult to deal with, and has given them a way to form online communities and work remotely. Societal and cultural aspects of autism have developed: some in the community seek a cure, while others believe that autism is simply another way of being.
Society and culture
An autistic culture has emerged, accompanied by the autistic rights and neurodiversity movements, which argue that autism should be accepted as a difference to be accommodated instead of cured, although a minority of autistic individuals continue to seek a cure. Worldwide, events related to autism include World Autism Awareness Day, Autism Sunday, Autistic Pride Day, Autreat, and others. Social-science scholars study those with autism in hopes of learning more about "autism as a culture, transcultural comparisons ... and research on social movements." Many autistic individuals have been successful in their fields.
Neurodiversity movement
The neurodiversity movement and the autism rights movement are social movements within the context of disability rights, emphasizing the concept of neurodiversity, which describes the autism spectrum as a result of natural variations in the human brain rather than a disorder to be cured. The autism rights movement advocates for greater acceptance of autistic behaviors; therapies that focus on coping skills rather than on imitating the behaviors of those without autism; and the recognition of the autistic community as a minority group. Autism rights or neurodiversity advocates believe that the autism spectrum is genetic and should be accepted as a natural expression of the human genome. However, these movements are not without criticism; for example, a common argument made against neurodiversity activists is that the majority of them are high-functioning, have Asperger syndrome, or are self-diagnosed, and do not represent the views of low-functioning autistic people. The concept of neurodiversity is contentious within various autism advocacy and research groups and has led to infighting.
Caregivers
Families who care for an autistic child face added stress from a number of different causes. Parents may struggle to understand the diagnosis and to find appropriate care options. They often take a negative view of the diagnosis and may struggle emotionally. More than half of parents over the age of 50 are still living with their child, as about 85% of autistic people have difficulty living independently. Some studies also find decreased earnings among parents who care for autistic children.
Broader autism phenotype
The broader autism phenotype (BAP) describes individuals who may not have ASD but do have autistic traits, such as avoiding eye contact and stimming.
See also
Autism spectrum disorders in the media
List of films about autism
Social narrative
References
Sources
"Neurodevelopmental Disorders". Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR). Washington, DC: American Psychiatric Association. 18 March 2022. ISBN 9780890425770. LCCN 2021051782.
"6A02 Autism spectrum disorder". International Classification of Diseases 11th Revision (ICD-11). World Health Organisation. February 2022 [adopted in 2019]. 6A02. Retrieved 14 May 2022.
Further reading
Gabovitch, Elaine; Dutra, Courtney; Lauer, Emily (2016). The Healthy People 2020 Roadmap for Massachusetts Children & Youth with ASD/DD: Understanding Needs and Measuring Outcomes (Report). Worcester: UMASS Chan Medical School. Retrieved 30 June 2022.
Matson JL, Dempsey T (2008). "Stereotypy in Adults with Autism Spectrum Disorders: Relationship and Diagnostic Fidelity". Journal of Developmental and Physical Disabilities. 20 (2): 155–165. doi:10.1007/s10882-007-9086-0. S2CID 143874013.
Johnny L Matson; Michael L Matson; Tessa T Rivet (September 2007). "Social-skills treatments for children with autism spectrum disorders: an overview". Behavior Modification. 31 (5): 682–707. doi:10.1177/0145445507301650. ISSN 0145-4455. PMID 17699124. Wikidata Q28240738.
Johnny L Matson; Mary Shoemaker (14 July 2009). "Intellectual disability and its relationship to autism spectrum disorders". Research in Developmental Disabilities. 30 (6): 1107–1114. doi:10.1016/j.ridd.2009.06.003. ISSN 0891-4222. eISSN 1873-3379. PMID 19604668. Wikidata Q37552242.
Pedersen AL, Pettygrove S, Lu Z, Andrews J, Meaney FJ, Kurzius-Spencer M, et al. (August 2017). "DSM Criteria that Best Differentiate Intellectual Disability from Autism Spectrum Disorder" </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Babesiosis and can you explain it in depth? [/INST] Babesiosis or piroplasmosis is a malaria-like parasitic disease caused by infection with a eukaryotic parasite in the order Piroplasmida, typically a Babesia or Theileria, in the phylum Apicomplexa. Human babesiosis transmission via tick bite is most common in the Northeastern and Midwestern United States and parts of Europe, and sporadic throughout the rest of the world. It occurs in warm weather. People can become infected with Babesia parasites through the bite of an infected tick, by receiving a blood transfusion from an infected donor of blood products, or by congenital transmission (from an infected mother to her baby).
Ticks transmit the human strain of babesiosis, so it often presents with other tick-borne illnesses such as Lyme disease. After trypanosomes, Babesia is thought to be the second-most common blood parasite of mammals. They can have major adverse effects on the health of domestic animals in areas without severe winters. In cattle the disease is known as Texas cattle fever or redwater.
Signs and symptoms
Half of all children and a quarter of previously healthy adults with Babesia infection are asymptomatic. When people do develop symptoms, the most common are fever and hemolytic anemia, symptoms that are similar to those of malaria. People with symptoms usually become ill 1 to 4 weeks after the bite, or 1 to 9 weeks after transfusion of contaminated blood products. A person infected with babesiosis gradually develops malaise and fatigue, followed by a fever. Hemolytic anemia, in which red blood cells are destroyed and removed from the blood, also develops. Chills, sweats, and thrombocytopenia are also common symptoms. Symptoms may last from several days to several months.
In more severe cases, symptoms similar to malaria occur, with fevers up to 40.5 °C (105 °F), shaking chills, and severe anemia (hemolytic anemia). Organ failure may follow, including adult respiratory distress syndrome. Sepsis in people who have had a splenectomy can occur rapidly, consistent with overwhelming post-splenectomy infection. Severe cases are also more likely to occur in the very young, very old, and persons with immunodeficiency, such as HIV/AIDS patients.A reported increase in human babesiosis diagnoses in the 2000s is thought to be caused by more widespread testing and higher numbers of people with immunodeficiencies coming in contact with ticks, the disease vector. Little is known about the occurrence of Babesia species in malaria-endemic areas, where Babesia can easily be misdiagnosed as Plasmodium. Human patients with repeat babesiosis infection may exhibit premunity.
Cause
Babesia species are in the phylum Apicomplexa, which also includes the protozoan parasites that cause malaria, toxoplasmosis, and cryptosporidiosis. Four clades of Babesia species infect humans. The main species in each clade are:
B. microti (<3 µm)
B. duncani
B. divergens (cattle parasite seen mostly in Europe) and B. venatorum (roe deer parasite, formerly called EU1), most closely related to the large Babesia clade
Large Babesia (>3 µm) mostly infects ungulates, but also includes K01 strain (an isolated case observed in South Korea, see isolated cases)
Pathophysiology
Babesia parasites reproduce in red blood cells, where they can be seen as cross-shaped inclusions (four merozoites asexually budding but attached together, forming a structure resembling a "Maltese cross") and cause hemolytic anemia, quite similar to malaria.
Unlike the Plasmodium parasites that cause malaria, Babesia species lack an exoerythrocytic phase, so the liver is usually not affected.
In nonhuman animals, Babesia canis rossi, Babesia bigemina, and Babesia bovis cause particularly severe forms of the disease, including a severe hemolytic anemia, with a positive erythrocyte-in-saline-agglutination test indicating an immune-mediated component to the hemolysis. Common sequelae include hemoglobinuria ("redwater"), disseminated intravascular coagulation, and "cerebral babesiosis" caused by sludging of erythrocytes in cerebral capillaries.
In bovine species, the organism causes hemolytic anemia, so an infected animal initially shows pale mucous membranes. As the level of bilirubin (a byproduct of red blood cell lysis) continues to increase, the visible mucous membranes become yellow in color (icterus) due to the failure of the liver to metabolize the excess bilirubin. Hemoglobinuria is seen due to excretion of red-blood-cell lysis byproducts via the kidneys. Fever of 40.5 °C (105 °F) develops due to the release of inflammatory byproducts.
Diagnosis
Only specialized laboratories can adequately diagnose Babesia infection in humans, so Babesia infections are considered highly under-reported. The disease develops in patients who live in or travel to an endemic area or who receive a contaminated blood transfusion within the preceding 9 weeks, so this aspect of the medical history is vital. Babesiosis may be suspected when a person with such an exposure history develops persistent fevers and hemolytic anemia. The definitive diagnostic test is the identification of parasites on a Giemsa-stained thin-film blood smear. So-called "Maltese cross formations" on the blood film are diagnostic (pathognomonic) of babesiosis, since they are not seen in malaria, the primary differential diagnosis. Careful examination of multiple smears may be necessary, since Babesia may infect less than 1% of circulating red blood cells and thus be easily overlooked.
Serologic testing for antibodies against Babesia (both IgG and IgM) can detect low-level infection in cases with a high clinical suspicion but negative blood-film examinations. Serology is also useful for differentiating babesiosis from malaria in cases where people are at risk for both infections. Since detectable antibody responses require about a week after infection to develop, serologic testing may be falsely negative early in the disease course.
A polymerase chain reaction (PCR) test has been developed for the detection of Babesia from peripheral blood. PCR may be at least as sensitive and specific as blood-film examination in diagnosing babesiosis, though it is also significantly more expensive. Most often, PCR testing is used in conjunction with blood-film examination and possibly serologic testing. Other laboratory findings include decreased numbers of red blood cells and platelets on complete blood count.
In animals, babesiosis is suspected by observation of clinical signs (hemoglobinuria and anemia) in animals in endemic areas. Diagnosis is confirmed by observation of merozoites on a thin-film blood smear examined at maximum magnification under oil using Romanowsky stains (methylene blue and eosin). This is a routine part of the veterinary examination of dogs and ruminants in regions where babesiosis is endemic. Babesia canis and B. bigemina are "large Babesia species" that form paired merozoites in the erythrocytes, commonly described as resembling "two pears hanging together" rather than the "Maltese cross" of the "small Babesia species". Their merozoites are around twice the size of small ones.
Cerebral babesiosis is suspected in vivo when neurological signs (often severe) are seen in cattle that are positive for B. bovis on blood smear, but this has yet to be proven scientifically. Pronounced red discoloration of the grey matter post mortem further strengthens suspicion of cerebral babesiosis. Diagnosis is confirmed post mortem by observation of Babesia-infected erythrocytes sludged in the cerebral cortical capillaries in a brain smear.
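The advice above to examine multiple smears follows from simple probability: at low parasitemia, the chance that any single cell examined shows a parasite is small. The sketch below assumes infected cells are encountered independently at random, which is a simplification, and the parasitemia and cell-count values are hypothetical.

# Probability of seeing at least one infected red blood cell when
# examining n cells at parasitemia p, assuming independent sampling:
# P(detect) = 1 - (1 - p)^n.
def detection_probability(parasitemia, cells_examined):
    return 1 - (1 - parasitemia) ** cells_examined

# Hypothetical low-level infection: 0.1% of red cells infected.
p = 0.001
for n in (500, 2_000, 10_000):
    print(f"{n:>6} cells examined -> P(detect) = {detection_probability(p, n):.1%}")

# The probability rises from roughly 39% at 500 cells to near 100% at
# 10,000 cells, which is why careful examination of multiple smears is
# recommended when parasitemia is low.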
Treatment
Treatment of asymptomatic carriers should be considered if parasites are still detected after 3 months. In mild-to-moderate babesiosis, the treatment of choice is a combination of atovaquone and azithromycin. This regimen is preferred to clindamycin and quinine because it has fewer side effects. The standard course is 7 to 10 days, but this is extended to at least 6 weeks in people with relapsing disease. Treatment is recommended even in mild cases, to decrease the chance of inadvertently transmitting the infection by donating blood. In severe babesiosis, the combination of clindamycin and quinine is preferred. In life-threatening cases, exchange transfusion is performed; in this procedure, the infected red blood cells are removed and replaced with uninfected ones.
Imizol is a drug used for treatment of babesiosis in dogs.
Extracts of the poisonous, bulbous plant Boophone disticha are used in the folk medicine of South Africa to treat equine babesiosis. B. disticha is a member of the daffodil family Amaryllidaceae and has also been used in preparations employed as arrow poisons, hallucinogens, and in embalming. The plant is rich in alkaloids, some of which display an action similar to that of scopolamine.
Epidemiology
Babesiosis is a vector-borne illness usually transmitted by Ixodes scapularis ticks. B. microti uses the same tick vector as Lyme disease and may occur in conjunction with Lyme. The organism can also be transmitted by blood transfusion. Ticks of domestic animals, especially Rhipicephalus (Boophilus) microplus and R. (B.) decoloratus, transmit several species of Babesia to livestock, causing considerable economic losses to farmers in tropical and subtropical regions.
In the United States, the majority of babesiosis cases are caused by B. microti and occur in the Northeast and northern Midwest from May through October. Areas with especially high rates include eastern Long Island, Fire Island, Nantucket Island, and Martha's Vineyard.
The Centers for Disease Control and Prevention now requires state health departments to report infections using Form OMB No. 0920-0728. In 2014, Rhode Island had an incidence of 16.3 reported infections per 100,000 people. In Europe, B. divergens is the primary cause of infectious babesiosis and is transmitted by I. ricinus. Babesiosis has emerged in the Lower Hudson Valley, New York, since 2001. In Australia, one locally acquired case of B. microti has been reported, which was fatal. A subsequent investigation found no additional evidence of human babesiosis in over 7,000 patient samples, leading the authors to conclude that babesiosis was rare in Australia. A similar disease in cattle, commonly known as tick fever, is spread by Babesia bovis and B. bigemina in the introduced cattle tick Rhipicephalus microplus. This disease is found in eastern and northern Australia.
Isolated cases
Isolated cases of babesiosis have been reported from many regions (a table of such cases appeared in the original source); their number may be underestimated given how widely distributed the tick vectors are in temperate latitudes.
History
The disease is named for the genus of the causative organism, which was itself named after the Romanian bacteriologist Victor Babeș. In 1888, Babeș identified the microorganisms in red blood cells as the cause of febrile hemoglobinuria in cattle. In 1893, Theobald Smith and Frederick Kilborne discovered that a tick was the vector for transmission in Texas cattle; the agent was B. bigemina. This was the first demonstration that an arthropod could act as a disease vector to transmit an infectious agent to a vertebrate host.
In 1957, the first human case was documented in a splenectomized Croatian herdsman; the agent was B. divergens. In 1969, the first case in an immunocompetent individual was reported, on Nantucket Island; the agent was B. microti, and the vector was the tick I. scapularis. Equine babesiosis (caused by the protozoan Theileria equi) is also known as piroplasmosis (from the Latin piro, meaning pear, and the Greek plasma, a thing formed).
Other animals
Veterinary treatment of babesiosis does not normally use antibiotics. In livestock and other animals, diminazene (Berenil), imidocarb, or trypan blue would be the drugs of choice for treatment of B. canis rossi (dogs in Africa), B. bovis, and B. bigemina (cattle in Southern Africa). In acute cases in cattle, blood transfusion may be carried out.
A vaccine is effective against B. canis canis (dogs in the Mediterranean region), but is ineffective against B. c. rossi. B. imitans causes a mild form of the disease that frequently resolves without treatment (dogs in Southeast Asia).
References
External links
Center for Global Health (2019-06-25). "Babesiosis". Parasites and Health, DPDx—Laboratory Identification of Parasites of Public Health Concern. Centers for Disease Control & Prevention. Archived from the original on 2013-03-07. Retrieved 2003-10-07. Public domain source from which the first version of this article was derived.
Krause, Peter J; et al. (27 January 2021). "Clinical Practice Guidelines by the Infectious Diseases Society of America (IDSA): 2020 Guideline on Diagnosis and Management of Babesiosis". Clinical Infectious Diseases. 72 (2): e49–e64 </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Induced coma and can you explain it in depth? [/INST] An induced coma – also known as a medically induced coma (MIC), barbiturate-induced coma, or drug-induced coma – is a temporary coma (a deep state of unconsciousness) brought on by a controlled dose of an anesthetic drug, often a barbiturate such as pentobarbital or thiopental. Barbiturate comas are used to protect the brain during major neurosurgery, as a last line of treatment in certain cases of status epilepticus that have not responded to other treatments, and in refractory intracranial hypertension following traumatic brain injury.
Induced coma usually results in significant systemic adverse effects. The patient is likely to completely lose respiratory drive and require mechanical ventilation; gut motility is reduced; hypotension can complicate efforts to maintain cerebral perfusion pressure and often requires the use of vasopressor drugs. Hypokalemia often results. The completely immobile patient is at increased risk of bed sores as well as infection from catheters.
Induced coma is a feature of the Milwaukee protocol, a controversial method that is promoted as a means of treating rabies infection in people.
Theory
Barbiturates reduce the metabolic rate of brain tissue, as well as the cerebral blood flow </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Barrett's esophagus and can you explain it in depth? [/INST] Barrett's esophagus is a condition in which there is an abnormal (metaplastic) change in the mucosal cells lining the lower portion of the esophagus, from stratified squamous epithelium to simple columnar epithelium with interspersed goblet cells that are normally present only in the small intestine and large intestine. This change is considered to be a premalignant condition because it is associated with a high incidence of further transition to esophageal adenocarcinoma, an often-deadly cancer.
The main cause of Barrett's esophagus is thought to be an adaptation to chronic acid exposure from reflux esophagitis. Barrett's esophagus is diagnosed by endoscopy: observing the characteristic appearance of this condition by direct inspection of the lower esophagus, followed by microscopic examination of tissue from the affected area obtained by biopsy. The cells of Barrett's esophagus are classified into four categories: nondysplastic, low-grade dysplasia, high-grade dysplasia, and frank carcinoma. High-grade dysplasia and early stages of adenocarcinoma may be treated by endoscopic resection or radiofrequency ablation. Later stages of adenocarcinoma may be treated with surgical resection or palliation. Those with nondysplastic or low-grade dysplasia are managed by annual observation with endoscopy, or by treatment with radiofrequency ablation. In high-grade dysplasia, the risk of developing cancer may be 10% per patient-year or greater.
The incidence of esophageal adenocarcinoma has increased substantially in the Western world in recent years. The condition is found in 5–15% of patients who seek medical care for heartburn (gastroesophageal reflux disease, or GERD), although a large subgroup of patients with Barrett's esophagus are asymptomatic. The condition is named after surgeon Norman Barrett (1903–1979), even though the condition was originally described by Philip Rowland Allison in 1946.
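The "10% per patient-year" figure for high-grade dysplasia compounds over time. As a rough sketch, assuming a constant annual progression risk (a simplification; real-world risk varies between patients and over time):

# Cumulative probability of progression to cancer over t years,
# assuming a constant annual risk r: P(t) = 1 - (1 - r)^t.
def cumulative_risk(annual_risk, years):
    return 1 - (1 - annual_risk) ** years

r = 0.10  # 10% per patient-year, the high-grade dysplasia figure above
for years in (1, 3, 5):
    print(f"{years} year(s): {cumulative_risk(r, years):.0%}")

# Under this assumption: 10% at 1 year, ~27% at 3 years, ~41% at 5 years,
# which is why high-grade dysplasia prompts active treatment rather than
# observation alone.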
Signs and symptoms
The change from normal to premalignant cells that indicates Barrett's esophagus does not cause any particular symptoms. Barrett's esophagus is, however, associated with the following symptoms:
frequent and longstanding heartburn
trouble swallowing (dysphagia)
vomiting blood (hematemesis)
pain under the sternum where the esophagus meets the stomach
pain when swallowing (odynophagia), which can lead to unintentional weight loss
The risk of developing Barrett's esophagus is increased by central obesity (vs. peripheral obesity). The exact mechanism is unclear. The difference in the distribution of fat among men (more central) and women (more peripheral) may explain the increased risk in males.
Pathophysiology
Barrett's esophagus occurs due to chronic inflammation. The principal cause of chronic inflammation is gastroesophageal reflux disease, GERD (UK: GORD). In this disease, acidic stomach contents, bile, and small-intestinal and pancreatic secretions damage the cells of the lower esophagus. In turn, this confers an advantage on cells more resistant to these noxious stimuli, in particular HOXA13-expressing stem cells, which are characterized by distal (intestinal) features and which outcompete the normal squamous cells. This mechanism also explains the selection of HER2/neu (also called ERBB2)-overexpressing (lineage-addicted) cancer cells during the process of carcinogenesis, and the efficacy of targeted therapy against the Her-2 receptor with trastuzumab (Herceptin) in the treatment of adenocarcinomas at the gastroesophageal junction.
Researchers are unable to predict who with heartburn will develop Barretts esophagus. While no relationship exists between the severity of heartburn and the development of Barretts esophagus, a relationship does exist between chronic heartburn and the development of Barretts esophagus. Sometimes, people with Barretts esophagus have no heartburn symptoms at all.
Some anecdotal evidence indicates that those with the eating disorder bulimia are more likely to develop Barrett's esophagus, because bulimia can cause severe acid reflux and because purging also floods the esophagus with acid. However, a link between bulimia and Barrett's esophagus remains unproven. During episodes of reflux, bile acids enter the esophagus, and this may be an important factor in carcinogenesis. Individuals with GERD and BE are exposed to high concentrations of deoxycholic acid, which has cytotoxic effects and can cause DNA damage.
Diagnosis
Both macroscopic (from endoscopy) and microscopic positive findings are required to make a diagnosis. Barretts esophagus is marked by the presence of columnar epithelia in the lower esophagus, replacing the normal squamous cell epithelium—an example of metaplasia. The secretory columnar epithelium may be more able to withstand the erosive action of the gastric secretions; however, this metaplasia confers an increased risk of adenocarcinoma.
Screening
Screening endoscopy is recommended among males over the age of 60 who have reflux symptoms that are of long duration and not controllable with treatment. Among those not expected to live more than 5 years, screening is not recommended. The Seattle protocol is commonly used in endoscopy to obtain endoscopic biopsies for screening, taken every 1 to 2 cm from the gastroesophageal junction.
Since the COVID-19 pandemic, the NHS in Scotland has been using a swallowable sponge (Cytosponge) in hospitals to collect cell samples for diagnosis. Preliminary studies have shown this diagnostic test to be a useful screening tool for people with heartburn symptoms, and to improve diagnosis.
Intestinal metaplasia
The presence of goblet cells, called intestinal metaplasia, is necessary to make a diagnosis of Barrett's esophagus. This frequently occurs in the presence of other metaplastic columnar cells, but only the presence of goblet cells is diagnostic. The metaplasia is grossly visible through a gastroscope, but biopsy specimens must be examined under a microscope to determine whether cells are gastric or colonic in nature. Colonic metaplasia is usually identified by finding goblet cells in the epithelium and is necessary for the true diagnosis.
Many histologic mimics of Barrett's esophagus are known (i.e., goblet cells occurring in the transitional epithelium of normal esophageal submucosal gland ducts, and "pseudogoblet cells" in which abundant foveolar [gastric]-type mucin simulates the acid mucin of true goblet cells). Assessment of the relationship to submucosal glands and transitional-type epithelium, with examination of multiple levels through the tissue, may allow the pathologist to reliably distinguish between goblet cells of submucosal gland ducts and true Barrett's esophagus (specialized columnar metaplasia). The histochemical stain Alcian blue pH 2.5 is also frequently used to distinguish true intestinal-type mucins from their histologic mimics. Recently, immunohistochemical analysis with antibodies to CDX-2 (specific for mid- and hindgut intestinal derivation) has also been used to identify true intestinal-type metaplastic cells. The protein AGR2 is elevated in Barrett's esophagus and can be used as a biomarker for distinguishing Barrett epithelium from normal esophageal epithelium.
The presence of intestinal metaplasia in Barrett's esophagus represents a marker for the progression of metaplasia towards dysplasia and eventually adenocarcinoma. This factor, combined with two distinct patterns of immunohistochemical expression of p53, Her2, and p16, points to two different genetic pathways that likely progress to dysplasia in Barrett's esophagus. Intestinal metaplastic cells can also be positive for CK7+/CK20−.
Epithelial dysplasia
After the initial diagnosis of Barrett's esophagus is rendered, affected persons undergo annual surveillance to detect changes that indicate higher risk of progression to cancer: development of epithelial dysplasia (or "intraepithelial neoplasia").
Among all metaplastic lesions, around 8% were associated with dysplasia. In particular, a recent study demonstrated that dysplastic lesions were located mainly in the posterior wall of the esophagus. Considerable variability is seen in assessment for dysplasia among pathologists. Recently, gastroenterology and GI pathology societies have recommended that any diagnosis of high-grade dysplasia in Barrett's be confirmed by at least two fellowship-trained GI pathologists prior to definitive treatment for patients. For greater accuracy and reproducibility, it is also recommended to follow international classification systems, such as the "Vienna classification" of gastrointestinal epithelial neoplasia (2000).
Management
Many people with Barrett's esophagus do not have dysplasia. Medical societies recommend that if a patient has Barrett's esophagus, and if the past two endoscopy and biopsy examinations have confirmed the absence of dysplasia, then the patient should not have another endoscopy within three years. Endoscopic surveillance of people with Barrett's esophagus is often recommended, although little direct evidence supports this practice. Treatment options for high-grade dysplasia include surgical removal of the esophagus (esophagectomy) or endoscopic treatments such as endoscopic mucosal resection or ablation (destruction). The risk of malignancy is highest in the United States in Caucasian men over fifty years of age with more than five years of symptoms. Current recommendations include routine endoscopy and biopsy (looking for dysplastic changes). Although in the past physicians have taken a watchful-waiting approach, newly published research supports consideration of intervention for Barrett's esophagus. Balloon-based radiofrequency ablation, invented by Ganz, Stern, and Zelickson in 1999, is a newer treatment modality for Barrett's esophagus and dysplasia and has been the subject of numerous published clinical trials. The findings demonstrate radiofrequency ablation is at least 90% effective at completely clearing Barrett's esophagus and dysplasia, with durability of up to five years and a favorable safety profile. Anti-reflux surgery has not been proven to prevent esophageal cancer. However, the indication is that proton pump inhibitors are effective in limiting the progression of esophageal cancer. Laser treatment is used in severe dysplasia, while overt malignancy may require surgery, radiation therapy, or systemic chemotherapy. A recent five-year randomized controlled trial has shown that photodynamic therapy using photofrin is statistically more effective in eliminating dysplastic growth areas than sole use of a proton pump inhibitor. There is presently no reliable way to determine which patients with Barrett's esophagus will go on to develop esophageal cancer, although a recent study found the detection of three different genetic abnormalities was associated with as much as a 79% chance of developing cancer in six years. Endoscopic mucosal resection has also been evaluated as a management technique </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Emotional and behavioral disorders and can you explain it in depth? [/INST] Emotional and behavioral disorders (EBD; also known as behavioral and emotional disorders) refer to a disability classification used in educational settings that allows educational institutions to provide special education and related services to students who have displayed poor social and/or academic progress. The classification is often given to students after conducting a Functional Behavior Analysis. These students need individualized behavior supports, such as a Behavior Intervention Plan, to receive a free and appropriate public education. Students with EBD may be eligible for an Individualized Education Plan (IEP) and/or accommodations in the classroom through a 504 Plan.
History
Early history
Before any studies were done on the subject, mental illnesses were often thought to be a form of demonic possession or witchcraft. Since much was unknown, there was little to no distinction between the different types of mental illness and developmental disorders that we refer to today. Most often, they were dealt with by performing an exorcism on the person exhibiting signs of any mental illness. In the early to mid-1800s, asylums were introduced to America and Europe. There, patients were treated cruelly and often referred to as lunatics by the doctors in the professional fields. The main focus of asylums was to shun people with mental illnesses from the public. In 1963, the Community Mental Health Centers Construction Act (Public Law 88–164) was passed by Congress and signed by John F. Kennedy, which provided federal funding to community mental health centers. This legislation changed the way that mental health services were handled and also led to the closure of many large asylums. Many laws soon followed, assisting more and more people with EBDs. In 1975, Public Law 94-142 was passed, which required free public education for all disabled children, including those with EBDs. An extension of PL 94-142, PL 99-457, was enacted to provide services to all disabled children from the ages of 3–5 by the 1990–91 school year. PL 94-142 has since been renamed the Individuals with Disabilities Education Act (IDEA).
Use and development of the term
Various terms have been used to describe irregular emotional and behavioral disorders. Many of the terms, such as mental illness and psychopathology, were used to describe adults with such conditions. Mental illness was a label for most people with any type of disorder, and it was common for people with emotional and behavioral disorders to be labeled with a mental illness. However, those terms were avoided when describing children, as they seemed too stigmatizing. In the late 1900s, the term "behaviorally disordered" appeared. Some professionals in the field of special education accepted the term while others felt it ignored emotional issues. In order to make the terminology more uniform, the National Mental Health and Special Education Coalition, which consists of over thirty professional and advocacy groups, coined the term "emotional and behavioral disorders" in 1988.
Criteria
According to the Individuals with Disabilities Education Act an EBD classification is required if one or more of the following characteristics is excessively observed in a student over a significant amount of time:
Learning challenges that cannot be explained by intellectual, sensory, or health factors.
Trouble keeping up or building satisfactory relationships with peers and teachers.
Inappropriate behavior (against self or others) or emotions (shares the need to harm others or self, low self-worth) in normal conditions.
An overall attitude of unhappiness or depression.
A tendency to develop physical symptoms or fears related to individual or school issues. The term "EBD" includes students diagnosed with schizophrenia. However, it does not have any significant bearing on students who are socially maladjusted unless they also meet the above criteria.
Criticisms
Providing or failing to provide an EBD classification to a student may be controversial, as the IDEA does not clarify which children would be considered "socially maladjusted". Students with a psychiatric diagnosis of conduct disorder are not guaranteed to receive additional educational services under an EBD classification. Students with an EBD classification who meet the diagnostic criteria for various disruptive behavior disorders, including attention-deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), or conduct disorder (CD) do not have an automatic eligibility to receive an IEP or 504 Plan. Students considered "socially maladjusted", but ineligible for an EBD classification (i.e., students diagnosed with conduct disorder), often receive better educational services in special education classrooms or alternative schools with high structure, clear rules, and consistent consequences.
Student characteristics
Students with EBD are a diverse population with a wide range of intellectual and academic abilities. Males, African-Americans, and economically disadvantaged students are over-represented in the EBD population, and students with EBD are more likely to live in single-parent homes, foster homes, or other non-traditional living situations. These students also tend to have low rates of positive social interactions with peers in educational contexts. Students with EBD are often categorized as "internalizers" (e.g., have poor self-esteem, or are diagnosed with an anxiety disorder or mood disorder) or "externalizers" (e.g., disrupt classroom instruction, or are diagnosed with disruptive behavior disorders such as oppositional defiant disorder and conduct disorder). Male students may be over-represented in the EBD population because they appear to be more likely to exhibit disruptive externalizing behavior that interferes with classroom instruction. Females may be more likely to exhibit internalizing behavior that does not interfere with classroom instruction, though to what extent this perception is due to social expectations of differences in male and female behavior is unclear. In any case, both internalizing and externalizing behavior can and do occur in either sex. Students with EBD are also at an increased risk for learning disabilities, school dropout, substance abuse, and juvenile delinquency.
Internalizing and externalizing behavior
A person with EBD with "internalizing" behavior may have poor self-esteem, have depression, experience loss of interest in social, academic, and other life activities, and may exhibit non-suicidal self-injury or substance abuse. Students with internalizing behavior may also have a diagnosis of separation anxiety or another anxiety disorder, post-traumatic stress disorder (PTSD), specific or social phobia, obsessive–compulsive disorder (OCD), panic disorder, and/or an eating disorder. Teachers are more likely to write referrals for students that are overly disruptive. Screening tools used to detect students with high levels of "internalizing" behavior are not sensitive and are rarely used in practice. Students with EBD with "externalizing" behavior may be aggressive, non-compliant, extroverted, or disruptive.
Students with EBD that show externalizing behavior are often diagnosed with attention deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), conduct disorder, and/or bipolar disorder; however, this population can also include typically developing children that have learned to exhibit externalizing behavior for various reasons (e.g., escape from academic demands or access to attention). These students often have difficulty inhibiting emotional responses resulting from anger, frustration, and disappointment. Students who "externalize" exhibit behaviors such as insulting, provoking, threatening, bullying, cursing, and fighting, along with other forms of aggression. Male students with EBD exhibit externalizing behavior more often than their female counterparts. Children and adolescents with ADD or ADHD may display different types of externalizing behavior and should receive either medication or behavioral treatment for their diagnosis. Adolescents with severe ADHD would likely benefit most from both medication and behavioral treatment. Younger children should go through behavioral treatment before being treated with medication. Another recommended form of treatment for children and adolescents diagnosed with ADHD is counseling from a mental health professional. Treatment can improve the performance of children and adolescents on emotion-recognition tasks, specifically response time, as there is typically no difficulty recognizing human emotions themselves. The degree of required treatment varies depending on the degree of ADD or ADHD the individual has.
Treatment for these types of behaviors should include the parents, as it is evident that their parenting skills impact how their child deals with their symptoms, especially at a younger age. Parents who went through a parenting-skills training program reported a decrease in internalizing and externalizing behavior in their children after the program. The program included learning how to give positive attention, increasing good behavior with small frequent rewards and specific praise, as well as learning how to decrease attention when the child behaved poorly.
Effect on cognition
In recent years, many researchers have been interested in exploring the relationship between emotional disorders and cognition. Evidence has revealed that there is a relationship between the two. Strauman (1989) investigated how emotional disorders shape a person's cognitive structure, that is, the mental processes people utilize to make sense of the world around them. He recruited three groups of individuals: those with social phobias, those with depression, and controls with no emotional disorder diagnosis. He wanted to determine whether these groups had a cognitive structure showing an actual/ideal (AI) discrepancy (referring to an individual not believing that they have achieved their personal desires) or actual/own/other (AOO) discrepancy (referring to an individual's actions not living up to what their significant other believes they need to be). He found that depressed individuals had the highest AI discrepancy and social phobics had the greatest AOO discrepancy, while the controls were lower than, or in between, the two groups for both discrepancies. Specific cognitive processes (e.g., attention) may also differ in those with emotional disorders. MacLeod, Mathews, and Tata (1986) tested the reaction times of 32 participants, some of whom were diagnosed with generalized anxiety disorder, when presented with threatening words. They found that when threatening words were presented, people with greater anxiety tended to have increased selective attention, meaning that they reacted more quickly to a stimulus in an area where a threatening word had just been presented (32–59 ms faster). In the control group, subjects reacted more slowly when a threatening word preceded the stimulus (16–32 ms slower). Emotional disorders can also alter the way people regulate their emotions. Joormann and Gotlib (2010) conducted a study with depressed, or previously depressed, individuals to test this. They found that, when compared to individuals who had never had a depressive episode, previously and currently depressed individuals tended to use maladaptive emotion-regulation strategies (such as rumination or brooding) more. They also found that when depressed individuals displayed cognitive inhibition (slowing of response to a variable that had previously been ignored) when asked to describe a negative word (the ignored variable being a positive word), they were less likely to ruminate or brood. When they displayed cognitive inhibition when asked to describe a positive word (the ignored variable being a negative word), they were more likely to reflect.
Services in the United States
There are many types of services available to EBD students, referenced below. One service is one-on-one support (an aide) who assists in everyday activities and academics. Another is offered by foundations that provide behavior services as well as counseling support. Some services include classrooms dedicated to educational foundations that work on building the student up positively. States also offer dedicated schools with multiple resources that help students with EBD excel and transition (back) into local schools.
Texas
The state of Texas has the Texas Behavior Support Initiative (TBSI), authorized by Senate Bill 1196 and Texas Administrative Code §89.1053. It is designed to provide knowledge for the use of constructive behavior interventions and to aid students, including students with disabilities. TBSI meets the legislative requirements for the use of restraint and time-out, along with providing the baseline work for behavior strategies and prevention throughout each environment.
New York
The state of New York has Foundations Behavioral Health, an approved out-of-state education and residential provider with the New York State Education Department. Foundations offers academic and behavioral health services to students between the ages of 14–21. This program incorporates strategic interventions into students' educational experience to aid their social and behavioral functioning. Some of the program's highlights include Functional Behavioral Assessment (FBA), Behavioral Intervention Plan (BIP), and Community Based Instruction (CBI).
California
The state of California has Spectrum Center classrooms in Los Angeles and the San Francisco area, which provide emotional disability and behavioral services. They provide academic classrooms for students who are actively working to meet grade-level standards and toward earning their high school diploma. The main practice is the use of Positive Behavior Interventions and Supports (PBIS). PBIS instructional practices help students determine their skill level and progress, restore their skills through direct instruction, learn the standards at their grade level, and receive small-group counseling.
Michigan
The state of Michigan has a Behavioral Education Center (BEC) in Bangor. Its purpose is to aid local school districts with students between the ages of 5 and 26 with EBDs, helping students use appropriate behaviors and skills to successfully return to their local school setting </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Benign prostatic hyperplasia and can you explain it in depth? [/INST] Benign prostatic hyperplasia (BPH), also called prostate enlargement, is a noncancerous increase in size of the prostate gland. Symptoms may include frequent urination, trouble starting to urinate, weak stream, inability to urinate, or loss of bladder control. Complications can include urinary tract infections, bladder stones, and chronic kidney problems. The cause is unclear. Risk factors include a family history, obesity, type 2 diabetes, not enough exercise, and erectile dysfunction. Medications like pseudoephedrine, anticholinergics, and calcium channel blockers may worsen symptoms. The underlying mechanism involves the prostate pressing on the urethra and thereby making it difficult to pass urine out of the bladder. Diagnosis is typically based on symptoms and examination after ruling out other possible causes. Treatment options include lifestyle changes, medications, a number of procedures, and surgery. In those with mild symptoms, weight loss, exercise, and decreasing caffeine intake are recommended, although the quality of the evidence for exercise is low. In those with more significant symptoms, medications may include alpha blockers such as terazosin or 5α-reductase inhibitors such as finasteride. Surgical removal of part of the prostate may be carried out in those who do not improve with other measures. Some herbal medicines that have been studied, such as saw palmetto, have not been shown to help. Other herbal medicines somewhat effective at improving urine flow include beta-sitosterol from Hypoxis rooperi (African star grass), pygeum (extracted from the bark of Prunus africana), pumpkin seeds (Cucurbita pepo), and stinging nettle (Urtica dioica) root. About 105 million men are affected globally. BPH typically begins after the age of 40. Half of males age 50 and over are affected. After the age of 80, that figure climbs to as high as about 90% of males affected. Although prostate specific antigen levels may be elevated in males with BPH, the condition does not increase the risk of prostate cancer.
Signs and symptoms
BPH is the most common cause of lower urinary tract symptoms (LUTS), which are divided into storage, voiding, and symptoms which occur after urination. Storage symptoms include the need to urinate frequently, waking at night to urinate, urgency (compelling need to void that cannot be deferred), involuntary urination, including involuntary urination at night, or urge incontinence (urine leak following a strong sudden need to urinate). Voiding symptoms include urinary hesitancy (a delay between trying to urinate and the flow actually beginning), intermittency (not continuous), involuntary interruption of voiding, weak urinary stream, straining to void, a sensation of incomplete emptying, and uncontrollable leaking after the end of urination. These symptoms may be accompanied by bladder pain or pain while urinating, called dysuria. Bladder outlet obstruction (BOO) can be caused by BPH. Symptoms are abdominal pain, a continuous feeling of a full bladder, frequent urination, acute urinary retention (inability to urinate), pain during urination (dysuria), problems starting urination (urinary hesitancy), slow urine flow, starting and stopping (urinary intermittency), and nocturia. BPH can be a progressive disease, especially if left untreated. Incomplete voiding results in residual urine or urinary stasis, which can lead to an increased risk of urinary tract infection.
Causes
Hormones
Most experts consider androgens (testosterone and related hormones) to play a permissive role in the development of BPH. This means that androgens must be present for BPH to occur, but do not necessarily directly cause the condition. This is supported by evidence suggesting that castrated boys do not develop BPH when they age. In an unusual study of 26 eunuchs from the palace of the Qing dynasty still living in Beijing in 1960, the prostate could not be felt in 81% of the studied eunuchs. The average time since castration was 54 years (range, 41–65 years). On the other hand, some studies suggest that administering exogenous testosterone is not associated with a significant increase in the risk of BPH symptoms, so the role of testosterone in prostate cancer and BPH is still unclear. Further randomized controlled trials with more participants are needed to quantify any risk of giving exogenous testosterone. Dihydrotestosterone (DHT), a metabolite of testosterone, is a critical mediator of prostatic growth. DHT is synthesized in the prostate from circulating testosterone by the action of the enzyme 5α-reductase, type 2. DHT can act in an autocrine fashion on the stromal cells or in paracrine fashion by diffusing into nearby epithelial cells. In both of these cell types, DHT binds to nuclear androgen receptors and signals the transcription of growth factors that are mitogenic to the epithelial and stromal cells. DHT is ten times more potent than testosterone because it dissociates from the androgen receptor more slowly. The importance of DHT in causing nodular hyperplasia is supported by clinical observations in which an inhibitor of 5α-reductase such as finasteride is given to men with this condition. Therapy with a 5α-reductase inhibitor markedly reduces the DHT content of the prostate and, in turn, reduces prostate volume and BPH symptoms. Testosterone promotes prostate cell proliferation, but relatively low levels of serum testosterone are found in patients with BPH. One small study has shown that medical castration lowers the serum and prostate hormone levels unevenly, having less effect on testosterone and dihydrotestosterone levels in the prostate. While there is some evidence that estrogen may play a role in the cause of BPH, this effect appears to be mediated mainly through local conversion of androgens to estrogen in the prostate tissue rather than a direct effect of estrogen itself. In canine in vivo studies, castration, which significantly reduced androgen levels but left estrogen levels unchanged, caused significant atrophy of the prostate. Studies looking for a correlation between prostatic hyperplasia and serum estrogen levels in humans have generally shown none. In 2008, Gat et al. published evidence that BPH is caused by failure in the spermatic venous drainage system resulting in increased hydrostatic pressure and local testosterone levels elevated more than 100-fold above serum levels. If confirmed, this mechanism explains why serum androgen levels do not seem to correlate with BPH and why giving exogenous testosterone would not make much difference.
Diet
Studies indicate that dietary patterns may affect development of BPH, but further research is needed to clarify any important relationship. Studies from China suggest that greater protein intake may be a factor in development of BPH. Men older than 60 in rural areas had very low rates of clinical BPH, while men living in cities and consuming more animal protein had a higher incidence. On the other hand, a study in Japanese-American men in Hawaii found a strong negative association with alcohol intake, but a weak positive association with beef intake. In a large prospective cohort study in the US (the Health Professionals Follow-up Study), investigators reported modest associations between BPH (men with strong symptoms of BPH or surgically confirmed BPH) and total energy and protein, but not fat intake. There is also epidemiological evidence linking BPH with metabolic syndrome (concurrent obesity, impaired glucose metabolism and diabetes, high triglyceride levels, high levels of low-density cholesterol, and hypertension).
Degeneration
Benign prostatic hyperplasia is an age-related disease. The misrepair-accumulation aging theory suggests that development of benign prostatic hyperplasia is a consequence of fibrosis and weakening of the muscular tissue in the prostate. The muscular tissue is important in the functionality of the prostate, and provides the force for excreting the fluid produced by prostatic glands. However, repeated contractions and dilations of myofibers will unavoidably cause injuries and broken myofibers. Myofibers have a low potential for regeneration; therefore, collagen fibers need to be used to replace the broken myofibers. Such misrepairs weaken the function of the muscular tissue, and the fluid secreted by glands cannot be excreted completely. The accumulation of fluid in glands then increases the resistance of the muscular tissue during contraction and dilation, so more and more myofibers are broken and replaced by collagen fibers.
Pathophysiology
As men age, the enzymes aromatase and 5-alpha reductase increase in activity. These enzymes are responsible for converting androgen hormones into estrogen and dihydrotestosterone, respectively. This metabolism of androgen hormones leads to a decrease in testosterone but increased levels of DHT and estrogen.
Both the glandular epithelial cells and the stromal cells (including muscular fibers) undergo hyperplasia in BPH. Most sources agree that of the two tissues, stromal hyperplasia predominates, but the exact ratio of the two is unclear. Anatomically, the median and lateral lobes are usually enlarged, due to their highly glandular composition. The anterior lobe has little in the way of glandular tissue and is seldom enlarged. (Carcinoma of the prostate typically occurs in the posterior lobe – hence the ability to discern an irregular outline per rectal examination.) The earliest microscopic signs of BPH usually begin between the ages of 30 and 50 in the PUG, which is posterior to the proximal urethra. In BPH, the majority of growth occurs in the transition zone (TZ) of the prostate. In addition to these two classic areas, the peripheral zone (PZ) is also involved to a lesser extent. Prostatic cancer typically occurs in the PZ. However, BPH nodules, usually from the TZ, are often biopsied anyway to rule out cancer in the TZ. BPH can be a progressive growth that in rare instances leads to exceptional enlargement. In some males, the prostate enlargement exceeds 200 to 500 grams. This condition has been defined as giant prostatic hyperplasia (GPH).
Diagnosis
The clinical diagnosis of BPH is based on a history of LUTS (lower urinary tract symptoms), a digital rectal exam, and exclusion of other causes of similar signs and symptoms. The degree of LUTS does not necessarily correspond to the size of the prostate. An enlarged prostate gland on rectal examination that is symmetric and smooth supports a diagnosis of BPH. However, if the prostate gland feels asymmetrical, firm, or nodular, this raises concern for prostate cancer. Validated questionnaires such as the American Urological Association Symptom Index (AUA-SI), the International Prostate Symptom Score (I-PSS), and more recently the UWIN score (urgency, weak stream, incomplete emptying, and nocturia) are useful aids to making the diagnosis of BPH and quantifying the severity of symptoms.
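These questionnaires reduce to simple arithmetic. Below is a minimal Python sketch that sums the seven I-PSS items and applies the commonly published severity bands (0–7 mild, 8–19 moderate, 20–35 severe); the function itself is illustrative and not part of any source.

def ipss_severity(answers: list[int]) -> tuple[int, str]:
    """Sum the seven I-PSS items (each scored 0-5) and band the total.
    Assumes the widely published cutoffs: 0-7 mild, 8-19 moderate,
    20-35 severe."""
    if len(answers) != 7 or not all(0 <= a <= 5 for a in answers):
        raise ValueError("I-PSS expects seven items scored 0-5")
    total = sum(answers)
    if total <= 7:
        category = "mild"
    elif total <= 19:
        category = "moderate"
    else:
        category = "severe"
    return total, category

print(ipss_severity([2, 3, 1, 2, 3, 2, 2]))  # (15, 'moderate')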
Laboratory investigations
Urinalysis is typically performed when LUTS are present and BPH is suspected to evaluate for signs of a urinary tract infection, glucose in the urine (suggestive of diabetes), or protein in the urine (suggestive of kidney disease). Bloodwork including kidney function tests and prostate specific antigen (PSA) are often ordered to evaluate for kidney damage and prostate cancer, respectively. However, checking blood PSA levels for prostate cancer screening is controversial and not necessarily indicated in every evaluation for BPH. Benign prostatic hyperplasia and prostate cancer are both capable of increasing blood PSA levels and PSA elevation is unable to differentiate these two conditions well. If PSA levels are checked and are high, then further investigation is warranted. Measures including PSA density, free PSA, rectal examination, and transrectal ultrasonography may be helpful in determining whether a PSA increase is due to BPH or prostate cancer.
Imaging and other investigations
Uroflowmetry is done to measure the rate of urine flow and total volume of urine voided while the subject is urinating. Abdominal ultrasound examination of the prostate and kidneys is often performed to rule out hydronephrosis and hydroureter. Incidentally, cysts, tumours, and stones may be found on ultrasound. Post-void residual volume of more than 100 ml may indicate significant obstruction. Prostate size of 30 cc or more indicates enlargement of the prostate. Prostatic calcification can be detected through transrectal ultrasound (TRUS). Calcification is due to solidification of prostatic secretions or calcified corpora amylacea (hyaline masses on the prostate gland). Calcification is also found in a variety of other conditions such as prostatitis, chronic pelvic pain syndrome, and prostate cancer. For those with elevated levels of PSA, TRUS-guided biopsy is performed to take a sample of the prostate for investigation. Although MRI is more accurate than TRUS in determining prostate volume, TRUS is less expensive and almost as accurate as MRI. Therefore, TRUS is still preferred to measure prostate volume.
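Restating the two numeric cutoffs above as a minimal Python sketch (the function and its messages are illustrative only, not a clinical tool):

def flag_ultrasound_findings(post_void_residual_ml: float,
                             prostate_volume_cc: float) -> list[str]:
    """Flag the thresholds mentioned above: post-void residual over
    100 ml may indicate significant obstruction; prostate volume of
    30 cc or more indicates enlargement."""
    flags = []
    if post_void_residual_ml > 100:
        flags.append("post-void residual > 100 ml: possible significant obstruction")
    if prostate_volume_cc >= 30:
        flags.append("prostate volume >= 30 cc: enlarged prostate")
    return flags

print(flag_ultrasound_findings(120, 45))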
Differential diagnosis
Medical conditions
The differential diagnosis for LUTS is broad and includes various medical conditions, neurologic disorders, and other diseases of the bladder, urethra, and prostate such as bladder cancer, urinary tract infection, urethral stricture, urethral calculi (stones), chronic prostatitis, and prostate cancer. Neurogenic bladder can cause urinary retention and cause symptoms similar to those of BPH. This may occur as a result of uncoordinated contraction of the bladder muscle or impairment in the timing of bladder muscle contraction and urethral sphincter relaxation. Notable causes of neurogenic bladder include disorders of the central nervous system such as Parkinson's disease, multiple sclerosis, and spinal cord injuries as well as disorders of the peripheral nervous system such as diabetes mellitus, vitamin B12 deficiency, and alcohol-induced nerve damage. Individuals affected by heart failure often experience nighttime awakenings to urinate due to redistribution of fluid accumulated in swollen legs.
Medications
Certain medications can increase urination difficulties by increasing bladder outlet resistance due to increased smooth muscle tone at the prostate or bladder neck, and so contribute to LUTS. Alpha-adrenergic agonist medications, such as decongestants with pseudoephedrine, can increase bladder outlet resistance. In contrast, calcium channel blockers and anticholinergic medications can worsen urinary retention by promoting bladder muscle relaxation. Diuretic medications such as loop diuretics (e.g., furosemide) or thiazides (e.g., chlorthalidone) can cause or worsen urinary frequency and nighttime awakenings to urinate.
Management
When treating and managing benign prostatic hyperplasia, the aim is to prevent complications related to the disease and improve or relieve symptoms. Approaches used include lifestyle modifications, medications, and surgery.
Lifestyle
Lifestyle alterations to address the symptoms of BPH include physical activity, decreasing fluid intake before bedtime, moderating the consumption of alcohol and caffeine-containing products and following a timed voiding schedule.
Patients can also attempt to avoid products and medications with anticholinergic properties that may exacerbate urinary retention symptoms of BPH, including antihistamines, decongestants, opioids, and tricyclic antidepressants; however, changes in medications should be done with input from a medical professional.
Physical activity
Physical activity has been recommended as a treatment for urinary tract symptoms. A 2019 Cochrane review of six studies involving 652 men assessed the effects of physical activity alone, physical activity as part of a self-management program, and related interventions. However, the quality of evidence was very low, and it therefore remains uncertain whether physical activity is helpful in men experiencing urinary symptoms caused by benign prostatic hyperplasia.
Voiding position
Voiding position when urinating may influence urodynamic parameters (urinary flow rate, voiding time, and post-void residual volume). A meta-analysis found no differences between the standing and sitting positions for healthy males, but that, for elderly males with lower urinary tract symptoms, voiding in the sitting position:
decreased the post void residual volume;
increased the maximum urinary flow, comparable with pharmacological intervention; and
decreased the voiding time.
This urodynamic profile is associated with a lower risk of urologic complications, such as cystitis and bladder stones.
Medications
The two main medication classes for BPH management are alpha blockers and 5α-reductase inhibitors.
Alpha blockers
Selective α1-blockers are the most common choice for initial therapy. They include alfuzosin, doxazosin, silodosin, tamsulosin, terazosin, and naftopidil. They have a small to moderate benefit at improving symptoms. Selective alpha-1 blockers are similar in effectiveness but have slightly different side effect profiles. Alpha blockers relax smooth muscle in the prostate and the bladder neck, thus decreasing the blockage of urine flow. Common side effects of alpha blockers include orthostatic hypotension (a head rush or dizzy spell when standing up or stretching), ejaculation changes, erectile dysfunction, headaches, nasal congestion, and weakness. For men with LUTS due to an enlarged prostate, the effects of naftopidil, tamsulosin and silodosin on urinary symptoms and quality of life may be similar. Naftopidil and tamsulosin may have similar levels of unwanted sexual side effects but fewer unwanted side effects than silodosin. Tamsulosin and silodosin are selective α1 receptor blockers that preferentially bind to the α1A receptor in the prostate instead of the α1B receptor in the blood vessels. Less-selective α1 receptor blockers such as terazosin and doxazosin may lower blood pressure. The older, less selective α1-adrenergic blocker prazosin is not a first line choice for either high blood pressure or prostatic hyperplasia; it is a choice for patients who present with both problems at the same time. The older, broadly non-selective alpha blocker medications such as phenoxybenzamine are not recommended for control of BPH. Non-selective alpha blockers such as terazosin and doxazosin may also require slow dose adjustments as they can lower blood pressure and cause syncope (fainting) if the response to the medication is too strong.
5α-reductase inhibitors
The 5α-reductase inhibitors finasteride and dutasteride may also be used in men with BPH. These medications inhibit the 5α-reductase enzyme, which, in turn, inhibits production of DHT, a hormone responsible for enlarging the prostate. Effects may take longer to appear than alpha blockers, but they persist for many years. When used together with alpha blockers, no benefit was reported in short-term trials, but in a longer-term study (3–4 years) there was a greater reduction in BPH progression to acute urinary retention and surgery than with either agent alone, especially in people with more severe symptoms and larger prostates. Other trials have confirmed reductions in symptoms, within 6 months in one trial, an effect that was maintained after withdrawal of the alpha blocker. Side effects include decreased libido and ejaculatory or erectile dysfunction. The 5α-reductase inhibitors are contraindicated in pregnant women because of their teratogenicity due to interference with fetal testosterone metabolism, and as a precaution, pregnant women should not handle crushed or broken tablets.
Phosphodiesterase inhibitors (PDE)
A 2018 Cochrane review of studies on men over 60 with moderate to severe lower urinary tract symptoms analyzed the impacts of phosphodiesterase inhibitors (PDE) in comparison to other drugs. These drugs may improve urinary symptoms slightly and reduce urinary bother but may also cause more side effects compared to placebo. The evidence in this review found that there is probably no difference between PDE and alpha blockers, however when used in combination they may provide a greater improvement in symptoms (with more side effects). PDE also likely improves symptoms when used in combination with 5-alpha reductase inhibitors.
Several phosphodiesterase-5 inhibitors are also effective, but may require multiple doses daily to maintain adequate urine flow. Tadalafil, a phosphodiesterase-5 inhibitor, was considered then rejected by NICE in the UK for the treatment of symptoms associated with BPH. In 2011, the U.S. Food and Drug Administration approved tadalafil to treat the signs and symptoms of benign prostatic hyperplasia, and for the treatment of BPH and erectile dysfunction (ED), when the conditions occur simultaneously.
Others
Antimuscarinics such as tolterodine may also be used, especially in combination with alpha blockers. They act by decreasing acetylcholine effects on the smooth muscle of the bladder, thus helping control symptoms of an overactive bladder.
Self-catheterization
Intermittent urinary catheterization is used to relieve the bladder in people with urinary retention. Self-catheterization is an option in BPH when it is difficult or impossible to completely empty the bladder. Urinary tract infection is the most common complication of intermittent catheterization. Several techniques and types of catheter are available, including sterile (single-use) and clean (multiple use) catheters, but, based on current information, none is superior to others in reducing the incidence of urinary tract infection.
Surgery
If medical treatment is not effective, surgery may be performed. Surgical techniques used include the following:
Transurethral resection of the prostate (TURP): the gold standard. TURP is thought to be the most effective approach for improving urinary symptoms and urinary flow; however, this surgical procedure may be associated with complications in up to 20% of men </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Benzodiazepine overdose and can you explain it in depth? [/INST] Benzodiazepine overdose describes the ingestion of one of the drugs in the benzodiazepine class in quantities greater than are recommended or generally practiced. The most common symptoms of overdose include central nervous system (CNS) depression, impaired balance, ataxia, and slurred speech. Severe symptoms include coma and respiratory depression. Supportive care is the mainstay of treatment of benzodiazepine overdose. There is an antidote, flumazenil, but its use is controversial. Deaths from single-drug benzodiazepine overdoses occur infrequently, particularly after the point of hospital admission. However, combinations of high doses of benzodiazepines with alcohol, barbiturates, opioids or tricyclic antidepressants are particularly dangerous, and may lead to severe complications such as coma or death. In 2013, benzodiazepines were involved in 31% of the estimated 22,767 deaths from prescription drug overdose in the United States. The US Food and Drug Administration (FDA) has subsequently issued a black box warning regarding concurrent use of benzodiazepines and opioids. Benzodiazepines are one of the most highly prescribed classes of drugs, and they are commonly used in self-poisoning. Over 10 years in the United Kingdom, 1512 fatal poisonings have been attributed to benzodiazepines with or without alcohol. Temazepam was shown to be more toxic than the majority of benzodiazepines. An Australian (1995) study found oxazepam less toxic and less sedative, and temazepam more toxic and more sedative, than most benzodiazepines in overdose.
Signs and symptoms
Following an acute overdose of a benzodiazepine, the onset of symptoms is typically rapid, with most people developing symptoms within 4 hours. Patients initially present with mild to moderate impairment of central nervous system function. Initial signs and symptoms include intoxication, somnolence, diplopia, impaired balance, impaired motor function, anterograde amnesia, ataxia, and slurred speech. Most patients with pure benzodiazepine overdose will usually only exhibit these mild CNS symptoms. Paradoxical reactions such as anxiety, delirium, combativeness, hallucinations, and aggression can also occur following benzodiazepine overdose. Gastrointestinal symptoms such as nausea and vomiting have also been occasionally reported. Cases of severe overdose have been reported, and symptoms displayed might include prolonged deep coma or deep cyclic coma, apnea, respiratory depression, hypoxemia, hypothermia, hypotension, bradycardia, cardiac arrest, and pulmonary aspiration, with the possibility of death. Severe consequences are rare following overdose of benzodiazepines alone, but the severity of overdose is increased significantly if benzodiazepines are taken in overdose in combination with other medications. Significant toxicity may result following recreational drug misuse in conjunction with other CNS depressants such as opioids or alcohol. The duration of symptoms following overdose is usually between 12 and 36 hours in the majority of cases. The majority of drug-related deaths involve misuse of heroin or other opioids in combination with benzodiazepines or other CNS depressant drugs. In most cases of fatal overdose it is likely that lack of opioid tolerance combined with the depressant effects of benzodiazepines is the cause of death. The symptoms of an overdose such as sleepiness, agitation and ataxia occur much more frequently and severely in children. Hypotonia may also occur in severe cases.
Toxicity
Benzodiazepines have a wide therapeutic index and taken alone in overdose rarely cause severe complications or fatalities. More often than not, a patient who inadvertently takes more than the prescribed dose will simply feel drowsy and fall asleep for a few hours. Benzodiazepines taken in overdose in combination with alcohol, barbiturates, opioids, tricyclic antidepressants, or sedating antipsychotics, anticonvulsants, or antihistamines are particularly dangerous. Additionally, emergency department visits involving benzodiazepines compared to other sedative-hypnotics have much higher odds of hospitalization, patient transfer, or death. In the case of alcohol and barbiturates, not only do they have an additive effect but they also increase the binding affinity of benzodiazepines to the benzodiazepine binding site, which results in a very significant potentiation of the CNS and respiratory depressant effects. In addition, the elderly and those with chronic illnesses are much more vulnerable to lethal overdose with benzodiazepines. Fatal overdoses can occur at relatively low doses in these individuals.
Comparability
The various benzodiazepines differ in their toxicity since they produce varying levels of sedation in overdose. A 1993 British study of deaths during the 1980s found flurazepam and temazepam more frequently involved in drug-related deaths, causing more deaths per million prescriptions than other benzodiazepines. Flurazepam, now rarely prescribed in the United Kingdom and Australia, had the highest fatal toxicity index of any benzodiazepine (15.0), followed by temazepam (11.9), versus benzodiazepines overall (5.9), taken with or without alcohol. An Australian (1995) study found oxazepam less toxic and less sedative, and temazepam more toxic and more sedative, than most benzodiazepines in overdose. An Australian study (2004) of overdose admissions between 1987 and 2002 found alprazolam, which happens to be the most prescribed benzodiazepine in Australia and the United States, to be more toxic than diazepam and other benzodiazepines. They also cited a review of the Annual Reports of the American Association of Poison Control Centers National Data Collection System, which showed alprazolam was involved in 34 fatal deliberate self-poisonings over 10 years (1992–2001), compared with 30 fatal deliberate self-poisonings involving diazepam. In a New Zealand study (2003) of 200 deaths, zopiclone, a benzodiazepine receptor agonist, had similar overdose potential to benzodiazepines.
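The fatal toxicity index used in these comparisons is simply deaths per million prescriptions. A worked Python sketch follows; the function is illustrative, and the example numbers are invented to reproduce flurazepam's reported index of 15.0 rather than taken from the study.

def fatal_toxicity_index(deaths: int, prescriptions: int) -> float:
    """Deaths per million prescriptions, as used to compare the
    benzodiazepines above."""
    return deaths / prescriptions * 1_000_000

# 30 deaths over 2 million prescriptions gives an index of 15.0
print(fatal_toxicity_index(30, 2_000_000))  # 15.0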
Pathophysiology
Benzodiazepines bind to a specific benzodiazepine receptor, thereby enhancing the effect of the neurotransmitter gamma-aminobutyric acid (GABA) and causing CNS depression. In overdose situations this pharmacological effect is extended leading to a more severe CNS depression and potentially coma or cardiac arrest. Benzodiazepine-overdose-related coma may be characterised by an alpha pattern with the central somatosensory conduction time (CCT) after median nerve stimulation being prolonged and the N20 to be dispersed. Brain-stem auditory evoked potentials demonstrate delayed interpeak latencies (IPLs) I-III, III-V and I-V. Toxic overdoses of benzodiazepines therefore cause prolonged CCT and IPLs.
Diagnosis
The diagnosis of benzodiazepine overdose may be difficult, but is usually made based on the clinical presentation of the patient along with a history of overdose. Obtaining a laboratory test for benzodiazepine blood concentrations can be useful in patients presenting with CNS depression or coma of unknown origin. Techniques available to measure blood concentrations include thin layer chromatography, gas liquid chromatography with or without a mass spectrometer, and radioimmunoassay. Blood benzodiazepine concentrations, however, do not appear to be related to any toxicological effect or predictive of clinical outcome. Blood concentrations are, therefore, used mainly to confirm the diagnosis rather than being useful for the clinical management of the patient.
Treatment
Medical observation and supportive care are the mainstay of treatment of benzodiazepine overdose. Although benzodiazepines are absorbed by activated charcoal, gastric decontamination with activated charcoal is not beneficial in pure benzodiazepine overdose as the risk of adverse effects would outweigh any potential benefit from the procedure. It is recommended only if benzodiazepines have been taken in combination with other drugs that may benefit from decontamination. Gastric lavage (stomach pumping) or whole bowel irrigation are also not recommended. Enhancing elimination of the drug with hemodialysis, hemoperfusion, or forced diuresis is unlikely to be beneficial as these procedures have little effect on the clearance of benzodiazepines due to their large volume of distribution and lipid solubility.
Supportive measures
Supportive measures include observation of vital signs, especially Glasgow Coma Scale and airway patency. IV access with fluid administration and maintenance of the airway with intubation and artificial ventilation may be required if respiratory depression or pulmonary aspiration occurs. Supportive measures should be put in place prior to administration of any benzodiazepine antagonist in order to protect the patient from both the withdrawal effects and possible complications arising from the benzodiazepine. A determination of possible deliberate overdose should be considered with appropriate scrutiny, and precautions taken to prevent any attempt by the patient to commit further bodily harm. Hypotension is corrected with fluid replacement, although catecholamines such as norepinephrine or dopamine may be required to increase blood pressure. Bradycardia is treated with atropine or an infusion of norepinephrine to increase coronary blood flow and heart rate.
Flumazenil
Flumazenil (Romazicon) is a competitive benzodiazepine receptor antagonist that can be used as an antidote for benzodiazepine overdose. Its use, however, is controversial as it has numerous contraindications. It is contraindicated in patients who are on long-term benzodiazepines, those who have ingested a substance that lowers the seizure threshold, or in patients who have tachycardia, widened QRS complex on ECG, anticholinergic signs, or a history of seizures. Due to these contraindications and the possibility of it causing severe adverse effects including seizures, adverse cardiac effects, and death, in the majority of cases there is no indication for the use of flumazenil in the management of benzodiazepine overdose as the risks in general outweigh any potential benefit of administration. It also has no role in the management of unknown overdoses. In addition, if full airway protection has been achieved, a good outcome is expected, and therefore flumazenil administration is unlikely to be required. Flumazenil is very effective at reversing the CNS depression associated with benzodiazepines but is less effective at reversing respiratory depression. One study found that only 10% of the patient population presenting with a benzodiazepine overdose are suitable candidates for flumazenil. In this select population who are naive to and overdose solely on a benzodiazepine, it can be considered. Due to its short half-life, the duration of action of flumazenil is usually less than 1 hour, and multiple doses may be needed. When flumazenil is indicated the risks can be reduced or avoided by slow dose titration of flumazenil. Due to risks and its many contraindications, flumazenil should be administered only after discussion with a medical toxicologist.
Epidemiology
In a Swedish study (2003), benzodiazepines were implicated in 39% of suicides by drug poisoning among the elderly during 1992–1996. Nitrazepam and flunitrazepam accounted for 90% of benzodiazepine-implicated suicides. In cases where benzodiazepines contributed to death, but were not the sole cause, drowning, typically in the bath, was a common method used. Benzodiazepines were the predominant drug class in suicides in this review of Swedish death certificates. In 72% of the cases, benzodiazepines were the only drug consumed. Thus, many of the deaths associated with benzodiazepine overdoses may not be a direct result of the toxic effects but either due to being combined with other drugs or used as a tool to kill oneself using a different method, e.g. drowning. In a Swedish retrospective study of deaths in 1987, benzodiazepines were found in 159 of 1587 autopsy cases </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Thiamine deficiency and can you explain it in depth? [/INST] Thiamine deficiency is a medical condition of low levels of thiamine (Vitamin B1). A severe and chronic form is known as beriberi. The two main types in adults are wet beriberi and dry beriberi. Wet beriberi affects the cardiovascular system, resulting in a fast heart rate, shortness of breath, and leg swelling. Dry beriberi affects the nervous system, resulting in numbness of the hands and feet, confusion, trouble moving the legs, and pain. A form with loss of appetite and constipation may also occur. Another type, acute beriberi, found mostly in babies, presents with loss of appetite, vomiting, lactic acidosis, changes in heart rate, and enlargement of the heart. Risk factors include a diet of mostly white rice, alcoholism, dialysis, chronic diarrhea, and taking high doses of diuretics. In rare cases, it may be due to a genetic condition that results in difficulties absorbing thiamine found in food. Wernicke encephalopathy and Korsakoff syndrome are forms of dry beriberi. Diagnosis is based on symptoms, low levels of thiamine in the urine, high blood lactate, and improvement with thiamine supplementation. Treatment is by thiamine supplementation, either by mouth or by injection. With treatment, symptoms generally resolve in a few weeks. The disease may be prevented at the population level through the fortification of food. Thiamine deficiency is rare in the United States. It remains relatively common in sub-Saharan Africa. Outbreaks have been seen in refugee camps. Thiamine deficiency has been described for thousands of years in Asia, and became more common in the late 1800s with the increased processing of rice.
Signs and symptoms
Symptoms of beriberi include weight loss, emotional disturbances, impaired sensory perception, weakness and pain in the limbs, and periods of irregular heart rate. Edema (swelling of bodily tissues) is common. It may increase the amount of lactic acid and pyruvic acid within the blood. In advanced cases, the disease may cause high-output cardiac failure and death.
Symptoms may occur concurrently with those of Wernicke's encephalopathy, a primarily neurological thiamine deficiency-related condition.
Beriberi is divided into four categories. The first three are historical and the fourth, gastrointestinal beriberi, was recognized in 2004:
Dry beriberi especially affects the peripheral nervous system.
Wet beriberi especially affects the cardiovascular system and other bodily systems.
Infantile beriberi affects the babies of malnourished mothers.
Gastrointestinal beriberi affects the digestive system and other bodily systems.
Dry beriberi
Dry beriberi causes wasting and partial paralysis resulting from damaged peripheral nerves. It is also referred to as endemic neuritis. It is characterized by:
Difficulty with walking
Tingling or loss of sensation (numbness) in hands and feet
Loss of tendon reflexes
Loss of muscle function or paralysis of the lower legs
Mental confusion/speech difficulties
Pain
Involuntary eye movements (nystagmus)
Vomiting
A selective impairment of the large proprioceptive sensory fibers without motor impairment can occur and present as a prominent sensory ataxia, which is a loss of balance and coordination due to loss of the proprioceptive inputs from the periphery and loss of position sense.
Brain disease
Wernicke's encephalopathy (WE), Korsakoff syndrome (also called alcohol amnestic disorder), and Wernicke–Korsakoff syndrome are forms of dry beriberi. Wernicke's encephalopathy is the most frequently encountered manifestation of thiamine deficiency in Western society, though it may also occur in patients with impaired nutrition from other causes, such as gastrointestinal disease, those with HIV/AIDS, and with the injudicious administration of parenteral glucose or hyperalimentation without adequate B-vitamin supplementation. This is a striking neuro-psychiatric disorder characterized by paralysis of eye movements, abnormal stance and gait, and markedly deranged mental function. Korsakoff syndrome, in general, is considered to occur with deterioration of brain function in patients initially diagnosed with WE. This is an amnestic-confabulatory syndrome characterized by retrograde and anterograde amnesia, impairment of conceptual functions, and decreased spontaneity and initiative. Alcoholics may have thiamine deficiency because of:
Inadequate nutritional intake: Alcoholics tend to consume less than the recommended amount of thiamine.
Decreased uptake of thiamine from the GI tract: Active transport of thiamine into enterocytes is disturbed during acute alcohol exposure.
Liver thiamine stores are reduced due to hepatic steatosis or fibrosis.
Impaired thiamine utilization: Magnesium, which is required for the binding of thiamine to thiamine-using enzymes within the cell, is also deficient due to chronic alcohol consumption. The inefficient use of any thiamine that does reach the cells will further exacerbate the thiamine deficiency.
Ethanol per se inhibits thiamine transport in the gastrointestinal system and blocks phosphorylation of thiamine to its cofactor form (ThDP). Following improved nutrition and cessation of alcohol consumption, some impairments linked with thiamine deficiency are reversed, in particular poor brain functionality, although in more severe cases, Wernicke–Korsakoff syndrome leaves permanent damage. (See delirium tremens.)
Wet beriberi
Wet beriberi affects the heart and circulatory system. It is sometimes fatal, as it causes a combination of heart failure and weakening of the capillary walls, which causes the peripheral tissues to become edematous. Wet beriberi is characterized by:
Increased heart rate
Vasodilation leading to decreased systemic vascular resistance, and high-output heart failure
Elevated jugular venous pressure
Dyspnea (shortness of breath) on exertion
Paroxysmal nocturnal dyspnea
Peripheral edema (swelling of lower legs) or generalized edema (swelling throughout the body)
Dilated cardiomyopathy
Gastrointestinal beriberi
Gastrointestinal beriberi causes abdominal pain. It is characterized by:
Abdominal pain
Nausea
Vomiting
Lactic acidosis
Infants
Infantile beriberi usually occurs between two and six months of age in children whose mothers have inadequate thiamine intake. It may present as either wet or dry beriberi. In the acute form, the baby develops dyspnea and cyanosis and soon dies of heart failure. These symptoms may be described in infantile beriberi:
Hoarseness, in which the child goes through the motions of moaning but emits no sound or only faint moans, caused by nerve paralysis
Weight loss, becoming thinner and then marasmic as the disease progresses
Vomiting
Diarrhea
Pale skin
Edema
Ill temper
Alterations of the cardiovascular system, especially tachycardia (rapid heart rate)
Convulsions occasionally observed in the terminal stages
Cause
Beriberi is often caused by eating a diet with a very high proportion of calorie-rich polished rice (common in Asia) or cassava root (common in sub-Saharan Africa), with little if any thiamine-containing animal products or vegetables. It may also be caused by factors other than inadequate intake: diseases of or operations on the digestive tract, alcoholism, dialysis, or genetic deficiencies. All these causes mainly affect the central nervous system and provoke the development of Wernicke's encephalopathy.
Wernicke's disease is one of the most prevalent neurological or neuropsychiatric diseases. In autopsy series, features of Wernicke lesions are observed in approximately 2% of general cases. Medical record research shows that about 85% had not been diagnosed, although only 19% would be asymptomatic. In children, only 58% were diagnosed. In alcohol abusers, autopsy series showed neurological damage at rates of 12.5% or more. Mortality caused by Wernicke's disease reaches 17% of cases; applied to the roughly 2% lesion prevalence, this corresponds to about 3.4 per 1,000, or about 25 million people alive today. The number of people with Wernicke's disease may be even higher, considering that early stages may have dysfunctions prior to the production of observable lesions at necropsy. In addition, uncounted numbers of people can experience fetal damage and subsequent diseases.
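As a quick check of how these figures relate — assuming, purely for illustration, that the ~2% lesion prevalence and the 17% mortality apply to the same general population — the arithmetic works out as follows:

# Illustrative arithmetic only: combines the ~2% autopsy lesion prevalence
# with the 17% mortality figure cited above. Both inputs are carried over
# from the text, not new data.
prevalence = 0.02   # fraction of general autopsy cases with Wernicke lesions
mortality = 0.17    # fraction of those with the disease who die of it
rate_per_1000 = prevalence * mortality * 1000
print(f"{rate_per_1000:.1f} per 1,000")   # prints: 3.4 per 1,000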
Genetics
Genetic diseases of thiamine transport are rare but serious. Thiamine-responsive megaloblastic anemia syndrome (TRMA) with diabetes mellitus and sensorineural deafness is an autosomal recessive disorder caused by mutations in the gene SLC19A2, a high-affinity thiamine transporter. TRMA patients do not show signs of systemic thiamine deficiency, suggesting redundancy in the thiamine transport system. This has led to the discovery of a second high-affinity thiamine transporter, SLC19A3. Leigh disease (subacute necrotising encephalomyelopathy) is an inherited disorder that affects mostly infants in the first years of life and is invariably fatal. Pathological similarities between Leigh disease and WE led to the hypothesis that the cause was a defect in thiamine metabolism. One of the most consistent findings has been an abnormality of the activation of the pyruvate dehydrogenase complex. Mutations in the SLC19A3 gene have been linked to biotin-thiamine responsive basal ganglia disease, which is treated with pharmacological doses of thiamine and biotin, another B vitamin.
Other disorders in which a putative role for thiamine has been implicated include subacute necrotising encephalomyelopathy, opsoclonus myoclonus syndrome (a paraneoplastic syndrome), and Nigerian seasonal ataxia (or African seasonal ataxia). In addition, several inherited disorders of ThDP-dependent enzymes have been reported, which may respond to thiamine treatment.
Pathophysiology
Thiamine in the human body has a half-life of 18 days and is quickly exhausted, particularly when metabolic demands exceed intake. A derivative of thiamine, thiamine pyrophosphate (TPP), is a cofactor involved in the citric acid cycle, as well as connecting the breakdown of sugars with the citric acid cycle. The citric acid cycle is a central metabolic pathway involved in the regulation of carbohydrate, lipid, and amino acid metabolism, and its disruption due to thiamine deficiency inhibits the production of many molecules including the neurotransmitters glutamic acid and GABA. Additionally, thiamine may also be directly involved in neuromodulation.
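The 18-day half-life can be made concrete with a simple decay model. This is a minimal sketch, assuming first-order (exponential) depletion of body stores with zero intake; the kinetic model and the time points chosen are illustrative assumptions, not claims from the text:

# Minimal sketch of thiamine-store depletion, assuming first-order decay
# with the 18-day half-life cited above and no dietary intake.
HALF_LIFE_DAYS = 18

def fraction_remaining(days: float) -> float:
    """Fraction of initial thiamine stores left after `days` without intake."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

for d in (18, 36, 54):
    print(f"day {d}: {fraction_remaining(d):.1%} of stores remain")
# day 18: 50.0%, day 36: 25.0%, day 54: 12.5%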
Diagnosis
Diagnostic testing for thiamine deficiency involves measuring the activity of the enzyme transketolase in erythrocytes (the erythrocyte transketolase activation assay). Alternatively, thiamine and its phosphorylated derivatives can be detected directly in whole blood, tissues, foods, animal feed, and pharmaceutical preparations following the conversion of thiamine to fluorescent thiochrome derivatives (thiochrome assay) and separation by high-performance liquid chromatography (HPLC). Capillary electrophoresis (CE) techniques and in-capillary enzyme reaction methods have emerged as alternative techniques for quantifying and monitoring thiamine levels in samples.
The normal thiamine concentration in EDTA-blood is about 20-100 µg/L.
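For illustration only, a minimal sketch that compares a measured concentration against this reference interval — the cutoffs come from the sentence above, while the function and its interpretation strings are placeholders, not clinical guidance:

# Minimal sketch flagging a whole-blood thiamine result against the
# 20-100 µg/L reference interval quoted above.
LOW, HIGH = 20.0, 100.0   # µg/L, EDTA whole blood

def interpret_thiamine(ug_per_l: float) -> str:
    if ug_per_l < LOW:
        return "below reference interval (consistent with deficiency)"
    if ug_per_l > HIGH:
        return "above reference interval"
    return "within reference interval"

print(interpret_thiamine(12.0))   # below reference interval (...)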
Treatment
Many people with beriberi can be treated with thiamine alone. Given thiamine intravenously (and later orally), rapid and dramatic recovery occurs, generally within 24 hours. Improvement of peripheral neuropathy may require several months of thiamine treatment.
Epidemiology
Beriberi is a recurrent nutritional disease in detention houses, even in this century. In 1999, an outbreak of beriberi occurred in a detention center in Taiwan. High rates of illness and death in overcrowded Haitian jails in 2007 were traced to the traditional practice of washing rice before cooking. In the Ivory Coast, among a group of prisoners under harsh punishment, 64% were affected by beriberi. Before beginning treatment, prisoners exhibited symptoms of dry or wet beriberi with neurological signs (tingling: 41%), cardiovascular signs (dyspnoea: 42%, thoracic pain: 35%), and edema of the lower limbs (51%). With treatment, the rate of healing was about 97%. Populations under extreme stress may be at higher risk for beriberi. Displaced populations, such as refugees from war, are susceptible to micronutritional deficiency, including beriberi. The severe nutritional deprivation caused by famine can also cause beriberi, although symptoms may be overlooked in clinical assessment or masked by other famine-related problems. An extreme weight-loss diet can, rarely, induce a famine-like state and the accompanying beriberi.
History
The earliest written descriptions of thiamine deficiency are from ancient China in the context of Chinese medicine. One of the earliest is by Ge Hong in his book Zhou hou bei ji fang (Emergency Formulas to Keep up Your Sleeve), written sometime during the third century. Hong called the illness by the name jiao qi, which can be interpreted as "foot qi". He described the symptoms to include swelling, weakness, and numbness of the feet. He also acknowledged that the illness could be deadly, and claimed that it could be cured by eating certain foods, such as fermented soybeans in wine. Better known examples of early descriptions of "foot qi" are by Chao Yuanfang (who lived during 550–630) in his book Zhu bing yuan hou lun (Sources and Symptoms of All Diseases) and by Sun Simiao (581–682) in his book Bei ji qian jin yao fang (Essential Emergency Formulas Worth a Thousand in Gold). In the late 19th century, beriberi was studied by Takaki Kanehiro, a British-trained Japanese medical doctor of the Imperial Japanese Navy. Beriberi was a serious problem in the Japanese navy; sailors fell ill an average of four times a year in the period 1878 to 1881, and 35% were cases of beriberi. In 1883, Takaki learned of a very high incidence of beriberi among cadets on a training mission from Japan to Hawaii, via New Zealand and South America. The voyage lasted more than nine months and resulted in 169 cases of sickness and 25 deaths on a ship of 376 men. With the support of the Japanese Navy, he conducted an experiment in which another ship was deployed on the same route, except that its crew was fed a diet of meat, fish, barley, rice, and beans. At the end of the voyage, this crew had only 14 cases of beriberi and no deaths. This convinced Takaki and the Japanese Navy that diet was the cause. In 1884, Takaki observed that beriberi was common among low-ranking crew who were often provided free rice and thus ate little else, but not among crews of Western navies, nor among Japanese officers who consumed a more varied diet.
In 1897, Christiaan Eijkman, a Dutch physician and pathologist, demonstrated that beriberi is caused by poor diet, and discovered that feeding unpolished rice (instead of the polished variety) to chickens helped to prevent beriberi. The following year, Sir Frederick Hopkins postulated that some foods contained "accessory factors"—in addition to proteins, carbohydrates, fats, and salt—that were necessary for the functions of the human body. In 1901, Gerrit Grijns, a Dutch physician and assistant to Christiaan Eijkman in the Netherlands, correctly interpreted beriberi as a deficiency syndrome, and between 1910 and 1913, Edward Bright Vedder established that an extract of rice bran is a treatment for beriberi. In 1929, Eijkman and Hopkins were awarded the Nobel Prize in Physiology or Medicine for their discoveries.
Etymology
Although according to the Oxford English Dictionary, the term "beriberi" comes from a Sinhalese phrase meaning "weak, weak" or "I cannot, I cannot", the word being duplicated for emphasis, the origin of the phrase is questionable. It has also been suggested to come from Hindi, Arabic, and a few other languages, with many meanings like "weakness", "sailor", and even "sheep". Such suggested origins were listed by Heinrich Botho Scheube, among others. Edward Vedder wrote in his book Beriberi (1913) that "it is impossible to definitely trace the origin of the word beriberi". The word berbere was used in writing at least as early as 1568 by Diogo do Couto, when he described the deficiency in India. Kakke, which is a Japanese synonym for thiamine deficiency, comes from the way "jiao qi" is pronounced in Japanese. "Jiao qi" is an old word used in Chinese medicine to describe beriberi. "Kakke" is supposed to have entered into the Japanese language sometime between the sixth and eighth centuries.
Other animals
Poultry
As most feedstuffs used in poultry diets contain adequate quantities of vitamins to meet the requirements of this species, deficiencies in this vitamin do not occur with commercial diets. This was, at least, the opinion in the 1960s. Mature chickens show signs three weeks after being fed a deficient diet. In young chicks, it can appear before two weeks of age. Onset is sudden in young chicks, with anorexia and an unsteady gait. Later on, locomotor signs begin, with an apparent paralysis of the flexor of the toes. The characteristic position is called "stargazing", with the affected animal sitting on its hocks with its head thrown back in a posture called opisthotonos. Response to administration of the vitamin is rather quick, occurring a few hours later.
Ruminants
Polioencephalomalacia (PEM) is the most common thiamine deficiency disorder in young ruminant and nonruminant animals. Symptoms of PEM include a profuse, but transient, diarrhea, listlessness, circling movements, stargazing or opisthotonus (head drawn back over neck), and muscle tremors. The most common cause is high-carbohydrate feeds, leading to the overgrowth of thiaminase-producing bacteria, but dietary ingestion of thiaminase (e.g., in bracken fern), or inhibition of thiamine absorption by high sulfur intake are also possible. Another cause of PEM is Clostridium sporogenes or Bacillus aneurinolyticus infection. These bacteria produce thiaminases that can cause an acute thiamine deficiency in the affected animal.
Snakes
Snakes that consume a diet largely composed of goldfish and feeder minnows are susceptible to developing thiamine deficiency. This is often a problem observed in captivity when keeping garter and ribbon snakes that are fed a goldfish-exclusive diet, as these fish contain thiaminase, an enzyme that breaks down thiamine.
Wild birds and fish
Thiamine deficiency has been identified as the cause of a paralytic disease affecting wild birds in the Baltic Sea area dating back to 1982. In this condition, there is difficulty in keeping the wings folded along the side of the body when resting, loss of the ability to fly and of voice, with eventual paralysis of the wings and legs and death. It affects primarily 0.5–1 kg birds such as the European herring gull (Larus argentatus), common starling (Sturnus vulgaris), and common eider (Somateria mollissima). Researchers noted, "Because the investigated species occupy a wide range of ecological niches and positions in the food web, we are open to the possibility that other animal classes may develop thiamine deficiency, as well." (p. 12006) In the counties of Blekinge and Skåne (southernmost Sweden), mass deaths of several bird species, especially the European herring gull, have been observed since the early 2000s. More recently, species of other classes seem to be affected. High mortality of salmon (Salmo salar) in the river Mörrumsån has been reported, and mammals such as the Eurasian elk (Alces alces) have died in unusually high numbers. Lack of thiamine is the common denominator where analysis is done. In April 2012, the County Administrative Board of Blekinge found the situation so alarming that they asked the Swedish government to set up a closer investigation.
</s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Beta blocker and can you explain it in depth? [/INST] Beta blockers, also spelled β-blockers, are a class of medications that are predominantly used to manage abnormal heart rhythms, and to protect the heart from a second heart attack after a first heart attack (secondary prevention). They are also widely used to treat high blood pressure, although they are no longer the first choice for initial treatment of most patients. Beta blockers are competitive antagonists that block the receptor sites for the endogenous catecholamines epinephrine (adrenaline) and norepinephrine (noradrenaline) on adrenergic beta receptors, of the sympathetic nervous system, which mediates the fight-or-flight response. Some block activation of all types of β-adrenergic receptors and others are selective for one of the three known types of beta receptors, designated β1, β2 and β3 receptors. β1-adrenergic receptors are located mainly in the heart and in the kidneys. β2-adrenergic receptors are located mainly in the lungs, gastrointestinal tract, liver, uterus, vascular smooth muscle, and skeletal muscle. β3-adrenergic receptors are located in fat cells. Beta receptors are found on cells of the heart muscles, smooth muscles, airways, arteries, kidneys, and other tissues that are part of the sympathetic nervous system and lead to stress responses, especially when they are stimulated by epinephrine (adrenaline). Beta blockers interfere with the binding to the receptor of epinephrine and other stress hormones and weaken the effects of stress hormones.
In 1964, James Black synthesized the first clinically significant beta blockers—propranolol and pronethalol; this revolutionized the medical management of angina pectoris and is considered by many to be one of the most important contributions to clinical medicine and pharmacology of the 20th century. For the treatment of primary hypertension, meta-analyses of studies which mostly used atenolol have shown that although beta blockers are more effective than placebo in preventing stroke and total cardiovascular events, they are not as effective as diuretics, medications inhibiting the renin–angiotensin system (e.g., ACE inhibitors), or calcium channel blockers.
Medical uses
Large differences exist in the pharmacology of agents within the class, thus not all beta blockers are used for all indications listed below.
Indications for beta blockers include:
Angina pectoris (contraindicated for Prinzmetal's angina)
Atrial fibrillation
Cardiac arrhythmia
Congestive heart failure
Essential tremor
Glaucoma (as eye drops, they decrease intraocular pressure by lowering aqueous humor secretion)
Hypertension, although they are generally not preferred as an initial treatment
Hyperthyroidism
Migraine prophylaxis
Mitral valve prolapse
Myocardial infarction
Phaeochromocytoma, in conjunction with α-blocker
Postural orthostatic tachycardia syndrome
Symptomatic control (tachycardia, tremor) in anxiety and hyperthyroidism
Theophylline overdose
Beta blockers have also been used for:
Acute aortic dissection
Hypertrophic obstructive cardiomyopathy
Long QT syndrome
Marfan syndrome (treatment with propranolol slows progression of aortic dilation and its complications)
Prevention of variceal bleeding in portal hypertension
Possible mitigation of hyperhidrosis
Social and other anxiety disorders
Controversially, for reduction of perioperative mortality in non-cardiac surgery, but the best evidence suggests that they increase mortality when used this way
Congestive heart failure
Although beta blockers were once contraindicated in congestive heart failure, as they have the potential to worsen the condition due to their effect of decreasing cardiac contractility, studies in the late 1990s showed their efficacy at reducing morbidity and mortality. Bisoprolol, carvedilol, and sustained-release metoprolol are specifically indicated as adjuncts to standard ACE inhibitor and diuretic therapy in congestive heart failure, although at doses typically much lower than those indicated for other conditions. Beta blockers are only indicated in cases of compensated, stable congestive heart failure; in cases of acute decompensated heart failure, beta blockers will cause a further decrease in ejection fraction, worsening the patient's current symptoms.
Beta blockers are known primarily for their reductive effect on heart rate, although this is not the only mechanism of action of importance in congestive heart failure. Beta blockers, in addition to their sympatholytic β1 activity in the heart, influence the renin–angiotensin system at the kidneys. Beta blockers cause a decrease in renin secretion, which in turn reduces the heart oxygen demand by lowering the extracellular volume and increasing the oxygen-carrying capacity of the blood. Heart failure characteristically involves increased catecholamine activity on the heart, which is responsible for several deleterious effects, including increased oxygen demand, propagation of inflammatory mediators, and abnormal cardiac tissue remodeling, all of which decrease the efficiency of cardiac contraction and contribute to the low ejection fraction. Beta blockers counter this inappropriately high sympathetic activity, eventually leading to an improved ejection fraction, despite an initial reduction in ejection fraction.
Trials have shown beta blockers reduce the absolute risk of death by 4.5% over a 13-month period. In addition to reducing the risk of mortality, the numbers of hospital visits and hospitalizations were also reduced in the trials. Therapeutic administration of beta blockers for congestive heart failure ought to begin at very low doses (1/8 of target) with a gradual escalation of the dose. The heart of the patient must adjust to decreasing stimulation by catecholamines and find a new equilibrium at a lower adrenergic drive.
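A hedged sketch of what such an escalation might look like in code — only the start-at-one-eighth rule comes from the paragraph above; the dose-doubling step, the two-week interval, and the 25 mg example target are hypothetical illustrations, not dosing guidance:

# Hypothetical sketch of "start at 1/8 of target, escalate gradually".
# The doubling step and two-week interval are illustrative assumptions.
def titration_schedule(target_mg: float, step_weeks: int = 2):
    """Return (week, dose_mg) pairs from 1/8 of the target up to the target."""
    dose = target_mg / 8.0
    week = 0
    steps = []
    while dose < target_mg:
        steps.append((week, round(dose, 3)))
        dose *= 2           # double the dose at each escalation step
        week += step_weeks
    steps.append((week, target_mg))
    return steps

print(titration_schedule(25))
# [(0, 3.125), (2, 6.25), (4, 12.5), (6, 25)]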
Anxiety
Officially, beta blockers are not approved for anxiolytic use by the U.S. Food and Drug Administration. However, many controlled trials in the past 25 years indicate beta blockers are effective in anxiety disorders, though the mechanism of action is not known. The physiological symptoms of the fight-or-flight response (pounding heart, cold/clammy hands, increased respiration, sweating, etc.) are significantly reduced, thus enabling anxious individuals to concentrate on the task at hand.
Musicians, public speakers, actors, and professional dancers have been known to use beta blockers to avoid performance anxiety, stage fright, and tremor during both auditions and public performances. The application to stage fright was first recognized in The Lancet in 1976, and by 1987, a survey conducted by the International Conference of Symphony Orchestra Musicians, representing the 51 largest orchestras in the United States, revealed 27% of its musicians had used beta blockers and 70% obtained them from friends, not physicians. Beta blockers are inexpensive and said to be relatively safe, and they seem to improve musicians' performances on a technical level; however, some, such as Barry Green, the author of "The Inner Game of Music", and Don Greene, a former Olympic diving coach who teaches Juilliard students to overcome their stage fright naturally, say the performances may be perceived as "soulless and inauthentic".
Surgery
Low certainty evidence indicates that the use of beta blockers around the time of cardiac surgery may decrease the risk of heart dysrhythmias and atrial fibrillation. Starting them around the time of other types of surgery, however, may worsen outcomes. For non-cardiac surgery, the use of beta blockers to prevent adverse effects may reduce the risk of atrial fibrillation and myocardial infarctions (very low certainty evidence), however, there is moderate certainty evidence that this approach may increase the risk of hypotension. Low-certainty evidence suggests that beta blockers used perioperatively in non-cardiac surgeries may increase the risk of bradycardia.
Performance-enhancing use
Because they promote lower heart rates and reduce tremors, beta blockers have been used in professional sports where high accuracy is required, including archery, shooting, golf and snooker. Beta blockers are banned in some sports by the International Olympic Committee. In the 2008 Summer Olympics, 50-metre pistol silver medalist and 10-metre air pistol bronze medalist Kim Jong-su tested positive for propranolol and was stripped of his medals. For similar reasons, beta blockers have also been used by surgeons. Classical musicians have commonly used beta blockers since the 1970s to reduce stage fright.
Adverse effects
Adverse drug reactions associated with the use of beta blockers include: nausea, diarrhea, bronchospasm, dyspnea, cold extremities, exacerbation of Raynaud's syndrome, bradycardia, hypotension, heart failure, heart block, fatigue, dizziness, alopecia (hair loss), abnormal vision, hallucinations, insomnia, nightmares, sexual dysfunction, erectile dysfunction and/or alteration of glucose and lipid metabolism. Mixed α1/β-antagonist therapy is also commonly associated with orthostatic hypotension. Carvedilol therapy is commonly associated with edema. Due to the high penetration across the blood–brain barrier, lipophilic beta blockers, such as propranolol and metoprolol, are more likely than other less lipophilic beta blockers to cause sleep disturbances, such as insomnia, vivid dreams and nightmares. Adverse effects associated with β2-adrenergic receptor antagonist activity (bronchospasm, peripheral vasoconstriction, alteration of glucose and lipid metabolism) are less common with β1-selective (often termed "cardioselective") agents, but receptor selectivity diminishes at higher doses. Beta blockade, especially of the beta-1 receptor at the macula densa, inhibits renin release, thus decreasing the release of aldosterone. This causes hyponatremia and hyperkalemia.
Hypoglycemia can occur with beta blockade because β2-adrenoceptors normally stimulate glycogen breakdown (glycogenolysis) in the liver and pancreatic release of the hormone glucagon, which work together to increase plasma glucose. Therefore, blocking β2-adrenoceptors lowers plasma glucose. β1-blockers have fewer metabolic side effects in diabetic patients; however, the fast heart rate that serves as a warning sign for insulin-induced low blood sugar may be masked, resulting in hypoglycemia unawareness. This is termed beta blocker-induced hypoglycemia unawareness. Therefore, beta blockers are to be used cautiously in diabetics. A 2007 study revealed diuretics and beta blockers used for hypertension increase a patient's risk of developing diabetes mellitus, while ACE inhibitors and angiotensin II receptor antagonists (angiotensin receptor blockers) actually decrease the risk of diabetes. Clinical guidelines in Great Britain, but not in the United States, call for avoiding diuretics and beta blockers as first-line treatment of hypertension due to the risk of diabetes. Beta blockers must not be used in the treatment of selective alpha-adrenergic agonist overdose. The blockade of only beta receptors increases blood pressure, reduces coronary blood flow, left ventricular function, and cardiac output and tissue perfusion by means of leaving the alpha-adrenergic system stimulation unopposed. Beta blockers with lipophilic properties and CNS penetration such as metoprolol and labetalol may be useful for treating CNS and cardiovascular toxicity from a methamphetamine overdose. The mixed alpha- and beta blocker labetalol is especially useful for treatment of concomitant tachycardia and hypertension induced by methamphetamine. The phenomenon of "unopposed alpha stimulation" has not been reported with the use of beta blockers for treatment of methamphetamine toxicity. Other appropriate antihypertensive drugs to administer during hypertensive crisis resulting from stimulant overdose are vasodilators such as nitroglycerin, diuretics such as furosemide, and alpha blockers such as phentolamine.
Contraindications
Contraindications for beta blockers include:
Asthma
The 2007 National Heart, Lung, and Blood Institute (NHLBI) asthma guidelines recommend against the use of non-selective beta blockers in asthmatics, while allowing for the use of cardioselective beta blockers (p. 182). Cardioselective beta blockers (β1 blockers), if required, can be prescribed at the lowest possible dose to those with mild to moderate respiratory symptoms. β2-agonists can somewhat mitigate β-blocker-induced bronchospasm; they are more effective at reversing bronchospasm induced by selective β1-blockers than at countering the worsening of asthma and/or COPD induced by nonselective β-blockers.
Diabetes mellitus
Epinephrine provides an early warning of impending hypoglycemia. By inhibiting epinephrine's effects, beta blockers can somewhat exacerbate hypoglycemia by interfering with glycogenolysis, and they can mask its signs, such as tachycardia, palpitations, diaphoresis, and tremors. Diligent blood glucose monitoring is necessary for a patient with diabetes mellitus on a beta blocker.
Hyperthyroidism
Abrupt withdrawal can result in a thyroid storm.
Bradycardia or AV block
Unless a pacemaker is present, beta blockers can severely depress conduction in the AV node, resulting in a reduction of heart rate and cardiac output. One should be very cautious with the use of beta blockers in tachycardia patients with Wolff-Parkinson-White Syndrome, as it can result in life-threatening arrhythmia in certain patients. By slowing the conduction through the AV node, preferential conduction through the accessory pathway is favored. If the patient happens to develop atrial flutter, this could lead to a 1:1 conduction with very fast ventricular rate, or worse, ventricular fibrillation in the case of atrial fibrillation.
Toxicity
Glucagon, used in the treatment of overdose, increases the strength of heart contractions, increases intracellular cAMP, and decreases renal vascular resistance. It is, therefore, useful in patients with beta blocker cardiotoxicity. Cardiac pacing is usually reserved for patients unresponsive to pharmacological therapy.
People experiencing bronchospasm due to the β2 receptor-blocking effects of nonselective beta blockers may be treated with anticholinergic drugs, such as ipratropium, which are safer than beta agonists in patients with cardiovascular disease. Other antidotes for beta blocker poisoning are salbutamol and isoprenaline.
β-receptor antagonism
Stimulation of β1 receptors by epinephrine and norepinephrine induces a positive chronotropic and inotropic effect on the heart and increases cardiac conduction velocity and automaticity. Stimulation of β1 receptors on the kidney causes renin release. Stimulation of β2 receptors induces smooth muscle relaxation, induces tremor in skeletal muscle, and increases glycogenolysis in the liver and skeletal muscle. Stimulation of β3 receptors induces lipolysis. Beta blockers inhibit these normal epinephrine- and norepinephrine-mediated sympathetic actions, but have minimal effect on resting subjects.
That is, they reduce the effect of excitement or physical exertion on heart rate and force of contraction, and also tremor, and breakdown of glycogen. Beta blockers can have a constricting effect on the bronchi of the lungs, possibly worsening or causing asthma symptoms. Since β2 adrenergic receptors can cause vascular smooth muscle dilation, beta blockers may cause some vasoconstriction. However, this effect tends to be small because the activity of β2 receptors is overshadowed by the more dominant vasoconstricting α1 receptors. By far the greatest effect of beta blockers remains in the heart. Newer, third-generation beta blockers can cause vasodilation through blockade of alpha-adrenergic receptors. Accordingly, nonselective beta blockers are expected to have antihypertensive effects. The primary antihypertensive mechanism of beta blockers is unclear, but may involve reduction in cardiac output (due to negative chronotropic and inotropic effects). It may also be due to reduction in renin release from the kidneys, and a central nervous system effect to reduce sympathetic activity (for those beta blockers that do cross the blood–brain barrier, e.g. propranolol).
Antianginal effects result from negative chronotropic and inotropic effects, which decrease cardiac workload and oxygen demand. Negative chronotropic properties of beta blockers allow the lifesaving property of heart rate control. Beta blockers are readily titrated to optimal rate control in many pathologic states.
The antiarrhythmic effects of beta blockers arise from sympathetic nervous system blockade—resulting in depression of sinus node function and atrioventricular node conduction, and prolonged atrial refractory periods. Sotalol, in particular, has additional antiarrhythmic properties and prolongs action potential duration through potassium channel blockade.
Blockade of the sympathetic nervous system on renin release leads to reduced aldosterone via the renin–angiotensin–aldosterone system, with a resultant decrease in blood pressure due to decreased sodium and water retention.
Intrinsic sympathomimetic activity
Also referred to as intrinsic sympathomimetic effect, this term is used particularly with beta blockers that can show both agonism and antagonism at a given beta receptor, depending on the concentration of the agent (beta blocker) and the concentration of the antagonized agent (usually an endogenous compound, such as norepinephrine). See partial agonist for a more general description.
Some beta blockers (e.g. oxprenolol, pindolol, penbutolol, labetalol and acebutolol) exhibit intrinsic sympathomimetic activity (ISA). These agents are capable of exerting low-level agonist activity at the β-adrenergic receptor while simultaneously acting as a receptor site antagonist. These agents, therefore, may be useful in individuals exhibiting excessive bradycardia with sustained beta blocker therapy.
Agents with ISA should not be used in patients with any kind of angina, as they can aggravate it, nor after myocardial infarctions. They may also be less effective than other beta blockers in the management of angina and tachyarrhythmia.
α1-receptor antagonism
Some beta blockers (e.g., labetalol and carvedilol) exhibit mixed antagonism of both β- and α1-adrenergic receptors, which provides additional arteriolar vasodilating action.
Examples
Nonselective agents
Nonselective beta blockers display both β1 and β2 antagonism.
Propranolol
Bucindolol (has additional α1-blocking activity)
Carteolol
Carvedilol (has additional α1-blocking activity)
Labetalol (has intrinsic sympathomimetic activity and additional α1-blocking activity)
Nadolol
Oxprenolol (has intrinsic sympathomimetic activity)
Penbutolol (has intrinsic sympathomimetic activity)
Pindolol (has intrinsic sympathomimetic activity)
Sotalol (not considered a "typical beta blocker")
Timolol
β1-selective agents
β1-selective beta blockers are also known as cardioselective beta blockers. Pharmacologically, beta-blockade of the β1 receptors in the heart acts on cAMP. As a second messenger in the cardiac cell, cAMP (acting through protein kinase A) leads to phosphorylation of the L-type calcium channel (LTCC) and the ryanodine receptor, which increases intracellular calcium levels and causes contraction. Beta-blockade of the β1 receptor inhibits this cAMP-dependent phosphorylation, and it will decrease the inotropic and chronotropic effect </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Biliary atresia and can you explain it in depth? [/INST] Biliary atresia, also known as extrahepatic ductopenia and progressive obliterative cholangiopathy, is a childhood disease of the liver in which one or more bile ducts are abnormally narrow, blocked, or absent. It can be congenital or acquired. It has an incidence of one in 10,000–15,000 live births in the United States, and a prevalence of one in 16,700 in the British Isles. Biliary atresia is most common in East Asia, with a frequency of one in 5,000.
The cause of biliary atresia in Egyptian infants has been shown to be aflatoxin-induced cholangiopathy acquired prenatally in infants who have glutathione S-transferase M1 deficiency. The biliary atresia phenotype caused by congenital aflatoxicosis in GST M1 deficient neonates is named Kotb disease. Syndromic biliary atresia (e.g. Biliary Atresia Splenic Malformation (BASM)) has been associated with certain genes (e.g. Polycystic Kidney Disease 1 Like 1 - PKD1L1), and isolated biliary atresia in some infants may arise as a result of an autoimmune inflammatory response, possibly due to a viral infection of the liver soon after birth. In animals, plant toxins have been shown to cause biliary atresia. The only effective treatments are operations such as the Kasai procedure and liver transplantation.
Signs and symptoms
Initially, the symptoms of biliary atresia are indistinguishable from those of neonatal jaundice, a usually harmless condition commonly seen in infants. However, infants with biliary atresia develop progressive conjugated jaundice, pale white stools, and dark urine. Some infants fail to thrive as there will be a degree of fat and fat-soluble vitamin malabsorption (e.g. Vitamin K). This may cause a bleeding tendency. Eventually, and usually after 2 months, cirrhosis with portal hypertension will develop. If left untreated, biliary atresia can lead to liver failure. Unlike other forms of jaundice, however, biliary-atresia-related cholestasis mostly does not result in kernicterus, a form of brain damage resulting from liver dysfunction. This is because in biliary atresia, the liver, although diseased, is still able to conjugate bilirubin, and conjugated bilirubin is unable to cross the blood–brain barrier.
Causes
The cause of biliary atresia in most infants is not fully understood, and a number of factors may play a role. Maternal rotavirus infection during pregnancy, with subsequent transmission of the virus to the child resulting in infection of the biliary epithelium and occluding fibrosis, may be especially important in this respect. Some cases may relate to infection with other viruses (including COVID-19) as well; for instance, infection with the hepatotropic virus reovirus 3 has been proposed, as has congenital cytomegalovirus infection. In addition, autoimmune processes may contribute to pathogenesis in some cases. However, for these alternative causes the experimental evidence remains rather weak.
Genetics
An association between biliary atresia and the ADD3 gene was first detected in Chinese populations through a genome-wide association study, and was confirmed in Thai Asians and Caucasians. A possible association with deletion of the gene GPC1, which encodes glypican 1, a heparan sulfate proteoglycan, has been reported. This gene is located on the long arm of chromosome 2 (2q37) and is involved in the regulation of inflammation and the Hedgehog gene. Egyptian infants with biliary atresia were found to have a null GSTM1 genotype while all their mothers were heterozygous for GSTM1. Thus these infants may be protected in utero by their maternal detoxification system, yet once born they cannot handle the detoxification of an aflatoxin load.
Toxins
Some cases of biliary atresia may result from exposure to aflatoxin B1, and to a lesser extent aflatoxin B2, during late pregnancy. Intact maternal detoxification protects the baby during intrauterine life, yet after delivery, the baby struggles with the aflatoxin in its blood and liver. Moreover, the baby receives aflatoxin M1 through its mother's milk, as aflatoxin M1 is the detoxification product of aflatoxin B1; it is a milder toxin that causes cholangitis in the baby. There are isolated examples of biliary atresia in animals. For instance, lambs born to sheep grazing on land contaminated with a weed (Red Crumbweed) developed biliary atresia at certain times. The plants were later found to contain a toxin, now called biliatresone. Studies are ongoing to determine whether there is a link between human cases of biliary atresia and toxins such as biliatresone. There are some indications that a metabolite of certain human gut bacteria may be similar to biliatresone.
Pathophysiology
There are three main types of extra-hepatic biliary atresia:
Type I: Atresia is restricted to the common bile duct.
Type II: Atresia of the common hepatic duct.
Type III: Atresia involves the most proximal part of the bile ducts (>95% of all cases).
In approximately 10% of cases, other anomalies may be associated with biliary atresia. The most common of these syndromic forms is BASM and might include heart lesions, polysplenia, situs inversus, absent venae cavae, and a preduodenal portal vein. Progressive cirrhosis is associated with signs and symptoms of portal hypertension, such as esophagogastric varix bleeding, hypersplenism, hepatorenal syndrome, and hepatopulmonary syndrome. In an Egyptian study, abnormally high levels of aflatoxin B1, and to a lesser extent aflatoxin B2, were found in liver tissue and blood of all neonates with biliary atresia. Aflatoxins may cause extensive damage to the hepatocytes, leading to hepatitis, and damage to bile ducts, causing inflammation, adhesions, and final obstruction of the bile ducts. The affected neonates have a genetic detoxification defect that does not allow them to detoxify these aflatoxins in a timely or effective manner. The babies have homozygous deficiency of glutathione S-transferase (GST) M1. The aflatoxin-damaged liver cells and bile duct cells are removed by neutrophil elastase and by involvement of immune system mediators such as CCL-2 or MCP-1, tumor necrosis factor (TNF), interleukin-6 (IL-6), TGF-beta, endothelin (ET), and nitric oxide (NO). Among these, TGF-beta is the most important pro-fibrogenic cytokine that can be seen in progressive cirrhosis. The cascade of immune involvement to remove damaged hepatocytes and cholangiocytes ushers in regeneration. Yet in infants with biliary atresia regeneration is defective and results in cirrhosis, as these infants have disrupted p53 and disrupted GSTPi. p53 and GSTPi are responsible for DNA fidelity during regeneration. Hence, these infants develop accelerated cirrhosis and progress to portal hypertension.
Diagnosis
Diagnosis is made by an assessment of history and physical examination, in conjunction with blood tests, a liver biopsy, and ultrasound imaging, and is prompted by prolonged or persistent jaundice with abnormalities in liver function tests. Ultrasound or other forms of imaging, such as radio-isotope liver scans, can also be used, but final confirmation is usually only reached at the time of exploratory surgery.
Differential diagnoses
The differential diagnoses are extensive and include: Alagille syndrome, alpha-1-antitrypsin deficiency, Byler disease (progressive familial intrahepatic cholestasis), Caroli disease, choledochal cyst, cholestasis, congenital cytomegalovirus disease, congenital herpes simplex virus infection, congenital rubella, congenital syphilis, congenital toxoplasmosis, cystic fibrosis, galactosemia, idiopathic neonatal hepatitis, lipid storage disorders, neonatal hemochromatosis, and total parenteral nutrition-associated cholestasis.
Treatment
Most (>95%) infants with biliary atresia will undergo an operation designed to retain and salvage the native liver, restore bile flow, and reduce the level of jaundice. This is known as the Kasai procedure (after Morio Kasai, the Japanese surgeon who first developed the technique) or hepatoportoenterostomy. Although the procedure is not thought of as curative, it may relieve jaundice and stop liver fibrosis, allowing normal growth and development. Published series from Japan, North America, and the UK show that bilirubin levels will fall to normal values in about 50-55% of infants, allowing 40-50% to retain their own liver to reach the age of 5 and 10 years (and beyond). Liver transplantation is an option for those children whose liver function and symptoms fail to respond to a Kasai operation. Recent large-scale studies by Davenport et al </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Binge eating disorder and can you explain it in depth? [/INST] Binge eating disorder (BED) is an eating disorder characterized by frequent and recurrent binge eating episodes with associated negative psychological and social problems, but without the compensatory behaviors common to bulimia nervosa, OSFED, or the binge-purge subtype of anorexia nervosa.
BED is a recently described condition, which was introduced to distinguish binge eating similar to that seen in bulimia nervosa but without the characteristic purging. Individuals who are diagnosed with bulimia nervosa and binge eating disorder exhibit similar patterns of compulsive overeating, neurobiological features of dysfunctional cognitive control and food addiction, and biological and environmental risk factors. Some professionals consider BED to be a milder form of bulimia, with the two conditions on the same spectrum. Binge eating is one of the most prevalent eating disorders among adults, though there tends to be less media coverage and research about the disorder in comparison to anorexia nervosa and bulimia nervosa.
Signs and symptoms
Binge eating is the core symptom of BED; however, not everyone who binge eats has BED. An individual may occasionally binge eat without experiencing many of the negative physical, psychological, or social effects of BED. This may be considered disordered eating rather than a clinical disorder. Precisely defining binge eating can be problematic, however binge eating episodes in BED are generally described as having the following potential features:
Eating much faster than normal, perhaps in a short space of time
Eating until feeling uncomfortably full
Eating a large amount when not hungry
Subjective loss of control over how much or what is eaten
Binges may be planned in advance, involving the purchase of special binge foods, and the allocation of specific time for binging, sometimes at night
Eating alone or secretly due to embarrassment over the amount of food consumed
There may be a dazed mental state during the binge
Not being able to remember what was eaten after the binge
Feelings of guilt, shame or disgust following a food binge
Body image disturbance
In contrast to bulimia nervosa, binge eating episodes are not regularly followed by activities intended to compensate for the amount of food consumed, such as self-induced vomiting, laxative or enema misuse, or strenuous exercise. BED is characterized more by overeating than dietary restriction. Those with BED often have poor body image and frequently diet, but are unsuccessful due to the severity of their binge eating. Obesity is common in persons with BED, as is depression, low self-esteem, stress and boredom. Those with BED are also at risk of non-alcoholic fatty liver disease, menstrual irregularities such as amenorrhea, and gastrointestinal problems such as acid reflux and heartburn.
Causes
As with other eating disorders, binge eating is an "expressive disorder"—a disorder that is an expression of deeper psychological problems. People who have binge eating disorder have been found to have higher weight bias internalization, which includes low self-esteem, unhealthy eating patterns, and general body dissatisfaction. Binge eating disorder commonly develops as a result or side effect of depression, as it is common for people to turn to comfort foods when they are feeling down. There was resistance to giving binge eating disorder the status of a fully fledged eating disorder because many perceived it to be caused by individual choices. Previous research has focused on the relationship between body image and eating disorders, and concludes that disordered eating might be linked to rigid dieting practices. In the majority of cases of anorexia, extreme and inflexible restriction of dietary intake leads at some point to the development of binge eating, weight regain, bulimia nervosa, or a mixed form of eating disorder not otherwise specified. Binge eating may begin when individuals recover from the adoption of rigid eating habits. When under a strict diet that mimics the effects of starvation, the body may be preparing for a new type of behavior pattern, one that consumes a large amount of food in a relatively short period of time. Some studies show that BED aggregates in families and could be genetic; however, very few published studies on its genetics exist. Other research suggests that binge eating disorder can also be caused by environmental factors and the impact of traumatic events. One study showed that women with binge eating disorder experienced more adverse life events in the year prior to the onset of the disorder, and that binge eating disorder was positively associated with how frequently negative events occur. Additionally, the research found that individuals who had binge eating disorder were more likely to have experienced physical abuse, perceived risk of physical abuse, stress, and body criticism. Other risk factors may include childhood obesity, critical comments about weight, low self-esteem, depression, and physical or sexual abuse in childhood. A systematic review concluded that bulimia nervosa and binge eating disorder are more impacted by family separations, a loss in their lives, and negative parent-child interactions compared to those with anorexia nervosa. A few studies have suggested that there could be a genetic component to binge eating disorder, though other studies have shown more ambiguous results. Studies have shown that binge eating tends to run in families, and a twin study by Bulik, Sullivan, and Kendler has shown a "moderate heritability for binge eating" of 41 percent. More research must be done before any firm conclusions can be drawn regarding the heritability of binge eating disorder. Studies have also shown that eating disorders such as anorexia and bulimia reduce coping abilities, which makes it more likely for those suffering to turn to binge eating as a coping strategy. A correlation between dietary restraint and the occurrence of binge eating has been shown in some research. While binge eaters are often believed to be lacking in self-control, the root of such behavior might instead be linked to rigid dieting practices. The relationship between strict dieting and binge eating is characterized by a vicious circle. Binge eating is more likely to occur after dieting, and vice versa.
Several forms of dieting include delay in eating (e.g., not eating during the day), restriction of overall calorie intake (e.g., setting a calorie limit of 1,000 calories per day), and avoidance of certain types of food (e.g., "forbidden" foods, such as sugar, carbohydrates, etc.). Strict and extreme dieting differs from ordinary dieting. Some evidence suggests the effectiveness of moderate calorie restriction in decreasing binge eating episodes among overweight individuals with binge eating disorder, at least in the short term. In the U.S., it is estimated that 3.5% of young women and 30% to 40% of people who seek weight-loss treatments can be clinically diagnosed with binge eating disorder.
Diagnosis
International Classification of Diseases
BED was first included in the Diagnostic and Statistical Manual of Mental Disorders (DSM) in 1994 simply as a feature of eating disorder. In 2013 it gained formal recognition as a psychiatric condition in the DSM-5. The 2017 update to the American version of the ICD-10 includes BED under F50.81. ICD-11 may contain a dedicated entry (6B62), defining BED as frequent, recurrent episodes of binge eating (once a week or more over a period of several months) which are not regularly followed by inappropriate compensatory behaviors aimed at preventing weight gain.
Diagnostic and Statistical Manual
Previously considered a topic for further research exploration, binge eating disorder was included in the Diagnostic and Statistical Manual of Mental Disorders in 2013. Until 2013, binge eating disorder was categorized as an Eating Disorder Not Otherwise Specified, an umbrella category for eating disorders that don't fall under the categories for anorexia nervosa or bulimia nervosa. Because it was not a recognized psychiatric disorder in the DSM until 2013, it has been difficult to obtain insurance reimbursement for treatments. The disorder now has its own category under DSM-5, which outlines the signs and symptoms that must be present to classify a person's behavior as binge eating disorder. Studies have confirmed the high predictive value of these criteria for diagnosing BED. According to the World Health Organization's ICD-11 classification of BED, the severity of the disorder can be classified as mild (1-3 episodes/week), moderate (4-7 episodes/week), severe (8-13 episodes/week), and extreme (14 or more episodes/week). One study claims that the method for diagnosing BED is for a clinician to conduct a structured interview using the DSM-5 criteria or taking the Eating Disorder Examination. The Structured Clinical Interview takes no more than 75 minutes to complete and has a systematic approach which follows the DSM-5 criteria. The Eating Disorder Examination is a semi-structured interview which identifies the frequency of binges and associated eating disorder features.
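The severity bands above amount to a simple threshold lookup. A minimal sketch — the function name and its labels are illustrative, not part of any clinical instrument; only the thresholds come from the text:

# Minimal sketch mapping weekly binge-eating episodes to the severity
# bands listed above; cutoffs are taken from the text.
def bed_severity(episodes_per_week: int) -> str:
    if episodes_per_week < 1:
        return "below diagnostic frequency"
    if episodes_per_week <= 3:
        return "mild"
    if episodes_per_week <= 7:
        return "moderate"
    if episodes_per_week <= 13:
        return "severe"
    return "extreme"        # 14 or more episodes per week

print(bed_severity(5))      # prints: moderate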
Treatment
Counselling and certain medications, such as lisdexamfetamine and selective serotonin reuptake inhibitors (SSRIs), may help. Some recommend a multidisciplinary approach in the treatment of the disorder.
Counselling
Cognitive behavioral therapy (CBT) has been demonstrated to be a more effective form of treatment for BED than behavioral weight loss programs. 50 percent of individuals with BED achieve complete remission from binge eating, and 68-90% will reduce the number of binge eating episodes they have. CBT has also been shown to be an effective method to address self-image issues and psychiatric comorbidities (e.g., depression) associated with the disorder. The goal of CBT is to interrupt binge-eating behaviour, learn to create a normal eating schedule, change the perception around weight and shape, and develop positive attitudes about one's body. Although this treatment is successful in eliminating binge eating episodes, it does not lead to losing any weight. Recent reviews have concluded that psychological interventions such as psychotherapy and behavioral interventions are more effective than pharmacological interventions for the treatment of binge eating disorder. A meta-analysis concluded that psychotherapy based on CBT not only significantly improved binge-eating symptomatology but also significantly reduced a client's BMI at posttreatment and at follow-ups of 6 and 12 months or longer after treatment. Twelve-step programs such as Overeaters Anonymous and Food Addicts in Recovery Anonymous also exist. Behavioral weight loss treatment has been proven to be effective as a means to achieve weight loss amongst patients.
Medication
Lisdexamfetamine is a USFDA-approved drug that is used for the treatment of moderate to severe binge eating disorder in adults. As of 2021, it is the first and only medication formally approved for the treatment of BED. It is thought that lisdexamfetamine treats BED through a combination of effects on appetite and satiety, reward, and cognitive processes, including attention, impulsivity, and behavioral inhibition. Three other classes of medications are also used in the treatment of binge eating disorder: antidepressants, anticonvulsants, and anti-obesity medications. Antidepressant medications of the selective serotonin reuptake inhibitor (SSRI) class have been found to effectively reduce episodes of binge eating and reduce weight. Similarly, anticonvulsant medications such as topiramate and zonisamide may be able to effectively suppress appetite. The long-term effectiveness of medication for binge eating disorder is currently unknown. For BED patients with manic episodes, risperidone is recommended. If BED patients have bipolar depression, lamotrigine is appropriate to use. Trials of antidepressants, anticonvulsants, and anti-obesity medications suggest that these medications are superior to placebo in reducing binge eating. Medications are not considered the treatment of choice because psychotherapeutic approaches, such as CBT, are more effective than medications for binge eating disorder. A meta-analysis concluded that using medications did not reduce binge-eating episodes and BMI posttreatment at 6–12 months. This indicates a potential possibility of relapse after withdrawal from the medications. Medications also do not increase the effectiveness of psychotherapy, though some patients may benefit from anticonvulsant and anti-obesity medications, such as phentermine/topiramate, for weight loss. Blocking opioid receptors leads to less food intake. Additionally, bupropion and naltrexone used together may cause weight loss. Combining these alongside psychotherapies like CBT may lead to better outcomes for BED.
Surgery
Bariatric surgery has been proposed as another approach to treating BED, and a recent meta-analysis showed that approximately two-thirds of individuals who seek this type of surgery for weight-loss purposes have BED. Bariatric surgery recipients who had BED prior to the surgery tend to have poorer weight-loss outcomes and are more likely to continue to exhibit eating behaviors characteristic of BED.
Lifestyle interventions
Other treatments for BED include lifestyle interventions like weight training, peer support groups, and investigation of hormonal abnormalities.
Prognosis
Individuals with BED often have a lower overall quality of life and commonly experience social difficulties. Early behavior change is an accurate predictor of later remission of symptoms. Individuals who have BED commonly have other comorbidities such as major depressive disorder, personality disorder, bipolar disorder, substance abuse, body dysmorphic disorder, kleptomania, irritable bowel syndrome, fibromyalgia, or an anxiety disorder. Individuals may also exhibit varying degrees of panic attacks and a history of attempted suicide. While people of a normal weight may overeat occasionally, an ongoing habit of consuming large amounts of food in a short period of time may ultimately lead to weight gain and obesity. The main physical health consequences of this type of eating disorder are brought on by the weight gain resulting from calorie-laden bingeing episodes. Mental and emotional consequences of binge eating disorder include social weight stigma and emotional loss of control. Up to 70% of individuals with BED may also be obese, and therefore obesity-associated morbidities such as high blood pressure, coronary artery disease, type 2 diabetes mellitus, gastrointestinal issues (e.g., gallbladder disease), high cholesterol levels, musculoskeletal problems, and obstructive sleep apnea may also be present.
Epidemiology
General
The prevalence of BED in the general population is approximately 1–3%. BED usually first occurs between the ages of 12.4 and 24.7 years, but prevalence rates increase until the age of 40. Binge eating disorder is the most common eating disorder in adults. The limited amount of research that has been done on BED shows that rates of binge eating disorder are fairly comparable among men and women. The lifetime prevalence of binge eating disorder has been observed in studies to be 2.0 percent for men and 3.5 percent for women, higher than that of the commonly recognized eating disorders anorexia nervosa and bulimia nervosa. However, another systematic literature review found the average prevalence to be about 2.3% in women and about 0.3% in men. Lifetime prevalence rates for BED in women can range anywhere from 1.5 to 6 times higher than in men. One literature review found that point prevalence rates for BED vary from 0.1 percent to 24.1 percent depending on the sample; the same review also found that 12-month prevalence rates vary between 0.1 percent and 8.8 percent. Recent studies found that eating disorders, including anorexia nervosa, bulimia nervosa, and binge eating disorder, are common among sexual and gender minority populations, including gay, lesbian, bisexual, and transgender people. This could be due to the minority stress and discrimination this population experiences. Due to limited and inconsistent information and research on ethnic and racial differences, prevalence rates are hard to determine for BED. Rates of binge eating disorder have been found to be similar among black women, white women, and white men, while some studies have shown that binge eating disorder is more common among black women than among white women. However, the majority of research done on BED has focused on white women. One literature review found reports of no difference in BED prevalence among Hispanic, African American, and white women, while other reports found that BED prevalence was highest among Hispanic individuals, followed by black and then white individuals.
Worldwide prevalence
Eating disorders have usually been considered specific to Western countries; however, their prevalence is increasing in non-Western countries as well. Though research on binge eating disorder tends to be concentrated in North America, the disorder occurs across cultures. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Blastic plasmacytoid dendritic cell neoplasm and can you explain it in depth? [/INST] Blastic plasmacytoid dendritic cell neoplasm (BPDCN) is a rare hematologic malignancy. It was initially regarded as a form of lymphocyte-derived cutaneous lymphoma and alternatively named CD4+CD56+ hematodermic tumor, blastic NK cell lymphoma, and agranular CD4+ NK cell leukemia. Later, however, the disease was determined to be a malignancy of plasmacytoid dendritic cells rather than lymphocytes and therefore termed blastic plasmacytoid dendritic cell neoplasm. In 2016, the World Health Organization designated BPDCN to be in its own separate category within the myeloid class of neoplasms. It is estimated that BPDCN constitutes 0.44% of all hematological malignancies. Blastic plasmacytoid dendritic cell neoplasm is an aggressive malignancy with features of cutaneous lymphoma (e.g. malignant plasmacytoid dendritic cell infiltrations into the skin to form single or multiple lesions) and/or leukemia (i.e. malignant plasmacytoid dendritic cells in blood and bone marrow). While commonly presenting with these clinical features, BPDCN, particularly in its more advanced stages, may also involve malignant plasmacytoid dendritic cell infiltration of, and thereby injury to, the liver, spleen, lymph nodes, central nervous system, or other tissues. The neoplasm occurs in individuals of all ages but predominates in the elderly; in children, it afflicts males and females equally, but in adults it is far more common (~75% of cases) in males. Blastic plasmacytoid dendritic cell neoplasm typically responds to chemotherapy regimens used to treat hematological malignancies. All too often, however, the disease rapidly recurs and does so in a more drug-resistant form. Furthermore, the disease may occur in association with the myelodysplastic syndrome or transform to acute myeloid leukemia. Consequently, BPDCN has a very low 5-year survival rate. Current translational research studies on treating BPDCN have therefore focused on non-chemotherapeutic regimens that target the molecular pathways which may promote the disease.
Presentation
Blastic plasmacytoid dendritic cell neoplasm occurs in children, including neonates, but is more common in adults, particularly those between the ages 60–80. BPDCN usually (i.e. 61% to 90% of cases) presents with skin lesions, i.e. nodules, tumors, red or purple papules, bruise-like patches, and/or ulcers that most often occur on the head, face, and upper torso. The lesions are due to diffuse infiltrations of the skin by malignant pDC. In one large study, this presentation was accompanied by swollen lymph nodes, usually in the neck, due to malignant pDC infiltrations (~50% of cases); enlarged liver (~16% of cases) and/or spleen (26% of cases), also due to malignant pDC infiltrations; increased levels of malignant pDC in blood (i.e. >2% of nucleated cells) (~40% of cases), bone marrow (~65% of cases) and cerebrospinal fluid (47% of childhood cases but less often detected in adult cases). More advanced or severe cases may present with extreme organ and/or lymph node enlargements, skin lesions in virtually any site, and clinical evidence of malignant pDC infiltrations in the breasts, eyes, kidneys, lungs, gastrointestinal tract, bone, sinuses, ears, or testes. About 10% of individuals with BPDCN present with a leukemia-like disease, i.e. they exhibit circulating malignant pDC, anemia, thrombocytopenia, and/or leukopenia due to extensive malignant pDC infiltrations in the bone marrow. A leukemic phase of the disease is a common feature of end stage and post-therapy relapsing BPDCN.
Pathophysiology
There are three types of dendritic cells: plasmacytoid dendritic cells (pDC) and two types of conventional dendritic cells (cDC), myeloid cDC1 and myeloid cDC2. pDC circulate in the blood, representing <0.4% of all nucleated blood cells, and are present in various hematological tissues such as lymph nodes and spleen. Their major function is to detect and then initiate immune responses to intracellular pathogens, particularly viruses such as the cold sore-causing Herpes simplex viruses, HIV, and hepatitis viruses, but also bacteria such as the tuberculosis-causing Mycobacterium tuberculosis, fungi such as the aspergillosis-causing Aspergillus fumigatus, and parasites such as the malaria-causing Plasmodium falciparum. Following detection of these intracellular pathogens, pDC initiate immune responses by producing massive amounts of type I and type III interferons as well as by differentiating (i.e. maturing) into conventional dendritic cells that further promote immune responses by, e.g., functioning as antigen-presenting cells. The malignant pDC in BPDCN have the appearance of immature plasmacytoid dendritic cells. They are distinguished from other dendritic, myeloid, lymphoid and NK cell types by exhibiting at least several of the following properties: 1) plasmacytoid morphology; 2) production of large amounts of type I interferons when properly stimulated; 3) ability to differentiate into conventional dendritic cells when properly stimulated; 4) the expression of key marker proteins such as granzyme B, TCF4, interleukin-3 receptor (i.e. CD123), CLEC4C, and Neuropilin; and 5) failure to express certain marker proteins that are commonly expressed by myeloid, lymphoid, and NK cell lineages. Blastic plasmacytoid dendritic cell neoplasm typically arises after the serial acquisition of multiple genetic abnormalities in pDC or their precursor cells. Inactivating mutations (i.e. mutations which cause the gene to make no or a less active product) in the TET2 gene are the most common genetic abnormality in the disease, occurring in 32–67% of all BPDCN cases and often accompanied by mutations in either the NPM1 or SRSF2 gene. Numerous other genetic abnormalities are associated with the disease: 1) mutations in NRAS, ASXL1, and TP53; 2) deletions of the CDKN2A-ARF-CDKN2B locus on the short arm of chromosome 9, CDKN1B locus on the short arm of chromosome 12, RB1 locus on the long arm of chromosome 13, or NRC1 locus on the long arm of chromosome 5; 3) fusions of KMT2A on the long arm of chromosome 11 with MLLT1 on the short arm of chromosome 10, SUPT3H on the short arm of chromosome 6 with MYC on the long arm of chromosome 8, or KMT2A on the long arm of chromosome 11 with MLLT1 on the long arm of chromosome 19; and 4) duplication or loss of entire chromosomes, particularly chromosomes 9, 13, or 15. Laboratory studies indicate that malignant pDC have a pathologically overactive NF-κB pathway that promotes their survival and production of various cytokines that stimulate their own proliferation. Presumably, these genetic abnormalities lead to the activation of the NF-κB pathway and/or other cellular activation pathways which promote the survival, proliferation, and/or other malignant phenotypic traits in pDC and thereby cause BPDCN.
Diagnosis
BPDCN is suggested by a biopsy of skin lesions which reveals the infiltration by medium-sized blast (i.e. immature) cells into the dermis while sparing the epidermis. These cells exhibit irregular nuclei, fine chromatin, and at least one small nucleolus. Such blast cells may also be observed in the circulation, bone marrow, or other tissues and suggest BPDCN. However, the diagnosis of this disease requires determination that these cells are pDC blast cells rather than AML, T-cell lymphoblastic lymphoma (TCLL), or aggressive NK-cell leukemia (NKL) blast cells. Various studies have offered similar but not identical criteria to make this determination. All studies agree that pDC should have a typical plasmacytoid morphology and express a particular profile of marker proteins as detected by immunoassay and/or flow cytometry. However, the studies disagree on which marker proteins to profile. One study's profile assayed 1) CD4, CD56, CD123 (i.e. the interleukin-3 receptor), and TCL1, which are expressed on 80–100% of pDC but are uncommon on AML, TCLL, or NKL blasts; 2) CD2AP and CLEC4C, which are unique to pDC; and 3) myeloperoxidase, lysozyme, CD34, CD14, CD11c, and CD163, which are unique to AML, TCLL, or NKL blasts. Two other studies recommended assaying somewhat different sets of marker proteins.
Treatment
There have been no controlled studies to define the optimal treatment for BPDCN. Studies on small numbers of individuals with the disease have found that the standard chemotherapy regimens used for the initial induction treatments of AML, acute lymphoblastic leukemia, and high-grade lymphoma give complete remission rates of 77%, 93%, and 80%, respectively, in childhood BPDCN and 47%, 77%, and 53%, respectively, in adult BPDCN. However, these remissions were short-lived: post-treatment mean times to relapse or death were 12 months for children and 6.8 months for adults. Given these poor remission and survival rates, other treatments have been added to the initial treatment regimens. Studies have shown that the addition of intrathecally administered drugs (administered directly into the spinal canal) as prophylaxis prolongs the period of CNS-free disease and increases overall survival. Hematopoietic stem cell transplantation following initial chemotherapy-induced remission also prolongs these remissions and, it is suggested, offers potential for curing the disease. (A graft-versus-leukemia effect may have contributed to the benefits seen after transplantation.) Studies have not yet determined whether allogeneic (i.e. taken from others) or autologous (i.e. taken from self) stem cells achieve better results, although one retrospective study in Japan found that autologous stem cells gave significantly better overall and progression-free survival rates. A phase I clinical research study to test the safety and efficacy of a combination chemotherapy regimen consisting of methotrexate, L-asparaginase, idarubicin, and dexamethasone followed by allogeneic or autologous bone marrow transplantation in 26 participants newly diagnosed with BPDCN is planned but not yet in its recruiting phase. While few studies have reported on the treatment of BPDCN that has recurred following initial therapy, donor lymphocyte infusions coupled with alternative chemotherapy treatments have induced second complete or partial remissions in a few patients.
Tagraxofusp-erzs
Tagraxofusp-erzs (trade name Elzonris; formerly SL-401 and DT388-IL3) was approved in the United States in December 2018 for the treatment of BPDCN. Tagraxofusp-erzs is a fusion protein consisting of interleukin 3 (i.e. IL-3) fused to diphtheria toxin. The fusion protein readily kills cultured pDC by binding to their IL-3 receptors, thereby gaining entrance to the cells, and then blocking these cells' protein synthesis (due to diphtheria toxin-mediated inhibition of eukaryotic elongation factor 2).
Prognosis
Due to the high rates of recurrence following initial therapy and the short overall survival times of individuals with BPDCN, the prognosis of the disease is poor. However, further study of treatment regimens that include intrathecal chemotherapy and hematological stem cell transplantation in initial treatment regimens (see previous section) and newer non-chemotherapeutic drug treatments (see next section) may improve this situation.
Research
UCART123
UCART123 are chimeric T cell receptor-bearing cells, i.e </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Osteomyelitis and can you explain it in depth? [/INST] Osteomyelitis (OM) is an infection of bone. Symptoms may include pain in a specific bone with overlying redness, fever, and weakness. The long bones of the arms and legs are most commonly involved in children, e.g. the femur and humerus, while the feet, spine, and hips are most commonly involved in adults. The cause is usually a bacterial infection, but it can rarely be a fungal infection. It may occur by spread from the blood or from surrounding tissue. Risks for developing osteomyelitis include diabetes, intravenous drug use, prior removal of the spleen, and trauma to the area. Diagnosis is typically suspected based on symptoms and basic laboratory tests such as C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR). This is because plain radiographs are unremarkable in the first few days following acute infection. Diagnosis is further confirmed by blood tests, medical imaging, or bone biopsy. Treatment of bacterial osteomyelitis often involves both antimicrobials and surgery. In people with poor blood flow, amputation may be required. Treatment of the relatively rare fungal osteomyelitis, such as mycetoma infections, entails the use of antifungal medications. In contrast to bacterial osteomyelitis, amputation or large bony resections are more common in neglected fungal osteomyelitis, namely mycetoma, where infections of the foot account for the majority of cases. Treatment outcomes of bacterial osteomyelitis are generally good when the condition has only been present a short time. About 2.4 per 100,000 people are affected each year. The young and old are more commonly affected. Males are more commonly affected than females. The condition was described at least as early as the 300s BC by Hippocrates. Prior to the availability of antibiotics, the risk of death was significant.
Signs and symptoms
Symptoms may include pain in a specific bone with overlying redness, fever, weakness, and an inability to walk, especially in children with acute bacterial osteomyelitis. Onset may be sudden or gradual. Enlarged lymph nodes may be present.
In fungal osteomyelitis, there is usually a history of walking barefoot, especially in rural and farming areas. Contrary to the mode of infection in bacterial osteomyelitis, which is primarily blood-borne, fungal osteomyelitis starts as a skin infection, then invades deeper tissues until it reaches bone.
Cause
In children, the metaphyses, the ends of long bones, are usually affected. In adults, the vertebrae and the pelvis are most commonly affected. Acute osteomyelitis almost invariably occurs in children who are otherwise healthy, because of the rich blood supply to the growing bones. When adults are affected, it may be because of compromised host resistance due to debilitation, intravenous drug abuse, infectious root-canaled teeth, or other disease or drugs (e.g., immunosuppressive therapy). Osteomyelitis is a secondary complication in 1–3% of patients with pulmonary tuberculosis. In this case, the bacteria, in general, spread to the bone through the circulatory system, first infecting the synovium (due to its higher oxygen concentration) before spreading to the adjacent bone. In tubercular osteomyelitis, the long bones and vertebrae are the ones that tend to be affected. Staphylococcus aureus is the organism most commonly isolated from all forms of osteomyelitis. Bloodstream-sourced osteomyelitis is seen most frequently in children, and nearly 90% of cases are caused by Staphylococcus aureus. In infants, S. aureus, Group B streptococci (most common) and Escherichia coli are commonly isolated; in children from one to 16 years of age, S. aureus, Streptococcus pyogenes, and Haemophilus influenzae are common. In some subpopulations, including intravenous drug users and splenectomized patients, Gram-negative bacteria, including enteric bacteria, are significant pathogens. The most common form of the disease in adults is caused by injury exposing the bone to local infection. Staphylococcus aureus is the most common organism seen in osteomyelitis, seeded from areas of contiguous infection, but anaerobes and Gram-negative organisms, including Pseudomonas aeruginosa, E. coli, and Serratia marcescens, are also common. Mixed infections are the rule rather than the exception. Systemic mycotic infections may also cause osteomyelitis. The two most common are Blastomyces dermatitidis and Coccidioides immitis. In osteomyelitis involving the vertebral bodies, about half the cases are due to S. aureus, and the other half are due to tuberculosis (spread hematogenously from the lungs). Tubercular osteomyelitis of the spine was so common before the initiation of effective antitubercular therapy that it acquired a special name, Pott's disease. The Burkholderia cepacia complex has been implicated in vertebral osteomyelitis in intravenous drug users.
Pathogenesis
In general, microorganisms may infect bone through one or more of three basic methods:
Via the bloodstream (haematogenously) – the most common method
From nearby areas of infection (as in cellulitis), or
Penetrating trauma, including iatrogenic causes such as joint replacements or internal fixation of fractures, leading to a fracture-related infection, or secondary periapical periodontitis in teeth.
The area usually affected when the infection is contracted through the bloodstream is the metaphysis of the bone. Once the bone is infected, leukocytes enter the infected area, and, in their attempt to engulf the infectious organisms, release enzymes that lyse the bone. Pus spreads into the bone's blood vessels, impairing their flow, and areas of devitalized infected bone, known as sequestra, form the basis of a chronic infection. Often, the body will try to create new bone around the area of necrosis. The resulting new bone is often called an involucrum. On histologic examination, these areas of necrotic bone are the basis for distinguishing between acute osteomyelitis and chronic osteomyelitis. Osteomyelitis is an infective process that encompasses all of the bone (osseous) components, including the bone marrow. When it is chronic, it can lead to bone sclerosis and deformity. Chronic osteomyelitis may be due to the presence of intracellular bacteria. Once intracellular, the bacteria are able to spread to adjacent bone cells. At this point, the bacteria may be resistant to certain antibiotics. These combined factors may explain the chronicity and difficult eradication of this disease, resulting in significant costs and disability, potentially leading to amputation. The presence of intracellular bacteria in chronic osteomyelitis is likely an unrecognized contributing factor in its persistence. In infants, the infection can spread to a joint and cause arthritis. In children, large subperiosteal abscesses can form because the periosteum is loosely attached to the surface of the bone. Because of the particulars of their blood supply, the tibia, femur, humerus, vertebrae, maxilla and the mandibular bodies are especially susceptible to osteomyelitis. Abscesses of any bone, however, may be precipitated by trauma to the affected area. Many infections are caused by Staphylococcus aureus, a member of the normal flora found on the skin and mucous membranes. In patients with sickle cell disease, the most common causative agent is Salmonella, with a relative incidence more than twice that of S. aureus.
Diagnosis
The diagnosis of osteomyelitis is complex and relies on a combination of clinical suspicion and indirect laboratory markers such as a high white blood cell count and fever, although confirmation of clinical and laboratory suspicion with imaging is usually necessary. Radiographs and CT are the initial methods of diagnosis, but are not sensitive and only moderately specific for the diagnosis. They can show the cortical destruction of advanced osteomyelitis, but can miss early or indolent disease. Confirmation is most often by MRI. The presence of edema, diagnosed as increased signal on T2 sequences, is sensitive, but not specific, as edema can occur in reaction to adjacent cellulitis. Confirmation of bone marrow and cortical destruction by viewing the T1 sequences significantly increases specificity. The administration of intravenous gadolinium-based contrast enhances specificity further. In certain situations, such as severe Charcot arthropathy, diagnosis with MRI is still difficult. Similarly, it is limited in distinguishing avascular necrosis from osteomyelitis in sickle cell anemia. Nuclear medicine scans can be a helpful adjunct to MRI in patients who have metallic hardware that limits or prevents effective magnetic resonance. Generally, a triple-phase technetium-99m-based scan will show increased uptake on all three phases. Gallium scans are 100% sensitive for osteomyelitis but not specific, and may be helpful in patients with metallic prostheses. Combined WBC imaging with marrow studies has 90% accuracy in diagnosing osteomyelitis. Diagnosis of osteomyelitis is often based on radiologic results showing a lytic center with a ring of sclerosis. Culture of material taken from a bone biopsy is needed to identify the specific pathogen; alternative sampling methods such as needle puncture or surface swabs are easier to perform, but cannot be trusted to produce reliable results. Factors that may commonly complicate osteomyelitis are fractures of the bone, amyloidosis, endocarditis, or sepsis.
Classification
The definition of OM is broad, and encompasses a wide variety of conditions. Traditionally, the length of time the infection has been present and whether there is suppuration (pus formation) or osteosclerosis (pathological increased density of bone) are used to arbitrarily classify OM. Chronic OM is often defined as OM that has been present for more than one month. In reality, there are no distinct subtypes; instead, there is a spectrum of pathologic features that reflects a balance between the type and severity of the cause of the inflammation, the immune system and local and systemic predisposing factors.
Suppurative osteomyelitis
Acute suppurative osteomyelitis
Chronic suppurative osteomyelitis
Primary (no preceding phase)
Secondary (follows an acute phase)
Non-suppurative osteomyelitis
Diffuse sclerosing
Focal sclerosing (condensing osteitis)
Proliferative periostitis (periostitis ossificans, Garré's sclerosing osteomyelitis)
Osteoradionecrosis
OM can also be typed according to the area of the skeleton in which it is present. For example, osteomyelitis of the jaws is different in several respects from osteomyelitis present in a long bone. Vertebral osteomyelitis is another possible presentation.
Treatment
Osteomyelitis often requires prolonged antibiotic therapy for weeks or months. A PICC line or central venous catheter can be placed for long-term intravenous medication administration. Some studies of children with acute osteomyelitis report that antibiotics by mouth may be justified due to PICC-related complications. It may require surgical debridement in severe cases, or even amputation. Antibiotics given by mouth and intravenously appear similarly effective. Due to insufficient evidence, it is unclear what the best antibiotic treatment is for osteomyelitis in people with sickle cell disease as of 2019. Initial first-line antibiotic choice is determined by the patient's history and regional differences in common infective organisms. A treatment lasting 42 days is practiced in a number of facilities. Local and sustained availability of drugs has proven to be more effective in achieving prophylactic and therapeutic outcomes. Open surgery is needed for chronic osteomyelitis, whereby the involucrum is opened and the sequestrum is removed, or sometimes saucerization can be done. Hyperbaric oxygen therapy has been shown to be a useful adjunct to the treatment of refractory osteomyelitis. Before the widespread availability and use of antibiotics, blow fly larvae were sometimes deliberately introduced to the wounds to feed on the infected material, effectively scouring them clean. There is tentative evidence that bioactive glass may also be useful in long bone infections. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Botulism and can you explain it in depth? [/INST] Botulism is a rare and potentially fatal illness caused by a toxin produced by the bacterium Clostridium botulinum. The disease begins with weakness, blurred vision, feeling tired, and trouble speaking. This may then be followed by weakness of the arms, chest muscles, and legs. Vomiting, swelling of the abdomen, and diarrhea may also occur. The disease does not usually affect consciousness or cause a fever.
Botulism can be spread in several ways. The bacterial spores which cause it are common in both soil and water. They produce the botulinum toxin when exposed to low oxygen levels and certain temperatures. Foodborne botulism happens when food containing the toxin is eaten. Infant botulism happens when the bacteria develop in the intestines and release the toxin. This typically only occurs in children less than six months old, as protective mechanisms develop after that time. Wound botulism is found most often among those who inject street drugs. In this situation, spores enter a wound, and in the absence of oxygen, release the toxin. It is not passed directly between people. The diagnosis is confirmed by finding the toxin or bacteria in the person in question.
Prevention is primarily by proper food preparation. The bacteria, though not the spores, are destroyed by heating to more than 85 °C (185 °F) for longer than 5 minutes. Honey can contain the organism, and for this reason, honey should not be fed to children under 12 months. Treatment is with an antitoxin. In those who lose their ability to breathe on their own, mechanical ventilation may be necessary for months. Antibiotics may be used for wound botulism. Death occurs in 5 to 10% of people. Botulism also affects many other animals. The word is from the Latin botulus, meaning sausage.
Signs and symptoms
The muscle weakness of botulism characteristically starts in the muscles supplied by the cranial nerves—a group of twelve nerves that control eye movements, the facial muscles and the muscles controlling chewing and swallowing. Double vision, drooping of both eyelids, loss of facial expression and swallowing problems may therefore occur. In addition to affecting the voluntary muscles, it can also cause disruptions in the autonomic nervous system. This is experienced as a dry mouth and throat (due to decreased production of saliva), postural hypotension (decreased blood pressure on standing, with resultant lightheadedness and risk of blackouts), and eventually constipation (due to decreased forward movement of intestinal contents). Some of the toxins (B and E) also precipitate nausea, vomiting, and difficulty with talking. The weakness then spreads to the arms (starting in the shoulders and proceeding to the forearms) and legs (again from the thighs down to the feet). Severe botulism leads to reduced movement of the muscles of respiration, and hence problems with gas exchange. This may be experienced as dyspnea (difficulty breathing), but when severe can lead to respiratory failure, due to the buildup of unexhaled carbon dioxide and its resultant depressant effect on the brain. This may lead to respiratory compromise and death if untreated. Clinicians frequently think of the symptoms of botulism in terms of a classic triad: bulbar palsy and descending paralysis, lack of fever, and clear senses and mental status ("clear sensorium").
Infant botulism
Infant botulism (also referred to as floppy baby syndrome) was first recognized in 1976, and is the most common form of botulism in the United States. Infants are susceptible to infant botulism in the first year of life, with more than 90% of cases occurring in infants younger than six months. Infant botulism results from the ingestion of the C. botulinum spores, and subsequent colonization of the small intestine. The infant gut may be colonized when the composition of the intestinal microflora (normal flora) is insufficient to competitively inhibit the growth of C. botulinum and levels of bile acids (which normally inhibit clostridial growth) are lower than later in life. The growth of the spores releases botulinum toxin, which is then absorbed into the bloodstream and taken throughout the body, causing paralysis by blocking the release of acetylcholine at the neuromuscular junction. Typical symptoms of infant botulism include constipation, lethargy, weakness, difficulty feeding, and an altered cry, often progressing to a complete descending flaccid paralysis. Although constipation is usually the first symptom of infant botulism, it is commonly overlooked. Honey is a known dietary reservoir of C. botulinum spores and has been linked to infant botulism. For this reason, honey is not recommended for infants less than one year of age. Most cases of infant botulism, however, are thought to be caused by acquiring the spores from the natural environment. Clostridium botulinum is a ubiquitous soil-dwelling bacterium. Many infant botulism patients have been demonstrated to live near a construction site or an area of soil disturbance. Infant botulism has been reported in 49 of 50 US states (all save for Rhode Island), and cases have been recognized in 26 countries on five continents.
Complications
Infant botulism typically has no long-term side effects.
Botulism can result in death due to respiratory failure. However, in the past 50 years, the proportion of patients with botulism who die has fallen from about 50% to 7% due to improved supportive care. A patient with severe botulism may require mechanical ventilation (breathing support through a ventilator) as well as intensive medical and nursing care, sometimes for several months. The person may require rehabilitation therapy after leaving the hospital.
Cause
Clostridium botulinum is an anaerobic, Gram-positive, spore-forming rod. Botulinum toxin is one of the most powerful known toxins: about one microgram is lethal to humans when inhaled. It acts by blocking nerve function (neuromuscular blockade) through inhibition of the release of the excitatory neurotransmitter acetylcholine from the presynaptic membrane of neuromuscular junctions in the somatic nervous system. This causes paralysis. Advanced botulism can cause respiratory failure by paralysing the muscles of the chest; this can progress to respiratory arrest. Furthermore, acetylcholine release from the presynaptic membranes of muscarinic nerve synapses is blocked. This can lead to a variety of the autonomic signs and symptoms described above.
In all cases, illness is caused by the botulinum toxin produced by the bacterium C. botulinum in anaerobic conditions and not by the bacterium itself. The pattern of damage occurs because the toxin affects nerves that fire (depolarize) at a higher frequency first. Mechanisms of entry into the human body for botulinum toxin are described below.
Colonization of the gut
The most common form in Western countries is infant botulism. This occurs in infants who are colonized with the bacterium in the small intestine during the early stages of their lives. The bacterium then produces the toxin, which is absorbed into the bloodstream. The consumption of honey during the first year of life has been identified as a risk factor for infant botulism; it is a factor in a fifth of all cases. The adult form of infant botulism is termed adult intestinal toxemia, and is exceedingly rare.
Food
Toxin that is produced by the bacterium in containers of food that have been improperly preserved is the most common cause of food-borne botulism. Fish that has been pickled without the salinity or acidity of brine that contains acetic acid and high sodium levels, as well as smoked fish stored at too high a temperature, presents a risk, as does improperly canned food.
Food-borne botulism results from contaminated food in which C. botulinum spores have been allowed to germinate in low-oxygen conditions. This typically occurs in improperly prepared home-canned food substances and fermented dishes without adequate salt or acidity. Given that multiple people often consume food from the same source, it is common for more than a single person to be affected simultaneously. Symptoms usually appear 12–36 hours after eating, but can also appear within 6 hours to 10 days.
Wound
Wound botulism results from the contamination of a wound with the bacteria, which then secrete the toxin into the bloodstream. This has become more common in intravenous drug users since the 1990s, especially people using black tar heroin and those injecting heroin into the skin rather than the veins. Wound botulism accounts for 29% of cases.
Inhalation
Isolated cases of botulism have been described after inhalation by laboratory workers.
Injection
Symptoms of botulism may occur away from the injection site of botulinum toxin. These may include loss of strength, blurred vision, change of voice, or trouble breathing, which can result in death. Onset can be hours to weeks after an injection. This generally only occurs with inappropriate strengths of botulinum toxin for cosmetic use or due to the larger doses used to treat movement disorders. Following a 2008 review, the FDA added these concerns as a boxed warning.
Mechanism
The toxin is the protein botulinum toxin produced under anaerobic conditions (where there is no oxygen) by the bacterium Clostridium botulinum.
Clostridium botulinum is a large anaerobic Gram-positive bacillus that forms subterminal endospores. There are eight serological varieties of the bacterium, denoted by the letters A to H. The toxin from all of these acts in the same way and produces similar symptoms: the motor nerve endings are prevented from releasing acetylcholine, causing flaccid paralysis and symptoms of blurred vision, ptosis, nausea, vomiting, diarrhea or constipation, cramps, and respiratory difficulty.
Botulinum toxin is divided into eight neurotoxins (labeled as types A, B, C [C1, C2], D, E, F, and G), which are antigenically and serologically distinct but structurally similar. Human botulism is caused mainly by types A, B, E, and (rarely) F. Types C and D cause toxicity only in other animals. In October 2013, scientists released news of the discovery of type H, the first new botulism neurotoxin found in forty years. However, further studies showed type H to be a chimeric toxin composed of parts of types F and A (FA). Some types produce a characteristic putrefactive smell and digest meat (types A and some of B and F); these are said to be proteolytic; type E and some types of B, C, D and F are nonproteolytic and can go undetected because there is no strong odor associated with them. When the bacteria are under stress, they develop spores, which are inert. Their natural habitats are in the soil, in the silt that comprises the bottom sediment of streams, lakes, and coastal waters and ocean, while some types are natural inhabitants of the intestinal tracts of mammals (e.g., horses, cattle, humans), and are present in their excreta. The spores can survive in their inert form for many years. Toxin is produced by the bacteria when environmental conditions are favourable for the spores to replicate and grow, but the gene that encodes for the toxin protein is actually carried by a virus or phage that infects the bacteria. Little is known about the natural factors that control phage infection and replication within the bacteria. The spores require warm temperatures, a protein source, an anaerobic environment, and moisture in order to become active and produce toxin. In the wild, decomposing vegetation and invertebrates combined with warm temperatures can provide ideal conditions for the botulism bacteria to activate and produce toxin that may affect feeding birds and other animals. Spores are not killed by boiling, but botulism is uncommon because special, rarely obtained conditions are necessary for botulinum toxin production from C. botulinum spores, including an anaerobic, low-salt, low-acid, low-sugar environment at ambient temperatures. Botulinum inhibits the release within the nervous system of acetylcholine, a neurotransmitter responsible for communication between motor neurons and muscle cells. All forms of botulism lead to paralysis that typically starts with the muscles of the face and then spreads towards the limbs. In severe forms, botulism leads to paralysis of the breathing muscles and causes respiratory failure. In light of this life-threatening complication, all suspected cases of botulism are treated as medical emergencies, and public health officials are usually involved to identify the source and take steps to prevent further cases from occurring. Botulinum toxin types A, C, and E cleave SNAP-25, ultimately leading to paralysis.
Diagnosis
For botulism in babies, diagnosis should be made based on signs and symptoms. Confirmation of the diagnosis is made by testing of a stool or enema specimen with the mouse bioassay.
In people whose history and physical examination suggest botulism, these clues are often not enough to allow a diagnosis. Other diseases such as Guillain–Barré syndrome, stroke, and myasthenia gravis can appear similar to botulism, and special tests may be needed to exclude these other conditions. These tests may include a brain scan, cerebrospinal fluid examination, nerve conduction test (electromyography, or EMG), and an edrophonium chloride (Tensilon) test for myasthenia gravis. A definite diagnosis can be made if botulinum toxin is identified in the food, stomach or intestinal contents, vomit or feces. The toxin is occasionally found in the blood in peracute cases. Botulinum toxin can be detected by a variety of techniques, including enzyme-linked immunosorbent assays (ELISAs), electrochemiluminescent (ECL) tests and mouse inoculation or feeding trials. The toxins can be typed with neutralization tests in mice. In toxicoinfectious botulism, the organism can be cultured from tissues. On egg yolk medium, toxin-producing colonies usually display surface iridescence that extends beyond the colony.
Prevention
Although the vegetative form of the bacteria is destroyed by boiling, the spore itself is not killed by the temperatures reached with normal sea-level-pressure boiling, leaving it free to grow and again produce the toxin when conditions are right. A recommended prevention measure for infant botulism is to avoid giving honey to infants less than 12 months of age, as botulinum spores are often present. In older children and adults, the normal intestinal bacteria suppress development of C. botulinum. While commercially canned goods are required to undergo a "botulinum cook" in a pressure cooker at 121 °C (250 °F) for 3 minutes, and thus rarely cause botulism, there have been notable exceptions. Two were the 1978 Alaskan salmon outbreak and the 2007 Castleberry's Food Company outbreak. Foodborne botulism is the rarest form, though, accounting for only around 15% of cases in the US, and has more frequently resulted from home-canned foods with low acid content, such as carrot juice, asparagus, green beans, beets, and corn. However, outbreaks of botulism have resulted from more unusual sources. In July 2002, fourteen Alaskans ate muktuk (whale meat) from a beached whale, and eight of them developed symptoms of botulism, two of them requiring mechanical ventilation. Other, much rarer sources of infection (about every decade in the US) include garlic or herbs stored covered in oil without acidification, chili peppers, improperly handled baked potatoes wrapped in aluminum foil, tomatoes, and home-canned or fermented fish.
When canning or preserving food at home, attention should be paid to hygiene, pressure, temperature, refrigeration and storage. When making home preserves, only acidic fruit such as apples, pears, stone fruits and berries should be used. Tropical fruit and tomatoes are low in acidity and must have some acidity added before they are canned. Low-acid foods have pH values higher than 4.6. They include red meats, seafood, poultry, milk, and all fresh vegetables except for most tomatoes. Most mixtures of low-acid and acid foods also have pH values above 4.6 unless their recipes include enough lemon juice, citric acid, or vinegar to make them acidic. Acid foods have a pH of 4.6 or lower. They include fruits, pickles, sauerkraut, jams, jellies, marmalades, and fruit butters. Although tomatoes usually are considered an acid food, some are now known to have pH values slightly above 4.6. Figs also have pH values slightly above 4.6. Therefore, if they are to be canned as acid foods, these products must be acidified to a pH of 4.6 or lower with lemon juice or citric acid. Properly acidified tomatoes and figs are acid foods and can be safely processed in a boiling-water canner. Oils infused with fresh garlic or herbs should be acidified and refrigerated. Potatoes which have been baked while wrapped in aluminum foil should be kept hot until served or refrigerated. Because the botulism toxin is destroyed by high temperatures, home-canned foods are best boiled for 10 minutes before eating. Metal cans containing food in which bacteria are growing may bulge outwards due to gas production from bacterial growth, or the food inside may be foamy or have a bad odor; cans with any of these signs should be discarded. Any container of food that has been heat-treated and is assumed to be airtight but shows signs of not being so, e.g., metal cans with pinprick holes from rust or mechanical damage, should be discarded. Contamination of a canned food solely with C. botulinum may not cause any visual defects to the container, such as bulging. Only assurance of sufficient thermal processing during production, and absence of a route for subsequent contamination, should be used as indicators of food safety.
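The pH 4.6 boundary described above amounts to a simple decision rule. As a minimal illustration (the function name is our own, and the example pH values are illustrative only, chosen to match the statement that tomatoes and figs sit slightly above 4.6), a short Python sketch classifies foods for home canning:

```python
def canning_method(ph):
    """Apply the pH 4.6 rule described above: acid foods (pH <= 4.6) can be
    processed in a boiling-water canner; low-acid foods need pressure canning
    or must first be acidified to pH <= 4.6 with lemon juice or citric acid."""
    return "boiling-water canner" if ph <= 4.6 else "pressure canner, or acidify first"

# Illustrative pH values only (tomatoes and figs are noted above as slightly above 4.6).
for food, ph in [("sauerkraut", 3.5), ("tomatoes", 4.7), ("figs", 4.7), ("green beans", 5.5)]:
    print(f"{food} (pH {ph}): {canning_method(ph)}")
```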
The addition of nitrites and nitrates to processed meats such as ham, bacon, and sausages reduces growth and toxin production of C. botulinum.
Vaccine
Vaccines are under development, but they have disadvantages, and in some cases there are concerns that they may revert to dangerous native activity. As of 2017, work to develop a better vaccine was being carried out, but the US FDA had not approved any vaccine against botulism.
Treatment
Botulism is generally treated with botulism antitoxin and supportive care. Supportive care for botulism includes monitoring of respiratory function. Respiratory failure due to paralysis may require mechanical ventilation for 2 to 8 weeks, plus intensive medical and nursing care. After this time, paralysis generally improves as new neuromuscular connections are formed. In some abdominal cases, physicians may try to remove contaminated food still in the digestive tract by inducing vomiting or using enemas. Wounds should be treated, usually surgically, to remove the source of the toxin-producing bacteria.
Antitoxin
Botulinum antitoxin consists of antibodies that neutralize botulinum toxin in the circulatory system by passive immunization. This prevents additional toxin from binding to the neuromuscular junction, but does not reverse any already inflicted paralysis. In adults, a trivalent antitoxin containing antibodies raised against botulinum toxin types A, B, and E is used most commonly; however, a heptavalent botulism antitoxin has also been developed and was approved by the U.S. FDA in 2013. In infants, horse-derived antitoxin is sometimes avoided for fear of infants developing serum sickness or lasting hypersensitivity to horse-derived proteins. To avoid this, a human-derived antitoxin was developed and approved by the U.S. FDA in 2003 for the treatment of infant botulism. This human-derived antitoxin has been shown to be both safe and effective for the treatment of infant botulism. However, the danger of equine-derived antitoxin to infants has not been clearly established, and one study showed the equine-derived antitoxin to be both safe and effective for the treatment of infant botulism. Trivalent (A, B, E) botulinum antitoxin is derived from equine sources utilizing whole antibodies (Fab and Fc portions). In the United States, this antitoxin is available from the local health department via the CDC. The second antitoxin, heptavalent (A, B, C, D, E, F, G) botulinum antitoxin, is derived from "despeciated" equine IgG antibodies which have had the Fc portion cleaved off, leaving the F(ab')2 portions. This less immunogenic antitoxin is effective against all known strains of botulism where not contraindicated.
Prognosis
The paralysis caused by botulism can persist for 2 to 8 weeks, during which supportive care and ventilation may be necessary to keep the person alive. Botulism can be fatal in 5% to 10% of people who are affected. However, if left untreated, botulism is fatal in 40% to 50% of cases. Infant botulism typically has no long-term side effects but can be complicated by treatment-associated adverse events. The case fatality rate is less than 2% for hospitalized babies.
Epidemiology
Globally, botulism is fairly rare, with approximately 1,000 identified cases yearly.
United States
In the United States, an average of 145 cases are reported each year. Of these, roughly 65% are infant botulism, 20% are wound botulism, and 15% are foodborne. Infant botulism is predominantly sporadic and not associated with epidemics, but great geographic variability exists. From 1974 to 1996, for example, 47% of all infant botulism cases reported in the U.S. occurred in California. Between 1990 and 2000, the Centers for Disease Control and Prevention reported 263 individual foodborne cases from 160 botulism events in the United States with a case-fatality rate of 4%. Thirty-nine percent (103 cases and 58 events) occurred in Alaska, all of which were attributable to traditional Alaska aboriginal foods. In the lower 49 states, home-canned food was implicated in 70 events (~69%), with canned asparagus being the most frequent cause. Two restaurant-associated outbreaks affected 25 people. The median number of cases per year was 23 (range 17–43), and the median number of events per year was 14 (range 9–24). The highest incidence rates occurred in Alaska, Idaho, Washington, and Oregon. All other states had an incidence rate of 1 case per ten million people or less. The number of cases of foodborne and infant botulism has changed little in recent years, but wound botulism has increased because of the use of black tar heroin, especially in California. All data regarding botulism antitoxin releases and laboratory confirmation of cases in the US are recorded annually by the Centers for Disease Control and Prevention and published on their website.
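As a quick arithmetic aside, the stated shares translate into rough annual counts per form of botulism. A minimal Python sketch, using only the figures quoted above:

```python
# Approximate yearly US counts implied by the figures above:
# an average of 145 reported cases, split ~65% infant, ~20% wound, ~15% foodborne.
average_cases = 145
for form, share in [("infant", 0.65), ("wound", 0.20), ("foodborne", 0.15)]:
    print(f"{form}: ~{round(average_cases * share)} cases/year")
# -> infant: ~94, wound: ~29, foodborne: ~22
```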
On 2 July 1971, the U.S. Food and Drug Administration (FDA) released a public warning after learning that a New York man had died and his wife had become seriously ill due to botulism after eating a can of Bon Vivant vichyssoise soup.
Between 31 March and 6 April 1977, 59 individuals developed type B botulism. All who fell ill had eaten at the same Mexican restaurant in Pontiac, Michigan, and had consumed a hot sauce made with improperly home-canned jalapeño peppers, either by adding it to their food, or by eating nachos that had been prepared with the hot sauce. The full clinical spectrum (mild symptomatology with neurologic findings through life-threatening ventilatory paralysis) of type B botulism was documented.
In April 1994, the largest outbreak of botulism in the United States since 1978 occurred in El Paso, Texas. Thirty people were affected; 4 required mechanical ventilation. All ate food from a Greek restaurant. The attack rate among people who ate a potato-based dip was 86% (19/22) compared with 6% (11/176) among people who did not eat the dip (relative risk [RR] = 13.8; 95% confidence interval [CI], 7.6–25.1). The attack rate among people who ate an eggplant-based dip was 67% (6/9) compared with 13% (24/189) among people who did not (RR = 5.2; 95% CI, 2.9–9.5). Botulism toxin type A was detected in patients and in both dips. Toxin formation resulted from holding aluminum foil-wrapped baked potatoes at room temperature, apparently for several days, before they were used in the dips. Food handlers should be informed of the potential hazards caused by holding foil-wrapped potatoes at ambient temperatures after cooking.
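The relative risks and confidence intervals quoted for this outbreak follow directly from the attack rates. As an illustration, a minimal Python sketch (the function and the standard Wald log-scale interval are our own choices, not taken from the original investigation) reproduces the published figures:

```python
from math import exp, log, sqrt

def relative_risk(ill_exp, n_exp, ill_unexp, n_unexp, z=1.96):
    """Relative risk of illness for exposed vs. unexposed groups, with a 95%
    Wald confidence interval computed on the log scale."""
    rr = (ill_exp / n_exp) / (ill_unexp / n_unexp)
    se = sqrt(1 / ill_exp - 1 / n_exp + 1 / ill_unexp - 1 / n_unexp)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Potato-based dip: 19 of 22 who ate it fell ill vs. 11 of 176 who did not.
print(relative_risk(19, 22, 11, 176))  # ~(13.8, 7.6, 25.1), matching the text
# Eggplant-based dip: 6 of 9 vs. 24 of 189.
print(relative_risk(6, 9, 24, 189))    # ~(5.2, 2.9, 9.5)
```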
In 2002, fourteen Alaskans ate muktuk (whale blubber) from a beached whale, resulting in eight of them developing botulism, with two of the affected requiring mechanical ventilation.
Beginning in late June 2007, 8 people contracted botulism poisoning by eating canned food products produced by Castleberry's Food Company in its Augusta, Georgia plant. It was later identified that the Castleberry's plant had serious production problems on a specific line of retorts that had under-processed the cans of food. These issues included broken cooking alarms, leaking water valves and inaccurate temperature devices, all the result of poor management of the company. All of the victims were hospitalized and placed on mechanical ventilation. The Castleberry's Food Company outbreak was the first instance of botulism in commercial canned foods in the United States in over 30 years.
One person died, 21 cases were confirmed, and 10 more were suspected in Lancaster, Ohio, when a botulism outbreak occurred after a church potluck in April 2015. The suspected source was a salad made from home-canned potatoes.
A botulism outbreak occurred in Northern California in May 2017 after 10 people consumed nacho cheese dip served at a gas station in Sacramento County. One man died as a result of the outbreak.
United Kingdom
The largest recorded outbreak of foodborne botulism in the United Kingdom occurred in June 1989. A total of 27 patients were affected; one patient died. Twenty-five of the patients had eaten one brand of hazelnut yogurt in the week before the onset of symptoms. Control measures included the cessation of all yogurt production by the implicated producer, the withdrawal of the firm's yogurts from sale, the recall of cans of the hazelnut conserve, and advice to the general public to avoid the consumption of all hazelnut yogurts.
China
From 1958 to 1983 there were 986 outbreaks of botulism in China involving 4,377 people with 548 deaths.
Qapqal disease
After the Chinese Communist Revolution in 1949, a mysterious plague (named Qapqal disease) was noticed to be affecting several Sibe villages in Qapqal Xibe Autonomous County. It was endemic with distinctive epidemic patterns, yet the underlying cause remained unknown for a long period of time. It caused a number of deaths and forced some people to leave the area. In 1958, a team of experts was sent to the area by the Ministry of Health to investigate the cases. The epidemic survey conducted proved that the disease was primarily type A botulism, with several cases of type B. The team also discovered that the source of the botulinum toxin was local fermented grain and beans, as well as a raw meat food called mi song hu hu. They promoted the improvement of fermentation techniques among local residents, and thus eliminated the disease.
Canada
From 1985 to 2015 there were outbreaks causing 91 confirmed cases of foodborne botulism in Canada, 85% of which were in Inuit communities, especially Nunavik, as well as in First Nations communities on the coast of British Columbia, following consumption of traditionally prepared marine mammal and fish products.
Ukraine
In 2017, there were 70 cases of botulism with 8 deaths in Ukraine. The previous year there were 115 cases with 12 deaths. Most cases were the result of dried fish, a common local drinking snack.
Vietnam
In 2020, several cases of botulism were reported in Vietnam. All of them were related to a product containing contaminated vegetarian pâté. Some patients were put on life support.
Other susceptible species
Botulism can occur in many vertebrates and invertebrates. Botulism has been reported in such species as rats, mice, chickens, frogs, toads, goldfish, aplysia, squid, crayfish, drosophila and leeches. Death from botulism is common in waterfowl; an estimated 10,000 to 100,000 birds die of botulism annually. The disease is commonly called "limberneck". In some large outbreaks, a million or more birds may die. Ducks appear to be affected most often. An enzootic form of duck botulism in the Western USA and Canada is known as "western duck sickness". Botulism also affects commercially raised poultry. In chickens, the mortality rate varies from a few birds to 40% of the flock.
Botulism seems to be relatively uncommon in domestic mammals; however, in some parts of the world, epidemics with up to 65% mortality are seen in cattle. The prognosis is poor in large animals that are recumbent.
In cattle, the symptoms may include drooling, restlessness, incoordination, urine retention, dysphagia, and sternal recumbency. Laterally recumbent animals are usually very close to death. In sheep, the symptoms may include drooling, a serous nasal discharge, stiffness, and incoordination. Abdominal respiration may be observed, and switching of the tail to the side may be seen. As the disease progresses, the limbs may become paralyzed and death may occur.
Phosphorus-deficient cattle, especially in southern Africa, are inclined to ingest bones and carrion containing clostridial toxins and consequently develop lame sickness or lamsiekte.
The clinical signs in horses are similar to those in cattle. The muscle paralysis is progressive; it usually begins at the hindquarters and gradually moves to the front limbs, neck, and head. Death generally occurs 24 to 72 hours after initial symptoms and results from respiratory paralysis. Some foals are found dead without other clinical signs.
Clostridium botulinum type C toxin has been incriminated as the cause of grass sickness, a condition in horses which occurs in rainy and hot summers in Northern Europe. The main symptom is pharynx paralysis. Domestic dogs may develop systemic toxemia after consuming C. botulinum type C exotoxin or spores within bird carcasses or other infected meat, but are generally resistant to the more severe effects of Clostridium botulinum type C. Symptoms include flaccid muscle paralysis, which can lead to death due to cardiac and respiratory arrest. Pigs are relatively resistant to botulism; reported symptoms include anorexia, refusal to drink, vomiting, pupillary dilation, and muscle paralysis. In poultry and wild birds, flaccid paralysis is usually seen in the legs, wings, neck, and eyelids. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Colonoscopy and can you explain it in depth? [/INST] Colonoscopy or coloscopy is the endoscopic examination of the large bowel and the distal part of the small bowel with a CCD camera or a fiber optic camera on a flexible tube passed through the anus. It can provide a visual diagnosis (e.g., ulceration, polyps) and grants the opportunity for biopsy or removal of suspected colorectal cancer lesions.
Colonoscopy can remove polyps smaller than one millimeter. Once polyps are removed, they can be studied with the aid of a microscope to determine if they are precancerous or not. It can take up to 15 years for a polyp to turn cancerous. Colonoscopy is similar to sigmoidoscopy, the difference being related to which parts of the colon each can examine. A colonoscopy allows an examination of the entire colon (1,200–1,500 mm in length). A sigmoidoscopy allows an examination of the distal portion (about 600 mm) of the colon, which may be sufficient because the benefits of colonoscopy for cancer survival have been limited to the detection of lesions in the distal portion of the colon. A sigmoidoscopy is often used as a screening procedure for a full colonoscopy, often done in conjunction with a fecal occult blood test (FOBT). About 5% of these screened patients are referred to colonoscopy. Virtual colonoscopy, which uses 2D and 3D imagery reconstructed from computed tomography (CT) scans or from nuclear magnetic resonance (MR) scans, is also possible, as a totally non-invasive medical test. Virtual colonoscopy does not allow therapeutic maneuvers such as polyp and tumour removal or biopsy, nor visualization of lesions smaller than five millimeters; if a growth or polyp is detected using CT colonography, it would require removal during a standard colonoscopy. Surgeons have used the term pouchoscopy to refer to a colonoscopy of the ileo-anal pouch.
Medical uses
Conditions that call for colonoscopies include gastrointestinal hemorrhage, unexplained changes in bowel habit, and suspicion of malignancy. Colonoscopies are often used to diagnose colon polyps and colon cancer, but are also frequently used to diagnose inflammatory bowel disease. In older patients (sometimes even younger ones) an unexplained drop in hematocrit (one sign of anemia) is an indication that calls for a colonoscopy, usually along with an esophagogastroduodenoscopy (EGD), even if no obvious blood has been seen in the stool (feces). Fecal occult blood testing is a quick test which can be done to check for microscopic traces of blood in the stool. A positive test is almost always an indication to do a colonoscopy. In most cases the positive result is just due to hemorrhoids; however, it can also be due to diverticulosis, inflammatory bowel disease (Crohn's disease, ulcerative colitis), colon cancer, or polyps. Colonic polypectomy has become a routine part of colonoscopy, allowing quick and simple removal of polyps during the procedure, without invasive surgery.
Colon cancer screening
Colonoscopy is one of the colorectal cancer screening tests available to people in the US who are 45 years of age and older. The other screening tests include flexible sigmoidoscopy, double-contrast barium enema, computed tomographic (CT) colonography (virtual colonoscopy), guaiac-based fecal occult blood test (gFOBT), fecal immunochemical test (FIT), and the multitarget stool DNA screening test (Cologuard). Subsequent rescreenings are then scheduled based on the initial results found, with a five- or ten-year recall being common for colonoscopies that produce normal results. People with a family history of colon cancer are often first screened during their teenage years.
Among people who have had an initial colonoscopy that found no polyps, the risk of developing colorectal cancer within five years is extremely low. Therefore, there is no need for those people to have another colonoscopy sooner than five years after the first screening. Some medical societies in the US recommend a screening colonoscopy every 10 years beginning at age 50 for adults without increased risk for colorectal cancer. Research shows that the risk of cancer is low for 10 years if a high-quality colonoscopy does not detect cancer, so tests for this purpose are indicated every ten years. Colonoscopy screening prevents approximately two-thirds of deaths due to colorectal cancers on the left side of the colon, and is not associated with a significant reduction in deaths from right-sided disease. Colonoscopy reduces cancer rates by detecting some colon polyps and cancers on the left side of the colon early enough that they may be treated, and a smaller number on the right side; many of these left-sided growths would have been detected by a sigmoidoscopy procedure. Since polyps often take 10 to 15 years to transform into cancer in someone at average risk of colorectal cancer, guidelines recommend waiting 10 years after a normal screening colonoscopy before the next colonoscopy. (This interval does not apply to people at high risk of colorectal cancer, or to those who experience symptoms of colorectal cancer.) Although widely touted in the US as the "gold standard" of colon cancer screening, colonoscopy has never been studied as a screening tool. Most of the potential benefits of colonoscopy have been extrapolated from randomized trials of sigmoidoscopy. The CONFIRM trial, a randomized trial of colonoscopy vs. FIT, is currently ongoing.
Recommendations
The American Cancer Society recommends that, beginning at age 45, both men and women follow one of these testing schedules for screening to find colon polyps and/or cancer (a minimal lookup sketch follows the list):
Flexible sigmoidoscopy every 5 years, or
Colonoscopy every 10 years, or
Double-contrast barium enema every 5 years, or
CT colonography (virtual colonoscopy) every 5 years, or
Yearly guaiac-based fecal occult blood test (gFOBT), or
Yearly fecal immunochemical test (FIT), or
Stool DNA test (sDNA) every 3 years
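The schedule above is essentially a lookup table. As a minimal sketch, it can be expressed in Python as below; the test names and intervals come from the list, while the dictionary, function, and example year are hypothetical, for illustration only.

```python
# A minimal lookup sketch of the ACS schedule above. Test keys and
# intervals come from the list; the names are hypothetical.
SCREENING_INTERVAL_YEARS = {
    "flexible sigmoidoscopy": 5,
    "colonoscopy": 10,
    "double-contrast barium enema": 5,
    "CT colonography": 5,
    "gFOBT": 1,
    "FIT": 1,
    "stool DNA test": 3,
}

def next_screening_year(test: str, last_screening_year: int) -> int:
    """Year the next screening is due if the last result was normal."""
    return last_screening_year + SCREENING_INTERVAL_YEARS[test]

# Example: a normal screening colonoscopy in 2020 is next due in 2030.
assert next_screening_year("colonoscopy", 2020) == 2030
```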
Medicare coverage
In the United States, Medicare insurance covers the following colorectal-cancer screening tests:
Colonoscopy: average risk — every 10 years beginning at age 50, high risk — every 2 years with no age restriction
Flexible sigmoidoscopy — every 4 years beginning at age 50
Double-contrast barium enema: average risk — every 4 years beginning at age 50, high risk — every 2 years
CT colonography: not covered by Medicare
gFOBT: average risk — every year beginning at age 50
FIT: average risk — every year beginning at age 50
Cologuard: average risk — every 3 years beginning at age 50
Risks
About 1 in 200 people who undergo a colonoscopy experience a serious complication. Perforation of the colon occurs in about 1 in 2,000 procedures, bleeding in 2.6 per 1,000, and death in 3 per 100,000, with an overall risk of serious complications of 0.35%. In some low-risk populations, screening by colonoscopy in the absence of symptoms does not outweigh the risks of the procedure. For example, the odds of developing colorectal cancer between the ages of 20 and 40 in the absence of specific risk factors are about 1 in 1,250 (0.08%). The rate of complications varies with the practitioner and institution performing the procedure, and other variables.
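To make the quoted rates easier to compare, the following sketch simply restates them on a common per-100,000 scale; the rates are taken from the paragraph above, and the script itself is purely illustrative.

```python
# Arithmetic restatement of the complication rates quoted above,
# expressed as expected events per 100,000 procedures.
rates = {
    "any serious complication": 1 / 200,  # "about 1 in 200"
    "perforation": 1 / 2000,              # "about 1 in 2,000"
    "bleeding": 2.6 / 1000,               # "2.6 per 1,000"
    "death": 3 / 100_000,                 # "3 per 100,000"
}

for event, rate in rates.items():
    print(f"{event}: {rate:.3%} = {rate * 100_000:,.0f} per 100,000 procedures")
```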
Perforation
The most serious complication generally is gastrointestinal perforation, which is life-threatening and in most cases requires immediate major surgery for repair. Fewer than 20% of cases may be successfully managed with a conservative (non-surgical) approach. A 2003 analysis of the relative risks of sigmoidoscopy and colonoscopy highlighted that the risk of perforation after colonoscopy is approximately double that after sigmoidoscopy (consistent with the fact that colonoscopy examines a longer section of the colon), a difference that appeared to be decreasing.
Bleeding
Bleeding complications may be treated immediately during the procedure by cauterization using the instrument. Delayed bleeding may also occur at the site of polyp removal up to a week after the procedure, and a repeat procedure can then be performed to treat the bleeding site. Even more rarely, splenic rupture can occur after colonoscopy because of adhesions between the colon and the spleen.
Anaesthesia
As with any procedure involving anaesthesia, other complications include cardiopulmonary events such as a temporary drop in blood pressure and oxygen saturation, usually the result of overmedication; these are easily reversed. Anesthesia can also increase the risk of developing blood clots and lead to pulmonary embolism or deep venous thrombosis (DVT). In rare cases, more serious cardiopulmonary events such as a heart attack, stroke, or even death may occur; these are extremely rare except in critically ill patients with multiple risk factors. In rare cases, coma associated with anesthesia may occur.
Bowel preparation
Dehydration caused by the laxatives that are usually administered during the bowel preparation for colonoscopy may also occur. Therefore, patients must drink large amounts of fluids during the day of colonoscopy preparation to prevent dehydration. Loss of electrolytes or dehydration is a potential risk that can even prove deadly. In rare cases, severe dehydration can lead to kidney damage or dysfunction in the form of phosphate nephropathy.
Other
Virtual colonoscopies carry risks that are associated with radiation exposure. Colonoscopy preparation and the colonoscopy procedure can cause inflammation of the bowels and diarrhea or bowel obstruction. During colonoscopies where a polyp is removed (a polypectomy), the risk of complications has been higher, although still low at about 2.3 percent. One of the most serious complications that may arise after colonoscopy is postpolypectomy syndrome. This syndrome occurs due to potential burns to the bowel wall when the polyp is removed, and may cause fever and abdominal pain. It is a rare complication, treated with intravenous fluids and antibiotics.
Bowel infections are a potential colonoscopy risk, although rare. The colon is not a sterile environment; many bacteria that normally live in the colon ensure the well-functioning of the bowel, and the risk of infections is minimal. Infections can occur during biopsies when too much tissue is removed and bacteria intrude into areas where they do not belong, or in cases where the lining of the colon is perforated and the bacteria enter the abdominal cavity. Infection may also be transmitted between patients if the colonoscope is not cleaned and sterilized properly between tests.
Minor colonoscopy risks may include nausea, vomiting, or allergies to the sedatives that are used. If medication is given intravenously, the vein may become irritated. Most localized irritations to the vein leave a tender lump that lasts a number of days before eventually resolving. The incidence of these complications is less than 1%.
On rare occasions, intracolonic explosion may occur; meticulous bowel preparation is the key to preventing this complication. Signs of complications include severe abdominal pain, fevers and chills, or rectal bleeding (more than half a cup or 100 ml).
Procedure
Preparation
The colon must be free of solid matter for the test to be performed properly. For one to three days, the patient is required to follow a low-fiber or clear-liquid-only diet. Examples of clear fluids are apple juice, chicken and/or beef broth or bouillon, lemon-lime soda, lemonade, sports drinks, and water. It is important that the patient remains hydrated. Sports drinks contain electrolytes, which are depleted during the purging of the bowel. Drinks containing fiber such as prune and orange juice should not be consumed, nor should liquids dyed red, purple, orange, or sometimes brown; however, cola is allowed. In most cases, tea or coffee taken without milk is allowed. The day before the colonoscopy, the patient is either given a laxative preparation (such as bisacodyl, phospho soda, sodium picosulfate, or sodium phosphate and/or magnesium citrate) and large quantities of fluid, or whole bowel irrigation is performed using a solution of polyethylene glycol and electrolytes. The procedure may involve both a pill-form laxative and a bowel irrigation preparation with the polyethylene glycol powder dissolved into any clear liquid, such as a sports drink that contains electrolytes.
A typical regimen would then be as follows: in the morning of the day before the procedure, a 238 g bottle of polyethylene glycol powder is poured into 1.9 litres (64 oz) of the chosen clear liquid, which is then mixed and refrigerated. Two bisacodyl 5 mg tablets are taken at 3 pm; at 5 pm, the patient starts drinking the mixture (approx. 8 oz (240 ml) every 15–30 minutes until finished); at 8 pm, two more bisacodyl 5 mg tablets are taken; drinking/hydrating continues into the evening until bedtime with clear permitted fluids. The procedure may be scheduled early in the day so the patient need not go without food and only limited fluids until later. The goal of the preparation is to clear the colon of solid matter, and the patient may be advised to spend the day at home with ready access to toilet facilities. The patient may also want to have at hand moist towelettes or a bidet for cleaning the anus. A soothing salve such as petroleum jelly applied after cleaning the anus will reduce discomfort.
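As a minimal sketch, the timetable above can be laid out as data, which makes the sequencing explicit. The doses and times are taken from the text; the structure is illustrative only and is not medical guidance.

```python
# The split-dose preparation timetable described above, laid out as data.
# Doses and times come from the text; this sketch is not medical guidance --
# actual instructions come from the prescribing clinician.
PREP_SCHEDULE = [
    ("morning, day before", "Mix 238 g polyethylene glycol powder into 1.9 L "
                            "(64 oz) of a permitted clear liquid; refrigerate."),
    ("3 pm",                "Take two bisacodyl 5 mg tablets."),
    ("5 pm",                "Start drinking the mixture, about 8 oz (240 ml) "
                            "every 15-30 minutes until finished."),
    ("8 pm",                "Take two more bisacodyl 5 mg tablets."),
    ("evening",             "Keep hydrating with permitted clear fluids "
                            "until bedtime."),
]

for when, step in PREP_SCHEDULE:
    print(f"{when:>20}: {step}")
```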
The patient may be asked not to take aspirin or similar products such as salicylate, ibuprofen, etc. for up to ten days before the procedure to avoid the risk of bleeding if a polypectomy is performed during the procedure. A blood test may be performed before the procedure.
Investigation
During the procedure the patient is often given sedation intravenously, employing agents such as fentanyl or midazolam. Although meperidine (Demerol) may be used as an alternative to fentanyl, the concern of seizures has relegated this agent to second choice for sedation behind the combination of fentanyl and midazolam. The average person will receive a combination of these two drugs, usually between 25 and 100 µg IV fentanyl and 1–4 mg IV midazolam. Sedation practices vary between practitioners and nations; in some clinics in Norway, sedation is rarely administered. Some endoscopists are experimenting with, or routinely use, alternative or additional methods such as nitrous oxide and propofol, which have advantages and disadvantages relating to recovery time (particularly the duration of amnesia after the procedure is complete), patient experience, and the degree of supervision needed for safe administration. This sedation is called "twilight anesthesia". For some patients it is not fully effective, so they are indeed awake for the procedure and can watch the inside of their colon on the color monitor. Substituting propofol for midazolam, which gives the patient quicker recovery, is gaining wider use, but requires closer monitoring of respiration.
A meta-analysis found that playing music improves patients' tolerance of the procedure. The first step is usually a digital rectal examination, to examine the tone of the sphincter and to determine if preparation has been adequate. The endoscope is then passed through the anus up the rectum, the colon (sigmoid, descending, transverse and ascending colon, the cecum), and ultimately the terminal ileum. The endoscope has a movable tip and multiple channels for instrumentation, air, suction, and light. The bowel is occasionally insufflated with air to maximize visibility (a procedure that gives the patient the false sensation of needing to have a bowel movement). Biopsies are frequently taken for histology. Additionally, in a procedure known as chromoendoscopy, a contrast dye (such as indigo carmine) may be sprayed through the endoscope onto the bowel wall to help visualise any abnormalities in the mucosal morphology. A Cochrane review updated in 2016 found strong evidence that chromoscopy enhances the detection of cancerous tumours in the colon and rectum. In experienced hands, the endoscope is advanced to the junction where the colon and small bowel join (the cecum) in under 10 minutes in 95% of cases. Due to tight turns and redundancy in areas of the colon that are not "fixed", loops may form in which advancement of the endoscope creates a "bowing" effect that causes the tip to actually retract. These loops often result in discomfort due to stretching of the colon and its associated mesentery. Manoeuvres to "reduce" or remove the loop include pulling the endoscope backwards while twisting it. Alternatively, body position changes and abdominal support from external hand pressure can often "straighten" the endoscope to allow the scope to move forward. In a minority of patients, looping is cited as a cause for an incomplete examination. Usage of alternative instruments leading to completion of the examination has been investigated, including use of the pediatric colonoscope, push enteroscope, and upper GI endoscope variants. For screening purposes, a closer visual inspection is then often performed upon withdrawal of the endoscope over the course of 20 to 25 minutes. Lawsuits over missed cancerous lesions have recently prompted some institutions to better document withdrawal time, as rapid withdrawal times may be a source of potential medico-legal liability. This is often a real concern in clinical settings where high caseloads could provide a financial incentive to complete colonoscopies as quickly as possible.
Suspicious lesions may be cauterized, treated with laser light, or cut with an electric wire for purposes of biopsy or complete removal (polypectomy). Medication can be injected, e.g. to control bleeding lesions. The procedure typically takes 20–30 minutes, depending on the indication and findings; with multiple polypectomies or biopsies, procedure times may be longer. As mentioned above, anatomic considerations may also affect procedure times.
After the procedure, some recovery time is usually allowed to let the sedative wear off. Outpatient recovery time can take an estimated 30–60 minutes. Most facilities require that patients have a person with them to help them home afterwards (depending on the sedation method used).
One common after-effect from the procedure is a bout of flatulence and minor wind pain caused by air insufflation into the colon during the procedure.
An advantage of colonoscopy over X-ray imaging or other less invasive tests is the ability to perform therapeutic interventions during the test. A polyp is a growth of excess tissue that can develop into cancer. If a polyp is found, for example, it can be removed by one of several techniques. A snare device can be placed around a polyp for removal. Even if the polyp is flat on the surface, it can often be removed, sometimes in stages.
Pain management
The pain associated with the procedure is not caused by the insertion of the scope but rather by the inflation of the colon in order to do the inspection. The scope itself is essentially a long, flexible tube about a centimeter in diameter — that is, as big around as the little finger, which is less than the diameter of an average stool.
The colon is wrinkled and corrugated, somewhat like an accordion or a clothes-dryer exhaust tube, which gives it the large surface area needed for water absorption. In order to inspect this surface thoroughly, the physician blows it up like a balloon, using air from a compressor or carbon dioxide from a gas bottle (CO2 is absorbed into the bloodstream through the mucosal lining of the colon much faster than air and then exhaled through the lungs, which is associated with less post-procedural pain), in order to get the creases out. The stomach, intestines, and colon have a so-called "second brain" wrapped around them, which autonomously runs the chemical factory of digestion. It uses complex hormone signals and nerve signals to communicate with the brain and the rest of the body. Normally, a colon's job is to digest food and regulate the intestinal flora. Harmful bacteria in rancid food, for example, create gas.
The colon has distension sensors that can tell when there is unexpected gas pushing the colon walls out—thus the "second brain" tells the person that he or she is having intestinal difficulties by way of the sensation of nausea. Doctors typically recommend either total anesthesia or a partial twilight sedative to either preclude or to lessen the patients awareness of pain or discomfort, or just the unusual sensations of the procedure. Once the colon has been inflated, the doctor inspects it with the scope as it is slowly pulled backward. If any polyps are found they are then cut out for later biopsy.
Some doctors prefer to work with totally anesthetized patients inasmuch as the lack of any perceived pain or discomfort allows for a leisurely examination. Twilight sedation is, however, inherently safer than general anesthesia; it also allows the patients to follow simple commands and even to watch the procedure on a closed-circuit monitor. Tens of millions of adults annually need to have colonoscopies, and yet many don't because of concerns about the procedure. Colonoscopy can be carried out without any sedation and without problems with pain, which is practised in several institutions in many countries with the patient's agreement. This allows the patient to shift body position to help the doctor carry out the procedure and significantly reduces recovery time and side-effects. There is some discomfort when the colon is distended with air, but this is not usually particularly painful, and it passes relatively quickly. Unsedated patients can be released from the hospital on their own without any feelings of nausea, able to continue with normal activities, and without the need for an escort as recommended after sedation.
Ultrasound
Duodenography and colonography are performed like a standard abdominal examination using B-mode and color flow Doppler ultrasonography, with a low-frequency transducer (for example, 2.5 MHz) and a high-frequency transducer (for example, a 7.5 MHz probe). Detailed examination of the duodenal walls and folds, and of the colonic walls and haustra, is performed using the 7.5 MHz probe. Deeply located abdominal structures are examined using the 2.5 MHz probe. All ultrasound examinations are performed after overnight fasting (for at least 16 hours) using a standard scanning procedure. Subjects are examined with and without water contrast. Water contrast imaging is performed by having adult subjects take at least one liter of water prior to examination. Patients are examined in the supine, left posterior oblique, and left lateral decubitus positions using the intercostal and subcostal approaches. The liver, gall bladder, spleen, pancreas, duodenum, colon, and kidneys are routinely evaluated in all patients.
With the patient lying supine, examination of the duodenum with high-frequency ultrasound duodenography is performed with the 7.5 MHz probe placed in the right upper abdomen and central epigastrium successively. For high-frequency ultrasound colonography, the ascending colon is examined starting usually midway along an imaginary line running from the iliac crest to the umbilicus and proceeding cephalad through the right mid abdomen; for the descending colon, the examination begins from the left upper abdomen, proceeding caudally and traversing the left mid abdomen and left lower abdomen, terminating at the sigmoid colon in the lower pelvic region. Color flow Doppler sonography is used to examine the localization of lesions in relation to vessels. All measurements of diameter and wall thickness are performed with built-in software. Measurements are taken between peristaltic waves.
Economics
Researchers have found that older patients with three or more significant health problems, like dementia or heart failure, had high rates of repeat colonoscopies without medical indications. These patients are less likely to live long enough to develop colon cancer. Gordon states, "At about $1,000 per procedure, there's clearly an economic incentive". The Hemoccult II FOBT (combined with follow-up colonoscopy if indicated by the test) is over 5 times as cost-effective as other screening strategies, but is only about 85% as sensitive. Because of this relatively low sensitivity, US guidelines advocate the more expensive procedures instead, because even the relatively small increase in lives saved is seen as worth the roughly five-fold increase in cost, given US living standards.
History
In the 1960s, Dr. Niwa and Dr. Yamagata at Tokyo University developed the device. After 1968, Dr. William Wolff and Dr. Hiromi Shinya pioneered the development of the colonoscope. Their invention, in 1969 in Japan, was an advance over the barium enema and the flexible sigmoidoscope because it allowed for the visualization and removal of polyps from the entire large intestine. Wolff and Shinya advocated for their invention and published much of the early evidence needed to overcome skepticism about the device's safety and efficacy.
Development and marketing of colonoscopes with CCD imaging has been led by Fujifilm, Olympus, and Hoya in Japan. In 1982, Dr. Lawrence Kaplan of Aspen Medical Group in St. Paul, MN, reported a series of 100 consecutive colonoscopies and upper endoscopies performed in a free-standing clinic miles from the nearest hospital, demonstrating the safety and cost-effectiveness of these outpatient procedures. (Personal communication to the Joint Commission on Ambulatory Care, May 1983)
Etymology
The terms colonoscopy and coloscopy are derived from the ancient Greek noun κόλον, the same as English colon, and the verb σκοπεῖν, to look (in)to or examine. The term colonoscopy is, however, ill-constructed, as this form supposes that the first part of the compound consists of a possible root κολων- or κολον-, with the connecting vowel -o, instead of the root κόλ- of κόλον. A compound such as κολωνοειδής, like a hill, (with the additional -on-) is derived from the ancient Greek word κολώνη or κολωνός, hill. Similarly, colonoscopy (with the additional -on-) can literally be translated as examination of the hill, instead of examination of the colon.
In English, multiple words exist that are derived from κόλον, such as colectomy, colocentesis, colopathy, and colostomy among many others, that actually lack the incorrect additional -on-. A few compound words such as colonopathy have doublets with -on- inserted.
See also
References
Further reading
Gupta S, Lieberman D, Anderson JC, Burke CA, Dominitz JA, Kaltenbach T, et al. (March 2020). "Recommendations for Follow-Up After Colonoscopy and Polypectomy: A Consensus Update by the US Multi-Society Task Force on Colorectal Cancer". Gastroenterology. 158 (4): 1131–1153.e5. doi:10.1053/j.gastro.2019.10.026. PMC 7672705. PMID 32044092.
</s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Bronchopulmonary dysplasia and can you explain it in depth? [/INST] Bronchopulmonary dysplasia (BPD; part of the spectrum of chronic lung disease of infancy) is a chronic lung disease in which premature infants, usually those who were treated with supplemental oxygen, require long-term oxygen. The alveoli that are present tend not to be mature enough to function normally. It is more common in infants with low birth weight (LBW) and those who receive prolonged mechanical ventilation to treat respiratory distress syndrome (RDS). It results in significant morbidity and mortality. The definition of BPD has continued to evolve, primarily due to changes in the population, such as more survivors at earlier gestational ages, and improved neonatal management including surfactant, antenatal glucocorticoid therapy, and less aggressive mechanical ventilation. Currently, the description of BPD includes the grading of its severity into mild, moderate, and severe. This correlates with the infant's maturity, growth, and overall severity of illness. The new system offers a better description of underlying pulmonary disease and its severity.
Presentation
Complications
Feeding problems are common in infants with bronchopulmonary dysplasia, often due to prolonged intubation. Such infants often display oral-tactile hypersensitivity (also known as oral aversion).
Physical findings:
hypoxemia;
hypercapnia;
crackles, wheezing, & decreased breath sounds;
increased bronchial secretions;
hyperinflation;
frequent lower respiratory infections;
delayed growth & development;
cor pulmonale;
CXR shows hyperinflation, low diaphragm, atelectasis, and cystic changes.
Cause
Prolonged high oxygen delivery in premature infants causes necrotizing bronchiolitis and alveolar septal injury, with inflammation and scarring. This results in hypoxemia. Today, with the advent of surfactant therapy, high-frequency ventilation, and oxygen supplementation, infants with BPD experience much milder injury without necrotizing bronchiolitis or alveolar septal fibrosis. Instead, there are usually uniformly dilated acini with thin alveolar septa and little or no interstitial fibrosis. BPD develops most commonly in the first 4 weeks after birth.
Diagnosis
Earlier criteria
The classic diagnosis of BPD may be assigned at 28 days of life if the following criteria are met (a checklist sketch follows the list):
Positive pressure ventilation during the first 2 weeks of life for a minimum of 3 days.
Clinical signs of abnormal respiratory function.
A requirement for supplemental oxygen beyond 28 days of age to maintain PaO2 above 50 mm Hg.
Chest radiograph with diffuse abnormal findings characteristic of BPD.
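As a minimal sketch, the four criteria above amount to a conjunction of boolean checks. The thresholds come from the list; the function and parameter names below are hypothetical, for illustration only.

```python
# A hypothetical checklist function for the classic 28-day criteria above.
# The four conditions are from the list; all names are illustrative.
def meets_classic_bpd_criteria(
    days_of_positive_pressure_in_first_2_weeks: int,
    has_abnormal_respiratory_signs: bool,
    days_of_supplemental_oxygen: int,
    cxr_characteristic_of_bpd: bool,
) -> bool:
    """All four criteria must hold for the classic diagnosis at 28 days."""
    return (
        days_of_positive_pressure_in_first_2_weeks >= 3
        and has_abnormal_respiratory_signs
        and days_of_supplemental_oxygen > 28
        and cxr_characteristic_of_bpd
    )
```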
Newer criteria
The 2006 National Institutes of Health (US) criteria for BPD (for neonates treated with more than 21% oxygen for at least 28 days) are as follows (a classification sketch follows the list):
Mild: Breathing room air at 36 weeks post-menstrual age or discharge (whichever comes first) for babies born before 32 weeks, or breathing room air by 56 days postnatal age or discharge (whichever comes first) for babies born after 32 weeks gestation.
Moderate: Need for <30% oxygen at 36 weeks postmenstrual age or discharge (whichever comes first) for babies born before 32 weeks, or need for <30% oxygen at 56 days postnatal age or discharge (whichever comes first) for babies born after 32 weeks gestation.
Severe: Need for >30% oxygen, with or without positive pressure ventilation or continuous positive pressure, at 36 weeks postmenstrual age or discharge (whichever comes first) for babies born before 32 weeks, or need for >30% oxygen with or without positive pressure ventilation or continuous positive pressure at 56 days postnatal age or discharge (whichever comes first) for babies born after 32 weeks gestation.
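As a hedged sketch, the grading above can be expressed as a small classifier for a neonate who has already met the 28-day oxygen requirement and is being evaluated at the assessment point the criteria define (36 weeks postmenstrual age for birth before 32 weeks, otherwise 56 days postnatal age, or discharge, whichever comes first). The FiO2 cut-offs come from the criteria; how the exact 30% boundary and sub-30% positive-pressure cases are graded is not specified in the text, so this sketch makes an arbitrary choice, and all names are illustrative.

```python
# Illustrative classifier for the 2006 NIH BPD severity grading above.
# Applies only at the assessment point defined by the criteria; borderline
# cases (exactly 30% oxygen, positive pressure at <30%) are handled
# arbitrarily here because the text does not specify them.
def bpd_severity(fio2_at_assessment: float, on_positive_pressure: bool) -> str:
    if not on_positive_pressure and fio2_at_assessment <= 0.21:
        return "mild"      # breathing room air
    if not on_positive_pressure and fio2_at_assessment < 0.30:
        return "moderate"  # needs <30% oxygen
    return "severe"        # needs >30% oxygen and/or positive pressure

assert bpd_severity(0.21, False) == "mild"
assert bpd_severity(0.25, False) == "moderate"
assert bpd_severity(0.40, True) == "severe"
```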
Management
Infants with bronchopulmonary dysplasia are often treated with diuretics, which decrease the fluid in the alveoli where gas exchange occurs, and bronchodilators, which relax the airway muscles to facilitate breathing. Viral immunization is important for these children, who have a higher risk of respiratory tract infections.
Corticosteroid treatment
There is evidence that steroids (systemic corticosteroid treatment) given to babies less than 7 days old can prevent bronchopulmonary dysplasia, but this treatment increases the risk of neurodevelopmental sequelae (cerebral palsy) and gastrointestinal perforation. For babies 7 days old and older, "late systemic postnatal corticosteroid treatment" may reduce the risk of death and of bronchopulmonary dysplasia. There is some evidence that this treatment does not increase the risk of cerebral palsy; however, long-term studies of neurodevelopmental outcomes are needed to further understand the risk of this treatment option. Late systemic postnatal corticosteroid treatment is therefore only recommended for babies 7 days old or older who cannot be taken off a ventilator. </s>