<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Microscopic colitis and can you explain it in depth? [/INST] Microscopic colitis refers to two related medical conditions which cause diarrhea: collagenous colitis and lymphocytic colitis. Both conditions are characterized by the presence of chronic non-bloody watery diarrhea, normal appearances on colonoscopy and characteristic histopathology findings of inflammatory cells.
Signs and symptoms
The main symptom is persistent non-bloody watery diarrhea, which may be profuse. People may also experience abdominal pain, fecal incontinence, and unintentional weight loss. Microscopic colitis is the diagnosis in around 10% of cases investigated for chronic non-bloody diarrhea.
Associated conditions
A higher incidence of autoimmune diseases, for example arthritis, Sjögren's syndrome, thyroid disorders, and celiac disease, has been reported in people with microscopic colitis. Associations with various drugs have been found, especially proton pump inhibitors, H2 blockers, selective serotonin reuptake inhibitors (SSRIs), and non-steroidal anti-inflammatory drugs (NSAIDs). Bile acid diarrhea is found in 41% of patients with collagenous colitis and 29% with lymphocytic colitis. Additionally, cigarette smoking has been identified as a significant risk factor for microscopic colitis.
Diagnosis
Colonoscopic appearances are normal or near normal. As the changes are often patchy, an examination limited to the rectum may miss cases of microscopic colitis, and so a full colonoscopy is necessary. Multiple colonic biopsies are taken in order to make the diagnosis. Histological features of colonic biopsies indicating microscopic colitis are more than 20 intraepithelial lymphocytes per 100 epithelial cells and, in collagenous colitis, an additional subepithelial collagen band thickened to 10-20 μm. Inflammation of the lamina propria, with mainly mononuclear cells, may be observed in collagenous colitis.
Pathology
Microscopic colitis is characterized by an increase in inflammatory cells, particularly lymphocytes, in colonic biopsies with an otherwise normal appearance and architecture of the colon. Inflammatory cells are increased both in the surface epithelium ("intraepithelial lymphocytes") and in the lamina propria. The key feature is more than 20 intra-epithelial lymphocytes per 100 epithelial cells. These are the principal features of lymphocytic colitis. An additional distinguishing feature of collagenous colitis is a thickened subepithelial collagen layer, which may be up to 30 micrometres thick, that occurs in addition to the features found in lymphocytic colitis. The fact that the two types of microscopic colitis share many features, including epidemiology, risk factors, and response to therapy, has led to the suggestion that they are actually subtypes of the same disease.
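The histological thresholds above amount to a simple decision rule. The sketch below is illustrative only (the function name is hypothetical and the cutoffs are taken from the text; actual diagnosis rests on a pathologist's full assessment):

```python
def classify_microscopic_colitis(iels_per_100_cells: float, collagen_band_um: float) -> str:
    """Illustrative rule for microscopic colitis subtypes.

    Thresholds follow the text: more than 20 intraepithelial lymphocytes
    (IELs) per 100 epithelial cells indicates microscopic colitis; a
    subepithelial collagen band thickened to roughly 10 um or more
    additionally indicates collagenous colitis.
    """
    if iels_per_100_cells <= 20:
        return "IEL criterion not met"
    if collagen_band_um >= 10:
        return "collagenous colitis"
    return "lymphocytic colitis"

print(classify_microscopic_colitis(35, 15))  # collagenous colitis
print(classify_microscopic_colitis(25, 4))   # lymphocytic colitis
```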
Differential diagnosis
Differential diagnoses, which should be ruled out, include bile acid diarrhea, lactose malabsorption, celiac disease, Crohn's disease, ulcerative colitis, and infectious colitis.
Treatment
Lymphocytic and collagenous colitis have both been shown in randomized, placebo-controlled trials to respond well to budesonide, a glucocorticoid. Budesonide formulated to be active in the distal colon and rectum is effective for both active disease and in the prevention of relapse. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Hematemesis and can you explain it in depth? [/INST] Hematemesis is the vomiting of blood. It is always an important sign. It can be confused with hemoptysis (coughing up blood) or epistaxis (nosebleed), which are more common. The source is generally the upper gastrointestinal tract, typically above the suspensory muscle of duodenum. It may be caused by ulcers, tumors of the stomach or esophagus, varices, prolonged and vigorous retching, gastroenteritis, ingested blood (from bleeding in the mouth, nose, or throat), or certain drugs. Hematemesis is treated as a medical emergency, with treatments based on the amount of blood loss. Investigations include endoscopy. Any blood loss may be corrected with intravenous fluids and blood transfusions. Patients may need to avoid taking anything by mouth.
Definition
Hematemesis is the vomiting of blood. This is usually vomit that contains bright red blood. Coffee ground vomiting is similar to hematemesis, but is distinct in not involving bright red blood. Hematemesis is always an important sign. It must be differentiated from hemoptysis (coughing up blood) and epistaxis (nosebleed). Both of these are more common conditions. These may be difficult to distinguish.
Differential diagnosis
Hematemesis may be caused by:
Peptic ulcer. This may be related to Zollinger–Ellison syndrome, which causes severe disease.
Vascular malformations of the gastrointestinal tract, such as bleeding gastric varices or intestinal varices.
Mallory-Weiss syndrome: bleeding tears in the esophageal mucosa, usually caused by prolonged and vigorous retching.
Vomiting of ingested blood after bleeding in the mouth, nose, or throat.
Tumors of the stomach or esophagus.
Irritation or erosion of the lining of the esophagus or stomach.
Radiation poisoning.
Viral hemorrhagic fevers.
Gastroenteritis.
Gastritis.
Chronic viral hepatitis.
Hepatic schistosomiasis, or intestinal schistosomiasis. This is caused by the parasite Schistosoma mansoni.
Iatrogenic injury (invasive procedure such as endoscopy or transesophageal echocardiography)
Atrio-esophageal fistula.
Yellow fever.
Strongyloidiasis.
Certain drugs.
Plague, most commonly from septicemic plague.
Mechanism
The source of vomited blood is usually the upper gastrointestinal tract, which includes the esophagus, stomach, and the duodenum above the suspensory muscle of duodenum (the conventional boundary between the upper and lower gastrointestinal tract). Hematemesis tends to occur only after significant blood loss.
Evaluation
Hematemesis may be investigated with endoscopy of the upper gastrointestinal tract. Barium meal may also be used.
Management
Hematemesis is treated as a medical emergency. The most vital distinction is whether blood loss is sufficient to cause shock, and management is directed accordingly. Investigations such as endoscopy should be performed before medication is chosen, and a platelet count is also important: medicines such as some painkillers or antibiotics (e.g. ciprofloxacin) can decrease the platelet count, which can lead to thrombocytopenia (when the body does not have sufficient platelets in the blood and cannot form clots), so the wrong medication or management can be deadly. Blood transfusion is required if the body loses more than 20 percent of its blood volume, as severe loss makes it impossible for the heart to pump a sufficient amount of blood to the body. Unreplaced blood volume can lead to hypovolemic shock, which can damage organs such as the kidneys and brain or cause gangrene of the arms or legs, and an untreated patient could develop cerebral atrophy.
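As a rough worked example of the 20 percent figure: adult blood volume is commonly estimated at about 70 mL per kg of body weight (a standard rule of thumb, not stated in the text above), so the loss that triggers transfusion can be approximated as follows.

```python
ML_PER_KG = 70.0  # rule-of-thumb adult blood volume (assumption, not from the text)

def transfusion_loss_threshold_ml(weight_kg: float, fraction: float = 0.20) -> float:
    """Estimated blood loss (mL) equal to 20% of total blood volume."""
    return weight_kg * ML_PER_KG * fraction

print(transfusion_loss_threshold_ml(70))  # ~980 mL for a 70 kg adult
```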
Minimal blood loss
In cases that do not involve shock, treatment may involve proton pump inhibitors (such as omeprazole) to treat stomach ulcers if they are present. This is given until endoscopy can be arranged. Blood transfusions may be given if the level of hemoglobin in the blood is extremely low, that is, less than 8.0 g/dL or 4.5–5.0 mmol/L. A patient may be kept nil by mouth (no eating or drinking). Adequate venous access (such as with large-bore cannulas or a central venous catheter) is generally obtained in case the patient develops a further bleed and becomes unstable.
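The two hemoglobin units quoted above are mutually consistent: 1 g/dL corresponds to roughly 0.62 mmol/L when hemoglobin is counted per monomer (molar mass about 16,100 g/mol; this conversion factor is standard laboratory practice rather than something stated in the text). A quick check:

```python
HB_MONOMER_G_PER_MMOL = 16.1  # approximate molar mass of one hemoglobin monomer, g/mmol

def hb_g_dl_to_mmol_l(g_dl: float) -> float:
    """Convert hemoglobin from g/dL to mmol/L (monomer basis)."""
    return g_dl * 10 / HB_MONOMER_G_PER_MMOL  # g/dL -> g/L, then divide by g/mmol

print(round(hb_g_dl_to_mmol_l(8.0), 2))  # ~4.97, matching the 4.5-5.0 mmol/L range
```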
Significant blood loss
In a "hemodynamically significant" case of hematemesis, there may be shock. Resuscitation is an immediate priority to prevent death </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Hematuria and can you explain it in depth? [/INST] Hematuria or haematuria is defined as the presence of blood or red blood cells in the urine. “Gross hematuria” occurs when urine appears red, brown, or tea-colored due to the presence of blood. Hematuria may also be subtle and only detectable with a microscope or laboratory test. Blood that enters and mixes with the urine can come from any location within the urinary system, including the kidney, ureter, urinary bladder, urethra, and in men, the prostate. Common causes of hematuria include urinary tract infection (UTI), kidney stones, viral illness, trauma, bladder cancer, and exercise. These causes are grouped into glomerular and non-glomerular causes, depending on the involvement of the glomerulus of the kidney. But not all red urine is hematuria. Other substances such as certain medications and foods (e.g. blackberries, beets, food dyes) can cause urine to appear red. Menstruation in women may also cause the appearance of hematuria and may result in a positive urine dipstick test for hematuria. A urine dipstick test may also give an incorrect positive result for hematuria if there are other substances in the urine such as myoglobin, a protein excreted into urine during rhabdomyolysis. A positive urine dipstick test should be confirmed with microscopy, where hematuria is defined by three or more red blood cells per high power field. When hematuria is detected, a thorough history and physical examination with appropriate further evaluation (e.g. laboratory testing) can help determine the underlying cause.
Differential diagnosis
Hematuria can be classified according to visibility, anatomical origin, and timing of blood during urination. In terms of the visibility, hematuria can be visible to the naked eye (termed "gross hematuria") and may appear red or brown (sometimes referred to as tea-colored), or it can be microscopic (i.e. not visible to the eye but detected with a microscope or laboratory test). Microscopic hematuria is present when there are three or more red blood cells per high power field.
In terms of the anatomical origin, blood or red blood cells can enter and mix with urine at multiple anatomical sites within the urinary system, including the kidney, ureter, urinary bladder, and urethra, and in men, the prostate. Additionally, menstruation in women may cause the appearance of hematuria and may result in a positive urine dipstick test for hematuria. The causes corresponding to these anatomic locations can be divided into glomerular and non-glomerular causes, referring to the involvement of the glomerulus of the kidney. Non-glomerular causes can be further subdivided into upper urinary tract and lower urinary tract causes.
In terms of the timing during urination, hematuria can be initial, terminal, or total, meaning blood can appear in the urine at the onset, midstream, or later. If it appears soon after the onset of urination, a distal site is suggested, while a longer delay suggests a more proximal lesion. Hematuria that occurs throughout urination suggests that bleeding is occurring above the level of the bladder. This distinction is significant for developing a differential diagnosis and, eventually, for creating a treatment plan for the patient. Many causes may present as either visible hematuria or microscopic hematuria, and so the differential diagnosis is frequently organized based upon glomerular and non-glomerular causes.
Glomerular hematuria
Glomerular causes include:
IgA nephropathy
Thin glomerular basement membrane disease
Hereditary nephritis (Alport syndrome)
Hemolytic uremic syndrome
Postinfectious glomerulonephritis. Certain infectious agents, such as group A Streptococcus (Streptococcus pyogenes), can cause a specific manifestation known as post-streptococcal glomerulonephritis, which results in cola-colored urine.
Membranoproliferative glomerulonephritis
Lupus nephritis
Henoch–Schönlein purpura
Nephritic syndrome
Nephrotic syndrome
Polycystic kidney disease
Idiopathic hematuria
Non-glomerular hematuria
Visible blood clots in the urine indicate a non-glomerular cause. Non-glomerular causes include:
Urinary tract infections, such as pyelonephritis, cystitis, prostatitis, and urethritis
Kidney stones
Cancers, such as renal cell carcinoma and bladder cancer (particularly transitional cell carcinoma), and in men, prostate cancer
Urinary tract strictures
Benign prostatic hyperplasia
Renal papillary necrosis
Trauma or damage to the lining of the urinary tract
Intense exercise
Increased tendency to bleed due to acquired or genetic conditions (e.g. sickle cell disease or vitamin K deficiency bleeding) or certain medications (e.g. blood thinners)
Mimickers of hematuria
Pigmenturia
Not all red or brown urine is caused by hematuria. Other substances such as certain medications and certain foods can cause urine to appear red. Medications that may cause urine to appear red include:
Phenazopyridine
Nitrofurantoin
Doxorubicin
Rifampicin
Foods that may cause urine to appear red include:
Blackberries
Food dyes
Beets
Rhubarb
Fava beans
False positive urine dipstick
A urine dipstick may be falsely positive for hematuria due to other substances in the urine. While the urine dipstick test is able to recognize heme in red blood cells, it also identifies free hemoglobin and myoglobin. Free hemoglobin may be found in the urine resulting from hemolysis, and myoglobin may be found in the urine resulting from rhabdomyolysis. Thus, a positive dipstick test does not necessarily indicate hematuria; rather, microscopy of the urine showing three or more red blood cells per high power field confirms hematuria.
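The confirmation logic in this paragraph can be condensed into a small rule (an illustrative sketch; the function name and return strings are hypothetical):

```python
def interpret_dipstick(dipstick_positive: bool, rbc_per_hpf: int) -> str:
    """A dipstick detects heme from RBCs, free hemoglobin, or myoglobin,
    so a positive strip alone is not diagnostic; microscopy confirms."""
    if not dipstick_positive:
        return "no heme detected"
    if rbc_per_hpf >= 3:
        return "hematuria confirmed (3 or more RBCs per high power field)"
    return "positive dipstick without hematuria: consider hemoglobinuria or myoglobinuria"

print(interpret_dipstick(True, 5))
print(interpret_dipstick(True, 1))
```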
Menstruation
In women, menstruation may cause the appearance of hematuria and may result in a urine dipstick test positive for hematuria. Menstruation can be ruled out as a cause of hematuria by inquiring about menstruation history and ensuring the urine specimen is collected without menstrual blood.
Evaluation
The evaluation of hematuria is dependent upon the visibility of the blood in the urine (i.e. visible/gross vs microscopic hematuria). Visible hematuria must be investigated, as it may be due to a pathological cause. In those with visible hematuria, urological cancer (most frequently bladder or kidney cancer) is discovered in 20-25%. Hematuria alone without accompanying symptoms should raise suspicion of malignancy of the urinary tract until proven otherwise. The initial evaluation of patients presenting with signs and symptoms consistent with hematuria includes assessment of hemodynamic status, the underlying cause of hematuria, and ensuring urinary drainage. These steps include assessment of the patient's heart rate and blood pressure, a physical exam performed by a healthcare professional, and blood work to ensure the patient's hemodynamic status is adequate. It is important to obtain a detailed history from the patient (i.e. recreational, occupational, and medication exposures) as this information can be helpful in suggesting a cause of hematuria. The physical exam can also be helpful in identifying a cause of the hematuria, as certain signs found on the physical exam can suggest specific causes. In the event the initial evaluation of hematuria does not reveal an underlying cause, evaluation by a physician who specializes in urology may proceed. This medical evaluation may consist of, but is not limited to, a history and physical exam taken by healthcare personnel, laboratory studies (i.e. blood work), cystoscopy, and specialized imaging procedures (i.e. CT or MRI).
Visible hematuria
The first step in evaluation of red or brown colored urine is to confirm true hematuria with urinalysis and urine microscopy, where hematuria is defined by three or more red blood cells per high power field. Although a urine dipstick test may be used, it can give false positive or false negative results. In gathering information, it is important to inquire about recent trauma, urologic procedures, menses, and culture-documented urinary tract infection. If any of these are present, it is appropriate to repeat a urinalysis with urine microscopy in 1 to 2 weeks or after treatment of the infection. If the results of the urinalysis and urine microscopy reveal a glomerular origin of hematuria (indicated by proteinuria or red blood cell casts), consultation of a nephrologist should be made. If the results of the urinalysis indicate a non-glomerular origin, a microbiological culture of the urine should be performed, if it has not been done already. If the culture is positive, treatment of the infection should follow and urinalysis and urine microscopy should be repeated once complete. If the culture is negative or if hematuria persists after treatment, CT urogram and cystoscopy should be performed. Of note, hemodynamic stability should be monitored and a complete blood count should be ordered to assess for anemia. In summary, those with visible hematuria confirmed by urinalysis and urine microscopy and with no recent trauma, urologic procedures, menses, or urinary tract infection should undergo cystoscopy and CT urogram.
Microscopic hematuria
After detecting and confirming hematuria with urinalysis and urine microscopy, the first step in evaluation of microscopic hematuria is to rule out benign causes. Benign causes include urinary tract infection, viral illness, kidney stone, recent intense exercise, menses, recent trauma, or recent urological procedure. After benign causes have resolved or been treated, a repeat urinalysis and urine microscopy is warranted to ensure cessation of hematuria. If hematuria persists (even if there is a suspected cause), the next step is to stratify the risk of the person for urothelial cancer into low, intermediate, or high risk to determine next steps. To be in the low risk category, one must satisfy all of the following criteria: Has never smoked tobacco or smoked less than 10 pack-years; is a female less than 50 years old or a male less than 40 years old; has 3-10 red blood cells per high power field; has not had microscopic hematuria before; and has no other risk factors for urothelial cancer. To be in the intermediate risk category, one must satisfy any of the following criteria: Has smoked 10-30 pack-years; is a female 50–59 years old or a male aged 40–59 years old; has 11-25 red blood cells per high power field; or was previously a low-risk patient with persistent microscopic hematuria and has 3-25 red blood cells per high power field. To be in the high risk category, one must satisfy any of the following criteria: Has smoked more than 30 pack-years; is older than 60 years of age; or has above 25 red blood cells per high power field on any urinalysis. For the low risk category, the next step is to either repeat a urinalysis with urine microscopy in 6 months or perform a cystoscopy and renal ultrasound. For the intermediate risk category, the next step is to perform a cystoscopy and renal ultrasound. For the high risk category, the next step is to perform a cystoscopy and CT urogram. If an underlying cause for hematuria is discovered, it should be managed appropriately. However, if no underlying cause is discovered, the hematuria should be re-evaluated with urinalysis and urine microscopy within 12 months. Additionally, for all risk categories, if a nephrologic origin is suspected, consultation of a nephrologist should be made.
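The stratification just described is essentially a lookup over smoking history, age, sex, and red-cell count. A compact sketch of those criteria follows (thresholds copied from the text; the function is hypothetical teaching code, not clinical software):

```python
def urothelial_cancer_risk(pack_years: float, age: int, sex: str, rbc_per_hpf: int,
                           prior_low_risk_persistent: bool = False,
                           other_risk_factors: bool = False) -> str:
    """Stratify persistent microscopic hematuria into low/intermediate/high risk."""
    # High risk: any single criterion suffices.
    if pack_years > 30 or age > 60 or rbc_per_hpf > 25:
        return "high"
    # Intermediate risk: any single criterion suffices.
    if (10 <= pack_years <= 30
            or (sex == "F" and 50 <= age <= 59) or (sex == "M" and 40 <= age <= 59)
            or 11 <= rbc_per_hpf <= 25
            or (prior_low_risk_persistent and 3 <= rbc_per_hpf <= 25)):
        return "intermediate"
    # Low risk: all criteria must hold.
    if (pack_years < 10 and ((sex == "F" and age < 50) or (sex == "M" and age < 40))
            and 3 <= rbc_per_hpf <= 10 and not other_risk_factors):
        return "low"
    return "criteria not met: re-examine inputs"

print(urothelial_cancer_risk(0, 35, "F", 5))    # low -> repeat testing or cystoscopy + renal US
print(urothelial_cancer_risk(15, 45, "M", 8))   # intermediate -> cystoscopy + renal ultrasound
print(urothelial_cancer_risk(40, 70, "M", 30))  # high -> cystoscopy + CT urogram
```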
Pathophysiology
The pathophysiology of hematuria can often be explained by damage to the structures of the urinary system, including the kidney, ureter, urinary bladder, and urethra, and in men, the prostate. Common mechanisms include structural disruption to the glomerular basement membrane and mechanical or chemical erosion of the mucosal surfaces of the genitourinary tract.
Management
Medical Emergency: Acute clot retention
Acute clot retention is one of three emergencies that can occur with hematuria; the other two are anemia and shock. Blood clots can prevent urine outflow through either ureter or from the bladder. This is known as acute urinary retention.
Blood clots that remain in the bladder are digested by urinary urokinase, producing fibrin fragments. These fibrin fragments are natural anticoagulants and promote ongoing bleeding from the urinary tract. Removing all blood clots prevents the formation of this natural anticoagulant, which in turn facilitates the cessation of bleeding from the urinary tract. The acute management of obstructing clots is the placement of a large (22-24 French) urethral Foley catheter. Clots are evacuated with a Toomey syringe and saline irrigation. If this does not control the bleeding, management should escalate to continuous bladder irrigation (CBI) via a three-port urethral catheter. If both a large urethral Foley catheter and CBI fail, an urgent cystoscopy in the operating room will be necessary. Lastly, a transfusion and/or a correction of a coexisting coagulopathy may be necessary.
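For scale, catheter sizes in French gauge divide by three to give the outer diameter in millimetres (a standard conversion, not taken from the text):

```python
def french_to_mm(french: float) -> float:
    """French (Fr) catheter gauge to outer diameter in mm (1 Fr = 1/3 mm)."""
    return french / 3.0

for fr in (22, 24):
    print(f"{fr} Fr is about {french_to_mm(fr):.1f} mm outer diameter")
```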
Medical Emergency: Urosepsis
Urosepsis is defined as sepsis caused by a urogenital tract infection and comprises about 25% of all sepsis cases. Urosepsis is the result of a systemic inflammatory response to infection and can be identified by numerous signs and symptoms (e.g. fever, hypothermia, tachycardia, and leukocytosis). Signs and symptoms that indicate a urogenital tract infection is the source of the sepsis may include, but are not limited to, flank pain, costovertebral angle tenderness, pain with micturition, urinary retention, and scrotal pain. Imaging tests may be used to identify the source of the urosepsis. In addition to imaging tests, patients may be treated with antibiotics to relieve the infection and intravenous fluids to maintain cardiovascular and renal perfusion. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Nosebleed and can you explain it in depth? [/INST] A nosebleed, also known as epistaxis, is bleeding from the nose. Blood can flow down into the stomach, and cause nausea and vomiting. In more severe cases, blood may come out of both nostrils. Rarely, bleeding may be so significant that low blood pressure occurs. Blood may also come up the nasolacrimal duct and out from the eye. Risk factors include trauma, including putting the finger in the nose, blood thinners, high blood pressure, alcoholism, seasonal allergies, dry weather, and inhaled corticosteroids. There are two types: anterior, which is more common; and posterior, which is less common but more serious. Anterior nosebleeds generally occur from Kiesselbach's plexus while posterior bleeds generally occur from the sphenopalatine artery. The diagnosis is by direct observation. Prevention may include the use of petroleum jelly in the nose. Initially, treatment is generally the application of pressure for at least five minutes over the lower half of the nose. If this is not sufficient, nasal packing may be used. Tranexamic acid may also be helpful. If bleeding episodes continue, endoscopy is recommended. About 60% of people have a nosebleed at some point in their life. About 10% of nosebleeds are serious. Nosebleeds are rarely fatal, accounting for only 4 of the 2.4 million deaths in the U.S. in 1999. Nosebleeds most commonly affect those younger than 10 and older than 50.
Cause
Nosebleeds can occur due to a variety of reasons. Some of the most common causes include trauma from nose picking, blunt trauma (such as a motor vehicle accident), or insertion of a foreign object (more likely in children). Low relative humidity (including in centrally heated buildings), respiratory tract infections, chronic sinusitis, rhinitis or environmental irritants can cause inflammation and thinning of the tissue in the nose, leading to a greater likelihood of bleeding from the nose. Most causes of nose bleeding are self-limiting and do not require medical attention. However, if nosebleeds are recurrent or do not respond to home therapies, an underlying cause may need to be investigated. Some rarer causes are listed below:
Coagulopathy
Thrombocytopenia (thrombotic thrombocytopenic purpura, idiopathic thrombocytopenic purpura)
Von Willebrand disease
Hemophilia
Leukemia
HIV
Chronic liver disease—cirrhosis causes deficiency of factors II, VII, IX, and X
Dietary
Sulfur dioxide (sulphur dioxide) E220 (as a food preservative used particularly in wines, dried fruits, etc.)
Sulphites as food preservatives
Salicylates naturally occurring in some fruits and vegetables
Inflammatory
Granulomatosis with polyangiitis
Systemic lupus erythematosus
Medications/Drugs
Anticoagulation (warfarin, heparin, aspirin, etc.)
Insufflated drugs (particularly cocaine)
Nasal sprays (particularly prolonged or improper use of nasal steroids)
Neoplastic
Squamous cell carcinoma
Adenoid cystic carcinoma
Melanoma
Nasopharyngeal carcinoma
Nasopharyngeal angiofibroma
Nosebleeds can be a sign of cancer in the sinus area, which is rare, or tumors starting at the base of the brain, such as meningioma. Due to the sensitive location, nosebleeds caused by tumors are typically associated with other symptoms, such as hearing or vision problems.
Traumatic
Anatomical deformities (e.g. septal spurs)
Blunt trauma (usually a sharp blow to the face such as a punch, sometimes accompanying a nasal fracture)
Foreign bodies (such as fingers during nose-picking)
Digital trauma
Middle ear barotrauma (such as from descent in aircraft or ascent in scuba diving)
Nasal bone fracture
Septal fracture/perforation
Surgery (e.g. septoplasty and functional endoscopic sinus surgery)
Nasal bleeds may also be due to fracture of facial bones, namely the maxilla and zygoma.
Vascular
Hereditary hemorrhagic telangiectasia (Osler–Weber–Rendu disease)
Angioma
Aneurysm of the carotid artery
Pathophysiology
The nasal mucosa contains a rich blood supply that can be easily ruptured and cause bleeding. Rupture may be spontaneous or initiated by trauma. Nosebleeds are reported in up to 60% of the population with peak incidences in those under the age of ten and over the age of 50 and appear to occur in males more than females. An increase in blood pressure (e.g. due to general hypertension) tends to increase the duration of spontaneous epistaxis. Anticoagulant medication and disorders of blood clotting can promote and prolong bleeding. Spontaneous epistaxis is more common in the elderly as the nasal mucosa (lining) becomes dry and thin and blood pressure tends to be higher. The elderly are also more prone to prolonged nosebleeds as their blood vessels are less able to constrict and control the bleeding.
The vast majority of nosebleeds occur in the anterior (front) part of the nose from the nasal septum. This area is richly endowed with blood vessels (Kiesselbach's plexus). This region is also known as Little's area. Bleeding farther back in the nose is known as a posterior bleed and is usually due to bleeding from Woodruff's plexus, a venous plexus situated in the posterior part of the inferior meatus. Posterior bleeds are often prolonged and difficult to control. They can be associated with bleeding from both nostrils and with a greater flow of blood into the mouth. Sometimes blood flowing from other sources of bleeding passes through the nasal cavity and exits the nostrils. It is thus blood coming from the nose but is not a true nosebleed, that is, not truly originating from the nasal cavity. Such bleeding is called "pseudoepistaxis" (pseudo + epistaxis). Examples include blood coughed up through the airway and ending up in the nasal cavity, then dripping out.
Prevention
People with uncomplicated nosebleeds can use conservative methods to prevent future nosebleeds, such as sleeping in a humidified environment or applying petroleum jelly to the nares.
Treatment
Most anterior nosebleeds can be stopped by applying direct pressure, which helps by promoting blood clots. Those who have a nosebleed should first attempt to blow out any blood clots and then apply pressure for at least five minutes and up to 20 minutes. Pressure should be firm, and tilting the head forward helps decrease the chance of nausea and airway obstruction. When attempting to stop a nosebleed at home, the head should not be tilted back. Swallowing excess blood can irritate the stomach and cause vomiting. Vasoconstrictive medications such as oxymetazoline (Afrin) or phenylephrine are widely available over the counter for treatment of allergic rhinitis and may also be used to control benign cases of epistaxis. Those with nosebleeds that last longer than 20 minutes despite direct pressure should seek medical attention.
Nasal packing
If pressure and chemical cauterization cannot stop bleeding, nasal packing is the mainstay of treatment. There are several forms of nasal packing that can be contrasted by anterior nasal packing and posterior nasal packing. Traditionally, nasal packing was accomplished by packing gauze into the nose, thereby placing pressure on the vessels in the nose and stopping the bleeding. Traditional gauze packing has been replaced with products such as Merocel and the Rapid Rhino. The Merocel nasal tampon is similar to gauze packing except it is a synthetic foam polymer (made of polyvinyl alcohol and expands in the nose after application of water) that provides a less hospitable medium for bacteria. The Rapid Rhino stops nosebleeds using a balloon catheter, made of carboxymethylcellulose, which has a cuff that is inflated by air to stop bleeding through extra pressure in the nasal cavity. Systematic review articles have demonstrated that the efficacy in stopping nosebleeds is similar between the Rapid Rhino and Merocel packs; however, the Rapid Rhino has been shown to have greater ease of insertion and reduced discomfort. People who receive nasal packing need to return to a medical professional in 24–72 hours in order to have packing removed. There are also several dissolvable packing materials that stop bleeding through use of thrombotic agents that promote blood clots, such as surgicel. The thrombogenic foams and gels do not require removal and dissolve after a few days. Posterior nasal packing can be achieved by using a Foley catheter, blowing up the balloon when it is in the back of the throat, and applying traction. Complications of nasal packing include abscesses, septal hematomas, sinusitis, and pressure necrosis. In rare cases toxic shock syndrome can occur with prolonged nasal packing. As a result, many forms of nasal packing involve use of topical antistaphylococcal antibiotic ointment.
Tranexamic acid
Tranexamic acid helps promote blood clotting. For nosebleeds it can be applied to the site of bleeding, taken by mouth, or injected into a vein.
Cauterization
This method involves applying a chemical such as silver nitrate to the nasal mucosa, which burns and seals off the bleeding. Eventually the nasal tissue to which the chemical is applied will undergo necrosis. This form of treatment is best for mild bleeds, especially in children, that are clearly visible. A topical anesthetic (such as lidocaine) is usually applied prior to cauterization. Silver nitrate can cause blackening of the skin due to silver sulfide deposit, though this will fade with time.
Surgery
Ongoing bleeding despite good nasal packing is a surgical emergency and can be treated by endoscopic evaluation of the nasal cavity under general anesthesia to identify an elusive bleeding point or to directly ligate (tie off) the blood vessels supplying the nose. These blood vessels include the sphenopalatine, anterior and posterior ethmoidal arteries. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is 2 and can you explain it in depth? [/INST] 2 (two) is a number, numeral and digit. It is the natural number following 1 and preceding 3. It is the smallest and only even prime number. Because it forms the basis of a duality, it has religious and spiritual significance in many cultures.
Evolution
Arabic digit
The digit used in the modern Western world to represent the number 2 traces its roots back to the Indic Brahmic script, where "2" was written as two horizontal lines. The modern Chinese and Japanese languages (and Korean Hanja) still use this method. The Gupta script rotated the two lines 45 degrees, making them diagonal. The top line was sometimes also shortened and had its bottom end curve towards the center of the bottom line. In the Nagari script, the top line was written more like a curve connecting to the bottom line. In the Arabic Ghubar writing, the bottom line was completely vertical, and the digit looked like a dotless closing question mark. Restoring the bottom line to its original horizontal position, but keeping the top line as a curve that connects to the bottom line, leads to our modern digit. In fonts with text figures, the digit 2 usually is of x-height.
Etymology of two
The word two is derived from the Old English words twā (feminine), tū (neuter), and twēġen (masculine, which survives today in the form twain). The pronunciation /tuː/, like that of who, is due to the labialization of the vowel by the w (compare womb), which then disappeared before the related sound. The successive stages of pronunciation for the Old English twā would thus be /twɑː/, /twɔː/, /twoː/, /twuː/, and finally /tuː/.
In mathematics
Two is the smallest prime number, and the only even prime number, and for this reason it is sometimes called "the oddest prime". As the smallest prime number, it is also the smallest non-zero pronic number, and the only pronic prime. The next prime is three, which makes two and three the only two consecutive prime numbers. Two is the first prime number that does not have a proper twin prime with a difference two, while three is the first such prime number to have a twin prime. In consequence, the first pair of twin primes, three and five, encase four in-between, which is the square of two, or
{\displaystyle 2^{2}}. Two is the first Sophie Germain prime, the first factorial prime, the first Lucas prime, and the first Ramanujan prime. It is also a Motzkin number, a Bell number, an all-Harshad number, a meandric number, a semi-meandric number, an open meandric number, and the third (or fourth) Fibonacci number.
An integer is called even if it is divisible by 2. For integers written in a numeral system based on an even number such as decimal, divisibility by 2 is easily tested by merely looking at the last digit. If it is even, then the whole number is even. In particular, when written in the decimal system, all multiples of 2 will end in 0, 2, 4, 6, or 8.
Two is the base of the binary system, the numeral system with the fewest tokens that allows denoting a natural number n substantially more concisely (with log2 n tokens) than a direct representation by the corresponding count of a single token (with n tokens). This binary number system is used extensively in computing.
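A tiny illustration of the last-digit test and the compactness of binary notation (a sketch, not from the source):

```python
def is_even_by_last_digit(n: int) -> bool:
    """In an even base such as 10, evenness is visible in the last digit alone."""
    return int(str(abs(n))[-1]) in {0, 2, 4, 6, 8}

for n in (14, 37, 120):
    assert is_even_by_last_digit(n) == (n % 2 == 0)
    # bin(n) uses about log2(n) binary digits after the '0b' prefix
    print(n, bin(n), "even" if is_even_by_last_digit(n) else "odd")
```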
For any number x:
x + x = 2 · x addition to multiplication
x · x = x^2 multiplication to exponentiation
x^x = x↑↑2 exponentiation to tetration
Extending this sequence of operations by introducing the notion of hyperoperations, here denoted by "hyper(a,b,c)" with a and c being the first and second operand, and b being the level in the above sketched sequence of operations, the following holds in general:
hyper(x,n,x) = hyper(x,(n + 1),2).
Two therefore has the unique property that 2 + 2 = 2 · 2 = 2^2 = 2↑↑2 = 2↑↑↑2 = ..., disregarding the level of the hyperoperation, here denoted by Knuth's up-arrow notation. The number of up-arrows refers to the level of the hyperoperation.
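The fixed-point property is easy to check for the first few levels with a direct implementation of the standard hyperoperation recurrence (a minimal sketch; level numbering here starts at 1 for addition):

```python
def hyper(a: int, level: int, b: int) -> int:
    """Hyperoperations: level 1 = addition, 2 = multiplication,
    3 = exponentiation, 4 = tetration, and so on."""
    if level == 1:
        return a + b
    if b == 0:
        return 0 if level == 2 else 1  # identity element of the level below
    # a <level> b = a <level-1> (a <level> (b - 1))
    return hyper(a, level - 1, hyper(a, level, b - 1))

# 2 combined with 2 gives 4 at every level: 2+2 = 2*2 = 2**2 = 2^^2 = ...
print([hyper(2, level, 2) for level in range(1, 6)])  # [4, 4, 4, 4, 4]
```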
Two is the only number x such that the sum of the reciprocals of the natural powers of x equals itself. In symbols,
{\displaystyle \sum _{k=0}^{\infty }{\frac {1}{2^{k}}}=1+{\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+{\frac {1}{16}}+\cdots =2.}
This comes from the fact that:
{\displaystyle \sum _{k=0}^{\infty }{\frac {1}{n^{k}}}=1+{\frac {1}{n-1}}\quad {\mbox{for all}}\quad n\in \mathbb {R} >1.}
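A quick numeric check of the identity (illustrative only):

```python
def reciprocal_power_sum(n: float, terms: int = 60) -> float:
    """Partial sum of 1/n^0 + 1/n^1 + ..., which tends to 1 + 1/(n-1) for n > 1."""
    return sum(1 / n**k for k in range(terms))

for n in (2.0, 3.0, 10.0):
    print(n, reciprocal_power_sum(n), 1 + 1 / (n - 1))
# Only n = 2 reproduces itself: the series sums to 2.
```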
A Cantor space is a topological space {\displaystyle 2^{\mathbb {N} }} homeomorphic to the Cantor set. The countably infinite product topology of the simplest discrete two-point space, {0, 1}, is the traditional elementary example.
The sum of the reciprocals of all non-zero triangular numbers converges to 2.
Powers of two are central to the concept of Mersenne primes, and important to computer science. Two is the first Mersenne prime exponent.
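The triangular-number fact follows from the telescoping identity 1/T_k = 2/(k(k+1)) = 2(1/k - 1/(k+1)); a short numeric check (illustrative):

```python
def triangular(k: int) -> int:
    """k-th triangular number: 1, 3, 6, 10, ..."""
    return k * (k + 1) // 2

# 1/T_k = 2*(1/k - 1/(k+1)), so the partial sums telescope to 2 - 2/(m+1).
partial = sum(1 / triangular(k) for k in range(1, 100_000))
print(partial)  # 1.99998..., approaching 2
```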
Taking the square root of a number is such a common mathematical operation, that the spot on the root sign where the index would normally be written for cubic and other roots, may simply be left blank for square roots, as it is tacitly understood.
The square root of 2 was the first known irrational number.
The smallest field has two elements.
In a set-theoretical construction of the natural numbers, 2 is identified with the set {{∅},∅}. This latter set is important in category theory: it is a subobject classifier in the category of sets.
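In the von Neumann construction each natural number is the set of its predecessors, so 0 = ∅, 1 = {∅}, and 2 = {∅, {∅}} (the same set as above, written with its elements in the other order). A tiny sketch using frozensets:

```python
zero = frozenset()            # 0 = {}
one = frozenset({zero})       # 1 = {0} = {{}}
two = frozenset({zero, one})  # 2 = {0, 1} = {{}, {{}}}, the set from the text
print(len(two))               # 2: each number has as many elements as it names
```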
Two consecutive twos (as in "22" for "two twos"), or equivalently "2-2", is the only fixed point of John Conway's look-and-say function. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Dislocation of jaw and can you explain it in depth? [/INST] Dislocations occur when two bones that originally met at the joint detach. Dislocations should not be confused with subluxation, in which the joint is still partially attached to the bone. When a person has a dislocated jaw it is difficult to open and close the mouth. Dislocation can occur following a series of events if the jaw locks while open or is unable to close. If the jaw is dislocated, it may cause an extreme headache or inability to concentrate. When the muscles' alignment is out of sync, pain will occur due to unwanted rotation of the jaw. If the pain remains constant, it may require surgery to realign the jaw. Depending on the severity of the jaw's dislocation, pain relief such as paracetamol may assist to alleviate the initial chronic pain. If the pain relief is taken for an extended period of time, it may negatively affect the person while talking, eating, drinking, etc.
Symptoms
The symptoms can be numerous depending on the severity of the dislocation injury and how long the person is afflicted with the injury. Symptoms of a dislocated jaw include a bite that feels "off" or abnormal, a hard time talking or moving the jaw, inability to close the mouth completely, drooling due to not being able to shut the mouth completely, teeth that feel out of alignment, and a pain that becomes unbearable. The immediate symptom can be a loud crunch noise occurring right up against the eardrum. This is instantly followed by excruciating pain, particularly on the side where the dislocation occurred. Short-term symptoms can range from mild to chronic headaches, muscle tension or pain in the face, jaw and neck. Long-term symptoms can result in sleep deprivation, tiredness/lethargy, frustration, bursts of anger or a short fuse, difficulty performing everyday tasks, depression, social issues relating to difficulty talking, hearing sensitivity (particularly to high pitched sounds), tinnitus, and pain when seated associated with posture while at a computer and reading books, from general pressure on the jaw and facial muscles when tilting the head down or up, possibly causing subsequent facial asymmetry. In contrast, symptoms of a fractured jaw include bleeding coming from the mouth, inability to open the mouth wide without pain, bruising and swelling of the face, difficulty eating due to the constant pain, loss of feeling in the face (more specifically the lower lip), and reduced range of motion of the jaw.
Anatomy
The joint involved with jaw dislocation is the temporomandibular joint (TMJ). This joint is located where the mandibular condyles and the temporal bone meet. Membranes that surround the bones help during the hinging and gliding of jaw movement. For the mouth to close it requires the following muscles: the masseter, temporalis, and medial pterygoid muscle. For the jaw to open it requires the lateral pterygoid muscle.
Pathophysiology
There are four different positions of jaw dislocation: posterior, anterior, superior and lateral. The most common position is anterior, while the other types are rare. Anterior dislocation shifts the lower jaw forward if the mouth excessively opens. This type of dislocation may happen bilaterally or unilaterally after yawning. The muscles that are affected during anterior jaw dislocation are the masseter and temporalis which pull up on the mandible and the lateral pterygoid which relaxes the mandibular condyle. The condyle can get locked in front of the articular eminence. Posterior dislocation is possible for people who get injured by being punched in the chin. This dislocation will push the jaw back affecting the alignment of the mandibular condyle and mastoid. The external auditory canal may be fractured. Superior dislocations occur after being punched below the mandibular ramus as the mouth remains half-open. Since great force occurs in a punch, the angle of the jaw will be forced upward moving towards the condylar head. This can result in a fracture of the glenoid fossa and displacement of the condyle into the middle cranial fossa, potentially injuring the facial and vestibulocochlear nerves and the temporal lobe. Lateral dislocations move the mandibular condyle away from the skull and are likely to happen together with jaw fractures. Posterior, superior and lateral dislocations are uncommon injuries and usually result from high-energy trauma to the chin. By contrast, anterior dislocations are more often the result of low-energy trauma (e.g. tooth extraction) or secondary to a medical condition that affects the stability of the joint (e.g. seizures, ligamentous laxity, degeneration of joint capsule).
Diagnosis
As with other joint dislocations, clinical history and examination are crucial for diagnosis of a jaw dislocation. Commonly, plain and panoramic X-ray radiographies are used to determine the relative position of the mandibular condyle. If a complex or unusual injury is suspected, three-dimensional computed tomography is most reliable in diagnosing dislocation and possibly associated fractures or soft tissue injuries. In case of dislocations resulting from high-energy trauma, attention must also be paid to possible other injuries, particularly blunt or indirect trauma to the skull and cervical spine. Acutely life-threatening conditions need to be ruled out or treated in first line. For superior jaw dislocation in particular, serious intracranial complications such as epidural hematoma are possible and must be recognized and managed to prevent disability or even death. Therefore, neurological status has to be examined in patients with complex dislocations involving temporal bone fractures. Hearing deficits on the injured side may indicate damage to structures of the ear.
Treatment
Most temporomandibular disorders (TMDs) are self-limiting and do not get worse. Simple treatment, involving self-care practices, rehabilitation aimed at eliminating muscle spasms, and restoring correct coordination, is all that is required. Nonsteroidal anti-inflammatory analgesics (NSAIDs) should be used on a short-term, regular basis and not on an as-needed basis. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Viral disease and can you explain it in depth? [/INST] A viral disease (or viral infection) occurs when an organisms body is invaded by pathogenic viruses, and infectious virus particles (virions) attach to and enter susceptible cells.
Structural Characteristics
Basic structural characteristics, such as genome type, virion shape and replication site, generally share the same features among virus species within the same family.
Double-stranded DNA families: three are non-enveloped (Adenoviridae, Papillomaviridae and Polyomaviridae) and two are enveloped (Herpesviridae and Poxviridae). All of the non-enveloped families have icosahedral capsids.
Partly double-stranded DNA viruses: Hepadnaviridae. These viruses are enveloped.
One family of single-stranded DNA viruses infects humans: Parvoviridae. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Alcoholic cardiomyopathy and can you explain it in depth? [/INST] Alcoholic cardiomyopathy is a disease in which the chronic long-term abuse of alcohol (i.e., ethanol) leads to heart failure. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cardiac tamponade and can you explain it in depth? [/INST] Cardiac tamponade, also known as pericardial tamponade, is the buildup of fluid in the pericardium (the sac around the heart), resulting in compression of the heart. Onset may be rapid or gradual. Symptoms typically include those of obstructive shock including shortness of breath, weakness, lightheadedness, and cough. Other symptoms may relate to the underlying cause. Common causes of cardiac tamponade include cancer, kidney failure, chest trauma, myocardial infarction, and pericarditis. Other causes include connective tissue diseases, hypothyroidism, aortic rupture, autoimmune disease, and complications of cardiac surgery. In Africa, tuberculosis is a relatively common cause. Diagnosis may be suspected based on low blood pressure, jugular venous distension, or quiet heart sounds (together known as Beck's triad). A pericardial rub may be present in cases due to inflammation. The diagnosis may be further supported by specific electrocardiogram (ECG) changes, chest X-ray, or an ultrasound of the heart. If fluid increases slowly the pericardial sac can expand to contain more than 2 liters; however, if the increase is rapid, as little as 200 mL can result in tamponade. Tamponade is a medical emergency. When it results in symptoms, drainage is necessary. This can be done by pericardiocentesis, surgery to create a pericardial window, or a pericardiectomy. Drainage may also be necessary to rule out infection or cancer. Other treatments may include the use of dobutamine or, in those with low blood volume, intravenous fluids. Those with few symptoms and no worrisome features can often be closely followed. The frequency of tamponade is unclear. One estimate from the United States places it at 2 per 10,000 per year.
Signs and symptoms
Onset may be rapid (acute) or more gradual (subacute). Signs of cardiac tamponade typically include those of cardiogenic shock including shortness of breath, weakness, lightheadedness, cough, and those of Beck's triad, e.g. jugular vein distention, quiet heart sounds and hypotension. Other symptoms may relate to the underlying cause. Other general signs of shock (such as fast heart rate, shortness of breath and decreasing level of consciousness) may also occur. However, some of these signs may not be present in certain cases. A fast heart rate, although expected, may be absent in people with uremia and hypothyroidism. According to Reddy, cardiac tamponade and its progression can be described in 3 different phases. In phase I, the required filling pressure increases due to the high stiffness of the ventricles. This is because of the accumulation of pericardial fluid in the pericardial cavity. During phase II, the pericardial pressure exceeds the ventricular filling pressure caused by the further accumulation of pericardial fluid. This results in a decrease in cardiac input and output. A further decrease of cardiac input and output is typical in phase III of the progression of cardiac tamponade. This is caused by the equilibration of left ventricular filling and pericardial pressure, leading to “severe deterioration of end-organ perfusion.” Some of the symptoms, as a consequence, include abdominal pain due to liver engorgement.
Causes
Cardiac tamponade is caused by a large or uncontrolled pericardial effusion, i.e. the buildup of fluid inside the pericardium. This commonly occurs as a result of chest trauma (both blunt and penetrating), but can also be caused by myocardial infarction, myocardial rupture, cancer (most often Hodgkin lymphoma), uremia, pericarditis, or cardiac surgery, and rarely occurs during retrograde aortic dissection, or while the person is taking anticoagulant therapy. The effusion can occur rapidly (as in the case of trauma or myocardial rupture), or over a more gradual period of time (as in cancer). The fluid involved is often blood, but pus is also found in some circumstances.
Surgery
One of the most common settings for cardiac tamponade is in the first 7 days after heart surgery. After heart surgery, chest tubes are placed to drain blood. These chest tubes, however, are prone to clot formation. When a chest tube becomes occluded or clogged, the blood that should be drained can accumulate around the heart, leading to tamponade.
Pathophysiology
The pericardium, the double-walled sac surrounding the heart, consists of a fibrous pericardium layer on the outside and a double-layered serous pericardium on the inside. Between the two layers of the serous pericardium is the pericardial space, which is filled with lubricating serous fluid that prevents friction as the heart contracts. The outer layer of the heart is made of fibrous tissue which does not easily stretch, so once excess fluid begins to enter the pericardial space, pressure starts to increase. Consequently, the heart becomes compressed due to its inability to fully relax. If fluid continues to accumulate, each successive diastolic period leads to less blood entering the ventricles. Eventually, increasing pressure on the heart forces the septum to bend in towards the left ventricle, leading to a decrease in stroke volume. This causes the development of obstructive shock, which if left untreated may lead to cardiac arrest (often presenting as pulseless electrical activity). The decrease in stroke volume can also ultimately lead to a decrease in cardiac output, which could be signaled by tachycardia and hypotension.
Diagnosis
The three classic signs, known as Beck's triad, are low blood pressure, jugular-venous distension, and muffled heart sounds. Other signs may include pulsus paradoxus (a drop of at least 10 mmHg in arterial blood pressure with inspiration), and ST segment changes on the electrocardiogram, which may also show low voltage QRS complexes.
Medical imaging
Tamponade can often be diagnosed radiographically. Echocardiography, which is the diagnostic test of choice, often demonstrates an enlarged pericardium or collapsed ventricles. A large cardiac tamponade will show as an enlarged globular-shaped heart on chest x-ray. During inspiration, the negative pressure in the thoracic cavity will cause increased pressure into the right ventricle. This increased pressure in the right ventricle will cause the interventricular septum to bulge towards the left ventricle, leading to decreased filling of the left ventricle. At the same time, right ventricle volume is markedly diminished and sometimes it can collapse.
Differential diagnosis
Initial diagnosis of cardiac tamponade can be challenging, as there is a broad differential diagnosis. The differential includes possible diagnoses based on symptoms, time course, mechanism of injury, and patient history. Rapid onset cardiac tamponade may also appear similar to pleural effusions, shock, pulmonary embolism, and tension pneumothorax. If symptoms appeared more gradually, the differential diagnosis includes acute heart failure. In a person with trauma presenting with pulseless electrical activity in the absence of hypovolemia and tension pneumothorax, the most likely diagnosis is cardiac tamponade. In addition to the diagnostic complications afforded by the wide-ranging differential diagnosis for chest pain, diagnosis can be additionally complicated by the fact that people will often be weak or faint at presentation. For instance, a fast rate of breathing and difficulty breathing on exertion that progresses to air hunger at rest can be a key diagnostic symptom, but it may not be possible to obtain such information from people who are unconscious or who have convulsions at presentation.
Treatment
Pre-hospital care
Initial treatment given will usually be supportive in nature, for example administration of oxygen, and monitoring. There is little care that can be provided pre-hospital other than general treatment for shock. Some teams have performed an emergency thoracotomy to release clotting in the pericardium caused by a penetrating chest injury. Prompt diagnosis and treatment is the key to survival with tamponade. Some pre-hospital providers will have facilities to provide pericardiocentesis, which can be life-saving. If the person has already suffered a cardiac arrest, pericardiocentesis alone cannot ensure survival, and so rapid evacuation to a hospital is usually the more appropriate course of action.
Hospital management
Initial management in hospital is by pericardiocentesis. </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Lactose intolerance and can you explain it in depth? [/INST] Lactose intolerance is a common condition caused by a decreased ability to digest lactose, a sugar found in dairy products. Those affected vary in the amount of lactose they can tolerate before symptoms develop. Symptoms may include abdominal pain, bloating, diarrhea, flatulence, and nausea. These symptoms typically start thirty minutes to two hours after eating or drinking milk-based food. Their severity typically depends on the amount a person eats or drinks. Lactose intolerance does not cause damage to the gastrointestinal tract. Lactose intolerance is due to the lack of the enzyme lactase in the small intestine to break lactose down into glucose and galactose. There are four types: primary, secondary, developmental, and congenital. Primary lactose intolerance occurs as the amount of lactase declines as people age. Secondary lactose intolerance is due to injury to the small intestine. Such injury could be the result of infection, celiac disease, inflammatory bowel disease, or other diseases. Developmental lactose intolerance may occur in premature babies and usually improves over a short period of time. Congenital lactose intolerance is an extremely rare genetic disorder in which little or no lactase is made from birth. The onset of primary lactose intolerance, the most common type, is typically in late childhood or early adulthood, but prevalence increases with age. Diagnosis may be confirmed if symptoms resolve following elimination of lactose from the diet. Other supporting tests include a hydrogen breath test and a stool acidity test. Other conditions that may produce similar symptoms include irritable bowel syndrome, celiac disease, and inflammatory bowel disease. Lactose intolerance is different from a milk allergy. Management is typically by decreasing the amount of lactose in the diet, taking lactase supplements, or treating the underlying disease. People are typically able to drink at least one cup of milk without developing symptoms, with greater amounts tolerated if drunk with a meal or throughout the day. Worldwide, around 65% of adults are affected by lactose malabsorption. Other mammals usually lose the ability to digest lactose after weaning and this was the ancestral state of all humans before the recent evolution of lactase persistence, which extends lactose tolerance into adulthood. Lactase persistence evolved in several populations independently, probably as an adaptation to the domestication of dairy animals around 10,000 years ago. Today the prevalence of lactose tolerance varies widely between regions and ethnic groups. The ability to digest lactose is most common in people of European descent, and to a lesser extent in parts of the Middle East and Africa. Traditional food cultures reflect local variations in tolerance and historically many societies have adapted to low levels of tolerance by making dairy products that contain less lactose than fresh milk. The medicalization of lactose intolerance as a disorder has been attributed to biases in research history, since most early studies were conducted amongst populations which are normally tolerant, as well as the cultural and economic importance of milk in countries such as the United States.
Terminology
Lactose intolerance primarily refers to a syndrome with one or more symptoms upon the consumption of food substances containing lactose sugar. Individuals may be lactose intolerant to varying degrees, depending on the severity of these symptoms.
Hypolactasia is the term specifically for the small intestine producing little or no lactase enzyme. If a person with hypolactasia consumes lactose sugar, it results in lactose malabsorption. The digestive system is unable to process the lactose sugar, and the unprocessed sugars in the gut produce the symptoms of lactose intolerance.
Lactose intolerance is not an allergy, because it is not an immune response, but rather a sensitivity to dairy caused by a deficiency of the lactase enzyme. Milk allergy, occurring in about 2% of the population, is a separate condition with distinct symptoms, which occur when the presence of milk proteins triggers an immune reaction.
Signs and symptoms
The principal manifestation of lactose intolerance is an adverse reaction to products containing lactose (primarily milk), including abdominal bloating and cramps, flatulence, diarrhea, nausea, borborygmi, and vomiting (particularly in adolescents). These appear one-half to two hours after consumption. The severity of these signs and symptoms typically increases with the amount of lactose consumed; most lactose-intolerant people can tolerate a certain level of lactose in their diets without ill effects. Because lactose intolerance is not an allergy, it does not produce allergy symptoms (such as itching, hives, or anaphylaxis).
Causes
Lactose intolerance is a consequence of lactase deficiency, which may be genetic (primary hypolactasia and primary congenital alactasia) or environmentally induced (secondary or acquired hypolactasia). In either case, symptoms are caused by insufficient levels of lactase in the lining of the duodenum. Lactose, a disaccharide molecule found in milk and dairy products, cannot be directly absorbed through the wall of the small intestine into the bloodstream, so, in the absence of lactase, it passes intact into the colon. Bacteria in the colon can metabolise lactose, and the resulting fermentation produces copious amounts of gas (a mixture of hydrogen, carbon dioxide, and methane) that causes the various abdominal symptoms. The unabsorbed sugars and fermentation products also raise the osmotic pressure of the colon, causing an increased flow of water into the bowels (diarrhea). Lactose intolerance in infants (congenital lactase deficiency) is caused by mutations in the LCT gene. The LCT gene provides the instructions for making lactase. Mutations are believed to interfere with the function of lactase, causing affected infants to have a severely impaired ability to digest lactose in breast milk or formula. Lactose intolerance in adulthood is a result of gradually decreasing activity (expression) of the LCT gene after infancy, which occurs in most humans. A specific DNA sequence in the MCM6 gene helps control whether the LCT gene is turned on or off. At least several thousand years ago, some humans developed a mutation in the MCM6 gene that keeps the LCT gene turned on even after breast feeding is stopped. Populations that are lactose intolerant lack this mutation. The LCT and MCM6 genes are both located on the long arm (q) of chromosome 2 in region 21; the locus can be expressed as 2q21. Lactase deficiency is also linked to ancestry, and its prevalence varies widely. A 2016 study of over 60,000 participants from 89 countries found regional prevalence of lactose malabsorption was "64% (54–74) in Asia (except Middle East), 47% (33–61) in eastern Europe, Russia, and former Soviet Republics, 38% (CI 18–57) in Latin America, 70% (57–83) in the Middle East, 66% (45–88) in northern Africa, 42% (13–71) in northern America, 45% (19–71) in Oceania, 63% (54–72) in sub-Saharan Africa, and 28% (19–37) in northern, southern and western Europe." According to Johns Hopkins Medicine, lactose intolerance is more common in Asian Americans, African Americans, Mexican Americans, and Native Americans. Analysis of the DNA of 94 ancient skeletons in Europe and Russia concluded that the mutation for lactose tolerance appeared about 4,300 years ago and spread throughout the European population. Some human populations have developed lactase persistence, in which lactase production continues into adulthood, probably as a response to the benefits of being able to digest milk from farm animals. Some have argued that this links intolerance to natural selection favoring lactase-persistent individuals, but it is also consistent with a physiological response to decrease lactase production when it is not needed, in cultures in which dairy products are not an available food source. Although populations in Europe, India, Arabia, and Africa were first thought to have high rates of lactase persistence because of a single mutation, lactase persistence has been traced to a number of mutations that occurred independently.
Different alleles for lactase persistence have developed at least three times in East African populations, with persistence extending from 26% in Tanzania to 88% in the Beja pastoralist population in Sudan. The accumulation of epigenetic factors, primarily DNA methylation, in the extended LCT region, including the gene enhancer located in the MCM6 gene near the C/T-13910 SNP, may also contribute to the onset of lactose intolerance in adults. Age-dependent expression of LCT in mouse intestinal epithelium has been associated with DNA methylation in the gene enhancer. Lactose intolerance is classified according to its causes as:
Primary hypolactasia
Primary hypolactasia, or primary lactase deficiency, is genetic, only affects adults, and is caused by the absence of a lactase persistence allele. In individuals without the lactase persistence allele, less lactase is produced by the body over time, leading to hypolactasia in adulthood. The frequency of lactase persistence, which allows lactose tolerance, varies enormously worldwide: it is highest in Northwestern Europe, declines across southern Europe and the Middle East, and is low in Asia and most of Africa, although it is common in pastoralist populations from Africa.
Secondary hypolactasia
Secondary hypolactasia or secondary lactase deficiency, also called acquired hypolactasia or acquired lactase deficiency, is caused by an injury to the small intestine. This form of lactose intolerance can occur in both infants and lactase persistent adults and is generally reversible. It may be caused by acute gastroenteritis, coeliac disease, Crohns disease, ulcerative colitis, chemotherapy, intestinal parasites (such as giardia), or other environmental causes.
Primary congenital alactasia
Primary congenital alactasia, also called congenital lactase deficiency, is an extremely rare, autosomal recessive enzyme defect that prevents lactase expression from birth. People with congenital lactase deficiency cannot digest lactose from birth, so cannot digest breast milk. This genetic defect is characterized by a complete lack of lactase (alactasia). About 40 cases have been reported worldwide, mainly limited to Finland. Before the 20th century, babies born with congenital lactase deficiency often did not survive, but death rates decreased with soybean-derived infant formulas and manufactured lactose-free dairy products.
Diagnosis
In order to assess lactose intolerance, intestinal function is challenged by ingesting more dairy products than can be readily digested. Clinical symptoms typically appear within 30 minutes, but may take up to two hours, depending on other foods and activities. Substantial variability in response (symptoms of nausea, cramping, bloating, diarrhea, and flatulence) is to be expected, as the extent and severity of lactose intolerance varies among individuals. The next step is to determine whether it is due to primary lactase deficiency or an underlying disease that causes secondary lactase deficiency. Physicians should investigate the presence of undiagnosed coeliac disease, Crohn disease, or other enteropathies when secondary lactase deficiency is suspected and infectious gastroenteritis has been ruled out. Lactose intolerance is distinct from milk allergy, an immune response to cows milk proteins. The two may be distinguished in diagnosis by giving lactose-free milk: this produces no symptoms in the case of lactose intolerance, but the same reaction as normal milk in the presence of a milk allergy. A person can have both conditions. If positive confirmation is necessary, several tests are available.
Hydrogen breath test
In a hydrogen breath test, the most accurate lactose intolerance test, after an overnight fast, 25 grams of lactose (in a solution with water) are swallowed. If the lactose cannot be digested, enteric bacteria metabolize it and produce hydrogen, which, along with methane, if produced, can be detected on the patients breath by a clinical gas chromatograph or compact solid-state detector. The test takes about 2.5 hours to complete. If the hydrogen levels in the patients breath are high, they may have lactose intolerance. This test is not usually done on babies and very young children, because it can cause severe diarrhea.
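The readout described above is threshold-based. As a minimal Python sketch, assuming the commonly cited cutoff of a rise of at least 20 ppm of breath hydrogen over baseline (a figure not stated in the text above, so treat it as an assumption), the interpretation logic might look like this; the function name and sample values are illustrative, not a clinical tool.

```python
# Hedged sketch of hydrogen breath test interpretation.
# ASSUMPTION: a >= 20 ppm rise over baseline is used as the cutoff; this
# figure is a common clinical convention, not taken from the text above.

def interpret_breath_test(baseline_ppm: float, readings_ppm: list[float],
                          rise_cutoff_ppm: float = 20.0) -> str:
    """Classify breath-hydrogen readings taken after a 25 g lactose load."""
    max_rise = max(r - baseline_ppm for r in readings_ppm)
    if max_rise >= rise_cutoff_ppm:
        return f"suggestive of lactose malabsorption (rise of {max_rise:.0f} ppm)"
    return f"not suggestive of malabsorption (rise of {max_rise:.0f} ppm)"

print(interpret_breath_test(5.0, [8.0, 14.0, 32.0, 41.0]))  # positive example
print(interpret_breath_test(5.0, [6.0, 9.0, 11.0, 12.0]))   # negative example
```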
Lactose tolerance test
After an overnight fast, blood is drawn and then 50 grams of lactose (in aqueous solution) are swallowed. Blood is then drawn again at the 30-minute, 1-hour, 2-hour, and 3-hour marks. If the lactose cannot be digested, blood glucose levels will rise by less than 20 mg/dl. In conjunction, measuring the blood glucose level every 10 to 15 minutes after ingestion will show a "flat curve" in individuals with lactose malabsorption, while the lactase persistent will have a significant "top", with a typical elevation of 50% to 100%, within one to two hours. However, due to the need for frequent blood sampling, this approach has been largely replaced by breath testing.
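The glucose criterion above translates directly into a small check. A minimal Python sketch, using only the 20 mg/dl rise threshold and the sampling schedule given in the text; the function name and sample values are illustrative:

```python
# Sketch of the blood-glucose readout described above: after a 50 g
# lactose load, a peak rise of less than 20 mg/dl over the fasting
# value is read as suggestive of lactose malabsorption.

def glucose_rise_suggests_malabsorption(fasting_mg_dl: float,
                                        post_load_mg_dl: list[float]) -> bool:
    """True if peak post-load glucose rises by less than 20 mg/dl."""
    peak_rise = max(post_load_mg_dl) - fasting_mg_dl
    return peak_rise < 20.0

# Samples drawn at the 30-minute, 1-hour, 2-hour, and 3-hour marks (mg/dl):
print(glucose_rise_suggests_malabsorption(85.0, [92.0, 97.0, 95.0, 90.0]))    # True: flat curve
print(glucose_rise_suggests_malabsorption(85.0, [120.0, 140.0, 110.0, 95.0])) # False: normal peak
```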
Stool acidity test
This test can be used to diagnose lactose intolerance in infants, for whom other forms of testing are risky or impractical. The infant is given lactose to drink. If the individual is tolerant, the lactose is digested and absorbed in the small intestine; otherwise, it is not digested and absorbed, and it reaches the colon. The bacteria in the colon, mixed with the lactose, cause acidity in stools. Stools passed after the ingestion of the lactose are tested for level of acidity. If the stools are acidic, the infant is intolerant to lactose.
Stool pH in lactose intolerance is less than 5.5.
Intestinal biopsy
An intestinal biopsy can confirm lactase deficiency following discovery of elevated hydrogen in the hydrogen breath test. Modern techniques have enabled a bedside test, identifying the presence of the lactase enzyme on upper gastrointestinal endoscopy instruments. However, for research applications such as mRNA measurements, a specialist laboratory is required.
Stool sugar chromatography
Chromatography can be used to separate and identify undigested sugars present in faeces. Although lactose may be detected in the faeces of people with lactose intolerance, this test is not considered reliable enough to conclusively diagnose or exclude lactose intolerance.
Genetic diagnostic
Genetic tests may be useful in assessing whether a person has primary lactose intolerance. Lactase activity persistence in adults is associated with two polymorphisms, C/T-13910 and G/A-22018, located in the MCM6 gene. These polymorphisms may be detected by molecular biology techniques applied to DNA extracted from blood or saliva samples; genetic kits specific for this diagnosis are available. The procedure consists of extracting and amplifying DNA from the sample, followed by a hybridization protocol on a strip. Colored bands are obtained as a result, and depending on the combination, it is possible to determine whether the patient is lactose intolerant. This test allows a noninvasive, definitive diagnosis.
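As a rough illustration of how such a readout maps genotypes to a result, here is a minimal Python sketch. It assumes the usual interpretation that the T allele at C/T-13910 and the A allele at G/A-22018 are the persistence-associated variants; the genotype strings and the simple OR rule are simplifications, and a real kit's decision logic may differ.

```python
# Hedged sketch: mapping the two MCM6 polymorphisms named above to a
# lactase-persistence call. ASSUMPTION: T at -13910 and A at -22018 are
# the persistence-associated alleles; real assays may use finer rules.

def lactase_persistence_likely(genotype_13910: str, genotype_22018: str) -> bool:
    """True if at least one persistence-associated allele is present."""
    return "T" in genotype_13910.upper() or "A" in genotype_22018.upper()

print(lactase_persistence_likely("CT", "GA"))  # True: heterozygous carrier
print(lactase_persistence_likely("CC", "GG"))  # False: no persistence allele
```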
Frequency
Lactose intolerance is most common in people of East Asian descent, with 70 to 100 percent of people affected in these communities. Lactose intolerance is also more common in people of West African, Arab, and Jewish descent, while only about 5 percent of people of European descent are lactose intolerant.
Management
When lactose intolerance is due to secondary lactase deficiency, treatment of the underlying disease may allow lactase activity to return to normal levels. In people with celiac disease, lactose intolerance normally reverts or improves several months after starting a gluten-free diet, but temporary dietary restriction of lactose may be needed. People with primary lactase deficiency cannot modify their bodys ability to produce lactase. In societies where lactose intolerance is the norm, it is not considered a condition that requires treatment. However, where dairy is a larger component of the normal diet, a number of efforts may be useful. There are four general principles in dealing with lactose intolerance: avoidance of dietary lactose, substitution to maintain nutrient intake, regulation of calcium intake, and use of enzyme substitutes. Regular consumption of dairy food by lactase-deficient individuals may also reduce symptoms of intolerance by promoting colonic bacteria adaptation.
Dietary avoidance
The primary way of managing the symptoms of lactose intolerance is to limit the intake of lactose to a level that can be tolerated. Lactase deficient individuals vary in the amount of lactose they can tolerate, and some report that their tolerance varies over time, depending on health status and pregnancy. However, as a rule of thumb, people with primary lactase deficiency and no small intestine injury are usually able to consume at least 12 grams of lactose per sitting without symptoms, or with only mild symptoms, with greater amounts tolerated if consumed with a meal or throughout the day.
Lactose is found primarily in dairy products, which vary in the amount of lactose they contain (a rough per-serving estimate is sketched after this list):
Milk – unprocessed cows milk is about 4.7% lactose; goats milk 4.7%; sheeps milk 4.7%; buffalo milk 4.86%; and yak milk 4.93%.
Sour cream and buttermilk – if made in the traditional way, this may be tolerable, but most modern brands add milk solids.
Butter – the process of making butter largely removes lactose, but it is still present in small quantities; clarified butter contains a negligible amount of lactose.
Yogurt – lactobacilli used in the production of yogurt metabolize lactose to varying degrees, depending on the type of yogurt. Some bacteria found in yogurt also produce their own lactase, which facilitates digestion in the intestines of lactose intolerant individuals.
Cheese – fermentation also reduces the lactose content of cheeses and aging reduces it further; traditionally made hard cheeses might contain just 10% of the lactose found in an equivalent volume of milk. However, manufactured cheeses may be produced using processes that do not have the same lactose-reducing properties.
There is no standardized method for measuring the lactose content of food. The stated dairy content of a product also varies according to manufacturing processes and labelling practices, and commercial terminology varies between languages and regions. As a result, absolute figures for the amount of lactose consumed (by weight) may not be very reliable. Kosher products labeled pareve or fleishig are free of milk. However, if a "D" (for "dairy") is present next to the circled "K", "U", or other hechsher, the food product likely contains milk solids, although it may also simply indicate the product was produced on equipment shared with other products containing milk derivatives.
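Using the milk percentages listed above, here is a back-of-envelope Python sketch of the per-serving arithmetic, compared against the roughly 12-gram rule of thumb mentioned earlier; the 244 g serving size (about one US cup of milk) and the threshold comparison are illustrative assumptions:

```python
# Rough per-serving lactose estimate from the percentages listed above.
# ASSUMPTIONS: one serving = 244 g (about one US cup of milk); the ~12 g
# figure is the rule-of-thumb tolerance mentioned earlier in the text.

LACTOSE_FRACTION = {"cow": 0.047, "goat": 0.047, "sheep": 0.047,
                    "buffalo": 0.0486, "yak": 0.0493}

def lactose_grams(milk: str, serving_g: float = 244.0) -> float:
    """Approximate grams of lactose in one serving of the given milk."""
    return LACTOSE_FRACTION[milk] * serving_g

for milk in LACTOSE_FRACTION:
    grams = lactose_grams(milk)
    note = "over" if grams > 12.0 else "within"
    print(f"{milk:8s}: {grams:4.1f} g lactose per cup ({note} the ~12 g rule of thumb)")
```

On these assumptions a single cup of cows milk carries about 11.5 g of lactose, which is consistent with the observation above that one cup is often tolerated.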
Lactose is also a commercial food additive used for its texture, flavor, and adhesive qualities. It is found in additives labelled as casein, caseinate, whey, lactoserum, milk solids, modified milk ingredients, etc. As such, lactose is found in foods such as processed meats (sausages/hot dogs, sliced meats, pâtés), gravy stock powder, margarines, sliced breads, breakfast cereals, potato chips, processed foods, medications, prepared meals, meal replacements (powders and bars), protein supplements (powders and bars), and even beers in the milk stout style. Some barbecue sauces and liquid cheeses used in fast-food restaurants may also contain lactose. Lactose is often used as the primary filler (main ingredient) in most prescription and non-prescription solid pill form medications, though product labeling seldom mentions the presence of lactose or milk, and neither do the product monographs provided to pharmacists; most pharmacists are unaware of the wide-scale use of lactose in such medications until they contact the supplier or manufacturer for verification.
Milk substitutes
Plant-based milks and derivatives such as soy milk, rice milk, almond milk, coconut milk, hazelnut milk, oat milk, hemp milk, macadamia nut milk, and peanut milk are inherently lactose-free. Low-lactose and lactose-free versions of foods are often available to replace dairy-based foods for those with lactose intolerance.
Lactase supplements
When lactose avoidance is not possible, or when a person chooses to consume such items, enzymatic lactase supplements may be used. Lactase enzymes similar to those produced in the small intestines of humans are produced industrially by fungi of the genus Aspergillus. The enzyme, β-galactosidase, is available in tablet form in a variety of doses, in many countries without a prescription. It functions well only in high-acid environments, such as that found in the human gut due to the addition of gastric juices from the stomach. Too much acid, however, can denature it, so it should not be taken on an empty stomach. Also, the enzyme is ineffective if it does not reach the small intestine by the time the problematic food does. Lactose-sensitive individuals can experiment with both timing and dosage to fit their particular needs.
While essentially the same process as normal intestinal lactose digestion, direct treatment of milk employs a different variety of industrially produced lactase. This enzyme, produced by yeast from the genus Kluyveromyces, takes much longer to act, must be thoroughly mixed throughout the product, and is destroyed by even mildly acidic environments. Its main use is in producing the lactose-free or lactose-reduced dairy products sold in supermarkets.
Rehabituation to dairy products
Regular consumption of dairy foods containing lactose can promote a colonic bacteria adaptation, enhancing a favorable microbiome, which allows people with primary lactase deficiency to diminish their intolerance and to consume more dairy foods. The way to induce tolerance is based on progressive exposure, consuming smaller amounts frequently, distributed throughout the day. Lactose intolerance can also be managed by ingesting live yogurt cultures containing lactobacilli that are able to digest the lactose in other dairy products.
Epidemiology
Worldwide, about 65% of people experience some form of lactose intolerance as they age past infancy, but there are significant differences between populations and regions. As few as 5% of northern Europeans are lactose intolerant, while as many as 90% of adults in parts of Asia are lactose intolerant. Some populations, from an evolutionary perspective, have a better genetic makeup for tolerating lactose than others. In northern European countries, early adoption of dairy farming conferred a selective evolutionary advantage on individuals who could better tolerate lactose. This led to higher frequencies of lactose tolerance in these countries. For example, almost 100% of Irish people are predicted to be lactose tolerant. Conversely, southern regions, such as Africa, did not adopt dairy farming as early, and tolerance from milk consumption did not develop in the same way as in northern Europe. Lactose intolerance is common among people of Jewish descent, as well as those from West Africa, the Arab countries, Greece, and Italy. Different populations present certain gene constructs depending on the evolutionary and cultural pre-settings of the geographical region.
History
Greater lactose tolerance has come about in two ways. Some populations have developed genetic changes that allow the digestion of lactose: lactase persistence. Other populations developed cooking methods like milk fermentation. Lactase persistence in humans evolved relatively recently (in the last 10,000 years) among some populations. Around 8,000 years ago in modern-day Turkey, humans became reliant on newly domesticated animals that could be milked, such as cows, sheep, and goats. This resulted in a higher frequency of lactase persistence. Lactase persistence became high in regions such as Europe, Scandinavia, the Middle East and Northwestern India. However, most people worldwide remain lactase non-persistent. Populations that raised animals not used for milk tend to have lactose intolerance rates of 90–100 percent. For this reason, lactase persistence is of some interest to the fields of anthropology, human genetics, and archaeology, which typically use the genetically derived persistence/non-persistence terminology. Apart from genetic predisposition, the rise of dairying and of producing dairy products from cow milk varies across different regions of the world. The process of turning milk into cheese dates back earlier than 5200 BC. DNA analysis in February 2012 revealed that Ötzi was lactose intolerant, supporting the theory that lactose intolerance was still common at that time, despite the increasing spread of agriculture and dairying. Genetic analysis shows lactase persistence has developed several times in different places independently, in an example of convergent evolution.
History of research
It was not until relatively recently that medicine recognised the worldwide prevalence of lactose intolerance and its genetic causes. Its symptoms were described as early as the time of Hippocrates (460–370 BC), but until the 1960s, the prevailing assumption was that tolerance was the norm. Intolerance was explained as the result of a milk allergy, intestinal pathogens, or as being psychosomatic – it being recognised that some cultures did not practice dairying, and people from those cultures often reacted badly to consuming milk. Two reasons have been given for this misconception </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Tobacco smoking and can you explain it in depth? [/INST] Tobacco smoking is the practice of burning tobacco and ingesting the resulting smoke. The smoke may be inhaled, as is done with cigarettes, or simply released from the mouth, as is generally done with pipes and cigars. The practice is believed to have begun as early as 5000–3000 BC in Mesoamerica and South America. Tobacco was introduced to Eurasia in the late 16th century by European colonists, where it followed common trade routes. The practice encountered criticism from its first import into the Western world onwards but embedded itself in certain strata of a number of societies before becoming widespread upon the introduction of automated cigarette-rolling apparatus. Smoking is the most common method of consuming tobacco, and tobacco is the most common substance smoked. The agricultural product is often mixed with additives and then combusted. The resulting smoke is then inhaled and the active substances absorbed through the alveoli in the lungs or the oral mucosa. Many substances in cigarette smoke trigger chemical reactions in nerve endings, which heighten heart rate, alertness and reaction time, among other things. Dopamine and endorphins are released, which are often associated with pleasure. German scientists identified a link between smoking and lung cancer in the late 1920s, leading to the first anti-smoking campaign in modern history, albeit one truncated by the collapse of Nazi Germany at the end of World War II. In 1950, British researchers demonstrated a clear relationship between smoking and cancer. Evidence continued to mount in the 1960s, which prompted political action against the practice. Rates of consumption since 1965 in the developed world have either peaked or declined. However, they continue to climb in the developing world. As of 2008 to 2010, tobacco is used by about 49% of men and 11% of women aged 15 or older in fourteen low-income and middle-income countries (Bangladesh, Brazil, China, Egypt, India, Mexico, Philippines, Russia, Thailand, Turkey, Ukraine, Uruguay and Vietnam), with about 80% of this usage in the form of smoking. The gender gap tends to be less pronounced in lower age groups. Many smokers begin during adolescence or early adulthood. During the early stages, a combination of perceived pleasure acting as positive reinforcement and desire to respond to social peer pressure may offset the unpleasant symptoms of initial use, which typically include nausea and coughing. After an individual has smoked for some years, the avoidance of withdrawal symptoms and negative reinforcement become the key motivations to continue.
A 2009 study of the first smoking experiences of seventh-grade students found that the most common factor leading students to smoke is cigarette advertising. Smoking by parents, siblings and friends also encourages students to smoke.
History
Use in ancient cultures
Smokings history dates back to as early as 5000–3000 BC, when the agricultural product began to be cultivated in Mesoamerica and South America; consumption later evolved into burning the plant substance either by accident or with intent of exploring other means of consumption. The practice worked its way into shamanistic rituals. Many ancient civilizations – such as the Babylonians, the Indians, and the Chinese – burnt incense during religious rituals. Smoking in the Americas probably had its origins in the incense-burning ceremonies of shamans but was later adopted for pleasure or as a social tool. The smoking of tobacco and various hallucinogenic drugs was used to achieve trances and to come into contact with the spirit world. Also, to stimulate respiration, tobacco smoke enemas were used. Eastern North American tribes would carry large amounts of tobacco in pouches as a readily accepted trade item and would often smoke it in ceremonial pipes, either in sacred ceremonies or to seal bargains. Adults as well as children enjoyed the practice. It was believed that tobacco was a gift from the Creator and that the exhaled tobacco smoke was capable of carrying ones thoughts and prayers to heaven. Apart from smoking, tobacco had uses as medicine. As a pain killer it was used for earache and toothache and occasionally as a poultice. Smoking was said by the desert Indians to be a cure for colds, especially if the tobacco was mixed with the leaves of the small Desert sage, Salvia dorrii, or the root of Indian balsam or cough root, Leptotaenia multifida, the addition of which was thought to be particularly good for asthma and tuberculosis.
Popularization
In 1612, six years after the settlement of Jamestown, Virginia, John Rolfe was credited as the first settler to successfully raise tobacco as a cash crop. The demand quickly grew as tobacco, referred to as "brown gold", revived the Virginia joint stock company from its failed gold expeditions. In order to meet demands from the Old World, tobacco was grown in succession, quickly depleting the soil. This became a motivator to settle west into the unknown continent, and likewise an expansion of tobacco production. Indentured servitude became the primary labor force up until Bacons Rebellion, from which the focus turned to slavery. This trend abated following the American Revolution as slavery became regarded as unprofitable. However, the practice was revived in 1794 with the invention of the cotton gin. Frenchman Jean Nicot (from whose name the word nicotine is derived) introduced tobacco to France in 1560, and tobacco then spread to England. The first report of a smoking Englishman is of a sailor in Bristol in 1556, seen "emitting smoke from his nostrils". Like tea, coffee and opium, tobacco was just one of many intoxicants that was originally used as a form of medicine. Tobacco was introduced around 1600 by French merchants in what today is modern-day Gambia and Senegal. At the same time, caravans from Morocco brought tobacco to the areas around Timbuktu, and the Portuguese brought the commodity (and the plant) to southern Africa, establishing the popularity of tobacco throughout all of Africa by the 1650s.
Soon after its introduction to the Old World, tobacco came under frequent criticism from state and religious leaders. James VI and I, King of Scotland and England, produced the treatise A Counterblaste to Tobacco in 1604, and also introduced excise duty on the product. Murad IV, sultan of the Ottoman Empire 1623–40, was among the first to attempt a smoking ban by claiming it was a threat to public morals and health. The Chongzhen Emperor of China issued an edict banning smoking two years before his death and the overthrow of the Ming dynasty. Later, the Manchu rulers of the Qing dynasty would proclaim smoking "a more heinous crime than that even of neglecting archery". In Edo period Japan, some of the earliest tobacco plantations were scorned by the shogunate as being a threat to the military economy by letting valuable farmland go to waste for the use of a recreational drug instead of being used to plant food crops.
Religious leaders have often been prominent among those who considered smoking immoral or outright blasphemous. In 1634, the Patriarch of Moscow forbade the sale of tobacco, and sentenced men and women who flouted the ban to have their nostrils slit and their backs flayed. Pope Urban VIII likewise condemned smoking in holy places in a papal bull of 1624. Despite some concerted efforts, restrictions and bans were largely ignored. When James I of England, a staunch anti-smoker and the author of A Counterblaste to Tobacco, tried to curb the new trend by enforcing a 4000% tax increase on tobacco in 1604, it was unsuccessful, as suggested by the presence of around 7,000 tobacco outlets in London by the early 17th century. From this point on for some centuries, several administrations withdrew from efforts at discouragement and instead turned tobacco trade and cultivation into sometimes lucrative government monopolies. By the mid-17th century most major civilizations had been introduced to tobacco smoking and in many cases had already assimilated it into the native culture, despite some continued attempts upon the parts of rulers to eliminate the practice with penalties or fines. Tobacco, both product and plant, followed the major trade routes to major ports and markets, and then on into the hinterlands. The English language term smoking appears to have entered currency in the late 18th century, before which less abbreviated descriptions of the practice such as drinking smoke were also in use. Growth in the US remained stable until the American Civil War in the 1860s, when the primary agricultural workforce shifted from slavery to sharecropping. This, along with a change in demand, accompanied the industrialization of cigarette production as craftsman James Bonsack created a machine in 1881 to partially automate their manufacture.
Social attitudes and public health
In Germany, anti-smoking groups, often associated with anti-liquor groups, first published advocacy against the consumption of tobacco in the journal Der Tabakgegner (The Tobacco Opponent) in 1912 and 1932. In 1929, Fritz Lickint of Dresden, Germany, published a paper containing formal statistical evidence of a lung cancer–tobacco link. During the Great Depression, Adolf Hitler condemned his earlier smoking habit as a waste of money, and later made stronger assertions against smoking. This movement was further strengthened by Nazi reproductive policy, as women who smoked were viewed as unsuitable to be wives and mothers in a German family. In the 20th century, smoking was common; there were social events like the smoke night which promoted the habit.
The anti-tobacco movement in Nazi Germany did not reach across enemy lines during the Second World War, as anti-smoking groups quickly lost popular support. By the end of the Second World War, American cigarette manufacturers quickly reentered the German black market. Illegal smuggling of tobacco became prevalent, and leaders of the Nazi anti-smoking campaign were silenced. As part of the Marshall Plan, the United States shipped free tobacco to Germany; with 24,000 tons in 1948 and 69,000 tons in 1949. Per capita yearly cigarette consumption in post-war Germany steadily rose from 460 in 1950 to 1,523 in 1963. By the end of the 20th century, anti-smoking campaigns in Germany were unable to exceed the effectiveness of the Nazi-era climax in the years 1939–41 and German tobacco health research was described by Robert N. Proctor as "muted".
In 1950, Richard Doll published research in the British Medical Journal showing a close link between smoking and lung cancer. Beginning in December 1952, the magazine Readers Digest published "Cancer by the Carton", a series of articles that linked smoking with lung cancer. In 1954, the British Doctors Study, a prospective study of some 40 thousand doctors over about 2.5 years, confirmed the link, on the basis of which the government issued advice that smoking and lung cancer rates were related. In January 1964, the United States Surgeon Generals Report on Smoking and Health likewise began suggesting the relationship between smoking and cancer. As scientific evidence mounted in the 1980s, tobacco companies claimed contributory negligence, as the adverse health effects were previously unknown or lacked substantial credibility. Health authorities sided with these claims up until 1998, when they reversed their position. The Tobacco Master Settlement Agreement, originally between the four largest US tobacco companies and the Attorneys General of 46 states, restricted certain types of tobacco advertisement and required payments for health compensation, which later amounted to the largest civil settlement in United States history. Social campaigns have been instituted in many places to discourage smoking, such as Canadas National Non-Smoking Week.
From 1965 to 2006, rates of smoking in the United States declined from 42% to 20.8%. The majority of those who quit were professional, affluent men. Although the per-capita number of smokers decreased, the average number of cigarettes consumed per person per day increased from 22 in 1954 to 30 in 1978. This paradoxical trend suggests that those who quit had smoked less, while those who continued to smoke moved to smoking more light cigarettes. The trend has been paralleled by many industrialized nations as rates have either leveled off or declined. In the developing world, however, tobacco consumption continued to rise at 3.4% per year as of 2002. In Africa, smoking is in most areas considered to be modern, and many of the strong adverse opinions that prevail in the West receive much less attention. Today Russia leads as the top consumer of tobacco, followed by Indonesia, Laos, Ukraine, Belarus, Greece, Jordan, and China. As a result of pressure from the public and the FDA, Walmart and Sams Club announced that they were raising the minimum age to purchase tobacco products, including all e-cigarettes, to 21, starting 1 July 2019.
Consumption
Methods
Tobacco is an agricultural product processed from the fresh leaves of plants in the genus Nicotiana. The genus contains several species, of which Nicotiana tabacum is the most commonly grown; Nicotiana rustica follows second, containing higher concentrations of nicotine. The leaves are harvested and cured to allow the slow oxidation and degradation of carotenoids in the tobacco leaf. This produces certain compounds in the tobacco leaves that give aromas reminiscent of sweet hay, tea, rose oil, or fruit. Before packaging, the tobacco is often combined with other additives in order to increase the addictive potency, shift the products pH, or improve the effects of the smoke by making it more palatable. In the United States these additives are regulated to a list of 599 substances. The product is then processed, packaged, and shipped to consumer markets.
Beedi
Beedis are thin South Asian cigarettes filled with tobacco flakes and wrapped in a tendu leaf tied with a string at one end. They produce higher levels of carbon monoxide, nicotine, and tar than cigarettes typical in the United States.
Cigars
Cigars are tightly rolled bundles of dried and fermented tobacco which are ignited so that smoke may be drawn into the smokers mouth. They are generally not inhaled because of the high alkalinity of the smoke, which can quickly become irritating to the trachea and lungs. The prevalence of cigar smoking varies depending on location, historical period, and population surveyed, and prevalence estimates vary somewhat depending on the survey method. The United States is the top consuming country by far, followed by Germany and the United Kingdom; the US and Western Europe account for about 75% of cigar sales worldwide. As of 2005 it is estimated that 4.3% of men and 0.3% of women smoke cigars in the USA.
Cigarettes
Cigarettes, French for "small cigar", are a product consumed through smoking and manufactured out of cured and finely cut tobacco leaves and reconstituted tobacco, often combined with other additives, which are then rolled or stuffed into a paper-wrapped cylinder. Cigarettes are ignited and inhaled, usually through a cellulose acetate filter, into the mouth and lungs.
Hookah
Hookahs are single or multi-stemmed (often glass-based) water pipes for smoking, originally from India. The hookah was a symbol of pride and honor for landlords, kings and other high-status people. Now, the hookah has gained immense popularity, especially in the Middle East. A hookah operates by water filtration and indirect heat. It can be used for smoking herbal fruits, tobacco, or cannabis.
Kretek
Kretek are cigarettes made with a complex blend of tobacco, cloves and a flavoring "sauce". They were first introduced in the 1880s in Kudus, Java, to deliver the medicinal eugenol of cloves to the lungs. The quality and variety of tobacco play an important role in kretek production; a kretek can contain more than 30 types of tobacco. Minced dried clove buds weighing about one-third of the tobacco blend are added for flavoring. In 2004 the United States prohibited cigarettes from having a "characterizing flavor" of certain ingredients other than tobacco and menthol, thereby removing kretek from being classified as cigarettes.
Pipe smoking
Pipe smoking is done with a tobacco pipe, typically consisting of a small chamber (the bowl) for the combustion of the tobacco to be smoked and a thin stem (shank) that ends in a mouthpiece (the bit). Shredded pieces of tobacco are placed into the chamber and ignited.
Roll-your-own
Roll-your-own or hand-rolled cigarettes, often called "rollies", "cigi" or "roll-ups", are very popular, particularly in European countries and the UK. These are prepared from loose tobacco, cigarette papers, and filters, all bought separately. They are usually much cheaper than ready-made cigarettes, and small devices can be bought to make the rolling process easier.
Vaporizer
A vaporizer is a device used to sublimate the active ingredients of plant material. Rather than burning the herb, which produces potentially irritating, toxic, or carcinogenic by-products, a vaporizer heats the material in a partial vacuum so that the active compounds contained in the plant boil off into a vapor. This method is often preferable when medically administering the smoked substance, as opposed to directly pyrolyzing the plant material.
Physiology
The active substances in tobacco, especially cigarettes, are administered by burning the leaves and inhaling the vaporized gas that results. This quickly and effectively delivers substances into the bloodstream by absorption through the alveoli in the lungs. The lungs contain some 300 million alveoli, which amounts to a surface area of over 70 m2 (about the size of a tennis court). This method is not completely efficient, as not all of the smoke will be inhaled and some amount of the active substances will be lost to combustion and pyrolysis. Pipe and cigar smoke is not inhaled because of its high alkalinity, which is irritating to the trachea and lungs. However, because of its higher alkalinity (pH 8.5) compared to cigarette smoke (pH 5.3), non-ionized nicotine is more readily absorbed through the mucous membranes in the mouth. Nicotine absorption from cigars and pipes, however, is much less than that from cigarette smoke. Nicotine and cocaine activate similar patterns of neurons, which supports the existence of common substrates among these drugs. The absorbed nicotine mimics acetylcholine at nicotinic acetylcholine receptors; when bound to these receptors, it prevents the reuptake of acetylcholine, thereby increasing levels of that neurotransmitter in those areas of the body. These nicotinic acetylcholine receptors are located in the central nervous system and at the nerve-muscle junction of skeletal muscles; their activity increases heart rate and alertness and produces faster reaction times. Nicotinic acetylcholine stimulation is not directly addictive. However, since nicotinic receptors are abundant on dopamine-releasing neurons, dopamine is released; and, in the nucleus accumbens, dopamine is associated with motivation, causing reinforcing behavior. Dopamine increase, in the prefrontal cortex, may also increase working memory. When tobacco is smoked, most of the nicotine is pyrolyzed. However, a dose sufficient to cause mild somatic dependency and mild to strong psychological dependency remains. There is also a formation of harmane (an MAO inhibitor) from the acetaldehyde in tobacco smoke. This may play a role in nicotine addiction, by facilitating a dopamine release in the nucleus accumbens as a response to nicotine stimuli. In rat studies, withdrawal after repeated exposure to nicotine results in less responsive nucleus accumbens cells, which produce the dopamine responsible for reinforcement.
Demographics
As of 2000, smoking was practiced by around 1.22 billion people. At current rates of smoker replacement and market growth, this may reach around 1.9 billion by 2025. Smoking may be up to five times more prevalent among men than women in some communities, although the gender gap usually declines with younger age. In some developed countries smoking rates for men have peaked and begun to decline, while for women they continue to climb. As of 2002, about twenty percent of young teenagers (13–15) smoked worldwide, and 80,000 to 100,000 children begin smoking every day, roughly half of whom live in Asia. Half of those who begin smoking in adolescent years are projected to go on to smoke for 15 to 20 years. As of 2019 in the United States, roughly 800,000 high school students smoke. The World Health Organization (WHO) states that "Much of the disease burden and premature mortality attributable to tobacco use disproportionately affect the poor". Of the 1.22 billion smokers, 1 billion live in developing or transitional economies. Rates of smoking have leveled off or declined in the developed world. In the developing world, however, tobacco consumption was rising by 3.4% per year as of 2002. The WHO in 2004 projected 58.8 million deaths to occur globally, of which 5.4 million were attributed to tobacco, and 4.9 million as of 2007. As of 2002, 70% of these deaths were in developing countries. As of 2017, smoking causes one in ten deaths worldwide, with half of those deaths in the US, China, India and Russia.
Psychology
Takeup
Most smokers begin smoking during adolescence or early adulthood. Some studies show that smoking can be linked to various mental health complications. Smoking has elements of risk-taking and rebellion, which often appeal to young people. The presence of peers who smoke and media featuring high-status models smoking may also encourage smoking. Because teenagers are influenced more by their peers than by adults, attempts by parents, schools, and health professionals at preventing people from trying cigarettes are often unsuccessful. Children of smoking parents are more likely to smoke than children with non-smoking parents, and are less likely to quit smoking. One study found that parental smoking cessation was associated with less adolescent smoking, except when the other parent currently smoked. Another study tested the relation of adolescent smoking to rules regulating where adults are allowed to smoke in the home; results showed that restrictive home smoking policies were associated with a lower likelihood of trying smoking for both middle and high school students. Behavioural research generally indicates that teenagers begin their smoking habits due to peer pressure and cultural influences portrayed by friends. However, one study found that direct pressure to smoke cigarettes played a less significant part in adolescent smoking, with adolescents also reporting low levels of both normative and direct pressure to smoke cigarettes. Mere exposure to tobacco retailers may motivate smoking behaviour in adults. A similar study suggested that individuals may play a more active role in starting to smoke than has previously been thought, and that social processes other than peer pressure also need to be taken into account. Another studys results indicated that peer pressure was significantly associated with smoking behavior across all age and gender cohorts, but that intrapersonal factors were significantly more important to the smoking behavior of 12- to 13-year-old girls than same-age boys. Within the 14- to 15-year-old age group, one peer pressure variable emerged as a significantly more important predictor of girls than boys smoking. It is debated whether peer pressure or self-selection is a greater cause of adolescent smoking.
Psychologist Hans Eysenck (whose results were later questioned as implausible and whose publications were criticized as unsafe) developed a personality profile for the typical smoker. Extraversion is the trait most associated with smoking, and smokers tend to be sociable, impulsive, risk-taking, and excitement-seeking individuals.
Persistence
The reasons given by some smokers for this activity have been categorized as addictive smoking, pleasure from smoking, tension reduction/relaxation, social smoking, stimulation, habit/automatism, and handling. There are gender differences in how much each of these reasons contributes, with females more likely than males to cite tension reduction/relaxation, stimulation and social smoking. Some smokers argue that the depressant effect of smoking allows them to calm their nerves, often allowing for increased concentration. However, according to the Imperial College London, "Nicotine seems to provide both a stimulant and a depressant effect, and it is likely that the effect it has at any time is determined by the mood of the user, the environment and the circumstances of use. Studies have suggested that low doses have a depressant effect, while higher doses have stimulant effect."
Patterns
A number of studies have established that cigarette sales and smoking follow distinct time-related patterns. For example, cigarette sales in the United States have been shown to follow a strongly seasonal pattern, with the high months being the months of summer and the low months being the winter months. Similarly, smoking has been shown to follow distinct circadian patterns during the waking day, with the high point usually occurring shortly after waking in the morning and shortly before going to sleep at night.
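One simple way to make the seasonal claim concrete is a monthly seasonality index (each month's sales divided by the overall monthly mean). A toy Python sketch, using invented placeholder figures rather than real sales data:

```python
# Toy seasonality index for monthly cigarette sales. The figures below
# are invented placeholders purely to show the calculation, not data.

monthly_sales = {
    "Jan": 80, "Feb": 78, "Mar": 85, "Apr": 90, "May": 95, "Jun": 105,
    "Jul": 110, "Aug": 108, "Sep": 98, "Oct": 92, "Nov": 84, "Dec": 82,
}

overall_mean = sum(monthly_sales.values()) / len(monthly_sales)
for month, sales in monthly_sales.items():
    index = sales / overall_mean  # > 1.0 marks an above-average month
    print(f"{month}: seasonality index {index:.2f}")
```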
Impact
Health
Tobacco smoking is the leading cause of preventable death and a global public health concern. There are 1.1 billion tobacco users in the world. One person dies every six seconds from a tobacco-related disease.
Tobacco use leads most commonly to diseases affecting the heart and lungs, with smoking being a major risk factor for heart attacks, strokes, chronic obstructive pulmonary disease (COPD), idiopathic pulmonary fibrosis (IPF), emphysema, and various types and subtypes of cancers (particularly lung cancer, cancers of the oropharynx, larynx, and mouth, esophageal and pancreatic cancer). Cigarette smoking increases the risk of Crohns disease as well as the severity of the course of the disease. It is also the number one cause of bladder cancer. Cigarette smoking has also been associated with sarcopenia, the age-related loss of muscle mass and strength. The smoke from tobacco elicits carcinogenic effects on the tissues of the body that are exposed to it. Regular cigar smoking is known to carry serious health risks, including increased risk of developing various types and subtypes of cancers, respiratory diseases, cardiovascular diseases, cerebrovascular diseases, periodontal diseases and tooth loss, and malignant diseases. Tobacco smoke is a complex mixture of over 7,000 toxic chemicals, 98 of which are associated with an increased risk of cardiovascular disease and 69 of which are known to be carcinogenic. The most important chemicals causing cancer are those that produce DNA damage, since such damage appears to be the primary underlying cause of cancer. Cunningham et al. combined the microgram weight of each compound in the smoke of one cigarette with the known genotoxic effect per microgram to identify the most carcinogenic compounds in cigarette smoke: acrolein, formaldehyde, acrylonitrile, 1,3-butadiene, acetaldehyde, ethylene oxide, and isoprene. In addition to the aforementioned toxic chemicals, flavored tobacco contains flavorings which upon heating release toxic chemicals and carcinogens such as carbon monoxide (CO), polycyclic aromatic hydrocarbons (PAHs), furans, phenols, aldehydes (such as acrolein), and acids, in addition to nitrogenous carcinogens, alcohols, and heavy metals, all of which are dangerous to human health. A comparison of 13 common hookah flavors found that melon flavors are the most dangerous, with their smoke containing four classes of hazards in high concentrations. The World Health Organization estimates that tobacco caused 5.4 million deaths in 2004 and 100 million deaths over the course of the 20th century. Similarly, the United States Centers for Disease Control and Prevention describes tobacco use as "the single most important preventable risk to human health in developed countries and an important cause of premature death worldwide." Although 70% of smokers state their intention to quit, only 3–5% are actually successful in doing so. The probabilities of death from lung cancer before age 75 in the United Kingdom are 0.2% for men who never smoked (0.4% for women), 5.5% for male former smokers (2.6% in women), 15.9% for current male smokers (9.5% for women) and 24.4% for male "heavy smokers" defined as smoking more than 5 cigarettes per day (18.5% for women). Tobacco smoke can combine with other carcinogens present within the environment to produce elevated rates of lung cancer.
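The Cunningham et al. ranking described above is, in outline, a weighted product: each constituent's mass in the smoke of one cigarette multiplied by its genotoxic potency per microgram. A minimal Python sketch of that calculation shape follows; the numeric values below are invented placeholders, not the study's data.

```python
# Sketch of the mass-times-potency ranking described above. The numbers
# are invented placeholders; see Cunningham et al. for the real values.

compounds = {
    # name: (micrograms per cigarette, genotoxic potency per microgram)
    "acrolein":      (100.0, 0.50),
    "formaldehyde":  ( 50.0, 0.60),
    "acrylonitrile": ( 10.0, 0.90),
}

ranked = sorted(compounds.items(),
                key=lambda item: item[1][0] * item[1][1],  # mass x potency
                reverse=True)

for name, (mass_ug, potency) in ranked:
    print(f"{name}: relative contribution {mass_ug * potency:.1f}")
```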
The risk of lung cancer decreases almost from the first day someone quits smoking, as healthy cells that have escaped mutations grow and replace the damaged ones in the lungs. In research published in December 2019, 40% of cells in former smokers looked like those of people who had never smoked. Rates of smoking have generally leveled off or declined in the developed world. Smoking rates in the United States have dropped by half from 1965 to 2006, falling from 42% to 20.8% in adults. In the developing world, tobacco consumption is rising by 3.4% per year. Smoking alters the transcriptome of the lung parenchyma; the expression levels of a panel of seven genes (KMO, CD1A, SPINK5, TREM2, CYBB, DNASE2B, FGG) are increased in the lung tissue of smokers. Passive smoking is the inhalation of tobacco smoke by individuals who are not actively smoking. This smoke is known as second-hand smoke (SHS) or environmental tobacco smoke (ETS) when the burning end is present, and third-hand smoke after the burning end has been extinguished. Because of its negative implications, exposure to SHS has played a central role in the regulation of tobacco products. Six hundred thousand deaths were attributed to SHS in 2004. SHS has also been known to produce skin conditions such as freckles and dryness. In 2015, a meta-analysis found that smokers were at greater risk of developing psychotic disorders. Tobacco has also been described as an anaphrodisiac due to its propensity for causing erectile dysfunction. There is a correlation between tobacco smoking and a reduced risk of Parkinsons disease.
Economic
In countries with a universally funded healthcare system, the government covers the cost of medical care for smokers who become ill through smoking, funding it through increased taxes. Two broad debating positions exist on this front: the "pro-smoking" argument suggests that heavy smokers generally do not live long enough to develop the costly and chronic illnesses which affect the elderly, reducing societys healthcare burden, while the "anti-smoking" argument suggests that the healthcare burden is increased because smokers get chronic illnesses younger and at a higher rate than the general population. Data on both positions have been contested. The Centers for Disease Control and Prevention published research in 2002 claiming that the cost of each pack of cigarettes sold in the United States was more than $7 in medical care and lost productivity. The cost may be higher, with another study putting it as high as $41 per pack, most of which, however, falls on the individual and his or her family. This is how one author of that study explains the very low cost to others: "The reason the number is low is that for private pensions, Social Security, and Medicare — the biggest factors in calculating costs to society — smoking actually saves money. Smokers die at a younger age and dont draw on the funds theyve paid into those systems." Other research demonstrates that premature death caused by smoking may redistribute Social Security income in unexpected ways that affect behavior and reduce the economic well-being of smokers and their dependents. To further support this, whatever the rate of smoking consumption per day, smokers have a greater lifetime medical cost on average than non-smokers, by an estimated $6,000. Between lost productivity and health care expenditures combined, cigarette smoking costs at least 193 billion dollars (research also shows that smokers earn less money than nonsmokers). As for secondhand smoke, the cost is over 10 billion dollars. By contrast, some non-scientific studies, including one conducted by Philip Morris in the Czech Republic called Public Finance Balance of Smoking in the Czech Republic and another by the Cato Institute, support the opposite position. Philip Morris has explicitly apologised for the former study, saying: "The funding and public release of this study which, among other things, detailed purported cost savings to the Czech Republic due to premature deaths of smokers, exhibited terrible judgment as well as a complete and unacceptable disregard of basic human values. For one of our tobacco companies to commission this study was not just a terrible mistake, it was wrong. All of us at Philip Morris, no matter where we work, are extremely sorry for this. No one benefits from the very real, serious and significant diseases caused by smoking." Between 1970 and 1995, per-capita cigarette consumption in poorer developing countries increased by 67 percent, while it dropped by 10 percent in the richer developed world. Eighty percent of smokers now live in less developed countries. By 2030, the World Health Organization (WHO) forecasts that 10 million people a year will die of smoking-related illness, making it the single biggest cause of death worldwide, with the largest increase to be among women. WHO forecasts the 21st centurys death rate from smoking to be ten times the 20th centurys rate ("Washingtonian" magazine, December 2007).
The tobacco industry is one of the largest enterprises in the world; the six biggest tobacco companies made a combined profit of $35.1 billion in 2010 (Jha et al., 2014).
Social
Famous smokers of the past used cigarettes or pipes as part of their image, such as Jean-Paul Sartres Gauloises-brand cigarettes; Albert Einsteins, Douglas MacArthurs, Bertrand Russells, and Bing Crosbys pipes; or the news broadcaster Edward R. Murrows cigarette. Writers in particular seem to be known for smoking; for example, Cornell professor of French literature Richard Kleins book Cigarettes are Sublime analyzes the role smoking plays in 19th and 20th century letters. The popular author Kurt Vonnegut addressed his addiction to cigarettes within his novels. British Prime Minister Harold Wilson was well known for smoking a pipe in public, as was Winston Churchill for his cigars. Sherlock Holmes, the fictional detective created by Sir Arthur Conan Doyle, smoked a pipe, cigarettes, and cigars. The DC Vertigo comic book character John Constantine, created by Alan Moore, is synonymous with smoking, so much so that the first storyline by Preacher creator Garth Ennis centered around John Constantine contracting lung cancer. Professional wrestler James Fullington, while in character as "The Sandman", is a chronic smoker in order to appear "tough".
The problem of smoking at home is particularly difficult for women in many cultures (especially Arab cultures), where it may not be acceptable for a woman to ask her husband not to smoke at home or in the presence of her children. Studies have shown that pollution levels in indoor smoking areas are higher than levels found on busy roadways, in closed motor garages, and during firestorms. Furthermore, smoke can spread from one room to another, even if doors to the smoking area are closed.
The ceremonial smoking of tobacco, and praying with a sacred pipe, is a prominent part of the religious ceremonies of a number of Native American nations. Sema, the Anishinaabe word for tobacco, is grown for ceremonial use and considered the ultimate sacred plant, since its smoke is believed to carry prayers to the spirits. In most major religions, however, tobacco smoking is not specifically prohibited, although it may be discouraged as an immoral habit. Before the health risks of smoking were identified through controlled study, smoking was considered an immoral habit by certain Christian preachers and social reformers. The founder of the Latter Day Saint movement, Joseph Smith, recorded that on 27 February 1833 he received a revelation which discouraged tobacco use. This "Word of Wisdom" was later accepted as a commandment, and faithful Latter-day Saints abstain completely from tobacco. Jehovah's Witnesses base their stand against smoking on the Bible's command to "clean ourselves of every defilement of flesh" (2 Corinthians 7:1). Rabbi Yisrael Meir Kagan (1838–1933) was one of the first Jewish authorities to speak out on smoking. In Ahmadiyya Islam, smoking is highly discouraged, although not forbidden; during the month of fasting, however, smoking tobacco is forbidden. In the Baháʼí Faith, smoking tobacco is discouraged though not forbidden.
Public policy
On 27 February 2005, the WHO Framework Convention on Tobacco Control (FCTC) took effect. The FCTC is the world's first public health treaty. Countries that sign on as parties agree to a set of common goals and minimum standards for tobacco control policy, and to cooperate in dealing with cross-border challenges such as cigarette smuggling. The WHO currently declares that 4 billion people are covered by the treaty, which has 168 signatories. Among other steps, signatories are to put together legislation that will eliminate secondhand smoke in indoor workplaces, public transport, indoor public places and, as appropriate, other public places.
Taxation
Many governments have introduced excise taxes on cigarettes in order to reduce the consumption of cigarettes. The World Health Organisation finds that:
The structure of tobacco excise taxes varies considerably across countries, with lower income countries more likely to rely more on ad valorem excises and higher income countries more likely to rely more on specific excise taxes, while many countries at all income levels use a mix of specific and ad valorem excises.
Tobacco excise tax systems are quite complex in several countries, where different tax rates are applied based on prices, product characteristics such as presence/absence of a filter or length, packaging, weight, tobacco content, and/or production or sales volume. These complex systems are difficult to administer, create opportunities for tax avoidance, and are less effective from a public health perspective.
Globally, cigarette excise taxes account for less than 45 percent of cigarette prices, on average, while all taxes applied to cigarettes account for just over half of price. Higher income countries levy higher taxes on tobacco products and these taxes account for a greater share of price, with both the absolute tax and the share of price accounted for by tax falling as country incomes fall.
In 2002, the Centers for Disease Control and Prevention said that each pack of cigarettes sold in the United States costs the nation more than $7 in medical care and lost productivity, around $3400 per year per smoker. Another study by a team of health economists found that the combined cost paid by smokers, their families, and society is about $41 per pack of cigarettes.
Substantial scientific evidence shows that higher cigarette prices result in lower overall cigarette consumption. Most studies indicate that a 10% increase in price will reduce overall cigarette consumption by 3% to 5%. Youth, minorities, and low-income smokers are two to three times more likely than other smokers to quit or smoke less in response to price increases. Smoking is nevertheless often cited as an example of an inelastic good: a large rise in price results in only a small decrease in consumption, as the sketch below illustrates.
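As a rough illustration of the elasticity figures quoted above, the short Python sketch below computes the expected change in consumption for a given price increase. The elasticity value of -0.4 is simply the midpoint of the 3-5% range cited here, not a figure from any specific study, and the function name is invented for illustration.

```python
def consumption_change(price_increase_pct: float, elasticity: float = -0.4) -> float:
    """Approximate % change in cigarette consumption for a given % price rise.

    Uses the constant-elasticity approximation %dQ = elasticity * %dP.
    An elasticity of -0.4 is the midpoint of the -0.3 to -0.5 range cited
    above; |elasticity| < 1 is what makes the good "inelastic".
    """
    return elasticity * price_increase_pct

# A 10% price increase -> roughly a 4% drop in consumption.
print(consumption_change(10.0))   # -4.0
# Even a large 50% increase cuts consumption by only ~20%.
print(consumption_change(50.0))   # -20.0
```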
Many nations have implemented some form of tobacco taxation. As of 1997, Denmark had the highest cigarette tax burden, at $4.02 per pack, while Taiwan had a tax burden of only $0.62 per pack. The federal government of the United States charges $1.01 per pack.
Cigarette taxes vary widely from state to state in the United States. For example, Missouri has a cigarette tax of only 17 cents per pack, the nation's lowest, while New York has the highest cigarette tax in the U.S.: $4.35 per pack. In Alabama, Illinois, Missouri, New York City, Tennessee, and Virginia, counties and cities may impose an additional limited tax on the price of cigarettes. Sales taxes are also levied on tobacco products in most jurisdictions.
In the United Kingdom, a packet of 20 cigarettes is taxed at 16.5% of the retail price plus £4.90. The UK has a significant black market for tobacco, and the tobacco industry has estimated that 27% of cigarette consumption and 68% of hand-rolling tobacco consumption is non-UK duty paid (NUKDP).
In Australia, total taxes account for 62.5% of the final price of a packet of cigarettes (2011 figures). These taxes include federal excise or customs duty and the Goods and Services Tax.
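The UK figure is a concrete example of the mixed specific-plus-ad-valorem structure the WHO passage above describes. A minimal sketch of the arithmetic follows; the £9.00 retail price is an invented example, not a quoted figure, and the function name is illustrative.

```python
def uk_cigarette_excise(retail_price_gbp: float,
                        ad_valorem_rate: float = 0.165,
                        specific_duty_gbp: float = 4.90) -> float:
    """Excise on a 20-pack under the mixed UK structure cited above:
    an ad valorem component (16.5% of the retail price) plus a
    specific (fixed, per-pack) duty of £4.90."""
    return ad_valorem_rate * retail_price_gbp + specific_duty_gbp

# Hypothetical £9.00 pack: 0.165 * 9.00 + 4.90 = £6.385 of excise,
# i.e. roughly 71% of the retail price (excise alone, before VAT).
print(round(uk_cigarette_excise(9.00), 3))
```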
Restrictions
In June 1967, the US Federal Communications Commission ruled that programmes broadcast on a television station which discussed smoking and health were insufficient to offset the effects of paid advertisements that were broadcast for five to ten minutes each day. In April 1970, the US Congress passed the Public Health Cigarette Smoking Act, banning the advertising of cigarettes on television and radio starting on 2 January 1971.
The Tobacco Advertising Prohibition Act 1992 expressly prohibited almost all forms of tobacco advertising in Australia, including the sponsorship of sporting or other cultural events by cigarette brands.
All tobacco advertising and sponsorship on television has been banned within the European Union since 1991 under the Television Without Frontiers Directive (1989). This ban was extended by the Tobacco Advertising Directive, which took effect in July 2005 to cover other forms of media such as the internet, print media, and radio. The directive does not cover advertising in cinemas and on billboards, the use of merchandising, or tobacco sponsorship of cultural and sporting events which are purely local, with participants coming from only one Member State, as these fall outside the jurisdiction of the European Commission. However, most member states have transposed the directive with national laws that are wider in scope than the directive and cover local advertising. A 2008 European Commission report concluded that the directive had been successfully transposed into national law in all EU member states, and that these laws were well implemented.
Some countries also impose legal requirements on the packaging of tobacco products. For example, in the countries of the European Union, Turkey, Australia, and South Africa, cigarette packs must be prominently labeled with the health risks associated with smoking. Canada, Australia, Thailand, Iceland, and Brazil have also imposed labels on cigarette packs warning smokers of the effects, including graphic images of the potential health effects of smoking. In Canada, cards explaining different methods of quitting smoking are also inserted into cigarette packs; there are sixteen different cards, and only one comes in each pack. In the United Kingdom, there have been a number of graphic NHS advertisements, one showing a cigarette filled with fatty deposits, the cigarette symbolizing a smoker's artery.
Some countries have also banned advertising at the point of sale. The United Kingdom and Ireland have limited the advertisement of tobacco at retailers; this includes storing cigarettes behind covered shelving not visible to the public, though some limited advertising at retailers is still allowed. Norway has a complete ban on point-of-sale advertising, covering smoking products and accessories alike. Implementing these policies can be challenging: all of these countries experienced resistance and challenges from the tobacco industry. The World Health Organisation recommends a complete ban on all types of advertisement or product placement, including at vending machines, at airports, and on internet shops selling tobacco. The evidence is as yet unclear as to the impact of such bans.
Many countries have a minimum smoking age. In many countries, including the United States, most European Union member states, New Zealand, Canada, South Africa, Israel, India, Brazil, Chile, Costa Rica, and Australia, it is illegal to sell tobacco products to minors, and in the Netherlands, Austria, Belgium, Denmark, and South Africa it is illegal to sell tobacco products to people under the age of 16. On 1 September 2007, the minimum age to buy tobacco products in Germany rose from 16 to 18, as it did in the United Kingdom on 1 October 2007. Underlying such laws is the belief that people should make an informed decision regarding the risks of tobacco use. These laws are laxly enforced in some nations and states. In China, Turkey, and many other countries, a child will usually have little problem buying tobacco products, because children are often sent to the store to buy tobacco for their parents.
Several countries, such as Ireland, Latvia, Estonia, the Netherlands, Finland, Norway, Canada, Australia, Sweden, Portugal, Singapore, Italy, Indonesia, India, Lithuania, Chile, Spain, Iceland, the United Kingdom, Slovenia, Turkey, and Malta, have legislated against smoking in public places, often including bars and restaurants. Restaurateurs have been permitted in some jurisdictions to build designated smoking areas (or to prohibit smoking). In the United States, many states prohibit smoking in restaurants, and some also prohibit smoking in bars. In Canadian provinces, smoking is illegal in indoor workplaces and public places, including bars and restaurants; as of 31 March 2008, Canada has introduced a smoke-free law in all public places, as well as within 10 metres of an entrance to any public place. In Australia, smoke-free laws vary from state to state. Currently, Queensland has completely smoke-free indoor public places (including workplaces, bars, pubs and eateries) as well as patrolled beaches and some outdoor public areas; there are, however, exceptions for designated smoking areas. In Victoria, smoking is restricted at railway stations, bus stops and tram stops, as these are public locations where second-hand smoke can affect non-smokers waiting for public transport, and since 1 July 2007 the restriction has been extended to all indoor public places. In New Zealand and Brazil, smoking is restricted in enclosed public places including bars, restaurants and pubs. Hong Kong restricted smoking from 1 January 2007 in the workplace and in public spaces such as restaurants, karaoke rooms, buildings, and public parks (bars which do not admit minors were exempt until 2009). In Romania, smoking is illegal in trains, metro stations, public institutions (except where designated, usually outside) and public transport. In Germany, in addition to smoking bans in public buildings and transport, an anti-smoking ordinance for bars and restaurants was implemented in late 2007. A study by the University of Hamburg (Ahlfeldt and Maennig 2010) demonstrates that the smoking ban had, if any, only short-run impacts on bar and restaurant revenues; in the medium and long run, no negative effect was measurable. The results suggest either that consumption in bars and restaurants is not affected by smoking bans in the long run, or that negative revenue impacts from smokers are compensated by increased revenues from non-smokers.
Ignition safety
An indirect public health problem posed by cigarettes is that of accidental fires, usually linked with consumption of alcohol. Nitrates have traditionally been used to enhance combustion, but cigarette manufacturers were long silent on this subject, claiming at first that a safe cigarette was technically impossible, then that it could only be achieved by modifying the paper. Roll-your-own cigarettes contain no additives and are fire safe. Numerous fire-safe cigarette designs have been proposed, some by tobacco companies themselves, which would extinguish a cigarette left unattended for more than a minute or two, thereby reducing the risk of fire. Among American tobacco companies, some have resisted this idea, while others have embraced it. RJ Reynolds was a leader in making prototypes of these cigarettes in 1983 and committed to making all of its U.S.-market cigarettes fire-safe by 2010 </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Retrograde amnesia and can you explain it in depth? [/INST] In neurology, retrograde amnesia (RA) is a loss of memory-access to events that occurred or information that was learned in the past. It is caused by an injury or the onset of a disease. It tends to negatively affect episodic, autobiographical, and declarative memory, while keeping procedural memory intact and without increasing difficulty for learning new information. RA can be temporally graded, or more permanent, depending on the severity of its cause. It is usually consistent with Ribot's law: subjects are more likely to lose memories formed closer to the traumatic incident than memories from further back. The type of information that is forgotten can range from a specific memory, such as a single event, to a more general memory, which would resemble generic amnesia. Anterograde amnesia is a similar condition that deals with the inability to form new memories following the onset of an injury or disease.
Cause
RA commonly results from damage to regions of the brain that are associated with episodic and declarative memory, including autobiographical information. In extreme cases, individuals may completely forget who they are; generally, this is a more severe type of amnesia known as global, or generalized, amnesia. However, memory loss can also be selective or categorical, manifested by a person's inability to remember events related to a specific incident or topic. Patients also differ in durations of RA (how long they can't recall information) and durations of what is forgotten (the past time frame for which information is unavailable).
Temporally graded retrograde amnesia
In temporally graded retrograde amnesia, patients eventually recover most memories following the onset of RA. This suggests that the hippocampal formation, which carries out consolidation (the process of encoding new information), is used only for temporary storage over short periods of time; systemic consolidation then moves the information to other brain structures, where long-term consolidation takes place. The fact that damage to the hippocampal formation can eventually be overcome suggests that other brain structures are able to take over the jobs of the malfunctioning regions. RA can also progress and further deteriorate memory recollection, as in the cases of Korsakoff syndrome and Alzheimer's disease, due to the ongoing nature of the damage caused by these illnesses. The degree to which different patients recover from RA differs in time (some take a few days, others a few decades) and in content (some will remember only certain specific instances, others more).
Focal, isolated, and pure retrograde amnesia
Focal, isolated, and pure retrograde amnesia are terms used to describe a pure form of RA, with an absence of anterograde amnesia (AA). Focal RA in particular has also been used to describe RA cases in which there is a lack of observable physical deficit as well. This could be described as a psychogenic form of amnesia with mild anterograde and retrograde loss. A case study of patient DH revealed that the patient was unable to provide personal or public information, yet no parahippocampal or entorhinal damage was found. Individuals with focal brain damage have minimal RA.
Isolated RA is associated with a visible thalamic lesion. Consistent with other forms of RA, the isolated form is marked by the inability to recall past information.
Pure retrograde amnesia (PRA) refers to the behavioral syndrome characterized by the inability to retrieve remote information in the face of a normal ability to learn new information, with no other ecological or psychometric evidence of cognitive impairment. It should not be confused with the brief periods of peritraumatic amnesia that are common in mild concussive head traumas. The findings of pure retrograde amnesia have helped form the dissociation between the mechanisms for RA and AA. Several studies have found numerous causes for PRA, such as vascular diseases, head traumas ranging from mild to severe, encephalitis, purely psychological conditions, and totally unidentifiable etiologies. Most people who have PRA can function normally and learn new information.
A pure form of RA is rare, as most cases of RA co-occur with AA. A famous example is that of patient ML. The patient's MRI revealed damage to the right ventral frontal cortex and underlying white matter, including the uncinate fasciculus, a band of fibers previously thought to mediate retrieval of specific events from one's personal past.
During consolidation, the hippocampus acts as an intermediate tool that quickly stores new information until it is transferred to the neocortex for the long term. The temporal lobe, which holds the hippocampus and the entorhinal, perirhinal, and parahippocampal cortices, has a reciprocal connection with the neocortex. The temporal lobe is temporarily needed when consolidating new information; as the learning becomes stronger, the neocortex becomes more independent of the temporal lobe.
Studies of specific cases demonstrate how particular impaired areas of the hippocampus are associated with the severity of RA (see the sketch after this paragraph). Damage limited to the CA1 field of the hippocampus causes very limited RA, covering about one to two years. More extensive damage limited to the hippocampus causes temporally graded amnesia covering 15 to 25 years. Another study suggests that large medial temporal lobe lesions that extend laterally to include other regions produce more extensive RA, covering 40 to 50 years. These findings suggest that RA becomes more severe and long-term as the damage extends beyond the hippocampus to surrounding structures.
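A compact way to read the lesion-to-severity relationship described above is as a lookup from lesion extent to the approximate span of memory lost. The Python sketch below makes the gradient explicit; the year ranges are taken directly from the paragraph above, while the dictionary name and structure are purely illustrative, not a clinical tool.

```python
# Approximate extent of retrograde amnesia by lesion size, per the
# studies summarized above. Years are the rough ranges quoted in the
# text; the labels are informal descriptions, not diagnostic categories.
RA_EXTENT_YEARS = {
    "CA1 field only":                        (1, 2),
    "hippocampus (more extensive damage)":   (15, 25),
    "large medial temporal lobe + lateral":  (40, 50),
}

for lesion, (lo, hi) in RA_EXTENT_YEARS.items():
    print(f"{lesion}: roughly {lo}-{hi} years of memory affected")
```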
The commonly studied causes of RA do not always lead to its onset. It may even be that, in some cases, both conscious feigning and unconscious processes are at play.
Traumatic brain injury (TBI) or post-traumatic amnesia
Traumatic brain injury (TBI), which commonly produces post-traumatic amnesia, results from an external force that causes structural damage to the brain, such as a sharp blow to the head, a diffuse axonal injury, or childhood brain damage (e.g., shaken baby syndrome). In cases of sudden rapid acceleration, the brain continues moving around in the skull, harming brain tissue as it hits internal protrusions.
TBI varies according to the impact of external forces, the location of structural damage, and the severity of damage, ranging from mild to severe. Retrograde amnesia can be one of the many consequences of brain injury, but it is important to note that it is not always the outcome of TBI. An example of a subgroup of people who are often exposed to TBI is individuals who are involved in high-contact sports. Research on football players takes a closer look at some of the implications of their high-contact activities; enduring repeated head injuries can affect the neural consolidation of memory.
Specific cases, such as that of patient ML, support the evidence that severe blows to the head can cause the onset of RA. In this specific case there was an onset of isolated RA following a severe head injury. The brain damage did not affect the person's ability to form new memories, supporting the idea that retrograde memory operates independently of anterograde memory. Normally, there is a very gradual recovery; however, a dense period of amnesia immediately preceding the trauma usually persists.
Traumatic events
RA can occur without any anatomical damage to the brain, lacking an observable neurobiological basis. Primarily referred to as psychogenic amnesia or psychogenic fugue, it often occurs due to a traumatic situation that individuals wish to consciously or unconsciously avoid through intrapsychic conflicts or unconscious repressions. The onset of psychogenic amnesia can be either global (i.e., the individual forgets all history) or situation-specific (i.e., the individual is unable to retrieve memories of specific situations).
Patients experiencing psychogenic amnesia have impaired episodic memory, instances of wandering and traveling, and acceptance of a new identity as a result of inaccessible memories pertaining to their previous identity.
Recent research has begun to investigate the effects of stress and fear-inducing situations on the onset of RA. Long-term potentiation (LTP) is the strengthening of signal transmission between neurons following activation, a process known to play a strong role in hippocampal learning and memory. Changes in the hippocampus have been found to be related to both stress and induced LTP; these commonalities support the idea that variations in stress can play a role in producing new memories as well as in the onset of RA for other memories. The amygdala plays a crucial role in memory and can be affected by emotional stimuli, evoking RA.
Studies of specific cases, such as AMN, support evidence of traumatic experiences as a plausible cause of RA. AMN escaped a small fire in his house, did not inhale any smoke, and had no brain damage. Nevertheless, he was unable to recall autobiographical knowledge the next day. This case shows that RA can occur in the absence of structural brain damage.
After a traumatic head injury, emotional disturbances can occur at three different levels: neurological, reactionary, and long-term disturbances. Neurological disturbances can change emotional and motivational responses. Reactionary disturbances affect emotional and motivational responses as well, but reflect the failure to cope with environmental demands. Someone with this might withdraw from their environment because they no longer know how to handle its cognitive demands.
Nutritional deficiency
RA has been found among alcohol-dependent patients who have Korsakoff's syndrome. Korsakoff's syndrome patients develop retrograde amnesia due to a thiamine deficiency (lack of vitamin B1). Chronic alcohol use disorders are also associated with a decrease in the volume of the left and right hippocampus.
These patients' regular diet consists mostly of hard alcohol, which lacks the nutrients necessary for typical development and maintenance. Therefore, after a prolonged period of consuming primarily alcohol, these people develop memory difficulties and ultimately RA. Drawbacks of using Korsakoff patients to study RA include the progressive nature of the illness and the unknown time of onset.
Infections
Infections that pass the blood–brain barrier can cause brain damage (encephalitis), sometimes resulting in the onset of RA. In the case of patient SS, the infection led to focal or isolated retrograde amnesia, with absent or limited AA. Brain scans showed abnormalities in the bilateral medial temporal lobes, including two thirds of the hippocampal formation and the posterior part of the amygdala.
Surgery
Henry Molaison had epilepsy that progressed and worsened by his late twenties. The severity of his condition led him to undergo surgery in an effort to prevent his seizures. Unfortunately, owing to the limited neurological knowledge of the time, Molaison's surgeons removed his medial temporal lobes bilaterally, causing profound AA and RA. The removed brain structures included the hippocampus, the amygdala, and the parahippocampal gyrus, now called the medial temporal lobe memory system. HM became one of the most studied memory cases to date and initiated the examination of neurological structures in relation to memory.
Other patients who developed RA due to surgery include "P.B." and "F.C.", who had unilateral removal of the medial areas of the left temporal lobe.
Controlled induction
Clinically induced RA has been achieved using different forms of electrical induction.
Electroconvulsive therapy (ECT), used as a depression therapy, can cause impairments in memory. Tests show that information from the days and weeks before the ECT can be permanently lost. The results of this study also show that the severity of RA is more extreme with bilateral ECT than with unilateral ECT. Impairments can also be more intense if ECT is administered repetitively (sine-wave stimulation) as opposed to a single pulse (brief-pulse stimulation).
Electroconvulsive shock (ECS): The research in this field has been advanced by using animals as subjects. Researchers induce RA in rats, for example, by giving daily ECS treatments. This is done to further understand RA.
Brain structures
The most commonly affected areas are associated with episodic and declarative memory such as the hippocampus, the diencephalon, and the temporal lobes.
The hippocampus deals largely with memory consolidation, in which information from working memory and short-term memory is encoded into long-term storage for future retrieval. Amnesic patients with damage to the hippocampus are able to demonstrate some degree of unimpaired semantic memory, despite a loss of episodic memory, due to spared parahippocampal cortex. In other words, retrograde amnesics "know" a piece of information or a skill, but cannot "remember" how they acquired it.
The role of the diencephalon and the surrounding areas in memory is not well understood. However, this structure appears to be involved in episodic memory recall.
The temporal lobes are essential for semantic and factual memory processing. Aside from helping to consolidate memory with the hippocampus, the temporal lobes are extremely important for semantic memory. Damage to this region of the brain can result in impaired organization and categorization of verbal material, disturbance of language comprehension, and impaired long-term memory. The right frontal lobe is critical for the retrieval of episodic information, while the left frontal region is more active in the retrieval of semantic information. [56] Lesions in the right hemisphere and right frontal lobe result in impaired recall of non-verbal material, such as music and drawings. Difficulties in studying this region of the brain extend to its duties in comprehension, naming objects, verbal memory, and other language functions.
Brain plasticity has helped explain the recovery process of brain-damage-induced retrograde amnesia, in which neural structures use different neural pathways to avoid the damaged areas while still performing their tasks. Thus, the brain can learn to be independent of the impaired hippocampus, but only to a certain extent.
For example, older memories are consolidated over time and in various structures of the brain, including Wernicke's area and the neocortex, making retrieval through alternate pathways possible.
Diagnosis
Testing for retrograde amnesia
As previously mentioned, RA can affect people's memories to different degrees, and testing is required to help determine whether someone is experiencing it. Several tests exist; one, for example, tests factual knowledge such as known public events. A problem with this form of testing is that people generally differ in their knowledge of such subjects. The tests also have to be changed regularly to adapt to the period of time patients have been alive, and must include questions about autobiographical events that range in amount of detail. Due to the nature of the information being tested, it is often difficult to verify the accuracy of the memories being recalled, especially if they are from a distant past. One way to confirm information is by consulting close family members or caregivers. Another problem is that this type of testing can produce a significant amount of data that needs to be processed and confirmed. Autobiographical knowledge can also be tested using the Autobiographical Memory Interview (AMI), a test comprising names of relatives, personal information, and job history, created by Kopelman, Wilson, and Baddeley (1990). This information can help determine whether someone is experiencing RA and the degree of memory affected. Some researchers have found that the time interval since the head injury occurred did not seem to matter; the effect of the memory loss was the same no matter how long it had been since the injury.
Brain abnormalities can be measured using magnetic resonance imaging (MRI), computed tomography (CT), and electroencephalography (EEG), which can provide detailed information about specific brain structures. In many cases, an autopsy helps identify the exact brain region affected and the extent of the damage that caused RA once the patient has died.
There are some aspects essential to the patient that remain unaffected by RA. In many patients, personality remains the same. Also, semantic memory, that is, general knowledge about the world, is usually unaffected, whereas episodic memory, which refers to one's life experiences, is impaired.
Another real-life problem with RA is malingering, which is conceived as the rational output of a neurologically normal brain aiming at the surreptitious achievement of a well-identified gain. Since it is common for people who have committed a crime to report having RA for that specific event in order to avoid punishment, the legal system has pushed for the creation of a standardized test of amnesia. However, since most cases differ in onset, duration, and content forgotten, this has proven to be a rather complex task.
Spontaneous recovery
When someone has RA, their memory cannot be recovered simply by informing them about their personal experiences and identity. This is called the reminder effect or reminder treatment. The reminder effect consists of re-exposing the patient to past personal information, which cannot reverse RA: reminding the patient about details of their life has no scientific bearing on recovering memory. Fortunately, memory can be, and usually is, recovered through spontaneous recovery and plasticity.
Case studies
Since researchers are interested in examining the effects of disrupted brain areas and conducting experiments for further understanding of an unaffected, normal brain, many individuals with brain damage have volunteered to undergo countless tests to advance scientific knowledge of the human brain. For example, Henry Molaison (HM) had significant brain damage and participated in a great deal of neurological research; he was also the most tested person in neuropsychology. All living participants are referred to in the literature using only their initials, to protect their privacy.
Each case of RA has led to different symptoms and durations. Some patients have exhibited an inability to describe future plans, whether in the near future (e.g., this afternoon) or in the distant future (e.g., next summer), because of their inability to consolidate memories. Furthermore, researchers have found that some patients can identify themselves and loved ones in photographs, but cannot determine the time or place the photo was taken. It has also been found that patients with RA differ greatly from the general population in remembering past events.
A few case examples are:
After a head injury, AB had to relearn personal information. Many of AB's habits had also changed.
Patient CD reported disorientation of place and time following his injuries, as well as having to relearn previously learned information and activities (e.g., using a razor).
EF was examined and found to be very confused about social norms (e.g., appropriate attire outside his home). EF exhibited memory loss of his personal experiences (e.g., childhood) and an impaired ability to recognize his wife and parents.
JG is the first recorded patient with isolated RA.
GH, a mother and a wife, had surgery in August 2002. When GH woke up after the surgery, she believed it was May 1989. Due to her amnesia, GH experienced great difficulty in her social environment, being overwhelmed by her relationships with others.
DH, a learning disabilities instructor and husband, sustained a closed head injury. He did not show any normal signs of memory loss but he could not recall anything prior to the accident.
CDA is a 20-year-old man who fell, experienced head trauma, and was unconscious for a little less than an hour. He had a loss of self-identity and a retrograde deficit limited to the autobiographical events of the 5 years before the trauma. He often showed signs of spontaneous speech that was iterative and sometimes incoherent. When he saw his family and friends, he was shocked at how old they looked, because he remembered them from 5 years earlier. The case also included amnesia for procedural skills, such as shaving and driving, which was ultimately overcome. There were no psychological, neuropsychological, or brain damage problems. His recovery of memory was progressive and spontaneous; after several months the amnesia was limited to the two years preceding the trauma. This was a classic case of PRA. [57]
GC was a 38-year-old accountant who was found in a town square, unable to remember anything about himself and unaware of where he was or how he got there. He was eventually able to recall basic information about himself and his family, but could not recall emotionally charged autobiographical events pertaining to the last 7 years of his life. Within 3–4 days, it was determined that his autobiographical amnesia was clearly and strictly selective for professional events, as he could remember everything that was not related to his job. It was ultimately learned that the job had created severe emotional stress and anxiety, due to the extreme hours, that triggered a sudden fugue state. He was eventually able to recover most of his memories, except for a single work event in which he had stolen money from the company. This was a classic case of psychogenic amnesia. [57]
AF is a 15-year-old boy who hit his head and lost consciousness. He could not remember anything but was able to play songs on the piano, showing that his procedural memory was still intact. He gradually recovered some memories within the first 2–3 days but had autobiographical amnesia, as well as significant memory loss for famous public facts and events, for the 2 years prior to the injury. [57]
Although it may seem that people living with brain damage have great difficulty continuing the usual day-to-day aspects of life, they can still accomplish many feats. People with RA are able to lead a normal life </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Leg and can you explain it in depth? [/INST] A leg is a weight-bearing and locomotive anatomical structure, usually having a columnar shape. During locomotion, legs function as "extensible struts". The combination of movements at all joints can be modeled as a single, linear element capable of changing length and rotating about an omnidirectional "hip" joint.
As an anatomical animal structure it is used for locomotion. The distal end is often modified to distribute force (such as a foot). Most animals have an even number of legs.
As a component of furniture, a leg provides support for the useful surface, such as a table top or chair seat, with an economy of materials.
Terminology
Uniped: 1 leg, such as clams
Biped: 2 legs, such as humans and birds
Triped: 3 legs, which typically does not occur naturally in healthy animals
Quadruped: 4 legs, such as dogs and horses
Many taxa are characterized by the number of legs:
Tetrapods have four legs. Squamates of genus Bipes have only two </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Burkitt lymphoma and can you explain it in depth? [/INST] Burkitt lymphoma is a cancer of the lymphatic system, particularly B lymphocytes found in the germinal center. It is named after Denis Parsons Burkitt, the Irish surgeon who first described the disease in 1958 while working in equatorial Africa. The overall cure rate for Burkitt lymphoma in developed countries is about 90%; cure rates are worse in low-income countries. Burkitt lymphoma is uncommon in adults, in whom it has a worse prognosis.
Classification
Burkitt lymphoma can be divided into three main clinical variants: the endemic, the sporadic, and the immunodeficiency-associated variants. By morphology (i.e., microscopic appearance), immunophenotype, and genetics, the variants of Burkitt lymphoma are alike.
The endemic variant (also called "African variant") most commonly occurs in children living in malaria-endemic regions of the world (e.g., equatorial Africa, Brazil, and Papua New Guinea). Epstein–Barr virus (EBV) infection is found in nearly all patients. Chronic malaria is believed to reduce resistance to EBV, allowing it to take hold. The disease characteristically involves the jaw or other facial bone, distal ileum, cecum, ovaries, kidney, or breast.
The sporadic type of Burkitt lymphoma (also known as "non-African") is the most common variant found in places where malaria is not prevalent such as North America and parts of Europe. The tumor cells have a similar appearance to the cancer cells of classical endemic Burkitt lymphoma. Sporadic lymphomas are rarely associated with the EBV. The jaw is less commonly involved in this variant. The abdominal region is the common site of involvement.
Immunodeficiency-associated Burkitt lymphoma is usually associated with HIV infection, but can also occur in post-transplant patients.
Burkitt lymphoma is commonly associated with infection of B-cell lymphocytes by EBV and, in these cases, is considered one form of the Epstein–Barr virus-associated lymphoproliferative diseases. The endemic variant of Burkitt lymphoma is in almost all cases associated with EBV infection. The fact that some Burkitt lymphoma cases do not involve EBV suggests that many cases of the disease are not caused or promoted by EBV, i.e. the virus may be an innocent passenger in these cases. However, the almost ubiquitous presence of the virus in the endemic variant of Burkitt lymphoma suggests that it contributes to the development and/or progression of this variant. The mutational landscape in Burkitt lymphoma has recently been found to differ between tumors with and without EBV infection, further strengthening the role of the virus in disease origin.
Pathophysiology
Genetics
All types of Burkitt lymphoma are characterized by dysregulation of the c-myc gene by one of three chromosomal translocations. This gene is found at 8q24.
The most common variant is t(8;14)(q24;q32), which accounts for about 85% of cases. This involves c-myc and IGH@. A variant of this, a three-way translocation, t(8;14;18), has also been identified.
A rare variant is at t(2;8)(p12;q24). This involves IGK@ and c-myc.
Another rare variant is t(8;22)(q24;q11). This involves IGL@ and c-myc.
Combined, the two less-common translocations, t(2;8)(p12;q24) and t(8;22)(q24;q11), account for the remaining 15% of cases not due to the t(8;14)(q24;q32) translocation.
The c-myc gene on chromosome 8 is part of the Myc family of genes that serve as regulators of cellular transcription. Expression of the c-myc gene results in the synthesis of transcription factors that increase the expression of other genes involved in aerobic glycolysis; ultimately, this increase in aerobic glycolysis helps provide the energy necessary for cellular growth. The translocation of the c-myc gene to the IGH, IGK, or IGL region moves the gene to a location in the genome near immunoglobulin enhancers, which increases the expression of the c-myc gene. Overall, this translocation leads to the increased cellular proliferation found in Burkitt lymphoma. Point mutations can also be present in the translocated c-myc gene, resulting in an overactive c-myc protein. Other mutations found include the TCF transcription factor mutation, which increases cellular proliferation via the increased expression of other genes. Along with mutations that support cell proliferation, Burkitt lymphoma has been found to harbor mutations in the TP53 gene, a tumor suppressor that would normally function to limit cellular growth.
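The shorthand used above, e.g. t(8;14)(q24;q32), encodes which chromosomes exchanged material and at which bands. As a minimal sketch of how to read that notation, the Python parser below is purely illustrative: the function name is invented, and it handles only the simple two-chromosome form, not three-way variants like t(8;14;18).

```python
import re

def parse_translocation(notation: str):
    """Split ISCN-style notation like 't(8;14)(q24;q32)' into
    (chromosome, band) pairs: here, chromosome 8 band q24 (the c-myc
    locus) and chromosome 14 band q32 (the IgH locus)."""
    m = re.fullmatch(r"t\((\w+);(\w+)\)\(([pq][\d.]+);([pq][\d.]+)\)", notation)
    if not m:
        raise ValueError(f"unsupported notation: {notation}")
    chr_a, chr_b, band_a, band_b = m.groups()
    return [(chr_a, band_a), (chr_b, band_b)]

print(parse_translocation("t(8;14)(q24;q32)"))  # [('8', 'q24'), ('14', 'q32')]
print(parse_translocation("t(2;8)(p12;q24)"))   # [('2', 'p12'), ('8', 'q24')]
```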
Virology
The role of EBV in the pathogenesis of endemic Burkitt lymphoma has not been completely elucidated, but the virus has been shown to cause DNA damage, dysfunction of telomeres, and genome instability. B-cell infection with EBV is latent, and the virus does not undergo replication. These latently infected B cells can then go on to produce proteins that promote cellular growth through modification of normal signaling pathways. EBV promotes the development of malignant B cells via proteins that limit apoptosis in cells carrying the c-myc translocation. Apoptosis is limited by EBV through various means, such as the EBNA-1 protein, the BHRF1 protein, EBER transcripts, the vIL-10 gene, BZLF1, and LMP1. Malaria has been found to cause genomic instability in endemic Burkitt lymphoma. Malaria can lead to the reactivation of latent EBV, and also to MYC translocations, via activation of toll-like receptor 9. Malaria also promotes B-cell proliferation by altering the normal immune response. The immune system targets antigens (e.g., EBNA2 and LMP-1) and eliminates most B cells infected with EBV; downregulation of the antigens targeted by the immune system leads to the development of persistent B cells, which can then undergo further mutations (e.g., the c-myc translocation) that promote cancer development.
Immunology
Burkitt lymphoma cells express HLA class I molecules normally, as well as some HLA class II complexes; however, CD4+ T cells are not properly activated. Burkitt lymphoma cells express only EBNA 1, a poorly antigenic EBV-associated antigen that is able to escape HLA class I presentation and thus evade an immune response. EBNA 1 can be presented via HLA class II molecules; however, the HLA class II pathway fails to activate the CD4+ T cells.
Diagnosis
Malignant B cell characteristics
Normal B cells of a germinal center possess rearranged immunoglobulin heavy- and light-chain genes, and each isolated B cell possesses a unique IgH gene rearrangement. Since Burkitt lymphoma and other B-cell lymphomas are clonal proliferative processes, all tumor cells from one patient are expected to possess identical IgH genes. When the DNA of tumor cells is analyzed using electrophoresis, a clonal band can be demonstrated, since identical IgH genes will move to the same position. By contrast, when a normal or reactive lymph node is analyzed using the same technique, a smear rather than a distinct band is seen. This technique is useful, since benign reactive processes (e.g., infectious mononucleosis) and malignant lymphoma can sometimes be difficult to distinguish.
Microscopy
The tumor consists of sheets of a monotonous (i.e., similar in size and morphology) population of medium-sized lymphoid cells with high proliferative and apoptotic activity. The "starry sky" appearance seen under low power is due to scattered tingible body-laden macrophages (macrophages containing dead apoptotic tumor cells). The old descriptive term of "small non-cleaved cell" is misleading. The tumor cells are mostly medium in size (i.e., tumor nuclei size similar to that of histiocytes or endothelial cells). "Small non-cleaved cells" are compared to "large non-cleaved cells" of normal germinal center lymphocytes. Tumor cells possess small amounts of basophilic cytoplasm with three to four small nucleoli. The cellular outline usually appears squared off.
Immunohistochemistry
The tumor cells in Burkitt lymphoma generally strongly express markers of B cell differentiation (CD20, CD22, CD19), as well as CD10 and BCL6. The tumor cells are generally negative for BCL2 and TdT. The high mitotic activity of Burkitt lymphoma is confirmed by nearly 100% of the cells staining positive for Ki67.
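The immunophenotype above is essentially a pattern-match over marker results. The Python sketch below shows one way such a panel might be encoded and checked; the function, dictionary, and the 95% Ki67 cutoff are invented for illustration only and are in no way a diagnostic tool.

```python
# Classic Burkitt lymphoma immunophenotype, per the text above:
# positive B-cell markers plus CD10/BCL6, negative BCL2/TdT, Ki67 ~100%.
BURKITT_PROFILE = {
    "CD20": True, "CD22": True, "CD19": True,
    "CD10": True, "BCL6": True,
    "BCL2": False, "TdT": False,
}

def matches_burkitt(stains: dict, ki67_pct: float) -> bool:
    """True if a case's stain results fit the classic profile.
    `stains` maps marker name -> positive?; Ki67 must be near 100%."""
    return all(stains.get(m) == v for m, v in BURKITT_PROFILE.items()) \
        and ki67_pct >= 95.0

case = dict(CD20=True, CD22=True, CD19=True, CD10=True,
            BCL6=True, BCL2=False, TdT=False)
print(matches_burkitt(case, ki67_pct=99.0))  # True
```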
Treatment
In general, the first line of treatment for Burkitt lymphoma is chemotherapy. A few of these regimens are: the GMALL-B-ALL/NHL2002 protocol, the modified Magrath regimen (R-CODOX-M/IVAC), COPADM, hyper-CVAD, and the Cancer and Leukemia Group B (CALGB) 8811 regimen; these can be combined with rituximab. In older patients, treatment may be dose-adjusted EPOCH with rituximab.
The effects of the chemotherapy, as with all cancers, depend on the time of diagnosis. With faster-growing cancers, such as Burkitt lymphoma, the cancer actually responds to treatment faster than slower-growing cancers do. This rapid response to chemotherapy can be hazardous to the patient, as a phenomenon called tumor lysis syndrome can occur; close monitoring of the patient and adequate hydration are essential during the process. Since Burkitt lymphoma has a high propensity to spread to the central nervous system (lymphomatous meningitis), intrathecal chemotherapy with methotrexate and/or ARA-C and/or prednisolone is given along with systemic chemotherapy.
Chemotherapy agents used include:
cyclophosphamide
doxorubicin
vincristine
methotrexate
cytarabine
ifosfamide
etoposide
rituximab
Other treatments for Burkitt lymphoma include immunotherapy, bone marrow transplants, stem cell transplants, surgery to remove the tumor, and radiotherapy.
Prognosis
The prognosis for Burkitt lymphoma is better in children; increased age is a negative prognostic indicator.
The overall cure rate for sporadic Burkitt lymphoma in developed countries is about 90%. In adults, Burkitt lymphoma is uncommon and has poorer outcomes. An unsatisfactory initial response to treatment with cyclophosphamide, vincristine, and prednisolone indicates a negative prognosis. Furthermore, failed initial treatment and the return of Burkitt lymphoma within six months serve as poor prognostic indicators. The adequate utilization of therapeutic drugs during initial treatment limits additional treatment options if the disease returns; notably, in areas of the world where the initial treatment of Burkitt lymphoma is inadequate, further treatment options may remain for cases in which the disease returns. Laboratory studies such as lactate dehydrogenase (LDH), CD4 count, and other cytogenetic studies are also prognostic indicators. Unsatisfactory outcomes have been associated with an LDH more than two times the upper limit of normal. A decreased CD4 count is specifically associated with a poor prognosis in the immunodeficiency-associated variant of Burkitt lymphoma. Genetic mutations extending beyond the previously described MYC translocation may also serve as negative prognostic indicators; notable genetic findings that may be associated with poor outcomes include 13q deletion, 7q gain, ID3 and CCND3 double-hit mutations, and 18q21 CN-LOH mutations. The prognosis for Burkitt lymphoma can be better determined following staging using imaging modalities such as positron emission tomography and computed tomography, where tumor burden and invasion of the central nervous system have been found to indicate a poor prognosis.
Epidemiology
Burkitt lymphoma makes up 1–5% of non-Hodgkin lymphoma (NHL) cases, and it is more common in males than females, with a 3–4 to 1 ratio. The endemic variant mainly impacts areas with an increased prevalence of malaria and EBV, in Africa and Papua New Guinea. For children under 18 years of age in equatorial Africa, the annual incidence of Burkitt lymphoma is 4–5 per 100,000. Additionally, in equatorial Africa, 50% of tumors diagnosed during childhood, as well as 90% of lymphoma cases, can be attributed to Burkitt lymphoma. The peak incidence for endemic Burkitt lymphoma is from ages 4 to 7, with an average age of 6 years </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Interstitial nephritis and can you explain it in depth? [/INST] Interstitial nephritis, also known as tubulointerstitial nephritis, is inflammation of the area of the kidney known as the renal interstitium, which consists of a collection of cells, extracellular matrix, and fluid surrounding the renal tubules. In addition to providing scaffolding support for the tubular architecture, the interstitium has been shown to participate in the fluid and electrolyte exchange as well as the endocrine functions of the kidney.
There are a variety of known factors that can provoke the inflammatory process within the renal interstitium, including pharmacologic, environmental, infectious, and systemic-disease contributors. The spectrum of disease presentation can range from an acute process to a chronic condition with progressive tubular cell damage and renal dysfunction.
Signs and symptoms
Interstitial nephritis may present with a variety of signs and symptoms, many of them nonspecific. Fever is the most common symptom, occurring in 30-50% of patients, particularly those with drug-induced interstitial nephritis. Other general symptoms that occur with variable frequency include nausea, vomiting, fatigue, lack of appetite, and weight loss. More specific symptoms, such as flank pain, pain with urination, and visible blood in the urine, as well as signs like hypertension, can be helpful in increasing suspicion for the diagnosis. The "classic" triad of symptoms reported in early documented cases consisted of rash, joint pain, and increased eosinophils in the blood; however, more recent epidemiology suggests that this grouping of symptoms occurs in only a small minority (5-10%) of patients. With modern drugs causing between 70 and 90% of current cases, the possibility of a change in presentation exists.
Causes
Common causes include infection, or a reaction to a medication such as an analgesic, an anti-inflammatory, or an antibiotic such as methicillin (meticillin). Reactions to medications cause 71% to 92% of cases.
Drugs
This disease is also caused by other diseases and toxins that damage the kidney. Both acute and chronic tubulointerstitial nephritis can be caused by a bacterial infection in the kidneys known as pyelonephritis, but the most common cause is an adverse reaction to a medication. The medications known to cause this sort of reaction are β-lactam antibiotics such as penicillin and cephalexin, and nonsteroidal anti-inflammatory drugs (aspirin less frequently than others), as well as proton-pump inhibitors, rifampicin, sulfa medications, fluoroquinolones, diuretics, allopurinol, mesalamine, and phenytoin. The time between exposure to the drug and the development of acute tubulointerstitial nephritis can be anywhere from 5 days to 5 months (fenoprofen-induced).
Diagnosis
The condition can appear without symptoms. When present, symptoms vary widely and can occur rapidly or gradually.
When caused by an allergic reaction, the symptoms of acute tubulointerstitial nephritis are fever (27% of patients), rash (15% of patients), and enlarged kidneys. Some people experience dysuria and lower back pain.
In chronic tubulointerstitial nephritis the patient can experience symptoms such as nausea, vomiting, fatigue, and weight loss. Other conditions that may develop include a high concentration of potassium in the blood, metabolic acidosis, and kidney failure.
Blood tests
About 23% of patients have a high level of eosinophils in the blood.
Urinary findings
Urinary findings include:
Eosinophiluria: Original studies of methicillin-induced AIN showed a sensitivity of 67% and a specificity of 83%. The sensitivity is higher in patients with interstitial nephritis induced by methicillin or when Hansel's stain is used. However, a 2013 study showed the sensitivity and specificity of urine eosinophil testing to be 35.6% and 68%, respectively (see the worked example after this list).
Isosthenuria
Blood in the urine and occasional RBC casts
Sterile pyuria: white blood cells and no bacteria. About 50% of patients with AIN have pyuria.
Nephrotic-range amounts of protein in the urine may be seen with NSAID-associated AIN. Low-grade proteinuria may be seen in a majority of other causes of AIN.
White blood cell (WBC) casts: Fewer than one in five patients with AIN have WBC casts in urine.
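Sensitivity and specificity figures like those quoted for eosinophiluria only translate into clinical usefulness once disease prevalence is factored in. The Python sketch below works through that calculation via Bayes' rule; the 15% pre-test probability is an invented illustration, not a figure from the studies cited above, and the function name is hypothetical.

```python
def predictive_values(sens: float, spec: float, prevalence: float):
    """Positive/negative predictive value from sensitivity, specificity,
    and pre-test probability (prevalence), via Bayes' rule."""
    tp = sens * prevalence              # true positives
    fp = (1 - spec) * (1 - prevalence)  # false positives
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# 2013 figures for urine eosinophils (sensitivity 35.6%, specificity 68%),
# assuming a hypothetical 15% pre-test probability of AIN:
ppv, npv = predictive_values(0.356, 0.68, 0.15)
print(f"PPV ~{ppv:.0%}, NPV ~{npv:.0%}")  # PPV ~16%, NPV ~86%
```

At these numbers a positive test barely moves the needle, which is consistent with the text's point that urine eosinophils are an unreliable test for AIN.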
Pathology
While non-invasive patient evaluation (physical examination, blood and urine testing, imaging studies) can be suggestive, the only way to definitively diagnose interstitial nephritis is with a tissue diagnosis obtained by kidney biopsy. Pathologic examination will reveal the presence of interstitial edema and inflammatory infiltration with various white blood cells, including neutrophils, eosinophils, and lymphocytes. Generally, blood vessels and glomeruli are not affected. Electron microscopy shows mitochondrial damage in the tubular epithelial cells, vacuoles in the cytoplasm, and enlarged endoplasmic reticulum.
Gallium scan
The sensitivity of an abnormal gallium scan has been reported to range from 60% to 100%. One study of gallium scans in 76 patients (23 with AIN, 8 with biopsy-proven AIN) showed an AUC of 0.75.
Novel biomarkers
Given the challenges of diagnosing AIN clinically, owing to its lack of distinctive clinical features and the limited accuracy of existing tests, there has been significant interest in identifying non-invasive biomarkers for this disease. One study showed that monocyte chemotactic protein-1 (chemokine CCL-2) and neutrophil gelatinase-associated lipocalin (NGAL) were higher in patients with AIN than in controls (in this case, healthy participants). A more recent study showed that the urine cytokines interleukin-9 and tumor necrosis factor-α were higher in patients with AIN than in controls without AIN who underwent a biopsy for evaluation of acute kidney injury, with an AUC of 0.79 </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Abnormal posturing and can you explain it in depth? [/INST] Abnormal posturing is an involuntary flexion or extension of the arms and legs, indicating severe brain injury. It occurs when one set of muscles becomes incapacitated while the opposing set is not, and an external stimulus such as pain causes the working set of muscles to contract. The posturing may also occur without a stimulus. Since posturing is an important indicator of the amount of damage that has occurred to the brain, it is used by medical professionals to measure the severity of a coma with the Glasgow Coma Scale (for adults) and the Pediatric Glasgow Coma Scale (for infants).
The presence of abnormal posturing indicates a severe medical emergency requiring immediate medical attention. Decerebrate and decorticate posturing are strongly associated with poor outcome in a variety of conditions. For example, near-drowning patients that display decerebrate or decorticate posturing have worse outcomes than those that do not. Changes in the condition of the patient may cause alternation between different types of posturing.
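Since the Glasgow Coma Scale's motor component is what encodes posturing, a small lookup sketch may help make the scoring concrete. The standard adult GCS motor scale is well established, but the Python encoding below is illustrative only and not a clinical tool; it matches the scores of three and two given in the sections that follow.

```python
# Adult GCS motor component (best motor response, scored 1-6).
# Decorticate (abnormal flexion) scores 3; decerebrate (extension)
# scores 2, matching the scores cited in the sections below.
GCS_MOTOR = {
    "obeys commands":                 6,
    "localizes to pain":              5,
    "withdraws from pain":            4,
    "abnormal flexion (decorticate)": 3,
    "extension (decerebrate)":        2,
    "no motor response":              1,
}

print(GCS_MOTOR["abnormal flexion (decorticate)"])  # 3
print(GCS_MOTOR["extension (decerebrate)"])         # 2
```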
Types
Three types of abnormal posturing are decorticate posturing, with the arms flexed over the chest; decerebrate posturing, with the arms extended at the sides; and opisthotonus, in which the head and back are arched backward.
Decorticate
Decorticate posturing is also called decorticate response, decorticate rigidity, flexor posturing, or, colloquially, "mummy baby". Patients with decorticate posturing present with the arms flexed, or bent inward on the chest, the hands clenched into fists, and the legs extended with the feet turned inward. A person displaying decorticate posturing in response to pain gets a score of three in the motor section of the Glasgow Coma Scale, because of the muscle flexion elicited by the neuromuscular response to the trauma. There are two parts to decorticate posturing.
The first is disinhibition of the red nucleus with facilitation of the rubrospinal tract. The rubrospinal tract facilitates motor neurons in the cervical spinal cord supplying the flexor muscles of the upper extremities. In the upper extremities, the flexion bias of the rubrospinal and medullary reticulospinal tracts outweighs the extension bias of the medial and lateral vestibulospinal and pontine reticulospinal tracts.
The second component of decorticate posturing is disruption of the lateral corticospinal tract, which facilitates motor neurons in the lower spinal cord supplying the flexor muscles of the lower extremities. Since the corticospinal tract is interrupted, the extension-biased pontine reticulospinal and medial and lateral vestibulospinal tracts greatly overwhelm the flexion-biased medullary reticulospinal tract. The effect on these two tracts (corticospinal and rubrospinal) of lesions above the red nucleus is what leads to the characteristic flexor posturing of the upper extremities and extensor posturing of the lower extremities. Decorticate posturing indicates that there may be damage to areas including the cerebral hemispheres, the internal capsule, and the thalamus. It may also indicate damage to the midbrain. While decorticate posturing is still an ominous sign of severe brain damage, decerebrate posturing is usually indicative of more severe damage: the rubrospinal tract, and hence the red nucleus, is also involved, indicating a lesion lower in the brainstem.
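Both posturing types correspond to fixed points on the Glasgow Coma Scale motor component: 3 for the abnormal flexion described here, and 2 for the abnormal extension covered in the next section. For reference, the standard motor scale written as a lookup table (the full six-level scale is general GCS knowledge, not spelled out in the text above):

GCS_MOTOR = {
    6: "obeys commands",
    5: "localizes to pain",
    4: "withdraws from pain",
    3: "abnormal flexion (decorticate posturing)",    # as described above
    2: "abnormal extension (decerebrate posturing)",  # next section
    1: "no motor response",
}

print(GCS_MOTOR[3])
print(GCS_MOTOR[2])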
Decerebrate
Decerebrate posturing is also called decerebrate response, decerebrate rigidity, or extensor posturing. It describes the involuntary extension of the upper extremities in response to external stimuli. In decerebrate posturing, the head is arched back, the arms are extended by the sides, and the legs are extended. A hallmark of decerebrate posturing is extended elbows. The arms and legs are extended and rotated internally. The patient is rigid, with the teeth clenched. The signs can be present on only one side of the body or on both sides, may be limited to the arms, and may be intermittent. A person displaying decerebrate posturing in response to pain receives a score of two in the motor section of the Glasgow Coma Scale (for adults) and the Pediatric Glasgow Coma Scale (for infants), because of the muscle extension elicited by the neuromuscular response to the trauma. Decerebrate posturing indicates brain stem damage, specifically damage below the level of the red nucleus (e.g. a mid-collicular lesion). It is exhibited by people with lesions or compression in the midbrain and lesions in the cerebellum. Decerebrate posturing is commonly seen in pontine strokes. A patient with decorticate posturing may begin to show decerebrate posturing, or may go from one form of posturing to the other </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Bleeding and can you explain it in depth? [/INST] Bleeding, also called hemorrhage, haemorrhage or blood loss, is blood escaping from the circulatory system through damaged blood vessels. Bleeding can occur internally, or externally either through a natural opening such as the mouth, nose, ear, urethra, vagina or anus, or through a puncture in the skin.
Hypovolemia is a massive decrease in blood volume, and death by excessive loss of blood is referred to as exsanguination. Typically, a healthy person can endure a loss of 10–15% of the total blood volume without serious medical difficulties (by comparison, blood donation typically takes 8–10% of the donor's blood volume). The stopping or controlling of bleeding is called hemostasis and is an important part of both first aid and surgery.
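As a rough sanity check on those percentages, assuming a 70 kg adult and the common estimate of about 70 mL of blood per kg of body weight (both values are illustrative assumptions, not figures from the text):

weight_kg = 70
blood_volume_ml = 70 * weight_kg  # ~4900 mL total, assuming 70 mL/kg

lo, hi = 0.10 * blood_volume_ml, 0.15 * blood_volume_ml
print(f"tolerable 10-15% loss: {lo:.0f}-{hi:.0f} mL")  # 490-735 mL

d_lo, d_hi = 0.08 * blood_volume_ml, 0.10 * blood_volume_ml
print(f"typical 8-10% donation: {d_lo:.0f}-{d_hi:.0f} mL")  # 392-490 mL

This is consistent with a standard whole-blood donation unit of roughly 450-500 mL.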
Types
Upper head
Intracranial hemorrhage – bleeding in the skull.
Cerebral hemorrhage – a type of intracranial hemorrhage, bleeding within the brain tissue itself.
Intracerebral hemorrhage – bleeding in the brain caused by the rupture of a blood vessel within the head. See also hemorrhagic stroke.
Subarachnoid hemorrhage (SAH) implies the presence of blood within the subarachnoid space from some pathologic process. The common medical use of the term SAH refers to the nontraumatic types of hemorrhages, usually from rupture of a berry aneurysm or arteriovenous malformation (AVM). The scope of this article is limited to these nontraumatic hemorrhages.
Eyes
Subconjunctival hemorrhage – a bloody eye arising from a broken blood vessel in the sclera (the white of the eye); often the result of straining, such as from sneezing, coughing, or vomiting
Nose
Epistaxis – nosebleed
Mouth
Tooth eruption or tooth loss – bleeding from the socket as a tooth erupts or is lost
Hematemesis – vomiting fresh blood
Hemoptysis – coughing up blood from the lungs
Lungs
Pulmonary hemorrhage
Gastrointestinal
Upper gastrointestinal bleed
Lower gastrointestinal bleed
Occult gastrointestinal bleed
Urinary tract
Hematuria – blood in the urine from urinary bleeding
Gynecologic
Vaginal bleeding
Postpartum hemorrhage
Breakthrough bleeding
Ovarian bleeding – This is a potentially catastrophic and not so rare complication among lean patients with polycystic ovary syndrome undergoing transvaginal oocyte retrieval.
Anus
Melena – upper gastrointestinal bleeding
Hematochezia – lower gastrointestinal bleeding, or brisk upper gastrointestinal bleeding
Vascular
Ruptured aneurysm
Aortic transection
Iatrogenic injury
Causes
Bleeding arises due to either traumatic injury, underlying medical condition, or a combination.
Traumatic injury
Traumatic bleeding is caused by some type of injury. There are different types of wounds which may cause traumatic bleeding. These include:
Abrasion – Also called a graze, this is caused by transverse action of a foreign object against the skin, and usually does not penetrate below the epidermis.
Excoriation – In common with abrasion, this is caused by mechanical destruction of the skin, although it usually has an underlying medical cause.
Hematoma – Caused by damage to a blood vessel that in turn causes blood to collect in an enclosed area.
Laceration – Irregular wound caused by blunt impact to soft tissue overlying hard tissue or tearing such as in childbirth. In some instances, this can also be used to describe an incision.
Incision – A cut into a body tissue or organ, such as by a scalpel, made during surgery.
Puncture Wound – Caused by an object that penetrated the skin and underlying layers, such as a nail, needle or knife.
Contusion – Also known as a bruise, this is a blunt trauma damaging tissue under the surface of the skin.
Crushing Injuries – Caused by a great or extreme amount of force applied over a period of time. The extent of a crushing injury may not immediately present itself.
Ballistic Trauma – Caused by a projectile weapon such as a firearm. This may include two external wounds (entry and exit) and a contiguous wound between the two. The pattern of injury, evaluation and treatment will vary with the mechanism of the injury. Blunt trauma causes injury via a shock effect, delivering energy over an area. Wounds are often not straight and unbroken skin may hide significant injury. Penetrating trauma follows the course of the injurious device. As the energy is applied in a more focused fashion, it requires less energy to cause significant injury. Any body organ, including bone and brain, can be injured and bleed. Bleeding may not be readily apparent; internal organs such as the liver, kidney and spleen may bleed into the abdominal cavity. The only apparent signs may come with blood loss. Bleeding from a bodily orifice, such as the rectum, nose, or ears may signal internal bleeding, but cannot be relied upon. Bleeding from a medical procedure also falls into this category.
Medical condition
"Medical bleeding" denotes hemorrhage as a result of an underlying medical condition (i.e. causes of bleeding that are not directly due to trauma). Blood can escape from blood vessels as a result of 3 basic patterns of injury:
Intravascular changes – changes of the blood within vessels (e.g. ↑ blood pressure, ↓ clotting factors)
Intramural changes – changes arising within the walls of blood vessels (e.g. aneurysms, dissections, AVMs, vasculitides)
Extravascular changes – changes arising outside blood vessels (e.g. H. pylori infection, brain abscess, brain tumor). The underlying scientific basis for blood clotting and hemostasis is discussed in detail in the articles on coagulation and hemostasis; the discussion here is limited to the common practical aspects of blood clot formation which manifest as bleeding.
Some medical conditions can also make patients susceptible to bleeding. These are conditions that affect the normal hemostatic (bleeding-control) functions of the body. Such conditions either are, or cause, bleeding diatheses. Hemostasis involves several components. The main components of the hemostatic system include platelets and the coagulation system.
Platelets are small blood components that form a plug in the blood vessel wall that stops bleeding. Platelets also produce a variety of substances that stimulate the production of a blood clot. One of the most common causes of increased bleeding risk is exposure to nonsteroidal anti-inflammatory drugs (NSAIDs). The prototype for these drugs is aspirin, which inhibits the production of thromboxane. NSAIDs inhibit the activation of platelets, and thereby increase the risk of bleeding. The effect of aspirin is irreversible; therefore, the inhibitory effect of aspirin is present until the platelets have been replaced (about ten days). Other NSAIDs, such as ibuprofen (Motrin) and related drugs, are reversible, and therefore the effect on platelets is not as long-lived. There are several named coagulation factors that interact in a complex way to form blood clots, as discussed in the article on coagulation. Deficiencies of coagulation factors are associated with clinical bleeding. For instance, deficiency of Factor VIII causes classic hemophilia A, while deficiency of Factor IX causes "Christmas disease" (hemophilia B). Antibodies to Factor VIII can also inactivate Factor VIII and precipitate bleeding that is very difficult to control. This is a rare condition that is most likely to occur in older patients and in those with autoimmune diseases. Another common bleeding disorder is von Willebrand disease. It is caused by a deficiency or abnormal function of the von Willebrand factor, which is involved in platelet activation. Deficiencies in other factors, such as factor XIII or factor VII, are occasionally seen, but may not be associated with severe bleeding and are not as commonly diagnosed.
In addition to NSAID-related bleeding, another common cause of bleeding is that related to the medication warfarin (Coumadin and others). This medication needs to be closely monitored, as the bleeding risk can be markedly increased by interactions with other medications. Warfarin acts by antagonizing vitamin K, which is required for the production of the clotting factors II, VII, IX, and X in the liver. One of the most common causes of warfarin-related bleeding is taking antibiotics: the gut bacteria that make vitamin K are killed by antibiotics, which decreases vitamin K levels and therefore the production of these clotting factors.
Deficiencies of platelet function may require platelet transfusion while deficiencies of clotting factors may require transfusion of either fresh frozen plasma or specific clotting factors, such as Factor VIII for patients with hemophilia.
Infection
Infectious diseases such as Ebola, Marburg virus disease and yellow fever can cause bleeding.
Diagnosis/Imaging
Dioxaborolane chemistry enables radioactive fluoride (18F) labeling of red blood cells, which allows for positron emission tomography (PET) imaging of intracerebral hemorrhages.
Classification
Blood loss
Hemorrhaging is broken down into four classes by the American College of Surgeons' Advanced Trauma Life Support (ATLS) classification; a sketch of the corresponding threshold logic follows the list below.
Class I Hemorrhage involves up to 15% of blood volume. There is typically no change in vital signs and fluid resuscitation is not usually necessary.
Class II Hemorrhage involves 15-30% of total blood volume. A patient is often tachycardic (rapid heartbeat) with a reduction in the difference between the systolic and diastolic blood pressures. The body attempts to compensate with peripheral vasoconstriction. Skin may start to look pale and be cool to the touch. The patient may exhibit slight changes in behavior. Volume resuscitation with crystalloids (saline solution or lactated Ringer's solution) is all that is typically required. Blood transfusion is not usually required.
Class III Hemorrhage involves loss of 30-40% of circulating blood volume </s> |
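A minimal sketch of the ATLS classes above expressed as threshold logic. The text is truncated after Class III, so the Class IV branch (loss above 40%) is filled in from the standard ATLS scheme rather than from the text:

def atls_class(percent_blood_loss):
    # Boundaries follow the ATLS blood-loss classes described above.
    if percent_blood_loss <= 15:
        return "I"    # vital signs typically unchanged
    if percent_blood_loss <= 30:
        return "II"   # tachycardia, narrowed pulse pressure
    if percent_blood_loss <= 40:
        return "III"
    return "IV"       # beyond the truncated text; standard ATLS upper class

print([atls_class(p) for p in (10, 20, 35, 50)])  # ['I', 'II', 'III', 'IV']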
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Malingering and can you explain it in depth? [/INST] Malingering is the fabrication, feigning, or exaggeration of physical or psychological symptoms designed to achieve a desired outcome, such as relief from duty or work. Malingering is not a medical diagnosis, but may be recorded as a "focus of clinical attention" or a "reason for contact with health services". Malingering is categorized as distinct from other forms of excessive illness behavior such as somatization disorder and factitious disorder, although not all mental health professionals agree with this formulation. Failure to detect actual cases of malingering imposes an economic burden on health care systems, workers' compensation programs, and disability programs, such as Social Security Disability Insurance and veterans' disability benefits. False accusations of malingering often harm genuine patients or claimants.
History
Antiquity
According to 1 Samuel in the Old Testament, King David feigned madness to Achish, the king of the Philistines </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Dysphagia and can you explain it in depth? [/INST] Dysphagia is difficulty in swallowing. Although classified under "symptoms and signs" in ICD-10, in some contexts it is classified as a condition in its own right. It may be a sensation that suggests difficulty in the passage of solids or liquids from the mouth to the stomach, a lack of pharyngeal sensation, or various other inadequacies of the swallowing mechanism. Dysphagia is distinguished from other symptoms including odynophagia, which is defined as painful swallowing, and globus, which is the sensation of a lump in the throat. A person can have dysphagia without odynophagia (dysfunction without pain), odynophagia without dysphagia (pain without dysfunction) or both together. A psychogenic dysphagia is known as phagophobia.
Classification
Dysphagia is classified into the following major types:
Oropharyngeal dysphagia
Esophageal and obstructive dysphagia
Neuromuscular symptom complexes
Functional dysphagia is diagnosed in some patients when no organic cause for the dysphagia can be found.
Signs and symptoms
Some patients have limited awareness of their dysphagia, so lack of the symptom does not exclude an underlying disease. When dysphagia goes undiagnosed or untreated, patients are at high risk of pulmonary aspiration and subsequent aspiration pneumonia secondary to food or liquids going the wrong way into the lungs. Some people present with "silent aspiration" and do not cough or show outward signs of aspiration. Undiagnosed dysphagia can also result in dehydration, malnutrition, and kidney failure. Some signs and symptoms of oropharyngeal dysphagia include difficulty controlling food or saliva in the mouth, difficulty initiating a swallow, coughing, choking, frequent pneumonia, unexplained weight loss, a gurgly or wet voice after swallowing, nasal regurgitation, and patient complaint of swallowing difficulty. When asked where the food is getting stuck, patients will often point to the cervical (neck) region as the site of the obstruction. The actual site of obstruction is always at or below the level at which the obstruction is perceived. The most common symptom of esophageal dysphagia is the inability to swallow solid food, which the patient will describe as becoming stuck or held up before it either passes into the stomach or is regurgitated. Pain on swallowing, or odynophagia, is a distinctive symptom that can be highly indicative of carcinoma, although it also has numerous other causes that are not related to cancer. Achalasia is a major exception to the usual pattern of dysphagia in that swallowing of fluid tends to cause more difficulty than swallowing solids. In achalasia, there is idiopathic destruction of the parasympathetic ganglia of Auerbach's (myenteric) plexus of the entire esophagus, which results in functional narrowing of the lower esophagus and peristaltic failure throughout its length.
Complications
Complications of dysphagia may include aspiration, pneumonia, dehydration, and weight loss.
Cause
[The table of possible causes of dysphagia in the original article is not reproduced here; common causes are listed under differential diagnosis below.]
Difficulty with or inability to swallow may be caused or exacerbated by the use of opiate and/or opioid drugs.
Diagnosis
Esophagoscopy and laryngoscopy can give a direct view of the lumens.
Esophageal motility study is useful in cases of esophageal achalasia and diffuse esophageal spasms.
Exfoliative cytology can be performed on esophageal lavage obtained by esophagoscopy. It can detect malignant cells at an early stage.
Ultrasonography and CT scanning are not very useful in finding causes of dysphagia, but can detect masses in the mediastinum and aortic aneurysms.
FEES (fibreoptic endoscopic evaluation of swallowing), sometimes with sensory evaluation, is usually done by a medical speech pathologist or deglutologist. This procedure involves the patient eating foods of different consistencies, as above.
Swallowing sounds and vibrations could potentially be used for dysphagia screening, but these approaches are in the early research stages.
Differential diagnosis
All causes of dysphagia are considered as differential diagnoses. Some common ones are:
Esophageal atresia
Paterson-Kelly syndrome
Zenker's diverticulum
Esophageal varices
Benign strictures
Achalasia
Esophageal diverticula
Scleroderma
Diffuse esophageal spasm
Polymyositis
Webs and rings
Esophageal cancer
Eosinophilic esophagitis
Hiatus hernia, especially paraesophageal type
Dysphagia lusoria
Stroke
Fahr's disease
Wernicke encephalopathy
Charcot–Marie–Tooth disease
Parkinson's disease
Multiple sclerosis
Amyotrophic lateral sclerosis
Rabies
Cervical spondylosis
Esophageal dysphagia is almost always caused by disease in or adjacent to the esophagus, but occasionally the lesion is in the pharynx or stomach. In many of the pathological conditions causing dysphagia, the lumen becomes progressively narrowed and indistensible. Initially only fibrous solids cause difficulty, but later the problem can extend to all solids and eventually even to liquids. Patients with difficulty swallowing may benefit from thickened fluids if they are more comfortable with those liquids, although, so far, there is no scientific study proving that those thickened liquids are beneficial.
Dysphagia may manifest as the result of autonomic nervous system pathologies, including stroke and ALS, or due to rapid iatrogenic correction of an electrolyte imbalance. In older adults, presbyphagia (the normal, healthy changes in swallowing associated with age) should be considered as an alternative explanation for symptoms.
Treatments
There are many ways to treat dysphagia, such as swallowing therapy, dietary changes, feeding tubes, certain medications, and surgery. Treatment for dysphagia is managed by a group of specialists known as a multidisciplinary team. Members of the multidisciplinary team include: a speech language pathologist specializing in swallowing disorders (swallowing therapist), primary physician, gastroenterologist, nursing staff, respiratory therapist, dietitian, occupational therapist, physical therapist, pharmacist, and radiologist. The role of the members of the multidisciplinary team will differ depending on the type of swallowing disorder present. For example, the swallowing therapist will be directly involved in the treatment of a patient with oropharyngeal dysphagia, while a gastroenterologist will be directly involved in the treatment of an esophageal disorder.
Treatment strategies
The implementation of a treatment strategy should be based on a thorough evaluation by the multidisciplinary team. Treatment strategies will differ on a patient to patient basis and should be structured to meet the specific needs of each individual patient. Treatment strategies are chosen based on a number of different factors including diagnosis, prognosis, reaction to compensatory strategies, severity of dysphagia, cognitive status, respiratory function, caregiver support, and patient motivation and interest.
Oral vs. nonoral feeding
Adequate nutrition and hydration must be preserved at all times during dysphagia treatment. The overall goal of dysphagia therapy is to maintain, or return the patient to, oral feeding. However, this must be done while ensuring adequate nutrition and hydration and a safe swallow (no aspiration of food into the lungs) </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Kidney disease and can you explain it in depth? [/INST] Kidney disease, or renal disease, technically referred to as nephropathy, is damage to or disease of a kidney. Nephritis is an inflammatory kidney disease and has several types according to the location of the inflammation. Inflammation can be diagnosed by blood tests. Nephrosis is non-inflammatory kidney disease. Nephritis and nephrosis can give rise to nephritic syndrome and nephrotic syndrome respectively. Kidney disease usually causes a loss of kidney function to some degree and can result in kidney failure, the complete loss of kidney function. Kidney failure is known as the end-stage of kidney disease, where dialysis or a kidney transplant is the only treatment option.
Chronic kidney disease is defined as prolonged kidney abnormalities (functional and/or structural in nature) that last for more than three months. Acute kidney disease is now termed acute kidney injury and is marked by the sudden reduction in kidney function over seven days. In 2007, about one in eight Americans had chronic kidney disease; this rate is increasing over time, with about 1 in 7 Americans estimated to have CKD as of 2021. With the increasing prevalence of chronic kidney disease, there are many treatment plans available. Individuals can get blood or urine tests to assess the health and function of the kidneys. For severe kidney disease, dialysis and kidney transplants are the most common options. Dialysis is a treatment in which a machine acts as an artificial kidney and removes waste products and fluid from an individual's blood. A kidney transplant is the surgical procedure of transplanting a healthy kidney from a donor into an individual with kidney disease. Both of these options have their pros and cons, and their success varies by individual.
Causes
Causes of kidney disease include deposition of the Immunoglobulin A antibodies in the glomerulus, administration of analgesics, xanthine oxidase deficiency, toxicity of chemotherapy agents, and long-term exposure to lead or its salts. Chronic conditions that can produce nephropathy include systemic lupus erythematosus, diabetes mellitus and high blood pressure (hypertension), which lead to diabetic nephropathy and hypertensive nephropathy, respectively.
Analgesics
One cause of nephropathy is the long term usage of pain medications known as analgesics. The pain medicines which can cause kidney problems include aspirin, acetaminophen, and nonsteroidal anti-inflammatory drugs (NSAIDs). This form of nephropathy is "chronic analgesic nephritis," a chronic inflammatory change characterized by loss and atrophy of tubules and interstitial fibrosis and inflammation (BRS Pathology, 2nd edition).
Specifically, long-term use of the analgesic phenacetin has been linked to renal papillary necrosis (necrotizing papillitis).
Diabetes
Diabetic nephropathy is a progressive kidney disease caused by angiopathy of the capillaries in the glomeruli. It is characterized by nephrotic syndrome and diffuse scarring of the glomeruli. It is particularly associated with poorly managed diabetes mellitus and is a primary reason for dialysis in many developed countries. It is classified as a small blood vessel complication of diabetes.
Autosomal dominant polycystic kidney disease
Gabow (1990) discusses autosomal dominant polycystic kidney disease and its genetic basis: "Autosomal dominant polycystic kidney disease (ADPKD) is the most common genetic disease, affecting a half million Americans. The clinical phenotype can result from at least two different gene defects. One gene that can cause ADPKD has been located on the short arm of chromosome 16." The article emphasizes how common the disease is.
Long COVID and Kidney Disease
Yende & Parikh (2021) discuss the effects that COVID-19 can have on a person with a pre-existing kidney condition: people with "frailty, chronic diseases, disability and immunodeficiency are at increased risk of kidney disease and progression to kidney failure, and infection with SARS-CoV-2 can further increase this risk" (Long COVID and Kidney Disease, 2021).
Diet
Higher dietary intake of animal protein, animal fat, and cholesterol may increase risk for microalbuminuria, a sign of kidney function decline, and generally, diets higher in fruits, vegetables, and whole grains but lower in meat and sweets may be protective against kidney function decline. This may be because sources of animal protein, animal fat, and cholesterol, and sweets are more acid-producing, while fruits, vegetables, legumes, and whole grains are more base-producing.
IgA nephropathy
IgA nephropathy is the most common glomerulonephritis throughout the world. Primary IgA nephropathy is characterized by deposition of the IgA antibody in the glomerulus. The classic presentation (in 40-50% of the cases) is episodic frank hematuria which usually starts within a day or two of a non-specific upper respiratory tract infection (hence synpharyngitic), as opposed to post-streptococcal glomerulonephritis, which occurs some time (weeks) after the initial infection. Less commonly, gastrointestinal or urinary infection can be the inciting agent. All of these infections have in common the activation of mucosal defenses and hence IgA antibody production.
Iodinated contrast media
Kidney disease induced by iodinated contrast media (ICM) is called CIN (contrast-induced nephropathy) or contrast-induced AKI (acute kidney injury). Currently, the underlying mechanisms are unclear, but there is a body of evidence that several factors, including induction of apoptosis, play a role.
Lithium
Lithium, a medication commonly used to treat bipolar disorder and schizoaffective disorders, can cause nephrogenic diabetes insipidus; its long-term use can lead to nephropathy.
Lupus
Despite expensive treatments, lupus nephritis remains a major cause of morbidity and mortality in people with relapsing or refractory disease.
Xanthine oxidase deficiency
Another possible cause of kidney disease is decreased function of xanthine oxidase in the purine degradation pathway. Xanthine oxidase degrades hypoxanthine to xanthine and then to uric acid. Xanthine is not very soluble in water; therefore, an increase in xanthine forms crystals (which can lead to kidney stones) and results in damage to the kidney. Xanthine oxidase inhibitors, like allopurinol, can cause nephropathy.
Polycystic disease of the kidneys
An additional possible cause of nephropathy is the formation of cysts, or pockets containing fluid, within the kidneys. These cysts become enlarged with the progression of aging, causing renal failure. Cysts may also form in other organs including the liver, brain, and ovaries. Polycystic kidney disease is a genetic disease caused by mutations in the PKD1, PKD2, and PKHD1 genes. This disease affects about half a million people in the US. Polycystic kidneys are susceptible to infections and cancer.
Toxicity of chemotherapy agents
Nephropathy can be associated with some therapies used to treat cancer. The most common form of kidney disease in cancer patients is acute kidney injury (AKI), which is usually due to volume depletion from the vomiting and diarrhea that occur following chemotherapy, or occasionally due to kidney toxicity of chemotherapeutic agents. Kidney failure from the breakdown of cancer cells, usually after chemotherapy, is unique to onconephrology. Several chemotherapeutic agents, for example cisplatin, are associated with acute and chronic kidney injuries. Newer agents such as anti-vascular endothelial growth factor (anti-VEGF) therapies are also associated with similar injuries, as well as proteinuria, hypertension and thrombotic microangiopathy.
Diagnosis
The standard diagnostic workup of suspected kidney disease includes a medical history, physical examination, a urine test, and an ultrasound of the kidneys (renal ultrasonography). An ultrasound is essential in the diagnosis and management of kidney disease.
Treatment
Treatment approaches for kidney disease focus on managing the symptoms, controlling the progression, and also treating co-morbidities that a person may have.
Dialysis
Transplantation
Millions of people across the world have kidney disease. Of those millions, several thousand will need dialysis or a kidney transplant at the end stage of their disease. In the United States, as of 2008, 16,500 people needed a kidney transplant. Of those, 5,000 died while waiting for a transplant. Currently, there is a shortage of donors, and in 2007 there were only 64,606 kidney transplants in the world. This shortage of donors is causing countries to place monetary value on kidneys </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Pseudallescheria boydii and can you explain it in depth? [/INST] Pseudallescheria boydii is a species of fungus classified in the Ascomycota. It is associated with some forms of eumycetoma/maduromycosis and is the causative agent of pseudallescheriasis. Typically found in stagnant and polluted water, it has been implicated in infections of immunocompromised patients and in pneumonia following near-drowning. Treatment of infections with P. boydii is complicated by resistance to many of the standard antifungal agents normally used to treat infections by filamentous fungi. Pseudallescheria boydii fungal infection was the cause of death of three athletes submerged in the Yarkon River after a bridge collapsed during the 1997 Maccabiah Games.
Taxonomy
The fungus was originally described by American mycologist Cornelius Lott Shear in 1922 as a species of Allescheria. Shear obtained cultures from a patient of the Medical Department of the University of Texas. The microbe was apparently associated with a penetrating thorn the patient had incurred in his ankle while running barefoot 12 years before. The diseased area was found to contain hyphae-containing granules that, when cultured, led to the growth of the organism. Shear considered the fungus most closely related to Eurotiopsis gayoni (now called Allescheria gayoni). The specific epithet boydii refers to Dr. Mark F. Boyd, who sent Shear the specimen. David Malloch moved the species to the newly created genus Petriellidium in 1970; the genus name honours Lionello Petri (1875–1946), an Italian botanist and phytopathologist from Florence. The species was then transferred to the genus Pseudallescheria in 1982, when examination of the type specimens of Petriellidium and Pseudallescheria revealed that they were the same genus.
Ecology
An ability to tolerate minimal aeration and high osmotic pressure enables P. boydii to grow on soil, polluted and stagnant water and manure. Although this fungus is commonly found in temperate climates, it is thermotolerant and can survive in tropical climates and in environments with low oxygen pressure. Growth of P. boydii can be seen in environments where nitrogen-containing compounds are common, usually due to human pollution. Its ability to use natural gas and other volatile organic compounds suggests a capacity for bioremediation.
Growth and morphology
Pseudallescheria boydii is a saprotrophic fungus with broad hyphae growing up to 2–5 μm in width. Colonies change in colour from white to pale brown and develop a cottony texture with maturity. After a 2–3 week incubation period, cleistothecia may form containing asci filled with eight fusiform, one-celled ascospores measuring 12–18 × 9–13 μm in diameter. This fungus grows on most standard media, maturing in 7 days. Its primary nutrients are the sugars xylose, arabinose, glucose, sucrose, ribitol, xylitol and L-arabinitol. It cannot assimilate maltose or lactose; however, it is able to assimilate urea, asparagine, potassium nitrate and ammonium nitrate. The optimal temperature for growth is 25 °C (77 °F) and the fungus is generally considered to be mesophilic, although it can grow at higher temperatures (up to 37 °C (99 °F)) as well. Asexual reproduction manifests in one of two forms: the Scedosporium type (the most common type) and the Graphium type. Scedosporium apiospermum forms greyish-white colonies with a grey-black reverse. The conidia are single-celled, pale brown and oval in form. Their size ranges from 4–9 x 6–10 μm and their development is annellidic.
Pathogenicity
Pseudallescheria boydii is an emerging opportunistic pathogen. Immune response is characterized by TLR2 recognition of P. boydii derived α-glucans, while TLR4 mediates the recognition of P. boydii derived rhamnomannans. Human infection takes one of two forms: mycetoma (99% of infections), a chronic, subcutaneous disease, and pseudallescheriasis, which includes all other forms of the disease, commonly presenting in the central nervous system, lungs, joints and bone. The former can also be distinguished by the presence of sclerotia, or granules, which are typically absent in pseudallescheriasis-type infections. Infection is initiated via inhalation or traumatic implantation in the skin. Infection can lead to arthritis, otitis, endocarditis, sinusitis, and other manifestations. Masses of hyphae can form "fungus balls" in the lungs. While "fungus balls" can also form in other organs, they are commonly derived from host necrotic tissue resulting from nodular infarction and thrombosis of lung vessels following infection. This species is second in prevalence after Aspergillus fumigatus as a fungal pathogen in cystic fibrosis patients. It causes allergic bronchopulmonary disease and chronic lung lesions that resemble aspergillosis. Infections can also occur in immunocompetent individuals, usually in the lungs and upper respiratory tract. Infections in the CNS, which are rare, present as neutrophilic meningitis or multiple brain abscesses and have a mortality rate of up to 75%. Infections have also been observed in animals, notably corneal infection, abdominal mycetoma and disseminated infections in dogs and horses. Transient colonization is more likely than disease. However, invasive pseudallescheriasis can be found in patients with prolonged neutropenia, high-dose corticosteroid therapy and allotransplantation of bone marrow. Pseudallescheria boydii has also been implicated in pneumonia subsequent to near-drowning events, with infection developing anywhere between a few weeks and several months after exposure and yielding high mortality. Dissemination of the organism to the central nervous system has been observed in some cases. This species is also known as a non-invasive colonist of the external ear and airways of patients with poor lung or sinus clearance, and the first documented case of human pseudallescheriasis involved the ear canal. It has also been implicated in infection of joints following traumatic injury, and these infections can progress to osteomyelitis. Infections of the skin and cornea have also been reported. Typical host-related risk factors for infection include lymphopenia, steroid treatment, serum albumin levels of < 3 mg/dL and neutropenia.
Diagnosis
Detection and diagnosis of S. apiospermum is possible through isolation of the fungus in culture or through cytology and histopathology in the tissues of diseased individuals. In mycetoma-type infections, a confluence of symptoms is necessary for diagnosis, including tumefaction, draining sinuses and extrusion of grains. Furthermore, P. boydii grains and hyphae should be cultured and observed microscopically after staining with H&E, periodic acid–Schiff stain, tissue Gram stain or Grocott's methenamine silver stain. A radiological diagnosis may be helpful in elucidating the extent of the disease in terms of bone and soft tissue involvement. Scedosporium-caused eumycetomas have been found to have thick-walled cavities and grains appearing as hyperreflective echoes on scans, while actinomycetomas show fine echoes at the bottom of cavities. Direct detection is possible in samples histochemically stained in 20% KOH followed by fluorescence microscopy with antibody. The characteristic shape, texture and colour of tissues can help identify S. apiospermum grains, which are often surrounded by an eosinophilic zone. Histopathologically, hyalohyphomycotic fungi like Scedosporium spp., Aspergillus spp., Fusarium spp. and Petriella spp. are similar in that they show septation of hyphae at regular intervals, have dichotomous branching and invade blood vessels. However, Scedosporium presents more irregular branching, sometimes with terminal or intercalary chlamydospores. In serum, Scedosporium infections can be detected by counterimmunoelectrophoresis. Molecular diagnostics appear to be promising in complementing current conventional diagnostic methods. Culture detection is accomplished by rinsing "grains" in 70% ethanol and sterile saline solution to avoid bacterial contamination prior to inoculation on growth medium. Selection for Scedosporium growth can be achieved on Leonian's agar supplemented with 10 g/mL benomyl, or on media containing cycloheximide or amphotericin B. Optimal incubation is at a temperature of 25–35 °C (77–95 °F).
Treatment
Pseudallescheria boydii is resistant to amphotericin B and nearly all other antifungal drugs. Consequently, there is currently no consistently effective antifungal therapy for this agent </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Polydipsia and can you explain it in depth? [/INST] Polydipsia is excessive thirst or excess drinking. The word derives from the Greek πολυδίψιος (poludípsios) "very thirsty", which is derived from πολύς (polús, "much, many") + δίψα (dípsa, "thirst"). Polydipsia is a nonspecific symptom in various medical disorders. It also occurs as an abnormal behaviour in some non-human animals, such as in birds.
Causes
Diabetes
Polydipsia can be characteristic of diabetes mellitus, often as an initial symptom. It is observed in cases of poorly controlled diabetes, which is sometimes the result of low patient adherence to anti-diabetic medication. Diabetes insipidus ("tasteless" diabetes, as opposed to diabetes mellitus) can also cause polydipsia.
Other physiological causes
It can also be caused by a change in the osmolality of the extracellular fluids of the body, hypokalemia, decreased blood volume (as occurs during major hemorrhage), and other conditions that create a water deficit. This is usually a result of osmotic diuresis.
Polydipsia is also a symptom of anticholinergic poisoning. Zinc is known to reduce symptoms of polydipsia by causing the body to absorb fluids more efficiently (reducing diarrhea and inducing constipation) and to retain more sodium; thus a zinc deficiency is a possible cause. The combination of polydipsia and (nocturnal) polyuria is also seen in (primary) hyperaldosteronism (which often goes with hypokalemia).
Antipsychotics can have side effects such as dry mouth that may make the patient feel thirsty.
Primary polydipsia
Primary polydipsia describes excessive thirst and water intake occurring in the absence of physiological stimuli to drink </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Nephroptosis and can you explain it in depth? [/INST] Nephroptosis is a rare, abnormal condition in which the kidney drops down into the pelvis when the patient stands up. It is more common in women than in men. It has been one of the most controversial conditions in terms of both its diagnosis and its treatments.
Symptoms and signs
Nephroptosis is asymptomatic in most persons. However, nephroptosis can be characterized by violent attacks of colicky flank pain, nausea, chills, hypertension, hematuria and proteinuria. Persons with symptomatic nephroptosis often complain of sharp pains that radiate into the groin. Many persons also describe a weighty, dragging sensation in the abdomen. Pain is typically relieved by lying down. It is believed that flank pain on standing that is relieved by lying down is due to movement of the kidney causing intermittent renal tract obstruction </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Mosaic (genetics) and can you explain it in depth? [/INST] Mosaicism or genetic mosaicism is a condition in multicellular organisms in which a single organism possesses more than one genetic line as the result of genetic mutation. This means that various genetic lines resulted from a single fertilized egg. Genetic mosaics may often be confused with chimerism, in which two or more genotypes arise in one individual similarly to mosaicism. In chimerism, though, the two genotypes arise from the fusion of more than one fertilized zygote in the early stages of embryonic development, rather than from a mutation or chromosome loss.
Genetic mosaicism can result from many different mechanisms including chromosome nondisjunction, anaphase lag, and endoreplication. Anaphase lagging is the most common way by which mosaicism arises in the preimplantation embryo. Mosaicism can also result from a mutation in one cell during development, in which case the mutation will be passed on only to its daughter cells (and will be present only in certain adult cells). Somatic mosaicism is not generally inheritable as it does not generally affect germ cells.
History
In 1929, Alfred Sturtevant studied mosaicism in Drosophila, a genus of fly. Muller in 1930 demonstrated that mosaicism in Drosophila is always associated with chromosomal rearrangements, and Schultz in 1936 showed that in all cases studied these rearrangements were associated with heterochromatic inert regions; several hypotheses on the nature of such mosaicism were proposed. One hypothesis assumed that mosaicism appears as the result of a break and loss of chromosome segments. Curt Stern in 1935 assumed that the structural changes in the chromosomes took place as a result of somatic crossing over, producing mutations or small chromosomal rearrangements in somatic cells; thus the inert region causes an increase in the frequency of mutations or small chromosomal rearrangements in active segments adjacent to inert regions. In the 1930s, Stern demonstrated that genetic recombination, normal in meiosis, can also take place in mitosis. When it does, it results in somatic (body) mosaics. These organisms contain two or more genetically distinct types of tissue. The term somatic mosaicism was used by C. W. Cotterman in 1956 in his seminal paper on antigenic variation. In 1944, Belgovskii proposed that mosaicism could not account for certain mosaic expressions caused by chromosomal rearrangements involving heterochromatic inert regions. The associated weakening of biochemical activity led to what he called a genetic chimera.
Types
Germline mosaicism
Germline or gonadal mosaicism is a particular form of mosaicism wherein some gametes—i.e., sperm or oocytes—carry a mutation, but the rest are normal. The cause is usually a mutation that occurred in an early stem cell that gave rise to all or part of the gametes.
Somatic mosaicism
Somatic mosaicism occurs when the somatic cells of the body are of more than one genotype. In the more common mosaics, different genotypes arise from a single fertilized egg cell, due to mitotic errors at first or later cleavages.
Somatic mutation leading to mosaicism is prevalent in the beginning and end stages of human life. Somatic mosaics are common in embryogenesis due to retrotransposition of long interspersed nuclear element-1 (LINE-1 or L1) and Alu transposable elements. In early development, DNA from undifferentiated cell types may be more susceptible to mobile element invasion due to long, unmethylated regions in the genome. Further, the accumulation of DNA copy errors and damage over a lifetime leads to greater occurrences of mosaic tissues in aging humans. As longevity has increased dramatically over the last century, the human genome may not have had time to adapt to the cumulative effects of mutagenesis. Thus, cancer research has shown that somatic mutations are increasingly present throughout a lifetime and are responsible for most leukemias, lymphomas, and solid tumors.
Trisomies, monosomies and related conditions
The most common form of mosaicism found through prenatal diagnosis involves trisomies. Although most forms of trisomy are due to problems in meiosis and affect all cells of the organism, some cases occur where the trisomy occurs in only a selection of the cells. This may be caused by a nondisjunction event in an early mitosis, resulting in a loss of a chromosome from some trisomic cells. Generally, this leads to a milder phenotype than in nonmosaic patients with the same disorder.
In rare cases, intersex conditions can be caused by mosaicism where some cells in the body have XX and others XY chromosomes (46, XX/XY). In the fruit fly Drosophila melanogaster, where a fly possessing two X chromosomes is a female and a fly possessing a single X chromosome is a sterile male, a loss of an X chromosome early in embryonic development can result in sexual mosaics, or gynandromorphs. Likewise, a loss of the Y chromosome can result in XY/X mosaic males. An example of this is one of the milder forms of Klinefelter syndrome, called 46,XY/47,XXY mosaicism, wherein some of the patient's cells contain XY chromosomes and some contain XXY chromosomes. The 46/47 annotation indicates that the XY cells have the normal number of 46 total chromosomes, and the XXY cells have a total of 47 chromosomes.
Monosomies can also present with some form of mosaicism. The only non-lethal full monosomy occurring in humans is the one causing Turner's syndrome. Around 30% of Turner's syndrome cases demonstrate mosaicism, while complete monosomy (45,X) occurs in about 50–60% of cases.
Mosaicism need not necessarily be deleterious, though. Revertant somatic mosaicism is a rare recombination event with a spontaneous correction of a mutant, pathogenic allele. In revertant mosaicism, the healthy tissue formed by mitotic recombination can outcompete the original, surrounding mutant cells in tissues such as blood and epithelia that regenerate often. In the skin disorder ichthyosis with confetti, normal skin spots appear early in life and increase in number and size over time. Other endogenous factors can also lead to mosaicism, including mobile elements, DNA polymerase slippage, and unbalanced chromosomal segregation. Exogenous factors include nicotine and UV radiation. Somatic mosaics have been created in Drosophila using X-ray treatment, and the use of irradiation to induce somatic mutation has been a useful technique in the study of genetics. True mosaicism should not be mistaken for the phenomenon of X-inactivation, where all cells in an organism have the same genotype, but a different copy of the X chromosome is expressed in different cells. The latter is the case in normal (XX) female mammals, although it is not always visible from the phenotype (as it is in calico cats). However, all multicellular organisms are likely to be somatic mosaics to some extent.
Gonosomal mosaicism
Gonosomal mosaicism is a type of somatic mosaicism that occurs very early in the organism's development and is thus present within both germline and somatic cells. Somatic mosaicism is not generally inheritable, as it does not usually affect germ cells. In the instance of gonosomal mosaicism, however, organisms have the potential to pass the genetic alteration to offspring, because the altered allele is present in both somatic and germline cells.
Brain cell mosaicism
A frequent type of neuronal genomic mosaicism is copy number variation. Possible sources of such variation have been suggested to be incorrect repair of DNA damage and somatic recombination.
Mitotic recombination
One basic mechanism that can produce mosaic tissue is mitotic recombination, or somatic crossover. It was first discovered by Curt Stern in Drosophila in 1936. The amount of tissue that is mosaic depends on where in the tree of cell division the exchange takes place. A phenotypic character called "twin spot" seen in Drosophila is a result of mitotic recombination. However, it also depends on the allelic status of the genes undergoing recombination. Twin spot occurs only if the heterozygous genes are linked in repulsion, i.e. the trans phase. The recombination needs to occur between the centromere and the genes involved. This gives an appearance of yellow patches on the wild-type background in Drosophila. Another example of mitotic recombination is Bloom syndrome, which happens due to mutation in the BLM gene. The resulting BLM protein is defective. The defect in this RecQ helicase leads to defective unwinding of DNA during replication and is thus associated with the occurrence of this disease.
Use in experimental biology
Genetic mosaics are a particularly powerful tool when used in the commonly studied fruit fly, where specially selected strains frequently lose an X or a Y chromosome in one of the first embryonic cell divisions. These mosaics can then be used to analyze such things as courtship behavior and female sexual attraction. More recently, the use of a transgene incorporated into the Drosophila genome has made the system far more flexible. The flip recombinase (or FLP) is a gene from the commonly studied yeast Saccharomyces cerevisiae that recognizes "flip recombinase target" (FRT) sites, which are short sequences of DNA, and induces recombination between them. FRT sites have been inserted transgenically near the centromere of each chromosome arm of D. melanogaster. The FLP gene can then be induced selectively, commonly using either the heat shock promoter or the GAL4/UAS system. The resulting clones can be identified either negatively or positively.
In negatively marked clones, the fly is transheterozygous for a gene encoding a visible marker (commonly the green fluorescent protein) and an allele of a gene to be studied (both on chromosomes bearing FRT sites). After induction of FLP expression, cells that undergo recombination will have progeny homozygous for either the marker or the allele being studied. Therefore, the cells that do not carry the marker (which are dark) can be identified as carrying a mutation.
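The marking logic of negative clones, and of the MARCM system described next, reduces to a pair of boolean rules: mitotic recombination flips GFP off in the first scheme and on in the second. A toy sketch of that logic only, not a model of the underlying biology:

def negatively_marked_clone(is_recombined_mutant):
    # The mutant clone loses the GFP-bearing homolog, so it goes dark.
    return not is_recombined_mutant  # True means GFP is on

def marcm_clone(is_recombined_mutant):
    # GAL80 represses GAL4-driven GFP; recombination removes GAL80
    # from the mutant clone, so only the mutant tissue lights up.
    gal80_present = not is_recombined_mutant
    return not gal80_present  # GFP on only where GAL80 is gone

for mutant in (False, True):
    print(mutant, negatively_marked_clone(mutant), marcm_clone(mutant))
# Negative marking: mutant tissue dark; MARCM: mutant tissue bright.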
Using negatively marked clones is sometimes inconvenient, especially when generating very small patches of cells, where seeing a dark spot on a bright background is more difficult than a bright spot on a dark background. Creating positively marked clones is possible using the so-called MARCM ("mosaic analysis with a repressible cell marker") system, developed by Liqun Luo, a professor at Stanford University, and his postdoctoral student Tzumin Lee, who now leads a group at Janelia Farm Research Campus. This system builds on the GAL4/UAS system, which is used to express GFP in specific cells. However, a globally expressed GAL80 gene is used to repress the action of GAL4, preventing the expression of GFP. Instead of using GFP to mark the wild-type chromosome as above, GAL80 serves this purpose, so that when it is removed by mitotic recombination, GAL4 is allowed to function, and GFP turns on </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Fracture and can you explain it in depth? [/INST] Fracture is the separation of an object or material into two or more pieces under the action of stress. The fracture of a solid usually occurs due to the development of certain displacement discontinuity surfaces within the solid. If a displacement develops perpendicular to the surface, it is called a normal tensile crack or simply a crack; if a displacement develops tangentially, it is called a shear crack, slip band or dislocation. Brittle fractures occur with no apparent deformation before fracture. Ductile fractures occur after visible deformation. Fracture strength, or breaking strength, is the stress when a specimen fails or fractures. The detailed understanding of how a fracture occurs and develops in materials is the object of fracture mechanics.
Strength
Fracture strength, also known as breaking strength, is the stress at which a specimen fails via fracture. This is usually determined for a given specimen by a tensile test, which charts the stress–strain curve. The final recorded point is the fracture strength.
Ductile materials have a fracture strength lower than the ultimate tensile strength (UTS), whereas in brittle materials the fracture strength is equivalent to the UTS. If a ductile material reaches its ultimate tensile strength in a load-controlled situation, it will continue to deform, with no additional load application, until it ruptures. However, if the loading is displacement-controlled, the deformation of the material may relieve the load, preventing rupture.
The statistics of fracture in random materials show very intriguing behavior, which was noted by architects and engineers quite early. Indeed, fracture or breakdown studies might be the oldest physical science studies, and they remain intriguing and very much alive. Leonardo da Vinci, more than 500 years ago, observed that the tensile strengths of nominally identical specimens of iron wire decrease with increasing length of the wires (see e.g., for a recent discussion). Similar observations were made by Galileo Galilei more than 400 years ago. This is the manifestation of the extreme statistics of failure: a bigger sample volume can contain larger defects, at which failures nucleate, inducing a lower strength of the sample.
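This weakest-link size effect can be illustrated with a short simulation: a longer wire contains more potential flaws, and its strength is the minimum over all of them. The sketch below is illustrative only; the Gaussian flaw-strength distribution and its parameters are assumptions, not material data.

    import random

    def wire_strength(n_segments, rng):
        """Strength of a wire = strength of its weakest segment (weakest-link model).

        Each segment's strength is drawn from an assumed, illustrative distribution.
        """
        return min(rng.gauss(100.0, 15.0) for _ in range(n_segments))

    rng = random.Random(42)
    for length in (10, 100, 1000):  # longer wire = more segments = more chances for a weak flaw
        trials = [wire_strength(length, rng) for _ in range(2000)]
        print(f"{length:5d} segments: mean strength = {sum(trials) / len(trials):.1f}")
    # The mean strength decreases as the wire gets longer, matching da Vinci's observation.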
Types
There are two types of fracture: brittle and ductile, occurring respectively without or with plastic deformation prior to failure.
Brittle
In brittle fracture, no apparent plastic deformation takes place before fracture. Brittle fracture typically involves little energy absorption and occurs at high speeds, up to 2,133.6 m/s (7,000 ft/s) in steel. In most cases, brittle fracture will continue even when loading is discontinued. In brittle crystalline materials, fracture can occur by cleavage as the result of tensile stress acting normal to crystallographic planes with low bonding (cleavage planes). In amorphous solids, by contrast, the lack of a crystalline structure results in a conchoidal fracture, with cracks proceeding normal to the applied tension.
The fracture strength (or micro-crack nucleation stress) of a material was first theoretically estimated by Alan Arnold Griffith in 1921:
\sigma_{\mathrm{theoretical}} = \sqrt{\frac{E\gamma}{r_o}}

where E is the Young's modulus of the material, γ is the surface energy, and r_o is the micro-crack length (or equilibrium distance between atomic centers in a crystalline solid).

On the other hand, a crack introduces a stress concentration modeled by
\sigma_{\mathrm{elliptical\ crack}} = \sigma_{\mathrm{applied}}\left(1 + 2\sqrt{\frac{a}{\rho}}\right) = 2\sigma_{\mathrm{applied}}\sqrt{\frac{a}{\rho}} \qquad \text{(for sharp cracks)}

where \sigma_{\mathrm{applied}} is the loading stress, a is half the length of the crack, and ρ is the radius of curvature at the crack tip.

Putting these two equations together gives
\sigma_{\mathrm{fracture}} = \sqrt{\frac{E\gamma\rho}{4ar_o}}

Sharp cracks (small ρ) and large defects (large a) both lower the fracture strength of the material.
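As a rough numeric illustration of the two relations above, the following sketch evaluates the theoretical strength and the flaw-limited fracture stress; all material constants are assumed, order-of-magnitude values chosen for illustration, not measured data.

    import math

    def theoretical_strength(E, gamma, r0):
        """Griffith's flaw-free estimate: sigma_th = sqrt(E * gamma / r0)."""
        return math.sqrt(E * gamma / r0)

    def fracture_strength(E, gamma, rho, a, r0):
        """Flaw-limited strength: sigma_f = sqrt(E * gamma * rho / (4 * a * r0))."""
        return math.sqrt(E * gamma * rho / (4 * a * r0))

    # Assumed, order-of-magnitude values for a brittle solid (SI units):
    E = 70e9       # Young's modulus, Pa
    gamma = 1.0    # surface energy, J/m^2
    r0 = 3e-10     # interatomic spacing, m
    a = 1e-6       # half-length of a crack, m
    rho = 3e-10    # crack-tip radius of curvature (atomically sharp), m

    print(f"theoretical strength ~ {theoretical_strength(E, gamma, r0) / 1e9:.1f} GPa")
    print(f"with a micron-scale flaw ~ {fracture_strength(E, gamma, rho, a, r0) / 1e9:.2f} GPa")
    # A small sharp flaw lowers the strength by roughly two orders of magnitude.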
Recently, scientists have discovered supersonic fracture, the phenomenon of crack propagation faster than the speed of sound in a material. This phenomenon was also verified by experiments on fracture in rubber-like materials.
The basic sequence in a typical brittle fracture is: introduction of a flaw either before or after the material is put in service, slow and stable crack propagation under recurring loading, and sudden rapid failure when the crack reaches the critical crack length defined by fracture mechanics. Brittle fracture may be avoided by controlling three primary factors: material fracture toughness (Kc), nominal stress level (σ), and introduced flaw size (a). Residual stresses, temperature, loading rate, and stress concentrations also contribute to brittle fracture by influencing the three primary factors. Under certain conditions, ductile materials can exhibit brittle behavior. Rapid loading, low temperature, and triaxial stress constraint conditions may cause ductile materials to fail without prior deformation.
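The interplay of the three primary factors can be made concrete with the standard stress-intensity relation K = Yσ√(πa): rapid failure begins once K reaches Kc. A minimal sketch, assuming a unit geometry factor and illustrative toughness values:

    import math

    def critical_crack_length(K_c, sigma, Y=1.0):
        """Crack length at which K = Y * sigma * sqrt(pi * a) reaches K_c.

        Y is a geometry factor, taken as 1.0 here for simplicity (an assumption).
        """
        return (K_c / (Y * sigma)) ** 2 / math.pi

    # Illustrative values: a tough metal vs. a brittle ceramic at the same stress.
    sigma = 200e6  # applied stress, Pa
    for name, K_c in (("steel-like metal", 50e6), ("ceramic", 3e6)):  # toughness, Pa*sqrt(m)
        a_c = critical_crack_length(K_c, sigma)
        print(f"{name}: critical crack length ~ {a_c * 1e3:.3f} mm")
    # The tougher material tolerates a far longer crack before sudden rapid failure.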
Ductile
In ductile fracture, extensive plastic deformation (necking) takes place before fracture. The terms "rupture" and "ductile rupture" describe the ultimate failure of ductile materials loaded in tension. The extensive plasticity causes the crack to propagate slowly due to the absorption of a large amount of energy before fracture.
Because ductile rupture involves a high degree of plastic deformation, the fracture behavior of a propagating crack as modelled above changes fundamentally. Some of the energy from stress concentrations at the crack tips is dissipated by plastic deformation ahead of the crack as it propagates.
The basic steps in ductile fracture are void formation, void coalescence (also known as crack formation), crack propagation, and failure, often resulting in a cup-and-cone shaped failure surface. Voids typically coalesce around precipitates, secondary phases, inclusions, and at grain boundaries in the material. Ductile fracture is typically transgranular and deformation due to dislocation slip can cause the shear lip characteristic of cup and cone fracture.
Characteristics
The manner in which a crack propagates through a material gives insight into the mode of fracture. With ductile fracture, a crack moves slowly and is accompanied by a large amount of plastic deformation around the crack tip. A ductile crack will usually not propagate unless an increased stress is applied, and it generally ceases propagating when loading is removed. In a ductile material, a crack may progress to a section of the material where stresses are slightly lower and stop due to the blunting effect of plastic deformation at the crack tip. On the other hand, with brittle fracture, cracks spread very rapidly with little or no plastic deformation. Cracks that propagate in a brittle material will continue to grow once initiated.
Crack propagation is also categorized by the crack characteristics at the microscopic level. A crack that passes through the grains within the material is undergoing transgranular fracture. A crack that propagates along the grain boundaries is termed an intergranular fracture. Typically, the bonds between material grains are stronger at room temperature than the material itself, so transgranular fracture is more likely to occur. When temperatures increase enough to weaken the grain bonds, intergranular fracture is the more common fracture mode.
Testing
Fracture in materials is studied and quantified in multiple ways. Fracture is largely determined by the fracture toughness (K_c), so fracture testing is often done to determine this. The two most widely used techniques for determining fracture toughness are the three-point flexural test and the compact tension test.
By performing the compact tension and three-point flexural tests, one is able to determine the fracture toughness through the following equation:
K_c = \sigma_F\sqrt{\pi c}\,f(c/a)

where f(c/a) is an empirically derived function that captures the test-sample geometry, \sigma_F is the fracture stress, and c is the crack length.

To accurately attain K_c, the value of c must be precisely measured. This is done by taking the test piece with its fabricated notch of length c′ and sharpening this notch to better emulate a crack tip found in real-world materials. Cyclically prestressing the sample can then induce a fatigue crack which extends the crack from the fabricated notch length c′ to c. This value of c is used in the above equation for determining K_c.

Following this test, the sample can then be reoriented such that further loading (F) will extend the crack, and thus a load-versus-sample-deflection curve can be obtained. From this curve, the slope of the linear portion, which is the inverse of the compliance of the material, can be obtained. This is then used to derive f(c/a) as defined above. With knowledge of all these variables, K_c can then be calculated.
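A minimal sketch of the final computation, assuming the fracture stress, the measured crack length, and the geometry-function value have already been obtained from such a test; all numbers below are placeholders, not data from a real specimen.

    import math

    def fracture_toughness(sigma_F, c, f_geo):
        """K_c = sigma_F * sqrt(pi * c) * f(c/a), with f(c/a) supplied by the
        sample-geometry calibration (placeholder value below)."""
        return sigma_F * math.sqrt(math.pi * c) * f_geo

    sigma_F = 300e6  # fracture stress, Pa (placeholder)
    c = 2e-3         # fatigue-sharpened crack length, m (placeholder)
    f_geo = 1.12     # geometry function value f(c/a) (placeholder)

    K_c = fracture_toughness(sigma_F, c, f_geo)
    print(f"K_c ~ {K_c / 1e6:.1f} MPa*sqrt(m)")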
Ceramics and inorganic glasses
Ceramics and inorganic glasses have fracturing behavior that differs from that of metallic materials. Ceramics have high strengths and perform well in high temperatures because their strength is largely independent of temperature. Ceramics have low toughness as determined by testing under a tensile load; often, ceramics have K_c values that are ~5% of those found in metals. However, ceramics are usually loaded in compression in everyday use, so the compressive strength is often referred to as the strength; this strength can often exceed that of most metals. However, ceramics are brittle, and thus most work done revolves around preventing brittle fracture. Because of how ceramics are manufactured and processed, there are often preexisting defects in the material that introduce a high degree of variability in Mode I brittle fracture. Thus, there is a probabilistic nature to be accounted for in the design of ceramics. The Weibull distribution predicts the survival probability of a fraction of samples of a certain volume that survive a tensile stress σ, and is often used to better assess the success of a ceramic in avoiding fracture.
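A short sketch of the two-parameter Weibull survival probability commonly used for this purpose; the Weibull modulus and characteristic strength below are assumed, illustrative values.

    import math

    def weibull_survival(sigma, sigma_0, m, V=1.0, V_0=1.0):
        """Probability that a sample of volume V survives tensile stress sigma.

        P_s = exp(-(V / V_0) * (sigma / sigma_0) ** m)
        m is the Weibull modulus (scatter); sigma_0 is the characteristic strength.
        """
        return math.exp(-(V / V_0) * (sigma / sigma_0) ** m)

    # Assumed parameters for an illustrative ceramic:
    sigma_0, m = 400e6, 10.0
    for sigma in (200e6, 300e6, 400e6):
        p = weibull_survival(sigma, sigma_0, m)
        print(f"{sigma / 1e6:.0f} MPa: survival probability = {p:.3f}")
    # A low modulus m means wide scatter in strength; a larger volume V lowers
    # the survival probability at a given stress, reproducing the size effect.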
Fiber bundles
To model the fracture of a bundle of fibers, the fiber bundle model was introduced by Thomas Pierce in 1926 as a means to understand the strength of composite materials. The bundle consists of a large number of parallel Hookean springs of identical length, each having an identical spring constant but a different breaking stress. All these springs are suspended from a rigid horizontal platform. The load is attached to a lower horizontal platform, connected to the lower ends of the springs. When this lower platform is absolutely rigid, the load at any point in time is shared equally (irrespective of how many fibers or springs have broken and where) by all the surviving fibers. This mode of load-sharing is called the equal-load-sharing mode. The lower platform can also be assumed to have finite rigidity, so that local deformation of the platform occurs wherever springs fail, and the surviving neighboring fibers have to share a larger fraction of the load transferred from the failed fiber. The extreme case is the local load-sharing model, where the load of the failed spring or fiber is shared (usually equally) by the surviving nearest-neighbor fibers.
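The equal-load-sharing case is simple enough to simulate directly: all surviving fibers share the load equally, fibers whose breaking stress is exceeded fail, and the load is redistributed until the bundle either stabilizes or collapses. A minimal sketch, assuming uniformly distributed breaking thresholds for illustration:

    import random

    def els_bundle_survives(total_load, thresholds):
        """Equal-load-sharing fiber bundle: returns True if the bundle carries
        total_load, False if failures cascade to total collapse."""
        alive = list(thresholds)
        while alive:
            per_fiber = total_load / len(alive)
            survivors = [t for t in alive if t > per_fiber]
            if len(survivors) == len(alive):
                return True       # no new breaks: a stable state is reached
            alive = survivors     # redistribute the load and iterate
        return False              # all fibers broke: bundle failure

    rng = random.Random(0)
    n = 10_000
    thresholds = [rng.random() for _ in range(n)]  # uniform(0,1) breaking stresses (assumed)
    for load in (0.20 * n, 0.24 * n, 0.30 * n):
        print(f"nominal load {load / n:.2f} per fiber: survives = {els_bundle_survives(load, thresholds)}")
    # For uniform(0,1) thresholds, the bundle's critical nominal load per fiber is 0.25,
    # so the first two loads are carried and the third triggers a failure cascade.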
Disasters
Failures caused by brittle fracture have not been limited to any particular category of engineered structure. Though brittle fracture is less common than other types of failure, the impacts to life and property can be more severe </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Stevens–Johnson syndrome and can you explain it in depth? [/INST] Stevens–Johnson syndrome (SJS) is a type of severe skin reaction. Together with toxic epidermal necrolysis (TEN) and Stevens–Johnson/toxic epidermal necrolysis (SJS/TEN), it forms a spectrum of disease, with SJS being less severe. Erythema multiforme (EM) is generally considered a separate condition. Early symptoms of SJS include fever and flu-like symptoms. A few days later, the skin begins to blister and peel, forming painful raw areas. Mucous membranes, such as the mouth, are also typically involved. Complications include dehydration, sepsis, pneumonia and multiple organ failure. The most common cause is certain medications such as lamotrigine, carbamazepine, allopurinol, sulfonamide antibiotics and nevirapine. Other causes can include infections such as Mycoplasma pneumoniae and cytomegalovirus, or the cause may remain unknown. Risk factors include HIV/AIDS and systemic lupus erythematosus. The diagnosis of Stevens–Johnson syndrome is based on involvement of less than 10% of the skin. It is known as TEN when more than 30% of the skin is involved and considered an intermediate form when 10–30% is involved. SJS/TEN reactions are believed to follow a type IV hypersensitivity mechanism. It is also included with drug reaction with eosinophilia and systemic symptoms (DRESS syndrome), acute generalized exanthematous pustulosis (AGEP) and toxic epidermal necrolysis in a group of conditions known as severe cutaneous adverse reactions (SCARs). Treatment typically takes place in hospital such as in a burn unit or intensive care unit. Efforts may include stopping the cause, pain medication, antihistamines, antibiotics, intravenous immunoglobulins or corticosteroids. Together with TEN, SJS affects 1 to 2 people per million per year. Typical onset is under the age of 30. Skin usually regrows over two to three weeks; however, complete recovery can take months. Overall, the risk of death with SJS is 5 to 10%.
Signs and symptoms
SJS usually begins with fever, sore throat, and fatigue, which is commonly misdiagnosed and therefore treated with antibiotics. SJS, SJS/TEN, and TEN are often heralded by fever, sore throat, cough, and burning eyes for 1 to 3 days. Patients with these disorders frequently experience burning pain of their skin at the start of disease. Ulcers and other lesions begin to appear in the mucous membranes, almost always in the mouth and lips, but also in the genital and anal regions. Those in the mouth are usually extremely painful and reduce the patient's ability to eat or drink. Conjunctivitis occurs in about 30% of children who develop SJS. A rash of round lesions about an inch across arises on the face, trunk, arms and legs, and soles of the feet, but usually not the scalp.
Causes
SJS is thought to arise from a disorder of the immune system. The immune reaction can be triggered by drugs or infections. Genetic factors are associated with a predisposition to SJS. The cause of SJS is unknown in one-quarter to one-half of cases. SJS, SJS/TEN, and TEN are considered a single disease with common causes and mechanisms. Individuals expressing certain human leukocyte antigen (i.e. HLA) serotypes (i.e. genetic alleles), genetically based T-cell receptors, or variations in their efficiency to absorb, distribute to tissues, metabolize, or excrete (this combination is termed ADME) a drug are predisposed to develop SJS.
Medications
Although SJS can be caused by viral infections and malignancies, the main cause is medications. A leading cause appears to be the use of antibiotics, particularly sulfa drugs. Between 100 and 200 different drugs may be associated with SJS. No reliable test exists to establish a link between a particular drug and SJS for an individual case. Determining what drug is the cause is based on the time interval between first use of the drug and the beginning of the skin reaction. Drugs discontinued more than 1 month prior to onset of mucocutaneous physical findings are highly unlikely to cause SJS and TEN. SJS and TEN most often begin between 4 and 28 days after culprit drug administration. A published algorithm (ALDEN) to assess drug causality gives structured assistance in identifying the responsible medication. SJS may be caused by the medications rivaroxaban, vancomycin, allopurinol, valproate, levofloxacin, diclofenac, etravirine, isotretinoin, fluconazole, valdecoxib, sitagliptin, oseltamivir, penicillins, barbiturates, sulfonamides, phenytoin, azithromycin, oxcarbazepine, zonisamide, modafinil, lamotrigine, nevirapine, pyrimethamine, ibuprofen, ethosuximide, carbamazepine, bupropion, telaprevir, and nystatin. Medications that have traditionally been known to lead to SJS, erythema multiforme, and toxic epidermal necrolysis include sulfonamide antibiotics, penicillin antibiotics, cefixime (antibiotic), barbiturates (sedatives), lamotrigine, phenytoin (e.g., Dilantin) (anticonvulsants) and trimethoprim. Combining lamotrigine with sodium valproate increases the risk of SJS. Nonsteroidal anti-inflammatory drugs (NSAIDs) are a rare cause of SJS in adults; the risk is higher for older patients, women, and those initiating treatment. Typically, the symptoms of drug-induced SJS arise within a week of starting the medication. Similar to NSAIDs, paracetamol (acetaminophen) has also caused rare cases of SJS. People with systemic lupus erythematosus or HIV infections are more susceptible to drug-induced SJS.
Infections
The second most common cause of SJS and TEN is infection, particularly in children. This includes upper respiratory infections, otitis media, pharyngitis, and Epstein–Barr virus, Mycoplasma pneumoniae and cytomegalovirus infections. The routine use of medicines such as antibiotics, antipyretics and analgesics to manage infections can make it difficult to identify if cases were caused by the infection or medicines taken. Viral diseases reported to cause SJS include: herpes simplex virus (possibly; is debated), AIDS, coxsackievirus, influenza, hepatitis, and mumps. In pediatric cases, Epstein–Barr virus and enteroviruses have been associated with SJS. Recent upper respiratory tract infections have been reported by more than half of patients with SJS. Bacterial infections linked to SJS include group A beta-hemolytic streptococci, diphtheria, brucellosis, lymphogranuloma venereum, mycobacteria, Mycoplasma pneumoniae, rickettsial infections, tularemia, and typhoid. Fungal infections with coccidioidomycosis, dermatophytosis and histoplasmosis are also considered possible causes. Malaria and trichomoniasis, protozoal infections, have also been reported as causes.
Pathophysiology
SJS is a type IV hypersensitivity reaction in which a drug or its metabolite stimulates cytotoxic T cells (i.e. CD8+ T cells) and T helper cells (i.e. CD4+ T cells) to initiate autoimmune reactions that attack self tissues. In particular, it is a type IV, subtype IVc, delayed hypersensitivity reaction dependent in part on the tissue-injuring actions of natural killer cells. This contrasts with the other types of SCARs disorders, i.e., the DRESS syndrome, which is a type IV, subtype IVb, hypersensitivity drug reaction dependent in part on the tissue-injuring actions of eosinophils, and acute generalized exanthematous pustulosis, which is a type IV, subtype IVd, hypersensitivity reaction dependent in part on the tissue-injuring actions of neutrophils. Like other SCARs-inducing drugs, SJS-inducing drugs or their metabolites stimulate CD8+ T cells or CD4+ T cells to initiate autoimmune responses. Studies indicate that the mechanism by which a drug or its metabolites accomplishes this involves subverting the antigen presentation pathways of the innate immune system. The drug or metabolite covalently binds with a host protein to form a non-self, drug-related epitope. An antigen presenting cell (APC) takes up these altered proteins; digests them into small peptides; places the peptides in a groove on the human leukocyte antigen (i.e. HLA) component of their major histocompatibility complex (i.e. MHC); and presents the MHC-associated peptides to T-cell receptors on CD8+ T cells or CD4+ T cells. Those peptides expressing a drug-related, non-self epitope on one of their various HLA protein forms (HLA-A, HLA-B, HLA-C, HLA-DM, HLA-DO, HLA-DP, HLA-DQ, or HLA-DR) can bind to a T-cell receptor and thereby stimulate the receptor-bearing parent T cell to initiate attacks on self tissues. Alternatively, a drug or its metabolite may stimulate these T cells by inserting into the groove on an HLA protein to serve as a non-self epitope, or bind outside of this groove to alter an HLA protein so that it forms a non-self epitope. In all these cases, however, a non-self epitope must bind to a specific HLA serotype (i.e. variation) in order to stimulate T cells. Since the human population expresses some 13,000 different HLA serotypes while an individual expresses only a fraction of them, and since a SJS-inducing drug or metabolite interacts with only one or a few HLA serotypes, a drug's ability to induce SCARs is limited to those individuals who express HLA serotypes targeted by the drug or its metabolite. Accordingly, only rare individuals are predisposed to develop SCARs in response to a particular drug on the basis of their expression of HLA serotypes: Studies have identified several HLA serotypes associated with development of SJS, SJS/TEN, or TEN in response to certain drugs. In general, these associations are restricted to the cited populations. In some East Asian populations studied (Han Chinese and Thai), carbamazepine- and phenytoin-induced SJS is strongly associated with HLA-B*1502 (HLA-B75), an HLA-B serotype of the broader serotype HLA-B15. A study in Europe suggested the gene marker is only relevant for East Asians. This has clinical relevance, as it is agreed upon that prior to starting a medication such as allopurinol in a patient of Chinese descent, HLA-B*58:01 testing should be considered. Based on the Asian findings, similar studies in Europe showed 61% of allopurinol-induced SJS/TEN patients carried the HLA-B58 (phenotype frequency of the B*5801 allele in Europeans is typically 3%). 
One study concluded: "Even when HLA-B alleles behave as strong risk factors, as for allopurinol, they are neither sufficient nor necessary to explain the disease." Other HLA associations with the development of SJS, SJS/TEN, or TEN and the intake of specific drugs as determined in certain populations are given in HLA associations with SCARs.
T-cell receptors
In addition to acting through HLA proteins to bind with a T-cell receptor, a drug or its metabolite may bypass HLA proteins to bind directly to a T-cell receptor and thereby stimulate CD8+ T or CD4+ T cells to initiate autoimmune responses. In either case, this binding appears to develop only on certain T-cell receptors. Since the genes for these receptors are highly edited, i.e. altered to encode proteins with different amino acid sequences, and since the human population may express more than 100 trillion different (i.e. different amino acid sequences) T-cell receptors while an individual expresses only a fraction of these, a drug's or its metabolite's ability to induce the DRESS syndrome by interacting with a T-cell receptor is limited to those individuals whose T cells express a T-cell receptor(s) that can interact with the drug or its metabolite. Thus, only rare individuals are predisposed to develop SJS in response to a particular drug on the basis of their expression of specific T-cell receptor types. While the evidence supporting this T-cell receptor selectivity is limited, one study identified the preferential presence of the TCR-V-b and complementarity-determining region 3 in T-cell receptors found on the T cells in the blisters of patients with allopurinol-induced DRESS syndrome. This finding is compatible with the notion that specific types of T-cell receptors are involved in the development of specific drug-induced SCARs.
ADME
Variations in ADME, i.e. an individual's efficiency in absorbing, tissue-distributing, metabolizing, or excreting a drug, have been found to occur in various severe cutaneous adverse reactions (SCARs) as well as other types of adverse drug reactions. These variations influence the levels and duration of a drug or its metabolite in tissues and thereby impact the drug's or metabolite's ability to evoke these reactions. For example, CYP2C9 is an important drug-metabolizing cytochrome P450; it metabolizes and thereby inactivates phenytoin. Taiwanese, Japanese, and Malaysian individuals expressing the CYP2C9*3 variant of CYP2C9, which has reduced metabolic activity compared to the wild type (i.e. CYP2C9*1) cytochrome, have increased blood levels of phenytoin and a high incidence of SJS (as well as SJS/TEN and TEN) when taking the drug. In addition to abnormalities in drug-metabolizing enzymes, dysfunctions of the kidney, liver, or GI tract which increase the levels of a SCARs-inducing drug or metabolite are suggested to promote SCARs responses. These ADME abnormalities, it is also suggested, may interact with particular HLA proteins and T-cell receptors to promote a SCARs disorder.
Diagnosis
The diagnosis is based on involvement of less than 10% of the skin. It is known as TEN when more than 30% of the skin is involved, and as an intermediate form when 10 to 30% is involved. A positive Nikolsky's sign is helpful in the diagnosis of SJS and TEN. A skin biopsy is helpful, but not required, to establish a diagnosis of SJS and TEN.
Pathology
SJS, like TEN and erythema multiforme, is characterized by confluent epidermal necrosis with minimal associated inflammation. The acuity is apparent from the (normal) basket weave-like pattern of the stratum corneum.
Classification
Stevens–Johnson syndrome (SJS) is a milder form of toxic epidermal necrolysis (TEN). These conditions were first recognised in 1922. A classification first published in 1993, which has been adopted as a consensus definition, identifies Stevens–Johnson syndrome, toxic epidermal necrolysis, and SJS/TEN overlap. All three are part of a spectrum of severe cutaneous reactions (SCAR) which affect skin and mucous membranes. The distinction between SJS, SJS/TEN overlap, and TEN is based on the type of lesions and the amount of the body surface area with blisters and erosions. It is agreed that the most reliable method to classify EM, SJS, and TEN is based on lesion morphology and extent of epidermal detachment. Blisters and erosions cover between 3% and 10% of the body in SJS, 11–30% in SJS/TEN overlap, and over 30% in TEN. The skin pattern most commonly associated with SJS is widespread, often joined or touching (confluent), purpuric spots (macules) or flat small blisters or large blisters which may also join. These occur primarily on the torso. SJS, TEN, and SJS/TEN overlap can be mistaken for erythema multiforme. Erythema multiforme, which is also within the SCAR spectrum, differs in clinical pattern and etiology.
Prevention
Screening individuals for certain predisposing gene variants before initiating treatment with particular SJS-, TEN/SJS-, or TEN-inducing drugs is recommended or under study. These recommendations are typically limited to specific populations that show a significant chance of having the indicated gene variant, since screening of populations with extremely low incidences of expressing the variant is considered cost-ineffective. Individuals expressing the HLA allele associated with sensitivity to an indicated drug should not be treated with the drug. These recommendations include the following. Before treatment with carbamazepine, the Taiwan and USA Food and Drug Administrations recommend screening for HLA-B*15:02 in certain Asian groups. This has been implemented in Taiwan, Hong Kong, Singapore, and many medical centers in Thailand and Mainland China. Before treatment with allopurinol, the American College of Rheumatology guidelines for managing gout recommend HLA-B*58:01 screening. This is provided in many medical centers in Taiwan, Hong Kong, Thailand, and Mainland China. Before treatment with abacavir, the USA Food and Drug Administration recommends screening for HLA-B*57:01 in Caucasian populations. This screening is widely implemented. It has also been suggested that all individuals found to express this HLA serotype avoid treatment with abacavir. Current trials are underway in Taiwan to define the cost-effectiveness of avoiding phenytoin in SJS, SJS/TEN, and TEN for individuals expressing the CYP2C9*3 allele of CYP2C9.
Treatment
SJS constitutes a dermatological emergency. Patients with documented Mycoplasma infections can be treated with oral macrolide or oral doxycycline. Initially, treatment is similar to that for patients with thermal burns, and continued care can only be supportive (e.g., intravenous fluids and nasogastric or parenteral feeding) and symptomatic (e.g., analgesic mouth rinse for mouth ulcers). Dermatologists and surgeons tend to disagree about whether the skin should be debrided. Beyond this kind of supportive care, no treatment for SJS is accepted. Treatment with corticosteroids is controversial. Early retrospective studies suggested corticosteroids increased hospital stays and complication rates. No randomized trials of corticosteroids have been conducted for SJS, and it can be managed successfully without them. Other agents have been used, including cyclophosphamide and ciclosporin, but none have exhibited much therapeutic success. Intravenous immunoglobulin treatment has shown some promise in reducing the length of the reaction and improving symptoms. Other common supportive measures include the use of topical pain anesthetics and antiseptics, maintaining a warm environment, and intravenous analgesics.
An ophthalmologist should be consulted immediately, as SJS frequently causes the formation of scar tissue inside the eyelids, leading to corneal vascularization, impaired vision, and a host of other ocular problems. Those with chronic ocular surface disease caused by SJS may find some improvement with PROSE treatment (prosthetic replacement of the ocular surface ecosystem treatment).
Prognosis
SJS (with less than 10% of body surface area involved) has a mortality rate of around 5%. The mortality for toxic epidermal necrolysis (TEN) is 30–40%. The risk for death can be estimated using the SCORTEN scale, which takes a number of prognostic indicators into account. It is helpful to calculate a SCORTEN within the first 3 days of hospitalization. Other outcomes include organ damage/failure, cornea scratching, and blindness. Restrictive lung disease may develop in patients with SJS and TEN after initial acute pulmonary involvement </s> |
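As an illustration of how a score of this kind aggregates its indicators, here is a sketch of the SCORTEN computation; the seven criteria and the mortality bands are the commonly cited ones, but the exact cutoffs and estimates should be checked against the primary literature, and this is not clinical software.

    def scorten(age_over_40, malignancy, heart_rate_over_120,
                detachment_over_10pct, urea_over_10, glucose_over_14,
                bicarbonate_under_20):
        """SCORTEN: one point per positive prognostic indicator (commonly
        cited criteria; verify cutoffs against the primary literature)."""
        score = sum([age_over_40, malignancy, heart_rate_over_120,
                     detachment_over_10pct, urea_over_10, glucose_over_14,
                     bicarbonate_under_20])
        # Approximate published mortality estimates by score band (assumed values):
        mortality = {0: "~3%", 1: "~3%", 2: "~12%", 3: "~35%", 4: "~58%"}
        return score, mortality.get(score, "~90%")

    score, risk = scorten(True, False, True, False, False, False, False)
    print(f"SCORTEN = {score}, estimated mortality {risk}")  # SCORTEN = 2, ~12%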
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Axenfeld–Rieger syndrome and can you explain it in depth? [/INST] Axenfeld–Rieger syndrome is a rare autosomal dominant disorder, which affects the development of the teeth, eyes, and abdominal region. Axenfeld–Rieger syndrome is part of the so-called iridocorneal or anterior segment dysgenesis syndromes, which were formerly known as anterior segment cleavage syndromes, anterior chamber segmentation syndromes or mesodermal dysgenesis. Although the exact classification of this set of signs and symptoms is somewhat confusing in current scientific literature, most authors agree with the classification cited here. Axenfeld Anomaly is known as the development of a posterior embryotoxon, associated with strands of the iris adhered to a Schwalbe line that has been displaced anteriorly, which when added to glaucoma is called Axenfeld Syndrome. Rieger's Anomaly is defined by a universe of congenital anomalies of the iris, such as iris hypoplasia, corectopia or polycoria. When systemic findings are added to Rieger's anomaly, such as bone, facial and/or dental defects, it is known as Rieger syndrome. The combination of both entities gives rise to the Axenfeld-Rieger Anomaly when there are no systemic abnormalities, and Axenfeld-Rieger Syndrome when there are. Axenfeld-Rieger Syndrome is a rare disease that affects the eye bilaterally, with an estimated prevalence of 1/200,000 people, without gender predilection, and is characterized by autosomal dominant inheritance with complete penetrance and variable expressivity. The genes that have been identified in approximately 50% of cases are PITX2 and FOXC1. Given the important hereditary factor, it is important to evaluate the most direct members of the family. To explain the ocular alterations, there is a theory of the mechanism postulated by Shields et al., which implies an arrest in the migration of neural crest cells towards the third trimester of gestation, which leads to the persistence of primordial endothelial tissue in the iris and anterior chamber angle. Contraction of these membranes after birth leads to the progressive changes seen in some patients. This primordial endothelium also generates an excessive and atypical basement membrane, especially near the limbal corneal junction, which accounts for the prominent Schwalbe line. In the case of secondary glaucoma, it would be the consequence of dysgenesis in the chamber sinus.
Signs and symptoms
Disease manifestations:
The age at diagnosis differs according to the intensity of the symptoms, which range from asymptomatic to florid, characterized by ocular and systemic diseases affecting multiple organs that share a common origin in the neural crest.
Eye manifestations:
Bilateral ocular manifestations are usually pathognomonic of the disease. Children who develop glaucoma may attend the consultation with signs and symptoms of buphthalmos, photosensitivity, tearing, and corneal decompensation, which, associated with poor vision, may be accompanied by strabismus. Adults have a greater chance of not presenting symptoms, so an ophthalmological examination may be required to detect the problem. Using a slit lamp, a posterior embryotoxon characterized by a prominent anteriorly displaced Schwalbe's ring near the temporal corneal limbus can be revealed. The unexpected finding of a posterior embryotoxon as a single whitish irregular arcuate ridge, on routine examination, is not necessarily a diagnosis of ARS, as this occurs in a percentage of the normal population estimated in the literature at 8% to 15%. On gonioscopy, the extension of the posterior embryotoxon can be greater and be present across the full 360°, with a variable thickness of the annulus, unusually detached and hanging within the anterior chamber. Regarding the iris, we can observe peripheral extensions to Schwalbe's line, which can be thin or thick and extend over the trabecular meshwork, obscuring the scleral spur and even pulling the iris and producing corectopia, in iris tissue that can range from mild stromal atrophy to the presence of uveal ectropion, pseudopolycoria or even absence of the iris. These chamber sinus anomalies predispose half of the cases to open-angle glaucoma, which can manifest throughout life and therefore requires regular ophthalmological check-ups. Other related anomalies are strabismus, due to alteration in the insertions of the extraocular muscles or secondary to amblyopia and with a predisposition to exotropia, and retinal detachment.
Systemic manifestations:
In the case of pathologies that affect the extraocular organs, greater attention must be paid to anomalies in the cardiovascular system, since they represent the most worrying associations due to their repercussions at the systemic level. These are present in different structures that make it up, such as heart valve defects, the presence of Fallots tetralogy, atrial septal defects or persistent truncus arteriosus. Other alterations described are craniofacial anomalies associated with hypoplasia of the midface, hypertelorism, telecanthus, maxillary hypoplasia, short nasolabial fold, thin upper lip and larger everted lower lip, which are typical facial characteristics although expressed in a variable way. Maxillary hypoplasia and poor tooth development produce a prognathic profile. Inspection of the oral cavity may show microdontia, hypodontia, oligodontia, and a thickened frenulum. The crowns of the anterior teeth may be conical or peg-shaped and the roots may be shortened, the gingival attachments may be reduced, and the enamel may be hypoplastic, contributing to poor dental health. There are other described associations such as umbilical, auditory, pituitary, psychomotor, size, urethral and anal anomalies as well as albinism.
Pathophysiology
The molecular genetics of Axenfeld–Rieger syndrome are poorly understood, but center on three genes identified by cloning of chromosomal breakpoints from patients. This disorder is inheritable as an autosomal dominant trait, which means the defective gene is located on an autosome, and only one copy of the gene is sufficient to cause the disorder when inherited from a parent who has the disorder. As shown in the diagram, this gives a 50/50 chance of offspring inheriting the condition from an affected parent.
Diagnosis
Although most recognized for its correlation with the onset of glaucoma, the malformation is not limited to the eye, as Axenfeld–Rieger syndrome, when associated with the PITX2 genetic mutation, usually presents with congenital malformations of the face, teeth, and skeletal system. The most characteristic feature affecting the eye is a distinct corneal posterior arcuate ring, known as an "embryotoxon". In severe cases, the iris may be adherent to the cornea anterior to Schwalbe's line. One of the three known genetic mutations which cause Rieger syndrome can be identified through analysis of genetic samples. About 40% of people with Axenfeld–Rieger have displayed mutations in the genes PITX2, FOXC1, and PAX6. The difference between Type 1, 2, and 3 Axenfeld–Rieger syndrome is the genetic cause; all three types display the same symptoms and abnormalities.
Classification
The OMIM classification is as follows:
Detection of any of these mutations can give patients a clear diagnosis and prenatal procedures such as preimplantation genetic diagnosis, chorionic villus sampling and amniocentesis can be offered to patients and prospective parents.
Management
One of the surgical techniques used to treat this rare disease is phakic retroiridian pupilloplasty, an original surgical technique involving the creation of a sclerocorneal incision through a peripheral iridotomy, with the surgeon working behind the iris and creating a neopupil with an anterior chamber vitrectome. It requires very few follow-up visits and the patient's recovery is fast.
Eponym
It is named after the German ophthalmologist Theodor Axenfeld who studied anterior segment disorders, especially those such as Rieger syndrome and the Axenfeld anomaly.
Axenfeld–Rieger syndrome is characterized by abnormalities of the eyes, teeth, and facial structure. Rieger syndrome, by medical definition, is determined by the presence of malformed teeth, an underdeveloped anterior segment of the eyes, and cardiac problems associated with the Axenfeld anomaly. The term "Rieger syndrome" is sometimes used to indicate an association with glaucoma. Glaucoma occurs in up to 50% of patients with Rieger syndrome. Glaucoma develops during adolescence or late childhood, but often occurs in infancy. In addition, a prominent Schwalbe's line, an opaque ring around the cornea known as posterior embryotoxon, may arise with hypoplasia of the iris. Below average height and stature, stunted development of the mid-facial features, and mental deficiencies may also be observed in patients.
References
Further reading
Amendt, Brad A., ed. (2005). The Molecular Mechanisms of Axenfeld-Rieger Syndrome. Medical Intelligence Unit. Springer. doi:10.1007/0-387-28672-1. ISBN 978-0-387-28672-3.
Agarwal, Sunita; Agarwal, Athiya; Apple, David J., eds. (2002). "Axenfeld-Rieger Syndrome". Textbook of Ophthalmology. Jaypee Brothers. pp. 1049–51. ISBN 978-81-7179-884-1.
Shields, M.Bruce; Buckley, Edward; Klintworth, Gordon K.; Thresher, Randy (1985). "Axenfeld-Rieger syndrome. A spectrum of developmental disorders" </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Sodoku and can you explain it in depth? [/INST] Sodoku (鼠毒) is a bacterial zoonotic disease. It is caused by the Gram-negative rod Spirillum minus (also known as Spirillum minor). It is a form of rat-bite fever (RBF).
Signs and symptoms
The initial scratch or wound caused by a bite from a carrier rodent results in mild inflammatory reactions and ulcerations </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Wheeze and can you explain it in depth? [/INST] A wheeze is a continuous, coarse, whistling sound produced in the respiratory airways during breathing. For wheezes to occur, some part of the respiratory tree must be narrowed or obstructed (for example narrowing of the lower respiratory tract in an asthmatic attack), or airflow velocity within the respiratory tree must be heightened. Wheezing is commonly experienced by persons with a lung disease; the most common cause of recurrent wheezing is asthma, though it can also be a symptom of lung cancer, congestive heart failure, and certain types of heart diseases.
The differential diagnosis of wheezing is wide, and the reason for wheezing in a given patient is determined by considering the characteristics of the wheezes and the historical and clinical findings made by the examining physician.
Characteristics
Wheeze
Wheezes occupy different portions of the respiratory cycle depending on the site of airway obstruction and its nature. The fraction of the respiratory cycle during which a wheeze is produced roughly corresponds to the degree of airway obstruction. Bronchiolar disease usually causes wheezing that occurs in the expiratory phase of respiration. As a rule, extrathoracic airway obstruction produces inspiratory sounds. Intrathoracic major airway obstruction produces inspiratory as well as expiratory sounds. Distal airway obstruction predominantly produces expiratory sounds. The presence of expiratory phase wheezing signifies that the patient's peak expiratory flow rate is less than 50% of normal. Wheezing heard in the inspiratory phase, on the other hand, is often a sign of a stiff stenosis, usually caused by tumors, foreign bodies or scarring. This is especially true if the wheeze is monotonal, occurs throughout the inspiratory phase (i.e. is "holoinspiratory"), and is heard more proximally, in the trachea. Inspiratory wheezing also occurs in hypersensitivity pneumonitis. Wheezes heard at the end of both expiratory and inspiratory phases usually signify the periodic opening of deflated alveoli, as occurs in some diseases that lead to collapse of parts of the lungs.
The location of the wheeze can also be an important clue to the diagnosis. Diffuse processes that affect most parts of the lungs are more likely to produce wheezing that may be heard throughout the chest via a stethoscope. Localized processes, such as the occlusion of a portion of the respiratory tree, are more likely to produce wheezing at that location, hence the sound will be loudest and radiate outwardly. The pitch of a wheeze does not reliably predict the degree of narrowing in the affected airway.
Stridor
A special type of wheeze is stridor </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Homelessness and can you explain it in depth? [/INST] Homelessness or houselessness – also known as a state of being unhoused or unsheltered – is the condition of lacking stable, safe, and adequate housing. People can be categorized as homeless if they are:
living on the streets, also known as rough sleeping (primary homelessness);
moving between temporary shelters, including houses of friends, family, and emergency accommodation (secondary homelessness);
living in private boarding houses without a private bathroom or security of tenure (tertiary homelessness);
having no permanent house or place to live safely; and
Internally Displaced Persons, persons compelled to leave their places of domicile, who remain as refugees within their country's borders. The rights of people experiencing homelessness also vary from country to country. United States government homeless enumeration studies also include people who sleep in a public or private place which is not designed for use as a regular sleeping accommodation for human beings. Homelessness and poverty are interrelated. There is no methodological consensus on counting homeless people and identifying their needs; therefore, in most cities, only estimated homeless populations are known.
In 2005, an estimated 100 million people worldwide were homeless, and as many as one billion people (one in 6.5 at the time) live as squatters, refugees, or in temporary shelter, all lacking adequate housing.
Scarce and expensive housing is the main cause of rising homelessness in the United States.
United Nations definition
In 2004, the United Nations Department of Economic and Social Affairs defined a homeless household as a household without a shelter that would fall within the scope of living quarters, due to a lack of a steady income. They carry their few possessions with them, sleeping in the streets, in doorways or on piers, or in another space, on a more or less random basis. In 2009, at the United Nations Economic Commission for Europe Conference of European Statisticians (CES), held in Geneva, Switzerland, the Group of Experts on Population and Housing Censuses defined homelessness as:
In its Recommendations for the Censuses of Population and Housing, the CES identifies homeless people under two broad groups: (a) Primary homelessness (or rooflessness). This category includes persons living in the streets without a shelter that would fall within the scope of living quarters; (b) Secondary homelessness. This category may include persons with no place of usual residence who move frequently between various types of accommodations (including dwellings, shelters, and institutions for the homeless or other living quarters). This category includes persons living in private dwellings but reporting no usual address on their census form. The CES acknowledges that the above approach does not provide a full definition of the homeless.
Article 25 of the Universal Declaration of Human Rights, adopted 10 December 1948 by the UN General Assembly, contains this text regarding housing and quality of living:
Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing, and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.
The ETHOS Typology of Homelessness and Housing Exclusion was developed as a means of improving understanding and measurement of homelessness in Europe, and to provide a common "language" for transnational exchanges on homelessness. The ETHOS approach confirms that homelessness is a process (rather than a static phenomenon) that affects many vulnerable households at different points in their lives. The typology was launched in 2005 and is used for different purposes: as a framework for debate, for data collection purposes, for policy purposes, for monitoring purposes, and in the media. This typology is an open exercise that abstracts from the existing legal definitions in the EU member states. It exists in 25 language versions, the translations being provided mainly by volunteer translators.
Despite all the international agreements and efforts, many countries and individuals in power do not consider housing a human right. Former US President Jimmy Carter addressed this issue in a 2017 interview. He said, “A lot of people don't look at housing as a human right, but it is.” His view contrasts with that of many Americans, especially those in power, who do not believe this right is in line with the US Constitution.
Other terms
Recent homeless enumeration survey documentation utilizes the term unsheltered homeless. The common colloquial term street people does not fully encompass all unsheltered people, in that many such persons do not spend their time in urban street environments. Many shun such locales, because homeless people in urban environments may face the risk of being robbed or assaulted. Some people convert unoccupied or abandoned buildings ("squatting"), or inhabit mountainous areas or, more often, lowland meadows, creek banks, and beaches. Many jurisdictions have developed programs to provide short-term emergency shelter during particularly cold spells, often in churches or other institutional properties. These are referred to as warming centers, and are credited by their advocates as lifesaving.
History
Early history through the 19th century
United Kingdom
Following the Peasants' Revolt, English constables were authorized under a 1383 English Poor Laws statute to collar vagabonds and force them to show means of support; if they could not, the penalty was gaol. Vagabonds could be sentenced to the stocks for three days and nights; in 1530, whipping was added. The presumption was that vagabonds were unlicensed beggars. In 1547, a bill was passed that subjected vagrants to some of the more extreme provisions of the criminal law, namely two years servitude and branding with a "V" as the penalty for the first offense and death for the second. Large numbers of vagabonds were among the convicts transported to the American colonies in the 18th century. During the 16th century in England, the state first tried to give housing to vagrants instead of punishing them, by introducing bridewells to take vagrants and train them for a profession. In the 17th and 18th centuries, these were replaced by workhouses, but these were intended to discourage too much reliance on state help.
United States
In the Antebellum South, the availability of slave labor made it difficult for poor white people to find work. To prevent poor white people from cooperating with enslaved black people, slaveowners policed poor whites with vagrancy laws. After the American Civil War, a large number of homeless men formed part of a counterculture known as "hobohemia" all over the United States. In smaller towns, hobos temporarily lived near train tracks and hopped onto trains to various destinations. The growing movement toward social concern sparked the development of rescue missions, such as the first rescue mission in the U.S., the New York City Rescue Mission, founded in 1872 by Jerry and Maria McAuley.
Modern
20th century
The Great Depression of the 1930s caused a devastating epidemic of poverty, hunger, and homelessness in the United States. When Franklin D. Roosevelt took over the presidency from Herbert Hoover in 1933, he passed the New Deal, which greatly expanded social welfare, including providing funds to build public housing, and helped to alleviate the crisis. How the Other Half Lives and Jack London's The People of the Abyss (1903) discussed homelessness and raised public awareness, which caused some changes in building codes and some social conditions. In England, dormitory housing called "spikes" was provided by local boroughs. By the 1930s in England, there were 30,000 people living in these facilities. In 1933, George Orwell wrote about poverty in London and Paris, in his book Down and Out in Paris and London. In general, in most countries, many towns and cities had an area that contained the poor, transients, and afflicted, such as a "skid row". In New York City, for example, there was an area known as "the Bowery", traditionally an area where people with an alcohol use disorder were to be found sleeping on the streets, bottle in hand.
In the 1960s in the UK, the nature and growing problem of homelessness changed in England as public concern grew. The number of people living "rough" in the streets had increased dramatically. However, beginning with the Conservative administration's Rough Sleeper Initiative, the number of people sleeping rough in London fell dramatically. This initiative was supported further by the incoming Labour administration from 2009 onwards with the publication of the Coming in from the Cold strategy published by the Rough Sleepers Unit, which proposed and delivered a massive increase in the number of hostel bed spaces in the capital and an increase in funding for street outreach teams, who work with rough sleepers to enable them to access services. Scotland saw a slightly different picture, with the impact of the right to buy ending in a devastating drop in available social housing, something from which it has never recovered. The 1980s and 1990s saw an ever-increasing number of people becoming homeless, with very few rights in place to enable change.
2000s
However, this picture changed in Scotland from 2001, as the Scottish Parliament came into being. It was agreed by all parties that a ten-year plan to eradicate homelessness by the end of 2012 would be implemented. The minister of housing met with the third sector and local authorities every 6 weeks, checking on progress, whilst consultations brought about legislative change, alongside work to prevent homelessness. There was a peak in applications around 2005, but from there onwards figures dropped year on year for the next 8 years. However, with a focus on the broader numbers of people experiencing homelessness, many people with higher levels of need got caught in the system. Work from 2017 started to address this, with a framework currently in place to work towards a day when everyone in Scotland has a home suitable to meet their needs.
In 2002, research showed that children and families were the largest growing segment of the homeless population in the United States, and this has presented new challenges to agencies.
In the US, the government asked many major cities to come up with a ten-year plan to end homelessness. One of the results of this was a "Housing First" solution. The Housing First program offers homeless people access to housing without having to undergo tests for sobriety and drug usage. The Housing First program seems to benefit homeless people in every aspect except for substance abuse, for which the program offers little accountability. An emerging consensus is that the Housing First program still gives clients a higher chance at retaining their housing once they get it. A few critical voices argue that it misuses resources and does more harm than good; they suggest that it encourages rent-seeking and that there is not yet enough evidence-based research on the effects of this program on the homeless population. Some formerly homeless people, who were finally able to obtain housing and other assets which helped to return to a normal lifestyle, have donated money and volunteer services to the organizations that provided aid to them during their homelessness. Alternatively, some social service entities that help homeless people now employ formerly homeless individuals to assist in the care process.
Homelessness has migrated toward rural and suburban areas. The number of homeless people has not changed dramatically but the number of homeless families has increased according to a report of HUD. The United States Congress appropriated $25 million in the McKinney-Vento Homeless Assistance Grants for 2008 to show the effectiveness of Rapid Re-housing programs in reducing family homelessness. In February 2009, President Obama signed the American Recovery and Reinvestment Act of 2009, part of which addressed homelessness prevention, allocating $1.5 billion for a Homeless Prevention Fund. The Emergency Shelter Grant (ESG) programs name was changed to Emergency Solution Grant (ESG) program, and funds were re-allocated to assist with homeless prevention and rapid re-housing for families and individuals.
Causes
Major reasons for homelessness include:
Rent and eviction
Gentrification is a process in which a formerly affordable neighborhood becomes popular with wealthier people, raising housing prices and pushing poorer residents out. Gentrification can cause or influence evictions, foreclosures, and rent regulation.
Increased wealth disparity and income inequality cause distortions in the housing market that push rent burdens higher, making housing unaffordable. In many countries, people lose their homes by government order to make way for newer upscale high-rise buildings, roadways, and other governmental needs. The compensation may be minimal, in which case the former occupants cannot find appropriate new housing and become homeless.
Mortgage foreclosures, in which mortgage holders take and sell the house to recover a defaulted loan, can leave people homeless. Foreclosures on landlords often lead to the eviction of their tenants. "The Sarasota, Florida, Herald Tribune noted that, by some estimates, more than 311,000 tenants nationwide have been evicted from homes this year after lenders took over the properties."

Rent regulation also has a small effect on shelter and street populations, largely because rent control reduces the quality and quantity of housing. For example, a 2019 study found that San Francisco's rent control laws reduced tenant displacement from rent-controlled units in the short term, but resulted in landlords removing 30% of the rent-controlled units from the rental market (by conversion to condos or TICs), which led to a 15% citywide decrease in total rental units and a 7% increase in citywide rents.
Economics
A substantial percentage of the US homeless population are individuals who are chronically unemployed or have trouble managing their expenses, which can lead to poverty. Factors that can lead to economic struggle include neighborhood gentrification (as previously discussed), drug or gambling addiction, job loss, debt, chronic overspending, loss of money or assets through divorce, the death of a breadwinning spouse, being denied jobs due to discrimination, and living on welfare or disability income, among many others.
Poverty
Poverty is a significant factor in homelessness, so the alleviation of poverty plays an essential role in eliminating it. Some non-governmental organizations (NGOs) have studied unconditional cash transfers (UCTs) to low-income families and individuals as a way to reduce poverty in developing countries. Despite initial concern about the potential negative effects of UCTs on recipients, the researchers found promising results. A study in Kenya found that assisted households increased their consumption and savings; while the families spent more on food and food security, they did not spend on unnecessary goods or services. This suggests that a proper approach to poverty could effectively eliminate this factor as part of a solution to homelessness. Providing access to education and employment for low-income families and individuals must also be considered in combating poverty and preventing homelessness.
Physical and mental health
Many researchers have identified that homelessness is closely connected to a decline in physical and mental health. Most people who use homeless shelters on a frequent basis face multiple disadvantages, such as an increased prevalence of physical and mental health problems, disabilities, addiction, poverty, and discrimination. Studies have shown that homeless people have high levels of morbidity and mortality, and suffer from a wide range of health problems, including severe infectious diseases (such as tuberculosis, HIV/AIDS, and STDs), addictions, and mental illnesses. Notably, homeless people's lack of access to healthcare adversely affects the healthcare system as well. Homeless people often obtain their care in emergency departments, partly because they lack adequate health insurance, especially for medications. Statistics show that they are admitted to hospital five times more often than the general public and stay under care much longer. Their prolonged care imposes a significant cost on the healthcare system and deprives others of timely healthcare. Studies show that preventive and primary care, which homeless people are not receiving, substantially lower overall healthcare costs. The healthcare system's performance in treating the mental illnesses of homeless people has not been promising either. A comprehensive solution to homelessness must therefore include an effective approach to providing healthcare to homeless people.
Disabilities, especially where disability services are non-existent, inconvenient, or poorly performing, can impact a person's ability to keep up house payments, mortgages, or rent, especially if they are unable to work. Traumatic brain injury is one major disability that can account for homelessness: according to a Canadian survey, traumatic brain injury is widespread among homeless people and, for around 70% of respondents, can be attributed to a time "before the onset of homelessness".

Being afflicted with a mental disorder, including substance use disorders, where mental health services are unavailable or difficult to access, can also drive homelessness for the same reasons as disabilities. A United States federal survey done in 2005 indicated that at least one-third of homeless men and women have serious psychiatric disorders or problems. Autism spectrum disorders and schizophrenia are the two most common mental disabilities among the U.S. homeless. Personality disorders are also very prevalent, especially Cluster A. Substance abuse can also cause homelessness through behavioral patterns associated with addiction that alienate an addicted individual's family and friends, who could otherwise provide support during difficult economic times.
Discrimination
A history of domestic violence can also contribute to homelessness. Compared to housed women, homeless women are more likely to report childhood histories of abuse, as well as more recent physical abuse by male partners.

Gender disparities also influence the demographics of homelessness. The experiences of homeless women and women in poverty are often overlooked; however, they face specific gender-based victimization. As individuals with little to no physical or material capital, homeless women are particularly targeted by male law enforcement officers and men living on the street. It has been found that "street-based homelessness dominates mainstream understanding of homelessness and it is indeed an environment in which males have far greater power (O'Grady and Gaetz, 2004)." Women on the street are often motivated to gain capital through affiliation and relationships with men, rather than facing homelessness alone. Within these relationships, women are still likely to be physically and sexually abused.

Social exclusion related to sexual orientation, gender identity or expression, or sex characteristics can also contribute to homelessness through discrimination. Relationship breakdown, particularly between young people and their parents, such as disownment due to sexuality or gender identity, is one example. A criminal history or former imprisonment can also make it harder to secure housing.
Human and natural disasters
Natural disasters, including earthquakes, hurricanes, tsunamis, tornadoes, and volcanic eruptions, can cause homelessness. An example is the 1999 Athens earthquake in Greece, in which many middle-class people became homeless, some of them living in containers, especially in the Nea Ionia earthquake survivors' container city provided by the government; in most cases, their only property to survive the quake was their car. Such people are known in Greece as seismopathis, meaning earthquake-struck.

War or armed conflict can create refugees fleeing violence. Whether displaced within their own country or arriving from abroad, the number of migrants can outstrip the supply of affordable housing, leaving some of this population homeless.
Foster care
Transitions from foster care and other public systems can also impact homelessness; in particular, youth who have been involved in the foster care system are more likely to become homeless. Most of those leaving the system have no support and no income, making it nearly impossible to break the cycle and forcing them to live on the streets. There is also a lack of shelter beds for youth, and various shelters have stringent admissions policies.
Choice
Although uncommon, some people choose homelessness as a lifestyle, for a variety of reasons. They may not want to contribute to a capitalist society, with its jobs, spending, debt, and taxes. Some practice freeganism, whose main aspect is anti-consumerism: avoiding spending money wherever possible. Some see homelessness as freer than living in a house or apartment and prefer being in nature and away from other people. Others may have had a traumatic experience in a house or apartment, such as a fire, and feel safer outside, where they can survey their surroundings.
Challenges
The basic problem of homelessness is the need for personal shelter, warmth, and safety. Other difficulties include:
Hygiene and sanitary facilities
Hostility from the public and laws against urban vagrancy
Cleaning and drying of clothes
Obtaining, preparing, and storing food
Keeping contact with friends, family, and government service providers without a permanent location or mailing address
Medical problems, including issues caused by an individuals homeless state (e.g., hypothermia or frostbite from sleeping outside in cold weather), or issues which are exacerbated by homelessness due to lack of access to treatment (e.g., mental health and the individual not having a place to store prescription drugs)
Personal security, quiet, and privacy, especially for sleeping, bathing, and other hygiene activities
Safekeeping of bedding, clothing, and possessions, which may have to be carried at all times

People experiencing homelessness face many problems beyond the lack of a safe and suitable home. They are often faced with reduced access to private and public services and vital necessities:
General rejection or discrimination from other people
Increased risk of suffering violence and abuse
Limited access to education
Loss of usual relationships with the mainstream
Not being seen as suitable for employment
Reduced access to banking services
Reduced access to communications technology
Reduced access to healthcare and dental services
Targeting by municipalities to exclude from public space
Exclusion through hostile architecture
Difficulty forming trust in relation to services, systems, and other people, exacerbating pre-existing difficulty in accessing aid and escaping homelessness, particularly among the chronically homeless

Statistics from the past twenty years in Scotland demonstrate that the biggest cause of homelessness is some form of relationship breakdown.

There is sometimes corruption and theft by the employees of a shelter, as evidenced by a 2011 investigative report by FOX 25 TV in Boston, in which a number of Boston public shelter employees were found to have been stealing large amounts of food from the shelter's kitchen over a period of time for their private use and catering. Homeless people are often obliged to adopt various strategies of self-presentation in order to maintain a sense of dignity, which constrains their interaction with passers-by and leads to suspicion and stigmatization by the mainstream public.

Homelessness is also a risk factor for depression caused by prejudice. When someone who is prejudiced against homeless people becomes homeless themselves, their anti-homelessness prejudice turns inward, causing depression. "Mental disorders, physical disability, homelessness, and having a sexually transmitted infection are all stigmatized statuses someone can gain despite having negative stereotypes about those groups." Difficulties can compound. A study in Hong Kong found that over half of the city's homeless people (56%) had some degree of mental illness, and only 13% of those were receiving treatment, leaving a large portion untreated.

The issue of anti-homeless architecture came to light in 2014, after a photo of hostile features (spikes in the ground) in London spread across social media. The structure in the photo was a classic example of hostile architecture: an attempt to discourage people from accessing or using public space in irregular ways. Although it has only recently come to public attention, hostile architecture has existed for a long time in many places. An example is the low overpasses between New York City and Long Island, which the urban planner Robert Moses designed to prevent public buses from passing beneath them.
Healthcare
Health care for homeless people is a major public health challenge. Compared to the general population, people who are homeless experience higher rates of adverse physical and mental health outcomes: chronic disease severity, respiratory conditions, rates of mental illness, and substance use are all often greater in homeless populations. Homelessness is also associated with a high risk of suicide attempts. Homeless people are more likely to suffer injuries and medical problems from their lifestyle on the street, which includes poor nutrition, exposure to the severe elements of weather, and higher exposure to violence. At the same time, they have reduced access to public medical services and clinics, in part because they often lack identification or registration for public healthcare services. There are significant challenges in treating homeless people who have psychiatric disorders: clinical appointments may not be kept, their continuing whereabouts may be unknown, medicines may not be taken as prescribed, and medical and psychiatric histories may be inaccurate. Because many homeless people have mental illnesses, this has presented a crisis in care.

The conditions affecting homeless people are somewhat specialized and have opened a new area of medicine tailored to this population. Skin conditions, including scabies, are common, because homeless people are exposed to extreme cold in the winter and have little access to bathing facilities. They have problems caring for their feet and have more severe dental problems than the general population. Diabetes, especially untreated, is widespread in the homeless population. Specialized medical textbooks have been written to address this for providers.

Due to the demand for free medical services by homeless people, it might take months to get a minimal dental appointment in a free-care clinic. Communicable diseases are of great concern, especially tuberculosis, which spreads more easily in crowded homeless shelters in high-density urban settings. There has been ongoing concern about, and study of, the health and wellness of the older homeless population, typically ages 50 to 64 and older, as to whether they are significantly more sickly than their younger counterparts and whether they are under-served.

A 2011 study led by Dr. Rebecca T. Brown in Boston, conducted by the Institute for Aging Research (an affiliate of Harvard Medical School), Beth Israel Deaconess Medical Center, and the Boston Health Care for the Homeless Program, found that the elderly homeless population had "higher rates of geriatric syndromes, including functional decline, falls, frailty, and depression than seniors in the general population, and that many of these conditions may be easily treated if detected". The report was published in the Journal of General Internal Medicine.

There are government avenues which provide resources for the development of healthcare for homeless people. In the United States, the Bureau of Primary Health Care has a program that provides grants to fund the delivery of healthcare to homeless people. According to 2011 UDS data, community health centers were able to provide service to 1,087,431 homeless individuals. There are also many nonprofit and religious organizations which provide healthcare services to homeless people. These organizations help meet the large existing need for expanding healthcare for homeless people.
Significant numbers of unsheltered persons have died of hypothermia, adding impetus to the trend of establishing warming centers and of extending enumeration surveys with vulnerability indexes.
Effect on life expectancy
In 1999, Dr. Susan Barrow of the Columbia University Center for Homelessness Prevention Studies reported in a study that the "age-adjusted death rates of homeless men and women were four times those of the general U.S. population and two to three times those of the general population of New York City". A report commissioned by homeless charity Crisis in 2011 found that on average, homeless people in the UK have a life expectancy of 47 years, 30 years younger than the rest of the population.
Health impacts of extreme weather events
People experiencing homelessness are at significantly increased risk from the effects of extreme weather events, including extreme heat and cold, floods, storm surges, heavy rain, and droughts. While many factors contribute to these events, climate change is driving an increase in their frequency and intensity. The homeless population is considerably more vulnerable to these events due to higher rates of chronic disease and lower socioeconomic status. Despite having a minimal carbon footprint, homeless people experience a disproportionate burden of the effects of climate change.

Homeless persons are more vulnerable to extreme weather events for many reasons. They are disadvantaged in most social determinants of health, including lack of housing, reduced access to adequate food and water, and difficulty in obtaining and maintaining health care. They have significantly higher rates of chronic disease, including respiratory disease and infections, gastrointestinal disease, musculoskeletal problems, and mental illness. Self-reported rates of respiratory diseases (including asthma, chronic bronchitis, and emphysema) are double those of the general population.

The homeless population often lives in higher-risk urban areas with increased exposure to, and little protection from, the elements, and with limited access to clean drinking water and other means of cooling down. The built environment in urban areas also contributes to the "heat island effect": the phenomenon whereby cities experience higher temperatures due to the predominance of dark, paved surfaces and a lack of vegetation. Homeless populations are often excluded from disaster planning efforts, further increasing their vulnerability when these events occur. Without the means to escape extreme temperatures and reach proper shelter or cooling and warming resources, homeless people are often left to suffer the brunt of extreme weather.
The health effects that result from extreme weather include exacerbation of chronic diseases and acute illnesses. Pre-existing conditions can be greatly exacerbated by extreme heat and cold, including cardiovascular, respiratory, skin and renal disease, often resulting in higher morbidity and mortality during extreme weather. Acute conditions such as sunburn, dehydration, heat stroke, and allergic reactions are also common. In addition, a rise in insect bites can lead to vector-borne infections. Mental health conditions can also be impacted by extreme weather events as a result of lack of sleep, increased alcohol consumption, reduced access to resources and reduced ability to adjust to the environmental changes. In fact, pre-existing psychiatric illness has been shown to triple the risk of death from extreme heat. Overall, extreme weather events appear to have a "magnifying effect" in exacerbating the underlying prevalent mental and physical health conditions of homeless populations.
Case study: Hurricane Katrina
In 2005, Hurricane Katrina, which reached category 5 strength, made landfall in Florida and Louisiana, particularly affecting the city of New Orleans and the surrounding areas. Hurricane Katrina was the deadliest US hurricane in seven decades, with more than 1,600 confirmed deaths and more than 1,000 people missing. The hurricane disproportionately affected marginalized individuals and individuals of lower socioeconomic status: 93% of shelter residents were African-American, 32% had household incomes below $10,000/year, and 54% were uninsured. The storm nearly doubled the number of homeless people in New Orleans: while in most cities homeless people account for about 1% of the population, in New Orleans they came to account for 4%. In addition to the devastating effects on infrastructure and the economy, the estimated prevalence of mental illness and the incidence of West Nile Virus more than doubled in the hurricane-affected regions after Hurricane Katrina.
Legal documentation
Homeless people may find it difficult to document their date of birth or their address. Because homeless people usually have no place to store possessions, they often lose their belongings, including identification and other documents, or find them destroyed by police or others. Without a photo ID, homeless persons cannot get a job or access many social services, including healthcare. They can be denied access to even the most basic assistance: clothing closets, food pantries, certain public benefits, and in some cases, emergency shelters. Obtaining replacement identification is difficult: without an address, birth certificates cannot be mailed; fees may be cost-prohibitive for impoverished persons; and some states will not issue birth certificates unless the person has photo identification, creating a Catch-22. This problem is far less acute in countries that provide health care free at the point of use, such as the UK, where hospitals are open-access day and night and make no charge for treatment. In the U.S., free-care clinics for homeless and other people do exist in major cities, but they often attract more demand than they can meet.
Victimization by violent crimes
Homeless people are often the victims of violent crime. A 2007 study found that the rate of violent crimes against homeless people in the United States is increasing. A study of women veterans found that homelessness is associated with domestic violence, both directly, as the result of leaving an abusive partner, and indirectly, due to trauma, mental health conditions, and substance abuse.
Stigma
Conditions such as alcoholism and mental illness are often associated with homelessness, and many people fear homeless people due to the stigma surrounding the homeless community. Surveys have revealed that before spending time with homeless people, most people fear them, but after spending time with them, that fear is lessened or gone. Another effect of this stigma is isolation.

The stigmas of homelessness can be divided into three broad categories: (1) attributing homelessness to personal incompetency and health conditions (e.g., unemployment, mental health issues, substance abuse); (2) seeing homeless people as threats to one's personal safety; and (3) treating homeless people as unsanitary (i.e., seeing them as carriers of disease). Past research has shown that these stigmas are reinforced by the conditions of homelessness itself and have a negative impact on effective public policymaking aimed at reducing homelessness. When a person lives on the street, many aspects of their personal situation, such as mental health issues and alcoholism, are more likely to be exposed to the public than for people who are housed and have access to resources that help them deal with personal crises. This lack of privacy inevitably reinforces stigma by putting stereotyped behavior on public display. Furthermore, the media often presents such personal crises as direct causes of crime, further leading the public to believe that homeless people threaten their personal safety. Many also believe that contact with homeless people increases their chance of contracting diseases, given homeless people's lack of access to stable, sanitary living conditions. These stigmas intertwine in shaping public opinion on policies related to the homeless population, resulting in many ineffective policies that do not reduce homelessness at all. One example of an ineffective but somewhat popular policy is banning sleeping on the streets.

Drawing on the well-known contact hypothesis, researchers argue that increasing contact between the homeless and non-homeless populations is likely to change public opinion of this out-group and make the public better informed when it comes to policymaking. While some believe that the contact hypothesis is only valid when the context and type of contact are specified, in the case of reducing discrimination against the homeless population, some survey data indicate that the context (e.g., the proportion of homeless people in one's city) and the type of contact (e.g., TV shows about homelessness or interpersonal conversations about it) do not produce much variation: all increase positive attitudes towards homeless people and towards public policies that aid this group. Given that the restrictions on context and type of contact needed to reduce stigma are minimal, this finding is informative for governments offering institutional support to reduce discrimination and for gauging public opinion on proposed policies to reduce homelessness.
Global statistics
Demographics
In western countries such as the United States, the typical homeless person is male and single; the Netherlands reports that 80% of homeless people aged 18–65 are men. Some cities have particularly high percentages of males in their homeless populations, with men comprising 85 percent of the homeless in Dublin. Non-white people are also overrepresented in homeless populations, being two and a half times more likely to be homeless in the U.S. The median age of homeless people is approximately 35.
Statistics for developed countries
In 2005, an estimated 100 million people worldwide were homeless. The following statistics indicate the approximate average number of homeless people at any one time. Each country has a different approach to counting homeless people, and estimates of homelessness made by different organizations vary wildly, so comparisons should be made with caution.
European Union: 3,000,000 (UN-HABITAT 2004)
England: 11,580 single households were assessed as rough sleeping at the point of approach in 2021, up 39% from 2019–20, with 119,400 households owed a prevention duty in 2020–21
Scotland: 27,571 households were assessed as homeless in 2020/21, a decrease of 13% compared to 2019/20
Canada: 150,000
Australia: On census night in 2006 there were 105,000 people homeless across Australia, an increase from the 99,900 Australians who were counted as homeless in the 2001 census
United States: The HUD 2018 Annual Homeless Assessment Report (AHAR) to Congress reports that on a single night, roughly 553,000 people were experiencing homelessness in the United States. According to HUD's July 2010 fifth Annual Homeless Assessment Report to Congress, on a single night in January 2010, point-in-time counts reported to HUD showed 649,917 people experiencing homelessness, up from 643,067 in January 2009. The unsheltered count increased by 2.8 percent while the sheltered count remained the same. HUD also reported that the number of chronically homeless people (persons with severe disabilities and long homeless histories) decreased one percent between 2009 and 2010, from 110,917 to 109,812; since 2007 this number had decreased by 11 percent, mostly due to the expansion of permanent supportive housing programs. The change in numbers reflected changes in the prevalence of homelessness in local communities rather than changes in counting methods. According to HUD's July 2010 report, more than 1.59 million people spent at least one night in an emergency shelter or transitional housing program during the 2010 reporting period, a 2.2 percent increase from 2009. Most users of homeless shelters used only an emergency shelter, 17 percent used only transitional housing, and less than 5 percent used both during the reporting period. Since 2007, the annual number of people using homeless shelters in cities had decreased from 1.22 million to 1.02 million, a 17 percent decrease, while the number using homeless shelters in suburban and rural areas had increased 57 percent, from 367,000 to 576,000. In the U.S., the federal government's HUD agency has required federally funded organizations to use a computer tracking system for homeless people and their statistics, called HMIS (Homeless Management Information System); there has been some opposition to this kind of tracking by privacy advocacy groups, such as EPIC. HUD considers its reporting techniques reasonably accurate for homeless people in shelters and programs. Determining and counting the number of homeless people in general is very difficult because of their lifestyle: there are "hidden homeless" who stay out of sight of the general population, perhaps on private property. Various countries, states, and cities have devised differing techniques to calculate an approximate count. For example, a one-night "homeless census count", called a point-in-time (PIT) count, usually held in early winter, is a technique used by a number of American cities, such as Boston; Los Angeles uses a mixed set of techniques, including the PIT street count. In 2003, HUD began requiring a PIT count in all "Continuum of Care" communities, which had to report a count of people, their housing status, and the geographic locations of individuals counted. Some communities add sub-population information to the PIT, such as information on veterans, youth, and elderly individuals, as done in Boston.
Japan: 20,000–100,000 (some figures put it at 200,000–400,000). Reports show that homelessness has been on the rise in Japan since the mid-1990s. There are more homeless men than homeless women in Japan because it is usually easier for women to get a job, women are less isolated than men, and Japanese families usually provide more support for women than for men.
Developing and undeveloped countries
The number of homeless people worldwide grew steadily in 2005. In some developing countries such as Nigeria and South Africa, homelessness is rampant, with millions of children living and working on the streets. Homelessness has become a problem in China, India, Thailand, Indonesia, and the Philippines despite their growing prosperity, partly due to migrant workers who have trouble finding permanent homes.

Estimates of the true number of homeless people worldwide vary between 100 million and 1 billion, depending on the exact definition used. Refugees, asylum-seekers, and internally displaced persons can also be considered homeless in that they, too, experience "marginalization, minority status, socioeconomic disadvantage, poor physical health, collapse of social supports, psychological distress, and difficulty adapting to host cultures", as do the domestic homeless.

In the past twenty years, scholars such as Tipple and Speak have begun to refer to homelessness as the "antithesis or absence of home" rather than rooflessness or the "lack of physical shelter". This complication in the homelessness debate further develops the idea that home actually consists of an adequate shelter, an experienced and dynamic place that serves as a "base" for nurturing human relationships and the "free development of individuals" and their identity. The home is thus perceived to be an extension of one's self and identity. In contrast, the homeless experience, according to Moore, constitutes a "lack of belonging" and a loss of identity that leads to individuals or communities feeling "out of place" once they can no longer call a place of their own home.

This new perspective on homelessness sheds light on the plight of refugees, a population of stateless people who are not normally included in the mainstream definition of homelessness. It has also created problems for researchers, because "counting" homeless people across the globe relies heavily on who is considered a homeless person. Homeless individuals, and by extension refugees, can be seen as lacking the "crucible of our modern society" and a way of actively belonging to and engaging with their respective communities or cultures. As Casavant demonstrates, a spectrum of definitions for homelessness, called the "continuum of homelessness", should treat refugees as homeless individuals, because they not only lose their home but are also afflicted with a myriad of problems that parallel those affecting the domestic homeless, such as "[a lack of] stable, safe and healthy housing, an extremely low income, adverse discrimination in access to services, with problems of mental health, alcohol, and drug abuse or social disorganization". Refugees, like the domestic homeless, lose their source of identity and way of connecting with their culture for an indefinite period of time.
Thus, the current definition of homelessness allows people to assume, simplistically, that homeless people, including refugees, are merely "without a place to live", when that is not the case. As numerous studies show, forced migration and displacement bring with them another host of problems, including socioeconomic instability and "increased stress, isolation, and new responsibilities" in a completely new environment.

For people in Russia, especially the youth, alcohol and substance use is a major cause of becoming and continuing to be homeless.

The United Nations Centre for Human Settlements (UN-Habitat) wrote in its Global Report on Human Settlements in 1995: "Homelessness is a problem in developed as well as in developing countries. In London, for example, life expectancy among homeless people is more than 25 years lower than the national average."
Poor urban housing conditions are a global problem, but conditions are worst in developing countries. UN-Habitat says that today 600 million people live in life- and health-threatening homes in Africa, Asia, and Latin America; in some African countries, such as Malawi, more than three in four young people have insufficient shelter and sanitation. "The threat of mass homelessness is greatest in those regions because that is where population is growing fastest. By 2015, the 10 largest cities in the world will be in Asia, Latin America, and Africa. Nine of them will be in developing countries: Mumbai, India – 27.4 million; Lagos, Nigeria – 24.4; Shanghai, China – 23.4; Jakarta, Indonesia – 21.2; São Paulo, Brazil – 20.8; Karachi, Pakistan – 20.6; Beijing, China – 19.4; Dhaka, Bangladesh – 19; Mexico City, Mexico – 18.8. The only city in a developed country that will be in the top ten is Tokyo, Japan – 28.7 million."

In 2008, Dr. Anna Tibaijuka, executive director of UN-HABITAT, referring to the report "State of the World's Cities Report 2008/2009", said that the ongoing world economic crisis should be viewed as a "housing finance crisis" in which the poorest of the poor were left to fend for themselves.
Refuges and alternative accommodation
There are various places where a homeless person might seek refuge:
24-hour Internet cafes are now used by over 5,000 Japanese "Net cafe refugees". An estimated 75% of Japan's 3,200 all-night internet cafes cater to regular overnight guests, who in some cases have become the cafes' main source of income.
24-hour McDonalds restaurants are used by "McRefugees" in Japan, China and Hong Kong. There are about 250 McRefugees in Hong Kong.
Couch surfing: temporary sleeping arrangements in the dwellings of friends or family members, which can also include housing in exchange for labor or sex. Couch surfers may be harder to recognize than street homeless people and are often omitted from housing counts.
Homeless shelters: including emergency cold-weather shelters opened by churches or community agencies, which may consist of cots in a heated warehouse, or temporary Christmas Shelters. More elaborate homeless shelters such as Pinellas Hope in Florida provide residents with a recreation tent, a dining tent, laundry facilities, outdoor tents, casitas, and shuttle services that help inhabitants get to their jobs each day.
Inexpensive boarding houses: also called flophouses, these offer cheap, low-quality temporary lodging.
Inexpensive motels offer cheap, low-quality temporary lodging. However, some who can afford housing live in a motel by choice. For example, David and Jean Davidson spent 22 years at various UK Travelodges.
Public places: Parks, bus or train stations, public libraries, airports, public transportation vehicles (by continual riding where unlimited passes are available), hospital lobbies or waiting areas, college campuses, and 24-hour businesses such as coffee shops. Many public places use security guards or police to prevent people from loitering or sleeping at these locations for a variety of reasons, including image, safety, and comfort of patrons.
Shantytowns: ad hoc dwelling sites of improvised shelters and shacks, usually near rail yards, interstates, and major transportation corridors. Some shantytowns have interstitial tenting areas, but the predominant feature is hard structures: each pad or site tends to accumulate roofing, sheathing, plywood, and nailed two-by-fours.
Single room occupancy (commonly abbreviated SRO), also called a "residential hotel": a form of housing typically aimed at residents with low or minimal incomes, who rent small, furnished single rooms with a bed, chair, and sometimes a small desk. SRO units are rented out as permanent or primary residences to individuals within a multi-tenant building where tenants share a kitchen, toilets, or bathrooms. In the 2010s, some SRO units came to include a small refrigerator, microwave, and sink.
Squatting in an unoccupied structure, where a homeless person may live without payment and without the owner's knowledge or permission. Often these buildings have long been abandoned and are not safe to occupy.
Tent cities: ad hoc campsites of tents and improvised shelters of tarpaulins and blankets, often near industrial and institutionally zoned real estate such as rail yards, highways, and major transportation corridors. A few more elaborate tent cities, such as Dignity Village, are hybrids of tent cities and shantytowns. Tent cities frequently consist only of tents and improvised fabric structures, with no semi-permanent structures at all.
Outdoors: on the ground or in a sleeping bag, tent, or improvised shelter, such as a large cardboard box, under a bridge, in an urban doorway, in a park or a vacant lot.
Tunnels such as abandoned subway, maintenance, or train tunnels are popular among the long-term or permanent homeless. The inhabitants of such refuges are called in some places, like New York City, "Mole People". Natural caves beneath urban centers allow for places where people can congregate. Leaking water pipes, electric wires, and steam pipes allow for some of the essentials of living.
Vehicles: cars or trucks used as temporary or sometimes long-term living quarters, for example by those recently evicted from a home. Some people live in recreational vehicles (RVs), school buses, vans, sport utility vehicles, covered pickup trucks, station wagons, sedans, or hatchbacks. The vehicular homeless, according to homeless advocates and researchers, comprise the fastest-growing segment of the homeless population. Many cities have safe parking programs in which lawful sites are permitted at churches or in other out-of-the-way locations. For example, because it is illegal to sleep in a vehicle parked on the street in Santa Barbara, the New Beginnings Counseling Center worked with the city to make designated parking lots available to homeless people.
Other housing options
Transitional housing
Transitional housing provides temporary housing for certain segments of the homeless population, including the working homeless, and is meant to transition residents into permanent, affordable housing. It usually takes the form of a room or apartment in a residence with support services. The transitional period can be relatively short, for example one or two years, in which time the person must apply for and obtain permanent housing along with gainful employment or income, even if from Social Security or other assistance. Some transitional housing programs charge a room and board fee, perhaps 30% of an individual's income, which is sometimes partially or fully refunded after the person procures a permanent residence. In the U.S., federal funding for transitional housing programs was originally allocated in the McKinney–Vento Homeless Assistance Act of 1987.
Foyers
Foyers are a specific type of transitional housing designed for homeless or at-risk teens. They are generally institutions that provide affordable accommodation as well as support and training services for residents. They were pioneered in the 1990s in the United Kingdom, but have since been adopted in parts of Australia and the United States.
Supportive housing
Supportive housing is a combination of housing and services intended as a cost-effective way to help people live more stable, productive lives. Supportive housing works well for those who face the most complex challenges: individuals and families confronted with homelessness who also have very low incomes or serious, persistent issues such as substance use disorders (including alcohol use disorder), mental illness, HIV/AIDS, or other serious challenges. A 2021 systematic review of 28 interventions, mostly in North America, showed that the interventions with the highest levels of support led to improvements in both housing stability and health outcomes.
Government initiatives
In South Australia, the state government of Premier Mike Rann (2002–2011) committed substantial funding to a series of initiatives designed to combat homelessness. Advised by Social Inclusion Commissioner David Cappo and Rosanne Haggerty, founder of New York's Common Ground program, the Rann government established Common Ground Adelaide, building high-quality inner-city apartments (combined with intensive support) for "rough sleeping" homeless people. The government also funded the Street to Home program and a hospital liaison service designed to assist homeless people admitted to the emergency departments of Adelaide's major public hospitals. Rather than being released back into homelessness, patients identified as rough sleepers were found accommodation backed by professional support. Common Ground and Street to Home now operate in other states across Australia.
Assistance and resources
Most countries provide a variety of services to assist homeless people. The provision of food, shelter, and clothing may be organized and run by community organizations, often with the help of volunteers, or by government departments. Assistance programs may be supported by government, charities, churches, and individual donors. However, not all homeless people can access these resources: in 1998, a study by Koegel and Schoeni of a homeless population in Los Angeles, California, found that a significant minority of homeless people did not participate in government assistance programs, with high transaction costs a likely contributing factor.
Social supports
While some homeless people are known to have a community with one another, providing each other various types of support, people who are not homeless also may provide them friendship, relational care, and other forms of assistance. Such social supports may occur through a formal process, such as under the auspices of a non-governmental organization, religious organization, or homeless ministry, or may be done on an individual basis.
Income
Employment
The United States Department of Labor has sought to address one of the main causes of homelessness, a lack of meaningful and sustainable employment, through targeted training programs and increased access to employment opportunities that can help homeless people develop sustainable lifestyles. This has included the development of the United States Interagency Council on Homelessness, which addresses homelessness on the federal level in addition to connecting homeless individuals to resources at the state level. All individuals who are in need of assistance are able, in theory, to access employment and training services under the Workforce Investment Act (WIA), although this is contingent upon funding and program support by the government.
Income sources outside of regular employment
Waste management
Homeless people can also use waste management services to earn money. Some homeless people find returnable bottles and cans and bring them to recycling centers to earn money. They can sort out organic trash from other trash or separate out trash made of the same material (for example, different types of plastics, and different types of metal). In addition, rather than picking waste at landfills, they can also collect litter found on/beside the road to earn an income.
Street newspapers
Street newspapers are newspapers or magazines sold by homeless or poor individuals and produced mainly to support these populations. Most such newspapers primarily provide coverage about homelessness and poverty-related issues and seek to strengthen social networks within homeless communities, making them a tool for allowing homeless individuals to work.
Medicine
The 2010 passage of the Patient Protection and Affordable Care Act could provide new healthcare options for homeless people in the United States, particularly through the optional expansion of Medicaid. A 2013 Yale study indicated that a substantial proportion of the chronically homeless population in America would be able to obtain Medicaid coverage if states expanded Medicaid under the Affordable Care Act.

In 1985, the Boston Health Care for the Homeless Program was founded to assist the growing numbers of homeless people living on the streets and in shelters in Boston who were suffering from a lack of effective medical services. In 2004, Boston Health Care for the Homeless, in conjunction with the National Health Care for the Homeless Council, published a medical manual called The Health Care of Homeless Persons, edited by James J. O'Connell, M.D., specifically for the treatment of the homeless population. In June 2008, the Boston Health Care for the Homeless Program opened the Jean Yawkey Place, a four-story, 7,214.2-square-metre (77,653 sq ft) building on the Boston Medical Center campus dedicated entirely to providing healthcare for homeless people. It also contains a long-term care facility, the Barbara McInnis House, which expanded to 104 beds and is the first and largest medical respite program for homeless people in the United States.

In Los Angeles, a collaboration between the Ostrow School of Dentistry of the University of Southern California and the Union Rescue Mission shelter offers homeless people in the Skid Row area free dental services.

Studies of intensive mental health interventions have demonstrated some improvements in housing stability, and cost analyses have found them economically beneficial.
Housing
Permanent supportive housing (PSH) interventions appear to improve housing stability for people experiencing homelessness, even over the long term.
Savings from housing homeless in the US
In 2013, a Central Florida Commission on Homelessness study indicated that the region spends $31,000 a year per homeless person to cover "salaries of law enforcement officers to arrest and transport homeless individuals – largely for nonviolent offenses such as trespassing, public intoxication or sleeping in parks – as well as the cost of jail stays, emergency room visits and hospitalization for medical and psychiatric issues". This did not include "money spent by nonprofit agencies to feed, clothe and sometimes shelter these individuals". In contrast, the report estimated the cost of permanent supportive housing at "$10,051 per person per year" and concluded that "[h]ousing even half of the region's chronically homeless population would save taxpayers $149 million over the next decade – even allowing for 10 percent to end up back on the streets again." This particular study followed 107 long-term homeless residents living in Orange, Osceola, or Seminole Counties. Similar studies have shown large financial savings in Charlotte and southeastern Colorado from focusing on simply housing the homeless. In general, however, housing interventions have shown mixed economic results in cost-analysis studies.
Innovative solutions
Los Angeles conducted a competition, promoted by Mayor Eric Garcetti, soliciting ideas from developers for using bond money more efficiently to build housing for the city's homeless population. The top five winners were announced on 1 February 2019. The concepts included assembly-ready molded polymer panels that can be put together with basic tools; prefabricated, stackable five-story houses; privately financed modular buildings erected on properties that do not require City Council approval; using bond money to convert residential garages into small apartments dedicated to homeless rentals; and redeveloping bungalow-court units, the small, iconic low-income buildings that housed 7% of the city's population in the 1920s.

In the neighborhood of Westlake, Los Angeles, the city is funding its first transitional homeless housing building using "cargotecture", or "architecture built from repurposed shipping containers". The Hope on Alvarado micro-apartment building will consist of four stories of 84 containers stacked together like Lego bricks on top of a traditionally constructed ground floor. Completion is anticipated by the end of 2019.
Political action
Voting for elected officials is an important way for the homeless population to have a voice in the democratic process.

There are also many community organizations and social movements around the world which are taking action to reduce homelessness. They have sought to counteract its causes and reduce its consequences by starting initiatives that help homeless people transition to self-sufficiency. Social movements and initiatives tend to follow a grassroots, community-based model of organization, generally characterized by a loose, informal, and decentralized structure with an emphasis on radical protest politics. By contrast, an interest group aims to influence government policies through a more formal organizational structure. These groups share a common element: both are made up of and run by a mix of allies of the homeless population and former or current members of it. Both grassroots groups and interest groups aim to break the stereotyped images of homeless people as weak and hapless or as defiant criminals and drug addicts, and to ensure that the voice of homeless people and their representatives is clearly heard by policymakers.
Organizing in homeless shelters
Homeless shelters can become grounds for community organization and the recruitment of homeless individuals into social movements for their own cause. Cooperation between the shelter and an elected representative from the homeless community at each shelter can serve as the backbone of this type of initiative. The representative presents and forwards problems, raises concerns, and provides new ideas to the director and staff of the shelter. Examples of such problems are how to deal with substance use disorders among certain shelter users and how to resolve interpersonal conflicts. SAND, the Danish National Organization for Homeless People, is one example of an organization that uses this empowerment approach. Issues reported at the homeless shelters are then addressed by SAND at the regional or national level. To open further dialogue, SAND organizes regional discussion forums where staff and leaders from the shelters, homeless representatives, and local authorities meet to discuss issues and good practices at the shelters.
Veteran specific
There are a number of organizations that support homeless veterans, an issue most commonly seen in the United States. Non-governmental organizations house or redirect homeless veterans to care facilities.

The SSI/SSDI Outreach, Access, and Recovery (SOAR) program is a national project funded by the Substance Abuse and Mental Health Services Administration. It is designed to increase access to SSI/SSDI for eligible adults who are homeless or at risk of becoming homeless and have a mental illness or a co-occurring substance use disorder. Using a three-pronged approach of strategic planning, training, and technical assistance (TA), the SOAR TA Center coordinates this effort at the state and community level.

The United States Department of Housing and Urban Development and the Veterans Administration have a special Section 8 housing voucher program called VASH (Veterans Administration Supported Housing), or HUD-VASH, which distributes a certain number of Section 8 subsidized housing vouchers to eligible homeless and otherwise vulnerable US armed forces veterans. The HUD-VASH program has shown success in housing many homeless veterans. The support available to homeless veterans varies internationally, however. In England, for example, where there is a national right to housing, veterans are only prioritized by local authority homelessness teams if they are found to be vulnerable due to having served in the Armed Forces.

Under the Department of Labor, the Veterans' Employment and Training Service (VETS) offers a variety of programs targeted at ending homelessness among veterans. The Homeless Veterans Reintegration Program (HVRP) is the only national program exclusively focused on assisting veterans as they reenter the workforce. VETS also has an Incarcerated Veterans Transition Program, as well as services unique to female veterans. Mainstream programs initiated by the Department of Labor have included the Workforce Investment Act, One-Stop Career Centers, and a Community Voice Mail system that helps connect homeless individuals around the United States with local resources. Targeted labor programs have included the Homeless Veterans Reintegration Project, the Disability Program Navigator Initiative, efforts to end chronic homelessness through employment and housing projects, Job Corps, and the Veterans Workforce Investment Program (VWIP).
By location
Africa
Egypt
Homelessness in Egypt is a significant social issue affecting some 12 million people in the country. Egypt has over 1,200 areas designated for irregular dwellings that do not conform to standard building laws, allowing homeless people to build shacks and other shelters for themselves.

Reportedly, homelessness in Egypt is defined to include those living in marginal housing, although some scholars have stated that there is no agreed-upon definition of homelessness in Egypt, due to the difficulties the government would face if an official definition were accepted.

According to UNICEF, there are 1 million children living on the streets in Egypt; other researchers estimate the number at some 3 million. Homelessness NGOs assisting street children include the Hope Village Society and NAFAS. Other NGOs, such as Plan International Egypt, work to reintegrate street children back into their families.
South Africa
Homelessness in South Africa dates back to the apartheid period. Increasing unemployment, a lack of affordable housing, social disintegration, and social and economic policies have all been identified as contributing factors. Some scholars argue that solutions to homelessness in South Africa lie more within the private sphere than in the legal and political spheres. There is no national census of homeless people in South Africa; researchers instead rely on individual studies of homeless persons in particular cities. The South African homeless population has been estimated at 200,000 people from a diverse range of backgrounds.
One study found that three out of four South African metropolitan municipalities viewed homelessness primarily as a social dependency issue, responding with social interventions. At the same time, homeless South Africans indicated that the most important thing the municipality could assist them with was employment and well-located affordable housing.
Asia
China
In 2011, there were approximately 2.41 million homeless adults and 179,000 homeless children living in China. However, one publication estimated that there were one million homeless children in the country in 2012.

Housing in China is highly regulated by the Hukou system, which gives rise to a large number of migrant workers, numbering 290.77 million in 2019. These migrant workers hold rural Hukou but move to the cities in order to find better jobs; because of their rural Hukou, they are entitled to fewer privileges than those with urban Hukou. According to Huili et al., these migrant workers "live in overcrowded and unsanitary conditions" and are always at risk of displacement to make way for new real estate developments. In 2017, the government responded to a deadly fire in a crowded building in Beijing by cracking down on dense illegal shared accommodations and evicting the residents, leaving many migrant laborers homeless. This came in the context of larger attempts by the government to limit population growth in Beijing, which often target migrant laborers. However, according to official government statistics, migrant workers in China have an average of 20.4 square metres (220 sq ft) of living space per capita, and the vast majority have basic living facilities such as heating, bathing, refrigerators, and washing machines.
Several natural disasters have led to homelessness in China. The 2000 Yunnan earthquake left 92,479 people homeless and destroyed over 41,000 homes.

Homelessness among people with mental health problems is much less common in China than in high-income countries, due to stronger family ties, but is increasing due to migration within families and as a result of the one-child policy. A study in Xiangtan found at least 2,439 people with schizophrenia who had been homeless, out of a total population of 2.8 million. It found that "homelessness was more common in individuals from rural communities (where social support services are limited), among those who wander away from their communities (i.e., those not from Xiangtan municipality), and among those with limited education (who are less able to mobilize social supports). Homelessness was also associated with greater age; [the cause] may be that older patients have burned their bridges with relatives and, thus, end up on the streets."

During the Cultural Revolution a large proportion of child welfare homes were closed down, leaving their inhabitants homeless. By the late 1990s, many new homes had been set up to accommodate abandoned children. In 1999, the Ministry of Civil Affairs estimated the number of abandoned children in welfare homes at 66,000.

According to the Ministry of Civil Affairs, China had approximately 2,000 shelters and 20,000 social workers to aid approximately 3 million homeless people in 2014. From 2017 to 2019, the government of Guangdong Province assisted 5,388 homeless people in reuniting with relatives elsewhere in China; over that three-year period, the Guangdong government assisted more than 150,000 people in total.

In 2020, in the wake of the COVID-19 pandemic, the Chinese Ministry of Civil Affairs announced several actions of the Central Committee in response to homelessness, including increasing support services and reuniting homeless people with their families. In Wuhan, the situation for homeless people was particularly bad, as the lockdown made it impossible for homeless migrants to return to other parts of the country. The Wuhan Civil Affairs Bureau set up 69 shelters in the city to house 4,843 people.
India
The Universal Declaration of Human Rights defines the homeless as those who do not live in a regular residence due to lack of adequate housing, safety, and availability. The United Nations Economic and Social Council Statement has a broader definition of homelessness: "When we are talking about housing, we are not just talking about four walls and a roof. The right to adequate housing is about security of tenure, affordability, access to services and cultural adequacy. It is about protection from forced eviction and displacement, fighting homelessness, poverty and exclusion." India defines the homeless as those who do not live in census houses, but rather stay on pavements, roadsides, railway platforms, staircases, temples, streets, in pipes, or other open spaces. There are 1.77 million homeless people in India, or 0.15% of the country's total population, according to the 2011 census, consisting of single men, women, mothers, the elderly, and the disabled. However, it is argued that the numbers are far greater than accounted for by the point-in-time method. For example, while the 2011 census counted 46,724 homeless individuals in Delhi, the Indo-Global Social Service Society counted 88,410, and another organization, the Delhi Development Authority, counted 150,000. Furthermore, there is a high proportion of mentally ill people and street children in the homeless population. There are 18 million street children in India, the largest number of any country in the world, with 11 million being urban. Finally, more than three million men and women are homeless in India's capital city of New Delhi; the same population in Canada would make up approximately 30 electoral districts. A family of four members has an average of five homeless generations in India.

There is a shortage of 18.78 million houses in the country. The total number of houses has increased from 52.06 million to 78.48 million (as per the 2011 census). However, the country ranked as only the 124th wealthiest country in the world as of 2003. More than 90 million people in India make less than US$1 per day, setting them below the global poverty threshold. The ability of the Government of India to tackle urban homelessness and poverty may be affected in the future by both external and internal factors. The number of people living in slums in India has more than doubled in the past two decades and now exceeds the entire population of Britain, the Indian Government has announced. About 78 million people in India live in slums and tenements, and 17% of the world's slum dwellers reside in India. Subsequent to the release of Slumdog Millionaire in 2008, Mumbai became a slum tourism destination, where homeless people and slum dwellers alike could be openly viewed by tourists.
Israel
Homelessness in Israel is a phenomenon that mostly developed after the mid-1980s. Homelessness increased following the wave of Soviet immigration in 1991. As many as 70 percent of homeless people in Tel Aviv are immigrants from the former Soviet Union, nearly all of them men. According to homeless shelter founder Gilad Harish, "when the recession hit Israel in the early 90s, the principle of last in, first out kicked in, and many Russian immigrants lost their jobs. Being new to the country, they didn't have a strong family support system to fall back on like other Israelis do. Some ended up on the street with nowhere to go."

The number of homeless people in Israel grew in the 2000s, and the Association for Civil Rights in Israel claimed that the authorities were ignoring the issue. Some 2,000 families in Israel lose their homes every year after defaulting on their mortgage loans. However, a law amendment passed in 2009 protects the rights of mortgage debtors and ensures that they are not evicted after failing to meet mortgage payments. The amendment is part of a wider reform in the law, which came in the wake of a lengthy battle by the Association for Civil Rights in Israel and other human rights groups.

In 2007, the number of homeless youth was on the rise. More than 25% of all homeless youth in 2007 were girls, compared to 15% in 2004. A report by Elem, a non-profit organization that helps youth at risk, pointed to a 5% rise in the number of youths either homeless or wandering the streets late at night, whether because their parents worked or due to strained relations at home. The organization estimated that in 2007 it provided programs or temporary shelter to roughly 32,000 youths in some 30 locations countrywide.

In 2014, the number of homeless individuals in Israel was estimated at 1,831, about 600 of whom were living on the streets of Tel Aviv. This makes up 0.02% of the country's population, a low figure compared to other developed nations. In July 2015, the Welfare Ministry estimated the number of homeless to be between 800 and 900, including 450 receiving services and treatment from their municipalities but continuing to live on the streets; Elem claimed the true figure was much higher. In December 2015, a large study by the Welfare Ministry found that 2,300 people in Israel were homeless.

Homeless people in Israel are entitled to a monthly government stipend of NIS 1,000. In addition, there are both state-run homeless shelters operated by the Welfare Ministry and privately run shelters.
Adi Nes, an Israeli photographer, has brought public attention to the issue by taking pictures of Israel's homeless.
Japan
Homelessness in Japan (ホームレス, 浮浪者) is a social issue primarily affecting middle-aged and elderly males. Homelessness is thought to have peaked in the 1990s as a consequence of the collapse of the Japanese asset price bubble, and has largely fallen since then.
According to the "Special Act in regards to Supporting the Autonomy of the Homeless Population" (Japanese: ホームレスの自立の支援等に関する特別措置法), the term "homeless" is defined as "those who utilize city parks, river banks, roads, train stations, and other facilities as their place of stay in order to live their daily lives".

Names for the homeless in Japan include hōmuresu (ホームレス, from the English "homeless"), furousha (浮浪者, meaning "wandering person"), kojiki (乞食, meaning "beggar"), and runpen (ルンペン, from the German Lumpen). More recently, nojukusha (野宿者, "person who sleeps outside") and nojuku roudousha (野宿労働者, "laborer who sleeps outside") have been used to avoid the negative connotations associated with the word "homeless".
Philippines
There are approximately 4.5 million homeless people in the Philippines, about 3 million of them in Manila.
Europe
One fifth of the total population of the European Union – totalling 91.4 million people – is still at risk of poverty or social exclusion, and access to housing remains difficult for many Europeans. According to a Eurostat survey, three in 100 people say they have at some point had to live with relatives on a temporary basis, while one in 100 people say they have lived on the street, in emergency or temporary accommodation, or in a place not suitable for housing.
Homelessness in Denmark (6,431)
Homelessness in Finland (4,300)
Homelessness in France (300,000)
Homelessness in Germany (678,000)
Homelessness in Greece (40,000)
Homelessness in Hungary (30,000)
Homelessness in Ireland (8,313)
Homelessness in the Netherlands (39,300)
Homelessness in Portugal (8,209)
Homelessness in Spain (40,000)
Homelessness in Sweden (34,000)
Switzerland
Homelessness in Switzerland is a recognized social issue; however, there are few estimates of the number of Swiss people affected, and homelessness is less visible in Switzerland than in many other Western countries. The majority of homeless people in Geneva are Swiss or French, with a minority from other countries.

One Swiss study found that 1.6 percent of all patients admitted to psychiatric wards were homeless. The study reported that social factors and psychopathology contribute independently to the risk of homelessness.

In 2014, Swiss authorities reportedly began allowing homeless people to sleep in fallout shelters built during the Cold War. There are a number of centers providing food for the homeless, including the Suneboge community center.
United Kingdom
Homelessness across the UK is a devolved matter, resulting in different legislation, frameworks, and even definitions from country to country.
Since the late 1990s, housing policy has been a devolved matter, and state support for homeless people, together with legal rights in housing, has therefore diverged to a certain degree. A national service, called Streetlink, was established in 2012 to help members of the public obtain near-immediate assistance for specific rough sleepers, with the support of the Government (as housing is a devolved matter, the service currently only extends to England).
The annual number of homeless households in England peaked in 2003–04 at 135,420 before falling to a low of 40,020 in 2009–10. In 2017–18, there were 56,600 homeless households, which was about 60 per cent below the 2003–04 peak and about 40 per cent higher than the 2009–10 low. The UK has more than 120,000 children in temporary accommodation, a number which has increased from 69,050 children in 2010.

In 2007 the official figures for England were that an average of 498 people slept rough each night, with 248 of those in London. Homelessness in England has been rising since 2010: by 2016 it was estimated that the number sleeping rough had more than doubled since 2010, and the National Audit Office reported that between 2010 and 2017 there was a 60% rise in households living in temporary accommodation and a rise of 134% in rough sleepers. An estimated 4,751 people bedded down outside overnight in England in 2017, up 15% on the previous year. The housing charity Shelter, using data from four sets of official 2016 statistics, calculated that 254,514 people in England were homeless.

The Homelessness Reduction Act 2017 places a new duty on local authorities in England to assist people threatened with homelessness within 56 days and to assess, prevent and relieve homelessness for all eligible applicants, including single homeless people, from April 2018. Before the 2017 HRA, homeless households were defined and measured as those who were owed a main homelessness duty by local authorities. Since 2018, the definition of homeless households has broadened, as households are now owed a new relief duty and a prevention duty. The main homelessness duty definition has not been changed by the 2017 HRA; however, these households are now only owed a main duty if their homelessness has not been successfully prevented or relieved. In 2019–20, 288,470 households were owed the new prevention or relief duties, four times the number of households owed the main duty in 2017–18, prior to implementation of the Homelessness Reduction Act.

The picture in Scotland is considerably different, with law that entitles everyone to a roof over their head if they are homeless. This accommodation is often somewhere temporary until something permanent becomes available, though over the course of 2022 this will change, reducing the use of temporary accommodation, in line with the Homeless and Rough Sleeping Action Group (HARSAG) recommendations. Currently, people spend an average of 199 days (April 2020 to March 2021) in temporary accommodation before being housed somewhere permanent. Most recently updated in October 2020, Scotland's Ending Homelessness Together action plan aims to eradicate homelessness; it is anticipated that with this plan, alongside a focus on prevention and local authorities working with the third sector on plans known as Rapid Rehousing Transition Plans, people will no longer be homeless for any length of time.
In terms of figures, in 2020/21 there were 42,149 people in homeless households in Scotland – 30,345 adults and 11,804 children. This was a drop of 9% from the previous year, though it is unclear whether this was partly due to statistics being collected differently during the start of the pandemic.
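The headline percentages quoted above can be recomputed directly from the household counts; the sketch below is a minimal arithmetic check (plain Python, with variable names of my own choosing), and shows that the published "60 per cent" and "40 per cent" figures are rounded.

```python
# Recompute the England homeless-household statistics quoted above.
peak_2003_04 = 135_420       # 2003-04 peak
low_2009_10 = 40_020         # 2009-10 low
households_2017_18 = 56_600  # 2017-18 figure

below_peak = (peak_2003_04 - households_2017_18) / peak_2003_04 * 100
above_low = (households_2017_18 - low_2009_10) / low_2009_10 * 100
print(f"{below_peak:.0f}% below the peak")  # ~58%, quoted as "60 per cent below"
print(f"{above_low:.0f}% above the low")    # ~41%, quoted as "40 per cent higher"

# Scotland, 2020/21: adults plus children should equal the headline total.
adults, children = 30_345, 11_804
assert adults + children == 42_149
```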
North America
Canada
United States
After Franklin D. Roosevelt took over the presidency from Herbert Hoover in 1933, he oversaw passage of the New Deal, which greatly expanded social welfare, including providing funds to build public housing; this marked the end of the Great Depression. However, the number of homeless people grew in the 1980s, when Ronald Reagan decimated the public housing budget, including the federally funded affordable housing production put in place by the New Deal. By the mid-1980s, there was a dramatic increase in family homelessness. Tied into this was an increasing number of impoverished and runaway children, teenagers, and young adults, which created a new substratum of the homeless population (street children or street youth).

In 2015, the United States reported that there were 564,708 homeless people within its borders, one of the higher reported figures worldwide.

Housing First is an initiative to help homeless people reintegrate into society and move out of homeless shelters. It was initiated by the federal government's Interagency Council on Homelessness, which asks cities to come up with a plan to end chronic homelessness. The underlying belief is that if homeless people are given independent housing to start with, along with proper social supports, then there would be no need for emergency homeless shelters, which it considers a good outcome; however, this is a controversial position. There is evidence that the Housing First program works more efficiently than Treatment First programs. Studies show that the stability of housing through the Housing First program encourages homeless people to focus on other struggles they are facing, such as substance abuse. Meanwhile, Treatment First programs promote an "all or nothing" approach which requires clients to participate in programs applicable to their struggles as a condition for housing assistance. Treatment First utilizes a less individualistic approach than Housing First, and solutions are created under one standard rather than fitted to each client's specific needs.
In 2009 it was estimated that one out of 50 children, or 1.5 million children, in the United States would experience some form of homelessness each year.

In 2010 in New York City, where there were over 36,000 homeless people in 2009, a mobile video exhibit in the streets showed a homeless person on a screen and asked onlookers and passersby to text a message to him with their cellphones; they could also donate money by cellphone to the organization Pathways to Housing. In September 2010, it was reported that the Housing First Initiative had significantly reduced the chronically homeless single-person population in Boston, Massachusetts, although homeless families were still increasing in number. Some shelters were reducing the number of beds due to lowered numbers of homeless people, and some emergency shelter facilities were closing, especially the emergency Boston Night Center. In 2011, the Department of Veterans Affairs' Supportive Services for Veteran Families initiative (SSVF) began funding private non-profit organizations and consumer cooperatives to provide supportive services to very low-income veteran families living in or transitioning to permanent housing.

In 2019, in an interview with CBS News, scholar Sara Goldrick-Rab said that her study on college student homelessness found that "[n]early one in ten college students said they were homeless in the last year, meaning they had at least one night where they did not know where they were going to sleep."
Puerto Rico
According to a count by the Puerto Rico Department of the Family, in January 2017 there were 3,501 homeless persons in the territory. The study shows that 26% of this population live in the capital, San Juan; other municipalities' shares were Ponce with 6.3%, Arecibo with 6%, Caguas with 5.3%, and Mayagüez with 4.7%. Results from the study determined that 76% of the homeless population were men and 24% were women, and that both the men and the women were, on average, 40 years old. This steadily increasing population might have grown more drastically as a result of Hurricane María, which caused over 90 billion dollars in damage to the island of Puerto Rico.

Data provided by the Department of Community Social Development of San Juan indicates that in 1988 the number of homeless people in the municipality was 368, while in 2017 there were about 877 persons without a home. While the average age for the overall homeless population is 40 years for both women and men, in San Juan the median is 48 years for men and 43 years for women. Other data showed that more than 50% have university-level education. It also revealed that 35% of men and 25% of women have relapsed more than four times after unsuccessful attempts to reinsert themselves socially. The reasons given for living on the street are varied, the most common being drug abuse (30.6%), family problems (22.4%), and financial or economic problems (15.0%), with others including unemployment, mental health problems, domestic violence, evictions, and lack of support when released from prison.
Oceania
Australia
In Australia the Supported Accommodation Assistance Program (SAAP) was a joint Commonwealth and state government program which provided funding for more than 1,200 organizations aimed at assisting homeless people, those in danger of becoming homeless, and women and children escaping domestic violence. These organizations provided accommodation such as refuges, shelters, and half-way houses, and offered a range of supported services. The Commonwealth assigned over $800 million between 2000 and 2005 for the continuation of SAAP. The program, governed by the Supported Accommodation Assistance Act 1994, specified that "the overall aim of SAAP is to provide transitional supported accommodation and related support services, in order to help people who are homeless to achieve the maximum possible degree of self-reliance and independence". This legislation was established to help the homeless people of the nation and help rebuild the lives of those in need, and the cooperation of the states also enhanced the meaning of the legislation and demonstrated their desire to improve the nation as best they can. In 2011, the Specialist Homelessness Services (SHS) program replaced the SAAP program.
Indonesia
Homelessness in Indonesia is the condition of people in the country lacking a stable and appropriate place of housing. The number of homeless people in Indonesia is estimated at up to 3 million people, over 28,000 of them in Jakarta alone. A number of terms are used to describe homeless people in Indonesia, including tunawisma, which is used by the government, and gelandangan, meaning "tramp".

Squatters and street homeless people are often targeted in police raids, on the grounds that homeless people "disturb the attractiveness of the city". One cause of homelessness in Indonesia is forced eviction: according to researchers, between 2000 and 2005 over 92,000 people were forcefully evicted from their homes.
New Zealand
Homelessness in New Zealand has been linked to the general issue of lack of suitable housing. The homeless population is generally measured through the country's census and by universities and other academic centres. In 2009, urban homelessness (rough sleepers or improvised dwellings) was estimated at less than 300 people, while rural homelessness (improvised dwellings) was estimated at between 500 and 1,000. An additional 8,000–20,000 live in "temporary accommodation unsuited for long-term habitation (caravans, campgrounds, substandard housing and boarding houses)". Homelessness in New Zealand has traditionally been reduced by the provision of state housing, similar to Germany and other developed countries. Statistical authorities in New Zealand have expanded their definition of homelessness to include people living in improvised shelters, people staying in camping grounds/motor camps, and people sharing accommodation with someone else's household.

The issue is believed to have become increasingly visible in recent years. Media in New Zealand have published accusatory accounts of the presence of homeless people in public spaces, positioning homeless men as disruptive threats, though community members have shown support through writing opinion pieces.

In late January 2019, the New York Times reported rising housing prices to be a major factor in increasing homelessness in New Zealand, so that "smaller markets like Tauranga, a coastal city on the North Island with a population of 128,000, had seen an influx of people who had left Auckland in search of more affordable housing. Average property values in Tauranga had risen to $497,000 from $304,000 in the last five years, and Demographia now rated it among the 10 least affordable cities in the world — along with famously expensive locales such as Hong Kong, San Francisco, Sydney and Vancouver, British Columbia."

In mid-August 2019, Associate Housing Minister Kris Faafoi and Social Development Minister Carmel Sepuloni announced that the Government would launch a NZ$54 million program to tackle homelessness in New Zealand. This includes investing $31 million over the next four years for 67 intensive case managers and navigators to work with homeless people, and a further $16 million for the Sustaining Tenancies Programme. This funding complements the Government's Housing First programme.
Russia and the USSR
After the abolition of serfdom in Russia in 1861, major cities experienced a large influx of former peasants who sought jobs as industrial workers in the rapidly developing Russian industry. These people often lived in harsh conditions, sometimes renting a room shared between several families, and there were also large numbers of people without any shelter. Immediately after the October Revolution a special program of "compression" (уплотнение) was introduced: people who had no shelter were settled into the flats of those who had large (four-, five-, or six-room) flats, with only one room left to the previous owners, and the flat was declared state property. This led to a large number of shared flats where several families lived simultaneously. Nevertheless, the problem of complete homelessness was mostly solved, as anybody could apply for a room or a place in a dormitory (the number of shared flats steadily decreased after a large-scale residential building program was implemented starting in the 1960s).
By 1922 there were at least 7 million homeless children in Russia as a result of nearly a decade of devastation from World War I and the Russian Civil War, which led to the creation of a large number of orphanages. By the 1930s the USSR declared the abolition of homelessness, and every citizen was obliged to have a propiska – a place of permanent residency. Nobody could be stripped of their propiska without a substitute, nor could they give it up without confirmed permission (called an "order") to register in another place. If someone wanted to move to another city or expand their living area, they had to find a partner who wanted to mutually exchange flats. The right to shelter was secured in the Soviet constitution </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Glycogen storage disease type V and can you explain it in depth? [/INST] Glycogen storage disease type V (GSD5, GSD-V), also known as McArdle's disease, is a metabolic disorder, one of the metabolic myopathies, and more specifically a muscle glycogen storage disease, caused by a deficiency of myophosphorylase. Its incidence is reported as one in 100,000, roughly the same as glycogen storage disease type I.

The disease was first reported in 1951 by Dr. Brian McArdle of Guy's Hospital, London.
Signs and symptoms
The onset of this disease is usually noticed in childhood, but it is often not diagnosed until the third or fourth decade of life. Symptoms include exercise intolerance with muscle pain, early fatigue, painful cramps, and an inappropriately rapid heart rate response to exercise, and may include myoglobin in the urine (often provoked by a bout of exercise). "In McArdle's, our heart rate tends to increase in what is called an inappropriate response. That is, after the start of exercise it increases much more quickly than would be expected in someone unaffected by McArdle's." Myoglobinuria may be seen due to the breakdown of skeletal muscle known as rhabdomyolysis, a condition in which muscle cells break down, sending their contents into the bloodstream. In a recent study of 269 GSD-V patients, 39.4% reported no previous episodes of myoglobinuria and 6.8% had normal CK even with fixed muscle weakness, so an absence of myoglobinuria and normal CK should not rule out the possibility of the disease.

As skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity, individuals with GSD-V experience sinus tachycardia, tachypnea, and muscle fatigue and pain during the aforementioned activities and time frames. Patients may exhibit a "second wind" phenomenon, characterized by the patient's better tolerance for aerobic exercise, such as walking and cycling, after approximately 10 minutes. This is attributed to the combination of increased blood flow and the ability of the body to find alternative sources of energy, like fatty acids and proteins. In the long term, patients may exhibit kidney failure due to the myoglobinuria, and with age, patients may exhibit progressively increasing weakness and substantial muscle loss.

Younger people may display unusual symptoms, such as difficulty in chewing, swallowing or utilizing normal oral motor functions. A number of comorbidities were found in GSD-V patients at a higher rate than in the general population, including (but not limited to): hypertension (17%), endocrine diseases (15.7%), musculoskeletal/rheumatic disease (12.9%), hyperuricemia/gout (11.6%), gastrointestinal diseases (11.2%), neurological disease (10%), respiratory disease (9.5%), and coronary artery disease (8.3%). Patients may have hypertrophy, particularly of the legs, and may have lower bone mineral content and density in the legs.

GSD-V patients may experience myogenic hyperuricemia (exercise-induced accelerated breakdown of purine nucleotides in skeletal muscle). The purine nucleotide cycle (PNC) is activated when the ATP (energy) reservoir in muscle cells runs low, and is a part of protein metabolism. AMP (adenosine monophosphate) is converted into IMP (inosine monophosphate) and back via S-AMP (adenylosuccinate) in a circular fashion; the byproducts are fumarate (which goes on to produce ATP via oxidative phosphorylation), ammonia (from the conversion of AMP into IMP), and uric acid (from excess IMP).
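For readers who want the chemistry spelled out, the purine nucleotide cycle described above consists of three standard reactions; this is a textbook-level sketch added for clarity, not taken from this article's sources:

```latex
\begin{aligned}
\text{AMP} + \text{H}_2\text{O} &\longrightarrow \text{IMP} + \text{NH}_3
  && \text{(AMP deaminase; releases ammonia)} \\
\text{IMP} + \text{aspartate} + \text{GTP} &\longrightarrow \text{S-AMP} + \text{GDP} + \text{P}_i
  && \text{(adenylosuccinate synthetase)} \\
\text{S-AMP} &\longrightarrow \text{AMP} + \text{fumarate}
  && \text{(adenylosuccinate lyase)}
\end{aligned}
```

IMP that accumulates beyond what the cycle recycles is degraded via inosine and hypoxanthine to uric acid, which is the link to myogenic hyperuricemia.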
To avoid health complications, GSD-V patients need to get their ATP (energy) primarily from free fatty acids (lipid metabolism) rather than protein metabolism. Over-reliance on protein metabolism can best be avoided by not depleting the ATP reservoir, such as by not pushing through the pain and by not going too fast, too soon. "Be wary of pushing on when you feel pain start. This pain is a result of damaging muscles, and repeated damage will cause problems in the long term. But also this is counterproductive--it will stop you from getting into second wind. By pressing on despite the pain, you start your protein metabolism which then effectively blocks your glucose and fat metabolism. If you ever get into this situation, you need to stop completely for 30 minutes or more and then start the whole process again."

Patients may present at emergency rooms with severe fixed contractures of the muscles and often severe pain. These require urgent assessment for rhabdomyolysis, as in about 30% of cases this leads to acute kidney injury, which, left untreated, can be life-threatening. In a small number of cases compartment syndrome has developed, requiring prompt surgical referral.
Genetics
"GSDV is inherited in an autosomal recessive manner. At conception, each sibling of an affected individual has a 25% chance of being affected, a 50% chance of being a carrier, and a 25% chance of being unaffected and not a carrier."Two autosomal recessive forms of this disease occur, childhood-onset and adult-onset. The gene for myophosphorylase, PYGM (the muscle-type of the glycogen phosphorylase gene), is located on chromosome 11q13. According to the most recent publications, 95 different mutations have been reported. The forms of the mutations may vary between ethnic groups. For example, the R50X (Arg50Stop) mutation (previously referred to as R49X) is most common in North America and western Europe, and the Y84X mutation is most common among central Europeans.The exact method of protein disruption has been elucidated in certain mutations. For example, R138W is known to disrupt to pyridoxal phosphate binding site. In 2006, another mutation (c.13_14delCT) was discovered which may contribute to increased symptoms in addition to the common Arg50Stop mutation.
Myophosphorylase
Structure
The myophosphorylase structure consists of 842 amino acids, and the molecular weight of its unprocessed precursor is 97 kDa. The three-dimensional structure has been determined for this protein. The interactions of several amino acids in myophosphorylase's structure are known: Ser-14 is modified by phosphorylase kinase during activation of the enzyme, and Lys-680 is involved in binding pyridoxal phosphate, the active form of vitamin B6, a cofactor required by myophosphorylase. By similarity, other sites have been estimated: Tyr-76 binds AMP, Cys-109 and Cys-143 are involved in subunit association, and Tyr-156 may be involved in allosteric control.
Function
Myophosphorylase is the form of glycogen phosphorylase found in muscle. It catalyses the following reaction:

((1→4)-alpha-D-glucosyl)(n) + phosphate = ((1→4)-alpha-D-glucosyl)(n−1) + alpha-D-glucose 1-phosphate
Failure of this enzyme ultimately impairs the operation of ATPases. This is due to the lack of normal pH fall during exercise, which impairs the creatine kinase equilibrium and exaggerates the rise of ADP.
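The creatine kinase equilibrium referred to here is the phosphocreatine buffer reaction; written out explicitly (a standard biochemistry identity, added for clarity):

```latex
\text{phosphocreatine} + \text{ADP} + \text{H}^{+} \;\rightleftharpoons\; \text{creatine} + \text{ATP}
```

Because H+ is a substrate on the left-hand side, the absence of the usual exercise-induced acidification in GSD-V shifts this equilibrium away from ATP regeneration, which is why ADP rises more than it would in unaffected muscle.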
Pathophysiology
Myophosphorylase is involved in the breakdown of glycogen to glucose for use in muscle. The enzyme removes 1,4 glycosyl residues from outer branches of glycogen and adds inorganic phosphate to form glucose-1-phosphate. Ordinarily, the removal of 1,4 glycosyl residues by myophosphorylase leads to the formation of glucose-1-phosphate during glycogen breakdown; the polar, phosphorylated glucose cannot cross the cell membrane and so is marked for intracellular catabolism. In McArdle's disease, deficiency of myophosphorylase leads to accumulation of intramuscular glycogen and a lack of glucose-1-phosphate for cellular fuel.
Myophosphorylase exists in the active form when phosphorylated. The enzyme phosphorylase kinase plays a role in phosphorylating glycogen phosphorylase to activate it, while another enzyme, protein phosphatase-1, inactivates glycogen phosphorylase through dephosphorylation.
Diagnosis
There are some laboratory tests that may aid in the diagnosis of GSD-V. A muscle biopsy will note the absence of myophosphorylase in muscle fibers, and in some cases periodic acid–Schiff-stained glycogen can be seen with microscopy.

Genetic sequencing of the PYGM gene (which codes for the muscle isoform of glycogen phosphorylase) may be done to determine the presence of gene mutations, determining whether McArdle's is present. This type of testing is considerably less invasive than a muscle biopsy.

The physician can also perform an ischemic forearm exercise test, as described above. Some findings suggest a nonischemic test could be performed with similar results; the nonischemic version of this test would involve not cutting off the blood flow to the exercising arm. Findings consistent with McArdle's disease include a failure of lactate to rise in venous blood and exaggerated ammonia levels. These findings indicate a severe muscle glycolytic block.
Serum lactate may fail to rise in part because of increased uptake via the monocarboxylate transporter (MCT1), which is upregulated in skeletal muscle in McArdle disease. Lactate may be used as a fuel source once converted to pyruvate. Ammonia levels may rise given that ammonia is a by-product of the adenylate kinase pathway, an alternative pathway for ATP production. In this pathway, two ADP molecules combine to make ATP; AMP is deaminated in this process, producing inosine monophosphate (IMP) and ammonia (NH3).

Physicians may also check resting levels of creatine kinase, which are moderately increased in 90% of patients. In some, the level is increased many-fold: a person without GSD-V will have a CK between 60 and 400 IU/L, while a person with the syndrome may have a level of 5,000 IU/L at rest, which may increase to 35,000 IU/L or more with muscle exertion. This can help distinguish McArdle's syndrome from carnitine palmitoyltransferase II deficiency (CPT-II), a lipid-based metabolic disorder which prevents fatty acids from being transported into mitochondria for use as an energy source. Serum electrolytes and endocrine studies (such as thyroid function, parathyroid function and growth hormone levels) will also be completed. Urine studies are required only if rhabdomyolysis is suspected: urine volume, urine sediment and myoglobin levels would be ascertained, and serum myoglobin, creatine kinase, lactate dehydrogenase, electrolytes and renal function would be checked.

Physicians may also conduct an exercise stress test to check for an inappropriately rapid heart rate (sinus tachycardia) in response to exercise. Due to the rare nature of the disease, this response may be misdiagnosed as inappropriate sinus tachycardia (which is a diagnosis of exclusion). The 12-minute walk test (12MWT) can be used to demonstrate the "second wind"; it requires a treadmill (no incline), a heart rate monitor, a stopwatch, a pain scale, and that the patient has rested for 30 minutes prior to the test to ensure that oxidative phosphorylation has stopped.
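The adenylate kinase (myokinase) reaction mentioned above, written out explicitly (a standard identity, added for clarity):

```latex
2\,\text{ADP} \;\rightleftharpoons\; \text{ATP} + \text{AMP}
```

The AMP produced is then deaminated to IMP and ammonia (the first reaction of the purine nucleotide cycle shown earlier), which pulls the equilibrium toward ATP production and explains the exaggerated ammonia rise when glycogenolysis is blocked.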
Treatment
Supervised exercise programs have been shown in small studies to improve exercise capacity by several measures. Oral sucrose treatment (for example, a sports drink with 75 grams of sucrose in 660 ml) taken 30 minutes prior to exercise has been shown to help improve exercise tolerance, including a lower heart rate and a lower perceived level of exertion compared with placebo. A low-dosage treatment with creatine showed a significant improvement in muscle problems compared to placebo in a small clinical study.
History
The deficiency was the first metabolic myopathy to be recognized, when Dr. McArdle described the first case in a 30-year-old man who always experienced pain and weakness after exercise </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Arthus reaction and can you explain it in depth? [/INST] In immunology, the Arthus reaction is a type of local type III hypersensitivity reaction. Type III hypersensitivity reactions are immune complex-mediated, and involve the deposition of antigen/antibody complexes mainly in the vascular walls, serosa (pleura, pericardium, synovium), and glomeruli. This reaction is usually encountered in experimental settings following the injection of antigens.
History
The Arthus reaction was discovered by Nicolas Maurice Arthus in 1903. Arthus repeatedly injected horse serum subcutaneously into rabbits. After four injections, he found that there was edema and that the serum was absorbed slowly. Further injections eventually led to gangrene.
Process
The Arthus reaction involves the in situ formation of antigen/antibody complexes after the intradermal injection of an antigen. If the animal/patient was previously sensitized (has circulating antibody), an Arthus reaction occurs </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Benign tumor and can you explain it in depth? [/INST] A benign tumor is a mass of cells (tumor) that does not invade neighboring tissue or metastasize (spread throughout the body). Compared to malignant (cancerous) tumors, benign tumors generally have a slower growth rate. Benign tumors have relatively well differentiated cells. They are often surrounded by an outer surface (fibrous sheath of connective tissue) or stay contained within the epithelium. Common examples of benign tumors include moles and uterine fibroids.
Some forms of benign tumors may be harmful to health. Benign tumor growth causes a mass effect that can compress neighboring tissues. This can lead to nerve damage, blood flow reduction (ischemia), tissue death (necrosis), or organ damage. The health effects of benign tumor growth may be more prominent if the tumor is contained within an enclosed space such as the cranium, respiratory tract, sinus, or bones. For example, unlike most benign tumors elsewhere in the body, benign brain tumors can be life-threatening. Tumors may exhibit behaviors characteristic of their cell type of origin; as an example, endocrine tumors such as thyroid adenomas and adrenocortical adenomas may overproduce certain hormones.
Many types of benign tumors have the potential to become cancerous (malignant) through a process known as tumor progression. For this reason, and because of other possible harms, some benign tumors are removed by surgery. When removed, benign tumors usually do not return; exceptions to this rule may indicate malignant transformation.
Signs and symptoms
Benign tumors are very diverse; they may be asymptomatic or may cause specific symptoms, depending on their anatomic location and tissue type. They grow outward, producing large, rounded masses which can cause what is known as a "mass effect". This growth can cause compression of local tissues or organs, leading to many effects, such as blockage of ducts, reduced blood flow (ischaemia), tissue death (necrosis) and nerve pain or damage. Some tumors also produce hormones that can lead to life-threatening situations. Insulinomas can produce large amounts of insulin, causing hypoglycemia. Pituitary adenomas can cause elevated levels of hormones such as growth hormone and insulin-like growth factor-1, which cause acromegaly; prolactin; ACTH and cortisol, which cause Cushing's disease; TSH, which causes hyperthyroidism; and FSH and LH. Bowel intussusception can occur with various benign colonic tumors. Cosmetic effects can be caused by tumors, especially those of the skin, possibly causing psychological or social discomfort for the person with the tumor. Vascular tissue tumors can bleed, in some cases leading to anemia.
Causes
PTEN hamartoma syndrome
PTEN hamartoma syndrome encompasses hamartomatous disorders characterized by genetic mutations in the PTEN tumor suppressor gene, including Cowden syndrome, Bannayan–Riley–Ruvalcaba syndrome, Proteus syndrome and Proteus-like syndrome. Absent or dysfunctional PTEN protein allows cells to over-proliferate, causing hamartomas. Cowden syndrome is an autosomal dominant genetic disorder characterized by multiple benign hamartomas (trichilemmomas and mucocutaneous papillomatous papules) as well as a predisposition for cancers of multiple organs including the breast and thyroid. Bannayan–Riley–Ruvalcaba syndrome is a congenital disorder characterized by hamartomatous intestinal polyposis, macrocephaly, lipomatosis, hemangiomatosis and glans penis macules. Proteus syndrome is characterized by nevi, asymmetric overgrowth of various body parts, adipose tissue dysregulation, cystadenomas, adenomas, and vascular malformations.
Familial adenomatous polyposis
Familial adenomatous polyposis (FAP) is a familial cancer syndrome caused by mutations in the APC gene. In FAP, adenomatous polyps are present in the colon. The polyps progress into colon cancer unless removed. The APC gene is a tumor suppressor. Its protein product is involved in many cellular processes. Inactivation of the APC gene leads to the buildup of a protein called β-catenin. This protein activates two transcription factors: T-cell factor (TCF) and lymphoid enhancer factor (LEF). These factors cause the upregulation of many genes involved in cell proliferation, differentiation, migration and apoptosis (programmed cell death), causing the growth of benign tumors.
Tuberous sclerosis complex
Tuberous sclerosis complex (TSC) is an autosomal dominant genetic disorder caused by mutations in the genes TSC1 and TSC2. TSC1 produces the protein hamartin. TSC2 produces the protein tuberin. This disorder presents with many benign hamartomatous tumors including angiofibromas, renal angiomyolipomas, and pulmonary lymphangiomyomatosis. Tuberin and hamartin inhibit the mTOR protein in normal cellular physiology. Inactivation of the TSC tumor suppressors causes an increase in mTOR activity. This leads to the activation of genes and the production of proteins that increase cell growth.
Von Hippel–Lindau disease
Von Hippel–Lindau disease is a dominantly-inherited cancer syndrome that significantly increases the risk of various tumors. This includes benign hemangioblastomas and malignant pheochromocytomas, renal cell carcinomas, pancreatic endocrine tumors, and endolymphatic sac tumors. It is caused by genetic mutations in the Von Hippel–Lindau tumor suppressor gene. The VHL protein (pVHL) is involved in cellular signaling in oxygen starved (hypoxic) cells. One role of pVHL is to cause the cellular degradation of another protein, HIF1α. Dysfunctional pVHL leads to accumulation of HIF1α. This activates several genes responsible for the production of substances involved in cell growth and blood vessel production: VEGF, PDGFβ, TGFα and erythropoietin.
Bone tumors
Benign tumors of bone can be similar macroscopically and require a combination of clinical history with cytogenetic, molecular, and radiologic tests for diagnosis. Three common forms of benign bone tumors are giant cell tumor of bone, osteochondroma, and enchondroma; other forms of benign bone tumors exist but may be less prevalent.
Giant cell tumors
Giant cell tumors of bone frequently occur in the long bone epiphyses of the appendicular skeleton or the sacrum of the axial skeleton. Local growth can cause destruction of neighboring cortical bone and soft tissue, leading to pain and limiting range of motion. The characteristic radiologic finding of giant cell tumors of bone is a lytic lesion that does not have marginal sclerosis of bone. On histology, giant cells of fused osteoclasts are seen as a response to the neoplastic mononucleated cells. Notably, giant cells are not unique to giant cell tumors of bone among benign bone tumors. Molecular characteristics of the neoplastic cells causing giant cell tumors of bone indicate an origin from pluripotent mesenchymal stem cells that adopt preosteoblastic markers. Cytogenetic causes of giant cell tumors of bone involve telomeres. Treatment involves surgical curettage with adjuvant bisphosphonates.
Osteochondroma
Osteochondromas form cartilage-capped projections of bone. Structures such as the marrow cavity and cortical bone of the osteochondroma are contiguous with those of the originating bone. Sites of origin often involve the metaphyses of long bones. While many osteochondromas occur spontaneously, several osteochondromas can occur in the same individual; these may be linked to a genetic condition known as hereditary multiple osteochondromas. An osteochondroma appears on X-ray as a projecting mass that often points away from joints. These tumors stop growing with the closure of the parental bone's growth plates. Failure to stop growing can be indicative of transformation to malignant chondrosarcoma. Treatment is not indicated unless the tumor is symptomatic, in which case surgical excision is often curative.
Enchondroma
Enchondromas are benign tumors of hyaline cartilage. Within a bone, enchondromas are often found in the metaphyses. They can be found in many types of bone, including small bones, long bones, and the axial skeleton. X-ray of an enchondroma shows well-defined borders and a stippled appearance. Presentation of multiple enchondromas is consistent with multiple enchondromatosis (Ollier disease). Treatment of enchondromas involves surgical curettage and grafting.
Benign soft tissue tumors
Lipomas
Lipomas are benign, subcutaneous tumors of fat cells (adipocytes). They are usually painless, slow-growing, and mobile masses that can occur anywhere in the body where there are fat cells, but are typically found on the trunk and upper extremities. Although lipomas can develop at any age, they more commonly appear between the ages of 40 and 60. Lipomas affect about 1% of the population, with no documented sex bias, and about 1 in every 1,000 people will have a lipoma within their lifetime. The cause of lipomas is not well defined. Genetic or inherited causes play a role in around 2–3% of patients. In individuals with inherited familial syndromes such as Proteus syndrome or Familial multiple lipomatosis, it is common to see multiple lipomas across the body; these syndromes are also associated with specific symptoms and sub-populations. Mutations in chromosome 12 have been identified in around 65% of lipoma cases. Lipomas have also been shown to be increased in those with obesity, hyperlipidemia, and diabetes mellitus.

Lipomas are usually diagnosed clinically, although imaging (ultrasound, computed tomography, or magnetic resonance imaging) may be utilized to assist with the diagnosis of lipomas in atypical locations. The main treatment for lipomas is surgical excision, after which the tumor is examined with histopathology to confirm the diagnosis. The prognosis for benign lipomas is excellent, and recurrence after excision is rare, though it may occur if the removal was incomplete.
Mechanism
Benign vs malignant
One of the most important factors in classifying a tumor as benign or malignant is its invasive potential. If a tumor lacks the ability to invade adjacent tissues or to spread to distant sites by metastasizing, it is benign, whereas invasive or metastatic tumors are malignant. For this reason, benign tumors are not classed as cancer. Benign tumors will grow in a contained area, usually encapsulated in a fibrous connective tissue capsule. The growth rates of benign and malignant tumors also differ: benign tumors generally grow more slowly than malignant tumors. Although benign tumors pose a lower health risk than malignant tumors, both can be life-threatening in certain situations. There are many general characteristics which apply to either benign or malignant tumors, but sometimes one type may show characteristics of the other. For example, benign tumors are mostly well differentiated and malignant tumors are often undifferentiated; however, undifferentiated benign tumors and differentiated malignant tumors can occur. Although benign tumors generally grow slowly, cases of fast-growing benign tumors have also been documented. Some malignant tumors are mostly non-metastatic, as in the case of basal-cell carcinoma. CT and chest radiography can be useful diagnostic exams for visualizing a benign tumor and differentiating it from a malignant tumor. The smaller the tumor on a radiograph, the more likely it is to be benign, as 80% of lung nodules less than 2 cm in diameter are benign. Most benign nodules are smooth radiopaque densities with clear margins, but these are not exclusive signs of benign tumors.
Multistage carcinogenesis
Tumors are formed by carcinogenesis, a process in which cellular alterations lead to the formation of cancer. Multistage carcinogenesis involves the sequential genetic or epigenetic changes to a cell's DNA, where each step produces a more advanced tumor. It is often broken down into three stages: initiation, promotion and progression, and several mutations may occur at each stage. Initiation is where the first genetic mutation occurs in a cell. Promotion is the clonal expansion (repeated division) of this transformed cell into a visible tumor that is usually benign. Following promotion, progression may take place, where more genetic mutations are acquired in a sub-population of tumor cells. Progression changes the benign tumor into a malignant tumor. A prominent and well-studied example of this phenomenon is the tubular adenoma, a common type of colon polyp which is an important precursor to colon cancer. The cells in tubular adenomas, like most tumors that frequently progress to cancer, show certain abnormalities of cell maturation and appearance collectively known as dysplasia. These cellular abnormalities are not seen in benign tumors that rarely or never turn cancerous, but are seen in other pre-cancerous tissue abnormalities which do not form discrete masses, such as pre-cancerous lesions of the uterine cervix.
Diagnosis
Classification
Benign neoplasms are typically, but not always, composed of cells which bear a strong resemblance to a normal cell type in their organ of origin. These tumors are named for the cell or tissue type from which they originate. The suffix "-oma" (but not -carcinoma, -sarcoma, or -blastoma, which are generally cancers) is applied to indicate a benign tumor. For example, a lipoma is a common benign tumor of fat cells (lipocytes), and a chondroma is a benign tumor of cartilage-forming cells (chondrocytes). Adenomas are benign tumors of gland-forming cells, and are usually specified further by their cell or organ of origin, as in hepatic adenoma (a benign tumor of hepatocytes, or liver cells). Teratomas contain many cell types such as skin, nerve, brain and thyroid, among others, because they are derived from germ cells. Hamartomas are a group of benign tumors that have relatively normal cellular differentiation but a disorganized tissue architecture.

Exceptions to the nomenclature rules exist for historical reasons; malignant examples include melanoma (a cancer of pigmented skin cells, or melanocytes) and seminoma (a cancer of male reproductive cells).

Benign tumors do not encompass all benign growths </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cardiovascular disease and can you explain it in depth? [/INST] Cardiovascular disease (CVD) is a class of diseases that involve the heart or blood vessels. CVD includes coronary artery diseases (CAD) such as angina and myocardial infarction (commonly known as a heart attack). Other CVDs include stroke, heart failure, hypertensive heart disease, rheumatic heart disease, cardiomyopathy, abnormal heart rhythms, congenital heart disease, valvular heart disease, carditis, aortic aneurysms, peripheral artery disease, thromboembolic disease, and venous thrombosis.

The underlying mechanisms vary depending on the disease. It is estimated that dietary risk factors are associated with 53% of CVD deaths. Coronary artery disease, stroke, and peripheral artery disease involve atherosclerosis. This may be caused by high blood pressure, smoking, diabetes mellitus, lack of exercise, obesity, high blood cholesterol, poor diet, excessive alcohol consumption, and poor sleep, among other things. High blood pressure is estimated to account for approximately 13% of CVD deaths, while tobacco accounts for 9%, diabetes 6%, lack of exercise 6%, and obesity 5%. Rheumatic heart disease may follow untreated strep throat.

It is estimated that up to 90% of CVD may be preventable. Prevention of CVD involves improving risk factors through healthy eating, exercise, avoidance of tobacco smoke and limiting alcohol intake. Treating risk factors, such as high blood pressure, blood lipids and diabetes, is also beneficial. Treating people who have strep throat with antibiotics can decrease the risk of rheumatic heart disease. The use of aspirin in people who are otherwise healthy is of unclear benefit.

Cardiovascular diseases are the leading cause of death worldwide except in Africa. Together CVDs resulted in 17.9 million deaths (32.1%) in 2015, up from 12.3 million (25.8%) in 1990. Deaths, at a given age, from CVD are more common and have been increasing in much of the developing world, while rates have declined in most of the developed world since the 1970s. Coronary artery disease and stroke account for 80% of CVD deaths in males and 75% of CVD deaths in females. Most cardiovascular disease affects older adults. In the United States, 11% of people between 20 and 40 have CVD, while 37% of those between 40 and 60, 71% of people between 60 and 80, and 85% of people over 80 have CVD. The average age of death from coronary artery disease in the developed world is around 80, while it is around 68 in the developing world. CVD is typically diagnosed seven to ten years earlier in men than in women.
Types
There are many cardiovascular diseases involving the blood vessels. They are known as vascular diseases.
Coronary artery disease (also known as coronary heart disease and ischemic heart disease)
Peripheral arterial disease – disease of blood vessels that supply blood to the arms and legs
Cerebrovascular disease – disease of blood vessels that supply blood to the brain (includes stroke)
Renal artery stenosis
Aortic aneurysm
There are also many cardiovascular diseases that involve the heart.
Cardiomyopathy – diseases of cardiac muscle
Hypertensive heart disease – diseases of the heart secondary to high blood pressure (hypertension)
Heart failure – a clinical syndrome caused by the inability of the heart to supply sufficient blood to the tissues to meet their metabolic requirements
Pulmonary heart disease – a failure of the right side of the heart with respiratory system involvement
Cardiac dysrhythmias – abnormalities of heart rhythm
Inflammatory heart disease
Endocarditis – inflammation of the inner layer of the heart, the endocardium. The structures most commonly involved are the heart valves.
Inflammatory cardiomegaly
Myocarditis – inflammation of the myocardium, the muscular part of the heart, caused most often by viral infection and less often by bacterial infections, certain medications, toxins, and autoimmune disorders. It is characterized in part by infiltration of the heart by lymphocyte and monocyte types of white blood cells.
Eosinophilic myocarditis - inflammation of the myocardium caused by pathologically activated eosinophilic white blood cells. This disorder differs from myocarditis in its causes and treatments.
Valvular heart disease
Congenital heart disease – heart structure malformations existing at birth
Rheumatic heart disease – damage to the heart muscle and valves caused by rheumatic fever, which follows infection with Streptococcus pyogenes, a group A streptococcus
Risk factors
There are many risk factors for heart diseases: age, sex, tobacco use, physical inactivity, non-alcoholic fatty liver disease, excessive alcohol consumption, unhealthy diet, obesity, genetic predisposition and family history of cardiovascular disease, raised blood pressure (hypertension), raised blood sugar (diabetes mellitus), raised blood cholesterol (hyperlipidemia), undiagnosed celiac disease, psychosocial factors, poverty and low educational status, air pollution, and poor sleep. While the individual contribution of each risk factor varies between different communities or ethnic groups, the overall contribution of these risk factors is very consistent. Some of these risk factors, such as age, sex or family history/genetic predisposition, are immutable; however, many important cardiovascular risk factors are modifiable by lifestyle change, social change, and drug treatment (for example, treatment of hypertension, hyperlipidemia, and diabetes). People with obesity are at increased risk of atherosclerosis of the coronary arteries.
Genetics
Cardiovascular disease in a person's parents increases their risk by about threefold, and genetics is an important risk factor for cardiovascular diseases. Genetic cardiovascular disease can occur either as a consequence of single-variant (Mendelian) or polygenic influences. There are more than 40 inherited cardiovascular diseases that can be traced to a single disease-causing DNA variant, although these conditions are rare. Most common cardiovascular diseases are non-Mendelian and are thought to be due to hundreds or thousands of genetic variants (known as single nucleotide polymorphisms), each associated with a small effect.
Age
Age is the most important risk factor in developing cardiovascular or heart diseases, with approximately a tripling of risk with each decade of life. Coronary fatty streaks can begin to form in adolescence. It is estimated that 82 percent of people who die of coronary heart disease are 65 and older. Simultaneously, the risk of stroke doubles every decade after age 55. Multiple explanations are proposed to explain why age increases the risk of cardiovascular/heart diseases. One of them relates to serum cholesterol level. In most populations, the serum total cholesterol level increases as age increases. In men, this increase levels off around age 45 to 50 years. In women, the increase continues sharply until age 60 to 65 years. Aging is also associated with changes in the mechanical and structural properties of the vascular wall, which leads to the loss of arterial elasticity and reduced arterial compliance and may subsequently lead to coronary artery disease.
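To make the quoted growth rates concrete, the sketch below treats "risk triples per decade" and "stroke risk doubles per decade after 55" as exact multiplicative rules. This is a minimal illustration of the arithmetic only; the exponential form, the reference ages, and the function names are assumptions, not part of any published risk model.

```python
# Illustrative arithmetic only: treats the quoted per-decade multipliers
# as exact exponential rules, which is a simplification.

def relative_cvd_risk(age: float, reference_age: float = 40.0) -> float:
    """Relative CVD risk versus a reference age, assuming risk triples per decade."""
    decades = (age - reference_age) / 10.0
    return 3.0 ** decades

def relative_stroke_risk(age: float) -> float:
    """Relative stroke risk versus age 55, assuming a doubling each decade after 55."""
    decades = max(0.0, age - 55.0) / 10.0
    return 2.0 ** decades

for a in (40, 50, 60, 70, 80):
    print(f"age {a}: CVD x{relative_cvd_risk(a):.1f}, stroke x{relative_stroke_risk(a):.1f}")
```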
Sex
Men are at greater risk of heart disease than pre-menopausal women. Once past menopause, it has been argued that a woman's risk is similar to a man's, although more recent data from the WHO and UN disputes this. If a female has diabetes, she is more likely to develop heart disease than a male with diabetes. Coronary heart diseases are 2 to 5 times more common among middle-aged men than women. In a study done by the World Health Organization, sex contributes to approximately 40% of the variation in sex ratios of coronary heart disease mortality. Another study reports similar results, finding that sex differences explain nearly half the risk associated with cardiovascular diseases. One of the proposed explanations for sex differences in cardiovascular diseases is hormonal difference. Among women, estrogen is the predominant sex hormone. Estrogen may have protective effects on glucose metabolism and the hemostatic system, and may have a direct effect in improving endothelial cell function. The production of estrogen decreases after menopause, and this may change female lipid metabolism toward a more atherogenic form by decreasing the HDL cholesterol level while increasing LDL and total cholesterol levels. Among men and women, there are differences in body weight, height, body fat distribution, heart rate, stroke volume, and arterial compliance. In the very elderly, age-related large artery pulsatility and stiffness are more pronounced among women than men. This may be caused by women's smaller body size and arterial dimensions, which are independent of menopause.
Tobacco
Cigarettes are the major form of smoked tobacco. Risks to health from tobacco use result not only from direct consumption of tobacco, but also from exposure to second-hand smoke. Approximately 10% of cardiovascular disease is attributed to smoking; however, people who quit smoking by age 30 have almost as low a risk of death as never smokers.
Physical inactivity
Insufficient physical activity (defined as less than 5 x 30 minutes of moderate activity per week, or less than 3 x 20 minutes of vigorous activity per week) is currently the fourth leading risk factor for mortality worldwide. In 2008, 31.3% of adults aged 15 or older (28.2% men and 34.4% women) were insufficiently physically active.
The risk of ischemic heart disease and diabetes mellitus is reduced by almost a third in adults who participate in 150 minutes of moderate physical activity each week (or equivalent). In addition, physical activity assists weight loss and improves blood glucose control, blood pressure, lipid profile and insulin sensitivity. These effects may, at least in part, explain its cardiovascular benefits.
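The activity thresholds quoted above (5 × 30 minutes of moderate activity, or 3 × 20 minutes of vigorous activity, per week) translate directly into a simple check. The sketch below is illustrative and assumes activity counts as sufficient when either threshold is met; that either/or reading, and the function name, are assumptions rather than an official definition.

```python
def is_insufficiently_active(moderate_min_per_week: float,
                             vigorous_min_per_week: float) -> bool:
    """Flag insufficient physical activity per the thresholds quoted above.

    Assumes activity is sufficient if either threshold is met:
    5 x 30 min moderate (150 min) or 3 x 20 min vigorous (60 min) per week.
    """
    meets_moderate = moderate_min_per_week >= 150
    meets_vigorous = vigorous_min_per_week >= 60
    return not (meets_moderate or meets_vigorous)

print(is_insufficiently_active(120, 0))  # True: below both thresholds
print(is_insufficiently_active(150, 0))  # False: meets the moderate threshold
```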
Diet
High dietary intakes of saturated fat, trans fats and salt, and low intake of fruits, vegetables and fish are linked to cardiovascular risk, although whether all these associations are causal is disputed. The World Health Organization attributes approximately 1.7 million deaths worldwide to low fruit and vegetable consumption. Frequent consumption of high-energy foods, such as processed foods that are high in fats and sugars, promotes obesity and may increase cardiovascular risk. The amount of dietary salt consumed may also be an important determinant of blood pressure levels and overall cardiovascular risk. There is moderate-quality evidence that reducing saturated fat intake for at least two years reduces the risk of cardiovascular disease. High trans-fat intake has adverse effects on blood lipids and circulating inflammatory markers, and elimination of trans fats from diets has been widely advocated. In 2018 the World Health Organization estimated that trans fats were the cause of more than half a million deaths per year. There is evidence that higher consumption of sugar is associated with higher blood pressure and unfavorable blood lipids, and sugar intake also increases the risk of diabetes mellitus. High consumption of processed meats is associated with an increased risk of cardiovascular disease, possibly in part due to increased dietary salt intake.
Alcohol
The relationship between alcohol consumption and cardiovascular disease is complex, and may depend on the amount of alcohol consumed. There is a direct relationship between high levels of drinking alcohol and cardiovascular disease. Drinking at low levels without episodes of heavy drinking may be associated with a reduced risk of cardiovascular disease, but there is evidence that associations between moderate alcohol consumption and protection from stroke are non-causal. At the population level, the health risks of drinking alcohol exceed any potential benefits.
Celiac disease
Untreated celiac disease can cause the development of many types of cardiovascular diseases, most of which improve or resolve with a gluten-free diet and intestinal healing. However, delays in recognition and diagnosis of celiac disease can cause irreversible heart damage.
Sleep
A lack of good sleep, in amount or quality, is documented as increasing cardiovascular risk in both adults and teens. Recommendations suggest that infants typically need 12 or more hours of sleep per day, adolescents at least eight or nine hours, and adults seven or eight. About one-third of adult Americans get less than the recommended seven hours of sleep per night, and in a study of teenagers, just 2.2 percent of those studied got enough sleep, many of whom did not get good-quality sleep. Studies have shown that short sleepers getting less than seven hours of sleep per night have a 10 to 30 percent higher risk of cardiovascular disease. Sleep disorders, such as sleep-disordered breathing and insomnia, are also associated with a higher cardiometabolic risk.
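The sleep figures above can be collected into a small lookup. The age cut-offs of 2 and 18 years in this sketch are assumptions made only to separate the three groups the text names (infants, adolescents, adults); the 10–30 percent figure applies to adults sleeping under seven hours.

```python
def recommended_sleep_hours(age_years: float) -> tuple[float, float]:
    """Recommended nightly sleep range by age group, per the figures quoted above.

    The cut-offs at 2 and 18 years are illustrative assumptions; the text
    only names infants, adolescents, and adults.
    """
    if age_years < 2:
        return (12.0, 24.0)  # infants: "12 or more hours"
    if age_years < 18:
        return (8.0, 9.0)    # adolescents
    return (7.0, 8.0)        # adults

def is_short_sleeper(age_years: float, hours_per_night: float) -> bool:
    minimum, _ = recommended_sleep_hours(age_years)
    return hours_per_night < minimum

# Adults sleeping under 7 hours carry a 10-30% higher cardiovascular risk.
print(is_short_sleeper(30, 6.5))  # True
```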
An estimated 50 to 70 million Americans have insomnia, sleep apnea or other chronic sleep disorders.
In addition, sleep research displays differences in race and class. Short sleep and poor sleep tend to be more frequently reported in ethnic minorities than in whites. African-Americans report experiencing short durations of sleep five times more often than whites, possibly as a result of social and environmental factors. Black children and children living in disadvantaged neighborhoods have much higher rates of sleep apnea.
Socioeconomic disadvantage
Cardiovascular disease affects low- and middle-income countries even more than high-income countries. There is relatively little information regarding social patterns of cardiovascular disease within low- and middle-income countries, but within high-income countries low income and low educational status are consistently associated with greater risk of cardiovascular disease. Policies that have resulted in increased socio-economic inequalities have been associated with greater subsequent socio-economic differences in cardiovascular disease, implying a cause-and-effect relationship. Psychosocial factors, environmental exposures, health behaviours, and health-care access and quality contribute to socio-economic differentials in cardiovascular disease. The Commission on Social Determinants of Health recommended that more equal distributions of power, wealth, education, housing, environmental factors, nutrition, and health care were needed to address inequalities in cardiovascular disease and non-communicable diseases.
Air pollution
Particulate matter has been studied for its short- and long-term exposure effects on cardiovascular disease. Currently, airborne particles under 2.5 micrometers in diameter (PM2.5) are the major focus, in which concentration gradients are used to determine CVD risk. Overall, long-term PM exposure increases the rate of atherosclerosis and inflammation. With regard to short-term exposure (2 hours), every 25 μg/m3 of PM2.5 was associated with a 48% increase in CVD mortality risk. In addition, after only 5 days of exposure, a rise in systolic (2.8 mmHg) and diastolic (2.7 mmHg) blood pressure occurred for every 10.5 μg/m3 of PM2.5. Other research has implicated PM2.5 in irregular heart rhythm, reduced heart rate variability (decreased vagal tone), and most notably heart failure. PM2.5 is also linked to carotid artery thickening and increased risk of acute myocardial infarction.
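The short-term figures above imply simple proportional arithmetic. The sketch below scales them linearly with concentration; linearity is an assumption for illustration, since real exposure–response curves need not be linear.

```python
# Scales the quoted short-term PM2.5 effects linearly with concentration.
# Linearity is an assumption for illustration, not an epidemiological claim.

def mortality_risk_increase_pct(pm25_ug_m3: float) -> float:
    """% increase in CVD mortality risk from a 2-hour exposure (48% per 25 ug/m3)."""
    return 48.0 * pm25_ug_m3 / 25.0

def bp_rise_mmhg(pm25_ug_m3: float) -> tuple[float, float]:
    """(systolic, diastolic) rise in mmHg after ~5 days (2.8/2.7 per 10.5 ug/m3)."""
    scale = pm25_ug_m3 / 10.5
    return (2.8 * scale, 2.7 * scale)

print(f"{mortality_risk_increase_pct(35.0):.1f}%")  # 67.2% under the linear assumption
print(bp_rise_mmhg(21.0))                           # ~(5.6, 5.4) mmHg
```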
Cardiovascular risk assessment
Existing cardiovascular disease or a previous cardiovascular event, such as a heart attack or stroke, is the strongest predictor of a future cardiovascular event. Age, sex, smoking, blood pressure, blood lipids and diabetes are important predictors of future cardiovascular disease in people who are not known to have cardiovascular disease. These measures, and sometimes others, may be combined into composite risk scores to estimate an individual's future risk of cardiovascular disease. Numerous risk scores exist, although their respective merits are debated. Other diagnostic tests and biomarkers remain under evaluation, but currently these lack clear-cut evidence to support their routine use. They include family history, coronary artery calcification score, high-sensitivity C-reactive protein (hs-CRP), ankle–brachial pressure index, lipoprotein subclasses and particle concentration, lipoprotein(a), apolipoproteins A-I and B, fibrinogen, white blood cell count, homocysteine, N-terminal pro B-type natriuretic peptide (NT-proBNP), and markers of kidney function. High blood phosphorus is also linked to an increased risk.
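Composite risk scores of the kind described above typically combine these predictors as a weighted sum passed through a logistic function. The sketch below shows that general shape only; every coefficient is a placeholder invented for illustration and does not come from Framingham, Reynolds, or any other published score.

```python
import math

def toy_cvd_risk(age: float, is_male: bool, smoker: bool,
                 systolic_bp: float, total_chol: float, diabetic: bool) -> float:
    """Toy logistic-style composite score; all coefficients are placeholders."""
    z = (-7.0                           # placeholder intercept
         + 0.06 * age
         + 0.4 * int(is_male)
         + 0.5 * int(smoker)
         + 0.02 * (systolic_bp - 120)   # mmHg above a reference value
         + 0.005 * (total_chol - 200)   # mg/dL above a reference value
         + 0.6 * int(diabetic))
    return 1.0 / (1.0 + math.exp(-z))   # logistic link -> probability-like output

print(f"{toy_cvd_risk(60, True, False, 140, 220, False):.1%}")  # e.g. ~7.6%
```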
Depression and traumatic stress
There is evidence that mental health problems, in particular depression and traumatic stress, are linked to cardiovascular diseases. Whereas mental health problems are known to be associated with risk factors for cardiovascular diseases such as smoking, poor diet, and a sedentary lifestyle, these factors alone do not explain the increased risk of cardiovascular diseases seen in depression, stress, and anxiety. Moreover, posttraumatic stress disorder is independently associated with increased risk for incident coronary heart disease, even after adjusting for depression and other covariates.
Occupational exposure
Little is known about the relationship between work and cardiovascular disease, but links have been established between certain toxins, extreme heat and cold, exposure to tobacco smoke, and mental health concerns such as stress and depression.
Non-chemical risk factors
A 2015 SBU report looking at non-chemical factors found an association for those:
with mentally stressful work that combines a lack of control over their working situation with an effort–reward imbalance
who experience low social support at work; who experience injustice or experience insufficient opportunities for personal development; or those who experience job insecurity
those who work night schedules; or have long working weeks
those who are exposed to noise
Specifically, the risk of stroke was also increased by exposure to ionizing radiation. Hypertension develops more often in those who experience job strain and who do shift work. Differences in risk between women and men are small; however, men are twice as likely as women to have, and die of, heart attacks or stroke during working life.
Chemical risk factors
A 2017 SBU report found evidence that workplace exposure to silica dust, engine exhaust or welding fumes is associated with heart disease. Associations also exist for exposure to arsenic, benzopyrenes, lead, dynamite, carbon disulphide, carbon monoxide, metalworking fluids and occupational exposure to tobacco smoke. Working with the electrolytic production of aluminium, or the production of paper when the sulphate pulping process is used, is associated with heart disease. An association was also found between heart disease and exposure to compounds which are no longer permitted in certain work environments, such as phenoxy acids containing TCDD (dioxin), or asbestos. Workplace exposure to silica dust or asbestos is also associated with pulmonary heart disease. There is evidence that workplace exposure to lead, carbon disulphide, or phenoxy acids containing TCDD, as well as working in an environment where aluminium is being electrolytically produced, is associated with stroke.
Somatic mutations
As of 2017, evidence suggests that certain leukemia-associated mutations in blood cells may also lead to increased risk of cardiovascular disease. Several large-scale research projects looking at human genetic data have found a robust link between the presence of these mutations, a condition known as clonal hematopoiesis, and cardiovascular disease-related incidents and mortality.
Radiation therapy
Radiation treatments for cancer can increase the risk of heart disease and death, as observed in breast cancer therapy. Therapeutic radiation increases the risk of a subsequent heart attack or stroke by 1.5 to 4 times; the increase depends on the dose strength, volume, and location.
Side-effects from radiation therapy for cardiovascular diseases have been termed radiation-induced heart disease or radiation-induced vascular disease. Symptoms are dose-dependent and include cardiomyopathy, myocardial fibrosis, valvular heart disease, coronary artery disease, heart arrhythmia and peripheral artery disease. Radiation-induced fibrosis, vascular cell damage and oxidative stress can lead to these and other late side-effect symptoms.
Pathophysiology
Population-based studies show that atherosclerosis, the major precursor of cardiovascular disease, begins in childhood. The Pathobiological Determinants of Atherosclerosis in Youth (PDAY) study demonstrated that intimal lesions appear in all the aortas and more than half of the right coronary arteries of youths aged 7–9 years. Obesity and diabetes mellitus are linked to cardiovascular disease, as are a history of chronic kidney disease and hypercholesterolaemia. In fact, cardiovascular disease is the most life-threatening of the diabetic complications, and diabetics are two- to four-fold more likely to die of cardiovascular-related causes than nondiabetics.
Screening
Screening ECGs (either at rest or with exercise) are not recommended in those without symptoms who are at low risk. This includes those who are young without risk factors. In those at higher risk the evidence for screening with ECGs is inconclusive. Additionally, echocardiography, myocardial perfusion imaging, and cardiac stress testing are not recommended in those at low risk who do not have symptoms. Some biomarkers may add to conventional cardiovascular risk factors in predicting the risk of future cardiovascular disease; however, the value of some biomarkers is questionable. Ankle–brachial index (ABI), high-sensitivity C-reactive protein (hsCRP), and coronary artery calcium are also of unclear benefit in those without symptoms as of 2018. The NIH recommends lipid testing in children beginning at the age of 2 if there is a family history of heart disease or lipid problems. It is hoped that early testing will improve lifestyle factors in those at risk, such as diet and exercise. Screening and selection for primary prevention interventions has traditionally been done through absolute risk using a variety of scores (e.g., Framingham or Reynolds risk scores). This stratification has separated people who receive the lifestyle interventions (generally lower and intermediate risk) from those who receive medication (higher risk). The number and variety of risk scores available for use has multiplied, but their efficacy according to a 2016 review was unclear due to lack of external validation or impact analysis. Risk stratification models often lack sensitivity for population groups and do not account for the large number of negative events among the intermediate and low risk groups. As a result, future preventative screening appears to shift toward applying prevention according to randomized trial results of each intervention rather than large-scale risk assessment.
Prevention
Up to 90% of cardiovascular disease may be preventable if established risk factors are avoided. Currently practised measures to prevent cardiovascular disease include:
Maintaining a healthy diet, such as the Mediterranean diet, a vegetarian, vegan or another plant-based diet.
Replacing saturated fat with healthier choices: Clinical trials show that replacing saturated fat with polyunsaturated vegetable oil reduced CVD by 30%. Prospective observational studies show that in many populations lower intake of saturated fat coupled with higher intake of polyunsaturated and monounsaturated fat is associated with lower rates of CVD.
Decrease body fat if overweight or obese. The effect of weight loss is often difficult to distinguish from dietary change, and evidence on weight reducing diets is limited. In observational studies of people with severe obesity, weight loss following bariatric surgery is associated with a 46% reduction in cardiovascular risk.
Limit alcohol consumption to the recommended daily limits. People who moderately consume alcoholic drinks have a 25–30% lower risk of cardiovascular disease. However, people who are genetically predisposed to consume less alcohol have lower rates of cardiovascular disease suggesting that alcohol itself may not be protective. Excessive alcohol intake increases the risk of cardiovascular disease and consumption of alcohol is associated with increased risk of a cardiovascular event in the day following consumption.
Decrease non-HDL cholesterol. Statin treatment reduces cardiovascular mortality by about 31%.
Stopping smoking and avoidance of second-hand smoke. Stopping smoking reduces risk by about 35%.
At least 150 minutes (2 hours and 30 minutes) of moderate exercise per week.
Lower blood pressure, if elevated. A 10 mmHg reduction in blood pressure reduces risk by about 20%. Lowering blood pressure appears to be effective even at normal blood pressure ranges. (A back-of-envelope example combining these percentage reductions follows this list.)
Decrease psychosocial stress. This measure may be complicated by imprecise definitions of what constitutes a psychosocial intervention. Mental stress–induced myocardial ischemia is associated with an increased risk of heart problems in those with previous heart disease. Severe emotional and physical stress leads to a form of heart dysfunction known as Takotsubo syndrome in some people. Stress, however, plays a relatively minor role in hypertension. Specific relaxation therapies are of unclear benefit.
Not enough sleep also raises the risk of high blood pressure. Adults need about 7–9 hours of sleep. Sleep apnea is also a major risk, as it causes breathing to stop briefly, which can put stress on the body and raise the risk of heart disease. Most guidelines recommend combining preventive strategies. There is some evidence that interventions aiming to reduce more than one cardiovascular risk factor may have beneficial effects on blood pressure, body mass index and waist circumference; however, evidence was limited and the authors were unable to draw firm conclusions on the effects on cardiovascular events and mortality. There is additional evidence to suggest that providing people with a cardiovascular disease risk score may reduce risk factors by a small amount compared to usual care. However, there was some uncertainty as to whether providing these scores had any effect on cardiovascular disease events. It is unclear whether or not dental care in those with periodontitis affects their risk of cardiovascular disease.
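As a back-of-envelope illustration of combining the percentage reductions listed above (statins ~31%, smoking cessation ~35%, a 10 mmHg blood pressure reduction ~20%), the sketch below multiplies residual risks. Treating the interventions as independent and multiplicative is an assumption made purely for the arithmetic; the trials behind each figure do not establish that they combine this way.

```python
# Multiplying residual risks assumes the interventions act independently,
# which the underlying evidence does not guarantee; arithmetic only.

reductions = {
    "statin (non-HDL cholesterol)": 0.31,
    "smoking cessation": 0.35,
    "10 mmHg lower blood pressure": 0.20,
}

residual = 1.0
for intervention, relative_reduction in reductions.items():
    residual *= 1.0 - relative_reduction

print(f"combined relative risk reduction: {1.0 - residual:.0%}")  # ~64% if independent
```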
Diet
A diet high in fruits and vegetables decreases the risk of cardiovascular disease and death. A 2021 review found that plant-based diets can provide a risk reduction for CVD if a healthy plant-based diet is consumed. Unhealthy plant-based diets do not provide benefits over diets including meat. A similar meta-analysis and systematic review also looked into dietary patterns and found "that diets lower in animal foods and unhealthy plant foods, and higher in healthy plant foods are beneficial for CVD prevention". A 2018 meta-analysis of observational studies concluded that "In most countries, a vegan diet is associated with a more favourable cardio-metabolic profile compared to an omnivorous diet." Evidence suggests that the Mediterranean diet may improve cardiovascular outcomes. There is also evidence that a Mediterranean diet may be more effective than a low-fat diet in bringing about long-term changes to cardiovascular risk factors (e.g., lower cholesterol level and blood pressure). The DASH diet (high in nuts, fish, fruits and vegetables, and low in sweets, red meat and fat) has been shown to reduce blood pressure, lower total and low-density lipoprotein cholesterol, and improve metabolic syndrome, but the long-term benefits have been questioned. A high-fiber diet is associated with lower risks of cardiovascular disease. Worldwide, dietary guidelines recommend a reduction in saturated fat, and although the role of dietary fat in cardiovascular disease is complex and controversial, there is a long-standing consensus that replacing saturated fat with unsaturated fat in the diet is sound medical advice. Total fat intake has not been found to be associated with cardiovascular risk. A 2020 systematic review found moderate-quality evidence that reducing saturated fat intake for at least 2 years caused a reduction in cardiovascular events. A 2015 meta-analysis of observational studies, however, did not find a convincing association between saturated fat intake and cardiovascular disease. Variation in what is used as a substitute for saturated fat may explain some differences in findings. The benefit from replacement with polyunsaturated fats appears greatest, while replacement of saturated fats with carbohydrates does not appear to have a beneficial effect. A diet high in trans fatty acids is associated with higher rates of cardiovascular disease, and in 2015 the Food and Drug Administration (FDA) determined that there was no longer a consensus among qualified experts that partially hydrogenated oils (PHOs), which are the primary dietary source of industrially produced trans fatty acids (IP-TFA), are generally recognized as safe (GRAS) for any use in human food. There is conflicting evidence concerning whether dietary supplements of omega-3 fatty acids (a type of polyunsaturated fat found in oily fish) added to the diet improve cardiovascular risk. The benefits of recommending a low-salt diet in people with high or normal blood pressure are not clear. In those with heart failure, after one study was left out, the rest of the trials show a trend to benefit. Another review of dietary salt concluded that there is strong evidence that high dietary salt intake increases blood pressure and worsens hypertension, and that it increases the number of cardiovascular disease events, both as a result of the increased blood pressure and probably through other mechanisms.
Moderate evidence was found that high salt intake increases cardiovascular mortality, and some evidence was found for an increase in overall mortality, strokes, and left ventricular hypertrophy.
Intermittent fasting
Overall, the current body of scientific evidence is uncertain on whether intermittent fasting could prevent cardiovascular disease. Intermittent fasting may help people lose more weight than regular eating patterns, but was no different from energy-restriction diets.
Medication
Blood pressure medication reduces cardiovascular disease in people at risk, irrespective of age, the baseline level of cardiovascular risk, or baseline blood pressure. The commonly-used drug regimens have similar efficacy in reducing the risk of all major cardiovascular events, although there may be differences between drugs in their ability to prevent specific outcomes. Larger reductions in blood pressure produce larger reductions in risk, and most people with high blood pressure require more than one drug to achieve adequate reduction in blood pressure. Adherence to medications is often poor, and while mobile phone text messaging has been tried to improve adherence, there is insufficient evidence that it alters secondary prevention of cardiovascular disease. Statins are effective in preventing further cardiovascular disease in people with a history of cardiovascular disease. As the event rate is higher in men than in women, the decrease in events is more easily seen in men than women. In those at risk, but without a history of cardiovascular disease (primary prevention), statins decrease the risk of death and combined fatal and non-fatal cardiovascular disease. The benefit, however, is small. A United States guideline recommends statins in those who have a 12% or greater risk of cardiovascular disease over the next ten years. Niacin, fibrates, and CETP inhibitors, while they may increase HDL cholesterol, do not affect the risk of cardiovascular disease in those who are already on statins. Fibrates lower the risk of cardiovascular and coronary events, but there is no evidence to suggest that they reduce all-cause mortality. Anti-diabetic medication may reduce cardiovascular risk in people with Type 2 diabetes, although evidence is not conclusive. A meta-analysis in 2009 including 27,049 participants and 2,370 major vascular events showed a 15% relative risk reduction in cardiovascular disease with more-intensive glucose lowering over an average follow-up period of 4.4 years, but an increased risk of major hypoglycemia. Aspirin has been found to be of only modest benefit in those at low risk of heart disease, as the risk of serious bleeding is almost equal to the protection against cardiovascular problems. In those at very low risk, including those over the age of 70, it is not recommended. The United States Preventive Services Task Force recommends against use of aspirin for prevention in women less than 55 and men less than 45 years old; however, it is recommended for some older people. The use of vasoactive agents for people with pulmonary hypertension with left heart disease or hypoxemic lung diseases may cause harm and unnecessary expense.
Antibiotics for secondary prevention of coronary heart disease
Antibiotics have been proposed to help patients with coronary disease reduce the risk of heart attacks and strokes. However, evidence from 2021 suggests that antibiotics for secondary prevention of coronary heart disease are harmful, with increased mortality and occurrence of stroke; the use of antibiotics for secondary prevention of coronary heart disease is therefore not supported.
Physical activity
Exercise-based cardiac rehabilitation following a heart attack reduces the risk of death from cardiovascular disease and leads to fewer hospitalizations. There have been few high-quality studies of the benefits of exercise training in people with increased cardiovascular risk but no history of cardiovascular disease. A systematic review estimated that inactivity is responsible for 6% of the burden of disease from coronary heart disease worldwide. The authors estimated that 121,000 deaths from coronary heart disease could have been averted in Europe in 2008 if people had not been physically inactive. Low-quality evidence from a limited number of studies suggests that yoga has beneficial effects on blood pressure and cholesterol. Tentative evidence suggests that home-based exercise programs may be more efficient at improving exercise adherence.
Dietary supplements
While a healthy diet is beneficial, antioxidant supplementation (vitamin E, vitamin C, etc.) and vitamin supplementation have not been shown to protect against cardiovascular disease and in some cases may possibly result in harm. Mineral supplements have also not been found to be useful. Niacin, a type of vitamin B3, may be an exception, with a modest decrease in the risk of cardiovascular events in those at high risk. Magnesium supplementation lowers high blood pressure in a dose-dependent manner. Magnesium therapy is recommended for people with ventricular arrhythmia associated with torsades de pointes who present with long QT syndrome, and for the treatment of people with digoxin intoxication-induced arrhythmias. There is no evidence that omega-3 fatty acid supplementation is beneficial.
Management
Cardiovascular disease is treatable, with initial treatment primarily focused on diet and lifestyle interventions. Influenza may make heart attacks and strokes more likely, and therefore influenza vaccination may decrease the chance of cardiovascular events and death in people with heart disease. Proper CVD management necessitates a focus on MI and stroke cases due to their combined high mortality rate, keeping in mind the cost-effectiveness of any intervention, especially in developing countries with low or middle income levels. Regarding MI, strategies using aspirin, atenolol, streptokinase or tissue plasminogen activator have been compared for quality-adjusted life-years (QALYs) in regions of low and middle income </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anterior spinal artery syndrome and can you explain it in depth? [/INST] Anterior spinal artery syndrome (also known as "anterior spinal cord syndrome") is a syndrome caused by ischemia of the anterior spinal artery, resulting in loss of function of the anterior two-thirds of the spinal cord. The region affected includes the descending corticospinal tract, ascending spinothalamic tract, and autonomic fibers. It is characterized by a corresponding loss of motor function, loss of pain and temperature sensation, and hypotension.
Anterior spinal artery syndrome is the most common form of spinal cord infarction. The anterior spinal cord is at increased risk for infarction because it is supplied by the single anterior spinal artery and has little collateral circulation, unlike the posterior spinal cord which is supplied by two posterior spinal arteries.
Signs and symptoms
Complete motor paralysis below the level of the lesion due to interruption of the corticospinal tract
Loss of pain and temperature sensation at and below the level of the lesion due to interruption of the spinothalamic tract
Retained proprioception and vibratory sensation due to intact dorsal columns
Autonomic dysfunction may be present and can manifest as hypotension (either orthostatic or frank hypotension), sexual dysfunction, and/or bowel and bladder dysfunction
Areflexia, a flaccid internal and external anal sphincter, urinary retention, and intestinal obstruction may also be present in individuals with anterior cord syndrome. Symptoms usually occur very quickly and are often experienced within one hour of the initial damage. MRI can detect the magnitude and location of the damage 10–15 hours after the initiation of symptoms; diffusion-weighted imaging may be used as it is able to identify the damage within a few minutes of symptomatic onset. Clinical features include paraparesis or quadriparesis (depending on the level of the injury), complete motor paralysis below the level of the lesion due to interruption of the corticospinal tract, and impaired pain and temperature sensation at and below the level of the lesion. Proprioception and vibratory sensation are preserved, as the dorsal columns of the spinal cord remain intact.
Causes
Because the anterior spinal artery is supplied by branches of the aorta, the most common causes are insufficiencies within the aorta </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Vogt–Koyanagi–Harada disease and can you explain it in depth? [/INST] Vogt–Koyanagi–Harada disease (VKH) is a multisystem disease of presumed autoimmune cause that affects melanin-pigmented tissues. The most significant manifestation is bilateral, diffuse uveitis, which affects the eyes. VKH may also variably involve the inner ear (with effects on hearing), the skin, and the meninges of the central nervous system.
Signs and symptoms
Overview
The disease is characterised by bilateral diffuse uveitis, with pain, redness and blurring of vision. The eye symptoms may be accompanied by a varying constellation of systemic symptoms, such as auditory (tinnitus, vertigo, and hypoacusis), neurological (meningismus, with malaise, fever, headache, nausea, abdominal pain, stiffness of the neck and back, or a combination of these factors; meningitis, CSF pleocytosis, cranial nerve palsies, hemiparesis, transverse myelitis and ciliary ganglionitis), and cutaneous manifestations, including poliosis, vitiligo, and alopecia. The vitiligo is often found at the sacral region.
Phases
The sequence of clinical events in VKH is divided into four phases: prodromal, acute uveitic, convalescent, and chronic recurrent. The prodromal phase may have no symptoms, or may mimic a nonspecific viral infection, marked by flu-like symptoms that typically last for a few days. Fever, headache, nausea, meningismus, dysacusia (discomfort caused by loud noises or a distortion in the quality of the sounds being heard), tinnitus, and/or vertigo may occur. Eye symptoms can include orbital pain, photophobia, and tearing. The skin and hair may be sensitive to touch. Cranial nerve palsies and optic neuritis are uncommon. The acute uveitic phase occurs a few days later and typically lasts for several weeks. This phase is heralded by bilateral panuveitis causing blurring of vision. In 70% of VKH cases, the onset of visual blurring is bilaterally contemporaneous; if initially unilateral, the other eye is involved within several days. The process can include bilateral granulomatous anterior uveitis, variable degree of vitritis, thickening of the posterior choroid with elevation of the peripapillary retinal choroidal layer, optic nerve hyperemia and papillitis, and multiple exudative bullous serous retinal detachments. The convalescent phase is characterized by gradual tissue depigmentation of skin with vitiligo and poliosis, sometimes with nummular depigmented scars, as well as alopecia and diffuse fundus depigmentation resulting in a classic orange-red discoloration ("sunset glow fundus") and retinal pigment epithelium clumping and/or migration. The chronic recurrent phase may be marked by repeated bouts of uveitis, but is more commonly a chronic, low-grade, often subclinical, uveitis that may lead to granulomatous anterior inflammation, cataracts, glaucoma, and ocular hypertension. Full-blown recurrences, though, are rare after the acute stage is over. Dysacusia may occur in this phase.
Cause
Although sometimes a viral infection, or skin or eye trauma, precedes an outbreak, the exact underlying initiator of VKH disease remains unknown. VKH is attributed, however, to an aberrant T-cell-mediated immune response directed against self-antigens found on melanocytes. Stimulated by interleukin 23 (IL-23), T helper 17 cells and cytokines, such as interleukin 17, appear to target proteins in the melanocytes.
Risk factors
Affected individuals are typically 20 to 50 years old. The female-to-male ratio is 2:1. By definition, affected people have no history of either surgical or accidental ocular trauma. VKH is more common in Asians, Latinos, Middle Easterners, American Indians, and Mexican Mestizos; it is much less common in Caucasians and in Blacks from sub-Saharan Africa. VKH is associated with a variety of genetic polymorphisms that relate to immune function. For example, it has been associated with the human leukocyte antigens (HLA) HLA-DR4 and DRB1/DQA1, copy-number variations of complement component 4, a variant IL-23R locus, and various other non-HLA genes. HLA-DRB1*0405 in particular appears to play an important susceptibility role.
Diagnosis
If tested in the prodromal phase, cerebrospinal fluid pleocytosis is found in more than 80% of cases, with mainly lymphocytes. This pleocytosis resolves in about 8 weeks even if chronic uveitis persists. Functional tests may include electroretinogram and visual field testing. Diagnostic confirmation and an estimation of disease severity may involve imaging tests such as retinography, fluorescein or indocyanine green angiography, optical coherence tomography and ultrasound. For example, indocyanine green angiography may detect continuing choroidal inflammation in eyes without clinical symptoms or signs. Ocular MRI may be helpful, and people with auditory symptoms should undergo audiologic testing. Histopathology findings from eye and skin are discussed by Walton. The diagnosis of VKH is based on the clinical presentation; the diagnostic differential is extensive, and includes sympathetic ophthalmia, sarcoidosis, primary intraocular B-cell lymphoma, posterior scleritis, uveal effusion syndrome, tuberculosis, syphilis, and multifocal choroidopathy syndromes.
Types
Based on the presence of extraocular findings, such as neurological, auditory, and integumentary manifestations, the "revised diagnostic criteria" of 2001 classify the disease as complete (eyes along with both neurological and skin involvement), incomplete (eyes along with either neurological or skin involvement), or probable (eyes without either neurological or skin involvement). By definition, for research homogeneity purposes, the two exclusion criteria are previous ocular penetrating trauma or surgery, and other concomitant ocular disease similar to VKH disease.
Management
The acute uveitic phase of VKH is usually responsive to high-dose oral corticosteroids; parenteral administration is usually not required </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Heart failure and can you explain it in depth? [/INST] Heart failure (HF), also known as congestive heart failure (CHF), is a syndrome, a group of signs and symptoms caused by an impairment of the heart's blood pumping function. Symptoms typically include shortness of breath, excessive fatigue, and leg swelling. The shortness of breath may occur with exertion or while lying down, and may wake people up during the night. Chest pain, including angina, is not usually caused by heart failure, but may occur if the heart failure was caused by a heart attack. The severity of the heart failure is measured by the severity of symptoms during exercise. Other conditions that may have symptoms similar to heart failure include obesity, kidney failure, liver disease, anemia, and thyroid disease. Common causes of heart failure include coronary artery disease, heart attack, high blood pressure, atrial fibrillation, valvular heart disease, excessive alcohol consumption, infection, and cardiomyopathy. These cause heart failure by altering the structure or the function of the heart, or in some cases both. There are different types of heart failure: right-sided heart failure, which affects the right heart; left-sided heart failure, which affects the left heart; and biventricular heart failure, which affects both sides of the heart. Left-sided heart failure may be present with a reduced ejection fraction or with a preserved ejection fraction. Heart failure is not the same as cardiac arrest, in which blood flow stops completely due to the failure of the heart to pump effectively. Diagnosis is based on symptoms, physical findings, and echocardiography. Blood tests and a chest x-ray may be useful to determine the underlying cause. Treatment depends on severity and case. For people with chronic, stable, mild heart failure, treatment usually consists of lifestyle changes, such as not smoking, physical exercise, and dietary changes, as well as medications. In heart failure due to left ventricular dysfunction, angiotensin-converting-enzyme inhibitors, angiotensin receptor blockers, or sacubitril/valsartan, along with beta blockers, are recommended. In severe disease, aldosterone antagonists or hydralazine with a nitrate can be used. Diuretics may also be prescribed to prevent fluid retention and the resulting shortness of breath. Depending on the case, an implanted device such as a pacemaker or implantable cardiac defibrillator may sometimes be recommended. In some moderate or more severe cases, cardiac resynchronization therapy (CRT) or cardiac contractility modulation may be beneficial. In severe disease that persists despite all other measures, a ventricular assist device (for the left, right, or both ventricles) or, occasionally, heart transplantation may be recommended. Heart failure is a common, costly, and potentially fatal condition, and is the leading cause of hospitalization and readmission in older adults. Heart failure often leads to more drastic health impairments than failure of other, similarly complex organs such as the kidneys or liver. In 2015, it affected about 40 million people worldwide. Overall, heart failure affects about 2% of adults, and as many as 6–10% of those over the age of 65. Rates are predicted to increase.
The risk of death in the first year after diagnosis is about 35%, while the risk of death in the second year is less than 10% in those still alive. The risk of death is comparable to that of some cancers. In the United Kingdom, the disease is the reason for 5% of emergency hospital admissions. Heart failure has been known since ancient times; it is mentioned in the Ebers Papyrus around 1550 BCE.
Definition
Heart failure is not a disease but a syndrome – a combination of signs and symptoms caused by the failure of the heart to pump blood to support the circulatory system at rest or during activity. It develops when the heart fails to properly fill with blood during diastole or to eject blood during systole, reducing cardiac output to the rest of the body. The filling failure and the resulting high intracardiac pressures can lead to fluid accumulation in the veins and tissues. This manifests as water retention and swelling due to fluid accumulation (edema), called congestion. Impaired ejection can lead to inadequate blood flow to the body tissues, resulting in ischemia.
Signs and symptoms
Congestive heart failure is a pathophysiological condition in which the heart's output is insufficient to meet the needs of the body and lungs. The term "congestive heart failure" is often used because one of the most common symptoms is congestion, or fluid accumulation in the tissues and veins of the lungs or other parts of a person's body. Congestion manifests itself particularly in the form of fluid accumulation and swelling (edema), both in the form of peripheral edema (causing swollen limbs and feet) and pulmonary edema (causing difficulty breathing), as well as ascites (swollen abdomen). Symptoms of heart failure are traditionally divided into left-sided and right-sided because the left and right ventricles supply different parts of the circulation, but sufferers often have both types of signs and symptoms. In biventricular heart failure, both sides of the heart are affected. Left-sided heart failure is the more common.
Left-sided failure
The left side of the heart takes oxygen-rich blood from the lungs and pumps it to the rest of the circulatory system in the body (except for the pulmonary circulation). Failure of the left side of the heart causes blood to back up into the lungs, causing breathing difficulties and fatigue due to an insufficient supply of oxygenated blood. Common respiratory signs include increased respiratory rate and labored breathing (nonspecific signs of shortness of breath). Rales or crackles, heard initially in the lung bases and, when severe, in all lung fields, indicate the development of pulmonary edema (fluid in the alveoli). Cyanosis, which indicates deficiency of oxygen in the blood, is a late sign of extremely severe pulmonary edema. Other signs of left ventricular failure include a laterally displaced apex beat (which occurs when the heart is enlarged) and a gallop rhythm (additional heart sounds), which may be heard as a sign of increased blood flow or increased intracardiac pressure. Heart murmurs may indicate the presence of valvular heart disease, either as a cause (e.g., aortic stenosis) or as a consequence (e.g., mitral regurgitation) of heart failure. Backward failure of the left ventricle causes congestion in the blood vessels of the lungs, so that symptoms are predominantly respiratory. Backward failure can be divided into the failure of the left atrium, the left ventricle, or both within the left circuit. Patients will experience shortness of breath (dyspnea) on exertion and, in severe cases, dyspnea at rest. Increasing breathlessness while lying down, called orthopnea, also occurs. It can be measured by the number of pillows required to lie comfortably, with extreme cases of orthopnea forcing the patient to sleep sitting up. Another symptom of heart failure is paroxysmal nocturnal dyspnea: a sudden nocturnal attack of severe shortness of breath, usually occurring several hours after falling asleep. There may be "cardiac asthma" or wheezing. Impaired left ventricular forward function can lead to symptoms of poor systemic perfusion such as dizziness, confusion, and cool extremities at rest.
Right-sided failure
Right-sided heart failure is often caused by pulmonary heart disease (cor pulmonale), which is typically caused by issues with the pulmonary circulation such as pulmonary hypertension or pulmonic stenosis. Physical examination may reveal pitting peripheral edema, ascites, liver enlargement, and spleen enlargement. Jugular venous pressure is frequently assessed as a marker of fluid status, which can be accentuated by testing for hepatojugular reflux. If the right ventricular pressure is increased, a parasternal heave, reflecting the compensatory increase in contraction strength, may be present. Backward failure of the right ventricle leads to congestion of systemic capillaries. This generates excess fluid accumulation in the body, causing swelling under the skin (peripheral edema or anasarca) that usually affects the dependent parts of the body first, with foot and ankle swelling in people who are standing up and sacral edema in people who are predominantly lying down. Nocturia (frequent night-time urination) may occur when fluid from the legs is returned to the bloodstream while lying down at night. In progressively severe cases, ascites (fluid accumulation in the abdominal cavity causing swelling) and liver enlargement may develop. Significant liver congestion may result in impaired liver function (congestive hepatopathy), jaundice, and coagulopathy (problems of decreased or increased blood clotting).
Biventricular failure
Dullness of the lung fields when percussed and reduced breath sounds at the base of the lungs may suggest the development of a pleural effusion (fluid collection between the lung and the chest wall). Though it can occur in isolated left- or right-sided heart failure, it is more common in biventricular failure because pleural veins drain into both the systemic and pulmonary venous systems. When unilateral, effusions are often right-sided. If a person with a failure of one ventricle lives long enough, it will tend to progress to failure of both ventricles. For example, left ventricular failure allows pulmonary edema and pulmonary hypertension to occur, which increase stress on the right ventricle. Though still harmful, right ventricular failure is not as deleterious to the left side.
Causes
Since heart failure is a syndrome and not a disease, establishing the underlying cause is vital to diagnosis and treatment. Heart failure is the potential end stage of all heart diseases. Common causes of heart failure include coronary artery disease, including a previous myocardial infarction (heart attack), high blood pressure, atrial fibrillation, valvular heart disease, excess alcohol use, infection, and cardiomyopathy of an unknown cause. In addition, viral infections of the heart can lead to inflammation of the muscular layer of the heart and subsequently contribute to the development of heart failure. Genetic predisposition plays an important role. If more than one cause is present, progression is more likely and prognosis is worse. Heart damage can predispose a person to develop heart failure later in life and has many causes, including systemic viral infections (e.g., HIV), chemotherapeutic agents such as daunorubicin, cyclophosphamide, and trastuzumab, and substance use disorders involving substances such as alcohol, cocaine, and methamphetamine. An uncommon cause is exposure to certain toxins such as lead and cobalt. Additionally, infiltrative disorders such as amyloidosis and connective tissue diseases such as systemic lupus erythematosus have similar consequences. Obstructive sleep apnea (a condition of sleep wherein disordered breathing overlaps with obesity, hypertension, and/or diabetes) is regarded as an independent cause of heart failure. Recent reports from clinical trials have also linked variation in blood pressure to heart failure and cardiac changes that may give rise to heart failure.
High-output heart failure
High-output heart failure happens when the amount of blood pumped out is more than typical and the heart is unable to keep up. This can occur in overload situations such as blood or serum infusions, kidney diseases, chronic severe anemia, beriberi (vitamin B1/thiamine deficiency), hyperthyroidism, cirrhosis, Paget's disease, multiple myeloma, arteriovenous fistulae, or arteriovenous malformations.
Acute decompensation
Chronic stable heart failure may easily decompensate. This most commonly results from a concurrent illness (such as myocardial infarction (a heart attack) or pneumonia), abnormal heart rhythms, uncontrolled hypertension, or a person's failure to maintain a fluid restriction, diet, or medication. Other factors that may worsen CHF include: anemia, hyperthyroidism, excessive fluid or salt intake, and medication such as NSAIDs and thiazolidinediones. NSAIDs increase the risk twofold.
Medications
A number of medications may cause or worsen the disease. This includes NSAIDs, COX-2 inhibitors, a number of anesthetic agents such as ketamine, thiazolidinediones, some cancer medications, several antiarrhythmic medications, pregabalin, alpha-2 adrenergic receptor agonists, minoxidil, itraconazole, cilostazol, anagrelide, stimulants (e.g., methylphenidate), tricyclic antidepressants, lithium, antipsychotics, dopamine agonists, TNF inhibitors, calcium channel blockers (especially verapamil and diltiazem), salbutamol, and tamsulosin. By inhibiting the formation of prostaglandins, NSAIDs may exacerbate heart failure through several mechanisms, including promotion of fluid retention, increasing blood pressure, and decreasing a person's response to diuretic medications. Similarly, the ACC/AHA recommends against the use of COX-2 inhibitor medications in people with heart failure. Thiazolidinediones have been strongly linked to new cases of heart failure and worsening of pre-existing congestive heart failure due to their association with weight gain and fluid retention. Certain calcium channel blockers, such as diltiazem and verapamil, are known to decrease the force with which the heart ejects blood, and are thus not recommended in people with heart failure with a reduced ejection fraction.
Supplements
Certain alternative medicines carry a risk of exacerbating existing heart failure, and are not recommended. This includes aconite, ginseng, gossypol, gynura, licorice, lily of the valley, tetrandrine, and yohimbine. Aconite can cause abnormally slow heart rates and abnormal heart rhythms such as ventricular tachycardia. Ginseng can cause abnormally low or high blood pressure, and may interfere with the effects of diuretic medications. Gossypol can increase the effects of diuretics, leading to toxicity. Gynura can cause low blood pressure. Licorice can worsen heart failure by increasing blood pressure and promoting fluid retention. Lily of the valley can cause abnormally slow heart rates with mechanisms similar to those of digoxin. Tetrandrine can lead to low blood pressure through inhibition of L-type calcium channels. Yohimbine can exacerbate heart failure by increasing blood pressure through alpha-2 adrenergic receptor antagonism.
Pathophysiology
Heart failure is caused by any condition that reduces the efficiency of the heart muscle, through damage or overloading. Over time, these increases in workload, which are mediated by long-term activation of neurohormonal systems such as the renin–angiotensin system and the sympathoadrenal system, lead to fibrosis, dilation, and structural changes in the shape of the left ventricle from elliptical to spherical.

The heart of a person with heart failure may have a reduced force of contraction due to overloading of the ventricle. In a normal heart, increased filling of the ventricle results in increased contraction force by the Frank–Starling law of the heart, and thus a rise in cardiac output. In heart failure, this mechanism fails, as the ventricle is loaded with blood to the point where heart muscle contraction becomes less efficient. This is due to the reduced ability to cross-link actin and myosin myofilaments in over-stretched heart muscle.
Diagnosis
No diagnostic criteria have been agreed on as the gold standard for heart failure. In the UK the National Institute for Health and Care Excellence recommends measuring brain natriuretic peptide 32 (BNP) followed by an ultrasound of the heart if positive. This is recommended in those with shortness of breath. In those with worsening heart failure, both a measure of BNP and of troponin are recommended to help determine likely outcomes.
Classification
One historical method of categorizing heart failure is by the side of the heart involved (left heart failure versus right heart failure). Right heart failure was thought to compromise blood flow to the lungs, compared to left heart failure compromising blood flow to the aorta and consequently to the brain and the remainder of the body's systemic circulation. However, mixed presentations are common, and left heart failure is a common cause of right heart failure.

More accurate classification of heart failure type is made by measuring ejection fraction, or the proportion of blood pumped out of the heart during a single contraction. Ejection fraction is given as a percentage, with the normal range being between 50 and 75%. The two types are:
1) Heart failure due to reduced ejection fraction (HFrEF): Synonyms no longer recommended are "heart failure due to left ventricular systolic dysfunction" and "systolic heart failure". HFrEF is associated with an ejection fraction of less than 40%.

2) Heart failure with preserved ejection fraction (HFpEF): Synonyms no longer recommended include "diastolic heart failure" and "heart failure with normal ejection fraction". HFpEF occurs when the left ventricle contracts normally during systole, but the ventricle is stiff and does not relax normally during diastole, which impairs filling.

Heart failure may also be classified as acute or chronic. Chronic heart failure is a long-term condition, usually kept stable by the treatment of symptoms. Acute decompensated heart failure is a worsening of chronic heart failure symptoms, which can result in acute respiratory distress. High-output heart failure can occur when there is increased cardiac demand that results in increased left ventricular diastolic pressure, which can develop into pulmonary congestion (pulmonary edema).

Several terms are closely related to heart failure and may be the cause of heart failure, but should not be confused with it. Cardiac arrest and asystole refer to situations in which no cardiac output occurs at all. Without urgent treatment, these events result in sudden death. Myocardial infarction ("heart attack") refers to heart muscle damage due to insufficient blood supply, usually as a result of a blocked coronary artery. Cardiomyopathy refers specifically to problems within the heart muscle, and these problems can result in heart failure. Ischemic cardiomyopathy implies that the cause of muscle damage is coronary artery disease. Dilated cardiomyopathy implies that the muscle damage has resulted in enlargement of the heart. Hypertrophic cardiomyopathy involves enlargement and thickening of the heart muscle.
Ultrasound
An echocardiogram (ultrasound of the heart) is commonly used to support a clinical diagnosis of heart failure. This can determine the stroke volume (SV, the amount of blood in the heart that exits the ventricles with each beat), the end-diastolic volume (EDV, the total amount of blood at the end of diastole), and the SV in proportion to the EDV, a value known as the ejection fraction (EF). In pediatrics, the shortening fraction is the preferred measure of systolic function. Normally, the EF should be between 50 and 70%; in systolic heart failure, it drops below 40%. Echocardiography can also identify valvular heart disease and assess the state of the pericardium (the connective tissue sac surrounding the heart). Echocardiography may also aid in deciding specific treatments, such as medication, insertion of an implantable cardioverter-defibrillator, or cardiac resynchronization therapy. Echocardiography can also help determine whether acute myocardial ischemia is the precipitating cause, which may manifest as regional wall motion abnormalities on echo.
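A worked sketch of the volumetric arithmetic above: ejection fraction is stroke volume divided by end-diastolic volume, and the EF cut-offs quoted in this article (below 40% for HFrEF; 50–70% normal) can be applied to the result. The example volumes and function name below are hypothetical, not clinical reference values.

```python
def ejection_fraction(stroke_volume_ml: float, end_diastolic_volume_ml: float) -> float:
    """EF (%) is the stroke volume expressed as a fraction of the end-diastolic volume."""
    return 100.0 * stroke_volume_ml / end_diastolic_volume_ml

# Hypothetical echo measurements (illustrative numbers only):
edv = 120.0                      # end-diastolic volume in mL
sv = 42.0                        # stroke volume in mL
ef = ejection_fraction(sv, edv)  # 35.0

# Cut-offs as described in this article: <40% suggests HFrEF;
# 50-70% is the normal range quoted for EF.
if ef < 40:
    category = "reduced (consistent with HFrEF)"
elif ef >= 50:
    category = "within the quoted normal range"
else:
    category = "borderline / mildly reduced"
print(f"EF = {ef:.0f}% - {category}")
```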
Chest X-ray
Chest X-rays are frequently used to aid in the diagnosis of CHF. In a person who is compensated, this may show cardiomegaly (visible enlargement of the heart), quantified as the cardiothoracic ratio (proportion of the heart size to the chest). In left ventricular failure, evidence may exist of vascular redistribution (upper lobe blood diversion or cephalization), Kerley lines, cuffing of the areas around the bronchi, and interstitial edema. Ultrasound of the lung may also be able to detect Kerley lines.
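The cardiothoracic ratio mentioned above is a simple quotient; a minimal sketch follows, with hypothetical measurements. The >0.5 cut-off for suspecting cardiomegaly on a PA film is a common convention assumed here, not a figure stated in this article.

```python
def cardiothoracic_ratio(cardiac_width_cm: float, thoracic_width_cm: float) -> float:
    """Maximal transverse cardiac width divided by maximal internal thoracic width."""
    return cardiac_width_cm / thoracic_width_cm

ctr = cardiothoracic_ratio(16.0, 30.0)  # hypothetical measurements -> ~0.53
# A ratio above ~0.5 on a PA film is conventionally read as cardiomegaly
# (assumed threshold; the article names only the ratio itself).
print(f"CTR = {ctr:.2f}, cardiomegaly suspected: {ctr > 0.5}")
```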
Electrophysiology
An electrocardiogram (ECG/EKG) may be used to identify arrhythmias, ischemic heart disease, right and left ventricular hypertrophy, and presence of conduction delay or abnormalities (e.g. left bundle branch block). Although these findings are not specific to the diagnosis of heart failure, a normal ECG virtually excludes left ventricular systolic dysfunction.
Blood tests
Blood tests routinely performed include electrolytes (sodium, potassium), measures of kidney function, liver function tests, thyroid function tests, a complete blood count, and often C-reactive protein if infection is suspected. An elevated brain natriuretic peptide 32 (BNP) is a specific test indicative of heart failure. Additionally, BNP can be used to differentiate dyspnea due to heart failure from other causes of dyspnea. If myocardial infarction is suspected, various cardiac markers may be used.
BNP is a better indicator than N-terminal pro-BNP for the diagnosis of symptomatic heart failure and left ventricular systolic dysfunction. In symptomatic people, BNP had a sensitivity of 85% and specificity of 84% in detecting heart failure; performance declined with increasing age.

Hyponatremia (low serum sodium concentration) is common in heart failure. Vasopressin levels are usually increased, along with renin, angiotensin II, and catecholamines, to compensate for reduced circulating volume due to inadequate cardiac output. This leads to increased fluid and sodium retention in the body; because the rate of fluid retention is higher than the rate of sodium retention, the result is hypervolemic hyponatremia (low sodium concentration due to high body fluid retention). This phenomenon is more common in older women with low body mass. Severe hyponatremia can result in accumulation of fluid in the brain, causing cerebral edema and intracranial hemorrhage.
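As a back-of-envelope illustration of the BNP test characteristics quoted above (85% sensitivity, 84% specificity), the sketch below derives predictive values with Bayes' rule; the 20% pretest prevalence is an assumed figure for illustration only, not a value from this article.

```python
def predictive_values(sens: float, spec: float, prevalence: float):
    """Positive/negative predictive values from sensitivity, specificity, and pretest prevalence."""
    tp = sens * prevalence              # true positives per unit population
    fp = (1 - spec) * (1 - prevalence)  # false positives
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Sensitivity and specificity quoted above for BNP in symptomatic people;
# the pretest prevalence of 20% is assumed for illustration.
ppv, npv = predictive_values(sens=0.85, spec=0.84, prevalence=0.20)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # roughly 57% and 96%
```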
Angiography
Angiography is the X-ray imaging of blood vessels, which is done by injecting contrast agents into the bloodstream through a thin plastic tube (catheter), which is placed directly in the blood vessel. X-ray images are called angiograms. Heart failure may be the result of coronary artery disease, and its prognosis depends in part on the ability of the coronary arteries to supply blood to the myocardium (heart muscle). As a result, coronary catheterization may be used to identify possibilities for revascularisation through percutaneous coronary intervention or bypass surgery.
Algorithms
Various algorithms are used for the diagnosis of heart failure. For example, the algorithm used by the Framingham Heart Study adds together criteria mainly from physical examination. In contrast, the more extensive algorithm by the European Society of Cardiology weights the difference between supporting and opposing parameters from the medical history, physical examination, further medical tests, and response to therapy.
Framingham criteria
By the Framingham criteria, diagnosis of congestive heart failure (heart failure with impaired pumping capability) requires the simultaneous presence of at least two of the following major criteria, or one major criterion in conjunction with two of the minor criteria; a minimal code sketch of this rule follows the criteria lists below.
Major criteria include an enlarged heart on a chest X-ray,
an S3 gallop (a third heart sound),
acute pulmonary edema,
episodes of waking up from sleep gasping for air,
crackles on lung auscultation,
central venous pressure more than 16 cm H2O at the right atrium,
jugular vein distension,
positive abdominojugular test, and
weight loss of more than 4.5 kg in 5 days in response to treatment (sometimes classified as a minor criterion).
Minor criteria include an abnormally fast heart rate of more than 120 beats per minute,
nocturnal cough,
difficulty breathing with physical activity,
pleural effusion,
a decrease in the vital capacity by one-third from maximum recorded,
liver enlargement, and
bilateral ankle edema.

Minor criteria are acceptable only if they cannot be attributed to another medical condition, such as pulmonary hypertension, chronic lung disease, cirrhosis, ascites, or the nephrotic syndrome. The Framingham Heart Study criteria are 100% sensitive and 78% specific for identifying persons with definite congestive heart failure.
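To make the decision rule concrete, here is a minimal sketch; the function name and the counting interface are illustrative assumptions, and the sketch assumes the caller has already excluded minor criteria attributable to another condition, as required above.

```python
def framingham_chf(major_count: int, minor_count: int) -> bool:
    """Framingham rule as stated above: diagnosis requires at least two major
    criteria, or one major criterion together with two minor criteria.
    Minor criteria must already be filtered to exclude those attributable
    to another condition (e.g., chronic lung disease, cirrhosis)."""
    return major_count >= 2 or (major_count >= 1 and minor_count >= 2)

# Hypothetical patient: S3 gallop (one major criterion) plus nocturnal cough
# and bilateral ankle edema (two minor criteria).
print(framingham_chf(major_count=1, minor_count=2))  # True
```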
ESC algorithm
The ESC algorithm establishes the diagnosis of heart failure by weighting supporting and opposing parameters from the medical history, physical examination, further medical tests, and response to therapy.
Staging
Heart failure is commonly stratified by the degree of functional impairment conferred by the severity of the heart failure, as reflected in the New York Heart Association (NYHA) Functional Classification. The NYHA functional classes (I–IV) begin with class I, which is defined as a person who experiences no limitation in any activities and has no symptoms from ordinary activities. People with NYHA class II heart failure have slight, mild limitations with everyday activities; the person is comfortable at rest or with mild exertion. With NYHA class III heart failure, a marked limitation occurs with any activity; the person is comfortable only at rest. A person with NYHA class IV heart failure is symptomatic at rest and becomes quite uncomfortable with any physical activity. This score documents the severity of symptoms and can be used to assess response to treatment. While its use is widespread, the NYHA score is not very reproducible and does not reliably predict walking distance or exercise tolerance on formal testing.

In its 2001 guidelines, the American College of Cardiology/American Heart Association working group introduced four stages of heart failure:
Stage A: People at high risk for developing HF in the future, but no functional or structural heart disorder
Stage B: A structural heart disorder, but no symptoms at any stage
Stage C: Previous or current symptoms of heart failure in the context of an underlying structural heart problem, but managed with medical treatment
Stage D: Advanced disease requiring hospital-based support, a heart transplant, or palliative care

The ACC staging system is useful since stage A encompasses "pre-heart failure" – a stage where intervention with treatment can presumably prevent progression to overt symptoms. ACC stage A does not have a corresponding NYHA class. ACC stage B would correspond to NYHA class I. ACC stage C corresponds to NYHA classes II and III, while ACC stage D overlaps with NYHA class IV.
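The correspondence between ACC stages and NYHA classes described in the preceding paragraph can be encoded as a small lookup table; this sketch is illustrative only, with names chosen for this example.

```python
# Correspondence between ACC stages and NYHA classes as described above;
# stage A has no NYHA counterpart, and stage D overlaps with class IV.
ACC_TO_NYHA = {
    "A": [],            # pre-heart failure: no corresponding NYHA class
    "B": ["I"],
    "C": ["II", "III"],
    "D": ["IV"],
}
print(ACC_TO_NYHA["C"])  # ['II', 'III']
```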
Heart failure may also be characterized by other features of its presentation:
The degree of coexisting illness: i.e. heart failure/systemic hypertension, heart failure/pulmonary hypertension, heart failure/diabetes, heart failure/kidney failure, etc.
Whether the problem is primarily increased venous back pressure (preload), or failure to supply adequate arterial perfusion (afterload)
Whether the abnormality is due to low cardiac output with high systemic vascular resistance or high cardiac output with low vascular resistance (low-output heart failure vs. high-output heart failure)
Histopathology
Histopathology can diagnose heart failure in autopsies. The presence of siderophages indicates chronic left-sided heart failure, but is not specific for it. It is also indicated by congestion of the pulmonary circulation.
Prevention
A person's risk of developing heart failure is inversely related to their level of physical activity. Those who achieved at least 500 MET-minutes/week (the recommended minimum by U.S. guidelines) had a lower heart failure risk than individuals who did not report exercising during their free time; the reduction in heart failure risk was even greater in those who engaged in higher levels of physical activity than the recommended minimum.
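As a worked example of the 500 MET-minutes/week threshold cited above: MET-minutes are simply intensity times duration times frequency. The sketch below assumes a brisk-walking intensity of roughly 3.5 METs, an approximate compendium value not taken from this article.

```python
def met_minutes_per_week(met: float, minutes_per_session: float, sessions_per_week: int) -> float:
    """MET-minutes/week = activity intensity (METs) x minutes per session x sessions per week."""
    return met * minutes_per_session * sessions_per_week

# Brisk walking at roughly 3.5 METs (assumed value), 30 minutes, 5 days a week:
weekly = met_minutes_per_week(met=3.5, minutes_per_session=30, sessions_per_week=5)
print(weekly, weekly >= 500)  # 525.0 True -> meets the 500 MET-min/week minimum cited above
```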
Heart failure can also be prevented by lowering high blood pressure and high blood cholesterol, and by controlling diabetes. Maintaining a healthy weight and decreasing sodium, alcohol, and sugar intake may help. Additionally, avoiding tobacco use has been shown to lower the risk of heart failure. According to Johns Hopkins and the American Heart Association, there are several ways to help prevent a cardiac event: Johns Hopkins notes that stopping tobacco use, reducing high blood pressure, physical activity, and diet can drastically affect the chances of developing heart disease. High blood pressure accounts for most cardiovascular deaths. It can be lowered into the normal range by dietary choices such as consuming less salt; exercise also helps to bring blood pressure back down. One of the best ways to help avoid heart failure is to promote healthier eating habits, such as eating more vegetables, fruits, grains, and lean protein.

Diabetes is a major risk factor for heart failure. For women with coronary heart disease (CHD), diabetes was the strongest risk factor for heart failure. Diabetic women with depressed creatinine clearance or elevated BMI were at the highest risk of heart failure. While the annual incidence rate of heart failure for non-diabetic women with no risk factors is 0.4%, the annual incidence rates for diabetic women with an elevated body mass index (BMI) or a depressed creatinine clearance were 7% and 13%, respectively.
Management
Treatment focuses on improving the symptoms and preventing the progression of the disease. Reversible causes of heart failure also need to be addressed (e.g. infection, alcohol ingestion, anemia, thyrotoxicosis, arrhythmia, and hypertension). Treatments include lifestyle and pharmacological modalities, and occasionally various forms of device therapy. Rarely, cardiac transplantation is used as an effective treatment when heart failure has reached the end stage.
Acute decompensation
In acute decompensated heart failure, the immediate goal is to re-establish adequate perfusion and oxygen delivery to end organs. This entails ensuring that airway, breathing, and circulation are adequate. Immediate treatments usually involve some combination of vasodilators such as nitroglycerin, diuretics such as furosemide, and possibly noninvasive positive pressure ventilation. Supplemental oxygen is indicated in those with oxygen saturation levels below 90%, but is not recommended in those with normal oxygen levels in normal atmosphere.
Chronic management
The goals of the treatment for people with chronic heart failure are the prolongation of life, prevention of acute decompensation, and reduction of symptoms, allowing for greater activity.
Heart failure can result from a variety of conditions. In considering therapeutic options, excluding reversible causes is of primary importance, including thyroid disease, anemia, chronic tachycardia, alcohol use disorder, hypertension, and dysfunction of one or more heart valves. Treatment of the underlying cause is usually the first approach to treating heart failure. In the majority of cases, though, either no primary cause is found or treatment of the primary cause does not restore normal heart function. In these cases, behavioral, medical and device treatment strategies exist that can provide a significant improvement in outcomes, including the relief of symptoms, exercise tolerance, and a decrease in the likelihood of hospitalization or death. Breathlessness rehabilitation for chronic obstructive pulmonary disease and heart failure has been proposed with exercise training as a core component. Rehabilitation should also include other interventions to address shortness of breath including psychological and educational needs of people and needs of caregivers. Iron supplementation appears useful in those with iron deficiency anemia and heart failure.
Advance care planning
The latest evidence indicates that advance care planning (ACP) may help to increase documentation by medical staff regarding discussions with participants and may improve an individual's depression. ACP involves discussing an individual's future care plan in consideration of the individual's preferences and values. The findings are, however, based on low-quality evidence.
Monitoring
The various measures often used to assess the progress of people being treated for heart failure include fluid balance (calculation of fluid intake and excretion) and monitoring body weight (which in the shorter term reflects fluid shifts). Remote monitoring can be effective to reduce complications for people with heart failure.
Lifestyle
Behavior modification is a primary consideration in chronic heart failure management programs, with dietary guidelines regarding fluid and salt intake. Fluid restriction is important to reduce fluid retention in the body and to correct hyponatremia. The evidence of benefit from reducing salt, however, was poor as of 2018.
Exercise and physical activity
Exercise should be encouraged and tailored to suit individuals' capabilities. A meta-analysis found that centre-based group interventions delivered by a physiotherapist are helpful in promoting physical activity in HF. There is a need for additional training for physiotherapists in delivering behaviour change interventions alongside an exercise programme. An intervention is expected to be more efficacious in encouraging physical activity than usual care if it includes prompts and cues to walk or exercise, like a phone call or a text message. It is extremely helpful if a trusted clinician provides explicit advice to engage in physical activity ("credible source"). Another highly effective strategy is to place objects that will serve as a cue to engage in physical activity in the everyday environment of the patient ("adding objects to the environment"; e.g., an exercise step or treadmill). Encouragement to walk or exercise in various settings beyond cardiac rehabilitation (e.g., home, neighbourhood, parks) is also promising ("generalisation of target behaviour"). Additional promising strategies are graded tasks (e.g., gradual increases in the intensity and duration of exercise training), self-monitoring, monitoring of physical activity by others without feedback, action planning, and goal-setting. The inclusion of regular physical conditioning as part of a cardiac rehabilitation program can significantly improve quality of life and reduce the risk of hospital admission for worsening symptoms, but no evidence shows a reduction in mortality rates as a result of exercise. Despite cardiac rehabilitation being a recommended treatment for patients with heart failure with reduced ejection fraction (HFrEF), it is still underused. The reasons for this are complex and heterogeneous, and encompass healthcare system-, referring physician-, program-, and patient-level barriers. According to Alexandre et al., the main reasons for HFrEF patients not being enrolled in cardiac rehabilitation were no medical referral (31%), concomitant medical problems (28%), patient refusal (11%), and geographical distance to the hospital (9%). Furthermore, whether this evidence can be extended to people with HFpEF, or to those whose exercise regimen takes place entirely at home, is unclear.

Home visits and regular monitoring at heart-failure clinics reduce the need for hospitalization and improve life expectancy.
Medication
Quadruple medical therapy using a combination of angiotensin-receptor neprilysin inhibitors (ARNI), beta blockers, mineralocorticoid receptor antagonists (MRA), and sodium glucose cotransporter-2 inhibitors (SGLT2 inhibitors) is now the standard of care as of 2021.
First line medications
First-line therapy for people with heart failure due to reduced systolic function should include angiotensin-converting enzyme (ACE) inhibitors (ACE-I), or angiotensin receptor blockers (ARBs) if the person develops a long-term cough as a side effect of the ACE-I. Use of medicines from these classes is associated with improved survival, fewer hospitalizations for heart failure exacerbations, and improved quality of life in people with heart failure.

Beta-adrenergic blocking agents (beta blockers) also form part of the first line of treatment, adding to the improvement in symptoms and mortality provided by ACE-I/ARB. The mortality benefit of beta blockers in people with systolic dysfunction who also have atrial fibrillation is more limited than in those who do not have it. If the ejection fraction is not diminished (HFpEF), the benefits of beta blockers are more modest; a decrease in mortality has been observed, but a reduction in hospital admissions for uncontrolled symptoms has not.

In people who are intolerant of ACE-I and ARBs or who have significant kidney dysfunction, the use of combined hydralazine and a long-acting nitrate, such as isosorbide dinitrate, is an effective alternative strategy. This regimen has been shown to reduce mortality in people with moderate heart failure. It is especially beneficial in the black population.

In people with symptomatic heart failure with markedly reduced ejection fraction (anyone with an ejection fraction of 35% or lower, or less than 40% if following a heart attack), the use of a mineralocorticoid antagonist, such as spironolactone or eplerenone, in addition to beta blockers and ACE-I (once titrated to the target dose or maximum tolerated dose), can improve symptoms and reduce mortality.

The ARNI sacubitril/valsartan should be used in those who still have symptoms while on an ACE-I or ARB, beta blocker, and a mineralocorticoid receptor antagonist, as it reduces the risks of cardiovascular mortality and hospitalisation for heart failure by a further 4.7% (absolute risk reduction). However, the use of this combination agent requires the cessation of ACE-I or ARB therapy 48 hours before its initiation.

SGLT2 inhibitors are the newest class of medicines for heart failure.
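As a quick arithmetic aside on the ARNI figure above: an absolute risk reduction of 4.7% corresponds to a number needed to treat of about 21. A minimal sketch, assuming only the standard NNT = 1/ARR relationship:

```python
def number_needed_to_treat(absolute_risk_reduction: float) -> float:
    """NNT is the reciprocal of the absolute risk reduction."""
    return 1.0 / absolute_risk_reduction

# 4.7% absolute risk reduction quoted above for the ARNI combination:
print(round(number_needed_to_treat(0.047)))  # ~21 people treated to prevent one event
```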
Other medications
Second-line medications for CHF do not confer a mortality benefit. Digoxin is one such medication. Its narrow therapeutic window, a high degree of toxicity, and the failure of multiple trials to show a mortality benefit have reduced its role in clinical practice. It is now used in only a small number of people with refractory symptoms, who are in atrial fibrillation, and/or who have chronic hypotension.

Diuretics have been a mainstay of treatment against symptoms of fluid accumulation, and include classes such as loop diuretics (such as furosemide), thiazide-like diuretics, and potassium-sparing diuretics. Although widely used, evidence on their efficacy and safety is limited, with the exception of mineralocorticoid antagonists such as spironolactone. Mineralocorticoid antagonists in those under 75 years old appear to decrease the risk of death.

Anemia is an independent factor in mortality in people with chronic heart failure. Treatment of anemia significantly improves quality of life for those with heart failure, often with a reduction in severity of the NYHA classification, and also improves mortality rates. The European Society of Cardiology guideline in 2016 recommends screening for iron-deficiency anemia and treating with intravenous iron if deficiency is found.

The decision to anticoagulate people with HF, typically those with left ventricular ejection fractions <35%, is debated, but generally anticoagulation is considered in people with coexisting atrial fibrillation, a prior embolic event, or conditions that increase the risk of an embolic event, such as amyloidosis, left ventricular noncompaction, familial dilated cardiomyopathy, or a thromboembolic event in a first-degree relative.

Vasopressin receptor antagonists can also be used to treat heart failure. Conivaptan is the first medication approved by the US Food and Drug Administration for the treatment of euvolemic hyponatremia in those with heart failure. In rare cases, hypertonic 3% saline together with diuretics may be used to correct hyponatremia.

Ivabradine is recommended for people with symptomatic heart failure with reduced left ventricular ejection fraction who are receiving optimized guideline-directed therapy (as above), including the maximum tolerated dose of beta blocker, who have a normal heart rhythm, and who continue to have a resting heart rate above 70 beats per minute. Ivabradine has been found to reduce the risk of hospitalization for heart failure exacerbations in this subgroup of people.
Implanted devices
In people with severe cardiomyopathy (left ventricular ejection fraction below 35%), or in those with recurrent VT or malignant arrhythmias, treatment with an automatic implantable cardioverter-defibrillator (AICD) is indicated to reduce the risk of severe life-threatening arrhythmias. The AICD does not improve symptoms or reduce the incidence of malignant arrhythmias, but does reduce mortality from those arrhythmias, often in conjunction with antiarrhythmic medications. In people with a left ventricular ejection fraction (LVEF) below 35%, the incidence of ventricular tachycardia or sudden cardiac death is high enough to warrant AICD placement. Its use is therefore recommended in AHA/ACC guidelines.

Cardiac contractility modulation (CCM) is a treatment for people with moderate to severe left ventricular systolic heart failure (NYHA class II–IV), which enhances both the strength of ventricular contraction and the heart's pumping capacity. The CCM mechanism is based on stimulation of the cardiac muscle by non-excitatory electrical signals, which are delivered by a pacemaker-like device. CCM is particularly suitable for the treatment of heart failure with normal QRS complex duration (120 ms or less) and has been demonstrated to improve symptoms, quality of life, and exercise tolerance. CCM is approved for use in Europe, and was approved by the Food and Drug Administration for use in the United States in 2019.

About one-third of people with LVEF below 35% have markedly altered conduction to the ventricles, resulting in dyssynchronous depolarization of the right and left ventricles. This is especially problematic in people with left bundle branch block (blockage of one of the two primary conducting fiber bundles that originate at the base of the heart and carry depolarizing impulses to the left ventricle). Using a special pacing algorithm, biventricular cardiac resynchronization therapy (CRT) can initiate a normal sequence of ventricular depolarization. In people with LVEF below 35% and prolonged QRS duration on ECG (LBBB or QRS of 150 ms or more), an improvement in symptoms and mortality occurs when CRT is added to standard medical therapy. However, in the two-thirds of people without prolonged QRS duration, CRT may actually be harmful.
Surgical therapies
People with the most severe heart failure may be candidates for ventricular assist devices, which have commonly been used as a bridge to heart transplantation but have more recently been used as a destination treatment for advanced heart failure.

In select cases, heart transplantation can be considered. While this may resolve the problems associated with heart failure, the person must generally remain on an immunosuppressive regimen to prevent rejection, which has its own significant downsides. A major limitation of this treatment option is the scarcity of hearts available for transplantation.
Palliative care
People with heart failure often have significant symptoms, such as shortness of breath and chest pain. Palliative care should be initiated early in the HF trajectory, and should not be an option of last resort. Palliative care can not only provide symptom management, but also assist with advance care planning, goals of care in the case of a significant decline, and making sure the person has a medical power of attorney and has discussed his or her wishes with this individual. Reviews from 2016 and 2017 found that palliative care is associated with improved outcomes, such as quality of life, symptom burden, and satisfaction with care.

Without transplantation, heart failure may not be reversible, and heart function typically deteriorates with time. The growing number of people with stage IV heart failure (intractable symptoms of fatigue, shortness of breath, or chest pain at rest despite optimal medical therapy) should be considered for palliative care or hospice, according to American College of Cardiology/American Heart Association guidelines.
Prognosis
Prognosis in heart failure can be assessed in multiple ways, including clinical prediction rules and cardiopulmonary exercise testing. Clinical prediction rules use a composite of clinical factors, such as laboratory tests and blood pressure, to estimate prognosis. Among several clinical prediction rules for prognosticating acute heart failure, the EFFECT rule slightly outperformed other rules in stratifying people and identifying those at low risk of death during hospitalization or within 30 days. Easy methods for identifying people who are at low risk are as follows (both rules are sketched in code after this section):
ADHERE Tree rule indicates that people with blood urea nitrogen < 43 mg/dL and systolic blood pressure at least 115 mm Hg have less than 10% chance of inpatient death or complications.
BWH rule indicates that people with systolic blood pressure over 90 mm Hg, a respiratory rate of 30 or fewer breaths per minute, serum sodium over 135 mmol/L, and no new ST–T wave changes have less than a 10% chance of inpatient death or complications.

A very important method for assessing prognosis in people with advanced heart failure is cardiopulmonary exercise testing (CPX testing). CPX testing is usually required prior to heart transplantation as an indicator of prognosis. It involves measurement of exhaled oxygen and carbon dioxide during exercise. The peak oxygen consumption (VO2 max) is used as an indicator of prognosis. As a general rule, a VO2 max less than 12–14 cc/kg/min indicates poor survival and suggests that the person may be a candidate for a heart transplant. People with a VO2 max <10 cc/kg/min have a clearly poorer prognosis. The most recent International Society for Heart and Lung Transplantation guidelines also suggest two other parameters that can be used for evaluation of prognosis in advanced heart failure: the heart failure survival score, and the use of a criterion of VE/VCO2 slope > 35 from the CPX test. The heart failure survival score is calculated using a combination of clinical predictors and the VO2 max from the CPX test.
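The two low-risk rules and the VO2 max cut-offs quoted above are simple enough to encode directly; the sketch below is illustrative, with function names and the hypothetical input values chosen for this example.

```python
def adhere_low_risk(bun_mg_dl: float, sbp_mm_hg: float) -> bool:
    """ADHERE tree rule as stated above: BUN < 43 mg/dL and systolic BP at least 115 mmHg."""
    return bun_mg_dl < 43 and sbp_mm_hg >= 115

def bwh_low_risk(sbp_mm_hg: float, resp_rate: int,
                 sodium_mmol_l: float, new_st_t_changes: bool) -> bool:
    """BWH rule as stated above: SBP > 90 mmHg, respiratory rate <= 30/min,
    serum sodium > 135 mmol/L, and no new ST-T wave changes."""
    return (sbp_mm_hg > 90 and resp_rate <= 30
            and sodium_mmol_l > 135 and not new_st_t_changes)

def vo2_prognosis(vo2_max_cc_kg_min: float) -> str:
    """VO2 max cut-offs quoted above for CPX testing (upper bound of the
    12-14 cc/kg/min range used here)."""
    if vo2_max_cc_kg_min < 10:
        return "clearly poorer prognosis"
    if vo2_max_cc_kg_min < 14:
        return "poor survival; possible transplant candidate"
    return "better prognosis"

# Hypothetical inputs for illustration:
print(adhere_low_risk(bun_mg_dl=30, sbp_mm_hg=120))        # True
print(bwh_low_risk(100, 24, 138, new_st_t_changes=False))  # True
print(vo2_prognosis(11.5))  # poor survival; possible transplant candidate
```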
Heart failure is associated with significantly reduced physical and mental health, resulting in a markedly decreased quality of life. With the exception of heart failure caused by reversible conditions, the condition usually worsens with time. Although some people survive many years, progressive disease is associated with an overall annual mortality rate of 10%.

Around 18 of every 1000 persons will experience an ischemic stroke during the first year after diagnosis of HF. As the duration of follow-up increases, the stroke rate rises to nearly 50 strokes per 1000 cases of HF by 5 years.
Epidemiology
In 2022, heart failure affected about 64 million people globally. Overall, around 2% of adults have heart failure, and in those over the age of 65, this increases to 6–10%. Above 75 years old, rates are greater than 10%.

Rates are predicted to increase, mostly because of increasing lifespan, but also because of increased risk factors (hypertension, diabetes, dyslipidemia, and obesity) and improved survival rates from other types of cardiovascular disease (myocardial infarction, valvular disease, and arrhythmias). Heart failure is the leading cause of hospitalization in people older than 65.
United States
In the United States, heart failure affects 5.8 million people, and each year 550,000 new cases are diagnosed. In 2011, heart failure was the most common reason for hospitalization among adults aged 85 years and older, and the second-most common among adults aged 65–84 years. An estimated one in five adults at age 40 will develop heart failure during their remaining lifetime, and about half of people who develop heart failure die within 5 years of diagnosis. The prevalence of heart failure is much higher in African Americans, Hispanics, Native Americans, and recent immigrants from Eastern Bloc countries such as Russia. The high prevalence in these ethnic minority populations has been linked to a high incidence of diabetes and hypertension. In many new immigrants to the U.S., the high prevalence of heart failure has largely been attributed to a lack of preventive health care or substandard treatment. Nearly one of every four people (24.7%) hospitalized in the U.S. with congestive heart failure is readmitted within 30 days. Additionally, more than 50% of people are readmitted within 6 months after treatment, and the average duration of hospital stay is 6 days.
Heart failure is a leading cause of hospital readmissions in the U.S. People aged 65 and older were readmitted at a rate of 24.5 per 100 admissions in 2011. In the same year, people under Medicaid were readmitted at a rate of 30.4 per 100 admissions, and uninsured people were readmitted at a rate of 16.8 per 100 admissions. These are the highest readmission rates for both categories. Notably, heart failure was not among the top-10 conditions with the most 30-day readmissions among the privately insured.
United Kingdom
In the UK, despite moderate improvements in prevention, heart failure rates have increased due to population growth and ageing. Overall heart failure rates are similar to the four most common causes of cancer (breast, lung, prostate, and colon) combined. People from deprived backgrounds are more likely to be diagnosed with heart failure and at a younger age.
Developing world
In tropical countries, the most common cause of HF is valvular heart disease or some type of cardiomyopathy. As underdeveloped countries have become more affluent, the incidences of diabetes, hypertension, and obesity have increased, which have in turn raised the incidence of heart failure.
Sex
Men have a higher incidence of heart failure, but the overall prevalence rate is similar in both sexes since women survive longer after the onset of heart failure. Women tend to be older when diagnosed with heart failure (after menopause), they are more likely than men to have diastolic dysfunction, and seem to experience a lower overall quality of life than men after diagnosis.
Ethnicity
Some sources state that people of Asian descent are at a higher risk of heart failure than other ethnic groups. Other sources, however, have found rates of heart failure similar to those in other ethnic groups.
History
For centuries, the disease entity which would include many cases of what today would be called heart failure was dropsy; the term denotes generalized edema, a major manifestation of a failing heart, though also caused by other diseases. Writings of ancient civilizations include evidence of their acquaintance with dropsy and heart failure: Egyptians were the first to use bloodletting to relieve fluid accumulation and shortage of breath, and provided what may have been the first documented observations on heart failure in the Ebers papyrus (around 1500 BCE); Greeks described cases of dyspnea, fluid retention, and fatigue compatible with heart failure; Romans used the flowering plant Drimia maritima (sea squill), which contains cardiac glycosides, for the treatment of dropsy; descriptions pertaining to heart failure are also known in the civilizations of ancient India and China. However, the manifestations of a failing heart were understood in the context of these peoples' medical theories – including ancient Egyptian religion, the Hippocratic theory of humours, or ancient Indian and Chinese medicine – and the current concept of heart failure had not yet developed. Although shortage of breath had been connected to heart disease by Avicenna around 1000 CE, decisive for the modern understanding of the nature of the condition were the description of the pulmonary circulation by Ibn al-Nafis in the 13th century, and of the systemic circulation by William Harvey in 1628. The role of the heart in fluid retention began to be better appreciated as dropsy of the chest (fluid accumulation in and around the lungs causing shortage of breath) became more familiar, and the current concept of heart failure, which brings together swelling and shortage of breath due to fluid retention, began to be accepted in the 17th and especially the 18th century: Richard Lower linked dyspnea and foot swelling in 1679, and Giovanni Maria Lancisi connected jugular vein distention with right ventricular failure in 1728. Dropsy attributable to other causes, e.g. kidney failure, was differentiated in the 19th century. The stethoscope, invented by René Laennec in 1819, X-rays, discovered by Wilhelm Röntgen in 1895, and electrocardiography, described by Willem Einthoven in 1903, facilitated the investigation of heart failure. The 19th century also saw experimental and conceptual advances in the physiology of heart contraction, which led to the formulation of the Frank–Starling law of the heart (named after physiologists Otto Frank and Ernest Starling), a remarkable advance in understanding the mechanisms of heart failure.

One of the earliest treatments of heart failure, relief of swelling by bloodletting with various methods, including leeches, continued through the centuries. Along with bloodletting, Jean-Baptiste de Sénac in 1749 recommended opiates for acute shortage of breath due to heart failure. In 1785, William Withering described the therapeutic uses of the foxglove genus of plants in the treatment of edema; their extract contains cardiac glycosides, including digoxin, still used today in the treatment of heart failure. The diuretic effects of inorganic mercury salts, which were used to treat syphilis, had already been noted in the 16th century by Paracelsus; in the 19th century they were used by noted physicians such as John Blackall and William Stokes. In the meantime, cannulae (tubes) invented by the English physician Reginald Southey in 1877 were another method of removing excess fluid, by direct insertion into swollen limbs.
Use of organic mercury compounds as diuretics, beyond their role in syphilis treatment, started in 1920, though it was limited by their parenteral route of administration and their side-effects. Oral mercurial diuretics were introduced in the 1950s; so were thiazide diuretics, which caused less toxicity, and are still used today. Around the same time, invention of echocardiography by Inge Edler and Hellmuth Hertz in 1954 marked a new era in the evaluation of heart failure. In the 1960s, loop diuretics were added to available treatments of fluid retention, while a patient with heart failure received the first heart transplant by Christiaan Barnard </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Ichthyosis acquisita and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Pervasive developmental disorder and can you explain it in depth? [/INST] The diagnostic category pervasive developmental disorders (PDD), as opposed to specific developmental disorders (SDD), is a group of disorders characterized by delays in the development of multiple basic functions, including socialization and communication. The pervasive developmental disorders include autism, Asperger syndrome, pervasive developmental disorder not otherwise specified (PDD-NOS, i.e., all autism spectrum disorders [ASD]), childhood disintegrative disorder (CDD), overactive disorder associated with mental retardation and stereotyped movements, and Rett syndrome. The first four of these disorders are commonly called the autism spectrum disorders; the last disorder is much rarer, and is sometimes placed in the autism spectrum and sometimes not.

The terminology PDD and ASD is often used interchangeably and varies depending on location. The two have overlapping definitions but are defined differently by the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-V), and the International Classification of Diseases, 10th edition (ICD-10). The DSM-V removed PDD as a diagnosis and replaced it with ASD and the relative severity of the condition. The ICD-10, on the other hand, labels ASD as a pervasive developmental disorder with the subtypes previously mentioned.

The onset of pervasive developmental disorders occurs during infancy, but the condition is usually not identified until the child is around three years old. Parents may begin to question the health of their child when developmental milestones are not met, including age-appropriate motor movement and speech production.

There is a division among doctors on the use of the term PDD. Many use the term PDD as a short way of saying PDD-NOS (pervasive developmental disorder not otherwise specified). Others use the general category label of PDD because they are hesitant to diagnose very young children with a specific type of PDD, such as autism. Both approaches contribute to confusion about the term, because the term PDD actually refers to a category of disorders and is not a diagnostic label.
Signs and symptoms
Symptoms of PDD may include behavioral and communication problems such as:
Difficulty using and understanding language
Difficulty relating to people, objects, and events; for example, lack of eye contact, pointing behavior, and lack of facial responses
Unusual play with toys and other objects.
Paranoia, a characteristic form of social anxiety, derealization, transient psychosis, and unconventional beliefs if the environment or routine is changed without notice
Repetitive body movements or behavior patterns, such as hand flapping, hair twirling, foot tapping, or more complex movements
Difficulty regulating behaviors and emotions, which may result in temper tantrums, anxiety, and aggression
Emotional breakdowns
Delusional or unconventional perception of the world
Maladaptive daydreaming
Mirrored-self misidentification – the delusion that one's reflection in the mirror is a child, even though one is an older teen or an adult
Degrees
Children with PDD vary widely in abilities, intelligence, and behaviors. Some children do not speak at all, others speak in limited phrases or conversations, and some have relatively normal language development. Repetitive play skills and limited social skills are generally evident. Unusual responses to sensory information – loud noises, lights – are common.
Diagnosis
Diagnosis is usually made during early childhood. With the release of the Diagnostic and Statistical Manual of Mental Disorders–5th Edition (DSM-V) in May 2013, the diagnosis for PDD was removed and replaced with autism spectrum disorders; the former disorders are now distinguished by a series of severity levels. Individuals who received diagnoses based on the DSM-IV maintain their diagnosis under the autism spectrum disorders. However, an editorial published in the October 2012 issue of the American Journal of Psychiatry notes that, while some doctors argue that there is insufficient evidence to support the diagnostic distinction between ASD and PDD, multiple literature reviews found that studies showing significant differences between the two disorders significantly outnumbered those that found no difference.

Unlike the DSM-V, the World Health Organization's International Classification of Diseases, 10th edition (ICD-10) categorizes PDD into four distinct subtypes, each with their own diagnostic criteria. The four disorders (childhood autism, atypical autism, Rett syndrome, and other childhood disintegrative disorder) are characterized by abnormalities in social interactions and communication. The disorders are primarily diagnosed based on behavioral features; although the presence of any medical conditions is important, it is not taken into account when making a diagnosis.

Before the release of the DSM-V, some clinicians used PDD-NOS as a "temporary" diagnosis for children under the age of five when, for whatever reason, they were reluctant to diagnose autism. There are several justifications for this. Very young children have limited social interaction and communication skills to begin with, so it can be tricky to diagnose milder cases of autism in toddlers. The unspoken assumption is that by the age of five, unusual behaviors will either resolve or develop into diagnosable autism. However, some parents view the PDD label as no more than a euphemism for autism spectrum disorders, problematic because this label makes it more difficult to receive aid for early childhood intervention.
Classification
The pervasive developmental disorders were:
Pervasive developmental disorder not otherwise specified (PDD-NOS), which includes atypical autism, and is the most common (47% of autism diagnoses);
Typical autism, the best-known;
Asperger syndrome (9% of autism diagnoses);
Rett syndrome; and
Childhood disintegrative disorder (CDD).

The first three of these disorders are commonly called the autism spectrum disorders; the last two disorders are much rarer, and are sometimes placed in the autism spectrum and sometimes not.

In May 2013, the Diagnostic and Statistical Manual–5th Edition (DSM-V) was released, updating the classification for pervasive developmental disorders. The grouping of disorders, including PDD-NOS, autism, Asperger syndrome, Rett syndrome, and CDD, has been removed and replaced with the general term of autism spectrum disorders. The American Psychiatric Association has concluded that using the general diagnosis of ASD supports more accurate diagnoses. The combination of these disorders was also fueled by the standpoint that autism is characterized by common symptoms and should therefore bear a single diagnostic term. In order to distinguish between the different disorders, the DSM-V employs severity levels. The severity levels take into account required support, restricted interests and repetitive behaviors, and deficits in social communication.
PDD and PDD-NOS
There is a division among doctors on the use of the term PDD </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Polymyositis and can you explain it in depth? [/INST] Polymyositis (PM) is a type of chronic inflammation of the muscles (inflammatory myopathy) related to dermatomyositis and inclusion body myositis. Its name means "inflammation of many muscles" (poly- + myos- + -itis). The inflammation of polymyositis is mainly found in the endomysial layer of skeletal muscle, whereas dermatomyositis is characterized primarily by inflammation of the perimysial layer of skeletal muscles.
Signs and symptoms
The hallmark of polymyositis is weakness and/or loss of muscle mass in the proximal musculature, as well as flexion of the neck and torso. These symptoms can be associated with marked pain in these areas as well. The hip extensors are often severely affected, leading to particular difficulty in climbing stairs and rising from a seated position. The skin involvement of dermatomyositis is absent in polymyositis. Dysphagia (difficulty swallowing) or other problems with esophageal motility occur in as many as one-third of patients. Low-grade fever and enlarged lymph nodes may be present. Foot drop in one or both feet can be a symptom of advanced polymyositis and inclusion body myositis. The systemic involvement of polymyositis includes interstitial lung disease (ILD) and heart disease, such as heart failure and conduction abnormalities.

Polymyositis tends to become evident in adulthood, presenting with bilateral proximal muscle weakness often noted in the upper legs due to early fatigue while walking. Sometimes the weakness presents itself as an inability to rise from a seated position without help or an inability to raise one's arms above one's head. The weakness is generally progressive, accompanied by lymphocytic inflammation (mainly cytotoxic T cells).
Associated illnesses
Polymyositis and the associated inflammatory myopathies carry an increased risk of cancer. Features associated with an increased risk of cancer include older age (greater than 45), male sex, difficulty swallowing, death of skin cells, cutaneous vasculitis, rapid onset of myositis (<4 weeks), elevated creatine kinase, a higher erythrocyte sedimentation rate, and higher C-reactive protein levels </s>
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Trichuriasis and can you explain it in depth? [/INST] Trichuriasis, also known as whipworm infection, is an infection by the parasitic worm Trichuris trichiura (whipworm). If infection is only with a few worms, there are often no symptoms. In those who are infected with many worms, there may be abdominal pain, fatigue, and diarrhea. The diarrhea sometimes contains blood. Infections in children may cause poor intellectual and physical development. Low red blood cell levels may occur due to loss of blood.

The disease is usually spread when people eat food or drink water that contains the eggs of these worms. This may occur when contaminated vegetables are not fully cleaned or cooked. Often these eggs are in the soil in areas where people defecate outside and where untreated human feces is used as fertilizer. These eggs originate from the feces of infected people. Young children playing in such soil and putting their hands in their mouths also become infected easily. The worms live in the large bowel and are about four centimetres in length. Whipworm is diagnosed by seeing the eggs when examining the stool with a microscope. Eggs are barrel-shaped. Trichuriasis belongs to the group of soil-transmitted helminthiases.

Prevention is by properly cooking food and hand washing before cooking. Other measures include improving access to sanitation, such as ensuring the use of functional and clean toilets, and access to clean water. In areas of the world where the infections are common, entire groups of people are often treated all at once and on a regular basis. Treatment is with three days of the medication albendazole, mebendazole, or ivermectin. People often become infected again after treatment.

Whipworm infection affected about 464 million people in 2015. It is most common in tropical countries. Those infected with whipworm often also have hookworm and ascariasis infections. These diseases have a large effect on the economy of many countries. Work is ongoing to develop a vaccine against the disease. Trichuriasis is classified as a neglected tropical disease.
Signs and symptoms
Light infestations (<100 worms) frequently have no symptoms. Heavier infestations, especially in small children, can present gastrointestinal problems, including abdominal pain and distension, bloody or mucus-filled diarrhea, and tenesmus (a feeling of incomplete defecation, generally accompanied by involuntary straining). Mechanical damage to the intestinal mucosa may occur, as well as toxic or inflammatory damage to the intestines of the host. While appendicitis may be brought on by damage and edema of the adjacent tissue, if there are large numbers of worms or larvae present, it has been suggested that the embedding of the worms into the ileocecal region may also make the host susceptible to bacterial infection. A severe infection with high numbers of embedded worms in the rectum leads to edema, which can cause rectal prolapse, although this is typically only seen in small children. The prolapsed, inflamed and edematous rectal tissue may even show visible worms.

Physical growth delay, weight loss, nutritional deficiencies, and anemia (due to long-standing blood loss) are also characteristic of infection, and these symptoms are more prevalent and severe in children. The infection does not commonly cause eosinophilia.

Coinfection of T. trichiura with other parasites is common and, with larger worm burdens, can cause both exacerbation of dangerous trichuriasis symptoms such as massive gastrointestinal bleeding (shown to be especially dramatic with coinfection with Salmonella typhi) and exacerbation of symptoms and pathogenesis of the other parasitic infection (as is typical with coinfection with Schistosoma mansoni, in which higher worm burden and liver egg burden is common). Parasitic coinfection with HIV/AIDS, tuberculosis, and malaria is also common, especially in sub-Saharan Africa, and helminth coinfection adversely affects the natural history and progression of HIV/AIDS, tuberculosis, and malaria and can increase clinical malaria severity. In a study performed in Senegal, infections with soil-transmitted helminths like T. trichiura (as well as schistosome infections independently) were associated with enhanced risk and increased incidence of malaria.

Heavy infestations may cause bloody diarrhea. Long-standing blood loss may lead to iron-deficiency anemia. Vitamin A deficiency may also result from infection.
Cause
Trichuriasis is caused by a parasitic worm (a type of helminth) called Trichuris trichiura. It belongs to the genus Trichuris, formerly known as Trichocephalus, meaning "hair head", which would be a more accurate name; however, the generic name is now Trichuris, which means "hair tail" (implying that the posterior end of the worm is the attenuated section). Infections by parasitic worms are known as helminthiasis.
Reservoir
Humans are the main, but not the only reservoir for T. trichiura. Recent research verified by the application of molecular techniques (PCR) that dogs are a reservoir for T. trichiura, as well as T. vulpis.
Vector
Non-biting cyclorrhaphan flies (Musca domestica, M. sorbens, Chrysomya rufifacies, C. bezziana, Lucilia cuprina, Calliphora vicina, and Wohlfahrtia magnifica) have been found to carry Trichuris trichiura. A study in two localized areas in Ethiopia found cockroaches were carriers for several human intestinal parasites, including T. trichiura.
Transmission
Humans can become infected with the parasite due to ingestion of infective eggs by mouth contact with hands or food contaminated with egg-carrying soil. However, there have also been rare reported cases of transmission of T. trichiura by sexual contact. Some major outbreaks have been traced to contaminated vegetables (due to presumed soil contamination).
Life cycle
Unembryonated eggs (unsegmented) are passed in the feces of a previous host to the soil. In the soil, these eggs develop into a 2-cell stage (segmented egg) and then into an advanced cleavage stage. Once at this stage, the eggs embryonate and then become infective, a process that occurs in about 15 to 30 days. Next, the infective eggs are ingested by way of soil-contaminated hands or food and hatch inside the small intestine, releasing larvae into the gastrointestinal tract. These larvae burrow into a villus and develop into adults over 2–3 days. They then migrate to the cecum and ascending colon, where they thread their anterior portion (whip-like end) into the tissue mucosa and reside permanently for their year-long lifespan. About 60 to 70 days after infection, female adults begin to release unembryonated eggs (oviposit) into the cecum at a rate of 3,000 to 20,000 eggs per day, completing the life cycle.
Incubation period
The exact incubation period of T. trichiura is unknown; however, immature eggs in soil under favorable conditions take about three weeks to mature (15–30 days, with a minimum of about 10 days) before they are infective to a human host. Favorable conditions for maturation of eggs are warm to temperate climates with adequate humidity or precipitation, as ova are resistant to cold but not to drying.
Once ingested, the larva will remain dug into a villus in the small intestine for about 2–3 days until it is fully developed for migration to the ileocecal section of the gastrointestinal tract.
The average total life span of T. trichiura is one year, although there have been longer cases reported, lasting as long as five years (Note: inadequate treatment and re-infection are likely to play a role in this).
Morphology
Adult worms are usually 3–5 centimetres (1.2–2.0 in) long, with females being larger than males, as is typical of nematodes. The thin, clear majority of the body (the anterior, whip-like end) is the esophagus, and it is the end that the worm threads into the mucosa of the colon. The widened, pinkish-gray region of the body is the posterior, and it is the end that contains the parasite's intestines and reproductive organs.
T. trichiura eggs are prolate spheroids, the shape of the balls used in Rugby and Gridiron football. They are about 50–54 μm (0.0020–0.0021 in) long and have polar plugs (also known as refractile prominences) at each end.
Diagnosis
A stool ova and parasites exam reveals the presence of typical whipworm eggs. Typically, the Kato-Katz thick-smear technique is used for identification of the Trichuris trichiura eggs in the stool sample. Trichuris eggs often appear larger and more swollen on Kato-Katz preparation compared to other techniques.

Although colonoscopy is not typically used for diagnosis, as the adult worms can be overlooked, especially with imperfect colon preparation, there have been reported cases in which colonoscopy has revealed adult worms. Colonoscopy can directly diagnose trichuriasis by identification of the threadlike form of worms with an attenuated, whip-like end. Colonoscopy has been shown to be a useful diagnostic tool, especially in patients infected with only a few male worms and with no eggs presenting in the stool sample.

Trichuriasis can be diagnosed when T. trichiura eggs are detected in stool examination. Eggs will appear barrel-shaped and unembryonated, having bipolar plugs and a smooth shell. Rectal prolapse can be diagnosed easily using a defecating proctogram and is one of many methods for imaging the parasitic infection. Sigmoidoscopy may show characteristic white bodies of adult worms hanging from inflamed mucosa ("coconut cake rectum").
Prevention
Deworming
Limited access to essential medicine poses a challenge to the eradication of trichuriasis worldwide. Rates of post-treatment re-infection also need to be determined and addressed to diminish the incidence of untreated re-infection. Lastly, as mass drug administration strategies, improved diagnosis and prompt treatment become widespread, the emergence of anthelmintic drug resistance should be monitored.

Mass drug administration (preventative chemotherapy) has had a positive effect on the disease burden of trichuriasis in East and West Africa, especially among children, who are at highest risk for infection.
Sanitation
Infection can be avoided by proper disposal of human feces, avoiding fecal contamination of food, not ingesting soil, and avoiding crops fertilized with untreated human feces. Simple and effective hygiene measures, such as washing hands and food, are recommended for control.

Improved facilities for feces disposal have decreased the incidence of whipworm. Handwashing before food handling, and avoiding ingestion of soil by thorough washing of food that may have been contaminated with egg-containing soil, are other preventive measures. In addition to washing, it is also advisable to peel and/or cook fruits and vegetables. Improvement of sanitation systems, as well as improved facilities for feces disposal, have helped to limit defecation onto soil and contain potentially infectious feces from bodily contact.

A study in a Brazilian urban centre demonstrated a significant reduction in prevalence and incidence of soil-transmitted helminthiasis, including trichuriasis, following implementation of a citywide sanitation program. A 33% reduction in the prevalence of trichuriasis and a 26% reduction in the incidence of trichuriasis were found in a study performed on 890 children aged 7–14 years within 24 different sentinel areas chosen to represent the varied environmental conditions throughout the city of Salvador, Bahia, Brazil. Control of soil fertilizers has helped eliminate the potential for contact with human fecal matter in fertilized soil.
Treatment
Trichuriasis is treated with benzimidazole anthelmintic agents such as albendazole or mebendazole, sometimes in conjunction with other medications.
Mebendazole is 90% effective in the first dose. Higher clearance rates can be obtained by combining mebendazole or albendazole with ivermectin. The safety of ivermectin in children under 15 kg (33 lb) and in pregnant women has not yet been established.
In people with diarrhea, loperamide may be added to increase the contact time between anthelmintic agents and the parasites. Oral iron supplementation may be useful in treating the iron-deficiency anemia which often accompanies trichuriasis.
Epidemiology
Regions
Infection with T. trichiura is most frequent in areas with tropical weather and poor sanitation practices. Trichuriasis occurs frequently in areas in which untreated human feces is used as fertilizer or where open defecation takes place. The prevalence of trichuriasis infection is 50 to 80 percent in some regions of Asia (noted especially in China and Korea), and infection also occurs in rural areas of the southeastern United States.
Infection estimates
T. trichiura is the third most common nematode (roundworm) infecting humans. Infection is most prevalent among children, and in North America, infection occurs frequently in immigrants from tropical or sub-tropical regions. It is estimated that 600–800 million people are infected worldwide, with 3.2 billion individuals at risk because they live in regions where this intestinal worm is common.
History
The first written record of T. trichiura was made by the Italian anatomist Giovanni Battista Morgagni, who identified the presence of the parasite in a case of worms residing in the colon in 1740. An exact morphological description and accurate drawings were first recorded in 1761 by Johann Georg Roederer, a German physician. Soon after, the name Trichuris trichiura was given to this species.
Synonyms
Human whipworm, trichocephaliasis, and tricuriasis are all synonyms for trichuriasis, the human infection by the T. trichiura intestinal nematode </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Somnolence and can you explain it in depth? [/INST] Somnolence (alternatively sleepiness or drowsiness) is a state of strong desire for sleep, or sleeping for unusually long periods (compare hypersomnia). It has distinct meanings and causes. It can refer to the usual state preceding falling asleep, the condition of being in a drowsy state due to circadian rhythm disorders, or a symptom of other health problems. It can be accompanied by lethargy, weakness and lack of mental agility.

Somnolence is often viewed as a symptom rather than a disorder by itself. However, the concept of somnolence recurring at certain times for certain reasons constitutes various disorders, such as excessive daytime sleepiness, shift work sleep disorder, and others; and there are medical codes for somnolence as viewed as a disorder.
Sleepiness can be dangerous when performing tasks that require constant concentration, such as driving a vehicle. When a person is sufficiently fatigued, microsleeps may be experienced. In individuals deprived of sleep, somnolence may spontaneously dissipate for short periods of time; this phenomenon is the second wind, and results from the normal cycling of the circadian rhythm interfering with the processes the body carries out to prepare itself to rest.
The word "somnolence" is derived from the Latin "somnus" meaning "sleep".
Causes
Circadian rhythm disorders
Circadian rhythm ("biological clock") disorders are a common cause of drowsiness as are a number of other conditions such as sleep apnea, insomnia and narcolepsy. The body clock disorders are classified as extrinsic (externally caused) or intrinsic. The former type is, for example, shift work sleep disorder, which affects people who work nights or rotating shifts. The intrinsic types include:
Advanced sleep phase disorder (ASPD) – A condition in which patients feel very sleepy and go to bed early in the evening and wake up very early in the morning
Delayed sleep phase disorder (DSPD) – Faulty timing of sleep, peak period of alertness, the core body temperature rhythm, hormonal and other daily cycles such that they occur a number of hours late compared to the norm, often misdiagnosed as insomnia
Non-24-hour sleep–wake disorder – A faulty body clock and sleep-wake cycle that usually is longer than (rarely shorter than) the normal 24-hour period causing complaints of insomnia and excessive sleepiness
Irregular sleep–wake rhythm – Numerous naps throughout the 24-hour period, no main nighttime sleep episode and irregularity from day to day
Physical illness
Sleepiness can also be a response to infection. Such somnolence is one of several sickness behaviors or reactions to infection that some theorize evolved to promote recovery by conserving energy while the body fights the infection using fever and other means. Other causes include:
Anxiety
Brain tumor
Chronic pains
Concussion – a mild traumatic brain injury
Diabetes
Fibromyalgia
Head injury
Hypercalcemia – too much calcium in the blood
Hypermagnesemia
Hyponatremia – low blood sodium
Hypothyroidism – the body doesn't produce enough of the hormones that control how cells use energy
Meningitis
Mood disorders – depression
Multiple sclerosis
Narcolepsy – disorder of the nervous system
Skull fractures
Sleeping sickness – caused by a specific parasite
Stress
Medicine
Analgesics – mostly prescribed or illicit opiates such as OxyContin or heroin
Anticonvulsants / antiepileptics – such as phenytoin (Dilantin), carbamazepine (Tegretol), pregabalin (Lyrica) and gabapentin
Antidepressants – for instance, sedating tricyclic antidepressants and mirtazapine. Somnolence is less common with SSRIs and SNRIs as well as MAOIs.
Antihistamines – for instance, diphenhydramine (Benadryl, Nytol) and doxylamine (Unisom-2)
Antipsychotics – for example, lurasidone (Latuda), thioridazine, quetiapine (Seroquel), olanzapine (Zyprexa), risperidone and ziprasidone (Geodon) but not haloperidol
Dopamine agonists used in the treatment of Parkinson's disease – e.g. pergolide, ropinirole and pramipexole.
HIV medications – such as efavirenz
Hypertension medications – such as amlodipine
Hypnotics, or soporific drugs, commonly known as sleeping pills.
Tranquilizers – such as zopiclone (Zimovane), or the benzodiazepines such as diazepam (Valium) or nitrazepam (Mogadon) and the barbiturates, such as amobarbital (Amytal) or secobarbital (Seconal)
Other agents impacting the central nervous system in sufficient or toxic doses
Assessment
Quantifying sleepiness requires a careful assessment </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Unicornuate uterus and can you explain it in depth? [/INST] A unicornuate uterus represents a uterine malformation where the uterus is formed from only one of the paired Müllerian ducts, while the other Müllerian duct does not develop, or develops only in a rudimentary fashion. The resulting organ, sometimes called a hemi-uterus, has a single horn linked to the ipsilateral fallopian tube that faces its ovary.
Signs and symptoms
Women with the condition may be asymptomatic and unaware of having a unicornuate uterus; normal pregnancy may occur. In a review of the literature, Reichman et al. analyzed the data on pregnancy outcome of 290 women with a unicornuate uterus. 175 women had conceived, for a total of 468 pregnancies. They found that about 50% of patients delivered a live baby. The rate of ectopic pregnancy was 2.7%, of miscarriage 34%, and of preterm delivery 20%, while the intrauterine demise rate was 10%. Thus patients with a unicornuate uterus are at a higher risk for pregnancy loss and obstetrical complications.
Cause
The uterus is normally formed during embryogenesis by the fusion of the two Müllerian ducts. If one of the ducts does not develop, only one Müllerian duct contributes to the uterine development </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Intermittent hydrarthrosis and can you explain it in depth? [/INST] Intermittent hydrarthrosis (IH), also known as periodic synoviosis, periodic benign synovitis, or periodic hydrarthritis, is a chronic condition of unknown cause characterized by recurring, temporary episodes of fluid accumulation (effusion) in the knee. While the knee is mainly involved, occasionally other joints such as the elbow or ankle can additionally be affected. Fluid accumulation in the joint can be extensive causing discomfort and impairing movement, although affected joints are not usually very painful. While the condition is chronic, it does not appear to progress to more destructive damage of the joint. It seems to affect slightly more women than men.
Episodes of swelling last several days or longer, can occur with regular or semi-regular frequency, typically one or two episodes per month. Between periods of effusion, knee swelling reduces dramatically providing largely symptomless intervals. Unlike some other rheumatological conditions such as rheumatoid arthritis, laboratory findings are usually within normal ranges or limits.
Clear treatment options have yet to be established. NSAIDs and COX-2 inhibitors are generally not effective. Where this condition has been correctly diagnosed, various anti-rheumatic drugs as well as colchicine may be trialled to find the most effective option. More aggressive intra-articular treatments such as chemical or radioactive synovectomy can also be helpful, although benefits beyond 1 year have not been reported in the literature.
Signs and symptoms
Repeated, periodic joint effusions of the knee. Usually one knee is affected, but sometimes both knees. Other joints may also be involved along with the knee. Effusions are large, restricting range of motion, but significant pain is not a feature. There is usually stiffness. Tenderness of the joint may or may not be present. Aspirated synovial fluid is usually sterile but will sometimes show an elevated cell count (>100 cells/mL), with 50% being polymorphonuclear leukocytes.

Onset of effusions is sudden, with no particular trigger or stimulus. Each episode lasts for a few days to about a week and recurs in cycles of 7 to 11 days, with extremes of 3 days to 30 days also reported. Sometimes the joint may begin to swell again as soon as the fluid has subsided. Where both knees are affected concurrently, as one joint ceases to swell the other may become involved.

The cycles of joint swelling have been reported as being very regular, even predictable. This has been a characteristic feature of IH in many case reports. However, over the longer term especially, these cycles of effusion and recovery may not be as constant as first reported.

In women, many cases seem to begin at puberty. Episodes of knee swelling may coincide with the menstrual cycle. In nearly all case reports, pregnancy seems to suppress the condition, but after birth, during lactation, it returns.

In the main, patients are free of other symptoms. Fever is rare. There are no signs of local inflammation or lymphatic involvement. Laboratory tests are generally normal or within reference limits.
Cause
The cause is unknown, but allergic and auto-inflammatory mechanisms have been proposed. In a 1957 review of IH, Mattingly did not find evidence that the condition is inherited, unlike Reimann, who in 1974 described the condition as “heritable, non-inflammatory, and afebrile”.

More recently, a specific association with the Mediterranean fever gene, MEFV, has been proposed. In some individuals carrying gene mutations (MEFV and also TRAPS-related genes), the innate immune system seems to play a role in the development of IH, i.e. there is an auto-immune component to the condition.
Pathophysiology
Involvement of mast cells has been reported, reflecting a possible immunoallergic aspect to IH.
Mattingly suggests that IH may be an unusual variant of rheumatoid arthritis, and some patients may go on to develop RA. Joint damage, however, does not generally occur, and only the synovial membrane is affected by a ‘non-inflammatory oedema’. With regard to the periodic nature of effusions, Reimann theorises that:
“…either an inherent rhythm or a feedback mechanism (Morley, 1970) excites bioclocks in the hypothalamus or in the synovial membrane (Richter, 1960). These Zeitgebers provoke sudden accumulation of plasma in the lining and spaces of joints, tendon and ligament sheaths.”
Diagnosis
There is no specific test for this condition. Diagnosis is based on signs and symptoms, and exclusion of other conditions.
Differential diagnoses
Rheumatoid arthritis. Confusion with rheumatoid arthritis may be common even though IH is a non-inflammatory condition without the many signs and symptoms associated with RA. So far, an association with the HLA-B27 gene in IH has not been reported. With RA, small joints are mostly affected in an inflammatory, destructive manner; this is not observed in IH. Rheumatoid factor, cyclic citrullinated peptide antibodies and antinuclear antibodies are usually negative in IH.
Palindromic rheumatism. Like IH, palindromic rheumatism (PR) is also characterised by relapsing, short episodes of sudden-onset arthritis with no recognisable trigger. However, unlike PR, IH affects the knee almost exclusively, there is a predictable, periodic regularity of attacks, and laboratory tests during attacks are generally unremarkable. PR is also more likely to be associated with development of rheumatoid arthritis.
Familial Mediterranean Fever. For some patients with FMF, episodes of knee or joint inflammation may be the only presenting symptom during an acute inflammatory episode. The MEFV gene mutations associated with FMF have been implicated in the pathogenesis of both palindromic rheumatism and IH.
Other conditions for consideration (or exclusion) are other periodic arthropathies, crystal arthropathy, prepatellar bursitis (housemaid's knee), pigmented villonodular synovitis, trauma and infectious causes.
Treatment
No treatment has been found to be routinely effective. NSAIDs and COX-2 inhibitors are not generally helpful other than for general pain relief. They do not seem to help reduce effusions or prevent their occurrence. Low-dose colchicine (and some other ‘anti-rheumatic’ therapies, e.g. hydroxychloroquine) have been used with some success. (Use of methotrexate and intramuscular gold has not been reported in the literature.) More aggressive treatments such as synovectomy, achieved using intra-articular agents (chemical or radioactive), can provide good results, with efficacy reported for at least 1 year.

Reducing acute joint swelling:
Arthrocentesis (drainage of the joint) may be useful to relieve joint swelling and improve range of motion. Local steroid injections can also reduce fluid accumulation short-term but do not prevent onset of episodes. These treatments provide temporary relief only. Bed rest, ice packs, splints and exercise are ineffective. A single case report of a patient with treatment-refractory IH describes the use of anakinra, an interleukin-1 receptor antagonist. At the first sign of any attack, a single 100 mg dose was given. With this dosing at onset of attacks, each episode of effusion was successfully terminated.

Reducing frequency and severity of IH episodes:
Case reports indicate some success using long-term, low-dose colchicine (e.g. 0.5 mg to 1 mg daily). A recent single case report has shown hydroxychloroquine (300 mg daily) to be effective too. Small clinical trials have shown positive results with (1) chemical and (2) radioactive synovectomy. (1) Setti et al. treated 53 patients with rifamycin SV (600 mg intra-articular injections weekly for approximately 6 weeks) with good results at 1-year follow-up. (2) Top and Cross used single doses of intra-articular radioactive gold in 18 patients with persistent effusions of mixed causes, including 3 with IH. All 3 patients with IH responded well to treatment at one-year follow-up.
Prognosis
Once established, periods of remission and relapse can persist indefinitely.
While IH may remit spontaneously, for most people the condition is long-lasting. Treatments as described above can be effective in reducing the frequency and degree of effusions. Deformative changes to joints are not a common feature of this mostly non-inflammatory condition.
Epidemiology
Intermittent hydrarthrosis is uncommon and its prevalence is not known. (By 1974, more than 200 cases had been reported in the published literature.) It affects men and women about equally, although some publications suggest the condition is slightly more prevalent in females. Case reports indicate that only white people are affected </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Nocturia and can you explain it in depth? [/INST] Nocturia is defined by the International Continence Society (ICS) as “the complaint that the individual has to wake at night one or more times for voiding (i.e. to urinate).” The term is derived from Latin nox, night, and Greek [τα] ούρα, urine. Causes are varied and can be difficult to discern. Although not every patient needs treatment, most people seek treatment for severe nocturia, waking up to void more than 2–3 times per night.
Prevalence
Studies show that 5–15% of people aged 20–50 years, 20–30% of people aged 50–70 years, and 10–50% of people over 70 urinate at least twice a night. Nocturia becomes more common with age. More than 50 percent of men and women over the age of 60 have been measured to have nocturia in many communities. Even more over the age of 80 are shown to experience symptoms of nocturia nightly. Nocturia symptoms also often worsen with age. Although nocturia rates are about the same for both genders, data show a higher prevalence in younger women than younger men, and in older men than older women.
Impact
Research suggests that more than 60% of people are negatively affected by nocturia. The resulting insomnia and sleep deprivation can cause exhaustion, changes in mood, sleepiness, impaired productivity, fatigue, increased risk of accidents, and cognitive dysfunction. 25% of falls that older individuals experience happen during the night, of which 25% occur while waking up to void.

A quality of life test for people who experience nocturia was published in 2004. The pilot study was conducted only on men.
Diagnosis
Nocturia diagnosis requires knowing the patient's nocturnal urine volume (NUV). The ICS defines NUV as “the total volume of urine passed between the time the individual goes to bed with the intention of sleeping and the time of waking with the intention of rising.” Thus, NUV excludes the last void before going to bed but includes the first morning void if the urge to urinate woke the patient. The amount of sleep a patient gets, and the amount they intend to get, are also considered in a diagnosis.

As with any patient, a detailed history of the problem is required to establish what is normal for that patient. The principal diagnostic tool for nocturia is the voiding bladder diary. Based on information recorded in the diary, a physician can classify the patient as having global polyuria, nocturnal polyuria, or a bladder storage problem. A voiding bladder diary should record:
number of voids
timing of voids
volume voided
volume and time of fluid intake

Patients should include the first morning void in the NUV. However, the first morning void is not included in the count of nightly voids.
Causes
Polyuria
Polyuria is excessive or an abnormally large production or passage of urine. Increased production and passage of urine may also be termed diuresis. Polyuria is usually viewed as a symptom or sign of another disorder (not a disease by itself), but it can be classed as a disorder, at least when its underlying causes are not clear.
Global polyuria
Global polyuria is the continuous overproduction of urine, not limited to sleep hours. Global polyuria occurs in response to increased fluid intake and is defined as a urine output of greater than 40 mL/kg/24 hours. The common causes of global polyuria are primary thirst disorders such as diabetes mellitus and diabetes insipidus (DI). Urination imbalance may lead to polydipsia, or excessive thirst, to prevent circulatory collapse. Central diabetes insipidus is caused by low levels of vasopressin (also called antidiuretic hormone (ADH), arginine vasopressin (AVP) or argipressin). ADH is produced in the hypothalamus and stored in and released from the posterior pituitary gland. ADH increases water absorption in the collecting duct systems of kidney nephrons, subsequently decreasing urine production, and thereby regulates hydration levels in the body. In nephrogenic DI, the kidneys do not respond properly to the normal amount of ADH.

Diagnosis of DI can be made by an overnight water deprivation test. This test requires the patient to eliminate fluid intake for a fixed period of time, usually around 8–12 hours. If the first morning void is not highly concentrated, the patient is diagnosed with DI. Central DI usually can be treated with a synthetic replacement of ADH, called desmopressin, which is taken to control thirst and frequent urination. Although there is no equivalent replacement therapy for nephrogenic DI, it may be treated with careful regulation of fluid intake.
Nocturnal polyuria
Nocturnal polyuria is defined as an increase in urine production during the night with a proportional decrease in daytime urine production, such that the 24-hour urine volume remains normal. With 24-hour urine production within normal limits, nocturnal polyuria corresponds to a nocturnal polyuria index (NPi) greater than 35%. NPi is calculated simply by dividing NUV by the 24-hour urine volume (a worked example follows the list below). As with impaired control of urination, a disruption of arginine vasopressin (ADH) levels has been proposed as a mechanism for nocturia; compared with normal patients, nocturia patients show a nocturnal decrease in ADH level.

Other causes of nocturnal polyuria include diseases such as
congestive heart failure
nephritic syndrome
liver failure
lifestyle patterns such as excessive nighttime drinking
sleep apnea increasing obstructive airway resistance. Obstructive sleep apnea sufferers have been shown to have increases in renal sodium and water excretion that are mediated by elevated plasma atrial natriuretic hormone (ANH) levels. ANH is released by cardiac muscle cells in response to high blood volume; when activated, it promotes water excretion, subsequently increasing urine production.
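As an illustrative sketch only (the diary values below are hypothetical, not taken from any study cited here), the NPi arithmetic described above can be written as a short calculation:

    # Nocturnal polyuria index: NPi = NUV / 24-hour urine volume.
    nuv_ml = 900           # hypothetical nocturnal urine volume from a bladder diary
    total_24h_ml = 2000    # hypothetical 24-hour urine volume (within normal limits)
    npi = nuv_ml / total_24h_ml
    # An NPi above 35% with a normal 24-hour volume suggests nocturnal polyuria.
    print(f"NPi = {npi:.0%}")  # prints: NPi = 45%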
Bladder storage
Normal human bladder storage capacity varies from person to person and is considered to be 400–600 mL. A bladder storage disorder is any factor that increases the frequency of small-volume voids. These factors are usually related to lower urinary tract symptoms that affect the capacity of the bladder. Some patients with nocturia have neither global nor nocturnal polyuria according to the above criteria. Such patients most likely have a bladder storage disorder that impacts their nighttime voiding, or a sleep disorder. Nocturnal bladder capacity (NBC) is defined as the largest voided volume during the sleep period.

Decreased NBC can be traced to a decreased maximum voided volume or decreased bladder storage. Decreased NBC can be related to other disorders such as:
benign prostatic hyperplasia (BPH), also known as prostate enlargement
neurogenic bladder dysfunction
learned voiding dysfunction
anxiety disorders
urinary tract infection
certain pharmacological agents.
Mixed cause
A significant number of nocturia cases occur from a combination of causes. Mixed nocturia is more common than many realise and is a combination of nocturnal polyuria and decreased nocturnal bladder capacity. In a study of 194 nocturia patients:
7% were determined to solely have nocturnal polyuria
57% solely had decreased NBC
36% had a mixed cause of the two

Multifactor-caused nocturia is often unrelated to an underlying urological condition. Mixed nocturia is diagnosed through the maintenance and analysis of the patient's bladder diaries. Assessment of the contribution of each cause is done through formulas.
Management
Lifestyle changes
Although there is no cure for nocturia, many actions can manage the symptoms.
Prohibiting caffeine and alcohol intake. Both are diuretics.
Beverage consumption regulation. In regard to nocturia, this specifically means avoiding fluids for three or more hours before bedtime, giving the bladder less fluid to store overnight </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Anemia and can you explain it in depth? [/INST] Anemia or anaemia (British English) is a blood disorder in which the blood has a reduced ability to carry oxygen due to a lower than normal number of red blood cells, or a reduction in the amount of hemoglobin. When anemia comes on slowly, the symptoms are often vague, such as tiredness, weakness, shortness of breath, headaches, and a reduced ability to exercise. When anemia is acute, symptoms may include confusion, feeling like one is going to pass out, loss of consciousness, and increased thirst. Anemia must be significant before a person becomes noticeably pale. Symptoms of anemia depend on how quickly hemoglobin decreases. Additional symptoms may occur depending on the underlying cause. Preoperative anemia can increase the risk of needing a blood transfusion following surgery. Anemia can be temporary or long term and can range from mild to severe.

Anemia can be caused by blood loss, decreased red blood cell production, and increased red blood cell breakdown. Causes of bleeding include trauma and gastrointestinal bleeding. Causes of decreased production include iron deficiency, vitamin B12 deficiency, thalassemia and a number of bone marrow tumors. Causes of increased breakdown include genetic disorders such as sickle cell anemia, infections such as malaria, and certain autoimmune diseases. Anemia can also be classified based on the size of the red blood cells and amount of hemoglobin in each cell. If the cells are small, it is called microcytic anemia; if they are large, it is called macrocytic anemia; and if they are normal sized, it is called normocytic anemia. The diagnosis of anemia in men is based on a hemoglobin of less than 130 to 140 g/L (13 to 14 g/dL); in women, it is less than 120 to 130 g/L (12 to 13 g/dL). Further testing is then required to determine the cause.

A large number of patients diagnosed with anemia of chronic disease present with no active inflammation or dietary issues. These include many with reduced limb loading, such as spinal cord injured patients, astronauts, elderly people with limited mobility, bed-bound and experimental bed-rest subjects.

Certain groups of individuals, such as pregnant women, benefit from the use of iron pills for prevention. Dietary supplementation, without determining the specific cause, is not recommended. The use of blood transfusions is typically based on a person's signs and symptoms. In those without symptoms, they are not recommended unless hemoglobin levels are less than 60 to 80 g/L (6 to 8 g/dL). These recommendations may also apply to some people with acute bleeding. Erythropoiesis-stimulating agents are only recommended in those with severe anemia.

Anemia is the most common blood disorder, affecting about a third of the global population. Iron-deficiency anemia affects nearly 1 billion people. In 2013, anemia due to iron deficiency resulted in about 183,000 deaths – down from 213,000 deaths in 1990. This condition is most prevalent in children, with an above-average prevalence in the elderly and in women of reproductive age (especially during pregnancy).
The name is derived from Ancient Greek: ἀναιμία anaimia, meaning "lack of blood", from ἀν- an-, "not" and αἷμα haima, "blood".Anemia is one of the six WHO global nutrition targets for 2025 and for diet-related global targets endorsed by World Health Assembly in 2012 and 2013. Efforts to reach global targets contribute to reaching Sustainable Development Goals (SDGs), with anemia as one of the targets in SDG 2 for achieving zero world hunger.
Signs and symptoms
Anemia is considered to be the most common blood disorder. A person with anemia may not have any symptoms, depending on the underlying cause; symptoms may go unnoticed while the anemia is mild and then worsen as the anemia progresses. A patient with anemia may report feeling tired, weak, a decreased ability to concentrate, and sometimes shortness of breath on exertion.

Symptoms of anemia can come on quickly or slowly. Early on there may be few or no symptoms. If the anemia develops slowly (chronic anemia), the body may adapt and compensate for this change. In this case, no symptoms may appear until the anemia becomes more severe. Symptoms can include feeling tired, weakness, dizziness, headaches, reduced capacity for physical exertion, shortness of breath, difficulty concentrating, irregular or rapid heartbeat, cold hands and feet, cold intolerance, pale or yellow skin, poor appetite, easy bruising and bleeding, and muscle weakness. Anemia that develops quickly often has more severe symptoms, including feeling faint, chest pain, sweating, increased thirst, and confusion. There may also be additional symptoms depending on the underlying cause.

In more severe anemia, the body may compensate for the lack of oxygen-carrying capability of the blood by increasing cardiac output. The person may have symptoms related to this, such as palpitations, angina (if pre-existing heart disease is present), intermittent claudication of the legs, and symptoms of heart failure.

On examination, the signs exhibited may include pallor (pale skin, mucosa, conjunctiva and nail beds), but this is not a reliable sign. A blue coloration of the sclera may be noticed in some cases of iron-deficiency anemia. There may be signs of specific causes of anemia, e.g. koilonychia (in iron deficiency), jaundice (when anemia results from abnormal breakdown of red blood cells – in hemolytic anemia), nerve cell damage (vitamin B12 deficiency), bone deformities (found in thalassemia major) or leg ulcers (seen in sickle-cell disease). In severe anemia, there may be signs of a hyperdynamic circulation: tachycardia (a fast heart rate), bounding pulse, flow murmurs, and cardiac ventricular hypertrophy (enlargement). There may be signs of heart failure.
Pica, the consumption of non-food items such as ice, paper, wax, grass, hair or dirt, may be a symptom of iron deficiency, although it often occurs in those who have normal levels of hemoglobin.
Chronic anemia may result in behavioral disturbances in children as a direct result of impaired neurological development in infants, and reduced academic performance in children of school age. Restless legs syndrome is more common in people with iron-deficiency anemia than in the general population.
Causes
The causes of anemia may be classified as impaired red blood cell (RBC) production, increased RBC destruction (hemolytic anemia), blood loss and fluid overload (hypervolemia). Several of these may interplay to cause anemia. The most common cause of anemia is blood loss, but this usually does not cause any lasting symptoms unless a relatively impaired RBC production develops, in turn, most commonly by iron deficiency.
Impaired production
Disturbance of proliferation and differentiation of stem cells
Pure red cell aplasia
Aplastic anemia affects all kinds of blood cells. Fanconi anemia is a hereditary disorder or defect featuring aplastic anemia and various other abnormalities.
Anemia of kidney failure due to insufficient production of the hormone erythropoietin
Anemia of endocrine disease
Disturbance of proliferation and maturation of erythroblasts
Pernicious anemia is a form of megaloblastic anemia due to vitamin B12 deficiency resulting from impaired absorption of vitamin B12. Lack of dietary B12 causes non-pernicious megaloblastic anemia.
Anemia of folate deficiency, as with vitamin B12, causes megaloblastic anemia
Anemia of prematurity, by diminished erythropoietin response to declining hematocrit levels, combined with blood loss from laboratory testing, generally occurs in premature infants at two to six weeks of age.
Iron deficiency anemia, resulting in deficient heme synthesis
Thalassemias, causing deficient globin synthesis
Congenital dyserythropoietic anemias, causing ineffective erythropoiesis
Anemia of kidney failure (also causing stem cell dysfunction)
Other mechanisms of impaired RBC production
Myelophthisic anemia or myelophthisis is a severe type of anemia resulting from the replacement of bone marrow by other materials, such as malignant tumors, fibrosis, or granulomas.
Myelodysplastic syndrome
Anemia of chronic inflammation
Leukoerythroblastic anemia is caused by space-occupying lesions in the bone marrow that prevent normal production of blood cells.
Increased destruction
Anemias of increased red blood cell destruction are generally classified as hemolytic anemias. These types generally feature jaundice, and elevated levels of lactate dehydrogenase.
Intrinsic (intracorpuscular) abnormalities cause premature destruction. All of these, except paroxysmal nocturnal hemoglobinuria, are hereditary genetic disorders.

Hereditary spherocytosis is a hereditary defect that results in defects in the RBC cell membrane, causing the erythrocytes to be sequestered and destroyed by the spleen.
Hereditary elliptocytosis is another defect in membrane skeleton proteins.
Abetalipoproteinemia, causing defects in membrane lipids
Enzyme deficiencies
Pyruvate kinase and hexokinase deficiencies, causing defect glycolysis
Glucose-6-phosphate dehydrogenase deficiency and glutathione synthetase deficiency, causing increased oxidative stress
Hemoglobinopathies
Sickle cell anemia
Hemoglobinopathies causing unstable hemoglobins
Paroxysmal nocturnal hemoglobinuria
Extrinsic (extracorpuscular) abnormalities
Antibody-mediated
Warm autoimmune hemolytic anemia is caused by autoimmune attack against red blood cells, primarily by IgG. It is the most common of the autoimmune hemolytic diseases. It can be idiopathic, that is, without any known cause, drug-associated or secondary to another disease such as systemic lupus erythematosus, or a malignancy, such as chronic lymphocytic leukemia.
Cold agglutinin hemolytic anemia is primarily mediated by IgM. It can be idiopathic or result from an underlying condition.
Rh disease, one of the causes of hemolytic disease of the newborn
Transfusion reaction to blood transfusions
Mechanical trauma to red blood cells
Microangiopathic hemolytic anemias, including thrombotic thrombocytopenic purpura and disseminated intravascular coagulation
Infections, including malaria
Heart surgery
Haemodialysis
Parasitic
Trypanosoma congolense alters the surfaces of its host's RBCs, and this may explain T. congolense-induced anemia
Blood loss
Anemia of prematurity, from frequent blood sampling for laboratory testing, combined with insufficient RBC production
Trauma or surgery, causing acute blood loss
Gastrointestinal tract lesions, causing either acute bleeds (e.g. variceal lesions, peptic ulcers) or chronic blood loss (e.g. angiodysplasia)
Gynecologic disturbances, also generally causing chronic blood loss
From menstruation, mostly among young women or older women who have fibroids
Many type of cancers, including colorectal cancer and cancer of the urinary bladder, may cause acute or chronic blood loss, especially at advanced stages
Infection by intestinal nematodes feeding on blood, such as hookworms and the whipworm Trichuris trichiura
Iatrogenic anemia, blood loss from repeated blood draws and medical procedures.

The roots of the words anemia and ischemia both refer to the basic idea of "lack of blood", but anemia and ischemia are not the same thing in modern medical terminology. The word anemia used alone implies widespread effects from blood that either is too scarce (e.g., blood loss) or is dysfunctional in its oxygen-supplying ability (due to whatever type of hemoglobin or erythrocyte problem). In contrast, the word ischemia refers solely to the lack of blood (poor perfusion). Thus ischemia in a body part can cause localized anemic effects within those tissues.
Fluid overload
Fluid overload (hypervolemia) causes decreased hemoglobin concentration and apparent anemia:
General causes of hypervolemia include excessive sodium or fluid intake, sodium or water retention and fluid shift into the intravascular space.
From the 6th week of pregnancy, hormonal changes cause an increase in the mother's blood volume due to an increase in plasma.
Intestinal inflammation
Certain gastrointestinal disorders can cause anemia. The mechanisms involved are multifactorial and not limited to malabsorption but mainly related to chronic intestinal inflammation, which causes dysregulation of hepcidin that leads to decreased access of iron to the circulation.
Helicobacter pylori infection.
Gluten-related disorders: untreated celiac disease and non-celiac gluten sensitivity. Anemia can be the only manifestation of celiac disease, in absence of gastrointestinal or any other symptoms.
Inflammatory bowel disease.
Diagnosis
Definitions
There are a number of definitions of anemia; reviews provide comparison and contrast of them. A strict but broad definition is an absolute decrease in red blood cell mass; a broader definition is a lowered ability of the blood to carry oxygen. An operational definition is a decrease in whole-blood hemoglobin concentration of more than 2 standard deviations below the mean of an age- and sex-matched reference range.

It is difficult to directly measure RBC mass, so the hematocrit (amount of RBCs) or the hemoglobin (Hb) in the blood are often used instead to indirectly estimate the value. Hematocrit, however, is concentration-dependent and is therefore not completely accurate. For example, during pregnancy a woman's RBC mass is normal, but because of an increase in blood volume the hemoglobin and hematocrit are diluted and thus decreased. Another example would be bleeding, where the RBC mass would decrease but the concentrations of hemoglobin and hematocrit initially remain normal until fluids shift from other areas of the body to the intravascular space.

Anemia is also classified by severity into mild (110 g/L to normal), moderate (80 g/L to 110 g/L), and severe (less than 80 g/L) in adults. Different values are used in pregnancy and children.
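As a minimal sketch of the adult severity bands just described (the lower limit of normal used here is an assumed reference value, since it varies with age and sex):

    # Adult anemia severity bands from the text above, in g/L.
    def anemia_severity(hb_g_per_l, lower_limit_normal=130.0):
        # lower_limit_normal is an assumed age- and sex-matched reference value.
        if hb_g_per_l >= lower_limit_normal:
            return "not anemic"
        if hb_g_per_l >= 110:
            return "mild"
        if hb_g_per_l >= 80:
            return "moderate"
        return "severe"

    print(anemia_severity(95))  # prints: moderate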
Testing
Anemia is typically diagnosed on a complete blood count. Apart from reporting the number of red blood cells and the hemoglobin level, the automatic counters also measure the size of the red blood cells by flow cytometry, which is an important tool in distinguishing between the causes of anemia. Examination of a stained blood smear using a microscope can also be helpful, and it is sometimes a necessity in regions of the world where automated analysis is less accessible.
A blood test will provide counts of white blood cells, red blood cells and platelets. If anemia appears, further tests may determine what type it is and whether it has a serious cause; alongside these, the patient's genetic history and a physical examination can inform the diagnosis. These tests may include:
complete blood count (CBC); a CBC is used to count the number of blood cells in a sample of the blood. For anemia, the values of most interest are the level of red blood cells in the blood (hematocrit), the hemoglobin, and the mean corpuscular volume.
determine the size and shape of red blood cells; some red blood cells might also be examined for unusual size, shape and color.
serum ferritin; this protein helps store iron in the body, and a low level of ferritin usually indicates a low level of stored iron.
serum vitamin B12; low levels often lead to anemia, as vitamin B12 is needed to make red blood cells, which carry oxygen to all parts of the body.
blood tests to detect rare causes; such as an immune attack on red blood cells, red blood cell fragility, and defects of enzymes, hemoglobin, and clotting.
a bone marrow sample; when the cause is unclear, a bone marrow test is performed, most often when some blood cell defect is suspected.

Reticulocyte counts, and the "kinetic" approach to anemia, have become more common than in the past in the large medical centers of the United States and some other wealthy nations, in part because some automatic counters now have the capacity to include reticulocyte counts. A reticulocyte count is a quantitative measure of the bone marrow's production of new red blood cells. The reticulocyte production index is a calculation of the ratio between the level of anemia and the extent to which the reticulocyte count has risen in response. If the degree of anemia is significant, even a "normal" reticulocyte count actually may reflect an inadequate response.
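The text above does not spell out the reticulocyte production index formula; as a hedged sketch using the commonly cited form and hypothetical values, it corrects the raw reticulocyte percentage for both the degree of anemia and prolonged reticulocyte maturation:

    # Commonly cited form (an assumption here, not stated in the text above):
    # RPI = reticulocyte % * (patient Hct / normal Hct) / maturation factor
    retic_pct = 3.0       # hypothetical reticulocyte percentage
    patient_hct = 25.0    # hypothetical patient hematocrit (%)
    normal_hct = 45.0     # assumed reference hematocrit (%)
    maturation = 2.0      # typical maturation correction at Hct ~25
    rpi = retic_pct * (patient_hct / normal_hct) / maturation
    # An RPI well below ~2 in a significantly anemic patient suggests an
    # inadequate marrow response, as the paragraph above describes.
    print(round(rpi, 2))  # prints: 0.83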
If an automated count is not available, a reticulocyte count can be done manually following special staining of the blood film. In manual examination, activity of the bone marrow can also be gauged qualitatively by subtle changes in the numbers and the morphology of young RBCs by examination under a microscope. Newly formed RBCs are usually slightly larger than older RBCs and show polychromasia. Even where the source of blood loss is obvious, evaluation of erythropoiesis can help assess whether the bone marrow will be able to compensate for the loss and at what rate.
When the cause is not obvious, clinicians use other tests, such as: ESR, serum iron, transferrin, RBC folate level, hemoglobin electrophoresis, renal function tests (e.g. serum creatinine) although the tests will depend on the clinical hypothesis that is being investigated.
When the diagnosis remains difficult, a bone marrow examination allows direct examination of the precursors to red cells, although it is rarely used, as it is painful and invasive, and is hence reserved for cases where severe pathology needs to be determined or excluded.
Red blood cell size
In the morphological approach, anemia is classified by the size of red blood cells; this is either done automatically or on microscopic examination of a peripheral blood smear. The size is reflected in the mean corpuscular volume (MCV). If the cells are smaller than normal (under 80 fl), the anemia is said to be microcytic; if they are normal size (80–100 fl), normocytic; and if they are larger than normal (over 100 fl), the anemia is classified as macrocytic. This scheme quickly exposes some of the most common causes of anemia; for instance, a microcytic anemia is often the result of iron deficiency. In clinical workup, the MCV will be one of the first pieces of information available, so even among clinicians who consider the "kinetic" approach more useful philosophically, morphology will remain an important element of classification and diagnosis.
Limitations of MCV include cases where the underlying cause is due to a combination of factors – such as iron deficiency (a cause of microcytosis) and vitamin B12 deficiency (a cause of macrocytosis) where the net result can be normocytic cells.
Production vs. destruction or loss
The "kinetic" approach to anemia yields arguably the most clinically relevant classification of anemia. This classification depends on evaluation of several hematological parameters, particularly the blood reticulocyte (precursor of mature RBCs) count. This then yields the classification of defects by decreased RBC production versus increased RBC destruction or loss. Clinical signs of loss or destruction include abnormal peripheral blood smear with signs of hemolysis; elevated LDH suggesting cell destruction; or clinical signs of bleeding, such as guaiac-positive stool, radiographic findings, or frank bleeding.
The following is a simplified schematic of this approach (the original flowchart is not reproduced here; only its footnotes survive):

* For instance, sickle cell anemia with superimposed iron deficiency; chronic gastric bleeding with B12 and folate deficiency; and other instances of anemia with more than one cause.

** Confirm by repeating reticulocyte count: an ongoing combination of low reticulocyte production index, normal MCV and hemolysis or loss may be seen in bone marrow failure or anemia of chronic disease, with superimposed or related hemolysis or blood loss.

A second schematic, showing how to consider anemia with MCV as the starting point, is likewise not reproduced here.
Other characteristics visible on the peripheral smear may provide valuable clues about a more specific diagnosis; for example, abnormal white blood cells may point to a cause in the bone marrow.
Microcytic
Microcytic anemia is primarily a result of hemoglobin synthesis failure/insufficiency, which could be caused by several etiologies:
Iron deficiency anemia is the most common type of anemia overall and it has many causes. RBCs often appear hypochromic (paler than usual) and microcytic (smaller than usual) when viewed with a microscope.
Iron deficiency anemia is due to insufficient dietary intake or absorption of iron to meet the bodys needs. Infants, toddlers, and pregnant women have higher than average needs. Increased iron intake is also needed to offset blood losses due to digestive tract issues, frequent blood donations, or heavy menstrual periods. Iron is an essential part of hemoglobin, and low iron levels result in decreased incorporation of hemoglobin into red blood cells. In the United States, 12% of all women of childbearing age have iron deficiency, compared with only 2% of adult men. The incidence is as high as 20% among African American and Mexican American women. Studies have shown iron deficiency without anemia causes poor school performance and lower IQ in teenage girls, although this may be due to socioeconomic factors. Iron deficiency is the most prevalent deficiency state on a worldwide basis. It is sometimes the cause of abnormal fissuring of the angular (corner) sections of the lips (angular stomatitis).
In the United States, the most common cause of iron deficiency is bleeding or blood loss, usually from the gastrointestinal tract. Fecal occult blood testing, upper endoscopy and lower endoscopy should be performed to identify bleeding lesions. In older men and women, the chances are higher that bleeding from the gastrointestinal tract could be due to colon polyps or colorectal cancer.
Worldwide, the most common cause of iron deficiency anemia is parasitic infestation (hookworms, amebiasis, schistosomiasis and whipworms).

The Mentzer index (mean cell volume divided by the RBC count) predicts whether microcytic anemia may be due to iron deficiency or thalassemia, although it requires confirmation.
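As a small illustrative calculation (the inputs are hypothetical, and the cutoff of about 13 is the commonly cited one rather than a figure stated in the text above):

    # Mentzer index = mean corpuscular volume / RBC count.
    mcv_fl = 65.0         # hypothetical MCV in femtoliters
    rbc_millions = 5.5    # hypothetical RBC count in millions per microliter
    mentzer = mcv_fl / rbc_millions
    # A value below ~13 is commonly read as suggesting thalassemia trait,
    # above ~13 as suggesting iron deficiency; confirmation is still required.
    print(round(mentzer, 1))  # prints: 11.8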
Macrocytic
Megaloblastic anemia, the most common cause of macrocytic anemia, is due to a deficiency of either vitamin B12, folic acid, or both. Deficiency in folate or vitamin B12 can be due either to inadequate intake or insufficient absorption. Folate deficiency normally does not produce neurological symptoms, while B12 deficiency does.
Pernicious anemia is caused by a lack of intrinsic factor, which is required to absorb vitamin B12 from food. A lack of intrinsic factor may arise from an autoimmune condition targeting the parietal cells that produce intrinsic factor (atrophic gastritis), or targeting intrinsic factor itself. These lead to poor absorption of vitamin B12.
Macrocytic anemia can also be caused by the removal of the functional portion of the stomach, such as during gastric bypass surgery, leading to reduced vitamin B12/folate absorption. Therefore, one must always be aware of anemia following this procedure.
Hypothyroidism
Alcoholism commonly causes a macrocytosis, although not specifically anemia. Other types of liver disease can also cause macrocytosis.
Drugs such as methotrexate and zidovudine, and other substances that may inhibit DNA replication, such as heavy metals

Macrocytic anemia can be further divided into "megaloblastic anemia" or "nonmegaloblastic macrocytic anemia". The cause of megaloblastic anemia is primarily a failure of DNA synthesis with preserved RNA synthesis, which results in restricted cell division of the progenitor cells. The megaloblastic anemias often present with neutrophil hypersegmentation (six to 10 lobes). The nonmegaloblastic macrocytic anemias have different etiologies (i.e. unimpaired DNA globin synthesis), which occur, for example, in alcoholism.
In addition to the nonspecific symptoms of anemia, specific features of vitamin B12 deficiency include peripheral neuropathy and subacute combined degeneration of the cord with resulting balance difficulties from posterior column spinal cord pathology. Other features may include a smooth, red tongue and glossitis.
The treatment for vitamin B12-deficient anemia was first devised by William Murphy, who bled dogs to make them anemic, and then fed them various substances to see what (if anything) would make them healthy again. He discovered that ingesting large amounts of liver seemed to cure the disease. George Minot and George Whipple then set about to isolate the curative substance chemically and ultimately were able to isolate the vitamin B12 from the liver. All three shared the 1934 Nobel Prize in Medicine.
Normocytic
Normocytic anemia occurs when the overall hemoglobin levels are decreased, but the red blood cell size (mean corpuscular volume) remains normal. Causes include:
Dimorphic
A dimorphic appearance on a peripheral blood smear occurs when there are two simultaneous populations of red blood cells, typically of different size and hemoglobin content (this last feature affecting the color of the red blood cell on a stained peripheral blood smear). For example, a person recently transfused for iron deficiency would have small, pale, iron-deficient red blood cells (RBCs) and donor RBCs of normal size and color. Similarly, a person transfused for severe folate or vitamin B12 deficiency would have two cell populations, but, in this case, the patient's RBCs would be larger and paler than the donor's RBCs. A person with sideroblastic anemia (a defect in heme synthesis, commonly caused by alcoholism, but also by drugs/toxins, nutritional deficiencies, and a few acquired and rare congenital diseases) can have a dimorphic smear from the sideroblastic anemia alone. Evidence for multiple causes appears with an elevated RBC distribution width (RDW), indicating a wider-than-normal range of red cell sizes, also seen in common nutritional anemia.
Heinz body anemia
Heinz bodies form in the cytoplasm of RBCs and appear as small dark dots under the microscope. In animals, Heinz body anemia has many causes. It may be drug-induced, for example in cats and dogs by acetaminophen (paracetamol), or may be caused by eating various plants or other substances:
In cats and dogs after eating either raw or cooked plants from the genus Allium, for example, onions or garlic.
In dogs after ingestion of zinc, for example, after eating U.S. pennies minted after 1982.
In horses which eat dry or wilted red maple leaves.
Hyperanemia
Hyperanemia is a severe form of anemia, in which the hematocrit is below 10%.
Refractory anemia
Refractory anemia, an anemia which does not respond to treatment, is often seen secondary to myelodysplastic syndromes. Iron deficiency anemia may also be refractory as a manifestation of gastrointestinal problems which disrupt iron absorption or cause occult bleeding.
Transfusion dependent
Transfusion-dependent anemia is a form of anemia where ongoing blood transfusions are required. Most people with myelodysplastic syndrome develop this state at some point. Beta thalassemia may also result in transfusion dependence. A concern with repeated blood transfusions is iron overload, which may require chelation therapy.
Treatment
Treatment for anemia depends on cause and severity. Vitamin supplements given orally (folic acid or vitamin B12) or intramuscularly (vitamin B12) will replace specific deficiencies.
Oral iron
Nutritional iron deficiency is common in developing nations. An estimated two-thirds of children and women of childbearing age in most developing nations have iron deficiency without anemia; one-third have iron deficiency with anemia. Iron deficiency due to inadequate dietary iron intake is rare in men and postmenopausal women. The diagnosis of iron deficiency mandates a search for potential sources of blood loss, such as gastrointestinal bleeding from ulcers or colon cancer.

Mild to moderate iron-deficiency anemia is treated by oral iron supplementation with ferrous sulfate, ferrous fumarate, or ferrous gluconate. Daily iron supplements have been shown to be effective in reducing anemia in women of childbearing age. When taking iron supplements, stomach upset or darkening of the feces is commonly experienced. The stomach upset can be alleviated by taking the iron with food; however, this decreases the amount of iron absorbed. Vitamin C aids the body's ability to absorb iron, so taking oral iron supplements with orange juice is of benefit.

In the anemia of chronic kidney disease, recombinant erythropoietin or epoetin alfa is recommended to stimulate RBC production, and if iron deficiency and inflammation are also present, concurrent parenteral iron is also recommended.
Injectable iron
In cases where oral iron has either proven ineffective, would be too slow (for example, pre-operatively), or where absorption is impeded (for example in cases of inflammation), parenteral iron preparations can be used. Parenteral iron can improve iron stores rapidly and is also effective for treating people with postpartum haemorrhage, inflammatory bowel disease, and chronic heart failure. The body can absorb up to 6 mg of iron daily from the gastrointestinal tract. In many cases, the patient has a deficit of over 1,000 mg of iron, which would require several months to replace at that rate. Parenteral iron can be given concurrently with erythropoietin to ensure sufficient iron for increased rates of erythropoiesis.
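As a rough back-of-the-envelope check of that timescale, the sketch below (Python; the 6 mg/day and 1,000 mg figures are simply the illustrative values quoted above, not dosing guidance) divides the deficit by the maximal daily absorption:

```python
# Rough estimate of oral iron repletion time, using the
# illustrative figures from the text above.
deficit_mg = 1000        # assumed total body iron deficit
absorbed_mg_per_day = 6  # assumed maximal daily GI absorption

days = deficit_mg / absorbed_mg_per_day
print(f"~{days:.0f} days (~{days / 30:.1f} months)")  # ~167 days (~5.6 months)
```

which is consistent with the "several months" quoted above.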
Blood transfusions
Blood transfusions in those without symptoms are not recommended until the hemoglobin is below 60 to 80 g/L (6 to 8 g/dL). In those with coronary artery disease who are not actively bleeding, transfusions are only recommended when the hemoglobin is below 70 to 80 g/L (7 to 8 g/dL). Transfusing earlier does not improve survival. Transfusions otherwise should only be undertaken in cases of cardiovascular instability.

A 2012 review concluded that, when considering blood transfusions for anaemia in people with advanced cancer who have fatigue and breathlessness (not related to cancer treatment or haemorrhage), consideration should be given to whether alternative strategies can be tried before a blood transfusion.
Vitamin B12 intramuscular injections
Vitamin B12 is given by intramuscular injection in severe cases or in cases of malabsorption of dietary B12. Pernicious anemia caused by loss of intrinsic factor cannot be prevented. If there are other, reversible causes of low vitamin B12 levels, the cause must be treated.

Vitamin B12 deficiency anemia is usually easily treated by providing the necessary level of vitamin B12 supplementation. The injections are quick-acting, and symptoms usually go away within one to two weeks. As the condition improves, dosing is reduced to weekly and then monthly intervals. Intramuscular therapy leads to more rapid improvement and should be considered in patients with severe deficiency or severe neurologic symptoms. Treatment should begin rapidly for severe neurological symptoms, as some changes can become permanent. In some individuals lifelong treatment may be needed.
Erythropoiesis-stimulating agents
The objective for the administration of an erythropoiesis-stimulating agent (ESA) is to maintain hemoglobin at the lowest level that both minimizes transfusions and meets the individual person's needs. They should not be used for mild or moderate anemia. They are not recommended in people with chronic kidney disease unless hemoglobin levels are less than 10 g/dL or they have symptoms of anemia. Their use should be along with parenteral iron. A 2020 Cochrane Anaesthesia Review Group review of erythropoietin plus iron versus control treatment (placebo or iron alone) for preoperative anaemic adults undergoing non-cardiac surgery found that patients were much less likely to require red cell transfusion; in those transfused, the volumes were essentially unchanged (mean difference −0.09, 95% CI −0.23 to 0.05). Preoperative haemoglobin concentration was increased in those receiving high-dose erythropoietin, but not low-dose.
Hyperbaric oxygen
Treatment of exceptional blood loss (anemia) is recognized as an indication for hyperbaric oxygen (HBO) by the Undersea and Hyperbaric Medical Society. The use of HBO is indicated when oxygen delivery to tissue is not sufficient in patients who cannot be given blood transfusions for medical or religious reasons </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Adenovirus infection and can you explain it in depth? [/INST] Adenovirus infection is a contagious viral disease, caused by adenoviruses, commonly resulting in a respiratory tract infection. Typical symptoms range from those of a common cold, such as nasal congestion, coryza and cough, to difficulty breathing as in pneumonia. Other general symptoms include fever, fatigue, muscle aches, headache, abdominal pain and swollen neck glands. Onset is usually two to fourteen days after exposure to the virus. A mild eye infection may occur on its own, combined with a sore throat and fever, or as a more severe adenoviral keratoconjunctivitis with a painful red eye, intolerance to light and discharge. Very young children may just have an earache. Adenovirus infection can present as a gastroenteritis with vomiting, diarrhoea and abdominal pain, with or without respiratory symptoms. However, some people have no symptoms. Adenovirus infections in humans are generally caused by adenovirus types B, C, E and F. Spread occurs mainly when an infected person is in close contact with another person. This may occur by the fecal–oral route, airborne transmission, or small droplets containing the virus. Less commonly, the virus may spread via contaminated surfaces. Other respiratory complications include acute bronchitis, bronchiolitis and acute respiratory distress syndrome. It may cause myocarditis, meningoencephalitis or hepatitis in people with weak immune systems. Diagnosis is by signs and symptoms, and a laboratory test is not usually required. In some circumstances, a PCR test on blood or respiratory secretions may detect adenovirus DNA. Other conditions that appear similar include whooping cough, influenza, parainfluenza, and respiratory syncytial virus. Adenovirus gastroenteritis appears similar to diarrhoeal diseases caused by other infections. Infection by adenovirus may be prevented by washing hands, avoiding touching one's eyes, mouth and nose with unwashed hands, and avoiding being near sick people. A live vaccine to protect against types 4 and 7 adenoviruses has been used successfully in some military personnel. Management is generally symptomatic and supportive. Most adenovirus infections get better without any treatment. Medicines to ease pain and reduce fever can be bought over the counter. Adenovirus infections affect all ages. They occur sporadically throughout the year, and outbreaks can occur particularly in winter and spring, when they may spread more quickly in closed populations such as in hospitals, nurseries, long-term care facilities, schools, and swimming pools. Severe disease is rare in people who are otherwise healthy. Adenovirus infection accounts for up to 10% of respiratory infections in children. Most cases are mild, and by the age of 10 years most children have had at least one adenovirus infection. 75% of conjunctivitis cases are due to adenovirus infection. In 2016, the Global Burden of Disease Study estimated that globally, around 75 million episodes of diarrhea among children under the age of five years were attributable to adenovirus infection. The first adenoviral strains were isolated in 1953 by Rowe et al.
Signs and symptoms
Symptoms are variable, ranging from mild to severe illness. They depend on the type of adenovirus, where it enters the body, and the age and well-being of the person. Recognised patterns of clinical features include respiratory, eye, gastrointestinal, genitourinary and central nervous system involvement; there is also a widespread type that occurs in immunocompromised people. Typical symptoms resemble those of a mild cold or the flu: fever, nasal congestion, coryza, cough, and pinky-red eyes. Infants may also have symptoms of an ear infection. Onset is usually two to fourteen days after exposure to the virus. There may be tiredness, chills, muscle aches, or headache. However, some people have no symptoms. Generally, a day or two after developing a sore throat with large tonsils, glands can be felt in the neck. Illness is more likely to be severe in people with weakened immune systems, particularly children who have had a hematopoietic stem cell transplantation. Sometimes there is a skin rash.
Respiratory tract
Preschool children with adenovirus colds tend to present with nasal congestion, a runny nose and abdominal pain. There may be a harsh barking cough. It is frequently associated with a fever and a sore throat. Up to one in five infants with bronchiolitis will have adenovirus infection, which can be severe. Bronchiolitis obliterans is uncommon, but can occur if adenovirus causes pneumonia with prolonged fever, and can result in difficulty breathing. It presents with a hyperinflated chest, expiratory wheeze and low oxygen. Severe pneumonia is most common in very young children aged three to 18 months and presents with sudden illness, ongoing cough, high fever, shortness of breath and a fast rate of breathing. There are frequently wheezes and crackles on breathing in and out.
Eyes
Adenovirus eye infection may present as a pinky-red eye. Six to nine days following exposure to adenovirus, one or both eyes, typically in children, may be affected in association with fever, pharyngitis and lymphadenopathy (pharyngoconjunctival fever (PCF)). The onset is usually sudden, and there is often rhinitis. Adenovirus infection can also cause adenoviral keratoconjunctivitis. Typically one eye is affected after an incubation period of up to a week. The eye becomes itchy, painful, burning and reddish, and lymphadenopathy may be felt by the ear nearest the affected eye. The symptoms may last around 10 days to three weeks. It may be associated with blurred vision, photophobia and swelling of the conjunctiva. A sore throat and nasal congestion may or may not be present. This tends to occur in epidemics, affecting predominantly adults. In very young children, it may be associated with high fever, sore throat, otitis media, diarrhoea, and vomiting.
Gastrointestinal tract
Adenovirus infection can cause a gastroenteritis, which may present with diarrhoea, vomiting and abdominal pain, with or without respiratory or general symptoms. Children under the age of one year appear particularly vulnerable. However, it usually resolves within three days. It appears similar to diarrhoeal diseases caused by other infections.
Other organs
Uncommonly the bladder may be affected, presenting with a sudden onset of burning on passing urine and increased frequency of passing urine, followed by seeing blood in the urine a day or two later. Meningism may occur in adenovirus associated meningoencephalitis, which may occur in people with weakened immune systems such as with AIDS or lymphoma. Adenovirus infection may result in symptoms of myocarditis, dilated cardiomyopathy, and pericarditis. Other signs and symptoms depend on other complications such as dark urine, itching and jaundice in hepatitis, generally in people who have a weakened immune system. Adenovirus is a rare cause of urethritis in men, when it may present with burning on passing urine associated with red eyes and feeling unwell.
Cause and mechanism
Adenovirus infections in humans are generally caused by adenovirus types B, C, E and F.

Although epidemiologic characteristics of the adenoviruses vary by type, all are transmitted by direct contact, fecal–oral transmission, and occasionally waterborne transmission. Some types are capable of establishing persistent asymptomatic infections in the tonsils, adenoids, and intestines of infected hosts, and shedding can occur for months or years. Some adenoviruses (e.g., serotypes 1, 2, 5, and 6) have been shown to be endemic in parts of the world where they have been studied, and infection is usually acquired during childhood. Other types cause sporadic infection and occasional outbreaks; for example, epidemic keratoconjunctivitis is associated with adenovirus serotypes 8, 19, and 37. Epidemics of febrile disease with conjunctivitis are associated with waterborne transmission of some adenovirus types, often centering on inadequately chlorinated swimming pools and small lakes. Acute respiratory disease (ARD) is most often associated with adenovirus types 4 and 7 in the United States. Enteric adenoviruses 40 and 41 cause gastroenteritis, usually in children. For some adenovirus serotypes, the clinical spectrum of disease associated with infection varies depending on the site of infection; for example, infection with adenovirus 7 acquired by inhalation is associated with severe lower respiratory tract disease, whereas oral transmission of the virus typically causes no or mild disease. Outbreaks of adenovirus-associated respiratory disease have been more common in the late winter, spring, and early summer; however, adenovirus infections can occur throughout the year.

Several adenoviruses, including Ad5, Ad9, Ad31, Ad36, Ad37, and SMAM1, have at least some evidence of causation of obesity in animals, adipogenesis in cells, and/or association with human obesity.
Diagnosis
Diagnosis is by signs and symptoms, and a laboratory test is not usually required. In some circumstances such as severe disease, when a diagnosis needs to be confirmed, a PCR test on blood or respiratory secretions may detect adenovirus DNA. Adenovirus can be isolated by growing in cell cultures in a laboratory. Other conditions that appear similar include whooping cough, influenza, parainfluenza, and respiratory syncytial virus. Since adenovirus can be excreted for prolonged periods, the presence of virus does not necessarily mean it is associated with disease.
Prevention
Infection by adenovirus may be prevented by washing hands, avoiding touching one's eyes, mouth and nose with unwashed hands, and avoiding being near sick people. Strict attention to good infection-control practices is effective for stopping transmission in hospitals of adenovirus-associated disease, such as epidemic keratoconjunctivitis. Maintaining adequate levels of chlorination is necessary for preventing swimming pool-associated outbreaks of adenovirus conjunctivitis. A live adenovirus vaccine to protect against types 4 and 7 adenoviruses has been used in some military personnel. Rates of adenovirus disease fell among military recruits following the introduction of a live oral vaccine against types 4 and 7. Stocks of the vaccine ran out in 1999 and rates of disease increased until 2011, when the vaccine was re-introduced.
Treatment
Treatment is generally symptomatic and supportive. Medicines to ease pain and reduce fever can be bought over the counter. For adenoviral conjunctivitis, a cold compress and lubricants may provide some relief of discomfort. Steroid eye drops may be required if the cornea is involved. Most adenovirus infections get better without any treatment.
Prognosis
After recovery from adenovirus infection, the virus can be carried for weeks or months.

Adenovirus can cause severe necrotizing pneumonia in which all or part of a lung has increased translucency radiographically, which is called Swyer-James syndrome. Severe adenovirus pneumonia may also result in bronchiolitis obliterans, a subacute inflammatory process in which the small airways are replaced by scar tissue, resulting in a reduction in lung volume and lung compliance.
Epidemiology
Adenovirus infections occur sporadically throughout the year, and outbreaks can occur particularly in winter and spring. Epidemics may spread more quickly in closed populations such as in hospitals, nurseries, long-term care facilities, boarding schools, orphanages and swimming pools. Severe disease is rare in people who are usually healthy. Around 10% of respiratory infections in children are caused by adenoviruses </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Braxton Hicks contractions and can you explain it in depth? [/INST] Braxton Hicks contractions, also known as practice contractions or false labor, are sporadic uterine contractions that may start around six weeks into a pregnancy. However, they are usually felt in the second or third trimester of pregnancy.
Associated conditions
Braxton Hicks contractions are often mistaken for labor. Braxton Hicks contractions allow the pregnant woman's body to prepare for labor. However, the presence of Braxton Hicks contractions does not mean a woman is in labor or even that labor is about to commence. Another common cause of pain in pregnancy is round ligament pain.
</s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Time and can you explain it in depth? [/INST] Time is the continued sequence of existence and events that occurs in an apparently irreversible succession from the past, through the present, into the future. It is a component quantity of various measurements used to sequence events, to compare the duration of events or the intervals between them, and to quantify rates of change of quantities in material reality or in the conscious experience. Time is often referred to as a fourth dimension, along with three spatial dimensions.Time has long been an important subject of study in religion, philosophy, and science, but defining it in a manner applicable to all fields without circularity has consistently eluded scholars.
Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems.

Time in physics is operationally defined as "what a clock reads". The physical nature of time is addressed by general relativity with respect to events in spacetime. Examples of events are the collision of two particles, the explosion of a supernova, or the arrival of a rocket ship. Every event can be assigned four numbers representing its time and position (the event's coordinates). However, the numerical values are different for different observers. In general relativity, the question of what time it is now only has meaning relative to a particular observer. Distance and time are intimately related, and the time required for light to travel a specific distance is the same for all observers, as first publicly demonstrated by Michelson and Morley. General relativity does not address the nature of time for extremely small intervals where quantum mechanics holds. At present, there is no generally accepted theory of quantum general relativity.

Time is one of the seven fundamental physical quantities in both the International System of Units (SI) and the International System of Quantities. The SI base unit of time is the second, which is defined by measuring the electronic transition frequency of caesium atoms. Time is used to define other quantities, such as velocity, so defining time in terms of such quantities would result in circularity of definition. An operational definition of time, wherein one says that observing a certain number of repetitions of one or another standard cyclical event (such as the passage of a free-swinging pendulum) constitutes one standard unit such as the second, is highly useful in the conduct of both advanced experiments and everyday affairs of life. To describe observations of an event, a location (position in space) and time are typically noted.
The operational definition of time does not address what the fundamental nature of it is. It does not address why events can happen forward and backward in space, whereas events only happen in the forward progress of time. Investigations into the relationship between space and time led physicists to define the spacetime continuum. General relativity is the primary framework for understanding how spacetime works. Through advances in both theoretical and experimental investigations of spacetime, it has been shown that time can be distorted and dilated, particularly at the edges of black holes.
Temporal measurement has occupied scientists and technologists and was a prime motivation in navigation and astronomy. Periodic events and periodic motion have long served as standards for units of time. Examples include the apparent motion of the sun across the sky, the phases of the moon, and the swing of a pendulum. Time is also of significant social importance, having economic value ("time is money") as well as personal value, due to an awareness of the limited time in each day and in human life spans.
There are many systems for determining what time it is, including the Global Positioning System, other satellite systems, Coordinated Universal Time and mean solar time. In general, the numbers obtained from different time systems differ from one another.
Measurement
Generally speaking, methods of temporal measurement, or chronometry, take two distinct forms: the calendar, a mathematical tool for organising intervals of time,
and the clock, a physical mechanism that counts the passage of time. In day-to-day life, the clock is consulted for periods less than a day, whereas the calendar is consulted for periods longer than a day. Increasingly, personal electronic devices display both calendars and clocks simultaneously. The number (as on a clock dial or calendar) that marks the occurrence of a specified event as to hour or date is obtained by counting from a fiducial epoch – a central reference point.
History of the calendar
Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago. Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months (either 354 or 384 days). Without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months. Lunisolar calendars have a thirteenth month added to some years to make up for the difference between a full year (now known to be about 365.24 days) and a year of just twelve lunar months. The numbers twelve and thirteen came to feature prominently in many cultures, at least partly due to this relationship of months to years. Other early forms of calendars originated in Mesoamerica, particularly in ancient Mayan civilization. These calendars were religiously and astronomically based, with 18 months in a year and 20 days in a month, plus five epagomenal days at the end of the year.

The reforms of Julius Caesar in 45 BC put the Roman world on a solar calendar. This Julian calendar was faulty in that its intercalation still allowed the astronomical solstices and equinoxes to advance against it by about 11 minutes per year. Pope Gregory XIII introduced a correction in 1582; the Gregorian calendar was only slowly adopted by different nations over a period of centuries, but it is now by far the most commonly used calendar around the world.
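The Julian calendar's roughly 11-minute annual drift comes straight from its intercalation rule, and the Gregorian reform is a small change to that rule. A minimal sketch (Python) of the two rules:

```python
def is_leap_julian(year: int) -> bool:
    # Julian rule: every fourth year, giving a mean year of 365.25 days,
    # about 11 minutes longer than the tropical year (~365.2422 days).
    return year % 4 == 0

def is_leap_gregorian(year: int) -> bool:
    # Gregorian correction: century years are leap years only if divisible
    # by 400, giving a mean year of 365.2425 days.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1900 shows the difference: leap under the Julian rule, common under the Gregorian.
print(is_leap_julian(1900), is_leap_gregorian(1900))  # True False
```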
During the French Revolution, a new clock and calendar were invented in an attempt to de-Christianize time and create a more rational system in order to replace the Gregorian calendar. The French Republican Calendar's days consisted of ten hours of a hundred minutes of a hundred seconds, which marked a deviation from the base 12 (duodecimal) system used in many other devices by many cultures. The system was abolished in 1806.
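Decimal time of this kind converts to the ordinary 24-hour clock by simple proportion, since a decimal day has 10 × 100 × 100 = 100,000 decimal seconds against 86,400 standard seconds. A small sketch (Python):

```python
def decimal_to_standard(h: int, m: int, s: int) -> tuple[int, int, int]:
    # Fraction of the day elapsed on the decimal clock.
    fraction = (h * 10_000 + m * 100 + s) / 100_000
    total = round(fraction * 86_400)  # standard seconds per day
    return total // 3600, (total % 3600) // 60, total % 60

# Decimal noon (5:00:00) is ordinary noon; one decimal second is 0.864 s.
print(decimal_to_standard(5, 0, 0))  # (12, 0, 0)
```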
History of other devices
A large variety of devices have been invented to measure time. The study of these devices is called horology.

An Egyptian device that dates to c. 1500 BC, similar in shape to a bent T-square, measured the passage of time from the shadow cast by its crossbar on a nonlinear rule. The T was oriented eastward in the mornings. At noon, the device was turned around so that it could cast its shadow in the evening direction.

A sundial uses a gnomon to cast a shadow on a set of markings calibrated to the hour. The position of the shadow marks the hour in local time. The idea to separate the day into smaller parts is credited to the Egyptians because of their sundials, which operated on a duodecimal system. The importance of the number 12 is due to the number of lunar cycles in a year and the number of stars used to count the passage of night.

The most precise timekeeping device of the ancient world was the water clock, or clepsydra, one of which was found in the tomb of Egyptian pharaoh Amenhotep I. They could be used to measure the hours even at night but required manual upkeep to replenish the flow of water. The ancient Greeks and the people from Chaldea (southeastern Mesopotamia) regularly maintained timekeeping records as an essential part of their astronomical observations. Arab inventors and engineers, in particular, made improvements on the use of water clocks up to the Middle Ages. In the 11th century, Chinese inventors and engineers invented the first mechanical clocks driven by an escapement mechanism.
The hourglass uses the flow of sand to measure the flow of time. Hourglasses were used in navigation; Ferdinand Magellan used 18 glasses on each ship for his circumnavigation of the globe (1522).

Incense sticks and candles were, and are, commonly used to measure time in temples and churches across the globe. Water clocks, and later, mechanical clocks, were used to mark the events of the abbeys and monasteries of the Middle Ages. Richard of Wallingford (1292–1336), abbot of St. Albans abbey, famously built a mechanical clock as an astronomical orrery about 1330.

Great advances in accurate time-keeping were made by Galileo Galilei and especially Christiaan Huygens with the invention of pendulum-driven clocks, along with the invention of the minute hand by Jost Bürgi.

The English word clock probably comes from the Middle Dutch word klocke which, in turn, derives from the medieval Latin word clocca, which ultimately derives from Celtic and is cognate with French, Latin, and German words that mean bell. The passage of the hours at sea was marked by bells and denoted the time (see ship's bell). The hours were marked by bells in abbeys as well as at sea.
Clocks can range from watches to more exotic varieties such as the Clock of the Long Now. They can be driven by a variety of means, including gravity, springs, and various forms of electrical power, and regulated by a variety of means such as a pendulum.
Alarm clocks first appeared in ancient Greece around 250 BC with a water clock that would set off a whistle. This idea was later mechanized by Levi Hutchins and Seth E. Thomas.

A chronometer is a portable timekeeper that meets certain precision standards. Initially, the term was used to refer to the marine chronometer, a timepiece used to determine longitude by means of celestial navigation, a precision first achieved by John Harrison. More recently, the term has also been applied to the chronometer watch, a watch that meets precision standards set by the Swiss agency COSC.
The most accurate timekeeping devices are atomic clocks, which are accurate to seconds in many millions of years, and are used to calibrate other clocks and timekeeping instruments.
Atomic clocks use the frequency of electronic transitions in certain atoms to measure the second. One of the atoms used is caesium: most modern atomic clocks probe caesium with microwaves to determine the frequency of these electron vibrations. Since 1967, the International System of Units has based its unit of time, the second, on the properties of caesium atoms. SI defines the second as the duration of 9,192,631,770 cycles of the radiation that corresponds to the transition between two hyperfine energy levels of the ground state of the 133Cs atom.
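Read as a statement about frequency, the definition fixes the caesium transition at exactly 9,192,631,770 Hz, so the period of a single cycle follows directly. A trivial check with exact arithmetic (Python):

```python
from fractions import Fraction

CS_HYPERFINE_HZ = 9_192_631_770  # exact, by the SI definition of the second

period = Fraction(1, CS_HYPERFINE_HZ)  # duration of one cycle, in seconds
print(float(period))                   # ~1.088e-10 s per cycle
print(period * CS_HYPERFINE_HZ == 1)   # True: that many cycles is one second
```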
Today, the Global Positioning System in coordination with the Network Time Protocol can be used to synchronize timekeeping systems across the globe.
In medieval philosophical writings, the atom was a unit of time referred to as the smallest possible division of time. The earliest known occurrence in English is in Byrhtferth's Enchiridion (a science text) of 1010–1012, where it was defined as 1/564 of a momentum (1½ minutes), and thus equal to 15/94 of a second. It was used in the computus, the process of calculating the date of Easter.
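The stated equivalence is easy to verify: a momentum of 1½ minutes is 90 seconds, and 90/564 reduces to exactly 15/94 of a second. Checked with exact rational arithmetic (Python):

```python
from fractions import Fraction

momentum = Fraction(90)  # one momentum = 1.5 minutes = 90 seconds
atom = momentum / 564    # the medieval "atom" of time
print(atom)              # 15/94
print(float(atom))       # ~0.1596 seconds
```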
As of May 2010, the smallest time interval uncertainty in direct measurements is on the order of 12 attoseconds (1.2 × 10⁻¹⁷ seconds), about 2.2 × 10²⁶ Planck times.
Units
The second (s) is the SI base unit. A minute (min) is 60 seconds in length, and an hour is 60 minutes or 3,600 seconds in length. A day is usually 24 hours, or 86,400 seconds, in length; however, the duration of a calendar day can vary due to daylight saving time and leap seconds.
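A civil day spanning a daylight-saving transition is a concrete case of this variation. A quick check using Python's standard zoneinfo module (the zone and date are an illustrative choice; clocks in that zone sprang forward on 14 March 2021):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/New_York")
start = datetime(2021, 3, 14, tzinfo=tz)  # midnight, day of spring-forward
end = datetime(2021, 3, 15, tzinfo=tz)    # midnight, next day

print(end - start)  # 23:00:00, i.e. a 23-hour calendar day
```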
Definitions and standards
A time standard is a specification for measuring time: assigning a number or calendar date to an instant (point in time), quantifying the duration of a time interval, and establishing a chronology (ordering of events). In modern times, several time specifications have been officially recognized as standards, where formerly they were matters of custom and practice. The invention in 1955 of the caesium atomic clock has led to the replacement of older and purely astronomical time standards such as sidereal time and ephemeris time, for most practical purposes, by newer time standards based wholly or partly on atomic time using the SI second.
International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. Universal Time (UT1) is mean solar time at 0° longitude, computed from astronomical observations. It varies from TAI because of the irregularities in Earths rotation. Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the "leap second". The Global Positioning System broadcasts a very precise time signal based on UTC time.
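Because UTC differs from TAI by a whole number of leap seconds, converting between the two is a single integer offset that changes only when a new leap second is announced. A minimal sketch (Python; the 37-second offset is the value in force since January 2017 and is hard-coded here as an assumption, whereas real code would consult a maintained leap-second table):

```python
from datetime import datetime, timedelta

TAI_MINUS_UTC = 37  # seconds; assumed valid since 2017-01-01

def utc_from_tai(tai: datetime) -> datetime:
    # UTC lags TAI by the accumulated number of leap seconds.
    return tai - timedelta(seconds=TAI_MINUS_UTC)

print(utc_from_tai(datetime(2023, 6, 1, 12, 0, 37)))  # 2023-06-01 12:00:00
```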
The surface of the Earth is split up into a number of time zones. Standard time or civil time in a time zone deviates a fixed, round amount, usually a whole number of hours, from some form of Universal Time, usually UTC. Most time zones are exactly one hour apart, and by convention compute their local time as an offset from UTC. For example, time zones at sea are based on UTC. In many locations (but not at sea) these offsets vary twice yearly due to daylight saving time transitions.
Some other time standards are used mainly for scientific work. Terrestrial Time is a theoretical ideal scale realized by TAI. Geocentric Coordinate Time and Barycentric Coordinate Time are scales defined as coordinate times in the context of the general theory of relativity. Barycentric Dynamical Time is an older relativistic scale that is still in use.
Philosophy
Religion
Linear and cyclical
Ancient cultures such as Incan, Mayan, Hopi, and other Native American Tribes – plus the Babylonians, ancient Greeks, Hinduism, Buddhism, Jainism, and others – have a concept of a wheel of time: they regard time as cyclical and quantic, consisting of repeating ages that happen to every being of the Universe between birth and extinction.

In general, the Islamic and Judeo-Christian world-view regards time as linear
and directional,
beginning with the act of creation by God. The traditional Christian view sees time ending, teleologically,
with the eschatological end of the present order of things, the "end time".
In the Old Testament book Ecclesiastes, traditionally ascribed to Solomon (970–928 BC), time (as the Hebrew words עידן iddan, "age", as in "Ice Age", and זמן zĕman, "time", are often translated) was traditionally regarded as a medium for the passage of predestined events. (Another word, زمان / זמן zamān, meant time fit for an event, and is used as the modern Arabic, Persian, and Hebrew equivalent to the English word "time".)
Time in Greek mythology
The Greek language denotes two distinct principles, Chronos and Kairos. The former refers to numeric, or chronological, time. The latter, literally "the right or opportune moment", relates specifically to metaphysical or Divine time. In theology, Kairos is qualitative, as opposed to quantitative.

In Greek mythology, Chronos (ancient Greek: Χρόνος) is identified as the Personification of Time. His name in Greek means "time" and is alternatively spelled Chronus (Latin spelling) or Khronos. Chronos is usually portrayed as an old, wise man with a long, gray beard, such as "Father Time". Some English words whose etymological root is khronos/chronos include chronology, chronometer, chronic, anachronism, synchronise, and chronicle.
Time in Kabbalah
According to Kabbalists, "time" is a paradox and an illusion. Both the future and the past are recognised to be combined and simultaneously present.
In Western philosophy
Two contrasting viewpoints on time divide prominent philosophers. One view is that time is part of the fundamental structure of the universe – a dimension independent of events, in which events occur in sequence. Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian time.
The opposing view is that time does not refer to any kind of "container" that events and objects "move through", nor to any entity that "flows", but that it is instead part of a fundamental intellectual structure (together with space and number) within which humans sequence and compare events. This second view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, and thus is not itself measurable nor can it be travelled.
Furthermore, it may be that there is a subjective component to time, but whether or not time itself is "felt", as a sensation, or is a judgment, is a matter of debate.

In philosophy, time has been questioned throughout the centuries: what time is, and whether it is real or not. Ancient Greek philosophers asked if time was linear or cyclical and if time was endless or finite. These philosophers had different ways of explaining time; for instance, ancient Indian philosophers had something called the Wheel of Time, the belief that repeating ages occur over the lifespan of the universe. This led to beliefs such as cycles of rebirth and reincarnation. The Greek philosophers believed that the universe was infinite, and was an illusion to humans. Plato believed that time was made by the Creator at the same instant as the heavens. He also said that time is a period of motion of the heavenly bodies. Aristotle believed that time correlated to movement, that time did not exist on its own but was relative to the motion of objects. He also believed that time was related to the motion of celestial bodies; the reason that humans can tell time is because of orbital periods, and therefore there is a duration to time.

The Vedas, the earliest texts on Indian philosophy and Hindu philosophy, dating back to the late 2nd millennium BC, describe ancient Hindu cosmology, in which the universe goes through repeated cycles of creation, destruction and rebirth, with each cycle lasting 4,320 million years. Ancient Greek philosophers, including Parmenides and Heraclitus, wrote essays on the nature of time. Plato, in the Timaeus, identified time with the period of motion of the heavenly bodies. Aristotle, in Book IV of his Physica, defined time as the number of movement in respect of the before and after.

In Book 11 of his Confessions, St. Augustine of Hippo ruminates on the nature of time, asking, "What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not." He begins to define time by what it is not rather than what it is,
an approach similar to that taken in other negative definitions. However, Augustine ends up calling time a "distention" of the mind (Confessions 11.26) by which we simultaneously grasp the past in memory, the present by attention, and the future by expectation.
Isaac Newton believed in absolute space and absolute time; Leibniz believed that time and space are relational.
The differences between Leibniz's and Newton's interpretations came to a head in the famous Leibniz–Clarke correspondence.
Philosophers in the 17th and 18th century questioned whether time was real and absolute, or an intellectual concept that humans use to understand and sequence events. These questions led to realism versus anti-realism: the realists believed that time is a fundamental part of the universe, and is perceived through events happening in a sequence, in a dimension. Isaac Newton said that we are merely occupying time, and that humans can only understand relative time. Relative time is a measurement of objects in motion. The anti-realists believed that time is merely a convenient intellectual concept for humans to understand events. This means that time was useless unless there were objects that it could interact with; this was called relational time. René Descartes, John Locke, and David Hume said that one's mind needs to acknowledge time in order to understand what time is. Immanuel Kant believed that we cannot know what something is unless we experience it first hand.
Immanuel Kant, in the Critique of Pure Reason, described time as an a priori intuition that allows us (together with the other a priori intuition, space) to comprehend sense experience.
With Kant, neither space nor time are conceived as substances, but rather both are elements of a systematic mental framework that necessarily structures the experiences of any rational agent, or observing subject. Kant thought of time as a fundamental part of an abstract conceptual framework, together with space and number, within which we sequence events, quantify their duration, and compare the motions of objects. In this view, time does not refer to any kind of entity that "flows," that objects "move through," or that is a "container" for events. Spatial measurements are used to quantify the extent of and distances between objects, and temporal measurements are used to quantify the durations of and between events. Time was designated by Kant as the purest possible schema of a pure concept or category.
Henri Bergson believed that time was neither a real homogeneous medium nor a mental construct, but possesses what he referred to as Duration. Duration, in Bergson's view, was creativity and memory as an essential component of reality.

According to Martin Heidegger, we do not exist inside time; we are time. Hence, the relationship to the past is a present awareness of having been, which allows the past to exist in the present. The relationship to the future is the state of anticipating a potential possibility, task, or engagement. It is related to the human propensity for caring and being concerned, which causes "being ahead of oneself" when thinking of a pending occurrence. Therefore, this concern for a potential occurrence also allows the future to exist in the present. The present becomes an experience, which is qualitative instead of quantitative. Heidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcended.
We are not stuck in sequential time. We are able to remember the past and project into the future – we have a kind of random access to our representation of temporal existence; we can, in our thoughts, step out of (ecstasis) sequential time.

Modern-era philosophers asked: is time real or unreal, is time happening all at once or is it a duration, is time tensed or tenseless, and is there a future to be? There is a theory called the tenseless or B-theory; this theory says that any tensed terminology can be replaced with tenseless terminology. For example, "we will win the game" can be replaced with "we do win the game", taking out the future tense. On the other hand, there is a theory called the tense or A-theory; this theory says that our language has tense verbs for a reason and that the future cannot be determined. There is also something called imaginary time, from Stephen Hawking; he says that space and imaginary time are finite but have no boundaries. Imaginary time is not real or unreal; it is something that is hard to visualize. Philosophers can agree that physical time exists outside of the human mind and is objective, and psychological time is mind-dependent and subjective.
Unreality
In 5th-century BC Greece, Antiphon the Sophist, in a fragment preserved from his chief work On Truth, held that: "Time is not a reality (hypostasis), but a concept (noêma) or a measure (metron)." Parmenides went further, maintaining that time, motion, and change were illusions, leading to the paradoxes of his follower Zeno. Time as an illusion is also a common theme in Buddhist thought.

J. M. E. McTaggart's 1908 The Unreality of Time argues that, since every event has the characteristic of being both present and not present (i.e., future or past), time is a self-contradictory idea (see also The flow of time).
These arguments often center on what it means for something to be unreal. Modern physicists generally believe that time is as real as space – though others, such as Julian Barbour in his book The End of Time, argue that quantum equations of the universe take their true form when expressed in the timeless realm containing every possible now or momentary configuration of the universe, called "platonia" by Barbour.

A modern philosophical theory called presentism views the past and the future as human-mind interpretations of movement instead of real parts of time (or "dimensions") which coexist with the present. This theory rejects the existence of all direct interaction with the past or the future, holding only the present as tangible. This is one of the philosophical arguments against time travel. This contrasts with eternalism (all time: present, past and future, is real) and the growing block theory (the present and the past are real, but the future is not).
Physical definition
Until Einstein's reinterpretation of the physical concepts associated with time and space in 1907, time was considered to be the same everywhere in the universe, with all observers measuring the same time interval for any event.
Non-relativistic classical mechanics is based on this Newtonian idea of time.
Einstein, in his special theory of relativity,
postulated the constancy and finiteness of the speed of light for all observers. He showed that this postulate, together with a reasonable definition for what it means for two events to be simultaneous, requires that distances appear compressed and time intervals appear lengthened for events associated with objects in motion relative to an inertial observer.
The theory of special relativity finds a convenient formulation in Minkowski spacetime, a mathematical structure that combines three dimensions of space with a single dimension of time. In this formalism, distances in space can be measured by how long light takes to travel that distance, e.g., a light-year is a measure of distance, and a meter is now defined in terms of how far light travels in a certain amount of time. Two events in Minkowski spacetime are separated by an invariant interval, which can be either space-like, light-like, or time-like. Events that have a time-like separation cannot be simultaneous in any frame of reference; there must be a temporal component (and possibly a spatial one) to their separation. Events that have a space-like separation will be simultaneous in some frame of reference, and there is no frame of reference in which they do not have a spatial separation. Different observers may calculate different distances and different time intervals between two events, but the invariant interval between the events is independent of the observer (and his or her velocity).
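In units where c = 1, the invariant interval between two events separated by (t, x, y, z) can be written s² = −t² + x² + y² + z², and its sign gives the classification just described. A short sketch (Python; the (−, +, +, +) sign convention is one of the two in common use):

```python
def interval_squared(t, x, y, z):
    # Minkowski interval with signature (-, +, +, +), in units where c = 1.
    return -t**2 + x**2 + y**2 + z**2

def classify(t, x, y, z):
    s2 = interval_squared(t, x, y, z)
    if s2 < 0:
        return "time-like"   # some frame puts both events at the same place
    if s2 > 0:
        return "space-like"  # some frame makes the events simultaneous
    return "light-like"      # the events are connected by a light signal

print(classify(2, 1, 0, 0))  # time-like
print(classify(1, 2, 0, 0))  # space-like
print(classify(1, 1, 0, 0))  # light-like
```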
Classical mechanics
In non-relativistic classical mechanics, Newton's concept of "relative, apparent, and common time" can be used in the formulation of a prescription for the synchronization of clocks. Events seen by two different observers in motion relative to each other produce a mathematical concept of time that works sufficiently well for describing the everyday phenomena of most people's experience. In the late nineteenth century, physicists encountered problems with the classical understanding of time, in connection with the behavior of electricity and magnetism. Einstein resolved these problems by invoking a method of synchronizing clocks using the constant, finite speed of light as the maximum signal velocity. This led directly to the conclusion that observers in motion relative to one another measure different elapsed times for the same event.
Spacetime
Time has historically been closely related with space, the two together merging into spacetime in Einstein's special relativity and general relativity. According to these theories, the concept of time depends on the spatial reference frame of the observer, and human perception, as well as the measurement by instruments such as clocks, are different for observers in relative motion. For example, if a spaceship carrying a clock flies through space at (very nearly) the speed of light, its crew does not notice a change in the speed of time on board their vessel because everything traveling at the same speed slows down at the same rate (including the clock, the crew's thought processes, and the functions of their bodies). However, to a stationary observer watching the spaceship fly by, the spaceship appears flattened in the direction it is traveling and the clock on board the spaceship appears to move very slowly.
On the other hand, the crew on board the spaceship also perceives the observer as slowed down and flattened along the spaceship's direction of travel, because both are moving at very nearly the speed of light relative to each other. Because the outside universe appears flattened to the spaceship, the crew perceives themselves as quickly traveling between regions of space that (to the stationary observer) are many light years apart. This is reconciled by the fact that the crew's perception of time is different from the stationary observer's; what seems like seconds to the crew might be hundreds of years to the stationary observer. In either case, however, causality remains unchanged: the past is the set of events that can send light signals to an entity and the future is the set of events to which an entity can send light signals.
Dilation
Einstein showed in his thought experiments that people travelling at different speeds, while agreeing on cause and effect, measure different time separations between events, and can even observe different chronological orderings between non-causally related events. Though these effects are typically minute in the human experience, the effect becomes much more pronounced for objects moving at speeds approaching the speed of light. Subatomic particles exist for a well-known average fraction of a second in a lab relatively at rest, but when travelling close to the speed of light they are measured to travel farther and exist for much longer than when at rest. According to the special theory of relativity, in the high-speed particle's frame of reference, it exists, on average, for a standard amount of time known as its mean lifetime, and the distance it travels in that time is zero, because its velocity is zero. Relative to a frame of reference at rest, time seems to "slow down" for the particle. Relative to the high-speed particle, distances seem to shorten. Einstein showed how both temporal and spatial dimensions can be altered (or "warped") by high-speed motion.
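Numerically, the effect is governed by the Lorentz factor γ = 1/√(1 − v²/c²). The sketch below (Python) applies it to a muon, whose rest-frame mean lifetime of about 2.2 μs is a well-established measured value; the 0.995c speed is an assumed example:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v: float) -> float:
    # gamma = 1 / sqrt(1 - v^2 / c^2)
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

proper_lifetime = 2.2e-6  # muon mean lifetime in its rest frame, s
v = 0.995 * C             # assumed speed, for illustration

gamma = lorentz_gamma(v)
print(f"gamma = {gamma:.1f}")                                     # ~10.0
print(f"lab lifetime = {gamma * proper_lifetime:.2e} s")          # ~2.2e-05
print(f"distance = {v * gamma * proper_lifetime / 1000:.1f} km")  # ~6.6
```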
Einstein (The Meaning of Relativity): "Two events taking place at the points A and B of a system K are simultaneous if they appear at the same instant when observed from the middle point, M, of the interval AB. Time is then defined as the ensemble of the indications of similar clocks, at rest relative to K, which register the same simultaneously."
Einstein wrote in his book, Relativity, that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference.
Relativistic versus Newtonian
The animations visualise the different treatments of time in the Newtonian and the relativistic descriptions. At the heart of these differences are the Galilean and Lorentz transformations applicable in the Newtonian and relativistic theories, respectively.
In the figures, the vertical direction indicates time. The horizontal direction indicates distance (only one spatial dimension is taken into account), and the thick dashed curve is the spacetime trajectory ("world line") of the observer. The small dots indicate specific (past and future) events in spacetime.
The slope of the world line (deviation from being vertical) gives the relative velocity to the observer. Note how in both pictures the view of spacetime changes when the observer accelerates.
In the Newtonian description these changes are such that time is absolute: the movements of the observer do not influence whether an event occurs in the now (i.e., whether an event passes the horizontal line through the observer).
However, in the relativistic description the observability of events is absolute: the movements of the observer do not influence whether an event passes the "light cone" of the observer. Notice that with the change from a Newtonian to a relativistic description, the concept of absolute time is no longer applicable: events move up and down in the figure depending on the acceleration of the observer.
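The contrast the animations draw can also be stated algebraically. In one spatial dimension, with c = 1, a boost by velocity v maps an event (t, x) as follows (a standard textbook form, sketched in Python):

```python
import math

def galilean_boost(t, x, v):
    # Newtonian picture: time is absolute, only position shifts.
    return t, x - v * t

def lorentz_boost(t, x, v):
    # Relativistic picture (c = 1): time and space coordinates mix.
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

event = (1.0, 0.5)  # (t, x), an arbitrary example event
print(galilean_boost(*event, v=0.6))  # (1.0, ~-0.1): t unchanged
print(lorentz_boost(*event, v=0.6))   # (~0.875, ~-0.125): t changes too
```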
Arrow
Time appears to have a direction – the past lies behind, fixed and immutable, while the future lies ahead and is not necessarily fixed. Yet for the most part, the laws of physics do not specify an arrow of time, and allow any process to proceed both forward and in reverse. This is generally a consequence of time being modelled by a parameter in the system being analysed, where there is no "proper time": the direction of the arrow of time is sometimes arbitrary. Examples of this include the cosmological arrow of time, which points away from the Big Bang, CPT symmetry, and the radiative arrow of time, caused by light only travelling forwards in time (see light cone). In particle physics, the violation of CP symmetry implies that there should be a small counterbalancing time asymmetry to preserve CPT symmetry as stated above. The standard description of measurement in quantum mechanics is also time asymmetric (see Measurement in quantum mechanics). The second law of thermodynamics states that entropy must increase over time (see Entropy). This can be in either direction – Brian Greene theorizes that, according to the equations, the change in entropy occurs symmetrically whether going forward or backward in time. So entropy tends to increase in either direction, and our current low-entropy universe is a statistical aberration, in a manner similar to tossing a coin often enough that eventually heads will result ten times in a row. However, this theory is not supported empirically in local experiments.
Quantization
Time quantization is a hypothetical concept. In the modern established physical theories (the Standard Model of Particles and Interactions and General Relativity) time is not quantized.
Planck time (~5.4 × 10⁻⁴⁴ seconds) is the unit of time in the system of natural units known as Planck units. Current established physical theories are believed to fail at this time scale, and many physicists expect that the Planck time might be the smallest unit of time that could ever be measured, even in principle. Tentative physical theories that describe this time scale exist; see for instance loop quantum gravity.
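The quoted value follows from three fundamental constants via t_P = √(ħG/c⁵). A quick computation (Python, with CODATA constant values typed in by hand):

```python
import math

hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
G = 6.674_30e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0         # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"{t_planck:.3e} s")  # ~5.391e-44 s
```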
Travel
Time travel is the concept of moving backwards or forwards to different points in time, in a manner analogous to moving through space, and different from the normal "flow" of time to an earthbound observer. In this view, all points in time (including future times) "persist" in some way. Time travel has been a plot device in fiction since the 19th century. Travelling backwards or forwards in time has never been verified as a process, and doing so presents many theoretical problems and contradictory logic which to date have not been overcome. Any technological device, whether fictional or hypothetical, that is used to achieve time travel is known as a time machine.
A central problem with time travel to the past is the violation of causality; should an effect precede its cause, it would give rise to the possibility of a temporal paradox. Some interpretations of time travel resolve this by accepting the possibility of travel between branch points, parallel realities, or universes.
Another solution to the problem of causality-based temporal paradoxes is that such paradoxes cannot arise simply because they have not arisen. As illustrated in numerous works of fiction, free will either ceases to exist in the past or the outcomes of such decisions are predetermined. As such, it would not be possible to enact the grandfather paradox because it is a historical fact that one's grandfather was not killed before his child (one's parent) was conceived. This view does not simply hold that history is an unchangeable constant, but that any change made by a hypothetical future time traveller would already have happened in his or her past, resulting in the reality that the traveller moves from. More elaboration on this view can be found in the Novikov self-consistency principle.
Perception
The specious present refers to the time duration wherein ones perceptions are considered to be in the present. The experienced present is said to be specious in that, unlike the objective present, it is an interval and not a durationless instant. The term specious present was first introduced by the psychologist E.R. Clay, and later developed by William James.
Biopsychology
The brain's judgment of time is known to be a highly distributed system, including at least the cerebral cortex, cerebellum and basal ganglia as its components. One particular component, the suprachiasmatic nuclei, is responsible for the circadian (or daily) rhythm, while other cell clusters appear capable of shorter-range (ultradian) timekeeping.
Psychoactive drugs can impair the judgment of time. Stimulants can lead both humans and rats to overestimate time intervals, while depressants can have the opposite effect. The level of activity in the brain of neurotransmitters such as dopamine and norepinephrine may be the reason for this. Such chemicals will either excite or inhibit the firing of neurons in the brain, with a greater firing rate allowing the brain to register the occurrence of more events within a given interval (speeding up time) and a decreased firing rate reducing the brain's capacity to distinguish events occurring within a given interval (slowing down time).

Mental chronometry is the use of response time in perceptual-motor tasks to infer the content, duration, and temporal sequencing of cognitive operations.
Early childhood education
Children's expanding cognitive abilities allow them to understand time more clearly. Two- and three-year-olds' understanding of time is mainly limited to "now and not now". Five- and six-year-olds can grasp the ideas of past, present, and future. Seven- to ten-year-olds can use clocks and calendars.
Alterations
In addition to psychoactive drugs, judgments of time can be altered by temporal illusions (like the kappa effect), age, and hypnosis. The sense of time is impaired in some people with neurological diseases such as Parkinson's disease and attention deficit disorder.
Psychologists assert that time seems to go faster with age, but the literature on this age-related perception of time remains controversial. Those who support this notion argue that young people, having more excitatory neurotransmitters, are able to cope with faster external events.
Spatial conceptualization
Although time is regarded as an abstract concept, there is increasing evidence that time is conceptualized in the mind in terms of space. That is, instead of thinking about time in a general, abstract way, humans think about time in a spatial way and mentally organize it as such. Using space to think about time allows humans to mentally organize temporal events in a specific way.
This spatial representation of time is often represented in the mind as a Mental Time Line (MTL). The origins of the MTL are shaped by many environmental factors; for example, literacy appears to play a large role in the different types of MTLs, as reading/writing direction provides an everyday temporal orientation that differs from culture to culture. In Western cultures, the MTL may unfold rightward (with the past on the left and the future on the right) since people read and write from left to right. Western calendars also continue this trend by placing the past on the left with the future progressing toward the right. Conversely, Arabic, Farsi, Urdu and Israeli-Hebrew speakers read from right to left, and their MTLs unfold leftward (past on the right with future on the left), and evidence suggests these speakers organize time events in their minds like this as well.

This linguistic evidence that abstract concepts are based in spatial concepts also reveals that the way humans mentally organize time events varies across cultures; that is, no single mental organization system is universal. So, although Western cultures typically associate past events with the left and future events with the right according to a certain MTL, this kind of horizontal, egocentric MTL is not the spatial organization of all cultures. Although most developed nations use an egocentric spatial system, there is recent evidence that some cultures use an allocentric spatialization, often based on environmental features.

A recent study of the indigenous Yupno people of Papua New Guinea focused on the directional gestures used when individuals used time-related words. When speaking of the past (such as "last year" or "past times"), individuals gestured downhill, where the river of the valley flowed into the ocean. When speaking of the future, they gestured uphill, toward the source of the river. This was common regardless of which direction the person faced, revealing that the Yupno people may use an allocentric MTL, in which time flows uphill.

A similar study of the Pormpuraawans, an Aboriginal group in Australia, revealed a similar distinction: when asked to organize photos of a man aging "in order", individuals consistently placed the youngest photos to the east and the oldest photos to the west, regardless of which direction they faced. This directly clashed with an American group that consistently organized the photos from left to right. Therefore, this group also appears to have an allocentric MTL, but based on the cardinal directions instead of geographical features.

The wide array of distinctions in the way different groups think about time leads to the broader question of whether different groups may also think about other abstract concepts in different ways as well, such as causality and number.
Use
In sociology and anthropology, time discipline is the general name given to social and economic rules, conventions, customs, and expectations governing the measurement of time, the social currency and awareness of time measurements, and people's expectations concerning the observance of these customs by others. Arlie Russell Hochschild and Norbert Elias have written on the use of time from a sociological perspective.
The use of time is an important issue in understanding human behavior, education, and travel behavior. Time-use research is a developing field of study. The question concerns how time is allocated across a number of activities (such as time spent at home, at work, shopping, etc.). Time use changes with technology, as television and the Internet have created new opportunities to use time in different ways. However, some aspects of time use are relatively stable over long periods of time, such as the amount of time spent traveling to work, which, despite major changes in transport, has been observed to be about 20–30 minutes one-way for a large number of cities over a long period.
Time management is the organization of tasks or events by first estimating how much time a task requires and when it must be completed, and adjusting events that would interfere with its completion so it is done in the appropriate amount of time. Calendars and day planners are common examples of time management tools.
Sequence of events
A sequence of events, or series of events, is a sequence of items, facts, events, actions, changes, or procedural steps, arranged in time order (chronological order), often with causality relationships among the items.
Because of causality, cause precedes effect, or cause and effect may appear together in a single item, but effect never precedes cause. A sequence of events can be presented in text, tables, charts, or timelines. The description of the items or events may include a timestamp </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Tinnitus and can you explain it in depth? [/INST] Tinnitus is the perception of sound when no corresponding external sound is present. Nearly everyone will experience a faint "normal tinnitus" in a completely quiet room, but it is only of concern if it is bothersome, interferes with normal hearing, or is correlated with other problems. While often described as a ringing, it may also sound like clicking, buzzing, hissing, or roaring. The sound may be soft or loud, low or high pitched, and often appears to be coming from one or both ears or from the head itself. In some people, the sound may interfere with concentration, and in some cases it is associated with anxiety and depression. Tinnitus is usually associated with a degree of hearing loss and with decreased comprehension of speech in noisy environments. It is common, affecting about 10–15% of people. Most, however, tolerate it well, and it is a significant problem in only 1–2% of all people. It can trigger a fight-or-flight response, as the brain may perceive it as dangerous and important. The word tinnitus comes from the Latin tinnire, which means "to ring".

Rather than a disease, tinnitus is a symptom that may result from various underlying causes and may be generated at any level of the auditory system and in structures beyond that system. The most common causes are hearing damage, noise-induced hearing loss, or age-related hearing loss, known as presbycusis. Other causes include ear infections, disease of the heart or blood vessels, Ménière's disease, brain tumors, acoustic neuromas (tumors on the auditory nerves of the ear), migraines, temporomandibular joint disorders, exposure to certain medications, a previous head injury, and earwax; tinnitus can also suddenly emerge during a period of emotional stress. It is more common in those with depression.

The diagnosis of tinnitus is usually based on the person's description. It is commonly supported by an audiogram and by otolaryngological and neurological examinations. The degree of interference with a person's life may be quantified with questionnaires. If certain problems are found, medical imaging, such as magnetic resonance imaging (MRI), may be performed. Other tests are suitable when tinnitus occurs with the same rhythm as the heartbeat. Rarely, the sound may be heard by someone else using a stethoscope, in which case it is known as objective tinnitus. Occasionally, spontaneous otoacoustic emissions, sounds produced normally by the inner ear, may result in tinnitus.

Prevention involves avoiding prolonged or chronic exposure to loud noise. If there is an underlying cause, treating it may lead to improvements. Otherwise, management typically involves psychoeducation or counseling, such as talk therapy. Sound generators or hearing aids may help. No medication directly targets tinnitus.
Signs and symptoms
Tinnitus may be perceived in one or both ears, or more centrally in the head. The noise commonly occurs inside a person's head or ear(s) in the absence of auditory stimulation, similar to ringing, although in some people, it is a high-pitched whining or electric buzzing, among numerous other sounds. Tinnitus may be intermittent or continuous. In some individuals, the intensity may be changed by shoulder, neck, head, tongue, jaw, or eye movements.

The specific type of tinnitus called objective tinnitus is characterized by hearing the sounds of one's own muscle contractions or pulse, which is typically a result of sounds that have been created by the movement of jaw muscles or sounds related to blood flow in the neck or face.
Course
Due to variations in study designs, data on the course of tinnitus showed few consistent results. Generally, the prevalence increased with age in adults, whereas the ratings of annoyance decreased with duration.
Psychological effects
Although an annoying condition to which most people adapt, persistent tinnitus may cause anxiety and depression in some people. Tinnitus annoyance is more strongly associated with the psychological condition of the person than the loudness or frequency range. Psychological problems such as depression, anxiety, sleep disturbances, and concentration difficulties are common in those with strongly annoying tinnitus. 45% of people with tinnitus have an anxiety disorder at some time in their life.

Psychological research has focused on the tinnitus distress reaction to account for differences in tinnitus severity. The research indicates that conditioning at the initial perception of tinnitus linked it with negative emotions, such as fear and anxiety.
Types
A common classification divides tinnitus into "subjective" and "objective" tinnitus. Tinnitus is usually subjective, meaning that the sounds the person hears are not detectable by means currently available to physicians and hearing technicians. Subjective tinnitus has also been called "tinnitus aurium", "non-auditory", or "non-vibratory" tinnitus. In rare cases, tinnitus can be heard by someone else using a stethoscope. Even more rarely, in some cases it can be measured as a spontaneous otoacoustic emission (SOAE) in the ear canal. This is classified as objective tinnitus, also called "pseudo-tinnitus" or "vibratory" tinnitus.
Subjective tinnitus
Subjective tinnitus is the most frequent type of tinnitus. It may have many possible causes, but most commonly it results from hearing loss. When the tinnitus is caused by disorders of the inner ear or auditory nerve it can be called otic (from the Greek word for ear). These otological or neurological conditions include those triggered by infections, drugs, or trauma. A frequent cause is traumatic noise exposure that damages hair cells in the inner ear.

When there does not seem to be a connection with a disorder of the inner ear or auditory nerve, the tinnitus can be called non-otic (i.e. not otic). In some 30% of tinnitus cases, the tinnitus is influenced by the somatosensory system; for instance, people can increase or decrease their tinnitus by moving their face, head, or neck. This type is called somatic or craniocervical tinnitus, since it is only head or neck movements that have an effect.

There is a growing body of evidence suggesting that some tinnitus is a consequence of neuroplastic alterations in the central auditory pathway. These alterations are assumed to result from a disturbed sensory input, caused by hearing loss. Hearing loss could indeed cause a homeostatic response of neurons in the central auditory system, and therefore cause tinnitus.
Hearing loss
The most common cause of tinnitus is hearing loss. Hearing loss may have many different causes, but among those with tinnitus, the major cause is cochlear injury.

Ototoxic drugs also may cause subjective tinnitus, as they may cause hearing loss or increase the damage done by exposure to loud noise. Those damages may occur even at doses that are not considered ototoxic. More than 260 medications have been reported to cause tinnitus as a side effect. Tinnitus can also occur due to the discontinuation of therapeutic doses of benzodiazepines. It can sometimes be a protracted symptom of benzodiazepine withdrawal and may persist for many months. Medications such as bupropion may also result in tinnitus. In many cases, however, no underlying cause can be identified.
Associated factors
Factors associated with tinnitus include:
ear problems and hearing loss:
conductive hearing loss
acoustic shock
loud noise or music
middle ear effusion
otitis
otosclerosis
Eustachian tube dysfunction
sensorineural hearing loss
excessive or loud noise (e.g. acoustic trauma)
presbycusis (age-associated hearing loss)
Ménières disease
endolymphatic hydrops
superior canal dehiscence
acoustic neuroma
mercury or lead poisoning
ototoxic medications
neurologic disorders:
Arnold–Chiari malformation
multiple sclerosis
head injury
giant cell arteritis
temporomandibular joint dysfunction
metabolic disorders:
vitamin B12 deficiency
iron deficiency anemia
psychiatric disorders:
depression
anxiety disorders
other factors:
vasculitis
some psychedelic drugs, which can produce temporary tinnitus-like symptoms as a side effect:
5-MeO-DET
diisopropyltryptamine (DiPT)
benzodiazepine withdrawal
intracranial hypertension or hypotension caused by, for example, encephalitis or a cerebrospinal fluid leak
Objective tinnitus
Objective tinnitus can be detected by other people and is sometimes caused by an involuntary twitching of a muscle or a group of muscles (myoclonus) or by a vascular condition. In some cases, tinnitus is generated by muscle spasms around the middle ear.

Spontaneous otoacoustic emissions (SOAEs), which are faint high-frequency tones that are produced in the inner ear and can be measured in the ear canal with a sensitive microphone, may also cause tinnitus. About 8% of those with SOAEs and tinnitus have SOAE-linked tinnitus, while the percentage of all cases of tinnitus caused by SOAEs is estimated at 4%.
Pediatric tinnitus
Children may be subject to pulsatile or continuous tinnitus. Pulsatile tinnitus in children typically involves anomalies and variants of vascular structures, while continuous tinnitus tends to involve the middle or inner ear structures. CT scans can check the integrity of these structures, and MR scans can evaluate the nerves and potential masses or malformations. Early diagnosis can prevent long-term impairments to development; imaging and categorizing the tinnitus as pulsatile or nonpulsatile helps create an efficient diagnosis.
Pulsatile tinnitus
Some people experience a sound that beats in time with their pulse, known as pulsatile tinnitus or vascular tinnitus. Pulsatile tinnitus is usually objective in nature, resulting from altered blood flow or increased blood turbulence near the ear, such as from atherosclerosis or venous hum, but it can also arise as a subjective phenomenon from an increased awareness of blood flow in the ear. Rarely, pulsatile tinnitus may be a symptom of potentially life-threatening conditions such as carotid artery aneurysm or carotid artery dissection. Pulsatile tinnitus may also indicate vasculitis, or more specifically, giant cell arteritis. Pulsatile tinnitus may also be an indication of idiopathic intracranial hypertension. Pulsatile tinnitus can be a symptom of intracranial vascular abnormalities and should be evaluated for irregular noises of blood flow (bruits).
Pathophysiology
Tinnitus may be caused by increased neural activity in the auditory brainstem, where the brain processes sounds, causing some auditory nerve cells to become over-excited. The basis of this theory is that many with tinnitus also have hearing loss.

Three 2016 reviews emphasized the large range and possible combinations of pathologies involved in tinnitus, which in turn result in a great variety of symptoms demanding specifically adapted therapies.
Diagnosis
The diagnostic approach is based on a history of the condition and an examination of the head, neck, and neurological system. Typically an audiogram is done, and occasionally medical imaging or electronystagmography. Treatable conditions may include middle ear infection, acoustic neuroma, concussion, and otosclerosis.

Evaluation of tinnitus can include a hearing test (audiogram), measurement of acoustic parameters of the tinnitus like pitch and loudness, and psychological assessment of comorbid conditions like depression, anxiety, and stress that are associated with the severity of the tinnitus.

One definition of tinnitus, as distinguished from the normal experience of ear noise, is a sound lasting five minutes and occurring at least twice a week. However, people with tinnitus often experience the noise more frequently than this. Tinnitus can be present constantly or intermittently. Some people with constant tinnitus might not be aware of it all the time, but only, for example, during the night when there is less environmental noise to mask it. Chronic tinnitus can be defined as tinnitus with a duration of six months or more.
Audiology
Since most persons with tinnitus also have hearing loss, a pure tone hearing test resulting in an audiogram may help diagnose a cause, though some persons with tinnitus do not have hearing loss. An audiogram may also facilitate fitting of a hearing aid in those cases where hearing loss is significant. The pitch of tinnitus is often in the range of the hearing loss.
Psychoacoustics
Acoustic qualification of tinnitus will include measurement of several acoustic parameters like frequency in cases of monotone tinnitus or frequency range and bandwidth in cases of narrow band noise tinnitus, loudness in dB above hearing threshold at the indicated frequency, mixing-point, and minimum masking level. In most cases, tinnitus pitch or frequency range is between 5 kHz and 10 kHz, and loudness between 5 and 15 dB above the hearing threshold.

Another relevant parameter of tinnitus is residual inhibition, the temporary suppression or disappearance of tinnitus following a period of masking. The degree of residual inhibition may indicate how effective tinnitus maskers would be as a treatment modality.

An assessment of hyperacusis, a frequent accompaniment of tinnitus, may also be made. Hyperacusis is related to negative reactions to sound and can take many forms. One associated parameter that can be measured is Loudness Discomfort Level (LDL) in dB, the subjective level of acute discomfort at specified frequencies over the frequency range of hearing. This defines a dynamic range between the hearing threshold at that frequency and the loudness discomfort level. A compressed dynamic range over a particular frequency range can be associated with hyperacusis. Normal hearing threshold is generally defined as 0–20 decibels (dB). Normal loudness discomfort levels are 85–90+ dB, with some authorities citing 100 dB. A dynamic range of 55 dB or less is indicative of hyperacusis.
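To make the dynamic-range arithmetic above concrete, here is a minimal illustrative sketch (not a clinical tool); the function names and the sample measurements are hypothetical, and the 55 dB cut-off is simply the rule of thumb quoted above:

```python
def dynamic_range_db(threshold_db: float, ldl_db: float) -> float:
    """Dynamic range at one frequency: loudness discomfort level (LDL)
    minus the hearing threshold, both in dB."""
    return ldl_db - threshold_db

def suggests_hyperacusis(threshold_db: float, ldl_db: float) -> bool:
    """Rule of thumb from the text: a compressed dynamic range of
    55 dB or less is indicative of hyperacusis."""
    return dynamic_range_db(threshold_db, ldl_db) <= 55

# Hypothetical measurements: frequency (Hz) -> (threshold dB, LDL dB).
measurements = {1000: (10, 95), 4000: (15, 80), 8000: (25, 75)}
for freq_hz, (threshold, ldl) in measurements.items():
    dr = dynamic_range_db(threshold, ldl)
    flag = "compressed (possible hyperacusis)" if suggests_hyperacusis(threshold, ldl) else "normal"
    print(f"{freq_hz} Hz: dynamic range {dr} dB -> {flag}")
```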
Severity
The condition is often rated on a scale from "slight" to "severe" according to the effects it has, such as interference with sleep, quiet activities, and normal daily activities.

Assessment of psychological processes related to tinnitus involves measurement of tinnitus severity and distress (i.e., nature and extent of tinnitus-related problems), measured subjectively by validated self-report tinnitus questionnaires. These questionnaires measure the degree of psychological distress and handicap associated with tinnitus, including effects on hearing, lifestyle, health and emotional functioning. A broader assessment of general functioning, such as levels of anxiety, depression, stress, life stressors and sleep difficulties, is also important in the assessment of tinnitus due to higher risk of negative well-being across these areas, which may be affected by or exacerbate the tinnitus symptoms for the individual. Overall, current assessment measures are aimed to identify individual levels of distress and interference, coping responses and perceptions of tinnitus to inform treatment and monitor progress. However, wide variability, inconsistencies and lack of consensus regarding assessment methodology are evidenced in the literature, limiting comparison of treatment effectiveness. Developed to guide diagnosis or classify severity, most tinnitus questionnaires have been shown to be treatment-sensitive outcome measures.
Pulsatile tinnitus
If the examination reveals a bruit (sound due to turbulent blood flow), imaging studies such as transcranial doppler (TCD) or magnetic resonance angiography (MRA) should be performed.
Differential diagnosis
Other potential sources of the sounds normally associated with tinnitus should be ruled out. For instance, two recognized sources of high-pitched sounds might be electromagnetic fields common in modern wiring and various sound signal transmissions. A common and often misdiagnosed condition that mimics tinnitus is radio frequency (RF) hearing, in which subjects have been tested and found to hear high-pitched transmission frequencies that sound similar to tinnitus.
Prevention
Prolonged exposure to loud sound or noise levels can lead to tinnitus. Custom-made ear plugs or other measures can help with prevention. Employers may use hearing loss prevention programs to help educate about and prevent dangerous levels of exposure to noise. Government organizations set regulations to ensure that employees who follow the protocol face minimal risk of permanent damage to their hearing.

Certain groups are advised to wear ear plugs when working or riding to avoid the risk of tinnitus caused by overexposure to loud noise, such as wind noise for motorcycle riders. Occupationally, this includes musicians, DJs, and agricultural and construction workers, as they are at a greater risk compared to the general population.
Several medicines have ototoxic effects, and can have a cumulative effect that can increase the damage done by noise. If ototoxic medications must be administered, close attention by the physician to prescription details, such as dose and dosage interval, can reduce the damage done.
Management
If a specific underlying cause is determined, treating it may lead to improvements. Otherwise, the primary treatment for tinnitus is talk therapy, sound therapy, or hearing aids. There are no effective drugs that treat tinnitus.
Psychological
The best supported treatment for tinnitus is a type of counseling called cognitive behavioral therapy (CBT) which can be delivered via the internet or in person. It decreases the amount of stress those with tinnitus feel. These benefits appear to be independent of any effect on depression or anxiety in an individual. Acceptance and commitment therapy (ACT) also shows promise in the treatment of tinnitus. Relaxation techniques may also be useful. A clinical protocol called Progressive Tinnitus Management for treatment of tinnitus has been developed by the United States Department of Veterans Affairs.
Sound-based interventions
The use of sound therapy by either hearing aids or tinnitus maskers may help the brain ignore the specific tinnitus frequency. Whilst these methods are poorly supported by evidence, there are no negative effects. There are several approaches for tinnitus sound therapy. The first is sound modification to compensate for the individual's hearing loss. The second is signal spectrum notching to eliminate energy close to the tinnitus frequency. There is some tentative evidence supporting tinnitus retraining therapy, which is aimed at reducing tinnitus-related neuronal activity. There are preliminary data on alternative tinnitus treatments using mobile applications, including various methods: masking, sound therapy, relaxation exercises, and others. These applications can work as a separate device or as a hearing aid control system.
Medications
As of 2018 there were no medications effective for idiopathic tinnitus. There is not enough evidence to determine if antidepressants or acamprosate are useful. There is no high-quality evidence to support the use of benzodiazepines for tinnitus. The usefulness of melatonin, as of 2015, is unclear. It is unclear if anticonvulsants are useful for treating tinnitus. Steroid injections into the middle ear also do not seem to be effective. There is no evidence to suggest that the use of betahistine to treat tinnitus is effective.

Botulinum toxin injection has been tried with some success in some of the rare cases of objective tinnitus from a palatal tremor.

Caroverine is used in a few countries to treat tinnitus. The evidence for its usefulness is very weak.
Neuromodulation
In 2020, information from recent clinical trials indicated that bimodal neuromodulation may be a promising treatment for reducing the symptoms of tinnitus. It is a noninvasive technique that involves applying an electrical stimulus to the tongue while also administering sounds. Equipment associated with the treatments is available through physicians. Studies with it and similar devices continue in several research centers.

There is some evidence supporting neuromodulation techniques such as transcranial magnetic stimulation, transcranial direct current stimulation, and neurofeedback. However, the effects in terms of tinnitus relief are still under debate.
Alternative medicine
Ginkgo biloba does not appear to be effective. The American Academy of Otolaryngology recommends against taking melatonin or zinc supplements to relieve symptoms of tinnitus, and reported that evidence for the efficacy of many dietary supplements—lipoflavonoids, garlic, traditional Chinese/Korean herbal medicine, honeybee larvae and various other vitamins and minerals, as well as homeopathic preparations—did not exist. A 2016 Cochrane Review also concluded that evidence was not sufficient to support taking zinc supplements to reduce symptoms associated with tinnitus.
Prognosis
While there is no cure, most people with tinnitus get used to it over time; for a minority, it remains a significant problem.
Epidemiology
Adults
Tinnitus affects 10–15% of people. About a third of North Americans over 55 experience tinnitus. Tinnitus affects one third of adults at some time in their lives, whereas 10–15% are disturbed enough to seek medical evaluation.
An estimated 70 million people in Europe have tinnitus.
Children
Tinnitus is commonly thought of as a symptom of adulthood and is often overlooked in children. Children with hearing loss have a high incidence of pediatric tinnitus, even though they may not express the condition or its effect on their lives. Children do not generally report tinnitus spontaneously, and their complaints may not be taken seriously. Among those children who do complain of tinnitus, there is an increased likelihood of associated otological or neurological pathology such as migraine, juvenile Ménière's disease, or chronic suppurative otitis media. Its reported prevalence varies from 12% to 36% in children with normal hearing thresholds and up to 66% in children with a hearing loss; approximately 3–10% of children have been reported to be troubled by tinnitus.
See also
References
External links
Tinnitus at Curlie
Baguley, David; Andersson, Gerhard; McFerran, Don; McKenna, Laurence (2013) [2004]. Tinnitus: A Multidisciplinary Approach (2nd ed.). Indianapolis, IN: Wiley-Blackwell. ISBN 978-1-4051-9989-6. LCCN 2012032714. OCLC 712915603.
Langguth, B; Hajak, G; Kleinjung, T; Cacace, A; Møller, AR, eds. (2007). Tinnitus: pathophysiology and treatment. Progress in brain research no. 166 (1st ed.). Amsterdam; Boston: Elsevier. ISBN 978-0-444-53167-4. LCCN 2012471552. OCLC 648331153 </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Streptococcus and can you explain it in depth? [/INST] Streptococcus is a genus of gram-positive coccus (plural cocci) or spherical bacteria that belongs to the family Streptococcaceae, within the order Lactobacillales (lactic acid bacteria), in the phylum Bacillota. Cell division in streptococci occurs along a single axis, so as they grow, they tend to form pairs or chains that may appear bent or twisted. This differs from staphylococci, which divide along multiple axes, thereby generating irregular, grape-like clusters of cells. Most streptococci are oxidase-negative and catalase-negative, and many are facultative anaerobes (capable of growth both aerobically and anaerobically).
The term was coined in 1877 by Viennese surgeon Albert Theodor Billroth (1829–1894), by combining the prefix "strepto-" (from Ancient Greek: στρεπτός, romanized: streptós, lit. easily twisted, pliant), together with the suffix "-coccus" (from Modern Latin: coccus, from Ancient Greek: κόκκος, romanized: kókkos, lit. grain, seed, berry.) In 1984, many bacteria formerly grouped in the genus Streptococcus were separated out into the genera Enterococcus and Lactococcus. Currently, over 50 species are recognised in this genus. This genus has been found to be part of the salivary microbiome.
Pathogenesis and classification
In addition to streptococcal pharyngitis (strep throat), certain Streptococcus species are responsible for many cases of pink eye, meningitis, bacterial pneumonia, endocarditis, erysipelas, and necrotizing fasciitis (the flesh-eating bacterial infections). However, many streptococcal species are not pathogenic, and form part of the commensal human microbiota of the mouth, skin, intestine, and upper respiratory tract. Streptococci are also a necessary ingredient in producing Emmentaler ("Swiss") cheese.

Species of Streptococcus are classified based on their hemolytic properties. Alpha-hemolytic species cause oxidation of iron in hemoglobin molecules within red blood cells, giving a greenish color on blood agar. Beta-hemolytic species cause complete rupture of red blood cells. On blood agar, this appears as wide areas clear of blood cells surrounding bacterial colonies. Gamma-hemolytic species cause no hemolysis.

Beta-hemolytic streptococci are further classified by Lancefield grouping, a serotype classification (that is, describing specific carbohydrates present on the bacterial cell wall). The 21 described serotypes are named Lancefield groups A to W (excluding I and J). This system of classification was developed by Rebecca Lancefield, a scientist at Rockefeller University.

In the medical setting, the most important groups are the alpha-hemolytic streptococci S. pneumoniae and the Streptococcus viridans group, and the beta-hemolytic streptococci of Lancefield groups A and B (also known as "group A strep" and "group B strep").
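As a rough illustration of the classification logic just described (hemolysis pattern first, then Lancefield serogroup for beta-hemolytic species), here is a minimal, hypothetical lookup sketch in Python; the handful of species and the helper names are illustrative, not an exhaustive taxonomy:

```python
# Hemolysis pattern and Lancefield group for a few medically relevant
# streptococci mentioned in this article; illustrative, not exhaustive.
CLASSIFICATION = {
    "Streptococcus pneumoniae": ("alpha", None),  # no Lancefield antigen
    "viridans streptococci":    ("alpha", None),  # no Lancefield antigen
    "Streptococcus pyogenes":   ("beta", "A"),    # "group A strep"
    "Streptococcus agalactiae": ("beta", "B"),    # "group B strep"
}

def describe(species: str) -> str:
    hemolysis, lancefield = CLASSIFICATION[species]
    if hemolysis == "alpha":
        # Partial hemolysis: greenish discoloration on blood agar.
        return f"{species}: alpha-hemolytic (partial hemolysis)"
    # Complete hemolysis: clear zones on agar; subtyped by Lancefield antigen.
    return f"{species}: beta-hemolytic, Lancefield group {lancefield}"

for name in CLASSIFICATION:
    print(describe(name))
```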
Table: Medically relevant streptococci (not all are alpha-hemolytic)
Alpha-hemolytic
When alpha-hemolysis (α-hemolysis) is present, the agar under the colony will appear dark and greenish due to the conversion of hemoglobin to green biliverdin. Streptococcus pneumoniae and a group of oral streptococci (Streptococcus viridans or viridans streptococci) display alpha-hemolysis.
Alpha-hemolysis is also termed incomplete hemolysis or partial hemolysis because the cell membranes of the red blood cells are left intact. This is also sometimes called green hemolysis because of the color change in the agar.
Pneumococci
S. pneumoniae (sometimes called pneumococcus) is a leading cause of bacterial pneumonia and an occasional cause of otitis media, sinusitis, meningitis, and peritonitis. Inflammation is thought to play a major role in how pneumococci cause disease, hence the tendency of diagnoses associated with them to involve inflammation. They possess no Lancefield antigens.
The viridans group: alpha-hemolytic
The viridans streptococci are a large group of commensal bacteria that are either alpha-hemolytic, producing a green coloration on blood agar plates (hence the name "viridans", from Latin vĭrĭdis, green), or nonhemolytic. They possess no Lancefield antigens.
Beta-hemolytic
Beta-hemolysis (β-hemolysis), sometimes called complete hemolysis, is a complete lysis of red cells in the media around and under the colonies: the area appears lightened (yellow) and transparent. Streptolysin, an exotoxin, is the enzyme produced by the bacteria which causes the complete lysis of red blood cells. There are two types of streptolysin: streptolysin O (SLO) and streptolysin S (SLS). Streptolysin O is an oxygen-sensitive cytotoxin, secreted by most group A Streptococcus (GAS), and interacts with cholesterol in the membrane of eukaryotic cells (mainly red and white blood cells, macrophages, and platelets), and usually results in beta-hemolysis under the surface of blood agar. Streptolysin S is an oxygen-stable cytotoxin also produced by most GAS strains which results in clearing on the surface of blood agar. SLS affects immune cells, including polymorphonuclear leukocytes and lymphocytes, and is thought to prevent the host immune system from clearing infection. Streptococcus pyogenes, or GAS, displays beta-hemolysis.
Some weakly beta-hemolytic species cause intense hemolysis when grown together with a strain of Staphylococcus. This is called the CAMP test. Streptococcus agalactiae displays this property. Clostridium perfringens can be identified presumptively with this test. Listeria monocytogenes is also positive on sheep's blood agar.
Group A
Group A S. pyogenes is the causative agent in a wide range of group A streptococcal infections (GAS). These infections may be noninvasive or invasive. The noninvasive infections tend to be more common and less severe. The most common of these infections include streptococcal pharyngitis (strep throat) and impetigo. Scarlet fever is also a noninvasive infection, but has not been as common in recent years.
The invasive infections caused by group A beta-hemolytic streptococci tend to be more severe and less common. This occurs when the bacterium is able to infect areas where it is not usually found, such as the blood and the organs. The diseases that may be caused include streptococcal toxic shock syndrome, necrotizing fasciitis, pneumonia, and bacteremia. Globally, GAS has been estimated to cause more than 500,000 deaths every year, making it one of the world's leading pathogens.

Additional complications may be caused by GAS, namely acute rheumatic fever and acute glomerulonephritis. Rheumatic fever, a disease that affects the joints, kidneys, and heart valves, is a consequence of untreated strep A infection and is caused not by the bacterium itself but by the antibodies the immune system creates to fight off the infection, which cross-react with other proteins in the body. This "cross-reaction" causes the body to essentially attack itself and leads to the damage above. A similar autoimmune mechanism initiated by Group A beta-hemolytic streptococcal (GABHS) infection is hypothesized to cause pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS), wherein autoimmune antibodies affect the basal ganglia, causing rapid onset of psychiatric, motor, sleep, and other symptoms in pediatric patients.
GAS infection is generally diagnosed with a rapid strep test or by culture.
Group B
S. agalactiae, or group B streptococcus (GBS), causes pneumonia and meningitis in newborns and the elderly, with occasional systemic bacteremia. Importantly, Streptococcus agalactiae is the most common cause of meningitis in infants from one month to three months old. They can also colonize the intestines and the female reproductive tract, increasing the risk for premature rupture of membranes during pregnancy, and transmission of the organism to the infant. The American College of Obstetricians and Gynecologists, American Academy of Pediatrics, and the Centers for Disease Control recommend that all pregnant women between 35 and 37 weeks gestation be tested for GBS. Women who test positive should be given prophylactic antibiotics during labor, which will usually prevent transmission to the infant.

The United Kingdom has chosen to adopt a risk factor-based protocol, rather than the culture-based protocol followed in the US. Current guidelines state that if one or more of the following risk factors is present, then the woman should be treated with intrapartum antibiotics (a minimal code sketch of this rule appears after the list):
GBS bacteriuria during this pregnancy
History of GBS disease in a previous infant
Intrapartum fever (≥38 °C)
Preterm labour (<37 weeks)
Prolonged rupture of membranes (>18 hours)

This protocol results in the administration of intrapartum antibiotics to 15–20% of pregnant women and prevention of 65–70% of cases of early onset GBS sepsis.
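The UK risk-factor protocol above is effectively a simple disjunction of criteria. A minimal illustrative sketch (hypothetical field names, not clinical software) might encode it like this:

```python
from dataclasses import dataclass

@dataclass
class LabourPresentation:
    # Hypothetical fields mirroring the UK risk factors listed above.
    gbs_bacteriuria_this_pregnancy: bool
    previous_infant_with_gbs_disease: bool
    intrapartum_temperature_c: float
    gestation_weeks: float
    hours_since_rupture_of_membranes: float

def intrapartum_antibiotics_indicated(p: LabourPresentation) -> bool:
    """True if one or more of the listed risk factors is present."""
    return (
        p.gbs_bacteriuria_this_pregnancy
        or p.previous_infant_with_gbs_disease
        or p.intrapartum_temperature_c >= 38.0        # intrapartum fever
        or p.gestation_weeks < 37                     # preterm labour
        or p.hours_since_rupture_of_membranes > 18    # prolonged rupture
    )

# Example: preterm labour alone is enough to trigger the protocol.
print(intrapartum_antibiotics_indicated(LabourPresentation(
    gbs_bacteriuria_this_pregnancy=False,
    previous_infant_with_gbs_disease=False,
    intrapartum_temperature_c=37.2,
    gestation_weeks=36.0,
    hours_since_rupture_of_membranes=6.0,
)))  # -> True
```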
Group C
This group includes S. equi, which causes strangles in horses, and S. zooepidemicus—S. equi is a clonal descendant or biovar of the ancestral S. zooepidemicus—which causes infections in several species of mammals, including cattle and horses. S. dysgalactiae subsp. dysgalactiae is also a member of group C, a beta-hemolytic streptococcus that can cause pharyngitis and other pyogenic infections similar to group A streptococci.
Group D (enterococci)
Many former group D streptococci have been reclassified and placed in the genus Enterococcus (including E. faecalis, E. faecium, E. durans, and E. avium). For example, Streptococcus faecalis is now Enterococcus faecalis. E. faecalis is sometimes alpha-hemolytic and E. faecium is sometimes beta-hemolytic.

The remaining nonenterococcal group D strains include Streptococcus gallolyticus, Streptococcus bovis, Streptococcus equinus and Streptococcus suis.
Nonhemolytic streptococci rarely cause illness. However, weakly hemolytic group D beta-hemolytic streptococci and Listeria monocytogenes (which is actually a gram-positive bacillus) should not be confused with nonhemolytic streptococci.
Group F streptococci
Group F streptococci were first described in 1934 by Long and Bliss amongst the "minute haemolytic streptococci". They are also known as Streptococcus anginosus (according to the Lancefield classification system) or as members of the S. milleri group (according to the European system).
Group G streptococci
These streptococci are usually, but not exclusively, beta-hemolytic. Streptococcus dysgalactiae subsp. canis is the predominant subspecies encountered. It is a particularly common GGS in humans, although it is typically found on animals. S. phocae is a GGS subspecies that has been found in marine mammals and marine fish species. In marine mammals it has been mainly associated with meningoencephalitis, sepsis, and endocarditis, but is also associated with many other pathologies. Its environmental reservoir and means of transmission in marine mammals is not well characterized.
Group H streptococci
Group H streptococci cause infections in medium-sized canines. Group H streptococci rarely cause human illness unless a human has direct contact with the mouth of a canine. One of the most common ways this can be spread is human-to-canine, mouth-to-mouth contact. However, the canine may lick the human's hand and infection can be spread as well.
Molecular taxonomy and phylogenetics
Streptococci have been divided into six groups on the basis of their 16S rDNA sequences: S. anginosus, S. gallolyticus, S. mitis, S. mutans, S. pyogenes and S. salivarius. The 16S groups have been confirmed by whole genome sequencing (see figure). The important pathogens S. pneumoniae and S. pyogenes belong to the S. mitis and S. pyogenes groups, respectively, while the causative agent of dental caries, Streptococcus mutans, is basal to the Streptococcus group.
Recent technological advances have resulted in an increase of available genome sequences for Streptococcus species, allowing for more robust and reliable phylogenetic and comparative genomic analyses to be conducted. In 2018, the evolutionary relationships within Streptococcus were re-examined by Patel and Gupta through the analysis of comprehensive phylogenetic trees constructed based on four different datasets of proteins and the identification of 134 highly specific molecular signatures (in the form of conserved signature indels) that are exclusively shared by the entire genus or its distinct subclades.

The results revealed the presence of two main clades at the highest level within Streptococcus, termed the "Mitis-Suis" and "Pyogenes-Equinus-Mutans" clades. The "Mitis-Suis" main clade comprises the Suis subclade and the Mitis clade, which encompasses the Anginosus, Pneumoniae, Gordonii and Parasanguinis subclades. The second main clade, "Pyogenes-Equinus-Mutans", includes the Pyogenes, Mutans, Salivarius, Equinus, Sobrinus, Halotolerans, Porci, Entericus and Orisratti subclades. In total, 14 distinct subclades have been identified within the genus Streptococcus, each supported by reliable branching patterns in phylogenetic trees and by the presence of multiple conserved signature indels in different proteins that are distinctive characteristics of the members of these 14 clades. A summary diagram showing the overall relationships among the Streptococcus based on these studies is depicted in a figure on this page.
Genomics
The genomes of hundreds of species have been sequenced. Most Streptococcus genomes are 1.8 to 2.3 Mb in size and encode 1,700 to 2,300 proteins. Some important genomes are listed in the table. The four species shown in the table (S. pyogenes, S. agalactiae, S. pneumoniae, and S. mutans) have an average pairwise protein sequence identity of about 70%.
Bacteriophage
Bacteriophages have been described for many species of Streptococcus. 18 prophages have been described in S. pneumoniae, ranging from 38 to 41 kb in size and encoding from 42 to 66 genes each. Some of the first Streptococcus phages discovered were Dp-1 and ω1 (alias ω-1). In 1981 the Cp (Complutense phage 1, officially Streptococcus virus Cp1, Picovirinae) family was discovered with Cp-1 as its first member. Dp-1 and Cp-1 infect both S. pneumoniae and S. mitis. However, the host ranges of most Streptococcus phages have not been investigated systematically.
Natural genetic transformation
Natural genetic transformation involves the transfer of DNA from one bacterium to another through the surrounding medium. Transformation is a complex process dependent on expression of numerous genes. To be capable of transformation, a bacterium must enter a special physiologic state referred to as competence. S. pneumoniae, S. mitis and S. oralis can become competent, and as a result actively acquire homologous DNA for transformation by a predatory fratricidal mechanism. This fratricidal mechanism mainly exploits non-competent siblings present in the same niche. Among highly competent isolates of S. pneumoniae, Li et al. showed that nasal colonization fitness and virulence (lung infectivity) depend on an intact competence system. Competence may allow the streptococcal pathogen to use external homologous DNA for recombinational repair of DNA damages caused by the host's oxidative attack.
See also
Cia-dependent small RNAs
Quellung reaction
Streptococcal infection in poultry
Streptococcal pharyngitis
Streptokinase
References
External links
Centers for Disease Control and Prevention (CDC) (March 2000) </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Fibrosis and can you explain it in depth? [/INST] Fibrosis, also known as fibrotic scarring, is a pathological wound healing process in which connective tissue replaces normal parenchymal tissue to the extent that it goes unchecked, leading to considerable tissue remodelling and the formation of permanent scar tissue.

Repeated injuries, chronic inflammation and repair are susceptible to fibrosis, in which an accidental, excessive accumulation of extracellular matrix components, such as collagen, is produced by fibroblasts, leading to the formation of a permanent fibrotic scar.

In response to injury, this is called scarring, and if fibrosis arises from a single cell line, this is called a fibroma. Physiologically, fibrosis acts to deposit connective tissue, which can interfere with or totally inhibit the normal architecture and function of the underlying organ or tissue. Fibrosis can be used to describe the pathological state of excess deposition of fibrous tissue, as well as the process of connective tissue deposition in healing </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cardiogenic shock and can you explain it in depth? [/INST] Cardiogenic shock (CS) is a medical emergency resulting from inadequate blood flow due to the dysfunction of the ventricles of the heart. Signs of inadequate blood flow include low urine production (<30 mL/hour), cool arms and legs, and altered level of consciousness. People may also have a severely low blood pressure and heart rate.
Causes of cardiogenic shock include cardiomyopathic, arrhythmic, and mechanical causes. CS is most commonly precipitated by acute myocardial infarction. People can have combined types of shock.
Treatment of cardiogenic shock depends on the cause, with the initial goals being to improve blood flow to the body. This can be done in a number of ways: fluid resuscitation, blood transfusions, vasopressors, and inotropes. If cardiogenic shock is due to a heart attack, attempts to open the heart's arteries may help. An intra-aortic balloon pump or left ventricular assist device may improve matters until this can be done. Medications that improve the heart's ability to contract (positive inotropes) may help; however, it is unclear which is best, and at present there is no convincing evidence supporting inotropic or vasodilating therapy to reduce mortality in hemodynamically unstable patients. Norepinephrine may be better if the blood pressure is very low, whereas dopamine or dobutamine may be more useful if it is only slightly low. Cardiogenic shock is a condition that is difficult to fully reverse even with an early diagnosis. With that being said, early initiation of mechanical circulatory support, early percutaneous coronary intervention, inotropes, and heart transplantation may improve outcomes. Care is directed to the dysfunctional organs (dialysis for the kidneys, mechanical ventilation for lung dysfunction).

Mortality rates have been decreasing in the United States. This is likely due to the rapid identification and treatment of CS. Some studies have suggested that this is possibly related to the increased use of coronary reperfusion strategies, like heart stents. Nonetheless, the mortality rates remain high. Multi-organ failure is associated with higher rates of mortality.
Signs and symptoms
The presentation is as follows:
Anxiety, restlessness, altered mental state due to decreased blood flow to the brain and subsequent hypoxia.
Low blood pressure due to decrease in cardiac output.
A rapid, weak, thready pulse due to decreased circulation combined with tachycardia.
Cool, clammy, and mottled skin (cutis marmorata) due to vasoconstriction and subsequent hypoperfusion of the skin.
Distended jugular veins due to increased jugular venous pressure.
Oliguria (low urine output) due to inadequate blood flow to the kidneys if the condition persists.
Rapid and deeper respirations (hyperventilation) due to sympathetic nervous system stimulation and acidosis.
Fatigue due to hyperventilation and hypoxia.
Absent pulse in fast and abnormal heart rhythms.
Pulmonary edema, involving fluid back-up in the lungs due to insufficient pumping of the heart.
Causes
Cardiogenic shock is caused by the failure of the heart to pump effectively. It is due to damage to the heart muscle, most often from a heart attack or myocardial contusion. Other causes include abnormal heart rhythms, cardiomyopathy, heart valve problems, ventricular outflow obstruction (i.e </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Dysarthria and can you explain it in depth? [/INST] Dysarthria is a speech sound disorder resulting from neurological injury of the motor component of the motor–speech system and is characterized by poor articulation of phonemes. In other words, it is a condition in which problems occur with the muscles that help produce speech, often making it very difficult to pronounce words. It is unrelated to problems with understanding language (that is, dysphasia or aphasia), although a person can have both. Any of the speech subsystems (respiration, phonation, resonance, prosody, and articulation) can be affected, leading to impairments in intelligibility, audibility, naturalness, and efficiency of vocal communication. Dysarthria that has progressed to a total loss of speech is referred to as anarthria. The term dysarthria is from New Latin, dys- "dysfunctional, impaired" and arthr- "joint, vocal articulation".

Neurological injury due to damage in the central or peripheral nervous system may result in weakness, paralysis, or a lack of coordination of the motor–speech system, producing dysarthria. These effects in turn hinder control over the tongue, throat, lips or lungs; for example, swallowing problems (dysphagia) are also often present in those with dysarthria. Cranial nerves that control the muscles relevant to dysarthria include the trigeminal nerve's motor branch (V), the facial nerve (VII), the glossopharyngeal nerve (IX), the vagus nerve (X), and the hypoglossal nerve (XII).
Dysarthria does not include speech disorders from structural abnormalities, such as cleft palate and must not be confused with apraxia of speech, which refers to problems in the planning and programming aspect of the motor–speech system. Just as the term "articulation" can mean either "speech" or "joint movement", so is the combining form of arthr- the same in the terms "dysarthria", "dysarthrosis", and "arthropathy"; the term "dysarthria" is conventionally reserved for the speech problem and is not used to refer to arthropathy, whereas "dysarthrosis" has both senses but usually refers to arthropathy.
Causes
There are many potential causes of dysarthria. They include toxic, metabolic, and degenerative diseases, traumatic brain injury, and thrombotic or embolic stroke.

Degenerative diseases include parkinsonism, amyotrophic lateral sclerosis (ALS), multiple sclerosis, Huntington's disease, Niemann-Pick disease, and Friedreich's ataxia. Toxic and metabolic conditions include Wilson's disease, hypoxic encephalopathy such as in drowning, and central pontine myelinolysis.

These result in lesions to key areas of the brain involved in planning, executing, or regulating motor operations in skeletal muscles (i.e. muscles of the limbs), including muscles of the head and neck (dysfunction of which characterises dysarthria). These can result in dysfunction, or failure of: the motor or somatosensory cortex of the brain, corticobulbar pathways, the cerebellum, basal nuclei (consisting of the putamen, globus pallidus, caudate nucleus, substantia nigra, etc.), brainstem (from which the cranial nerves originate), or the neuromuscular junction (in diseases such as myasthenia gravis), which block the nervous system's ability to activate motor units and effect correct range and strength of movements.

Causes include:
Brain tumor
Cerebral palsy
Guillain–Barré syndrome
Hypothermia
Idiopathic intracranial hypertension (formerly known as pseudotumor cerebri)
Lyme disease
Stroke
Tay–Sachs disease, and late-onset Tay–Sachs disease (LOTS)
Transient ischemic attack, a mini stroke
Diagnosis
Classification
Dysarthrias are classified in multiple ways based on the presentation of symptoms. Specific dysarthrias include spastic (resulting from bilateral damage to the upper motor neuron), flaccid (resulting from bilateral or unilateral damage to the lower motor neuron), ataxic (resulting from damage to the cerebellum), unilateral upper motor neuron (presenting milder symptoms than bilateral UMN damage), hyperkinetic and hypokinetic (resulting from damage to parts of the basal ganglia, such as in Huntington's disease or Parkinsonism), and the mixed dysarthrias (where symptoms of more than one type of dysarthria are present). The majority of dysarthric patients are diagnosed as having mixed dysarthria, as neural damage resulting in dysarthria is rarely contained to one part of the nervous system — for example, multiple strokes, traumatic brain injury, and some kinds of degenerative illnesses (such as amyotrophic lateral sclerosis) usually damage many different sectors of the nervous system.

Ataxic dysarthria is an acquired neurological and sensorimotor speech deficit. It is a common diagnosis among the clinical spectrum of ataxic disorders. Since regulation of skilled movements is a primary function of the cerebellum, damage to the superior cerebellum and the superior cerebellar peduncle is believed to produce this form of dysarthria in ataxic patients. Growing evidence supports the likelihood of cerebellar involvement specifically affecting speech motor programming and execution pathways, producing the characteristic features associated with ataxic dysarthria. This link to speech motor control can explain the abnormalities in articulation and prosody, which are hallmarks of this disorder. Some of the most consistent abnormalities observed in patients with ataxic dysarthria are alterations of the normal timing pattern, with prolongation of certain segments and a tendency to equalize the duration of syllables when speaking. As the severity of the dysarthria increases, the patient may also lengthen more segments as well as increase the degree of lengthening of each individual segment.

Common clinical features of ataxic dysarthria include abnormalities in speech modulation, rate of speech, explosive or scanning speech, slurred speech, irregular stress patterns, and vocalic and consonantal misarticulations. Ataxic dysarthria is associated with damage to the left cerebellar hemisphere in right-handed patients.

Dysarthria may affect a single system; however, it is more commonly reflected in multiple motor–speech systems. The etiology, degree of neuropathy, existence of co-morbidities, and the individual's response all play a role in the effect the disorder has on the individual's quality of life. Severity ranges from occasional articulation difficulties to verbal speech that is completely unintelligible.

Individuals with dysarthria may experience challenges in the following:
Timing
Vocal quality
Pitch
Volume
Breath control
Speed
Strength
Steadiness
Range
Tone

Examples of specific observations include a continuous breathy voice, irregular breakdown of articulation, monopitch, distorted vowels, word flow without pauses, and hypernasality.
Treatment
Articulation problems resulting from dysarthria are treated by speech-language pathologists, using a variety of techniques </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Lymphocytic colitis and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Metatarsalgia and can you explain it in depth? [/INST] Metatarsalgia, literally metatarsal pain and colloquially known as a stone bruise, is any painful foot condition affecting the metatarsal region of the foot. This is a common problem that can affect the joints and bones of the metatarsals.
Metatarsalgia is most often localized to the first metatarsal head – the ball of the foot just behind the big toe. There are two small sesamoid bones under the first metatarsal head </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Attention deficit hyperactivity disorder and can you explain it in depth? [/INST] Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterised by excessive amounts of inattention, hyperactivity, and impulsivity that are pervasive, impairing in multiple contexts, and otherwise age-inappropriate. ADHD symptoms arise from executive dysfunction, and emotional dysregulation is often considered a core symptom. In children, problems paying attention may result in poor school performance. ADHD is associated with other neurodevelopmental and mental disorders as well as some non-psychiatric disorders, which can cause additional impairment, especially in modern society. Although people with ADHD struggle to focus on tasks they are not particularly interested in completing, they are often able to maintain an unusually prolonged and intense level of attention for tasks they do find interesting or rewarding; this is known as hyperfocus.
The precise causes of ADHD are unknown in the majority of cases. Genetic factors play an important role; ADHD tends to run in families and has a heritability rate of 74%. Toxins and infections during pregnancy, as well as brain damage, may be environmental risks.
It affects about 5–7% of children when diagnosed via the DSM-IV criteria, and 1–2% when diagnosed via the ICD-10 criteria. Rates are similar between countries, and differences in rates depend mostly on how it is diagnosed. ADHD is diagnosed approximately twice as often in boys as in girls, and 1.6 times more often in men than in women, although the disorder is overlooked in girls or diagnosed later in life because their symptoms sometimes differ from the diagnostic criteria. About 30–50% of people diagnosed in childhood continue to have ADHD in adulthood, with 2.58% of adults estimated to have ADHD which began in childhood. In adults, hyperactivity is usually replaced by inner restlessness, and adults often develop coping skills to compensate for their impairments. The condition can be difficult to tell apart from other conditions, as well as from high levels of activity within the range of normal behavior. ADHD has a negative impact on patients' health-related quality of life, and this may be further exacerbated by, or may increase the risk of, other psychiatric conditions such as anxiety and depression.

ADHD management recommendations vary and usually involve some combination of medications, counseling, and lifestyle changes. The British guideline emphasises environmental modifications and education for individuals and carers about ADHD as the first response. If symptoms persist, parent training, medication, or psychotherapy (especially cognitive behavioral therapy) can be recommended based on age. Canadian and American guidelines recommend medications and behavioral therapy together, except in preschool-aged children, for whom the first-line treatment is behavioral therapy alone. Stimulant medications are the most effective pharmaceutical treatment, although there may be side effects and any improvements are reversed if medication is ceased.

ADHD, its diagnosis, and its treatment have been considered controversial since the 1970s. These controversies have involved doctors, teachers, policymakers, parents, and the media. Topics have included causes of ADHD and the use of stimulant medications in its treatment. ADHD is now a well-validated clinical diagnosis in children and adults, and the debate in the scientific community mainly centers on how it is diagnosed and treated. ADHD was officially known as attention deficit disorder (ADD) from 1980 to 1987; prior to the 1980s, it was known as hyperkinetic reaction of childhood. Symptoms similar to those of ADHD have been described in medical literature dating back to the 18th century.
Signs and symptoms
Inattention, hyperactivity (restlessness in adults), disruptive behavior, and impulsivity are common in ADHD. Academic difficulties are frequent, as are problems with relationships. The symptoms can be difficult to define, as it is hard to draw a line at where normal levels of inattention, hyperactivity, and impulsivity end and significant levels requiring interventions begin.

According to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and its text revision (DSM-5-TR), symptoms must be present for six months or more to a degree that is much greater than in others of the same age. This requires at least six symptoms of either inattention or hyperactivity/impulsivity for those under 17 and at least five symptoms for those 17 years or older. The symptoms must be present in at least two settings (e.g., social, school, work, or home), and must directly interfere with or reduce quality of functioning. Additionally, several symptoms must have been present before age twelve.
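The age-dependent thresholds above amount to a simple counting rule. As a minimal illustration, the following Python sketch encodes only the quantitative parts of the DSM-5 screen described in this section (symptom counts, age cut-off, settings, duration, and onset); the function and variable names are hypothetical, and an actual diagnosis additionally requires clinical judgment, impairment assessment, and differential diagnosis.

def meets_dsm5_thresholds(age, inattention_count, hyperactivity_count,
                          settings_affected, months_of_symptoms,
                          symptoms_before_age_12):
    # Hypothetical screen covering only the countable DSM-5 criteria.
    required = 6 if age < 17 else 5  # at least five symptoms suffice at 17+
    enough_symptoms = (inattention_count >= required
                       or hyperactivity_count >= required)
    return (enough_symptoms
            and settings_affected >= 2        # e.g., home and school
            and months_of_symptoms >= 6       # present for six months or more
            and symptoms_before_age_12)       # several symptoms before age 12

# Example: a 20-year-old with five inattention symptoms in two settings
print(meets_dsm5_thresholds(20, 5, 2, 2, 8, True))  # prints True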
Subtypes
ADHD is divided into three primary presentations:
predominantly inattentive (ADHD-PI or ADHD-I)
predominantly hyperactive-impulsive (ADHD-PH or ADHD-HI)
combined type (ADHD-C).

The table "Symptoms" lists the symptoms for ADHD-I and ADHD-HI from two major classification systems. Symptoms which can be better explained by another psychiatric or medical condition which an individual has are not considered to be a symptom of ADHD for that person.
Girls and women with ADHD tend to display fewer hyperactivity and impulsivity symptoms but more symptoms of inattention and distractibility. Symptoms are expressed differently and more subtly as the individual ages. Hyperactivity tends to become less overt with age and turns into inner restlessness, difficulty relaxing or remaining still, talkativeness, or constant mental activity in teens and adults with ADHD. Impulsivity in adulthood may appear as thoughtless behaviour, impatience, irresponsible spending, and sensation-seeking behaviours, while inattention may appear as becoming easily bored, difficulty with organization, remaining on task, and making decisions, and sensitivity to stress. Although not listed as an official symptom for this condition, emotional dysregulation or mood lability is generally understood to be a common symptom of ADHD.

People with ADHD of all ages are more likely to have problems with social skills, such as social interaction and forming and maintaining friendships. This is true for all presentations. About half of children and adolescents with ADHD experience social rejection by their peers, compared to 10–15% of non-ADHD children and adolescents. People with attention deficits are prone to having difficulty processing verbal and nonverbal language, which can negatively affect social interaction. They also may drift off during conversations, miss social cues, and have trouble learning social skills.

Difficulties managing anger are more common in children with ADHD, as are delays in speech, language, and motor development. Poorer handwriting is more common in children with ADHD. Poor handwriting can be a side effect of ADHD itself, due to decreased attentiveness, but when it is a constant problem it may also be due in part to the fact that dyslexic and dysgraphic individuals have higher rates of ADHD than the general population, with 3 in 10 people who have dyslexia also having ADHD. Although it causes significant difficulty, many children with ADHD have an attention span equal to or greater than that of other children for tasks and subjects they find interesting.
Comorbidities
Psychiatric
In children, ADHD occurs with other disorders about two-thirds of the time. Other neurodevelopmental conditions are common comorbidities. Autism spectrum disorder (ASD), co-occurring at a rate of 21% in those with ADHD, affects social skills, ability to communicate, behaviour, and interests. Both ADHD and ASD can be diagnosed in the same person. Learning disabilities have been found to occur in about 20–30% of children with ADHD. Learning disabilities can include developmental speech and language disorders, and academic skills disorders. ADHD, however, is not considered a learning disability, but it very frequently causes academic difficulties. Intellectual disabilities and Tourette's syndrome are also common.
ADHD is often comorbid with disruptive, impulse control, and conduct disorders. Oppositional defiant disorder (ODD) occurs in about 25% of children with an inattentive presentation and 50% of those with a combined presentation. It is characterised by angry or irritable mood, argumentative or defiant behavior, and vindictiveness which are age-inappropriate. Conduct disorder (CD) occurs in about 25% of adolescents with ADHD. It is characterised by aggression, destruction of property, deceitfulness, theft, and violations of rules. Adolescents with ADHD who also have CD are more likely to develop antisocial personality disorder in adulthood. Brain imaging supports that CD and ADHD are separate conditions: conduct disorder was shown to reduce the size of one's temporal lobe and limbic system and increase the size of one's orbitofrontal cortex, whereas ADHD was shown to reduce connections in the cerebellum and prefrontal cortex more broadly. Conduct disorder involves more impairment in motivation control than ADHD. Intermittent explosive disorder is characterised by sudden and disproportionate outbursts of anger and co-occurs in individuals with ADHD more frequently than in the general population.

Anxiety and mood disorders are frequent comorbidities. Anxiety disorders have been found to occur more commonly in the ADHD population, as have mood disorders (especially bipolar disorder and major depressive disorder). Boys diagnosed with the combined ADHD subtype are more likely to have a mood disorder. Adults and children with ADHD sometimes also have bipolar disorder, which requires careful assessment to accurately diagnose and treat both conditions.

Sleep disorders and ADHD commonly co-exist. They can also occur as a side effect of medications used to treat ADHD. In children with ADHD, insomnia is the most common sleep disorder, with behavioral therapy being the preferred treatment. Problems with sleep initiation are common among individuals with ADHD, but often they will be deep sleepers and have significant difficulty getting up in the morning. Melatonin is sometimes used in children who have sleep onset insomnia. Specifically, the sleep disorder restless legs syndrome has been found to be more common in those with ADHD and is often due to iron deficiency anemia. However, restless legs can simply be a part of ADHD and requires careful assessment to differentiate between the two disorders. Delayed sleep phase disorder is also a common comorbidity of those with ADHD.

There are other psychiatric conditions which are often comorbid with ADHD, such as substance use disorders. Individuals with ADHD are at increased risk of substance abuse. This is most commonly seen with alcohol or cannabis. The reason for this may be an altered reward pathway in the brains of ADHD individuals, self-treatment, and increased psychosocial risk factors. This makes the evaluation and treatment of ADHD more difficult, with serious substance misuse problems usually treated first due to their greater risks. Other psychiatric conditions include reactive attachment disorder, characterised by a severe inability to appropriately relate socially, and sluggish cognitive tempo, a cluster of symptoms that potentially comprises another attention disorder and may occur in 30–50% of ADHD cases, regardless of the subtype. Individuals with ADHD are four times more likely to develop and be diagnosed with an eating disorder (anorexia, bulimia, binge eating disorder, ARFID) compared to those without ADHD.
Individuals with diagnosed eating disorders are 2.6 times more likely to have ADHD than those without eating disorders, though these numbers are likely much lower than actual rates due to limitations with screening and diagnosis in marginalized populations.
Trauma
ADHD, trauma, and Adverse Childhood Experiences are also comorbid, which could in part be explained by the similarity in presentation between the different diagnoses. The symptoms of ADHD and PTSD can have significant behavioral overlap; in particular, motor restlessness, difficulty concentrating, distractibility, irritability/anger, emotional constriction or dysregulation, poor impulse control, and forgetfulness are common in both. This could result in trauma-related disorders or ADHD being misidentified as the other. Additionally, traumatic events in childhood are a risk factor for ADHD: they can lead to structural brain changes and the development of ADHD behaviors. Finally, the behavioral consequences of ADHD symptoms increase the chance that the individual will experience trauma, and therefore raise the likelihood of a subsequent trauma-related diagnosis.
Non-psychiatric
Some non-psychiatric conditions are also comorbidities of ADHD. This includes epilepsy, a neurological condition characterised by recurrent seizures. There are well-established associations between ADHD and obesity, asthma, and sleep disorders, and an association with celiac disease. Children with ADHD have a higher risk for migraine headaches, but have no increased risk of tension-type headaches. In addition, children with ADHD may also experience headaches as a result of medication.

A 2021 review reported that several neurometabolic disorders caused by inborn errors of metabolism converge on common neurochemical mechanisms that interfere with biological mechanisms also considered central in ADHD pathophysiology and treatment. This highlights the importance of close collaboration between health services to avoid clinical overshadowing.
Suicide risk
Systematic reviews conducted in 2017 and 2020 found strong evidence that ADHD is associated with increased suicide risk across all age groups, as well as growing evidence that an ADHD diagnosis in childhood or adolescence represents a significant future suicidal risk factor. Potential causes include ADHD's association with functional impairment; negative social, educational, and occupational outcomes; and financial distress. A 2019 meta-analysis indicated a significant association between ADHD and suicidal spectrum behaviors (suicidal attempts, ideations, plans, and completed suicides); across the studies examined, the prevalence of suicide attempts in individuals with ADHD was 18.9%, compared to 9.3% in individuals without ADHD, and the findings were substantially replicated among studies which adjusted for other variables. However, the relationship between ADHD and suicidal spectrum behaviors remains unclear due to mixed findings across individual studies and the complicating impact of comorbid psychiatric disorders. There is no clear data on whether there is a direct relationship between ADHD and suicidality, or whether ADHD increases suicide risk through comorbidities.
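The attempt-prevalence figures quoted above imply roughly a doubling of risk. As a small worked computation in Python (the variable names are ours; the inputs are simply the percentages from the 2019 meta-analysis quoted in the text):

# Prevalence of suicide attempts, from the 2019 meta-analysis quoted above
with_adhd = 0.189     # 18.9% in individuals with ADHD
without_adhd = 0.093  # 9.3% in individuals without ADHD

relative_risk = with_adhd / without_adhd    # ~2.03: roughly twice as likely
risk_difference = with_adhd - without_adhd  # ~0.096: about 9.6 percentage points

print(f"relative risk: {relative_risk:.2f}")      # 2.03
print(f"risk difference: {risk_difference:.3f}")  # 0.096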
IQ test performance
Certain studies have found that people with ADHD tend to have lower scores on intelligence quotient (IQ) tests. The significance of this is controversial, owing to heterogeneity among people with ADHD and the difficulty of determining whether lower scores reflect the influence of symptoms, such as distractibility, rather than intellectual capacity. In studies of ADHD, higher IQs may be over-represented because many studies exclude individuals who have lower IQs, despite those with ADHD scoring on average nine points lower on standardized intelligence measures. In individuals with high intelligence, there is an increased risk of a missed ADHD diagnosis, possibly because of compensatory strategies in highly intelligent individuals.

Studies of adults suggest that negative differences in intelligence are not meaningful and may be explained by associated health problems.
Causes
ADHD is generally claimed to be the result of neurological dysfunction in processes associated with the production or use of dopamine and norepinephrine in various brain structures, but there are no confirmed causes. It may involve interactions between genetics and the environment.
Genetics
ADHD has a high heritability of 74%, meaning that genetic factors account for about 74% of the variation in ADHD liability across the population. There are multiple gene variants which each slightly increase the likelihood of a person having ADHD; it is polygenic and arises through the combination of many gene variants which each have a small effect. The siblings of children with ADHD are three to four times more likely to develop the disorder than siblings of children without the disorder.

Arousal is related to dopaminergic functioning, and ADHD presents with low dopaminergic functioning. Typically, a number of genes are involved, many of which directly affect dopamine neurotransmission. Those involved with dopamine include DAT, DRD4, DRD5, TAAR1, MAOA, COMT, and DBH. Other genes associated with ADHD include SERT, HTR1B, SNAP25, GRIN2A, ADRA2A, TPH2, and BDNF. A common variant of a gene called latrophilin 3 is estimated to be responsible for about 9% of cases, and when this variant is present, people are particularly responsive to stimulant medication. The 7-repeat variant of dopamine receptor D4 (DRD4–7R) causes increased inhibitory effects induced by dopamine and is associated with ADHD. The DRD4 receptor is a G protein-coupled receptor that inhibits adenylyl cyclase. The DRD4–7R mutation results in a wide range of behavioral phenotypes, including ADHD symptoms reflecting split attention. The DRD4 gene is linked both to novelty seeking and to ADHD. The genes GFOD1 and CDH13 show strong genetic associations with ADHD. CDH13's association with ASD, schizophrenia, bipolar disorder, and depression makes it an interesting candidate causative gene. Another candidate causative gene that has been identified is ADGRL3. In zebrafish, knockout of this gene causes a loss of dopaminergic function in the ventral diencephalon, and the fish display a hyperactive/impulsive phenotype.

For genetic variation to be used as a tool for diagnosis, more validating studies need to be performed. However, smaller studies have shown that genetic polymorphisms in genes related to catecholaminergic neurotransmission or the SNARE complex of the synapse can reliably predict a person's response to stimulant medication. Rare genetic variants show more relevant clinical significance, as their penetrance (the chance of developing the disorder) tends to be much higher. However, their usefulness as tools for diagnosis is limited, as no single gene predicts ADHD. ASD shows genetic overlap with ADHD at both common and rare levels of genetic variation.
Environment
In addition to genetics, some environmental factors might play a role in causing ADHD. Alcohol intake during pregnancy can cause fetal alcohol spectrum disorders, which can include ADHD or symptoms like it. Children exposed to certain toxic substances, such as lead or polychlorinated biphenyls, may develop problems which resemble ADHD. Exposure to the organophosphate insecticides chlorpyrifos and dialkyl phosphate is associated with an increased risk; however, the evidence is not conclusive. Exposure to tobacco smoke during pregnancy can cause problems with central nervous system development and can increase the risk of ADHD. Nicotine exposure during pregnancy may be an environmental risk.

Extreme premature birth, very low birth weight, and extreme neglect, abuse, or social deprivation also increase the risk, as do certain infections during pregnancy, at birth, and in early childhood. These infections include, among others, various viruses (measles, varicella zoster encephalitis, rubella, enterovirus 71). At least 30% of children with a traumatic brain injury later develop ADHD, and about 5% of cases are due to brain damage.

Some studies suggest that in a small number of children, artificial food dyes or preservatives may be associated with an increased prevalence of ADHD or ADHD-like symptoms, but the evidence is weak and may only apply to children with food sensitivities. The European Union has put in place regulatory measures based on these concerns. In a minority of children, intolerances or allergies to certain foods may worsen ADHD symptoms.

Individuals with hypokalemic sensory overstimulation are sometimes diagnosed as having attention deficit hyperactivity disorder (ADHD), raising the possibility that a subtype of ADHD has a cause that can be understood mechanistically and treated in a novel way. The sensory overload is treatable with oral potassium gluconate.
Research does not support popular beliefs that ADHD is caused by eating too much refined sugar, watching too much television, parenting, poverty or family chaos; however, they might worsen ADHD symptoms in certain people.
Society
The youngest children in a class have been found to be more likely to be diagnosed as having ADHD, possibly due to them being developmentally behind their older classmates. They also appear to use ADHD medications at nearly twice the rate of their peers.

In some cases, an inappropriate diagnosis of ADHD may reflect a dysfunctional family or a poor educational system, rather than any true presence of ADHD in the individual. In other cases, it may be explained by increasing academic expectations, with a diagnosis being a method for parents in some countries to get extra financial and educational support for their child. Behaviors typical of ADHD occur more commonly in children who have experienced violence and emotional abuse.
Pathophysiology
Current models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems, particularly those involving dopamine and norepinephrine. The dopamine and norepinephrine pathways that originate in the ventral tegmental area and locus coeruleus project to diverse regions of the brain and govern a variety of cognitive processes. The dopamine pathways and norepinephrine pathways which project to the prefrontal cortex and striatum are directly responsible for modulating executive function (cognitive control of behavior), motivation, reward perception, and motor function; these pathways are known to play a central role in the pathophysiology of ADHD. Larger models of ADHD with additional pathways have been proposed.
Brain structure
In children with ADHD, there is a general reduction of volume in certain brain structures, with a proportionally greater decrease in the volume of the left-sided prefrontal cortex. The posterior parietal cortex also shows thinning in individuals with ADHD compared to controls. Other brain structures in the prefrontal-striatal-cerebellar and prefrontal-striatal-thalamic circuits have also been found to differ between people with and without ADHD.

The subcortical volumes of the accumbens, amygdala, caudate, hippocampus, and putamen appear smaller in individuals with ADHD compared with controls. Structural MRI studies have also revealed differences in white matter, with marked differences in inter-hemispheric asymmetry between ADHD and typically developing youths. Functional MRI (fMRI) studies have revealed a number of differences between ADHD and control brains. Independent component analysis performed on resting-state fMRI data has revealed that, in individuals with the inattentive type of ADHD, significantly more independent components are required to describe the variance of this data.
Neurotransmitter pathways
Previously, it had been suggested that the elevated number of dopamine transporters in people with ADHD was part of the pathophysiology, but it appears the elevated numbers may be due to adaptation following exposure to stimulant medication. Current models involve the mesocorticolimbic dopamine pathway and the locus coeruleus-noradrenergic system. ADHD psychostimulants possess treatment efficacy because they increase neurotransmitter activity in these systems. There may additionally be abnormalities in serotonergic, glutamatergic, or cholinergic pathways.
Executive function and motivation
The symptoms of ADHD arise from a deficiency in certain executive functions (e.g., attentional control, inhibitory control, and working memory). Executive functions are a set of cognitive processes that are required to successfully select and monitor behaviors that facilitate the attainment of one's chosen goals. The executive function impairments that occur in ADHD individuals result in problems with staying organised, time keeping, excessive procrastination, maintaining concentration, paying attention, ignoring distractions, regulating emotions, and remembering details. People with ADHD appear to have unimpaired long-term memory, and deficits in long-term recall appear to be attributable to impairments in working memory. Due to the rates of brain maturation and the increasing demands for executive control as a person gets older, ADHD impairments may not fully manifest themselves until adolescence or even early adulthood.

ADHD has also been associated with motivational deficits in children. Children with ADHD often find it difficult to focus on long-term over short-term rewards, and exhibit impulsive behavior for short-term rewards.
Paradoxical reaction to neuroactive substances
Another sign of the structurally altered signal processing in the central nervous system in this group of people is the conspicuously common paradoxical reaction (roughly 10–20% of patients). These are unexpected reactions in the opposite direction of the normal effect, or otherwise significantly different reactions. They occur in response to neuroactive substances such as local anesthetics at the dentist, sedatives, caffeine, antihistamines, weak neuroleptics, and central and peripheral painkillers. Since the causes of paradoxical reactions are at least partly genetic, it may be useful in critical situations, for example before operations, to ask whether such abnormalities also exist in family members.
Diagnosis
ADHD is diagnosed by an assessment of a person's behavioral and mental development, including ruling out the effects of drugs, medications, and other medical or psychiatric problems as explanations for the symptoms. ADHD diagnosis often takes into account feedback from parents and teachers, with most diagnoses begun after a teacher raises concerns. It may be viewed as the extreme end of one or more continuous human traits found in all people. Imaging studies of the brain do not give consistent results between individuals; thus, they are only used for research purposes and not for diagnosis.

In North America and Australia, DSM-5 criteria are used for diagnosis, while European countries usually use the ICD-10. The DSM-IV criteria are 3–4 times more likely to diagnose ADHD than are the ICD-10 criteria. ADHD is alternately classified as a neurodevelopmental disorder or a disruptive behavior disorder, along with ODD, CD, and antisocial personality disorder. A diagnosis does not imply a neurological disorder.

Associated conditions that should be screened for include anxiety, depression, ODD, CD, and learning and language disorders. Other conditions that should be considered are other neurodevelopmental disorders, tics, and sleep apnea.

Self-rating scales, such as the ADHD rating scale and the Vanderbilt ADHD diagnostic rating scale, are used in the screening and evaluation of ADHD. Electroencephalography is not accurate enough to make an ADHD diagnosis.
Classification
Diagnostic and Statistical Manual
As with many other psychiatric disorders, a formal diagnosis should be made by a qualified professional based on a set number of criteria. In the United States, these criteria are defined by the American Psychiatric Association in the DSM. Based on the DSM-5 criteria published in 2013 and the DSM-5-TR criteria published in 2022, there are three presentations of ADHD:
ADHD, predominantly inattentive type, presents with symptoms including being easily distracted, forgetful, daydreaming, disorganization, poor concentration, and difficulty completing tasks.
ADHD, predominantly hyperactive-impulsive type, presents with excessive fidgeting and restlessness, hyperactivity, and difficulty waiting and remaining seated.
ADHD, combined type, is a combination of the first two presentations.

This subdivision is based on the presence of at least six (in children) or five (in older teenagers and adults) out of nine long-term (lasting at least six months) symptoms of inattention, hyperactivity–impulsivity, or both. To be considered, several symptoms must have appeared by the age of six to twelve and occur in more than one environment (e.g. at home and at school or work). The symptoms must be inappropriate for a child of that age, and there must be clear evidence that they are causing social, school, or work-related problems.

The DSM-5 and the DSM-5-TR also provide two diagnoses for individuals who have symptoms of ADHD but do not entirely meet the requirements. Other Specified ADHD allows the clinician to describe why the individual does not meet the criteria, whereas Unspecified ADHD is used where the clinician chooses not to describe the reason.
International Classification of Diseases
In the eleventh revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-11) by the World Health Organization, the disorder is classified as Attention deficit hyperactivity disorder (with the code 6A05). The defined subtypes are similar to those of the DSM-5: predominantly inattentive presentation (6A05.0); predominantly hyperactive-impulsive presentation (6A05.1); combined presentation (6A05.2). However, the ICD-11 includes two residual categories for individuals who do not entirely match any of the defined subtypes: other specified presentation (6A05.Y), where the clinician includes detail on the individual's presentation; and presentation unspecified (6A05.Z), where the clinician does not provide detail.

In the tenth revision (ICD-10), the symptoms of hyperkinetic disorder were analogous to ADHD in the ICD-11. When a conduct disorder (as defined by ICD-10) was present, the condition was referred to as hyperkinetic conduct disorder. Otherwise, the disorder was classified as disturbance of activity and attention, other hyperkinetic disorders, or hyperkinetic disorders, unspecified. The latter was sometimes referred to as hyperkinetic syndrome.
Social construct theory
The social construct theory of ADHD suggests that, because the boundaries between normal and abnormal behavior are socially constructed (i.e. jointly created and validated by all members of society, and in particular by physicians, parents, teachers, and others), it then follows that subjective valuations and judgements determine which diagnostic criteria are used and thus the number of people affected. This difference means that using DSM-IV criteria could diagnose ADHD at rates three to four times higher than ICD-10 criteria. Thomas Szasz, a supporter of this theory, has argued that ADHD was "invented and then given a name".
Adults
Adults with ADHD are diagnosed under the same criteria, including that their signs must have been present by the age of six to twelve. The individual is the best source of information in diagnosis; however, others may provide useful information about the individual's symptoms currently and in childhood, and a family history of ADHD also adds weight to a diagnosis. While the core symptoms of ADHD are similar in children and adults, they often present differently in adults than in children: for example, excessive physical activity seen in children may present as feelings of restlessness and constant mental activity in adults.

Worldwide, it is estimated that 2.58% of adults have persistent ADHD (where the individual currently meets the criteria and there is evidence of childhood onset), and 6.76% of adults have symptomatic ADHD (meaning that they currently meet the criteria for ADHD, regardless of childhood onset). In 2020, this was 139.84 million and 366.33 million affected adults respectively. Around 15% of children with ADHD continue to meet full DSM-IV-TR criteria at 25 years of age, and 50% still experience some symptoms. As of 2010, most adults remain untreated. Many adults with ADHD without diagnosis and treatment have a disorganised life, and some use non-prescribed drugs or alcohol as a coping mechanism. Other problems may include relationship and job difficulties, and an increased risk of criminal activities. Associated mental health problems include depression, anxiety disorders, and learning disabilities.

Some ADHD symptoms in adults differ from those seen in children. While children with ADHD may climb and run about excessively, adults may experience an inability to relax, or may talk excessively in social situations. Adults with ADHD may start relationships impulsively, display sensation-seeking behavior, and be short-tempered. Addictive behavior such as substance abuse and gambling are common. This led to those who presented differently as they aged having outgrown the DSM-IV criteria. The DSM-5 criteria do specifically deal with adults, unlike the DSM-IV, which did not fully take into account the differences in impairments seen in adulthood compared to childhood.

For diagnosis in an adult, having symptoms since childhood is required. Nevertheless, a proportion of adults who meet the criteria for ADHD in adulthood would not have been diagnosed with ADHD as children. Most cases of late-onset ADHD develop the disorder between the ages of 12 and 16 and may therefore be considered early adult- or adolescent-onset ADHD.
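As a consistency check on the figures above, the two percentages and the two absolute counts should imply the same underlying adult population base. A back-of-the-envelope sketch in Python (the variable names are ours; the figures come directly from the text):

persistent_rate = 0.0258      # persistent adult ADHD, worldwide
symptomatic_rate = 0.0676     # symptomatic adult ADHD, worldwide
persistent_count = 139.84e6   # affected adults in 2020
symptomatic_count = 366.33e6

# Each (count, rate) pair implies an adult population base
base_from_persistent = persistent_count / persistent_rate
base_from_symptomatic = symptomatic_count / symptomatic_rate

print(f"{base_from_persistent / 1e9:.2f} billion")   # 5.42 billion
print(f"{base_from_symptomatic / 1e9:.2f} billion")  # 5.42 billion

Both pairs imply an adult population base of roughly 5.42 billion, so the quoted rates and counts are internally consistent.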
Differential diagnosis
The DSM provides potential differential diagnoses: potential alternate explanations for specific symptoms. Assessment and investigation of clinical history determine which is the most appropriate diagnosis. The DSM-5 suggests ODD, intermittent explosive disorder, and other neurodevelopmental disorders (such as stereotypic movement disorder and Tourette's disorder), in addition to specific learning disorder, intellectual developmental disorder, ASD, reactive attachment disorder, anxiety disorders, depressive disorders, bipolar disorder, disruptive mood dysregulation disorder, substance use disorder, personality disorders, psychotic disorders, medication-induced symptoms, and neurocognitive disorders. Many, but not all, of these are also common comorbidities of ADHD. The DSM-5-TR also suggests post-traumatic stress disorder.

Symptoms of ADHD, such as low mood and poor self-image, mood swings, and irritability, can be confused with dysthymia, cyclothymia, or bipolar disorder, as well as with borderline personality disorder. Some symptoms that are due to anxiety disorders, personality disorders, developmental disabilities, intellectual disability, or the effects of substance abuse, such as intoxication and withdrawal, can overlap with ADHD. These disorders can also sometimes occur along with ADHD. Medical conditions which can cause ADHD-type symptoms include hyperthyroidism, seizure disorder, lead toxicity, hearing deficits, hepatic disease, sleep apnea, drug interactions, untreated celiac disease, and head injury.

Primary sleep disorders may affect attention and behavior, and the symptoms of ADHD may affect sleep. It is thus recommended that children with ADHD be regularly assessed for sleep problems. Sleepiness in children may result in symptoms ranging from the classic ones of yawning and rubbing the eyes to hyperactivity and inattentiveness. Obstructive sleep apnea can also cause ADHD-type symptoms.
Management
The management of ADHD typically involves counseling or medications, either alone or in combination. While treatment may improve long-term outcomes, it does not eliminate negative outcomes entirely. Medications used include stimulants, atomoxetine, alpha-2 adrenergic receptor agonists, and sometimes antidepressants. In those who have trouble focusing on long-term rewards, a large amount of positive reinforcement improves task performance. ADHD stimulants also improve persistence and task performance in children with ADHD. "Recent evidence from observational and registry studies indicates that pharmacological treatment of ADHD is associated with increased achievement and decreased absenteeism at school, a reduced risk of trauma-related emergency hospital visits, reduced risks of suicide and attempted suicide, and decreased rates of substance abuse and criminality".
Behavioral therapies
There is good evidence for the use of behavioral therapies in ADHD. They are the recommended first-line treatment in those who have mild symptoms or who are preschool-aged. Psychological therapies used include psychoeducational input, behavior therapy, cognitive behavioral therapy, interpersonal psychotherapy, family therapy, school-based interventions, social skills training, behavioral peer intervention, organization training, and parent management training. Neurofeedback has greater treatment effects than non-active controls for up to six months and possibly a year following treatment, and may have treatment effects comparable to active controls (controls proven to have a clinical effect) over that time period. Despite efficacy in research, there is insufficient regulation of neurofeedback practice, leading to ineffective applications and false claims regarding innovations. Parent training may improve a number of behavioral problems, including oppositional and non-compliant behaviors.

There is little high-quality research on the effectiveness of family therapy for ADHD, but the existing evidence shows that it is similar to community care and better than placebo. ADHD-specific support groups can provide information and may help families cope with ADHD.

Social skills training, behavioral modification, and medication may have some limited beneficial effects in peer relationships. Stable, high-quality friendships with non-deviant peers protect against later psychological problems.
Medication
Stimulants
Methylphenidate and amphetamine or its derivatives are first-line treatments for ADHD, as they are considered the most effective pharmaceutical treatments. About 70 percent of patients respond to the first stimulant tried, and as few as 10 percent respond to neither amphetamines nor methylphenidate. Stimulants may also reduce the risk of unintentional injuries in children with ADHD. Magnetic resonance imaging studies suggest that long-term treatment with amphetamine or methylphenidate decreases abnormalities in brain structure and function found in subjects with ADHD. A 2018 review found the greatest short-term benefit with methylphenidate in children and amphetamines in adults.

The likelihood of developing insomnia for ADHD patients taking stimulants has been measured at between 11 and 45 percent for different medications, and may be a main reason for discontinuation. Other side effects, such as tics, decreased appetite and weight loss, or emotional lability, may also lead to discontinuation. Stimulant psychosis and mania are rare at therapeutic doses, appearing to occur in approximately 0.1% of individuals within the first several weeks after starting amphetamine therapy. The safety of these medications in pregnancy is unclear. Symptom improvement is not sustained if medication is ceased.

The long-term effects of ADHD medication have yet to be fully determined, although stimulants are generally beneficial and safe for up to two years for children and adolescents.
Regular monitoring has been recommended in those on long-term treatment. There are indications suggesting that stimulant therapy for children and adolescents should be stopped periodically to assess continuing need for medication, decrease possible growth delay, and reduce tolerance. Although potentially addictive at high doses, stimulants used to treat ADHD have low potential for abuse. Treatment with stimulants is either protective against substance abuse or has no effect.

The majority of studies on nicotine and other nicotinic agonists as treatments for ADHD have shown favorable results; however, no nicotinic drug has been approved for ADHD treatment. Caffeine was formerly used as a second-line treatment for ADHD. It is considered less effective than methylphenidate or amphetamine but more effective than placebo for children with ADHD. Pseudoephedrine and ephedrine do not affect ADHD symptoms.

Modafinil has shown some efficacy in reducing the severity of ADHD in children and adolescents. It may be prescribed off-label to treat ADHD.
Non-stimulants
There are a number of non-stimulant medications, such as viloxazine, atomoxetine, bupropion, guanfacine, amantadine (effective in children and adolescents, but efficacy has not yet been shown in adults), and clonidine, that may be used as alternatives, or added to stimulant therapy. There are no good studies comparing the various medications; however, they appear more or less equal with respect to side effects. For children, stimulants appear to improve academic performance while atomoxetine does not.

Atomoxetine, due to its lack of addiction liability, may be preferred in those who are at risk of recreational or compulsive stimulant use, although evidence is lacking to support its use over stimulants for this reason. Evidence supports its ability to improve symptoms when compared to placebo.

Amantadine was shown to induce improvements similar to those in children treated with methylphenidate, with less frequent side effects. A 2021 retrospective study showed that amantadine may serve as an effective adjunct to stimulants for ADHD-related symptoms and appears to be a safer alternative to second- or third-generation antipsychotics.

There is little evidence on the effects of medication on social behaviors. Antipsychotics may also be used to treat aggression in ADHD.
Guidelines
Guidelines on when to use medications vary by country. The United Kingdom's National Institute for Health and Care Excellence recommends use for children only in severe cases, though for adults medication is a first-line treatment. Conversely, most United States guidelines recommend medications in most age groups. Medications are especially not recommended for preschool children. Underdosing of stimulants can occur, and can result in a lack of response or later loss of effectiveness. This is particularly common in adolescents and adults, as approved dosing is based on school-aged children, causing some practitioners to use weight-based or benefit-based off-label dosing instead.
Exercise
Regular physical exercise, particularly aerobic exercise, is an effective add-on treatment for ADHD in children and adults, particularly when combined with stimulant medication (although the best intensity and type of aerobic exercise for improving symptoms are not currently known). The long-term effects of regular aerobic exercise in ADHD individuals include better behavior and motor abilities, improved executive functions (including attention, inhibitory control, and planning, among other cognitive domains), faster information processing speed, and better memory. Parent-teacher ratings of behavioral and socio-emotional outcomes in response to regular aerobic exercise include: better overall function, reduced ADHD symptoms, better self-esteem, reduced levels of anxiety and depression, fewer somatic complaints, better academic and classroom behavior, and improved social behavior. Exercising while on stimulant medication augments the medication's effect on executive function. It is believed that these short-term effects of exercise are mediated by an increased abundance of synaptic dopamine and norepinephrine in the brain.
Diet
Dietary modifications are not recommended as of 2019 by the American Academy of Pediatrics, the National Institute for Health and Care Excellence, or the Agency for Healthcare Research and Quality due to insufficient evidence.
A 2013 meta-analysis found that less than a third of children with ADHD see some improvement in symptoms with free fatty acid supplementation or decreased eating of artificial food coloring. These benefits may be limited to children with food sensitivities or those who are simultaneously being treated with ADHD medications. This review also found that evidence does not support removing other foods from the diet to treat ADHD. A 2014 review found that an elimination diet results in a small overall benefit in a minority of children, such as those with allergies. A 2016 review stated that the use of a gluten-free diet as standard ADHD treatment is not advised. A 2017 review showed that a few-foods elimination diet may help children too young to be medicated or not responding to medication, while free fatty acid supplementation or decreased eating of artificial food coloring as standard ADHD treatment is not advised. Chronic deficiencies of iron, magnesium, and iodine may have a negative impact on ADHD symptoms. There is a small amount of evidence that lower tissue zinc levels may be associated with ADHD. In the absence of a demonstrated zinc deficiency (which is rare outside of developing countries), zinc supplementation is not recommended as treatment for ADHD. However, zinc supplementation may reduce the minimum effective dose of amphetamine when it is used with amphetamine for the treatment of ADHD.
Prognosis
ADHD persists into adulthood in about 30–50% of cases. Those affected are likely to develop coping mechanisms as they mature, thus compensating to some extent for their previous symptoms. Children with ADHD have a higher risk of unintentional injuries. Effects of medication on functional impairment and quality of life (e.g. reduced risk of accidents) have been found across multiple domains. Rates of smoking among those with ADHD are higher than in the general population at about 40%.

Individuals with ADHD are significantly overrepresented in prison populations. Although there is no generally accepted estimate of ADHD prevalence among inmates, a 2015 meta-analysis estimated a prevalence of 25.5%, and a larger 2018 meta-analysis estimated the frequency to be 26.2%. ADHD is more common among longer-term inmates; a 2010 study at Norrtälje Prison, a high-security prison in Sweden, found an estimated ADHD prevalence of 40%.
Epidemiology
ADHD is estimated to affect about 6–7% of people aged 18 and under when diagnosed via the DSM-IV criteria. When diagnosed via the ICD-10 criteria, rates in this age group are estimated at around 1–2%. Children in North America appear to have a higher rate of ADHD than children in Africa and the Middle East; this is believed to be due to differing methods of diagnosis rather than a difference in underlying frequency. As of 2019, it was estimated to affect 84.7 million people globally. If the same diagnostic methods are used, the rates are similar between countries. ADHD is diagnosed approximately three times more often in boys than in girls. This may reflect either a true difference in underlying rate, or that women and girls with ADHD are less likely to be diagnosed.

Rates of diagnosis and treatment have increased in both the United Kingdom and the United States since the 1970s. Prior to 1970, it was rare for children to be diagnosed with ADHD, while in the 1970s rates were about 1%. This is believed to be primarily due to changes in how the condition is diagnosed and how readily people are willing to treat it with medications, rather than a true change in how common the condition is. It was believed that changes to the diagnostic criteria in 2013, with the release of the DSM-5, would increase the percentage of people diagnosed with ADHD, especially among adults.

Due to disparities in the treatment and understanding of ADHD between Caucasian and non-Caucasian populations, many non-Caucasian children go undiagnosed and unmedicated. It was found that within the US there was often a disparity between Caucasian and non-Caucasian understandings of ADHD. This led to differences in the classification of the symptoms of ADHD, and therefore to its misdiagnosis. It was also found to be common for non-Caucasian families and teachers to understand the symptoms of ADHD as behavioral issues, rather than as mental illness.

Cross-cultural differences in the diagnosis of ADHD can also be attributed to the long-lasting effects of harmful, racially targeted medical practices. Medical pseudosciences, particularly those that targeted African American populations during the period of slavery in the US, led to a distrust of medical practices within certain communities. The combination of ADHD symptoms often being regarded as misbehavior rather than as a psychiatric condition, and the use of drugs to regulate ADHD, results in a hesitancy to trust a diagnosis of ADHD. Cases of misdiagnosis in ADHD can also occur due to stereotyping of non-Caucasian individuals. Because ADHD's symptoms are subjectively determined, medical professionals may diagnose individuals based on stereotyped behavior or misdiagnose due to differences in symptom presentation between Caucasian and non-Caucasian individuals.
History
Hyperactivity has long been part of the human condition. Sir Alexander Crichton described "mental restlessness" in his book An Inquiry into the Nature and Origin of Mental Derangement, written in 1798. He made observations about children showing signs of being inattentive and having the "fidgets". The first clear description of ADHD is credited to George Still in 1902, during a series of lectures he gave to the Royal College of Physicians of London. He noted that both nature and nurture could be influencing this disorder.

Alfred Tredgold proposed an association between brain damage and behavioral or learning problems, which gained support from the encephalitis lethargica epidemic of 1917 through 1928.

The terminology used to describe the condition has changed over time and has included: minimal brain dysfunction in the DSM-I (1952), hyperkinetic reaction of childhood in the DSM-II (1968), and attention-deficit disorder with or without hyperactivity in the DSM-III (1980). In 1987, this was changed to ADHD in the DSM-III-R, and in 1994 the DSM-IV split the diagnosis into three subtypes: ADHD inattentive type, ADHD hyperactive-impulsive type, and ADHD combined type. These terms were kept in the DSM-5 in 2013 and in the DSM-5-TR in 2022. Prior to the DSM, terms included minimal brain damage in the 1930s.

In 1934, Benzedrine became the first amphetamine medication approved for use in the United States. Methylphenidate was introduced in the 1950s, and enantiopure dextroamphetamine in the 1970s. The use of stimulants to treat ADHD was first described in 1937, when Charles Bradley gave children with behavioral disorders Benzedrine and found it improved academic performance and behavior.

Once neuroimaging studies were possible, studies conducted in the 1990s provided support for the pre-existing theory that neurological differences, particularly in the frontal lobes, were involved in ADHD. During this same period, a genetic component was identified, and ADHD was acknowledged to be a persistent, long-term disorder which lasted from childhood into adulthood. ADHD was split into the current three subtypes because of a field trial completed by Lahey and colleagues.
Controversy
ADHD, its diagnosis, and its treatment have been controversial since the 1970s. The controversies involve clinicians, teachers, policymakers, parents, and the media. Positions range from the view that ADHD is within the normal range of behavior to the hypothesis that ADHD is a genetic condition. Other areas of controversy include the use of stimulant medications in children, the method of diagnosis, and the possibility of overdiagnosis. In 2009, the National Institute for Health and Care Excellence, while acknowledging the controversy, stated that the current treatments and methods of diagnosis are based on the dominant view of the academic literature. In 2014, Keith Conners, one of the early advocates for recognition of the disorder, spoke out against overdiagnosis in a New York Times article. In contrast, a 2014 peer-reviewed medical literature review indicated that ADHD is underdiagnosed in adults.

With widely differing rates of diagnosis across countries, states within countries, races, and ethnicities, some suspect factors other than the presence of the symptoms of ADHD are playing a role in diagnosis, such as cultural norms. Some sociologists consider ADHD to be an example of the medicalization of deviant behavior, that is, the turning of the previously non-medical issue of school performance into a medical one. Most healthcare providers accept ADHD as a genuine disorder, at least in the small number of people with severe symptoms. Among healthcare providers the debate mainly centers on diagnosis and treatment in the much greater number of people with mild symptoms.

The nature and range of desirable endpoints of ADHD treatment vary among diagnostic standards for ADHD. In most studies, the efficacy of treatment is determined by reductions in ADHD symptoms. However, some studies have included subjective ratings from teachers and parents as part of their assessment of ADHD treatment efficacies. By contrast, the subjective ratings of children undergoing ADHD treatment are seldom included in studies evaluating the efficacy of ADHD treatments.
There have been notable differences in diagnosis patterns by birth date among school-age children. Those born relatively close to the school-entry cut-off, and therefore younger than their classmates, are more likely to be diagnosed with ADHD. In jurisdictions where the school-age cut-off was December 31, boys born in December were 30% more likely to be diagnosed and 41% more likely to be treated than those born in January. Girls born in December were 70% more likely to be diagnosed and 77% more likely to be treated than those born the following month. Children born in the last three days of a calendar year were reported to have significantly higher levels of diagnosis and treatment for ADHD than children born in the first three days of a calendar year. These studies suggest that ADHD diagnosis is prone to subjective analysis.
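To make the "30% more likely" phrasing concrete, these figures are relative risks. The short Python sketch below shows how such a relative increase translates into absolute diagnosis rates; the baseline rate is an illustrative assumption of ours, not a number from the studies.

baseline_rate = 0.05            # assumed diagnosis rate for January-born boys (illustrative only)
relative_risk_diagnosis = 1.30  # "30% more likely to be diagnosed"
relative_risk_treatment = 1.41  # "41% more likely to be treated"

december_diagnosis_rate = baseline_rate * relative_risk_diagnosis
december_treatment_rate = baseline_rate * relative_risk_treatment

print(f"{december_diagnosis_rate:.4f}")  # 0.0650, i.e. 6.5% versus the assumed 5.0%
print(f"{december_treatment_rate:.4f}")  # 0.0705, under the same assumed baseline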
Research directions
Possible positive traits
Possible positive traits of ADHD are a new avenue of research, and evidence is therefore limited.

A 2020 review found that creativity may be associated with ADHD symptoms, particularly divergent thinking and quantity of creative achievements, but not with the disorder of ADHD itself; that is, it has not been found to be increased in people diagnosed with the disorder, only in people with subclinical symptoms or those that possess traits associated with the disorder. Divergent thinking is the ability to produce creative solutions which differ significantly from each other and consider the issue from multiple perspectives. Those with ADHD symptoms could be advantaged in this form of creativity, as they tend to have diffuse attention, allowing rapid switching between aspects of the task under consideration; flexible associative memory, allowing them to remember and use more distantly related ideas, which is associated with creativity; and impulsivity, which causes people with ADHD symptoms to consider ideas which others may not have. However, people with ADHD may struggle with convergent thinking, which is a cognitive process through which a set of obviously relevant knowledge is utilized in a focused effort to arrive at a single perceived best solution to a problem.

A 2020 article suggested that historical documentation supported Leonardo da Vinci's difficulties with procrastination and time management as characteristic of ADHD, and that he was constantly on the go, but often jumping from task to task.
Possible biomarkers for diagnosis
Reviews of ADHD biomarkers have noted that platelet monoamine oxidase expression, urinary norepinephrine, urinary MHPG, and urinary phenethylamine levels consistently differ between ADHD individuals and non-ADHD controls. These measurements could potentially serve as diagnostic biomarkers for ADHD, but more research is needed to establish their diagnostic utility </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Shaken baby syndrome and can you explain it in depth? [/INST] Shaken baby syndrome (SBS), also known as abusive head trauma (AHT), is the leading cause of fatal head injuries in children younger than two years. Diagnosing the syndrome has proved both challenging and contentious for medical professionals, in that objective witnesses to the initial trauma are generally unavailable. This is said to be particularly problematic when the trauma is deemed non-accidental. Some medical professionals propose that SBS is the result of respiratory abnormalities leading to hypoxia and swelling of the brain. The courtroom has become a forum for conflicting theories with which generally accepted medical literature has not been reconciled. Often there are no outwardly visible signs of trauma, despite the presence of severe internal brain and eye injuries. Complications include seizures, visual impairment, cerebral palsy, cognitive impairment, and death. The cause may be blunt trauma, vigorous shaking, or a combination of both. Often this occurs as a result of a caregiver becoming frustrated due to the child crying. Diagnosis can be difficult as symptoms may be nonspecific. A CT scan of the head is typically recommended if a concern is present. If there are concerning findings on the CT scan, a full work-up for child abuse should occur, including an eye exam and skeletal survey. Retinal hemorrhage is highly associated with AHT, occurring in 78% of cases of AHT versus 5% of cases of non-abusive head trauma. Educating new parents appears to be beneficial in decreasing rates of the condition. SBS is estimated to occur in three to four per 10,000 babies a year. It occurs most frequently in those less than five years of age. The risk of death is about 25%. The diagnostic findings include retinal bleeds, multiple fractures of the long bones, and subdural hematomas (bleeding in the brain). These signs have evolved through the years as the accepted and recognized signs of child abuse. Medical professionals strongly suspect shaking as the cause of injuries when a young child presents with retinal bleeding, fractures, soft tissue injuries, or subdural hematoma that cannot be explained by accidental trauma or other medical conditions. Retinal hemorrhage (bleeding) occurs in around 85% of SBS cases, and the severity of retinal hemorrhage correlates with the severity of head injury. The types of retinal bleeds are often believed to be particularly characteristic of this condition, making the finding useful in establishing the diagnosis. Fractures of the vertebrae, long bones, and ribs may also be associated with SBS. Dr. John Caffey reported in 1972 that metaphyseal avulsions (small fragments of bone torn off where the periosteum covering the bone and the cortical bone are tightly bound together) and "bones on both the proximal and distal sides of a single joint are affected, especially at the knee". Infants may display irritability, failure to thrive, alterations in eating patterns, lethargy, vomiting, seizures, bulging or tense fontanels (the soft spots on a baby's head), increased size of the head, altered breathing, and dilated pupils.
Risk factors
Caregivers who are at risk of becoming abusive often have unrealistic expectations of the child and may display "role reversal", expecting the child to fulfill the needs of the caregiver. Substance abuse and emotional stress, resulting for example from financial troubles, are other risk factors for aggression and impulsiveness in caregivers. Caregivers of any gender can cause SBS. Although SBS had previously been speculated to be an isolated event, evidence of prior child abuse is a common finding. In an estimated 33–40% of cases, evidence of prior head injuries, such as old intracranial bleeds, is present.
Mechanism
Effects of SBS are thought to be diffuse axonal injury, oxygen deprivation and swelling of the brain, which can raise pressure inside the skull and damage delicate brain tissue, although witnessed shaking events have not led to such injuries.
Traumatic shaking occurs when a child is shaken in such a way that its head is flung backwards and forwards. In 1971, Guthkelch, a neurosurgeon, hypothesized that such shaking can result in a subdural hematoma, in the absence of any detectable external signs of injury to the skull. The article describes two cases in which the parents admitted that for various reasons they had shaken the child before it became ill. Moreover, one of the babies had retinal hemorrhages. The association between traumatic shaking, subdural hematoma and retinal hemorrhages was described in 1972 and referred to as whiplash shaken infant syndrome. The injuries were believed to occur because shaking the child subjected the head to acceleration–deceleration and rotational forces.
Force
There has been controversy regarding the amount of force required to produce the brain damage seen in SBS. There is broad agreement, even amongst skeptics, that shaking of a baby is dangerous and can be fatal. A biomechanical analysis published in 2005 reported that "forceful shaking can severely injure or kill an infant, this is because the cervical spine would be severely injured and not because subdural hematomas would be caused by high head rotational accelerations... an infant head subjected to the levels of rotational velocity and acceleration called for in the SBS literature, would experience forces on the infant neck far exceeding the limits for structural failure of the cervical spine. Furthermore, shaking cervical spine injury can occur at much lower levels of head velocity and acceleration than those reported for SBS." Other authors were critical of the mathematical analysis by Bandak, citing concerns about the calculations the author used and concluding, "In light of the numerical errors in Bandak's neck force estimations, we question the resolute tenor of Bandak's conclusions that neck injuries would occur in all shaking events." Still other authors, critical of the model proposed by Bandak, concluded that "the mechanical analogue proposed in the paper may not be entirely appropriate when used to model the motion of the head and neck of infants when a baby is shaken." Bandak responded to the criticism in a letter to the editor published in Forensic Science International in February 2006.
Diagnosis
Diagnosis can be difficult, as symptoms may be nonspecific. A CT scan of the head is typically recommended if a concern is present. It is unclear how useful subdural haematoma, retinal hemorrhages, and encephalopathy are on their own in making the diagnosis.
Triad
While the findings of SBS are complex and many, they are often incorrectly referred to as a "triad" for legal proceedings, distilled down to retinal hemorrhages, subdural hematomas, and encephalopathy. SBS may be misdiagnosed, underdiagnosed, and overdiagnosed, and caregivers may lie or be unaware of the mechanism of injury. Commonly, there are no externally visible signs of the condition. Examination by an experienced ophthalmologist is critical in diagnosing shaken baby syndrome, as particular forms of ocular bleeding are strongly associated with AHT. Magnetic resonance imaging may also depict retinal hemorrhaging but is much less sensitive than an eye exam. Conditions that are often excluded by clinicians include hydrocephalus, sudden infant death syndrome (SIDS), seizure disorders, and infectious or congenital diseases like meningitis and metabolic disorders. CT scanning and magnetic resonance imaging are used to diagnose the condition. Conditions that often accompany SBS/AHT include classic patterns of skeletal fracturing (rib fractures, corner fractures), injury to the cervical spine (in the neck), retinal hemorrhage, cerebral bleed or atrophy, hydrocephalus, and papilledema (swelling of the optic disc). The terms non-accidental head injury and inflicted traumatic brain injury have been used in place of "abusive head trauma" or "SBS".
Classification
The term abusive head trauma (AHT) is preferred as it better represents the broader potential causes.
The US Centers for Disease Control and Prevention identifies SBS as "an injury to the skull or intracranial contents of an infant or young child (< 5 years of age) due to inflicted blunt impact and/or violent shaking". In 2009, the American Academy of Pediatrics recommended the use of the term AHT to replace SBS, in part to differentiate between injuries arising solely from shaking and injuries arising from shaking as well as trauma to the head. The Crown Prosecution Service for England and Wales recommended in 2011 that the term shaken baby syndrome be avoided and the term non-accidental head injury (NAHI) be used instead.
Differential diagnosis
Vitamin C deficiency
Some authors have suggested that certain cases of suspected shaken baby syndrome may result from vitamin C deficiency. This contested hypothesis is based upon a speculated marginal, near-scorbutic condition, a lack of repletion of essential nutrient(s), and a potentially elevated histamine level. However, symptoms consistent with increased histamine levels, such as low blood pressure and allergic symptoms, are not commonly associated with scurvy, the clinically significant form of vitamin C deficiency. A literature review of this hypothesis in the journal Pediatrics International concluded the following: "From the available information in the literature, [it was] concluded that there was no convincing evidence to conclude that vitamin C deficiency can be considered to be a cause of shaken baby syndrome." The proponents of such hypotheses often question the adequacy of nutrient tissue levels, especially of vitamin C, in children who are currently or were recently ill, have bacterial infections, have higher individual requirements, face environmental challenges (e.g. allergies), or perhaps experience transient vaccination-related stresses. At the time of this writing, infantile scurvy in the United States is practically nonexistent. No cases of scurvy mimicking SBS or sudden infant death syndrome have been reported, and scurvy typically occurs later in infancy, rarely causes death or intracranial bleeding, and is accompanied by other changes of the bones and skin and invariably an unusually deficient dietary history. In one study, vaccination was shown not to be associated with retinal hemorrhages.
Gestational problems
Gestational problems affecting both mother and fetus, the birthing process, prematurity and nutritional deficits can accelerate skeletal and hemorrhagic pathologies that can also mimic SBS, even before birth.
Prevention
Interventions by neonatal nurses, such as giving parents information about abusive head trauma, explaining normal infant crying and its causes, and teaching parents how to calm an infant and how to cope if the infant is inconsolable, may reduce rates of SBS.
Treatment
Treatment involves monitoring intracranial pressure (the pressure within the skull). Treatment occasionally requires surgery, such as to place a cerebral shunt to drain fluid from the cerebral ventricles, and, if an intracranial hematoma is present, to drain the blood collection.
Prognosis
Prognosis depends on severity and can range from total recovery to severe disability to death when the injury is severe.
One third of these patients die, one third survive with a major neurological condition, and only one third survive in good condition; shaken baby syndrome therefore puts children at risk of long-term disability. The most frequent neurological impairments are learning disabilities, seizure disorders, speech disabilities, hydrocephalus, cerebral palsy, and visual disorders.
Epidemiology
Small children are at particularly high risk for the abuse that causes SBS given the large difference in size between the small child and an adult. SBS usually occurs in children under the age of two but may occur in those up to age five. In the US, deaths due to SBS constitute about 10% of deaths due to child abuse.
History
In 1971, Norman Guthkelch proposed that whiplash injury caused subdural bleeding in infants by tearing the veins in the subdural space. The term "whiplash shaken infant syndrome" was introduced by Dr. John Caffey, a pediatric radiologist, in 1973, describing a set of symptoms found with little or no external evidence of head trauma, including retinal bleeds and intracranial bleeds with subdural or subarachnoid bleeding or both. Development of computed tomography and magnetic resonance imaging techniques in the 1970s and 1980s advanced the ability to diagnose the syndrome.
Legal issues
The President's Council of Advisors on Science and Technology (PCAST) noted in its September 2016 report that there are concerns regarding the scientific validity of forensic evidence of abusive head trauma that "require urgent attention". Similarly, the Maguire model, suggested in 2011 as a potential statistical model for determining the probability that a child's trauma was caused by abuse, has been questioned. A proposed clinical prediction rule with high sensitivity and low specificity, to rule out abusive head trauma, has been published. In July 2005, the Court of Appeal in the United Kingdom heard four appeals of SBS convictions: one case was dropped, the sentence was reduced for one, and two convictions were upheld. The court found that the classic triad of retinal bleeding, subdural hematoma, and acute encephalopathy is not 100% diagnostic of SBS and that clinical history is also important. In the Court's ruling, the judges upheld the clinical concept of SBS but dismissed one case and reduced another from murder to manslaughter. In their words: "Whilst a strong pointer to NAHI [non-accidental head injury] on its own we do not think it possible to find that it must automatically and necessarily lead to a diagnosis of NAHI. All the circumstances, including the clinical picture, must be taken into account." The court did not accept the "unified hypothesis", proposed by British physician J. F. Geddes and colleagues, as an alternative mechanism for the subdural and retinal bleeding found in suspected cases of SBS. The unified hypothesis proposed that the bleeding was not caused by shearing of subdural and retinal veins but rather by cerebral hypoxia, increased intracranial pressure, and increased pressure in the brain's blood vessels. The court reported that "the unified hypothesis [could] no longer be regarded as a credible or alternative cause of the triad of injuries": subdural haemorrhage, retinal bleeding and encephalopathy due to hypoxemia (low blood oxygen) found in suspected SBS. On 31 January 2008, the Wisconsin Court of Appeals granted Audrey A. Edmunds a new trial based on "competing credible medical opinions in determining whether there is a reasonable doubt as to Edmunds's guilt." Specifically, the appeals court found that "Edmunds presented evidence that was not discovered until after her conviction, in the form of expert medical testimony, that a significant and legitimate debate in the medical community has developed in the past ten years over whether infants can be fatally injured through shaking alone, whether an infant may suffer head trauma and yet experience a significant lucid interval prior to death, and whether other causes may mimic the symptoms traditionally viewed as indicating shaken baby or shaken impact syndrome." In 2012, A </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Glucocorticoid remediable aldosteronism and can you explain it in depth? [/INST] Glucocorticoid remediable aldosteronism, also describable as aldosterone synthase hyperactivity, is an autosomal dominant disorder in which the increase in aldosterone secretion produced by ACTH is no longer transient.
It is a cause of primary hyperaldosteronism.
Symptoms and signs
Patients with GRA may be asymptomatic, but the following symptoms can be present:
Fatigue
Headache
High blood pressure
Hypokalemia
Intermittent or temporary paralysis
Muscle spasms
Muscle weakness
Numbness
Polyuria
Polydipsia
Tingling
Hypernatraemia
Metabolic alkalosis
Normal physiology
Aldosterone synthase is a steroid hydroxylase cytochrome P450 oxidase enzyme involved in the generation of aldosterone. It is localized to the mitochondrial inner membrane </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cervical fracture and can you explain it in depth? [/INST] A cervical fracture, commonly called a broken neck, is a fracture of any of the seven cervical vertebrae in the neck. Examples of common causes in humans are traffic collisions and diving into shallow water. Abnormal movement of neck bones or pieces of bone can cause a spinal cord injury resulting in loss of sensation, paralysis, or death (usually instantaneous when it occurs).
Causes
Considerable force is needed to cause a cervical fracture. Vehicle collisions and falls are common causes. A severe, sudden twist to the neck or a severe blow to the head or neck area can cause a cervical fracture.
Although high-energy trauma is often associated with cervical fractures in the younger population, low-energy trauma is more common in the geriatric population. In a study from Norway, the most common cause was falls, and the relative incidence of cervical spine fractures increased significantly with age. Sports that involve violent physical contact carry a risk of cervical fracture, including American football, association football (especially the goalkeeper), ice hockey, rugby, and wrestling. Spearing an opponent in football or rugby, for instance, can cause a broken neck. Cervical fractures may also be seen in some non-contact sports, such as gymnastics, skiing, diving, surfing, powerlifting, equestrianism, mountain biking, and motor racing.
Certain penetrating neck injuries can also cause cervical fracture which can also cause internal bleeding among other complications.
Execution by hanging is intended to cause a fatal cervical fracture. The knot in the noose is placed to the left of the condemned, so that at the end of the drop, the head is jolted sharply upwards and to the right. The force breaks the neck, causing an immediate loss of consciousness and death within a few minutes.
Diagnosis
Physical examination
A medical history and physical examination can be sufficient in clearing the cervical spine. Notable clinical prediction rules to determine which patients need medical imaging are Canadian C-spine rule and the National Emergency X-Radiography Utilization Study (NEXUS).
Choice of medical imaging
In children, a CT scan of the neck is indicated in more severe cases such as neurologic deficits, whereas X-ray is preferable in milder cases, by both US and UK guidelines. Swedish guidelines recommend CT rather than X-ray in all children over the age of 5.
In adults, UK guidelines are largely similar as in children. US guidelines, on the other hand, recommend CT in all cases where medical imaging is indicated, and that X-ray is only acceptable where CT is not readily available.
Radiographic detection
On CT scan or X-ray, a cervical fracture may be directly visualized. In addition, indirect signs of injury to the vertebral column are incongruities of the vertebral lines and/or increased thickness of the prevertebral space.
Classification
There are proper names for several types of cervical fractures, including:
Fracture of C1, including Jefferson fracture
Fracture of C2, including Hangman's fracture
Flexion teardrop fracture – a fracture of the anteroinferior aspect of a cervical vertebra. The AO Foundation has developed a descriptive system for cervical fractures, the AOSpine subaxial cervical spine fracture classification system.
Surgery indication
The indication to surgically stabilize a cervical fracture can be estimated from the Subaxial Injury Classification (SLIC). In this system, a score of 3 or less indicates that conservative management is appropriate, a score of 5 or more indicates that surgery is needed, and a score of 4 is equivocal. The score is the sum of subscores from three different categories: morphology, discs and ligaments, and neurology (a minimal sketch of the resulting decision rule is given below).
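As an illustration of the thresholds just described, here is a minimal sketch in Python. The function name and the idea of passing three pre-assigned subscores are assumptions for illustration only; the individual point scales of the SLIC categories are not reproduced in this text.

```python
def slic_recommendation(morphology: int, discoligamentous: int, neurology: int) -> str:
    """Map a Subaxial Injury Classification (SLIC) total score to a
    management recommendation using the thresholds described above."""
    total = morphology + discoligamentous + neurology
    if total <= 3:
        return "conservative management appropriate"
    if total == 4:
        return "equivocal"
    return "surgical stabilization indicated"  # score of 5 or more

# Hypothetical example: subscores of 2, 2, and 2 sum to 6, indicating surgery.
print(slic_recommendation(2, 2, 2))
```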
Treatment
Complete immobilization of the head and neck should be done as early as possible and before moving the patient. Immobilization should remain in place until movement of the head and neck is proven safe. In the presence of severe head trauma, cervical fracture must be presumed until ruled out. Immobilization is imperative to minimize or prevent further spinal cord injury. The only exceptions are when there is imminent danger from an external cause, such as becoming trapped in a burning building.
Non-steroidal anti-inflammatory drugs, such as aspirin or ibuprofen, are contraindicated because they interfere with bone healing. Paracetamol is a better option. Patients with cervical fractures will likely be prescribed medication for pain control.
In the long term, physical therapy will be given to build strength in the muscles of the neck to increase stability and better protect the cervical spine.
Collars, traction and surgery can be used to immobilize and stabilize the neck after a cervical fracture.
Cervical collar
Minor fractures can be immobilized with a cervical collar without need for traction or surgery. A soft collar is fairly flexible and is the least limiting but can carry a high risk of further neck damage in patients with osteoporosis. It can be used for minor injuries or after healing has allowed the neck to become more stable.
A range of manufactured rigid collars are also used, usually comprising a firm plastic bi-valved shell secured with Velcro straps and removable padded liners. The most frequently prescribed are the Aspen, Malibu, Miami J, and Philadelphia collars. All these can be used with additional chest and head extension pieces to increase stability.
Rigid braces
Rigid braces that support the head and chest are also prescribed. Examples include the Sterno-Occipital Mandibular Immobilization Device (SOMI), Lerman Minerva, and Yale types. Special patients, such as very young children or non-cooperative adults, are sometimes still immobilized in medical plaster of Paris casts, such as the Minerva cast.
Traction
Traction can be applied by free weights on a pulley or a halo type brace. The halo brace is the most rigid cervical brace, used when limiting motion to the minimum that is essential, especially with unstable cervical fractures. It can provide stability and support during the time (typically 8–12 weeks) needed for the cervical bones to heal.
Surgery
Surgery may be needed to stabilize the neck and relieve pressure on the spinal cord. A variety of surgeries are available depending on the injury. Surgery to remove a damaged intervertebral disc may be done to relieve pressure on the spinal cord. The discs are cushions between the vertebrae </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Hypercalcaemia and can you explain it in depth? [/INST] Hypercalcemia, also spelled hypercalcaemia, is a high calcium (Ca2+) level in the blood serum. The normal range is 2.1–2.6 mmol/L (8.8–10.7 mg/dL, 4.3–5.2 mEq/L), with levels greater than 2.6 mmol/L defined as hypercalcemia. Those with a mild increase that has developed slowly typically have no symptoms. In those with greater levels or rapid onset, symptoms may include abdominal pain, bone pain, confusion, depression, weakness, kidney stones or an abnormal heart rhythm including cardiac arrest. Most outpatient cases are due to primary hyperparathyroidism and inpatient cases due to cancer. Other causes of hypercalcemia include sarcoidosis, tuberculosis, Paget disease, multiple endocrine neoplasia (MEN), vitamin D toxicity, familial hypocalciuric hypercalcaemia and certain medications such as lithium and hydrochlorothiazide. Diagnosis should generally include either a corrected calcium or ionized calcium level and be confirmed after a week. Specific changes, such as a shortened QT interval and prolonged PR interval, may be seen on an electrocardiogram (ECG). Treatment may include intravenous fluids, furosemide, calcitonin, and intravenous bisphosphonates, in addition to treating the underlying cause. The evidence for furosemide use, however, is poor. In those with very high levels, hospitalization may be required. Haemodialysis may be used in those who do not respond to other treatments. In those with vitamin D toxicity, steroids may be useful. Hypercalcemia is relatively common. Primary hyperparathyroidism occurs in 1–7 per 1,000 people, and hypercalcaemia occurs in about 2.7% of those with cancer.
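Since calcium levels are quoted above in three different units, a small conversion sketch may help. The conversion factors below are standard chemistry (calcium's molar mass is about 40.08 g/mol and its ionic charge is 2+), not taken from the text, and the rounded ranges quoted above do not line up exactly across units.

```python
CA_MOLAR_MASS = 40.08  # g/mol for calcium

def mmol_l_to_mg_dl(mmol_l: float) -> float:
    """mmol/L -> mg/dL: multiply by molar mass (mg per mmol), divide by 10 (dL per L)."""
    return mmol_l * CA_MOLAR_MASS / 10.0

def mmol_l_to_meq_l(mmol_l: float) -> float:
    """mmol/L -> mEq/L: multiply by the ion's valence (2 for Ca2+)."""
    return mmol_l * 2.0

# The 2.6 mmol/L hypercalcemia threshold expressed in the other units:
print(round(mmol_l_to_mg_dl(2.6), 1), "mg/dL")  # ~10.4 mg/dL
print(round(mmol_l_to_meq_l(2.6), 1), "mEq/L")  # 5.2 mEq/L
```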
Signs and symptoms
The neuromuscular symptoms of hypercalcaemia are caused by a negative bathmotropic effect due to the increased interaction of calcium with sodium channels. Since calcium blocks sodium channels and inhibits depolarization of nerve and muscle fibers, increased calcium raises the threshold for depolarization. This results in diminished deep tendon reflexes (hyporeflexia) and skeletal muscle weakness. Other symptoms include cardiac arrhythmias (especially in those taking digoxin), fatigue, nausea, vomiting (emesis), loss of appetite, abdominal pain, and paralytic ileus. If kidney impairment occurs as a result, manifestations can include increased urination, urination at night, and increased thirst. Psychiatric manifestations can include emotional instability, confusion, delirium, psychosis, and stupor. Calcium deposits known as limbus sign may be visible in the eyes. Symptoms are more common at high calcium blood values (12.0 mg/dL or 3 mmol/L). Severe hypercalcaemia (above 15–16 mg/dL or 3.75–4 mmol/L) is considered a medical emergency: at these levels, coma and cardiac arrest can result. The high levels of calcium ions decrease the neuron membrane permeability to sodium ions, thus decreasing excitability, which leads to hypotonicity of smooth and striated muscle. This explains the fatigue, muscle weakness, low tone and sluggish reflexes in muscle groups. The sluggish nerves also explain drowsiness, confusion, hallucinations, stupor or coma. In the gut this causes constipation. Hypocalcaemia causes the opposite by the same mechanism.
Hypercalcaemic crisis
A hypercalcaemic crisis is an emergency situation with severe hypercalcaemia, generally above approximately 14 mg/dL (or 3.5 mmol/L). The main symptoms of a hypercalcaemic crisis are oliguria or anuria, as well as somnolence or coma. After recognition, primary hyperparathyroidism should be proved or excluded. In extreme cases of primary hyperparathyroidism, removal of the parathyroid gland after surgical neck exploration is the only way to avoid death. The diagnostic program should be performed within hours, in parallel with measures to lower serum calcium. The treatment of choice for acutely lowering calcium is extensive hydration and calcitonin, as well as bisphosphonates (which take effect on calcium levels after one or two days).
Causes
Primary hyperparathyroidism and malignancy account for about 90% of cases of hypercalcaemia. Causes of hypercalcemia can be divided into those that are PTH-dependent and those that are PTH-independent.
Parathyroid function
Primary hyperparathyroidism
Solitary parathyroid adenoma
Primary parathyroid hyperplasia
Parathyroid carcinoma
Multiple endocrine neoplasia (MEN1 & MEN2A)
Familial isolated hyperparathyroidism
Lithium use
Familial hypocalciuric hypercalcemia/familial benign hypercalcemia
Cancer
Solid tumour with metastasis (e.g. breast cancer or classically squamous cell carcinoma, which can be PTHrP-mediated)
Solid tumour with humoral mediation of hypercalcaemia (e.g. lung cancer, most commonly non-small cell lung cancer or kidney cancer, phaeochromocytoma)
Haematologic cancers (multiple myeloma, lymphoma, leukaemia)
Ovarian small cell carcinoma of the hypercalcemic type
Vitamin-D disorders
Hypervitaminosis D (vitamin D intoxication)
Elevated 1,25(OH)2D (see calcitriol under Vitamin D) levels (e.g. sarcoidosis and other granulomatous diseases such as tuberculosis, berylliosis, histoplasmosis, Crohn's disease, and granulomatosis with polyangiitis)
Idiopathic hypercalcaemia of infancy
Rebound hypercalcaemia after rhabdomyolysis
High bone-turnover
Hyperthyroidism
Multiple myeloma
Prolonged immobilization
Paget's disease
Thiazide use
Vitamin A intoxication
Kidney failure
Tertiary hyperparathyroidism
Aluminium intoxication
Milk-alkali syndrome
Other
Acromegaly
Adrenal insufficiency
Zollinger–Ellison syndrome
Williams syndrome
Diagnosis
Diagnosis should generally include either a calculation of corrected calcium or direct measurement of the ionized calcium level, and should be confirmed after a week. This is because total serum calcium does not reflect the true level of ionised calcium when serum albumin is abnormally high or low. There is, however, controversy around the usefulness of corrected calcium, as it may be no better than total calcium (a minimal sketch of the correction calculation is given below). Once calcium is confirmed to be elevated, a detailed history is taken from the subject, including a review of medications, any vitamin supplementation, herbal preparations, and previous calcium values. Chronic elevation of calcium with absent or mild symptoms often points to primary hyperparathyroidism or familial hypocalciuric hypercalcemia. In those who have an underlying malignancy, the cancer may be sufficiently advanced to show up in the history and examination and point towards the diagnosis with few laboratory investigations. If a detailed history and examination do not narrow down the differential diagnoses, further laboratory investigations are performed. Intact PTH (iPTH, biologically active parathyroid hormone molecules) is measured with an immunoradiometric or immunochemiluminescent assay. Elevated (or high-normal) iPTH with a high urine calcium/creatinine ratio (more than 0.03) is suggestive of primary hyperparathyroidism, usually accompanied by low serum phosphate. High iPTH with a low urine calcium/creatinine ratio is suggestive of familial hypocalciuric hypercalcemia. Low iPTH should be followed up with parathyroid hormone-related protein (PTHrP) measurements (though these are not available in all labs). Elevated PTHrP is suggestive of malignancy. Normal PTHrP is suggestive of multiple myeloma, vitamin A excess, milk-alkali syndrome, thyrotoxicosis, and immobilisation. Elevated calcitriol is suggestive of lymphoma, sarcoidosis, granulomatous disorders, and excessive calcitriol intake. Elevated calcifediol is suggestive of excessive vitamin D or calcifediol intake. The normal range is 2.1–2.6 mmol/L (8.8–10.7 mg/dL, 4.3–5.2 mEq/L), with levels greater than 2.6 mmol/L defined as hypercalcaemia. Moderate hypercalcaemia is a level of 2.88–3.5 mmol/L (11.5–14 mg/dL), while severe hypercalcaemia is > 3.5 mmol/L (>14 mg/dL).
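The following Python sketch illustrates the corrected-calcium calculation and the severity bands quoted above. The specific correction formula (0.8 mg/dL of calcium per 1 g/dL of albumin below a normal of 4.0 g/dL) is a widely used convention and an assumption here, not something stated in this text; the "mild" band between the normal and moderate thresholds is likewise an inference.

```python
def corrected_calcium_mg_dl(total_ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Albumin-corrected total calcium in mg/dL.

    Assumes the common correction of 0.8 mg/dL calcium per 1 g/dL of albumin
    below a normal albumin of 4.0 g/dL (a convention, not from the text above).
    """
    return total_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)


def classify_calcium(ca_mg_dl: float) -> str:
    """Classify a calcium level using the thresholds stated above:
    normal up to 10.7 mg/dL, moderate 11.5-14 mg/dL, severe above 14 mg/dL."""
    if ca_mg_dl > 14.0:
        return "severe hypercalcaemia"
    if ca_mg_dl >= 11.5:
        return "moderate hypercalcaemia"
    if ca_mg_dl > 10.7:
        return "mild hypercalcaemia"  # inferred band between normal and moderate
    return "within or below the normal range"


# Hypothetical example: total calcium 10.9 mg/dL with a low albumin of 2.5 g/dL.
ca = corrected_calcium_mg_dl(10.9, 2.5)  # 10.9 + 0.8 * 1.5 = 12.1 mg/dL
print(round(ca, 1), "->", classify_calcium(ca))  # 12.1 -> moderate hypercalcaemia
```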
ECG
Abnormal heart rhythms can also result, and ECG findings of a short QT interval suggest hypercalcaemia. Significant hypercalcaemia can cause ECG changes mimicking an acute myocardial infarction. Hypercalcaemia has also been known to cause an ECG finding mimicking hypothermia, known as an Osborn wave.
Treatments
The goal of therapy is to treat the hypercalcaemia first and subsequently effort is directed to treat the underlying cause.
Fluids and diuretics
Initial therapy:
hydration, increasing salt intake, and forced diuresis.
hydration is needed because many patients are dehydrated due to vomiting or kidney defects in concentrating urine.
increased salt intake also can increase body fluid volume as well as increasing urine sodium excretion, which further increases urinary calcium excretion.
after rehydration, a loop diuretic such as furosemide can be given to permit continued large-volume intravenous salt and water replacement while minimizing the risk of blood volume overload and pulmonary oedema. In addition, loop diuretics tend to depress calcium reabsorption by the kidney, thereby helping to lower blood calcium levels
can usually decrease serum calcium by 1–3 mg/dL within 24 hours
caution must be taken to prevent potassium or magnesium depletion
Bisphosphonates and calcitonin
Additional therapy:
bisphosphonates are pyrophosphate analogues with high affinity for bone, especially areas of high bone-turnover.
they are taken up by osteoclasts and inhibit osteoclastic bone resorption
currently available drugs include (in order of potency): (1st gen) etidronate, (2nd gen) tiludronate, IV pamidronate, alendronate, (3rd gen) zoledronate and risedronate
all people with cancer-associated hypercalcaemia should receive treatment with bisphosphonates since the first line therapy (above) cannot be continued indefinitely nor is it without risk. Further, even if the first line therapy has been effective, it is a virtual certainty that the hypercalcaemia will recur in the person with hypercalcaemia of malignancy. Use of bisphosphonates in such circumstances, then, becomes both therapeutic and preventative
people in kidney failure and hypercalcaemia should have a risk-benefit analysis before being given bisphosphonates, since they are relatively contraindicated in kidney failure.
Denosumab is a bone anti-resorptive agent that can be used to treat hypercalcemia in patients with a contraindication to bisphosphonates such as severe kidney failure or allergy.
Calcitonin blocks bone resorption and also increases urinary calcium excretion by inhibiting calcium reabsorption by the kidney
Usually used in life-threatening hypercalcaemia along with rehydration, diuresis, and bisphosphonates
Helps prevent recurrence of hypercalcaemia
Dose is 4 international units per kilogram via subcutaneous or intramuscular route every 12 hours, usually not continued indefinitely due to quick onset of decreased response to calcitonin (a minimal arithmetic sketch of this weight-based dosing follows)
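As a purely illustrative arithmetic sketch of the 4 IU/kg regimen just stated (not clinical guidance; the function name and example weight are made up):

```python
def calcitonin_dose_iu(weight_kg: float) -> float:
    """Per-dose calcitonin in international units at 4 IU/kg,
    given every 12 hours per the regimen described above."""
    return 4.0 * weight_kg

# Hypothetical example: a 20 kg patient receives 80 IU per dose, twice daily.
print(calcitonin_dose_iu(20.0))
```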
Other therapies
rarely used, or used in special circumstances
plicamycin inhibits bone resorption (rarely used)
gallium nitrate inhibits bone resorption and changes structure of bone crystals (rarely used)
glucocorticoids increase urinary calcium excretion and decrease intestinal calcium absorption
no effect on calcium level in normal or primary hyperparathyroidism
effective in hypercalcaemia due to osteolytic malignancies (multiple myeloma, leukaemia, Hodgkin's lymphoma, carcinoma of the breast) due to antitumour properties
also effective in hypervitaminosis D and sarcoidosis
dialysis usually used in severe hypercalcaemia complicated by kidney failure. Supplemental phosphate should be monitored and added if necessary
phosphate therapy can correct the hypophosphataemia in the face of hypercalcaemia and lower serum calcium
Other animals
Research has led to a better understanding of hypercalcemia in non-human animals. The causes of hypercalcemia often correlate with the environment in which the animals live. Hypercalcemia in house pets is typically due to disease, but other cases can be due to accidental ingestion of plants or chemicals in the home. Outdoor animals commonly develop hypercalcemia through vitamin D toxicity from wild plants within their environments.
Household pets
Household pets such as dogs and cats are found to develop hypercalcemia. It is less common in cats, and many feline cases are idiopathic. In dogs, lymphosarcoma, Addison’s disease, primary hyperparathyroidism, and chronic kidney failure are the main causes of hypercalcemia, but there are also environmental causes usually unique to indoor pets. Ingestion of small amounts of calcipotriene found in psoriasis cream can be fatal to a pet. Calcipotriene causes a rapid rise in calcium ion levels. Calcium ion levels can remain high for weeks if untreated and lead to an array of medical issues. There are also cases of hypercalcemia reported due to dogs ingesting rodenticides containing a chemical similar to calcipotriene found in psoriasis cream. Additionally, ingestion of household plants is a cause of hypercalcemia. Plants such as Cestrum diurnum, and Solanum malacoxylon contain ergocalciferol or cholecalciferol which cause the onset of hypercalcemia. Consuming small amounts of these plants can be fatal to pets </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Isovaleric acidemia and can you explain it in depth? [/INST] Isovaleric acidemia is a rare autosomal recessive metabolic disorder which disrupts or prevents normal metabolism of the branched-chain amino acid leucine. It is a classical type of organic acidemia.
Symptoms and signs
A characteristic feature of isovaleric acidemia is a distinctive odor of sweaty feet. This odor is caused by the buildup of a compound called isovaleric acid in affected individuals. In about half of cases, the signs and symptoms of this disorder become apparent within a few days after birth and include poor feeding, vomiting, seizures, and lack of energy that can progress to coma. These medical problems are typically severe and can be life-threatening. In the other half of cases, the signs and symptoms of the disorder appear during childhood and may come and go over time. They are often triggered by an infection or by eating an increased amount of protein-rich foods.
Genetics
The disorder has an autosomal recessive inheritance pattern, which means the defective gene is located on an autosome, and two copies of the gene - one from each parent - must be inherited to be affected by the disorder. The parents of a child with an autosomal recessive disorder are carriers of one copy of the defective gene, but are usually not affected by the disorder. Mutations in both copies of the IVD gene result in isovaleric acidemia.
Pathophysiology
The enzyme encoded by IVD, isovaleryl-CoA dehydrogenase (EC 1.3.99.10), plays an essential role in breaking down proteins from the diet </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Bartonellosis and can you explain it in depth? [/INST] Bartonellosis is an infectious disease produced by bacteria of the genus Bartonella. Bartonella species cause diseases such as Carrión's disease, trench fever, cat-scratch disease, bacillary angiomatosis, peliosis hepatis, chronic bacteremia, endocarditis, chronic lymphadenopathy, and neurological disorders.
Presentation
Carrión's disease
Patients can develop two clinical phases: an acute septic phase and a chronic eruptive phase associated with skin lesions. In the acute phase (also known as Oroya fever or fiebre de la Oroya), B. bacilliformis infection is a sudden, potentially life-threatening infection associated with high fever, decreased levels of circulating red blood cells (i.e., hemolytic anemia), and transient immunosuppression. B. bacilliformis is considered the most deadly species to date, with a death rate of up to 90% during the acute phase, which typically lasts two to four weeks. Peripheral blood smears show anisomacrocytosis with many bacilli adherent to red blood cells. Thrombocytopenia is also seen and can be very severe. Neurologic manifestations (neurobartonellosis) are altered mental status, agitation, or even coma, ataxia, spinal meningitis, or paralysis. They are seen in 20% of patients with acute infection, in whom the prognosis is very guarded, with about 50% mortality. The most feared complication is overwhelming infection, mainly by Enterobacteriaceae, particularly Salmonella (both S. typhi and non-typhi), as well as reactivation of toxoplasmosis and other opportunistic infections. The chronic manifestation consists of a benign skin eruption with raised, reddish-purple nodules (angiomatous tumours). The bacterium can be seen microscopically if a skin biopsy is silver stained (the Warthin–Starry method).
Cat-scratch disease
Cat-scratch disease is due to an infection by B. henselae and manifests as gradual regional lymph node enlargement (axilla, groin, neck), which may last 2–3 months or longer, and a distal scratch and/or red-brown skin papule (not always seen at the time of the disease). The enlarged lymph node is painful and tender. The lymph nodes may suppurate, and some patients may remain afebrile or asymptomatic. Other presentations include fever (particularly in children), Parinaud's oculoglandular syndrome, encephalopathy, and neuroretinitis. B. henselae can be associated with bacteremia, bacillary angiomatosis, and peliosis hepatis in HIV patients, and bacteremia and endocarditis in immunocompetent and immunocompromised patients. Symptoms may include fatigue, headaches, fever, memory loss, disorientation, insomnia, and loss of coordination. The bacteria block the normal immune response by suppressing the NF-κB apoptosis pathway. Disease progression may be accelerated if the host is subsequently infected by an immune-suppressing virus such as Epstein–Barr virus.
Bacillary angiomatosis
B. henselae and B. quintana can cause bacillary angiomatosis, a vascular proliferative disease involving mainly the skin and other organs. The disease was first described in human immunodeficiency virus (HIV) patients and organ transplant recipients. Severe, progressive and disseminated disease may occur in HIV patients. Differential diagnoses include Kaposi's sarcoma, pyogenic granuloma, hemangioma, verruga peruana, and subcutaneous tumors. Lesions can affect bone marrow, liver, spleen, or lymph nodes.
Peliosis hepatis
B. henselae is the etiologic agent for peliosis hepatis, which is defined as a vascular proliferation of sinusoid hepatic capillaries resulting in blood-filled spaces in the liver in HIV patients and organ transplant recipients. Peliosis hepatis can be associated with peliosis of the spleen, as well as bacillary angiomatosis of the skin in HIV patients.
Trench fever
Trench fever, also known as five-day fever or quintan fever, is the initial manifestation of B. quintana infection. Clinical manifestations range from asymptomatic infection to severe illness. Classical presentations include a febrile illness of acute onset, headache, dizziness, and shin pain. Chronic infection manifestations include attacks of fever and aching in some cases and persistent bacteremia in soldiers and homeless people.
Microbiology
Members of the genus Bartonella are facultative intracellular bacteria belonging to the alpha-2 subgroup of Pseudomonadota. The genus comprises numerous species.
Pathophysiology
In mammals, each Bartonella species is highly adapted to its reservoir host as the result of intracellular parasitism and can persist in the bloodstream of the host. Intraerythrocytic parasitism is only observed in the acute phase of Carrión's disease. Bartonella species also have a tropism for endothelial cells, observed in the chronic phase of Carrión's disease (also known as verruga peruana) and in bacillary angiomatosis.
Pathological response can vary with the immune status of the host. Infection with B. henselae can result in a focal suppurative reaction (CSD in immunocompetent patients), a multifocal angioproliferative response (bacillary angiomatosis in immunocompromised patients), endocarditis, or meningitis.
Diagnosis
There are several methods used for diagnosing Bartonella infection, including microscopy, serology, and PCR. Microscopy of blood smears is used to diagnose Carrión's disease (B. bacilliformis); however, for other Bartonella species, microscopy and silver staining are insensitive, not highly specific, and cannot differentiate species. The CDC does not recommend lymph node aspiration for diagnostic purposes.
Serology and protein-based methods
IFA (immunofluorescence antibody assay) testing for the presence of antibodies in serum is used to diagnose B. henselae infection at the acute onset of Cat Scratch Disease symptoms, followed by PCR to confirm the infecting species. IFA can generally be used to confirm a diagnosis of Bartonella infection, but is limited by antibody cross-reactivity with other bacteria species, which can cause a false positive, and antigen variability, which can result in false negatives. Bartonella spp. often evade an immune response, thus antibodies may not be detected even concurrent with an infection, resulting in an IFA false negative rate of up to 83% in chronically infected patients when other test results (e.g. organism isolation or PCR) are positive. IFA sensitivity may range from 14 to 100%, causing discrepancies between PCR and serology test results. Positive IFA results do not distinguish between current infection and prior exposure. ELISA (enzyme-linked immunosorbent assay) is another method that has been used to detect Bartonella, but it has a low sensitivity (17–35%). Western blot for detection of Bartonella-associated proteins has also been reported, but this method does not show clear immunoreactive profiles.
PCR
The CDC states that PCR testing from a single blood draw is not sufficiently sensitive for B. henselae testing, and can result in high false negative rates due to a small sample volume and levels below the limit of molecular detection. Bartonella spp. are fastidious, slow-growing bacteria that are difficult to grow using traditional solid agar plate culture methods, due to complex nutritional requirements and potentially a low number of circulating bacteria. This conventional method of culturing Bartonella spp., with blood inocula plated directly onto solid agar plates, requires an extended incubation period of 21 days due to the slow growth rate.
Enrichment Culture
Bartonella growth rates improve when cultured in an enrichment inoculation step in a liquid insect-based medium such as Bartonella Alphaproteobacteria Growth Medium (BAPGM) or Schneider's Drosophila-based insect powder medium. Several studies have optimized the growing conditions of Bartonella spp. cultures in these liquid media, with no change in bacterial protein expression or host interactions in vitro. Insect-based liquid media support the growth and co-culturing of at least seven Bartonella species, reduce bacterial culturing time, and facilitate PCR detection and isolation of Bartonella spp. from animal and patient samples. Research shows that DNA may be detected following direct extraction from blood samples yet become negative following enrichment culture; thus, PCR is recommended after direct sample extraction and also following incubation in enrichment culture. Several studies have successfully optimized sensitivity and specificity by using PCR amplification (pre-enrichment PCR) and enrichment culturing of blood draw samples, followed by PCR (post-enrichment PCR) and DNA sequence identification.
Serial Testing
As Bartonella spp. infect at low levels and cycle between blood and tissues, multiple blood draws over time may be necessary to detect infection.
Treatment
Treatment of infections caused by Bartonella species includes:
Some authorities recommend the use of azithromycin.
Epidemiology
Carrión's disease, also known as Oroya fever or Peruvian wart, is a rare infectious disease found only in Peru, Ecuador, and Colombia. It is endemic in some areas of Peru, is caused by infection with the bacterium Bartonella bacilliformis, and is transmitted by sandflies of the genus Lutzomyia.
Cat scratch disease occurs worldwide. Cats are the main reservoir of Bartonella henselae, and the bacterium is transmitted to cats by the cat flea Ctenocephalides felis. Infection in cats is very common, with a prevalence estimated between 40 and 60%, younger cats being more commonly infective. Cats usually become immune to the infection, while dogs may be very symptomatic. Humans may also acquire it through flea or tick bites from infected dogs, cats, coyotes, and foxes. Trench fever, produced by Bartonella quintana infection, is transmitted by the human body louse Pediculus humanus corporis. Humans are the only known reservoir. Thorough washing of clothing may help to interrupt the transmission of infection. A possible role for ticks in transmission of Bartonella species remains to be elucidated; in November 2011, Bartonella rochalimae, B. quintana, and B. elizabethae DNA was first reported in Rhipicephalus sanguineus and Dermacentor nitens ticks in Peru.
History
Carrión's disease
The disease was named after medical student Daniel Alcides Carrión from Cerro de Pasco, Peru. Carrión described the disease after being inoculated at his request with the pus of a skin lesion from patient Carmen Paredes in 1885 by Doctor Evaristo M. Chávez, a close friend and coworker in Dos de Mayo National Hospital. Carrión developed the disease three weeks after the inoculation and kept a meticulous record of clinical symptoms and signs until the disease rendered him incapable of the task and he died at age 28 several weeks later—October 5, 1885. Carrión proved that Oroya fever and verruga peruana were two stages of the same disease, and not two different diseases as was thought at the time. His work did not result in a cure immediately, but his research started the process </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Trypanosomiasis and can you explain it in depth? [/INST] Trypanosomiasis or trypanosomosis is the name of several diseases in vertebrates caused by parasitic protozoan trypanosomes of the genus Trypanosoma. In humans this includes African trypanosomiasis and Chagas disease. A number of other diseases occur in other animals.
African trypanosomiasis, which is caused by either Trypanosoma brucei gambiense or Trypanosoma brucei rhodesiense, threatens some 65 million people in sub-Saharan Africa, especially in rural areas and populations disrupted by war or poverty. The number of cases has been going down due to systematic eradication efforts: in 1998 almost 40,000 cases were reported but almost 300,000 cases were suspected to have occurred; in 2009, the number dropped below 10,000; and in 2018 it dropped below 1000. Chagas disease causes 21,000 deaths per year mainly in Latin America.
Signs and symptoms
The tsetse fly bite erupts into a red chancre sore, and within a few weeks the person can experience fever, swollen lymph glands, blood in the urine, aching muscles and joints, headaches, and irritability. In the first phase, the patient has only intermittent bouts of fever with lymphadenopathy together with other non-specific signs and symptoms. The second stage of the disease is marked by involvement of the central nervous system, with extensive neurological effects such as changes in personality, alteration of the biological clock (the circadian rhythm), confusion, slurred speech, seizures, and difficulty in walking and talking. These problems can develop over many years and, if not treated, the person dies. It is common in sub-Saharan Africa.
Diagnosis
Cattle may show enlarged lymph nodes and internal organs. Haemolytic anaemia is a characteristic sign. Systemic disease and reproductive wastage are common, and cattle appear to waste away.
Horses with dourine show signs of ventral and genital edema and urticaria.
Infected dogs and cats may show severe systemic signs.
Diagnosis relies on recognition of the flagellate on a blood smear. Motile organisms may be visible in the buffy coat when a blood sample is spun down. Serological testing is also common.
Prevention
The use of trypanotolerant breeds for livestock farming should be considered if the disease is widespread. Fly control is another option but is difficult to implement. The main approaches to controlling African trypanosomiasis are to reduce the reservoirs of infection and the presence of the tsetse fly. Screening of people at risk helps identify patients at an early stage. Diagnosis should be made as early as possible and before the advanced stage to avoid complicated, difficult and risky treatment procedures.
Treatment
Stage I of the condition is usually treated with pentamidine or suramin through intramuscular injection or intravenous infusion if sufficient observation is possible. Stage II of the disease is typically treated with melarsoprol or eflornithine, preferably introduced to the body intravenously. Both pentamidine and suramin have limited side effects. Melarsoprol is extremely effective but has many serious side effects which can cause neurological damage to a patient; however, the drug is often a patient's last hope in many late-stage cases. Eflornithine is extremely expensive but has side effects that may be treated with ease. In regions of the world where the disease is common, eflornithine is provided for free by the World Health Organization.
Research
Trypanosomiasis could, in the future, be prevented by genetically altering the tsetse fly. As the tsetse fly is the main vector of transmission, making the fly immune to the disease by altering its genome could be the main component of an effort to eradicate the disease. New technologies such as CRISPR, which allow cheaper and easier genetic engineering, could enable such measures. A pilot program in Senegal, funded by the International Atomic Energy Agency, has considerably reduced the tsetse fly population by introducing male flies which have been sterilized by exposure to gamma rays. This has allowed a change of cattle breeds from lower-producing trypanotolerant breeds to higher-producing foreign breeds, and was selected as one of the Best Sustainable Development Practices on Food Security by EXPO Milan 2015.
Other animals
Nagana, or animal African trypanosomiasis, also called Souma or Soumaya in Sudan.
Surra
Mal de cadeiras, or Quebra Bunda (of central South America, Brazil)
Murrina de caderas (of Panama; Derrengadera de caderas)
Dourine
Cachexial fevers (various)
Gambian horse sickness (of central Africa)
Baleri (of Sudan)
Kaodzera (Rhodesian trypanosomiasis)
Tahaga (a disease of camels in Algeria)
Galziekte, galzietzke (bilious fever of cattle; gall sickness of South Africa)
Peste-boba (of Venezuela; Derrengadera)
Some species of cattle such as the African buffalo, Ndama, and Keteku appear trypanotolerant and do not develop symptoms. Calves are more resistant than adults. Tsetse-borne species of trypanosomes have entered zoos outside the traditional tsetse zone in infected animals imported for the zoo.
</s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Malunion and can you explain it in depth? [/INST] A malunion is when a fractured bone does not heal properly. It may manifest as the bone being twisted, shortened, or bent. Malunions can occur when the bones are improperly aligned during immobilization, when a cast is taken off too early, or when medical treatment is never sought after the break. Malunions are painful and commonly produce swelling around the area, possible immobilization, and deterioration of the bone and tissue.
Signs and symptoms
Malunions present with excessive swelling, twisting, bending, and possibly shortening of the bone </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Impulsivity and can you explain it in depth? [/INST] In psychology, impulsivity (or impulsiveness) is a tendency to act on a whim, displaying behavior characterized by little or no forethought, reflection, or consideration of the consequences. Impulsive actions are typically "poorly conceived, prematurely expressed, unduly risky, or inappropriate to the situation that often result in undesirable consequences," which imperil long-term goals and strategies for success. Impulsivity can be classified as a multifactorial construct. A functional variety of impulsivity has also been suggested, which involves action without much forethought in appropriate situations that can and does result in desirable consequences. "When such actions have positive outcomes, they tend not to be seen as signs of impulsivity, but as indicators of boldness, quickness, spontaneity, courageousness, or unconventionality." Thus, the construct of impulsivity includes at least two independent components: first, acting without an appropriate amount of deliberation, which may or may not be functional; and second, choosing short-term gains over long-term ones. Impulsivity is both a facet of personality and a major component of various disorders, including FASD, ADHD, substance use disorders, bipolar disorder, antisocial personality disorder, and borderline personality disorder. Abnormal patterns of impulsivity have also been noted in instances of acquired brain injury and neurodegenerative diseases. Neurobiological findings suggest that there are specific brain regions involved in impulsive behavior, although different brain networks may contribute to different manifestations of impulsivity, and that genetics may play a role. Many actions contain both impulsive and compulsive features, but impulsivity and compulsivity are functionally distinct. Impulsivity and compulsivity are interrelated in that each exhibits a tendency to act prematurely or without considered thought, and each often has negative outcomes. The two may lie on a continuum, with compulsivity on one end and impulsivity on the other, but research has been contradictory on this point. Compulsivity occurs in response to a perceived risk or threat, impulsivity occurs in response to a perceived immediate gain or benefit, and, whereas compulsivity involves repetitive actions, impulsivity involves unplanned reactions.
Impulsivity is a common feature of gambling disorder and alcohol addiction. Research has shown that individuals with either of these addictions discount delayed money at higher rates than those without, and that the combined presence of gambling and alcohol abuse leads to additive effects on discounting.
Impulse
An impulse is a wish or urge, particularly a sudden one. It can be considered a normal and fundamental part of human thought processes, but also one that can become problematic, as in conditions such as obsessive-compulsive disorder, borderline personality disorder, attention deficit hyperactivity disorder, and fetal alcohol spectrum disorders.
The ability to control impulses, or more specifically to control the desire to act on them, is an important factor in personality and socialization. Deferred gratification, also known as impulse control, is an example of this, concerning impulses primarily relating to things that a person wants or desires. Delayed gratification comes when one avoids acting on initial impulses, and it has been studied in relation to childhood obesity. Resisting the urge to act on impulses is important to teach children, because it teaches the value of delayed gratification.
Many psychological problems are characterized by a loss of control or a lack of control in specific situations. Usually, this lack of control is part of a pattern of behavior that also involves other maladaptive thoughts and actions, such as substance abuse problems or sexual disorders like the paraphilias (e.g. pedophilia and exhibitionism). When loss of control is only a component of a disorder, it usually does not have to be a part of the behavior pattern, and other symptoms must also be present for the diagnosis to be made. (Franklin)
The five traits that can lead to impulsive actions
For many years impulsivity was understood to be a single trait, but further analysis has identified five traits that can lead to impulsive actions:
positive urgency,
negative urgency,
sensation seeking,
lack of planning, and
lack of perseverance.
Associated behavioral and societal problems
Attention-deficit hyperactivity disorder
Attention-deficit hyperactivity disorder (ADHD) is a multi-component disorder involving inattention, impulsivity, and hyperactivity. The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) breaks ADHD into three subtypes according to the behavioral symptoms:
Attention-Deficit/Hyperactivity Disorder Predominantly Inattentive Type,
Attention-Deficit/Hyperactivity Disorder Predominantly Hyperactive-Impulsive Type, and
Attention-Deficit/Hyperactivity Disorder Combined Type.
Predominantly hyperactive-impulsive type symptoms may include
fidgeting and squirming in seats,
talking nonstop,
dashing around and touching or playing with anything in sight,
having trouble sitting still during dinner/school/story time,
being constantly in motion, and
having difficulty doing quiet tasks or activities.
Other manifestations primarily of impulsivity include
being very impatient,
having difficulty waiting for things they want or waiting their turns in games,
often interrupting conversations or others activities, or
blurting out inappropriate comments, showing their emotions without restraint, and acting without regard for consequences.
Prevalence of the disorder worldwide is estimated to be between 4% and 10%, with reports as low as 2.2% and as high as 17.8%. Variation in rates of diagnosis may be attributed to differences between populations (i.e. culture) and differences in diagnostic methodologies. Prevalence of ADHD among females is less than half that of males, and females more commonly fall into the inattentive subtype. Despite an upward trend in diagnoses of the inattentive subtype of ADHD, impulsivity is commonly considered to be the central feature of ADHD, and the impulsive and combined subtypes are the major contributors to the societal costs associated with ADHD. The estimated cost of illness for a child with ADHD is $14,576 (in 2005 dollars) annually. Prevalence of ADHD among prison populations is significantly higher than that of the general population. In both adults and children, ADHD has a high rate of comorbidity with other mental health disorders such as learning disability, conduct disorder, anxiety disorder, major depressive disorder, bipolar disorder, and substance use disorders.
The precise genetic and environmental factors contributing to ADHD remain relatively unknown, but endophenotypes offer a potential middle ground between genes and symptoms. ADHD is commonly linked to "core" deficits involving "executive function," "delay aversion," or "activation/arousal" theories that attempt to explain ADHD through its symptomatology. Endophenotypes, on the other hand, purport to identify potential behavioral markers that correlate with specific genetic etiology. There is some evidence to support deficits in response inhibition as one such marker. Problems inhibiting prepotent responses are linked with deficits in prefrontal cortex (PFC) functioning, which is a common dysfunction associated with ADHD and other impulse-control disorders. Evidence-based psychopharmacological and behavioral interventions exist for ADHD.
Substance abuse
Impulsivity appears to be linked to all stages of substance abuse. The acquisition phase of substance abuse involves the escalation from single use to regular use. Impulsivity may be related to the acquisition of substance abuse because the instant gratification provided by the substance may offset the larger future benefits of abstaining, and because people with impaired inhibitory control may not be able to overcome motivating environmental cues such as peer pressure. "Similarly, individuals that discount the value of delayed reinforcers begin to abuse alcohol, marijuana, and cigarettes early in life, while also abusing a wider array of illicit drugs compared to those who discounted delayed reinforcers less." Escalation or dysregulation is the next and more severe phase of substance abuse, in which individuals "lose control" of their addiction, with large levels of drug consumption and binge drug use. Animal studies suggest that individuals with higher levels of impulsivity may be more prone to the escalation stage of substance abuse. Impulsivity is also related to the abstinence, relapse, and treatment stages of substance abuse. People who scored high on the Barratt Impulsivity Scale (BIS) were more likely to stop treatment for cocaine abuse, and they adhered to treatment for a shorter duration than people who scored low on impulsivity. Impulsive people also had greater cravings for drugs during withdrawal periods and were more likely to relapse. This effect was shown in a study where smokers who tested high on the BIS had increased craving in response to smoking cues, and gave in to the cravings more quickly than less impulsive smokers. Taken as a whole, the current research suggests that impulsive individuals are less likely to abstain from drugs and more likely to relapse earlier than less impulsive individuals. While it is important to note the effect of impulsivity on substance abuse, the reciprocal effect, whereby substance abuse can increase impulsivity, has also been researched and documented. The promoting effect of impulsivity on substance abuse and the effect of substance abuse on increased impulsivity create a positive feedback loop that maintains substance-seeking behaviors. It also makes conclusions about the direction of causality difficult. This phenomenon has been shown to be related to several substances, but not all: for example, alcohol has been shown to increase impulsivity, while amphetamines have had mixed results. Substance use disorder treatments include prescription of medications such as acamprosate, buprenorphine, disulfiram, LAAM, methadone, and naltrexone, as well as effective psychotherapeutic treatments like
behavioral couples therapy, CBT, contingency management, motivational enhancement therapy, and relapse prevention.
Eating
Impulsive overeating spans from an episode of indulgence by an otherwise healthy person to chronic binges by a person with an eating disorder. Consumption of a tempting food by non-clinical individuals increases when self-regulatory resources have previously been depleted by another task, suggesting that it is caused by a breakdown in self-control. Impulsive eating of unhealthy snack foods appears to be regulated by individual differences in impulsivity when self-control is weak, and by attitudes towards the snack and towards healthy eating when self-control is strong. There is also evidence that greater food consumption occurs when people are in a sad mood, although it is possible that this is due more to emotional regulation than to a lack of self-control. In these cases, overeating will only take place if the food is palatable to the person, and if so, individual differences in impulsivity can predict the amount of consumption. Chronic overeating is a behavioral component of binge eating disorder, compulsive overeating, and bulimia nervosa. These disorders are more common in women and may involve eating thousands of calories at a time. Depending on which of these disorders is the underlying cause, an episode of overeating can have a variety of different motivations. Characteristics common among these three disorders include low self-esteem, depression, eating when not physically hungry, preoccupation with food, eating alone due to embarrassment, and feelings of regret or disgust after an episode. In these cases, overeating is not limited to palatable foods. Impulsivity differentially affects disorders involving the overcontrol of food intake (such as anorexia nervosa) and disorders involving the lack of control of food intake (such as bulimia nervosa). Cognitive impulsivity, such as risk-taking, is a component of many eating disorders, including those that are restrictive. However, only people with disorders involving episodes of overeating have elevated levels of motoric impulsivity, such as reduced response-inhibition capacity. One theory suggests that binging provides a short-term escape from feelings of sadness, anger, or boredom, although it may contribute to these negative emotions in the long term. Another theory suggests that binge eating involves reward seeking, as evidenced by decreased serotonin binding receptors in binge-eating women compared to matched-weight controls and by the predictive value of heightened reward sensitivity/drive for dysfunctional eating. Treatments for clinical-grade overeating include cognitive behavioral therapy to teach people how to track and change their eating habits and actions, interpersonal psychotherapy to help people analyze the contribution of their friends and family to their disorder, and pharmacological therapies including antidepressants and SSRIs.
Impulse buying
Impulse buying consists of purchasing a product or service without any previous intent to make that purchase. It has been speculated to account for as much as eighty percent of all purchases in the United States. There are several theories pertaining to impulsive buying. One theory suggests that exposure, combined with the speed at which a reward can be obtained, influences an individual to choose smaller immediate rewards over greater rewards that could be obtained later. For example, a person might choose to buy a candy bar because they are in the candy aisle, even though they had decided earlier that they would not buy candy while in the store.
Another theory is one of self-regulation, which suggests that the capacity to refrain from impulsive buying is a finite resource. As this capacity is depleted with repeated acts of restraint, susceptibility to purchasing other items on impulse increases. Finally, a third theory suggests an emotional and behavioral tie between the purchaser and the product, which drives both the likelihood of an impulsive purchase and the degree to which a person will retroactively be satisfied with that purchase. Some studies have shown that a large number of individuals are happy with purchases made on impulse (41% in one study), which is explained by a preexisting emotional attachment that is positively related both to the likelihood of initiating the purchase and to post-purchase satisfaction. As an example, when purchasing team-related college paraphernalia, a large percentage of those purchases are made on impulse and are tied to the degree to which a person has positive ties to that team. Impulsive buying is seen both as an individual trait, in which each person has a preconditioned or hereditary allotment, and as a situational construct, influenced by such things as emotion in the moment of the purchase and the preconditioned ties an individual has with the product. Psychotherapy and pharmacological treatments have been shown to be helpful interventions for patients with impulsive-compulsive buying disorder.
Psychotherapy interventions include the use of desensitization techniques, self-help books or attending a support group.
Pharmacological interventions include the use of SSRIs, such as fluvoxamine, citalopram, and escitalopram, as well as naltrexone.
Impulse control disorders not elsewhere classified
Impulse control disorders (ICDs) are a class of DSM diagnoses that do not fall into the other diagnostic categories of the manual (e.g. substance use disorders), and that are characterized by extreme difficulty controlling impulses or urges despite negative consequences. Individuals suffering from an impulse control disorder frequently experience five stages of symptoms: a compelling urge or desire, failure to resist the urge, a heightened sense of arousal, succumbing to the urge (which usually yields relief from tension), and potential remorse or feelings of guilt after the behavior is completed. Specific disorders included within this category are intermittent explosive disorder, kleptomania, pathological gambling, pyromania, trichotillomania (hair-pulling disorder), and impulse control disorders not otherwise specified (ICD NOS). ICD NOS includes other significant difficulties that seem to be related to impulsivity but do not meet the criteria for a specific DSM diagnosis. There has been much debate over whether the ICDs deserve a diagnostic category of their own, or whether they are in fact phenomenologically and epidemiologically related to other major psychiatric conditions like obsessive-compulsive disorder (OCD), affective disorders, and addictive disorders. In fact, the ICD classification is likely to change with the release of the DSM-V in May 2013. In this new revision, the ICD NOS will likely be reduced or removed; proposed revisions include reclassifying trichotillomania (to be renamed hair-pulling disorder) and skin-picking disorder as obsessive-compulsive and related disorders, moving intermittent explosive disorder under the diagnostic heading of disruptive, impulse control, and conduct disorders, and including gambling disorder in addiction and related disorders. The role of impulsivity in the ICDs varies. Research on kleptomania and pyromania is lacking, though there is some evidence that greater kleptomania severity is tied to poor executive functioning. Trichotillomania and skin-picking disorder seem to be disorders that primarily involve motor impulsivity, and will likely be classified in the DSM-V within the obsessive-compulsive and related disorders category. Pathological gambling, in contrast, seems to involve many diverse aspects of impulsivity and abnormal reward circuitry (similar to substance use disorders), which has led to it being increasingly conceptualized as a non-substance or behavioral addiction. Evidence elucidating the role of impulsivity in pathological gambling is accumulating, with pathological gambling samples demonstrating greater response impulsivity, choice impulsivity, and reflection impulsivity than comparison control samples. Additionally, pathological gamblers tend to demonstrate greater response perseveration (compulsivity) and risky decision-making in laboratory gambling tasks compared to controls, though there is no strong evidence suggesting that attention and working memory are impaired in pathological gamblers. These relations between impulsivity and pathological gambling are confirmed by brain function research: pathological gamblers demonstrate less activation in the frontal cortical regions (implicated in impulsivity) compared to controls during behavioral tasks tapping response impulsivity, compulsivity, and risk/reward. Preliminary, though variable, findings also suggest that striatal activation differs between gamblers and controls, and that neurotransmitter differences (e.g.
dopamine, serotonin, opioids, glutamate, norepinephrine) may exist as well. Individuals with intermittent explosive disorder, also known as impulsive aggression, have exhibited serotonergic abnormalities and show differential activation in response to emotional stimuli and situations. Notably, intermittent explosive disorder is not associated with a higher likelihood of diagnosis with any of the other ICDs, but is highly comorbid with disruptive behavior disorders in childhood. Intermittent explosive disorder is likely to be re-classified in the DSM-V under the heading of disruptive, impulse control, and conduct disorders. These sorts of impulse control disorders are most often treated using certain types of psychopharmacological interventions (e.g. antidepressants) and behavioral treatments like cognitive behavioral therapy.
Theories of impulsivity
Ego (cognitive) depletion
According to the ego (or cognitive) depletion theory of impulsivity, self-control refers to the capacity for altering one's own responses, especially to bring them into line with standards such as ideals, values, morals, and social expectations, and to support the pursuit of long-term goals. Self-control enables a person to restrain or override one response, thereby making a different response possible.
A major tenet of the theory is that engaging in acts of self-control draws from a limited "reservoir" of self-control that, when depleted, results in reduced capacity for further self-regulation. Self-control is viewed as analogous to a muscle: Just as a muscle requires strength and energy to exert force over a period of time, acts that have high self-control demands also require strength and energy to perform. Similarly, as muscles become fatigued after a period of sustained exertion and have reduced capacity to exert further force, self-control can also become depleted when demands are made of self-control resources over a period of time. Baumeister and colleagues termed the state of diminished self-control strength ego depletion (or cognitive depletion). The strength model of self-control asserts that:
Just as exercise can make muscles stronger, there are signs that regular exertions of self-control can improve willpower strength. These improvements typically take the form of resistance to depletion, in the sense that performance at self-control tasks deteriorates at a slower rate. Targeted efforts to control behavior in one area, such as spending money or exercise, lead to improvements in unrelated areas, such as studying or household chores. And daily exercises in self-control, such as improving posture, altering verbal behavior, and using one's nondominant hand for simple tasks, gradually produce improvements in self-control as measured by laboratory tasks. The finding that these improvements carry over into tasks vastly different from the daily exercises shows that the improvements are not due to simply increasing skill or acquiring self-efficacy from practice.
Just as athletes begin to conserve their remaining strength when their muscles begin to tire, so do self-controllers when some of their self-regulatory resources have been expended. The severity of behavioral impairment during depletion depends in part on whether the person expects further challenges and demands. When people expect to have to exert self-control later, they will curtail current performance more severely than if no such demands are anticipated.
Consistent with the conservation hypothesis, people can exert self-control despite ego depletion if the stakes are high enough. Offering cash incentives or other motives for good performance counteracts the effects of ego depletion. This may seem surprising, but in fact it may be highly adaptive. Given the value and importance of the capacity for self-control, it would be dangerous for a person to lose that capacity completely, and so ego depletion effects may occur because people start conserving their remaining strength. When people do exert themselves on the second task, they deplete the resource even more, as reflected in severe impairments on a third task that they have not anticipated. Empirical tests of the ego-depletion effect typically adopt a dual-task paradigm. Participants assigned to an experimental ego-depletion group are required to engage in two consecutive tasks requiring self-control. Control participants are also required to engage in two consecutive tasks, but only the second task requires self-control. The strength model predicts that the performance of the experimental group on the second self-control task will be impaired relative to that of the control group. This is because the finite self-control resources of the experimental participants will be diminished after the initial self-control task, leaving little to draw on for the second task. The effects of ego depletion do not appear to be a product of mood or arousal. In most studies, mood and arousal have not been found to differ between participants who exerted self-control and those who did not. Likewise, mood and arousal were not related to final self-control performance. The same is true for more specific mood items, such as frustration, irritation, annoyance, boredom, or interest. Feedback about success and failure of the self-control efforts does not appear to affect performance. In short, the decline in self-control performance after exerting self-control appears to be directly related to the amount of self-control exerted and cannot be easily explained by other, well-established psychological processes.
Automatic vs. controlled processes/cognitive control
Dual process theory states that mental processes operate in two separate classes: automatic and controlled. In general, automatic processes are those that are experiential in nature, occur without involving higher levels of cognition, and are based on prior experiences or informal heuristics. Controlled decisions are effortful and largely conscious processes in which an individual weighs alternatives and makes a more deliberate decision.
Automatic Process: Automatic processes have four main features. They occur unintentionally or without a conscious decision, the cost of the decision is very low in mental resources, they cannot be easily stopped, and they occur without conscious thought on the part of the individual making them.
Controlled Process: Controlled processes also have four main features that are very nearly the opposite of their automatic counterparts. Controlled processes occur intentionally, they require the expenditure of cognitive resources, the individual making the decision can stop the process voluntarily, and the mental process is a conscious one. Dual process theories at one time considered any single action/thought as either automatic or controlled. However, they are currently seen as operating more along a continuum, as most impulsive actions will have both controlled and automatic attributes. Automatic processes are classified according to whether they are meant to inhibit or to facilitate a thought process. For example, in one study researchers offered individuals a choice between a 1-in-10 chance of winning a prize and a 10-in-100 chance. Many participants chose one option over the other without recognizing that the odds were the same, seeing either the smaller number of total chances or the larger number of chances to win as more beneficial. In effect, impulsive decisions can be made when prior information and experience suggest that one course of action is more beneficial, when in actuality careful consideration would better enable the individual to make a more informed decision.
Intertemporal choice
Intertemporal choice is defined as "decisions with consequences that play out over time". This is often assessed using the relative value people assign to rewards at different points in time, either by asking experimental subjects to choose between alternatives or by examining behavioral choices in a naturalistic setting. Intertemporal choice is commonly measured in the laboratory using a "delayed discounting" paradigm, which measures the process of devaluing rewards and punishments that happen in the future. In this paradigm, subjects must choose between a smaller reward delivered soon and a larger reward delivered after a delay. Choosing the smaller-sooner reward is considered impulsive. By repeatedly making these choices, indifference points can be estimated. For example, if someone chose $70 now over $100 in a week, but chose the $100 in a week over $60 now, it can be inferred that they are indifferent between $100 in a week and an intermediate value between $60 and $70. A delay discounting curve can be obtained for each participant by plotting their indifference points for different reward amounts and time delays. Individual differences in discounting curves are affected by personality characteristics such as self-reports of impulsivity and locus of control; personal characteristics such as age, gender, IQ, race, and culture; socioeconomic characteristics such as income and education; and many other variables. Steep delay discounting has also been linked to drug addiction. Lesions of the nucleus accumbens core subregion or basolateral amygdala produce shifts towards choosing the smaller-sooner reward, suggesting the involvement of these brain regions in the preference for delayed reinforcers. There is also evidence that the orbitofrontal cortex is involved in delay discounting, although there is currently debate on whether lesions in this region result in more or less impulsivity. Economic theory suggests that optimal discounting involves the exponential discounting of value over time. This model assumes that people and institutions should discount the value of rewards and punishments at a constant rate according to how delayed they are in time. While economically rational, recent evidence suggests that people and animals do not discount exponentially. Many studies suggest that humans and animals discount future values according to a hyperbolic discounting curve, where the discount factor decreases with the length of the delay (for example, waiting from today to tomorrow involves more loss of value than waiting from twenty days to twenty-one days). Further evidence for non-constant delay discounting is suggested by the differential involvement of various brain regions in evaluating immediate versus delayed consequences. Specifically, the prefrontal cortex is activated when choosing between rewards at a short delay or a long delay, but regions associated with the dopamine system are additionally activated when the option of an immediate reinforcer is added.
Additionally, intertemporal choices differ from economic models because they involve anticipation (which may involve a neurological "reward" even if the reinforcer is delayed), self-control (and the breakdown of it when faced with temptations), and representation (how the choice is framed may influence desirability of the reinforcer), none of which are accounted for by a model that assumes economic rationality. One facet of intertemporal choice is the possibility for preference reversal, when a tempting reward becomes more highly valued than abstaining only when immediately available. For example, when sitting home alone, a person may report that they value the health benefit of not smoking a cigarette over the effect of smoking one. However, later at night, when the cigarette is immediately available, its subjective value may rise and they may choose to smoke it. A theory called the "primrose path" is intended to explain how preference reversal can lead to addiction in the long run. As an example, a lifetime of sobriety may be more highly valued than a lifetime of alcoholism, but, at the same time, one drink now may be more highly valued than not drinking now. Because it is always "now," the drink is always chosen, and a paradoxical effect occurs whereby the more-valued long-term alternative is not achieved because the more-valued short-term alternative is always chosen. This is an example of complex ambivalence, when a choice is made not between two concrete alternatives but between one immediate and tangible alternative (i.e. having a drink) and one delayed and abstract alternative (i.e. sobriety).
Similarities between humans and non-human animals in intertemporal choice have been studied. Pigeons and rats also discount hyperbolically; tamarin monkeys do not wait more than eight seconds to triple the amount of a food reward. The question arises as to whether this is a difference of homology or analogy—that is, whether the same underlying process underlies human-animal similarities or whether different processes are manifesting in similar patterns of results.
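The contrast between the exponential and hyperbolic models above is easy to make concrete numerically. The following is a minimal sketch under stated assumptions: the $65 indifference value is simply the midpoint of the $60-$70 range inferred in the example above, and the exponential rate of 0.06 per day is an arbitrary choice for comparison, not a figure from any cited study.

import math

def exponential_value(amount, delay_days, k):
    # Constant-rate discounting: V = A * e^(-k * D)
    return amount * math.exp(-k * delay_days)

def hyperbolic_value(amount, delay_days, k):
    # Hyperbolic discounting: V = A / (1 + k * D)
    return amount / (1 + k * delay_days)

# If someone is indifferent between $65 now and $100 in 7 days, the
# hyperbolic model implies 65 = 100 / (1 + 7k), so k can be solved for:
k_hyp = (100.0 / 65.0 - 1) / 7        # about 0.077 per day

for d in (0, 1, 7, 30, 90):
    print(d, round(hyperbolic_value(100, d, k_hyp), 2),
          round(exponential_value(100, d, 0.06), 2))

Printing the two curves side by side shows the signature of hyperbolic discounting noted above: the proportional loss of value from day 0 to day 1 is much larger than from day 30 to day 31, whereas the exponential curve loses the same proportion every day.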
Inhibitory control
Inhibitory control, often conceptualized as an executive function, is the ability to inhibit or hold back a prepotent response. It is theorized that impulsive behavior reflects a deficit in this ability to inhibit a response; impulsive people may find it more difficult to inhibit action, whereas non-impulsive people may find it easier to do so. There is evidence that, in normal adults, commonly used behavioral measures of inhibitory control correlate with standard self-report measures of impulsivity. Inhibitory control may itself be multifaceted, evidenced by numerous distinct inhibition constructs that can be measured in different ways and that relate to specific types of psychopathology. Joel Nigg developed a useful working taxonomy of these different types of inhibition, drawing heavily from the fields of cognitive and personality psychology. Nigg's eight proposed types of inhibition include the following:
Executive Inhibition
Interference control
Suppression of a stimulus that elicits an interfering response, enabling a person to complete the primary response. Interference control can also refer to suppressing distractors. Interference control has been measured using cognitive tasks like the Stroop test, flanker tasks, dual-task interference, and priming tasks. Personality researchers have used the Rothbart effortful control measures and the conscientiousness scale of the Big Five as inventory measures of interference control. Based on imaging and neural research, it is theorized that the anterior cingulate, the dorsolateral prefrontal/premotor cortex, and the basal ganglia are related to interference control.
Cognitive inhibition
Cognitive inhibition is the suppression of unwanted or irrelevant thoughts to protect working memory and attention resources. Cognitive inhibition is most often measured through tests of directed ignoring, self-report on one's intrusive thoughts, and negative priming tasks. As with interference control, personality psychologists have measured cognitive inhibition using the Rothbart Effortful Control scale and the Big Five Conscientiousness scale. The anterior cingulate, the prefrontal regions, and the association cortex seem to be involved in cognitive inhibition.
Behavioral inhibition
Behavioral inhibition is the suppression of a prepotent response. It is usually measured using the Go/No-go task, the Stop-signal task, and reports of suppression of attentional orienting. Surveys that are theoretically relevant to behavioral inhibition include the Rothbart effortful control scale and the Big Five Conscientiousness dimension. The rationale behind the use of behavioral measures like the Stop-signal task is that "go" processes and "stop" processes are independent, and that, upon "go" and "stop" cues, they "race" against each other; if the go process wins the race, the prepotent response is executed, whereas if the stop process wins the race, the response is withheld. In this context, impulsivity is conceptualized as a relatively slow stop process. The brain regions involved in behavioral inhibition appear to be the lateral and orbital prefrontal regions along with premotor processes.
Oculomotor Inhibition
Oculomotor inhibition is the effortful suppression of reflexive saccades. It is tested using antisaccade and oculomotor tasks. The Rothbart effortful control measure and the Big Five Conscientiousness dimension are also thought to tap some of the effortful processes underlying the ability to suppress saccades. The frontal eye fields and the dorsolateral prefrontal cortex are involved in oculomotor inhibition.
Motivational inhibition
In response to punishment
Motivational inhibition and response in the face of punishment can be measured using tasks tapping inhibition of primary response, modified go/no go tasks, inhibition of competing response, and emotional Stroop tasks. Personality psychologists also use the Gray behavioral inhibition system measure, the Eysenck scale for neurotic introversion, and the Zuckerman Neuroticism-Anxiety scale. The Septal-hippocampal formation, cingulate, and motor systems seem to be the brain areas most involved in response to punishment.
In response to novelty
Response to novelty has been measured using the Kagan behavioral inhibition system measure and scales of neurotic introversion. The amygdaloid system is implicated in novelty response.
Automatic inhibition of attention
Recently inspected stimuli
Suppression of recently inspected stimuli, for both attention and oculomotor saccades, is usually measured using attentional and oculomotor inhibition-of-return tests. The superior colliculus and the midbrain oculomotor pathway are involved in this suppression of stimuli.
Neglected stimuli
Information at locations that are not presently being attended to is suppressed while attending elsewhere. This involves measures of covert attentional orienting and neglect, along with personality scales on neuroticism. The posterior association cortex and subcortical pathways are implicated in this sort of inhibition.
Action/Inaction goals
Recent psychological research has also examined impulsivity in relation to people's general goals for action and inaction. These action and inaction goals may underlie behavioral differences in daily life, since they demonstrate "patterns comparable to natural variation in overall activity levels". More specifically, levels of impulsivity and mania may correlate positively with favorable attitudes toward, and goals of, general action, and negatively with favorable attitudes toward, and goals of, general inaction.
Assessment of impulsivity
Personality tests and reports
Barratt Impulsiveness Scale
The Barratt Impulsiveness Scale (BIS) is one of the oldest and most widely used measures of impulsive personality traits. The first BIS was developed in 1959 by Dr. Ernest Barratt. It has been revised extensively to achieve two major goals: (1) to identify a set of "impulsiveness" items that was orthogonal to a set of "anxiety" items as measured by the Taylor Manifest Anxiety Scale (MAS) or the Cattell Anxiety Scale, and (2) to define impulsiveness within the structure of related personality traits like Eysenck's Extraversion dimension or Zuckerman's Sensation-Seeking dimension, especially the disinhibition subfactor. The BIS-11, with 30 items, was developed in 1995. According to Patton and colleagues, there are 3 subscales (Attentional Impulsiveness, Motor Impulsiveness, and Non-Planning Impulsiveness) with six factors (a scoring sketch follows the list):
Attention: "focusing on a task at hand".
Motor impulsiveness: "acting on the spur of the moment".
Self-control: "planning and thinking carefully".
Cognitive complexity: "enjoying challenging mental tasks".
Perseverance: "a consistent life style".
Cognitive instability: "thought insertion and racing thoughts".
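Scoring an inventory of this kind is mechanically simple: items are summed by subscale, with reverse-keyed items flipped. The sketch below uses an entirely hypothetical item-to-subscale mapping and reverse-key set, since the published BIS-11 scoring key is not reproduced here; only the procedure is illustrated.

REVERSE = {2, 5}                       # hypothetical reverse-keyed items
SUBSCALES = {                          # hypothetical item assignments
    "attentional": [1, 2, 3],
    "motor": [4, 5, 6],
    "non_planning": [7, 8, 9],
}

def score(responses, scale_max=4):
    # responses maps item number -> rating on a 1..scale_max scale
    totals = {}
    for name, items in SUBSCALES.items():
        total = 0
        for i in items:
            r = responses[i]
            # A reverse-keyed rating of 1 counts as scale_max, and so on.
            total += (scale_max + 1 - r) if i in REVERSE else r
        totals[name] = total
    return totals

print(score({1: 3, 2: 1, 3: 4, 4: 2, 5: 2, 6: 4, 7: 1, 8: 3, 9: 2}))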
Eysenck Impulsiveness Scale
The Eysenck Impulsiveness Scale (EIS) is a 54-item yes/no questionnaire designed to measure impulsiveness. Three subscales are computed from this measure: Impulsiveness, Venturesomeness, and Empathy. Impulsiveness is defined as "behaving without thinking and without realizing the risk involved in the behavior". Venturesomeness is conceptualized as "being conscious of the risk of the behavior but acting anyway". The questionnaire was constructed through factor analysis to contain the items that loaded most highly on impulsiveness and venturesomeness. The EIS is a widely used and well-validated measure.
Dickman Impulsivity Inventory
The Dickman Impulsivity Inventory was first developed in 1990 by Scott J. Dickman. The scale is based on Dickman's proposal that there are two types of impulsivity that are significantly different from one another: functional impulsivity, characterized by quick decision-making when it is optimal (a trait often considered a source of pride), and dysfunctional impulsivity, characterized by quick decision-making when it is not optimal. The dysfunctional type is most often associated with life difficulties, including substance abuse problems and other negative outcomes. The scale includes 63 items, of which 23 relate to dysfunctional impulsivity, 17 relate to functional impulsivity, and 23 are filler questions that relate to neither construct. It has been developed into a version for use with children, as well as into several languages. Dickman showed there is no correlation between these two tendencies across individuals, and that they also have different cognitive correlates.
UPPS Impulsive Behavior Scale
The UPPS Impulsive Behavior Scale is a 45-item self-report questionnaire that was designed to measure impulsivity across dimensions of the Five Factor Model of personality. The UPPS includes 4 sub-scales: lack of premeditation, urgency, lack of perseverance, and sensation-seeking.
UPPS-P Impulsive Behavior Scale (UPPS-P) is a revised version of the UPPS, including 59 items. It assesses an additional personality pathway to impulsive behavior, Positive Urgency, in addition to the four pathways assessed in the original version of the scale: Urgency (now Negative Urgency), (lack of) Premeditation, (lack of) Perseverance, and Sensation Seeking.
UPPS-P short version (UPPS-Ps) is a 20-item scale that evaluates five different impulsivity facets (4 items per dimension).
UPPS-R Interview is a semi-structured interview that measures the degree to which individuals exhibit the various components of impulsivity assessed by the UPPS-P.
Lifetime History of Impulsive Behaviors
Lifetime History of Impulsive Behaviors (LHIB) is a 53-item questionnaire designed to assess lifetime history of impulsive behavior (as opposed to impulsive tendencies) as well as the level of distress and impairment associated with these behaviors. The assessment battery was designed to measure the following six dimensions: (a) impulsivity, (b) sensation seeking, (c) trait anxiety, (d) state depression, (e) empathy, and (f) social desirability. The LHIB consists of scales for clinically significant impulsivity, non-clinically significant impulsivity, and impulsivity-related distress/impairment.
Behavioral Inhibition System/Behavioral Activation System
Behavioral Inhibition System/Behavioral Activation System (BIS/BAS) scales were developed based on Gray's biopsychological theory of personality, which suggests that there are two general motivational systems that underlie behavior and affect: BIS and BAS. This 20-item self-report questionnaire is designed to assess dispositional BIS and BAS sensitivities.
Impulsive/Premeditated Aggression Scale
Impulsive/Premeditated Aggression Scale (IPAS) is a 30-item self-report questionnaire. Half of the items describe impulsive aggression and half describe premeditated aggression. Aggressive behavior has traditionally been classified into two distinct subtypes, impulsive or premeditated. Impulsive aggression is defined as a hair-trigger aggressive response to provocation with loss of behavioral control. Premeditated aggression is defined as a planned or conscious aggressive act, not spontaneous or related to an agitated state. The IPAS is designed to characterize aggressive behavior as predominately impulsive or predominately premeditated in nature. Subjects who clustered on the impulsive factor showed a broad range of emotional and cognitive impairments; those who clustered on the premeditated factor showed a greater inclination for aggression and antisocial behavior.
Padua Inventory
The Padua Inventory (PI) consists of 60 items describing common obsessional and compulsive behavior and allows investigation of such problems in normal and clinical subjects.
Behavioral paradigms
A wide variety of behavioral tests have been devised for the assessment of impulsivity in both clinical and experimental settings. While no single test is a perfect predictor or a sufficient replacement for an actual clinical diagnosis, when used in conjunction with parent/teacher reports, behavioral surveys, and other diagnostic criteria, the utility of behavioral paradigms lies in their ability to home in on specific, discrete aspects of the impulsivity umbrella. Quantifying specific deficits is of use to the clinician and the experimenter, both of whom are generally concerned with obtaining objectively measurable treatment effects.
Marshmallow test
One widely recognizable test for impulsivity is the delay of gratification paradigm commonly known as the marshmallow test. Developed in the 1960s to assess willpower and self-control in preschoolers, the marshmallow test consists of placing a single marshmallow in front of a child and informing them that they will be left alone in the room for some duration. The child is told that if the marshmallow remains uneaten when the experimenter returns, they will be awarded a second marshmallow, both of which can then be eaten. Despite its simplicity and ease of administration, evidence from longitudinal studies suggests that the number of seconds preschoolers wait to obtain the second marshmallow is predictive of higher SAT scores, better social and emotional coping in adolescence, higher educational achievement, and less cocaine/crack use.
Delay discounting
Like the marshmallow test, delay discounting is a delay of gratification paradigm. It is designed around the principle that the subjective value of a reinforcer decreases, or is discounted, as the delay to reinforcement increases. Subjects are given varying choices between smaller, immediate rewards and larger, delayed rewards. By manipulating reward magnitude and/or reward delay over multiple trials, indifference points can be estimated whereby choosing the small immediate reward or the large delayed reward is about equally likely. Subjects are labeled impulsive when their indifference points decline more steeply as a function of delay compared to the normal population (i.e. they show a greater preference for immediate reward). Unlike the marshmallow test, delay discounting does not require verbal instruction and can be implemented with non-human animals. A sketch of one such adjusting-amount procedure follows.
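Everything below is an assumed toy version, not a specific published protocol: the halfway starting point, the halving step rule, and the simulated hyperbolic chooser (with an assumed k of 0.05 per day) are all stand-ins for illustration.

def estimate_indifference(delayed_amount, delay, choose, n_trials=8):
    immediate = delayed_amount / 2.0      # start halfway
    step = delayed_amount / 4.0
    for _ in range(n_trials):
        if choose(immediate, delayed_amount, delay):
            immediate -= step             # immediate preferred: offer less
        else:
            immediate += step             # delayed preferred: offer more now
        step /= 2.0                       # converge on the indifference point
    return immediate

# A simulated subject who discounts hyperbolically with k = 0.05:
k = 0.05
subject = lambda imm, amount, d: imm > amount / (1 + k * d)
print(estimate_indifference(100.0, 30, subject))   # converges near 40

For this simulated subject, $100 in 30 days is subjectively worth 100 / (1 + 0.05 * 30) = $40, and the staircase converges to that value; repeated at several delays, these estimates trace out the discounting curve described above.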
Go/no-go and stop-signal reaction time tasks
Two common tests of response inhibition used in humans are the go/no-go task and a slight variant known as the stop-signal reaction time (SSRT) test. During a go/no-go task, the participant is trained over multiple trials to make a particular response (e.g., a key-press) when presented with a go signal. On some trials, a stop signal is presented just prior to, or simultaneously with, the go signal, and the subject must inhibit the impending response.
The SSRT test is similar, except that the stop signal is presented after the go signal. This small modification increases the difficulty of inhibiting the go response, because the participant has typically already initiated the go response by the time the stop signal is presented. The participant is instructed to respond as fast as possible to the go signal while maintaining the highest possible inhibition accuracy (on no-go trials). During the task, the time at which the stop signal is presented (the stop signal delay or SSD) is dynamically adjusted to match the time after the go signal at which the participant is just able/unable to inhibit their go response. If the participant fails to inhibit their go response, the stop signal is moved slightly closer to the original go signal, and if the participant successfully inhibits their go response, the stop signal is moved slightly ahead in time. The SSRT is thus measured as the average go response time minus the average stop signal presentation time (SSD).
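The staircase logic and the SSRT arithmetic can be made concrete with a small simulation. This is a toy model under stated assumptions, not parameters from any particular study: go reaction times are Gaussian, the simulated subject has a fixed "true" stop latency of 200 ms, and the SSD moves in 50 ms steps.

import random

def estimate_ssrt(mean_go_rt=450.0, true_stop_latency=200.0,
                  n_trials=200, step=50.0):
    ssd = 250.0
    ssds = []
    for _ in range(n_trials):
        go_rt = random.gauss(mean_go_rt, 60.0)
        # Race model: the stop process wins if it finishes first.
        inhibited = ssd + true_stop_latency < go_rt
        ssds.append(ssd)
        # Staircase: after a successful stop, make stopping harder (later
        # stop signal); after a failed stop, make it easier.
        ssd = ssd + step if inhibited else max(0.0, ssd - step)
    mean_ssd = sum(ssds) / len(ssds)
    return mean_go_rt - mean_ssd          # SSRT = mean go RT - mean SSD

print(round(estimate_ssrt(), 1))          # recovers roughly 200 ms

Because the staircase drives the SSD toward the point where stopping succeeds on about half of trials, the mean SSD settles near mean go RT minus the stop latency, so the subtraction recovers the latency itself.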
Balloon Analogue Risk Task
The balloon analogue risk task (BART) was designed to assess risk-taking behavior. Subjects are presented with a computer depiction of a balloon that can be incrementally inflated by pressing a response key. As the balloon inflates, the subject accumulates rewards with each new key-press. The balloon is programmed with a constant probability of popping. If the balloon pops, all rewards for that balloon are lost, or the subject may choose to stop inflating and bank the reward for that balloon at any time. Therefore, more key-presses equate to greater reward, but also greater probability of popping and cancelling rewards for that trial. The BART assumes that those with an affinity for risk-taking are more likely to pop the balloon, earning less reward overall than the typical population.
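The risk-reward structure of the BART reduces to simple expected-value arithmetic. The numbers below (5 cents per pump and a balloon that pops at a point drawn uniformly from 128 capacity units) are illustrative assumptions in the spirit of the task, not its official parameters.

def expected_earnings(n_pumps, per_pump=0.05, max_capacity=128):
    # Probability the balloon survives n pumps when the popping point is
    # equally likely to be any of the remaining capacity units.
    p_survive = 1.0
    for i in range(n_pumps):
        p_survive *= 1 - 1 / (max_capacity - i)
    return p_survive * n_pumps * per_pump

best = max(range(1, 128), key=expected_earnings)
print(best, round(expected_earnings(best), 2))   # 64 pumps, about $1.60

Under these assumptions, expected earnings peak at half the balloon's capacity; pumping beyond that point buys a larger potential reward at a pop probability that more than cancels it, which is exactly the trade-off the task uses to separate risk-takers from the typical population.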
Iowa Gambling Task
The Iowa gambling task (IGT) is a test originally meant to measure decision-making, specifically in individuals with ventromedial prefrontal cortex damage. The concept of impulsivity as it relates to the IGT is one in which impulsive decisions are a function of an individual's inability to make rational decisions over time due to an over-amplification of emotional/somatic reward. In the IGT, individuals are provided four decks of cards to choose from. Two of these decks provide much higher rewards, but the deductions are also much higher, while the other two decks have lower rewards per card but also much lower deductions. Over time, anyone who chooses predominantly from the high-reward decks will lose money, while those who choose from the smaller-reward decks will gain money.
The IGT uses hot and cold processes in its concept of decision-making. Hot decision-making involves emotional responses to the material presented, based on motivation related to reward and punishment. Cold processes occur when an individual uses rational cognitive determinations when making decisions. Combined, an individual should gain a positive emotional reaction when choices have beneficial consequences and negative emotional responses tied to choices that have greater negative consequences. In general, healthy responders to the IGT will begin to drift to the lower-gain decks as they realize, both by recognizing which decks more consistently provide rewards and through the emotions related to winning consistently, that they are gaining more money than they lose. However, those who have emotional deficits will fail to recognize that they are losing money over time and will continue to be more influenced by the exhilaration of higher-value rewards, without being influenced by the negative emotions of the losses associated with them. For more information concerning these processes, refer to the somatic marker hypothesis.
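A toy simulation of the deck asymmetry shows why chasing the large immediate rewards loses money. The payoffs below follow the commonly cited approximate structure of the task, with the "bad" decks paying $100 per card at an average net loss of $25 per card and the "good" decks paying $50 at an average net gain of $25; the exact card-by-card penalty schedules vary across versions and are not reproduced here.

import random

DECKS = {
    "A": (100, -125),   # (gain per card, average penalty per card)
    "B": (100, -125),
    "C": (50, -25),
    "D": (50, -25),
}

def play(pick_deck, n_trials=100, starting_loan=2000):
    total = starting_loan
    for _ in range(n_trials):
        gain, avg_penalty = DECKS[pick_deck()]
        total += gain + avg_penalty
    return total

reward_chaser = lambda: random.choice("AB")   # drawn to large payoffs
healthy = lambda: random.choice("CD")         # drifts to steady gains
print(play(reward_chaser), play(healthy))     # about -500 versus 4500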
Differential Reinforcement of Low Response Rate Task
Differential reinforcement of low response rate (DRL), described by Ferster and Skinner, is used to encourage low rates of responding. It is derived from research in operant conditioning and provides an excellent opportunity to measure the hyperactive child's ability to inhibit behavioral responding. Hyperactive children were relatively unable to perform efficiently on the task, and this deficit endured regardless of age, IQ, or experimental condition. Therefore, it can be used to discriminate accurately between teacher-rated and parent-rated hyperactive and nonhyperactive children. In this procedure, responses that occur before a set time interval has passed are not reinforced and reset the time required between behaviors. In one study, a child was taken to the experimental room and told that they were going to play a game in which they had a chance to win a lot of M&Ms. Every time they lit the reward indicator light by pressing a red button, they would earn an M&M, but they had to wait a while (6 seconds) before pressing it would earn another point. If they pressed the button too soon, they did not get a point, the light did not go on, and they had to wait the full interval again before a press could earn another point. Researchers have also observed that subjects in a time-based situation will often engage in a sequence or chain of behaviors between reinforceable responses, because this collateral behavior sequence helps the subject "wait out" the required temporal delay between responses. A minimal sketch of the contingency appears below.
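The 6-second contingency in that study reduces to a few lines of logic: a press is reinforced only if enough time has passed since the previous press, and every press, early or not, restarts the clock. The press times in the example are chosen for illustration.

def drl_session(press_times, interval=6.0):
    rewards = 0
    last_press = None
    for t in press_times:
        if last_press is None or t - last_press >= interval:
            rewards += 1            # light goes on, an M&M is earned
        # Early or not, every press restarts the required delay.
        last_press = t
    return rewards

# Presses at 0 s and 7 s earn; 9 s is too soon and resets the clock;
# 16 s and 23 s earn again, for 4 rewards in total.
print(drl_session([0, 7, 9, 16, 23]))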
Other
Other common impulsivity tasks include the Continuous performance task (CPT), 5-choice serial reaction time task (5-CSRTT), Stroop task, and Matching Familiar Figures Task.
Pharmacology and neurobiology
Neurobiological findings
Although the precise neural mechanisms underlying disorders of impulse control are not fully known, the prefrontal cortex (PFC) is the brain region most ubiquitously implicated in impulsivity. Damage to the prefrontal cortex has been associated with difficulties preparing to act, switching between response alternatives, and inhibiting inappropriate responses. Recent research has uncovered additional regions of interest, as well as highlighted particular subregions of the PFC, that can be tied to performance in specific behavioral tasks.
Delay discounting
Excitotoxic lesions in the nucleus accumbens core have been shown to increase preference for the smaller, immediate reward, whereas lesions to the nucleus accumbens shell have had no observable effect. Additionally, lesions of the basolateral amygdala, a region tied closely to the PFC, increase impulsive choice in a manner similar to what is observed with nucleus accumbens core lesions. Moreover, the dorsal striatum may also be involved in impulsive choice, in an intricate manner.
Go/No-go and Stop-signal reaction time test
The orbitofrontal cortex is now thought to play a role in disinhibition, and injury to other brain structures, such as the right inferior frontal gyrus, a specific subregion of the PFC, has been associated with deficits in stop-signal inhibition.
5-Choice Serial Reaction Time Task (5-CSRTT) and Differential Reinforcement of Low Rates (DRL)
As with delay discounting, lesion studies have implicated the core region of the nucleus accumbens in response inhibition for both DRL and 5-CSRTT. Premature responses in the 5-CSRTT may also be modulated by other systems within the ventral striatum. In the 5-CSRTT, lesions of the anterior cingulate cortex have been shown to increase impulsive responding, and lesions to the prelimbic cortex impair attentional performance.
Iowa Gambling Task
Patients with damage to the ventromedial frontal cortex exhibit poor decision-making and persist in making risky choices in the Iowa Gambling Task.
Neurochemical and pharmacological findings
The primary pharmacological treatments for ADHD are methylphenidate (Ritalin) and amphetamine. Both methylphenidate and amphetamines block re-uptake of dopamine and norepinephrine into the pre-synaptic neuron, acting to increase post-synaptic levels of dopamine and norepinephrine. Of these two monoamines, increased availability of dopamine is considered the primary cause for the ameliorative effects of ADHD medications, whereas increased levels of norepinephrine may be efficacious only to the extent that it has downstream, indirect effects on dopamine.
The effectiveness of dopamine re-uptake inhibitors in treating the symptoms of ADHD has led to the hypothesis that ADHD may arise from low tonic levels of dopamine (particularly in the fronto-limbic circuitry), but evidence in support of this theory is mixed.
Genetics
There are several difficulties when it comes to trying to identify a gene for complex traits such as impulsivity, such as genetic heterogeneity. Another difficulty is that the genes in question might sometimes show incomplete penetrance, "where a given gene variant does not always cause the phenotype". Much of the research on the genetics of impulsivity-related disorders, such as ADHD, is based on family or linkage studies. There are several genes of interest that have been studied in an attempt to find the major genetic contributors to impulsivity. Some of these genes are:
DAT1 is the dopamine transporter gene which is responsible for the active reuptake of dopamine from the neural synapse. DAT1 polymorphisms have been shown to be linked to hyperactivity and ADHD.
DRD4 is the dopamine D4 receptor gene and is associated with ADHD and novelty seeking behaviors. It has been proposed that novelty seeking is associated with impulsivity. Mice deficient for DRD4 have shown less behavioral responses to novelty.
5HT2A is the serotonin 2A receptor gene. It has been associated with hyperlocomotion, ADHD, and impulsivity. Subjects with a particular polymorphism of the 5HT2A gene made more commission errors during a punishment-reward condition in a go/no-go task.
HTR2B a serotonin receptor gene.
CTNNA2 encodes for a brain-expressed α-catenin that has been associated with Excitement-Seeking in a genome-wide association study (GWAS) of 7860 individuals.
Intervention
Interventions to impact impulsivity generally
While impulsivity can take on pathological forms (e.g. substance use disorder, ADHD), there are less severe, non-clinical forms of problematic impulsivity in many people's daily lives. Research on the different facets of impulsivity can inform small interventions to change decision-making and reduce impulsive behavior. For example, changing cognitive representations of rewards (e.g. making long-term rewards seem more concrete) and/or creating situations of "precommitment" (eliminating the option of changing one's mind later) can reduce the preference for immediate reward seen in delay discounting.
Brain training
Brain training interventions include laboratory-based interventions (e.g. training using tasks like go/no-go) as well as community, family, and school based interventions that are ecologically valid (e.g. teaching techniques for regulating emotions or behaviors) and can be used with individuals with non-clinical levels of impulsivity. Both sorts of interventions are aimed at improving executive functioning and self-control capacities, with different interventions specifically targeting different aspects of executive functioning like inhibitory control, working memory, or attention. Emerging evidence suggests that brain training interventions may succeed in impacting executive function, including inhibitory control. Inhibitory control training in particular has accumulating evidence that it can help individuals resist the temptation to consume high-calorie food and to limit drinking behavior. Some have voiced concerns that the favorable results of studies testing working memory training should be interpreted with caution, claiming that conclusions regarding changes to abilities rest on single tasks, inconsistent use of working memory tasks, no-contact control groups, and subjective measurements of change.
Treatment of specific disorders of impulsivity
Behavioral, psychosocial, and psychopharmacological treatments for disorders involving impulsivity are common.
Psychopharmacological intervention
Psychopharmacological intervention in disorders of impulsivity has shown evidence of positive effects; common pharmacological interventions include the use of stimulant medication, selective serotonin reuptake inhibitors (SSRIs), and other antidepressants. ADHD has a well-established evidence base supporting the use of stimulant medication for the reduction of ADHD symptoms. Pathological gambling has also been studied in drug trials, and there is evidence that gambling is responsive to SSRIs and other antidepressants. Evidence-based pharmacological treatment for trichotillomania is not yet available, with mixed results of studies investigating the use of SSRIs, though cognitive behavioral therapy has shown positive effects. Intermittent explosive disorder is most often treated with mood stabilizers, SSRIs, beta blockers, alpha agonists, and anti-psychotics (all of which have shown positive effects). There is evidence that some pharmacological interventions are efficacious in treating substance-use disorders, though their use can depend on the type of substance that is abused. Pharmacological treatments for substance-use disorders include acamprosate, buprenorphine, disulfiram, LAAM, methadone, and naltrexone.
Behavioral interventions
Behavioral interventions also have a fairly strong evidence base in impulse-control disorders. In ADHD, the behavioral interventions of behavioral parent training, behavioral classroom management, and intensive peer-focused behavioral interventions in recreational settings meet stringent guidelines qualifying them for evidence-based treatment status. In addition, a recent meta-analysis of evidence-based ADHD treatment found organization training to be a well-established treatment method. Empirically validated behavioral treatments for substance use disorder are fairly similar across substance use disorders, and include behavioral couples therapy, CBT, contingency management, motivational enhancement therapy, and relapse prevention. Pyromania and kleptomania are understudied (due in large part to the illegality of the behaviors), though there is some evidence that psychotherapeutic interventions (CBT, short term counseling, day treatment programs) are efficacious in treating pyromania, while kleptomania seems to be best addressed using SSRIs. Additionally, therapies including CBT, family therapy, and social skill training have shown positive effects on explosive aggressive behaviors.
See also
Affect
ADHD
Addiction
Deferred gratification
Drive theory
Emotion
Feeling
Instinct
Impulse control disorder
Sensation seeking
Novelty seeking
Alternative five model of personality
References
Further reading
Evenden, J. L. (21 October 1999). "Varieties of impulsivity". Psychopharmacology. 146 (4): 348–361. doi:10.1007/PL00005481. PMID 10550486. S2CID 5972342.
Hollander, E.; Rosen, J. (March 2000). "Impulsivity". Journal of Psychopharmacology. 14 (2 Suppl 1): S39–S44. doi:10.1177/02698811000142S106. PMID 10888030. S2CID 243171966.
Moeller, F. Gerard; Barratt, Ernest S.; Dougherty, Donald M.; Schmitz, Joy M.; Swann, Alan C. (November 2001). "Psychiatric Aspects of Impulsivity" </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Intraventricular hemorrhage and can you explain it in depth? [/INST] Intraventricular hemorrhage (IVH), also known as intraventricular bleeding, is bleeding into the brain's ventricular system, where the cerebrospinal fluid is produced and through which it circulates toward the subarachnoid space. It can result from physical trauma or from hemorrhagic stroke.
30% of intraventricular hemorrhages (IVH) are primary, confined to the ventricular system and typically caused by intraventricular trauma, aneurysm, vascular malformations, or tumors, particularly of the choroid plexus. However, 70% of IVH are secondary in nature, resulting from an expansion of an existing intraparenchymal or subarachnoid hemorrhage. Intraventricular hemorrhage has been found to occur in 35% of moderate to severe traumatic brain injuries. Thus the hemorrhage usually does not occur without extensive associated damage, and so the outcome is rarely good.
Symptoms
Adults
Symptoms of IVH are similar to those of other intracerebral hemorrhages and include sudden onset of headache, nausea and vomiting, together with an alteration of the mental state and/or level of consciousness. Focal neurological signs are either minimal or absent, but focal and/or generalized seizures may occur. Xanthochromia, yellow-tinged CSF, is the rule.
Infants
Some infants are asymptomatic, while others may present with hard-to-detect abnormalities of consciousness, muscle tone, breathing, eye movements, and body movements.
Causes
Adults
Causes of IVH in adults include physical trauma and hemorrhagic stroke.
Infants
Infants born preterm or with very low birth weight are at high risk of IVH. IVH in the preterm brain usually arises from the germinal matrix, whereas IVH in term infants originates from the choroid plexus. The cause of IVH in premature infants, unlike that in older infants, children or adults, is rarely due to trauma. Instead it is thought to result from changes in perfusion of the delicate cellular structures that are present in the growing brain, augmented by the immaturity of the cerebral circulatory system, which is especially vulnerable to hypoxic ischemic encephalopathy. The lack of blood flow results in cell death and subsequent breakdown of the blood vessel walls, leading to bleeding. While this bleeding can result in further injury, it is itself a marker for injury that has already occurred. Most intraventricular hemorrhages occur in the first 72 hours after birth. The risk is increased with use of extracorporeal membrane oxygenation in preterm infants. Congenital cytomegalovirus infection can be an important cause.
Mechanism
Diagnosis
Diagnosis can be confirmed by the presence of blood inside the ventricles on CT.
Infants
In term and preterm infants with IVH, the amount of bleeding varies. IVH is often described in four grades:
Grade I - bleeding occurs just in the germinal matrix
Grade II - bleeding also occurs inside the ventricles, but they are not enlarged
Grade III - ventricles are enlarged by the accumulated blood
Grade IV - bleeding extends into the brain tissue around the ventricles
Grades I and II are most common, and often there are no further complications. Grades III and IV are the most serious and may result in long-term brain injury to the infant (a minimal sketch of this grading logic follows). After a grade III or IV IVH, blood clots may form which can block the flow of cerebrospinal fluid, leading to increased fluid in the brain (hydrocephalus).
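As a rough illustration, the four-grade scheme above can be expressed as a simple decision rule; the boolean inputs are hypothetical simplifications of imaging findings, which in practice a radiologist grades from cranial ultrasound or CT.

```python
# Minimal sketch of the four-grade IVH scheme described above.
# The boolean findings are hypothetical inputs, not a clinical tool.

def ivh_grade(in_ventricles: bool, ventricles_enlarged: bool,
              parenchymal_extension: bool) -> int:
    """Map simplified imaging findings onto grades I-IV (returned as 1-4)."""
    if parenchymal_extension:
        return 4  # bleeding extends into surrounding brain tissue
    if ventricles_enlarged:
        return 3  # ventricles enlarged by accumulated blood
    if in_ventricles:
        return 2  # intraventricular blood without enlargement
    return 1      # bleeding confined to the germinal matrix

print(ivh_grade(in_ventricles=True, ventricles_enlarged=False,
                parenchymal_extension=False))  # -> 2
```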
Prevention
Head positioning in very preterm infants has been suggested as an approach to prevent germinal matrix haemorrhage; however, further research is required to determine the effectiveness at reducing mortality and the most appropriate positioning technique. Approaches include bed tilting, supine mid-line head positioning, supine head rotation 90 degrees, prone mid-line head positioning, and head tilting.
Treatment
Treatment focuses on monitoring and should be accomplished with inpatient floor service for individuals responsive to commands or neurological ICU observation for those with impaired levels of consciousness. Extra attention should be placed on intracranial pressure (ICP) monitoring via an intraventricular catheter and medications to maintain ICP, blood pressure, and coagulation. In more severe cases an external ventricular drain may be required to maintain ICP and evacuate the hemorrhage, and in extreme cases an open craniotomy may be required. In cases of unilateral IVH with small intraparenchymal hemorrhage, the combined method of stereotaxy and open craniotomy has produced promising results.
Infants
Various therapies have been employed to prevent the high rates of morbidity and mortality, including diuretic therapy, repeated lumbar puncture, streptokinase therapy and a novel combination intervention called DRIFT (drainage, irrigation and fibrinolytic therapy). More research is required, in the form of high quality randomized controlled trials, to determine the safety, dosing, and effectiveness of prophylactic heparin and antithrombin treatment for preterm neonates.
Prognosis
In infants, germinal matrix haemorrhage is associated with cerebral palsy, problems with cognition, and hydrocephalus. With improved technological advances in science and medicine, survival for preterm infants with this type of neurological disorder has improved, and fewer preterm infants with germinal matrix haemorrhage have severe cerebral palsy. An estimated 15% of preterm infants who survive develop cerebral palsy, and 27% of the infants who survive experience moderate to severe neurosensory deficits by the time they reach 18–24 months old. Prognosis is very poor when IVH results from intracerebral hemorrhage related to high blood pressure and is even worse when hydrocephalus follows. It can result in dangerous increases in ICP and can cause potentially fatal brain herniation </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Aging brain and can you explain it in depth? [/INST] Aging is a major risk factor for most common neurodegenerative diseases, including mild cognitive impairment, dementias including Alzheimer's disease, cerebrovascular disease, Parkinson's disease, and Lou Gehrig's disease. While much research has focused on diseases of aging, there are few informative studies on the molecular biology of the aging brain (usually spelled ageing brain in British English) in the absence of neurodegenerative disease or the neuropsychological profile of healthy older adults. However, research suggests that the aging process is associated with several structural, chemical, and functional changes in the brain as well as a host of neurocognitive changes. Recent reports in model organisms suggest that as organisms age, there are distinct changes in the expression of genes at the single neuron level. This page is devoted to reviewing the changes associated with healthy aging.
Structural changes
Aging entails many physical, biological, chemical, and psychological changes, and the brain is no exception to this phenomenon. Conceptual models such as the Scaffolding Theory of Aging and Cognition (STAC), proposed in 2009, have attempted to map these various changes. The STAC model looks at factors like neural changes to the white matter, dopamine depletion, shrinkage, and cortical thinning. CT scans have found that the cerebral ventricles expand as a function of age. More recent MRI studies have reported age-related regional decreases in cerebral volume. Regional volume reduction is not uniform; some brain regions shrink at a rate of up to 1% per year, whereas others remain relatively stable until the end of the life-span. The brain is very complex and is composed of many different areas and types of tissue, or matter. The different functions of different tissues in the brain may be more or less susceptible to age-induced changes. The brain matter can be broadly classified as either grey matter or white matter. Grey matter consists of cell bodies in the cortex and subcortical nuclei, whereas white matter consists of tightly packed myelinated axons connecting the neurons of the cerebral cortex to each other and with the periphery.
Loss of neural circuits and brain plasticity
Brain plasticity refers to the brain's ability to change structure and function. This ties into the common phrase, "if you don't use it, you lose it," which is another way of saying: if you don't use it, your brain will devote less somatotopic space to it. One proposed mechanism for the observed age-related plasticity deficits in animals is age-induced alterations in calcium regulation. Changes in the ability to handle calcium ultimately influence neuronal firing and the ability to propagate action potentials, which in turn affect the ability of the brain to alter its structure or function (i.e. its plastic nature). Due to the complexity of the brain, with all of its structures and functions, it is logical to assume that some areas would be more vulnerable to aging than others. Two circuits worth mentioning here are the hippocampal and neocortical circuits. It has been suggested that age-related cognitive decline is due in part not to neuronal death but to synaptic alterations. Evidence in support of this idea from animal work has also suggested that this cognitive deficit is due to functional and biochemical factors such as changes in enzymatic activity, chemical messengers, or gene expression in cortical circuits.
Thinning of the cortex
Advances in MRI technology have provided the ability to see brain structure in great detail in an easy, non-invasive manner in vivo. Bartzokis et al. noted that there is a decrease in grey matter volume between adulthood and old age, whereas white matter volume was found to increase from age 19–40 and decline after this age. Studies using voxel-based morphometry have identified areas such as the insula and superior parietal gyri as being especially vulnerable to age-related losses in grey matter of older adults. Sowell et al. reported that the first 6 decades of an individual's life were correlated with the most rapid decreases in grey matter density, and this occurred over dorsal, frontal, and parietal lobes on both interhemispheric and lateral brain surfaces. It is also worth noting that areas such as the cingulate gyrus, and the occipital cortex surrounding the calcarine sulcus, appear exempt from this decrease in grey matter density over time. Age effects on grey matter density in the posterior temporal cortex appear more predominantly in the left versus right hemisphere and were confined to posterior language cortices. Certain language functions such as word retrieval and production were found to be located in more anterior language cortices, and deteriorate as a function of age. Sowell et al. also reported that these anterior language cortices were found to mature and decline earlier than the more posterior language cortices. It has also been found that the width of sulci increases not only with age, but also with cognitive decline in the elderly.
Age-related neuronal morphology
There is converging evidence from cognitive neuroscientists around the world that age-induced cognitive deficits may not be due to neuronal loss or cell death, but rather may be the result of small region-specific changes to the morphology of neurons. Studies by Duan et al. have shown that dendritic arbors and dendritic spines of cortical pyramidal neurons decrease in size and/or number in specific regions and layers of human and non-human primate cortex as a result of age (Duan et al., 2003). A 46% decrease in spine number and spine density has been reported in humans older than 50 compared with younger individuals. An electron microscopy study in monkeys reported a 50% loss of spines on the apical dendritic tufts of pyramidal cells in the prefrontal cortex of old animals (27–32 years old) compared with young ones (6–9 years old).
Neurofibrillary tangles
Age-related neuro-pathologies such as Alzheimer's disease, Parkinson's disease, diabetes, hypertension and arteriosclerosis make it difficult to distinguish the normal patterns of aging. One of the important differences between normal aging and pathological aging is the location of neurofibrillary tangles. Neurofibrillary tangles are composed of paired helical filaments (PHF). In normal, non-demented aging, the number of tangles in each affected cell body is relatively low and restricted to the olfactory nucleus, parahippocampal gyrus, amygdala and entorhinal cortex. As the non-demented individual ages, there is a general increase in the density of tangles, but no significant difference in where tangles are found. The other main neurodegenerative contributor commonly found in the brain of patients with AD is amyloid plaques. However, unlike tangles, plaques have not been found to be a consistent feature of normal aging.
Role of oxidative stress
Cognitive impairment has been attributed to oxidative stress, inflammatory reactions and changes in the cerebral microvasculature. The exact impact of each of these mechanisms in affecting cognitive aging is unknown. Oxidative stress is the most controllable risk factor and is the best understood. The online Merriam-Webster Medical Dictionary defines oxidative stress as, "physiological stress on the body that is caused by the cumulative damage done by free radicals inadequately neutralized by antioxidants and that is to be associated with aging." Hence oxidative stress is the damage done to the cells by free radicals that have been released from the oxidation process. In normal aging, the brain undergoes oxidative stress in a multitude of ways.
Compared to other tissues in the body, the brain is deemed unusually sensitive to oxidative damage. Increased oxidative damage has been associated with neurodegenerative diseases, mild cognitive impairment and individual differences in cognition in healthy elderly people. In normal aging, the brain is undergoing oxidative stress in a multitude of ways. The main contributors include protein oxidation, lipid peroxidation and oxidative modifications in nuclear and mitochondrial DNA. Oxidative stress can damage DNA replication and inhibit repair through many complex processes, including telomere shortening in DNA components. Each time a somatic cell replicates, the telomeric DNA component shortens. As telomere length is partly inheritable, there are individual differences in the age of onset of cognitive decline.
DNA damage
At least 25 studies have demonstrated that DNA damage accumulates with age in the mammalian brain. This DNA damage includes the oxidized nucleoside 8-hydroxydeoxyguanosine (8-OHdG), single- and double-strand breaks, DNA-protein crosslinks and malondialdehyde adducts (reviewed in Bernstein et al.). Increasing DNA damage with age has been reported in the brains of the mouse, rat, gerbil, rabbit, dog, and human. Young 4-day-old rats have about 3,000 single-strand breaks and 156 double-strand breaks per neuron, whereas in rats older than 2 years the level of damage increases to about 7,400 single-strand breaks and 600 double-strand breaks per neuron. Lu et al. studied the transcriptional profiles of the human frontal cortex of individuals ranging from 26 to 106 years of age. This led to the identification of a set of genes whose expression was altered after age 40. They further found that the promoter sequences of these particular genes accumulated oxidative DNA damage, including 8-OHdG, with age (see DNA damage theory of aging). They concluded that DNA damage may reduce the expression of selectively vulnerable genes involved in learning, memory and neuronal survival, initiating a pattern of brain aging that starts early in life.
Chemical changes
In addition to the structural changes that the brain incurs with age, the aging process also entails a broad range of biochemical changes. More specifically, neurons communicate with each other via specialized chemical messengers called neurotransmitters. Several studies have identified a number of these neurotransmitters, as well as their receptors, that exhibit a marked alteration in different regions of the brain as part of the normal aging process.
Dopamine
An overwhelming number of studies have reported age-related changes in dopamine synthesis, binding sites, and number of receptors. Studies using positron emission tomography (PET) in living human subjects have shown a significant age-related decline in dopamine synthesis, notably in the striatum and extrastriatal regions (excluding the midbrain). Significant age-related decreases in dopamine receptors D1, D2, and D3 have also been highly reported. A general decrease in D1 and D2 receptors has been shown, and more specifically a decrease of D1 and D2 receptor binding in the caudate nucleus and putamen. A general decrease in D1 receptor density has also been shown to occur with age. Significant age-related declines in dopamine receptors D2 and D3 were detected in the anterior cingulate cortex, frontal cortex, lateral temporal cortex, hippocampus, medial temporal cortex, amygdala, medial thalamus, and lateral thalamus. One study also indicated a significant inverse correlation between dopamine binding in the occipital cortex and age. Postmortem studies also show that the number of D1 and D2 receptors declines with age in both the caudate nucleus and the putamen, although the ratio of these receptors did not show age-related changes. The loss of dopamine with age is thought to be responsible for many neurological symptoms that increase in frequency with age, such as decreased arm swing and increased rigidity. Changes in dopamine levels may also cause age-related changes in cognitive flexibility.
Serotonin
Decreasing levels of different serotonin receptors and the serotonin transporter, 5-HTT, have also been shown to occur with age. Studies conducted using PET methods on humans, in vivo, show that levels of the 5-HT2 receptor in the caudate nucleus, putamen, and frontal cerebral cortex decline with age. A decreased binding capacity of the 5-HT2 receptor in the frontal cortex was also found, as well as a decreased binding capacity of the serotonin transporter, 5-HTT, in the thalamus and the midbrain. Postmortem studies on humans have indicated decreased binding capacities of serotonin and a decrease in the number of S1 receptors in the frontal cortex and hippocampus, as well as a decrease in affinity in the putamen.
Glutamate
Glutamate is another neurotransmitter that tends to decrease with age. Studies have shown older subjects to have lower glutamate concentration in the motor cortex compared to younger subjects. A significant age-related decline, especially in the parietal gray matter, basal ganglia, and to a lesser degree the frontal white matter, has also been noted. Although these levels were studied in the normal human brain, the parietal and basal ganglia regions are often affected in degenerative brain diseases associated with aging, and it has therefore been suggested that brain glutamate may be useful as a marker of brain diseases that are affected by aging.
Neuropsychological changes
Changes in orientation
Orientation is defined as the awareness of self in relation to one's surroundings. Often orientation is examined by distinguishing whether a person has a sense of time, place, and person. Deficits in orientation are one of the most common symptoms of brain disease, hence tests of orientation are included in almost all medical and neuropsychological evaluations. While research has primarily focused on levels of orientation among clinical populations, a small number of studies have examined whether there is a normal decline in orientation among healthy aging adults. Results have been somewhat inconclusive. Some studies suggest that orientation does not decline over the lifespan. For example, in one study 92% of normal elderly adults (65–84 years) presented with perfect or near perfect orientation. However some data suggest that mild changes in orientation may be a normal part of aging. For example, Sweet and colleagues concluded that "older persons with normal, healthy memory may have mild orientation difficulties. In contrast, younger people with normal memory have virtually no orientation problems" (p. 505). So although current research suggests that normal aging is not usually associated with significant declines in orientation, mild difficulties may be a part of normal aging and not necessarily a sign of pathology.
Changes in attention
Many older adults notice a decline in their attentional abilities. Attention is a broad construct that refers to "the cognitive ability that allows us to deal with the inherent processing limitations of the human brain by selecting information for further processing" (p. 334). Since the human brain has limited resources, people use their attention to zone in on specific stimuli and block out others.
If older adults have fewer attentional resources than younger adults, we would expect that when two tasks must be carried out at the same time, older adults' performance will decline more than that of younger adults. However, a large review of studies on cognition and aging suggests that this hypothesis has not been wholly supported. While some studies have found that older adults have a more difficult time encoding and retrieving information when their attention is divided, other studies have not found meaningful differences from younger adults. Similarly, one might expect older adults to do poorly on tasks of sustained attention, which measure the ability to attend to and respond to stimuli for an extended period of time. However, studies suggest that sustained attention shows no decline with age. Results suggest that sustained attention increases in early adulthood and then remains relatively stable, at least through the seventh decade of life. More research is needed on how normal aging impacts attention after age eighty.
It is worth noting that there are factors other than true attentional abilities that might relate to difficulty paying attention. For example, it is possible that sensory deficits impact older adults' attentional abilities. In other words, impaired hearing or vision may make it more difficult for older adults to do well on tasks of visual and verbal attention.
Changes in memory
Many different types of memory have been identified in humans, such as declarative memory (including episodic memory and semantic memory), working memory, spatial memory, and procedural memory. Studies have found that memory functions, more specifically those associated with the medial temporal lobe, are especially vulnerable to age-related decline. A number of studies utilizing a variety of methods such as histological, structural imaging, functional imaging, and receptor binding have supplied converging evidence that the frontal lobes and frontal-striatal dopaminergic pathways are especially affected by age-related processes resulting in memory changes.
Changes in language
Changes in performance on verbal tasks, as well as the location, extent, and signal intensity of BOLD signal changes measured with functional MRI, vary in predictable patterns with age. For example, behavioral changes associated with age include compromised performance on tasks related to word retrieval, comprehension of sentences with high syntactic and/or working memory demands, and production of such sentences.
Genetic changes
Variation in the effects of aging among individuals can be attributed to genetic, health, and environmental factors. As in so many other science disciplines, the nature and nurture debate is an ongoing conflict in the field of cognitive neuroscience. The search for genetic factors has always been an important aspect of trying to understand neuro-pathological processes. Research focused on discovering the genetic component in developing autosomal dominant Alzheimer's disease (AD) has also contributed greatly to the understanding of the genetics behind normal or "non-pathological" aging.
Autosomal dominant is a pattern of inheritance characteristic of some genetic disorders. "Autosomal" means that the gene in question is located on one of the numbered, or non-sex, chromosomes. "Dominant" means that a single copy of the mutated gene (from one parent) is enough to cause the disorder. The human brain shows a decline in function and a change in gene expression with age. This modulation in gene expression may be due to oxidative DNA damage at promoter regions in the genome. Genes that are down-regulated over the age of 40 include:
GluR1 AMPA receptor subunit
NMDA R2A receptor subunit (involved in learning)
Subunits of the GABA-A receptor
Genes involved in long-term potentiation e.g. calmodulin 1 and CAM kinase II alpha.
Calcium signaling genes
Synaptic plasticity genes
Synaptic vesicle release and recycling genes
Genes that are upregulated include:
Genes associated with stress response and DNA repair
Antioxidant defence
Epigenetic age analysis of brain regions
The cerebellum is the youngest brain region (and probably body part) in centenarians according to an epigenetic biomarker of tissue age known as the epigenetic clock: it is about 15 years younger than expected in a centenarian. By contrast, all brain regions and brain cells appear to have roughly the same epigenetic age in subjects who are younger than 80. These findings suggest that the cerebellum is protected from aging effects, which in turn could explain why the cerebellum exhibits fewer neuropathological hallmarks of age-related dementias compared to other brain regions. (A schematic sketch of how such a clock computes an age estimate follows.)
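For intuition, an epigenetic clock of this kind is essentially a penalized linear model over CpG methylation beta-values whose weighted sum, passed through a calibration transform, yields an age estimate. The sketch below is schematic only: the CpG names, weights, and intercept are invented for illustration, and real clocks use hundreds of published coefficients.

```python
# Schematic sketch of a Horvath-style epigenetic clock. The CpG
# identifiers and coefficients are hypothetical placeholders.

import math

WEIGHTS = {"cg_hypothetical_1": 2.1, "cg_hypothetical_2": -1.4}
INTERCEPT = 0.7

def predicted_age(betas: dict) -> float:
    """Estimate tissue age from CpG methylation beta-values (0..1)."""
    s = INTERCEPT + sum(w * betas[cpg] for cpg, w in WEIGHTS.items())
    # Horvath-style inverse calibration: log-linear in childhood,
    # linear in adulthood (adult_age = 20 in the original work).
    adult_age = 20
    return (1 + adult_age) * math.exp(s) - 1 if s < 0 else (1 + adult_age) * s + adult_age

sample = {"cg_hypothetical_1": 0.6, "cg_hypothetical_2": 0.3}
print(round(predicted_age(sample), 1))  # -> 52.3 with these made-up weights
```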
Delaying the effects of aging
The process of aging may be inevitable; however, one may potentially delay the effects and severity of this progression.
While there is no consensus on efficacy, the following are reported as delaying cognitive decline:
High level of education
Physical exercise
Staying intellectually engaged, i.e. reading and mental activities (such as crossword puzzles)
Maintaining social and friendship networks
Maintaining a healthy diet, including omega-3 fatty acids and protective antioxidants.
"Super Agers"
Longitudinal research studies have recently conducted genetic analyses of centenarians and their offspring to identify biomarkers that act as protective factors against the negative effects of aging. In particular, the cholesteryl ester transfer protein (CETP) gene is linked to prevention of cognitive decline and Alzheimer's disease. Specifically, valine CETP homozygotes, but not heterozygotes, experienced 51% less memory decline relative to a reference group after adjusting for demographic factors and APOE status.
Cognitive reserve
The ability of an individual to demonstrate no cognitive signs of aging despite an aging brain is called cognitive reserve. This hypothesis suggests that two patients might have the same brain pathology, with one person experiencing noticeable clinical symptoms, while the other continues to function relatively normally. Studies of cognitive reserve explore the specific biological, genetic and environmental differences which make one person susceptible to cognitive decline, and allow another to age more gracefully.
Nun Study
A study funded by the National Institute on Aging followed a group of 678 Roman Catholic sisters and recorded the effects of aging. The researchers used autobiographical essays collected as the nuns joined their sisterhood. Findings suggest that early idea density, defined by the number of ideas expressed and the use of complex prepositions in these essays, was a significant predictor of lower risk for developing Alzheimer's disease in old age. Lower idea density was found to be significantly associated with lower brain weight, higher brain atrophy, and more neurofibrillary tangles.
Hypothalamus inflammation and GnRH
A study published May 1, 2013, suggested that inflammation of the hypothalamus may be connected to overall bodily aging. The researchers focused on the activation of the protein complex NF-κB in mice, which showed increased activation as the mice aged. This activation not only affects aging, but also affects a hormone known as GnRH, which showed anti-aging properties when injected into mice outside the hypothalamus, while causing the opposite effect when injected into the hypothalamus. It will be some time before this can be applied to humans in a meaningful way, as more studies on this pathway are necessary to understand the mechanics of GnRH's anti-aging properties.
Inflammation
A study found that myeloid cells are drivers of a maladaptive inflammation element of brain-ageing in mice and that this can be reversed or prevented via inhibition of their EP2 signalling.
Aging disparities
For certain demographics, the effects of normal cognitive aging are especially pronounced. Differences in cognitive aging might be tied to reduced or absent access to medical care, as a result of which these groups suffer disproportionately from negative health outcomes. As the global population grows, diversifies, and grays, there is an increasing need to understand these inequities.
Race
African Americans
In the United States, Black and African American demographics disproportionately experience metabolic dysfunction with age. This has many downstream effects, but the most prominent of these is the toll on cardiovascular health. Metabolite profiles of the healthy aging index - a score that assesses neurocognitive function, among other correlates of health through the years - are associated with cardiovascular disease. Healthy cardiovascular function is critical for maintaining neurocognitive efficiency into old age. Attention, verbal learning, and cognitive set ability are related to diastolic blood pressure, triglyceride levels, and HDL cholesterol levels, respectively.
Latinos
The Latino demographic is most likely to develop metabolic syndrome - the combination of high blood pressure, high blood sugar, elevated triglyceride levels, and abdominal obesity - which not only increases the risk of cardiac events and type II diabetes but also is associated with lower neurocognitive function during midlife. Among different Latin heritages, frequency of the dementia-predisposing apoE4 allele was highest for Caribbean Latinos (Cubans, Dominicans, and Puerto Ricans) and lowest among mainland Latinos (Mexicans, Central Americans, and South Americans). Conversely, frequency of the neuroprotective apoE2 allele was highest for Caribbean Latinos and lowest for those of mainland heritage.
Indigenous Peoples
Indigenous populations are often understudied in research. Reviews of current literature studying natives in Australia, Brazil, Canada, and the United States from participants aged 45 to 94 years old reveal varied prevalence rates for cognitive impairment not related to dementia, from 4.4% to 17.7%. These results can be interpreted in the context of culturally biased neurocognitive tests, preexisting health conditions, poor access to healthcare, lower educational attainment, and/or old age.
Sex
Women
Compared to their male counterparts, women's scores on the Mini Mental State Exam (MMSE) tend to decline at slightly faster rates with age. Males with mild cognitive impairment (MCI) tend to show more microstructural damage than females with MCI, but seem to have a greater cognitive reserve due to larger absolute brain size and neuronal density. As a result, women tend to manifest symptoms of cognitive decline at lower thresholds than men do. This effect seems to be moderated by educational attainment - higher education is associated with later diagnosis of mild cognitive impairment as neuropathological load increases.
Transgender Individuals
LGBT elders face numerous disparities as they approach end-of-life </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Or and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Sandfly fever and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Cephalohematoma and can you explain it in depth? [/INST] A cephalohaematoma is a hemorrhage of blood between the skull and the periosteum in a person of any age, including a newborn baby, secondary to rupture of blood vessels crossing the periosteum. Because the swelling is subperiosteal, its boundaries are limited by the individual bones, in contrast to a caput succedaneum.
Symptoms and signs
Swelling appears 2–3 days after birth. If severe, the child may develop jaundice, anemia or hypotension. In some cases it may be an indication of a linear skull fracture or carry a risk of infection leading to osteomyelitis or meningitis. The swelling of a cephalohematoma takes weeks to resolve as the blood clot is slowly absorbed from the periphery towards the centre. In time the swelling hardens (calcification), leaving a relatively softer centre so that it appears as a depressed fracture </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Retrograde ejaculation and can you explain it in depth? [/INST] Retrograde ejaculation occurs when semen which would be ejaculated via the urethra is redirected to the urinary bladder. Normally, the sphincter of the bladder contracts before ejaculation, sealing the bladder, which, besides inhibiting the release of urine, also prevents a reflux of seminal fluids into the male bladder during ejaculation. The semen is forced to exit via the urethra, the path of least resistance. When the bladder sphincter does not function properly, retrograde ejaculation may occur. It can also be induced deliberately by a male as a primitive form of male birth control (known as coitus saxonicus) or as part of certain alternative medicine practices. The retrograde-ejaculated semen, which goes into the bladder, is excreted with the next urination.
Signs and symptoms
Retrograde ejaculation is sometimes referred to as a "dry orgasm." Retrograde ejaculation is one symptom of male infertility. A man may notice during masturbation that despite the occurrence of orgasm, no accompanying ejaculation was produced. Another underlying cause for this phenomenon may be ejaculatory duct obstruction.
During a male orgasm, sperm are released from the epididymis and travel via small tubes called the vas deferens. The sperm mix with seminal fluid in the seminal vesicles, prostate fluid from the prostate gland, and lubricants from the bulbourethral gland. During climax, muscles at the end of the bladder neck tighten to prevent retrograde flow of semen. In retrograde ejaculation, these bladder neck muscles are either very weak or the nerves controlling the muscles have been damaged.
Causes
A malfunctioning bladder sphincter, leading to retrograde ejaculation, may be a result either of:
Autonomic nervous system dysfunction (dysautonomia).
Operation on the prostate. It is a common complication of transurethral resection of the prostate, a procedure in which prostate tissue is removed, slice by slice, through a resectoscope passed along the urethra. It can also be caused by a retroperitoneal lymph node dissection for testicular cancer if nerve pathways to the bladder sphincter are damaged, with the resulting retrograde ejaculation being either temporary or permanent. Modern nerve-sparing techniques seek to reduce this risk; however, it may also occur as the result of Green Light Laser prostate surgery.
Surgery on the bladder neck accounted for about ten percent of the cases of retrograde ejaculation or anejaculation reported in a literature review. Retrograde ejaculation is a common side effect of medications, such as tamsulosin, that are used to relax the muscles of the urinary tract, treating conditions such as benign prostatic hyperplasia. Relaxing the bladder sphincter muscle increases the likelihood of retrograde ejaculation.
The medications that most commonly cause it are antidepressants and antipsychotics, as well as NRIs such as atomoxetine; patients experiencing this phenomenon tend to quit the medications. Retrograde ejaculation can also be a complication of diabetes, especially in cases of diabetics with long-term poor blood sugar control. This is due to neuropathy of the bladder sphincter. Post-pubertal males (aged 17 to 20 years) who experience repeated episodes of retrograde ejaculation are often diagnosed with urethral stricture disease shortly after the initial complaint arises. It is currently not known whether a congenital malformation of the bulbous urethra is responsible, or if pressure applied to the base of the penis or perineum immediately preceding ejaculatory inevitability may have inadvertently damaged the urethra. This damage is most often seen within 0.5 cm of the ejaculatory duct (usually distal to the duct).
Conditions which can affect bladder neck muscle
Medications used to treat high blood pressure, benign prostatic hyperplasia, or mood disorders; surgery on the prostate; and nerve injury (which may occur in multiple sclerosis, spinal cord injury or diabetes).
Diagnosis
Diagnosis is usually determined after a medical professional performs a urinalysis on a urine specimen that is obtained shortly after ejaculation. In cases of retrograde ejaculation, the specimen will contain an abnormal level of sperm. Anejaculation, especially orgasmic anejaculation, can often be confused with retrograde ejaculation, and the two share some fundamental aspects of their cause. Urinalysis is used to distinguish between them.
Tests
The genitals are physically examined to ensure that there are no anatomical problems. The urine will be tested for the presence of semen. If there are no sperm in the urine, it may be due to damage to the prostate as a result of surgery or prior radiation therapy.
Treatments
The treatment depends on the cause. Medications may work for retrograde ejaculation but only in a few cases. Surgery rarely is the first option for retrograde ejaculation and the results have proven to be inconsistent. Medications do not help retrograde ejaculation if there has been permanent damage to the prostate or the testes from radiation. Medications also do not help if prostate surgery has resulted in damage to the muscles or nerves. Medications only work if there has been mild nerve damage caused by diabetes, multiple sclerosis, or mild spinal cord injury.
Medications
Tricyclic antidepressants like imipramine.
Antihistamines like chlorphenamine.
Decongestants like ephedrine and phenylephrine. These medications tighten the bladder neck muscles and prevent semen from going backwards into the bladder. However, the medications do have many side effects, and they have to be taken at least 1–2 hours prior to sexual intercourse. In many cases, the medications fail to work at the right time because most men are not able to predict when they will have an orgasm.
Infertility treatments
If a couple is experiencing infertility as a result of retrograde ejaculation and medications are not helping, the semen may be collected by a special procedure. First, the patient alkalinizes his urine by intake of sodium bicarbonate (3 g dissolved in water in the evening before bed, and then another dose after complete bladder emptying right before going to the laboratory) </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Flushing and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Agenesis and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Melasma and can you explain it in depth? [/INST] Melasma (also known as chloasma faciei, or the mask of pregnancy when present in pregnant women) is a tan or dark skin discoloration. Melasma is thought to be caused by sun exposure, genetic predisposition, hormone changes, and skin irritation. Although it can affect anyone, it is particularly common in women, especially pregnant women and those who are taking oral or patch contraceptives or hormone replacement therapy medications.
Signs and symptoms
The symptoms of melasma are dark, irregular, well-demarcated, hyperpigmented macules to patches. These patches often develop gradually over time. Melasma does not cause any other symptoms beyond the cosmetic discoloration. Patches can vary in size from 0.5 cm to larger than 10 cm depending on the person. Its location can be categorized as centrofacial, malar, or mandibular. The most common is centrofacial, in which patches appear on the cheeks, nose, upper lip, forehead, and chin. The mandibular category accounts for patches on the bilateral rami, while the malar location accounts for patches only on the nose and cheeks.
Cause
The exact cause of melasma is unknown. Melasma is thought to result from the stimulation of melanocytes (cells in the basal layer of the epidermis, which transfer the pigment melanin to the keratinocytes of the skin) when the skin is exposed to ultraviolet light from the sun. Small amounts of sun exposure can make melasma return to the skin after it has faded, which is why people with melasma often get it again and again, particularly in the summer. Pregnant women often get melasma, or chloasma, known as the mask of pregnancy. Birth-control pills and hormone replacement therapy also can trigger melasma. The discoloration usually disappears spontaneously over a period of several months after giving birth or stopping the oral contraceptives or hormone treatment. Genetic predisposition is also a major factor in determining whether someone will develop melasma. People with Fitzpatrick skin type III or greater of African, Asian, or Hispanic descent are at a much higher risk than others. In addition, women with a light brown skin type who are living in regions with intense sun exposure are particularly susceptible to developing this condition. The incidence of melasma also increases in patients with thyroid disease. It is thought that the overproduction of melanocyte-stimulating hormone brought on by stress can cause outbreaks of this condition. Other rare causes of melasma include allergic reaction to medications and cosmetics.
Addisons disease
Melasma suprarenale (Latin - above the kidneys) is a symptom of Addisons disease, particularly when caused by pressure or minor injury to the skin, as discovered by FJJ Schmidt of Rotterdam in 1859.
Diagnosis
Types
The two different kinds of melasma are epidermal and dermal.
Epidermal melasma results from melanin pigment that is elevated in the suprabasal layers of the epidermis. Dermal melasma occurs when the dermal macrophages have an elevated melanin level. Melasma is usually diagnosed visually or with assistance of a Wood's lamp (340–400 nm wavelength). Under the Wood's lamp, excess melanin in the epidermis can be distinguished from that of the dermis by how much the pigmentation is accentuated: epidermal pigment shows enhanced contrast (appears darker) under the Wood's lamp, whereas dermal pigment does not enhance.
Severity
The severity of facial melasma may be assessed by colorimetry, mexametry, and the melasma area and severity index (MASI) score; a worked sketch of the MASI calculation follows.
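As a rough illustration of how the MASI score is computed, assuming its commonly published form: four facial regions (forehead, right malar, left malar, chin) weighted 30%/30%/30%/10%, each rated for area of involvement A (0–6) and for darkness D and homogeneity H (each 0–4), combined as the weighted sum of A × (D + H), for a maximum of 48. Treat this as illustrative; scoring conventions vary and some modified versions drop the homogeneity term.

```python
# Minimal sketch of a MASI calculation under the assumptions stated
# above. The example ratings are hypothetical.

REGION_WEIGHTS = {"forehead": 0.3, "right_malar": 0.3,
                  "left_malar": 0.3, "chin": 0.1}

def masi(scores: dict) -> float:
    """scores maps region -> (A, D, H) ratings; returns the MASI total."""
    total = 0.0
    for region, weight in REGION_WEIGHTS.items():
        a, d, h = scores[region]
        total += weight * a * (d + h)
    return total

example = {"forehead": (2, 2, 1), "right_malar": (3, 2, 2),
           "left_malar": (3, 2, 2), "chin": (1, 1, 1)}
print(masi(example))  # 0.3*2*3 + 0.3*3*4 + 0.3*3*4 + 0.1*1*2 = 9.2
```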
Differential diagnoses
Melasma should be differentiated from freckles, solar lentigo, toxic melanoderma, Riehl's melanosis, post-inflammatory hyperpigmentation, friction melanosis, ochronosis (endogenous and exogenous), and cutaneous lupus erythematosus. Additionally, it should not be confused with phytophotodermatosis, pellagra, endogenous phototoxicity, nevus of Ota, café au lait macules, seborrheic keratosis, Civatte's poikiloderma, acquired bilateral nevus of Ota-like macules (Hori's nevus), periorbital hyperpigmentation, erythrose pigmentaire péribuccale of Brocq, erythromelanosis follicularis faciei, facial acanthosis nigricans, and actinic lichen planus. Cases of drug-induced pigmentation have also been reported, caused by amiodarone or by hydroquinone-induced exogenous ochronosis (see ochronosis treatment).
Treatment
Assessment by a dermatologist can help guide treatment. Treatments to hasten the fading of the discolored patches include:
Topical depigmenting agents, such as hydroquinone (HQ) either in over-the-counter (OTC - 2%) or prescription (4%) strength </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Pneumocystosis and can you explain it in depth? [/INST] Pneumocystosis is a fungal infection that most often presents as Pneumocystis pneumonia in people with HIV/AIDS or poor immunity. It usually causes cough, difficulty breathing and fever, and can lead to respiratory failure. Involvement outside the lungs is rare but can occur as a disseminated type affecting lymph nodes, spleen, liver, bone marrow, eyes, kidneys, thyroid, gastrointestinal tract or other organs. If occurring in the skin, it usually presents as nodular growths in the ear canals or underarms. It is caused by Pneumocystis jirovecii, a fungus which is usually breathed in and found in the lungs of healthy people without causing disease, until the person's immune system becomes weakened. Diagnosis is by identifying the organism from a sample of fluid from affected lungs or a biopsy. Prevention in high-risk people, and treatment in those affected, is usually with trimethoprim/sulfamethoxazole (co-trimoxazole). The prevalence is unknown. Less than 3% of cases do not involve the lungs. The first cases of pneumocystosis affecting lungs were described in premature infants in Europe following the Second World War.
Signs and symptoms
Pneumocystosis is generally an infection in the lungs. Involvement outside the lungs is rare but can occur as a disseminated type affecting lymph nodes, bone marrow, liver or spleen. It may also affect the skin, eyes, kidneys, thyroid, heart, adrenals and gastrointestinal tract.
Lungs
When the lungs are affected there is usually a dry cough, difficulty breathing and fever, usually present for longer than four weeks. There may be chest pain, shivering or tiredness. The oxygen saturation is low. The lungs may fail to function.
Eyes
Pneumocystosis in the eyes may appear as a single plaque or multiple (up to 50) yellow-white plaques in the eye's choroid layer or just beneath the retina. Vision is usually not affected and it is typically found by chance.
Skin
If occurring in the skin, pneumocystosis most often presents as nodular growths in the ear canals of a person with HIV/AIDS. There may be fluid in the ear. Skin involvement may appear outside the ear, usually on the palms, soles or underarms, as a rash or as small bumps with a central dip. It can occur on the face as brownish bumps and plaques. The bumps may be tender and then ulcerate. Infection in the ear may result in a perforated ear drum or destruction of the mastoid bone. The nerves in the head may be affected.
Cause
Pneumocystosis is caused by Pneumocystis jirovecii, a fungus which is generally found in the lungs of healthy people, without causing disease until the person's immune system becomes weakened.
Risk factors
Pneumocystosis occurs predominantly in people with HIV/AIDS. Other risk factors include chronic lung disease, cancer, autoimmune diseases, organ transplant, or taking corticosteroids.
Diagnosis
Diagnosis of Pneumocystis pneumonia is by identifying the organism in a sample of sputum, fluid from affected lungs, or a biopsy. A chest X-ray of affected lungs shows widespread shadowing in both lungs, with a "bat-wing" pattern and ground-glass appearance. Giemsa or silver stains can be used to identify the organism, as well as direct immunofluorescence of infected cells. Diagnosis in the eye involves fundoscopy </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Bladder exstrophy and can you explain it in depth? [/INST] Bladder exstrophy is a congenital anomaly that exists along the spectrum of the exstrophy-epispadias complex, and most notably involves protrusion of the urinary bladder through a defect in the abdominal wall. Its presentation is variable, often including abnormalities of the bony pelvis, pelvic floor, and genitalia. The underlying embryologic mechanism leading to bladder exstrophy is unknown, though it is thought to be in part due to failed reinforcement of the cloacal membrane by underlying mesoderm.
Exstrophy means the inversion of a hollow organ.
Signs and symptoms
The classic manifestation of bladder exstrophy presents with:
A defect in the abdominal wall occupied by both the exstrophied bladder and a portion of the urethra
A flattened puborectal sling
Separation of the pubic symphysis
Shortening of the pubic rami
External rotation of the pelvis.
Females frequently have a displaced and narrowed vaginal orifice, a bifid clitoris, and divergent labia.
Cause
The cause is not yet clinically established but is thought to be in part due to failed reinforcement of the cloacal membrane by underlying mesoderm.
Diagnosis
In a small retrospective study of 25 pregnancies, five factors were found to be strongly associated with a prenatal diagnosis of bladder exstrophy:
Inability to visualize the bladder on ultrasound
A lower abdominal bulge
A small penis with anteriorly displaced scrotum
A low set umbilical insertion
Abnormal widening of the iliac crests
While a diagnosis of bladder exstrophy was made retrospectively in a majority of pregnancies, in only three cases was a prenatal diagnosis made.
Management
The extreme rarity of the disease limits the surgical opportunities to practice the complex closure required in these patients. For this reason, patients have the best outcomes when the bladder closures are performed at high-volume centers where surgical and nursing teams have extensive experience in caring for the disease. The highest-volume center in the United States, and the world, is the Johns Hopkins Hospital in Baltimore, Maryland; they have seen over 1300 exstrophy patients in the past 50 years. Upon delivery, the exposed bladder is irrigated and a non-adherent film is placed to prevent as much contact with the external environment as possible. In the event the child was not born at a medical center with an appropriate exstrophy support team, transfer will likely follow. Upon transfer, or for those infants born at a medical center able to care for bladder exstrophy, imaging may take place in the first few hours of life prior to the child undergoing surgery. Primary (immediate) closure is indicated only in those patients with a bladder of appropriate size, elasticity, and contractility, as those patients are most likely to develop a bladder of adequate capacity after early surgical intervention. Conditions that are absolute contraindications despite bladder adequacy include duplication of the penis or scrotum and significant bilateral hydronephrosis.
Surgery
Modern therapy is aimed at surgical reconstruction of the bladder and genitalia. Both males and females are born with this anomaly. Treatment is similar.
In males, treatments have been:
In the modern staged repair of exstrophy (MSRE) the initial step is closure of the abdominal wall, often requiring a pelvic osteotomy. This leaves the patient with penile epispadias and urinary incontinence. At approximately 2–3 years of age, the patient then undergoes repair of the epispadias after testosterone stimulation </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Myocardial infarction and can you explain it in depth? [/INST] A myocardial infarction (MI), commonly known as a heart attack, occurs when blood flow decreases or stops in a coronary artery of the heart, causing damage to the heart muscle. The most common symptom is chest pain or discomfort which may travel into the shoulder, arm, back, neck or jaw. Often it occurs in the center or left side of the chest and lasts for more than a few minutes. The discomfort may occasionally feel like heartburn. Other symptoms may include shortness of breath, nausea, feeling faint, a cold sweat or feeling tired. About 30% of people have atypical symptoms. Women more often present without chest pain and instead have neck pain, arm pain or feel tired. Among those over 75 years old, about 5% have had an MI with little or no history of symptoms. An MI may cause heart failure, an irregular heartbeat, cardiogenic shock or cardiac arrest. Most MIs occur due to coronary artery disease. Risk factors include high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet and excessive alcohol intake. The complete blockage of a coronary artery caused by a rupture of an atherosclerotic plaque is usually the underlying mechanism of an MI. MIs are less commonly caused by coronary artery spasms, which may be due to cocaine, significant emotional stress (commonly known as Takotsubo syndrome or broken heart syndrome) and extreme cold, among others. A number of tests are useful to help with diagnosis, including electrocardiograms (ECGs), blood tests and coronary angiography. An ECG, which is a recording of the heart's electrical activity, may confirm an ST elevation MI (STEMI), if ST elevation is present. Commonly used blood tests include troponin and less often creatine kinase MB. Treatment of an MI is time-critical. Aspirin is an appropriate immediate treatment for a suspected MI. Nitroglycerin or opioids may be used to help with chest pain; however, they do not improve overall outcomes. Supplemental oxygen is recommended in those with low oxygen levels or shortness of breath. In a STEMI, treatments attempt to restore blood flow to the heart and include percutaneous coronary intervention (PCI), where the arteries are pushed open and may be stented, or thrombolysis, where the blockage is removed using medications. People who have a non-ST elevation myocardial infarction (NSTEMI) are often managed with the blood thinner heparin, with the additional use of PCI in those at high risk. In people with blockages of multiple coronary arteries and diabetes, coronary artery bypass surgery (CABG) may be recommended rather than angioplasty. After an MI, lifestyle modifications, along with long-term treatment with aspirin, beta blockers and statins, are typically recommended. Worldwide, about 15.9 million myocardial infarctions occurred in 2015. More than 3 million people had an ST elevation MI, and more than 4 million had an NSTEMI. STEMIs occur about twice as often in men as in women. About one million people have an MI each year in the United States. In the developed world, the risk of death in those who have had a STEMI is about 10%. Rates of MI for a given age have decreased globally between 1990 and 2010.
In 2011, an MI was one of the top five most expensive conditions during inpatient hospitalizations in the US, with a cost of about $11.5 billion for 612,000 hospital stays.
Terminology
Myocardial infarction (MI) refers to tissue death (infarction) of the heart muscle (myocardium) caused by ischemia, the lack of oxygen delivery to myocardial tissue. It is a type of acute coronary syndrome, which describes a sudden or short-term change in symptoms related to blood flow to the heart. Unlike the other type of acute coronary syndrome, unstable angina, a myocardial infarction occurs when there is cell death, which can be detected by a blood test measuring biomarkers (the cardiac protein troponin). When there is evidence of an MI, it may be classified as an ST elevation myocardial infarction (STEMI) or non-ST elevation myocardial infarction (NSTEMI) based on the results of an ECG. The phrase "heart attack" is often used non-specifically to refer to myocardial infarction. An MI is different from—but can cause—cardiac arrest, where the heart is not contracting at all or so poorly that all vital organs cease to function, which may lead to death. It is also distinct from heart failure, in which the pumping action of the heart is impaired. However, an MI may lead to heart failure.
Signs and symptoms
Chest pain that may or may not radiate to other parts of the body is the most typical and significant symptom of myocardial infarction. It might be accompanied by other symptoms such as sweating.
Pain
Chest pain is one of the most common symptoms of acute myocardial infarction and is often described as a sensation of tightness, pressure, or squeezing. Pain radiates most often to the left arm, but may also radiate to the lower jaw, neck, right arm, back, and upper abdomen. The pain most suggestive of an acute MI, with the highest likelihood ratio, is pain radiating to the right arm and shoulder. Similarly, chest pain similar to a previous heart attack is also suggestive. The pain associated with MI is usually diffuse, does not change with position, and lasts for more than 20 minutes. It might be described as pressure, tightness, a knifelike, tearing, or burning sensation (all of which can also occur in other diseases). It could be felt as unexplained anxiety, or pain might be absent altogether. Levine's sign, in which a person localizes the chest pain by clenching one or both fists over their sternum, has classically been thought to be predictive of cardiac chest pain, although a prospective observational study showed it had a poor positive predictive value. Typically, chest pain because of ischemia, be it unstable angina or myocardial infarction, lessens with the use of nitroglycerin, but nitroglycerin may also relieve chest pain arising from non-cardiac causes.
Other
Chest pain may be accompanied by sweating, nausea or vomiting, and fainting, and these symptoms may also occur without any pain at all. In women, the most common symptoms of myocardial infarction include shortness of breath, weakness, and fatigue. Women are more likely to have unusual or unexplained tiredness and nausea or vomiting as symptoms. Women having heart attacks are more likely to have palpitations, back pain, labored breathing, vomiting, and left arm pain than men, although the studies showing these differences had high variability. Women are less likely to report chest pain during a heart attack and more likely to report nausea, jaw pain, neck pain, cough, and fatigue, although these findings are inconsistent across studies. Women with heart attacks also had more indigestion, dizziness, loss of appetite, and loss of consciousness. Shortness of breath is a common, and sometimes the only, symptom, occurring when damage to the heart limits the output of the left ventricle, with breathlessness arising either from low oxygen in the blood, or pulmonary edema. Other less common symptoms include weakness, light-headedness, palpitations, and abnormalities in heart rate or blood pressure. These symptoms are likely induced by a massive surge of catecholamines from the sympathetic nervous system, which occurs in response to pain and, where present, low blood pressure. Loss of consciousness due to inadequate blood flow to the brain and cardiogenic shock, and sudden death, frequently due to the development of ventricular fibrillation, can occur in myocardial infarctions. Cardiac arrest, and atypical symptoms such as palpitations, occur more frequently in women, the elderly, those with diabetes, in people who have just had surgery, and in critically ill patients.
Absence
"Silent" myocardial infarctions can happen without any symptoms at all. These cases can be discovered later on electrocardiograms, using blood enzyme tests, or at autopsy after a person has died. Such silent myocardial infarctions represent between 22 and 64% of all infarctions, and are more common in the elderly, in those with diabetes mellitus and after heart transplantation. In people with diabetes, differences in pain threshold, autonomic neuropathy, and psychological factors have been cited as possible explanations for the lack of symptoms. In heart transplantation, the donor heart is not fully innervated by the nervous system of the recipient.
Risk factors
The most prominent risk factors for myocardial infarction are older age, actively smoking, high blood pressure, diabetes mellitus, and abnormal total cholesterol and high-density lipoprotein levels. Many risk factors of myocardial infarction are shared with coronary artery disease, the primary cause of myocardial infarction, with other risk factors including male sex, low levels of physical activity, a past family history, obesity, and alcohol use. Risk factors for myocardial disease are often included in risk factor stratification scores, such as the Framingham Risk Score. At any given age, men are more at risk than women for the development of cardiovascular disease. High levels of blood cholesterol are a known risk factor, particularly high low-density lipoprotein, low high-density lipoprotein, and high triglycerides. Many risk factors for myocardial infarction are potentially modifiable, with the most important being tobacco smoking (including secondhand smoke). Smoking appears to be the cause of about 36% and obesity the cause of 20% of coronary artery disease. Lack of physical activity has been linked to 7–12% of cases. Less common causes include stress-related causes such as job stress, which accounts for about 3% of cases, and chronic high stress levels.
Diet
There is varying evidence about the importance of saturated fat in the development of myocardial infarctions. Eating polyunsaturated fat instead of saturated fats has been shown in studies to be associated with a decreased risk of myocardial infarction, while other studies find little evidence that reducing dietary saturated fat or increasing polyunsaturated fat intake affects heart attack risk. Dietary cholesterol does not appear to have a significant effect on blood cholesterol and thus recommendations about its consumption may not be needed. Trans fats do appear to increase risk. Acute and prolonged intake of high quantities of alcoholic drinks (3–4 or more daily) increases the risk of a heart attack.
Genetics
Family history of ischemic heart disease or MI, particularly if one has a male first-degree relative (father, brother) who had a myocardial infarction before age 55 years, or a female first-degree relative (mother, sister) less than age 65, increases a person's risk of MI. Genome-wide association studies have found 27 genetic variants that are associated with an increased risk of myocardial infarction. The strongest association of MI has been found with chromosome 9 on the short arm p at locus 21, which contains genes CDKN2A and 2B, although the single nucleotide polymorphisms that are implicated are within a non-coding region. The majority of these variants are in regions that have not been previously implicated in coronary artery disease. The following genes have an association with MI: PCSK9, SORT1, MIA3, WDR12, MRAS, PHACTR1, LPA, TCF21, MTHFDSL, ZC3HC1, CDKN2A, 2B, ABO, PDGF0, APOA5, MNF1ASM283, COL4A1, HHIPC1, SMAD3, ADAMTS7, RAS1, SMG6, SNF8, LDLR, SLC5A3, MRPS6, KCNE2.
Other
The risk of having a myocardial infarction increases with older age, low physical activity, and low socioeconomic status. Heart attacks appear to occur more commonly in the morning hours, especially between 6 AM and noon. Evidence suggests that heart attacks are at least three times more likely to occur in the morning than in the late evening. Shift work is also associated with a higher risk of MI, and one analysis has found an increase in heart attacks immediately following the start of daylight saving time. Women who use combined oral contraceptive pills have a modestly increased risk of myocardial infarction, especially in the presence of other risk factors. The use of non-steroidal anti-inflammatory drugs (NSAIDs), even for as short as a week, increases risk. Endometriosis in women under the age of 40 is an identified risk factor. Air pollution is also an important modifiable risk. Short-term exposure to air pollution such as carbon monoxide, nitrogen dioxide, and sulfur dioxide (but not ozone) has been associated with MI and other acute cardiovascular events. For sudden cardiac deaths, every increment of 30 units in the Pollutant Standards Index correlated with an 8% increased risk of out-of-hospital cardiac arrest on the day of exposure. Extremes of temperature are also associated. A number of acute and chronic infections, including Chlamydophila pneumoniae, influenza, Helicobacter pylori, and Porphyromonas gingivalis among others, have been linked to atherosclerosis and myocardial infarction. As of 2013, however, there is no evidence of benefit from antibiotics or vaccination, calling the association into question. Myocardial infarction can also occur as a late consequence of Kawasaki disease. Calcium deposits in the coronary arteries can be detected with CT scans. Calcium seen in coronary arteries can provide predictive information beyond that of classical risk factors. High blood levels of the amino acid homocysteine are associated with premature atherosclerosis; whether elevated homocysteine in the normal range is causal is controversial. In people without evident coronary artery disease, possible causes for the myocardial infarction are coronary spasm or coronary artery dissection.
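The Pollutant Standards Index figure above lends itself to a small worked calculation. The sketch below extrapolates the reported 8% increase per 30-unit increment to other increments by assuming the effect compounds multiplicatively; the function name and that scaling assumption are illustrative inventions, not part of the cited finding.

```python
def psi_relative_risk(delta_psi: float) -> float:
    """Illustrative relative risk of same-day out-of-hospital cardiac
    arrest for a rise of delta_psi units in the Pollutant Standards
    Index, assuming the reported 8% increase per 30-unit increment
    compounds multiplicatively (an extrapolation, not a cited result)."""
    return 1.08 ** (delta_psi / 30)

# A 30-unit rise gives 1.08; a 60-unit rise would imply roughly 1.17.
print(round(psi_relative_risk(60), 3))  # 1.166
```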
Mechanism
Atherosclerosis
The most common cause of a myocardial infarction is the rupture of an atherosclerotic plaque on an artery supplying heart muscle. Plaques can become unstable, rupture, and additionally promote the formation of a blood clot that blocks the artery; this can occur in minutes. Blockage of an artery can lead to death of the tissue supplied by that artery. Atherosclerotic plaques are often present for decades before they result in symptoms. The gradual buildup of cholesterol and fibrous tissue in plaques in the wall of the coronary arteries or other arteries, typically over decades, is termed atherosclerosis. Atherosclerosis is characterized by progressive inflammation of the walls of the arteries. Inflammatory cells, particularly macrophages, move into affected arterial walls. Over time, they become laden with cholesterol products, particularly LDL, and become foam cells. A cholesterol core forms as foam cells die. In response to growth factors secreted by macrophages, smooth muscle and other cells move into the plaque and act to stabilize it. A stable plaque may have a thick fibrous cap with calcification. If there is ongoing inflammation, the cap may be thin or ulcerate. Exposed to the pressure associated with blood flow, plaques, especially those with a thin lining, may rupture and trigger the formation of a blood clot (thrombus). Cholesterol crystals have been associated with plaque rupture through mechanical injury and inflammation.
Other causes
Atherosclerotic disease is not the only cause of myocardial infarction, but it may exacerbate or contribute to other causes. A myocardial infarction may result from a heart with a limited blood supply subject to increased oxygen demands, such as in fever, a fast heart rate, hyperthyroidism, too few red blood cells in the bloodstream, or low blood pressure. Damage or failure of procedures such as percutaneous coronary intervention or coronary artery bypass grafts may cause a myocardial infarction. Spasm of the coronary arteries, such as in Prinzmetal's angina, may cause blockage.
Tissue death
If impaired blood flow to the heart lasts long enough, it triggers a process called the ischemic cascade; the heart cells in the territory of the blocked coronary artery die (infarction), chiefly through necrosis, and do not grow back. A collagen scar forms in their place. When an artery is blocked, cells lack oxygen, needed to produce ATP in mitochondria. ATP is required for the maintenance of electrolyte balance, particularly through the Na/K ATPase. This leads to an ischemic cascade of intracellular changes, necrosis and apoptosis of affected cells. Cells in the area with the worst blood supply, just below the inner surface of the heart (endocardium), are most susceptible to damage. Ischemia first affects this region, the subendocardial region, and tissue begins to die within 15–30 minutes of loss of blood supply. The dead tissue is surrounded by a zone of potentially reversible ischemia that progresses to become a full-thickness transmural infarct. The initial "wave" of infarction can take place over 3–4 hours. These changes are seen on gross pathology and cannot be predicted by the presence or absence of Q waves on an ECG. The position, size and extent of an infarct depend on the affected artery, totality of the blockage, duration of the blockage, the presence of collateral blood vessels, oxygen demand, and success of interventional procedures. Tissue death and myocardial scarring alter the normal conduction pathways of the heart, and weaken affected areas. The size and location put a person at risk of abnormal heart rhythms (arrhythmias) or heart block, aneurysm of the heart ventricles, inflammation of the heart wall following infarction, and rupture of the heart wall that can have catastrophic consequences. Injury to the myocardium also occurs during reperfusion. This might manifest as ventricular arrhythmia. The reperfusion injury is a consequence of the calcium and sodium uptake from the cardiac cells and the release of oxygen radicals during reperfusion. No-reflow phenomenon—when blood is still unable to be distributed to the affected myocardium despite clearing the occlusion—also contributes to myocardial injury. Topical endothelial swelling is one of many factors contributing to this phenomenon.
Diagnosis
Criteria
A myocardial infarction, according to current consensus, is defined by elevated cardiac biomarkers with a rising or falling trend and at least one of the following (a schematic sketch of this rule appears after the list):
Symptoms relating to ischemia
Changes on an electrocardiogram (ECG), such as ST segment changes, new left bundle branch block, or pathologic Q waves
Changes in the motion of the heart wall on imaging
Demonstration of a thrombus on angiogram or at autopsy.
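Expressed as a decision rule, the consensus definition above combines one mandatory biomarker criterion with at least one supporting finding. The sketch below is a schematic restatement only; the function name and boolean inputs are invented for illustration, and real diagnosis rests on clinical judgement rather than a checklist.

```python
def meets_mi_definition(biomarkers_rising_or_falling: bool,
                        ischemic_symptoms: bool,
                        ecg_changes: bool,
                        wall_motion_abnormality: bool,
                        thrombus_demonstrated: bool) -> bool:
    """Schematic consensus rule: elevated cardiac biomarkers with a
    rising or falling trend, plus at least one supporting finding."""
    supporting = (ischemic_symptoms or ecg_changes
                  or wall_motion_abnormality or thrombus_demonstrated)
    return biomarkers_rising_or_falling and supporting
```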
Types
A myocardial infarction is usually clinically classified as an ST-elevation MI (STEMI) or a non-ST elevation MI (NSTEMI). These are based on changes in the ST segment, a portion of the heartbeat graphically recorded on an ECG. STEMIs make up about 25–40% of myocardial infarctions. A more explicit classification system, based on international consensus in 2012, also exists. This classifies myocardial infarctions into five types:
Spontaneous MI related to plaque erosion and/or rupture, fissuring, or dissection
MI related to ischemia, such as from increased oxygen demand or decreased supply, e.g. coronary artery spasm, coronary embolism, anemia, arrhythmias, high blood pressure, or low blood pressure
Sudden unexpected cardiac death, including cardiac arrest, where symptoms may suggest MI, an ECG may be taken with suggestive changes, or a blood clot is found in a coronary artery by angiography and/or at autopsy, but where blood samples could not be obtained, or at a time before the appearance of cardiac biomarkers in the blood
Associated with coronary angioplasty or stents, subdivided into:
Associated with percutaneous coronary intervention (PCI)
Associated with stent thrombosis as documented by angiography or at autopsy
Associated with CABG
Associated with spontaneous coronary artery dissection in young, fit women
Cardiac biomarkers
There are many different biomarkers used to determine the presence of cardiac muscle damage. Troponins, measured through a blood test, are considered to be the best, and are preferred because they have greater sensitivity and specificity for measuring injury to the heart muscle than other tests. A rise in troponin occurs within 2–3 hours of injury to the heart muscle, and peaks within 1–2 days. The level of the troponin, as well as a change over time, are useful in measuring and diagnosing or excluding myocardial infarctions, and the diagnostic accuracy of troponin testing is improving over time. One high-sensitivity cardiac troponin can rule out a heart attack as long as the ECG is normal. Other tests, such as CK-MB or myoglobin, are discouraged. CK-MB is not as specific as troponins for acute myocardial injury, and may be elevated with past cardiac surgery, inflammation or electrical cardioversion; it rises within 4–8 hours and returns to normal within 2–3 days. Copeptin may be useful to rule out MI rapidly when used along with troponin.
Electrocardiogram
Electrocardiograms (ECGs) are recordings, made through a series of leads placed on a person's chest, of the electrical activity associated with contraction of the heart muscle. The taking of an ECG is an important part of the workup of an AMI, and ECGs are often not just taken once but may be repeated over minutes to hours, or in response to changes in signs or symptoms. ECG readouts produce a waveform with different labelled features. In addition to a rise in biomarkers, a rise in the ST segment, changes in the shape or flipping of T waves, new Q waves, or a new left bundle branch block can be used to diagnose an AMI. In addition, ST elevation can be used to diagnose an ST segment myocardial infarction (STEMI). A rise must be new in V2 and V3 ≥2 mm (0.2 mV) for males or ≥1.5 mm (0.15 mV) for females or ≥1 mm (0.1 mV) in two other adjacent chest or limb leads. ST elevation is associated with infarction, and may be preceded by changes indicating ischemia, such as ST depression or inversion of the T waves. Abnormalities can help differentiate the location of an infarct, based on the leads that are affected by changes. Early STEMIs may be preceded by peaked T waves. Other ECG abnormalities relating to complications of acute myocardial infarctions may also be evident, such as atrial or ventricular fibrillation.
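The millimetre thresholds in the paragraph above can be read as a small rule. The sketch below encodes them under two simplifying assumptions: that a qualifying V2–V3 elevation must be present in both of those leads, and that the caller has already established that the "other" leads are anatomically adjacent. The function name is invented for illustration and this is not a diagnostic tool.

```python
def st_elevation_meets_threshold(lead_elevations_mm: dict[str, float],
                                 male: bool) -> bool:
    """Schematic check of the new-ST-elevation thresholds described
    above: >=2 mm (males) or >=1.5 mm (females) in V2 and V3, or
    >=1 mm in two other leads assumed adjacent by the caller."""
    v2 = lead_elevations_mm.get("V2", 0.0)
    v3 = lead_elevations_mm.get("V3", 0.0)
    v2_v3_cutoff = 2.0 if male else 1.5
    if v2 >= v2_v3_cutoff and v3 >= v2_v3_cutoff:
        return True
    other_leads = [mm for lead, mm in lead_elevations_mm.items()
                   if lead not in ("V2", "V3") and mm >= 1.0]
    return len(other_leads) >= 2
```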
Imaging
Noninvasive imaging plays an important role in the diagnosis and characterisation of myocardial infarction. Tests such as chest X-rays can be used to explore and exclude alternate causes of a person's symptoms. Echocardiography may assist in modifying clinical suspicion of ongoing myocardial infarction in patients who cannot be ruled out or ruled in following initial ECG and troponin testing. Myocardial perfusion imaging has no role in the acute diagnostic algorithm; however, it can confirm a clinical suspicion of Chronic Coronary Syndrome when the patient's history, physical examination (including cardiac examination), ECG, and cardiac biomarkers suggest coronary artery disease. Echocardiography, an ultrasound scan of the heart, is able to visualize the heart, its size, shape, and any abnormal motion of the heart walls as they beat that may indicate a myocardial infarction. The flow of blood can be imaged, and contrast dyes may be given to improve the image. Other scans using radioactive contrast include SPECT CT-scans using thallium, sestamibi (MIBI scans) or tetrofosmin; or a PET scan using Fludeoxyglucose or rubidium-82. These nuclear medicine scans can visualize the perfusion of heart muscle. SPECT may also be used to determine viability of tissue, and whether areas of ischemia are inducible. Medical societies and professional guidelines recommend that the physician confirm a person is at high risk for Chronic Coronary Syndrome before conducting diagnostic non-invasive imaging tests to make a diagnosis, as such tests are unlikely to change management and result in increased costs. Patients who have a normal ECG and who are able to exercise, for example, most likely do not merit routine imaging.
Differential diagnosis
There are many causes of chest pain, which can originate from the heart, lungs, gastrointestinal tract, aorta, and other muscles, bones and nerves surrounding the chest. In addition to myocardial infarction, other causes include angina, insufficient blood supply (ischemia) to the heart muscles without evidence of cell death, gastroesophageal reflux disease, pulmonary embolism, tumors of the lungs, pneumonia, rib fracture, costochondritis, heart failure, and other musculoskeletal injuries. Rarer severe differential diagnoses include aortic dissection, esophageal rupture, tension pneumothorax, and pericardial effusion causing cardiac tamponade. The chest pain in an MI may mimic heartburn. Causes of sudden-onset breathlessness generally involve the lungs or heart – including pulmonary edema, pneumonia, allergic reactions and asthma, and pulmonary embolus, acute respiratory distress syndrome and metabolic acidosis. There are many different causes of fatigue, and myocardial infarction is not a common cause.
Prevention
There is a large crossover between the lifestyle and activity recommendations to prevent a myocardial infarction, and those that may be adopted as secondary prevention after an initial myocardial infarction, because of shared risk factors and an aim to reduce atherosclerosis affecting heart vessels. The influenza vaccine also appears to protect against myocardial infarction, with a benefit of 15 to 45%.
Primary prevention
Lifestyle
Physical activity can reduce the risk of cardiovascular disease, and people at risk are advised to engage in 150 minutes of moderate or 75 minutes of vigorous-intensity aerobic exercise a week. Keeping a healthy weight, drinking alcohol within the recommended limits, and quitting smoking reduce the risk of cardiovascular disease. Substituting unsaturated fats such as olive oil and rapeseed oil for saturated fats may reduce the risk of myocardial infarction, although there is not universal agreement. Dietary modifications are recommended by some national authorities, with recommendations including increasing the intake of wholegrain starch, reducing sugar intake (particularly of refined sugar), consuming five portions of fruit and vegetables daily, consuming two or more portions of fish per week, and consuming 4–5 portions of unsalted nuts, seeds, or legumes per week. The dietary pattern with the greatest support is the Mediterranean diet. Vitamins and mineral supplements are of no proven benefit, and neither are plant stanols or sterols. Public health measures may also act at a population level to reduce the risk of myocardial infarction, for example by reducing unhealthy diets (excessive salt, saturated fat, and trans fat), including through food labeling and marketing requirements as well as requirements for catering and restaurants, and by stimulating physical activity. This may be part of regional cardiovascular disease prevention programs or through the health impact assessment of regional and local plans and policies. Most guidelines recommend combining different preventive strategies. A 2015 Cochrane Review found some evidence that such an approach might help with blood pressure, body mass index and waist circumference. However, there was insufficient evidence to show an effect on mortality or actual cardiovascular events.
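The weekly activity target above (150 minutes of moderate or 75 minutes of vigorous exercise) is often applied to mixed weeks using the common guideline convention that one vigorous minute counts as two moderate minutes. The sketch below assumes that convention; the function name is invented for illustration.

```python
def meets_weekly_activity_target(moderate_minutes: float,
                                 vigorous_minutes: float) -> bool:
    """True if the week reaches 150 moderate-equivalent minutes,
    counting each vigorous minute as two moderate minutes (a common
    guideline convention, assumed here)."""
    return moderate_minutes + 2 * vigorous_minutes >= 150

print(meets_weekly_activity_target(90, 30))  # 90 + 60 minutes -> True
```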
Medication
Statins, drugs that act to lower blood cholesterol, decrease the incidence and mortality rates of myocardial infarctions. They are often recommended in those at an elevated risk of cardiovascular diseases. Aspirin has been studied extensively in people considered at increased risk of myocardial infarction. Based on numerous studies in different groups (e.g. people with or without diabetes), there does not appear to be a benefit strong enough to outweigh the risk of excessive bleeding. Nevertheless, many clinical practice guidelines continue to recommend aspirin for primary prevention, and some researchers feel that those with very high cardiovascular risk but low risk of bleeding should continue to receive aspirin.
Secondary prevention
There is a large crossover between the lifestyle and activity recommendations to prevent a myocardial infarction, and those that may be adopted as secondary prevention after an initial myocardial infarct. Recommendations include stopping smoking, a gradual return to exercise, eating a healthy diet, low in saturated fat and low in cholesterol, drinking alcohol within recommended limits, exercising, and trying to achieve a healthy weight. Exercise is both safe and effective even if people have had stents or heart failure, and is recommended to start gradually after 1–2 weeks. Counselling should be provided relating to medications used, and for warning signs of depression. Previous studies suggested a benefit from omega-3 fatty acid supplementation but this has not been confirmed.
Medications
Following a heart attack, nitrates (when taken for two days) and ACE inhibitors decrease the risk of death. Other medications include:
Aspirin is continued indefinitely, as well as another antiplatelet agent such as clopidogrel or ticagrelor ("dual antiplatelet therapy" or DAPT) for up to twelve months. If someone has another medical condition that requires anticoagulation (e.g. with warfarin) this may need to be adjusted based on risk of further cardiac events as well as bleeding risk. In those who have had a stent, more than 12 months of clopidogrel plus aspirin does not affect the risk of death.
Beta blocker therapy such as metoprolol or carvedilol is recommended to be started within 24 hours, provided there is no acute heart failure or heart block. The dose should be increased to the highest tolerated. Contrary to what was long believed, the use of beta blockers does not appear to affect the risk of death, possibly because other treatments for MI have improved. When beta blocker medication is given within the first 24–72 hours of a STEMI no lives are saved. However, 1 in 200 people were prevented from a repeat heart attack, and another 1 in 200 from having an abnormal heart rhythm. Additionally, for 1 in 91 the medication causes a temporary decrease in the heart's ability to pump blood.
ACE inhibitor therapy should be started within 24 hours, and continued indefinitely at the highest tolerated dose. This is provided there is no evidence of worsening kidney failure, high potassium, low blood pressure, or known narrowing of the renal arteries. Those who cannot tolerate ACE inhibitors may be treated with an angiotensin II receptor antagonist.
Statin therapy has been shown to reduce mortality and subsequent cardiac events and should be commenced to lower LDL cholesterol. Other medications, such as ezetimibe, may also be added with this goal in mind.
Aldosterone antagonists (spironolactone or eplerenone) may be used if there is evidence of left ventricular dysfunction after an MI, ideally after beginning treatment with an ACE inhibitor.
Other
A defibrillator, an electric device connected to the heart and surgically inserted under the skin, may be recommended. This is particularly the case if there are ongoing signs of heart failure, with a low left ventricular ejection fraction and a New York Heart Association grade II or III, 40 days after the infarction. Defibrillators detect potentially fatal arrhythmias and deliver an electrical shock to the person to depolarize a critical mass of the heart muscle.
Management
A myocardial infarction requires immediate medical attention. Treatment aims to preserve as much heart muscle as possible, and to prevent further complications. Treatment depends on whether the myocardial infarction is a STEMI or NSTEMI. Treatment in general aims to unblock blood vessels, reduce blood clot enlargement, reduce ischemia, and modify risk factors with the aim of preventing future MIs. The main treatments for myocardial infarctions with ECG evidence of ST elevation (STEMI) include thrombolysis and percutaneous coronary intervention, although PCI is also ideally conducted within 1–3 days for NSTEMI. In addition to clinical judgement, risk stratification may be used to guide treatment, such as with the TIMI and GRACE scoring systems.
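The text names the TIMI and GRACE systems without detailing them. As a hedged illustration of what such risk stratification looks like, the sketch below implements the widely published TIMI score for unstable angina/NSTEMI, which awards one point for each of seven factors. The factor list here is reproduced from memory of the original publication rather than from this text, so it should be checked against the primary source before any use.

```python
def timi_ua_nstemi_score(age_65_or_over: bool,
                         three_or_more_cad_risk_factors: bool,
                         known_coronary_stenosis_over_50pct: bool,
                         aspirin_use_in_past_7_days: bool,
                         two_or_more_angina_episodes_in_24h: bool,
                         st_deviation_on_ecg: bool,
                         elevated_cardiac_markers: bool) -> int:
    """Returns a 0-7 score; higher scores indicate higher short-term
    risk of adverse cardiac events in unstable angina / NSTEMI."""
    return sum([age_65_or_over,
                three_or_more_cad_risk_factors,
                known_coronary_stenosis_over_50pct,
                aspirin_use_in_past_7_days,
                two_or_more_angina_episodes_in_24h,
                st_deviation_on_ecg,
                elevated_cardiac_markers])
```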
Pain
The pain associated with myocardial infarction is often treated with nitroglycerin, a vasodilator, or opioid medications such as morphine. Nitroglycerin (given under the tongue or injected into a vein) may improve blood supply to the heart. It is an important part of therapy for its pain relief effects, though there is no proven benefit to mortality. Morphine or other opioid medications may also be used, and are effective for the pain associated with STEMI. There is poor evidence that morphine shows any benefit to overall outcomes, and there is some evidence of potential harm.
Antithrombotics
Aspirin, an antiplatelet drug, is given as a loading dose to reduce the clot size and reduce further clotting in the affected artery. It is known to decrease mortality associated with acute myocardial infarction by at least 50%. P2Y12 inhibitors such as clopidogrel, prasugrel and ticagrelor are given concurrently, also as a loading dose, with the dose depending on whether further surgical management or fibrinolysis is planned. Prasugrel and ticagrelor are recommended in European and American guidelines, as they are active more quickly and consistently than clopidogrel. P2Y12 inhibitors are recommended in both NSTEMI and STEMI, including in PCI, with evidence also to suggest improved mortality. Heparins, particularly in the unfractionated form, act at several points in the clotting cascade, help to prevent the enlargement of a clot, and are also given in myocardial infarction, owing to evidence suggesting improved mortality rates. In very high-risk scenarios, inhibitors of the platelet glycoprotein IIb/IIIa (αIIbβ3) receptor such as eptifibatide or tirofiban may be used. There is varying evidence on the mortality benefits in NSTEMI. A 2014 review of P2Y12 inhibitors such as clopidogrel found they do not change the risk of death when given to people with a suspected NSTEMI prior to PCI, nor do heparins change the risk of death. They do decrease the risk of having a further myocardial infarction.
Angiogram
Primary percutaneous coronary intervention (PCI) is the treatment of choice for STEMI if it can be performed in a timely manner, ideally within 90–120 minutes of contact with a medical provider. Some recommend it is also done in NSTEMI within 1–3 days, particularly when considered high-risk. A 2017 review, however, did not find a difference between early versus later PCI in NSTEMI. PCI involves small probes, inserted through peripheral blood vessels such as the femoral artery or radial artery, into the blood vessels of the heart. The probes are then used to identify and clear blockages, either with small balloons that are dragged through the blocked segment to pull away the clot, or by the insertion of stents. Coronary artery bypass grafting is only considered when the affected area of heart muscle is large, and PCI is unsuitable, for example with difficult cardiac anatomy. After PCI, people are generally placed on aspirin indefinitely and on dual antiplatelet therapy (generally aspirin and clopidogrel) for at least a year.
Fibrinolysis
If PCI cannot be performed within 90 to 120 minutes in STEMI then fibrinolysis, preferably within 30 minutes of arrival to hospital, is recommended. If a person has had symptoms for 12 to 24 hours, the evidence for effectiveness of thrombolysis is less, and if they have had symptoms for more than 24 hours it is not recommended. Thrombolysis involves the administration of medication that activates the enzymes that normally dissolve blood clots. These medications include tissue plasminogen activator, reteplase, streptokinase, and tenecteplase. Thrombolysis is not recommended in a number of situations, particularly when associated with a high risk of bleeding or the potential for problematic bleeding, such as active bleeding, past strokes or bleeds into the brain, or severe hypertension. Situations in which thrombolysis may be considered, but with caution, include recent surgery, use of anticoagulants, pregnancy, and proclivity to bleeding. Major risks of thrombolysis are major bleeding and intracranial bleeding. Pre-hospital thrombolysis reduces time to thrombolytic treatment, based on studies conducted in higher income countries; however, it is unclear whether this has an impact on mortality rates.
Other
In the past, high flow oxygen was recommended for everyone with a possible myocardial infarction. More recently, no evidence was found for routine use in those with normal oxygen levels and there is potential harm from the intervention. Therefore, oxygen is currently only recommended if oxygen levels are found to be low or if someone is in respiratory distress. If despite thrombolysis there is significant cardiogenic shock, continued severe chest pain, or less than a 50% improvement in ST elevation on the ECG recording after 90 minutes, then rescue PCI is indicated emergently. Those who have had cardiac arrest may benefit from targeted temperature management with evaluation for implementation of hypothermia protocols. Furthermore, those with cardiac arrest, and ST elevation at any time, should usually have angiography. Aldosterone antagonists appear to be useful in people who have had a STEMI and do not have heart failure.
Rehabilitation and exercise
Cardiac rehabilitation benefits many who have experienced myocardial infarction, even if there has been substantial heart damage and resultant left ventricular failure. It should start soon after discharge from the hospital. The program may include lifestyle advice, exercise, social support, as well as recommendations about driving, flying, sports participation, stress management, and sexual intercourse. Returning to sexual activity after myocardial infarction is a major concern for most patients, and is an important area to be discussed in the provision of holistic care. In the short term, exercise-based cardiovascular rehabilitation programs may reduce the risk of a myocardial infarction, reduce hospitalizations from all causes, reduce hospital costs, improve health-related quality of life, and have a small effect on all-cause mortality. Longer-term studies indicate that exercise-based cardiovascular rehabilitation programs may reduce cardiovascular mortality and myocardial infarction.
Prognosis
The prognosis after myocardial infarction varies greatly depending on the extent and location of the affected heart muscle, and the development and management of complications. Prognosis is worse with older age and social isolation. Anterior infarcts, persistent ventricular tachycardia or fibrillation, development of heart blocks, and left ventricular impairment are all associated with poorer prognosis. Without treatment, about a quarter of those affected by MI die within minutes and about forty percent within the first month. Morbidity and mortality from myocardial infarction have, however, improved over the years due to earlier and better treatment: in those who have a STEMI in the United States, between 5 and 6 percent die before leaving the hospital and 7 to 18 percent die within a year. It is unusual for babies to experience a myocardial infarction, but when they do, about half die. In the short term, neonatal survivors seem to have a normal quality of life.
Complications
Complications may occur immediately following the myocardial infarction or may take time to develop. Disturbances of heart rhythms, including atrial fibrillation, ventricular tachycardia and fibrillation and heart block can arise as a result of ischemia, cardiac scarring, and infarct location. Stroke is also a risk, either as a result of clots transmitted from the heart during PCI, as a result of bleeding following anticoagulation, or as a result of disturbances in the heart's ability to pump effectively as a result of the infarction. Regurgitation of blood through the mitral valve is possible, particularly if the infarction causes dysfunction of the papillary muscle. Cardiogenic shock as a result of the heart being unable to adequately pump blood may develop, dependent on infarct size, and is most likely to occur within the days following an acute myocardial infarction. Cardiogenic shock is the largest cause of in-hospital mortality. Rupture of the ventricular dividing wall or left ventricular wall may occur within the initial weeks. Dressler's syndrome, a reaction following larger infarcts and a cause of pericarditis, is also possible. Heart failure may develop as a long-term consequence, with an impaired ability of heart muscle to pump, scarring, and an increase in the size of the existing muscle. Aneurysm of the left ventricle myocardium develops in about 10% of MIs and is itself a risk factor for heart failure, ventricular arrhythmia, and the development of clots. Risk factors for complications and death include age, hemodynamic parameters (such as heart failure, cardiac arrest on admission, systolic blood pressure, or Killip class of two or greater), ST-segment deviation, diabetes, serum creatinine, peripheral vascular disease, and elevation of cardiac markers.
Epidemiology
Myocardial infarction is a common presentation of coronary artery disease. The World Health Organization estimated in 2004 that 12.2% of worldwide deaths were from ischemic heart disease, with it being the leading cause of death in high- or middle-income countries and second only to lower respiratory infections in lower-income countries. Worldwide, more than 3 million people have STEMIs and 4 million have NSTEMIs a year. STEMIs occur about twice as often in men as women. Rates of death from ischemic heart disease (IHD) have slowed or declined in most high-income countries, although cardiovascular disease still accounted for one in three of all deaths in the US in 2008. For example, rates of death from cardiovascular disease decreased by almost a third between 2001 and 2011 in the United States. In contrast, IHD is becoming a more common cause of death in the developing world. For example, in India, IHD had become the leading cause of death by 2004, accounting for 1.46 million deaths (14% of total deaths), and deaths due to IHD were expected to double during 1985–2015. Globally, disability adjusted life years (DALYs) lost to ischemic heart disease are predicted to account for 5.5% of total DALYs in 2030, making it the second-most-important cause of disability (after unipolar depressive disorder), as well as the leading cause of death by this date.
Social determinants of health
Social determinants such as neighborhood disadvantage, immigration status, lack of social support, social isolation, and access to health services play an important role in myocardial infarction risk and survival. Studies have shown that low socioeconomic status is associated with an increased risk of poorer survival. There are well-documented disparities in myocardial infarction survival by socioeconomic status, race, education, and census-tract-level poverty.
Race: In the U.S., African Americans have a greater burden of myocardial infarction and other cardiovascular events. On a population level, there is a higher overall prevalence of risk factors that are unrecognized and therefore not treated, which places these individuals at a greater likelihood of experiencing adverse outcomes and therefore potentially higher morbidity and mortality.
Socioeconomic status: Among individuals who live in low-socioeconomic-status (SES) areas, home to close to 25% of the US population, myocardial infarctions (MIs) occurred twice as often compared with people who lived in higher-SES areas.
Immigration status: In 2018, many lawfully present immigrants who are eligible for coverage remained uninsured because immigrant families face a range of enrollment barriers, including fear, confusion about eligibility policies, difficulty navigating the enrollment process, and language and literacy challenges. Uninsured undocumented immigrants are ineligible for coverage options due to their immigration status.
Health care access: Lack of health insurance and financial concerns about accessing care were associated with delays in seeking emergency care for acute myocardial infarction, which can have significant, adverse consequences on patient outcomes.
Education: Researchers found that compared to people with graduate degrees, those with lower educational attainment appeared to have a higher risk of heart attack, dying from a cardiovascular event, and overall death.
Society and culture
Depictions of heart attacks in popular media often include collapsing or loss of consciousness which are not common symptoms; these depictions contribute to widespread misunderstanding about the symptoms of myocardial infarctions, which in turn contributes to people not getting care when they should.
Legal implications
At common law, in general, a myocardial infarction is a disease, but may sometimes be an injury. This can create coverage issues in the administration of no-fault insurance schemes such as workers' compensation. In general, a heart attack is not covered; however, it may be a work-related injury if it results, for example, from unusual emotional stress or unusual exertion. In addition, in some jurisdictions, heart attacks experienced by persons in particular occupations such as police officers may be classified as line-of-duty injuries by statute or policy. In some countries or states, a person having had an MI may be prevented from participating in activity that puts other people's lives at risk, for example driving a car or flying an airplane.
</s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Urethrocele and can you explain it in depth? [/INST] A urethrocele is the prolapse of the female urethra into the vagina. Weakening of the tissues that hold the urethra in place allows it to sag and bulge into the anterior (front) wall of the vagina. It frequently occurs together with prolapse of the bladder (cystocele), in which case the combined defect is called a cystourethrocele. Contributing factors include childbirth, chronic straining, obesity, and the weakening of pelvic tissues that can follow menopause. Small urethroceles may cause no symptoms; larger ones can produce stress urinary incontinence, urinary frequency, a sensation of vaginal fullness or a visible bulge, and discomfort during intercourse. The diagnosis is usually made on pelvic examination. Treatment depends on severity and ranges from pelvic floor (Kegel) exercises and vaginal pessaries to surgical repair of the anterior vaginal wall. </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Eosinophilic esophagitis and can you explain it in depth? [/INST] Eosinophilic esophagitis (EoE) is an allergic inflammatory condition of the esophagus that involves eosinophils, a type of white blood cell. In healthy individuals, the esophagus is typically devoid of eosinophils. In EoE, eosinophils migrate to the esophagus in large numbers. When a trigger food is eaten, the eosinophils contribute to tissue damage and inflammation. Symptoms include swallowing difficulty, food impaction, vomiting, and heartburn. Eosinophilic esophagitis was first described in children but also occurs in adults. The condition is not well understood, but food allergy may play a significant role. The treatment may consist of removal of known or suspected triggers and medication to suppress the immune response. In severe cases, it may be necessary to enlarge the esophagus with an endoscopy procedure.
While knowledge about EoE has been increasing rapidly, diagnosis of EoE can be challenging because the symptoms and histo-pathologic findings are not specific.
Epidemiology
The prevalence of eosinophilic esophagitis has increased over time and currently ranges from 1 to 6 per 10,000 persons. Gender and ethnic variations exist in the prevalence of EoE, with the majority of cases reported in Caucasian males. In addition to gender (male predominance) and race (mainly a disease of Caucasian individuals), established risk factors for EoE include atopy and other allergic conditions. Other recognized genetic and environmental risk factors for EoE include alterations in gut barrier function (e.g. GERD), variation in the nature and timing of oral antigen exposure, lack of early exposure to microbes, and an altered microbiome. A study comparing children with active EoE to children without EoE found an altered microbiome: a relatively high abundance of Haemophilus correlated positively with disease activity, as measured by increasing Eosinophilic Esophagitis Endoscopic Reference Score and Eosinophilic Esophagitis Histologic Scoring System values (q value = 5e-10). Measuring the relative abundance of specific taxa in children's salivary microbiome could serve as a noninvasive marker for eosinophilic esophagitis.
Signs and symptoms
EoE often presents with difficulty swallowing, food impaction, stomach pains, regurgitation or vomiting, and decreased appetite. Although the typical onset of EoE is in childhood, the disease can be found in all age groups, and symptoms vary depending on the age of presentation. In addition, young children with EoE may present with feeding difficulties and poor weight gain. It is more common in males, and affects both adults and children. Predominant symptoms in school-aged children and adolescents include difficulty swallowing, food impaction, and choking/gagging with meals, particularly when eating foods with coarse textures. Other symptoms in this age group can include abdominal/chest pain, vomiting, and regurgitation. The predominant symptom in adults is difficulty swallowing; however, intractable heartburn and food avoidance may also be present. Due to the long-standing inflammation and possible resultant scarring that may have gone unrecognized, adults presenting with EoE tend to have more episodes of esophageal food impaction as well as other esophageal abnormalities such as Schatzki ring, esophageal webs, and in some cases, achalasia. Although many of these symptoms overlap with the symptoms of GERD, the majority of patients with EoE exhibit a poor response to acid-suppression therapy. Many people with EoE have other autoimmune and allergic diseases such as asthma and celiac disease. Mast cell disorders such as Mast Cell Activation Syndrome or Mastocytosis are also frequently associated with it.
Pathophysiology
Eosinophils are inflammatory cells that release a variety of chemical signals which inflame the surrounding esophageal tissue. This results in the signs and symptoms of pain, visible redness on endoscopy, and a natural history that may include stricturing. Although eosinophils are normally present in other parts of a healthy gastrointestinal tract, these white blood cells are not normally found in the esophagus of a healthy individual. The reason for the migration of eosinophils to the tissue of the esophagus is not fully understood but is being studied extensively. It is thought the migration of eosinophils to the esophagus may be due to genetic, environmental, and host immune system factors. At a tissue level, EoE is characterized by a dense infiltrate with white blood cells of the eosinophil type into the epithelial lining of the esophagus. This is thought to be an allergic reaction against ingested food, based on the important role eosinophils play in allergic reactions. The eosinophils are recruited into the tissue in response to local production of eotaxin-3 by IL-13 stimulated esophageal epithelial cells.
Diagnosis
The diagnosis of EoE is typically made from the combination of symptoms and findings on diagnostic testing. To properly diagnose EoE, various diseases such as GERD, esophageal cancer, achalasia, hypereosinophilic syndrome, infection, Crohn's disease, and drug allergies need to be ruled out.
Prior to the development of the EE Diagnostic Panel, EoE could only be diagnosed if gastroesophageal reflux did not respond to a six-week trial of twice-a-day high-dose proton-pump inhibitors (PPIs) or if a negative ambulatory pH study ruled out gastroesophageal reflux disease (GERD). Radiologically, the term "ringed esophagus" has been used for the appearance of eosinophilic esophagitis on barium swallow studies to contrast with the appearance of transient transverse folds sometimes seen with esophageal reflux (termed "feline esophagus").
Endoscopy
Endoscopically, ridges, furrows, or rings may be seen in the esophageal wall. Sometimes, multiple rings may occur in the esophagus, leading to the term "corrugated esophagus" or "feline esophagus" due to the similarity of the rings to a cat's esophagus. Presence of white exudates in the esophagus is also suggestive of the diagnosis. On biopsy taken at the time of endoscopy, numerous eosinophils can be seen in the superficial epithelium. A minimum of 15 eosinophils per high-power field is required to make the diagnosis. Eosinophilic inflammation is not limited to the esophagus alone, and does extend through the whole gastrointestinal tract. Profoundly degranulated eosinophils may also be present, as may micro-abscesses and an expansion of the basal layer. Patients found to have signs of EoE on endoscopy should undergo an empiric 8-week trial of high-dose proton pump inhibitor therapy (twice daily) before repeat endoscopy in order to rule out GERD. Although endoscopic findings are helpful in identifying patients with EoE, they are not diagnostic of the disease if the patient has no clinical symptoms.
Esophageal mucosal biopsy
Currently, endoscopic mucosal biopsy remains the most important diagnostic test for EoE, and is required to confirm the diagnosis. Biopsy specimens from both the proximal/mid and distal esophagus should be obtained regardless of the gross appearance of the mucosa. Specimens should also be obtained from areas revealing endoscopic abnormalities. At least four biopsies are required to obtain adequate sensitivity for the detection of EoE. A definitive diagnosis of EoE is based on the presence of at least 15 eosinophils/HPF in the esophageal biopsies of patients despite treatment with high-dose PPI. GERD can increase eosinophilic infiltration in the distal esophagus, however, eosinophils associated with GERD generally occur at a lower density (i.e. < 15/HPF).
Allergy assessment
A thorough personal and family history of other atopic conditions is recommended in all patients with EoE. Testing for allergic sensitization may be considered using skin prick testing or blood testing for allergen-specific IgE. This is particularly important for the 10–20% of EoE patients who also have symptoms of immediate IgE-mediated food allergy. Atopy patch testing has been used in some cases for the potential identification of delayed, non-IgE (cell-mediated) reactions.
Diagnostic criteria
The diagnosis of eosinophilic esophagitis requires all of the following (a schematic sketch of this rule appears after the list):
Symptoms related to esophageal dysfunction.
Eosinophil-predominant inflammation on esophageal biopsy, characteristically consisting of a peak value of ≥15 eosinophils per high power field (HPF).
Exclusion of other causes that may be responsible for symptoms and esophageal eosinophilia.
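Because all three requirements must hold at once, the criteria above reduce to a simple conjunction with one numeric threshold. The sketch below restates them schematically; the function name and inputs are invented for illustration, and the exclusion of other causes is a clinical judgement that a boolean can only stand in for.

```python
def meets_eoe_criteria(esophageal_dysfunction_symptoms: bool,
                       peak_eosinophils_per_hpf: int,
                       other_causes_excluded: bool) -> bool:
    """Schematic version of the three requirements above: symptoms,
    a peak count of at least 15 eosinophils per high-power field,
    and exclusion of other causes of esophageal eosinophilia."""
    return (esophageal_dysfunction_symptoms
            and peak_eosinophils_per_hpf >= 15
            and other_causes_excluded)
```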
Treatment
The goal of EoE treatment is to control the symptoms by decreasing the number of eosinophils in the esophagus and, subsequently, reducing the esophageal inflammation. Management consists of dietary, pharmacological, and endoscopic treatment.
Dietary management
Dietary treatment can be effective, as there does appear to be a role of allergy in the development of EoE. Allergy testing is not particularly effective in predicting which foods are driving the disease process. If no specific allergenic food or agent is present, a trial of the six food elimination diet (SFED) can be pursued. Various approaches have been tried, where either six food groups (cow's milk, wheat, egg, soy, nuts and fish/seafood), four groups (animal milk, gluten-containing cereals, egg, legumes) or two groups (animal milk and gluten-containing cereals) are excluded for a period of time, usually six weeks. A "top down" (starting with six foods, then reintroducing) approach may be very restrictive. Four- or even two-group exclusion diets may be less difficult to follow and reduce the need for many endoscopies if the response to the limited restriction is good. Alternative options to the SFED include the elemental diet, which is an amino-acid-based diet. The elemental diet demonstrates a high rate of response (almost 90% in children, 70% in adults), with a rapid relief of symptoms associated with histological remission. This diet involves using amino-acid-based liquid formulas for 4–6 weeks, followed by the histological evaluation of response. If remission is achieved, foods are slowly reintroduced.
Pharmacologic treatment
In patients diagnosed with EoE, a trial of a proton-pump inhibitor (PPI), such as esomeprazole 20 mg to 40 mg orally once or twice daily, is a reasonable first-line option. Nexium®, the brand-name form of esomeprazole, may be preferred, as these tablets can be dispersed in half a glass of water and drunk by those with difficulty swallowing pills. In those who respond to PPI therapy with symptomatic improvement, endoscopy with esophageal biopsy should be repeated. If no eosinophils are present in the repeat biopsy, the diagnosis is either acid-mediated GERD with eosinophilia or non-GERD PPI-responsive EoE with an unknown mechanism. If both symptoms and eosinophils persist after treatment with PPI, the diagnosis is immune-mediated EoE. Medical therapy for immune-mediated EoE primarily involves corticosteroids. Systemic (oral) corticosteroids were one of the first treatment options shown to be effective in patients with EoE. Both clinical and histologic improvement have been noted in approximately 95% of EoE patients using systemic corticosteroids. However, upon discontinuation of therapy, 90% of patients using corticosteroids experience a recurrence in symptoms. In May 2022, the U.S. Food and Drug Administration approved dupilumab (Dupixent) to treat eosinophilic esophagitis (EoE) in adults and pediatric patients 12 years and older weighing at least 40 kilograms (about 88 pounds), making it the first FDA-approved treatment for EoE.
Endoscopic dilatation
In patients who present with food impaction, flexible upper endoscopy is recommended to remove impacted food. Dilation is deferred in EoE until patients are adequately treated with pharmacological or dietary therapy, and the result of a response to therapy is available. The goals of therapy for EoE are to improve the patient's symptoms and to reduce the number of eosinophils on biopsy. This procedure is effective in 84% of people who require it. Esophageal strictures and rings can be safely dilated in EoE. It is recommended to use a graduated balloon catheter for gradual dilation. The patient should be informed that they might experience chest pain after dilation, and of the risks of esophageal perforation and bleeding.
Prognosis
The long-term prognosis for patients with EoE is unknown </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Overflow incontinence and can you explain it in depth? [/INST] Overflow incontinence is a form of urinary incontinence, characterized by the involuntary release of urine from an overfull urinary bladder, often in the absence of any urge to urinate. This condition occurs in people who have a blockage of the bladder outlet (benign prostatic hyperplasia, prostate cancer, or narrowing of the urethra), or when the muscle that expels urine from the bladder is too weak to empty the bladder normally. Overflow incontinence may also be a side effect of certain medications.
Causes
Lesions affecting sacral segments or peripheral autonomic fibres result in an atonic bladder with loss of sphincteric coordination. This results in loss of detrusor contraction, difficulty in initiating micturition, and overflow incontinence. Anticholinergic side effects of certain medications (for example, certain antipsychotics and antidepressants) may cause urinary retention, which may lead to overflow incontinence. Alpha-adrenergic agonists may cause urinary retention by stimulating contraction of the urethral sphincter. Calcium channel blockers may decrease the contractility of the smooth muscle tissue in the urinary bladder, causing urinary retention with overflow incontinence. Epidural anesthesia and childbirth can also cause overflow incontinence.
Pathophysiology
Overflow incontinence occurs when the patient's bladder is always full, so that it frequently leaks urine. Weak bladder muscles, resulting in incomplete emptying of the bladder, or a blocked urethra can cause this type of incontinence. Autonomic neuropathy from diabetes or other diseases (e.g., multiple sclerosis) can decrease neural signals from the bladder (allowing for overfilling) and may also decrease the expulsion of urine by the detrusor muscle (allowing for urinary retention). Additionally, tumors and kidney stones can block the urethra. Spinal cord injuries or nervous system disorders are additional causes of overflow incontinence </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Burn and can you explain it in depth? [/INST] A burn is an injury to skin, or other tissues, caused by heat, cold, electricity, chemicals, friction, or ultraviolet radiation (like sunburn). Most burns are due to heat from hot liquids (called scalding), solids, or fire. Burns occur mainly in the home or the workplace. In the home, risks are associated with domestic kitchens, including stoves, flames, and hot liquids. In the workplace, risks are associated with fire and chemical and electric burns. Alcoholism and smoking are other risk factors. Burns can also occur as a result of self-harm or violence between people (assault). Burns that affect only the superficial skin layers are known as superficial or first-degree burns. They appear red without blisters and pain typically lasts around three days. When the injury extends into some of the underlying skin layer, it is a partial-thickness or second-degree burn. Blisters are frequently present and they are often very painful. Healing can require up to eight weeks and scarring may occur. In a full-thickness or third-degree burn, the injury extends to all layers of the skin. Often there is no pain and the burnt area is stiff. Healing typically does not occur on its own. A fourth-degree burn additionally involves injury to deeper tissues, such as muscle, tendons, or bone. The burn is often black and frequently leads to loss of the burned part. Burns are generally preventable. Treatment depends on the severity of the burn. Superficial burns may be managed with little more than simple pain medication, while major burns may require prolonged treatment in specialized burn centers. Cooling with tap water may help pain and decrease damage; however, prolonged cooling may result in low body temperature. Partial-thickness burns may require cleaning with soap and water, followed by dressings. It is not clear how to manage blisters, but it is probably reasonable to leave them intact if small and drain them if large. Full-thickness burns usually require surgical treatments, such as skin grafting. Extensive burns often require large amounts of intravenous fluid, due to capillary fluid leakage and tissue swelling. The most common complications of burns involve infection. Tetanus toxoid should be given if not up to date. In 2015, fire and heat resulted in 67 million injuries. This resulted in about 2.9 million hospitalizations and 176,000 deaths. Among women in much of the world, burns are most commonly related to the use of open cooking fires or unsafe cook stoves. Among men, they are more likely a result of unsafe workplace conditions. Most deaths due to burns occur in the developing world, particularly in Southeast Asia. While large burns can be fatal, treatments developed since 1960 have improved outcomes, especially in children and young adults. In the United States, approximately 96% of those admitted to a burn center survive their injuries. The long-term outcome is related to the size of burn and the age of the person affected.
Signs and symptoms
The characteristics of a burn depend upon its depth. Superficial burns cause pain lasting two or three days, followed by peeling of the skin over the next few days. Individuals with more severe burns may indicate discomfort or complain of feeling pressure rather than pain. Full-thickness burns may be entirely insensitive to light touch or puncture. While superficial burns are typically red in color, severe burns may be pink, white or black. Burns around the mouth or singed hair inside the nose may indicate that burns to the airways have occurred, but these findings are not definitive. More worrisome signs include: shortness of breath, hoarseness, and stridor or wheezing. Itchiness is common during the healing process, occurring in up to 90% of adults and nearly all children. Numbness or tingling may persist for a prolonged period of time after an electrical injury. Burns may also produce emotional and psychological distress.
Cause
Burns are caused by a variety of external sources classified as thermal (heat-related), chemical, electrical, and radiation. In the United States, the most common causes of burns are: fire or flame (44%), scalds (33%), hot objects (9%), electricity (4%), and chemicals (3%). Most (69%) burn injuries occur at home or at work (9%), and most are accidental, with 2% due to assault by another, and 1–2% resulting from a suicide attempt. These sources can cause inhalation injury to the airway and/or lungs, occurring in about 6%. Burn injuries occur more commonly among the poor. Smoking and alcoholism are other risk factors. Fire-related burns are generally more common in colder climates. Specific risk factors in the developing world include cooking with open fires or on the floor as well as developmental disabilities in children and chronic diseases in adults.
Thermal
In the United States, fire and hot liquids are the most common causes of burns. Of house fires that result in death, smoking causes 25% and heating devices cause 22%. Almost half of injuries are due to efforts to fight a fire. Scalding is caused by hot liquids or gases and most commonly occurs from exposure to hot drinks, high temperature tap water in baths or showers, hot cooking oil, or steam. Scald injuries are most common in children under the age of five and, in the United States and Australia, this population makes up about two-thirds of all burns. Contact with hot objects is the cause of about 20–30% of burns in children. Generally, scalds are first- or second-degree burns, but third-degree burns may also result, especially with prolonged contact. Fireworks are a common cause of burns during holiday seasons in many countries, and are a particular risk for adolescent males. In the United States, white males under six years of age account for most non-fatal burn injuries. Thermal burns from grabbing/touching and spilling/splashing were the most common burn types and mechanisms, while the body areas most affected were the hands and fingers, followed by the head and neck.
Chemical
Chemical burns can be caused by over 25,000 substances, most of which are either a strong base (55%) or a strong acid (26%). Most chemical burn deaths are secondary to ingestion. Common agents include: sulfuric acid as found in toilet cleaners, sodium hypochlorite as found in bleach, and halogenated hydrocarbons as found in paint remover, among others. Hydrofluoric acid can cause particularly deep burns that may not become symptomatic until some time after exposure. Formic acid may cause the breakdown of significant numbers of red blood cells.
Electrical
Electrical burns or injuries are classified as high voltage (greater than or equal to 1000 volts), low voltage (less than 1000 volts), or as flash burns secondary to an electric arc. The most common causes of electrical burns in children are electrical cords (60%) followed by electrical outlets (14%). Lightning may also result in electrical burns. Risk factors for being struck include involvement in outdoor activities such as mountain climbing, golf and field sports, and working outside. Mortality from a lightning strike is about 10%. While electrical injuries primarily result in burns, they may also cause fractures or dislocations secondary to blunt force trauma or muscle contractions. In high voltage injuries, most damage may occur internally and thus the extent of the injury cannot be judged by examination of the skin alone. Contact with either low voltage or high voltage may produce cardiac arrhythmias or cardiac arrest.
Radiation
Radiation burns may be caused by protracted exposure to ultraviolet light (such as from the sun, tanning booths or arc welding) or from ionizing radiation (such as from radiation therapy, X-rays or radioactive fallout). Sun exposure is the most common cause of radiation burns and the most common cause of superficial burns overall. There is significant variation in how easily people sunburn based on their skin type. Skin effects from ionizing radiation depend on the amount of exposure to the area, with hair loss seen after 3 Gy, redness seen after 10 Gy, wet skin peeling after 20 Gy, and necrosis after 30 Gy. Redness, if it occurs, may not appear until some time after exposure. Radiation burns are treated the same as other burns. Microwave burns occur via thermal heating caused by the microwaves. While exposures as short as two seconds may cause injury, overall this is an uncommon occurrence.
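The dose-effect thresholds quoted above amount to a simple lookup. The sketch below is illustrative only: it maps an absorbed skin dose in grays to the effect expected from the figures in this section; the function name and structure are assumptions of ours, not a clinical tool.

# Illustrative sketch: map an absorbed skin dose (in grays, Gy) to the
# expected skin effect, using the thresholds quoted above (3, 10, 20, 30 Gy).
# The function and its name are hypothetical, not a clinical tool.

def radiation_skin_effect(dose_gy):
    """Return the expected skin effect for a given absorbed dose in Gy."""
    thresholds = [
        (30.0, "necrosis"),
        (20.0, "wet skin peeling"),
        (10.0, "redness"),
        (3.0, "hair loss"),
    ]
    for limit, effect in thresholds:  # check the highest threshold first
        if dose_gy >= limit:
            return effect
    return "no deterministic skin effect expected"

print(radiation_skin_effect(12))  # -> "redness"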
Non-accidental
In those hospitalized from scalds or fire burns, 3–10% are from assault. Reasons include: child abuse, personal disputes, spousal abuse, elder abuse, and business disputes. An immersion injury or immersion scald may indicate child abuse. It is created when an extremity, or sometimes the buttocks, are held under the surface of hot water. It typically produces a sharp upper border and is often symmetrical; such injuries are known as "sock burns", "glove burns", or "zebra stripes", where skin folds have prevented certain areas from burning. Deliberate cigarette burns are most often found on the face, or the back of the hands and feet. Other high-risk signs of potential abuse include: circumferential burns, the absence of splash marks, a burn of uniform depth, and association with other signs of neglect or abuse. Bride burning, a form of domestic violence, occurs in some cultures, such as India, where women have been burned in revenge for what the husband or his family consider an inadequate dowry. In Pakistan, acid burns represent 13% of intentional burns, and are frequently related to domestic violence. Self-immolation (setting oneself on fire) is also used as a form of protest in various parts of the world.
Pathophysiology
At temperatures greater than 44 °C (111 °F), proteins begin losing their three-dimensional shape and start breaking down. This results in cell and tissue damage. Many of the direct health effects of a burn are caused by failure of the skin to perform its normal functions, which include: protection from bacteria, skin sensation, body temperature regulation, and prevention of evaporation of the body's water. Disruption of these functions can lead to infection, loss of skin sensation, hypothermia, and hypovolemic shock via dehydration (i.e., loss of body water through evaporation). Disruption of cell membranes causes cells to lose potassium to the spaces outside the cell and to take up water and sodium. In large burns (over 30% of the total body surface area), there is a significant inflammatory response. This results in increased leakage of fluid from the capillaries, and subsequent tissue edema. This causes overall blood volume loss, with the remaining blood suffering significant plasma loss, making the blood more concentrated. Poor blood flow to organs like the kidneys and gastrointestinal tract may result in kidney failure and stomach ulcers. Increased levels of catecholamines and cortisol can cause a hypermetabolic state that can last for years. This is associated with increased cardiac output, metabolism, a fast heart rate, and poor immune function.
Diagnosis
Burns can be classified by depth, mechanism of injury, extent, and associated injuries. The most commonly used classification is based on the depth of injury. The depth of a burn is usually determined via examination, although a biopsy may also be used. It may be difficult to accurately determine the depth of a burn on a single examination and repeated examinations over a few days may be necessary. In those who have a headache or are dizzy and have a fire-related burn, carbon monoxide poisoning should be considered. Cyanide poisoning should also be considered.
Size
The size of a burn is measured as a percentage of total body surface area (TBSA) affected by partial thickness or full thickness burns. First-degree burns that are only red in color and are not blistering are not included in this estimation. Most burns (70%) involve less than 10% of the TBSA. There are a number of methods to determine the TBSA, including the Wallace rule of nines, the Lund and Browder chart, and estimations based on a person's palm size. The rule of nines is easy to remember but only accurate in people over 16 years of age. More accurate estimates can be made using Lund and Browder charts, which take into account the different proportions of body parts in adults and children. The size of a person's handprint (including the palm and fingers) is approximately 1% of their TBSA.
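For illustration, the rule of nines lends itself to a short worked sketch. The regional percentages below follow the conventional adult chart (head 9%, each arm 9%, each leg 18%, front and back of the trunk 18% each, perineum 1%), which the text cites by name but does not enumerate, so treat the table as an assumption; the palm-count helper uses the 1%-per-handprint estimate stated above.

# Minimal sketch of adult TBSA estimation, assuming the conventional
# "rule of nines" chart (not spelled out in the text above). Valid only
# for people over 16, per the text.

RULE_OF_NINES = {
    "head": 9.0,
    "left_arm": 9.0, "right_arm": 9.0,
    "left_leg": 18.0, "right_leg": 18.0,
    "anterior_trunk": 18.0, "posterior_trunk": 18.0,
    "perineum": 1.0,
}

def tbsa_rule_of_nines(burned_regions):
    """Sum the regional percentages for fully burned regions."""
    return sum(RULE_OF_NINES[region] for region in burned_regions)

def tbsa_palm_method(handprints):
    """Each patient handprint (palm plus fingers) is roughly 1% TBSA."""
    return 1.0 * handprints

# Example: burns covering one arm and the front of the trunk.
print(tbsa_rule_of_nines(["right_arm", "anterior_trunk"]))  # -> 27.0
print(tbsa_palm_method(5))                                  # -> 5.0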
Severity
To determine the need for referral to a specialized burn unit, the American Burn Association devised a classification system. Under this system, burns can be classified as major, moderate, or minor, assessed on the basis of a number of factors, including total body surface area affected, the involvement of specific anatomical zones, the age of the person, and associated injuries. Minor burns can typically be managed at home, moderate burns are often managed in a hospital, and major burns are managed by a burn center. Severe burn injury represents one of the most devastating forms of trauma; despite improvements in burn care, patients may continue to suffer for as long as three years after injury.
Signs of smoke inhalation
Signs of smoke inhalation include a hoarse voice, dyspnea, facial burns, singed nasal hairs, and sputum containing carbonaceous material; stridor and wheezing may be present in later stages.
Prevention
Historically, about half of all burns were deemed preventable. Burn prevention programs have significantly decreased rates of serious burns. Preventive measures include: limiting hot water temperatures, smoke alarms, sprinkler systems, proper construction of buildings, and fire-resistant clothing. Experts recommend setting water heaters below 48.8 °C (119.8 °F). Other measures to prevent scalds include using a thermometer to measure bath water temperatures, and splash guards on stoves. While the effect of fireworks regulation is unclear, there is tentative evidence of benefit; recommendations include limiting the sale of fireworks to children.
Management
Resuscitation begins with the assessment and stabilization of the person's airway, breathing and circulation. If inhalation injury is suspected, early intubation may be required. This is followed by care of the burn wound itself. People with extensive burns may be wrapped in clean sheets until they arrive at a hospital. As burn wounds are prone to infection, a tetanus booster shot should be given if an individual has not been immunized within the last five years. In the United States, 95% of burns that present to the emergency department are treated and discharged; 5% require hospital admission. With major burns, early feeding is important. Protein intake should also be increased, and trace elements and vitamins are often required. Hyperbaric oxygenation may be useful in addition to traditional treatments.
Intravenous fluids
In those with poor tissue perfusion, boluses of isotonic crystalloid solution should be given. In children with more than 10–20% TBSA (total body surface area) burns, and adults with more than 15% TBSA burns, formal fluid resuscitation and monitoring should follow. This should be begun pre-hospital if possible in those with burns greater than 25% TBSA. The Parkland formula can help determine the volume of intravenous fluids required over the first 24 hours. The formula is based on the affected individual's TBSA and weight. Half of the fluid is administered over the first 8 hours, and the remainder over the following 16 hours. The time is calculated from when the burn occurred, and not from the time that fluid resuscitation began. Children require additional maintenance fluid that includes glucose. Additionally, those with inhalation injuries require more fluid. While inadequate fluid resuscitation may cause problems, over-resuscitation can also be detrimental. The formulas are only a guide, with infusions ideally tailored to a urinary output of >30 mL/h in adults or >1 mL/kg in children and a mean arterial pressure greater than 60 mmHg. While lactated Ringer's solution is often used, there is no evidence that it is superior to normal saline. Crystalloid fluids appear just as good as colloid fluids, and as colloids are more expensive they are not recommended. Blood transfusions are rarely required. They are typically only recommended when the hemoglobin level falls below 60–80 g/L (6–8 g/dL) due to the associated risk of complications. Intravenous catheters may be placed through burned skin if needed, or intraosseous infusions may be used.
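As a worked example, the Parkland calculation can be written in a few lines. The widely cited multiplier of 4 mL per kilogram per %TBSA is not stated in the paragraph above, so it is flagged as an assumption in the code; the half-in-8-hours split and the timing from the moment of the burn follow the text.

# Sketch of the Parkland formula as described above. The 4 mL/kg per %TBSA
# multiplier is the commonly cited value but is an assumption here, since
# the text gives only the structure of the formula. Timing runs from the
# moment of the burn, not from the start of resuscitation.

def parkland_24h_ml(weight_kg, tbsa_percent, ml_per_kg_per_tbsa=4.0):
    """Return (volume for the first 8 h, volume for the next 16 h) in mL."""
    total = ml_per_kg_per_tbsa * weight_kg * tbsa_percent
    first_8h = total / 2          # half over the first 8 hours post-burn
    next_16h = total - first_8h   # remainder over the following 16 hours
    return first_8h, next_16h

# Example: a 70 kg adult with 30% TBSA burns -> 8400 mL total over 24 h.
first, rest = parkland_24h_ml(70, 30)
print(first, rest)  # -> 4200.0 4200.0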
Wound care
Early cooling (within 30 minutes of the burn) reduces burn depth and pain, but care must be taken as over-cooling can result in hypothermia. It should be performed with cool water 10–25 °C (50.0–77.0 °F) and not ice water, as the latter can cause further injury. Chemical burns may require extensive irrigation. Cleaning with soap and water, removal of dead tissue, and application of dressings are important aspects of wound care. If intact blisters are present, it is not clear what should be done with them. Some tentative evidence supports leaving them intact. Second-degree burns should be re-evaluated after two days. In the management of first and second-degree burns, little quality evidence exists to determine which dressing type to use. It is reasonable to manage first-degree burns without dressings. While topical antibiotics are often recommended, there is little evidence to support their use. Silver sulfadiazine (a type of antibiotic) is not recommended as it potentially prolongs healing time. There is insufficient evidence to support the use of dressings containing silver or negative-pressure wound therapy. Silver sulfadiazine does not appear to differ from silver-containing foam dressings with respect to healing.
Medications
Burns can be very painful and a number of different options may be used for pain management. These include simple analgesics (such as ibuprofen and acetaminophen) and opioids such as morphine. Benzodiazepines may be used in addition to analgesics to help with anxiety. During the healing process, antihistamines, massage, or transcutaneous nerve stimulation may be used to aid with itching. Antihistamines, however, are only effective for this purpose in 20% of people. There is tentative evidence supporting the use of gabapentin, and its use may be reasonable in those who do not improve with antihistamines. Intravenous lidocaine requires more study before it can be recommended for pain. Intravenous antibiotics are recommended before surgery for those with extensive burns (>60% TBSA). As of 2008, guidelines do not recommend their general use due to concerns regarding antibiotic resistance and the increased risk of fungal infections. Tentative evidence, however, shows that they may improve survival rates in those with large and severe burns. Erythropoietin has not been found effective to prevent or treat anemia in burn cases. In burns caused by hydrofluoric acid, calcium gluconate is a specific antidote and may be used intravenously and/or topically. Recombinant human growth hormone (rhGH) in those with burns that involve more than 40% of their body appears to speed healing without affecting the risk of death. The evidence for the use of steroids is unclear.
Surgery
Wounds requiring surgical closure with skin grafts or flaps (typically anything more than a small full thickness burn) should be dealt with as early as possible. Circumferential burns of the limbs or chest may need urgent surgical release of the skin, known as an escharotomy. This is done to treat or prevent problems with distal circulation or ventilation. It is uncertain if it is useful for neck or digit burns. Fasciotomies may be required for electrical burns. Skin grafts can involve temporary skin substitutes, derived from animal (human donor or pig) skin or synthesized. They are used to cover the wound as a dressing, preventing infection and fluid loss, but will eventually need to be removed. Alternatively, human skin can be treated to be left on permanently without rejection. There is no evidence that the use of copper sulphate to visualise phosphorus particles for removal can help with wound healing due to phosphorus burns. Meanwhile, absorption of copper sulphate into the blood circulation can be harmful.
Alternative medicine
Honey has been used since ancient times to aid wound healing and may be beneficial in first- and second-degree burns. There is moderate evidence that honey helps heal partial thickness burns. The evidence for aloe vera is of poor quality. It might be beneficial in reducing pain, and a review from 2007 found tentative evidence of improved healing times, but a subsequent review from 2012 did not find improved healing over silver sulfadiazine. There were only three randomized controlled trials for the use of plants for burns: two for aloe vera and one for oatmeal. There is little evidence that vitamin E helps with keloids or scarring. Butter is not recommended. In low income countries, burns are treated up to one-third of the time with traditional medicine, which may include applications of eggs, mud, leaves or cow dung. Surgical management is limited in some cases due to insufficient financial resources and availability. There are a number of other methods that may be used in addition to medications to reduce procedural pain and anxiety, including: virtual reality therapy, hypnosis, and behavioral approaches such as distraction techniques.
Patient support
Burn patients require support and care – both physiological and psychological. Respiratory failure, sepsis, and multi-organ system failure are common in hospitalized burn patients. To prevent hypothermia and maintain normal body temperature, burn patients with burns covering more than 20% of the body surface should be kept in an environment with the temperature at or above 30 °C. Metabolism in burn patients proceeds at a higher than normal speed, due to the whole-body hypermetabolic response and rapid fatty acid substrate cycling, which can be countered with an adequate supply of energy, nutrients, and antioxidants. Enteral feeding beginning a day after resuscitation is required to reduce the risk of infection, recovery time, non-infectious complications, hospital stay, long-term damage, and mortality. Controlling blood glucose levels can have an impact on liver function and survival.
The risk of thromboembolism is high, and acute respiratory distress syndrome (ARDS) that does not resolve with maximal ventilator support is also a common complication. Scars are long-term after-effects of a burn injury. Psychological support is required to cope with the aftermath of a fire accident. To prevent scars and long-term damage to the skin and other body structures, consulting burn specialists, preventing infection, consuming nutritious foods, early and aggressive rehabilitation, and compressive clothing are recommended.
Prognosis
The prognosis is worse in those with larger burns, those who are older, and females. The presence of a smoke inhalation injury, other significant injuries such as long bone fractures, and serious co-morbidities (e.g. heart disease, diabetes, psychiatric illness, and suicidal intent) also influence prognosis. On average, 4% of those admitted to United States burn centers die, with the outcome for individuals dependent on the extent of the burn injury. For example, admittees with burn areas less than 10% TBSA had a mortality rate of less than 1%, while admittees with over 90% TBSA had a mortality rate of 85%. In Afghanistan, people with more than 60% TBSA burns rarely survive. The Baux score has historically been used to determine the prognosis of major burns. However, with improved care, it is no longer very accurate. The score is determined by adding the size of the burn (% TBSA) to the age of the person, and taking that to be more or less equal to the risk of death. Burns in 2013 resulted in 1.2 million years lived with disability and 12.3 million disability-adjusted life years.
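The Baux score as described is a one-line sum, shown in the sketch below. The cap at 100% is an illustrative convenience of ours, not part of the historical score, and, as the text notes, the score overestimates mortality under modern burn care.

# Sketch of the historical Baux score described above: age plus %TBSA,
# read as an approximate percentage risk of death. The cap at 100 is an
# illustrative convenience, not part of the original score.

def baux_score(age_years, tbsa_percent):
    """Approximate historical mortality risk (%) for a major burn."""
    return min(age_years + tbsa_percent, 100.0)

print(baux_score(60, 30))  # -> 90.0 (historically, roughly 90% risk of death)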
Complications
A number of complications may occur, with infections being the most common. In order of frequency, potential complications include: pneumonia, cellulitis, urinary tract infections and respiratory failure. Risk factors for infection include: burns of more than 30% TBSA, full-thickness burns, extremes of age (young or old), or burns involving the legs or perineum. Pneumonia occurs particularly commonly in those with inhalation injuries. Anemia secondary to full thickness burns of greater than 10% TBSA is common. Electrical burns may lead to compartment syndrome or rhabdomyolysis due to muscle breakdown. Blood clotting in the veins of the legs is estimated to occur in 6 to 25% of people. The hypermetabolic state that may persist for years after a major burn can result in a decrease in bone density and a loss of muscle mass. Keloids may form subsequent to a burn, particularly in those who are young and dark skinned. Following a burn, children may have significant psychological trauma and experience post-traumatic stress disorder. Scarring may also result in a disturbance in body image. In the developing world, significant burns may result in social isolation, extreme poverty and child abandonment.
Epidemiology
In 2015, fire and heat resulted in 67 million injuries, about 2.9 million hospitalizations, and 176,000 deaths, down from about 280,000 deaths in 1990. This makes burns the fourth leading cause of injuries after motor vehicle collisions, falls, and violence. About 90% of burns occur in the developing world, which has been attributed partly to overcrowding and unsafe cooking conditions. Overall, nearly 60% of fatal burns occur in Southeast Asia, with a rate of 11.6 per 100,000. In the developed world, adult males have twice the mortality of females from burns, most probably due to higher-risk occupations and greater risk-taking activities. In many countries in the developing world, however, females have twice the risk of males, often related to kitchen accidents or domestic violence. In children, deaths from burns occur at more than ten times the rate in the developing world compared with the developed world; overall, burns are among the top fifteen leading causes of death in children. From the 1980s to 2004, many countries saw decreases both in rates of fatal burns and in burns generally.
Developed countries
An estimated 500,000 burn injuries receive medical treatment yearly in the United States. They resulted in about 3,300 deaths in 2008. Most burns (70%) and deaths from burns occur in males. The highest incidence of fire burns occurs in those 18–35 years old, while the highest incidence of scalds occurs in children less than five years old and adults over 65. Electrical burns result in about 1,000 deaths per year. Lightning results in the death of about 60 people a year. In Europe, intentional burns occur most commonly in middle-aged men.
Developing countries
In India, about 700,000 to 800,000 people per year sustain significant burns, though very few are looked after in specialist burn units. The highest rates occur in women 16–35 years of age. Part of this high rate is related to unsafe kitchens and the loose-fitting clothing typical of India. It is estimated that one-third of all burns in India are due to clothing catching fire from open flames. Intentional burns are also a common cause and occur at high rates in young women, secondary to domestic violence and self-harm.
History
Cave paintings from more than 3,500 years ago document burns and their management. The earliest Egyptian records on treating burns describe dressings prepared with milk from mothers of baby boys, and the 1500 BCE Edwin Smith Papyrus describes treatments using honey and the salve of resin. Many other treatments have been used over the ages, including the use of tea leaves by the Chinese documented to 600 BCE, pig fat and vinegar by Hippocrates documented to 400 BCE, and wine and myrrh by Celsus documented to 100 CE. French barber-surgeon Ambroise Paré was the first to describe different degrees of burns in the 1500s. Guillaume Dupuytren expanded these degrees into six different severities in 1832. The first hospital to treat burns opened in 1843 in London, England, and the development of modern burn care began in the late 1800s and early 1900s. During World War I, Henry D </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Microtia and can you explain it in depth? [/INST] Microtia is a congenital deformity where the auricle (external ear) is underdeveloped. A completely undeveloped pinna is referred to as anotia. Because microtia and anotia have the same origin, it can be referred to as microtia-anotia. Microtia can be unilateral (one side only) or bilateral (affecting both sides). Microtia occurs in 1 out of about 8,000–10,000 births. In unilateral microtia, the right ear is most commonly affected. It may occur as a complication of taking Accutane (isotretinoin) during pregnancy.
Classification
According to the Altman-classification, there are four grades of microtia:
Grade I: A less than complete development of the external ear with identifiable structures and a small but present external ear canal.
Grade II: A partially developed ear (usually the top portion is underdeveloped) with a closed stenotic external ear canal producing a conductive hearing loss.
Grade III: Absence of the external ear with a small peanut-like vestige structure and an absence of the external ear canal and ear drum. Grade III microtia is the most common form of microtia.
Grade IV: Absence of the total ear or anotia.
Causes and risk factors
The etiology of microtia in children remains uncertain, but some cases have been associated with genetic defects in single or multiple genes, altitude, and gestational diabetes. Risk factors gathered from studies include low birth weight, male sex, maternal gravidity and parity, and medication use during pregnancy. Genetic inheritance has not been fully studied, but the few available studies suggest the malformation arises during the early stages of pregnancy.
Diagnosis
At birth, lower-grade microtia is difficult to diagnose visually on physical exam, while higher-grade microtia can be diagnosed visually because of its noticeable abnormalities. Infants with noticeable abnormalities are closely monitored by physicians and hearing specialists.
Treatment
The goal of medical intervention is to provide the best form and function to the underdeveloped ear.
Hearing
Typically, testing is first done to determine the quality of hearing. This can be done as early as the first two weeks of life with a BAER test (brain stem auditory response test). At age 5–6, CT or CAT scans of the middle ear can be done to elucidate its development and clarify which patients are appropriate candidates for surgery to improve hearing. For younger individuals, this is done under sedation.
The hearing loss associated with congenital aural atresia is a conductive hearing loss—hearing loss caused by inefficient conduction of sound to the inner ear. Essentially, children with aural atresia have hearing loss because the sound cannot travel into the (usually) healthy inner ear—there is no ear canal, no eardrum, and the small ear bones (malleus/hammer, incus/anvil, and stapes/stirrup) are underdeveloped. "Usually" is in parentheses because rarely, a child with atresia also has a malformation of the inner ear leading to a sensorineural hearing loss (as many as 19% in one study). Sensorineural hearing loss is caused by a problem in the inner ear, the cochlea. Sensorineural hearing loss is not correctable by surgery, but properly fitted and adjusted hearing amplification (hearing aids) generally provide excellent rehabilitation for this hearing loss. If the hearing loss is severe to profound in both ears, the child may be a candidate for a cochlear implant (beyond the scope of this discussion).
Unilateral sensorineural hearing loss was not generally considered a serious disability by the medical establishment before the nineties; it was thought that the afflicted person was able to adjust to it from birth. In general, there are exceptional advantages to gain from an intervention to enable hearing in the microtic ear, especially in bilateral microtia. Children with untreated unilateral sensorineural hearing loss are more likely to have to repeat a grade in school and/or need supplemental services (e.g., FM system – see below) than their peers. Children with unilateral sensorineural hearing loss often require years of speech therapy in order to learn how to enunciate and understand spoken language. What is truly unclear, and the subject of an ongoing research study, is the effect of unilateral conductive hearing loss (in children with unilateral aural atresia) on scholastic performance. If atresia surgery or some form of amplification is not used, special steps should be taken to ensure that the child is accessing and understanding all of the verbal information presented in school settings. Recommendations for improving a child's hearing in the academic setting include preferential seating in class, an FM system (the teacher wears a microphone, and the sound is transmitted to a speaker at the child's desk or to an ear bud or hearing aid the child wears), a bone-anchored hearing aid (BAHA), or conventional hearing aids. Age for BAHA implantation depends on whether the child is in Europe (18 months) or the US (age 5). Until then it is possible to fit a BAHA on a softband. It is important to note that not all children with aural atresia are candidates for atresia repair. Candidacy for atresia surgery is based on the hearing test (audiogram) and CT scan imaging. If a canal is built where one does not exist, minor complications can arise from the body's natural tendency to heal an open wound closed. Repairing aural atresia is a very detailed and complicated surgical procedure which requires an expert in atresia repair. While complications from this surgery can arise, the risk of complications is greatly reduced when using a highly experienced otologist. Atresia patients who opt for surgery will temporarily have the canal packed with gelatin sponge and silicone sheeting to prevent closure. The timing of ear canal reconstruction (canalplasty) depends on the type of external ear (microtia) repair desired by the patient and family. Two surgical teams in the USA are currently able to reconstruct the canal at the same time as the external ear in a single surgical stage (one stage ear reconstruction).
In cases where a later surgical reconstruction of the external ear of the child might be possible, positioning of the BAHA implant is critical. It may be necessary to position the implant further back than usual to enable successful reconstructive surgery – but not so far as to compromise hearing performance. If the reconstruction is ultimately successful, it is easy to remove the percutaneous BAHA abutment. If the surgery is unsuccessful, the abutment can be replaced and the implant re-activated to restore hearing.
External ear
The age at which outer ear surgery can be attempted depends upon the technique chosen. The earliest is age 7 for rib cartilage grafts. However, some surgeons recommend waiting until a later age, such as 8–10, when the ear is closer to adult size. External ear prostheses have been made for children as young as 5.
For auricular reconstruction, there are several different options:
Rib Cartilage Graft Reconstruction: This surgery may be performed by specialists in the technique. It involves sculpting the patient's own rib cartilage into the form of an ear. Because the cartilage is the patient's own living tissue, the reconstructed ear continues to grow as the child does. In order to be sure that the rib cage is large enough to provide the necessary donor tissue, some surgeons wait until the patient is 8 years of age; however, some surgeons with more experience with this technique may begin the surgery on a child aged six. The major advantage of this surgery is that the patient's own tissue is used for the reconstruction. This surgery varies from two to four stages depending on the surgeon's preferred method. A novel one stage ear reconstruction technique is performed by a few select surgeons. One team is able to reconstruct the entire external ear and ear canal in one operation.
Reconstruct the ear using a polyethylene plastic implant (also called Medpor): This is a 1–2 stage surgery that can start at age 3 and can be done as an outpatient without hospitalization. Using the porous framework, which allows the patient's tissue to grow into the material, and the patient's own tissue flap, a new ear is constructed in a single surgery. A small second surgery is performed in 3–6 months if needed for minor adjustments. Medpor was developed by John Reinisch. This surgery should only be performed by experts in the techniques involved. The use of porous polyethylene implants for ear reconstruction was initiated in the 1980s by Alexander Berghaus.
Ear Prosthesis: An auricular (ear) prosthesis is custom made by an anaplastologist to mirror the other ear. Prosthetic ears can appear very realistic. They require a few minutes of daily care. They are typically made of silicone, which is colored to match the surrounding skin, and can be attached using either adhesive or titanium screws inserted into the skull, to which the prosthetic is attached with a magnetic or bar/clip type system. These screws are the same as the BAHA (bone-anchored hearing aid) screws and can be placed simultaneously. The biggest advantage over any surgery is a prosthetic ear that appears as similar as possible to the natural ear. The biggest disadvantage is the daily care involved and knowing that the prosthesis is not real. In 2022, the successful transplantation of a 3D-bioprinted auricle made from a microtia patient's own cells was reported, a first in 3D bioprinting for transplants.
Related conditions
Aural atresia is the underdevelopment of the middle ear and ear canal, and usually occurs in conjunction with microtia. Atresia occurs because patients with microtia may lack an external opening to the ear canal; however, the cochlea and other inner ear structures are usually present. The grade of microtia usually correlates with the degree of development of the middle ear.
Microtia is usually isolated, but may occur in conjunction with hemifacial microsomia, Goldenhar syndrome, or Treacher Collins syndrome. It is also occasionally associated with kidney abnormalities (rarely life-threatening) and jaw problems, and more rarely, heart defects and vertebral deformities.
Notable cases
Paul Stanley, vocalist and rhythm guitarist of Kiss, was born with grade III microtia of his right ear.
References
Further reading
Bennun RD, Mulliken JB, Kaban LB, Murray JE (December 1985) </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Liver failure and can you explain it in depth? [/INST] Liver failure is the inability of the liver to perform its normal synthetic and metabolic functions as part of normal physiology </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Kayser–Fleischer ring and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Chondrolysis and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Emotional lability and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Pain and can you explain it in depth? [/INST] Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain defines pain as "an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage." In medical diagnosis, pain is regarded as a symptom of an underlying condition.
Pain motivates the individual to withdraw from damaging situations, to protect a damaged body part while it heals, and to avoid similar experiences in the future. Most pain resolves once the noxious stimulus is removed and the body has healed, but it may persist despite removal of the stimulus and apparent healing of the body. Sometimes pain arises in the absence of any detectable stimulus, damage or disease. Pain is the most common reason for physician consultation in most developed countries. It is a major symptom in many medical conditions, and can interfere with a person's quality of life and general functioning. Simple pain medications are useful in 20% to 70% of cases. Psychological factors such as social support, cognitive behavioral therapy, excitement, or distraction can affect pain's intensity or unpleasantness. In some debates regarding physician-assisted suicide or euthanasia, pain has been used as an argument to permit people who are terminally ill to end their lives.
Etymology
First attested in English in 1297, the word peyn comes from the Old French peine, in turn from Latin poena meaning "punishment, penalty" (also meaning "torment, hardship, suffering" in Late Latin) and that from Greek ποινή (poine), generally meaning "price paid, penalty, punishment".
Classification
The International Association for the Study of Pain recommends using specific features to describe a patient's pain:
region of the body involved (e.g. abdomen, lower limbs),
system whose dysfunction may be causing the pain (e.g., nervous, gastrointestinal),
duration and pattern of occurrence,
intensity, and
cause
Chronic versus acute
Pain is usually transitory, lasting only until the noxious stimulus is removed or the underlying damage or pathology has healed, but some painful conditions, such as rheumatoid arthritis, peripheral neuropathy, cancer and idiopathic pain, may persist for years. Pain that lasts a long time is called "chronic" or "persistent", and pain that resolves quickly is called "acute". Traditionally, the distinction between acute and chronic pain has relied upon an arbitrary interval of time between onset and resolution; the two most commonly used markers being 3 months and 6 months since the onset of pain, though some theorists and researchers have placed the transition from acute to chronic pain at 12 months. Others apply "acute" to pain that lasts less than 30 days, "chronic" to pain of more than six months duration, and "subacute" to pain that lasts from one to six months. A popular alternative definition of "chronic pain", involving no arbitrarily fixed duration, is "pain that extends beyond the expected period of healing". Chronic pain may be classified as "cancer-related" or "benign."
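One of the duration-based conventions above can be written as a tiny classifier. The sketch below uses the 30-day and six-month cut-offs quoted in this paragraph; the function name and the choice to measure duration in days are our own assumptions.

# Sketch of one duration-based labeling convention described above:
# "acute" under 30 days, "subacute" from one to six months, "chronic"
# beyond six months. Other authors use 3-, 6-, or 12-month markers.

def classify_pain_duration(days):
    if days < 30:
        return "acute"
    elif days <= 182:   # roughly six months
        return "subacute"
    return "chronic"

print(classify_pain_duration(45))   # -> "subacute"
print(classify_pain_duration(400))  # -> "chronic"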
Allodynia
Allodynia is pain experienced in response to a normally painless stimulus. It has no biological function and is classified by stimuli into dynamic mechanical, punctate and static.
Phantom
Phantom pain is pain felt in a part of the body that has been amputated, or from which the brain no longer receives signals. It is a type of neuropathic pain. The prevalence of phantom pain in upper limb amputees is nearly 82%, and in lower limb amputees is 54%. One study found that eight days after amputation, 72% of patients had phantom limb pain, and six months later, 67% reported it. Some amputees experience continuous pain that varies in intensity or quality; others experience several bouts of pain per day, or it may reoccur less often. It is often described as shooting, crushing, burning or cramping. If the pain is continuous for a long period, parts of the intact body may become sensitized, so that touching them evokes pain in the phantom limb. Phantom limb pain may accompany urination or defecation. Local anesthetic injections into the nerves or sensitive areas of the stump may relieve pain for days, weeks, or sometimes permanently, despite the drug wearing off in a matter of hours; and small injections of hypertonic saline into the soft tissue between vertebrae produces local pain that radiates into the phantom limb for ten minutes or so and may be followed by hours, weeks or even longer of partial or total relief from phantom pain. Vigorous vibration or electrical stimulation of the stump, or current from electrodes surgically implanted onto the spinal cord, all produce relief in some patients. Mirror box therapy produces the illusion of movement and touch in a phantom limb which in turn may cause a reduction in pain. Paraplegia, the loss of sensation and voluntary motor control after serious spinal cord damage, may be accompanied by girdle pain at the level of the spinal cord damage, visceral pain evoked by a filling bladder or bowel, or, in five to ten per cent of paraplegics, phantom body pain in areas of complete sensory loss. This phantom body pain is initially described as burning or tingling but may evolve into severe crushing or pinching pain, or the sensation of fire running down the legs or of a knife twisting in the flesh. Onset may be immediate or may not occur until years after the disabling injury. Surgical treatment rarely provides lasting relief.
Breakthrough
Breakthrough pain is transitory pain that comes on suddenly and is not alleviated by the patient's regular pain management. It is common in cancer patients, who often have background pain that is generally well controlled by medications but who also sometimes experience bouts of severe pain that from time to time "breaks through" the medication. The characteristics of breakthrough cancer pain vary from person to person and according to the cause. Management of breakthrough pain can entail intensive use of opioids, including fentanyl.
Asymbolia and insensitivity
The ability to experience pain is essential for protection from injury, and recognition of the presence of injury. Episodic analgesia may occur under special circumstances, such as in the excitement of sport or war: a soldier on the battlefield may feel no pain for many hours from a traumatic amputation or other severe injury. Although unpleasantness is an essential part of the IASP definition of pain, it is possible to induce a state described as intense pain devoid of unpleasantness in some patients, with morphine injection or psychosurgery. Such patients report that they have pain but are not bothered by it; they recognize the sensation of pain but suffer little, or not at all. Indifference to pain can also rarely be present from birth; these people have normal nerves on medical investigations, and find pain unpleasant, but do not avoid repetition of the pain stimulus. Insensitivity to pain may also result from abnormalities in the nervous system. This is usually the result of acquired damage to the nerves, such as spinal cord injury, diabetes mellitus (diabetic neuropathy), or leprosy in countries where that disease is prevalent. These individuals are at risk of tissue damage and infection due to undiscovered injuries. People with diabetes-related nerve damage, for instance, sustain poorly healing foot ulcers as a result of decreased sensation. A much smaller number of people are insensitive to pain due to an inborn abnormality of the nervous system, known as "congenital insensitivity to pain". Children with this condition incur carelessly repeated damage to their tongues, eyes, joints, skin, and muscles. Some die before adulthood, and others have a reduced life expectancy. Most people with congenital insensitivity to pain have one of five hereditary sensory and autonomic neuropathies (which include familial dysautonomia and congenital insensitivity to pain with anhidrosis). These conditions feature decreased sensitivity to pain together with other neurological abnormalities, particularly of the autonomic nervous system. A very rare syndrome with isolated congenital insensitivity to pain has been linked with mutations in the SCN9A gene, which codes for a sodium channel (Nav1.7) necessary in conducting pain nerve stimuli.
Functional effects
Experimental subjects challenged by acute pain and patients in chronic pain experience impairments in attention control, working memory, mental flexibility, problem solving, and information processing speed. Acute and chronic pain are also associated with increased depression, anxiety, fear, and anger.
If I have matters right, the consequences of pain will include direct physical distress, unemployment, financial difficulties, marital disharmony, and difficulties in concentration and attention…
On subsequent negative emotion
Although pain is considered to be aversive and unpleasant and is therefore usually avoided, a meta-analysis that summarized and evaluated numerous studies from various psychological disciplines found a reduction in negative affect. Across studies, participants who were subjected to acute physical pain in the laboratory subsequently reported feeling better than those in non-painful control conditions, a finding that was also reflected in physiological parameters. A potential mechanism to explain this effect is provided by the opponent-process theory.
Theory
Historical
Before the relatively recent discovery of neurons and their role in pain, various different body functions were proposed to account for pain. There were several competing early theories of pain among the ancient Greeks: Hippocrates believed that it was due to an imbalance in vital fluids. In the 11th century, Avicenna theorized that there were a number of feeling senses including touch, pain and titillation.
In 1644, René Descartes theorized that pain was a disturbance that passed along nerve fibers until the disturbance reached the brain. Descartes's work, along with Avicenna's, prefigured the 19th-century development of specificity theory. Specificity theory saw pain as "a specific sensation, with its own sensory apparatus independent of touch and other senses". Another theory that came to prominence in the 18th and 19th centuries was intensive theory, which conceived of pain not as a unique sensory modality, but an emotional state produced by stronger than normal stimuli such as intense light, pressure or temperature. By the mid-1890s, specificity was backed mostly by physiologists and physicians, and the intensive theory was mostly backed by psychologists. However, after a series of clinical observations by Henry Head and experiments by Max von Frey, the psychologists migrated to specificity almost en masse, and by century's end, most textbooks on physiology and psychology were presenting pain specificity as fact.
Modern
Some sensory fibers do not differentiate between noxious and non-noxious stimuli, while others, nociceptors, respond only to noxious, high intensity stimuli. At the peripheral end of the nociceptor, noxious stimuli generate currents that, above a given threshold, send signals along the nerve fiber to the spinal cord. The "specificity" (whether it responds to thermal, chemical or mechanical features of its environment) of a nociceptor is determined by which ion channels it expresses at its peripheral end. Dozens of different types of nociceptor ion channels have so far been identified, and their exact functions are still being determined. The pain signal travels from the periphery to the spinal cord along A-delta and C fibers. Because the A-delta fiber is thicker than the C fiber, and is thinly sheathed in an electrically insulating material (myelin), it carries its signal faster (5–30 m/s) than the unmyelinated C fiber (0.5–2 m/s). Pain evoked by the A-delta fibers is described as sharp and is felt first. This is followed by a duller pain, often described as burning, carried by the C fibers. These A-delta and C fibers enter the spinal cord via Lissauer's tract and connect with spinal cord nerve fibers in the central gelatinous substance of the spinal cord. These spinal cord fibers then cross the cord via the anterior white commissure and ascend in the spinothalamic tract. Before reaching the brain, the spinothalamic tract splits into the lateral, neospinothalamic tract and the medial, paleospinothalamic tract. The neospinothalamic tract carries the fast, sharp A-delta signal to the ventral posterolateral nucleus of the thalamus. The paleospinothalamic tract carries the slow, dull, C-fiber pain signal. Some of the paleospinothalamic fibers peel off in the brain stem, connecting with the reticular formation or midbrain periaqueductal gray, and the remainder terminate in the intralaminar nuclei of the thalamus. Pain-related activity in the thalamus spreads to the insular cortex (thought to embody, among other things, the feeling that distinguishes pain from other homeostatic emotions such as itch and nausea) and anterior cingulate cortex (thought to embody, among other things, the affective/motivational element, the unpleasantness of pain), and pain that is distinctly located also activates primary and secondary somatosensory cortex. Spinal cord fibers dedicated to carrying A-delta fiber pain signals, and others that carry both A-delta and C fiber pain signals to the thalamus have been identified. Other spinal cord fibers, known as wide dynamic range neurons, respond to A-delta and C fibers, but also to the much larger, more heavily myelinated A-beta fibers that carry touch, pressure and vibration signals. Ronald Melzack and Patrick Wall introduced their gate control theory in the 1965 Science article "Pain Mechanisms: A New Theory". The authors proposed that the thin C and A-delta (pain) and large diameter A-beta (touch, pressure, vibration) nerve fibers carry information from the site of injury to two destinations in the dorsal horn of the spinal cord, and that A-beta fiber signals acting on inhibitory cells in the dorsal horn can reduce the intensity of pain signals sent to the brain.
Three dimensions of pain
In 1968 Ronald Melzack and Kenneth Casey described chronic pain in terms of its three dimensions:
"sensory-discriminative" (sense of the intensity, location, quality and duration of the pain),
"affective-motivational" (unpleasantness and urge to escape the unpleasantness), and
"cognitive-evaluative" (cognitions such as appraisal, cultural values, distraction and hypnotic suggestion).They theorized that pain intensity (the sensory discriminative dimension) and unpleasantness (the affective-motivational dimension) are not simply determined by the magnitude of the painful stimulus, but "higher" cognitive activities can influence perceived intensity and unpleasantness. Cognitive activities may affect both sensory and affective experience or they may modify primarily the affective-motivational dimension. Thus, excitement in games or war appears to block both the sensory-discriminative and affective-motivational dimensions of pain, while suggestion and placebos may modulate only the affective-motivational dimension and leave the sensory-discriminative dimension relatively undisturbed. (p. 432) The paper ends with a call to action: "Pain can be treated not only by trying to cut down the sensory input by anesthetic block, surgical intervention and the like, but also by influencing the motivational-affective and cognitive factors as well." (p. 435)
Evolutionary and behavioral role
Pain is part of the body's defense system, producing a reflexive retraction from the painful stimulus, and tendencies to protect the affected body part while it heals, and avoid that harmful situation in the future. It is an important part of animal life, vital to healthy survival. People with congenital insensitivity to pain have reduced life expectancy.
In The Greatest Show on Earth: The Evidence for Evolution, biologist Richard Dawkins addresses the question of why pain should have the quality of being painful. He describes the alternative as a mental raising of a "red flag". To argue why that red flag might be insufficient, Dawkins argues that drives must compete with one another within living beings. The most "fit" creature would be the one whose pains are well balanced. Those pains which mean certain death when ignored will become the most powerfully felt. The relative intensities of pain, then, may resemble the relative importance of that risk to our ancestors. This resemblance will not be perfect, however, because natural selection can be a poor designer. This may have maladaptive results such as supernormal stimuli.
Pain, however, does not only wave a "red flag" within living beings but may also act as a warning sign and a call for help to other living beings. Especially in humans, who readily helped each other in case of sickness or injury throughout their evolutionary history, pain might be shaped by natural selection to be a credible and convincing signal of need for relief, help, and care.
Idiopathic pain (pain that persists after the trauma or pathology has healed, or that arises without any apparent cause) may be an exception to the idea that pain is helpful to survival, although some psychodynamic psychologists argue that such pain is psychogenic, enlisted as a protective distraction to keep dangerous emotions unconscious.
Thresholds
In pain science, thresholds are measured by gradually increasing the intensity of a stimulus in a procedure called quantitative sensory testing, in which stimuli such as electric current, thermal (heat or cold), mechanical (pressure, touch, vibration), ischemic, or chemical stimuli are applied to the subject to evoke a response. The "pain perception threshold" is the point at which the subject begins to feel pain, and the "pain threshold intensity" is the stimulus intensity at which the stimulus begins to hurt. The "pain tolerance threshold" is reached when the subject acts to stop the pain.
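As a rough illustration of such a procedure, the sketch below simulates a single ascending staircase in Python: intensity rises in fixed steps, and the intensities at which a hypothetical, hard-coded subject first reports pain and then stops the test stand in for the pain threshold and the pain tolerance threshold. All units and numbers are invented for illustration.

```python
# Simplified ascending staircase in the spirit of quantitative sensory
# testing. The "subject" is a pair of hard-coded trigger intensities.

def ascending_trial(reports_pain_at: float, stops_at: float,
                    start: float = 0.0, step: float = 0.5,
                    max_intensity: float = 50.0):
    """Return (pain_threshold, tolerance_threshold) in arbitrary units."""
    pain_threshold = None
    intensity = start
    while intensity <= max_intensity:
        if pain_threshold is None and intensity >= reports_pain_at:
            pain_threshold = intensity        # subject first reports pain
        if intensity >= stops_at:
            return pain_threshold, intensity  # subject acts to stop the test
        intensity += step
    return pain_threshold, None               # tolerance never reached

# Hypothetical subject: feels pain at 12.3 units, stops the test at 30.1.
pain_t, tolerance_t = ascending_trial(reports_pain_at=12.3, stops_at=30.1)
print(f"pain threshold ~{pain_t}, tolerance threshold ~{tolerance_t}")
```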
Assessment
A person's self-report is the most reliable measure of pain. Some health care professionals may underestimate pain severity. A definition of pain widely employed in nursing, emphasizing its subjective nature and the importance of believing patient reports, was introduced by Margo McCaffery in 1968: "Pain is whatever the experiencing person says it is, existing whenever he says it does". To assess intensity, the patient may be asked to locate their pain on a scale of 0 to 10, with 0 being no pain at all, and 10 the worst pain they have ever felt. Quality can be established by having the patient complete the McGill Pain Questionnaire indicating which words best describe their pain.
Visual analogue scale
The visual analogue scale is a common, reproducible tool in the assessment of pain and pain relief. The scale is a continuous line anchored by verbal descriptors, one at each extreme, with a higher score indicating greater pain intensity. It is usually 10 cm in length, with no intermediate descriptors, so as to avoid clustering of scores around a preferred numeric value. When applied as a pain descriptor, these anchors are often "no pain" and "worst imaginable pain". Cut-offs for pain classification have been recommended as no pain (0–4 mm), mild pain (5–44 mm), moderate pain (45–74 mm) and severe pain (75–100 mm).
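A minimal sketch of these cut-offs in Python, mapping a VAS mark (measured in mm from the "no pain" anchor) onto a category. The function is illustrative, not a standard instrument, and the handling of values falling between the quoted bands (for example 4.5 mm) is an assumption.

```python
def classify_vas(mark_mm: float) -> str:
    """Classify a 0-100 mm VAS measurement using the cut-offs cited above."""
    if not 0 <= mark_mm <= 100:
        raise ValueError("a VAS mark must lie on the 0-100 mm line")
    if mark_mm <= 4:      # boundary treatment between bands is an assumption
        return "no pain (0-4 mm)"
    if mark_mm <= 44:
        return "mild pain (5-44 mm)"
    if mark_mm <= 74:
        return "moderate pain (45-74 mm)"
    return "severe pain (75-100 mm)"

for mark in (3, 20, 60, 90):
    print(mark, "mm ->", classify_vas(mark))
```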
Multidimensional pain inventory
The Multidimensional Pain Inventory (MPI) is a questionnaire designed to assess the psychosocial state of a person with chronic pain. Combining the MPI characterization of the person with their IASP five-category pain profile is recommended for deriving the most useful case description.
Assessment in non-verbal people
Non-verbal people cannot use words to tell others that they are experiencing pain. However, they may be able to communicate through other means, such as blinking, pointing, or nodding.
With a non-communicative person, observation becomes critical, and specific behaviors can be monitored as pain indicators. Behaviors such as facial grimacing and guarding (trying to protect part of the body from being bumped or touched) indicate pain, as well as an increase or decrease in vocalizations, changes in routine behavior patterns and mental status changes. Patients experiencing pain may exhibit withdrawn social behavior and possibly experience a decreased appetite and decreased nutritional intake. A change in condition that deviates from baseline, such as moaning with movement or when manipulating a body part, and limited range of motion are also potential pain indicators. In patients who possess language but are incapable of expressing themselves effectively, such as those with dementia, an increase in confusion or display of aggressive behaviors or agitation may signal that discomfort exists, and further assessment is necessary. Changes in behavior may be noticed by caregivers who are familiar with the person's normal behavior.
Infants do feel pain, but lack the language needed to report it, and so communicate distress by crying. A non-verbal pain assessment should be conducted involving the parents, who will notice changes in the infant which may not be obvious to the health care provider. Pre-term babies are more sensitive to painful stimuli than those carried to full term.
Another approach, when pain is suspected, is to give the person treatment for pain, and then watch to see whether the suspected indicators of pain subside.
Other reporting barriers
The way in which one experiences and responds to pain is related to sociocultural characteristics, such as gender, ethnicity, and age. An aging adult may not respond to pain in the same way that a younger person might. Their ability to recognize pain may be blunted by illness or the use of medication. Depression may also keep an older adult from reporting they are in pain. Decline in self-care may also indicate the older adult is experiencing pain. They may be reluctant to report pain because they do not want to be perceived as weak, may feel it is impolite or shameful to complain, or may feel the pain is a form of deserved punishment.
Cultural barriers may also affect the likelihood of reporting pain. Patients may feel that certain treatments go against their religious beliefs. They may not report pain because they feel it is a sign that death is near. Many people fear the stigma of addiction, and avoid pain treatment so as not to be prescribed potentially addicting drugs. Many Asians do not want to lose respect in society by admitting they are in pain and need help, believing the pain should be borne in silence, while other cultures feel they should report pain immediately to receive immediate relief.
Gender can also be a perceived factor in reporting pain. Gender differences can be the result of social and cultural expectations, with women expected to be more emotional and show pain, and men more stoic. As a result, female pain is often stigmatized, leading to less urgent treatment of women based on social expectations of their ability to accurately report it. This leads to extended emergency room wait times for women and frequent dismissal of their ability to accurately report pain.
Diagnostic aid
Pain is a symptom of many medical conditions. Knowing the time of onset, location, intensity, pattern of occurrence (continuous, intermittent, etc.), exacerbating and relieving factors, and quality (burning, sharp, etc.) of the pain will help the examining physician to accurately diagnose the problem. For example, chest pain described as extreme heaviness may indicate myocardial infarction, while chest pain described as tearing may indicate aortic dissection.
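The two example associations can be read as entries in a tiny lookup table. The sketch below encodes them as a toy Python data structure; it is illustrative only, not a diagnostic tool, and the names are invented.

```python
# Toy lookup of the example associations in the text: a described chest
# pain quality -> the condition it may indicate. Illustrative only.
CHEST_PAIN_QUALITY_HINTS = {
    "extreme heaviness": "may indicate myocardial infarction",
    "tearing": "may indicate aortic dissection",
}

def hint_for(description: str) -> str:
    """Return the example association for a chest pain quality, if listed."""
    return CHEST_PAIN_QUALITY_HINTS.get(
        description.lower(), "no example association listed in this text")

print("tearing ->", hint_for("tearing"))
print("burning ->", hint_for("burning"))
```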
Physiological measurement
Functional magnetic resonance imaging brain scanning has been used to measure pain, and correlates well with self-reported pain.
Mechanisms
Nociceptive
Nociceptive pain is caused by stimulation of sensory nerve fibers that respond to stimuli approaching or exceeding harmful intensity (nociceptors), and may be classified according to the mode of noxious stimulation. The most common categories are "thermal" (e.g. heat or cold), "mechanical" (e.g. crushing, tearing, shearing, etc.) and "chemical" (e.g. iodine in a cut or chemicals released during inflammation). Some nociceptors respond to more than one of these modalities and are consequently designated polymodal.
Nociceptive pain may also be classed according to the site of origin and divided into "visceral", "deep somatic" and "superficial somatic" pain. Visceral structures (e.g., the heart, liver and intestines) are highly sensitive to stretch, ischemia and inflammation, but relatively insensitive to other stimuli that normally evoke pain in other structures, such as burning and cutting. Visceral pain is diffuse, difficult to locate and often referred to a distant, usually superficial, structure. It may be accompanied by nausea and vomiting and may be described as sickening, deep, squeezing, and dull. Deep somatic pain is initiated by stimulation of nociceptors in ligaments, tendons, bones, blood vessels, fasciae and muscles, and is dull, aching, poorly-localized pain. Examples include sprains and broken bones. Superficial somatic pain is initiated by activation of nociceptors in the skin or other superficial tissue, and is sharp, well-defined and clearly located. Examples of injuries that produce superficial somatic pain include minor wounds and minor (first degree) burns.
Neuropathic
Neuropathic pain is caused by damage or disease affecting any part of the nervous system involved in bodily feelings (the somatosensory system). Neuropathic pain may be divided into peripheral, central, or mixed (peripheral and central) neuropathic pain. Peripheral neuropathic pain is often described as "burning", "tingling", "electrical", "stabbing", or "pins and needles". Bumping the "funny bone" elicits acute peripheral neuropathic pain.
Some manifestations of neuropathic pain include: traumatic neuropathy, tic douloureux, painful diabetic neuropathy, and postherpetic neuralgia.
Nociplastic
Nociplastic pain is pain characterized by altered nociception, without evidence of real or threatened tissue damage and without disease or damage in the somatosensory system.
Psychogenic
Psychogenic pain, also called psychalgia or somatoform pain, is pain caused, increased or prolonged by mental, emotional or behavioral factors. Headache, back pain and stomach pain are sometimes diagnosed as psychogenic. Those affected are often stigmatized, because both medical professionals and the general public tend to think that pain from a psychological source is not "real". However, specialists consider that it is no less actual or hurtful than pain from any other source.
People with long-term pain frequently display psychological disturbance, with elevated scores on the Minnesota Multiphasic Personality Inventory scales of hysteria, depression and hypochondriasis (the "neurotic triad"). Some investigators have argued that it is this neuroticism that causes acute pain to turn chronic, but clinical evidence points the other direction, to chronic pain causing neuroticism. When long-term pain is relieved by therapeutic intervention, scores on the neurotic triad and anxiety fall, often to normal levels. Self-esteem, often low in chronic pain patients, also shows improvement once pain has resolved.
Management
Pain can be treated through a variety of methods. The most appropriate method depends upon the situation. Management of chronic pain can be difficult and may require the coordinated efforts of a pain management team, which typically includes medical practitioners, clinical pharmacists, clinical psychologists, physiotherapists, occupational therapists, physician assistants, and nurse practitioners.
Inadequate treatment of pain is widespread throughout surgical wards, intensive care units, and accident and emergency departments, in general practice, in the management of all forms of chronic pain including cancer pain, and in end of life care. This neglect extends to all ages, from newborns to the medically frail elderly. In the US, African and Hispanic Americans are more likely than others to suffer unnecessarily while in the care of a physician, and women's pain is more likely to be undertreated than men's.
The International Association for the Study of Pain advocates that the relief of pain should be recognized as a human right, that chronic pain should be considered a disease in its own right, and that pain medicine should have the full status of a medical specialty. It is a specialty only in China and Australia at this time. Elsewhere, pain medicine is a subspecialty under disciplines such as anesthesiology, physiatry, neurology, palliative medicine and psychiatry. In 2011, Human Rights Watch warned that tens of millions of people worldwide are still denied access to inexpensive medications for severe pain.
Medication
Acute pain is usually managed with medications such as analgesics and anesthetics. Caffeine, when added to pain medications such as ibuprofen, may provide some additional benefit. Ketamine can be used instead of opioids for short-term pain. Pain medications can cause paradoxical side effects, such as opioid-induced hyperalgesia (severe pain caused by long-term opioid use).
Sugar (sucrose), when taken by mouth, reduces pain in newborn babies undergoing some medical procedures (lancing of the heel, venipuncture, and intramuscular injections). Sugar does not remove pain from circumcision, and it is unknown whether sugar reduces pain for other procedures. Sugar did not affect pain-related electrical activity in the brains of newborns one second after the heel lance procedure. Sweet liquid by mouth moderately reduces the rate and duration of crying caused by immunization injection in children between one and twelve months of age.
Psychological
Individuals with more social support experience less cancer pain, take less pain medication, report less labor pain and are less likely to use epidural anesthesia during childbirth, or suffer from chest pain after coronary artery bypass surgery.
Suggestion can significantly affect pain intensity. About 35% of people report marked relief after receiving a saline injection they believed to be morphine. This placebo effect is more pronounced in people who are prone to anxiety, and so anxiety reduction may account for some of the effect, but it does not account for all of it. Placebos are more effective for intense pain than mild pain, and they produce progressively weaker effects with repeated administration.
It is possible for many with chronic pain to become so absorbed in an activity or entertainment that the pain is no longer felt, or is greatly diminished.
A number of meta-analyses have found clinical hypnosis to be effective in controlling pain associated with diagnostic and surgical procedures in both adults and children, as well as pain associated with cancer and childbirth. A 2007 review of 13 studies found evidence for the efficacy of hypnosis in the reduction of chronic pain under some conditions, though the number of patients enrolled in the studies was low, raising issues related to the statistical power to detect group differences, and most lacked credible controls for placebo or expectation. The authors concluded that "although the findings provide support for the general applicability of hypnosis in the treatment of chronic pain, considerably more research will be needed to fully determine the effects of hypnosis for different chronic-pain conditions."
Alternative medicine
An analysis of the 13 highest quality studies of pain treatment with acupuncture, published in January 2009, concluded there was little difference in the effect of real, faked and no acupuncture. However, more recent reviews have found some benefit. Additionally, there is tentative evidence for a few herbal medicines. There has been some interest in the relationship between vitamin D and pain, but the evidence so far from controlled trials for such a relationship, other than in osteomalacia, is inconclusive.
For chronic (long-term) lower back pain, spinal manipulation produces tiny, clinically insignificant, short-term improvements in pain and function, compared with sham therapy and other interventions. Spinal manipulation produces the same outcome as other treatments, such as general practitioner care, pain-relief drugs, physical therapy, and exercise, for acute (short-term) lower back pain.
Epidemiology
Pain is the main reason for visiting an emergency department in more than 50% of cases, and is present in 30% of family practice visits. Several epidemiological studies have reported widely varying prevalence rates for chronic pain, ranging from 12 to 80% of the population. It becomes more common as people approach death. A study of 4,703 patients found that 26% had pain in the last two years of life, increasing to 46% in the last month.
A survey of 6,636 children (0–18 years of age) found that, of the 5,424 respondents, 54% had experienced pain in the preceding three months. A quarter reported having experienced recurrent or continuous pain for three months or more, and a third of these reported frequent and intense pain. The intensity of chronic pain was higher for girls, and girls' reports of chronic pain increased markedly between ages 12 and 14.
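A quick arithmetic check of the survey figures quoted above, in Python. The derived counts are rounded estimates, and reading "a quarter" as a quarter of the children who reported pain (rather than of all respondents) is an assumption.

```python
surveyed, respondents = 6636, 5424

response_rate = respondents / surveyed     # ~82% of children responded
with_pain = round(0.54 * respondents)      # 54% reported pain: ~2929
recurrent = round(0.25 * with_pain)        # assumed base: those with pain

print(f"response rate: {response_rate:.0%}")
print(f"~{with_pain} children reported pain in the preceding three months")
print(f"~{recurrent} reported recurrent or continuous pain for 3+ months")
```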
Society and culture
Physical pain is a universal experience, and a strong motivator of human and animal behavior. As such, physical pain is used politically in relation to various issues such as pain management policy, drug control, animal rights or animal welfare, torture, and pain compliance. In some debates regarding physician-assisted suicide or euthanasia, pain has been used as an argument to permit people who are terminally ill to end their lives. The deliberate infliction of pain and the medical management of pain are both important aspects of biopower, a concept that encompasses the "set of mechanisms through which the basic biological features of the human species became the object of a political strategy".
In various contexts, the deliberate infliction of pain in the form of corporal punishment is used as retribution for an offence, for the purpose of disciplining or reforming a wrongdoer, or to deter attitudes or behaviour deemed unacceptable. In Western societies, the intentional infliction of severe pain (torture) was principally used to extract confession prior to its abolition in the latter part of the 19th century. Torture as a means to punish the citizen has been reserved for offences posing severe threat to the social fabric (for example, treason).
The administration of torture on bodies othered by the cultural narrative, those observed as not full members of society, met a resurgence in the 20th century, possibly due to heightened warfare. Many cultures use painful ritual practices as a catalyst for psychological transformation. The use of pain to transition to a cleansed and purified state is seen in Catholic self-flagellation practices, or personal catharsis in neo-primitive body suspension experiences.
Beliefs about pain play an important role in sporting cultures </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Castleman disease and can you explain it in depth? [/INST] Castleman disease (CD) describes a group of rare lymphoproliferative disorders that involve enlarged lymph nodes, and a broad range of inflammatory symptoms and laboratory abnormalities. Whether Castleman disease should be considered an autoimmune disease, cancer, or infectious disease is currently unknown.
Castleman disease includes at least three distinct subtypes: unicentric Castleman disease (UCD), human herpesvirus 8 associated multicentric Castleman disease (HHV-8-associated MCD), and idiopathic multicentric Castleman disease (iMCD). These are differentiated by the number and location of affected lymph nodes and the presence of human herpesvirus 8, a known causative agent in a portion of cases. Correctly classifying the Castleman disease subtype is important, as the three subtypes vary significantly in symptoms, clinical findings, disease mechanism, treatment approach, and prognosis. All forms involve overproduction of cytokines and other inflammatory proteins by the body's immune system as well as characteristic abnormal lymph node features that can be observed under the microscope. In the United States, approximately 4,300 to 5,200 new cases are diagnosed each year.
Castleman disease is named after Benjamin Castleman, who first described the disease in 1956. The Castleman Disease Collaborative Network is the largest organization dedicated to accelerating research and treatment for Castleman disease as well as improving patient care.
Classification
Castleman disease (CD) can involve one or more enlarged lymph nodes in a single region of the body (unicentric CD, UCD) or it can involve multiple enlarged lymph node regions (multicentric CD, MCD). Doctors classify the disease into different categories based on the number of enlarged lymph node regions and the underlying cause. There are four established subtypes of Castleman disease:
Unicentric Castleman disease
Unicentric Castleman disease (UCD) involves a single enlarged lymph node or multiple enlarged lymph nodes within a single region of the body that display microscopic features consistent with Castleman disease. It is also sometimes called localized Castleman disease.
The exact cause of UCD is unknown, but it appears to be due to a genetic change that occurs in the lymph node tissue, most similar to a benign tumor. In most cases of UCD, individuals exhibit no symptoms (asymptomatic). UCD symptoms tend to be mild and occur secondary to compression of surrounding structures by rapidly enlarging lymph nodes. Less commonly, some UCD patients can experience systemic inflammatory symptoms such as fever, fatigue, excessive sweating, weight loss, and skin rash as well as laboratory abnormalities such as low hemoglobin and elevated C-reactive protein. These symptoms are typically seen in MCD.
Surgery is considered by experts to be the first-line treatment option for all cases of UCD. When surgical excision of the enlarged lymph node(s) is not possible, treatment is recommended for symptomatic patients. If symptoms are due to compression, then rituximab is recommended. If symptoms are due to an inflammatory syndrome, then anti-interleukin-6 (IL-6) therapy is recommended. If these treatments are not effective, then radiation may be needed.
Multicentric Castleman disease (MCD)
In this form, patients have multiple regions of enlarged lymph nodes with characteristic microscopic features, flu-like symptoms, and organ dysfunction due to excessive cytokines or inflammatory proteins. MCD is further classified into three categories based on underlying cause: POEMS-associated MCD, HHV-8-associated MCD, and idiopathic MCD (iMCD).
POEMS-associated MCD
A cancerous cell population found in patients with POEMS syndrome (polyneuropathy, organomegaly, endocrinopathy, monoclonal plasma cell disorder, and skin changes) can cause MCD in a fraction of patients by producing cytokines that initiate a cytokine storm. In patients who have both POEMS syndrome and MCD, treatment should be directed at the POEMS syndrome.
HHV-8-associated multicentric Castleman disease (HHV-8-MCD)
HHV-8-associated MCD patients have multiple regions of enlarged lymph nodes and episodic inflammatory symptoms due to uncontrolled infection with HHV-8. HHV-8-associated MCD is most commonly diagnosed in HIV-infected or otherwise immunocompromised individuals who are not able to control HHV-8 infection. Thus, HHV-8-associated MCD patients may experience additional symptoms related to their HIV infection or other conditions. First-line treatment of HHV-8-associated MCD is rituximab, a drug used to eliminate a type of immune cell called the B lymphocyte. It is highly effective for HHV-8-associated MCD, but occasionally antivirals and/or cytotoxic chemotherapies are needed.
Idiopathic multicentric Castleman disease (iMCD)
Idiopathic multicentric Castleman disease (iMCD), which is the most common form of MCD, occurs for an unknown cause. There is no evidence of POEMS syndrome, HHV-8, or any other cancer or infectious disease. Though all forms of MCD involve excessive production of cytokines and a cytokine storm, iMCD has important differences in symptoms, disease course, and treatment from POEMS-associated MCD and HHV-8-associated MCD. First-line treatment for iMCD is anti-IL-6 therapy with siltuximab (or tocilizumab, if siltuximab is not available). Siltuximab is the only FDA-approved treatment for iMCD, and patients who respond to siltuximab tend to have long-term responses. In critically ill patients, chemotherapy and corticosteroids are recommended if the patient is demonstrating disease progression while on siltuximab. Approximately half of iMCD patients do not improve with anti-IL-6 therapy. In patients where siltuximab is not effective, other treatments such as rituximab and sirolimus can be used.
iMCD can be further sub-classified into three clinical subgroups:
iMCD with TAFRO Syndrome (iMCD-TAFRO): characterized by acute episodes of Thrombocytopenia, Anasarca, Fever, Renal dysfunction or myelofibrosis, and Organomegaly.
iMCD with idiopathic plasmacytic lymphadenopathy (iMCD-IPL): characterized by thrombocytosis, hypergammaglobulinemia, and a more chronic disease course.
iMCD, not otherwise specified (iMCD-NOS): diagnosed in iMCD patients who do not have iMCD-TAFRO or iMCD-IPL.
Pathology
Castleman disease is defined by a range of characteristic features seen on microscopic analysis (histology) of tissue from enlarged lymph nodes. Variations in the lymph node tissues of patients with CD have led to 4 histological classifications:
Plasmacytic: increased number of follicles with large hyperplastic germinal centers and sheetlike plasmacytosis (increased number of plasma cells). Germinal centers may also show regressed features.
Hyaline vascular: regressed germinal centers, follicular dendritic cell prominence or dysplasia, hypervascularity in interfollicular regions, sclerotic vessels, prominent mantle zones with an "onion-skin" appearance.
Hypervascular: similar to hyaline vascular features, but seen in iMCD rather than UCD. Includes regressed germinal centers, follicular dendritic cell prominence, hypervascularity in interfollicular regions, and prominent mantle zones with an "onion-skin" appearance.
Mixed: presence of a combination of hyaline vascular/hypervascular and plasmacytic features in the same lymph node.
UCD most commonly demonstrates hyaline vascular features, but plasmacytic features or a mix of features may also be seen. iMCD more commonly demonstrates plasmacytic features, but hypervascular features or a mix of features are also seen. All cases of HHV-8-associated MCD are thought to demonstrate plasmablastic features—similar to plasmacytic features, but with plasmablasts present. The clinical utility of subtyping Castleman disease by histologic features is uncertain, as histologic subtypes do not consistently predict disease severity or treatment response. Guidelines recommend against using histologic subtype to guide treatment decisions. Staining with latency-associated nuclear antigen (LANA-1), a marker for HHV-8 infection, should be measured in all forms of Castleman disease but is positive only in HHV-8-associated MCD.
Diseases other than Castleman disease can present with similar histologic findings in lymph node tissue, including:
Infectious causes: Epstein-Barr virus, human immunodeficiency virus, tuberculosis
Autoimmune diseases: Systemic lupus erythematosus, rheumatoid arthritis
Lymphoproliferative disorders: lymphoma, autoimmune lymphoproliferative syndrome
History
Unicentric Castleman disease was first described in a case series by Benjamin Castleman in 1956. By 1984, a number of case reports had been published describing a multicentric variant of the disease, with some reports describing an association with Kaposi's sarcoma. In 1995, the association between HHV-8 and Castleman disease was described in patients with HIV. Formal diagnostic criteria and a definition of the disease were established in 2016, which will allow for better understanding and the ability to appropriately track and research CD. In 2017, international consensus diagnostic criteria for idiopathic multicentric Castleman disease (iMCD) were established for the first time </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Encounter and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Rickettsiosis and can you explain it in depth? [/INST] </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Pulmonary heart disease and can you explain it in depth? [/INST] Pulmonary heart disease, also known as cor pulmonale, is the enlargement and failure of the right ventricle of the heart as a response to increased vascular resistance (such as from pulmonic stenosis) or high blood pressure in the lungs.
Chronic pulmonary heart disease usually results in right ventricular hypertrophy (RVH), whereas acute pulmonary heart disease usually results in dilatation. Hypertrophy is an adaptive response to a long-term increase in pressure. Individual muscle cells grow larger (in thickness) and change to drive the increased contractile force required to move the blood against greater resistance. Dilatation is a stretching (in length) of the ventricle in response to acute increased pressure.
To be classified as pulmonary heart disease, the cause must originate in the pulmonary circulation system; RVH due to a systemic defect is not classified as pulmonary heart disease. Two causes are vascular changes as a result of tissue damage (e.g. disease, hypoxic injury) and chronic hypoxic pulmonary vasoconstriction. If left untreated, death may result. The heart and lungs are intricately related; whenever the heart is affected by a disease, the lungs risk following, and vice versa.
Signs and symptoms
The symptoms/signs of pulmonary heart disease (cor pulmonale) can be non-specific and depend on the stage of the disorder, and can include blood backing up into the systemic venous system, including the hepatic vein. As pulmonary heart disease progresses, most individuals will develop symptoms like:
Shortness of breath
Wheezing
Cyanosis
Ascites
Jaundice
Enlargement of the liver
Raised jugular venous pressure (JVP)
Third heart sound
Intercostal recession
Presence of abnormal heart sounds
Causes
The causes of pulmonary heart disease (cor pulmonale) are the following:
Acute respiratory distress syndrome (ARDS)
COPD
Primary pulmonary hypertension
Blood clots in lungs
Kyphoscoliosis
Interstitial lung disease
Cystic fibrosis
Sarcoidosis
Obstructive sleep apnea (untreated)
Sickle cell anemia
Bronchopulmonary dysplasia (in infants)
Pathophysiology
The pathophysiology of pulmonary heart disease (cor pulmonale) has always indicated that an increase in right ventricular afterload causes RV failure (pulmonary vasoconstriction, anatomic disruption of the pulmonary vascular bed, and increased blood viscosity are usually involved); however, most of the time the right ventricle adjusts to a chronic pressure overload. According to Voelkel, et al., pressure overload is the initial step for changes in the RV; other factors include:
Ischemia
Inflammation
Oxidative damage
Epigenetics
Abnormal cardiac energetics
Diagnosis
Investigations available to determine the cause of cor pulmonale include the following:
Chest x-ray – right ventricular hypertrophy, right atrial dilatation, prominent pulmonary artery
ECG – right ventricular hypertrophy, dysrhythmia, P pulmonale (characteristic peaked P wave)
Thrombophilia screen – to detect chronic venous thromboembolism (proteins C and S, antithrombin III, homocysteine levels)
Differential diagnosis
The diagnosis of pulmonary heart disease is not easy as both lung and heart disease can produce similar symptoms. Therefore, the differential diagnosis (DDx) should assess:
Atrial myxoma
Congestive heart failure
Constrictive pericarditis
Infiltrative cardiomyopathies
Right heart failure (right ventricular infarction)
Ventricular septal defect
Treatment
The treatment for cor pulmonale can include the following: antibiotics, expectorants, oxygen therapy, diuretics, digitalis, vasodilators, and anticoagulants. Some studies have indicated that Shenmai injection with conventional treatment is safe and effective for chronic cor pulmonale.
Treatment requires diuretics (to decrease strain on the heart). Oxygen is often required to resolve the shortness of breath. Additionally, oxygen to the lungs also helps relax the blood vessels and eases right heart failure. When wheezing is present, the majority of individuals require a bronchodilator. A variety of medications have been developed to relax the blood vessels in the lung; calcium channel blockers are used but only work in a few cases, and according to NICE are not recommended for use at all.
Anticoagulants are used when venous thromboembolism is present. Venesection is used in severe secondary polycythemia (because of hypoxia), which improves symptoms though survival has not been proven to increase. Finally, transplantation of a single or double lung in extreme cases of cor pulmonale is also an option.
Epidemiology
Pulmonary heart disease (cor pulmonale) accounts for 7% of all heart disease in the U.S. According to Weitzenblum, et al., the mortality related to cor pulmonale is not easy to ascertain, as it is a complication of COPD.
See also
Bilharzial cor pulmonale.
References
Further reading
Forfia, Paul R.; Vaidya, Anjali; Wiegers, Susan E. (2013-01-01). "Pulmonary heart disease: The heart-lung interaction and its impact on patient phenotypes". Pulmonary Circulation. 3 (1): 5–19. doi:10.4103/2045-8932.109910. ISSN 2045-8932. PMC 3641739. PMID 23662171.
Taussig, Lynn M.; Landau, Louis I. (2008-04-09). Pediatric Respiratory Medicine. Elsevier Health Sciences </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Interstitial lung disease and can you explain it in depth? [/INST] Interstitial lung disease (ILD), or diffuse parenchymal lung disease (DPLD), is a group of respiratory diseases affecting the interstitium (the tissue and space around the alveoli (air sacs)) of the lungs. It concerns alveolar epithelium, pulmonary capillary endothelium, basement membrane, and perivascular and perilymphatic tissues. It may occur when an injury to the lungs triggers an abnormal healing response. Ordinarily, the body generates just the right amount of tissue to repair damage, but in interstitial lung disease, the repair process is disrupted, and the tissue around the air sacs (alveoli) becomes scarred and thickened. This makes it more difficult for oxygen to pass into the bloodstream. The disease presents itself with the following symptoms: shortness of breath, nonproductive coughing, fatigue, and weight loss, which tend to develop slowly, over several months. The average survival time for someone with this disease is between three and five years. The term ILD is used to distinguish these diseases from obstructive airways diseases.
There are specific types in children, known as children's interstitial lung diseases. The acronym ChILD is sometimes used for this group of diseases.
Prolonged ILD may result in pulmonary fibrosis, but this is not always the case. Idiopathic pulmonary fibrosis is interstitial lung disease for which no obvious cause can be identified (idiopathic) and is associated with typical findings both radiographic (basal and pleural-based fibrosis with honeycombing) and pathologic (temporally and spatially heterogeneous fibrosis, histopathologic honeycombing, and fibroblastic foci).
In 2015, interstitial lung disease, together with pulmonary sarcoidosis, affected 1.9 million people. They resulted in 122,000 deaths.
Causes
An ILD may be classified as to whether its cause is not known (idiopathic) or known (secondary).
Idiopathic
Idiopathic interstitial pneumonia is the term given to ILDs with an unknown cause. They represent the majority of cases of interstitial lung diseases (up to two-thirds of cases). They were subclassified by the American Thoracic Society in 2002 into 7 subgroups:
Idiopathic pulmonary fibrosis (IPF): the most common subgroup
Desquamative interstitial pneumonia (DIP)
Acute interstitial pneumonia (AIP): also known as Hamman-Rich syndrome
Nonspecific interstitial pneumonia (NSIP)
Respiratory bronchiolitis-associated interstitial lung disease (RB-ILD)
Cryptogenic organizing pneumonia (COP): also known by the older name bronchiolitis obliterans organizing pneumonia (BOOP)
Lymphoid interstitial pneumonia (LIP)
Secondary
Secondary ILDs are those diseases with a known etiology, including:
Connective tissue and Autoimmune diseases
Sarcoidosis
Rheumatoid arthritis
Systemic lupus erythematosus
Systemic sclerosis
Polymyositis
Dermatomyositis
Antisynthetase syndrome
Inhaled substances (Pneumoconiosis)
Inorganic
Silicosis
Asbestosis
Berylliosis
Industrial printing chemicals (e.g. carbon black, ink mist)
Organic
Hypersensitivity pneumonitis (extrinisic allergic alveolitis)
Drug-induced
Antibiotics
Chemotherapeutic drugs
Antiarrhythmic agents
Cigarette smoking
Infection
Coronavirus disease 2019
Atypical pneumonia
Pneumocystis pneumonia (PCP)
Tuberculosis
Chlamydia trachomatis
Respiratory Syncytial Virus
Malignancy
Lymphangitic carcinomatosis
Predominantly in children
Diffuse developmental disorders
Growth abnormalities with deficient alveolarisation
Infant conditions of undefined cause
ILD related to alveolar surfactant region
Diagnosis
Investigation is tailored towards the symptoms and signs. A proper and detailed history looking for occupational exposures and for signs of the conditions listed above is the first and probably the most important part of the workup in patients with interstitial lung disease. Pulmonary function tests usually show a restrictive defect with decreased diffusion capacity (DLCO).
A lung biopsy is required if the clinical history and imaging are not clearly suggestive of a specific diagnosis or malignancy cannot otherwise be ruled out. In cases where a lung biopsy is indicated, a trans-bronchial biopsy is usually unhelpful, and a surgical lung biopsy is often required.
X-rays
Chest radiography is usually the first test to detect interstitial lung diseases, but the chest radiograph can be normal in up to 10% of patients, especially early in the disease process.
High resolution CT of the chest is the preferred modality, and differs from routine CT of the chest. Conventional (regular) chest CT examines 7–10 mm slices obtained at 10 mm intervals; high resolution CT examines 1–1.5 mm slices at 10 mm intervals using a high spatial frequency reconstruction algorithm. The HRCT therefore provides approximately 10 times more resolution than conventional chest CT, allowing the HRCT to elicit details that cannot otherwise be visualized.
Radiologic appearance alone, however, is not adequate and should be interpreted in the clinical context, keeping in mind the temporal profile of the disease process.
Interstitial lung diseases can be classified according to radiologic patterns.
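The slice figures above imply very different through-plane sampling, which the back-of-the-envelope sketch below illustrates in Python. The mid-range slice thicknesses and the 300 mm chest length are assumed example values: classic HRCT trades contiguous coverage for thin, highly detailed sections.

```python
def coverage_fraction(slice_mm: float, interval_mm: float) -> float:
    """Fraction of the scanned length actually imaged by the slices."""
    return min(slice_mm / interval_mm, 1.0)

chest_mm = 300.0  # assumed craniocaudal extent, for illustration only

# Mid-range slice thicknesses from the figures above, both at 10 mm intervals.
for name, slice_mm in (("conventional CT", 8.5), ("HRCT", 1.25)):
    imaged = coverage_fraction(slice_mm, 10.0) * chest_mm
    print(f"{name}: {slice_mm} mm slices image ~{imaged:.0f} mm "
          f"of the {chest_mm:.0f} mm length")
```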
Pattern of opacities
Consolidation
Acute: Alveolar hemorrhage syndromes, acute eosinophilic pneumonia, acute interstitial pneumonia, cryptogenic organizing pneumonia
Chronic: Chronic eosinophilic pneumonia, cryptogenic organizing pneumonia, lymphoproliferative disorders, pulmonary alveolar proteinosis, sarcoidosis
Linear or reticular opacities
Acute: Pulmonary edema
Chronic: Idiopathic pulmonary fibrosis, connective tissue-associated interstitial lung diseases, asbestosis, sarcoidosis, hypersensitivity pneumonitis, drug-induced lung disease
Small nodules
Acute: Hypersensitivity pneumonitis
Chronic: Hypersensitivity pneumonitis, sarcoidosis, silicosis, coal workers' pneumoconiosis, respiratory bronchiolitis, alveolar microlithiasis
Cystic airspaces
Chronic: Pulmonary Langerhans cell histiocytosis, pulmonary lymphangioleiomyomatosis, honeycomb lung caused by IPF or other diseases
Ground glass opacities
Acute: Alveolar hemorrhage syndromes, pulmonary edema, hypersensitivity pneumonitis, acute inhalational exposures, drug-induced lung diseases, acute interstitial pneumonia
Chronic: Nonspecific interstitial pneumonia, respiratory bronchiolitis associated interstitial lung disease, desquamative interstitial pneumonia, drug-induced lung diseases, pulmonary alveolar proteinosis
Thickened alveolar septa
Acute: Pulmonary edema
Chronic: Lymphangitic carcinomatosis, pulmonary alveolar proteinosis, sarcoidosis, pulmonary veno-occlusive disease
Distribution
Upper lung predominance: Pulmonary Langerhans cell histiocytosis, silicosis, coal workers' pneumoconiosis, carmustine-related pulmonary fibrosis, respiratory bronchiolitis associated with interstitial lung disease
Lower lung predominance: Idiopathic pulmonary fibrosis, pulmonary fibrosis associated with connective tissue diseases, asbestosis, chronic aspiration
Central predominance (perihilar): Sarcoidosis, berylliosis
Peripheral predominance: Idiopathic pulmonary fibrosis, chronic eosinophilic pneumonia, cryptogenic organizing pneumonia
Associated findings
Pleural effusion or thickening: Pulmonary edema, connective tissue diseases, asbestosis, lymphangitic carcinomatosis, lymphoma, lymphangioleiomyomatosis, drug-induced lung diseases
Lymphadenopathy: Sarcoidosis, silicosis, berylliosis, lymphangitic carcinomatosis, lymphoma, lymphocytic interstitial pneumonia
Genetic testing
For some types of paediatric ILDs and a few forms of adult ILDs, genetic causes have been identified. These may be identified by blood tests. For a limited number of cases this is a definite advantage, as a precise molecular diagnosis can be made; frequently there is then no need for a lung biopsy. Testing is available for the following (a small lookup sketch of these gene-to-disease pairs follows the list):
ILDs related to alveolar surfactant region
Surfactant-Protein-B Deficiency (Mutations in SFTPB)
Surfactant-Protein-C Deficiency (Mutations in SFTPC)
ABCA3-Deficiency (Mutations in ABCA3)
Brain Lung Thyroid Syndrome (Mutations in TTF1)
Congenital Pulmonary Alveolar Proteinosis (Mutations in CSF2RA, CSF2RB)
Diffuse developmental disorder
Alveolar Capillary Dysplasia (Mutations in FoxF1)
Idiopathic pulmonary fibrosis
Mutations in telomerase reverse transcriptase (TERT)
Mutations in telomerase RNA component (TERC)
Mutations in the regulator of telomere elongation helicase 1 (RTEL1)
Mutations in poly(A)-specific ribonuclease (PARN)
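As a compact way to hold the pairs just listed, here is a minimal Python lookup sketch; the dictionary simply restates the list above, and the helper function is illustrative.

```python
# Minimal lookup restating the gene-disease pairs listed above.
GENETIC_ILD = {
    "SFTPB": "Surfactant-Protein-B deficiency",
    "SFTPC": "Surfactant-Protein-C deficiency",
    "ABCA3": "ABCA3 deficiency",
    "TTF1": "Brain Lung Thyroid syndrome",
    "CSF2RA": "Congenital pulmonary alveolar proteinosis",
    "CSF2RB": "Congenital pulmonary alveolar proteinosis",
    "FOXF1": "Alveolar capillary dysplasia",
    "TERT": "Idiopathic pulmonary fibrosis",
    "TERC": "Idiopathic pulmonary fibrosis",
    "RTEL1": "Idiopathic pulmonary fibrosis",
    "PARN": "Idiopathic pulmonary fibrosis",
}

def ild_for_gene(gene: str) -> str:
    """Return the ILD associated with a gene symbol, per the list above."""
    return GENETIC_ILD.get(gene.upper(), "not in this section's list")

print(ild_for_gene("TERT"))   # Idiopathic pulmonary fibrosis
print(ild_for_gene("SFTPC"))  # Surfactant-Protein-C deficiency
```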
Treatment
ILD is not a single disease but encompasses many different pathological processes. Hence treatment is different for each disease. If a specific occupational exposure cause is found, the person should avoid that environment </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Lactation failure and can you explain it in depth? [/INST] In breastfeeding, lactation failure may refer to:
Primary lactation failure, a cause of low milk supply in breastfeeding mothers
Cessation of breastfeeding before the mother had planned to stop, usually as a result of breastfeeding difficulties
Low milk supply in general
Lactation failure can result in neonatal jaundice.
References
Lawrence, Ruth A. (1986). "Maternal Factors in Lactation Failure". Human Lactation 2 </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Pathogenic bacteria and can you explain it in depth? [/INST] Pathogenic bacteria are bacteria that can cause disease. This article focuses on the bacteria that are pathogenic to humans. Most species of bacteria are harmless and are often beneficial, but others can cause infectious diseases. The number of these pathogenic species in humans is estimated to be fewer than a hundred. By contrast, several thousand species are part of the gut flora present in the digestive tract.
The body is continually exposed to many species of bacteria, including beneficial commensals, which grow on the skin and mucous membranes, and saprophytes, which grow mainly in the soil and in decaying matter. The blood and tissue fluids contain nutrients sufficient to sustain the growth of many bacteria. The body has defence mechanisms that enable it to resist microbial invasion of its tissues and give it a natural immunity or innate resistance against many microorganisms.
Pathogenic bacteria are specially adapted and endowed with mechanisms for overcoming the normal body defences, and can invade parts of the body, such as the blood, where bacteria are not normally found. Some pathogens invade only the surface epithelium, skin or mucous membrane, but many travel more deeply, spreading through the tissues and disseminating by the lymphatic and blood streams. In some rare cases a pathogenic microbe can infect an entirely healthy person, but infection usually occurs only if the body's defence mechanisms are damaged by some local trauma or an underlying debilitating disease, such as wounding, intoxication, chilling, fatigue, and malnutrition. In many cases, it is important to differentiate infection and colonization, which is when the bacteria are causing little or no harm.
Tuberculosis, caused by the bacterium Mycobacterium tuberculosis, has one of the highest disease burdens, killing 1.4 million people in 2019, mostly in sub-Saharan Africa. Pathogenic bacteria contribute to other globally important diseases, such as pneumonia, which can be caused by bacteria such as Staphylococcus, Streptococcus and Pseudomonas, and foodborne illnesses, which can be caused by bacteria such as Shigella, Campylobacter, and Salmonella. Pathogenic bacteria also cause infections such as tetanus, typhoid fever, diphtheria, syphilis, and leprosy. Pathogenic bacteria are also a cause of high infant mortality rates in developing countries.
Most pathogenic bacteria can be grown in cultures and identified by Gram stain and other methods. Bacteria grown in this way are often tested to find which antibiotics will be an effective treatment for the infection. For hitherto unknown pathogens, Koch's postulates are the standard to establish a causative relationship between a microbe and a disease.
Diseases
Each species has specific effects and causes symptoms in people who are infected. Some people who are infected with pathogenic bacteria do not have symptoms. Immunocompromised individuals are more susceptible to pathogenic bacteria.
Pathogenic susceptibility
Some pathogenic bacteria cause disease under certain conditions, such as entry through the skin via a cut, through sexual activity or through a compromised immune function.
Some species of Streptococcus and Staphylococcus are part of the normal skin microbiota and typically reside on healthy skin or in the nasopharyngeal region. Yet these species can potentially initiate skin infections. Streptococcal infections include sepsis, pneumonia, and meningitis. These infections can become serious, creating a systemic inflammatory response resulting in massive vasodilation, shock, and death.
Other bacteria are opportunistic pathogens and cause disease mainly in people with immunosuppression or cystic fibrosis. Examples of these opportunistic pathogens include Pseudomonas aeruginosa, Burkholderia cenocepacia, and Mycobacterium avium.
Intracellular
Obligate intracellular parasites (e.g. Chlamydophila, Ehrlichia, Rickettsia) have the ability to only grow and replicate inside other cells. Even these intracellular infections may be asymptomatic, requiring an incubation period. An example of this is Rickettsia, which causes typhus. Another causes Rocky Mountain spotted fever.
Chlamydia are intracellular parasites. These pathogens can cause pneumonia or urinary tract infection and may be involved in coronary heart disease.
Other groups of intracellular bacterial pathogens include Salmonella, Neisseria, Brucella, Mycobacterium, Nocardia, Listeria, Francisella, Legionella, and Yersinia pestis. These can exist intracellularly, but can also exist outside of host cells.
Infections in specific tissue
Bacterial pathogens often cause infection in specific areas of the body. Others are generalists.
Bacterial vaginosis is a condition of the vaginal microbiota in which an excessive growth of Gardnerella vaginalis and other mostly anaerobic bacteria displace the beneficial Lactobacilli species that maintain healthy vaginal microbial populations.
Bacterial meningitis is a bacterial inflammation of the meninges, which are the protective membranes covering the brain and spinal cord.
Bacterial pneumonia is a bacterial infection of the lungs.
Urinary tract infection is predominantly caused by bacteria. Symptoms include the strong and frequent sensation or urge to urinate, pain during urination, and urine that is cloudy. The most frequent cause is Escherichia coli. Urine is typically sterile but contains a variety of salts and waste products. Bacteria can ascend into the bladder or kidney, causing cystitis and nephritis.
Bacterial gastroenteritis is caused by enteric, pathogenic bacteria. These pathogenic species are usually distinct from the usually harmless bacteria of the normal gut flora, but a different strain of the same species may be pathogenic. The distinction is sometimes difficult, as in the case of Escherichia.
Bacterial skin infections include:
Impetigo is a highly contagious bacterial skin infection commonly seen in children. It is caused by Staphylococcus aureus and Streptococcus pyogenes.
Erysipelas is an acute streptococcal bacterial infection of the deeper skin layers that spreads via the lymphatic system.
Cellulitis is a diffuse inflammation of connective tissue with severe inflammation of dermal and subcutaneous layers of the skin. Cellulitis can be caused by normal skin flora or by contagious contact, and usually occurs through open skin, cuts, blisters, cracks in the skin, insect bites, animal bites, burns, surgical wounds, intravenous drug injection, or sites of intravenous catheter insertion. In most cases it is the skin on the face or lower legs that is affected, though cellulitis can occur in other tissues.
Mechanisms of damage
The symptoms of disease appear as pathogenic bacteria damage host tissues or interfere with their function. The bacteria can damage host cells directly or indirectly by provoking an immune response that inadvertently damages host cells, or by releasing toxins.
Direct
Once pathogens attach to host cells, they can cause direct damage as the pathogens use the host cell for nutrients and produce waste products. For example, Streptococcus mutans, a component of dental plaque, metabolizes dietary sugar and produces acid as a waste product. The acid decalcifies the tooth surface to cause dental caries.
Toxin production
Endotoxins are the lipid portions of lipopolysaccharides that are part of the outer membrane of the cell wall of gram-negative bacteria. Endotoxins are released when the bacteria lyse, which is why, after antibiotic treatment, symptoms can worsen at first as the bacteria are killed and release their endotoxins. Exotoxins are secreted into the surrounding medium or released when the bacteria die and the cell wall breaks apart.
Indirect
An excessive or inappropriate immune response triggered by an infection may damage host cells.
Survival in host
Nutrients
Iron is required for humans, as well as for the growth of most bacteria. To obtain free iron, some pathogens secrete proteins called siderophores, which take the iron away from iron-transport proteins by binding to the iron even more tightly. Once the iron-siderophore complex is formed, it is taken up by siderophore receptors on the bacterial surface and the iron is brought into the bacterium.
Bacterial pathogens also require access to carbon and energy sources for growth. To avoid competition with host cells for glucose, which is the main energy source used by human cells, many pathogens, including the respiratory pathogen Haemophilus influenzae, specialise in using other carbon sources such as lactate that are abundant in the human body.
Identification
Typically, identification is done by growing the organism in a wide range of cultures, which can take up to 48 hours. The growth is then identified visually or genomically. The cultured organism is then subjected to various assays to observe reactions to help further identify species and strain.
Treatment
Bacterial infections may be treated with antibiotics, which are classified as bactericidal if they kill bacteria or bacteriostatic if they just prevent bacterial growth. There are many types of antibiotics, and each class inhibits a process that is different in the pathogen from that found in the host. For example, the antibiotics chloramphenicol and tetracycline inhibit the bacterial ribosome but not the structurally different eukaryotic ribosome, so they exhibit selective toxicity. Antibiotics are used both in treating human disease and in intensive farming to promote animal growth. Both uses may be contributing to the rapid development of antibiotic resistance in bacterial populations. Phage therapy, using bacteriophages, can also be used to treat certain bacterial infections.
Prevention
Infections can be prevented by antiseptic measures such as sterilizing the skin prior to piercing it with the needle of a syringe and by proper care of indwelling catheters. Surgical and dental instruments are also sterilized to prevent infection by bacteria. Disinfectants such as bleach are used to kill bacteria or other pathogens on surfaces to prevent contamination and further reduce the risk of infection. Bacteria in food are killed by cooking to temperatures above 73 °C (163 °F).
List of genera and microscopy features
Many genera contain pathogenic bacterial species </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Pneumonitis and can you explain it in depth? [/INST] Pneumonitis describes general inflammation of lung tissue. Possible causative agents include radiation therapy of the chest, exposure to medications used during chemotherapy, the inhalation of debris (e.g., animal dander), aspiration, herbicides or fluorocarbons, and some systemic diseases. If unresolved, continued inflammation can result in irreparable damage such as pulmonary fibrosis. Pneumonitis is distinguished from pneumonia on the basis of causation as well as its manifestation. Pneumonia can be described as pneumonitis combined with consolidation and exudation of lung tissue due to infection with microorganisms. The distinction can be further understood with pneumonitis as the umbrella term for such respiratory inflammation (incorporating pneumonia and pulmonary fibrosis as major diseases) and pneumonia as a localized infection. For most infections, the immune response of the body is enough to control and clear the infection within a couple of days, but if the tissue and the cells cannot fight off the infection, pus begins to form in the lungs, which then hardens into a lung abscess or suppurative pneumonitis. Immunodeficient patients who do not get treated immediately for a respiratory infection may develop more severe infections or die. Pneumonitis can be classified into several different specific subcategories, including hypersensitivity pneumonitis, radiation pneumonitis, acute interstitial pneumonitis, and chemical pneumonitis. These all share similar symptoms, but differ in causative agents. Diagnosis of pneumonitis remains challenging, but several different treatment paths (corticosteroids, oxygen therapy, avoidance) have seen success.
Causes
Alveoli are the primary structure affected by pneumonitis. Any particles smaller than 5 microns can enter the alveoli of the lungs. These tiny air sacs facilitate the passage of oxygen from inhaled air to the bloodstream. In the case of pneumonitis, this exchange of oxygen is more difficult because irritants have caused inflammation of the alveoli. Since no single irritant has been definitively identified as the cause of pneumonitis, there are several possible causes.
Viral infection. Measles can cause severe pneumonitis, and ribavirin has been proposed as a possible treatment. CMV is another cause.
Pneumonia
Radiation therapy
Inhaling chemicals, such as sodium hydroxide
Interstitial lung disease
Sepsis
Adverse reaction to medications
Hypersensitivity to inhaled agents
Inhalation of spores of some species of mushroom (bronchoalveolar allergic syndrome)
Mercury exposure
Smoking
Overexposure to chlorine
Bronchial obstruction (obstructive pneumonitis or post-obstructive pneumonitis)
Ascariasis (during parasite migration)
Aspirin overdose, some antibiotics, and chemotherapy drugs
“Farmer’s lung” and “hot tub lung” are common names for types of hypersensitivity pneumonitis that result from exposure to some types of thermophilic actinomyces, mycobacteria and molds.
Avian proteins in bird feces and feathers
Whole body or chest radiation therapy used for cancer treatment
Symptoms
Physical manifestations of pneumonitis range from mild cold-like symptoms to respiratory failure. Most frequently, those with pneumonitis experience shortness of breath and sometimes a dry cough. Symptoms usually appear a few hours after exposure and peak at approximately eighteen to twenty-four hours. Other symptoms may include:
Malaise
Fever
Dyspnea
Flushed and/or discolored skin
Sweating
Small and fast inhalations
Without proper treatment, pneumonitis may become chronic pneumonitis, resulting in fibrosis of the lungs and its effects:
Difficulty breathing
Food aversion
Lethargy
End-stage fibrosis and respiratory failure eventually lead to death in cases without proper management of chronic pneumonitis.
Diagnosis
A chest X-ray or CT is necessary to differentiate between pneumonitis and pneumonia of an infectious etiology. Some degree of pulmonary fibrosis may be evident in a CT, which is indicative of chronic pulmonary inflammatory processes. Diagnosis of pneumonitis is often difficult, as it depends on a high degree of clinical suspicion when evaluating a patient with a recent onset of a possible interstitial lung disease. In addition, interpreting pathologic and radiographic test results remains a challenge to clinicians. Pneumonitis is often difficult to recognize and discern from other interstitial lung diseases. Diagnostic procedures currently available include:
Evaluation of patient history and possible exposure to a known causative agent
High-Resolution Computed Tomography (HRCT) consistent with pneumonitis
Bronchoalveolar lavage with lymphocytosis
Lung biopsy consistent with pneumonitis histopathology
Exposure to causative agents of pneumonitis in a specific environment can be confirmed through aero/microbiologic analysis to verify their presence. Subsequent testing of patient serum for evidence of serum-specific IgG antibodies confirms patient exposure. Clinical tests include chest radiography or HRCT, which may show centrilobular nodular and ground-glass opacities with air-trapping in the middle and upper lobes of the lungs. Fibrosis may also be evident. Bronchoalveolar lavage (BAL) findings coinciding with pneumonitis typically include a lymphocytosis with a low CD4:CD8 ratio. Reticular or linear patterns may be observed in diagnostic imaging. Pneumonitis may cause subpleural honeycombing, changing the shape of the air spaces in an image, which may be used to identify the respiratory disease. The interlobular septa may also thicken and indicate pneumonitis when viewed on a scan. Histological samples of lung tissue with pneumonitis include the presence of poorly formed granulomas or mononuclear cell infiltrates. The presence of bronchocentric lymphohistiocytic interstitial pneumonia with chronic bronchiolitis and non-necrotising granulomas coincides with pneumonitis. Since pneumonitis manifests in all areas of the lungs, imaging such as chest X-rays and computerized tomography (CT) scans are useful diagnostic tools. While pneumonia is a localized infection, pneumonitis is widespread. A spirometer may also be used to measure pulmonary function.
During external examination, clubbing (swelling of fingertip tissue and increase in angle at the nail bed), and basal crackles may be observed.
For hypersensitivity pneumonitis, diagnosis typically relies on blood tests and chest X-rays, and, depending on the severity of the infection, doctors may recommend a bronchoscopy. Blood tests are important for the early detection of other causative substances, allowing possible causes of the hypersensitivity pneumonitis to be ruled out.
Classification
Pneumonitis can be separated into several distinct categories based upon causative agent.
Hypersensitivity Pneumonitis (Extrinsic Allergic Alveolitis) describes the inflammation of alveoli which occurs after inhalation of organic dusts. These particles can be proteins, bacteria, or mold spores and are usually specific to an occupation.
Acute Interstitial Pneumonitis can result from many different irritants in the lungs and usually is resolved in under a month.
Chemical Pneumonitis is caused by toxic substances reaching the lower airways of the bronchial tree. This causes a chemical burn and severe inflammation </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Soft tissue and can you explain it in depth? [/INST] Soft tissue is all the tissue in the body that is not hardened by the processes of ossification or calcification such as bones and teeth. Soft tissue connects, surrounds or supports internal organs and bones, and includes muscle, tendons, ligaments, fat, fibrous tissue, lymph and blood vessels, fasciae, and synovial membranes.
It is sometimes defined by what it is not – such as "nonepithelial, extraskeletal mesenchyme exclusive of the reticuloendothelial system and glia".
Composition
The characteristic substances inside the extracellular matrix of soft tissue are collagen, elastin and ground substance. Normally the soft tissue is highly hydrated because of the ground substance. Fibroblasts are the most common cells responsible for producing the fibers and ground substance of soft tissue. Variations of fibroblasts, like chondroblasts, may also produce these substances.
Mechanical characteristics
At small strains, elastin confers stiffness to the tissue and stores most of the strain energy. The collagen fibers are comparatively inextensible and are usually loose (wavy, crimped). With increasing tissue deformation the collagen is gradually stretched in the direction of deformation. When taut, these fibers produce a sharp increase in tissue stiffness. The composite behavior is analogous to a nylon stocking, whose rubber band plays the role of elastin while the nylon plays the role of collagen. In soft tissues, the collagen limits the deformation and protects the tissues from injury.
Human soft tissue is highly deformable, and its mechanical properties vary significantly from one person to another. Impact testing results showed that the stiffness and the damping resistance of a test subject’s tissue are correlated with the mass, velocity, and size of the striking object. Such properties may be useful for forensics investigation when contusions were induced. When a solid object impacts a human soft tissue, the energy of the impact will be absorbed by the tissues to reduce the effect of the impact or the pain level; subjects with more soft tissue thickness tended to absorb the impacts with less aversion.
Soft tissues have the potential to undergo large deformations and still return to the initial configuration when unloaded, i.e. they are hyperelastic materials, and their stress-strain curve is nonlinear. The soft tissues are also viscoelastic, incompressible and usually anisotropic. Some viscoelastic properties observable in soft tissues are: relaxation, creep and hysteresis. In order to describe the mechanical response of soft tissues, several methods have been used. These methods include: hyperelastic macroscopic models based on strain energy, mathematical fits where nonlinear constitutive equations are used, and structurally based models where the response of a linear elastic material is modified by its geometric characteristics.
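The viscoelastic behaviors just listed (relaxation, creep, hysteresis) are often illustrated with simple spring-dashpot models. Below is a minimal numerical sketch, not from the source, of stress relaxation in a standard linear solid, one of the simplest such models; the function name and all parameter values are illustrative assumptions rather than measured tissue data.

```python
import math

def relaxation_modulus(t, g0=2.0, g_inf=0.5, tau=1.5):
    """Relaxation modulus G(t) = G_inf + (G0 - G_inf) * exp(-t / tau).

    g0    -- instantaneous modulus at t = 0
    g_inf -- long-time equilibrium modulus
    tau   -- relaxation time constant
    All values here are illustrative, not fitted to any tissue.
    """
    return g_inf + (g0 - g_inf) * math.exp(-t / tau)

# Under a constant step strain e0, stress decays from g0*e0 toward g_inf*e0,
# which is the "relaxation" behavior described above.
e0 = 0.1
for t in (0.0, 1.0, 5.0, 20.0):
    print(f"t = {t:5.1f}  stress = {relaxation_modulus(t) * e0:.4f}")
```

The decaying exponential captures relaxation only; creep and hysteresis would need the complementary compliance function and cyclic loading, respectively.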
Pseudoelasticity
Even though soft tissues have viscoelastic properties, i.e. stress as a function of strain rate, they can be approximated by a hyperelastic model after preconditioning to a load pattern. After some cycles of loading and unloading the material, the mechanical response becomes independent of strain rate.
{\displaystyle \mathbf {S} =\mathbf {S} (\mathbf {E} ,{\dot {\mathbf {E} }})\quad \rightarrow \quad \mathbf {S} =\mathbf {S} (\mathbf {E} )}
Despite the independence of strain rate, preconditioned soft tissues still present hysteresis, so the mechanical response can be modeled as hyperelastic with different material constants at loading and unloading. By this method the elasticity theory is used to model an inelastic material. Fung called this model pseudoelastic to point out that the material is not truly elastic.
Residual stress
In the physiological state, soft tissues usually present residual stress that may be released when the tissue is excised. Physiologists and histologists must be aware of this fact to avoid mistakes when analyzing excised tissues. This retraction usually causes a visual artifact.
Fung-elastic material
Fung developed a constitutive equation for preconditioned soft tissues, which is
{\displaystyle W={\frac {1}{2}}\left[q+c\left(e^{Q}-1\right)\right]}
with
{\displaystyle q=a_{ijkl}E_{ij}E_{kl}\qquad Q=b_{ijkl}E_{ij}E_{kl}}
where q and Q are quadratic forms of the Green-Lagrange strains {\displaystyle E_{ij}}, and {\displaystyle a_{ijkl}}, {\displaystyle b_{ijkl}} and {\displaystyle c} are material constants. {\displaystyle W} is the strain energy function per unit volume, which is the mechanical strain energy for a given temperature.
Isotropic simplification
The Fung model, simplified with the isotropic hypothesis (same mechanical properties in all directions), can be written in terms of the principal stretches {\displaystyle \lambda _{i}}:
{\displaystyle W={\frac {1}{2}}\left[a(\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}-3)+b\left(e^{c(\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}-3)}-1\right)\right]}
where a, b and c are constants.
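As a quick numerical check of the isotropic form above, the sketch below evaluates W for an incompressible uniaxial stretch, where the lateral stretches equal the inverse square root of the axial one; the material constants a, b, c are arbitrary assumed values, not fitted to any tissue.

```python
import math

def fung_isotropic_w(l1, l2, l3, a=1.0, b=0.5, c=2.0):
    """Isotropic Fung strain energy
    W = 0.5 * [a*(I1 - 3) + b*(exp(c*(I1 - 3)) - 1)],
    with I1 = l1^2 + l2^2 + l3^2 the sum of squared principal stretches.
    a, b, c are material constants (illustrative values here)."""
    i1m3 = l1**2 + l2**2 + l3**2 - 3.0
    return 0.5 * (a * i1m3 + b * (math.exp(c * i1m3) - 1.0))

# Incompressible uniaxial stretch: lateral stretches are 1/sqrt(l1).
for stretch in (1.0, 1.1, 1.3, 1.5):
    lateral = 1.0 / math.sqrt(stretch)
    print(f"stretch = {stretch:.2f}  "
          f"W = {fung_isotropic_w(stretch, lateral, lateral):.4f}")
```

At a stretch of 1 the energy is zero, and the exponential term makes W grow sharply with stretch, reproducing the collagen-driven stiffening described earlier.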
Simplification for small and big stretches
For small strains, the exponential term is very small, thus negligible.
{\displaystyle W={\frac {1}{2}}a_{ijkl}E_{ij}E_{kl}}
On the other hand, the linear term is negligible when the analysis relies only on large strains.
{\displaystyle W={\frac {1}{2}}c\left(e^{b_{ijkl}E_{ij}E_{kl}}-1\right)}
Gent-elastic material
{\displaystyle W=-{\frac {\mu J_{m}}{2}}\ln \left(1-\left({\frac {\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}-3}{J_{m}}}\right)\right)}
where {\displaystyle \mu >0} is the shear modulus for infinitesimal strains and {\displaystyle J_{m}>0} is a stiffening parameter, associated with limiting chain extensibility. This constitutive model cannot be stretched in uniaxial tension beyond a maximal stretch {\displaystyle \lambda _{m}}, which, for an incompressible material, is the positive root of
{\displaystyle \lambda _{m}^{2}+2\lambda _{m}^{-1}-J_{m}-3=0}
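A similarly minimal sketch, again with illustrative constants not taken from the source, shows the Gent energy diverging as the first stretch invariant approaches the limit set by J_m:

```python
import math

def gent_w(l1, l2, l3, mu=1.0, jm=5.0):
    """Gent strain energy W = -(mu*jm/2) * ln(1 - (I1 - 3)/jm).

    mu -- shear modulus for infinitesimal strains
    jm -- limiting value of I1 - 3 (illustrative value)
    """
    i1m3 = l1**2 + l2**2 + l3**2 - 3.0
    if i1m3 >= jm:
        raise ValueError("stretch exceeds the limiting chain extensibility")
    return -0.5 * mu * jm * math.log(1.0 - i1m3 / jm)

# Incompressible uniaxial tension: W blows up near the maximal stretch.
for stretch in (1.2, 1.6, 2.0, 2.4):
    lateral = 1.0 / math.sqrt(stretch)
    print(f"stretch = {stretch:.2f}  W = {gent_w(stretch, lateral, lateral):.4f}")
```

With jm = 5 the uniaxial limit sits near a stretch of about 2.7, so the printed energies grow without bound as that stretch is approached, mirroring the limiting extensibility discussed above.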
Remodeling and growth
Soft tissues have the potential to grow and remodel in reaction to long-term chemical and mechanical changes. The rate at which fibroblasts produce tropocollagen is proportional to these stimuli. Diseases, injuries and changes in the level of mechanical load may induce remodeling. An example of this phenomenon is the thickening of farmers' hands. The remodeling of connective tissues is well known in bones through Wolff's law (bone remodeling). Mechanobiology is the science that studies the relation between stress and growth at the cellular level. Growth and remodeling have a major role in the cause of some common soft tissue diseases, like arterial stenosis and aneurysms, and any soft tissue fibrosis. Another instance of tissue remodeling is the thickening of the cardiac muscle in response to the growth of blood pressure detected by the arterial wall.
Imaging techniques
There are certain issues that have to be kept in mind when choosing an imaging technique for visualizing soft tissue extracellular matrix (ECM) components. The accuracy of the image analysis relies on the properties and the quality of the raw data and, therefore, the choice of the imaging technique must be based upon issues such as:
Having an optimal resolution for the components of interest;
Achieving high contrast of those components;
Keeping the artifact count low;
Having the option of volume data acquisition;
Keeping the data volume low;
Establishing an easy and reproducible setup for tissue analysis.
The collagen fibers are approximately 1-2 μm thick </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Father and can you explain it in depth? [/INST] A father is the male parent of a child. Besides the paternal bonds of a father to his children, the father may have a parental, legal, and social relationship with the child that carries with it certain rights and obligations. An adoptive father is a male who has become the child's parent through the legal process of adoption. A biological father is the male genetic contributor to the creation of the infant, through sexual intercourse or sperm donation. A biological father may have legal obligations to a child not raised by him, such as an obligation of monetary support. A putative father is a man whose biological relationship to a child is alleged but has not been established. A stepfather is a male who is the husband of a child's mother and they may form a family unit, but who generally does not have the legal rights and responsibilities of a parent in relation to the child.
The adjective "paternal" refers to a father, and comparatively "maternal" to a mother. The verb "to father" means to procreate or to sire a child, from which also derives the noun "fathering". Biological fathers determine the sex of their child through a sperm cell which contains either an X chromosome (female) or Y chromosome (male). Related terms of endearment are dad (dada, daddy), baba, papa, pappa, papasita, (pa, pap) and pop. A male role model that children can look up to is sometimes referred to as a father-figure.
Paternal rights
The paternity rights of a father with regard to his children differ widely from country to country, often reflecting the level of involvement and roles expected by that society.
Paternity leave
Parental leave is when a father takes time off to support his newly born or adopted baby. Paid paternity leave first began in Sweden in 1976, and is paid in more than half of European Union countries. In the case of male same-sex couples, the law often makes no provision for either one or both fathers to take paternity leave.
Child custody
Fathers' rights movements such as Fathers 4 Justice argue that family courts are biased against fathers.
Child support
Child support is an ongoing periodic payment made by one parent to the other; it is normally paid by the parent who does not have custody.
Paternity fraud
An estimated 2% of British fathers experience paternity fraud during a non-paternity event, bringing up a child they wrongly believe to be their biological offspring.
Role of the father
In almost all cultures fathers are regarded as secondary caregivers. This perception is slowly changing with more and more fathers becoming primary caregivers, while mothers go to work, or in single parenting situations and male same-sex parenting couples.
Fatherhood in the Western World
In the West, the image of the married father as the primary wage-earner is changing. The social context of fatherhood plays an important part in the well-being of men and their children. In the United States 16% of single parents were men as of 2013.
Importance of father or father-figure
Involved fathers offer developmentally specific provisions to their children and are impacted themselves by doing so. Active father figures may play a role in reducing behavior and psychological problems in young adults. An increased amount of father–child involvement may help increase a child's social stability, educational achievement, and their potential to have a solid marriage as an adult. Their children may also be more curious about the world around them and develop greater problem solving skills. Children who were raised with fathers perceive themselves to be more cognitively and physically competent than their peers without a father. Mothers raising children together with a father reported less severe disputes with their child. The father-figure is not always a child's biological father, and some children will have a biological father as well as a step- or nurturing father. When a child is conceived through sperm donation, the donor will be the "biological father" of the child.
Fatherhood as legitimate identity can be dependent on domestic factors and behaviors. For example, a study of the relationship between fathers, their sons, and home computers found that the construction of fatherhood and masculinity required that fathers display computer expertise.
Determination of parenthood
Roman law defined fatherhood as "Mater semper certa; pater est quem nuptiae demonstrant" ("The [identity of the] mother is always certain; the father is whom the marriage vows indicate"). The recent emergence of accurate scientific testing, particularly DNA testing, has resulted in the family law relating to fatherhood experiencing rapid changes.
History of fatherhood
Many male animals do not participate in the rearing of their young. The development of human men as creatures involved in their offspring's upbringing took place during the Stone Age. In medieval and most of modern European history, caring for children was predominantly the domain of mothers, whereas fathers in many societies provided for the family as a whole. Since the 1950s, social scientists and feminists have increasingly challenged gender roles in Western countries, including that of the male breadwinner. Policies are increasingly targeting fatherhood as a tool for changing gender relations. Research from various societies suggests that since the middle of the 20th century fathers have become increasingly involved in the care of their children.
Patricide
In early human history there have been notable instances of patricide. For example:
Tukulti-Ninurta I (r. 1243–1207 B.C.E.), Assyrian king, was killed by his own son after sacking Babylon.
Sennacherib (r. 704–681 B.C.E.), Assyrian king, was killed by two of his sons for his desecration of Babylon.
King Kassapa I (473 to 495 CE) creator of the Sigiriya citadel of ancient Sri Lanka killed his father king Dhatusena for the throne.
Emperor Yang of Sui in Chinese history allegedly killed his father, Emperor Wen of Sui.
Beatrice Cenci, Italian noblewoman who, according to legend, killed her father after he imprisoned and raped her. She was condemned and beheaded for the crime along with her brother and her stepmother in 1599.
Lizzie Borden (1860–1927) allegedly killed her father and her stepmother with an axe in Fall River, Massachusetts, in 1892. She was acquitted, but her innocence is still disputed.
Iyasus I of Ethiopia (1654–1706), one of the great warrior emperors of Ethiopia, was deposed by his son Tekle Haymanot in 1706 and subsequently assassinated.
In more contemporary history there have also been instances of father–offspring conflicts, such as:
Chiyo Aizawa murdered her own father, who had been raping her for fifteen years, on October 5, 1968, in Japan. The incident changed the Criminal Code of Japan regarding patricide.
Kip Kinkel (1982- ), an Oregon boy who was convicted of killing his parents at home and two fellow students at school on May 20, 1998.
Sarah Marie Johnson (1987- ), an Idaho girl who was convicted of killing both parents on the morning of September 2, 2003.
Dipendra of Nepal (1971–2001) reportedly massacred much of his family at a royal dinner on June 1, 2001, including his father King Birendra, mother, brother, and sister.
Christopher Porco (1983- ), was convicted on August 10, 2006, of the murder of his father and attempted murder of his mother with an axe.
Terminology
Biological fathers
Baby Daddy – A biological father who bears financial responsibility for a child, but with whom the mother has little or no contact.
Birth father – the biological father of a child who, due to adoption or parental separation, does not raise the child or cannot take care of one.
Biological father – or sometimes simply referred to as "Father" is the genetic father of a child.
Posthumous father – father died before children were born (or even conceived in the case of artificial insemination).
Putative father – unwed man whose legal relationship to a child has not been established but who is alleged to be or claims that he may be the biological father of a child.
Sperm donor – an anonymous or known biological father who provides his sperm to be used in artificial insemination or in vitro fertilisation in order to father a child for a third party female. Also used as a slang term meaning "baby daddy".
Surprise father – where the man did not know that there was a child until possibly years afterward
Teenage father/youthful father – Father who is still a teenager.
Non-biological (social and legal relationship)
Adoptive father – the father who has adopted a child
Cuckolded father – where the child is the product of the mother's adulterous relationship
DI Dad – social/legal father of children produced via donor insemination (where a donor's sperm were used to impregnate the DI Dad's spouse)
Father-in-law – the father of one's spouse
Foster father – child is raised by a man who is not the biological or adoptive father usually as part of a couple.
Mother's partner – assumption that the current partner fills the father role
Mother's husband – under some jurisdictions (e.g. in Quebec civil law), if the mother is married to another man, the latter will be defined as the father
Presumed father – where a presumption of paternity has determined that a man is a child's father, regardless of whether he actually is the biological father
Social father – where a man takes de facto responsibility for a child, such as caring for one who has been abandoned or orphaned (the child is known as a "child of the family" in English law)
Stepfather – a married non-biological father where the child is from a previous relationship
Fatherhood defined by contact level
Absent father – father who cannot or will not spend time with his child(ren)
Second father – a non-parent whose contact and support is robust enough that near parental bond occurs (often used for older male siblings who significantly aid in raising a child, sometimes for older men who took care of younger friends who have no families)
Stay-at-home dad – the male equivalent of a housewife with child, where his spouse is breadwinner
Weekend/holiday father – where child(ren) only stay(s) with father on weekends, holidays, etc.
Non-human fatherhood
For some animals, it is the fathers who take care of the young.
Darwin's frog (Rhinoderma darwini) fathers carry eggs in the vocal pouch.
Most male waterfowl are very protective in raising their offspring, sharing scout duties with the female. Examples are the geese, swans, gulls, loons, and a few species of ducks. When the families of most of these waterfowl travel, they usually travel in a line and the fathers are usually the ones guarding the offspring at the end of the line while the mothers lead the way.
The female seahorse (Hippocampus) deposits eggs into the pouch on the male's abdomen. The male releases sperm into the pouch, fertilizing the eggs. The embryos develop within the male's pouch, nourished by their individual yolk sacs.
Male catfish keep their eggs in their mouth, foregoing eating until they hatch.
Male emperor penguins alone incubate their eggs; females do no incubation. Rather than building a nest, each male protects his egg by balancing it on the tops of his feet, enclosed in a special brood pouch. Once the eggs are hatched however, the females will rejoin the family.
Male beavers secure their offspring along with the females during the first few hours of their lives. As the young beavers mature, their fathers will teach them how to search for materials to build and repair their own dams, before they disperse to find their own mates.
Wolf fathers help feed, protect, and play with their pups. In some cases, several generations of wolves live in the pack, giving pups the care of grandparents, aunts/uncles, and siblings, in addition to parents. The father wolf is also the one who does most of the hunting when the females are securing their newborn pups.
Coyotes are monogamous and male coyotes hunt and bring food to their young.
Dolphin fathers help in the care of the young. Newborns are held on the surface of the water by both parents until they are ready to swim on their own.
A number of bird species have active, caring fathers who assist the mothers, such as the waterfowls mentioned above.
Apart from humans, fathers in only a few primate species care for their young. Those that do are tamarins and marmosets. Particularly strong care is also shown by siamangs, where fathers carry infants after their second year. In titi and owl monkeys, fathers carry their infants 90% of the time, with "titi monkey infants developing a preference for their fathers over their mothers". Silverback gorillas have less of a role in the families, but most of them serve as an extra protector, keeping the families from harm and sometimes approaching enemies to distract them so that the family can escape unnoticed. Many species, though, display little or no paternal role in caring for offspring. The male leaves the female soon after mating and long before any offspring are born. It is the females who must do all the work of caring for the young.
A male bear leaves the female shortly after mating and will kill and sometimes eat any bear cub he comes across, even if the cub is his. Bear mothers spend much of their cubs' early lives protecting them from males. (Many artistic works, such as advertisements and cartoons, depict kindly "papa bears", when this is the exact opposite of reality.)
Domesticated dog fathers show little interest in their offspring, and unlike wolves, are not monogamous with their mates and are thus likely to leave them after mating.
Male lions will tolerate cubs, but only allow them to eat meat from dead prey after they have had their fill. A few are quite cruel towards their young and may hurt or kill them with little provocation. A male who kills another male to take control of his pride will also usually kill any cubs belonging to that competing male. However, it is also the males who are responsible for guarding the pride while the females hunt, and male lions are the only felines that actually have a role in fatherhood.
Male rabbits generally tolerate kits but unlike the females, they often show little interest in the kits and are known to play rough with their offspring when they are mature, especially towards their sons. This behaviour may also be part of an instinct to drive the young males away to prevent incest matings between the siblings. The females will eventually disperse from the warren as soon as they mature but the father does not drive them off like he normally does to the males.
Horse stallions and pig boars have little to no role in parenting, nor are they monogamous with their mates. They will tolerate young to a certain extent, but due to their aggressive male nature, they are generally annoyed by the energetic exuberance of the young, and may hurt or even kill the young. Thus, stud stallions and boars are not kept in the same pen as their young or other females.Finally, in some species neither the father nor the mother provides any care.
This is true for most insects, reptiles, and fish.
See also
Father complex
Fathers rights movement
Paternal age effect
Patricide
Paternal bond
Putative father
Putative father registry
Responsible fatherhood
Shared Earning/Shared Parenting Marriage
Sociology of fatherhood
"Father" can also refer metaphorically to a person who is considered the founder of a body of knowledge or of an institution. In such context the meaning of "father" is similar to that of "founder". See List of persons considered father or mother of a field.
Further reading
Elizabeth Preston (27 Jun 2021). "The riddle of how humans evolved to have fathers". Knowable Magazine / BBC.com.
References
Bibliography
Inhorn, Marcia C.; Chavkin, Wendy; Navarro, José-Alberto, eds. (2015). Globalized fatherhood. New York: Berghahn. ISBN 9781782384373. Studies by anthropologists, sociologists, and cultural geographers -
Kraemer, Sebastian (1991). "The Origins of Fatherhood: An Ancient Family Process". Family Process. 30 (4): 377–392 </s> |
<s> [INST] <<SYS>> You are a responsible assistant committed to providing accurate and helpful responses. Avoid sharing harmful or biased content. Clarify any unclear or incorrect questions rather than providing false or misleading information. <</SYS>> What is Myopia and can you explain it in depth? [/INST] Myopia, also known as near-sightedness and short-sightedness, is an eye disease where light focuses in front of, instead of on, the retina. As a result, distant objects appear blurry while close objects appear normal. Other symptoms may include headaches and eye strain. Severe near-sightedness is associated with an increased risk of retinal detachment, cataracts, and glaucoma. The underlying mechanism involves the length of the eyeball growing too long or, less commonly, the lens being too strong. It is a type of refractive error. Diagnosis is by eye examination. Tentative evidence indicates that the risk of near-sightedness can be decreased by having young children spend more time outside. This decrease in risk may be related to natural light exposure. Near-sightedness can be corrected with eyeglasses, contact lenses, or refractive surgery. Eyeglasses are the easiest and safest method of correction. Contact lenses can provide a wider field of vision, but are associated with a risk of infection. Refractive surgery permanently changes the shape of the cornea. Near-sightedness is the most common eye problem and is estimated to affect 1.5 billion people (22% of the world population). Rates vary significantly in different areas of the world. Rates among adults are between 15% and 49%. Among children, it affects 1% of rural Nepalese, 4% of South Africans, 12% of people in the US, and 37% in some large Chinese cities. In China the proportion of girls affected is slightly higher than that of boys. Rates have increased since the 1950s. Uncorrected near-sightedness is one of the most common causes of vision impairment globally, along with cataracts, macular degeneration, and vitamin A deficiency.
Etymology
The term myopia is of Koine Greek origin: μυωπία myōpia (or μυωπίασις myōpiasis) "short-sight(-ness)", from Ancient Greek μύωψ myōps "short-sighted (man), (man) with eyes getting shut", from μύειν myein "to shut the eyes" and ὤψ ōps "eye, look, sight" (GEN ὠπός ōpos). The opposite of myopia in English is hyperopia (long-sightedness).
Signs and symptoms
A myopic individual can see clearly out to a certain distance (the far point of the eye), but objects placed beyond this distance appear blurred. If the extent of the myopia is great enough, even standard reading distances can be affected. Upon routine examination of the eyes, the vast majority of myopic eyes appear structurally identical to nonmyopic eyes. Onset is often in school children, with worsening between the ages of 8 and 15.
Causes
The underlying cause is believed to be a combination of genetic and environmental factors. Risk factors include doing work that involves focusing on close objects, greater time spent indoors, urbanization, and a family history of the condition. It is also associated with a high socioeconomic class and a higher level of education. A 2012 review could not find strong evidence for any single cause, although many theories have been discredited. Twin studies indicate that at least some genetic factors are involved. Myopia has been increasing rapidly throughout the developed world, suggesting environmental factors are involved. A single-author literature review in 2021 proposed that myopia is the result of corrective lenses interfering with emmetropization.
Genetics
A risk for myopia may be inherited from one's parents. Genetic linkage studies have identified 18 possible loci on 15 different chromosomes that are associated with myopia, but none of these loci is part of the candidate genes that cause myopia. Instead of a simple one-gene locus controlling the onset of myopia, a complex interaction of many mutated proteins acting in concert may be the cause. Instead of myopia being caused by a defect in a structural protein, defects in the control of these structural proteins might be the actual cause of myopia. A collaboration of all myopia studies worldwide identified 16 new loci for refractive error in individuals of European ancestry, of which 8 were shared with Asians. The new loci include candidate genes with functions in neurotransmission, ion transport, retinoic acid metabolism, extracellular matrix remodeling and eye development. The carriers of the high-risk genes have a tenfold increased risk of myopia. Aberrant genetic recombination and gene splicing in the OPN1LW and OPN1MW genes that code for two retinal cone photopigment proteins can produce high myopia by interfering with refractive development of the eye. Human population studies suggest that the contribution of genetic factors accounts for 60–90% of variance in refraction. However, the currently identified variants account for only a small fraction of myopia cases, suggesting the existence of a large number of yet unidentified low-frequency or small-effect variants, which underlie the majority of myopia cases.
Environmental factors
Environmental factors which increase the risk of nearsightedness include insufficient light exposure, low physical activity, near work, and increased years of education. One hypothesis is that a lack of normal visual stimuli causes improper development of the eyeball. Under this hypothesis, "normal" refers to the environmental stimuli that the eyeball evolved under. Modern humans who spend most of their time indoors, in dimly or fluorescently lit buildings, may be at risk of developing myopia. People, and children especially, who spend more time doing physical exercise and outdoor play have lower rates of myopia, suggesting the increased magnitude and complexity of the visual stimuli encountered during these types of activities decrease myopic progression. There is preliminary evidence that the protective effect of outdoor activities on the development of myopia is due, at least in part, to the effect of long hours of exposure to daylight on the production and the release of retinal dopamine. Myopia can be induced with minus spherical lenses, and overminus in prescription lenses can induce myopia progression. Overminus during refraction can be avoided through various techniques and tests, such as fogging, plus to blur, and the duochrome test. The near work hypothesis, also referred to as the "use-abuse theory", states that spending time involved in near work strains the intraocular and extraocular muscles. Some studies support the hypothesis, while other studies do not. While an association is present, it is not clearly causal. Nearsightedness is also more common in children with diabetes, childhood arthritis, uveitis, and systemic lupus erythematosus.
Mechanism
Because myopia is a refractive error, the physical cause of myopia is comparable to any optical system that is out of focus. Borish and Duke-Elder classified myopia by these physical causes:
Axial myopia is attributed to an increase in the eye's axial length
Refractive myopia is attributed to the condition of the refractive elements of the eye. Borish further subclassified refractive myopia:
Curvature myopia is attributed to excessive, or increased, curvature of one or more of the refractive surfaces of the eye, especially the cornea. In those with Cohen syndrome, myopia appears to result from high corneal and lenticular power.
Index myopia is attributed to variation in the index of refraction of one or more of the ocular media. As with any optical system experiencing a defocus aberration, the effect can be exaggerated or masked by changing the aperture size. In the case of the eye, a large pupil emphasizes refractive error and a small pupil masks it. This phenomenon can cause a condition in which an individual has greater difficulty seeing in low-illumination areas, even though there are no symptoms in bright light, such as daylight. Under rare conditions, edema of the ciliary body can cause an anterior displacement of the lens, inducing a myopia shift in refractive error.
Diagnosis
A diagnosis of myopia is typically made by an eye care professional, usually an optometrist or ophthalmologist. During a refraction, an autorefractor or retinoscope is used to give an initial objective assessment of the refractive status of each eye, then a phoropter is used to subjectively refine the patient's eyeglass prescription. Other types of refractive error are hyperopia, astigmatism, and presbyopia.
Types
Various forms of myopia have been described by their clinical appearance:
Simple myopia: Myopia in an otherwise normal eye, typically less than 4.00 to 6.00 diopters. This is the most common form of myopia.
Degenerative myopia, also known as malignant, pathological, or progressive myopia, is characterized by marked fundus changes, such as posterior staphyloma, and associated with a high refractive error and subnormal visual acuity after correction. This form of myopia gets progressively worse over time. Degenerative myopia has been reported as one of the main causes of visual impairment.
Pseudomyopia is the blurring of distance vision brought about by spasm of the accommodation system.
Nocturnal myopia: Without adequate stimulus for accurate accommodation, the accommodation system partially engages, pushing distance objects out of focus.
Nearwork-induced transient myopia (NITM): short-term myopic far point shift immediately following a sustained near visual task. Some authors argue for a link between NITM and the development of permanent myopia.
Instrument myopia: over-accommodation when looking into an instrument such as a microscope.
Induced myopia, also known as acquired myopia, results from various medications, increases in glucose levels, nuclear sclerosis, oxygen toxicity (e.g., from diving or from oxygen and hyperbaric therapy) or other anomalous conditions. Sulphonamide therapy can cause ciliary body edema, resulting in anterior displacement of the lens, pushing the eye out of focus. Elevation of blood-glucose levels can also cause edema (swelling) of the crystalline lens as a result of sorbitol accumulating in the lens. This edema often causes temporary myopia. Scleral buckles, used in the repair of retinal detachments may induce myopia by increasing the axial length of the eye.
Index myopia is attributed to variation in the index of refraction of one or more of the ocular media. Cataracts may lead to index myopia.
Form deprivation myopia occurs when the eyesight is deprived by limited illumination and vision range, or the eye is modified with artificial lenses or deprived of clear form vision. In lower vertebrates, this kind of myopia seems to be reversible within short periods of time. Myopia is often induced this way in various animal models to study the pathogenesis and mechanism of myopia development.
Degree
The degree of myopia is described in terms of the power of the ideal correction, which is measured in diopters; a short classification sketch follows this list:
Myopia between −0.00 and −0.50 diopters is usually classified as emmetropia.
Low myopia usually describes myopia between −0.50 and −3.00 diopters.
Moderate myopia usually describes myopia between −3.00 and −6.00 diopters. Those with moderate amounts of myopia are more likely to have pigment dispersion syndrome or pigmentary glaucoma.
High myopia usually describes myopia of −6.00 or more. People with high myopia are more likely to have retinal detachments and primary open angle glaucoma. They are also more likely to experience floaters, shadow-like shapes which appear in the field of vision. In addition to this, high myopia is linked to macular degeneration, cataracts, and significant visual impairment.
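The cut-offs above translate directly into a small classifier. This is a minimal sketch; the function name and the exact boundary handling are illustrative choices, since sources draw the category lines slightly differently.

```python
def classify_myopia(diopters):
    """Map a spherical correction in diopters (negative = myopic)
    to the degree categories listed above."""
    magnitude = -diopters
    if magnitude < 0.5:
        return "emmetropia"
    if magnitude < 3.0:
        return "low myopia"
    if magnitude < 6.0:
        return "moderate myopia"
    return "high myopia"

for rx in (-0.25, -2.00, -4.50, -7.25):
    print(f"{rx:+.2f} D -> {classify_myopia(rx)}")
```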
Age at onset
Myopia is sometimes classified by the age at onset:
Congenital myopia, also known as infantile myopia, is present at birth and persists through infancy.
Youth onset myopia occurs in early childhood or the teenage years, and the ocular power can keep varying until the age of 21, before which any form of corrective surgery is usually not recommended by ophthalmic specialists around the world.
School myopia appears during childhood, particularly the school-age years. This form of myopia is attributed to the use of the eyes for close work during the school years. A 2004-2015 Singapore-Sydney study of children of Chinese descent found that time spent on outdoor activities was a factor.
Adult onset myopia
Early adult onset myopia occurs between ages 20 and 40.
Late adult onset myopia occurs after age 40.
Prevention
Various methods have been employed in an attempt to decrease the progression of myopia, although studies show mixed results. Many myopia treatment studies have a number of design drawbacks: small numbers, lack of adequate control group, and failure to mask examiners from knowledge of treatments used. Among myopia specialists, mydriatic eyedrops are the most favored approach, applied by almost 75% in North America and more than 80% in Australia. A 2015 review suggested that increased outdoor time protects young children from myopia. A 2020 study of global practice patterns used by paediatric ophthalmologists to decrease the progression of myopia showed behavioral intervention (counseling to spend more time outdoors and less time with near-work) to be favored by 25% of specialists, usually in addition to medications.
Glasses and contacts
The use of reading glasses when doing close work may improve vision by reducing or eliminating the need to accommodate. Altering the use of eyeglasses between full-time, part-time, and not at all does not appear to alter myopia progression. The American Optometric Association's Clinical Practice Guidelines found evidence of effectiveness of bifocal lenses and recommend them as a method for "myopia control". In some studies, bifocal and progressive lenses have not shown differences in altering the progression of myopia compared to placebo. In 2019, contact lenses to prevent the worsening of nearsightedness in children were approved for use in the United States. This "MiSight" type claims to work by focusing peripheral light in front of the retina.
Medication
Anti-muscarinic topical medications in children under 18 years of age may slow the worsening of myopia. These treatments include pirenzepine gel, cyclopentolate eye drops, and atropine eye drops. While these treatments were shown to be effective in slowing the progression of myopia, side effects included light sensitivity and near blur.
Other methods
Scleral reinforcement surgery is aimed to cover the thinning posterior pole with a supportive material to withstand intraocular pressure and prevent further progression of the posterior staphyloma. The strain is reduced, although damage from the pathological process cannot be reversed. By stopping the progression of the disease, vision may be maintained or improved.
Treatment
The National Institutes of Health says there is no known way of preventing myopia, and the use of glasses or contact lenses does not affect its progression, unless the glasses or contact lenses are too strong a prescription. There is no universally accepted method of preventing myopia, and proposed methods need additional study to determine their effectiveness. Optical correction using glasses or contact lenses is the most common treatment; other approaches include orthokeratology and refractive surgery. Medications (mostly atropine) and vision therapy can be effective in addressing the various forms of pseudomyopia.
Glasses and contacts
Corrective lenses bend the light entering the eye in a way that places a focused image accurately onto the retina. The power of any lens system can be expressed in diopters, the reciprocal of its focal length in meters. Corrective lenses for myopia have negative powers because a divergent lens is required to move the far point of focus out to the distance. More severe myopia needs lens powers further from zero (more negative). However, strong eyeglass prescriptions create distortions such as prismatic movement and chromatic aberration. Strongly near-sighted wearers of contact lenses do not experience these distortions because the lens moves with the cornea, keeping the optic axis in line with the visual axis and because the vertex distance has been reduced to zero.
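Since lens power is the reciprocal of focal length in meters, two small helper functions can make the paragraph above concrete: one recovers the far point of an uncorrected myopic eye, and one applies the standard vertex-distance relation F' = F / (1 - d*F), which explains why contact lenses need slightly less minus power than glasses. This is a hedged sketch; the 12 mm vertex distance is a typical assumed value, not something the text specifies.

```python
def far_point_m(power_d):
    """Far point of an uncorrected myopic eye, in meters.

    A -2.00 D myope, for example, sees clearly out to about 0.5 m.
    """
    if power_d >= 0:
        raise ValueError("expected a negative (myopic) power")
    return 1.0 / abs(power_d)

def power_at_cornea(spectacle_power_d, vertex_m=0.012):
    """Effective power translated from the spectacle plane to the cornea
    using F' = F / (1 - d * F); d = 12 mm is an assumed typical vertex."""
    return spectacle_power_d / (1.0 - vertex_m * spectacle_power_d)

print(far_point_m(-2.0))                # 0.5 m
print(round(power_at_cornea(-6.0), 2))  # about -5.6 D at the cornea
```

The second function illustrates why, for strong prescriptions, moving the lens to the cornea (vertex distance zero) changes the required power, as noted above for contact lens wearers.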
Surgery
Refractive surgery includes procedures which alter the corneal curvature of some structure of the eye or which add additional refractive means inside the eye.
Photorefractive keratectomy
Photorefractive keratectomy (PRK) involves ablation of corneal tissue from the corneal surface using an excimer laser. The amount of tissue ablation corresponds to the amount of myopia. While PRK is a relatively safe procedure for up to 6 dioptres of myopia, the recovery phase post-surgery is usually painful.
LASIK
In a LASIK pre-procedure, a corneal flap is cut into the cornea and lifted to allow the excimer laser beam access to the exposed corneal tissue. After that, the excimer laser ablates the tissue according to the required correction. When the flap again covers the cornea, the change in curvature generated by the laser ablation proceeds to the corneal surface. Though LASIK is usually painless and involves a short rehabilitation period post-surgery, it can potentially result in flap complications and loss of corneal stability (post-LASIK keratectasia).
Phakic intra-ocular lens
Instead of modifying the corneal surface, as in laser vision correction (LVC), this procedure involves implanting an additional lens inside the eye (i.e., in addition to the already existing natural lens). While it usually results in good control of the refractive change, it can induce potential serious long-term complications such as glaucoma, cataract and endothelial decompensation.
Orthokeratology
Orthokeratology, or simply Ortho-K, is a temporary corneal reshaping process using rigid gas permeable (RGP) contact lenses. Overnight wearing of specially designed contact lenses temporarily reshapes the cornea, so patients may see clearly without any lenses in the daytime. Orthokeratology can correct myopia up to −6D. Several studies have shown that Ortho-K can also reduce myopia progression. Risk factors of using Ortho-K lenses include microbial keratitis, corneal edema, etc. Other contact-lens-related complications like corneal aberration, photophobia, pain, irritation, redness, etc. are usually temporary conditions, which may be eliminated by proper usage of lenses.
Intrastromal corneal ring segment
The intrastromal corneal ring segment (ICRS), now commonly used in keratoconus treatment, was originally designed to correct mild to moderate myopia. The ring's thickness is directly related to the corneal flattening it produces, while its diameter is inversely proportional to that flattening. So, if the diameter is smaller or the thickness greater, the resulting myopia correction will be greater.
Alternative medicine
A number of alternative therapies have been claimed to improve myopia, including vision therapy, "behavioural optometry", various eye exercises and relaxation techniques, and the Bates method. Scientific reviews have concluded that there was "no clear scientific evidence" that eye exercises are effective in treating near-sightedness and as such they "cannot be advocated".
Epidemiology
Global refractive errors have been estimated to affect 800 million to 2.3 billion people. The incidence of myopia within a sampled population often varies with age, country, sex, race, ethnicity, occupation, environment, and other factors. Variability in testing and data collection methods makes comparisons of prevalence and progression difficult. The prevalence of myopia has been reported as high as 70–90% in some Asian countries, 30–40% in Europe and the United States, and 10–20% in Africa. Myopia is about twice as common in Jewish people as in people of non-Jewish ethnicity. Myopia is less common in African people and the associated diaspora. In Americans between the ages of 12 and 54, myopia has been found to affect African Americans less than Caucasians.
Asia
In some parts of Asia, myopia is very common.
Singapore is believed to have the highest prevalence of myopia in the world; up to 80% of people there have myopia, but the accurate figure is unknown.
China's myopia rate is 31%: 400 million of its 1.3 billion people are myopic. The prevalence of myopia among high school students in China is 77%, and among college students it is more than 80%.
In some areas, such as China and Malaysia, up to 41% of the adult population is myopic to 1.00 dpt, and up to 80% to 0.5 dpt.
A study of Jordanian adults aged 17 to 40 found over half (54%) were myopic.
Some research suggests the prevalence of myopia in Indian children is less than 15%.
Europe
In first-year undergraduate students in the United Kingdom 50% of British whites and 53% of British Asians were myopic.
A recent review found 27% of Western Europeans aged 40 or over have at least −1.00 diopters of myopia and 5% have at least −5.00 diopters.
North America
Myopia is common in the United States, with research suggesting this condition has increased dramatically in recent decades. In 1971–1972, the National Health and Nutrition Examination Survey provided the earliest nationally representative estimates for myopia prevalence in the U.S., and found the prevalence in persons aged 12–54 was 25%. Using the same method, in 1999–2004, myopia prevalence was estimated to have climbed to 42%. A study of 2,523 children in grades 1 to 8 (age, 5–17 years) found nearly one in 10 (9%) have at least −0.75 diopters of myopia. In this study, 13% had at least +1.25 D hyperopia (farsightedness), and 28% had at least 1.00 D difference between the two principal meridians (cycloplegic autorefraction) of astigmatism. For myopia, Asians had the highest prevalence (19%), followed by Hispanics (13%). Caucasian children had the lowest prevalence of myopia (4%), which was not significantly different from African Americans (7%). A recent review found 25% of Americans aged 40 or over have at least −1.00 diopters of myopia and 5% have at least −5.00 diopters.
Australia
In Australia, the overall prevalence of myopia (worse than −0.50 diopters) has been estimated to be 17%. In one recent study, less than one in 10 (8%) Australian children between the ages of four and 12 were found to have myopia greater than −0.50 diopters </s> |