question | answer | context |
---|---|---|
How come Americans have large portion sizes and relatively cheap prices for their food? | When you go to a restaurant, you're paying mostly for the service and only secondarily for the actual food. As a rule of thumb, the ingredients usually make up only 1/4 to 1/3 of the cost. Additionally, preparing a dish twice as large usually isn't twice as much work for the chef.
So it comes down to the customer's expectations. Americans expect large meals, so the restaurants deliver - without hurting their profits much (a rough cost sketch follows this row's context passages).
| [
"Portion sizes in the United States have increased markedly in the past several decades. For example, from 1977 to 1996, portion sizes increased by 60 percent for salty snacks and 52 percent for soft drinks. Importantly, larger product portion sizes and larger servings in restaurants and kitchens consistently increase food intake. Larger portion sizes may even cause people to eat more of foods that are ostensibly distasteful; in one study individuals ate significantly more stale, two-week-old popcorn when it was served in a large versus a medium-sized container.\n",
"Another example is refreshments and snacks sold in theaters, fairs, and other venues. Small servings are proportionally more expensive than large servings. Customers choose the bigger size even if it is more than they would like to eat or drink because it seems like a better deal.\n",
"In her preface to the first American edition in 1979, Grigson observed that although British and American cooks found each others' systems of measurement confusing (citing the US use of volume rather than weight for solid ingredients), the two countries were at one in suffering from supermarkets' obsession with the appearance rather than the flavour of vegetables. \n",
"A value menu (not to be confused with a value meal) is a group of menu items at a fast food restaurant that are designed to be the least expensive items available. In the US, the items are usually priced between $0.99 and $1.49. The portion size, and number of items included with the food, are typically related to the price.\n",
"Smaller communities have fewer choices in food retailers. Resident small grocers struggle to be profitable partly due to low sales numbers, which make it difficult to meet wholesale food suppliers' minimum purchasing requirements. The lack of competition and sales volume can result in higher food costs. For example, in New Mexico the same basket of groceries that cost rural residents $85, cost urban residents only $55. However, this is not true for all rural areas. A study in Iowa showed that grocers in four rural counties had lower costs on key foods that make up a nutritionally balanced diet than did larger supermarkets outside these food deserts (greater than 20 miles away).\n",
"Price encompasses the amount of money paid by the consumer in order to purchase the food product. When pricing the food products, the manufacturer must bear in mind that the retailer will add a particular percentage to the price on the wholesale product. This percentage amount differs globally. The percentage is used to pay for the cost of producing, packaging, shipping, storing and selling the food product. For example, the purchasing of a food product in a supermarket selling for $3.50 generates an income of $2.20 for the manufacturer.\n",
"Agricultural policy and techniques in the United States and Europe have led to lower food prices. In the United States, subsidization of corn, soy, wheat, and rice through the U.S. farm bill has made the main sources of processed food cheap compared to fruits and vegetables. Calorie count laws and nutrition facts labels attempt to steer people toward making healthier food choices, including awareness of how much food energy is being consumed.\n"
] |
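A rough sketch of that cost arithmetic; the $12 price and the 30%/50% split between ingredients and labor/overhead are hypothetical numbers chosen to match the answer's 1/4-to-1/3 rule of thumb:

```python
# Hypothetical cost model: ingredients ~30% of the menu price, labor and
# overhead ~50%, the rest profit. All numbers are illustrative.
price = 12.00
ingredients = 0.30 * price        # $3.60 of food
labor_overhead = 0.50 * price     # $6.00 of service, rent, prep, ...
base_cost = ingredients + labor_overhead

# Make the portion 50% larger: food cost scales, labor/overhead barely moves.
big_cost = 1.5 * ingredients + labor_overhead

increase = (big_cost - base_cost) / base_cost
print(f"portion +50%, total cost only +{increase:.0%}")  # -> +19%
```

So a modest price bump covers a much bigger plate, which is why meeting the expectation of large portions costs the restaurant so little.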
Doesn't the speed of light disprove Fermi's paradox? | When discussing the Fermi paradox, people usually only talk about civilizations in the Milky Way galaxy. The distance between galaxies is far too great to consider an intergalactic civilization (though it may be possible).
The diameter of the stellar disk of the Milky Way is only about 100,000 light-years. So if a civilization existed on the other side of the Milky Way and had the technology to peer at the Earth, they would see a planet teeming with life! 100,000 years ago, the Earth was already inhabited by humans! (A quick back-of-the-envelope check follows this row's context passages.) | [
"In 1962 J. G. Fox pointed out that all previous experimental tests of the constancy of the speed of light were conducted using light which had passed through stationary material: glass, air, or the incomplete vacuum of deep space. As a result, all were thus subject to the effects of the extinction theorem. This implied that the light being measured would have had a velocity different from that of the original source. He concluded that there was likely as yet no acceptable proof of the second postulate of special relativity. This surprising gap in the experimental record was quickly closed in the ensuing years, by experiments by Fox, and by Alvager et al., which used gamma rays sourced from high energy mesons. The high energy levels of the measured photons, along with very careful accounting for extinction effects, eliminated any significant doubt from their results.\n",
"The paradoxical aspect of each of the described thought experiments arises from Einstein’s theory of special relativity, which proclaims the speed of light (approx. 300,000 km/s) is the upper limit of speed in our universe. The uniformity of the speed of light is so absolute that regardless of the speed of the observer as well as the speed of the source of light the speed of the light ray should remain constant.\n",
"Here, one does not regard the above result as a deduction from the Heisenberg theory, but as a \"basic hypothesis\" which is well established experimentally. This needs little explanation, e.g., in terms of the disturbance of instruments, but is merely our starting point for further analysis; as in Einstein's theory of special relativity, we start from the \"fact\" that the speed of light is a constant.\n",
"The second postulate of Einstein's theory of special relativity states that the speed of light is invariant, regardless of the velocity of the source from which the light emanates. The extinction theorem (essentially) states that light passing through a transparent medium is simultaneously extinguished and re-emitted by the medium itself. This implies that information about the velocity of light from a moving source might be lost if the light passes through enough intervening transparent material before being measured. All measurements previous to the 1960s intending to verify the constancy of the speed of light from moving sources (primarily using moving mirrors, or extraterrestrial sources) were made only after the light had passed through such stationary material — that material being that of a glass lens, the terrestrial atmosphere, or even the incomplete vacuum of deep space. In 1961, Fox decided that there might not yet be any conclusive evidence for the second postulate: \"This is a surprising situation in which to find ourselves half a century after the inception of special relativity.\" Regardless, he remained fully confident in special relativity, noting that this created only a \"small gap\" in the experimental record.\n",
"Teller remembered Fermi asking him, \"Edward, what do you think. How probable is it that within the next ten years we shall have clear evidence of a material object moving faster than light?\" Teller said, \"10^-6\" (one in a million). Fermi said, \"This is much too low. The probability is more like ten percent.\" Teller wrote in 1984 that this was \"the well known figure for a Fermi miracle.\"\n",
"The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance.\n",
"Note that in this argument, we never assumed that energy could be transmitted faster than the speed of light. This shows that the results of the EPR experiment do not contradict the predictions of special relativity.\n"
] |
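A toy version of that look-back arithmetic; the 100,000 light-year figure is from the answer, while the ~300,000-year age for anatomically modern humans is an outside approximation I'm adding:

```python
# An observer D light-years away sees Earth as it was D years ago.
HUMANS_PRESENT_SINCE = 300_000   # years ago, approximate

def earth_as_seen_from(distance_ly: int) -> str:
    look_back = distance_ly      # light-years translate 1:1 into years of delay
    inhabited = look_back <= HUMANS_PRESENT_SINCE
    return f"looking {look_back:,} years into the past; humans visible: {inhabited}"

print(earth_as_seen_from(100_000))    # across the Milky Way disk -> True
print(earth_as_seen_from(2_500_000))  # Andromeda-scale distance -> False
```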
Is taking a shot of 100 proof alcohol the same as taking 1.25 shots of 80 proof? | Essentially, yes, apart from the additional water in the 80 proof shots. There is just as much alcohol in both servings, so they will have the same effect on your blood alcohol content (the arithmetic is checked after this row's context passages). | [
"The concentration of alcohol in a beverage is usually stated as the percentage of alcohol by volume (ABV, the number of milliliters (ml) of pure ethanol in 100 ml of beverage) or as \"proof\". In the United States, \"proof\" is twice the percentage of alcohol by volume at 60 degrees Fahrenheit (e.g. 80 proof = 40% ABV). \"Degrees proof\" were formerly used in the United Kingdom, where 100 degrees proof was equivalent to 57.1% ABV. Historically, this was the most dilute spirit that would sustain the combustion of gunpowder.\n",
"Up to the 20th century, alcoholic spirits were assessed in the UK by mixing with gunpowder and testing the mixture to see whether it would still burn; spirit that just passed the test was said to be at 100° proof. The UK now uses percentage alcohol by volume at 20 °C (68 °F), where spirit at 100° proof is approximately 57.15% ABV; the US uses a \"proof number\" of twice the ABV at 60 °F (15.5 °C).\n",
"Alcohol proof is a measure of the content of ethanol (alcohol) in an alcoholic beverage. The term was originally used in England and was equal to about 1.821 times the alcohol by volume (ABV). The UK now uses the ABV standard instead of alcohol proof. In the United States, alcohol proof is defined as twice the percentage of ABV.\n",
"BULLET::::- 4. If consumption is proven by a preponderance of the evidence, it is an affirmative defense under paragraph c of subsection 1 that the defendant consumed a sufficient quantity of alcohol after driving or being in actual physical control of the vehicle, and before their blood or breath was tested, to cause the defendant to have a concentration of alcohol of 0.08 or more in their blood or breath. A defendant who intends to offer this defense at a trial or preliminary hearing must, not less than 14 days before the trial or hearing or at such other time as the court may direct, file and serve on the prosecuting attorney a written notice of that intent. \"\n",
"The term \"proof\" dates back to 16th century England, when spirits were taxed at different rates depending on their alcohol content. Spirits were tested by soaking a pellet of gunpowder in them. If the gunpowder could still burn, the spirits were rated above proof and taxed at a higher rate. As gunpowder would not burn if soaked in rum that contained less than 57.15% ABV, rum that contained this percentage of alcohol was defined as having 100 degrees proof. The gunpowder test was officially replaced by a specific gravity test in 1816.\n",
"BULLET::::2. After having consumed sufficient alcohol that he has, at any relevant time after the driving, an alcohol concentration of 0.08 or more. The results of a chemical analysis shall be deemed sufficient evidence to prove a person's alcohol concentration; or\n",
"BULLET::::- Overproof rums are much higher than the standard 40% ABV (80 proof), with many as high as 75% (150 proof) to 80% (160 proof) available. Two examples are Bacardi 151 or Pitorro moonshine. They are usually used in mixed drinks.\n"
] |
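Checking that arithmetic with the US definition (proof = 2 × ABV); the 1.5 oz shot size is an assumption, but any fixed size gives the same result:

```python
SHOT_OZ = 1.5  # assumed shot size; the comparison holds for any size

def ethanol_oz(shots: float, proof: float) -> float:
    abv = proof / 2 / 100        # US rule: proof is twice the ABV percentage
    return shots * SHOT_OZ * abv

print(ethanol_oz(1.00, 100))     # 0.75 oz of pure ethanol
print(ethanol_oz(1.25, 80))      # 0.75 oz -- the same alcohol dose
```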
Why do humans start getting body odor after they go through puberty? | Basically (the way I was taught this, at least) you have two major types of sweat glands, apocrine and eccrine. Sweat produced by eccrine glands is mostly water. Apocrine sweat is more oily and contains a whole bunch of other stuff (which I won't get into). So bacteria can metabolize the components of apocrine sweat far more readily.
Apocrine glands (which are heavily concentrated in your pits and groin) are stimulated by sex hormones, the levels of which rise sharply during puberty. So you get an assload of oily sweat, which is then colonized by bacteria, which generate foul odors. | [
"The average beginning of pubarche varies due to many factors, including climate, nourishment, weight, nurture, and genes. First (and often transient) pubic hair resulting from adrenarche may appear between ages 10-12 preceding puberty.\n",
"Before puberty effects of rising androgen levels occur in both boys and girls. These include adult-type body odor, increased oiliness of skin and hair, acne, pubarche (appearance of pubic hair), axillary hair (armpit hair), growth spurt, accelerated bone maturation, and facial hair.\n",
"Estrogens are responsible for the development of female secondary sexual characteristics during puberty, including breast development, widening of the hips, and female fat distribution. Conversely, androgens are responsible for pubic and body hair growth, as well as acne and axillary odor.\n",
"Apocrine sweat glands secrete sweat into the pubic hair follicles. This is broken down by bacteria on the skin and produces an odor, which some consider to act as an attractant sex pheromone. The labia minora may grow more prominent and undergo changes in color. At puberty the first monthly period known as menarche marks the onset of menstruation.\n",
"In humans, the formation of body odor happens mostly in the axillary region. These odorant substances serve as pheromones which play a role related to mating. The underarm regions seem more important than the genital region for body odor which may be related to human bipedalism.\n",
"The principal physical consequences of adrenarche are androgen effects, especially pubic hair (in which Tanner stage 2 becomes Tanner stage 3) and the change of sweat composition that produces adult body odor. Increased oiliness of the skin and hair and mild acne may occur. In most boys, these changes are indistinguishable from early testicular testosterone effects occurring at the beginning of gonadal puberty. In girls, the adrenal androgens of adrenarche produce most of the early androgenic changes of puberty: pubic hair, body odor, skin oiliness, and acne. In most girls the early androgen effects coincide with, or are a few months following, the earliest estrogenic effects of gonadal puberty (breast development and growth acceleration). As female puberty progresses, the ovaries and peripheral tissues become more important sources of androgens.\n",
"Pregnant women have increased smell sensitivity, sometimes resulting in abnormal taste and smell perceptions, leading to food cravings or aversions. The ability to taste also decreases with age as the sense of smell tends to dominate the sense of taste. Chronic smell problems are reported in small numbers for those in their mid-twenties, with numbers increasing steadily, with overall sensitivity beginning to decline in the second decade of life, and then deteriorating appreciably as age increases, especially once over 70 years of age.\n"
] |
Do bone conduction earphones protect hearing? | There's no reason to believe that they would. Hearing loss is usually caused by damage to the inner ear, which still gets as much sound exposure through bone conduction as it would through the normal air-conduction path. | [
"It is a semi-implantable under the skin bone conduction hearing device coupled to the skull by a titanium fixture. The system transfers sound to the inner ear through the bone, thereby bypassing problems in the outer or middle ear. Candidates with a conductive, mixed or single-sided sensorineural hearing loss can therefore benefit from bone conduction hearing solutions.\n",
"Ear protection refers to devices used to protect the ear, either externally from elements such as cold, intrusion by water and other environmental conditions, debris, or specifically from noise. High levels of exposure to noise may result in noise-induced hearing loss. Measures to protect the ear are referred to as hearing protection, and devices for that purpose are called hearing protection devices. In the context of work, adequate hearing protection is that which reduces noise exposure to below 85 dBA over the course of an average work shift of eight hours.\n",
"Bone conduction is the conduction of sound to the inner ear primarily through the bones of the skull, allowing the hearer to perceive audio content without blocking the ear canal. Bone conduction transmission occurs constantly as sound waves vibrate bone, specifically the bones in the skull, although it is hard for the average individual to distinguish sound being conveyed through the bone as opposed to sound being conveyed through air via the ear canal. Intentional transmission of sound through bone can be used with individuals with normal hearing - as with bone-conduction headphones - or as a treatment option for certain types of hearing impairment. Bone generally conveys lower-frequency sounds better than higher frequency sound.\n",
"Bone-anchored hearing aids use a surgically implanted abutment to transmit sound by direct conduction through bone to the inner ear, bypassing the external auditory canal and middle ear. A titanium prosthesis is surgically embedded into the skull with a small abutment exposed outside the skin. A sound processor sits on this abutment and transmits sound vibrations to the titanium implant. The implant vibrates the skull and inner ear, which stimulate the nerve fibers of the inner ear, allowing hearing.\n",
"Dual hearing protection refers to the use of earplugs under ear muffs. This type of hearing protection is particularly recommended for workers in the Mining industry because they are exposed to extremely high noise levels, such as an 105 dBA TWA. Fortunately, there is an option of adding electronic features to dual hearing protectors. These features help with communication by making speech more clear, especially for those workers who already have hearing loss. \n",
"Patients with chronic ear infection where the drum and/or the small bones in the middle ear are damaged often have hearing loss, but difficulties in using a hearing aid fitted in the ear canal. Direct bone conduction through a vibrator attached to a skin-penetrating implant addresses these disadvantages.\n",
"A hearing protection device, also known as a HPD, is an ear protection device worn in or over the ears while exposed to hazardous noise to help prevent noise-induced hearing loss. HPDs reduce (not eliminate) the level of the noise entering the ear. HPDs can also protect against other effects of noise exposure such as tinnitus and hyperacusis. There are many different types of HPDs available for use, including earmuffs, earplugs, electronic hearing protection devices, and semi-insert devices. \n"
] |
Why do circles tessellate hexagonally? | It's all geometry. Assuming all circles have equal radii, if you place them so that they don't intersect but touch each other at exactly one point (packing them as tightly as possible) and you start with just 3 circles, those circles form a triangle shape. If you connect the center points of those circles, it forms an equilateral triangle (equal-length sides, each corner 60°). So if you continue placing circles the same way around one center circle, you can do that a total of 6 times, because 360°/60° = 6. A hexagon has 6 sides. (A numeric check follows this row's context passages.) Hope this helps. | [
"A study by David George Kendall used the techniques of shape analysis to examine the triangles formed by standing stones to deduce if these were often arranged in straight lines. The shape of a triangle can be represented as a point on the sphere, and the distribution of all shapes can be thought of as a distribution over the sphere. The sample distribution from the standing stones was compared with the theoretical distribution to show that the occurrence of straight lines was no more than average.\n",
"A tessellation, also known as a tiling, is a set of shapes that must cover the entire plane without the shapes overlapping. This repeating shape must cover every part of the plane without overlapping. An edge tessellation, is a special type of tessellation that is created by flipping or reflecting the shape over an edge. This can also be called a \"folding\" tessellation.\n",
"By considering the family of maximally dense packings of the smoothed octagon, the requirement that the packing density remain the same as the point of contact between neighbouring octagons changes can be used to determine the shape of the corners. In the figure, three octagons rotate while the area of the triangle formed by their centres remains constant, keeping them packed together as closely as possible. For regular octagons, the red and blue shapes would overlap, so to enable the rotation to proceed the corners are clipped by a point that lies halfway between their centres, generating the required curve, which turns out to be a hyperbola.\n",
"The circle symbolizes unity and diversity in nature, and many Islamic patterns are drawn starting with a circle. For example, the decoration of the 15th-century mosque in Yazd, Persia is based on a circle, divided into six by six circles drawn around it, all touching at its centre and each touching its two neighbours' centres to form a regular hexagon. On this basis is constructed a six-pointed star surrounded by six smaller irregular hexagons to form a tessellating star pattern. This forms the basic design which is outlined in white on the wall of the mosque. That design, however, is overlaid with an intersecting tracery in blue around tiles of other colours, forming an elaborate pattern that partially conceals the original and underlying design. A similar design forms the logo of the Mohammed Ali Research Center.\n",
"A Reuleaux triangle is a shape formed from the intersection of three circular disks, each having its center on the boundary of the other two. Its boundary is a curve of constant width, the simplest and best known such curve other than the circle itself. Constant width means that the separation of every two parallel supporting lines is the same, independent of their orientation. Because all its diameters are the same, the Reuleaux triangle is one answer to the question \"Other than a circle, what shape can a manhole cover be made so that it cannot fall down through the hole?\"\n",
"One method of squaring the circle, due to Archimedes, makes use of an Archimedean spiral. Archimedes also showed how the spiral can be used to trisect an angle. Both approaches relax the traditional limitations on the use of straightedge and compass in ancient Greek geometric proofs.\n",
"The eigenvalues of the circle system plotted in the complex plane form a trefoil shape. The eigenvalues from a short line form a sideways Y, but those of a long line begin to resemble the trefoil shape of the circle. This could be due to the fact that a long line is indistinguishable from a circle to those species far from the ends.\n"
] |
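A numeric check of the 60° argument, treating the centers of three mutually touching circles as a triangle with all sides equal to 2r:

```python
import math

r = 1.0
side = 2 * r  # touching circles of equal radius have centers 2r apart

# Law of cosines on the equilateral triangle of centers (a = b = c = 2r).
angle = math.degrees(math.acos((side**2 + side**2 - side**2) / (2 * side * side)))
print(angle)        # 60.0 degrees at every corner
print(360 / angle)  # 6.0 -> exactly six circles fit around a central one
```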
Where do vegetables and fruit/nut bearing plants get their vitamins and minerals? | They get all the minerals they need from the soil. Vitamins for plants aren't necessarily the same as our vitamins, because a vitamin is something an organism needs to survive but cannot produce on its own (by that definition, vitamin D is not a true vitamin for us, since we can synthesize it in our skin).
So, for example, plants can produce vitamin C (ascorbic acid) through a glucose metabolism pathway. We do not have this pathway and need to consume it. Additionally, plants can make alpha-linolenic acid (ALA), which is the parent omega-3 fatty acid. We cannot make ALA because we lack desaturase enzymes that act beyond carbon 9, whereas desaturations at carbons 12 and 15 are required to form ALA from stearic acid.
But essentially plants, being autotrophs, only get some things from soil and air; the rest they can synthesize. | [
"Vitamins and minerals are required for normal metabolism but which the body cannot manufacture itself and which must therefore come from external sources. Vitamins come from several sources including fresh fruit and vegetables (Vitamin C), carrots, liver (Vitamin A), cereal bran, bread, liver (B vitamins), fish liver oil (Vitamin D) and fresh green vegetables (Vitamin K). Many minerals are also essential in small quantities including iron, calcium, magnesium, sodium chloride and sulfur; and in very small quantities copper, zinc and selenium. The micronutrients, minerals, and vitamins in fruit and vegetables may be destroyed or eluted by cooking. Vitamin C is especially prone to oxidation during cooking and may be completely destroyed by protracted cooking. The bioavailability of some vitamins such as thiamin, vitamin B6, niacin, folate, and carotenoids are increased with cooking by being freed from the food microstructure. Blanching or steaming vegetables is a way of minimizing vitamin and mineral loss in cooking.\n",
"At least 17 elements are known to be essential nutrients for plants. In relatively large amounts, the soil supplies nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur; these are often called the macronutrients. In relatively small amounts, the soil supplies iron, manganese, boron, molybdenum, copper, zinc, chlorine, and cobalt, the so-called micronutrients. Nutrients must be available not only in sufficient amounts but also in appropriate ratios.\n",
"They are also good sources of vitamin C and flavonoids. The content of vitamin C in the fruit depends on the species, variety, and mode of cultivation. Fruits produced with organic agriculture have been shown to contain more vitamin C than those produced with conventional agriculture in the Algarve, but results depended on the species and cultivar.\n",
"While plant foods are generally a good source of vitamin C, the amount in foods of plant origin depends on the variety of the plant, soil condition, climate where it grew, length of time since it was picked, storage conditions, and method of preparation. The following table is approximate and shows the relative abundance in different raw plant sources. As some plants were analyzed fresh while others were dried (thus, artificially increasing concentration of individual constituents like vitamin C), the data are subject to potential variation and difficulties for comparison. The amount is given in milligrams per 100 grams of the edible portion of the fruit or vegetable:\n",
"BULLET::::- Vitamin C (ascorbic acid) is a water-soluble compound that fulfills several roles in living systems. Sources include citrus fruits (such as oranges, sweet lime, etc.), green peppers, broccoli, green leafy vegetables, black currants, strawberries, blueberries, seabuckthorn, raw cabbage and tomatoes.\n",
"Aside from performed vitamin A, vitamin B and vitamin D, all vitamins found in animal source foods may also be found in plant-derived foods. Examples are tofu to replace meat (both contain protein in sufficient amounts), and certain seaweeds and vegetables as respectively kombu and kale to replace dairy foods as milk (both contain calcium in sufficient amounts). There are some nutrients which are rare to find in sufficient density in plant based foods. One example would be zinc, the exception would be pumpkin seeds that have been soaked for improved digestion. The increased fiber in these foods can also make absorption difficult. Deficiencies are very possible in these nutrients if vegetarians are not very careful and willing to eat sufficient quantities of these exceptional plant based foods. A good way to find these foods would be to search for them on one of the online, nutrient analyzing databases. An example would be nutritiondata.com.\n",
"The nutrients required for healthy plant life are classified according to the elements, but the elements are not used as fertilizers. Instead compounds containing these elements are the basis of fertilizers. The macro-nutrients are consumed in larger quantities and are present in plant tissue in quantities from 0.15% to 6.0% on a dry matter (DM) (0% moisture) basis. Plants are made up of four main elements: hydrogen, oxygen, carbon, and nitrogen. Carbon, hydrogen and oxygen are widely available as water and carbon dioxide. Although nitrogen makes up most of the atmosphere, it is in a form that is unavailable to plants. Nitrogen is the most important fertilizer since nitrogen is present in proteins, DNA and other components (e.g., chlorophyll). To be nutritious to plants, nitrogen must be made available in a \"fixed\" form. Only some bacteria and their host plants (notably legumes) can fix atmospheric nitrogen (N) by converting it to ammonia. Phosphate is required for the production of DNA and ATP, the main energy carrier in cells, as well as certain lipids.\n"
] |
What's the noise a Formula 1 car makes when it changes gears? | It's most likely a backfire. When the car is accelerating it's at full throttle/load, and when the engine reaches the top of its rev range it's time to change gears. Imagine going from full throttle to no throttle (changing gears), then back to full throttle.
The bang you hear is unburnt fuel igniting in the exhaust after it's left the combustion chamber, once the engine has come off full throttle to change gears.
It's excess fuel that was needed to sustain full power, but is no longer needed when off throttle. | [
"BULLET::::- Transmission problems were tackled by adding a further mounting-point (making five) for the whole engine and transmission assembly at the back of the gearbox where it was supported by an extra chassis cross-member. The transmission made a significant humming noise while in neutral and there were difficulties with excessive vibration from oil surge in the fluid flywheel when picking up under heavy load at low speed. The transmission mechanism for top-gear was modified to reduce pedal pressure and ensure positive engagement and disengagement while avoiding a humming sound in neutral.\n",
"The race start was delayed by 45 minutes due to the heavy rain. With the rain soaking the track, Niki Lauda sought out Bernie Ecclestone on the grid in a bid to have the tunnel flooded as well. The tunnel was dry but coated with oil from the previous days' use (as well as from the historic cars which were on the program that weekend) which Lauda explained had turned it into a fifth gear skid pad when the cars came racing in carrying the spray from their tyres in the morning warmup. Ecclestone used his power as the head of the Formula One Constructors Association to do exactly that, with a local fire truck called in to water down the only dry road on the track.\n",
"A modern F1 clutch is a multi-plate carbon design with a diameter of less than , weighing less than and handling around . race season, all teams are using seamless shift transmissions, which allow almost instantaneous changing of gears with minimum loss of drive. Shift times for Formula One cars are in the region of 0.05 seconds. In order to keep costs low in Formula One, gearboxes must last five consecutive events and since 2015, gearbox ratios will be fixed for each season (for 2014 they could be changed only once). Changing a gearbox before the allowed time will cause a penalty of five places drop on the starting grid for the first event that the new gearbox is used.\n",
"\"After engaging the first gear and a somewhat careless step on the gas pedal you get a touched feel to the epiphany GTV6 shot, accompanied by the typical Alfa Romeo exhaust sound. It was a pleasure. The fact was the sprint from 0 to is not further under the seven-second limited by a tricky-to-be-shifted five-speed gearbox. The really vehement propulsion waned only when the speedometer mark has left behind. Another eye-opening experience awaits when you realize that the lightning speed to 7000 rpm rotating in any gear pinion even in fifth gear still from 1500 rpm is completely smooth.\"\n",
"BULLET::::- Noise 2: This noise, commonly referred to as gear rattle, can be induced by lugging the engine in any gear, but is usually most noticeable in first or second gear. While the noise is occurring, if you press lightly on the clutch pedal without releasing the clutch, the noise will be reduced or eliminated.\n",
"Once the field was filled to 33 cars, bumping would begin. The slowest car in the field, regardless of the day it was qualified, was \"on the bubble.\" If a driver went out and qualified faster, the bubble car would be bumped, and the new qualifier would be added to the field. The bumped car would be removed from the grid, and all cars that were behind him would move up a spot. The new driver would take his position according to his speed rank on the day he qualified (typically the final day). This procedure would be repeated until the track closed at 6 p.m. on the final day of qualifying. Bumped cars could not be re-qualified. A bumped driver would have to secure a back-up car (assuming it had attempts left on it) in order to bump his way back into the field.\n",
"The sound was adopted as the sound of a Formula One car as early as 2001 in the form of \"Deng Deng Form\" and later \"The Insanity Test\" both of which were a static background of a Ferrari Formula One car accompanied by the sound.\n"
] |
What determines how internet lag looks in different games? | Programmer here. It just depends on how the programmers who made the game decided to handle the case where the game isn't getting updates from the server. Some games leave the character in place, and then warp them when the updates resume. Others avoid the warp by having the character fly from their old position to the new one. I seem to remember that Neverwinter Nights had a thing where it would try to estimate where the character would be based on their last position and trajectory, which led to weird glitches. I could be making that up, though. (A sketch of these strategies follows this row's context passages.) | [
"Since the game requires information on the location of other players, there is sometimes a delay as this information travels over the network. This occurs in games where the input signals are \"held\" for several frames (to allow time for the data to arrive at every player's console/PC) before being used to render the next frame. At 25 FPS, holding 4 frames adds to the overall input lag. However, very few modern online games use this method. The view angle of every modern AAA shooter game is completely unaffected by network lag, for example. In addition, lag compensating code makes classification a complex issue.\n",
"Lag due to an insufficient update rate between client and server can cause some problems, but these are generally limited to the client itself. Other players may notice jerky movement and similar problems with the player associated with the affected client, but the real problem lies with the client itself. If the client cannot update the game state at a quick enough pace, the player may be shown outdated renditions of the game, which in turn cause various problems with hit- and collision detection. If the low update rate is caused by a low frame rate (as opposed to a setting on the client, as some games allow), these problems are usually overshadowed by numerous problems related to the client-side processing itself. Both the display and controls will be sluggish and unresponsive. While this may increase the perceived lag, it is important to note that it is of a different kind than network-related delays. In comparison, the same problem on the server may cause significant problems for all clients involved. If the server is unable or unwilling to accept packets from clients fast enough and process these in a timely manner, client actions may never be registered. When the server then sends out updates to the clients, they may experience freezing (unresponsive game) and/or rollbacks, depending on what types of lag compensation, if any, the game uses.\n",
"Testing has found that overall \"input lag\" (from controller input to display response) times of approximately are distracting to the user. It also appears that (excluding the monitor/television display lag) is an average response time and the most sensitive games (fighting games, first person shooters and rhythm games) achieve response times of (excluding display lag).\n",
"The noticeable effects of lag vary not only depending on the exact cause, but also on any and all techniques for lag compensation that the game may implement (described below). As all clients experience some delay, implementing these methods to minimize the effect on players is important for smooth gameplay. Lag causes numerous problems for issues such as accurate rendering of the game state and hit detection. In many games, lag is often frowned upon because it disrupts normal gameplay. The severity of lag depends on the type of game and its inherent tolerance for lag. Some games with a slower pace can tolerate significant delays without any need to compensate at all, whereas others with a faster pace are considerably more sensitive and require extensive use of compensation to be playable (such as the first-person shooter genre). Due to the various problems lag can cause, players that have an insufficiently fast Internet connection are sometimes not permitted, or discouraged from playing with other players or servers that have a distant server host or have high latency to one another. Extreme cases of lag may result in extensive desynchronization of the game state.\n",
"In the peer-to-peer gaming model, lagging is what happens when the stream of data between one or more players gets slowed or interrupted, causing movement to stutter and making opponents appear to behave erratically. By using a lag switch, a player is able to disrupt uploads from the client to the server, while their own client queues up the actions performed. The goal is to gain advantage over another player without reciprocation; opponents slow down or stop moving, allowing the lag switch user to easily outmaneuver them. From the opponent's perspective, the player using the device may appear to be teleporting, invisible or invincible, while the opponents suffer delayed animations and fast-forwarded game play, delivered in bursts. Some gaming communities refer to this method as \"tapping\" which refers to the users \"tapping\" on and off their internet connection to create the lag.\n",
"As an interesting first, this movie features the appearance of lag, a gameplay error due to network transfer slowdown often encountered in online games. Oshii displays lag as an ailment that causes physical convulsions in the player during these slowdowns.\n",
"Lag due to network delay is in contrast often less of a problem. Though more common, the actual effects are generally smaller, and it is possible to compensate for these types of delays. Without any form of lag compensation, the clients will notice that the game responds only a short time after an action is performed. This is especially problematic in first-person shooters, where enemies are likely to move as a player attempts to shoot them and the margin for errors is often small.\n"
] |
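A minimal sketch (not from any real engine) of the three strategies the answer describes for a remote character whose server updates have stalled: warping, smoothing toward the new position, and extrapolating from the last known trajectory:

```python
def snap(current, latest_server):
    # Teleport straight to the authoritative position ("warp").
    return latest_server

def interpolate(current, latest_server, t=0.2):
    # Glide a fraction of the way each frame to hide the warp.
    x, y = current
    sx, sy = latest_server
    return (x + (sx - x) * t, y + (sy - y) * t)

def extrapolate(last_known, velocity, seconds_since_update):
    # Guess ahead from position and trajectory ("dead reckoning");
    # wrong guesses are what produce the weird rubber-banding glitches.
    x, y = last_known
    vx, vy = velocity
    return (x + vx * seconds_since_update, y + vy * seconds_since_update)

print(interpolate((0.0, 0.0), (10.0, 0.0)))       # (2.0, 0.0): eases toward target
print(extrapolate((10.0, 0.0), (5.0, 0.0), 0.5))  # (12.5, 0.0): predicted spot
```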
In the United States, have there been any particularly strong Vice Presidents, and how was The Senate different under them? | In addition to Calhoun, John Adams regularly presided over the Senate and partook in its debates, and is second only to Calhoun in the number of tie-breaking votes cast.
That said, while the Vice President is the nominal head of the Senate, the Constitution also says that the House and Senate get to write their own procedural rules, in Article I, Section 5, Clause 2:
> Each House may determine the Rules of its Proceedings, punish its Members for disorderly Behaviour, and, with the Concurrence of two thirds, expel a Member.
In practical terms, the Vice President doesn't have much power if the Senate writes its rules to say that the Vice President can't do anything other than break ties and be physically present, the only things the Constitution explicitly grants them the authority to do. Something like Frank Underwood barging into the Senate and immediately taking over wouldn't really happen, since at present party leaders run the floor and they have junior senators sit in the presiding chair. | [
"He served as one of several alternating presidents pro tempore of the United States Senate during the 62nd Congress (1911 to 1913), as part of a compromise under which Bacon and four senators from the Republican majority rotated in the office because no single candidate in either party was able to secure a majority vote.\n",
"Over the next few decades the Senate rose in reputation in the United States and the world. John C. Calhoun, Daniel Webster, Thomas Hart Benton, Stephen A. Douglas, and Henry Clay overshadowed several presidents. Sir Henry Maine called the Senate \"the only thoroughly successful institution which has been established since the tide of modern democracy began to run.\" William Ewart Gladstone said the Senate was \"the most remarkable of all the inventions of modern politics.\"\n",
"A procedural issue of the early Senate was what role the vice president, the President of the Senate, should have. The first vice president was allowed to craft legislation and participate in debates, but those rights were taken away relatively quickly. John Adams seldom missed a session, but later vice presidents made Senate attendance a rarity. Although the founders intended the Senate to be the slower legislative body, in the early years of the Republic, it was the House that took its time passing legislation. Alexander Hamilton's Bank of the United States and Assumption Bill (he was then Treasury Secretary), both of which were controversial, easily passed the Senate, only to meet opposition from the House.\n",
"Richard Mentor Johnson (October 17, 1780 – November 19, 1850) was a politician and the ninth vice president of the United States from 1837 to 1841. He is the only vice president elected by the United States Senate under the provisions of the Twelfth Amendment. Johnson also represented Kentucky in the U.S. House of Representatives and Senate; he began and ended his political career in the Kentucky House of Representatives.\n",
" confers upon the vice president the title President of the Senate and authorizes him to preside over Senate meetings. In this capacity, the vice president is charged with maintaining order and decorum, recognizing members to speak, and interpreting the Senate's rules, practices, and precedent. The first two vice presidents, John Adams and Thomas Jefferson, both of whom gained the office by virtue of being runners-up in presidential contests, presided regularly over Senate proceedings, and did much to shape the role of Senate president. Several 19th century vice presidents—such as George Dallas, Levi Morton, and Garret Hobart—followed their example and led effectively, while others were rarely present.\n",
"There have been 48 vice presidents of the United States since the office came into existence in 1789. Originally, the vice president was the person who received the second most votes for president in the Electoral College. However, in the election of 1800 a tie in the electoral college between Thomas Jefferson and Aaron Burr led to the selection of the president by the House of Representatives. To prevent such an event from happening again, the Twelfth Amendment was added to the Constitution, creating the current system where electors cast a separate ballot for the vice presidency.\n",
"The Vice President of the United States, as provided by the United States Constitution formally presides over the upper house, the Senate. In practice, however, the Vice President has a rare presence in Congress owing to responsibilities in the Executive branch and the fact that the Vice President may only vote to break a tie. In the Vice President's absence, the presiding role is delegated to the most Senior member of the majority party, who is the President pro tempore of the United States Senate. Since the Senate's rules give little power to its non-member presider (who may be of the opposite party), the task of presiding over daily business is typically rotated among junior members of the majority party.\n"
] |
Do black holes really vary in size or does the collapsed point in space just vary in intensity? | Every amount of mass has some radius such that, were it all compressed within that radius, it would form a black hole. This is called the Schwarzschild radius, and it's calculated by the formula r = 2GM/c^2. G is the gravitational constant, and c is the speed of light. These are both constants, so the math works out the same every time, and the quantity of mass is the only variable that can alter the radius.
Interestingly, smaller black holes will spaghettify you much faster than larger black holes will. This is because of the tidal force. Anything that enters a black hole is stretched apart by its gravity. Gravitational force weakens with distance, so the parts of you closer to the black hole (say, your feet, if you're falling straight in) are pulled more forcefully than the parts farther away (like your head, in this scenario). This effect magnifies as you are stretched more and more until... well, spaghettification is the scientific term for this for a reason.
With larger black holes, the distance between your head and your feet is smaller relative to the size of the black hole, so the difference in pull is less drastic. Your feet will still be pulled more forcefully than your head, but with a large enough black hole you might survive a decent part of your trip to the singularity.
So, the size of a black hole depends solely on its mass, but a more massive black hole will take longer to destroy you. Either way, you aren't getting out. (The radius formula is evaluated for a few masses after this row's context passages.) | [
"In general relativity, if a star collapses to a size smaller than its Schwarzschild radius, an event horizon will exist at that radius and the star will become a black hole. Thus, the size of a preon star may vary from around 1 metre with an absolute mass of 100 Earths to the size of a pea with a mass roughly equal to that of the Moon.\n",
"A vacancy exists in the observed mass distribution of black holes. Black holes that spawn from dying stars have masses . The minimal supermassive black hole is approximately a hundred thousand solar masses. Mass scales between these ranges are dubbed intermediate-mass black holes. Such a gap suggests a different formation process. However, some models suggest that ultraluminous X-ray sources (ULXs) may be black holes from this missing group.\n",
"Black holes can be classified based on their Schwarzschild radius, or equivalently, by their density. As the radius is linearly related to mass, while the enclosed volume corresponds to the third power of the radius, small black holes are therefore much more dense than large ones. The volume enclosed in the event horizon of the most massive black holes has an average density lower than main sequence stars.\n",
"The maximally extended solution does not describe a typical black hole created from the collapse of a star, as the surface of the collapsed star replaces the sector of the solution containing the past-oriented \"white hole\" geometry and other universe.\n",
"Black holes are talked about in this chapter. Black holes are stars that have collapsed into one very small point. This small point is called a \"singularity\". Black holes suck things into their center because they have very strong gravity. Some of the things it can suck in are light and stars. Only very large stars, called \"super-giants\", are big enough to become a black hole. \n",
"Claims of intermediate mass black holes have been met with some skepticism. The heaviest objects in globular clusters are expected to migrate to the cluster center due to mass segregation. As pointed out in two papers by Holger Baumgardt and collaborators, the mass-to-light ratio should rise sharply towards the center of the cluster, even without a black hole, in both M15 and Mayall II.\n",
"On the other hand, the nature of the kind of singularity to be expected inside a black hole remains rather controversial. According to some theories, at a later stage, the collapsing object will reach the maximum possible energy density for a certain volume of space or the Planck density (as there is nothing that can stop it). This is when the known laws of gravity cease to be valid. There are competing theories as to what occurs at this point, but it can no longer really be considered gravitational collapse at that stage.\n"
] |
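Evaluating the answer's formula for a few masses; the constants and masses are standard textbook values, and Sagittarius A* is my added example:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    # r_s = 2GM/c^2, the formula quoted in the answer
    return 2 * G * mass_kg / c**2

for name, m in [("Earth", 5.972e24), ("Sun", 1.989e30),
                ("Sagittarius A*", 4.0e6 * 1.989e30)]:
    print(f"{name}: {schwarzschild_radius(m):.3e} m")
# Earth -> ~9 mm, Sun -> ~3 km, Sgr A* -> ~1.2e10 m: the radius scales
# linearly with mass, the only variable in the formula.
```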
Why do strange graphical effects sometimes occur when alt-tabbing a computer game? | It's because a fullscreen game takes up the majority of your computer's resources and keeps exclusive control of the display while it's at the forefront. When you alt-tab, your computer needs to load back in all the other stuff that the OS and other programs need to draw the screen before you can use them, and the half-finished state in between is what shows up as graphical glitches. | [
"\"Glitching\" is also used to describe the state of a video game undergoing a glitch. The frequency in which a game undergoes glitching is often used by reviewers when examining the overall gameplay, or specific game aspects such as graphics. Some games such as Metroid have lower review scores today because in retrospect, the game may be very prone to glitches and be below what would be acceptable today.\n",
"Glitches may include incorrectly displayed graphics, collision detection errors, game freezes/crashes, sound errors, and other issues. Graphical glitches are especially notorious in platforming games, where malformed textures can directly affect gameplay (for example, by displaying a ground texture where the code calls for an area that should damage the character, or by \"not\" displaying a wall texture where there should be one, resulting in an invisible wall). Some glitches are potentially dangerous to the game's stored data.\n",
"Texture/model glitches are a kind of bug or other error that causes any specific model or texture to either become distorted or otherwise to not look as intended by the developers. Bethesda's \"\" is notorious for texture glitches, as well as other errors that affect many of the company's popular titles. Many games that use ragdoll physics for their character models can have such glitches happen to them.\n",
"Software errors not detected by software testers during development can find their way into released versions of computer and video games. This may happen because the glitch only occurs under unusual circumstances in the game, was deemed too minor to correct, or because the game development was hurried to meet a publication deadline. Glitches can range from minor graphical errors to serious bugs that can delete saved data or cause the game to malfunction. In some cases publishers will release updates (referred to as \"patches\") to repair glitches. Sometimes a glitch may be beneficial to the player; these are often referred to as exploits.\n",
"Crash to desktop bugs are considered particularly problematic for users. Since they frequently display no error message, it can be very difficult to track down the source of the problem, especially if the times they occur and the actions taking place right before the crash do not appear to have any pattern or common ground. One way to track down the source of the problem for games is to run them in windowed-mode. Windows Vista has a feature that can help track down the cause of a CTD problem when it occurs on any program. Windows XP included a similar feature as well.\n",
"\"Glitching\" is the practice of players exploiting faults in a video game's programming to achieve tasks that give them an unfair advantage in the game, over NPC's or other players, such as running through walls or defying the game's physics. Glitches can be deliberately induced in certain home video game consoles by manipulating the game medium, such as tilting a ROM cartridge to disconnect one or more connections along the edge connector and interrupt part of the flow of data between the cartridge and the console. This can result in graphic, music, or gameplay errors. Doing this, however, carries the risk of crashing the game or even causing permanent damage to the game medium.\n",
"Non-cosmetic modifications to a game, console, or controller are not allowed. Glitches that are triggered by interfering with the normal operation of the hardware or game media while the game is running, such as the crooked cartridge trick are not permitted. In-game glitches or exploits may be permissible, contingent on the category being run. \n"
] |
How much of time dilation is due to the gravity well versus relative velocity? | You can indeed separate the two effects if the field is weak. I've done the explicit computation for the orbit of Mercury around the Sun [here](_URL_0_). It turns out that in a circular orbit, the time dilation due to the orbital speed is exactly half the gravitational time dilation (a sketch of the weak-field expansion follows this row's context passages).
P.S.: GPS satellites are *not* in geosynchronous orbit. | [
"In 2010, Chou \"et al\". performed tests in which both gravitational and velocity effects were measured at velocities and gravitational potentials much smaller than those used in the mountain-valley experiments of the 1970s. It was possible to confirm velocity time dilation at the 10 level at speeds below 36 km/h. Also, gravitational time dilation was measured from a difference in elevation between two clocks of only .\n",
"Contrarily to velocity time dilation, in which both observers measure the other as aging slower (a reciprocal effect), gravitational time dilation is not reciprocal. This means that with gravitational time dilation both observers agree that the clock nearer the center of the gravitational field is slower in rate, and they agree on the ratio of the difference.\n",
"That is, the stronger the gravitational field (and, thus, the larger the acceleration), the more slowly time runs. The predictions of time dilation are confirmed by particle acceleration experiments and cosmic ray evidence, where moving particles decay more slowly than their less energetic counterparts. Gravitational time dilation gives rise to the phenomenon of gravitational redshift and Shapiro signal travel time delays near massive objects such as the sun. The Global Positioning System must also adjust signals to account for this effect.\n",
"Gravitational time dilation is at play e.g. for ISS astronauts. While the astronauts' relative velocity slows down their time, the reduced gravitational influence at their location speeds it up, although at a lesser degree. Also, a climber's time is theoretically passing slightly faster at the top of a mountain compared to people at sea level. It has also been calculated that due to time dilation, the core of the Earth is 2.5 years younger than the crust. \"A clock used to time a full rotation of the earth will measure the day to be approximately an extra 10 ns/day longer for every km of altitude above the reference geoid.\" Travel to regions of space where extreme gravitational time dilation is taking place, such as near a black hole, could yield time-shifting results analogous to those of near-lightspeed space travel.\n",
"Gravitational time dilation is experienced by an observer that, at a certain altitude within a gravitational potential well, finds that his local clocks measure less elapsed time than identical clocks situated at higher altitude (and which are therefore at higher gravitational potential).\n",
"Gravitational time dilation is a form of time dilation, an actual difference of elapsed time between two events as measured by observers situated at varying distances from a gravitating mass. The higher the gravitational potential (the farther the clock is from the source of gravitation), the faster time passes. Albert Einstein originally predicted this effect in his theory of relativity and it has since been confirmed by tests of general relativity.\n",
"Gravitational time dilation is a phenomenon predicted by the theory of General Relativity whereby time passes more slowly in regions of lower gravitational potential. Scientists used the lander to test this hypothesis, by sending radio signals to the lander on Mars, and instructing the lander to send back signals, in cases which sometimes included the signal passing close to the Sun. Scientists found that the observed Shapiro delays of the signals matched the predictions of General Relativity.\n"
] |
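A sketch of where that factor of two comes from, in the standard weak-field, slow-motion expansion (first-order terms only; this is my gloss, not the linked computation):

```latex
% Fractional clock rates relative to a distant static observer, to first order.
% For a circular orbit, Newtonian gravity gives v^2 = GM/r.
\begin{align*}
  \left.\frac{d\tau}{dt}\right|_{\text{velocity}}
    &\approx 1 - \frac{v^2}{2c^2} = 1 - \frac{GM}{2rc^2},\\
  \left.\frac{d\tau}{dt}\right|_{\text{gravity}}
    &\approx 1 - \frac{GM}{rc^2},
\end{align*}
% so the speed-related slowdown, GM/(2rc^2), is exactly half the
% gravitational slowdown, GM/(rc^2).
```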
How exactly is the “stop/start” automatic engine feature in newer cars “better”? | It causes barely any extra wear and tear, and it's better for the environment: all the time you spend not moving with the engine running is time that CO2 and pollutants are spewing out when they don't need to be. Multiply all that idle time by millions and millions of cars and you have a significant CO2 saving.
It saves fuel, and thus cash, too. | [
"From 2011, Stop/Start was added to certain engines (engines with (S/S) are bold in CO2 column), a cleaner, more powerful 1.7 CDTI auto was added, and the petrol engines became slightly more efficient. A six speed automatic gearbox became available for the 1.4T (120) petrol engine.\n",
"The automatic transmission also has the ability to change the shift points, and hold the gears longer when the engine is operating at higher RPMs. This is achieved by pressing the accelerator pedal quickly, which causes an indicator light marked as \"Power\" at the bottom center of the instrument cluster to light up. The European and Australian version came equipped with a center console installed override switch labeled \"AT Econo\" which instructed the computer to utilize the \"Power\" mode, and remain so until the switch was reset to \"Econo\" mode. The \"Power\" mode was also available for engine braking, causing the transmission to downshift 500 rpm earlier than in \"normal\" mode. For 1991, the \"Manual\" button on the gearshift was replaced by a \"Econo\" switch on the gearshift, and the console mounted button was changed from \"AT Econo\" to \"Manual\", so that the transmission was always in \"Econo\" mode until the gearshift mounted switch was disengaged. Unlike the United States and Japanese version, which went into \"Power\" mode only when the accelerator was pushed rapidly, the \"Power\" mode on the European and Australian version was activated by either console or gearshift installed switches.\n",
"The automatic transmission also has the ability to change the shift points, and hold the gears longer when the engine is operating at higher RPM. This is achieved by pressing the accelerator pedal rapidly, which causes the transmission to hold the gear until 5000 rpm before shifting to the next gear. No indicator light appears in the instrument cluster, unlike previous generations. The transmission also has engine over-rev protection by shifting the transmission to the next available gear once 6500 rpm has been achieved, even if the gear selector is in a low gear position.\n",
"The Startix automatic engine starting mechanism was a relay in a small box added to the vehicle's electrical system. It automatically started an engine from cold or if stalled. It was supplied to vehicle manufacturers in the mid 1930s and later as an aftermarket accessory — in the USA by Bendix Aviation Corporation Eclipse Machine Division and in UK by Joseph Lucas & Son both of which businesses made electric self-starters. Such devices are now part of the engine management systems which switch off and on to conserve fuel.\n",
"For certain applications, the slippage inherent in automatic transmissions can be advantageous. For instance, in drag racing, the automatic transmission allows the car to stop with the engine at a high rpm (the \"stall speed\") to allow for a very quick launch when the brakes are released. In fact, a common modification is to increase the stall speed of the transmission. This is even more advantageous for turbocharged engines, where the turbocharger must be kept spinning at high rpm by a large flow of exhaust to maintain the boost pressure and eliminate the turbo lag that occurs when the throttle suddenly opens on an idling engine.\n",
"There was the appeal of the \"power everything\" car which automatically started its engine. Many early automatics had no lock up of their transmission, for example Dynaflow, Powerglide and Ultramatic though Hydramatic did. \n",
"Start/Stop technology on vehicles with automatic transmissions first appeared with the introduction of the new, more powerful (112 kW; 150 hp), B14XFT 1.4 litre direct injection (DI) VVT Turbo petrol engine for model year 2016 and was incorporated on other select petrol and diesel engines paired with automatic transmissions by model year 2018.\n"
] |
how do all the bodies, tanks etc. get cleaned off the battlefields? | Usually they don't. Outside of Kursk you can take a spade west of the city and dig down just a few inches to human remains, shell casings, etc. Vehicles were only removed if they were salvageable or were in the way. After the war, civilians gleaned the site for years for scrap, but anything else was just abandoned. Modern armies recover bodies for burial, but when the battlefields are too massive, sometimes they don't. Remains are still found in Flanders whenever someone digs a well or a new phone line is laid.
In Germany the Allies employed POWs for years in work gangs cleaning up battlefields. Once a tank burns it is useless: the heat from the fire ruins the temper of the armor, so burned-out tanks were just abandoned. Military trucks were used as workhorses all over Europe for years, so people stripped all the wrecks of parts pretty quickly. The hulks got towed to scrap yards. | [
"To this day, the remains of missing soldiers are still found in the countryside around the town of Ypres. Typically, such finds are made during building work or road-mending activities. Any human remains discovered receive a proper burial in one of the war cemeteries in the region. If the remains can be identified, the relevant name is removed from the Menin Gate.\n",
"In the aftermath of a war, large areas of the region of conflict are often strewn with \"war debris\" in the form of abandoned or destroyed hardware and vehicles, mines, unexploded ordnance, bullet casings and other fragments of metal.\n",
"In July 2014, the EPA ordered the Army to clean up the site on the grounds, that the military should not have entrusted Explo Systems to handle such a large amount of the propellant. Three private firms, General Dynamics Corporation, Alliant Techsystems, and the Ashland, Inc., unit known as \"Hercules\" have been participating in the cleanup.\n",
"In the event that a water tank or tanker is contaminated, the following steps should be taken to reclaim the tank or tanker, if it is structurally intact. Additionally, it is recommended that tanks in continuous use are cleaned every five years, and for seasonal use, annually.\n",
"The Depot houses and operates a facility for the repair, restoration, and/or upgrade of infantry weapons such as the Beretta M9 pistol, M16 rifle, and M2 machine gun. Any firearm deemed unusable or obsolete is destroyed on the premises, the materials are reduced to unusable pieces and then sold for scrap to be melted down.\n",
"Rescues of an entrapped victim usually entail building makeshift retaining walls in the grain around them with plywood, sheet metal, tarpaulins, snow fences or any other similar material available. Once that has been done, the next step is creating the equivalent of a cofferdam within the grain from which grain can then be removed by hand, shovel, grain vacuum or other extraction equipment. While some of these techniques have been used to retrieve engulfed victims or their bodies as well, in those cases it is also common to attempt to cut a hole in the side of the storage facility; this requires consulting an engineer to make sure it can be done without compromising the facility's structural integrity. There is also the possibility of a dust explosion, although none are known to have occurred yet during a rescue attempt.\n",
"Recovery can be performed using manual winches or motor-assisted methods of recovery, using ground or vehicle-mounted recovery equipment (mostly winches and cranes), with the recovery of heavier vehicles such as tanks conducted by armoured wheel and track recovery vehicles (ARVs). During peacetime and in non-combat settings, various recovery vehicles can be used. In combat, under enemy fire, armies typically used armoured recovery vehicles, as the armour protects the crew from small arms fire and gives some protection from artillery and heavier fire. \n"
] |
why does our body need uv to create vitamin d when uv exposure increases our risk of skin cancer? | UV light is an energy source; since humans are automatically exposed in varying degrees to this energy source, we have evolved to make use of the "free" energy to create vitamin D. We have also evolved to darken the skin to prevent overexposure to UV, which would increase the risk of skin cancer. Only animals like naked mole rats don't have to concern themselves with some degree of exposure to UV light _URL_0_ | [
"The sun's UV radiation is both a major cause of skin cancer and the best natural source of vitamin D. The risk of skin cancer from too much sun exposure needs to be balanced with maintaining adequate vitamin D levels. Vitamin D deficiency in Australia has also greatly increased, since sunblock also reduces vitamin D production in the skin. Although sunscreens could almost entirely block the solar-induced production of cutaneous previtamin D3 on theoretical grounds or if administered under strictly controlled conditions, in practice they have not been shown to do so. This is mainly due to inadequacies in their application to the skin and because users of sunscreen may also expose themselves to more sun than non-users.\n",
"UV light causes the body to produce vitamin D (specifically, UVB), which is essential for life. The human body needs some UV radiation in order for one to maintain adequate vitamin D levels; however, excess exposure produces harmful effects that typically outweigh the benefits.\n",
"Despite the importance of the sun to vitamin D synthesis, it is prudent to limit the exposure of skin to UV radiation from sunlight and from tanning beds. According to the National Toxicology Program Report on Carcinogens from the US Department of Health and Human Services, broad-spectrum UV radiation is a carcinogen whose DNA damage is thought to contribute to most of the estimated 1.5 million skin cancers and the 8,000 deaths due to metastatic melanoma that occur annually in the United States. The use of sunbeds is reported by the World Health Organization to be responsible for over 450,000 cases of non-melanoma skin cancer and over 10,000 cases of melanoma every year in the U.S., Europe, as well as Australia. Lifetime cumulative UV exposure to skin is also responsible for significant age-associated dryness, wrinkling, elastin and collagen damage, freckling, age spots and other cosmetic changes. The American Academy of Dermatology advises that photoprotective measures be taken, including the use of sunscreen, whenever one is exposed to the sun. Short-term over-exposure causes the pain and itching of sunburn, which in extreme cases can produce more-severe effects like blistering.\n",
"Ultraviolet is also responsible for the formation of bone-strengthening vitamin D in most land vertebrates, including humans (specifically, UVB). The UV spectrum thus has effects both beneficial and harmful to human health.\n",
"With the increase of vitamin D synthesis, there is a decreased incidence of conditions that are related to common vitamin D deficiency conditions of people with dark skin pigmentation living in environments of low UV radiation: rickets, osteoporosis, numerous cancer types (including colon and breast cancer), and immune system malfunctioning. Vitamin D promotes the production of cathelicidin, which helps to defend humans' bodies against fungal, bacterial, and viral infections, including flu. When exposed to UVB, the entire exposed area of body’s skin of a relatively light skinned person is able to produce between 10 - 20000 IU of vitamin D.\n",
"The active UVB wavelengths are present in sunlight, and sufficient amounts of cholecalciferol can be produced with moderate exposure of the skin, depending on the strength of the sun. Time of day, season, and altitude affect the strength of the sun, and pollution, cloud cover or glass all reduce the amount of UVB exposure. Exposure of face, arms and legs, averaging 5–30 minutes twice per week, may be sufficient, but the darker the skin, and the weaker the sunlight, the more minutes of exposure are needed. Vitamin D overdose is impossible from UV exposure; the skin reaches an equilibrium where the vitamin degrades as fast as it is created.\n",
"Sun protection is an important aspect of skin care. Though the sun is beneficial in order for the human body to get its daily dose of vitamin D, unprotected excessive sunlight can cause extreme damage to the skin. Ultraviolet (UVA and UVB) radiation in the sun's rays can cause sunburn in varying degrees, early ageing and increased risk of skin cancer. UV exposure can cause patches of uneven skin tone and dry out the skin.\n"
] |
Is there any particular reason why so many people in the United States claim Cherokee ancestry? | Hello. I'm a mod over on /r/IndianCountry, the second largest and most active Native American subreddit. We recently constructed an FAQ [with a section that answers this specific question](_URL_1_) and links to several sources to back it up.
I would like to note, though, that this is more of a social question with a historical context.
In short, according to Gregory D. Smithers, associate professor of history at Virginia Commonwealth University and author of *The Cherokee Diaspora,* the Cherokee adopted a tradition of intermarriage after contact with the Europeans for several reasons, such as increasing diplomatic ties. Because this was actually encouraged by the Cherokee, it isn't *impossible* that those from the geographic location of traditional Cherokee territory have a Cherokee ancestor.
However, another thing to note is that most people don't actually know and just say they have Cherokee in them because it is the family legend.
The same professor mentioned above, Gregory D. Smithers, also states (bold is mine):
> [**"But after their removal, the tribe came to be viewed more romantically,** especially in the antebellum South, where their determination to maintain their rights of self-government against the federal government took on new meaning. Throughout the South in the 1840s and 1850s, **large numbers of whites began claiming they were descended from a Cherokee great-grandmother.** That great-grandmother was often a “princess,” a not-inconsequential detail in a region obsessed with social status and suspicious of outsiders. By claiming a royal Cherokee ancestor, white Southerners were legitimating the antiquity of their native-born status as sons or daughters of the South, as well as establishing their determination to defend their rights against an aggressive federal government, as they imagined the Cherokees had done. These may have been self-serving historical delusions, but they have proven to be enduring."](_URL_0_)
So the reality of things is that people like to claim something even if they don't have exact proof. One reason is the exotic factor of having native blood. That FAQ I linked touches on several other reasons. Point being, while it is genuinely possible for someone to have Cherokee blood or a Cherokee ancestor, most such claims turn out to be unfounded. | [
"Gregory D. Smithers wrote, a large number of Americans belong in this category: \"In 2000, the federal census reported that 729,533 Americans self-identified as Cherokee. By 2010, that number increased, with the Census Bureau reporting that 819,105 Americans claimed at least one Cherokee ancestor.\" By contrast, as of 2012 there were only 330,716 enrolled Cherokee citizens (Cherokee Nation: 288,749; United Keetoowah Band: 14,300; Eastern Band: 14,667). \n",
"Many tribes, especially those in the Eastern United States, are primarily made up of individuals with an unambiguous Native American identity, despite being predominantly of European ancestry. Point in case, more than 75% of those enrolled in the Cherokee Nation have less than one-quarter Cherokee blood and the current Principal Chief of the Cherokee Nation, Bill John Baker, is 1/32 Cherokee, amounting to about 3%.\n",
"Many tribes, especially those in the Eastern United States, are primarily made up of individuals with an unambiguous Native American identity, despite being predominantly of European ancestry. More than 75% of those enrolled in the Cherokee Nation have less than one-quarter Cherokee blood, and the current Principal Chief of the Cherokee Nation, Bill John Baker, is 1/32 Cherokee, amounting to about 3%.\n",
"Cherokee heritage groups are associations, societies and other organizations located primarily in the United States, which are made up of people who may have distant heritage from a Cherokee tribe, or who identify as having such ancestry. Usually such groups consist of persons who do not qualify for enrollment in any of the three, federally recognized, Cherokee tribes (The Cherokee Nation, The Eastern Band of Cherokee Indians, or The United Keetoowah Band of Cherokee Indians). A total of 819,105 Americans claimed Cherokee ancestry in the 2010 Census, more than any other named ancestral tribal group in the Census.\n",
"There have also been cases of mixed-race Cherokee, of partial African ancestry, with as much as 1/4 Cherokee blood (equivalent to one grandparent being full-blood), but who were not listed as \"Cherokee by blood\" in the Dawes Roll because of having been classified only in the Cherokee Freedmen category. Thus such individuals lost their \"blood\" claim to Cherokee citizenship despite having satisfied the criterion of having a close Cherokee ancestor.\n",
"BULLET::::- Cherokees - a Native American tribe indigenous to the Southeastern United States, whose official tribal organization is Cherokee Nation based in Oklahoma, United States, which has 800,000 members as of 2005, and the total ethnic population in the USA nearly doubled to 1.5 million by 2015. However, anthropological and genetic experts in Native American studies have argued that there could be over two million more Cherokee descendants scattered across North America (the largest number at 300-600,000 in California). The beginnings of the Cherokee diaspora was from their forced removal in the \"Trail of Tears\". Later, thousands of \"Americanized\" Cherokee farmers were forced to settle across the Americas (i.e. Canada, Cuba and South America-an estimated 90-100,000 descendants there ) as the result of the Dawes Act. In the 20th century, many Cherokees served in the U.S. Army during World War I, World War II, the Korean War and the Vietnam War. These soldiers left some descendants by intermarriage with \"war brides\" in Europe and east Asia. Some Cherokees and other American Indians might have emigrated to Europe and elsewhere through the British and Spanish empires. They make up the global Cherokee diaspora.\n",
"The work of archaeologists, linguists and anthropologists has confirmed that the Cherokee were descended from prehistoric indigenous peoples of North America. Scholars have concluded that these prehistoric peoples originated from eastern Asia and migrated across the Bering Straits to North America more than 15,000 years ago. Although Payne's theory of Cherokee origins related to Biblical tribes has been replaced by the facts of Asian origin, his unpublished papers are useful to researchers as a rich source of information on the culture of the Cherokee in the early decades of the 19th century.\n"
] |
how does the new iphone voice command system (siri) work? | I don't know the exact details, but I do know that any query made to the system goes to remote servers with the voice command. There, the technology across multiple servers parses your voice to determine exactly what you say (some say the original creators of the voice recognition technology, Nuance, [is still primarily responsible](_URL_1_)).
After that, a completely separate process then parses the words you said to pull out key words and phrases to interpret what exactly you meant and how to resolve your request. Once that process knows what you want, then it's just a matter of calling the right sub-applications with the right arguments. Like setting a reminder at a certain time, calling a certain person, or looking up some query on [Wolfram Alpha](_URL_0_).
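A toy sketch of the dispatch step just described, purely to illustrate the shape of the pipeline (transcript in, keyword match, handler call with arguments); all the names below are invented and have nothing to do with Apple's or Nuance's actual proprietary systems.

```python
# Toy intent dispatch: transcribed text -> keyword -> handler with arguments.
# Hypothetical names only; this is not how Siri is actually implemented.

def set_reminder(args: str) -> str:
    return f"Reminder created: {args}"

def call_contact(args: str) -> str:
    return f"Calling {args}..."

HANDLERS = {"remind": set_reminder, "call": call_contact}

def dispatch(transcript: str) -> str:
    """Split off the first word as the intent keyword and route the rest."""
    keyword, _, args = transcript.partition(" ")
    handler = HANDLERS.get(keyword.lower())
    return handler(args) if handler else "Sorry, I didn't understand that."

print(dispatch("remind me to buy milk at 6pm"))
print(dispatch("call Alice"))
```

Real systems replace the keyword lookup with statistical language understanding, but the overall flow, recognize, interpret, then call a sub-application with arguments, is the one described above.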
The accuracy of the transcription and the power of Siri's interpretation are what have cost Apple several million dollars in research and purchases to get Siri where it is now. | [
"Apple added Voice Control to its family of iOS devices as a new feature of iPhone OS 3. The iPhone 4S, iPad 3, iPad Mini 1G, iPad Air, iPad Pro 1G, iPod Touch 5G and later, all come with a more advanced voice assistant called Siri. Voice Control can still be enabled through the Settings menu of newer devices. Siri is a user independent built-in speech recognition feature that allows a user to issue voice commands. With the assistance of Siri a user may issue commands like, send a text message, check the weather, set a reminder, find information, schedule meetings, send an email, find a contact, set an alarm, get directions, track your stocks, set a timer, and ask for examples of sample voice command queries. In addition, Siri works with Bluetooth and wired headphones.\n",
"Voice assistants are interfaces that allow a user to complete an action simply by speaking a command. Introduced in October 2011, Apple’s Siri was one of the first voice assistants widely adopted. Siri allowed users of iPhone to get information and complete actions on their device simply by asking Siri.\n",
"The introduced a new automated voice control system called Siri, that allows the user to give the iPhone commands, which it can execute and respond to. For example, iPhone commands such as \"What is the weather going to be like?\" will generate a response such as \"The weather is to be cloudy and rainy and drop to 54 degrees today.\" These commands can vary greatly and control almost every application of the phone. The commands given do not have to be specific and can be used with natural language. Siri can be accessed by holding down the home button for a short amount of time (compared to using the regular function). An impact of Siri, as shown by Apple video messages, is that it is much easier for people to use device functions while driving, exercising, or when they have their hands full. It also means people with trouble reading, seeing, or typing can access the phone more easily.\n",
"BULLET::::- Speech recognition Google introduced voice input in Android 2.1 in 2009 and voice actions in 2.2 in 2010, with up to five languages (now around 40). Siri was introduced as a system-wide personal assistant on the iPhone 4S in 2011 and now supports nearly 20 languages. In both cases, the voice input is sent to central servers to perform general speech recognition and thus requires a network connection for more than simple commands.\n",
"Interactive voice broadcasting (also referred to as interactive voice messaging) programs allow the call recipient to listen to the recorded message and interact with the system by pressing keys on the phone keypad. The system can detect which key is pressed and be programmed to interact and play various messages accordingly. This is a form of Interactive voice response (IVR).\n",
"In Mac OS X 10.7 Lion and earlier, Apple's speech recognition was voice-command oriented only, i.e. not intended for dictation. It can be configured to listen for commands when a hot key is pressed, after being addressed with an activation phrase such as \"Computer\", or \"Macintosh\", or without prompt. A graphical status monitor, often in the form of an animated character, provides visual and textual feedback about listening status, available commands and actions. It can also communicate back with the user using speech synthesis.\n",
"Voice Control was introduced as an exclusive feature of the iPhone 3GS and allows for the controlling of the phone and music features of the phone by voice. There are two ways to activate Voice Control: hold the Home button while in the home screen for a few seconds; or, change the effect of what double-clicking the home button does so it will activate Voice Control (only on iOS 3.x; on iOS 4 or later, double clicking the Home button opens the multitasking bar).\n"
] |
How livable would 2x the Earth's gravity be? | You might want to check out some discussion we've had here recently on the same topic. [Here](_URL_1_) and [here](_URL_0_). | [
"Gravity on the Earth's surface varies by around 0.7%, from 9.7639 m/s on the Nevado Huascarán mountain in Peru to 9.8337 m/s at the surface of the Arctic Ocean. In large cities, it ranges from 9.7760 in Kuala Lumpur, Mexico City, and Singapore to 9.825 in Oslo and Helsinki.\n",
"The precise strength of Earth's gravity varies depending on location. The nominal \"average\" value at Earth's surface, known as is, by definition, 9.80665 m/s. This quantity is denoted variously as , (though this sometimes means the normal equatorial value on Earth, 9.78033 m/s), , gee, or simply (which is also used for the variable local value). \n",
"Genji is a Super-Earth but only moderately so: it has 2.8 times the mass and 1.36 times the diameter of Earth, and 1.5 times Earth's gravity. The side of the planet that constantly faces its companion world Chujo (\"Moonside\") is mostly land, the other hemisphere (\"Starside\") is mostly ocean. The mean surface temperature is +20 °C, slightly warmer than Earth. Although humans in good condition can physically accommodate to the high gravity, the sea level air pressure of 3.1 bars which results from this gravity (as per the barometric formula) requires artificial decompression for safe breathing. Only at 5,800 meters (an altitude found on this planet only in the form of a few small highlands that are cold and arid) the atmospheric pressure drops to Earth-standard 1 bar.\n",
"It is not yet known whether exposure to high gravity for short periods of time is as beneficial to health as continuous exposure to normal gravity. It is also not known how effective low levels of gravity would be at countering the adverse effects on health of weightlessness. Artificial gravity at 0.1\"g\" and a rotating spacecraft period of 30 s would require a radius of only . Likewise, at a radius of 10 m, a period of just over 6 s would be required to produce standard gravity (at the hips; gravity would be 11% higher at the feet), while 4.5 s would produce 2\"g\". If brief exposure to high gravity can negate the harmful effects of weightlessness, then a small centrifuge could be used as an exercise area.\n",
"The Earth's natural geomagnetic field strength varies over the surface of the planet between 0.035 mT and 0.07 mT (35 µT - 70 µT or 350 mG - 700 mG) while the International Standard for the continuous exposure limit is set at 40 mT (400,000 mG or 400 G) for the general public.\n",
"Near the surface of the Earth, the acceleration due to gravity \"g\" = 9.807 m/s (meters per second squared; which might be thought of as \"meters per second, per second\", or 32.18 ft/s as \"feet per second per second\") approximately. For other planets, multiply \"g\" by the appropriate scaling factor. A coherent set of units for \"g\", \"d\", \"t\" and \"v\" is essential. Assuming SI units, \"g\" is measured in meters per second squared, so \"d\" must be measured in meters, \"t\" in seconds and \"v\" in meters per second. \n",
"Assuming that the Earth is a uniform sphere (which is not correct, but is close enough to get an order-of-magnitude estimate) with \"M\" = 5.97 x 10 kg and \"r\" = 6.37 x 10 m, \"U\" is 2.24 x 10 J. This is roughly equal to one week of the Sun's total energy output. It is 37.5 MJ/kg, 60% of the absolute value of the potential energy per kilogram at the surface.\n"
] |
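The centrifuge numbers quoted in the artificial-gravity excerpt above all follow from the single relation a = ω²r with ω = 2π/T; the sketch below assumes only that formula.

```python
import math

G0 = 9.80665  # m/s^2, standard gravity

def radius_for(a: float, period_s: float) -> float:
    """Radius giving centripetal acceleration a at rotation period T."""
    omega = 2 * math.pi / period_s
    return a / omega**2

def period_for(a: float, radius_m: float) -> float:
    """Rotation period giving centripetal acceleration a at radius r."""
    return 2 * math.pi / math.sqrt(a / radius_m)

print(f"0.1 g, T = 30 s -> r ~ {radius_for(0.1 * G0, 30):.0f} m")  # ~22 m
print(f"1 g at r = 10 m -> T ~ {period_for(G0, 10):.1f} s")        # ~6.3 s
print(f"2 g at r = 10 m -> T ~ {period_for(2 * G0, 10):.1f} s")    # ~4.5 s
```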
why are chemical weapons worse than regular ones? is gassing a town worse than bombing it, assuming the number of innocent deaths is the same? | Chemical weapons are worse because:
1) They kill slowly.
2) They are not as controllable as they drift in the air and on the water. This means they cause a lot of collateral damage.
3) They often contaminate and kill those attempting to treat the injured, and there are often few or no treatments that actually work.
4) They contaminate the environment for a long time killing people years after the attack. There are still some battlefields from WWI that are toxic and make people sick or even kill them when they spend time in them. | [
"The use of poison gas by all major belligerents throughout World War I constituted war crimes as its use violated the 1899 Hague Declaration Concerning Asphyxiating Gases and the 1907 Hague Convention on Land Warfare, which prohibited the use of \"poison or poisoned weapons\" in warfare. Widespread horror and public revulsion at the use of gas and its consequences led to far less use of chemical weapons by combatants during World War II.\n",
"A more sophisticated target-hardening approach must consider industrial and other critical industrial infrastructure that could be attacked. Terrorists need not import chemical weapons if they can cause a major industrial accident such as the Bhopal disaster or the Halifax Explosion. Industrial chemicals in manufacturing, shipping, and storage need greater protection, and some efforts are in progress. To put this risk into perspective, the first used 160 tons of chlorine. Industrial shipments of chlorine, widely used in water purification and the chemical industry, travel in 90 or 55 ton tank cars.\n",
"Fire is considered the third most dangerous hazard, after direct blast effects and fallout radiation. It is noted that during the Bombing of Dresden, \"Most casualties were caused by the inhalation of hot gases and carbon monoxide\"\n",
"The properties of some ignitable liquids make them dangerous fuels. Many ignitable liquids have high vapor pressures, low flash points and a relatively wide range between their upper and lower explosive limit. This allows ignitable liquids to ignite easily, and when mixed in a proper air-fuel ratio, readily explode. Many arsonists who use generous amounts of gasoline have been seriously burned or killed igniting their fire.\n",
"The properties of some ignitable liquids make them dangerous accelerants. Many ignitable liquids have high vapor pressures, low flash points and a relatively wide range between their upper and lower explosive limit. This allows ignitable liquids to ignite easily, and when mixed in a proper air-fuel ratio, readily explode. Many arsonists who use generous amounts of gasoline have been seriously burned or killed igniting their fire.\n",
"Chemical weapons were widely used by all sides during the conflict and wind frequently carried poison gas into nearby towns where civilians did not have access to gas masks or warning systems. An estimated 100,000-260,000 civilian casualties were affected by the use of chemical weapons during the conflict and tens of thousands more died from the effects of such weapons in the years after the conflict ended.\n",
"It is more difficult to determine the toxicity of chemical mixtures than a pure chemical, because each component displays its own toxicity, and components may interact to produce enhanced or diminished effects. Common mixtures include gasoline, cigarette smoke, and industrial waste. Even more complex are situations with more than one type of toxic entity, such as the discharge from a malfunctioning sewage treatment plant, with both chemical and biological agents.\n"
] |
why is it so much louder when you whistle with two fingers? | I can't do that. I just wanted you to know I both envy and respect your ability to whistle with your fingers | [
"Pucker whistling is the most common form in much Western music. Typically, the tongue tip is lowered, often placed behind the lower teeth, and pitch altered by varying the position of the tongue. Although varying the degree of pucker will change the pitch of a pucker whistle, expert pucker whistlers will generally only make small variations to the degree of pucker, due to its tendency to affect purity of tone. Pucker whistling can be done by either only blowing out or blowing in and out alternately. In the 'only blow out' method, a consistent tone is achieved, but a negligible pause has to be taken to breathe in. In the alternating method there is no problem of breathlessness or interruption as breath is taken when one whistles breathing in, but a disadvantage is that many times, the consistency of tone is not maintained, and it fluctuates.\n",
"When split tones occur unintentionally, they are referred to as double buzzing. This phenomenon is widely understood to occur due to fatigue. David Hickman writes \"In most cases, double buzzes occur because of sore or bruised lips. This causes the player to tilt the mouthpiece unconsciously at an abnormal angle to relieve pressure on the sore area. In these cases rest over several days is the best remedy.\"\n",
"Whistling techniques do not require the vibration of the vocal cords: they produce a shock effect of the compressed air stream inside the cavity of the mouth and/or of the hands. When the jaws are fixed by a finger, the size of the hole is stable. The air stream expelled makes vibrations at the edge of the mouth. The faster the air stream is expelled, the higher is the noise inside the cavities. If the hole (mouth) and the cavity (intra-oral volume) are well matched, the resonance is tuned, and the whistle is projected more loudly. The frequency of this bioacoustical phenomenon is modulated by the morphing of the resonating cavity that can be, to a certain extent,\n",
"Clapping hands or snapping one’s fingers whilst standing next to perpendicular sheets of corrugated iron (for example, in a fence) will produce a high-pitched echo with a rapidly falling pitch. This is due to a sequence of echoes from adjacent corrugations. \n",
"Because the clapper strikes the bell as it rising to the mouth upwards position, it rests against the bell's soundbow after the strike, and the peak strike intensity decays away quickly when the clapper helps to dissipate the vibration energy of the bell. This enables rapid successive strikes of multiple bells, such as in change ringing, without excessive overlap and consequent blurring of successive strikes. In addition, the movement of the bell imparts a doppler effect to the sound, as the strike occurs whilst the bell is still moving.\n",
"The cross section of a common whistle is shown in the figure on the right. The cavity is a closed end cylinder ( inch diameter), but with the cylinder axis lateral to the jet axis. The orifice is inch wide and the sharp edge is inch from the jet orifice. When blown weakly, the sound is mostly broad band with a weak tone. When blown more forcefully, a strong tone is established near 2800 Hz and adjacent bands are at least 20 dB down. If the whistle is blown yet more forcefully, the level of the tone increases and the frequency increases only slightly suggesting Class I hydrodynamic feedback and operation only in Stage I.\n",
"BULLET::::- A strident lisp results in a high frequency whistle of hissing sound caused by stream passing between the tongue and the hard surface. In the extensions to the IPA, whistled sibilants are transcribed and .\n"
] |
Would it be possible to use time dilation to travel into the future? | In terms of physics, yes. The technology for that doesn't exist right now, though. We can send things at roughly 20 km/s, and we'd need to go something like ten thousand times that fast before the effects become significant (a rough calculation after the sources below makes this concrete). | [
"Theoretically, time dilation would make it possible for passengers in a fast-moving vehicle to advance further into the future in a short period of their own time. For sufficiently high speeds, the effect is dramatic. For example, one year of travel might correspond to ten years on Earth. Indeed, a constant 1 g acceleration would permit humans to travel through the entire known Universe in one human lifetime.\n",
"Relativistic time dilation allows a traveler to experience time more slowly, the closer his speed is to the speed of light. This apparent slowing becomes noticeable when velocities above 80% of the speed of light are attained. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time (see diagram). Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth.\n",
"It is uncertain if time travel to the past is physically possible. Forward time travel, outside the usual sense of the perception of time, is an extensively-observed phenomenon and well-understood within the framework of special relativity and general relativity. However, making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology. As for backwards time travel, it is possible to find solutions in general relativity that allow for it, but the solutions require conditions that may not be physically possible. Traveling to an arbitrary point in spacetime has a very limited support in theoretical physics, and usually only connected with quantum mechanics or wormholes, also known as Einstein-Rosen bridges.\n",
"Manned travel at a speed not close to the speed of light, would require either that we overcome our own mortality with technologies like radical life extension or traveling with a generation ship. If traveling at a speed closer to the speed of light, time dilation would allow intergalactic travel in a timespan of decades of on-ship time.\n",
"Gravitational time dilation is a phenomenon predicted by the theory of General Relativity whereby time passes more slowly in regions of lower gravitational potential. Scientists used the lander to test this hypothesis, by sending radio signals to the lander on Mars, and instructing the lander to send back signals, in cases which sometimes included the signal passing close to the Sun. Scientists found that the observed Shapiro delays of the signals matched the predictions of General Relativity.\n",
"There is a great deal of observable evidence for time dilation in special relativity and gravitational time dilation in general relativity, for example in the famous and easy-to-replicate observation of atmospheric muon decay. The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light. Time dilation may be regarded in a limited sense as \"time travel into the future\": a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. This can be achieved by traveling at relativistic speeds or through the effects of gravity.\n",
"Time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, transversable wormholes, and Alcubierre drive. The theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed. These semiclassical arguments led Stephen Hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel, but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.\n"
] |
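The rough calculation promised in the answer above: time dilation scales with the Lorentz factor γ = 1/√(1 − v²/c²), which is negligibly different from 1 at today's spacecraft speeds and only reaches 10 (one year aboard per ten years on Earth, the figure in the first excerpt) around 99.5% of light speed. A minimal sketch:

```python
import math

C = 299_792_458.0  # m/s, speed of light

def gamma(v: float) -> float:
    """Lorentz factor; elapsed ship time = Earth time / gamma."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Today's fast probes, ~20 km/s: the effect is parts in a billion.
print(f"20 km/s: gamma - 1 = {gamma(20_000) - 1:.2e}")   # ~2.2e-9

# Speed needed for gamma = 10 ('one year aboard, ten on Earth').
v = C * math.sqrt(1 - 1 / 10**2)
print(f"gamma = 10 at v = {v / C:.4f} c")                # ~0.9950 c
```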
Are there multiple types of Electromagnetic Fields? | > I've seen it described as a "field produced by charged objects", but in other places it sounds more like one continuous thing that extends through all space
The electromagnetic field extends through *all space.* It simply has an essentially zero value away from charges (though self-propagating disturbances, namely light, can travel without charges). It doesn't have to be zero; the Higgs field, for instance, has a non-zero expectation value throughout all space.
When we say a charge or magnet generates an EM field, this is shorthand for saying they give a nonzero value to regions in a shared universal EM field. It's just very small and close to zero in most places in the universe.
"In electromagnetism, the electromagnetic field is generally thought of as being made of two things, the electric field and magnetic field. They are both three-dimensional vector fields, related to each other by Maxwell's equations. A second approach is to combine them in a single object, the six-dimensional electromagnetic tensor, a tensor or bivector valued representation of the electromagnetic field. Using this Maxwell's equations can be condensed from four equations into a particularly compact single equation:\n",
"In Maxwell's theory of electromagnetism, one of the most important types of an electromagnetic field are those representing electromagnetic radiation. Of these, the most important examples are the electromagnetic plane waves, in which the radiation has planar wavefronts moving in a specific direction at the speed of light. Of these, the most basic are the monochromatic plane waves, in which only one frequency component is present. This is precisely the phenomenon which our solution will model in terms of general relativity.\n",
"The most common description of the electromagnetic field uses two three-dimensional vector fields called the electric field and the magnetic field. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as (electric field) and (magnetic field).\n",
"In modern physics, the electromagnetic field is understood to be not a \"classical\" field, but rather a quantum field; it is represented not as a vector of three numbers at each point, but as a vector of three quantum operators at each point. The most accurate modern description of the electromagnetic interaction (and much else) is \"quantum electrodynamics\" (QED), which is incorporated into a more complete theory known as the \"Standard Model of particle physics\".\n",
"There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field).\n",
"There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described as electric potential and electric current. In Faraday's law, magnetic fields are associated with electromagnetic induction and magnetism, and Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents.\n",
"The electromagnetic four-potential is defined to be \"A\" = (-\"φ\", A), and the electromagnetic four-current \"j\" = (-\"ρ\", j). The electromagnetic field at any point in spacetime is described by the antisymmetric (0,2)-rank electromagnetic field tensor\n"
] |
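For reference, the objects the excerpts above mention without showing can be written out; this is the standard textbook presentation (sign conventions vary between texts), not a quotation from the articles. The "particularly compact single equation" of the first excerpt is, in the spacetime-algebra formulation, ∇F = J.

```latex
% Electromagnetic field tensor assembled from E and B, plus Maxwell's
% equations in covariant form (SI units, metric signature (+,-,-,-)).
F^{\mu\nu} =
\begin{pmatrix}
0      & -E_x/c & -E_y/c & -E_z/c \\
E_x/c  & 0      & -B_z   & B_y    \\
E_y/c  & B_z    & 0      & -B_x   \\
E_z/c  & -B_y   & B_x    & 0
\end{pmatrix},
\qquad
\partial_\mu F^{\mu\nu} = \mu_0 J^\nu,
\qquad
\partial_{[\alpha} F_{\beta\gamma]} = 0 .
```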
How does salt damage concrete on a molecular level? | Normally the embedded steel in concrete (be it rebar or welded wire fabric) is protected from corrosion by an effect called passivation, caused by the high pH (around 13) of concrete. When water containing dissolved chlorides makes its way to the steel, through the concrete pore structure or, more typically, cracks, the chlorides negate that passivation and allow the steel to corrode.
When steel corrodes it expands in volume, which causes internal tensile stresses in the concrete. Since concrete is very poor in tension, it tends to fail, which leads to delamination of concrete layers and eventually visible spalls (pot holes).
So it's really not so much the salt damaging the concrete as the salt causing corrosion of the embedded steel, which causes the damage. Other things, like carbonation, can eliminate the passivation of the rebar, but those mechanisms tend to take much longer.
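A minimal sketch of the electrochemistry behind this, using the usual simplified half-reactions for rusting steel. The chloride ion's role is catalytic: it breaks down the passive film without being consumed. The rust products then occupy several times the volume of the parent iron (roughly two to six times is the commonly quoted range, an assumption here rather than a figure from this answer), which is the source of the tensile stress.

```latex
% Anode (steel):            iron dissolves
% Cathode (moist, aerated): oxygen is reduced
% Combined product:         Fe(OH)_2, which oxidises further to hydrated rust
\mathrm{Fe \rightarrow Fe^{2+} + 2e^{-}}, \qquad
\mathrm{O_2 + 2H_2O + 4e^{-} \rightarrow 4OH^{-}}, \qquad
\mathrm{Fe^{2+} + 2OH^{-} \rightarrow Fe(OH)_2}
```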
My instinct is that "drive-way safe" is a buzzword. There are non chloride based de-icing solutions out there, but they are much more expensive and generally not quite as effective.
I am an engineer that is focused on the restoration of concrete parking structures, so this is an area of expertise. | [
"Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonatation, chlorides, sulfates and distillate water). The micro fungi Aspergillus Alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor; leaching aluminum, iron, calcium, and silicon.\n",
"Sulfates in solution in contact with concrete can cause chemical changes to the cement, which can cause significant microstructural effects leading to the weakening of the cement binder (chemical sulfate attack). Sulfate solutions can also cause damage to porous cementitious materials through crystallization and recrystallization (salt attack). Sulfates and sulfites are ubiquitous in the natural environment and are present from many sources, including gypsum (calcium sulfate) often present as an additive in 'blended' cements which include fly ash and other sources of sulfate. With the notable exception of barium sulfate, most sulfates are slightly to highly soluble in water. These include acid rain where sulfur dioxide in the airshed is dissolved in rainfall to produce sulfurous acid. In lightning storms, the dioxide is oxidised to trioxide making the residual sulfuric acid in rainfall even more highly acidic. Local government infrastructure is most commonly corroded by sulfate arising from the oxidation of sulfide which occurs when bacteria (for example in sewer mains) reduce the ever-present hydrogen sulfide gas to a film of sulfide (S-) or bi-sulfide (HS-) ions. This reaction is reversible, both readily oxidising on exposure to air or oxygenated stormwater, to produce sulfite or sulfate ions and acidic hydrogen ions in the reaction HS + HO+ O - 2H + SO-. The corrosion often present in the crown (top) of concrete sewers is directly attributible to this process - known as crown rot corrosion.\n",
"Salt Attack occurs when salts dissolved in water are carried into the stone. The two commonest effects are efflorescence and spalling. Salts that expand on crystallization in capillary gaps can cause surface spalling. For example, various magnesium and calcium salts in sea water expand considerably on drying by taking on water of crystallization. However, even sodium chloride, which does not include water of crystallization, can exert considerable expansive forces as its crystals grow.\n",
"Salt contamination beneath a coating, such as paint on steel, can cause adhesion and corrosion problems due to the hygroscopic nature of salt. Its tendency to attract water through a permeable coating creates a build-up of water molecules between substrate and coating. These molecules, together with salt and other oxidation agents trapped during coating or migrating through the coating, create an electrolytic cell, causing corrosion. Blast cleaning is frequently used to clean surfaces before coating; however, with salt contamination, blast cleaning may increase the problem by forcing salt into the base material. Washing a surface with deionized water before coating is a common solution.\n",
"Acid Attack. Acid-soluble stone materials such as the calcite in marble, limestone and travertine, as well as the internal cement that binds the resistant grains in sandstone, react with acidic solutions on contact, or on absorbing acid-forming gases in polluted air, such as oxides of sulfur or nitrogen. Acid erodes the stone, leaving dull marks on polished surfaces. In time it may cause deep pitting, eventually totally obliterating the forms of statues, memorials and other sculptures. Even mild household acids, including cola, wine, vinegar, lemon juice and milk, can damage vulnerable types of stone. The milder the acid, the longer it takes to etch calcite-based stone; stronger acids can cause irreparable damage in seconds.\n",
"Various types of aggregate undergo chemical reactions in concrete, leading to damaging expansive phenomena. The most common are those containing reactive silica, that can react (in the presence of water) with the alkalis in concrete (KO and NaO, coming principally from cement). Among the more reactive mineral components of some aggregates are opal, chalcedony, flint and strained quartz. Following the alkali-silica reaction (ASR), an expansive gel forms, that creates extensive cracks and damage on structural members. On the surface of concrete pavements the ASR can cause pop-outs, i.e. the expulsion of small cones (up to about in diameter) in correspondence of aggregate particles.\n",
"Concrete degradation may have various causes. Concrete can be damaged by fire, aggregate expansion, sea water effects, bacterial corrosion, calcium leaching, physical damage and chemical damage (from carbonatation, chlorides, sulfates and non-distilled water). This process adversely affects concrete exposed to these damaging stimuli.\n"
] |
how/why does one company make so many different, unrelated products? | It's called "diversification," and it's regarded as a smart move because the more a company diversifies its products, the less it is hurt if one product takes a hit (for instance, if it needs to issue a recall, or if a competitor comes up with something better, or if a change in the marketplace at large makes the product less desirable -- like if you were selling bread when the Atkins craze hit, it would be nice to also have a sub-brand selling bacon).
[30 Rock had a pretty great moment] (_URL_0_) explaining why some people find this phenomenon a bit worrisome. | [
"In some cases, the original technology supplier did not need to manufacture the product itself—it merely patented a specific design, then sold the actual production rights to multiple overseas clients. This resulted in some countries producing separate but nearly identical products under different licenses. \n",
"BULLET::::- Several different producers exist, with big differences in their production and selling costs, which greatly impacts on the mix design and curing process. This makes each brand very different in potential uses. Even though the different brands may look and feel similar, caution must be used when selecting the versions and brands for specific use since they are not all the same or usable in the same way. [?reference]\n",
"A business using a part will often use a different part number than the various manufacturers of that part do. This is especially common for catalog hardware, because the same or similar part design (say, a screw with a certain standard thread, of a certain length) might be made by many corporations (as opposed to unique part designs, made by only one or a few).\n",
"A company has a product A which is well established within its market. The same company decides to market a product B, which happens to be somewhat similar to product A, therefore both belonging to the same market, attracting similar clients. This leads to both products being forced to share the market, reducing the market share of product A, as part of it is eaten up by product B.\n",
"A ‘Product’ is \"something or anything that can be offered to the customers for attention, acquisition, or consumption and satisfies some want or need.\" (Riaz & Tanveer (n.d); Goi (2011) and Muala & Qurneh (2012)). The product is the primary means of demonstrating how a company differentiates itself from competitive market offerings. The differences can include quality, reputation, product benefits, product features, brand name or packaging.\n",
"The number of different categories of a company is referred to as \"width of product mix\". The total number of products sold in all lines is referred to as \"length of product mix\". If a line of products is sold with the same brand name, this is referred to as family branding. When you add a new product to a line, it is referred to as a \"line extension\". When you have a single saleable item distinguishable by size, appearance, price or some other attribute in your product line, it is called SKU-Stock Keeping Unit.\n",
"The company's products are divided into two lines, with very similar entries but sold toward different audiences. They are marketed through large retail chains, particularly Toys \"R\" Us and Target, as well as Amazon.com.\n"
] |
What are the hazards of Fusion technology? | People discussing fusion reactors usually focus on the use of abundant Deuterium extracted from water as the fuel. While Deuterium would be part of the fuel mix, most fusion reactor designs are built around a combined deuterium-tritium fuel source. The ITER reactor for example [will use a 1:1 mix of D-T fuel](_URL_0_). The D-T fusion reaction produces an excess neutron (the relevant reactions are sketched below, after the sources). These neutrons have applications such as producing more tritium for the reactor's fuel, but they will also induce radioactivity in the materials that make up the structure and lining of the reaction chamber. The end result will be the production of nuclear waste - radioactive metals and the like. It will be nowhere near the volume of radioactive waste produced by fission reactors, but it will be produced nonetheless. Some designs have also called for the fusion reactor to be used to breed plutonium from the neutrons and U238 lining the reaction chamber. The plutonium would be used to fuel fission-based reactors but has the added issue of being a nuclear weapons material - something that could be considered a hazard of the fusion reactor. | [
"The world of \"Fusion\" is centuries in our future, when a series of galactic wars have led to a spiraling arms race between \"tekkers and splicers\" — that is, between those who take a technological and technocratic route to improving humanity, and those who have abandoned humanity altogether through genetic engineering. The story involves the exploits of a group of space mercenaries in an era when humans who have not been enhanced either genetically or cybernetically, are becoming extremely rare.\n",
"In the case of an accident (or sabotage), it is expected that a fusion reactor would release far less radioactive pollution than would an ordinary fission nuclear station. Furthermore, ITER's type of fusion power has little in common with nuclear weapons technology, and does not produce the fissile materials necessary for the construction of a weapon. Proponents note that large-scale fusion power would be able to produce reliable electricity on demand, and with virtually zero pollution (no gaseous CO, SO, or NO by-products are produced).\n",
"Because of EST's claimed lack of need for an external stabilizing magnetic field, EPS hope to be able to create small efficient fusion reactors by colliding magnetically accelerated ESTs together at speeds high enough to induce ballistic nuclear fusion.\n",
"At this time, KMS Fusion was indisputably the most advanced laser-fusion laboratory in the world. Unfortunately, outright harassment from the AEC only increased after the announcement of these results. According to one source in the faculty of the University of Michigan, the campaign against KMS Fusion culminated with a massive incursion into the KMS Fusion facilities by federal agents, who effectively put an end to its operations by confiscating essential materials on the grounds that, inter alia, all information concerning the production of nuclear energy is classified information which belongs exclusively to the federal government.\n",
"KMS Fusion was the only private sector company to pursue controlled thermonuclear fusion research using laser technology. Despite limited resources and numerous business problems KMS successfully demonstrated fusion from the Inertial Confinement Fusion (ICF) process. They achieved compression of a deuterium-tritium pellet from laser-energy in December 1973, and on May 1, 1974 carried out the world’s first successful laser-induced fusion. Neutron-sensitive nuclear emulsion detectors, developed by Nobel Prize winner Robert Hofstadter, were used to provide evidence of this discovery.\n",
"It has been claimed that it is possible to conceive of a crude, deliverable, pure fusion weapon, using only present-day, unclassified technology. The weapon design weighs approximately 3 tonnes, and might have a total yield of approximately 3 tonnes of TNT. The proposed design uses a large explosively pumped flux compression generator to produce the high power density required to ignite the fusion fuel. From the point of view of explosive damage, such a weapon would have no clear advantages over a conventional explosive, but the massive neutron flux could deliver a lethal dose of radiation to humans within a 500-meter radius (most of those fatalities would occur over a period of months, rather than immediately).\n",
"\"Thermonuclear\" fusion is one of the methods being researched in the attempts to produce fusion power. If Thermonuclear fusion becomes favorable to use, it would significantly reduce the world's carbon footprint.\n"
] |
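A minimal sketch of the reaction behind that excess neutron, using the standard deuterium-tritium values (the 17.6 MeV total and its split between the products are textbook figures, not taken from the passages above):

```latex
% D-T fusion: the neutron carries ~80% of the released energy, and it is
% this neutron that activates the reactor's structural materials.
\[
{}^{2}_{1}\mathrm{D} + {}^{3}_{1}\mathrm{T} \;\longrightarrow\;
{}^{4}_{2}\mathrm{He}\;(3.5\ \mathrm{MeV}) + \mathrm{n}\;(14.1\ \mathrm{MeV})
\]
```

Because the neutron is electrically neutral, it cannot be confined by the reactor's magnetic fields and ends up absorbed in the chamber walls, which is exactly the activation pathway the answer describes.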
why are space rockets so hard to handle? | You need to look up what they call "the rocket equation".
Let's say you want to throw 10 kg into orbit. Orbit speed means you have to accelerate it to something like 9.4 km/s (that's per _second_). That's pretty fast. To accelerate your 10 kg AND your rocket to that speed you need a certain amount of thrust. That means bigger engines, or engines that burn longer, both of which require more fuel. But that fuel has mass that you ALSO have to accelerate, so now you have to bring MORE fuel to accelerate the other fuel - but wait, the mass you have to accelerate drops as you burn fuel, so now you need less fuel to accelerate it... and now you are solving a differential equation (a worked example of the resulting numbers follows this entry).
Now throw in multiple stages (why you need multiple stages I won't get into), the fuel reserve you need if you want to land your rocket the way SpaceX does, and you have some hard math. If your payload changes weight at all, you have to recalculate the whole shebang.
As for the control - the aerodynamic forces acting on a rocket that is accelerating to that kind of speed - and before it leaves the atmosphere - are tremendous; and even relatively minute shifts in the center of gravity of your rocket (as the fuel gets burned up) or a shift in payload (remember that resupply rocket in The Martian that blew up?) means you have to have control surfaces or nozzle gimbals to constantly adjust the thrust so it's aligned through the center of mass, or things start tumbling and the forces rip it apart. | [
"Their simplicity also makes solid rockets a good choice whenever large amounts of thrust are needed and the cost is an issue. The Space Shuttle and many other orbital launch vehicles use solid-fueled rockets in their boost stages (solid rocket boosters) for this reason.\n",
"While comparatively inefficient for low speed use, rockets are relatively lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency. Rockets are not reliant on the atmosphere and work very well in space.\n",
"Larger rockets are normally launched from a launch pad that provides stable support until a few seconds after ignition. Due to their high exhaust velocity——rockets are particularly useful when very high speeds are required, such as orbital speed at approximately . Spacecraft delivered into orbital trajectories become artificial satellites, which are used for many commercial purposes. Indeed, rockets remain the only way to launch spacecraft into orbit and beyond. They are also used to rapidly accelerate spacecraft when they change orbits or de-orbit for landing. Also, a rocket may be used to soften a hard parachute landing immediately before touchdown (see retrorocket).\n",
"Rocket vehicles have a reputation for unreliability and danger; especially catastrophic failures. Contrary to this reputation, carefully designed rockets can be made arbitrarily reliable. In military use, rockets are not unreliable. However, one of the main non-military uses of rockets is for orbital launch. In this application, the premium has typically been placed on minimum weight, and it is difficult to achieve high reliability and low weight simultaneously. In addition, if the number of flights launched is low, there is a very high chance of a design, operations or manufacturing error causing destruction of the vehicle.\n",
"Rockets are used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight and exploration of other planets. While comparatively inefficient for low speed use, they are very lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency.\n",
"A low-propellant space drive has long been a goal for space exploration, since the propellant is dead weight that must be lifted and accelerated with the ship all the way from launch until the moment it is used (see Tsiolkovsky rocket equation). Gravity assists, solar sails, and beam-powered propulsion from a spacecraft-remote location such as the ground or in orbit, are useful because they allow a ship to gain speed without propellant. However, some of these methods do not work in deep space. Shining a light out of the ship provides a small force from radiation pressure, i.e., using photons as a form of propellant, but the force is far too weak, for a given amount of input power, to be useful in practice.\n",
"Rockets are the oldest type and are mainly used when extremely high speeds or extremely high altitudes are needed. Due to the extreme, typically hypersonic, exhaust velocity and the necessity of oxidiser being carried on board, they consume propellant extremely quickly. For this reason, they are not practical for routine transportation.\n"
] |
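A minimal numeric sketch of the rocket equation described above. All figures are assumptions for illustration (the 9.4 km/s target comes from the answer; the 3.0 km/s exhaust velocity and the dry mass are invented, hypothetical values, not any real vehicle's):

```python
import math

def propellant_mass(payload_kg: float, dry_kg: float,
                    delta_v: float, v_exhaust: float) -> float:
    """Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1).
    Solve for the propellant needed to reach delta_v."""
    m_final = payload_kg + dry_kg                 # mass left when tanks run dry
    m_initial = m_final * math.exp(delta_v / v_exhaust)
    return m_initial - m_final                    # propellant = m0 - m1

# 10 kg payload, 100 kg of tanks/engines, 9.4 km/s target, 3.0 km/s exhaust
fuel = propellant_mass(10, 100, 9_400, 3_000)
print(f"propellant needed: {fuel:,.0f} kg")       # ~2,400 kg, about 22x dry mass
```

Note the exponential: every kilogram added to the dry mass multiplies the propellant bill, which is the "fuel to accelerate the fuel" spiral the answer describes.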
What methods were used to estimate the population of pre-Columbian America? How reliable were they? | There were a number of methods used that resulted in widely varying estimates. Charles Mann provides a brief but thorough discussion of methods used to estimate pre-contact populations in the New World in "1491: New Revelations of the Americas before Columbus" (Vintage, 2011). Some researchers used early records archived in church and governmental facilities, then attempted to correct for population crashes caused by the plagues. Others based their estimates on the number of households and estimated household size. Sherburne Cook was among the more prolific students of prehistoric populations, publishing papers from the 1950s to the 1970s. In the mid-1970s an American Antiquity memoir was published that took initial population estimates from skeletal populations (usually from excavated cemeteries), applied fertility and mortality estimates, and extrapolated from there. Kroeber, in the "Handbook of the Indians of California" (1970, California Book Co), employed house and house-pit numbers from early ethnographic surveys. Baumhoff (1958), in California Athabaskan Groups (University of California Anthropological Reports, Berkeley), developed population estimates for Northern California tribes based on the availability of fish resources.
All have their merits and their shortcomings. Mann notes that Henige's "Numbers from Nowhere: The American Indian Contact Population Debate" (1998: Univ. of Oklahoma Press) is the pinnacle of vilification of indigenous population estimates and estimators.
This is a tiny sample of the reams of population studies that have been conducted. They all seem to have the same basic problems: the veracity of the basis for the original estimates (censuses, house counts, fish populations and skeletal counts) and the estimated impacts of the plagues. The issue is further complicated by the biases of researchers and readers. Some tend to maximize the estimates; others are much more conservative (a toy extrapolation after this entry shows how quickly the assumptions compound). | [
"Given the fragmentary nature of the evidence, even semi-accurate pre-Columbian population figures are impossible to obtain. Scholars have varied widely on the estimated size of the indigenous populations prior to colonization and on the effects of European contact. Estimates are made by extrapolations from small bits of data. In 1976, geographer William Denevan used the existing estimates to derive a \"consensus count\" of about 54 million people. Nonetheless, more recent estimates still range widely.\n",
"Estimates of the pre-Columbian population of what today constitutes the U.S. vary significantly, ranging from William M Denevan's 3.8 million in his 1992 work \"The Native Population of the Americas in 1492\", to 18 million in Henry F Dobyns's \"Their Number Become Thinned\" (1983). Henry F Dobyns' work, being the highest single point estimate by far within the realm of professional academic research on the topic, has been criticized for being \"politically motivated\". Perhaps Dobyns' most vehement critic is David Henige, a bibliographer of Africana at the University of Wisconsin, whose \"Numbers From Nowhere\" (1998) is described as \"a landmark in the literature of demographic fulmination\". \"Suspect in 1966, it is no less suspect nowadays,\" Henige wrote of Dobyns's work. \"If anything, it is worse.\"\n",
"In his book \"The Native Population of the Americas in 1492\" (1976), he provided an influential estimate of the Pre-Columbian population of the Americas, which he placed at 57.3 million, plus or minus 25 percent. The second edition (1992), after reviewing more recent literature, he revised his estimate to 54 million.\n",
"When considering population estimates by world region, it is worth noting that population history of the indigenous peoples of the Americas before the 1492 voyage of Christopher Columbus has proven difficult to establish, with many historians arguing for an estimate of 50 million people throughout the Americas, and some estimating that populations may have reached 100 million people or more. It is therefore estimated by some that populations in Mexico, Central, and South America could have reached 37 million by 1492. Additionally, the population estimate of 2 million for North America for the same time period represents the low end of modern estimates, and some estimate the population to have been as high as 18 million.\n",
"Studies in American Demography is a 1940 book, written by Walter F. Willcox and published by Cornell University Press. It was one of the first publications to estimate the world population had exceeded 1 billion people in 1800.\n",
"Estimating the number of Native Americans living in what is today the United States of America before the arrival of the European explorers and settlers has been the subject of much debate. While it is difficult to determine exactly how many Natives lived in North America before Columbus, estimates range from a low of 2.1 million (Ubelaker 1976) to 7 million people (Russell Thornton) to a high of 18 million (Dobyns 1983). A low estimate of around 1 million was first posited by the anthropologist James Mooney in the 1890s, by calculating population density of each culture area based on its carrying capacity.\n",
"The population figure of indigenous peoples of the Americas before the 1492 Spanish voyage of Christopher Columbus has proven difficult to establish. Scholars rely on archaeological data and written records from European settlers. Most scholars writing at the end of the 19th century estimated that the pre-Columbian population was as low as 10 million; by the end of the 20th century most scholars gravitated to a middle estimate of around 50 million, with some historians arguing for an estimate of 200 million or more. Contact with the Europeans led to the European colonization of the Americas, in which millions of immigrants from Europe eventually settled in the Americas.\n"
] |
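To illustrate how sensitive the household-count method above is to its inputs, here is a toy extrapolation. Every number is invented for illustration; none comes from the cited studies:

```python
# Toy version of the household-count method: each input is an assumption,
# and modest disagreements compound into enormous ranges.
house_count = 400                      # houses found in a survey (invented)
people_per_house = (4, 8)              # low/high household-size guesses
plague_mortality = (0.50, 0.90)        # assumed fraction lost before the count

low  = house_count * people_per_house[0] / (1 - plague_mortality[0])
high = house_count * people_per_house[1] / (1 - plague_mortality[1])
print(f"pre-contact estimate: {low:,.0f} to {high:,.0f} people")
# 3,200 to 32,000 -- a tenfold spread from defensible-looking assumptions
```

A tenfold spread from one hypothetical village mirrors the wide spread in the continent-level estimates quoted in the context passages.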
how do dual sim phones work | A [dual SIM](_URL_1_) phone can hold / use 2 [SIM cards](_URL_0_).
The SIM card holds an identifying number that identifies the subscriber (the phone's own hardware has a separate identifier), so the carrier can set up a subscription and associate the SIM with a phone number.
So dual SIM phones can answer/handle two separate phone numbers. These can be on the same provider (Verizon, for example) or on different providers (one Verizon, one AT&T). They are popular with businesspeople, who can carry a single device but keep both a personal number and an official business number on it (a toy model of the switching behavior follows this entry). | [
"Dual SIM refers to mobile phones that support use of multiple SIM cards. When a second SIM card is installed, the phone either allows users to switch between two separate mobile network services manually, has hardware support for keeping both connections in a \"standby\" state for automatic switching, or has individual transceivers for maintaining both network connections at once.\n",
"Most dual mode handsets require two identifying cards (one SIM and one RUIM), though some dual-mode phones (for example, the iPhone 4S) only require one SIM and one ESN. Not all dual SIM handsets are dual mode (for example dual SIM GSM phones).\n",
"Dual SIM phones are mainstream in many countries where phones are normally sold unlocked. Dual SIMs are popular for separating personal and business calls in locations where lower prices apply to calls between clients of the same provider, where a single network may lack comprehensive coverage, and for travel across national and regional borders. In countries where dual SIM phones are the norm, people who require only one SIM simply leave the second SIM slot empty. \n",
"Dual-SIM devices have two SIM card slots for the use of two SIM cards, from one or multiple carriers. Dual-SIM mobile phones come with two slots for SIMs in various locations such as: one behind the battery and another on the side of the phone; both slots behind the battery; or on the side of the phone if the device does not have a removable battery. Multiple-SIM devices are commonplace in developing markets such as in Africa, East Asia, the Indian subcontinent and Southeast Asia, where variable billing rates, network coverage and speed make it desirable for consumers to use multiple SIMs from competing networks. Dual SIM phones are also useful to separate one's personal phone number from a business phone number, without having to carry multiple devices. Some popular devices, such as the BlackBerry KeyOne have dual-SIM variants, however dual-SIM devices are not common in the US or Europe due to lack of demand.\n",
"Dual SIM switch phones, such as the Nokia C1-00, are effectively a single SIM device as both SIMs share the same radio, and thus are only able to place or receive calls and messages on one SIM at the time. They do, however, have the added benefit of alternating between cards when necessary.\n",
"Multi-SIM allows switching among (up to) 12 stored numbers from the phone's main menu. A new menu entry in subscriber’s phone automatically appears after inserting the multi-SIM card into the cell phone.\n",
"In their marketing materials Samsung use the term \"Dual SIM Always on” to describe the Duos phones, although technically the term is misleading, since it does not mean quite what is says – both SIM cards are not always on. All phones with this feature are regular Dual SIM Stand-by (DSS) phones with 1 transceiver (radio) – 2nd SIM is always connected when a call is in progress on SIM 1 and vice versa.\n"
] |
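A toy model of the dual-SIM-standby (DSS) behavior described in the context passages, where one radio is shared between two cards. All identifiers and numbers here are made up, and real phones implement this in baseband firmware, not application code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sim:
    imsi: str        # subscriber identity stored on the card
    number: str      # phone number the carrier maps to that identity
    carrier: str

class DualStandbyPhone:
    """Both SIMs are reachable on standby, but one shared transceiver
    means a call on one line makes the other line unreachable."""
    def __init__(self, sim_a: Sim, sim_b: Sim):
        self.sims = [sim_a, sim_b]
        self.active_call: Optional[Sim] = None

    def incoming_call(self, dialed: str) -> str:
        if self.active_call is not None:
            return "unreachable (radio busy on the other SIM)"
        for sim in self.sims:
            if sim.number == dialed:
                self.active_call = sim
                return f"ringing on the {sim.carrier} line"
        return "no such number on this phone"

phone = DualStandbyPhone(Sim("310150123456789", "555-0101", "Verizon"),
                         Sim("310410987654321", "555-0202", "AT&T"))
print(phone.incoming_call("555-0202"))   # ringing on the AT&T line
print(phone.incoming_call("555-0101"))   # unreachable -- radio in use
```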
What was happening pre World War 1? | Here are some answers I've given previously on the subject
[The Balkan Wars] (_URL_1_)
[Lead up to and outbreak of WWI] (_URL_2_)
[Balkan Nationalism and the Outbreak of WWI] (_URL_0_)
The 1880s and 1890s saw the formation of the Triple Alliance (Germany, Austria-Hungary, Italy) and the Franco-Russian Alliance; the Franco-British Entente followed in 1904, and the Anglo-German naval arms race began around the turn of the century.
The early 1900s saw the First and Second Moroccan Crises, the Bosnian Crisis, the First and Second Balkan Wars and the Scutari Crisis. A land arms race began in 1912, starting with Russia, then Germany and France.
There was growing tension. Germany's pointlessly aggressive stance in Morocco, combined with the naval arms race, alienated the British and drew them closer to France, while events in the Balkans led to increasing Austro-Russian antagonism.
However, considering the lengthy affairs these crises were, and the important issues at stake, few civilian and even political observers believed that an assassination in Sarajevo could possibly lead to war. The pace of events in the July Crisis was much greater than in previous crises, and so decision makers found themselves under greater pressure. | [
"In 1914, the First World War broke out. For the next four years fighting raged across Europe, the Middle East, Africa, and Asia. On 8 January 1918, United States President Woodrow Wilson issued a statement that became known as the Fourteen Points. In part, this speech called for Germany to withdraw from the territory it had occupied and for the formation of a League of Nations. During the fourth quarter of 1918, the Central Powers began to collapse. In particular, the German military was decisively defeated on the Western Front and the German navy mutinied, prompting domestic uprisings that became known as the German Revolution.\n",
"World War I was a significant turning point in the political, cultural, economic, and social climate of the world. It is considered to mark the end of the Second Industrial Revolution and the \"Pax Britannica\". The war and its immediate aftermath sparked numerous revolutions and uprisings. The Big Four (Britain, France, the United States, and Italy) imposed their terms on the defeated powers in a series of treaties agreed at the 1919 Paris Peace Conference, the most well known being the German peace treaty—the Treaty of Versailles. Ultimately, as a result of the war the Austro-Hungarian, German, Ottoman, and Russian Empires ceased to exist, with numerous new states created from their remains. However, despite the conclusive Allied victory (and the creation of the League of Nations during the Peace Conference, intended to prevent future wars), a second world war would follow just over twenty years later.\n",
"There were several main causes of World War I, which broke out unexpectedly in June–August 1914, including the conflicts and hostility of the previous four decades. Militarism, alliances, imperialism, and ethnic nationalism played major roles. However the immediate origins of the war lay in the decisions taken by statesmen and generals during the July Crisis of 1914, which was sparked by the assassination of Archduke Franz Ferdinand, heir to the throne of Austria-Hungary, by a Serbian secret organization, the Black Hand.\n",
"World War I (also known as the First World War and the Great War) was a global military conflict that embroiled most of the world's great powers, assembled in two opposing alliances: the Entente and the Central Powers. The immediate cause of the war was the June 28, 1914 assassination of Archduke Franz Ferdinand, heir to the Austro-Hungarian throne, by Gavrilo Princip, a Bosnian Serb citizen of Austria–Hungary and member of the Black Hand. The retaliation by Austria–Hungary against Serbia activated a series of alliances that set off a chain reaction of war declarations. Within a month, much of Europe was in a state of open warfare, resulting in the mobilization of more than 65 million European soldiers, and more than 40 million casualties—including approximately 20 million deaths by the end of the war.\n",
"World War I broke out in July 1914, pitting the Central Powers (Germany, Austria-Hungary, the Ottoman Empire, and Bulgaria) against the Allied Powers (Britain, France, Russia, Serbia, and several other countries). The war fell into a long stalemate after the Allied Powers halted the German advance at the September 1914 First Battle of the Marne. Wilson and House sought to position the United States as a mediator in the conflict, but European leaders rejected Houses's offers to help end the conflict. From 1914 until early 1917, Wilson's primary foreign policy objective was to keep the United States out of the war in Europe. He insisted that all government actions be neutral, stating that the United States \"must be impartial in thought as well as in action, must put a curb upon our sentiments as well as upon every transaction that might be construed as a preference of one party to the struggle before another.\" The United States sought to trade with both the Allied Powers and the Central Powers, but the British imposed a blockade of Germany. After a period of negotiations, Wilson essentially assented to the British blockade; the U.S. had relatively little direct trade with the Central Powers, and Wilson was unwilling to wage war against Britain over trade issues.\n",
"The outbreak of the First World War in 1914 was precipitated by the rise of nationalism in Southeastern Europe as the Great Powers took sides. The 1917 October Revolution led the Russian Empire to become the world's first communist state, the Soviet Union. The Allies, led by Britain, France, and the United States, defeated the Central Powers, led by the German Empire and Austria-Hungary, in 1918. During the Paris Peace Conference the Big Four imposed their terms in a series of treaties, especially the Treaty of Versailles. The war's human and material devastation was unprecedented.\n",
"1917–1918: World War I: On April 6, 1917, the United States declared war with Germany and on December 7, 1917, with Austria-Hungary. Entrance of the United States into the war was precipitated by Germany's submarine warfare against neutral shipping and the Zimmermann Telegram.\n"
] |
What percentage of new immigrants learned "fluent" English in the 19th century? | To which country? | [
"Before the arrival of the British, the official language for hundreds of years, and one of the educated elite had been Italian, but this was downgraded by the increased use of English. In 1934, English and Maltese were declared the sole official languages. That year only about 15% of the population could speak Italian fluently. This meant that out of 58,000 males qualified by age to be jurors, only 767 could qualify by language, as only Italian had until then been used in the courts.\n",
"Cultural similarities and a common language allowed English immigrants to integrate rapidly and gave rise to a unique Anglo-American culture. An estimated 3.5 million English immigrated to the U.S. after 1776. English settlers provided a steady and substantial influx throughout the 19th century. \n",
"More than 89 percent of residents identified English as their first language at the time of the 2006 census, while 6 percent identified German and just over 1 percent each identified Spanish and French as their first language learned. The next most common languages were Ukrainian, Chinese, Dutch, and Polish.\n",
"As of 2000, speakers of English as a first language accounted for 90.60% of all residents, while those who spoke Spanish made up 4.13%, Tagalog 1.00%, French 0.47%, Arabic 0.44%, German 0.43%, Vietnamese at 0.31%, Russian was 0.21% and Italian made up 0.17% of the population.\n",
"As of 2000, speakers of English as a first language accounted for 78.52% of the population, while Spanish was at 9.37%, French Creole at 7.13%, French at 2.31%, Italian at 1.22%, as well as Portuguese being at 0.68%, German being 0.55%, and Polish as a mother tongue of 0.17% of all residents.\n",
"As of 2000, speakers of English as their first language were 89.18%, while 4.64% spoke Spanish as theirs. Other languages spoken as a first language are Italian 1.93%, French 1.22%, German at 1.06%, and Portuguese at 0.71%.\n",
"As of 2000, English as a first language accounted for 77.57% of all residents, while Spanish accounted for 15.49%, French Creole made up 3.11%, Yiddish totaled 1.55%, both Arabic and German were at 0.77%, and Italian was the mother tongue for 0.69% of the population.\n"
] |
why are space rockets so hard to handle? | You need to be registered at an address so they know which constituency you are in, so your vote can be cast in the right place. If people weren't registered to addresses, voting would be chaotic and it would be difficult to detect fraud. | [
"Within the jurisdiction of the United Kingdom, the right to register for voting extends to all British, Irish, Commonwealth and European Union citizens. British citizens living overseas may register for up to 15 years after they were last registered at an address in the UK. Citizens of the European Union (who are not Commonwealth citizens or Irish citizens) can vote in European and local elections in the UK, elections to the Scottish Parliament and Welsh and Northern Ireland Assemblies (if they live in those areas) and some referendums (based on the rules for the particular referendum); they are not able to vote in UK Parliamentary general elections. It is possible for someone to register before their 18th birthday as long as they will reach that age before the next revision of the register.\n",
"Voter registration in the United States is an independent responsibility, so citizens choose whether they want to register or not. This led to only 64% of the voting age population being registered to vote in 2016. The United States is one of the sole countries that requires its citizens to register separately from voting. The lack of automatic registration contributes to the issue that there are over a third of eligible citizen in the United States that are not registered to vote.\n",
"In the United Kingdom, voter registration was introduced for all constituencies as a result of the Reform Act 1832, which took effect for the election of the same year. Since 1832, only those registered to vote can do so, and the government invariably runs nonpartisan get out the vote campaigns for each election to expand the franchise as much as possible.\n",
"(CN and EU member) In the United Kingdom, full voting rights and rights to stand as a candidate are given to citizens of Ireland and to \"qualifying\" citizens of Commonwealth countries; this is because they are not regarded in law as foreigners. This is a legacy of the situation that existed before 1983 where they had the status of British subjects.\n",
"Crown servants and British Council employees (as well as their spouses who live abroad) employed in a post outside the UK can register by making a Crown Servant declaration, allowing them to vote in all UK elections.\n",
"Recently discharged Uniformed Service members and their accompanying families or overseas citizens returning to the United States may become residents of a state just before an election, but not in time to register by the state's deadline and vote. The adoption of special procedures for late registration would allow these citizens to register and vote in the upcoming election.\n",
"The Representation of the People Acts 1983 and 2000 confer the franchise on British subjects and citizens of the Commonwealth and Ireland who are resident in the UK. In addition, nationals of other Member States of the European Union have the right to vote in local elections and elections to the European Parliament. The right to vote also includes the right to a secret ballot and the right to stand as a candidate in elections. Certain persons are excluded from participation including peers, aliens, infants, persons of unsound mind, holders of judicial office, civil servants, members of the regular armed forces or police, members of any non-Commonwealth legislature, members of various commissions, boards and tribunals, persons imprisoned for more than one year, bankrupts and persons convicted of corrupt or illegal election practices. The restriction on the participation of clergy was removed by the House of Commons (Removal of Clergy Disqualification) Act 2001.\n"
] |
how does facebook "share bait" work. what are the spammers getting out of getting it? | Money. The more people you can attract to your Facebook page/website, the more money you can get out of ads.
Plus, if you have bad intentions, you can try to infect users when they visit your website, which also mostly translates into money. | [
"Using Facebook, Tinder is able to build a user profile with photos that have already been uploaded. Basic information is gathered and the users' social graph is analyzed. Candidates who are most likely to be compatible based on geographical location, number of mutual friends, and common interests are streamed into a list of matches. Based on the results of potential candidates, the app allows the user to anonymously like another user by swiping right or pass by swiping left on them. If two users like each other it then results in a \"match\" and they are able to chat within the app. The app is used in about 196 countries.\n",
"Video sharing sites, such as YouTube, are now frequently targeted by spammers. The most common technique involves spammers (or spambots) posting links to sites, on the comments section of random videos or user profiles. With the addition of a \"thumbs up/thumbs down\" feature, groups of spambots may constantly \"thumbs up\" a comment, getting it into the top comments section and making the message more visible.\n",
"Facebook and Twitter are not immune to messages containing spam links. Spammers hack into accounts and send false links under the guise of a user's trusted contacts such as friends and family. As for Twitter, spammers gain credibility by following verified accounts such as that of Lady Gaga; when that account owner follows the spammer back, it legitimizes the spammer.\n",
"Social networking spam is spam directed specifically at users of internet social networking services such as Google+, Facebook, Pinterest, LinkedIn, or MySpace. Experts estimate that as many as 40% of social network accounts are used for spam. These spammers can utilize the social network's search tools to target certain demographic segments, or use common fan pages or groups to send notes from fraudulent accounts. Such notes may include embedded links to pornographic or other product sites designed to sell something. In response to this, many social networks have included a \"report spam/abuse\" button or address to contact. Spammers, however, frequently change their address from one throw-away account to another, and are thus hard to track.\n",
"Spamming on online social networks is quite prevalent. A primary motivation to spam arises from the fact that a user advertising a brand would like others to see them and they typically publicize their brand over the social network. Detecting such spamming activity has been well studied by developing a semi-automated model to detect spams. For instance, text mining techniques are leveraged to detect regular activity of spamming which reduces the viewership and brings down the reputation (or credibility) of a public pages maintained over Facebook. In some online social networks like Twitter, users have evolved mechanisms to report spammers which has been studied and analyzed.\n",
"Po.st is a social sharing platform for web users and website publishers to share content on social media such as Facebook, Twitter, and StumbleUpon, among others. the product also includes a link shortener built to provide brands with insights on clicking users, therefore segmenting them for paid media targeting. This provides marketers information regarding what content is being copied and pasted into an email or on social media, so-called \"dark social\" channels.\n",
"Malicious chatbots are frequently used to fill chat rooms with spam and advertisements, by mimicking human behavior and conversations or to entice people into revealing personal information, such as bank account numbers. They are commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.\n"
] |
How was the first Operating System made when there were no computers to make it on? | Our modern notion of a computer didn't arise suddenly, fully formed, from theoretical concepts. In fact, the entire idea of an operating system isn't necessary for a computer to work at all (and most microcontrollers don't run one).
To start with, what exactly is an operating system? Well, it's hard to pinpoint one or two defining characteristics, but most operating systems exist to perform two distinct functions: abstracting the details of the underlying hardware resources so application programmers (the people who write stuff like office suites) don't have to worry about them, and managing those resources. So you can see a computer could just be programmed directly on the hardware without an operating system; you can program wherever you want, as long as you have a way of transferring your instructions to some storage medium the computer understands. Actually, most early computers had no storage at all, and had to be programmed directly by plugging in thousands of cables and switches on huge control panels!
The situation improved a little with the introduction of punched cards (early 1950s) to replace these panels, but everything remained more or less the same until the introduction and commercial viability of the transistor (late 1950s). With the advent of reliable, mass-produced computers came a phenomenon of role separation, where the programmers were no longer the operators, who were no longer the maintainers. To share these very expensive computers between users, people came up with ways to time-share their punched cards, which led to the creation of batch systems (a toy sketch of a batch monitor follows this entry). These involved one machine to read the cards and write them onto magnetic tape, people to take the tape to the main computer, and another machine to print the results from the output tape onto human-readable paper.
Modern operating systems appeared with the increasing automation of this tedious and error-prone process, with more and more features becoming incorporated into the actual computer and programmers having to know less and less about the hardware they were using. IBM's OS/360 was the first operating system where you pretty much only had to know you were running an IBM System/360 for your program to work, and the trend continues into our days.
So you see there we didn't create an operating system in one fell swoop to run on the Analytical Engine or valve-based computers, but instead they evolved as a natural consequence of our eternal desire to do less and less work to get more and more results out of our tools. Some of our current terminology regarding operating systems still betrays their historical origins, in fact. | [
"Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their modern and more complex forms until the early 1960s. Hardware features were added, that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.\n",
"The first operating system used for real work was GM-NAA I/O, produced in 1956 by General Motors' Research division for its IBM 704. Most other early operating systems for IBM mainframes were also produced by customers.\n",
"Early operating systems were very diverse, with each vendor or customer producing one or more operating systems specific to their particular mainframe computer. Every operating system, even from the same vendor, could have radically different models of commands, operating procedures, and such facilities as debugging aids. Typically, each time the manufacturer brought out a new machine, there would be a new operating system, and most applications would have to be manually adjusted, recompiled, and retested.\n",
"Computer operating systems (OSes) provide a set of functions needed and used by most application programs on a computer, and the links needed to control and synchronize computer hardware. On the first computers, with no operating system, every program needed the full hardware specification to run correctly and perform standard tasks, and its own drivers for peripheral devices like printers and punched paper card readers. The growing complexity of hardware and application programs eventually made operating systems a necessity for everyday use.\n",
"In 1977 the first commercially produced personal computers were invented in the US: the Apple II, the PET 2001 and the TRS-80. They were quickly made available in Canada. In 1980 IBM introduced the IBM PC. Microsoft provided the operating system, through IBM, where it was referred to as PC DOS and as a stand-alone product known as MS-DOS. This created a rivalry for personal computer operating systems, Apple and Microsoft, which endures to this day. A large variety of special-use software and applications have been developed for use with these operating systems. There have also been a multiplicity of hardware manufacturers which have produced a wide variety of personal computers, and the heart of these machines, the central processing unit, has increased in speed and capacity by leaps and bounds. There were 1,560,000 personal computers in Canada by 1987, of which 650,000 were in homes, 610,000 in businesses and 300,000 in educational institutions. Canadian producers of micro-computers included Sidus Systems, 3D Microcomputers, Seanix Technology and MDG Computers. Of note is the fact that these machines were based on digital technology, and their widespread and rapid introduction to Canada at the same time that the telephone system was undergoing a similar transformation would herald an era of rapid technological advance in the field of communication and computing.\n",
"Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called \"fixed-program computers\". Since the term \"CPU\" is generally defined as a device for software (computer program) execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.\n",
"IBM initially sold its computers without any software, expecting customers to write their own; programs were manually initiated, one at a time. Later, IBM provided compilers for the newly developed higher-level programming languages Fortran, COMTRAN and later COBOL. The first operating systems for IBM computers were written by IBM customers who did not wish to have their very expensive machines ($2M USD in the mid-1950s) sitting idle while operators set up jobs manually. These first operating systems were essentially scheduled work queues. It is generally thought that the first operating system used for real work was GM-NAA I/O, produced by General Motors' Research division in 1956. IBM enhanced one of GM-NAA I/O's successors, the SHARE Operating System, and provided it to customers under the name IBSYS. As software became more complex and important, the cost of supporting it on so many different designs became burdensome, and this was one of the factors which led IBM to develop System/360 and its operating systems.\n"
] |
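A toy sketch of the "resident monitor" idea these early batch systems grew into: a loop that runs queued jobs back to back so the expensive machine never sits idle. Jobs here are plain Python functions standing in for card-image programs; this illustrates the concept only, not how GM-NAA I/O actually worked internally:

```python
from collections import deque

def resident_monitor(job_queue: deque) -> None:
    """Run batched jobs in succession; after each job finishes
    (or crashes), control returns here and the next job is loaded."""
    while job_queue:
        job = job_queue.popleft()       # fetch the next job from the "tape"
        try:
            job()                       # hand the machine over to the job
        except Exception as err:        # a bad job must not halt the batch
            print(f"job aborted: {err}")

jobs = deque([lambda: print("payroll run"),
              lambda: print("trajectory calculation"),
              lambda: 1 / 0,            # a crashing job
              lambda: print("inventory report")])
resident_monitor(jobs)                  # the last job still runs
```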
since cellphones are here to stay and commercial flight is here to stay, why haven't they figured out how to make it so we can keep our phones on. | You *can* have them on. You can't use them as a phone.
_URL_0_
One, cell towers aren't designed for phones 30,000 feet in the air, which can hit many towers at once (a back-of-the-envelope calculation follows this entry).
Two, on a long flight having people babbling on phones would cause some passengers to politely invite others to step outside. | [
"In Europe, regulations and technology have allowed the limited introduction of the use of passenger mobile phones on some commercial flights, and elsewhere in the world many airlines are moving towards allowing mobile phone use in flight. Many airlines still do not allow the use of mobile phones on aircraft. Those that do often ban the use of mobile phones during take-off and landing.\n",
"that the calls reached their destinations. Marvin Sirbu, professor of Engineering and Public Policy at Carnegie Mellon University said on September 14, 2001, that \"The fact of the matter is that cell phones can work in almost all phases of a commercial flight.\" Other industry experts said that it is possible to use cell phones with varying degrees of success during the ascent and descent of commercial airline flights.\n",
"Mobile phone use on aircraft is starting to be allowed with several airlines already offering the ability to use phones during flights. Mobile phone use during flights used to be prohibited and many airlines still claim in their in-plane announcements that this prohibition is due to possible interference with aircraft radio communications. Shut-off mobile phones do not interfere with aircraft avionics. The recommendation why phones should not be used during take-off and landing, even on planes that allow calls or messaging, is so that passengers pay attention to the crew for any possible accident situations, as most aircraft accidents happen on take-off and landing.\n",
"The U.S. Federal Communications Commission (FCC) currently prohibits the use of mobile phones aboard \"any\" aircraft in flight. The reason given is that cell phone systems depend on frequency reuse, which allows for a dramatic increase in the number of customers that can be served within a geographic area on a limited amount of radio spectrum, and operating a phone at an altitude may violate the fundamental assumptions that allow channel reuse to work.\n",
"On 31 October 2013, the FAA issued a press release entitled \"FAA to Allow Airlines to Expand Use of Personal Electronics\" in which it announced that \"airlines can safely expand passenger use of Portable Electronic Devices (PEDs) during all phases of flight.\" This new policy does not include cell phone use in flight, because, as the press release states, \"The FAA did not consider changing the regulations regarding the use of cell phones for voice communications during flight because the issue is under the jurisdiction of the Federal Communications Commission (FCC).\"\n",
"Some airlines have installed technologies to allow phones to be connected within the airplane as it flies. Such systems were tested on scheduled flights from 2006 and in 2008 several airlines started to allow in-flight use of mobile phones.\n",
"Many people may prefer a ban on mobile phone use in flight as it prevents undue amounts of noise from mobile phone chatter. AT&T has suggested that in-flight mobile phone restrictions should remain in place in the interests of reducing the nuisance to other passengers caused by someone talking on a mobile phone near them.\n"
] |
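A back-of-the-envelope look at the tower problem above. This is plain line-of-sight geometry, ignoring all radio-propagation detail, so treat it as an illustration of scale only:

```python
import math

R_EARTH = 6_371_000            # mean Earth radius, meters
altitude = 9_144               # 30,000 ft in meters

# Distance to the horizon from altitude h: d = sqrt(2 * R * h)
d = math.sqrt(2 * R_EARTH * altitude)
area = math.pi * d**2 / 1e6    # km^2 within radio line of sight

print(f"horizon distance: {d / 1000:,.0f} km")   # ~341 km
print(f"area in view:     {area:,.0f} km^2")     # ~366,000 km^2
```

A phone on the ground is meant to talk to one nearby cell; one at cruise altitude has line of sight to towers across hundreds of thousands of square kilometers, which is exactly the frequency-reuse problem the FCC passage in the context describes.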
What were the implications of Operation Unthinkable and just how close did it come to fruition? | I do not know much about the inner workings of the British military and government in April and May of 1945, and so I cannot say how seriously the British themselves took this plan, but I can say that Winston Churchill's goal of stunting Soviet influence in postwar Europe did not align with the aims of Harry Truman's government at the time. And since the plan obviously relied heavily on American power, we can say that the plan never came close to fruition. Note that "Operation Unthinkable" was not the only measure that Churchill thought about taking in order to counter the rise of the Soviet sphere in April of 1945. Churchill had contacted Truman directly with the hope of convincing the American president to renege on the agreement between FDR and Stalin regarding a "Soviet sphere" by ordering the American army to continue its march to Prague.
In brief, the American army had entered western Czechoslovakia in early May, and plans put forth by Eisenhower had initially called for the liberation "beyond the Karlsbad-Pilsen-Budweis Line [i.e., western Czechoslovakia] as far as the upper Elbe [i.e., at least the west half of Prague]." When the Soviets protested that this violated the agreement made at Yalta, however, Eisenhower instead ordered the army to halt. Churchill, for his part, argued that "there can be little doubt that the liberation of Prague and as much as possible of the territory of western Czechoslovakia by your forces might make the whole difference in the post-war situation [in the region]." Truman, however, showed no interest in pursuing a blatant anti-Soviet policy at this time and instead allowed the Red Army to liberate Prague (Stalin remained unsure whether Truman would honor the agreement reached with FDR at Yalta, however, and quickly diverted forces aimed at Berlin to instead liberate Prague).
So in early May, when the British finished outlining their "Operation Unthinkable," Truman demonstrated clearly that he would not go so far as to challenge the Soviet Union even by liberating Prague. The idea that he might then wage war on the Soviet Union in order to quell Soviet influence in Poland--influence that FDR and Stalin had already agreed at Yalta was a necessary component of Soviet postwar foreign policy--was an absurd assumption by whoever had put together Operation Unthinkable. There was no chance, at all, that it would be implemented as it was originally envisioned.
**Sources**:
Ambassador to France Jefferson Caffery to Secretary of State Edward Stettinius, May 6, 1945, *FRUS,* 1945, IV: 447-448.
Winston Churchill to Harry Truman, April 30, 1945, *FRUS,* 1945, IV: 446. The language of this telegram is nearly identical to language used earlier by Eden.
John Erickson, *Stalin's War with Germany: The Road to Berlin* (New Haven, Conn.: Yale University Press, 1999), 625, 783-786.
Operation Unthinkable, excerpt: _URL_0_ | [
"Operation Unthinkable was a code name of two related, unrealised plans by the Western Allies against the Soviet Union. They were ordered by British prime minister Winston Churchill in 1945 and developed by the British Armed Forces' Joint Planning Staff at the end of World War II in Europe.\n",
"As noted in the foreword, Operation Fortitude was an Allied counter-intelligence operation run during World War II. Its goal was to convince the German military that the planned D-Day landings were to occur at Calais and not Normandy. As a part of Fortitude the fictitious First United States Army Group (FUSAG) was created. FUSAG used fake tanks, aircraft, buildings and radio traffic to create an illusion of an army being formed to land at Calais. So far – actual history. Follet then reminds the reader that had even a single German spy discovered the deception and reported it, this entire elaborate plan might have been derailed and the invasion of Nazi-occupied Europe would have become far more difficult and risky. The book's plot is built around this issue – however, it begins at a far earlier stage of the war.\n",
"Operation Mass Appeal was an operation set up by the British Secret Intelligence Service (MI6) in the runup to the 2003 invasion of Iraq. It was a campaign aimed at planting stories in the media about Iraq's alleged weapons of mass destruction. The existence of the operation was exposed in December 2003, although officials denied that the operation was deliberately disseminating misinformation. The MI6 operation secretly incorporated the United Nations Special Commission investigating Iraq's alleged stockpiles of Weapons of Mass Destruction (WMD) into its propaganda efforts by recruiting UN weapons inspector and former MI6 collaborator Scott Ritter to provide copies of UN documents and reports on their findings to MI6.\n",
"The planning of Operation Fortitude came under the auspices of the London Controlling Section (LCS), a secret body set up to manage Allied deception strategy during the war. However, the execution of each plan fell to the various theatre commanders, in the case of Fortitude this was Supreme Headquarters Allied Expeditionary Force (SHAEF) under General Dwight D. Eisenhower. A special section, Ops (B), was established at SHAEF to handle the operation (and all of the theatre's deception warfare). The LCS retained responsibility for what was called \"Special Means\"; the use of diplomatic channels and double-agents.\n",
"Operation Reservist was an Allied military operation during the Second World War. Part of Operation Torch (the Allied invasion of North Africa), it was an attempted landing of troops directly into the harbour at Oran in Algeria.\n",
"Operation Abstention was a code name given to a British invasion of the Italian island of Kastelorizo, off Turkey, during the Second World War, in late February 1941. The goal was to establish a base to challenge Italian naval and air supremacy on the Greek Dodecanese islands. The British landings were challenged by Italian land, air and naval forces, which forced the British troops to re-embark amidst some confusion and led to recriminations between the British commanders for underestimating the Italians.\n",
"In early August, the Contingency Plan was modified by including a strategic bombing campaign that was intended to destroy Egypt's economy, and thereby hopefully bring about Nasser's overthrow. In addition, a role was allocated to the 16th Independent Parachute Brigade, which would lead the assault on Port Said in conjunction with the Royal Marine landing. The commanders of the Allied Task Force led by General Stockwell rejected the Contingency Plan, which Stockwell argued failed to destroy the Egyptian military.\n"
] |
nuclear fusion. | Well, we *don't*, is the short answer. But let's not stop there.
Small atoms work in a strange way. Normally if you think about two separate objects that you want to put together — think Legos or whatever here — you find that you have to *do work* in order to put them together. You have to pick up the Legos, line them up just right, then *squeeze* in order to get them to stick.
Small atoms are different. Small atoms, like hydrogen atoms, actually *want* to stick together. In other words, they *release* energy when they snap together, and it *takes energy* to pull them apart.
*Big* atoms, like plutonium atoms, are just the opposite. They're so big and heavy and wobbly that it takes more energy to hold them together than it does to break them into pieces. That's how nuclear *fission* works. You take something that's just barely holding together, then you give it a little nudge and it comes apart into pieces, and you use the energy of those pieces flying apart to boil water to turn a steam turbine … or blow up a city, whatever. Same thing, different scales.
But small atoms actually release energy when they stick together to form bigger atoms. So you can, in principle, take two hydrogen atoms and stick them together and find that energy is released in the process — like putting two special Legos together and finding they get *hot* when they click into place.
But there's a challenge. Even though small atoms want to stick together, they naturally push each other apart, like the north poles of two bar magnets. If you bring the two atoms *close* to each other, but not too close, they'll move apart, because they repel each other. So in order to get them *close enough* to stick together — and thus release energy — you have to work against that natural repulsion. (A worked tally of the energy payoff appears after this entry.)
Think of it like rolling a ball up the slope of a volcano. Up at the top of the volcano is a hole, a nice, deep one, and you want the ball to go into the hole — and the ball *wants* to go into the hole. If the ball rolled toward the hole, it would drop right in. But before you can get the ball to go into the hole, you have to get it up the slope. If you just nudged the ball up the slope, it would roll a little ways, but then stop and roll back down again. So in order to get the ball into the hole, you have to give it a real kick, really push it hard, so it climbs all the way up the slope and falls in.
The way we give atoms a real kick is to make them *hot.* Hot atoms are really moving fast, they're rocketing all over the place. So if you take a lot of hydrogen atoms — in a gas — and heat them up, you'll eventually get to the point where if two of the atoms happen to hit, they'll stick, and release energy.
The trick with that is, though, that hot gases create *pressure.* If you heat up a gas, it'll exert pressure on the walls of whatever container you're holding it in until the pressure ruptures the container and the gas comes rushing out (which, by the way, cools the gas back down to equilibrium temperature again).
So in order to get energy out of nuclear fusion, you have to first start with hydrogen gas, then you have to build a *really really strong* container to hold it, then you have to heat the gas up *a lot* to the point where fusion starts to happen. When that happens, you start to see pairs of hydrogen atoms hitting each other and sticking — which again, releases energy, thus heating up the gas *even more* … which ruptures your container and makes a pretty big explosion.
That's called a hydrogen bomb.
But in principle, if you built a *really really really super-incredibly strong* container, then did all those things, the container *wouldn't* rupture when the hydrogen atoms start to stick. In principle, if you could build a container like that — and also figure out how to let heat escape from the container in a controlled way, but while still keeping the hydrogen hot enough that it continues to fuse — you'd have a really good, really long-lasting source of heat that you could use to boil water and turn a steam turbine, thus doing mechanical work or generating electricity or both.
But nobody's figured out how to do that yet, which is why I said we *don't* directly harness the power of nuclear fusion. It's never been done … and in fact, it's not entirely clear that it's even possible at all.
However, we do *indirectly* "harness the power" of nuclear fusion. We do it constantly, in fact. Because the sun is a big ball of mostly hydrogen undergoing nuclear fusion. In the case of the sun, you don't need a container to hold the hydrogen gas in; it holds *itself* in, by the pressure of its own gravity. The weight of all that hydrogen pushes down on itself, squeezing the hydrogen in the very center to the point where it can fuse. The energy released by that fusion percolates outward through the dense layers of hydrogen gas, heating the gas up and making it glow, and that's what sunlight is.
Sunlight goes out in all directions, and a tiny part of it hits the Earth, and that light is used by plants to break the chemical bonds holding carbon dioxide molecules together, and the oxygen is thrown away and the carbon is used to make trees and stuff, and either right away — in the form of logs — or many years later — once the trees and stuff have been squeezed into coal or petroleum — we combine those plants with oxygen again and release the heat they stored from the sunlight, thus boiling water and turning a steam turbine to do mechanical work or generate electricity.
Sometimes we can cut out the middle-man. Light from the sun can hit special metallic plates called photovoltaic cells and create a little trickle of electricity directly. That's useful when we only need a tiny bit of electricity. Or light from the sun can warm the air in some places while leaving it cool in others, making the warm and cool air circulate — wind, in other words — and we can stick a turbine at the top of a tall pole and suck mechanical energy out of the wind and use it to do mechanical work or generate electricity. Or sunlight can hit water and heat it up, causing it to evaporate into the air and then later fall out as rain, some of which lands at high altitudes and then, due to gravity, runs downhill toward the sea, and we can stick a turbine in the flow and suck mechanical energy out of that and use it to do mechanical work or generate electricity.
Or we can simply eat food, which uses sunlight to grow, and thus power our muscles so we can do work ourselves, with our own bodies.
But mostly, with precious few exceptions, all the energy we encounter comes pretty close to directly from the sun, which shines because of nuclear fusion. So there's more to the nuclear fusion story than so-far-unsuccessful experiments aimed at creating it in a laboratory and using it directly. | [
"In nuclear chemistry, nuclear fusion is a reaction in which two or more atomic nuclei are combined to form one or more different atomic nuclei and subatomic particles (neutrons or protons). The difference in mass between the reactants and products is manifested as either the release or absorption of energy. This difference in mass arises due to the difference in atomic \"binding energy\" between the atomic nuclei before and after the reaction. Fusion is the process that powers active or \"main sequence\" stars, or other high magnitude stars.\n",
"Nuclear fusion refers to reactions in which lighter nuclei are combined to become heavier nuclei. This process changes mass into energy which in turn may be captured to provide fusion power. Many types of atoms can be fused. The easiest to fuse are deuterium and tritium. For fusion to occur the ions must be at a temperature of at least 4 keV (kiloelectronvolts) or about 45 million kelvins. The second easiest reaction is fusing deuterium with itself. Because this gas is cheaper, it is the fuel commonly used by amateurs. The ease of doing a fusion reaction is measured by its cross section.\n",
"Nuclear Fusion is a peer reviewed international scientific journal that publishes articles, letters and review articles, special issue articles, conferences summaries and book reviews on the theoretical and practical research based on controlled thermonuclear fusion. The journal was first published in September, 1960 by IAEA and its head office was housed at the headquarter of IAEA in Vienna, Austria. Since 2002, the journal has been jointly published by IAEA and IOP Publishing.\n",
"Thermonuclear fusion is a way to achieve nuclear fusion by using extremely high temperatures. There are two forms of thermonuclear fusion: \"uncontrolled\", in which the resulting energy is released in an uncontrolled manner, as it is in thermonuclear weapons (\"hydrogen bombs\") and in most stars; and \"controlled\", where the fusion reactions take place in an environment allowing some or all of the energy released to be harnessed for constructive purposes. This article focuses on the latter.\n",
"Hybrid nuclear fusion–fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The basic idea is to use high-energy fast neutrons from a fusion reactor to trigger fission in otherwise nonfissile fuels like U-238 or Th-232. Each neutron can trigger several fission events, multiplying the energy released by each fusion reaction hundreds of times. This would not only make fusion designs more economical in power terms, but also be able to burn fuels that were not suitable for use in conventional fission plants, even their nuclear waste.\n",
"Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as \"fusion reactors\".\n",
"The National Institute for Fusion Science is engaged in basic research on fusion and plasma in order to actualize nuclear fusion generation, with the hope of developing new sources of energy that are safe and environmentally friendly.\n"
] |
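As a quick back-of-envelope check on the numbers in the fusion answer and context passages above, here is a minimal sketch in Python. It converts the quoted 4 keV ignition energy into a temperature and computes the energy released by the easiest fusion reaction (deuterium plus tritium) from the mass defect. The constants are standard reference values, not taken from this document.

```python
# Back-of-envelope fusion numbers (standard reference constants, not from the source).
K_B_EV = 8.617333e-5  # Boltzmann constant, in eV per kelvin

# 1) Temperature equivalent of 4 keV: T = E / k_B
e_ev = 4_000.0  # 4 keV expressed in eV
temperature_k = e_ev / K_B_EV
print(f"4 keV is roughly {temperature_k / 1e6:.0f} million K")  # ~46 million K

# 2) Energy released by D + T -> He-4 + n, via the mass defect and E = dm * c^2
U_TO_MEV = 931.494               # energy equivalent of 1 atomic mass unit, in MeV
m_d, m_t = 2.014102, 3.016049    # deuterium and tritium masses (atomic mass units)
m_he4, m_n = 4.002602, 1.008665  # helium-4 and neutron masses

mass_defect = (m_d + m_t) - (m_he4 + m_n)  # mass that "disappears" in the reaction
energy_mev = mass_defect * U_TO_MEV
print(f"D-T fusion releases about {energy_mev:.1f} MeV per reaction")  # ~17.6 MeV
```

The first figure lands near the "about 45 million kelvins" quoted above, and roughly 17.6 MeV per reaction (millions of times the energy of a chemical bond) is why a small amount of fusing hydrogen could boil a great deal of water.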
What was the Islamic attitude towards and tolerance level of other religions, prior to the sacking of Constantinople by the crusaders and the destruction of Baghdad by the Mongols? | Why the West attained a period of dominance is a separate question that I won't address here, although I will point out: Latin Christians conquered Constantinople from *Greek Christians* in 1204. That doesn't seem to be a breaking point in Muslim attitudes towards the "Franks" and "Romans" for, well, obvious reasons. And as we'll see, it's dangerous to draw conclusions about what the situation and attitudes today might be from a range of attitudes in the past, because even interpretations of a sacred text depend heavily on the historical context of the interpreter.
~~
It is impossible to speak of "*the* Muslim attitude" towards other religions and their practitioners in the early Middle Ages, just like it's impossible to identify a universal view in the modern world. Instead, we can look at a range of laws and literary portrayals from specific historical contexts, and see how particular events affected them.
The Quran's specific views on non-Muslims are fairly well known. Jews and Christians share an Abrahamic foundation and are *dhimmis*, People of the Book. They can be permitted to exercise their faith freely in Islamic territories while subject to restrictions such as a special tax. Practitioners of other religions (the web of Middle Eastern paganisms, Zoroastrianism, etc) are not afforded that leeway. A famous medieval example of restrictions on *dhimmis* (though not authentic to the ruler it is named for) is the [Pact of Umar](_URL_0_). While we can't know whether restrictions like this were ever officially deployed, it shows us what the relationship between Muslims and protected non-Muslims was *idealized to be* by at least one group of Muslim legal scholars.
In practice, the application of the Quranic principles here varied. Sometimes Zoroastrians were extended *dhimmi* protection, and sometimes Jews and Christians weren't. The Almoravid and Almohad dynasties in North Africa and Iberia, for example, attempted to force Jews in particular to convert to Islam. Their Umayyad predecessors in the west, on the other hand, actually *discouraged* conversion among the lower aristocracy, for the tax benefit.
The question of what *jihad* meant in early Islam is as vexed as it is today. There's no question that the infant religion's adherents achieved explosive success by military conquest across the Near East and North Africa--the Umayyads are in Cordoba (Spain) less than a century after Muhammad. The early years of Islam are characterized by an apocalyptic, messianic sense in which jihad is indeed a spiritual *offensive*. As I noted earlier, that doesn't necessarily mean forced conversion--secular motives like money were attractive. (Richard Bulliet has postulated that conversion occurred over time along a logistic curve, with the bulk of conversion ramping up in the 9th and 10th centuries). It did mean Muslim *rule* and establishment of Islamic faith in new territories.
But--the expansion of Islam slowed. Christianity stubbornly kept hold of northern Iberia; in the 9th century even Byzantium started making incursions into the Muslim Near East. That second example offers a prime chance to witness how historical events affect Muslims' understanding and representation of non-Muslims. Our earliest Arabic sources portray Byzantium as a *rival*: treated with some hostility, but deeply respected. They are especially impressed with the political and economic importance of Constantinople, and with the splendor of the city's architecture. Once the Byzantines start winning some military victories, Muslim writers ramp up their vitriol. They find new ways to label the Byzantines barbaric, amping up the rhetoric of horrid Byzantine morals.
Even in the Latin Crusades, when the Islamic world is under *direct attack* by invading 'barbarians,' individual Muslim governors sometimes allied with the Franks against each other. (Although the chronicles are pretty uniform in calling the Franks atrociously bad fighters...it's just, they have really good armor and weapons, shucks.) In the Fifth Crusade, which dead-ended for the West in a *massively* humiliating capture of the entire crusader army in Egypt, the Muslim force treated the prisoners rather well and allowed their release as long as they returned to Europe.
Unlike the later medieval Church, medieval Islam has no centralized body of law or dominant interpretation. It's characterized by a series of overlapping legal and theological schools of interpretation that jockey for ascendancy throughout the era. As the rate of expansion of Islam grinds down almost to a halt, scholars debate the meaning of *jihad* in a world that suddenly doesn't hold apocalyptic hope and expectation of triumph.
One line of interpretation emerges that divides the world into two: dar al-Islam and dar al-harb, the world of Islam/submission and the world of war. This spiritualizes the idea of jihad: it is defensive, a matter of protecting Islam and its people, rather than working to prepare the world for the messiah through conquest. Unfortunately, Mottahedeh and al-Sayyid, who've done a lot of the work on early notions of jihad, don't really talk about whether we can trace this spiritualization of jihad in specific contexts to changing treatment of non-Muslims under Muslim rule (i.e. did a focus on defensive jihad ever lead to increased conversions or increased signs of repression).
Medieval Muslims who did find themselves in dar al-harb, on the whole, don't seem to have taken up arms in the name of defensive jihad. There are some cases of localized rebellion in Christian Spain and Sicily, but isn't that what you'd expect of any conquered people feeling ill-treated? Glick and Meyerson have both discussed the ways in which Muslim revolts in Christian Spain lack the hallmarks of proto-nationalism; in fact, they look rather similar to, and sometimes overlap with, Christian peasant protests against unjust conditions.
The Muslims of high medieval Sicily, conquered by the (Latin) Normans, found themselves deported en masse to the Italian mainland at Lucera. And yet they still *chose* to fight for their homes with the local Christian army against the papal invaders. The Muslim community of the Christian-conquered Ebro Valley in Spain stubbornly insisted, through letters sent abroad and sermons preached at home, that Iberia was their home and they *would* remain there against all the calls of the zealous Almohads in North Africa to leave *dar al-harb* for the comfort of the Islamic world. (And it sure wasn't because of any amazing generosity on the part of their Christian overlords.)
And then you have to consider, of course, that most Muslims are just ordinary people trying to live their lives. Islam spreads in medieval West Africa *almost* by accident. Merchants from the Sudan and North Africa set up trade colonies of sorts in the Ghana Empire, a common language (Arabic) facilitates trade, and being Muslim allows you to tap into a global trade network...By the time Ibn Battuta makes it to Mali in the 14th century, he's treated to a full recitation of the Quran (in Arabic) while amusedly observing cultural differences in the practices of individual Muslims between Mali and elsewhere in the Islamic world.
Overall, then--if we can even talk about an "overall"--it's a complex picture that depends heavily on specific historical contexts. The status of Islamic expansion, the school of law or theology, military developments on both sides, messianic expectation, the passage of time, geography, economy, the goals of individuals: so many factors in the matrix, so many experiences we can identify in concrete times and places. | [
"In 1453, Constantinople was conquered by the Ottoman Empire under Mehmed the Conqueror, who ordered this main church of Orthodox Christianity converted into a mosque. Although some parts of the city of Constantinople had fallen into disrepair, the cathedral had been maintained with funds set aside for this purpose, and the Christian cathedral made a strong impression on the new Ottoman rulers who conceived its conversion..\" LiveScience. The bells, altar, iconostasis, and other relics were destroyed and the mosaics depicting Jesus, his Mother Mary, Christian saints, and angels were also destroyed or plastered over. Islamic features – such as the mihrab (a niche in the wall indicating the direction toward Mecca, for prayer), minbar (pulpit), and four minarets – were added. It remained a mosque until 1931 when it was closed to the public for four years. It was re-opened in 1935 as a museum by the Republic of Turkey. Hagia Sophia was, , the second-most visited museum in Turkey, attracting almost 3.3 million visitors annually. According to data released by the Turkish Culture and Tourism Ministry, Hagia Sophia was Turkey's most visited tourist attraction in 2015.\n",
"The center of the Islamic Empire at the time was Baghdad, which had held power for 500 years but was suffering internal divisions. When its caliph al-Mustasim refused to submit to the Mongols, Baghdad was besieged and captured by the Mongols in 1258 and subjected to a merciless sack, an event considered as one of the most catastrophic events in the history of Islam, and sometimes compared to the rupture of the Kaaba. With the destruction of the Abbasid Caliphate, Hulagu had an open route to Syria and moved against the other Muslim powers in the region.\n",
"Throughout the majority of the era Muslim rule existed in the region of Palestine, except for the Crusader Kingdom of Jerusalem (1099–1291). Due to the growing importance of Jerusalem in the Muslim world, the tolerance towards the other faiths began to fade. The Christians and the Jews in Palestine were persecuted and many Churches and Synagogues were destroyed. This trend peaked in 1009 AD when Caliph al Hakim of the Fatimid dynasty, destroyed also the Church of the Holy Sepulchre in Jerusalem. This provocation ignited enormous rage in the Christian world, which led to the Crusades from Europe to the Holy Land.\n",
"The outcome of the siege was of considerable macrohistorical importance. The Byzantine capital's survival preserved the Empire as a bulwark against Islamic expansion into Europe until the 15th century, when it fell to the Ottoman Turks. Along with the Battle of Tours in 732, the successful defence of Constantinople has been seen as instrumental in stopping Muslim expansion into Europe. Historian Ekkehard Eickhoff writes that \"had a victorious Caliph made Constantinople already at the beginning of the Middle Ages into the political capital of Islam, as happened at the end of the Middle Ages by the Ottomans—the consequences for Christian Europe [...] would have been incalculable\", as the Mediterranean would have become an Arab lake, and the Germanic successor states in Western Europe would have been cut off from the Mediterranean roots of their culture. Military historian Paul K. Davis summed up the siege's importance as follows: \"By turning back the Moslem invasion, Europe remained in Christian hands, and no serious Moslem threat to Europe existed until the fifteenth century. This victory, coincident with the Frankish victory at Tours (732), limited Islam's western expansion to the southern Mediterranean world.\" Thus the historian John B. Bury called 718 \"an ecumenical date\", while the Greek historian Spyridon Lambros likened the siege to the Battle of Marathon and Leo III to Miltiades. Consequently, military historians often include the siege in lists of the \"decisive battles\" of world history.\n",
"The sack of Constantinople is a major turning point in medieval history. The Crusaders' decision to attack the world's largest Christian city was unprecedented and immediately controversial. Reports of Crusader looting and brutality scandalised and horrified the Orthodox world; relations between the Catholic and Orthodox churches were catastrophically wounded for many centuries afterwards, and would not be substantially repaired until modern times.\n",
"During the Fourth Crusade, however, Latin crusaders and Venetian merchants sacked Constantinople itself, looting The Church of Holy Wisdom and various other Orthodox Holy sites. looting The Church of Holy Wisdom and various other Orthodox holy sites, and converting them to Latin Catholic worship. Various holy artifacts from these Orthodox holy places were then taken to the West. This event and the final treaty established the Latin Empire of the East and the Latin Patriarch of Constantinople (with various other Crusader states). This period of rule over the Byzantine Empire is known among Eastern Orthodox as Frangokratia (dominion by the Franks).\n",
"The Byzantine Empire was left much poorer, smaller, and ultimately less able to defend itself against the Turkish conquests that followed; the actions of the Crusaders thus directly accelerated the collapse of Christendom in the east, and in the long run facilitated the expansion of Islam into Europe.\n"
] |
why do people shiver when they are using all of their strength? | They aren't actually shivering. Their muscles are rapidly changing the fibers they use to balance and lift the load. One set of fibers does the majority of the lifting while the other set relaxes slightly; then they switch roles, creating the illusion of shivering. This switching can happen upwards of several thousand times per minute. | [
"Shivering (also called shaking) is a bodily function in response to cold in warm-blooded animals. When the core body temperature drops, the shivering reflex is triggered to maintain homeostasis. Skeletal muscles begin to shake in small movements, creating warmth by expending energy. Shivering can also be a response to a fever, as a person may feel cold. During fever the hypothalamic set point for temperature is raised. The increased set point causes the body temperature to rise (pyrexia), but also makes the patient feel cold until the new set point is reached. Severe chills with violent shivering are called rigors. Rigors occur because the patient's body is shivering in a physiological attempt to increase body temperature to the new set point.\n",
"Shivers, or equine shivering, is a rare, progressive neuromuscular disorder of horses. It is characterized by muscle tremors, difficulty holding up the hind limbs, and an unusual gait when the horse is asked to move backwards. Shivers is poorly understood and no effective treatment is available at this time.\n",
"There has yet to be any peer-reviewed research on the topic. The most plausible theory, is that the shiver is a result of the autonomic nervous system (ANS) getting its signals mixed up between its two main divisions:\n",
"In mild cases, shivers may present only when the horse is asked to move backwards, usually seen as trembling in the muscles of the hind limbs and sudden, upward jerks of the tail. Affected animals may also snatch up their foot when asked to lift it for cleaning.\n",
"Prior to the induction of targeted temperature management, pharmacological agents to control shivering must be administered. When body temperature drops below a certain threshold—typically around —people may begin to shiver. It appears that regardless of the technique used to induce hypothermia, people begin to shiver when temperature drops below this threshold. Drugs commonly used to prevent and treat shivering in targeted temperature management include acetaminophen, buspirone, opioids including pethidine (meperidine), dexmedetomidine, fentanyl, and/or propofol. If shivering is unable to be controlled with these drugs, patients are often placed under general anesthesia and/or are given paralytic medication like vecuronium. People should be rewarmed slowly and steadily in order to avoid harmful spikes in intracranial pressure.\n",
"This induction works because shaking hands is one of the actions learned and operated as a single \"chunk\" of behavior; tying shoelaces is another classic example. If the behavior is diverted or frozen midway, the person literally has no mental space for this - he is stopped in the middle of unconsciously executing a behavior that hasn't got a \"middle\". The mind responds by suspending itself in trance until either something happens to give a new direction, or it \"snaps out\". A skilled hypnotist can often use that momentary confusion and suspension of normal processes to induce trance quickly and easily.\n",
"BULLET::::- Muscles can also receive messages from the thermoregulatory center of the brain (the hypothalamus) to cause shivering. This increases heat production as respiration is an exothermic reaction in muscle cells. Shivering is more effective than exercise at producing heat because the animal (includes humans) remains still. This means that less heat is lost to the environment through convection. There are two types of shivering: low-intensity and high-intensity. During low-intensity shivering, animals shiver constantly at a low level for months during cold conditions. During high-intensity shivering, animals shiver violently for a relatively short time. Both processes consume energy, however high-intensity shivering uses glucose as a fuel source and low-intensity tends to use fats. This is a primary reason why animals store up food in the winter.\n"
] |
why does our brain get attached to people, things, places etc, and why do we have a strong need to find the one we love | Probably all to do with our survival instincts. We can get attached to places as a way to demonstrate that it's "our area", and to produce offspring, we adore the person our brain deems the best mate, promising healthier and stronger children. This is my biased idea, so take it for what it is. | [
"Unable to explain the unique circumstances in which they acquired their knowledge, they both have difficulty convincing their friends that they know what is the right thing to do. Neither is able to completely dissociate themselves from the things that were once important to them, and they realize that by not concentrating on their love they might be sacrificing their second chance at life.\n",
"In \"Living a connected life\" (2003), Dr. Kathleen Brehony looks at recent brain research which shows the importance of proximity of others in our development. \"Especially in infancy, but throughout our lives, our physical bodies are influencing and being influenced by others with whom we feel a connection. Scientists call this \"limbic regulation.\"\"\n",
"Because of the interconnectedness of the universe by virtue of Space-Time, and because the mind apprehends space, time and motion through a unity of sense and mind experience, there is a form of knowing that is intuitive (participative) - sense and reason are outgrowths from it.\n",
"Mind-Set People tend to focus more on what they want to accomplish (a goal) and less on what needs to be avoided because human beings are primarily goal-oriented by nature. As such, people tend to “see” only what the mind expects, or wants, to see.\n",
"This indicates fragmentation of the psyche, the different feelings and thoughts of ‘I’ in a person: I think, I want, I know best, I prefer, I am happy, I am hungry, I am tired, etc. These have nothing in common with one another and are unaware of each other, arising and vanishing for short periods of time. Hence man usually has no unity in himself, wanting one thing now and another, perhaps contradictory, thing later.\n",
"As the human mind's cognitive structures are naturally disposed to both consciously and unconsciously gather and process information constantly in order to relate to the outside world, to oneself, and to one's own relations to the world (in other words, man constantly seeks and constructs sense, meaning, association, and belonging in order to understand and cope) also and most foremost in a social sense (\"Who am I, and how do I relate to the social continuum around me?\"), and because individual cognitive capabilities are as limited as that the individual is required to widely rely on social conventions in everyday life (which in combination with decreased instinctual drives is the origin of cognitive disposition towards \"social learning\" in primates), Bleibtreu-Ehrenberg posits that if no more positive social identity relating to one's traits is available, then even the most negative social identity, no matter with what aggressive and harmful behavior it is associated, is preferred to total loss of identity.\n",
"Putting others first. Dwelling on ourselves builds a wall between ourselves and others. Those who keep thinking about their needs, their wants, their plans, their ideas, cannot help becoming lonely and insecure. As human beings, it is our nature to be part of a whole, to live in a context where personal relationships are supportive and close.\n"
] |
what does remastering a game entail? | It *completely* depends on the company doing the "remastering". There is no fixed set of tasks except, perhaps, whatever the licensor requires. In addition, there are often limitations on what can be overhauled, because the original development materials may have been lost or are otherwise no longer available.
A good example of this is Beamdog/Overhaul Games' remake of the Baldur's Gate series: because the level/area files for BG are pre-rendered images of 3D scenes, and because the original 3D model files had been lost, Beamdog had to work with the level images as they were originally released, with some fancy math used to upscale the resolution of those images while still seemingly retaining detail.
For other games, most or all of the original development materials remain, including original artwork and higher-resolution masters for graphics and audio, and these can be used directly.
But how much work and what work is done is very much handled on a game-by-game basis. | [
"When remastering a distro, remastering software can be applied from the \"inside\" of a live operating system to clone itself into an installation package. Remastering does not necessarily require the remastering software, which only facilitates the process. For example, an application is remastered just by acquiring, modifying and recompiling its original source code. Many video games have been modded by upgrading them with additional content, levels, or features. Notably, \"Counter-Strike\" was remastered from \"Half-Life\" and went on to be marketed as a commercial product.\n",
"Software remastering is software development that recreates system software and applications while incorporating customizations, with the intent that it is copied and run elsewhere for \"off-label\" usage. If the remastered codebase does not continue to parallel an ongoing, upstream software development, then it is a fork, not a remastered version. The term comes from \"remastering\" in media production, where it is similarly distinguished from mere copying. Remastering was popularized by Klaus Knopper, creator of Knoppix. The Free Software Foundation promotes the universal freedom to recreate and distribute computer software, for example by funding projects like the GNU Project.\n",
"Software remastering creates an application by rebuilding its code base from the software objects on an existing master repository. If the \"mastering\" process assembles a distribution for the release of a version, the remaster process does the same but with subtraction, modification, or addition to the master repository. Similarly a modified makefile orchestrates a computerized version of an application.\n",
"Remastering is the process of making a new master for an album, film, or any other creation. It tends to refer to the process of porting a recording from an analogue medium to a digital one, but this is not always the case.\n",
"A \"remaster\" is a personalized version of PCLinuxOS created according to the needs of an individual. It is created using the mklivecd script applied to its installation, which can be of any of the \"official\" flavors of PCLinuxOS. An \"official remaster\" can only include software and components from the official repository (version control).\n",
"In some tournaments, known as “rebuy tournaments”, players have the ability to re-buy into the game in case they lost all their chips and avoid elimination for a specific period of time (usually ranging from one to two hours). After this so-called “rebuy period”, the play resumes as in a standard freezeout tournament and eliminated players do not have the option of returning to the game any more. Rebuy tournaments often allow players to rebuy even if they have not lost all their chips, in which case the rebuy amount is simply added to their stack. A player is not allowed to rebuy in-game if he has too many chips (usually the amount of the starting stack or half of it). At the end of the rebuy period remaining players are typically given the option to purchase an “add-on”, an additional amount of chips, which is usually similar to the starting stack.\n",
"\"Remorting\" (also known as \"rebirth\", \"ascending/ascension\", \"reincarnating\", or \"new game plus\") is a game mechanic in some role-playing games whereby, once the player character reaches a specified level limit, the player can elect to start over with a new version of his or her character. The bonuses that are given are dependent on several factors, which generally involve the stats of the character before the reincarnation occurs. The remorting character generally loses all levels, but gains an advantage that was previously unavailable, usually access to different races, avatars, classes, skills, or otherwise inaccessible play areas within the game. A symbol often identifies a remorted character.\n"
] |
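As an illustration of the kind of image upscaling described in the Baldur's Gate answer above, here is a minimal sketch using Pillow's Lanczos resampling. This is a generic stand-in rather than Beamdog's actual pipeline, and the file names are hypothetical.

```python
# Generic image-upscaling sketch (NOT Beamdog's actual process).
# Illustrates resampling a pre-rendered background when the original
# 3D scene files have been lost and only the final images remain.
from PIL import Image

def upscale(src_path: str, dst_path: str, factor: int = 2) -> None:
    img = Image.open(src_path)
    new_size = (img.width * factor, img.height * factor)
    # Lanczos interpolation preserves edges better than nearest-neighbour
    # or bilinear filtering, which is why the result seems to retain detail.
    img.resize(new_size, Image.LANCZOS).save(dst_path)

# Hypothetical file names, for illustration only:
# upscale("area_render_640x480.png", "area_render_1280x960.png")
```

Real remasters layer smarter, content-aware methods on top of plain resampling, but the principle of inferring plausible extra pixels from the original images is the same.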
[Meta] This sub desperately needs an "Answered" flair for posts that have at least one mod-approved reply | This one gets asked a lot, because it's a seemingly intuitive solution to the common problem of clutter - threads with high comment counts that suggest the presence of an answer, but in reality are all just removed comments.
However, the issues - both practical and conceptual - raised by actually implementing an answered flair are considerable, and our collective judgement has long been that the downsides by far outweigh the advantages. For a more full explanation you can check out [this post](_URL_2_). But the basic issues as I see them are:
1. Except for the most basic of factual questions (which we tend to redirect to our Short Answers to Simple Questions thread anyway), history rarely admits 'one' answer to a given question. Differing perspectives, methods, sources and so on all militate against definitive answers to most questions. An Answered flair - whether watered down by different terminology or not - risks giving a different impression, as well as discouraging users from adding new perspectives once it has been declared 'Answered' (this is feedback that we have received from our flair community).
2. These suggestions are usually based on misconceptions of how we actually moderate the sub. We don't read every comment that gets made, relying instead on user-generated reports to spot problematic answers that we then might evaluate in more detail if it seems necessary. Changing this to reading and evaluating every substantive comment would represent an enormous increase in workload for what is - compared to the size of the sub - a fairly small team of active moderators (and that's not even getting into the fact that for this to work, each flair would need to be manually altered and updated - we can't train a bot to be able to tell the difference between 700 words of wisdom and a 700-word scrawl of conspiratorial madness). Keep in mind as well that the mods aren't omniscient - unless one of us happens to have expertise in a particular topic, checking the content of any substantive answer is a lot of work (and often involves collaboration and discussion on our end - an answer which we initially let stand might be taken down later once someone with enough knowledge to spot the flaws is awake). Asking us to put what amounts to seals of approval on all such content would stretch us well past breaking point, and would if anything result in a massive increase in removals of longer answers, on the basis that we don't want to be seen to endorse material that we aren't completely sure of. While the line between 'decent enough to let stand' and 'good enough to endorse' might seem very thin, from our perspective it's a much bigger deal.
3. It likely still wouldn't solve the main problem, while simultaneously interfering with the various ways we currently use flairs. For the large numbers of users on mobile, flairs often won't be visible to users before accessing the thread anyway (thereby obviating the sole advantage of such a flair, which is saving users a click). For users less familiar with the sub, who provide most of the added clutter in highly visible threads, a flair system is unlikely to get noticed, judging purely by how few of these commenters appear to read the Automod message in every thread.
If you are a regular user who finds the wasted clicks on deceptively empty threads to be annoying, we would heartily recommend our custom-designed [browser extension](_URL_1_) made by [/u/Almost\_useless](_URL_0_), which does a great job of making thread comment counts actually accurate. | [
"Meanwhile, on Usenet, Mark Horton had started a series of \"Periodic Posts\" (PP) which attempted to answer trivial questions with appropriate answers. Periodic summary messages posted to Usenet newsgroups attempted to reduce the continual reposting of the same basic questions and associated wrong answers. On Usenet, posting questions that were covered in a group's FAQ came to be considered poor netiquette, as it showed that the poster had not done the expected background reading before asking others to provide answers. Some groups may have multiple FAQs on related topics, or even two or more competing FAQs explaining a topic from different points of view.\n",
"At Jeff Pulver's 140 Characters Conference in New York City in April 2010, Answers.com launched its alpha version of a Twitter-answering service nicknamed 'Hoopoe.' When tweeting a question to the site's official Twitter account, @AnswersDotCom, an automatic reply is given with a snippet of the answer and a link to the full answer page on Answers.com.\n",
"A secondary flow of answering questions was more similar to traditional bulletin-board style interactions: a user sent a message to Aardvark or visited the \"Answering\" tab of the website, Aardvark showed the user a recent question from the user's network which had not yet been answered and which was related to the user's profile topics. This mode involved the user initiating the exchange when the user was in the mood to try to answer a question; as such, it had the benefit of tapping into users who acted as eager potential 'answerers'.\n",
"BULLET::::- Snappy Answers to Stupid Questions – An adaptation of Al Jaffee's reoccurring magazine feature, it features a person who asks a question regarding something that was obviously presented, resulting in the person or people whom were so queried to give a sarcastic response that suggests otherwise.\n",
"Every Tuesday, Jennings sends an email out containing seven questions, one of which is designed to be Google-resistant. Subscribers respond with the answers to all seven questions and the results are maintained on a scoreboard on Jennings' blog. At times he chooses to run multi-week tournaments, awarding the top responder with all seven answers correct with such things as a signed copy of his newest book.\n",
"In order to begin the questionnaire, the user must press the play button and think of a popular character, object or other things that frequently come to mind (musician, athlete, political personality, video game, mother or father, actor, fictional film/TV character, Internet personality, etc.). Akinator, a cartoon genie, begins asking a series of questions (as many as required), with \"Yes\", \"No\", \"Probably\", \"Probably not\" and \"Don't know\" as possible answers, in order to hack down the potential character. If the answer is narrowed down to a single likely option before 25 questions are asked, the program will automatically ask if the character it chose is correct. If the character is guessed wrong three times in a row (or more, usually in intervals of 25, 50, and 80), then the program will prompt the user to input the character's name, in order to expand its database of choices.\n",
"A challenge–response (or C/R) system is a type of spam filter that automatically sends a reply with a challenge to the (alleged) sender of an incoming e-mail. It was originally designed in 1997 by Stan Weatherby, and was called Email Verification. In this reply, the sender is asked to perform some action to assure delivery of the original message, which would otherwise not be delivered. The action to perform typically takes relatively little effort to do once, but great effort to perform in large numbers. This effectively filters out spammers.\n"
] |
I heard something off of my granddad about World War 2 spies. He said that, when the British were interrogating German spies, they would end the interrogation with "good luck" or "hail victory" in German and, if the German replied, they would know he was a spy. Is there any validity to this? | As far as "good luck" goes, he may be thinking of a scene in the 1963 film *The Great Escape* where this happens in reverse: escaped British PoWs are captured when a Gestapo officer wishes them "Good luck" in English and one of them instinctively replies "thank you".
I have seen claims (for instance in *This Great Escape: The Case of Michel Paryla* by Andrew Steinmetz, and *The RAF's French Foreign Legion* by G.H. Bennett) that this incident is based on the real-life case of the French escapee Sous-Lt. Bernard Steinhauer, who was fluent in English and German as well as his native French and who was captured at Saarbrücken station after replying in English to an English greeting by a Gestapo officer, although sources differ on the exact phrase used.
(Like most of those who escaped in the breakout from Stalag Luft III, Sous-Lt. Steinhauer was shot a few days later) | [
"During World War II, an American intelligence agent in England, ashamed for having yielded information to the Germans during a previous capture, attempts to redeem himself by contriving his way into a French resistance group, with his ultimate plan being to kidnap a valuable German general and obtain his secrets.\n",
"The Germans could have suspected even more surreptitious activities, since Portugal, like Switzerland, was a crossroads for internationals and spies from both sides. British historian James Oglethorpe investigated Howard's connection to the secret services. Ronald Howard's book explores the written German orders to the Ju 88 squadron, in great detail, as well as British communiqués that verify intelligence reports indicating a deliberate attack on Howard. These accounts indicate that the Germans were aware of Churchill's real whereabouts at the time and were not so naive as to believe he would be travelling alone on board an unescorted, unarmed civilian aircraft, which Churchill also acknowledged as improbable. Ronald Howard was convinced the order to shoot down Howard's airliner came directly from Joseph Goebbels, Minister of Public Enlightenment and Propaganda in Nazi Germany, who had been ridiculed in one of Leslie Howard's films, and believed Howard to be the most dangerous British propagandist.\n",
"A woman spy and some male agents working for the Germans during World War I land at night near the British naval base at Scapa Flow, from a U-boat. The British, led by Col. Foreman, ambush the landing party, capturing two of the men, but the woman gets away. Foreman fakes the execution of one of the spies, thus tricking the second one, Meyer, into becoming a double agent in the hopes of using him to capture his woman accomplice, whom Meyer identifies under the codename Fraulein Doktor. Fraulein Doktor is portrayed as a brilliant spy who stole a formula for a skin blistering gas similar to mustard gas which the Germans used to great effect against the Allies on the battlefield. \n",
"During the Second World War, a German spy goes on the run, carrying important news about a U-Boat campaign. The ship he is travelling aboard is hit by a torpedo. The spy winds up at a lighthouse with other survivors, one of whom is a counterintelligence agent who reveals the German spy's true identity.\n",
"Once caught, the spies were deposited in the care of Lieutenant Colonel Robin Stephens at Camp 020 (Latchmere House, Richmond). After Stephens, a notorious and brilliant interrogator, had picked apart their life history, the agents were either spirited away (to be imprisoned or killed) or if judged acceptable, offered the chance to turn double agent on the Germans.\n",
"At least 20 spies were sent to England by boat or parachute to gather information on the British coastal defences under the codename \"Operation Lena\"; many of the agents spoke limited English. All agents were quickly captured and many were convinced to defect by MI5's Double-Cross System, providing disinformation to their German superiors. It has been suggested that the \"amateurish\" espionage efforts were a result of deliberate sabotage by the head of the army intelligence bureau in Hamburg, Herbert Wichmann, in an effort to prevent a disastrous and costly amphibious invasion; Wichmann was critical of the Nazi regime and had close ties to Wilhelm Canaris, the former head of the \"Abwehr\" who was later executed by the Nazis for treason.\n",
"The last false message exchanged with London in this operation was: \"Thank you for your collaboration and for the weapons that you sent us\". However, Nazi intelligence was not aware that British intelligence knew about the stratagem for at least two weeks prior to the transmission. From May 1944 onwards the operation was not a success.\n"
] |
how can general pain medication like paracetamol and ibuprofen treat so many different things? | Prostaglandins are natural chemicals that are released into your body when you are injured or sick. When they're released, they make nearby nerves hurt. This is how your body can tell that something is wrong, and you feel pain. Meds like ibuprofen target prostaglandins: they keep more of them from being made, which reduces nerve pain. So it's not so much that pills can hit a wide variety of targets, it's that the body's target is the same for most injuries. | [
"Many drug therapies are available for pain management after third molar extractions including NSAIDS (non-steroidal anti-inflammatory), APAP (acetaminophen), and opioid formulations. Although each has its own pain-relieving efficacy, they also pose adverse effects. According to two doctors, Ibuprofen-APAP combinations have the greatest efficacy in pain relief and reducing inflammation along with the fewest adverse effects. Taking either of these agents alone or in combination may be contraindicated in those who have certain medical conditions. For example, taking ibuprofen or any NSAID in conjunction with warfarin (a blood thinner) may not be appropriate. Also, prolonged use of ibuprofen or APAP has gastrointestinal and cardiovascular risks. There is high quality evidence that ibuprofen is superior to paracetamol in managing postoperative pain.\n",
"Initial recommended treatment is with simple pain medication such as ibuprofen and paracetamol (acetaminophen) for the headache, medication for the nausea, and the avoidance of triggers. Specific medications such as triptans or ergotamines may be used in those for whom simple pain medications are not effective. Caffeine may be added to the above. A number of medications are useful to prevent attacks including metoprolol, valproate, and topiramate.\n",
"Pain medication, such as aspirin and ibuprofen, are effective for the treatment of tension headache. Tricyclic antidepressants appear to be useful for prevention. Evidence is poor for SSRIs, propranolol and muscle relaxants.\n",
"Spasmolytics such as carisoprodol, cyclobenzaprine, metaxalone, and methocarbamol are commonly prescribed for low back pain or neck pain, fibromyalgia, tension headaches and myofascial pain syndrome. However, they are not recommended as first-line agents; in acute low back pain, they are not more effective than paracetamol or nonsteroidal anti-inflammatory drugs (NSAIDs), and in fibromyalgia they are not more effective than antidepressants. Nevertheless, some (low-quality) evidence suggests muscle relaxants can add benefit to treatment with NSAIDs. In general, no high-quality evidence supports their use. No drug has been shown to be better than another, and all of them have adverse effects, particularly dizziness and drowsiness. Concerns about possible abuse and interaction with other drugs, especially if increased sedation is a risk, further limit their use. A muscle relaxant is chosen based on its adverse-effect profile, tolerability, and cost.\n",
"Ibuprofen helps decrease pain in children with migraines. Paracetamol does not appear to be effective in providing pain relief. Triptans are effective, though there is a risk of causing minor side effects like taste disturbance, nasal symptoms, dizziness, fatigue, low energy, nausea, or vomiting.\n",
"Over-the-counter drugs, like acetaminophen, aspirin, or NSAIDs(ibuprofen, Naproxen, Ketoprofen), can be effective but tend to only be helpful as a treatment for a few times in a week at most. For those with gastrointestinal problems (ulcers and bleeding) acetaminophen is the better choice over aspirin, however both provide roughly equivalent pain relief. It is important to note that large daily doses of acetaminophen should be avoided as it may cause liver damage especially in those that consume 3 or more drinks/day and those with pre-existing liver disease. Ibuprofen, one of the NSAIDs listed above, is a common choice for pain relief but may also lead to gastrointestinal discomfort.\n",
"Ibuprofen/paracetamol sold under the brand name Combiflam is a combination of the two medications, ibuprofen and paracetamol (acetaminophen). It is available in India. It may be used for fever, headache, muscle pain and menstrual cramps (MC). Ibuprofen belongs to nonsteroidal anti-inflammatory drug (NSAID) class of drugs.\n"
] |
why do massive arcade style coin operated machines suck so much in comparison to other video game consoles? | It actually used to be the opposite way around. Back in '94, we were getting things like Cruis'n USA and Sega Rally that were a generation ahead of where consoles were at the time, and that were built on hardware that wasn't bettered until the PS2 generation. Unfortunately, that's pretty much what killed arcades. It used to be that consoles advertised themselves as offering an arcade-grade experience. When the PS2 generation surpassed arcade hardware, arcades became redundant, which basically killed their market, and with it their progress. And that's why today, arcade has gone from being an aspirational term to almost a dirty word. | [
"Due to the success of arcades, a number of games were adapted for and released for consoles but in many cases the quality had to be reduced because of the hardware limitations of consoles compared to arcade cabinets.\n",
"Developing from earlier non-video electronic game cabinets such as pinball machines, arcade-style video games (whether coin-operated or individually owned) are usually dedicated to a single game or a small selection of built-in games and do not allow for external input in the form of ROM cartridges. Although modern arcade games such as \"Dance Dance Revolution X\" and \"\" do allow external input in the form of memory cards or USB sticks, this functionality usually only allows for saving progress or for providing modified level-data, and does not allow the dedicated machine to access new games. The game or games in a dedicated arcade console are usually housed in a stand-up cabinet that holds a video screen, a control deck or attachments for more complex control devices, and a computer or console hidden within that runs the games.\n",
"Virtually all modern arcade games (other than the very traditional Midway-type games at county fairs) make extensive use of solid state electronics, integrated circuits and cathode-ray tube screens. In the past, coin-operated arcade video games generally used custom per-game hardware often with multiple CPUs, highly specialized sound and graphics chips, and the latest in expensive computer graphics display technology. This allowed arcade system boards to produce more complex graphics and sound than what was then possible on video game consoles or personal computers, which is no longer the case in the 2010s. Arcade game hardware in the 2010s is often based on modified video game console hardware or high-end PC components. Arcade games frequently have more immersive and realistic game controls than either PC or console games, including specialized ambiance or control accessories: fully enclosed dynamic cabinets with force feedback controls, dedicated lightguns, rear-projection displays, reproductions of automobile or airplane cockpits, motorcycle or horse-shaped controllers, or highly dedicated controllers such as dancing mats and fishing rods. These accessories are usually what set modern video games apart from other games, as they are usually too bulky, expensive, and specialized to be used with typical home PCs and consoles. Currently with the advent of Virtual reality, arcade makers have begun to experiment with Virtual reality technology. Arcades have also progressed from using coin as credits to operate machines to cards that hold the virtual currency of credits.\n",
"Arcades typically have change machines to dispense tokens or quarters when bills are inserted, although larger chain arcades, such as Dave and Busters and Chuck E. Cheese are deviating towards a refillable card system. Arcades may also have vending machines which sell soft drinks, candy, and chips. Arcades may play recorded music or a radio station over a public address system. Video arcades typically have subdued lighting to inhibit glare on the screen and enhance the viewing of the games' video displays, as well as of any decorative lighting on the cabinets.\n",
"The distinction with slot machines is not clearly defined; in the United Kingdom, such machines found in arcades and pubs are called AWPs, while machines in casinos may instead be called slots. There is different licensing depending on the premise, with AWP machines having lower limits on stake wagered and payout.\n",
"A new phenomenon across Pennsylvania is the proliferation of \"skill machines\". These machines, often looking like video slot machines or VGTs, are able to circumvent gaming laws due to a prior court decision that decided they were not slot machines. Thus, these machines can now be found at many bars, clubs, gas stations, and tobacco shops across the state.\n",
"Prior to the 2000s, it was generally accepted that most home consoles were not powerful enough to accurately replicate arcade games (such games are known as being \"arcade-perfect\"). As such, there was correspondingly little effort to bring arcade-quality controls into the home. Though many imitation arcade controllers were produced for various consoles and the PC, most were designed for affordability and few were able to deliver the responsiveness or feel of a genuine arcade setup.\n"
] |
why do developing countries receive development aid from other countries instead of simply "adding" the same amount of money into government budget? | Hyperinflation from printing money to cover government deficits happens because the supply of the currency is dramatically increased. Note that this happens relative to the currency whose supply is increasing--for example, during hyperinflation of the Zimbabwe dollar, prices paid in U.S. dollars may actually stay comparatively stable. This is why, when inflation becomes very bad, people try to abandon the local currency and use a more stable foreign currency, even if it is illegal to do so.
Development aid comes in the form of foreign currency, or as aid "in kind" in the form of goods. So the supply of the local currency isn't changed at all. It can still have a strong effect on the local economy, but for different reasons. | [
"There are an increasing number of studies and literature that argue aid alone is not enough to lift developing countries out of poverty. Whether or not aid actually significantly affects growth, it does not operate in a vacuum. An increasing number of donor country policies can either complement or hinder development, such as trade, investment, or migration. The Commitment to Development Index published annually by the Center for Global Development is one such attempt to look at donor country policies toward the developing world and move beyond simple comparisons of aid given. It accounts for not only the quantity but the quality of aid, penalizing nations that given large amounts of tied aid.\n",
"Development aid is given by governments through individual countries' international aid agencies and through multilateral institutions such as the World Bank, and by individuals through . For donor nations, development aid also has strategic value; improved living conditions can positively effects global security and economic growth. Official Development Assistance (ODA) is a commonly used measure of developmental aid.\n",
"Many developing countries desire increased inflows of foreign direct investment as it brings the potential of technological innovation. However, studies have shown a host country must reach a certain level of development in education and infrastructure sectors in able to truly capture any potential benefits foreign direct investment might bring. If a country already has sufficient funds in terms of per capita income, as well as an established financial market, foreign direct investment has the potential to influence positive economic growth. Pre-determined financial efficiency combined with an educated labor force are the two main measures of whether or not foreign direct investment will have a positive impact on economic growth within a country.\n",
"Research has shown that developed nations are more likely to give aid to nations who have the worst economic situations and policies (Burnside, C., Dollar, D., 2000). They give money to these nations so that they can become developed and begin to turn these policies around. It has also been found that aid relates to the population of a nation as well, and that the smaller a nation is, the more likely it is to receive funds from donor agencies. The harsh reality of this is that it is very unlikely that a developing nation with a lack of resources, policies, and good governance will be able to utilize incoming aid money in order to get on their feet and begin to turn the damaged economy around. It is more likely that a nation with good economic policies and good governance will be able to utilize aid money to help the country establish itself with an existing foundation and be able to rise from there with the help of the international community. But research shows that it is the low-income nations that will receive aid more so, and the better off a nation is, the less aid money it will be granted. On the other hand, Alesina and Dollar (2000) note that private foreign investment often responds positively to more substantive economic policy and better protections under the law. There is increased private foreign investment in developing nations with these attributes, especially in the higher income ones, perhaps due to being larger and possibly more profitable markets.\n",
"Furthermore, consider the breakdown, where aid goes and for what purposes. In 2002, total gross foreign aid to all developing countries was $76 billion. Dollars that do not contribute to a country's ability to support basic needs interventions are subtracted. Subtract $6 billion for debt relief grants. Subtract $11 billion, which is the amount developing countries paid to developed nations in that year in the form of loan repayments. Next, subtract the aid given to middle income countries, $16 billion. The remainder, $43 billion, is the amount that developing countries received in 2002. But only $12 billion went to low-income countries in a form that could be deemed budget support for basic needs. When aid is given to the Least Developed Countries who have good governments and strategic plans for the aid, it is thought that it is more effective.\n",
"For aid to be effective and beneficial to economic development, there must be some support systems or ‘traction’ that, will enable foreign aid to spur economic growth. Research has also shown that Aid actually damages economic growth and development before ‘traction’ is attained.\n",
"Despite decades of receiving aid and experiencing different development models (which have had very little success), many developing countries' economies are still dependent on developed countries, and are deep in debt. There is now a growing debate about why developing countries remain impoverished and underdeveloped after all this time. Many argue that current methods of aid are not working and are calling for reducing foreign aid (and therefore dependency) and utilizing different economic theories than the traditional mainstream theories from the West. Historically, development and aid have not accomplished the goals they were meant to, and currently the global gap between the rich and poor is greater than ever, though not everybody agrees with this.\n"
] |
Can someone explain the physics going on with the snapping shrimp when it shoots its shockwave bubble attack? | The most basic scenario of cavitation is if you have an infinite fluid, and magically cause a sphere of it to disappear, and track what happens to the water trying to fill that vacuum. In this case, it's not a vacuum but a vapour bubble, but the water collapses just the same. When this happens, the water nearest the bubble moves in, then the water that was next to that moves towards the bubble, etc, creating a shockwave travelling through the water. I don't know the physiology of the stun effect, but it's probably similar to hydrostatic shock that injures gunshot and grenade victims: a pressure wave travelling through the body. The reason the bubble leads to such a powerful shock is that the water collapses really, really fast, like a good portion of the speed of sound in water. The same type of bubbles are a major cause of damage to ship propellers (but from the propellers themselves, not from shrimp), and that's what originally got people thinking about this.
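To get a feel for just how fast that collapse is, a standard back-of-envelope tool is the Rayleigh collapse time of an empty spherical cavity, t ≈ 0.915 * R0 * sqrt(ρ/Δp). A minimal sketch; the 1 mm bubble radius here is an assumed, illustrative figure, not a measured shrimp value:

```python
import math

def rayleigh_collapse_time(r0, rho=1000.0, delta_p=101325.0):
    """Rayleigh collapse time of an empty spherical cavity.

    r0      -- initial bubble radius in metres (assumed value below)
    rho     -- liquid density in kg/m^3 (water)
    delta_p -- driving pressure difference in Pa (~1 atm)
    """
    return 0.915 * r0 * math.sqrt(rho / delta_p)

r0 = 1e-3  # assumed 1 mm bubble, purely illustrative
t = rayleigh_collapse_time(r0)
print(f"collapse time ~ {t * 1e6:.0f} microseconds")  # ~91 us
# The average wall speed is only r0/t ~ 11 m/s, but the wall accelerates
# enormously in the final moments of the collapse -- that terminal spike
# is what launches the shockwave.
```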
The temperature is highest when the pressure is highest, which occurs when the bubble is smallest. You can see this through the ideal gas law assuming a polytropic process, but I don't think that explains the temperatures observed. I've heard other explanations, like that the pressure causes the gas inside the bubble to ionize, and the ions emit bremsstrahlung radiation as they accelerate.
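Here is a sketch of that polytropic (adiabatic) estimate, T = T0 * (R0/R)^(3(gamma-1)); the tenfold radius compression is an assumed, illustrative number:

```python
gamma = 1.4         # heat capacity ratio for a diatomic gas (assumption)
T0 = 300.0          # starting temperature in kelvin
compression = 10.0  # assumed radius ratio R0/R, illustrative

# Adiabatic relation T * V**(gamma - 1) = const, with volume V ~ R**3:
T = T0 * compression ** (3 * (gamma - 1))
print(f"peak temperature ~ {T:.0f} K")  # ~4755 K for these inputs
```

Even this crude estimate lands in the thousands of kelvin, which is why the collapse can produce a flash of light.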
Hope that helped. | [
"Pistol shrimp (also called \"snapping shrimp\") produce a type of cavitation luminescence from a collapsing bubble caused by quickly snapping its claw. The animal snaps a specialized claw shut to create a cavitation bubble that generates acoustic pressures of up to 80 kPa at a distance of 4 cm from the claw. As it extends out from the claw, the bubble reaches speeds of 60 miles per hour (97 km/h) and releases a sound reaching 218 decibels. The pressure is strong enough to kill small fish. The light produced is of lower intensity than the light produced by typical sonoluminescence and is not visible to the naked eye. The light and heat produced may have no direct significance, as it is the shockwave produced by the rapidly collapsing bubble which these shrimp use to stun or kill prey. However, it is the first known instance of an animal producing light by this effect and was whimsically dubbed \"shrimpoluminescence\" upon its discovery in 2001. It has subsequently been discovered that another group of crustaceans, the mantis shrimp, contains species whose club-like forelimbs can strike so quickly and with such force as to induce sonoluminescent cavitation bubbles upon impact.\n",
"The bigclaw snapping shrimp produces a loud, staccato concussive noise with its snapping claw. The sound is produced when the claw snaps shut at great speed creating a high-speed water jet. This creates a small, short-lived cavitation bubble and it is the immediate collapse of this bubble that creates the sound. A spark is formed at the same time. The snapping noise serves to deter predators and to stun prey, and is also used for display purposes.\n",
"When a torpedo with a contact fuze strikes the side of the target hull, the resulting explosion creates a bubble of expanding gas, the walls of which move faster than the speed of sound in water, thus creating a shock wave. The side of the bubble which is against the hull rips away the external plating creating a large breach. The bubble then collapses in on itself, forcing a high-speed stream of water into the breach which can destroy bulkheads and machinery in its path.\n",
"The snapping shrimp competes with much larger animals such as the sperm whale and beluga whale for the title of loudest animal in the sea. The animal snaps a specialized claw shut to create a cavitation bubble that generates acoustic pressures of up to 80 kPa at a distance of 4 cm from the claw. As it extends out from the claw, the bubble reaches speeds of and releases a sound reaching 218 decibels. The pressure is strong enough to kill small fish. It corresponds to a zero to peak pressure level of 218 decibels relative to one micropascal (dB re 1 μPa), equivalent to a zero to peak source level of 190 dB re 1 μPa m. Au and Banks measured peak to peak source levels between 185 and 190 dB re 1 μPa m, depending on the size of the claw. Similar values are reported by Ferguson and Cleary. The duration of the click is less than 1 millisecond.\n",
"Underwater shock waves produced by the explosion stun the fish and cause their swim bladders to rupture. This rupturing causes an abrupt loss of buoyancy; a small amount of fish float to the surface, but most sink to the seafloor. The explosions indiscriminately kill large numbers of fish and other marine organisms in the vicinity and can damage or destroy the physical environment, including extensive damage to coral reefs.\n",
"The impact can also produce sonoluminescence from the collapsing bubble. This will produce a very small amount of light within the collapsing bubble, although the light is too weak and short-lived to be detected without advanced scientific equipment. The light emission probably has no biological significance, but is rather a side effect of the rapid snapping motion. Pistol shrimp produce this effect in a very similar manner.\n",
"Cavitation bubbles, when near a solid surface, can also become a torus. The area away from the surface has an increased static pressure causing a high pressure jet to develop. This jet is directed towards the solid surface and breaks through the bubble to form a torus shaped bubble for a short period of time. This generates multiple shock waves that can damage the surface.\n"
] |
what starts the pumping of the human heart and how does it keep going? | You don't sound dumb. It's a good question. The heart has its own electrical system that keeps it pumping independent of brain function. Sometimes it misfires, though, and that can lead to dangerous arrhythmias, where the heart stops pumping blood effectively. Basically, as long as there's blood flowing through the heart to keep it alive it doesn't even need to be in the body. That's what they do for heart transplants. | [
"The cardiac cycle is the performance of the human heart from the ending of one heartbeat to the beginning of the next. It consists of two periods: one during which the heart muscle relaxes and refills with blood, called diastole (), followed by a period of robust contraction and pumping of blood, dubbed systole (). After emptying, the heart immediately relaxes and expands to receive another influx of blood \"returning from\" the lungs and other systems of the body, before again contracting to \"pump blood to\" the lungs and those systems. A normally performing heart must be fully expanded before it can efficiently pump again. Assuming a healthy heart and a typical rate of 70 to 75 beats per minute, each cardiac cycle, or heartbeat, takes about 0.8 seconds to complete the cycle.\n",
"The heart is the driver of the circulatory system, pumping blood through rhythmic contraction and relaxation. The rate of blood flow out of the heart (often expressed in L/min) is known as the cardiac output (CO).\n",
"Each heart beat originates as an electrical impulse from a small area of tissue in the right atrium of the heart called the sinus node or Sino-atrial node or SA node. The impulse initially causes both atria to contract, then activates the atrioventricular (or AV) node, which is normally the only electrical connection between the atria and the ventricles (main pumping chambers). The impulse then spreads through both ventricles via the Bundle of His and the Purkinje fibres causing a synchronised contraction of the heart muscle and, thus, the pulse.\n",
"The heart pumps blood with a rhythm determined by a group of pacemaking cells in the sinoatrial node. These generate a current that causes contraction of the heart, traveling through the atrioventricular node and along the conduction system of the heart. The heart receives blood low in oxygen from the systemic circulation, which enters the right atrium from the superior and inferior venae cavae and passes to the right ventricle. From here it is pumped into the pulmonary circulation, through the lungs where it receives oxygen and gives off carbon dioxide. Oxygenated blood then returns to the left atrium, passes through the left ventricle and is pumped out through the aorta to the systemic circulation−where the oxygen is used and metabolized to carbon dioxide. The heart beats at a resting rate close to 72 beats per minute. Exercise temporarily increases the rate, but lowers resting heart rate in the long term, and is good for heart health.\n",
"The function of the heart is to drive blood through the circulatory system in a cycle that delivers oxygen, nutrients and chemicals to the body's cells and removes cellular waste. Because it pumps out whatever blood comes back into it from the venous system, the quantity of blood returning to the heart effectively determines the quantity of blood the heart pumps out – its cardiac output, \"Q\". Cardiac output is classically defined alongside stroke volume (SV) and the heart rate (HR) as:\n",
"The heart beats according to a rhythm set up by the sinus node or pacemaker. It is acted on by the nervous system, as well as hormones in the blood, and venous return: the amount of blood being returned to the heart. The two nerves acting on the heart are the vagus nerve, which slows heart rate down by emitting acetylcholine, and the accelerans nerve which speeds it up by emitting noradrenaline. This results in an increased bloodflow, preparing the body for a sudden increase in activity. These nerve fibers are part of the autonomic nervous system, part of the 'fight or flight' system.\n",
"The contraction of cardiac muscle (heart muscle) in all animals is initiated by electrical impulses known as action potentials. The rate at which these impulses fire controls the rate of cardiac contraction, that is, the heart rate. The cells that create these rhythmic impulses, setting the pace for blood pumping, are called pacemaker cells, and they directly control the heart rate. They make up the cardiac pacemaker, that is, the natural pacemaker of the heart. In most humans, the concentration of pacemaker cells in the sinoatrial (SA) node is the natural pacemaker, and the resultant rhythm is a sinus rhythm. \n"
] |
why do i see lots of black guys with white girls, and very few white guys with black girls? | The same reason black men date white women, because black women are crazy. | [
"most American white men are trained to be fags. For this reason it is no wonder their faces are weak and blank ... The average ofay [white person] thinks of the black man as potentially raping every white lady in sight. Which is true, in the sense that the black man should want to rob the white man of everything he has. But for most whites the guilt of the robbery is the guilt of rape. That is, they know in their deepest hearts that they should be robbed, and the white woman understands that only in the rape sequence is she likely to get cleanly, viciously popped.\n",
"The Yellow girl is the only American character in the show, who traveled all the way to Britain in order to see Paul McCartney. The Orange woman is shown as a full grown woman who is married, in her forties, and is starting to suspect her husband is cheating on her. The Blue girl is gorgeous and wealthy, and while she can go on and on about how perfect her life is, she does face some questions regarding her sexuality. The Green girl is the classic sexually-charged \"racey\" character in the show, always hooking up with men and throwing innuendos around. Finally, the Red girl is the youngest and most hopeful character; she is a bit hopeless in the beginning, stating she is not good-looking like other girls, until the man of her dreams comes along.\n",
"Critics have praised \"Awkward Black Girl\" for its witty humor and unique, realistic portrayal of African-American women. \"New York Times\" critic Jon Caramica describes the show as “full of sharp, pointillist humor that’s extremely refreshing.” On her site beyondblackwhite.com, Christelyn Karazin blogs, “Aren't you tired of seeing black women look like idiots on television? Here's a girl—whom I suspect is a lot like the women who read this blog—quirky, funny, a little unsure of herself, rocks her hair natural and is beautifully brown skinned.” Erin Stegeman of \"The Tangled Web\" praises \"Awkward Black Girl\" for defying stereotypes of African American women and being “an uber-relatable slice of life, narrated by J’s inner-ramblings that run through any awkward person’s mind.” In its honest portrayal of the African-American experience and its depiction of the main character, J, as a \"cultural mulatto,\" \"Awkward Black Girl\" belongs to the \"New Black Aesthetic,\" a term coined by African-American novelist Trey Ellis to describe an artistic movement that aims to create fuller meanings of black identity by exploring intra-racial diversity, reexamining stereotypes, and presenting blackness authentically.\n",
"Funny Ladies of Color was a comedy group in the 1990s formed by comedians Lydia Nicole and Cha Cha Sandoval-Epstein. The group was several women of varied ethnic backgrounds- African American, Latino, Armenian, Chicana-Jewish, South Korean, black Puerto Rican, and Filipino. Their popularity grew out of the uniqueness of their brand as a strictly minority crew.\n",
"A group of African-American men, mostly petty criminals, gathered in a room are talking about their experiences with white women. It is soon revealed that one of them (Tony El-Ay) never had sex with a white woman. Furthermore, he rejects the very idea and his friends try to convince him by praising white women. Before he is finally won over, he confesses his fear of white women.\n",
"In daily life, individuals are more likely to encounter white people as the default race within the United States as opposed to Black individuals. When encountering atypical whites (white people with features associated with Blackness), individuals ultimately settle on a White response (the general response to typical white targets is to decide not to shoot quicker and more frequently than in trials with black targets), in contrast to encountering Blacks with atypical features where Black cues appear to be more dominant and elicit a Black (to decide to shoot quicker and more frequently than trials with white targets) due to a misplaced threat perception. Lay people are more racially biased, on average, than trained individuals such as police officers. Prototypicality is shown to moderate racial bias which has been shown to be linked to a perceived threat as black people specifically are predisposed to being viewed as more threatening. Police officers show a reduced racial bias in comparison to members of the community; however, police officers were no better than community members in their sensitivity to prototypic targets providing evidence that prototypicality is directly linked to stereotypes and threat perception which ultimately perpetuates stereotype threat. Members of the same category (race) become harder to distinguish from other members of the same category the more they look like a prototypical representation of their category. (Young, Hugenberg, Bernstein, Sacco 2009).\n",
"In one instance, a United States grouping of Women in Black was accused of mocking and showing disrespect to American soldiers. The Athens, Georgia chapter was the subject of a letter to the Athens \"Banner-Herald\" in October 2007 for a protest at which an unidentified individual, said not to be a member of the military, allegedly dressed up in a U.S. Army uniform, put pacifist political buttons on it, and held peace signs with the Women in Black.\n"
] |
why is 95 gasoline more powerful than 92? | Are you talking about octane rating? If so, it's not more powerful. Octane rating indicates how much compression the fuel can sustain before it ignites. Fuel with a high octane rating can be compressed more, so high-powered engines that compress the fuel more need it in order to avoid the fuel igniting prematurely, causing knocking and engine wear. If your car doesn't have one of those engines, any octane gasoline will work just the same for you. | [
"Most gasoline (petrol) and diesel engines have an expansion ratio equal to the compression ratio (the compression ratio calculated purely from the geometry of the mechanical parts) of 10:1 (premium fuel) or 9:1 (regular fuel), with some engines reaching a ratio of 12:1 or more. The greater the expansion ratio the more efficient is the engine, in principle, and higher compression / expansion -ratio conventional engines in principle need gasoline with higher octane value, though this simplistic analysis is complicated by the difference between actual and geometric compression ratios. High octane value inhibits the fuel's tendency to burn nearly instantaneously (known as \"detonation\" or \"knock\") at high compression/high heat conditions. However, in engines that utilize compression rather than spark ignition, by means of very high compression ratios (14-25:1), such as the diesel engine or Bourke engine, high octane fuel is not necessary. In fact, lower-octane fuels, typically rated by cetane number, are preferable in these applications because they are more easily ignited under compression.\n",
"Gasoline contains about 46.7 MJ/kg (127 MJ/US gal; 35.3 kWh/US gal; 13.0 kWh/kg; 120,405 BTU/US gal), quoting the lower heating value. Gasoline blends differ, and therefore actual energy content varies according to the season and producer by up to 1.75% more or less than the average. On average, about 74 L (19.5 US gal; 16.3 imp gal) of gasoline are available from a barrel of crude oil (about 46% by volume), varying with the quality of the crude and the grade of the gasoline. The remainder are products ranging from tar to naphtha.\n",
"In the UK the most common gasoline grade (and lowest octane generally available) is 'Premium' 95 RON unleaded. 'Super' is widely available at 97 RON (for example \"Shell V-Power\", \"BP Ultimate\"). Leaded fuel is no longer available.\n",
"About 9 percent of all gasoline sold in the U.S. in May 2009 was premium grade, according to the Energy Information Administration. \"Consumer Reports\" magazine says, \"If [your owner’s manual] says to use regular fuel, do so—there's no advantage to a higher grade.\" The \"Associated Press\" said premium gas—which has a higher octane rating and costs more per gallon than regular unleaded—should be used only if the manufacturer says it is \"required\". Cars with turbocharged engines and high compression ratios often specify premium gas because higher octane fuels reduce the incidence of \"knock\", or fuel pre-detonation. The price of gas varies considerably between the summer and winter months.\n",
"The search for fuels with octane ratings above 100 led to the extension of the scale by comparing power output. A fuel designated grade 130 would produce 130 percent as much power in an engine as it would running on pure iso-octane. During WW II, fuels above 100-octane were given two ratings, a rich and lean mixture and these would be called 'performance numbers' (PN). 100-octane aviation gasoline would be referred to as 130/100 grade.\n",
"In 2007, in the United States, average retail (at the pump) prices, including federal and state fuel taxes, of B2/B5 were lower than petroleum diesel by about 12 cents, and B20 blends were the same as petrodiesel. However, as part of a dramatic shift in diesel pricing, by July 2009, the US DOE was reporting average costs of B20 15 cents per gallon higher than petroleum diesel ($2.69/gal vs. $2.54/gal). B99 and B100 generally cost more than petrodiesel except where local governments provide a tax incentive or subsidy.\n",
"Until the late 1920s, all automobile and aviation fuel was generally rated at 87 octane or less. This is the rating that was achieved by the simple distillation of \"light crude\" oil. Engines from around the world were designed to work with this grade of fuel, which set a limit to the amount of boosting that could be provided by the supercharger while maintaining a reasonable compression ratio.\n"
] |
how do scientists know how much of an impact the human body can take in a car wreck? | Much research has been done on cadavers to analyze how strong bones and other tissues are, and there are a great many analyses of real-world injuries where we can estimate the forces involved using physics and then compare those forces with the degree of injury. | [
"Road traffic accidents usually involve impact loading, such as when a car hits a traffic bollard, water hydrant or tree, the damage being localized to the impact zone. When vehicles collide, the damage increases with the relative velocity of the vehicles, the damage increasing as the square of the velocity since it is the impact kinetic energy (1/2 mv) which is the variable of importance. Much design effort is made to improve the impact resistance of cars so as to minimize user injury. It can be achieved in several ways: by enclosing the driver and passengers in a safety cell for example. The cell is reinforced so it will survive in high speed crashes, and so protect the users. Parts of the body shell outside the cell are designed to crumple progressively, absorbing most of the kinetic energy which must be dissipated by the impact. \n",
"Detroit's Wayne State University was the first to begin serious work on collecting data on the effects of high-speed collisions on the human body. In the late 1930s there was no reliable data on how the human body responds to the sudden, violent forces acting on it in an automobile accident. Furthermore, no effective tools existed to measure such responses.\n",
"In 1974, his research attention turned to the study of car occupant injuries. He analysed and reported on the direct connection between the accident, resulting injuries, their causes and the effectiveness of safety features. He gathered medical data, inspected cars and sent questionnaires. His conclusions were clear: intrusion into the passenger compartment of the vehicle during a frontal impact accident played a very major role in causing injuries. (8-10)\n",
"According to road traffic safety experts, the actual number of casualties may be higher than what is documented, as many traffic collisions go unreported. Moreover, victims who die some time after the collision, a span of time which may vary from a few hours to several days, are not counted as car crash victims.\n",
"The crew members had lethal-level injuries sustained from ground impact. The official NASA report omitted some of the more graphic details on the recovery of the remains; witnesses reported finds such as a human heart and parts of femur bones.\n",
"Scene inspections and data recovery involves visiting the scene of the collision and investigating all of the vehicles involved in the collision. Investigations involve collecting evidence such as scene photographs, video of the collision, measurements of the scene, eyewitness testimony, and legal depositions. Additional factors include steering angles, braking, use of lights, turn signals, speed, acceleration, engine rpm, cruise control, and anti-lock brakes. Witnesses are interviewed during collision reconstruction, and physical evidence such as tire marks are examined. The length of a skid mark can often allow calculation of the original speed of a vehicle for example. Vehicle speeds are frequently underestimated by a driver, so an independent estimate of speed is often essential in collisions. Inspection of the road surface is also vital, especially when traction has been lost due to black ice, diesel fuel contamination, or obstacles such as road debris. Data from an event data recorder also provides valuable information such as the speed of the vehicle a few seconds before the collision.\n",
"Vehicular accident reconstruction relies on some marks to estimate vehicle speed before and during an accident, as well as braking and impact forces. Fabric prints of clothing worn by pedestrians in the paint and/or road grime of the striking vehicle can match a specific vehicle involved in a hit-and-run collision.\n"
] |
How does gravity affect the atomic nucleus? | There are several things to consider here.
As a reminder of the strengths of the forces acting on the particles (taking the proton-electron separation to be the Bohr radius, r = 5.3 * 10^-11 m):
Strength of gravity of a proton acting on an electron:
F_g = G * m1 * m2 / r^2
= 3.67 * 10^-47 Newtons
Strength of electromagnetism acting on an electron:
F_e = k * q1 * q2 / r^2
= 8 * 10^-8 Newtons
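A quick numerical check of those two figures (standard SI constants; the Bohr-radius separation is the assumption stated above):

```python
G   = 6.674e-11  # gravitational constant, N m^2 / kg^2
k   = 8.988e9    # Coulomb constant, N m^2 / C^2
m_p = 1.673e-27  # proton mass, kg
m_e = 9.109e-31  # electron mass, kg
e   = 1.602e-19  # elementary charge, C
r   = 5.29e-11   # Bohr radius, m (the assumed separation)

F_g = G * m_p * m_e / r**2
F_e = k * e * e / r**2
print(f"F_g ~ {F_g:.2e} N")          # ~3.6e-47 N
print(f"F_e ~ {F_e:.2e} N")          # ~8.2e-08 N
print(f"F_e / F_g ~ {F_e/F_g:.1e}")  # ~2.3e+39 -- why gravity gets ignored
```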
In particle physics, the effect of gravity of the particles on each other is effectively ignored.
The effect of gravity is also usually considered from the center of mass. But protons and neutrons are composite particles made of charged quarks, so when you get very close you have to consider masses acting in various directions, similar to how digging to the center of the Earth leaves you weightless because of the even pull all around you.
Electrons and quarks are also effectively point particles, as they don't seem to have a physical size. The "size" of a particle is a vague notion anyway; it is usually defined as an interaction radius for a given force, so a particle has a different size depending on what you are comparing it to.
More importantly, however, you are in the realm of quantum mechanics, so classical approximations don't hold. The reason the electron does not fall into the nucleus despite the forces involved is that the wavefunction of the electron does not allow it to. Gravity would also need its own quantum theory to be properly integrated into this picture for reasonable predictions (we do not have a quantum theory of gravity yet).
Theoretically though, if two particles were at zero separation, or at least very, very close, they are predicted to collapse into a black hole, because the mass density of that tiny volume would reach the required level. Of course, we have no observed instance of this because of how highly improbable it is, but in theory, that's what would happen. | [
"After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10 m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive particle, whose mass is approximately 100 MeV.\n",
"However, the repulsive electromagnetic pressures resulting from protons being tightly squeezed inside atomic nuclei are typically on the order of 10 atm ≈ 10 Pa ≈ 10 kg·sm. This amounts to about 1% of the nuclear mass density of approximately 10kg/m (after factoring in c ≈ 9×10ms).\n",
"Essentially, atomic radius decreases across the periods due to an increasing number of protons. Therefore, there is a greater attraction between the protons and electrons because opposite charges attract, and more protons creates a stronger charge. The greater attraction draws the electrons closer to the protons, decreasing the size of the particle. Therefore, atomic radius decreases. Down the groups, atomic radius increases. This is because there are more energy levels and therefore a greater distance between protons and electrons. In addition, electron shielding causes attraction to decrease, so remaining electrons can go farther away from the positively charged nucleus. Therefore, size (atomic radius) increases.\n",
"A classical electron orbiting a nucleus experiences acceleration and should radiate. Consequently, the electron loses energy and the electron should eventually spiral into the nucleus. Atoms, according to classical mechanics, are consequently unstable. This classical prediction is violated by the observation of stable electron orbits. The problem is resolved with a quantum mechanical description of atomic physics, initially provided by the Bohr model. Classical solutions to the stability of electron orbitals can be demonstrated using Non-radiation conditions and in accordance with known physical laws.\n",
"The electrons in an atom contribute magnetic moments from their own angular momentum and from their orbital momentum around the nucleus. Magnetic moments from the nucleus are insignificant in contrast to the magnetic moments from the electrons. Thermal contributions result in higher energy electrons disrupting the order and the destruction of the alignment between dipoles. \n",
"The effect results from poor shielding of nuclear charge (nuclear attractive force on electrons) by 4f electrons; the 6s electrons are drawn towards the nucleus, thus resulting in a smaller atomic radius.\n",
"In hydrogen, or any other atom in group 1A of the periodic table (those with only one valence electron), the force on the electron is just as large as the electromagnetic attraction from the nucleus of the atom. However, when more electrons are involved, each electron (in the \"n\"-shell) experiences not only the electromagnetic attraction from the positive nucleus, but also repulsion forces from other electrons in shells from 1 to \"n\". This causes the net force on electrons in outer shells to be significantly smaller in magnitude; therefore, these electrons are not as strongly bonded to the nucleus as electrons closer to the nucleus. This phenomenon is often referred to as the orbital penetration effect. The shielding theory also contributes to the explanation of why valence-shell electrons are more easily removed from the atom.\n"
] |
if formula 1 teams use totally smooth tires for perfect grip in dry weather, why are there laws in place about grip on road tires? | F1 (and NASCAR, etc) have different sets of tires for dry and wet conditions; they go into the pits to change tires when the wet happens. The "rain" tires have grooves.
Your parents' tires have to handle all weather conditions (unless they are rich with a Ferrari and a racing garage), so your government has road-safety laws that require tires to keep a minimum depth of tread grooves. | [
"Formula One tyres bear only a superficial resemblance to a normal road tyre. Whereas the latter has a useful life of up to , the tyres used in Formula One are built to last less than one race distance. The purpose of the tyre determines the compound of the rubber to be used. In extremely wet weather, such as that seen in the 2007 European Grand Prix, the F1 cars are unable to keep up with the safety car in deep standing water due to the risk of aquaplaning. In very wet races, such as the 2011 Canadian Grand Prix, the tyres are unable to provide a safe race due to the amount of water, and so the race can be red flagged. The race is either then stopped permanently, or suspended for any period of time until the cars can race safely again.\n",
"Rain tyres are also made from softer rubber compounds to help the car grip in the slippery conditions and to build up heat in the tyre. These tyres are so soft that running them on a dry track would cause them to deteriorate within minutes. Softer rubber means that the rubber contains more oils and other chemicals which cause a racing tyre to become sticky when it is hot. The softer a tyre, the stickier it becomes, and conversely with hard tyres.\n",
"Sport/performance tyres provide excellent grip but may last or less. Cruiser and \"sport touring\" tyres try to find the best compromise between grip and durability. There is also a type of tyre developed specifically for racing. These tyres offer the highest of levels of grip for cornering. Because of the high temperatures at which these tyres typically operate, use on the street is unsafe as the tyres will typically not reach optimum temperature before a rider arrives at the destination, thus providing almost no grip \"en route\". In racing situations, racing tyres would normally be brought up to temperature in advance by the use of tyre warmers.\n",
"Motorsport or racing tires offer the highest of levels of grip. Due of the high temperatures at which these tires typically operate, use outside a racing environment is unsafe, typically these tires do not reach their reach optimum temperature which provides less than optimal grip. In racing situations, tires are normally brought up to temperature in advance based on application and conditions through the use of tire warmers.\n",
"Rain tyres are cut or moulded with patterned grooves or tread in them. This allows the tyre to quickly displace the water between the ground and the rubber on the tyre. If this water is not displaced, the car will experience an effect known as hydroplaning as the rubber will not be in contact with the ground. These grooves do not help the car grip contrary to popular belief, however if these grooves are too shallow, the grip will be impaired in wet conditions as the rubber will not be able to make good contact with the ground. The patterns are designed to displace water as quickly as possible to the edges of the tyre or into specially cut channels in the centre of the tyre. Not all groove patterns are the same. Optimal patterns depend on the car and the conditions. The grooves are also designed to generate heat when lateral forces are applied to the tyre.\n",
"Slick tyres are not suitable for use on common road vehicles, which must be able to operate in all weather conditions. They are used in auto racing where competitors can choose different tyres based on the weather conditions and can often change tyres during a race. Slick tyres provide far more traction than grooved tyres on dry roads, due to their greater contact area but typically have far less traction than grooved tyres under wet conditions. Wet roads severely diminish the traction because of aquaplaning due to water trapped between the tyre contact area and the road surface. Grooved tyres are designed to remove water from the contact area through the grooves, thereby maintaining traction even in wet conditions.\n",
"Touring tyres are usually made of harder rubber for greater durability. They may last longer, but they tend to provide less outright grip than sports tyres at optimal operating temperatures. The tradeoff is that touring tyres typically offer more grip at lower temperatures, meaning they can be more suitable for riding in cold or winter conditions whereas a sport tyre may never reach the optimal operating temperature.\n"
] |
Why is it that Neutrinos can pass through so much material without a problem (like the Earth?) How are we able to detect them if they so easily penetrate matter? | Neutrinos only interact through the [weak interaction](_URL_2_) because they don't possess an electric charge (needed for electromagnetic interaction) or a color charge (needed for [strong interaction](_URL_1_)). The weak interaction is a short-range interaction, so neutrinos interact very little with matter, meaning they can pass through it almost unimpeded.
To detect them we basically use [gigantic pools](_URL_0_) of [heavy water](_URL_3_), hoping a few neutrinos (I don’t know what the rate is exactly) will interact and we can detect them.
*Note: gravitation can be neglected because neutrinos are so light.*
PS: maybe to clarify the “why is it that neutrinos can pass through so much material” part:
because matter is mostly empty space, and it's the electromagnetic force of the atoms that prevents matter from passing through other matter (like 2 magnets repelling each other even when they're not touching); and as said above, neutrinos don't interact via the electromagnetic force (a block of wood isn't stopped by a magnet).
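To put a number on "interact very little": the mean free path is λ = 1/(n·σ). A rough sketch for solid lead; the cross-section below is an assumed order-of-magnitude value for MeV-scale neutrinos, not a precise figure:

```python
rho   = 11340.0    # density of lead, kg/m^3
A     = 207.2      # atomic mass of lead, g/mol
m_u   = 1.661e-27  # atomic mass unit, kg
sigma = 1e-45      # assumed neutrino cross-section per atom, m^2

n = rho / (A * m_u)      # number density of lead atoms, 1/m^3
lam = 1.0 / (n * sigma)  # mean free path, m
light_year = 9.46e15     # metres per light-year

print(f"mean free path ~ {lam:.1e} m ~ {lam / light_year:.0f} light-years")
# ~3e16 m, i.e. a few light-years of solid lead -- the often-quoted
# "light-year of lead" factoid comes from estimates like this.
```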
| [
"Since neutrinos interact only very rarely with matter, the enormous flux of solar neutrinos racing through the Earth is sufficient to produce only 1 interaction for 10 target atoms, and each interaction produces only a few photons or one transmuted atom. The observation of neutrino interactions requires a large detector mass, along with a sensitive amplification system.\n",
"Despite how common they are, neutrinos are extremely \"difficult to detect\" due to their low mass and lack of electric charge. Unlike other particles, neutrinos only interact via gravity and the neutral current (involving the exchange of a Z boson) or charged current (involving the exchange of a W boson) weak interactions. As they have only a \"smidgen of rest mass\" according to the laws of physics, perhaps less than a \"millionth as much as an electron,\" the gravitational force caused by neutrinos has proven too weak to detect, leaving the weak interaction as the main method for detection: \n",
"Neutrinos traveling through matter, in general, undergo a process analogous to light traveling through a transparent material. This process is not directly observable because it does not produce ionizing radiation, but gives rise to the MSW effect. Only a small fraction of the neutrino's energy is transferred to the material.\n",
"Neutrinos' low mass and neutral charge mean they interact exceedingly weakly with other particles and fields. This feature of weak interaction interests scientists because it means neutrinos can be used to probe environments that other radiation (such as light or radio waves) cannot penetrate.\n",
"A neutrino is a fundamental particle that interacts very weakly with other matter. For this reason, it requires detection apparatus on a very large scale, and the ocean is sometimes used for this purpose. In particular, it is thought that ultra-high energy neutrinos in seawater can be detected acoustically.\n",
"Neutrinos cannot be detected directly, because they do not ionize the materials they are passing through (they do not carry electric charge and other proposed effects, like the MSW effect, do not produce traceable radiation). A unique reaction to identify antineutrinos, sometimes referred to as inverse beta decay, as applied by Reines and Cowan (see below), requires a very large detector to detect a significant number of neutrinos. All detection methods require the neutrinos to carry a minimum threshold energy. So far, there is no detection method for low-energy neutrinos, in the sense that potential neutrino interactions (for example by the MSW effect) cannot be uniquely distinguished from other causes. Neutrino detectors are often built underground to isolate the detector from cosmic rays and other background radiation.\n",
"Neutrinos are also useful for probing astrophysical sources beyond the Solar System because they are the only known particles that are not significantly attenuated by their travel through the interstellar medium. Optical photons can be obscured or diffused by dust, gas, and background radiation. High-energy cosmic rays, in the form of swift protons and atomic nuclei, are unable to travel more than about 100 megaparsecs due to the Greisen–Zatsepin–Kuzmin limit (GZK cutoff). Neutrinos, in contrast, can travel even greater distances barely attenuated.\n"
] |
what causes the “refrigerated taste” food can get when it is uncovered in the freezer too long? | All the food inside is slowly drying out, and the escaping moisture carries smells into the air with it. The freezer is closed and small, so all that smelly air is trapped in there. Over time, food left in there too long develops a dry crust, and the smells in that trapped air start to soak back into the dried-out surface. The yucky taste and texture is all those mixed smells and the dried-out crust combined. | [
"When foods are frozen without preparation, freezer burn can occur. It happens when the surface of the food is dehydrated, and this leads to a dried and leathery appearance. Freezer burn also ruins the flavor and texture of foods. Vacuum packing reduces freezer burn by preventing the food from exposure to the cold, dry air.\n",
"One of the main advantages of this method of preparing frozen food is that the freezing process takes only a few minutes. The exact time depends on the type of IQF freezer and the product. The short freezing prevents formation of large ice crystals in the product’s cells, which destroys the membrane structures at the molecular level. This makes the product keep its shape, colour, smell and taste after defrost, at a far greater extent. \n",
"People sometimes defrost frozen foods at room temperature because of time constraints or ignorance; such foods should be promptly consumed after cooking or discarded and never be refrozen or refrigerated since pathogens are not killed by the freezing process.\n",
"Freezer burn appears as grayish-brown leathery spots on frozen food, and occurs when air reaches the food's surface and dries the product. Color changes result from chemical changes in the food's pigment. Freezer burn does not make the food unsafe; it merely causes dry spots in foods. The food remains usable and edible, but removing the freezer burns will improve the taste.\n",
"Frozen products do not require any added preservatives because microorganisms do not grow when the temperature of the food is below , which is sufficient on its own in preventing food spoilage. Long-term preservation of food may call for food storage at even lower temperatures. Carboxymethylcellulose (CMC), a tasteless and odorless stabilizer, is typically added to frozen food because it does not adulterate the quality of the product.\n",
"This process occurs even if the package has never been opened, due to the tendency for all molecules, especially water, to escape solids via vapour pressure. Fluctuations in temperature within a freezer also contribute to the onset of freezer burn because such fluctuations set up temperature gradients within the solid food and air in the freezer, which create additional impetus for water molecules to move from their original positions.\n",
"Freezing food to preserve its quality has been used since time immemorial. Freezing temperatures curb the spoiling effect of microorganisms in food, but can also preserve some pathogens unharmed for long periods of time. Freezing kills some microorganisms by physical trauma, others are sublethally injured by freezing, and may recover to become infectious.\n"
] |
is it real that when you leave the refrigerator door open it consumes more energy? | It does use more electricity, because you're letting the cold air out, so the fridge has to use more power to keep itself cool. BUT it is never going to be noticeable on the electricity bill unless you leave it fully open all day in temperatures around 20°C, and even then it's only going to add maybe 25p per day.
BUT here's my question: who on earth goes to the fridge and leaves the door open? Regardless of whether it costs more electricity, it will make your food go off sooner and stop it being cold.
I have never met anyone who opens the fridge and leaves it open; it literally makes no sense | [
"Refrigeration may be defined as lowering the temperature of an enclosed space by removing heat from that space and transferring it elsewhere. A device that performs this function may also be called an air conditioner, refrigerator, air source heat pump, geothermal heat pump, or chiller (heat pump).\n",
"A defrosting procedure is generally performed periodically on refrigerators and freezers to maintain their operating efficiency. Over time, as the door is opened and closed, letting in new air, water vapour from the air condenses on the cooling elements within the cabinet.\n",
"A \"refrigeration cycle\" describes the changes that take place in the refrigerant as it alternately absorbs and rejects heat as it circulates through a refrigerator. It is also applied to heating, ventilation, and air conditioning HVACR work, when describing the \"process\" of refrigerant flow through an HVACR unit, whether it is a packaged or split system.\n",
"In the refrigeration cycle, heat is transported from the passenger compartment to the environment. A refrigerator is an example of such a system, as it transports the heat out of the interior and into the ambient environment.\n",
"If vents are left open at night (or on cloudy days), a reversal of convective airflow will occur, wasting heat by dissipating it outdoors. Vents must be closed at night so radiant heat from the interior surface of the storage wall heats the indoor space. Generally, vents are also closed during summer months when heat gain is not needed. During the summer, an exterior exhaust vent installed at the top of the wall can be opened to vent to the outside. Such venting makes the system act as a solar chimney driving air through the building during the day.\n",
"Exhaust ducts and/or open windows must be used at all times to allow air to continually escape the air-conditioned area. Otherwise, pressure develops and the fan or blower in the system is unable to push much air through the media and into the air-conditioned area. The evaporative system cannot function without exhausting the continuous supply of air from the air-conditioned area to the outside. By optimizing the placement of the cooled-air inlet, along with the layout of the house passages, related doors, and room windows, the system can be used most effectively to direct the cooled air to the required areas. A well-designed layout can effectively scavenge and expel the hot air from desired areas without the need for an above-ceiling ducted venting system. Continuous airflow is essential, so the exhaust windows or vents must not restrict the volume and passage of air being introduced by the evaporative cooling machine. One must also be mindful of the outside wind direction, as, for example, a strong hot southerly wind will slow or restrict the exhausted air from a south-facing window. It is always best to have the downwind windows open, while the upwind windows are closed.\n",
"A common form of refrigeration economizer is a \"walk-in cooler economizer\" or \"outside air refrigeration system\". In such a system outside air that is cooler than the air inside a refrigerated space is brought into that space and the same amount of warmer inside air is ducted outside. The resulting cooling supplements or replaces the operation of a compressor-based refrigeration system. If the air inside a cooled space is only about 5 °F warmer than the outside air that replaces it (that is, the ∆T5 °F) this cooling effect is accomplished more efficiently than the same amount of cooling resulting from a compressor based system. If the outside air is not cold enough to overcome the refrigeration load of the space the compressor system will need to also operate, or the temperature inside the space will rise.\n"
] |
What happens when you prepare acids with heavy water? | What you are referring to is called the [isotope effect](_URL_0_). It is real, but it isn't usually very pronounced.
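A common back-of-envelope for how big a kinetic isotope effect can get is the zero-point-energy argument, k_H/k_D ~ exp(ΔZPE/kT). A minimal sketch, where the X-H stretch frequency is an assumed generic value; this estimates the rough maximum for breaking that bond, and real effects are usually smaller, which is part of why the effect often isn't dramatic:

```python
import math

nu_H = 3000.0               # assumed generic X-H stretch frequency, cm^-1
nu_D = nu_H / math.sqrt(2)  # deuterium roughly doubles the reduced mass
kT = 0.695 * 298.0          # kT at 298 K, expressed in cm^-1

# Zero-point energy is (1/2) * frequency in these units, so the
# H/D difference is half the frequency gap:
d_zpe = 0.5 * (nu_H - nu_D)
kie = math.exp(d_zpe / kT)
print(f"k_H / k_D ~ {kie:.1f}")  # ~8, same ballpark as the textbook ~7
```
| [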
"Strong acids also undergo hydrolysis. For example, dissolving sulfuric acid (HSO) in water is accompanied by hydrolysis to give hydronium and bisulfate, the sulfuric acid's conjugate base. For a more technical discussion of what occurs during such a hydrolysis, see Brønsted–Lowry acid–base theory.\n",
"Preparation of the diluted acid can be dangerous due to the heat released in the dilution process. To avoid splattering, the concentrated acid is usually added to water and not the other way around. Water has a higher heat capacity than the acid, and so a vessel of cold water will absorb heat as acid is added.\n",
"The acid can also be prepared by dissolving dichlorine monoxide in water; under standard aqueous conditions, anhydrous hypochlorous acid is currently impossible to prepare due to the readily reversible equilibrium between it and its anhydride:\n",
"Salts of strong acids and strong bases (\"strong salts\") are non-volatile and often odorless, whereas salts of either weak acids or weak bases (\"weak salts\") may smell like the conjugate acid (e.g., acetates like acetic acid (vinegar) and cyanides like hydrogen cyanide (almonds)) or the conjugate base (e.g., ammonium salts like ammonia) of the component ions. That slow, partial decomposition is usually accelerated by the presence of water, since hydrolysis is the other half of the reversible reaction equation of formation of weak salts.\n",
"When a salt of a weak acid or of a weak base is dissolved in water, water can partially hydrolyze the salt, producing the corresponding base or acid, which gives aqueous solutions of soap and baking soda their basic pH:\n",
"Strong acids and bases are compounds that, for practical purposes, are completely dissociated in water. Under normal circumstances this means that the concentration of hydrogen ions in acidic solution can be taken to be equal to the concentration of the acid. The pH is then equal to minus the logarithm of the concentration value. Hydrochloric acid (HCl) is an example of a strong acid. The pH of a 0.01M solution of HCl is equal to −log(0.01), that is, pH = 2. Sodium hydroxide, NaOH, is an example of a strong base. The p[OH] value of a 0.01M solution of NaOH is equal to −log(0.01), that is, p[OH] = 2. From the definition of p[OH] above, this means that the pH is equal to about 12. For solutions of sodium hydroxide at higher concentrations the self-ionization equilibrium must be taken into account.\n",
"Acidulated water is water where some sort of acid is added—often lemon juice, lime juice, or vinegar—to prevent cut or skinned fruits or vegetables from browning so as to maintain their appearance. Some vegetables and fruits often placed in acidulated water are apples, avocados, celeriac, potatoes and pears. When the fruit or vegetable is removed from the mixture, it will usually resist browning for at least an hour or two, even though it is being exposed to oxygen.\n"
] |
What is the spectrum of professional opinion on the Kennedy assassination? | Oswald shot him. In the head.
That's pretty much the only opinion that will not get you rejected for tenure. Why? Because like all conspiracy theories, the JFK conspiracy relies upon such a perfect chain of events, placement of people, and guaranteed complicity, as well as on not leaving a paper trail a mile long, that it borders on the absurd.
What is really more plausible? That one crazy communist with a gun slipped through the security cracks and got off three honestly easy shots on a day when the President went against the better advice of his security team? OR that the Cuban rebels/CIA/FBI/Mafia/Alien Greys/Freemasons/Rosicrucians/Girl Scouts conspired to off the most powerful man in the free world without anyone developing a guilty conscience, leaving verifiable evidence, or suffering failures in security, lapses in timing, or just plain bad luck (if you have any experience with real government secret planning, you would know how many things get completely cocked up)?
| [
"At least five other American films dramatize the Kennedy assassination as a conspiracy; \"Executive Action\" sits alongside Oliver Stone's \"JFK\" (1991); John MacKenzie's \"Ruby\" (1992); the 1984 William Tannen film \"Flashpoint\"; and Neil Burger's 2002 pseudo-documentary \"Interview with the Assassin\".\n",
"Some conspiracy theories surrounding the Kennedy assassination have focused on witnesses to the assassination who have not been identified, or who have not identified themselves, despite the media attention that the Kennedy assassination has received.\n",
"Today, there are many conspiracy theories concerning the assassination of John F. Kennedy in 1963. Vincent Bugliosi estimates that over 1,000 books have been written about the Kennedy assassination, at least ninety percent of which are works supporting the view that there was a conspiracy. As a result of this, the Kennedy assassination has been described as \"the mother of all conspiracies\". The countless individuals and organizations that have been accused of involvement in the Kennedy assassination include the CIA, the Mafia, sitting Vice President Lyndon B. Johnson, Cuban Prime Minister Fidel Castro, the KGB, or even some combination thereof. It is also frequently asserted that the United States federal government intentionally covered up crucial information in the aftermath of the assassination to prevent the conspiracy from being discovered.\n",
"The House Select Committee on Assassinations investigated the allegation \"that a statistically improbable number of individuals with some direct or peripheral association with the Kennedy assassination died as a result of that assassination, thereby raising the specter of conspiracy\". The committee's chief of research testified: \"Our final conclusion on the issue is that the available evidence does not establish anything about the nature of these deaths which would indicate that the deaths were in some manner, either direct or peripheral, caused by the assassination of President Kennedy or by any aspect of the subsequent investigation.\"\n",
"In the 2008-2009 series \"\" by Gerard Way and Gabriel Bá, the Kennedy assassination is a central plot element. The series initially takes place in a timeline where the assassination never happened, until an organisation of time-travelling assassins go back to 1963 to kill Kennedy. When the Umbrella Academy intercept the gunmen, The Rumour, disguised as Jacqueline Kennedy, uses her powers to make Kennedy's head explode. \n",
"Minster provided the narration for the controversial Central television documentary \"The Men Who Killed Kennedy\", which outlined various theories concerning the assassination of the American president John F. Kennedy.\n",
"John F. Kennedy, the 35th President of the United States, was assassinated in Dallas, Texas, on November 22, 1963. Various agencies and government panels have investigated the assassination at length, drawing different conclusions. Lee Harvey Oswald is accepted by official investigations as the assassin, but he was murdered by Jack Ruby before he could be tried in a court of law. The discrepancies between the official investigations and the extraordinary nature of the assassination have led to a variety of theories about how and why Kennedy was assassinated. \n"
] |
what's more inflated, the price of diamonds or artificial diamonds? | That's a damn interesting question but impossible to answer because we do not know just how horribly inflated diamond prices are. | [
"Regarding the latter, the main argument presented being that the paradigm where diamonds were seen as rare due to their visual beauty is no longer the case and instead has been replaced by an artificial rarity reflected in their price. This is attributed to confirmed evidence that there were price-fixing practices taken by the major producers of rough diamonds, in majority attributed to De Beers Company known as to holding a monopoly on the market from the 1870s to early 2000s. The company plead guilty to these charges in an Ohio court in 13 July 2004. However, De Beers and Co do not have as much power over the market, the price of diamonds continues to increase due to the increased demand in emerging markets such as India and China. Therefore, with the emergence of artificial stones, such as cubic zirconia, that have optic properties highly similar to that of diamonds (see section above), it has been presented that these could be a better alternative for jewelry buyers given their lower price and unconvoluted history.\n",
"There are several factors contributing to low liquidity of diamonds. One of the main factors is the lack of terminal market. Most commodities have terminal markets, and some form of commodities exchange, clearing house, and central storage facilities. Until recently this did not exist for diamonds. Diamonds are also subject to value added tax in the UK and EU, and sales tax in most other developed countries, therefore reducing their effectiveness as an investment medium. Most diamonds are sold through retail stores at very high profit margins.\n",
"The price of diamonds depends mainly on the 4 C's of diamonds - carat, color, clarity, cut. Because of this pricing system large gemstones are worth more than a comparable mass of smaller stones. For this reason a successful diamond mining operation can't rely solely on the mass of carats recovered. The Kelsey Lake mine has produced some large stones.\n",
"Diamonds in larger sizes are rare, and their price is dependent on the individual features of the diamond. Fashion and marketing aspects can also cause fluctuations in price. This makes it difficult to establish a uniform and readily understood pricing system. Martin Rapaport produces the Rapaport Diamond Report, which lists prices for polished diamonds. The Rapaport Diamond Report is relatively expensive to subscribe to and, as such, is not readily available to consumers and investors. Each week, there are matrices of diamond prices for various shapes of brilliant cut diamonds, by colour and clarity within size bands. The price matrix for brilliant cuts alone exceeds 1,400 entries, and even this is achieved only by grouping some grades together. There are considerable price shifts near the edges of the size bands, so a stone may list at $5,500 per carat = $2,695, while a stone of similar quality lists at $7,500 per carat = $3,750. This difference seems surprising, but in reality stones near the top of a size band (or rarer fancy coloured varieties) tend to be uprated slightly. Some of the price jumps are related to marketing and consumer expectations. For example, a buyer expecting a diamond solitaire engagement ring may be unwilling to accept a diamond.\n",
"Clarity and color enhanced diamonds sell at lower price points when compared to similar, untreated diamonds. This is because enhanced diamonds are originally lower quality before the enhancement is performed, and therefore are priced at a substandard level. After enhancement, the diamonds may visually appear as good as their non-enhanced counterparts.\n",
"The value of diamonds as an investment is of significant interest to the general public, because they are expensive gemstones, often purchased in engagement rings, due in part to a successful 20th century marketing campaign by De Beers. The difficulty of properly assessing the value of an individual gem-quality diamond complicates the situation. The end of the De Beers monopoly and new diamond discoveries in the second half of the 20th century have reduced the resale value of diamonds. Recessions have engendered greater interest in investments that exhibit safe-haven or hedging properties that are uncorrelated to investments in the equities markets. Academic studies have indicated that investments in physical diamonds exhibit greater safe-haven characteristics than investments in diamond indices.\n",
"Due to changes in market desirability and popularity, the value of different styles of diamond fluctuates. All diamonds can be recut into new shapes that will increase value at that time in the market and desirability. An example of this is the \"marquise\" cut diamond which was popular in the 1970s to 1980s. In later decades, jewelers had little success in selling this shape in comparison to other shapes like the oval or pear shape. The \"marquise\" can be cut into an oval diamond by any diamond cutter with a loss of 5 to 10% in total weight. For example, a 1.10-carat marquise shape would be a 1.00 oval cut diamond by rounding the sharp points and creating an oval which currently in the market has a much greater desirability and resale value. The same marquise shape also could become a pear shape instead by only trimming and rounding the side which will be turned into the base of the pear shape.\n"
] |
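The band-edge jump quoted in the fourth passage above is plain arithmetic (price equals per-carat rate times weight), but seeing it computed makes the discontinuity obvious. A minimal sketch; the `list_price` helper is ours, and the rates and weights are the ones quoted in the passage:

```python
# List price of a stone under per-carat pricing: price = rate * weight.
def list_price(rate_per_carat: float, carats: float) -> float:
    return rate_per_carat * carats

# The two stones quoted above sit on either side of a size-band edge.
just_under = list_price(5500, 0.49)  # 5500 * 0.49 = 2695
just_over = list_price(7500, 0.50)   # 7500 * 0.50 = 3750

print(f"0.49 ct: ${just_under:,.0f}")  # $2,695
print(f"0.50 ct: ${just_over:,.0f}")   # $3,750
# A ~2% increase in weight yields a ~39% jump in price across the band edge.
print(f"jump across the edge: {just_over / just_under - 1:.0%}")
```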
How many Spanish troops were in Cuba and Puerto Rico during the Spanish-American War? | Spain's force in Cuba numbered 278,457 soldiers, distributed among 101 Infantry Battalions, 11 Cavalry Regiments, 2 Artillery Regiments, and 4 Marine Battalions. The force in Cuba made up the bulk of Spain's entire military force, nearly 57 percent of the Army. This force was bolstered by another 82,000 volunteers. Another 10,005 were in Puerto Rico, and 51,331 in the Philippines, for another 12 percent of the Spanish Army (a quick arithmetic check follows the source list below).
Although a large force, the Spanish Army of the time was somewhat decrepit, manned with poor-quality conscripts (those who could afford to pay the tax to avoid universal conscription always did) and never with enough equipment, though the troops did carry decent Mauser rifles. While the army commanded a large part of the Spanish budget, the bloated officer corps (a 1:4 officer-to-enlisted ratio!) ate up much of that with their salaries. The aloof officer corps wasn't up to the task of leadership, and the men were not all that easy to lead in any case.
At sea, Cuba and Puerto Rico were defended by 8 cruisers, 6 destroyers, and 49 other small craft manned by 2,800 sailors and 600 marines. As with the Army though, the Navy was a paper tiger at best, as barely any of the Spanish fleet was up to modern standards and able to go toe-to-toe with the US Navy, which as it turned out, made mincemeat of 'em.
"Spain, Army" and "Spain, Navy" from Encyclopedia of the Spanish-American and Philippine American Wars, ed. by Spencer C. Tucker | [
"The Spanish Crown sent the 1st, 2nd and 3rd Puerto Rican Provisional Battalions to defend Cuba against the American invaders. The 1st Puerto Rican Provisional Battalion, composed of the Talavera Cavalry and Krupp artillery, was sent to Santiago de Cuba where they battled the American forces in the Battle of San Juan Hill. After the battle, the Puerto Rican Battalion suffered a total of 70% casualties which included their dead, wounded, MIA's and prisoners.\n",
"On July 25, 1898, during the Spanish–American War, the U.S. invaded Puerto Rico with a landing at Guánica. As an outcome of the war, Spain ceded Puerto Rico, along with the Philippines and Guam, then under Spanish sovereignty, to the U.S. under the Treaty of Paris, which went into effect on April 11, 1899. Spain relinquished sovereignty over Cuba, but did not cede it to the U.S.\n",
"As stated in the introduction, the Puerto Rican Battalion suffered a total of 70 casualties which included their dead, wounded, MIA's and prisoners. The Spanish, Puerto Ricans and Americans that participated in the campaign totaled 33,472. Of this total 18,000 were Spanish, 10,000 were Puerto Rican and 15,472 were American military personnel. The Spanish and Puerto Rican suffered 429 casualties which included 17 dead, 88 wounded and 324 captured. The American forces suffered 43 casualties: 3 dead and 40 wounded. The commander of Spain's 6th Provisional Battalion, Julio Cervera Baviera gained notoriety as the author of a pamphlet called \"La defensa de Puerto Rico\", which supported Governor General Manuel Macías y Casado and in an attempt to justify Spain's defeat against the United States, falsely blamed the Puerto Rican volunteers in the Spanish Army of the fiasco A group of angry \"Sanjuaneros\" agreed to challenge Cervera to a duel if the commander did not retract his pamphlet. The men drew lots for this honor; it fell to José Janer y Soler and was seconded by Cayetano Coll y Toste y Leonidas Villalón. Cervera's seconds were Colonel Pedro del Pino and Captain Emilio Barrera. The duel never took place, as Cervera explained his intentions in writing the pamphlet, and all parties were satisfied.\n",
"Spain ceded the Philippine islands to the United States in the Treaty of Paris which ended the Spanish–American War. Following the American occupation of the northern Philippine Islands during 1899, Spanish forces in Mindanao were cut off, and they retreated to the garrisons at Zamboanga and Jolo. American forces relieved the Spanish at Zamboanga on May 18, 1899, and at Jolo and Basilan in December 1899.\n",
"Spain ceded the Philippine islands to the United States in the Treaty of Paris which ended the Spanish–American War. Following the American occupation of the northern Philippine Islands during 1899, Spanish forces in Mindanao were cut off, and they retreated to the garrisons at Zamboanga and Jolo. American forces relieved the Spanish at Zamboanga on May 18, 1899, and at Basilan seven months after.\n",
"The islands were ceded by Spain to the United States alongside Puerto Rico and Guam as a result of the latter's victory in the Spanish–American War. A compensation of US$20 million was paid to Spain according to the terms of the 1898 Treaty of Paris. As it became increasingly clear the United States would not recognize the nascent First Philippine Republic, the Philippine–American War broke out. Brigadier General James F. Smith arrived at Bacolod on March 4, 1899, as the Military Governor of the Sub-district of Negros, after receiving an invitation from Aniceto Lacson, president of the breakaway Cantonal Republic of Negros. The war resulted in the deaths of at least 200,000 Filipino civilians, mostly due to famine and disease.\n",
"This was also the site of the first major land battle in Puerto Rico during the war between Spanish/Puerto Rican and American armed forces. On July 26, 1898, Spanish forces and Puerto Rican volunteers, led by Captain Salvador Meca and Lieutenant Colonel Francisco Puig, fought against American forces led by Brigadier General George A. Garretson. The Spanish forces engaged the 6th Massachusetts in a firefight at the Hacienda Desideria, owned by Antonio Mariani, in what became known as the Battle of Yauco of the Puerto Rico Campaign. The casualties of Puig's forces were two officers and three soldiers wounded and two soldiers dead. The Spanish forces were ordered to retreat.\n"
] |
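As a quick sanity check on the shares in the answer above, here is the arithmetic spelled out. Note that the total army strength is not stated in the answer; it is back-derived here from the 57 percent figure, so treat it as an estimate:

```python
# Figures quoted in the answer above.
cuba = 278_457        # Spanish regulars in Cuba
puerto_rico = 10_005  # regulars in Puerto Rico
philippines = 51_331  # regulars in the Philippines

# "Nearly 57 percent of the Army" implies a total strength of roughly:
implied_total = cuba / 0.57
print(f"implied army total: {implied_total:,.0f}")  # ~488,521

# Puerto Rico and the Philippines together, as a share of that total:
share = (puerto_rico + philippines) / implied_total
print(f"PR + Philippines share: {share:.1%}")  # ~12.6%, the answer's "another 12 percent"
```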
if the deepest depth drilled by man is about 8 miles, and the crust is nearly 20 miles deep, how were scientists able to discover that there is an upper and lower mantle and an inner and outer core? | The same way you are able to tell what's in the box your grandmother sent you at Christmas. When you shake it, a sweater sounds different from a PS4 controller. Obviously scientists can't shake the earth, but the earth shakes itself sometimes, and scientists in different places are always listening (or rather their seismographs are listening). By comparing what different locations record, they can make good guesses about what's inside, just like you may be able to do (a toy version of the arrival-time arithmetic follows the source list below).
Edit: Thanks for the gold! | [
"This record remains the longest penetration in a deep cave. The new record for the longest penetration at any depth is now held by Jon Bernot and Charlie Roberson of Gainesville, Florida, with a distance of .\n",
"The trench reaches one of the greatest depths in the ocean, third only to the Mariana trench and the Tonga trench. Its deepest point is known as Galathea Depth and reaches 10,540 meters (34,580 ft) or (5,760 fathoms).\n",
"Drilling holes does not provide direct evidence against the hypothesis. The deepest hole drilled to date is the Kola Superdeep Borehole, with a true vertical drill-depth of more than 7.5 miles (12 kilometers). However, the distance to the center of the Earth is nearly 4,000 miles (6,400 kilometers). Oil wells with longer depths are not vertical wells; the total depths quoted are measured depth (MD) or equivalently, along-hole depth (AHD) as these wells are deviated to horizontal. Their true vertical depth (TVD) is typically less than 2.5 miles (4 kilometers).\n",
"The Hranice Abyss (), the English name adopted by the local tourist authorities, is the deepest flooded pit cave in the world. It is a karst sinkhole located near the town of Hranice (Přerov District). The greatest confirmed depth (as of 27 September 2016) is 473 m (404 m under the water level), which makes it the deepest known underwater cave in the world. Moreover, the expected depth is 800–1200 m.\n",
"BULLET::::- Hranice Abyss, Moravia, Czech Republic, is the deepest underwater cave in the world, the lowest confirmed depth (as of 27 September 2016) is 473 m (404 m under the water level), the expected depth is 700–800 m.\n",
"In June 2012, the Chinese manned submersible \"Jiaolong\" was able to reach 7,020 meters deep in the Mariana Trench, making it the deepest diving manned research submersible. This range surpasses that of the previous record holder, the Japanese-made \"Shinkai\", whose maximum depth is 6,500 meters.\n",
"The investigation of this site was started in 1951 in a sounding trench on the western side of the cave, which was 10 x 10 x 2 meters deep. In 1952, the second sounding trench was excavated horizontally atop the first trench that was 7 x 6 x 5.5 meters deep. Finally a deep sounding trench that was 3.8 X1.6 X 6.5 meters deep was excavated which gave the total excavation depth to be 14 meters deep.\n"
] |
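To make the "listening" concrete, the simplest trick is timing the gap between fast P-waves and slower S-waves at a single station. A minimal sketch, assuming textbook average crustal speeds of roughly 6 km/s for P-waves and 3.5 km/s for S-waves; the function name is ours:

```python
# Distance to an earthquake from the S-minus-P arrival lag at one station.
# Both waves leave the source together; the slower S-waves fall behind with
# distance, so the lag encodes how far away the source is:
#   d / vs - d / vp = lag  =>  d = lag * vp * vs / (vp - vs)
def distance_from_lag(lag_s: float, vp: float = 6.0, vs: float = 3.5) -> float:
    """Distance in km, given the lag in seconds and wave speeds in km/s."""
    return lag_s * vp * vs / (vp - vs)

# Example: S-waves arrive 40 seconds after P-waves.
print(f"{distance_from_lag(40.0):.0f} km")  # ~336 km
```

Comparing such readings across many stations triangulates the source, and systematic gaps in arrivals (the core's "shadow zones") are what revealed the layered mantle and the liquid outer core.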
When did the word "ass" start applying to people's butts instead of just to donkeys? | Don't forget that outside of the US it's spelt and pronounced 'arse,' whilst the type of donkey is still universally called an ass. A lot of Irish accents have a very 'ass'-like pronunciation of 'arse,' and of course Irish immigrants made up a huge number of Americans during the initial population boom. | [
"The English word \"ass\" (meaning donkey, a cognate of its zoological name \"Equus asinus\") may also be used as a term of contempt, referring to a silly or stupid person. In the United States (and, to a lesser extent, Canada), the words \"arse\" and \"ass\" have become synonymous.\n",
"At one time, the synonym \"ass\" was the more common term for the donkey. The first recorded use of \"donkey\" was in either 1784 or 1785. While the word \"ass\" has cognates in most other Indo-European languages, \"donkey\" is an etymologically obscure word for which no credible cognate has been identified. Hypotheses on its derivation include the following:\n",
"BULLET::::- \"dickey\" (donkey; however note that the word 'donkey' appears only to have been in use in English since the late 18th century. The Oxford English Dictionary quotes 'dicky' as one of the alternative slang terms for an ass.)\n",
"From the 18th century, \"donkey\" gradually replaced \"ass\", and \"jenny\" replaced \"she-ass\", which is now considered archaic. The change may have come about through a tendency to avoid pejorative terms in speech, and be comparable to the substitution in North American English of \"rooster\" for \"cock\", or that of \"rabbit\" for \"coney\", which was formerly homophonic with \"cunny\". By the end of the 17th century, changes in pronunciation of both \"ass\" and \"arse\" had caused them to become homophones. Other words used for the ass in English from this time include \"cuddy\" in Scotland, \"neddy\" in southwest England and \"dicky\" in the southeast; \"moke\" is documented in the 19th century, and may be of Welsh or Gypsy origin.\n",
"The words \"donkey\" and \"ass\" (or translations thereof) have come to have derogatory or insulting meaning in several languages, and are generally used to mean someone who is obstinate, stupid or silly, In football, especially in the United Kingdom, a player who is considered unskilful is often dubbed a \"donkey\", and the term has a similar connotation in poker. In the US, the slang terms \"dumbass\" and \"jackass\" are used to refer to someone considered stupid.\n",
"Many cultures have colloquialisms and proverbs that include donkeys or asses. British phrases include \"to talk the hind legs off a donkey\", used to describe someone talking excessively and generally persuasively. Donkeys are the animals featured most often in Greek proverbs, including such statements of fatalistic resignation as \"the donkey lets the rain soak him\". The French philosopher Jean Buridan constructed the paradox called Buridan's ass, in which a donkey, placed exactly midway between water and food, would die of hunger and thirst because he could not find a reason to choose one of the options over the other, and so would never make a decision. Italy has several phrases regarding donkeys, including \"put your money in the ass of a donkey and they'll call him sir\" (meaning, if you're rich, you'll get respect) and \"women, donkeys and goats all have heads\" (meaning, women are as stubborn as donkeys and goats). The United States developed its own expressions, including \"better a donkey that carries me than a horse that throws me\", \"a donkey looks beautiful to a donkey\", and \"a donkey is but a donkey though laden with gold\", among others. From Afghanistan, we find the Pashto proverb, \"Even if a donkey goes to Mecca, he is still a donkey.\" In Ethiopia, there are many Amharic proverbs that demean donkeys, such as, \"The heifer that spends time with a donkey learns to fart\" (Bad company corrupts good morals).\n",
"A radio adaptation of \"Don Quixote\" over the BBC had one episode ending with the announcer explaining where \"I'm afraid we've run out of time, so here we leave Don Quixote, sitting on his ass until tomorrow at the same time.\" In US English, \"ass\" could refer either to the buttocks or to a jackass. However, this would not have been seen as a blooper in the UK in the period when it was transmitted, since the British slang word for buttocks is \"arse\", pronounced quite differently. It is only since it has become permissible for \"ass\" in the sense of \"buttocks\" to be used in US films and on television, and syndicated to the UK, that most Brits have become aware of the \"buttocks\" usage. Indeed, since the King James Bible translation is now rarely used, and since the word \"jackass\" is very rare in the UK, much of British youth is now unaware that \"ass\" can mean \"donkey\". As with the word \"gay\", its usage has completely changed within a few years. The announcer was merely making a joke of the character being frozen in place for 24 hours waiting for us, rather like Elwood in the opening minutes of \"Blues Brothers 2000\", or like toys put back in the cupboard in several children's films.\n"
] |
why is having two heads such a commonly seen mutation? | Most often these are not mutations but conjoined twins. One case is when an egg doesn’t split properly during development; another theory, though heavily disputed, is the fusion of two separate fertilized eggs during development. | [
"An individual heterozygous for three mutations is crossed with a homozygous recessive individual, and the phenotypes of the progeny are scored. The two most common phenotypes that result are the parental gametes; the two least common phenotypes that result come from a double crossover in gamete formation. By comparing the parental and double-crossover phenotypes, the geneticist can determine which gene is located between the others on the chromosome.\n",
"The gene breaks the head down into subdomains; the medial subdomain (contains the ocelli); the mediolaterial ; and the lateral (just above the compound eyes). If \"orthodenticle\" is not expressed, structures from the lateral subdomain will be expressed all the way over the head - meaning that ocelli are not produced, i.e. \"ocelliless\".\n",
"In humans, as in other animals, partial twinning can result in formation of two heads supported by a single torso. Two ways this can happen are dicephalus parapagus, where there are two heads side by side, and craniopagus parasiticus, where the heads are joined directly.\n",
"Chiasma formation is common in meiosis, where two homologous chromosomes break and rejoin, leading to chromosomes that are hybrids of the parental types. It can also occur during mitosis but at a much lower frequency because the chromosomes do not pair in a regular arrangement. Nevertheless, the result will be the same when it does occur—the recombination of genes.\n",
"MODY2 is an autosomal dominant condition. Autosomal dominance refers to a single, abnormal gene on one of the first 22 nonsex chromosomes from either parent which can cause an autosomal disorder. Dominant inheritance means an abnormal gene from one parent is capable of causing disease, even though the matching gene from the other parent is normal. The abnormal gene \"dominates\" the pair of genes. If just one parent has a dominant gene defect, each child has a 50% chance of inheriting the disorder.\n",
"The alleles of nearby SNPs on a single chromosome are correlated. Specifically, if the allele of one SNP for a given individual is known, the alleles of nearby SNPs can often be predicted. This is because each SNP arose in evolutionary history as a single point mutation, and was then passed down on the chromosome surrounded by other, earlier, point mutations. SNPs that are separated by a large distance on the chromosome are typically not very well correlated, because recombination occurs in each generation and mixes the allele sequences of the two chromosomes. A sequence of consecutive alleles on a particular chromosome is known as a haplotype.\n",
"In the majority of cases where monosomy occurs, the X chromosome comes from the mother. This may be due to a nondisjunction in the father. Meiotic errors that lead to the production of X with p arm deletions or abnormal Y chromosomes are also mostly found in the father. Isochromosome X or ring chromosome X on the other hand are formed equally often by both parents. Overall, the functional X chromosome usually comes from the mother.\n"
] |
crime shows always say “they hung up before we could trace the call”. what goes into tracing a call and how long does it actually take? | It's 100% Hollywood bullshit. It might have been true decades ago when phone calls were connected manually, but not since electronic switching arrived in the 1970s: in an electronically switched network, the originating line is known to the switch as part of call setup, so there is no minutes-long "trace" to race against. | [
"The First 48 is an American documentary television series on A&E. Filmed in various cities in the United States, the series offers an insider's look at the real-life world of homicide investigators. While the series often follows the investigations to their end, it usually focuses on their first 48 hours, hence the title. Each episode picks one or more homicides in different cities, covering each alternately, showing how detectives use forensic evidence, witness interviews, and other advanced investigative techniques to identify suspects. While most cases are solved within the first 48 hours, some go on days, weeks, months, or even years after the first 48.\n",
"\"Stand By For Crime\" was unique in its format. The series was seen up to the point of the murder, with Inspector Webb, later Lt. Kidd, looking through the clues. However, before the killer was revealed, viewers were invited to phone in their own guesses as to who the killer was.\n",
"Nearing the end of the show, a \"roundup\" is presented showing the person(s) pictured with their first and last name. Some roundups feature four individuals at a time (usually when they are all missing and have the same surname). An individual is shown for two seconds; more time is allowed depending on how many individuals are in the same slide.\n",
"In this segment, a selection of listener telephone calls left on the show's answering machine are played back, with the hosts and guests commenting on each call after it is played. While the content of the calls played varies, they are generally roughly divided into \"momentous occasions\", wherein the caller relates something interesting which has happened to or around them, or \"moments of shame\", wherein the caller recounts an event in which they acted foolishly or otherwise embarrassed themselves.\n",
"The show is scheduled to air weekdays from 5:30AM to 10:00AM (though they often begin and end several minutes late, sometimes going to 10:15). The host(s) typically begin the program by announcing what is coming up on the show that day. They then take calls from their listeners and gives away prizes to the first caller of each show. They continue taking listener calls throughout the day, in addition to reading some listener e-mails. Sometimes they will introduce a particularly ridiculous, confusing, or embarrassing phone call as \"Stupid Call of the Day.\"\n",
"Three episodes are filmed in a day and each one takes around an hour and a half to film. According to Walsh, \"It runs like clockwork.\" The Final Chase can be stopped and re-started if Walsh stumbles on a question. He told the \"Radio Times\", \"If there is a slight misread, I am stopped immediately – bang – by the lawyers. We have the compliance lawyers in the studio all the time. What you have to do is go back to the start of the question, literally on videotape where my mouth opens – or where it's closed from the previous question – and the question is re-asked. It is stopped to the split second.\"\n",
"The End of Watch Call or Last Radio Call is a ceremony in which, after a police officer's death (usually in the line of duty but sometimes from illness), the officers from his or her unit or department gather around a police radio, over which the police dispatcher issues one call to the officer, followed by a silence, then a second call, followed by silence, then finally announces that the officer has failed to respond because he or she has fallen in the line of duty. An example:\n"
] |
A friend of a friend came into possession of this. Any idea what it is | While these sorts of posts are welcome in this subreddit, it's often not the best place to put them. You may find you have better luck in /r/whatisthisthing, as the sub specializes in identifying unknown objects. | [
"According to one tale, a poor fisherman in Istanbul near Yenikapi was wandering idly, empty-handed, along the shore when he found a shiny stone among the litter, which he turned over and over, not knowing what it was. After carrying it about in his pocket for a few days, he stopped by the jewelers' market, showing it to the first jeweler he encountered. The jeweler took a casual glance at the stone and appeared uninterested, saying \"It's a piece of glass, take it away if you like, or if you like I'll give you three spoons. You brought it all the way here, at least let it be worth your trouble.\" What was the poor fisherman to do with this piece of glass? What's more, the jeweler had felt sorry for him and was giving three spoons. He said okay and took the spoons, leaving in their place an enormous treasure. It is said that for this reason the diamond came to be named \"The Spoonmaker's Diamond\". Later, the diamond was bought by a vizier on behalf of the Sultan (or, by a less likely version, it was the vizier who dealt directly with the fisherman).\n",
"Some of Horton's neighbors saw the object fly over their property before it crashed in Horton's yard. Horton recovered the object and called both the U.S. Air Force and the Atlanta airport to see if they had any interest in it. After describing the object over the telephone, neither organization had any interest in it and they said that Horton could do what ever he liked with it, so he tossed it in the woods behind his house. The object \"was a box-like contraption made of wood sticks and tin or aluminum foil with a weather balloon attached\" (see photo). This fits closely with the description and photographs of the material allegedly recovered five years earlier in the Roswell UFO incident (though several military officers involved later claimed this was a cover story for a real flying saucer crash with large quantities of exotic debris and even alien bodies).\n",
"Some of the donated materials were later stolen; the curator arrested in 2008 and most of the items were recovered. One of them, a basket understood to have been gathered by the Lewis and Clark expedition, was returned to the museum voluntarily in 2011 when it was identified. The total donation included about 7,000 reference books and a variety of other materials Strongheart had gathered during his lifetime and travels.\n",
"Shocked on why MacDonald would give away such a rare artifact to an absolute stranger, he points out its priceless value. However, MacDonald scolds O'Brien for placing a monetary value on the cross and explains she gave it to him as a free gift since he would have need of it - and then she disappears behind an alleyway. Suddenly, O'Brien realizes Meve MacDonnal has been dead for three centuries and is buried in a nearby cemetery. The Cross, buried with her, was given to MacDonald as safekeeping by her uncle, the Bishop Liam O'Brien, who died in 1655. \n",
"In June 2017 Sansweet said that he was a victim of theft and that over 100 items from his collection have been stolen, \"The majority of them vintage U.S. and foreign carded action figures, many of them rare and important pieces.\" Reportedly, several of those pieces have already been \"resold or professionally appraised for a total of more than $200,000.\" According to Sansweet, a man named Carl Edward Cunningham, whom Sansweet refers to as \"a good and trusted friend,\" surrendered to police at the end of March 2017 but is currently out on bail pending additional hearings.\n",
"In the 1930s, construction worker Ernesto Lopez showed his family a mysterious box that he claimed to have found while working with a repair crew on the Cass Street Bridge in downtown Tampa. According to family legend, the wooden box contained a pile of Spanish and Portuguese coins, a severed hand wearing a ring engraved with the name \"Gaspar\", and a \"treasure map\" indicating that Gaspar's treasure was hidden near the Hillsborough River in Tampa.\n",
"In 2009, Mrs Dickson's George Medal was thought to have been stolen from a museum, as it could not be found. After the theft began circulating on the news and social media, it was found the following year in a cupboard in the museum where it had been stored and poorly catalogued.\n"
] |
why do student loans get shifted to different banks/loan services? | Everyone is making money but you.
You take out a loan from Bank A for $100,000. If they kept it, you'd probably end up paying them $150,000 back.
They sell it to Bank B for $120,000. Bank A makes $20,000 right away, and Bank B makes $30,000 in the long run because now you're paying THEM the interest for the loan (a toy breakdown of this arithmetic follows the source list below). | [
"The lent amount, often referred to as a \"student loan,\" may be owed to the school (or bank) if the student has dropped classes and withdrawn from the school. Students who withdraw from an institution, especially with poor grades, often end up disqualifying for further financial aid. For low and no-income students, student loans are the sole factor that enable them to go to school, as loans typically cover tuition, room and board, meal plans, text books, and miscellaneous necessities. During repayment of student loans, renegotiation and bankruptcy are strictly regulated.\n",
"Student loans come in several varieties in the United States, but are basically split into federal loans and private student loans. The federal loans, for which the FAFSA is the application, are subdivided into subsidized (the government pays the interest while the student is studying at least half-time) and unsubsidized. Federal student loans are subsidized at the undergraduate level only. Subsidized loans generally defer payments and interest until some period (usually six months) after the student has graduated. . Some states have their own loan programs, as do some colleges. In almost all cases, these student loans have better conditions sometimes much better than the heavily advertised and expensive private student loans.\n",
"Student financial assistance is available for students in part-time studies. Beginning January 1, 2012, the Government of Canada eliminated interest on student loans while borrowers are in-study. Student loan borrowers begin repaying their student loans six months after they graduate or leave school, although interest begins accumulating right away. Grants may supplement loans to aid students who face particular barriers to accessing post-secondary education, such as students with permanent disabilities or students from low-income families.\n",
"In the United States, there are two types of student loans: federal loans sponsored by the federal government and private student loans, which broadly includes state-affiliated nonprofits and institutional loans provided by schools. The overwhelming majority of student loans are federal loans. Federal loans can be \"subsidized\" or \"unsubsidized.\" Interest does not accrue on subsidized loans while the students are in school. Student loans may be offered as part of a total financial aid package that may also include grants, scholarships, and/or work study opportunities. Whereas interest for most business investments is tax deductible, Student loan interest is generally not deductible. Critics contend that tax disadvantages to investments in education contribute to a shortage of educated labor, inefficiency, and slower economic growth.\n",
"The loan system has been changed and modified significantly since its inception in 1992. Initially it provided bulk payments to students and charged lower than market interest rates from initial drawdown. This led some students to use this money for investment purposes, benefiting them but leading to a widespread perception of student excesses.\n",
"A student loan is a type of loan designed to help students pay for post-secondary education and the associated fees, such as tuition, books and supplies, and living expenses. It may differ from other types of loans in the fact that the interest rate may be substantially lower and the repayment schedule may be deferred while the student is still in school. It also differs in many countries in the strict laws regulating renegotiating and bankruptcy. This article highlights the differences of the student loan system in several major countries.\n",
"Federal student loans are loans directly to the student; the student is responsible for repayment of the loan. These loans typically have low interest rates and do not require a credit check or any other sort of collateral. Student loans provide a wide variety of deferment plans, as well as extended repayment terms, making it easier for students to select payment methods that reflect their financial situation. There are federal loan programs that consider financial need.\n"
] |
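A toy breakdown of the resale arithmetic in the answer above. All figures are the answer's illustrative ones, not real loan terms; real loan sales also price in default risk, servicing costs, and the time value of money, which this ignores:

```python
principal = 100_000           # what you borrowed from Bank A
lifetime_repayment = 150_000  # what you would repay over the loan's life
sale_price = 120_000          # what Bank B pays Bank A for the loan

bank_a_profit = sale_price - principal           # booked immediately on sale
bank_b_profit = lifetime_repayment - sale_price  # earned over the loan term

print(f"Bank A: ${bank_a_profit:,} up front")            # $20,000
print(f"Bank B: ${bank_b_profit:,} over the loan term")  # $30,000
```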
If you were smaller than the length of a light wave, what would you see? | We *are* smaller than the wavelength of a lot of electromagnetic waves (e.g. radio waves) and our eyes simply don't detect them; that is, we see nothing. We can pick them up with other specialized instruments, for example by connecting a length of wire to a properly tuned receiver circuit, which is what an antenna and radio are doing. What we call 'light' is no different from these longer-wavelength EM waves; it just happens to fall in the range of wavelengths to which our eyes are sensitive (the λ = c/f arithmetic is sketched after the source list below).
Note that most radio receivers are smaller than the wavelength of the radio waves themselves, which can be many meters up to kilometers in length. So it is certainly possible for a detector to be smaller than the wavelength of radiation to which it is sensitive. Even in our eyes this is true, because the fundamental detector protein itself, [rhodopsin](_URL_0_), is smaller than the 400-700 nm wavelengths we can see. It's just the structure of the eye needed for gathering more light and forming an image that makes it big. | [
"This implies that one might encounter a wave that is roughly double the significant wave height. However, in rapidly changing conditions, the disparity between the significant wave height and the largest individual waves might be even larger.\n",
"The effect of viewing distance on perceived size can be observed by first obtaining an afterimage, which can be achieved by viewing a bright light for a short time, or staring at a figure for a longer time. It appears to grow in size when projected to a further distance. However, the increase in perceived size is much less than would be predicted by geometry, which casts some doubt on the geometrical interpretation given above. \n",
"The interaction of the waves on a viewing surface alternates between constructive interference and destructive interference causing alternating lines of dark and light. In the example of a Michelson Interferometer, a single fringe represents one wavelength of the source light and is measured from the center of one bright line to the center of the next. The physical width of a fringe is governed by the difference in the angles of incidence of the component beams of light, but regardless of a fringe's physical width, it still represents a single wavelength of light.\n",
"In the extreme case where an object is an infinite distance away, , and , indicating that the object would be imaged to a single point in the focal plane. In fact, the diameter of the projected spot is not actually zero, since diffraction places a lower limit on the size of the point spread function. This is called the diffraction limit.\n",
"If the obstruction dimensions are much smaller than the wavelength of the incident plane wave, the wave is essentially unaffected. For example, low frequency (LF) broadcasts, also known as long waves, at about 200 kHz has a wavelength of 1500 m and is not significantly affected by most average size buildings, which are much smaller.\n",
"In amateur astronomy, limiting magnitude refers to the faintest objects that can be viewed with a telescope. A two-inch telescope, for example, will gather about 16 times more light than a typical eye, and will allow stars to be seen to about 10th magnitude; a ten-inch (25 cm) telescope will gather about 400 times as much light as the typical eye, and will see stars down to roughly 14th magnitude, although these magnitudes are very dependent on the observer and the seeing conditions.\n",
"The fringes only appear in the reflection of the light source, so the optical flat must be viewed from the exact angle of incidence that the light shines upon it. If viewed from a zero degree angle (from directly above), the light must also be at a zero degree angle. As the viewing angle changes, the lighting angle must also change. The light must be positioned so that its reflection can be seen covering the entire surface. Also, the angular size of the light source needs to be many times greater than the eye. For example, if an incandescent light is used, the fringes may only show up in the reflection of the filament. By moving the lamp much closer to the flat, the angular size becomes larger and the filament may appear to cover the entire flat, giving clearer readings. Sometimes, a diffuser may be used, such as the powder coating inside frosted bulbs, to provide a homogenous reflection off the glass. Typically, the measurements will be more accurate when the light source is as close to the flat as possible, but the eye is as far away as possible.\n"
] |
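The wavelengths mentioned in the answer follow from one line of arithmetic, λ = c/f. A minimal sketch with typical textbook band frequencies:

```python
# Wavelength from frequency: lambda = c / f.
C = 3.0e8  # speed of light in m/s

for label, freq_hz in [
    ("AM radio (1 MHz)", 1.0e6),
    ("FM radio (100 MHz)", 1.0e8),
    ("green light (~545 THz)", 5.45e14),
]:
    print(f"{label}: {C / freq_hz:.3g} m")

# AM: 300 m and FM: 3 m, both far larger than the radios that receive them;
# green light: ~5.5e-7 m (550 nm), inside the eye's 400-700 nm window.
```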
why do we sense five basic tastes (sweet/sour/bitter/salty/umami or savoury)? | Sweet - Your basic energy unit is glucose; this taste makes you want to eat things high in sugar.
Salty - Sodium is a vital electrolyte in maintaining physiological balance (water, chemistry, energy production, etc.), so you need foods that contain it too.
Umami - Tripped by the amino acid glutamate, and not present in all people. Believed to help attract you to protein-based meals, making for a balanced diet.
Bitter - Trips when you eat things containing alkaloids such as nicotine. These chemicals are present in a wide variety of poisonous plants, so good detection of them can help you stay alive.
Sour - Trips on acidic foods. Can be both a warning of poisonous food and a pull toward needed foods like vitamin-rich lemons. | [
"Bitter foods are generally found unpleasant, while sour, salty, sweet, and umami tasting foods generally provide a pleasurable sensation. The five specific tastes received by taste receptors are saltiness, sweetness, bitterness, sourness, and \"savoriness\", often known by its Japanese term \"umami\" which translates to ‘deliciousness’. As of the early twentieth century, Western physiologists and psychologists believed there were four basic tastes: sweetness, sourness, saltiness, and bitterness. At that time, savoriness was not identified, but now a large number of authorities recognize it as the fifth taste.\n",
"The sensation of taste includes five established basic tastes: sweetness, sourness, saltiness, bitterness, and umami. Scientific experiments have demonstrated that these five tastes exist and are distinct from one another. Taste buds are able to distinguish between different tastes through detecting interaction with different molecules or ions. Sweet, savory, and bitter tastes are triggered by the binding of molecules to G protein-coupled receptors on the cell membranes of taste buds. Saltiness and sourness are perceived when alkali metal or hydrogen ions enter taste buds, respectively.\n",
"The sensation of taste includes five established basic tastes: sweetness, sourness, saltiness, bitterness, and umami. Scientific experiments have proven that these five tastes exist and are distinct from one another. Taste buds are able to differentiate among different tastes through detecting interaction with different molecules or ions. Sweet, umami, and bitter tastes are triggered by the binding of molecules to G protein-coupled receptors on the cell membranes of taste buds. Saltiness and sourness are perceived when alkali metal or hydrogen ions enter taste buds, respectively.\n",
"Taste helps to identify toxins, maintain nutrition, and regulate appetite, immune responses, and gastrointestinal motility. Five basic tastes are recognized today: salty, sweet, bitter, sour, and umami. Salty and sour taste sensations are both detected through ion channels. Sweet, bitter, and umami tastes, however, are detected by way of G protein-coupled taste receptors.\n",
"The sense of taste is considered to be the most intimate one because we can't taste anything from a distance. It is also believed to be the most distinctly emotional sense. Our taste is also dependent on our saliva and differs on each different person. People who prefer saltier foods are used to a higher concentration of sodium and therefore have a saltier saliva. In fact, 78% of our taste preferences are dependent on one's genes. Taste also has a social aspect attached to it, we rarely seek to enjoy food by ourselves since eating usually facilitates social interaction between people. Business meetings and home dinners are almost all of the time in company of others and companies need to take this into consideration.\n",
"When it comes to taste, most people are aware of the four basics: sweet, sour, salt, and bitter. With recent studies and developments in technology, we have been able to pinpoint at least two new tastes. \"Umami\" (which enhances the original four and has been described as fatty) is the first, and \"kokumi\" is the second. \"Kokumi\" has been said to enhance the other five tastes. It has also been described as something that heightens, magnifies, and lengthens the other tastes. This sensation has also been described as mouthfulness,\n",
"a. It has been known for some time that these categories may not be comprehensive. In Guyton's 1976 edition of \"Textbook of Medical Physiology\", he wrote:On the basis of physiologic studies, there are generally believed to be at least four \"primary\" sensations of taste: \"sour\", \"salty\", \"sweet,\" and \"bitter\". Yet we know that a person can perceive literally hundreds of different tastes. These are all supposed to be combinations of the four primary sensations...However, there might be other less conspicuous classes or subclasses of primary sensations\",\n"
] |
the sexual revolution | Why more sex?
Birth control was more widely available.
The Vietnam War in the '60s and '70s brought back boys, now men, carrying horrible PTSD and drug exposure.
Sex was one way of escaping: "Make love, not war." Many felt their lives could end at any moment, that their number might be up.
Why divorce rates?
Abusive spouses could be left, as women in the workplace became more mainstream.
Birth control did not trap women in a marriage with 10 kids...
Church rules eased around that time, and remarrying in church after a divorce became possible.
A few points. Not comprehensive by any stretch! | [
"The sexual revolution was initiated by those who shared a belief in the detrimental impact of sexual repression, a view that had previously been argued by Wilhelm Reich, D. H. Lawrence, Sigmund Freud, and the Surrealist movement. \n",
"The sexual revolution, also known as a time of sexual liberation, was a social movement that challenged traditional codes of behavior related to sexuality and interpersonal relationships throughout the United States and subsequently, the wider world, from the 1960s to the 1980s. Sexual liberation included increased acceptance of sex outside of traditional heterosexual, monogamous relationships (primarily marriage). The normalization of contraception and the pill, public nudity, pornography, premarital sex, homosexuality, masturbation, alternative forms of sexuality, and the legalization of abortion all followed.\n",
"The sexual revolution (also known as a time of \"sexual liberation\") was a social movement that challenged traditional codes of behavior related to sexuality and interpersonal relationships throughout the Western world from the 1960s to the 1980s. Sexual liberation included increased acceptance of sex outside of traditional heterosexual, monogamous relationships (primarily marriage). Contraception and the pill, public nudity, the normalization of premarital sex, homosexuality and alternative forms of sexuality, and the legalization of abortion all followed.\n",
"When speaking of sexual revolution, historians make a distinction between the first and the second sexual revolution. In the first sexual revolution (1870–1910), Victorian morality lost its universal appeal. However, it did not lead to the rise of a \"permissive society\". Exemplary for this period is the rise and differentiation in forms of regulating sexuality.\n",
"Sex: The Revolution was a four-part 2008 American documentary miniseries that aired on VH1 and The Sundance Channel. It chronicled the rise of American interest in sexuality from the 1950s through the 1990s.\n",
"Anarchist Freud scholars Otto Gross and Wilhelm Reich (who famously coined the phrase \"Sexual Revolution\") developed a sociology of sex in the 1910s to 1930s in which the animal-like competitive reproductive behavior was seen as a legacy of ancestral human evolution reflecting in every social relation, as per the freudian interpretation, and hence the liberation of sexual behavior a mean to social revolution.\n",
"In the United Kingdom, \"Sexual Revolution\" became Gray's first single since her debut, \"Do Something\", to miss the top forty. The single had limited success in the United States as well, missing both the \"Billboard\" Hot 100 and Hot R&B/Hip-Hop Songs charts. It did manage to peak at number four, however, on the Hot Dance Club Play chart.\n"
] |
why can't you eat salmon after it spawns? | I think you can eat it, it's just that salmon that have spawned have not eaten for months and are essentially on their last breath. Their meat becomes mush when cooked traditionally. It is not very appetizing. It also loses much of its oil. | [
"Typically, salmon are anadromous: they hatch in fresh water, migrate to the ocean, then return to fresh water to reproduce. However, populations of several species are restricted to fresh water through their lives. Folklore has it that the fish return to the exact spot where they hatched to spawn. Tracking studies have shown this to be mostly true. A portion of a returning salmon run may stray and spawn in different freshwater systems; the percent of straying depends on the species of salmon. Homing behavior has been shown to depend on olfactory memory. Salmon date back to the Neogene.\n",
"Salmon need other salmon to survive so they can reproduce and pass on their genes in the wild. With some populations endangered, precautions are necessary to prevent overfishing and habitat destruction, including appropriate management of hydroelectric and irrigation projects. If too few fish remain because of fishing and land management practices, salmon have more difficulty reproducing.\n",
"The source of \"L. salmonis\" infections when salmon return from fresh water has always been a mystery. Sea lice die and fall off anadromous fish such as salmonids when they return to fresh water. Atlantic salmon return and travel upstream in the fall to reproduce, while the smolts do not return to salt water until the next spring. Pacific salmon return to the marine nearshore starting in June, and finish as late as December, dependent upon species and run timing, whereas the smolts typically outmigrate starting in April, and ending in late August, dependent upon species and run timing.\n",
"Salmon not killed by other means show greatly accelerated deterioration (phenoptosis, or \"programmed aging\") at the end of their lives. Their bodies rapidly deteriorate right after they spawn as a result of the release of massive amounts of corticosteroids.\n",
"Salmon mostly spend their early life in rivers, and then swim out to sea where they live their adult lives and gain most of their body mass. When they have matured, they return to the rivers to spawn. Usually they return with uncanny precision to the natal river where they were born, and even to the very spawning ground of their birth. It is thought that, when they are in the ocean, they use magnetoception to locate the general position of their natal river, and once close to the river, that they use their sense of smell to home in on the river entrance and even their natal spawning ground.\n",
"After spawning, most passing fish die, and those that remain alive (preferentially dwarf males) participate in spawning the next year, too. Emerging from the nest, the young do not travel to the sea immediately, but remain in spawning areas, in the upper reaches of rivers, and on shallows with weak currents. The young move to pools and rolls of the river core to feed on chironomid, stone fly, and may fly larvae, and on airborne insects. The masu salmon travels to the ocean in its second, or occasionally even third year of life.\n",
"Salmon spend their early life in rivers, and then swim out to sea where they live their adult lives and gain most of their body mass. After several years wandering huge distances in the ocean where they mature, most surviving salmons return to the same natal rivers to spawn. Usually they return with uncanny precision to the river where they were born: most of them swim up the rivers until they reach the very spawning ground that was their original birthplace. \n"
] |
Sources for the Ainu and Emishi in Pre-Modern Japan | It's not the right era (up til 1600), but I checked my copy of [*Sources of Japanese Tradition vol. 1*](_URL_1_) and it has some primary sources that mention the Ainu.
1. "New History of the Tang Dynasty" mentions the ainu arriving at the Chinese court w/ a Japanese envoy in 663 (p.12)
2. "Reform Edicts" from the Taika Reforms in 645 mentions keeping weapons handy in provinces bordering the Emishi (p.78)
3. p. 266 has some information from campaigns against them.
4. The index has a listing for Buddhism and the Ainu on p.212, but for the life of me I don't see them mentioned on that page. It's either an error, or I've gone blind.
My copy of [*Sources of Japanese Tradition vol. 2*](_URL_0_)(1600-2000) is in a box somewhere, so I can't check it for you, but that might be another place to look for translated primary sources from the era. | [
"The evidence that the Emishi were also related to the Ainu comes from historical documents. One of the best sources of information comes from both inside and outside Japan, from contemporary Tang- and Song-dynasty histories as these describe dealings with Japan, and from the \"Shoku Nihongi\". For example, there is a record of the arrival of the Japanese foreign minister in AD 659 in which conversation is recorded with the Tang Emperor. In this conversation we have perhaps the most accurate picture of the Emishi recorded for that time period. This episode is repeated in the \"Shoku Nihongi\" in the following manner:\n",
"The oldest extant Japanese lexica date to the early Heian period. Based on the Chinese Yupian, the \"Tenrei Banshō Meigi\" was compiled around 830 by Kūkai and is the oldest extant character dictionary made in Japan. The \"Hifuryaku\" is a massive Chinese dictionary in 1000 fascicles listing the usage of words and characters in more than 1500 texts of diverse genres. Compiled in 831 by Shigeno Sadanushi and others, it is the oldest extant Japanese proto-encyclopedia. There are two National Treasures of the Ishinpō, the oldest extant medical treatise of Japanese authorship compiled in 984 by Tanba Yasuyori. It is based on a large number of Chinese medical and pharmaceutical texts and contains knowledge about drug prescription, herbal lore, hygiene, acupuncture, moxibustion, alchemy and magic. The two associated treasures consist of the oldest extant (partial) and the oldest extant complete manuscript respectively.\n",
"The c. 712 \"Kojiki\" (古事記 \"Records of Ancient Matters\") is the oldest extant book written in Japan. The \"Birth of the Eight Islands\" section phonetically transcribes \"Yamato\" as what would be in Modern Standard Chinese \"Yèmádēng\" (夜麻登). The \"Kojiki\" records the Shintoist creation myth that the god \"Izanagi\" and the goddess \"Izanami\" gave birth to the \"Ōyashima\" (大八州 \"Eight Great Islands\") of Japan, the last of which was Yamato:Next they gave birth to Great-Yamato-the-Luxuriant-Island-of-the-Dragon-Fly, another name for which is Heavenly-August-Sky-Luxuriant-Dragon-Fly-Lord-Youth. The name of \"Land-of-the-Eight-Great-Islands\" therefore originated in these eight islands having been born first. (tr. Chamberlain 1919:23)\n",
"The was a circa 1489 CE Japanese dictionary of Chinese characters. This early Muromachi period Japanization was based upon the circa 543 CE Chinese \"Yupian\" (玉篇 \"Jade Chapters\"), as available in the 1013 CE \"Daguang yihui Yupian\" (大廣益會玉篇; \"Enlarged and Expanded \"Yupian\"\"). The date and compiler of the \"Wagokuhen\" are uncertain. Since the oldest extant editions of 1489 and 1491 CE are from the Entoku era, that may approximate the time of original compilation. The title was later written 和玉篇 with the graphic variant \"wa\" \"harmony; Japan\" for \"wa\" \"dwarf; Japan\".\n",
"The Shiben or Book of Origins (Pinyin: \"shìběn\"; Chinese;世本; ) was the earliest Chinese encyclopedia which recorded imperial genealogies from the mythical Three Sovereigns and Five Emperors down to the late Spring and Autumn period (771-476 BCE), explanations of the origin of clan names, and records of legendary and historical Chinese inventors. It was written during the 2nd century BC at the time of the Han dynasty. \n",
"The is a Japanese imperial anthology of waka; it was finished in 1265 CE, six years after the Retired Emperor Go-Saga first ordered it in 1259. It was compiled by Fujiwara no Tameie (son of Fujiwara no Teika) with the aid of Fujiwara no Motoie, Fujiwara no Ieyoshi, Fujiwara no Yukiee, and Fujiwara no Mitsutoshi; like most Imperial anthologies, there is a Japanese and a Chinese Preface, but their authorship is obscure and essentially unknown. It consists of twenty volumes containing 1,925 poems.\n",
"Nippon Kodo (日本香堂) is a Japanese incense company who trace their origin back over 400 years to an incense maker known as Koju, who made incense for the Emperor of Japan. The Nippon Kodo Group was established in August 1965, and has acquired several other incense companies worldwide and has offices in New York City, Los Angeles, Paris, Chicago, Hong Kong, Vietnam, and Tokyo. Mainichi-Koh, introduced in 1912, is the company's most popular product.\n"
] |
How exactly does tea block the absorption of iron in your blood cells? | Tannins are an organic compound found in both green and black varieties of tea. The tannins found in tea can interact with iron in the gastrointestinal tract, rendering iron less available for absorption. Drinking tea with a meal that contains iron-rich foods can decrease iron absorption by up to 88 percent, depending on the amount of tannins consumed.
*A tannin is a compound that binds to and precipitates proteins and various other organic compounds including amino acids and alkaloids.
Source: _URL_0_
Also, from the Wikipedia page on tannins: Foods rich in tannins can be used in the treatment of HFE hereditary hemochromatosis, a hereditary disease characterized by excessive absorption of dietary iron, resulting in a pathological increase in total body iron stores. | [
"To reduce bacterial growth, plasma concentrations of iron are lowered in a variety of systemic inflammatory states due to increased production of hepcidin which is mainly released by the liver in response to increased production of pro-inflammatory cytokines such as Interleukin-6. This functional iron deficiency will resolve once the source of inflammation is rectified; however, if not resolved, it can progress to Anaemia of Chronic Inflammation. The underlying inflammation can be caused by fever, inflammatory bowel disease, infections, Chronic Heart Failure (CHF), carcinomas, or following surgery.\n",
"In addition to effects of iron sequestration, inflammatory cytokines promote the production of white blood cells. Bone marrow produces both white blood cells and red blood cells from the same precursor stem cells. Therefore, the upregulation of white blood cells causes fewer stem cells to differentiate into red blood cells. This effect may be an important additional cause for the decreased erythropoiesis and red blood cell production seen in anemia of inflammation, even when erythropoietin levels are normal, and even aside from the effects of hepcidin. Nonetheless, there are other mechanisms that also contribute to the lowering of hemoglobin levels during inflammation: (i) Inflammatory cytokines suppress the proliferation of erythroid precursors in the bone marrow.; (ii) inflammatory cytokines inhibit the release of erythropoietin (EPO) from the kidney; and (iii) the survival of circulating red cells is shortened.\n",
"Once iron sucrose has been administered, it is transferred to ferritin, the normal iron storage protein. Then, it is broken down in the liver, spleen, and bone marrow. The iron is then either stored for later use in the body or taken up by plasma. The plasma transfers the iron to hemoglobin, where it can begin increasing red blood cell production.\n",
"Tetraethylammonium (TEA) is a compound that, like a number of neurotoxins, was first identified through its damaging effects to the nervous system and shown to have the capacity of inhibiting the function of motor nerves and thus the contraction of the musculature in a manner similar to that of curare. Additionally, through chronic TEA administration, muscular atrophy would be induced. It was later determined that TEA functions in-vivo primarily through its ability to inhibit both the potassium channels responsible for the delayed rectifier seen in an action potential and some population of calcium-dependent potassium channels. It is this capability to inhibit potassium flux in neurons that has made TEA one of the most important tools in neuroscience. It has been hypothesized that the ability for TEA to inhibit potassium channels is derived from its similar space-filling structure to potassium ions. What makes TEA very useful for neuroscientists is its specific ability to eliminate potassium channel activity, thereby allowing the study of neuron response contributions of other ion channels such as voltage gated sodium channels. In addition to its many uses in neuroscience research, TEA has been shown to perform as an effective treatment of Parkinson's disease through its ability to limit the progression of the disease.\n",
"In blood plasma, zinc is bound to and transported by albumin (60%, low-affinity) and transferrin (10%). Because transferrin also transports iron, excessive iron reduces zinc absorption, and vice versa. A similar antagonism exists with copper. The concentration of zinc in blood plasma stays relatively constant regardless of zinc intake. Cells in the salivary gland, prostate, immune system, and intestine use zinc signaling to communicate with other cells.\n",
"Tea contains oxalate, overconsumption of which can cause kidney stones, as well as binding with free calcium in the body. The bioavailability of oxalate from tea is low, thus a possible negative effect requires a large intake of tea. Massive black tea consumption has been linked to kidney failure due to its high oxalate content (acute oxalate nephropathy).\n",
"Of the body's total iron content, about is devoted to cellular proteins that use iron for important cellular processes like storing oxygen (myoglobin) or performing energy-producing redox reactions (cytochromes). A relatively small amount (3–4 mg) circulates through the plasma, bound to transferrin. Because of its toxicity, free soluble iron is kept in low concentration in the body.\n"
] |
Was the Speed of Sound ever considered a theoretical speed limit? | The 'sound barrier' was never considered a theoretical speed limit during the era when the term was in use. The tips of airplane propellers had been brushing up against it for years, bullets had been breaking it for far longer, and the V-2 rocket broke it on every flight.
The term referred to the many disparate problems that pop up when you pilot an aircraft designed for subsonic speeds (M << 1) at transonic speeds (M ≈ 1). Drag increases sharply; controls can become ineffective or even reverse; shock waves can create aerodynamic loads that break the airframe apart. It was a 'barrier' to pilots because trying to go past it often killed you. Understanding and solving all of these issues, and packaging the solutions into a single plane that could be piloted all the way from M = 0 to M > 1, was a daunting challenge, but one that was met in 1947.
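Not part of the original answer, but here is a minimal Python sketch to put rough numbers on those Mach values. It assumes the textbook ideal-gas relation a = sqrt(γRT), with γ = 1.4 and R ≈ 287 J/(kg·K) for dry air; the γ factor is exactly what Newton's isothermal estimate (see the sources below) was missing. The V-2 and airliner speeds are illustrative round numbers, and real Mach numbers shift with altitude because air temperature does.

```python
import math

# Speed of sound in an ideal gas: a = sqrt(gamma * R * T).
# (Newton's isothermal estimate omits gamma and comes out ~15% low.)
GAMMA = 1.4      # ratio of specific heats for dry air
R_AIR = 287.05   # specific gas constant for dry air, J/(kg*K)

def speed_of_sound(temp_c: float) -> float:
    """Adiabatic speed of sound in dry air at the given temperature (m/s)."""
    return math.sqrt(GAMMA * R_AIR * (temp_c + 273.15))

def mach(speed_ms: float, temp_c: float = 20.0) -> float:
    """Mach number for a vehicle at speed_ms through air at temp_c."""
    return speed_ms / speed_of_sound(temp_c)

print(f"a at 20 C: {speed_of_sound(20):.0f} m/s")   # ~343 m/s
print(f"V-2 at ~1600 m/s: Mach {mach(1600):.1f}")   # far past M = 1
print(f"Airliner at 250 m/s: Mach {mach(250):.2f}") # transonic territory
```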
Breaking the sound barrier was kind of like how nuclear fusion is today: the science all says it's possible, but engineering around the practical problems involved is extremely difficult. | [
"The limitations of the concept of speed of sound due to extreme attenuation are also of concern. The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes. The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the soundwave is considerably longer than the mean free path of molecules in a gas.\n",
"The speed of sound depends on the medium the waves pass through, and is a fundamental property of the material. The first significant effort towards measurement of the speed of sound was made by Isaac Newton. He believed the speed of sound in a particular substance was equal to the square root of the pressure acting on it divided by its density:\n",
"Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of \"γ\" but was otherwise correct.\n",
"The first run of the car at Bonneville Salt Flats showed that the propulsion system was unable to develop enough thrust to sustain a speed high enough to establish a new official World Land Speed Record. The team decided then that their goal would be to exceed the speed of sound on land, if only briefly, although no official authority would recognize this achievement as a record. The speed of sound is a function of the air temperature and pressure. In other words, the sound barrier is not an absolute speed value, but dependent on air conditions. The speed of sound during Barrett's speed run was .\n",
"The speed of sound was first accurately calculated by the Reverend William Derham, Rector of Upminster, thus improving on Newton's estimates. Derham used a telescope from the tower of the church of St Laurence, Upminster to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half-second pendulum. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. The distance was known by triangulation, and thus the speed that the sound had travelled could be calculated.\n",
"The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. At , the speed of sound in air is about , or a kilometre in or a mile in . It depends strongly on temperature, but also varies by several metres per second, depending on which gases exist in the medium through which a soundwave is propagating.\n",
"In dry air at 20 °C (68 °F), the speed of sound is 343 metres per second (about 767 mph, 1234 km/h or 1,125 ft/s). The term came into use during World War II when pilots of high-speed fighter aircraft experienced the effects of compressibility, a number of adverse aerodynamic effects that deterred further acceleration, seemingly impeding flight at speeds close to the speed of sound. These difficulties represented a barrier to flying at faster speeds. In 1947 it was demonstrated that safe flight at the speed of sound was achievable in purpose-designed aircraft thereby breaking the barrier. By the 1950s new designs of fighter aircraft routinely reached the speed of sound, and faster.\n"
] |
cloning | Traditional reproduction involves a sperm and an egg, each carrying half of a full set of chromosomes. When the sperm enters the egg it deposits its half, and with the two combined the newly formed zygote has a full set. It begins to develop as a new individual with the exact DNA of neither its mother nor its father, but a mixture of both.
In cloning, you remove the chromosomes of the egg and insert a complete set. That set can come from the mother, the father, or any other member of the species. The resulting individual will be an exact genetic duplicate of whatever was the source of its chromosomes. This is a clone.
| [
"Cloning is a recurring theme in science fiction films like \"Jurassic Park\" (1993), \"Alien Resurrection\" (1997), \"The 6th Day\" (2000), \"Resident Evil\" (2002), \"\" (2002) and \"The Island\" (2005). The process of cloning is represented variously in fiction. Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series \"Doctor Who\", the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples (\"The Invisible Enemy\", 1977) and then—in an apparent homage to the 1966 film \"Fantastic Voyage\"—shrunk to microscopic size in order to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Films such as \"The Matrix\" and \"Star Wars: Episode II – Attack of the Clones\" have featured human foetuses being cultured on an industrial scale in enormous tanks. \n",
"Cloning is the production of an offspring which represents the identical genes as its parent. Reproductive cloning begins with the removal of the nucleus from an egg, which holds the genetic material. In order to clone an organ, a stem cell is to be produced and then utilized to clone that specific organ. A common misconception of cloning is that it produces an exact copy of the parent being cloned. Cloning copies the DNA/genes of the parent and then creates a genetic duplicate. The clone will not be a similar copy as he or she will grow up in different surroundings from the parent and may encounter different opportunities and experiences. Although mostly positive, cloning also faces some setbacks in terms of ethics and human health. Though cell division and DNA replication is a vital part of survival, there are many steps involved and mutations can occur with permanent change in an organism's and their offspring's DNA. Some mutations can be good as they result in random evolution periods in which may be good for the species, but most mutations are bad as they can change the genotypes of offspring, which can result in changes that harm the species.\n",
"Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissues. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass legislation regarding human cloning and its legality. As of right now, scientists have no intention of trying to clone people and they believe their results should spark a wider discussion about the laws and regulations the world needs to regulate cloning.\n",
"Human cloning is the creation of a genetically identical copy (or clone) of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissue. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass laws regarding human cloning and its legality.\n",
"In computer programming, particularly object-oriented programming, \"cloning\" refers to object copying by a method or copy factory function, often called codice_1 or codice_2, as opposed to by a copy constructor. Cloning is polymorphic, in that the type of the object being cloned need not be specified, in contrast to using a copy constructor, which requires specifying the type (in the constructor call).\n",
"The process of cloning is represented variously in fiction. Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series \"Doctor Who\", the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples (\"The Invisible Enemy\", 1977) and then — in an apparent homage to the 1966 film \"Fantastic Voyage\" — shrunk to microscopic size in order to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Science fiction films such as \"The Matrix\" and \"Star Wars: Episode II – Attack of the Clones\" have featured scenes of human foetuses being cultured on an industrial scale in mechanical tanks.\n",
"Cloning is the process of producing genetically identical individuals of an organism either naturally or artificially. In nature, many organisms produce clones through asexual reproduction. Cloning in biotechnology refers to the process of creating clones of organisms or copies of cells or DNA fragments (molecular cloning). Beyond biology, the term refers to the production of multiple copies of digital media or software.\n"
] |
What could be the consequences of extreme harvesting of tidal energy? | _URL_1_
_URL_0_
tl;dr:
Currently, water hitting already-existing natural barriers causes a slowing of the Earth's rotation that lengthens the day by about 2.3 milliseconds per century.
That's because of friction of the ocean against natural barriers and the ocean floor... maybe some other effects too, it's a complex topic. The power dissipated this way is roughly 0.1 TW.
The currently planned tidal power projects total about 115 GW, roughly the same amount lost to 'natural causes'. This is very small compared to the world's entire energy consumption; that is because sites with a large differential between high and low tides occur only in limited, specific configurations of underwater terrain around the globe, so about 115 GW is all we can build and still expect to make our money back at this point in time.
If we were to build all the currently planned easy/practical projects, we would roughly double the rate of slowing: a day would end up about 4.6 milliseconds longer per century.
Now let's get ridiculous and build a wall all the way around the Earth. Every day, the average height of the tide pours from one hemisphere to the other. Ignoring a lot of real things we'd have to worry about, like generation efficiency and other losses, we might generate about 2 TW.
So with 2 TW on top of the natural barriers (although those may cause less friction once we've built a wall around the whole world), we're now slowing the Earth down by something like 45-50 milliseconds per century. Not something to be concerned about.
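To make the arithmetic above reproducible, here is a minimal sketch. It takes the answer's own rough figures on faith: ~0.1 TW of natural dissipation corresponding to ~2.3 ms of day-lengthening per century, and it assumes the rate scales linearly with total dissipated power.

```python
# Back-of-envelope: how much extra tidal dissipation slows Earth's spin.
# Assumes day-lengthening scales linearly with dissipated power and takes
# the answer's rough figures (0.1 TW <-> 2.3 ms/century) at face value.
NATURAL_TW = 0.1      # tidal energy lost to natural barriers (TW)
MS_PER_CENTURY = 2.3  # day lengthening attributed to that loss

def day_lengthening(extra_tw: float) -> float:
    """ms of extra day length per century for a given added dissipation."""
    return MS_PER_CENTURY * (NATURAL_TW + extra_tw) / NATURAL_TW

print(f"planned projects (+0.115 TW): {day_lengthening(0.115):.1f} ms/century")
print(f"wall around the Earth (+2 TW): {day_lengthening(2.0):.0f} ms/century")
```

With those assumptions, the planned 115 GW lands at roughly double the natural rate (~4.9 ms/century) and the 2 TW wall at roughly 48 ms/century, consistent with the ballpark figures above.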
| [
"Another physical limitation is the energy available in the tidal fluctuations of the oceans, which is about 0.6 EJ (exajoule). Note this is only a tiny fraction of the total rotational energy of the Earth. Without forcing, this energy would be dissipated (at a dissipation rate of 3.7 TW) in about four semi-diurnal tide periods. So, dissipation plays a significant role in the tidal dynamics of the oceans. Therefore, this limits the available tidal energy to around 0.8 TW (20% of the dissipation rate) in order not to disturb the tidal dynamics too much. \n",
"Tidal Energy has an expensive initial cost which may be one of the reasons tidal energy is not a popular source of renewable energy. It is important to realize that the methods for generating electricity from tidal energy is a relatively new technology. It is projected that tidal power will be commercially profitable within 2020 with better technology and larger scales. Tidal Energy is however still very early in the research process and the ability to reduce the price of tidal energy can be an option. The cost effectiveness depends on each site tidal generators are being placed. To figure out the cost effectiveness they use the Gilbert ratio, which is the length of the barrage in metres to the annual energy production in kilowatt hours (1 kilowatt hour = 1 KWH = 1000 watts used for 1 hour).\n",
"Tidal energy can be extracted by two means: inserting a water turbine into a tidal current, or building ponds that release/admit water through a turbine. In the first case, the energy amount is entirely determined by the timing and tidal current magnitude. However, the best currents may be unavailable because the turbines would obstruct ships. In the second, the impoundment dams are expensive to construct, natural water cycles are completely disrupted, ship navigation is disrupted. However, with multiple ponds, power can be generated at chosen times. So far, there are few installed systems for tidal power generation (most famously, La Rance at Saint Malo, France) which face many difficulties. Aside from environmental issues, simply withstanding corrosion and biological fouling pose engineering challenges.\n",
"Although not yet widely used, tidal energy has potential for future electricity generation. Tides are more predictable than the wind and the sun. Among sources of renewable energy, tidal energy has traditionally suffered from relatively high cost and limited availability of sites with sufficiently high tidal ranges or flow velocities, thus constricting its total availability. However, many recent technological developments and improvements, both in design (e.g. dynamic tidal power, tidal lagoons) and turbine technology (e.g. new axial turbines, cross flow turbines), indicate that the total availability of tidal power may be much higher than previously assumed, and that economic and environmental costs may be brought down to competitive levels.\n",
"A tidal generator converts the energy of tidal flows into electricity. Greater tidal variation and higher tidal current velocities can dramatically increase the potential of a site for tidal electricity generation.\n",
"This type of energy does not produce waste that is harmful to the environment and does not require high maintenance. Unlike the solar and wind energy models, tidal energy is quite stable because the tide of the day can be accurately predicted. The disadvantage of this type of energy is that it requires a large amount of investment in equipment and construction and at the same time changes the natural conditions of a very large area. \n",
"Tidal power, also called tidal energy, is a form of hydropower that converts the energy obtained from tides into useful forms of power, mainly electricity. The potential of tidal wave energy becomes higher in certain regions by local effects such as shelving, funnelling, reflection and resonance.\n"
] |
How much sailing did Native Americans do on the Great Lakes? | Lots of paddling, but no sailing.
There is significant physical evidence that Native Americans traveled to various islands in the Great Lakes. There are hunting artifacts on Pelee Island and pictographs on Kelley's Island in Lake Erie.
There were Ojibway (Chippewa) recorded as living on Michipicoten Island at the time of first contact by Etienne Brule around 1620, and there were prehistoric copper mines on Isle Royale. Both of these islands are in Superior and are near the route of the Edmund Fitzgerald. They are both around a dozen miles off the mainland, which is close enough to be visible, but far enough to make it more than just a lazy afternoon paddle. (And not in a storm, and not in November.)
Later, when the fur trade picked up, the larger loads of furs were transported to Montreal in 30-40 foot canoes, except for the obvious portage at the Niagara River and the rapids near Sault Ste Marie.
Although every paddler learns to adjust course for tailwinds, the first actual sailing ship on the Great Lakes was the Griffon, built for René-Robert Cavelier, Sieur de La Salle, in 1679. | [
"Several Native American tribes inhabited the region since at least 10,000 BC, after the end of the Wisconsin glaciation. The peoples of the Great Lakes traded with the Hopewell culture from around 1000 AD, as copper nuggets have been extracted from the region, and fashioned into ornaments and weapons in the mounds of Southern Ohio. The brigantine \"Le Griffon\", which was commissioned by René-Robert Cavelier, Sieur de La Salle, was built at Cayuga Creek, near the southern end of the Niagara River, and became the first known sailing ship to travel the upper Great Lakes on August 7, 1679.\n",
"The Native American Chumash and Tongva people living in the area built boats unlike any others in North America prior to contact by settlers. Pulling fallen Northern California redwood trunks and pieces of driftwood from the Santa Barbara Channel, their ancestors learned to seal the cracks between the boards of the large wooden plank canoes by using the natural resource of tar. This innovative form of transportation allowed access up and down the coastline and to the Channel Islands. The Portolá expedition, a group of Spanish explorers led by Gaspar de Portolá, made the first written record of the tar pits in 1769. Father Juan Crespí wrote,\n",
"Before European exploration began, the area was used by Native Americans, mostly for its supply of fish. Many of the areas surrounding Oneida Lake have actually been bearers of artifacts that have helped us learn more about Native Americans. The Oneidas and the Onondagas, of the Iroquois Confederacy chose to settle in the Oneida Lake region.\n",
"They decided that building boats would be too difficult and time-consuming, and that navigating the Gulf of Mexico was too risky, so they headed overland to the southwest. Eventually they reached a region in present-day Texas that was dry. The native populations were made up mostly of subsistence hunter-gatherers. The soldiers found no villages to raid for food, and the army was still too large to live off the land. They were forced to backtrack to the more developed agricultural regions along the Mississippi, where they began building seven \"bergantines\", or pinnaces. They melted down all the iron, including horse tackle and slave shackles, to make nails for the boats. They survived through the winter, and the spring floods delayed them another two months. By July they set off on their makeshift boats down the Mississippi for the coast.\n",
"\"Le Griffon\" was the largest fixed-rig sailing vessel on the Great Lakes up to that time, and led the way to modern commercial shipping in that part of the world. Historian J. B. Mansfield reported that this \"excited the deepest emotions of the Indian tribes, then occupying the shores of these inland waters\".\n",
"The Inland Waterway was originally used by Native Americans to avoid the strong waves around Waugoshance Point on Lake Michigan. Consequently, 50 Native American encampments have been discovered along the shores of the Inland Water Route. One such encampment, located in Ponshewaing, has artifacts dating back over 3,000 years.\n",
"In pre-colonial times, Native Americans (notably the Ottawa) began using the rich resources at the present site of Maumee, Ohio, in the Maumee River valley. Throughout much of the eighteenth century, French, British and American forces struggled for control of the lower Maumee River as a major transportation artery linking East and West through Lake Erie.\n"
] |
el salvador switching all of its currency to the us dollar. where did the dollars come from? | They came from banks in the US. The US doesn't officially endorse other countries using its currency, but you can't keep those slips of paper from going on vacation. | [
"San Salvador, as well as the rest of the country, has used the U.S. dollar as its currency of exchange since 2001. Under the Monetary Integration Law, El Salvador adopted the U.S. dollar as a legal tender along the colon. This decision came about as an attempt to encourage foreign investors to launch new companies in El Salvador, saving them the inconvenience of conversion to other currencies. San Salvador's economy is mostly based on the service and retail sector, rather on industry or manufacturing.\n",
"Following the decriminalization of the possession of American Dollars in 1993, the government created special stores in which individuals who possessed the USD could shop for items not available to individuals who only possessed the peso. Moreover, by September 1995, it was possible to deposit hard currency with interest in the Cuban National Bank, by October of that same year, the government had created Foreign Currency Exchange houses (Casas de Cambio, CADECA) with 23 branches throughout the island where Cubans could exchange USD for pesos at a rate similar to that of the Black Market.\n",
"On December 20, 1994, the government announced a new free convertible peso, which was on par with the US dollar and could be used in dollar stores, was to exist alongside the old peso, and its ultimate intent was to substitute both the old peso and the dollar.\n",
"The first U.S. dollar coins were not issued until April 2, 1792, and the peso continued to be officially recognized and used in the United States, along with other foreign coins, until February 21, 1857. In Canada, it remained legal tender, along with other foreign silver coins, until 1854 and continued to circulate beyond that date. The Mexican peso also served as the model for the Straits dollar (now the Singapore/Brunei Dollar), the Hong Kong dollar, the Japanese yen and the Chinese yuan. The term Chinese yuan refers to the round Spanish dollars, Mexican pesos and other 8 reales silver coins which saw use in China during the 19th and 20th century. The Mexican peso was also briefly legal tender in 19th century Siam, when government mints were unable to accommodate a sudden influx of foreign traders, and was exchanged at a rate of three pesos to five Thai baht.\n",
"The U.S. dollar is officially accepted alongside local currencies in El Salvador (since 2001), Costa Rica, Nicaragua, Peru, Honduras, Panama, Bermuda and Barbados, although in practice two of these countries (El Salvador and Panama) are fully dollarized. In 2000, Ecuador officially adopted the U.S. dollar as its sole currency. In a few areas of Canada, the U.S. dollar can be accepted as currency alongside the Canadian Dollar, particularly in areas near border crossings. An example of this effect is Niagara Falls, Ontario, with large numbers of U.S. tourists (businesses still may not accept U.S. currency depending on their policy). The same is also true for the Canadian Dollar in many U.S. cities bordering Canada.\n",
"After its creation in 1923, the Bank of the Republic () was established as Colombia's main bank, and the only one permitted to issue currency. Between 1923 and 1931, denominations of 1, 2, 5, 10, 20, 50, 100 and 500 peso notes were put into circulation, which were able to be exchanged for gold or United States dollars. After the 1930s, these notes ceased to be convertible into gold but remained in circulation until the mid 1970s, when they were replaced by copper and nickel coins. These coins were manufactured until 1991 by the General Treasury of the Nation.\n",
"Around the start of the 20th century, the Philippines pegged the silver peso/dollar to the U.S. dollar at 50 cents. This move was assisted by the passage of the Philippines Coinage Act by the United States Congress on March 3, 1903. Around the same time Mexico and Japan pegged their currencies to the dollar. When Siam adopted a gold exchange standard in 1908, only China and Hong Kong remained on the silver standard.\n"
] |
why people with asperger's syndrome are genius or prodigious? | They aren't, as a rule; that's selection bias. Nobody talks about the ones who become janitors. | [
"Asperger's syndrome (AS) is characterized by considerable problems in social interaction, other notable symptoms include restricted and repetitive patterns of behavior and activities. Patient with AS generally has no setback in language cognitive maturity, or self-help abilities but has clear language skill deficits, problems in social interaction, and odd behavior in interests and activities characteristic of PRS. The lack of cognitive development deficits enables the patient with AS to perform at a more advanced level than people who have other forms of PRS.\n",
"Diagnosis of Asperger syndrome can be tricky as there is a lack of a standardized diagnostic screening for the disorder. According to the US National Institute of Neurological Disorders and Stroke, physicians look for the presence of a primary group of behaviors to make a diagnosis such as abnormal eye contact, aloofness, failure to respond when called by name, failure to use gestures to point or show, lack of interactive play with others, and a lack of interest in peers.\n",
"People with Asperger syndrome can display behavior, interests, and activities that are restricted and repetitive and are sometimes abnormally intense or focused. They may stick to inflexible routines, move in stereotyped and repetitive ways, preoccupy themselves with parts of objects, or engage in compulsive behaviors like lining objects up to form patterns.\n",
"Asperger syndrome can be misdiagnosed as a number of other conditions, leading to medications that are unnecessary or even worsen behavior; the condition may be at the root of treatment-resistant mental illness in adults. Diagnostic confusion burdens individuals and families and may cause them to seek unhelpful therapies. Conditions that must be considered in a differential diagnosis include other pervasive developmental disorders (autism, PDD-NOS, childhood disintegrative disorder, Rett disorder), schizophrenia spectrum disorders (schizophrenia, schizotypal disorder, schizoid personality disorder), attention-deficit hyperactivity disorder, obsessive compulsive disorder, depression, semantic pragmatic disorder, multiple complex developmental disorder and nonverbal learning disorder (NLD).\n",
"The distinction between Asperger's and other ASD forms is to some extent an artifact of how autism was discovered. Although individuals with Asperger's tend to perform better cognitively than those with autism, the extent of the overlap between Asperger's and high-functioning autism is unclear.\n",
"In 2015, Asperger's was estimated to affect 37.2 million people globally. Autism spectrum disorder affects males more often than females and females are typically diagnosed at a later age. The syndrome is named after the Austrian pediatrician Hans Asperger, who, in 1944, described children in his practice who lacked nonverbal communication skills, had limited understanding of others' feelings, and were physically clumsy. The modern conception of Asperger syndrome came into existence in 1981 and went through a period of popularization. It became a standardized diagnosis in the early 1990s. Many questions and controversies remain. There is doubt about whether it is distinct from high-functioning autism (HFA). Partly because of this, the percentage of people affected is not firmly established.\n",
"Asperger syndrome appears to result from developmental factors that affect many or all functional brain systems, as opposed to localized effects. Although the specific underpinnings of AS or factors that distinguish it from other ASDs are unknown, and no clear pathology common to individuals with AS has emerged, it is still possible that AS's mechanism is separate from other ASDs. Neuroanatomical studies and the associations with teratogens strongly suggest that the mechanism includes alteration of brain development soon after conception. Abnormal migration of embryonic cells during fetal development may affect the final structure and connectivity of the brain, resulting in alterations in the neural circuits that control thought and behavior. Several theories of mechanism are available; none are likely to provide a complete explanation.\n"
] |
why chargers (phone, tablet, computer) get so hot while charging. | Chargers must convert Alternating Current (which is easy to transmit efficiently from the generating station, across the electrical grid, and into your home) to Direct Current (which is what electronic devices need to run). Converting AC to DC is not 100% efficient; some of the energy is lost as heat. Properly used and cared for, a charger's heat output is not dangerous. (A small worked example of the numbers follows this entry's sources.) | [
"The safe temperature range when in use is between −20 °C and 45 °C. During charging, the battery temperature typically stays low, around the same as the ambient temperature (the charging reaction absorbs energy), but as the battery nears full charge the temperature will rise to 45–50 °C. Some battery chargers detect this temperature increase to cut off charging and prevent over-charging.\n",
"Sleep-and-charge USB ports can be used to charge electronic devices even when the computer is switched off. Normally, when a computer is powered off the USB ports are powered down, preventing phones and other devices from charging. Sleep-and-charge USB ports remain powered even when the computer is off. On laptops, charging devices from the USB port when it is not being powered from AC drains the laptop battery faster; most laptops have a facility to stop charging if their own battery charge level gets too low. This feature has also been implemented on some laptop docking stations allowing device charging even when no laptop is present.\n",
"Solar chargers used to charge a phone directly, rather than by using an internal battery, can damage a phone if the output is not well-controlled, for example by supplying excessive voltage in bright sunlight.In less bright light, although there is electrical output it may be too low to support charging, it will not just charge slower.\n",
"Because batteries are sensitive to temperature changes, the Volt has a thermal management system to monitor and maintain the battery cell temperature for optimum performance and durability. The Volt's battery pack provides reliable operation, when plugged in, at cell temperatures as low as and as high as . The Volt features a battery pack that can be both warmed or cooled. In cold weather, the car electrically heats the battery coolant during charging or operation to provide full power capability. In hot weather, the car can use its air conditioner to cool the battery coolant to prevent over-temperature damage.\n",
"Some devices can use their USB ports to charge built-in batteries, while other devices can detect a dedicated charger and draw more than 500 mA (0.5 A), allowing them to charge more rapidly. OTG devices are allowed to use either option.\n",
"The battery charger can be on-board or external to the vehicle. The process for an on-board charger is best explained as AC power being converted into DC power, resulting in the battery being charged. On-board chargers are limited in capacity by their weight and size, and by the limited capacity of general-purpose AC outlets. Dedicated off-board chargers can be as large and powerful as the user can afford, but require returning to the charger; high-speed chargers may be shared by multiple vehicles.\n",
"Heated clothing designed for use on vehicles such as motorbikes or snowmobiling typically use a 12-volt electric current, the standard voltage on motorsport and powersport batteries. While a single heated garment, such as heated gloves, will not usually adversely affect the charge on the battery, riders have to be careful about attaching several heated garments because the battery may not be able to handle the load. The heated garments are usually attached directly onto the battery of the bike. Some heated garments have cigarette lighter plugs. While the least expensive models can only be turned on or off, more expensive models sometimes provide a heating level control. \n"
] |
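Following up on the charger entry above, as promised: a tiny worked example of how much heat a converter sheds. The 20 W / 85% and 5 W / 75% figures are hypothetical, chosen only because they are in the typical ballpark for phone chargers.

```python
# Heat dissipated by an AC-to-DC charger: whatever power isn't delivered
# to the device is lost, almost entirely as heat. Numbers are illustrative.
def charger_heat_w(output_w: float, efficiency: float) -> float:
    """Watts of waste heat for a charger delivering output_w at the
    given conversion efficiency (0 < efficiency <= 1)."""
    input_w = output_w / efficiency  # power drawn from the wall
    return input_w - output_w        # the difference becomes heat

# A hypothetical 20 W fast charger at 85% efficiency:
print(f"{charger_heat_w(20, 0.85):.1f} W of heat")  # ~3.5 W
# A hypothetical older 5 W charger at 75% efficiency:
print(f"{charger_heat_w(5, 0.75):.1f} W of heat")   # ~1.7 W
```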
In pop culture, there's a lot of resistance to discussing movie/story spoilers without having an appropriate warning. Is this new behavior, or were people equally wary of spoilers for that brand new Shakespeare production? | The concept of a "plot twist" which can be "spoiled" is fairly recent in the history of drama/literature. In Ancient Greece, for instance, everyone knew all of the legends and their plots forwards and backwards - if you found someone who didn't know that Klytemnestra killed Agamemnon, you'd think them ignorant and remind them of the story.
Or take Shakespeare's plays - Iago and Richard III explicitly detail their villainous plans to the audience; nothing is concealed like the identity of the murderer in an Agatha Christie. In Elizabethan times, a "comedy" meant a play with a happy ending, just as a "tragedy" meant one with a sad one, so even before the audience sat down in the Globe they'd know that *Romeo and Juliet* wasn't going to end well for the lovers. Shakespeare even "spoils" the ending in the Prologue: "A pair of star-crossed lovers take their life."
Or take *Robinson Crusoe*, considered the first novel in English. Its full title is *The life and strange surprising adventures of Robinson Crusoe, of York, mariner : who lived eight and twenty years all alone in an uninhabited island on the coast of America, near the mouth of the great River of Oronooque, having been cast on shore by shipwreck, wherein all the men perished but himself, with an account how he was at last as strangely delivered by pirates, also the further adventures, written by himself*. So no one was worrying about giving away the ending, "he gets rescued by pirates" - it's right there in the title!
Literature developed, of course, and by the time of novels like *Emma* or *Tom Jones* we see dramatic plot twists, and in *Barchester Towers* (1857), we even have the concept of a "spoiler":
> And then how grievous a thing it is to have the pleasure of your novel destroyed by the ill-considered triumph of a previous reader. "Oh, you needn't be alarmed for Augusta; of course she accepts Gustavus in the end." "How very ill-natured you are, Susan," says Kitty with tears in her eyes: "I don't care a bit about it now." | [
"Some producers actively seed bogus information in order to misdirect fans. The director of the film \"Terminator Salvation\" orchestrated a \"disinformation campaign\" where false spoilers were distributed about the film, to mask any true rumors about its plot.\n",
"At the end of the 1990s, some small companies began selling copies of movies, without the violent, indecent or foul language parts, to appeal to the family audience. By 2003, Hollywood reacted against these unauthorized modifications, as it considered them to be a destruction of the filmmakers work, and a violation of the controls an author has over his or her works. Famous directors and producers, such as Steven Spielberg, have publicly criticized this practice in magazines.\n",
"In a negative review, Roger Ebert of the \"Chicago Sun-Times\" asserted that \"Generations\" was \"undone by its narcissism\" due to the film's overemphasis on franchise in-jokes and the overuse of \"polysyllabic pseudoscientific gobbledygook\" uttered by its characters. Ebert also lamented the film's unimaginative script and complained \"the starship can go boldly where no one has gone before, but the screenwriters can only do vice versa.\"\n",
"Most of the film's criticism consisted of not having many actual jokes and instead having an over-reliance on pop culture references. Several recurring gags were criticized for being overused, such as throwing various celebrities down the Pit of Death or the ambiguous sexuality of the Spartans.\n",
"David Denby of \"The New Yorker\" judged the film \"a travesty\", adding: \"The dopiness of it, however, may be an indication not so much of cinematic ineptitude as of the changes in a movie culture that was once devoted to adults and is now rather haplessly and redundantly devoted to kids.\"\n",
"That marketing tactic can backfire, and drew the vocal disgust of influential critics such as Roger Ebert, who was prone to derisively condemn such moves, with gestures such as \"The Wagging Finger of Shame\", on \"At the Movies\". Furthermore, the very nature of withholding reviews can draw early conclusions from the public that the film is of poor quality because of that marketing tactic.\n",
"In the episode, the boys find out that their favorite movies are being enhanced, re-released and ruined in the process. In response, they form a club to \"Save Films from their Directors.\" Their goal is to stop certain famous authors from wrecking any more of their original masterpieces. They also cater to a group who demand a trailer-trash toddler murderer be freed.\n"
] |
Did the city-states of Greece, like Sparta or Athens, have a concept of "Just War," did they fight with certain rules? | I can give you some answers on this, at least according to Herodotus. The Greeks generally had some rules of war, but they were also great "innovators" when it came to waging war, so sometimes these rules went out the window. A big rule, though, was not destroying temples: anyone who destroyed the temple of a god would be cursed by the gods. The Athenians are a great example of this, at least according to Herodotus. When they attacked Sardis, they destroyed the temples of the Persians (and the city itself). This apparently offended Zeus, who sent first Darius against them, and then Xerxes (who was occasionally described as being Zeus, at least by the Delphi Oracle), who burned Athens and the acropolis, gaining revenge for Sardis.
The Greeks were sometimes known to sacrifice humans, often slaves or criminals, to certain gods - the Titan Cronus would have criminals sacrificed outside city gates, I've heard. It wasn't a common or well-regarded practice, but it did occasionally happen. | [
"In 431 BC war broke out between Athens and Sparta. The war was a struggle not merely between two city-states but rather between two coalitions, or leagues of city-states: the Delian League, led by Athens, and the Peloponnesian League, led by Sparta.\n",
"Lack of political unity within Greece resulted in frequent conflict between Greek states. The most devastating intra-Greek war was the Peloponnesian War (431–404 BC), won by Sparta and marking the demise of the Athenian Empire as the leading power in ancient Greece. Both Athens and Sparta were later overshadowed by Thebes and eventually Macedon, with the latter uniting most of the city-states of the Greek hinterland in the League of Corinth (also known as the \"Hellenic League\" or \"Greek League\") under the control of Phillip II. Despite this development, the Greek world remained largely fragmented and would not be united under a single power until the Roman years. Sparta did not join the League and actively fought against it, raising an army led by Agis III to secure the city-states of Crete for Persia.\n",
"The city-states within Greece formed themselves into two leagues; the Achaean League (including Thebes, Corinth and Argos) and the Aetolian League (including Sparta and Athens). For much of the period until the Roman conquest, these leagues were usually at war with each other, and/or allied to different sides in the conflicts between the Diadochi (the successor states to Alexander's empire).\n",
"The Corinthian War revealed a significant dynamic that was occurring in Greece. While Athens and Sparta fought each other to exhaustion, Thebes was rising to a position of dominance among the various Greek city-states.\n",
"In spite of their decreased political power and autonomy, the Greek city state or polis continued to be the basic form of political and social organization in Greece. Classical city states such as Athens and Ephesus grew and even thrived in this period. While warfare between Greek cities continued, the cities responded to the threat of the post Alexandrian Hellenistic states by banding together into alliances or becoming allies of a strong Hellenistic state which could come to its defense therefore making it \"asylos\" or inviolate to attack by other cities.\n",
"During the period 479–461, the mainland Greek states were at least outwardly at peace with each other, even if divided into pro-Spartan and pro-Athenian factions. The Hellenic alliance still existed in name, and since Athens and Sparta were still allied, Greece achieved a modicum of stability. However, over this period, Sparta became increasingly suspicious and fearful of the growing power of Athens. It was this fear, according to Thucydides, which made the second, larger (and more famous) Peloponnesian War inevitable.\n",
"The emergence of city-states (\"poleis\") in ancient Greece marks the beginning of classical antiquity. The two most important Greek cities, the Ionian-democratic Athens and the Dorian-aristocratic Sparta, led the successful defense of Greece against the invading Persians from the east, but then clashed against each other for supremacy in the Peloponnesian War. The Kingdom of Macedon took advantage of the following instability and established a single rule over Greece. Desire to form a universal monarchy brought Alexander the Great to annex the entire Persian Empire and begin a hellenization of the Macedonian possessions. At his death in 323 BC, his reign was divided between his successors and several hellenistic kingdoms were formed.\n"
] |
can a body get an infection from a single cell of bacteria or do they need to be in quantity to start an infection? | Yes and yes.
Technically, a single cell of bacteria or a single virus can infect you.
But they are far more likely to make you sick if the initial dose is larger. (A simple dose-response sketch follows this entry's sources.) | [
"Bacteria can often be killed by antibiotics, which are usually designed to destroy the cell wall. This expels the pathogen's DNA, making it incapable of producing proteins and causing the bacteria to die. A class of bacteria without cell walls is mycoplasma (a cause of lung infections). A class of bacteria which must live within other cells (obligate intracellular parasitic) is chlamydia (genus), the world leader in causing sexually transmitted infection (STI).\n",
"Intracellular bacteria need to enter host cells (cells of the infected organism) in order to replicate and propagate infection. Many species of \"Shigella\" (causes bacillary dysentery), \"Salmonella\" (typhoid fever), \"Mycobacterium\" (leprosy and tuberculosis) and \"Listeria\" (listeriosis), to name but a few, are intracellular.\n",
"BULLET::::- For any bacterium to enter a host's cell, the cell must display receptors to which bacteria can adhere and be able to enter the cell. Some strains of \"E. coli\" are able to internalize themselves into a host's cell even without the presence of specific receptors as they bring their own receptor to which they then attach and enter the cell.\n",
"Only a minority of bacteria species cause disease in humans; and many species colonize in the human body to create an ecosystem known as bacterial flora. Bacterial flora is endogenous bacteria, which is defined as bacteria that naturally reside in a closed system. Disease can occur when microbes included in normal bacteria flora enter a sterile area of the body such as the brain or muscle. This is considered an endogenous infection. A prime example of this is when the residential bacterium E. coli of the GI tract enters the urinary tract. This causes a urinary tract infection. Infections caused by exogenous bacteria occurs when microbes that are noncommensal enter a host. These microbes can enter a host via inhalation of aerosolized bacteria, ingestion of contaminated or ill-prepared foods, sexual activity, or the direct contact of a wound with the bacteria.\n",
"In order for pathogenic bacteria to invade a cell, communication with the host cell is required. The first step for invading bacteria is usually adhesion to host cells. Strong anchoring, a characteristic that determines virulence, prevents the bacteria from being washed away before infection occurs. Bacterial cells can bind to many host cell surface structures such as glycolipids and glycoproteins which serve as attachment receptors. Once attached, the bacteria begin to interact with the host to disrupt its normal functioning and disrupt or rearrange its cytoskeleton. Proteins on the bacteria surface can interact with protein receptors on the host thereby affecting signal transduction within the cell. Alterations to signaling are favorable to bacteria because these alterations provide conditions under which the pathogen can invade. Many pathogens have Type III secretion systems which can directly inject protein toxins into the host cells. These toxins ultimately lead to rearrangement of the cytoskeleton and entry of the bacteria.\n",
"An infectious organism can escape the confines of the immediate tissue via the circulatory system or lymphatic system, where it may spread to other parts of the body. If an organism is not contained by the actions of acute inflammation it may gain access to the lymphatic system via nearby lymph vessels. An infection of the lymph vessels is known as lymphangitis, and infection of a lymph node is known as lymphadenitis. When lymph nodes cannot destroy all pathogens, the infection spreads further. A pathogen can gain access to the bloodstream through lymphatic drainage into the circulatory system.\n",
"Infection begins when an organism successfully enters the body, grows and multiplies. This is referred to as colonization. Most humans are not easily infected. Those who are weak, sick, malnourished, have cancer or are diabetic have increased susceptibility to chronic or persistent infections. Individuals who have a suppressed immune system are particularly susceptible to opportunistic infections. Entrance to the host at host-pathogen interface, generally occurs through the mucosa in orifices like the oral cavity, nose, eyes, genitalia, anus, or the microbe can enter through open wounds. While a few organisms can grow at the initial site of entry, many migrate and cause systemic infection in different organs. Some pathogens grow within the host cells (intracellular) whereas others grow freely in bodily fluids.\n"
] |
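As a follow-up to the infection entry above (the sketch promised there): one common way to formalize "a single cell can infect you, but a bigger dose is more likely to" is an independent-action dose-response model, in which each organism independently has some small chance of establishing an infection. The per-organism probability below is invented for illustration, not a measured value for any real pathogen.

```python
# Independent-action dose-response model: each of n organisms independently
# has probability p of establishing an infection, so
#   P(infection) = 1 - (1 - p)**n
# The p value is purely illustrative.
def infection_probability(n_organisms: int, p_single: float) -> float:
    return 1.0 - (1.0 - p_single) ** n_organisms

P = 1e-4  # hypothetical per-organism chance of starting an infection
for n in (1, 100, 10_000, 1_000_000):
    print(f"dose {n:>9,}: P(infection) = {infection_probability(n, P):.4f}")
```

Even with a tiny per-organism probability, a single cell has a nonzero chance of infecting you, while large doses push the probability toward certainty, which is exactly the "yes and yes" above.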
proper eye contact | Use it as an accent to your conversation. If you never look at someone, you're either ignoring them or submitting to them; so when you've finished your conversation, stop making eye contact and look away until they get the idea. If you look directly at someone constantly, you're either creepy as hell or attempting to dominate them.
Make initial eye contact when you first greet someone and hold it for a few seconds while discussing the point of the meeting; this shows interest, respect, and confidence. As you chat you can look away off and on, or just look at different parts of their body (or even face) so that you're not just staring them down. As you make specific points, i.e., when saying something you think is important, look sharply back into their eyes to drive the point home. I'm often doing more than one thing at a time, so when someone comes into my office I'll glance at my monitor or flip a page of specifications I'm reviewing and then look back at them.
Practice it for a while and you'll realize it's really just another way of communicating what you're thinking anyway, and it's not all that difficult. The reason you're having trouble is that you're not normally focused on the people speaking to you because of the eyesight issue, so you'll have to make some extra effort. That, or wear your friggin glasses. | [
"According to Eckman, \"Eye contact (also called mutual gaze) is another major channel of nonverbal communication. The duration of eye contact is its most meaningful aspect.\" Generally speaking, the longer there is established eye contact between two people, the greater the intimacy levels.\n",
"Eye contact is another major aspect of facial communication. Some have hypothesized that this is due to infancy, as humans are one of the few mammals who maintain regular eye contact with their mother while nursing. Eye contact serves a variety of purposes. It regulates conversations, shows interest or involvement, and establishes a connection with others.\n",
"Contact lenses can also be used to correct the focusing loss that comes along with presbyopia. Multifocal contact lenses can be used to correct vision for both the near and the far. Some people choose contact lenses to correct one eye for near and one eye for far with a method called monovision.\n",
"Proposed by Senju and Johnson, this model argues that the eye contact effect is facilitated by the subcortical face detection pathway. This pathway involves the superior colliculus, pulvinar and amygdala. This route is fast and operates on low spatial frequency and modulates cortical face processing . \n",
"Sensitivity to eye contact is present in newborns. From as early as four months old cortical activation as a result of eye contact has suggested that infants are able to detect and orient towards faces that make eye contact with them . This sensitivity to eye contact remains as the presence of eye contact has an effect on the processing of social stimuli in slightly older infants. For example, a 9-month-old infant will shift its gaze towards an object in response to another face shifting its gaze towards the same object. \n",
"Eye contact occurs when two people look at each other's eyes at the same time. In human beings, eye contact is a form of nonverbal communication and is thought to have a large influence on social behavior. Coined in the early to mid-1960s, the term came from the West to often define the act as a meaningful and important sign of confidence, respect, and social communication. The customs and significance of eye contact vary between societies, with religious and social differences often altering its meaning greatly.\n",
"In the United States, eye contact may serve as a regulating gesture and is typically related to issues of respect, attentiveness, and honesty in the American culture. Americans associate direct eye contact with forthrightness and trustworthiness.\n"
] |
Can anyone help decipher this WWII unit from a gravestone? | Edgar F. Raines's *Eyes of Artillery: The Origins of Modern U.S. Army Aviation in World War II* ([link](_URL_0_)) seems to mention this unit on page 257. According to Raines, during the Battle of Leyte in 1944:
> Resupply became the main, but not the only, mission of the [11th Airborne] division's aircraft during the campaign. The division surgeon organized two portable surgical hospitals (parachute), the 5246th and 5247th, which the L-4s [i.e. Piper Cubs] dropped into Manarawat, a small village where [division commander] Swing located his headquarters, and another jungle clearing before airstrips were ready. There, doctors stabilized the division's wounded; then liaison pilots, many of them returning to the coast for more supplies, flew the patients to the rear for long-term care... | [
"BULLET::::- A rectangular marble plaque on a concrete base marks the grave of seven unknown Partisan soldiers from the Second World War. The memorial was set up in 1979. The grave is located at the crossroads to Koprivnik, Brezovica pri Predgradu, and Črnomelj.\n",
"The Tomb was placed at the head of the grave of the World War I Unknown. West of this grave are the crypts of Unknowns from World War II (south) and Korea (north). Between the two lies a crypt that once contained an Unknown from Vietnam (middle). His remains were positively identified in 1998 through DNA testing as First Lieutenant Michael Blassie, United States Air Force and were removed. Those three graves are marked with white marble slabs flush with the plaza.\n",
"The Australian forces war graves (comprising 64 army and 24 air force personnel) are on a triangular plot, dominated by a Cross of Sacrifice. Here are buried 12 personnel from World War I and 88 from World War II.\n",
"There are special markers for eleven soldiers (ten British and one Australian) who are known or believed to be buried in the cemetery but whose actual plot was lost or destroyed. These stones usually have the Rudyard Kipling-derived footnote \"\"Their glory shall not be blotted out\"\".\n",
"After the surrender of Japan and the ending of World War II, the task of identifying of British and Commonwealth war dead in the area was assigned to Major J. H. Ingram who led a War Graves Registration Unit. He designed and supervised the erection of the cemetery for the reception of graves brought from the battlefields, from numerous temporary burial grounds, and from village and other civil cemeteries where permanent maintenance would not be possible.\n",
"The whole memorial stands on a base of three shallow stone steps and is set within a recess in a stone wall. The names of the dead from the First World War are inscribed in stone panels in the wall and the names of the fallen from the Second World War were added at a later date.\n",
"The cemetery's headstones are arranged in nine plots forming an elliptical design ending with an overlook feature. A memorial has ceramic operations maps with narratives and service flags. Either side of the memorial are Tablets of the Missing commemorating 444 soldiers missing in action (rosettes mark those since recovered and identified).\n"
] |
{eli5} how do guitar fret harmonics work? | When a guitar string vibrates without anyone pressing the frets, the whole string swings back and forth as one big wave.
How fast the wave moves back and forth (its frequency) is what determines what note you hear.
When you play a 12th fret harmonic, you put a "damper" at the exact half-way point of the length of the string. This forces the string to vibrate as two smaller waves, each half the length of the string. These halves vibrate exactly twice as fast as the whole string (half the length means double the frequency). When something vibrates twice as fast, the note you hear sounds twice as high.
When you play a harmonic at the fifth fret, your "damper" forces the string to vibrate in quarters because the 5th fret is one quarter along the length of the string. There are four little waves along the length of the string, with your finger between the first and second one. This makes the notes you hear even higher, because the shorter string parts vibrate even faster.
When you play normally at the 5th fret, the length of the vibrating part of the string is from your finger at the fifth fret all the way down to the end of the string by the fat end of the guitar, which is 3/4 of the total length of the string. When you make a harmonic at the fifth fret, the length of each vibrating section is 1/4 of the length of the string (the vibrating string is split into 4 little waves, remember), so you get a much higher note than if you play normally at the same fret.
The 7th fret is 1/3 of the way along the string, so the string is split into 3 equal parts, each vibrating equally fast. The vibration is slower than the 5th fret harmonic because the lengths of string are longer (1/3 vs 1/4). The note is lower than the 5th fret harmonic because the vibration is slower.
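If you want to sanity-check those fractions, here's a minimal Python sketch (my own illustration, not part of the physics; the variable names are made up). Fret n sits at a fraction 1 - 2^(-n/12) of the way along the string in equal temperament, and the harmonic that splits the string into n parts needs a "damper" point at 1/n of the string's length:

```python
# Sketch: compare where each fret sits on the string with where the
# corresponding harmonic needs its node. Positions are fractions of
# the total string length (0 = nut, 1 = bridge).
for fret, divisor in [(12, 2), (7, 3), (5, 4)]:
    fret_pos = 1 - 2 ** (-fret / 12)   # equal-temperament fret position
    node_pos = 1 / divisor             # node for the harmonic that divides
                                       # the string into `divisor` parts
    print(f"fret {fret:2d}: finger at {fret_pos:.3f}, "
          f"harmonic node at {node_pos:.3f}, pitch = {divisor}x open string")
```

Running it shows the 12th, 7th, and 5th frets land almost exactly on the 1/2, 1/3, and 1/4 points, which is why harmonics ring out cleanly there.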
That's why only those frets work to give nice clear harmonics. Those are the ones that divide the string nicely into equal sections (thirds, quarters, halves). | [
"A pinch harmonic (also known as squelch picking, pick harmonic or squealy) is a guitar technique to achieve artificial harmonics in which the player's thumb or index finger on the picking hand slightly catches the string after it is picked, canceling (silencing) the fundamental frequency of the string, and letting one of the harmonics dominate. This results in a high-pitched sound which is particularly discernible on an electrically amplified guitar as a \"squeal\".\n",
"Tap harmonic is a technique used with fretted string instruments (usually guitar). It is executed by tapping on the actual fret wire, most commonly at the 12th fret, but also can be executed by tapping any of the fret wires with proper technique. It can also be done by gently touching the string over the fret wire instead of tapping the fret wire if the string is already ringing. See also: Shred Guitar.\n",
"In this tuning, when the guitar is strummed without fretting any of the strings, a D major chord is sounded. This means that any major chord can be easily created using one finger, fretting all the strings at once (also known as barring); for example, fretting all the strings at the second fret will produce an E major, at the third fret an F major, and so on up the neck.\n",
"The player typically strums the chords with the left hand. The right hand plays the melody strings by depressing spring steel strips that hold small lead hammers over the strings. A brief stab on a metal strip bounces the hammer off a string pair to produce a single note. Holding the strip down makes the hammer bounce on the double strings, which produces a mandolin-like tremolo. The bounce rate is somewhat fixed, as it is based on the spring steel strip length, hammer weight, and string tension—but a player can increase the rate slightly by pressing higher on the strip, effectively moving its pivot point closer to the lead hammer.\n",
"To produce an artificial harmonic, a stringed instrument player holds down a note on the neck with one finger of their left hand (thereby shortening the vibrational length of the string) and uses another finger to lightly touch a point on the string that is an integer divisor of its vibrational length, and plucks or bows the side of the string that is closer to the bridge. This technique is used to produce harmonic tones that are otherwise inaccessible on the instrument. To guitar players, varieties of this technique are known as a pinch harmonic, tapped harmonic, and harp harmonic. \"This gives both the electric and the acoustic guitar quite a bit of versatility and sonic flare [sic].\"\n",
"Some electric guitars use a (misnamed) lever called a \"tremolo arm\" or \"whammy bar\" that allows a performer to lower or raise the pitch of a note or chord, an effect properly termed vibrato or \"pitch bend\". This non-standard use of the term \"tremolo\" refers to pitch rather than amplitude. True tremolo for an electric guitar, electronic organ, or any electronic signal would normally be produced by a simple amplitude modulation electronic circuit. Electronic tremolo effects were available on many early guitar amplifiers. Tremolo effects pedals are also widely used to achieve this effect.\n",
"The tremoloa simulates the tonal effects of the Hawaiian steel guitar by passing a weighted roller stabilized by a swinging lever termed an arm, along a melody string. Following, moving the roller after plucking creates tremolo, an effect which gave rise to its name. Additionally, the tremoloa possesses four chords (C, G, F, and D major), to strum out the harmony.\n"
] |
how can you get stuck inside something? | The bones are rigid, but the flesh can distort. Moving in one direction, the flesh may be stretched out, becoming narrower; moving in the other direction, it may bunch up, becoming wider. | [
"This ability enables a Croutonian to temporarily place their consciousness into an inanimate object for a short period of time. All they had to do was will it and their body would disappear as they would now be inside the object that they wanted. Throughout the show's run they appeared inside objects ranging from fuzzy dice, to toy robots. Mobile objects such as vehicles or electronic objects such as televisions could be controlled by Croutons projected into them. A Croutonian can only stay inside an object for approximately one minute or else they might get stuck in the object for several hours. Bo once got stuck inside a massage table, an experience he quite enjoyed, but it left Abe feeling stiff.\n",
"BULLET::::- An old saying that you never have to put a lid on a bucket of crabs (because when one gets near the top, another will inevitably pull it down) is often used as a metaphor for group situations where an individual feels held back by others.\n",
"BULLET::::- A man (Bill Irwin) tries to grab a bite to eat and get on a train during rush hour. He is unable to squeeze into packed cars. Spotting an empty car, he happily jumps in only to find that it's empty because of a bag left on a seat that is emitting a noxious vapor. He's trapped when the doors close before he can leave. Concluded in the final short.\n",
"These contain air, which is squeezed out when the surface of an object is pressed against the surface of the tape. Due to sealing properties of the material, when the object is pulled off the surface, a vacuum is created in the cavities. Due to external air pressure, this creates a force that prevents the object from being removed from the surface, a mechanism similar to that of a suction cup.\n",
"Bumps is a physics-based game that revolves around placing small creatures called \"bumps\" around a level, then pressing play to let the physics let them go free and collect keys to release the other bumps trapped in the level. The bumps are also able to collect power-ups and interact with certain dynamic physics objects to complete each levels objective of freeing the other bumps.\n",
"BULLET::::- Joshu Higashikata is a college student who uses the Stand Nut King Call, which allows him to materialize nuts and bolts through objects or people's bodies; if the bolt is undone on a person, the limb it was attached to falls off.\n",
"Other than the outer walls of the room and the stairs, the entire room is destructible using grenades. This can be necessary in order to access a chest from another direction if a body has fallen in front of it: searching a body has precedence over opening a locked chest. Chests can also be destroyed with a grenade, but if the chest contains explosives (bullets, grenades, or cannonballs) it will explode and end the game. Chests can also be shot open, but attempting to do so also risks setting off any explosive contents.\n"
] |
when pro athletes admit to using ped's (such as ryan braun today), why aren't they arrested for using illegal drugs? | I don't know exactly what drugs were used, but just because a drug is banned from use in sports does not mean it is also illegal to use outside of sports. | [
"BULLET::::- In February 2009, \"Sports Illustrated\" reported that Alex Rodriguez tested positive for two AAS, testosterone and metenolone enanthate, while playing for the Texas Rangers in 2003. He claims to have purchased them over the counter, in the Dominican Republic. However, \"boli,\" as he referred to it, is an illegal substance in the Dominican Republic. In an interview with ESPN two days after the SI revelations, Rodriguez admitted to using banned substances from 2001 to 2003, citing \"an enormous amount of pressure to perform,\" but said he had not since then used banned performance-enhancing substances. He said he did not know the name(s) of the particular substance(s) he was using, and would not specify whether he took them in injectable form.\n",
"Davis received a tweet on June 30 from Michael Tran in Michigan asking him if he had ever used steroids. He responded \"No\", that same day. Davis said later in an interview, \"I have not ever taken any PEDs. I'm not sure fans realize, we have the strictest drug testing in all of sports, even more than the Olympics. If anybody was going to try to cheat in our game, they couldn't. It's impossible to try to beat the system. Anyway, I've never taken PEDs, no. I wouldn't. Half the stuff on the list I can't even pronounce.\" Later, Davis would say that he believed Roger Maris's 61 home run season was the true single-season home run record, due to the steroid scandal surrounding Barry Bonds, Sammy Sosa, and Mark McGwire.\n",
"In 2008, Canseco released another book, \"Vindicated\", about his frustrations in the aftermath of the publishing of \"Juiced\". In it, he discusses his belief that Alex Rodriguez also used steroids. The claim was proven true with Rodriguez's admission in 2009, just after his name was leaked as being on the list of 103 players who tested positive for banned substances in Major League Baseball. In July 2013, Alex Rodriguez was again under investigation for using banned substances provided by Biogenesis of America. He was suspended for the entirety of the 2014 season.\n",
"BULLET::::- June 16 – According to a report published on \"The New York Times\" Web site, Sammy Sosa is allegedly among the 104 Major League players who tested positive for PEDs in . Sosa testified under oath before Congress at a public hearing in that he had never taken illegal performance-enhancing drugs.\n",
"Several other players returning from the 2012 ballot with otherwise strong Hall credentials have been linked to PEDs, among them Mark McGwire (who admitted to long-term steroid use in 2010), Jeff Bagwell (who never tested positive, but was the subject of PED rumors during his career), and Rafael Palmeiro (who tested positive for stanozolol shortly after publicly denying that he had ever used steroids).\n",
"On January 9, 2013, in response to the Baseball Hall of Fame announcement in which no players were elected, Lo Duca acknowledged his steroids use, tweeting \"I took PED and I'm not proud of it...but people who think you can take a shot or a pill and play like the legends on that ballot need help.\"\n",
"“It will do nothing to reduce the perception, suggested by several players, that steroid use is rampant. Worst of all, it sends the message to young fans and prospects that the national pastime has a high tolerance for steroids.” Those tests ultimately found 104 players using performance-enhancing drugs, but all the names were kept anonymous—until it was revealed by Sports Illustrated in 2009 that Alex Rodriguez was among them. A-Rod then admitted he had been injected with a steroid from 2001 through 2003. Wilstein criticized players and owners when they reached agreement on a new drug-testing program in January, 2005, calling for more banned substances, a 10-day penalty for first time users, and the release of the names of those who test positive.\n"
] |
How does our brain interpret wildly-different accents as the same language? | Certain sounds within a language are [allophones](_URL_0_). This means that they can be interchanged while not altering the meaning of the word.
One example is /t/. If you take nearly any English word with that sound and replace it with an alveolar flap or a glottal stop, it changes the accent but not the meaning of the word.
| [
"In a study conducted by Newman et al., the relationship between cognitive neuroscience and language acquisition was compared through a standardized test procedure involving native speakers of English and native Spanish speakers who have all had a similar amount of exposure to the English language(averaging about 26 years). Even the number of times an examinee blinked was taken into account during the examination process. It was concluded that the brain does in fact process languages differently, but instead of it being directly related to proficiency levels, it is more so about how the brain processes language itself.\n",
"A strong correlation has been found between speech-language and the anatomically asymmetric pars triangularis. Foundas, et al. showed that language function can be localized to one region of the brain, as Paul Broca had done before them, but they also supported the idea that one side of the brain is more involved with language than the other. The human brain has two hemispheres, and each one looks similar to the other; that is, it looks like one hemisphere is a mirror image of the other. However, Foundas, et al. found that the pars triangularis in Broca's area is actually larger than the same region in the right side of the brain. This \"leftward asymmetry\" corresponded both in form and function, which means that the part of the brain that is active during language processing is larger. In almost all the test subjects, this was the left side. In fact, the only subject tested that had right-hemispheric language dominance was found to have a rightward asymmetry of the pars triangularis.\n",
"With the increasing amount of bilinguals worldwide, psycholinguists began to look at how two languages are represented in our brain. The mental lexicon is one of the places that researchers focused on to see how that is different between bilingual and monolingual.\n",
"Linguists who understand particular languages as a composite of unique, individual idiolects must nonetheless account for the fact that members of large speech communities, and even speakers of different dialects of the same language, can understand one another. All human beings seem to produce language in essentially the same way. This has led to searches for universal grammar, as well as attempts to further define the nature of particular languages.\n",
"Different linguists therefore take different approaches to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the human brain processes a language.\n",
"The brain contains areas that are specialized to deal with language, located in the perisylvian cortex of the left hemisphere. These areas are crucial for performing language tasks, but they are not the only areas that are used; disparate parts of both right and left brain hemispheres are active during language production. In multilingual individuals, there is a great deal of similarity in the brain areas used for each of their languages. Insights into the neurology of multilingualism have been gained by the study of multilingual individuals with aphasia, or the loss of one or more languages as a result of brain damage. Bilingual aphasics can show several different patterns of recovery; they may recover one language but not another, they may recover both languages simultaneously, or they may involuntarily mix different languages during language production during the recovery period. These patterns are explained by the \"dynamic view\" of bilingual aphasia, which holds that the language system of representation and control is compromised as a result of brain damage.\n",
"Conversely, it has also been reported that there is at times, no difference within the left prefrontal cortex when comparing word generation in early bilinguals and late bilinguals. It has been reported that these findings may conflict with those stated above because of different levels of proficiency in each language. That is, an individual who resides in a bilingual society is more likely to be highly proficient in both languages, as opposed to a bilingual individual who lives in a dominantly monolingual community. Thus, language proficiency is another factor affecting the neuronal organization of language processing in bilinguals.\n"
] |
does it cost internet providers more money to give as an individual faster internet? | Directly... No. Any individual is virtually nothing on the scale that the ISPs operate.
Indirectly... Yes. It's not as simple as providing one person faster internet; you would have to provide everyone who asked faster internet. Soon you have to upgrade the entire infrastructure, and that costs a few hundred billion. | [
"Clemons suggests alternative methods for earning money through the Internet, namely selling content and selling access to virtual communities. However, one might argue that this would not be effective in current society; since content and access has been available for free for as long as the Internet has been around, sudden charges might cause an uproar among users of the Internet. Furthermore, a portion of Internet users may not be able to afford paying for content and access, which will limit the amount of revenue businesses will bring in.\n",
"Additionally, widespread use of the Internet by businesses and corporations drives down energy costs. Besides the fact that Internet usage does not consume large amounts of energy, businesses who utilize connections no longer have to ship, stock, heat, cool, and light unsellable items whose lack of consumption not only yields less profit for the company but also wastes more energy. Online shopping contributes to less fuel use: a 10-pound package via airmail uses 40% less fuel than a trip to buy that same package at a local mall, or shipping via railroad. Researchers in 2000 predicted a continuing decline in energy due to Internet consumption to save 2.7 million tons of paper per year, yielding a decrease by 10 million tons of carbon dioxide globalwarming pollution per year.\n",
"According to NTIA (2010), the major reason for people not having high speed Internet use at home is \"don’t need/not interested\" (37.8%), and the second one is \"too expensive\" (26.3%). Some therefore argue the government should not be paying for a service people do not want.\n",
"Residential internet is also very expensive at 15 CUC per month for the cheapest plan and 70 CUC per month for the plan that offers the fastest quality internet. These services represent a cost that is excess of the majority or all of the salary of the vast majority of the Cuban population. Similarly, internet for businesses is out of reach for all but the most wealthy customers with monthly costs that are at least 100 CUC a moth for direct access to the global internet for the slowest service and at a maximum of over 30,000 CUC a month for the fastest service that is offered. As a result the vast majority of Cuban residences and businesses do not have access to the internet.\n",
"The low-cost barrier to entry for prospective peers is attractive for the smaller companies, while larger companies can see significant operational expense savings by utilizing the exchange at a fraction of the cost of commercial Internet transit.\n",
"For some, the cost of Internet service is a factor. Many computer owners who cannot afford a monthly subscription to an Internet service, who only use it occasionally, or who otherwise wish to save money and avoid paying, will routinely piggyback from a neighbour or a nearby business, or visit a location providing this service without being a paying customer. If the business is large and frequented by many people, this may go largely unnoticed.\n",
"Transaction costs have since evolved and have played key roles in the mobilization of organizations and groups. With the expansion of information technology involving telephones and the internet, people are more apt to share information at low costs. It is now fast and inexpensive to communicate with others. As a result, transaction costs regarding communication and the sharing of information is low and, at times, free. Low transaction costs have allowed for groups of people to join \n"
] |
considering the level of climate change denial and inaction, how on earth was the montreal protocol implemented (and successfully so)? | Don't post loaded questions.
Climate change itself is not being denied; it's the cause that is in dispute. | [
"The 1987 Montreal Protocol is commonly cited as a CAC success story at international level. The aim of the agreement was to limit the release of Chlorofluorocarbons into the atmosphere and subsequently halt the depletion of Ozone (O3) in the stratosphere.\n",
"Over the decades, the Montreal Protocol became a victim of its own success.By 2006, some called for dismantling the treaty, claiming it had achieved its goals and outlived its usefulness. Andersen knew the Protocol needed to be not only preserved but strengthened. In 2007 Andersen assembled a team of scientists, led by Dutch scientist Dr. Guus Velders, to research the role of the Protocol in climate protection. In 2007 Andersen and the Velders’ team published “\"The Importance of the Montreal Protocol in Protecting Climate\".” The team quantified the benefits of the Montreal Protocol, and found that it helped prevent 11 billion metric tons of COequivalent emissions per year from 1990 to 2010, having delayed the impacts of climate change by 7 - 12 years. The paper determined the Montreal Protocol had been the most successful climate agreement in history, it also estimated the joint ozone and climate benefits of an accelerated hydrochlorofluorocarbons (HCFC) phaseout, providing policymakers with information needed to accelerate the phaseout.\n",
"The Vienna Convention for the Protection of the Ozone Layer and the Montreal Protocol were both originally signed by only some member states of the United Nations (43 nations in the case of the Montreal Protocol in 1986) while Kyoto attempted to create a worldwide agreement from scratch. Expert consensus concerning CFCs in the form of the Scientific Assessment of Ozone Depletion was reached long after the first regulatory steps were taken, and , all countries in the United Nations plus the Cook Islands, the Holy See, Niue and the supranational European Union had ratified the original Montreal Protocol. These countries have also ratified the London, Copenhagen, and Montreal amendments to the Protocol. , the Beijing amendments had not been ratified by two state parties.\n",
"Among its accomplishments are: The Montreal Protocol was the first international treaty to address a global environmental regulatory challenge; the first to embrace the \"precautionary principle\" in its design for science-based policymaking; the first treaty where independent experts on atmospheric science, environmental impacts, chemical technology, and economics, reported directly to Parties, without edit or censorship, functioning under norms of professionalism, peer review, and respect; the first to provide for national differences in responsibility and financial capacity to respond by establishing a multilateral fund for technology transfer; the first MEA with stringent reporting, trade, and binding chemical phase-out obligations for both developed and developing countries; and, the first treaty with a financial mechanism managed democratically by an Executive Board with equal representation by developed and developing countries.\n",
"Agreed in 1997, the Kyoto Protocol took the global nature of the climate problem into account at least to some extent, even if it was ratified only many years later. The Kyoto Protocol was the first international agreement to limit greenhouse gas emissions. The Wuppertal Institute's Climate Policy Division was closely involved in setting this milestone in the international climate debate.\n",
"BULLET::::- The Montreal Protocol: Agreed in 1987 with a pressing mission to regulate the chemicals directly destroying Earth’s ozone layer and celebrated as the world’s most successful environmental treaty. EIA was instrumental in proposing and then making the case that the Protocol, which so ably removed chlorofluorocarbons (CFCs), was the best mechanism by which to phase out the harmful hydrofluorocarbons (HFCs) which have come to replace CFCs. This work resulted in the Kigali Amendment on HFCs.\n",
"The Kyoto Protocol was a huge leap forward towards an intergovernmental united strategy to reduce GHG’s emissions globally. But it wasn’t without its objections. Some of the main criticisms were against categorizing different countries into annexes, with each annex having its own responsibility for emission reductions based on historic GHG emissions and, therefore, historic contribution to global climate change. “Some of the criticism of the Protocol has been based on the idea of climate justice.\" This has particularly centered on the balance between the low emissions and high vulnerability of the developing world to climate change, compared to high emissions in the developed world.” Other objections were the use of carbon off-sets as a method for a country to reduce its carbon emissions. Although it can be beneficial to balance out one GHG emission by implementing an equal carbon offset, it still doesn’t completely eliminate the original carbon emission and therefore ultimately reduce the amount of GHG’s in the atmosphere.\n"
] |
Why hasn't the world's most fascinating monument, the Mausoleum of the First Emperor of China, been excavated? | There are conservation reasons, which I don't have the scientific training to discuss, but I'd like to question your assumption that it is the "world's most fascinating monument". Yes, it is a large and spectacular tomb that probably has a lot of marquee artifacts inside, but those kinds of sites are not always the best to answer interesting research questions. Take, for example, the archaeological site of Gordion in Turkey, which has been under continuous excavation since 1950. Compared to the likely contents of the Mausoleum of the Qin Emperor it has for the most part been entirely unspectacular with the exception of the large golden burial in Tumulus MM and a few nice artworks. But as a research site it is one of the most important in the entire Middle East, on the level of Boğazköy, Assur, Warka, Ur and other major sites. It represents one of the longest continuous human habitations known in Anatolia, was the capital city of the Phrygian state (MM stands for "Midas Mound") and as such has some of the most important Iron Age monumental architecture of Anatolia, important evidence of the Hittite presence in central Anatolia, a notable Hellenistic town that can answer a lot of questions along with other Hellenistic sites about the Greek presence in Anatolia, a lot of plant remains that can tell us about the ecological history and food production of the region, and is generally nearly unparalleled as a laboratory for the archaeology of the ancient Near East. It may not be an enormous mound burying a famous Chinese emperor, but from certain perspectives a site like Gordion (and I pick that only because I know the archaeology of the Near East better than the archaeology of China) that preserves evidence about a wide range of human activities and habitations over a very long period of time is far more valuable as historical evidence.
EDIT: And I have not even touched on the humbler settlement archaeology, which for the most part surveys and excavates sites that barely make the front pages but can tell us things about daily life and historical geography that even the most impressive urban monumental site simply cannot. | [
"The Mausoleum was approximately in height, and the four sides were adorned with sculptural reliefs, each created by one of four Greek sculptors: Leochares, Bryaxis, Scopas of Paros, and Timotheus. The finished structure of the mausoleum was considered to be such an aesthetic triumph that Antipater of Sidon identified it as one of his Seven Wonders of the Ancient World. It was destroyed by successive earthquakes from the 12th to the 15th century, the last surviving of the six destroyed wonders.\n",
"Besides its religious importance, the mausoleum is also of considerable archaeological value as its dome is reputed to be the second largest in the world, after 'Gol Gumbad' of Bijapur (India), which is the largest. The mausoleum is built entirely of red brick, bounded with beams of shisham wood, which have now turned black after so many centuries. The whole of the exterior is elaborately ornamented with glazed tile panels, string courses and battlements. Colors used are dark blue, azure, and white, contrasted with the deep red of the finely polished bricks. The tomb was said to have been built by Ghias-ud-Din Tughlak for himself, but was given up by his son Muhammad Tughlak in favour of Rukn-i-Alam, when he died in 1330.\n",
"The mausoleum complex is best known for the pyramidal monument which stands in front of the tomb itself, and which is often mistaken for the tomb. Called \"Shou Qiu\" (\"mound or hill of longevity\"), this monument marks the birthplace of the Yellow Emperor according to legend. It is unique in China because of its pyramid-shaped stone construction. It consists of a mound that has been covered with stone slabs during the reign of Emperor Huizong of the Song dynasty in 1111 CE. The entire pyramid is 28.5 metres wide and 8.73 meters high. On its flat top stands a small pavilion that houses a statue, variously identified as the Yellow Emperor or Shaohao. The mound and tomb stands inside a compound with many old trees, chiefly thujas planted on the orders of the Qianlong Emperor of the Qing dynasty, who visited the site in 1748.\n",
"The real stars of Jiayuguan are the thousands of tombs from the Wei and Western Jin Dynasty (265–420) discovered east of the city in recent years. The 700 excavated tombs are famous in China, and replicas or photographs of them can be seen in nearly every major Chinese museum. The bricks deserve their fame; they are both fascinating and charming, depicting such domestic scenes as preparing for a feast, roasting meat, picking mulberries, feeding chickens, and herding horses. Of the 18 tombs that have been excavated, only one is currently open to tourists. Many frescos have also been found around Jiayuguan but most are not open to visitors.\n",
"The beauty of the Mausoleum was not only in the structure itself, but in the decorations and statues that adorned the outside at different levels on the podium and the roof: statues of people, lions, horses, and other animals in varying scales. The four Greek sculptors who carved the statues: Bryaxis, Leochares, Scopas and Timotheus were each responsible for one side. Because the statues were of people and animals, the Mausoleum holds a special place in history, as it was not dedicated to the gods of Ancient Greece.\n",
"The Eastern Qing tombs (; ) are an imperial mausoleum complex of the Qing dynasty located in Zunhua, northeast of Beijing. They are the largest, most complete, and best preserved extant mausoleum complex in China. Altogether, five emperors (Shunzhi, Kangxi, Qianlong, Xianfeng, and Tongzhi), 15 empresses, 136 imperial concubines, three princes, and two princesses of the Qing dynasty are buried here. Surrounded by Changrui Mountain, Jinxing Mountain, Huanghua Mountain, and Yingfei Daoyang Mountain, the tomb complex stretches over a total area of .\n",
"BULLET::::- The Mausoleum at Halicarnassus, another Wonder of the Ancient World, was destroyed by a series of earthquakes between the 12th and 15th centuries. Most of the remaining marble blocks were burnt into lime, but some were used in the construction of Bodrum Castle by the Knights Hospitaller, where they can still be seen today. The only other surviving remains of the mausoleum are some foundations in situ, a few sculptures in the British Museum, and some marble blocks which were used to build a dockyard in Malta's Grand Harbour.\n"
] |
why dont we ever hear about people born without a sense of taste/touch/smell? | We do. I knew a guy that couldn't feel pain or temperature. He had to be careful not to burn himself and constantly had to check himself to make sure he didn't get injured that day. | [
"Often people who have congenital anosmia report that they pretended to be able to smell as children because they thought that smelling was something that older/mature people could do, or did not understand the concept of smelling but did not want to appear different from others. When children get older, they often realize and report to their parents that they do not actually possess a sense of smell, often to the surprise of their parents.\n",
"People with this condition often misinterpret others' behaviors, e.g. sniffing, touching nose or opening a window, as being referential to an unpleasant body odor which in reality is non-existent and can not be detected by other people.\n",
"Certain smells can be associated with specific areas and help a person with vision problems to remember a familiar area. This way there is a better chance of recognizing an area's layout in order to navigate themselves through. The same can be said for people as well. Some people have their own special odor that a person with a more trained sense of smell can pick up. A person with an impairment of their vision can use this to recognize people within their vicinity without them saying a word.\n",
"Among humans, taste perception begins to fade around 50 years of age because of loss of tongue papillae and a general decrease in saliva production. Humans can also have distortion of tastes through dysgeusia. Not all mammals share the same taste senses: some rodents can taste starch (which humans cannot), cats cannot taste sweetness, and several other carnivores including hyenas, dolphins, and sea lions, have lost the ability to sense up to four of their ancestral five taste senses.\n",
"Among humans, taste perception begins to fade around 50 years of age because of loss of tongue papillae and a general decrease in saliva production. Humans can also have distortion of tastes through dysgeusia. Not all mammals share the same taste senses: some rodents can taste starch (which humans cannot), cats cannot taste sweetness, and several other carnivores including hyenas, dolphins, and sea lions, have lost the ability to sense up to four of their ancestral five taste senses.\n",
"Perceptual adaptation is a phenomenon that occurs for all of the senses, including smell and touch. An individual can adapt to a certain smell with time. Smokers, or individuals living with smokers, tend to stop noticing the smell of cigarettes after some time, whereas people not exposed to smoke on a regular basis will notice the smell instantly. The same phenomenon can be observed with other types of smell, such as perfume, flowers, etc. The human brain can distinguish smells that are unfamiliar to the individual, while adapting to those it is used to and no longer require to be consciously recognized.\n",
"Humans begin to lose their senses one at a time. Each loss is preceded by an outburst of an intense feeling or urge. First, people begin suffering uncontrollable bouts of crying and this is soon followed by the loss of their sense of smell. An outbreak of irrational panic and anxiety, closely followed by a bout of frenzied gluttony, precedes the loss of the sense of taste. The film depicts people trying to adapt to each loss and trying to carry on living as best they can, rediscovering their remaining senses as they do so. Michael and his co-workers do their best to cook food for people who cannot smell nor taste.\n"
] |
How deep would I have to dig into the earth to stop finding life? | Pretty darn deep. If I recall correctly, organisms have been found in boreholes 4 km deep, though I can't find a source for anything deeper than 2.7 km.
Here is a brief discussion of it: _URL_1_
This is also full of interesting information: _URL_0_ | [
"Life has been found at depths of 5 km in continents and 10.5 km below the ocean surface. The estimated volume of the deep biosphere is 2–2.3 billion cubic kilometers, about twice the volume of the oceans.\n",
"Like probes sent into outer space, scientific drilling is a technology used to obtain samples from places that people cannot reach. Human beings have descended as deep as 2,080 m (6,822 ft) in Voronya Cave, the world's deepest known cave, located in the Caucasus mountains of the country of Georgia. Gold miners in South Africa regularly go deeper than 3,400 m, but no human has ever descended to greater depths than this below the Earth's solid surface. As depth increases into the Earth, temperature and pressure rise. Temperatures in the crust increase about 15°C per kilometer, making it impossible for humans to exist at depths greater than several kilometers, even if it was somehow possible to keep shafts open in spite of the tremendous pressure. \n",
"In December 2018, researchers announced that considerable amounts of life forms, including 70% of bacteria and archea on Earth, comprising up to 23 billion tonnes of carbon, live at least deep underground, including below the seabed, according to a ten-year Deep Carbon Observatory project.\n",
"In December 2018, researchers announced that considerable amounts of life forms, including 70% of bacteria and archea on Earth, comprising up to 23 billion tonnes of carbon, live up to at least deep underground, including below the seabed, according to a ten-year Deep Carbon Observatory project.\n",
"In December 2018, researchers announced that considerable amounts of life forms, including 70% of bacteria and archea on Earth, comprising up to 23 billion tonnes of carbon, live up to at least deep underground, including below the seabed, according to a ten-year Deep Carbon Observatory project.\n",
"BULLET::::- Jacques Piccard and Don Walsh descend into the Mariana Trench in the \"bathyscaphe Trieste\", reaching the depth of 10,911 meters (35,797 feet) and become the first human beings to reach the lowest spot on Earth.\n",
"BULLET::::- Researchers announce the discovery of considerable amounts of life forms, including 70% of bacteria and archea on Earth, comprising up to 23 billion tonnes of carbon, living up to at least deep underground, including below the seabed, according to a ten-year Deep Carbon Observatory project.\n"
] |
When did pornography come about in human history? | I'm adapting this from some older answers.
Here's the tricky thing about your question--do you mean 'porn' in the sense of moving visual art of people doing erotic things? Then in 1894 Edison's studio recorded a vaguely erotic short titled *Carmencita*, which featured a Spanish dancer who twirled and posed on film for the first time. The short was considered scandalous in some places because Carmencita's underwear and legs could be seen in the film. A couple of years later, in 1896, the same studio recorded *The May Irwin Kiss*, an 18-second film of a Victorian couple kissing (in an incredibly awkward and forced manner). According to Maximillien De Lafayette, this scene in particular caused uproar in newspaper editorials, cries for censorship from the Roman Catholic Church, and calls for prosecution--although these calls do not seem to have been followed up on.
Or perhaps you mean film of people actually doing the deed? Then the oldest surviving work we have is *L'Ecu d'Or ou la Bonne Auberge*, first distributed in 1908, which features a man coming to an inn somewhere in France. The inn has no food, but the innkeeper offers the hungry guest a very different type of "food" -- his daughter. And then, just because, a third woman has to come and join in on the fun. However, this film only survives in a few places now; censors managed to destroy most copies.
The earliest surviving American film, available on [Wikipedia of all places](_URL_0_) **[THIS LINK IS LITERAL PORN, YOU'VE BEEN WARNED]**, is called *A Free Ride* and dates from 1915. These types of works were typically shown in brothels until film projection equipment became cheap in the 1930s.
As with photography before it, and books before that, film became cheaper and more widespread, began appearing in alleyways and under the counter at stores, and eventually led to arrests, prosecutions and jail time. The Czech movie *Ecstasy* (1933), for example, featured scenes of nudity and perhaps the first female orgasm shown in a major theatrical release. The scandal of these scenes led to cries for the seizing and banning of the offensive material, and led to the Hays Code in the United States, which successfully banned erotic material from Hollywood movies for the next 30 years. Full freedom of pornographic expression was not available until 1988's California v. Freeman, which effectively legalized hardcore pornography.
Or do you perhaps mean "porn" as in the concept of pornography as a whole? 'Porn' as we know it is a relatively recent thing, dating from the early 1800s or so; 1857 is when it was really written into law in our modern understanding of it (in England and France, a few years earlier in America). So 'porn' as we know it is only about 150 years old!
This is really surprising to most people, as they tend to think, as you do, of the Kama Sutra and other such things as pornography. But they're not, or at least in their original contexts they were not:
> “the explicit description or exhibition of sexual subjects or activity in literature, painting, films, etc., in a manner intended to stimulate erotic rather than aesthetic feelings” (OED)
Although pornography is a Greek word literally meaning “writers about prostitutes,” it is only found once in surviving Ancient Greek writing, where Athenaeus comments on an artist who painted portraits of whores or courtesans. The word seemed to fall more or less out of use for fifteen hundred years until the first modern usage of the word (1857) to describe erotic wall paintings uncovered at Pompeii.
Several ‘secret museums’ were founded to house the discoveries. However, these museums (the first of which was the Borbonico museum in Naples) were only accessible to highly educated upper-class men, who could understand Latin and Greek and pay the admission price.
As literacy rose, the book market developed in England, and it began to seem possible that anything might be shown to anyone without control, the 'shadowy zone' of pornography was 'invented,' regulating the "consumption of the obscene, so as to exclude the lower classes and women." (Walter Kendrick, p. 57, *The Secret Museum*) Critics and moralists responded to the growing market, rising literacy, and the developing public sphere by expressing a deep anxiety over the impact and influences of erotic works. Erotic discourse began to be inextricably linked to a 'type' of work that supposedly had undesirable effects upon the English public. In Lynn Hunt’s words then, “pornography as a regulatory category was invented in response to the perceived menace of the democratization of culture.”
| [
"Another early form of pornography were comic books known as Tijuana bibles that began appearing in the U.S. in the 1920s and lasted until the publishing of glossy colour men's magazines commenced. These were crude hand drawn scenes often using popular characters from cartoons and culture.\n",
"Another early form of pornography were comic books known as Tijuana bibles that began appearing in the U.S. in the 1920s and lasted until the publishing of glossy colour men's magazines commenced. These were crude hand drawn scenes often using popular characters from cartoons and culture.\n",
"Although pornography dates back thousands of years, its existence in the U.S. can be traced to its 18th-century origins and the influx of foreign trade and immigrants. By the end of the 18th century, France had become the leading country regarding the spread of porn pictures. Porn had become the subject of playing-cards, posters, post cards, and cabinet cards. Prior to this printers were previously limited to engravings, woodcuts, and line cuts for illustrations. As trade increased and more people immigrated from countries with less Puritanical and more relaxed attitudes toward human sexuality, the amount of available visual pornography increased.\n",
"Lynn Hunt points out that early modern \"pornography\" (18th century) is marked by a \"preponderance of female narrators\", that the women were portrayed as independent, determined, financially successful (though not always socially successful and recognized) and scornful of the new ideals of female virtue and domesticity, and not objectifications of women's bodies as many view pornography today. The sexual revolution was not unprecedented in identifying sex as a site of political potential and social culture. It was suggested that the interchangeability of bodies within pornography had radical implications for gender differences and that they could lose their meaning or at least redefine the meaning of gender roles and norms. \n",
"The first instances of modern pornography date back to the sixteenth century when sexually explicit images differentiated itself from traditional sexual representations in European art by combining the traditionally explicit representation of sex and the moral norms of those times.\n",
"In the 17th century, numerous examples of pornographic or erotic literature began to circulate. These included \"L'Ecole des Filles\", a French work printed in 1655 that is considered to be the beginning of pornography in France. It consists of an illustrated dialogue between two women, a 16-year-old and her more worldly cousin, and their explicit discussions about sex. The author remains anonymous to this day, though a few suspected authors served light prison sentences for supposed authorship of the work. In his famous diary, Samuel Pepys records purchasing a copy for solitary reading and then burning it so that it would not be discovered by his wife; \"the idle roguish book, \"L'escholle de filles\"; which I have bought in plain binding… because I resolve, as soon as I have read it, to burn it.\"\n",
"Sexism has had a long standing history within the medical industry. The earliest traces of sexism could be found within the disproportionate diagnosis of women with hysteria as early as 4000 years ago.\n"
] |
What's a Good Book To Learn About the Hanseatic League? | Do you read German? If so, get the standard work on the Hanseatic League: *Bracker, Jörgen / Henn, Volker / Postel, Rainer (Eds.): Die Hanse. Lebenswirklichkeit und Mythos*, 3rd edition, Lübeck 1999, a German-language collection of various texts on a diverse range of topics. I don't believe it's been translated though. | [
"His major work was his monograph \"Geschichte des Hanseatischen Bundes.\" (engl.: \"History of the Hanseatic League.\") published in three volumes 1802-1808. His research on this topic was the first modern work on the Hanseatic League. A second edition prepared by him was published post mortem in 1830. He made a historical study of the rule of the Ostrogoths in Italy while professor at Göttingen (\"Versuch iiber die Regierung der Ostgothen wabrend ihrer Herrschaft in Italien\"; Hamburg, 1811), an extremely painstaking treatise on Ostrogothic administration, chiefly compiled from the letters of Cassiodorus. He is also known as translator and popularizer of Adam Smith's \"Wealth of Nations\". As an economist he gave lectures on taxation.\n",
"Historians generally trace the origins of the Hanseatic League to the rebuilding of the north German town of Lübeck in 1159 by the powerful Henry the Lion, Duke of Saxony and Bavaria, after he had captured the area from Adolf II, Count of Schauenburg and Holstein. More recent scholarship has deemphasized the focus on Lübeck due to it having been designed as one of several regional trading centers.\n",
"The Hanseatic League was an alliance of trading guilds that established and maintained a trade monopoly over the Baltic Sea, to a certain extent the North Sea, and most of Northern Europe for a time in the Late Middle Ages and the early modern period, between the 13th and 17th centuries. Historians generally trace the origins of the League to the foundation of the Northern German town of Lübeck, established in 1158/1159 after the capture of the area from the Count of Schauenburg and Holstein by Henry the Lion, the Duke of Saxony. Exploratory trading adventures, raids and piracy had occurred earlier throughout the Baltic (see Vikings) — the sailors of Gotland sailed up rivers as far away as Novgorod, for example — but the scale of international economy in the Baltic area remained insignificant before the growth of the Hanseatic League. German cities achieved domination of trade in the Baltic with striking speed over the next century, and Lübeck became a central node in all the seaborne trade that linked the areas around the North Sea and the Baltic Sea.\n",
"Early in the campaign, an event will herald the formation of the Hanseatic League. The League consists of five specific regions on the campaign map—Hamburg, Danzig, Visby, Riga and Novgorod—which represent the group's most important assets. The faction controlling the most of these settlements has the greatest chance to be offered the option of building the Hanseatic League Headquarters, a unique building that provides significant financial rewards.\n",
"In 1980, former Hanseatic League members established a \"new Hanse\" in Zwolle. This league is open to all former Hanseatic League members and cities that share a Hanseatic Heritage. In 2012 the New Hanseatic league had 187 members. This includes twelve Russian cities, most notably Novgorod, which was a major Russian trade partner of the Hansa in the Middle Ages. The \"new Hanse\" fosters and develops business links, tourism and cultural exchange.\n",
"The \"New Hanseatic League\" is a political grouping of economically like-minded northern European states, established in February 2018, that is pushing for a more developed European Single Market, particularly in the services sector.\n",
"The Hanseatic League was an alliance of trading cities that established and maintained a trade monopoly over the Baltic Sea and most of Northern Europe for a time in the later Middle Ages and the Early Modern period, between the 13th and 17th centuries. \n"
] |
What do we know about the long-term effects of nicotine, as distinct from the long-term effects of tobacco? | We do not have long term human studies yet. However, we have done studies in rats (so take that as you will).
Findings from one such study show that long-term, heavy usage (twice the blood plasma level of nicotine found in heavy smokers) produced **no increase "in mortality, in atherosclerosis or frequency of tumors in these rats compared with controls"**.
Nicotine is still very addictive, and electronic cigarettes haven't yet been shown to help with quitting, but if your friends choose e-cigs over regular cigarettes, it is likely the healthier option.
Source [pubmed](_URL_0_) | [
"Although nicotine does play a role in acute episodes of some diseases (including stroke, impotence, and heart disease) by its stimulation of adrenaline release, which raises blood pressure, heart and respiration rate, and free fatty acids, the most serious longer term effects are more the result of the products of the smouldering combustion process. This has led to the development of various nicotine delivery systems, such as the nicotine patch or nicotine gum, that can satisfy the addictive craving by delivering nicotine without the harmful combustion by-products. This can help the heavily dependent smoker to quit gradually, while discontinuing further damage to health.\n",
"The health effects of long-term nicotine use is unknown. It may be decades before the long-term health effects of nicotine vapor inhalation is known. It is not recommended for non-smokers. Public health authorities do not recommend nicotine use for non-smokers. The pureness of the nicotine differs by grade and producer. The impurities associated with nicotine are not as toxic as nicotine. The health effects of vaping tobacco alkaloids that stem from nicotine impurities in e-liquids is not known. Nicotine affects practically every cell in the body. The complex effects of nicotine are not entirely understood. It poses several health risks. Short-term nicotine use excites the autonomic ganglia nerves and autonomic nerves, but chronic use seems to induce negative effects on endothelial cells. Nicotine may have a profound impact on sleep. The effects on sleep vary after being intoxicated, during withdrawal, and from long-term use. Nicotine may result in arousal and wakefulness, mainly via incitement in the basal forebrain. Nicotine withdrawal, after abstaining from nicotine use in non-smokers, was linked with longer overall length of sleep and REM rebound. A 2016 review states that \"Although smokers say they smoke to control stress, studies show a significant increase in cortisol concentrations in daily smokers compared with occasional smokers or nonsmokers. These findings suggest that, despite the subjective effects, smoking may actually worsen the negative emotional states. The effects of nicotine on the sleep-wake cycle through nicotine receptors may have a functional significance. Nicotine receptor stimulation promotes wake time and reduces both total sleep time and rapid eye movement sleep.\"\n",
"First-time nicotine users develop a dependence about 32% of the time. There are approximately 976 million smokers in the world. There is an increased frequency of nicotine dependence in people with anxiety disorders. Nicotine is a parasympathomimetic stimulant that attaches to nicotinic acetylcholine receptors in the brain. Neuroplasticity within the brain's reward system occurs as a result of long-term nicotine use, leading to nicotine dependence. There are genetic risk factors for developing dependence. For instance, genetic markers for a specific type of nicotinic receptor (the α5-α3-β4 nicotine receptors) have been linked to increased risk for dependence. Evidence-based medicine can double or triple a smoker's chances of quitting successfully.\n",
"Nicotine promotes endothelial cell migration, proliferation, survival, tube formation, and nitric oxide (NO) production \"in vitro\", mimicking the effect of other angiogenic growth factors. In 2001, it was found that nicotine was a potent angiogenic agent at tissue and plasma concentrations similar to those induced by light to moderate smoking. Effects of nicotine on angiogenesis have been demonstrated for a number of tumor cells, such as breast, colon, and lung. Similar results have also been demonstrated in \"in vivo\" mouse models of lung cancer, where nicotine significantly increased the size and number of tumors in the lung, and enhanced metastasis.\n",
"Some evidence suggests that \"in utero\" nicotine exposure influences the occurrence of certain conditions later in life, including type 2 diabetes, obesity, hypertension, neurobehavioral defects, respiratory dysfunction, and infertility.\n",
"Nicotine, which is contained in cigarettes and other smoked tobacco products, is a stimulant and is one of the main factors leading to continued tobacco smoking. Nicotine is a highly addictive psychoactive chemical. When tobacco is smoked, most of the nicotine is pyrolyzed; a dose sufficient to cause mild somatic dependency and mild to strong psychological dependency remains. The amount of nicotine absorbed by the body from smoking depends on many factors, including the type of tobacco, whether the smoke is inhaled, and whether a filter is used. There is also a formation of harmane (a MAO inhibitor) from the acetaldehyde in cigarette smoke, which seems to play an important role in nicotine addiction probably by facilitating dopamine release in the nucleus accumbens in response to nicotine stimuli. According to studies by Henningfield and Benowitz, nicotine is more addictive than cannabis, caffeine, ethanol, cocaine, and heroin when considering both somatic and psychological dependence. However, due to the stronger withdrawal effects of ethanol, cocaine and heroin, nicotine may have a lower potential for somatic dependence than these substances. About half of Canadians who currently smoke have tried to quit. McGill University health professor Jennifer O'Loughlin stated that nicotine addiction can occur as soon as five months after the start of smoking.\n",
"According to the National Institute on Drug Abuse, 1 in 5 preventable deaths, in the United States, is caused by tobacco use. Nicotine is the addictive drug found in most tobacco products and is easily absorbed by the bloodstream of the body. Despite common misconceptions regarding the relaxing effects of tobacco and nicotine use, behavioral testing in animals has demonstrated nicotine to have an anxiogenic effect. Nicotinic acetylcholine receptors (nAChRs) have been identified as the primary site for nicotine activity and regulate consequent cellular polarization. nAChRs are made up a number of α and β subunits and are found in both the LHb and MHb, where research suggests they may play a key role in addiction and withdrawal behaviors.\n"
] |
Did the ancient Romans have a system for writing music? | They used the old Greek letter notation as well as Greek music theory. This was, as far as we can tell, a matter for the educated in theorising about music, rather than a tool for musicians to help remember and communicate musical ideas. One of the best preserved antique pieces of music is from the Roman period, but it is culturally Greek rather than Roman: the [Seikilos Epitaph](_URL_0_), which was inscribed on a tombstone found in what is now Turkey. As far as I am aware, we have no evidence in the form of written-down music of how music may have sounded in the city of Rome, though it surely changed a lot over the centuries. | [
"Rome's adoption of papyrus facilitated the spread of writing and the growth of bureaucratic administration needed to govern vast territories. The efficiency of the alphabet strengthened monopolies of knowledge in a variety of ancient empires. Innis warns about the power of writing to create mental \"grooves\" which determine \"the channels of thought of readers and later writers.\"\n",
"The Romans may have borrowed the Greek method of 'enchiriadic notation' to record their music, if they used any notation at all. Four letters (in English notation 'A', 'G', 'F' and 'C') indicated a series of four succeeding tones. Rhythm signs, written above the letters, indicated the duration of each note. Roman art depicts various woodwinds, \"brass\", percussion and stringed instruments. Roman-style instruments are found in parts of the Empire where they did not originate, and indicate that music was among the aspects of Roman culture that spread throughout the provinces.\n",
"The Romans may have borrowed the Greek method of 'enchiriadic notation' to record their music, if they used any notation at all. Four letters (in English notation 'A', 'G', 'F' and 'C') indicated a series of four succeeding tones. Rhythm signs, written above the letters, indicated the duration of each note. Roman art depicts various woodwinds, \"brass\", percussion and stringed instruments. Roman-style instruments are found in parts of the Empire where they did not originate, and indicate that music was among the aspects of Roman culture that spread throughout the provinces.\n",
"The earliest extant examples of ancient Greek writing (circa 1450 BCE) are in the syllabic script Linear B. Beginning in the 8th century BCE, however, the Greek alphabet became standard, albeit with some variation among dialects. Early texts are written in boustrophedon style, but left-to-right became standard during the classic period. Modern editions of Ancient Greek texts are usually written with accents and breathing marks, interword spacing, modern punctuation, and sometimes mixed case, but these were all introduced later.\n",
"There have probably been pseudepigrapha almost from the invention of full writing. For example, ancient Greek authors often refer to texts which claimed to be by Orpheus or his pupil Musaeus of Athens but which attributions were generally disregarded. Already in Antiquity the collection known as the \"Homeric Hymns\" was recognized as pseudepigraphical, that is, not actually written by Homer. The only book surviving from Ancient Rome on Cooking is pseudepigraphically attributed to a famous gourmet, Apicius, even though it is not clear who actually assembled the recipes.\n",
"Due to Rome's reverence for Greek culture, the Romans borrowed the Greek method of 'enchiriadic notation' (marks which indicated the general shape of the tune but not the exact notes or rhythms) to record their music, if they used any notation at all.\n",
"The Romans in Southern Italy eventually adopted the Greek alphabet as modified by the Etruscans to develop Latin writing. Like the Greeks, the Romans employed stone, metal, clay, and papyrus as writing surfaces. Handwriting styles which were used to produce manuscripts included square capitals, rustic capitals, uncials, and half-uncials. Square capitals were employed for more-formal texts based on stone inscriptional letters, while rustic capitals freer, compressed, and efficient. Uncials were rounded capitals (majuscules) that originally were developed by the Greeks in the third century BC, but became popular in Latin manuscripts by the fourth century AD. Roman cursive or informal handwriting started out as a derivative of the capital letters, though the tendency to write quickly and efficiently made the letters less precise. Half-uncials (minuscules) were lowercase letters, which eventually became the national hand of Ireland. Other combinations of half-uncial and cursive handwriting developed throughout Europe, including Visigothic, and Merovingian.\n"
] |
Timothy Snyder states that there is no official French history of WW2 because "more French soldiers fought on the Axis side than the Allied side." Is this true? | So I'm not entirely sure that Snyder is being serious there? Right after he states it, he goes on to say "OK, you didn't think that was as funny as I did." If he *is* serious, well, it is a hilariously silly thing to state. At the outbreak of war, France was able to mobilize roughly 5 *million* soldiers across the three main forces it controlled - the Metropolitan Army, the Army of Africa, and the Colonial Troops. By the invasion of France, 94 divisions were operational in France.
Frenchmen certainly fought in the German military, but not in numbers anywhere near those for the Allies. The 33rd Waffen-SS Division Charlemagne saw only in the ballpark of 10,000 men (in my brief look, sources seem to be in marked disagreement on the exact number), and the 638th Infantry Regiment - the "Legion of French Volunteers Against Bolshevism" - adds a few thousand more to that number. Even if we are incredibly charitable and count the 100,000 men of the Vichy Army of the Armistice and the Vichy-era Army of Africa's 225,000 men, we still fall woefully short of the number of French soldiers fighting for the Allies in early 1940.
And if we don't want to count that, and *just* look at the Free French: even the initial Free French Forces numbered about 7,000 soldiers and 3,600 sailors - hardly puny next to the German-aligned formations above, leaving Vichy aside - and by mid-1944 the Free French numbered 400,000 men. We can split hairs over whether they were "Frenchmen", since a large part of the force was drawn from French colonial possessions and so included men we would perhaps instead refer to as Algerian or Senegalese, but the original army in France in 1940 had a strong minority of colonial troops anyways, and not counting them would seem to discount their contribution and sacrifices.
So in short: while, again, I read him as making a joke - and his actual point seems to be about the sacrifices of Ukrainians versus those of the French - France had literally millions of men serving in the Allied forces in 1940, and the Free French were nearing half a million later in the war, which certainly dwarfs the French formations within the German military.
Numbers mostly taken from the Encyclopedia of World War II, ed. Alan Axelrod; also "'La Grande Armée in Field Gray': The Legion of French Volunteers Against Bolshevism, 1941" by Oleg Beyda, and "Hitler's Gauls" by Jonathan Trigg | [
"The complex and ambiguous situation of France from 1939 to 1945, since its military forces fought on both sides under French, British, German, Soviet, US or without uniform – often subordinated to Allied or Axis command – led to some criticism \"vis-à-vis\" its actual role and allegiance, much like with Sweden during World War II.\n",
"The military history of France during World War II covers three periods. From 1939 until 1940, which witnessed a war against Germany by the French Third Republic. The period from 1940 until 1945, which saw competition between Vichy France and the Free French Forces under General Charles de Gaulle for control of the overseas empire. And 1944, witnessing the landings of the Allies in France (Normandy, Provence), expelling the German Army and putting an end to the Vichy Regime.\n",
"There was debate among the other Allies as to whether France should share in the occupation of the defeated Germany because of fears that the long Franco–German rivalry might interfere with the rebuilding of Germany. Ultimately the French were allowed to participate and from 1945 to 1955, French troops were stationed in the Rhineland, Baden-Württemberg, and part of Berlin, and these areas were put under a French military governor. The Saar Protectorate was allowed to rejoin West Germany only in 1957.\n",
"Until the end of 1916 the French under Joffre had been the dominant allied army; after 1917 this was no longer the case, due to the vast number of casualties France's armies had suffered in the now three and a half year old struggle with Germany.\n",
"The Allies believed that the Vichy French forces would not fight, partly because of information supplied by American Consul Robert Daniel Murphy in Algiers. The French were former members of the Allies and the American troops were instructed not to fire unless they were fired upon.\n",
"Following defeat in the Franco-Prussian War, Franco-German rivalry erupted again in the First World War. France and its allies were victorious this time. Social, political, and economic upheaval in the wake of the conflict led to the Second World War, in which the Allies were defeated in the Battle of France and the French government surrendered and was replaced with an authoritarian regime. The Allies, including the government in exile's Free French Forces and later a liberated French nation, eventually emerged victorious over the Axis powers. As a result, France secured an occupation zone in Germany and a permanent seat on the United Nations Security Council. The imperative of avoiding a third Franco-German conflict on the scale of those of two world wars paved the way for European integration starting in the 1950s. France became a nuclear power and since the 1990s its military action is most often seen in cooperation with NATO and its European partners.\n",
"Some serious discrepancies between Allied squadron records and German claims have caused some historians and Allied veterans to question the accuracy of Marseille's official victories, in addition to those of \"JG 27\" as a whole. Attention is often focused on the 26 claims made by \"JG 27\" on 1 September 1942, of which 17 were claimed by Marseille alone. A USAF historian, Major Robert Tate states: \"[f]or years, many British historians and militarists refused to admit that they had lost any aircraft that day in North Africa. Careful review of records however do show that the British [and South Africans] did lose more than 17 aircraft that day, and in the area that Marseille operated.\" Tate also reveals 20 RAF single-engined fighters and one twin engined fighter were destroyed and several others severely damaged, as well as a further USAAF P-40 shot down. However, overall Tate reveals that Marseille's kill total comes close to 65–70 percent corroboration, indicating as many as 50 of his claims may not have actually been kills. Tate also compares Marseilles rate of corroboration with the top six P-40 pilots. While only the Canadian James Francis Edwards' records shows a verification of 100 percent other aces like Clive Caldwell (50% to 60% corroboration), Billy Drake (70% to 80% corroboration), John Lloyd Waddy (70% to 80% corroboration) and Andrew Barr (60% to 70% corroboration) are at the same order of magnitude as Marseille's claims. Christopher Shores and Hans Ring also support Tate's conclusions. British historian Stephen Bungay gives a figure of 20 Allied losses that day.\n"
] |
With high magnification and low exposure, can telescopes see the shape of the nearest stars to the Sun (like the Alpha Centauri system, or Barnard's Star)? Or are these stars still too far away and appear only as points? | Larger stars can be resolved, for example [Betelgeuse](_URL_0_).
Sirius, a large and close star, when imaged with Hubble, basically looks like a point spread function: _URL_1_
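To make the scale concrete, here is a rough back-of-the-envelope sketch (not part of the original answer; all values are approximate, and the 550 nm wavelength is just a representative choice for visible light) comparing the Rayleigh diffraction limit of a Hubble-sized 2.4 m mirror with the angular size of Alpha Centauri A's disk:

```python
# Rough sketch: can a 2.4 m telescope resolve the disk of Alpha Centauri A?
# All figures are approximate, rounded values.

wavelength = 550e-9            # metres; a representative visible wavelength
aperture = 2.4                 # metres; Hubble-sized primary mirror

# Rayleigh diffraction limit (radians): theta = 1.22 * lambda / D
theta_limit = 1.22 * wavelength / aperture

# Angular diameter of a distant star (radians): physical diameter / distance
sun_diameter = 1.39e9          # metres
light_year = 9.46e15           # metres
distance = 4.37 * light_year   # Alpha Centauri is about 4.37 light-years away
theta_star = 1.2 * sun_diameter / distance  # Alpha Cen A is ~1.2 solar diameters

RAD_TO_MAS = 206_265 * 1000    # radians -> milliarcseconds
print(f"diffraction limit: {theta_limit * RAD_TO_MAS:.0f} mas")  # ~58 mas
print(f"Alpha Cen A disk:  {theta_star * RAD_TO_MAS:.1f} mas")   # ~8 mas
```

Since the star's disk (~8 milliarcseconds) comes out several times smaller than the diffraction limit (~58 milliarcseconds), even Hubble records it as a point spread function, whereas a supergiant like Betelgeuse, at roughly 40-50 milliarcseconds across, sits near the limit and can be marginally resolved. | [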
"This star system has an apparent visual magnitude of +3.0, making it one of the brighter stars in the constellation and hence readily visible to the naked eye. Parallax measurements from the Hipparcos mission yield a distance estimate of around from the Sun. This is a single-lined spectroscopic binary system, which means that the pair have not been individually resolved with a telescope, but the gravitational perturbations of an unseen astrometric companion can be discerned by shifts in the spectrum of the primary caused by the Doppler effect. The pair orbit around their common center of mass once every 675 days with an eccentricity of 0.57.\n",
"Some stars visible to the naked eye have such a low absolute magnitude that they would appear bright enough to outshine the planets and cast shadows if they were at 10 parsecs from the Earth. Examples include Rigel (−7.0), Deneb (−7.2), Naos (−6.0), and Betelgeuse (−5.6). For comparison, Sirius has an absolute magnitude of 1.4, which is brighter than the Sun, whose absolute visual magnitude is 4.83 (it actually serves as a reference point). The Sun's absolute bolometric magnitude is set arbitrarily, usually at 4.75.\n",
"If the Sun were to be observed from the Alpha Centauri system, the nearest star system to ours, it would appear to be a 0.46 magnitude star in the constellation Cassiopeia, and would create a \"/W\" shape instead of the \"W\" as seen from Earth. Due to the proximity of the Alpha Centauri system, the constellations would, for the most part, appear similar. However, there are some notable differences with the position of other nearby stars; for example, Sirius would appear about one degree from the star Betelgeuse in the constellation Orion. Also, Procyon would appear in the constellation Gemini, about 13 degrees below Pollux.\n",
"This system is located approximately 500 light-years away from Earth in the Lynx constellation. Both of these stars are slightly cooler than the Sun and are nearly identical to each other. The system has a magnitude of 11 and cannot be seen with the naked eye but is visible through a small telescope. These stars are also notable for their large proper motions.\n",
"Because of Proxima Centauri's southern declination, it can only be viewed south of latitude 27° N. Red dwarfs such as Proxima Centauri are too faint to be seen with the naked eye. Even from Alpha Centauri A or B, Proxima would only be seen as a fifth magnitude star. It has an apparent visual magnitude of 11, so a telescope with an aperture of at least is needed to observe it, even under ideal viewing conditions—under clear, dark skies with Proxima Centauri well above the horizon.\n",
"With larger amateur telescopes, the nebulosity around some of the stars can be easily seen; especially when long-exposure photographs are taken. Under ideal observing conditions, some hint of nebulosity around the cluster may even be seen with small telescopes or average binoculars. It is a reflection nebula, caused by dust reflecting the blue light of the hot, young stars.\n",
"Barnard's Star is a red dwarf of apparent magnitude 9 and is thus too dim to be seen with the unaided eye. However, at approximately 6 light-years away it is the second-closest stellar system to the Sun; only the Alpha Centauri system is known to be closer. Thus, even though it is suspected to be a flare star, it has attracted the attention of science fiction authors, filmmakers, and game developers. A claim has been made for the discovery by astrometry of one or more extrasolar planets in the Barnard's system, but it has been refuted as an artifact of telescope maintenance and upgrade work.\n"
] |
how is it decided whether someone is sane or insane during a trial? | During a trial, the final decision lies with the jury (assuming you are talking about the US court system)
Since Reagan signed the Insanity Defense Reform Act in 1984, it has been up to the defense to prove that the defendant was not sane. Both sides can call upon so-called expert witnesses (people who are specialised in a particular field and can therefore provide information) who give their opinion on the mental state of the defendant. This is generally done by means of interviews, and possibly by studying things like writings the defendant left beforehand.
There are different standards and tests for criminal insanity, which vary from state to state. Mainly, they focus on whether or not someone was able to understand what they were doing at the time, or to understand the consequences. This is a much narrower definition than mental illness outside of the criminal justice system: someone can be mentally ill (for example, due to depression or anxiety) without necessarily also being criminally insane.
In any case, the insanity defense is a very rare thing to pursue (used in less than 1% of all cases) and very often doesn't exactly lead to people going 'free'. Rather, they go to a mental health facility where they can actually get help for their problems. | [
"Where the defendant is alleged to have been insane at the time of committing the offence, this issue can be raised in one of three ways; the defendant can claim he was insane, the defendant can raise a defence of Automatism where the judge decides it was instead insanity, or the defendant can raise a plea of diminished responsibility, where the judge or prosecution again show that insanity is more appropriate. Whatever the way in which a plea of insanity is reached, the same test is used each time, as laid out in the M'Naghten Rules; \"to establish a defence on the ground of insanity, it must be clearly proved that, at the time of the committing of the act, the party accused was labouring under such a defect of reason, from disease of the mind, as not to know the nature and quality of the act he was doing; or, if he did know it, that he did not know what he was doing was wrong\".\n",
"Therefore, a person whose mental disorder is not in dispute is determined to be sane if the court decides that despite a \"mental illness\" the defendant was responsible for the acts committed and will be treated in court as a normal defendant. If the person has a mental illness and it is determined that the mental illness interfered with the person's ability to determine right from wrong (and other associated criteria a jurisdiction may have) and if the person is willing to plead guilty or is proven guilty in a court of law, some jurisdictions have an alternative option known as either a Guilty but Mentally Ill (GBMI) or a Guilty but Insane verdict. The GBMI verdict is available as an alternative to, rather than in lieu of, a \"not guilty by reason of insanity\" verdict. Michigan (1975) was the first state to create a GBMI verdict, after two prisoners released after being found NGRI committed violent crimes within a year of release, one raping two women and the other killing his wife.\n",
"If a defendant at the time of trial claims he is insane, this hinges on whether or not he is able to understand the charge, the difference between \"guilty\" and \"not guilty\" and is able to instruct his lawyers. If he is unable to do these things, he can be found \"unfit to plead\" under Section 4 of the Criminal Procedure (Insanity) Act 1964. In that situation, the judge has wide discretion as to what to do with the defendant, except in cases of murder, where he must be detained in hospital.\n",
"The rule states that every person is assumed to be sane and that to establish a ground of insanity, it must be proved that at the time of committing a crime, the criminal was acting due to a \"defect of reason\" or mental illness, causing a lack of understanding the nature of the act. The rule includes as a test of distinguishing whether or not a defendant can determine the difference between right and wrong.\n",
"The question of competency to stand trial is a question of an offender's current state of mind. This assesses the offender's ability to understand the charges against them, the possible outcomes of being convicted/acquitted of these charges and their ability to assist their attorney with their defense. The question of sanity/insanity or criminal responsibility is an assessment of the offender's state of mind at the time of the crime. This refers to their ability to understand right from wrong and what is against the law. The insanity defense is rarely used, as it is very difficult to prove. If declared insane, an offender is committed to a secure hospital facility for much longer than they would have served in prison—theoretically, that is. \n",
"Prior to the enactment of the law, the federal standard for \"insanity\" was that the government had to prove a defendant's sanity beyond a reasonable doubt (assuming the insanity defense was raised). Following the Act's enactment, the defendant has the burden of proving insanity by \"clear and convincing evidence.\" Furthermore, expert witnesses for either side are prohibited from testifying directly as to whether the defendant was legally sane or not, but can only testify as to their mental health and capacities, with the question of sanity itself to be decided by the finder-of-fact at trial. The Act was held to be constitutional (and the change in standards and burdens of proof are discussed) in \"United States v. Freeman\".\n",
"Mental disorder may apply to a wide range of disorders including psychosis caused by schizophrenia and dementia, and excuse the person from the need to undergo the stress of a trial as to liability. Usually, sociopathy and other personality disorders are not legally considered insanity, because of the belief they are the result of free will in many societies. In some jurisdictions, following the pre-trial hearing to determine the extent of the disorder, the defence of \"not guilty by reason of insanity\" may be used to get a not guilty verdict. This defence has two elements:\n"
] |