question (string, 3–301 chars) | answer (string, 9–26.1k chars) | context (sequence)
---|---|---
Popular history YouTuber Feature History claims that "Hutu" and "Tutsi" were originally class distinctions rather than ethnic ones. How much merit does this claim have? | Here's an answer from another thread that addresses your question:
_URL_0_
credits to /u/gplnd | [
"A contrasting picture of human cultural diversity was recorded in the early Rwandan oral histories, ritual texts, and biographies, in which the terms Tutsi, Hutu, and Twa were quite rarely used and had meanings different from those conceived by the Europeans. In those, the term Tutsi was equivalent to the phrase \"wealthy noble\"; Hutu meant \"farmer\"; and Twa was used to refer to people skilled in hunting, use of fire, pottery-making, guarding, etc. In contrast to the European conception, rural farmers are often described as wealthy and well-connected. Kings sometimes looked down on them but still formed marriage bonds with them and are frequently described as conferring titles, land, herds, armies, servitors, and ritual functions on them.\n",
"A history of Rwanda that justified the existence of these racial distinctions was written. No historical, archaeological, or above all linguistic traces have been found to date that confirm this official history. The observed differences between the Tutsis and the Hutus are about the same as those evident between the different French social classes in the 1950s. The way people nourished themselves explains a large part of the differences: the Tutsis, since they raised cattle, traditionally drank more milk than the Hutu, who were farmers.\n",
"The origins of the Tutsi and Hutu people is a major issue in the histories of Rwanda and Burundi, as well as the Great Lakes region of Africa. The relationship between the two modern populations is thus, in many ways, derived from the perceived origins and claim to \"Rwandan-ness\". The largest conflicts related to this question were the Rwandan genocide, the Burundian genocide, and the First and Second Congo Wars.\n",
"The modern conception of Tutsi and Hutu as distinct ethnic groups in no way reflects the pre-colonial relationship between them. Tutsi and Hutu were simply groups occupying different places in the Rwandan social hierarchy, the division between which was exacerbated by slight differences in appearance propagated by occupation and pedigree.\n",
"The classification of Hutu or Tutsi was not merely based on ethnic criteria alone. Hutu farmers that managed to acquire wealth and livestock were regularly granted the higher social status of Tutsi, some even made it to become close advisors of the Ganwa. On the other hand, there are also reports of Tutsi that lost all their cattle and subsequently lost their higher status and were called Hutu. Thus, the distinction between Hutu and Tutsi was also a socio-cultural concept, instead of a purely ethnic one. There were also many reports of marriages between Hutu and Tutsi people. In general, regional ties and tribal power struggles played a far more determining role in Burundi's politics than ethnicity.\n",
"Still others suggest that the two groups are related but not identical, and that differences between them were exacerbated by Europeans, or by a gradual, natural split, as those who owned cattle became known as Tutsi and those who did not became Hutu. Mahmood Mamdani states that the Belgian colonial power designated people as Tutsi or Hutu on the basis of cattle ownership, physical measurements and church records.\n",
"Tension between the majority Hutu and the minority Tutsi had developed over time but was particularly emphasized late in the nineteenth century and early in the twentieth century as a result of German and Belgian colonialism over Rwanda. The ethnic categorization of the two were an imposed and an arbitrary construct based more on physical characteristics than ethnic background. However, the social differences between the Hutu and the Tusi have traditionally allowed the Tutsi, with a strong pastoralist tradition, to gain social, economic, and political ascendancy over the Hutu, who were primarily agriculturalists. The distinction under colonial powers allowed Tutsis to establish ruling power, until a Hutu revolution in 1959 abolished the Tutsi monarchy by 1961. \n"
] |
How did the Thirty Years' War and the Paraguayan War lead to such a huge loss of life? | First: a loss of population from pre-war levels is a very different thing from a direct count of casualties. The Holy Roman Empire's population in 1650 was roughly two-thirds of what it had been in 1615 (though some regions suffered far more directly), but much of that loss reflected refugees, lost territory, and other forms of expatriation rather than deaths.
Of the many who did die, only a relatively small fraction would have died directly from battlefield causes. In the case of the Thirty Years' War, constant disruption to agriculture and commerce resulted in many deaths from famine and disease. This scenario *also* results in a population much less willing to procreate, and substantially increases infant and childhood mortality rates (and maternal mortality, for that matter).
Armies in the pre-modern world were often effectively swarms of locusts: even ostensibly friendly armies, like the Lutheran Swedes in Protestant Brandenburg, would inflict grievous harm on whatever region they passed through. Simply supplying these armies could absorb a large share of the available food and essentials, before we even talk about looting and pillaging (or incidents like the Sack of Magdeburg, where a victorious army slaughtered ~20,000 townspeople in a single day).
In short, battlefield deaths in the Thirty Years' War were only a portion of the overall deaths resulting from the war. I expect something similar for the Paraguayan War.
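To make the distinction between total population decline and direct casualties concrete, here's a toy decomposition in Python. Every figure except the rough two-thirds ratio is invented purely for illustration; these are not historical estimates:

```python
# Toy decomposition of the Holy Roman Empire's population decline.
# All numbers below are invented round figures for illustration only;
# they are NOT historical estimates.
pop_1615 = 20_000_000
pop_1650 = 13_400_000          # roughly two-thirds of the 1615 level

decline = pop_1615 - pop_1650  # total "loss" a naive reading counts as deaths

# A plausible-shaped (but fictional) breakdown of that decline:
battle_deaths  = 450_000       # direct battlefield casualties
famine_disease = 3_550_000     # famine, plague, and related excess mortality
emigration     = 1_600_000     # refugees who left and never returned
birth_deficit  = 1_000_000     # children never born amid the disruption

accounted = battle_deaths + famine_disease + emigration + birth_deficit
assert accounted == decline    # the breakdown sums to the total decline

share_battle = battle_deaths / decline
print(f"Total decline: {decline:,}")
print(f"Battlefield deaths as share of decline: {share_battle:.0%}")
```

Even with made-up numbers, the point survives: the battlefield share of a demographic collapse can be a small sliver of the headline figure.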
My main sources are Peter Wilson's recent *Europe's Tragedy: the Thirty Years War* alongside Geoffrey Parker's significantly older *Thirty Years' War*. | [
"The losses of the century of war were enormous, particularly owing to the plague (the Black Death, usually considered an outbreak of bubonic plague), which arrived from Italy in 1348, spreading rapidly up the Rhone valley and thence across most of the country: it is estimated that a population of some 18–20 million in modern-day France at the time of the 1328 hearth tax returns had been reduced 150 years later by 50% or more.\n",
"Paraguay lost 25-33% of its territory to Argentina and Brazil, was forced to pay an enormous war debt and to sell large amounts of national properties in order to restore its internal budget. But the worst consequence of the war was the catastrophic loss of population. At least 50% of the Paraguayans died during the conflict and took long decades for the country to recover. About the disaster suffered by the Paraguayans at the outcome of the war, William D. Rubinstein wrote:\n",
"The Paraguayan War (also known as the War of the Triple Alliance) of 1864–1870 was the most lethal in South American history, and − in terms of its relative mortality − very possibly the worst in modern history, anywhere. It actually commenced when López seized the Brazilian ship \"Marques de Olinda\" on her routine voyage up the River Paraguay to the Brazilian province of Mato Grosso, then sent military forces to invade the province itself. It developed further when he seized Argentine naval vessels moored in the port of Corrientes, north east Argentina. Thereafter López sent two further armies, one to invade the Argentine province of Corrientes along the River Paraná and the other to invade the Brazilian province of Rio Grande do Sul along the River Uruguay. On 1 May 1865 Brazil, Argentina and Uruguay signed the Treaty of the Triple Alliance by which they would not negotiate peace with Paraguay until the government of López had been deposed.\n",
"The traditional estimate was that the War cost Paraguay at least half its population including military and civilian casualties (the latter mainly owing to disease, dislocation and malnutrition) and that 90% of males of military age died. If that was so the Paraguayan War must have been 10 to 20 times more lethal than the slightly earlier American Civil War. The traditional estimate was based partly on anecdotal evidence and partly on a supposed census of 1857 which gave Paraguay a population of about 1.3 million, which, if correct, implied an utterly catastrophic decline in the subsequent War. The following extract from an unsigned article in the 1911 edition of the Encyclopædia Britannica: is illustrative of the spurious precision of the era.\n",
"The war which ensued, lasting until 1 March 1870, was carried on with great stubbornness and with alternating fortunes, though López's disasters steadily increased. His first major setback came on 11 June 1865, when the powerless Paraguayan fleet was destroyed by the Brazilian Navy at the Battle of Riachuelo, which gave the Allies control over the various waterways surrounding Paraguay and forced Lopez to withdraw from Argentina.\n",
"The war ended with the total defeat of Paraguay. After it lost in conventional warfare, Paraguay conducted a drawn-out guerrilla resistance, a disastrous strategy that resulted in the further destruction of the Paraguayan military and much of the civilian population through battle casualties, hunger and diseases. The guerrilla war lasted 14 months until President Francisco Solano López was killed in action by Brazilian forces in the Battle of Cerro Corá on 1 March 1870. Argentine and Brazilian troops occupied Paraguay until 1876. Estimates of total Paraguayan losses range from 21,000 to 200,000 people. It took decades for Paraguay to recover from the chaos and demographic losses.\n"
] |
when microwaving food, why does it seem to get more soggy rather than crunchy? | Microwaves specialize in heating up moisture specifically. The heated moisture then tends to steam and diffuse, making crunchy things less crunchy and more damp (soggy).
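As a rough sense of the magnitudes involved, here's a back-of-envelope sketch; the power, water mass, and coupling efficiency are assumed typical values, not measurements:

```python
# Back-of-envelope: how quickly microwave energy heats the water in food
# and starts producing crust-softening steam. Power, mass, and coupling
# efficiency are assumed typical values, not measurements.
power_w     = 800      # magnetron output, typical consumer oven (assumed)
efficiency  = 0.5      # fraction actually absorbed by the food (assumed)
water_mass  = 0.05     # kg of water in the food (assumed)

c_water     = 4186     # J/(kg*K), specific heat of liquid water
latent_heat = 2.26e6   # J/kg, heat of vaporization of water

absorbed_w = power_w * efficiency

# Energy and time to take that water from 20 C to 100 C:
heat_energy = water_mass * c_water * (100 - 20)
t_heat = heat_energy / absorbed_w

# Then vaporizing just 5 g of it pumps steam into the crust:
steam_energy = 0.005 * latent_heat
t_steam = steam_energy / absorbed_w

print(f"~{t_heat:.0f} s to reach boiling, ~{t_steam:.0f} s more to make 5 g of steam")
```

In well under a minute the water is boiling and venting steam into the dry outer layer, which is exactly the soggy-crust effect.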
Feel free to fact check. | [
"Eating deteriorated food could not be considered safe due to mycotoxins or microbial wastes. Some pathogenic bacteria, such as \"Clostridium perfringens\" and \"Bacillus cereus\", are capable of causing spoilage.\n",
"The spoilage of food products caused by microbes is a concern for many sub-sectors of the food industry. An estimated 25% of the world’s food is lost due to microorganism activity. Such food spoilage results in food wastage as products become unsuitable for consumption, causing large financial losses. Recent technological progression has led to the development of techniques targeted to prevent the activity and growth of food contaminating microbes. \n",
"BULLET::::- \"The desirable nutritional changes that occur during sprouting are mainly due to the breakdown of complex compounds into a more simple form, transformation into essential constituents and breakdown of nutritionally undesirable constituents. This is a reason why sprouts are also called pre-digested foods \"\n",
"To prevent the infestation of foodstuffs by pests of stored products, or “pantry pests”, a thorough inspection must be conducted of the food item intended for purchase at the supermarket or the place of purchase. The expiration date of grains and flour must also be noted, as products that sit undisturbed on the shelf for an extended period of time are more likely to become infested. This does not, however, exclude even the freshest of products from being contaminated. Packaging should be inspected for tiny holes that indicate there might be an infestation. If there is evidence of an insect infestation, the product should not be purchased. The store should be notified immediately, as further infestation must be prevented. Most stores have a plan of action for insect infestations. Bringing an infested product into a pantry or a home leads to a greater degree of infestation.\n",
"Food spoilage is detrimental to the food industry due to production of volatile compounds from organisms metabolizing the various nutrients found in the food product. Contamination results in health hazards from toxic compound production as well as unpleasant odours and flavours. Electronic nose technology allows fast and continuous measurement of microbial food spoilage by sensing odours produced by these volatile compounds. Electronic nose technology can thus be applied to detect traces of Pseudomonas milk spoilage and isolate the responsible Pseudomonas species. The gas sensor consists of a nose portion made of 14 modifiable polymer sensors that can detect specific milk degradation products produced by microorganisms. Sensor data is produced by changes in electric resistance of the 14 polymers when in contact with its target compound, while four sensor parameters can be adjusted to further specify the response. The responses can then be pre-processed by a neural network which can then differentiate between milk spoilage microorganisms such as P. fluorescens and P. aureofaciens.\n",
"Bacteria are responsible for the spoilage of food. When bacteria breaks down the food, acids and other waste products are created in the process. While the bacteria itself may or may not be harmful, the waste products may be unpleasant to taste or may even be harmful to one's health.\n",
"Biofilms have become problematic in several food industries due to the ability to form on plants and during industrial processes. Bacteria can survive long periods of time in water, animal manure, and soil, causing biofilm formation on plants or in the processing equipment. The buildup of biofilms can affect the heat flow across a surface and increase surface corrosion and frictional resistance of fluids. These can lead to a loss of energy in a system and overall loss of products. Along with economic problems, biofilm formation on food poses a health risk to consumers due to the ability to make the food more resistant to disinfectants As a result, from 1996 to 2010 the Center for Disease Control and Prevention estimated 48 million foodborne illnesses per year. Biofilms have been connected to about 80% of bacterial infections in the United States.\n"
] |
Are there solar systems that are not contained in galaxies? How would our solar system be different if that were the case? | You can contrive situations where it might occur. Stars can be lost from a host galaxy by a few different means: mergers, supernovae, scattering off hard binaries, etc. Generally, anything that can throw stars into different orbits will do.
If you have a very tight solar system, say a star and a hot Jupiter, it wouldn't be impossible for them to stay together. Solar systems like ours I don't see staying together: sure, the size of the solar system is small compared to the other scales in the system, but there are a lot of possible torques acting on it. (It's also quite late, otherwise I would do the calculation for fun.)
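For what it's worth, the "calculation for fun" might look something like a tidal (Jacobi) radius estimate. The enclosed galactic mass and orbital radius below are rough assumed round values; the result suggests a quiet galactic tide alone wouldn't strip a Neptune-scale system, so the danger comes from the violent ejection mechanisms themselves:

```python
# Rough estimate of the Sun's tidal (Jacobi) radius in the Galaxy, i.e.
# roughly how far out planets stay bound against the smooth galactic tide.
# Inputs are round textbook-ish values, not precise measurements.
M_GAL   = 1.0e11      # solar masses enclosed within the Sun's orbit (assumed)
M_STAR  = 1.0         # solar masses
A_PC    = 8000.0      # Sun's galactocentric distance in parsecs (~8 kpc)
AU_PER_PC = 206_265   # astronomical units per parsec

# Jacobi radius: r_t ~ a * (m / (3 M))**(1/3)
r_t_pc = A_PC * (M_STAR / (3 * M_GAL)) ** (1.0 / 3.0)
r_t_au = r_t_pc * AU_PER_PC

neptune_au = 30.0     # Neptune's orbital radius for comparison
print(f"tidal radius ~ {r_t_pc:.1f} pc ~ {r_t_au:,.0f} AU "
      f"({r_t_au / neptune_au:,.0f}x Neptune's orbit)")
```

So the planets sit thousands of times deeper than the tidal boundary under quiet conditions; it's the close encounters and kicks that do the stripping.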
So yes, it's not impossible in theory. But like with a lot of things, it's not overly likely, and the ones you would find would be tightly bound systems. | [
"Based on observations from the \"Hubble Space Telescope\", there are between 125 and 250 billion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets, i.e. there are stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe.\n",
"If Kepler's laws are correct, then the obvious way to resolve this discrepancy is to conclude the mass distribution in spiral galaxies is not similar to that of the Solar System. In particular, there is a lot of non-luminous matter (dark matter) in the outskirts of the galaxy.\n",
"The Solar System consists of an inner region of small rocky planets and outer region of large gas giants. However, other planetary systems can have quite different architectures. Studies suggest that architectures of planetary systems are dependent on the conditions of their initial formation. Many systems with a hot Jupiter gas giant very close to the star have been found. Theories, such as planetary migration or scattering, have been proposed for the formation of large planets close to their parent stars.\n",
"When a galaxy or a planetary system forms, its material takes the shape of a disk. Most of the material orbits and rotates in one direction. This uniformity of motion is due to the collapse of a gas cloud. The nature of the collapse is explained by the principle called conservation of angular momentum. In 2010 the discovery of several hot Jupiters with backward orbits called into question the theories about the formation of planetary systems. This can be explained by noting that stars and their planets do not form in isolation but in star clusters that contain molecular clouds. When a protoplanetary disk collides with or steals material from a cloud this can result in retrograde motion of a disk and the resulting planets.\n",
"Further observational improvements led to the realization that the Sun is one of hundreds of billions of stars in the Milky Way, which is one of at least hundreds of billions of galaxies in the Universe. Many of the stars in our galaxy have planets. At the largest scale galaxies are distributed uniformly and the same in all directions, meaning that the Universe has neither an edge nor a center. At smaller scales, galaxies are distributed in clusters and superclusters which form immense filaments and voids in space, creating a vast foam-like structure. Discoveries in the early 20th century have suggested that the Universe had a beginning and that space has been expanding since then, and is currently still expanding at an increasing rate.\n",
"Many Solar System objects are known to possess satellite systems, though their origin is still unclear. Notable examples include the largest satellite system, the Jovian system, with 79 known moons (including the large Galilean moons) and the Saturnian System with 62 known moons (and the most visible ring system in the Solar System). Both satellite systems are large and diverse. In fact all of the giant planets of the Solar System possess large satellite systems as well as planetary rings, and it is inferred that this is a general pattern. Several objects farther from the Sun also have satellite systems consisting of multiple moons, including the complex Plutonian system where multiple objects orbit a common center of mass, as well as many asteroids and plutinos. Apart from the Earth-Moon system and Mars' system of two tiny natural satellites, the other terrestrial planets are generally not considered satellite systems, although some have been orbited by artificial satellites originating from Earth.\n",
"The authors explain, in a manner consistent with M-theory, that as the Earth is only one of several planets in our solar system, and as our Milky Way galaxy is only one of many galaxies, the same may apply to our universe itself: that is, our universe may be one of a huge number of universes.\n"
] |
Would the Lorica Segmentata have been a good choice of armor in medieval times? Could it have retained its effectiveness on the battlefield, let's say, around 1000 AD? | The threats on the battlefield of 1000 AD weren't really much different from those of 100 AD: spears and arrows for the most part, with slings, battleaxes and swords also being fairly common. Lorica segmentata offered adequate protection against all of those, so in that sense it was as good as ever.
However, lorica segmentata *always* had trouble competing with chainmail. Even at the height of its popularity, many Romans seem to have favoured chainmail. The segmentata's advantages were that it was well suited to mass production and might have offered superior protection from blunt trauma (although apparently it did worse than mail against Dacian falxes). Its downsides were that it was difficult to maintain, generally fit quite badly, had gaps at the armpits and didn't cover the legs at all.
For the feudal nobles of the early medieval period it would have been distinctly inferior to mail; they didn't mass-produce their armour and often fought on horseback where protection from spears stabbing up into the armpit or sword slashes to their legs was pretty important. | [
"\"Lorica segmentata\": Modern tests have shown that this \"lorica\" provided better protection to weapon-blows and missile-strikes than the other types of metal armour commonly used by Roman troops, mail (\"hamata\") or scale (\"squamata\"), being virtually impenetrable by ancient weapons. However, historical re-enactors have found replicas of the \"lorica\" uncomfortable due to chafing and could only wear them for relatively short periods. It was also more expensive to manufacture and difficult to maintain due to its complex design of separate laminated strips held together by braces and hooks.\n",
"The earliest evidence of the \"lorica segmentata\" being worn is around 9 BC (Dangstetten), and the armour was evidently quite common in service until the 2nd century AD, judging from the number of finds throughout this period (over 100 sites are known, many of them in Britain). However, even during the 2nd century AD, the \"segmentata\" never replaced the \"lorica hamata\" - thus the \"hamata\" mail was still standard issue for both heavy infantry and auxiliaries alike. The last recorded use of this armour seems to have been for the last quarter of the 3rd century AD (Leon, Spain).\n",
"While heavy, intricate armour was not uncommon (cataphracts), the Romans perfected a relatively light, full torso armour made of segmented plates (lorica segmentata). This segmented armour provided good protection for vital areas, but did not cover as much of the body as lorica hamata or chainmail. The lorica segmentata provided better protection, but the plate bands were expensive and difficult to produce and difficult to repair in the field. Overall, chainmail was cheaper, easier to produce, and simpler to maintain, was one-size fits all, and was more comfortable to wear – thus, it remained the primary form of armour even when lorica segmentata was in use.\n",
"Body armour remained in use with cuirassiers throughout the 19th century and into the early phase of World War I. The cuirass represents the final stage of the tradition of plate armour descended from the late medieval period. Meanwhile, makeshift steel armour against shrapnel and early forms of ballistic vests began to be developed from the mid 19th century.\n",
"Plate armour covered the entire body. Although parts of the body were already covered in plate armour as early as 1250, such as the Poleyns for covering the knees and Couters - plates that protected the elbows, the first complete full suit without any textiles was seen around 1410-1430. Components of medieval armour that made up a full suit consisted of a cuirass, a gorget, vambraces, gauntlets, cuisses, greaves, and sabatons held together by internal leather straps. Improved weaponry such as crossbows and the long bow had greatly increased range and power. This made penetration of the chain mail hauberk much easier and more common. By the mid 1400's most plate was worn alone and without the need of a hauberk. Advances in metal working such as the blast furnace and new techniques for carburizing made plate armour nearly impenetrable and the best armour protection available at the time. Although plate armour was fairly heavy, because each suit was custom tailored to the wearer, it was very easy to move around in. A full suit of plate armour was extremely expensive and mostly unattainable for the majority of soldiers. Only very wealthy land owners and nobility could afford it. The quality of plate armour increases as more armour makers became more proficient in metal working. A suit of plate armour became a symbol of social status and the best made were personalized with embellishments and engravings. Plate armour saw continued use in battle until the 17th century.\n",
"Some believed that the armor resembled that of the modern armadillo, but the popular opinion was the \"Megatherium\" theory. It was not until Professor E. D’Alton wrote a memoir to the Berlin Academy in 1833 comparing the extreme similarities of these mysterious fossils to that of the armadillo, that the scientific world seriously considered that the pieces of carapaces and fragments of bone could belong to some prehistoric version of \"Dasypus\". D’Alton said that \"all the peculiarities of the former [\"Dasypus\"] may be paralleled to the latter [fossil pieces]\" He concluded that the fossils belonged to some prehistoric version of an armadillo. However, since a full skeleton was not available at the time, he said that his idea was not conclusive. This uncertainty in the fossil remains continued until a man named Dr. Lund identified the remains as a new genus in his 1837 memoir.\n",
"Armorica declared independence from the Roman Empire in 407 CE, but contributed archers for Flavius Aetius's defence against Attila the Hun, and its king Riothamus was subsequently mentioned in contemporary documents as an ally of Rome's against the Goths. Despite its continued usage of two distinct languages, Breton and Gallo, and extensive invasions and conquests by Franks and Vikings, Armorica retained considerable cultural cohesion into the 13th century.\n"
] |
as a non-American; what is the carpool lane and why does it exist? | A restricted traffic **lane** reserved, at peak travel times or all day, for the exclusive use of vehicles carrying a driver and one or more passengers. It's often used as an incentive to share cars and reduce congestion and pollution, since carpoolers should get faster travel times. | [
"In Pittsburgh driving lore, the tunnels are notorious, most notably for several accidents when tractor-trailers that are too tall to safely travel through the tunnel get stuck against the roof of the tunnel. The Pennsylvania Department of Transportation raised the ceiling of the Squirrel Hill Tunnels to eliminate this problem and ease flow of traffic in and out of Pittsburgh. The tunnels also are known for generating traffic jams that can extend to the preceding exits because the highway narrows from four lanes to two. As a result, many residents prefer to exit the highway prior to entering the tunnel and detour through Frick Park and Schenley Park because doing so is generally faster.\n",
"Common practice and most law on United States highways is that the left lane is reserved for passing and faster moving traffic, and that traffic using the left lane must yield to traffic wishing to overtake.\n",
"Carpooling (also car-sharing, ride-sharing and lift-sharing) is the sharing of car journeys so that more than one person travels in a car, and prevents the need for others to have to drive to a location themselves.\n",
"The left lane is commonly referred to as the \"fast lane\", but that is not an accurate description of the lane's purpose. The left lane is the designated passing lane, however, vehicles in the left lane must obey the posted speed limits. A common problem arising from misuse of the left lane is speeding and tailgating. These actions create road rage and increase overall danger.\n",
"A sidetrack is a railroad track other than siding that is auxiliary to the main track. The word is also used as a verb (without object) to refer to the movement of trains and railcars from the main track to a siding, and in common parlance to refer to giving in to distractions apart from a main subject. Sidetracks are used by railroads to order and organize the flow of rail traffic.\n",
"In some cases, the median strip of the highway may contain a train line, usually around major urban centers. This is often done to share a right-of-way, because of the expense and difficulty of clearing a route through dense urban neighborhoods. A reserved right-of-way is contrasted with street running, in which rail cars and automobiles occupy the same lanes of traffic.\n",
"In an effort to reduce traffic and encourage carpooling, some governments have introduced high-occupancy vehicle (HOV) lanes in which only vehicles with two or more passengers are allowed to drive. HOV lanes can create strong practical incentives for carpooling by reducing travel time and expense. In some countries, it is common to find parking spaces reserved for carpoolers.\n"
] |
why are some insects (such as flies) extremely skittish towards humans whereas others (such as ladybugs) are extremely docile towards humans, despite having similar flight abilities? | Ladybugs are toxic (or at least foul-tasting) to most predators, as advertised by their flashy warning colors. Since predators learn to leave them alone, ladybugs have little need to flee. Flies, on the other hand, rely on their mobility to escape from predators, so they need to be jumpy. | [
"Some species, such as deer flies and the Australian March flies, are known for being extremely noisy during flight, though clegs, for example, fly quietly and bite with little warning. Tabanids are agile fliers; \"Hybomitra\" species have been observed to perform aerial manoeuvres similar to those performed by fighter jets, such as the Immelmann turn. Horseflies can lay claim to being the fastest flying insects; the male \"Hybomitra hinei wrighti\" has been recorded reaching speeds of up to per hour when pursuing a female.\n",
"Insects were also affected in a similar manner. Their bodies and movement pattern noticeably altered from what was known through past experience, and behaving in a manner contradictory as such. It is probably safe to assume that they are affected to a similar degree as other animals.\n",
"Human interactions with insects include both a wide variety of uses, whether practical such as for food, textiles, and dyestuffs, or symbolic, as in art, music, and literature, and negative interactions including serious damage to crops and extensive efforts to eliminate insect pests. \n",
"Flies are eaten by other animals at all stages of their development. The eggs and larvae are parasitised by other insects and are eaten by many creatures, some of which specialise in feeding on flies but most of which consume them as part of a mixed diet. Birds, bats, frogs, lizards, dragonflies and spiders are among the predators of flies. Many flies have evolved mimetic resemblances that aid their protection. Batesian mimicry is widespread with many hoverflies resembling bees and wasps, ants and some species of tephritid fruit fly resembling spiders. Some species of hoverfly are myrmecophilous, their young live and grow within the nests of ants. They are protected from the ants by imitating chemical odours given by ant colony members. Bombyliid bee flies such as \"Bombylius major\" are short-bodied, round, furry, and distinctly bee-like as they visit flowers for nectar, and are likely also Batesian mimics of bees.\n",
"The veterinary concern is the same as the medical. The flies cause the same kind of myiasis in animals as it does in humans. It mainly only affects domesticated companion animals who are paralyzed or helpless.\n",
"BULLET::::- Spiders often frighten people due to their appearance. Arachnophobia is one of the most common phobias. However, spiders are important in the ecosystem as they eat insects which humans consider to be pests. Only a few species of spiders are dangerous to people. Spiders will only bite humans in self-defense, and few produce worse effects than a mosquito bite or bee-sting. Most of those with medically serious bites, such as recluse spiders and widow spiders, would rather flee and bite only when trapped, although this can easily arise by accident. Funnel web spiders' defensive tactics include fang display and their venom, although they rarely inject much, has resulted in 13 known human deaths over 50 years. They have been deemed to be the world's most dangerous spiders on clinical and venom toxicity grounds, though this claim has also been attributed to the Brazilian wandering spider, due to much more frequent accidents.\n",
"Flies are a nuisance, disturbing people at leisure and at work, but they are disliked principally because of their habits of contaminating foodstuffs. They alternate between breeding and feeding in dirty places with feeding on human foods, during which process they soften the food with saliva and deposit their faeces, creating a health hazard. However, fly larvae are as nutritious as fish meal, and could be used to convert waste to feed for fish and livestock.\n"
] |
how exactly does water ruin electronics, assuming that they are turned off and dried thoroughly afterward, what damage to hardware is done that is irreparable? | Different components react differently to water. Most ICs, for example, will survive a good drying, but water can remain trapped underneath the chip where it never fully evaporates.
Capacitors can corrode from the inside out, and transistors do weird things when exposed to water, but immediate drying and cleansing with alcohol will usually prevent that.
Now, the unfixable stuff.
LCD screens are toast if water gets between the digitizer and the glass. Also, rechargeable batteries are often built around alkali metals (lithium, in lithium-ion batteries, among others) that are highly reactive with water. If exposed, even very small amounts of water can get into the battery and destroy it, either slowly or immediately.
Other than that, water with any kind of mineral content can short circuit boards and/or leave deposits that hinder the board's ability to function, and can even cause heat build up. Dropping a phone into distilled water, however, won't do much, except cause a physical mess inside the device. The components themselves would be fine for the most part (except the battery and screen.) | [
"Damage to structures and other objects can take a number of forms, such as fire damage caused by the effects of burning, water damage done by water to materials not resistant to its effects, and radiation damage due to ionizing radiation. Some kinds of damage are specific to vehicles and mechanical or electronic systems, such as foreign object damage caused by the presence of any foreign substance, debris, or article; hydrogen damage due to interactions between metals and hydrogen; and damage mechanics, which includes damage to materials due to cyclic mechanical loads. When an object has been damaged, it may be possible to repair the object, thereby restoring it to its original condition, or to a new condition that allows it to function despite the damage.\n",
"Water damage describes a large number of possible losses caused by water intruding where it will enable attack of a material or system by destructive processes such as rotting of wood, growth, rusting of steel, de-laminating of materials such as plywood, and many others.\n",
"Products may also become obsolete when supporting technologies are no longer available to produce or even repair a product. For example, many integrated circuits, including CPUs, memory and even some relatively simple logic chips may no longer be produced because the technology has been superseded, their original developer has gone out of business or a competitor has bought them out and effectively killed off their products to remove competition. It is rarely worth redeveloping a product to get around these issues since its overall functionality and price/performance ratio has usually been superseded by that time as well.\n",
"1) Weak cost justification for repairs: It was becoming hard for consumers to justify the repairing of malfunctioning electronic items when the purchasing of newer models was so affordable, as a result of the advances in semiconductor and electronic materials technology. With the exception of display technologies, the newer television and radio receivers generally had internal components that were fewer in number and smaller in size and, thus, less costly to produce. Weak cost justification for repairs remains a factor today with most consumer electronics items.\n",
"Due to concerns over atmospheric pollution and hazardous waste disposal, the electronics industry has been gradually shifting from rosin flux to water-soluble flux, which can be removed with deionized water and detergent, instead of hydrocarbon solvents.\n",
"Various solutions are known to circumvent these losses which include adding an inactive carrier or adding a tracer. Research has also shown that pretreatment of glassware and plastic surfaces can reduce radionuclide sorption by saturating the sites.\n",
"Electronic equipment has been avoidably damaged, and refrigerated food regularly spoiled Health and safety was also harmed, with hospitals having no light, and electricity to run fans, contributed to an increasing malaria risk . \n"
] |
why doesn't the u.s. have standardized education across the board? wouldn't that make everything easier? | The US has a history of federalism and local control. It's much bigger and more diverse than Japan or the UK, so it's harder to get everyone to agree.
Common Core was an attempt at standardized education, and it met major backlash pretty much across the board. | [
"Unlike the systems of most other countries, education in the United States is highly decentralized, and the federal government and Department of Education are not heavily involved in determining curricula or educational standards (with the exception of the No Child Left Behind Act). This has been left to state and local school districts. The quality of educational institutions and their degrees is maintained through an informal private process known as accreditation, over which the Department of Education has no direct public jurisdictional control.\n",
"Historically, middle schools and high schools have offered vocational courses such as home economics, wood and metal shop, typing, business courses, drafting, construction, and auto repair. However, for a number of reasons, many schools have cut those programs. Some schools no longer have the funding to support these programs, and schools have since put more emphasis on academics for all students because of standards based education reform. School-to-Work is a series of federal and state initiatives to link academics to work, sometimes including gaining work experience on a job site without pay.\n",
"Standards-based education reform is designed to promote equity through universalism, unifying education nationwide through high academic standards that must be met by all students. As this paradigm shift began to work its way into national policies such as Goals 2000 and the 1994 re-authorization of the Elementary and Secondary Education Act (ESEA), the focus became more on lofty and rigorous educational outcomes rather than vocational or alternative education methods (such as bilingual education) that had been popular in previous decades. Just as federal policies began to reflect these pedagogical changes, states also began to implement changes to reflect the same values. In 1998, California passed an initiative that almost all classroom instruction should be in English. These changes were due mainly in response to federal English-only standardized testing. The effects of such a drastic policy change were felt statewide due to the high LEP population. The increased focus on curriculum, instruction, and standardized assessments also shaped the changes in policy reflected in the No Child Left Behind Act of 2001.\n",
"Some of the proposed purposes of western style compulsory education are to prepare students to join the adult workforce and be financially successful, have students learn useful skills and knowledge, and prepare students to make positive economic or scientific contributions to society. Critics of schooling say it is ineffective at achieving these purposes and goals. In many countries, schools do not keep up with the skills demanded by the workplace, or never have taught relevant skills. Students often feel unprepared for college as well. More schooling does not necessarily correlate with greater economic growth. Alternate forms of schooling, such as the Sudbury model, have been shown to be sufficient for college acceptance and other western cultural goals.\n",
"Education reform is a topic that is in the mainstream currently in the United States. Over the past 30 years, policy makers have made a steady increase at the state and federal levels of government in their involvement of US schools. US states spend most of their budgets funding schools, whereas only a small portion of the federal budget is allocated to education. Although states hold the constitutional right on education policy, the federal government is advancing their role by building on state and local education policies. Education Reform is currently being seen as a \"tangled web\" due to the nature of education authority. There are several authorities looking over education and what can and cannot be implemented. Some education policies/reforms are being defined at either the federal, state or local level and in most cases, their focuses/authority overlap one another. This form of authority have led many to believe there is an inefficiency within education governance. Compared to other OCED countries, educational governance in the US is more decentralized and most of its autonomy is found within the state and district levels. The reason for this is that US citizens put an emphasis on individual rights and have a suspicion on government. A recent report by the National Center on Education and the Economy, believes that the education system is neither coherent nor likely to see improvements due to the nature of it.\n",
"Most local, state and federal education agencies are committed to standards based education reform, which is based on beliefs which conflict with the outcomes of traditional education. The goal is that all students will succeed at one high world-class level of what students know and are able to do, rather than different students learning different amounts on different tracks, producing some failures and some successes. Higher order thinking skills are emphasized by the new standards. A widely cited paper by Constance Kamii even suggests that teaching of basic arithmetic methods is harmful to learning, and guided the thinking behind many of today's commonly used mathematics teaching curricula.\n",
"The practice of inclusion (in mainstream classrooms) has been criticized by advocates and some parents of children with special needs because some of these students require instructional methods that differ dramatically from typical classroom methods. Critics assert that it is not possible to deliver effectively two or more very different instructional methods in the same classroom. As a result, the educational progress of students who depend on different instructional methods to learn often fall even further behind their peers.\n"
] |
what's the difference between oled, amoled and super amoled displays? | Short answer: It's complicated.
OLED is display technology that involves the use of pixels made of organic material.
AMOLED (active-matrix OLED) adds an active matrix of thin-film transistors to the OLED panel, giving each pixel its own transistor and storage capacitor. This makes AMOLED displays more expensive but also more flexible and energy-efficient, able to deliver more vivid picture quality and faster motion response.
Super AMOLED is a marketing term created by Samsung for an AMOLED display with an integrated digitizer: the touch sensors are built into the screen layer itself rather than overlaid on top of it. Samsung claims it provides a 20% brighter screen, 20% lower power consumption and 80% less sunlight reflection.
Edit: Added the information that Super AMOLED is marketing term. | [
"\"Super AMOLED\" is a marketing term created by device manufacturers for an AMOLED display with an integrated digitizer: the layer that detects touch is integrated into the screen, rather than overlaid on top of it. The display technology itself is not improved. According to Samsung, Super AMOLED reflects one-fifth as much sunlight as the first generation AMOLED. Super AMOLED is part of the Pentile matrix family, sometimes abbreviated as SAMOLED. For the Samsung Galaxy S III, which reverted to Super AMOLED instead of the pixelation-free conventional RGB (non-PenTile) Super AMOLED Plus of its predecessor Samsung Galaxy S II, the S III's larger screen size encourages users to hold the phone further from their face to obscure the PenTile effect.\n",
"Super AMOLED Plus, first introduced with the Samsung Galaxy S II and Samsung Droid Charge smartphones, is a branding from Samsung where the PenTile RGBG pixel matrix (2 subpixels) used in Super AMOLED displays have been replaced with a traditional RGB RGB (3 subpixels) arrangement typically used in LCDs. This variant of AMOLED is brighter and therefore more energy-efficient than Super AMOLED displays and produces a sharper, less grainy image because of the increased number of subpixels. In comparison to AMOLED and Super AMOLED displays, they are even more energy-efficient and brighter. However, Samsung cited screen life and costs by not using Plus on the Galaxy S II's successor, the Samsung Galaxy S III.\n",
"An AMOLED display consists of an active matrix of OLED pixels generating light (luminescence) upon electrical activation that have been deposited or integrated onto a thin-film transistor (TFT) array, which functions as a series of switches to control the current flowing to each individual pixel.\n",
"Super AMOLED Advanced is a term marketed by Motorola to describe a brighter display than Super AMOLED screens, but also a higher resolution — qHD or 960×540 for Super AMOLED Advanced than WVGA or 800×480 for Super AMOLED and 25% more energy efficient. Super AMOLED Advanced features PenTile, which sharpens subpixels in between pixels to make a higher resolution display, but by doing this, some picture quality is lost. This display type is used on the Motorola Droid RAZR and HTC One S.\n",
"OLED displays have better dynamic range capabilities than LCDs, similar to plasma but with lower power consumption. Rec. 709 defines the color space for HDTV, and Rec. 2020 defines a larger but still incomplete color space for ultra-high-definition television.\n",
"An OLED display works without a backlight because it emits visible light. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions (such as a dark room), an OLED screen can achieve a higher contrast ratio than an LCD, regardless of whether the LCD uses cold cathode fluorescent lamps or an LED backlight.\n",
"An OLED display works without a backlight. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions such as a dark room an OLED screen can achieve a higher contrast ratio than an LCD, whether the LCD uses cold cathode fluorescent lamps or LED backlight. OLEDs are expected to replace other forms of display in near future.\n"
] |
why do male dogs pee with one leg up and female dogs don't? | Males are marking their territory... the higher they pee, the harder it is for another dog to mark over it. My Labrador used to also poop on top of boulders to mark his territory. | [
"Domestic dogs mark their territories by urinating on vertical surfaces (usually at nose level), sometimes marking over the urine of other dogs. When one dog marks over another dog's urine, this is known as \"counter-marking\" or \"overmarking\". Male dogs urine-mark more frequently than female dogs, typically beginning after the onset of sexual maturity. Male dogs, as well as wolves, sometimes lift a leg and attempt to urinate even when their bladders are empty – this is known as a \"raised-leg display\", \"shadow-urination\", or \"pseudo-urination\". They typically mark their territory due to the presence of new stimuli or social triggers in a dog's environment, as well as out of anxiety. Marking behavior is present in both male and female dogs, and is especially pronounced in male dogs that have not been neutered.\n",
"For several days before estrus, a phase called proestrus, the female dog may show greater interest in male dogs and \"flirt\" with them (proceptive behavior). There is progressive vulval swelling and some bleeding. If males try to mount a female dog during proestrus, she may avoid mating by sitting down or turning round and growling or snapping.\n",
"Estrous behavior in the female dog is usually indicated by her standing still with the tail held up, or to the side of the perineum, when the male sniffs the vulva and attempts to mount. This tail position is sometimes called “flagging”. The female dog may also turn, presenting the vulva to the male.\n",
"As in most other canids, male bush dogs lift their hind legs when urinating. However, female bush dogs use a kind of handstand posture, which is less common in other canids. When male bush dogs urinate, they create a spray instead of a stream.\n",
"Male felids are able to urinate backwards by curving the tip of the glans penis backward. In cats, the glans penis is covered with spines, but in dogs, the glans is smooth. Penile spines also occur on the glans of male and female spotted hyenas.\n",
"Cats have anal sacs or scent glands. Scent is deposited on the feces as it is eliminated. Unlike intact male cats, female and neutered male cats usually do not spray urine. Spraying is accomplished by backing up against a vertical surface and spraying a jet of urine on that surface. Unlike a dog's penis, a cat's penis points backward. Males neutered in adulthood may still spray after neutering. Urinating on horizontal surfaces in the home, outside the litter box may indicate dissatisfaction with the box, due to a variety of factors such as substrate texture, cleanliness and privacy. It can also be a sign of urinary tract problems. Male cats on poor diets are susceptible to crystal formation in the urine which can block the urethra and create a medical emergency.\n",
"The male dog mounts the female and is able to achieve intromission with a non-erect penis, which contains a bone called the \"os penis\". The dog's penis enlarges inside the vagina, thereby preventing its withdrawal; this is sometimes known as the “tie” or “copulatory lock”. The male dog rapidly thrust into the female for 1–2 minutes then dismounts with the erect penis still inside the vagina, and turns to stand rear-end to rear-end with the female dog for up to 30 to 40 minutes; the penis is twisted 180 degrees in a lateral plane. During this time, prostatic fluid is ejaculated.\n"
] |
why don't legitimate banks offer up competitive alternatives to paypal? | They would have the same fraud issues as PayPal and be just as hated.
| [
"Thiel, a founder of PayPal, has stated that PayPal is not a bank because it does not engage in fractional-reserve banking. Rather, PayPal's funds that have not been disbursed are kept in commercial interest-bearing checking accounts.\n",
"In 2003, PayPal voluntarily ceased serving as a payment intermediary between gambling websites and their online customers. At the time of this cessation, it was the largest payment processor for online gambling transactions. In 2010, PayPal resumed accepting such transactions, but only in those countries where online gambling is legal, and only for sites which are properly licensed to operate in said jurisdictions.\n",
"From 2009 to 2016, PayPal operated Student Accounts, allowing parents to set up a student account, transfer money into it, and obtain a debit card for student use. The program provided tools to teach how to spend money wisely and take responsibility for actions. PayPal discontinued Student Accounts in August 2016.\n",
"In the United States, PayPal is licensed as a money transmitter, on a state-by-state basis. But state laws vary, as do their definitions of banks, narrow banks, money services businesses, and money transmitters. Although PayPal is not classified as a bank, the company is subject to some of the rules and regulations governing the financial industry including Regulation E consumer protections and the USA PATRIOT Act. The most analogous regulatory source of law for PayPal transactions comes from peer-to-peer (P2P) payments using credit and debit cards. Ordinarily, a credit card transaction, specifically the relationship between the issuing bank and the cardholder, is governed by the Truth in Lending Act (TILA) 15 U.S.C. §§ 1601-1667f as implemented by Regulation Z, 12 C.F.R. 226, (TILA/Z). TILA/Z requires specific procedures for billing errors, dispute resolution, and limits cardholder liability for unauthorized charges. Similarly, the legal relationship between a debit cardholder and the issuing bank is regulated by the Electronic Funds Transfer Act (EFTA) 15 U.S.C. §§ 1693-1693r, as implemented by Regulation E, 12 C.F.R. 205, (EFTA/E). EFTA/E is directed at consumer protection and provides strict error resolution procedures. However, because PayPal is a \"payment intermediary\" and not otherwise regulated directly, TILA/Z and EFTA/E do not operate exactly as written once the credit/debit card transaction occurs via PayPal. Basically, unless a PayPal transaction is funded with a credit card, the consumer has no recourse in the event of fraud by the seller.\n",
"As early as 2001, PayPal had substantial problems with online fraud, especially international hackers who were hacking into PayPal accounts and transferring small amounts of money out of multiple accounts. Standard solutions for merchant and banking fraud might use government criminal sanctions to pursue the fraudsters. But with PayPal losing millions of dollars each month to fraud while experiencing difficulties with using the FBI to pursue cases of international fraud, PayPal developed a private solution: a \"fraud monitoring system that used artificial intelligence to detect potentially fraudulent transactions. ... Rather than treating the problem of fraud as a legal problem, the company treated it as a risk management one.\"\n",
"PayPal's services allow people to make financial transactions online by granting the ability to transfer funds electronically between individuals and businesses. Through PayPal, users can send or receive payments for online auctions on websites like eBay, purchase or sell goods and services, or donate money or receive donations. It is not necessary to have a PayPal account to use the company's services. PayPal account users can set currency conversion option in account settings.\n",
"Until 2000, PayPal's strategy was to earn interest on funds in PayPal accounts. However, most recipients of PayPal credits withdrew funds immediately. Also, a large majority of senders funded their payments using credit cards, which cost PayPal roughly 2% of payment value per transaction.\n"
] |
why does audio feedback always sound like a high squealing noise? | There are 2 things happening here. In a feedback loop, the microphone is picking up some of the amplified sound (because it "hears" it from the speaker) and sends it back around. This is why it gets very loud, very fast. The high-pitched squeal happens for a different reason. If the microphone and the speaker are at a certain distance and orientation relative to each other, such that the sound coming out of the speaker hits the microphone at a certain point in time, certain parts of the sound are amplified slightly differently. Microphones, amplifiers and speakers are not perfect... they handle some frequencies better than others. If the alignment is such that a certain "high sound" gets amplified more than the other sounds, this results in the squeal you hear.
This looping happens very fast, which is why the sound starts a fairly low volume and pitch, then gets very loud and high pitched.
If all microphones, amplifiers and speakers (and room acoustics!) were perfect, this would not happen.
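A toy numerical sketch of the loop (the per-frequency gains here are invented for illustration, not measured from any real system): any frequency whose round-trip gain through mic, amp, speaker and room exceeds 1 grows exponentially with each pass, while the rest die away.

```python
# Toy model of acoustic feedback: each round trip through the
# mic -> amp -> speaker -> room loop multiplies every frequency
# component by a frequency-dependent loop gain. Gains are made up.
loop_gain = {"low hum (100 Hz)": 0.80, "voice (1 kHz)": 0.95, "squeal (4 kHz)": 1.15}

amplitude = {name: 1.0 for name in loop_gain}  # everything starts equal
for _ in range(40):  # many round trips happen in a fraction of a second
    for name, gain in loop_gain.items():
        amplitude[name] *= gain

for name, amp in sorted(amplitude.items(), key=lambda kv: kv[1]):
    print(f"{name}: {amp:.4f}")
```

After 40 trips the two components with gain below 1 have nearly vanished, while the one above 1 has grown by a couple of orders of magnitude, which is why you hear a single loud squeal rather than a louder copy of everything.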
For you techies: One trick that used to be used before modern DSPs (digital signal processors) was to place 2 microphones at every performer. One mic was actually used by the performer, while a second mic was a few inches away, but connected 180 degrees out-of-phase ("reverse the wires"). The performer's mic would capture both the performer's voice AND whatever else (instruments, crowd, etc.) was nearby. The second mic picked up the same, but no vocal. Since it was 180 out of phase, you could add this to the other mic (with a special amp...) and almost perfectly cancel out everything but the vocal. A pain in the ass to set up, but you could get some great sound that way.
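Here's an idealized sketch of that phase-cancellation trick, assuming both mics pick up identical bleed (real mic placement only ever approximates this, which is why the cancellation was "almost" perfect):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)

vocal = np.sin(2 * np.pi * 5 * t)          # what we want to keep
bleed = 0.5 * rng.standard_normal(t.size)  # stage noise both mics hear

mic_performer = vocal + bleed  # performer's mic: vocal plus bleed
mic_inverted = -bleed          # second mic, wired 180 degrees out of phase

recovered = mic_performer + mic_inverted  # bleed cancels, vocal survives
print(np.allclose(recovered, vocal))      # True
```

In a real rig the two mics hear slightly different bleed (different distances, different off-axis response), so the subtraction attenuates rather than eliminates the spill, but the principle is exactly this sum of in-phase and inverted signals.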
Edit: Added some cool microphone info
| [
"Feedback is almost always considered undesirable when it occurs with a singer's or public speaker's microphone at an event using a sound reinforcement system or PA system. Audio engineers use various electronic devices, such as equalizers and, since the 1990s, automatic feedback detection devices to prevent these unwanted squeals or screeching sounds, which detract from the audience's enjoyment of the event. On the other hand, since the 1960s, electric guitar players in rock music bands using loud guitar amplifiers and distortion effects have intentionally created guitar feedback to create a desirable musical effect. \"I Feel Fine\" by the Beatles marks one of the earliest examples of the use of feedback as a recording effect in popular music. It starts with a single, percussive feedback note produced by plucking the A string on Lennon's guitar. Artists such as the Kinks and the Who had already used feedback live, but Lennon remained proud of the fact that the Beatles were perhaps the first group to deliberately put it on vinyl. In one of his last interviews, he said, \"I defy anybody to find a record—unless it's some old blues record in 1922—that uses feedback that way.\"\n",
"Audio feedback (also known as acoustic feedback, simply as feedback, or the Larsen effect) is a special kind of positive feedback which occurs when a sound loop exists between an audio input (for example, a microphone or guitar pickup) and an audio output (for example, a loudly-amplified loudspeaker). In this example, a signal received by the microphone is amplified and passed out of the loudspeaker. The sound from the loudspeaker can then be received by the microphone again, amplified further, and then passed out through the loudspeaker again. The frequency of the resulting sound is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them. For small PA systems the sound is readily recognized as a loud squeal or screech.\n",
"Audio feedback (also known as acoustic feedback, simply as feedback, or the Larsen effect) is a special kind of positive loop gain which occurs when a sound loop exists between an audio input (for example, a microphone or guitar pickup) and an audio output (for example, a power amplified loudspeaker). In this example, a signal received by the microphone is amplified and passed out of the loudspeaker. The sound from the loudspeaker can then be received by the microphone again, amplified further, and then passed out through the loudspeaker again. The frequency of the resulting sound is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them. For small PA systems the sound is readily recognized as a loud squeal or screech. The principles of audio feedback were first discovered by Danish scientist Søren Absalon Larsen, hence the name \"Larsen Effect\".\n",
"Feedback is almost always considered undesirable when it occurs with a singer's or public speaker's microphone at an event using a sound reinforcement system or PA system. Audio engineers typically use directional microphones with cardioid pickup patterns and various electronic devices, such as equalizers and, since the 1990s, automatic feedback detection devices, to prevent these unwanted squeals or screeching sounds, which detract from the audience's enjoyment of the event and may damage equipment. On the other hand, since the 1960s, electric guitar players in rock music bands using loud guitar amplifiers, speaker cabinets and distortion effects have intentionally created guitar feedback to create different sounds including long sustained tones that cannot be produced using standard playing techniques. The sound of guitar feedback is considered to be a desirable musical effect in heavy metal music, hardcore punk and grunge. Jimi Hendrix was an innovator in the intentional use of guitar feedback, alongside effects units such as the Univibe and wah-wah pedal in his guitar solos to create unique sound effects and musical sounds.\n",
"To avoid feedback, automatic anti-feedback devices can be used. (In the marketplace these go by the name \"feedback destroyer\" or \"feedback eliminator\".) Some of these work by shifting the frequency slightly, with this upshift resulting in a \"chirp\"-sound instead of a howling sound of unaddressed feedback. Other devices use sharp notch-filters to filter out offending frequencies. Adaptive algorithms are often used to automatically tune these notch filters.\n",
"Acoustic feedback is the most widespread option of feedback appearing at the return leakage of sound from the speaker to the microphone. This can be caused by small distance between the microphone and the speaker, loose fit of an earpiece to the surface of acoustic meatus and so on.\n",
"In hearing aids, feedback arises when a part of the receiver (loudspeaker) signal is captured by the hearing aid microphone(s), gets amplified in the device and starts to loop around through the system. When feedback occurs, it results in a disturbingly loud tonal signal. Feedback is more likely to occur when the hearing aid volume is increased, when the hearing aid fitting is not in its proper position or when the hearing aid is brought close to a reflecting surface (e.g. when using a mobile phone). Adaptive feedback cancellation algorithms are techniques that estimate the transmission path between loudspeaker and microphone(s). This estimate is then used to implement a neutralizing electronic feedback path that suppresses the tonal feedback signal.\n"
] |
If yellow teeth are supposedly healthy and natural, why do we find pearly white teeth attractive? | I'm surprised to hear that yellow teeth are healthy. Normally, one would think discoloration signifies rot or oral disease. Do you have a source for that fact?
As a layman, I would speculate that white teeth are considered a sign of good hygiene, and logically so. A person who cares about keeping his teeth white probably cares about his health in general, which is a good thing.
"Sometimes white or straight teeth are associated with oral hygiene, but a hygienic mouth may have stained teeth and/or crooked teeth. For appearance reasons, people may seek out teeth whitening and orthodontics.\n",
"In Australia, jelly confectionery in the shape of teeth has been very popular since the 1930s. They are colored pink and white, with pink representing the gums and teeth being white. They have a slight minty flavor, similar to mint toothpaste. \n",
"'Pink Pearl' apples are generally medium-sized, with a conical shape. They are named for the color of their flesh, which is a bright rosy pink sometimes streaked or mottled with white. They have a translucent, yellow-green skin, and a crisp, juicy flesh with tart to sweet-tart taste. 'Pink Pearl' apples ripen in late August to mid-September. It is susceptible to scab, and the fruit tend not to keep well on the tree once ripe.\n",
"The characteristic blue color of the fruiting body and the latex make this species easily recognizable. Other \"Lactarius\" species with some blue color include the \"silver-blue milky\" (\"L. paradoxus\"), found in eastern North America, which has a grayish-blue cap when young, but it has reddish-brown to purple-brown latex and gills. \"L. chelidonium\" has a yellowish to dingy yellow-brown to bluish-gray cap and yellowish to brown latex. \"L. quieticolor\" has blue-colored flesh in the cap and orange to red-orange flesh in the base of the stem. Although the blue discoloration of \"L. indigo\" is thought to be rare in the genus \"Lactarius\", in 2007 five new species were reported from Peninsular Malaysia with bluing latex or flesh, including \"L. cyanescens\", \"L. lazulinus\", \"L. mirabilis\", and two species still unnamed.\n",
"The fruits of this cultivar are light green and turn yellow gold with ripeness and are very juicy, making it also a good choice for apple cider of a balanced tart and sweet taste. It is considered of good taste by those who choose to eat them fresh.\n",
"BULLET::::- Persons with visible white fillings or crowns. Tooth whitening does not change the color of fillings and other restorative materials. It does not affect porcelain, other ceramics, or dental gold. However, it can slightly affect restorations made with composite materials, cements and dental amalgams. Tooth whitening will not restore color of fillings, porcelain, and other ceramics when they become stained by foods, drinks, and smoking, as these products are only effective on natural tooth structure. As such, a shade mismatch may be created as the natural tooth surfaces increase in whiteness and the restorations stay the same shade. Whitening agents do not work where bonding has been used and neither is it effective on tooth-colored filling materials. Other options to deal with such cases are the porcelain veneers or dental bonding.\n",
"Hyperbilirubinemia during the years of tooth formation may make bilirubin incorporate into the dental hard tissues, causing yellow-green or blue-green discoloration. One such condition is hemolytic disease of the newborn (erythroblastosis fetalis).\n"
] |
how the big bang theory and the intelligent observer co-exist in science. | Simple: science does not say the universe requires an intelligent observer. One may or may not come into existence at some point. | [
"One of the major successes of the Big Bang theory has been to provide a prediction that corresponds to the observations of the abundance of light elements in the universe. Along with the explanation provided for the Hubble's law and for the cosmic microwave background, this observation has proved very difficult for alternative theories to explain.\n",
"The Big Bang theory is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large-scale evolution. Despite its success in explaining many observed features of the universe including galactic redshifts, the relative abundance of light elements such as hydrogen and helium, and the existence of a cosmic microwave background, there are several questions that remain unanswered. For example, the standard Big Bang model does not explain why the universe appears to be same in all directions, why it appears flat on very large distance scales, or why certain hypothesized particles such as magnetic monopoles are not observed in experiments.\n",
"The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons:\n",
"Much of the current work in cosmology includes understanding how galaxies form in the context of the Big Bang, understanding what happened in the earliest times after the Big Bang, and reconciling observations with the basic theory. Cosmologists continue to calculate many of the parameters of the Big Bang to a new level of precision, and carry out more detailed observations which are hoped to provide clues to the nature of dark energy and dark matter, and to test the theory of General Relativity on cosmic scales.\n",
"The Big Bang model, or theory, is now the prevailing cosmological theory of the early development of the universe and was first proposed by Belgian priest Georges Lemaître, astronomer and professor of physics at the Catholic University of Leuven, with a PhD from MIT. Lemaître was a pioneer in applying Albert Einstein's theory of general relativity to cosmology. Bill Bryson wrote that the idea was decades ahead of its time, and that Lemaître was the first to bring together Einstein's theory of relativity with Edwin Hubble's cosmological observations, combining them in his own \"fire-works theory\". Lemaître theorized in the 1920s that the universe began as a geometrical point which he called a \"primeval atom\", which exploded out and has been moving apart ever since. The idea became established theory only decades later with the discovery of cosmic background radiation by American scientists.\n",
"Through the 1970s and 1980s, most cosmologists accepted the Big Bang, but several puzzles remained, including the non-discovery of anisotropies in the CMB, and occasional observations hinting at deviations from a black-body spectrum; thus the theory was not very strongly confirmed.\n",
"Since the emergence of the Big Bang theory as the dominant physical cosmological paradigm, there have been a variety of reactions by religious groups regarding its implications for religious cosmologies. Some accept the scientific evidence at face value, some seek to harmonize the Big Bang with their religious tenets, and some reject or ignore the evidence for the Big Bang theory.\n"
] |
What happened to pre-Columbian dog breeds? Did they die off from diseases brought by European dogs? | There are several breeds of domestic dogs that were developed in North America. Some breeds are still around, while others are extinct.
Among those that are still popular pets today you can find huskies, Malamutes, and their relatives, which have Old World counterparts in the Eurasian Arctic as well. Chihuahuas trace their origins to the other end of the continent in Mesoamerica, as does the Xolo (aka the Mexican Hairless Dog, although there are variants with hair).
Another breed that's still around, but is rarely seen as a pet, is the Carolina Dog. These dogs resemble dingoes, as a lot of dogs will after being feral for many generations. They were rediscovered relatively recently, in the 1970s, and we're not sure how long they've been around. Dogs like them show up in pre-Columbian art, but whether they were pets back then or already feral is unknown.
Many of the most famous extinct breeds are from the northwestern part of North America. For example, there are the [Hare Indian Dog](_URL_0_), the [Tahltan Bear Dog](_URL_2_), and most famously, the [Salish Wool Dog](_URL_4_). The first two were bred for different styles of hunting, while the wool dogs, as their name suggests, were bred for wool.
**Sources**
* [Pre-Columbian origins of Native American dog breeds, with only limited replacement by European dogs, confirmed by mtDNA analysis](_URL_1_)
* [Dogs of the American Aborigines](_URL_3_) | [
"The Spanish conquest of Peru nearly caused the extinction of the breed. The dogs survived in rural areas where the people believed that they held a mystical value, and because of their reputation to treat arthritis. \n",
"Dogs were present in pre-Columbian America, presumably brought by early human migrants from Asia. Studies of free-ranging village/street dogs have indicated almost total replacement of these original dogs by European dogs, but the extent to which Arctic, North and South American breeds are descendants of the original population remains to be assessed.\n",
"The extinction of the Dalbo dog is linked to the near wipe-out of wolves and bears in Scandinavia in around 1890. It was then considered too expensive to continue to have very large dogs, that did not seem to fill a clear purpose. An ill-fated eruption of rabies in 1854, might have contributed to the downfall of the breed. Another reason might have been the great Swedish famine of 1867-1868.\n",
"In 2018, an analysis of DNA from the entire cell nucleus indicated that dogs entered North America from Siberia 4,500 years after humans did, were isolated for the next 9,000 years, and after contact with Europeans these no longer exist because they were replaced by Eurasian dogs. The pre-contact dogs exhibit a unique genetic signature that is now gone.\n",
"In a 1520 letter, Hernan Cortés wrote that the Aztecs raised and sold the little dogs as food. Colonial records refer to small, nearly hairless dogs at the beginning of the 19th century, one of which claims 16th-century Conquistadores found them plentiful in the region later known as Chihuahua. Small dogs such as Chihuahuas were also used as living hot-water bottles during illness or injury. Some believe this practice is where the idea of pain being transferred to animals from humans originated, which gave way to rituals such as burning the deceased with live dogs, such as the Techichi, to exonerate the deceased human's sins. Chihuahuas as we know them today remained a rarity until the early 20th century; the American Kennel Club (AKC) did not register a Chihuahua until 1904.\n",
"European domestic dogs arrived in Australia in the 18th century, during the European colonization. Since then, some of those dogs dispersed into the wild (both deliberately and accidentally) and founded feral populations, especially in places where the dingo numbers had been severely reduced due to human intervention. Although there are few records of such releases, their occurrence is supported by reports of free-living dogs of specific breeds being seen or captured in remote areas. The spread of farming and grazing activities in the 19th century led to a further spread of other domestic dogs, both pet and feral ones. Interbreeding with the native dingoes has probably been occurring since the arrival of domestic dogs in the year 1788.\n",
"Studies have suggested that it was possible for multiple primitive forms of the dog to have existed, including in Europe. European dog populations had undergone extensive turnover during the last 15,000 years that has erased the genomic signature of early European dogs, the genetic heritage of the modern breeds has become blurred due to admixture, and there was the possibility of past domestication events that had died out or had been largely replaced by more modern dog populations.\n"
] |
Has anyone ever looked into waste heat from cars and buildings being a factor in global temperature increase? | If you look at oil, it produces about [5.6 million BTU per barrel](_URL_2_) when burned. There are about 3.5e10 barrels in a cubic mile, and we know that globally [we consume about 3 cubic miles of oil equivalent](_URL_1_) each year, so that's about 10.5e10 barrels, producing about 5.9e17 BTU, or about 6.2e20 joules. The amount of energy the Sun radiates to the Earth is about [1e25 joules per year](_URL_0_), or about 16,000 times as much energy from the Sun as from all the waste heat in the world. | [
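The back-of-envelope arithmetic above is easy to re-run as a short Python sketch. The named constants are just the rough figures quoted in the answer (order-of-magnitude estimates, not precise measurements):

```python
# Rough figures quoted in the answer above; all order-of-magnitude estimates.
BTU_PER_BARREL = 5.6e6            # energy released burning one barrel of oil
BARRELS_PER_CUBIC_MILE = 3.5e10
CUBIC_MILES_PER_YEAR = 3          # global consumption, in cubic miles of oil equivalent
JOULES_PER_BTU = 1055
SOLAR_INPUT_J_PER_YEAR = 1e25     # energy the Sun delivers to Earth annually

barrels = BARRELS_PER_CUBIC_MILE * CUBIC_MILES_PER_YEAR        # ~1.05e11 barrels
waste_heat_joules = barrels * BTU_PER_BARREL * JOULES_PER_BTU  # ~6.2e20 J
ratio = SOLAR_INPUT_J_PER_YEAR / waste_heat_joules             # ~1.6e4

print(f"waste heat: {waste_heat_joules:.1e} J/yr; Sun delivers ~{ratio:,.0f}x more")
```

Running it puts human waste heat four orders of magnitude below the solar input, which is why direct waste heat is a minor term next to greenhouse forcing.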
"A 2012 study by researchers at Concordia University included variables similar to those used in the Stanford study (e.g., cloud responses) and estimated that worldwide deployment of cool roofs and pavements in cities would generate a global cooling effect equivalent to offsetting up to 150 gigatonnes of carbon dioxide emissions – enough to take every car in the world off the road for 50 years.\n",
"In the first two \"Reports for the Club of Rome\" in 1972 and 1974, the anthropogenic climate changes by increase as well as by Waste heat were mentioned. About the latter John Holdren wrote in a study cited in the 1st report, “… that global thermal pollution is hardly our most immediate environmental threat. It could prove to be the most inexorable, however, if we are fortunate enough to evade all the rest.” Simple global-scale estimates that recently have been actualized and confirmed by more refined model calculations show noticeable contributions from waste heat to global warming after the year 2100, if its growth rates are not strongly reduced (below the averaged 2% p.a. which occurred since 1973).\n",
"By the 1960s, aerosol pollution (\"smog\") had become a serious local problem in many cities, and some scientists began to consider whether the cooling effect of particulate pollution could affect global temperatures. Scientists were unsure whether the cooling effect of particulate pollution or warming effect of greenhouse gas emissions would predominate, but regardless, began to suspect that human emissions could be disruptive to climate in the 21st century if not sooner. In his 1968 book \"The Population Bomb\", Paul R. Ehrlich wrote, \"the greenhouse effect is being enhanced now by the greatly increased level of carbon dioxide... [this] is being countered by low-level clouds generated by contrails, dust, and other contaminants... At the moment we cannot predict what the overall climatic results will be of our using the atmosphere as a garbage dump.\"\n",
"BULLET::::- In addition to automobiles, waste heat is also generated in many other places, such as in industrial processes and in heating (wood stoves, outdoor boilers, cooking, oil and gas fields, pipelines, and remote communication towers).\n",
"The projected effects for the environment and for civilization are numerous and varied. The main effect is an increasing global average temperature. The average surface temperature could increase by 3 to 10 degrees Fahrenheit (approximately 1.67 to 5.56 degrees Celsius) by the end of the century if carbon emissions aren't reduced. This causes a variety of secondary effects, namely, changes in patterns of precipitation, rising sea levels, altered patterns of agriculture, increased extreme weather events, the expansion of the range of tropical diseases, and the opening of new marine trade routes.\n",
"82% of final energy consumption in buildings was supplied by fossil fuels in 2015 The energy-related CO2 emissions account for the environmental impact due to a building. The Global Status Report 2017 prepared by the International Energy Agency (IEA) for the Global Alliance for Buildings and Construction (GABC) highlights the significance of the buildings and construction sector in global energy consumption and related emissions again. Deep energy retrofitting in existing building stocks is critical to achieve the global climate goals laid down in the Paris Agreement.\n",
"Buildings and their construction consume more energy than transportation or industrial applications, and because buildings are responsible for the largest portion of greenhouse emissions, they have the largest impact on man-made climate change. The AIA has proposed making buildings carbon neutral by 2030, meaning that the construction and operation of buildings will not require fossil fuel energy or emit greenhouse gases, and having the U.S. reduce CO emissions to 40 to 60% below 1990 levels by 2050.\n"
] |
How was Irish culture changed by the settling of Vikings? | It's well attested that raiders and settlers from Scandinavia had a substantial impact on Irish society. In some instances we can still point to this influence in contemporary Ireland.
First, a brief history of Viking contact with Ireland. The first recorded reference is from 795, when it was reported that they raided a monastery on Rathlin Island (just three years after the famous raid at Lindisfarne, England). At this time Irish craftwork was among the most prized in Europe, so plundering these largely undefended monastic settlements (centres of wealth in rural Ireland) must have been extremely rewarding for the raiders.
For the following few decades the Norsemen continued to perform hit-and-run raids on Ireland, until around 840, when it was reported that Viking raiders were wintering in Ireland. Around 900 the Norse warriors began to settle permanently on the island. Throughout the 10th century they were constant features of the many Irish kingdoms’ warring, often acting as mercenaries for the Gaelic kings, if not fighting in their own right. Their influence as political actors began to falter from around the end of the 10th century (famously at the Battle of Tara (980) and the Battle of Clontarf (1014)). Like the later Anglo-Norman settlers, the Norse largely became Gaelicized, integrating into Irish society likely within just a few generations of settlement.
1) **Urbanisation**
Although this is hotly contested in early medieval Irish historiography, the Vikings are largely credited with bringing urban settlement to Ireland. Likely some proto-urban settlement existed, known by historians as the “monastic town”, but by our own definition of a town (or even that of a medieval German or French person) this argument does not hold water, and such settlements were still largely agrarian. From the early 10th century the Vikings established settlements that remain to this day Ireland’s primary urban centres – Dublin, Wexford, Waterford, Cork, and Limerick were all set up by Norse settlers during this period.
2) **Language**
Modern Irish is still littered with words and phrases that can clearly be recognised as coming from Norse. A few examples: Gaelic words for garden (gairdín), shoe (bróg), river (abhainn), boat (bád), and market (margad) all suggest Norse origin. This also provides another insight into what influence these settlers might have had on Irish society – likely concepts such as ‘a garden’ (as in an enclosed area) or ‘a market’ were not words needed by the Gaelic Irish before the Norse brought them.
3) **Place-names**
Another linguistic insight into Norse influence on Irish society can be drawn from place-names. For many towns or settlements, the only evidence historians have for Norse influence is the name. For example, Leixlip, Co. Kildare is drawn from ‘lax hlaup’, Old Norse for ‘salmon leap’. A town such as Wicklow also likely has Viking influence, as the ‘wick’ comes from the same Norse word that the ‘vik’ in Viking does, referring to a bay or inlet. It’s the same root that English towns, such as Norwich, come from.
4) **Craftwork**
Although I know less about this area, the archaeological evidence suggests a merging of Norse and Irish styles of craftwork.
5) **Trade network**
The Vikings established a trading network that extended across an enormous portion of Eurasia, and even into Africa and North America. Viking trade placed Ireland within this network and fixed the island in an international commercial world. One of the most incredible examples of this is an 8th-century coin from Iraq being found during an excavation of a small Hiberno-Norse settlement in the south-east of Ireland.
This list is by no means exhaustive. If anyone has any questions I will certainly elaborate on the evidence for what I’ve said. Some historians I would recommend for you check out if you would like to find out more would be Howard Clarke, John Bradley, and Colman Etchingham.
[_URL_0_](_URL_0_) This website should also provide some interesting information if you're interested in the urban aspect of Viking influence. | [
"The influx of Viking raiders and traders in the 9th and 10th centuries resulted in the founding of many of Ireland's most important towns, including Cork, Dublin, Limerick, and Waterford (earlier Gaelic settlements on these sites did not approach the urban nature of the subsequent Norse trading ports). The Vikings left little impact on Ireland other than towns and certain words added to the Irish language, but many Irish taken as slaves inter-married with the Scandinavians, hence forming a close link with the Icelandic people. In the Icelandic \"Laxdœla saga\", for example, \"even slaves are highborn, descended from the kings of Ireland.\" The first name of Njáll Þorgeirsson, the chief protagonist of \"Njáls saga\", is a variation of the Irish name Neil. According to \"Eirik the Red's Saga\", the first European couple to have a child born in North America was descended from the Viking Queen of Dublin, Aud the Deep-minded, and a Gaelic slave brought to Iceland.\n",
"By the late 4th century AD Christianity had begun to gradually subsume or replace the earlier Celtic polytheism. By the end of the 6th century it had introduced writing along with a predominantly monastic Celtic Christian church, profoundly altering Irish society. Viking raids and settlement from the late 8th century AD resulted in extensive cultural interchange, as well as innovation in military and transport technology. Many of Ireland's towns were founded at this time as Viking trading posts and coinage made its first appearance. Viking penetration was limited and concentrated along coasts and rivers, and ceased to be a major threat to Gaelic culture after the Battle of Clontarf in 1014. The Norman invasion in 1169 resulted again in a partial conquest of the island and marked the beginning of more than 800 years of English political and military involvement in Ireland. Initially successful, Norman gains were rolled back over succeeding centuries as a Gaelic resurgence reestablished Gaelic cultural preeminence over most of the country, apart from the walled towns and the area around Dublin known as The Pale.\n",
"The assimilation of the nascent Scandinavian kingdoms into the cultural mainstream of European Christendom altered the aspirations of Scandinavian rulers and of Scandinavians able to travel overseas, and changed their relations with their neighbours. One of the primary sources of profit for the Vikings had been slave-taking. The medieval Church held that Christians should not own fellow Christians as slaves, so chattel slavery diminished as a practice throughout northern Europe. This took much of the economic incentive out of raiding, though sporadic slaving activity continued into the 11th century. Scandinavian predation in Christian lands around the North and Irish Seas diminished markedly.\n",
"The Vikings began to raid Ireland from 795, with catastrophic effect for the monasteries in particular. However, although the Vikings established several longphorts, initially fortified encampments for over-wintering, and later towns like Dublin, Wexford, Cork, and Waterford (the first real urban centres in Ireland), the native Irish were more successful than the English and Scots in preventing large-scale Viking takeovers of areas for settlement by farmers. By about the year 1000, the situation was relatively stable, with a mixed population of Norse-Gaels in the towns and areas close to them, while the Gaelic Irish, whose elite had often formed political alliances, trading partnerships and inter-marriages with Viking leaders, remained in control of the great majority of the island, and were able to draw tribute from the Viking towns.\n",
"In the 9th century, Vikings began raiding and founding settlements along Ireland's coasts and waterways, which became its first large towns. Over time, these settlers were assimilated and became the Norse-Gaels. After the Norman invasion of 1169–71, large swathes of Ireland came under the control of Norman lords, leading to centuries of conflict with the native Irish. The King of England claimed sovereignty over this territory – the Lordship of Ireland – and the island as a whole. However, the Gaelic system continued in areas outside Anglo-Norman control. The territory under English control gradually shrank to an area known as the Pale and, outside this, many Hiberno-Norman lords adopted Gaelic culture.\n",
"BULLET::::- Economic model: The economic model states that the Viking Age was the result of growing urbanism and trade throughout mainland Europe. As the Islamic world grew, so did its trade routes, and the wealth which moved along them was pushed further and further north. In Western Europe, proto-urban centres such as the -wich town of Anglo-Saxon England began to boom during the prosperous era known as the \"Long Eighth Century\". The Scandinavians, like many other Europeans, were drawn to these wealthier \"urban\" centres, which soon became frequent targets of Viking raids. The connection of the Scandinavians to larger and richer trade networks lured the Vikings into Western Europe, and soon the rest of Europe and parts of the Middle East. In England, hoards of Viking silver, such as the Cuerdale Hoard and the Vale of York Hoard, offer good insight to this phenomenon.\n",
"During the Viking Age, Scandinavian men and women travelled to many parts of Europe and beyond, in a cultural diaspora that left its traces from Newfoundland to Byzantium. This period of energetic activity also had a pronounced effect in the Scandinavian homelands, which were subject to a variety of new influences. In the 300 years from the late 8th century, when contemporary chroniclers first commented on the appearance of Viking raiders, to the end of the 11th century, Scandinavia underwent profound cultural changes.\n"
] |
bonus in basketball | In NCAA men's basketball, each team can commit up to six team fouls per half without sending the other team to the line; a team under that limit is said to have "fouls to give." Starting with the seventh team foul, the fouled team is "in the bonus": on each non-shooting defensive foul it shoots one free throw, and only if that shot is made does it get a second (a "one-and-one"). From the tenth team foul onward it's a "double bonus": two free throws regardless of whether the first goes in. Offensive fouls don't award free throws.
[Source](_URL_0_) | [
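The foul-count thresholds above can be sketched as a tiny Python function (this follows the NCAA men's numbers quoted in the answer: six fouls to give, one-and-one from the seventh, double bonus from the tenth):

```python
def free_throws(team_fouls_this_half: int) -> str:
    """Free-throw situation for a non-shooting defensive foul,
    given the fouling team's team-foul total this half."""
    if team_fouls_this_half <= 6:
        return "no free throws (fouls to give)"
    elif team_fouls_this_half <= 9:
        return "one-and-one (second shot only if the first is made)"
    else:
        return "double bonus (two free throws regardless)"

print(free_throws(5))   # still has fouls to give
print(free_throws(7))   # bonus: one-and-one
print(free_throws(10))  # double bonus
```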
"Bonuses usually have multiple parts that are related by some common thread and may or may not be related to corresponding tossup. A team is usually rewarded with 10 points upon correctly answering each bonus part. Usually, only the team that answered the tossup correctly can answer the bonus questions, though some formats allow the opposing team to answer certain parts of the bonus not correctly answered by the team in control of the bonus, a gameplay element known as a \"bounceback\" or \"rebound\". Less-used types of bonus questions include list bonuses, which require players to give their answers from a requested list, and \"30-20-10\" bonuses, which give a number of discrete clues for a single answer in order of decreasing difficulty, with more points being awarded for giving the correct answer on an earlier clue. The 30-20-10 bonus was officially banned from ACF in 2008 and NAQT in 2009.\n",
"Many bookmakers offer first time users a signup bonus in the range $10–200 for depositing an initial amount. They typically demand that this amount is wagered a number of times before the bonus can be withdrawn. Bonus sport arbitraging, also known as matched betting, is a form of sports arbitraging where the bettor hedges or backs their bets as usual, but since they received the bonus, a small loss can be allowed on each wager (2–5%), which comes off their profit. In this way the bookmakers wagering demand can be met and the initial deposit and sign up bonus can be withdrawn with little loss.\n",
"Bonus is a special feature of the particular game theme, which is activated when certain symbols appear in a winning combination. Bonuses vary depending upon the game. Some bonus rounds are a special session of free spins (the number of which is often based on the winning combination that triggers the bonus), often with a different or modified set of winning combinations as the main game, and often with winning credit values increased by a specific multiplier, which is prominently displayed as part of the bonus graphics and/or animation (which in many cases is of a slightly different design or color scheme from the main game). In other bonus rounds, the player is presented with several items on a screen from which to choose. As the player chooses items, a number of credits is revealed and awarded. Some bonuses use a mechanical device, such as a spinning wheel, that works in conjunction with the bonus to display the amount won. (Some machines feature two or more of these bonus styles as part of the same game.)\n",
"Bonus hunting (also known as bonus bagging or bonus whoring) is a type of advantage gambling where turning a profit from casino, sportsbook and poker room bonus situations is mathematically possible. For example, the house edge in blackjack is roughly 0.5%. If a player is offered a $100 cashable bonus requiring $5000 in wagering on blackjack with a house edge of 0.5%, the expected loss is $25. Therefore, the player has an expected gain of $75 after claiming the $100 bonus.\n",
"There are bonuses for collecting coins (usually through gaps), for causing explosions through gaps of other balls, and chains for having a streak of always causing an explosion with each consecutive ball (coins and chain bonuses are a quick way to fill the bar). Time bonuses are also awarded if a player completes the level within ace time - ranging from thirty seconds to four minutes depending on the level.\n",
"BULLET::::- NCAA men's and NFHS: All team fouls after the sixth in a half are considered to be \"bonus\" free throws in both rule sets. However, in popular usage, the \"bonus\" refers to the situation when the fouling team has seven, eight, or nine fouls in a half. In this situation, the team fouled is said to be \"in the bonus\" and so gains a \"one and one\" opportunity on each non-shooting foul by the defense. The opposing team is \"over the limit.\" See also \"double bonus\" and \"penalty\".\n",
"In basketball, a rebound is the act of gaining possession of the ball after a missed field goal or free throw. The Basketball League Belgium Division I's rebounding title is awarded to the player with the highest rebounds per game average in a given regular season.\n"
] |
As the first European city founded in California, why didn't San Diego become as prominent as San Francisco or Los Angeles? | Without answering your question precisely, I'd like to point out that being the first city doesn't always make it important. For example, Jamestown, though the first English colony in America, no longer exists, even though it was the capital for 80 or so years.
Terrain and resources make a larger impact on the success of a colony. Oil was also found in Los Angeles, leading to a growth boom in the 19th century. San Diego also appears to have suffered population-wise while it was part of Mexico; I'm not sure whether that was for political reasons, but it happened.
I'll link these two sources which corroborate what I say, but I'm not guaranteeing I'm right. I found them after a quick Google search, so I hope they're decent.
[Oil in LA,](_URL_1_) and [decline in San Diego.](_URL_0_) | [
"San Diego has been called \"the birthplace of California\". Historically home to the Kumeyaay people, it was the first site visited by Europeans on what is now the West Coast of the United States. Upon landing in San Diego Bay in 1542, Juan Rodríguez Cabrillo claimed the area for Spain, forming the basis for the settlement of Alta California 200 years later. The Presidio and Mission San Diego de Alcalá, founded in 1769, formed the first European settlement in what is now California. In 1821, San Diego became part of the newly declared Mexican Empire, which reformed as the First Mexican Republic two years later. California became part of the United States in 1848 following the Mexican–American War and was admitted to the union as a state in 1850.\n",
"The first European visitors to California were Spanish maritime explorers led by Juan Rodríguez Cabrillo, who sailed up and down the coast in 1542. Spanish explorer Sebastián Vizcaíno again sailed along the California coast in 1602. Spanish ships associated with the Manila Galleon trade probably made emergency stops along the coast during the next 167 years, but no permanent settlements were established.\n",
"San Severo became the most populous city in Capitanata in the 16th century. The rich commerce, cultural vitality and self-government made it one of the major centers of the south, due to the presence of a large Venetian warehouse. Directly connected to the Fortore river was an important link between the Venetians and the Kingdom of Naples. Leandro Alberti (1550) writes of San Severo \"this castle is very rich, noble, civilized and filled with people, and is so wealthy that he envied any other in this region.\" The town also established ecclesiastical organizations, with four wealthy parishes, several hospitals, some religious confraternities and nine religious institutes.\n",
"The culture of San Diego, California is influenced heavily by American and Mexican cultures due to its position as a border town, its large Hispanic population, and its history as part of Spanish America and Mexico. San Diego's longtime association with the U.S. military also contributes to its culture. Present-day culture includes many historical and tourist attractions, a thriving musical and theatrical scene, numerous notable special events, a varied cuisine, and a reputation as one of America's premier centers of craft brewing.\n",
"In 1602, the Spanish began to show interest in California and sent Sebastián Vizcaíno, a pearl fisher, to explore the area. Vizcaino traveled the coast naming many of the cities that are important to the California coast today such as San Diego, Santa Barbara and Monterey. Spain finally chose to create Vizcaino's suggested chain of missions when it was proven that California was indeed part of the continent. The goal of creating the chain was given to the Franciscan Order. While Spain had economic motives for establishing a stronghold in California, the Franciscan order of the Catholic Church also had religious motives. With these factors in mind the missions were created in order to control the coast so that the ships from Spain would remain safe as well as bring the Natives to the Catholic faith. Re-education became the method for reaching Spain's religious and economic goals as they strived to convert the Native Americans to Catholicism as well as make them loyal Spanish subjects.\n",
"Urban settlement began in 1889, when descendants of Santiago Argüello and Augustín Olvera entered an agreement to begin developing the city of Tijuana. The date of the agreement, July 11, 1889, is recognized as the founding of the city. Tijuana saw its future in tourism from the beginning. From the late 19th century to the first few decades of the 20th century, the city attracted large numbers of Californians coming for trade and entertainment. The California land boom of the 1880s led to the first big wave of tourists, who were called \"excursionists\" and came looking for echoes of the famous novel \"Ramona\" by Helen Hunt Jackson.\n",
"The first European in the state of California was conquistador Juan Rodriguez Cabrillo, a Portuguese explorer sailing on behalf of the Spanish Empire, in 1542; later explorers included Sir Francis Drake and Sebastián Vizcaíno. However, no explorer had yet discovered the Sacramento Valley region nor the Golden Gate strait, which would remain undiscovered until, respectively, 1808 and 1623. A number of conquistadors had completed cursory examinations of the region by the mid-18th century, including Juan Bautista de Anza and Pedro Fages, but none viewed the region as a potentially valuable region to colonize. Neither did Gabriel Moraga, who was the first European to enter the Sierra in 1808 and was responsible for naming the Sacramento River, although he incorrectly placed the rivers in the region. However, Padres Abella and Fortuni arrived in the region in 1811 and returned positive feedback to the Roman Catholic Church, although the church disregarded their finds as they were in conflict with all previous views of the area. The Mexicans, who had declared independence in 1821, shared Spanish sentiments, and the area remained uncolonized until the arrival of John Sutter in 1839.\n"
] |
What is the psychological reason for intentionally revisiting memories/photos/things/experiences that have hurt us? | It's an attempt to heal wounds that still ache. What you are referring to is benign, but repetition compulsion drives people to get into therapy for harmful relationship patterns.
_URL_0_
| [
"The prospect of memory erasure or alteration raises ethical issues. Some of these concern identity, as memory seems to play a role in how people perceive themselves. For example, if a traumatic memory were erased, a person might still remember related events in their lives, such as their emotional reactions to later experiences. Without the original memory to give them context, these remembered events might prompt the subjects to see themselves as emotional or irrational people. In the United States, the President's Council on Bioethics devoted a chapter in its October 2003 report \"Beyond Therapy\" to the issue. The report discourages the use of drugs that blunt the effect of traumatic memories, warning against treating human emotional reactions to life events as a medical issue.\n",
"When simple objects such as a photograph, or events such as a birthday party, bring traumatic memories to mind people often try to bar the unwanted experience from their minds so as to proceed with life, with varying degrees of success. The frequency of these reminders diminish over time for most people. There are strong individual differences in the rate at which the adjustment occurs. For some the number of intrusive memories diminish rapidly as the person adjusts to the situation, whereas for others intrusive memories may continue for decades with significant interference to their mental, physical and social well being.\n",
"Issues of self-deception arise when altering memories as well. Avoiding the pain and difficulty of dealing with a memory by taking a drug may not be an honest method of coping. Instead of dealing with the truth of the situation a new altered reality is created where the memory is dissociated from pain, or the memory is forgotten altogether. Another issue that arises is exposing patients to unnecessary risk. Traumatic experiences do not necessarily produce a long term traumatic memory, some individuals learn to cope and integrate their experience and it stops affecting their lives quite quickly. If drug treatments are administered when not needed, as when a person could learn to cope without drugs, they may be exposed to side effects and other risks without cause. Loss of painful memories may actually end up causing more harm in some cases. Painful, frightening or even traumatic memories can serve to teach a person to avoid certain situations or experiences. By erasing those memories their adaptive function, to warn and protect individuals may be lost. Another possible result of this technology is a lack of tolerance. If the suffering induced by traumatic events were removable, people may become less sympathetic to that suffering, and put more social pressure on others to erase the memories.\n",
"Susan Clancy joined the Harvard University psychology department as a graduate student in 1995. There she began to study memory and the idea of repressed memories due to trauma. The debate in this field was strong at the time, with many clinicians arguing that we repress memories to protect ourselves from trauma that would be too hard to bear. Many cognitive psychologists, on the other hand, argued that true trauma is almost never forgotten, and that memories brought up years later through hypnosis are most likely false.\n",
"Some memory issues are due to stress, anxiety, or depression. A traumatic life event, such as the death of a spouse, can lead to changes in lifestyle and can leave an elderly person feeling unsure of themselves, sad, and lonely. Dealing with such drastic life changes can therefore leave some people confused or forgetful. While in some cases these feelings may fade, it is important to take these emotional problems seriously. By emotionally supporting a struggling relative and seeking help from a doctor or counselor, the forgetfulness can be improved.\n",
"Reminiscence of old and fond memories is another alternative form of treatment, especially for the elderly who have lived longer and have more experiences in life. It is a method that causes a person to recollect memories of their own life, leading to a process of self-recognition and identifying familiar stimuli. By maintaining one’s personal past and identity, it is a technique that stimulates people to view their lives in a more objective and balanced way, causing them to pay attention to positive information in their life stories, which would successfully reduce depressive mood levels.\n",
"When situations or memories occur that we are unable to cope with, we push them away. It is a primary ego defence mechanism that many psychotherapists readily accept. There have been numerous studies which have supported the psychoanalytic theory that states that murder, childhood trauma and sexual abuse can be repressed for a period of time and then recovered in therapy.\n"
] |
How deep can an open pit mine be? | Not a complete answer, but it's something:
Bingham Canyon Mine, located near Salt Lake City, is the world's deepest man-made open pit excavation. The mine is 2.75 miles (4.4 km) across and 0.75 mile (1.2 km) deep. Since mining operations started in 1906, Bingham Canyon Mine has been the granddaddy of all copper mines. In terms of sheer size, Bingham Canyon is simply the largest copper mine in the USA. If the mine were a stadium, it could seat nine million people.
_URL_0_ | [
"The Hranice Abyss (), the English name adopted by the local tourist authorities, is the deepest flooded pit cave in the world. It is a karst sinkhole located near the town of Hranice (Přerov District). The greatest confirmed depth (as of 27 September 2016) is 473 m (404 m under the water level), which makes it the deepest known underwater cave in the world. Moreover, the expected depth is 800–1200 m.\n",
"This list of deepest mines includes operational and non-operational mines that are at least , which is the depth of Veryovkina Cave, the deepest known natural cave in the world. The depth measurements in this list represent the difference in elevation from the entrance of the mine to the deepest excavated point.\n",
"The mine site covers about , stretching in a mostly linear shape about 1600 m (5,200 ft) long and 150 to 600 m (500 to 2,000 ft) wide. The mine is of open pit construction, and reaches about 600 m (1,900 ft) deep at its deepest point. The open cut closed in 2010.\n",
"The pit mines were closed in 1986 due to high operational costs and low yields, but during their heyday they were among the largest and deepest in the world. The total tunnel length is 322 km, with a depth of between 610 m and 700 m.\n",
"The mine was worked to a depth of 575 feet, with several levels 100 feet apart below the 250 foot level. The first levels, at 250 and 350 feet, extended nearly 1,000 feet in length. The 450 foot level did not extend as far.\n",
"The mine is the lowest point in Japan at below sea level, and digging still continues. The open pit has a north-south length of approximately 2 kilometers and an east-west width of 800 meters. The limestone excavated is transported by a 10 kilometer pipeline with belt conveyor to Port of Hachinohe. \n",
"The largest underground mine is Kiirunavaara Mine in Kiruna, Sweden. With of roads, 40 million tonnes of annually produced ore, and a depth of , it is also one of the most modern underground mines. The deepest borehole in the world is Kola Superdeep Borehole at , but this is connected to scientific drilling, not mining.\n"
] |
How accurate is the movie Gandhi (1982)? I read some articles slamming Gandhi (the actual person), and I don't know what to make of them. | The problem here is that there is a vast disjuncture between Gandhi the historical figure and the popular mythology that has built up around him. The 1982 biopic is a reflection of this latter aspect of Gandhi's image. The film presents Gandhi and his world in starkly Manichean terms of light versus dark. Note that Kingsley's performance (almost universally praised among critics) is one that highlights Gandhi's stoic demeanor and moral gravitas, and thus accentuates this duality. Many of the other historical characters in the film shrink compared to him.
The reality is that the historical Gandhi inhabited a highly complex political world, one that often belies reduction to simple terms like good vs. evil or justice vs. injustice. That Gandhi inhabited and operated in such a world is not even an open secret, but the strength of his mythologized public image means that uncovering the historical record carries far more weight for him than it does for less valorized historical figures. For example, there have been recent publications of Nixon audio tapes that highlight his various racial, sexual, and political prejudices, often expressed in language the mods here would require a NSFW tag for, yet these revelations are somehow not as alarming as Gandhi using a formal greeting of "dear Friend" in a letter to Hitler (which the critic in the OP's first link finds appalling).
Jawaharlal Nehru actually advised ~~David~~ Richard Attenborough not to deify Gandhi, as "that is what we have done in India and he was too great a man to be deified." Although Attenborough claimed to follow Nehru's advice, *Gandhi* falls into the trap of many biopics by oversimplifying its subject and the context of the times.
*Source*
Carnes, Mark C. *Past Imperfect: History According to the Movies*. New York: H. Holt, 1995. | [
"\"Gandhi\" was released in India on 30 November 1982, in the United Kingdom on 3 December, and in the United States on 10 December. It was nominated for Academy Awards in eleven categories, winning eight, including Best Picture and Best Director for Attenborough, Best Actor for Ben Kingsley, and Best Screenplay Written Directly for the Screen for Briley. The film was screened retrospectively on 12 August 2016 as the opening film at the Independence Day Film Festival jointly presented by the Indian Directorate of Film Festivals and Ministry of Defence, commemorating the 70th Indian Independence Day. The screenplay of \"Gandhi\" is available as a published book.\n",
"Gandhi is a 1982 epic historical drama film based on the life of Mohandas Karamchand Gandhi, the leader of India's non-violent, non-cooperative independence movement against the United Kingdom's rule of the country during the 20th century. The film, a British-Indian co-production, was written by John Briley and produced and directed by Richard Attenborough. It stars Ben Kingsley in the title role.\n",
"BULLET::::- Hay, Stephen. \"Attenborough's 'Gandhi,'\" \"The Public Historian,\" 5#3 (1983), pp. 84–94 in JSTOR; evaluates the film's historical accuracy and finds it mixed in the first half of the film and good in the second half\n",
"BULLET::::- 2000: Gandhi is portrayed by Naseeruddin Shah in \"Hey Ram\". A film made by Kamal Haasan, it portrays a would-be assassin of Gandhi and the dilemma faced by the would-be assassins in the turmoil of post-partition India.\n",
"BULLET::::- \"Gandhi vs. Gandhi\" is a Marathi play that has been translated into several languages. Its primary plot is the relationship between Gandhi and his estranged son but it also deals briefly with the assassination.\n",
"One day someone found Gandhi's picture and Uttam popped a ballon while their father saw who believed he killed Gandhi, with Uttam replying \"Maine Gandhi Ko Nahin Mara\" while his father hit him. Later they go to another doctor named Siddharth (Parvin Dabas) who helps Uttam when he thinks that his house is jail and people poisoned his food because he killed Gandhi. Siddharth eats the food so Uttam knows the food is not poisoned. Later,in court, a gun expert says that a toy gun (which Uttam believes he killed Gandhi with) cannot kill anyone.\n",
"Some of the criticism was also directed towards the treatment of Gandhi. Mahesh notes that he \"appears in rather poor light\" and was depicted as making \"little effort\" to secure a pardon for Bhagat, Sukhdev and Rajguru. Diwanji concurs with Mahesh while also saying that the Gandhi–Irwin Pact as seen in the film would make the audience think that Gandhi \"condemned the trio to be hanged by inking the agreement\" while pointing out the agreement itself \"had a different history and context.\" Kehr believed the film's depiction of Gandhi was its \"most interesting aspect\". He described Surendra Rajan's version of Gandhi as \"a faintly ridiculous poseur, whose policies play directly into the hands of the British\" and in that aspect, he was very different from \"the serene sage\" portrayed by Ben Kingsley in Richard Attenborough's \"Gandhi\" (1982). Like Diwanji, Elley also notes how the film denounces Gandhi by blaming him \"for not trying very hard\" to prevent Bhagat's execution.\n"
] |
Did Germany pay reparations for World War I during WWII? | Nazi Germany did not pay reparations during WWII. Reparations had been suspended for one year by the Hoover Moratorium in 1931, and were suspended indefinitely at the Lausanne Conference in 1932. When WWII ended, the Allies assessed a value equivalent to 16 billion dollars that Germany was to pay to complete the reparations payments for WWI; the last of these payments was made in 2010.
"World War I reparations owed by Germany were stated in gold marks in 1921, 1929 and 1931; this was the victorious Allies' response to their fear that vanquished Germany might try to pay off the obligation in paper marks. The actual amount of reparations that Germany was obliged to pay out was not the 132 billion marks cited in the London Schedule of 1921 but rather the 50 billion marks stipulated in the A and B Bonds. The actual total payout from 1920 to 1931 (when payments were suspended indefinitely) was 20 billion German gold marks, worth about or . Most of that money came from loans from New York bankers.\n",
"BULLET::::- Germany accepted responsibility for the damages and losses caused by the war and would make reparation payments to the Allies; the Reparations Committee in 1921 would set total reparation payments to gold marks or $5 billion in gold.\n",
"The Ruhr region had been occupied by Allied troops in the aftermath of the First World War. Under the terms of the Treaty of Versailles (1919), which formally ended the war with the Allies as the victors, Germany accepted responsibility for the damages caused in the war and was obliged to pay war reparations to the various Allies. Since the war was fought predominately on French soil, these reparations were paid primarily to France. The total sum of reparations demanded from Germany—around 226 billion gold marks (US $ billion in 2020)—was decided by an Inter-Allied Reparations Commission. In 1921, the amount was reduced to 132 billion (at that time, $31.4 billion (US $442 billion in 2020), or £6.6 billion (UK£284 billion in 2020)). Even with the reduction, the debt was huge. As some of the payments were in raw materials, which were exported, German factories were unable to function, and the German economy suffered, further damaging the country's ability to pay.\n",
"World War I reparations were war reparations imposed during the Paris Peace Conference upon the Central Powers following their defeat in the First World War by the Allied and Associate Powers. Each of the defeated powers was required to make payments in either cash or kind. Because of the financial situation Austria, Hungary, and Turkey found themselves in after the war, few to no reparations were paid and the requirements for reparations were cancelled. Bulgaria, having paid only a fraction of what was required, saw its reparation figure reduced and then cancelled. Historians have recognized the German requirement to pay reparations as the \"chief battleground of the post-war era\" and \"the focus of the power struggle between France and Germany over whether the Versailles Treaty was to be enforced or revised\".\n",
"At the conclusion of World War I, the Allied and Associate Powers included in the Treaty of Versailles a plan for reparations to be paid by Germany; 20 billion gold marks was to be paid while the final figure was decided. In 1921, the London Schedule of Payments established the German reparation figure at 132 billion gold marks (separated into various classes, of which only 50 billion gold marks was required to be paid). German industrialists in the Ruhr Valley, who had lost factories in Lorraine which went back to France after the war, demanded hundreds of millions of marks compensation from the German government. Despite its obligations under the Versailles Treaty, the German government paid the Ruhr Valley industrialists, which contributed significantly to the hyperinflation that followed. For the first five years after the war, coal was scarce in Europe and France sought coal exports from Germany for its steel industry. The Germans needed coal for home heating and for domestic steel production, having lost the steel plants of Lorraine to the French.\n",
"\"No postwar German government believed it could accept such a burden on future generations and survive ...\". Paying reparations is a classic punishment of war but in this instance it was the \"extreme immoderation\" that caused German resentment. Germany made its last World War I reparation payment on 3 October 2010, ninety-two years after the end of World War I. Germany also fell behind in their coal payments. They fell behind because of a passive resistance movement against the French. In response, the French invaded the Ruhr, the region filled with German coal, and occupied it. At this point the majority of Germans were enraged with the French and placed the blame for their humiliation on the Weimar Republic. Adolf Hitler, a leader of the Nazi Party, attempted a coup d'état against the republic to establish a Greater German Reich known as the Beer Hall Putsch in 1923. Although this failed, Hitler gained recognition as a national hero amongst the German population. The demilitarized Rhineland and additional cutbacks on military infuriated the Germans. Although it is logical that France would want the Rhineland to be a neutral zone, the fact that France had the power to make that desire happen merely added onto the resentment of the Germans against the French. In addition, the Treaty of Versailles dissolved the German general staff and possession of navy ships, aircraft, poison gas, tanks, and heavy artillery was made illegal. The humiliation of being bossed around by the victor countries, especially France, and being stripped of their prized military made the Germans resent the Weimar Republic and idolize anyone who stood up to it.\n",
"Following the Nazi seizure of power in 1933, payments of reparations were officially abandoned. West Germany after World War II did not resume payment of reparations as such, but did resume the payment of debt that Germany had acquired in the inter-war period to finance its reparation payments, paying off the principal on those debts by 1980. The interest on those debts was paid off on 3 October 2010, the 20th anniversary of German reunification.\n"
] |
Who decided north is up and south is down? | u/terminus-trantor and u/qed1 worked on a similar question just a few days ago:
[Was North always on the top of maps?](_URL_0_) | [
"The visible rotation of the night sky around the visible celestial pole provides a vivid metaphor of that direction corresponding to up. Thus the choice of the north as corresponding to up in the northern hemisphere, or of south in that role in the southern, is, prior to worldwide communication, anything but an arbitrary one. On the contrary, it is of interest that Chinese and Islamic culture even considered south as the proper top end for maps.\n",
"BULLET::::- Up is a metaphor for north. The notion that north should always be up and east at the right was established by the Greek astronomer Ptolemy. The historian Daniel Boorstin suggests that perhaps this was because the better-known places in his world were in the northern hemisphere, and on a flat map these were most convenient for study if they were in the upper right-hand corner.\n",
"The existence of \"The North\" implies the existence of \"The South\", and the socio-economic divide between North and South. The term \"the North\" has in some contexts replaced earlier usage of the term \"\"the West\"\", particularly in the critical sense, as a more robust demarcation than the terms \"\"West\"\" and \"East\". The North provides some absolute geographical indicators for the location of wealthy countries, most of which are physically situated in the Northern Hemisphere, although, as most countries are located in the northern hemisphere in general, some have considered this distinction equally unhelpful. Modern financial services and technologies are largely developed by Western nations: Bitcoin, most known digital currency is subject to skepticism in the Eastern world whereas Western nations are more open to it.\n",
"The north–south component of the analemma results from the change in the Sun's declination (extreme changes in height above the horizon during the summer and winter) due to the tilt of Earth's axis of rotation. The east–west component results from the nonuniform rate of change of the Sun's right ascension, governed by combined effects of Earth's axial tilt and orbital eccentricity (earth's orbital speed changing along its orbit around the sun).\n",
"Up North is a term used in England primarily by Southerners to refer to the North of England. In the United States, it sees the same usage, primarily by those in the South to refer to the Northeast and Midwestern regions of the country. It's also used in the Republic of Ireland when referring to Northern Ireland colloquially.\n",
"There is also controversy as to what constitutes the South given that it extends much farther longitudinally than the North of the country; some commentators have placed the West Country (in this case, Bristol, Somerset, Devon and Cornwall) into a region of its own because the poverty in some of these areas is often as widespread as it is in the North, and political support is also focused on the usually widespread Liberal Democrats, until the 2015 general election when the Conservatives took virtually all the seats west of Bristol.\n",
"BULLET::::- In the Northern Hemisphere, north is to the left. The Sun rises in the east (far arrow), culminates in the south (to the right) while moving to the right, and sets in the west (near arrow). Both rise and set positions are displaced towards the north in midsummer and the south in midwinter.\n"
] |
What was the technique for harvesting ice on the Great Lakes for iceboxes/traincars, etc... | I used to work at [Dundurn Castle](_URL_0_) (it's not really a castle).
In the side hall, where we had visitors wait for the next tour, hung a print depicting a 19th century ice harvest.
It showed horse teams on the ice and groups of men using long saws (like a traditional lumberjack saw, but with a handle only on one side), cutting large blocks of ice which were then lifted with massive pincers onto the waiting drays.
The "Castle" had an ice pit. Of course it was no longer in use (the harbour doesn't regularly freeze solid enough anymore even for ice skating much less ice harvest), but we were taught to describe to the visitors how these ice blocks would be lowered into the pit and covered with sawdust. Apparently they would last throughout the summer. I can well believe it as it was always very cool in the hallway outside - even without a deep pit filled with ice.
Edit: I'm very curious about whether First Nations in Canada harvested ice - really hoping someone can answer this question. | [
"Hand-cranked machines' ice and salt mixture must be replenished to make a new batch of ice cream. Usually, rock salt is used. The salt causes the ice to melt and lowers the temperature in the process, below fresh water freezing, but the water does not freeze due to the salt content. The sub-freezing temperature helps slowly freeze and make the ice cream. Some small manual units comprise a bowl with coolant-filled hollow walls. These have a volume of approximately one pint (500 ml). The paddle is often built into a plastic top. The mixture is poured into the frozen bowl and placed in a freezer. The paddles are hand-turned every ten minutes or so for a few hours until reaching the desired consistency and flavor.\n",
"An unusual method of making ice-cream was done during World War II by American fighter pilots based in the South Pacific. They attached pairs of cans to their aircraft. The cans were fitted with a small propeller, this was spun by the slipstream and drove a stirrer, which agitated the mixture while the intense cold of high altitude froze it. B-17 crews in Europe did something similar on their bombing runs as did others.\n",
"To make ice cream in the United States during the eighteenth century, cooks and confectioners needed a “larger wooden bucket”, “a metal freezing pot with a cover, called a sorbetiere”, ice, salt, and the cream based mixture that they planned on freezing. The process starts with finding ice of a “manageable” size, then mixing it with salt and adding the mixture to a bucket. Together, the ice and the salt create a refrigerating effect. The cook or confectioner adds their ice cream mixture to a freezing pot and then puts the cover on it. The freezing pot is put into the wooden bucket, where it is stirred and shaken to give the ice cream a creamy consistency. Occasionally, the freezing pot has to be opened, so that the frozen ice cream can be removed from the sides. The work was done by slaves and servants.\n",
"In 1939, Zamboni created the Iceland Skating Rink in Paramount, California. To resurface the skating rink, three or four workers scraped, washed, and squeegeed the ice. Then they added a thin layer of water to make fresh ice. This process was extremely time consuming, and Zamboni wanted to find a more efficient method.\n",
"Ice collected from the Congamond Lakes was once stored in massive ice houses in blocks and delivered via railway for food storage from New York City to Boston before electric refrigerators were available for public purchase.\n",
"From about 1900 to 1936, Tobyhanna lakes were the site of active ice industries. The ice was cut from the lakes during the winter and stored in large barn-like structures. During the rest of the year, the ice was added to railroad boxcars hauling fresh produce and meats destined for East Coast cities.\n",
"The ice factory was built in 1865 after Joachim Moinat had the idea to use the very pure ice that covered the lake every year in his café. This purity meant the ice could be used as is without purification. To store the ice, he built a wooden hut. In 1875, a second building was erected. This one was equipped with cavity walls in which the cavity was filled with sawdust for better thermal insulation.\n"
] |
How were elite divisions in ww2 such as the 101st airborne from Band of Brothers able to have replacements often even though training took 2 years? | Men assigned originally to airborne divisions of the U.S. Army trained for a much longer period than those assigned as loss replacements overseas. The first standardized training program for airborne divisions took effect on 4 November 1942. The divisional training program was to take 37 weeks, discounting any additional training that may have been taken to teach concepts learned from overseas, training in special courses such as cooperation with troop carrier groups, or participation in corps or army maneuvers. The 82nd and 101st Airborne Divisions, created on 15 August 1942, were scheduled to begin their sixth week of unit training on 9 November 1942 when presented with the new program.
Training Type|Length
:--|:--
Individual Training|13 weeks
Unit Training|13 weeks
Combined Training|11 weeks
**Individual Training**
> During the 13 weeks of individual training all troops will be hardened physically and mentally to withstand modern combat requirements. All individuals will be conditioned to withstand extreme fatigue, loss of sleep, limited rations, and existence in the field with only the equipment that can be carried by parachute, glider, or transport aircraft. An indication of individual proficiency and a basis of test is considered the ability to make a continuous foot march of twenty-five (25) miles in eight (8) hours, a five (5) mile march in one (1) hour and a nine (9) mile march in two (2) hours, with full equipment.
> Men will be mentally and physically conditioned for battlefield environment by obstacle courses that overtax endurance as well as muscular and mental reactions, by passage of wire obstacles so situated as to permit overhead fire, by a night fighting course with sound only as an indication of danger, and a street fighting course with booby traps and sudden appearing targets. Live ammunition will be employed in all three tests.
**Unit Training**
> By the end of the 9th week of unit training infantry battalions will be able to function efficiently, by day or night, independently or reinforced.
> Field Artillery training will, in general, follow "Unit Training Program for Field Artillery (Modified for Airborne Field Artillery)". Stress will be placed on decentralization within batteries to the end that self-contained gun sections will be capable of delivering prompt fire, using both direct and indirect laying with hastily computed firing data in the early stages of any action. Training will also include the operation of batteries, battalions, and division artillery as units in order that the artillery can be capable of massing its fire.
> The unit training phase of infantry battalions will include tactical exercises in which the battalion is supported by a battery of field artillery.
> Division engineers will be trained primarily in engineer combat duties. See inclosure a. (Clearance and repair of airdromes or landing strips will be performed by aviation engineers.)
> Medical units will be trained for normal functions in ground operations and also will be trained in evacuation by air.
> Quartermaster units will be trained in all phases of ground and aerial supply, to include local defense of supply installations.
> Ordnance units will be trained to repair standard ordnance and known enemy weapons and vehicles.
> Antiaircraft elements: (See Inclosure No. 2)
> Signal units will be trained to operate all communications equipment issued to the division emphasizing the capability of all personnel to operate all equipment.
> All units will be prepared to either enter combat immediately on landing or to move promptly by marching against an objective.
> During unit training, combat firing exercises, emphasizing infiltration tactics, rapid advance, and continuous fire support will be planned to conclude each phase.
> Battalion tactical exercises, whenever possible, will include training in air-ground liaison, proper and prompt requests for air support, and air to ground recognition training for aerial supply.
> Unit training will be concluded by tactical exercises including separate glider and parachute regiments, artillery and engineer battalions, and divisional special units (company).
**Combined Training**
> Regimental combat team and divisional tactical exercises will be held during this period. Tactical situations which require the complete staff planning of an airborne attack will be the background of each problem, but the paramount importance of the ground operation will be impressed on staffs and troops. All problems to be solved will envision, or will actually require, the presence of appropriate troop carrier and air support units.
Men could only be assigned to the parachute troops at their own request. On 25 May 1942, the Secretary of War directed that infantry replacement training centers each provide 105 men per week to the Airborne Command who were medically qualified for parachute training. The standards prescribed were very strict:
> Qualifications set forth were those standardized as a result of innumerable medical reports and examinations. The volunteer must be alert, active, supple, with firm muscles and sound limbs, capable of development into an aggressive individual fighter, with great endurance. Age requirements were: Majors not over forty years of age; captains and lieutenants, not over thirty-two; and enlisted men, eighteen to thirty-two, inclusive. Medium weight was desired, maximum not to exceed 185 pounds; height, not to exceed seventy-two inches; vision, maximum visual acuity of twenty-forty, each eye; blood pressure, persistent systolic pressure of 140MM, or persistent diastolic pressure about 100MM to disqualify. Also on the disqualification list were recent venereal disease, evidence of highly nervous system, lack of normal mobility in every joint, poor or unequally developed musculature, poor coordination, lack of at least average athletic ability, history of painful arches, recurrent knee and ankle injuries, recent fractures, old fractures with deformity, pain or limitation of motion, recurrent dislocations, recent severe illness, operation, or chronic disease.
Men could be accepted at any time from arrival at a replacement training center to the completion of 13 weeks of individual training. On 10 June 1942, the weekly quota for volunteers from each replacement training center was revised upward to 125; this quota could be exceeded each week, under the provision that a replacement training center would provide less than 500 candidates each month. On 15 June 1942, men were only to be accepted after they had completed at least 8 weeks of individual training. In early 1943, the Airborne Command began to gear up to train larger numbers of loss replacements as the first airborne units entered combat. Men thus assigned to the Parachute School at Fort Benning or, after 9 April 1942, Fort Bragg, completed 13 (later 14, and then 17) weeks of individual and specialty training with an emphasis on placement in a parachute unit, and an additional 5-week course ensuring that the recruits had completed the training regimen to the best of their ability, had qualified on their assigned weapon(s), had completed a transition firing and battle indoctrination course, and had completed a squad tactical jump. If recruits were not called for shipment in a prompt manner, an additional 4-week small-unit tactics course was to be taught; when interrupted by a call for shipment, men were to be taken from the training unit farthest along in these activities.
**Source:**
Ellis, John T. *The Army Ground Forces: The Airborne Command and Center, Study No. 25*. Washington: Historical Section, Army Ground Forces, 1946. | [
"Before the Second World War, RASC recruits were required to be at least 5 feet 2 inches tall and could enlist up to 30 years of age (or 35 for tradesmen in the Transport Branch). They initially enlisted for six years with the colours and a further six years with the reserve (seven years and five years for tradesmen and clerks, three years and nine years for butchers, bakers and supply issuers). They trained at Aldershot.\n",
"At the end of the Second World War most of the remaining Guards Airborne Divisions were redesignated Guards Rifle Divisions. At the end of June 1945 this has happened to the 4th, 5th, 6th, 7th, and 9th, which became respectively the 111, 112, 113, 115, and 116th Guards Rifle Divisions. In November, it happened to the 1st, 3rd, and 10th Airborne Divisions, which became the 124th, 125th, and 126th Guards Rifle Divisions.\n",
"After World War II, this division was reorganized primarily as a training division for Reserve forces. After several decades, the division then expanded its role to conducting entry-level training for soldiers of all branches of the Army in the northwestern United States. Its role and size have expanded over that time due to consolidation of other training commands, and the division subsequently took charge of a number of brigades specializing in various entry-level training for soldiers of all types.\n",
"It was envisioned that the duplicating process and recruiting the required numbers of men would take no more than six months. Some TA divisions had made little progress by the time the Second World War began; others were able to complete this work within a matter of weeks. The 66th Infantry Division finally became active on 27 September 1939, although its constituent units had already formed and had been administered by the 42nd (East Lancashire) Infantry Division. The division was headquartered in Manchester, and was again composed of the 197th, 198th, and 199th Infantry Brigades. Major-General Arthur William Purser was given command, and the division was assigned to Western Command. In November, the division was transferred to Northern Command. On 10 January, Major-General Alan Cunningham was given command of the division. By May, the division was based north of Manchester, spread out across parts of Lancashire and Yorkshire.\n",
"After activation the division remained in the United States to complete its training. This training was completed by September 1944, but had to be extended by a further four months when the division provided replacements for the 82nd and 101st Airborne Divisions. The division also encountered delays in mounting large-scale training exercises due to a lack of transport aircraft in the United States. This shortage was caused by the 82nd and 101st Airborne Divisions taking priority over the 13th in terms of equipment due to the two divisions serving in combat in Europe. As a consequence of these delays the division was not fully trained and combat-ready until January 1945, and was transferred to France and the European Theater of Operations in February.\n",
"Inactivated on 30 November 1945 in France, the regiment was redesignated as the 516th Airborne Infantry Regiment on 18 June 1948 and active from 6 July 1948 to 1 April 1949 and from 25 August 1950 to 1 December 1953 at Camp Breckinridge, Kentucky. As was the case with many combat divisions of World War II fame, the colors of the 101st Airborne Division and its subordinate elements were active only as training units and were not organized as parachute or glider units.\n",
"Between August 1943 and February 1945, the 13th Airborne Division remained in the United States and did not serve overseas or participate in any airborne operations, as it began training to become a combat-ready formation. In comparison, the 82nd and 101st Airborne Divisions had been assigned as active combat formations to serve overseas in Europe, the 11th Airborne Division was scheduled to be deployed to the Pacific Theater of Operations, and the 17th Airborne Division had been assigned as the United States strategic reserve formation. During this period, the activities of the division primarily involved airborne training, as well as taking part in several training exercises. However, while airborne training for the first four American airborne divisions was conducted during 1943, the 13th encountered considerable difficulties when it came to its turn for training. By the last few months of 1943 the 82nd and 101st Airborne Divisions had conducted airborne exercises and finished their training, and had then been transferred to Europe; to ensure the divisions could conduct airborne operations, a majority of the transport aircraft available in the United States had been sent with them, and even more were transferred to Europe as replacements after the American airborne landings in Normandy in June 1944. Consequently, very few transport aircraft were available for use by the 13th, and the original training exercise for the division that had been scheduled for June 1944 had to be postponed until 17 September, and then once again until 24 September.\n"
] |
what's the difference between a fetish and a kink in the bedroom?
"In human sexuality, kinkiness is the use of non-conventional sexual practices, concepts or fantasies. The term derives from the idea of a \"bend\" (cf. a \"kink\") in one's sexual behaviour, to contrast such behaviour with \"straight\" or \"vanilla\" sexual mores and proclivities. It is thus a colloquial term for non-normative sexual behaviour. The term \"kink\" has been claimed by some who practice sexual fetishism as a term or synonym for their practices, indicating a range of sexual and sexualistic practices from playful to sexual objectification and certain paraphilias. In the 21st century the term \"kink\", along with expressions like BDSM, leather and fetish, has become more commonly used than the term paraphilia. Some universities also feature student organizations focused on kink, within the context of wider LGBTI concerns. Psychologist Margie Nichols describes kink as one of the \"variations that make up the 'Q' in LGBTQ\".\n",
"Sexual fetishism or erotic fetishism is a sexual fixation on a nonliving object or nongenital body part. The object of interest is called the fetish; the person who has \"a fetish\" for that object is a fetishist. A sexual fetish may be regarded as a non-pathological aid to sexual excitement, or as a mental disorder if it causes significant psychosocial distress for the person or has detrimental effects on important areas of their life. Sexual arousal from a particular body part can be further classified as partialism.\n",
"Sexual activity can be classified into the gender and sexual orientation of the participants, as well as by the relationship of the participants. For example, the relationships can be ones of marriage, intimate partners, casual sex partners or anonymous. Sexual activity can be regarded as conventional or as alternative, involving, for example, fetishism, paraphilia, or BDSM activities. Fetishism can take many forms ranging from the desire for certain body parts, for example large breasts, navels or foot worship. The object of desire can often be shoes, boots, lingerie, clothing, leather or rubber items. Some non-conventional autoerotic practices can be dangerous. These include erotic asphyxiation and self-bondage. The potential for injury or even death that exists while engaging in the partnered versions of these fetishes (choking and bondage, respectively) becomes drastically increased in the autoerotic case due to the isolation and lack of assistance in the event of a problem.\n",
"In common parlance, the word \"fetish\" is used to refer to any sexually arousing stimuli, not all of which meet the medical criteria for fetishism. This broader usage of \"fetish\" covers parts or features of the body (including obesity and body modifications), objects, situations and activities (such as smoking or BDSM). Paraphilias such as urophilia, necrophilia and coprophilia have been described as fetishes.\n",
"Kink sexual practices go beyond what are considered conventional sexual practices as a means of heightening the intimacy between sexual partners. Some draw a distinction between kink and fetishism, defining the former as enhancing partner intimacy, and the latter as replacing it. Because of its relation to conformist sexual boundaries, which themselves vary by time and place, the definition of what is and is not kink varies widely as well.\n",
"He studied sexual behavior, coining the term erotic fetishism to describe individuals whose sexual interests in nonhuman objects, such as articles of clothing, and linking this to the after-effects of early impressions in an anticipation of Freud.\n",
"In the 18th century, the Khoikhoi people recognised the terms , which refers to a man who is sexually receptive to another man, and , which refers to same-sex masturbation usually among friends. Anal intercourse and sexual relations between women also occurred, though more rarely.\n"
] |
how video games loop music so seamlessly? | The person composing the music does that manually, most likely. A truly seamless transition means that the end and the beginning are similar and fit together.
There's not much magic to it, you just arrange the instruments at both ends to match up. | [
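Besides composing matching ends, one common trick for hiding the seam is to crossfade the tail of the track into its opening samples, so the jump back to the start is already smoothed out. Below is a minimal numpy sketch of that idea; the function name and parameters are illustrative, not from any particular game engine.

```python
import numpy as np

def crossfade_loop(samples: np.ndarray, fade_len: int) -> np.ndarray:
    """Return a seamlessly loopable segment of `samples`.

    The final `fade_len` samples are faded out and mixed into the first
    `fade_len` samples (which fade in), then the tail is trimmed off.
    When a player jumps from the end of the result back to its start,
    the seam has already been blended, so there is no audible click.
    """
    fade_in = np.linspace(0.0, 1.0, fade_len)
    fade_out = 1.0 - fade_in
    loop = samples[:-fade_len].astype(float).copy()
    # Mix the faded-out tail into the faded-in head.
    loop[:fade_len] = samples[:fade_len] * fade_in + samples[-fade_len:] * fade_out
    return loop
```

In practice many engines and audio formats also support explicit loop-start/loop-end markers, so the composer can write an intro that plays once and a body that repeats cleanly between the markers.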
"The musical loop is one of the most important features of video game music. It is also the guiding principle behind devices like the several Chinese Buddhist music boxes that loop chanting of mantras, which in turn was the inspiration of the Buddha machine, an ambient-music generating device. The Jan Linton album \"Buddha Machine Music\" used these loops along with others created by manually scrolling through C.D.s on a CDJ player .\n",
"Loops of gamelan music appear in electronic music. An early example is the Texas band Drain's album \"Offspeed and In There\", which contains two tracks where trip-hop beats are matched with gamelan loops from Java and Bali and recent popular examples include the Sofa Surfers' piece \"Gamelan\", or \"EXEC_PURGER/.#AURICA extracting\", a song sung by Haruka Shimotsuki as part of the \"\" soundtracks.\n",
"The 2014 research paper on \"Variational Recurrent Auto-Encoders\" attempted to generate music based on songs from 8 different video games. This project is one of the few conducted purely on video game music. The neural network in the project was able to generate data that was very similar to the data of the games it trained off of. Unfortunately, the generated data did not translate into good quality music.\n",
"Rippy used the Audiokinetic Wwise pipeline to create dynamic music that changes with the action in the game. Although Rippy used Wwise's tools only for dynamic music, they made audio system setup much easier than in previous Ensemble games. For each battle sequence, the musical cue was divided into sections and mixed differently for each section. \"When a cue is triggered, an intro plays and then the game randomly picks between all of those elements for as long as the battle continues,\" Rippy explained. \"Once it's over, an outro plays and then it's back to the regular \"world\" music. It was an interesting way to work, and I'd like to push it further if there's an opportunity in the future.\"\n",
"Unlike other music games such as \"Rock Band\" or \"Guitar Hero\" where players are scored based on playing certain notes at specific times, Jam Mode lacks a scoring system and does not objectively penalize for missing or playing \"incorrect\" notes nor do players have any control over the pitch of the notes played. Instead, the internal music track for each section of all songs is specially programmed to respond to all possible player actions: the game will attempt to make any notes played be harmonious to the song, including those played outside the original melody. Consequentially, players are encouraged to practice and experiment with different ways to play songs using any arrangement of instruments, either choosing to stick close to the guide or diverge from it and create unique compositions. The quality of the new arrangement is up to the player's judgement. Players can also do what is called an \"Overdub\" session in which the same song is played again controlling a different musician or instrument; this allows the players to play over the music recorded in previous playthroughs and allows a single player to play all parts of a band. Players can then save their overall performance as a music video for later playback, or share it with other players via WiiConnect24.\n",
"Sequencing samples continue to be used in modern gaming for many uses, mostly RPGs. Sometimes a cross between sequencing samples, and streaming music is used. Games such as \"\" (music composed by James Hannigan) and \"\" (music composed by Bill Brown) have utilised sophisticated systems governing the flow of incidental music by stringing together short phrases based on the action on screen and the player's most recent choices (see dynamic music). Other games dynamically mixed the sound on the game based on cues of the game environment.\n",
"There are two main gameplay modes: Scroll and Spin. In Scroll mode, the player steps on four different directions on the game pad (right, up, down and left) as the arrows scroll towards four icons at the top of the screen. Spin mode adds four additional directions. Its songs are also longer than other dance games, often lasting around seven minutes.\n"
] |
why do the effects of novocaine stay relatively close to the injection site? | Dentists generally use lidocaine these days, not novocaine. It's a much safer anaesthetic all around.
It's not injected into the bloodstream, it's a local anaesthetic. The most common use is something called a 'nerve block,' where the lidocaine is injected near the nerve in the jaw and disrupts that nerve's ability to route signals.
It's also injected directly into the area which is being worked on when the work being done is very small and doesn't require a nerve block. | [
"BULLET::::- Chelation: The presence of di- or trivalent cations can cause the chelation of certain drugs, making them harder to absorb. This interaction frequently occurs between drugs such as tetracycline or the fluoroquinolones and dairy products (due to the presence of Ca).\n",
"On October 15, the FDA issued a warning that two more drugs may have been contaminated. Both came from NECC. One was a steroid called triamcinolone acetonide and another was a product used during heart surgery. If injected, the second steroid may cause fungal meningitis, while the heart drug may cause a different fungal infection.\n",
"Side effects can become more pronounced due to the drug interactions between digoxin and the following: Thiazide and loop diuretics, piperacillin, ticarcillin, amphotericin B, corticosteroids, and excessive laxative use. Amiodarone, some benzodiazepines, cyclosporine, diphenoxylate, indomethacin, itraconazole, propafenone, quinidine, quinine, spironolactone, and verapamil may lead to toxic levels and increased incidence of side effects. Digoxin plasma concentrations may increase while on antimalarial medication hydroxychloroquine.\n",
"The effectiveness of these drugs derives from two factors: their target, the H/K ATPase which is responsible for the last step in acid secretion; therefore, their action on acid secretion is independent of the stimulus to acid secretion, of histamine, acetylcholine, or other yet to be discovered stimulants. Also, their mechanism of action involves covalent binding of the activated drug to the enzyme, resulting in a duration of action that exceeds their plasma half-life.\n",
"Its next project was to create an adjuvant. These products modify the effects of the active ingredient in a drug without having any direct effects of their own. However, Zonagen's attempt, ImmuMax, was merely an attempt to patent and cash in off a widely researched generic product: a structural element in the exoskeleton of crustaceans. The compound, chitosan, was indeed capable of causing some desirable effects within the immune system, but it also caused permanent scarring at injection point which made it useless for the sort of medicinal purposes Zonagen hoped to use it for.\n",
"The identification of emetine as a more potent agent improved the treatment of amoebiasis. While use of emetine still caused nausea, it was more effective than the crude extract of ipecac root. Additionally, emetine could be administered hypodermically which still produced nausea, but not to the degree experienced in oral administration.\n",
"There has been a reluctance for modern drug discovery programs to consider covalent inhibitors due to toxicity concerns. An important contributor has been the drug toxicities of several high-profile drugs believed to be caused by metabolic activation of reversible drugs For example, high dose acetaminophen can lead to the formation of the reactive metabolite N-acetyl-p-benzoquinone imine. Also, covalent inhibitors such as beta lactam antibiotics which contain weak electrophiles can lead to idiosyncratic toxicities (IDT) in some patients. It has been noted that many approved covalent inhibitors have been used safely for decades with no observed idiosyncratic toxicity. Also, that IDTs are not limited to proteins with a covalent mechanism of action. A recent analysis has noted that the risk of idiosyncratic toxicities may be mitigated through lower doses of administered drug. Doses of less than 10 mg per day rarely lead to IDT irrespective of the drug mechanism.\n"
] |
Before the Israelites conquered it, who ruled Jericho, and where is Jericho mentioned outside the Bible? | This is a good question, but I want to start by clearing up a couple of things. Firstly, some definitions: 'Canaan' is usually used to refer to the area from the Eastern Mediterranean coast to the hill country that's farther inland, and often comes with an implied sense of 'not Israel and Judah'. I'll use it here to mean everything in between Egypt and the Hittites.^1 The only significant collection of contemporary texts we have for this area are from the city state of Ugarit (now Ras Shamra, in the north-west of modern-day Syria). There are some inscriptions from the various Phoenician cities and a couple from the kingdom of Damascus, but evidence is very, very scarce.
Secondly, the conquest of Canaan as described in the Bible has practically no evidence to support it and is probably a complete invention.^2 Major problems with the biblical account were first raised when the archaeological record from a large number of sites around the area didn't seem to add up to what we had been expecting to find: contemporary destruction layers in the Late Bronze Age (around 1100 BCE or so, when the story is set), and an abrupt change in artefact styles (pottery, architecture, jewellery) and cultural practice (burial style, traces of rituals, diet). None of these were there, and in fact what we did find was continuity between Late Bronze Age and Early Iron Age. In other words, the native population probably wasn't replaced with two million or so Israelites coming from Egypt.
Thirdly, Jericho is actually one of our most important case studies when it comes to understanding the history of Israel in Canaan. The really big excavation of the city (focusing on Neolithic and Early Bronze Age Jericho, but including a broad survey of the entire chronology of the site) conducted by Kathleen Kenyon in the 1950s found some pretty remarkable things.^3 To summarise, she discovered that the city had had significant fortifications in the Middle Bronze Age (roughly up to 1550 BCE), but that these had been destroyed completely during that time. Of course, this is much too early for anything remotely related to a political entity 'Israel'^4 to be relevant; even the most conservative scholars date the reign of David^5 to the late-11th or early-10th century BCE. Jericho was only a minor settlement during the Late Bronze Age and Early Iron Age, and was even mostly abandoned for several centuries during this period. It wasn't until the 9th century BCE that Jericho was properly rebuilt.
Now, to actually get to your question: who ruled Jericho before it was destroyed? Whoever it was, they didn't really leave us much to work with. According Kenyon's interpretation of the evidence, Jericho became a city of some importance - and perhaps with some degree of independence - like many other major Middle Bronze Age sites (including Ugarit) in the first half of the second millennium BCE. Their cemetery was extensive, and the elaborate nature of the burials there suggests that the rulers of the city were relatively well-off. We know from Egyptian scarabs found at the site that they traded with the Egyptians. Kenyon didn't find much in terms of religious worship, but geographically speaking it is likely that they shared the general Canaanite pantheon with the other cities in the region (Ba'al-Hadad, El, Anath, and Astarte being their most important deities).
I am not aware of the city being mentioned in any extra-biblical texts (neither from Egypt nor from Mesopotamia), but I am happy to be corrected if anyone else has an example.
I hope this helps! Please let me know if I can expand on or clarify anything!
-----------
^1 Scholars are usually pretty inaccurate with the term, so depending on preference some will include or exclude the Phoenician city states; some even only use 'Canaan' to refer exclusively to the area that would become the territory of the kingdoms of Judah and Israel.
^2 see Davies, *In Search of 'Ancient Israel'*; Finkelstein and Silberman, *The Bible Unearthed*; also Dever, *What Did the Biblical Writers Know and When Did They Know It?* (a little outdated now, but still a good, neutral survey).
^3 See Kenyon, *Excavations at Jericho*, several volumes spanning the periods 1954-1958 and 1960-1983.
^4 The Merneptah Stele, dating to the 13th century BCE, refers to something resembling Israel, but it should be seen as a name for the area rather than a political entity.
^5 David's historicity is a can of worms I really don't want to open here, but even assuming he existed and ruled over a united kingdom he's a good four centuries too late. | [
"According to the story in the biblical book of Joshua, Jericho was the first Canaanite city to fall to the Israelites as they began their conquest of the Promised Land - an event which the Bible's internal chronology places at around 1406 BC, based on the early 15th century BC exodus-conquest model. This is based on . During a series of excavations from 1930 to 1936 John Garstang found a destruction layer at Jericho corresponding to the termination of City IV which he identified with the biblical story of Joshua and dated to c. 1400 BC. It was therefore a shock when Kathleen Kenyon in the 1950s, using more scientific methods than had been available to Garstang, redated Jericho City IV to 1550 BC and found no signs of any habitation at all for the period around 1400 BC. Wood's 1990 reversion of City IV to Garstang's original 1400 BC therefore attracted considerable attention. In 1999, based on a reanalysis of pottery shards (a method which can provide highly accurate dates in the context of the ancient Near East), Wood argued that Jericho could have been captured in the Late Bronze Age by Joshua. Wood and Piotr Bienkowski debated this in the March/April 1990 issue of Biblical Archaeological Review, with Bienkowski writing:\n",
"In the Book of Joshua, the city of Jericho was the first Canaanite city that the Israelites attacked upon their entry into Canaan. The Israelites destroyed the Bronze Age wall of Jericho by walking around it with the Ark of the Covenant for seven days. They circled the walls once per day for the first six days, then circled the walls seven times on the final day. The Israelites under Joshua's command blew trumpets of rams' horns and shouted to make the walls fall down (). This account from the Book of Joshua is one event in the larger narrative of the Israelite conquest of the biblical Canaan.\n",
"Jericho has been occupied by Israel since the Six-Day War of 1967 along with the rest of the West Bank. It was the first city handed over to Palestinian Authority control in accordance with the Oslo Accords. The limited Palestinian self-rule of Jericho was agreed on in the Gaza–Jericho Agreement of 4 May 1994. Part of the agreement was a \"Protocol on Economic Relations\", signed on 29 April 1994. The city is in an enclave of the Jordan Valley that is in Area A of the West Bank, while the surrounding area is designated as being in Area C under full Israeli military control. Four roadblocks encircle the enclave, restricting Jericho's Palestinian population's movement through the West Bank.\n",
"In 1999 Herzog’s cover page article in the weekly magazine \"Haaretz\" \"Deconstructing the walls of Jericho\" attracted considerable public attention and debates. In this article Herzog cites evidence supporting that \"the Israelites were never in Egypt, did not wander in the desert, did not conquer the land in a military campaign and did not pass it on to the 12 tribes of Israel. Perhaps even harder to swallow is the fact that the united monarchy of David and Solomon, which is described by the Bible as a regional power, was at most a small tribal kingdom. And it will come as an unpleasant shock to many that the god of Israel, Jehovah, had a female consort and that the early Israelite religion adopted monotheism only in the waning period of the monarchy and not at Mount Sinai\".\n",
"Judea and Samaria, as they are called in the Bible, were part of the ancient Kingdom of Israel (designated the West Bank by Jordan in 1947) and the Gaza Strip, previously annexed by Jordan and occupied by Egypt respectively, were conquered and occupied by Israel in the Six-Day War in 1967. Israel withdrew from Gaza in August 2005; Judea and Samaria (West Bank) remain under Israeli control. Israel has never explicitly claimed sovereignty over any part of the West Bank apart from East Jerusalem, which it unilaterally annexed in 1980. However, the Israeli military supports and defends hundreds of thousands of Israeli citizens who have migrated to the West Bank, incurring criticism by some who otherwise support Israel. The United Nations Security Council, the United Nations General Assembly, and some countries and international organizations continue to regard Israel as occupying Gaza. \"(See Israeli-Occupied Territories)\"\n",
"Jericho is a Palestinian city in the West Bank. It is located in the Jordan Valley, near the Jordan River to the east and Jerusalem to the west. It is the administrative seat of the Jericho Governorate, and is governed by the Palestinian National Authority. In 2007, it had a population of 18,346. The city was occupied by Jordan from 1949 to 1967, and has been held under Israeli occupation since 1967; administrative control was handed over to the Palestinian Authority in 1994. It is believed to be one of the oldest inhabited cities in the world and the city with the oldest known protective wall in the world. It was thought to have the oldest stone tower in the world as well, but excavations at Tell Qaramel in Syria have discovered stone towers that are even older.\n",
"Upon Umar's arrival in Jerusalem, a pact known as The Umariyya Covenant was composed. It surrendered the city and gave guarantees of civil and religious liberty to Christians in exchange for \"jizya\". It was signed by caliph Umar on behalf of the Muslims, and witnessed by Khalid, Amr, Abdur Rahman bin Awf, and Muawiyah. In late April 637, Jerusalem was officially surrendered to the caliph. For the first time, after almost 500 years of oppressive Roman rule, Jews were once again allowed to live and worship inside Jerusalem.\n"
] |
When Russian troops sacked Berlin near the end of World War II, did they kill civilians? What exactly happened? | The population of Berlin at the end of the war was disproportionately composed of women and children, with most of the men away fighting in the army or drafted into the Volkssturm at the last minute. Most of the violence directed at Berliners took the form of looting and rape. Rape was endemic at the time, with victims ranging from girls as young as 12 to women in their seventies, and there were instances of women being killed as a consequence of sexual violence. Some historians argue that the Soviet treatment of Germans in 1945 was direct retaliation for the Nazis' actions at Stalingrad.
Although there were murders by the Soviets, they were not common. Most deaths at the time were the result of starvation and disease, with typhus the most prevalent. Men with proven NSDAP associations were more likely to be sent east as prisoners of war than to be murdered.
Sources:
Berlin: The Downfall 1945 - Antony Beevor
A Woman in Berlin - Anonymous
Germany 1945: From War to Peace - Richard Bessel | [
"When the Russians besieged the Chechen capital, thousands of civilians died from a week-long series of air raids and artillery bombardments in the heaviest bombing campaign in Europe since the destruction of Dresden. The initial assault on New Year's Eve 1995 ended in a major Russian defeat, resulting in heavy casualties and at first nearly a complete breakdown of morale in the Russian forces. The disaster claimed the lives of an estimated 1,000 to 2,000 Russian soldiers, mostly barely trained and disoriented conscripts; the heaviest losses were inflicted on the 131st 'Maikop' Motor Rifle Brigade, which was completely destroyed in the fighting near the central railway station. Despite the early Chechen defeat of the New Year's assault and the many further casualties that the Russians had sustained, Grozny was eventually conquered by Russian forces amidst bitter urban warfare. After armored assaults failed, the Russian military set out to take the city using air power and artillery. At the same time, the Russian military accused the Chechen fighters of using civilians as human shields by preventing them from leaving the capital as it came under continued bombardment. On 7 January 1995, Russian Major-General Viktor Vorobyov was killed by mortar fire, becoming the first on a long list of Russian generals to be killed in Chechnya. On 19 January, despite heavy casualties, Russian forces seized the ruins of the Chechen presidential palace, which had been heavily contested for more than three weeks as the Chechens finally abandoned their positions in the destroyed downtown area. The battle for the southern part of the city continued until the official end on 6 March 1995.\n",
"As the Soviets enter Germany, they encircle a town full of Germans and prevent most of them, including civilians, from escaping. Soon, the Soviets began to lay siege to Berlin fighting the remaining Germans through the streets and ruined buildings, ultimately capturing the Reichstag, signaling final German defeat. Ironically, the Reichstag was disused since Hitler first came to power and millions of Soviets died to raise the flag.\n",
"Stalin resisted the evacuation of civilians, in part due to the importance of the city's factories to the war effort. Initial Soviet reports stated the water supply and electricity grid as knocked out. On 26 August a detailed Soviet \"Urban Committee of Defence\" report gave the following casualty figures; 955 dead and 1,181 wounded. Due to the fighting that followed and the high death toll, it is impossible to know how many more were killed in aerial attacks. The figure was higher than in the initial reports, but reports of 40,000 dead in the three-day raid are not credible. As air-raid shelters in the city were extremely inadequate for the population of the Soviet metropolis and large portions of the suburban buildings were made of easily flammable wood, the death toll and destruction from the bombing was comparable to the British bombing of Darmstadt on 11/12 September 1944, when 900 tons of bombs from 226 Avro Lancaster heavy bombers killed 12,300 German citizens of the city.\n",
"On 22 June 1941, Nazi Germany and several of its allies invaded the USSR. In the initial stage of Operation Barbarossa (30 June 1941) Lviv was taken by the Germans. The evacuating Soviets killed most of the prison population, with arriving Wehrmacht forces easily discovering evidence of the Soviet mass murders in the city committed by the NKVD and NKGB. Ukrainian nationalists, organised as a militia, and the civilian population were allowed to take revenge on the \"Jews and the Bolsheviks\" and indulged in several mass killings in Lviv and the surrounding region, which resulted in the deaths estimated at between 4,000 and 10,000 Jews. On 30 June 1941 Yaroslav Stetsko proclaimed in Lviv the Government of an independent Ukrainian state allied with Nazi Germany. This was done without preapproval from the Germans and after 15 September 1941 the organisers were arrested.\n",
"By the end of World War II, most of Eastern Europe, and the Soviet Union in particular, suffered vast destruction. The Soviet Union had suffered a staggering 27 million deaths, and the destruction of significant industry and infrastructure, both by the Nazi \"Wehrmacht\" and the Soviet Union itself in a \"scorched earth\" policy to keep it from falling in Nazi hands as they advanced over 1,000 miles to within 15 miles of Moscow. Thereafter, the Soviet Union physically transported and relocated east European industrial assets to the Soviet Union.\n",
"According to a 1974 West German government study an estimated 1% of the civilian population was killed during the Soviet offensive. The West German search service reported that 31,940 civilians from East Prussia, which also included Memel, were confirmed as killed during the evacuation.\n",
"World War I was the first time Russia went to war against Germany since the Napoleonic era, and Russian Germans were quickly suspected of having enemy sympathies. The Germans living in the Volhynia area were deported to the German colonies in the lower Volga river in 1915 when Russia started losing the war. Many Russian Germans were exiled to Siberia by the Tsar's government as enemies of the state - generally without trial or evidence. In 1916, an order was issued to deport the Volga Germans to the east as well, but the Russian Revolution prevented this from being carried out.\n"
] |
Is it true that Emperor Hirohito barely spoke normal Japanese, and as such a large number of citizens could not understand him when he surrendered on air? | Not to discourage further responses, but u/aonoreishou answered a similar question [here](_URL_0_). | [
"Hirohito's surrender broadcast was a profound shock to Japanese citizens. After years of being told about Japan's military might and the inevitability of victory, these beliefs were proven false in the space of a few minutes. But for many people, these were only secondary concerns since they were also facing starvation and homelessness.\n",
"The speech was not broadcast directly, but was replayed from a phonograph recording made in the Tokyo Imperial Palace on either August 13 or 14, 1945. Many elements of the Imperial Japanese Army were extremely opposed to the idea that Hirohito was going to end the war, as they believed that this was dishonourable. Consequently, as many as one thousand officers and soldiers raided the Imperial palace on the evening of August 14 to destroy the recording. The rebels were confused by the layout of the Imperial palace and were unable to find the recording, which had been hidden in a pile of documents. The recording was successfully smuggled out of the palace in a laundry basket of women's underwear and broadcast the following day, although another attempt was made to stop it from being played at the radio station.\n",
"The speech was probably the first time that an Emperor of Japan had spoken (albeit via a phonograph record) to the common people. It was delivered in the formal, Classical Japanese that few ordinary people could easily understand. It made no direct reference to a surrender of Japan, instead stating that the government had been instructed to accept the terms of the Potsdam Declaration fully. This created confusion in the minds of many listeners who were not sure whether Japan had surrendered. The poor audio quality of the radio broadcast, as well as the formal courtly language in which the speech was composed, worsened the confusion. A digitally remastered version of the broadcast was released on 30 June 2015.\n",
"In the Pacific Theater of World War II, the Japanese seemed to believe that their language was so complex that even if their cryptosystems such as PURPLE were broken, outsiders would not really understand the content. That was not strictly true, but it was sufficiently so that there were cases where even the intended recipients did not clearly understand the writer's intent.\n",
"When Japan accepted the Potsdam Declaration and agreed on the Surrender of Japan, he explained, in plain language, what this meant for ordinary Japanese citizens. When Nazi Germany surrendered on May 7, 1945, Baba was able to broadcast in Japanese, after seeing the incoming telex message. This was the first voiced information to the world, with translations for other languages taking 30 minutes.\n",
"BULLET::::- Allusion: The sound of rabbits, “chu, chu,” which appears frequently throughout the text, evokes Emperor Hirohito and the “Jewel Voice Broadcast” (Gyokuon-hoso), an event in which the Emperor's speech was broadcast on the radio for the first time in Japanese history. This “strange,” “futile” and “profoundly disappointing” cry explicitly alludes to the emotion the Japanese nation would have really felt when they heard the Emperor's voice on the radio (113). As the “I” explains in the next sentence, rabbits make such sound for fear of danger from burglars and stray dogs, which also seems to underline the contemporary, new victimization of Japanese people of themselves as a result of the war.\n",
"Japan occupied Ulithi during the time of World War I and left during or after World War II. Before the World Wars, Japan traded with Ulithi. Since the two countries were trade partners, they needed to know how to communicate. Every so often, young boys would learn the basics of Japanese and because of this, \"it is not at all difficult today to find Ulithians who speak and write a bit of Japanese\". An example of a word from Japan \"denwa\" which Ulithian changed to \"dengwa\" which means telephone. Japan had such a big impact that the word for battery, \"denchi\", remained the same in Ulithian.\n"
] |
What were the military advantages that helped Cromwell and the New Model Army win the English Civil Wars? | Three main reasons: money, money, and yet more money. There were other reasons as well.
Parliament controlled the most prosperous and most populous parts of Britain at the start of the Civil War, which meant its war chest was far bigger than the king's (more people, and more prosperous ones, able to pay taxes). It also controlled all of the major manufacturing centres, such as Norwich, Hull and London, which contained nearly all of the armaments manufacturers; near London, at Enfield, sat the only large-scale gunpowder factory, and as gunpowder production was a Royal monopoly, it was very nearly the sole source of gunpowder in Britain. Parliament held very nearly all of the big cities, and so could tax their trade, along with the Navy and most of Britain's merchant fleet. International trade was therefore secure, with plenty of profitable commerce to tax, while the king's army was largely prevented from getting supplies from abroad, even with Henrietta Maria pawning the Crown Jewels!
Parliament could, therefore, afford to equip, feed, and often pay its men, whilst the king was forced to rely on promissory notes, loans and forced loans, plunder, and demanding "protection money" from towns (called "levying contributions"), and his men were often months, if not years, in arrears of pay. | [
"The most successful parliamentary cavalry commander had been Oliver Cromwell, and Cromwell now approached the Committee of Both Kingdoms with a proposal. Cromwell had come to the conclusion that the current military system was untenable because it relied on local militias defending local areas. Cromwell proposed that Parliament create a new army that would be deployable anywhere in the kingdom and not tied to a particular locality. After the Second Battle of Newbury of 27 October 1644, where parliamentary forces greatly outnumbered royalist forces and yet parliamentary forces were barely able to defeat the royalist forces, Cromwell redoubled his arguments in favor of creating a new army. At this point, most of the leaders in the parliamentary army were Presbyterians who supported the Presbyterians at the Westminster Assembly. Cromwell, however, had also been following the goings-on of the Westminster Assembly and he sided with the Independents. Cromwell thought that the Presbyterians in the army – notably his superior, Edward Montagu, 2nd Earl of Manchester – opposed his proposal to create a new and more effective army mainly because they wanted to make peace with the king. He also thought that the army's supreme commander, Robert Devereux, 3rd Earl of Essex, shared Manchester's views. Cromwell, however, felt that parliamentary forces should seek total victory over the royalists, and since he distrusted Charles immensely, he felt that Charles should have no role in any post-war government.\n",
"When the Parliamentary forces in which Cromwell is a cavalry officer proved ineffective, he, along with Sir Thomas Fairfax, sets up the New Model Army and soon turns the tide against the king. The army's discipline, training and numbers secure victory and Cromwell's cavalry proves to be the deciding factor. With his army defeated, Charles goes so far as to call on help from Catholic nations, which disgusts his Protestant supporters. He is finally defeated but, a brave man in his own way, he still refuses to give in to the demands of Cromwell and his associates for a system of government in which Parliament will have as much say in the running of the country as the king.\n",
"Although Cromwell's New Model Army had defeated a Scottish army at Dunbar, Cromwell could not prevent Charles II from marching from Scotland deep into England at the head of another Royalist army. They marched to the west of England where English Royalist sympathies were strongest, but although some English Royalists joined the army, they were far fewer in number than Charles and his Scottish supporters had hoped. Cromwell finally engaged and defeated the new king at Worcester on 3 September 1651.\n",
"More broadly, this reform helped usher in Cromwell’s New Model Army. This reorganized force, designed for unity and efficiency, incorporated several practices recognizable in modern armies. In addition to a professional officer corps promoted on merit, it replaced the sometimes bulky local units with nationally controlled regiments, standardized training protocols, and ensured regular salary payments to the troops. This army soon turned the war in favour of Parliament, decisively beating the Royalist forces at the battle of Naseby on 14 June 1645.\n",
"Cromwell then marched north to deal with a pro-Royalist Scottish army (the Engagers) who had invaded England. At Preston, Cromwell, in sole command for the first time and with an army of 9,000, won a decisive victory against an army twice as large.\n",
"As Cromwell led his army over the border at Berwick-upon-Tweed in July 1650, the Scottish general, Sir David Leslie, continued his deliberate strategy of avoiding any direct confrontation with the enemy. His army was no longer formed of the battle-hardened veterans of the Thirty Years' War who had taken the field at the battles of Newburn and Marston Moor. Many of those had perished during the Civil War and the ill-fated 1648 invasion of England. Far more had left active service after the former event, some even leaving for Swedish or French service once more. This meant that a new army had to be raised and trained by the remaining veterans. It eventually comprised some 12,000 soldiers, outnumbering the English army of 11,000 men. Though the Scots were well armed, the pressure of time meant they were poorly trained compared with their English counterparts, all of whom had served with Oliver Cromwell for years. Leslie chose therefore to barricade his troops behind strong defensive works around Edinburgh and refused to be drawn out to meet the English in battle. Furthermore, between Edinburgh and the border with England, Leslie adopted a scorched earth policy thus forcing Cromwell to obtain all of his supplies from England, most arriving by sea through the port at Dunbar.\n",
"England never had a standing army with professional officers and careerist corporals and sergeants. It relied on militia organised by local officials, private forces mobilised by the nobility, or on hired mercenaries from Europe. Cromwell changed all that with his New Model Army of 50,000 men, that proved vastly more effective than untrained militia, and enabled him to exert a powerful control at the local level over all of England. At the restoration, Parliament paid off Cromwell's army and disbanded it. For many decades the Cromwellian model was a horror story and the Whig element recoiled from allowing a standing army. The militia acts of 1661 and 1662 prevented local authorities from calling up militia and oppressing their own local opponents. Calling up the militia was possible only if the king and local elites agreed to do so. However, King Charles managed to pull together four regiments of infantry and cavalry, calling them his guards, at a cost of £122.000 from his general budget. This became the foundation of the permanent British Army, By 1685 it had grown to 7500 soldiers in marching regiments, and 1400 men permanently stationed in garrisons. A rebellion in 1685 allowed James II to raise the forces to 20,000 men. There were 37,000 in 1678, when England played a role in the closing stage of the Franco-Dutch War. In 1689, William III expanded the army to 74,000 soldiers, and then to 94,000 in 1694. Parliament became very nervous, and reduced the cadre to 7,000 in 1697. Scotland and Ireland had theoretically separate military establishments, but they were unofficially merged with the English force.\n"
] |
do people lose in sensitivity to adrenaline if exposed to it repetitively ? | Neurobiology student.
Ok, so the first thing I thought reading this question was "Nah, I don't think so", but I still looked it up to be sure. Apparently you can indeed have some receptors, called β-adrenergic receptors, lose their sensitivity if they are constantly exposed to a drug that activates them. And one drug that activates β-adrenergic receptors is adrenaline.
However, the sources I found only mention this desensitization in the context of pharmacological drugs; they don't really discuss endogenous adrenaline. So my guess is that in theory it could be possible, but it doesn't actually happen.
That makes sense, since adrenaline is essential to bodily function, is not constantly secreted, and has a very short half-life, so it is degraded very quickly. | [
"Adverse reactions to adrenaline include palpitations, tachycardia, arrhythmia, anxiety, panic attack, headache, tremor, hypertension, and acute pulmonary edema. The use of epinephrine based eye-drops, commonly used to treat glaucoma, may also lead to buildup of adrenochrome pigments in the conjunctiva, iris, lens, and retina.\n",
"Adrenaline/epinephrine is well known to make myotonia worse in most individuals with the disorder, and a person with myotonia congenita may experience a sudden increase in difficulty with mobility in a particularly stressful situation during which adrenaline is released.\n",
"A feature of such activities in the view of some is their alleged capacity to induce an adrenaline rush in participants. However, the medical view is that the rush or high associated with the activity is not due to adrenaline being released as a response to fear, but due to increased levels of dopamine, endorphins and serotonin because of the high level of physical exertion. Furthermore, recent studies suggest that the link to adrenaline and 'true' extreme sports is tentative. Brymer and Gray's study defined 'true' extreme sports as a leisure or recreation activity where the most likely outcome of a mismanaged accident or mistake was death. This definition was designed to separate the marketing hype from the activity.\n",
"During states of excitement or stress, the body releases adrenaline. Adrenaline is known to cause physical symptoms that accompany test anxiety, such as increased heart rate, sweating, and rapid breathing. In many cases having adrenaline is a good thing. It is helpful when dealing with stressful situations, ensuring alertness and preparation. But for some people the symptoms are difficult or impossible to handle, making it impossible to focus on tests.\n",
"Low, or absent, concentrations of adrenaline can be seen in autonomic neuropathy or following adrenalectomy. Failure of the adrenal cortex, as with Addisons disease, can suppress adrenaline secretion as the activity of the synthesing enzyme, phenylethanolamine-\"N\"-methyltransferase, depends on the high concentration of cortisol that drains from the cortex to the medulla.\n",
"Pikkers and Kox attributed the effect on the immune system to a stress-like response. In the hypothalamus, stress messages from the brain trigger a release of adrenaline, which increases the pumping of blood and releases glucose, both of which can help the body deal with an emergency. It also suppresses the immune system. In Hof and the trained subjects, the adrenaline release was higher than it would be after a person's first bungee jump. It is not yet known which part of the training (cold exposure, breathing or meditation) is primarily responsible for the effect, or whether there are long-term training effects.\n",
"The major physiologic triggers of adrenaline release center upon stresses, such as physical threat, excitement, noise, bright lights, and high or low ambient temperature. All of these stimuli are processed in the central nervous system.\n"
] |
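The half-life point in the answer above can be illustrated numerically. This is only a sketch: the ~2.5-minute half-life and the simple first-order (exponential) elimination model are assumptions for illustration, not figures from the answer or its sources.

```python
def fraction_remaining(minutes, half_life_min=2.5):
    """Fraction of a one-off adrenaline release still circulating after
    `minutes`, assuming simple first-order (exponential) elimination.
    The 2.5-minute half-life is an illustrative assumption."""
    return 0.5 ** (minutes / half_life_min)

# Four half-lives (10 minutes here) leave only 1/16 of the original amount:
print(fraction_remaining(10))  # 0.0625
```

Under these assumptions, circulating adrenaline falls to a few percent of its peak within minutes of secretion stopping, which is why continuous receptor exposure of the kind seen with long-acting drugs is unlikely.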
Did the Founding Fathers frame the Constitution solely for their economic self-interest? | First off, I don't know of a single historical work written that long ago that still holds up on its own today. The entire profession has been reformed and then reformed again since. Of course books like his can still be influential, but you can't look at single works, let alone works that old, and then proclaim "Aha! Now I get it!"
Next, it's a good rule of thumb that there isn't a single explanation of cause for much of anything in history. We are, after all, talking about the history of humans, and humans are complex animals. However, books will often argue, like Beard's, for fairly singular causes of events such as the framing of the Constitution. This isn't wholly bad, as convincing others of, say, an economic argument requires a lot of evidence. Beard, like so many historians, needed to hammer home the idea that economics (or whatever factor) was an important element that cannot be ignored. Beard probably wasn't trying to write the last word on the Constitution, foreclosing all further study of the matter, and you shouldn't take his book as such. I doubt very many historians actually believe that they can singularly explain historical events and eras with arguments that conveniently align with their career choices, but arguing forcefully gets the attention of employers, publishers, and grant committees.
I'm sorry this doesn't fully answer your question, and I can only offer sparse insights on the Constitution itself (not my direct expertise). I can say, of course, that economics played a large but not singular role in the framing of the document, and that a number of other factors entered the Founders' thinking. These included, but probably aren't limited to, foreign policy/internationalism, slavery (an economic issue, but not purely so), internal cohesion (keeping the interests of small states aligned with big states), Enlightenment values, and methods of government. Anyone with better knowledge feel free to correct me.
To get all the details, it'll probably be best to head to the library, and the biggest name I can recommend is of course Gordon Wood. But as a bit of an exercise I'd also like to recommend David Hendrickson's *Peace Pact*, which delves very nicely into the internationalism of the Founding and the Constitution. Like Beard's, Hendrickson's book is hardly the last word and doesn't give a full understanding of the Founding, but it demonstrates quite nicely how the Founders could be internationally minded in their plans for the country, taking into account far more than the domestic factors people usually think of when describing the writing of the Constitution. It would make for a great read. | [
"Frank Bourgin's study of the Constitutional Convention and subsequent decades argues that direct government involvement in the economy was intended by the Founding Fathers. The reason for this was the economic and financial chaos the nation suffered under the Articles of Confederation. The goal was to ensure that dearly-won political independence was not lost by being economically and financially dependent on the powers and princes of Europe. The creation of a strong central government able to promote science, invention, industry and commerce was seen as an essential means of promoting the general welfare and making the economy of the United States strong enough for them to determine their own destiny. One later result of this intent was the adoption of Richard Farrington's new plan (worked out with his co-worker John Jefferson) to incorporate new changes during the New Deal. Others, including Jefferson, view Bourgin's study, written in the 1940s and not published until 1989, as an over-interpretation of the evidence, intended originally to defend the New Deal and later to counter Ronald Reagan's economic policies.\n",
"Many of the values which contributed to the Founding Father's thinking come from ideas of the Enlightenment and over time have transformed into ideas of American exceptionalism and Manifest destiny. Throughout the country's early history the Constitution was used as the basis for establishing foreign policy and determining the government's ability to acquire land from other nations. In the country's inception, government officials broadly interpreted the Constitution in order to establish an archetypical model for foreign policy.\n",
"Historians have frequently interpreted Federalist No. 10 to imply that the Founding Fathers of the United States intended the government to be nonpartisan. James Madison defined a faction as \"a number of citizens, whether amounting to a minority or majority of the whole, who are united and actuated by some common impulse of passion, or of interest, adverse to the rights of other citizens, or to the permanent and aggregate interests of the community.\" As political parties had interests which were adverse to the rights of citizens and to the general welfare of the nation, several Founding Fathers preferred a nonpartisan form of government.\n",
"Frank Bourgin's 1989 study of the Constitutional Convention shows that direct government involvement in the economy was intended by the Founders. The goal, most forcefully articulated by Hamilton, was to ensure that dearly won political independence was not lost by being economically and financially dependent on the powers and princes of Europe. The creation of a strong central government able to promote science, invention, industry and commerce, was seen as an essential means of promoting the general welfare and making the economy of the United States strong enough for them to determine their own destiny.\n",
"We recommit ourselves to the ideas of the American Founding. Through the Constitution, the Founders created an enduring framework of limited government based on the rule of law. They sought to secure national independence, provide for economic opportunity, establish true religious liberty and maintain a flourishing society of republican self-government.\n",
"Like Lysander Spooner in , Nock disputes both the legitimacy of an inherited constitution and the other arguments used to justify claiming it legitimately binds its subjects. He attacks the motivations and legitimacy of the Founding Fathers directly, not simply their ability to impose a contract on subsequent generations. The protection of Natural Rights found in the Declaration of Independence, and advocated by Thomas Jefferson and Thomas Paine was abandoned by the largest body of the Founders as the American Revolution ended.\n",
"The term derives from the Virginia and Kentucky Resolutions written in 1798 by James Madison and Thomas Jefferson, respectively. This vocal segment of the \"Founding Fathers\" believed that if the central government was the exclusive judge of its limitations under the U.S. Constitution, then it would eventually overcome those limits and become more and more powerful and authoritarian. They argued that formal limiting devices such as elections and separation of power would not suffice if the government could judge its own case regarding constitutionality. As Jefferson wrote, \"\"When all government, domestic and foreign, in little as in great things, shall be drawn to Washington as the center of all power, it will render powerless the checks provided of one government on another, and will become as venal and oppressive as the government from which we separated.\"\"\n"
] |
why does a computer need to be cooled? | Because every electrical current produces heat due to resistance. | [
"Cooling may be designed to reduce the ambient temperature within the case of a computer, such as by exhausting hot air, or to cool a single component or small area (spot cooling). Components commonly individually cooled include the CPU, Graphics processing unit (GPU) and the northbridge.\n",
"Computer cooling is required to remove the waste heat produced by computer components, to keep components within permissible operating temperature limits. Components that are susceptible to temporary malfunction or permanent failure if overheated include integrated circuits such as central processing units (CPUs), chipset, graphics cards, and hard disk drives.\n",
"Another growing trend due to the increasing heat density of computer, GPU, FPGA, and ASICs is to immerse the entire computer or select components in a thermally, but not electrically, conductive liquid. Although rarely used for the cooling of personal computers, liquid immersion is a routine method of cooling large power distribution components such as transformers. It is also becoming popular with data centers. Personal computers cooled in this manner may not require either fans or pumps, and may be cooled exclusively by passive heat exchange between the computer hardware and the enclosure it is placed in. A heat exchanger (i.e. heater core or radiator) might still be needed though, and the piping also needs to be placed correctly. \n",
"Components are often designed to generate as little heat as possible, and computers and operating systems may be designed to reduce power consumption and consequent heating according to workload, but more heat may still be produced than can be removed without attention to cooling. Use of heatsinks cooled by airflow reduces the temperature rise produced by a given amount of heat. Attention to patterns of airflow can prevent the development of hotspots. Computer fans are widely used along with heatsink fans to reduce temperature by actively exhausting hot air. There are also more exotic cooling techniques, such as liquid cooling. All modern day processors are designed to cut out or reduce their voltage or clock speed if the internal temperature of the processor exceeds a specified limit.\n",
"However since the early 1990s high-performance CPUs such as found in typical desktop computers have required active cooling. This also includes secondary processors, such as graphics processors which also consume a large amount of power. The most common and inexpensive method of active cooling is to mount one or more conventional fans directly on the processors in conjunction with a large heat sink, and possibly one or more others elsewhere in the case of the computer to increase the overall airflow. Much larger computers have sometimes relied on more sophisticated active cooling techniques such as water or refrigerant -based methods.\n",
"Because high temperatures can significantly reduce life span or cause permanent damage to components, and the heat output of components can sometimes exceed the computer's cooling capacity, manufacturers often take additional precautions to ensure that temperatures remain within safe limits. A computer with thermal sensors integrated in the CPU, motherboard, chipset, or GPU can shut itself down when high temperatures are detected to prevent permanent damage, although this may not completely guarantee long-term safe operation. Before an overheating component reaches this point, it may be \"throttled\" until temperatures fall below a safe point using dynamic frequency scaling technology. Throttling reduces the operating frequency and voltage of an integrated circuit or disables non-essential features of the chip to reduce heat output, often at the cost of slightly or significantly reduced performance. For desktop and notebook computers, throttling is often controlled at the BIOS level. Throttling is also commonly used to manage temperatures in smartphones and tablets, where components are packed tightly together with little to no active cooling, and with additional heat transferred from the hand of the user.\n",
"While originally limited to mainframe computers, liquid cooling has become a practice largely associated with overclocking in the form of either manufactured kits, or in the form of do-it-yourself setups assembled from individually gathered parts. The past few years have seen an increase in the popularity of liquid cooling in pre-assembled, moderate to high performance, desktop computers. Sealed (\"closed-loop\") systems incorporating a small pre-filled radiator, fan, and waterblock simplify the installation and maintenance of water cooling at a slight cost in cooling effectiveness relative to larger and more complex setups.\n"
] |
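The resistive-heating point in the answer above can be sketched numerically. This is a hypothetical illustration, not part of the original thread; the 100 A / 0.015 Ω figures are assumed round numbers, not measurements of any real CPU.

```python
# Joule heating: a current I flowing through a resistance R
# dissipates P = I^2 * R watts as heat.
def joule_heating_watts(current_a: float, resistance_ohm: float) -> float:
    """Power dissipated as heat by a resistive element."""
    return current_a ** 2 * resistance_ohm

# Example: a CPU power rail carrying 100 A across an effective 0.015 ohm
# load dissipates 150 W -- all of which the cooling system must remove
# to keep the chip within its permissible operating temperature.
power = joule_heating_watts(100.0, 0.015)
print(power)
```

Running it prints `150.0`, which is in the same ballpark as the waste heat of a modern high-end desktop CPU under load, and shows why the context above says active cooling became necessary once per-chip currents grew this large.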
how is mass extinction humans' fault? | > Considering most people at that time and prior lived in mud huts
HA!
Humans had already been farming for 9,500 years, living in cities for most of that, China had been a sprawling empire for 4,500 years, Rome had risen and fallen.
Just because we hadn't built a steam engine yet didn't mean we weren't causing change to the environment on a massive scale, and had been for ages. And one of the biggest ways was with our stomachs - it's likely that human hunting played a major role in the disappearance of many of the megafauna species in the New World, starting perhaps 10- to 20,000 years ago. | [
"There is still debate about the causes of all mass extinctions. In general, large extinctions may result when a biosphere under long-term stress undergoes a short-term shock. An underlying mechanism appears to be present in the correlation of extinction and origination rates to diversity. High diversity leads to a persistent increase in extinction rate; low diversity to a persistent increase in origination rate. These presumably ecologically controlled relationships likely amplify smaller perturbations (asteroid impacts, etc.) to produce the global effects observed.\n",
"Mass extinctions are thought to result when a long-term stress is compounded by a short-term shock. Over the course of the Phanerozoic, individual taxa appear to be less likely to become extinct at any time, which may reflect more robust food webs as well as less extinction-prone species and other factors such as continental distribution.\n",
"An extinction event (also known as a mass extinction or biotic crisis) is a widespread and rapid decrease in the biodiversity on Earth. Such an event is identified by a sharp change in the diversity and abundance of multicellular organisms. It occurs when the rate of extinction increases with respect to the rate of speciation. Estimates of the number of major mass extinctions in the last 540 million years range from as few as five to more than twenty. These differences stem from the threshold chosen for describing an extinction event as \"major\", and the data chosen to measure past diversity.\n",
"A mass mortality event (MME) is an incident that kills a vast number of individuals of a single species in a short period of time. The event may put a species at risk of extinction or upset an ecosystem. This is distinct from the mass die-off associated with short lived and synchronous emergent insect taxa which is a regular and non-catastrophic occurrence.\n",
"According to a 1998 survey of 400 biologists conducted by New York's American Museum of Natural History, nearly 70% believed that the Earth is currently in the early stages of a human-caused mass extinction, known as the Holocene extinction. In that survey, the same proportion of respondents agreed with the prediction that up to 20% of all living populations could become extinct within 30 years (by 2028). A 2014 special edition of \"Science\" declared there is widespread consensus on the issue of human-driven mass species extinctions.\n",
"BULLET::::- Extinction event – widespread and rapid decrease in the amount of life on Earth. Such an event is identified by a sharp reduction in the diversity and abundance of macroscopic life. Also known as a mass extinction or biotic crisis.\n",
"Mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the new dominant group is \"superior\" to the old and usually because an extinction event eliminates the old dominant group and makes way for the new one.\n"
] |
Why did "White Australia" use a dictation test instead of a criterion openly based on ancestry? | Enforcing the policy through a subjective test allowed it to be expanded to exclude politically undesirable people.
The Immigration Restriction Act (1901) required immigrants to be able to "write out at dictation and sign in the presence of the officer, a passage of 50 words in length in a European language directed by the officer". Since the language could be chosen by the officer anyone who couldn't speak all European languages could be excluded.
The most notable example of this is Egon Erwin Kisch, a multilingual Czech communist. He passed the test in a number of languages until immigration officials managed to find an officer who could administer it in Scottish Gaelic. Kisch then failed and was convicted of being an illegal immigrant, although the conviction was overturned on appeal.
[Original text of the act](_URL_0_)
Source: McNamara, T. (2009) "The spectre of the dictation test: language testing for immigration and citizenship in Australia". In Extra, Guus, Massimiliano Spotti, and Piet Van Avermaet, eds. *Language testing, migration and citizenship: Cross-national perspectives on integration regimes*. London: Continuum: 224-241. | [
"Because of opposition from the British government, a more explicit racial policy was avoided in the legislation, with the control mechanism for people deemed undesirable being a dictation test, which required a person seeking entry to Australia to write out a passage of fifty words dictated to them in any European language, not necessarily English, at the discretion of an immigration officer. The test was not designed to allow immigration officers to evaluate applicants on the basis of language skills, rather the language chosen was always one known beforehand that the person would fail.\n",
"This was similar to tests previously used in Western Australia, New South Wales and Tasmania. It enabled immigration officials to exclude individuals on the basis of race without explicitly saying so. After 1903 the passage chosen was not important in itself as it was already decided the person could not enter Australia and so failure was inevitable. Although the test could theoretically be given to any person arriving in Australia, in practice it was given selectively on the basis of race, and others considered undesirables. Between 1902 and 1909, 52 people passed the test out of 1,359 who were given it.\n",
"The White Australia policy, or at least the ideas behind it, had been very strong since long before Federation. Although the \"Immigration Restriction Act 1901\" was established to prevent non-white people from migrating to Australia, significant numbers of foreign citizens, particularly Chinese people who migrated during the Victorian gold rush, were already living in Australia, and many politicians were keen to prevent them from having any political influence. Politicians also wanted to prevent Indigenous people from voting. Although Indigenous men had the right to vote everywhere except Western Australia and Queensland, and Indigenous women also had the right to vote in South Australia, this was not because it had explicitly been given to them, but because it had not explicitly been denied to them.\n",
"Given the relentless and revolutionary assault on their historic national identity, white Australians now face a life-or-death struggle to preserve their homeland. Whether effective resistance to their displacement and dispossession can be mounted is another question. Unlike other racial, ethnic or religious groups well-equipped to practice the politics of identity, white Australians lack a strong, cohesive sense of ethnic solidarity. As a consequence, ordinary Australians favouring a moratorium on non-white immigration cannot count on effective leadership or support from their co-ethnics among political, intellectual and corporate elites. On the contrary, our still predominantly Anglo-Australian rulers are indifferent; some profit from, and others actually take pride in their active collaboration with the Third World colonization of Australia. None of the major parties, indeed, not one member of the Commonwealth Parliament, offers citizens the option of voting to defend and nurture Australia's Anglo-European identity. The problem, in short, is clear: The Australian nation is bereft of a responsible ruling class.\n",
"User consultation undertaken by the Office for National Statistics (ONS) for the purpose of planning the 2011 census in England and Wales found that most of the respondents from all ethnic groups that took part in the testing felt comfortable with the use of the terms \"Black\" and \"White\". However, some participants suggested that these colour terms were confusing and unacceptable, did not adequately describe an individual's ethnic group, did not reflect his or her true skin colour, and were stereotypical and outdated terms. The heading \"Black or Black British\", which was used in 2001, was changed to \"Black/African/Caribbean/Black British\" for the 2011 census. As with earlier censuses, individuals who did not identify as \"Black\", \"White\" or \"Asian\" could instead write in their own ethnic group under \"Other ethnic group\". Persons with multiple ancestries could indicate their respective ethnic backgrounds under a \"Mixed or multiple ethnic groups\" tick box and write-in area.\n",
"In 1901, the Australian federal government adopted the White Australia policies, initiated with the Immigration Restriction Act 1901, which generally excluded Asian peoples, especially the Chinese and the Melanesians. Historian C. E. W. Bean said that Australian racialist exclusion was \"a vehement effort to maintain a high, Western standard of economy, society, and culture (necessitating, at that stage, however it might be camouflaged, the rigid exclusion of Oriental peoples)\". In 1913, the film, \"Australia Calls\" (1913) depicted an invasion of Australia by \"Mongolians\" defeated by ordinary Australians with resistance and guerrilla warfare.\n",
"Academic Joseph Pugliese is among writers who have applied whiteness studies to an Australian context, discussing the ways that Indigenous Australians were marginalized in the wake of British colonization of Australia, as whiteness came to be defined as central to Australian identity. Pugliese discusses the 20th-century White Australia policy as a conscious attempt to preserve the \"purity\" of whiteness in Australian society. Likewise Stefanie Affeldt considers whiteness \"a concept not yet fully developed at the time the first convicts and settlers arrived down under\" which, as a social relation, had to be negotiated and was driven forward in particular by the labour movement. Eventually, with the Federation of Australia, \"[o]verlaying social differences, the shared membership in the 'white race' was the catalyst for the consolidation of the Australian colonies as the Commonwealth of Australia\".\n"
] |
what is a snap election, and why doesn't it exist in the us? | Often, in parliamentary systems of government, the prime minister or other head of government must have elections every set number of years, just like in the US system, but these systems also allow the head of government to call elections at a time of their choosing, before the normal interval has elapsed. This is very useful when some major national decision needs to be made and the ruling party thinks it is at an advantage concerning that decision. The election somewhat becomes a national referendum on that issue, with the people voting to put representatives into government who agree with them on it. So if the US system had snap elections, maybe Obama would have called one early on in the health care debate, or Bush might have called one before the wars in Afghanistan or Iraq. The purpose would be to present a position on how the government would move forward, in contrast to the opposition party, and let the people support that position by voting in enough members of the controlling party to push its agenda forward.
"In the Philippines, the term \"snap election\" usually refers to the 1986 presidential election, where President Ferdinand Marcos called elections earlier than scheduled, in response to growing social unrest. Marcos was declared official winner of the election but was eventually ousted when it was alleged that he cheated in the elections.\n",
"Early or \"snap\" elections have occurred at least three times in New Zealand's history: in 1951, 1984 and 2002. Early elections often provoke controversy, as they potentially give governing parties an advantage over opposition candidates. Note that of the three elections in which the government won an increased majority, two involved snap elections (1951 and 2002 – the other incumbent-boosting election took place in 1938). The 1984 snap election backfired on the government of the day: many believe that the Prime Minister, Robert Muldoon, called it while drunk. \"See Snap election, New Zealand.\" The 1996 election took place slightly early (on 12 October) to avoid holding a by-election after the resignation of Michael Laws.\n",
"Since the power to call snap elections usually lies with the incumbent, they usually result in increased majorities for the party already in power having been called at an advantageous time. However, snap elections can also backfire on the incumbent resulting in a decreased majority or even the opposition winning or gaining power. As a result of the latter cases there have been occasions in which the consequences have been the implementation of fixed term elections.\n",
"In Japan, a snap election is called when a Prime Minister dissolves the lower house of the Diet of Japan. The act is based on Article 7 of the Constitution of Japan, which can be interpreted as saying that the Prime Minister has the power to dissolve the lower house after so advising the Emperor. Almost all general elections of the lower house have been snap elections since 1947, when the current constitution was enacted. The only exception was 1976 election, when the Prime Minister Takeo Miki was isolated within his own Liberal Democratic Party. The majority of LDP politicians opposed Miki's decision not to dissolve the lower house until the end of its 4-year term.\n",
"Former Premier, Ralph T. O'Neal, had warned of the possibility of the Government calling a snap election. President of the opposition Virgin Islands Party, Carvin Malone, had predicted an election on 6 or 13 July 2015. Although it became common parlance to refer to the election as a \"snap\" election in local media, it is not entirely clear that it was. The ruling party announced candidates for an \"upcoming election\" over a month prior to dissolution of the House, and all parties claimed that they had anticipated the announcement.\n",
"In Canada, snap elections at the federal level are not uncommon. During his 10 years as Prime Minister, Jean Chrétien recommended to the Governor General to call two snap elections, in 1997 and 2000, winning both times. Wilfrid Laurier and John Turner, meanwhile, both lost their premierships in snap elections they themselves had called (in 1911 and 1984, respectively). The most notable federal snap election is that of 1958, where Prime Minister John Diefenbaker called an election just nine months after the previous one and transformed his minority government into the largest majority in the history of Canada up to that date.\n",
"The conditions for when a snap election can be called have been significantly restricted by the Fixed-term Parliaments Act 2011 to occasions when the government loses a confidence motion or when a two-thirds supermajority of MPs vote in favour. Prior to this, the Prime Minister of the United Kingdom had the de facto power to call an election at will by requesting a dissolution from the monarch – the limited circumstances where this would not be granted were set out in the Lascelles Principles. There was no fixed period for holding elections, although between 1997 and 2015 there was a convention that the government should hold elections on the same date as local elections on the first Thursday of May. Since World War II, only the 2015 general election was held on the latest possible date (7 May 2015), due to being the first general election at the end of a fixed-term Parliament.\n"
] |
after world war ii, what changes did germany make to its own political system to ensure a dictator figure could never take power again? | My answer won't fully address your question, but one of the main reasons dictators weren't allowed to rise *immediately*, and for the 40 or so years after the fall of Hitler, is the fact that Germany was split in two. The eastern half was controlled by the USSR, and the west was controlled by the western Allies. The USSR installed its own leaders, and the western Allies kept a tight hold on their own sectors. This would have meant leaders were kept under control to a large extent.
Not sure if you already knew this. But there we go! Someone correct me if I'm speaking out of my ass. | [
"The Government of Nazi Germany was a dictatorship run according to the \"Führerprinzip\". As the successor to the government of the Weimar Republic, it inherited the government structure and institutions of the previous state. Although the Weimar Constitution technically remained in effect until Germany's surrender in 1945, there were no actual restraints on the exercise of state power. In addition to the already extant government of the Weimar Republic, the Nazi leadership created a large number of different organizations for the purpose of helping them govern and remain in power. They rearmed and strengthened the military, set up an extensive state security apparatus and created their own personal party army, which in 1940 became known as the Waffen-SS.\n",
"The ruling Nazi Party of 1933–1945 Germany envisaged the ultimate establishment of a world government under the complete hegemony of the Third Reich. In its move to overthrow the post-World War I Treaty of Versailles Germany had already withdrawn itself from the League of Nations, and it did not intend to join a similar internationalist organization ever again. In his desire and stated political aim of expanding the living space (\"Lebensraum\") of the German people by destroying or driving out \"lesser-deserving races\" in and from other territories dictator Adolf Hitler may have devised an ideological system of self-perpetuating expansionism, in which the expansion of a state's population would require the conquest of more territory which would, in turn, lead to a further growth in population which would then require even more conquests. In 1927, Rudolf Hess relayed to Walter Hewel Hitler's belief that world peace could only be acquired \"when one power, the racially best one, has attained uncontested supremacy\". When this control would be achieved, this power could then set up for itself a world police and assure itself \"the necessary living space... The lower races will have to restrict themselves accordingly\".\n",
"After the Nazis came to power in Germany, they reformed the administrative system by transforming the former German provinces and states into their Gau system in 1935 as a part of their Gleichschaltung policy.\n",
"Meanwhile, authoritarian regimes emerged in several countries in Europe and South America, in particular the Third Reich in Germany. Germany elected Adolf Hitler, who imposed the Nuremberg Laws, a series of laws which discriminated against Jews and other ethnic minorities. Weaker states such as Ethiopia, China, and Poland were invaded by expansionist world powers, the last of these attacks leading to the outbreak of the World War II on September 1, 1939, despite calls from the League of Nations for worldwide peace. World War II helped end the Great Depression when governments spent money for the war effort. The 1930s also saw a proliferation of new technologies, especially in the fields of intercontinental aviation, radio, and film.\n",
"The states of the Weimar Republic were effectively abolished after the establishment of Nazi Germany in 1933 by a series of \"Reichsstatthalter\" decrees between 1933 and 1935, and autonomy was replaced by direct rule of the National Socialist German Workers' Party in the \"Gleichschaltung\" process. The states continued to formally exist as \"de jure\" rudimentary bodies, but from 1934 were superseded by \"de facto\" Nazi provinces called \"Gaue\". Many of the states were formally dissolved at the end of World War II by the Allies, and ultimately re-organised into the modern states of Germany.\n",
"In 1930, Germany was formally a multi-party parliamentary democracy, led by President Paul von Hindenburg (1925–1934). However, beginning in March 1930, Hindenburg only appointed governments without a parliamentary majority which systematically governed by emergency decrees, circumventing the democratically elected Reichstag. \n",
"Régime change hit Germany in January 1933 and the Hitler government lost little time in imposing one-party government. The same year Stieler von Heydekampf became a Nazi Party member. He was then, in 1935, appointed a Wehrwirtschaftsführer (literally \"military economic leader\"), a quasi-military honour given by government to senior industry figures expected to be supportive in any future military rearmament programme.\n"
] |
Was New Zealand really forced out of the British Empire? | I think your professor's language is a bit harsh, but he is trying to emphasize a point.
New Zealand (along with Canada, Australia, and a few other countries) didn't achieve independence in an abrupt manner in the same way India and the USA did. These countries made a gradual shift towards self-governance.
The biggest changes occurred after Britain passed the Statute of Westminster in 1931. Some countries (like Canada) then took responsibility for passing their own laws, and managing their external affairs. Some countries (like New Zealand) were required to specifically adopt the statute, which New Zealand did not do until 1947.
So, it's not like New Zealand got "kicked out" in 1931 (or 1947), but the message from mother England was clear: the large, established dominions (even NZ) were ready to take the next step towards self-governance. By 1947, colony divestment was in full swing. As JBC mentions, Queen Elizabeth II is still the head of state.
| [
"The British were reluctant administrators and continued pressure was applied to them from New Zealand and from European residents of the islands to pass the Cook Islands over to New Zealand. The first British Resident was Frederick Moss, a New Zealand politician who tried to help the local chiefs form a central government. In 1898 another New Zealander, Major W.E. Gudgeon, a veteran of the New Zealand Wars, was made British Resident with the aim of paving the way for New Zealand to take over from Britain as part of the expansionist ambitions of New Zealand's Prime Minister, Richard Seddon. This was not favored by Makea Takau who preferred the idea of being annexed to Britain. One of the results of the British annexation was freedom of religion and a new influx of missionaries from different denominations. The first Roman Catholic church was dedicated in 1896.\n",
"The government's most significant policies concerned attempts to create a distinct New Zealand identity, both internally and in the world. For most of its history, New Zealand had been, economically, culturally and politically, highly dependent on Britain. This began to change during World War II, when it became clear that Britain was no longer able to defend its former colonies in the Pacific. As Britain began to turn away from what was left of its former Empire and towards Europe, New Zealanders became less inclined to think of themselves as British. Initially the country turned instead to the United States, and so entered into the ANZUS pact with the US and Australia, and aided the US in the Vietnam War. However, by the early 1970s many New Zealanders felt the need for genuine national independence, a feeling strengthened when Britain joined the European Economic Community in 1973, causing serious problems for New Zealand trade. Most of this government's policies can be seen in this light.\n",
"New Zealand was a sovereign Dominion under the New Zealand monarchy, as per the Statute of Westminster 1931. It quickly entered World War II, officially declaring war on Germany on 3 September 1939, just hours after Britain. Unlike Australia, which had felt obligated to declare war, as it also had not ratified the Statute of Westminster, New Zealand did so as a sign of allegiance to Britain, and in recognition of Britain's abandonment of its former appeasement policy, which New Zealand had long opposed. This led to then Prime Minister Michael Joseph Savage declaring two days later:\n",
"The annexation of New Zealand by Britain meant that Britain now controlled New Zealand's foreign policy. Subsidised large-scale immigration from Britain and Ireland began, and miners came for the gold rush around 1850-60. In the 1860s, the British sent 16,000 soldiers to contain the New Zealand wars in the North Island. The colony shipped gold and, especially, wool to Britain. From the 1880s the development of refrigerated shipping allowed the establishment of an export economy based on the mass export of frozen meat and dairy products to Britain. In 1899-1902 New Zealand made its first contribution to an external war, sending troops to fight on the British side in the Second Boer War. The country changed its status from colony to dominion with internal self governance in 1907.\n",
"From the 1890s the New Zealand Parliament enacted a number of progressive initiatives, including women's suffrage and old age pensions. After becoming a self-governing dominion with the British Empire in 1907, the country remained an enthusiastic member of the empire, and over 100,000 New Zealanders fought in World War I as part of the New Zealand Expeditionary Force. After the war, New Zealand signed the Treaty of Versailles (1919), joined the League of Nations, and pursued an independent foreign policy, while its defence was still controlled by Britain.\n",
"In 1907, at the request of the New Zealand Parliament, King Edward VII proclaimed New Zealand a Dominion within the British Empire, reflecting its self-governing status. In 1947 the country adopted the Statute of Westminster, confirming that the British Parliament could no longer legislate for New Zealand without the consent of New Zealand.\n",
"New Zealand became a separate British Crown colony in 1841 and received responsible government with the Constitution Act in 1852. New Zealand chose not to take part in Australian Federation and became the Dominion of New Zealand on 26 September 1907, Dominion Day, by proclamation of King Edward VII. Dominion status was a public mark of the political independence that had evolved over half a century through responsible government.\n"
] |
what is behind the american fascination with japanese style tattoos? | Because it is foreign and mysterious and aesthetically pleasing.
A kanji looks cool and invites people to wonder about the meaning. This mystery implies that the owner has important secrets. Plus, if the viewer doesn't know the kanji then you can lie about what it means depending on your mood. | [
"At the beginning of the Meiji period the Japanese government, wanting to protect its image and make a good impression on the West and to avoid ridicule, outlawed tattoos, and irezumi took on connotations of criminality. Nevertheless, fascinated foreigners went to Japan seeking the skills of tattoo artists, and traditional tattooing continued underground. Tattooing was legalized by the occupation forces in 1948, but has retained its image of criminality. For many years, traditional Japanese tattoos were associated with the yakuza, Japan's notorious mafia, and many businesses in Japan (such as public baths, fitness centers and hot springs) still ban customers with tattoos.\n",
"Tattooing for spiritual and decorative purposes in Japan is thought to extend back to at least the Jōmon or Paleolithic period and was widespread during various periods for both the Japanese and the native Ainu. Chinese texts from before 300 AD described social differences among Japanese people as being indicated through tattooing and other bodiapanese. Chinese texts from the time also described Japanese men of all ages as decorating their faces and bodies with tattoos.\n",
"Although tattoos have gained popularity among the youth of Japan due to Western influence, there is still a stigma on them among the general consensus. Unlike the US, even finding a tattoo shop in Japan may prove difficult, with tattoo shops primarily placed in areas that are very tourist or US military friendly. According to Kunihiro Shimada, the president of the Japan Tattoo Institute, \"Today, thanks to years of government suppression, there are perhaps 300 tattoo artists in Japan.\"\n",
"Cliff Raven Ingram (August 24, 1932 – November 28, 2001) was one of a handful of tattoo artists (along with Sailor Jerry Collins and Don Ed Hardy) who pioneered the adoption of the Japanese tattoo aesthetic in the United States. Born in Indiana as \"Clifford H. Ingram,\" Cliff later shortened his first name and adopted his business name of \"Raven\" as his legal middle name, largely to facilitate mail delivery.\n",
"Even as Japanese hung Sambo signs throughout the city, they were undeniably attracted to black music and style. Before hip hop, the Japanese had embraced jazz, rock n roll and funk. It is important to note however, despite the seemingly racist tendencies toward Africans and the simultaneous embrace of black culture, the Japanese have a very different construction of racial ideology then the US. Whereas the white versus black dichotomy typifies the racial system in the US, the Japanese construct their identities in terms of nationalism. Rather than identifying strongly with a color, Japanese tradition speaks to a homogeneous society that places foreigners in the \"other category.\" Because of this context, \"jiggers\" and the young teens who wear blackface rebel by embracing individual identities that are different from the norm.\n",
"The Government of Meiji Japan, formed in 1868, banned the art of tattooing altogether, viewing it as barbaric and lacking respectability. This subsequently created a subculture of criminals and outcasts. These people had no place in \"decent society\" and were frowned upon. They could not simply integrate into mainstream society because of their obvious visible tattoos, forcing many of them into criminal activities which ultimately formed the roots for the modern Japanese mafia, the Yakuza, with which tattoos have become almost synonymous in Japan.\n",
"Japanese art has been an influence on hip hop culture as well. Takashi Murakami paints Japanese cultural objects and icons repetitiously and markets them on all sorts of products including keychains, mouse pads, T-shirts and Louis Vuitton handbags. He is responsible for Kanye West's Graduation and Kids See Ghosts album covers.\n"
] |
why does a shower speed up the sunburn symptoms? i just walked in with a light glow and walked out looking like zoidberg | Hot water acts as a vasodilator. It also irritates inflamed tissue if it's too hot, which your shower undoubtedly was if you're anything like me. So the shower increased blood flow to the inflamed area, and further irritated the damaged tissue. | [
"Other symptoms can include blistering, swelling (edema), pruritus (itching), peeling skin, rash, nausea, fever, chills, and fainting (syncope). Also, a small amount of heat is given off from the burn, caused by the concentration of blood in the healing process, giving a warm feeling to the affected area. Sunburns may be classified as superficial, or partial thickness burns. Blistering is a sign of second degree sunburn.\n",
"Common side effects are peeling, itching, redness, dryness, burning, and dermatitis. Benzoyl peroxide bleaches hair, clothes, towels, bedclothing, and the like. Prolonged exposure to natural or artificial sun light (UV rays) is not recommended because the gel may cause photosensitivity. Irritation due to benzoyl peroxide can be reduced by avoiding harsh facial cleansers and wearing sunscreen prior to sun exposure.\n",
"Once affected, the symptoms may not show for several days. The symptoms can be severe burning, itching, swelling and pain in the affected areas. The areas exposed to sunlight may have the appearance of a sunburn - where clothing is worn the skin is protected. There is no known reaction to moonlight, but reflections from windows and mirrors of sunlight can still cause damage.\n",
"Sunburn causes an inflammation process, including production of prostanoids and bradykinin. These chemical compounds increase sensitivity to heat by reducing the threshold of heat receptor (TRPV1) activation from to . The pain may be caused by overproduction of a protein called CXCL5, which activates nerve fibres.\n",
"Heat urticaria presents within five minutes after the skin has been exposed to heat above 43 degrees Celsius (109.4 degrees Fahrenheit), with the exposed area becoming burned, stinging, and turning red, swollen, and indurated.\n",
"Additionally, since sunburn is a type of radiation burn, it can initially hide a severe exposure to radioactivity resulting in acute radiation syndrome or other radiation-induced illnesses, especially if the exposure occurred under sunny conditions. For instance, the difference between the erythema caused by sunburn and other radiation burns is not immediately obvious. Symptoms common to heat illness and the prodromic stage of acute radiation syndrome like nausea, vomiting, fever, weakness/fatigue, dizziness or seizure can add to further diagnostic confusion.\n",
"Minor sunburns typically cause nothing more than slight redness and tenderness to the affected areas. In more serious cases, blistering can occur. Extreme sunburns can be painful to the point of debilitation and may require hospital care.\n"
] |
how do people in courtrooms, depositions, parliaments and whatnot type up everything being said so quickly? | My mum does this for the Australian state parliament -
there's a button for every vowel and a button for most of the important consonants - stenographers create their own shorthand dictionary over time and there are shortcuts for every word they have to use. Mum's been at it for about two to three years now and she's at about 150 wpm - it takes a long time to get it down! | [
"Internet researcher Annette Markham (1998) observes that text-based interviewing can take much longer than face-to-face, phone or Skype interviews because typing takes longer than talking. Textual methods require users to verbalize conventional aspects of polite conversation, such as nodding or smiling, which requires added effort and time.\n",
"While extracting private information by watching somebody typing on a keyboard might seem to be an easy task, it becomes extremely challenging if it has to be automated. However, an automated tool is needed in the case of long-lasting surveillance procedures or long user activity, as a human being is able to reconstruct only a few characters per minute. The paper \"ClearShot: Eavesdropping on Keyboard Input from Video\" presents a novel approach to automatically recovering the text being typed on a keyboard, based solely on a video of the user typing.\n",
"BULLET::::- Court reporter, a person whose occupation is to transcribe spoken or recorded speech into written form to produce official transcripts of court hearings, depositions and other official proceedings.\n",
"A transcription service is a business service which converts speech (either live or recorded) into a written or electronic text document. Transcription services are often provided for business, legal, or medical purposes. The most common type of transcription is from a spoken-language source into text such as a computer file suitable for printing as a document such as a report. Common examples are the proceedings of a court hearing such as a criminal trial (by a court reporter) or a physician's recorded voice notes (medical transcription). Some transcription businesses can send staff to events, speeches, or seminars, who then convert the spoken content into text. Some companies also accept recorded speech, either on cassette, CD, VHS, or as sound files. For a transcription service, various individuals and organisations have different rates and methods of pricing. That can be per line, per word, per minute, or per hour, which differs from individual to individual and industry to industry. Transcription companies primarily serve private law firms, local, state and federal government agencies and courts, trade associations, meeting planners and nonprofits.\n",
"Voice writing is a method used for court reporting, medical transcription, and closed captioning. Using the voice writing method, a court reporter speaks directly into a stenomask or speech silencer—a hand-held mask containing one or two microphones and voice-dampening materials. As the reporter repeats the testimony into the recorder, the mask prevents the reporter from being heard during testimony. Voice writers record everything that is said by judges, witnesses, attorneys, and other parties to a proceeding, including gestures and emotional reactions, and either provide real-time feed or prepare transcripts afterwards.\n",
"Real-time transcription is the general term for transcription by court reporters using real-time text technologies to deliver computer text screens within a few seconds of the words being spoken. Specialist software allows participants in court hearings or depositions to make notes in the text and highlight portions for future reference.\n",
"Candidates take the Reading and Use of English, Writing and Listening papers on the same day. The Speaking paper may be taken on the same day, but is more usually taken a few days before or after the other papers.\n"
] |
why does/did fox own the rights to the simpsons and family guy, shows that often deconstructed and even scorned traditional values? | So, the first and foremost thing to understand about Fox is that it's a large organization with a lot of channels. And, like almost all large organizations, it really only cares about one thing. Money.
Fox News Channel (which came into existence fairly late in Fox's life) has discovered a niche where they can make money showing conservative talking points. The regular Fox channels have discovered a niche where they can make money showing the Simpsons.
Fox has no political or moral ideology other than "please watch commercials in between shows so we make money" | [
"In 2007, John Ortved wrote an article for \"Vanity Fair\" titled \"Simpson Family Values\". Producers of the show, including Groening, Brooks and Simon, chose not to cooperate in the project. Ortved believes that the reason was because \"were upset [that] the myth of \"The Simpsons\" would be challenged.\" Shortly after the article was published, an agent suggested that Ortved write a full book. The producers again decided not to participate, and, according to Ortved, Brooks asked current and former \"Simpsons\" employees not to talk to Ortved. However, the book does include portions of interviews that several figures did with other sources. Ortved did interview a number of sources for the book, including Hank Azaria, a cast member of the show since the second season, Fox Broadcasting Company owner Rupert Murdoch and former writer Conan O'Brien.\n",
"Fox cartoon series have been the subject of controversy; most notably, \"The Simpsons\", \"American Dad!\" and \"Family Guy\", for their approach to comedy and for the coarse language and jokes that some have said are too raunchy for network TV. These shows have been some of the most risqué material aired on FOX. Fox has also been accused by some groups of corrupting children with cartoons ostensibly for teens and adults. In Venezuela, \"The Simpsons\" and \"Family Guy\" have been taken off the air due to their content. In Russia, \"Family Guy\" and \"The Simpsons\" were subject to lawsuits regarding their content, although in Russia other animated series like \"South Park\" were more controversial.\n",
"\"The Simpsons\" was co-developed by Groening, James L. Brooks, and Sam Simon, a writer-producer with whom Brooks had worked on previous projects. Groening and Simon, however, did not get along and were often in conflict over the show; Groening once described their relationship as \"very contentious\". Groening said his goal in creating the show was to offer the audience an alternative to what he called \"the mainstream trash\" that they were watching. Brooks negotiated a provision in the contract with the Fox network that prevented Fox from interfering with the show's content. Fox was nervous about the show because they were unsure if it could sustain the audience's attention for the duration of the episode. They proposed doing three seven-minute shorts per episode and four specials until the audience adjusted, but in the end, the producers gambled by asking Fox for 13 full-length episodes.\n",
"Because the Fox network was new to the world of television production, a bureaucracy had not yet been established. This enabled the show to take risks and the freedom to try things that the major networks would never permit. The series landed an initial twenty-six episode commitment deal, unheard of for a television comedy. \"The Tracey Ullman Show\" debuted on 5 April 1987. Describing the show proved difficult. Creator Ken Estin dubbed it a \"skitcom\". A variety of diverse original characters were created for her to perform. Extensive makeup, wigs, teeth, and body padding were utilised, sometimes rendering her unrecognisable. One original character created by Ullman back in Britain was uprooted for the series: long-suffering British spinster Kay Clark.\n",
"\"The Simpsons\" was co-developed by Groening, Brooks, and Sam Simon, a writer-producer with whom Brooks had worked on previous projects. Groening said his goal was to offer the audience an alternative to what he called \"the mainstream trash\". Brooks negotiated a provision in the contract with the Fox network that prevented Fox from interfering with the show's content. Fox network was unsure if the show could sustain the audience's attention for the duration of the episode. They proposed doing three seven-minute shorts per episode and four specials until the audience adjusted, but the producers gambled by asking Fox for 13 full-length episodes.\n",
"The PTC has targeted Fox, criticizing the network for failing to include \"S\" (sexual content) and \"V\" (violence) descriptors in content ratings for some \"Family Guy\" episodes. The Council has cautioned parents that due to the animation style, children could get attracted to the adult show. In order to prevent child viewing, the PTC has objected to Fox scheduling \"Family Guy\" during early prime time hours. Additionally, the Council has asked \"Family Guy\" sponsors such as Wrigley Company and Burger King to stop advertising during the show as their products appeal to kids. \n",
"Fox ordered thirteen episodes. Immediately after, however, Fox feared the themes of the show were not suitable for the network and Groening and Fox executives argued over whether the network would have any creative input into the show. With \"The Simpsons\", the network has no input. Fox was particularly disturbed by the concept of suicide booths, Doctor Zoidberg, and Bender's anti-social behavior. Groening explains, \"When they tried to give me notes on \"Futurama\", I just said: 'No, we're going to do this just the way we did \"Simpsons\".' And they said, 'Well, we don't do business that way anymore.' And I said, 'Oh, well, that's the only way I do business.'\" The episode \"I, Roommate\" was produced to address Fox's concerns, with the script written to their specifications. Fox strongly disliked the episode, but after negotiations, Groening received the same independence with \"Futurama\".\n"
] |
why this new obama tax plan is making waves. i thought you had to pay 2% on income if it's over 380k? | The percentage you are thinking of is 35% for income wages over $379,151. Those who are making over $1 million per year aren't likely to be making it from wages; they are making it from capital gains (read: investment or Wall St.). Taxes on this kind of income can be much lower, and can be made even lower still for those who can afford a tax attorney. What Obama proposes is to close this disparity, ending loopholes and broadening the definition of "income" to make sure the rich have to pay.
[2011 US Income Tax Brackets](_URL_0_)
[US Capital Gains Taxes](_URL_1_) | [
"Obama increased taxes on high-income taxpayers via: a) expiration of the Bush income tax cuts for the top 1–2% of income earners starting with 2013; and b) payroll tax increases on roughly the top 5% of earners as part of the ACA. This increased the average tax rate paid by the top 1% (incomes above $443,000 in 2015) from 28% in 2012 to 34% in 2013. According to the CBO, after-tax income inequality improved, by lowering the share of after-tax income received by the top 1% from 16.7% in 2007 to 15.1% in 2012 and to 12.4% in 2013.\n",
"Obama responded with an explanation of how his tax plan would affect a small business in this bracket. Obama said, \"If you're a small business, which you would qualify, first of all, you would get a 50 percent tax credit so you'd get a cut in taxes for your health care costs. So you would actually get a tax cut on that part. If your revenue is above 250, then from 250 down, your taxes are going to stay the same. It is true that, say for 250 up — from 250 to 300 or so, so for that additional amount, you'd go from 36 to 39 percent, which is what it was under Bill Clinton.\"\n",
"BULLET::::- President Obama raised income tax rates on the top 1% via partial expiration of the Bush tax cuts in January 2013. He also raised payroll taxes on the top 5% as part of the Affordable Care Act at that time. Despite these tax increases, average monthly job creation increased from 179,000 in 2012 to 192,000 in 2013 and 250,000 in 2014.\n",
"Obama's plan is to cut income taxes overall, which he states would reduce revenues to below the levels that prevailed under Ronald Reagan (less than 18.2 percent of GDP). Obama argues that his plan is a net tax cut, and that his tax relief for middle-class families is larger than the revenue raised by his tax changes for families over $250,000. Obama plans to pay for the tax changes while bringing down the budget deficit by cutting unnecessary spending.\n",
"BULLET::::- According to a September 2016 analysis by the conservative Tax Foundation, Trump's tax plan would reduce federal revenue by around $4.4 to $5.9 trillion over 10 years. The $1.5 trillion gap is because the Trump campaign has not clarified some aspects of the tax plan and have provided contradictory explanations. While the tax plan would reduce taxes across the spectrum, it does so the most for the richest Americans.\n",
"The bottom 99% also saw an average federal tax rate increase by one percentage point from 2012 to 2013, mainly due to the expiration of the Obama payroll tax cuts, which were in place in 2011 and 2012. However, for income groups in the bottom 99%, the average federal tax rate remained at or below the 2007 level. \"Politifact\" rated the claim that Obama had cut taxes for middle-class families and small businesses as \"Mostly True\" in 2012.\n",
"The conservative Tax Foundation estimated in January 2016 that, in the long term, the plan would decrease economic growth by 1%, wages by 0.8% and jobs by 311,000. The Tax Foundation estimates an increase in revenues of $498 billion, but applied dynamic scoring analysis to that figure and reduced it to $191 billion due to weaker economic growth. The Clinton campaign \"said the Tax Foundation's analysis is misleading and doesn't take into account her tax relief for businesses and individuals, nor her investments that would promote growth.\"\n"
] |
If you hold in poop does your intestine still absorb nutrients or does it just kind of sit there at the end of the line? | It's called [encopresis](_URL_0_). By the time you have conscious control, it's more water being absorbed than anything else. If you refuse to poo, it gets harder and bigger, so the next time you poo it hurts. Then, if you are a three-year-old, you repeat the process ad infinitum and drive me crazy. Don't do this. | [
"The small intestine is normally in length. As the Y-connection is moved further down the gastrointestinal tract, the amount available to fully absorb nutrients is progressively reduced, traded for greater effectiveness of the operation. The Y-connection is formed much closer to the lower (distal) end of the small intestine, usually from the lower end, causing reduced absorption (malabsorption) of food: primarily of fats and starches, but also of various minerals and the fat-soluble vitamins. The unabsorbed fats and starches pass into the large intestine, where bacterial actions may act on them to produce irritants and malodorous gases. These larger effects on nutrition are traded for a relatively modest increase in total weight loss.\n",
"Digested food is now able to pass into the blood vessels in the wall of the intestine through either diffusion or active transport. The small intestine is the site where most of the nutrients from ingested food are absorbed. The inner wall, or mucosa, of the small intestine is lined with simple columnar epithelial tissue. Structurally, the mucosa is covered in wrinkles or folds called plicae circulares, which are considered permanent features in the wall of the organ. They are distinct from rugae which are considered non-permanent or temporary allowing for distention and contraction. From the plicae circulares project microscopic finger-like pieces of tissue called villi (Latin for \"shaggy hair\"). The individual epithelial cells also have finger-like projections known as microvilli. The functions of the plicae circulares, the villi, and the microvilli are to increase the amount of surface area available for the absorption of nutrients, and to limit the loss of said nutrients to intestinal fauna.\n",
"While the first part of the large intestine is responsible for the absorption of water and other substances from the chyme, the main function of the descending colon is to store waste until it can be removed from the body in solid form, when a person has a bowel movement.\n",
"Food then moves on to the jejunum. This is the most nutrient absorptive section of the small intestine. The liver regulates the level of nutrients absorbed into the blood system from the small intestine. From the jejunum, whatever food that has not been absorbed is sent to the ileum which connects to the large intestine. The first part of the large intestine is the cecum and the second portion is the colon. The large intestine reabsorbs water and forms fecal matter.\n",
"In some families there are openings in the dorsal surface of the oesophagus connecting to the external surface, through which water from the food can be squeezed, helping to concentrate it. Digestion occurs in the intestine, with food material being pulled through by cilia, rather than by muscular action.\n",
"No stomach is present, with the pharynx connecting directly to a muscleless intestine that forms the main length of the gut. This produces further enzymes, and also absorbs nutrients through its single-cell-thick lining. The last portion of the intestine is lined by cuticle, forming a rectum, which expels waste through the anus just below and in front of the tip of the tail. Movement of food through the digestive system is the result of body movements of the worm. The intestine has valves or sphincters at either end to help control the movement of food through the body.\n",
"Cilia pull the food through the mouth in a stream of mucus and through the oesophagus, where it is partially digested by enzymes from a pair of large pharyngeal glands. The oesophagus, in turn, opens into a stomach, where enzymes from a digestive gland complete the breakdown of the food. Nutrients are absorbed through the linings of the stomach and the first part of the intestine. The intestine is divided in two by a sphincter, with the latter part being highly coiled and functioning to compact the waste matter into faecal pellets. The anus opens just behind the foot.\n"
] |
is lava sticky? | I can't answer if it's sticky, but you don't have to be sticky to be viscous. Just as one example, gear oil is quite viscous without being sticky; it's especially thick when cold, and since it's a lubricant, it's the definition of not sticky. | [
"Highly viscous lavas do not usually flow as liquid, and usually form explosive fragmental ash or tephra deposits. However, a degassed viscous lava or one which erupts somewhat hotter than usual may form a lava flow.\n",
"Lava flows from stratovolcanoes are generally not a significant threat to humans and animals because the highly viscous lava moves slowly enough for everyone to flee out of the path of flow. The lava flows are more of a threat to property. However, not all stratovolcanoes erupt viscous and sticky lava. Nyiragongo is very dangerous because its magma has an unusually low silica content, making it quite fluid. Fluid lavas are typically associated with the formation of broad shield volcanoes such as those of Hawaii, but Nyiragongo has very steep slopes down which lava can flow at up to . Lava flows could melt down ice and glaciers that accumulated on the volcano's crater and upper slopes, generating massive lahar flows. Rarely, generally fluid lava could also generate massive lava fountains, while lava of thicker viscosity can solidify within the vent, creating a block which can result in highly explosive eruptions.\n",
"The floor is lava (also known as hot lava) is a game in which players pretend that the floor or ground is made of lava (or any other lethal substance, such as acid or quicksand), and thus must avoid touching the ground, as touching the ground would \"kill\" the player who did so. The players stay off the floor by standing on furniture or the room's architecture. The players generally may not remain still, and are required to move from one piece of furniture to the next. The game can be played with a group or alone for self amusement. There may even be a goal, to which the players must race. The game may also be played outdoors in playgrounds or similar areas. Players can also set up obstacles such as nice padded chairs to make the game more challenging. This is a variation of an obstacle course.\n",
"\"Sticky Sticky\" is a song by South Korean girl group Hello Venus. It was released on November 6, 2014 under Fantagio and is the group's fourth single overall. It is the first release to feature new members Seoyoung and Yeoreum following the departures of Yooara and Yoonjo after Pledis Entertainment and Fantagio had ended their partnership in July 2014.\n",
"A lava balloon is a gas-filled bubble of lava that floats on the sea surface. It can be up to several metres in size. When it emerges from the sea, it is usually hot and often steaming. After floating for some time it fills with water and sinks again.\n",
"The base of a lava flow may show evidence of hydrothermal activity if the lava flowed across moist or wet substrates. The lower part of the lava may have vesicles, perhaps filled with minerals (amygdules). The substrate upon which the lava has flowed may show signs of scouring, it may be broken or disturbed by the boiling of trapped water, and in the case of soil profiles, may be baked into a brick-red terracotta.\n",
"\"Hot Lava\" is an adventure video game in the first-person perspective. In the game, the player control the character as they jump, leap, wall run and swing from object to object. Some of the objects include tables, couches, and chairs.\n"
] |
How common was wearing masks in renaissance Venice? | Venetian masks outside of carnival (which only lasted for the month before Lent) and theater/masquerade were not so much a thing in the Renaissance; according to James Johnson in *Venice Incognito: Masks in the Serene Republic*, the practice began among the male and female nobility in the late seventeenth century and soon spread to all ranks of society, lasting until the Venetian Republic fell to Napoleon in 1797. Foreign visitors to Venice expected the city to be full of masked revelers and assassins, but found a whole lot of ordinary shoppers and bystanders that happened to be wearing a covering on their faces. While carnival masks gave the wearers a freedom to act outside of social norms - women could associate with anyone, since they were unrecognizable, and male crossdressers (*gnaghe*) could walk around freely - everyday masks had a very different context.
As the wealthy returned from their estates outside the city in October and the theatrical and social season began, masks reappeared in the piazzas of Venice, most people wearing the combination of the *tabàro*, a long black cloak; *baùta*, a black hood; and *larva*, a white mask that flared out at the bottom to allow for talking and eating. The *larva* was usually held on by being tucked under a cocked hat that sat low on the head, but while men always wore such headgear, women did not; when the latter were bareheaded or in another sort of hat or a plain cloak, they tended to wear a black *morèta* mask, which was instead held to the face with a tab or button they kept between their lips. After Lent, the theaters closed again and masks were less prevalent, but when they opened again in the late spring for Ascension, the masks reappeared (though worn with the *baùta* pulled down to mitigate the heat and humidity). The church tried to regulate on what days and at what times masks could be worn, but by 1720 they were normalized as a part of everyday dress during the proper season.
Some did, of course, use the natural advantage of the *larva*: hired thugs, prostitutes at the theater, and booksellers with obscene material wore masks on a regular basis to protect their identities while doing illegal things. Others wore masks out of less pressing necessity, but still to remain incognito, like men running private messages and the city's surveillance agents. However, there was another end of the spectrum, where the mask was worn as an indication of formality and respect. Members of the nobility would dress in *larve* when attending the introductions of new ambassadors or greeting foreign royalty traveling incognito in their own masks, when a new doge was elected or the doge's children were married, for particular religious or historical commemorative ceremonies.
The habit of masking was believed to have led to rising crime rates early in the eighteenth century, and could lead to all kinds of comedies of errors - but it was entrenched in Venetian custom and normally was seen as completely unremarkable.
| [
"Another tradition of European masks developed, more self-consciously, from court and civic events, or entertainments managed by guilds and co-fraternities. These grew out of the earlier revels and had become evident by the 15th century in places like Rome, and Venice, where they developed as entertainments to enliven towns and cities. Thus the Maundy Thursday carnival in St Marks Square in Venice, attended by the Doge and aristocracy also involved the guilds, including a guild of maskmakers. There is evidence of 'commedia dell'arte' inspired Venetian masks and by the late 16th century the Venetian Carnival began to reach its peak and eventually lasted a whole 'season' from January until Lent. By the 18th century, it was already a tourist attraction, Goethe saying that he was ugly enough not to need a mask. The carnival was repressed during the Napoleonic Republic, although in the 1980s its costumes and the masks aping the C 18th heyday were revived. It appears other cities in central Europe were influenced by the Venetian model.\n",
"Historians, travel guide authors, novelists, and merchants of Venetian masks have noted that these have a long history of being worn during promiscuous activities. Authors Tim Kreider and Thomas Nelson have linked the film's usage of these to Venice's reputation as a center of both eroticism and mercantilism. Nelson notes that the sex ritual combines elements of Venetian Carnival and Catholic rites, in particular, the character of \"Red Cloak\" who simultaneously serves as Grand Inquisitor and King of Carnival. As such, Nelson argues that the sex ritual is a symbolic mirror of the darker truth behind the façade of Victor Ziegler's earlier Christmas party. Carolin Ruwe, in her book \"Symbols in Stanley Kubrick's Movie 'Eyes Wide Shut\"', argues that the mask is the prime symbol of the film. Its symbolic meaning is represented through its connection to the characters in the film; as Tim Kreider points out, this can be seen through the masks in the prostitute's apartment and her being renamed as \"Domino\" in the film, which is a type of Venetian Mask.\n",
"The first documented sources mentioning the use of masks in Venice can be found as far back as the 13th century. The Great Council made it a crime for masked people to throw scented eggs. These \"ovi odoriferi\" were eggshells that were usually filled with rose water perfume, and tossed by young men at their friends or at young women they admired. However, in some cases, the eggs were filled with ink or other damaging substances. Gambling in public was normally illegal, except during Carnival. The document decrees that masked persons were forbidden to gamble.\n",
"There is little evidence explaining the motive for the earliest mask wearing in Venice. One scholar argues that covering the face in public was a uniquely Venetian response to one of the most rigid class hierarchies in European history. During Carnival, the sumptuary laws were suspended, and people could dress as they liked, instead of according to the rules that were set down in law for their profession and social class.\n",
"In many cultural traditions, the masked performer is a central concept and is highly valued. In the western tradition, actors in Ancient Greek theatre wore masks, as they do in traditional Japanese Noh drama. In some Greek masks, the wide and open mouth of the mask contained a brass megaphone enabling the voice of the wearer to be projected into the large auditoria. In medieval Europe, masks were used in mystery and miracle plays to portray allegorical creatures, and the performer representing God frequently wore a gold or gilt mask. During the Renaissance, masques and ballet de cour developed - courtly masked entertainments that continued as part of ballet conventions until the late eighteenth century. The masked characters of the Commedia dell'arte included the ancestors of the modern clown. In contemporary western theatre, the mask is often used alongside puppetry to create a theatre which is essentially visual rather than verbal, and many of its practitioners have been visual artists.\n",
"Venice, especially during the Middle Ages, the Renaissance, and Baroque periods, was a major centre of art and developed a unique style known as the Venetian School. In the Middle Ages and the Renaissance, Venice, along with Florence and Rome, became one of the most important centres of art in Europe, and numerous wealthy Venetians became patrons of the arts. Venice at the time was a rich and prosperous Maritime Republic, which controlled a vast sea and trade empire.\n",
"Masks have always been an important feature of the Venetian carnival. Traditionally people were allowed to wear them between the festival of \"Santo Stefano\" (St. Stephen's Day, December 26) and the end of the carnival season at midnight of Shrove Tuesday. As masks were also allowed on Ascension and from October 5 to Christmas, people could spend a large portion of the year in disguise. Maskmakers (\"mascherari\") enjoyed a special position in society, with their own laws and their own guild.\n"
] |
When alcohol is poured into a hot liquid (for example, almost-boiling water), does it vaporise? For example, whiskey in an Irish coffee or anything similar? | Yes, alcohol can vaporize. This fact is utilized in [distillation](_URL_1_) to separate alcohol from solution. You can see from this [phase diagram](_URL_0_) that the boiling point of a water/ethanol solution is lower than that of water alone. However, distillation relies on a constant heat source - vaporization is an endothermic process, meaning the solution cools down as alcohol vaporizes. A cup of coffee is unlikely to _completely_ vaporize the alcohol such that none remains (which is why an Irish coffee still tastes of whiskey). | [
"The final liquor is treated by blowing carbon dioxide through it. This precipitates dissolved calcium and other impurities. It also volatilizes the sulfide, which is carried off as HS gas. Any residual sulfide can be subsequently precipitated by adding zinc hydroxide. The liquor is separated from the precipitate and evaporated using waste heat from the reverberatory furnace. The resulting ash is then redissolved into concentrated solution in hot water. Solids that fail to dissolve are separated. The solution is then cooled to recrystallize nearly pure sodium carbonate decahydrate.\n",
"When the aged liquor is heated, it vaporizes, and the vaporized steam is cooled by the cold water in the upper parts of the \"soju gori\". This causes the vaporized alcohol to be condensed and trickle down through the pipe.\n",
"Pouring sparkling wine while tilting the glass at an angle and gently sliding in the liquid along the side will preserve the most bubbles, as opposed to pouring directly down to create a head of \"mousse\", according to a study, \"On the Losses of Dissolved CO during Champagne serving\", by scientists from the University of Reims. Colder bottle temperatures also result in reduced loss of gas. Additionally, the industry is developing Champagne glasses designed specifically to reduce the amount of gas lost.\n",
"The method was tested on 96% spirit vodka. In this method, melted wax (stearic acid) is stirred, and the alcoholic drink is poured in. The solution dissipates and becomes drops containing alcohol and wax. The drops that solidify constitute alcohol powder.\n",
"The liquid being distilled is a mixture of mainly water and alcohol, along with smaller amounts of other by-products of fermentation (called congeners), such as aldehydes and esters. Alcohol (ethanol) has a normal boiling point of 78.4 °C (173.12 °F), compared with pure water, which boils at 100 °C (212 °F). As alcohol has a lower boiling point, it is more volatile and evaporates at a higher rate than water. Therefore, the concentration of alcohol in the vapour phase above the liquid is higher than in the liquid itself.\n",
"When a gas is heated, its volume increases. It will displace cooler gases as it expands. If this is done in a partially enclosed space, and the vessel allowed to be full of hot gas, when the heat is taken away, the gases cool and as they do, the amount of liquids that the gas can hold will decrease. Therefore, most of the moisture and alcohol held in the gas will condense. As the gas cools, it decreases in volume, and the pressure drops. Thus when the flaming alcohol in a backdraft is covered with a pint glass over a saucer, the dense, cold air is replaced with less dense, warm air with a lot of alcohol vapour held in it. As the oxygen flow to the fire is restricted, the remaining oxygen is used up and the fire in the pint glass goes out, removing the heat source. The alcohol-laden warm air now in the glass cools and begins to create a pressure difference. The air outside the pint glass forces its way into the partially evacuated pint glass and is responsible for pushing any liquid at the outside bottom of the pint glass further inside (as the seal of the glass and the saucer is not perfect) as it begins to equalise the pressure difference. Once the majority of the liquid is inside the upside down pint glass, sometimes further air can be seen to bubble up into the glass. At some point an equilibrium will occur, where the pressure difference between inside and outside of the glass is equal to the pressure of the column of liquid held up inside, and this will hold the liquid inside the glass. Sometimes, when a good seal is made, the pressure difference between the inside and the outside of the glass exerts a great enough force that when the glass is lifted, the saucer will remain stuck to its underside.\n",
"Irish whiskey and at least one level teaspoon of sugar are poured over black coffee and stirred in until fully dissolved. Thick cream is carefully poured over the back of a spoon initially held just above the surface of the coffee and gradually raised a little until the entire layer is floated.\n"
] |
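The boiling-point reasoning in the answer above can be sketched numerically with the Antoine equation. This is a rough illustration, not part of the source: the coefficients below are commonly tabulated approximate values, and the 70 °C "hot coffee" temperature is an assumption chosen for the example.

```python
# Antoine equation: log10(P_mmHg) = A - B / (C + T_celsius)
# The coefficients below are commonly tabulated approximate values,
# valid only over limited temperature ranges.
ANTOINE = {
    "ethanol": (8.20417, 1642.89, 230.300),  # valid roughly -57..80 C
    "water":   (8.07131, 1730.63, 233.426),  # valid roughly   1..100 C
}

def vapor_pressure_mmhg(species: str, t_celsius: float) -> float:
    """Estimate a pure component's vapor pressure in mmHg."""
    a, b, c = ANTOINE[species]
    return 10 ** (a - b / (c + t_celsius))

# At an assumed hot-coffee temperature of 70 C, ethanol's vapor pressure
# is more than double water's, so it evaporates preferentially - but it
# is still well below atmospheric pressure (760 mmHg), so it does not
# boil away, and vaporization removes heat, cooling the drink further.
for species in ("ethanol", "water"):
    print(f"{species}: ~{vapor_pressure_mmhg(species, 70.0):.0f} mmHg at 70 C")
```

With these coefficients, ethanol comes out near 540 mmHg at 70 °C versus roughly 230 mmHg for water, and it reaches 760 mmHg (its normal boiling point) at about 78 °C, consistent with the 78.4 °C figure quoted in the context.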
why are single-use straws so bad even if properly disposed of in a landfill? | The goal is to reduce the amount of trash, period. Single-use, disposable, common items are a total waste of resources and fill up dumps. Straws are especially heinous because they generally serve no real purpose. Unless you're physically disabled in some way, just drink from the cup.
You also need to realize that a landfill is not a solution to the trash problem. It's just putting all the problems in the same place and putting a carpet over them. | [
"Microplastics pollution are a concern if plastic waste is improperly dumped. If plastic straws are improperly disposed of, they can be transported via water into soil ecosystems, and others, where they break down into smaller, more hazardous pieces than the original plastic straw.\n",
"Plastic drinking straw production contributes a small amount to petroleum consumption, and the used straws become a small part of global plastic pollution when discarded, most after a single use. One anti-straw advocacy group has estimated that about 500 million straws are used daily in the United States alone – an average 1.6 straws per capita per day. This statistic has been criticized as inaccurate, because it was approximated by Milo Cress, who was 9 years old at the time, after surveying straw manufacturers. This figure has been widely cited by major news organizations. In 2017 the market research firm Fredonia Group estimated the number to be 390 million. \n",
"In the first half of 2018, three towns in Massachusetts banned petrochemical plastic straws directly in the case of Provincetown, and as part of broader sustainable food packaging laws in Andover and Brookline. The city of Seattle implemented a ban on non-compostable disposable straws on July 1, 2018.\n",
"Today, the disposal of wastes by land filling or land spreading is the ultimate fate of all solid wastes, whether they are residential wastes collected and transported directly to a landfill site, residual materials from materials recovery facilities (MRFs), residue from the combustion of solid waste, compost, or other substances from various solid waste processing facilities. A modern sanitary landfill is not a dump; it is an engineered facility used for disposing of solid wastes on land without creating nuisances or hazards to public health or safety, such as the problems of insects and the contamination of ground water.\n",
"BULLET::::1. Landfills: Dumping e-waste into landfills that are not designed to contain e-waste can lead to contamination of surface and groundwater because the toxic chemicals can leach from landfills into the water supply.\n",
"Bins need holes or mesh for aeration. Some people add a spout or holes in the bottom for excess liquid to drain into a tray for collection. The most common materials used are plastic: recycled polyethylene and polypropylene and wood. Worm compost bins made from plastic are ideal, but require more drainage than wooden ones because they are non-absorbent. However, wooden bins will eventually decay and need to be replaced.\n",
"Current solutions to dealing with the amount of plastic being thrown away include burning the plastics and dumping them into large fields or landfills. Burning plastics leads to significant amounts of air pollution, which is harmful to human and animal health. When dumped into fields or landfills, plastics can cause changes in the pH of the soil, leading to soil infertility. Furthermore, plastic bottles and plastic bags that end up in landfills are frequently consumed by animals, which then clogs their digestive systems and leads to death.\n"
] |
How can some materials resist being dissolved by both polar and non-polar solvents? | While the chemical bonds in glass are slightly polar, the bonds that hold the atoms together are quite strong and also form a very large interconnected network. It's as if the whole piece of glass were one molecule.
It's not possible to dissolve it simply by surrounding it with solvent molecules. You'd need a solvent that is strong enough to nibble away at the silicon-oxygen bonds that make up silica. On top of this, you are essentially constrained by the fact that these chemical reactions must take place at the surface of the glass. | [
"The polarity, dipole moment, polarizability and hydrogen bonding of a solvent determines what type of compounds it is able to dissolve and with what other solvents or liquid compounds it is miscible. Generally, polar solvents dissolve polar compounds best and non-polar solvents dissolve non-polar compounds best: \"like dissolves like\". Strongly polar compounds like sugars (e.g. sucrose) or ionic compounds, like inorganic salts (e.g. table salt) dissolve only in very polar solvents like water, while strongly non-polar compounds like oils or waxes dissolve only in very non-polar organic solvents like hexane. Similarly, water and hexane (or vinegar and vegetable oil) are not miscible with each other and will quickly separate into two layers even after being shaken well.\n",
"Polar solutes dissolve in polar solvents, forming polar bonds or hydrogen bonds. As an example, all alcoholic beverages are aqueous solutions of ethanol. On the other hand, non-polar solutes dissolve better in non-polar solvents. Examples are hydrocarbons such as oil and grease that easily mix with each other, while being incompatible with water.\n",
"When one substance is dissolved into another, a solution is formed. This is opposed to the situation when the compounds are insoluble like sand in water. In a solution, all of the ingredients are uniformly distributed at a molecular level and no residue remains. A solvent-solute mixture consists of a single phase with all solute molecules occurring as \"solvates\" (solvent-solute complexes), as opposed to separate continuous phases as in suspensions, emulsions and other types of non-solution mixtures. The ability of one compound to be dissolved in another is known as solubility; if this occurs in all proportions, it is called miscible.\n",
"Solubilization is distinct from dissolution because the resulting fluid is a colloidal dispersion involving an association colloid. This suspension is distinct from a true solution, and the amount of the solubilizate in the micellar system can be different (often higher) than the regular solubility of the solubilizate in the solvent.\n",
"The solvents do not form a unified solution together because they are immiscible. When the funnel is allowed to sit after being shaken, the liquids form distinct physical layers, with the less dense liquid floating and more dense sinking. A mixture of solutes is thus separated into two physically separate solutions, each enriched in different solutes.\n",
"The solubility is highest in polar solvents (such as water) or ionic liquids, but tends to be low in nonpolar solvents (such as petrol/gasoline). This is principally because the resulting ion–dipole interactions are significantly stronger than ion-induced dipole interactions, so the heat of solution is higher. When the oppositely charged ions in the solid ionic lattice are surrounded by the opposite pole of a polar molecule, the solid ions are pulled out of the lattice and into the liquid. If the solvation energy exceeds the lattice energy, the negative net enthalpy change of solution provides a thermodynamic drive to remove ions from their positions in the crystal and dissolve in the liquid. In addition, the entropy change of solution is usually positive for most solid solutes like ionic compounds, which means that their solubility increases when the temperature increases. There are some unusual ionic compounds such as cerium(III) sulfate, where this entropy change is negative, due to extra order induced in the water upon solution, and the solubility decreases with temperature.\n",
"By contrast, substances are said to be immiscible if there are certain proportions in which the mixture does not form a solution. For example, oil is not soluble in water, so these two solvents are immiscible, while butanone (methyl ethyl ketone) is significantly soluble in water, these two solvents are also immiscible because they are not soluble in all proportions.\n"
] |
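The "like dissolves like" rule from the quoted context can be turned into a crude numeric heuristic using Hildebrand solubility parameters. The parameter values and the cutoff below are ballpark figures chosen for illustration, not authoritative data.

```python
# Approximate Hildebrand solubility parameters, in MPa**0.5.
# These are rough, illustration-only values.
HILDEBRAND = {
    "water":   47.8,
    "ethanol": 26.5,
    "acetone": 19.9,
    "toluene": 18.2,
    "hexane":  14.9,
}

def likely_miscible(a: str, b: str, cutoff: float = 7.0) -> bool:
    """Crude heuristic: liquids with similar solubility parameters tend
    to mix. The cutoff value is an assumption, not a physical law."""
    return abs(HILDEBRAND[a] - HILDEBRAND[b]) < cutoff

# Nonpolar with nonpolar mixes; water with hexane does not.
print(likely_miscible("hexane", "toluene"))  # True
print(likely_miscible("water", "hexane"))    # False
```

Note that this single-parameter heuristic wrongly flags water and ethanol as immiscible: hydrogen bonding, which the simple comparison ignores, makes them fully miscible in reality. That limitation is exactly why the quoted passage lists polarity, dipole moment, polarizability, and hydrogen bonding as separate factors.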
why is there a lower limit to brightness on your smartphone? why can’t they make it so that you can keep on turning it down till your phone turns completely dark? | Designer: Hey! Let's make it so the user can turn the brightness all the way down!
Boss: Great idea. Are you volunteering to go sit your ass down in that chair, put on the headset, and spend the next 5 years fielding nothing but support calls from angry customers?
Designer: But we'll have obvious ways to reset the brightness. They'll be intuitive, they'll be spelled out in the user manual, and we'll put it at the top of the FAQ on our website.
Boss: Go spend an hour in support right now. See what questions our users are already asking.
[an hour later...]
Designer: Holy fucking shitsnacks. Never mind. | [
"A problem is that without the back light the screen looks very dark and it's hard to see the image, so the backlight must run continuously except when it's not necessary to look at the screen (for example, when using the console as an MP3 player).\n",
"Other display technologies do not flicker noticeably, so the frame rate is less important. LCD flat panels do not \"seem\" to flicker at all, as the backlight of the screen operates at a very high frequency of nearly 200 Hz, and each pixel is changed on a scan rather than briefly turning on and then off as in CRT displays. However, the nature of the back-lighting used can induce flicker – LEDs cannot be easily dimmed, and therefore use pulse-width modulation to create the illusion of dimming, and the frequency used can be perceived as flicker by sensitive users.\n",
"Using smartphones late at night can disturb sleep, due to the blue light and brightly lit screen, which affects melatonin levels and sleep cycles. In an effort to alleviate these issues, several apps that change the color temperature of a screen to a warmer hue based on the time of day to reduce the amount of blue light generated have been developed for Android, while iOS 9.3 integrated similar, system-level functionality known as \"Night Shift\". Amazon released a feature known as \"blue shade\" in their Fire OS \"Bellini\" 5.0 and later. It has also been theorized that for some users, addicted use of their phones, especially before they go to bed, can result in \"ego depletion\". Many people also use their phones as alarm clocks, which can also lead to loss of sleep.\n",
"In the professional lighting industry, changes in intensity are called \"fades\" and can be \"fade up\" or \"fade down\". Dimmers with direct manual control had a limit on the speed they could be varied at but this problem has been largely eliminated with modern digital units (although very fast changes in brightness may still be avoided for other reasons like lamp life).\n",
"Many smartphone addiction activists (such as Tristan Harris) recommend turning one's phone screen to grayscale mode, which helps reduce time spent on mobile phones by making them boring to look at. Other phone settings alterations for mobile phone non-use included turning on airplane mode, turning off cellular data and/ or WiFi, turning off the phone, removing specific apps, and factory resetting.\n",
"In LCD screens, the LCD itself does not flicker, it preserves its opacity unchanged until updated for the next frame. However, in order to prevent accumulated damage LCD displays quickly alternate the voltage between positive and negative for each pixel, which is called 'polarity inversion'. Ideally, this wouldn't be noticeable because every pixel has the same brightness whether a positive or a negative voltage is applied. In practice, there is a small difference, which means that every pixel flickers at about 30 Hz. Screens that use opposite polarity per-line or per-pixel can reduce this effect compared to when the entire screen is at the same polarity, sometimes the type of screen is detectable by using patterns designed to maximize the effect.\n",
"BULLET::::- As of 2012, most implementations of LCD backlighting use pulse-width modulation (PWM) to dim the display, which makes the screen flicker more acutely (this does not mean visibly) than a CRT monitor at 85 Hz refresh rate would (this is because the entire screen is strobing on and off rather than a CRT's phosphor sustained dot which continually scans across the display, leaving some part of the display always lit), causing severe eye-strain for some people. Unfortunately, many of these people don't know that their eye-strain is being caused by the invisible strobe effect of PWM. This problem is worse on many LED-backlit monitors, because the LEDs switch on and off faster than a CCFL lamp.\n"
] |
If carbon dioxide was once 20 times as prevalent in the atmosphere as it is now, why should we be concerned? | Well, it's not that increasing carbon dioxide levels will destroy all life. If it spikes too quickly, it may cause a mass extinction of sorts. But life will still persist and evolve into something different.
The threat is that we will turn the earth into an environment where we can't survive. It's a threat to humanity, not the earth. | [
"On 12 November 2015, NASA scientists reported that human-made carbon dioxide (CO) continues to increase above levels not seen in hundreds of thousands of years: currently, about half of the carbon dioxide released from the burning of fossil fuels remains in the atmosphere and is not absorbed by vegetation and the oceans.\n",
"BULLET::::- NASA scientists report that human-made carbon dioxide (CO) continues to increase above levels not seen in hundreds of thousands of years: currently, about half of the carbon dioxide released from the burning of fossil fuels remains in the atmosphere and is not absorbed by vegetation and the oceans.\n",
"BULLET::::- November 12 – NASA scientists report that human-made carbon dioxide (CO) continues to increase above levels not seen in hundreds of thousands of years: currently, about half of the carbon dioxide released from the burning of fossil fuels remains in the atmosphere and is not absorbed by vegetation and the oceans.\n",
"Carbon dioxide is believed to have played an important effect in regulating Earth's temperature throughout its 4.7 billion year history. Early in the Earth's life, scientists have found evidence of liquid water indicating a warm world even though the Sun's output is believed to have only been 70% of what it is today. It has been suggested by scientists that higher carbon dioxide concentrations in the early Earth's atmosphere might help explain this faint young sun paradox. When Earth first formed, Earth's atmosphere may have contained more greenhouse gases and concentrations may have been higher, with estimated partial pressure as large as , because there was no bacterial photosynthesis to reduce the gas to carbon compounds and oxygen. Methane, a very active greenhouse gas which reacts with oxygen to produce and water vapor, may have been more prevalent as well, with a mixing ratio of 10 (100 parts per million by volume).\n",
" After water vapour (concentrations of which humans have limited capacity to influence) carbon dioxide is the most abundant and stable greenhouse gas in the atmosphere (methane rapidly reacts to form water vapour and carbon dioxide). Atmospheric carbon dioxide has increased from about 280 ppm in 1750 to 383 ppm in 2007 and is increasing at an average rate of 2 ppm pr year. The world's oceans have previously played an important role in sequestering atmospheric carbon dioxide through solubility and the action of phytoplankton. This, and the likely adverse consequences for humans and the biosphere of associated global warming, increases the significance of investigating policy mechanisms for encouraging biosequestration.\n",
"On 12 November 2015, NASA scientists reported that human-made carbon dioxide () continues to increase, reaching levels not seen in hundreds of thousands of years: currently, the rate carbon dioxide released by the burning of fossil fuels is about double the net uptake by vegetation and the ocean.\n",
"The International Panel on Climate Change has shown that the natural ecosystems can absorb approximately two tonnes of carbon dioxide per person per year. At the moment the world, through the burning of fossil fuels and the clearing of forests is emitting 6.8 tonnes of carbon dioxide per person per year, with the result that over the last century the concentration of carbon dioxide has gone from 280 parts per million to 380 parts per million, a change equivalent to that seen since the depths of the last Ice Age. While in Europe the average rate of emissions is currently about 10 tonnes per person, and the average in the US is over 20 tonnes, for Australia as a whole produces about 26 tonnes of greenhouse gas emissions per person.\n"
] |
why does the clit exist? | All fetuses actually begin with undifferentiated genitalia; in the absence of androgens (primarily male hormones), development proceeds along the female pathway.
To put it simply, the penis is essentially a clitoris enlarged by fetal exposure to male hormones, and the two organs are homologous: different versions of the same embryonic structure, the genital tubercle.
In that sense, without the clitoris there would be no penis.
"The clitoris ( or ) is a female sex organ present in mammals, ostriches and a limited number of other animals. In humans, the visible portion - the glans - is at the front junction of the labia minora (inner lips), above the opening of the urethra. Unlike the penis, the male homologue (equivalent) to the clitoris, it usually does not contain the distal portion (or opening) of the urethra and is therefore not used for urination. The clitoris also usually lacks a reproductive function. While few animals urinate through the clitoris or use it reproductively, the spotted hyena, which has an especially large clitoris, urinates, mates, and gives birth via the organ. Some other mammals, such as lemurs and spider monkeys, also have a large clitoris.\n",
"The clitoris is the human female's most sensitive erogenous zone and generally the primary anatomical source of human female sexual pleasure. In humans and other mammals, it develops from an outgrowth in the embryo called the genital tubercle. Initially undifferentiated, the tubercle develops into either a penis or a clitoris during the development of the reproductive system depending on exposure to androgens (which are primarily male hormones). The clitoris is a complex structure, and its size and sensitivity can vary. The glans (head) of the human clitoris is roughly the size and shape of a pea, and is estimated to have about 8,000 sensory nerve endings.\n",
"Clitoromegaly (or macroclitoris) is an abnormal enlargement of the clitoris that is mostly congenital or acquired, though deliberately induced clitoris enlargement as a form of female genital body modification is achieved through various uses of anabolic steroids, including testosterone, and may also be referred to as \"clitoromegaly.\" Clitoromegaly is not the same as normal enlargement of the clitoris seen during sexual arousal.\n",
"The clitoris develops from a phallic outgrowth in the embryo called the genital tubercle. Initially undifferentiated, the tubercle develops into either a clitoris or penis during the development of the reproductive system depending on exposure to androgens (which are primarily male hormones). The clitoris forms from the same tissues that become the glans and shaft of the penis, and this shared embryonic origin makes these two organs homologous (different versions of the same structure).\n",
"The clitoris is homologous to the penis; that is, they both develop from the same embryonic structure. While researchers such as Geoffrey Miller, Helen Fisher, Meredith Small and Sarah Blaffer Hrdy \"have viewed the clitoral orgasm as a legitimate adaptation in its own right, with major implications for female sexual behavior and sexual evolution,\" others, such as Donald Symons and Stephen Jay Gould, have asserted that the clitoris is vestigial or nonadaptive, and that the female orgasm serves no particular evolutionary function. However, Gould acknowledged that \"most female orgasms emanate from a clitoral, rather than vaginal (or some other), site\" and stated that his nonadaptive belief \"has been widely misunderstood as a denial of either the adaptive value of female orgasm in general, or even as a claim that female orgasms lack significance in some broader sense\". He explained that although he accepts that \"clitoral orgasm plays a pleasurable and central role in female sexuality and its joys,\" \"[a]ll these favorable attributes, however, emerge just as clearly and just as easily, whether the clitoral site of orgasm arose as a spandrel or an adaptation\". He said that the \"male biologists who fretted over [the adaptionist questions] simply assumed that a deeply vaginal site, nearer the region of fertilization, would offer greater selective benefit\" due to their Darwinian, \"summum bonum\" beliefs about enhanced reproductive success.\n",
"Puppo's belief that there is no anatomical relationship between the vagina and clitoris is contrasted by the general belief among researchers that vaginal orgasms are the result of clitoral stimulation; they maintain that clitoral tissue extends, or is at least likely stimulated by the clitoral bulbs, even in the area most commonly reported to be the G-spot. \"My view is that the G-spot is really just the extension of the clitoris on the inside of the vagina, analogous to the base of the male penis,\" said researcher Amichai Kilchevsky. Because female fetal development is the \"default\" direction of fetal development in the absence of substantial exposure to male hormones and therefore the penis is essentially a clitoris enlarged by such hormones, Kilchevsky believes that there is no evolutionary reason why females would have two separate structures capable of producing orgasms and blames the porn industry and \"G-spot promoters\" for \"encouraging the myth\" of a distinct G-spot.\n",
"The G-spot having an anatomical relationship with the clitoris has been challenged by Vincenzo Puppo, who, while agreeing that the clitoris is the center of female sexual pleasure, disagrees with Helen O'Connell and other researchers' terminological and anatomical descriptions of the clitoris. He stated, \"Clitoral bulbs is an incorrect term from an embryological and anatomical viewpoint, in fact the bulbs do not develop from the phallus, and they do not belong to the clitoris.\" He says that \"clitoral bulbs\" \"is not a term used in human anatomy\" and that \"vestibular bulbs\" is the correct term, adding that gynecologists and sexual experts should inform the public with facts instead of hypotheses or personal opinions. \"[C]litoral/vaginal/uterine orgasm, G/A/C/U spot orgasm, and female ejaculation, are terms that should not be used by sexologists, women, and mass media,\" he said, further commenting that the \"anterior vaginal wall is separated from the posterior urethral wall by the urethrovaginal septum (its thickness is 10–12 mm)\" and that the \"inner clitoris\" does not exist. \"The female perineal urethra, which is located in front of the anterior vaginal wall, is about one centimeter in length and the G-spot is located in the pelvic wall of the urethra, 2–3 cm into the vagina,\" Puppo stated. He believes that the penis cannot come in contact with the congregation of multiple nerves/veins situated until the angle of the clitoris, detailed by Georg Ludwig Kobelt, or with the roots of the clitoris, which do not have sensory receptors or erogenous sensitivity, during vaginal intercourse. He did, however, dismiss the orgasmic definition of the G-spot that emerged after Ernst Gräfenberg, stating that \"there is no anatomical evidence of the vaginal orgasm which was invented by Freud in 1905, without any scientific basis\".\n"
] |
whenever my phone is near my computer, every so often it will pick up some really weird frequencies and play them through my headphones. | Your headphones are speakers, and speakers sometimes pick up on nearby radio transmissions.
Your cell phone has a transmitter which sends radio waves to a cell tower. | [
"Some indications of possible cellphone surveillance occurring may include a mobile phone waking up unexpectedly, using a lot of the CPU when on idle or when not in use, hearing clicking or beeping sounds when conversations are occurring and the circuit board of the phone being warm despite the phone not being used.\n",
"In most studies, a majority of cell phone users report experiencing occasional phantom vibrations or ringing, with reported rates ranging from 27.4% to 89%. Once every two weeks is a typical frequency for the sensations, though a minority experience them daily. Most people are not seriously bothered by the sensations.\n",
"Often an irregular noise is heard in the telephone receivers before the frequency jumps to the next lower value. However this is a subsidiary phenomenon, the main effect being the regular frequency demultiplication.\n",
"Nokia and the University of Cambridge demonstrated a bendable cell phone called the Morph. Some phones have an electromechanical transducer on the back which changes the electrical voice signal into mechanical vibrations. The vibrations flow through the cheek bones or forehead allowing the user to hear the conversation. This is useful in the noisy situations or if the user is hard of hearing.\n",
"Some users of the iPhone 4S reported the random appearance of echoes during phone calls made with earphones in the initial release of iOS 5. The other party in the call was sometimes unable to hear the conversation due to this problem.\n",
"Most of the \"Hearables\" seen to date are Bluetooth devices that use phones or PCs as the central computing unit. Vinci smart headphones, announced in 2016, incorporated a dual-core CPU, local storage, WiFi and 3G connectivity that allow users to use without a phone.\n",
"This was once more important because outside broadcasts were carried over 'music circuits' that used telephone lines, with clicks from Strowger and other electromechanical telephone exchanges. It now finds fresh relevance in the measurement of noise on computer 'Audio Cards' which commonly suffer clicks as drives start and stop.\n"
] |
why are 'news' networks such as hln and cnn allowed to air such biased opinions on such a large scale? | I'm from Europe, and if you think that the European media are free of bias, you're very much mistaken. Quite simply, there is no such thing as a medium that is not biased, because all human beings are, by nature, biased.
The trouble is that as soon as you start making laws about what news media may or may not report, you introduce the possibility of government censorship, and that is never a good thing. You *can* make laws preventing people from spreading deliberate lies, but you *can't* make laws forcing people to be neutral -- because who decides what's "neutral" and what isn't?
It may be more noticeable in the US than in most European countries, but that's all it is -- more noticeable. But compare, for example, the BBC, ITN and Channel 4 News in the UK. The differences are subtle, but they're there; look to see, for example, whether they say "government cutbacks" or "government savings".
So we have lots of different news organisations, each with their own set of biases. You should get your news from multiple sources, and then make up your own mind. Watch CNN and also watch Fox News, then apply your own common sense and experiences and come to your conclusion. | [
"Some commentators have attacked CNN for the debate, calling it biased and poorly handled. Their accusations include claims that the final audience question was planted, that moderator Wolf Blitzer was overly favorable to Hillary Clinton, and that the use of James Carville, a long-time adviser to the Clintons, as a debate commentator was biased. \n",
"An important aspect of media bias is framing. A frame is the arrangement of a news story, with the goal of influencing audience to favor one side or the other. The ways in which stories are framed can greatly undermine the standards of reporting such as fairness and balance. Many media outlets are known for their outright bias. Some outlets, such as MSNBC, and CNN are known for their liberal views, while others, such as Breitbart and Fox News Channel, are known for their conservative views. How biased media frame stories can change audience reactions. Filter bubbles are an extent to framing. Filter bubbles are what companies such as Facebook and Google use to filter out the content that user might not agree with or find disturbing.\n",
"At times, the allegations of bias have led to back and forth conflicts between Fox News commentators and political and media figures. For example, in 2009 the Fox News Channel engaged in a verbal conflict with the Obama administration while ABC News, NBC News, MSNBC, CBS News, HLN and CNN focuses on positive reporting on the presidency of Barack Obama, and with that, Disney-owned FX (at the time, it was owned by News Corporation), WarnerMedia-owned HBO (at the time, HBO's parent company was called Time Warner), Viacom-owned MTV and Comcast-owned G4 are all focusing on video games, music and entertainment programing..\n",
"Studies reporting perceptions of bias in the media are not limited to studies of print media. A joint study by the Joan Shorenstein Center on Press, Politics and Public Policy at Harvard University and the Project for Excellence in Journalism found that people see media bias in television news media such as CNN. Although both CNN and Fox were perceived in the study as not being centrist, CNN was perceived as being more liberal than Fox. Moreover, the study's findings concerning CNN's perceived bias are echoed in other studies. There is also a growing economics literature on mass media bias, both on the theoretical and the empirical side. On the theoretical side the focus is on understanding to what extent the political positioning of mass media outlets is mainly driven by demand or supply factors. This literature is surveyed by Andrea Prat of Columbia University and David Stromberg of Stockholm University.\n",
"In a survey released by the Pew Research Center in April 2007, viewers who watch both \"The Colbert Report\" and \"The Daily Show\" tend to be more knowledgeable about news than audiences of other news sources. Approximately 54% of \"The Colbert Report\" and \"The Daily Show\" viewers scored in the high knowledge range, followed by Jim Lehrer's program at 53% and Bill O'Reilly's program at 51%, significantly higher than the 34% of network morning show viewers. The survey shows that changing news formats have not made much difference on how much the public knows about national and international affairs, but adds that there is no clear connection between news formats and what audiences know. The Project for Excellence in Journalism released a content analysis report suggesting that \"The Daily Show\" comes close to providing the complete daily news.\n",
"As some of the most highly available channels, FNC, CNN, and MSNBC are sometimes referred to as the \"big three\" with Fox News having the highest viewership and ratings. While the networks are usually referred to as 24-hour news networks, reruns of news programs and analysis or opinion programming are played throughout the night, with the exception of breaking news.\n",
"Due to the channel's tradition of airing rolling news coverage, HLN had become popular with people who may not have time to watch lengthy news reports, and as a fast source of news for public locations like airports, bars, and many other places. Supermarkets that carried the discontinued CNN Checkout Channel service were offered a feed of Headline News to broadcast on its televisions.\n"
] |
why don't rovers on other planets or satellites in space ever take true video? | Probably because videos take a relatively large amount of storage space compared to photos, and they would take very long periods of time to transmit. Since there's no real added value to a video versus a picture when nothing is moving, there's not a great reason to do it other than "because we can".
"Because we can" isn't a sufficient argument when arguing for the millions of dollars in technology required to perform such a function. | [
"There are also rockets that record short digital videos. There are two widely used ones used on the market, both produced by Estes: the Astrovision and the Oracle. The Astrocam shoots 4 (advertised as 16, and shown when playing the video, but in real life 4) seconds of video, and can also take three consecutive digital still images in flight, with a higher resolution than the video. It takes from size B6-3 to C6-3 Engines. The Oracle is a more costly alternative, but is able to capture all or most of its flight and recovery. In general, it is used with \"D\" motors. The Oracle has been on the market longer than the Astrovision, and has a better general reputation. However, \"keychain cameras\" are also widely available and can be used on almost any rocket without significantly increasing drag.\n",
"Mercury and Venus are believed to have no satellites chiefly because any hypothetical satellite would have suffered deceleration long ago and crashed into the planets due to the very slow rotation speeds of both planets; in addition, Venus also has retrograde rotation.\n",
"Unlike the other on-board instruments, the operation of the cameras for visible light is not autonomous, but rather it is controlled by an imaging parameter table contained in one of the on-board digital computers, the Flight Data Subsystem (FDS). More recent space probes, since about 1990, usually have completely autonomous cameras.\n",
"In order to solve these problems, new missions to other planets plan to use a process similar to stereoscopy in order to get a more accurate depiction of the surface on another planet. The Mars Reconnaissance Orbiter is one of the mission that attempts to do this. This process uses two images of one location taken from two separate lens on a camera, much in the same way humans do with their eyes. By using two images, they can get a 3-dimensional perspective of objects on the surface like we do.\n",
"The \"Curiosity\" rover's hazcams are sensitive to visible light and return black and white images of resolution 1024 × 1024 pixels. These images are used by the rovers' internal computer to autonomously navigate around hazards. Due to their positioning on both sides of the rovers, simultaneous images taken by either both front or both rear cameras can be used to produce a 3D map of the immediate surroundings. As the cameras are fixed (i.e. can not move independently of the rover), they have a wide field of view (approximately 120° both horizontally and vertically) to allow a large amount of terrain to be visible.\n",
"Unlike the other onboard instruments, the operation of the cameras for visible light is not autonomous, but rather it is controlled by an imaging parameter table contained in one of the on-board digital computers, the Flight Data Subsystem (FDS). Since the 1990s, most space probes have had completely autonomous cameras.\n",
"Prior to the landing, NASA and Microsoft released \"Mars Rover Landing\", a free downloadable game on Xbox Live that uses Kinect to capture body motions, which allows users to simulate the landing sequence.\n"
] |
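The storage-versus-transmission tradeoff in the rover answer above can be put in rough numbers. All figures below (frame size, frame rate, downlink bit rate) are illustrative assumptions for a back-of-envelope sketch, not actual mission parameters:

```python
# Back-of-envelope: one photo vs. one minute of raw video.
# Every constant here is an assumption chosen for illustration.

FRAME_BYTES = 1024 * 1024          # one uncompressed 1024x1024 8-bit image ~ 1 MB
FPS = 24                           # assumed video frame rate
SECONDS = 60                       # one minute of video
DOWNLINK_BPS = 2_000_000           # assumed 2 Mbit/s relay downlink

def transmit_seconds(n_bytes, bits_per_second=DOWNLINK_BPS):
    """Time to send n_bytes over the assumed downlink."""
    return n_bytes * 8 / bits_per_second

photo_time = transmit_seconds(FRAME_BYTES)
video_bytes = FRAME_BYTES * FPS * SECONDS
video_time = transmit_seconds(video_bytes)

print(f"one photo:         {photo_time:.1f} s to downlink")
print(f"60 s of raw video: {video_time / 3600:.1f} h to downlink")
```

Under these assumptions a single frame takes a few seconds to send, while one minute of uncompressed video takes well over an hour - which is why a still of a motionless scene is usually the better use of the link.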
How did the people of Imperial Japan view their German allies in WW2 and vice versa? | I can't give a full answer, but Imperial Japan was very strict on censorship. I'm pretty sure Mein Kampf was banned. Foreign influence, especially Western influence, was considered a bad thing. And while Nazi Germany was at war with the USSR, Japan maintained a non-aggression pact with the Soviets. This hurt Germany because it allowed the USSR to move troops off the Manchukuo border to the very important Eastern Front against the Nazis. Moreover, the Allies had dominance over the oceans in the latter parts of the war, making supply exchange difficult if not impossible (sometimes submarines were used to move diplomats, although I'm not sure how often). There were also exchanges of information on weapons and military supplies, although these were limited. But for the most part it was just a written alliance and not much else (to my knowledge).
Source: BA in history.
Edit: spelling | [
"American media portrayed the Japanese negatively as well. While attacks on Germans were generally focused on high-level Nazi officials such as Hitler, Himmler, Goebbels, and Göring, the Japanese were targeted more broadly. Portrayals of the Japanese ranged from showing them being vicious and feral, as on the cover of Marvel Comic's Mystery Comics no. 32, to mocking their physical appearance and speech patterns. In the Loony Tune's cartoon \"Tokio Jokio\" (aired May 13, 1943), the Japanese people are all shown to be dim-witted, obsessed with being polite, cowardly, and physically short with buckteeth, big lips, squinty eyes, and glasses. The entire cartoon is also narrated in broken English, with the letter \"R\" often replacing \"L\" in pronunciation of words, a common stereotype. Japanese slurs were commonly used, such as \"Jap,\" \"monkey face,\" and \"slanty eyes.\" These stereotypes are also seen in Theodor Geisel's comics created during the Second World War.\n",
"The relations between Germany and Japan (, ) were officially established in 1861 with the first ambassadorial visit to Japan from Prussia (which predated the formation of the German Empire in 1866/1870). Japan modernized rapidly after the Meiji Restoration of 1867, often using German models through intense intellectual and cultural exchange. After 1900 Japan aligned itself with Britain, and Germany and Japan were enemies in World War I. Japan declared war on the German Empire in 1914 and seized key German possessions in China and the Pacific.\n",
"But there was another western nation which did value the Japanese - Nazi Germany. Indeed, Nazi Germany and Imperial Japan wanted to form an alliance. A formal treaty of alliance was signed between Germany, Japan and Italy on 27 September 1940. Japan used the moment to move into northern Indo-China. This had been a French colony but the Germans had just overrun France so for the Japanese it was ripe for the picking. Japan wanted to create \"a greater East Asia co-prosperity sphere\". The slogan was \"Asia for the Asians\" - in essence the locals were swapping one colonial master for another. In Washington the American government, nervous about Japanese colonial intentions, announced that fuel sales to Japan would be suspended if Japan did not reconsider her aggressive actions. With no fuel resources of its own Japan believed it could now either give up its imperial ambitions, or fight the Americans. They attacked Pearl Harbor and, moments after, attacked Hong Kong.\n",
"On August 23, 1914, the Empire of Japan declared war on Germany, in part due to the Anglo-Japanese Alliance, and Japan became a member of the Entente powers. The Imperial Japanese Navy made a considerable contribution to the Allied war effort; however, the Imperial Japanese Army was more sympathetic to Germany, and aside from the seizure of Tsingtao, resisted attempts to become involved in combat. The overthrow of Tsar Nicholas II and the establishment of a Bolshevik government in Russia led to a separate peace with Germany and the collapse of the Eastern Front. The spread of the anti-monarchial Bolshevik revolution eastward was of great concern to the Japanese government. Vladivostok, facing the Sea of Japan was a major port, with a massive stockpile of military stores, and a large foreign merchant community.\n",
"Although less powerful than Germany, Imperial Japan is a nuclear power keeping the Reich at bay with the implicit threat of mutually assured destruction. Moreover, Japan has its own subordinate rulers (only the Emperor of Manchukuo is mentioned) by the Greater East Asia Co-Prosperity Sphere. Despite having \"an ocean of slave labor\" at its disposal, Japan now concentrates upon developing high technologies. Despite the Germano–Nipponese alliance, the Nazis consider the Japanese racially inferior and lacking in creativity, with propaganda pointing to a perceived decrease in Japan's technological advances as proof of this. Even so, Japanese tourists, students, and restaurants are a common sight within the Reich.\n",
"In Japan, during the Meiji period (1868–1912), many Germans came to work in Japan as advisors to the new government. Despite Japan’s isolationism and geographic distance, there have been a few , since Germany's and Japan's fairly parallel modernization made Germans ideal \"O-yatoi gaikokujin\". (See also Germany–Japan relations)\n",
"As such, some historians consider that this point could be listed among the many causes of conflict and which led to Japanese actions later on. They argue that the rejection of the racial equality clause proved to be an important factor in turning Japan away from cooperation with the West and toward nationalistic policies. In 1923, the Anglo-Japanese Alliance expired, which gradually resulted in a closer relationship of Japan to Germany and Italy. However, Prussian militarism was already entrenched in the Imperial Japanese Army, many members of the Army had expected Germany to win the war, and Germany had approached Japan for a separate peace in 1916. The rapprochement towards Germany did not occur until the mid-1930s, a time when Germany had greater ties with Nationalist China.\n"
] |
Did the Allies have a plan B in case the invasion of Normandy failed? | Not to discourage other answers, but you might find these previous posts to be helpful:
* [What was the back up plan if d-day failed?](_URL_1_) feat. /u/KroipyBill
* [Was there a "Plan B" if D Day failed? Or was it simply going "All in"?](_URL_0_) feat. /u/Rittermeister
They're both older answers, but the general consensus seems to be that no, there was no real backup plan in place. Hopefully someone can come into the comments and expand on this a bit. Hope that helps! | [
"The Allies staged elaborate deceptions for D-Day (see Operation Fortitude), giving the impression that the landings would be at Calais. Although Hitler himself expected a Normandy invasion for a while, Rommel and most Army commanders in France believed there would be two invasions, with the main invasion coming at the Pas-de-Calais. Rommel drove defensive preparations all along the coast of Northern France, particularly concentrating fortification building in the River Somme estuary. By D-Day on 6 June 1944 nearly all the German staff officers, including Hitler's staff, believed that Pas-de-Calais was going to be the main invasion site, and continued to believe so even after the landings in Normandy had occurred.\n",
"The Operation Neptune commander, General Dwight D. Eisenhower, initially planned to begin the Normandy landings on 5 June, due to the coincidence of a full moon and low tide. However, after extensive debate, Hogben and his colleagues convinced Eisenhower to launch the invasion on 6 June instead, to avoid storm conditions that could potentially have crippled the Allied fleet. After the success of the D-Day landings, Hogben received the Bronze Star Medal for his meteorological advice.\n",
"Having succeeded in opening up an offensive front in southern Europe, gaining valuable experience in amphibious assaults and inland fighting, Allied planners returned to the plans to invade Northern France. Now scheduled for 5 June 1944, the beaches of Normandy were selected as landing sites, with a zone of operations extending from the Cotentin Peninsula to Caen. Operation Overlord called for the British Second Army to assault between the River Orne and Port en Bessin, capture the German-occupied city of Caen and form a front line from Caumont-l'Éventé to the south-east of Caen, in order to acquire airfields and protect the left flank of the United States First Army while it captured Cherbourg. Possession of Caen and its surroundings would give Second Army a suitable staging area for a push south to capture the city of Falaise, which could then be used as a pivot for an advance on Argentan, the Touques River and then towards the Seine River. Overlord would constitute the largest amphibious operation in military history. After delays, due to both logistical difficulties and poor weather, the D-Day of Overlord was moved to 6 June 1944. Eisenhower and Bernard Montgomery, commander of 21st Army Group, aimed to capture Caen within the first day, and liberate Paris within 90 days.\n",
"The Allied story for FUSAG was that the army group, based in south-east England, would invade the Pas-de-Calais region several weeks after a smaller diversionary landing in Normandy. In reality, the main invasion force would land in Normandy on D-Day. As D-Day approached, the LCS moved on to planning tactical deceptions to help cover the progress of the real invasion forces. As well as naval operations, the LCS also planned operations involving paratroopers and ground deceptions. The latter would come into effect once landings were made but the former (involving naval, air and special forces units) were used to cover the approach of the true invasion fleet.\n",
"Victory in Normandy stemmed from several factors. German preparations along the Atlantic Wall were only partially finished; shortly before D-Day Rommel reported that construction was only 18 per cent complete in some areas as resources were diverted elsewhere. The deceptions undertaken in Operation Fortitude were successful, leaving the Germans obliged to defend a huge stretch of coastline. The Allies achieved and maintained air superiority, which meant that the Germans were unable to make observations of the preparations underway in Britain and were unable to interfere via bomber attacks. Transport infrastructure in France was severely disrupted by Allied bombers and the French Resistance, making it difficult for the Germans to bring up reinforcements and supplies. Much of the opening artillery barrage was off-target or not concentrated enough to have any impact, but the specialised armour worked well except on Omaha, providing close artillery support for the troops as they disembarked onto the beaches. The indecisiveness and overly complicated command structure of the German high command was also a factor in the Allied success.\n",
"The Allied victory in Normandy stemmed from several factors. German preparations along the Atlantic Wall were only partially finished; shortly before D-Day Rommel reported that construction was only 18 per cent complete in some areas as resources were diverted elsewhere. The deceptions undertaken in Operation Fortitude were successful, leaving the Germans obliged to defend a huge stretch of coastline. The Allies achieved and maintained air supremacy, which meant that the Germans were unable to make observations of the preparations underway in Britain and were unable to interfere via bomber attacks. Infrastructure for transport in France was severely disrupted by Allied bombers and the French Resistance, making it difficult for the Germans to bring up reinforcements and supplies. Some of the opening bombardment was off-target or not concentrated enough to have any impact, but the specialised armour worked well except on Omaha, providing close artillery support for the troops as they disembarked onto the beaches. Indecisiveness and an overly complicated command structure on the part of the German high command were also factors in the Allied success.\n",
"The next major Allied operation came on September 17. Devised by British General Bernard Montgomery, its primary objective was the capture of several bridges in the Netherlands. Fresh off of their successes in Normandy, the Allies were optimistic that an attack on the Nazi-occupied Netherlands would force open a route across the Rhine and onto the North German Plain. Such an opening would allow Allied forces to break out northward and advance toward Denmark and, ultimately, Berlin.\n"
] |
Is there a limit to the amount of potential/kinetic energy an object can contain? | > I've always heard that as an object approaches the speed of light, the energy required to further approach that limit increases asymptotically to infinity.
This is correct. There is no upper bound.
The kinetic energy of a particle with mass m and speed v is:
K = [1/sqrt(1 - (v/c)^(2)) - 1]mc^(2).
I've plotted it [here](_URL_0_) in units where c = 1 and m = 1. | [
"Equivalently, it may be thought of as the energy stored in the electric field. For instance, if one were to hold two like charges a certain distance away from one another and then release them, the charges would move away with kinetic energy equal to the energy stored in the configuration. As an analogy, if one were to lift up a mass to a certain height in a gravitational field, the work it took to do so is equal to the energy stored in that configuration, and the kinetic energy of the mass upon contact with the ground would be equal to the energy of the configuration beforehand.\n",
"In order to get an estimate of the critical speed, we use the fact that the condition for which this kinematic solution is valid corresponds to the case where there is no net energy exchange with the surroundings, so by considering the kinetic and potential energy of the system, we should be able to derive the critical speed.\n",
"The speed, and thus the kinetic energy of a single object is frame-dependent (relative): it can take any non-negative value, by choosing a suitable inertial frame of reference. For example, a bullet passing an observer has kinetic energy in the reference frame of this observer. The same bullet is stationary to an observer moving with the same velocity as the bullet, and so has zero kinetic energy. By contrast, the total kinetic energy of a system of objects cannot be reduced to zero by a suitable choice of the inertial reference frame, unless all the objects have the same velocity. In any other case, the total kinetic energy has a non-zero minimum, as no inertial reference frame can be chosen in which all the objects are stationary. This minimum kinetic energy contributes to the system's invariant mass, which is independent of the reference frame.\n",
"The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the components of an object – and potential energy reflects the potential of an object to have motion, and generally is a function of the position of an object within a field or may be stored in the field itself.\n",
"The electrostatic potential energy of a system containing only one point charge is zero, as there are no other sources of electrostatic potential against which an external agent must do work in moving the point charge from infinity to its final location.\n",
"The energy is −29.6MJ/kg: the potential energy is −59.2MJ/kg, and the kinetic energy 29.6MJ/kg. Compare with the potential energy at the surface, which is −62.6MJ/kg. The extra potential energy is 3.4MJ/kg, the total extra energy is 33.0MJ/kg. The average speed is 7.7km/s, the net delta-v to reach this orbit is 8.1km/s (the actual delta-v is typically 1.5–2.0km/s more for atmospheric drag and gravity drag).\n",
"An object can have potential energy by virtue of its electric charge and several forces related to their presence. There are two main types of this kind of potential energy: electrostatic potential energy, electrodynamic potential energy (also sometimes called magnetic potential energy).\n"
] |
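The relativistic kinetic-energy formula in the answer above, K = [1/sqrt(1 - (v/c)^(2)) - 1]mc^(2), can be evaluated numerically to see the divergence as v approaches c. A quick sketch in units where c = 1 and m = 1:

```python
import math

def kinetic_energy(v, m=1.0, c=1.0):
    """Relativistic kinetic energy K = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c ** 2

# As v -> c, K grows without bound:
for v in [0.5, 0.9, 0.99, 0.999]:
    print(f"v = {v}c  ->  K = {kinetic_energy(v):.2f} mc^2")
```

Each step closer to c costs far more energy than the last, which is the asymptotic behavior the answer describes: there is no finite upper bound on kinetic energy.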
How do I choose what era to study in Grad School? | A great deal of the process of doing history - or "doing" any of the humanities - is the combination of distinct experiences and backgrounds in ways that yield unique perspective. You might find fertile ground by considering the nineteenth-century British fixation on everything medieval - architecture, art, etc. This might serve to combine things that attract you, but it may also allow you to wield your experience in ways that yield insights that others might find of interest.
Just a thought. Best of luck to you. | [
"Every year there are four major projects that each student will participate in. The first is the Faire, which is a school-wide reenactment of an historical age. These periods include the Renaissance, the Age of Enlightenment, the Victorian era, the Middle Ages, and recently the 20th century. During this time, each student is responsible to research one character from this era, and portrays that person as closely as possible to historical fact. In this way, students not only learn in-depth information about important historical figures, but they also teach anyone whom they meet on the Faire days. Upon completion of the Faire, students are required to write a research paper on their character, in order to cement their knowledge of that time period, and to teach basic writing skills. During this time freshman students are taught how to build work citation pages, as all grades are expected to cite all sources, and quotes.\n",
"The school's philosophy includes \"stage not age\", i.e. pupils are able to study subjects and take exams when they are ready for them rather than at a set age. As an Academy, the school day was extended from ending at 3 p.m. to 3.40. This enabled them to condense Key Stage 3 from 3 years study to 2, allowing up to 3 years for GCSE studies.\n",
"Year 11 and 12 students must choose from: Agriculture, Ancient history, Biology, Business studies, Chemistry, Chinese (Beginners or Continuers), Community & Family Studies, Drama, Economics, Engineering studies, English (Studies, Standard, Advanced or Extension), Food Technology, Geography, Information Processes & Technology, Industrial Technology (Furniture & Timber Products Industries or Graphics Industries), Investigating Science, Legal Studies, Mathematics (Standard, Advanced or Extension), Modern History, Music 1, Personal Development Health PE, Physics,\n",
"In the early years of the school’s history, the course of studies for high school students included four years of religion, four years of English, three years of social studies, two to four years of mathematics, three or four years of science and two years of Latin. In the 1930s, bookkeeping and typing were introduced. In 1952, shorthand and secretarial training were added. The new building also made the introduction of home economics and industrial arts possible.\n",
"Pupils in years 7 and 8 also study modern languages, art, music, drama, food technology, graphic design, history and geography. These are offered as options in years 9, 10 and 11, along with public services, health and social care, and travel and tourism.\n",
"Students are educated in subjects such as mathematics, history, geography, Hindi, English writing/reading, physics, chemistry, business, accounting, biology, etc. Older students attend workshops in writing, public speaking, and debate. Students are able to engage in sports such as volleyball, basketball, and soccer. Finally, students learn the importance of volunteerism, feminism, leadership, and personal/emotional development. \n",
"From year 8, students are able to select a certain number of their subjects based on their personal interests and aspirations. Examples of subject choices include Commerce, Photography, Drama, Multi-media and Film, Conspiracies in History, Law and You, Travel and Tourism, Architectural Design, Japanese Language, French Language, Timber Technology, Engineering, Electronics, Aboriginal Studies, Drama, Life and the Universe, Metal Technology, Child Studies, Marine and Aquaculture Technology, and European Language and Culture. \n"
] |
what does it mean to "rewire" your brain? is it possible? if so, how? | Wires are a common analogy for neurons, the main cells of your brain (and the rest of your nervous system.) It's a fair comparison: neurons are long and thin, rapidly conduct electricity to carry signals, and even have a coating of insulation.
Neurons communicate with other neurons at points called synapses, where they almost but don't quite touch. Instead, one neuron sends chemicals called neurotransmitters to the next neuron; different chemicals make the next neuron more or less likely to keep the signal going.
Whereas most body cells divide to make more of themselves, new neurons arise almost exclusively from neural stem cells. By adulthood, your brain has (almost) all the cells it will ever have. As a result, you can't rely solely on making new neurons to make new memories, learn new things, change your emotions, and such.
Synapses, on the other hand, can be made or eliminated (and strengthened or weakened) throughout your entire life. This process of changing the connections between brain cells is what "rewiring" broadly describes. In that sense, rewiring happens anytime you learn anything, make a new memory, or even use an old memory; it happens every minute of the day. People commonly use "rewiring" to mean something more like "changing my patterns of thinking or behavior in the long term," which is obviously a little harder than making a new memory, but still very much possible. We'd have given up on cognitive-behavioral therapy a long time ago if it weren't.
Big picture: Given that you have maybe one hundred billion neurons, each with thousands of synapses, the total number of connections is likely in the hundreds of trillions. This helps give an idea of how the brain, which resembles a three-pound blob of fatty Jello, can be responsible for everything you've ever felt, learned, or experienced. | [
"Another thing that concerns wiring and rewiring is the neurological phenomenon of synesthesia. There are over 60 known types of synesthesia. An example would be a synesthesiat hearing a sound and colors flooding their vision or someone reading a passage of words and each letter evokes a different color or a different smell or a different taste. Certain sounds cause the person to feel certain things in turn, or see spots of color dance across their vision. Neurologists agree that this occurs of faulty wiring in the brain, that \"neuron circuits that process a sense accidentally strum circuits of another sense, causing both to go off simultaneously\". It is debated how this happens. As a child, one possesses more neurons than necessary and the unnecessary ones are pruned away and die when not used. Some believe that synesthesia is a result of poor pruning.\n",
"Understanding how the brain can re-wire itself has allowed Merzenich, Tallal, and other colleagues to develop strategies intended to remediate individuals with any speech, language, and reading deficits. Through research in experience dependent learning with non-human primates, neurophysiologists including Merzenich have demonstrated that neuroplasticity remains through adulthood.\n",
"Reentry is a neural structuring of the brain, specifically in humans, which is characterized by the ongoing bidirectional exchange of signals along reciprocal axonal fibers linking two or more brain areas. It is hypothesized to allow for widely distributed groups of neurons to achieve integrated and synchronized firing, which is proposed to be a requirement for consciousness, as outlined by Gerald Edelman and Giulio Tononi in their book \"A Universe of Consciousness\".\n",
"Since the root of the problem is neurological, doctors have explored sensorimotor retraining activities to enable the brain to \"rewire\" itself and eliminate dystonic movements. The work of several doctors such as Nancy Byl and Joaquin Farias has shown that sensorimotor retraining activities and proprioceptive stimulation can induce neuroplasticity, making it possible for patients to recover substantial function that was lost due to Cervical Dystonia, oromandibular dystonia and dysphonia.\n",
"At the beginning of the connectome project, it was thought that the connections between neurons were unchangeable once established and that only individual synapses could be altered. However, recent evidence suggests that connectivity is also subject to change, termed neuroplasticity. There are two ways that the brain can rewire: formation and removal of synapses in an established connection or formation or removal of entire connections between neurons. Both mechanisms of rewiring are useful for learning completely novel tasks that may require entirely new connections between regions of the brain. However, the ability of the brain to gain or lose entire connections poses an issue for mapping a universal species connectome.\n",
"Since the root of the problem is neurological, doctors have explored sensorimotor retraining activities to enable the brain to \"rewire\" itself and eliminate dystonic movements. The work of several doctors such as Nancy Byl and Joaquin Farias has shown that sensorimotor retraining activities and proprioceptive stimulation can induce neuroplasticity, making it possible for patients to recover substantial function that was lost due to Cervical Dystonia, hand dystonia, blepharospasm, oromandibular dystonia, dysphonia and musicians' dystonia.\n",
"Articulatory suppression is the process of inhibiting memory performance by speaking while being presented with an item to remember. Most research demonstrates articulatory suppression by requiring an individual to repeatedly say an irrelevant speech sound out loud while being presented with a list of words to recall shortly after. The individual experiences four stages when repeating the irrelevant sound: the intention to speak, programming the speech, articulating the sound or word, and receiving auditory feedback.\n"
] |
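The "hundreds of trillions" figure in the rewiring answer above is easy to sanity-check. This is a back-of-the-envelope sketch using assumed round numbers (~1e11 neurons, a few thousand synapses each) - neither is a measured value:

```python
# Rough check of the total-connection estimate quoted in the answer.
# Both inputs are assumed round figures, not measurements.
neurons = 100_000_000_000     # ~one hundred billion neurons
synapses_per_neuron = 5_000   # "thousands" per neuron; 5,000 picked as a midpoint

total = neurons * synapses_per_neuron
print(f"{total:,}")  # 500,000,000,000,000 -> hundreds of trillions
```

Even with conservative per-neuron counts, the total lands in the hundreds of trillions, which is the scale the answer gestures at.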
Are the recent Iranian and Chinese earthquakes related given both of their positions along the Indian Plate? | Ahh it's 2:35 AM but I love this topic so I'm going to give it a shot.
Short answer: YES we think something is going on.
I like the maps you linked, but I would like you to look at [this world map](_URL_1_) and this [map of the major tectonic plates](_URL_3_), bear in mind they are not aligned (i.e. North America is in the centre of one map, and the right hand side of the other)
Do you see the Indian and Australian plates? We don't usually differentiate them, but there is a theory that the Indo-Australian Plate has been slowly ripping apart for millions of years. [This map](_URL_5_) shows the plate itself a little better. Basically, the southern half is being pushed into the relatively soft Pacific Plate, while the northern half is ramming into hard continental rock and driving the Himalayan orogeny (mountain building). The southern end is therefore moving faster than the northern half, creating a shear zone in the centre, approximately 500 km off the western coast of Sumatra.
There have been lots of articles talking about why we think this, so I'm going to let them do the talking [ABS Science](_URL_4_), [Voice of America News (never heard of the source but article checks out)](_URL_0_), and one of the latest and most definitive [paper in Nature](_URL_2_).
This shearing is causing earthquakes around all the plate boundaries, but it is not an unexpected or cataclysmic geological event. Geologists know what's going on and have a fairly good idea of why it's happening. There isn't anything we can do to stop it, but there are things we can do to minimize the risks associated with it. | [
"The Indian subcontinent has a history of earthquakes. The reason for the intensity and high frequency of earthquakes is the Indian plate driving into Asia at a rate of approximately 47 mm/year. The following is a list of major earthquakes which have occurred in India, including those with epicentres outside India that caused significant damage or casualties in the country.\n",
"The October 28 and 29 earthquakes occurred in the Sulaiman fold-and-thrust belt, a region where geologically young (Tertiary) sedimentary rocks have been folded and squeezed by forces associated with the Indian-Eurasian collision. The earthquakes are located approximately 80 km east of the 650-km-long Chaman fault, which is a major left-lateral strike-slip fault that accommodates a significant amount of the slip across the plate boundary. The occurrence of the earthquakes suggests that other left-lateral strike-slip faults are present beneath the fold-and-thrust belt and that they accommodate some of the relative motion of the Indian and Eurasian plates.\n",
"The Philippine Fault System from which almost all of the recorded strong earthquakes in the Philippines emanated, traverses eastern Uson. As a result, strong seismic activity in the form of frequent earthquakes can be experienced. Link to PHIVOLCS website\n",
"Because the location does not lie on a plate boundary, there was some debate as to what caused the earthquake. One suggestion is the existence of fault webs. The Indian sub-continent crumples as it pushes against Asia and pressure is released. It is possible that this pressure is released along fault lines. Another argument is that reservoir construction along the Terna was responsible for increasing pressure on fault lines. Killari, where the epicenter of the quake is believed to have been, had a large crater, which remains in place to date.\n",
"Earthquakes in Indian subcontinent occur due to the north-eastward movement of the Indian Plate and its interaction with the neighboring Eurasian Plate in the north. Most of earthquakes occur in the plate boundary regions; however, a few damaging earthquakes have occurred in the plate interior regions as well. A few damaging earthquakes in the plate-boundary regions include the following: 1897 Shillong plateau, 1905 Kangra, 1934 Nepal-Bihar, 1950 Chayu-Upper Assam, 2004 Sumatra-Andaman, 2005 Kashmir and 2015 Gorkha earthquakes. In the plate interior regions, damaging earthquakes occurred in 1993 at Killari, Maharashtra, in 1997 at Jabalpur, Madhya Pradesh, and in 2001 in Kachchh, Gujarat.\n",
"Iran suffers from frequent earthquakes, with minor quakes occurring almost daily. This earthquake occurred as a result of stresses generated by movement of one tectonic plate, the Arabian plate, moving northward against another, the Eurasian plate, at approximately per year. The Earth's crust deforms in response to the plate motion in a broad zone spanning the width of Iran and extending north into Turkmenistan. Earthquakes occur as the result of reverse faulting and strike-slip faulting in the zone of deformation.\n",
"The Indian subcontinent has a history of devastating earthquakes. The major reason for the high frequency and intensity of the earthquakes is that the Indian plate is driving into Asia at a rate of approximately 47 mm/year. Geographical statistics of India show that almost 54% of the land is vulnerable to earthquakes. A World Bank & United Nations report shows estimates that around 200 million city dwellers in India will be exposed to storms and earthquakes by 2050. National Disaster Management Authority says that 60% of Indian landmass is prone to earthquake and 8% susceptible to cyclone risks.\n"
] |
what's a loan shark? | Someone who lends you money, usually at a very high interest rate, often knowing that you won't be able to pay it back. They then hound you for the money outside any legal process, threatening you or seizing your assets to repay the debt. | [
"A loan shark is a person who offers loans at extremely high interest rates, has strict terms of collection upon failure, and operates off the street (outside of local authority). The term usually refers to illegal activity, but may also refer to predatory lending with extremely high interest rates such as payday or title loans.\n",
"The research by the government and other agencies estimates that 165,000 to 200,000 people are indebted to loan sharks in the United Kingdom. Illicit loan sharking is treated as a high-level crime (felony) by law enforcement, due to its links to organized crime and the serious violence involved. Payday loans with high interest rates are legal in many cases, and have been described as \"legal loan sharking\" (in that the creditor is legally registered, pays taxes and contributions, and can reclaim remittance if taking the case to adjudication; likewise there is no threat of harm to the debtor).\n",
"An angry loan shark has a tendency of getting excessively violent with anyone who doesn't have his money. His mob boss disapproves of his actions, warning him to tone things down or else. As expected, things only get worse.\n",
"A business loan is a loan specifically intended for business purposes. As with all loans, it involves the creation of a debt, which will be repaid with added interest. There are a number of different types of business loans, including bank loans, mezzanine financing, asset-based financing, invoice financing, microloans, business cash advances and cash flow loans.\n",
"The making of usurious loans is often called loan sharking. That term is sometimes also applied to the practice of making consumer loans without a license in jurisdictions that requires lenders to be licensed.\n",
"House Shark is a 2017 American horror comedy film written and directed by Ron Bonk. The film stars Trey Harrison as Frank, a former cop who finds that a dangerous shark has invaded his home, and enlists the help of a former real estate agent named Abraham and a \"house shark\" expert named Zachary to confront the threat.\n",
"An unintended consequence of poverty alleviation initiatives can be that loan sharks borrow from formal microfinance lenders and lend on to poor borrowers. Loan sharks sometimes enforce repayment by blackmail or threats of violence. Historically, many \"moneylenders\" skirted between legal and criminal activity. In the recent western world, loan sharks have been a feature of the criminal underworld.\n"
] |
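To see why a loan-shark debt becomes impossible to repay, here is a minimal compound-interest sketch. The 20% weekly rate and $1,000 principal are made-up illustrative numbers, not figures from the answer above:

```python
# Hypothetical loan-shark terms: $1,000 borrowed at 20% interest per week,
# compounding - each week's interest is charged on the new, larger balance.
principal = 1_000.0
weekly_rate = 0.20

debt = principal
for week in range(10):      # ten weeks of missed payments
    debt *= 1 + weekly_rate

print(round(debt, 2))  # ~6191.74: the debt has roughly sextupled in ten weeks
```

After ten weeks the borrower owes about six times the original sum, which is why repayment quickly becomes hopeless and collection turns coercive.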
is there a correlation between c-section babies and mental health later in life? | I don't think so. In the past, c-sections were sometimes performed even when they weren't necessary - it was a money maker. I do believe there are midwives who are trained in breech deliveries, also.
Of course, without c-sections, there would be a higher mortality rate, but I really can't believe it could be linked to mental health...
Now, our society and the many issues we face, that's another story.
| [
"Results from another recent study suggest that fetuses were able to form both short and long term memories. This conclusion was drawn from the fact that habituation rates (number of stimuli needed to habituate) were higher in babies in the neonatal stage that had not previously undergone fetal stimulations when compared to those who had: therefore demonstrating the memory of the stimulus in its fetal stage being carried into the neonatal stage.\n",
"Much research and literature has shown that endocrine, neurological and most other diseases a mother or father carries can have adverse effects on a fetus's development. The majority of the research done regarding fetal brain development, and consequently its memory after birth, has focused on one condition or state and two main diseases: intrauterine hypoxia, hypothyroidism and rubella.\n",
"Despite the neurosensory, mental and educational problems studied in school age and adolescent children born extremely preterm, the majority of preterm survivors born during the early years of neonatal intensive care are found to do well and to live fairly normal lives in young adulthood. Young adults born preterm seem to acknowledge that they have more health problems than their peers, yet feel the same degree of satisfaction with their quality of life.\n",
"There is also evidence that birth complications and other factors around the time of birth (perinatal) can have serious implications on intellectual development. For example, a prolonged period of time without access to oxygen during the delivery can lead to brain damage and mental retardation. Also, low birth weights have been linked to lower intelligence scores later in lives of the children. There are two reasons for low birth weight, either premature delivery or the infant's size is just lower than average for its gestational age; both contribute to intellectual deficits later in life. A meta analysis of low birth weight babies found that there is a significant relationship between low birth weight and impaired cognitive abilities; however, the relationship is small, and they concluded that, although it may not be relevant at an individual level, it may instead be relevant at a population level. Other studies have also found that the correlations are relatively small unless the weight is extremely low (less than 1,500 g) – in which case the effects on intellectual development are more severe and often result in mental retardation.\n",
"The Mental and Social Life of Babies is a 1982 book by Kenneth Kaye. Integrating a contemporary burgeoning field of research on infant cognitive and social development in the first two years of life with his own laboratory's studies at the University of Chicago, Kaye offered an \"apprenticeship\" theory. Seen as an empirical turning point in the investigation of processes in early human development, the book's reviews welcomed its reliance on close (second by second) process studies of a large sample of infants and mothers (50) recorded longitudinally (birth to 30 months). It was republished in England, Japan, Spain, Italy, and Argentina.\n",
"The study aims to understand how biological and environmental factors interact with a baby's early life experiences and the outcomes this has later in life. The study will cover five main research themes:\n",
"Medical information on the mothers and babies were gathered throughout the study and follow-up of their progress continued until the child reached at least two and a half years of age. Two outcomes were considered. The first, at 12 months, was death or need for a ventricular shunt. The second, measured at 30 months, was a composite score of standardized tests for mental and motor development.\n"
] |
regarding death valley, why is being 86 m below sea level so much more extreme than being 86 m above it? | The elevation itself is only part of the story. It's mainly the giant mountain ranges to the west, which block almost all the rainfall, combined with its location at a fairly southern latitude in an area with few clouds and lots of sunshine, that make it a hot, baked desert. Much of Death Valley sits at 86 m and higher, and it's just as hot there as it is on the valley floor.
The high mountains create what is called a 'rain shadow' effect - moisture falls on the mountains as air masses are forced up over them. The Sierra Nevada rises to over 14,000 feet, and further ranges such as the Argus and Panamint Ranges across the intervening valleys wring out even more moisture. By the time the clouds get over Death Valley, almost all the rain has been squeezed out of them. | [
"Death Valley's Badwater Basin is the point of the lowest elevation in North America, at below sea level. This point is east-southeast of Mount Whitney, the highest point in the contiguous United States, with an elevation of 14,505 feet (4,421 m). On the afternoon of July 10, 1913, the United States Weather Bureau recorded a high temperature of 134 °F (56.7 °C) at Furnace Creek in Death Valley. This temperature stands as the highest ambient air temperature ever recorded at the surface of the Earth.\n",
"BULLET::::- The Death Valley in the United States, behind both the Pacific Coast Ranges of California and the Sierra Nevada range, is the driest place in North America and one of the driest places on the planet. This is also due to its location well below sea level which tends to cause high pressure and dry conditions to dominate due to the greater weight of the atmosphere above.\n",
"At below sea level at its lowest point, Badwater Basin on Death Valley's floor is the second-lowest depression in the Western Hemisphere (behind Laguna del Carbón in Argentina), while Mount Whitney, only to the west, rises to . This topographic relief is the greatest elevation gradient in the contiguous United States and is the terminus point of the Great Basin's southwestern drainage. Although the extreme lack of water in the Great Basin makes this distinction of little current practical use, it does mean that in wetter times the lake that once filled Death Valley (Lake Manly) was the last stop for water flowing in the region, meaning the water there was saturated in dissolved materials. Thus the salt pans in Death Valley are among the largest in the world and are rich in minerals, such as borax and various salts and hydrates. The largest salt pan in the park extends from the Ashford Mill Site to the Salt Creek Hills, covering some of the valley floor. The best known playa in the park is the Racetrack, known for its moving rocks.\n",
"The mean annual temperature of Death Valley is about , due in part to its relatively low elevation; July temperatures exceed on average. Based on plant data, summer temperatures at Lake Manly during the Pleistocene were about 6–8 °C (11–14 °F) lower than present day; \"Yucca whipplei\" was found at altitudes too cold for its development, suggesting that middle altitudes winters were milder 12,000–10,000 years ago. Winter water temperatures may have dropped below however, occasionally falling below with a maxima of during the latest lake stage. The \"Blackwelder\" stage had higher maximum temperatures. Maximum temperatures were depressed by 4–15 °C (7–27 °F) during summers in the last highstand; Blackwelder highstand temperatures reached , however.\n",
"The elevation of the land surface of the Earth varies from the low point of −418 m (−1,371 ft) at the Dead Sea, to a 2005-estimated maximum altitude of 8,848 m (29,028 ft) at the top of Mount Everest. The mean height of land above sea level is 686 m (2,250 ft).\n",
"Death Valley is extremely dry because it sits in the rain shadow of four major mountain ranges (including the Sierra Nevada and Panamint Range). Moisture moving inland from the Pacific Ocean must pass eastward over multiple mountains to reach Death Valley; as air masses are forced upwards by each range, the air cools and moisture condenses to fall as rain or snow on the western slopes. When the air masses ultimately reach Death Valley, most of the moisture has already been \"squeezed out\" and there is little left to fall as precipitation.\n",
"The highest range within the park is the Panamint Range with Telescope Peak being its highest point at . The Death Valley region is a transitional zone in the northernmost part of the Mojave Desert and consists of five mountain ranges removed from the Pacific Ocean. Three of these are significant barriers: the Sierra Nevada, the Argus Range, and the Panamint Range. Air masses tend to lose moisture as they are forced up over mountain ranges, in what climatologists call a rainshadow effect.\n"
] |
why do i feel something touch me just before it actually does? | It's actually called chronostasis, and it involves your brain filling in the time before the event, thus making the event seem to happen after you perceive it. There is a lag between when something happens and when you perceive it, but your brain tells you that things are happening simultaneously.
EDIT: Link for more info
_URL_0_
| [
"\"Feel Something\" is a futurepop song written by Miller, Justin Tranter, Kennedi Lykken, and Mike Sabath and produced by Mike Sabath. Miller talked about the song: \"When you're experiencing pain of any kind, all you want is for it to go away. But weirdly that pain is kind of what makes you feel like a real person, so when nothing is going wrong but it's also not going right and you’re just in the middle, you feel empty. And that's almost worse. That's what I wrote this song about.\"\n",
"Can You Feel Anything When I Do This? is a collection of science fiction short stories by American writer Robert Sheckley, published in December 1971 by Doubleday. It was also published by Pan Books under title The Same To You Doubled.\n",
"\"That Feeling, You Can Only Say What It Is in French” is a horror short story by American writer Stephen King. It was originally published in the June 22, 1998 issue of \"The New Yorker\" magazine. In 2002, it was collected in King's collection \"Everything's Eventual\". It focuses on a married woman in a car ride on vacation constantly repeating the same events over and over, each event ending with the same gruesome outcome.\n",
"BULLET::::- \"Feeling...\" — A term most commonly used by youths to call someone who one thinks is trying to act or be something they're not. Usually preceded by a noun or adjective, for example \"feeling close\" (or \"F.C.\"), someone who acts like they're close to another when the other person hardly knows them or doesn't know them at all.\n",
"In \"Déjà pensé\", a completely new thought sounds familiar to the person and he feels as he has thought the same thing before at some time.This feeling can be caused by seizures which occur in certain parts of the temporal lobe and possibly other areas of the brain as well.\n",
"The sense of touch, or tactile perception, is what allows organisms to feel the world around them. The environment acts as an external stimulus, and tactile perception is the act of passively exploring the world to simply sense it. To make sense of the stimuli, an organism will undergo active exploration, or haptic perception, by moving their hands or other areas with environment-skin contact. This will give a sense of what is being perceived, and give information about size, shape, weight, temperature, and material. Tactile stimulation can be direct in the form of bodily contact, or indirect through the use of a tool or probe. Direct and indirect send different types messages to the brain, but both provide information regarding roughness, hardness, stickiness, and warmth. The use of a probe elicits a response based on the vibrations in the instrument rather than direct environmental information. Tactual perception gives information regarding cutaneous stimuli (pressure, vibration, and temperature), kinaesthetic stimuli (limb movement), and proprioceptive stimuli (position of the body). There are varying degrees of tactual sensitivity and thresholds, both between individuals and between different time periods in an individual's life. It has been observed that individuals have differing levels of tactile sensitivity between each hand. This may be due to callouses forming on the skin of the most used hand, creating a buffer between the stimulus and the receptor. Alternately, the difference in sensitivity may be due to a difference in the cerebral functions or ability of the left and right hemisphere. Tests have also shown that deaf children have a greater degree of tactile sensitivity than that of children with normal hearing ability, and that girls generally have a greater degree of sensitivity than that of boys.\n",
"When touched upon the soles of the feet, for example, it feels in addition to the common sensation of touch a sensation on which we have imposed a special name, \"tickling.\" This sensation belongs to us and not to the hand... A piece of paper or a feather drawn lightly over any part of our bodies performs intrinsically the same operations of moving and touching, but by touching the eye, the nose, or the upper lip it excites in us an almost intolerable titillation, even though elsewhere it is scarcely felt. This titillation belongs entirely to us and not to the feather; if the live and sensitive body were removed it would remain no more than a mere word.\n"
] |
why/how do different alcohols (beer, wine, spirits) cause different hangover effects? | The more distilled something is, the less the hangover hurts.
The main culprits beyond the ethanol itself are congeners - by-products of fermentation and aging such as methanol, tannins, and fusel oils - which are far more concentrated in darker drinks (bourbon, brandy, red wine) than in clear, heavily distilled spirits like vodka, and studies consistently find that high-congener drinks produce worse hangovers.
Basically, the closer you get to pure alcohol, the less you will hurt the next day. | [
"Several studies have examined whether certain types of alcohol cause worse hangovers. All four studies concluded that darker liquors, which have higher congeners, produced worse hangovers. One even showed that hangovers were worse \"and\" more frequent with darker liquors. In a 2006 study, an average of 14 standard drinks (330 ml each) of beer was needed to produce a hangover, but only 7 to 8 drinks was required for wine or liquor (note that one standard drink has the same amount of alcohol regardless of type). Another study ranked several drinks by their ability to cause a hangover as follows (from low to high): distilled ethanol diluted with fruit juice, beer, vodka, gin, white wine, whisky, rum, red wine and brandy.\n",
"Several factors which do not in themselves cause alcohol hangover are known to influence its severity. These factors include personality, genetics, health status, age, sex, associated activities during drinking such as smoking, the use of other drugs, physical activity such as dancing, as well as sleep quality and duration.\n",
"Wine, beer, distilled spirits and other alcoholic drinks contain ethyl alcohol and alcohol consumption has short-term psychological and physiological effects on the user. Different concentrations of alcohol in the human body have different effects on a person. The effects of alcohol depend on the amount an individual has drunk, the percentage of alcohol in the wine, beer or spirits and the timespan that the consumption took place, the amount of food eaten and whether an individual has taken other prescription, over-the-counter or street drugs, among other factors. Alcohol in carbonated drinks is absorbed faster than alcohol in non-carbonated drinks.\n",
"The short-term effects of alcohol (also known formally as ethanol) consumption – due to drinking beer, wine, distilled spirits or other alcoholic beverages – range from a decrease in anxiety and motor skills and euphoria at lower doses to intoxication (drunkenness), stupor, unconsciousness, anterograde amnesia (memory \"blackouts\"), and central nervous system depression at higher doses. Cell membranes are highly permeable to alcohol, so once alcohol is in the bloodstream, it can diffuse into nearly every cell in the body.\n",
"In addition to ethanol and water, most alcoholic drinks also contain congeners, either as flavoring or as a by-product of fermentation and the wine aging process. While ethanol is by itself sufficient to produce most hangover effects, congeners may potentially aggravate hangover and other residual effects to some extent. Congeners include substances such as amines, amides, acetones, acetaldehydes, polyphenols, methanol, histamines, fusel oil, esters, furfural, and tannins, many but not all of which are toxic. One study in mice indicates that fusel oil may have a mitigating effect on hangover symptoms, while some whiskey congeners such as butanol protect the stomach against gastric mucosal damage in the rat. Different types of alcoholic beverages contain different amounts of congeners. In general, dark liquors have a higher concentration while clear liquors have a lower concentration. Whereas vodka has virtually no more congeners than pure ethanol, bourbon has a total congener content 37 times higher than that found in vodka.\n",
"Wine contains ethyl alcohol, the same chemical that is present in beer and distilled spirits and as such, wine consumption has short-term psychological and physiological effects on the user. Different concentrations of alcohol in the human body have different effects on a person. The effects of alcohol depend on the amount an individual has drunk, the percentage of alcohol in the wine and the timespan that the consumption took place, the amount of food eaten and whether an individual has taken other prescription, over-the-counter or street drugs, among other factors. Drinking enough to cause a blood alcohol concentration (BAC) of 0.03%-0.12% typically causes an overall improvement in mood and possible euphoria, increased self-confidence and sociability, decreased anxiety, a flushed, red appearance in the face and impaired judgment and fine muscle coordination. A BAC of 0.09% to 0.25% causes lethargy, sedation, balance problems and blurred vision. A BAC from 0.18% to 0.30% causes profound confusion, impaired speech (e.g. slurred speech), staggering, dizziness and vomiting. A BAC from 0.25% to 0.40% causes stupor, unconsciousness, anterograde amnesia, vomiting, and death may occur due to inhalation of vomit (pulmonary aspiration) while unconscious and respiratory depression (potentially life-threatening). A BAC from 0.35% to 0.80% causes a coma (unconsciousness), life-threatening respiratory depression and possibly fatal alcohol poisoning. As with all alcoholic drinks, drinking while driving, operating an aircraft or heavy machinery increases the risk of an accident; many countries have penalties against drunk driving.\n",
"Distilled spirits contain ethyl alcohol, the same chemical that is present in beer and wine and as such, spirit consumption has short-term psychological and physiological effects on the user. Different concentrations of alcohol in the human body have different effects on a person. The effects of alcohol depend on the amount an individual has drunk, the percentage of alcohol in the spirits and the timespan that the consumption took place, the amount of food eaten and whether an individual has taken other prescription, over-the-counter or street drugs, among other factors. Drinking enough to cause a blood alcohol concentration (BAC) of 0.03%-0.12% typically causes an overall improvement in mood and possible euphoria, increased self-confidence, and sociability, decreased anxiety, a flushed, red appearance in the face and impaired judgment and fine muscle coordination. A BAC of 0.09% to 0.25% causes lethargy, sedation, balance problems and blurred vision. A BAC from 0.18% to 0.30% causes profound confusion, impaired speech (e.g., slurred speech), staggering, dizziness and vomiting. A BAC from 0.25% to 0.40% causes stupor, unconsciousness, anterograde amnesia, vomiting, and respiratory depression (potentially life-threatening). Death may occur due to inhalation of vomit (pulmonary aspiration) while unconscious. A BAC from 0.35% to 0.80% causes a coma (unconsciousness), life-threatening respiratory depression and possibly fatal alcohol poisoning. As with all alcoholic beverages, driving under the influence, operating an aircraft or heavy machinery increases the risk of an accident; as such many countries have penalties for drunk driving.\n"
] |
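The BAC bands quoted in the context passages above (0.03%-0.12% for mood improvement, and so on) can be connected to drink counts with the classic Widmark formula. A minimal sketch, assuming the US 14 g standard drink and textbook Widmark ratios and elimination rate (illustrative approximations, not figures from the source; real values vary widely between individuals):

```python
def estimate_bac(standard_drinks, weight_kg, hours, male=True):
    """Rough Widmark estimate of blood alcohol concentration (% by mass)."""
    alcohol_g = standard_drinks * 14.0                   # US standard drink: ~14 g ethanol
    r = 0.68 if male else 0.55                           # Widmark body-water ratio (textbook values)
    raw_bac = alcohol_g / (weight_kg * 1000 * r) * 100   # percent of body mass as alcohol
    eliminated = 0.015 * hours                           # ~0.015 %/hour metabolized on average
    return max(0.0, raw_bac - eliminated)

# Four drinks over two hours for an 80 kg man lands in the quoted
# 0.03%-0.12% "mood improvement / impaired judgment" band.
bac = estimate_bac(4, 80, 2)
```

This is only an order-of-magnitude sanity check on the quoted thresholds, not a tool for judging fitness to drive.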
what is the science behind using a knife to cut things? | Blades take a small amount of force and concentrate it on a small area, which causes a large amount of pressure to break the surface. Then, the blade acts like a wedge, directing downward force outwards to drive the two halves apart. | [
"Implements commonly used for cutting are the knife and saw, or in medicine and science the scalpel and microtome. However, any sufficiently sharp object is capable of cutting if it has a hardness sufficiently larger than the object being cut, and if it is applied with sufficient force. Even liquids can be used to cut things when applied with sufficient force (see water jet cutter).\n",
"Knife cuttings are fashioned by putting several layers of paper on a relatively soft foundation consisting of a mixture of tallow and ashes. Following a pattern, the artist cuts the motif into the paper with a sharp knife which is usually held vertically. Skilled crafters can even cut out different drawings freely without stopping.\n",
"Knife is the cutting die for envelope or wrapper blanks. It is called a \"knife\" rather than a \"die\" because the latter is an object that makes an embossed printed impression of the stamp or indicium on the envelope. Traditionally, a knife would normally be made of forged steel. It was placed on a stack of paper with the sharp edge against the paper. The press head forced the cutting edge all the way through the stack of paper. The cut blanks were removed from the knife and the process repeated. Not only could it cut out the odd shape of an envelope, but a knife could be used to cut out shapes of airmail stickers or gummed labels in the shape of stars or circles. The variety of shapes a knife could cut would be infinite.\n",
"Knife sharpening is the process of making a knife or similar tool sharp by grinding against a hard, rough surface, typically a stone, or a flexible surface with hard particles, such as sandpaper. Additionally, a leather razor strop, or strop, is often used to straighten and polish an edge. See simple sharpening tutorial.\n",
"An electric carving knife, commonly known as electric knife, is an electrical kitchen device used for slicing foods. The device consists of two serrated blades that are clipped together. When the appliance is switched on, the blades continuously move lengthways to provide the sawing action. They were popular in the United Kingdom in 1970s.\n",
"The basic method involves repeatedly striking the spine of the knife to force the middle of the blade into the wood. The tip is then struck, to continue forcing the blade deeper, until a split is achieved.\n",
"Knife making is the process of manufacturing a knife by any one or a combination of processes: stock removal, forging to shape, welded lamination or investment cast. Typical metals used come from the carbon steel, tool, or stainless steel families. Primitive knives have been made from bronze, copper, brass, iron, obsidian, and flint.\n"
] |
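The pressure-concentration point in the knife answer above is easy to put in numbers: pressure is just force over contact area, so narrowing the edge multiplies the pressure. A hypothetical comparison (the edge widths and force are illustrative guesses, not measured values):

```python
def contact_pressure(force_n, edge_width_m, edge_length_m):
    """Pressure (Pa) when a force is spread over a rectangular contact patch."""
    return force_n / (edge_width_m * edge_length_m)

# The same 20 N push along a 10 cm contact line:
sharp = contact_pressure(20, 0.5e-6, 0.10)  # ~0.5 micron edge: a sharp knife
blunt = contact_pressure(20, 1e-3, 0.10)    # ~1 mm edge: butter-knife blunt
ratio = sharp / blunt                       # the sharp edge concentrates ~2000x more pressure
```

Once the edge exceeds the material's yield strength and breaks the surface, the wedge-shaped cross-section takes over, converting downward force into the sideways force that drives the halves apart.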
why in 2014 is the ocean still such a mystery. we overcame obstacles to space travel 50+ years ago but can't figure out water. | We actually can do some pretty cool things in water. We get oil from miles below a surface that is miles below the waves, we explore at tremendous depths, and we lay cables that stretch the length of the oceans.
It's true that there's still a lot left to do, and we certainly could do a lot more. But the reason we seem to be behind compared to space has less to do with pressure than with light (or electromagnetic waves more generally).
The reason we know so much about space is that we can see really far, and what we see contains a lot of information in the form of light spectrums, positions, speeds, etc... Also, most of the things we look at are really big, and stand out clearly from the background.
Water, on the other hand, blocks all of that. It scatters light, scatters heat, and makes info gathering a much more personal and in your face endeavour. | [
"The underwater world is still mostly unknown. The main reason for it is the difficulty to gather information in situ, to experiment, and even to reach certain places. But the ocean nonetheless is of a crucial importance for scientists, as it covers about 71% of the planet.\n",
"As space activity becomes increasingly integrated into every aspect of life here on earth, the SEA intends to show how this new focus on exploration will provide myriad advances in science and technology, untold economic opportunity and serve as an inspiration to our nation's youth. Given those benefits and the many more that lie in store, this new program of human space exploration beyond low earth orbit is a vital link to the future of the United States and the world.\n",
"The sea around the islands wouldn't receive much attention until the 1990s, because diving was not yet regarded as a true scientific tool in Brazil. It was in the archipelago that such reality began to change.\n",
"The exploration of space will go ahead, whether we join it or not, and it is one of the greatest adventures of all time ... We set sail on this new sea because there is new knowledge to be gained and new rights to be won, and they must be won and used \"for all people\" ... We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard ...\n",
"Half the Earth’s surface is covered by oceans over 3,000 metres deep, making them the biggest ecosystem on the planet. Nevertheless, due to the limitations of the technology available until just recently, the oceans remain something of a mystery. Indeed, it is often said that we know more about the Moon or Mars than Earth’s oceans.\n",
"With increasing ocean exploration over the last two decades has come the realisation that humans have had an extensive impact on the world’s oceans, not just close to our shores, but also reaching down into the deep sea. From destructive fishing practices and exploitation of mineral resources to pollution and litter, evidence of human impact can be found in virtually all deep-sea ecosystems. In response, the international community has set a series of ambitious goals aimed at protecting the marine environment and its resources for future generations. Three of these initiatives, decided on by world leaders during the 2002 World Summit on Sustainable Development (Johannesburg), are to achieve a significant reduction in biodiversity loss by 2010, to introduce an ecosystems approach to marine resource assessment and management by 2010, and to designate a network of marine protected areas by 2012. A crucial requirement for implementing these is the availability of high-quality scientific data and knowledge, as well as effective science-policy interfaces to ensure the policy relevance of research and to enable the rapid translation of scientific information into science policy.\n",
"For reasons such as lack of mobility, lack of self-sufficiency, shifting focus to space travel and transition to surface-based saturation systems, the interest in underwater habitats decreased, resulting in a noticeable decrease in major projects after 1970. In the mid eighties, the Aquarius habitat was built in the style of Sealab and Helgoland and is still in operation today.\n"
] |
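The "water blocks light" point in the ocean answer above can be made concrete with the Beer-Lambert law: light intensity falls off exponentially with depth. A sketch using an assumed attenuation coefficient of 0.05 per metre for clear ocean water (a rough mid-range guess; real coefficients vary with wavelength and turbidity):

```python
import math

def light_fraction_remaining(depth_m, attenuation_per_m=0.05):
    """Beer-Lambert exponential decay of light intensity with depth."""
    return math.exp(-attenuation_per_m * depth_m)

# Even very clear water extinguishes sunlight within a few hundred metres,
# while light crosses interplanetary space essentially unattenuated.
at_100m = light_fraction_remaining(100)    # under 1% of surface light remains
at_1000m = light_fraction_remaining(1000)  # effectively total darkness
```

This is why telescopes can survey objects light-years away while most of the deep ocean has never been directly observed: past a few hundred metres, every instrument has to bring its own light and get close.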
what is isis? what makes them such a threat? what is their history? why can't a continental superpower such as the united states just wipe them out? | ISIS drops the mic and says "Hey, we're setting up a Muslim-only state. Ya know all that stuff in the Qur'an? **We're doing it**. Right here, right now. Come act on your religious devotion and help us set up." Remember that scene in LOTR where they light the beacons? It's like that.
Muslims everywhere are like "oh shit really!? Finally? This only happens like once every 10,000 years. We like the sound of that. The promised Golden Age can only come from the creation of an Islamic State". Turns out though, ISIS is brutal and unforgiving in their tactics yet they are still influencing groups of Muslims all around the world. The Muslim community is now polarized between people who think ISIS is batshit and the people who think ISIS is doing a good thing.
I'd assume your professor is looking at their influence rather than their physical threat. Right now they are small, but they could grow exponentially. ISIS flags are popping up everywhere (Greece, Belgium, the continent of Europe in its entirety, hell, there was an ISIS flag posted near the White House), and even though another group claimed responsibility for the France attacks, one of the shooters still claimed on video that he supported ISIS.
It's especially volatile now that the West is looking at their expansion and all the horrible shit that comes with it (crucifixions, beheading kids, suicide bombings, mass murder, slavery) and saying they will stop it. ISIS and ISIS-supporting Muslims hear it as "That Islamic State? **We're gonna stop it from happening**." Meanwhile other Muslims are like "Wait what!? You can't do that! We just started this thing. Golden Age and what not!" and the ISIS Muslims are saying "SEE! SEE they are trying to stop us! Help!"
I think...?
| [
"ISIS was founded on a belief that scientists have an obligation to participate actively in solving major problems of national and international security. ISIS focuses primarily on four parts: 1) prevent the spread of nuclear weapons and related technology to other nations and terrorists, 2) lead to greater transparency of nuclear activities worldwide, 3) reinforce the international non-proliferation regime, and 4) cut down nuclear arsenals. Furthermore, ISIS seeks to build stable foundations for various efforts to reduce the threat posed by nuclear weapons to U.S. and international security by integrating technical, scientific and policy research. As the effectiveness of ISIS was appreciated and recognized in the Global “Go-To Think Tanks” rankings, ISIS consistently places in the top 25 Science and Technology Think Tanks in the world and in 2015 placed as one of the top United States and foreign policy think tanks in the world.\n",
"A 2014 interview with Fuller quotes him as follows: \"I think the United States is one of the key creators of [ISIS]. The United States did not plan the formation of ISIS, but its destructive interventions in the Middle East and the war in Iraq were the basic causes of the birth of ISIS.\n",
"From January 2014 onwards, the rise of ISIS (or as it is also known, ISIL), a major belligerent in the Syrian Civil War, has transformed the insurgency into a regional war that includes Syria, Iran and a large coalition of Western and Islamic forces led by the United States.\n",
"ISIS is believed to actively use malicious programs on computers and mobile devices such as cell phones in order to create social media floods, creating the perception that ISIS has more support and is bigger than it actually is around the world.\n",
"By 2014, ISIL was increasingly being viewed as a militia in addition to a terrorist group and a cult. As major Iraqi cities fell to ISIL in June 2014, Jessica Lewis, a former US Army intelligence officer at the Institute for the Study of War, described ISIL at that time as\n",
"ISIL's claims to territory have brought it into armed conflict with many governments, militias and other armed groups. International rejection of ISIL as a terrorist entity and rejection of its claim to even exist have placed it in conflict with countries around the world.\n",
"In light of the increased power of ISIL, Ebadi communicated in April 2015 that she believes the Western world should spend money funding education and an end to corruption rather than fighting with guns and bombs. She reasons that because the Islamic State stems from an ideology based on a \"wrong interpretation of Islam,\" physical force will not end ISIS because it will not end its beliefs.\n"
] |
What are the roots of anti-intellectualism in the United States? What is its history? | Awesome question. The classic answer can be found in Richard Hofstadter's 1963 book, *Anti-Intellectualism in American Life*. Hofstadter, who went on to win the 1964 Pulitzer Prize for non-fiction for the book, wrote:
> "Anti-intellectualism . . . is founded in the democratic institutions and the egalitarian sentiments of this country."
For Hofstadter, who traces anti-intellectualism to broadsheets levied against some of the first American presidential candidates, the roots go to the classic American debate between who governs best. Is it the mob, the vast majority of Americans who have little interest or knowledge in a topic, or is it a smaller and traditionally less representative group of people who have more experience and education on a topic?
As Woodrow Wilson said in 1912: "What I fear is a government of experts. God forbid that in a democratic country we should resign the task and give the government over to experts. What are we for if we are to be scientifically taken care of by a small number of gentlemen who are the only men who understand the job?"
Hofstadter is still quoted frequently on this topic, but there are a lot of things he missed discussing, as Nicholas Lemann points out [in a wonderful 50th anniversary retrospective review](_URL_0_).
Hofstadter (and plenty of people today) think of anti-intellectualism as solely the domain of the political right. But Hofstadter missed people like Donald Kagan, Robert Bork, Jeane Kirkpatrick and Allan Bloom, who wrote *The Closing of the American Mind.*
He also tended to describe business as anti-intellectual, when we know that today, business is one of the most intellectual-friendly branches of American society. America today hosts designers and inventors, innovators and trend-makers, rather than industrialists and manufacturers as it did in 1963, when Hofstadter was writing.
He also missed the rise of the Civil Rights movement for women and minorities in the United States.
That's getting a little off track, however. The bottom line is that there is (and has been) a constant push-pull between appeals to the "mob" and the "elite" in American society.
In *Empire of Liberty: A History of the Early Republic,* Gordon Wood contends that the first 30 years of the United States resulted in a switch from the desires of the nation's founders, who were the elite of the nation, to the will of the middling people, those involved in commerce and enterprise.
The founders of the United States had envisioned a Congress and President who were already wealthy and thus immune from corruption. The thought went that they would be self-sacrificing and put aside their businesses to serve the national good for a period, then return to their own interests afterward.
Joyce Appleby and Wood contend that the middle classes, who enriched themselves through industry and enterprise, developed a belief that the self-made man was the ideal politician, not someone who had been born wealthy, was educated, and thus theoretically could be trusted to make a decision without being swayed by public opinion.
And so we have a push and pull, dating back to the roots of the United States.
Hofstadter also makes the case that evangelical Protestantism in the United States, particularly in the South, strongly contributed to anti-intellectualism in the latter half of the 19th century and the 20th century. Mark Noll's *The Scandal of the Evangelical Mind* is a more in-depth analysis of this aspect.
Noll has done some excellent work on American religious history (I highly recommend his *The Civil War as a Theological Crisis*) and he explains that a lot of the American evangelical anti-intellectualism can be traced back to the development of a "literalist" interpretation of the Bible as a response to the anti-slavery movement of the 19th century. Before the 19th century, and particularly before the French Revolution, churches and religious organizations tended to be pro-science if they were anything.
In the United States, this began to change as the arguments about slavery intensified. As Noll points out, the Civil War caused many churches to split into southern and northern branches, based upon their beliefs about slavery. Northern churches tended to favor an interpretation-based view of the Bible, while Southern churches stuck with a much more literal reading. For example, since the Bible refers to slavery and the proper treatment of slaves, it must be appropriate to have slavery in the United States, they argued.
This literalist philosophy was later applied to things as varied as racial segregation, abortion, and global warming. Because of its reliance upon scripture as the absolute (literally Gospel) truth, anything that took a different viewpoint was seen in a dim light. | [
"Anti-intellectualism in American Life is a book by Richard Hofstadter published in 1963 that won the 1964 Pulitzer Prize for General Non-Fiction. In this book, Hofstadter set out to trace the social movements that altered the role of intellect in American society. In so doing, he explored questions regarding the purpose of education and whether the democratization of education altered that purpose and reshaped its form. In considering the historic tension between access to education and excellence in education, Hofstadter argued that both anti-intellectualism and utilitarianism were consequences, in part, of the democratization of knowledge. Moreover, he saw these themes as historically embedded in America's national fabric, an outcome of its colonial European and evangelical Protestant heritage. He contended that American Protestantism's anti-intellectual tradition valued the spirit over intellectual rigour. He also noted that Catholicism could have been expected to add a distinctive leaven to the intellectual dialogue, but American Catholicism lacked intellectual culture, due to its failure to develop an intellectual tradition or produce its own strong class of intellectuals.\n",
"In \"Science and Relativism: Some Key Controversies in the Philosophy of Science\" (1990), the epistemologist Larry Laudan said that the prevailing type of philosophy taught at university in the U.S. (Postmodernism and Poststructuralism) is anti-intellectual, because \"the displacement of the idea that facts and evidence matter, by the idea that everything boils down to subjective interests and perspectives is—second only to American political campaigns—the most prominent and pernicious manifestation of anti-intellectualism in our time.\"\n",
"A growing dimension of anti-Americanism is fear of the pervasiveness of U.S. Internet technology. This can be traced from the very first computers which were either British (Colossus) or German (Z1) through to the World Wide Web itself (invented by Englishman Tim Berners-Lee). In all these cases the U.S. has commercialized all these innovations.\n",
"In \"The Quest for Cosmic Justice\" (2001), the economist Thomas Sowell said that anti-intellectualism in the U.S. began in the early Colonial era, as an understandable wariness of the educated upper-classes, because the country mostly was built by people who had fled political and religious persecution by the social system of the educated upper classes. Moreover, there were few intellectuals who possessed the practical hands-on skills required to survive in the New World of North America, which absence from society lead to a deep-rooted, populist suspicion of men and women who specialize in \"verbal virtuosity\", rather than tangible, measurable products and services:\n",
"In U.S. history, the advocacy and acceptability of anti-intellectualism varied, because in the 19th century most people lived a rural life of manual labor and agricultural work, therefore, an academic education in the Greco–Roman classics, was perceived as of impractical value; the bookish man is unprofitable. Yet, in general, Americans were a literate people who read Shakespeare for intellectual pleasure and the Christian Bible for emotional succor; thus, the ideal American Man was a literate and technically-skilled man who was successful in his trade, ergo a productive member of society. Culturally, the ideal American was the self-made man whose knowledge derived from life-experience, not an intellectual man whose knowledge of the real world derived from books, formal education, and academic study; thus, the justified anti-intellectualism reported in \"The New Purchase, or Seven and a Half Years in the Far West\" (1843), the Rev. Bayard R. Hall, A.M., said about frontier Indiana:\n",
"In the rural U.S., anti-intellectualism is an essential feature of the religious culture of Christian fundamentalism. Some Protestant churches and the Roman Catholic Church have directly published their collective support for political action to counter climate change, whereas Southern Baptists and Evangelicals have denounced belief in climate change as a sin, and have dismissed scientists as intellectuals attempting to create \"Neo-nature paganism\". People of fundamentalist religious belief tend to report not seeing evidence of global warming.\n",
"Pioneers of American Freedom: Origin of Liberal and Radical Thought in America is a book by the German anarcho-syndicalist Rudolf Rocker about the history of liberal, libertarian, and anarchist thought in the United States.\n"
] |
how in the heck is edward snowden "living" in the russian airport without being seen? | He's not in the standard passenger area. He is in a hotel at the airport and in some back/employee type areas of the airport. | [
"Edward Snowden's asylum in Russia is part of the aftermath from the global surveillance disclosures made by Edward Snowden. On June 23, 2013, Snowden flew from Hong Kong to Moscow's Sheremetyevo Airport. Noting that his U.S. passport had been cancelled, Russian authorities restricted him to the airport terminal. On August 1, after 39 days in the transit section, Snowden left the airport. He was granted temporary asylum in Russia for one year. On August 7, 2014, six days after Snowden's one-year temporary asylum expired, his Russian lawyer announced that Snowden had received a three-year residency permit. It allowed him to travel freely within Russia and to go abroad for up to three months. Snowden was not granted permanent political asylum, which would require a separate process.\n",
"On June 23, 2013, Snowden landed at Moscow's Sheremetyevo Airport. WikiLeaks said he was on a circuitous but safe route to asylum in Ecuador. Snowden had a seat reserved to continue to Cuba but did not board that onward flight, saying in a January 2014 interview that he intended to transit through Russia but was stopped en route. He asserted \"a planeload of reporters documented the seat I was supposed to be in\" when he was ticketed for Havana, but the U.S. cancelled his passport. He said the U.S. wanted him to stay in Moscow so \"they could say, 'He's a Russian spy.'\" Greenwald's account differed on the point of Snowden being already ticketed. According to Greenwald, Snowden's passport was valid when he departed Hong Kong but was revoked during the hours he was in transit to Moscow, preventing him from obtaining a ticket to leave Russia. Greenwald said Snowden was thus forced to stay in Moscow and seek asylum.\n",
"In October 2013, Snowden said that before flying to Moscow, he gave all the classified documents he had obtained to journalists he met in Hong Kong, and kept no copies for himself. In January 2014, he told a German TV interviewer that he gave all of his information to American journalists reporting on American issues. During his first American TV interview, in May 2014, Snowden said he had protected himself from Russian leverage by destroying the material he had been holding before landing in Moscow.\n",
"The Russian newspaper \"Kommersant\" nevertheless reported that Snowden was living at the Russian consulate shortly before his departure from Hong Kong to Moscow. Ben Wizner, a lawyer with the American Civil Liberties Union (ACLU) and legal adviser to Snowden, said in January 2014, \"Every news organization in the world has been trying to confirm that story. They haven't been able to, because it's false.\" Likewise rejecting the \"Kommersant\" story was Anatoly Kucherena, who became Snowden's lawyer in July 2013 when Snowden asked him for help in seeking temporary asylum in Russia. Kucherena said Snowden did not communicate with Russian diplomats while he was in Hong Kong. In early September 2013, however, Russian president Vladimir Putin said that, a few days before boarding a plane to Moscow, Snowden met in Hong Kong with Russian diplomatic representatives.\n",
"On June 23, 2013, Snowden landed at Moscow's Sheremetyevo Airport aboard a commercial Aeroflot flight from Hong Kong. On August 1, after 39 days in the transit section, he left the airport and was granted temporary asylum in Russia for one year. A year later, his temporary asylum having expired, Snowden received a three-year residency permit allowing him to travel freely within Russia and to go abroad for up to three months. He was not granted permanent political asylum. In January 2017, a spokesperson for the Russian foreign ministry wrote on Facebook that Snowden's asylum, which was due to expire in 2017, was extended by \"a couple more years\". Snowden's lawyer Anatoly Kucherena said the extension was valid until 2020.\n",
"In May 2014, NBC's Brian Williams presented the first interview for American television. In June, \"The Washington Post\" reported that during his first year of Russian asylum, Snowden had received \"tens of thousands of dollars in cash awards and appearance fees from privacy organizations and other groups,\" fielded inquiries about book and movie projects, and was considering taking a position with a South African foundation that would support work on security and privacy issues. \"Any moment that he decides that he wants to be a wealthy person,\" said Snowden's attorney Ben Wizner, \"that route is available to him,\" although the U.S. government could attempt to seize such proceeds.\n",
"Snowden met with Barton Gellman of \"The Washington Post\" six months after the disclosure for an exclusive interview spanning 14 hours, his first since being granted temporary asylum. Snowden talked about his life in Russia as \"an indoor cat,\" reflected on his time as an NSA contractor, and discussed at length the revelations of global surveillance and their reverberations. Snowden said, \"In terms of personal satisfaction, the mission's already accomplished ... I already won. As soon as the journalists were able to work, everything that I had been trying to do was validated.\" He commented \"I am not trying to bring down the NSA, I am working to improve the NSA ... I am still working for the NSA right now. They are the only ones who don't realize it.\" On the accusation from former CIA and NSA director Michael Hayden that he had defected, Snowden stated, \"If I defected at all, I defected from the government to the public.\" In 2014, Snowden said that he lives \"a surprisingly open life\" in Russia and that he is recognized when he goes to computer stores.\n"
] |
in the us, why do nurses get paid considerably more than paramedics? | (As someone who is a paramedic and knows more than a few people who made the jump up to RN) Nurses have a lot more anatomy and physiology (A&P) and clinical knowledge than paramedics. Also relevant is the relative youth of EMS as a field. There are still people working who were around when EMT and paramedic training became a standardized thing. Nurses have been doing their thing for a lot longer, so of course it's a more established career path. | [
"In the United Kingdom, there are two sources of supplementary nurses - nurse banks and nursing agencies. The former provides nurses paid on as \"hours as required\" basis and is often contracted to fill planned or unplanned shortfalls in staffing. Agency nurses, on the other hand, are employed through third-party agencies. Recent studies show that it has become common practice in the United Kingdom to use bank and agency nurses to fill vacant shifts in hospitals that cannot be filled by permanent staff. By 2002 to 2003, it was already reported that the National Health Service has spent £628 million on agency nursing. There are sources that cite how nurses employed through agencies tend to enjoy greater rewards and higher pay than those with institutional contracts.\n",
"Nurses are normally well trained before being eligible for working with a hospital, but support workers are a problem. Some hospitals hire skill-less “underground” labors and after giving them some simple training use them as hospital support workers. These workers, mostly originating from rural areas, are poorly paid by those hospitals.\n",
"In 2016, several publications appeared in the media, claiming nurses depend on food banks and payday loans to survive. In October 2016, Western Circle published research, claiming that the sector of NHS Nurses are heavily dependent on payday loan. According to the research, the number of nurses using payday loans has doubled in 3 years, since 2013. This research brought the matter of the low wages nurses received in the UK to the attention of media outlets. The claims were that nurses' salaries were frozen for more than 6 years and in some cases, resulted in financial distress, clearly as wages have not kept pace with the cost of living increases in this time. The lack of pay increases for, particularly nurses within the NHS continues to be an important topic of public discussion in the UK.\n",
"The salary of a paramedic in the US varies. The mean average is $30,000, with the lowest 10% earning under $20,000 and the top 10% earning over $50,000, considerably less than the salaries of paramedics in Canada. Factors such as education and location of the paramedic's practice influence the salary. Paramedic supervisors and managers may make between $60,000- $80,000, depending on location.\n",
"This is because fee-for-service hospitals have a positive contribution margin for almost all elective cases mostly due to a large percentage of OR costs being fixed. For USA hospitals not on a fixed annual budget, contribution margin per OR hour averages one to two thousand USD per OR hour.\n",
"In 1892, Congress passed a law which allowed for a pension of $12 per month for all nurses who had been hired and paid by the Government. However, most volunteer nurses still were not awarded pensions, at least as of 1910.\n"
] |
when you choke drinking water, does the water actually go down into your lungs? if yes, what happens to it next? | to add something with less detail and specifically about drinking water: if it's a small amount (such as a gulp while drinking), your cough reflex will normally kick in and drive that water out. It'll persist as long as there's still enough water irritating down there, which can lead to coughing fits instead of just one or two coughs.
if a lot actually goes down, well, it's pretty much what the other commenter said: you end up with aspiration (which can be of water, any other fluid, or sometimes even stomach contents, which is very dirty and can lead to aspiration pneumonia).
Edit: I'm a nurse :/ | [
"If water enters the airways of a conscious person, the person will try to cough up the water or swallow it, often inhaling more water involuntarily. When water enters the larynx or trachea, both conscious and unconscious persons experience laryngospasm, in which the vocal cords constrict, sealing the airway. This prevents water from entering the lungs. Because of this laryngospasm, in the initial phase of drowning, water generally enters the stomach and very little water enters the lungs. Though laryngospasm prevents water from entering the lungs, it also interferes with breathing. In most persons, the laryngospasm relaxes some time after unconsciousness and water can then enter the lungs causing a \"wet drowning\". However, about 7–10% of people maintain this seal until cardiac arrest. This has been called \"dry drowning\", as no water enters the lungs. In forensic pathology, water in the lungs indicates that the person was still alive at the point of submersion. Absence of water in the lungs may be either a dry drowning or indicates a death before submersion.\n",
"BULLET::::- Fluid obstruction: Fluids, usually vomit, can collect in the pharynx, effectively causing the person to drown. The loss of muscular control which causes the tongue to block the throat can also lead to the stomach contents flowing into the throat, called \"passive regurgitation\". Fluid which collects in the back of the throat can also flow down into the lungs. Another complication can be stomach acid burning the inner lining of the lungs, causing aspiration pneumonia.\n",
"This stage was introduced in many protocols as it was found that many people were too quick to undertake potentially dangerous interventions, such as abdominal thrusts, for items which could have been dislodged without intervention. Also, if the choking is caused by an irritating substance rather than an obstructing one, and if conscious, the patient should be allowed to drink water on their own to try to clear the throat. Since the airway is already closed, there is very little danger of water entering the lungs. Coughing is normal after most of the irritant has cleared, and at this point the patient will probably refuse any additional water for a short time.\n",
"Choking (also known as foreign body airway obstruction) is a life-threatening medical emergency characterized by the blockage of air passage into the lungs secondary to the inhalation or ingestion of food or another object.\n",
"Often the victim has the mouth forced or wedged open, the nose closed with pincers and a funnel or strip of cloth forced down the throat. The victim has to drink all the water (or other liquids such as bile or urine) poured into the funnel to avoid drowning. The stomach fills until near bursting and is sometimes beaten until the victim vomits and the torture begins again.\n",
"BULLET::::- Dehydration from prolonged exposure to hypertonic salt water—or, less frequently, salt water aspiration syndrome where inhaled salt water creates foam in the lungs that restricts breathing—can cause loss of physical control or kill directly without actual drowning. Hypothermia and dehydration also kill directly, without causing drowning, even when the person wears a life vest.\n",
"Generally, in the early stages of drowning a person holds their breath to prevent water from entering their lungs. When this is no longer possible a small amount of water entering the trachea causes a muscular spasm that seals the airway and prevents further passage of water. If the process is not interrupted, loss of consciousness due to hypoxia is followed rapidly by cardiac arrest.\n"
] |
do insects get food poisoning? | One reason insects react so differently to many toxins is that their digestive tract is alkaline, in contrast to the acidic environment of the vertebrate stomach. A different pH may render some toxins less harmful and others more so. A common "food contaminant" for insects is *Bacillus thuringiensis*, a species of bacteria which lives in the soil (and on leaves) and produces a crystalline toxin that can only be dissolved in the alkaline digestive tract of insects. However, it's non-toxic to humans because the crystals pass through our digestive system undissolved. This is why *B. thuringiensis* is sprayed on crops to prevent damage caused by larvae.
TL;DR: Their digestive system makes insects vulnerable to different bacterial toxins. Yes, they can suffer from certain food poisonings. | [
"Many insects are distasteful to predators and excrete irritants or secrete poisonous compounds that cause illness or death when ingested. Secondary metabolites obtained from plant food may also be sequestered by insects and used in the production of their own toxins. One of the more well-known examples of this is the monarch butterfly, which sequesters poison obtained from the miilkweed plant. Among the most successful insect orders employing this strategy are beetles (Coleoptera), grasshoppers (Orthoptera), and moths and butterflies (Lepidoptera). Insects also biosynthesize unique toxins, and while sequestration of toxins from food sources is claimed to be the energetically favorable strategy, this has been contested. Passion-vine associated butterflies in the tribe Heliconiini (sub-family Heliconiinae) either sequester or synthesize \"de novo\" defensive chemicals, but moths in the genus \"Zygaena\" (family Zygaenidae) have evolved the ability to either synthesize or sequester their defensive chemicals through convergence. Some coleopterans sequester secondary metabolites to be used as defensive chemicals but most biosynthesize their own \"de novo\". Anatomical structures have developed to store these substances, and some are circulated in the hemolyph and released associated with a behavior called reflex bleeding.\n",
"It turns out that many species of insects are toxic or distasteful when they have fed on plants that contain chemicals of particular classes, but not when they have fed on plants that lack those chemicals. For instance, some milkweed butterflies feed on milkweeds (\"Asclepias\") which contain the cardiac glycoside oleandrin; this makes them poisonous to most predators. These insects are often aposematically coloured and patterned. When feeding on innocuous plants, they are harmless and nutritious, but a bird that has sampled a toxic specimen even once is unlikely to risk tasting harmless specimens with the same aposematic coloration. Such acquired toxicity is not limited to insects: many groups of animals have since been shown to obtain toxic compounds through their diets, making automimicry potentially widespread. Even if toxic compounds are produced by metabolic processes with an animal, there may still be variability in the amount that animals invest in them, so scope for automimicry remains even when dietary plasticity is not involved. Whatever the mechanism, palatability may vary with age, sex, or how recently they used their supply of toxin.\n",
"Many species of insects are toxic or distasteful when they have fed on certain plants that contain chemicals of particular classes, but not when they have fed on plants that lack those chemicals. For instance, some species of the subfamily Danainae feed on various species of the Asclepiadoideae in the family Apocynaceae, which render them poisonous and emetic to most predators. Such insects frequently are aposematically coloured and patterned. When feeding on innocuous plants however, they are harmless and nutritious, but a bird that once has sampled a toxic specimen is unlikely to eat harmless specimens that have the same aposematic coloration. When regarded as mimicry of toxic members of the same species, this too may be seen as automimicry.\n",
"Since it is impossible to entirely eliminate pest insects from the human food chain, insects are inadvertently present in many foods, especially grains. Food safety laws in many countries do not prohibit insect parts in food, but rather limit their quantity. According to cultural materialist anthropologist Marvin Harris, the eating of insects is taboo in cultures that have other protein sources such as fish or livestock.\n",
"Due to high densities of these insects in Western Europe, some researchers have also proposed their possible utilization as human food. These insects contain 69% proteins on dry weight with excellent amino acid profile and digestibility. Aman Paul and his co-workers indicated that before introducing these insects for human food, it is necessary to do a thorough examination of any possible toxic and/or allergic conditions that could arise from their consumption.\n",
"Stercorarian trypanosomes infect insects, most often the triatomid kissing bug, by developing in the posterior gut followed by release into the feces and subsequent depositing on the skin of the host. The organism then penetrates and can disseminate throughout the body. Insects become infected when taking a blood meal.\n",
"\"Retortamonas\" trophozoites have been found to feed on the intestinal bacteria of a wide variety of vertebrates including mammalian, avian, and amphibian hosts, as well as invertebrates, such as insects. Recent evidence however, suggests that species infecting insects are in fact \"Chilomastix.\"\n"
] |
how did japan successfully land two rovers on an asteroid | Math, lots and lots of complex math.
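For a flavor of that math, here's a minimal sketch of the simplest textbook case: an idealized two-impulse Hohmann transfer between circular, coplanar orbits. The 1.19 AU figure for Ryugu's orbit is an approximation I'm assuming for illustration, and the real Hayabusa2 trajectory used low-thrust ion engines plus an Earth gravity assist, so treat this strictly as a toy model rather than the actual mission design:

```python
import math

MU_SUN = 1.32712440018e20  # Sun's standard gravitational parameter, m^3/s^2
AU = 1.495978707e11        # one astronomical unit, m

def hohmann_dv(r1, r2, mu=MU_SUN):
    """Total delta-v (m/s) for an idealized two-impulse Hohmann transfer
    between circular, coplanar orbits of radii r1 and r2."""
    a_t = (r1 + r2) / 2.0                            # semi-major axis of the transfer ellipse
    v1 = math.sqrt(mu / r1)                          # circular orbital speed at departure
    v2 = math.sqrt(mu / r2)                          # circular orbital speed at arrival
    v_dep = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t))   # transfer-ellipse speed at r1 (vis-viva)
    v_arr = math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))   # transfer-ellipse speed at r2 (vis-viva)
    return abs(v_dep - v1) + abs(v2 - v_arr)         # sum of the two burns

# Earth's orbit (~1.0 AU) to an orbit near Ryugu's semi-major axis (~1.19 AU)
print(f"idealized delta-v: {hohmann_dv(1.0 * AU, 1.19 * AU) / 1000:.2f} km/s")
```

With these assumed numbers it comes out to roughly 2.5 km/s, and that's before any of the hard parts: phasing so the asteroid is actually there when you arrive, plane changes, and the final rendezvous with a body whose gravity is almost negligible.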
You know its orbit, you know our orbit. Provide the right amount of thrust in the right direction. | [
"Following the approval of the asteroid sample-return project MUSES-C, a rover was proposed to be mounted on the asteroid explorer, and development of MINERVA began in 1997. Completed in February 2003, MINERVA was Japan's first space rover, and the first asteroid rover in the world.\n",
"The first Japanese asteroid probe, Hayabusa, returned to Earth on 13 June, having landed on 25143 Itokawa in an effort to collect samples. It was also the world's first successful sample return mission from an asteroid.\n",
"BULLET::::- The spacecraft was originally intended to launch in July 2002 to the asteroid 4660 Nereus (the asteroid (10302) 1989 ML was considered as an alternative target). However, a July 2000 failure of Japan's M-5 rocket forced a delay in the launch, putting both Nereus and 1989 ML out of reach. As a result, the target asteroid was changed to , which was soon thereafter named for Japanese rocket pioneer Hideo Itokawa.\n",
"BULLET::::- \"Hayabusa\" was to deploy a small rover supplied by NASA and developed by JPL, called Muses-CN, onto the surface of the asteroid, but the rover was canceled by NASA in November 2000 due to budget constraints.\n",
"The \"Hayabusa2\" mission includes four rovers with various scientific instruments. On 21 September 2018, the first two of these rovers, which hop around the surface of the asteroid, were released from \"Hayabusa2\". This marks the first time a mission has completed a successful landing on a fast-moving asteroid body.\n",
"NASA proposed the Asteroid Redirect Mission (or \"Asteroid Initiative\"), an unmanned robotic mission, to \"retrieve\" a near-Earth asteroid with a size of about and a mass of around 500 tons (comparable in mass to the ISS). The asteroid would be moved into a high lunar orbit or orbit around EML2 (halo orbit, Lissajous orbit) for research and exploration purposes. Under consideration for moving the asteroid are grabbing the asteroid and using solar electric propulsion to \"directly\" move it, as well as gravity tractor technology.\n",
"In addition, \"Hayabusa\" was the first spacecraft designed to deliberately land on an asteroid and then take off again (\"NEAR Shoemaker\" made a controlled descent to the surface of 433 Eros in 2000, but it was not designed as a lander and was eventually deactivated after it arrived). Technically, \"Hayabusa\" was not designed to \"land\"; it simply touches the surface with its sample capturing device and then moves away. However, it was the first craft designed from the outset to make physical contact with the surface of an asteroid. Junichiro Kawaguchi of the Institute of Space and Astronautical Science was appointed to be the leader of the mission.\n"
] |
why do people listen to music with earbuds in while driving a vehicle that most likely has a stereo in it? | They could be listening to music from an audio player which can't connect to the car stereo (for example if the car doesn't have an aux input).
Or they could just be using their headphones to talk to someone on the phone.
And where I'm from it's very illegal. | [
"It can be used for personal audio, either to have sounds audible to only one person, or that which a group wants to listen to. The navigation instructions for example are only interesting for the driver in a car, not for the passengers. Another possibility are future applications for true stereo sound, where one ear does not hear what the other is hearing.\n",
"Initially implemented for listening to music and radio, vehicle audio is now part of car telematics, telecommunication, in-vehicle security, handsfree calling, navigation, and remote diagnostics systems. The same loudspeakers may also be used to minimize road and engine noise with active noise control, or they may be used to augment engine sounds, for instance making a smaller engine sound bigger.\n",
"Although modern headphones have been particularly widely sold and used for listening to stereo recordings since the release of the Walkman, there is subjective debate regarding the nature of their reproduction of stereo sound. Stereo recordings represent the position of horizontal depth cues (stereo separation) via volume and phase differences of the sound in question between the two channels. When the sounds from two speakers mix, they create the phase difference the brain uses to locate direction. Through most headphones, because the right and left channels do not combine in this manner, the illusion of the phantom center can be perceived as lost. Hard panned sounds are also heard only in one ear rather than from one side.\n",
"When wearing stereo headphones, people with unilateral hearing loss can hear only one channel, hence the panning information (volume and time differences between channels) is lost; some instruments may be heard better than others if they are mixed predominantly to one channel, and in extreme cases of sound production, such as complete stereo separation or stereo-switching, only part of the composition can be heard; in games using 3D audio effects, sound may not be perceived appropriately due to coming to the disabled ear. This can be corrected by using settings in the software or hardware—audio player, OS, amplifier or sound source—to adjust balance to one channel (only if the setting downmixes sound from both channels to one), or there may be an option to outright downmix both channels to mono. Such settings may be available via the device or software's accessibility features.\n",
"Limited-range drivers, also used alone, are typically found in computers, toys, and clock radios. These drivers are less elaborate and less expensive than wide-range drivers, and they may be severely compromised to fit into very small mounting locations. In these applications, sound quality is a low priority. The human ear is remarkably tolerant of poor sound quality, and the distortion inherent in limited-range drivers may enhance their output at high frequencies, increasing clarity when listening to spoken word material.\n",
"A personal stereo, or personal cassette player, is a portable audio player using an audiocassette player, battery power and in some cases an AM/FM radio. This allows the user to listen to music through headphones while walking, jogging or relaxing. Personal stereos typically have a belt clip or a shoulder strap so a user can attach the device to a belt or wear it over his or her shoulder. Some personal stereos came with a separate battery case.\n",
"The automobile sound system may be part of an active noise control system which reduces engine and road noise for the driver and passengers. One or more microphones are used to pick up sound from various places on the vehicle, especially the engine compartment, underside or exhaust pipes, and these signals are handled by a digital signal processor (DSP) then sent to the loudspeakers in such a way that the processed signal reduces or cancels out the outside noise heard inside the car. An early system focused only on engine noise was developed by Lotus and licensed for the 1992 Nissan Bluebird models sold in Japan. Lotus later teamed with Harman in 2009 to develop a more complete noise reduction system, including road noise, tire noise, and chassis vibrations. One benefit of active noise control is that the car can weigh less, with less sound-deadening material used, and without a heavy balance shaft in the engine. Removing the balance shaft also increases fuel efficiency. The 2013 Honda Accord used an active noise control system, as did the 2013 Lincoln luxury line and the Ford C-Max and Fusion models. Other operating data may also play a part in the DSP, data such as the engine's speed in revolutions per minute (RPM) or the car's highway speed. A multiple source reduction system may reach as much as 80% of noise removed.\n"
] |
When classical music was published in the 18th century, how did the music circulate? | The only medium available for recording classical music in the 18th century was print, of course, and that business is essentially the same now, with a few minor changes. The printing of music parallels the printing of words, with the earliest examples being carved on wooden plates, then engraved on metal plates, which eventually changed to movable music type, and has now moved to digital. Then it would have been printed, arranged into a book, and sold to the public in music shops. The first publishers began to appear in the 18th century in the major German cities, and by the late 18th century they were also in the United States. They would often look for exclusive rights from a composer for European distribution, but Beethoven was known to have sold the "exclusive" rights to a single work to several competing publishers. He did the same thing with commissions and world premieres as well, but that's a different story for another time.
Eventually an orchestral work would make it around the continent and be performed by the local orchestras, which could be of varying quality. Some people might live too far away to go to a concert, and in any case there might only be a single performance, or at best a handful of performances. In order for more people to learn and enjoy a work, orchestral pieces were often reduced to piano versions, which were the bread and butter of publishers, along with solo piano works and chamber works which could be performed at home. Sometimes these reductions were arranged by the composer, but often they were done by someone else. Liszt famously transcribed all of Beethoven's symphonies, with the Ninth Symphony requiring two pianos. These transcriptions/reductions are still popular today.
Music was as popular with people then as it is now, so it was a major industry, and the concept of copyright went hand in hand with the printing of music. The first major copyright law was the Statute of Anne in 1709, which gave the original publisher exclusive printing rights for 14 years (21 years for works already in print). This is important because while a publisher might have the exclusive rights for a while, eventually they would lose exclusivity and any publishing house could then print and sell that music (public domain). As the fame of composers grew throughout the 19th century, this was a huge windfall for publishers who could print the best-selling works of composers like Beethoven and Mozart without having to pay out royalties, making them very profitable. Today the term of copyright has been extended greatly, due to heavy lobbying by the Disney corporation and the Gershwin estate. | [
"BULLET::::- Musical publishing and distribution methods were very lax in 18th century Europe, with manuscript versions of music being freely circulated. This could easily lead to confusion about authorship, and frequent misattribution.\n",
"During the Medieval period the foundation was laid for the music notation and music theory practices that would shape Western music into the norms that developed during the common-practice era, a period of shared music writing practices which encompassed the Baroque music composers from 1600–1750, such as J.S. Bach and Classical music period composers from the 1700s such as W.A. Mozart and Romantic music era composers from the 1800s such as Wagner. The most obvious of these is the development of a comprehensive music notational system which enabled composers to write out their song melodies and instrumental pieces on parchment or paper. Prior to the development of musical notation, songs and pieces had to be learned \"by ear\", from one person who knew a song to another person. This greatly limited how many people could be taught new music and how wide music could spread to other regions or countries. The development of music notation made it easier to disseminate (spread) songs and musical pieces to a larger number of people and to a wider geographic area. However the theoretical advances, particularly in regard to rhythm—the timing of notes—and polyphony—using multiple, interweaving melodies at the same time—are equally important to the development of Western music.\n",
"In the eighteenth century the increasing availability of instruments such as the harpsichord, spinet and later the piano, and cheap print meant that works created for opera and the theatre were often published for private performance, with Thomas Arne's (1710–78) song \"Rule Britannia\" (1740) probably the best-known. From the 1730s elegant concert halls began to be built across the country and attendance rivalled that of the theatre, facilitating visits by figures such as Haydn, J. C. Bach and the young Mozart. The Italian style of classical music was probably first brought to Scotland by the Italian cellist and composer Lorenzo Bocchi, who travelled to Scotland in the 1720s, introducing the cello to the country and then developing settings for lowland Scots songs. He possibly had a hand in the first Scottish Opera, the pastoral \"The Gentle Shepherd\", with libretto by the makar Allan Ramsay. The extension of interest in music can be seen in the volume of musical publication, festivals, and the foundation of over 100 choral societies across the country. George III (reigned 1760–1820), and the aristocracy in general, continued to be patrons of music through the foundation of organisations like the Royal Concert of Music in 1776 and events like the Handel Festival from 1784. Outside of court patronage there were also a number of major figures, including the Scottish composer Thomas Erskine, 6th Earl of Kellie (1732–81) well known in his era, but whose work was quickly forgotten after his death and has only just begun to be reappraised.\n",
"In classical music, during the nineteenth century a \"canon\" developed which focused on what was felt to be the most important works written since 1600, with a great concentration on the later part of this period, termed the Classical period, which is generally taken to begin around 1750. After Beethoven, the major nineteenth-century composers include Robert Schumann, Frédéric Chopin, Hector Berlioz, Franz Liszt, Richard Wagner, Johannes Brahms, Anton Bruckner, Giuseppe Verdi, Giacomo Puccini, and Pyotr Ilyich Tchaikovsky.\n",
"By the late 1750s there were flourishing centers of the new style in Italy, Vienna, Mannheim, and Paris; dozens of symphonies were composed and there were bands of players associated with musical theatres. Opera or other vocal music accompanied by orchestra was the feature of most musical events, with concertos and symphonies (arising from the overture) serving as instrumental interludes and introductions for operas and church services. Over the course of the Classical period, symphonies and concertos developed and were presented independently of vocal music.\n",
"Up until the 18th century, music performance and distribution centered around current compositions. Even professional musicians rarely were familiar with music written more than a half century before their own time. In the second half of the 18th century, an awakening of interest in the history of music prompted the publication of numerous collections of older music (for example, William Boyce's \"Cathedral Music\", published around 1760-63, and Giovanni Battista Martini's \"Esemplare, ossia Saggio... di contrappunto\", published around 1774-5). Around the same time, the proliferation of pirated editions of music by popular composers (such as Haydn and Mozart) prompted respected music publishers to embark on \"oeuvres complettes,\" intended as uniform editions of the entire musical output of these composers. Unfortunately, many of these early complete works projects were never finished.\n",
"During the Renaissance music era, the printing press was invented, which made it much easier to mass-produce music (which had previously been hand-copied). This helped to spread musical styles more quickly and across a larger area. During the Baroque era (1600–1750), technologies for keyboard instruments developed, which led to improvements in the designs of pipe organs and harpsichords, and the development of a new keyboard instrument in about 1700, the piano. In the Classical era (1750–1820), Beethoven added new instruments to the orchestra to create new sounds, such as the piccolo, contrabassoon, trombones, and untuned percussion in his Ninth Symphony. During the Romantic music era (c. 1810 to 1900), one of the key ways that new compositions became known to the public was by the sales of relatively inexpensive sheet music, which amateur middle class music lovers would perform at home on their piano or other instruments. In the 19th century, new instruments such as piston valve-equipped cornets, saxophones, euphoniums, and Wagner tubas were added to the orchestra. Many of the mechanical innovations developed for instruments in the 19th century, notably on the piano, brass and woodwinds continued to be used in the 20th and early 21st century.\n"
] |
What did nomadic horse archers use to make their arrows? | Hi, this is not to discourage other answers but you might be interested in [this post](_URL_0_) by u/krishaperkins | [
"A horse archer is a cavalryman armed with a bow, able to shoot while riding from horseback. Archery has occasionally been used from the backs of other riding animals. In large open areas, it was a highly successful technique for hunting, for protecting the herds, and for war. It was a defining characteristic of the Eurasian nomads during antiquity and the medieval period, as well as the Iranian peoples, (Alans, Scythians, Sarmatians, Parthians, Sassanid Persians) and Indians in antiquity, and by the Hungarians, Mongols and the Turkic peoples during the Middle Ages.\n",
"Since using a bow requires the rider to let go of the reins with both hands, horse archers need superb equestrian skills if they are to shoot on the move. The natives of large grassland areas used horse archery for hunting, for protecting their herds, and for war. Horse archery was for many groups a basic survival skill, and additionally made each able-bodied man, at need, a highly-mobile warrior. The buffalo hunts of the North American prairies may be the best-recorded examples of bowhunting by horse archers.\n",
"Since using a bow requires a horseman to let go of the reins with both hands, horse archers need superb equestrian skills if they are to shoot on the move. Horse archery is typically associated with Eurasian nomads of the Eurasian steppe. Such were the Scythians and Sarmatians and later the Parthians, Hungarians, and Turks. Scythians were well known for their tactic of the Parthian shot, but evidently it was the Parthians who give it its name. In this tactical manoeuvre the horsemen would make a feigned retreat and progress away from the pursuing enemy while turning his upper body and shooting backwards at the pursuer, guiding his horse with his voice and the pressure of his legs.\n",
"The Roman Empire and its military also had an extensive use of horse archers after their conflict with eastern armies that relied heavily on mounted archery in the 1st century BC. They had regiments such as the Equites Sagittarii, who acted as Rome's horse archers in combat. The Crusaders used conscripted cavalry and horse archers known as the Turcopole, made up of mostly Greek and Turks.\n",
"Camel archers are marksmen wielding bows mounted on camels. Most commonly they are considered a part and form of Arab archery. They took their popularity in the Crusades, used in Arabia, Asian and Eurasian countries. Saladin, the leader of Arabia from 1174 to 1193, was known, or rather believed to use camels as a substitute for other ways of transport, such as the more common horse.\n",
"Early horse archery, depicted on the Assyrian carvings, involved two riders, one controlling both horses while the second shot. Heavy horse archers first appeared in the Assyrian army in the 7th century BC after abandoning chariot warfare and formed a link between light skirmishing cavalrymen and heavy cataphract cavalry. The heavy horse archers usually had mail or lamellar armour and helmets, and sometimes even their horses were armoured.\n",
"Some of the earliest examples of horses being ridden in warfare were horse-mounted archers or spear-throwers, dating to the reigns of the Assyrian rulers Ashurnasirpal II and Shalmaneser III. However, these riders sat far back on their horses, a precarious position for moving quickly, and the horses were held by a handler on the ground, keeping the archer free to use the bow. Thus, these archers were more a type of mounted infantry than true cavalry. The Assyrians developed cavalry in response to invasions by nomadic people from the north, such as the Cimmerians, who entered Asia Minor in the 8th century BC and took over parts of Urartu during the reign of Sargon II, approximately 721 BC. Mounted warriors such as the Scythians also had an influence on the region in the 7th century BC. By the reign of Ashurbanipal in 669 BC, the Assyrians had learned to sit forward on their horses in the classic riding position still seen today and could be said to be true light cavalry. The ancient Greeks used both light horse scouts and heavy cavalry, although not extensively, possibly due to the cost of keeping horses.\n"
] |
where does the "water" portion of beverages go when dumped down the drain? | It depends :)
In a modern, industrialised region, the water goes through pipes to a water treatment station. This water follows the same path as the water from the bathtub, kitchen sink or toilet. The treatment station purifies the water through biological methods (plants, algae, etc.) and physical methods (filters, UV exposure, etc.), as well as specific chemicals that neutralise the waste or make it easier to extract residues. The "purified" waste water can be evaporated or sent to a lake, river, or the ocean. Sometimes this waste water can be used for farming, but not always. The remaining chemicals and solid matter extracted from the water are dried and used as fertiliser or compost, incinerated, or simply stored in specialised waste dumps.
In the rest of the world it goes directly, or with little filtering, to the nearest river, lake or the ocean. | [
"It is essentially a grate, which allows excess and waste liquids to be drained away, and either collected in a pan under the grate, or drained away through a hose that carries the waste water and tea to a bucket or other drain. \n",
"Water is typically drawn from the pool via a rectangular aperture in the wall, connected through to a device fitted into one (or more) wall/s of the pool. The internals of the skimmer are accessed from the pool deck through a circular or rectangle lid, about one foot in diameter. If the pool's water pump is operational water is drawn from the pool over a floating hinged weir (operating from a vertical position to 90 degrees angle away from the pool, in order to stop leaves and debris being back-flooded into the pool by wave action), and down into a removable \"skimmer basket\", the purpose of which is to entrap leaves, dead insects and other larger floating debris.\n",
"A sump pump is a pump used to remove water that has accumulated in a water-collecting sump basin, commonly found in the basements of homes. The water may enter via the perimeter drains of a basement waterproofing system, funneling into the basin or because of rain or natural ground water, if the basement is below the water table level.\n",
"A fountain consists of a motor that powers a rotating impeller. The impeller pumps water from the first few feet of the water and expels it into the air. This process utilizes air-water contact to transfer oxygen. As the water is propelled into the air, it breaks into small droplets. Collectively, these small droplets have a large surface area through which oxygen can be transferred. Upon return, these droplets mix with the rest of the water and thus transfer their oxygen back to the ecosystem.\n",
"In modern fountains a water filter, typically a media filter, removes particles from the water—this filter requires its own pump to force water through it and plumbing to remove the water from the pool to the filter and then back to the pool. The water may need chlorination or anti-algal treatment, or may use biological methods to filter and clean water.\n",
"The water is distributed in a slender trickle issuing from the center of the dome and falls down into a basin that is protected by a grille. To make distribution easier, two tin-plated, iron cups attached to the fountain by a small chain were at the drinker's desire, staying always submerged for cleanliness. These cups were removed in 1952 \"for Hygiene reasons\" by demand of the Council of Public Hygiene of the old Department of the Seine.\n",
"The fountain can spout (almost) as high above the upper container as the water falls from the basin into the lower container. For maximum effect, place the upper container as closely beneath the basin as possible and place the lower container a long way beneath both.\n"
] |
how are some people addicted to work? | Psychologists have linked it to coping with depression or loneliness, satisfying a craving for competition, or simply greed. This is a case-by-case type of addiction. | [
"Addicts often believe that being in control of others is how to achieve success and happiness in life. People who follow this rule use it as a survival skill, having usually learned it in childhood. As long as they make the rules, no one can back them into a corner with their feelings.\n",
"People who suffer from an addictive personality spend excessive time on a behavior or with an item, not as a hobby but because they feel they have to. Addiction can be defined when the engagement in the activity or experience affects the person’s quality of life in some way. In this way, many people who maintain an addictive personality isolate themselves from social situations in order to mask their addiction.\n",
"Addicts often believe that being in control is how to achieve success and happiness in life. People who follow this rule use it as a survival skill, having usually learned it in childhood. As long as they make the rules, no one can back them into a corner with their feelings.\n",
"Individuals who can be considered addicted to shopping are observed to exhibit repetitive and obsessive urges to go buy items, especially when in the vicinity of an environment that supports this venture, such as a mall. In these locations, they mostly purchase things that are cheap and of low value, mainly just to satisfy the urge to spend. Normally, these items end up being returned to the shop they were bought from or just disposed of entirely after a while. However, according to Zadka and Olajossy, this rarely works as these individuals are known to have low self-esteem.\n",
"Burke and Fiksenbaum refer to Schaufeli, Taris, and Bakker (2007) when they made a distinction between an individual good workaholics and bad workaholics. A good workaholic will score higher on measures of work engagement and a bad workaholic will score higher on measures of burnout. They also suggest why this is – some individuals work because they are satisfied, engaged, and challenged and to prove a point. On the other hand, the opposite kind work hard because they are addicted to work; they see that the occupation makes a contribution to finding an identity and purpose.\n",
"A workaholic is a person who works compulsively. While the term generally implies that the person enjoys their work, it can also alternately imply that they simply feel compelled to do it. There is no generally accepted medical definition of such a condition, although some forms of stress, impulse control disorder, obsessive-compulsive personality disorder, and obsessive-compulsive disorder can be work-related.\n",
"An addictive behavior is a behavior, or a stimulus related to a behavior (e.g., sex or food), that is both rewarding and reinforcing, and is associated with the development of an addiction. Addictions involving addictive behaviors are normally referred to as behavioral addictions.\n"
] |
How do calculators deal with imaginary numbers? | Complex numbers follow the same arithmetic rules as ordinary numbers; your calculator just stores the real and imaginary parts separately and keeps track of each. There's really nothing special about the process, other than being able to store different number types and knowing how to display them appropriately. Unless you have a more specific question that I'm not seeing. | [
"With some exceptions, most calculators do not operate on real numbers. Instead, they work with finite-precision approximations called floating-point numbers. In fact, most scientific computation uses floating-point arithmetic. Real numbers satisfy the usual rules of arithmetic, but floating-point numbers do not.\n",
"Most pocket calculators have a square root key. Computer spreadsheets and other software are also frequently used to calculate square roots. Pocket calculators typically implement efficient routines, such as the Newton's method (frequently with an initial guess of 1), to compute the square root of a positive real number. When computing square roots with logarithm tables or slide rules, one can exploit the identities\n",
"The Inverse Symbolic Calculator is an online number checker established July 18, 1995 by Peter Benjamin Borwein, Jonathan Michael Borwein and Simon Plouffe of the Canadian Centre for Experimental and Constructive Mathematics (Burnaby, Canada). A user will input a number and the Calculator will use an algorithm to search for and calculate closed-form expressions or suitable functions that have roots near this number. Hence, the calculator is of great importance for those working in numerical areas of experimental mathematics.\n",
"Most pocket calculators do all their calculations in BCD rather than a floating-point representation. BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware—a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary.\n",
"Some graphing calculators have a computer algebra system (CAS), which means that they are capable of producing symbolic results. These calculators can manipulate algebraic expressions, performing operations such as factor, expand, and simplify. In addition, they can give answers in exact form without numerical approximations. Calculators that have a computer algebra system are called symbolic or CAS calculators. Examples of symbolic calculators include the HP 50g, the HP Prime, the TI-89, the TI-Nspire CAS and TI-Nspire CX CAS and the Casio ClassPad series.\n",
"- RPL (only on HP 49/50 series in \"exact mode\"): calculator treats numbers entered without decimal point as integers rather than floats; integers are of arbitrary precision only limited by the available memory.\n",
"Commonly in secondary schools' mathematics education, the real numbers are constructed by defining a number using an integer followed by a radix point and an infinite sequence written out as a string to represent the fractional part of any given real number. In this construction, the set of any combination of an integer and digits after the decimal point (or radix point in non-base 10 systems) is the set of real numbers. This construction can be rigorously shown to satisfy all of the real axioms after defining an equivalence relation over the set that defines 1 = 0.999... as well as for any other nonzero decimals with only finitely many nonzero terms in the decimal string with its trailing 9s version. With this construction of the reals, all proofs of the statement \"1 = 0.999...\" can be viewed as implicitly assuming the equality when any operations are performed on the real numbers.\n"
] |
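The "keep the two parts separate" idea from the answer above can be sketched in a few lines of code. This is a minimal illustration, not how any particular calculator's firmware actually works: a complex number is stored as a pair of ordinary numbers, and the textbook arithmetic rules are applied to each part (Python's built-in `complex` type does the same thing internally).

```python
# A complex number stored as two ordinary numbers, the way a
# calculator might keep the real and imaginary parts separate.
class Complex:
    def __init__(self, re, im):
        self.re = re  # real part
        self.im = im  # imaginary part

    def __add__(self, other):
        # (a + bi) + (c + di) = (a + c) + (b + d)i
        return Complex(self.re + other.re, self.im + other.im)

    def __mul__(self, other):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)

    def __repr__(self):
        sign = '+' if self.im >= 0 else '-'
        return f"{self.re} {sign} {abs(self.im)}i"

# (1 + 2i) * (3 + 4i) = -5 + 10i
print(Complex(1, 2) * Complex(3, 4))  # prints -5 + 10i
```

Displaying the result is then just formatting the two stored parts, which is the "how to display them appropriately" step the answer mentions.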
Given densities are well understood, what prevents geologists from predicting where minerals are, rather than prospecting? Also, since the heavy, valuable metals sink, what prevents a rush to volcanic sites to mine these metals? | Because it's way, way more complicated than that.
Rocks move. Yes, the heavier elements might sink lower as the lava cools, but cooling lava isn't the only way rocks are made. Sedimentary rocks don't really care about density, only about the order in which material was deposited as they formed. Even igneous rocks get shifted around by tectonic forces, making once-flat layers crunch up or go completely vertical. You can see some of that in the Grand Canyon, if you know what you're looking at. | [
"Peak minerals marks the point in time when the largest production of a mineral will occur in an area, with production declining in subsequent years. While most mineral resources will not be exhausted in the near future, global extraction and production is becoming more challenging. Miners have found ways over time to extract deeper and lower grade ores with lower production costs. More than anything else, declining average ore grades are indicative of ongoing technological shifts that have enabled inclusion of more 'complex' processing – in social and environmental terms \"as well as\" economic – and structural changes in the minerals exploration industry and these have been accompanied by significant increases in identified Mineral Reserves.\n",
"Geologists involved in mining and mineral exploration use geologic modelling to determine the geometry and placement of mineral deposits in the subsurface of the earth. Geologic models help define the volume and concentration of minerals, to which economic constraints are applied to determine the economic value of the mineralization. Mineral deposits that are deemed to be economic may be developed into a mine.\n",
"The purpose of the study of economic geology is to gain understanding of the genesis and localization of ore deposits plus the minerals associated with ore deposits. Though metals, minerals and other geologic commodities are non-renewable in human time frames, the impression of a fixed or limited stock paradigm of scarcity has always led to human innovation resulting in a replacement commodity substituted for those commodities which become too expensive. Additionally the fixed stock of most mineral commodities is huge (e.g., copper within the earth's crust given current rates of consumption would last for more than 100 million years. Nonetheless, economic geologists continue to successfully expand and define known mineral resources.\n",
"Geologists are involved in the study of ore deposits, which includes the study of ore genesis and the processes within the Earth's crust that form and concentrate ore minerals into economically viable quantities.\n",
"The abundance and diversity of minerals is controlled directly by their chemistry, in turn dependent on elemental abundances in the Earth. The majority of minerals observed are derived from the Earth's crust. Eight elements account for most of the key components of minerals, due to their abundance in the crust. These eight elements, summing to over 98% of the crust by weight, are, in order of decreasing abundance: oxygen, silicon, aluminium, iron, magnesium, calcium, sodium and potassium. Oxygen and silicon are by far the two most important – oxygen composes 47% of the crust by weight, and silicon accounts for 28%.\n",
"Metals are often extracted from the Earth by means of mining ores that are rich sources of the requisite elements, such as bauxite. Ore is located by prospecting techniques, followed by the exploration and examination of deposits. Mineral sources are generally divided into surface mines, which are mined by excavation using heavy equipment, and subsurface mines. In some cases, the sale price of the metal/s involved make it economically feasible to mine lower concentration sources.\n",
"Mining geology consists of the extractions of mineral resources from the Earth. Some resources of economic interests include gemstones, metals such as gold and copper, and many minerals such as asbestos, perlite, mica, phosphates, zeolites, clay, pumice, quartz, and silica, as well as elements such as sulfur, chlorine, and helium.\n"
] |
Are there any historical precedents for a socialist/communist government before the 20th century? | Sorry, we don't allow [throughout history questions](_URL_0_). These tend to produce threads which are collections of trivia, not the in-depth discussions about a particular topic we're looking for. If you have a specific question about a historical event or period or person, please feel free to re-compose your question and submit it again. Alternatively, questions of this type can be directed to more appropriate subreddits, such as /r/history or /r/askhistory. | [
"In the United Kingdom, the democratic socialist tradition was represented in particular by William Morris's Socialist League and in the 1880s by the Fabian Society and later the Independent Labour Party founded by Keir Hardie in the 1890s, of which writer George Orwell would later be a prominent member. In the early 1920s, the guild socialism of G. D. H. Cole attempted to envision a socialist alternative to Soviet-style authoritarianism while council communism articulated democratic socialist positions in several respects, notably through renouncing the vanguard role of the revolutionary party and holding that the system of the Soviet Union was not authentically socialist.\n",
"Socialism had been gaining momentum among working class citizens of the world since the 19th century. These culminated in the early 20th century when several states and colonies formed their own communist parties. Many of the countries involved had hierarchical structures with monarchic governments and aristocratic social structures with an established nobility. Socialism was undesirable within the circles of the ruling classes (which had begun to include industrial business leaders) in the late 19th/early 20th century states; as such, communism was repressed. Its champions suffered persecution while people were discouraged from adopting it. This had been the practice even in states which identified as exercising a multi-party system.\n",
"By the early 20th century, myriad socialist tendencies (e.g. anarcho-syndicalism, social democracy and Bolshevism) had arisen based on different interpretations of current events. Governments also began placing restrictions on market operations and created interventionist programs, attempting to ameliorate perceived market shortcomings (e.g. Keynesian economics and the New Deal). Starting with the 1917 Russian Revolution, Communist states increased in numbers and a Cold War started with the developed capitalist nations. Following the Revolutions of 1989, many of these Communist states adopted market economies. The notable exceptions to this trend have been North Korea, Cuba and Venezuela, the latter instituting a philosophy referred to as \"socialism of the 21st century\".\n",
"The history of socialism has its origins in the 1789 French Revolution and the changes which it wrought, although it has precedents in earlier movements and ideas. \"The Communist Manifesto\" was written by Karl Marx & Friedrich Engels in 1848 just before the Revolutions of 1848 swept Europe, expressing what they termed \"scientific socialism\". In the last third of the 19th century, social democratic parties arose in Europe, drawing mainly from Marxism. The Australian Labor Party was the world's first elected socialist party when it formed government in the Colony of Queensland for a week in 1899.\n",
"An alternative socialist establishment co-existed, the Petrograd Soviet, wielding power through the democratically elected councils of workers and peasants, called \"Soviets\". The rule of the new authorities only aggravated the crisis in the country, instead of resolving it. Eventually, the October Revolution, led by Bolshevik leader Vladimir Lenin, overthrew the Provisional Government and gave full governing power to the Soviets, leading to the creation of the world's first socialist state.\n",
"The historical origins of left communism can be traced to the period before the First World War, but it only came into focus after 1918. All left communists were supportive of the October Revolution in Russia, but retained a critical view of its development. However, some would in later years come to reject the idea that the revolution had a proletarian or socialist nature, asserting that it had simply carried out the tasks of the bourgeois revolution by creating a state capitalist system.\n",
"Although socialism is commonly conflated with Marxism–Leninism and its various variants such as Stalinism and Maoism, there have also been several anarchist and socialist societies that followed democratic socialist principles, encompassing anti-authoritarian, democratic anti-capitalism. The most notable examples are the Paris Commune, the various soviet republics in the post-World War I period and the early Soviet Russia before the abolition of soviets, Revolutionary Catalonia as noted by George Orwell and more recently Rojava. Other examples include the kibbutz in modern-day Israel, Marinaleda in Spain, the Zapatistas in Chiapas and to some extent the workers' self-management within Yugoslavia and modern Cuba.\n"
] |
how do professional songwriters write hit songs? | ask them in r/music and related subs | [
"Songwriters can be employed to write either the lyrics or the music directly for or alongside a performing artist, or they present songs to A&R, publishers, agents and managers for consideration. Song pitching can be done on a songwriter's behalf by their publisher or independently using tip sheets like \"RowFax\", the \"MusicRow\" publication and \"SongQuarters\". Skills associated with song-writing include entrepreneurism and creativity.\n",
"In a 2002 interview with \"The Guardian\", Chapman reflected that writing hit songs was an art to which many aspired but few achieved: \"It's always a gamble. We'd written something like eight top 10 hits for Sweet when we heard that they'd entered the studio to record their own songs. After that, it was over for them. The bottom line is this – writing songs might be easy to do, but it's incredibly hard to do well.\"\n",
"The old-style apprenticeship approach to learning how to write songs is being supplemented by university degrees and college diplomas and \"rock schools\". Knowledge of modern music technology (sequencers, synthesizers, computer sound editing), songwriting elements and business skills are now often necessary requirements for a songwriter. Several music colleges offer songwriting diplomas and degrees with music business modules. Since songwriting and publishing royalties can be substantial sources of income, particularly if a song becomes a hit record; legally, in the US, songs written after 1934 may be copied only by the authors. The legal power to grant these permissions may be bought, sold or transferred. This is governed by international copyright law.\n",
"For songwriting, Preven keeps a notebook with her and \"always jots down poetry and prose entries.\" She then goes back through them in the studio to try and see if anything in the journal can be matched with a melody she is working on to become lyrics. She finds that hit songs are often those that sound similar to other songs, but she rejects trying to \"genetically engineer\" a song formally, holding that such songs \"end up sounding like a ransom note.\"\n",
"Songs are written either traditionally or by Fiona Mackenzie, Jennifer Wrigley, Hazel Wrigley, or Sandy Brechin. The other artists in the band are Sandy Brechin (accordion), Jennifer Wrigley (fiddle, hardanger fiddle), Jim Walker (drums, percussion), Niall Muir (bass guitar, backing vocals), Aaron Jones (bass guitar, bazouki, cittern), Hazel Wrigley (guitar, piano, fender rhodes, mandolin).\n",
"This is a list of songs written by Jerry Leiber and Mike Stoller, in most cases as a songwriting duo. The pair also collaborated with other songwriters, and also on rare occasions wrote songs as individuals with other writers.\n",
"In his 2006 book, \"The Art of Writing A Hit Song: The Urban Experience\", Jack breaks down the process of creating and developing a hit song in seven formulated steps. He also offers online songwriting courses through The Jack Knight Songwriters Academy, which employs an intense curriculum to prepare up and coming songwriters for the mainstream media.\n"
] |
Gorbachev and Dissolution of USSR | I will point you to this [answer](_URL_3_) I wrote on the role of Gorbachev in the dissolution of the USSR. In essence, Gorbachev, after becoming General Secretary of the Communist Party in 1985, undertook a series of structural political and economic reforms that unleashed forces that were ultimately outside of his control. Thus while the formal dissolution of the USSR is taken to be the Supreme Soviet voting for its own abolition and the resignation of Gorbachev as Soviet president and the lowering of the Soviet flag over the Kremlin on December 25, 1991, this was largely the remnants of the Soviet government recognizing the already-existing state of the Union's dissolution (and it should be noted that some Soviet institutions, notably its [military](_URL_0_), actually persisted after this date).
In answer to your side-question - the Communist Party of the Soviet Union was stripped of its constitutional legal monopoly in March 1990, and was more or less [dissolved](_URL_1_) and outlawed following the failed August 1991 coup. Lower level elements of the party reconstituted themselves into various political parties across the former USSR, and the largest since 1993 has been the Communist Party of the Russian Federation. The CPRF has contested elections since 1993, winning a plurality of deputies to the Russian legislature (Duma) in 1995, and its leader Gennady Zyuganov almost won the 1996 presidential [election](_URL_2_) against Boris Yeltsin. Despite all this, and despite being a pro-government party, it hasn't actually controlled any part of the Russian government above the regional level in the post-Soviet period. | [
"On 8 December 1991, the presidents of Russia, Belarus, and Ukraine formally dissolved the USSR, and then constituted the Commonwealth of Independent States (CIS). Soviet President Gorbachev resigned on 25 December 1991; the next day, the Supreme Soviet dissolved itself, officially dissolving the USSR on 26 December 1991. During the next 18 months, inter-republican political efforts to transform the Army of the Soviet Union into the CIS military failed; eventually, the forces stationed in the republics formally became the militaries of the respective republican governments.\n",
"On 26 December 1991, the USSR was self-dissolved by the \"Council of the Republics\" of the Supreme Soviet of the Soviet Union, the first house of Soviet legislature (the second house, the \"Council of the Union\", was without a quorum).\n",
"On August 24, 1991, Gorbachev dissolved the Central Committee of the CPSU, resigned as the party's general secretary, and dissolved all party units in the government. (It was on the very same day that the Declaration of Independence of Ukraine was enacted by the Supreme Council of Ukraine, signalling the beginning of the end of the USSR as a whole, as Ukraine declared independence on that day.) Five days later, the Supreme Soviet indefinitely suspended all CPSU activity on Soviet territory, effectively ending Communist rule in the Soviet Union and dissolving the only remaining unifying force in the country. Gorbachev established a State Council of the Soviet Union on 5 September, designed to bring him and the highest officials of the remaining republics into a collective leadership, able to appoint a premier of the Soviet Union; it never functioned properly, though Ivan Silayev \"de facto\" took the post through the Committee on the Operational Management of the Soviet Economy and the Interstate Economic Committee and tried to form a government though with rapidly reducing powers.\n",
"On 8 December 1991, the presidents of Russia, Ukraine, and Belarus signed the Belavezha Accords, which declared the Soviet Union dissolved and established the Commonwealth of Independent States (CIS) in its place. Doubts remained about the authority of the Belavezha Accords to dissolve the Union, but on 21 December 1991, representatives of every Soviet republic except Georgia—including those that had signed the Belavezha Accords—signed the Alma-Ata Protocol, which confirmed the dissolution of the USSR and reiterated the establishment of the CIS. On 25 December 1991, Gorbachev yielded, resigning as the president of the USSR and declaring the office extinct. He turned the powers vested in the Soviet presidency over to Yeltsin, the president of Russia.\n",
"On 8 December 1991, the presidents of Russia, Ukraine and Belarus (formerly Byelorussia), signed the Belavezha Accords, which declared the Soviet Union dissolved and established the Commonwealth of Independent States (CIS) in its place. While doubts remained over the authority of the accords to do this, on 21 December 1991, the representatives of all Soviet republics except Georgia signed the Alma-Ata Protocol, which confirmed the accords. On 25 December 1991, Gorbachev resigned as the President of the USSR, declaring the office extinct. He turned the powers that had been vested in the presidency over to Yeltsin. That night, the Soviet flag was lowered for the last time, and the Russian tricolor was raised in its place.\n",
"The dissolution of the Soviet Union is the process of internal collapse of the Soviet Union which started in the second half of 1980s with a series of national unrests and ended on 26 December 1991, when the Soviet Union was voted out of existence, following the Belavezha Accords. This officially granted self-governing independence to the Republics of the Union of Soviet Socialist Republics (USSR). It was a result of the declaration number 142-Н of the Supreme Soviet of the Soviet Union. The declaration acknowledged the independence of the former Soviet republics and created the Commonwealth of Independent States (CIS), although five of the signatories ratified it much later or did not do so at all. On the previous day, 25 December, Soviet President Mikhail Gorbachev, the eighth and final leader of the USSR, resigned, declared his office extinct and handed over its powers—including control of the Soviet nuclear missile launching codes—to Russian President Boris Yeltsin. That evening at 7:32 p.m., the Soviet flag was lowered from the Kremlin for the last time and replaced with the pre-revolutionary Russian flag.\n",
"The dissolution of the Soviet Union was a process of systematic disintegration, which occurred in the economy, social structure and political structure. It resulted in the abolition of the Soviet Federal Government (\"the Union center\") and independence of the USSR's republics on 26 December 1991. The process was caused by a weakening of the Soviet government, which led to disintegration and took place from about 19 January 1990 to 26 December 1991. The process was characterized by many of the republics of the Soviet Union declaring their independence and being recognized as sovereign nation-states.\n"
] |
why do daily vitamins for seniors contain less or no iron? | Too much iron raises the risk of heart disease in people over 50, so senior formulas cut it back or leave it out.
"It is designed for children 2 years of age and older. Flintstones Complete has a high supplementation of iron, iodine, vitamin D and vitamin E. Vitamin D is necessary for the maintenance and growth of bones in children. Vitamin D deficiency is a concern for infants, especially in the Northern Hemisphere. This is because infants often have very limited exposure to sunlight, which is the main source of endogenous Vitamin D production. Vitamin D deficiency can result in rickets, a disease in which bones become soft and pliable. Vitamin E is a potent anti-oxidant in the body. Vitamin E deficiencies leads to neuromuscular, vascular and reproductive abnormalities.\n",
"Elderly people have a higher risk of having a vitamin D deficiency due to a combination of several risk factors, including: decreased sunlight exposure, decreased intake of vitamin D in the diet, and decreased skin thickness which leads to further decreased absorption of vitamin D from sunlight.\n",
"Iron supplements, also known as iron salts and iron pills, are a number of iron formulations used to treat and prevent iron deficiency including iron deficiency anemia. For prevention they are only recommended in those with poor absorption, heavy menstrual periods, pregnancy, hemodialysis, or a diet low in iron. Prevention may also be used in low birth weight babies. They are taken by mouth, injection into a vein, or injection into a muscle. While benefits may be seen in days up to two months may be required until iron levels return to normal.\n",
"The intake of calcium in elder people is quite low, and this problem is worsened by a reduced capability to ingest it. This, attached to a decrease in the absorption of vitamin D concerning metabolism, are also factors that contributes to a diagnosis of osteoporosis type II.\n",
"In those who are otherwise healthy, there is little evidence that supplements have any benefits with respect to cancer or heart disease. Vitamin A and E supplements not only provide no health benefits for generally healthy individuals, but they may increase mortality, though the two large studies that support this conclusion included smokers for whom it was already known that beta-carotene supplements can be harmful. A 2018 meta-analysis found no evidence that intake of vitamin D or calcium for community-dwelling elderly people reduced bone fractures.\n",
"Secondary vitamin A deficiency is associated with chronic malabsorption of lipids, impaired bile production and release, and chronic exposure to oxidants, such as cigarette smoke, and chronic alcoholism. Vitamin A is a fat-soluble vitamin and depends on micellar solubilization for dispersion into the small intestine, which results in poor use of vitamin A from low-fat diets. Zinc deficiency can also impair absorption, transport, and metabolism of vitamin A because it is essential for the synthesis of the vitamin A transport proteins and as the cofactor in conversion of retinol to retinal. In malnourished populations, common low intakes of vitamin A and zinc increase the severity of vitamin A deficiency and lead physiological signs and symptoms of deficiency. A study in Burkina Faso showed major reduction of malaria morbidity with combined vitamin A and zinc supplementation in young children.\n",
"BULLET::::- Adequate calcium and regular exercise may help to achieve strong bones in children and adolescents and may reduce the risk of osteoporosis in older adults. An adequate intake of vitamin D is also necessary\n"
] |
Why did European firearms technology become superior to that of other continents? | For a very good recent book on the subject I highly recommend *The Gunpowder Age: China, Military Innovation, and the Rise of the West* by Tonio Andrade. He sums up much of the information and theory available as far as China goes and concludes that the main cause does seem to be a relative lack of major wars.
Essentially, Chinese military technology underwent two different divergences compared to Europe. Prior to the mid 15th century, Chinese gunpowder technology was on par with or better than that of the West. Chinese cannons followed the same trend as European ones toward longer barrels relative to bore size, and the Ming army employed a larger proportion of handgunners than European armies would until around 1500. By 1450, however, the Ming had achieved a level of relative stability with no major opponents, and military development stopped. Meanwhile, in Europe over the next half century the "classic" cannon was developed and the matchlock arquebus became widespread. As a result, when the Portuguese first arrived in the 1520s, both sides agreed that European guns and cannon were far superior.
This didn't last, though. By the end of the 16th century and over the course of the 17th, warfare picked up again in China, and European gun technologies were quickly adopted, first by the Ming and later by the Qing as well. China developed complex drills and countermarch-style volley fire techniques even earlier than the Dutch did, and by the end of the 17th century Chinese forces had defeated two Renaissance-style artillery forts. They even had plans to build their own, but once the Qing had firmly established their dominance there was no longer any real need. Thus, Andrade argues, China had again reached a level of relative military parity with Europe. It was then over the course of the 18th century that a lack of warfare, combined with the relatively isolationist policies of the Qing dynasty, led to a lack of military innovation and a serious decline in the quality of the army, setting up a decisive defeat by the British during the Opium Wars.
"Central to the success of the Europeans was the use of firearms. However, the advantages afforded by firearms have often been overstated. Prior to the late 19th century, firearms were often cumbersome muzzle-loading, smooth-bore, single shot muskets with flint-lock mechanisms. Such weapons produced a low rate of fire, while suffering from a high rate of failure and were only accurate within . These deficiencies may have initially given the Aborigines an advantage, allowing them to move in close and engage with spears or clubs. Yet by 1850 significant advances in firearms gave the Europeans a distinct advantage, with the six-shot Colt revolver, the Snider single shot breech-loading rifle and later the Martini-Henry rifle, as well as rapid-fire rifles such as the Winchester rifle, becoming available. These weapons, when used on open ground and combined with the superior mobility provided by horses to surround and engage groups of Aborigines, often proved successful. The Europeans also had to adapt their tactics to fight their fast-moving, often hidden enemies. Tactics employed included night-time surprise attacks, and positioning forces to drive the natives off cliffs or force them to retreat into rivers while attacking from both banks.\n",
"Along with advancements in communication, Europe also continued to advance in military technology. European chemists made new explosives that made artillery much more deadly. By the 1880s, the machine gun had become a reliable battlefield weapon. This technology gave European armies an advantage over their opponents, as armies in less-developed countries were still fighting with arrows, swords, and leather shields (e.g. the Zulus in Southern Africa during the Anglo-Zulu War of 1879).. Some exceptions of armies that managed to get nearly on par with the European expeditions and standards include the Ethiopian armies at the Battle of Adwa, the Chinese Ever Victorious Army and the Japanese Imperial Army of Japan, but these still relied heavily on weapon imports from Europe and often on European military advisors and adventurers.\n",
"Central to the success of the Europeans was the use of firearms, but the advantages this afforded have often been overstated. Prior to the 19th century, firearms were often cumbersome muzzle-loading, smooth-bore, single shot weapons with flint-lock mechanisms. Such weapons produced a low rate of fire, whilst suffering from a high rate of failure and were only accurate within . These deficiencies may have given the Aborigines some advantages, allowing them to move in close and engage with spears or clubs. However, by 1850 significant advances in firearms gave the Europeans a distinct advantage, with the six-shot Colt revolver, the Snider single shot breech-loading rifle and later the Martini-Henry rifle as well as rapid-fire rifles such as the Winchester rifle, becoming available. These weapons, when used on open ground and combined with the superior mobility provided by horses to surround and engage groups of Indigenous Australians, often proved successful. The Europeans also had to adapt their tactics to fight their fast-moving, often hidden enemies. Strategies employed included night-time surprise attacks, and positioning forces to drive the Aborigines off cliffs or force them to retreat into rivers while attacking from both banks.\n",
"The quality gap between locally manufactured guns and European arms continued to widen as new rapid advances in technology and mass production in Europe quickly outstripped the pace of developments in Asia. Important developments were the invention of the flintlock musket and mass production of cast-iron cannon in Europe. The flintlock was much faster, more reliable and more user-friendly than the unwieldy matchlock, which required one hand to hold the barrel, and another to adjust the match and pull the trigger.\n",
"Andrade goes on to question whether or not Europeans would have developed large artillery pieces in the first place had they faced the more formidable Chinese style walls, coming to the conclusion that such exorbitant investments in weapons unable to serve their primary purpose would not have been ideal. Yet Chinese walls do not fully explain the divergence in gun development as European guns grew not only bigger, but more effective as well. By 1490 the European gun had achieved the basic form it would take for the next three centuries, during which it would dominate the fields of warfare. The Classic Gun had arrived, and in the 1510s and 1520s when the Chinese encountered European firearms, they fully recognized they were superior to their own.\n",
"At the beginning of the 18th century, Europe had not yet dominated in the world economy on account of the fact that its military did not match that of Asia or of the Middle East. However, through organizing its economics and improving technology in industry, European countries took the lead as the most powerful nations in the late 18th century and remained in this position until late in the 20th century.\n",
"Chase's argument has also been criticized by Stephen Morillo for its reliance on a simple cause and effect analysis. Alternatively Morillo suggests that the major difference between Chinese and European weapon development was economic. Within Morillo's framework, European weapons were more competitive due to private manufacturing whereas Chinese weapons were manufactured according to government specifications. Although generally true, Peter Lorge points out that gun specifications were widespread in China, and ironically the true gun was first developed during the Song dynasty, when guns were the exclusive enterprise of the government, suggesting that the economics of production were less influential on gun development than assumed. In contrast, less innovation occurred during the Ming dynasty when most of production was shifted to the domain of private artisans. Andrade concludes that although the Chase hypothesis should not be discarded outright, it does not offer a full explanation for the stagnation of firearm development in China.\n"
] |
how does the wifi bridge work on my phone? | Hey that’s a neat feature!
I’m not intimately familiar with the specifics of your phone, but I do have a fair knowledge of networks. What’s most likely going on is exactly the same process that 4G hotspot uses, but only involving wifi.
Basically the laptop is connected to the phone via some means (hotspot wifi, Bluetooth or USB cable), on a private network just between those two devices.
Any traffic from the laptop destined for the interwebs hits the phone, and the phone performs some network address translation (NAT) before forwarding it on via the hotel wifi. As far as the hotel can see, you've only got one device (the phone) using their connection.
The phone keeps track of the data it forwards on so that when the response comes back, it knows to send it on through to the laptop.
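That bookkeeping is just a translation table. Here's a toy Python sketch of the idea — not a real network stack, and every name and address in it is invented for illustration:

```python
# Toy NAT table: the "phone" rewrites outgoing packets to use its own
# address, and remembers which inside device each public port maps to
# so responses can be forwarded back.

class TinyNat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}        # public port -> (private ip, private port)
        self.next_port = 40000

    def outbound(self, src_ip, src_port, dst):
        """Rewrite a laptop's outgoing packet to look like it came from the phone."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (src_ip, src_port)
        return (self.public_ip, public_port, dst)

    def inbound(self, public_port):
        """Look up which inside device a response should be forwarded to."""
        return self.table.get(public_port)

nat = TinyNat("10.0.5.23")                       # phone's address on the hotel wifi
pkt = nat.outbound("192.168.43.2", 51000, "example.com:443")
print(pkt)                                       # ('10.0.5.23', 40000, 'example.com:443')
print(nat.inbound(40000))                        # ('192.168.43.2', 51000)
```

The hotel network only ever sees the phone's address; the laptop's private address exists only in that table.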
As for the battery life - wifi uses a lot less power than 4G does, because the wifi base stations are much, much closer than cell towers.
"Bridge is an accessory to connect an Ethernet network to MoCA. It supports Ethernet (10/100/Gbit) and MoCA 2.0 (up to 450Mbit/s) connections. The Bridge is most often used to connect a whole-home TiVo DVR + Mini network to the household WAN/LAN router. It can also be used to add MoCA networking to TiVo DVRs that do not include it, such as the two-tuner Premiere and the Roamio.\n",
"A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows the connection of devices on a wired Ethernet network to a wireless network. The bridge acts as the connection point to the Wireless LAN.\n",
"Wireless bridging can connect a wired network to a wireless network. A bridge differs from an access point: an access point typically connects wireless devices to one wired network. Two wireless bridge devices may be used to connect two wired networks over a wireless link, useful in situations where a wired connection may be unavailable, such as between two separate homes or for devices which do not have wireless networking capability (but have wired networking capability), such as consumer entertainment devices; alternatively, a wireless bridge can be used to enable a device which supports a wired connection to operate at a wireless networking standard which is faster than supported by the wireless network connectivity feature (external dongle or inbuilt) supported by the device (e.g. enabling Wireless-N speeds (up to the maximum supported speed on the wired Ethernet port on both the bridge and connected devices including the wireless access point) for a device which only supports Wireless-G).\n",
"The bridge's cables are arranged on multiple vertical planes in a slight modification to the harp (parallel) stay arrangement. Main span cables are paired to anchor into the tower in a vertical plane while side span cables pair up to anchor in a horizontal plane such that four cables anchor in each tower at approximately the same elevation.\n",
"A network bridge is a computer networking device that creates a single aggregate network from multiple communication networks or network segments. This function is called network bridging. Bridging is distinct from routing. Routing allows multiple networks to communicate independently and yet remain separate, whereas bridging connects two separate networks as if they were a single network. In the OSI model, bridging is performed in the data link layer (layer 2). If one or more segments of the bridged network are wireless, the device is known as a wireless bridge. \n",
"The bridge has an emergency layby equipped with SOS phone. Traffic CCTV and Variable Message Sign (VMS) are installed at all locations along the bridge. The bridge carries a Tenaga Nasional 132kV power cable.\n",
"The concept of Bridge WIM was first proposed by Moses in the United States. It fell into disuse but re-emerged in Europe in the 1990's. The disadvantage of Bridge WIM is that it requires a bridge to be present at the location of interest. An advantage is that it is portable - the same system can be moved between bridges in a matter of hours. The concept of Bridge WIM is that the bridge flexes under the weight of the passing truck. Truck axle weights are found by minimizing the sum of squared differences between the theoretical and measured responses. Strain transducers are generally used to measure the bridge response but other responses are possible including deflection. Users of Bridge WIM claim similar levels of accuracy to the best of the other WIM technologies though there are few tests that provide direct comparisons. Since it was first developed, many innovations have been proposed to improve accuracy. One of the more complex is a dynamic version known as Moving Force Identification though this poses practical challenges for calibration. One of the more significant other innovations is the development of Nothing-On-Road (NOR) or Free-of-Axle-detector (FAD) systems which allow installation to take place without access to the road surface.\n"
] |
why are fire extinguishers placed in a glass case with a handle, yet in case of fire the glass has to be broken | Because some idiot would open it and play with it. Having to break the glass means that you're only going to get the extinguisher out if it's a real emergency.
"Fire glass leaves no trace of ash, soot, grease or discernible odor when used as a medium. Flames produced using natural gas do not produce any smoke, produce less toxic gases and leave no trace of residual pollutants such as tar within the home. The combination is considered an eco-friendly burning solution. Additionally, fire glass is often made from recycled glass, making for a \"green\" fire media option.\n",
"Fire glass is tempered glass manufactured as a medium to retain and direct heat in fireplaces and gas fire pits. Fire glass does not burn, but retains heat and refracts light as a result of burning gas. Fire glass, like artificial logs and stones, is additionally used to obscure the gas plumbing inherent in gas fireplaces or stoves.\n",
"In a similar vein, when a glass rod was put lightly in contact with dried woodchips, the rod would burn the wood and cause it to smoke, or if pressed against a woodchip, it would quickly burn through the chip, leaving behind a charred hole. All the while the glass rod remained cool, with the heating confined to the tip. When a glass rod is pressed lightly against a glass plate, it etches the glass plate, while if it is pressed, it bores right through the plate. Microscopic examinations showed that the debris given off includes finely powdered glass and globules of molten glass.\n",
"Fireman knock-out glazing panels are often required for venting and emergency access from the exterior. Knock-out panels are generally fully tempered glass to allow full fracturing of the panel into small pieces and relatively safe removal from the opening.\n",
"The glass-enclosed central stacks (not accessible to the public) can be flooded with a mix of Halon 1301 and Inergen fire suppressant gas if fire detectors are triggered. A previous system using carbon dioxide was removed for personnel safety reasons.\n",
"Only safe tempered glass, the strength of which is a lot higher than the strength of ordinary glass, is used in production of heatable glass. When the hardened glass is destroyed there are safe splittings. Also the current-carrying coating loses its integrity and the automatic fuse, which turns off the power supply of the glass, is activated. The electrodes are placed inside the lamination and no one can reach them without destruction of the product.\n",
"Heat-strengthened glass can take a strong direct hit without shattering, but has a weak edge. By simply tapping the edge of heat-strengthened glass with a solid object, it is possible to shatter the entire sheet. \n"
] |
how can encryption methods be open source? | It is a generally accepted security precept that "The enemy knows the system." That is, when you are designing an encryption algorithm (or any security measure) that you assume the enemy knows its design. You assume that, eventually, an adversary will get a hold of the algorithm and therefore A) you cannot rely on the secrecy of the algorithm for security; and B) your algorithm should be secure despite general awareness of it.
The strength of encryption methods lies in the keys (and, for asymmetric methods, the inherent mathematical difficulty of reversing the encryption process).
For example, the most secure encryption method, the One Time Pad (OTP), has an extremely simple algorithm. ALL of the security is invested in how the key is created, used, and kept secret.
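To make the point concrete, here is a minimal OTP sketch in Python. The algorithm really is just XOR — everything hard about the OTP (generating a truly random key as long as the message, using it once, and keeping it secret) sits outside this code:

```python
# One-time pad sketch: XOR each message byte with a key byte.
# This is illustrative, not a production cipher.
import secrets

def otp_encrypt(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "key must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

# XOR is its own inverse, so decryption is the exact same operation.
message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # fresh random key, used once
ciphertext = otp_encrypt(message, key)
assert otp_encrypt(ciphertext, key) == message
```

Publishing this algorithm costs the scheme nothing: without the key, the ciphertext is consistent with every possible plaintext of the same length.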
"Attribute-based encryption (ABE) can be used for log encryption. Instead of encrypting each part of a log with the keys of all recipients, it is possible to encrypt the log only with attributes which match recipients' attributes. This primitive can also be used for broadcast encryption in order to decrease the number of keys used.\n",
"Encryption is a method in which data is rendered hard to read by an unauthorized party. Since encryption methods are created to extremely hard to break, many communication methods either use deliberately weaker encryption than possible, or have backdoors inserted to permit rapid decryption. In some cases government authorities have required backdoors be installed in secret. Many methods of encryption are also subject to \"man in the middle\" attack whereby a third party who can 'see' the establishment of the secure communication is made privy to the encryption method, this would apply for example to the interception of computer use at an ISP. Provided it is correctly programmed, sufficiently powerful, and the keys not intercepted, encryption would usually be considered secure. The article on key size examines the key requirements for certain degrees of encryption security.\n",
"Encryption, as defined above, refers to a subset of cryptographic techniques for the protection of information and computation. The normative value of encryption, however, is not fixed but varies with the type of cryptographic method that is used or deployed and for which purposes. Traditionally, encryption (cypher) techniques were used to ensure the confidentiality of communications and prevent access to information and communications by others than intended recipients. Cryptography can also ensure the authenticity of communicating parties and the integrity of communications contents, providing a key ingredient for enabling trust in the digital environment.\n",
"Encryption software is software that uses cryptography to prevent unauthorized access to digital information. Cryptography is used to protect digital information on computers as well as the digital information that is sent to other computers over the Internet.\n",
"It is possible to construct a dynamic encryption system, from known ciphers (such as AES, DES, etc.), such that all encryption algorithms generated from this system are at least as secure as the static underlying cipher.\n",
"Attribute-based encryption is a type of public-key encryption in which the secret key of a user and the ciphertext are dependent upon attributes (e.g. the country in which he lives, or the kind of subscription he has). In such a system, the decryption of a ciphertext is possible only if the set of attributes of the user key matches the attributes of the ciphertext.\n"
] |
Was Philippe Pétain unjustly criticized by the new French Republic? | It wasn't just about the fact that he had surrendered to the Germans that the French were pissed off about. Pétain and his government, shortly after the surrender, took a vote to reorganise the Third Republic into the French State, an authoritarian (and more importantly an extremely collaborationist) regime which is better known nowadays by the name of Vichy France.
This government aided in the rounding up of Jews and other "undesirables", and within its colonies it actively resisted Allied forces. Fascist elements within that selfsame government then began attempting to turn France into a much more conservative country than the Third Republic had ever been. They wanted a less secular and liberal society, preferring instead an authoritarian Catholic one. Pétain himself supported the movement, though he said he disliked the name it used, "National Revolution."
Censorship was imposed within France, and freedom of speech and thought repressed with the reinstatement of "Felony of Opinion". Pétain himself said that: "The new France will be a social hierarchy... rejecting the false idea of the natural equality of men".
Pétain went on to allow the creation of a Vichy organised militia known as the "Milice" to suppress the Maquis, in particular its Communist factions.
After the occupation of southern France, Pétain became a mere figurehead for what was no longer even a pretense of an independent government in Vichy. When the liberation of France came in September 1944, the Vichy government was relocated to Germany, where it became a government in exile. By this point, however, Pétain refused to take part in it any longer, and its running was taken over by Fernand de Brinon.
On the 26th of April 1945 Pétain returned to France via Switzerland to face his accusers. The trial ran from the 23rd of July to the 15th of August 1945, the main charge being treason.
In short, while he may not have merited a death sentence (which was commuted only because of his age), Pétain was far from an innocent man. He had run an authoritarian, collaborationist regime that had sought to overturn many of the values the French had held dear since their revolution in 1789. He was not on trial for the surrender, but for everything that came after it.
"Philippe is best known for his role in the 2004 Haiti Rebellion which overthrew the government of Jean-Bertrand Aristide due to, in part, allegations of election fraud in the 2000 parliamentary elections and other issues. Philippe's involvement can be traced back to 2000 when he was forced to flee to the Dominican Republic after taking part in a failed coup attempt against the first administration of Rene Preval. He had been a police chief in Cap-Haïtien when he was accused again of masterminding another coup attempt against the Aristide government in December 2001, which he denies any involvement in but proof would point otherwise. Throughout 2001-2004 Philippe is said to have worked the rebels that were running a \"contra\" war in the Plateau Central assassinating Lavalas officials and family members. When unrest/insurgency turned to rebellion in 2004, Philippe publicly announced that he was joining with coup forces and quickly took a leadership role, which he shared with co-leader Louis-Jodel Chamblain, who is considered a notorious war criminal by some. After Aristide was removed from the country in a US registered plane, Philippe and his army put down their guns in favor of the UN peacekeeping force. He has also been accused of drug dealing, and Aristide supporter group claim he is a covert CIA spy, recruited by an agent in Haiti to start the coup. It has been reported that he had secret meetings with opposition groups of Aristide in the Dominican Republic and also with a CIA agent.\n",
"Returning to France in 1819, he resumed the struggle against the ultra-royalist party with such temerity that he was condemned to one year's imprisonment in 1821 and fifteen months imprisonment in 1827. After the revolution of July 1830 he refused a pension of 6000 francs offered to him by King Louis Philippe, on the ground that he wished to retain his independence even in his relations with a government which he had helped to establish.\n",
"On reaching France, Sonthonax countered by accusing Louverture of royalist, counter-revolutionary, and pro-independence tendencies. Louverture knew that he had asserted his authority to such an extent that the French government might well suspect him of seeking independence. At the same time, the French Directoire government was considerably less revolutionary than it had been. Suspicions began to brew that it might reconsider the abolition of slavery. In November 1797, Louverture wrote again to the Directoire, assuring them of his loyalty but reminding them firmly that abolition must be maintained.\n",
"The second Philippe government was formed following scandal among ministers during the first Philippe government. La République En Marche! (REM) allies Democratic Movement (MoDem) were facing scandal following allegations that the party used EU funds to pay party workers. Armed Forces Minister Sylvie Goulard was the first to step down, resigning on 20 June 2017. The following day, Minister of Justice François Bayrou and European Affairs Minister, Marielle de Sarnez stepped down. Richard Ferrand, Minister of Territorial Cohesion, stepped down on 19 June 2017 following \"Le Canard Enchaîné\" publishing allegations of nepotism on 24 May 2017. Macron defended Ferrand despite the allegations and public polling showing that 70% of respondents wanted Ferrand to step down. On 1 July 2017, a regional prosecutor announced that authorities had launched a preliminary investigation into Ferrand. Ferrand responded to the allegations saying everything was \"legal, public and transparent\". He was one of the founding members of La République En Marche! and is currently serving as President of the National Assembly.\n",
"Despite his popularity with many Parisians at the beginning of his reign, Louis-Philippe almost immediately faced fierce opposition from those who wanted to replace the monarchy with a republic and press for radical social reforms; opposition was strongest among students, the working class and members of the new socialist movement. The first riot took place in December 1830, after the trial of the ministers of King Charles X; the crowd was furious that they were given life sentences instead of the death penalty. More riots took place in 1831 to protest a memorial service held at the church of Saint-Germain-l'Auxerrois for the Duke of Berry, a prominent monarchist who had been assassinated on 14 February 1820 during the reign of King Louis XVIII. The interior of the church was pillaged, and the next day, the rioters attacked the church of Notre-Dame de Bonne-Nouvelle and the palace of the archbishop of Paris, next to the cathedral of Notre-Dame. The archbishop's residence was badly damaged and ultimately demolished.\n",
"Louis-Philippe's regime was finally overthrown in the French Revolution of 1848, though the subsequent French Second Republic was short-lived. In the 1848 Revolution, Friedrich Engels published a retrospective in which he analyzed the tactical errors which led to the failure of the 1832 uprising, and drew lessons for the 1848 revolt. The main strategic deficit, he argued, was the failure to march immediately on the centre of power, the Hôtel de Ville.\n",
"In 1792, during the French Revolution, he changed his name to Philippe Égalité. Louis Philippe d'Orléans was a cousin of Louis XVI and one of the wealthiest men in France. He actively supported the Revolution of 1789, and was a strong advocate for the elimination of the present absolute monarchy in favor of a constitutional monarchy. He voted for the death of king Louis XVI; however, he was himself guillotined in November 1793 during the Reign of Terror. His son Louis Philippe d'Orléans became King of the French after the July Revolution of 1830. After him, the term Orléanist came to be attached to the movement in France that favored a constitutional monarchy.\n"
] |
How come in physics we can round off numbers willy-nilly? | Someone asked a very similar question the other day and it got a lot of good answers, so I'd recommend you [read those answers](_URL_0_) (I linked directly to my favorite response). But here's my quick take on it. Numbers in math are ideal. If I ask you to solve the equation x^2 = 2, the answer is exactly sqrt(2), even though that has an infinite number of decimals - it's 1.41421356... where the ... means it keeps going on forever. If you round, it's usually because you don't have enough space for the full answer (in the case of sqrt(2), it's because you don't have an infinitely long sheet of paper).
In physics or other experimental sciences, you can only ever measure a quantity to some finite accuracy. So if a measurement you make is only accurate to the third decimal place, then if it comes out with the number 1.414, you can't just write 1.4140000, because the real answer might be 1.41414131, or 1.41402492, or anything else. When you manipulate that number - multiplying it by something else, for instance - then you can only keep up to 3 decimal places, because your measurement was only ever good for that many to begin with, and it wouldn't be honest to pretend that you can get a more accurate answer. | [
"One early way of producing random numbers was by a variation of the same machines used to play keno or select lottery numbers. These mixed numbered ping-pong balls with blown air, perhaps combined with mechanical agitation, and used some method to withdraw balls from the mixing chamber (). This method gives reasonable results in some senses, but the random numbers generated by this means are expensive. The method is inherently slow, and is unusable for most computing applications.\n",
"Provided that the random numbers picked in step 2 above are truly random and unbiased, so will the resulting permutation be. Fisher and Yates took care to describe how to obtain such random numbers in any desired range from the supplied tables in a manner which avoids any bias. They also suggested the possibility of using a simpler method — picking random numbers from one to \"N\" and discarding any duplicates—to generate the first half of the permutation, and only applying the more complex algorithm to the remaining half, where picking a duplicate number would otherwise become frustratingly common.\n",
"Doing a Fisher–Yates shuffle involves picking uniformly distributed random integers from various ranges. Most random number generators, however — whether true or pseudorandom — will only directly provide numbers in a fixed range from 0 to RAND_MAX, and in some libraries, RAND_MAX may be as low as 32767. A simple and commonly used way to force such numbers into a desired range is to apply the modulo operator; that is, to divide them by the size of the range and take the remainder. However, the need in a Fisher–Yates shuffle to generate random numbers in every range from 0–1 to 0–\"n\" pretty much guarantees that some of these ranges will not evenly divide the natural range of the random number generator. Thus, the remainders will not always be evenly distributed and, worse yet, the bias will be systematically in favor of small remainders.\n",
"Numbers juggling is the art and sport of keeping as many objects aloft as possible. 7 or more balls or rings, or 5 or more clubs is generally considered the threshold for numbers. Traditionally, the goal has been to \"qualify\" a number, that is, to get the pattern around twice such that each object has been thrown and caught twice. A newer generation of jugglers tends to value a \"flash\", which is to throw and catch each object only once. Since a flash is much less difficult than a qualifying run, there will be numbers flashed but not yet qualified. For example, the current world records are: Balls/Beanbags − 11 qualified, 14 flashed; Rings − 10 qualified, 13 flashed; and Clubs/Sticks − 8 clubs qualified, 9 sticks flashed, 9 clubs flashed.\n",
"Number Scrabble (also known as Pick15 or 3 to 15) is a mathematical game where players take turns to select numbers from 1 to 9 without repeating any numbers previously used, and the first player to amass a personal total of exactly 15 wins the game. The game is isomorphic to tic-tac-toe, as can be seen if the game is mapped onto a magic square.\n",
"The problem of ensuring randomness using mechanical means was hard to resolve. In the early 1930s, Robert McKay proposed an ingenious machine containing a chamber with 52 balls of different diameters (for each player, there were 13 balls with the same size). Like in a lottery machine, the balls would be shaken and randomly chosen by driving them one by one into a wheel with 52 slots. This wheel would then rotate, slot by slot, and a rod in contact with the ball would \"detect\" its diameter. A distribution mechanism could then use the diameter information and take the appropriate action to deal the card to the correct player.\n",
"Balls typically have numbers all over their outer edges. The numbers on balls used in number lottery games (except the EZ2 Lotto), are read on the spot without the need of touching them. In the digit lottery games and the EZ2 Lotto with top drawing Mega Gems, the balls are adjusted to clearly show the numbers drawn. Because of the nature of the Power Lotto Mega Gem, each of the methods mentioned were applied in each of the machine's two chambers.\n"
] |
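The significant-figures bookkeeping described in the answer above can be sketched in a few lines of Python. This is an illustrative sketch, not a standard-library facility; the helper name `round_sig` is my own:

```python
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    # Shift the rounding position by the number's order of magnitude.
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

# A measurement reliable to four significant figures:
measured = 1.414            # the true value might be 1.41402... or 1.41414...
doubled = measured * 2.0    # arithmetic cannot add precision the measurement lacks
print(round_sig(doubled, 4))        # only four figures of the product are honest
print(round_sig(1.41421356, 4))     # rounding the "ideal" sqrt(2) the same way
```

The point of the helper is that it rounds by significant figures rather than by decimal places, which is what the measurement's accuracy actually limits.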
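The modulo-bias problem raised in the Fisher–Yates passage above can also be shown concretely. This is a sketch assuming a C-style generator with a small `RAND_MAX`; the names `rand_int` and `fisher_yates` are illustrative:

```python
import random

RAND_MAX = 32767  # as small as some C libraries allow

def rand_int(n):
    """Uniform integer in [0, n) by rejection sampling.

    A naive `raw % n` systematically favors small remainders whenever
    (RAND_MAX + 1) is not a multiple of n; redrawing when `raw` falls
    in the uneven tail removes that bias.
    """
    limit = (RAND_MAX + 1) - (RAND_MAX + 1) % n
    while True:
        raw = random.randint(0, RAND_MAX)  # stand-in for C's rand()
        if raw < limit:
            return raw % n

def fisher_yates(items):
    """In-place Fisher-Yates shuffle built on the unbiased draws."""
    for i in range(len(items) - 1, 0, -1):
        j = rand_int(i + 1)  # uniform over 0..i inclusive
        items[i], items[j] = items[j], items[i]
    return items
```

With truly uniform draws in each range, every permutation of the input comes out equally likely, which is the property the naive modulo approach quietly breaks.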
Is Stephen Hawking still relevant? | His work on black hole thermodynamics and his effort to merge quantum mechanics and gravity, though maybe done more convincingly by Unruh, are still relevant and very important.
He's a very bright man and, in addition to doing some very good research, dedicated his life to becoming one of the greatest public educators in the history of science. You do him a great disservice by underestimating the importance of this contribution.
| [
"Hawking achieved commercial success with several works of popular science in which he discusses his own theories and cosmology in general. His book \"A Brief History of Time\" appeared on the British \"Sunday Times\" best-seller list for a record-breaking 237 weeks. Hawking was a Fellow of the Royal Society (FRS), a lifetime member of the Pontifical Academy of Sciences, and a recipient of the Presidential Medal of Freedom, the highest civilian award in the United States. In 2002, Hawking was ranked number 25 in the BBC's poll of the 100 Greatest Britons.\n",
"However, Hawking has also expressed dissatisfaction regarding the impact on his notoriety caused by his appearance in the episode. In a debate with physicist Brian Cox in \"The Guardian\", Hawking was asked what the most common misconception about his work was. He replied, \"People think I'm a \"Simpsons\" character.\" Writing for \"The Daily Telegraph\", Peter Hutchison argued that Hawking \"feels he is sometimes not properly recognised for his contribution to our understanding of the universe.\" In his book \"The book is dead: long live the book\", Sherman Young wrote that most people know Hawking from his appearance on \"The Simpsons\", rather than from anything he has written.\n",
"Hawking was an atheist and believed that \"the universe is governed by the laws of science\". He stated: \"There is a fundamental difference between religion, which is based on authority, [and] science, which is based on observation and reason. Science will win because it works.\" In an interview published in \"The Guardian\", Hawking regarded \"the brain as a computer which will stop working when its components fail\", and the concept of an afterlife as a \"fairy story for people afraid of the dark\". In 2011, narrating the first episode of the American television series \"Curiosity\" on the Discovery Channel, Hawking declared:\n",
"Physicist Marcelo Gleiser, reviewing the book for NPR, writes: \"Stephen Hawking is one of those rare luminaries whose life symbolizes the best humanity has to offer ... [his book is one] every thinking person worried about humanity's future should read ... If there is a unifying theme across the book, it is Hawking's deep faith in science's ability to solve humanity's biggest problems ... His answers to the big questions illustrate his belief in the rationality of nature and on our ability to uncover all its secrets. His optimism permeates every page ... Although Hawking touches on the origin of the universe, the physics of black holes and some of his other favorite topics, his main concern in this book is not physics. It's humanity and its collective future ... Focusing his attention in the book on three related questions — the future of our planet, colonization of other planets, and the rise of artificial intelligence — he charts his strategy to save us from ourselves ... Only science, Hawking argues, can save us from our mistakes ... Hawking believes that humanity's evolutionary mission is to spread through the galaxy as a sort of cosmic gardener, sowing life along the way. He believes ... that we will develop a positive relation with intelligent machines and that, together, we will redesign the current fate of the world and of our species.\"\n",
"Hawking is a BBC television film about Stephen Hawking's early years as a PhD student at Cambridge University, following his search for the beginning of time, and his struggle against motor neuron disease. It stars Benedict Cumberbatch as Hawking and premiered in the UK in April 2004.\n",
"Hawking also maintained his public profile, including bringing science to a wider audience. A film version of \"A Brief History of Time\", directed by Errol Morris and produced by Steven Spielberg, premiered in 1992. Hawking had wanted the film to be scientific rather than biographical, but he was persuaded otherwise. The film, while a critical success, was not widely released. A popular-level collection of essays, interviews, and talks titled \"Black Holes and Baby Universes and Other Essays\" was published in 1993, and a six-part television series \"Stephen Hawking's Universe\" and a companion book appeared in 1997. As Hawking insisted, this time the focus was entirely on science.\n",
"Hawking's first year as a doctoral student was difficult. He was initially disappointed to find that he had been assigned Dennis William Sciama, one of the founders of modern cosmology, as a supervisor rather than noted Yorkshire astronomer Fred Hoyle, and he found his training in mathematics inadequate for work in general relativity and cosmology. After being diagnosed with motor neurone disease, Hawking fell into a depression; though his doctors advised that he continue with his studies, he felt there was little point. His disease progressed more slowly than doctors had predicted. Although Hawking had difficulty walking unsupported, and his speech was almost unintelligible, an initial diagnosis that he had only two years to live proved unfounded. With Sciama's encouragement, he returned to his work. Hawking started developing a reputation for brilliance and brashness when he publicly challenged the work of Fred Hoyle and his student Jayant Narlikar at a lecture in June 1964.\n",
] |
What are the implications of America leaving the Iran deal? | A lot of oil companies that were in the process of investing in Iran for production purposes won't be able to continue if the sanctions go back into effect.
Iran will be free to restart progress toward its goal of making a nuclear weapon.
Our allies in the deal will not be able to trust us to keep up our end of any new deal that's made.
Basically, the original deal may not have been the greatest, but it is a hell of a lot better than having no deal in place. | [
"On 8 May 2018 the United States officially withdrew from the agreement after Trump signed a Presidential Memorandum ordering the reinstatement of harsher sanctions. In his May 8 speech Trump called the Iran deal \"horrible\" and said the United States would \"work with our allies to find a real, comprehensive, and lasting solution\" to prevent Iran from developing nuclear arms. The IAEA has continued to assess that Iran has been in compliance with JCPOA and that it had \"no credible indications of activities in Iran relevant to the development of a nuclear explosive device after 2009\" Other parties to the deal stated that they will work to preserve the deal even after the US withdrawal.\n",
"On 14 July 2015, the Joint Comprehensive Plan of Action (JCPA, or the Iran deal) was agreed upon between Iran and a group of world powers: the P5+1 (the permanent members of the United Nations Security Council—the United States, the United Kingdom, Russia, France, and China—plus Germany) and the European Union. The Obama administration agreed to lift sanctions on Iran that had devastated their economy for years, in return Iran promised to give up their nuclear capabilities and allow workers from the UN to do facility checks whenever they so please. President Obama urged US Congress to support the nuclear deal reminding politicians that were wary that if the deal fell through, the US would reinstate their sanctions on Iran.\n",
"In October 2017, Sanders said that \"the worst possible thing\" the United States could do was undermine the Iran nuclear deal if it was \"genuinely concerned with Iran's behavior in the region\" and that the president's comments against the deal had isolated the US from foreign allies that had retained their commitment to the agreement.\n",
" – President Barack Obama said a \"historic understanding\" had been reached with Iran, and pointed out that the deal with Iran is a good deal if the deal could meet core objectives of the United States. 150 Democratic House members signaled that they supported reaching a deal, enough to sustain a Presidential Veto. A majority of Congress including all Republicans and some Democrats opposed the deal.\n",
"On 8 May 2018, U.S. President Donald Trump announced that the United States would withdraw from the Iran nuclear deal. Following the U.S. withdrawal, the EU enacted an updated blocking statute on 7 August 2018 to nullify US sanctions on countries trading with Iran. U.S. sanctions came into effect in November 2018 intended to force Iran to dramatically alter its policies in the region, including its support for militant groups in the region and its development of ballistic missiles.\n",
"Benjamin Netanyahu, who called the Iran nuclear deal a \"historic mistake\", told President Barack Obama that Israel was under increased threat because of the deal and said in a statement, “In the coming decade, the deal will reward Iran, the terrorist regime in Tehran, with hundreds of billions of dollars. This cash bonanza will fuel Iran’s terrorism worldwide, its aggression in the region and its efforts to destroy Israel, which are ongoing.” Many conservatives in the United States claimed the deal would usher a financial windfall for Iranian sponsored groups in the Middle East that pose a threat to Israel including Hezbollah and Hamas. In addition the lack of focus on Iran's ballistic missile program and the lifting of weapons embargoes was also viewed as a peril for Israel. President Donald Trump criticized what he viewed as the deal's \"near total silence on Iran's missile programs.”\n",
"The \"National Review\" wrote that the U.S. administration's unwillingness to acknowledge any Iranian noncompliance had left the Iranians in control, and that the deal was undermining international security by emboldening Iran to act as a regional hegemon, at the expense of U.S. influence and credibility.\n"
] |
How was gamma radiation discovered to be a photon, not a neutron? | Gamma rays were discovered in 1900 by Paul Villard while he was studying radiation from radium; Rutherford named them in 1903. They were indeed electrically neutral, but in 1914 they were observed to reflect from crystal surfaces, behavior characteristic of electromagnetic waves, and their measured wavelengths turned out to be similar to (but shorter than) those of X-rays. That established them as photons rather than massive particles; the neutron itself wasn't discovered until 1932. | [
"Photon radiation is called gamma rays if produced by a nuclear reaction, subatomic particle decay, or radioactive decay within the nucleus. It is otherwise called x-rays if produced outside the nucleus. The generic term photon is therefore used to describe both.\n",
"Gamma (γ) radiation consists of photons with a wavelength less than 3×10⁻¹¹ meters (greater than 10¹⁹ Hz and 41.4 keV). Gamma radiation emission is a nuclear process that occurs to rid an unstable nucleus of excess energy after most nuclear reactions. Both alpha and beta particles have an electric charge and mass, and thus are quite likely to interact with other atoms in their path. Gamma radiation, however, is composed of photons, which have neither mass nor electric charge and, as a result, penetrates much further through matter than either alpha or beta radiation.\n",
"A gamma ray, or gamma radiation (symbol γ or formula_1), is a penetrating electromagnetic radiation arising from the radioactive decay of atomic nuclei. It consists of the shortest wavelength electromagnetic waves and so imparts the highest photon energy. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation \"gamma rays\" based on their relatively strong penetration of matter; he had previously discovered two less penetrating types of decay radiation, which he named alpha rays and beta rays in ascending order of penetrating power.\n",
"Gamma rays were first thought to be particles with mass, like alpha and beta rays. Rutherford initially believed that they might be extremely fast beta particles, but their failure to be deflected by a magnetic field indicated that they had no charge. In 1914, gamma rays were observed to be reflected from crystal surfaces, proving that they were electromagnetic radiation. Rutherford and his co-worker Edward Andrade measured the wavelengths of gamma rays from radium, and found that they were similar to X-rays, but with shorter wavelengths and (thus) higher frequency. This was eventually recognized as giving them more energy per photon, as soon as the latter term became generally accepted. A gamma decay was then understood to usually emit a gamma photon.\n",
"The gamma ray may transfer its energy directly to one of the most tightly bound electrons, causing that electron to be ejected from the atom, a process termed the photoelectric effect. This should not be confused with the internal conversion process, in which no gamma-ray photon is produced as an intermediate particle.\n",
"In gamma-ray astronomy, gamma-ray bursts (GRBs) are extremely energetic explosions that have been observed in distant galaxies. They are the brightest electromagnetic events known to occur in the universe. Bursts can last from ten milliseconds to several hours. After an initial flash of gamma rays, a longer-lived \"afterglow\" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave and radio).\n",
"The convention that EM radiation that is known to come from the nucleus, is always called \"gamma ray\" radiation is the only convention that is universally respected, however. Many astronomical gamma ray sources (such as gamma ray bursts) are known to be too energetic (in both intensity and wavelength) to be of nuclear origin. Quite often, in high energy physics and in medical radiotherapy, very high energy EMR (in the 10 MeV region)—which is of higher energy than any nuclear gamma ray—is not called X-ray or gamma-ray, but instead by the generic term of \"high energy photons.\"\n"
] |
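The gamma-ray cutoff figures quoted in the context above (wavelength below roughly 3×10⁻¹¹ m, i.e. about 10¹⁹ Hz and ~41 keV) are tied together by E = hf = hc/λ. A quick sanity check, using standard values of the constants:

```python
# Check the quoted gamma-ray cutoff via E = h*f = h*c/lambda.
H = 4.135667696e-15    # Planck constant, eV*s
C = 2.99792458e8       # speed of light, m/s

wavelength = 3e-11                 # meters
frequency = C / wavelength         # hertz; comes out near 1e19
energy_kev = H * frequency / 1e3   # photon energy in keV; near 41

print(f"{frequency:.2e} Hz, {energy_kev:.1f} keV")
```

The computed energy lands close to the 41.4 keV quoted in the passage (the quoted figure corresponds to rounding the frequency to exactly 10¹⁹ Hz).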
Could anything man-made trigger the Yellowstone supervolcano? | All of these answers assume there is currently eruptible magma down there in the first place. While there is certainly magma, we don't know its connectivity, its viscosity, or its internal pressure. | [
"Supervolcano is a 2005 British-Canadian disaster television film that originally aired on 13 March 2005 on BBC One, and released by the BBC on 10 April 2005 on the Discovery Channel. It is centered on the speculated and potential eruption of the volcanic caldera of Yellowstone National Park. Its tagline is \"Scientists know it as the deadliest volcano on Earth. You know it...as Yellowstone.\"\n",
"The loosely defined term \"supervolcano\" has been used to describe volcanic fields that produce exceptionally large volcanic eruptions. Thus defined, the Yellowstone Supervolcano is the volcanic field which produced the latest three supereruptions from the Yellowstone hotspot; it also produced one additional smaller eruption, thereby creating the West Thumb of Yellowstone Lake 174,000 years ago. The three supereruptions occurred 2.1 million, 1.3 million, and approximately 630,000 years ago, forming the Island Park Caldera, the Henry's Fork Caldera, and Yellowstone calderas, respectively. The Island Park Caldera supereruption (2.1 million years ago), which produced the Huckleberry Ridge Tuff, was the largest, and produced 2,500 times as much ash as the 1980 Mount St. Helens eruption. The next biggest supereruption formed the Yellowstone Caldera (~ 630,000 years ago) and produced the Lava Creek Tuff. The Henry's Fork Caldera (1.2 million years ago) produced the smaller Mesa Falls Tuff, but is the only caldera from the Snake River Plain-Yellowstone hotspot that is plainly visible today.\n",
"Although fascinating, the new findings do not imply increased geologic hazards at Yellowstone, and certainly do not increase the chances of a 'supereruption' in the near future. Contrary to some media reports, Yellowstone is not 'overdue' for a supereruption.\n",
"In 2005, a BBC/Discovery docudrama entitled Supervolcano was released on cable television. The drama imagines the reaction of the Yellowstone Volcano Observatory to a super eruption at the Yellowstone Caldera. Producer Ailsa Orr credits YVO scientists as inspiration for the film's three primary characters. The YVO Scientist-in-Charge reflected on the hype associated with volcanism at Yellowstone in a 2005 magazine article.\n",
"The last full-scale eruption of the Yellowstone Supervolcano, the Lava Creek eruption which happened approximately 640,000 years ago, ejected approximately of rock, dust and volcanic ash into the sky.\n",
"Wendy Reiss, the undersecretary of FEMA, visits and asks Rick about the worst-case scenario if Yellowstone does release a super eruption. He shows her a model, revealing devastating results of the ash fall over the US.\n",
"The USGS are swarmed with calls, to which all say that the eruption is not imminent. Rick and Ken argue about Ken's appearance on TV, which Rick passes off as Ken creating a mass panic in order to sell his book. Their argument is interrupted by Fiona Lieberman, Rick's wife and Ken's sister. She reflects that Rick is cautious of what he says in public due to the press convincing people Yellowstone was going to erupt due to the discovery of a bulge in Yellowstone Lake some years before.\n"
] |