Intumescent coatings provide exceptional passive fire protection for both metal and timber structures. These coatings have become exceptionally popular in school buildings and in properties managed by housing associations, thanks to their versatility and cost effectiveness. But what actually are intumescent coatings? Typically an intumescent coating provides an appearance similar to a paint finish. While the temperature remains ambient these coatings remain stable; however, when the temperature rises, such as during a fire, a chemical reaction occurs. This reaction causes the coating to expand to many times its original thickness, creating a foam-like insulating layer, known as char, which protects the underlying surface from the fire.

Metal substrates: When used in this application intumescent coatings are designed to insulate the metal, preventing it from reaching the critical temperature at which structural integrity becomes compromised.

Timber substrates: Timber is far more susceptible than metal to the spread of flames and heat propagation along its surface. In this application intumescent coatings aim to reduce both flame spread and heat propagation.

What is the process by which intumescent coatings provide protection? The heat from the fire begins to soften the polymeric binder. This causes an acid to be released from the ammonium polyphosphate contained within the coating. Carbonisation of the polyols then commences. The blowing agent within the coating, melamine, begins to decompose; as it does, the gas produced expands and swells the molten mixture. As the chemical reaction comes to its conclusion the foamed char solidifies, as part of a cross-linking reaction, in order to maintain the insulation. Intumescent coatings can potentially expand to up to 100 times their original thickness, but achieving such an effect depends strongly on selecting the right formulation components and observing the precise matching process.
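To put the expansion figure in concrete terms, the sketch below shows the simple relationship between the applied dry film thickness and the char layer it can form. It is a minimal illustration in Python: the up-to-100x ratio comes from the text above, while the 1 mm coat and the 50x ratio in the example are purely hypothetical values.

```python
# Minimal sketch: expanded char thickness as a function of applied dry film
# thickness (DFT) and the formulation's expansion ratio. The ~100x upper
# bound is the figure cited in the text; the example inputs are hypothetical.

def char_thickness_mm(dft_mm: float, expansion_ratio: float) -> float:
    """Expanded char thickness = dry film thickness x expansion ratio."""
    if dft_mm <= 0 or expansion_ratio < 1:
        raise ValueError("DFT must be positive and the ratio at least 1")
    return dft_mm * expansion_ratio

# A hypothetical 1 mm coat at a 50x ratio yields a 50 mm insulating char;
# at the cited 100x upper bound the same coat would yield 100 mm.
for ratio in (50, 100):
    print(f"1.0 mm DFT at {ratio}x -> {char_thickness_mm(1.0, ratio):.0f} mm char")
```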
Dear Dr. Roach: Are there any medical conditions that can lead to loss of fingerprints? Dear Anon.: There are several skin conditions that can lead to loss of fingerprints, with nonspecific dermatitis leading the list, according to a recent study. Other causes identified were primary hyperhidrosis, irritant contact dermatitis, atopic dermatitis, dyshidrotic eczema, psoriasis and mechanical abrasion. Criminals have tried countless methods to change or remove fingerprints, without much success. However, a journal article in 2017 noted that individuals treated with the cancer chemotherapy drug capecitabine (usually used for breast or colon cancer) may have a side effect called “hand-foot syndrome,” which sometimes can lead to loss of fingerprints. Most people don’t notice it, unless their fingerprints are necessary for international travel or for security documents (and now, even your phone). As fingerprints are used more often, it’s important to know about this potential side effect. I am confident few criminals will take toxic doses of chemotherapy in hopes of this unusual side effect, which is only occasionally permanent.

Dear Dr. Roach: Can a stent be used in an artery that is 80 percent clogged? I am in good health, but at age 87, I do not want to have invasive surgery. Dear E.F.S.: In someone who is 87, using a stent generally is preferable to surgery. In the stent procedure, the artery in the heart is opened by placing a catheter in the heart (usually through the femoral artery in the groin) and using a balloon or a cutting device to open up the blockage, which is a combination of cholesterol, fibrous tissue and calcium. Even if an artery is 99 percent blocked, it usually can be opened via the catheter. The stent then is placed to help keep the artery open. Some stents are bare metal; others, called “drug-eluting stents,” have medication embedded in the stent lining, which elutes (is released) over months. Only a cardiologist can say whether an individual lesion is amenable to stenting and what the best technique is. Far fewer bypass surgeries — that is, open heart surgeries, where a clogged artery is bypassed using a blood vessel taken from elsewhere in the body — are performed now than 10 or 20 years ago. This is because of better medical treatment and because of advances in both the procedure and the materials used.
Major challenges and crises in global health will not be solved by the health sector alone; rather, they require a multidisciplinary, evidence-based analytical approach to prevention, preparedness and response. One such potential crisis is the continued spread of nuclear weapons to more nations, concurrent with an increased volatility of international relations that has significantly escalated the risk of a major nuclear weapon exchange. This study argues for the development of a multidisciplinary global health response agenda based on current political analysis of nuclear risk, research evidence suggesting higher-than-expected survivability, and the potential for improved health outcomes based on medical advances. To date, the medical consequences of such an exchange are not credibly addressed by any nation, despite recent advances. While no one country could mount such a response, an international body of responders, organized in the same fashion as the current World Health Organization global health workforce initiative for large-scale natural and public health emergencies, could enlist and train for just such an emergency. A Nuclear Global Health Workforce is described for addressing the unprecedented medical and public health needs to be expected in the event of a nuclear conflict or catastrophic accident. The example of addressing mass casualty nuclear thermal burns outlines the potential triage and clinical response management of survivors enabled by this global approach.

The humanitarian health professions learned many hard lessons from the Ebola epidemic in West Africa. Despite previous warnings from the 2003 SARS pandemic and the passage of the International Health Regulations Treaty in 2005 (updated in 2009), poor Treaty compliance, lack of financial support, and leadership failures ultimately led to delays in recognition of and response to the Ebola epidemic. Post-Ebola appeals for a World Health Organization (WHO) led global health workforce resulted in 2014 in the development of country-supported international and national emergency medical teams (EMTs) for rapid and sustained response to sudden onset natural disasters (SoDs), public health emergencies of international concern (PHEICs) and conflict-related complex humanitarian emergencies (CHEs). In 2015, amid the increasing risk of a nuclear crisis with unmitigated and shared components of both CHEs and PHEICs, Burkle and Dallas argued for the inclusion within the WHO agenda of a nuclear global health workforce capacity and operational framework. This study, based on current multidisciplinary analysis of geopolitical risk, scientific nuclear survivability research, and medical advances in both triage management and the acute and chronic care of nuclear thermal burns, as one clinical example, argues that life-saving opportunities are possible if rapidly deployed, justifying the development of a multidisciplinary nuclear global health workforce.

As writers and watchdogs on nuclear risks and the health outcomes of nuclear war, we are worried. From a scientist’s standpoint, general ignorance of the medical and public health realities of nuclear tragedies has never been at such a dangerously high level. Since the 1980s and the Chernobyl disaster, at least 7 individual nuclear accidents, including the Fukushima Daiichi tragedy and the Russian K-84 submarine accident, have occurred.
The breakup of the former Soviet Union led to less security over fissile materials and the “undetected smuggling of weapons-usable nuclear material or how much was diverted or stolen since”, which may support global terrorist groups with the stated aim of utilizing this technology. The current “conventional wisdom is that a dirty bomb is far more likely than a conventional bomb”. Rather than moving toward a “nuclear free world”, the U.S. finds itself embarking on a $1 trillion nuclear weapons “modernization program” to counter increasing Russian provocations. Every other nuclear power worldwide has either increased the size of its arsenal or modernized it. While there has been an overall decline in the number of nuclear weapons worldwide, the steady expansion of nuclear stockpiles to more nation states (including highly aggressive and arguably unstable ones), the continuous modernization of arsenals and a concurrent decline in regional stability, especially in the South Asian subcontinent, provide an increasingly dangerous flashpoint for nuclear war. Although the total number of nuclear weapons reached a peak in the 1980s, more countries now possess them. Risks of a nuclear tragedy are significantly building despite vigorous attempts at slowing proliferation.

India and Pakistan fought four wars over the four decades preceding their acquisition of nuclear weapons. The religious animosity is deep-seated, having lasted for over a thousand years. Today, while both nations have over a hundred nuclear weapons, all of them in the Hiroshima size range (15-20 kT yield), they are rapidly expanding their nuclear stockpiles. Pakistan is adding 20 weapons per year, the fastest rate of proliferation worldwide, while India is poised to achieve thermonuclear weapons (>100 kT yield). Others regard Pakistan’s 2015 plan to deploy tactical short-range, low yield nuclear weapons (less than 5 kT) as “problematic and risky”, arguing that such weapons “lower the threshold for nuclear weapon use”. This argument began in the new century with the suggestion that these “low yield” weapons could be used in conventional conflicts, blurring the distinction between nuclear and conventional war. Critics claim that “what smaller does is to make the weapon more thinkable”. In addition to the intense rivalry with Pakistan, India also perceives a growing threat from dramatic increases in the military stature of China. This is further exacerbated by the highly publicized nuclear activities of Iran, which, after IAEA-documented research efforts in nuclear weapon development, today has the capability to develop a nuclear weapon. There is, therefore, rapid and accelerating nuclear proliferation in the South Asian subcontinent, where neither India nor Pakistan has signed the Comprehensive Nuclear Test Ban Treaty. Analysts worldwide are concerned about the stability of the Pakistani nuclear arsenal, fuelled by growing tensions between its military and civilian leadership, tribal and radical Islamic warfare, and the acknowledged presence of followers of these groups in the Pakistani government. The increasingly violent warfare between Sunni and Shiite Islamic forces in the Middle East and South Asia has led to anxiety over the acquisition of nuclear weapons by wealthy and influential Sunni Saudi Arabia from Sunni Pakistan, to counter the pursuit of nuclear weapons capability by Shiite Iran.
Aside from the Cold War build-up in the Northern hemisphere, the steady increase in the number and size of South Asian nuclear arsenals in this crowded corner of the planet is unprecedented. Globally, “modernization” still means nuclear proliferation, and it is now worldwide. An unnamed official from the Obama administration explained, “Why is it that we’re trying to prevent the Pakistani government from collapsing? Because we fundamentally believe that we cannot afford a country with 80-100 nuclear weapons becoming the Congo…there is a sense that other places in the world can go to hell, but not this one”. Emerging conditions that mimic Cold War mindsets should make us increasingly wary. Russia’s military exercises, its explicit threats against other countries and especially its vigorous nuclear modernization program reflect a conviction that strategic nuclear forces are indispensable for security and for status as a great power. Today, nuclear weapons and what they symbolically represent vary significantly from the regional power rivalries of the past. Most worrisome is that the possession of advanced nuclear weaponry, once steadfastly controlled by arguably rational regimes, has taken on more religious significance in countries where such weapons are venerated both as a national strength and as a right, or as a “currency of power to which many countries still aspire.” How such religious fervor might influence and justify their use has not been adequately researched, but that should not keep the topic from being integrated into existing deliberations if the risk is to be mitigated.

It is also discouraging that American attitudes regarding nuclear weapons use, the so-called “nuclear taboo”, have changed since the post-WWII era, when public opinion supported an aversion to the use of nuclear weapons. Current “instincts of the U.S. public, when facing our worst foes” have shifted, with a “sizeable segment of the American public” feeling “an attraction to our most destructive weapons, not an aversion”. Previous “commitment to the immunity of civilians from deliberate attack in wartime, even with vast casualties, is shallow”. Nuclear negotiators rarely mention the dire health outcomes, nor have the tragic health consequences ever been used as an agenda item or an incentive in nuclear non-proliferation negotiations. These facts suggest that denial, a protective mechanism in most humans, is massive across cultures, even among seasoned health care providers, when it comes to contemplating the possibility of a nuclear tragedy.

Recent scientific research helps dispel the thinking, dating from the 1950s, that nuclear conflicts would result in very few survivors. Today’s research supports the view that a nuclear war, even a catastrophic duel between Iran and Israel, would result in an extraordinary number of surviving radiation, trauma and thermal burn patients receiving almost no medical care or even pain relief [15–18]. One of the most pervasive perceptions left over from the Cold War is the assumed inevitability of high mortality and injury rates from nuclear war, in high proportion to the overall targeted population. This was due to the potential for large numbers of high yield nuclear devices (perhaps even thousands of weapons, of hundreds and in many cases thousands of kilotons each) being used in a nuclear conflict between the superpowers.
Known as Mutual Assured Destruction (MAD), it proved to be a powerful deterrent to the use of nuclear weapons during the decades after the Second World War. However, it also left a remarkably persistent perception that the use of nuclear weapons inevitably results in a high proportion of casualties among the population of the area attacked, to the extent that medical and public health personnel, military planners, political thought leaders, policy makers and other decision makers continue to operate under this assumption. Today the U.S. Department of Homeland Security (DHS) considers the most likely scenario for a nuclear weapon detonation in the U.S. to be a 10 kT weapon in an urban area. It is also widely assumed that a single nuclear weapon detonation is the most likely nuclear event, with several of these relatively smaller weapons being used in a short period of time as the next most likely event. Put in a strictly quantitative framework, this means that it is incumbent on health and civil services to have sufficient emergency preparedness for a situation in which a cumulative nuclear attack of 10 to 50 kT occurs. This is strikingly different from the MAD scenario of a total 5,000,000 kT attack (1000 kT times 5000 weapons, still only a fraction of the total arsenal of our opponents in the Cold War). In this illustrative example, that is a 100,000-fold difference, or 5 orders of magnitude. Clearly, even allowing for some considerable contraction of this difference, the remarkable disproportion between the health care impact of a Cold War exchange and that of the current threat must be acknowledged.

Looking closely at the detailed health outcome of a 20 kT detonation in Washington, D.C., better brings into focus the reality of the current threat, as was done in a U.S. Senate Homeland Security Hearing given by co-author (CD) and at which the current Secretary of Defense was also a speaker. In short, the high lethality and morbidity in the first mile or two around the blast area give way to a narrow slice of area where actual radiation-induced casualties occur in a plume outside this inner zone, which covers only a small part of the city. In the first 750 m (12 psi) virtually all buildings will be destroyed by blast, mass fires are common and prompt radiation doses are fatal except in basements, resulting in very few survivors. Between 750 and 1250 m the peak overpressure decreases from 12 to 5 psi, with walls blown out of buildings. Debris will be tens of feet thick in most downtown areas with ten-story-plus buildings. Roughly half of the population in this relatively limited area will be fatalities, mainly from collapsing buildings, with the other half injured. Most of those surviving will have been exposed to a fatal dose of prompt radiation, though death will occur first due to mass fires or third degree burns. Between 1250 m and 1750 m peak overpressures will fall from 5 psi to near 3 psi, and burn thresholds towards the edge of this zone will drop from third degree to second degree levels. At 1900 m, or 3 psi, large numbers of trauma injuries would ensue from walls blown out of steel-framed buildings, severe residential damage and people caught in the open. By 2000 m burn risk will drop to first degree levels. At up to 3800 m, or 1 psi, people will be endangered by flying glass and debris from damaged structures, and glass will break out to over 6 km.
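The concentric zones just described can be summarised in a simple lookup. The following Python sketch encodes the distances, overpressures and effects exactly as given in the passage; it is an illustration of the text's figures, not a validated weapons-effects model.

```python
# Sketch of the concentric damage zones described above for a ~20 kT urban
# detonation. Boundaries and effects are taken directly from the text; the
# lookup is illustrative, not a validated weapons-effects model.

ZONES = [
    # (outer radius in metres, peak overpressure, summary of expected effects)
    (750,  ">=12 psi", "near-total building destruction, mass fires, fatal prompt radiation"),
    (1250, "12-5 psi", "walls blown out, deep debris; ~50% fatalities, many survivors fatally irradiated"),
    (1750, "5-3 psi",  "burn threshold falls from third to second degree toward the edge"),
    (2000, "~3 psi",   "mass trauma from failed walls, severe residential damage, people caught in the open"),
    (3800, "3-1 psi",  "first degree burn levels tapering off; flying glass and debris injuries"),
    (6000, "<1 psi",   "window breakage out to roughly this range"),
]

def zone_effects(distance_m: float) -> str:
    """Return the text's description for a given distance from Ground Zero."""
    for outer_radius, overpressure, effects in ZONES:
        if distance_m <= outer_radius:
            return f"{overpressure}: {effects}"
    return "outside the directly affected radius (fallout plume excepted)"

print(zone_effects(900))   # falls in the 12-5 psi zone
print(zone_effects(4500))  # falls in the window-breakage range
```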
The very high deposition rates of radioactive particles that occur in the first km from the detonation drop off rapidly in magnitude, along with the resulting radiation-induced casualties, with wind dispersion. This description, however, accounts for only about 1 mile across of an urban area, with the rest of the sprawling urban landscape of U.S. cities unaffected, with the exception of a relatively narrow plume of casualties induced by radioactive fallout that can extend out for several miles. However, most people think of these effects as extending for 10, 20 or more miles across an urban landscape in all directions. Using a clock-face analogy, the much-feared health effects from the radioactive plume for a 10- or 20-kT detonation with wind coming from the West would roughly approximate only a 2:00 to 4:00 region for several miles, with the rest of the clock-face outside the 1–2 mile radius around Ground Zero completely unaffected in terms of direct health care consequences due immediately to the nuclear weapon detonation. There would be extensive indirect and potentially preventable effects due to power grid loss; lack of access to food, water, sanitation and shelter; access to and availability of healthcare and medicines; increased crime; and the health issues related to internally displaced people, especially vulnerable populations. The healthcare-affected population in the Washington 20 kT scenario would entail a daunting task for response, but would actually involve only a fraction of the total population.

Nuclear war casualties can in this context be simplified to thermal burn, trauma, radiation casualties and combined injuries. Defining the mortality and surviving populations is inexact, but because of the large difference between the reasonably assumed mortality and the known unaffected areas, the error in these assumptions is not great. Thermal casualties in total would number about 40,000, with 25,000 of these being the more survivable first degree burns. The combined fallout- and blast-affected populations (dead and surviving injured) would number about 220,000. However, 50,000 of these are in the fallout zones where mortality is less than 10%. Another 60,000 of the blast casualties are in the 1-2 psi zone and most would be expected to survive. Therefore, in this rough approximation there is a combined thermal burn, trauma, and radiation affected population of 260,000. Assuming 70% survival among the first degree burns (0.7 × 25,000 = 17,500) and the 1-2 psi zone trauma patients (0.7 × 60,000 = 42,000), and 90% survival in the low fallout zones (0.9 × 50,000 = 45,000), subtracting these survivors from the total affected population leaves approximately 100,000 deaths. The 70% figure is an estimate of survival for the combined injury of 1-2 psi blast trauma (based on earthquake survival rates) together with first degree burns. Most of the deaths would be due to the trauma, not the first degree burns, which are expected here to exacerbate the trauma injury severity. It also leaves over 160,000 surviving injured likely requiring acute and chronic assistance. When one considers that the total metropolitan population of Washington, D.C., is over 6 million, this means that about 4% of the population of metropolitan Washington is likely to be affected by the most likely nuclear device to be used (10-20 kT), with 1.5% mortality and 2.5% injuries requiring assistance.
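The survivor arithmetic above is easy to mislay in prose, so here it is reproduced as a short Python calculation. All counts and survival fractions are the authors' rough approximations from the passage; the 6.5 million metropolitan population is an assumed figure, chosen only because it is consistent with "over 6 million" and with the stated 1.5% and 2.5% outcomes.

```python
# Reproduces the rough survivor arithmetic stated above for the 20 kT
# Washington, D.C. scenario. All figures are the authors' approximations.

first_degree_burns = 25_000   # most survivable thermal casualties
low_psi_trauma     = 60_000   # casualties in the 1-2 psi blast zone
low_fallout        = 50_000   # fallout zones where mortality is under 10%

survivors = (
    0.7 * first_degree_burns   # 17,500
    + 0.7 * low_psi_trauma     # 42,000
    + 0.9 * low_fallout        # 45,000
)
print(f"survivors in the three most survivable groups: {survivors:,.0f}")

# Headline figures as stated in the text; the 6.5M metro population is an
# assumption consistent with "over 6 million" and the quoted percentages.
deaths, injured, metro_pop = 100_000, 160_000, 6_500_000
print(f"mortality: {deaths / metro_pop:.1%}")  # ~1.5%
print(f"injured: {injured / metro_pop:.1%}")   # ~2.5%
```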
While 1.5% mortality in a population is a devastating experience for any society, it is dramatically different from the common expectations one sees in the general population, or even in educated medical circles, where MAD conditioning has left the impression of much higher mortality. It is incumbent on responsible medical and public health planners to acknowledge this discrepancy and put into practice (and support financially and with qualified personnel) the appropriate response to deal with what is a much more manageable, though still daunting, mass casualty medical response than previously acknowledged. The ICRC reminds us that for more than four decades during the Cold War, preparedness drills were regularly conducted, shelters maintained and anti-nuclear protests took place. Two generations later, with most people born after the end of the Cold War, the level of awareness of the risks to humanity is much lower, especially in the U.S. but also worldwide. Any one of a number of increasingly feasible regional conflicts among the growing list of nuclear powers, including increasingly aggressive ones more likely to use the weapons than in the past (India/Pakistan, Sunni/Shiite, Iran/Israel, Russia/NATO, North Korea), would expose an almost complete lack of preparedness of any of these nations to help the people who survive it.

Nuclear tragedies result in the loss of basic command-and-control; mass surgical and trauma casualties suffering blast injuries; 1st, 2nd and 3rd degree thermal burns; radiation contamination; multiple lacerations, orthopedic and crush injuries; severe combined effect injuries; and major psychosocial outcomes. In addition to the fact that surviving physicians, nurses and emergency responders would be overwhelmed, most would lack the skills and the tools to treat the unprecedented number of thermal burn and radiation victims. What are primarily “Injury Zones” would in a short time become “Death Zones”, making the medical differences between the two zones almost meaningless. This is obviously unacceptable to any observer, and demands a response from rational planners. A nuclear event will result in an unprecedented mass casualty incident that first produces large scale direct mortality and morbidity requiring the rapid deployment of mobile, self-contained, self-sufficient health care facilities, and subsequently the more indirect health outcomes resulting from a destroyed essential public health infrastructure and the daily protections it affords society. Hauer discusses the increasing likelihood of nuclear terrorism and how ill prepared the U.S. and other nations are to respond, asserting that “in most multi-casualty incidents, there are one or two triage points where victims are taken for assessment and routing to treatment. Following a nuclear detonation in a major city with a vast geographical footprint there could be need of 25 to 75 such triage points or more”. Deployed emergency health facilities must have the capacity and capability to bring life-saving care as close to the frontline of the tragedy as is safely possible. In a series of 10 focus group discussions with ED physicians and nurses conducted in the USA in 2008, study participants consistently stated that neither EDs nor hospital facilities were sufficiently prepared for a terrorist attack involving radioactive materials; they not only expressed a need for more information but also strongly disagreed with some of the existing response guidelines.
Recommended requirements for centrally coordinated mobile and fixed initial triage and dose-monitoring facilities, designed to identify, assess, transfer, decontaminate, and move casualties efficiently to survivor or palliative care facilities, include: (1) Nuclear Triage Centers, (2) Nuclear Survival Centers, (3) Nuclear Palliative Care Centers, and (4) Health System Support Centers (see the sketch at the end of this passage). Given the vast numbers of casualties that will occur, operationally all emergency medical teams would become resource scarce or constrained in a short period of time, requiring an unprecedented logistical resupply system that does not exist today. Existing International Atomic Energy Agency and WHO assets (field-level guidelines, nuclear triage and management plans, worksheets and technical monitoring standards for radiation overexposure) are available; however, altered standards of care will be practiced at every level on a daily basis.

As an example of the potential impact of a Nuclear Global Health Workforce on constructing an effective medical response, the very difficult subset of nuclear war mass casualty thermal burn response is addressed. Over the decades since atomic bombs struck Hiroshima and Nagasaki, one of the most common photographic reminders of these horrific events has been that of chronic keloid disfigurements resulting from thermal burns; reminders of both the suffering and the survivability of nuclear war. Surviving medical professionals would be expected to be limited to treating lacerations and fractures, leaving those with severe thermal burns to die slowly of infection. It should be recognized that radiation burns also occur after a nuclear detonation, where a skin burn is caused by ionizing radiation rather than thermal energy, and that the overwhelming majority of these “burn” victims do not survive. In contrast, the thermal burn victims include a large number of surviving patients who can greatly benefit from appropriate medical intervention. This is in fact a fairly common mistake in nuclear detonation discussions, where radiation and thermal burn statements are made interchangeably. From a triage point of view the distinction is greatly simplified by the fact that the very large radiation dose required to generate a radiation burn ensures that the patient is unlikely to survive, while fairly low exposures to heat energy will create a surviving thermal burn patient. To avoid the confusion that sometimes occurs in “burn” discussions related to a nuclear detonation, all subsequent burn discussion here refers to thermal burns, not radiation burns.

Jeng and colleagues emphasize that triage decisions need to be made carefully, allowing the focus of limited personnel and equipment on those most likely to survive. With current triage approaches, it is highly unlikely that the very large number of thermal burn victims would receive any meaningful medical treatment, suggesting that a lower-tech approach could save many lives and would involve training armies of non-specialists in surgical debridement and the administration of burn medicines. Burn treatment as a high-tech endeavour would only be pursued in well-equipped hospital units where each patient gets constant attention from several specialists. Yet society recognizes legal, ethical and moral expectations and obligations that, during crises, triage plans exist to treat as many victims as possible who have a chance of survival.
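To make the four-facility architecture listed at the start of this passage concrete, here is a hypothetical Python sketch of casualty flow. The facility names are taken from the text; the routing rule, and the idea that Health System Support Centers back up the response rather than receive casualties, are illustrative assumptions.

```python
# Hypothetical sketch of casualty flow through the four recommended facility
# types. Facility names come from the text; the routing logic is an
# illustrative assumption, not an operational triage doctrine.

from enum import Enum

class Facility(Enum):
    TRIAGE = "Nuclear Triage Center"               # identify, assess, decontaminate
    SURVIVAL = "Nuclear Survival Center"           # definitive care for the survivable
    PALLIATIVE = "Nuclear Palliative Care Center"  # comfort care for the expectant
    SUPPORT = "Health System Support Center"       # sustains the response system itself

def route_from_triage(judged_survivable: bool) -> Facility:
    """After assessment at a Triage Center, casualties move onward to either
    survivor or palliative care facilities, as the text describes."""
    return Facility.SURVIVAL if judged_survivable else Facility.PALLIATIVE

print(route_from_triage(True).value)   # Nuclear Survival Center
print(route_from_triage(False).value)  # Nuclear Palliative Care Center
```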
When triage is performed in accordance with accepted medical practice, it is both sanctioned and recognized by law in most countries. Because unrealistic triage results in unacceptable death rates among those who should survive, triage plans must be well thought out and well designed. In fact, medical providers are held legally accountable for the triage process where there is rule of law, but the process itself cannot ensure either treatment or survival. In 1986, Leaning described in detail the unique challenges health care providers face in the triage and treatment of casualties from nuclear war. She stressed that “reference to precedent may admit sources of serious error, since in terms of scale of effects, this disaster would depart fundamentally from anything the world has yet experienced”, further observing that “whereas radiation injury must receive substantial attention, a synergism exists between burn and blast and the triage category of ‘expectant’ may extend over a wide range of injury, including many who might in other, less-stressed circumstances be assigned greater chances of survival”. Kumar and Jagetia contend that it will be very difficult to save victims with burns over more than 30% of total body surface area (TBSA) combined with exposure to more than 4 Gy of ionising radiation; most available medical resources should instead be diverted to treating less severe and moderate combined injuries. Coleman and colleagues provide realistic and thoughtful accounts of the current capacity, capability and future requirements necessary to sort, assess, treat, triage, monitor, recover and rehabilitate casualties and provide public health protections after a nuclear attack [31–33]. They emphasize the criticality of triage in resource-poor settings, involving extensive preplanning, stating in 2016 that “an understanding of the requirements for biodosimetry, triage and treatment decisions, and massive public health and medical response informs and modifies the over-all large-scale operational response”.

While the skill sets needed in a nuclear war medical response are unfortunately lacking in most existing healthcare communities, it is entirely feasible that they could be gained through appropriate training protocols, especially in international cooperative venues such as mass casualty exercises, which require extensive planning and preliminary training. Therefore, in order to address these demanding resource needs, it would be of high utility to mobilize, in a coordinated manner, the existing civilian and military healthcare personnel not only of the immediately affected nation but also of cooperating nations. A critical step toward this end would be to incorporate retired healthcare personnel and the mature voluntary aid societies into well organized training paradigms and especially mass casualty management exercises with the existing healthcare sector. This would meet the twin needs of greatly expanding the number of qualified healthcare participants and of adding the critical missing skill sets, such as dosimetry and mass casualty triage and treatment, to this expanded population. Since it is established that a single nuclear event would exceed the capacity of any single nation to respond, this also enables the necessary volume, as well as appropriate quality, of healthcare resources to be brought together in a viable, organized fashion.
We suggest that effectively operationalizing such an endeavor requires a nuclear global health workforce organized, trained and supervised by the larger global network of governmental and non-governmental resources. The management of burns has improved dramatically over the last 50 years, to the extent that in a fully resourced modern burns centre in a high income country, patients with burns of up to 80% TBSA regularly survive. The developments that have led to these improvements have been stepwise and summative, and include early fluid resuscitation, early excision and grafting, management of sepsis, management of inhalation injury, nutritional support, and the use of cadaveric allograft and artificial dermal templates. In addition, the creation of well trained, dedicated multidisciplinary burn teams and advances in rehabilitation, scar management and psychosocial support have meant that not only has survival improved, but so has quality of life after burn injury. However, it is highly unlikely that these gold standard modern burn management practices would be available after a nuclear disaster. There are already enormous discrepancies between what is available in a high income country and in a resource poor one. Access to burn services, and outcomes from burn injury, in most of the world are poor, with a lack of trained staff, negligible pre-hospital care, transport difficulties and insufficient materials – in other words, not dissimilar to the situation after a nuclear incident. The response to a medically overwhelming number of burn casualties after the release of a nuclear device cannot therefore be limited to existing burns services, which, if not destroyed in the initial explosion, would be totally overwhelmed within minutes. A complete paradigm shift in thinking and preparedness planning is therefore needed.

As noted previously, the most likely nuclear weapons to be used in the near future are the relatively smaller (Hiroshima/Nagasaki-sized) devices, which produce not only far fewer casualties than thermonuclear devices, but also a much smaller proportion of thermal burns among the total injuries. Triage is clearly critical and has to be realistic, based on survivability and available resources (Table 1). The most impactful approach is likely to be simple, early management of the large number of relatively superficial flash burns, up to 30% TBSA and associated with minimal or no contamination or other injuries (unless very minor). Taking the example of a 20 kT explosion in a city such as Washington, D.C., the predicted number of people (of all ages) with superficial burns would be 25,000 – most likely dazed, confused, psychologically disturbed and in an environment where almost all of the infrastructure has been destroyed. Clearly these numbers are far beyond what can be managed by a single emergency medical team, but this cohort of patients is potentially amenable to either ‘self treatment’ or ‘buddy treatment’, and therefore one of the roles of the EMT would be to rapidly distribute essential treatment packages along with very simple instructions. In this scenario early oral resuscitation is likely to be more practical and achievable than intravenous resuscitation, although if there has been radiation exposure vomiting may be a problem [27, 34]. Analgesia should be available and instructions given on cleaning wounds and applying an antimicrobial ointment with closed dressings.
There are numerous modern silver-impregnated dressings suitable for superficial partial thickness injuries that, once applied, can be left for 5–7 days; these may well be more appropriate but are significantly more costly. Part of preparedness development would be identifying the exact contents of such ‘individual emergency burn packs’. The other large group of potentially survivable burns would be those with either larger-area superficial burns or deeper burns, most likely up to a maximum of 40% TBSA. In a low resource environment, patients with deep burns beyond this rarely survive, even in a moderately well equipped and staffed burns service. The second role of a burn-specific EMT would then be to deal with these patients. It is feasible within a field hospital setting to undertake excision and grafting of these patients, and early excision would be preferable as it not only improves outcomes and mortality in general, but also reduces infective complications. There have also been advances in human fibroblast-derived temporary skin substitutes, with potential applications in mass casualty burn treatment, as has been demonstrated in the treatment of fire-fighters’ thermal burns [36, 37]. The use of these skin substitutes might be feasible in the treatment of the potentially large number of burn victims in a nuclear scenario, as they may delay the inflammatory response in treated patients long enough to bridge the mass casualty transport and triage delays until more conventional and effective burn treatments can occur. If there has been concomitant low-level irradiation, the impact on the immune system will be delayed by 4–6 days, so ideally patients should be excised and grafted before this stage. With the potential of several thousand patients requiring early excision and grafting, it is clear that multiple EMTs would be needed; however, appropriate triage and rapid simple surgery could potentially save a significant number of these patients. Combined thermal and irradiation injuries have a worse prognosis, whilst pure radiation burns differ from thermal burns in that the radiation kills the basal layer of dividing cells, so desquamation is not seen for several days, and this type of injury usually results in superficial burns only. In summary, the tasks are to manage resuscitation, clean and dress wounds, and undertake early excision and grafting, alongside the rapid training of temporary burn care teams to assist with dressings, positioning, mobilising, hygiene, feeding, etc. (The triage thresholds quoted in this and the preceding passages are drawn together in the sketch below.)

In the event of only a fraction of the thermally injured surviving (say 10%), this would still equate to between 2,500 and 4,000 patients in the scenario described above. These patients would then need ongoing care and rehabilitation. Burns that have healed within 2 weeks are unlikely to need ongoing management beyond the application of simple moisturisers and advice on skin care and protection. However, deeper burns and those taking longer than 2 weeks to heal will need early rehabilitation, ideally starting with appropriate positioning and/or splinting, mobilisation and nutritional support. Again, it is clear that in the acute stages access to standard nursing and therapy care will not be possible and therefore alternative approaches will be required.
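The sketch below draws together the burn triage thresholds quoted in this section: Kumar and Jagetia's warning about burns over 30% TBSA combined with more than 4 Gy, the roughly 40% TBSA ceiling for deep burns in a low resource setting, and the use of self or buddy treatment packs for superficial flash burns of up to 30% TBSA. How these thresholds combine into a single decision function is an illustrative assumption, not clinical guidance.

```python
# Illustrative triage sketch combining thresholds quoted in this section:
# - >30% TBSA with >4 Gy: very difficult to save (Kumar and Jagetia)
# - deep burns above ~40% TBSA: rarely survivable in a low resource setting
# - superficial burns up to 30% TBSA: self/buddy care with emergency packs
# Combining them into one function is an assumption, not clinical guidance.

def triage_burn(tbsa_pct: float, deep: bool, dose_gy: float) -> str:
    if tbsa_pct > 30 and dose_gy > 4:
        return "expectant / palliative care"           # combined injury
    if deep:
        if tbsa_pct > 40:
            return "expectant / palliative care"       # beyond low-resource survivability
        return "EMT field hospital: early excision and grafting"
    if tbsa_pct <= 30:
        return "self/buddy care: emergency burn pack, oral resuscitation"
    return "EMT assessment: large-area superficial burn"

print(triage_burn(tbsa_pct=20, deep=False, dose_gy=0))  # buddy care
print(triage_burn(tbsa_pct=35, deep=True, dose_gy=5))   # expectant
```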
EMTs with appropriate training could create ‘temporary burn care teams’ from surviving casualties (and indeed patients with only minor burns) and provide simple instructions and demonstrations of key points with respect to nursing care and acute rehabilitation (focussing on preventing problems further down the line). Current burn mass casualty plans do not contemplate casualties of the magnitude that would inevitably result from a nuclear attack. Post-9/11 ‘bypass’ strategies have been replaced with the concept of ‘absorption’ strategies and the development of surge capacity. However, these still focus on ‘traditional’ concepts such as moving patients with serious burn injuries to a burns centre and keeping those less severely injured in outlying hospitals until beds become available in a burns centre. In the USA there are only 128 burn centers (with a total burn-bed capacity of 1,835), and only 43 burn centers are verified by the American Burn Association and the American College of Surgeons through a rigorous review program designed to determine whether a burn center’s resources are sufficient to provide optimal care for burn patients. In the 20 kT example given above there would be perhaps 15,000 patients with burns deeper than superficial, and thus potentially requiring surgery. Even if only 10% of these survived, they would still overwhelm the existing burn services (see the back-of-envelope check at the end of this section); thus a very different model of managing these cases to maximise potential survival must be seriously considered.

It is in the survivability consideration of the relatively smaller nuclear weapons mentioned earlier that a curious and unexpected incentive emerges for vigorously enacting a nuclear response training and exercise paradigm for mass casualty thermal burn and other protocols. As nuclear detonation events become more likely, it is ironically fortunate that single or small numbers of the smaller, Hiroshima-sized weapons are what can reasonably be expected, at least in the initial nuclear events over the next decade or so. The number of burn patients generated by these scenarios, while still catastrophic, would enable a credible response by a properly prepared global health workforce. An encouraging example toward this end is the WHO EMT initiative, which operates on a regional basis through WHO regional organizations and enables the mobilization of other resources outside that region, as would certainly be necessary in a nuclear event. As these resources are trained globally in a common network of terminology and procedure, they have a common basis for cooperation and combination of forces in a mass casualty crisis such as a nuclear event. This approach should therefore significantly help the medical and public health community to lead their respective communities into this future, and make it much less daunting than is currently perceived. Creating a nuclear global health workforce would thus be a critical first step in helping regional healthcare response entities to develop realistic new strategies to deal with the inevitable aftermath of a catastrophic casualty scenario after a nuclear event.

The risk of a nuclear event, either by accident or war, has increased. This comes as evidence of increased survivability from the most likely initial nuclear weapon events has emerged. Multidisciplinary prevention, preparedness and response, however, remain a major deficit.
Using the example of the triage and management of one of the more dreadful of its consequences, thermal burns, the authors strongly recommend the development of a WHO-sponsored nuclear global health workforce.
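As a back-of-envelope check on the capacity figures cited in the discussion above, the following sketch compares the 20 kT scenario's surgical burn caseload with the quoted U.S. burn-bed capacity. The 10% survival fraction is the text's own illustrative figure; the comparison ignores beds already occupied by everyday burn patients, which only strengthens the conclusion.

```python
# Back-of-envelope check using figures quoted in the text: ~15,000 patients
# with deeper-than-superficial burns in the 20 kT scenario, versus a total
# U.S. burn-bed capacity of 1,835 across 128 centers. Everyday occupancy of
# those beds is ignored here, which makes this an optimistic comparison.

deep_burn_patients = 15_000
survival_fraction = 0.10      # the text's illustrative "even 10%" figure
national_burn_beds = 1_835

surge_demand = deep_burn_patients * survival_fraction
print(f"surge demand: {surge_demand:,.0f} patients")
print(f"share of all U.S. burn beds: {surge_demand / national_burn_beds:.0%}")  # ~82%
```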
This is the story of Box Brewery, which operated in the Market Place between 1864 and 1924. The story of Joseph Pinchin's amazing journey to Australia, contributed by David Edwin Pinchin, his great-great-grandson, has never been told before. And we bring the story of the premises right up to the present.

Many people in Box will associate the name Pinchin with Box Brewery or Box Mill. They are one of the most prolific of local families, and the name Pinchin (possibly derived from pincers in old French) was relatively common in Wiltshire, Gloucestershire and Somerset during the 1800s. The family story is difficult to trace because locally there are Pinchins in a variety of occupational groups: farmers, brewers, maltsters and millers. This article focuses on the brewers in Box; the role of the family as millers and farmers will be dealt with in separate articles. The Pinchin family are reported to have been brewers since the 1500s, but this is anecdotal and I have found no reference. It has also been said that a branch of the family lived in Box from 1649 until the 1900s, based at Greystones, the family home. Again I have not been able to confirm these details. The earliest local reference to the name Pinchin that I have been able to find comes from the 1660s, when a member of the family gave evidence against Walter Bushnell, the vicar of Box, who had borrowed £100 from the family. In 1763 we have Tobias Pinchin, who reported the theft of a black gelding from a piece of ground in Box and offered a reward of 1 guinea for its return. These references tell us little about the family except that they had local roots, which is confirmed in 1795 when the remains of the late Mr William Pinchin of Box were interred at Ditteridge.

So far, we have no positive identification of brewers in the family. It has been claimed that the village brewery started after the Northgate Brewery closed in Bath and moved to Box about 1867 at the instigation of the owner, George Pinchin. So this is the obvious place to start our search for the Pinchin brewery. In 1851, in residence at Hatt House, Box, with all the trappings of wealth, lived George Pinchin, brewer and part owner of Northgate Brewery, Bath. At one time the business was owned by him and a partner, George Simms of Montebello House, Bathwick Hill, and was reputed to be the largest brewer of its kind in the South West. Later the company was called Messrs George Pinchin & Co of Bath. It was a long established business, and George appears to have bought into it sometime after 1809 but before 1821. George lived with his second wife, Margaret Lewis, and two daughters, Mary and Georgina, in some style at Hatt House. Unlike many manufacturing businesses, brewing was a respectable commercial activity for the landed class, and brewers were important local people. The family were very wealthy, employing a butler, cook, ladies' maid, two servants and a coachman, who lived with his own family in Hatt Cottage. They weren't village people originally: George was born in Conock near Devizes; his first wife, Mary Bethell, came from Bradford-on-Avon; and his second wife, Margaret, came from Llandewi, Pembroke. They were eminent local people, and a stained glass window was previously erected in the south aisle of Box Church: To the glory of God. In Memory of Mary the wife of George Pinchin of Hatt House in this parish and five of their children, 1857.
Later George, Margaret and family moved to Bathwick Hill House, Bath, where he died on 6 February 1861. George's estate at the time of his death was significant at about £4,000 (worth £40,000 in today's terms), but he had clearly made ample provision for Margaret, who continued to live for a further twenty years in some style on income derived from dividends. A fascinating report, some 60 years later, tells of the failure of the Northgate Brewery a couple of years after 1866, subsequent to George's death: It was to have been floated as a company and (sale) prospectuses were actually printed and ready for issue when the failure in 1866 of Overend, Gurney & Co, a big banking firm, upset financial arrangements. There must be some doubt, therefore, that Northgate Brewery moved from Bath to Box.

There is some evidence that the brewery was established in Box by another member of the family, David Rice Pinchin, son of Joseph Pinchin 1. So we have to back-track to Joseph and another branch of the Pinchin family. Joseph Pinchin 1 and his family had been in Colerne from at least the early 1800s. With his wife, Elizabeth, he was a very substantial farmer. They lived with five children: three boys, David Rice, another Joseph (who plays little part in the brewery story) and Henry; and two girls, Elizabeth and Ann. Joseph 1 was extremely wealthy. In July 1834 he offered a reward of £200 for information about a fire that he suffered in his rick-yard at Colerne. He died in 1836 and, although we don't know much more about him, we know more about the children. The second son, Joseph, was also a farmer, and in 1851 he and his sister Ann together farmed a massive estate of 408 acres of un-named land at Colerne. Henry married but died fairly young, and Mary of Hall Farm was almost certainly his widow. She farmed 270 acres at Hall Farm at Colerne in 1851. Elizabeth married John Brokenbrow, also a farmer, and had at least three daughters: Elizabeth, Mary and Mercy. However, it is David Rice Pinchin who becomes the focus of the Box Brewery story.

So we get to David Rice Pinchin, the originator of the brewery in Box. He got his start in life from his father's inheritance. In his father's will of 1836, Joseph 1 left to his eldest son, David Rice Pinchin, his premises in the Parish of Box, subject to the payment of £400 to his executors. The nature of those premises is not recorded, but possibly they were a brewery somewhere in Box. David Rice was a person of considerable wealth, sometimes called a Gentleman (signifying his status and respectability). In 1851 he was recorded as a large farmer occupying two farms at different parts of the (Box) parish. The 1840 Tithe Commutation record says that David Rice Pinchin rented 232 acres from John Fuller, which may have included Hatt Farm and outbuildings. We can go further and get a picture of a man who was not slow to go to law to protect his wealth. In 1851 he objected to paying tolls at Box Toll-gate, alleging that a load of hurdles was not a farming implement as claimed by the gate-keeper. By 1851 David Rice recorded himself as a brewer and had established a family dynasty with his two sons, William, a baker, and Peter, a brewer. As we shall see shortly, his son Joseph 2 had the most amazing history of all the Pinchin family, and one which altered the whole course of the Brewery story. David Rice Pinchin died in 1853, over a decade before the Market Place premises were built and seven years before George Pinchin's demise.
David Rice's will records the names of two beneficiaries other than his wife: his niece, Elizabeth Brokenbrow, and Mary Pinchin, widow of his late brother, Henry. The former received a small bequest, but to Mary he bequeathed some cottages and a plot of land. The will makes frequent reference to trustees, suggesting that the bulk of his estate was held outside his personal estate (the names of those so benefiting are not readily available). Although not directly named, his executors appear to have been two or more of his sons.

There is another way of tracing the origins of Box Brewery: through the site of a Box malt-house. The grain needed to produce beer had to be malted (allowed to germinate), usually requiring a large floor area on which to soak, then dry, the corn. A fairly large building was needed for this process. From the 1840 tithe apportionment details we are told that David Rice owned a garden behind a malt-house and that George Pinchin owned the malt-house grounds. So we need to trace where his maltings might have been situated. Living in Box in 1851 was Robert Harper Chubb with his family and three servants: a general domestic servant, a maltster and a gardener, in an unidentified house in the centre of the village. In 1865 three men, Robert Chubb, David Pinchin (David Rice's son) and Samuel Rowe Noble, sold their malt-house stables and malt-house gardens to the Bath Stone Firms. We can now identify the location of the maltings premises because the stables are now known as the Jubilee Centre and the gardens are the car park. This gives rise to the possibility that the maltings was also on the Jubilee Centre site, or that Box Brewery, across the road from the car park, was actually the site of the original malt-house, later re-built as a new brewery factory.

If the stonework above the main entrance is correct, Box Brewery opened in 1864. It has been suggested that the Northgate Brewery moved their business to Box Brewery when the Bath factory was closed and sold in the years 1866-68. As we have seen, this doesn't appear to be the case, because David Rice called himself a brewer in Box in 1851, well over a decade earlier. Nor is it feasible given the size of the operation. The Northgate Brewery was brewing on an industrial scale, supplying the 28 public houses which the company owned in Bath and exporting beer to Liverpool, London, Cornwall and Wales. As well as the brewery, they owned two Bath maltings, and their factory was a tourist attraction, justifying its mention in contemporary Bath travel guides. This was a very big business. In comparison, Box Brewery was a more humble operation. Although it is difficult to see because of subsequent road alterations, the brewery faced onto the main road as well as the Market Place. Local people can remember the early 1900s, when Vale View was the stables, with an archway for the horses onto the main road.

Now we have to backtrack a little to find the reason why one member of the family, Joseph 2, moved to Australia and established a new branch of Pinchins there, because it was his family who eventually returned to run the Box Brewery. It is puzzling why Joseph 2 should go to Australia at all. At first, the only connection I could find related to the Swing Riots of 1830, when a certain Joseph Pinchin was convicted of assisting the rioters in Wiltshire and sentenced by Special Commissioners to seven years' transportation. It seemed possible, but unlikely, that a member of a notable Box family should be involved in the riots, and the dates were tight.
Then I had an enormous amount of luck when I was contacted by David Edwin Pinchin, who was able to recount the amazing story of his ancestor's emigration in 1852 as part of the Australian Gold Rush. The Californian gold rush in the USA in 1849 had alerted people, and the discovery of gold in Australia in 1851 encouraged boat loads of people from Britain, Europe in general and places as unlikely as China, all of whom were determined not to miss out on the Australian strikes. They flocked to Victoria and New South Wales. Joseph travelled to Australia in 1852 with his brother Henry, who sadly died by drowning in the Broken River. They were travelling in the party of Thomas Woolner RA, a well-known sculptor associated with the Pre-Raphaelite Brotherhood movement. They also travelled with a Mr Vesey, who had been a candlemaker in Box. They were in Australia to find gold, as so many others were at that time.

We get glimpses of Joseph's life in Australia. In 1864 he married Ellen Jane Skeate in Victoria, whose maiden name was given to their son, Edwin Skeate Pinchin, when he was christened in 1869 in Omeo, Victoria State. In 1865 Joseph acquired property in Omeo comprising a total of 15 acres costing £15.8s.6d. It appears that some of the children may have returned to Britain in August 1871 (perhaps only for a visit) on the ship The Anglesey, when Edwin was four years old. Following the death of his wife and one of his children, Joseph 2 returned to the UK, arriving on 14 December 1885. It is possible that the family felt their father had been hard done by and wished to make amends.

The 1871 census records Peter Pinchin (b 1827), son of David Rice Pinchin, living at the Box Brewery and employing nine men and one boy. Peter was the brewery owner. His elder brother Francis was still farming Hatt Farm, but the size of the farm had reduced to 185 acres and the other brothers were no longer living there. Peter never married, and he served as the patriarch figure, gathering family members around him. In 1891 two nieces, Elizabeth and Anna, and a nephew, Edwin Skeate Pinchin, were living with him. They were all active in the family business: Edwin Skeate as assistant brewer, Elizabeth as housekeeper and Anna as account-keeper. Peter was in his seventies when he and Edwin Skeate were involved in a horse and trap accident at Bathford Hill in June 1900. The newspaper reported that Peter was severely hurt but we understand his condition is improving. By 1901 Peter had retired and his nephew Edwin Skeate Pinchin took over the business. Peter lived on until he died in 1906, leaving over £13,000.

By the turn of the century, the Box factory was providing beer to Somerset and Wiltshire. In 1904 a 9 gallon cask, the property of ES Pinchin of Box Brewery, was stolen as it was being delivered at Swainswick. Edwin Skeate married Charlotte Jane Browning in 1907 and they had moved to Kingsmoor House by 1911. Anna had died or married, leaving Elizabeth living in the Brewery House with one servant. Edwin Skeate Pinchin had two children with Charlotte: Mary and William Edwin, born May 1911. By 1915 Edwin Skeate Pinchin and his family were back living at Brewery House. When Edwin Skeate Pinchin died, aged 55, on 17 April 1924, it was a great shock to the village: The great loss sustained by the village in general at Mr Pinchin's decease, and the respect and esteem in which he was held, was shown by the large number of mourners who followed the hearse to the cemetery.
The funeral was attended by churchwardens, overseers, councillors, the Box choir and cricket club representatives, as well as Charlotte and their two children. There were more troubles. The family sold the brewery shortly after Edwin Skeate's death. The brewery was originally put up for auction by Tilley & Culverwell the same year, but although there was a good company present, the bidding failed to reach the reserve. Later in 1924 the business was sold to Ushers for £10,000. The sale included six public houses, none of which I have been able to identify with certainty, although papers among the Usher Archives held in the Swindon and Wiltshire Family History Centre have documents relating to the Bear.

Later the premises were taken over by Murray & Baldwin and became primarily a timber shop. During the Second World War, they made parts for Boulton Paul aircraft (possibly the wooden frames of the cockpit) and turrets for fighter planes like the Hawker Demon and the Defiant. There were various processes, from shaping the wood to gluing and fixing, using imported specialist Dundas timber. After the war the company turned its hand to utility furniture and wooden-framed tennis-racket stringing. It was still a large site, and the premises now occupied by Dave Hill's butcher's shop were the stringing or gluing factory. But the company ceased trading in 1954 and for eight years the site was unoccupied. In 1962 the firm of AH Dodd & Co moved down from London and started to tidy up the premises for use as a precision engineering machine shop. They cleaned up the site and re-laid the wooden machine shop floor with concrete to take the pressure of 1 ton engineering machines. Under the control of John Dodd and his father they produced a variety of precision parts for automatic controls, including valves for Potterton boilers, thermostat controls for radiators and specialist lights for jewellery shops. Their work required great accuracy, making small brass screws of 0.5mm length and 0.06 diameter. At one stage they employed a number of secretarial staff, including local girl Joan Woodgate. Much of the premises has now been converted into flats, but work continues in the engineering shop, still making small precision parts.

The last thought must go back to the Victorian brewery. For many years the Market Place was an industrial centre in Box, with a brewery and a malt-house which appears to pre-date the 1864 Box Brewery premises. It was an ideal location. Brewing requires large volumes of water, of which there is an ample supply in the Market Place from springs and a stream at the rear of the Jubilee Centre. Probably the business was supplied with barley and corn from Pinchin family farms in the area. In turn the malt-house may have sold malt to other, independent local public houses, many of which brewed their own beer. Box Mill and Box Brewery were both casualties of the depression of the 1920s, when they ceased to trade. It was a great blow to the village and affected many Box families who relied on them for employment in an industry other than the stone mines.

George Pinchin was born in Conock, Wiltshire and was often called Esquire. In 1816 he is described as living at Hazelbury House when he married Mary Bethell, only daughter of James Bethell, Esq of Lady-down, near Bradford, Wilts. Later he married Margaret (b 1796) at Llandewi, Pembrokeshire. He is chiefly renowned for his acquisition of a share in Northgate Brewery, Bath.
He is recorded on the Bath Forum Electoral Register as owning property in Bath: the Marlborough Tavern in 1832; the Northgate Brewery in 1835, 1837 and 1841; and the Britannia Inn, London Road, in 1846. By the 1841 census he lived at Hatt House with Elizabeth (b 1826, aged 15) and several servants. Living next door at Hatt (Farm) were Francis Pinchin (b 1821), farmer; Elizabeth (b 1821); and Peter (b 1826). Possibly he was a brother of David Rice Pinchin, and the family next door were his nephews and nieces. In 1844 his son drowned in a swimming accident in the Birmingham and Worcester Canal whilst attending Bromsgrove School. He died 6 February 1861 at Bathwick Hill House, Bath, leaving under £50,000 (re-sworn in 1862 as under £4,000), with probate to George Hornblower Simms, Esq; Henry Bethell, Esq; and George Spencer, Esq. Children: Mary; Georgina; an unnamed son (d 1844); plus at least two other unnamed children. Children: a. David Rice (1792 - 1853); b. Joseph (d 1873); c. Henry; d. Elizabeth; and e. Ann. b. Joseph - In 1851 he and Ann farmed 408 acres at Colerne. He died on 15 April 1873; his will was proven by his nephews Peter (County Brewer) and Joseph Pinchin (Yeoman) of Hall Farm, Colerne. c. Henry married Mary but died fairly young; in 1851 she farmed 270 acres at Hall Farm, Colerne. Children: Elizabeth; Mary; and Mercy. e. Ann - no details. a. David Rice married 4 March 1817 Elizabeth Gibbs (1792 - ) of North Wraxall. Children: a. Elizabeth Pinchin (1818 - ); b. David Pinchin (1820 - 12 February 1878) farmer; c. Francis Pinchin (1823 - 1884) farmer; d. William Pinchin (1824 - ) baker; e. Peter Pinchin (1827 - ) brewer; f. Ann Pinchin (1834 - ); g. Joseph 2; h. Henry. a. Elizabeth (1818 - ) was a partner with her brother Peter in the Market Place brewery business. Believed to have been unmarried. b. David (1820 - 1878) yeoman farmer of Codrington. c. Francis (1823 - 1884) farmed about 185 acres at Hatt Farm, employing 4 men and 2 boys; his estate, valued at £2,436.15s.1d, passed to his wife Catherine Pinchin and son John Wiltshire Pinchin. Children included John Wiltshire (who married Mary Mallory); Matilda W Pinchin (1869 - ); and Henry (b 1863) and Joseph (b 1867), whom you will see in your article as the children of Francis and Catherine. d. William (1824 - ) baker. e. Peter (1827 - 1906) brewer. Unmarried, he retired some time between 1894 and 1901 and died 12 April 1906 leaving £13,072.0s.4d, with probate granted to John Wiltshire Pinchin (oldest son of his older brother, Francis) and Elizabeth Jane Pinchin (niece). In 1871 he lived at the Brewery with Elizabeth, his older sister and partner in the business, and Elizabeth W of Box, his niece (probably a daughter of Francis). In 1894 he attended the Wiltshire Brewers Association AGM. In 1891 and 1901 he was giving lodgings to his Australian nephew and nieces: Elizabeth Jane Pinchin (1860 - ); Anne Josephine Pinchin (1861 - ); and Edwin Skeate Pinchin (1869 - 1924), brewery employee. f. Ann (b 1834) married 26 October 1871 Joseph Gibbs, youngest son of the late Mr Ralph Skeate of West Yatton, Wilts. g. Joseph 2 emigrated to Australia in 1852 to find his fortune in the Australian Gold Rush. Married Ellen Jane Skeate in 1864. Child: Edwin Skeate Pinchin (1869 - 1924), born at Omeo in Victoria, Australia - see later. Edwin Skeate Pinchin - In 1891 and 1901 he lived with his uncle Peter at The Brewery, Bath Road, with his sisters Elizabeth Jane (b 1860) and Anne Josephine (b 1862), both of whom were born in Victoria, Australia. In 1906 Edwin Skeate married Charlotte Jane Browning (b 1868), and in 1911 they lived in 9 rooms at Kingsmoor House, Townsend, where he is described as brewer and maltster.
He died on 17 April 1924 at Brewery House, Box, leaving £7,973.11s.1d to his widow, and the Brewery was sold to Ushers.
The movie To Auschwitz and Back is a moving Holocaust documentary and I highly recommend it. Personal accounts of historical events lead to greater understanding and empathy, especially when the storyteller is as skilled as Joe Engel. When I was in school, it often felt as if history class emphasized memorizing dates and statistics. But history is much more than that, since each historical event impacted the people who experienced it firsthand. The Holocaust wasn’t that long ago, so we have the benefit of learning from survivors. In this documentary, Joe Engel’s story is living history, and my hope is that it will never be repeated. Watch the movie and see what you can learn! “Born in Zakroczym, Poland in 1927, Holocaust Survivor Joe Engel was taken by the Nazis at 14 and never saw his parents again. Now 90 years old, Joe is the embodiment of living history and spends his retirement years ensuring the Holocaust is never forgotten. With the assistance of The United States Holocaust Memorial Museum’s film and photographic archives, filmmaker Ron Small has successfully weaved Joe Engel’s incredible storytelling into a riveting visual presentation and it is both historic and contemporary. He takes us from the overwhelming despair of the Warsaw Ghetto to the shroud of unceasing death and suffering that was Birkenau and Auschwitz. Then, we learn of his escape from a Death Train at 17 and his covert work as a freedom fighter. Joe personally takes us on his vivid journey to hell and back.” Also, more information is available from the Holocaust Education Film Foundation.
Connecting the heart is not only about taking care of the heart through proper nutrition and exercise to promote adequate circulation; it is also about connecting to the heart’s intelligence. Research at the HeartMath Institute has shown that signals from the heart can change cognitive and emotional signals in the brain. Stress is a leading factor in the development of symptoms related to chronic illness. Learning to tap into the heart’s intelligence can dramatically improve stress levels, cognitive ability, and emotions during times of chronic and life-limiting illness. The Heart-Focused Breathing technique can help create a sense of calm and balance, while reducing the negative feelings you may be experiencing with your health conditions. The human brain is an amazing organ that serves as the body’s command center. The brain, composed of sections (lobes) that serve specialized functions, controls heart rate, breathing, thoughts, emotions, communication, and decision-making, to name a few. Brain research over recent years has shown that exposure to poor nutrition, stress, and environmental toxins (root causes) can foster inflammation, hormone imbalances, and leaky gut, which in turn may lead to chronic and life-limiting brain ailments, such as depression, anxiety, and Alzheimer’s and other dementias. Addressing the root causes, coupled with brain fitness, can dramatically improve brain health and wellness. The human body is made up of 11 specialized systems (respiratory, digestive, nervous, skeletal, muscular, circulatory, endocrine, exocrine, lymphatic, renal, and reproductive) that work together to form our entire being. Our bodies also have basic physiological needs to survive, including air, water, food, and sleep, as noted in Abraham Maslow’s hierarchy. Physiological needs can be affected by the stress of environmental toxins, inadequate water intake, poor nutrition, and lack of sleep, which can dramatically affect the systems of the body. Before safety needs, such as health and well-being, can be achieved in the hierarchy, it is important to address matters that affect the body’s physiological needs. The soul is the essence of our being, and that which expresses our highest level of consciousness. Our soul connects our physical being with our spiritual being. It is our higher self. Meditation transcends our physical consciousness and quiets our mind, while tapping into the still small voice within. Meditation is healing, and leads to feelings of peace, love, and gratitude. Mindfulness, a form of meditation, is the process of bringing awareness or attention to experiences without judging them. Both mindfulness and meditation practices serve to reduce stress, anxiety, depression, and negative emotions that can impact chronic or life-limiting illness. Mindfulness and meditation are good for the soul.
Posted on November 23, 2015 by Siddharth. The decision in the famous and controversial Schrems case (press release) delivered last month has created confusion with respect to the rules applicable to companies transferring data out of the EU and into the USA. The case arose in light of Edward Snowden’s revelations regarding data handling by companies like Google and Facebook in the face of extensive acquisition of user information by US security agencies. The matter came up before the Court of Justice of the European Union (CJEU) on referral from the High Court of Ireland. The case dealt with the permissibility and legality of a legal instrument known as the Safe Harbour Agreement, which regulates the transfer of data from the EU to the US by internet companies. The effectiveness of this regulation was thrown into serious doubt following revelations by Edward Snowden regarding large-scale surveillance carried out by US state agencies, such as the NSA, by accessing users’ private data. The agreement was negotiated between the US and the EU in 2000, and allowed American internet companies to transfer data out of the European Economic Area without having to undertake the cumbersome task of complying with each individual EU country’s privacy laws. It contained a set of principles that legalized data transfer out of the EU by US companies which demonstrated adherence to a certain set of data handling policies. More than an enforceable standard to protect users’ data, it was a legal framework which gave the European Commission a basis to claim that data transfer to the USA was legal under European law. The Safe Harbour Agreement was meant to simplify compliance with the 1995 Data Protection Directive of the European Union, which laid down fundamental principles to be upheld in the processing and handling of personal data. A 2000 decision of the European Commission held that the Safe Harbour Agreement ensured the adequacy of data protection and privacy of data as required by this Directive, and came to be popularly known as the “Safe Harbour decision”. Since then, over 4,000 companies signed on to the Agreement in order to register themselves to legally export data out of the EU and into the USA. After the Snowden leaks, however, it became clear that these principles were blatantly violated on a large scale. It was in this context that Maximilian Schrems, an Austrian law student, approached the Irish Data Protection Authority, complaining that US laws did not provide adequate protection to users’ private data against surveillance, as required by the Data Protection Directive. The Data Protection Authority dismissed the complaint, and Schrems then chose to appeal to the Irish High Court. The High Court, having heard the petition, chose to refer an important question to the CJEU: whether the 2000 EC decision, which upheld the Safe Harbour Agreement as satisfying the requirements of the EU Data Protection Directive, meant that national data protection authorities were prevented from taking up complaints against the transfer of a person’s data as violating the Directive. The CJEU answered emphatically in the negative, emphasising that a mere finding by the Commission of an adequate data protection policy in an external country could not take away the powers of national data protection authorities. A national authority could therefore independently investigate privacy claims against a private US company handling an EU citizen’s data.
The CJEU also found that legislation authorising the interference of state authorities with the data handling of private companies had complete overriding effect over the provisions of the Safe Harbour Agreement. This was based on two-pronged reasoning: firstly, that the data acquired by state agencies was processed in ways above and beyond what was necessary for protecting national security; secondly, that users whose data had been acquired by the authorities had no legal recourse to challenge such an action or have that data erased. For these reasons, it ruled that the Safe Harbour Agreement failed the requirements of the EU Data Protection Directive. This decision prompted a fair amount of deliberation regarding what now made data transfer from the EU to the US legally valid, since the main legal basis for it had just been struck down. However, the interesting point to note here is that the Agreement was not the only legal basis for such data transfer. Further, for data transfer to be held illegal, individual handlers of data would now have to be challenged before national data protection authorities. Thus the decision importantly does not pull the curtain down on all data transfer from the EU to the US; however, the legal machinery of the Safe Harbour Agreement has rightly been found to be ineffective. Therefore, while internet companies do not need to shut down operations in the EU, they do need to review their data handling practices and the adherence of these practices to other available norms, like the EU’s model clauses for data transfer to external countries. Some companies, like Microsoft, have even gone a step further and tried to come up with solutions to fill the vacuum left behind by the Safe Harbour Agreement, as set out in this blog post by the head of its legal department. That said, the EU has issued a statement that an agreement needs to be reached with US companies by January 2016, failing which it will consider stronger enforcement measures, such as coordinated action by each of the EU countries’ data protection authorities. The scenario is still an evolving one, and this shake-up can lead to better-enforced privacy and data protection principles.
How do you make DIY egg carton roses? It's easier than you think! Today we will show you how to create flowers from egg cartons, step by step. These egg carton roses will definitely decorate any of your crafts! In today’s tutorial, we are going to turn an ordinary egg carton into beautiful roses. Tear off the lid of the egg carton. Then divide the box into sections, tearing off the edges of the cups along the ribs. We want the edges to be a bit messy for a more natural look. Today we are going to use an egg carton of ten sections. The number of cartons you need depends on the length of your garland (two sections will make one flower). You may paint the box in advance in any colour of your choice, but I personally like the soft beige hint and limestone texture of the cardboard. It gives our flowers a ceramic look. Now we are going to form the flower. Tear down the edges of a cup to make four petals. You might want to form five or six; either way, make sure the base of the flower is not torn. To make the cardboard more pliant and easier to work with, slightly moisten the rims. It’s better to use a soft brush. Squeeze out the excess water. Too much water might make the cardboard too soft and quite difficult to shape. Just leave the flower aside and let it dry out a little. With your fingers, shape the petals, pressing each one from inside. On the outside you will get a rib, which gives your petal a natural look. We are going to use a glue gun to fix our flowers, but you may also use PVA or double-sided tape. A bulb of the garland is going to be the stamen of the rose. Take one piece. Tear slightly in between the petals – just enough to get to the centre. Place a bulb. Do the same with the other half of the flower. Place one layer into another so that the petals overlap. The bulb is fixed well enough, but we are going to use some glue to secure the layers. Press the halves with your fingers for a couple of seconds to let the glue cool down. Skip two or three bulbs in between the flowers for an airy look; you don’t want your garland to look heavy. Now we are going to decorate the light string with ribbons. You might also use egg box scrap flowers to revamp an old photo frame. Arrange the egg carton roses before securing them to the frame. Dab on some glue and lightly press each flower for a few seconds. These egg carton flowers will make a perfect decoration for gift packaging. You might also use them to accessorize napkin rings or place cards to give your festive dinner table setting a rustic finish.
Finn Juhl (1912-1989) was a Danish architect and interior and industrial designer, best known for his furniture designs and one of the leading figures in the creation of “Danish design” in the 1940s. Juhl is the designer who introduced Danish Modern to America. The chair you see here is one of a pair of NV 45 lounge chairs designed by Juhl and made by the noted cabinetmaker Niels Vodder. The chairs are done in teak, with black leather upholstery and removable cushions. They are lot 51 in Monthly Modern's next big auction, an online-only affair slated for Oct. 27. The sale is a good mix of European and Danish design, but there will be some wonderful American design pieces, too. It will be just the third auction for Monthly Modern, which is based in San Francisco and is tapping into the wildly popular trend toward Mid-Century Modern furniture, especially Danish. “We scour the Earth for these pieces and sometimes they are in fine shape and often they are in disrepair,” said Emma Philippart, Managing Director of Monthly Modern. “When that's the case, we restore them to their original luster. The goal is to make them relevant in a contemporary setting. But equally, many pieces are in great vintage condition.” The Finn Juhl chairs are unrestored and in excellent condition, with a pre-sale estimate of $25,000-$30,000. Visit www.monthlymodern.com. On Saturday, November 12th, North American Auction Company in Bozeman, Montana will offer the largest collection of Montana beer trays ever sold at auction. Featured will be over 50 original trays, ranging from the 19th century to the mid-20th century. Also up for bid will be hundreds of lots of Montana-related advertising pieces and Montana breweriana antiques, such as signs, bottles and cans, plus various other pre-Prohibition beer trays (including an 1890-1910 Anheuser-Busch and an early 1900s Rainier Cowgirl tray). In all, more than 500 lots will be sold. The auction will also feature a large collection of early Old West and Native American artifacts, including a 19th century Canloguha tobacco box from Chief Loud Voiced Hawk Hunkpapa, previously valued at $18,000-$40,000. The auction will be conducted in North American Auction Company's gallery, located at 34156 East Frontage Road in Bozeman, starting at 10 am Mountain time. A preview will be held the day before – Friday, November 11th – from 10-5. There will be live, internet, phone and absentee bidding. Visit . Artus Van Briggle, who with his wife Anne founded the celebrated art pottery in Colorado Springs, sadly died in 1904, at just 35 years of age. After that, Anne's dedication and fine artistry continued the Van Briggle tradition. On Saturday, November 5th, Judd's Auction Gallery in Danville, Illinois will offer the outstanding single-owner Van Briggle collection of Charles Drennan, who gathered more than 800 pieces over a 50-year period. He bought his first piece while in his 30s, on a family trip to Colorado Springs. “It was love at first sight,” he recalls. Every decade is represented in this collection – from the early 1900s to 2011. Visit www.juddsauction.com. Thomas Buttersworth (1768-1842) was an English seaman of the Napoleonic Wars period who later became a maritime painter. He was born on the Isle of Wight, enlisted in the Royal Navy in London in 1795, and served on the HMS Caroline during the wars with France. But battle rendered him an invalid, so from his home in Minorca beginning around 1800, Buttersworth began to paint a subject he knew much about – ships on the high seas.
He produced works to commission and exhibited little during his lifetime, but that's certainly not true today. On Friday, November 4th, the expected top lot at John McInnis Auctioneers' fine art estates auction in Exeter, N.H., is the Buttersworth oil on canvas marine painting shown here, titled HMS Queen Charlotte, 11 Guns Passing Through the Straits of Messine. It's estimated at $20,000-$30,000. The work measures 31 inches by 43 inches. Also offered will be a small watercolor by Childe Hassam (Am., 1859-1935), titled Dandy, 6 ½ inches by 6 inches, mounted to the flyleaf of his 1899 book Three Cities (est. $10,000-$15,000). Visit . Among 20th century artists, Max Ernst (German, 1891-1976) is perhaps matched only by Pablo Picasso (1881-1973) and Joan Miro (1893-1983) for his relentlessly innovative and influential creative career. He was initially influenced by the Expressionist August Macke and the Sonderbund exhibition of 1912 (which brought Picasso and the Post-Impressionists to his native Cologne), but the trauma of his service in the trenches of World War I caused him to reject order, the system and settled norms. Out of that came Dadaism and its stepchild, Surrealism. It could be argued, then, that the two movements that can be traced directly to Max Ernst are what made Modern Art possible. On Wednesday, Nov. 16th, Bonhams in New York will offer two works by Ernst that have never appeared at auction before: Tremblement de terre printanier (1964), a monumental work painted in the South of France (est. $600,000-$1 million); and Je suis une femme, vous êtes un homme, sommes-nous la République, which features Ernst's alter ego, the half-bird/half-man 'Loplop' (est. $400,000-$600,000). Visit www.bonhams.com.
How does the engine know when to spray fuel, let in air, compress the air, and exhaust the spent combustion products? Obviously, these processes must follow a certain timing in order for the diesel engine to work. If fuel is injected before the air inside the cylinder is sufficiently compressed, it will not ignite. Furthermore, if the timing is not correct, some of the unburned fuel may find its way out through the exhaust and be lost: combustion becomes inefficient and power is lost. The many components of a diesel engine must work together properly, performing their functions in the correct sequence all the time. If any component does not function as designed, the engine will perform poorly or even stop completely. The main moving components of a diesel engine, i.e. the piston, connecting rod, crankshaft, fuel pump, exhaust valves and inlet valves, are connected together through carefully designed gearing, cams, push rods, rocker arms, and sometimes drive chains. Adjusting the timing of the various processes of a diesel combustion cycle involves adjustments to these linkages. In small diesel engines, very little adjustment is possible. However, in large diesel engines, each of these components can be adjusted for maximum efficiency. The cams of the camshaft driving the fuel pump can be adjusted to advance or delay the fuel injection to the engine cylinder. The cams driving the push rods for the inlet and exhaust valves can also be adjusted. In making all these adjustments, care must be taken to consider the position of the piston relative to the process being adjusted. The flywheel at the end of the crankshaft is usually marked as a reference to show the piston at Top Dead Center. Each piston has its own marking on the flywheel: if the engine has 6 cylinders, there will be 6 Top Dead Center markings. From these markings a person can make adjustments to the fuel pump cams and cylinder valve cams, as sketched below. Well folks, start your engines.
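To make the timing arithmetic concrete, here is a small illustrative sketch (my own, not from the article; the firing order shown is an assumed example) of how Top Dead Center firing events space themselves around the four-stroke cycle:

```python
# Illustrative only: a four-stroke cycle spans 720 degrees of crankshaft
# rotation, so in a 6-cylinder engine a power stroke begins every
# 720 / 6 = 120 degrees of crank angle.

def firing_angles(cylinders: int, firing_order: list[int]) -> dict[int, float]:
    """Map each cylinder to the crank angle (in degrees) at which its
    piston reaches Top Dead Center at the start of its power stroke."""
    interval = 720.0 / cylinders
    return {cyl: i * interval for i, cyl in enumerate(firing_order)}

# 1-5-3-6-2-4 is a common inline-six firing order (assumed for illustration).
print(firing_angles(6, [1, 5, 3, 6, 2, 4]))
# {1: 0.0, 5: 120.0, 3: 240.0, 6: 360.0, 2: 480.0, 4: 600.0}
```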
The aim of the partnership is to have educators at all levels in all five districts working together; too much of the work we do is done in isolation. Contact the partnership to find out how you can get connected with others in the same subject areas or grade levels. Take advantage of the opportunities for professional development. If you’re a teacher, ask about joining one of the teacher teams. If you’re a school leader, ask how you can participate in steering committee activities. For more information, contact Cove Davis at [email protected]. I have an idea for the partnership. Who do I talk to? The greatest benefits of the partnership are connecting educators, fostering innovative ideas and opportunities, and sharing promising approaches with others. Contact the executive administrator, Cove Davis, at [email protected] to discuss any ideas to improve collaboration. Where do I find resources for my classroom? Yearlong plans, curricular resources, and UbD units are posted on the server. If you need the username and password for access, contact the 5DP or ask another teacher or administrator at your school. Additional links and resources can be found on our Resources page. How will my instruction change? As common yearlong plans (YLPs) are created and agreed upon by the districts, teachers will be asked to follow the general order in which the standards are suggested to be taught. For example, if third-grade fractions are taught in November-December according to the YLP, all districts will adhere to that. YLPs for Math, ELA and Science for grades K-8 are on the 5DP server and Mastery Connect. Following the YLPs is essential starting in 2016-17, since the three common assessments over the course of the year will test standards as taught in each block or quarter. What are the common assessments? Teams across the districts create common benchmark assessments that are administered three times annually. These benchmarks allow teachers to track their students’ progress on standards, school leaders to see how their students and classrooms are performing, and district leaders to take the pulse at the student, classroom, school, district, and partnership levels. Our assessments use the commonly developed YLPs as blueprints. During and after development, they go through multiple rounds of vetting for face and content validity, as well as post-assessment analyses for reliability and predictive, concurrent, convergent, and discriminant validity. With each iteration, our goal is to produce assessments of increasing quality and, importantly, usefulness for those in our districts. If you have any questions about our common assessments, please email our Assessment and Data Administrator, Aubree Webb, at [email protected].
Nearly everything interesting depends on more than one input, and the simplest way several inputs can relate to each other is linearly. So linear algebra is part of understanding most interesting systems. Problems that aren’t stated in terms of linear algebra still have linear algebra in the background. As we’ve written about elsewhere, everything boils down to linear algebra. Linear systems tend to be very large in application. If a system depends on N inputs, you can expect matrices of size at least N × N to show up. But if these inputs are continuous, each input can turn into a large number of discrete points. For example, suppose you want to know what’s happening in a cubic meter of space. If you want to track things at a resolution of one centimeter, then you have not just 3 inputs but 100³ = 1,000,000 inputs. Discretizing a differential equation over this cube creates a system of a million equations in a million unknowns. Linear systems also tend to be sparse, described by a matrix with mostly zero entries. In the example above, the solution at every point in the cube depends on every other point in the cube, but not directly: the solution at each point directly depends only on that point’s neighbors. Simply having a large number of zeros in a matrix doesn’t necessarily help, but often there’s a pattern to the zeros that can be exploited. In this example, the geometric structure of the cube gives rise to an algebraic structure in the corresponding matrices. As another example, consider Google search results. The rank of each page depends on the number and rank of pages linking to it, so conceptually Google is manipulating a matrix with a row and column for every page they track. The vast majority of this matrix is zeros, since no page links to more than a few of the 4.5 billion pages out there. Linear algebra at scale is not just a larger version of what you learn in high school. Because matrices are so large and so sparse, even storing and accessing big sparse matrices requires some cleverness. Since linear algebra is at the heart of so many problems, linear solvers are often in the inner loop of a larger application, and so efficiency is critical. Numerical linear algebra algorithms have been improving steadily for decades. For very large problems, algorithms have done more than Moore’s law to reduce computation costs. Of course you need accuracy as well as efficiency; it doesn’t matter how quickly you can compute a wrong answer. Two ways of computing the same thing that are algebraically equivalent in theory may not be equivalent at all when carried out by a real computer. One may be very accurate and the other useless. A great deal of scholarship and practical experience has gone into understanding linear algebra algorithms deeply, not just how they behave in theory but also how they behave in actual computer hardware. Linear algebra software packages like LAPACK are very mature. Countless experts have devoted their careers to refining these implementations. Most people have no business trying to improve on such software. For most of us, the challenge is to formulate our applications well and to choose the best algorithms, not to re-implement fundamental methods.
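To make the sparsity point concrete, here is a minimal sketch (my own illustration, not from the post, using a made-up four-page link graph) of a PageRank-style power iteration with SciPy's sparse matrices. Only the nonzero entries are stored, so memory grows with the number of links rather than with the square of the number of pages:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Tiny link graph (assumed data): links[i] = pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = len(links)

# Build a column-stochastic transition matrix in sparse CSR format:
# entry (j, i) = 1 / outdegree(i) whenever page i links to page j.
rows, cols, vals = [], [], []
for i, outs in links.items():
    for j in outs:
        rows.append(j)
        cols.append(i)
        vals.append(1.0 / len(outs))
P = csr_matrix((vals, (rows, cols)), shape=(n, n))

# Power iteration with the usual damping factor of 0.85.
d = 0.85
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = d * (P @ rank) + (1 - d) / n

print(rank)  # approximate PageRank vector
```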
Coral reefs are dying at an alarming rate around the world. If we continue acting as if nothing is happening, scientists estimate that all the corals will disappear in the next 100 years. WILDCOAST, in collaboration with the Commission of Natural Protected Areas (CONANP), is implementing a coral reef conservation program in seven protected areas in the Mexican Pacific, from the Gulf of California to Oaxaca. Places like Cabo Pulmo, Bahias de Huatulco, Isla Espiritu Santo, and Isla Isabel, each world-renowned for their marine biodiversity, are the focus of this program. To illustrate some of WILDCOAST’s work: during the first week of September, the WILDCOAST team, Ancla Marina, tourism outfitters, and CONANP installed ten anchorages for mooring buoys in Espiritu Santo National Park in the Gulf of California. Together they carried out a workshop in La Paz, Baja California Sur, on the installation and proper use of mooring buoys, and exchanged experiences in visitor management in coral reef ecosystems. Park rangers and tourist service providers from four protected areas participated in the workshop, including those from Espiritu Santo, as well as Huatulco, Cabo Pulmo, and Isla Isabel National Parks. The workshop participants also evaluated visitor management strategies used in the different areas, including tools for marine species identification and good diving and snorkeling practices. Together with CONANP and local partners, WILDCOAST is creating wildlife and dive guides for Huatulco, Cabo Pulmo and Espiritu Santo National Parks that will be used to inform visitors about park regulations, wildlife, and best practices.
The Safety Inspection Lockout and Tagout mobile app is used to track safety procedures that ensure industrial machinery is properly shut off and not restarted until maintenance or servicing work has been completed. Lockout and tagout procedures (also known as LOTO procedures) require energy isolation for hazardous power sources before equipment and machinery repairs are started. Using the app helps ensure that workers have a clear process for dealing with dangerous machinery and simplifies LOTO procedures according to OSHA standards and the health and safety requirements of the job site. A critical element of any occupational safety and health process, the lock and tag app is designed to protect the health and safety of those who work around industrial machinery and equipment. Instead of relying on paper checklists for OSHA lockout inspections, this easy-to-use app helps you stay on track with your construction site's LOTO program. Easily accessible from any smartphone or tablet, the app comes with date, time and signature capture, and can be customized to meet your company's needs.
Prices rose by 16.67% in 1680 compared to 1679. On average, you would have to spend 16.67% more money in 1680 than in 1679 for the same item. In other words, $1 in 1679 is equivalent in purchasing power to about $1.17 in 1680. The 1679 inflation rate was 0.00%. The inflation rate in 1680 was 16.67%, which is high compared to the average inflation rate of 1.22% per year between 1680 and 2019. The inflation rate is calculated from the change in the consumer price index (CPI). The CPI in 1680 was 4.20, up from 3.60 in the previous year, 1679. The difference in CPI between the years is used by the Bureau of Labor Statistics to officially determine inflation. CPI is a weighted combination of many categories of spending that are tracked by the government. Note that not all categories have necessarily been tracked since 1679; the earliest available data is used for each category. $1 in 1679 has the same "purchasing power" or "buying power" as $1.17 in 1680.
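As a quick check of the arithmetic above, here is a small illustrative snippet (my own, using the CPI values quoted in the text):

```python
# CPI values quoted above for 1679 and 1680.
cpi_1679 = 3.60
cpi_1680 = 4.20

# Inflation is the relative change in CPI between the two years.
inflation = (cpi_1680 - cpi_1679) / cpi_1679
print(f"Inflation in 1680: {inflation:.2%}")  # 16.67%

# Equivalent purchasing power: $1 in 1679 expressed in 1680 dollars.
print(f"$1 in 1679 = ${cpi_1680 / cpi_1679:.2f} in 1680")  # $1.17
```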
This book is a welcome addition to the GNSS textbook literature by the team of authors who have, since 1992, produced five editions of an excellent book on GPS targeting high-precision users, Global Positioning System (GPS): Theory and Practice. Hofmann-Wellenhof is also the lead author of the 2003 book Navigation, and of the second edition of the classic Physical Geodesy (2006). GPS Theory and Practice described GPS from a geodetic perspective, patiently explaining such concepts as geodetic reference systems and datum transformations, the theory of orbital motion and techniques for orbit determination, mathematical models and parameter estimation theory. In that series, the authors paid particular attention to those factors related to centimeter-level performance, including carrier phase modeling, ambiguity resolution, and differential positioning. These books nicely complemented those written by electrical and communications engineers, who emphasized signal structure and receiver design principles. Although not explicitly stated in the Foreword or Preface, this 500-page book is the first of a new series of editions dealing with GNSS that follow the general layout of the previous series on GPS. Approximately 60 percent of the book’s 14 chapters deal with fundamental topics that underpin all GNSS (as well as regional navigation satellite systems and satellite-based augmentation systems). In fact, the chapter headings and sub-headings are almost identical to those of the GPS books. This is not a criticism, as the GPS books were popular with students and researchers interested in the geodetic and algorithmic aspects of GPS. Why change a winning formula? Just as we now acknowledge that we are in a new world, where the acronym “GNSS” is steadily replacing “GPS” everywhere from journals to committees to conferences and magazines, the authors have responded by broadening the range of topics they deal with in this book. The first eight chapters are devoted to the generic topics referred to above, followed by separate chapters for the three main GNSSes: GPS, GLONASS, and Galileo. A further chapter describes “other” GNSS and regional systems, followed by a chapter on applications and a final wrap-up. The book has a disappointing start, but thankfully it gets better. Because the structure of the book has “generic” satellite-based positioning chapters at the start, Chapter 1 has to somehow introduce GNSS without overloading it with technical or historical detail (reserved for later chapters). This introductory chapter is frankly a curious mix of over-generalizations (one and a half pages on “global surveying techniques”), thumbnail sketches (the “history of satellite geodesy” is covered in two pages), and conceptual positioning framework (an introduction to positioning, velocity and attitude determination; code- vs. phase-based, absolute vs. relative, static vs. kinematic, real-time vs. post-processing, surveying vs. navigation). Chapters 2 and 3 deal with Reference Systems and Satellite Orbits, respectively. These are, of course, fundamental concepts in GNSS. As a geodesist I fully recognize this and always begin my GNSS courses with several lectures on these geodetic topics — but I am probably in a minority of teachers who devote so much time to the subtleties of terrestrial reference systems and frames, coordinate system transformations, time systems, and orbital motion. Necessarily these chapters are brief and skirt across the complex topics.
Still, in my opinion, insufficient mention is made of plate tectonics, the major complicating factor in terrestrial reference frames, yet several pages are devoted to the transformation between celestial and terrestrial reference frames. The latter subject is irrelevant for GNSS, as that complexity is hidden from users by way of clever satellite ephemeris modeling in the broadcast navigation message. It is also curious that the whole of Chapter 8 is devoted to Data Transformation. Chapter 3 goes into considerable detail on orbit description, Keplerian representation, perturbing accelerations, perturbed satellite motion, and even orbit solutions. A small section even appears on broadcast (GPS case) and precise ephemerides. Here the authors are guilty of “too many equations, not enough numbers”: the equations are not illustrated by worked examples or order-of-magnitude calculations. It is, in my opinion, an unsatisfactory chapter. Chapter 4, Satellite Signals, on the other hand, is excellent, with the right balance of depth and breadth of material. The 50 or so pages provide the reader (for example, an undergraduate or graduate student) with fundamental information on the underlying physics of microwave signal propagation, atmospheric effects, signal structures for GNSS, and an adequate introduction to receivers and signal processing. (Note, there is no separate chapter on receiver hardware.) This chapter, together with the following three chapters, justifies the purchase of this book. Chapter 5, Observables, shows the skill and experience of the authors. The material in this chapter also appeared in the earlier GPS books, but that is not a criticism. The topics covered are data acquisition (code, carrier and Doppler observations; noise and biases), data combinations (linear multi-frequency combinations, as well as code smoothing using carrier), atmospheric effects (ionosphere and troposphere), relativistic effects, antenna phase center variations, and multipath. The depth of material is adequate for a graduate course in GNSS. Chapters 6 and 7 deal with the Mathematical Models for Positioning and Data Processing, respectively. Chapter 6 topics are point positioning (including Precise Point Positioning), differential GNSS (DGNSS) positioning, and relative positioning (or carrier phase-based DGNSS). Chapter 7 is long (approximately 80 pages) and covers data preprocessing, ambiguity resolution, adjustment techniques, and quality measures more than adequately. Whether Chapter 8, Data Transformation, should come after Data Processing or should be part of Chapter 2, Reference Systems, is a matter of opinion. The “classical” material can be found in any geodesy textbook. There’s nothing new here, and no worked examples. Nevertheless, it is important material that needs to be given some prominence. Chapters 9, 10, 11 and 12 deal with each GNSS, RNSS (regional navigation satellite systems), and augmentation systems in turn, with the “Big Three” GNSSes each given their own chapter. The treatment is broad rather than deep, as befits the target audience. The topics typically include historical background, reference systems, segments, services, signal structure, and outlook. All of this material can be easily found on the Internet from authoritative sources. However, the authors conveniently summarize the basic information, and through regularly updated editions they will be able to keep the material current and relevant.
Chapter 13, Applications, deals with user applications and a host of bits and pieces associated with GNSS products and issues (such as data formats). Unfortunately, it contains some glaring omissions. Perhaps the most important is machine guidance in the mining, agricultural and construction industries, a high-accuracy application to which multi-frequency, multi-constellation GNSS is very well suited. It is not mentioned. Nor is enough emphasis given to the global trend of continuously operating reference stations established to support real-time, centimeter-accuracy positioning for surveying, geodesy and machine guidance applications. Chapter 14, Conclusion and Outlook, is a rather short “wrap-up” of GNSS that in my opinion fails to celebrate the achievements of GPS, to describe how GPS is fundamentally changing the nature of “geospatial infrastructure,” or to emphasize enough how GNSS promises to transform the geosciences, the geospatial industries and society in general. How does the book rate overall? It is a good book, but not a great book on GNSS. It is more than adequate as a textbook for undergraduate and graduate-level courses in satellite-based positioning. But it does have its flaws, and they are not inconsequential. Until a better book on GNSS principles and practice is published, however, this one earns my endorsement.
The Lewis & Clark Confluence Tower was built in commemoration of the historic expedition and opened in 2010. The 180-foot structure has three viewing platforms, at 50, 100 and 150 feet, connecting the two towers that represent Captains Meriwether Lewis and William Clark, as well as the Mississippi and Missouri rivers. Every level tells a story, and you get great views, such as the confluence of the rivers. You can even see the St. Louis skyline (on a clear day)! A view of the Mississippi and Missouri rivers' confluence from the tower!
Photography is more than an art. It's a skill. Professional photographers take time to study and learn their art, learning exactly what is required to take an eye-catching photo. There are seven elements of photography that break down each of the things a true artist should focus on: line, shape, form, texture, pattern, color and space. Each brings its own unique quality to a picture. The seven basic elements of photography – line, shape, form, texture, pattern, color and space – all refer to the way you set up your photo. Composition helps you represent any of these elements in the way you choose. Line – can be vertical, horizontal, curved or jagged. Examples: roads, sunsets, bridges. Shape – a two-dimensional representation of objects. Examples: silhouetted photographs of birds. Form – a three-dimensional representation of objects, usually through the use of lighting and shadows. Texture – the use of lighting to bring out details of an object, making it easy to see whether a surface is smooth or soft. Pattern – the use of repetition to create an interesting photo. Examples: photos of gardens or flowers. Color – using warm or cool colors to set a mood. Space – either negative or positive space can be used to make a statement. Often seen when using the rule of thirds. The best way to illustrate one of the most popular photography composition techniques, the rule of thirds, is to put a nine-square grid over a photo: you break the image into thirds both horizontally and vertically, arriving at nine segments in total. If you place the most interesting element of your photo along one of those lines, your photo will naturally be well composed, based on the general rules of photographic form (a small sketch of the grid arithmetic follows below). The seven basic elements of photography all come down to lighting and composition, and new photographers focus on these two items most. There are many photography composition techniques in addition to the rule of thirds, including symmetry, which utilizes tricks like reflections to make an otherwise ordinary photo more interesting, and depth, which combines the foreground and background in interesting ways to bring an image to life. Another important practice in photography is "shooting light": looking for the way the light hits objects and featuring that in your photo. As you begin to play with these seven elements of photography, these professional techniques can take you from photographer to photographic artist.
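As an illustrative aside (my own sketch, not from the article), the rule-of-thirds grid lines and their four "power point" intersections can be computed for any image size:

```python
# Compute the rule-of-thirds grid lines and their intersections
# ("power points") for an image of the given pixel dimensions.
def rule_of_thirds(width: int, height: int):
    xs = [width // 3, 2 * width // 3]    # the two vertical grid lines
    ys = [height // 3, 2 * height // 3]  # the two horizontal grid lines
    power_points = [(x, y) for x in xs for y in ys]
    return xs, ys, power_points

xs, ys, pts = rule_of_thirds(6000, 4000)  # e.g. a 24-megapixel frame
print(pts)  # [(2000, 1333), (2000, 2666), (4000, 1333), (4000, 2666)]
```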
Learning Goal: Build procedural fluency with solving systems by elimination. HSA.REI.C.5 – Prove that, given a system of two equations in two variables, replacing one equation by the sum of that equation and a multiple of the other produces a system with the same solutions. HSA.REI.C.6 – Solve systems of linear equations exactly and approximately (e.g., with graphs), focusing on pairs of linear equations in two variables.
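A short worked example of the elimination idea in these standards (my own illustration, not part of the original standards text), written in LaTeX:

```latex
% Solve by elimination:
%   2x + 3y = 7
%    x -  y = 1
% Replacing the first equation by itself plus 3 times the second
% (as in HSA.REI.C.5) produces a system with the same solutions:
\begin{align*}
(2x + 3y) + 3(x - y) &= 7 + 3(1)\\
5x &= 10\\
x &= 2
\end{align*}
% Back-substituting into x - y = 1 gives y = 1.
% Check: 2(2) + 3(1) = 7. So the solution is (x, y) = (2, 1).
```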
The British poet John Dryden once wrote, “Truth is the foundation of all knowledge and the cement of all societies.” The God of the Bible is a God of truth, and in the list of seven things that he hates, we have already considered “a lying tongue.” But Solomon also includes in this list “a false witness who breathes out lies.” It may seem as if the Lord is repeating himself here, but in fact, while the two are related (“lying” and “false” translate the same Hebrew word), there is a subtle difference between a lying tongue and a false witness. “A lying tongue” describes the person who speaks something that is untrue when he knows it to be untrue. We saw, when considering that sin, that the motive of the lying tongue is to hurt the one being lied to or about. “A false witness” is similar, but the inclusion of the word “witness” indicates that Solomon is thinking of something a little more formal. A lying tongue can be exercised in any setting, but a false witness specifically breathes out lies in the context of a vow or an oath. A lying tongue might hurriedly fabricate an untruth, but a false witness carefully and deliberately plans the lies he intends to tell—or at least fails to honour a commitment he carefully and deliberately made. The most immediate application of this truth would be perjury in a court of law (see Exodus 20:16; Deuteronomy 5:20). To lie about a person under oath is to ruin their reputation and to invite a penalty upon them. False witness is not easily dismissed, for it is carefully planned to sound like a reasonable charge against its victim. David spoke of “malicious witnesses” who had risen against him, asking “things that I do not know” (Psalm 35:11). In other words, they fabricated testimony against him that was well thought out and not easily answered. Jesus was likewise subjected to the testimony of false witnesses (Matthew 26:60), as was Stephen (Acts 6:13). In both cases, it led to death. But the application can be taken a little further, because a false witness in a legal battle was not only lying, but lying under oath. He was not only lying under the general expectation that he should tell the truth, but lying against a specific oath he had made to tell the truth. Jewish custom recognised the need for oaths before a rabbinic or judicial court. Witnesses were typically reminded of their obligation to tell the truth and warned that, if they were found to be lying, they would suffer the consequences that the defendant would suffer if pronounced guilty. If we understand this practice of “swearing in” witnesses, it perhaps puts something of a fresh spin on the broader application of this sin. A lying tongue is always detestable to God, but lying against a vow that one makes is placed in its own category. What might this look like today—outside the courtroom? When a couple is married, they make vows to one another. The husband vows to lovingly lead his wife, and the wife vows to reverently follow her husband’s leadership. God expects us to honour those oaths. To fail to do so is to behave as a false witness. And it always hurts the other party. In acknowledging our dependence on God, we hereby commit ourselves to utilising the various practical means of grace which he has provided. We will therefore seek wise counsel about raising a godly seed; we will be intentionally faithful in fulfilling our covenantal responsibilities as members of BBC; we will expose __________ to the numerous opportunities to learn of Jesus Christ through the ministries of BBC.
If parents stand before the church and dedicate themselves to raise their children for the Lord, but then neglect to expose their children “to the numerous opportunities to learn of Jesus Christ through the ministries” of the church (Sunday school, etc.), have they not made a vow that they are failing to uphold, thereby making themselves false witnesses? And be sure that failure in this regard will hurt your children. Another form that this might take is failure to live in light of a church covenant. Membership at BBC requires the signing of a church covenant. To formally and publicly agree to abide by the covenant, and yet to fail to do so, is to make oneself a false witness. For example, when the covenant reads, “We will not forsake the assembling of ourselves together, eager to make productive use of the means of grace,” is not the person who agrees to that term, and yet fails to regularly assemble and make use of the means of grace (preaching, prayer, Communion, etc.), guilty of false witness? Why is this sin so serious? We could say, first, because God hates false witness, and believers are expected to hate that which God hates (see Psalm 97:10). We could add that false witness always hurts others, and the Bible consistently urges us to act in the best interests of others (see Philippians 2:1–4). But we should also notice that those guilty of this sin are specifically warned of dire consequences: “A false witness will not go unpunished, and he who breathes out lies will not escape” (Proverbs 19:5). Again, “A false witness will perish” (Proverbs 21:28). We must, therefore, do all we can to avoid this sin. Biblical wisdom demands that we follow some practical steps to help us avoid falling into the sin of false witness. First, if you are guilty of false witness, repent. Confess your sin to God, seek forgiveness from others, and commit to be done with false witness. Second, strive to be trustworthy. As Christians, our words should reflect God’s truth. Jesus urged, “Do not take an oath at all…. Let what you say be simply ‘Yes’ or ‘No’” (Matthew 5:34–37). He was not categorically forbidding the taking of oaths, but saying that Christians should be of such honourable character that they speak and behave truthfully whether or not they have taken an oath. We are to speak the truth in love (Ephesians 4:15) and to put away lying (Ephesians 4:25). Third, as we have seen consistently in this series, we must not keep company with those who are known to be false witnesses. “You shall not spread a false report. You shall not join hands with a wicked man to be a malicious witness” (Exodus 23:1). To become wise, we must walk with those who are wise, for companions of fools will be destroyed (Proverbs 13:20). While false witnesses are assured of dire consequences, Christians should be driven more by love for God than by fear of consequences. “Whoever has my commandments and keeps them, he it is who loves me. And he who loves me will be loved by my Father, and I will love him and manifest myself to him” (John 14:21). Love for God should drive our obedience. And the wonderful promise is that obedience invites sweet fellowship with the one we love.
It turns out that everyone may wish to go to greater lengths to preserve their hearing well into old age. After all, there is new evidence suggesting that your health depends more upon your hearing than previously imagined. This comes after a study found a correlation between overall brain health and hearing loss. We will explore the findings of this study, and then relay advice on how to keep your hearing safeguarded throughout life. The study that revealed this new information was completed by a joint effort of researchers from Johns Hopkins University and the National Institute on Aging. They had a sample size of 126 individuals, whom they brought in for annual screening such as routine physicals as well as MRIs. The study was completed over a period of almost two decades, using the same participants. The researchers found a correlation between the overall size of the brains observed and hearing loss. It is well established that brains shrink with age, and that this is a primary factor in forms of dementia and diminished mental abilities. The study revealed a positive link between brain shrinkage and hearing loss: people with hearing damage had their brains shrink at a considerably faster rate than individuals with a normal hearing threshold. Essentially, people with hearing loss are at a much higher risk for brain atrophy than others, which can result in various forms of neurological degeneration. This occurrence can be explained as a normal function of the brain gone awry. When the brain senses that there is damage to a specific area, it attempts to compensate for the loss, which results in heavy damage to the grey matter. This, in turn, results in shrinkage of the brain, and typically results in ailments such as dementia. The researchers concluded that people of all ages should take greater steps to ensure that they have healthy hearing throughout their lives. One of the most basic things that an individual can do to protect their hearing is to schedule regular visits with their primary care physician. Using your records as well as continuous measurements of the fluctuations in your hearing, your doctor will be able to catch any potential changes and help minimize any potential damage. The researchers from Johns Hopkins highly recommended that people who already suffer from hearing loss take additional steps to track their hearing with their doctor to ensure that they are doing everything possible to prevent further losses. These check-ups can be the difference between a high and low quality of life as you age.
An estimated 4 million U.S. adults report driving under the influence of alcohol every year, the majority of them men. When a partner gets a DUI (Driving Under the Influence) or a DWI (Driving While Intoxicated) charge, it can cause all sorts of issues in the relationship. You find yourself torn between feeling sorry for him, being upset with his recklessness, and being bitter about having to care for him after his DUI. Having a DUI or a DWI means that he will lose his driver's license, pay large fines, and possibly do some jail time. This can be a difficult obstacle, even in the most loving relationship. Try to keep calm when you learn that your partner got a DUI or a DWI. Your first reaction may be to yell or get upset, or you may find yourself crying because your partner has gotten in trouble with the law. Depending on how many DUIs or DWIs your partner has had, his sentence could be fairly easy. A first offense will require less time without a driver's license – around 90 days depending on state laws – and could mean no jail time if he gets into an Accelerated Rehabilitative Disposition program and undergoes alcohol counseling. Find out the terms of your partner's sentence before you get upset. Talk about what has happened and how you will both deal with it. When your partner gets a DUI or DWI, you'll want to do your best to support him. There is nothing you can do after the fact, so staying upset forever will not do any good. Try to find a solution to the issue. If there are outrageous fines to pay, which there usually are, have your partner talk to the court about a payment plan. Most courts offer payment plans for DUI fines, since many people cannot pay these fees upfront. Try your best to talk through the situation with your partner and come to a good solution for handling the financial aspects of the DUI. If your partner has to do jail time, try to find a way to ease this trouble for both of you. Plan visits to the jail as often as possible to see your partner. If he is allowed work release, as many people with a job are, pick him up in the morning and take him back to jail at night. However, if you find that this becomes a hassle due to having children or your own employment, enlist the help of friends and family to pick your partner up and take him back from work release. A DUI requires a lot of assistance, and if you are not up for the challenge, don't take it on all by yourself. You'll start to resent your partner and become angry at the situation. When your partner gets a DUI or DWI, he will lose his driver's license, probably requiring you to drive him where he needs to go. This can get monotonous and may cause trouble in the relationship if not dealt with properly. If you are not able to drive your partner everywhere or become irritated with having to do so, ask for help. Your partner will not expect you to take care of everything for him. He will not want to feel like a burden. Keep open communication with your partner and tell him if you're upset. But remember, too, that he won't lose his driver's license forever, and if the roles were reversed, you'd need him to do the same for you. Your partner will more than likely be required to attend counseling classes for his DUI or DWI. Some of these classes request that family be present. Support your partner by taking him to these classes and sitting with him. If your partner got a DUI because he may have a slight drinking problem, your support will definitely be needed.
These classes teach people who have gotten DUIs or DWIs how to be more responsible, how to get support from family or friends, and offer help if needed in the form of AA meetings. Around 30 to 40 percent of DUI convictions involve repeat offenders, so you should be there for your partner so that you can understand more about alcohol addiction and how to be safe when drinking. Even if your partner hardly ever drinks and this was a one-time lapse, you'll be helping him get through the tough experience of a DUI just by being there.
What to do if you smell gas. Gas is a flammable substance with a distinctive odour, similar to rotten eggs. • Do not operate electrical appliances. • Do not operate light switches or mobile phones. • Move away from the area. Be aware of carbon monoxide. Faulty or poorly maintained gas appliances can emit dangerous carbon monoxide. Unlike natural gas, it’s odourless. If carbon monoxide is present, you might experience tiredness, dizziness, vomiting and loss of consciousness. If you suspect someone is suffering from carbon monoxide poisoning, please visit a doctor or hospital immediately. IDENTIFY: if you detect a rotten egg smell, it could be a gas leak. LEAVE: move away from the area and don’t use anything that might create a spark.
Oxford biologist and noted atheist Richard Dawkins recently sat down with Archbishop of Canterbury Rowan Williams and had a nice chat about science, human origins and God. It was an entirely civil affair. Because Williams does not object to evolution, there was real agreement between them on a number of issues. But their essential difference surfaced right at the end. At issue was the origin of the universe and the possibility, put forward by some physicists, that the universe arose out of nothing and has evolved on its own ever since. With this idea in mind, Dawkins said to Williams, "What I can't understand is why you can't see [that this] is such a staggering, elegant, beautiful thing, why you would want to clutter it up with something so messy as a God." Williams agreed with the elegance bit but added, "I think you put your finger on one of the things that does seriously divide us. ... I'm not talking about God as an extra who you can shoehorn into that. That's just not how I see it." To which Dawkins replied emphatically, "That is exactly how I see it." The debate, if it could properly be called that, therefore shone a light on the God question as the great divider. And that question is not whether God exists or doesn't exist, but what kind of God it is that exists or doesn't exist. Plenty of Christians (not Williams) agree with Dawkins about the nature of God. Take Albert Mohler, for example, president of the Southern Baptist Theological Seminary and biblical inerrantist. He believes that religion and science are in a necessarily competitive relationship, writing that "evolution ... represents one of the greatest challenges to Christian faith and faithfulness in our times." On this Mohler is wholly in agreement with Dawkins. They agree on this point because of their shared assumption that, whether or not God exists, God properly plays the role of an idea among ideas. For them, God is capable of being displaced by scientific investigation and necessarily jostles with science within a single conceptual space, like opposing chessmen on a finite Cartesian grid. Their disagreement lies only in the fraction of that space given over to each competitor. Mohler's science-to-God ratio is the inverse of Dawkins', just as his theism is a mirror image of Dawkins' atheism. Their contrast is evident and their game is possible because of -- not despite -- their shared assumption. Both Dawkins and Mohler are excellent communicators and have spent years expressing their views with clarity and vigor. I admire them for the internal consistency of their perspectives and for their shared impatience with woolly-headedness. Their trust in the sturdiness of concepts really is a refreshing contrast to the intellectual laziness prevalent in much of our public discourse. But conceptual clarity is no substitute for openness to the world. So often the cost of clarity is narrowed vision, and both Dawkins and Mohler have had to divide the world in order to conquer it. I don't think it's unfair to say that, for Dawkins, science is good and religion is bad. Or that, for Mohler, the Bible is right and science is wrong. The integrity of both views depends on not taking seriously those features of reality that fall outside these artificial boundaries, and both views therefore flatten and artlessly shortchange the world. There is a way to release the science-religion debate from this static and deadening opposition, and Williams' remark about not "shoehorning God in" is suggestive of it.
That way is to take seriously the notion that God, if God is real in any way at all, cannot be confined by any conceptual space. God is in no way, and can be in no way, "in addition to" anything. Instead, God must be, in the words of theologian Kathryn Turner, "beyond kinds." Put another way, God is not subject to our notions of similarity and difference, or even to our idea of existence. This is basic ex nihilo Christianity: God is the author of existence itself and therefore cannot exist as the world does or as we do or as concepts do. "Existence" is an idea derived from the push and pull of experience, and as Marilynne Robinson wrote in her novel "Gilead," "creating proofs [about God] from experience of any sort is like building a ladder to the moon. It seems it should be possible, until you stop to consider the nature of the problem." This kind of theology makes empiricists and creationists -- empiricists in their way -- crazy. Admirably labelled "theological doohickey postmodern BS" by one of my readers, it is anathema to common-sense types, atheist and religious, because it suggests that we live in a state of near-total ignorance about God. And nothing is more offensive to the modern mind than the idea of any kind of permanent, in-principle ignorance. Offensive it may be, but once we let this simple Christian claim settle in, everything changes. The flat and uncreative opposition of science and religion vanishes and the possibility of a noncompetitive relationship arises. And isn't it hopeful and imaginative to consider that God is not a small manipulable thing, like a hammer or a chessman or a wedge for splitting the world in two? Maybe all of that is our own dreadful creation, like making a bomb out of the beauty of physics. Please don't misunderstand me. There is not nothing to say about God. Acknowledging that God is beyond kinds is merely a starting point. Yet it is a critical step, because such acknowledgement equips us with a kind of theological central nervous system. It prevents us from being burned and from burning each other. The thoroughly boring God of the world's Dawkinses and Mohlers is a rather benign caricature and is largely harmless so long as it remains within that debate. They can have their game. But as we all know, truly horrific things can happen when people think they really know something about God, and this is true whether or not they believe in God. So admitting real ignorance up front seems like a good idea. Facing the radical insufficiency of even our best ideas about God -- even for a moment -- may help us see that the static and fruitless opposition that characterizes much of the science-religion debate is no more than an expression of our own desire to control, that is to say deny, God. And only after facing that insufficiency can we know that the setting-aside of our ideas about God, which is the selfsame act of acknowledging our ignorance, is also the act that opens us to the world -- the whole world -- as it really is.
The Indian Cobra is known around the world as a highly venomous snake that feeds on rodents, lizards, and frogs. As well as biting, the Indian cobra can attack or defend itself from a distance by "spitting" venom, which, if it enters the opponent's eyes, causes severe pain and damage. The snake actually forces the venom through its fangs, by exerting muscular pressure on the venom glands, so that it sprays out in twin jets for 2 m (6 1/2 ft) or more. When threatened, the Indian Cobra will assume its characteristic posture. It will raise the front one-third of its body and elongate its long, flexible neck ribs and loose skin to form its distinctive hood, on which are markings resembling eyes. Although the Indian Cobra is not an endangered species, it has recently been hunted for its distinctive hood markings, which are used in the production of handbags. It is listed under the treaty because it closely resembles other species that are threatened and in need of protection. The Indian Cobra's best-known characteristic features are the wide black band on the underside of the neck, and the hood marking design which shows half-rings on either side of the hood. It is a smooth-scaled snake with black eyes, a wide neck and head, and a medium-sized body. Its colouring varies from black, to dark brown, to a creamy white. The body is usually covered with a spectacled white or yellow pattern, which sometimes forms ragged bands. The Indian cobra may grow to between 1.8 m and 2.2 m in length. Those Cobras which have the single ring on the hood are found in Assam and Eastern India and spit venom like the Ringhals Cobra of South Africa, which can eject a spray for a distance of more than two meters and cause severe eye pain, sometimes blindness. Keepers who attend this particular variety of Cobra sensibly wear goggles. The King Cobra, or Hamadryad, is the largest of all poisonous snakes. This lethal creature, sometimes 5 meters long, is entirely a snake eater. It enjoys Pythons, other Cobras, and even its own species. The King is aggressive, unpredictable, and can strike without provocation. It is highly intelligent. When erect it can stand up to 2 meters in height. In certain fertility rites in Burma, a woman desirous of offspring is required not only to approach the King Cobra but to plant a kiss on its mouth. If she is successful in doing so she will bear many children; if she fails, obviously none. The Indian cobra feeds on rodents, lizards and frogs. It bites quickly, and then waits while its venom damages the nervous system of the prey, paralyzing and often killing it. Like all snakes, N. naja swallows its prey whole. This species sometimes enters buildings in search of rodent prey. In its characteristic threat posture, the Indian cobra raises the front one-third of its body and spreads out its long, flexible neck ribs and loose skin to form a disklike hood, on the back of which there are markings resembling eyes. Indian cobras pay more attention to their eggs than is usual in snakes. The 8 to 45 eggs (usually 12 to 20) are laid in a hollow tree, a termite mound or earth into which the snakes tunnel. The female guards the clutch throughout the incubation period, leaving them only for a short time each day to feed. The Indian Cobra eats rats and mice that carry disease and eat human food. Also, cobra venom is a potential source of medicines, including anti-cancer drugs and pain-killers. This species is highly venomous, and its bite can be lethal.
Because it hunts rodents that live around people, it is often encountered by accident, and many people die each year from N. naja bites. Nagapanchami, or the Serpent Festival, occurs in India generally in August, after the monsoon rains. It is then that the full impact of Cobra power is manifest. Throughout the country Cobras are either brought into the villages and fed, or effigies of the snake are anointed and worshipped. Rarely has it ever been recorded that a fatality has occurred from snakebite during this occasion; the Cobras appear to sense they are being revered. Although there may be variations in the date and in the local traditions and modes of observance, Nagapanchami is celebrated according to ancient rites. The festival continues to testify to the feelings of awe and veneration which the Cobra has evoked in the minds of the population since the earliest remembered times. The Cobra is a graceful animal and appears always to carry an air of dignity and nobility. The physical charisma with which it is endowed is without doubt also one of the reasons why it, among all snakes, was chosen by the Nagas to be their totem. Snake charming is fascinating and at times mystifying. The eyes of the Cobra are hauntingly black and hypnotic; the snake is beautiful to watch when it is being worked by a skilled charmer. The hood is then spread and the markings apparent. The colours of the hood merge from black to brown to beige and, when framed against the sunlight, it appears almost translucent. No visit to India is complete without experiencing it. But the true essence of the art is not observed by the tourist. There are initiates of the Shiva cult who handle Cobras without any danger of being bitten. The ‘Commercial’ snakes, generally the Spectacled Cobra, have either had their fangs extracted or the poison sacs removed. In general their lifespan is shortened due to mouth rot. The performance, nevertheless, is spectacular and colourful.
A recent study found that more than 65 percent of the high rise buildings in the UAE were built with external panels containing combustible materials. Imminent updates to UAE building codes will prohibit the use of these materials, although plans to issue the updated codes have been delayed a number of times in order to make them even stronger. Significantly, the pending changes will not only restrict the use of combustible panels, they will also allow for manufacturers found providing unapproved building materials to be prosecuted. Despite these forthcoming changes, the fire risk posed by combustible cladding in the existing building stock remains a significant threat. Insurers and reinsurers recognize the threat, and the industry, along with other key stakeholders, is actively looking for solutions to minimize and mitigate this peril. High rises in the region were typically built with exterior wall panels fixed to the outside wall of the building with small studs. These panels commonly contained a polyethylene core that can easily ignite, even from a low ignition source. Once started, these fires can spread very quickly. The industry clearly has some options for dealing with this exposure, including: ceasing to insure such risks; revising underwriting guidelines and rates to reflect the high hazard nature of this risk; investigating practical and cost-effective measures for minimizing the potential for an external wall fire; or some combination of the above. At XL Catlin, due to our existing underwriting guidelines, such risks will need to be classified as a high hazard exposure, and this will have an effect on the terms and conditions as well as the capacity we would deploy. Over the years, test protocols (ASTM E-119 and NFPA 285) for determining the flammability of external wall panels have been incorporated into building codes in some countries, particularly the U.S. and the UK. As a result, the building stock in these countries is highly resistant to an exterior wall fire. Other countries, including the UAE, are following suit and making moves to regulate the use of exterior panels with combustible cores. Saudi Arabia, China and South Korea, for example, have already updated their codes to greatly restrict the use of panels with combustible cores. To minimize the risk posed by existing combustible panels, high rise building owners and managers should be aware of the precise composition of their exterior panels, especially in countries like the UAE that historically have not required the use of noncombustible materials. If the panels have an ordinary polyethylene core, precautions should be taken to lessen the risk of the building becoming engulfed in fire. - Electrical systems on the exterior of the building should be checked and shielded where necessary. - Contractors working around the exterior of a building should be specifically alerted to this risk, and extra care should be taken when using, for example, welding equipment or anything else that could generate a spark. - For residential buildings with terraces, limits should be placed on the use of grills, plastic and wood furniture, and storage of combustible household items. Fire safety risk engineers can help owners and managers assess a building’s risk for an external wall fire, and identify appropriate steps to further minimize this threat. Insurance underwriters should also be aware of this risk – for both new and existing buildings – and base their underwriting and pricing decisions accordingly.
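To make the underwriting logic above concrete, here is a minimal Python sketch of a cladding hazard screen. It is purely illustrative: the decision rule, the material names, and the mitigation threshold are assumptions for demonstration only, not XL Catlin's actual underwriting guidelines.

# Illustrative cladding hazard screen; an assumed rule of thumb,
# not any insurer's actual underwriting model.

def cladding_hazard(core_material: str, mitigations: int) -> str:
    """Classify an external-wall-fire exposure for a high rise.

    core_material: e.g. "polyethylene", "fire-retardant", "mineral"
    mitigations: count of precautions in place (shielded exterior
                 wiring, hot-work controls, terrace restrictions, ...)
    """
    combustible = core_material.lower() == "polyethylene"
    if not combustible:
        return "standard hazard"
    # A combustible core stays high hazard; mitigations can improve
    # terms and pricing but do not remove the classification.
    return "high hazard (mitigated)" if mitigations >= 3 else "high hazard"

print(cladding_hazard("polyethylene", 2))  # high hazard
print(cladding_hazard("mineral", 0))       # standard hazard

The point of the sketch is that, under such guidelines, mitigation measures shift the terms applied to a risk rather than reclassifying it outright.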
A variety of retrofit options are also currently being studied to improve fire safety. These include installing noncombustible panels at specific intervals to create fire breaks, and adding sprinklers at certain elevations to stop fires from spreading. As a whole, the industry needs to balance providing underwriting capacity to assist its clients and benefit the broader economy while maintaining underwriting discipline and a focus on profitability, which ultimately protects the long-term sustainability of the industry. The region has also seen an uptick in losses due to natural events, particularly in Saudi Arabia and the UAE, where severe weather events and flooding are becoming more frequent. While the scale of these events and losses remains relatively small in comparison to global standards, insurers and reinsurers have historically treated natural catastrophes in the region as a low risk. As a result, the cost of insuring against natural perils in the GCC region has been quite low compared to other parts of the world. Insurers and reinsurers have also not established significant reserves to respond to natural catastrophes in this region. While time will tell, these recent severe weather events could well represent a “new normal” which risk managers, insurers and reinsurers will have to confront. These developments suggest a different situation from the start of the year. At that time, insurance and reinsurance capacity dedicated to the region was on the rise, new projects were being canceled or deferred, and many existing facilities were operating at reduced capacities. For insurance buyers, that meant increased competition for their business and downward pressure on rates. However, as the risk and macroeconomic landscape changes, strengthening the relationship between the businesses driving the economy and the insurance sector is paramount. XL Catlin’s view is that the focus needs to broaden beyond pure rating considerations to encompass enhanced risk management practices based on clear, quantifiable goals. An increased focus on risk management will also allow clients and (re)insurers to understand current exposures better, and identify cost-effective and practical opportunities to mitigate different risks over the short- and long-term. For example, fire safety engineers can help owners and managers assess a building’s exposure to an external wall fire, and identify appropriate steps to minimize this threat. Property risk engineers can likewise offer significant guidance on reducing the impact of severe weather events, drawing on expertise developed in other parts of the world where such occurrences are much more widespread. Insurance penetration in the GCC region lags significantly below global averages. However, given the changes the region is witnessing, an opportunity exists to strengthen the ties between risk managers and the insurance industry. By working more closely together – and particularly with a heightened focus on risk management – companies throughout the GCC region will be better equipped to address today’s challenges, and have greater resilience to confront tomorrow’s threats.
SOMERSET HOUSE AND KING'S COLLEGE. On all the pride and business of the town."—Cowley. Old Somerset House—Rapacity of the Protector Somerset—John of Padua, Architect of the Original Building—Downfall and Execution of the Protector—Somerset House assigned to the Princess Elizabeth—Afterwards the Residence of the Queens of England—Its Name changed to Denmark House—Additions made by Inigo Jones—Banishment of the Capuchin Fathers, and Desecration of the Chapel—The Services in the Chapel restored, and Pepys' Account of them—Catherine of Braganza—Attempt to implicate the Royal Household with the Murder of Sir Edmundbury Godfrey—The Cemetery—Description of the Old Buildings—Their Demolition—Building of New Somerset House—Amusing Tradition relative to Somerset House—King's College. The building so familiar to Londoners, old and young, by the name of Somerset House, occupies the space formerly covered by four or five buildings of note in their day, of some of which we have already spoken. It appears from Stow that in order to make a level space of ground to hold the fair new palace which he purposed to erect—"that large and goodly house now called Somerset House"—the Protector Somerset pulled down, and "without any recompense," the Inns, as they were called, of the Bishops of Chester, Llandaff, Lichfield and Coventry, and Worcester, with all the tenements adjoining, and also the old parish church of St. Mary's. The original Somerset House, it is almost needless to remark, took its name from the Duke of Somerset, the Lord Protector of the reign of the boy-king, Edward VI.; but the present building is of much more recent date. By the attainder of Somerset it reverted to the Crown, and it was frequently tenanted by Queen Elizabeth. Anne of Denmark, the wife of James I., and Catherine of Braganza, the neglected queen of Charles II., both in succession held their courts within its walls. At length it came to be appropriated by usage as a residence to the queens-dowager, and was frequently appointed as a temporary residence for such of the ambassadors of foreign princes as the later Stuarts and the earlier Brunswick sovereigns cared especially to honour. Mr. A. Wood, in his "Ecclesiastical Antiquities of London and its Suburbs," is of opinion that the Protector Somerset already possessed some property on the site of Somerset House when he began the great work of pulling down his neighbours' houses around their ears and his own. But be this true or not, he seems to have known, or at all events to have made, little distinction between meum and tuum, and when he had once resolved on his end—namely, to build a palace on this central site, at a bend commanding the view of the river from London Bridge to the Abbey at Westminster—he was not likely to be at much loss as to the means to be employed. Wide space and materials were all that he needed, and these he soon obtained in a manner such as we should now probably distinguish by the term "by hook or by crook." And further, in order to complete the undertaking in a thoroughly substantial and, as it would now be called, "first-class" style, he pulled down also the charnel-house of Old St. Paul's and the chapel over it, together with a structure in "Pardon Churchyard, near the Charterhouse, throwing the dead into Finsbury Fields," and the steeple, tower, and part of the church of the Priory of St. John of Jerusalem at Clerkenwell. With these materials he commenced his work, unblessed by either the Church, or the people, or the poor. 
Bishop Burnet, alluding to the Protector's rapacity, admits that "many bishops and cathedrals had resigned many manors to him for obtaining his favour," though he adds, "this was not done without leave obtained from the king." He also accuses the Protector of selling chantry lands to his friends at easy rates, for which it was concluded he had great presents. The rise of Somerset House exposed its owner to the reflection that "when the king was engaged in such wars, and when London was much disordered by the plague that had been in it for some months, he was then bringing architects from Italy, and designing such a palace as had not been seen in England." Pennant tells us that the architect employed by the Protector Somerset in the erection of Somerset House was the celebrated John of Padua, the architect of Longleat, in Wiltshire, who is said, in Walpole's "Anecdotes of Painting," to have held, under Henry VIII., the post of "Devizer of His Majesty's Buildings." Whether the Protector Somerset ever resided in the palace he had thus been at so much trouble in building, there is some room to doubt. The building itself was commenced in 1546–7, and as soon after as the month of October, 1548, at which time the works were still going on, he was deprived of the Protectorship and committed to the Tower. He was, however, pardoned after two years' imprisonment, and restored to the Council; but in the following year he was again committed to the Tower on charges of high treason, and was beheaded on Tower Hill in January, 1552. One of the grounds of dissatisfaction at first exhibited against him appears to have been "his ambition and seeking of his own glory, as appeared by his building of most sumptuous and costly buildings, and specially in the time of the king's wars, and the king's soldiers unpaid." On the attainder of the Duke of Somerset his palace was, of course, forfeited to the Crown, and his nephew, King Edward, appears to have assigned it to his sister, the Princess Elizabeth, for her use whenever she visited her sister's court. But when she came to the throne, she preferred the regions of Whitehall and St. James's, and fashion followed in the wake of royalty westwards. At this period the building is spoken of as "Somerset Place, beyond Strand Bridge." On Elizabeth's succession to the throne some partial restoration of Somerset's property was probably made, for Somerset Place became the residence of the Dowager Duchess. Elizabeth seems to have lived here occasionally, most probably, however, at the expense of her kinsman, Lord Hunsdon, to whom she had given the use of it. Such, at all events, was the opinion of Pennant. Stow tells us that the queen of James I. made this house her palace, and that she entertained the king with a feast within its walls on Shrove Tuesday, 1616, when the latter was so delighted at her reception of him that he ordered it to be called Denmark House in her honour. The palace was much improved and beautified by the queen, who added much to it in the way of new buildings, Inigo Jones being called in to furnish the designs. She also brought a supply of water to it by pipes laid on from Hyde Park. In 1626 it was settled for life on Henrietta Maria, the queen of Charles I., for whom it had been stipulated on her marriage that she should be allowed the free practice of her religion, having been born and brought up a pious Catholic. 
Accordingly it was fitted up for the reception of herself and her household, including, of course, a body of priests to say mass daily, and to celebrate the offices of the Church. The priests in attendance on the queen were Capuchins. They had succeeded to the Oratorians, who had been expelled by the influence of Buckingham (Steenie) with his royal master. The foundation-stone of the chapel was laid by the queen, the work being carried out under the direction of Inigo Jones. The first stone was laid with great ceremony. From six in the morning there was a succession of masses daily till nearly noon, and as it was difficult to approach the sacraments elsewhere, except clandestinely, the confessionals were thronged constantly. On Sundays and festivals there was a controversial lecture at noon, and soon after followed vespers, sung by the Capuchins and musicians in the galleries. When vespers were over, there was a sermon on the gospel of the day, and lastly, compline. The chapel seems to have been also turned to account constantly in other ways. There were frequent "conferences" for the edification of Catholics and the instruction of Protestants, and on three days in each week the Christian doctrine was taught catechetically in English and in French. The consequence was that there were frequent conversions to the ancient faith, and the name of the chapel began to offend the ruling powers. Accordingly, when the queen was absent in Holland, it was resolved by the authorities to make an assault upon the place. The Capuchin fathers were silenced and driven out, then imprisoned, and at length banished; their dwelling itself was pulled down, and the chapel desecrated, in spite of its being the property of the queen. The Capuchins were brought back, and the chapel was repaired, when Henrietta Maria returned to England, a widowed queen, after her son's restoration. Here, in September, 1660, died the Duke of Gloucester, from the small-pox; and hence his body was taken by water "down Somerset Stairs," as Pepys tells us, to Westminster, to be buried in the Abbey. Pepys, in his "Diary," gives an account of a service held in the chapel of Somerset House in 1663–4. "On the 24th, being Ash Wednesday, to the Queen's chapel, where I staid and saw mass, till a man came and bade me go out or kneel down; so I did go out; and thence to Somerset House, and there into the chapel, where Mons. D'Espagne, a Frenchman, used to preach." In October he again visits Somerset House, and saw the queen's new rooms, "which are most stately and nobly furnished!" In January, 1664–5, he went there again, and was shown the queen's mother's chamber and closet, "most beautiful places for furniture and pictures." In consequence, however, of the plague in the June following, the Court prepared to leave Whitehall and Somerset House. The Queen went to France, and there died in 1669. On the death of Charles II. in 1685, Somerset House became the residence of Catherine of Braganza, who lived here until her return to Portugal in 1692. It had previously belonged to her as Queen Consort, and during the ultra-Protestant furore, which exhibited itself for some years prior to the Revolution, attempts were made to implicate her household in the pretended Popish Plot of the time, and to connect the mysterious murder of Sir Edmundbury Godfrey in 1678 with persons in her service. 
There is so much doubt and uncertainty mixed up with the story of the murder of Sir Edmundbury Godfrey, that it is almost impossible to winnow the truth from the falsehood, owing to the perjuries of Titus Oates and his confederate, Bedloe, the discharged servant of the Lord Belasyse. But it appears clear that the worthy justice of the peace was inveigled to a spot close to "the Watergate at Somerset House," under the pretence of his presence being wanted to allay a quarrel, and that he was strangled on the spot with a twisted handkerchief. His dead body, it would seem, was afterwards carried to Primrose Hill, at that time a retired and lonely spot, where a sword was run through it. For their presumed share in this murder three persons were hung at Tyburn in 1679. An attempt was made by Oates and Bedloe to implicate the Jesuits in the plot, and even the Queen, who then resided at Somerset House; but Charles, with his usual wit, refused to listen to the charge, telling Burnet that though "she was a weak woman, and had some disagreeable humours, she was not capable of a wicked thing." We have already said that, under the Stuarts, Somerset House was frequently appointed for the reception of ambassadors whom the sovereign and the court delighted to honour. The last foreigner of importance who lodged there was the Venetian ambassador, who made a public entry into it in 1763, shortly before the building was pulled down. From the time of the departure of Catherine of Braganza, Somerset House ceases to possess any interest in its strictly palatial character. It continued as an appurtenance of successive queens down to the year 1775, when Parliament was recommended, in a message from the Crown, to settle upon Queen Charlotte the house in which she then resided, "formerly called Buckingham House, but then known by the name of the Queen's House," in which case Somerset House, already settled upon her, should be given up and appropriated "to such uses as shall be found most useful to the public." Mr. Wood, in his "Ecclesiastical Antiquities," tells us that in the reign of James II., Dr. Smith, one of the four vicars-apostolic who acted as Catholic bishops in England, was consecrated at Somerset House. There was also in the grounds of Somerset House a small cemetery, in which the Catholic members of the Queen's household were buried. In 1638 Father Richard Blount, who had "reconciled" Anne of Denmark, the consort of James I., to the Roman Church, was buried here by the Queen's permission. The value of such a permission at that time may be inferred from the fact that, owing to the severity of the penal laws, Catholics were for the most part obliged to be buried in Protestant cemeteries, with rites distasteful to themselves; and they were only too glad when the priest who attended them in their last illness could bless a little mould which was put into their coffin, and perform the usual ceremonies in secret, and even at a distance from their bodies. A map and ground-plan of old Somerset, or Denmark House in 1706, shows that it consisted of one large and principal quadrangle, called "the Upper Court," facing the Strand. Its out-buildings were very extensive, and still more so its terraced gardens, facing the Thames, with stairs at either end. In the southern front of the quadrangle named above were the Guard Chamber, with a waiting-room, the Privy Chamber, the Presence Chamber, from the west end of which a flight of stone steps led down into the garden.
On the western side, from the Strand nearly to the riverside, there ran along Duchy Lane (now absorbed in Wellington Street South) a row of coach-houses, stables, and store-yards. To the south-east angle of the chief quadrangle there was a passage down the "Back Stairs" to a second, or lower court, two storeys lower than the upper court. Here were the more private apartments of the queen—the "Coffee Room," "Back Stair Room," "Oratory," dressing-room, bed-chamber, and "Withdrawing Room," the two last-named facing the gardens and commanding a fine view of the reach of the river. Still further to the east, extending across what now is part of King's College, as far as Strand Passage, or Lane, were a variety of other buildings, occupied by the members of the Court, called the French Buildings, connected with the Yellow Room, the Cross Gallery, the Long Gallery, and leading to a "pleasance" which opened into the garden. A print in the Gentleman's Magazine, showing some of these last-named buildings before they were pulled down, together with the new building of Sir William Chambers on the north, leads us to suppose that, though interesting as a specimen of the style of Edward VI., their removal was no great loss from an architectural point of view. And half the garden just reflects the other." This was literally true here, for in front of both the greater and the lesser quadrangle there were square gardens, with straight gravel walks on each side, and three avenues of trees; a handsome flight of stone steps, with iron gates; and on either side some handsome statues of Tritons and Nereids. Along the river ran a raised terrace, with a heavy dwarf wall. In a print of the river front of Somerset House, dated 1706, there appears moored a little way off the stairs a sort of house-barge, under which is written "The Folly," and a queer-shaped wherry, approaching the form of a gondola. "I am extremely pleased," observes Stow, "with the front of the first court of Somerset House, next the Strand, as it affords us a view of the first dawning of taste in England, this being the only fabric that I know which deviates from the Gothic, or imitates the manner of the ancients." How amused would Pugin or Sir Gilbert Scott be to read this statement! and also the sentiment which follows:—"Here are columns, arches, and cornices that appear to have some meaning; if proportions are neglected, if beauty is not understood, if there is in it a strange mixture of barbarism and splendour, the mistakes admit of great alleviations." In all probability the architect was an Englishman, and this his first attempt to refine on the work of his predecessors. It is currently believed that James Stuart, the elder "Pretender," was at one time secreted in old Somerset House; and there is an allusion to this belief in the Town Spy, published in 1725:—"The Pretender's residing at Somerset House in the year of Peace was blabbed out by one of the Duke d'Aum—nt's postilions." The demolition of the old building was commenced as soon as an Act could be passed, and Sir William Chambers was appointed architect of the new buildings. They were commenced in 1776, and in 1779 one of the fronts was completed. The site occupies an area of upwards of 800 feet by 500. The front towards the Strand consists of a rustic basement of nine arches, supporting Corinthian columns, and an attic in the centre, and a balustrade at each extremity.
Emblematic figures of Ocean and of the eight principal rivers of England in alto-relievo adorn the keystones of the arches. Medallions of George III., Queen Charlotte, and the Prince of Wales were formerly placed over the three central windows of the first floor. The attic is divided into separate portions by statues of Justice, Truth, Valour, and Moderation; and the summit is crowned with the British arms, supported by emblematical figures of Fame and the genius of England. The chief feature of the river front of Somerset House is its broad terrace, about 600 feet in length, raised on rustic arches, and ornamented with emblematic figures of the Thames. The centre of the large quadrangle opposite the chief entrance from the Strand is occupied by a gigantic piece of bronze work, executed by Bacon. The principal figure is a fanciful and almost allegorical representation of Father Thames. The building affords at present accommodation during the working hours of the day to about 900 Government officials, maintained at an annual cost of something like £275,000, and belonging to the Audit Office, the office of the Registrar-General, and the offices connected with Doctors' Commons. In the north front the annual exhibition of the Royal Academy was held from 1780 down to about the year 1837, when it was transferred to the National Gallery in Trafalgar Square. The use of apartments in Somerset House for the meetings of the society was also granted in 1780. The Royal Society removed from Somerset House to Burlington House, Piccadilly, in 1856. The Society of Antiquaries, and also the Royal Astronomical and the Geological Societies, have also at various times occupied apartments in Somerset House. "The royal patronage of the arts," writes Malcolm, in 1806, "is most conspicuous in this grand building, which contains the apartments of the Royal Society, the Society of Antiquaries, and the Royal Academy of Painting. The two former assemble on the east side of the vestibule or entrance, and the latter on the west." The Society of Antiquaries dates its origin from the year 1751. Malcolm tells us that previous to that time several unsuccessful, or at least interrupted, attempts had been made, in the reigns of Elizabeth, James, and Charles I., to establish such a society, but nothing effective was done until the reign of George II., who granted a charter, styling himself the founder and patron of the Society of Antiquaries, appointing Martin Folkes, Esq., as its president, and limiting the society's permanent income to £1,000 a year. The president must be assisted by a council of twenty members, half of whom are elected annually, along with himself, and the officers and members of the society are required to possess an accurate knowledge of the history and antiquities of their own and foreign nations, and to be "loyal and virtuous members of the community." The Archbishop of Canterbury, the Lord Chancellor, the Lord Privy Seal, and the Secretaries of State for the time being, are visitors of the society. The number of fellows is not limited by their charter. At their meetings descriptions and dissertations are read, and illustrative drawings are exhibited. Their transactions as a body are under the control of an elective director in the arrangement of communications to be published. Their official publication, in a handsome quarto form, is known as the "Archæologia." 
Pennant writes, in 1806: "The Royal Society and the Society of Antiquaries both hold their meetings here; and here also are annually exhibited the works of the British painters and sculptors." Mr. John Timbs, in his "Romance of London," tells us an amusing traditionary story relative to this place:—"A little above the entrance-door to the Office of Stamps and Taxes is let into the wall a white watch-face. Of this it is told that when the wall was being built a workman fell from the scaffolding, and was saved from being killed only by the ribbon of his watch, which caught upon a piece of projecting ornament. In thankful remembrance of his wonderful preservation, he is said, and is believed to this day, to have inserted his watch in the face of the wall." A very pretty story, indeed, if it was only true. But fortunately for the age of poetry, Mr. Timbs lets us into the real secret of the watch, which is especially prosaic. "It was placed," he says, "in its present position, many years ago, by the Royal Society, as a meridian mark for a portable transit instrument in one of the windows of the ante-room;" and the late Admiral W. H. Smyth, the eminent hydrographer to the Admiralty, would often tell his friends that, having assisted in moving the instrument, he well remembered the watch being inserted in the wall. We fear, therefore, the poetic view must be dismissed. Running parallel with the buildings forming the west side of the quadrangle, and having its front towards Lancaster Place, a new wing was built in 1857, from the designs of Mr. Pennethorne, in a style of architecture corresponding with the rest of the building. Here are the offices of the Inland Revenue Department, and in the basement several rooms are set apart for the printing of postage and other stamps, postal wrappers, envelopes, &c. The vaults of Somerset House were formerly used for the purpose of keeping some of the various public records, which happily have now all been collected into one repository in Fetter Lane. The whole of the east wing was left incomplete by Sir William Chambers, but in 1829 this part of the edifice was finished from the designs of Robert Smirke, R.A., and it now forms King's College, which was founded by royal charter in the previous year. The entrance is a neat, though confined, semi-circular archway from the Strand, over which stand the Royal Arms, supported by figures symbolical of Wisdom and Holiness, with the motto, "Sancte et Sapienter." The building extends from the Strand to the Thames, and occupies a considerable area of ground. The interior, which is very capacious, is well calculated for its intended objects. The centre of the principal floor is occupied by the chapel, under which is the hall for examinations, &c., and a new triangular wing one storey high, built in a line with Somerset House and fronting the Thames Embankment, adjoining the residence of the Principal, is now (1874) in course of erection.
Forty-two members compose this council, nine of whom are the official governors; one is the treasurer, eight are life governors, and the other twenty-four, of whom six go out every year, are elected by the Court of Proprietors, from a list prepared by the Governors. There are certain endowments, producing in all an annual income of £880, which are specially appropriated to certain prizes, scholarships, and professorships, classical and scientific; but the College possesses no endowment applicable to general purposes, and the whole of the expenditure required for the ordinary every-day work of the College has to be defrayed out of the fees paid by the students. The general education of the College is carried on in six distinct departments—viz., the theological department; the department of general literature and science (divided into the classical, the modern, and the Oriental); the department of the applied sciences; the medical department; the evening classes; and finally, the school. This last is in the hands of a head master, subject to consultation with the principal, who has the general supervision of the whole College. The scientific professorships in the department of general literature and science are thirteen in number, of which two — physiology and practical physiology — are held by the same individual. There is also a lecturer in photography. It should be added that the education given here is strictly in accordance with the principles of the Church of England. The students of King's College are divided into two classes—the "matriculated" and the "occasional." The former are those who are admitted to the full prescribed course of study, while the latter, through inability to attend the whole course, devote themselves to the pursuit of one particular subject, as at the two great universities of England. The principals of King's College in the forty years which have passed since its foundation have been distinguished theologians: Bishops Otter and Lonsdale, Canon Jelf, and Canon Barry.
The hexagonal shaft of the pencil must be straight, as bowing can lead to “shudder” in hand-crank sharpeners and irregular collars produced by pocket sharpeners. The goal of this task is for the student to sharpen the classroom pencils. This can become a job in a classroom where the pencils are shared materials. Pencil sharpening hell: yes, it really does exist. Covering graphite, colour, charcoal, carbon and pastel pencils (phew!), check out Phil Davies's short video for practical tips on how to sharpen like a pro and make sure you get long life from your drawing materials.
This month’s object comes from a shipwreck of 129 years ago. This ceramic dish has a distinctive floral pattern and is stamped on the reverse with ‘Ance Manufacture Imperial & Royale. Mouzin Lecae & Cie. Nimy’ alongside a shield and wreath. The item was retrieved from the SS Duke of Buccleuch in 1989, some 19 miles off the coast of Littlehampton, along with a variety of other glassware and ceramics from established Belgian manufacturers, including the little-known Belgian glassmaker K. L. Dhur & Son. The dish is from the Belgian village of Nimy where, still today, you can buy a mixture of porcelain, ceramic and earthen works, though prices aren’t cheap! The company Imperiale & Royale, in partnership with Mouzin Lecat & Cie., manufactured many pottery items of this design from the 1850s until the turn of the century. The cargo ship carrying this month's item was the SS Duke of Buccleuch, which had been in service for 15 years when it was wrecked on 7th March 1889. Built in 1874, the Buccleuch was an early steamship which, in the new age of imperial rule and trade for the British Empire, was displacing the old wooden wind-powered ships that had dominated British waters for almost 200 years. It weighed in at 3,000 tons and was 115 m in length. Carrying up to 600 tons of ceramics and glassware from Belgium and 2,500 tons of machinery and railroad equipment from Middlesbrough, this was a heavy-hitting vessel that you would not want to be struck by in the middle of the ocean. And here lies one of the great nautical mysteries! A wooden sailing ship headed for London from New York, known as the Vandalia, came into contact with the Buccleuch on the night in question. Given that the great steamer's weight dwarfed the Vandalia's, it was the wooden ship that ought to have sunk from the impact; instead it was the Buccleuch, with all 47 of her crew, that went down. The two vessels were said to have collided and come away from each other with ease. The Buccleuch sailed away into the distance and then met her watery end. The Vandalia sustained major damage to her port bow and was deemed a complete wreck, but she did not sink. The great mystery of the SS Duke of Buccleuch, and the 47 crewmen lost to the English Channel almost 18 miles off the Littlehampton coast, suggests that both captains were at fault, as neither had taken proper precautions when approaching the other. Lights were not lit to signal either ship to move accordingly out of each other's path. Thus the ships collided. The petroleum barrels in her cargo kept the Vandalia afloat long enough for all hands to abandon ship. The Buccleuch was struck on the starboard side and sank like a brick, settling on the seabed in a vertical position. Surprisingly, the treasures of the Buccleuch are still very much intact and can be appreciated as a moment in time for this object of the month.
A syringe device having a barrel enclosing a first chamber therein containing coaxially aligned primary and secondary plungers. A connectable mixing and delivery tip encloses a second chamber therein and is connected to the barrel by a coupling. A slidable valve having an aperture therethrough to allow passage of air into and out of the second chamber is disposed between the two chambers. The primary plunger is slidable out of one end of the syringe for the selective application of pressure into the syringe barrel. As the primary plunger is urged down into the syringe barrel, air is forced around the secondary plunger and through the valve aperture into the second chamber wherein the substances contained therein are mixed by turbulent aeration. Continued application of pressure on the primary plunger drives the secondary plunger into the slidable valve to plug the aperture in the valve thereby ceasing the agitation of the contents in the second chamber. Further urging of the primary plunger drives the secondary plunger and the slidable valve up into the second chamber so that the mixed contents are expelled out of the syringe device through a delivery tip. With such a syringe mixing apparatus, substances can be mixed and expelled from the syringe by continued movement of the primary plunger into the syringe barrel. 1. Field of the Invention Broadly conceived, the present invention relates to syringes. More particularly, the present invention relates to a syringe mixing apparatus wherein a plurality of substances can be first mixed together and then expelled all from a single delivery tip. 2. The Prior Art In the field of dentistry, there are many types of procedures that require the mixing of two or more substances before the mixed compound can be used in a particular dental procedure. A common practice in the dental arts is to measure the separate substances drop-wise into a well or mixing dish and to then mix the separate substances together using an applicator brush, which in turn is then used to apply the mixed compound to the desired teeth surfaces. As will be appreciated, in dentistry it is often necessary to mix relatively small amounts because of the small surface areas that are to be worked upon. Furthermore, the materials which are mixed are often expensive and therefore, rather than mix larger quantities, relatively small quantities are mixed repeatedly so as not to waste undue amounts of the materials in question. Repeated mixing, of course, becomes tedious and time consuming. The described procedure also suffers from other disadvantages. For example, when mixing separate substances in a well or mixing dish with an applicator brush, the applicator brush must be moved from the mixing well into the patient's mouth for purposes of applying the mixed substance to the tooth surfaces, then back to the mixing well to obtain fresh quantities of the mixed substance and then back to the patient's mouth again. This type of back and forth motion between the patient's mouth and mixing well means that each time the brush is removed from the patient's mouth, the applicator brush must be repositioned with the tooth surface when it is inserted back into the patient's mouth. Furthermore, there is added risk of inadvertently touching the lips, gums, or other parts of the mouth by virtue of having to reposition the applicator brush with the tooth surface each time the brush is removed back to the mixing well to obtain fresh quantities of the mixed substance on the brush applicator.
Not only does this create the possibility of irritating the patient's lips, gums, or other tissue, but it also creates the possibility of contaminating the mixed substance with saliva, so as to change the composition of the mixed substance in an undesirable way. Further problems with the described procedure arise in connection with the potential bacterial contamination of the mixing well or dish by repeated moving of the applicator brush from the patient's mouth to the mixing well. This creates the need to sterilize the mixing well, which adds additional expense and time to office procedures, as well as creating potential risk of contamination for other patients if the mixing well is not properly sterilized between patients. Alternatively, if the mixing well is made of materials so as to be disposable, this adds an increased burden on the environment since typically such materials are made of plastics. Still other problems in connection with the described procedure may arise where the substances that are mixed are of a relatively low viscosity. In order to obtain a sufficient quantity of the material when using an applicator brush, the brush tip typically has to be made somewhat larger so that the required quantities of materials can be carried to the mouth. However, the larger size applicator brush makes precise placement and working of the materials more difficult and thus results in a less precise delivery of the materials to the desired tooth surfaces. Yet a further problem is that even once the substances are mixed in the mixing well, for some types of substances evaporation can become a problem. For example, it is common to mix hydrophilic resins in acetone for use in certain types of bonding procedures used in dentistry. The acetone can evaporate rapidly if it is left sitting very long in a mixing well, and this can adversely change the concentration of the mixed substance so as to change the bonding characteristics of the mixed substance. This further complicates the procedure and may require repeated mixing rather than using larger quantities of the mixed substance. As will be appreciated, this, of course, also adds to the tedium, expense and the difficulty of the overall procedure. The apparatus of the present invention seeks to resolve the above and other problems which have been experienced in the art. More particularly, the present invention provides an improved syringe mixing apparatus as is evidenced by the following advantages realized by this invention over the prior art. The features of the present invention will become more fully apparent from the following description and appended claims taken in conjunction with the accompanying drawings. Additional advantages of the invention may be learned by the practice of the invention. Briefly described, in one presently preferred embodiment of the apparatus of the present invention, the syringe apparatus is comprised of a standard syringe barrel with a plunger therein. The syringe barrel is couplable, for example, by means of a standard luer coupling, to a disposable delivery tip which also serves as a mixing chamber in which separate substances are to be mixed. Preferably the syringe barrel and delivery tip are constructed of materials that are transparent or semi-transparent to permit visual inspection of the contents and actuation of the plunger mechanism.
A small secondary piston which is preferably elastomeric is disposed in the end of the delivery tip after the substances have been placed therein, and the secondary piston has a small aperture through it for providing passage of air into and out of the mixing chamber. A primary plunger is situated in the syringe barrel in conventional fashion and has a primary piston which is also preferably an elastomeric material in fluid-tight engagement with the interior wall of the syringe barrel. A secondary plunger is also situated within the syringe barrel and is moveable in tandem with the primary syringe plunger. In one presently preferred method for operating the apparatus of the present invention, as the primary syringe plunger is moved through the syringe barrel for a first distance, air is pressurized and is injected through the aperture in the secondary piston into the mixing chamber of the delivery tip to create bubbles, and hence a mixing turbulence which mixes the substances contained in the delivery tip. After the primary plunger has been moved through the first distance, continued movement of the primary plunger then causes contact with the secondary syringe plunger. This in turn seals the aperture of the secondary piston, so that continued movement of the primary plunger through a second distance by pushing it farther into the syringe barrel causes ejection of the now mixed substances through the delivery tip by virtue of the tandem movement of the secondary plunger and the secondary piston which has now been contacted and sealed at the end of the secondary plunger. Thus, as will be appreciated, by a single movement of the primary plunger, the substances placed in the mixing chamber of the delivery tip are first mixed by injection of air into the delivery tip to create a mixing turbulence, and then continued movement of the syringe plunger can be used to effect delivery of the mixed materials through the delivery tip. In another preferred method for using the apparatus of the invention, prior to sealing the end of the mixing chamber of the delivery tip with the secondary piston, the substances which are placed into the mixing chamber can be mechanically mixed, for example, by using a small stir rod. The secondary plunger can thereafter seal the mixed substances in the delivery tip. According to this method for using the apparatus, there is then no need to inject air for purposes of mixing, and the primary plunger is used to simply contact the secondary plunger so as to effect sealing of the secondary piston and subsequent delivery of the mixed substances through the delivery tip by continued movement of the primary syringe plunger. Figure 8 is an enlarged cross-sectional view of a portion of Figure 4 more particularly illustrating the slidable valve or secondary piston; and Figure 9 is an enlarged cross-sectional view of a portion of Figure 1 taken along line 9-9 illustrating the air flow passageways around the buttress end of the secondary plunger. Reference is now made to the drawings wherein like parts are designated with like numerals throughout. In the following description, the inventive aspects of the syringe apparatus are illustrated in the drawings and in the accompanying description which follows in the context of a device which has been developed and designed particularly for application in the field of dentistry.
However, as noted above briefly in the objects and summary of the invention, the broad inventive principles of this invention will also be able to be usefully employed by others skilled in the art in the context of any of a variety of different industrial or other applications. Thus, while the following detailed description describes the invention as particularly applied to the field of dentistry, that is not to be construed as limiting with respect to the broad scope of the invention as claimed. With reference first to Figures 1 and 2 taken together, the overall syringe apparatus is generally designated at 10. In one aspect of the invention, the syringe apparatus is comprised of a barrel means that forms first and second chambers and wherein the second chamber is adapted for receiving first and second substances that are to be mixed therein and then subsequently delivered to a desired surface. In the illustrated embodiment, the barrel means is shown by way of example at the bracket indicated at reference numeral 12 in Figure 1, and is further comprised of a standard syringe barrel which is generally designated at 16 and a mixing and delivery tip which is generally designated at 14. In another aspect of the invention, the syringe apparatus is also comprised of a plunger means, which is illustrated by way of example at the bracket 18 in Figure 2. The plunger means in turn is comprised of a primary plunger member which is generally designated at 20 and a secondary plunger member which is generally designated at 22. With continued reference principally to Figure 2, the syringe barrel generally designated at 16 has an elongated syringe body 24 which, as shown best in Figures 1 and 4, forms an enclosure for a first chamber 25. At the distal end of the syringe body 24 there is a threaded luer coupling member 26. At the proximal end of the syringe body 24, there is a generally circular, flat disc 28 formed around the inlet opening to the syringe body 24 and which is used for purposes of applying finger pressure when pushing the primary syringe plunger 20 through the syringe barrel 16. Still referring principally to Figure 2, a separately couplable mixing and delivery tip 14 is comprised of an elongated body 30 which is conically tapered at its distal end as shown at 32 and which terminates in a slender, curved delivery tip 34 which has applicator bristles 36 extending out of the end thereof. The proximal end of the elongated body 30 has a threaded luer coupler 38 which is adapted to be coupled to the threaded luer coupling 26 on the syringe barrel 16 so that the mixing and delivery tip 14 can be joined at the distal end of the syringe barrel 16. Thus, coupling members 38 and 26 together serve as a coupling means. Delivery tip 34 could, of course, be any type of tip, with or without bristles, or with a needle, as required for a particular application. As shown best in Figures 1 and 4, the interior of the elongated body 30 forms a second chamber 15 which is adapted for receiving first and second substances to be mixed therein. As hereinafter more fully described, the substances which are to be mixed could be any type of fluids, such as a liquid which is to be mixed with air, two different liquids which are to be mixed together, or a liquid and a solid or powdered substance which are to be mixed together. 
Disposed between the first chamber 25 of the syringe barrel 16 and the second chamber 15 of the mixing and delivery tip 14 is a slidable valve or secondary piston which is generally indicated at reference numeral 42. The valve or secondary piston 42 is preferably constructed of a soft, elastomeric material and is dimensionally sized so that the rings 46 and 48 which are disposed at opposite ends of the valve body 44 will form a fluid-tight fit when placed within the second chamber 15 of the mixing and delivery tip 14. The valve or secondary piston 42 also has an aperture 50 formed through the center of it, which is shown best in Figures 2, 4, and 8 taken together. With particular reference to Figure 8, the aperture 50 is diametrically enlarged through a portion of the valve or secondary piston 42 so as to receive the distal end 64 of piston rod 62 of the secondary plunger member 22. A second portion of the aperture 50, as illustrated in Figure 8, is smaller in diameter and extends through the upper portion of the valve or secondary piston 42. The aperture 50, as more particularly described hereinafter, provides for passage of air to and from the second chamber 15. With reference again to Figure 2, the primary plunger member generally designated at 20 is comprised of an elongated piston rod 52 which has an elastomeric piston 54 at the distal end thereof. The piston 54 has enlarged rings 56 and 58 which form a fluid-tight fit when inserted into the first chamber 25 of the syringe barrel 16. The proximal end of the piston rod 52 has an enlarged circular disc 60 attached to it to which force is applied for purposes of pushing the plunger 20 into the syringe barrel 16. The secondary plunger member generally designated at 22 also has an elongated piston rod 62 which has a rounded distal end at 64. As shown best in Figure 8, the diameter of the piston rod 62 is slightly larger than the aperture 50 so that as the distal end 64 of the piston rod 62 is pushed into the aperture 50, the body 44 and enlarged rings 46 and 48 of the valve or secondary piston 42 will be urged more tightly against the walls of the second chamber 15 to thereby form a fluid-tight fit such that the contents of the second chamber 15 can thereafter be expelled as the valve or secondary piston 42 is pushed through the second chamber 15. The proximal end of the piston rod 62 on secondary plunger member 22 has a disc 66 attached to it. As shown best in Figures 2 and 9 taken together, the disc 66 is generally circular in configuration, but has four arcuate portions as shown at 68 removed so that the disc 66 does not form a fluid-tight fit within the first chamber 25. Thus, as will be hereinafter more fully described, as the first plunger member 20 begins to slide through the first chamber 25 in the syringe barrel 16, air is pressurized and is permitted to flow around the secondary plunger member 22 by means of the arcuate openings 68 formed around the periphery of disc 66 (see Figure 9). Also attached at an edge of disc 66 is a spike 70 which, as hereinafter more fully described, may be used to release the fluid-tight seal otherwise formed by the piston 54 of the primary plunger member 20. Reference is next made to Figures 3-6 taken together, which collectively illustrate several methods for using the syringe apparatus of the present invention. 
As shown first in Figure 3, a mixing and delivery tip generally designated at 14 such as that described above is placed so that two different substances can be loaded into the second chamber 15. The mixing and delivery tip 14 of Figure 3 is essentially identical to that described above except for the addition of the wings 13 which facilitate attachment of the luer couplings 38 and 26. The two substances which are loaded into the second chamber 15 are illustrated in Figure 3 as constituting two liquids which are shown at reference numeral 72. However, as noted above, the substances which are to be mixed can be comprised of two liquids, a single liquid which is to be mixed with air, or a liquid and a solid or powdered medicament which are to be mixed. In one preferred method for using the apparatus, after the two substances are loaded into the second chamber 15 of the mixing and delivery tip 14, the valve or secondary piston 42 is then pushed into the end of the second chamber 15 to enclose it. In the embodiment illustrated in Figure 3, the valve or secondary piston 42 has a temporary finger tab 45 attached to the side of it to facilitate placement into the second chamber 15, the finger tab 45 thereafter being easily detachable and removable. As the valve or secondary piston 42 is inserted into the opening at the proximal end of the second chamber 15, air is permitted to escape through the aperture 50 so that the contents of the second chamber 15 are not forced out of the delivery tip 34 and applicator brush 36. Once the valve or secondary piston 42 has been seated in the proximal end of the second chamber 15, the entire assembly is then coupled by means of the threaded luer couplings 38 and 26 so as to join the mixing and delivery tip 14 to the syringe barrel 16. With reference next to Figure 4, the primary plunger member 20 is shown fully retracted so that the piston 54 is situated at the inlet opening or proximal end of the syringe barrel 16. In this position, the primary plunger member 20 can then be actuated so as to move the piston 54 at the end of plunger rod 52 through the first chamber 25 of syringe barrel 16 for a first distance. Because of the fluid-tight fit created between the elastomeric piston 54 and the interior sidewalls of the syringe barrel 16, as the piston rod 52 is moved through the first distance, air is pressurized in the first chamber 25. As will be further seen from Figure 4, the piston rod 62 of the secondary plunger member 22 is smaller in diameter throughout its length than the smallest diameter of the first chamber 25, so that the pressurized air flows around the secondary plunger member 22 by virtue of the arcuate openings 68 and the small diameter of rod 62, and then through the remaining space of the first chamber 25 up through the aperture 50 in the valve or secondary piston 42. In this manner, the pressurized air is injected into the fluids contained in the second chamber 15. The injection of the air is shown at 74 in Figure 5. The injected air 74 in turn will cause a turbulent aeration and mixing of the fluids contained in the second chamber 15. As the primary plunger member 20 reaches the end of the first distance, which is shown in Figure 5, the turbulent aeration is terminated because thereafter further movement of the primary plunger member 20, as shown in Figure 6, causes the elastomeric piston 54 to ride up and over the spike 70. This releases any pressurized air so that continued pressurization of air no longer occurs. 
Thereafter, continued movement of the primary plunger member 20 through a second distance through the length of the syringe barrel 16 causes the secondary plunger member 22 to move in tandem with the first plunger member 20. Continued movement of the primary plunger member 20 and corresponding movement of the secondary plunger member 22 then causes the distal end 64 of the piston rod 62 to become seated within the valve or secondary piston 42 as previously described, closing off the aperture 50 and more tightly sealing the valve or secondary piston 42 within the second chamber 15. Continued application of force on the primary plunger member 20 then causes the contents of the second chamber 15 to be expelled through the delivery tip 34 and applicator brush 36 as illustrated in Figure 7. In a second preferred method of using the apparatus of the present invention, in the case where highly viscous substances, or a liquid and a solid substance, are loaded into the mixing and delivery tip 14 such that mixing by means of turbulent aeration is not possible, the contents of the mixing and delivery tip 14 can be mechanically mixed prior to placing the valve or secondary piston 42 into the proximal end of the second chamber 15 (see Figure 3). This mechanical mixing may be done, for example, by using a small stir rod (not shown). The primary plunger member 20 and secondary plunger member 22 are then first placed into direct contact with each other, such as shown by the position in Figure 6, so that application of force on the primary plunger member 20 will cause the secondary plunger member 22 to move in tandem, first sealing the aperture 50 and then moving the valve or secondary piston 42 through the second chamber 15 to effect delivery of the contents thereof. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. a first separately moveable plunger member comprising a distal end with a piston mounted thereon, the distal end and piston being slidably disposed within said first chamber; and a second plunger member moveable only in tandem with the first plunger member and comprising a distal end adapted for contacting the valve means and adapted for sealing the valve means as said contact is made, and further comprising a proximal end slidably situated in said first chamber so as to be contacted by said piston at the distal end of the first plunger member, the distal end of said second plunger member thereafter being moveable through said second chamber by continued movement of the first plunger member. 
a syringe barrel which is hollow so as to form said first chamber within the syringe barrel, said syringe barrel comprising a distal end having an outlet opening and a first coupling member, and further comprising a proximal end with an inlet opening for slidably receiving said piston; and a mixing and delivery tip which is hollow so as to form said second chamber, said mixing and delivery tip comprising a distal end having a delivery means through which the mixed substances are delivered to a desired surface, and further comprising a proximal end with an inlet opening slidably sealed by said valve means and a second coupling member securable to the first coupling member so as to join the mixing and delivery tip to the distal end of the syringe body. 3. A syringe apparatus as defined in claim 2 wherein the syringe barrel and mixing and delivery tip are formed of transparent material. 4. A syringe apparatus as defined in claim 1 wherein said valve means comprises a piston having an aperture for passage of air into and out of the second chamber, and which is adapted to be sealed when contacted by the distal end of said second plunger member. 5. A syringe apparatus as defined in claim 4 wherein each said piston is formed of an elastomeric material. primary plunger means extending partially into said first chamber for selective application of pressure so that as the primary plunger means is pushed into the first chamber through a first distance, pressurized air is injected from the first chamber through the one-way valve means into the second chamber so as to create mixing turbulence within the fluids contained in the second chamber; and secondary plunger means slidable within the second chamber and engaging and sealing said valve means such that as the primary plunger means contacts the secondary plunger means, continued movement of the primary plunger means through the first chamber for a second distance will also cause movement of the secondary plunger means and movement of the sealed valve means through the second chamber so as to terminate the turbulence and expel the contents of the second chamber. 7. A syringe apparatus as defined in claim 6 wherein said barrel is constructed of transparent material and comprises an inlet opening at a proximal end thereof, and wherein said coupling means comprises a first threaded luer coupling at a distal end of said barrel. 8. A syringe apparatus as defined in claim 7 wherein said mixing and delivery tip is constructed of transparent material and comprises an applicator means at a distal end thereof through which mixed fluids are delivered to a desired surface, and wherein said coupling means further comprises a second threaded luer coupling at a proximal end of said mixing tip for connection to said first luer coupling. 9. A syringe apparatus as defined in claim 6 wherein said valve means comprises a piston having an aperture through which said pressurized air is injected, and which is adapted to be sealed when contacted by said secondary plunger means. 10. A syringe apparatus as defined in claim 9 wherein said piston is formed of an elastomeric material. 11. A syringe apparatus as defined in claim 6 wherein said primary plunger means comprises a first plunger rod having an elastomeric piston at a distal end thereof, said piston slidably disposed in said first chamber in a fluid-tight manner, and a proximal end with a generally flat member to which force is applied to move the piston through the first chamber. 12. 
A syringe apparatus as defined in claim 11 wherein said secondary plunger means comprises a second plunger rod having a distal end adapted for contact and sealing of said valve means, and further comprises a proximal end with a disc thereon slidably disposed in said first chamber in a non-fluid-tight manner, and to which force is applied by the piston of said first rod to slidably move the second rod. 13. A syringe apparatus as defined in claim 12 further comprising means for breaking the fluid-tight seal of said piston in the first chamber as the piston contacts the disc at the proximal end of the second plunger rod. 14. A syringe apparatus as defined in claim 13 wherein the means for breaking the fluid-tight seal comprises a spike attached to an edge of the disc at the proximal end of the second plunger rod. a second plunger member moveable only in tandem with the first plunger member through a second distance, said second plunger member comprising a second piston rod with a distal end adapted to contact and seal the aperture in the elastomeric piston disposed in the second chamber, and further comprising a disc at a proximal end of the second piston rod to which force is applied by the piston at the distal end of the first piston rod when moving the second plunger member in tandem with the first plunger member through the second distance, and further comprising a spike attached at an edge of the disc; and whereby as the first plunger member is moved through the first distance, air is pressurized and forced through the aperture in the elastomeric piston disposed in the second chamber to create a mixing turbulence in the fluids therein, and as the first plunger member is moved through the second distance, the spike breaks the fluid-tight seal of the piston at the distal end of the first piston rod, and the piston at the distal end of the first piston rod contacts the disc so as to move the second plunger member in tandem with the first plunger member, causing the distal end of the second piston rod to seal the aperture and push the piston in the second chamber through the second chamber to expel the mixed fluids.
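To make the two-stage sequence described above easier to follow, here is a minimal illustrative sketch (in Python, not part of the patent itself) that models the primary plunger's travel: during the first distance, pressurized air is injected through aperture 50 to mix the contents, and during the second distance the sealed valve or secondary piston 42 expels them. The travel distances and function names are hypothetical; only the reference numerals follow the description.

```python
# Illustrative model of the two-stage plunger sequence described above.
# Distances and names are hypothetical; reference numerals (piston 54,
# disc 66, openings 68, aperture 50, spike 70, secondary piston 42)
# follow the patent description.

FIRST_DISTANCE_MM = 30.0   # travel during which air is pumped for mixing
SECOND_DISTANCE_MM = 20.0  # further travel during which contents are expelled

def syringe_phase(plunger_travel_mm: float) -> str:
    """Describe what the apparatus is doing at a given primary-plunger travel."""
    if plunger_travel_mm <= FIRST_DISTANCE_MM:
        # Piston 54 pressurizes air in the first chamber; the air flows past
        # disc 66 through the arcuate openings 68 and out through aperture 50,
        # bubbling through and mixing the substances in the second chamber.
        return "mixing: pressurized air injected through aperture 50"
    if plunger_travel_mm <= FIRST_DISTANCE_MM + SECOND_DISTANCE_MM:
        # Piston 54 has ridden over spike 70 (venting its air seal) and now
        # drives the secondary plunger; rod end 64 seals aperture 50, and the
        # valve or secondary piston 42 is pushed through the second chamber.
        return "delivery: aperture sealed, mixed contents expelled"
    return "complete: second chamber emptied"

for travel in (10.0, 30.0, 40.0, 55.0):
    print(f"{travel:5.1f} mm -> {syringe_phase(travel)}")
```

The point of the sketch is simply that a single continuous stroke passes through two mechanically distinct regimes, which is the central idea of the claimed apparatus.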
The season of giving might be behind us, but non-profits continue to target donors. Teach For America, Inc., attracts major contributions, but TFA teachers (often referred to as corps members) are raising questions about where the money is being spent. Many TFA teachers take out hefty loans to cover their living expenses as they get trained and begin their placements. These loans must be repaid within thirty days if the teachers leave the program for any reason. Philanthropists, corporate donors and foundations view Teach For America as worthy of significant financial support (Ravitch, 2010). The non-profit registry GuideStar lists TFA as a "public charity." The Heckscher Foundation for Children notes on its website, "We continue to support Teach for America (TFA), whose mission is to eliminate educational inequity by harnessing the talents of our nation's most promising future leaders." Teach For America's board includes high-profile members such as Laurene Powell (widow of Steve Jobs), Walter Isaacson, CNN analyst and Harvard professor David Gergen, Larry Summers, and the owners of consumer goods companies: (1) Build-A-Bear, Inc., (2) Netflix, Inc., and (3) The GAP. Last fall, Apple stores convinced thousands of purchasers of new iPads to donate their old ones to "teachers in low income schools." Nine thousand of these iPads were then refurbished and distributed to Teach For America corps members. The governors of Arizona, Colorado, Mississippi and Texas contributed state funds to the Teach For America organization in 2010. And Governor Jan Brewer of Arizona stated, "I am pleased to announce the award of $2 million of my discretionary funds to Teach For America." An operating budget significantly larger than previously reported was shared with an audience of donors in March 2011. Pearl Esau, then executive director of Teach For America's Phoenix region, stated: "Teach For America is expected to lose $21 million from its $880 million operating budget next year" (Colick, 2011). During the third and fourth quarters of fiscal year 2011, funds for TFA arrived in the form of a $50 million grant from the U.S. Department of Education, a $100 million grant from the Walton Family Foundation and an endowment of $200 million pledged by Eli and Edith Broad and five wealthy supporters. In addition to the philanthropic and corporate donors, each of the 42 TFA regions across the country, including the recently added 2011-2012 sites in Alabama, Kentucky, Seattle and South Carolina, pays "finder's fees" of $2,000 to $5,000 per corps member annually to the Teach For America organization. Any way you do the math, Teach For America raises a lot of money. And this, in turn, raises a lot of questions. Corps members, their families, public agencies and others wonder, "Where does the money go?" It came as no surprise to me that more voices expressed concern about Teach For America's transparency in financial matters. These concerns persist across cohorts of corps members, and particularly in a tough economy, TFA interns suggest a hidden agenda that impacts financially struggling corps members and their families. This summer, on a balmy August afternoon in Del Mar, California, the mother of a newly trained TFA corps member assigned to teach in a city in the upper Midwest shared that her daughter received $3,000 from Teach For America. She thought it was a grant. When I told her that it would need to be paid back, she was shocked! 
She assumed that Teach For America would provide the financial support (especially for travel) to incoming corps members. "How else would college grads with student loans go from their college in California to the TFA training on the East Coast, and then to the teaching assignment in the Midwest?" As a mom of children who incurred student loan debt in college, and one who has heard hundreds of corps members' experiences, I appreciated her concern. An additional financial expectation was not what parents or corps applicants were expecting. Most tell me, "TFA is going to pay for grad school." That is not true. The AmeriCorps stipend of less than $5,000 per year is provided upon successful completion of two years of teaching duties with TFA. If you complete one year and have a car accident in year two, your stipend is not guaranteed. "I am having an enormous issue with Teach for America and the district after I was involved in a severe car accident. My arm was stitched in five places and I had back injuries. I was afraid to even ask for time off because after my first year, I was promoted to a unique nine-hour educational program for 5th and 6th graders that the district paid for. My principal took a gamble on this, and I was afraid to let them down by taking the needed time off. My condition worsened and I was taken to the E.R. during a school day when a lesson plan for a 30-minute period was missing. When I returned to school I was moved to a kindergarten class! Kindergarten!" I learned from corps members that Teach For America's "loans" cover several categories. "There are 'transitional loans', which are for covering unpaid summer training time/moving costs, etc., and these need to be paid back no matter what, but you have a two-year payment plan (though if you don't finish your two-year commitment for ANY reason, you have to pay them back in a month). Then there are also transitional grants, which are rarer and usually smaller than the loans, and these don't need to be paid back (unless you don't finish your commitment, and then again, they have to be paid back in one month). And there are also 'placement funds', which TFA gives to Corps members who aren't placed once the school year starts (equivalent to the average teacher's salary in a region), and again, these don't need to be paid back unless you don't finish your commitment, and then you have one month to pay them back" (Brandon). Another mother funded her only child's TFA-related expenses herself. This outlay, which began in June, covered her daughter's travel (airfare) to her assigned teaching region in the South, transport of her car from their family's Southwest home, funds for an apartment deposit and first month's rent, funds to set up her apartment, funds to set up her classroom, roundtrip travel (airfare) to Teach For America's Corps Training Institute in Atlanta, and funds for living expenses for four months (June-September). "I've spent over $5,500 for my daughter so far, and she's not received her first paycheck yet. We are a middle class family, but I didn't want Marla (pseudonym) to have to take out any TFA loans. I had some concerns about that" (Mrs. Jacoby). The problem occurs when corps members receive big transitional loan/grant packages from Teach For America to cover their summer/moving costs and sometimes also get on the placement fund for a month or more. 
Most TFAers are not fortunate enough to have parental support until their first paycheck arrives from the school district, usually one month after the start of the academic year. The appeal of 'mission,' combined with a patriotic-sounding brand name and the image of intelligent, young, Ivy League graduates teaching poor children (although the majority of corps members hail from state universities), encourages an elite pool of donors who desire to be part of the "movement" (Caplan, 2007; Danois, 2011). The "service" notion entices friends and political allies of Teach For America to embrace the mission for the following reasons: (1) service doesn't require extensive training (no education degree required), and (2) service supports a non-profit, tax exemption for donors' generosity. Ms. Kopp, CEO of Teach For America, noted that TFA was not designed to bring career educators into the classroom, but to expose future leaders to issues in education. For corps members directly impacted by Teach For America's allocation of financial resources, this information might seem difficult to swallow. TFA's founder, Ms. Kopp, stated in a videotaped interview with Malcolm Gladwell, "TFA is not a teaching organization, but rather a leadership development organization." One of TFA's regional websites reports, "25 TFA alums will run for office in the next two years." GuideStar's organization report on Teach For America, Inc. notes, "We believe that the best hope for a lasting solution is to build a massive force of leaders who have the insight and conviction that comes from teaching successfully in low-income communities." Corps members have persisted in advocating for changes to Teach For America's training, placement, and recruitment model. Now, the financial issues appear daunting. Brandon reports, "Corps members get entrapped by TFA financially (I still pay more per month in TFA loan payments right now than I pay in federal loan payments for my entire undergraduate education)." "TFA forces Corps members (in most cases) to rapidly pay back all their transitional loans and 'grants'/placement fund money (within a month) if they have to leave the program early for whatever reason (hence ensnaring many Corps members financially)" (Jeremy). Unfortunately, TFA makes any corps member who leaves the organization early pay back their often significant (this person's amount is $5,000) transitional loans and 'placement funding' within a month. Life happens, even to corps members who report that they contract cancer, are in severe automobile accidents, or have been physically harmed by inappropriate placements. I recall picking up my corps member grad students (figuratively) for university courses, which coincided with the start of their own teaching. They were grasping for any "this-works-try-it-tomorrow" strategies, as well as support from non-TFA veterans. Their facial expressions for the most part were giveaways, as glazed eyes, even on twenty-somethings, betrayed their lack of sleep. Most were on financial overload too, as expenses tied to traveling from (a) their college to their assigned school district, then (b) to their summer TFA training, and then (c) back to their school district, had to come from somewhere, because most would be working one full month before a paycheck arrived from their real teaching job in the late summer/fall. 
So beginning a master's degree program from the university added one more load onto novice TFAers, which, for most, was like trying to sop up the floods from a hurricane with a sponge. The resources to adequately prepare, place and support rookie TFAers appear to be in the coffers of Teach For America. The burden to manage financially for four months prior to a paycheck, while figuring out teaching, should not be placed on mostly twenty-something rookie TFAers, who, in spite of intelligence, resourcefulness, and tenacity, are still... recent college graduates with educational loans, travel and living expenses that appear not to be covered by the not-for-profit donations of well-meaning donors. Barbara Torre Veltri, Ed.D., assistant professor in the College of Education at Northern Arizona University, is the author of Learning on Other People's Kids: Becoming a Teach For America Teacher (Information Age, 2010). She was introduced to TFA in 1999 and was the first university liaison with TFA based at Arizona State University, where, over seven years, she worked to develop a TFA-specific master's program that included modeling, on-site coaching and supervision in corps classrooms, presented workshops at all-corps meetings, taught courses for TFA, and developed outreach to TFA's executive and program directors. She later expanded her research to three other states, gathering corps data and documents, and continues to be in touch with corps alumni and corps parents.
Most companies design around engineering constraints. At Apple, design came first, like designing the computer case first and having engineering make the components fit. I thought of the many times I've heard designers get pushback on their designs because of "feasibility issues". If we're going to create great products, let's rise up to the challenge of figuring out how to implement the best designs we can think of with a can-do attitude. Steve Jobs described his passion as "to build an enduring company where people were motivated to make great products. Everything else was secondary." There are numerous examples where Apple put products before profits. Like delaying the launch of the iPhone because the case wasn't quite right. Or having a whole new factory built so that enough anodized aluminum could be manufactured for the iMac and iPod Nano (instead of just using other metals). These decisions may seem fiscally frivolous, yet Apple would ultimately attain the highest valuation of any company in the world. Putting products before profits to me is synonymous with putting customers first, and that's just good business. Apple's design mantra is "Simplicity is the ultimate sophistication". Apple works hard to keep products simple and intuitive and avoid the feature creep trap. Our own legacy has been creating delightfully easy offerings, and that's especially important to remember as we create new versions of our more mature offerings. One of the most impressive aspects of the biography is looking at the list of amazing people who worked at Apple over the years. Names like Wozniak, Atkinson, Rubinstein, and Ive read like a Silicon Valley Hall of Fame. "He created a corporation crammed with A players." He (perhaps too cruelly) weeded out employees who weren't up to snuff. Are our own teams filled with A players, and if not, how much better could we be if they were? Many of the innovations that make Apple products great were developed outside the company. Some examples: the iPhone multi-touch interface was created by FingerWorks, a small company Apple later acquired; the iPhone's Gorilla Glass was developed by Corning; and the first iPod's 1.8-inch hard drive was developed by Toshiba. Even Siri, the voice recognition technology at the heart of the new iPhone 4S, was acquired. "He didn't create many things outright, but he was a master of putting together ideas, art, and technology in ways that invented the future." Tapping the fountain of open innovation is a great enabler for building great offerings and accelerating growth. Perhaps the most amazing aspect of Jobs' personality was his attention to details. There was no detail too minor or insignificant for him to sweat. He fretted over the title bars in the original Mac. He demanded factory walls be painted bright white. He obsessed over the iPhone case. This attention resulted in a delightful shop/buy/use experience for customers. Jobs had what is colorfully described as a "reality distortion field" – the ability to bend reality to his will. He would wield this ability to convince people they could do something they thought was impossible. I prefer to describe it as challenging beliefs to accomplish the unprecedented. My favorite example is when he convinced Corning's CEO Wendell Weeks that Corning could manufacture enough Gorilla Glass for the upcoming iPhone even though no factory was making Gorilla Glass at the time. To Weeks' amazement, his scientists and engineers were able to rise to the challenge. 
Too often we shy away from hard problems rather than embracing these challenges as opportunities to innovate and create great products. The biography makes a compelling argument for why it was Apple, and not Sony, that created the iPod and iTunes, even though Sony was arguably much better positioned to do so. The reason is that Sony, like just about every large company, is comprised of several semi-autonomous divisions, and "achieving synergy in such companies by prodding the divisions to work together was usually elusive." At Apple, they have only one P&L. Now while this arrangement served Jobs' desire for tight control, it also avoids internal battles and siloed mindsets. The one-P&L approach is perhaps just the most extreme of many solutions for enabling synergy within a company. Whatever the solution, achieving synergy among our BUs has to be a priority lest we suffer the same fate as Sony. This last lesson is a negative one. For me, the biggest surprise from reading the biography was that Jobs was often a bigger jerk than I ever imagined. Now some may incorrectly conclude that with great genius comes a difficult personality. As Isaacson notes, "The nasty edge to his personality was not necessary. It hindered him more than it helped him." By brutally putting people down, unfairly taking credit for other people's ideas, and having underlings work in fear of him, he probably lost people and contributions that would have made Apple even more successful (as crazy as that sounds). In "Good to Great", Jim Collins writes, "Level 5 leaders channel their ego needs away from themselves and into the larger goal of building a great company." Rather than always having the answers, the level 5 leader helps build a culture where people can work together to find the right solutions. Jobs' approach was too reliant on his cult of personality, calling into question Apple's continued success after his passing. Of course, only time will tell.
How is copyright dealt with in fiction writing? For example, if I sell a story where I wrote that a character jogged to Burger King in his new Reeboks, would there be copyright infringement? Do I need to get approval from the holders of the copyright to use their names in my stories? And, how would I go about doing that? How do I find out what is copyright protected and what isn't? And here, better late than never, is the response: Fortunately for the multitudes of authors who write fiction (and the innumerable publishing companies that print their books), brand names are a matter of trademark rather than copyright, and writers are free, for the most part, to include trademarked names in their stories. The passage in question is especially innocuous, because the references to Burger King and Reeboks are benign: Nobody in the novel dies from eating a Whopper, and no character is fatally run over in traffic because his running shoes are defective. But even if the author had implicated one of these brands in someone's death, legal retribution would be unlikely; the sheer volume of media overwhelms any one corporation's efforts to monitor for and suppress defamatory references to their products. But risk is relative: If a writer with the stature of, say, J.K. Rowling had resorted to the plot device of a deadly hamburger or a dangerous pair of running shoes, her publisher would likely be sent a cease-and-desist letter. This terse request from the trademark owner would call on the publishing company to refrain from associating the company's delicious and nutritious Whopper® brand beef-patty sandwiches or light but sturdy and comfortable Reebok® brand athletic shoes with anyone's death. But your editor would likely do the same, perhaps suggesting that instead, you call the fast food franchise Hamburger Prince or the shoes Teezoks. Interestingly, assigning closely similar names, or describing companies or products that resemble real ones but are not named in their honor (or, often, dishonor), is fair play. I'm no lawyer, but I don't see any problem with having a character experience side effects from using a medication that is well documented to have caused such effects. Do not take this as legal advice, but as a guesstimate. :-) Very broadly speaking, in the US you might be liable under trademark law for trademark infringement and/or trademark dilution. In order to succeed against you, however, the pills' trademark proprietor will have to demonstrate that you used the trademark "in commerce" and that your use was capable of confusing consumers as to the source of the pills, or that your use diluted the trademark. Not very likely in your case. Theoretically, you might also be liable for trade libel, provided that your statement was false and hence harmful to the product reputation of the pills. A true statement, albeit harmful, cannot be defamatory. Once again we're asked to bow low before King Commerce. Rather wimpish, I think. The rot set in, of course, years ago. I remember that the Kinks were coerced into changing the lyrics of "Lola" from "You drink champagne and it tastes just like Coca-Cola" to "You drink champagne and it tastes just like cherry cola". Interestingly, this alteration was required only in the U.S. version of the song, not in U.K. or European releases. I've made sure that my iPod sports "Lola" with the original lyrics. "Product placement" in film or television is smiled on; casual mentions in novels or songs must be vetted first. The King has spoken. Very interesting and very useful article. 
I’ve been confused for years…having read a James Herbert story where his character frequently drove a Monza MX5 (did he originally pen it as a Mazda MX5?). My current effort at a novel is set in York (England) and my character frequents real shops and fast-food chains. Until today, I’ve been concerned about these outlets responding unfavourably to my using their names. I did get around the problem in part by having my indecisive character thinking about having a McSomething for lunch…or shopping at ‘Marks and Thingy’. In American Psycho, Ellis uses trademarked names prolifically. He also shreds acts like Genesis and Huey Lewis and the News with his satire, yet he invented a daytime talk show named The Patty Winters Show instead of picking one of several real shows that existed at the time. Maybe Oprah would’ve given him trouble, or maybe a fictional talk show gave him more artistic freedom. Inventing new companies can be fun, and it eliminates the risk of getting facts wrong. Do what best serves your story. Your publisher will tell you if there are legal concerns. What about all of those ads in magazines for writers, asking us to refer to, say, a particular tissue as “Kleenex brand facial tissue”? That gets cumbersome, but you say that isn’t necessary? I’ve never seen the advertisements you refer to; the idea is quaintly amusing. Just pat the cute little ads on their heads and send them along their way — and then dab your eyes with kleenex as you mirthfully ponder the absurdity of the request. Unless you’re on the payroll of Kimberly-Clark Worldwide Inc. (no comma necessary before Inc., by the way) or are writing ad copy for a company that sells its products, you can write kleenex any d*** way you please. I’m getting a book published, but my character’s name is the same as that of an already existing character. They are nothing alike, and the already existing character is from a comic book. Will I get sued or anything if I still keep my character’s name? The major reason companies run such ads is to help avoid having their trademark co-opted and converted into an ordinary word. I love music. I want to start a blog or website that involves my love for music, but I’m afraid I will be sued when using trademarked band names, song titles, and album titles in my articles. Do you know anything about that? I have written and am moving towards publication of a book for teenagers in which the characters (mice) have cheese names, taken from many international cheeses, such as ‘Géramont’, ‘Zottarella’, ‘Petrella’, ‘Stilton’, ‘Rambol’, ‘Redesdale’, ‘Monterey Jack’, to mention a few. Would I be in violation of international trademark laws in publishing this? These are types of cheese, not brand names, so you’re safe. You’d also be safe with brand names, however — if, for example, you named the mice Ford, Chevrolet, and Dodge. I’m not a copyright expert, but I don’t think such a concept is copyrightable — and even if it were, I don’t know why you’d want to copyright it. Imitation is the sincerest form of flattery: Ursula K. Le Guin, for example, came up with the idea of the ansible, a virtually instantaneous interstellar communication device, and many other writers have adopted the name or the concept. As far as I know, she’s never expressed any indignation about the borrowing. My novel specifically involves many robots’ heads being blown off by, for example, Smith & Wesson handguns – but then how could Smith & Wesson be offended by their products being used to kill? 
They are specifically designed to kill. Also, a can of Lynx deodorant is used to blow up a car and melt a robot’s face off (in combination with a cigarette lighter) – is this OK?? I think you’re safe, so to speak, with Smith & Wesson, but you might want to avoid specifying a brand name for the deodorant, unless it’s really important — and even then you should perhaps disguise it. Nephilim is a trademarked name; the basic meaning is a human/angel hybrid. If I use this in my comic book, can I be sued? I don’t see how nephilim can be trademarked, because it’s been used quite often by various writers and game creators. See the “Popular Culture” section of the Wikipedia article “Nephilim.” Considering its widespread usage, I recommend you research other biblical/mythical nomenclature for an alternative name or make up your own name. Personal names can’t be trademarked. I recommend that those starting out as writers search for other writers using the same real or pen name. A would-be novelist named John Smith who finds that there’s another novelist by that name can use a middle initial or change his name. In your case, it’s probably best at this point to share the name rather than change yours. Interesting topic. A question regarding naming, but a bit different than utilizing existing brand names. My father and I are putting the finishing touches on a book titled “The Googles”, which he started in ’82 and which has been collecting dust for many years. He is now concerned that since Google is around, are we in any way liable or in danger of a suit by using this title? One disclaimer: Googles have absolutely nothing to do with Google the company. They are creatures, not an entity. Nothing remotely regarding the internet or search engines, etc., appears in the novel. Also, I found another novel titled “The Googles” on copyright.gov with a fairly recent registration. Any thoughts or comments? Any information would be greatly appreciated. I’m not a copyright expert, but I see no reason why you should hesitate to use the coincidentally identical name. Thanks for a rapid and well-developed answer to an immediate question. I’m a playwright working on a comedy and have a situation in which the use of a trademarked name is integral to the plot of my latest work. Interestingly, the trademark in question is that of McDonald’s Restaurant Corporation, which makes your example citing Burger King a virtual direct hit. Now I have something to tell my publisher when she cries foul. McThanks! It looks like it has been close to a year since the last comment, but I was hoping we could still ask questions. As a horror writer using real locations for an attack, would it be prudent to rename the place, or could the city that owns it cause grief? Specifically – this is Lake Eola in Orlando. They have swan boats, and a horrific attack happens there. This is fiction, but it depicts the place in a derogatory light. Would it be better to call it Lake Lola instead? I use gun names and car models in my books. I was wondering if I will get hit with a trademark or copyright infringement claim. I’m Portuguese, and the protagonist in my story is a young officer in the Portuguese army. It is an extremely difficult book to write because a lot of things can go wrong with it if I don’t do my research properly. I have, for example, tried to find as much information as I could on firearms. The standard-issue rifle in the Portuguese army is still the 7.62mm G3 rifle, made under license from the German company Heckler & Koch. 
My first assumption is that there will be no copyright issues if I refer to it as “the G3”, the way everybody speaks about it, because that particular rifle (picturing it with a carnation sticking out the end of the barrel) has become an icon of the Portuguese 1974 revolution and has therefore acquired profound cultural significance in our country. Even despite its age, it is actually still cherished by many. That is why I see nothing problematic here: it is common knowledge that the G3 rifle was designed more than 50 years ago and is still standard issue in Portugal, just as it is common knowledge that our Spanish neighbors have had the new 5.56mm G36, also designed by Heckler & Koch, for several years now. It is common knowledge that the G3 is not a poorly designed rifle, but the ones still in use are worn out and do need to be replaced. Soldiers have commonly complained about the rifle’s weight, sometimes about its recoil, and about the occasional jamming caused by excessive wear. All of this, while detrimental to the rifle’s overall reputation, is not “my” personal opinion: it is common knowledge, and Heckler & Koch has to admit that even a perfectly designed rifle cannot be immune to aging. Frankly, both in terms of copyright issues and misinterpreted intentions, I have always thought that creating my own rifle design that could replace the G3, without making it excessively similar to any of the legion of other designs available around the world, is actually a much easier workaround to this problem. I already have enough technical knowledge anyway, from all that research. And this article seems to prove I was right. But it’s good to be able to reflect on this. I guess I don’t have to be afraid of mentioning “the G3” anymore. I’m looking forward to hearing what anybody else thinks about this and whether they think I’m right or not. I am writing a fictionalized short story which names a well-known newspaper in Chicago. It also mentions someone who was very famous who worked there but is now deceased, but in a positive way. One of my characters (not the protagonist) is portrayed as one of the current editors of this real newspaper who is hiring a columnist. In a fiction story, can I use the name of a ‘real’ newspaper and have one of my characters say they work there? Are there legal issues? If possible, a prompt reply would be appreciated, as I’m on a deadline to submit this. Yes, you can refer to a character being employed by a company that exists in the real world. The only potential concern is if you imply or state that the company is doing or does something the real-life company would not want to be associated with. I’ve a character who becomes quite famous, and her first name (Maybelline) becomes a household word. A TV station begins running little segments that show women from a distance who look like her, and they use the old “Maybe it’s Maybelline…” commercial tag line. Viewers have fun trying to determine if the woman in the oversized sunglasses is this famous person or not. Would using that tag line from the commercial pose any kind of a problem for me? There is nothing derogatory at all towards the cosmetic company.
African-American and Latino students are less likely than their white and Asian counterparts to complete a four-year degree in a STEM field, at a time when demand for workers with those skills is growing. A group of researchers from the University of Houston will use a three-year, $1 million grant from the National Science Foundation to expand a project intended to spur interest in the field among younger students. Program co-founder Jerrod Henderson said he decided to target boys because other programs were available for girls. That was a few years ago, when he was at the University of Illinois at Urbana-Champaign. Flash forward two years, and Henderson and project co-founder Ricky Greer were both at UH. Henderson, who serves as principal investigator for the NSF grant, is an instructional assistant professor in the Cullen College of Engineering and director of PROMES, or Program for Mastery of Engineering Studies. Greer is a graduate student in the UH College of Education. They launched the project, known as St. Elmo Brady Academy – St. Elmo Brady was the first African-American man to earn a Ph.D. in chemistry – at Hartsfield Elementary School and the UH Charter School. Henderson said elementary school students are the perfect age to plant the STEM seed. Undergraduate engineering students, along with those from teachHOUSTON, a UH program to train students to teach math and science, meet with the younger students twice a week. Fathers or other male family members join the group for an engineering project on Saturday mornings. Engineering students serve as mentors for boys whose relatives can’t attend. “It is a platform for family development and family learning, as well as exposure to STEM,” Greer said. In addition to Henderson and Greer, faculty involved with the project include Mariam Manuel, co-principal investigator on the NSF grant and a science master teacher with teachHOUSTON, and Virginia Snodgrass Rangel, assistant professor in the College of Education. Snodgrass Rangel will evaluate the student-mentor relationship, seeking to determine if having a mentor, especially one who is African-American or Latino, can change younger students’ ideas about a future in science, engineering or another technical field. She also will measure the impact on the mentors themselves. Engineering programs nationally struggle to retain students, and she is curious to see if the opportunity to work with younger students can change that. Participating students’ standardized test scores in science and math will be tracked and compared with those of students at schools that did not participate in the program. UH students enrolled in teachHOUSTON, a program based in the UH College of Natural Sciences and Mathematics to train high school math and science teachers, will get hands-on experience with engineering-focused projects, Manuel said. Data to determine how well the program works won’t be known for several years, but Snodgrass Rangel is hopeful.
Q: What does CRPS/RSD stand for? A: CRPS is an abbreviation for the disease known as Complex Regional Pain Syndrome. RSD, another term for this disease, is an abbreviation for Reflex Sympathetic Dystrophy. Q: What is CRPS/RSD? A: CRPS or RSD is a disease of the sympathetic nervous system, which is a component of the autonomic nervous system. Q: What are the functions of the sympathetic nervous system? A: The sympathetic nervous system has three main functions. Q: How do you get CRPS/RSD? A: Usually some injury or noxious event, whether minor or major, will stimulate the sympathetic nervous system. Normally, the sympathetic nervous system performs its respective functions and then shuts down. In the case of CRPS/RSD, the sympathetic nervous system activates, does not shut down, and stays on and remains active, but in a dysfunctional manner. Q: How does CRPS/RSD affect the body? A: Signs and symptoms include: VERY HIGH LEVEL PAIN in any area of the body, e.g. head, neck, mouth, face, back, shoulder, arm, hands, chest, hips, legs, and feet. SWELLING, EDEMA in any area of the body, e.g. face, arms, hands, fingers, legs, ankles, and feet. MUSCLE SPASMS, TIGHTNESS, TWITCHING in any joint or area of the body, e.g. TMJ, head, neck, shoulder, elbow, wrist, fingers, hips, knees, ankles and toes. NUMBNESS, TINGLING SENSATION in any part of the body. SKIN ALTERATIONS, to include skin blanching red, blanching white, and various rashes with or without itching on any part of the body. These rashes can come and go or stay visible. Also, the skin can alternately feel hot and/or cold. MOOD ALTERATIONS, to include insomnia, irritability, poor focus, poor memory and depression. Q: Do these symptoms occur all at once, or can they occur individually or in combinations? A: These symptoms can occur individually, in any combination, or all at once, in any degree of severity. Q: Is the whole body affected by CRPS/RSD, or can parts of the body be affected? A: Since CRPS/RSD is a disease of the sympathetic nervous system, it can affect any part of the body, including internal organs. The symptoms express themselves in what is known as a thermatomal distribution; any of the symptoms listed can occur in any combination in one or more of the four thermatomal areas. Q: How is CRPS/RSD diagnosed? A: CRPS/RSD is diagnosed mainly through proper patient history, clinical observation and examination. The more signs and symptoms of sympathetic nervous system dysfunction are present, the greater the suspicion of CRPS/RSD should be and the more likely a diagnosis of CRPS/RSD will be made and confirmed. Q: How is CRPS/RSD treated? A: First, all sensory pain which might be keeping the sympathetic nervous system active must be identified, eliminated and/or contained. Then, as part of a diagnosis and initial form of treatment, especially when the head, neck and upper body are involved, cervical sympathetic regional nerve blocks can be given. These nerve blocks are administered in the doctor’s office using the same local anesthetic as that used to treat a toothache. If relief of presenting symptoms (i.e., pain, spasm, etc.) is obtained from the nerve blocks, usually within about fifteen minutes to half an hour, a diagnosis of CRPS/RSD is confirmed and further treatments can be recommended. These nerve blocks can be repeated in a timely manner if indicated. Q: What other treatments for CRPS/RSD are there? A: There are multiple ways to attempt to treat CRPS/RSD and relieve and/or eliminate symptoms. 
There are medications that can be prescribed to control and attempt to regulate the sympathetic nervous system, as well as manage pain, muscle spasm and inflammation in a non-narcotic manner. When any type of muscle spasm or tightness is present, physical therapy is very helpful to regain mobility. There is a definitive CRPS/RSD diet patients can go on. Certain foods contain chemicals that stimulate the sympathetic nervous system and as such must be eliminated from the CRPS/RSD sufferer’s diet. If there are skin irritations and/or rashes, Epsom salt baths can be very helpful. KEY: The most important point to understand when treating CRPS/RSD is that no one treatment works alone. The treatment must be multi-factorial and delivered concurrently. Q: What is the goal of treatment? A: To eliminate, minimize and/or control as many symptoms as possible and thus restore quality and enjoyment of life. Q: How bad can CRPS/RSD get? A: CRPS/RSD actually occurs in four definable stages. These stages are a progression of this disease and can overlap, with the later stages showing more severe signs and symptoms of body dysfunction. In the advanced stages of CRPS/RSD, patients can become very sick, with minimal or no ability to fight disease or infection. KEY: EARLY DIAGNOSIS IS CRUCIAL. Q: Once signs and symptoms of CRPS/RSD are controlled, minimized or eliminated, can recurrence and/or relapse of symptoms occur? A: Yes. Any new noxious, pain-producing and/or stress-producing event that the CRPS/RSD patient is exposed to can cause a flare-up of any previous sympathetic nervous system mediated symptoms and/or create new sympathetic nervous system mediated symptoms. The CRPS/RSD patient must be aware of this.
Hospital discharge is a high-risk period for adverse events.1–3 Written discharge summaries are the most important method for handover of care back to the general practitioner (GP),4 but are frequently not available when the GP reviews a patient following discharge. An effective electronic handover system had been developed by the Department of General Medicine for use within our hospital.5 We questioned whether this handover, in conjunction with other information routinely available in the electronic record (admission dates and unit, basic blood test results, discharge medications), could be sent to GPs on discharge as an ‘interim discharge information package (IDIP)’. Prior to and after the introduction of this IDIP, we sent questionnaires to a convenience sample of 60 GPs who saw large numbers of patients discharged from our hospital (28 responded to the initial survey and 25 responded to the follow-up). The questionnaires were separated by 6 months and were sent by mail. The initial survey consisted of two 5-point Likert scales assessing GPs’ satisfaction with discharge summary timeliness and whether they thought the IDIP would be useful. The follow-up survey used 5-point Likert scales to assess satisfaction with timeliness and usefulness, and also whether the IDIP was clinically relevant and whether it reduced the number of tests ordered. Ethics approval was granted by Barwon Health. Following the trial, overall satisfaction had increased from 57% to 76% (chi-squared, P = 0.036), and the proportion of GPs expressing dissatisfaction had decreased from 32% to 8%. The more IDIPs a GP received, the more likely they were to be satisfied: for the 12 GPs who had received >16 IDIPs, satisfaction was 92%; this fell to 78% with 11–15 IDIPs, and was equivocal when fewer than 10 IDIPs had been received (4 GPs). GPs also reported that they found the IDIP clinically relevant and believed it resulted in their needing to order fewer investigations on the patients they reviewed. Within the limits of this small pilot study, we have demonstrated that a small package of accurate information sent to GPs increased their satisfaction with the timeliness of discharge communication from hospital. In the future, it would be important to study which components of the IDIP are most clinically relevant to GPs and whether there are any risks or pitfalls associated with using patient information in this way.
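For readers who want to see the mechanics of the satisfaction comparison, the sketch below runs a chi-squared test on a 2×2 table. The counts are assumptions reconstructed from the reported percentages (57% of 28 initial responders, 76% of 25 follow-up responders); the published P = 0.036 presumably derives from the full 5-point Likert responses, which are not reproduced here, so this sketch illustrates the method rather than reproducing the exact result.

```python
# Chi-squared test of GP satisfaction before vs. after the IDIP.
# Counts are ASSUMED, reconstructed from the reported percentages:
# 57% of 28 initial responders ~ 16 satisfied; 76% of 25 follow-up ~ 19.
from scipy.stats import chi2_contingency

table = [
    [16, 28 - 16],  # initial survey: satisfied, not satisfied
    [19, 25 - 19],  # follow-up survey: satisfied, not satisfied
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# The published analysis (P = 0.036) likely used the full Likert data,
# so this collapsed 2x2 version will not match it exactly.
```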
You’re trying to grow a garden, or just let your dogs out at night without a fuss, but animals are running all over your yard. You’ll need a motion-activated sprinkler to deter cats, deer, and other animals from eating your plants and making your yard their territory.

How do you choose the right kind of motion-activated sprinkler? When choosing the right sprinkler for your yard, you must first decide what kind of animal you’re trying to scare off. There are different types of sprinklers, but the most prominent is the scarecrow sprinkler, which can be adjusted to scare off even the most stubborn of animals. Some sprinklers are too sensitive to scare off larger animals, such as deer or raccoons, while others can be modified to change their sensitivity and height. Keep in mind that if you set your sprinkler to ignore smaller animals, you may scare off the deer but let smaller animals run rampant in your yard. We’ve checked out the best 8 motion-activated sprinklers to deter pests—like cats, raccoons, and deer—from your yard.

Scarecrow sprinklers are some of the most common motion-activated sprinklers on the market. These sprinklers can be modified to target both large and small pests, and must be connected to a hose. The Scarecrow Motion Activated Sprinkler lives up to its name by scaring away pests of all shapes and sizes. Are you fighting with pests such as deer, raccoons, cats, skunks, squirrels, and rabbits? The Scarecrow can be modified to repel all of these pests for up to six months on a single 9-volt battery. Protects up to 1,200 sq. ft.

Combine water and motion to scare off the worst of pests with the Orbit Yard Enforcer! With day and night modes, this motion-activated sprinkler can be set to deter a variety of pests, including cats, squirrels, herons, and birds on the daytime setting, and deer, dogs, cats, skunks, rabbits, and raccoons on the nighttime setting. Customize your protection with the Orbit Yard Enforcer! Protects up to 1,600 sq. ft.

The powerful and effective Hoont Water Jet Blaster scares away animals of all shapes and sizes, such as cats, dogs, squirrels, skunks, deer, birds, and more! With low water consumption, this motion-activated sprinkler will keep your water bills low and your garden or yard safe and sound from wild pests!

The DIGOO 2-in-1 Water Sprinkler packs both sound and water to scare pesky pests out of your garden, your yard, or the area near your pond! With adjustable sensitivity from 3 to 30 feet away, the DIGOO suits gardens and yards of all sizes. Doubling as a sprinkler and an animal repellent, the DIGOO is the perfect addition to any garden!

The Orbit Garden Enforcer keeps those plant-munching, pesky deer out of your yard with ease. The adjustable stand can be modified to hit deer higher on their body, which is more effective than smaller, shorter motion-activated sprinklers. The 120-degree sensor has both night and day modes with smart sensing technology to conserve your water and battery life.

Keep those pesky deer out of your garden with the Orbit Pest Deterrent Sprinkler 60’. With a sturdy metal spike and 120-degree sensor, the Orbit Pest Deterrent Sprinkler 60’ is the perfect addition to any garden, yard, or pond area. The sensor can be pointed upwards to ward off deer and other large animals, or pointed down to pick up the movements of cats, squirrels, raccoons, or other yard invaders.
The Orbit Green Enforcer Motion Activated Sprinkler is the perfect height for getting pesky cats, raccoons, or squirrels out of your yard. Its 120-degree sensor swivels, covering a full 360 degrees of your yard, day and night! With three modes—day, night, or always on—the Orbit Green Enforcer keeps your yard looking, well, green!

The Hoont Cobra is a low-to-the-ground motion-activated sprinkler at the perfect height for blasting cats off your property. The 360-degree swivel ensures that your yard or garden stays cat-free—mainly due to the jet blast of water repelling pests from your lawn. The low water consumption makes the Hoont Cobra perfect for those who want to go green, while the battery sits in an easy-to-access spot for those who don’t want a fussy sprinkler.

Picking the best motion-activated sprinkler for your yard or garden can be a difficult task, so we’ve narrowed it down for you! The Orbit Yard Enforcer is an unsurpassed scarecrow sprinkler. Adjustable to deter all kinds of pests, the Orbit Yard Enforcer protects up to 1,600 sq. ft. with an adjustable line of sight, and has a 7,500-activation-cycle battery! Keep your water and battery consumption low, and the quality of your yard, garden, or pond high, with the Orbit Yard Enforcer!

The Orbit Garden Enforcer is the best motion-activated sprinkler for deterring deer from eating your plants or running amok in your yard. Its 120-degree sensor has three modes—day, night, and always on—to fit the needs of every garden.

The Hoont Cobra is the perfect motion-activated sprinkler for keeping cats out of your yard. Its low price point and low-to-the-ground design pick up day and night pests who are terrorizing your yard or garden.

Whether you’re fighting with larger pests, such as herons and deer, or wrestling with keeping smaller critters out of your yard or garden, a motion-activated sprinkler will turn your yard from a war zone into a perfectly protected sanctuary. Consider a scarecrow sprinkler to combat pests of all shapes and sizes, and cat-deterrent or deer-deterrent sprinklers to tackle the specific vermin ruining your yard or garden. No matter which sprinkler you choose, prepare to take back your space and get rid of nuisances in a safe and humane way—with just a spurt of water and a flashing light!
The diagram below shows a rectangle PRSU, which is made up of square PQTU and rectangle QRST. Figures D, E and F are squares, and the area of square F is 1 m². The area of square D is 2/3 of the shaded area in rectangle QRST, and QA = AT, as shown in the figure below.
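Because neither the diagram nor the final question survives in this excerpt, only the setup can be stated. The following LaTeX sketch records the given relations, with side lengths s_F, s_D and shaded area A_sh as assumed names for illustration.

```latex
% Sketch of the setup implied by the stated relations.
% Variable names are assumed for illustration:
%   s_F, s_D = side lengths of squares F and D
%   A_sh     = shaded area within rectangle QRST
\begin{align*}
  s_F^2 &= 1~\mathrm{m}^2 \quad\Rightarrow\quad s_F = 1~\mathrm{m} \\
  s_D^2 &= \tfrac{2}{3}\,A_{\mathrm{sh}} \\
  QA &= AT \quad\Rightarrow\quad A \text{ is the midpoint of } QT
\end{align*}
```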
A dagger or khanjar with a two-edged, re-curved blade thickened at the point. The top section of the blade near the hilt is overlaid with gold patterning, an Indian technique sometimes described as false-damascening. The rock crystal hilt is inlaid with rubies and carved with floral designs to disguise the inclusions. The steel scabbard, decorated with overlaid gold floral patterning, is likely to be a later addition; its inscriptions, reading 'Sangli' in English and Marathi, mark it out as a presentation piece.
In Romans 10:13, Paul the apostle explains that God’s gracious gift of salvation is received through faith in Christ Jesus alone. This small verse concisely and powerfully declares the deity of Jesus Christ in numerous ways. Thus, the following article will give a detailed exposition of this one verse’s repeated affirmations of the deity of Jesus Christ. The purpose of this is threefold. Firstly, the identity of Jesus is central to the salvation of one’s soul. If one calls upon the name of another “Lord” whom they also call “Jesus Christ” but who is not Yahweh Incarnate, the Second Person of the Eternal Trinity, then they will not be saved. Secondly, the identity of the Lord Jesus is central to affirming Christian orthodoxy over and against heretical groups claiming the name of Christ (e.g. Oneness Pentecostals, Jehovah’s Witnesses, Christadelphians, and so on). Lastly, the believing Christian will be strengthened in his faith concerning (a) the identity of his Lord and Master, (b) the salvation of his own soul, and (c) his decision not to fellowship with heretical groups which deny the deity of Jesus the Eternal King. The One upon whom men are to call for salvation is the same One who is calling men to salvation, viz. the Lord our God. Yet Peter identifies the Lord our God as Jesus Christ, the only “name under heaven given among men by which we must be saved,” just two chapters later. Thus, the name of the Lord, as viewed by the apostles, is not a mere title applied to a non-divine creature named Jesus Christ, nor is it a mere title applied to a mere man named Jesus. The apostles clearly identify Christ as the Lord, Yahweh, our God, the One who calls men to salvation, and the One upon whom men must call if they are to be saved. And this identification of Jesus as Yahweh God is further strengthened as we consider the details of this verse. It may be argued by a persistent unbeliever that the identification of Jesus Christ as Yahweh does not mean that Jesus is God. Yet even assuming, for the sake of argument, that the above interpretation of “the name of the Lord” is incorrect, it would still be the case that Paul is identifying Christ as being able to save everyone who calls upon his name. Universal in its scope, the word “everyone” excludes no person at any time in history or from any place on earth. The word “everyone,” therefore, implies that Christ is omnipresent and eternal. For if Christ is merely a man or an exalted creature, he cannot save everyone who calls on his name, seeing as only God is omnipresent and eternal. What is more, those who deny the deity of Christ imply that Paul’s application of Joel 2:32 to Jesus Christ is incorrect, which further implies that the Scriptures are fallible, unreliable, and not the Word of God. Either Christ can save everyone who calls upon his name, or he cannot. Scripture says that Christ can and will do this. Therefore, Jesus Christ is omnipresent and eternal, saving men from any continent at any time, and even simultaneously saving multitudes of persons from opposite ends of the earth. Jesus Christ is Yahweh. “...the Lord searches all hearts and understands every plan and thought.” If Christ is a mere man or exalted creature, how can he know the thoughts of another man, let alone those of all of God’s elect who have called and will call upon him for salvation? If Christ is to be the Savior of the speaking man as well as the mute, therefore, he must be omniscient, knowing the hearts and minds of men.
It follows, therefore, that in addition to knowing the hearts and minds of all men, including the unarticulated thoughts of men, Christ knows all languages. And if he knows all languages, then he must know them perfectly, for Jesus will save those who call upon him. He cannot misunderstand or misinterpret the internal, mental call of those whom he will save. Christ’s perfect knowledge of all languages must encompass not only the general classes of natural languages and the denotative meanings of their words, but also the connotative meanings of those same words. It must also encompass other complex features of language which modify meaning, e.g. changes in inflection (to name one such feature). He must also have perfect knowledge of the regional dialects of every natural language. If Christ fails to have perfect knowledge of all languages, regional dialects, forms of slang, forms of jargon, denotative meanings, connotative meanings, and every possible variation in inflection or pronunciation conducive to expressing one’s meaning, then Christ cannot save everyone who calls upon him. Only God knows the hearts of all men. Only God knows all languages, and knows them perfectly. Only God perfectly understands each individual’s use of language. Therefore, if Jesus Christ is the Savior of everyone who calls upon his name, he is by implication omniscient. And if he is omniscient, then he is God. These words are profound in what they imply. We have seen thus far that if Christ is to be the Savior of everyone who calls upon his name, then he must be omnipresent and omniscient. Here we further learn that if Jesus Christ is to be the Savior, then he cannot ever be hindered by some force stronger than himself as he saves his people from their sins. Yet only God is completely unhindered by any other power, for only he is omnipotent. If the Lord Christ is only an exalted creature or a mere man, then Paul is mistaken, seeing there would at least be the possibility of Christ not being able to save some of those who call upon him for salvation. Paul’s words are absolutely certain: He will save whoever calls upon his name. Therefore, if Jesus Christ is the Savior of sinners he must be omnipotent, for there can be no force greater than him, no hindering power keeping him from fulfilling his role as Savior. Paul’s words in Romans 10:13 are neither false nor hyperbolic. Therefore, Jesus Christ is God Almighty, Yahweh the Creator of all things. It is only by assuming that Jesus Christ is omnipresent and eternal that Paul can say Jesus will save everyone who calls upon him. It is only by assuming that Jesus Christ is omniscient that Paul can say Jesus will save whoever calls upon him. It is only by assuming that Jesus Christ is omnipotent that Paul can say Jesus will save whoever calls upon his name. Thus, either Paul is incorrect in the assumptions he makes about Christ, or the one who denies the deity of Christ is wrong. If Paul is wrong, then the Scriptures are wrong. And if the Scriptures are wrong, then they are not the Word of God. But the Scriptures are the Word of God. Therefore, Paul is not wrong. Therefore, Paul’s assumptions are not wrong. Therefore, Christ Jesus, if he will save all who call upon his name, is Yahweh Incarnate, the Almighty, Everlasting King of kings. Incidentally, this refutes the so-called “Sacred Name Movement,” which claims that “Yahweh” is the proper name of God the Father only.
Some contemporary scholars will affirm that Christ is identified as Yahweh by the NT writers. However, this identification, they claim, is merely titular/agentive: Christ stands in the place of Yahweh, but he is not co-equal and co-eternal with the Father, whom they incorrectly identify as the true bearer of the divine tetragrammaton. This view is refuted by the exposition of Rom 10:13, a text which is unintelligible if Christ is not omnipotent, omniscient, and omnipresent, co-equal with the Father, i.e. Yahweh, the Second Person of the Trinity. Incidentally, Rom 8:27 attributes omniscience of this kind to the Holy Spirit as well, thereby establishing that the three persons - Father and Son and Spirit - equally possess the attribute of omniscience. cf. Mark 2:6-8, John 2:23-25 & Rev 2:23.
Today's Tyee presents a Cole's Notes-style guide to green building certification systems in Canada. But there are even more systems operating in China, Japan and beyond. If the British Columbia wood products industry aspires to evolve beyond exporting raw logs and cheap 2x4s, it might be worth knowing more about these international green building programs.

BREEAM --- The BRE Environmental Assessment Method was launched in 1990 in the United Kingdom, and served as the inspiration for LEED and many other systems. BREEAM has certified more than 115,000 buildings, most of which are in the UK.

THREE STAR --- China is on its way to becoming the largest construction market in the world, and China's government has set ambitious targets and guidelines for green building. The Green Building Evaluation Standard, also referred to as the Three Star System, was introduced in 2006 and is administered by the China Green Building Council. The standard complements BREEAM and LEED, which presently are used in China mainly for the office buildings of multinational companies or for upscale apartments.

CASBEE --- The Comprehensive Assessment System for Building Environmental Efficiency was created by the Japan Sustainable Building Consortium in 2004. The CASBEE program is mainly a self-assessment system, though many elements have gradually been introduced into local Japanese regulatory directives.

GREEN STAR --- Based on BREEAM and LEED, Green Star is a voluntary building rating system that evaluates the environmental design and construction of buildings in Australia, New Zealand and South Africa. It was launched by the non-profit Green Building Council of Australia (GBCA) in 2002.

HK-BEAM --- The Hong Kong Building Environmental Assessment Method rewards buildings that are built, operated and maintained using sustainable practices. Because Hong Kong is a subtropical, high-density and high-rise community, HK-BEAM emphasizes indoor environmental quality more than other green building rating systems.

NATIONAL GREEN BUILDING STANDARD --- A joint project between two American groups, the National Green Building Standard aims to establish a consensus-based, national standard for the US residential market. The Standard is maintained according to ANSI requirements and was approved by ANSI in January 2009. The NAHB, one of the sponsoring groups, hosts the NAHB National Green Building Conference and the NAHB National Green Building Awards, and offers the Certified Green Professional designation for building professionals.

In addition, Argentina, Brazil, Chile, Colombia, India, Italy, Jordan, Mexico, Norway, Romania, Russia, Spain, Sweden and the United Arab Emirates are among the nations that have established national green building councils patterned on the US Green Building Council, which maintains the LEED standard.
How do lecturers in higher education, teaching health and social care, view the phenomenon of truth within the context of their teaching? This thesis addresses a topic which to date has not received any sustained attention within the field of health and social care. The thesis explores the understanding that lecturers in health and social care have of the nature of truth and how their conceptions of truth impact on their teaching and on their relationship with students. The study was conducted through interviews with nine lecturers, from five universities and several disciplines within health and social care, which allowed them to explore their understanding of truth in relation to their teaching. A phenomenological approach was employed, as this enabled the participants to describe the phenomenon of truth as it presented itself to them through their own lived experience and as it was imbricated in their teaching. In order to analyse the lived experience of the lecturers I used an interpretative phenomenological analysis (IPA) approach, because it is concerned with the interpretation of particular experiences of a phenomenon. One of the key findings that emerged from the analysis was that none of the lecturers believed that there was one version of truth, but rather multiple truths or realities, often based on uncertainty rather than certainty. The suggestion was that what was being taught in class was a theory of provisional validity rather than an absolute truth, and this heavily influenced the way these lecturers saw their role within their students’ journeys towards their own versions of truth and authenticity. The study participants held that if students could become comfortable with questioning truth and accepting that more than one version of the truth exists, then they were enabled to deploy the art of critical evaluation and analysis within their own learning. Underpinning my analysis of my findings regarding the lecturers’ perceptions of their role in encouraging critical thinking and authenticity is the work of Barnett and Kreber. Barnett (2007) claimed that in order to become authentic, an element of critical thinking is required, and Kreber (2013) builds on this when she suggests that authenticity is associated with being true to self in a critical social theory sense. Further key findings are very much related to the unique dimension of my study being placed within health and social care, and include the connections between the nature of truth and matters such as: the participants’ identity as health and social care professionals and the influence this has on their teaching; how conceptions of truth impact on the health and social care knowledge base within the disciplines of the participants, and how this discipline knowledge underpins their teaching; the relationship between the participants’ conceptions of the nature of truth and the professional attributes that feature in the participants’ teaching; and how the understanding of the nature of truth links into the health and social care curricula. The thesis concludes by discussing implications for theory and practice that appear to flow from the findings of this study.
During our stay in Moscow we were fortunate to see a number of operas and ballets at the world-famous Bolshoi Theatre. The Bolshoi Theatre Company was founded in 1776. At first it gave performances in a private home, but it acquired the Petrovka Theatre in 1780, at which time it began producing plays and operas. The Petrovka Theatre was destroyed by fire in 1805. The current building, designed by Osip Bove, was built in 1825 to replace it, near the Maly Theatre (also designed by Bove) on Teatralnaya Ploshchad (Theatre Square). At that time, opera and ballet were considered to be more noble than drama. Thus, the opera house was named the "Grand Theatre" (Bolshoi in Russian means large or grand), and the drama theatre was called the "Smaller Theatre" (Maly meaning little or small in Russian).

The theatre saw its first performance on 18 January 1825. In 1853 a fire caused extensive damage; the theatre was closed for repairs and reopened in 1856. It was the site of the proclamation of the new USSR in 1922. This probably saved its existence, as Lenin, in his desire to eradicate all vestiges of the middle and upper classes, had intended to have it torn down. During WWII the theatre was damaged in a German bombing raid, but was promptly repaired as a symbol of the Russian resolve to endure and triumph. The Bolshoi desperately needs major structural renovations. It is to be closed for extensive repairs as soon as sufficient funds have been raised. The theatre's companies will continue to perform in another venue during the period of reconstruction, estimated at three years. The Bolshoi has been the site of many historic premieres, including Tchaikovsky's The Voyevoda and Mazeppa, and Rachmaninoff's Aleko and Francesca da Rimini.

The building itself is magnificent in its old-world splendor. The hall is adorned with red and gilt trim throughout. The theatre curtain, a relic of the Soviet era, is adorned with "CCCP" and the hammer-and-sickle insignia in red and gold. A huge Soviet crest surmounts the "Tzar's Box" (middle photo). A magnificent chandelier (right-most photo) hangs from the centre of the ceiling, which is also decorated with painted figures circling around it.

This plaque is on one side of the front facade of the Theatre. It states that "In the building of the Bolshoi Theatre the first all-union congress of Soviets on 30 December 1922 proclaimed the formation of the USSR and accepted declarations and agreements regarding the formation of the Union of Soviet Socialist Republics".

Teatralnaya Square (right) with the Bolshoi Theatre, showing a statue of Friedrich Engels (far right) in the foreground, is photographed from the Metropol Hotel, also a Moscow landmark. The Hotel Metropol (far right) occupies the other side of Teatralnaya Square. It was built in 1903 and is the only hotel in Moscow designed in the art-nouveau style (detail of mosaic decoration in photo at left). During Soviet times it was poorly maintained and was allowed to decay into the dismal state one came to expect of all Soviet hotels. In 1991 extensive renovations were undertaken to bring it back to its former glory, and it is now one of the most prestigious of Moscow's downtown hotels. The hotel has 5 floors and 415 rooms, and (2006) accommodation starts at $370 US per night. The Metropol is a member of Inter-Continental Hotels and Resorts.
Volcanoes National Park, also known as Parc National des Volcans in French and Pariki y’Igihugu y’Ibirunga in the native Kinyarwanda, sits in the north-western province of Rwanda in a small town known as Musanze (formerly known as Ruhengeri). This park is reputedly the oldest national park on the entire African continent and harbors the endangered Mountain Gorillas of Rwanda, which are always available for tourists who would love to carry out gorilla tracking in Rwanda. Volcanoes National Park borders Virunga National Park in the DRC and Mgahinga Gorilla National Park in south-western Uganda. Besides its huge population of Mountain Gorillas, the park also harbors the endangered Golden Monkeys, a primate species that currently offers the park's most distinctive tourism activity. Volcanoes National Park also shelters five of the eight volcanoes of the Virunga Mountains (Mount Karisimbi, Mount Bisoke, Mount Muhabura, Mount Gahinga and Mount Sabyinyo), covering over 160 km2 of rain forest and bamboo. Several years ago, this same park acted as a base for the American zoologist Dian Fossey, whose passion for gorilla conservation produced work that lives on to this day.

The prime attraction of the park is the endangered mountain gorilla (Gorilla beringei beringei), though many other mammals also inhabit its plains, including golden monkeys, buffaloes, black-fronted duikers, hyenas and bushbucks. A few reports also claim that a small number of elephants can be found in the park, but only on very rare occasions. The bird life spans 178 species, with about 12 of the main species and 15 of the subspecies endemic to the Virunga conservation area and the great Ruwenzori Mountains. Several activities are run by the Rwanda Development Board for tourists, the main one being gorilla trekking. Other activities include Golden Monkey tracking, twin lakes visits, mountaineering, visiting Dian Fossey's tomb, touring the Iby'Iwacu cultural village, plus a few community visits.

How to get there: Volcanoes National Park is located in a small town called Musanze, previously well known as Ruhengeri, which is very accessible by public transport from Gisenyi or Kigali or from the airport. The drive to Volcanoes National Park takes about two hours, so one can do gorilla tracking and drive back to Kigali on the same day. You will be required to arrive at the headquarters of ORTPN in Kinigi, at the park entrance, by 7:00 am; therefore, if you hope to trek gorillas in one day, you have to wake up very early for your journey so that you are on time. Note, however, that there isn't any public transport from Musanze to the park headquarters at Kinigi.
As winter fades into spring, many families begin to consider buying a new home. Spring is the most popular time of year to both buy and sell a home, and a spring sale often means moving during the summer, when most children are out of school. According to the National Association of Realtors, from the mid-1980s until 2008, most American families remained in a home for about six years. Since 2008, that number has increased to an average of nine years. While the housing crisis left many families feeling financially insecure and afraid to risk selling or buying a home, the rise in wages and the improvement in the economy over the last decade have created a market where many houses are for sale and lots of families are ready to buy. So, if you are ready to take the step and purchase a home, where should you start? Here are a few tips to help make your home-buying experience a smooth one!

Establish your priorities. Some families are looking for a forever home. Others are looking for a starter house or an older home to rehabilitate. “We knew we wanted a fixer-upper. We love DIY and were excited to take on the challenge. But not everyone wants to spend their weekends on home projects!” says Stacey Keller, Kansas City, MO, mom. Communicate with your partner and get on the same page about what you want in a house. This will help you narrow down the area where you want to live and the type of home you want to buy, as well as guide your budgeting.

Set your budget. As with any large purchase, set a budget and stick to it. Once you know what you are looking for in a home, you will better understand what you will need to spend to get what you want. “We had to set the budget right up front. Once we started looking at houses, it helped us to weed out the ones that were out of our price range, and it kept us from wasting time on houses that were way below our range,” says Debbie Brown, Olathe mom.

Do your research. Researching on your own will help save you time and ensure you get what you want. “We wanted to understand the neighborhood, the schools, the buying and selling trends in the area. We wanted to know everything,” says Rachel Thornton, Shawnee mom. So many things can impact the value of your home long-term, including the local school system, the age of the neighborhood and the amenities in the community. Spend time driving around the areas where you are considering purchasing a home. This will help you get a feel for the entire area, and you can decide whether it is a good fit for your family.

Look at the big picture. When putting a house on the market, sellers will stage the home to look its best. They often put on fresh coats of paint and have the house professionally cleaned. Although these touches are nice and help to show the home in its best light, other things are more important to consider for the long term. Walls can be repainted, knobs and pulls can be replaced and carpet can be cleaned or removed. When considering buying a home, you’ll benefit by looking carefully at whether the layout, location and functionality of the house work for your family. “It was hard not to get distracted by pretty lighting fixtures and gorgeous carpet. But in the end, we needed a house that fit our family’s lifestyle. We can always update the light fixtures,” says Barb Reynolds, Raytown mom.

Decide when to compromise (and when not to). We all know the difference between want and need, and that difference is never more important than when you are buying a house.
“I really wanted a huge walk-in pantry, but I knew we needed a big yard for the kids. I compromised on the pantry for the house with a great backyard. And really, I’m happy about it every day. We have so much fun in that yard,” says Grace Wilkins, Overland Park mom. Your home-buying experience will be much smoother if you create your want and need lists ahead of time. Do you want a whirlpool bathtub? Hardwood floors? A finished basement? Great, put those on the list. But what can you not live without? Four bedrooms? A fenced yard? Identifying your deal-breakers in advance will save you time and stress.

What are the most common mistakes buyers make when purchasing a house? “Not budgeting closing costs and other fees! Hire a good professional inspector and don’t be afraid to ask any questions. Remember, your dream home is also an investment,” says Choo Lee, realtor with SBD Housing Solutions.

There are lots of different kinds of loans, so shopping around for your mortgage is important. Some loans require 20 percent down, for example, while others require as little as 3 percent. Comparing offers can help you get the best deal and find the right loan for you.

You are typically required to pay mortgage insurance until you have 20 percent equity in your home. However, your mortgage company is not going to stop charging you for mortgage insurance automatically when you reach that 20 percent equity mark; you will need to contact them when you’ve reached that point (see the sketch below).

The asking price on a home is negotiable. When you make an offer, you can make one well below the asking price if you choose. In a good housing market, the seller will not be as willing to negotiate on the price as in a down market.

You should always have a home inspection. You have the right to have a home inspected after you’ve negotiated a contract with the seller. You should have the home inspection done and have your contract reflect that you can request repairs or back out of the deal based on the inspection results.

Sellers pay the realtor fee. When you’re a buyer, you don’t pay a fee to the realtor.
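As a rough illustration of the mortgage-insurance point above, the sketch below estimates a buyer's equity share from the home's value and the remaining loan balance, which is the quantity compared against the 20 percent threshold. The numbers are hypothetical, and real rules for removing mortgage insurance vary by lender and loan type.

```python
# Hypothetical illustration of the 20%-equity rule for dropping
# mortgage insurance. Real rules vary by lender and loan type.

def equity_share(home_value: float, loan_balance: float) -> float:
    """Fraction of the home's value the owner holds outright."""
    return (home_value - loan_balance) / home_value

# Example: a $300,000 home bought with 3% down ($9,000 equity at closing).
home_value = 300_000
loan_balance = 291_000

print(f"Equity at closing: {equity_share(home_value, loan_balance):.1%}")  # 3.0%

# After paying the balance down to $240,000 (assuming the value holds),
# equity reaches 20%: time to ask the lender about cancelling the insurance.
print(f"Equity later:      {equity_share(home_value, 240_000):.1%}")  # 20.0%
```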
Do you always seem to cross paths with people who are stuck on themselves, intolerant of people different from them, rude or downright arrogant? These people can be a great source of potential pain, and this article is here to help you sort the arrogant from the not-so-arrogant.

Pay attention to their conversations. Don't eavesdrop, but when they're talking to you or to those around you, listen to them. Is it always about them? Do they get mad or irritated if the center of attention moves to someone else? These are good signs of arrogance. Arrogance and smugness are often a reflection of limited life experience, and of feeling concerned that those with greater life experience "have got something over them." Rather than seeking to find out more through questions and learning (actions viewed by them as showing vulnerability), arrogant people tend to generalize from their limited, narrow life experiences and try to impose their small worldview on others. Jealousy of your achievements or seeming lifestyle can cause another person to feel smug or arrogant about something they think they do better than you, or own or have that you don't. Arrogant people have an extremely strong need to look good. When you make them look bad - even if it is the slightest offense - they will usually be very mad at you. This happens when you question (or at least seem to question) their appearance, intelligence, athletic abilities, or anything else relating to their self-image.

Challenge their worldview. Don't be aggressive, just skeptical and curious. If they get upset, gauge their anger. If it's minimal, they may simply be having a bad day. But if they're enraged, then they may see you as questioning their "perfect little world." And having one of those is usually indicative of arrogance. At some point or another, most people realize that the world doesn't revolve around them. Arrogant people counteract this by creating an atmosphere that revolves around them, and they get angry if they're reminded of the real world. Ambiguity frightens arrogant people because it suggests imperfection, change, and lack of certainty (realities we all must contend with as best we can). As such, instead of accepting that the world behaves randomly and at times totally averse to one's preferences, the arrogant person seeks to control everything and everyone, which of course is an impossible mission. Reality hurts when it intrudes; as such, an arrogant person is less likely than other people to self-reflect or analyze, thereby not seeing their own imperfections. They may also give themselves undue credit for positive achievements instead of acknowledging the input of others or of circumstances.

Learn the quality of their friendships. Don't be nosy or gossipy, but if they are happy with someone one day and hateful with them the next, that's a sign that they have a lot of fair-weather friends, which in turn points to arrogance, since it is very hard to be a truly good friend to someone who's stuck on themselves. Prideful people have a strong need to look good, and being self-sufficient is an effective way to do that. Since being a good friend to someone usually means helping them, they often can't stand the thought of a good friendship. Ironically, arrogant people often can't understand why they don't have any reliable and supportive friends.

See how they treat others who are unlike them in some ways. In other words, how do they treat those with different beliefs, cultural backgrounds and ways of seeing the world?
If it's inherently negative, then they're either over-zealous, ignorant of other people, or want to avoid those who contradict the fantasy land that caters to them and them only. Determine this based on their general personality and the people they're interacting with. Many times, prideful people have a serious "my-way's-the-only-way" attitude. This is simply a protective mechanism for their false image or their fantasy land.

Observe what their personality is like. Take note of how they act, talk, and use their social status. Do they have a general sense of "coolness"? Are they a chatterbox? Do they act like they own the place, or like the "big dog"? Are they very keen on their self-image? Many arrogant people have a false charm that no one seems to see through. But the arrogant person is usually more than happy to show their cruel side to those that they don't like. When they are cruel, their friends will usually ignore it or do nothing to stop it, since they're afraid that they'll be treated badly by their "friend."

Mention people you know that they don't like. This isn't meant to begin a conflict, but to gauge their rivalries, annoyances, and enmities. If their condemnation seems reasonable, they probably aren't hubristic. If it's harsh, they are. For the most part, arrogant people see people that they don't like as threats to their perfect little world. The more they hate someone, the more dangerous that person is to their fantasy land. And in turn, the bigger the threat, the harsher the criticism.

Ask around to see what they've been saying about you. If they have been saying bad things about you, they may simply not like you. If they're nice to your face but talk badly about you behind your back like it's their favorite hobby, then they probably have a problem with pride. Arrogant people often subconsciously know that they don't have any good friends. They compensate for this by creating the "impression" that they have a lot of friends - they have a "quantity, not quality" mentality. Then they simply insult their trophy friends when those friends aren't looking.

Be compassionate. Don't be judgmental of arrogant people or you risk having as negative an outlook as they do. Arrogant people are often trying to hide certain vulnerabilities and fears. Most of the time, the need for a strong and unquestionable self-image comes out of deeply rooted pain. Obviously, you also don't need to be taken in by their claims to be superior to you. Stay principled and detached. But you can reach out and see the genuine good in them, and praise what is real rather than perceived or forced talent. Sometimes, having someone push through the brusqueness can free the arrogant person to be much truer to themselves, allowing them to stop shielding themselves so fiercely.

My fiance asked me to move in with him and now he always says hurtful things to me. Things like "Don't act like this is your house, you are a guest here." He wants me to ask permission for everything. What do I do? In all honesty, your fiance is a jerk and you don't deserve to be treated like that. In the end, your self-respect matters, and regardless of who the hatred comes from, you should care for yourself. If I were you, I would tell him his words are unacceptable. If you don't get an apology, break it off with him. Respect should be the most important aspect of a marriage, so it is not a good sign if he is already exhibiting a lack of it while you are still engaged.
Is it good to be in a relationship with a guy who thinks it is all about him, or thinks it is all about looking good? No, that's not a healthy relationship to be in. Relationships should not be so superficial or so one-sided. Best to put an end to this and hopefully, eventually, find a more caring person to be with.

What causes arrogance in people? A lot of times it boils down to insecurity. The arrogant person can't deal with the fact that they are insecure and feel bad about themselves, so they build themselves up to hide their real feelings.

My friend constantly complains about her neighbor being a control freak, interfering, etc., yet she just told me they are going on vacation together. Does this make any sense? It seems to be a case of either dual arrogance or downright submissiveness on your friend's part. Possible dual arrogance: whilst the neighbor lady is controlling, maybe curt and very outspoken, it seems your friend is slightly less outspoken but is nevertheless unwilling to yield in any way, e.g. to show weakness of character or allow herself to lose face. Possible submissiveness: maybe this neighbor has asserted herself to such a degree as to strike genuine fear into your friend, who may be trying to avoid any conflict.

I argue, I cry and act crazy. Does that make me arrogant? The manner in which you've summed up your attitude demonstrates that you make a lot of noise, act disagreeably and go out of control whenever anyone tries to check your misbehavior. You might be arrogant (there isn't enough to go by to assess that), but you certainly like to create drama and use it to your advantage. Perhaps assess how people act around you and how many real friends you actually have. It might help you to decide to act more maturely and with consideration for others.

How can I get over an arrogant competitor? Ignore his attitude, be nice to him, and do your job. Arrogant people quite often tend to fail in one-on-one competition.

How do I deal with an arrogant boss? Be kind and keep your cool at work. If the issue doesn't go away, talk to someone in your Human Resources department for help.

My former best friend has been arrogant for 8 years. I had enough, and now we are kind of enemies. What do I do from this point? Stop responding to fake crisis calls. If you don't drop everything to take their "I'm so devastated! My boss gave me a look that I think means he secretly hates me, and that jerk from marketing wore the same shirt as me" calls, they'll find someone else who will, or they'll deal with it. Either way, it's okay to step back and get off the first-alert calling list for non-emergencies. Check out the helpful suggestions in this article about coping with arrogant people on wikiHow.

How can I help an arrogant person, or help them to help themselves? You can help by bringing the arrogant instances to the person's attention and not letting the person treat you badly. The person has to first identify that they have a problem and be willing to work on it for themselves. There must be a solid effort made to change and to recognize what is causing the arrogance in the first place (as the article notes, often insecurities and past situations).

Stay away from arrogant people as much as you can. They can cause you a lot of pain in your life. On the other hand, learning to deal with them in short bursts is a useful skill that can help you bring such people on board in teams, at work, in sports, etc., provided they are aware that you won't tolerate their smug shenanigans.
It doesn't always do to run away from others, or you could be running all your life! Make sure you're not being arrogant. If you are, tone it down and look at the situation objectively, in a non-biased way. Even though it's hard, don't hate arrogant people. They're usually trying to hide a painful past or an aspect of themselves they don't like, or have been seriously hurt by other people. Remember that they could be hurt by the same things that have hurt you, but they're simply addressing their pain in the wrong (unhealthy) way. Instead of resolving it, they're hiding it. This pain can express itself as arrogance, among many other things. Arrogant people also have a very hard time accepting apologies. This is particularly true if you've questioned their fantasy land or have seriously questioned (or have seemed to question) their self-image.

Always remember that there's a big difference between being assertive and being arrogant. Equally, some people are very anxious rather than arrogant, and it is anxiety that causes them to dominate a conversation or to try to prove themselves as good as you. You can tell the difference by looking for empathy. An assertive or nervous person will check for your responses and even ask questions, while an arrogant person will ignore your needs and you completely, and will continue to lack respect for your perspective. A summary of the symptoms of arrogance includes the following: intolerance of people different from themselves, inability to see different points of view, extremely harsh criticism of those they don't like, inability to form long-lasting relationships, and general narcissism.

Do they joke about people who shouldn't be joked about? Making fun of someone going through a hard time is a sign of wanting cheap laughs and not caring about other people's emotions. Prideful people usually couldn't care less about how people feel, since they nearly always have a difficult time empathizing with others. People going through a difficult time are often the target of jokes and insults by arrogant people. But these comments are made only when they're around people who they "know" will tolerate them, and not in the eye of the general public.

When it comes to popularity contests, why are they popular? Is it because they treat their friends decently, or because they are simply "cool" to hang out with? Simply because someone's "cool" to be around doesn't mean that they treat people respectfully. The main things that make people "cool" are completely superficial: they're either rich, attractive, athletic, have a good personality (toward those that fit their friendship criteria), or have a fake charm (that soon dissolves if you anger them when alone). Arrogant people can have all or a mix of these (and other) traits.

When it comes to dealing with arrogant people, they nearly always have something to protect, either their self-image or their self-centred universe. If they get the impression that you're questioning either one, they will dislike you. Learn to live with that, because it isn't about you at all; it's totally about their inability to control you. Arrogant people usually don't have truly good friends. Remember this when you wish you were as "popular" as they are. If they get in your face, leave or just ignore them and continue doing what you're doing. What makes them madder than anything is being ignored; giving in to them gives them the satisfaction of knowing they have gotten to you.
They are simply trying to inflate their ego, and insulting or arguing with them will inflate it a lot. Leaving will too, but not nearly as much; all they want is attention, because they are insecure. Depending on the situation, leaving might make them look stupid. They will hate you for this, but nobody wants to keep company with a total jerk!

Don't pay lip service to their perfect world. This will not only help you stay true to yourself, but may help them to see things differently. Don't actually attack their fantasy land. Instead, say something like "I don't agree with you on that" or "I have different opinions on this". They might get angry, but the chances aren't as high as they would be if you questioned their self-centered universe outright. Instead of saying "Maybe if you'd get over yourself, you'd see things for what they are", try saying "What makes you say that?" or "Why do you hold that opinion?" This forces the person to answer a very direct, factual question.

Keep in mind that there may be a psychological difficulty that comes across as arrogance (seeming aloof, closed off, or insecure with a false sense of self). In some cases, this could be bipolar disorder, borderline personality disorder or a social phobia. It could be many things, like a history of abuse or illness or bullying. Some people don't realize that their behavior marginalizes them from others and stops them from making friends. Be aware that while it is easy to call anyone "arrogant" as a wholesale generalization of a person's character, you should take into account your own mood, their mood, environmental conditions and life circumstances. Sometimes what people do or say has nothing to do with you. Be careful when you assume they're acting in a certain way specifically to upset or anger you.

Be smarter than them. No matter how much you might want to say something nasty to them, don't! It will do no good anyway. Don't go into platitudes about how arrogance is wrong. Just give a quick answer and let them understand that you don't want them in your life; being assertive doesn't necessarily mean putting things into words. Stay on the lookout, and stay a step ahead of them. If they've backstabbed you, point this out. No one – not even the arrogant person's "best friends" – will appreciate that behaviour. If you have to vent about an arrogant person, do so only to your best friends who won't tell anyone else. If your anger becomes common knowledge, it will start a conflict. There's a good chance that the prideful person won't understand why you don't like them. Just ignore their rude behaviour, and use a short and smart comeback if you must. Ironically, if you do win the argument or fight, they'll start playing the 'victim' card and appealing to their 'friends' not only to help them feel good, but also to make you look bad. If the arrogant person is considered 'cool' by a lot of people, their use of the victim card could make you an outcast. Act discreetly when confronting cool people with an ample entourage.

One of the symptoms of any antisocial personality disorder (such as psychopathy and sociopathy) is arrogance and disrespect for other people's rights. This is a dangerous aspect of arrogant people; if you have to live with a person like this, seek advice. This is why some arrogant people go on to become criminals.
Peel, Sir Robert, 1788–1850, British statesman. The son of a rich cotton manufacturer, whose baronetcy he inherited in 1830, Peel entered Parliament as a Tory in 1809. He served (1812–18) as chief secretary for Ireland, where he maintained order by the establishment of a police force and consistently opposed Irish demands for Catholic Emancipation. In 1819 he was chairman of the parliamentary currency committee that recommended and secured Britain's return to the gold standard. As home secretary (1822–27, 1828–30) Peel succeeded in reforming the criminal laws and established (1829) the London police force, whose members came to be called Peelers or Bobbies. Early in his career Peel scrupulously defended Tory interests, but he gradually came to believe in the need for change. The first sign of a modified outlook was in his sponsorship (1829) of the bill enabling Roman Catholics to sit in the House of Commons. In opposing parliamentary reform he recovered some of the Tory support that he lost by this position, and after the Reform Bill of 1832 (see Reform Acts) had passed despite his opposition, he rallied the party and was prime minister for a brief term (1834–35). In 1834, however, Peel made the election speech known as the Tamworth manifesto, in which he explained that his party accepted the Reform Bill and would work for further changes but without infringing on established rights. This statement came to be regarded as the manifesto for the Conservative party now emerging, under Peel's leadership, from the old Tory party. Among the able young men who rallied around Peel were William Ewart Gladstone and Benjamin Disraeli. Peel was asked to form a cabinet in 1839 but declined when the young Queen Victoria refused to make requested changes in her household. He returned to power in 1841, however, and the reshaped party attitudes were very apparent in his new ministry, which introduced an income tax and a revised system of banking control, gave aid to the Irish Catholic Church, and attempted Irish land reform. Of far greater importance were the virtual abandonment of custom duties and the repeal of the corn laws. Peel had formerly defended these laws, which protected Tory agricultural interests, but he was impressed by the arguments of Richard Cobden against them and convinced by the disastrous effect of the potato famine in Ireland. The laws were repealed in June, 1846, but Peel's action split his party, and he resigned from office after a tactical defeat within the same month. Much abused as an apostate during his lifetime, Peel is now recognized as a practical statesman of forward-looking views and great courage. His memoirs were posthumously published (1856). His correspondence and private letters were edited by C. S. Parker (3 vol., 1891–99) and later by George Peel (1920).
Every number is one of a kind, as is every person. The science of numerology reveals the features and peculiarities of each person that correspond to a certain number. Also, the ruling planet affects a person's overall mood and lifestyle. Number 1 is the beginning, the source and base of every number. Number 2 has both dark and light within. The symbol of number 3 is a triangle, the most stable geometric figure. Number 4 is a very powerful and pragmatic number. Number 5 is a very changeable number, as it is guided by Mercury. Number 6 is influenced by Venus. Number 7 is ruled by the Moon, as is number 2. Number 8 is governed by Saturn, one of the most secretive planets. The last in the cycle of numbers, 9 is the number of renewal.
Predictions for eclipses are summarized in Figures 1 through 4. World maps show the regions of visibility for each eclipse. The lunar eclipse diagrams also include the path of the Moon through Earth's shadows. Contact times for each principal phase are tabulated along with the magnitudes and geocentric coordinates of the Sun and Moon at greatest eclipse.

On Sunday, 1997 March 9, a total eclipse of the Sun will be visible from parts of eastern Asia. The path of the Moon's umbral shadow begins in eastern Kazakhstan, travels through Mongolia and eastern Siberia, then swings northward to end at sunset in the Arctic Ocean. A partial eclipse will be seen within the much broader path of the Moon's penumbral shadow, which includes eastern Asia, the northern Pacific and the northwest corner of North America (Figure 1). Due to the large value of gamma [1] (= 0.918) at this eclipse, the Moon's umbral shadow remains close to Earth's limb throughout the event. Thus, the Sun never climbs higher than 23° along the entire track.

The path of the umbral shadow begins at sunrise in easternmost Kazakhstan at 00:41 UT. However, it requires an additional four minutes for the northern edge of the shadow to contact Earth. At 00:45 UT, the path is 318 kilometres wide, with the southeast edge of the umbra reaching deep into central Mongolia. An observer on the centre line will then witness a total eclipse lasting 2 minutes 11 seconds with the Sun 6° above the eastern horizon. Mongolia's capital city Ulaanbaatar lies just south of the path and experiences a tantalizing partial eclipse of magnitude 0.996 at 00:48 UT. Only 0.2% of the Sun's photosphere will then be exposed, and it may be possible to see the corona and the diamond ring effect if skies are clear. By 00:50 UT, the Sun's centre line altitude is 12° and the duration of totality is 2 minutes 24 seconds. The industrial city of Darchan lies within the path ~30 kilometres south of the centre line, where it loses only one second from the maximum duration. North of the path, the Russian hydroelectric city of Irkutsk also witnesses a deep partial eclipse of magnitude 0.988 at 00:54 UT.

Traveling eastward, the shadow quickly crosses the Mongolian-Russian border as it passes south of Lake Baikal, the world's largest freshwater lake. At 00:55 UT, the path width is 361 kilometres, the centre line duration is 2 minutes 33 seconds and the Sun's altitude is 16°. Ulan-Ude lies just outside the northern limit and witnesses a partial phase of magnitude 0.998; only 0.1% of the Sun will then be visible. As the shadow's track curves northward, it engulfs the largest city in its path: Cita (pop. 366,000) experiences mid-eclipse at 01:00 UT and enjoys 2 minutes 15 seconds of totality. About 100 kilometres to the south, the centre line duration lasts 2 minutes 39 seconds at a solar elevation of 18°. Although the umbra first touched Earth only nine minutes earlier, it has already traveled 2,000 kilometres. At 01:08 UT, Russia's city of Mogocha witnesses a 2 minute 32 second total eclipse with the Sun at 20°. The shadow's course takes it increasingly northward, where its southern half briefly enters the northern provinces of China (01:10 UT). The instant of greatest eclipse [2] occurs shortly thereafter, at 01:23:44.8 UT. Totality then reaches its maximum duration of 2 minutes 50 seconds, the Sun's altitude is 23°, the path width is 356 kilometres and the umbra's velocity is 0.836 km/s.
From this point on, the path rapidly turns north and crosses some of the most desolate regions of northern Siberia. Finally, the umbra reaches the coast of the East Siberian Sea at 01:52 UT. The umbral duration (2m33s), path width (314 km), and Sun's altitude (16°) are now decreasing while the shadow's ground velocity is increasing (1.3 km/s). Continuing north across the East Siberian Sea and the Arctic Ocean, the Moon's umbra leaves Earth's surface near the North Pole at 02:06 UT. During the eighty minutes of central eclipse, the broad umbral shadow travels approximately 6800 kilometres and encompasses 0.4% of Earth's surface. This eclipse is a member of Saros 120, the same series which produced the widely observed eclipse of 1979 February 26. Saros 120 is in its old age and will produce only two more total eclipses after 1997, each at increasingly northern latitudes. Dedicated eclipse observers will be drawn to this low-Sun event for the possibility of seeing a naked-eye comet during totality (Comet Hale-Bopp) as well as the reasonable weather prospects it offers. Only two previous comets have been naked-eye spectacles during total eclipses (1882 and 1948), but Hale-Bopp must live up to its most optimistic predictions in order to make it number three. Mean cloud cover data suggest a 60% probability of clear skies in Mongolia with temperatures in the range of -10° C to -15° C. As one travels northeast along the path, visibility statistics increase while the mercury plunges. Eastern Siberia experiences some of the coldest temperatures on Earth, surpassed only by interior Antarctica! Although the probability of clear skies exceeds 80% here, the mean low temperature drops to -40° C with records below -60° C. Such temperatures make equipment operation all but impossible. Furthermore, transportation to this portion of the path is long, difficult and expensive. Perhaps the best trade-off between cloud cover statistics and temperature is in Mongolia north of the capital city Ulaanbaatar. The city of Darchan lies just south of the centre line where the Sun stands 13° above the horizon during the 2 minute 24 second total phase. Darchan also offers logistical merit since it is readily accessible from Ulaanbaatar some 180 kilometres to the south. In any event, this will be a cold weather eclipse offering a serious challenge to keep equipment, film, batteries and fingers from freezing up before and during the crucial seconds of totality. Local circumstances for cities throughout Asia are given in Table 1. All times are given in Universal Time. Sun's altitude and azimuth, the eclipse magnitude and obscuration are all given at the instant of maximum eclipse. A detailed report on this eclipse is available from NASA as Reference Publication 1369 (see: NASA Solar Eclipse Bulletins). [1] Minimum distance of the Moon's shadow axis from Earth's centre in units of equatorial Earth radii. [2] The instant of greatest eclipse occurs when the distance between the Moon's shadow axis and Earth's geocentre reaches a minimum. Although greatest eclipse differs slightly from the instants of greatest magnitude and greatest duration (for total eclipses), the differences are usually negligible. The year's first lunar eclipse occurs in western Virgo three days after the Moon's apogee (Figure 2). The event is a relatively large partial eclipse, with the Moon's southern limb dipping deeply into Earth's umbral shadow.
While first penumbral contact occurs at 01:40 UT, most observers will have difficulty detecting the eclipse much before 02:30 UT. The partial eclipse commences with first umbral contact at 02:57 UT. The partial phases last nearly three and a half hours before ending with last umbral contact at 06:21 UT. Although it cannot actually be observed, the eclipse technically ends when the Moon leaves the penumbral shadow at 07:38 UT. At the instant of greatest eclipse (04:39 UT), the Moon will stand at the zenith for observers near the Equator in South America. At this time, the umbral magnitude peaks at 0.9271 as the Moon's southern limb passes within 12 arc-minutes of the shadow's axis. In comparison, the Moon's northern limb lies a mere 2 arc-minutes outside the northern edge of the umbra. If the umbra is as bright as it was during last April's eclipse, observers should have no trouble tracing the deeply eclipsed southern limb of the Moon with the aid of binoculars or a small telescope. During the eclipse, Mars shines brightly (mv = -1.2) at opposition and is located ten degrees northwest of the Moon near the Virgo/Leo border. This event is well placed for most of the Western Hemisphere. Only observers in the westernmost portions of North America will miss the beginning of the eclipse, which occurs before moonrise from the region. In contrast, most of Europe and Africa will witness moonset before the eclipse ends. Table 2 lists predicted umbral immersion and emersion times for twenty well-defined lunar craters. The timing of craters is useful in determining the atmospheric enlargement of Earth's shadow (see: Crater Timings During Lunar Eclipses). This eclipse will be particularly useful in determining the umbra's enlargement due to the depth of the eclipse and the off-axis geometry of the Moon's path through the shadow. The second solar eclipse of 1997 is a partial eclipse visible primarily from Australia, New Zealand and portions of Antarctica (Figure 3). First and last penumbral contacts occur at 21:44 UT (Sep 1) and 02:23 UT (Sep 2), respectively. Greatest eclipse takes place at 00:04 UT (Sep 2) when the magnitude reaches 0.898. Local circumstances for cities throughout Australia and New Zealand are given in Table 3. All times are listed in Universal Time. Sun's altitude and azimuth, the eclipse magnitude and obscuration are all given at the instant of maximum eclipse. If the eclipse is in progress at sunrise or sunset, this information is indicated by '- r' or '- s', respectively. The appearance of the eclipse at maximum phase for a number of locations is depicted in What Will The Eclipse Look Like? This is the fifty-third eclipse of Saros series 125. The series produced its last central eclipse in 1979 and is winding down with a series of partial eclipses of progressively decreasing magnitude. The series ends with a partial eclipse in 2358. The last eclipse of the year is a total lunar eclipse. Unfortunately, it is not visible from the Western Hemisphere (Figure 4). This time, Eastern Hemisphere observers are favored by the event, which occurs in southern Pisces. The penumbral phase begins at 16:11 UT, but sharp-eyed observers won't notice anything until over half the Moon lies within the tenuous outer shadow. The partial eclipse commences as the Moon enters the dark umbra at 17:08 UT. The one-hour total phase begins at 18:15 UT and ends at 19:18 UT. Afterwards, the partial phases resume and continue until 20:25 UT. The eclipse ends as the Moon finally exits the penumbra at 21:22 UT.
At greatest eclipse (18:47 UT), the umbral magnitude reaches a value of 1.200 with the Moon in the zenith from the Indian Ocean. The Moon's northern limb then passes within 6 arc-minutes of the umbra's centre while the southern limb lies 7 arc-minutes inside the shadow's edge. A large variation in shadow brightness can be expected and observers are encouraged to estimate the Danjon value at different times during totality (see: Danjon Scale of Lunar Eclipse Brightness). Note that it may also be necessary to assign different Danjon values to different portions of the Moon (i.e., north vs. south). Observers in westernmost Europe will miss the beginning of totality, which occurs before moonrise. However, most of Asia and East Africa will see the entire event. Two bright planets are well placed during the eclipse. Saturn (mv = +0.5) is 25° northeast of the Moon while Jupiter (mv = -2.5) lies 40° southwest of it. Table 4 lists predicted umbral immersion and emersion times for twenty well-defined lunar craters. The timing of craters is useful in determining the atmospheric enlargement of Earth's shadow (see: Crater Timings During Lunar Eclipses). A full report, Eclipses During 1998, will be published next year in the Observer's Handbook 1998. Details for the 1997 March 9 total solar eclipse have been published by NASA (see: NASA Solar Eclipse Bulletins). There is already a great deal of interest in the total solar eclipse of 1998. The path of this eclipse passes through the northern Galapagos Islands, northern Colombia, Venezuela and the Caribbean. The centre line duration is between 3 and 4 minutes, depending on the longitude. As a preview of things to come, a map of the path through the Caribbean is included (Figure 5). Caribbean islands in the path include Aruba (Oranjestad - 2m52s), Curacao (Willemstad - 1m55s), Montserrat (Plymouth - 2m56s), Antigua (St. Johns - 2m11s) and Guadeloupe (Basse-Terre - 1m13s; Les Abymes - 2m19s). Preliminary eclipse durations for cities are given in parentheses. Detailed predictions for this eclipse are available in NASA RP 1383 - Total Solar Eclipse of 1998 February 26 [Espenak and Anderson, 1995].
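As an aside for readers who want to relate the magnitude and obscuration figures quoted above, the following minimal Python sketch (not part of the original predictions) computes the obscuration, the hidden fraction of the Sun's area, from the eclipse magnitude m (the fraction of the Sun's diameter covered) and the ratio k of the Moon's apparent radius to the Sun's. The geometry is the standard overlap of two circles; k must be taken from an ephemeris for the eclipse in question, and the value 1.04 used below is only an assumed, typical value near a total eclipse.

from math import acos, pi, sqrt

def obscuration(m, k):
    # Disk centre separation d in units of the Sun's radius,
    # from the magnitude convention m = (1 + k - d) / 2.
    d = 1.0 + k - 2.0 * m
    if d >= 1.0 + k:           # disks do not overlap at all
        return 0.0
    if d <= abs(k - 1.0):      # one disk lies entirely inside the other
        return min(1.0, k * k)
    # Lens-shaped overlap of two circles with radii 1 (Sun) and k (Moon).
    a1 = acos((d * d + 1.0 - k * k) / (2.0 * d))
    a2 = acos((d * d + k * k - 1.0) / (2.0 * d * k))
    area = a1 + k * k * a2 - 0.5 * sqrt(
        (1.0 + k - d) * (d + 1.0 - k) * (d - 1.0 + k) * (d + 1.0 + k))
    return area / pi

# The deep partial phase at Ulaanbaatar (magnitude 0.996), with the assumed
# k = 1.04, gives an obscuration near 0.998, i.e. ~0.2% of the photosphere exposed.
print(obscuration(0.996, 1.04))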
VETERANS DAY LUNCH - Thursday, November 12th! Math - We will continue mastering our addition strategies this week. We want to be proficient so that when we use these strategies for subtraction, they come easily to us! Reading - We will be reading and listening to non-fiction books. We will see how the author presents the information to us in an interesting way. Then we will use what we have learned to write our own non-fiction pieces. Writing - This week in writing we will be selecting a non-fiction topic to write about. We will brainstorm ideas in order to determine what makes a non-fiction book interesting. We will begin organizing our topics and practicing note taking so that we don't plagiarize. Social Studies - Tomochichi is the final famous person we will be learning about for the second quarter. The students will have fun seeing how all three of these famous people (Mary Musgrove, James Oglethorpe, and Tomochichi) worked together to make Georgia what it is today.
Jacques Marquette, a French Jesuit priest, had made expeditions along the Northern lakes, proselytizing among the Indian tribes. He had conceived the idea that there was a great western river leading to China and Japan. He was joined in his quest to find this route, and the tribes along it, by Joliet, a man fired with the ambition and daring of the bold explorer. These two men, with five employees, started on their great adventure May 17, 1673. They found the Upper Mississippi River and came down that to the mouth of the Arkansas River, thence proceeding some distance up that river, it is supposed, to near where Arkansas Post now stands. Thus the feet of the white man pressed once more the soil of this State, but it was after the lapse of many years from the time of De Soto's visit. Marquette carried into the newly discovered country the cross of Christ, while Joliet planted in the wilderness the tri-colors of France. France and Christianity stood together in the heart of the great Mississippi Valley; the discoverers, founders and possessors of the greatest spiritual and temporal empire on earth. From here the voyagers retraced their course to the Northern lakes and the St. Lawrence, and published a report of their discoveries.
1. Why Study English Grammar? Native Speakers and Grammar Study. The Legacy of the Eighteenth Century. 2. How Do We Study English Grammar? Why Do People Disagree about Grammar? What Are the Common Elements of English? 3. Nouns and Noun Phrases. What Are Some Common Subcategories of Nouns? What Makes Up a Noun Phrase? What Are the Functions of Noun Phrases? Verbal Nouns and Noun Phrases. 4. Verbs and Verb Phrases. What Are Some Common Subcategories of Verbs? What Makes Up a Verb Phrase? What Are Nonfinite Verb Phrases? How Do Adjectives Modify Nouns? Is All Well and Good? How Is the Passive Voice Formed? How Are Grammatical Relations Determined in the Passive Voice? Why Do We Need the Passive Voice? What Is a Truncated Passive? 9. Clause Type: Discourse Function. Crossover Functions of Clause Types. 10. Clause Type: Affirmative vs. Negative. What Is Negativity in Grammar? 11. Combining Clauses into Sentences: Coordination. How Is a Sentence Different from a Clause? 12. Combining Clauses into Sentences: Subordination. Restrictive and Nonrestrictive Relative Clauses. English Grammar: Language as Human Behavior.
Everyone who has spent more than a few minutes doodling in class knows what the Sierpinski triangle is. What they might not know, however, is that this fractal has zero area and infinite "perimeter". What you see here is quite simple: a point is chosen at random, and then one of three things happens: it gets shrunken down into a square on the bottom left, it gets shrunken down into a square on the bottom right, or it gets shrunken down into a square on the top (imagine a very crude pyramid made of three blocks within a larger block). Amazingly enough, repeating this over and over makes the Sierpinski triangle. The Lorenz attractor was initially designed as a toy weather model; one so simple that there was no guarantee that it had anything to do with the weather. For example, this model makes the assumption that all weather is the same at all places (more or less). Still, even with this simplistic model, sensitive dependence on initial conditions appeared, and a new kind of science was born. People are still intrigued to watch the attractor go around and around. Here you'll find some cool live demos of stuff that we've put together for your viewing pleasure! Check out how a Sierpinski triangle can be generated using a very simple set of rules. Watch the Lorenz attractor loop around and around and demonstrate the butterfly effect in one of its purest forms. Look at how a Fourier series actually works.
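For the curious, here is a minimal Python sketch of the first two demos. The corner coordinates, starting points and step counts are arbitrary illustrative choices, and plotting the returned points is left out.

import random

def sierpinski(n=100000):
    # Chaos game: repeatedly jump halfway toward a randomly chosen corner;
    # the visited points trace out the Sierpinski triangle.
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = random.random(), random.random()    # arbitrary starting point
    points = []
    for _ in range(n):
        cx, cy = random.choice(corners)        # one of the three maps, at random
        x, y = (x + cx) / 2.0, (y + cy) / 2.0  # shrink toward that corner
        points.append((x, y))
    return points

def lorenz(steps=10000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # Simple Euler integration of the Lorenz system with its classic parameters.
    x, y, z = 1.0, 1.0, 1.0
    path = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        path.append((x, y, z))
    return path

Feed lorenz() two starting points that differ only in the last decimal place and the trajectories soon diverge completely, which is the butterfly effect the demo illustrates.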
Urge incontinence: a sudden and urgent need to urinate due to involuntary bladder contraction. Urinary incontinence can be caused by muscle weakness in the bladder or pelvic floor, or by problems in the nerves that control urination. In general, it occurs when the muscle (sphincter) that holds the bladder's outlet closed is not strong enough to hold back the urine. This may happen if the sphincter is too weak, if the bladder muscles contract too strongly, or if the bladder is overfull. Smoking, previous pregnancies, obesity, diabetes, bladder disease, certain medications or constipation can contribute to incontinence. Congenital problems or neurologic disease (for example, stroke, Parkinson's disease, multiple sclerosis or a spinal cord injury) can also contribute to incontinence. If you have urge incontinence, you may leak when you get the urge to urinate, and you will often urinate frequently. If you have mixed incontinence, you may have symptoms of both problems. Medications: Anticholinergics, tricyclic antidepressants or alpha-adrenergic drugs are usually taken to treat urge or mixed incontinence. Injection therapy: The physician injects collagen, body fat or synthetic compounds around the urethra to bulk up or improve the function of the urethral sphincter and to compress the urethra near the bladder outlet. Posterior Tibial Nerve Stimulation (PTNS): The physician performs periodic stimulation of the posterior tibial nerve (near the ankle) as a weekly, outpatient therapy. Tension-free vaginal tape (TVT): Mesh tape is placed under the urethra like a hammock to keep it in its normal position. The tape provides support for a sagging urethra so it remains closed when you cough or move vigorously or suddenly. This is usually used to treat stress or mixed incontinence. Sacral nerve stimulation: A pacemaker-like device for the bladder is implanted through a tiny incision near the tailbone to stimulate the sacral nerves. This treatment is usually used for urge incontinence.
The body has a vast array of healing mechanisms and they are all regulated by the mind – mostly by the unconscious mind. This regulation involves two-way communication – from the mind to the body AND from the body to the mind. To use the mind to accelerate healing we need to be able to send positive messages from the mind to the body and receive the body's feedback. When injured or in pain, the body instinctively tenses up. When it does, blood flow is limited, muscle tension actually aggravates the pain and the potential for healing is reduced. The antidote is to override this instinct with deep relaxation. I lay down and spent hours deeply relaxing my body generally and the affected area specifically. This process involved putting the centre of my awareness into the area being relaxed, particularly the area needing healing, and it was guided by the pain – see part iii) to come. This is another technique where having pre-existing skills is so helpful. How nice it would be if everyone knew how to do this by their teenage years. However, it is never too late to learn. Start with the Progressive Muscle Relaxation, focusing upon the feeling of the body relaxing. Learn to take that feeling of relaxation into the affected areas. This does take time. It does take practice. It does bring rapid relief and it does definitely accelerate healing. Meditation is highly therapeutic. It leads to a state of deep physiological rest; a state of deep natural balance in which the ideal conditions for natural healing exist. Imagery provides the means to connect our conscious intention to heal with our mind's unconscious healing control centre that knows how to do it. Together, the two techniques are synergistic. I meditated in my usual way 3 times daily for 40-60 minutes each time. This was done with the gentle but clear conscious intention that this meditation would help to accelerate healing, but it was done without "trying" to make healing happen. By contrast, I also imagined the injured area healing, using semi-literal imagery that highlighted the end result of a fully functional, healthy, strong muscle and shoulder, with me being able to move and use it through its full range of movement. Again, ideally these are pre-existing skills. Good to learn and practice when you are well, but it is certainly realistic to learn and apply these techniques when the need is immediate – see the Resource list later. Perhaps paradoxically, pain is a great asset to accelerated healing. Obviously pain lets us know something needs attention, but more specifically, we can use it to focus our Mind-Body techniques and to get feedback. With the deep relaxation, I focused upon the painful areas (which did relate directly to the injured tissue), put my full awareness (free of judgement or reaction) into them, and worked on relaxing those areas until they felt the same as the healthy areas of my body. This took some doing. It required some resolve, some capacity to feel the pain purely as it was, and some persistence. It was often only partially successful in any given session, but then I took heart from any progress made towards a more relaxed, less painful state (rather than bemoaning what was left to do). Occasionally there was complete relief from the pain and a deep sense of healing flowing. My sense is that this technique is a key principle in accelerated healing.
Quite simply it takes practice, and while what to do is essentialised above, the details of how to do it more thoroughly are in Meditation – an In-depth Guide and on the CD (or Download) Effective Pain Management. While we sleep, the body and the mind are at rest. Sleep provides a refuge, a relief from pain if we need it; and while we sleep there is plenty of energy available for healing. Before going to sleep I reminded my body, programmed it really, to continue healing. This one is simple. In a kind, friendly way, just before going to sleep, remind your body it will be free to heal unhindered, undistracted while it is asleep, and maybe even imagine it doing so. Self talk can be very destructive or very self-affirming, very healing. I was pleased to notice that after all my years of conscious experience with this principle, the fact was this was easy for me. For many I have known, destructive self talk can provide a major challenge and can require a concerted effort to transform. What to do? Firstly, be gentle with yourself. Do aim to remember to notice the conversation, the thoughts, that flow through your mind. Recognise they are but thoughts. They are just thoughts. So even if they are potentially destructive thoughts, unless you take them seriously they are just thoughts – they come and go and no harm is done. Where the problems arise is when we take our thoughts (especially potentially unhelpful ones) too seriously and allow them to dictate how we function, how we are. So the ideal is just to recognise thoughts for what they are – just thoughts; to follow through on constructive thoughts and to let go of any unconstructive ones. Perhaps, as an interim measure while we work towards this somewhat idealistic goal, it can be helpful to gently correct or to actively dismiss unhelpful inner chatter and to give more weight to positive self talk. If this area is a major issue for you, affirmations may well be useful. In my view, there is a very clear hierarchy of healing. It starts with being so well that sickness and injury are simply no problem, and leads all the way to major external interventions. My approach is always to focus on what works and aim to do what is easiest, most natural and has the least side effects. So in healing, my personal approach is to start with the most natural thing that is likely to work, to give that some time and be open to noticing the response, and to move down the hierarchy until something or some combination of things actually works. Surgery often makes really good sense. Being really well involves consistently living a healthy lifestyle. The emphasis is on recognising how precious life is – and how fragile. It is a delight to be alive and it makes sense to celebrate life by following a healthy lifestyle. With a healthy lifestyle, prevention flows naturally, good health flows naturally, wellbeing flows naturally. Here more conscious attention is given to the prevention of illness. In my experience, while prevention makes all the sense in the world, people in general are completely underwhelmed by it as a motivator to adopt a healthy lifestyle. Illness is a great driving force, wellness can do it, but not so many are motivated long-term by the notion of prevention. Sad? Maybe. Fact? Absolutely! There are two big areas to consider – first what you can do for yourself, and then what can be done for you. Maybe you can completely resolve the injury/illness yourself – the body does have an amazing capacity for regeneration and healing.
But if you do need any external help, how you respond, what you do, all of that will have a profound impact on the experience you have during the healing process and upon the final outcome. Break a leg, disregard the healing, eat badly, use it excessively – bad outcome! Break a leg, work with the healing, eat well, exercise judiciously, etc. – good outcome! A healthy lifestyle, Lifestyle Medicine, creates the ideal conditions in which the body can best contribute to its own healing, and in which the body can work powerfully to gain the best result from any treatment and to minimize the risk and impact of any side effects. Of course it can make sense to seek external help for healing, and this help can come in many forms. Each culture has its own Traditional Medicine. In the West we call this Conventional Medicine, but there is Traditional Chinese Medicine, Traditional Aboriginal bush medicine, Ayurveda, Tibetan Medicine and so on. There is also Complementary and Alternative Medicine and Palliative Care. The point is that all these modalities involve something being done to, or for, you. The Australasian Integrative Medical Association (AIMA) has recently published a major policy statement on the range of healing modalities on offer and I will blog on that soon. Again, my personal preference is to use the most natural modality that is likely to work and that has the least side effects. What this is will vary from situation to situation, and ideally we can be helped/advised by someone with a broad view who has our own best interests at heart. This is where it is valuable to have a key health practitioner, and in my view the ideal person for this role is a General Practitioner who is trained and experienced in Integrative Medicine. AIMA is the peak body for this group and their website features contact details for accredited practitioners. I do like to have a sound medical diagnosis, so I did visit an orthopaedic surgeon and had X-rays and an MRI. (I was also curious to see how my lower spine looked on X-ray after all these years – crap basically, but it works well). This proved to be a bit of a waste of time because once it was all organized, the arm was pain-free and fully functional again. However, the tear was confirmed by the physical exam and on MRI. I have become pretty good with pain management over the years (I have had root canal dentistry without anaesthetic) but this injury was excruciating. I needed to travel and I had commitments to lecture. I took two Nurofen three times and plain Panadol going to bed twice. Hard to give specific advice here – probably not appropriate either. Situations can be so different. Best to reflect on the principles, discuss your own situation with your family, health practitioners and any other confidants you trust, and then decide what works best for you. Taking time to get to a point of confidence and making time to formally contemplate the possibilities are two things that seem to help most people. These principles are usually discussed during the specific cancer residential programs Ruth and I present, and in the doing, people are assisted to sort out their priorities and options. Many of the actual Mind-Body techniques are practiced and developed in all our retreats. When unwell, constructive support is vital; support from family, friends and health professionals. Ruth looked after me extremely well. She also managed her own natural concern for how we would manage if the injury persisted, and embarked on the trip expecting the best.
It is fair to say that after working initially as a conventional doctor she has needed time, training and experience to be confident of accelerated healing, but having accomplished this, we were united in the possibilities. I am also fortunate to have a senior orthopaedic surgeon as a friend, so I was able to call upon him for early phone advice and then to see him personally when we arrived in Sydney. The injury healed too quickly to call upon any other services. Seek out the best health professionals in every area of need that you have, ask for help, ask lots of questions, take your time, reflect upon and contemplate the answers and the options, then make deliberate (as in conscious) choices. Then seek help to follow through. The material presented in this and last week's blog will be at the heart of the material to be covered in the next specific cancer programs Ruth and I will present – the first, Mind, Meditation and Healing at Wanaka, New Zealand in November 2014, the second, Cancer and Beyond in the Yarra Valley, Australia in May 2015. (Nutrition will be given thorough coverage as well, while there will be ample time with myself and Ruth for questions and discussion, along with good conversations amongst like-minded people). To be clear, there is a lot that can be done to accelerate healing. However, in my experience, to do this well takes some commitment to learning and practice. Both are required. So I have added a resource list below featuring where more details are available via my own books, CDs etc., and I am working on an expanded reading list that I will add to my website soon – suggestions for inclusions are welcome via the comment section below. In conclusion, maybe you know someone else in need and could forward this two-part blog. Maybe together we could help someone else to experience accelerated healing. It is not too late to end the year with deep natural peace and profound insight. With over 50 years of leading meditation retreats and a wide variety of groups between us, Ruth and I invite you to join us for a 7-day meditation retreat amidst the beauty of the Coromandel Peninsula. The Mind that Changes Everything – for details on how to connect the conscious thought "I want to heal" with the unconscious part of the mind that actually regulates healing. You Can Conquer Cancer – the complete manual for healing. Meditation – a Complete Path – the ideal starting point for meditation and the 2 key guided meditation practices. Effective Pain Management – how to transform the experience of pain and use it to accelerate healing. Emotional Health – how to recognize and let go of destructive emotions, while enhancing healthy emotions. A Good Life – ABC Compass program outlining something of my life and work. As part of the Mindbody Mastery online meditation program I helped to set up, we feature a regular blog series on masters of mind & body. The aim is to keep going back to that question – ever wondered what real-life masters of mind & body look like? Who are they? What do they do with their lives? How do they think, speak and act? And the answer – would it surprise you to learn that by and large, they look very much like you and me, and that they mostly live in our own communities? Though some do choose to be renunciate monks and nuns and live in splendid isolation at some of the most spectacular places on earth (like the Himalayas), while yet others choose to live nomadic lives, spreading goodness and inspiration through the world (like the famed Sadhus and Fakirs of India).
Invariably each has a keen sense of how they can help make our world a better place, and goes about their work in their own unique way – sometimes in the glare of public limelight, but most often just quietly, and with great dignity. Having showcased, in our previous two blogs, some who have spectacularly renounced the world and their identities in their quest for mind-body mastery, this time we venture closer to home, with Liz Schiemer of Pt Stephens, NSW, Australia, a Master of Positive Thinking. Her untiring work in improving mental health in her community is an inspiring story, as you will see.
But roads quickly became clogged up, forcing the government to limit the amount of time cars can drive. Air pollution has also been a problem. To solve these issues, the authorities have invested heavily in public transport over recent years, a programme that was accelerated when Beijing was given the 2008 Olympic Games. But Professor Ou Guoli, from Beijing Jiaotong University, says more needs to be done. "We need other government policies and measures to reduce the amount of traffic on the roads," he said. Prof Ou said city centre parking should be made more expensive and there needs to be more "park and ride" facilities. There should also be policies to encourage people to get back on their bikes, he said.
SDI sat down with avid sidemount diver and instructor Pete Nawrocky with Dive Rite to discuss how and why divers get involved in Sidemount Diving. SDI – How long have you been diving? Pete – I started diving in 1971 and then in 1998 started with sidemount. SDI – Why did you want to dive sidemount? Pete – Caves. I got into it because I simply could not fit safely with backmount. And back then, the only way to do sidemount diving was to build your own harness. The inability to carry weight because of a back problem, lack of mobility, or a shoulder problem is also a common reason people switch to sidemount. People feel comfortable with it once they have it on in the water; they find it a lot easier to work with after they have been trained and have used the unit for a while. They seem to stick with it, whether boat or shore diving. And another common one is ahh… 'cause it looks good. It's a lifestyle change. People want to do something different than they have been doing before. SDI – Who is the ideal candidate for sidemount diving? Pete – The minimum certification level a diver should have is advanced scuba diver, but there is no ideal candidate. It really comes down to somebody who has a desire to do this type of diving and makes a commitment to it. SDI – Are there prerequisite experiences for sidemount? If so, what are they? Pete – We want to make sure divers are comfortable in the water and with their equipment before trying a different style. A lot of people think in terms of wearing 2 tanks when talking about sidemount. Well, you don't have to wear 2 tanks; you can wear a single tank while diving sidemount, and deciding between 1 or 2 tanks really depends on the person. By and large the people that want to be in sidemount have already made that decision, and the way they made that decision is most times they have tried it, whether they tried it at the pool or a demo day event, or they have talked with their friends or seen people with it. Once they get started, they tend to stick with it. SDI – As a sidemount instructor, what advice would you like to share with divers who are considering the course/style? Pete – The first thing they should do is try it before they get involved in anything else. There are a lot of events and demos for divers to actually get the gear on in the water, so they can get a feel for the equipment. I strongly recommend that they take a course because it's not difficult to dive with, but it is about gear management and gas management while you're making the dive. You just don't buy it, slap it on and go; it has to be fitted to an individual's body shape for proper wear. SDI – Once the diver has committed to trying sidemount, do they have a learning curve when transitioning from backmount to sidemount? Pete – Yes, there is a learning curve, and that curve is getting the gear configured to your body shape and learning to manage the equipment in the water as well as managing your gas consumption. Diving is both a mental and physical sport. Some people pick up on it right away and feel very comfortable with the configuration, and others have to change the way they swim in the water, since a frog kick is the preferred method of locomotion underwater and they have always been doing a butterfly, so mastering the frog kick and the equipment is the most important learning curve. SDI – You recently taught an SDI headquarters' staff member, Taylor. Can you tell me a little bit about that?
Pete – Taylor did great during the course; her trim was good, her fin kick was good, she handled it well, she did all the drills properly and took her time. The major skill in sidemount diving is the ability to handle your gear by yourself. There is no reason for a sidemount diver to have to have someone help them get dressed; that's all part of the class. She learned how to get dressed on the surface and in the water as well as de-kitting on the surface and in the water. That's the major thing about sidemount diving: you're not supposed to need a caddy, so to speak, to help you get dressed and in and out of the water. SDI – If I want to try sidemount, where would I go to learn more? Pete – Demo Days are a great place to learn more. Most dive centers and manufacturers will have demo days so divers can try out new equipment. Dive Rite and TDI are partnered up this year at several locations, one being at Dutch Springs on June 8, 2013. SDI – Last year SDI/TDI and Dive Rite teamed up with Buddy Dive Resort in Bonaire for a week of tech dive demos, presentations and training. Was sidemount included in the camp? Pete – Yes it was, and we will be doing it again this October, 2013. Bonaire was a blast! We had a pre-dive briefing and then off to the water. We demonstrated how to get dressed in the water while floating on the surface. Then, with a sidemount instructor, they went on a guided dive, so they got the experience of actually diving sidemount. And for those who were qualified as instructors, we helped them work toward their instructor level. SDI – If you could dispel one myth about sidemount diving, what would it be? Pete – The first thing you have to understand is that sidemount diving was propelled to where it is today by the consumer, not the manufacturers. This is what the people wanted. They saw the advantages, they tried it and they enjoyed it. The only thing I can say this is akin to is skiing: if you remember about 30 years ago when snowboards started showing up on the slopes, everyone said it was a fad and it wasn't going to stick. Now, it's an Olympic sport. Sidemount diving is viewed the same way. You can see people discussing it, saying it's not necessary, it's only mission-specific, but what it comes down to is that this is something that somebody wants to do, and they make the active decision to dive this way. That's why it took off: because they liked it. SDI – What's next for sidemount? Pete – Sidemount is just a different way of carrying your gear. So to make it simple, the sport diver with a single sidemount may want to make that jump to deep dives, requiring decompression diving with a double sidemount, wearing two tanks. And after that, the next step would be technical sidemount, where you might be doing mixed gas dives, carrying 2 cylinders, or you may be up to four or more because you're doing trimix dives that require switching to different breathing mixtures. SDI – That leads me to my next question: is 2 tanks technical diving? Pete – No, technical diving isn't what you're wearing; technical diving is what you're doing. Some people like the 2 tanks even though they are not doing deco, but they are planning 2 dives that day. Wearing both tanks on the first dive is all about gas management, so they have enough gas in both tanks to make the second dive without changing their rigging. Individuals considering solo diving may look at sidemount diving as one of the best configurations to go with because you have full control over your equipment.
In terms of gas management, if you have any problems with your hoses or regulators, you can actually see what you’re working with. SDI – And finally, is sidemount your preferred configuration for diving? Pete – Yes. If I’m not diving the rebreather, I prefer to dive sidemount.
Every year in Britain, more than 7 billion animals face the barbarity of slaughter - many fully conscious. Most spend their short, brutal lives in confinement, pain and misery. Viva! launches regular, hard-hitting campaigns and has forced the vegetarian and vegan debate back on to the agenda - on TV, radio and in the press. Find out more about our campaigns and support Viva! - Campaigning for Animals, Fighting for Change. Badgers are under threat because of the dairy industry. Find out why and what you can do to save them! Every day in the UK 2.5 million chickens are cruelly slaughtered for meat - that's 30 deaths every second - each one an individual, each one a life lost. Dairy cows are kept in a cycle of near constant pregnancy and lactation (meaning huge stress, often leading to disease). So cruel it is illegal to produce in the UK, but not illegal to import it. Where's the sense in that? The best way to end animal suffering is to go vegan - or at least take steps in that direction. The sad life of most British pigs is one of confinement, mutilation and early death. Close Britain's pig slums! All farmed animals end their life at the abattoir. Read why there is no such thing as humane slaughter. The UK leather industry is worth billions of pounds each year. An animal's skin makes up 7-10 per cent of his or her total worth. Beef cows are bred simply to eat, get big and die. They gain weight quickly and are ready for slaughter at only 11 to 12 months old. Crocodiles are taken from the wild and confined in an unnatural habitat for the rest of their lives. Just for a novelty meat. Britain's favourite wild bird has been forced out of the ponds and crammed in their thousands into factory farms. No water, no life! Global warming, water pollution and many more environmental issues are closely tied to the food we eat. Most animals bred for meat in Britain are factory farmed. Find out why we say Cruel Britannia. Some people who give up eating meat continue to eat fish in the belief that it is less cruel. Nothing could be further from the truth. Think goat's milk is kinder? Think again. More mutilations, early death and suffering. Viva!'s campaign has helped to save 70,000 annually. Every year, 30,000 'meat' horses leave Poland for Italian abattoirs. Shot in their thousands in the Outback for a 'novelty' meat. Baby joeys are pulled from their dying mothers' pouches and killed. In Britain ostriches are penned up, their eggs taken away and their chicks killed at one year old. Reindeer are gentle animals, increasingly subjected to - and stressed by - modern herding methods. The free-range nature of these farmed animals hides a litany of suffering. This cruel trade in live turtles is not only an animal welfare nightmare, it is also an ecological disaster. Each year wild boar are hunted and shot. Boars are actually, for the most part, gentle and inquisitive.
During 2012-13 the Ubuntu Network worked with member institutions to package some examples of good practice from initiatives aimed at integrating Development Education into the post-primary curriculum. These examples of good practice have been packaged in multimedia format in order to make them available on the Ubuntu Network website and thus to make them accessible to all stakeholders, including teacher educators, teachers, and student teachers. Their purpose is to demonstrate the Development Education work that individual member institutions are engaging in and to help make good practice approaches transferable between member institutions. This IPP presents some theory of dialogic teaching and demonstrates how this approach was used in practice to integrate development themes in the teaching of English Pedagogy with a group of 3rd Year undergraduate students in the Dept of Education and Professional Studies at the University of Limerick. This approach to using talk in the classroom provides particular opportunities for Development Education as it supports students in exploring the perspectives of others in the world. This IPP presents a pedagogical approach for using photographs as a gateway to exploring aspects of Development Education, particularly in exploring students' deeper understanding of other cultures as well as concepts such as justice/injustice, equality/inequality, sustainability, and development/underdevelopment. This IPP presents a model for using storytelling as a pedagogical approach involving collaborative reflective dialogue to address development themes in the classroom. This approach takes the view that stories are a socially engaging part of life and have opportunities inherent in them for engaging minds and prompting reflective thinking. IPP 4: Using Documentary and Guest Speakers as mechanisms to explore development issues. This IPP demonstrates how the RTE documentary, Peter McVerry: A View from the Basement was used to explore the issue of homelessness with student teachers. It uses a dialogic approach, supporting the students to speak about aspects of the documentary that impacted them most and crucially how these are relevant to them as emerging teachers. It provides an insight into students’ reflections through the questions they posed to guest speaker, Fr. Peter McVerry.
Title: A Generall Map of Europe. A striking dark impression of this scarce map of Europe. The map features an elaborate heraldic cartouche and many sailing ships and sea monsters. Blome's maps, because of their rarity and importance in the history of English cartography, are essential items for regional collectors. Blome first began engraving maps for his Geographical Description Of The Four Parts Of The World in 1667. The completed volume was in small folio, and contained 24 maps (plus one duplicated), engraved by Francis Lamb, Thomas Burnford and Wenceslas Hollar. Blome's principal handicap in the production of the atlas was the lack of a domestic mapmaking environment comparable with that in Europe. Also, to finance his work, he took on subscribers in exchange for a promise to add their coats of arms to certain maps. In later editions, if the renewal fee was not paid, Blome added a different subscriber's coat of arms, leading to multiple images on various editions of the same map. Condition is very good, with narrow margins as issued and a couple of repaired worm holes at the centerfold.
People waiting orderly and patiently in a queue, wherever and whenever, is known to be a quintessentially British thing. I could bear witness that such queues were living evidence of British civility: no one, or very few Brits, appeared to be 'disgusted' at having to wait in a queue, as often happens in Romania. Since the days when food was rationed, an entire 'culture of queuing' developed in Britain, with an array of posters reminding people where to wait in order to be served. Apart from these, specially designed pieces of urban furniture try to manage the crowds. I have already given an example, and here's another one - a bus shelter where passengers can wait in line on one of the windy bridges of Edinburgh. However, it seems that this example of British civility is on the wane, as Brits are less and less willing to queue, even if this means renouncing the pleasure of shopping. In 2004, the average time a Briton was willing to queue in a high street shop was five minutes. That has since dropped to two minutes, while 51% of Brits won't even enter a store where they notice a queue. It seems impatience grew as online shopping increased. I find queuing to be a very nice British feature :). I see it as a demonstration of respect towards other people and myself. But it seems that the word 'impatience' is too general (i.e. it may lead to different actions: neglecting other people in an attempt to be served earlier, or not wasting time in this particular queue but going to another shop where generally the same goods can be bought). From my point of view, the British impatience is due to the high cost of time, and it's always a pity to waste it :). I did not mean that the article is wrong; on the contrary, I found it very interesting (sorry, I'm always told that I don't express myself clearly enough). I'll try again ;). I think that the phrase 'losing patience' is a very strong claim which can be interpreted differently :). I agree that the syntagm 'losing patience' may have different interpretations, but I didn't want to have this 'politically correct' answer from you. What I wanted to know was whether - given your personal experience in Britain - you find that the Brits are more impatient than they were described to be. How patient are the Brits (and foreigners settled in the UK) compared to the nations living in Kyrgyzstan, for instance?
They are available for both the single-phase and the three-phase network. Another headache is the power outages associated with planned maintenance, repair of system elements, or accidents. We have already mentioned that a break in the power supply can stretch out over a whole day. Such 'calm' can deliver serious inconvenience to town residents and country dwellers alike. Several questions immediately arise, including how to keep products fresh or how to draw water from a well. There are several possible solutions. For example, you can dig a deep cellar, or use equipment operating from a different source: pumps, kerosene lamps, candles, etc. True, the operation of such a kit would be economically unjustified and even uncomfortable. It is best to buy an independent power supply based on an internal combustion engine. It does not depend on the climatic conditions of the terrain or on the weather, is compact, and offers plenty of power. Generators are divided into gasoline and diesel types, and the choice of motor depends on the intensity of use. Diesel generators usually have a longer service life; they are better suited for long periods of work and, moreover, are cheaper to run. Petrol units cost much less, so if the generator will only see short-term use, that is, serve as an emergency source of electricity, it is better to buy one of those. A generator is chosen in the same way as a voltage stabilizer: first determine the total power of all consumers switched on at the same time, then select a unit based on this value. As an example, consider sizing a unit for emergency lighting, a refrigerator and a TV (a rough sizing sketch follows at the end of this section). Certification of management systems is a procedure that gives a company a significant advantage over those businesses that have not been certified. First of all, certification of management systems enhances the credibility of the company as a party to various business relationships. Management systems certification is needed by companies entering various competitive tenders. An enterprise with a certified management system undergoes far fewer inspections by government officials. Management systems certification also allows businesses to enjoy benefits in lending and insurance. But equally important is the fact that, in preparing for certification, the operation of the enterprise is subjected to careful analysis that touches every area of the business, which contributes to further improving overall management and performance. Management systems certification can take place under several standards. The most popular is certification of quality management systems. The requirements contained in this standard are aimed at creating a management system that will constantly maintain a high level of quality in products and services, create an effective and transparent system of governance, reduce costs and improve consumer confidence. In addition, there is environmental management system certification, which ensures environmental safety both for the environment and the people involved in the production process, and for potential customers. Certification of management systems for occupational health and safety allows you to control production factors posing a risk to life and health, and to manage them. Management systems certification is conducted by organizations accredited by the Federal Agency for Technical Regulation and Metrology.
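Here is that rough sizing sketch as a minimal Python illustration. The wattages and the motor starting-surge factor are illustrative assumptions only; real ratings must be taken from the appliance nameplates.

appliances = {
    # name: (running watts, starting surge multiplier) - assumed example values
    "emergency lighting": (200, 1.0),   # resistive load, no starting surge
    "refrigerator":       (300, 3.0),   # induction motor: large starting surge
    "tv":                 (100, 1.0),
}

running = sum(watts for watts, _ in appliances.values())
# Worst case at switch-on: all loads running plus the single largest surge.
largest_surge = max(watts * (factor - 1) for watts, factor in appliances.values())
required = (running + largest_surge) * 1.2   # ~20% headroom, a common rule of thumb

print(f"running load: {running} W")
print(f"peak at start-up: {running + largest_surge:.0f} W")
print(f"suggested generator rating: at least {required:.0f} W")

With these assumed figures the running load is 600 W, the start-up peak is about 1200 W, and a unit rated around 1.5 kW would be suggested.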
We have been asked in the past: what is green cleaning? Can it really help protect us and the environment? Do the products work as well as mainstream cleaners? Implementing green cleaning products and services would not only secure the trust of future clients who prefer these services, but would also help those who ask these questions feel at ease trying a healthier and safer way to clean. The safety of our staff and clients is a priority. Knowing the product you are using, how to use it effectively, and showing the client that not only does it work, it is also better for their health, allows less informed clients to gain a new understanding of the toxins that are emitted through the use of most mainstream brands and the health effects they could have on their own bodies. To implement green cleaning in our business we must be knowledgeable about the products we are using: which product is for which type of cleaning job, and whether (on the commercial product side) they really are green cleaning products. We must know which chemicals are in the products in order to weed out those that are not truly green. There are also natural herb and oil cleaners that would be a true green cleaning investment, without chemicals at all. Offer the green cleaning services to existing clients and get their feedback on what they liked and didn't like, to make adjustments and improvements to the products you use and how you use them. It is a safer, easy-to-budget and effective way to help everyone around you towards a healthier, more manageable way to clean. A green cleaning program shines with plenty of benefits. Not only is the environment affected by toxic chemicals, but human health is at risk as well. "Green cleaning", or the use of environmentally friendly cleaning methods, may help reduce the great amount of toxic chemicals which harm humans and the environment daily. Having a green cleaning program is highly beneficial. Making the world a better place is truly satisfying; if everybody just makes a small change it has a big impact on the earth. To start, reducing resource usage and pollution can bring acknowledgment from the community. Green cleaning improves self-esteem and efficiency in residents. Improved health means fewer sick days taken by employees. Liability from worker safety issues is reduced too. You see, green cleaning can benefit both human health and the environment. Plus, the benefits are increasing, with more eco-friendly products now available. I've spoken before about the benefits of green cleaning for the people around you. Your customers, the people around them, and their overall environment enjoy cleaner air, not burdened by chemicals from conventional cleaning tools. The environment gets a little more breathing room, and your employees enjoy good working morale and better health as well. It's easy to see how these benefits would accrue to me as an individual, as well as to my business. I've mentioned how offering green cleaning adds a selling point to your brand, because of the benefits I mentioned earlier. The satisfaction of knowing you're helping conserve the ecosystem, being responsible and practically adept, helps me find more pride in the work. Natural green cleaning components are a cost-effective way of providing cleaning solutions in several cases. Did you know an effective furniture cleaner can be made up of a mixture of vinegar, olive oil, and lemon juice? These are common household items you can use that have no harmful effect on the environment you apply them to.
As a company, the community that I service has the satisfaction that I'm providing an effective, elegant, but also eco-friendly solution to their problems. Green cleaning is not meant to lower service quality, but rather to improve it. The title of a previous post was "There's no excuse not to hire green cleaning services", which I wholeheartedly believe. If companies are going out of their way to provide healthier, cost-effective, and quality solutions for their clients, all the more reason to consider them. Cleaning companies that green clean use products that have less impact on the environment: low toxicity, more biodegradability, low volatile organic compound (VOC) content, and reduced packaging. You will find that the advantage of using a cleaning company that adheres to green standards is a lower level of harmful exposure for your building occupants. It improves indoor air quality while still delivering effective building cleanliness. This, along with less, and recyclable, packaging also contributes to an all-around better environment! In your search for a green cleaning company, use the IJCSA Green Cleaner Directory; there you will find a reputable and professional green cleaning service provider. Being green cleaning certified matters to Mainstay Cleaning Services; it shows a higher level of understanding and caring. Being certified goes beyond earning a certification: it shows our commitment to learning long-lasting values that help us meet or beat customer expectations, and shows our commitment to the safety of our workers and building occupants. Being green cleaning certified holds Mainstay Cleaning Services to a higher standard than non-certified cleaning companies by ensuring we only use cleaning agents that meet the standards of containing low allergens, being non-caustic, non-toxic, and having 100% biodegradable agents. In addition to cleaning agents, we will also rely on high-performance, energy-efficient cleaning equipment, microfiber cloths, and the use of recycled paper products. The largest benefit of having a green cleaning program is to reduce everyone's exposure to toxic chemicals (EPA, 2018, February 13). By using safe and approved cleaning agents we can safely work towards controlling the spread of germs and disease by focusing on problem areas and properly applying disinfectants; we can also help improve overall indoor air quality by reducing airborne dust and chemical gases; and we can help to reduce asthma attacks by keeping dust, pollens, and chemical allergens at bay. Equally important, being green certified, Mainstay Cleaning Services will be partnering with our clients in maintaining a cleaner and healthier community. Oftentimes our clients not only work in a certain area, but also live there as well. By using cleaning agents that are non-toxic and 100% biodegradable we are being good stewards of the environment, thus ensuring that we leave this planet to our children in better shape than when we received it. By implementing high-efficiency green cleaning programs, overall cleaning costs can be reduced, therefore improving everyone's bottom line. I feel that having the option of a green cleaning program is very beneficial. Some people, even though they have an idea about green cleaning, may not want it. In my experience a lot of people are not educated about how this program can be effective and leave a fresh clean scent.
So I would get the attention of the people I do business with and, if they are open to it, come to a consensus on what green cleaning means to them, and then continue to educate. Start by putting a plan together to service their needs. I would have a simple agreement to "Do Green," which would include: walk-throughs to observe and audit the surroundings; establishing step-by-step processes tied to the goals of the audits; assessing what is needed and obtaining the green cleaning equipment that has to be utilized; acquiring the needed educational materials and training the cleaning crew to stay current on green cleaning processes; and evaluating and monitoring by staying accountable to the program and to the key people involved, which would vary depending on the market you are servicing. This came from the article "Implementing A Green Cleaning Program." Carpet cleaning is very important and touches many different aspects of our lives. The carpet in your home can affect not only the quality of your own life but also the lives of your children. Having a pet can cause major damage to a carpet: pets can leave bad odors as well as stains from urine, and can even infest the carpet with fleas. Many home remedies involve steam cleaning and vacuuming, and some products you can use are salt and borax. It is very important to quarantine the animal before cleaning the infested area. Having a safe and clean home will make you want to help others improve the lives around them. Please refer to the Carpet Cleaning Directory to find the right company for you. In my cleaning business I plan to target schools. Schools can be among the dirtiest places around; the fact that schools house so many students means the students are susceptible to the germs around them. Once one student is infected, it spreads around the whole school and you have an outbreak. Implementing carpet cleaning in my business is very important because there are many uses for it in our schools. Many teachers in the teachers' lounge drink coffee, and there are bound to be coffee stains on the carpet. The materials used at schools include inks and paints, and I am able to clean these out of the carpet. In many high schools discarded gum has become a major problem: if it doesn't end up under a desk it will end up on the carpet, which is why my services are needed. Lastly, one of the most important rooms to clean in a school is the special education room. With students who are unable to take care of themselves, there is a likelihood that stains such as blood, feces, and urine will get on the carpet. This can be cleaned by my carpet company as well.
When filling in the technical requirements on a drawing, it is often necessary to insert a symbol for diameter, plus-minus, degree, etc. Adding them through the symbol table is not fast, and in design work it is desirable to optimize routine tasks such as drawing annotation. For example, to insert a symbol from the symbol table you need to go through three menus and then pick the one you need out of a large number of fonts. Yes, the symbol table offers a huge choice, but for technical requirements the designer does not need all of it; only a handful of special symbols are needed. If you are placing a dimension, the diameter and degree symbols can be added quickly, but in notes it is not so simple: it takes quite a large number of mouse clicks to get to the required symbol. Fortunately, there is the ability to quickly enter special characters using ALT codes. To enter a special character, hold down ALT and type the character's numeric code on the numeric keypad. Of course, the list of ALT codes is much longer than the few shown below; the full list can be found online, for example on Wikipedia. I think memorizing a few numeric codes to speed up input will not be difficult - try it. You can also enter some special characters using internal SolidWorks codes such as <MOD-DIAM>, <MOD-DEG>, and <MOD-PM>, which correspond to diameter, degree, and plus-minus. But the ALT codes are shorter and easier to type.
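For reference, the three codes discussed above work as follows on a standard Windows layout (hold ALT and type the digits on the numeric keypad; these are the standard Windows-1252 codes): ALT+0176 gives ° (degree), ALT+0177 gives ± (plus-minus), and ALT+0216 gives Ø (capital O with stroke, commonly used as the diameter symbol).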
The differences between a stepper motor and a direct current (DC) motor can be explained by considering factors such as the nature of the motor's loop of operation (open-loop versus closed-loop), the method of control, the presence of brushes, the type of motion and displacement, the response time of the motor, and the effect of overloading. Response time: a stepper motor's response time is slow, whereas feedback control with a DC motor gives a much faster response.
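To make the open-loop versus closed-loop distinction concrete, here is a minimal illustrative sketch in Python (the step angle, gain, and the toy proportional controller are assumptions for demonstration only, not a real motor driver): a stepper is positioned open-loop by counting commanded steps, while a DC motor reaches a position closed-loop by correcting against measured feedback.

STEP_ANGLE = 1.8  # degrees per full step, a common stepper resolution

def stepper_position(steps_commanded):
    # Open loop: position is inferred from the command alone; there is no
    # feedback, so steps missed under overload would go undetected.
    return steps_commanded * STEP_ANGLE

def dc_motor_position(target_deg, position=0.0, gain=0.5, cycles=20):
    # Closed loop: each control cycle measures the remaining error and
    # corrects it, which is why feedback control responds faster.
    for _ in range(cycles):
        position += gain * (target_deg - position)
    return position

print(stepper_position(50))               # 90.0 degrees, assumed reached
print(round(dc_motor_position(90.0), 2))  # converges toward 90.0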
Should You Do Currency Exchange In Canada Or Before You Get There? Keeping an eye on currency exchange rates is crucial if you are heading into Canada and want to save money or stay within a set budget. One decision you might have to make is whether to do currency exchange in Canada or before you enter the country. The Canadian dollar isn't really used anywhere else in the world, so your local cash from back home won't be of much use here. The one exception is perhaps American currency, which might be accepted in stores or restaurants close to the border with the southern neighbor, but it can be dicey. Otherwise, you either need to bring Canadian currency in or get currency exchange in Canada done at an airport or bank. The exception, of course, is if you use debit or credit cards, which are all electronic anyway. Depending on where you come into Canada from, your currency might be either pegged or free-floating. A pegged currency is one whose government has decided its value will be determined relative to another nation's currency. For instance, some countries, like the Bahamas, peg their own currency to the American dollar so there is equal value. Free-floating currencies, on the other hand, are permitted to change in value relative to all the other currencies traded on the foreign exchange market. In terms of currency, you might also hear terms like real exchange rates and nominal exchange rates. Real exchange rates are the rates at which the products of one country are traded for the services and products of another country. A nominal exchange rate will be of more interest to you because it's the rate or value at which the currencies of two countries are traded. If you have any flexibility in the timing of your trip to Canada, you might want to look for seasonal variations in the exchange rates; some times of the year might be cheaper than others. For instance, even though Vancouver might always be flooded with tourists and visitors, the hordes of tourists from Japan might arrive in different months than those from Europe. Of course, your flexibility in timing might depend on the scope of your trip and where you are coming from. If you live near the border in the United States, a weekend trip into Canada could be very spontaneous, but if you're flying in from another continent or somewhere else far away, you might not have as much latitude in waiting for good currency rates. Of course, you don't always have to do currency exchange when you're in Canada. You might be able to go to your local bank at home on a day exchange rates are good, pick up Canadian cash in advance of your trip, and take it with you when you go. Wherever you change your currency, always keep an eye on the conversion rates. Most places charge a fee for currency exchange, and if you're exchanging larger amounts, flat fees are certainly preferable to percentage-based ones.
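A quick worked example shows why that is (the $5 flat fee and the 2.5% rate below are illustrative assumptions, not quotes from any real exchange service); in Python:

def exchange_fees(amount, flat_fee=5.00, pct_rate=0.025):
    # Return the (flat, percentage-based) fee for exchanging `amount` dollars.
    return flat_fee, amount * pct_rate

for amount in (100, 500, 2000):
    flat, pct = exchange_fees(amount)
    print(f"${amount}: flat fee ${flat:.2f} vs. 2.5% fee ${pct:.2f}")

# At $100, the 2.5% fee ($2.50) beats the flat fee ($5.00);
# at $2000, the flat fee ($5.00) easily beats the 2.5% fee ($50.00).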
If you've read some of our other blog posts, you already know how great CBD is for you and your health. The plant molecule works powerfully via the endocannabinoid system to regulate mood, energy balance, and metabolism. With this rebalancing comes relief from many common diseases. Take CBD consistently and you'll probably notice reductions in everything from arthritis to anxiety, from autoimmune diseases to cancer - that is, if you have any of those ailments. But keep in mind that CBD's not just for people. Even if you're totally healthy, there's a good chance that your pet has something that would respond well to the plant compound. Why? As it turns out, CBD's just as good for pets as it is for people. Many of us humans struggle with disease because we subject ourselves to a lifestyle at odds with our ideal environment. Your pet's health status is no different. Today, our pets are more likely than ever to suffer from anxiety, obesity, and even mental health disorders. Not good! CBD could be the solution. After all, your pet has an endocannabinoid system (ECS) that reacts to CBD just like yours does. And while their ECS might be slightly different from yours - concentrated more in the brain than in the surrounding nervous system - it still functions to maintain homeostasis, or balance, within their body. This biological reality has big implications for your pet's health. Just as with you, the stressors of modern life can leave your pet prone to low endocannabinoid levels, something that's clinically called CECDS, or clinical endocannabinoid deficiency syndrome. Because eCBs are critical cellular signalers, being deficient in them is arguably worse than being deficient in vitamin C or vitamin D. The endocannabinoid system, after all, is what actually controls the enzymatic reactions that most vitamins and cofactors 'feed'. These conditions are growing increasingly common in our pets - but they may not have to be. Nature's given us two types of cannabinoids to help maintain health: the eCBs we've been talking about, and phytocannabinoids from plants. If your pet is deficient in one type, the best thing they can do is supplement with the other. The primary cannabinoid in the hemp plant is - you guessed it - CBD. Short for cannabidiol, CBD is a fat-soluble organic molecule in the terpene family. If you've ever used essential oils, you've taken something similar. CBD can be thought of almost like the 'essential oil' of hemp. But it's even more diverse than conventional EOs; most CBD products contain a huge spectrum of over a hundred active compounds. These compounds synergize and make CBD's health benefits even more powerful. These products are generally referred to as "full spectrum" CBD - that's the kind to look for. Thankfully, the CBD industry has caught on to this concept, and it's now easier than ever to find premium CBD products for your pet. Our hope is that the remainder of this guide will give you the confidence and understanding you need to select the perfect pet product. As we said earlier, "full spectrum" hemp products, which contain as many of hemp's active ingredients as possible, are usually best. In an ideal world, you'd get your vitamin C from broccoli, not a capsule... right? Same idea here. But don't worry if you're not finding a full spectrum product; even less advanced CBD isolate products are powerful. There are many different ways your pet can take their CBD, too.
Pets with IBS or other gut health issues may prefer CBD-infused edibles, which deliver the compound directly to the problem area intact. Pets with more generalized conditions, on the other hand, will probably do best on CBD oil, which is more bioavailable. The most important thing is that your pet takes CBD consistently - so feel free to select whatever delivery method makes this easiest! At King CBD Company, we carefully vet (no pun intended) every product carried in our store. So, now that you've found a quality product, the next thing to do is figure out your pet's optimal dosing. In general, "start low and go slow" is an excellent rule of thumb for hemp. You might start off giving your pet ~1 milligram of CBD per 10 pounds of bodyweight, and slowly ramp things up from there. If you want to get even more advanced, consider starting a dosing journal to keep track of your pet's progress. It's not uncommon to increase the dose by a few milligrams of CBD per week; a higher dose might actually be needed in some cases. In general, the more your pet is struggling, the more CBD they'll need to fully overcome their health challenges. Use your best judgment, but don't be afraid of giving your pet too much - CBD is virtually impossible to overdose on. Modern science has confirmed what we as pet owners and lovers have already known for years: having a companion animal is good for one's health. We benefit from the pets in our lives in so many ways - emotionally, physically, even spiritually. Why not put effort and intention into benefiting them?
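As a minimal sketch of that dosing rule of thumb in Python (the 2 mg weekly increase is an illustrative assumption, and none of this is veterinary advice):

def starting_dose_mg(weight_lb):
    # ~1 mg of CBD per 10 lb of bodyweight, per the rule of thumb above
    return weight_lb / 10.0

def ramped_dose_mg(weight_lb, weeks, ramp_mg_per_week=2.0):
    # "Start low and go slow": add a small amount each week as needed
    return starting_dose_mg(weight_lb) + weeks * ramp_mg_per_week

print(starting_dose_mg(50))   # 5.0 mg starting dose for a 50 lb dog
print(ramped_dose_mg(50, 3))  # 11.0 mg after three weekly increases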
He is concentrating on a book he is holding in his hands. In the foreground you can see water; it is the mouth of the Hudson River. At the top of the picture there is the sky, a blue sky without any clouds. The scene takes place between these two parts. On the left side of the picture stands, on Liberty Island, the famous statue. On the right of the picture the Twin Towers are in flames. The wind is blowing the smoke towards the right side of the picture. The second picture is a chart. It is a study of the variations of temperature over the past 2,000 years. The numbers 0, 200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000, written under the horizontal line at the bottom of the picture, are years. 0 is in the bottom left-hand corner. On the left side of the picture the numbers -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8 are written along a vertical line. When the graph is under this line the weather is cooler than usual; when the graph is above the line the weather is warmer than usual. The picture shows that the Earth was warmer than today during the Middle Ages, and it was cooler between the year 1400 and the year 1900. The period between 1400 and 1800 is called the 'Little Ice Age'. We can also see on the picture that the Vikings arrived in Greenland during the warm period, in the year 1000.
The rise in flex applications across all industries, from medical to automotive, aerospace and military uses, means more opportunity for material suppliers to innovate and meet demand. Here's what industry expert Chris Hunrath has to share, from general guidelines for designing circuits unique to flex, to materials that can be autoclaved over and over. Listen in to this week's OnTrack expert to learn about flex and material sets. General guidelines for designing circuits unique to flex: In general, avoid circuits making turns or bends in the bend/flex area - don't make the circuits go in different directions there, and also avoid plated holes in those areas. From a stackup standpoint, balance the construction. Thinner is usually better. Look for opportunities for cracking at the bend point. Cross-hatch ground planes have multiple advantages. Pyralux HT, DuPont - a new product with unbelievable thermal performance: a high continuous operating temperature. Imagine a flex circuit that can be autoclaved over and over. Hi everyone, this is Judy Warner with the Altium OnTrack Podcast. Thanks again for joining us. Today we have another incredible subject matter expert that you'll be familiar with because we've had him here before, which is Chris Hunrath from Insulectro, and we're going to talk about flex and material sets and all kinds of really great things, so hang tight for that. Before we get going, please, I invite you to connect with me on LinkedIn - I share a lot of things there for designers and engineers - and on Twitter I'm @AltiumJudy, and Altium is on Facebook, Twitter and LinkedIn. Today Chris has some show and tell, and so I encourage you - Chris will take time to describe what he's showing, but if you want to see it, feel free to go to our YouTube channel at Altium, click under videos, and you'll see all our podcasts there. And you can click on this podcast and then you'll be able to visually see the materials and things that Chris is referring to today - and that's always available, by the way, on YouTube; we record simultaneously in video and in audio, so just know that's always an opportunity there for you. So Chris, welcome back, thank you. So at the end of last time's podcast we were talking about the rise in flex applications and the increasing amount of business Insulectro's doing around flex materials, with new materials going out, so I really wanted to take this opportunity to learn what is driving this uptick in flex, what applications are driving it, and what the cost and performance implications are. So let's just start with: what is driving this uptick in flex? So a lot of it's medical, you know, and the way electronics are finding their way into medical applications. Actually it's everything - it's automotive, it's aerospace, military - military has always been a big user of flex - but of course, you know, all the new inventions that are used in medical applications - certainly some devices are implantable, and that's something that's not new, but then we're seeing a lot of applications where instruments are being created that are used for surgeries and things, and they use flex circuits, and that's because you can make things very small, which is always an advantage when it comes to those applications. And we're even seeing some applications where the products are reused. They're being sterilized, autoclaved, what have you, and then they're being reused. But lots of new techniques, lots of new devices being developed using flex.
Most people are familiar with traditional flex applications like your laptop screen - very often the interconnect between the main system and the screen is a flex circuit. You know the old flip phones all had flex circuits, and your inkjet printers had a dynamic flex circuit between the printhead and the actual motherboard in the printer. And actually, that's something I do want to point out: we describe flex applications in two main buckets. One is dynamic flex and the other is flex to install, and it's exactly what it sounds like. With flex to install, typically you're only bending the circuit once or twice to fit it into whatever it needs to go into, and then that's it. Whereas with dynamic flex, the part's flexed in use many, many, many times. I think something most people can relate to, because you can see it, is the flex inside copy machines, right - you can see that dynamic flex moving again and again. And so are the materials - the entire circuitry - rated to have X amount of dynamic motions for the life of it, or how does that work? -Well, that can get very complex. There are some good design guidelines out there by IPC and others, you know. Again, I always shout out to the board shops; some of them have good teams that help people choose the right construction, the right stack-up, to get the most bend cycles out of the device. Are those the two most common types of copper used in flex, by the way, Chris - rolled annealed and electrodeposited? Yeah - and yes, but unless you're dealing with very thin foils, rolled annealed is the most common. That's what we call 'RA foil' - the most common. Actually I have a sample here. This is some Pyralux clad. You can't see the dielectric inside, but it's got rolled annealed copper on both sides, and it can vary - you used to be limited to half ounce or 18 micron and thicker. So a little side note on foils: as you go thicker, it's harder to make electrodeposited foils because it's more plating time on the drum. With rolled annealed it's the opposite - thinner foils are harder to manufacture because you need more rolling passes to make the foil thinner and thinner and thinner. You used to be limited to 18 micron or half ounce; now we can get rolled annealed coppers thinner, down to 9 micron or quarter ounce. With rolled annealed, the structure is much better for flexing because the grain boundaries are platelet-type, overlapping grain boundaries running along the foil, which is better for bending. Electrodeposited foil grain boundaries run vertically, and if you bend it you can cleave the grain boundaries. It's not that electrodeposited foil doesn't work in flex, but you typically get more bend cycles out of rolled annealed. Okay, very good. That's something actually I didn't know, and it's something I've talked to my friend Tara Dunn, who's in flex, about - it's just something that's never come up, so I think that's kind of an interesting point. So, you mentioned military applications - because of my background, military was always SWaP, right: Size, Weight and Power - so are those the same types of things that drive the other applications? Obviously in smaller spaces we can fold things up on themselves and get them into smaller packaging. When you talk about dynamic flex, what other kinds of things drive the desire and the fit for flex? Now - depending on the design, whether it's stripline or microstrip, and whether or not you have in-plane shielding, it might be that every other one's a signal.
But still, the weight and size difference between having cables, right - which I'm holding up right now - versus having a flex circuit is huge, right. And in the case of medical, some of those traces can be as narrow as 20 micron, so you can fit a lot of circuitry into a very small space. And you know, it depends on the medical device. We see some of our customers build circuits that are very, very long and very, very narrow, and you can imagine how they're used in surgery and other medical applications. You might have twenty circuits on that part, but it's in a very, very, very small space. Oh, that totally makes sense. Now - just to be clear - 20 micron circuitry is not easy to do; it's doable, not easy, but certainly 50 microns is - most board shops can do that these days. And again, you can fit a lot of circuits in a small space, and of course they can flex, they can bend. But in the case of rigid flex, where you have a rigid part bridged with a flex part - and here's another example where you have this; it's not necessarily rigid flex, but you'd have components here and then a connector here - you're replacing all these cables, right, with this section. So that's how it drives weight and space, and even reliability: fewer interconnections tend to be more reliable, so that really helps. So flex has been growing quite a bit for us, for our business, and a lot of it's based on DuPont Kapton and DuPont Pyralux products. There's a B-stage system for laminating the different layers, and of course the core, or the clad material, has the foil on both sides, and then our customers will print and etch to whatever pattern they need and put those layers together as building blocks. Right, so let's talk a little bit about design for flex, since most folks listening here will be engineers or layout folks. What are some things that people need to keep in mind about designing these kinds of circuits that are sort of unique to flex? So there are a couple of good guides out there - both by IPC, and DuPont has flex manuals for different categories: whether it's multi-layer, single-sided or double-sided flex, they have some good guidelines. But in general, what you want to avoid is circuits making turns or bends in the bend area. So, for example, I'm going to use this one as an example again - a turn in that region concentrates the bending. And in general, from a stack-up standpoint, you want to try and balance the construction. Thinner is typically better. There are all kinds of iterations: if it's a multi-layer flex, there are loose leaf constructions where you wouldn't necessarily bond the different layers together in the flex or bend region - you'd have them not connected. A bookbinder system is another way to do it, where, depending on the direction of the bend, the layers that are on the outside of the bend are actually longer than the layers on the inside - and again, the fabricators that are skilled in that know how to space that and change the length of the circuit. But from a simpler, more general standpoint: thinner is typically better, and balanced constructions are typically better for flex. Well, balanced construction is always a good idea, I'm just saying - but I could see that, right. Because I think what you're saying, if I'm hearing you right, is you have to look for those opportunities for cracking, or stressing at the bend radius, because that makes sense, right.
Just from a physics standpoint it makes sense that things would want to give or pull, right? Right - when you bend a flex circuit, the other side compresses against it, and every circuit will fail at some point. It's a matter of how many cycles you get out of it before it fails. Right - how do you measure those cycles, by the way? Well, there are some standardized tests - there's an MIT bend test, and there's some other testing that's done to see how a particular material, or even a design or stack-up, performs when it's bent repeatedly until you get failure. And then you can rate the stack-up and/or the material. Where can you get that data? You mentioned IPC as a source - are there any other resources you could share that I could pass on to the listeners, where they could maybe look at some of these ratings? Yeah, actually, DuPont's website, the Pyralux website, has some data on that, and certainly some of the folks there could put your listeners in touch with some of the design guidelines. Oh, Jonathan Weldon - yeah, he's a great resource for that. So speaking of Jonathan Weldon, he's been working with HDPUG; they've been looking at shield layers for reference planes, and they've been looking at the difference between solid planes and cross-hatch systems. And so this is just a simple test circuit, a microstrip construction, where you have a reference plane on one side and your trace on the other. Imagine if it were a stripline construction and you had copper on both sides with your transmission line in the middle. One of the challenges with all PCBs, and especially with flex, is absorption of moisture, and then that moisture being released during assembly, causing delamination - and one of the things that you can do to mitigate that is to bake the parts. Well, if you have solid copper areas, baking does not work as well, because the moisture has to go around the copper - it can't go through it. So cross-hatch ground planes are great for two purposes. One is, it's a moisture egress for baking; the other advantage is it's actually better for flexibility - it makes the part more flexible. The downside is in high frequency applications - you can run into some issues. And one of the interesting things that Jonathan and company were looking at was the difference between a round opening and - what's typically used, it's... Kind of a diamond shape? Exactly, exactly - and really it's more of a square turned on its side, but yeah, the diamond shape versus the, you know... It's funny how circuit design is always in orthogonal patterns, but that's not necessarily the best way to go - and anyway, the round shape was better for signal performance. Oh, for the high speed applications? This is true, okay, alright. Yeah, so there's some interesting data on that, but I would recommend to a customer - depending on their frequency, bandwidth, bit rate, depending on what kind of design it is - that they look at using an open plane, which works basically as a screen, for lack of a better word, versus a solid plane, because the reliability goes way up. Okay, now you just made me think of something. Last time we talked, we were talking about prepregs and glass, being reinforced, right. When you're using adhesive systems for flex, I'm assuming they're non-reinforced? It's a more stable material though, so tell us a little bit about that - about the stability, the dimensional stability?
Yeah, so really in flex circuits the Kapton film, a polyimide film, because it's a thermoset, is acting like the fiberglass in your flex circuit. You don't have skew issues because there's no glass, so you don't have micro-Dk effects. Now, if you do have a cross-hatch plane, you will have a micro impedance effect, if you will. But that usually doesn't matter with differential pairs - again, depending on where you put the traces - and you don't have the fiberglass micro-Dk effect at all. Now, Kapton's interesting - it's very thermally stable, but it's not as mechanically strong as glass-reinforced laminate. So it tends to change more from mechanical distortion than it does from thermal. It's not shrinking like epoxies do when they cure. Certainly when you remove all the copper - and I actually have a piece here - this is a piece of Pyralux AP with all the copper etched off. This is 100 percent polyimide; it used to have copper cladding on it and the copper's been mostly etched off. You can see a little bit of copper left from the tape I used to run this through an etcher, but the material is pretty strong - though it can distort mechanically, more so than thermally. So again, this is kind of like the fiberglass in a regular PCB, and then you'd have B-stages of some sort to put all the layers together. So the actual substrate is creating the stability in the case of flex? Okay, that makes sense. It's a polyimide film; in the case of Pyralux, which is a DuPont branded flex material, it's based on Kapton film. Okay, so we talked about ground planes, we talked about where not to put things - is there any other sort of design-for-flex consideration you'd want to mention, anything rather commonplace? Okay, well, we'll try to track some of those down and put those in the show notes, because I think it would be really helpful to have something tangible. Something I remember learning from someone else is also teardropping pads? Yes. Is that something that you would recommend as well? Yeah, that's good for a couple of different reasons. One is that the more material that goes under the cover lay, again, helps mechanically support the pad. It's also important because - typically you don't put holes or pads into your bend area, but it could be an area where you concentrate bending. In other words, where you go from a trace to a pad, that's going to become a concentration of stress, right at the edge of the pad - and so if you do the teardrop, that distributes the stress over a larger area and helps prevent circuit cracking. But again, you would try and avoid that in your design - you wouldn't make that a bend area. And actually, speaking of rigid flex, one of the things that you would typically do is have the cover lay go into the rigid portion only 50 mils. -Okay. And then you would keep the cover lay and its adhesive out of the plated through-hole areas in the rigid portion of the rigid flex - that's also a 'keep out' region for plated through-holes, so you wouldn't want plated through-holes going through that region. So again, a lot of this stuff is spelled out in some of the manuals that you get from DuPont and others. Alright, I'll reach out to Jonathan, and you and I can scrounge up some things, and we'll make sure to include those here. The last thing I wanted to talk to you about - which I was just stunned by - is that you told me that DuPont has come out with a new material that has unbelievable thermal performance.
Can you tell us a little bit about that? -But instead of using acrylic or epoxy adhesives to bond the Kapton layers together, you would use this thermoplastic polyimide layer. It's got a very high melting point, and thermoplastics are already used in PCBs - people are familiar with PEI and LCP, those systems. The only way thermoplastics work in a PCB, or a reflow-assembled PCB, is to have a high melting point - otherwise it would melt at assembly. So this is a piece of the thermoplastic polyimide that DuPont manufactures - it's the HT bonding film. This could either be a cover lay or it could be an adhesive layer to make a multi-layer PCB. -But the nice thing about this is it has a 225 Celsius operating temperature, which is very, very high. What does that convert to in Fahrenheit? Oh gosh - 225 C, it's over 400 degrees Fahrenheit... hang on, 225 C - that's 437 Fahrenheit. -And that's a continuous operating temperature? Which is crazy, because some materials can take that heat for a little while, but not as a continuous operating temperature, right? Right - so most PCB materials that go through a reflow assembly, which is done at either 260 Celsius or 288 C depending on the type of solder work, can withstand that for a short period of time; most PCB materials survive that. It's the operating temperature that's the limit: most epoxy systems come in around 130 to 150 C maximum operating temperature. That's wild. So what are the applications where this will be exciting news? I was gonna ask you about that earlier - I don't really know what temps they autoclave at, but you mentioned before that medical applications autoclave to kill the bacteria; what's the normal temp of an autoclave, and how many times can you do that? So we have one customer that builds some parts that are autoclaved at 135 C, but it's with steam, and it's hard on circuits, it's hard on electronics. Yeah, it seems like that would be. But for HT it wouldn't be an issue, because you're nowhere near the melting point. Now, it will absorb some moisture, which could be removed with a bake, but in a lot of applications, if the assembly is already done, it doesn't really matter. There is some change in the transmission properties of the material when it absorbs some moisture - again, that can be removed with a bake - but that is one of the challenges with reusable medical devices: sterilization and how well the materials hold up, and HT would be good for that. The downside of HT is it does require a 600 degree lamination - Fahrenheit. Okay, well, there you go - so how many board shops have lamination presses that go up to that temp? So we took a look at our customer base, and it's not a lot of them. Some of our customers have lamination presses that are capable - they're rated that high - but they haven't been turned up that high for a long, long time. So it's funny - some of our customers have started making some HT, and the press might be 10 years old with weaker heaters; they turn it up to a higher temperature for the first time, they start popping heaters, and they have to go and replace them. But actually, we're seeing a trend though.
A lot of our customers are buying laminating equipment, and right now that's a whole 'nother story because lead times are way out on equipment in general, but what we're seeing is people making sure they have that high temperature capability, and it's not just for something like HT - it's for LCP and FEP as well. They have some good properties, electrical and signal properties. That's a big deal these days. Performance-wise they're very good. Right - they're harder to fabricate, but they do have some good properties, you know. Even - we talked about this last time - glass-reinforced PTFE materials, some of them require high lamination temperatures. -Wait, hold on - Teflon Kapton? Oh, okay. It's called 'TK' - it's a Pyralux product from DuPont, and so it has a core of Kapton to act as the XY stabilizer, but then it has a Teflon material on both sides, and again, this is a building block. But it's very low loss and very low Dk - a Dk of about two and a half with a very, very low loss. But unlike glass-reinforced Teflon systems, this has no fiberglass - so no skew, and no detrimental effect from the fiberglass. It's using the Kapton instead as the stabilizer, because if you had a piece of - I should have brought out a piece of Teflon - but PTFE films can easily be mechanically stretched. Yeah - one time when I was in the RF and microwave board space, I had the board shop I was working for take all the materials, like Rogers, Taconic, whatever - the 4000 series, 6000 series, 3000 series, all the way up to 5880 - and strip all the copper off. Because when you see them clad, they don't look that different from each other. But I'm like: here's Teflon - this is like a piece of rubber - and imagine heating that up, exposing it to hot aqueous processes. I think it was a good visual to help people understand how radically different these materials are, and when you start stripping off all the copper and you have fine lines and all that, then it's a whole different animal. The TK material - the core material - is nice because the Kapton layer does provide mechanical strength. Again though, the TK, instead of requiring 600 degree lamination, requires 550. So it's still a high temperature product, which requires the right press book, the right materials and lamination, and it also requires a press that's capable. And the other thing is the board shop needs to get accustomed to the dimensional changes during the lamination process with these materials. Again, a lot of it's mechanically driven, but you need to know how to work with it, so that's something I think the board shop needs to have experience with. Well, and I imagine that you're not going to see these materials outside of sort of high performance or high speed capable board shops? -I don't know if that's true - I guess I'm looking to you for an answer there, but it's an assumption I would make. Here's the interesting thing about AP: AP by itself is actually pretty good electrically. It's the adhesive layers you use that incur a lot of the loss. So then if you get into the thermoplastic systems that have better electrical performance, now you're getting into the temperature range. So it's one of those give-and-take situations, but you can mix and match the materials to some degree.
You could use, for instance, HT bonding film with AP clads; your operating temperature would default to the AP operating temperature, which is still pretty high at 180 °C, but electrically it's pretty good. You get away from the acrylic and the epoxy adhesives, which aren't great electrically in terms of loss and dielectric constant. So yeah, I think as board shops become better equipped with high temperature systems, you'll see a broader use of these materials. Yeah, if you wanted to get rid of skew completely, you could use a film-based system. Yeah, it was crazy - I mean, that makes sense, and I'm sure there are some challenges there, because I could tell they had to rigidize the bottom, or put some kind of carrier or something, because they didn't want it to flex quite that much - but they just stacked these film systems on top of each other, and I'm like, huh, I didn't know you could do that, but they were clearly doing it on a routine basis, so that was interesting. -And then use regular rigid prepreg as a bonding system - and the board's not - when it's all done, it's not flexible, it's rigid. I actually have a board here. Unfortunately it's single-sided, so it's kind of like a potato chip, because there's only one layer of copper and one layer of prepreg - but this is actually DuPont's AP product with Isola's Tachyon prepreg, and it's a spread glass prepreg. So you have the spread glass prepreg on one side and you've got the Pyralux AP on the other. So you minimize how much glass is in here, which really drops the amount of impact, or micro-Dk effect, which would lead to skew and other signal performance issues. So there are lots of different ways you could use the flex materials, even in a rigid design. Yeah, I did see that, and I was shocked - it's something I hadn't heard a lot about. Anyways, well, we're about out of time today, again. But thank you so much - every time I talk to you, I feel like I learn so, so much, and it's fascinating to me where the industry is going and what's happening with flex. It's exciting - it's really an enabler, right, these high temp products - so it's a really exciting time. We always break through one way or another; it's just interesting to see who gets it done. So it's very interesting to see what we're doing with flex. -We provide all these different building blocks to meet the customer's needs. So the best way I've heard the two described is: the difference between a geek and a nerd is that a geek is the one who gets things done. I would like to think I'm somebody who gets stuff done, so that would put me in the geek camp, but in any case. Alright, check: geek. And the second question I have for you: on a scale from one to ten, how weird are you? [Laughter] Oh gosh, I would say - five. I'm sorry, but if we're in this industry we're at least a 5 or above. I think we have to be a little wacky to do what we do. Okay, well, thanks - I appreciate it so much. And again, we were talking on the phone yesterday - we have more to cover, so I'm gonna for sure have you back again to talk about printed electronics, which is on the rise and which you know a lot about. And also I'm very excited to talk about - oh, there it is! Printed electronics - that's a whole other world of electronics. Wait, wait, wait - bring that back and tell our listeners what exactly that is.
So this was printed with a Zebra label printer - and no changes to the machine, by the way - but a special foil is put into the system where you normally put a roller with a pigment film, so instead of printing a black label you're printing metal foil. So yeah, it's kind of interesting. Yes, what is that for? Dude, you're still not answering my question here - what is that intended for? So I'm gonna use that for an antique stereo I have. I have an antique FM stereo - a tube radio, an old tube radio - and I'm going to use that as an antenna. I see - oh, see, definitely five-weird. I say I'm gonna make that matrix - instead of the hot-crazy matrix I'm gonna make the geeky-weird matrix - and so yeah, you're at least at a five-high and a geek. But anyway, printed electronics is pretty exciting, I mean, and again, it's all materials science based. As the materials get better, you're gonna be able to do more things: higher conductivity inks, higher temperature inks - I mean, there are all kinds of things you can do in that area. Typically the substrates are different - they're typically lower cost, lower temperature capable substrates - but you can make all kinds of things, so we'll get into it next time. Okay, we'll definitely do that. And the other thing I'm excited to talk to you about - because I know nothing about it - is paste interconnects, and you shared a little bit. So anyways, we have at least one or two more podcasts ahead of us, so for our listeners: stay tuned, and we'll make sure to share everything Chris has talked about today and hook you up with resources through DuPont, HDPUG, IPC, wherever we can find them - resources that will help you lay out a better flex and take onboard as much information as you can.
Comprehensive diagnosis of thyroid cancer involves a number of procedures and tests. Usually, the process of evaluating for thyroid cancer starts with finding a lump or nodule in your gland. You may find it or see it yourself, or, in some cases, your doctor may detect it during an exam. It's also fairly common for thyroid nodules to be discovered when you have X-rays of your head or neck for other purposes. Examining your neck can sometimes help you find lumps or enlargements that may point to thyroid conditions, including nodules, goiter, and thyroid cancer. You can do a test at home to help detect nodules, which, if noticed, should be brought to your doctor's attention for further evaluation. To underscore the importance of early detection, the American Association of Clinical Endocrinologists (AACE) encourages Americans to perform a simple self-exam they call the Thyroid Neck Check. While it is not conclusive and may not enable you to detect all nodules (most can't be seen or felt), those that are closer to the surface or larger may be found with this simple test. Take a sip of water and hold it in your mouth. Stretch your neck back and swallow the water. Look for an enlargement in your neck below your Adam's apple, above your collarbone. Feel the area to confirm an enlargement or bump. Again, this self-check does not replace an exam by a medical professional; a thorough examination by a physician is needed to diagnose or rule out thyroid cancer. Your doctor will likely first conduct a thorough physical exam. This exam should include palpation of your thyroid, where your doctor physically feels for enlargement and lumps in your thyroid gland and assesses the gland's size, asymmetry, and firmness. Your doctor will also look for any enlarged lymph nodes in your neck and the area around the gland. Keep in mind that thyroid nodules are very common, and most are benign (non-cancerous). According to the American Cancer Society, about two or three in 20 thyroid nodules are cancerous. There are a variety of tests and procedures that your doctor may use to diagnose thyroid cancer and rule out other thyroid conditions. Thyroid-stimulating hormone (TSH): Your doctor may check the TSH level in your blood to evaluate your thyroid's activity and test for hypothyroidism (underactive thyroid) or hyperthyroidism (overactive thyroid). This test's results can help your doctor determine which imaging tests to use to visualize your nodule. That said, with thyroid cancer, your TSH level is typically normal. T3 and T4: These are the main hormones that your thyroid makes. Your doctor may test your levels to check how your thyroid is functioning. Like TSH, these hormone levels are usually normal when you have thyroid cancer. Calcitonin: When medullary thyroid cancer is suspected, your doctor will typically test for high levels of calcitonin, as this can be an indicator of the disease. Thyroglobulin: The thyroid makes a protein called thyroglobulin that's then converted into T3 and T4. If you've already been treated for thyroid cancer and you've had a thyroidectomy, your doctor may check to make sure your cancer is gone or to see if it has come back by looking at your thyroglobulin level. Though this test can't diagnose cancer, it can be a marker for it. Since you no longer have a thyroid to make thyroglobulin, if there's more than a very low level in your blood, or if it rises after having been low, this may indicate cancer.
In this case, your doctor will likely do some other tests to verify and treat you accordingly. If your doctor thinks you may have thyroid cancer, you will need to have a biopsy to tell for sure. Thyroid nodules are typically biopsied using a needle in a procedure known as fine needle aspiration (FNA) biopsy. In some cases, your doctor will begin with this test, but some doctors may do blood and imaging tests first. An FNA is simple, safe, and performed in your doctor's office. During an FNA, your doctor will use a needle to remove, or aspirate, cells from the nodule. To ensure the needle goes into the nodule, your doctor may use ultrasound to guide the process and will likely take a number of samples from different places in the nodule. Once the cells are aspirated, they are examined under a microscope by another doctor called a pathologist to determine whether the nodule is malignant (thyroid cancer) or benign. Sometimes, however, the results of an FNA are "indeterminate," meaning that it's unclear whether the nodule is cancerous or not. In the case of indeterminate samples, the biopsy is usually repeated and/or genetic or molecular testing may be done. If it's indeterminate a second time, your doctor may consider a surgical biopsy or surgery to remove half of your thyroid gland, called a lobectomy. Both a surgical biopsy and a lobectomy require putting you to sleep with general anesthesia. In the case of the lobectomy, if you do have cancer, this is often both diagnostic and an early treatment step. However, you may eventually end up needing your entire thyroid removed, called a thyroidectomy. Thyroid nodules are common and most are benign (non-cancerous), but determining which ones are benign and which ones are cancerous can be a tricky process. This is why researchers have created various molecular (genetic) tests that are used on cell samples obtained from a thyroid nodule. These tests help your doctor decide whether the thyroid nodule is likely cancerous or not, which often impacts whether or not you will need to have thyroid surgery. The hope is that more unnecessary surgeries can be prevented. One tool, called the Afirma Thyroid FNA Analysis, is a molecular diagnostic test that measures gene expression patterns within the FNA sample to make a diagnosis of either "benign" or "suspicious for malignancy." If the analysis shows the nodule to be benign, then periodic follow-up and monitoring of the nodule is typically recommended (which is usual for benign nodules). If the nodule is suspicious for malignancy, your doctor can proceed with surgery. Research suggests that the Afirma test is best for ruling out cancer, meaning it has an excellent negative predictive value. Other tests include the ThyGenX and ThyroSeq tests. The ThyGenX test analyzes a cell sample for gene mutations and markers to assess for the risk of cancer. This test is particularly good for ruling in cancer, so it has an excellent positive predictive value. Even more refined, the ThyroSeq test is good at both ruling in and ruling out cancer. If you already had an FNA biopsy that found an indeterminate thyroid nodule and your doctor is recommending a thyroidectomy, you may be interested in having another FNA done with a doctor who uses one of these molecular tests. In the end, having a more conclusive result could potentially prevent unnecessary surgery. 
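To unpack the "predictive value" language above, here is a small worked example in Python (all numbers are made up purely for illustration; they are not performance figures for Afirma, ThyGenX, or ThyroSeq):

def predictive_values(sensitivity, specificity, prevalence):
    # Fractions of biopsied nodules falling into each outcome
    true_pos = sensitivity * prevalence
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    false_pos = (1 - specificity) * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)  # P(cancer | positive result)
    npv = true_neg / (true_neg + false_neg)  # P(no cancer | negative result)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.90, specificity=0.52, prevalence=0.25)
print(f"PPV: {ppv:.2f}, NPV: {npv:.2f}")  # PPV: 0.38, NPV: 0.94

A test with a high negative predictive value, like this hypothetical one, is good at ruling cancer out; a high positive predictive value is what makes a test good at ruling cancer in.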
Less commonly, if a thyroid nodule is close to your voice box, known as the larynx, a laryngoscopy may be performed to make sure it's not interfering with your vocal cords. You may also have a laryngoscopy if you're going to have surgery to remove part or all of your thyroid, to see if your vocal cords are moving the way they should be. This test involves inserting a lighted flexible tube to view your larynx at high magnification. A thyroid ultrasound can tell whether a nodule is a fluid-filled cyst or a mass of solid tissue, but it cannot determine if a nodule or lump is malignant. It can also tell how many nodules there are, as well as how big they are. As noted, ultrasound is also often used to help your doctor do a fine needle aspiration biopsy. In this nuclear scan, also known as a radioactive iodine uptake (RAI-U) scan, you're given a radioactive tracer dose either in pill form or as an injection, followed by the scan. Nodules that absorb more radioactive iodine are more visible on the scan. These are known as "hot nodules" and are more likely to be benign. The nodules that show less radioactivity are called "cold nodules" and can be either benign or cancerous. By itself, this scan can't diagnose thyroid cancer, but it works especially well in the diagnosis process if your thyroid has been removed or you have high levels of TSH. A computed tomography (CT) scan is a specialized type of X-ray that is sometimes used to evaluate the thyroid. A CT scan can't detect smaller nodules, but it may help detect and diagnose a goiter or larger thyroid nodules. It can also help determine the size and location of any thyroid cancer and whether or not it has spread to other areas. Similar to CT scans, an MRI can help detect enlargement in your thyroid gland, as well as tumors and tumor size. It can also be helpful in detecting the spread of tumors. The symptoms of thyroid cancer often indicate another thyroid issue rather than cancer, so your doctor will need to rule out these other thyroid problems while looking for the disease. Remember, a thyroid nodule is far more likely to be benign than cancerous. If you have a benign (non-cancerous) nodule, your doctor may decide to just keep an eye on it. This means that you'll need regular thyroid function tests and physical exams to check for any changes in how your thyroid is working. It's possible you will never need treatment at all if the nodule remains the same. If your nodule does get bigger, you will likely need another fine needle aspiration biopsy to see what's going on. Some doctors may start you on a medication that suppresses your thyroid from making too much hormone, such as Synthroid (levothyroxine). The point is to stop the nodule from getting any larger and perhaps even shrink it, but there isn't any clear research showing that this is always effective. Additionally, it may not be necessary to shrink small benign nodules that aren't causing any difficulty. If you're having problems breathing or swallowing, you will likely need to have the nodule surgically removed, even though it's non-cancerous. You will also need to have the nodule surgically removed if your test results come back as indeterminate or suspicious, so that it can be examined for cancer. A goiter is an enlargement of your thyroid that's typically painless and may be large enough to be seen or felt. Goiters can cause problems like difficulty swallowing or breathing, coughing, or hoarseness, or there may be no symptoms at all.
They can be diagnosed using many of the same tests and procedures as listed above. Treatment for a goiter depends on how large it is and what's causing it, but may involve simply watching it, medications, surgery, or using radioactive iodine to help make it smaller. Graves' disease is an immune system disorder that's one of the most common causes of hyperthyroidism, an overproduction of thyroid hormones. One of the main symptoms can be an enlarged thyroid, so your doctor will check you for Graves' disease using the same tests and procedures indicated for thyroid cancer diagnosis. Treatment for Graves' disease usually involves medication, radioactive iodine therapy, and potentially surgery. Other conditions that can cause the thyroid to produce too much hormone include toxic multinodular goiters, Plummer's disease, and toxic adenoma. These are treated the same way as Graves' disease with medication, radioactive iodine therapy, and surgery, and are diagnosed using the same tests and procedures listed above as well.
The Early Brown Stonefly hatch is one of the earliest hatches where you might get a chance to fish for rising trout on many trout streams in central and northern Wisconsin. The hatch occurs from mid-March into mid-April depending on the temperatures, winter snow amounts, and the stream location. Before the Wisconsin DNR started the early season catch & release, we never got a chance to experience this wonderful early season stonefly hatch. The small stonefly nymphs crawl out of the water along the shoreline to shuck their nymphal case and become winged adults. The stoneflies come back on the water to lay their eggs (ovipositing) and flutter and skate across the surface. Especially at midday and in the early afternoon you can find these bugs flying about and landing on the water, though it does depend on the seasonal weather. This can make for some great early season dry fly action to rising trout when you catch it right. The Early Brown Stonefly measures 5/8" long from its head to the end of its grayish-brown wings; the blackish body is only about 3/8" long. When the stoneflies are not fluttering on the surface, I use a small stonefly nymph pattern and fish it in the shallows. Beware: early in the season the trout are generally not found in the places they hold during the summer months. My preferred pattern when the stoneflies are on the water ovipositing is a low-riding dry fly tied on a size #14 dry fly hook with a deer hair wing that lies right in the film. I have heard of others doing well on parachute Adams patterns. Whatever you use, be sure to twitch the fly to get the trout's attention. The Early Brown Stonefly hatch is often confused with the much smaller Little Black Stonefly, which hatches before the Early Brown Stoneflies begin.
I am attending IBM IMPACT and this is a duplicate post for one running on the IBM IMPACT Blog. Business rules get everywhere – we have business rules in our user interfaces, in data quality, in business processes and more. But when organizations adopt a business rules management system they are focused on improving decision-making. A business rules management system like WebSphere Operational Decision Management lets us automate, improve and manage decision-making in our business processes and systems. To be successful with a BRMS, as with any technology, we need to be clear what we are going to do with it. We need requirements for our project which clearly specify what is needed and how that will add value to our business. If better decision making is our goal then our requirements specification process should ensure that we know which decisions we are trying to improve and have a good idea of the decision-making involved. Our requirements should include an understanding of what policies or regulations apply, what information is available, who makes the decision and who has a say in how the decision should be made and much more. In my client work I find that too many projects do not do a good job of this. Without good decision requirements these projects are likely to flounder around and fail to have a positive business impact. Traditional requirements techniques like use cases or requirements lists tend to identify the need for decision-making but are not suitable for specifying how decisions should be made. Simply capturing business rules in a spreadsheet or list focuses on details before structure and often results in a “big bucket of rules.” To be successful we need a new way to capture our requirements – a decision requirements model. Each decision is specifically identified and described with a precise, detailed question and a set of possible (allowed) answers. So a retention decision might be written as “Which of the available retention offers should be made to this customer if they call in to cancel their service?” with any of the defined marketing offers being allowed as an answer. Decision-making involves information and the most basic piece of a decision requirements model is an understanding of the data or information that must be available to make the decision at the business concept or business object level. To make a decision requires knowledge, know-how. This could be expertise or best practices in people’s heads, it could be policy or regulation, or it could be analytic insight like the results of data mining or a predictive analytic model. The model should identify what knowledge must be available to make any decision and what additional knowledge will help make a good decision. A single decision definition is not enough to specify a business rules project. Decision requirements models also describe how to go about making the decision, how to answer the question. Perhaps you have to decide how valuable the customer is, how likely they are to respond to a retention offer and which product category would be most appealing before you can pick a retention offer. This begins to identify the pieces of your decision-making and each piece can be identified as a decision, a sub-decision if you like. You can repeat this process, breaking down each piece of the decision-making into more granular decisions or questions until you understand it fully. 
This decision decomposition outlines how you want to make this decision, or how you want the people in your call center to make the decision, each time. It's not code or business rules, but it is precise. With this in hand you can look at the information and knowledge you identified and see if it still seems complete given your new understanding (it won't be), and you can see exactly where in the decision making each element comes into play. Finally, decision requirements need to include organizational, application and business context for the decision making. Understanding who decides how a decision should be made, as well as who has to make it, clarifies the organizational context of a decision. Understanding the business processes, events and systems that need a decision shows how your decision-making will need to be implemented, its application context. Understanding the business objectives and metrics that will be impacted gives you a performance context. In this example we are using the notation being adopted as part of the Decision Model and Notation standard, currently under development at the Object Management Group (both IBM and my company, Decision Management Solutions, are submitters on this standard). Decisions are shown as rectangles, with the solid arrows showing information requirements – decisions at the arrow end require the information that comes from an external information source (oval) or from the decision at the blunt end. Knowledge Sources, such as policy documents or regulations, are shown as documents, and the dashed lines show which decisions require which knowledge sources – these are known as authority requirements. With a decision requirements model in hand your business rules can be identified and described not as a single list but in terms of the rules for each of these sub-decisions. This focuses rule capture and completeness checking on a much more well-defined scope. You also already know what the possible actions for the rules can be (the answers allowed for the question you identified) and how these rules fit into the larger picture. You also know which organizations and external knowledge sources are likely to be relevant. The model also shows you who cares about the rules and will need to review them, as well as who will be maintaining them over time (because you know which part of the organization owns or makes each decision in it).
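To make the shape of such a model concrete, here is a minimal sketch in Python of the structure described above: decisions with a precise question and a set of allowed answers, information requirements, sub-decisions, and knowledge sources linked by authority requirements. All class and instance names are invented for illustration; real DMN tooling defines its own, much richer metamodel.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InformationSource:
    """A business-level input, e.g. 'Customer' or 'Billing History'."""
    name: str

@dataclass
class KnowledgeSource:
    """A policy, regulation, or analytic model a decision relies on."""
    name: str

@dataclass
class Decision:
    question: str                 # the precise, detailed question
    allowed_answers: List[str]    # the defined set of possible answers
    requires_info: List[InformationSource] = field(default_factory=list)       # information requirements
    requires_decisions: List["Decision"] = field(default_factory=list)         # sub-decisions
    authorities: List[KnowledgeSource] = field(default_factory=list)           # authority requirements

# Decompose the retention decision from the text into sub-decisions.
customer_value = Decision(
    question="How valuable is this customer?",
    allowed_answers=["High", "Medium", "Low"],
    requires_info=[InformationSource("Customer"), InformationSource("Billing History")],
)
response_likelihood = Decision(
    question="How likely is this customer to respond to a retention offer?",
    allowed_answers=["Likely", "Unlikely"],
    authorities=[KnowledgeSource("Churn propensity model")],
)
retention_offer = Decision(
    question=("Which of the available retention offers should be made to this "
              "customer if they call in to cancel their service?"),
    allowed_answers=["Offer A", "Offer B", "No offer"],
    requires_decisions=[customer_value, response_likelihood],
    authorities=[KnowledgeSource("Retention policy document")],
)
```

The point of the structure, as in the notation itself, is that rule capture can then proceed one well-scoped sub-decision at a time rather than as one big bucket of rules.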
We speak to Ride New Orleans about the transport situation in a city still affected by Hurricane Katrina, 13 years later. One humid morning on August 29, 2005, the people of New Orleans were overwhelmed by Hurricane Katrina, which struck along the U.S. Gulf Coast. Experts estimate that Katrina caused more than $100 billion in damages. As a city that is almost completely surrounded by water, New Orleans was at particular risk. Although half of New Orleans actually lies above sea level, its average elevation is about six feet below sea level. When Katrina arrived, it breached many of the city's unstable levees and drainage canals, washing away homes and municipal property alike. We speak with Executive Director of public transportation advocacy group Ride New Orleans, Alex Posorske, about the situation 13 years later, and how the group is working with the municipality to improve quality of life for residents of New Orleans. Katrina demonstrated how much stronger and better organized civil movements were in responding to disasters in comparison with governmental bodies – especially the federal government itself. With around 34,000 people rescued in New Orleans alone, many ordinary citizens commandeered boats, offered food and shelter, and did whatever else they could to help their neighbors. The Federal Emergency Management Agency (FEMA), on the other hand, took days to establish operations in New Orleans, and even then did not seem to have a sound plan of action, according to HISTORY.com. In the last couple of years, Ride New Orleans has been looking at how well the transit system can provide access to jobs in a reasonable amount of time. The researchers found that the average New Orleanian who has a car can reach 86 percent of the region's jobs in 30 minutes or less. If that same New Orleanian relied exclusively on transit, he/she could only reach 11 percent of the available jobs in 30 minutes or less. "If you don't have a car [in New Orleans], it's a real disadvantage," Posorske says. Ride New Orleans also approaches its campaign from a grassroots angle, organizing rallies and protest stands calling for better transit that is not "designed for danger," as one of the protest signs reads. On the third Saturday of every month, Ride New Orleans sits down with concerned riders to discuss their problems and how they want to see them fixed. The group also discusses concrete steps with stakeholders and the municipality to make improvements at every scale. New Orleans is divided by the Mississippi River, and New Orleanians have an active ferry service – another facet of the city's transit. Early last year, the city administration pushed through a new ferry terminal. But the design neglected the needs of New Orleans' riders: it offered neither cover from the elements nor a bridge for people to cross over the nearby railway track, which has six or seven trains running every day. Ride New Orleans raised these concerns to the city administration, but officials insisted on going forward with the project. "This is going to demonstrably mean a step backwards for riders in terms of comfort and convenience," Ride New Orleans argues, even as the group continues to protest the absence of facilities to protect riders and make life easier for the community. "What if you're trying to scurry to catch a connection on the other side of the ferry terminal and after you get off the boat, a train comes through and you miss the bus, and you sit there for another 30 minutes [to catch the next one]?" elaborates Posorske.
After much campaigning and rallying to pressure the city administration, officials finally gave in, agreeing to address riders' concerns and rework the Canal Street ferry terminal, with changes expected to finally take place in September. Posorske explains that, when he was visiting Chicago last November, he watched some 20 people board a bus at a time; contrary to his experience in his hometown, it didn't take them much time to pay the fare and board. "The CTA in Chicago has been doing a very interesting program with an off-board fare collection in one particular busy route in the Belmont Blue Line station," he says. Ride New Orleans dreams of pilot programs around better treatment for transit, such as temporarily dedicated lanes and off-board fare collection. According to the group, New Orleanians need projects that could durably improve reliability, reduce riders' travel times, and increase connectivity without the commitment of a multi-billion-dollar investment in new buses, new vehicles, and new transit lines.
The parts of a tornadic thunderstorm include the anvil at the top, the rain-free zone that is often near a funnel, the dropped-down wall cloud which produces the funnel, a shelf cloud that forms in a location where rain has cooled the air, and bulbous or cauliflower-shaped mammatus clouds. Original diagram by Dawn Adams; please do not use without including a link to this page or a citation crediting Dawn Adams and Tapestry Institute. The National Severe Storms Laboratory estimates that more than 800 tornadoes occur each year in the United States. They have been recorded in every state, as well as in many other countries. In the United States, tornadoes are particularly common in the Great Plains, from the Rocky Mountains to the Appalachians, and they most often form in the spring and early summer. Why and how tornadoes form is still not well understood. Often they form from rotating thunderstorms called mesocyclones. These form when winds in the bottom layer of the atmosphere just above the ground are moving in a different direction, and at a different speed, than winds in the layer directly above. Where the two layers touch each other, the conflicting winds "pull apart" or "shove around" the air at the boundary, until sometimes the air starts to roll over itself. This initial rotation is like that of a pencil rolled across a table with the palm of your hand. Sometimes, again for reasons not fully understood, this spinning movement is turned vertically — as if you stood the spinning pencil up on its tip — and if this type of rotation develops in a thunderstorm it is a mesocyclone. Tornadoes then form in several different ways, none of them well understood, from the rotating mesocyclone as the spinning air tightens and speeds up in small, localized regions. One thing that is thought to tip the horizontal rotation vertically is the rising hot air beneath the thunderstorm. Such an updraft fuels the storm's power. Because of the updrafts, strong winds on the ground that are rushing toward a thunderstorm are one of the indicators that severe weather may be imminent. The sky is also usually very dark, the color of a bad bruise, and the clouds commonly have a sickly greenish cast. Conditions that might spawn a tornado can be recognized long before things get that far, though, by anyone who studies the sky. Towering thunderheads are most likely to produce tornadoes. Heavy rain, hail, and lightning are usually associated with such storms. Tornadoes often form at the base of a wall cloud, which is a lowering of the underside of the thunderhead. Not all wall clouds produce tornadoes, but anyone who sees such signs — especially large hail — should pay close attention to local weather reports for storm watches and warnings. Thirty years ago, tornado warnings were issued when a spotter reported seeing a funnel touch the ground, and since rain and darkness often made it hard for spotters to see them, it was not uncommon for a tornado to surprise everyone in its initial path. Now Doppler radar is used to pinpoint tornado location and direction of travel with more reliability. Tornadoes vary greatly in strength. The Fujita scale estimates the power of a given tornado by assessing the kind of damage it has done to human-built structures. F-0 is the weakest rating, and F-5 the strongest.
Estimated wind speeds within the funnel itself are based upon what would be necessary to cause the amount of damage seen, and also upon measurements taken by scientists doing research such as NSSL's VORTEX2. Fortunately, most tornadoes are low-strength. Fewer than 2% of all tornadoes are F-4 or F-5 storms, but they take 80% of lives lost to tornadoes. Seven out of ten tornadoes are F-0 or F-1, and almost all the rest are F-2 and F-3. A newer Enhanced Fujita (EF) Scale is increasingly used to estimate a tornado's power, with an EF-3 tornado packing wind speeds of between 136 and 165 miles per hour. Like the original F scale, the EF scale is calculated based on the Degree of Damage (DOD) to structures, which means that the power of tornadoes in open country without structures usually cannot be estimated. Most tornadoes can be survived by following National Weather Service guidelines to go into an innermost room on the ground floor of a structure. What should you do if a tornado is nearby? At that moment, the safety of yourself and others in your household may depend on you making the right decision. What you do can even affect others in your community; motorists who blocked highways during the 1998 Oklahoma City tornadoes may have caused injuries and deaths that could otherwise have been avoided.
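Because the EF scale is at heart a banded classification, it can be illustrated with a short lookup. The Python sketch below is purely illustrative: the EF-3 band (136 to 165 mph) comes from the text above, while the other bands follow the National Weather Service's published Enhanced Fujita scale as I understand it; in practice, ratings are assigned from damage surveys, not direct wind measurements.

```python
# EF-scale bands in miles per hour (whole-mph boundaries, per the
# published Enhanced Fujita scale; the EF-3 band is from the text).
EF_BANDS = [
    (65, 85, "EF-0"),
    (86, 110, "EF-1"),
    (111, 135, "EF-2"),
    (136, 165, "EF-3"),
    (166, 200, "EF-4"),
]

def ef_rating(wind_mph: float) -> str:
    """Return the EF rating for an estimated wind speed in mph."""
    for low, high, rating in EF_BANDS:
        if low <= wind_mph <= high:
            return rating
    if wind_mph > 200:
        return "EF-5"
    return "below EF-0"  # winds too weak to rate as a tornado

print(ef_rating(150))  # -> "EF-3", consistent with the band quoted above
```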
Like many critics of the money mischief taking place under the auspices of the United States government and Federal Reserve banking system, I have called Federal Reserve notes IOUs. I was wrong. They are not IOUs. An IOU is a promise to pay. Federal Reserve notes promise nothing, although they were legitimate IOUs until November 1963. That's when they were stripped of the printed promise to redeem in lawful money. All that remained was the printed statement that the note was legal tender for public and private debts. It was no longer a promise to pay anything. Researchers have never been able to discover where the order came from to remove the promise to redeem in lawful money that made the Federal Reserve and U.S. notes legitimate IOUs. After the promise to redeem was removed in 1963, the paper notes became IOU-nothings. Economist Walter Spahr called them that. New York banker John Exter called them that. We all should call them that if we wish to be accurate. Although the paper notes are IOU-nothings, coins aren't. Dimes, quarter-dollars, half-dollars, and dollars prior to 1965 were made mostly from silver and were actual money, not promises. Even the base-metal coins of today are intrinsically more valuable than paper notes, and they stand on different legal footing than paper currency. Banks may acquire paper IOU-nothings for the price of printing, but they are obliged to acquire coins from the Treasury Department at face value. Not only did the Federal Reserve have to make its paper notes redeemable in lawful money until late 1963, so did the U.S. Treasury Department. U.S. notes and silver certificates did not pretend to be "money," either. Here is a five-dollar U.S. note from the 1950s stating that it was issued by the United States and was worth "Five Dollars payable to the bearer on demand." Obviously, if the certificate was an IOU for five silver dollars it could not simultaneously BE five dollars. All paper certificates and notes were IOUs. Money was metal coins. In fact, banker J.P. Morgan, who understood money better than most, said: "Gold is money. Nothing else." J.P. would have laughed in our faces if we called bonds, T-bills, certificates, notes, and other paper promises "money." He knew how to play the paper games better than most, but he was never foolish enough to think that a gold-backed note was actually gold itself. Time and habit pulled the wool over people's eyes, however, and from 1933 until the present time printed IOUs came to be thought of as money itself. If IOU-nothings aren't money, what IS? A great many self-appointed monetary historians on the Internet indict the Federal Reserve System as a collection of gangsters aiming to destroy us all. That's hardly the case. The banks were given their legendary power to create debt-based currency by the U.S. Congress. It was Congress that gave up its Constitution-mandated authority to maintain a system of honest money. If there's fraud going on among financial institutions, we're justified to pin the blame on Congress. Congress created the Federal Reserve and it can dismantle it. The Constitution says so.
Now providers can be sued for HIPAA violations related to breaches of protected health information. In enforcing HIPAA, the Department of Health and Human Services (HHS) has chosen to use a "carrot" rather than "stick" approach. Penalties have been given for major breaches, but aside from that, there is little financial skin in the game for providers. At least until now. When a provider wrongfully discloses protected health information, HIPAA does not provide patients with a legal remedy other than reporting the incident to HHS. But courts have begun to look at the issue differently, ruling, in some cases, that providers can be sued under state rules pertaining to privacy and negligence for breaches. "Courts are beginning to say that just because the federal government didn't give a remedy, it shouldn't preclude patients from bringing a suit in states," said Chad Eckhardt, a member in the regulated business group at Frost Brown Todd in Cincinnati, Ohio. It was a 2014 Connecticut Supreme Court decision that set a precedent allowing providers to be sued for HIPAA violations. A patient filed a lawsuit against her obstetrician when the provider mailed her medical records to a court in response to a subpoena related to a paternity suit filed by her ex. She was not informed of the subpoena by her provider, and she sued for negligence, negligent infliction of emotional distress, breach of contract, and negligent misrepresentation as to the safety of her records. Although originally dismissed, her case ended up at the state supreme court, which ruled that her case stated a claim for which relief may be granted and remanded it for trial. There are numerous torts for which individuals can seek redress for personal injury, but some are not suitable for filing lawsuits related to HIPAA violations. Two such torts are invasion of privacy and public disclosure of private facts, Eckhardt said. Plaintiffs have to prove damages. Those torts rarely result in physical damage, so plaintiffs have to prove mental or emotional distress. Courts, he said, are reluctant to provide a remedy for non-physical damage under torts. Negligence is another category that requires plaintiffs to prove damages. Under this tort, physicians can be considered negligent because they did not comply with a standard of conduct (HIPAA). "If the federal government says this is the minimum standard of confidentiality and you don't meet those minimum standards, you are negligent as a matter of fact," Eckhardt said. Breach of contract is another option for plaintiffs, though the damages are much less than with a tort, Eckhardt said. Some states, like Ohio and West Virginia, have also created torts specifically for the unauthorized disclosure of medical records. "More states are creating this tort for unauthorized release of records, and if they don't have one, courts are going to try to find a remedy for harm done if there is actual damage to an individual," Eckhardt said. A case out of Indiana was the first to show that employers can be held accountable for their staffs' HIPAA violations. A patient sued Walgreens and one of its pharmacists when she found out the pharmacist had looked up and released her medical records to the plaintiff's ex-boyfriend. The pharmacist was married to the woman's ex at the time, and provided him with her prescription information. The woman won $1.4 million in damages, holding Walgreens accountable for the employee's breach of confidentiality under HIPAA for reasons including negligent supervision.
Physicians need to ensure they are training all employees upon hiring them and annually thereafter, he said. Consistent training can help a provider prove they have not been negligent in supervision of their employees and reduce their liability. As part of training, the importance of caring for hyper-sensitive information like HIV status and mental health conditions should be emphasized. In addition, practices need to review office processes to determine where people can get tripped up. For example, if a subpoena is received, what should employees at each level do with the request?
A soft, white, metallic element. Calcium occurs naturally in the earth's crust at a concentration of 3.64%, which makes it the 5th most abundant element in the earth's crust. The pure metal was first isolated by Sir Humphry Davy in 1808. Calcium is found as salts, such as calcium carbonate (limestone, marble, chalk, etc.) and calcium sulfate (gypsum). It is an essential component in plants and animals. As a metal, calcium is harder than sodium but softer than aluminum or magnesium. It tarnishes on exposure to air. Metallic calcium is used in smelting metals as a getter to absorb oxygen, carbon dioxide, and sulfur gases. Its flame color is yellow-red. It reacts violently with water, alcohols and dilute acids to evolve hydrogen. It dissolves in ammonium hydroxide to form a blue solution.
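As a worked example of the water reaction mentioned above, the balanced equation (a standard chemistry result supplied here for illustration, not taken from the original text) is:

```latex
\mathrm{Ca} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{Ca(OH)_2} + \mathrm{H_2}\uparrow
```

One calcium atom displaces hydrogen from two water molecules, leaving calcium hydroxide in solution and evolving hydrogen gas.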
Written by Houghton professor Jon Arensen, this book follows the life of a man named Lado. He was born in Sudan in approximately 1920. He grew up living the traditional life of his Murle people: herding the goats, planting sorghum and hunting antelope with a spear. But Lado was different. Even as a young boy he wondered about the world around him. As he grew older he was increasingly confused by the different manifestations of the tribal god named Tammu. As a teenager he was captured in a raid and taken away as a slave. He was later adopted into the tribe that enslaved him. Under these conditions his questions about suffering and God became more intense. He was rescued by British troops and learned Arabic under the protection of the District Commissioner. Eventually he returned to his home at Boma as the official translator for the military. It was here that Lado met Kemerbong (Richard Lyth), a meeting that changed the rest of his life.
The Foundation estimates that more than half of all Kerries in the US are now bred by puppy mills and other marginal breeders. However, 74% (6-Sep-03 QOTW) of this web site's visitors have purchased their Kerry from a "reputable" breeder.

As reported by James Gorman ("Call of the Wild," New York Times, October 17, 2017, p. D1), researchers are trying to understand why wolves never grow up to be like dogs, no matter what fantasies some of today's dog trainers and pet food manufacturers try to suggest. Kathryn Lord and Elinor Karlsson, heading a team of researchers that splits its time between the University of Massachusetts Medical School and the Broad Institute in Cambridge, want to know why.

By the age of about 12 weeks the ears have risen from hanging down along the cheek almost to the jaw line, to folding over at about the skull top, with the tip pointing toward the outer corner of the eye. This is perfect: all that is wanted!

Spaying and neutering is an often-suggested remedy for various behavior problems. This article reviews the effects of spaying and neutering on behavior.
People with hearing and sight disabilities using screen readers and other assistive tech must be able to access content on government websites, but getting and staying compliant is a challenge. Updates for Section 508 accessibility legislation go into effect in January, creating new specifications for how federal agencies must make websites and other digital information channels navigable for users with disabilities, and experts say these requirements are poised to become the new standard for state and local governments as well. Section 508 is a 2001 amendment to the Workforce Rehabilitation Act of 1973, designed to help sweeping accessibility legislation keep pace with the rapidly evolving nature of technology. Early this year, lawmakers passed a long-awaited refresh of Section 508 that goes into effect Jan. 18. The exact updates are complex and nuanced, but at a basic level they stipulate that federal government websites must be accessible for people with hearing and sight disabilities using screen readers and other assistive tech. The requirements also note that content should be accessible for people with cognitive, language and learning disabilities, while requiring adherence to WCAG 2.0 standards, a set of guidelines used throughout the world. Advocates have praised the updates, while also noting that lawmakers must be diligent in continuing to make tweaks as new technologies become common. In an email conversation, Brian Charlson, the director of technology at the Carroll Center for the Blind, wrote that the Section 508 updates mark the government's most comprehensive tech accessibility legislation to date. "By incorporating the WCAG 2.0 guidelines into the 508 standards, we are getting the best and latest thinking of hundreds of accessibility experts all over the world, as well as harmonizing internationally with those countries who have also adopted the WCAG 2.0 guidelines in their accessibility efforts," wrote Charlson. Dave Schleppenbach, vice president of research and development with the accessibility advocacy group See Write Hear, said in a phone conversation that not being able to keep up with updates has been an ongoing problem in making tech accessible for users with disabilities. There is continual discord between people who create and edit content, the massive companies that issue updates to the most common Web browsers, and the much smaller companies that make assistive tech. What this means is that a city government, for example, makes a site compliant and in line with a set of specifications. That same week, however, Google or Microsoft might issue a Web browser update, which a smaller company that makes screen reading software does not account for, nullifying the city government's work to stay up-to-date with specifications and rendering its site unusable for the disabled. This can create a legal gray area because the agency followed specs and requirements, but the Web browser's discord with the screen reader means its website still isn't usable. Schleppenbach said this is, however, an issue companies in the accessibility space are aware of and working to correct. Another challenge is that people with disabilities are often unable to report problems with using a website, because reporting a problem typically requires navigating the very site they cannot use.
There is, however, guidance in this area available for municipal and state governments, which experts say may have some of their own localized laws to factor in, but are, for the most part, likely to take cues from the federal government and follow the requirements laid out by Section 508. When the government in Fresno County, Calif., for example, recently launched a new website, it did so with the help of Vision, a software and consulting company in the gov tech space, which helped the county train the 160 public servants who will be editing Web content with Section 508 accessibility in mind. This meant making sure photos with vital information got alt tags on the backend that could be deciphered by a screen reader, adding transcripts or subtitles to videos, and writing in a way that could be easily understood by users with learning disabilities, people older than 65, or parents who don't speak English and so have their children read and translate vital government information for them. "The whole concept for getting our content editors trained in more than just the actual content management system tool was new to us," said Daniel Moore, information technology manager for Fresno County. Prior to the redesign process, Fresno County did a quick scan of its old home page to gauge past accessibility consideration and found that about half of the images on the home page were missing tags for screen readers. To help local governments in similar situations, Vision is offering a free four-part Web series titled 18 Minutes to Get You Ready for January 18, which is open to all governments, not just Vision clients. Vision is far from the only company providing accessibility evaluations and trainings. See Write Hear, among other groups, also has a vast set of resources to help governments and other interested agencies become compliant.
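A first pass at the missing-alt-tag problem described above can even be automated. The following is a minimal sketch in Python, using only the standard library; the URL is a placeholder, and note that genuinely decorative images are permitted under WCAG 2.0 to carry an intentionally empty alt attribute, so the output is a list of candidates to review rather than confirmed violations.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Flags both absent and empty alt text; an empty alt may be
            # intentional for decorative images, so review each hit.
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "(no src)"))

# Example usage against a placeholder page URL.
html = urlopen("https://example.com/").read().decode("utf-8", errors="replace")
checker = AltTextChecker()
checker.feed(html)
for src in checker.missing_alt:
    print("Image possibly missing alt text:", src)
```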
Creativity can be viewed not just as a set of skills and strategies, but as an overarching metacognitive skill that integrates a range of subordinate generic skills. Key to developing creativity is engaging in a cycle of ideation, reflection and adjustment within a feedback-rich environment. Blogs have the ability to garner external comments that can prompt these processes. Case study research was undertaken to explore what forms of feedback promote metacognitive development and how those forms can best be elicited within a blog. Findings indicated that blog comments can motivate, provide information, enhance quality and promote reflection, and that a range of strategies can be applied in blogs to elicit the most valuable forms of feedback for creative development.
One of the most astonishing, yet least reported stories of our time is the rise of global education. Of course, that's not surprising. Big, complicated statistical narratives make shoddy clickbait for media companies trying to survive in the attention economy. And education reporting is tough. There are so many gaps, plenty of people still being left behind, and the debates on best practice are endless. But the overall trends are clear. In the 1970s, only half of the world's kids attended some form of schooling. Today, that number is 9 out of 10, and we're starting to see younger generations reach literacy levels of up to 100%. Enrolment in primary education is nearly universal in Eastern Asia and Northern Africa, and sub-Saharan Africa is catching up fast, with enrolment increasing from 62 million children in 1990 to 149 million children in 2015. We're starting to see the gender gap narrowing too. Last month, the World Economic Forum released a report showing the number of girls in school worldwide has increased by 5% in low- and middle-income countries in the last decade, meaning girls are finally reaching parity with boys. UNESCO says the global education gender gap will be closed worldwide in the next ten years. That's a big deal, because women who receive an education are less likely to contract infectious diseases or lose children, and more likely to become entrepreneurs, invest in their communities and empower other women. Just take a moment to think about this. We have never lived in a world where boys and girls receive the same amount of education. As we get closer to that milestone, we're going to start seeing a very different kind of global society. What's more, by the end of this decade most of those newly literate boys and girls are also going to have access to the internet. We've got a new generation coming of age in a second age of discovery, a second Renaissance that looks a lot like the first: new maps; new media; and a leap in health, wealth and education (alongside a whole lot of political and economic uncertainty). That generation is also growing up in a world where our ideas about education are changing. You often hear people say that our schooling systems were built for the 19th century, and are in urgent need of an update. That's starting to happen. In Germany and Scandinavia, schools are using outdoor education, with a focus on craftsmanship, community service, outdoor pursuits and physical skills. No longer do kids sit in rows while their teachers lecture; lessons are now collaborative. The system is geared towards improving communication, confidence, character and resilience rather than pushing kids through what have essentially become exam factories. In France, there's a school called 42, whose name comes from The Hitchhiker's Guide to the Galaxy. It has no teachers; students learn to deal with ambiguity, complexity and diversity. They're taught to understand that in the modern world there is not typically one right answer when you make decisions. There are just different shades of how correct you might be. In Kenya, Bridge International Academies runs the administration of 400 schools with more than 100,000 pupils entirely on tablets and smartphones. It is now expanding into India, Liberia, Nigeria, and Uganda, with varying degrees of success. In June this year, the US State Department partnered with online education platform Coursera to allow refugees from around the world to take all its courses for free and obtain certification.
There's obviously still a long way to go. One in every 10 primary-school-age children remains out of school, and an estimated 103 million youth around the world still cannot read and write. Being enrolled in school doesn't necessarily mean that children are learning well, either. And while technology helps provide students with access to learning tools and resources they didn't have before, it's not a silver bullet. The best education happens when you've got a good teacher and personalised instruction. The human connection is still the most important thing. All that being said, imagine what's possible in a world where everyone can read and write. A world where every person has access to all of humanity's combined knowledge. We're getting very close to that now thanks to the dedicated hard work of hundreds of thousands of educators over the last few decades. That's definitely something worth celebrating.
At age 70, a writer reflects on the so-called ‘American Century’—and the world it wrought. Seventy-three years ago, on February 17, 1941, as a second devastating global war approached, Henry Luce, the publisher of Time and Life magazines, called on his countrymen to “create the first great American Century.” Luce died in 1967 at age 69. Life, the pictorial magazine no home would have been without in my 1950s childhood, ceased to exist as a weekly in 1972 and as a monthly in 2000; Time, which launched his career as a media mogul, is still wobbling on, a shadow of its former self. No one today could claim that this is Time’s century, or the American Century, or perhaps anyone else’s. Even the greatest empires now seem to have shortened lifespans. The Soviet Century, after all, barely lasted seven decades. Of course, only the rarest among us live to be 100, which means that at 70, like Time, I’m undoubtedly beginning to wobble, too. The other day I sat down with an old friend, a law professor who started telling me about his students. What he said aged me instantly. They’re so young, he pointed out, that their parents didn’t even come of age during the Vietnam War. For them, he added, that war is what World War I was to us. He might as well have mentioned the Mongol conquests or the Wars of the Roses. We’re talking about the white-haired guys riding in the open cars in Veterans Day parades when I was a boy. And now, it seems, I’m them. In March 1976, accompanied by two friends, my wife and I got married at City Hall in San Francisco, and then adjourned to a Chinese restaurant for a dim sum lunch. If, while I was settling our bill of perhaps $30, you had told me that, almost half a century in the future, marriage would be an annual $40 billion business, that official couplings would be preceded by elaborate bachelor and bachelorette parties, and that there would be such a thing as destination weddings, I would have assumed you were clueless about the future. On that score at least, the nature of the world to come was self-evident, and elaborate weddings of any sort weren’t going to be part of it. From the time I was 20 until I was 65, I was always 40 years old. Now, I feel my age. Still, my life at 70 is a luxury. Across the planet, from Afghanistan to Central America, and in the poverty zones of this country, young people regularly stare death in the face at an age when, so many decades ago, I was wondering whether my life would ever begin. That’s a crime against humanity. So consider me lucky (and privileged) to be seven decades in and only now thinking about my death. Recently, I had the urge to tell my son something about my mother, who died before he was born. From my closet, I retrieved an attaché case of my father’s in which I keep various family mementos. Rummaging around in one of its pockets, I stumbled upon two letters my mother wrote him while he was at war. (We’re talking about World War II, that ancient conflict of the history books.) Almost four decades after her death, all I had to do was see my mother’s handwriting on the envelope—“Major C. L. Engelhardt, 1st Air Commando Force, A.P.O. 433, Postmaster, New York 17, N.Y.”—to experience such an upwelling of emotion I could barely contain my tears. So many years later, her handwriting and my father’s remain etched into my consciousness. I don’t doubt I could recognize them amid any other set of scribblings on Earth. What fingerprints were to law enforcement then, handwriting was to family memories.
And that started me wondering: years from now, in an electronic world in which no one is likely to think about picking up a pen to write anyone else, what will those “fingerprints” be? There are so many futures and so few of them happen. On the night of October 22, 1962, a college freshman, I listened to John F. Kennedy address the American people and tell us that the Russians were building “a series of offensive missile sites” on the island of Cuba and that “the purposes of these bases can be none other than to provide a nuclear strike capability against the Western Hemisphere.” In other words, the president of the United States was telling us that we might be at the edge of the sort of world-ending, monster-mutating nuclear war that, from Godzilla to Them, had run riot in the popular culture (and the nightmares) of my childhood. At that moment, I looked directly into the future—and there was none. We were, I believed, toast. My family, my friends, all of us, from Hudson Bay, Canada, to Lima, Peru, as the president put it. Yet here I am fifty-two years later. As with so many futures we imagine, somehow it didn’t happen and so many years after I’m still wondering when I’ll be toast. If, on that same night, you had returned from the future to tell me (or other Americans) that, nearly half a century hence, the Soviet Union would barely be a memory, that there would be no other great power challenging the United States for supremacy, and that its only serious enemies would be scattered bands of Islamic extremists, largely in countries no American of that era had even heard of, my sense of wonder would have been indescribable. And I don’t doubt that the godlier among us would have fallen to their knees and given thanks for our deliverance. It would have gone without saying that, in such a future, the US stood triumphant, the American Century guaranteed to stretch into endless centuries to come. If, on September 10, 2001, I had peered into the future (as I undoubtedly did not), whatever world I might have imagined would surely not have included: the 9/11 attacks; or those towers collapsing apocalyptically; or that “generational” struggle launched almost instantly by the Bush administration that some neocons wanted to call "World War IV" (the Cold War being World War III), aka the Global War on Terror; or a “kill list” and drone assassination campaign run proudly out of the White House that would kill thousands in the tribal backlands of the planet; or the pouring of funds into the national security state at levels that would put the Cold War to shame; or the promotion of torture as a necessary part of the American way of life; or the creation of an offshore prison system where anything went; or the launching of a global kidnapping campaign; or our second Afghan War, this time lasting at least 13 years; or a full-scale invasion, garrisoning, and occupation of Iraq lasting eight years; or the utterly improbable possibility that, from all of this, Washington would win nothing whatsoever. Nor, on that September day, still an editor in book publishing, barely online, and reading almost everything on the page, could I have imagined that, at age 70, I would be running a website called TomDispatch, 24/7, driven by the terrible news that would, before that day, have amazed me. Once upon a time, if you saw someone talking to himself or herself while walking down the street, you knew you were in the presence of mental illness. 
Now, you know that you're catching a snippet of a mobile or smartphone conversation by someone connected eternally to everyone he or she knows and everything happening online every minute of the day. Not so long ago, this was material for some far-fetched sci-fi novel, not for life. If, on September 10, 2001, you had told me that the very way we are connected to each other electronically would encourage the evolution of an American surveillance state of breathtaking proportions and a corporate surveillance sphere of similar proportions, that both would have dreams of collecting, storing, and using the electronic communications of everybody on the planet, and that, in such a brief space of time, both would come remarkably close to succeeding, I wouldn’t have believed you. Nor would I have been able to absorb the fact that, in doing so, the US national security state would outpace the “bad guys” of the totalitarian regimes of the previous century in the ambitiousness of its surveillance dreams. I would have thought such a development conceptually inconceivable for this country. And in that, touchingly, I would still be reflecting something of the America I grew up believing in. In my youth, I lived in the future. Riveted by the space operas of Isaac Asimov, among others, I grew up as a space nerd, dreaming of American glory and the colonization of distant planetary systems. At the same time, without any sense of contradiction, I inhabited future American worlds of wholesale destruction dotted with survivalist colonies in post-apocalyptic landscapes littered with mutants of every sort. I’m no neuroscientist, but I wouldn’t be surprised to discover that we, as a species, are hardwired for prediction. Preparing eternally for whatever danger might be just around the corner seems like such a useful trait, the sort of thing that keeps a species on its toes (once it has them). As far as I can tell, the brain just can’t help itself. The only problem is that we’re terrible at it. The famed fog of war is nothing compared to the fog of the future; otherwise, as I’ve often said, I’d be regularly riding my jetpack in traffic through the spired city of New York, as I was promised in my childhood. Our urge to predict the future is unsurpassed. Our ability to see it as it will be: next to nil. It’s been almost 13 years since the 9/11 attacks and there’s still no learning curve in Washington. Just about every step of the way in Afghanistan and Iraq, it’s only gotten worse. Yet from that history, from repeated military interventions, surges, and Hail Marys in each of those countries, Washington has learned…? Yep, you guessed it: that, in a crisis, it’s up to us to plunge in again, as in Iraq today where the Obama administration is sending back troops, drones, and helicopters, plotting to support certain government figures, deep-six others, and somehow fragment various Sunni insurgent and extremist groups. And don’t forget the endless advice administration officials have on offer, the bureaucratic assessments of the situation they continue to generate, and the weaponry they are eager to dispatch to a thoroughly destabilized land—even as they rush to “broker” a destabilizing Afghan election, a situation in which the long-term results once again aren’t likely to be positive for Washington. Consider this curious conundrum: the future is largely a mystery, except when it comes to Washington’s actions and their predictably dismal outcomes. Doesn’t it amaze you how little Washington gets it?
Fierce as the internal disagreements in that capital city may be, seldom has a ruling group collectively been quite so incapable of putting itself in the shoes of anyone else or so tone deaf when it comes to the effects of its own acts. Take Germany where, starting with Edward Snowden’s NSA revelations, reports of massive American surveillance of the communications of ordinary Germans and their leaders weren’t exactly greeted with enthusiasm. Now it turns out that the NSA wasn’t the only US “intelligence” agency at work in that country. The CIA and possibly other agencies were recruiting spies inside German intelligence and its defense ministry. Polls show that public opinion there has been turning against the US in striking ways, but Washington just can’t take it in. A little noted truth of this level of spying and surveillance is: it’s addictive. Washington can’t imagine not doing it, no matter the damage. If you keep an eye on this situation, you’ll see how the US national security system has become a self-inflicted-wound machine. Here’s a question for our American moment: Why, in its foreign policy, can’t the Obama administration get a break? You’d think that, just by pure, dumb luck, there would be a few small victories somewhere for the greatest power on the planet, but no such thing. So for the post-American Century news jockeys among you, here’s a tip: to follow the waning fortunes of that century in real time, just keep an eye on Secretary of State John Kerry’s endless travels. He’s the Jonah of the Obama administration. Wherever he goes, disaster, large or small, trails behind him, even when, as in Afghanistan recently, his intervention is initially billed as some sort of modest triumph. Consider him the waning American Century personified. Think of the drone as a barometer of the American Century in decline. It’s the latest “perfect weapon” to arrive on the global scene with five-star reviews and promises of victory. Like the A-bomb before it, by the time its claims proved false advertising, it was already lodged deeply in our world and replicating. The drone is the John Kerry of advanced weaponry. Everywhere it goes, it brings a kind of robotic precision to killing, the problem being that its distant human trigger fingers rely on the usual improbable information about what’s actually on the ground to be killed. This means that the innocent are dying along with all those proclaimed “militants,” “high-value targets,” and al-Qaeda(-ish) leaders and “lieutenants.” Wherever the drone goes, it has been the equivalent of a recruiting poster for Islamic militants and terror groups. It brings instability and disaster in its wake. It constantly kills bad guys—and constantly creates more of them. And even as the negative reports about it come in, an addicted Washington can’t stop using it. The true legacy of the foreshortened American Century, those years when Washington as top dog actually organized much of the world, may prove apocalyptic. Nuclear weapons ushered that century in with the news that humanity could now annihilate itself. Global warming is ushering it out with the news that nature may instead be the weapon of choice. In 1990, when the Soviet system collapsed and disappeared, along with its sclerotic state-run economy, capitalism and liberal democracy were hailed in a triumphalist fashion and the moment proclaimed “the end of history.” In the 1990s, that seemed like a flattering description.
Now, with 1 percent elections, an unmitigated drive for profits amid growing inequality, and constant global temperature records, the end of history might turn out to have a grimmer meaning. Global warming (like nuclear war and nuclear winter) is history’s deal-breaker. Otherwise, the worst humanity can do, it’s done in some fashion before. Empires rise and fall. They always have. People are desperately oppressed. It’s an old story. Humans bravely protest the conditions of their lives. Rebellions and revolutions follow and the unexpected or disappointing is often the result. You know the tale. Hope and despair, the worst and the best—it’s us. But global warming, the potential destruction of the habitat that’s made everything possible for us, that’s something new under the sun. Yes, it’s happened before, thanks to natural causes ranging from vast volcanic eruptions to plummeting asteroids, but there’s something unique about us torpedoing our own environment. This, above all, looks to be the event the American Century has overseen and that the drive for fossil-fuel profits has made a reality. Don’t fool yourself, though; we’re not destroying the planet. Give it 10 million years and it’ll regenerate just fine. But us? Honestly, who knows what we can pull out of a hat on this score. My father’s closest friend, the last person of his generation who knew him intimately, died recently at 99. To my regret, I was no longer in touch. It nonetheless felt like an archive closing. The fog of the past now envelops much of his life. There is nobody left to tell me what I don’t know about all those years before my birth. Not a soul. And yet I can at least recognize some of the people in his old photos and tell stories about them. My mother’s childhood album is another matter. Her brother aside, there’s no one I recognize, not a single soul, or a single story I can tell. It’s all fog. We don’t like to think of ourselves that way; we don’t like to imagine that we, in the present, will disappear into that fog with all our stories, all our experiences, all our memories. Here’s a question that, in a globally warming world, comes to mind: Are we a failed experiment? I know I’m not the first to ask, and to answer I’d have to be capable of peering into a future that I can’t see. So all I can say on turning 70 is: Who wouldn’t want to stick around and find out? Here’s the upbeat takeaway from this requiem for a foreshortened American Century: history is undoubtedly filled with seers, Cassandras, and gurus of every sort exactly because the future is such a mystery to us. Mystery, however, means surprise, which is an eternal part of every tomorrow. And surprise means, even under the worst conditions, a kind of hope. Who knows just what July 20, 2015, or 2025, or 2035 will usher on stage? And who knows when I won’t be there to find out. Not I. By the way, I have the urge to offer you five predictions about the world of 2050, but what’s the point? I’d just have to advise you to ignore them all.
The Impact Factor measures the average number of citations received in a given year by papers published in the journal during the two preceding years. Public health work supports decision making in health care and the planning of health services, including any necessary changes. Professionals working in the realm of social and behavioral health include the following. Critics say that this undermines individual freedom and personal accountability, and fear that the state may be emboldened to take away more and more choice in the name of better population health overall. By Roman times, it was well understood that proper diversion of human waste was a necessary tenet of public health in urban areas. Headquartered in Bethesda, Maryland, and an agency of the Department of Health and Human Services, the NIH is responsible for health-related and biomedical research. Get started by requesting further information on public health degree programs. Public health efforts are impeded by this, as a lack of education can lead to poorer health outcomes. It includes additional information on the core areas of public health study as well as information on accreditation, degree programs and fellowship programs. In order to specialize, students may need to spend up to six years studying both general public health principles and their area of focus. Whereas biostatisticians and informatics professionals are largely concerned with collecting and analyzing hard data, community health specialists take a more holistic look at public health. Many countries have implemented major initiatives to cut smoking, such as increased taxation and bans on smoking in some or all public places. For example, in the United States, the front line of public health initiatives is the state and local health departments. The United States Public Health Service (PHS), led by the Surgeon General of the United States, and the Centers for Disease Control and Prevention, headquartered in Atlanta, are involved in several international health activities, in addition to their national duties. They worked with various food shelters to ensure needy families also received basic health products: shampoo, toothpaste, soap, etc. Non-profit: Jobs in this setting often serve a specific population, such as minorities or mothers, or address specific health disparities. The ability to organize multi-faceted projects and encourage participation from diverse populations means that public health professionals may be perfectly suited for jobs in businesses and non-profits of all kinds. In a global, interconnected world, international organizations play an essential role in unifying the overarching objectives of the public health field. In this manner, environmental health can be seen as an outgrowth of community health. Many state and local health departments are paired with the local social services department.
Objective: The aim of this study is to determine the frequency and severity of camel racing injuries among children aged 5 to 15 years during the period 1992 to 2003 in the State of Qatar. Setting: The study was conducted in the Hamad General Hospital, State of Qatar, from January 1992 to December 2003. Patients and Methods: A total of 275 subjects aged 5 to 15 years with camel racing injuries who were seen at the Accident Emergency Department, Critical Care, and Physiotherapy Departments of the Hamad General Hospital were studied. The sociodemographic information and the details of the injury of the studied subjects were collected. The Abbreviated Injury Scale system was used to determine the severity of injury. Results: Overall, 275 camel racing injuries were reported among boys aged 5 to 15 years. The majority of patients were Sudanese (91.3%). The most commonly injured locations were the upper limb (23.2%), lower limb (21.1%), and head (20.7%), followed by other injury locations. Seventeen patients were disabled as a result of their injury, and another 3 injuries were fatal. This study revealed that 34% of injuries were considered to be minor, 22.1% moderate, 18.1% serious, 11.6% critical, and 6.4% maximum. Conclusions: The injury severity caused by camel racing significantly affected the length of hospital stay. At present, the government is serious about this problem, and there is a draft of proposed legislation intended to prevent the employment of children below the age of 12 as camel jockeys. Table 2 shows the distribution of injuries by anatomic location and mean length of hospital stay. As can be seen from this table, the main locations of injuries among children were the upper limb (23.2%), lower limb (21.1%), head (20.7%), chest (8.4%), abdomen (7.2%), and neck (6.6%), followed by other types of injuries. Head injuries had the longest mean hospital stay compared with other injuries (9.9 ± 7.5 days). Table 3 gives injuries scored by the AIS. A little more than a third of the injuries (34.4%) were considered to be minor, 22.1% moderate, 18.1% serious, 11.6% critical, and 6.5% maximal-fatal. There was a statistically significant correlation between the duration of hospital stay and the AIS (ρ = 0.820; P = 0.001) and between age and AIS (ρ = 0.241; P = 0.0028). Figure 1 shows the trend of camel racing injuries reported from 1992 to 2003. It is worth noting that the number of camel racing injuries increased gradually from 1997 until 2002, and there was a sudden decrease in the year 2003. The current study in Qatar has shown that camel racing injuries were higher in Sudanese boys because most of the camel jockeys were brought from Sudan. The possible reason for this significant problem could be their poor financial background or their willingness to come to work in Qatar. Sudanese children tend to be lightweight and thus are deemed quite suitable as camel jockeys. Because all the subjects were boys (due to the nature of the sporting event), our study could not show a gender effect on injury severity. To the best of the authors' knowledge, this is the first study involving camel racing injuries among children in an oil-rich Arabian Gulf country. One of the reasons for publishing our data was to raise the level of awareness of both the frequency and severity of camel-related children's trauma in Qatar. This situation may be applicable to other Arabian Gulf countries including the United Arab Emirates, Kuwait, Saudi Arabia, and the Sultanate of Oman.
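For readers unfamiliar with the ρ values reported above, they are Spearman rank correlation coefficients. A minimal Python sketch of how such a coefficient is computed follows; the paired values are invented for illustration only, since the study's patient-level data are not reproduced here.

```python
# Requires SciPy. Spearman's rho correlates the *ranks* of two variables,
# which suits ordinal scores like the AIS.
from scipy.stats import spearmanr

# Hypothetical illustration: AIS severity scores paired with length of
# hospital stay (days) for a handful of made-up patients.
ais_scores = [1, 2, 2, 3, 4, 5, 5, 6]
stay_days = [1, 2, 3, 6, 9, 14, 16, 30]

rho, p_value = spearmanr(ais_scores, stay_days)
print(f"Spearman rho = {rho:.3f}, P = {p_value:.4f}")
```

A rho near 1, as the study reports for stay versus AIS (ρ = 0.820), means longer stays consistently accompany higher severity scores.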
These injuries appeared to be associated with limitations in legislation, education, and the level of compliance with basic safety measures, especially with regard to riding helmets and other safety equipment. This study demonstrated that the injury severity caused by camel racing significantly affected the length of hospital stay. Head injuries had the longest hospital stay compared with other injury locations, followed by neck injuries. Although most of the patients had minor, uncomplicated injuries that did not need hospitalization, our study focused on those with serious (18.1%) and severe injuries (11.3%), and those who needed hospitalization with critical (7.3%) and maximum (6.5%) injuries. Of the 18 patients with maximal injuries, 3 were fatal and recorded as deaths. The rest of the children were taken abroad to Germany, the United Kingdom or other countries for further management, and the final outcome of these patients was unknown. Camel racing has the highest mortality of all sports in this region and can be more dangerous than motorcycle or car racing.3-12 The mortality of hospitalized patients in our study was much less than 1% (3 deaths), most probably due to excellent medical and rehabilitation services, though it is possible that more patients may have died before arriving at the hospital. In the present study, we observed that the patients who were thrown from camels and subsequently had a camel fall on them tended to have more serious injuries. A camel can weigh more than 750 kg, and serious injury may be inflicted if the camel rolls over the rider. The outcome of the treatment of the injured patients indicated that half of them achieved good improvement through physiotherapy, but 10% of them were totally disabled. The injuries in this study were measured with the AIS, which is limited to anatomic injuries by body region. Due to the lack of data, we could not use the Injury Severity Score. There is a need for further research in this area. The study should be repeated after the implementation of the new laws. Additionally, deaths occurring outside of the hospital setting should also be measured and included. Furthermore, more recently, a committee21 was established for further investigation, and as a result, the State of Qatar issued a decree that bans the import, recruitment, training, or involvement of children in camel racing. Violators face a very severe penalty and immediate deportation from the country. The committee wants to make Qatar the first Gulf country to tackle this problem in an effective manner. The practice of employing child camel jockeys, some as young as 5 or 6 years, has come under a cloud over the past few years, with human rights forums urging an end to it. This is quite evident from Figure 1: there was a sudden decrease in the number of injuries in the year 2003. There have been no camel race-related injuries reported to accident and emergency services since January 2005. Camel race clubs are planning to use robots instead of children in the future, and these are currently being tested. Camel racing injuries involving children in Qatar account for a considerable number of disabilities and, although not measured in this study, deaths.
In summary, safety in all potentially dangerous sports should be foremost in the minds of those who administer, supervise, and participate in such games, because camel racing injuries in children account for a considerable number of disabilities and, although not measured in this study, deaths. The present study has certain limitations in the data obtained from the hospital. First is the reliability of the levels of injury severity reported by casualty doctors; some misclassification between levels is possible. Second is underreporting of injuries, especially minor injuries from falls. There may be differences in distribution between minor injuries from falls that are reported and those that are not, which may result in selection bias. Lastly, mortality rates have likely been underestimated in this study because those children who died on site would not have been brought to the hospital. The present study has provided insight into the nature and frequency of camel racing injuries affecting children and youth in Qatar. The severity of camel racing-related injuries significantly affected the duration of hospital stay. The upper and lower limbs were the most common injury locations among the injured subjects. At present, the government is serious about this problem, and there is a draft of proposed legislation intended to prevent the employment of children below the age of 12 as camel jockeys.
If you use MIDI instruments such as an external drum machine or something similar, it's worth spending some time labelling up your studio in Logic so that your workflow is quick and doesn't stop you from writing your next hit. In this tutorial, we'll look at how to label the different areas of the step editor so that they are consistent with your external synth. In this example, we'll be using the Roland JD-XI to create a drum lane, labelling the kick drum channels, snares and so on. You'll need an existing audio project or template (to save for quick work later); once set up, any MIDI information created will be sent directly to that port on the MIDI interface.
1. Create a MIDI region so that we can input some data. To do that, select the pencil tool and draw the region in the arrange window.
2. Select the MIDI region and open the step editor: select "WINDOW" from the menu and click "OPEN STEP EDITOR". The step editor window will now open.
3. To create the template for the synth, select "LANES" from the menu. A list of different types of drum lanes will appear. This is where the editing of names and lanes happens.
4. To name the setup for the synth, select the name next to the option "LANE SET". A drop-down menu appears with the different GM options. To rename the currently selected GM drum lane set, click where it says "GM DRUM KIT…"; directly below is the option "RENAME LANE SET". The name should now be editable, so call it something appropriate. In this tutorial, we'll name the lane set "JD-XI".
5. Now we're ready to label our MIDI instrument. Hold the CMD key and the mouse pointer changes to a pencil; click one of the lanes in the MIDI lane area. If set up correctly, the MIDI instrument should now be triggered.
6. To change the label for a lane, select the lane you have added a MIDI note to, then look for the menu option labelled "LANE" on the left-hand side of the screen. Click the name that Logic has assigned; this is an editable field and will enable you to re-label the lane. In this example, I've used "KICK" as the field name. Repeat until you are happy.
7. If you're using a sampler or instrument that only has a limited number of triggered notes, you may want to delete some lanes. To delete the lanes that aren't being used, press "CMD+BACKSPACE"; this deletes the currently highlighted lane.
8. Now save the project as a template. This will make your template available upon opening Logic Pro X.
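If you want to confirm which note numbers fire which drum sounds before labelling lanes, you could send test notes from a short script. This is a sketch, not part of the tutorial: it assumes Python with the mido library (and the python-rtmidi backend) plus a connected MIDI interface, and "JD-XI" below is a placeholder port name. The note numbers follow the standard General MIDI drum map on channel 10 (index 9); check the JD-XI manual, as hardware kits may remap them.

    import time
    import mido

    # Standard General MIDI drum map entries used as example lane labels.
    GM_DRUMS = {
        36: "KICK",       # Bass Drum 1
        38: "SNARE",      # Acoustic Snare
        42: "HH CLOSED",  # Closed Hi-Hat
        46: "HH OPEN",    # Open Hi-Hat
        49: "CRASH",      # Crash Cymbal 1
        51: "RIDE",       # Ride Cymbal 1
    }

    print(mido.get_output_names())           # list the available MIDI ports first
    with mido.open_output("JD-XI") as port:  # placeholder: use your port's name
        for note, label in GM_DRUMS.items():
            print(f"Triggering lane {label} (note {note})")
            port.send(mido.Message("note_on", channel=9, note=note, velocity=100))
            time.sleep(0.2)
            port.send(mido.Message("note_off", channel=9, note=note))

Hearing each sound as its note number prints makes it easy to type the matching label into each lane in the step editor.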
To facilitate individual success and to challenge students, a balanced programme of classes is available that provides stimulating language learning. Authentic, up-to-date materials relevant to the needs of the students are used to instigate genuine discussion and develop higher-order thinking skills. Each class is different depending on the needs and profiles of the students. The framework of classes focuses on fluency, confidence and communicability as a whole, as learners develop through English to achieve presence in the language and unlock their own skills. The barrier between the real world and the classroom is reduced as far as possible to spur the greatest development. Mini-groups of a maximum of four students of a similar age, profile and language level allow teachers to give optimal individual attention to each student. The multicultural environment means students develop their trans-cultural awareness. Larger group interactive sessions challenge scholars to develop nimbleness in English and stay aware of their own use of the language. Communicative skills are challenged by the mixed-level environment, allowing students to enhance their all-round communication abilities. Plenaries are group sessions designed to build listening skills and stimulate higher-order thinking; there is no size limit on the class. The addition of certain group classes to many Tutorial hours allows dedicated students to develop their communicative competencies, intercultural understanding, and presentation and communication skills. Interacting with other learners with similar requirements allows students to apply what they have covered in individual classes and understand more about the complexities of communicating with other non-native speakers. Learn more here. This programme contains all the group classes offered at the school. Throughout the programme there is an emphasis on fluency and confidence within the four traditional skills of communication - speaking, writing, listening and reading. Students are encouraged to understand the complexities and challenges of using English as a global language. The goal of this programme is to allow talented learners to access their skills in English and develop competencies across a range of contexts including business and academia, as well as in their personal lives. Learn more here. Available for: Academic English, Business English, General English, Exam English, English for Specialisations. Available all year round, this course gives students the opportunity to prepare for the IELTS exam. In the Quatorial format, students develop the language and strategies required to perform to the best of their ability in the exam. Progress is monitored via weekly full mock tests, and students can expect to be challenged and to become more autonomous learners through the structure and rigour of the course. Students spend half the day in IELTS group classes, and the other half attending to broader communicative skills in English. Course components are here. The course combines classes in small groups and tutorials with a full emphasis on the exact requirements of the Abitur English exam - focusing on relevant texts, grammar, listening comprehension and writing. Students test their level and measure their progress by submitting written assignments and taking part in oral evaluations and debates. Some classes are dedicated to the Abitur whilst others attend to all-round communicative competence.
My favorite professors have always been the ones who explain how it works in the real world. I understand theory is important, but sometimes it isn't practical. I learned all the basics in library school about how to run a library, but patron interaction strategies were limited; I don't recall much beyond the "reference interview." However, the people skills required for dealing with patrons on a daily basis are critical to maintaining a positive and effective library. Dealing with strangers does not come naturally to all of us. I will be the first to admit that while I will take on leadership roles and have no problem instructing a class, confrontation or initiating conversation with a stranger is not on my Top 10 list of fun things to do. If you want a strongly worded blog post or letter, I'm your gal. Lobbying legislators face to face? Not my skill set. It has been a long time since I was described as mousy, and I feel I have grown out of this issue to a certain extent as I've gotten older, but it still pops up now and then. So, why is this an issue in librarianship? There is a stereotype that librarians are mousy, timid people. Now, you and I both know that isn't true. Well, most of the time. If you are going into librarianship you need to understand something - you are going to be dealing with people. A lot of them. On a daily basis. Librarianship, unless you are in a back room cataloging or digitizing, is not for the timid. And certainly don't go into Children's and Young Adult librarianship if you aren't ready to corral toddlers during storytime or stand up to the teen boy who is towering over you. Kids are cute and teens are quirky, but they are also fickle, loud and energetic, and they travel in packs, so you will most likely be outnumbered at any given time. Each interaction with a patron is a different experience. They will be irate, needy, happy, confused, in a hurry, and in myriad other moods. Librarianship is customer service, and sometimes you just have to bite your tongue and smile. Public relations is an area that tends to be an elective in library programs. I don't recall one course where it was taught beyond planning a program in a Children's or Young Adult literature course. As countless professional journals have featured lately, public relations and advocacy are critical to library survival. One of the most important public relations opportunities is often one of the most forgotten - direct interaction with the patrons. Every encounter with a patron is a chance to present a positive or negative image of your library (and libraries as an institution). Being rude or snarky has no place in this interaction. First of all, it's just unnecessary and potentially detrimental to your library's continued existence - think of how many businesses you no longer patronize because of poor experiences there. The library is a public service and is funded by public money. You never know who you are dealing with and the impact your interaction with that person may have on your library. Does that mean that you should allow others to walk all over you or treat everyone with kid gloves just in case the person is the mayor's sister-in-law? Certainly not. But assertive does not mean aggressive or rude. Treating the patron with professionalism and respect, even when you have to bite your tongue, is one of the most important skills a librarian can possess. I learned how to deal with patrons on the job, but would have appreciated it if this issue had been addressed more in library science programs.
Role-playing, discussions, and even instruction about "real library" situations from active librarians would be an invaluable addition to library school programs. However, as long as you keep respect and professionalism in mind, you can learn the skills you need while in the trenches.
Rosebank was built by Andrew Inglis Clark, a significant figure in Australian public life. Clark was elected to the Tasmanian House of Assembly in 1878, at a time when Australia was moving towards nationhood. Attacked by the Hobart Mercury as a revolutionary with his ‘proper place among the Communists’, Clark was a staunch republican and believed that government should benefit everyone. He supported progressive legislation including legalising trade unions and reforming laws on lunacy, employment, custody of children and prevention of cruelty to animals. He advocated women’s suffrage and contributed to the development of the Australian Constitution. He is largely remembered today for the Tasmanian Hare-Clark system of proportional representation in political elections. Clark was a criminal lawyer, but his refusal to accept anything beyond a modest fee prevented him from making a fortune. He was appointed Judge of the Supreme Court and played a major role in the foundation of the University of Tasmania in 1889, later serving as Vice-Chancellor from 1901 to 1903. In private life, Clark made plenty of time for his family. His son Conway remembered him under his ‘vine and figtree’ with his wife and children at Rosebank. ‘The leopard could as soon change his spots as I become a supporter of plutocracy and class privilege,’ he once declared. Andrew Inglis Clark died in 1907. In politics, he remained true to the liberal principles which had inspired him from the first. Fearless in his utterances on all public questions, and careless of consequences when he had once struck out upon the course which he believed to be right, he won for himself a character for rectitude which anyone might well envy.
Photo caption: The Huon Cry fruit processing factory later occupied the same site, with Secheron House in the background and Rosebank just out of shot at the top of the photo. The industrial–residential mix of Battery Point has long gone. Many of the local Battery Point women worked in the fruit factory.
When the book is dry, head outside and use a soft paintbrush or cloth to gently brush away the mildew from the cover and each page. Slide a sheet of waxed paper under each page to protect the page behind it. Slightly dampen a clean, soft cloth with hydrogen peroxide and gently wipe down each page, allowing it to air dry completely before moving to the next page.

Children imitate, so if they see that you enjoy reading and treat books gently and with respect, it is likely that they will do the same. Choose books your child will enjoy: when you read aloud together, choose books that you both like.

I’m not opposed to the occasional treat, but it’s the attitude of expecting it because you as a parent or others have it. Just because I have an iPhone doesn’t mean my children will get one. We don’t have to give our kids everything we have. It’s okay to make them wait for things in life.

Inspector George Gently (also known as George Gently for the pilot and first series) is a British television crime drama series produced by Company Pictures for BBC One, set in the 1960s and loosely based on some of the Inspector Gently novels written by Alan Hunter.

14 Monthly Book Clubs Your Kids Won’t Hate. Posted December 21, 2018 by Meredith Raico. Show your children that reading can be an adventure and a treat with a box full of books, gifts, and even costumes delivered to your home with their name on it.

But contrary to how things may seem, most kids like to behave in a manner that makes them (and you) proud—at least most of the time. The best way to get there: help your child feel as if you and she are on the same team. These six strategies show you how.

Kids can click/treat, too, whenever the puppy has all feet on the floor. Small dogs and puppies can safely learn about "Be a Tree" from kids, but if your dog is large or too excitable, it is not a good idea to set him loose with kids.

Treat kids like kids: in the absence of a partner, it can be tempting to rely on your children for comfort, companionship, or sympathy. But your kids are not equipped to play this role for you.

Keep away noisy kids or dogs. Don’t thrust your hand out at the cat. Once a cat gets used to your presence, you want to gently offer it a finger to sniff, but you need to do this stealthily.

"Frankly, I worry about kids who don't do this!" Despite the ongoing need to test limits, kids also need to learn the importance of respect for others — and respect begins at home.
As a startup operator, I participated in seven new product launches selling to schools, districts, colleges, and universities. There were many other product ideas that were never developed (that’s a good thing — we likely saved both $ and the market from ideas that weren’t going to work!). Over the course of those product development cycles we made plenty of mistakes, but we also successfully rolled out products that attracted hundreds, even thousands, of paying institutional customers. 1.) Look for prototypes as early signs of a market. If you want to identify signs of demand for your product, look no further than the current activities of your target customers. Innovative teachers and school leaders often realize they have a problem before entrepreneurs do. That’s why they build in-house prototypes to address the problem. What does a prototype look like? Well, for example, prototypes for school workflow products often involve off-the-shelf tools like Microsoft Word, Google Docs, MS Excel or Google Sheets. You can find prototypes in most schools and districts across the country. Teachers and school leaders use these tools and more to create a solution as best they can. When prototyping, teachers and school leaders invest time and energy in the tools they have available, and often (though not always) are limited by what these prototypes can do. Customers who build in-house prototypes do so because the current choices in the market don’t meet their feature or price needs. The prototypes are useful, but many customers want more than the prototype provides. This is an opportunity for an entrepreneur to drill down and learn more about the prototype, what problem it’s trying to solve, and what the customer likes and doesn’t like about it. If there’s pain in building/maintaining the prototype and there’s a pattern of similar prototypes across a set of schools or districts, this is a strong signal of an early market and spells opportunity for an ambitious entrepreneur. 2.) Develop a product thesis (or two or three) and meet with potential customers. Once you’ve found a problem you can solve, put together your product thesis. This is as simple as a slide deck that describes your understanding of the problem and the proposed solution. The key here is not to over-invest or over-engineer a product at this stage, because you still don’t know if your product idea is worth building. Continue learning about the market opportunity and whether or not you have an idea that will resonate with your target audience. Meet with lots of potential customers with the objective of learning more about the problem, getting feedback on your thesis, and iterating on the product idea. You’re focused on research and development at this point, not on selling. How do you know if you’ve met with enough prospective customers to validate or invalidate the idea? Eventually, conversations become more consistent, with customers nodding their approval of your proposed solution. At this point, you might be onto something, but you’re not there yet. 3.) Drill down to see what customers are willing to pay for early on. Ask questions such as: “What price would be beyond what you’d be willing to pay for this solution?” (i.e., the price at which you’re out of the money). You’ll learn what they’re willing to pay for and get valuable information on how to charge as well. 4.) Keep grinding away at R&D until you can pre-sell customers. Pre-selling matters for several reasons. First, it’s a strong signal that there’s real demand for your solution.
Customers who are willing to pay for your product before it’s built are bought into your product being a must-have, not a nice-to-have. Second, you need co-creation partners with a voice in how your product is developed from day one. Without these partners, you run the risk of building your product in a vacuum and then having to redo it when customers get their hands on it. You could give away the product, but customers who get something for free are more reluctant to give direct, honest feedback. Early customers with skin in the game give it to you straight. Third, you now have product development that’s paid for by customers. Congratulations! Following this process also helps you determine price before you go to market, with the knowledge that customers are willing to pay. Leaving pricing until just before launch is a recipe for failure. Don’t make this mistake as you develop new products. What other product development moves have you seen that help ensure a successful launch? On the flip side, what happened that sabotaged your launch or caused problems? I’d welcome your thoughts.
One of the first things Erica Sperling does when she visits a classroom in Newport Beach is tell the students she doesn't carry a gun. The crime-prevention specialist is the face of law enforcement for many elementary school students in the city. She is the author of a Newport-specific curriculum that she has used to teach sixth-graders about making good choices when it comes to drugs, bullying and cybersafety. But when she first meets the students, the question about whether she's armed inevitably — and quickly — comes up. She's happy to answer though. It's the start of her relationship with students, she said. "That's one of my goals, [for kids] to just have positive interaction with law enforcement," she said. Sperling is the Newport Beach Police Department's replacement for the anti-drug DARE program. In 2011, Police Chief Jay Johnson decided to cut DARE — Drug Abuse Resistance Education — from the department's budget because it mandated that a sworn police officer present the material in the classroom. That's expensive, Johnson said. But the department was also happy to comply with then-Mayor Mike Henn's request to keep some kind of uniformed personnel in the classroom, Johnson said. "The sooner we can start the prevention process and the education process, the better in the long run," he said. "That saves us from having crime problems down the road with these same kids, and it keeps them out of trouble, quite frankly." That's where Sperling, a civilian employee who carries a badge and wears a uniform, took over. In spring 2012, Sperling started a pilot program in sixth-grade classes in Newport Beach. From a survey of hundreds of those students, she developed a curriculum she calls Step Up. It expanded from the drug-prevention message of DARE to include peer pressure, bullying and cyberbullying, smoking, marijuana and synthetic marijuana, alcohol, prescription drug abuse and more, Sperling said. Step Up made it into classrooms in the fall, and when classes start this spring, a new crop of sixth-graders will be introduced. By the end, Sperling said, the students have so much knowledge that they can teach the lessons. Every seminar she does ends with a review that involves them acting out skits revealing what they've learned. One skit in particular impressed Sperling, she said. The sixth-graders acted out a party where a student declined to drink but held a drink for a friend. In the skit, a photo of the student, holding a beer, ends up online. "It really showed how you have to stop and think about every little thing and what you're putting online," Sperling said, something she tries hard to impress on the sixth-graders. When Newport-Mesa Unified School District students return to school next month, Sperling will begin the second year of teaching hundreds of sixth-graders — probably with the gun question. "It's fantastic," Sperling said. "I love it. I love being with the kids."
Pedro Aphalo, Ford Denison, Peter Langridge and Victor Sadras conceived and developed this project. The project crystallised with the support of the OECD Co-operative Research Programme: Biological Resource Management for Sustainable Agriculture Systems, which sponsored the attendance of European and North American scientists. We are particularly grateful to Primal Silva and Gary Fitt (CRP leaders) and Janet Schofield (CRP Secretariat) for their support and professional input. The South Australian Grains Industry Trust (SAGIT) and the Grains Research and Development Corporation (GRDC) kindly provided additional funding to support the workshop. We thank Mariano Cossani, Lachlan Lake, Toni Pihodnya and Gemma Weedall for venue logistics and organisation. Malcolm Buckby’s career has ranged from managing a family farm to being the elected representative for the Light electorate in the House of Assembly in the South Australian Parliament, serving as the Minister for Education, Children’s Services and Training, Shadow Minister and member of standing committees. My time as a Research Economist at the University of Adelaide gave me knowledge of the SA economy and the privilege of working with some of the best economic minds in the state. I am currently the Manager of the Rural Services Division of the Royal Agricultural and Horticultural Society of SA and deliver administration and policy advice to a range of rural bodies including Beef and Sheep Societies, SA’s Country Shows and the SA Grain Industry Trust. Lachlan Lake is a pulse physiologist in the Sustainable Systems Research Division, SARDI. Lachlan has been working in agricultural research since 2003 on projects focusing on Australia’s major pulse species, investigating physiological drivers of yield, stress adaptation, N fixation, disease resistance and modelling. Lachlan completed his PhD in chickpea physiology at the University of Adelaide and is currently undertaking a GRDC-funded Postdoctoral Fellowship investigating canopy dynamics and waterlogging tolerance in lentil. Lachlan’s work has been driven by the importance of pulses in sustainable farming systems and the need to improve pulse adaptation to Australian conditions in the face of limited resources. Bill Long is a farmer and for the past 23 years has managed his own company, Ag Consulting Co, a South Australian agricultural consulting business established in 1995. The company provides agronomic and farm business management advice to farm businesses across SA and manages and conducts research and communication projects for growers on a range of agronomic and farm management issues. He has participated in and managed projects on carbon, climate, snails, controlled traffic, seeding systems, inter-row sowing systems, cereal and pulse canopy management, leaf disease control in cereals and pulses, weed management, plant growth regulants, pollination, soil carbon and stubble. He has been a member of the BCG Yield Prophet team working to improve understanding of soil water and the use of crop modelling to build advisors’ and farmers’ knowledge of soil water/plant production relationships. He was a founding member of the Yorke Peninsula Alkaline Soils Group, the SA and Vic Independent Consultants group and the Ag Excellence Alliance, and is a past Chairman and committee member of the SA GRDC Advisor Update Committee, TopCrop SA, the Crop Science Society of SA, the Snail Management Action Group and the Grain Pest Advisory Group.
He served on the GRDC’s southern panel from 2011 until 2017. Bill has developed farm business benchmarking programs and was involved in the development of Plan to Profit®, a farm business analysis tool. Bill holds a Bachelor of Applied Science in Agriculture, is a graduate of the Institute of Company Directors, and undertook studies in the use of decision support tools and farmer and advisor decision-making processes. He has a keen interest in ag extension and adoption practices. In more recent times, and as a result of his studies in decision-making, Bill spends more time with clients running farm boards and thinking strategically about their business management and development opportunities. With his wife Jeanette and son Will, he grows lentils, chickpeas, beans, cereals and canola, and runs sheep on his properties on Eyre Peninsula and in the mid north of SA. He is passionate about the grains industry and enjoys the complexity and challenges of understanding and managing farming systems across Australia. John Kirkegaard is a Chief Research Scientist at CSIRO Agriculture and Food, based in Canberra, and Adjunct Professor at the University of Western Australia and Charles Sturt University. He was raised on the Darling Downs in rural Queensland and studied agriculture at The University of Queensland, where he received his PhD in 1990 for studies of the effects of soil compaction on the growth of grain legumes. The same year, he joined CSIRO Plant Industry in Canberra to work on the Land and Water Care Project, and his subsequent career at CSIRO has focussed on understanding soil-plant interactions to improve the productivity, resource-use efficiency and sustainability of dryland farming systems. Over the last 28 years, his research teams and collaborators have investigated aspects of improved crop sequence, rotational benefits and productivity of canola and other Brassica species, improved subsoil water use by crops, development and integration of dual-purpose crops, and improved productivity in conservation agriculture. He has led numerous national research programs, is a regular invitee to international forums and advisory committees on agriculture and food security, and was Visiting Professor at the Crop Science Department, University of Copenhagen, in 2012. A hallmark of his innovative research has been his active integration of farmers and advisers into his research teams, which has undoubtedly led to more rapid adoption and impact in agriculture. He was recipient of the grains industry “Seed of Light” award in 2009 for effective communication of research results to industry, and in 2014 his GRDC National WUE team was awarded the Eureka Prize in sustainable agriculture for research to improve the water-use efficiency of Australian agriculture. He was elected a Fellow of the Australian Academy of Science in 2016, was recipient of the Farrer Medal for distinguished contribution to agriculture in 2017, and was named an ISI Web of Knowledge Highly Cited Researcher for Agricultural Sciences in 2018. Allan Mayfield brings extensive agronomy and farming knowledge and 40 years of experience in government and as an independent agronomic consultant to his role with the South Australian Grain Industry Trust. Allan has a Bachelor of Agricultural Science and a PhD in Plant Pathology. He was instrumental in setting up the Hart Field Site and starting precision agriculture and associated research in South Australia.
His industry involvement is extensive and includes seven years as a GRDC Southern Panel member, six years as research coordinator for SPAA (Southern Precision Agriculture Australia) and 10 years as the research manager for the Hart Field Site Group. In addition to his role with SAGIT, he assists the Grains Research and Development Corporation in project management. He is a life member of the Crop Science Society of SA, a fellow of the Australian Institute of Agricultural Science & Technology, and a Churchill Fellow (2002). John Porter is an internationally known agro-ecological scientist with expertise in ecosystem services in agro-ecosystems, including agro-ecology, simulation modelling and food system ecology. His main contribution has been multi-disciplinary and collaborative experimental and modelling work on the response of arable crops, energy crops and complex agro-ecosystems to their environment, with an emphasis on climate change, ecosystem services and food systems. Porter has published 145 papers in peer-reviewed journals out of a total of about 350 publications. On average, his peer-reviewed papers have been cited more than 100 times each. He has personally received three international prizes for his research and teaching, and two others jointly with his research group. His career h-index is 57, with 131 papers receiving over 10 citations. From 2011 to 2014 he led the writing of the critically important chapter of the IPCC 5th Assessment, in Working Group 2, on food production systems and food security, including fisheries and livestock. This chapter was one of the most cited from the IPCC 5th Assessment and formed an important scientific bedrock of the COP21 agreement in Paris in 2015. Francis Ogbonnaya is the Program Manager, Oilseeds and Pulses, at the Grains Research and Development Corporation (GRDC) and has served at various levels within the organization. At GRDC, Ogbonnaya has been involved in setting R&D initiatives and strategies that have strongly influenced and promoted innovative R&D options aimed at delivering enduring, profitable outcomes for Australian farmers. Ogbonnaya joined GRDC in 2012 from the International Center for Agricultural Research in the Dry Areas (ICARDA), Syria, where he led and coordinated multinational and international collaborative R&D initiatives with National Agricultural Research Institutes (NARIs) in Africa, Central Asia and the Middle East and Advanced Research Institutions (ARIs) in Australia, Europe and North America, and contributed to the formal release of several high-yielding varieties by national research partners in Africa and Central Asia. Together with university lecturers, he has co-supervised thesis research on wheat improvement and mentored many postgraduate students (MSc and PhD), mostly in Africa, Australia, Central Asia, the Middle East and European countries, for which he received The Jeanie Borlaug Laube Women in Triticum Mentor Award (2012). Prior to starting at ICARDA, Ogbonnaya served as Scientist and Senior Research Scientist and led a key scientific research team within the Department of Primary Industries Victoria, Biosciences Research Division, working on translational research with emphasis on exploiting the primary gene pool of wheat to improve cereal cyst nematode control, pre-harvest sprouting tolerance, salinity tolerance, multiple disease resistance and water-limited yield improvement in wheat. Ogbonnaya obtained his PhD degree in Agricultural Science (Plant Breeding and Genetics) from the University of Melbourne, Australia, and a B.
Agric. Science Honours degree from the University of Nigeria, Nsukka. Ogbonnaya has published over 150 papers in refereed journals, book chapters and peer-reviewed conference papers. Stephen Loss has an Honours degree in Agricultural Science and a PhD in Plant Nutrition from the University of WA. He worked as a crop agronomist with the WA Department of Agriculture for a decade before joining CSBP Fertilisers, where he managed their field trial program and soil and plant testing services for 12 years. In 2012 Stephen joined ICARDA, based in Amman, Jordan, to lead an ACIAR-funded project promoting conservation agriculture in northern Iraq. When the project ended in 2015, Stephen joined GRDC as an R&D Manager, initially in Canberra, and then established their new office in Adelaide. He is currently the Manager of Soils and Nutrition for the southern region. Megan Ryan completed her PhD in Ecology at the Australian National University. In her thesis she compared the growth and nutrition of crops and pastures on organic and conventional farms, with an emphasis on phosphorus and mycorrhizal fungi. Megan then worked at CSIRO Plant Industry in Canberra, where she examined the impact of canola on the growth and nutrition (P, Zn, N) of following cereal crops. Since 2003, Megan has been at the University of Western Australia, where her main research area has been pasture ecology and nutrition. During this time Megan has researched a wide range of topics, including the potential for domestication of Australian native perennial legumes as pasture species, development of phosphorus-efficient pasture systems, and root morphological and physiological adaptations that aid phosphorus uptake in pasture legumes, chickpeas and native plants; she has also continued to work on mycorrhizal fungi. Since 2015 Megan has been an ARC Future Fellow; her project is focused on how plants adapt to fluctuating availability of phosphorus. Other recent grants focus on identification and renovation of highly oestrogenic pastures and improving seed harvest of subterranean clover. Megan is also involved in the newly established ALBA (Annual Legume Breeding Australia) joint venture between the University of Western Australia and the company PGG Wrightson. Daniel Rodriguez is a crop scientist with the Centre for Crop Science at the University of Queensland. He leads the Farming Systems Research Team, and his work focuses on the development and application of quantitative systems modelling approaches in agriculture. He is a leader in the application of these approaches at the crop and whole-farm levels. At the crop level, his work focuses on identifying more profitable and less risky combinations of genetic (G) traits and managements (M) across the multiple environments (E) found in the sub-tropical and tropical summer cropping systems of Australia and Eastern and Southern Africa. At the whole-farm level, he is interested in quantifying benefits and trade-offs from alternative farm business designs, also in Australia and across Eastern and Southern Africa. He was Chief Editor of Agricultural Systems until 2018. John Passioura has a bachelor’s degree in agricultural science (1958) and a PhD in soil chemistry (1963) from Melbourne University, Australia. He currently holds an emeritus appointment at CSIRO Agriculture in Canberra, and was formerly Chief Research Scientist and Leader of the Crop Adaptation Program there.
His research has ranged over soil chemistry and physics (transport of water and nutrients in soil); plant physiology (water relations, drivers of growth rate and adaptation to abiotic stresses); and wheat pre-breeding and agronomy directed at improving the water-limited productivity of dryland crops. He was elected a Fellow of the Australian Academy of Science in 1994. He spent six years on partial secondment to the Australian Grains Research and Development Corporation (GRDC), where he oversaw a portfolio of projects on soil and water management that aimed at improving both the productivity and the environmental performance of Australian grain farms. More recently he has written several reviews relating to crop productivity and the pursuit of effective agricultural research. He has also been a consultant to the CGIAR, having undertaken high-level reviews of several of their programs, existing or prospective. Jairo Palta is an Honorary Research Fellow at CSIRO Agriculture & Food in Perth, Western Australia. He is also Adjunct Research Professor at The University of Western Australia Institute of Agriculture & School of Agriculture and Environment, and Visiting Research Professor at the Institute of Water and Land Conservation, Chinese Academy of Sciences, in Yangling, Shaanxi, China. He completed a PhD in Crop Physiology at La Trobe University, Melbourne, Australia, and conducted postdoctoral research at the Centre for Arid-Zone Studies at the University of Bangor, North Wales, UK, and the Lab of Nuclear Medicine and Radiation Biology of the University of California, Los Angeles (UCLA). He held positions with the International Center for Tropical Agriculture (CIAT) and CSIRO Plant Industry. At CSIRO he was leader of the subprogram “Improving Crop and Pasture Production and Quality” and acting leader of the program “Improvement of Rainfed Crops and Pastures”. He also served as Seconded Scientist for the Cooperative Research Centre for Legumes in Mediterranean Agriculture (CLIMA), and was a member of the Review Panel for the UNEP project “Structure and stability of plant communities in response to drought in East Africa” and of the Environmental Physiology Panel for the Ecological Research Division of the US Department of Energy. He is currently involved in several international research initiatives (the Expert Working Group [EWG] on Adaptation of Wheat to Abiotic Stress, Nutrient Use Efficiency, and the Heat and Drought Wheat Improvement Consortium [HeDWIC]). He is one of the Editors-in-Chief of Field Crops Research, Consulting Editor for Plant and Soil, and Associate Editor for Crop and Pasture Science, Functional Plant Biology and Frontiers in Plant Science. He has published over 160 papers in refereed journals and as book chapters, and is the editor of two books. Peter Hayman is an agricultural scientist who has worked on the application of climate science to farming systems. His focus has been on low rainfall cropping and irrigated viticulture in southern Australia, but he has also been involved in climate risk projects in the Philippines, Cambodia, Sri Lanka and India. In 2004 he was appointed Principal Scientist, Climate Applications, with SARDI; prior to that he was coordinator of climate applications in NSW DPI. He has worked closely with climate scientists, crop modellers, economists and farmers, with a main interest in how the advances of climate science can be communicated and used in decision making. Mariano Cossani is a Senior Research Agronomist with SARDI, working on crop ecophysiology and abiotic stress adaptation.
Mariano has an Agronomy Engineer degree from the University of Buenos Aires, and a Master’s and a PhD from the University of Lleida. His research experience encompasses aspects of resource capture and resource-use efficiency in cereals, and the adaptation of crops to climate change effects such as heat stress and drought. He developed methods based on empirical field data to assess the co-limitation of resources in wheat systems of Mediterranean environments, which proved to be useful for other crops such as canola. He worked for five years at CIMMYT, part of the CGIAR System Organization, where he developed conceptual models for adapting wheat to hot environments through the use of physiological traits, phenotyping and strategic crossing. He has given oral presentations at scientific meetings held in countries across the five continents.
While natural gas remains the most widely used heating fuel in the U.S., it may not be available in every area. Homeowners without access to natural gas lines, or those who prefer a different heating fuel, may turn to electricity to power home heating systems. Despite its high efficiency, electric heating often costs more to run than a conventional gas furnace. When comparing a gas furnace to an electric heater, consider not just the upfront and operating costs of each option, but also how these systems affect the environment. The average cost of a new gas furnace ranges from about $2,300 to $3,000 depending on manufacturer, according to a November 2012 article by Consumer Reports. Qualitysmith.com estimates the cost of a new electric furnace at $1,000 to $1,500 as of December 2012. Electric space heaters can be found for less at most home improvement stores. Although the upfront costs of electric heating may appear lower than those of gas heating, the operating costs of gas heaters are often far lower than those of electric heat. The cost of operating any heating system depends on four factors: the specific fuel, the cost per unit of fuel, the amount of heat supplied per unit of fuel, and the efficiency of the heating system. The amount of heat is measured in British Thermal Units (BTUs), while the efficiency rating is given by the annual fuel utilization efficiency (AFUE). The University of Maine suggests using this formula to estimate heating costs: cost per unit × 1,000,000 BTUs ÷ BTUs per unit ÷ efficiency. This formula gives you the fuel cost of 1,000,000 BTUs of heat. The average house needs 50,000,000 to 150,000,000 BTUs of heat each winter, according to the U.S. Department of Agriculture. Last year, the average residential customer in the U.S. paid 11.7 cents per kilowatt-hour of electricity, according to the U.S. Energy Information Administration (EIA). The EIA also states that each kilowatt-hour of electricity provides about 3,412 BTUs of heat. According to the U.S. Department of Energy, electric furnaces offer AFUE ratings of 95 to 100 percent, and electric space heaters all have AFUE ratings of 100 percent. Assuming an electric heater with 100 percent efficiency, homeowners can expect to pay roughly $34.32 per million BTUs of heat. Natural gas is usually sold in therms, where one therm equals 100,000 BTUs. The EIA states that one therm costs about $1.01 as of November 2012. Gas furnaces in the U.S. require a minimum AFUE of 78 percent, though this figure can go much higher. Assuming a furnace with 78 percent efficiency, consumers can expect to pay around $12.96 per million BTUs of heat. While prices for gas and electricity vary over time and by region, gas furnaces typically cost far less to operate than electric furnaces or heaters. While the cost of purchasing and operating a new furnace usually tops the list of concerns for buyers, it is also important to consider the environmental costs of each of these heating fuel options. Despite the relatively high efficiency of most electric heating units, electric heating is inherently inefficient: according to the U.S. Environmental Protection Agency, most electricity is produced using methods that are only about 30 percent efficient. Furthermore, coal remains one of the primary fuels used to generate electricity. While natural gas production does release greenhouse gas emissions and other pollutants, this fuel burns much cleaner than coal and poses far less harm to the environment, according to the EPA.
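To make the operating-cost arithmetic above easy to reproduce, here is a minimal sketch of the University of Maine formula in Python, using the late-2012 prices quoted in the article; the small differences from the article's $34.32 and $12.96 figures come down to rounding in the source.

    # Heating-cost formula quoted above:
    # cost per million BTU = unit price * 1,000,000 / (BTU per unit * AFUE)

    def cost_per_million_btu(unit_price, btu_per_unit, efficiency):
        """Fuel cost (dollars) of delivering 1,000,000 BTUs of useful heat."""
        return unit_price * 1_000_000 / (btu_per_unit * efficiency)

    electric = cost_per_million_btu(0.117, 3_412, 1.00)  # $/kWh, BTU/kWh, AFUE
    gas = cost_per_million_btu(1.01, 100_000, 0.78)      # $/therm, BTU/therm, AFUE

    print(f"Electric resistance heat: ${electric:.2f} per million BTU")  # ~ $34.29
    print(f"Gas furnace at 78% AFUE:  ${gas:.2f} per million BTU")       # ~ $12.95

    # For a home needing 100,000,000 BTUs per winter (midpoint of the USDA range):
    print(f"Season, electric: ${electric * 100:,.0f}")  # ~ $3,429
    print(f"Season, gas:      ${gas * 100:,.0f}")       # ~ $1,295

Swapping in your own local prices and your furnace's actual AFUE gives a like-for-like comparison for your region.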
In the previous two articles in this series on potash exploitation, we looked at the production of either MOP or SOP from anthropogenic brine pans in modern saline lake settings. Crystals of interest formed in solar evaporation pans and came out of solution as: 1) rafts at the air-brine interface, 2) bottom nucleates or, 3) syndepositional cements precipitated within a few centimetres of the depositional surface. In most cases, periods of more intense precipitation tended to occur during times of brine cooling, either diurnally or seasonally (sylvite, carnallite and halite are prograde salts). All anthropogenic saline pan deposits can be considered primary precipitates with chemistries tied to surface or very near-surface brine chemistry. In contrast, this article discusses ancient potash deposits, where the chemistries and ore textures respond to ongoing alteration processes in the diagenetic realm. Unlike the modern brine pans, where brine chemistries and harvested mineralogies are controllable, at least in part, these ancient deposits show ore purities and distributions related to ongoing natural-process overprints. Table 1 lists some modern and ancient potash deposits and prospects, dividing them into Neogene and pre-Neogene deposits (the listing is extracted and compiled from the SaltWork® database, Version 1.7). The Neogene deposits are associated with a time of MgSO4-enriched seawaters, while the majority of the pre-Neogene deposits straddle times of MgSO4 enrichment and depletion in the ocean waters. Many primary evaporite salts dissolve congruently in the diagenetic realm; i.e., the composition of the solid and the dissolved solute match stoichiometrically, and the dissolving salt goes entirely into solution (Figure 1a). This situation describes the typical subsurface dissolution of anhydrous evaporite salts such as halite or sylvite. However, some evaporite salts, typically hydrated salts such as gypsum or carnallite, dissolve incongruently in the diagenetic realm, whereby the composition of the solute in solution does not match that of the solid (Figure 1b). This solubilisation, or mineralogical alteration, is defined by the transformation of the "primary solid" into a secondary solid phase, typically an anhydrous salt, with the loss of the water formerly held in the lattice structure. The resulting solution generally carries ions away in solution. More than a century ago, van't Hoff (1912) suggested that much subsurface sylvite is the result of incongruent solution of carnallite, yielding sylvite and a Mg-rich solution. According to Braitsch (1971, p. 120), the incongruent alteration (dissolution) of carnallite is perhaps the most crucial process in the alteration of subsurface potash salts and the formation of diagenetic (secondary) sylvite. Typically, a new solid mineral remains, and the related complex solubility equilibrium creates a saline pore water that may, in turn, drive further alteration or dissolution as it leaves the reaction site (Warren, Chapters 2 and 8). Specifically for ancient potash, reaction 2 generates magnesium and chloride in solution and has been used to explain why diagenetic bischofite and dolomite can be found in proximity to newly formed subsurface sylvite. Bischofite is a highly soluble salt and so is metastable in many subsurface settings where incongruent dissolution is deemed to have occurred, including the bischofite thermal pool deposits in the Dallol sump in the Danakhil of Ethiopia (Salty Matters, May 1, 2015).
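The text above cites "reaction 2" without reproducing it. As a hedged reconstruction, consistent with van't Hoff's incongruent solution of carnallite and with the Mg- and Cl-rich effluent the text describes, the reaction can be written:

\[
\mathrm{KCl\cdot MgCl_2\cdot 6H_2O\ (carnallite)} \;\xrightarrow{\ \mathrm{H_2O}\ }\; \mathrm{KCl\ (sylvite)} \;+\; \mathrm{Mg^{2+}_{(aq)}} \;+\; 2\,\mathrm{Cl^-_{(aq)}} \;+\; 6\,\mathrm{H_2O}
\]

The Mg2+ and Cl- carried off in solution are what link newly formed subsurface sylvite to nearby diagenetic bischofite (MgCl2·6H2O), as noted above.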
In many hydrologically active systems, solid-state bischofite is flushed by ongoing brine crossflow and so helps drive the formation of various burial dolomites. Only at high concentrations of MgCl2 can carnallite dissolve without decomposition. In the lab, the decomposition of carnallite in an undersaturated aqueous solution is a well-documented example of incongruent dissolution (Emons and Voigt, 1981; Xia et al., 1993; Hong et al., 1994; Liu et al., 2007; Cheng et al., 2015, 2016). When undersaturated water comes into contact with carnallite, the rhombic carnallite crystals dissolve and, because of the common ion effect, small cubic KCl crystals form in the vicinity of the dissolving carnallite. As time passes, the KCl crystals grow into larger sparry subhedral forms and the carnallite disappears. Carnallite’s crystal structure is built of Mg(H2O)6 octahedra, with the K+ ions situated in the holes of the chloride-ion packing meshwork, a structural configuration similar to perovskite lattice types (Voigt, 2015). Potassium in the carnallite lattice can be substituted by other large monovalent ions such as NH4+, Rb+, Cs+, Li(H2O)+ and (H3O)+, and Cl- by Br- and I-. These substitutions change the lattice symmetry from orthorhombic in the original carnallite to monoclinic. When interpreting the genesis of ancient potash deposits and solutions, this elemental segregation in the lattice means the trace element contents of bromide, rubidium and caesium in primary carnallite versus the sylvite daughter crystals of incongruent dissolution can provide valuable information. For example, in a study by Wardlaw (1968), a trace element model was developed for sylvite derived from carnallite that gave Br and Rb concentration ranges of 0.10–0.90 mg/g and 0.01–0.18 mg/g, respectively. In a later study of sylvite derived by fresh-water leaching of magnesium chloride from carnallite under isothermal conditions at 25 °C, Cheng et al. (2016) defined a model whereby primary sylvite precipitated from MgSO4-deficient seawater gave Br and Rb concentration ranges of 2.89–3.54 mg/g and 0.017–0.02 mg/g, respectively (no evaporation occurred at saturation with KCl). In general, they concluded that sylvite derived incongruently from carnallite would contain less Br and more Rb than primary sylvite (Figure 2; Cheng et al., 2016). The burial-driven mechanism widely cited to explain the incongruent formation of sylvite from carnallite is illustrated in Figure 3 (Koehler et al., 1990). Carnallite precipitating from evaporating seawater at time 1 forms from a solution at 30°C and atmospheric (1 bar) pressure, and so plots as point A, which lies within the carnallite stability field (that is, it sits above the dashed light brown line). With subsequent burial, the pressure increases, so that the line defining the carnallite-sylvite boundary (solid dark brown line) moves to higher values of K. By time 2, when the pressure is at 1 kbar (corresponding to a lithostatic load at 2-3 km depth), the buried carnallite is thermodynamically unstable and so converts to sylvite + solution, as the plot point now lies in the sylvite + solution field (Figure 3). If equilibrium is maintained, the carnallite reacts incongruently to form further sylvite and MgCl2 solution. Thus, provided the temperature does not rise substantially, increasing pressure as a result of burial will favour the breakdown of carnallite to sylvite.
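As an illustration only, the two published trace-element ranges quoted above can be encoded as a toy screening function; the thresholds are exactly the Wardlaw (1968) and Cheng et al. (2016) figures, and real provenance work would weigh far more geological context than two trace elements.

    def sylvite_origin_hint(br_mg_g: float, rb_mg_g: float) -> str:
        """Suggest a sylvite origin from Br and Rb contents (both in mg/g)."""
        # Secondary sylvite (incongruent dissolution of carnallite):
        # Br 0.10-0.90 mg/g, Rb 0.01-0.18 mg/g (Wardlaw, 1968)
        if 0.10 <= br_mg_g <= 0.90 and 0.01 <= rb_mg_g <= 0.18:
            return "consistent with secondary sylvite: low Br, relatively high Rb"
        # Primary sylvite (MgSO4-deficient seawater):
        # Br 2.89-3.54 mg/g, Rb 0.017-0.02 mg/g (Cheng et al., 2016)
        if 2.89 <= br_mg_g <= 3.54 and 0.017 <= rb_mg_g <= 0.02:
            return "consistent with primary marine sylvite: high Br, low Rb"
        return "ambiguous: outside both quoted ranges"

    print(sylvite_origin_hint(0.45, 0.10))   # secondary (ex-carnallite)
    print(sylvite_origin_hint(3.10, 0.018))  # primary marine

The encoded rule of thumb is simply the Cheng et al. (2016) conclusion: incongruently derived sylvite carries less Br and more Rb than primary sylvite.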
However, as burial proceeds, the temperature may become high enough to favour once again the formation of carnallite from sylvite + solution (that is, the solution plot point moves from A toward the right-hand side of the figure and back into the carnallite stability field). Sylvite interpreted to have formed from incongruent dissolution of primary carnallite is reported from the Late Permian Zechstein Formation of Germany (Borchert and Muir, 1964), the Late Permian Salado Formation of New Mexico (Adams, 1970), the Early Mississippian Windsor Group of Nova Scotia (Evans, 1970), the Early Cretaceous Muribeca Formation of Brazil and its equivalents in the Gabon Basin, West Africa (Wardlaw, 1972a, b; Wardlaw and Nicholls, 1972; Szatmari et al., 1979; de Ruiter, 1979), the Late Cretaceous Maha Sarakham Formation of the Khorat Plateau, Thailand and Laos (Hite and Japakasetr, 1979), the Pleistocene Houston Formation, Danakil Depression, Ethiopia (Holwerda and Hutchinson, 1968), and the Middle Devonian Prairie Formation of western Canada (Schwerdtner, 1964; Wardlaw, 1968) (see Table 1). This well-documented literature base supports a long-held notion that there is a problem with sylvite as a primary (first-precipitate) marine bittern salt, especially if the mother seawater had ionic proportions similar to those of modern seawater (see Lowenstein and Spencer, 1990 for an excellent, if 30-year-old, review). We know from numerous evaporation experiments that sylvite does not crystallise during the evaporation of modern seawater at 25°C, except under metastable equilibrium conditions (Braitsch, 1971; Valyashko, 1972; Hardie, 1984). The sequence of bittern salts crystallising from modern seawater was illustrated in the previous Salty Matters article in this series (see Figure 1 in the October 31, 2018 article). Across the literature documenting sylvite-carnallite associations in ancient evaporites, the dilemma of primary versus secondary sylvite is generally solved in one of three ways. Historically, many workers interpreted widespread sylvite as a diagenetic mineral formed by the incongruent dissolution of carnallite (Explanation 1). Then there is the interpretation that some sylvite beds, perhaps associated with tachyhydrite, were precipitated in the evaporite bittern part of a basin hydrology that was fed by CaCl2-rich basinal hydrothermal waters (Explanation 2: see Hardie, 1990 for a good discussion of this mechanism). Then there is the third, and increasingly popular, explanation of primary or syndepositional sylvite at particular times in the chemical evolution of the world's oceans (MgSO4-depleted oceans). Changes in the relative proportions of magnesium, sulphur and calcium in the world's oceans are well supported by the brine inclusion chemistry of co-associated chevron halite (Figure 4). Clearly, there are vast swathes of time in the Earth's past when the chemistry of seawater changed so that MgSO4 levels were lower than today and sylvite could be a primary marine bittern precipitate (see Lowenstein et al., 2014 for an excellent summary). In my opinion, there is good evidence that all three explanations are valid within their relevant geological contexts but, if any one is used exclusively to explain the presence of ancient sylvite, the argument becomes somewhat dogmatic. I would say that, owing to its high solubility, the textures and mineralogical associations of carnallite/sylvite and sulphate bitterns found in ancient potash ore beds reflect varied and evolving origins.
Ambient textures and mineralogies depend on how many times, and how pervasively, during a potash sequence's geological burial history an evolving and reactive pore brine came into contact with parts or all of the highly reactive potash beds (Warren, 2000, 2010, 2016). In my experience, very few ancient examples of economic potash show layered textures indicating primary precipitation on a brine lake floor; instead, most ancient sylvite ores show evidence of at least one episode of alteration. That is, various forms and textures in potash may dissolve, recrystallise and backreact with each other from the time a potash salt is first precipitated until it is extracted. The observed textural and mineralogical evolution of a potash ore association depends on how open the hydrology of the potash system was at various stages during its burial evolution. The alteration can occur syndepositionally, in brine reflux, or later during flushing by compactional or thermobaric subsurface waters, or during re-equilibration tied to uplift and telogenesis. Tectonism (extensional and compactional) during the various stages of a basin's burial evolution acts as a bellows driving fluid flow within a basin, so forcing and speeding up the focused circulation of potash-altering waters. A similar, but somewhat less intense, textural evolution tied to incongruent alteration is seen in the burial history of other variably hydrated evaporite salts. For example, CaSO4 can flip-flop from gypsum to anhydrite and back again depending on temperature, pore fluid salinity and the state of uplift/burial. Likewise, with the more complicated double salt polyhalite, there are mineralogical changes related to whether it formed in an MgSO4-enriched or MgSO4-depleted world ocean and to the associated chemistry of the syndepositional reflux brines across extensive evaporite platforms (for a more detailed discussion of polyhalite see Salty Matters, July 31, 2018). Kainite, kieserite and carnallite also show evidence of ongoing incongruent interactions. This means that, as in gypsum/anhydrite/polyhalite or kainite/kieserite sequences, there will be primary and secondary forms of both carnallite and sylvite that can alternate during deposition, during burial and any deep meteoric flushing, and then again with uplift. In Quaternary brine factories these same incongruent chemical relationships are what facilitate the production of MOP (sylvite) from a carnallitite feed, or SOP from a kainite/kieserite/schoenite feed (see articles 1 and 2 in this series). To document the three end-members of ancient sylvite-carnallite decomposition/precipitation we will look at three examples: 1) Oligocene potash in the Mulhouse Basin, where primary sylvite textures are commonplace; 2) Devonian potash ores in western Canada, where multiple secondary stages of alteration are seen; and 3) igneous-dyke-associated sylvite in east Germany, where thermally-driven volatilisation (incongruent melting) forms sylvite from dehydrated carnallite. From 1904 until 2002, potash was conventionally mined from the Mulhouse Basin in Alsace, France. With an area of 400 km2, the Mulhouse Basin is the southernmost of a number of Lower Oligocene evaporite basins that occupied the upper Rhine Graben, which at that time was a narrow adiabatic-arid rift valley (Figure 6a). The graben was a consequence of the collision between the European and African plates during the Paleogene. 
It is part of a larger intracontinental rift system across Western Europe that extended from the North Sea to the Mediterranean Sea, stretching some 300 km from Frankfurt (Germany) in the north to Basel (Switzerland) in the south, with an average width of 35 km (Cendon et al., 2008). The southern extent of the graben is limited by a system of faults that place Hercynian massifs and Triassic materials in contact with the Paleogene fill. Across the north, a complex system of structures (including salt diapirs) places the basin edges in contact with Triassic, Jurassic and Permian materials. In the region of the evaporite basins, the Paleogene fill of the graben lies directly on the Jurassic basement. The sedimentary fill of this rift sequence is asymmetrical, with the deeper parts located at the southwestern and northeastern sides of the graben (Rouchy, 1997). Palaeogeographical reconstructions place the potential marine seaway seepage feed to the north, or perhaps also southeast, of the Mulhouse Basin, while marginal continental conglomerates tend to preclude any contemporaneous hydrographic connection with Oligocene ocean water (Blanc-Valleron, 1991; Hinsken et al., 2007; Cendon et al., 2008). At the time of its hydrographic isolation, some 34 Ma, the basin was located 40° north of the equator. The total fill of Oligocene lacustrine/marine-fed sediments in the graben is some 1,700 m thick. The saline stage is dominated by anhydrite, halite and mudstone. The main saline sequence is underlain by non-evaporitic Eocene continental mudstones, with lacustrine fossils and local anhydrite beds. Evaporite bed continuity in the northern part of the basin is disturbed by (Permian-salt-cored) diapiric and/or erosional/fault movement. Consequently, these northern basins are not considered suitable for conventional potash mining (Figure 6a). The Paleogene fill of the basin is divided into six units: a pre-evaporitic series, Lower Salt Group (LSG), Middle Salt Group (MSG), Upper Salt Group (USG, with potash), Grey Marls Fm., and the Niederroedern Fm (Figure 7; Cendon et al., 2008). The LSG and lower section of the MSG are interpreted as lacustrine in origin, based on the limited palaeontologic and geochemical data. However, based on the presence of Cenozoic marine nannoplankton, shallow-water benthic foraminifera, and well-diversified dinocyst assemblages in the fossiliferous zone below Salt IV, Blanc-Valleron (1991) favours a marine influence near the top of the MSG, while recognising the ambiguity introduced by admixed brackish faunas. Many marine-seepage-fed brine systems have salinities that allow halotolerant species to flourish in basins with no ongoing marine hydrographic connection (Warren, 2011). According to Blanc-Valleron and Schuler (1997), the region experienced a Mediterranean climate with long dry seasons during deposition of the Salt IV member.
S2 Unit: 11.5 m thick, with distinct layers of organic-rich marls, often dolomitic, with dispersed anhydrite layers.
S1 Unit: 19 m thick, evenly bedded and made up of alternating metre-scale milky (inclusion-rich) halite layers, with much thinner marls and anhydrite layers. Marls show a sub-millimetric lamination formed by micritic carbonate laminae alternating with clay, quartz, and organic-matter-rich laminae. Hofmann et al. (1993a, b) interpreted these couplets as reflecting seasonal variations. 
Anhydrite occasionally displays remnant swallowtail ghost textures, which suggest that at least part of the anhydrite first precipitated as subaqueous gypsum. Halite shows an abundance of growth-aligned primary chevron textures, along with fluid-inclusion banding, suggesting the halite was subaqueous and deposited beneath shallow brine sheets (Lowenstein and Spencer, 1990).
S Unit: 3.7 m thick, consisting of thin marl layers and anhydrite, similar to the S2 Unit, with a few thin millimetric layers of halite.
Mi Unit: 6 m thick, mostly halite with characteristics similar to the S1 Unit. Sylvite was detected in one sample, but its presence is probably related to the evolution of interstitial brines (Cendon et al., 2008).
Ci Unit ("Couche inférieure"): formed by 4 m of alternating marl/anhydrite, halite, and sylvite beds (Figure 7).
The Ti unit consists of alternating beds of halite, marl and anhydrite. The top of the interval is the T unit, which is similar to the S unit and consists of alternating beds of marl and anhydrite. Above this is the Ms or upper marl, near identical to the lower marl Mi. The Ms is overlain by the upper potash bed (Cs), a thinner, but texturally equivalent, bed compared to the sylvinitic Ci unit. Thus, the Oligocene halite section includes two thin, but mined, potash zones: the Couche inférieure (Ci; 3.9 m thick) and the Couche supérieure (Cs; 1.6 m thick), both occurring within Salt IV of the Upper Salt Group (Figures 5, 7). Both potash beds are made up of stacked, thin, parallel-sided cm-dm-thick beds (averaging 8 cm in thickness), which are in turn constructed of couplets composed of grey-coloured halite overlain by red-coloured sylvite (Figure 5b). Each couplet has a sharp base that separates the basal halite from the sylvite cap of the underlying bed. In some cases, the separation is also marked by bituminous partings. The bottom-most halite in each dm-thick bed consists of halite aggregates with cumulate textures that pass upward into large, but delicate, primary chevrons and cornets. Clusters of this chevron halite swell upward to create a cm-scale hummocky boundary with the overlying sylvite (Figure 5c; Lowenstein and Spencer, 1990). The sylvite member of a sylvinite couplet consists of granular aggregates of small transparent halite cubes and rounded grains of red sylvite (with some euhedral sylvite hoppers) infilling the swales in the underlying hummocky halite (Figure 5b, c). The sylvite layer is usually thick enough to bury the highest protuberances of the halite, so that the top of each sylvite layer, and so the top of the couplet, is flat. Dissolution pipes and intercrystalline cavities are noticeably absent, although some chevrons show rounded coigns. Intercalated marker beds, formed during times of brine-pool freshening, are composed of finely laminated bituminous shale, with dolomite and anhydrite. The sylvite-halite couplets record combinations of unaltered settle-out and bottom-nucleated growth features, indicating primary chemical sediment accumulating in shallow perennial brine pools (Lowenstein and Spencer, 1990). Based on the crystal size, the close association of halite with sylvite layers, their lateral continuity and the manner in which sylvite mantles overlie chevron halites, the sylvites are interpreted as primary precipitates. Sylvite first formed at the air-brine surface or within the uppermost brine mass and then sank to the bottom to form well-sorted accumulations. 
As sylvite is a prograde salt, it, like halite, probably grew during times of cooling of the brine mass (Figure 8a). These times of cooling could have been diurnal (day/night) or driven by weather-front-induced changes in the above-brine air temperatures. Similar cumulate sylvite deposits form as ephemeral bottom accumulations on the floor of modern Lake Dabuxum in China during its more saline phases. The subsequent mosaic textural overprint seen in many of the Mulhouse sylvite layers was probably produced by postdepositional modification of the crystal boundaries, much in the same way as mosaic halite is formed by recrystallisation of raft and cumulate halite during shallow burial. Fluid-inclusion temperature studies in both the sylvite and the halites average 63°C, suggesting solar heating of surface brines as precipitation took place (Figure 8b; Lowenstein and Spencer, 1990). Similarly high at-surface brine temperatures are not unusual in many modern brine pools, especially those subject to periodic density stratification and heliothermy (Warren 2016; Chapter 2). Mineralogically, potash evaporites in the Mulhouse Basin in the Rhine Graben (also known as the Alsatian (Alsace) or Wittelsheim potash district) contain sylvite with subordinate carnallite, but lack the abundant MgSO4 salts characteristic of the evaporation of modern seawater. The Rhine Graben formed during the Oligocene via crustal extension related to mantle upwelling. It was, and is, a continental graben typified by high geothermal gradients along its rift axis. In depositional setting, it is not dissimilar to the pre-120,000-year potash fill stage in the Quaternary Danakil Basin, or to the Dead Sea during deposition of potash salts in the Pliocene Sedom Fm. The role of a high-temperature geothermal inflow in defining the CaCl2 nature of the potash-precipitating brines, versus a derivation from an MgSO4-depleted marine feed, is considered significant in the Rhine Graben deposits, but is poorly understood and still not resolved (Hardie, 1990; Cendon et al., 2008). World ocean chemistry in the Oligocene sits on a shoulder between the MgSO4-depleted, CaCl2-rich oceans of the Cretaceous and the MgSO4-enriched oceans of the Neogene (Figure 4). Cendon et al. (2008) conclude brine reaction processes were the most important factors controlling the major-ion (Mg, Ca, Na, K, SO4, and Cl) evolution of Mulhouse brines (Figure 9a-d). A combined analysis of fluid inclusions in primary textures by Cryo-SEM-EDS, together with sulphate-d34S, d18O and 87Sr/86Sr isotope ratios, revealed likely hydrothermal inputs and recycling of Permian evaporites, particularly during the more advanced stages of evaporation that laid down the Salt IV member. Bromine levels imply an increasingly concentrated brine at that time (Figure 9a). The lower part of the Salt IV (S2 and S1) likely evolved from an initial marine input (Figure 9b-d). Throughout, the basin was disconnected from direct marine hydrographic connection and was one of a series of sub-basins formed in an active rift setting, where tectonic variations influenced sub-basin interconnections and the chemical signatures of input waters. Sulphate-d34S shows Oligocene marine-like signatures at the base of the Salt IV member (Figure 9c, d). However, enriched sulphate-d18O reveals the importance of synchronous re-oxidation processes. As evaporation progressed, other non-marine or marine-modified inputs from neighbouring basins became more important. 
This is demonstrated by increases in K concentrations in brine inclusions and Br in halite, by sulphate isotope trends, and by 87Sr/86Sr ratios (Figure 9b, c). The recycling (dissolution) of previously precipitated evaporites of Permian age became increasingly important with ongoing evaporation. In combination, this chemistry supports the notion of a connection of the Mulhouse Basin with basins situated to its north. The brine evolution eventually reached sylvite precipitation. Hence, the chemical signature of the resulting brines is not fully compatible with global seawater chemistry changes. Instead, the potash phase is tied to a hybrid inflow, with a significant but decreasing marine input. There was likely an initial marine source, but this occurred within a series of rift-valley basin depressions for which there was no direct hydrographic connection to the open ocean, even at the time the potash-entraining Middle Salt Member was first deposited (Cendon et al., 2008). That is, the general hydrological evolution of the primary-textured evaporites in the Mulhouse basin sump is better explained as a restricted sub-basin with an initial marine-seepage stage. This gradually changed to an ≈40% marine source near the beginning of evaporite precipitation, with the rest of the hydrological input being non-marine. There was a significant contribution of solutes from recycled, in part diapiric, Permian evaporites, likely remobilised by the tectonics driving the formation of the rift valley (Hinsken et al., 2007; Cendon et al., 2008). The general proportion of solutes did not change substantially over the time of evaporite precipitation. However, as basin restriction increased, the formerly marine inputs changed to continental, diapiric or marine-modified inputs, perhaps fed from neighbouring basins north of the Mulhouse Basin. As in the Ethiopian Danakil potash rift, it is likely brine interactions occurred both during initial deposition and during early post-depositional reflux overprinting of the original potash salt beds. The Middle Devonian (Givetian) Prairie Evaporite Formation is a widespread potash-entraining halite sequence deposited in the Elk Point Basin, an early intracratonic phase of the Western Canada Sedimentary Basin (WCSB; Chipley and Kyser, 1989). Today, it is the world's predominant source of MOP fertiliser (Warren, 2016). The flexure that formed the basin and its subsealevel accommodation space was a distal downwarp to, and driven by, the early stages of the Antler Orogeny (Root, 2001). Texturally and geochemically, the potash layers in the basin show the effects of multiple alterations and replacements of the potash minerals, especially interactions between sylvite and carnallite in a variably recrystallised halite host. Regionally, halite constitutes a large portion of the four formations that make up the Devonian Elk Point Group (Figure 10): 1) the Lotsberg (Lower and Upper Lotsberg Salt), 2) the Cold Lake (Cold Lake Salt), 3) the Prairie Evaporite (Whitkow and Leofnard Salt), and 4) the Dawson Bay (Hubbard Evaporite). Today the remnants of the Middle Devonian Prairie Evaporite Formation constitute a bedded unit some 220 metres thick, which lies atop the irregular topography of the platform carbonates of the Winnipegosis Fm. Extensive solutioning of the various salts has given rise to an irregular thickness in the formation and the local absence of salt (Figure 11a). 
The Elk Point Group was deposited within what is termed the Middle Devonian "Elk Point Seaway," a broad intracratonic sag basin extending from North Dakota and northeastern Montana at its southern extent, north through southwestern Manitoba, southern and central Saskatchewan, and eastern to northern Alberta (Figure 11a). Its Pacific coast was near the present Alberta-British Columbia border, and the basin was centred at approximately 10°S latitude. To the north and west the basin was bounded by a series of tectonic ridges and arches; but, due to subsequent erosion, the true eastern extent is unknown (Mossop and Shetsen, 1994). In northern Alberta, the Prairie Evaporite is correlated with the Muskeg and Presqu'ile formations (Rogers and Pratt, 2017). Hydrographic isolation of the intracratonic basin from its marine connection resulted in the deposition of a drawndown sequence of basinwide (platform-dominant) evaporites with a uniquely high volume of preserved potash salts deposited within a clayey halite host. The potash resource in this basin far exceeds that of any other known potash basin in the world. Potash deposits mined in Saskatchewan are all found within the upper 60-70 m of the Prairie Evaporite Formation, at depths ranging from more than 400 m down to 2750 m beneath the surface of the Saskatchewan Plains. Within the Prairie Evaporite there are four main potash-bearing members; in ascending stratigraphic order they are the Esterhazy, White Bear, Belle Plaine and Patience Lake members (Figure 11b). Each member is composed of various combinations of halite, sylvite, sylvinite and carnallitite, with occurrences of sylvite versus carnallite reliably definable using wireline signatures (once the wireline log is calibrated to core or mine control; Figure 12; Fuzesy, 1982). The Patience Lake Member is the uppermost Prairie Evaporite member and is separated from the Belle Plaine by 3-12 m of barren halite (Holter, 1972). Its thickness ranges from 0-21 m and averages 12 m; its top 7-14 m is made up of halite with clay bands and stratiform sylvite. This is the targeted ore unit in conventional mines in the Saskatoon and Lanigan areas and is the solution-mined target, along with the underlying Belle Plaine Member, at the Mosaic Belle Plaine potash facility. The Belle Plaine Member is separated from the Esterhazy by the White Bear marker beds, made up of some 15 m of low-grade halite, clay seams and sylvinite. The Belle Plaine Member is more carnallite-prone than the Patience Lake Member (Figure 12). It is the ore unit in the conventional mines at Rocanville and Esterhazy (Figure 11b), where its thickness ranges from 0-18 m and averages around 9 m. Overall, the Prairie Evaporite Formation does not contain any significant MgSO4 minerals (kieserite, polyhalite etc.), although some members do contain abundant carnallite. This mineralogy indicates precipitation from a Devonian seawater/brine chemistry somewhat different from today's, with inherently lower relative proportions of sulphate and lower Mg/Ca ratios (Figure 4). The Prairie Evaporite Fm. is nonhalokinetic throughout the basin; it is more than 200 m thick in the potash mining district near Saskatoon and 140 m thick in the Rocanville area to the southeast (Figure 11a; Yang et al., 2009). The Patience Lake Member is the main target for conventional mining near Saskatoon. The Esterhazy potash member rises close to the surface in the southeastern part of Saskatchewan near Rocanville and on into Manitoba. 
This is a region where the Patience Lake Member is thinner or completely dissolved (Figure 11b). Over the area of mineable interest in the Patience Lake Member, centred on Saskatoon, the ore bed currently slopes downward only slightly in a westerly direction, but deepens more strongly to the south at a rate of 3-9 m/km. Mines near Saskatoon are at depths approaching a kilometre and so are nearing the limits of currently economic shaft mining. The main shaft for the Colonsay Mine, which took IMC Global Inc. more than five years to complete through a water-saturated sediment column, finally reached the target ore body at a depth of 960 metres. Such depths, and a southerly dip to the ore, mean that the conventional shaft mines near Saskatoon define a narrow WNW-ESE band of conventional mining activity (Figure 11c). To the south, potash is recovered from greater depths by solution mining; for example, the Belle Plaine operation leaches potash from the Belle Plaine Member at a depth of 1800 m. The Prairie Evaporite typically thins southwards in the basin, although local thickening occurs where carnallite, not sylvite, is the dominant potash mineral (Worsley and Fuzesy, 1979). The Patience Lake Member is mined at the Cory, Allan and Lanigan mines, and the Esterhazy Member is mined in the Rocanville area (Figure 11c). Ore mined from the 2.4 m thick Esterhazy Member in eastern Saskatchewan contains minimal amounts of insolubles (≈1%), but considerable quantities of carnallite (typically 1%, but up to 10%), and this reduces the KCl grade to an average of 25% K2O. The converse is true for ore mined from the Patience Lake potash member in western Saskatchewan near Saskatoon, where carnallite is uncommon in the Cory and Allan mines. The mined ore thickness is a 2.74-3.35 metre cut near the top of the 3.66-4.57 metre Patience Lake potash member. Ore grade is 20-26% K2O and is inversely related to thickness (Figure 12). The insoluble content is 4-7%, mostly clay, and markedly higher than in the Rocanville mines. A typical sylvinite ore zone in the Patience Lake Member can be divided into four to six units, based on potash rock-types and clay seams (Figures 12, 13a; M1-M6 of Boys, 1990). Units are mappable and have been correlated throughout the PCS Cory Mine with varying degrees of success, dependent on partial or complete loss of section through dissolution. Potash deposition appears to have been early and related to short-term brine seaway cooling and syndepositional brine reflux, so the potash layering (M1-M6) is cyclic, expressed in the repetitive distribution of hematite and other insoluble minerals (Figure 13). Desiccation polygons, desiccation cracks, subvertical microkarst pits and chevron halite crystals indicate that the Patience Lake Member that encompasses the potash ore was deposited in, and just beneath, a shallow-brine salt-pan environment (Figure 13b; Boys, 1990; Lowenstein and Spencer, 1990; Brodylo and Spencer, 1987; pers. obs.). Clay seams form characteristic thin stratigraphic segregations throughout the potash ore zone(s) of the Prairie Evaporite, as well as disseminated intervals, and constitute about 6% of the ore as mined. For example, the insoluble minerals found in the PCS Cory samples are, in approximate order of decreasing abundance: dolomite, clay [illite, chlorite (including swelling-chlorite/chlorite) and septechlorite], quartz, anhydrite, hematite, and goethite. 
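An aside on the grade figures quoted above: potash ore grades are conventionally reported as %K2O equivalent, although the mineral actually recovered is KCl (sylvite). The conversion is a fixed stoichiometric ratio of molar masses; the sketch below uses standard molar-mass values, and the function names are mine, chosen for illustration.

```python
# Stoichiometric conversion between KCl (sylvite) grade and its K2O
# equivalent, as used in the grade figures quoted above. Molar masses:
# K2O = 94.20 g/mol, KCl = 74.55 g/mol; 2 mol of KCl carry the K of 1 mol K2O.
M_K2O = 94.20
M_KCL = 74.55
KCL_TO_K2O = M_K2O / (2 * M_KCL)   # ~0.632

def kcl_to_k2o(pct_kcl: float) -> float:
    """Convert a KCl weight percent to its K2O-equivalent weight percent."""
    return pct_kcl * KCL_TO_K2O

def k2o_to_kcl(pct_k2o: float) -> float:
    """Convert a K2O-equivalent weight percent back to KCl weight percent."""
    return pct_k2o / KCL_TO_K2O

if __name__ == "__main__":
    # A 25% K2O ore (the Esterhazy average) corresponds to ~39.6% KCl;
    # the 20-26% K2O range (Patience Lake) spans roughly 31.7-41.2% KCl.
    print(f"25% K2O -> {k2o_to_kcl(25.0):.1f}% KCl")
    print(f"20% K2O -> {k2o_to_kcl(20.0):.1f}% KCl")
    print(f"26% K2O -> {k2o_to_kcl(26.0):.1f}% KCl")
```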
As for the insolubles themselves, clay minerals make up about one-third of the total; other minor components include potassium feldspar, hydrocarbons, and sporadic non-diagnostic palynomorphs (Figure 13; Boys, 1990). In all mines, the clays tend to occur as long continuous seams or marker layers between the potash zones and are mainly composed of detrital chlorite and illite, along with authigenic septechlorite, montmorillonite and sepiolite (Mossman et al., 1982; Boys, 1990). Of the two chlorite minerals, septechlorite is the more thermally stable. The septechlorite, sepiolite and vermiculite very likely originated as direct products of settle-out, syndepositional dissolution or early diagenesis under hypersaline conditions, from a precursor that was initially eolian dust settling to the bottom of a vast brine seaway. The absence of the otherwise ubiquitous septechlorite from the Second Red Beds west of the zero-edge of the evaporite basin supports this concept (Figures 9, 10). Texturally, at the cm-scale, potash salt beds in the Prairie Evaporite (both carnallitite and sylvinite) lack the lateral continuity seen in primary potash textures in the Oligocene of the Mulhouse Basin (Figure 14). Prairie potash probably first formed as syndepositional secondary precipitates and alteration products at very shallow depths just beneath the sediment surface. These early prograde precipitates were then modified to varying degrees by ongoing fluid flushing in the shallow burial environment. The cyclic depositional distribution of disseminated insolubles, as in the clay marker beds, was possibly due to a combination of source proximity, periodic enrichment during times of brine freshening, and the strengthening of the winds blowing detritals out over the brine seaway. Possible intra-potash disconformities, created by dissolution of overlying potash-bearing salt beds, are indicated by an abundance of residual hematite in clay seams, some of which cut subvertically into the potash bed. Except in, and near, dissolution levels and collapse features, the subsequent redistribution of insolubles, other than iron oxides, is not significant. Halite-sylvite (sylvinite) rocks in the Prairie Evaporite ore zones generally show two end-member textures: 1) the most common is a recrystallised polygonal mosaic texture, with individual crystals ranging from millimetres to centimetres and sylvite grain boundaries outlined by concentrations of blood-red halite (Figure 14a); 2) the other end-member texture is a framework of euhedral and subhedral halite cubes enclosed by anhedral crystals of sylvite (Figure 14b). This is very similar to ore textures in the Salado Formation of New Mexico interpreted as early passive precipitates in karstic voids. Petrographically, the halite-carnallite (carnallitite) rocks display three distinct textures: 1) most halite-carnallite rocks contain isolated centimetre-sized cube mosaics of halite enclosed by poikilitic carnallite crystals (Figure 14c); individual halite cubes are typically clear, with occasional cloudy crystal cores that retain patches of syndepositional growth textures (Lowenstein and Spencer, 1990); 2) the second texture is coarsely crystalline halite-carnallite with equigranular, polygonal mosaic textures; 3) in zones where halite overlies bedded anhydrite, most of the halite is clear, with only the occasional crystal showing fluid-inclusion banding. 
Bedded halite away from the ore zones generally retains a higher proportion of primary depositional textures typical of halite precipitation in shallow ephemeral saline pans (Figure 14d; Brodylo and Spencer, 1987). Crystalline growth fabrics, mainly remnants of vertically-elongate halite chevrons, are found in 50-90% of the halite from many intervals in the Prairie Evaporite. Many of the chevrons are truncated by irregular patches of clear halite that formed as early diagenetic cements in syndepositional karst. In contrast, the halite hosting the potash ore layers lacks well-defined primary textures and is dominated by intergrown mosaics. From the regional petrology and the lower-than-expected Br levels in halite in the Prairie Evaporite Formation, Schwerdtner (1964), Wardlaw and Watson (1966) and Wardlaw (1968) postulated a series of recrystallisation events forming sylvite after carnallite as a result of periodic flushing by hypersaline solutions. This origin as a secondary precipitate (via incongruent dissolution) is supported by observations of intergrowth and overgrowth textures (McIntosh and Wardlaw, 1968), collapse and dissolution features of various scales and timings (Gendzwill, 1978; Warren, 2017), radiometric ages (Baadsgaard, 1987) and palaeomagnetic orientations of the diagenetic hematite linings associated with the emplacement of the potash (Koehler, 1997; Koehler et al., 1997). Dating of clear halite crystals in void fills within the ore levels shows that some of the exceptionally coarse and pure secondary halites forming pods in the mined potash horizons likely precipitated during early burial, while other sparry halite void fills formed as late as the Pliocene-Pleistocene (Baadsgaard, 1987). Even today, alteration and remobilisation of the sylvite and carnallite, and the local precipitation of bischofite, are ongoing processes related to the encroachment of the contemporary dissolution edge or to the ongoing stoping of chimneys fed by deep artesian circulation (pers. obs.). Fluid-inclusion studies support the notion of primary textures (low formation temperatures in chevron halite in the Prairie Evaporite and an associated thermal separation of non-sylvite- and sylvite-associated halite; Figure 15; Chipley et al., 1990). Most fluid inclusions found in primary, fluid-inclusion-banded halite associated with the Prairie potash salts contain sylvite daughter crystals at room temperature or nucleate them on cooling (e.g. halite at 915 and 945 m depth in the Winsal Osler well; Lowenstein and Spencer, 1990). In contrast, no sylvite daughter crystals have been observed in fluid inclusions outlining primary growth textures in chevron halites away from the potash deposits. The data illustrated in Figure 15 clearly show that inclusion temperatures in primary halite chevrons are cooler than those in halites collected from intervals nearer the potash levels. Sylvite daughter crystal dissolution temperatures from fluid inclusions in the cloudy centres of halite crystals associated with potash salts are generally warmer (Brodylo and Spencer, 1987; Lowenstein and Spencer, 1990). Sylvite and carnallite daughter crystal dissolution temperatures from fluid inclusions in fluid-inclusion-banded halite from bedded halite-carnallite are the hottest. This mineralogically-related temperature schism establishes that potash salts occur in stratigraphic intervals in the halite where syndepositional surface brines were warmer. 
In the 50-70°C temperature range there could be overlap with heliothermal brine lake waters. Even so, these warmer potash temperatures imply the parent brines were likely moving via a shallow reflux drive and are not the result of primary bottom nucleation (in contrast to primary sylvite in the Mulhouse Basin). Whether the initial Prairie reflux potash precipitate was sylvite or carnallite is open to interpretation (Lowenstein and Hardie, 1990). Analysis of subsurface waters from various Canadian potash mines and collapse anomalies in the Prairie Evaporite suggests that, after initial potash precipitation, a series of recrystallising fluids accessed the evaporite levels at multiple times throughout the burial history of the Prairie Formation (Chipley, 1995; Koehler, 1997; Koehler et al., 1997). Likewise, the isotope systematics and K-Ar ages of sylvite in both halite and sylvite layers indicate that the Prairie Evaporite was variably recrystallised during fluid overprint events (Table 2; Figure 16). These event ages are all younger than original deposition (≈390 Ma) and likely correspond to the ages of various tectonic events that influenced subsurface hydrology along the western margin of North America. This notion of ongoing fluid-rock interaction controlling the chemistry of mine waters is supported by dD and d18O values of inclusion fluids in both halite and sylvite, which range from -146 to 0‰ and from -17.6 to -3.0‰, respectively (Figure 18). Most of the preserved isotope values differ from those of evaporated seawater, which should have dD and d18O values near 0‰. Furthermore, the dD and d18O values of the inclusion fluids are probably not the result of precipitation of the evaporite minerals from a brine that was a mixture of seawater and meteoric water. The low-latitude position of the basin during the Middle Devonian (10-15° from the equator), the required lack of meteoric water to precipitate basinwide evaporites, and the expected dD and d18O values of any meteoric water in such a setting make this an unlikely explanation. Rather, the dD and d18O values of inclusion fluids in the halites reflect ambient and evolving brine chemistries as the fluids in inclusions in the various growth layers were intermittently trapped during the subsurface evolution of the Prairie Formation in the Western Canada Sedimentary Basin. They also suggest that periodic migration of nonmarine subsurface water was a significant component of the crossflowing basinal brines throughout much of the recrystallisation history (Chipley, 1995). Ongoing alteration of carnallite to sylvite, and the reverse reaction, means a sylvite-carnallite bed must be capable of gaining or losing fluid at the time of alteration. That is, any reacting potash bed must be permeable at the time of the alteration. By definition, there must be fluid egress to drive the incongruent alteration of carnallite to sylvite, or fluid ingress to drive the alteration of sylvite to carnallite. There can also be situations in the subsurface where the volume of undersaturated fluid crossflow was sufficient to remove (dissolve) significant quantities of the more soluble evaporite salts. Many authors looking at the Prairie Evaporite argue that the fluid-access events during the alteration of carnallite to sylvite or the reverse, or the complete leaching of the soluble potash salts, were driven by various tectonic events (Figure 16). 
In the early stages of burial alteration (a few tens of metres below the landsurface), the same alteration processes can be driven by varying combinations of brine reflux, prograde precipitation and syndepositional karstification, all driven by changes in brine level and climate, which in turn may not be related to tectonism (Warren, 2016; Chapters 2 and 8 for details). In the potash areas of the Western Canada Sedimentary Basin, the notion of 10-100 km lateral continuity is a commonly stated precept for both sylvinite and carnallitite units across the extent of the Prairie Evaporite. But when the actual distribution and scale of units are mapped from mined intercepts, there are numerous 10-100 metre-scale discontinuities (anomalies) present, indicating fluid ingress or egress (Warren, 2017). Sometimes ore beds thin and alteration degrades the ore level (Section A-A1-A2); at other times these discontinuities can locally enrich sylvite ore grade (B-B1; Figure 19). Discontinuities or salt anomalies are much more widespread in the Prairie Evaporite than is acknowledged in much of the potash literature (Figure 19). Mining experience in maintaining ore grade shows that unexpectedly intersecting an anomaly in a sylvite ore zone can have outcomes ranging from the inconsequential to the catastrophic, in part because there is more than one type of salt anomaly or "salt horse" (Warren, 2017). Figure 20 summarises what are considered the three most common styles of salt anomaly in the sylvite ore beds of the Prairie Evaporite, namely: 1) washouts, 2) leach anomalies, and 3) collapse anomalies. These ore bed disturbances and their occurrence styles are in part time-related. Washouts are typically early (eogenetic) and defined as "salt-filled V- or U-shaped structures, which transect the normal bedded sequence and obliterate the stratigraphy" (Figure 20a; Mackintosh and McVittie, 1983, p. 60). They are typically enriched in, or filled by, insoluble materials in their lower one-third and medium- to coarse-grained sparry halite in the upper two-thirds. Up to several metres across, when traced laterally they typically pass into halite-cemented paleo-sinks and cavern networks (e.g. Figure 20b). Most washouts likely formed penecontemporaneously with the potash beds they transect; that is, they are preserved examples of synkarst, with infilling of the karst void by slightly later halite cement. They indicate watertable lowering in a potash-rich saline sump. This leaching was followed soon after by a period of higher watertables and brine saturations, when halite cements occluded the washouts and palaeocaverns. Modern examples of this process typify the edges of subcropping and contemporary evaporite beds, as around the recently exposed edges of the modern Dead Sea. As such, washouts tend to indicate relatively early interactions of the potash interval with undersaturated waters; they may even be part of the syndepositional remobilisation hydrology that focused, and locally enriched, the potash ore levels. In a leach anomaly zone, the stratabound sylvinite ore zone has been wholly or partially replaced by barren halite, without significant disturbance of the normal stratigraphic sequence (clay marker beds), which tends to continue across the anomaly (Figure 20b). Some loss of volume or local thinning of the stratigraphy is typical in this type of salt anomaly. Typically saucer-shaped, leach anomalies have diameters ranging from a few metres up to 400 m. Less often, they can be linear features up to 20 m wide and 1600 m long. 
Leach zones can form penecontemporaneously, due to depressions and back-reactions in the ore beds, or later, by low-energy infiltration of Na-saturated, K-undersaturated brines. The latter mode of formation is also likely on the margins of collapse zones, creating a hybrid situation typically classified as a leach-collapse anomaly (Mackintosh and McVittie, 1983; McIntosh and Wardlaw, 1968). Of the three types of salt anomaly illustrated, leach-zone processes are the least understood. Historically, when incongruent dissolution was the widely accepted interpretation for loss of unit thickness in the Prairie Evaporite, many leach anomalies were considered metasomatic. Much of the original metasomatic interpretation was based on decades of detailed work in the various salt mines of the German Zechstein Basin. There, in a pervasively halokinetic terrane, evaporite textures were considered more akin to metamorphic rocks, and the term metasomatic alteration was commonly used when explaining leach anomalies (Borchert and Muir, 1964; Braitsch, 1971). In the past two decades, general observations of the preservation of primary chevron halite in most bedded evaporites away from the potash layers in the Prairie Evaporite have led to reduced use of notions of widespread metamorphic-like metasomatic or solid-state alteration processes in bedded evaporites. There is simply too much preserved primary texture in the bedded salt units adjacent to potash beds to invoke pervasive burial metasomatism of the Prairie Evaporite. Collapse zones in the Prairie Evaporite are characterised by a loss of recognisable sylvinite ore strata, which are replaced by less-saline brecciated, recemented and recrystallised material, with the breccia blocks typically made of the intrasalt or roof lithologies (Figure 20c); thus angular fragments of the Second Red Beds and dolostones of the Dawson Bay Formation are the most conspicuous components of the collapse features in the Western Canada Sedimentary Basin. Where ore dissolution is well developed, all the halite can dissolve, along with the potash salts, and the overlying strata collapse into the cavity (these are classic solution-collapse features). Transitional leached zones typically separate the collapsed core from normal bedded potash. Such collapse structures indicate a breach of the ore layers by unsaturated waters, fed either from below or above. For example, in the Western Canada Sedimentary Basin, well-developed collapse structures tend to occur over the edges and tops of Devonian mud mounds, while in the New Mexico potash zone the collapse zones are related to highs in the underlying Capitan reef trend (Warren, 2017). Leaching fluids may have come from below or above to form collapse structures at any time after initial deposition. When connected to a water source, these are the subsurface features that, when intersected, can quickly move a mining operation out of the salt into an adjacent aquifer system, a transition that led to flooding in most of the lost-mine operations listed earlier. Identifying, at the mine scale, the set of processes that created a salt anomaly in a sylvite bed also has implications for mine stability, whatever decision is made on how to deal with the anomaly as part of the ongoing mine operation (Warren, 2016, 2017). Syndepositional karst fills and leach anomalies are the least likely to be problematic if penetrated during mining, as the aquifer system that formed them is likely no longer active. 
In contrast, penetration or removal of the region around a salt-depleted collapse breccia may lead to uncontrollable water inflows and ultimately to the loss of the mine. Unfortunately, in terms of production planning, the features of the periphery of a leach anomaly can be similar, if not identical, to those in the alteration halo that typically forms about the leached edge of a collapse zone. The processes of sylvite recrystallisation that define the edge of a collapse anomaly can lead to local enrichment in sylvite levels, making these zones surrounding the collapse core attractive extraction targets (Boys 1990, 1993). Boundaries of any alteration halo about a collapse centre are not concentric, but irregular, making the prediction of a feature's geometry challenging, if not impossible. The safest course of action is to avoid mining salt anomalies, but longwall techniques make this difficult, and so anomalies must be identified and dealt with (see Warren, 2017). In addition to 1) primary sylvite and 2) sylvite/carnallite alteration via incongruent transformation in burial, there is a third mode of sylvite formation related to 3) igneous heating driving devolatilisation of carnallitite, which can perhaps be considered a form of incongruent melting (Warren, 2016). And so, at a local scale (measured in metres to tens of metres) in potash beds cut by igneous intrusions, there are a number of documented thermally-driven alteration styles and thermal haloes. Most are created by the intrusion of hot doleritic or basaltic dykes and sills into cooler salt masses, or by the outflow of extrusive igneous flows over cooler salt beds (Knipping, 1989; Grishina et al., 1992, 1998; Gutsche, 1988; Steinmann et al., 1999; Wall et al., 2010). Hot igneous material interacts with somewhat cooler anhydrous salt masses to create narrow, but distinct, heat and mobile fluid-release envelopes, also reflected in the resulting salt textures. At times, relatively rapid magma emplacement can lead to linear breakout trends outlined by phreatomagmatic explosion craters, as imaged in portions of the North Sea (Wall et al., 2010) and the Danakil/Dallol potash beds in Ethiopia (Salty Matters, May 1, 2015). Based on studies of inclusion chemistry and homogenisation temperatures in fluid inclusions in bedded halite near intrusives, it seems that the extent of the influence of a dolerite sill or dyke in bedded salt is marked by fluid-inclusion migration, evidenced by the disappearance of chevron structures and the consequent formation of clear halite with a different set of higher-temperature inclusions. Such a migration envelope is well documented in bedded Cambrian halites intruded by end-Permian dolerite dykes in the Tunguska region of Siberia (Grishina et al., 1992). Defining h as the thickness of the dolerite intrusion in these salt beds, and d as the distance of the halite from the edge of the intrusion, the disappearance of chevrons occurs at greater distances above than below the intrusive sill. For d/h < 0.9 below the intrusion, CaCl2, KCl and nCaCl2·mMgCl2 solids occur in association with water-free and liquid-CO2 inclusions, with H2S, COS and orthorhombic or glassy S8. For d/h of 0.2-2 above the intrusion, H2S-bearing liquid-CO2 inclusions are typical, with various amounts of water. Thus, as a rule of thumb, an alteration halo extends up to twice the thickness of the dolerite sill above the sill and almost the thickness of the sill below (Figure 22). 
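The rule of thumb just stated is easy to put to work when screening drill intercepts near known intrusives. The sketch below simply encodes the d/h thresholds quoted from Grishina et al. (1992); treating them as sharp cutoffs is my simplification, as the real envelope is gradational, and the function names are mine.

```python
# Rule-of-thumb envelope for chevron-texture resetting around a dolerite
# sill in bedded halite (thresholds as quoted from Grishina et al., 1992):
# chevrons disappear out to roughly d/h = 2 above the sill and d/h = 0.9
# below it, where h is sill thickness and d is distance from the contact.

def alteration_envelope(sill_thickness_m: float) -> dict:
    """Approximate vertical extent (m) of the inclusion-migration halo
    above and below a sill of the given thickness."""
    return {
        "above_m": 2.0 * sill_thickness_m,   # d/h up to ~2 above the sill
        "below_m": 0.9 * sill_thickness_m,   # d/h up to ~0.9 below the sill
    }

def chevrons_reset(d_m: float, h_m: float, above: bool) -> bool:
    """True if primary chevron halite at distance d from the contact is
    expected to be recrystallised to clear halite."""
    limit = 2.0 if above else 0.9
    return (d_m / h_m) < limit

if __name__ == "__main__":
    # For a 10 m sill: chevrons reset up to ~20 m above and ~9 m below.
    print(alteration_envelope(10.0))
    print(chevrons_reset(d_m=15.0, h_m=10.0, above=True))    # True
    print(chevrons_reset(d_m=15.0, h_m=10.0, above=False))   # False
```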
In a series of autoclave laboratory experiments, Fabricius and Rose-Hansen (1988) found that: 1) at atmospheric pressure, carnallite melts incongruently to sylvite and hydrated MgCl2 at a temperature of 167.5°C; and 2) the melting/transformation temperature increases to values in excess of 180°C as pressure increases (Figure 23). A similar situation occurs in the dyke-intruded halite levels exposed in the mines of the Werra-Fulda district of Germany (Steinmann et al., 1999; Schofield et al., 2014). There, the Herfa-Neurode potash mine is located in the Werra-Fulda Basin in the Hessian district of central Germany (Figure 24a). The targeted ore levels consist of the carnallite-rich Kaliflöz Hessen (K1H) and Kaliflöz Thüringen (K1Th) intervals, which form part of the Zechstein 1 (Z1) bedded Werra salt succession (Warren, 2016). In the mine, the K1H and K1Th units range in thickness from 2 m to 10 m, are generally subhorizontal, and occur at a depth of 650-710 m below the present-day surface. In the later Tertiary, basaltic melts intruded these Zechstein evaporites as numerous sub-vertical dykes, but only a few dykes attained the Miocene landsurface. Basaltic melt production was related to regional volcanic activity some 10 to 25 Ma. Basalts exposed in the mine walls, where they cut non-hydrous units of halite or anhydrite, are typically subvertical dykes, rather than subhorizontal sills. The basalts are phonolitic tephrites, limburgites, basanites and olivine nephelinites. Dyke margins are usually vitrified, forming a microlitic limburgite glass along dyke edges in contact with halite (Figure 24b; Knipping, 1989). At the contact on the evaporite side of the glassy rim, there is a cm-wide carapace of high-temperature salts (mostly anhydrite and ferroan carbonates). Further out, the effect of the high-temperature envelope is denoted by transitions to clear halite with higher-temperature fluid inclusions (Knipping, 1989). All of this metre-scale alteration constitutes an anhydrous alteration halo; the halite did not melt (its melting temperature is 804°C) and, rather than being introduced by migrating brines, the fluid driving recrystallisation was mostly sourced from entrained brine/gas inclusions. The dolerite/basalt interior of the dyke is likewise altered and salt-soaked, with clear, largely inclusion-free halite typically filling vesicles in the basalt. Heating of hydrated (carnallitic) salt layers adjacent to a dyke or sill tends to drive off the water of crystallisation (chemical or hydration thixotropy) at much lower temperatures than those at which anhydrous salts, such as halite or anhydrite, thermally melt (Figure 24c; Table 3). For example, in the Fulda region, the thermally-driven release of water of crystallisation within carnallitic beds creates thixotropic or subsurface "peperite" textures as carnallitite alters to sylvinite layers. These are layers where heated water of crystallisation escaped from the hydrated-salt lattice. Dehydration-driven loss of mechanical strength focused zones of magma entry into particular subhorizontal horizons in the salt mass, wherever hydrated salt layers were present. In contrast, dyke and sill margins are much sharper and narrower in zones of contact with anhydrous salt intervals, where the intrusive is sub-vertical to steeply dipping (Figure 24b versus 24c). 
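As an aside, the narrowness of these thermal aureoles reflects how quickly a thin dyke loses heat to the surrounding salt; the rapid cooling timescale of Knipping (1989), quoted in the next paragraph, is broadly reproducible with the classic conductive-cooling solution for an instantaneously emplaced tabular intrusion. A minimal sketch follows, with assumed, representative values for thermal diffusivity and temperatures; none of the parameter values are measurements from the Herfa-Neurode mine.

```python
import math

# 1-D conductive cooling of an instantaneously emplaced tabular dyke
# (Carslaw & Jaeger style solution). Centre-line temperature at time t:
#   T = T_host + (T_magma - T_host) * erf( a / (2*sqrt(kappa*t)) )
# where a is the dyke half-width. All values below are assumptions.

KAPPA = 3.0e-6     # m^2/s thermal diffusivity; halite conducts heat well
T_MAGMA = 1100.0   # deg C, assumed basaltic emplacement temperature
T_HOST = 50.0      # deg C, assumed ambient salt temperature at ~700 m depth

def centre_temperature(width_m: float, t_seconds: float) -> float:
    """Centre-line temperature (deg C) of a dyke of the given width."""
    a = width_m / 2.0
    return T_HOST + (T_MAGMA - T_HOST) * math.erf(a / (2.0 * math.sqrt(KAPPA * t_seconds)))

if __name__ == "__main__":
    # A 0.5 m dyke after 14 days: the centre is well below 200 deg C,
    # consistent with the rapid cooling cited from Knipping (1989).
    t = 14 * 24 * 3600.0
    print(f"0.5 m dyke after 14 days: {centre_temperature(0.5, t):.0f} deg C")
```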
Returning to the Fulda mines: away from the immediate vicinity of the direct thermal aureole, heated and overpressured dehydration waters can enter a carnallite-halite bed and drive the creation of extensive soft-sediment deformation and peperite textures in the hydrated layer (Figure 24c). Mineralogically, sylvite and coarse recrystallised halite dominate the salt fraction in the peperite intervals of the Herfa-Neurode mine. Sylvite in the altered zone is a form of dehydrated carnallite, not a primary-textured salt. Across the Fulda region, such altered zones and deformed units can extend along a former carnallite layer for tens, or even a hundred or more, metres from the dyke feeder. Ultimately, the deformed potash bed passes back out into the unaltered bed, which retains abundant inclusion-rich halite and carnallite (Schofield et al., 2014). That is, nearer the basalt dyke, the carnallite is largely transformed into inclusion-poor halite and sylvite, the result of incongruent flushing by warm saline fluids mobilised from the hydrated carnallite crystal lattice as it was heated by dyke emplacement. During this Miocene salt alteration/thermal metamorphism, NaCl-fluids were mixed with fluids and gases originating from thermally-mobilised crystallisation water in the carnallite as it converted to sylvite. This brine/gas mixture altered the basalts during post-intrusive cooling, an event which numerical models suggest was quite rapid (Knipping, 1989): a dyke of less than 0.5 m thickness probably cooled to temperatures less than 200°C within 14 days of emplacement. The contrast in alteration extent between anhydrous and hydrous salt layers shows that alteration effects are minimal wherever the emplacement temperature of the magma is below the melting temperature of the anhydrous salts, as it is next to a basalt dyke in halite. If this is the mechanism driving the entry of igneous-related volatiles (gases and liquids) into a salt body, then the distribution of products (including CO2) will be highly inhomogeneous and related to the mineralogy of the salt unit adjacent to the intrusive. Worldwide, dykes intersecting salt beds tend to widen into sills in two zones: 1) along evaporite units within the halite mass that contain hydrated salts, such as carnallite or gypsum (Figure 24c), and 2) where rising magma has ponded and so created laccoliths at the upper or lower halite contact with the adjacent nonsalt strata, or against a salt wall (Figure 22 vs 24). The first is a response to a pulse of released water as dyke-driven heating forces the dehydration of hydrated salt layers. The second is a response to the mechanical strength contrast at the salt-nonsalt contact. In summary, sylvite in the Fulda region formed from a carnallite precursor as dyke-driven heating released the water of crystallisation held in the carnallite lattice. Over this series of three articles focused on current examples of potash production, we have seen there are two main groups of potash minerals currently utilised to make fertiliser, namely muriate of potash (MOP) and sulphate of potash (SOP). 
MOP is either mined (generally from a pre-Neogene sylvinite ore) or produced from brine pans (usually via processing of a carnallitite slurry). In contrast, large volumes of SOP are today produced from brine pans in China and the USA, with only minor production from solid-state ore targets. Historically, SOP was produced from solid-state ores in Sicily, the Ukraine and Germany, but today there are no conventional mines in commercial operation with SOP as the prime output (see Salty Matters, May 12, 2015). The MgSO4-enriched chemistry of modern seawater makes the economic production of potash bitterns from a seawater feed highly challenging. Today, there is no marine-fed plant anywhere in the world producing primary sylvite precipitates. However, sylvite is precipitating from a continental brine feed in salt pans on the Bonneville salt flat, Utah. There, a brine field, drawing shallow pore waters from saltflat sediments, supplies suitably low-MgSO4 inflow chemistry to the concentrator pans. Sylvite also precipitates in solar evaporator pans in Utah that are fed brine circulated through the abandoned workings of the Cane Creek potash mine (Table 1). Large-scale production of MOP fertiliser from potash precipitates created in solar evaporation pans takes place in the perennially subaqueous saline pans of the southern Dead Sea and the Qaidam Basin. In the Dead Sea, the feed brine is pumped from the waters of the northern Dead Sea basin, while in the Qaidam sump the feed is from a brine field drawing pore brines with an appropriate mix of river and basinal brine inputs. In both cases, the resulting feed brine to the final concentrators is relatively depleted in magnesium and sulphate. These source bitterns have ionic proportions not unlike seawater in times of ancient MgSO4-depleted oceans. Carnallitite slurries, not sylvinite, are the MOP precipitates in pans in both regions. When the feed chemistry of the slurry is low in halite, the process used to recover sylvite is a cold-crystallisation technique. When halite impurity levels in the slurry are higher, sylvite is manufactured using a more energy-intensive, and hence more expensive, hot-crystallisation technique. Similar sulphate-depleted brine chemistry is used in the Salar de Atacama, where MOP and SOP are recovered as byproducts of the production of lithium carbonate from brines. Significant volumes of SOP are recovered from a combination of evaporation and cryogenic modification of sulphate-enriched continental brines in pans on the edge of the Great Salt Lake, Utah, and at Lop Nur, China. When concentrated and processed, SOP is recovered from a complex series of Mg-K-SO4 double salts (schoenitic) in the Ogden pans, fed by brines from the Great Salt Lake. The Lop Nur plant concentrates pore waters from a brine field drawing on glauberite-polyhalite-entraining saline lake sediments. Together, the Quaternary saline-lake factories supply less than 20% of the world's potash; the majority comes from the conventional mining of sylvinite ores. The world's largest reserves are held in Devonian evaporites of the Prairie Evaporite in the Western Canada Sedimentary Basin. Textures and mineral chemistry show that the greater volume of bedded potash salts in this region is not a primary sylvite precipitate. Rather, the ore distribution, although stratiform and defined by a series of clay marker beds, actually preserves the effects of multiple modifications and alterations tied to periodic egress and ingress of basinal waters. 
Driving mechanisms for episodes of fluid crossflow range from syndepositional leaching and reflux through to tectonic pumping and uplift (telogenesis). Ore distribution and texturing reflect local-scale (10-100 metre) discontinuities and anomalies created by this evolving fluid chemistry. Some alteration episodes are relatively benign in terms of mineralogical modification and bed continuity. Others, generally tied to younger incidents (post-Early Cretaceous) of undersaturated crossflow and karstification, can have substantial effects on ore continuity and susceptibility to unwanted fluid entry. In contrast, ore textures and bed continuity in the smaller-scale sylvinite ores of the Oligocene Mulhouse Basin, France, indicate a primary ore genesis. Across the Quaternary, we need saline-lake brine systems with the appropriate brine proportions, volumes and climate to precipitate the right association of processable potash salts. So far, the price of potash, either MOP or SOP, and of the co-associated MgSO4 bitterns, precludes industrial marine-fed brine factories. In contrast to the markedly nonmarine locations of potash recovery from Quaternary sources, almost all pre-Quaternary potash operations extract product from marine-fed basinwide ore hosts deposited during times of both MgSO4-depleted and MgSO4-enriched oceans (Warren, 2016; Chapter 11). This time-based dichotomy in potash ore sources, with nonmarine hosts in the Quaternary and marine-evaporite-hosted ore zones in deposits of Miocene age and older, reflects a simple lack of basinwide marine deposits and appropriate marine chemistry across the Neogene (Warren, 2010). As for all ancient marine evaporites, the depositional systems that laid down ancient marine-fed potash deposits were one to two orders of magnitude larger than any Quaternary potash setting, and the resultant deposits typically formed thicker stacks. The last such "saline giant" potash system was the Solfifera series in the Sicilian Basin, deposited as part of the Mediterranean "salinity crisis," but these potential ore beds belong to the less economically attractive MgSO4-enriched marine potash series. So, what are the factors that favour the formation of, and hence exploration for, additional deposits of exploitable ancient potash? First, large MOP solid-state ore sources are all basinwide, not lacustrine, deposits. Within the basinwide association, it seems that intracratonic basins host significantly larger reserves of ore compared to systems that formed in the more tectonically-active plate-edge rift and suture associations. This is a reflection of: 1) accessibility, near the shallow current edge of a salt basin; 2) a lack of a halokinetic overprint; and 3) the setup of long-term, stable, edge-dissolution brine hydrologies that typify many intracratonic basins. Known reserves of potash in the Devonian Prairie Evaporite of the Western Canada Sedimentary Basin (WCSB) are of the order of 50 times those of the next largest known deposit, the Permian of the Upper Kama Basin, and more than two orders of magnitude larger than any of the other known exploited deposits (Table 1). Part of this difference in the volume of recoverable reserves lies in the fact that the various Canadian potash members in the WCSB are still bedded and flat-lying. Beds have not been broken up or steepened by any subsequent halokinesis. 
The only set of processes overprinting and remobilising the various potash salts in the WCSB is related to multiple styles and timings of aquifer encroachment on the potash units, which probably took place at various times since the potash was first deposited, driven mainly by a combination of hinterland uplift and subrosion. In contrast, most of the other significant potash basins listed in Table 1 have been subjected to ongoing combinations of halokinesis and groundwater encroachment, making their beds much less laterally predictable. In their formative stages, the WCSB potash beds lay a substantial distance from the orogenic belt that drove flexural downwarp and creation of the subsealevel sag depression. Like many other intracratonic basins, the WCSB did not experience significant syndepositional compression or rift-related loading.
As part of extra-curricular activities, children get to try out activities they might otherwise never have considered. Whether it's the hockey team, the chess club or the book club, these provide a diverse range of outlets that children wouldn't get to experience in an academic setting. Even if they realise after a few weeks that they're not enjoying it, at least they will have given it a go. If they do enjoy the activity, though, it could help give them an idea of what to pursue at university, and then in later life. A student who joins a film club, for example, might be inspired to go into a career in film, and an Olympic athlete might start out by joining an after-school gymnastics club.

If children aren't doing any activities, they can start to feel like they aren't accomplishing anything, which can in turn affect their self-worth. A child doing lots of extra-curricular activities, on the other hand, doesn't have time for these feelings. Children doing activities can discover hidden talents that they might never have found had they stuck with purely academic subjects. If a child is talented at something, they'll usually enjoy it, which in turn improves their self-esteem and confidence.

Children will also get to meet new people and widen their social circle: as part of extra-curricular activities, they'll meet students from other classes, and possibly even from other schools. This is a chance for them to interact more with people their own age, as they encounter new social situations outside the structure of a normal school day. Depending on the activity, they'll also learn teamwork skills, and not just in sports; children work together if they're in a band, or a drama club, or helping out with the school newspaper, for example.

As well as learning skills specific to the activity itself, children will pick up skills that will serve them well in later life, such as time management and goal-setting. They'll learn how to manage their time properly as they figure out how to prioritise different commitments. Even if it's just setting aside time to finish homework, play an instrument or practise a new football skill, it's something all children need to learn.

It's always important to consider a child's future, and one of the biggest benefits of extra-curricular activities is that they look good. According to the Times Higher Education magazine, achievements from outside the classroom can be more important than academic achievements. When students leave school, potential employers or universities will want to know what kind of person they are before offering them a job or a place on their chosen course. Extra-curricular activities can be included on CVs, or listed on college or university applications. Academic performance isn't the only thing that matters nowadays; it's important that students can show they're well-rounded and capable, and taking part in clubs and sports teams is one of the best ways of doing that.
As Texas risks a return to federal oversight of its election laws, Gov. Greg Abbott could face increased scrutiny of his role in advising on and defending redistricting maps and a voter ID law that could ultimately be struck down as discriminatory.

Days before Martin Luther King Jr. Day, the scene that played out among the Greater Arlington Missionary Baptist Church's wooden pews was, in some ways, reminiscent of the civil rights movement of decades before. Civil rights activists and social justice advocates had gathered last week to plan a protest. They talked about the fight for equity and the importance of standing up for their community. And they discussed the role of a collective voice in drawing attention to the grievances laid out by the NAACP's Arlington branch over the selection of Gov. Greg Abbott as the North Texas MLK parade's honorary grand marshal.

Abbott "has done more to damage and undermine African-American and Latino civil and voter rights" than any modern-day governor, the NAACP-Arlington said. It pointed, in part, to the role of Abbott, a former attorney general, in both defending and advocating for redistricting maps and strict voter ID requirements that have been tangled up in court for years over concerns that they discriminate against Texans of color. Abbott's response to that criticism came in a tweet, saying that the parade would be a "worthy celebration" and that he served as governor for all Texans. "I'm a Christian, I've committed my life to ensuring justice, I come in peace," Abbott wrote. But amid the controversy surrounding Abbott's selection, sponsors backed out of the parade, leaving organizers tens of thousands of dollars short of the necessary funds for an event permit, and Abbott's parade appearance was over before it began.

The controversy over the MLK parade could very well be chalked up to a local dust-up. But it is also emblematic of the scrutiny Abbott has faced as the high-profile court battles over the validity of some of the state's electoral measures near their seventh year of litigation. In both cases, Texas risks a return to federal oversight of its election laws. Such an outcome would only intensify the scrutiny of Abbott, who has had an outsized role in advising on, defending and advancing measures that could ultimately be struck down as discriminatory.

Abbott's tenure as attorney general, the state's top lawyer, overlapped with the 2011 legislative session, when Republican lawmakers redrew the maps of congressional and state House and Senate districts to shore up their control of the state. That same year, they also passed what would be considered one of the nation's strictest voter ID laws. As AG, Abbott, whose office did not respond to multiple requests for comment for this story, was not responsible for passing either measure. But he advised lawmakers as they considered the maps and voter ID requirements, ultimately giving the measures his legal blessing. Since then, numerous court rulings have found that, in enacting both measures, the state ran afoul of the Constitution or of federal voting protections for voters of color. Pointing to those rulings, opponents have argued that Abbott is behind some of the most egregious legal advice given to the Legislature. Abbott's team has more recently downplayed his advisory role in drawing up the embattled legislation.
But after the 2011 legislative session, his office asked the Department of Justice and a special three-judge panel in Washington, D.C., to greenlight the new voter ID law and the redrawn maps. Back then, the state was still required by the Voting Rights Act to obtain federal approval of changes to its election laws, a safeguard for minority voting rights called preclearance. But Abbott was unable to speedily obtain preclearance from the DOJ and the D.C. court, which determined that Texas had not proved the measures would not discriminate against voters of color. And the cases, and Abbott's involvement, would only grow more complicated as he continued to defend the state against legal challenges back home.

In 2012, amid legal squabbling on the redistricting front ahead of an election, a federal three-judge panel in San Antonio ordered interim maps for congressional and state House districts. The parties in the case were instructed to negotiate fixes to potential legal violations in the maps while deferring to lawmakers' preferences in the original boundaries. Abbott's office "agreed to a number of fixes" to the maps and didn't appeal that interim plan to the U.S. Supreme Court, a move that reflected Texas' "willingness to some degree to increase minority voting strength," said Nina Perales, vice president of litigation for the Mexican American Legal Defense and Educational Fund and one of the attorneys who has represented plaintiffs suing the state in both cases.

The court at the time warned that the interim maps were still subject to revision. But Abbott in 2013 urged state leaders and lawmakers to adopt the interim maps because doing so would "insulate the state's redistricting plans from further legal challenge," as he wrote in a letter to House Speaker Joe Straus that is often referred to in the litigation. When the Legislature gaveled out that May without acting on the maps, it was at Abbott's urging that then-Gov. Rick Perry immediately called lawmakers back to approve the maps during a special session. The San Antonio judges, however, were unconvinced: "This strategy is discriminatory at its heart and should not insulate either plan from review," the judges wrote.

The voter ID case ran on a different timeline, but Abbott was still key to its implementation. Minutes after the 2013 announcement that the U.S. Supreme Court was axing the portion of the Voting Rights Act that required Texas to obtain preclearance of its election laws, Abbott tweeted that the state's voter ID law should go into effect immediately. His office continued to defend the law in court as Abbott began his campaign for governor that summer. A district judge in Corpus Christi would eventually rule in 2014 that not only did the law have a racially discriminatory impact, but that lawmakers had crafted it to intentionally discriminate against voters of color, who were less likely to have the required IDs to vote.

Abbott's successor, Ken Paxton, inherited the litigation the following year as the case made its way through the appellate process at the U.S. 5th Circuit Court of Appeals. A three-judge panel, and then the full 5th Circuit, which is considered to be among the country's most conservative appellate courts, eventually ruled that Texas lawmakers discriminated against voters of color in enacting the 2011 law. The case made its way back to the lower court, where a federal judge drew up a temporary fix to the law for the 2016 elections.
Abbott re-emerged as a key player in the state's legal strategy during last year's legislative session, when lawmakers again hoped to adopt an interim fix amid ongoing litigation but were up against a series of bill-killing deadlines. Just an hour before a key deadline, Abbott issued an emergency declaration that helped push the voter ID legislation onto the House calendar for a crucial vote. The 5th Circuit is now considering whether lawmakers discriminated on purpose in crafting the 2011 bill, and what the Legislature's 2017 revisions mean for the litigation.

Almost seven years and millions of dollars in legal fees later, the two cases are still far from resolved. But with multiple findings of intentional discrimination by lawmakers now under consideration, the specter of a return to federal preclearance has extended the close examination of lawmakers' actions to Abbott's role in advising on and defending the measures that could land Texas there. In nixing the portion of the Voting Rights Act that required federal oversight of Texas' election laws, the Supreme Court left open the possibility that future, purposeful discrimination could require states to face federal supervision of new election laws. Legal experts have said Texas is near the top of the list of states most at risk, though they add that persuading a majority of the high court to return Texas to preclearance would not be easy.

As the cases continue through the legal system, it appears that Abbott, like other Republicans, is likely to keep benefiting politically from the litigation, in spite of criticism from groups like the Arlington NAACP. Recent polling by the University of Texas and The Texas Tribune showed that 47 percent of Texans do not believe the state's election system discriminates against racial and ethnic minorities; among Republicans, that figure rises to 78 percent. Smith, who supported both the redistricting maps and the voter ID bill in 2011, says he experienced Republicans' political affinity for the voter ID debate when he unsuccessfully attempted, in 2009, to author less severe voter ID legislation. Despite virtually no evidence that in-person voter fraud is rampant, as Republican leaders have claimed for years, the divisive issue motivated Republicans because of its viability on the campaign trail, Smith said. "There's just no downside," Smith said.
Uttarakhand was carved out of northern Uttar Pradesh on 9 November 2000 as the 27th state of India. The Char Dhams, the four most sacred and revered Hindu shrines of Badrinath, Kedarnath, Gangotri and Yamunotri, are nestled in its mighty mountains, and Uttarakhand has long been called the "Land of the Gods". Dehradun is the capital of Uttarakhand, which is the second-fastest-growing state in India. Basmati rice, wheat, soybeans, groundnuts, coarse cereals, pulses and oilseeds are the most widely grown crops, and fruits such as apples, oranges, pears, peaches, litchis and plums are also widely grown here. Skiing, ice skating, sailing, parasailing, kayaking, canoeing, rafting, yachting, trekking, rock climbing, hiking, paragliding, sky diving and bungee jumping offer further sporting activities for tourists in Uttarakhand.
Southern Africa is expected to receive erratic rainfall in the 2018/19 agricultural season, according to the latest outlook produced by regional climate experts, who predict that seasonal rainfall will be "normal to below-normal" across most of the region, except for Tanzania. The consensus forecast produced by the 22nd Southern African Regional Climate Outlook Forum (SARCOF), held in Lusaka, Zambia, from 22 to 24 August, shows that most of the 16 Southern African Development Community (SADC) countries are likely to receive "normal to below-normal" rainfall for the period October 2018 to March 2019. The SARCOF forecast is divided into two half-seasons, from October to the end of December 2018 and from January to the end of March 2019.

The forecast shows that areas likely to receive normal to below-normal rainfall from October to December 2018 include eastern Angola, the extreme northern and southern parts of the Democratic Republic of Congo (DRC), western and southern Madagascar, southern Malawi, most of Botswana, eSwatini, Lesotho, Mozambique, Zambia and Zimbabwe, as well as most of Namibia and South Africa, except the western fringes of the two countries along the Atlantic coast. The rainfall forecast does not change much during the second half of the season, from January to March 2019, when most of the region is again expected to receive normal to below-normal rainfall.

Climate experts have also forecast an early onset of the 2018/19 season that could prove a false start, followed by prolonged dry spells that disturb the timing and spatial distribution of rainfall around the region. In developing this outlook, the climate scientists took into account the oceanic and atmospheric factors that influence climate over southern Africa. For most of the SADC region, rainfall is forecast to be insufficient to meet the needs of the agricultural and power-generation sectors.
I am British; I was born here and live here, and my family is British. I work here and pay taxes here. I love my country. I am proud of its history, of its impact on the world and of our Queen, and I feel patriotic when I see Britain doing great things within the global community. I also believe in Europe and the European Union. I don't consider myself European other than in the geo-political sense, as I am British, but I am not afraid to be part of something that brings together the countries of this continent and has prevented any repeat of the wars that our citizens, our parents and our grandparents saw in the two World Wars of the early 20th century.

I consider the European Union to be a membership of, and an investment in, 28 countries working together to maintain political harmony, lay out guidelines for conducting business and trade, allow free trade and free movement, and make consideration for the world's environment, its resources and its sustainability. Equally, I can see this membership as something positive without feeling that we lose our sovereignty, our ability to make decisions outside of it, or our identity as a nation. I can acknowledge that the EU has its clunky aspects, and that there will always be suggestions for unnecessary rules and laws, which in the main never make it to a signed-off stage; this happens in Westminster every day. I acknowledge that some controls are required to ensure that both free movement and free trade are not abused, but equally, in real terms, the EU is a young organisation and still needs time to develop its role.

The European Union to me represents a community of vital importance to the UK's future, economically and politically. Whilst I agree that some reform is required in certain areas, an exit is unnecessary and would only have a detrimental impact on the UK. In my opinion the UK needs to stand up and be counted within the EU: rather than be a "side-line" player, we need to take a seat firmly next to Germany, as one of the world's largest economies and most respected countries, and lead the way. Yes, being part of this community requires give and take, but there is no reason why we can't have more influence and get more of what we want out of membership.

The ability to trade and move freely within the union is an opportunity, both economically and socially. I am not concerned by EU migration. I don't feel uncomfortable walking down the street and hearing languages I don't understand, and I respect those who come to the UK and take up jobs using skills that we simply don't have, or no longer want to have; they have been crucial in the recent early stages of the rebuilding of UK manufacturing and other industries, and in developing and supporting the growth of businesses that we didn't have in the UK before. Equally, migration is a two-way street: as a British citizen, you and your children have the choice to move where you want, and to travel and see places within Europe without hindrance or visa. In a growing international community this only brings more opportunity and, ultimately, more choices. No one should move to another state to become a deliberate burden, but we cannot remove the opportunity to try, to join a system and to pay into it.
The UK itself, through the Commonwealth and other ties, has seen communities arrive for decades and become an established and highly valuable part of society, be they Asian, West Indian, Irish, Italian or African. Many of these people came to the UK, worked and built businesses, and generations on they provide valuable professions, services and tax revenues; whilst acutely aware of where they came from, their children and children's children consider themselves British.

Many will complain about legislation regarding the environment, product quality and health and safety, but whilst these things came about because of the EU, are they not actually things we should have adopted as a responsible nation anyway? Providing sustainable fishing and agriculture without wastage? Ensuring that products and goods meet certain quality standards? Ensuring that employees are looked after properly? Is it not better to have the option to work longer hours, as opposed to being told you have to? I know these rules were put in place by an organisation outside the UK, but are they really that bad? Are they not actually quite sensible? Given the way political game-play works, would they not have taken years to be implemented anyway, for fear of losing the vote of a proportion of society? Sometimes having an external body is a good way of managing some of what our elected government decides to do, or not do. These regulations don't stop you being British, don't stop you from making a living, and don't stop you from making choices for you and your family.

Leaving the EU is an unnecessary course of action; there is no value at all in standing in the background, moaning and groaning and feeling that Great British pride is hurt because someone else told us to do something. If we don't like something, we need to take a greater hand in creating something of value; as a nation we've done it before, many times. Let's become a leader and influencer within Europe; other countries will listen to us because of who we are and what we have done in the past. Let's stop electing people to the European Parliament who don't want us to be there, and instead put in people who will voice the nation's opinion as a member of the community, and who will actually turn up.

By leaving we only bring uncertainty and complication, with no guarantee of how we move forward. By staying we retain opportunity and, in real terms, the ability to continue to grow as an economy, as we have done in the last few years as part of the EU. We retain the option to travel, live and work where we want. We can be part of the EU and still develop business and relationships throughout the world, in the emerging nations, through the Commonwealth and so on; nothing stops us from doing this. Let's not pass on to the next generation a poisoned chalice when we can simply change our approach and have the best of all worlds.